Re: Using GPT and glabel for ZFS arrays

2010-07-24 Thread Pawel Tyll
 Easiest way to create a sparse file, e.g. 20 GB, assuming test.img doesn't
 exist already
No no no. Easiest way to do what you want to do:
mdconfig -a -t malloc -s 3t -u 0
mdconfig -a -t malloc -s 3t -u 1

Just make sure to offline and delete mds ASAP, unless you have 6TB of
RAM waiting to be filled ;) - note that with RAIDZ2 you have no
redundancy with two fake disks gone, and if going with RAIDZ1 this
won't work at all. I can't figure out a safe way (data redundancy all
the way) of doing things with only 2 free disks and 3.5TB data - third
disk would make things easier, fourth would make them trivial; note
that temporary disks 3 and 4 don't have to be 2TB, 1.5TB will do.

I've done this dozens of times: no problems, no gray hair, and not a
bit of data lost ;)
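Spelled out as a sketch (the pool name and the five real da* device names below are my assumptions, not taken from the thread), the sequence would be something like:

```shell
# Two 3 TB malloc-backed memory disks stand in for missing drives.
# They consume RAM only as blocks are actually written, which is why
# they must be removed before any data lands on them.
mdconfig -a -t malloc -s 3t -u 0
mdconfig -a -t malloc -s 3t -u 1

# Build the raidz2 pool with five real disks plus the two fakes:
zpool create bigtank raidz2 da0 da1 da2 da3 da4 md0 md1

# Offline and delete the memory disks immediately -- the pool then
# runs degraded (no redundancy left with raidz2) but intact:
zpool offline bigtank md0
zpool offline bigtank md1
mdconfig -d -u 0
mdconfig -d -u 1
```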


___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: Using GPT and glabel for ZFS arrays

2010-07-24 Thread Dan Langille

On 7/24/2010 7:56 AM, Pawel Tyll wrote:

Easiest way to create a sparse file, e.g. 20 GB, assuming test.img doesn't
exist already


You trim posts too much... there is no way to compare without opening 
another email.


Adam wrote:


truncate -s 20g test.img
ls -sk test.img
1 test.img




No no no. Easiest way to do what you want to do:
mdconfig -a -t malloc -s 3t -u 0
mdconfig -a -t malloc -s 3t -u 1


In what way is that easier?  Now I have /dev/md0 and /dev/md1 as opposed 
to two sparse files.



Just make sure to offline and delete mds ASAP, unless you have 6TB of
RAM waiting to be filled ;) - note that with RAIDZ2 you have no
redundancy with two fake disks gone, and if going with RAIDZ1 this
won't work at all. I can't figure out a safe way (data redundancy all
the way) of doing things with only 2 free disks and 3.5TB data - third
disk would make things easier, fourth would make them trivial; note
that temporary disks 3 and 4 don't have to be 2TB, 1.5TB will do.


The lack of redundancy is noted and accepted.  Thanks.  :)

--
Dan Langille - http://langille.org/


Re: Using GPT and glabel for ZFS arrays

2010-07-24 Thread Dan Langille

On 7/22/2010 4:11 AM, Dan Langille wrote:

On 7/22/2010 4:03 AM, Charles Sprickman wrote:

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 3:30 AM, Charles Sprickman wrote:

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 2:59 AM, Andrey V. Elsukov wrote:

On 22.07.2010 10:32, Dan Langille wrote:

I'm not sure of the criteria, but this is what I'm running:

atapci0: <SiI 3124 SATA300 controller> port 0xdc00-0xdc0f mem
0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on
pci7

atapci1: <SiI 3124 SATA300 controller> port 0xac00-0xac0f mem
0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on
pci3

I added ahci_load="YES" to loader.conf and rebooted. Now I see:


You can add siis_load="YES" to loader.conf for SiI 3124.


Ahh, thank you.

I'm afraid to do that now, before I label my ZFS drives for fear that
the ZFS array will be messed up. But I do plan to do that for the
system after my plan is implemented. Thank you. :)


You may even get hotplug support if you're lucky. :)

I just built a box and gave it a spin with the old ata stuff and then
with the new (AHCI) stuff. It does perform a bit better and my BIOS
claims it supports hotplug with ahci enabled as well... Still have to
test that.


Well, I don't have anything to support hotplug. All my stuff is
internal.

http://sphotos.ak.fbcdn.net/hphotos-ak-ash1/hs430.ash1/23778_106837706002537_10289239443_171753_3508473_n.jpg




The frankenbox I'm testing on is a retrofitted 1U (it had a scsi
backplane, now has none).

I am not certain, but I think with 8.1 (which it's running) and all the
cam integration stuff, hotplug is possible. Is a special backplane
required? I seriously don't know... I'm going to give it a shot though.

Oh, you also might get NCQ. Try:

[r...@h21 /tmp]# camcontrol tags ada0
(pass0:ahcich0:0:0:0): device openings: 32


# camcontrol tags ada0
(pass0:siisch2:0:0:0): device openings: 31

resending with this:

ada{0..4} give the above.

# camcontrol tags ada5
(pass5:ahcich0:0:0:0): device openings: 32

That's part of the gmirror array for the OS, along with ad6 which has
similar output.

And again with this output from one of the ZFS drives:

# camcontrol identify ada0
pass0: Hitachi HDS722020ALA330 JKAOA28A ATA-8 SATA 2.x device
pass0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)

protocol ATA/ATAPI-8 SATA 2.x
device model Hitachi HDS722020ALA330
firmware revision JKAOA28A
serial number JK1130YAH531ST
WWN 5000cca221d068d5
cylinders 16383
heads 16
sectors/track 63
sector size logical 512, physical 512, offset 0
LBA supported 268435455 sectors
LBA48 supported 3907029168 sectors
PIO supported PIO4
DMA supported WDMA2 UDMA6
media RPM 7200

Feature Support Enable Value Vendor
read ahead yes yes
write cache yes yes
flush cache yes yes
overlap no
Tagged Command Queuing (TCQ) no no
Native Command Queuing (NCQ) yes 32 tags
SMART yes yes
microcode download yes yes
security yes no
power management yes yes
advanced power management yes no 0/0x00
automatic acoustic management yes no 254/0xFE 128/0x80
media status notification no no
power-up in Standby yes no
write-read-verify no no 0/0x0
unload no no
free-fall no no
data set management (TRIM) no


Does this support NCQ?

--
Dan Langille - http://langille.org/


Re: Using GPT and glabel for ZFS arrays

2010-07-24 Thread Jeremy Chadwick
On Sat, Jul 24, 2010 at 12:12:54PM -0400, Dan Langille wrote:
 On 7/22/2010 4:11 AM, Dan Langille wrote:
 [ ... full quote of the camcontrol output trimmed; the relevant line: ]
  Native Command Queuing (NCQ) yes 32 tags
 [ ... ]
 Does this support NCQ?

Does *what* support NCQ?  The output above, despite having lost its
whitespace formatting, indicates the drive does support NCQ and due to
using CAM (via ahci.ko or siis.ko) has NCQ in use:

 Native Command Queuing (NCQ) yes 32 tags

A binary verification (does it/does it not) is also visible in your
kernel log, ex:

ada2: Command Queueing enabled

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: Using GPT and glabel for ZFS arrays

2010-07-24 Thread Dan Langille

On 7/23/2010 7:42 AM, John Hawkes-Reed wrote:

Dan Langille wrote:

Thank you, everyone, for the helpful discussion. It's been very
educational. Based on the advice and suggestions, I'm going to adjust
my original plan as follows.


[ ... ]

Since I still have the medium-sized ZFS array on the bench, testing this
GPT setup seemed like a good idea.
The hardware's a Supermicro X8DTL-iF m/b + 12Gb memory, 2x 5502 Xeons,
3x Supermicro USASLP-L8I 3G SAS controllers and 24x Hitachi 2Tb drives.

Partitioning the drives with the command-line:
gpart add -s 1800G -t freebsd-zfs -l disk00 da0[1] gave the following
results with bonnie-64: (Bonnie -r -s 5000|2|5)[2]


What test is this?  I just installed benchmarks/bonnie and I see no -r 
option.  Right now, I'm trying this: bonnie -s 5



--
Dan Langille - http://langille.org/


Re: Using GPT and glabel for ZFS arrays

2010-07-24 Thread John Hawkes-Reed

On 24/07/2010 21:35, Dan Langille wrote:

On 7/23/2010 7:42 AM, John Hawkes-Reed wrote:

[ ... ]

What test is this? I just installed benchmarks/bonnie and I see no -r
option. Right now, I'm trying this: bonnie -s 5


http://code.google.com/p/bonnie-64/


--
JH-R


Re: Using GPT and glabel for ZFS arrays

2010-07-23 Thread Dan Langille

On 7/22/2010 9:51 PM, Pawel Tyll wrote:

So... the smaller size won't mess things up...

If by smaller size you mean smaller size of existing
drives/partitions, then growing zpools by replacing smaller vdevs
with larger ones is supported and works. What isn't supported is
basically everything else:
- you can't change number of raid columns (add/remove vdevs from raid)
- you can't change the number of parity columns (raidz1 to raidz2 or raidz3)
- you can't change vdevs to smaller ones, even if pool's free space
would permit that.


Isn't what I'm doing breaking the last one?



Good news is these features are planned/being worked on.

If you can attach more drives to your system without disconnecting
existing drives, then you can grow your pool pretty much risk-free.






--
Dan Langille - http://langille.org/


Re: Using GPT and glabel for ZFS arrays

2010-07-23 Thread John Hawkes-Reed

Dan Langille wrote:
Thank you, everyone, for the helpful discussion.  It's been very 
educational.  Based on the advice and suggestions, I'm going to adjust 
my original plan as follows.


[ ... ]

Since I still have the medium-sized ZFS array on the bench, testing this 
GPT setup seemed like a good idea.


The hardware's a Supermicro X8DTL-iF m/b + 12Gb memory, 2x 5502 Xeons, 
3x Supermicro USASLP-L8I 3G SAS controllers and 24x Hitachi 2Tb drives.


Partitioning the drives with the command-line:
gpart add -s 1800G -t freebsd-zfs -l disk00 da0[1] gave the following 
results with bonnie-64: (Bonnie -r -s 5000|2|5)[2]


   ---Sequential Output ---Sequential Input-- --Random--
   -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
 5  97.7 92.8 387.2 40.1 341.8 45.7 178.7 81.6 972.4 54.7   335  1.5
20  98.0 87.0 434.9 45.2 320.9 42.5 141.4 87.4 758.0 53.5   178  1.6
50  98.0 92.0 435.7 46.0 325.4 44.7 143.4 93.1 788.6 57.1   140  1.5


Repartitioning with
gpart add -b 1024 -s 1800G -t freebsd-zfs -l disk00 da0[1] gave the 
following:


   ---Sequential Output ---Sequential Input-- --Random--
   -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
 5  97.8 93.4 424.5 45.4 338.4 46.1 180.0 93.9 934.9 57.8   308  1.5
20  97.6 91.7 448.4 49.2 338.5 45.9 176.1 91.8 914.7 57.3   180  1.3
50  96.3 90.3 452.8 47.6 330.9 44.7 174.8 74.5 917.9 53.6   134  1.2

... So it would seem that bothering to align the blocks does make a 
difference.



For an apples/oranges comparison, here's the output from the other box 
we built. The hardware's more or less the same - the drive controller's 
an Areca-1280, but the OS was Solaris 10.latest:


   Sequential Output--- ---Sequential Input-- --Random--
   -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---

GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
 5 116.8 75.0 524.7 65.4 156.3 20.3 161.6 99.2 2924.0 100.0 19 300.0
20 139.9 95.4 503.5 51.7 106.6 13.4  97.6 62.0 133.0  8.8   346  4.2
50 147.4 95.8 465.8 50.1 106.1 13.5  97.9 62.5 143.8  8.7   195  4.1




[1] da0 - da23, obviously.
[2] Our assumption locally is that the first test is likely just 
stressing the bandwidth to memory and the ZFS cache.


--
JH-R


Re: Using GPT and glabel for ZFS arrays

2010-07-23 Thread Tom Evans
On Thu, Jul 22, 2010 at 8:22 PM, Pawel Tyll pt...@nitronet.pl wrote:
 I do not think I can adjust the existing zpool on the fly.  I think I
 need to copy everything elsewhere (i.e the 2 empty drives).  Then start
 the new zpool from scratch.
 You can, and you should (for educational purposes if not for fun :),
 unless you wish to change raidz1 to raidz2. Replace, wait for
 resilver, if redoing used disk then offline it, wipe magic with dd
 (16KB at the beginning and end of disk/partition will do), carry on
 with GPT, rinse and repeat with next disk. When last vdev's replace
 finishes, your pool will grow automagically.


If you do do it like this, be sure to leave the drive you are
replacing attached to the array. Otherwise, in a raidz, if you were to
suffer a disk failure on one of the other disks whilst
replacing/growing the array, your raidz would be badly broken.

Other than that, I can thoroughly recommend this method, I had data on
2 x 1.5 TB drives, and set up my raidz initially with 4 x 1.5 TB, 2 x
0.5 TB, copying data off the 1.5 TB drives onto the array and
replacing each 0.5 TB drive when done.
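One iteration of the replace cycle quoted above might look like the following; the pool name, device name, and label are my assumptions, and the gpart numbers follow Dan's plan elsewhere in the thread:

```shell
# Redoing a disk that is currently a pool member:
zpool offline bigtank ad0            # take the old member out of service

# Wipe ZFS "magic": 16 KB at the start and at the end of the disk.
dd if=/dev/zero of=/dev/ad0 bs=16k count=1
BYTES=$(diskinfo ad0 | awk '{print $3}')            # media size in bytes
dd if=/dev/zero of=/dev/ad0 bs=16k oseek=$(( BYTES / 16384 - 1 ))

# Repartition with GPT and resilver onto the labelled partition:
gpart create -s GPT ad0
gpart add -b 1024 -t freebsd-zfs -l disk00 ad0
zpool replace bigtank ad0 gpt/disk00

# Wait for the resilver to finish before touching the next disk:
zpool status bigtank
```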

Cheers

Tom


Re: Using GPT and glabel for ZFS arrays

2010-07-23 Thread Dmitry Morozovsky
On Fri, 23 Jul 2010, John Hawkes-Reed wrote:

JH Since I still have the medium-sized ZFS array on the bench, testing this GPT
JH setup seemed like a good idea.
JH 
JH The hardware's a Supermicro X8DTL-iF m/b + 12Gb memory, 2x 5502 Xeons, 3x
JH Supermicro USASLP-L8I 3G SAS controllers and 24x Hitachi 2Tb drives.

[snip]

JH For an apples/oranges comparison, here's the output from the other box we
JH built. The hardware's more or less the same - the drive controller's an
JH Areca-1280, but the OS was Solaris 10.latest:
JH 
JHSequential Output--- ---Sequential Input-- --Random--
JH-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
JH 
JH GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
JH  5 116.8 75.0 524.7 65.4 156.3 20.3 161.6 99.2 2924.0 100.0 19 300.0

.  ~~

Frakking Cylons! You have quite a beast! ;-)


-- 
Sincerely,
D.Marck [DM5020, MCK-RIPE, DM3-RIPN]
[ FreeBSD committer: ma...@freebsd.org ]

*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- ma...@rinet.ru ***



Re: Using GPT and glabel for ZFS arrays

2010-07-23 Thread John Hawkes-Reed

On 23/07/2010 19:04, Dmitry Morozovsky wrote:

On Fri, 23 Jul 2010, John Hawkes-Reed wrote:

JH  Since I still have the medium-sized ZFS array on the bench, testing this
JH  GPT setup seemed like a good idea.
JH
JH  The hardware's a Supermicro X8DTL-iF m/b + 12Gb memory, 2x 5502 Xeons, 3x
JH  Supermicro USASLP-L8I 3G SAS controllers and 24x Hitachi 2Tb drives.

[snip]

JH  For an apples/oranges comparison, here's the output from the other box we
JH  built. The hardware's more or less the same - the drive controller's an
JH  Areca-1280, but the OS was Solaris 10.latest:
JH
JH Sequential Output--- ---Sequential Input-- --Random--
JH -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
JH
JH  GB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
JH   5 116.8 75.0 524.7 65.4 156.3 20.3 161.6 99.2 2924.0 100.0 19 300.0

.  ~~

Frakking Cylons! You have quite a beast! ;-)


:D

Like I said, I think those numbers were the Bonnie test data fitting 
inside the cache.


Making all the bits work together has been a bit of a nightmare, but 
we're really pleased with the performance and stability of 8.1 + ZFS.


(Having said that, the kit will spit the drives out of the cab over the 
weekend for spite...)



--
JH-R


Re: Using GPT and glabel for ZFS arrays

2010-07-23 Thread Dan Langille

On 7/22/2010 8:47 PM, Dan Langille wrote:

Thank you, everyone, for the helpful discussion. It's been very
educational. Based on the advice and suggestions, I'm going to adjust my
original plan as follows.

NOTE: glabel will not be used.


First, create a new GUID Partition Table partition scheme on the HDD:

gpart create -s GPT ad0


Let's see how much space we have. This output will be used to determine
SOMEVALUE in the next command.

gpart show


Create a new partition within that scheme:

gpart add -b 1024 -s SOMEVALUE -t freebsd-zfs -l disk00 ad0

The -b 1024 ensures alignment on a 4KB boundary.

SOMEVALUE will be set so that approximately 200MB is left empty at the
end of the HDD. That's more than enough to accommodate the slightly
different actual sizes of 2TB HDDs.

Repeat the above with ad1 to get disk01. Repeat for all other HDD...

Then create your zpool:

zpool create bigtank gpt/disk00 gpt/disk01 ... etc
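To make SOMEVALUE concrete, here is one way the arithmetic could go, using the LBA48 sector count camcontrol reported for the Hitachi 2TB drives earlier in the thread (3907029168 sectors); the 200MB reserve and the 4KB-aligned start at sector 1024 are as described above:

```shell
# Compute a partition size, in 512-byte sectors, that starts at sector
# 1024 and leaves roughly 200 MB unused at the end of the drive.
DISK_SECTORS=3907029168                  # from `camcontrol identify`
START=1024                               # matches gpart's -b 1024
RESERVE=$(( 200 * 1024 * 1024 / 512 ))   # 200 MB in 512-byte sectors
SIZE=$(( DISK_SECTORS - START - RESERVE ))
SIZE=$(( SIZE / 8 * 8 ))                 # round down to a 4 KB multiple
echo "$SIZE"
# then: gpart add -b 1024 -s $SIZE -t freebsd-zfs -l disk00 ad0
```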


This plan will be applied to an existing 5 HDD ZFS pool. I have two new
empty HDD which will be added to this new array (giving me 7 x 2TB HDD).
The array is raidz1 and I'm wondering if I want to go to raidz2. That
would be about 10TB and I'm only using up 3.1TB at present. That
represents about 4 months of backups.

I do not think I can adjust the existing zpool on the fly. I think I
need to copy everything elsewhere (i.e the 2 empty drives). Then start
the new zpool from scratch.

The risk: when the data is on the 2 spare HDD, there is no redundancy. I
wonder if my friend Jerry has a spare 2TB HDD I could borrow for the
evening.



The work is in progress.  Updates are at 
http://beta.freebsddiary.org/zfs-with-gpart.php which will be updated 
frequently as the work continues.


--
Dan Langille - http://langille.org/


Re: Using GPT and glabel for ZFS arrays

2010-07-23 Thread Dan Langille

On 7/22/2010 9:22 PM, Pawel Tyll wrote:

I do not think I can adjust the existing zpool on the fly.  I think I
need to copy everything elsewhere (i.e the 2 empty drives).  Then start
the new zpool from scratch.



You can, and you should (for educational purposes if not for fun :),
unless you wish to change raidz1 to raidz2. Replace, wait for
resilver, if redoing used disk then offline it, wipe magic with dd
(16KB at the beginning and end of disk/partition will do), carry on
with GPT, rinse and repeat with next disk. When last vdev's replace
finishes, your pool will grow automagically.


Pawel and I had an online chat about part of my strategy.  To be clear:

I have a 5x2TB raidz1 array.

I have 2x2TB empty HDD

My goal was to go to raidz2 by:
- copy data to empty HDD
- redo the zpool to be raidz2
- copy back the data
- add in the two previously empty HDD to the zpool

I now understand that after a raidz array has been created, you can't 
add a new HDD to it.  I'd like to, but it sounds like you cannot.


It is not possible to add a disk as a column to a RAID-Z, RAID-Z2, or 
RAID-Z3 vdev. http://en.wikipedia.org/wiki/ZFS#Limitations


So, it seems I have a 5-HDD zpool and it's going to stay that way.





--
Dan Langille - http://langille.org/


Re: Using GPT and glabel for ZFS arrays

2010-07-23 Thread Freddie Cash
On Fri, Jul 23, 2010 at 6:33 PM, Dan Langille d...@langille.org wrote:
 Pawell and I had an online chat about part of my strategy.  To be clear:

 I have a 5x2TB raidz1 array.

 I have 2x2TB empty HDD

 My goal was to go to raidz2 by:
 - copy data to empty HDD
 - redo the zpool to be raidz2
 - copy back the data
 - add in the two previously empty HDD to the zpool

 I now understand that after a raidz array has been created, you can't add a
 new HDD to it.  I'd like to, but it sounds like you cannot.

 It is not possible to add a disk as a column to a RAID-Z, RAID-Z2, or
 RAID-Z3 vdev. http://en.wikipedia.org/wiki/ZFS#Limitations

 So, it seems I have a 5-HDD zpool and it's going to stay that way.

You can fake it out by using sparse files for members of the new
raidz2 vdev (when creating the vdev), then offline the file-based
members so that you are running a degraded pool, copy the data to the
pool, then replace the file-based members with physical harddrives.
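As a sketch under stated assumptions (the pool name, device names, and file paths below are mine, not from the thread), the method might look like:

```shell
# Sparse files stand in for the two drives that still hold the data:
truncate -s 2t /tmp/fake0
truncate -s 2t /tmp/fake1

# Create the 7-wide raidz2 with five real disks plus the two files,
# then offline the files before writing anything. The pool runs
# degraded -- no redundancy! -- but the files stay empty:
zpool create bigtank raidz2 da0 da1 da2 da3 da4 /tmp/fake0 /tmp/fake1
zpool offline bigtank /tmp/fake0
zpool offline bigtank /tmp/fake1

# Copy the data in, then swap each file for a freed-up physical drive:
zpool replace bigtank /tmp/fake0 da5
zpool replace bigtank /tmp/fake1 da6
```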

I've posted a theoretical method for doing so here:
http://forums.freebsd.org/showpost.php?p=93889postcount=7

It's theoretical as I have not investigated how to create sparse files
on FreeBSD, nor have I done this.  It's based on several posts to the
zfs-discuss mailing list where several people have done this on
OpenSolaris.

-- 
Freddie Cash
fjwc...@gmail.com


Re: Using GPT and glabel for ZFS arrays

2010-07-23 Thread Dan Langille

On 7/23/2010 10:25 PM, Freddie Cash wrote:

On Fri, Jul 23, 2010 at 6:33 PM, Dan Langille d...@langille.org wrote:

[ ... ]

It is not possible to add a disk as a column to a RAID-Z, RAID-Z2, or
RAID-Z3 vdev. http://en.wikipedia.org/wiki/ZFS#Limitations

So, it seems I have a 5-HDD zpool and it's going to stay that way.


You can fake it out by using sparse files for members of the new
raidz2 vdev (when creating the vdev), then offline the file-based
members so that you are running a degraded pool, copy the data to the
pool, then replace the file-based members with physical harddrives.


So I'm creating a 7-drive pool, with 5 real drive members and two 
file-based members.



I've posted a theoretical method for doing so here:
http://forums.freebsd.org/showpost.php?p=93889postcount=7

It's theoretical as I have not investigated how to create sparse files
on FreeBSD, nor have I done this.  It's based on several posts to the
zfs-discuss mailing list where several people have done this on
OpenSolaris.


I see no downside.  There is no risk that it won't work and I'll lose 
all the data.


--
Dan Langille - http://langille.org/


Re: Using GPT and glabel for ZFS arrays

2010-07-23 Thread Daniel O'Connor

On 24/07/2010, at 11:55, Freddie Cash wrote:
 It's theoretical as I have not investigated how to create sparse files
 on FreeBSD, nor have I done this.  It's based on several posts to the
 zfs-discuss mailing list where several people have done this on
 OpenSolaris.

FYI you would do..
truncate -s 1T /tmp/fake-disk1
mdconfig -a -t vnode -f /tmp/fake-disk1

etc..

Although you'd want to determine the exact size of your real disks from geom 
and use that.
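For example, the portable core of the idea looks like this; the byte count below is an illustrative figure, not the thread's real disk size, and on FreeBSD you would take it from `diskinfo`:

```shell
# `diskinfo ada0` prints the media size in bytes as its third field;
# using it keeps the sparse stand-in exactly disk-sized.
SIZE=2000398934016      # e.g. SIZE=$(diskinfo ada0 | awk '{print $3}')
truncate -s "$SIZE" /tmp/fake-disk1
ls -l /tmp/fake-disk1 | awk '{print $5}'     # reports the exact byte size
# mdconfig -a -t vnode -f /tmp/fake-disk1    # then attach it (needs root)
```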

--
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
The nice thing about standards is that there
are so many of them to choose from.
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C








Re: Using GPT and glabel for ZFS arrays

2010-07-23 Thread Dan Langille

On 7/23/2010 10:42 PM, Daniel O'Connor wrote:


On 24/07/2010, at 11:55, Freddie Cash wrote:

It's theoretical as I have not investigated how to create sparse files
on FreeBSD, nor have I done this.  It's based on several posts to the
zfs-discuss mailing list where several people have done this on
OpenSolaris.


FYI you would do..
truncate -s 1T /tmp/fake-disk1
mdconfig -a -t vnode -f /tmp/fake-disk1

etc..

Although you'd want to determine the exact size of your real disks from geom 
and use that.



 $ dd if=/dev/zero of=/tmp/sparsefile1.img bs=1 count=0 oseek=2000G
0+0 records in
0+0 records out
0 bytes transferred in 0.25 secs (0 bytes/sec)

$ ls -l /tmp/sparsefile1.img
-rw-r--r--  1 dan  wheel  2147483648000 Jul 23 22:49 /tmp/sparsefile1.img

$ ls -lh /tmp/sparsefile1.img
-rw-r--r--  1 dan  wheel   2.0T Jul 23 22:49 /tmp/sparsefile1.img


--
Dan Langille - http://langille.org/


Re: Using GPT and glabel for ZFS arrays

2010-07-23 Thread Dan Langille

On 7/23/2010 10:51 PM, Dan Langille wrote:

On 7/23/2010 10:42 PM, Daniel O'Connor wrote:


[ ... ]


$ dd if=/dev/zero of=/tmp/sparsefile1.img bs=1 count=0 oseek=2000G
0+0 records in 0+0 records out 0 bytes transferred in 0.25 secs
(0 bytes/sec)

$ ls -l /tmp/sparsefile1.img -rw-r--r-- 1 dan wheel 2147483648000
Jul 23 22:49 /tmp/sparsefile1.img

$ ls -lh /tmp/sparsefile1.img -rw-r--r-- 1 dan wheel 2.0T Jul 23
22:49 /tmp/sparsefile1.img


Going a bit further, and actually putting 30MB of data in there:


$ rm sparsefile1.img
$ dd if=/dev/zero of=/tmp/sparsefile1.img bs=1 count=0
oseek=2000G
0+0 records in
0+0 records out
0 bytes transferred in 0.30 secs (0 bytes/sec)

$ ls -lh /tmp/sparsefile1.img
-rw-r--r--  1 dan  wheel   2.0T Jul 23 22:59 /tmp/sparsefile1.img

$ dd if=/dev/zero of=sparsefile1.img bs=1M count=30 conv=notrunc
30+0 records in
30+0 records out
31457280 bytes transferred in 0.396570 secs (79323405 bytes/sec)

$ ls -l sparsefile1.img
-rw-r--r--  1 dan  wheel  2147483648000 Jul 23 23:00 sparsefile1.img

$ ls -lh sparsefile1.img
-rw-r--r--  1 dan  wheel   2.0T Jul 23 23:00 sparsefile1.img
$


--
Dan Langille - http://langille.org/


Re: Using GPT and glabel for ZFS arrays

2010-07-23 Thread Adam Vande More
On Fri, Jul 23, 2010 at 9:25 PM, Freddie Cash fjwc...@gmail.com wrote:

 It's theoretical as I have not investigated how to create sparse files
 on FreeBSD, nor have I done this.  It's based on several posts to the
 zfs-discuss mailing list where several people have done this on
 OpenSolaris.


Easiest way to create a sparse file, e.g. 20 GB, assuming test.img doesn't
exist already:


truncate -s 20g test.img
ls -sk test.img
1 test.img

The other standard dd method works fine too; truncate just makes it easy.

-- 
Adam Vande More


Re: Using GPT and glabel for ZFS arrays

2010-07-22 Thread Dan Langille

On 7/21/2010 11:39 PM, Adam Vande More wrote:

On Wed, Jul 21, 2010 at 10:34 PM, Adam Vande More amvandem...@gmail.com
mailto:amvandem...@gmail.com wrote:



Also if you have an applicable SATA controller, running the ahci module
will give you more speed.  Only change one thing at a time though.
Virtualbox makes a great testbed for this, you don't need to allocate
the VM a lot of RAM just make sure it boots and such.


I'm not sure of the criteria, but this is what I'm running:

atapci0: SiI 3124 SATA300 controller port 0xdc00-0xdc0f mem 
0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on pci7


atapci1: SiI 3124 SATA300 controller port 0xac00-0xac0f mem 
0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on pci3


I added ahci_load=YES to loader.conf and rebooted.  Now I see:

ahci0: ATI IXP700 AHCI SATA controller port 
0x8000-0x8007,0x7000-0x7003,0x6000-0x6007,0x5000-0x5003,0x4000-0x400f 
mem 0xfb3fe400-0xfb3fe7ff irq 22 at device 17.0 on pci0


Which is the onboard SATA from what I can tell, not the controllers I 
installed to handle the ZFS array.  The onboard SATA runs a gmirror 
array which handles /, /tmp, /usr, and /var (i.e. the OS).  ZFS runs 
only on my /storage mount point.


--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Andrey V. Elsukov
On 22.07.2010 10:32, Dan Langille wrote:
 I'm not sure of the criteria, but this is what I'm running:
 
 atapci0: SiI 3124 SATA300 controller port 0xdc00-0xdc0f mem
 0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on pci7
 
 atapci1: SiI 3124 SATA300 controller port 0xac00-0xac0f mem
 0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on pci3
 
 I added ahci_load=YES to loader.conf and rebooted.  Now I see:

You can add siis_load=YES to loader.conf for SiI 3124.

-- 
WBR, Andrey V. Elsukov





Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Jeremy Chadwick
On Thu, Jul 22, 2010 at 02:32:48AM -0400, Dan Langille wrote:
 On 7/21/2010 11:39 PM, Adam Vande More wrote:
 On Wed, Jul 21, 2010 at 10:34 PM, Adam Vande More amvandem...@gmail.com
 mailto:amvandem...@gmail.com wrote:
 
 Also if you have an applicable SATA controller, running the ahci module
 will give you more speed.  Only change one thing at a time though.
 Virtualbox makes a great testbed for this, you don't need to allocate
 the VM a lot of RAM just make sure it boots and such.
 
 I'm not sure of the criteria, but this is what I'm running:
 
 atapci0: SiI 3124 SATA300 controller port 0xdc00-0xdc0f mem
 0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on
 pci7
 
 atapci1: SiI 3124 SATA300 controller port 0xac00-0xac0f mem
 0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on
 pci3
 
 I added ahci_load=YES to loader.conf and rebooted.  Now I see:
 
 ahci0: ATI IXP700 AHCI SATA controller port
 0x8000-0x8007,0x7000-0x7003,0x6000-0x6007,0x5000-0x5003,0x4000-0x400f
 mem 0xfb3fe400-0xfb3fe7ff irq 22 at device 17.0 on pci0
 
 Which is the onboard SATA from what I can tell, not the controllers
 I installed to handle the ZFS array.  The onboard SATA runs a
 gmirror array which handles /, /tmp, /usr, and /var (i.e. the OS).
 ZFS runs only on my /storage mount point.

The Silicon Image controllers have their own driver, siis(4), which uses
AHCI as well.  It's just as reliable as ahci(4), and undergoes
similar/thorough testing.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Dan Langille

On 7/22/2010 2:59 AM, Andrey V. Elsukov wrote:

On 22.07.2010 10:32, Dan Langille wrote:

I'm not sure of the criteria, but this is what I'm running:

atapci0:SiI 3124 SATA300 controller  port 0xdc00-0xdc0f mem
0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on pci7

atapci1:SiI 3124 SATA300 controller  port 0xac00-0xac0f mem
0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on pci3

I added ahci_load=YES to loader.conf and rebooted.  Now I see:


You can add siis_load=YES to loader.conf for SiI 3124.


Ahh, thank you.

I'm afraid to do that now, before I label my ZFS drives for fear that 
the ZFS array will be messed up.  But I do plan to do that for the 
system after my plan is implemented.  Thank you.  :)


--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Jeremy Chadwick
On Thu, Jul 22, 2010 at 03:02:33AM -0400, Dan Langille wrote:
 On 7/22/2010 2:59 AM, Andrey V. Elsukov wrote:
 On 22.07.2010 10:32, Dan Langille wrote:
 I'm not sure of the criteria, but this is what I'm running:
 
 atapci0:SiI 3124 SATA300 controller  port 0xdc00-0xdc0f mem
 0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on pci7
 
 atapci1:SiI 3124 SATA300 controller  port 0xac00-0xac0f mem
 0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on pci3
 
 I added ahci_load=YES to loader.conf and rebooted.  Now I see:
 
 You can add siis_load=YES to loader.conf for SiI 3124.
 
 Ahh, thank you.
 
 I'm afraid to do that now, before I label my ZFS drives for fear
 that the ZFS array will be messed up.  But I do plan to do that for
 the system after my plan is implemented.  Thank you.  :)

They won't be messed up.  ZFS will figure out, using its metadata, which
drive is part of what pool despite the device name changing.  I don't
use glabel or GPT so I can't comment on whether or not those work
reliably in this situation (I imagine they would, but I keep seeing
problem reports on the lists when people have them in use...)
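To illustrate Jeremy's point, a hedged sketch of the usual sequence (FreeBSD, run as root; the pool name storage comes from earlier in the thread, and this is the generic admin flow rather than anything specific to Dan's machine):

```sh
zpool export storage   # cleanly detach the pool before the driver change
# ...enable the new driver (e.g. siis_load="YES" in loader.conf), reboot;
# ad4 may come back as ada0...
zpool import storage   # scans all disks, matches pool GUIDs from on-disk labels
zpool status           # members now listed under their new device names
```

In practice an export is not strictly required; ZFS will also re-match devices at boot, which is what happened in this thread.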

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Dan Langille

On 7/22/2010 3:08 AM, Jeremy Chadwick wrote:

On Thu, Jul 22, 2010 at 03:02:33AM -0400, Dan Langille wrote:

On 7/22/2010 2:59 AM, Andrey V. Elsukov wrote:

On 22.07.2010 10:32, Dan Langille wrote:

I'm not sure of the criteria, but this is what I'm running:

atapci0:SiI 3124 SATA300 controller   port 0xdc00-0xdc0f mem
0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on pci7

atapci1:SiI 3124 SATA300 controller   port 0xac00-0xac0f mem
0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on pci3

I added ahci_load=YES to loader.conf and rebooted.  Now I see:


You can add siis_load=YES to loader.conf for SiI 3124.


Ahh, thank you.

I'm afraid to do that now, before I label my ZFS drives for fear
that the ZFS array will be messed up.  But I do plan to do that for
the system after my plan is implemented.  Thank you.  :)


They won't be messed up.  ZFS will figure out, using its metadata, which
drive is part of what pool despite the device name changing.


I now have:
siis0: SiI3124 SATA controller port 0xdc00-0xdc0f mem 
0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on pci7


siis1: SiI3124 SATA controller port 0xac00-0xac0f mem 
0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on pci3


And my zpool is now:

$ zpool status
  pool: storage
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
storage     ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ada0    ONLINE       0     0     0
    ada1    ONLINE       0     0     0
    ada2    ONLINE       0     0     0
    ada3    ONLINE       0     0     0
    ada4    ONLINE       0     0     0

Whereas previously, it was ad devices (see 
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=399538+0+current/freebsd-stable).


Thank you (and to Andrey V. Elsukov who posted the same suggestion at 
the same time you did).  I appreciate it.


 I don't

use glabel or GPT so I can't comment on whether or not those work
reliably in this situation (I imagine they would, but I keep seeing
problem reports on the lists when people have them in use...)


Really?  The whole basis of the action plan I'm highlighting in this 
post is to avoid ZFS-related problems when devices get renumbered and 
ZFS is using device names (e.g. /dev/ad0) instead of labels (e.g. 
gpt/disk00).


--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Charles Sprickman

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 2:59 AM, Andrey V. Elsukov wrote:

On 22.07.2010 10:32, Dan Langille wrote:

I'm not sure of the criteria, but this is what I'm running:

atapci0:SiI 3124 SATA300 controller  port 0xdc00-0xdc0f mem
0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on pci7

atapci1:SiI 3124 SATA300 controller  port 0xac00-0xac0f mem
0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on pci3

I added ahci_load=YES to loader.conf and rebooted.  Now I see:


You can add siis_load=YES to loader.conf for SiI 3124.


Ahh, thank you.

I'm afraid to do that now, before I label my ZFS drives for fear that the ZFS 
array will be messed up.  But I do plan to do that for the system after my 
plan is implemented.  Thank you.  :)


You may even get hotplug support if you're lucky. :)

I just built a box and gave it a spin with the old ata stuff and then 
with the new (AHCI) stuff.  It does perform a bit better and my BIOS 
claims it supports hotplug with ahci enabled as well...  Still have to 
test that.


Charles


--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Dan Langille

On 7/22/2010 3:30 AM, Charles Sprickman wrote:

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 2:59 AM, Andrey V. Elsukov wrote:

On 22.07.2010 10:32, Dan Langille wrote:

I'm not sure of the criteria, but this is what I'm running:

atapci0:SiI 3124 SATA300 controller port 0xdc00-0xdc0f mem
0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on
pci7

atapci1:SiI 3124 SATA300 controller port 0xac00-0xac0f mem
0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on
pci3

I added ahci_load=YES to loader.conf and rebooted. Now I see:


You can add siis_load=YES to loader.conf for SiI 3124.


Ahh, thank you.

I'm afraid to do that now, before I label my ZFS drives for fear that
the ZFS array will be messed up. But I do plan to do that for the
system after my plan is implemented. Thank you. :)


You may even get hotplug support if you're lucky. :)

I just built a box and gave it a spin with the old ata stuff and then
with the new (AHCI) stuff. It does perform a bit better and my BIOS
claims it supports hotplug with ahci enabled as well... Still have to
test that.


Well, I don't have anything to support hotplug.  All my stuff is internal.

http://sphotos.ak.fbcdn.net/hphotos-ak-ash1/hs430.ash1/23778_106837706002537_10289239443_171753_3508473_n.jpg



--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Charles Sprickman

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 3:30 AM, Charles Sprickman wrote:

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 2:59 AM, Andrey V. Elsukov wrote:

On 22.07.2010 10:32, Dan Langille wrote:

I'm not sure of the criteria, but this is what I'm running:

atapci0:SiI 3124 SATA300 controller port 0xdc00-0xdc0f mem
0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on
pci7

atapci1:SiI 3124 SATA300 controller port 0xac00-0xac0f mem
0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on
pci3

I added ahci_load=YES to loader.conf and rebooted. Now I see:


You can add siis_load=YES to loader.conf for SiI 3124.


Ahh, thank you.

I'm afraid to do that now, before I label my ZFS drives for fear that
the ZFS array will be messed up. But I do plan to do that for the
system after my plan is implemented. Thank you. :)


You may even get hotplug support if you're lucky. :)

I just built a box and gave it a spin with the old ata stuff and then
with the new (AHCI) stuff. It does perform a bit better and my BIOS
claims it supports hotplug with ahci enabled as well... Still have to
test that.


Well, I don't have anything to support hotplug.  All my stuff is internal.

http://sphotos.ak.fbcdn.net/hphotos-ak-ash1/hs430.ash1/23778_106837706002537_10289239443_171753_3508473_n.jpg


The frankenbox I'm testing on is a retrofitted 1U (it had a scsi 
backplane, now has none).


I am not certain, but I think with 8.1 (which it's running) and all the 
cam integration stuff, hotplug is possible.  Is a special backplane 
required?  I seriously don't know...  I'm going to give it a shot though.


Oh, you also might get NCQ.  Try:

[r...@h21 /tmp]# camcontrol tags ada0
(pass0:ahcich0:0:0:0): device openings: 32

Charles




--
Dan Langille - http://langille.org/




Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Dan Langille

On 7/22/2010 4:03 AM, Charles Sprickman wrote:

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 3:30 AM, Charles Sprickman wrote:

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 2:59 AM, Andrey V. Elsukov wrote:

On 22.07.2010 10:32, Dan Langille wrote:

I'm not sure of the criteria, but this is what I'm running:

atapci0:SiI 3124 SATA300 controller port 0xdc00-0xdc0f mem
0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on
pci7

atapci1:SiI 3124 SATA300 controller port 0xac00-0xac0f mem
0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on
pci3

I added ahci_load=YES to loader.conf and rebooted. Now I see:


You can add siis_load=YES to loader.conf for SiI 3124.


Ahh, thank you.

I'm afraid to do that now, before I label my ZFS drives for fear that
the ZFS array will be messed up. But I do plan to do that for the
system after my plan is implemented. Thank you. :)


You may even get hotplug support if you're lucky. :)

I just built a box and gave it a spin with the old ata stuff and then
with the new (AHCI) stuff. It does perform a bit better and my BIOS
claims it supports hotplug with ahci enabled as well... Still have to
test that.


Well, I don't have anything to support hotplug. All my stuff is internal.

http://sphotos.ak.fbcdn.net/hphotos-ak-ash1/hs430.ash1/23778_106837706002537_10289239443_171753_3508473_n.jpg



The frankenbox I'm testing on is a retrofitted 1U (it had a scsi
backplane, now has none).

I am not certain, but I think with 8.1 (which it's running) and all the
cam integration stuff, hotplug is possible. Is a special backplane
required? I seriously don't know... I'm going to give it a shot though.

Oh, you also might get NCQ. Try:

[r...@h21 /tmp]# camcontrol tags ada0
(pass0:ahcich0:0:0:0): device openings: 32


# camcontrol tags ada0
(pass0:siisch2:0:0:0): device openings: 31


--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Dan Langille

On 7/22/2010 4:03 AM, Charles Sprickman wrote:

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 3:30 AM, Charles Sprickman wrote:

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 2:59 AM, Andrey V. Elsukov wrote:

On 22.07.2010 10:32, Dan Langille wrote:

I'm not sure of the criteria, but this is what I'm running:

atapci0:SiI 3124 SATA300 controller port 0xdc00-0xdc0f mem
0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on
pci7

atapci1:SiI 3124 SATA300 controller port 0xac00-0xac0f mem
0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on
pci3

I added ahci_load=YES to loader.conf and rebooted. Now I see:


You can add siis_load=YES to loader.conf for SiI 3124.


Ahh, thank you.

I'm afraid to do that now, before I label my ZFS drives for fear that
the ZFS array will be messed up. But I do plan to do that for the
system after my plan is implemented. Thank you. :)


You may even get hotplug support if you're lucky. :)

I just built a box and gave it a spin with the old ata stuff and then
with the new (AHCI) stuff. It does perform a bit better and my BIOS
claims it supports hotplug with ahci enabled as well... Still have to
test that.


Well, I don't have anything to support hotplug. All my stuff is internal.

http://sphotos.ak.fbcdn.net/hphotos-ak-ash1/hs430.ash1/23778_106837706002537_10289239443_171753_3508473_n.jpg



The frankenbox I'm testing on is a retrofitted 1U (it had a scsi
backplane, now has none).

I am not certain, but I think with 8.1 (which it's running) and all the
cam integration stuff, hotplug is possible. Is a special backplane
required? I seriously don't know... I'm going to give it a shot though.

Oh, you also might get NCQ. Try:

[r...@h21 /tmp]# camcontrol tags ada0
(pass0:ahcich0:0:0:0): device openings: 32


# camcontrol tags ada0
(pass0:siisch2:0:0:0): device openings: 31

resending with this:

ada{0..4} give the above.

# camcontrol tags ada5
(pass5:ahcich0:0:0:0): device openings: 32

That's part of the gmirror array for the OS, along with ad6 which has 
similar output.


--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Dan Langille

On 7/22/2010 4:03 AM, Charles Sprickman wrote:

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 3:30 AM, Charles Sprickman wrote:

On Thu, 22 Jul 2010, Dan Langille wrote:


On 7/22/2010 2:59 AM, Andrey V. Elsukov wrote:

On 22.07.2010 10:32, Dan Langille wrote:

I'm not sure of the criteria, but this is what I'm running:

atapci0:SiI 3124 SATA300 controller port 0xdc00-0xdc0f mem
0xfbeffc00-0xfbeffc7f,0xfbef-0xfbef7fff irq 17 at device 4.0 on
pci7

atapci1:SiI 3124 SATA300 controller port 0xac00-0xac0f mem
0xfbbffc00-0xfbbffc7f,0xfbbf-0xfbbf7fff irq 19 at device 4.0 on
pci3

I added ahci_load=YES to loader.conf and rebooted. Now I see:


You can add siis_load=YES to loader.conf for SiI 3124.


Ahh, thank you.

I'm afraid to do that now, before I label my ZFS drives for fear that
the ZFS array will be messed up. But I do plan to do that for the
system after my plan is implemented. Thank you. :)


You may even get hotplug support if you're lucky. :)

I just built a box and gave it a spin with the old ata stuff and then
with the new (AHCI) stuff. It does perform a bit better and my BIOS
claims it supports hotplug with ahci enabled as well... Still have to
test that.


Well, I don't have anything to support hotplug. All my stuff is internal.

http://sphotos.ak.fbcdn.net/hphotos-ak-ash1/hs430.ash1/23778_106837706002537_10289239443_171753_3508473_n.jpg



The frankenbox I'm testing on is a retrofitted 1U (it had a scsi
backplane, now has none).

I am not certain, but I think with 8.1 (which it's running) and all the
cam integration stuff, hotplug is possible. Is a special backplane
required? I seriously don't know... I'm going to give it a shot though.

Oh, you also might get NCQ. Try:

[r...@h21 /tmp]# camcontrol tags ada0
(pass0:ahcich0:0:0:0): device openings: 32


# camcontrol tags ada0
(pass0:siisch2:0:0:0): device openings: 31

resending with this:

ada{0..4} give the above.

# camcontrol tags ada5
(pass5:ahcich0:0:0:0): device openings: 32

That's part of the gmirror array for the OS, along with ad6 which has 
similar output.


And again with this output from one of the ZFS drives:

# camcontrol identify ada0
pass0: Hitachi HDS722020ALA330 JKAOA28A ATA-8 SATA 2.x device
pass0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)

protocol  ATA/ATAPI-8 SATA 2.x
device model  Hitachi HDS722020ALA330
firmware revision JKAOA28A
serial number JK1130YAH531ST
WWN   5000cca221d068d5
cylinders 16383
heads 16
sectors/track 63
sector size   logical 512, physical 512, offset 0
LBA supported 268435455 sectors
LBA48 supported   3907029168 sectors
PIO supported PIO4
DMA supported WDMA2 UDMA6
media RPM 7200

Feature  Support  EnableValue   Vendor
read ahead yes  yes
write cacheyes  yes
flush cacheyes  yes
overlapno
Tagged Command Queuing (TCQ)   no   no
Native Command Queuing (NCQ)   yes  32 tags
SMART  yes  yes
microcode download yes  yes
security   yes  no
power management   yes  yes
advanced power management  yes  no  0/0x00
automatic acoustic management  yes  no  254/0xFE  128/0x80
media status notification  no   no
power-up in Standbyyes  no
write-read-verify  no   no  0/0x0
unload no   no
free-fall  no   no
data set management (TRIM) no


--
Dan Langille - http://langille.org/



Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Jeremy Chadwick
On Thu, Jul 22, 2010 at 04:03:05AM -0400, Charles Sprickman wrote:
 On Thu, 22 Jul 2010, Dan Langille wrote:
 Well, I don't have anything to support hotplug.  All my stuff is internal.
 
 http://sphotos.ak.fbcdn.net/hphotos-ak-ash1/hs430.ash1/23778_106837706002537_10289239443_171753_3508473_n.jpg
 
 The frankenbox I'm testing on is a retrofitted 1U (it had a scsi
 backplane, now has none).
 
 I am not certain, but I think with 8.1 (which it's running) and all
 the cam integration stuff, hotplug is possible.  Is a special
 backplane required?  I seriously don't know...  I'm going to give it
 a shot though.

Yes, a special backplane is required.
 
 Oh, you also might get NCQ.  Try:
 
 [r...@h21 /tmp]# camcontrol tags ada0
 (pass0:ahcich0:0:0:0): device openings: 32

NCQ should be enabled by default.  camcontrol identify will provide
much more verbose details about the state of these disks.  Don't confuse
identify with inquiry.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Boris Samorodov
On Wed, 21 Jul 2010 23:15:41 -0400 Dan Langille wrote:
 On 7/21/2010 11:05 PM, Dan Langille wrote (something close to this):

  First, create a new GUID Partition Table partition scheme on the HDD:
  gpart create -s GPT ad0
 
  Let's see how much space we have. This output will be used to determine
  SOMEVALUE in the next command.
 
  gpart show
 
  Create a new partition within that scheme:
  gpart add -b 34 -s SOMEVALUE -t freebsd-zfs ad0
 
  Now, label the thing:
  glabel label -v disk00 /dev/ad0

That command will destroy the secondary GPT.

 Or, is this more appropriate?
   glabel label -v disk00 /dev/ad0s1

-- 
WBR, Boris Samorodov (bsam)
Research Engineer, http://www.ipt.ru Telephone  Internet SP
FreeBSD Committer, http://www.FreeBSD.org The Power To Serve


Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Charles Sprickman

On Thu, 22 Jul 2010, Boris Samorodov wrote:


On Wed, 21 Jul 2010 23:15:41 -0400 Dan Langille wrote:

On 7/21/2010 11:05 PM, Dan Langille wrote (something close to this):



First, create a new GUID Partition Table partition scheme on the HDD:
gpart create -s GPT ad0

Let's see how much space we have. This output will be used to determine
SOMEVALUE in the next command.

gpart show

Create a new partition within that scheme:
gpart add -b 34 -s SOMEVALUE -t freebsd-zfs ad0

Now, label the thing:
glabel label -v disk00 /dev/ad0


That command will destroy the secondary GPT.


I was just reading about GUID partitioning last night and saw that one of 
the benefits is that there's a copy of the partition table kept at the end 
of the disk.  That seems like a pretty neat feature.


Do you by any chance have a reference I can point to (I was documenting 
stuff about GPT in an internal wiki and this is a nice piece of info to 
have)?


Also, how does one access/use the backup partition table?

Thanks,

Charles


Or, is this more appropriate?
  glabel label -v disk00 /dev/ad0s1


--
WBR, Boris Samorodov (bsam)
Research Engineer, http://www.ipt.ru Telephone  Internet SP
FreeBSD Committer, http://www.FreeBSD.org The Power To Serve


Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Boris Samorodov
Charles Sprickman sp...@bway.net writes:
 On Thu, 22 Jul 2010, Boris Samorodov wrote:
 On Wed, 21 Jul 2010 23:15:41 -0400 Dan Langille wrote:
 On 7/21/2010 11:05 PM, Dan Langille wrote (something close to this):

 First, create a new GUID Partition Table partition scheme on the HDD:
 gpart create -s GPT ad0

 Let's see how much space we have. This output will be used to determine
 SOMEVALUE in the next command.

 gpart show

 Create a new partition within that scheme:
 gpart add -b 34 -s SOMEVALUE -t freebsd-zfs ad0

 Now, label the thing:
 glabel label -v disk00 /dev/ad0

 That command will destroy the secondary GPT.

 I was just reading about GUID partitioning last night and saw that one
 of the benefits is that there's a copy of the partition table kept at
 the end of the disk.  That seems like a pretty neat feature.

 Do you by any chance have a reference I can point to (I was
 documenting stuff about GPT in an internal wiki and this is a nice
 piece of info to have)?

 Also, how does one access/use the backup partition table?

http://en.wikipedia.org/wiki/GUID_Partition_Table

-- 
WBR, bsam


Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Daniel O'Connor

On 22/07/2010, at 12:35, Dan Langille wrote:
 Why use glabel?
 
 * So ZFS can find and use the correct HDD should the HDD device ever
   get renumbered for whatever reason.  e.g. /dev/da0 becomes /dev/da6
   when you move it to another controller.
 
 Why use partitions?
 
 * Primarily: two HDD of a given size, say 2TB, do not always provide
   the same amount of available space.  If you use a slightly smaller
   partition instead of the entire physical HDD, you're much more
   likely to have a happier experience when it comes time to replace an
   HDD.
 
 * There seems to be a consensus amongst some that it is wise to leave
   the start and end of your HDD empty.  Give the rest to ZFS.

I would combine both!

GPT generates a UUID for each partition and glabel presents this so ZFS can use 
it, eg I have..
[cain 19:45] ~ sudo zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAME                                            STATE     READ WRITE CKSUM
tank                                            ONLINE       0     0     0
  raidz2                                        ONLINE       0     0     0
    gptid/d7467802-418f-11df-bcfc-001517e077fb  ONLINE       0     0     0
    gptid/d7eeeced-418f-11df-bcfc-001517e077fb  ONLINE       0     0     0
    gptid/d8761aa0-418f-11df-bcfc-001517e077fb  ONLINE       0     0     0
    gptid/d9083d18-418f-11df-bcfc-001517e077fb  ONLINE       0     0     0
    gptid/d97203ec-418f-11df-bcfc-001517e077fb  ONLINE       0     0     0

and on each disk..
[cain 19:46] ~ gpart list ada0   
Geom name: ada0
fwheads: 16
fwsectors: 63
last: 1953525134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Mode: r0w0e0
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 8589934592
   offset: 17408
   type: freebsd-swap
   index: 1
   end: 16777249
   start: 34
2. Name: ada0p2
   Mediasize: 991614917120 (924G)
   Sectorsize: 512
   Mode: r1w1e2
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 991614917120
   offset: 8589952000
   type: freebsd-zfs
   index: 2
   end: 1953525134
   start: 16777250
Consumers:
1. Name: ada0
   Mediasize: 1000204886016 (932G)
   Sectorsize: 512
   Mode: r1w1e3

The only tedious part is working out which drive has what UUIDs on it because 
gpart doesn't list them.

The advantage of using the UUIDs is that if you setup another machine the same 
way you don't have to worry about things when you plug in the disks from it to 
recover something. Or perhaps you are upgrading at the same time as replacing 
hardware so you have all the disks in at once.

 Create a new partition within that scheme:
 
  gpart add -b 34 -s SOMEVALUE -t freebsd-zfs ad0
 
 Why '-b 34'?  Randi pointed me to 
 http://en.wikipedia.org/wiki/GUID_Partition_Table where it explains what the 
 first 33 LBA are used for.  It's not for us to use here.

If you don't specify -b it will DTRT - that's how I did it.

You can also specify the size (and start) in human units (Gb etc).

--
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
The nice thing about standards is that there
are so many of them to choose from.
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C








Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Daniel O'Connor

On 22/07/2010, at 13:59, Adam Vande More wrote:
 To be clear, we are talking about data partitions, not the boot one.
 Difficult for me to explain concisely, but basically it has to do with seek
 time.  A mis-aligned partition will almost always have an extra seek for
 each standard seek you'd have on aligned one.  There have been some
 discussions about in the archives, also this is not unique to FreeBSD so
 google will have a more detailed and probably better explanation.

Newer disks have 4 KB sectors internally at least, and some expose it to the OS.

If you create your partitions unaligned to this every read and write will 
involve at least one more sector than it would otherwise and that hurts 
performance.

The disks which don't expose it have a jumper which offsets all accesses to 
Windows XP's performance doesn't take a dive but I'm not sure if that helps 
FreeBSD.
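If in doubt, you can ask the drive what it claims (a sketch; ada0 is a placeholder, and older FreeBSD releases may not report a separate stripesize):

```shell
# Print sector size (and, on newer FreeBSD, stripesize) for a drive.
# A 4K-native drive behind 512-byte emulation may show up via stripesize.
diskinfo -v ada0
```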

--
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
The nice thing about standards is that there
are so many of them to choose from.
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C








Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Paul Mather
On Jul 21, 2010, at 11:05 PM, Dan Langille wrote:

 I hope my terminology is correct
 
 I have a ZFS array which uses raw devices.  I'd rather it use glabel and 
 supply the GEOM devices to ZFS instead.  In addition, I'll also partition the 
 HDD to avoid using the entire HDD: leave a little bit of space at the start 
 and end.
 
 Why use glabel?
 
 * So ZFS can find and use the correct HDD should the HDD device ever
   get renumbered for whatever reason.  e.g. /dev/da0 becomes /dev/da6
   when you move it to another controller.

I have created ZFS pools using this strategy.  However, about a year ago I 
still fell foul of the drive shuffling problem, when GEOM labels appeared not 
to be detected properly:

http://lists.freebsd.org/pipermail/freebsd-geom/2009-July/003654.html

This was using RELENG_7, and the problem was provoked by external USB drives.

The same issue might not occur with FreeBSD 8.x, but I thought I'd point out my 
experience as a possible warning about using glabel.

Nowadays, I use GPT labels (gpart ... -l somelabel, referenced via 
/dev/gpt/somelabel).

 Why use partitions?
 
 * Primarily: two HDD of a given size, say 2TB, do not always provide
   the same amount of available space.  If you use a slightly smaller
   partition instead of the entire physical HDD, you're much more
   likely to have a happier experience when it comes time to replace an
   HDD.
 
 * There seems to be a consensus amongst some that you should leave the start
   and end of your HDD empty.  Give the rest to ZFS.

You should also try and accommodate 4K sector size drives these days.  
Apparently, the performance boosts from hitting 4K-aligned sectors can be very 
good.
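A quick way to check a candidate start LBA for 4K alignment, assuming 512-byte logical sectors (a sketch, not from the thread):

```shell
start=1024                         # candidate starting LBA
echo $(( (start * 512) % 4096 ))   # prints 0 when the start is 4KiB-aligned
```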

Cheers,

Paul.


Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Volodymyr Kostyrko

22.07.2010 06:05, Dan Langille wrote:

Create a new partition within that scheme:

gpart add -b 34 -s SOMEVALUE -t freebsd-zfs ad0

Why '-b 34'? Randi pointed me to
http://en.wikipedia.org/wiki/GUID_Partition_Table where it explains what
the first 33 LBA are used for. It's not for us to use here.


gpart is not so dumb as to leave this space unprotected. If you don't specify 
-b when creating the first partition, it automagically defaults to 34.


--
Sphinx of black quartz judge my vow.



Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Dan Langille
Thank you to all for the helpful discussion; it's been very educational.  
Based on the advice and suggestions, I'm going to adjust my original plan 
as follows.


NOTE: glabel will not be used.


First, create a new GUID Partition Table partition scheme on the HDD:

gpart create -s GPT ad0


Let's see how much space we have. This output will be used to determine
SOMEVALUE in the next command.

gpart show


Create a new partition within that scheme:

gpart add -b 1024 -s SOMEVALUE -t freebsd-zfs -l disk00 ad0

The -b 1024 ensures alignment on a 4KB boundary.

SOMEVALUE will be set so that approximately 200MB is left empty at the end of 
the HDD.  That's more than enough to accommodate the differing actual sizes 
of 2TB HDDs.


Repeat the above with ad1 to get disk01. Repeat for all other HDD...

Then create your zpool:

zpool create bigtank gpt/disk00 gpt/disk01 ... etc


This plan will be applied to an existing 5 HDD ZFS pool.  I have two new 
empty HDD which will be added to this new array (giving me 7 x 2TB HDD). 
 The array is raidz1 and I'm wondering if I want to go to raidz2.  That 
would be about 10TB and I'm only using up 3.1TB at present.  That 
represents about 4 months of backups.


I do not think I can adjust the existing zpool on the fly.  I think I 
need to copy everything elsewhere (i.e. the 2 empty drives), then start 
the new zpool from scratch.


The risk: when the data is on the 2 spare HDD, there is no redundancy. 
I wonder if my friend Jerry has a spare 2TB HDD I could borrow for the 
evening.
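One hedged sketch of that copy step using zfs send/receive (the pool name "scratch" and the device names are placeholders; snapshotting first keeps the copy consistent):

```shell
# Build a temporary pool on the two spare disks (no redundancy, as noted).
zpool create scratch /dev/ada5 /dev/ada6
# Take a recursive snapshot and replicate the whole pool into it.
zfs snapshot -r bigtank@migrate
zfs send -R bigtank@migrate | zfs receive -F -d scratch
# Verify the copy before destroying and rebuilding the original pool.
zfs list -r scratch
```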


--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Pawel Tyll
 I do not think I can adjust the existing zpool on the fly.  I think I
 need to copy everything elsewhere (i.e the 2 empty drives).  Then start
 the new zpool from scratch.
You can, and you should (for educational purposes if not for fun :),
unless you wish to change raidz1 to raidz2. Replace a disk and wait for
the resilver; if redoing a used disk, offline it first, wipe the ZFS
magic with dd (16KB at the beginning and end of the disk/partition will
do), carry on with GPT, rinse and repeat with the next disk. When the
last vdev's replace finishes, your pool will grow automagically.
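The dance above, sketched as commands for one disk (untested here; device names, pool name, and label are placeholders from the thread's plan, SOMEVALUE is still your computed size, and dd is destructive):

```shell
zpool offline bigtank ada1                        # retire the old raw-disk vdev
dd if=/dev/zero of=/dev/ada1 bs=1m count=1        # wipe old ZFS magic at the front
disksize=$(diskinfo /dev/ada1 | awk '{print $3}') # media size in bytes
dd if=/dev/zero of=/dev/ada1 bs=512 count=32 \
   oseek=$(( disksize / 512 - 32 ))               # ...and the last 16KB
gpart create -s GPT ada1
gpart add -b 1024 -s SOMEVALUE -t freebsd-zfs -l disk01 ada1
zpool replace bigtank ada1 gpt/disk01             # then wait for the resilver
```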




Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Dan Langille

On 7/22/2010 9:22 PM, Pawel Tyll wrote:

I do not think I can adjust the existing zpool on the fly.  I think I
need to copy everything elsewhere (i.e the 2 empty drives).  Then start
the new zpool from scratch.



You can, and you should (for educational purposes if not for fun :),
unless you wish to change raidz1 to raidz2. Replace, wait for
resilver, if redoing used disk then offline it, wipe magic with dd
(16KB at the beginning and end of disk/partition will do), carry on
with GPT, rinse and repeat with next disk. When last vdev's replace
finishes, your pool will grow automagically.


So... the smaller size won't mess things up...

--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Pawel Tyll
 So... the smaller size won't mess things up...
If by smaller size you mean smaller size of existing
drives/partitions, then growing zpools by replacing smaller vdevs
with larger ones is supported and works. What isn't supported is
basically everything else:
- you can't change number of raid columns (add/remove vdevs from raid)
- you can't change number of parity columns (raidz1-2 or 3)
- you can't change vdevs to smaller ones, even if pool's free space
would permit that.

Good news is these features are planned/being worked on.

If you can attach more drives to your system without disconnecting
existing drives, then you can grow your pool pretty much risk-free.




Re: Using GTP and glabel for ZFS arrays

2010-07-22 Thread Daniel O'Connor

On 23/07/2010, at 24:56, Volodymyr Kostyrko wrote:

 22.07.2010 06:05, Dan Langille wrote:
 Create a new partition within that scheme:
 
 gpart add -b 34 -s SOMEVALUE -t freebsd-zfs ad0
 
 Why '-b 34'? Randi pointed me to
 http://en.wikipedia.org/wiki/GUID_Partition_Table where it explains what
 the first 33 LBA are used for. It's not for us to use here.
 
 gpart is not so dumb as to leave this space unprotected. If you don't specify -b when 
 creating the first partition it automagically defaults to 34.

Maybe it should default to 40 to get 4k alignment..?
(Probably a POLA/legacy issue there)

--
Daniel O'Connor software and network engineer
for Genesis Software - http://www.gsoft.com.au
The nice thing about standards is that there
are so many of them to choose from.
  -- Andrew Tanenbaum
GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C








Using GTP and glabel for ZFS arrays

2010-07-21 Thread Dan Langille

I hope my terminology is correct

I have a ZFS array which uses raw devices.  I'd rather it use glabel and 
supply the GEOM devices to ZFS instead.  In addition, I'll also 
partition the HDD to avoid using the entire HDD: leave a little bit of 
space at the start and end.


Why use glabel?

 * So ZFS can find and use the correct HDD should the HDD device ever
   get renumbered for whatever reason.  e.g. /dev/da0 becomes /dev/da6
   when you move it to another controller.

Why use partitions?

 * Primarily: two HDD of a given size, say 2TB, do not always provide
   the same amount of available space.  If you use a slightly smaller
   partition instead of the entire physical HDD, you're much more
   likely to have a happier experience when it comes time to replace an
   HDD.

 * There seems to be a consensus amongst some that you should leave the start
   and end of your HDD empty.  Give the rest to ZFS.

Things I've read that led me to the above reasons:

* 
http://docs.freebsd.org/cgi/getmsg.cgi?fetch=399538+0+current/freebsd-stable
* 
http://lists.freebsd.org/pipermail/freebsd-stable/2010-February/055008.html

* http://lists.freebsd.org/pipermail/freebsd-geom/2009-July/003620.html

To test this plan, I'm going to play with just two HDD, because 
that's what I have available.  Let's assume these two HDD are ad0 and 
ad1.  I am not planning to boot from these HDD; they are for storage only.


First, create a new GUID Partition Table partition scheme on the HDD:

  gpart create -s GPT ad0


Let's see how much space we have.  This output will be used to determine 
SOMEVALUE in the next command.


  gpart show


Create a new partition within that scheme:

  gpart add -b 34 -s SOMEVALUE -t freebsd-zfs ad0

Why '-b 34'?  Randi pointed me to 
http://en.wikipedia.org/wiki/GUID_Partition_Table where it explains what 
the first 33 LBA are used for.  It's not for us to use here.


Where SOMEVALUE is the number of blocks to use.  I plan not to use all 
the available blocks but leave a few hundred MB free at the end. 
That'll allow for the variance in HDD size.



Now, label the thing:

  glabel label -v disk00 /dev/ad0

Repeat the above with ad1 to get disk01.  Repeat for all other HDD...

Then create your zpool:

 zpool create bigtank label/disk00 label/disk01 ... etc


Any suggestions/comments?  Is there any advantage to using the -l option 
on 'gpart add' instead of the glabel above?


Thanks


--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-21 Thread Dan Langille

On 7/21/2010 11:05 PM, Dan Langille wrote (something close to this):


First, create a new GUID Partition Table partition scheme on the HDD:

gpart create -s GPT ad0


Let's see how much space we have. This output will be used to determine
SOMEVALUE in the next command.

gpart show


Create a new partition within that scheme:

gpart add -b 34 -s SOMEVALUE -t freebsd-zfs ad0


Now, label the thing:

glabel label -v disk00 /dev/ad0


Or, is this more appropriate?

  glabel label -v disk00 /dev/ad0s1

--
Dan Langille - http://langille.org/


Re: Using GTP and glabel for ZFS arrays

2010-07-21 Thread Edho P Arief
On Thu, Jul 22, 2010 at 10:15 AM, Dan Langille d...@langille.org wrote:
 glabel label -v disk00 /dev/ad0

 Or, is this more appropriate?

  glabel label -v disk00 /dev/ad0s1


actually it's /dev/ad0p1.

GPT scheme uses p, not s.  And yes, that's more appropriate - if you
create the zpool on a disk00 labelled over the whole of ad0, it'll use
the entire disk, ignoring the partitioning.


-- 
O ascii ribbon campaign - stop html mail - www.asciiribbon.org


Re: Using GTP and glabel for ZFS arrays

2010-07-21 Thread Adam Vande More
On Wed, Jul 21, 2010 at 10:05 PM, Dan Langille d...@langille.org wrote:

 Why '-b 34'?  Randi pointed me to
 http://en.wikipedia.org/wiki/GUID_Partition_Table where it explains what
 the first 33 LBA are used for.  It's not for us to use here.

 Where SOMEVALUE is the number of blocks to use.  I plan not to use all the
 available blocks but leave a few hundred MB free at the end. That'll allow
 for the variance in HDD size.

 Any suggestions/comments?  Is there any advantage to using the -l option on
 'gpart add' instead of the glabel above?


You'll want to make sure your partitions are aligned; discussion here (says
4k drives, but the info is pertinent to all):

http://lists.freebsd.org/pipermail/freebsd-hackers/2010-March/031154.html

My understanding is that you aren't booting from zfs, just using it as a
data file system.  In that case, you'd want to use gpart add -b 512 ...
or some other multiple of 16.  Even 1024 would be a good safe number.  Also,
GPT creates partitions, not slices.  Your resulting partitions will be
labeled something like ad0p1, ad0p2, etc.
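Checking those suggestions arithmetically (a sketch, assuming 512-byte logical sectors): any start LBA divisible by 8 lands on a 4KiB boundary, so 512 and 1024 are both safely aligned, while the GPT minimum of 34 is not:

```shell
for start in 34 512 1024; do
    echo "$start -> $(( (start * 512) % 4096 ))"   # 0 means 4KiB-aligned
done
```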



-- 
Adam Vande More


Re: Using GTP and glabel for ZFS arrays

2010-07-21 Thread Adam Vande More
On Wed, Jul 21, 2010 at 10:34 PM, Adam Vande More amvandem...@gmail.comwrote:



 On Wed, Jul 21, 2010 at 10:05 PM, Dan Langille d...@langille.org wrote:

 Why '-b 34'?  Randi pointed me to
 http://en.wikipedia.org/wiki/GUID_Partition_Table where it explains what
 the first 33 LBA are used for.  It's not for us to use here.

 Where SOMEVALUE is the number of blocks to use.  I plan not to use all the
 available blocks but leave a few hundred MB free at the end. That'll allow
 for the variance in HDD size.

 Any suggestions/comments?  Is there any advantage to using the -l option
 on 'gpart add' instead of the glabel above?


 You'll want to make sure your partitions are aligned; discussion here (says
 4k drives, but the info is pertinent to all):

 http://lists.freebsd.org/pipermail/freebsd-hackers/2010-March/031154.html

 My understanding is that you aren't booting from zfs, just using it as a
 data file system.  In that case, you'd want to use gpart add -b 512 ...
 or some other multiple of 16.  Even 1024 would be a good safe number.  Also,
 GPT creates partitions, not slices.  Your resulting partitions will be
 labeled something like ad0p1, ad0p2, etc.


Also, if you have an applicable SATA controller, running the ahci module will
give you more speed.  Only change one thing at a time though.  VirtualBox makes
a great testbed for this; you don't need to allocate the VM a lot of RAM,
just make sure it boots and such.

-- 
Adam Vande More


Re: Using GTP and glabel for ZFS arrays

2010-07-21 Thread Charles Sprickman

On Wed, 21 Jul 2010, Adam Vande More wrote:


On Wed, Jul 21, 2010 at 10:05 PM, Dan Langille d...@langille.org wrote:


Why '-b 34'?  Randi pointed me to
http://en.wikipedia.org/wiki/GUID_Partition_Table where it explains what
the first 33 LBA are used for.  It's not for us to use here.

Where SOMEVALUE is the number of blocks to use.  I plan not to use all the
available blocks but leave a few hundred MB free at the end. That'll allow
for the variance in HDD size.

Any suggestions/comments?  Is there any advantage to using the -l option on
'gpart add' instead of the glabel above?



You'll want to make sure your partitions are aligned; discussion here (says
4k drives, but the info is pertinent to all):

http://lists.freebsd.org/pipermail/freebsd-hackers/2010-March/031154.html



From that thread:


http://lists.freebsd.org/pipermail/freebsd-hackers/2010-March/031173.html

(longer explanation)

I'm not really understanding the alignment issue myself on a few levels:

-Does it only affect the new drives with 4K blocks?
-If it does not, is it generally good to start your first partition at 1MB 
in?  How exactly does doing this fix the alignment issue?



My understanding is that you aren't booting from zfs, just using it as a
data file system.  In that case, you'd want to use gpart add -b 512 ...
or some other multiple of 16.  Even 1024 would be a good safe number.  Also,
GPT creates partitions, not slices.  Your resulting partitions will be
labeled something like ad0p1, ad0p2, etc.


I assume the same can be applied if you do boot from zfs; you'd still 
create the freebsd-boot partition starting at 34, but your next 
partition (be it swap or zfs) would start either 512 or 1024 sectors in?


Thanks,

Charles




--
Adam Vande More


Re: Using GTP and glabel for ZFS arrays

2010-07-21 Thread Adam Vande More
On Wed, Jul 21, 2010 at 11:20 PM, Charles Sprickman sp...@bway.net wrote:


 -Does it only affect the new drives with 4K blocks?


No, although block size does affect these symptoms


 -If it does not, is it generally good to start your first partition at 1MB
 in?  How exactly does doing this fix the alignment issue?


To be clear, we are talking about data partitions, not the boot one.
Difficult for me to explain concisely, but basically it has to do with seek
time.  A mis-aligned partition will almost always incur an extra seek for
each standard seek you'd have on an aligned one.  There have been some
discussions about it in the archives; also, this is not unique to FreeBSD, so
google will have a more detailed and probably better explanation.


 I assume the same can be applied if you do boot from zfs; you'd still
 create the freebsd-boot partition starting at 34, but your next partition
 (be it swap or zfs) would start either 512 or 1024 sectors in?


Yes.

-- 
Adam Vande More


Re: Using GTP and glabel for ZFS arrays

2010-07-21 Thread Scot Hetzel
On Wed, Jul 21, 2010 at 10:05 PM, Dan Langille d...@langille.org wrote:
 I hope my terminology is correct

 I have a ZFS array which uses raw devices.  I'd rather it use glabel and
 supply the GEOM devices to ZFS instead.  In addition, I'll also partition
 the HDD to avoid using the entire HDD: leave a little bit of space at the
 start and end.

 Why use glabel?

  * So ZFS can find and use the correct HDD should the HDD device ever
   get renumbered for whatever reason.  e.g. /dev/da0 becomes /dev/da6
   when you move it to another controller.

 Why use partitions?

  * Primarily: two HDD of a given size, say 2TB, do not always provide
   the same amount of available space.  If you use a slightly smaller
   partition instead of the entire physical HDD, you're much more
   likely to have a happier experience when it comes time to replace an
   HDD.

  * There seems to be a consensus amongst some that you should leave the start
    and end of your HDD empty.  Give the rest to ZFS.

 Things I've read that led me to the above reasons:

 *
 http://docs.freebsd.org/cgi/getmsg.cgi?fetch=399538+0+current/freebsd-stable
 *
 http://lists.freebsd.org/pipermail/freebsd-stable/2010-February/055008.html
 * http://lists.freebsd.org/pipermail/freebsd-geom/2009-July/003620.html

 To test this plan, I'm going to play with just two HDD, because that's
 what I have available.  Let's assume these two HDD are ad0 and ad1.  I am
 not planning to boot from these HDD; they are for storage only.

 First, create a new GUID Partition Table partition scheme on the HDD:

  gpart create -s GPT ad0


 Let's see how much space we have.  This output will be used to determine
 SOMEVALUE in the next command.

  gpart show


 Create a new partition within that scheme:

  gpart add -b 34 -s SOMEVALUE -t freebsd-zfs ad0


Instead of using glabel to label the partition on a GPT disk, use this
command instead:

gpart add -b 34 -s SOMEVALUE -t freebsd-zfs -l disk00 ad0

You will then see a /dev/gpt/disk00.

Then to create the zpool use:

zpool create bigtank gpt/disk00 gpt/disk01 ...
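To confirm the labels took before building the pool (a sketch; ad0 is a placeholder and the output varies by system):

```shell
gpart show -l ad0    # list partitions with their GPT labels
ls /dev/gpt/         # each label should appear here as a device node
```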

Scot