Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-06 Thread perryh
John Baldwin j...@freebsd.org wrote:

 ... even NFS UDP mounts maintain their own set of socket state
 to manage retries and retransmits for UDP RPCs.

Not according to what I remember of the SunOS NFS documentation,
which indicated that the driving force behind using UDP instead of
TCP was to have the server be _completely_ stateless.  (Of course
locking is inherently stateful; they made it very clear that the
locking protocol was considered to be an adjunct rather than part
of the NFS protocol itself.)

It's been quite a few years since I read that, and I didn't get
into the details, but I suppose the handle returned to a client (in
response to a mount or open request) must have contained both a
representation of the inode number and a unique identification of
the filesystem (so that, in the case where server crash recovery
included a newfs and reload from backup, the FS ID would not match
and the client would get a stale handle response).  All of the
retry and retransmit burden had to have been managed by the client,
for both reading and writing.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-06 Thread perryh
Rick Macklem rmack...@uoguelph.ca wrote:

 Sun did add a separate file locking protocol called the NLM
 or rpc.lockd if you prefer, but that protocol design was
 fundamentally flawed imho and, as such, using it is in the
 your mileage may vary category.

I suppose it was not all that bad, considering that what it sought
to accomplish is incomputable.  There is simply no way for either
the server or the client to distinguish between "the other end has
crashed" and "there is a temporary communication failure" until the
other end comes back up or communication is restored.

On a good day, in a completely homogeneous environment (server and
all clients running the same OS revision and patchlevel), I trust
lockd about as far as I can throw 10GB of 1980's SMD disk drives :)

Exporting /var/spool/mail read/write tends to ensure that good days
will be rare.  Been there, done that, seen the result.  Never again.
That's what IMAP is for.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Regression with iwn drivers (4965BGN) in 8.2-PRERELEASE ?

2011-01-06 Thread Olivier Cochard-Labbé
Hi all,
Since I upgraded from 8.1 to 8.2-RC, my wireless negotiated speed is
very slow (more exactly, it starts at normal speed, but decreases
each second until it stops at 1Mbps and becomes unusable).

I'm using iwn drivers:
iwn0: Intel(R) PRO/Wireless 4965BGN mem 0xf6cfe000-0xf6cf irq 17
at device 0.0 on pci12
iwn0: MIMO 2T3R, MoW2, address 00:1d:e0:72:10:01
iwn0: [ITHREAD]
iwn0: 11a rates: 6Mbps 9Mbps 12Mbps 18Mbps 24Mbps 36Mbps 48Mbps 54Mbps
iwn0: 11b rates: 1Mbps 2Mbps 5.5Mbps 11Mbps
iwn0: 11g rates: 1Mbps 2Mbps 5.5Mbps 11Mbps 6Mbps 9Mbps 12Mbps 18Mbps
24Mbps 36Mbps 48Mbps 54Mbps

[r...@d630]~#uname -a
FreeBSD d630.bsdrp.net 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #1: Sun
Jan  2 01:32:14 CET 2011
r...@d630.bsdrp.net:/usr/obj/usr/src/sys/GENERIC  amd64

Does anybody else see the same problem?

Thanks,

Olivier
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Damien Fleuriot
You both make good points, thanks for the feedback :)

I am more concerned about data protection than performance, so I suppose raidz2 
is the best choice I have with such a small scale setup.

Now the question that remains is whether or not to use parts of the OS's SSD for 
ZIL, cache, or both?

---
Fleuriot Damien

On 5 Jan 2011, at 23:12, Artem Belevich fbsdl...@src.cx wrote:

 On Wed, Jan 5, 2011 at 1:55 PM, Damien Fleuriot m...@my.gd wrote:
 Well actually...
 
 raidz2:
 - 7x 1.5 tb = 10.5tb
 - 2 parity drives
 
 raidz1:
 - 3x 1.5 tb = 4.5 tb
 - 4x 1.5 tb = 6 tb , total 10.5tb
 - 2 parity drives in split thus different raidz1 arrays
 
 So really, in both cases 2 different parity drives and same storage...
 
 In second case you get better performance, but lose some data
 protection. It's still raidz1 and you can't guarantee functionality in
 all cases of two drives failing. If two drives fail in the same vdev,
 your entire pool will be gone.  Granted, it's better than single-vdev
 raidz1, but it's *not* as good as raidz2.
 
 --Artem
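
For reference, the two layouts being compared correspond roughly to the
following zpool create invocations (disk names are illustrative):

  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6
  zpool create tank raidz1 da0 da1 da2 raidz1 da3 da4 da5 da6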
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: Regression with iwn drivers (4965BGN) in 8.2-PRERELEASE ?

2011-01-06 Thread Bernhard Schmidt
On Thursday, January 06, 2011 10:02:54 Olivier Cochard-Labbé wrote:
 Hi all,
 Since I've upgraded from 8.1 to 8.2RC, my wireless negotiated speed is
 very very slow (more exactly it start at normal speed, but decrease
 each second still stopping a 1Mbps and became unusuable).
 
 I'm using iwn drivers:
 iwn0: Intel(R) PRO/Wireless 4965BGN mem 0xf6cfe000-0xf6cf irq 17
 at device 0.0 on pci12
 iwn0: MIMO 2T3R, MoW2, address 00:1d:e0:72:10:01
 iwn0: [ITHREAD]
 iwn0: 11a rates: 6Mbps 9Mbps 12Mbps 18Mbps 24Mbps 36Mbps 48Mbps 54Mbps
 iwn0: 11b rates: 1Mbps 2Mbps 5.5Mbps 11Mbps
 iwn0: 11g rates: 1Mbps 2Mbps 5.5Mbps 11Mbps 6Mbps 9Mbps 12Mbps 18Mbps
 24Mbps 36Mbps 48Mbps 54Mbps
 
 [r...@d630]~#uname -a
 FreeBSD d630.bsdrp.net 8.2-PRERELEASE FreeBSD 8.2-PRERELEASE #1: Sun
 Jan  2 01:32:14 CET 2011
 r...@d630.bsdrp.net:/usr/obj/usr/src/sys/GENERIC  amd64
 
 Does anybody else meet the same problem ?

Haven't seen this yet.

What do you mean by 'unusable' exactly? Lots of packet loss, or just slow 
transfer rates? 'wlandebug +rate' might shed some light on this one.
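
For example, assuming the wlan0 interface from the dmesg above (a minimal sketch):

  wlandebug -i wlan0 +rate

The AMRR rate-control decisions then show up in the kernel message buffer / syslog.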

-- 
Bernhard
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: Regression with iwn drivers (4965BGN) in 8.2-PRERELEASE ?

2011-01-06 Thread Olivier Cochard-Labbé
2011/1/6 Bernhard Schmidt bschm...@freebsd.org:

 What do you mean with 'unusable' exactly? Lots of packet loss, or just slow
 transfer rates? 'wlandebug +rate' might shed some light on this one.

Hi, it's just very slow transfer rates.
I didn't know about wlandebug, thanks for the tip.
Here are the results just after a boot, while pinging my gateway (little traffic):

Jan  6 11:02:25 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
increasing rate 48 (txcnt=11 retrycnt=0)
Jan  6 11:02:36 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
increasing rate 72 (txcnt=11 retrycnt=0)
Jan  6 11:03:02 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
increasing rate 96 (txcnt=11 retrycnt=0)

Now, I start xorg and start a browser:

Jan  6 11:04:04 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
decreasing rate 72 (txcnt=11 retrycnt=10)
Jan  6 11:04:09 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
decreasing rate 48 (txcnt=13 retrycnt=7)
Jan  6 11:04:10 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
decreasing rate 36 (txcnt=43 retrycnt=24)
Jan  6 11:04:11 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
decreasing rate 24 (txcnt=84 retrycnt=38)
Jan  6 11:04:12 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
decreasing rate 22 (txcnt=25 retrycnt=9)
Jan  6 11:04:12 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
decreasing rate 18 (txcnt=31 retrycnt=11)
Jan  6 11:04:13 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
decreasing rate 12 (txcnt=29 retrycnt=11)
Jan  6 11:04:13 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
decreasing rate 11 (txcnt=53 retrycnt=28)
Jan  6 11:04:18 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
decreasing rate 4 (txcnt=17 retrycnt=7)
Jan  6 11:04:20 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
decreasing rate 2 (txcnt=11 retrycnt=10)
Jan  6 11:06:07 d630 wpa_supplicant[413]: CTRL-EVENT-SCAN-RESULTS
Jan  6 11:09:48 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
increasing rate 4 (txcnt=11 retrycnt=0)
Jan  6 11:09:57 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
increasing rate 11 (txcnt=11 retrycnt=0)
Jan  6 11:10:30 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
increasing rate 12 (txcnt=11 retrycnt=0)
Jan  6 11:10:38 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
decreasing rate 11 (txcnt=35 retrycnt=16)
Jan  6 11:10:43 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
decreasing rate 4 (txcnt=11 retrycnt=4)
Jan  6 11:11:04 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
increasing rate 11 (txcnt=11 retrycnt=0)
Jan  6 11:11:09 d630 wpa_supplicant[413]: CTRL-EVENT-SCAN-RESULTS
Jan  6 11:11:09 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
decreasing rate 4 (txcnt=11 retrycnt=4)

The rate decreases too much to use a browser (but I can still ping
my gateway)…

Regards,

Olivier
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


FreeBSD 8.2-PRERELEASE hangs under load with live kernel

2011-01-06 Thread Lev Serebryakov
Hello, Freebsd-stable.

 I've added a torrent client (transmission) to the software on my home
 server and it has started to hang in a very unusual way: the kernel works
 but userland doesn't.

   I can ping it (and it answers). I can scroll the console with
 the scroll lock button and keys. I can break into the debugger with
 Ctrl+SysReq, and it shows that one CPU is occupied by the idle process and
 the other by the Giant taskq, but no userland processes answer: I cannot
 ssh to it, I cannot log in on the console, samba is dead, etc.

   ps in the kernel debugger shows that many of the processes are in the
 pfault state, and nothing more special.

   memtest86+ doesn't show any errors after 8 passes of tests (about
 10 hours), so RAM looks Ok.

   What should I do in kdb to understand what happens?

   The kernel config and /var/run/dmesg.boot are attached.

-- 
// Black Lion AKA Lev Serebryakov l...@freebsd.org

BLOB
Description: Binary data


dmesg.boot
Description: Binary data
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org

RE: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Chris Forgeron
You know, these days I'm not as happy with SSD's for ZIL. I may blog about some 
of the speed results I've been getting over the last 6mo-1yr that I've been 
running them with ZFS. I think people should be using hardware RAM drives. You 
can get old Gigabyte i-RAM drives with 4 gig of memory for the cost of a 60 gig 
SSD, and it will trounce the SSD for speed. 

I'd put your SSD to L2ARC (cache). 


-Original Message-
From: Damien Fleuriot [mailto:m...@my.gd] 
Sent: Thursday, January 06, 2011 5:20 AM
To: Artem Belevich
Cc: Chris Forgeron; freebsd-stable@freebsd.org
Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

You both make good points, thanks for the feedback :)

I am more concerned about data protection than performance, so I suppose raidz2 
is the best choice I have with such a small scale setup.

Now the question that remains is wether or not to use parts of the OS's ssd for 
zil, cache, or both ?

---
Fleuriot Damien

On 5 Jan 2011, at 23:12, Artem Belevich fbsdl...@src.cx wrote:

 On Wed, Jan 5, 2011 at 1:55 PM, Damien Fleuriot m...@my.gd wrote:
 Well actually...
 
 raidz2:
 - 7x 1.5 tb = 10.5tb
 - 2 parity drives
 
 raidz1:
 - 3x 1.5 tb = 4.5 tb
 - 4x 1.5 tb = 6 tb , total 10.5tb
 - 2 parity drives in split thus different raidz1 arrays
 
 So really, in both cases 2 different parity drives and same storage...
 
 In second case you get better performance, but lose some data
 protection. It's still raidz1 and you can't guarantee functionality in
 all cases of two drives failing. If two drives fail in the same vdev,
 your entire pool will be gone.  Granted, it's better than single-vdev
 raidz1, but it's *not* as good as raidz2.
 
 --Artem
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: Regression with iwn drivers (4965BGN) in 8.2-PRERELEASE ?

2011-01-06 Thread Bernhard Schmidt
On Thursday, January 06, 2011 11:23:44 Olivier Cochard-Labbé wrote:
 2011/1/6 Bernhard Schmidt bschm...@freebsd.org:
  What do you mean with 'unusable' exactly? Lots of packet loss, or just
  slow transfer rates? 'wlandebug +rate' might shed some light on this
  one.
 
 Hi, it's just very slow transfer rates.
 I didn't know wlandebug, thanks for the tips.
 Here are the result just after a boot, during pinging my gateway (few
 traffic):
 
 Jan  6 11:02:25 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
 increasing rate 48 (txcnt=11 retrycnt=0)
 Jan  6 11:02:36 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
 increasing rate 72 (txcnt=11 retrycnt=0)
 Jan  6 11:03:02 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
 increasing rate 96 (txcnt=11 retrycnt=0)
 
 Now, I start xorg and a start a browser:
 
 Jan  6 11:04:04 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
 decreasing rate 72 (txcnt=11 retrycnt=10)
 [..]
 Jan  6 11:11:09 d630 kernel: wlan0: [3a:41:c4:e3:1e:18] AMRR
 decreasing rate 4 (txcnt=11 retrycnt=4)
 
 The rate decrease too much for using a browser (but I can still ping
 my gateway)…

That indeed looks quite weird. I'll have a look into that.

Can you post 'ifconfig wlan0 list scan' output, just to see how crowded the 
band is?

-- 
Bernhard
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: FreeBSD 8.2-PRERELEASE hangs under load with live kernel

2011-01-06 Thread Kostik Belousov
On Thu, Jan 06, 2011 at 01:31:45PM +0300, Lev Serebryakov wrote:
 Hello, Freebsd-stable.
 
  I've  added  torrent  client  (transmission)  to  software on my home
  server  and it starts to hang in very unusual way: kernel works but
  userland doesn't.
 
I can ping it (and it answers). I can scroll console with
  scrolllock button and keys. I can break into debugger with
  Ctrl+SysReq and it shows, that one CPU is occupied by idle process and
  other by Giant tasq, but no userland processes answer: I can not
  ssh to it, I cannot login on console, samba is dead, etc.
 
ps in kernel debugger shows, that many of processes in pfault
  state, and noting more special.
 
memtest86+ doesn't show any errors after 8 passes of tests (about
  10 hours), so RAM looks Ok.
 
What should I do in kdb to understand what happens?
 
Kernel config and /var/run/dmesg.boot is attached.

http://www.freebsd.org/doc/en_US.ISO8859-1/books/developers-handbook/kerneldebug-deadlocks.html
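
The short version of what that page walks through, from the ddb prompt, is
roughly the following (a sketch; some commands need options such as WITNESS
compiled into the kernel):

  ps
  show pcpu
  show allpcpu
  show alllocks
  show lockedvnods
  alltrace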


pgpWvepMMaGgU.pgp
Description: PGP signature


Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-06 Thread Rick Macklem
 Rick Macklem rmack...@uoguelph.ca wrote:
 
  Sun did add a separate file locking protocol called the NLM
  or rpc.lockd if you prefer, but that protocol design was
  fundamentally flawed imho and, as such, using it is in the
  your mileage may vary category.
 
 I suppose it was not all that bad, considering that what it sought
 to accomplish is incomputable. There is simply no way for either
 the server or the client to distinguish between the other end has
 crashed and there is a temporary communication failure until the
 other end comes back up or communication is restored.
 
Yep. The blocking lock operation is also a trainwreck looking for a
place to happen, imho. (In the NLM, the client can do an RPC that says
"get a lock, waiting as long as necessary for it, and then let me know".)

 On a good day, in a completely homogeneous environment (server and
 all clients running the same OS revision and patchlevel), I trust
 lockd about as far as I can throw 10GB of 1980's SMD disk drives :)
 
Heh, heh. For those too young to have had the privilege, a 1980s SMD
drive was big and HEAVY. I just about got a hernia every time one had
to go in a 19-inch rack. You definitely didn't throw them far:-)

 Exporting /var/spool/mail read/write tends to ensure that good days
 will be rare. Been there, done that, seen the result. Never again.
 That's what IMAP is for.
 
Great post. I couldn't have said it as well, rick
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Damien Fleuriot
I see, so no dedicated ZIL device in the end ?

I could make a 15GB slice for the OS running UFS (I don't want to risk
losing the OS when manipulating ZFS, such as during upgrades), and a
25GB+ one for L2ARC, depending on the disk.

I can't afford a *dedicated* drive for the cache though, not enough room
in the machine.
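
A rough sketch of that kind of split on a single SSD, assuming a GPT scheme
and a pool named tank (device names, labels and sizes are illustrative):

  gpart create -s gpt ada1
  gpart add -t freebsd-ufs -s 15G -l ssd-os ada1
  gpart add -t freebsd-zfs -s 25G -l ssd-l2arc ada1
  newfs -U /dev/gpt/ssd-os
  zpool add tank cache gpt/ssd-l2arc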


On 1/6/11 12:26 PM, Chris Forgeron wrote:
 You know, these days I'm not as happy with SSD's for ZIL. I may blog about 
 some of the speed results I've been getting over the last 6mo-1yr that I've 
 been running them with ZFS. I think people should be using hardware RAM 
 drives. You can get old Gigabyte i-RAM drives with 4 gig of memory for the 
 cost of a 60 gig SSD, and it will trounce the SSD for speed. 
 
 I'd put your SSD to L2ARC (cache). 
 
 
 -Original Message-
 From: Damien Fleuriot [mailto:m...@my.gd] 
 Sent: Thursday, January 06, 2011 5:20 AM
 To: Artem Belevich
 Cc: Chris Forgeron; freebsd-stable@freebsd.org
 Subject: Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks
 
 You both make good points, thanks for the feedback :)
 
 I am more concerned about data protection than performance, so I suppose 
 raidz2 is the best choice I have with such a small scale setup.
 
 Now the question that remains is wether or not to use parts of the OS's ssd 
 for zil, cache, or both ?
 
 ---
 Fleuriot Damien
 
 On 5 Jan 2011, at 23:12, Artem Belevich fbsdl...@src.cx wrote:
 
 On Wed, Jan 5, 2011 at 1:55 PM, Damien Fleuriot m...@my.gd wrote:
 Well actually...

 raidz2:
 - 7x 1.5 tb = 10.5tb
 - 2 parity drives

 raidz1:
 - 3x 1.5 tb = 4.5 tb
 - 4x 1.5 tb = 6 tb , total 10.5tb
 - 2 parity drives in split thus different raidz1 arrays

 So really, in both cases 2 different parity drives and same storage...

 In second case you get better performance, but lose some data
 protection. It's still raidz1 and you can't guarantee functionality in
 all cases of two drives failing. If two drives fail in the same vdev,
 your entire pool will be gone.  Granted, it's better than single-vdev
 raidz1, but it's *not* as good as raidz2.

 --Artem
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-06 Thread Rick Macklem
 John Baldwin j...@freebsd.org wrote:
 
  ... even NFS UDP mounts maintain their own set of socket state
  to manage retries and retransmits for UDP RPCs.
 
 Not according to what I remember of the SunOS NFS documentation,
 which indicated that the driving force behind using UDP instead of
 TCP was to have the server be _completely_ stateless. (Of course
 locking is inherently stateful; they made it very clear that the
 locking protocol was considered to be an adjunct rather than part
 of the NFS protocol itself.)
 
For UDP, on the server all requests show up at socket/port 2049. They
pretty quickly discovered that retries of non-idempotent RPCs trashed
things, so the Duplicate Request Cache was invented, which is really
state that doesn't have to be recovered after a server crash.
(By Chet Juszczak at DEC, if I recall correctly, who is living on a
little island on a lake up in Maine, last I heard.)

My recollection of why Sun didn't use TCP was that they knew that
the overhead would be excessive, which wasn't completely untrue,
given the speed of an MC68020.

 It's been quite a few years since I read that, and I didn't get
 into the details, but I suppose the handle returned to a client (in
 response to a mount or open request) must have contained both a
 representation of the inode number and a unique identification of
 the filesystem (so that, in the case where server crash recovery
 included a newfs and reload from backup, the FS ID would not match
 and the client would get a stale handle response). All of the
 retry and retransmit burden had to have been managed by the client,
 for both reading and writing.
Yea, it depended on how the backup was done. To avoid stale handles,
the backup/reload had to retain the same i-nodes, including the generation
number in them. (But, then, those 1980s SMD disks never trashed the
file systems, or did they? :-)

You shouldn't get me reminiscing about the good ole days, rick
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Daniel Kalchev
For pure storage, that is, a place you send/store files, you don't really 
need the ZIL. You also need the L2ARC only if you read the same dataset 
over and over again and it is larger than the available ARC (ZFS 
cache memory). Neither will be significant for a 'backup server' 
application, because it's very unlikely to do lots of SYNC I/O (where a 
separate ZIL helps) or to serve the same files back (where the L2ARC might 
help).


You should also know that having a large L2ARC requires that you also have 
a larger ARC, because there are data pointers in the ARC that point to the 
L2ARC data. Someone would do the community a service by publishing some 
reasonable estimates of the memory needs, so that people do not end up 
with large but unusable L2ARC setups.


It seems that the upcoming v28 ZFS will help greatly with the ZIL in the 
main pool..


You need to experiment with the L2ARC (this is safe with current v14 and 
v15 pools) to see if your usage will benefit from its use. 
Experimenting with the ZIL currently requires that you recreate the pool. 
With the experimental v28 code things are much easier.
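
Trying the cache out really is low-risk; a sketch, with pool and device
names illustrative:

  zpool add tank cache ada1p2    # add the SSD partition as L2ARC
  zpool iostat -v tank 5         # watch whether the cache device gets hits
  zpool remove tank ada1p2       # cache devices can be removed again at any time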


On 06.01.11 15:11, Damien Fleuriot wrote:

I see, so no dedicated ZIL device in the end ?

I could make a 15gb slice for the OS running UFS (I don't wanna risk
losing the OS when manipulating ZFS, such as during upgrades), and a
25gb+ for L2ARC, depending on the disk.

I can't afford a *dedicated* drive for the cache though, not enough room
in the machine.


___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-06 Thread John Baldwin
On Thursday, January 06, 2011 3:08:04 am per...@pluto.rain.com wrote:
 John Baldwin j...@freebsd.org wrote:
 
  ... even NFS UDP mounts maintain their own set of socket state
  to manage retries and retransmits for UDP RPCs.
 
 Not according to what I remember of the SunOS NFS documentation,
 which indicated that the driving force behind using UDP instead of
 TCP was to have the server be _completely_ stateless.  (Of course
 locking is inherently stateful; they made it very clear that the
 locking protocol was considered to be an adjunct rather than part
 of the NFS protocol itself.)

No extra NFS state is tied to a TCP mount aside from maintaining TCP state 
(i.e. congestion window for the socket, etc.).  A TCP mount does not have a 
different amount of NFS state than a UDP mount.  As Rick noted, many
servers do maintain a DRC, but that applies to both UDP and TCP mounts.

 It's been quite a few years since I read that, and I didn't get
 into the details, but I suppose the handle returned to a client (in
 response to a mount or open request) must have contained both a
 representation of the inode number and a unique identification of
 the filesystem (so that, in the case where server crash recovery
 included a newfs and reload from backup, the FS ID would not match
 and the client would get a stale handle response).  All of the
 retry and retransmit burden had to have been managed by the client,
 for both reading and writing.

Yes, this is true for both UDP and TCP (if you exclude TCP's retransmit for 
missed packets in server replies on a TCP mount).  Even with TCP a client can
still retransmit requests for which it does not receive a reply in case the
connection dies due to a network problem, server reboot, etc.

-- 
John Baldwin
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Damien Fleuriot
On 6 January 2011 14:45, Daniel Kalchev dan...@digsys.bg wrote:
 For pure storage, that is a place you send/store files, you don't really
 need the ZIL. You also need the L2ARC only if you read over and over again
 the same dataset, which is larger than the available ARC (ZFS cache memory).
 Both will not be significant for 'backup server' application, because it's
 very unlikely to do lots of SYNC I/O (where separate ZIL helps), or serve
 the same files back (where the L2ARC might help).

 You should also know that having large L2ARC requires that you also have
 larger ARC, because there are data pointers in the ARC that point to the
 L2ARC data. Someone will do good to the community to publish some reasonable
 estimates of the memory needs, so that people do not end up with large but
 unusable L2ARC setups.

 It seems that the upcoming v28 ZFS will help greatly with the ZIL in the
 main pool..

 You need to experiment with the L2ARC (this is safe with current v14 and v15
 pools) to see if your usage will see benefit from it's use. Experimenting
 with ZIL currently requires that you recreate the pool. With the
 experimental v28 code things are much easier.


I see, thanks for the pointers.

The thing is, this will be a home storage (samba share, media server)
box, but I'd also like to experiment a bit, and it seems like a waste
not to try at least the cache, seeing as I'll have an SSD at hand.

If things go well, I may be able to recommend ZFS for production
storage servers at work and I'd really like to know how the cache and
ZIL work at that time ;)
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


NFS - DNS fail stops boot in mountlate

2011-01-06 Thread grarpamp
RELENG_8.

### setup
mount -d -a -l -v -t nfs
exec: mount_nfs -o ro -o tcp -o bg -o nolockd -o intr 192.168.0.10:/tmp /mnt
exec: mount_nfs -o ro -o tcp -o bg -o nolockd -o intr foo:/tmp /mnt

192.168.0.10 has been unplugged, no arp entry.
Host foo not found: 3(NXDOMAIN)

### result
mount -v 192.168.0.10:/tmp ; echo $?
[tcp] 192.168.0.10:/tmp: RPCPROG_NFS: RPC: Port mapper failure - RPC: Timed out
mount_nfs: Cannot immediately mount 192.168.0.10:/tmp, backgrounding
/dev/ad0s1a on / (ufs, local, read-only, fsid snip1)
0

[this is ok.]


mount -v foo:/tmp ; echo $?
mount_nfs: foo: hostname nor servname provided, or not known
/dev/ad0s1a on / (ufs, local, read-only, fsid snip1)
1

[drops to shell, which is obviously bad behaviour.]
[mount_nfs should background as in the former.]
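
For reference, the fstab entries behind that dry run would look roughly like
the following (the late keyword is assumed here because the subject mentions
mountlate; paths and options otherwise as shown above):

  192.168.0.10:/tmp  /mnt  nfs  ro,tcp,bg,nolockd,intr,late  0  0
  foo:/tmp           /mnt  nfs  ro,tcp,bg,nolockd,intr,late  0  0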
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-06 Thread Rick Macklem
 
  Not according to what I remember of the SunOS NFS documentation,
  which indicated that the driving force behind using UDP instead of
  TCP was to have the server be _completely_ stateless. (Of course
  locking is inherently stateful; they made it very clear that the
  locking protocol was considered to be an adjunct rather than part
  of the NFS protocol itself.)
 
When I said I recalled that they didn't do TCP because of excessive
overhead, I forgot to mention that my recollection could be wrong.
Also, I suspect you are correct w.r.t. the above statement. (i.e. Sun's
official position vs. something I heard.)

Anyhow, apologies if I gave the impression that I was correcting your
statement. My intent was just to throw out another statement that I
vaguely recalled someone at Sun stating.

rick
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: NFS - DNS fail stops boot in mountlate

2011-01-06 Thread Doug Barton
It's generally better to post a description of your problem, rather than 
copying and pasting command-line examples. What makes perfect sense to you 
may (or even probably does) not make sense to others. :)



Doug

--

Nothin' ever doesn't change, but nothin' changes much.
-- OK Go

Breadth of IT experience, and depth of knowledge in the DNS.
Yours for the right price.  :)  http://SupersetSolutions.com/

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: NFSv4 - how to set up at FreeBSD 8.1 ?

2011-01-06 Thread Jean-Yves Avenard
On 7 January 2011 08:16, Rick Macklem rmack...@uoguelph.ca wrote:

 When I said I recalled that they didn't do TCP because of excessive
 overhead, I forgot to mention that my recollection could be wrong.
 Also, I suspect you are correct w.r.t. the above statement. (ie. Sun's
 official position vs something I heard.)

 Anyhow, appologies if I gave the impression that I was correcting your
 statement. My intent was just to throw out another statement that I
 vaguely recalled someone an Sun stating.

After hitting yet another serious bug in 8.2, I reverted back to 8.1.

Interestingly, it now complains about having V4: / in /etc/exports.

Isn't NFSv4 available in 8.1?
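
For reference, the line in question is the NFSv4 root declaration in
exports(5), something along the lines of (network values illustrative):

  V4: / -sec=sys -network 192.168.0.0 -mask 255.255.255.0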
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Jean-Yves Avenard
Hi

On 7 January 2011 00:45, Daniel Kalchev dan...@digsys.bg wrote:
 For pure storage, that is a place you send/store files, you don't really
 need the ZIL. You also need the L2ARC only if you read over and over again
 the same dataset, which is larger than the available ARC (ZFS cache memory).
 Both will not be significant for 'backup server' application, because it's
 very unlikely to do lots of SYNC I/O (where separate ZIL helps), or serve
 the same files back (where the L2ARC might help).

 You should also know that having large L2ARC requires that you also have
 larger ARC, because there are data pointers in the ARC that point to the
 L2ARC data. Someone will do good to the community to publish some reasonable
 estimates of the memory needs, so that people do not end up with large but
 unusable L2ARC setups.

 It seems that the upcoming v28 ZFS will help greatly with the ZIL in the
 main pool..

Yes, it made a *huge* difference for me. It went from "way too slow to
comprehend what's going on" to "still slow, but I can live with it".

And I found no significant difference between ZIL on the main pool and
on a separate SSD.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Jean-Yves Avenard
On 6 January 2011 22:26, Chris Forgeron cforge...@acsi.ca wrote:
 You know, these days I'm not as happy with SSD's for ZIL. I may blog about 
 some of the speed results I've been getting over the last 6mo-1yr that I've 
 been running them with ZFS. I think people should be using hardware RAM 
 drives. You can get old Gigabyte i-RAM drives with 4 gig of memory for the 
 cost of a 60 gig SSD, and it will trounce the SSD for speed.

 I'd put your SSD to L2ARC (cache).

Where do you find those, though?

I've looked and looked, and all the references I could find were to that
battery-powered RAM card that Sun used in their test setup, but it's
not publicly available.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Jeremy Chadwick
On Fri, Jan 07, 2011 at 12:29:17PM +1100, Jean-Yves Avenard wrote:
 On 6 January 2011 22:26, Chris Forgeron cforge...@acsi.ca wrote:
  You know, these days I'm not as happy with SSD's for ZIL. I may blog about 
  some of the speed results I've been getting over the last 6mo-1yr that I've 
  been running them with ZFS. I think people should be using hardware RAM 
  drives. You can get old Gigabyte i-RAM drives with 4 gig of memory for the 
  cost of a 60 gig SSD, and it will trounce the SSD for speed.
 
  I'd put your SSD to L2ARC (cache).
 
 Where do you find those though.
 
 I've looked and looked and all references I could find was that
 battery-powered RAM card that Sun used in their test setup, but it's
 not publicly available..

DDRdrive:
  http://www.ddrdrive.com/
  http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/

ACard ANS-9010:
  http://techreport.com/articles.x/16255

GC-RAMDISK (i-RAM) products:
  http://us.test.giga-byte.com/Products/Storage/Default.aspx

Be aware these products are absurdly expensive for what they offer (the
cost isn't justified), not to mention in some cases a bottleneck is
imposed by use of a SATA-150 interface.  I'm also not sure if all of
them offer BBU capability.

In some respects you might be better off just buying more RAM for your
system and making md(4) memory disks that are used by L2ARC (cache).
I've mentioned this in the past (specifically back in the days when
the ARC piece of ZFS on FreeBSD was causing havoc), and asked if one
could work around the complexity by using L2ARC with md(4) drives
instead.

I tried this, but couldn't get rc.d/mdconfig2 to do what I wanted on
startup WRT the aforementioned.
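
Doing it by hand is simple enough (a sketch; size, md unit and pool name are
illustrative); the awkward part is getting rc to recreate and re-add the
device on every boot:

  mdconfig -a -t swap -s 4g -u 1   # creates a swap-backed memory disk, /dev/md1
  zpool add tank cache md1         # use it as L2ARC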

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.   PGP 4BD6C0CB |

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: NFS - DNS fail stops boot in mountlate

2011-01-06 Thread grarpamp
So what was unclear?

mount_nfs emits a nonzero exit status upon failing to look
up an FQDN, causing mountlate to trigger a dump to a shell
on boot during rc processing. That's a *showstopper*. The
right thing to do is to hack mount_nfs to punt to background
mounting in this case with an appropriate exit status.

Personally I'd distinguish mount_nfs exit codes between:
0 - mounted
1 - backgrounded, for any reason
2 - none of the above
and adjust the rc's to deal with it accordingly.
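
In rc terms, the handling could then look something like this sketch (the
exit codes are the ones proposed above, not what mount_nfs currently returns):

  mount_nfs -o ro -o tcp -o bg foo:/tmp /mnt
  case $? in
  0) ;;                 # mounted right away
  1) ;;                 # backgrounded, for any reason; let the boot continue
  *) echo "NFS mount failed"; exit 1 ;;   # none of the above: a genuine error
  esac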

Words are subject to interpretation and take time. Though
perhaps masked by brevity, I believe all the above elements
were in the prior concise post. Thanks everybody :)
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Gary Palmer
On Thu, Jan 06, 2011 at 05:42:49PM -0800, Jeremy Chadwick wrote:
 On Fri, Jan 07, 2011 at 12:29:17PM +1100, Jean-Yves Avenard wrote:
  On 6 January 2011 22:26, Chris Forgeron cforge...@acsi.ca wrote:
   You know, these days I'm not as happy with SSD's for ZIL. I may blog 
   about some of the speed results I've been getting over the last 6mo-1yr 
   that I've been running them with ZFS. I think people should be using 
   hardware RAM drives. You can get old Gigabyte i-RAM drives with 4 gig of 
   memory for the cost of a 60 gig SSD, and it will trounce the SSD for 
   speed.
  
   I'd put your SSD to L2ARC (cache).
  
  Where do you find those though.
  
  I've looked and looked and all references I could find was that
  battery-powered RAM card that Sun used in their test setup, but it's
  not publicly available..
 
 DDRdrive:
   http://www.ddrdrive.com/
   http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/
 
 ACard ANS-9010:
   http://techreport.com/articles.x/16255

There is also

https://www.hyperossystems.co.uk/07042003/hardware.htm

which I believe is a rebadged ACard drive.  They should be SATA-300, but
the test results I saw were not that impressive to be honest.  I think
whatever FPGA they use for the SATA interface and DRAM controller is
either underpowered or the gate layout needs work.

Regards,

Gary
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: NFS - DNS fail stops boot in mountlate

2011-01-06 Thread Doug Barton

On 01/06/2011 18:19, grarpamp wrote:

So what was unclear?


I thought I probably understood your situation, but I wanted to be sure. 
Not to mention the value of the more general point.  :)



mount_nfs emits a nonzero exit status upon failing to look
up an FQDN causing mountlate to trigger a dump to shell
on boot during rc processing. That's a *showstopper*.


The canonical answer to this is to either mount them by IP, or to put 
the appropriate name in /etc/hosts. Depending on DNS for NFS mounts is 
not recommended.
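
i.e. either 192.168.0.10:/tmp in fstab, or an /etc/hosts entry along the
lines of (address and names illustrative):

  192.168.0.10    foo foo.example.org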



hth,

Doug

--

Nothin' ever doesn't change, but nothin' changes much.
-- OK Go

Breadth of IT experience, and depth of knowledge in the DNS.
Yours for the right price.  :)  http://SupersetSolutions.com/

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: NFS - DNS fail stops boot in mountlate

2011-01-06 Thread Jeremy Chadwick
On Thu, Jan 06, 2011 at 09:19:06PM -0500, grarpamp wrote:
 So what was unclear?
 
 mount_nfs emits a nonzero exit status upon failing to look
 up an FQDN causing mountlate to trigger a dump to shell
 on boot during rc processing. That's a *showstopper*. The
 right thing to do is to hack mount_nfs to punt to background
 mounting in this case with an appropriate exit status.
 
 Personally I'd distinguish mount_nfs exit codes between:
 0 - mounted
 1 - backgrounded, for any reason
 2 - none of the above
 and adjust the rc's to deal with it accordingly.
 
 Words are subject to interpretation and take time. Though
 perhaps masked by brevity, I believe all the above elements
 were in the prior concise post. Thanks everybody :)

So basically the problem is that the bg option in mount_nfs only
applies to "network unreachable" conditions and not "DNS resolution
failed" conditions.

Initially I was going to refute the above request until I looked closely
at the mount_nfs(8) man page which has the following clauses:

For non-critical file systems, the bg and retrycnt options
provide mechanisms to prevent the boot process from hanging
if the server is unavailable.

[...describing the bg option...]

Useful for fstab(5), where the file system mount is not
critical to multiuser operation.

I read these statements to mean if -o bg is used, the system should not
hang/stall/fail during the boot process.  Dumping to /bin/sh on boot as
a result of a DNS lookup failure violates those statements, IMHO.

I would agree that DNS resolution should be part of the bg/retry feature
of mount_nfs.  How/whether this is feasible to implement is
unknown to me.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.   PGP 4BD6C0CB |

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Jean-Yves Avenard
On 7 January 2011 12:42, Jeremy Chadwick free...@jdc.parodius.com wrote:

 DDRdrive:
  http://www.ddrdrive.com/
  http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/

 ACard ANS-9010:
  http://techreport.com/articles.x/16255

 GC-RAMDISK (i-RAM) products:
  http://us.test.giga-byte.com/Products/Storage/Default.aspx

 Be aware these products are absurdly expensive for what they offer (the
 cost isn't justified), not to mention in some cases a bottleneck is
 imposed by use of a SATA-150 interface.  I'm also not sure if all of
 them offer BBU capability.


Why not one of those SSD PCIe cards that give over 500MB/s read and write?

And they aren't too expensive either...
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: NFS - DNS fail stops boot in mountlate

2011-01-06 Thread grarpamp
 understood

Yep, I gathered that. Is cool :)

 The canonical answer to this is to either mount them by IP, or
 to put the appropriate name in /etc/hosts. Depending on DNS for
 NFS mounts is not recommended.

That is the only available answer at the moment, but perhaps not
the best. And it depends on the use case. Scalable systems require the
flexibility of name resolution. Secure environments (which you may be
alluding to) may already make use of secure/private DNS, hardcoding,
keying, etc. The DNS server may also be the NFS server, for which
backgrounding could be appropriate. Forced hardcoding of IPs may not be
ideal, not to mention in split-horizon networks, etc. In this DNS-down
case, enhancing mount_nfs would allow the admin three choices:
 hosts+fqdn:, ip:, fqdn:
However, not enhancing it only allows two:
 hosts+fqdn:, ip:

FreeBSD is flexible (or should be) :)

 I read these statements to mean if -o bg is used, the system
 should not hang/stall/fail during the boot process.  Dumping to
 /bin/sh on boot as a result of a DNS lookup failure violates those
 statements, IMHO.

Yep, that's what I was thinking. If the admin wants to play DNS
and network games, sure, let them. But at least come up far enough to
let them do it :)

I try not to argue use cases as someone will always have a need for
the bizarre and it just wastes keystrokes :)

Also, afaik, once mounted, the kernel uses only the resolved IP
address thereafter. So that is also a 'safe', unchanged, semantic.

Not sure about its unmount, df and mount -v semantics, hopefully
something sane. Haven't tried downing NFS or changing DNS lately
to see. It's probably ok though.
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Jeremy Chadwick
On Fri, Jan 07, 2011 at 01:40:52PM +1100, Jean-Yves Avenard wrote:
 On 7 January 2011 12:42, Jeremy Chadwick free...@jdc.parodius.com wrote:
 
  DDRdrive:
   http://www.ddrdrive.com/
   http://www.engadget.com/2009/05/05/ddrdrives-ram-based-ssd-is-snappy-costly/
 
  ACard ANS-9010:
   http://techreport.com/articles.x/16255
 
  GC-RAMDISK (i-RAM) products:
   http://us.test.giga-byte.com/Products/Storage/Default.aspx
 
  Be aware these products are absurdly expensive for what they offer (the
  cost isn't justified), not to mention in some cases a bottleneck is
  imposed by use of a SATA-150 interface.  I'm also not sure if all of
  them offer BBU capability.
 
 Why not one of those SSD PCIe card that gives over 500MB/s read and write.
 
 And they aren't too expensive either...

You need to be careful when you use the term SSD in this context.
There are multiple types of SSDs with regards to what we're discussing;
some are flash-based, some are RAM-based.

Below are my opinions -- and this is getting WAY off-topic.  I'm
starting to think you just need to pull yourself up by the bootstraps
and purchase something that suits *your* needs.  You can literally spend
weeks, months, years asking people what should I buy? or what should
I do? or how do I optimise this? and never actually get anywhere.
Sorry if it sounds harsh, but my advice would be to take the plunge and
buy whatever suits *your* needs and meets your finances.


HyperDrive 5M (DDR2-based; US$299)

1) Product documentation claims that the drive has built-in ECC so you
can use non-ECC DDR2 DIMMs -- this doesn't make sense to me from a
technical perspective.  How is this device doing ECC on a per-DIMM
basis?  And why can't I just buy ECC DIMMs and use those instead (they
cost, from Crucial, $1 more than non-ECC)?

2) Monitoring capability -- how?  Does it support SMART?  If so, are the
vendor-specific attributes documented in full?  What if a single DIMM
goes bad?  How would you know which DIMM it is?  Is there even an LED
indicator of when there's a hard failure on a DIMM?  What about checking
its status remotely?

3) Use of DDR2; DDR2 right now is significantly more expensive than
DDR3, and we already know DDR2 is on its way out.

4) Claims 175MB/s read, 145MB/s write; much slower than 500MB/s, so
maybe you're talking about a different product?  I don't know.

5) Uses 2x SATA ports; why?  Probably because it uses SATA-150 ports,
and thus 175MB/s would exceed that.  Why not just go with SATA-300,
or even SATA-600 these days?

6) Form factor requires a 5.25 bay; not effective for a 1U box.


DDRdrive (DDR2-based; US$1995)

1) Absurdly expensive for a product of this nature, even more so
because the price doesn't include the RAM.

2) Limited to 4GB maximum.

3) Absolutely no mention if the product supports ECC RAM or not.

4) PCIe x1 only (limited to 250MB/sec tops).

5) Not guaranteed to fit in all chassis (top DIMM exceeds height of
   the card itself).


ACard ANS-9010 (DDR2-based)
=
Looks like it's either identical to the HyperDrive 5, or maybe the
HyperDrive is a copy of this.  Either way...


GC-RAMDISK

I'm not even going to bother with a review.  I can't imagine anyone
buying this thing.  It's part of the l33td00d demographic.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.   PGP 4BD6C0CB |

___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: ZFS - moving from a zraid1 to zraid2 pool with 1.5tb disks

2011-01-06 Thread Gary Palmer
On Thu, Jan 06, 2011 at 08:20:00PM -0800, Jeremy Chadwick wrote:
 HyperDrive 5M (DDR2-based; US$299)
 
 1) Product documentation claims that the drive has built-in ECC so you
 can use non-ECC DDR2 DIMMs -- this doesn't make sense to me from a
 technical perspective.  How is this device doing ECC on a per-DIMM
 basis?  And why can't I just buy ECC DIMMs and use those instead (they
 cost, from Crucial, $1 more than non-ECC)?
 
 2) Monitoring capability -- how?  Does it support SMART?  If so, are the
 vendor-specific attributes documented in full?  What if a single DIMM
 goes bad?  How would you know which DIMM it is?  Is there even an LED
 indicator of when there's a hard failure on a DIMM?  What about checking
 its status remotely?
 
 3) Use of DDR2; DDR2 right now is significantly more expensive then
 DDR3, and we already know DDR2 is on its way out.
 
 4) Claims 175MB/s read, 145MB/s write; much slower than 500MB/s, so
 maybe you're talking about a different product?  I don't know.
 
 5) Uses 2x SATA ports; why?  Probably because it uses SATA-150 ports,
 and thus 175MB/s would exceed that.  Why not just go with SATA-300,
 or even SATA-600 these days?

FAQ 2:

Q Why does the HyperDrive5 have two SATA ports?

A So that you can split one 8 DIMM slot device into two 4 DIMM slot devices and 
run them both in RAID0 using a RAID controller for even faster performance.

It claims SATA-300 (or SATA2 in the incorrect terminology from their website)

Note, I have no relation to hyperos systems and don't use their gear.  I did
look at it for a while for journal/log type applications but to me the
price/performance wasn't there.

As it relates to the ACard, from memory the HyperDrive4 was ditched and
then HyperOS came out with the HyperDrive 5 which looks remarkably similar
to the ACard product. I was told by someone (or read somewhere) that
HyperOS outsourced it to or OEMd it from some Asian country, which
would fit if ACard was the manufacturer as they're in Taiwan.

Regards,

Gary
___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org