You have not described your requirements (low-power ??, low-cost ??).
But I'll contribute some pointers anyway! :)
Well, for a home NAS I'm looking at noise as a big factor. Also, for a 24x7 box,
power consumption matters - that's why the northbridge is putting me off slightly.
So far the other solutions
On Sun, Oct 26, 2008 at 12:47 AM, Peter Bridge [EMAIL PROTECTED] wrote:
Well, for a home NAS I'm looking at noise as a big factor. Also, for a 24x7
box, power consumption matters - that's why the northbridge is putting me off slightly.
That's why I built a full-sized tower using a Lian-Li case with
Same case and same idea :) I have 2 dampened drives already installed from a
previous project. Another 2 I pulled out to install into a qnap 209. Ideally
I'd return the disks and replace the qnap with this new single ZFS NAS,
although I'm quite fond of the qnap bt client, and can't use all
1) When I notice an error in a file that I've copied from a ZFS disk I
want to know whether that error is also in the original file on my ZFS
disk or if it's only in the copy.
This was already addressed but let me do so slightly differently: One
of the major points of ZFS checksumming is that,
A slight nit.
Using cat(1) to read the file to /dev/null will not actually cause the data to
be read, thanks to the magic that is mmap(). If you use dd(1) to read the file,
then yes, you will either get the data and thus know its blocks match their
checksums, or dd will give you an error if
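If you want to force every block of a file through the checksum machinery, a
minimal sketch (assuming a pool called 'tank' and an arbitrary file path) is:

  # dd if=/tank/data/somefile of=/dev/null bs=128k
  # zpool status -v tank

dd reads each block via ZFS, so a checksum mismatch either shows up as an I/O
error from dd or is counted (and the affected file listed) by zpool status -v.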
On Sun, Oct 26, 2008 at 5:31 AM, Peter Baumgartner [EMAIL PROTECTED] wrote:
I have a 7x150GB drive (+1 spare) raidz pool that I need to expand.
There are 6 open drive bays, so I bought 6 300GB drives and went to
add them as a raidz vdev to the existing zpool, but I didn't realize
the raidz
Is it or isn't it possible to boot off two mirrored ZFS disks and if
yes, can this be done in the upcoming solaris 10 10/08 too?
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce snv99 ++
A nit on the nit...
cat does not use mmap for files <= 32K in size. For those files
it's a simple read() into a buffer and write() it out.
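If anyone wants to verify this for themselves, one way (the file names here are
just examples) is to trace the system calls cat makes:

  # truss -t mmap,read cat bigfile > /dev/null
  # truss -t mmap,read cat smallfile > /dev/null

For the large file you should see the input mapped with mmap(); for a file of
32K or less you should only see read() calls on it.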
Jim
---
Chris Gerhard wrote:
A slight nit.
Using cat(1) to read the file to /dev/null will not actually cause the data
to be read thanks to the magic
On Sun, Oct 26, 2008 at 4:02 AM, Mattias Pantzare [EMAIL PROTECTED] wrote:
On Sun, Oct 26, 2008 at 5:31 AM, Peter Baumgartner [EMAIL PROTECTED] wrote:
I have a 7x150GB drive (+1 spare) raidz pool that I need to expand.
There are 6 open drive bays, so I bought 6 300GB drives and went to
add
On Sat, Oct 25, 2008 at 06:50:46PM -0700, Nigel Smith wrote:
Hi Matt
What chipset is your PCI network card?
(obviously, it's not Intel, but what is it?)
Do you know which driver the card is using?
I believe it's some sort of Realtek (8139 probably). It's coming up as rtls0
You say '..The
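A quick way to double-check the binding, in case it helps (assuming the
instance really is rtls0):

  # grep rtls /etc/driver_aliases
  # prtconf -D | grep -i rtl

The first shows which PCI IDs are aliased to the rtls driver, the second shows
whether the running kernel actually attached it to your card.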
On Sun, Oct 26, 2008 at 3:00 PM, Peter Baumgartner [EMAIL PROTECTED] wrote:
On Sun, Oct 26, 2008 at 4:02 AM, Mattias Pantzare [EMAIL PROTECTED] wrote:
On Sun, Oct 26, 2008 at 5:31 AM, Peter Baumgartner [EMAIL PROTECTED] wrote:
I have a 7x150GB drive (+1 spare) raidz pool that I need to expand.
On Sun, Oct 26, 2008 at 2:00 PM, Peter Baumgartner [EMAIL PROTECTED] wrote:
They do not have to have the same number of drives; you can even mix
raidz and plain
disks. That is more of a recommendation. Add -f to the command.
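As a sketch (the pool and disk names are placeholders), adding the six new
drives as a second raidz vdev would look like:

  # zpool add -f tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

Without -f, zpool refuses because the new raidz has a different width than the
existing 7-disk one; -f simply overrides that mismatched-replication check.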
What is the risk of creating a pool consisting of two raidz vdevs
Hello,
I've looked quickly through the archives and haven't found mention of
this issue. I'm running SXCE (snv_99), which I believe uses zfs version
13. I had an existing zpool:
Sorry for the first incomplete send, stupid Ctrl-Enter. :-)
Hello,
I've looked quickly through the archives and haven't found mention of
this issue. I'm running SXCE (snv_99), which uses zfs version 13. I
had an existing zpool:
I've done some more research, but would still greatly appreciate someone
helping me understand this.
It seems that only writes to the home directory of the person logged in at the
console suffer from degraded performance. If I write to a subdirectory
beneath my home, or to any other
William Bauer wrote:
I've done some more research, but would still greatly appreciate someone
helping me understand this.
It seems that only writes to the home directory of the person logged in at the
console suffer from degraded performance. If I write to a subdirectory
beneath my home,
I just built a home server that pulls 62 watts from the plug at idle for
~$700. I had some of the parts lying around, but even if you bought
everything at Fry's you should be able to set yourself up for under $1K for
the next 3-5 years.
Seasonic 80 plus 300 watt power supply
Intel DP35DP motherboard
If that were the case, why would it matter if I was logged into the console,
and why would subdirectories of my home exhibit better write performance than
the top level home directory? A write to /export/home/username is slower than
to /export/home/username/blah, but ONLY if that user is
On Sun, 26 Oct 2008, Peter Baumgartner wrote:
What is the risk of creating a pool consisting of two raidz vdevs that
don't have the same number of disks?
Virtually no risk. The only matter of concern would be if the vdevs
have substantially different I/O performance and latencies since ZFS's
dick hoogendijk wrote:
Is it or isn't it possible to boot off two mirrored ZFS disks and if
yes, can this be done in the upcoming solaris 10 10/08 too?
Yes. Yes. For details, please consult the ZFS Administration Guide.
http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
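As a rough sketch of what the guide describes (the disk names are placeholders,
and root-pool slices need SMI labels), attaching a mirror to an existing ZFS
root pool on x86 and making the new disk bootable is roughly:

  # zpool attach rpool c0t0d0s0 c0t1d0s0
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

On SPARC you would run installboot instead of installgrub. Let the resilver
finish (watch zpool status rpool) before trusting the second disk.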
--
CR 6574286 removing a slog doesn't work
http://bugs.opensolaris.org/view_bug.do?bug_id=6574286
-- richard
Ethan Erchinger wrote:
Sorry for the first incomplete send, stupid Ctrl-Enter. :-)
Hello,
I've looked quickly through the archives and haven't
For those of you who want to build a NAS, this is mandatory reading, I think.
Read all comments too.
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
I use a P45 mobo, Intel Q9450, ATI 4850 and 4 GB RAM. AOC SATA card with 8 SATA
slots. 4 Samsung 500GB drives. Works excellently in a
On Sun, 26 Oct 2008, William Bauer wrote:
This has proven true on every OpenSolaris system I've tried--all of
which are using ZFS. So what is it about logging into the console
that slows write performance to ONLY the top level home directory of
the username on the same console?
Recently
If that were the case, why would it matter if I was logged into the console,
and why would subdirectories of my home exhibit better write performance
than the top level home directory? A write to /export/home/username is
slower than to /export/home/username/blah, but ONLY if that user is
Folks,
I have a zpool with a raidz2 configuration which I've been switching between
two machines - an old one with a hardware problem and a new one, which
doesn't have hardware issues, but has a different configuration. I've
been trying to import the pool on the new machine, so I can back up
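(For anyone who wants the detail, the import commands involved are roughly the
following, with the pool name as a placeholder:)

  # zpool import
  (lists pools visible on the attached disks but not yet imported)
  # zpool import -f tank
  (-f forces the import if the pool was not cleanly exported on the old machine)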
Ok on the answers to all my questions.
There's nothing that really stands out as being obviously wrong.
Just out of interest, what build of OpenSolaris are you using?
One thing you could try on the Ethernet capture file is to set
the WireShark 'Time' column like this:
View > Time Display Format
I asked Tano to use the 'snoop' command to capture the Ethernet
packets to a file, while he attempted VMware's 'VMotion'.
# snoop -d {device} -o {filename} tcp port 3260
This file was made available to me on Tano's web server.
The file size was nearly 85 Mbytes, capturing over 100,000 packets.
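For anyone following along, the capture can also be inspected with snoop
itself (the file name is just an example):

  # snoop -i vmotion.cap -t d tcp port 3260 | more

-i reads from a capture file instead of a live interface, and -t d prints
delta timestamps between packets, which makes any stalls in the iSCSI
conversation easy to spot.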
Nigel Smith wrote:
Ok on the answers to all my questions.
There's nothing that really stands out as being obviously wrong.
Just out of interest, what build of OpenSolaris are you using?
One thing you could try on the Ethernet capture file, is to set
the WireShark 'Time' column like this:
Nigel Smith wrote:
Ok on the answers to all my questions.
There's nothing that really stands out as being obviously wrong.
Just out of interest, what build of OpenSolaris are you using?
Damn, forgot to add that. I'm running SXCE snv_97.
Thanks
Matt
This smells of name resolution delays somewhere. Do you have
a shell prompt that gets some host name or user name from
name services? Is your /home directory owned by a non-existing
user or group? Do you accidentally have something enabled
in /etc/nsswitch.conf that does not exist
This sounds plausible, I suppose. Being unfamiliar with this tracker daemon,
I can blindly accept it as a maybe!
For clarity, here's how you can reproduce what I'm asking about:
This is for local file systems on build 86 and not about NFS or
any remote mounts. You can repeat these 100 times and always get
the same result, whether you reboot between trials or leave the
system running.
1. Log into the
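A simple way to see the effect (the subdirectory name and sizes are arbitrary)
is to time the same write in both places:

  $ mkdir -p $HOME/blah
  $ time dd if=/dev/zero of=$HOME/ddtest.tmp bs=1024k count=100
  $ time dd if=/dev/zero of=$HOME/blah/ddtest.tmp bs=1024k count=100
  $ rm $HOME/ddtest.tmp $HOME/blah/ddtest.tmp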
Terry Heatlie wrote:
Folks,
I have a zpool with a raidz2 configuration which I've been switching
between two machines - an old one with a hardware problem and a new
one, which doesn't have hardware issues, but has a different
configuration. I've been trying to import the pool on the
I cannot recreate this on b101. There is no significant difference between
the two on my system.
-- richard
William Bauer wrote:
For clarity, here's how you can reproduce what I'm asking about:
This is for local file systems on build 86 and not about NFS or
any remote mounts. You can repeat
I cannot recreate this on b101. There is no significant difference between
the two on my system.
That's encouraging... unless no one can reproduce it on 86 either, in which case I'm
forgetting something. I've done this a dozen times on several systems, so maybe ZFS
performance has been improved.
What
Bingo! I just updated a system from 86 to 99 and the problem is gone. Even
better, it was a VB guest, and the ZFS performance on the guest increased 5x in
this test, as I mentioned earlier. Granted, a VB guest may not be the best
test and it only applies to top level home directories, but it
After a zpool upgrade, this simple test's write speed jumped up yet another
20%. Looks like ZFS is getting better. As one would hope and expect.
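For reference, the upgrade itself is just (pool name is a placeholder):

  # zpool upgrade -v
  (lists the on-disk versions this build supports)
  # zpool upgrade tank

Note that once upgraded, the pool can no longer be imported on builds that only
understand the older on-disk version.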