Good news - I got snv_98 up without a hitch. So far, so good.
Onboard video works great (well, console. Haven't used X11)
Top NIC works great (e1000g) - haven't tried the second NIC
Did not try the onboard SATA
Two Supermicro AOC-SAT2-MV8 PCI-X's working well
Here are the specifics:
- LIAN LI
Ciao, I have a Thumper with OpenSolaris (snv_91) and 48 disks.
I would like to try a new brand of HD by replacing a spare disk with a new
one and building a zfs pool on it.
Unfortunately the official utility to map a disk to the physical position
inside the thumper (hd, in /opt/SUNWhd) is not
On Wed, 15 Oct 2008, Gray Carper wrote:
be good to set different recordsize parameters for each one. Do you have any
suggestions on good starting sizes for each? I'd imagine filesharing might
benefit from a relatively small record size (64K?), image-based backup
targets might like a pretty
I just stumbled upon this thread somehow and thought I'd share my zfs over
iscsi experience.
We recently abandoned a similar configuration with several pairs of x4500s
exporting zvols as iscsi targets and mirroring them for high availability
with T5220s.
Initially, our performance was also
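For context, a minimal sketch of exporting a zvol as an iSCSI target on builds of
that era, using the legacy shareiscsi property rather than COMSTAR (the pool and
volume names here are made up):

# zfs create -s -V 100g tank/iscsi/vol0     (sparse 100 GB volume)
# zfs set shareiscsi=on tank/iscsi/vol0     (old iscsitgt-based sharing)
# iscsitadm list target                     (confirm the target exists)

The initiator side then sees that LUN as an ordinary disk that can be put into a
pool on the head node.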
I'm using 2008-05-07 (latest stable), am I right in assuming that one is ok?
Date: Wed, 15 Oct 2008 13:52:42 +0200
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Improving zfs send performance
The original disk failure was very explicit. High Read Errors and errors
inside /var/adm/messages.
When I replaced the disk, however, these all went away and the resilver completed
okay. I am not seeing any read/write or /var/adm/messages errors -- but for
some reason I am seeing errors inside the
Howdy, Brent!
Thanks for your interest! We're pretty enthused about this project over here
and I'd be happy to share some details with you (and anyone else who cares
to peek). In this post I'll try to hit the major configuration
bullet-points, but I can also throw you command-line level specifics
Hi,
I'm just doing my first proper send/receive over the network and I'm getting
just 9.4MB/s over a gigabit link. Would you be able to provide an example of
how to use mbuffer / socat with ZFS for a Solaris beginner?
thanks,
Ross
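Not speaking for the mbuffer author, but a minimal sketch of the kind of pipeline
people use, with made-up host, pool and snapshot names and arbitrary buffer sizes:

On the receiving host (start this side first):
# mbuffer -I 9090 -s 128k -m 512M | zfs receive tank/backup/data

On the sending host:
# zfs send tank/data@today | mbuffer -O recvhost:9090 -s 128k -m 512M

-s sets the block size and -m the buffer size; mbuffer shows throughput and
buffer fill as it runs, which is what makes the bottleneck visible.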
Archie Cowan wrote:
I just stumbled upon this thread somehow and thought I'd share my zfs over
iscsi experience.
We recently abandoned a similar configuration with several pairs of x4500s
exporting zvols as iscsi targets and mirroring them for high availability
with T5220s.
In
Tommaso Boccali wrote:
Ciao, I have a Thumper with OpenSolaris (snv_91) and 48 disks.
I would like to try a new brand of HD by replacing a spare disk with a new
one and building a zfs pool on it.
Unfortunately the official utility to map a disk to the physical position
inside the thumper
Hi all,
Carsten Aulbert wrote:
More later.
OK, I'm completely puzzled right now (and sorry for this lengthy email).
My first (and currently only) idea was that the size of the files is
related to this effect, but that does not seem to be the case:
(1) A 185 GB zfs file system was transferred
Thomas Maier-Komor schrieb:
BTW: I released a new version of mbuffer today.
WARNING!!!
Sorry people!!!
The latest version of mbuffer has a regression that can CORRUPT output
if stdout is used. Please fall back to the last version. A fix is on the
way...
- Thomas
Ross Smith schrieb:
I'm using 2008-05-07 (latest stable), am I right in assuming that one is ok?
Date: Wed, 15 Oct 2008 13:52:42 +0200
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Improving
Hi Gray,
You've got a nice setup going there; a few comments:
1. Do not tune ZFS without a proven test-case to show otherwise, except...
2. For databases. Tune recordsize for that particular FS to match the DB recordsize (see the sketch after this message).
A few questions...
* How are you divvying up the space ?
* How are you taking
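A minimal sketch of the recordsize tuning mentioned in point 2; the dataset name
and the 16K figure are only examples, so match it to whatever block size your
database actually uses:

# zfs create tank/db
# zfs set recordsize=16k tank/db     (e.g. for a database with 16K blocks)
# zfs get recordsize tank/db

Note that recordsize only affects files written after the change, so set it
before loading data.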
Thanks, that got it working. I'm still only getting 10MB/s, so it hasn't solved
my problem - I've still got a bottleneck somewhere, but mbuffer is a huge
improvement over standard zfs send / receive. It makes such a difference when
you can actually see what's going on.
Hi Ross
Ross Smith wrote:
Thanks, that got it working. I'm still only getting 10MB/s, so it hasn't
solved my problem - I've still got a bottleneck somewhere, but mbuffer is a
huge improvement over standard zfs send / receive. It makes such a
difference when you can actually see what's
Hello all,
I think in Sun Studio 11 it should be -xarch=amd64.
Leal.
Ross schrieb:
Hi,
I'm just doing my first proper send/receive over the network and I'm getting
just 9.4MB/s over a gigabit link. Would you be able to provide an example of
how to use mbuffer / socat with ZFS for a Solaris beginner?
thanks,
Ross
comments below...
Carsten Aulbert wrote:
Hi all,
Carsten Aulbert wrote:
More later.
OK, I'm completely puzzled right now (and sorry for this lengthy email).
My first (and currently only) idea was that the size of the files is
related to this effect, but that does not seem to be
Tommaso Boccali wrote:
Ciao, I have a Thumper with OpenSolaris (snv_91) and 48 disks.
I would like to try a new brand of HD by replacing a
spare disk with a new one and building a zfs pool on it.
Unfortunately the official utility to map a disk to the
physical position inside the
Am I right in thinking your top level zpool is a raid-z pool consisting of six
28TB iSCSI volumes? If so that's a very nice setup, it's what we'd be doing if
we had that kind of cash :-)
Alternatively, just follow the instructions in the x4500 manual to offline the
relevant disk and you should see it light up with a nice blue "please replace
me" light.
From memory the commands you need are along the lines of:
# zpool offline -pool- -disk-
# cfgadm -c unconfigure satax/y
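Filled in with made-up names just to show the shape of it (the controller/target
numbering on your box will differ, so check zpool status and cfgadm -al first):

# zpool status tank                  (identify the disk, say c5t4d0)
# zpool offline tank c5t4d0
# cfgadm -al | grep sata             (find the matching attachment point)
# cfgadm -c unconfigure sata1/4      (the ready-to-remove LED should light)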
gc == Gray Carper [EMAIL PROTECTED] writes:
gc 5. The NAS head node has wrangled up all six of the iSCSI
gc targets
are you using raidz on the head node? It sounds like simple striping,
which is probably dangerous with the current code. This kind of sucks
because with simple striping
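For comparison, a sketch of what raidz on the head node would look like; the
c*t*d* names standing in for the six iSCSI LUNs are invented:

# zpool create nas raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0

versus plain dynamic striping, which is the same command without the raidz
keyword and leaves no redundancy at the head-node level.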
r == Ross [EMAIL PROTECTED] writes:
r Am I right in thinking your top level zpool is a raid-z pool
r consisting of six 28TB iSCSI volumes? If so that's a very
r nice setup,
not if it scrubs at 400GB/day, and 'zfs send' is uselessly slow. Also
I am thinking the J4500 Richard
Hi Richard,
Richard Elling wrote:
Since you are reading, it depends on where the data was written.
Remember, ZFS dynamic striping != RAID-0.
I would expect something like this if the pool was expanded at some
point in time.
No, the RAID was set up in one go right after jumpstarting the box.
On Wed, 15 Oct 2008, Marcelo Leal wrote:
Are you talking about what he had in the logic of the configuration at top
level, or you are saying his top level pool is a raidz?
I would think his top level zpool is a raid0...
ZFS does not support RAID0 (simple striping).
Bob
On 15 October, 2008 - Bob Friesenhahn sent me these 0,6K bytes:
On Wed, 15 Oct 2008, Marcelo Leal wrote:
Are you talking about what he had in the logic of the configuration at top
level, or you are saying his top level pool is a raidz?
I would think his top level zpool is a raid0...
So, there is no raid10 in a solaris/zfs setup?
I'm talking about no redundancy...
On Wed, 15 Oct 2008, Tomas Ögren wrote:
ZFS does not support RAID0 (simple striping).
zpool create mypool disk1 disk2 disk3
Sure it does.
This is load-share, not RAID0. Also, to answer the other fellow,
since ZFS does not support RAID0, it also does not support RAID 1+0
(10). :-)
With
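Terminology aside, the usual ZFS equivalent of RAID 1+0 is a pool of mirror
vdevs, which ZFS then load-shares across; a minimal sketch with made-up disk
names:

# zpool create mypool mirror disk1 disk2 mirror disk3 disk4
# zpool add mypool mirror disk5 disk6      (grow it by adding more mirror pairs)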
Hello.
Executive summary: I want arc_data_limit (like arc_meta_limit, but for
data) and set it to 0.5G or so. Is there any way to simulate it?
We have a cluster of linux frontends (http/ftp/rsync) for
Debian/Mozilla/etc archives, and as an NFS disk backend we currently have
a DL145 running
Hi All,
Just want to note that I had the same issue with zfs send + vdevs that had
11 drives in them on an X4500. Reducing the count of drives per vdev cleared
this up.
One vdev is IOPS limited to the speed of one drive in that vdev, according
to this post
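Rough arithmetic, assuming something like 100 random IOPS per 7200 RPM SATA
drive: four 11-disk raidz vdevs give the pool on the order of 4 x 100 = 400
random IOPS, while nine 5-disk vdevs give roughly 900, at the cost of more disks
spent on parity.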
Does it seem feasible/reasonable to enable compression on ZFS root disks during
JumpStart?
Seems like it could buy some space and perhaps some performance.
The onboard SATA ports work on the PDSME+. One of these days I'm
going to pick up a couple of Supermicro's 5-in-3 enclosures for mine:
http://www.newegg.com/Product/Product.aspx?Item=N82E16817121405
Scott
On Wed, Oct 15, 2008 at 12:26 AM, mike [EMAIL PROTECTED] wrote:
Good news - I got
Oh, also I kind of doubt that a 750W power supply will spin 16 disks
up reliably. I have 10 in mine with a 600W supply, and it's
borderline--10 drives work, 11 doesn't, and adding a couple extra PCI
cards has pushed mine over the edge before. Most 3.5 drives want
about 30W at startup; that'd be
On Thu, Oct 16, 2008 at 12:24 AM, Vincent Fox [EMAIL PROTECTED] wrote:
Does it seem feasible/reasonable to enable compression on ZFS root disks
during JumpStart?
Absolutely. I did it (compression=on) on all my machines - ranging from
laptops to servers. Beware, though, that on an oldish CPU it can
Vincent Fox wrote:
Does it seem feasible/reasonable to enable compression on ZFS root disks
during JumpStart?
Seems like it could buy some space and perhaps some performance.
Yes. There have been several people who do this regularly.
Glenn wrote a blog on how to do this when installing OpenSolaris
Did you enable it in the jumpstart profile somehow?
If you do it after install, the OS files are not compressed.
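A quick way to check what a given box actually ended up with (the
boot-environment name below is made up), and the command you'd run if you only
care about files written from now on:

# zfs get compression,compressratio rpool/ROOT/snv_98
# zfs set compression=on rpool/ROOT/snv_98    (affects newly written files only)

To get the installed OS image itself compressed, the property has to be on
before the files land, i.e. at JumpStart/install time.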
I was told here:
http://discuss.extremetech.com/forums/permalink/1004422973/1004422973/ShowThread.aspx#1004422973
that I'd need at least 40 amps - and this PSU has a 12V rail rated for 60 amps...
On Wed, Oct 15, 2008 at 3:30 PM, Scott Laird [EMAIL PROTECTED] wrote:
Oh, also I kind of doubt that a 750W
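For what it's worth, the back-of-the-envelope version: a typical 3.5" drive pulls
roughly 2-2.5A on the 12V rail while spinning up, so 16 drives is on the order of
16 x 2.5A = 40A, i.e. about 480W on that rail alone. A 60A (720W) 12V rail covers
that with headroom, while a smaller supply gets marginal once the rest of the
system is added; staggered spin-up, where the controller supports it, sidesteps
the whole issue.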
Yeah, for this plan I needed either 8 onboard SATA ports or another 8-port SATA
controller, so I opted just to get two of the PCI-X ones.
The Supermicro 5-in-3's don't have a fan alarm so you could remove it
or find a quieter fan. I think most of them have quite noisy fans (the
main goal for this besides
Richard Elling wrote:
Vincent Fox wrote:
Does it seem feasible/reasonable to enable compression on ZFS root disks during
JumpStart?
Seems like it could buy some space and perhaps some performance.
Yes. There have been several people who do this regularly.
Glenn wrote a blog on how to do this
Bob Friesenhahn wrote:
On Wed, 15 Oct 2008, Tomas Ögren wrote:
ZFS does not support RAID0 (simple striping).
zpool create mypool disk1 disk2 disk3
Sure it does.
This is load-share, not RAID0. Also, to answer the other fellow,
since ZFS does not support RAID0, it also does not support
On Wed, Oct 15, 2008 at 18:30, Scott Laird [EMAIL PROTECTED] wrote:
Oh, also I kind of doubt that a 750W power supply will spin 16 disks
up reliably. I have 10 in mine with a 600W supply, and it's
borderline--10 drives work, 11 doesn't, and adding a couple extra PCI
cards has pushed mine over
Greetings.
I'm currently looking into creating a better solution for my combination of Sun
xVM VirtualBox and ZFS.
I have two 500GB SATA drives configured into a zpool. I've used VirtualBox for
a while, as well as zfs, so I am familiar with their functionality. My main
question is more of a
Tomas Ögren wrote:
Hello.
Executive summary: I want arc_data_limit (like arc_meta_limit, but for
data) and set it to 0.5G or so. Is there any way to simulate it?
We describe how to limit the size of the ARC cache in the Evil Tuning Guide.
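For the archives, the knob the Evil Tuning Guide describes caps the whole ARC
(data and metadata together - there is no separate data-only limit, which is what
was really asked for). A minimal sketch, with 512MB picked arbitrarily: add the
following to /etc/system and reboot:

set zfs:zfs_arc_max = 0x20000000

(0x20000000 bytes is 512MB.)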
On Wed, Oct 15, 2008 at 4:12 PM, Will Murnane [EMAIL PROTECTED] wrote:
On Wed, Oct 15, 2008 at 18:30, Scott Laird [EMAIL PROTECTED] wrote:
Oh, also I kind of doubt that a 750W power supply will spin 16 disks
up reliably. I have 10 in mine with a 600W supply, and it's
borderline--10 drives
s == Steve [EMAIL PROTECTED] writes:
s the use of zfs
s clones/snapshots encompasses the entire zfs filesystem
I use one ZFS filesystem per VDI file. It might be better to use
vmdk's and zvol's, but right now that's not what I do.
I also often copy ExPee VDI's onto physical
s if I ever add a new 'gold vdi file', it does not affect the
s clones, [...] I'll be testing more OS's than the current ones,
s so scalability
what?
What I meant is that if I have a zfs filesystem holding a bunch of gold images
(VDIs), I would zfs snapshot/clone that filesystem. If I add
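A minimal sketch of the layout being described - one filesystem per gold image so
each can be snapshotted and cloned on its own; all names invented:

# zfs create -p tank/vdi/xp-gold            (one fs per gold VDI)
# zfs snapshot tank/vdi/xp-gold@v1
# zfs clone tank/vdi/xp-gold@v1 tank/vdi/xp-test01
# zfs clone tank/vdi/xp-gold@v1 tank/vdi/xp-test02

Adding a new gold image later is just another filesystem and snapshot, and
doesn't touch the existing clones.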
On Wed, Oct 15, 2008 at 2:17 PM, Scott Williamson
[EMAIL PROTECTED] wrote:
Hi All,
Just want to note that I had the same issue with zfs send + vdevs that had
11 drives in them on an X4500. Reducing the count of drives per vdev cleared
this up.
One vdev is IOPS limited to the speed of one
On Wed, Oct 15, 2008 at 23:51, Scott Laird [EMAIL PROTECTED] wrote:
Most 3.5 drives want
about 30W at startup; that'd be around 780W with 16 drives.
I'm not sure what kind of math you're using here.
See
On Wed, Oct 15, 2008 at 8:38 PM, Will Murnane [EMAIL PROTECTED] wrote:
On Wed, Oct 15, 2008 at 23:51, Scott Laird [EMAIL PROTECTED] wrote:
Most 3.5 drives want
about 30W at startup; that'd be around 780W with 16 drives.
I'm not sure what kind of math you're using here.
See
Hello All,
Summary:
cp command for mirrored zfs hung when all the disks in the mirrored
pool were unavailable.
Detailed description:
~
The cp command (copy a 1GB file from nfs to zfs) hung when all the disks
in the mirrored pool (both c1t0d9 and
Karthik,
The pool failmode property as implemented governs the behaviour when all
the devices needed are unavailable. The default behaviour is to wait
(block) until the IO can continue - perhaps by re-enabling the device(s).
The behaviour you expected can be achieved by zpool set failmode=continue.
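For reference (pool name made up), failmode takes wait (the default - block until
the devices come back), continue (return EIO to new I/O), or panic:

# zpool get failmode tank
# zpool set failmode=continue tank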
Neil,
Thanks for the quick suggestion; the hang seems to happen even with the
zpool set failmode=continue pool option.
Any other way to recover from the hang?
thanks and regards,
Karthik
On 10/15/08 22:03, Neil Perrin wrote:
Karthik,
The pool failmode property as implemented governs the
Hi again
Brent Jones wrote:
Scott,
Can you tell us the configuration that you're using that is working for you?
Were you using RaidZ or RaidZ2? I'm wondering what the sweet spot is
to get a good compromise between vdev count and usable space/performance
Some time ago I made some tests to find
We did try both the
zpool set failmode=continue pool option
and the wait option before running the cp command and pulling
out the mirrors, and in both cases there was a hang; I have a core
dump of the hang as well.
Any pointers to the process for opening a bug?
Thanks
Karthik
On