Hello Erik,
Friday, May 12, 2006, 10:34:58 PM, you wrote:
ET So, I'll be using this:
ET 3 x Sun 3511FC SATA JBODs, each with 12 400GB disks. The setup is a
ET 11-wide stripe of 3-disk RAIDZ vdevs, with the remaining 3 drives as
ET hot-spares. They'll attach to x4200 Opterons or V440 Sparcs as
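For readers sketching this out, the pool creation for such a layout would look roughly like the following - the device names are hypothetical, only the first few of the eleven raidz groups are written out, and the spare keyword assumes a build that already has hot-spare support:
zpool create tank \
    raidz c2t0d0 c2t1d0 c2t2d0 \
    raidz c2t3d0 c2t4d0 c2t5d0 \
    raidz c2t6d0 c2t7d0 c2t8d0 \
    spare c4t9d0 c4t10d0 c4t11d0
# each additional "raidz d1 d2 d3" group before the spare keyword adds another
# 3-disk vdev to the dynamic stripe, up to the eleven described above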
Hello Roch,
Friday, May 12, 2006, 5:31:10 PM, you wrote:
RBPE Robert Milkowski writes:
Hello Roch,
Friday, May 12, 2006, 2:28:59 PM, you wrote:
RBPE Hi Robert,
RBPE Could you try 35 concurrent dd each issuing 128K I/O ?
RBPE That would be closer to how ZFS would behave
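For anyone wanting to reproduce that load, a rough sketch (the raw device path is hypothetical; the iseek offsets just keep the 35 streams from reading the same blocks):
#!/bin/ksh
# start 35 concurrent dd readers, each issuing 128K I/Os against the raw device
integer i=0
while (( i < 35 )); do
    dd if=/dev/rdsk/c2t0d0s0 of=/dev/null bs=128k iseek=$((i * 8192)) count=8192 &
    (( i += 1 ))
done
wait    # let all readers finish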
Hello zfs-discuss,
Just to be sure - if I create ZFS filesystems on snv_39 and then
later I would want just to import that pool on S10U2 - can I safely
assume it will just work (I mean nothing new was added or changed in the
on-disk format in the last few snv releases which is not going to be
Hello Eric,
Wednesday, May 17, 2006, 12:13:48 AM, you wrote:
ES Yes, this will work. If you install build snv_39 and run 'zpool upgrade
ES -v' you'll see that it uses ZFS version 2. If you follow the link
ES it provides to:
ES http://www.opensolaris.org/os/community/zfs/version/2/
ES
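A quick way to sanity-check this before moving a pool between releases, assuming a build recent enough to have versioned pools, is to compare the on-disk version in use with what the target release supports:
zpool upgrade        # report the ZFS version in use on this system and by each pool
zpool upgrade -v     # list every on-disk version this release supports, with descriptions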
Hello Roch,
Monday, May 15, 2006, 3:23:14 PM, you wrote:
RBPE The question put forth is whether the ZFS 128K blocksize is sufficient
RBPE to saturate a regular disk. There is a great body of evidence that shows
RBPE that bigger write sizes and a matching large FS cluster size lead
RBPE to
Hello Darren,
Tuesday, May 23, 2006, 11:12:15 AM, you wrote:
DJM [EMAIL PROTECTED] wrote:
Robert Milkowski wrote:
But only if compression is turned on for a filesystem.
Of course, and the default is off.
However I think it would be good to have an API so application can
decide what
Hello Darren,
Tuesday, May 23, 2006, 4:19:05 PM, you wrote:
DJM Robert Milkowski wrote:
The problem is that with stronger compression algorithms, for
performance reasons I want to decide which algorithms to use and which
files ZFS should try to compress. For some files I write a lot of data
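Today the granularity is the dataset, so the usual workaround is to split the data across filesystems and enable compression only where it pays off; a small sketch with made-up dataset names:
zfs create tank/logs
zfs set compression=on tank/logs      # text logs compress well
zfs create tank/media
zfs set compression=off tank/media    # already-compressed data, not worth the CPU
zfs get compressratio tank/logs       # check how much the compression is actually saving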
Hello Tom,
Tuesday, May 23, 2006, 9:46:24 PM, you wrote:
TG Hi,
TG I have these two pools, four luns each. One has two mirrors x two luns,
TG the other is one mirror x 4 luns.
TG I am trying to figure out what the pros and cons are of these two configs.
TG One thing I have noticed is that
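For comparison, the two layouts would be created roughly like this (LUN device names are hypothetical):
# two 2-way mirrors striped together: about 2 LUNs of usable space,
# and I/O is spread across both mirrors
zpool create poolA mirror c5t0d0 c5t1d0 mirror c5t2d0 c5t3d0
# one 4-way mirror: about 1 LUN of usable space, every block on all four LUNs
zpool create poolB mirror c5t0d0 c5t1d0 c5t2d0 c5t3d0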
Hello Roland,
Tuesday, May 23, 2006, 10:31:37 PM, you wrote:
RM Darren J Moffat wrote:
James Dickens wrote:
I think ZFS should add the concept of ownership to a ZFS filesystem,
so if I create a filesystem for joe, he should be able to use his
space however he sees fit; if he wants to
Hello Tom,
Tuesday, May 23, 2006, 10:37:31 PM, you wrote:
TG Robert Milkowski wrote:
Hello Tom,
Tuesday, May 23, 2006, 9:46:24 PM, you wrote:
TG Hi,
TG I have these two pools, four luns each. One has two mirrors x two luns,
TG the other is one mirror x 4 luns.
TG I am trying to figure
Hello Roch,
Monday, May 22, 2006, 3:42:41 PM, you wrote:
RBPE Robert Says:
RBPE Just to be sure - you did reconfigure the system to actually allow larger
RBPE IO sizes?
RBPE Sure enough, I messed up (I had no tuning to get the above data); So
RBPE 1 MB was my max transfer size. Using 8MB
Hello Phil,
Wednesday, May 24, 2006, 7:28:51 PM, you wrote:
PC Will the ability to import a destroyed ZFS pool and the fsstat
PC command that's part of the latest Solaris Express release (B38)
PC make it into Solaris 10 Update-2 when it's released in
PC June/July??? Also has any decision been
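For reference, the two features being asked about look like this on a build that already has them (the pool name is hypothetical):
zpool import -D          # list pools that were destroyed but whose devices are still intact
zpool import -D tank     # recover one of them
fsstat zfs 1             # per-filesystem-type operation statistics, refreshed every second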
Hello Scott,
Wednesday, May 24, 2006, 9:42:06 PM, you wrote:
SD How does (or does) ZFS maintain sequentiality of the blocks of a file.
SD If I mkfile on a clean UFS, I likely will get contiguous blocks for my
SD file, right? A customer I talked to recently has a desire to access
SD large
Hello zfs-discuss,
I noticed on an nfs server with ZFS that even with atime set to off
and clients only reading data (almost 100% reads - except some
unlinks()) I can still see some MB/s being written according to
zpool iostat. What could be the cause? How can I see what is
actually
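A few things that can help narrow down where such writes come from (pool name hypothetical); the DTrace one-liner simply counts block-level writes per device:
zfs get atime tank        # confirm the setting really took effect for the datasets in question
zpool iostat -v tank 1    # per-vdev breakdown of the write traffic
dtrace -n 'io:::start /!(args[0]->b_flags & B_READ)/ { @[args[1]->dev_statname] = count(); }'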
Hello Robert,
Wednesday, May 31, 2006, 12:22:34 AM, you wrote:
RM Hello zfs-discuss,
RM I have an nfs server with zfs as a local file server.
RM System is snv_39 on SPARC.
RM There are 6 raid-z pools (p1-p6).
RM The problem is that I do not see any heavy traffic on network
RM interfaces nor
Hello Anton,
Tuesday, May 30, 2006, 9:59:09 PM, you wrote:
AR On May 30, 2006, at 2:16 PM, Richard Elling wrote:
[assuming we're talking about disks and not hardware RAID arrays...]
AR It'd be interesting to know how many customers plan to use raw disks,
AR and how their performance relates
Hello David,
Thursday, June 1, 2006, 11:35:41 PM, you wrote:
DJO Just as a hypothetical (not looking for exact science here
DJO folks..), how would ZFS fare (in your educated opinion) in this situation:
DJO 1 - Machine with 8 10k rpm SATA drives. High performance machine
DJO of sorts (ie dual
Hello David,
Friday, June 2, 2006, 12:52:05 AM, you wrote:
DJO - Original Message -
DJO From: Matthew Ahrens [EMAIL PROTECTED]
DJO Date: Thursday, June 1, 2006 12:30 pm
DJO Subject: Re: [zfs-discuss] question about ZFS performance for
webserving/java
There is no need for multiple
Hello David,
Friday, June 2, 2006, 4:03:45 AM, you wrote:
DJO - Original Message -
DJO From: Robert Milkowski [EMAIL PROTECTED]
DJO Date: Thursday, June 1, 2006 1:17 pm
DJO Subject: Re[2]: [zfs-discuss] question about ZFS performance for
webserving/java
Hello David,
The system
I can't send/receive an incremental for one filesystem. Other filesystems on the
same servers (some in the same pool) work ok - the problem is just with that one. Even
if I roll back the destination filesystem I still can't receive the incremental send.
I try to send an incremental from 'SRC HOST' to 'DST HOST'. On
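For comparison, a working incremental cycle between two hosts usually looks like the sketch below (host, pool and snapshot names are made up); the destination has to sit exactly at the base snapshot, which is what the rollback is for:
zfs snapshot tank/fs@snap1
zfs send tank/fs@snap1 | ssh dsthost zfs receive backup/fs      # initial full stream
zfs snapshot tank/fs@snap2
ssh dsthost zfs rollback backup/fs@snap1                        # discard any changes made on the destination
zfs send -i snap1 tank/fs@snap2 | ssh dsthost zfs receive backup/fs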
According to zfs(1M)
-v Print verbose information about the stream and
the time required to perform the receive.
However, when using the -v option I only get:
receiving full stream of p6/[EMAIL PROTECTED] into nfs-s5-s8/[EMAIL PROTECTED]
Or maybe it's due to I
Hello Matthew,
Friday, June 9, 2006, 1:16:41 AM, you wrote:
MA On Thu, Jun 08, 2006 at 03:43:08PM -0700, Robert Milkowski wrote:
According to zfs(1M)
-v Print verbose information about the stream and
the time required to perform the receive.
However
Hello Rob,
Friday, June 9, 2006, 7:36:58 AM, you wrote:
RL why is the sum of the disks' bandwidth from `zpool iostat -v 1`
RL less than the pool total while watching `du /zfs`
RL on opensol-20060605 bits?
RL capacity operations bandwidth
RL pool used avail read write
Hello Eric,
Friday, June 9, 2006, 5:16:29 PM, you wrote:
ES On Fri, Jun 09, 2006 at 06:16:53AM -0700, Robert Milkowski wrote:
bash-3.00# zpool status -v nfs-s5-p1
pool: nfs-s5-p1
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt
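The usual follow-up to a status message like that is a scrub, forcing every block to be re-read and repaired from redundancy where possible, and then watching the per-device error counters (pool name as in the output above):
zpool scrub nfs-s5-p1
zpool status -v nfs-s5-p1    # scrub progress plus read/write/checksum error counts per device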
Hello Jeff,
Saturday, June 10, 2006, 2:32:49 AM, you wrote:
btw: I'm really surprised at how unreliable SATA disks are. I put a dozen
TBs of data on ZFS recently and after just a few days I got a few hundred
checksum errors (raid-z was used there). And these disks are 500GB in
3511 array. Well that
Hello zfs-discuss,
I'm writing a script to do automatic snapshots and destroy old
ones. I think it would be great to add another option to zfs destroy
so only snapshots can be destroyed. Something like:
zfs destroy -s SNAPSHOT
so if something other than snapshot is provided
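Until something like that exists, a script has to check the type itself; a minimal sketch of the rotate-and-destroy idea, with the dataset name, snapshot prefix and retention count all made up:
#!/bin/ksh
# take a dated snapshot and keep only the 7 newest auto snapshots of one dataset
fs=tank/home
keep=7
zfs snapshot $fs@auto-$(date '+%Y%m%d-%H%M')
snaps=$(zfs list -H -t snapshot -o name | grep "^$fs@auto-" | sort)
count=$(echo "$snaps" | wc -l)
echo "$snaps" | while read snap; do
    [ $count -le $keep ] && break
    zfs destroy "$snap"          # only ever given snapshot names, never filesystems
    count=$((count - 1))
done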
Hi.
snv_39, SPARC - nfs server with local ZFS filesystems.
Under heavy load traffic to all filesystems in one pool ceased - it was ok for
other pools.
By ceased I mean that 'zpool iostat 1' showed no traffic to that pool
(nfs-s5-p0).
Commands like 'df' or 'zfs list' hang.
I issued 'reboot
Hello zfs-discuss,
NFS server on snv_39/SPARC, zfs filesystems exported.
Solaris 10 x64 clients (zfs-s10-0315), filesystems mounted from nfs
server using NFSv3 over TCP.
What I see from NFS clients is that mkdir operations to ZFS filesystems
could take as much as 20s, while to UFS exported
Hello Matthew,
Tuesday, June 13, 2006, 10:43:08 PM, you wrote:
MA On Mon, Jun 12, 2006 at 12:58:17PM +0200, Robert Milkowski wrote:
I'm writing a script to do automatic snapshots and destroy old
ones. I think it would be great to add another option to zfs destroy
so only snapshots
I issued svcadm disable nfs/server
nfsd is still there with about 1300 threads (down from 2052).
mpstat shows at least one CPU with 0% idle all the time and:
bash-3.00# dtrace -n fbt:::entry'{self->vt=vtimestamp;}' -n
fbt:::return'/self->vt/[EMAIL PROTECTED](vtimestamp-self->vt);self->vt=0;}' -n
It's not only when I try to stop nfsd - during normal operations I see that
one CPU has 0% idle, all traffic is only to one pool (and this is very small
traffic) and all nfs threads hung - I guess all these threads are to this pool.
bash-3.00# zpool iostat 1
capacity
Hello Roch,
Wednesday, June 21, 2006, 2:31:25 PM, you wrote:
R This just published:
R
R http://blogs.sun.com/roller/trackback/roch/Weblog/the_dynamics_of_zfs
Proper link is: http://blogs.sun.com/roller/page/roch?entry=the_dynamics_of_zfs
--
Best regards,
Robert
Hello Neil,
Wednesday, June 21, 2006, 6:41:50 PM, you wrote:
NP Torrey McMahon wrote On 06/21/06 10:29,:
Roch wrote:
Sean Meighan writes:
The vi we were doing was a 2 line file. If you just vi a new file,
add one line and exit it would take 15 minutes in fdsynch. On
recommendation
Hello Roch,
Thursday, June 22, 2006, 9:55:41 AM, you wrote:
R How about the 'deferred' option be on a leased basis with a
R deadline to revert to normal behavior; at most 24hrs at a
R time. Console output every time the option is enabled.
I really hate it when tools try to be more clever than
Hello Erik,
Friday, June 23, 2006, 2:35:30 AM, you wrote:
ET So, basically, the problem boils down to those with Xeons, a few
ET single-socket P4s, and some of this-year's Pentium Ds. Granted, this
ET makes up most of the x86 server market. So, yes, it _would_ be nice to
ET be able to dump a
Hello Nathanael,
NB I'm a little confused by the first poster's message as well, but
NB you lose some benefits of ZFS if you don't create your pools with
NB either RAID1 or RAIDZ, such as data corruption detection. The
NB array isn't going to detect that because all it knows about are blocks.
Hello David,
Wednesday, June 28, 2006, 12:30:54 AM, you wrote:
DV If ZFS is providing better data integrity than the current storage
DV arrays, that sounds like to me an opportunity for the next generation
DV of intelligent arrays to become better.
Actually they can't.
If you want end-to-end
Hello Peter,
Wednesday, June 28, 2006, 1:11:29 AM, you wrote:
PT On Tue, 2006-06-27 at 17:50, Erik Trimble wrote:
PT You really need some level of redundancy if you're using HW raid.
PT Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT that. Seems to me that the simplest way to
Hello przemolicc,
Wednesday, June 28, 2006, 10:57:17 AM, you wrote:
ppf On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote:
Case in point, there was a gentleman who posted on the Yahoo Groups solx86
list and described how faulty firmware on a Hitachi HDS system damaged a
bunch of data.
Hello Noel,
Wednesday, June 28, 2006, 5:59:18 AM, you wrote:
ND a zpool remove/shrink type function is on our list of features we want
ND to add.
ND We have RFE
ND 4852783 reduce pool capacity
ND open to track this.
Is there someone actually working on this right now?
--
Best regards,
Hello Peter,
Wednesday, June 28, 2006, 11:24:32 PM, you wrote:
PT Robert,
PT You really need some level of redundancy if you're using HW raid.
PT Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT that. Seems to me that the simplest way to go is to use zfs to mirror
PT HW
Hello Erik,
Wednesday, June 28, 2006, 6:32:38 PM, you wrote:
ET Robert -
ET I would definitely like to see the difference between read on HW RAID5
ET vs read on RAIDZ. Naturally, one of the big concerns I would have is
ET how much RAM is needed to avoid any cache starvation on the ZFS
ET
Hello Philip,
Thursday, June 29, 2006, 2:58:41 AM, you wrote:
PB Erik Trimble wrote:
Since the best way to get this is to use a Mirror or RAIDZ vdev, I'm
assuming that the proper way to get benefits from both ZFS and HW RAID
is the following:
(1) ZFS mirror of HW stripes, i.e. zpool
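Spelled out, option (1) would be something along these lines, where each device is a RAID-0 stripe exported by the array (device names hypothetical):
zpool create tank mirror c6t0d0 c7t0d0    # ZFS mirrors two HW stripes and keeps checksums/self-healing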
Hello przemolicc,
Thursday, June 29, 2006, 10:08:23 AM, you wrote:
ppf On Thu, Jun 29, 2006 at 10:01:15AM +0200, Robert Milkowski wrote:
Hello przemolicc,
Thursday, June 29, 2006, 8:01:26 AM, you wrote:
ppf On Wed, Jun 28, 2006 at 03:30:28PM +0200, Robert Milkowski wrote:
ppf What I
Hello Steve,
Thursday, June 29, 2006, 5:54:50 PM, you wrote:
SB I've noticed another possible issue - each mount consumes about 45KB of
SB memory - not an issue with tens or hundreds of filesystems, but going
SB back to the 10,000 user scenario this would be 450MB of memory. I know
SB that memory
Hello zfs-discuss,
http://www.redhat.com/archives/fedora-list/2006-June/msg03623.html
Are they so afraid they have to write such bullshit!?
--
Best regards,
Robert mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
Hello Peter,
Friday, July 7, 2006, 2:02:49 PM, you wrote:
PvG Can anyone tell me why pools created with zpool are also zfs file
PvG systems (and mounted) which can be used for storing files? It
PvG would have been more transparent if the pool did not allow the storage of
files.
Pool itself is
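If the concern is only that people might drop files into the pool's top-level filesystem, its mountpoint can be pointed away while the child filesystems are used as normal (pool and dataset names hypothetical):
zfs set mountpoint=none tank          # the root dataset is no longer mounted anywhere
zfs create tank/data
zfs set mountpoint=/data tank/data    # children still get ordinary mountpoints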
Hello Robert,
Thursday, July 6, 2006, 1:49:34 AM, you wrote:
RM Hello Eric,
RM Monday, June 12, 2006, 11:21:24 PM, you wrote:
ES I reproduced this pretty easily on a lab machine. I've filed:
ES 6437568 ditto block repair is incorrectly propagated to root vdev
ES To track this issue. Keep
Hello David,
Tuesday, July 11, 2006, 8:34:10 PM, you wrote:
DA Hi,
DA I've been trying to understand how transactional writes work in
DA RAID-Z. I think I understand the ZFS system for transactional writes
DA in general (the only place I could find that info was wikipedia;
DA someone should
Hello zfs-discuss,
What would you rather propose for ZFS+ORACLE - zvols or just files
from the performance standpoint?
--
Best regards,
Robert mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
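For reference, the two options are created like this (names and sizes are made up); with plain files a common tweak is to match recordsize to the Oracle block size before the datafiles are created:
zfs create -V 20g tank/oravol         # a zvol, visible as /dev/zvol/rdsk/tank/oravol
zfs create tank/oradata               # a filesystem for datafiles
zfs set recordsize=8k tank/oradata    # match db_block_size (8K here) before loading data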
Hello J.P.,
Monday, July 17, 2006, 2:15:56 PM, you wrote:
JPK Possibly not the right list, but the only appropriate one I knew about.
JPK I have a Solaris box (just reinstalled to Sol 10 606) with a 3.19TB device
JPK hanging off it, attached by fibre.
JPK Solaris refuses to see this device
Hello Roch,
Monday, July 17, 2006, 6:09:54 PM, you wrote:
R Robert Milkowski writes:
Hello zfs-discuss,
What would you rather propose for ZFS+ORACLE - zvols or just files
from the performance standpoint?
--
Best regards,
Robert mailto:[EMAIL
Hello Bill,
Friday, July 21, 2006, 7:31:25 AM, you wrote:
BM On Thu, Jul 20, 2006 at 03:45:54PM -0700, Jeff Bonwick wrote:
However, we do have the advantage of always knowing when something
is corrupted, and knowing what that particular block should have been.
We also have ditto blocks
Hello Gregory,
Friday, July 21, 2006, 3:22:17 PM, you wrote:
After reading the ditto blocks blog (good article, btw), an idea occurred to me:
Since we use ditto blocks to preserve critical filesystem data, would it be practical to add a filesystem property that would cause all files
Hello Eric,
Wednesday, July 26, 2006, 8:44:55 PM, you wrote:
ES And no, there is currently no way to remove a dynamically striped disk
ES from a pool. We're working on it.
That's interesting (I mean that something is actually being done
about it).
Can you give us some specifics (features,
Hello zfs-discuss,
Is someone working on a backport (patch) to S10? Any timeframe?
--
Best regards,
Robert mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
___
zfs-discuss mailing
Hello eric,
Thursday, July 27, 2006, 4:34:16 AM, you wrote:
ek Robert Milkowski wrote:
Hello George,
Wednesday, July 26, 2006, 7:27:04 AM, you wrote:
GW Additionally, I've just putback the latest feature set and bugfixes
GW which will be part of s10u3_03. There were some additional
Hello Fred,
Friday, July 28, 2006, 12:37:22 AM, you wrote:
FZ Hi Robert,
FZ The fix for 6424554 is being backported to S10 and will be available in
FZ S10U3, later this year.
I know that already - I was rather asking if a patch containing the
fix will be available BEFORE U3 and if yes then
Hello Jeff,
Friday, July 28, 2006, 4:21:42 PM, you wrote:
JV Now that I've gone and read the zpool man page :-[ it seems that only whole
JV disks can be exported/imported.
No, it's not that way.
If you create a pool from slices you'll be able to import/export only
those slices. So if you
Hello ZFS,
System was rebooted and after reboot server again
System is snv_39, SPARC, T2000
bash-3.00# ptree
7 /lib/svc/bin/svc.startd -s
163 /sbin/sh /lib/svc/method/fs-local
254 /usr/sbin/zfs mount -a
[...]
bash-3.00# zfs list|wc -l
46
Using df I can see most file
Hello Richard,
Monday, July 31, 2006, 6:29:03 PM, you wrote:
RE Malahat Qureshi wrote:
Does anyone have a comparison of zfs vs. vxfs? I'm working on
a presentation for my management on this ---
RE In management speak, this is easy. VxFS $0. ZFS priceless. :-)
Well, it's free for
Hello Robert,
Monday, July 31, 2006, 12:48:30 AM, you wrote:
RM Hello ZFS,
RM System was rebooted and after reboot server again
RM System is snv_39, SPARC, T2000
RM bash-3.00# ptree
RM 7 /lib/svc/bin/svc.startd -s
RM 163 /sbin/sh /lib/svc/method/fs-local
RM 254 /usr/sbin/zfs
Hello Neil,
Tuesday, August 1, 2006, 8:45:02 PM, you wrote:
NP Robert Milkowski wrote On 08/01/06 11:41,:
Hello Robert,
Monday, July 31, 2006, 12:48:30 AM, you wrote:
RM Hello ZFS,
RM System was rebooted and after reboot server again
RM System is snv_39, SPARC, T2000
RM bash
Hello Richard,
Monday, July 31, 2006, 9:23:46 PM, you wrote:
RE Robert Milkowski wrote:
Hello Richard,
Monday, July 31, 2006, 6:29:03 PM, you wrote:
RE Malahat Qureshi wrote:
Does anyone have a comparison of zfs vs. vxfs? I'm working on
a presentation for my management
Hello Robert,
Wednesday, August 2, 2006, 12:22:11 AM, you wrote:
RM Hello Neil,
RM Tuesday, August 1, 2006, 8:45:02 PM, you wrote:
NP Robert Milkowski wrote On 08/01/06 11:41,:
Hello Robert,
Monday, July 31, 2006, 12:48:30 AM, you wrote:
RM Hello ZFS,
RM System was rebooted
Hello Joseph,
Thursday, August 3, 2006, 2:02:28 AM, you wrote:
JM I know this is going to sound a little vague but...
JM A coworker said he read somewhere that ZFS is more efficient if you
JM configure pools from entire disks instead of just slices of disks. I'm
JM curious if there is any
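The usual explanation is that when ZFS is given a whole disk it writes an EFI label and knows it owns the device, so it can safely enable the drive's write cache; with a slice it leaves the cache alone because other consumers may share the disk. On the command line the difference is only (two alternatives, device names hypothetical):
zpool create tank c1t2d0      # whole disk: ZFS labels it and may enable the write cache
zpool create tank c1t2d0s0    # one slice: the rest of the disk can still hold UFS, swap, etc.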
Hi.
3510 with two HW controllers, configured one LUN in RAID-10 using 12 disks in the
head unit (FC-AL 73GB 15K disks). Optimization set to random, stripe size 32KB.
Connected to v440 using two links, however in tests only one link was used (no
MPxIO).
I used filebench and varmail test with
Hello zfs-discuss,
Just a note to everyone experimenting with this - if you change it
online it only takes effect when pools are exported and then imported.
ps. I didn't use it for my last posted benchmarks - with it I get about
35,000 IOPS and 0.2ms latency - but it's meaningless.
--
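For anyone else experimenting with it: zil_disable is a kernel tunable rather than a dataset property, it affects the whole machine, and it throws away synchronous-write semantics, so it is only for testing. A sketch of the two usual ways to set it (then remount, or export/import, as noted above):
# in /etc/system (takes effect at the next boot):
set zfs:zil_disable = 1
# or on a live kernel, for testing only:
echo zil_disable/W0t1 | mdb -kw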
Hello Eric,
Monday, August 7, 2006, 5:53:38 PM, you wrote:
ES Cool stuff, Robert. It'd be interesting to see some RAID-Z (single- and
ES double-parity) benchmarks as well, but understandably this takes time
ES ;-)
I intend to test raid-z. Not sure there'll be enough time for raidz2.
ES The
Hello Eric,
Monday, August 7, 2006, 6:29:45 PM, you wrote:
ES Robert -
ES This isn't surprising (either the switch or the results). Our long term
ES fix for tweaking this knob is:
ES 6280630 zil synchronicity
ES Which would add 'zfs set sync' as a per-dataset option. A cut from the
ES
Hello Neil,
Monday, August 7, 2006, 6:40:01 PM, you wrote:
NP Not quite, zil_disable is inspected on file system mounts.
I guess you're right that umount/mount will suffice - I just hadn't had time
to check it, and export/import worked.
Anyway, is there a way to make it active for file systems without
Hello Richard,
Monday, August 7, 2006, 6:54:37 PM, you wrote:
RE Hi Robert, thanks for the data.
RE Please clarify one thing for me.
RE In the case of the HW raid, was there just one LUN? Or was it 12 LUNs?
Just one lun which was built on 3510 from 12 luns in raid-1(0).
--
Best regards,
Hello David,
Tuesday, August 8, 2006, 3:39:42 AM, you wrote:
DJO Thanks, interesting read. It'll be nice to see the actual
DJO results if Sun ever publishes them.
You may bet I'll post some results hopefully soon :)
--
Best regards,
Robert mailto:[EMAIL PROTECTED]
Hi.
This time some RAID5/RAID-Z benchmarks.
This time I connected the 3510 head unit with one link to the same server the 3510
JBODs are connected to (using a second link). snv_44 is used, the server is a v440.
I also tried changing max pending IO requests for HW raid5 lun and checked with
DTrace that
Hello Pierre,
Tuesday, August 8, 2006, 4:51:20 PM, you wrote:
PK Thanks for your answer Eric!
PK I don't see any problem mounting a filesystem under 'legacy'
PK options as long as i can have the freedom of ZFS features by being
PK able to add/remove/play around with disks really!
PK I tested
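For what it's worth, a legacy mountpoint only hands mount control back to the administrator and /etc/vfstab; the dataset keeps all other ZFS properties (names hypothetical):
zfs set mountpoint=legacy tank/export
mount -F zfs tank/export /export      # or the matching /etc/vfstab entry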
Hello Luke,
Tuesday, August 8, 2006, 4:48:38 PM, you wrote:
LL Does snv44 have the ZFS fixes to the I/O scheduler, the ARC and the
prefetch logic?
LL These are great results for random I/O, I wonder how the sequential I/O
looks?
LL Of course you'll not get great results for sequential I/O on
Hello Luke,
Tuesday, August 8, 2006, 6:18:39 PM, you wrote:
LL Robert,
LL On 8/8/06 9:11 AM, Robert Milkowski [EMAIL PROTECTED] wrote:
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
87MB/s
3. ZFS, atime=off, SW
Hi.
snv_44, v440
filebench/varmail results for ZFS RAID10 with 6 disks and 32 disks.
What is surprising is that the results for both cases are almost the same!
6 disks:
IO Summary: 566997 ops 9373.6 ops/s, (1442/1442 r/w) 45.7mb/s,
299us cpu/op, 5.1ms latency
IO Summary:
Hello Matthew,
Tuesday, August 8, 2006, 7:25:17 PM, you wrote:
MA On Tue, Aug 08, 2006 at 06:11:09PM +0200, Robert Milkowski wrote:
filebench/singlestreamread v440
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1
Hello Doug,
Tuesday, August 8, 2006, 7:28:07 PM, you wrote:
DS Looks like somewhere between the CPU and your disks you have a limitation
of 9500 ops/sec.
DS How did you connect 32 disks to your v440?
Some 3510 JBODs connected directly over FC.
--
Best regards,
Robert
filebench in varmail by default creates 16 threads - I confirm it with prstat;
16 threads are created and running.
bash-3.00# lockstat -kgIW sleep 60|less
Profiling interrupt: 23308 events in 60.059 seconds (388 events/sec)
Count genr cuml rcnt nsec Hottest CPU+PIL Caller
Hello Torrey,
Wednesday, August 9, 2006, 5:39:54 AM, you wrote:
TM I read through the entire thread, I think, and have some comments.
TM * There are still some granny smith to Macintosh comparisons
TM going on. Different OS revs, it looks like different server types,
TM and I
Hello Roch,
Wednesday, August 9, 2006, 5:36:39 PM, you wrote:
R mario heimel writes:
Hi.
I am very interested in ZFS compression on vs. off tests; maybe you can run
another one with the 3510.
I have seen a slight benefit with compression on in the following test
(also with high
Hello Matthew,
Tuesday, August 8, 2006, 8:08:39 PM, you wrote:
MA On Tue, Aug 08, 2006 at 10:42:41AM -0700, Robert Milkowski wrote:
filebench in varmail by default creates 16 threads - I confirm it
with prstat; 16 threads are created and running.
MA Ah, OK. Looking at these results
Hello eric,
Friday, August 11, 2006, 3:04:38 AM, you wrote:
ek Leon Koll wrote:
...
So having 4 pools isn't a recommended config - i would destroy those 4
pools and just create 1 RAID-0 pool:
#zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0
c4t001738010140001Cd0
Hello Frank,
Saturday, August 12, 2006, 2:52:40 AM, you wrote:
FC On August 11, 2006 5:25:11 PM -0700 Peter Looyenga [EMAIL PROTECTED]
wrote:
However, while you can make one using 'zfs send' it somewhat worries me that
the only way to
perform a restore is by restoring the entire filesystem
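One common answer to the single-file-restore worry is to keep snapshots on the source and copy individual files back out of the .zfs/snapshot directory, using send/receive only for whole-filesystem disaster recovery (dataset and snapshot names hypothetical):
zfs set snapdir=visible tank/home    # expose .zfs at the filesystem root (it is reachable even while hidden)
cp /tank/home/.zfs/snapshot/nightly-20060811/lost.file /tank/home/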
Hello Mark,
Sunday, August 13, 2006, 8:00:31 PM, you wrote:
MM Robert Milkowski wrote:
Hello zfs-discuss,
bash-3.00# zpool status nfs-s5-s6
pool: nfs-s5-s6
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct
All pools were exported, then I tried to import them one by one and got this with
only the first pool.
bash-3.00# zpool export nfs-s5-p4 nfs-s5-s5 nfs-s5-s6 nfs-s5-s7 nfs-s5-s8
bash-3.00# zpool import nfs-s5-p4
cannot mount '/nfs-s5-p4/d5139': directory is not empty
cannot mount '/nfs-s5-p4/d5141':
Hello zfs-discuss,
I do have several pools in a SAN shared environment where some pools
are mounted by one server and some by another.
Now I can do 'zpool export A B C D'
But I can't do 'zpool import A B C D'
import -a isn't an option since I want to import only those pools I
just
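Until the CLI accepts several pool names at once, a shell loop over the same names does the job:
for p in A B C D; do zpool import $p; done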
Hello Mark,
Wednesday, August 16, 2006, 3:23:43 PM, you wrote:
MM Robert,
MM Are you sure that nfs-s5-p0/d5110 and nfs-s5-p0/d5111 are mounted
MM following the import? These messages imply that the d5110 and d5111
MM directories in the top-level filesystem of pool nfs-s5-p0 are not
MM empty.
Hello Eric,
Wednesday, August 16, 2006, 4:48:46 PM, you wrote:
ES What does 'zfs list -o name,mountpoint' and 'zfs mount' show after the
ES import? My only guess is that you have some explicit mountpoint set
ES that's confusing the DSL-ordered mounting code. If this is the case,
ES this was
Hello Eric,
Wednesday, August 16, 2006, 4:49:27 PM, you wrote:
ES This seems like a reasonable RFE. Feel free to file it at
ES bugs.opensolaris.org.
I just did :)
However currently 'zpool import A B' means importing pool A and
renaming it to pool B.
I think it would be better to change
Hello Roch,
Thursday, August 17, 2006, 11:08:37 AM, you wrote:
R My general principles are:
R If you can, to improve your 'Availability' metrics,
R let ZFS handle one level of redundancy;
R For Random Read performance prefer mirrors over
R raid-z. If you use
Hello zfs-discuss,
Is someone actually working on it? Or any other algorithms?
Any dates?
--
Best regards,
Robert mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
___
zfs-discuss
Hello David,
Friday, August 18, 2006, 5:39:31 PM, you wrote:
DDB On 8/18/06, Ben Short [EMAIL PROTECTED] wrote:
I plan to build a home server that will host my svn repository, fileserver,
mailserver and webserver.
This is my plan..
I have an old Dell Precision 420 with dual 933MHz PIII CPUs.
Hello Ben,
Friday, August 18, 2006, 4:36:45 PM, you wrote:
BS What I don't know is what happens if the boot disk dies - can I
BS replace it, install Solaris again and get it to see the zfs mirror?
BS Also what happens if one of the ide drives fails? Can I plug
BS another one in and run some zfs
Hello Sanjaya,
Friday, August 18, 2006, 7:50:21 PM, you wrote:
Hi,
I have been seeing data corruption on the ZFS filesystem. Here are some details. The machine is running s10 on X86 platform with a single 160Gb SATA disk. (root on s0 and zfs on s7)
Well you have a ZFS without
Hello zfs-discuss,
I've got many disks in a JBOD (100) and while doing tests there
are a lot of destroyed pools. Then some disks are re-used to be part
of new pools. Now if I do zpool import -D I can see a lot of destroyed
pools in a state such that I can't import them anyway (like only two disks
Hello zfs-discuss,
Looks like I can't get the pool ID once a pool is imported.
IMHO zpool show should display it also.
--
Best regards,
Robert mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
Hello zfs-discuss,
S10U2 SPARC + patches
Generic_118833-20
LUNs from 3510 array.
bash-3.00# zpool import
no pools available to import
bash-3.00# zpool create f3-1 mirror c5t600C0FF0098FD535C3D2B900d0
c5t600C0FF0098FD54CB01E1100d0 mirror