[EMAIL PROTECTED] said:
> It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP.
> Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty
> handily pull 120MB/sec from it, and write at over 100MB/sec. It falls apart
> more on random I/O. The server/initiator side is
[EMAIL PROTECTED] said:
> difference my tweaks are making. Basically, the problem users experience
> when the load shoots up is huge latencies. An ls on a non-cached
> directory, which usually is instantaneous, will take 20, 30, 40 seconds or
> more. Then when the storage array catches up,
[EMAIL PROTECTED] said:
> One thought I had was to unconfigure the bad disk with cfgadm. Would that
> force the system back into the 'offline' response?
In my experience (X4100 internal drive), that will make ZFS stop trying
to use it. It's also a good idea to do this before you hot-unplug the
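For anyone searching the archive later, a rough sketch of that sequence (the
attachment-point id below is hypothetical; take the real one from cfgadm -al):

  # list attachment points and find the failed drive's AP id
  cfgadm -al
  # take the drive offline so the OS stops using it, then hot-unplug it
  cfgadm -c unconfigure c1::dsk/c1t2d0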
[EMAIL PROTECTED] said:
> Here are some performance numbers. Note that, when the application server
> used a ZFS file system to save its data, the transaction took TWICE as long.
> For some reason, though, iostat is showing 5x as much disk writing (to the
> physical disks) on the ZFS partition. C
[EMAIL PROTECTED] said:
> Your findings for random reads with or without NCQ match my findings:
> http://blogs.sun.com/erickustarz/entry/ncq_performance_analysis
>
> Disabling NCQ looks like a very tiny win for the multi-stream read case. I
> found a much bigger win, but i was doing RAID-0 inst
[EMAIL PROTECTED] said:
> FYI, you can use the '-c' option to compare results from various runs and
> have one single report to look at.
That's a handy feature. I've added a couple of such comparisons:
http://acc.ohsu.edu/~hakansom/thumper_bench.html
Marion
[EMAIL PROTECTED] said:
> Depending on needs for space vs. performance, I'd probably pick either 5*9 or
> 9*5, with 1 hot spare.
[EMAIL PROTECTED] said:
> How you can check the speed (I'm totally newbie on Solaris)
We're deploying a new Thumper w/750GB drives, and did space vs performance
[EMAIL PROTECTED] said:
> . . .
> ZFS filesystem [on StorageTek 2530 Array in RAID 1+0 configuration
> with a 512K segment size]
> . . .
> Comparing runs 1 and 3 shows that ZFS is roughly 20% faster on
> (unsynchronized) writes versus UFS. What's really surprising, to me at least,
> is
[EMAIL PROTECTED] said:
> You still need interfaces, of some kind, to manage the device. Temp sensors?
> Drive fru information? All that information has to go out, and some in, over
> an interface of some sort.
Looks like the Sun 2530 array recently added in-band management over the
SAS (data)
[EMAIL PROTECTED] said:
> I feel like we're being hung out to dry here. I've got 70TB on 9 various
> Solaris 10 u4 servers, with different data sets. All of these are NFS
> servers. Two servers have a ton of small files, with a lot of read and
> write updating, and NFS performance on these ar
[EMAIL PROTECTED] said:
> I'd take a look at bonnie++
> http://www.sunfreeware.com/programlistintel10.html#bonnie++
Also filebench:
http://www.solarisinternals.com/wiki/index.php/FileBench
You'll see the most difference between 5x9 and 9x5 in small random reads:
http://blogs.sun.com/relling/e
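If it helps, here's a minimal bonnie++ invocation for comparing the two layouts
(mount point and size are hypothetical; keep the file size well above RAM so
the ARC doesn't hide the disks):

  # -d test directory, -s total file size, -u user to run as
  bonnie++ -d /tank/bench -s 8g -u nobody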
[EMAIL PROTECTED] said:
> I have a set of threads each doing random reads to about 25% of its own,
> previously written, large file ... a test run will read in about 20GB on a
> server with 2GB of RAM
> . . .
> after several successful runs of my test application, some run of my test
> will be ru
[EMAIL PROTECTED] said:
> When I modify ZFS filesystem properties I get "device busy":
> -bash-3.00# zfs set mountpoint=/mnt1 pool/zfs1
> cannot unmount '/mnt': Device busy
> Do you know how to identify the processes accessing this FS? fuser doesn't work
> with zfs!
Actually, fuser works fine with ZFS here.
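For example, to see what's holding the current mountpoint busy before changing
it (the mountpoint shown is the one from your error message):

  # -c reports processes using the mounted file system, -u adds the login name
  fuser -cu /mnt
  # then look at one of the reported PIDs
  ps -fp <pid>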
[EMAIL PROTECTED] said:
> You are confusing unrecoverable disk errors (which are rare but orders of
> magnitude more common) with otherwise *undetectable* errors (the occurrence
> of which is at most once in petabytes by the studies I've seen, rather than
> once in terabytes), despite my attempt to
[EMAIL PROTECTED] said:
> What are the approaches to finding what external USB disks are currently
> connected? I'm starting on backup scripts, and I need to check which
> volumes are present before I figure out what to back up to them. I
> . . .
In addition to what others have suggested so f
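Two commands that are commonly used for this (output formats differ by
release, so treat this as a sketch):

  # list removable/hotpluggable media devices, including USB disks
  rmformat -l
  # or look at the USB attachment points directly
  cfgadm -al | grep usb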
[EMAIL PROTECTED] said:
> Interesting. The HDS folks I talked to said the array no-ops the cache sync.
> Which models were you using? Midrange only, right?
HDS "modular" product -- ours is 9520V, which was the smallest available.
It has a mix of FC and SATA drives (yes, really).
Check the HDS f
[EMAIL PROTECTED] said:
> They clearly suggest to disable cache flush:
> http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH
>
> It seems to be the only serious article on the net about this subject.
>
> Could someone here comment on this tuning suggestion? My cu is run
[EMAIL PROTECTED] said:
> From Googling, it seems suggested that I use automount, which would cut out
> any version of Unix without automount, whether because of the age of the OS
> (early Sun might be OK still?) or because the Unix flavour lacks automount.
Some users have reported "solving" this issue by crea
[EMAIL PROTECTED] said:
> That link specifically mentions "new Solaris 10 release", so I am assuming
> that means going from like u4 to Sol 10 u5, and that shouldn't cause a
> problem when doing plain patchadd's (w/o live upgrade). If so, then I am fine
> with those warnings and can use zfs with zo
[EMAIL PROTECTED] said:
> Living on the edge... The T3 has a 2 year battery life (time is counted).
> When it decides the batteries are too old, it will shut down the nonvolatile
> write cache. You'll want to make sure you have fresh batteries soon.
Hmm, doesn't the array put the cache into "writ
Greetings,
Last April, in this discussion...
http://www.opensolaris.org/jive/thread.jspa?messageID=143517
...we never found out how (or if) the Sun 6120 (T4) array can be configured
to ignore cache flush (sync-cache) requests from hosts. We're about to
reconfigure a 6120 here for use wit
[EMAIL PROTECTED] said:
> Duh... makes sense.
Oh, I dunno, I think your first try makes sense, too. That's what
I tried to do my first time out. Maybe the zones team will get
around to supporting multiple datasets in one clause someday.
Regards,
Marion
[EMAIL PROTECTED] said:
> I'm trying to add filesystems from two different pools to a zone but can't
> seem to find any mention of how to do this in the docs.
>
> I tried this but the second set overwrites the first one.
>
> add dataset
> set name=pool1/fs1
> set name=pool2/fs2
> end
>
> Is thi
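For the record, the way that works is one dataset resource per filesystem,
i.e. two separate add/end blocks (the zone name here is hypothetical):

  zonecfg:myzone> add dataset
  zonecfg:myzone:dataset> set name=pool1/fs1
  zonecfg:myzone:dataset> end
  zonecfg:myzone> add dataset
  zonecfg:myzone:dataset> set name=pool2/fs2
  zonecfg:myzone:dataset> end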
[EMAIL PROTECTED] said:
> # zpool clear storage
> cannot open 'storage': pool is unavailable
>
> Bother...
Greetings,
It looks to me like maybe the device names changed with the controller
swap you mentioned. Possibly the "new" device has not been fully
recognized by the OS yet. Maybe a "cfga
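One possible sequence to try, assuming the pool really is just sitting on
renamed devices (hedged, since the exact state of the pool isn't clear from
the post):

  # rebuild /dev links after the controller swap
  devfsadm -Cv
  cfgadm -al
  # then re-read the pool under its new device names
  zpool export storage
  zpool import storage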
> . . .
>> Use JBODs. Or tell the cache controllers to ignore
>> the flushing requests.
[EMAIL PROTECTED] said:
> Unfortunately HP EVA can't do it. About the 9900V, it is really fast (64GB
> cache helps a lot) and reliable. 100% uptime in years. We'll never touch it
> to solve a ZFS problem.
On o
[EMAIL PROTECTED] said:
> The situation: a three 500gb disk raidz array. One disk breaks and you
> replace it with a new one. But the new 500gb disk is slightly smaller than
> the smallest disk in the array.
> . . .
> So I figure the only way to build smaller-than-max-disk-size functionality
>
[EMAIL PROTECTED] said:
> But I don't see how copying a label will do any good. Won't that just
> confuse ZFS and make it think it's talking to one of the other disks?
No, the disk label doesn't contain any ZFS info, it just tells the disk
drivers (scsi_vhci, in this case) where the disk slices
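One common way to copy a slice table from an existing pool member to the
replacement (device names are hypothetical; s2 is the conventional whole-disk
slice):

  prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2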
[EMAIL PROTECTED] said:
> With this OS version, format is giving lines such as:
> 9. c2t2104D9600099d0
> /[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1077,[EMAIL
> PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
> whereas, again to my recollection, previously the drive man
[EMAIL PROTECTED] said:
> attached below the errors. But the question still remains: is ZFS only happy
> with JBOD disks, and not with SAN storage with hardware RAID? Thanks
ZFS works fine on our SAN here. You do get a kernel panic (Solaris-10U3)
if a LUN disappears for some reason (without ZFS-level r
[EMAIL PROTECTED] said:
> Richard Elling wrote:
>> For the time being, these SATA disks will operate in IDE compatibility mode,
>> so don't worry about the write cache. There is some debate about whether
>> the write cache is a win at all, but that is another rat hole. Go ahead
>> and split off s
[EMAIL PROTECTED] said:
> On 5/30/07, Ian Collins <[EMAIL PROTECTED]> wrote:
> > How about 8 two way mirrors between shelves and a couple of hot spares?
>
> That's fine and good, but then losing just one disk from each shelf fast
> enough means the whole array is gone. Then one strong enough pow
>> My first guess is the NFS vs array cache-flush issue. Have you
>> configured the 6140 to ignore SYNCHRONIZE_CACHE requests? That'll
>> make a huge difference for NFS clients of ZFS file servers.
>
[EMAIL PROTECTED] said:
> Doesn't setting zfs:zfs_nocacheflush=1 achieve the same result:
> ht
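For reference, on releases that have that tunable it is set in /etc/system and
takes effect at the next reboot; it should only be used when every pool sits
behind non-volatile cache:

  * disable ZFS's SYNCHRONIZE CACHE requests to the array
  set zfs:zfs_nocacheflush = 1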
[EMAIL PROTECTED] said:
> Why can't the NFS performance match that of SSH?
Hi Albert,
My first guess is the NFS vs array cache-flush issue. Have you configured
the 6140 to ignore SYNCHRONIZE_CACHE requests? That'll make a huge difference
for NFS clients of ZFS file servers.
Also, you might ma
[EMAIL PROTECTED] said:
> because of a problem with EMC PowerPath we need to change the
> configuration of a ZFS pool changing "emcpower?g" devices (EMC Power Path
> created devices) to underlaying "c#t#d#" (Solaris path to those devices).
> . . .
You should be able to export the pool, "zpoo
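Roughly, with a hypothetical pool name:

  zpool export mypool
  # re-import, scanning only the native /dev/dsk paths instead of emcpower devices
  zpool import -d /dev/dsk mypool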
[EMAIL PROTECTED] said:
> - I will try your test.
> - But how does the ZFS cache affect my test?
You can measure this yourself. Try running the test both with and without
the "sync" command at the end. You should see a faster completion time
without the "sync", but not all data will have made it to
[EMAIL PROTECTED] said:
> And I did another performance test by copying a 512MB file into a zfs pool
> that was created from 1 LUN only, and the test result was the same - 12 sec!?
>
> NOTE : server V240, solaris10(11/06), 2GB RAM, connected to HDS storage type
> AMS500 with two HBA type qlogic QLA2342.
>
>
[EMAIL PROTECTED] said:
> bash-3.00# uname -a
> SunOS nfs-10-1.srv 5.10 Generic_125100-04 sun4u sparc SUNW,Sun-Fire-V440
>
> zil_disable set to 1 Disks are over FCAL from 3510.
>
> bash-3.00# dtrace -n fbt::*SYNCHRONIZE*:entry'{printf("%Y",walltimestamp);}'
> dtrace: description 'fbt::*SYNCHRONIZ
[EMAIL PROTECTED] said:
> The 6120 isn't the same as a 6130/6140/6540. The instructions referenced
> above won't work on a T3/T3+/6120/6320
Sigh. I can't keep up (:-). Thanks for the correction.
Marion
[EMAIL PROTECTED] said:
> We have been combing the message boards, and it looks like there was a lot of
> talk about this interaction of ZFS+NFS back in November and before, but since
> then I have not seen much. It seems the only fix up to that date was to
> disable the ZIL; is that still the case? Did any
Thanks to all for the helpful comments and questions.
[EMAIL PROTECTED] said:
> Isn't MPxIO support determined by HBA and hard-drive identification (not by
> the enclosure)? At least I don't see how the enclosure should matter, as long
> as it has 2 active paths. So if you add the drive vendor info into /
[EMAIL PROTECTED] said:
> The scsi_vhci multipathing driver doesn't just work with Sun's FC stack, it
> also works with SAS (at least, it does in snv_63 and ... soon .. with patches
> for s10).
Yes, it's nice to see that's coming, and that FC & SAS are "the same".
But I'm at S10U3 right now.
>
[EMAIL PROTECTED] said:
> This looks similar to the recently announced Sun StorageTek 2500 Low Cost
> Array product line. http://www.sun.com/storagetek/disk_systems/workgroup/2500/
Wonder how I missed those. Oh, probably because you can't see them
on store.sun.com/shop.sun.com. On paper, there
Greetings,
In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've
run across the recent "VTrak" SAS/SATA systems from Promise Technologies,
specifically their E-class and J-class series:
E310f FC-connected RAID:
http://www.promise.com/product/product_detail_eng.asp?product_
[EMAIL PROTECTED] said:
> The only obvious thing would be if the exported ZFS filesystems were
> initially mounted at a point in time when zil_disable was non-null.
No changes have been made to zil_disable. It's 0 now, and we've never
changed the setting. Export/import doesn't appear to change
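For what it's worth, the live value can be checked straight from the kernel:

  # prints zil_disable as a decimal value (0 = ZIL enabled)
  echo zil_disable/D | mdb -k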
[EMAIL PROTECTED] said:
> How did the ZFS striped on 7 slices of a FC-SATA LUN via NFS work 146 times
> faster than the ZFS on 1 slice of the same LUN via NFS???
Well, I do have more info to share on this issue, though how it worked
faster in that test still remains a mystery. Folks ma
[EMAIL PROTECTED] said:
> The reality is that
> ZFS turns on the write cache when it owns the
> whole disk.
> _Independently_ of that,
> ZFS flushes the write cache when ZFS needs to ensure
> that data reaches stable storage.
>
> The point is that the flushes occur whether
Adding to my own post, I said earlier:
> Anyway, I've also read that if ZFS notices it's using "slices" instead of
> whole disks, it will not enable/use the write cache. So I thought I'd be
> clever and configure a ZFS pool on our array with a slice of a LUN instead of
> the whole LUN, and "fool"
I had followed with interest the "turn off NV cache flushing" thread, in
regard to doing ZFS-backed NFS on our low-end Hitachi array:
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg05000.html
In short, if you have non-volatile cache, you can configure the array
to ignore the ZFS cac
[EMAIL PROTECTED] said:
> That is the part of your setup that puzzled me. You took the same 7 disk
> raid5 set and split them into 9 LUNS. The Hitachi likely splits the "virtual
> disk" into 9 continuous partitions so each LUN maps back to different parts
> of the 7 disks. I speculate that ZFS t
I wrote:
> Just thinking out loud here. Now I'm off to see what kind of performance
> cost there is, comparing (with 400GB disks):
> Simple ZFS stripe on one 2198GB LUN from a 6+1 HW RAID5 volume
> 8+1 RAID-Z on 9 244.2GB LUN's from a 6+1 HW RAID5 volume
[EMAIL PROTECTED] said:
> Int
Albert Chin said:
> Well, ZFS with HW RAID makes sense in some cases. However, it seems that if
> you are unwilling to lose 50% disk space to RAID 10 or two mirrored HW RAID
> arrays, you either use RAID 0 on the array with ZFS RAIDZ/RAIDZ2 on top of
> that or a JBOD with ZFS RAIDZ/RAIDZ2 on top of
[EMAIL PROTECTED] said:
> . . .
> realize that the pool is now in use by the other host. That leads to two
> systems using the same zpool which is not nice.
>
> Is there any solution to this problem, or do I have to get Sun Cluster 3.2 if
> I want to serve same zpools from many hosts? We may try S
[EMAIL PROTECTED] said:
> I was talking about the huge gap in storage solutions from Sun for the
> middle-ground. While $24,000 is a wonderful deal, it's absolute overkill for
> what I'm thinking about doing. I was looking for more around 6-8 drives.
How about a Sun V40z? It's available with up
[EMAIL PROTECTED] said:
> . . .
> We don't want to buy a Legato solution. This would be overkill and is too
> expensive. Scripting with tar and other archivers is not the best solution
> for doing a backup.
Gerrit,
It seems you/they must already be scripting with ufsdump now. It's no
more diffi
[EMAIL PROTECTED] said:
> While trouble shooting a full-disk scenario I booted from DVD after adding
> two new disks. Still under DVD boot I created a pool from those two disks
> and moved iso images I had downloaded to the zfs filesystem. Next I fixed
> my grub, exported the zpool and reboot
Greetings,
I followed closely the thread "ZFS and Storage", and other discussions
about using ZFS on hardware RAID arrays, since we are deploying ZFS in
a similar situation here. I'm sure I'm oversimplifying, but the consensus
for general filesystem-type storage needs, as I've read it, tends towa
Folks,
I realize this thread has run its course, but I've got a variant of
the original question: What performance problems or anomalies might
one see if mixing both whole disks _and_ slices within the same pool?
I have in mind some Sun boxes (V440, T2000, X4200) with four internal
drives. Typi
[EMAIL PROTECTED] said:
> There's no reason at all why you can't do this. The only thing preventing
> most file systems from taking advantage of 'adjustable' replication is that
> they don't have the integrated volume management capabilities that ZFS does.
And in fact, Sun's own QFS can do this,
[EMAIL PROTECTED] said:
> Solved... well, at least a workaround.
> . . .
> had to boot another version of Solaris, 9 in this case, and used format -e to
> wipe the EFI label, so this is a bug; not sure if it's a duplicate of one of
> the numerous other EFI bugs on this list, so I will let one of the z
Greetings,
I've seen discussion that tar & cpio are "ZFS ACL aware", and that
Veritas NetBackup is not. GNU tar is not (at this time); Joerg's "star"
probably will be Real Soon Now. Feel free to correct me if I'm wrong.
What about other utilities like Samba and rsync? We'd like to share
out
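On the tar/cpio side, the usual invocations (per the ZFS docs; paths here are
hypothetical) look like this:

  # Solaris tar: the p option preserves ZFS/NFSv4 ACLs
  tar cpf /backup/home.tar /tank/home
  # Solaris cpio: -P preserves ACLs (pass mode shown)
  cd /tank/home && find . -depth -print | cpio -pdmP /tank/copy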
Interesting discussion. I've often been impressed at how NetApp-like
the overall ZFS feature-set is (which implies that I like NetApp's). Is it
verboten to compare ZFS to NetApp? I hope not.
NetApp has two ways of making snapshots. There is a set of automatic
snapshots, which are created, rotate a