[EMAIL PROTECTED] said:
Solved... well, at least a workaround.
. . .
had to boot another version of Solaris, 9 in this case, and used format -e to
wipe the EFI label, so this is a bug; not sure if it's a duplicate of one of
the numerous other EFI bugs on this list so I will let one of the zfs
[EMAIL PROTECTED] said:
There's no reason at all why you can't do this. The only thing preventing
most file systems from taking advantage of "adjustable" replication is that
they don't have the integrated volume management capabilities that ZFS does.
And in fact, Sun's own QFS can do this, on
Folks,
I realize this thread has run its course, but I've got a variant of
the original question: What performance problems or anomalies might
one see if mixing both whole disks _and_ slices within the same pool?
I have in mind some Sun boxes (V440, T2000, X4200) with four internal
drives.
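For concreteness, a hypothetical layout of that kind might look like the sketch below (device names are assumptions, not from the post). The relevant behavioral wrinkle is that ZFS enables the disk write cache only for whole-disk vdevs, so a pool mixing both may behave unevenly under sync-heavy loads:

```shell
# Hypothetical mixed pool on a 4-drive box: the boot disks contribute
# a slice each, the other two drives are given to ZFS whole.
# ZFS turns on the write cache for the whole disks but not for the
# slices, so the two mirror vdevs may perform differently.
zpool create tank \
    mirror c1t2d0 c1t3d0 \
    mirror c1t0d0s7 c1t1d0s7
zpool status tank
```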
[EMAIL PROTECTED] said:
While troubleshooting a full-disk scenario I booted from DVD after adding
two new disks. Still under DVD boot I created a pool from those two disks
and moved iso images I had downloaded to the zfs filesystem. Next I fixed
my grub, exported the zpool and rebooted.
[EMAIL PROTECTED] said:
I was talking about the huge gap in storage solutions from Sun for the
middle-ground. While $24,000 is a wonderful deal, it's absolute overkill for
what I'm thinking about doing. I was looking for more around 6-8 drives.
How about a Sun V40z? It's available with up to
[EMAIL PROTECTED] said:
. . .
realize that the pool is now in use by the other host. That leads to two
systems using the same zpool which is not nice.
Is there any solution to this problem, or do I have to get Sun Cluster 3.2 if
I want to serve same zpools from many hosts? We may try Sun
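Without Sun Cluster or another failover framework, ZFS assumes a pool is imported on exactly one host at a time. A minimal sketch of the manual hand-off (hostnames and pool name assumed):

```shell
# On the host that currently owns the pool:
zpool export tank        # flushes and releases the pool cleanly
# Then, and only then, on the other host:
zpool import tank
# Forcing it with 'zpool import -f' while the first host still has
# the pool imported produces exactly the two-writers situation
# described above.
```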
Albert Chin said:
Well, ZFS with HW RAID makes sense in some cases. However, it seems that if
you are unwilling to lose 50% disk space to RAID 10 or two mirrored HW RAID
arrays, you either use RAID 0 on the array with ZFS RAIDZ/RAIDZ2 on top of
that or a JBOD with ZFS RAIDZ/RAIDZ2 on top of
I wrote:
Just thinking out loud here. Now I'm off to see what kind of performance
cost there is, comparing (with 400GB disks):
Simple ZFS stripe on one 2198GB LUN from a 6+1 HW RAID5 volume
8+1 RAID-Z on nine 244.2GB LUNs from a 6+1 HW RAID5 volume
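As a quick sanity check on those two layouts, the raw capacity arithmetic (a sketch; it ignores ZFS metadata overhead and LUN-size rounding):

```shell
# Usable capacity: the simple stripe keeps the whole LUN, while
# 8+1 RAID-Z gives up one LUN's worth of space to parity.
stripe_gb=2198
raidz_raw=$(awk 'BEGIN{printf "%.1f", 9*244.2}')     # 2197.8GB presented
raidz_usable=$(awk 'BEGIN{printf "%.1f", 8*244.2}')  # 1953.6GB usable
echo "stripe=${stripe_gb}GB raidz_raw=${raidz_raw}GB raidz_usable=${raidz_usable}GB"
```

So the two configurations present almost identical raw space; the RAID-Z layout trades roughly one LUN of capacity for ZFS-level redundancy on top of the hardware RAID5.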
[EMAIL PROTECTED] said:
[EMAIL PROTECTED] said:
That is the part of your setup that puzzled me. You took the same 7-disk
RAID5 set and split it into 9 LUNs. The Hitachi likely splits the virtual
disk into 9 contiguous partitions so each LUN maps back to different parts
of the 7 disks. I speculate that ZFS thinks
I had followed with interest the turn off NV cache flushing thread, in
regard to doing ZFS-backed NFS on our low-end Hitachi array:
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg05000.html
In short, if you have non-volatile cache, you can configure the array
to ignore the ZFS
[EMAIL PROTECTED] said:
The reality is that
ZFS turns on the write cache when it owns the
whole disk.
_Independently_ of that,
ZFS flushes the write cache when ZFS needs to insure
that data reaches stable storage.
The point is that the flushes occur whether or not
[EMAIL PROTECTED] said:
How did the ZFS striped on 7 slices of a FC-SATA LUN via NFS work _146 times
faster_ than ZFS on 1 slice of the same LUN via NFS???
Well, I do have more info to share on this issue, though how it worked
faster in that test still remains a mystery. Folks may
[EMAIL PROTECTED] said:
The only obvious thing would be if the exported ZFS filesystems were
initially mounted at a point in time when zil_disable was non-null.
No changes have been made to zil_disable. It's 0 now, and we've never
changed the setting. Export/import doesn't appear to change
Greetings,
In looking for inexpensive JBOD and/or RAID solutions to use with ZFS, I've
run across the recent VTrak SAS/SATA systems from Promise Technologies,
specifically their E-class and J-class series:
E310f FC-connected RAID:
[EMAIL PROTECTED] said:
This looks similar to the recently announced Sun StorageTek 2500 Low Cost
Array product line. http://www.sun.com/storagetek/disk_systems/workgroup/2500/
Wonder how I missed those. Oh, probably because you can't see them
on store.sun.com/shop.sun.com. On paper, there
[EMAIL PROTECTED] said:
The scsi_vhci multipathing driver doesn't just work with Sun's FC stack, it
also works with SAS (at least, it does in snv_63 and ... soon .. with patches
for s10).
Yes, it's nice to see that's coming; and that FC and SAS are treated the same.
But I'm at S10U3 right now.
[EMAIL PROTECTED] said:
bash-3.00# uname -a
SunOS nfs-10-1.srv 5.10 Generic_125100-04 sun4u sparc SUNW,Sun-Fire-V440
zil_disable set to 1. Disks are over FCAL from a 3510.
bash-3.00# dtrace -n 'fbt::*SYNCHRONIZE*:entry{printf("%Y", walltimestamp);}'
dtrace: description
[EMAIL PROTECTED] said:
And I did another performance test by copying a 512MB file into a ZFS pool
created from 1 LUN only, and the test result was the same - 12 sec!?
NOTE : server V240, solaris10(11/06), 2GB RAM, connected to HDS storage type
AMS500 with two HBA type qlogic QLA2342.
Any
[EMAIL PROTECTED] said:
- I will try your test.
- But how does the ZFS cache affect my test?
You can measure this yourself. Try running the test both with and without
the sync command at the end. You should see a faster completion time
without the sync, but not all data will have made it to
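A generic version of that experiment (scratch paths are placeholders; run it against a ZFS-backed path to see the effect described). The first copy can return before the data is on stable storage; the second includes the flush:

```shell
# Create a 64MB scratch file, then time a plain copy versus a copy
# followed by sync.  The difference between the two times is roughly
# the cost of pushing the cached data to stable storage.
dd if=/dev/zero of=/tmp/testfile bs=1024k count=64 2>/dev/null
time cp /tmp/testfile /tmp/copy1
time sh -c 'cp /tmp/testfile /tmp/copy2; sync'
rm -f /tmp/testfile /tmp/copy1 /tmp/copy2
```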
[EMAIL PROTECTED] said:
Because of a problem with EMC PowerPath we need to change the
configuration of a ZFS pool, changing emcpower?g devices (EMC PowerPath
created devices) to the underlying c#t#d# devices (Solaris paths to those devices).
. . .
You should be able to export the pool, zpool export
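The rest of that suggestion presumably looks something like the following (pool name assumed). ZFS identifies pool members by their on-disk labels, not by device path, so a re-import that scans /dev/dsk picks up the native names:

```shell
zpool export tank
zpool import -d /dev/dsk tank   # scan /dev/dsk instead of the emcpower links
zpool status tank               # members should now appear as c#t#d# devices
```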
[EMAIL PROTECTED] said:
Why can't the NFS performance match that of SSH?
Hi Albert,
My first guess is the NFS vs array cache-flush issue. Have you configured
the 6140 to ignore SYNCHRONIZE_CACHE requests? That'll make a huge difference
for NFS clients of ZFS file servers.
Also, you might
[EMAIL PROTECTED] said:
On 5/30/07, Ian Collins [EMAIL PROTECTED] wrote:
How about 8 two way mirrors between shelves and a couple of hot spares?
That's fine and good, but then losing just one disk from each shelf fast
enough means the whole array is gone. Then one strong enough power
[EMAIL PROTECTED] said:
Richard Elling wrote:
For the time being, these SATA disks will operate in IDE compatibility mode,
so don't worry about the write cache. There is some debate about whether
the write cache is a win at all, but that is another rat hole. Go ahead
and split off some
[EMAIL PROTECTED] said:
Attached below are the errors. But the question still remains: is ZFS only happy
with JBOD disks and not with SAN storage with hardware RAID? Thanks
ZFS works fine on our SAN here. You do get a kernel panic (Solaris-10U3)
if a LUN disappears for some reason (without ZFS-level
[EMAIL PROTECTED] said:
With this OS version, format is giving lines such as:
9. c2t2104D9600099d0 DEFAULT cyl 48638 alt 2 hd 255 sec 63
/[EMAIL PROTECTED],0/pci10de,[EMAIL PROTECTED]/pci1077,[EMAIL
PROTECTED]/[EMAIL PROTECTED],0/[EMAIL PROTECTED],0
whereas, again to my
[EMAIL PROTECTED] said:
But I don't see how copying a label will do any good. Won't that just
confuse ZFS and make it think it's talking to one of the other disks?
No, the disk label doesn't contain any ZFS info, it just tells the disk
drivers (scsi_vhci, in this case) where the disk slices
[EMAIL PROTECTED] said:
The situation: a raidz array of three 500GB disks. One disk breaks and you
replace it with a new one. But the new 500GB disk is slightly smaller than
the smallest disk in the array.
. . .
So I figure the only way to build smaller-than-max-disk-size functionality
into
. . .
Use JBODs. Or tell the cache controllers to ignore
the flushing requests.
[EMAIL PROTECTED] said:
Unfortunately the HP EVA can't do it. About the 9900V, it is really fast (64GB
cache helps a lot) and reliable. 100% uptime in years. We'll never touch it
to solve a ZFS problem.
On our
[EMAIL PROTECTED] said:
# zpool clear storage
cannot open 'storage': pool is unavailable
Bother...
Greetings,
It looks to me like maybe the device names changed with the controller
swap you mentioned. Possibly the new device has not been fully
recognized by the OS yet. Maybe a cfgadm -al
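A hypothetical recovery sequence along those lines (the commands are standard Solaris tools; the order and whether an export is needed first is a judgment call):

```shell
cfgadm -al            # list attachment points; configure any unconfigured ones
devfsadm -Cv          # rebuild /dev links after the controller swap
zpool import          # ask ZFS which pools it can see under the new names
zpool import storage  # members are found by on-disk label, whatever the path
```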
[EMAIL PROTECTED] said:
I'm trying to add filesystems from two different pools to a zone but can't
seem to find any mention of how to do this in the docs.
I tried this but the second set overwrites the first one.
add dataset
set name=pool1/fs1
set name=pool2/fs2
end
Is this possible
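It is possible; each dataset needs its own add/end clause rather than two set names inside one. A zonecfg transcript sketch (zone name assumed):

```shell
zonecfg -z myzone
zonecfg:myzone> add dataset
zonecfg:myzone:dataset> set name=pool1/fs1
zonecfg:myzone:dataset> end
zonecfg:myzone> add dataset
zonecfg:myzone:dataset> set name=pool2/fs2
zonecfg:myzone:dataset> end
zonecfg:myzone> exit
```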
[EMAIL PROTECTED] said:
Duh... makes sense.
Oh, I dunno, I think your first try makes sense, too. That's what
I tried to do my first time out. Maybe the zones team will get
around to supporting multiple datasets in one clause someday
Regards,
Marion
Greetings,
Last April, in this discussion...
http://www.opensolaris.org/jive/thread.jspa?messageID=143517
...we never found out how (or if) the Sun 6120 (T4) array can be configured
to ignore cache flush (sync-cache) requests from hosts. We're about to
reconfigure a 6120 here for use
[EMAIL PROTECTED] said:
That link specifically mentions new Solaris 10 release, so I am assuming
that means going from like u4 to Sol 10 u5, and that shouldn't cause a
problem when doing plain patchadd's (w/o live upgrade). If so, then I am fine
with those warnings and can use zfs with zones'
[EMAIL PROTECTED] said:
They clearly suggest to disable cache flush:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#FLUSH
It seems to be the only serious article on the net about this subject.
Could someone here comment on this tuning suggestion? My cu is running
[EMAIL PROTECTED] said:
Interesting. The HDS folks I talked to said the array no-ops the cache sync.
Which models were you using? Midrange only, right?
HDS modular product -- ours is 9520V, which was the smallest available.
It has a mix of FC and SATA drives (yes, really).
Check the HDS
[EMAIL PROTECTED] said:
What are the approaches to finding what external USB disks are currently
connected? I'm starting on backup scripts, and I need to check which
volumes are present before I figure out what to back up to them. I
. . .
In addition to what others have suggested so far,
[EMAIL PROTECTED] said:
You are confusing unrecoverable disk errors (which are rare but orders of
magnitude more common) with otherwise *undetectable* errors (the occurrence
of which is at most once in petabytes by the studies I've seen, rather than
once in terabytes), despite my attempt to
[EMAIL PROTECTED] said:
When I modify ZFS FS properties I get device busy:
-bash-3.00# zfs set mountpoint=/mnt1 pool/zfs1
cannot unmount '/mnt': Device busy
Do you know how to identify the process accessing this FS? fuser doesn't work
with ZFS!
Actually, fuser works fine with ZFS here. One
[EMAIL PROTECTED] said:
I have a set of threads each doing random reads to about 25% of its own,
previously written, large file ... a test run will read in about 20GB on a
server with 2GB of RAM
. . .
after several successful runs of my test application, some run of my test
will be running
[EMAIL PROTECTED] said:
I'd take a look at bonnie++
http://www.sunfreeware.com/programlistintel10.html#bonnie++
Also filebench:
http://www.solarisinternals.com/wiki/index.php/FileBench
You'll see the most difference between 5x9 and 9x5 in small random reads:
[EMAIL PROTECTED] said:
I feel like we're being hung out to dry here. I've got 70TB on 9 various
Solaris 10 u4 servers, with different data sets. All of these are NFS
servers. Two servers have a ton of small files, with a lot of read and
write updating, and NFS performance on these are
[EMAIL PROTECTED] said:
You still need interfaces, of some kind, to manage the device. Temp sensors?
Drive fru information? All that information has to go out, and some in, over
an interface of some sort.
Looks like the Sun 2530 array recently added in-band management over the
SAS (data)
[EMAIL PROTECTED] said:
Depending on needs for space vs. performance, I'd probably pick either 5*9 or
9*5, with 1 hot spare.
[EMAIL PROTECTED] said:
How can you check the speed? (I'm a total newbie on Solaris.)
We're deploying a new Thumper w/750GB drives, and did space vs performance
[EMAIL PROTECTED] said:
FYI, you can use the '-c' option to compare results from various runs and
have one single report to look at.
That's a handy feature. I've added a couple of such comparisons:
http://acc.ohsu.edu/~hakansom/thumper_bench.html
Marion
[EMAIL PROTECTED] said:
Your findings for random reads with or without NCQ match my findings:
http://blogs.sun.com/erickustarz/entry/ncq_performance_analysis
Disabling NCQ looks like a very tiny win for the multi-stream read case. I
found a much bigger win, but i was doing RAID-0 instead
[EMAIL PROTECTED] said:
Here are some performance numbers. Note that, when the application server
used a ZFS file system to save its data, the transaction took TWICE as long.
For some reason, though, iostat is showing 5x as much disk writing (to the
physical disks) on the ZFS partition. Can
[EMAIL PROTECTED] said:
One thought I had was to unconfigure the bad disk with cfgadm. Would that
force the system back into the 'offline' response?
In my experience (X4100 internal drive), that will make ZFS stop trying
to use it. It's also a good idea to do this before you hot-unplug the
[EMAIL PROTECTED] said:
difference my tweaks are making. Basically, the problem users experience,
when the load shoots up are huge latencies. An ls on a non-cached
directory, which usually is instantaneous, will take 20, 30, 40 seconds or
more. Then when the storage array catches up,
[EMAIL PROTECTED] said:
It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP.
Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty
handily pull 120MB/sec from it, and write at over 100MB/sec. It falls apart
more on random I/O. The server/initiator side is a
[EMAIL PROTECTED] said:
I also tried using O_DSYNC, which stops the pathological behaviour but makes
things pretty slow - I only get a maximum of about 20MBytes/sec, which is
obviously much less than the hardware can sustain.
I may misunderstand this situation, but while you're waiting for
[EMAIL PROTECTED] said:
Some of us are still using Solaris 10 since it is the version of Solaris
released and supported by Sun. The 'filebench' software from SourceForge
does not seem to install or work on Solaris 10. The 'pkgadd' command
refuses to recognize the package, even when it is
[EMAIL PROTECTED] said:
I'm creating a zfs volume, and sharing it with zfs set shareiscsi=on
poolname/volume. I can access the iSCSI volume without any problems, but IO
is terribly slow, as in five megabytes per second sustained transfers.
I've tried creating an iSCSI target stored on a UFS
[EMAIL PROTECTED] said:
It may not be relevant, but I've seen ZFS add weird delays to things too. I
deleted a file to free up space, but when I checked no more space was
reported. A second or two later the space appeared.
Run the sync command before you do the du. That flushes the ARC
[EMAIL PROTECTED] said:
This is what I get with the filebench-1.1.0_x86_pkg.tar.gz from SourceForge:
# pkgadd -d .
pkgadd: ERROR: no packages were found in
/home/bfriesen/src/benchmark/filebench
# ls
install/ pkginfo pkgmap reloc/
. . .
Um, cd .. and pkgadd -d . again. The
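That is, pkgadd's -d wants the directory *containing* the package directory, not the package directory itself. A sketch using the paths from the transcript:

```shell
cd /home/bfriesen/src/benchmark   # parent of the 'filebench' package dir
pkgadd -d . filebench             # now pkgadd can locate the package
```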
[EMAIL PROTECTED] said:
I am a little new to ZFS so please excuse my ignorance. I have a PowerEdge
2950 running Nevada B82 with an Apple Xraid attached over a fibre HBA. They
are formatted to JBOD with the pool configured as follows:
. . .
i have a filesystem (tpool4/seplog) shared over
[EMAIL PROTECTED] said:
It is also interesting to note that this system is now making negative
progress. I can understand the remaining time estimate going up with time,
but what does it mean for the % complete number to go down after 6 hours of
work?
Sorry I don't have any helpful
[EMAIL PROTECTED] said:
I am having trouble destroying a zfs file system (device busy) and fuser
isn't telling me who has the file open:
. . .
This situation appears to occur every night during a system test. The only
peculiar operation on the errant file system is that another system NFS
[EMAIL PROTECTED] said:
I'm curious about your array configuration above... did you create your
RAIDZ2 as one vdev or multiple vdev's? If multiple, how many? On mine, I
have all 10 disks set up as one RAIDZ2 vdev which is supposed to be near the
performance limit... I'm wondering how much I
[EMAIL PROTECTED] said:
AFAIK there is no way to tell resilvering to pause, so I want to detach the
inconsistent disk and attach it again tonight, when it won't affect users. To
do that I need to know which disk is inconsistent, but zpool status does not
show me any info in that regard.
Is
[EMAIL PROTECTED] said:
Seriously, I don't even care about the cost. Even with the smallest
capacity, four of those gives me 128GB of write cache supporting 680MB/s and
40k IOPS. Show me a hardware raid controller that can even come close to
that. Four of those will strain even 10GB/s
[EMAIL PROTECTED] said:
That's the one that's been an issue for me and my customers - they get billed
back for GB allocated to their servers by the back end arrays. To be more
explicit about the 'self-healing properties' - To deal with any fs
corruption situation that would traditionally
[EMAIL PROTECTED] said:
I took a snapshot of a directory in which I hold PDF files related to math.
I then added a 50MB PDF file from a CD (Oxford Math Reference; I strongly
recommend this to any math enthusiast) and did zfs list to see the size of
the snapshot (sheer curiosity). I don't have
[EMAIL PROTECTED] said:
We did ask our vendor, but we were just told that AVS does not support
x4500.
You might have to use the open-source version of AVS, but it's not
clear if that requires OpenSolaris or if it will run on Solaris-10.
Here's a description of how to set it up between two
[EMAIL PROTECTED] said:
It's interesting how the speed and optimisation of these maintenance
activities limit pool size. It's not just full scrubs. If the filesystem is
subject to corruption, you need a backup. If the filesystem takes two months
to back up / restore, then you need really
[EMAIL PROTECTED] said:
In general, such tasks would be better served by T5220 (or the new T5440 :-)
and J4500s. This would change the data paths from:
client --net-- T5220 --net-- X4500 --SATA-- disks to
client --net-- T5440 --SAS-- disks
With the J4500 you get the same storage
[EMAIL PROTECTED] said:
but Marion's is not really possible at all, and won't be for a while with
other groups' choice of storage-consumer platform, so it'd have to be
GlusterFS or some other goofy fringe FUSEy thing or not-very-general crude
in-house hack.
Well, of course the magnitude of
[EMAIL PROTECTED] said:
# ludelete beA
ERROR: cannot open 'pool00/zones/global/home': dataset does not exist
ERROR: cannot mount mount point /.alt.tmp.b-QY.mnt/home device
pool00/zones/global/home
ERROR: failed to mount file system pool00/zones/global/home on
/.alt.tmp.b-QY.mnt/home
[EMAIL PROTECTED] said:
I thought to look at df output before rebooting, and there are PAGES PAGES
like this:
/var/run/.patchSafeModeOrigFiles/usr/platform/FJSV,GPUZC-M/lib/libcpc.so.1
7597264 85240 7512024 2%/usr/platform/FJSV,GPUZC-M/lib/libcpc.so.1
. . .
Hundreds of
[EMAIL PROTECTED] said:
I think we found the choke point. The silver lining is that it isn't the
T2000 or ZFS. We think it is the new SAN, an Hitachi AMS1000, which has
7200RPM SATA disks with the cache turned off. This system has a very small
cache, and when we did turn it on for one of
[EMAIL PROTECTED] said:
Thanks for the tips. I'm not sure if they will be relevant, though. We
don't talk directly with the AMS1000. We are using a USP-VM to virtualize
all of our storage and we didn't have to add anything to the drv
configuration files to see the new disk (mpxio was
vincent_b_...@yahoo.com said:
Just wondering if (excepting the existing zones thread) there are any
compelling arguments to keep /var as it's own filesystem for your typical
Solaris server. Web servers and the like.
Well, it's been considered a best practice for servers for a lot of
years to
richard.ell...@sun.com said:
L2ARC arrived in NV at the same time as ZFS boot, b79, November 2007. It was
not back-ported to Solaris 10u6.
You sure? Here's output on a Solaris-10u6 machine:
cyclops 4959# uname -a
SunOS cyclops 5.10 Generic_137138-09 i86pc i386 i86pc
cyclops 4960# zpool
d...@yahoo.com said:
Any recommendations for an SSD to work with an X4500 server? Will the SSDs
used in the 7000 series servers work with X4500s or X4540s?
The Sun System Handbook (sunsolve.sun.com) for the 7210 appliance (an
X4540-based system) lists the logzilla device with this fine print:
The zilstat tool is very helpful, thanks!
I tried it on an X4500 NFS server, while extracting a 14MB tar archive,
both via an NFS client, and locally on the X4500 itself. Over NFS,
said extract took ~2 minutes, and showed peaks of 4MB/sec buffer-bytes
going through the ZIL.
When run locally on
, data has been fine. We also do tape backups
of these pools, of course.
Regards,
--
Marion Hakanson hakan...@ohsu.edu
OHSU Advanced Computing Center
n...@jnickelsen.de said:
As far as I know the situation with ATI is that, while ATI supplies
well-performing binary drivers for MS Windows (of course) and Linux, there is
no such thing for other OSs. So OpenSolaris uses standardized interfaces of
the graphics hardware, which have comparatively
casper@sun.com said:
I've upgraded my system from ufs to zfs (root pool).
By default, it creates a zvol for dump and swap.
. . .
So I removed the zvol swap and now I have a standard swap partition. The
performance is much better (night and day). The system is usable and I
don't know
james.ma...@sun.com said:
I'm not yet sure what's broken here, but there's something pathologically
wrong with the IO rates to the device during the ZFS tests. In both cases,
the wait queue is getting backed up, with horrific wait queue latency
numbers. On the read side, I don't understand why
bh...@freaks.com said:
Even with a very weak CPU the system is close to saturating the PCI bus for
reads with most configurations.
Nice little machine. I wonder if you'd get some of the bonnie numbers
increased if you ran multiple bonnie's in parallel. Even though the
sequential throughput
mi...@cc.umanitoba.ca said:
What would I look for with mpstat?
Look for a CPU (thread) that might be 100% utilized; Also look to see
if that CPU (or CPU's) has a larger number in the ithr column than all
other CPU's. The idea here is that you aren't getting much out of the
T2000 if only one
udip...@gmail.com said:
dick at nagual.nl wrote:
Maybe because on the fifth day some hardware failure occurred? ;-)
That would be which? The system works and is up and running beautifully.
OpenSolaris, as of now.
Running beautifully as long as the power stays on? Is it hard to believe
Greetings,
We have a small Oracle project on ZFS (Solaris-10), using a SAN-connected
array which is need of replacement. I'm weighing whether to recommend
a Sun 2540 array or a Sun J4200 JBOD as the replacement. The old array
and the new ones all have 7200RPM SATA drives.
I've been watching
bfrie...@simple.dallas.tx.us said:
Your IOPS don't seem high. You are currently using RAID-5, which is a poor
choice for a database. If you use ZFS mirrors you are going to unleash a
lot more IOPS from the available spindles.
RAID-5 may be poor for some database loads, but it's perfectly
jlo...@ssl.berkeley.edu said:
What's odd is we've checked a few hundred files, and most of them don't
seem to have any corruption. I'm thinking what's wrong is the metadata for
these files is corrupted somehow, yet we can read them just fine. I wish I
could tell which ones are really
bfrie...@simple.dallas.tx.us said:
No. I am suggesting that all Solaris 10 (and probably OpenSolaris systems)
currently have a software-imposed read bottleneck which places a limit on
how well systems will perform on this simple sequential read benchmark.
After a certain point (which is
asher...@versature.com said:
And, on that subject, is there truly a difference between Seagate's line-up
of 7200 RPM drives? They seem to now have a bunch:
. . .
Other manufacturers seem to have similar lineups. Is the difference going to
matter to me when putting a mess of them into a SAS
rswwal...@gmail.com said:
There is another type of failure that mirrors help with and that is
controller or path failures. If one side of a mirror set is on one
controller or path and the other on another then a failure of one will not
take down the set.
You can't get that with RAIDZn.
rswwal...@gmail.com said:
It's not the stripes that make a difference, but the number of controllers
there.
What's the system config on that puppy?
The zpool status -v output was from a Thumper (X4500), slightly edited,
since in our real-world Thumper, we use c6t0d0 in c5t4d0's place in the
vidar.nil...@palantir.no said:
I'm trying to move disks in a zpool from one SATA controller to another. It's
16 disks in 4x4 raidz. Just to see if it could be done, I moved one disk from
one raidz over to the new controller. The server was powered off.
. . .
zpool replace storage c10t7d0 c11t0d0
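For what it's worth, 'zpool replace' resilvers onto what ZFS treats as a new disk; for a disk that merely moved to a different controller, the usual approach is an export/import cycle (a sketch, using the pool name from the post):

```shell
zpool export storage
# ...power off, move the disks to the new controller, boot...
zpool import storage   # ZFS locates members by on-disk label, not device path
```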
j...@jamver.id.au said:
For a predominantly NFS server purpose, it really looks like a case of the
slog has to outperform your main pool for continuous write speed as well as
an instant response time as the primary criterion. Which might as well be a
fast (or group of fast) SSDs or 15kRPM
rswwal...@gmail.com said:
Yes, but if it's on NFS you can just figure out the workload in MB/s and use
that as a rough guideline.
I wonder if that's the case. We have an NFS server without NVRAM cache
(X4500), and it gets huge MB/sec throughput on large-file writes over NFS.
But it's
David Stewart wrote:
How do I identify which drive it is? I hear each drive spinning (I listened
to them individually) so I can't simply select the one that is not spinning.
You can try reading from each raw device, and looking for a blinky-light
to identify which one is active. If you don't
webcl...@rochester.rr.com said:
To verify data, I cannot depend on existing tools since diff is not large
file aware. My best idea at this point is to calculate and compare MD5 sums
of every file and spot check other properties as best I can.
Ray,
I recommend that you use rsync's -c to
mmusa...@east.sun.com said:
What benefit are you hoping zfs will provide in this situation? Examine
your situation carefully and determine what filesystem works best for you.
There are many reasons to use ZFS, but if your configuration isn't set up to
take advantage of those reasons, then
I wrote:
Is anyone else tired of seeing the word redundancy? (:-)
matthias.ap...@lanlabor.com said:
Only in a perfect world (tm) ;-)
IMHO there is no such thing as too much redundancy. In the real world the
possibilities of redundancy are only limited by money,
Sigh. I was just joking
jel+...@cs.uni-magdeburg.de said:
2nd) Never had a Sun STK RAID INT before. Actually my intention was to create
a zpool mirror of sd0 and sd1 for boot and logs, and a 2x2-way zpool mirror
with the 4 remaining disks. However, the controller seems not to support
JBODs :( - which is also bad,
knatte_fnatte_tja...@yahoo.com said:
Is rsync faster? As I have understood it, zfs send gives me an exact
replica, whereas rsync doesn't necessarily do that; maybe the ACLs are not
replicated, etc. Is this correct about rsync vs zfs send?
It is true that rsync (as of 3.0.5, anyway) does not
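A minimal send/receive sketch for comparison (dataset and host names assumed); the stream is a block-level replica of the snapshot, so ACLs and other dataset state come along by construction:

```shell
zfs snapshot tank/data@monday
zfs send tank/data@monday | ssh backuphost zfs receive backup/data
```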
opensolaris-zfs-disc...@mlists.thewrittenword.com said:
Is it really pointless? Maybe they want the insurance RAIDZ2 provides. Given
the choice between insurance and performance, I'll take insurance, though it
depends on your use case. We're using 5-disk RAIDZ2 vdevs.
. . .
Would love to
da...@elemental.org said:
Normally on UFS I would just take the 'nuke it from orbit' route and use clri
to wipe the directory's inode. However, clri doesn't appear to be zfs aware
(there's not even a zfs analog of clri in /usr/lib/fs/ zfs), and I don't
immediately see an option in zdb which
zfs...@jeremykister.com said:
# format -e c12t1d0
selecting c12t1d0
[disk formatted]
/dev/dsk/c3t11d0s0 is part of active ZFS pool dbzpool. Please see zpool(1M).
It is true that c3t11d0 is part of dbzpool. But why is Solaris upset about
c3t11 when I'm working with c12t1? So I checked