Re: [zfs-discuss] Petabyte pool?

2013-03-16 Thread Marion Hakanson
hakan...@ohsu.edu said: I get a little nervous at the thought of hooking all that up to a single server, and am a little vague on how much RAM would be advisable, other than as much as will fit (:-).  Then again, I've been waiting for something like pNFS/NFSv4.1 to be usable for gluing

[zfs-discuss] Petabyte pool?

2013-03-15 Thread Marion Hakanson
Greetings, Has anyone out there built a 1-petabyte pool? I've been asked to look into this, and was told low performance is fine, workload is likely to be write-once, read-occasionally, archive storage of gene sequencing data. Probably a single 10Gbit NIC for connectivity is sufficient. We've

Re: [zfs-discuss] Petabyte pool?

2013-03-15 Thread Marion Hakanson
rvandol...@esri.com said: We've come close:
  admin@mes-str-imgnx-p1:~$ zpool list
  NAME      SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
  datapool  978T   298T  680T  30%  1.00x  ONLINE  -
  syspool   278G   104G  174G  37%  1.00x  ONLINE  -
Using a Dell R720 head unit, plus a

Re: [zfs-discuss] Petabyte pool?

2013-03-15 Thread Marion Hakanson
Ray said: Using a Dell R720 head unit, plus a bunch of Dell MD1200 JBODs dual pathed to a couple of LSI SAS switches. Marion said: How many HBA's in the R720? Ray said: We have qty 2 LSI SAS 9201-16e HBA's (Dell resold[1]). Sounds similar in approach to the Aberdeen product another sender

[zfs-discuss] mpt_sas multipath problem?

2013-01-07 Thread Marion Hakanson
Greetings, We're trying out a new JBOD here. Multipath (mpxio) is not working, and we could use some feedback and/or troubleshooting advice. The OS is oi151a7, running on an existing server with a 54TB pool of internal drives. I believe the server hardware is not relevant to the JBOD issue,

Re: [zfs-discuss] mpt_sas multipath problem?

2013-01-07 Thread Marion Hakanson
On Jan 7, 2013, at 1:20 PM, Marion Hakanson hakan...@ohsu.edu wrote: Greetings, We're trying out a new JBOD here.  Multipath (mpxio) is not working, and we could use some feedback and/or troubleshooting advice. . . . richard.ell...@gmail.com said: Sometimes the mpxio detection doesn't work

Re: [zfs-discuss] mpt_sas multipath problem?

2013-01-07 Thread Marion Hakanson
richard.ell...@gmail.com said: Sometimes the mpxio detection doesn't work properly. You can try to whitelist them, https://www.illumos.org/issues/644 And I said: Thanks Richard, I was hoping I hadn't just made up my vague memory of such functionality. We'll give it a try. That did the
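
For reference, the whitelist mechanism mentioned above is the scsi-vhci-failover-override property in scsi_vhci.conf; a minimal sketch, with a made-up vendor/product pair (the vendor ID field is padded to 8 characters; take the real VID/PID from your drives' inquiry data):

  # /kernel/drv/scsi_vhci.conf (/etc/driver/drv/scsi_vhci.conf on illumos)
  # Force symmetric multipath handling for devices mpxio doesn't auto-detect.
  scsi-vhci-failover-override =
      "ACME    JBOD2000", "f_sym";

The driver has to re-read the file (update_drv -vf scsi_vhci, or a reboot) before the change takes effect.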

Re: [zfs-discuss] mpt_sas multipath problem?

2013-01-07 Thread Marion Hakanson
j...@opensolaris.org said: Output from 'prtconf -v' would help, as would a cogent description of what you are looking at to determine that MPxIO isn't working. Sorry James, I must've made a cut-and-paste-o and left out my description of the symptom. That being, 40 new drives show up as 80 new

Re: [zfs-discuss] cannot replace X with Y: devices have different sector alignment

2012-11-12 Thread Marion Hakanson
tron...@gmail.com said: That said, I've already migrated far too many times already. I really, really don't want to migrate the pool again, if it can be avoided. I've already migrated from raidz1 to raidz2 and then from raidz2 to mirror vdevs. Then, even though I already had a mix of 512b and

Re: [zfs-discuss] Seagate Constellation vs. Hitachi Ultrastar

2012-04-09 Thread Marion Hakanson
richard.ell...@richardelling.com said: We are starting to see a number of SAS HDDs that prefer logical-block to round-robin. I see this with late model Seagate and Toshiba HDDs. There is another, similar issue with recognition of multipathing by the scsi_vhci driver. Both of these are being

Re: [zfs-discuss] Basic ZFS Questions + Initial Setup Recommendation

2012-03-21 Thread Marion Hakanson
p...@kraus-haus.org said: Without knowing the I/O pattern, saying 500 MB/sec. is meaningless. Achieving 500MB/sec. with 8KB files and lots of random accesses is really hard, even with 20 HDDs. Achieving 500MB/sec. of sequential streaming of 100MB+ files is much easier. . . . For ZFS,

Re: [zfs-discuss] permissions

2012-02-10 Thread Marion Hakanson
capcas...@gmail.com said: I have a file that I can't delete, or change permissions or owner on. ls -v does not show any ACLs on the file, not even those for normal Unix rw etc. permissions; ls -l shows -rwx--. chmod gave an error of not owner for the owner!! and for root just says can't

Re: [zfs-discuss] L2ARC, block based or file based?

2012-01-13 Thread Marion Hakanson
mattba...@gmail.com said: We're looking at buying some additional SSD's for L2ARC (as well as additional RAM to support the increased L2ARC size) and I'm wondering if we NEED to plan for them to be large enough to hold the entire file or if ZFS can cache the most heavily used parts of a single

Re: [zfs-discuss] Solaris Based Systems Lock Up - Possibly ZFS/memory related?

2011-10-31 Thread Marion Hakanson
lmulc...@marinsoftware.com said: . . . The MySQL server is: Dell R710 / 80G Memory with two daisy chained MD1220 disk arrays - 22 Disks each - 600GB 10k RPM SAS Drives Storage Controller: LSI, Inc. 1068E (JBOD) I have also seen similar symptoms on systems with MD1000 disk arrays containing

Re: [zfs-discuss] Cannot format 2.5TB ext disk (EFI)

2011-06-23 Thread Marion Hakanson
kitty@oracle.com said: It wouldn't let me:
  # zpool create test_pool c5t0d0p0
  cannot create 'test_pool': invalid argument for this pool operation
Try without the p0, i.e. just:
  # zpool create test_pool c5t0d0
Regards, Marion

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-08 Thread Marion Hakanson
jp...@cam.ac.uk said: I can't speak for this particular situation or solution, but I think in principle you are wrong. Networks are fast. Hard drives are slow. Put a 10G connection between your storage and your front ends and you'll have the bandwidth[1]. Actually if you really were

Re: [zfs-discuss] Sun T3-2 and ZFS on JBODS

2011-03-06 Thread Marion Hakanson
sigbj...@nixtra.com said: I will do some testing on the loadbalance on/off. We have nearline SAS disks, which do have dual paths from the disk, however they are still just 7200rpm drives. Are you using SATA, SAS, or SAS-nearline in your array? Do you have multiple SAS connections to your

Re: [zfs-discuss] Sun T3-2 and ZFS on JBODS

2011-03-02 Thread Marion Hakanson
sigbj...@nixtra.com said: I've played around with turning mpxio on and off on the mpt_sas driver; disabling it increased the performance from 30 MB/sec, but it's still far from the original performance. I've attached some dumps of zpool iostat before and after reinstallation. I find zpool

Re: [zfs-discuss] SIL3114 and sparc solaris 10

2011-02-25 Thread Marion Hakanson
nat...@tuneunix.com said: I can confirm that on *at least* 4 different cards - from different board OEMs - I have seen single-bit ZFS checksum errors that went away immediately after removing the 3114-based card. I stepped up to the 3124 (PCI-X up to 133MHz) and 3132 (PCIe) and have

Re: [zfs-discuss] Drive i/o anomaly

2011-02-07 Thread Marion Hakanson
matt.connolly...@gmail.com said:
                   extended device statistics
  r/s    w/s   kr/s    kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
  1.2   36.0  153.6  4608.0   1.2   0.3    31.9     9.3  16  18  c12d0
  0.0  113.4    0.0  7446.7   0.8   0.1     7.0     0.5  15   5

Re: [zfs-discuss] Drive i/o anomaly

2011-02-07 Thread Marion Hakanson
matt.connolly...@gmail.com said: After putting the drive online (and letting the resilver complete) I took the slow drive (c8t1d0 western digital green) offline and the system ran very nicely. It is a 4k sector drive, but I thought zfs recognised those drives and didn't need any special

Re: [zfs-discuss] reliable, enterprise worthy JBODs?

2011-01-25 Thread Marion Hakanson
p...@bolthole.com said: Any other suggestions for (large-)enterprise-grade, supported JBOD hardware for ZFS these days? Either fibre or SAS would be okay. As others have said, it depends on your definition of enterprise-grade. We're using Dell's MD1200 SAS JBOD's with Solaris-10 and ZFS. Ours

Re: [zfs-discuss] reliable, enterprise worthy JBODs?

2011-01-25 Thread Marion Hakanson
tmcmah...@yahoo.com said: Interesting. Did you switch to the load-balance option? Yes, I ended up with load-balance=none. Here's a thread about it in the storage-discuss mailing list: http://opensolaris.org/jive/thread.jspa?threadID=130975&tstart=90 Regards, Marion
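
That setting is likewise a scsi_vhci.conf property; the sketch below assumes the stock property name and values ("round-robin", "logical-block", or "none"):

  # /kernel/drv/scsi_vhci.conf
  load-balance="none";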

Re: [zfs-discuss] ZFS slows down over a couple of days

2011-01-12 Thread Marion Hakanson
Stephan, The vmstat shows you are not actually short of memory; The pi and po columns are zero, so the system is not having to do any paging, and it seems unlike the system is slow directly because of RAM shortage. With the ARC, it's not unusual for vmstat to show little free memory, but the
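
A quick way to check both halves of this, assuming stock Solaris tools (zfs:0:arcstats is the standard ARC kstat):

  $ vmstat 5 5                      # pi/po columns: page-ins/page-outs per sec
  $ kstat -p zfs:0:arcstats:size    # current ARC size, in bytes
  $ kstat -p zfs:0:arcstats:c_max   # ARC's maximum target size

If pi/po stay at zero while free memory looks small, the ARC is likely just consuming otherwise-idle RAM, which it gives back under pressure.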

Re: [zfs-discuss] raidz recovery

2010-12-13 Thread Marion Hakanson
z...@lordcow.org said: For example when I 'dd if=/dev/zero of=/dev/ad6', or physically remove the drive for a while, then 'online' the disk, after it resilvers I'm typically left with the following after scrubbing:
  r...@file:~# zpool status
    pool: pool
   state: ONLINE
  status: One or more

Re: [zfs-discuss] ZFS with STK raid card w battery

2010-10-24 Thread Marion Hakanson
replic...@gmail.com said: One other question, how can I ensure that the controller's cache is really being used? (arcconf doesn't seem to show much). Since ZFS would flush the data as soon as it can, I am curious to see if the caching is making a difference or not. Share out a dataset on the

Re: [zfs-discuss] Performance issues with iSCSI under Linux

2010-10-14 Thread Marion Hakanson
rewar...@hotmail.com said: ok... we're making progress. After swapping the LSI HBA for a Dell H800 the issue disappeared. Now, I'd rather not use those controllers because they don't have a JBOD mode. We have no choice but to make individual RAID0 volumes for each disk, which means we need

[zfs-discuss] possible ZFS-related panic?

2010-09-02 Thread Marion Hakanson
Folks, Has anyone seen a panic traceback like the following? This is Solaris-10u7 on a Thumper, acting as an NFS server. The machine was up for nearly a year, I added a dataset to an existing pool, set compression=on for the first time on this system, loaded some data in there (via rsync), then

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Marion Hakanson
markwo...@yahoo.com said: So the question is with a proper ZIL SSD from SUN, and a RAID10... would I be able to support all the VM's or would it still be pushing the limits a 44 disk pool? If it weren't a closed 7000-series appliance, I'd suggest running the zilstat script. It should make it
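
On a system where you can run it, zilstat (Richard Elling's DTrace-based script) is invoked roughly like this, assuming the script is on your PATH and you have DTrace privileges:

  # zilstat 1 10    # one-second samples, ten of them

Sustained non-zero ZIL traffic during the VM workload is the sign that a slog SSD will pay off.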

Re: [zfs-discuss] Performance Testing

2010-08-11 Thread Marion Hakanson
p...@kraus-haus.org said: Based on these results, and our capacity needs, I am planning to go with 5 disk raidz2 vdevs. I did similar tests with a Thumper in 2008, with X4150/J4400 in 2009, and more recently comparing X4170/J4400 and X4170/MD1200:
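
For illustration, a pool built the way described, from two 5-disk raidz2 vdevs (device names are placeholders):

  # zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
      raidz2 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0

Each vdev yields three disks' worth of usable space and tolerates any two drive failures within the vdev.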

Re: [zfs-discuss] Please trim posts

2010-06-18 Thread Marion Hakanson
doug.lin...@merchantlink.com said: Apparently, before Outlook there WERE no meetings, because it's clearly impossible to schedule one without it. Don't tell my boss, but I use Outlook for the scheduling, and fetchmail plus procmail to download email out of Exchange and into my favorite email

Re: [zfs-discuss] one more time: pool size changes

2010-06-03 Thread Marion Hakanson
frank+lists/z...@linetwo.net said: I remember, and this was a few years back but I don't see why it would be any different now, we were trying to add drives 1-2 at a time to medium-sized arrays (don't buy the disks until we need them, to hold onto cash), and the Netapp performance kept going

Re: [zfs-discuss] Write retry errors to SSD's on SAS backplane (mpt)

2010-03-25 Thread Marion Hakanson
rvandol...@esri.com said: We have a Silicon Mechanics server with a SuperMicro X8DT3-F (Rev 1.02) (onboard LSI 1068E (firmware 1.28.02.00) and a SuperMicro SAS-846EL1 (Rev 1.1) backplane. . . . The system is fully patched Solaris 10 U8, and the mpt driver is version 1.92: Since you're

Re: [zfs-discuss] j4500 cache flush

2010-03-05 Thread Marion Hakanson
erik.trim...@sun.com said: All J4xxx systems are really nothing more than huge SAS expanders hooked to a bunch of disks, so cache flush requests will either come from ZFS or any attached controller. Note that I /think/ most non-RAID controllers don't initiate their own cache flush

Re: [zfs-discuss] j4500 cache flush

2010-03-05 Thread Marion Hakanson
bene...@yahoo.com said: Marion - Do you happen to know which SAS HBA it applies to? Here's the article: http://sunsolve.sun.com/search/document.do?assetkey=1-66-248487-1 The title is Write-Caching on JBOD SATA Drive is Erroneously Enabled by Default When Connected to Non-RAID SAS HBAs. By
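
You can inspect or change a drive's write cache yourself with format -e, for drives whose driver exposes cache control (device name below is a placeholder):

  # format -e c2t0d0
  format> cache
  cache> write_cache
  write_cache> display
  Write Cache is enabled
  write_cache> disable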

Re: [zfs-discuss] Who is using ZFS ACL's in production?

2010-03-02 Thread Marion Hakanson
car...@taltos.org said: NetApp does _not_ expose an ACL via NFSv3, just old school POSIX mode/owner/ group info. I don't know how NetApp deals with chmod, but I'm sure it's documented. The answer is, It depends. If the NetApp volume is NTFS-only permissions, then chmod from the Unix/NFS

Re: [zfs-discuss] Poor ZIL SLC SSD performance

2010-02-19 Thread Marion Hakanson
felix.buenem...@googlemail.com said: I think I'll try one of those inexpensive battery-backed PCI RAM drives from Gigabyte and see how many IOPS they can pull. Another poster, Tracy Bernath, got decent ZIL IOPS from an OCZ Vertex unit. Dunno if that's sufficient for your purposes, but it

Re: [zfs-discuss] Identifying firmware version of SATA controller (LSI)

2010-02-05 Thread Marion Hakanson
rvandol...@esri.com said: I'm trying to figure out where I can find the firmware on the LSI controller... are the bootup messages the only place I could expect to see this? prtconf and prtdiag both don't appear to give firmware information. . . . Solaris 10 U8 x86. The raidctl command is

Re: [zfs-discuss] Building big cheap storage system. What hardware to use?

2010-01-29 Thread Marion Hakanson
fjwc...@gmail.com said: Yes, if I was to re-do the hardware config for these servers, using what I know now, I would do things a little differently: . . . - find a case with more than 24 drive bays (any way to get a Thumper without the extra hardware/software?) ;) . . . It's called the

Re: [zfs-discuss] Zpool creation best practices

2009-12-31 Thread Marion Hakanson
mijoh...@gmail.com said: I've never had a lun go bad but bad things do happen. Does anyone else use ZFS in this way? Is this an unrecommended setup? We used ZFS like this on a Hitachi array for 3 years. Worked fine, not one bad block/checksum error detected. Still using it on an old Sun

Re: [zfs-discuss] getting decent NFS performance

2009-12-23 Thread Marion Hakanson
You can always replace them when funding for your Zeus SSD's comes in (:-). Regards, -- Marion Hakanson hakan...@ohsu.edu OHSU Advanced Computing Center

Re: [zfs-discuss] [storage-discuss] ZFS on JBOD storage, mpt driver issue - server not responding

2009-11-11 Thread Marion Hakanson
m...@cybershade.us said: So at this point this looks like an issue with the MPT driver or these SAS cards (I tested two) when under heavy load. I put the latest firmware from LSI's web site (v1.29.00) on the SAS card without any change; the server still locks. Any ideas, suggestions how to

Re: [zfs-discuss] Solaris disk confusion ?

2009-11-04 Thread Marion Hakanson
zfs...@jeremykister.com said: unfortunately, fdisk won't help me at all:
  # fdisk -E /dev/rdsk/c12t1d0p0
  # zpool create -f testp c12t1d0
  invalid vdev specification
  the following errors must be manually repaired:
  /dev/dsk/c3t11d0s0 is part of active ZFS pool dbzpool. Please see zpool(1M).

Re: [zfs-discuss] (home NAS) zfs and spinning down of drives

2009-11-04 Thread Marion Hakanson
jimkli...@cos.ru said: Thanks for the link, but the main concern in spinning down drives of a ZFS pool is that ZFS by default is not so idle. Every 5 to 30 seconds it closes a transaction group (TXG) which requires a synchronous write of metadata to disk. You know, it's just going to

Re: [zfs-discuss] Solaris disk confusion ?

2009-11-03 Thread Marion Hakanson
I said: You'll need to give the same dd treatment to the end of the disk as well; ZFS puts copies of its labels at the beginning and at the end. Oh, and zfs...@jeremykister.com said: I'm not sure what you mean here - I thought p0 was the entire disk in x86 - and s2 was the whole disk in the
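
A sketch of the full label wipe under discussion, assuming the p0 device covers the whole disk and prtvtoc can still report its sector count:

  # Front labels (L0, L1) occupy the first 512 KB:
  dd if=/dev/zero of=/dev/rdsk/c12t1d0p0 bs=512k count=1
  # Back labels (L2, L3) occupy the last 512 KB:
  SECTORS=$(prtvtoc /dev/rdsk/c12t1d0p0 | awk '$3 == "sectors" {print $2}')
  dd if=/dev/zero of=/dev/rdsk/c12t1d0p0 bs=512 \
      seek=$(( SECTORS - 1024 )) count=1024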

Re: [zfs-discuss] Solaris disk confusion ?

2009-11-02 Thread Marion Hakanson
zfs...@jeremykister.com said:
  # format -e c12t1d0
  selecting c12t1d0
  [disk formatted]
  /dev/dsk/c3t11d0s0 is part of active ZFS pool dbzpool. Please see zpool(1M).
It is true that c3t11d0 is part of dbzpool. But why is Solaris upset about c3t11 when I'm working with c12t1?? So I checked

Re: [zfs-discuss] Sniping a bad inode in zfs?

2009-10-27 Thread Marion Hakanson
da...@elemental.org said: Normally on UFS I would just take the 'nuke it from orbit' route and use clri to wipe the directory's inode. However, clri doesn't appear to be zfs-aware (there's not even a zfs analog of clri in /usr/lib/fs/zfs), and I don't immediately see an option in zdb which

Re: [zfs-discuss] zfs send... too slow?

2009-10-26 Thread Marion Hakanson
knatte_fnatte_tja...@yahoo.com said: Is rsync faster? As I have understood it, zfs send gives me an exact replica, whereas rsync doesn't necessarily do that; maybe the ACLs are not replicated, etc. Is this correct about rsync vs zfs send? It is true that rsync (as of 3.0.5, anyway) does not
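
The replication being compared, in its simplest form (pool and host names are placeholders):

  # zfs snapshot tank/data@copy1
  # zfs send tank/data@copy1 | ssh otherhost zfs receive backup/data

Because send/receive copies the dataset below the POSIX layer, ACLs and other filesystem metadata arrive exactly; rsync rebuilds files through the POSIX API and can miss them.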

Re: [zfs-discuss] Performance problems with Thumper and 7TB ZFS pool using RAIDZ2

2009-10-26 Thread Marion Hakanson
opensolaris-zfs-disc...@mlists.thewrittenword.com said: Is it really pointless? Maybe they want the insurance RAIDZ2 provides. Given the choice between insurance and performance, I'll take insurance, though it depends on your use case. We're using 5-disk RAIDZ2 vdevs. . . . Would love to

Re: [zfs-discuss] strange results ...

2009-10-22 Thread Marion Hakanson
jel+...@cs.uni-magdeburg.de said: 2nd) Never had a Sun STK RAID INT before. Actually my intention was to create a zpool mirror of sd0 and sd1 for boot and logs, and a 2x2-way zpool mirror with the 4 remaining disks. However, the controller seems not to support JBODs :( - which is also bad,

Re: [zfs-discuss] Zpool without any redundancy

2009-10-20 Thread Marion Hakanson
mmusa...@east.sun.com said: What benefit are you hoping zfs will provide in this situation? Examine your situation carefully and determine what filesystem works best for you. There are many reasons to use ZFS, but if your configuration isn't set up to take advantage of those reasons, then

Re: [zfs-discuss] Zpool without any redundancy

2009-10-20 Thread Marion Hakanson
I wrote: Is anyone else tired of seeing the word redundancy? (:-) matthias.ap...@lanlabor.com said: Only in a perfect world (tm) ;-) IMHO there is no such thing as too much redundancy. In the real world the possibilities of redundancy are only limited by money, Sigh. I was just joking

Re: [zfs-discuss] Best way to convert checksums

2009-10-02 Thread Marion Hakanson
webcl...@rochester.rr.com said: To verify data, I cannot depend on existing tools since diff is not large file aware. My best idea at this point is to calculate and compare MD5 sums of every file and spot check other properties as best I can. Ray, I recommend that you use rsync's -c to
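
A dry-run form of that comparison (paths are placeholders):

  $ rsync -ancv /oldpool/data/ /newpool/data/

Here -c forces whole-file checksum comparison rather than size+mtime, -n reports differences without copying anything, and -a/-v recurse and list what differs.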

Re: [zfs-discuss] [ZFS-discuss] RAIDZ drive removed status

2009-09-29 Thread Marion Hakanson
David Stewart wrote: How do I identify which drive it is? I hear each drive spinning (I listened to them individually) so I can't simply select the one that is not spinning. You can try reading from each raw device, and looking for a blinky-light to identify which one is active. If you don't

Re: [zfs-discuss] periodic slow responsiveness

2009-09-25 Thread Marion Hakanson
j...@jamver.id.au said: For a predominantly NFS server purpose, it really looks like a case of the slog has to outperform your main pool for continuous write speed as well as an instant response time as the primary criterion. Which might as well be a fast (or group of fast) SSDs or 15kRPM

Re: [zfs-discuss] periodic slow responsiveness

2009-09-25 Thread Marion Hakanson
rswwal...@gmail.com said: Yes, but if it's on NFS you can just figure out the workload in MB/s and use that as a rough guideline. I wonder if that's the case. We have an NFS server without NVRAM cache (X4500), and it gets huge MB/sec throughput on large-file writes over NFS. But it's

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-17 Thread Marion Hakanson
rswwal...@gmail.com said: It's not the stripes that make a difference, but the number of controllers there. What's the system config on that puppy? The zpool status -v output was from a Thumper (X4500), slightly edited, since in our real-world Thumper, we use c6t0d0 in c5t4d0's place in the

Re: [zfs-discuss] Moving volumes to new controller

2009-09-17 Thread Marion Hakanson
vidar.nil...@palantir.no said: I'm trying to move disks in a zpool from one SATA-kontroller to another. Its 16 disks in 4x4 raidz. Just to see if it could be done, I moved one disk from one raidz over to the new controller. Server was powered off. . . . zpool replace storage c10t7d0 c11t0d0
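
For what it's worth, the usual way to move a pool's disks between controllers, without any replace operations, is an export/import cycle, which re-scans device paths:

  # zpool export storage
  (recable or move the disks)
  # zpool import storage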

Re: [zfs-discuss] RAIDZ versus mirrroed

2009-09-16 Thread Marion Hakanson
rswwal...@gmail.com said: There is another type of failure that mirrors help with and that is controller or path failures. If one side of a mirror set is on one controller or path and the other on another then a failure of one will not take down the set. You can't get that with RAIDZn.

Re: [zfs-discuss] Understanding SAS/SATA Backplanes and Connectivity

2009-07-21 Thread Marion Hakanson
asher...@versature.com said: And, on that subject, is there truly a difference between Seagate's line-up of 7200 RPM drives? They seem to now have a bunch: . . . Other manufacturers seem to have similar lineups. Is the difference going to matter to me when putting a mess of them into a SAS

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-20 Thread Marion Hakanson
bfrie...@simple.dallas.tx.us said: No. I am suggesting that all Solaris 10 (and probably OpenSolaris systems) currently have a software-imposed read bottleneck which places a limit on how well systems will perform on this simple sequential read benchmark. After a certain point (which is

Re: [zfs-discuss] Does zpool clear delete corrupted files

2009-06-01 Thread Marion Hakanson
jlo...@ssl.berkeley.edu said: What's odd is we've checked a few hundred files, and most of them don't seem to have any corruption. I'm thinking what's wrong is the metadata for these files is corrupted somehow, yet we can read them just fine. I wish I could tell which ones are really

Re: [zfs-discuss] storage zilstat assistance

2009-04-28 Thread Marion Hakanson
bfrie...@simple.dallas.tx.us said: Your IOPS don't seem high. You are currently using RAID-5, which is a poor choice for a database. If you use ZFS mirrors you are going to unleash a lot more IOPS from the available spindles. RAID-5 may be poor for some database loads, but it's perfectly

[zfs-discuss] storage zilstat assistance

2009-04-27 Thread Marion Hakanson
Greetings, We have a small Oracle project on ZFS (Solaris-10), using a SAN-connected array which is in need of replacement. I'm weighing whether to recommend a Sun 2540 array or a Sun J4200 JBOD as the replacement. The old array and the new ones all have 7200RPM SATA drives. I've been watching

Re: [zfs-discuss] What causes slow performance under load?

2009-04-19 Thread Marion Hakanson
mi...@cc.umanitoba.ca said: What would I look for with mpstat? Look for a CPU (thread) that might be 100% utilized; Also look to see if that CPU (or CPU's) has a larger number in the ithr column than all other CPU's. The idea here is that you aren't getting much out of the T2000 if only one
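
In command form, run while the workload is active:

  $ mpstat 5    # look for one CPU with idl near 0 and an ithr value
                # much larger than its peers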

Re: [zfs-discuss] [on-discuss] Reliability at power failure?

2009-04-19 Thread Marion Hakanson
udip...@gmail.com said: dick at nagual.nl wrote: Maybe because on the fifth day some hardware failure occurred? ;-) That would be which? The system works and is up and running beautifully. OpenSolaris, as of now. Running beautifully as long as the power stays on? Is it hard to believe

Re: [zfs-discuss] Bad SWAP performance from zvol

2009-03-31 Thread Marion Hakanson
casper@sun.com said: I've upgraded my system from ufs to zfs (root pool). By default, it creates a zvol for dump and swap. . . . So I removed the zvol swap and now I have a standard swap partition. The performance is much better (night and day). The system is usable and I don't know
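
The swap-over described uses the standard swap(1M) commands (device names are placeholders; persist the change in /etc/vfstab):

  # swap -d /dev/zvol/dsk/rpool/swap    # remove the zvol swap device
  # swap -a /dev/dsk/c0t0d0s1           # add a raw disk slice instead
  # swap -l                             # list the active swap devices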

Re: [zfs-discuss] [perf-discuss] ZFS performance issue - READ is slow as hell...

2009-03-31 Thread Marion Hakanson
james.ma...@sun.com said: I'm not yet sure what's broken here, but there's something pathologically wrong with the IO rates to the device during the ZFS tests. In both cases, the wait queue is getting backed up, with horrific wait queue latency numbers. On the read side, I don't understand why

Re: [zfs-discuss] OpenSolaris / ZFS at the low-end

2009-03-31 Thread Marion Hakanson
bh...@freaks.com said: Even with a very weak CPU the system is close to saturating the PCI bus for reads with most configurations. Nice little machine. I wonder if you'd get some of the bonnie numbers increased if you ran multiple bonnie's in parallel. Even though the sequential throughput

Re: [zfs-discuss] Virutal zfs server vs hardware zfs server

2009-03-02 Thread Marion Hakanson
n...@jnickelsen.de said: As far as I know the situation with ATI is that, while ATI supplies well-performing binary drivers for MS Windows (of course) and Linux, there is no such thing for other OSs. So OpenSolaris uses standardized interfaces of the graphics hardware, which have comparatively

Re: [zfs-discuss] ZFS on SAN?

2009-02-16 Thread Marion Hakanson
, data has been fine. We also do tape backups of these pools, of course. Regards, -- Marion Hakanson hakan...@ohsu.edu OHSU Advanced Computing Center

Re: [zfs-discuss] Introducing zilstat

2009-02-04 Thread Marion Hakanson
The zilstat tool is very helpful, thanks! I tried it on an X4500 NFS server, while extracting a 14MB tar archive, both via an NFS client, and locally on the X4500 itself. Over NFS, said extract took ~2 minutes, and showed peaks of 4MB/sec buffer-bytes going through the ZIL. When run locally on

Re: [zfs-discuss] ZFS over NFS, poor performance with many small files

2009-01-20 Thread Marion Hakanson
d...@yahoo.com said: Any recommendations for an SSD to work with an X4500 server? Will the SSDs used in the 7000 series servers work with X4500s or X4540s? The Sun System Handbook (sunsolve.sun.com) for the 7210 appliance (an X4540-based system) lists the logzilla device with this fine print:

Re: [zfs-discuss] Hybrid Pools - Since when?

2008-12-15 Thread Marion Hakanson
richard.ell...@sun.com said: L2ARC arrived in NV at the same time as ZFS boot, b79, November 2007. It was not back-ported to Solaris 10u6. You sure? Here's output on a Solaris-10u6 machine:
  cyclops 4959# uname -a
  SunOS cyclops 5.10 Generic_137138-09 i86pc i386 i86pc
  cyclops 4960# zpool
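
A quick way to check whether a build supports cache devices at all is zpool upgrade; pool version 10 is the one that introduced L2ARC:

  # zpool upgrade -v | grep -i cache
   10  Cache devices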

Re: [zfs-discuss] To separate /var or not separate /var, that is the question....

2008-12-11 Thread Marion Hakanson
vincent_b_...@yahoo.com said: Just wondering if (excepting the existing zones thread) there are any compelling arguments to keep /var as its own filesystem for your typical Solaris server. Web servers and the like. Well, it's been considered a best practice for servers for a lot of years to

Re: [zfs-discuss] zpool replace - choke point

2008-12-05 Thread Marion Hakanson
[EMAIL PROTECTED] said: Thanks for the tips. I'm not sure if they will be relevant, though. We don't talk directly with the AMS1000. We are using a USP-VM to virtualize all of our storage and we didn't have to add anything to the drv configuration files to see the new disk (mpxio was

Re: [zfs-discuss] zpool replace - choke point

2008-12-04 Thread Marion Hakanson
[EMAIL PROTECTED] said: I think we found the choke point. The silver lining is that it isn't the T2000 or ZFS. We think it is the new SAN, an Hitachi AMS1000, which has 7200RPM SATA disks with the cache turned off. This system has a very small cache, and when we did turn it on for one of

Re: [zfs-discuss] zfs boot - U6 kernel patch breaks sparc boot

2008-11-18 Thread Marion Hakanson
[EMAIL PROTECTED] said: I thought to look at df output before rebooting, and there are PAGES PAGES like this:
  /var/run/.patchSafeModeOrigFiles/usr/platform/FJSV,GPUZC-M/lib/libcpc.so.1
    7597264  85240  7512024  2%  /usr/platform/FJSV,GPUZC-M/lib/libcpc.so.1
. . . Hundreds of

Re: [zfs-discuss] unable to ludelete BE with ufs

2008-11-12 Thread Marion Hakanson
[EMAIL PROTECTED] said:
  # ludelete beA
  ERROR: cannot open 'pool00/zones/global/home': dataset does not exist
  ERROR: cannot mount mount point /.alt.tmp.b-QY.mnt/home device pool00/zones/global/home
  ERROR: failed to mount file system pool00/zones/global/home on /.alt.tmp.b-QY.mnt/home

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Marion Hakanson
[EMAIL PROTECTED] said: It's interesting how the speed and optimisation of these maintenance activities limit pool size. It's not just full scrubs. If the filesystem is subject to corruption, you need a backup. If the filesystem takes two months to back up / restore, then you need really

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Marion Hakanson
[EMAIL PROTECTED] said: In general, such tasks would be better served by T5220 (or the new T5440 :-) and J4500s. This would change the data paths from:
  client --net-- T5220 --net-- X4500 --SATA-- disks
to:
  client --net-- T5440 --SAS-- disks
With the J4500 you get the same storage

Re: [zfs-discuss] ZFS-over-iSCSI performance testing (with low random access results)...

2008-10-16 Thread Marion Hakanson
[EMAIL PROTECTED] said: but Marion's is not really possible at all, and won't be for a while with other groups' choice of storage-consumer platform, so it'd have to be GlusterFS or some other goofy fringe FUSEy thing or not-very-general crude in-house hack. Well, of course the magnitude of

Re: [zfs-discuss] x4500 vs AVS ?

2008-09-03 Thread Marion Hakanson
[EMAIL PROTECTED] said: We did ask our vendor, but we were just told that AVS does not support x4500. You might have to use the open-source version of AVS, but it's not clear if that requires OpenSolaris or if it will run on Solaris-10. Here's a description of how to set it up between two

Re: [zfs-discuss] ZFS noob question

2008-08-29 Thread Marion Hakanson
[EMAIL PROTECTED] said: I took a snapshot of a directory in which I hold PDF files related to math. I then added a 50MB PDF file from a CD (Oxford Math Reference; I strongly recommend this to any math enthusiast) and did zfs list to see the size of the snapshot (sheer curiosity). I don't have

Re: [zfs-discuss] ZFS with Traditional SAN

2008-08-21 Thread Marion Hakanson
[EMAIL PROTECTED] said: That's the one that's been an issue for me and my customers - they get billed back for GB allocated to their servers by the back end arrays. To be more explicit about the 'self-healing properties' - To deal with any fs corruption situation that would traditionally

Re: [zfs-discuss] SSD update

2008-08-20 Thread Marion Hakanson
[EMAIL PROTECTED] said: Seriously, I don't even care about the cost. Even with the smallest capacity, four of those gives me 128GB of write cache supporting 680MB/s and 40k IOPS. Show me a hardware raid controller that can even come close to that. Four of those will strain even 10GB/s

Re: [zfs-discuss] resilver in progress - which disk is inconsistent?

2008-08-08 Thread Marion Hakanson
[EMAIL PROTECTED] said: AFAIK there is no way to tell resilvering to pause, so I want to detach the inconsistent disk and attach it again tonight, when it won't affect users. To do that I need to know which disk is inconsistent, but zpool status does not show me any info in that regard. Is

Re: [zfs-discuss] ZFS jammed while busy

2008-05-20 Thread Marion Hakanson
[EMAIL PROTECTED] said: I'm curious about your array configuration above... did you create your RAIDZ2 as one vdev or multiple vdev's? If multiple, how many? On mine, I have all 10 disks set up as one RAIDZ2 vdev which is supposed to be near the performance limit... I'm wondering how much I

Re: [zfs-discuss] zfs device busy

2008-03-28 Thread Marion Hakanson
[EMAIL PROTECTED] said: I am having trouble destroying a zfs file system (device busy) and fuser isn't telling me who has the file open: . . . This situation appears to occur every night during a system test. The only peculiar operation on the errant file system is that another system NFS

Re: [zfs-discuss] scrub performance

2008-03-06 Thread Marion Hakanson
[EMAIL PROTECTED] said: It is also interesting to note that this system is now making negative progress. I can understand the remaining time estimate going up with time, but what does it mean for the % complete number to go down after 6 hours of work? Sorry I don't have any helpful

Re: [zfs-discuss] nfs over zfs

2008-02-28 Thread Marion Hakanson
[EMAIL PROTECTED] said: I am a little new to zfs so please excuse my ignorance. I have a PowerEdge 2950 running Nevada B82 with an Apple Xraid attached over a fiber hba. They are formatted to JBOD with the pool configured as follows: . . . I have a filesystem (tpool4/seplog) shared over

Re: [zfs-discuss] filebench for Solaris 10?

2008-02-19 Thread Marion Hakanson
[EMAIL PROTECTED] said: Some of us are still using Solaris 10 since it is the version of Solaris released and supported by Sun. The 'filebench' software from SourceForge does not seem to install or work on Solaris 10. The 'pkgadd' command refuses to recognize the package, even when it is

Re: [zfs-discuss] five megabytes per second with Microsoft iSCSI initiator (2.06)

2008-02-19 Thread Marion Hakanson
[EMAIL PROTECTED] said: I'm creating a zfs volume, and sharing it with zfs set shareiscsi=on poolname/volume. I can access the iSCSI volume without any problems, but IO is terribly slow, as in five megabytes per second sustained transfers. I've tried creating an iSCSI target stored on a UFS

Re: [zfs-discuss] 'du' is not accurate on zfs

2008-02-19 Thread Marion Hakanson
[EMAIL PROTECTED] said: It may not be relevant, but I've seen ZFS add weird delays to things too. I deleted a file to free up space, but when I checked no more space was reported. A second or two later the space appeared. Run the sync command before you do the du. That flushes the ARC

Re: [zfs-discuss] filebench for Solaris 10?

2008-02-19 Thread Marion Hakanson
[EMAIL PROTECTED] said: This is what I get with the filebench-1.1.0_x86_pkg.tar.gz from SourceForge:
  # pkgadd -d .
  pkgadd: ERROR: no packages were found in /home/bfriesen/src/benchmark/filebench
  # ls
  install/  pkginfo  pkgmap  reloc/
. . . Um, cd .. and pkgadd -d . again. The

Re: [zfs-discuss] ZFS write throttling

2008-02-15 Thread Marion Hakanson
[EMAIL PROTECTED] said: I also tried using O_DSYNC, which stops the pathological behaviour but makes things pretty slow - I only get a maximum of about 20MBytes/sec, which is obviously much less than the hardware can sustain. I may misunderstand this situation, but while you're waiting for

Re: [zfs-discuss] Which DTrace provider to use

2008-02-13 Thread Marion Hakanson
[EMAIL PROTECTED] said: difference my tweaks are making. Basically, the problem users experience, when the load shoots up are huge latencies. An ls on a non-cached directory, which usually is instantaneous, will take 20, 30, 40 seconds or more. Then when the storage array catches up,

Re: [zfs-discuss] Which DTrace provider to use

2008-02-13 Thread Marion Hakanson
[EMAIL PROTECTED] said: It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP. Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty handily pull 120MB/sec from it, and write at over 100MB/sec. It falls apart more on random I/O. The server/initiator side is a

Re: [zfs-discuss] Need help with a dead disk

2008-02-12 Thread Marion Hakanson
[EMAIL PROTECTED] said: One thought I had was to unconfigure the bad disk with cfgadm. Would that force the system back into the 'offline' response? In my experience (X4100 internal drive), that will make ZFS stop trying to use it. It's also a good idea to do this before you hot-unplug the
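
The cfgadm step looks roughly like this (attachment-point names vary by controller; the one below is a placeholder):

  # cfgadm -al                            # list attachment points
  # cfgadm -c unconfigure c1::dsk/c1t3d0

After that, ZFS should mark the device removed instead of retrying I/O against it.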

Re: [zfs-discuss] ZFS configuration for a thumper

2008-02-06 Thread Marion Hakanson
[EMAIL PROTECTED] said: Your findings for random reads with or without NCQ match my findings: http://blogs.sun.com/erickustarz/entry/ncq_performance_analysis Disabling NCQ looks like a very tiny win for the multi-stream read case. I found a much bigger win, but I was doing RAID-0 instead
