Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-05 Thread Richard Elling
On Jan 5, 2010, at 11:56 AM, Tristan Ball wrote: On 6/01/2010 3:00 AM, Roch wrote: That said, I truly am for an evolution for random read workloads. Raid-Z on 4K sectors is quite appealing. It means that small objects become nearly mirrored with good random read performance while large objects ar

Re: [zfs-discuss] Solaris installation Exiting (caught signal 11 ) and reboot crashes

2010-01-06 Thread Richard Elling
Hi Pradeep, This is the ZFS forum. You might have better luck on the caiman-discuss forum which is where the folks who work on the installers hang out. -- richard On Jan 6, 2010, at 5:26 AM, Pradeep wrote: Hi, I am trying to install solaris10 update8 on a san array using solaris jumpst

Re: [zfs-discuss] Solaris installation Exiting (caught signal 11 ) and reboot crashes

2010-01-06 Thread Richard Elling
Note to self: drink coffee before posting :-) Thanks Glenn, et al. -- richard On Jan 6, 2010, at 9:54 AM, Glenn Lagasse wrote: * Richard Elling (richard.ell...@gmail.com) wrote: Hi Pradeep, This is the ZFS forum. You might have better luck on the caiman-discuss forum which is where the

Re: [zfs-discuss] rethinking RaidZ and Record size

2010-01-06 Thread Richard Elling
On Jan 6, 2010, at 1:30 PM, Wes Felter wrote: Michael Herf wrote: I agree that RAID-DP is much more scalable for reads than RAIDZx, and this basically turns into a cost concern at scale. The raw cost/GB for ZFS is much lower, so even a 3-way mirror could be used instead of netapp. But this

Re: [zfs-discuss] rethinking RaidZ and Record size [SEC=UNCLASSIFIED]

2010-01-06 Thread Richard Elling
On Jan 6, 2010, at 10:39 PM, Wilkinson, Alex wrote: 0n Wed, Jan 06, 2010 at 02:22:19PM -0800, Richard Elling wrote: Rather, ZFS works very nicely with "hardware RAID" systems or JBODs, iSCSI, et al. You can happily add the I'm not sure how ZFS works very nicely with say for

Re: [zfs-discuss] rethinking RaidZ and Record size [SEC=UNCLASSIFIED]

2010-01-07 Thread Richard Elling
On Jan 6, 2010, at 11:09 PM, Wilkinson, Alex wrote: 0n Wed, Jan 06, 2010 at 11:00:49PM -0800, Richard Elling wrote: On Jan 6, 2010, at 10:39 PM, Wilkinson, Alex wrote: 0n Wed, Jan 06, 2010 at 02:22:19PM -0800, Richard Elling wrote: Rather, ZFS works very nicely with "hardware

[zfs-discuss] ZFS Tutorial slides from USENIX LISA09

2010-01-07 Thread Richard Elling
I have posted my ZFS Tutorial slides from USENIX LISA09 on slideshare.net. You will notice that there is no real material on dedup. The reason is that dedup was not yet released when the materials were created. Everything in the slides is publicly known information and, perhaps by chance,

Re: [zfs-discuss] Disks and caches

2010-01-07 Thread Richard Elling
On Jan 7, 2010, at 12:02 PM, Anil wrote: I *am* talking about situations where physical RAM is used up. So definitely the SSD could be touched quite a bit when used as a rpool - for pages in/out. In the cases where rpool does not serve user data (eg. home directories and databases are not i

Re: [zfs-discuss] [zones-discuss] Zones on shared storage - a warning

2010-01-08 Thread Richard Elling
On Jan 8, 2010, at 6:20 AM, Frank Batschulat (Home) wrote: On Fri, 08 Jan 2010 13:55:13 +0100, Darren J Moffat wrote: Frank Batschulat (Home) wrote: This just can't be an accident, there must be some coincidence and thus there's a good chance that these CHKSUM errors must have a common sou

Re: [zfs-discuss] ssd pool + ssd cache ?

2010-01-09 Thread Richard Elling
On Jan 9, 2010, at 1:32 AM, Lutz Schumann wrote: Depends. a) Pool design 5 x SSD as raidZ = 4 SSD space - read I/O performance of one drive Adding 5 cheap 40 GB L2ARC device (which are pooled) increases the read performance for your working window of 200 GB. An interesting thing happens when

Re: [zfs-discuss] I/O Read starvation

2010-01-10 Thread Richard Elling
On Jan 8, 2010, at 7:49 PM, bank kus wrote: > dd if=/dev/urandom of=largefile.txt bs=1G count=8 > > cp largefile.txt ./test/1.txt & > cp largefile.txt ./test/2.txt & > > That's it, now the system is totally unusable after launching the two 8G > copies. Until these copies finish no other applicati

Re: [zfs-discuss] Help needed backing ZFS to tape

2010-01-11 Thread Richard Elling
Good question. Zmanda seems to be a popular open source solution with commercial licenses and support available. We try to keep the Best Practices Guide up to date on this topic: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Using_ZFS_With_Enterprise_Backup_Solutions Ad

Re: [zfs-discuss] (Practical) limit on the number of snapshots?

2010-01-11 Thread Richard Elling
comment below... On Jan 11, 2010, at 10:00 AM, Lutz Schumann wrote: > Ok, tested this myself ... > > (same hardware used for both tests) > > OpenSolaris svn_104 (actually Nexenta Core 2): > > 100 Snaps > > r...@nexenta:/volumes# time for i in $(seq 1 100); do zfs snapshot > ssd

Re: [zfs-discuss] rpool mirror on zvol, can't offline and detach

2010-01-11 Thread Richard Elling
On Jan 11, 2010, at 4:42 PM, Daniel Carosone wrote: > I have a netbook with a small internal ssd as rpool. I have an > external usb HDD with much larger storage, as a separate pool, which > is sometimes attached to the netbook. > > I created a zvol on the external pool, the same size as the inte

Re: [zfs-discuss] x4500/x4540 does the internal controllers have a bbu?

2010-01-12 Thread Richard Elling
On Jan 12, 2010, at 2:53 AM, Brad wrote: > Has anyone worked with a x4500/x4540 and know if the internal raid > controllers have a bbu? I'm concern that we won't be able to turn off the > write-cache on the internal hds and SSDs to prevent data corruption in case > of a power failure. Yes, w

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-12 Thread Richard Elling
On Jan 12, 2010, at 12:37 PM, Gary Mills wrote: > On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote: >> On Tue, 12 Jan 2010, Gary Mills wrote: >>> >>> Is moving the databases (IMAP metadata) to a separate ZFS filesystem >>> likely to improve performance? I've heard that this is imp

Re: [zfs-discuss] set zfs:zfs_vdev_max_pending

2010-01-12 Thread Richard Elling
On Jan 12, 2010, at 2:54 PM, Ed Spencer wrote: > We have a zpool made of 4 512g iscsi luns located on a network appliance. > We are seeing poor read performance from the zfs pool. > The release of solaris we are using is: > Solaris 10 10/09 s10s_u8wos_08a SPARC > > The server itself is a T2000 >

Re: [zfs-discuss] x4500/x4540 does the internal controllers have a bbu?

2010-01-13 Thread Richard Elling
On Jan 12, 2010, at 7:46 PM, Brad wrote: > Richard, > > "Yes, write cache is enabled by default, depending on the pool configuration." > Is it enabled for a striped (mirrored configuration) zpool? I'm asking > because of a concern I've read on this forum about a problem with SSDs (and > disks)

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-14 Thread Richard Elling
On Jan 14, 2010, at 6:41 AM, Gary Mills wrote: > On Thu, Jan 14, 2010 at 01:47:46AM -0800, Roch wrote: >> >> Gary Mills writes: >>> >>> Yes, I understand that, but do filesystems have separate queues of any >>> sort within the ZIL? If not, would it help to put the database >>> filesystems into

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-14 Thread Richard Elling
additional clarification ... On Jan 14, 2010, at 8:49 AM, Richard Elling wrote: > On Jan 14, 2010, at 6:41 AM, Gary Mills wrote: > >> On Thu, Jan 14, 2010 at 01:47:46AM -0800, Roch wrote: >>> >>> Gary Mills writes: >>>> >>>> Yes, I under

Re: [zfs-discuss] 2-way Mirror With Spare

2010-01-14 Thread Richard Elling
On Jan 14, 2010, at 11:09 AM, Mr. T Doodle wrote: > I am considering RAIDZ or a 2-way mirror with a spare. > > I have 6 disks and would like the best possible performance and reliability > and not really concerned with disk space. > > My thought was a 2 disk 2-way mirror with a spare. > > Woul

Re: [zfs-discuss] New ZFS Intent Log (ZIL) device available - Beta program now open!

2010-01-14 Thread Richard Elling
On Jan 14, 2010, at 11:02 AM, Christopher George wrote: >> That's kind of an overstatement. NVRAM backed by on-board LI-Ion >> batteries has been used in storage industry for years; > > Respectfully, I stand by my three points of Li-Ion batteries as they relate > to enterprise class NVRAM: igniti

Re: [zfs-discuss] ZIL to disk

2010-01-14 Thread Richard Elling
On Jan 14, 2010, at 10:58 AM, Jeffry Molanus wrote: > Hi all, > > Are there any recommendations regarding min IOPS the backing storage pool > needs to have when flushing the SSD ZIL to the pool? Pedantically, as many as you can afford :-) The DDRdrive folks sell IOPS at 200 IOPS/$. Sometimes

Re: [zfs-discuss] ZIL to disk

2010-01-14 Thread Richard Elling
On Jan 14, 2010, at 3:59 PM, Ray Van Dolson wrote: > On Thu, Jan 14, 2010 at 03:55:20PM -0800, Ray Van Dolson wrote: >> On Thu, Jan 14, 2010 at 03:41:17PM -0800, Richard Elling wrote: >>>> Consider a pool of 3x 2TB SATA disks in RAIZ1, you would roughly >>>>

Re: [zfs-discuss] ZIL to disk

2010-01-14 Thread Richard Elling
On Jan 14, 2010, at 4:02 PM, Richard Elling wrote: > That is a simple performance model for small, random reads. The ZIL > is a write-only workload, so the model will not apply. BTW, it is a Good Thing (tm) the small, random read model does not apply to the ZIL. -- r

Re: [zfs-discuss] Backing up a ZFS pool

2010-01-17 Thread Richard Elling
On Jan 17, 2010, at 2:38 AM, Edward Ned Harvey wrote: >>> Personally, I use "zfs send | zfs receive" to an external disk. >> Initially a >>> full image, and later incrementals. >> >> Do these incrementals go into the same filesystem that received the >> original zfs stream? > > Yes. In fact, I
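The incremental "zfs send | zfs receive" backup scheme discussed above can be sketched roughly as follows; the pool, dataset, and snapshot names are illustrative assumptions, not taken from the thread.

```shell
# Initial full replication to an external pool (all names hypothetical)
zfs snapshot tank/data@monday
zfs send tank/data@monday | zfs receive backup/data

# Later, send only the blocks changed since the previous snapshot
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | zfs receive backup/data
```

Note that the receiving side must still hold the earlier snapshot for an incremental stream to apply cleanly.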

Re: [zfs-discuss] I can't seem to get the pool to export...

2010-01-17 Thread Richard Elling
On Jan 16, 2010, at 10:03 PM, Travis Tabbal wrote: > Hmm... got it working after a reboot. Odd that it had problems before that. I > was able to rename the pools and the system seems to be running well now. > Irritatingly, the settings for sharenfs, sharesmb, quota, etc. didn't get > copied ove

Re: [zfs-discuss] Recordsize...

2010-01-17 Thread Richard Elling
On Jan 17, 2010, at 11:59 AM, Tristan Ball wrote: > Hi Everyone, > > Is it possible to use send/recv to change the recordsize, or does each file > need to be individually recreated/copied within a given dataset? Yes. The former does the latter. > Is there a way to check the recordsize of a gi
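Since send/receive recreates every file on the receiving side, a dataset can be rewritten with a different recordsize by receiving it under a parent that carries the desired value. A hedged sketch (the dataset names and the 16K value are assumptions):

```shell
# Check the current recordsize of a dataset
zfs get recordsize tank/olddata

# Properties not carried in the stream are inherited from the parent
# at receive time, so set the target recordsize there first
zfs create -o recordsize=16K tank/migrated
zfs snapshot tank/olddata@move
zfs send tank/olddata@move | zfs receive tank/migrated/data
zfs get recordsize tank/migrated/data
```

This matters because setting recordsize on an existing dataset only affects newly written files, not blocks already on disk; the receive rewrites everything.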

Re: [zfs-discuss] Boot Disk Configuration

2010-01-18 Thread Richard Elling
On Jan 18, 2010, at 10:22 AM, Mr. T Doodle wrote: > I would like some opinions on what people are doing in regards to configuring > ZFS for root/boot drives: > > 1) If you have onbaord RAID controllers are you using them then creating the > ZFS pool (mirrored from hardware)? I let ZFS do the m

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-18 Thread Richard Elling
On Jan 18, 2010, at 11:04 AM, Miles Nordin wrote: ... > Another problem is that the snv_112 man page says this: > > -8<- > The format of the stream is evolving. No backwards compatibility is guaranteed. You may not be able to receive > your streams on future ve

Re: [zfs-discuss] Is ZFS internal reservation excessive?

2010-01-18 Thread Richard Elling
On Jan 18, 2010, at 7:55 AM, Jesus Cea wrote: > zpool and zfs report different free space because zfs takes into account > an internal reservation of 32MB or 1/64 of the capacity of the pool, > what is bigger. This space is also used for the ZIL. > So in a 2TB Harddisk, the reservation would be 3

Re: [zfs-discuss] Is ZFS internal reservation excessive?

2010-01-18 Thread Richard Elling
On Jan 18, 2010, at 3:25 PM, Erik Trimble wrote: > Given my (imperfect) understanding of the internals of ZFS, the non-ZIL > portions of the reserved space are there mostly to insure that there is > sufficient (reasonably) contiguous space for doing COW. Hopefully, once BP > rewrite materialize

Re: [zfs-discuss] Is ZFS internal reservation excessive?

2010-01-19 Thread Richard Elling
On Jan 19, 2010, at 4:36 AM, Jesus Cea wrote: > On 01/19/2010 01:14 AM, Richard Elling wrote: >> For example, b129 >> includes a fix for CR6869229, zfs should switch to shiny new metaslabs more >> frequently. >> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_i

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Richard Elling
On Jan 19, 2010, at 1:53 AM, Julian Regel wrote: > > When we brought it up last time, I think we found no one knows of a > > userland tool similar to 'ufsdump' that's capable of serializing a ZFS > > along with holes, large files, ``attribute'' forks, windows ACL's, and > > checksums of its own, an

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-19 Thread Richard Elling
On Jan 19, 2010, at 4:26 PM, Allen Eastwood wrote: >> Message: 3 >> Date: Tue, 19 Jan 2010 15:48:52 -0500 >> From: Miles Nordin >> To: zfs-discuss@opensolaris.org >> Subject: Re: [zfs-discuss] zfs send/receive as backup - reliability? >> Message-ID: >> Content-Type: text/plain; charset="us-asci

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-20 Thread Richard Elling
On Jan 20, 2010, at 3:15 AM, Joerg Schilling wrote: > Richard Elling wrote: > >>> >>> ufsdump/restore was perfect in that regard. The lack of equivalent >>> functionality is a big problem for the situations where this functionality >>> is a business

Re: [zfs-discuss] Mirror of SAN Boxes with ZFS ? (split site mirror)

2010-01-20 Thread Richard Elling
Comment below. Perhaps someone from Sun's ZFS team can fill in the blanks, too. On Jan 20, 2010, at 3:34 AM, Lutz Schumann wrote: > Actually I found some time (and reason) to test this. > > Environment: > - 1 osol server > - one SLES10 iSCSI Target > - two LUN's exported via iSCSi to the OSol

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-20 Thread Richard Elling
Hi Lutz, On Jan 20, 2010, at 3:17 AM, Lutz Schumann wrote: > Hello, > > we tested clustering with ZFS and the setup looks like this: > > - 2 head nodes (nodea, nodeb) > - head nodes contain l2arc devices (nodea_l2arc, nodeb_l2arc) This makes me nervous. I suspect this is not in the typical Q

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-20 Thread Richard Elling
Though the ARC case, PSARC/2007/618 is "unpublished," I gather from googling and the source that L2ARC devices are considered auxiliary, in the same category as spares. If so, then it is perfectly reasonable to expect that it gets picked up regardless of the GUID. This also implies that it is share

Re: [zfs-discuss] x4500...need input and clarity on striped/mirrored configuration

2010-01-20 Thread Richard Elling
On Jan 20, 2010, at 8:14 PM, Brad wrote: > I was reading your old posts about load-shares > http://opensolaris.org/jive/thread.jspa?messageID=294580 . > > So between raidz and load-share "striping", raidz stripes a file system block > evenly across each vdev but with load sharing the file syst

Re: [zfs-discuss] zfs send/receive as backup - reliability?

2010-01-21 Thread Richard Elling
On Jan 21, 2010, at 3:55 AM, Julian Regel wrote: > >> Until you try to pick one up and put it in a fire safe! > > >Then you backup to tape from x4540 whatever data you need. > >In case of enterprise products you save on licensing here as you need a one > >client license per x4540 but in fact can

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-21 Thread Richard Elling
On Jan 20, 2010, at 4:17 PM, Daniel Carosone wrote: > On Wed, Jan 20, 2010 at 03:20:20PM -0800, Richard Elling wrote: >> Though the ARC case, PSARC/2007/618 is "unpublished," I gather from >> googling and the source that L2ARC devices are considered auxiliary, >>

Re: [zfs-discuss] Dedup memory overhead

2010-01-21 Thread Richard Elling
On Jan 21, 2010, at 8:04 AM, erik.ableson wrote: > Hi all, > > I'm going to be trying out some tests using b130 for dedup on a server with > about 1,7Tb of useable storage (14x146 in two raidz vdevs of 7 disks). What > I'm trying to get a handle on is how to estimate the memory overhead requir

Re: [zfs-discuss] 2gig file limit on ZFS?

2010-01-21 Thread Richard Elling
CC'ed to ext3-disc...@opensolaris.org because this is an ext3 on Solaris issue. ZFS has no problem with large files, but the older ext3 did. See also the ext3 project page and documentation, especially http://hub.opensolaris.org/bin/view/Project+ext3/Project_status -- richard On Jan 21, 2010,

Re: [zfs-discuss] 2gig file limit on ZFS?

2010-01-21 Thread Richard Elling
On Jan 21, 2010, at 1:55 PM, Michelle Knight wrote: > The error messages are in the original post. They are... > /mirror2/applications/Microsoft/Operating Systems/Virtual PC/vm/XP-SP2/XP-SP2 > Hard Disk.vhd: File too large > /mirror2/applications/virtualboximages/xp/xp.tar.bz2: File too large >

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-21 Thread Richard Elling
[Richard makes a hobby of confusing Dan :-)] more below.. On Jan 21, 2010, at 1:13 PM, Daniel Carosone wrote: > On Thu, Jan 21, 2010 at 09:36:06AM -0800, Richard Elling wrote: >> On Jan 20, 2010, at 4:17 PM, Daniel Carosone wrote: >> >>> On Wed, Jan 20, 2010 at 03:20:20

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-21 Thread Richard Elling
On Jan 21, 2010, at 4:32 PM, Daniel Carosone wrote: >> I propose a best practice of adding the cache device to rpool and be >> happy. > > It is *still* not that simple. Forget my slow disks caching an even > slower pool (which is still fast enough for my needs, thanks to the > cache and zil). >

Re: [zfs-discuss] zero out block / sectors

2010-01-22 Thread Richard Elling
Another approach is to make a new virtual disk and attach it as a mirror. Once the resilver is complete, detach and destroy the old virtual disk. Normal procedures for bootable disks still apply. This works because ZFS only resilvers data. -- richard On Jan 22, 2010, at 12:42 PM, Cindy Swearingen w
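The attach/resilver/detach procedure described above might look like this in practice; the pool and device names are hypothetical:

```shell
# Attach the new virtual disk as a mirror of the old one
zpool attach rpool c1t0d0s0 c2t0d0s0

# Watch for the resilver to finish before proceeding
zpool status rpool

# Once "resilver completed" is reported, drop the old virtual disk
zpool detach rpool c1t0d0s0
```

Because ZFS resilvers only allocated blocks, only live data lands on the new disk, which is what makes this an effective way to leave stale blocks behind.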

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Richard Elling
On Jan 23, 2010, at 12:12 PM, Bob Friesenhahn wrote: > On Sat, 23 Jan 2010, A. Krijgsman wrote: > >> Just to jump in. >> >> Did you guys ever consider to shortstroke a larger sata disk? >> I'm not familiar with this, but read a lot about it; >> >> Since the drive cache gets larger on the bigger

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Richard Elling
On Jan 23, 2010, at 8:04 AM, R.G. Keen wrote: > Interesting question. > > The answer I came to, perhaps through lack of information and experience, is > that there isn't a best 1.5tb drive. I decided that 1.5tb is too big, and > that it's better to use more and smaller devices so I could get to

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-23 Thread Richard Elling
On Jan 23, 2010, at 3:47 PM, Frank Cusack wrote: > On January 23, 2010 1:20:13 PM -0800 Richard Elling >> My theory is that drives cost $100. > > Obviously you're not talking about Sun drives. :) Don't confuse cost with price :-) -- richard

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-23 Thread Richard Elling
AIUI, this works as designed. I think the best practice will be to add the L2ARC to syspool (nee rpool). However, for current NexentaStor releases, you cannot add cache devices to syspool. Earlier I mentioned that this made me nervous. I no longer hold any reservation against it. It should wor

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2010-01-23 Thread Richard Elling
On Jan 23, 2010, at 5:06 AM, Simon Breden wrote: > Thanks a lot. > > I'd looked at SO many different RAID boxes and never had a good feeling about > them from the point of data safety, that when I read the 'A Conversation with > Jeff Bonwick and Bill Moore – The future of file systems' article

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Richard Elling
On Jan 24, 2010, at 8:26 AM, R.G. Keen wrote: > > “Disk drives cost $100”: yes, I fully agree, with minor exceptions. End of > marketing, which is where the cost per drive drops significantly, is > different from end of life – I hope! http://en.wikipedia.org/wiki/End-of-life_(product) Some vend

Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?

2010-01-24 Thread Richard Elling
On Jan 24, 2010, at 8:26 PM, Frank Middleton wrote: > What an entertaining discussion! Hope the following adds to the > entertainment value :). > > Any comments on this Dec. 2005 study on disk failure and error rates? > http://research.microsoft.com/apps/pubs/default.aspx?id=64599 > > Seagate sa

Re: [zfs-discuss] Performance of partition based SWAP vs. ZFS zvol SWAP

2010-01-27 Thread Richard Elling
On Jan 27, 2010, at 12:25 PM, RayLicon wrote: > Ok ... > > Given that ... yes, we all know that swapping is bad (thanks for the > enlightenment). > > To Swap or not to Swap isn't releated to this question, and besides, even if > you don't page swap, other mechanisms can still claim swap space,

Re: [zfs-discuss] Going from 6 to 8 disks on ASUS M2N-SLI Deluxe motherboa

2010-01-27 Thread Richard Elling
On Jan 27, 2010, at 12:34 PM, David Dyer-Bennet wrote: > > Google is working heavily with the philosophy that things WILL fail, so they > plan for it, and have enough redundancy to survive it -- and then save lots > of money by not paying for premium components. I like that approach. Yes, it d

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-01-28 Thread Richard Elling
On Jan 28, 2010, at 10:54 AM, Lutz Schumann wrote: > Actually I tested this. > > If I add a l2arc device to the syspool it is not used when issuing I/O to > the data pool (note: on root pool it must not be a whole disk, but only a > slice of it otherwise ZFS complains that root disks may not co

Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-28 Thread Richard Elling
On Jan 28, 2010, at 2:23 PM, Michelle Knight wrote: > Hi Folks, > > As usual, trust me to come up with the unusual. I'm planning ahead for > future expansion and running tests. > > Unfortunately until 2010-2 comes out I'm stuck with 111b (no way to upgrade > to anything than 130, which gives

Re: [zfs-discuss] Media server build

2010-01-28 Thread Richard Elling
On Jan 28, 2010, at 4:58 PM, Tiernan OToole wrote: > Good morning. This is more than likely a stupid question on this alias > but I will ask anyway. I am building a media server in the house and > am trying to figure out what os to install. I know it must have zfs > support but can't figure if I s

Re: [zfs-discuss] ZFS configuration suggestion with 24 drives

2010-01-29 Thread Richard Elling
On Jan 29, 2010, at 9:12 AM, Scott Meilicke wrote: > Link aggregation can use different algorithms to load balance. Using L4 (IP > plus originating port I think), using a single client computer and the same > protocol (NFS), but different origination ports has allowed me to saturate > both NICS

Re: [zfs-discuss] Media server build

2010-01-29 Thread Richard Elling
On Jan 29, 2010, at 4:10 AM, Tiernan OToole wrote: > thanks. > > I have looked at nexentastor, but i have a lot more drives than 2Tb... i know > their nexentacore could be better suited... I think its also based on > OpenSolaris too, correct? The current NexentaStor developer edition has a 4 TB

Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-01-29 Thread Richard Elling
On Jan 29, 2010, at 12:45 AM, Henrik Johansen wrote: > On 01/28/10 11:13 PM, Lutz Schumann wrote: >> While thinking about ZFS as the next generation filesystem without >> limits I am wondering if the real world is ready for this kind of >> incredible technology ... >> >> I'm actually speaking of h

Re: [zfs-discuss] Verify NCQ status

2010-01-29 Thread Richard Elling
On Jan 29, 2010, at 12:01 PM, Christo Kutrovsky wrote: > Hello, > > I have PDSMi board > (http://www.supermicro.com/products/motherboard/PD/E7230/PDSMi.cfm) with > Intel® ICH7R SATA2 (3 Gbps) controller built-in. > > I suspect NCQ is not working as I never see "actv" bigger than 1.0 in > ios

Re: [zfs-discuss] why checksum data?

2010-01-30 Thread Richard Elling
On Jan 30, 2010, at 8:58 AM, matthew patton wrote: > please forgive the 'stupid' question. This is not a stupid question, it is actually a good question that is frequently asked. > Aside from having a convenient hash table of checksums to consult and upon > detection of a collision knowing we

Re: [zfs-discuss] zfs rpool mirror on non-equal drives

2010-01-30 Thread Richard Elling
On Jan 30, 2010, at 10:27 AM, Michelle Knight wrote: > Another question ... > > I did this as a test because I am aware that zpools don't like drives > switching controlers without being exported first. The question was, what > would a rpool boot drive do if it was put on a different controller

Re: [zfs-discuss] demise of community edition

2010-01-31 Thread Richard Elling
On Jan 31, 2010, at 8:38 AM, Tom Bird wrote: > Afternoon, > > I note to my dismay that I can't get the "community edition" any more past > snv_129, this version was closest to the normal way of doing things that I am > used to with Solaris <= 10, the standard OpenSolaris releases seem only to

Re: [zfs-discuss] ZFS Snapshots

2010-01-31 Thread Richard Elling
On Jan 31, 2010, at 6:55 AM, Tony MacDoodle wrote: > Has anyone encountered any file corruption when snapping ZFS file systems? I've had no problems. My first snapshot was in June 2006 and I've been regularly snapshotting since then. > How does ZFS handle open files when compared to other file s

Re: [zfs-discuss] server hang with compression on, ping timeouts from remote machine

2010-01-31 Thread Richard Elling
On Jan 31, 2010, at 7:21 AM, Henrik Johansson wrote: > Hello Christo, > > On Jan 31, 2010, at 4:07 PM, Christo Kutrovsky wrote: > >> Hello All, >> >> I am running NTFS over iSCSI on a ZFS ZVOL volume with compression=gzip-9 >> and blocksize=8K. The server is 2 core P4 3.0 Ghz with 5 GB of RAM.

Re: [zfs-discuss] ZFS Snapshots

2010-01-31 Thread Richard Elling
On Jan 31, 2010, at 9:39 AM, Bob Friesenhahn wrote: > On Sun, 31 Jan 2010, Tony MacDoodle wrote: > >> Has anyone encountered any file corruption when snapping ZFS file systems? >> How does ZFS handle open files when compared to other file system types that >> use similar technology ie. Veritas,

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-31 Thread Richard Elling
See also PSARC 2008/769 which considers 4 KB blocks for the entire OS in a phased approach. http://arc.opensolaris.org/caselog/PSARC/2008/769/inception.materials/design_doc -- richard

Re: [zfs-discuss] mimic size=1024m setting in /etc/vfstab when using rpool/swap (zfs root)

2010-01-31 Thread Richard Elling
On Jan 31, 2010, at 10:15 PM, Prakash Kochummen wrote: > Hi, > > While using ufs root, we had an option for limiting the /tmp size using mount > -o size manual option or setting size=1024m in the vfstab. This is no different when using ZFS root. /tmp is, by default, a tmpfs file system. > Do
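For reference, the tmpfs size cap works the same way with a ZFS root as it did with UFS: it is set on the /tmp line in /etc/vfstab, not on the rpool/swap zvol. A sketch of the relevant vfstab entry:

```shell
# /etc/vfstab entry capping the tmpfs-backed /tmp at 1 GB
# device-to-mount  device-to-fsck  mount-point  FS-type  fsck-pass  mount-at-boot  options
swap               -               /tmp         tmpfs    -          yes            size=1024m
```

The size= option takes effect at the next mount of /tmp, typically the next boot.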

Re: [zfs-discuss] mimic size=1024m setting in /etc/vfstab whenusing rpool/swap (zfs root)

2010-02-01 Thread Richard Elling
On Jan 31, 2010, at 11:24 PM, Prakash Kochummen wrote: > Thanks for the reply. > > Sorry i confused you too. when I mentioned ufs , i just meant ufs root > scenario (pre u6). > > Suppose I have a 136G Hdd which as my boot disk,which has been sliced it like > s0-80gb (root slice) > s1-55Gb (swa

Re: [zfs-discuss] L2ARC in Cluster is picked up althought not part of the pool

2010-02-01 Thread Richard Elling
On Feb 1, 2010, at 5:53 AM, Lutz Schumann wrote: > I tested some more and found that Pool disks are picked UP. > > Head1: Cachedevice1 (c0t0d0) > Head2: Cachedevice2 (c0t0d0) > Pool: Shared, c1td > > I created a pool on shared storage. > Added the cache device on Head1. > Switched the pool to

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Richard Elling
On Feb 2, 2010, at 8:49 AM, David Dyer-Bennet wrote: > On Tue, February 2, 2010 10:21, Marc Nicholas wrote: >> I agree wholeheartedly ... you're paying to make the problem "go away" in >> an >> expedient manner. That said, I see how much we spend on NetApp storage at >> work and it makes me shudder

Re: [zfs-discuss] How to grow ZFS on growing pool?

2010-02-02 Thread Richard Elling
On Feb 2, 2010, at 9:29 AM, David Champion wrote: > * On 02 Feb 2010, Darren J Moffat wrote: >> >> zpool get autoexpand test > > This seems to be a new property -- it's not in my Solaris 10 or > OpenSolaris 2009.06 systems, and they have always expanded immediately > upon replacement. In what
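The autoexpand property mentioned above controls whether a pool grows automatically when its underlying devices are resized; on builds that have it, checking and enabling it looks like this (the pool name is assumed):

```shell
# Check whether automatic expansion is enabled (it defaults to off)
zpool get autoexpand tank

# Enable it so the pool picks up extra capacity on resized devices
zpool set autoexpand=on tank
```

On releases predating the property, the pool typically grew on device replacement or at export/import instead, which matches the behavior the poster describes.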

Re: [zfs-discuss] Help needed with zfs send/receive

2010-02-02 Thread Richard Elling
On Feb 2, 2010, at 12:05 PM, Arnaud Brand wrote: > Hi folks, > > I'm having (as the title suggests) a problem with zfs send/receive. > Command line is like this : > pfexec zfs send -Rp tank/t...@snapshot | ssh remotehost pfexec zfs recv -v -F > -d tank > > This works like a charm as long as the

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Richard Elling
On Feb 2, 2010, at 10:54 AM, Orvar Korvar wrote: > 100% uptime for 20 years? > > So what makes OpenVMS so much more stable than Unix? What is the difference? Software reliability studies show that the more reliable software is old software that hasn't changed :-) On Feb 2, 2010, at 12:42 PM, D

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Richard Elling
On Feb 2, 2010, at 1:56 PM, Frank Cusack wrote: > On February 2, 2010 4:31:47 PM -0500 Miles Nordin wrote: >> and FCoE is just dumb if you have IB, honestly. > > by FCoE are you talking about iSCSI? FCoE is to iSCSI as Netware (IPX/SPX) is to NFS :-) -- richard

Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced drives

2010-02-02 Thread Richard Elling
On Feb 2, 2010, at 2:56 PM, David Magda wrote: > > On Feb 2, 2010, at 15:17, Tim Cook wrote: > >> On Tue, Feb 2, 2010 at 12:54 PM, Orvar Korvar wrote: >> >>> 100% uptime for 20 years? >>> >>> So what makes OpenVMS so much more stable than Unix? What is the >>> difference? >>> >> >> They had/h

Re: [zfs-discuss] Version to upgrade to?

2010-02-03 Thread Richard Elling
On Feb 2, 2010, at 7:58 PM, Tim Cook wrote: > > As an aside, is the stable branch being regularly patched now with security > and bug fixes? From the horse's mouth: http://sunsolve.sun.com/show.do?target=opensolaris -- richard

Re: [zfs-discuss] What happens when: file-corrupted and no-redundancy?

2010-02-03 Thread Richard Elling
On Feb 3, 2010, at 3:15 PM, Aleksandr Levchuk wrote: > We switched to OpenSolaris + ZFS. RAID6 + hot spare on LSI Engenio san > hardware, worked well for us. (I'm used to the san management GUI. Also, > something that RAID-Z would not be able to do is: the san lights-up the amber > LEDs on the

Re: [zfs-discuss] How to get a list of changed files between two snapshots?

2010-02-03 Thread Richard Elling
On Feb 3, 2010, at 3:46 PM, Ross Walker wrote: > On Feb 3, 2010, at 12:35 PM, Frank Cusack > wrote: > >> On February 3, 2010 12:19:50 PM -0500 Frank Cusack >> wrote: >>> If you do need to know about deleted files, the find method still may >>> be faster depending on how ddiff determines wheth

Re: [zfs-discuss] cannot receive incremental stream

2010-02-03 Thread Richard Elling
On Feb 3, 2010, at 3:52 PM, Jan Hlodan wrote: > Hi, > can anybody explain me what does it mean?: sure > ips% pfexec zfs recv storage/ips/osol-dev/sync_repo < > /storage/snapshots_sync/osol-dev-incr-20100122-20100129 > cannot receive incremental stream: most recent snapshot of > storage/ips/osol
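The error above usually means the destination dataset has changed (new writes or new snapshots) since the last snapshot it received, so the incremental stream has no matching starting point. Two common remedies, sketched with the dataset name from the post but a hypothetical snapshot name:

```shell
# Roll the destination back to the most recent received snapshot...
zfs rollback -r storage/ips/osol-dev/sync_repo@last-received

# ...or let receive force the rollback itself with -F
pfexec zfs recv -F storage/ips/osol-dev/sync_repo < \
    /storage/snapshots_sync/osol-dev-incr-20100122-20100129
```

The -F flag discards any destination changes made since the last received snapshot, so it is only appropriate when the destination is a pure replica.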

Re: [zfs-discuss] Cores vs. Speed?

2010-02-04 Thread Richard Elling
Put your money into RAM, especially for dedup. -- richard On Feb 4, 2010, at 3:19 PM, Brian wrote: > I am Starting to put together a home NAS server that will have the following > roles: > > (1) Store TV recordings from SageTV over either iSCSI or CIFS. Up to 4 or 5 > HD streams at a time.
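Richard's advice to put money into RAM for dedup follows from the size of the dedup table (DDT), which must largely stay in ARC for write performance to hold up. A back-of-envelope sketch, assuming roughly 320 bytes of in-core footprint per DDT entry and one entry per unique block (both ballpark figures commonly quoted on this list, not guarantees from the ZFS source):

```python
def ddt_ram_estimate(pool_bytes, avg_block_size=128 * 1024,
                     bytes_per_entry=320):
    """Rough RAM needed to keep the dedup table in ARC.

    Assumes one DDT entry per unique block and ~320 bytes of in-core
    footprint per entry -- illustrative figures, not ZFS constants.
    """
    entries = pool_bytes // avg_block_size
    return entries * bytes_per_entry

# 1 TiB of unique 128K blocks -> roughly 2.5 GiB of DDT in RAM
tib = 1 << 40
print(ddt_ram_estimate(tib) / (1 << 30))
```

Smaller average block sizes (or a recordsize tuned down for databases) multiply the entry count, which is why dedup memory demands are so workload-sensitive.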

Re: [zfs-discuss] ZFS send/recv checksum transmission

2010-02-05 Thread Richard Elling
On Feb 5, 2010, at 3:11 AM, grarpamp wrote: > Are the sha256/fletcher[x]/etc checksums sent to the receiver along > with the other data/metadata? No. Checksums are made on the records, and there could be a different record size for the sending and receiving file systems. The stream itself is check
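The stream's own integrity check is fletcher4, the same lightweight running checksum ZFS uses by default for on-disk metadata. A minimal sketch of the algorithm: four 64-bit accumulators over the data viewed as little-endian 32-bit words (variable names are mine, not from the ZFS source):

```python
import struct

def fletcher4(data: bytes):
    """Fletcher-4 as used by ZFS: four 64-bit running sums over
    little-endian 32-bit words, each truncated modulo 2**64."""
    assert len(data) % 4 == 0  # ZFS checksums whole 32-bit words
    a = b = c = d = 0
    mask = (1 << 64) - 1
    for (word,) in struct.iter_unpack('<I', data):
        a = (a + word) & mask
        b = (b + a) & mask
        c = (c + b) & mask
        d = (d + c) & mask
    return a, b, c, d
```

Because b, c and d effectively weight each word by its position, fletcher4 catches reordered or shifted words that a plain sum would miss, while still being much cheaper to compute than sha256.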

Re: [zfs-discuss] ZFS send/recv checksum transmission

2010-02-05 Thread Richard Elling
On Feb 5, 2010, at 7:20 PM, grarpamp wrote: >> No. Checksums are made on the records, and there could be a different >> record size for the sending and receiving file systems. > > Oh. So there's a zfs read to ram somewhere, which checks the sums on disk. > And then entirely new stream checksums ar

Re: [zfs-discuss] ZFS send/recv checksum transmission

2010-02-05 Thread Richard Elling
On Feb 5, 2010, at 8:09 PM, grarpamp wrote: >>> Hmm, is that configurable? Say to match the checksums being >>> used on the filesystem itself... ie: sha256? It would seem odd to >>> send with less bits than what is used on disk. > >>> Was thinking that plaintext ethernet/wan and even some of the

Re: [zfs-discuss] ZFS send/recv checksum transmission

2010-02-06 Thread Richard Elling
On Feb 5, 2010, at 10:50 PM, grarpamp wrote: >>> Perhaps I meant to say that the box itself [cpu/ram/bus/nic/io, except disk] >>> is assumed to handle data with integrity. So say netcat is used as >>> transport, >>> zfs is using sha256 on disk, but only fletcher4 over the wire with >>> send/recv

Re: [zfs-discuss] ZFS ZIL + L2ARC SSD Setup

2010-02-08 Thread Richard Elling
To add to Bob's notes... On Feb 8, 2010, at 8:37 AM, Bob Friesenhahn wrote: > On Mon, 8 Feb 2010, Felix Buenemann wrote: >> >> I was under the impression, that using HW RAID10 would save me 50% PCI >> bandwidth and allow the controller to more intelligently handle its cache, >> so I sticked wit

Re: [zfs-discuss] Drive failure causes system to be unusable

2010-02-08 Thread Richard Elling
On Feb 8, 2010, at 9:05 AM, Martin Mundschenk wrote: > Hi! > > I have a OSOL box as a home file server. It has 4 1TB USB Drives and 1 TB > FW-Drive attached. The USB devices are combined to a RaidZ-Pool and the FW > Drive acts as a hot spare. > > This night, one USB drive faulted and the follow

Re: [zfs-discuss] zpool list size

2010-02-08 Thread Richard Elling
This is a FAQ, but the FAQ is not well maintained :-( http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq On Feb 8, 2010, at 1:35 PM, Lasse Osterild wrote: > Hi, > > This may well have been covered before but I've not been able to find an > answer to this particular question. > > I've s
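The FAQ in question: for raidz pools, `zpool list` reports raw capacity including parity, while `zfs list` shows only the usable portion, so the two never agree. A rough sketch of the arithmetic (ignoring slop space, metadata overhead and sector rounding, so real numbers will be a little lower):

```python
def raidz_sizes(disks, disk_bytes, parity=1):
    """Approximate the zpool list vs. zfs list discrepancy for a
    single raidz vdev: zpool list counts every disk, parity
    included; zfs list shows roughly the data portion only."""
    raw = disks * disk_bytes                 # ~ zpool list SIZE
    usable = (disks - parity) * disk_bytes   # ~ zfs list AVAIL+USED
    return raw, usable

# 7 x 1 TB drives in raidz1: pool "size" ~7 TB, usable ~6 TB
raw, usable = raidz_sizes(7, 10**12, parity=1)
```

The same function with `parity=2` models raidz2; mirrors are different again, since there `zpool list` reports the size of one side of the mirror.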

Re: [zfs-discuss] Dedup Questions.

2010-02-08 Thread Richard Elling
On Feb 8, 2010, at 6:04 PM, Kjetil Torgrim Homme wrote: > Tom Hall writes: > >> If you enable it after data is on the filesystem, it will find the >> dupes on read as well as write? Would a scrub therefore make sure the >> DDT is fully populated. > > no. only written data is added to the DDT,
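Kjetil's point, that only data written after dedup is enabled enters the DDT, can be illustrated with a toy table: pre-existing blocks never passed through the write path, so they are invisible to dedup until rewritten (e.g. by zfs send | zfs recv into a dedup-enabled dataset), and a scrub only reads, adding nothing. A toy model, not ZFS internals:

```python
import hashlib

class ToyDedupTable:
    """Toy DDT: it only sees blocks that pass through the write
    path after dedup is enabled."""
    def __init__(self):
        self.refcounts = {}

    def write(self, block: bytes) -> bool:
        """Record a write; return True if it deduplicated."""
        key = hashlib.sha256(block).digest()
        self.refcounts[key] = self.refcounts.get(key, 0) + 1
        return self.refcounts[key] > 1

ddt = ToyDedupTable()
# Copies of b'blockA' written *before* enabling dedup were never
# hashed, so the first post-enable write has nothing to match:
first_write = ddt.write(b'blockA')   # False: no dedup hit
rewrite = ddt.write(b'blockA')       # True: now the table knows it
```

Hence the practical advice on this list: to dedup existing data you must rewrite it, not just scrub it.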

Re: [zfs-discuss] Intrusion Detection - powered by ZFS Checksumming ?

2010-02-08 Thread Richard Elling
On Feb 8, 2010, at 9:10 PM, Damon Atkins wrote: > I would have thought that if I write 1k then ZFS txg times out in 30secs, > then the 1k will be written to disk in a 1k record block, and then if I write > 4k then 30secs latter txg happen another 4k record size block will be > written, and then

Re: [zfs-discuss] Dedup Questions.

2010-02-09 Thread Richard Elling
On Feb 9, 2010, at 7:24 AM, Kjetil Torgrim Homme wrote: > Richard Elling writes: > >> On Feb 8, 2010, at 6:04 PM, Kjetil Torgrim Homme wrote: >>> the size of [a DDT] entry is much larger: >>> >>> | From: Mertol Ozyoney >>> | >>

Re: [zfs-discuss] ZFS replication primary secondary

2010-02-10 Thread Richard Elling
On Feb 10, 2010, at 10:31 AM, Terry Hull wrote: > First of all, I must apologize. I'm an OpenSolaris newbie so please don't > be too hard on me. [phasers on stun] > Sorry if this has been beaten to death before, but I could not find it, so > here goes. I'm wanting to be able to have two d

Re: [zfs-discuss] ZFS replication primary secondary

2010-02-10 Thread Richard Elling
On Feb 10, 2010, at 1:38 PM, Terry Hull wrote: > Thanks for the info. > > If that last common snapshot gets destroyed on the primary server, it is then > a full replication back to the primary server. Is that correct? If there are no common snapshots, then the first question is "how did we
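The logic Richard is describing, that incremental replication works only while sender and receiver share a snapshot, and that losing the last common one forces a full send, can be sketched as follows (snapshot names and the helper are illustrative, not a ZFS API):

```python
def plan_replication(source_snaps, dest_snaps):
    """Choose between incremental and full replication, mimicking
    the zfs send -i requirement that the destination's most recent
    snapshot also exist on the source.  Lists are ordered oldest
    to newest."""
    common = [s for s in dest_snaps if s in source_snaps]
    if common:
        # zfs send -i <last common snap> <newest source snap>
        return ('incremental', common[-1], source_snaps[-1])
    # No shared history: only a full stream can rebuild the target.
    return ('full', None, source_snaps[-1])

plan = plan_replication(['mon', 'tue', 'wed'], ['mon', 'tue'])
# Destroy 'mon' and 'tue' on the source and only a full send remains:
fallback = plan_replication(['wed'], ['mon', 'tue'])
```

This is why replication scripts typically keep at least the most recent successfully received snapshot on both sides before pruning anything older.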

Re: [zfs-discuss] /usr/bin/chgrp destroys ACL's?

2010-02-10 Thread Richard Elling
CC'ed to security-disc...@opensolaris.org -- richard On Feb 10, 2010, at 4:45 PM, Paul B. Henson wrote: > > We have an open bug which results in new directories created over NFSv4 > from a linux client having the wrong group ownership. While waiting for a > patch to resolve the issue, we have a

Re: [zfs-discuss] ZFS ZIL + L2ARC SSD Setup

2010-02-12 Thread Richard Elling
On Feb 12, 2010, at 8:20 AM, Felix Buenemann wrote: > Hi Mickaël, > > Am 12.02.10 13:49, schrieb Mickaël Maillot: >> Intel X-25 M are MLC not SLC, there are very good for L2ARC. > > Yes, I'm only using those for L2ARC, I'm planing on getting to Mtron Pro 7500 > 16GB SLC SSDs for ZIL. > >> and

Re: [zfs-discuss] ZFS ZIL + L2ARC SSD Setup

2010-02-12 Thread Richard Elling
On Feb 12, 2010, at 9:36 AM, Felix Buenemann wrote: > Am 12.02.10 18:17, schrieb Richard Elling: >> On Feb 12, 2010, at 8:20 AM, Felix Buenemann wrote: >> >>> Hi Mickaël, >>> >>> Am 12.02.10 13:49, schrieb Mickaël Maillot: >>>> Intel
