Re: [zfs-discuss] How to verify ecc for ram is active and enabled?

2010-03-11 Thread Christo Kutrovsky
Robert, that's great info. Do you know how to check the number of errors CORRECTED by ECC in OpenSolaris?
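
One hedged way to check on OpenSolaris is through FMA, which records correctable memory errors as ereports; a minimal sketch (ereport class names vary by platform, so treat the grep pattern as an approximation):

    # summary of FMA module activity; non-zero ereport counters mean something was logged
    fmstat

    # list logged error reports; corrected ECC events normally show up as memory-related classes
    fmdump -e
    fmdump -eV | grep -i mem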

[zfs-discuss] l2arc current usage (population size)

2010-02-19 Thread Christo Kutrovsky
Hello, how do you tell how much of your L2ARC is populated? I've been looking for a while now and can't seem to find it. It must be easy, as this blog entry shows it over time: http://blogs.sun.com/brendan/entry/l2arc_screenshots As a follow-up, can you tell how much of each dataset is in the ARC
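
For later readers: the ARC kstats appear to carry the answer; a minimal sketch, assuming the stat names used by then-current builds:

    # bytes currently held in the L2ARC, and the ARC memory spent tracking those buffers
    kstat -p zfs:0:arcstats:l2_size
    kstat -p zfs:0:arcstats:l2_hdr_size

    # hit/miss counters give a feel for how useful the cached data is
    kstat -p zfs:0:arcstats:l2_hits
    kstat -p zfs:0:arcstats:l2_misses

The per-dataset breakdown asked about at the end does not appear to be exposed by a simple kstat.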

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-17 Thread Christo Kutrovsky
Dan, 'loose' was a typo; I meant 'lose'. Interesting how a typo (a write error) can cause a lot of confusion about what exactly I mean :) Resulting in a corrupted interpretation. Note that my idea/proposal is targeted at a growing number of home users. For those, value for money is usually a much more

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-17 Thread Christo Kutrovsky
Dan, exactly what I meant: an allocation policy that helps distribute the data so that when one disk (an entire mirror) is lost, some data remains fully accessible, as opposed to not being able to access pieces all over the storage pool.

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-16 Thread Christo Kutrovsky
Jeff, thanks for the link; looking forward to per-dataset control. 6280630 zil synchronicity (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6280630) It's been open for 5 years now :) Looking forward to not compromising my entire storage with a disabled ZIL when I only need it on a few
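
For the archive: that RFE was eventually delivered as a per-dataset sync property in later builds, which is exactly the granularity asked for here; a sketch, assuming a build that has it (dataset names are placeholders):

    # relax synchronous semantics only where the data is expendable
    zfs set sync=disabled tank/scratch
    # everything else keeps the default, fully synchronous behaviour
    zfs get sync tank/scratch tank/databases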

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-16 Thread Christo Kutrovsky
OK, now that you've explained it, it makes sense. Thanks for replying, Daniel. I feel better now :) Suddenly that Gigabyte i-RAM is no longer a necessity but a nice-to-have. What would be really good to have is that per-dataset ZIL control in 2010.02. And perhaps add another mode, sync no

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-16 Thread Christo Kutrovsky
Robert, that would be pretty cool, especially if it makes it into the 2010.02 release. I hope there are no weird special cases that pop up from this improvement. Regarding the workaround: that's not my experience, unless it behaves differently on ZVOLs and datasets. On ZVOLs it appears the setting

[zfs-discuss] Proposed idea for enhancement - damage control

2010-02-16 Thread Christo Kutrovsky
I just finished reading the following excellent post: http://queue.acm.org/detail.cfm?id=1670144 and started thinking about what the best long-term setup for a home server would be, given a limited number of disk slots (say 10). I considered something like simply doing a 2-way mirror. What are the chances
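
As a point of reference for the layout being weighed, a sketch of the plain 2-way-mirror option across ten slots (pool and device names are placeholders):

    zpool create tank \
        mirror c1t0d0 c1t1d0 \
        mirror c1t2d0 c1t3d0 \
        mirror c1t4d0 c1t5d0 \
        mirror c1t6d0 c1t7d0 \
        mirror c2t0d0 c2t1d0
    zpool status tank

Losing both halves of any one mirror loses the whole pool, which is the failure mode the damage-control proposal aims to soften.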

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-16 Thread Christo Kutrovsky
Thanks for your feedback, James, but that's not the direction I wanted this discussion to go. The goal was not to create a better solution for an enterprise; the goal was damage control in a disk failure scenario involving data loss. Back to the original question/idea. Which

Re: [zfs-discuss] Proposed idea for enhancement - damage control

2010-02-16 Thread Christo Kutrovsky
Bob, using a separate pool would impose other limitations, such as not being able to use more space than what's allocated to that pool. You could add space as needed, but you can't remove (move) devices freely. By using a shared pool with a hint of the desired vdev/space allocation policy, you
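
To make that asymmetry concrete, a hedged sketch (device names are placeholders; behaviour as of builds current at the time of this thread):

    # growing a pool is trivial: add another top-level vdev at any time
    zpool add tank mirror c3t0d0 c3t1d0

    # shrinking is not: only hot spares, cache devices and (in recent builds) log devices
    # can be removed; a data vdev such as the mirror above cannot be taken out again
    zpool remove tank c4t0d0    # placeholder: works only if c4t0d0 is a spare or cache device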

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-07 Thread Christo Kutrovsky
Eric, I am confused. What's the difference between turning off slogs (via logbias) and turning off the ZIL (via a kernel tunable)? Isn't that similar, just with one being more granular?
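
My understanding of the distinction, sketched with placeholder names: logbias=throughput only stops routing a dataset's log writes to the separate log device, so the ZIL itself still exists in the main pool, while the kernel tunable removes synchronous logging everywhere.

    # per-dataset: keep ZIL semantics, just bypass the slog for this dataset's log writes
    zfs set logbias=throughput tank/backups

    # global, in /etc/system (needs a reboot): no ZIL at all, for every dataset in every pool
    set zfs:zil_disable = 1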

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-07 Thread Christo Kutrovsky
Darren, thanks for the reply. It's still not clear to me, though. The only purpose of the slog is to serve the ZIL, and there may be many ZILs on a single slog. From Milek's blog: logbias=latency - data written to the slog first; logbias=throughput - data written directly to the dataset. Here's my problem. I

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-07 Thread Christo Kutrovsky
Has anyone seen soft corruption in NTFS iSCSI ZVOLs after a power loss? I mean, there is no guarantee that writes will be executed in order, so in theory one could corrupt its NTFS file system. Would best practice be to roll back to the last snapshot before making those iSCSI volumes available again?
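
A hedged sketch of that recovery sequence (volume and snapshot names are placeholders):

    # roll the volume back to the last snapshot taken before the power loss,
    # discarding any later snapshots, then re-export it to the initiators
    zfs rollback -r tank/ntfsvol@before-crash
    zfs set shareiscsi=on tank/ntfsvol    # pre-COMSTAR sharing; COMSTAR uses stmfadm/sbdadm instead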

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-06 Thread Christo Kutrovsky
Me too; I would like to know the answer. I am considering Gigabyte's i-RAM for the ZIL, but I don't want to worry about what happens if the battery dies after a system crash.

Re: [zfs-discuss] Pool import with failed ZIL device now possible ?

2010-02-06 Thread Christo Kutrovsky
Eric, thanks for clarifying. Could you confirm the release for #1? "Today" can be misleading depending on the user. Is there a schedule/target for #2? And just to confirm, the alternative of turning off the ZIL globally is equivalent to always throwing away some committed data on a

[zfs-discuss] server hang with compression on, ping timeouts from remote machine

2010-01-31 Thread Christo Kutrovsky
Hello all, I am running NTFS over iSCSI on a ZFS ZVOL with compression=gzip-9 and an 8K block size. The server is a 2-core P4 3.0 GHz with 5 GB of RAM. Whenever I start copying files from Windows onto the ZFS disk, after about 100-200 MB have been copied the server starts to experience freezes. I
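
For context, roughly how such a volume would have been created (size and names are placeholders):

    zfs create -V 100G -o volblocksize=8k -o compression=gzip-9 tank/ntfsvol
    # gzip-9 is very CPU-intensive; on a 2-core P4 the compression threads can starve
    # the network and iSCSI service threads, which would explain the ping timeouts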

Re: [zfs-discuss] server hang with compression on, ping timeouts from remote machine

2010-01-31 Thread Christo Kutrovsky
Thanks for your replies. I am aware of the 512-byte concept, hence my selection of 8 KB (matched to 8 KB NTFS). Even a 20% reduction is still good; that's like having 20% extra RAM (for cache). I haven't experimented with the default lzjb compression. If I want to compress something, usually I
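
A quick way to compare the default algorithm against gzip-9 on the same volume, sketched with a placeholder name (a compression change only affects newly written blocks):

    zfs set compression=lzjb tank/ntfsvol
    # rewrite some representative data from the Windows side, then check the achieved ratio
    zfs get compression,compressratio tank/ntfsvol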

Re: [zfs-discuss] server hang with compression on, ping timeouts from remote machine

2010-01-31 Thread Christo Kutrovsky
Thanks Bill, that looks relevant. Note, however, that this only happens with gzip compression, but it's definitely something I've experienced. I've decided to wait for the next full release before upgrading; I was just wondering if the problem was resolved. I'll migrate to COMSTAR soon; I hope the

Re: [zfs-discuss] Verify NCQ status

2010-01-30 Thread Christo Kutrovsky
, Jan 29, 2010 at 4:04 PM, Richard Elling richard.ell...@gmail.com wrote: On Jan 29, 2010, at 12:01 PM, Christo Kutrovsky wrote: Hello, I have a PDSMi board (http://www.supermicro.com/products/motherboard/PD/E7230/PDSMi.cfm) with an Intel® ICH7R SATA2 (3 Gbps) controller built in. I suspect

[zfs-discuss] Verify NCQ status

2010-01-29 Thread Christo Kutrovsky
Hello, I have a PDSMi board (http://www.supermicro.com/products/motherboard/PD/E7230/PDSMi.cfm) with an Intel® ICH7R SATA2 (3 Gbps) controller built in. I suspect NCQ is not working, as I never see actv bigger than 1.0 in iostat even though I have requests in wait. How can I verify the status
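
Two hedged checks that usually settle this: watch the per-device queue under concurrent load, and confirm the controller is actually bound to the AHCI driver (on ICH7R boards NCQ generally requires the BIOS SATA mode to be set to AHCI rather than IDE emulation):

    # with NCQ active and several outstanding requests, actv should rise above 1
    iostat -xnz 1

    # if the disks attach through the legacy IDE path instead of ahci, NCQ will not be used
    prtconf -D | grep -i ahci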

Re: [zfs-discuss] primarycache=off, secondarycache=all

2010-01-28 Thread Christo Kutrovsky
Thanks for the info, Dan. I will test it out, but it won't be anytime soon; I'm waiting for that SSD.

Re: [zfs-discuss] ARC not using all available RAM?

2010-01-27 Thread Christo Kutrovsky
I am interested in this as well. My machine has 5 GB of RAM and will soon have an 80 GB SSD. My free memory hovers around 750 MB and the ARC around 3 GB. This machine doesn't do anything other than iSCSI/CIFS, so I wouldn't mind using an extra 500 MB for caching. And this becomes
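
A sketch of how to see where the ceiling is and move it, assuming the usual tunable (the value is a placeholder in bytes, applied via /etc/system and a reboot):

    # current ARC size and its configured maximum
    kstat -p zfs:0:arcstats:size
    kstat -p zfs:0:arcstats:c_max

    # /etc/system entry: raise the ARC ceiling, e.g. to 4 GiB
    set zfs:zfs_arc_max = 0x100000000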

Re: [zfs-discuss] ARC Ghost lists, why have them and how much ram is used to keep track of them? [long]

2010-01-27 Thread Christo Kutrovsky
I have the exact same questions. I am very interested in the answers to those.

[zfs-discuss] primarycache=off, secondarycache=all

2010-01-27 Thread Christo Kutrovsky
In the case of a ZVOL with the settings primarycache=off, secondarycache=all, how does the L2ARC get populated if the data never makes it into the ARC? Is this even a valid configuration? The reason I ask is that I have iSCSI volumes for NTFS and I intend to use an SSD for L2ARC. If something is
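
For reference, the properties in question and a middle ground that, as I understand the current code, still lets the L2ARC fill: the L2ARC is fed from buffers being evicted out of the ARC, so with primarycache=off there is nothing to feed it. A sketch with a placeholder volume name:

    # cache only metadata in RAM but allow everything into the L2ARC
    zfs set primarycache=metadata tank/ntfsvol
    zfs set secondarycache=all tank/ntfsvol
    zfs get primarycache,secondarycache tank/ntfsvol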

Re: [zfs-discuss] ZFS ARC vs Oracle cache

2009-09-25 Thread Christo Kutrovsky
server) on the system, a db_cache size in the 70 GiB range would be perfectly acceptable. Don't forget to set pga_aggregate_target to something reasonable too, like 20 GiB. Christo Kutrovsky, Senior DBA, The Pythian Group. I blog at: www.pythian.com/news
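
Expressed as the init.ora / spfile parameters implied by the advice above (sizes taken straight from the thread; tune them to the actual host):

    # Oracle instance parameters suggested in the discussion
    db_cache_size        = 70G
    pga_aggregate_target = 20G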