Re: [zfs-discuss] OpenSolaris snv_134 zfs pool hangs after some time with dedup=on

2010-04-28 Thread Jim Horng
This is not a performance issue. The rsync hangs hard and one of the child processes cannot be killed (I assume it's the one running on ZFS). By "the command gets slower" I am referring to the output of the filesystem commands (zpool, zfs, df, du, etc.) from a different shell. I left the

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-28 Thread Jim Horng
3 shelves with 2 controllers each, 48 drives per shelf. These are Fibre Channel attached. We would like all 144 drives added to the same large pool. I would do either 12- or 16-disk raidz3 vdevs and spread the disks across controllers within each vdev. You may also want to leave at least 1 spare
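A minimal sketch of that layout, assuming six controllers c1 through c6 and placeholder target numbers (none of these device names come from the original post):
# zpool create bigpool \
    raidz3 c1t0d0 c1t1d0 c2t0d0 c2t1d0 c3t0d0 c3t1d0 c4t0d0 c4t1d0 c5t0d0 c5t1d0 c6t0d0 c6t1d0 \
    raidz3 c1t2d0 c1t3d0 c2t2d0 c2t3d0 c3t2d0 c3t3d0 c4t2d0 c4t3d0 c5t2d0 c5t3d0 c6t2d0 c6t3d0 \
    spare c1t4d0
Each 12-disk raidz3 vdev takes two disks from every controller, so losing a whole controller costs a vdev only two disks, inside raidz3's three-disk tolerance; repeat the raidz3 lines until all 144 drives are placed.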

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-29 Thread Jim Horng
Why would you recommend a spare for raidz2 or raidz3? -- richard A spare is to minimize the reconstruction time. Remember that a vdev cannot start resilvering until a replacement disk is available, and with disks as big as they are today, resilvering takes many hours. I would rather have
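A minimal sketch of what that looks like in practice (pool and device names are placeholders): with a hot spare in the pool, ZFS can start the resilver the moment a disk faults instead of waiting for someone to swap hardware.
# zpool add tank spare c6t5d0
# zpool status tank
When a member disk faults, the spare is pulled in automatically and resilvering begins; once the failed disk is replaced and resilvered, the spare detaches back to the spare list.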

Re: [zfs-discuss] Migrate ZFS volume to new pool

2010-04-29 Thread Jim Horng
Would your opinion change if the disks you used took 7 days to resilver? Bob That only makes a stronger case that a hot spare is absolutely needed. It also makes a strong case for choosing raidz3 over raidz2, as well as vdevs with a smaller number of disks.

Re: [zfs-discuss] Panic when deleting a large dedup snapshot

2010-04-30 Thread Jim Horng
Looks like I am hitting the same issue now as in the earlier post that you responded to. http://opensolaris.org/jive/thread.jspa?threadID=128532&tstart=15 I continued my test migration with dedup=off and synced a couple more file systems. I decided to merge two of the file systems together by

Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-04 Thread Jim Dunham
ZVOLs 'vol01/zvol01' and 'vol01/zvol02', under COMSTAR soon. http://wikis.sun.com/display/OpenSolarisInfo/How+to+Configure+iSCSI+Target+Ports http://wikis.sun.com/display/OpenSolarisInfo/COMSTAR+Administration - Jim _ Przem From: Rick McNeal

Re: [zfs-discuss] [storage-discuss] iscsitgtd failed request to share on zpool import after upgrade from b104 to b134

2010-05-05 Thread Jim Dunham
Przem, Does anybody have an idea what I can do about it? zfs set shareiscsi=off vol01/zvol01 zfs set shareiscsi=off vol01/zvol02 Doing this will have no impact on the LUs if configured under COMSTAR. This will also transparently go away with b136, when ZFS ignores the shareiscsi property. - Jim

[zfs-discuss] How can I be sure the zfs send | zfs received is correct?

2010-05-09 Thread Jim Horng
Okay, so after some tests with dedup on snv_134, I decided we cannot use the dedup feature for the time being. Being unable to destroy a deduped file system, I decided to migrate the file system to another pool and then destroy the old pool. (see below)
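A minimal sketch of that kind of migration, with placeholder pool and dataset names standing in for the ones actually used:
# zfs snapshot tank/export/projects/project1@today
# zfs send tank/export/projects/project1@today | zfs receive -v mpool/export/projects/project1
# zpool destroy tank
The destroy is only safe once the received copy has been verified, which is exactly the question the rest of this thread is about.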

Re: [zfs-discuss] How can I be sure the zfs send | zfs received is correct?

2010-05-09 Thread Jim Horng
size of snapshot? r...@filearch1:/var/adm# zfs list mpool/export/projects/project1...@today NAME USED AVAIL REFER MOUNTPOINT mpool/export/projects/project1...@today 0 - 407G - r...@filearch1:/var/adm# zfs list

Re: [zfs-discuss] How can I be sure the zfs send | zfs received is correct?

2010-05-10 Thread Jim Horng
I was expecting zfs send tank/export/projects/project1...@today would send everything up to @today. That is the only snapshot and I am not using the -i option. The thing that worries me is that tank/export/projects/project1_nb was the first file system that I tested with full dedup and
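A hedged way to gain confidence in the copy (paths are illustrative): mount both sides and compare the trees with rsync in dry-run checksum mode, which lists any file that differs without modifying anything.
# rsync -avnc /tank/export/projects/project1/ /mpool/export/projects/project1/
An empty file list beyond the summary means the two trees match as far as rsync's checksums can tell; this also sidesteps any doubt about how dedup or compression change the space accounting.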

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-13 Thread Jim Horng
When I boot up without the disks in the slots, I manually bring the pool online with zpool clear poolname. I believe that was what you were missing from your command. However, I did not try to change controllers. Hopefully you have only been unplugging disks while the system is turned off. If that's

Re: [zfs-discuss] Moved disks to new controller - cannot import pool even after moving back

2010-05-14 Thread Jim Horng
You may or may not need to add the log device back. zpool clear should bring the pool online; either way it shouldn't affect the data.

[zfs-discuss] Can I recover filesystem from an offline pool?

2010-05-25 Thread Jim Horng
Hi all, is there any procedure to recover a filesystem from an offline pool, or to bring a pool online quickly? Here is my issue. * One 700GB zpool * 1 filesystem with compression turned on (only using a few MB) * Trying to migrate another filesystem from a different pool with a dedup stream, with zfs send

Re: [zfs-discuss] Can I recover filesystem from an offline pool?

2010-05-30 Thread Jim Horng
10GB of memory and 5 days later, the pool was imported. This file server is a virtual machine. I allocated 2GB of memory and 2 CPU cores, assuming this was enough to manage 6 TB (6x 1TB disks), while the pool I am trying to recover is only 700 GB, not the 6TB pool I am trying to migrate. So I decided

Re: [zfs-discuss] file concatenation with ZFS copy-on-write

2010-06-15 Thread Jim Klimov
, but most people may init them by packages (though zoneadm says it is copying thousands of files), so /etc/skel might be a better example of the use case - though nearly useless ,) jim

Re: [zfs-discuss] Fwd: zpool import despite missing log [PSARC/2010/292Self Review]

2010-07-31 Thread Jim Doyle
A solution to this problem would be my early Christmas present! Here is how I lost access to an otherwise healthy mirrored pool two months ago: a box running snv_130 with two disks in a mirror and an iRAM battery-backed ZIL device was shut down in an orderly fashion and powered down normally. While I was away
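For reference, the feature under review here (PSARC/2010/292) covers exactly this case; on builds that include it, something along these lines brings the pool in without its separate log (the pool name and the log-device GUID are placeholders):
# zpool import -m tank
# zpool remove tank <guid-of-missing-log>
Any synchronous writes that only lived on the lost log device are discarded, but the rest of the pool comes back.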

Re: [zfs-discuss] Adding ZIL to pool questions

2010-08-01 Thread Jim Doyle
, plus updates to files' atime attr - and that particular scale of operation will be greatly improved by an NVRAM ZIL. If I were to use a ZIL again, I'd use something like the ACARD DDR-2 SATA boxes, and not an SSD or an iRAM. -- Jim

Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-08-06 Thread Jim Barker
I have been looking at why a zfs receive operation is terribly slow. One observation that seems directly linked is that at any one time one of the CPUs is pegged at 100% sys while the other 5, in my case, are relatively quiet. I haven't dug any deeper than that, but was
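A hedged, generic way to see what that pegged CPU is burning its time on (nothing here is specific to the original report):
# mpstat 1
# dtrace -n 'profile-997 /arg0/ { @[stack()] = count(); }' -n 'tick-10s { trunc(@, 10); exit(0); }'
mpstat confirms which CPU is stuck at 100% sys, and the profile aggregation prints the ten hottest kernel stacks after ten seconds, which usually names the function responsible.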

Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-08-06 Thread Jim Barker
Just an update, I had a ticket open with Sun regarding this and it looks like they have a CR for what I was seeing (6975124).

Re: [zfs-discuss] Maximum zfs send/receive throughput

2010-08-06 Thread Jim Barker
, but I guess I just delayed the freeze a little longer. I provided Oracle with some explorer output and a crash dump to analyze, and this is the data they used to provide the information I passed on. Jim Barker

Re: [zfs-discuss] Finding corrupted files

2010-10-06 Thread Jim Dunham
it, but I wanted to know if there are more of them. Assuming that the ZFS filesystem in question is not degrading further (as in a disk going bad), upon completion of a successful scrub, zpool status reports the complete state of the filesystem being reported on. - Jim Regards, budy

[zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread Jim Sloey
I have a 20TB pool on a mount point that is made up of 42 disks from an EMC SAN. We were running out of space and down to 40GB left (loading 8GB/day) and have not received disks for our SAN. Using df -h results in: Filesystem size used avail capacity Mounted on pool1

Re: [zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread Jim Sloey
Yes, you're correct. There was a typo when I copied to the forum.

Re: [zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread Jim Sloey
Yes. We run a snap in cron to a disaster recovery site. NAME USED AVAIL REFER MOUNTPOINT po...@20100930-22:20:00 13.2M - 19.5T - po...@20101001-01:20:00 4.35M - 19.5T - po...@20101001-04:20:00 0 - 19.5T - po...@20101001-07:20:00

Re: [zfs-discuss] Help - Deleting files from a large pool results in less free space!

2010-10-07 Thread Jim Sloey
One of us found the following: The presence of snapshots can cause some unexpected behavior when you attempt to free space. Typically, given appropriate permissions, you can remove a file from a full file system, and this action results in more space becoming available in the file system.
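A hedged illustration of checking for this (the pool and snapshot names follow the pattern shown earlier in the thread, but are illustrative): list the snapshots by the space they pin, then destroy the ones that can be spared.
# zfs list -t snapshot -o name,used,refer -s used -r pool1
# zfs destroy pool1@20100930-22:20:00
Blocks only return to the pool once no remaining snapshot references them, which is why deleting files alone freed nothing here.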

Re: [zfs-discuss] ZFS pool issues with COMSTAR

2010-10-08 Thread Jim Dunham
On Oct 8, 2010, at 2:06 AM, Wolfraider wrote: We have a weird issue with our ZFS pool and COMSTAR. The pool shows online with no errors, everything looks good but when we try to access zvols shared out with COMSTAR, windows reports that the devices have bad blocks. Everything has been

Re: [zfs-discuss] adding new disks and setting up a raidz2

2010-10-14 Thread Jim Dunham
c0t5000C500268C0576d0 c0t5000C500268C5414d0 c0t5000C500268CFA6Bd0 c0t5000C500268D0821d0 - Jim Unfortunately I get an error: cannot open '/dev/dsk/c0t5000C500268CFA6Bd0s0': I/O error. Can anyone give me some clues as to what is wrong? I have included the zpool status and format

Re: [zfs-discuss] Performance problems due to smaller ZFS recordsize

2010-10-21 Thread Jim Mauro
There is nothing in here that requires zfs-confidential. Cross-posted to zfs-discuss. On Oct 21, 2010, at 3:37 PM, Jim Nissen wrote: Cross-posting. Original Message Subject: Performance problems due to smaller ZFS recordsize Date: Thu, 21 Oct 2010 14:00:42 -0500

Re: [zfs-discuss] Performance problems due to smaller ZFS recordsize

2010-10-25 Thread Jim Mauro
Hi Jim - cross-posting to zfs-discuss, because 20X is, to say the least, compelling. Obviously, it would be awesome if we had the opportunity to whittle down which of the changes made this fly, or if it was a combination of the changes. Looking at them individually set

Re: [zfs-discuss] Performance problems due to smaller ZFS recordsize

2010-11-01 Thread Jim Nissen
Jim, They are running Solaris 10 11/06 (u3) with kernel patch 142900-12. See inline for the rest... On 10/25/10 11:19 AM, Jim Mauro wrote: Hi Jim - cross-posting to zfs-discuss, because 20X is, to say the least, compelling. Obviously, it would be awesome if we had the opportunity
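For readers following along, the knob in question is the per-dataset recordsize property; a minimal, hedged example with a placeholder dataset name:
# zfs get recordsize tank/oradata
# zfs set recordsize=128k tank/oradata
The new value only affects blocks written after the change, so existing files keep their old block size until they are rewritten or recopied.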

Re: [zfs-discuss] zpool import is this safe to use -f option in this case ?

2010-11-16 Thread Jim Dunham
. Also I have observed that zpool import took some time to complete successfully. Is there a way to minimize the zpool import -f operation time? No. - Jim Regards, sridhar.

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-16 Thread Jim Dunham
/mpxio/mpath, whatever your OS calls multi-pathing. MC/S (Multiple Connections per Session) support was added to the iSCSI Target in COMSTAR, now available in Oracle Solaris 11 Express. - Jim -Ross

Re: [zfs-discuss] zpool import is this safe to use -f option in this case ?

2010-11-16 Thread Jim Dunham
Tim, On Wed, Nov 17, 2010 at 10:12 AM, Jim Dunham james.dun...@oracle.com wrote: sridhar, I have done the following (which is required for my case): Created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1, created an array-level snapshot of the device using dscli

Re: [zfs-discuss] L2ARC - shared or associated with a pool?

2011-01-12 Thread Jim Dunham
on the manual formatting done above) NOTE: The omission of either slice designator ('s0' or 's1' above) will cause ZFS to (re)format the whole device, undoing any manual partitioning done with format. Jim So - is it local to a pool, or global? If it's global, will I need to do something
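A minimal sketch of the difference (pool and device names are placeholders): name the slice and ZFS uses only that partition; name the bare disk and ZFS relabels the whole thing.
# zpool add tank cache c4t2d0s0
# zpool add tank cache c4t2d0
The first form preserves the manual format(1M) layout, the second hands the entire device over to ZFS.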

Re: [zfs-discuss] existing performance data for on-disk dedup?

2011-02-14 Thread Jim Dunham
of dedup, I found that the latest version of VDBench supports dedup, and is helpful in narrowing down specific issues related to the size of the DDT, the ARC and L2ARC. http://blogs.sun.com/henk/entry/first_beta_version_of_vdbench Jim Thanks for the help, Janice

Re: [zfs-discuss] iSCSI initiator question

2011-02-26 Thread Jim Dunham
: dd of=/dev/rdsk/c?t?d?s0 if=/dev/rdsk/c?t?d?s0 seek=4294967296 count=1 Note: Make sure that both devices specified (/dev/rdsk/c?t?d?s0) are identical so that the data written is identical to the data read. - Jim On the initiator or the target? I tried to set up a new server

Re: [zfs-discuss] Slices and reservations Was: Re: How long should an empty destroy take? snv_134

2011-03-07 Thread Jim Dunham
measurable write I/O performance, although how much is unclear. For those interested, one can trace back the ZFS code starting here: http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_disk.c#276 Jim 3. Assuming I want to do such an allocation

[zfs-discuss] detach configured log devices?

2011-03-16 Thread Jim Mauro
With ZFS, Solaris 10 Update 9, is it possible to detach configured log devices from a zpool? I have a zpool with 3 F20 mirrors for the ZIL. They're coming up corrupted. I want to detach them, remake the devices and reattach them to the zpool. Thanks /jim
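A hedged sketch of the mechanics, with placeholder device names standing in for the F20 modules: standalone log devices come out with zpool remove, one side of a mirrored log with zpool detach, and the rebuilt devices go back in with zpool add.
# zpool remove tank c3t0d0
# zpool detach tank c3t1d0
# zpool add tank log mirror c3t0d0 c3t1d0
Log device removal needs a pool version that supports it, which should be the case for the Solaris 10 Update 9 pool version.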

Re: [zfs-discuss] DTrace IO provider and oracle

2011-05-10 Thread Jim Litchfield
, especially with oracle, that using the psargs string is much more informative - curpsinfo->pr_psargs. Jim --- - Original Message - From: przemol...@poczta.fm To: zfs-discuss@opensolaris.org Sent: Tuesday, May 10, 2011 10:27:55 AM GMT -08:00 US/Canada Pacific Subject: Re: [zfs-discuss] DTrace IO
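A small, hedged illustration of the point (the io provider probe is generic, not taken from the original script):
# dtrace -n 'io:::start { @[curpsinfo->pr_psargs] = count(); }'
Keying the aggregation on pr_psargs shows each process's full argument string, so the many identically named oracle shadow processes become distinguishable.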

Re: [zfs-discuss] Performance problem suggestions?

2011-05-10 Thread Jim Klimov
Well, as I wrote in other threads - I have a pool named pool on physical disks, and a compressed volume in this pool which I loopback-mount over iSCSI to make another pool named dcpool. When files in dcpool are deleted, blocks are not zeroed out by current ZFS and they are still allocated for

Re: [zfs-discuss] Tuning disk failure detection?

2011-05-10 Thread Jim Klimov
In a recent post r-mexico wrote that they had to parse system messages and manually fail the drives on a similar, though different, occasion: http://opensolaris.org/jive/message.jspa?messageID=515815#515815

Re: [zfs-discuss] Modify stmf_sbd_lu properties

2011-05-10 Thread Jim Dunham
at the command set associated with stmfadm, and you should see that it has taken on all sbdadm options, and more. I believe you are looking for the functionality associated with stmfadm offline-lu, ... online-lu. - Jim Is it possible to change the GUID of the newly imported volume to match the old
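A hedged example of the suggested approach (the LU name is a placeholder GUID): take the LU offline, make the change, and bring it back, all through stmfadm.
# stmfadm offline-lu 600144F0C73A8E0000004E2A37430001
# stmfadm online-lu 600144F0C73A8E0000004E2A37430001
# stmfadm list-lu -v
list-lu -v confirms the state and the backing storage for each LU.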

Re: [zfs-discuss] bootfs ID on zfs root

2011-05-11 Thread Jim Klimov
. It is also possible that device names changed (i.e. on x86 - when SATA HDD access mode in BIOS changed from IDE to AHCI) and the boot device name saved in eeprom or its GRUB emulator is no longer valid. But this has different error strings ;) Good luck, //Jim

Re: [zfs-discuss] Modify stmf_sbd_lu properties

2011-05-11 Thread Jim Klimov
like this clone was always here with this original naming, and your current newer dataset is a cloned deviation. Hopefully this will fool STMF into using this data instead of new data, with existing GUID... 5) Enable stmf and iscsi/* services *) Tell us if it works ;) HTH, //Jim

Re: [zfs-discuss] Performance problem suggestions?

2011-05-11 Thread Jim Klimov
come up with an idea of a dtrace for your situation. I have a small but non-zero hope that the experts will also come to the web forums, review the past month's posts and give their comments on my, your and others' questions and findings ;) //Jim Klimov

Re: [zfs-discuss] ZFS backup and restore

2011-05-11 Thread Jim Klimov
or used by another system with a solution as simple as that you'd have to do a forced import (zpool import -f tank) - if it is indeed a local non-networked pool and no other machine really uses it. HTH, //Jim

Re: [zfs-discuss] Performance problem suggestions?

2011-05-11 Thread Jim Litchfield
tweaked this on the fly. One key indicator is if your disk queues hover around 10. Jim --- - Original Message - From: jimkli...@cos.ru To: zfs-discuss@opensolaris.org Sent: Wednesday, May 11, 2011 3:22:19 AM GMT -08:00 US/Canada Pacific Subject: Re: [zfs-discuss] Performance problem
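For context, the queue depth being described maps to the zfs_vdev_max_pending tunable; a hedged sketch of inspecting and lowering it on a live system, plus the persistent /etc/system form (the value 4 is only an example):
# echo zfs_vdev_max_pending/D | mdb -k
# echo zfs_vdev_max_pending/W0t4 | mdb -kw
set zfs:zfs_vdev_max_pending = 4
Lower values tend to help arrays whose LUNs sit on few spindles; wider back-ends can tolerate higher values.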

Re: [zfs-discuss] bootfs ID on zfs root

2011-05-12 Thread Jim Klimov
-m verbose from eeprom or via reboot -- -m verbose from single-user), just to maybe get some more insight into what fails? //Jim

Re: [zfs-discuss] Disk space size, used, available mismatch

2011-05-12 Thread Jim Klimov
different mirrors), or RAIDZ1 when we need more space available. HTH, //Jim Klimov

Re: [zfs-discuss] Disk space size, used, available mismatch

2011-05-12 Thread Jim Klimov
exceed 6GB by itself (and your ETL software uses a separate dataset), you can reserve 6GB for only the root FS (and hopefully its descendants - but better see the manpages): # zfs set reservation=6G rpool/ROOT/myBeName HTH, //Jim

Re: [zfs-discuss] Performance problem suggestions?

2011-05-12 Thread Jim Klimov
a sufficiently empty pool... Hopefully the Illumos team or some other developers would push this idea into reality ;) There was a good tip from Jim Litchfield regarding VDEV Queue Sizing, though. Possible current default for zfs_vdev_max_pending is 10, which is okay (or may be even too much

Re: [zfs-discuss] 350TB+ storage solution

2011-05-15 Thread Jim Klimov
or test if the theoretical warnings are valid? Thanks, //Jim Klimov

Re: [zfs-discuss] bootfs ID on zfs root

2011-05-15 Thread Jim Klimov
on the first try. It found the rpool and current bootfs and imported it with no problems. Then I just did init 6 to finish the failsafe mode, and after a reboot the system came back up with no hiccups. HTH, //Jim

Re: [zfs-discuss] ZFS ZPOOL = trash

2011-05-16 Thread Jim Klimov
the zpool import -F command? Good luck, //Jim

Re: [zfs-discuss] 350TB+ storage solution

2011-05-16 Thread Jim Klimov
2011-05-16 9:14, Richard Elling wrote: On May 15, 2011, at 10:18 AM, Jim Klimov jimkli...@cos.ru wrote: Hi, Very interesting suggestions, as I'm contemplating a Supermicro-based server for my work as well, but probably on a lower budget, as a backup store for an aging Thumper (not as its

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-16 Thread Jim Klimov
substantial) - now I'd get rid of this experiment much faster ;) -- Климов Евгений (Jim Klimov), CTO, JSC COSHT

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-17 Thread Jim Klimov
-- Климов Евгений (Jim Klimov), CTO, JSC COSHT, +7-903-7705859 (cellular), mailto:jimkli...@cos.ru

Re: [zfs-discuss] ZFS File / Folder Management

2011-05-17 Thread Jim Klimov
many people suggest that a backup on another similar server box is superior to using tape backups - although probably using more electricity in real time). sorry if this goes in the wrong spot, I could not find Seems to have come through correctly ;) HTH, //Jim Klimov

Re: [zfs-discuss] Extremely slow zpool scrub performance

2011-05-17 Thread Jim Klimov
this is somewhat complicated and hard to explain without a whiteboard. :-) From recent reading on Jeff's blog and links leading from it, I might guess this relates to different disk offsets with different writing speeds? Yes-no would suffice, as to spare the absent whiteboard ,) Thanks, //Jim

Re: [zfs-discuss] Monitoring disk seeks

2011-05-19 Thread Jim Klimov
-1/iopattern The latter tries to estimate the amounts of SEQuential and RNDom reads and writes in your workload. HTH, //Jim 2011-05-19 16:35, Sašo Kiselkov пишет: Hi all, I'd like to ask whether there is a way to monitor disk seeks. I have an application where many concurrent readers (50
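For anyone who wants to run the same check, a hedged example using the toolkit path mentioned elsewhere in these threads (adjust to wherever DTraceToolkit-0.99 is unpacked):
# /export/home/jim/DTraceToolkit-0.99/iopattern 5
Each interval it prints the percentage of random versus sequential I/O along with counts and sizes, which is usually enough to tell whether the concurrent readers are forcing the disks to seek.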

Re: [zfs-discuss] Monitoring disk seeks

2011-05-19 Thread Jim Klimov
2011-05-19 17:00, Jim Klimov wrote: I am not sure you can monitor actual mechanical seeks short of debugging and interrogating the HDD firmware - because it is the last responsible logic in the chain of caching, queuing and issuing actual commands to the disk heads. For example, a long logical

Re: [zfs-discuss] Same device node appearing twice in same mirror; one faulted, one not...

2011-05-19 Thread Jim Klimov
, and configuring MPxIO failover properly helped the system detect them as actually being one device and stop complaining as long as one path works. On the other hand, you might have some dd if=disk1 of=disk2 kind of cloning which may have puzzled the system... HTH, //Jim

[zfs-discuss] Is Dedup processing parallelized?

2011-05-20 Thread Jim Klimov
: 90% 3342 MB (p) Most Frequently Used Cache Size: 9% 362 MB (c-p) arc_meta_used = 2617 MB arc_meta_limit = 6144 MB arc_meta_max = 4787 MB Thanks for any insights, //Jim Klimov
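The figures quoted above can be pulled straight from the arcstats kstat for anyone who wants to watch them over time; a minimal sketch:
# kstat -p zfs:0:arcstats:arc_meta_used
# kstat -p zfs:0:arcstats | grep meta
The values are reported in bytes, so divide by 1048576 to compare against the MB numbers above.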

[zfs-discuss] iSCSI pool timeouts during high latency moments

2011-05-22 Thread Jim Klimov
IP addresses (i.e. localhost and NIC IP) - but that would probably fail at the same bottleneck moment - or to connect to the zvol/rdsk/... directly, without iSCSI? Thanks for ideas, //Jim Klimov

Re: [zfs-discuss] iSCSI pool timeouts during high latency moments

2011-05-22 Thread Jim Klimov
-- Климов Евгений (Jim Klimov), CTO, JSC COSHT

Re: [zfs-discuss] offline dedup

2011-05-26 Thread Jim Klimov
it ;) -- Климов Евгений (Jim Klimov), CTO, JSC COSHT

Re: [zfs-discuss] optimal layout for 8x 1 TByte SATA (consumer)

2011-05-27 Thread Jim Klimov
thought about it, can't get rid of the idea ;) ... -- Климов Евгений (Jim Klimov), CTO, JSC COSHT

Re: [zfs-discuss] Ensure Newly created pool is imported automatically in new BE

2011-05-27 Thread Jim Klimov
-- Климов Евгений (Jim Klimov), CTO, JSC COSHT

Re: [zfs-discuss] offline dedup

2011-05-27 Thread Jim Klimov
/s at least. HTH, //Jim

Re: [zfs-discuss] DDT sync?

2011-05-27 Thread Jim Klimov
entries into the HDD pool like we do now? (BTW, what do we do with a dedicated ZIL device - flush the TXG early?) //Jim

Re: [zfs-discuss] JBOD recommendation for ZFS usage

2011-05-30 Thread Jim Klimov
of HCL HDDs all have one connector... Still, I guess my post poses more questions than answers, but maybe some other list readers can reply... Hint: Nexenta people seem to be good OEM friends with Supermicro, so they might know ;) HTH, //Jim Klimov

Re: [zfs-discuss] JBOD recommendation for ZFS usage

2011-05-30 Thread Jim Klimov
know ;) Yes :-) -- richard Thanks! //Jim Klimov

Re: [zfs-discuss] JBOD recommendation for ZFS usage

2011-05-30 Thread Jim Klimov
to have so much actual bandwidth. Thanks, //Jim

Re: [zfs-discuss] JBOD recommendation for ZFS usage

2011-05-31 Thread Jim Klimov
-- Климов Евгений (Jim Klimov), CTO, JSC COSHT

[zfs-discuss] Question on ZFS iSCSI

2011-05-31 Thread Jim Klimov
negligible and there are more options quickly available, such as mounting the iSCSI device on another server? Now that I hit the problem of reverting to direct volume access, this makes sense ;) Thanks in advance for ideas or clarifications, //Jim Klimov

Re: [zfs-discuss] Question on ZFS iSCSI

2011-05-31 Thread Jim Klimov
4295GB 4295GB 8389kB But lofiadm doesn't let me address that partition #1 as a separate device :( Thanks, //Jim Klimov

Re: [zfs-discuss] optimal layout for 8x 1 TByte SATA (consumer)

2011-05-31 Thread Jim Klimov
of 3*4-disk-raidz1 vs 1*12-disk raidz3, so which of the tradeoffs is better - more vdevs, or more parity to survive the loss of ANY 3 disks vs. the right 3 disks? Thanks, //Jim Klimov
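A quick worked comparison for 12 equal disks, just to make the tradeoff concrete: 3 x 4-disk raidz1 yields 12 - 3 = 9 data disks and survives one failure per vdev (up to three in total, but only if they land in different vdevs), while 1 x 12-disk raidz3 also yields 9 data disks and survives any three failures - at the cost of a single vdev's worth of random IOPS and wider stripes to resilver.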

Re: [zfs-discuss] Ensure Newly created pool is imported automatically in new BE

2011-05-31 Thread Jim Klimov
- From: Matt Keenan matt...@opensolaris.org Date: Tuesday, May 31, 2011 21:02 Subject: Re: [zfs-discuss] Ensure Newly created pool is imported automatically in new BE To: j...@cos.ru Cc: zfs-discuss@opensolaris.org Jim, Thanks for the response, I've nearly got it working, coming up

Re: [zfs-discuss] Importing Corrupted zpool never ends

2011-05-31 Thread Jim Klimov
right. My FreeRAM-Watchdog code and compiled i386 binary and a primitive SMF service wrapper can be found here: http://thumper.cos.ru/~jim/freeram-watchdog-20110531-smf.tgz Other related forum threads: * zpool import hangs indefinitely (retry post in parts; too long?) http://opensolaris.org/jive

Re: [zfs-discuss] Ensure Newly created pool is imported automatically in new BE

2011-05-31 Thread Jim Klimov
Actually if you need beadm to know about the data pool, it might be beneficial to mix both approaches - yours with bemount, and an init script to enforce the pool import on that first boot... HTH, //Jim Klimov

Re: [zfs-discuss] not sure how to make filesystems

2011-05-31 Thread Jim Klimov
-- Климов Евгений (Jim Klimov), CTO, JSC COSHT

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Jim Klimov
dedicated tasks with data you're okay with losing. You can also make the rpool a three-way mirror, which may increase read speeds if you have enough concurrency. And when one drive breaks, your rpool is still mirrored. HTH, //Jim Klimov
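A hedged example of growing a two-way rpool mirror into a three-way one (device names are placeholders; a root-pool disk also needs the usual SMI label and boot blocks):
# zpool attach rpool c0t1d0s0 c0t2d0s0
# zpool status rpool
zpool attach against an existing mirror member simply adds another side and starts a resilver onto the new disk.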

Re: [zfs-discuss] Is another drive worth anything?

2011-05-31 Thread Jim Klimov
out to be a known bug which may have since been fixed... Also, in a mirroring scenario, is there any good reason to keep a warm spare instead of making a three-way mirror right away (besides energy saving)? Rebuild times and non-redundant windows can be decreased considerably ;) //Jim

Re: [zfs-discuss] Question on ZFS iSCSI

2011-06-01 Thread Jim Klimov
inside the volume): # zpool import -d /dev/zvol/dsk/pool dcpool cannot import 'dcpool': no such pool available # zpool import -d /dev/zvol/rdsk/pool/ dcpool cannot import 'dcpool': no such pool available //Jim

[zfs-discuss] How to properly read zpool iostat -v ? ;)

2011-06-01 Thread Jim Klimov
(zpool iostat -v output truncated) Thanks, //Jim Klimov

Re: [zfs-discuss] Should Intel X25-E not be used with a SAS Expander?

2011-06-02 Thread Jim Klimov
. (SAS drives are dual-port, full-duplex devices.) Another reason *may be* (maybe not, speculative) if these drives have a SATA protocol firmware instead of a SAS one - resulting in general feature sets... //Jim

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-09 Thread Jim Klimov
(and/or use rsync to correct some misreceived blocks if network was faulty). -- Климов Евгений (Jim Klimov), CTO, JSC COSHT

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-09 Thread Jim Klimov
link. Took many retries, and zfs send is not strong at retrying ;) -- Климов Евгений (Jim Klimov), CTO, JSC COSHT

[zfs-discuss] zpool import hangs any zfs-related programs, eats all RAM and dies in swapping hell

2011-06-10 Thread Jim Klimov
for my assistant by catching near-freeze conditions, is here: * http://thumper.cos.ru/~jim/freeram-watchdog-20110610-v0.11.tgz I guess it is time for questions now :) What methods can I use (beside 20-hour-long ZDB walks) to gain a quick insight on the cause of problems - why doesn't the pool import

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-10 Thread Jim Klimov
of a single full dump, the chance of a single corruption making your (latest) backup useless would also be higher, right? Thanks for clarifications, //Jim Klimov

Re: [zfs-discuss] zpool import hangs any zfs-related programs, eats all RAM and dies in swapping hell

2011-06-10 Thread Jim Klimov
2011-06-10 13:51, Jim Klimov wrote: and the system dies in swapping hell (scanrates for available pages were seen to go into millions, CPU context switches reach 200-300k/sec on a single dual-core P4) after eating the last stable-free 1-2GB of RAM within a minute. After this the system responds

Re: [zfs-discuss] zpool import hangs any zfs-related programs, eats all RAM and dies in swapping hell

2011-06-10 Thread Jim Klimov
2011-06-10 18:00, Steve Gonczi wrote: Hi Jim, I wonder what OS version you are running? There was a problem similar to what you are describing in earlier versions in the 13x kernel series. It should not be present in the 14x kernels. It is OpenIndiana oi_148a, and unlike many other details

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-10 Thread Jim Klimov
) and send these ZIP/RAR archives to the tape. Obviously, a standard integrated solution within ZFS would be better and more portable. See the FEC suggestion from another poster ;) //Jim

[zfs-discuss] What is my pol writing? :)

2011-06-11 Thread Jim Klimov
sync times bumped to 30 sec and reduced to 1 sec. So far I did not find a DTraceToolkit-0.99 utility which would show me what that would be: # /export/home/jim/DTraceToolkit-0.99/rwsnoop | egrep -v '/proc|/dev|unkn' UID PID CMD D BYTES FILE 0 1251 freeram-watc W 78 /var

Re: [zfs-discuss] ZFS receive checksum mismatch

2011-06-11 Thread Jim Klimov
? Or is there no coalescing, and this is why? ;) Thanks, //Jim Klimov

Re: [zfs-discuss] What is my pol writing? :)

2011-06-11 Thread Jim Mauro
Does this reveal anything: dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname,fds[arg0].fi_pathname]=count(); }' On Jun 11, 2011, at 9:32 AM, Jim Klimov wrote: While looking over iostats from various programs, I see that my OS HDD is busy writing, about 2Mb/sec stream

Re: [zfs-discuss] Impact of L2ARC device failure and SSD recommendations

2011-06-11 Thread Jim Klimov
but otherwise the system should have remained responsive (tested failmode=continue and failmode=wait on different occasions). So I can relate - these things happen, they do annoy, and I hope they will be fixed sometime soon so that ZFS matches its docs and promises ;) //Jim Klimov

Re: [zfs-discuss] What is my pol writing? :)

2011-06-11 Thread Jim Klimov
2011-06-11 19:16, Jim Mauro wrote: Does this reveal anything: dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname,fds[arg0].fi_pathname]=count(); }' Alas, not much. # time dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname,fds[arg0].fi_pathname]=count

Re: [zfs-discuss] What is my pol writing? :)

2011-06-11 Thread Jim Mauro
]->fi_pathname] = count(); }' On Jun 11, 2011, at 12:34 PM, Jim Klimov wrote: 2011-06-11 19:16, Jim Mauro wrote: Does this reveal anything: dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname,fds[arg0].fi_pathname]=count(); }' Alas, not much. # time dtrace -n 'syscall
