[zfs-discuss] Problem booting with zpool

2009-08-27 Thread Stephen Green
I'm having trouble booting with one of my zpools. It looks like this:

  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c4d0

Re: [zfs-discuss] zpool import hangs with indefinite writes

2009-08-27 Thread noguaran
Thank you so much for your reply! Here are the outputs:

1. Find PID of the hanging 'zpool import', e.g. with 'ps -ef | grep zpool'

r...@mybox:~# ps -ef | grep zpool
    root   915   908   0 03:34:46 pts/3   0:00 grep zpool
    root   901   874   1 03:34:09 pts/2   0:00 zpool import
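
To see where the hung 'zpool import' is actually stuck, one approach is to dump the kernel stacks of its threads with mdb. A minimal sketch, assuming the PID 901 shown above and that the kernel debugger (mdb -k) is available:

    # dump the kernel stacks of every thread belonging to PID 901
    echo "0t901::pid2proc | ::walk thread | ::findstack -v" | mdb -k

A stack parked in txg or zio functions would point at where the import is blocked.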

[zfs-discuss] Pulsing write performance

2009-08-27 Thread David Bond
Hi, I was directed here after posting in CIFS discuss (as I first thought that it could be a CIFS problem). I posted the following in CIFS: When using iometer from Windows against the file share on OpenSolaris snv_101 and snv_111, I get pauses of around 5 seconds (maybe a little less) every 5 seconds
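
One way to watch the pulsing from the Solaris side is to sample pool I/O once a second; a minimal sketch, where the pool name is an assumption:

    # one-second samples; pulsing shows up as write bursts followed by idle gaps
    zpool iostat tank 1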

Re: [zfs-discuss] zpool import hangs with indefinite writes

2009-08-27 Thread noguaran
I used the GUI to delete all my snapshots, and after that, zfs list worked without hanging. I did a zpool scrub and will wait to see what happens with that. I DID have automatic snapshots enabled before. They are disabled now. I don't know how the snapshots work to be honest, so maybe I ran

[zfs-discuss] ARC limits not obeyed in OSol 2009.06

2009-08-27 Thread Udo Grabowski
Hi, we've capped the ARC size via set zfs:zfs_arc_max = 0x20000000 in /etc/system to 512 MB, since the ARC still does not release memory when applications need it (this is another bug). But this hard limit is not obeyed; instead, when traversing all files in a large and deep directory, we see the
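
Whether the cap is being honored can be checked against the ARC kstats; a minimal sketch, assuming the standard arcstats names:

    # current ARC size and target, in bytes (both should stay near zfs_arc_max)
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c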

Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-08-27 Thread Gary Gendel
It looks like it's definitely related to the snv_121 upgrade. I decided to roll back to snv_110 and the checksum errors have disappeared. I'd like to file a bug report, but I don't have any information that might help track this down, just lots of checksum errors. Looks like I'm stuck at

Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-08-27 Thread Albert Chin
On Thu, Aug 27, 2009 at 06:29:52AM -0700, Gary Gendel wrote: It looks like it's definitely related to the snv_121 upgrade. I decided to roll back to snv_110 and the checksum errors have disappeared. I'd like to file a bug report, but I don't have any information that might help track this

Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-08-27 Thread Casper . Dik
It looks like it's definitely related to the snv_121 upgrade. I decided to roll back to snv_110 and the checksum errors have disappeared. I'd like to file a bug report, but I don't have any information that might help track this down, just lots of checksum errors. Looks like I'm stuck at

Re: [zfs-discuss] Pulsing write performance

2009-08-27 Thread Ross Walker
On Aug 27, 2009, at 4:30 AM, David Bond david.b...@tag.no wrote: Hi, I was directed here after posting in CIFS discuss (as I first thought that it could be a CIFS problem). I posted the following in CIFS: When using iometer from Windows against the file share on OpenSolaris snv_101 and snv_111

Re: [zfs-discuss] Pulsing write performance

2009-08-27 Thread Bob Friesenhahn
On Thu, 27 Aug 2009, David Bond wrote: I just noticed that if the server hasn't hit its target ARC size, the pauses are maybe 0.5 seconds, but as soon as it hits its ARC target, the iops drop to around 50% of what they were and then there are the longer pauses of around 4-5 seconds. and then

Re: [zfs-discuss] Pulsing write performance

2009-08-27 Thread Roman Naumenko
Hi David, Just wanted to ask you: how does your Windows server behave during these pauses? Are there any clients connected to it? The issue you've described might be related to one I saw on my server, see here: http://www.opensolaris.org/jive/thread.jspa?threadID=110013&tstart=0 I just wonder how

Re: [zfs-discuss] Problem booting with zpool

2009-08-27 Thread Mark J Musante
Hi Stephen, Have you got many zvols (or snapshots of zvols) in your pool? You could be running into CR 6761786 and/or 6693210. On Thu, 27 Aug 2009, Stephen Green wrote: I'm having trouble booting with one of my zpools. It looks like this: pool: tank state: ONLINE scrub: none requested
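
A quick way to gauge whether a pool is in that territory; a sketch, assuming the pool is named 'tank':

    # count volumes and snapshots (imports and boots slow down as these grow)
    zfs list -H -t volume -r tank | wc -l
    zfs list -H -t snapshot -r tank | wc -l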

[zfs-discuss] Connect couple of SATA JBODs to one storage server

2009-08-27 Thread Roman Naumenko
Can somebody help me with this? I'd like to convert old storage servers into JBODs for ZFS storage, to use them as pools for backups. There are SATA drives in them, and they have backplanes. Right now there are 8-port hardware RAID cards in them. What cables/converters do I need for this? What

Re: [zfs-discuss] Problem booting with zpool

2009-08-27 Thread Stephen Green
Mark J Musante wrote: Hi Stephen, Have you got many zvols (or snapshots of zvols) in your pool? You could be running into CR 6761786 and/or 6693210. There are four volumes on that pool:

stgr...@blue:/tank/tivo/videos$ zfs list -t volume
NAME   USED  AVAIL  REFER

Re: [zfs-discuss] snv_110 - snv_121 produces checksum errors on Raid-Z pool

2009-08-27 Thread Adam Leventhal
Hey Gary, There appears to be a bug in the RAID-Z code that can generate spurious checksum errors. I'm looking into it now and hope to have it fixed in build 123 or 124. Apologies for the inconvenience. Adam On Aug 25, 2009, at 5:29 AM, Gary Gendel wrote: I have a 5x500GB disk RAID-Z

Re: [zfs-discuss] [cifs-discuss] CIFS pulsing transfers

2009-08-27 Thread Woong Bin Kang
Hi, It's funny, since I spent hours researching this problem yesterday, too. I kind of fixed the problem by

echo 'zfs_txg_synctime/W 0t1' | mdb -kw

(pasted from http://echelog.matzon.dk/logs/browse/opensolaris/1210284000 ) I only say kind of, because zfs response time is still sometimes horrible
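
The mdb write only lasts until the next reboot; the persistent equivalent would be an /etc/system entry (a sketch, assuming the tunable keeps this name on your build):

    * sync a txg every 1 second instead of the default
    set zfs:zfs_txg_synctime = 1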

Re: [zfs-discuss] Pulsing write performance

2009-08-27 Thread Henrik Johansen
Ross Walker wrote: On Aug 27, 2009, at 4:30 AM, David Bond david.b...@tag.no wrote: Hi, I was directed here after posting in CIFS discuss (as I first thought that it could be a CIFS problem). I posted the following in CIFS: When using iometer from Windows against the file share on

[zfs-discuss] Status/priority of 6761786

2009-08-27 Thread Dave
Can anyone from Sun comment on the status/priority of bug ID 6761786? It seems like this would be a very high-priority bug, but it hasn't been updated since Oct 2008. Has anyone else with thousands of volume snapshots experienced the hours-long import process? -- Dave

Re: [zfs-discuss] Status/priority of 6761786

2009-08-27 Thread Remco Lengers
Dave, It's logged as an RFE (Request for Enhancement), not as a CR (bug). The status is 3-Accepted / P1 RFE. RFEs are generally looked at in a much different way than CRs. ..Remco Dave wrote: Can anyone from Sun comment on the status/priority of bug ID 6761786? It seems like this would be a

Re: [zfs-discuss] Status/priority of 6761786

2009-08-27 Thread Tim Cook
On Thu, Aug 27, 2009 at 3:24 PM, Remco Lengers re...@lengers.com wrote: Dave, It's logged as an RFE (Request for Enhancement), not as a CR (bug). The status is 3-Accepted / P1 RFE. RFEs are generally looked at in a much different way than CRs. ..Remco Seriously? It's considered works

Re: [zfs-discuss] Status/priority of 6761786

2009-08-27 Thread Blake
I think the value of auto-snapshotting zvols is debatable. At least, there are not many folks who need to do this. What I'd rather see is a default property of 'auto-snapshot=off' for zvols. Blake On Thu, Aug 27, 2009 at 4:29 PM, Tim Cook t...@cook.ms wrote: On Thu, Aug 27, 2009 at 3:24 PM,
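
In the meantime, individual datasets can already be opted out via the user property the automatic snapshot service checks; a sketch, assuming the Time Slider service and a zvol named tank/vol1:

    # opt this zvol out of automatic snapshots
    zfs set com.sun:auto-snapshot=false tank/vol1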

Re: [zfs-discuss] Pulsing write performance

2009-08-27 Thread Tristan
I saw similar behavior when I was running under the kernel debugger (-k switch to the kernel). It largely went away when I went back to normal. T David Bond wrote: Hi, I was directed here after posting in CIFS discuss (as I first thought that it could be a CIFS problem). I posted the

Re: [zfs-discuss] Status/priority of 6761786

2009-08-27 Thread Remco Lengers
Tim, Seriously? It's considered works as designed for a system to take 5+ hours to boot? Wow. That's not what I am saying... I am merely stating the administrative facts, as they may explain the inactivity on this matter. I am unsure whether it is supposed to be an RFE or became one by mistake.

Re: [zfs-discuss] Connect couple of SATA JBODs to one storage server

2009-08-27 Thread Scott Meilicke
Roman, are you saying you want to install OpenSolaris on your old servers, or make the servers look like an external JBOD array that another server will then connect to?

Re: [zfs-discuss] Pulsing write performance

2009-08-27 Thread Ross Walker
On Aug 27, 2009, at 11:29 AM, Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Thu, 27 Aug 2009, David Bond wrote: I just noticed that if the server hasn't hit its target ARC size, the pauses are maybe 0.5 seconds, but as soon as it hits its ARC target, the iops drop to around

Re: [zfs-discuss] Status/priority of 6761786

2009-08-27 Thread Dave
Just to make sure we're looking at the same thing: http://bugs.opensolaris.org/view_bug.do?bug_id=6761786 This is not an issue of auto snapshots. If I have a ZFS server that exports 300 zvols via iSCSI and I have daily snapshots retained for 14 days, that is a total of 4200 snapshots.

Re: [zfs-discuss] Connect couple of SATA JBODs to one storage server

2009-08-27 Thread Ron Mexico
This non-RAID SAS controller is $199 and is based on the LSI SAS 1068. http://accessories.us.dell.com/sna/products/Networking_Communication/productdetail.aspx?c=us&l=en&s=bsd&cs=04&sku=310-8285~lt=popup~ck=TopSellers What kind of chassis do these drives currently reside in? Does the backplane have

[zfs-discuss] Boot error

2009-08-27 Thread Grant Lowe
I've got a 240z with Solaris 10 Update 7, all the latest patches from Sunsolve. I've installed a boot drive with ZFS. I mirrored the drive with zpool. I installed the boot block. The system had been working just fine. But for some reason, when I try to boot, I get the error: {1} ok boot
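
For reference, reinstalling the ZFS boot block on a SPARC boot disk looks roughly like this; a sketch, where the device name c0t1d0s0 is an assumption:

    # reinstall the ZFS boot block on the mirror member (SPARC)
    installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0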

Re: [zfs-discuss] utf8only and normalization properties

2009-08-27 Thread Nicolas Williams
So, the manpage seems to have a bug in it. The valid values for the normalization property are: none | formC | formD | formKC | formKD Nico --
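
Note the property can only be set when the file system is created; a minimal sketch, with a hypothetical dataset name:

    # create a filesystem whose names are normalized to NFD (also implies utf8only=on)
    zfs create -o normalization=formD tank/unifs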

Re: [zfs-discuss] Status/priority of 6761786

2009-08-27 Thread Trevor Pretty
Dave Yep, that's an RFE (Request for Enhancement); that's how things are reported to engineers to fix things inside Sun. If it's an honest-to-goodness CR = bug (however, it normally needs a real paying support customer to have a problem to go from RFE to CR) the "responsible engineer" evaluates

[zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-27 Thread Paul B. Henson
Well, I'm getting ready to install the first set of patches on my x4500 since we deployed into production, and have run into an unexpected snag. I already knew that with about 5-6k file systems the reboot cycle was going to be over an hour (not happy about it, but I knew about it and planned for it).

Re: [zfs-discuss] Boot error

2009-08-27 Thread Cindy Swearingen
Hi Grant, I don't have all my usual resources at the moment, but I would boot from alternate media and use the format utility to check the partitioning on the newly added disk, looking for something like overlapping partitions. Or, possibly, a mismatch between the actual root slice and the one
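
One quick check along those lines; a sketch, where the device name is an assumption (substitute the actual boot disk):

    # print the VTOC; compare slice start/size columns for overlaps
    # and confirm which slice actually holds the root pool
    prtvtoc /dev/rdsk/c0t0d0s2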

Re: [zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-27 Thread Trevor Pretty
Paul You need to exclude all the file systems that are not the "OS". My S10 virtual machine is not booted, but from memory you can put all the "excluded" file systems in a file and use -f. You used to have to do this if there was a DVD in the drive, otherwise /cdrom got copied to the new boot

Re: [zfs-discuss] Status/priority of 6761786

2009-08-27 Thread thomas
For whatever it's worth to have someone post on a list: I would *really* like to see this improved as well. The time it takes to iterate over both thousands of filesystems and thousands of snapshots makes me very cautious about taking advantage of some of the built-in zfs features in an HA

[zfs-discuss] shrink the rpool zpool or increase rpool zpool via add disk.

2009-08-27 Thread Randall Badilla
Hi all: First, is it possible to modify the boot zpool rpool after OS installation...? I installed the OS on the whole 72GB hard disk. It is mirrored, so if I want to decrease the rpool, for example resize it to a 36GB slice, can that be done? As far as I remember, on UFS/SVM I was able to resize the boot OS disk via
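
Shrinking a pool is not something ZFS supports, and a root pool cannot gain extra top-level vdevs, but growing by swapping in larger disks works; a rough sketch of the mirror-swap approach, where all device names are assumptions:

    # attach a larger disk, wait for resilver, then drop the smaller one
    zpool attach rpool c0t0d0s0 c0t2d0s0
    zpool status rpool            # wait until the resilver completes
    zpool detach rpool c0t1d0s0   # don't forget installboot on the new disk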

Re: [zfs-discuss] Status/priority of 6761786

2009-08-27 Thread Trevor Pretty
Dave This helps: http://defect.opensolaris.org/bz/page.cgi?id=fields.html The most common thing you will see is "Duplicate", as different people find the same problem at different times in different ways, and when they searched the database to see if it was "known" they could not find a bug

Re: [zfs-discuss] live upgrade with lots of zfs filesystems

2009-08-27 Thread Paul B. Henson
On Thu, 27 Aug 2009, Trevor Pretty wrote: My S10 virtual machine is not booted, but from memory you can put all the excluded file systems in a file and use -f. Unfortunately, I wasn't that stupid. I saw the -f option, but it's not applicable to ZFS root:

  -f exclude_list_file
      Use

Re: [zfs-discuss] Boot error

2009-08-27 Thread Grant Lowe
Hi Cindy, I tried booting from DVD but nothing showed up. Thanks for the ideas, though. Maybe your other sources might have something?