Re: [zfs-discuss] internal backup power supplies?

2010-01-12 Thread Daniel Carosone
On Mon, Jan 11, 2010 at 10:10:37PM -0800, Lutz Schumann wrote: p.s. While writing this I'm wondering whether the a-card handles this case well? ... maybe not. Apart from the fact that they seem to be hard to source, this is a big question about this interesting device for me too. I hope so, since it

Re: [zfs-discuss] opensolaris-vmware

2010-01-12 Thread Arnaud Brand
Your machines won't come up running; they'll start up from scratch (as if you had hit the reset button). If you want your machines to come up running, you have to take VMware snapshots, which capture the state of the running VM (memory, etc.). Typically this is automated with solutions like VCB

Re: [zfs-discuss] Is LSI SAS3081E-R suitable for a ZFS NAS ?

2010-01-12 Thread Arnaud Brand
Thanks for your answer. I asked primarily because of the mpt timeout issues I saw on the list. I never experienced timeouts with my (personal) USAS-L8i (LSI 1068E), but feared this issue might cause some problems with the 3081. Anyway, thanks again. Arnaud -----Original Message----- From:

[zfs-discuss] x4500/x4540 does the internal controllers have a bbu?

2010-01-12 Thread Brad
Has anyone worked with an x4500/x4540 and know if the internal RAID controllers have a BBU? I'm concerned that we won't be able to turn off the write cache on the internal HDs and SSDs to prevent data corruption in case of a power failure.

[zfs-discuss] dmu_zfetch_find - lock contention?

2010-01-12 Thread Robert Milkowski
Hi, I have a MySQL instance which, if I push more load at it, suddenly goes to 100% in SYS as shown below. It can work fine for an hour, but eventually it jumps from 5-15% CPU utilization to 100% in SYS, as shown in the mpstat output below: # prtdiag | head System Configuration: SUN
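
If the contention really is in the prefetch code (dmu_zfetch), a workaround that is often tried for this symptom - an assumption on my part, not something established in this truncated post - is to disable file-level prefetch and see whether the SYS spike goes away:

  # echo zfs_prefetch_disable/W0t1 | mdb -kw     (disable prefetch on the running kernel)
  # echo zfs_prefetch_disable/W0t0 | mdb -kw     (re-enable it afterwards)

To make the change persistent, add "set zfs:zfs_prefetch_disable = 1" to /etc/system and reboot.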

Re: [zfs-discuss] Does ZFS use large memory pages?

2010-01-12 Thread Gary Mills
On Mon, Jan 11, 2010 at 01:43:27PM -0600, Gary Mills wrote: This line was a workaround for bug 6642475, which had to do with searching for large contiguous pages. The result was high system time and slow response. I can't find any public information on this bug, although I assume it's

[zfs-discuss] iSCSI Qlogic 4010 TOE card

2010-01-12 Thread Allen Jasewicz
I had an emergency need for 400 GB of storage yesterday and spent 8 hours looking for a way to get iSCSI working via a QLogic QLA4010 TOE card, but was unable to get my Windows QLogic 4050C TOE card to recognize the target. I do have a NetApp iSCSI connection on the client. cat /etc/release

[zfs-discuss] Permanent errors

2010-01-12 Thread epiq
Hello! Can anybody help me with some trouble:

j...@opensolaris:~# zpool status -v
  pool: green
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if

Re: [zfs-discuss] Repeating scrub does random fixes

2010-01-12 Thread Gary Gendel
Thanks for all the suggestions. Now for a strange tale... I tried upgrading to dev 130 and, as expected, things did not go well. All sorts of permission errors flew by during the upgrade stage and it would not start X. I've heard that things installed from the contrib and extras

Re: [zfs-discuss] x4500/x4540 does the internal controllers have a bbu?

2010-01-12 Thread Toby Thain
On 12-Jan-10, at 5:53 AM, Brad wrote: Has anyone worked with an x4500/x4540 and know if the internal RAID controllers have a BBU? I'm concerned that we won't be able to turn off the write cache on the internal HDs and SSDs to prevent data corruption in case of a power failure. A power

Re: [zfs-discuss] x4500/x4540 does the internal controllers have a bbu?

2010-01-12 Thread Richard Elling
On Jan 12, 2010, at 2:53 AM, Brad wrote: Has anyone worked with an x4500/x4540 and know if the internal RAID controllers have a BBU? I'm concerned that we won't be able to turn off the write cache on the internal HDs and SSDs to prevent data corruption in case of a power failure. Yes,
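
For reference, the per-disk write cache on Solaris can be inspected (and, on drives that allow it, toggled) from format's expert mode; a rough sketch, with the disk name c0t0d0 purely as a placeholder:

  # format -e c0t0d0
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> disable      (only if you really want the cache off)

Whether a given disk or SSD honors the change is device-dependent.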

[zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-12 Thread Gary Mills
I'm working with a Cyrus IMAP server running on a T2000 box under Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS filesystems, each containing about 200 gigabytes of data. These are part of a single zpool built on four iSCSI devices from our NetApp filer. One of these ZFS

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-12 Thread Bob Friesenhahn
On Tue, 12 Jan 2010, Gary Mills wrote: Is moving the databases (IMAP metadata) to a separate ZFS filesystem likely to improve performance? I've heard that this is important, but I'm not clear why this is. There is an obvious potential benefit in that you are then able to tune filesystem
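
One concrete example of that per-filesystem tuning (the dataset name and the 8K record size below are only illustrative assumptions, to be matched to whatever I/O size the Cyrus databases actually use):

  # zfs create -o recordsize=8k tank/imap/meta
  # zfs get recordsize tank/imap/meta

The mailbox data on its own filesystems can then keep the default 128K recordsize.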

Re: [zfs-discuss] Permanent errors

2010-01-12 Thread Cindy Swearingen
Hi-- The best approach is to correct the issues that are causing these problems in the first place. The fmdump -eV command will identify the hardware problems that caused the checksum errors and the corrupted files. You might be able to use some combination of zpool scrub, zpool clear, and
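
A minimal sketch of the diagnostic side of that advice, using the pool name "green" from the original post:

  # fmdump -eV | more          (look for the I/O or checksum error reports and the devices they name)
  # zpool status -v green      (list the files currently flagged as permanently damaged)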

Re: [zfs-discuss] rpool mirror on zvol, can't offline and detach

2010-01-12 Thread Cindy Swearingen
Hi Dan, I'm not sure I'm following everything here but I will try:
1. How do you offline a zvol? Can you show your syntax? You can only offline a redundant pool component, such as a file, slice, or whole disk.
2. What component does black represent? Only a pool can be exported.
3. In
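
For comparison, the syntax that works for an ordinary mirror component looks roughly like the sketch below; the device path is hypothetical, and whether these commands accept a zvol-backed device at all is exactly the open question in this thread:

  # zpool offline rpool /dev/zvol/dsk/black/rpoolmirror
  # zpool detach rpool /dev/zvol/dsk/black/rpoolmirror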

Re: [zfs-discuss] rpool mirror on zvol, can't offline and detach

2010-01-12 Thread Cindy Swearingen
Dan, I see now how you might have created this config. I tried to reproduce this issue by creating a separate pool on another disk and a volume to attach to my root pool, but my system panics when I try to attach the volume to the root pool. This is on Nevada, build 130. Panic aside, we don't

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-12 Thread Gary Mills
On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote: On Tue, 12 Jan 2010, Gary Mills wrote: Is moving the databases (IMAP metadata) to a separate ZFS filesystem likely to improve performance? I've heard that this is important, but I'm not clear why this is. There is an

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-12 Thread Ray Van Dolson
On Tue, Jan 12, 2010 at 12:37:30PM -0800, Gary Mills wrote: On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote: On Tue, 12 Jan 2010, Gary Mills wrote: Is moving the databases (IMAP metadata) to a separate ZFS filesystem likely to improve performance? I've heard that this is

Re: [zfs-discuss] iSCSI Qlogic 4010 TOE card

2010-01-12 Thread Allen Jasewicz
OK, I have found the issue; however, I do not know how to get around it.

# iscsiadm list target-param
Target: iqn.1986-03.com.sun:01:0003ba08d5ae.47571faa
        Alias: -
Target: iqn.2000-04.com.qlogic:qla4050c.gs10731a42094.1
        Alias: -

I need to attach all iSCSI targets to
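
Since the message is cut off, it isn't clear what the targets need to be attached to, but for reference the basic Solaris initiator steps for bringing discovered targets online (the discovery address is a placeholder) are roughly:

  # iscsiadm add discovery-address 192.168.1.10:3260
  # iscsiadm modify discovery --sendtargets enable
  # devfsadm -i iscsi
  # iscsiadm list target -S          (verify the sessions and LUNs are actually connected)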

[zfs-discuss] ZFS auto-snapshot in zone

2010-01-12 Thread Dan
Hello, I've got auto-snapshots enabled in the global zone for the home directories of all users. Users log in to their individual zones and the home directory is loaded from the global zone. All works fine, except that the new auto-snapshots have no properties and therefore can't be accessed in the zones. Example from
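
It isn't clear from the truncated message exactly which properties are missing, but one thing worth checking (dataset name hypothetical) is whether the snapshot directory is exposed on the dataset the zones see:

  # zfs get snapdir tank/home/user
  # zfs set snapdir=visible tank/home/user

With snapdir=visible, snapshots appear under <mountpoint>/.zfs/snapshot, though whether that carries through a lofs mount into the zone is a separate question.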

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-12 Thread Richard Elling
On Jan 12, 2010, at 12:37 PM, Gary Mills wrote: On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote: On Tue, 12 Jan 2010, Gary Mills wrote: Is moving the databases (IMAP metadata) to a separate ZFS filesystem likely to improve performance? I've heard that this is important, but

Re: [zfs-discuss] Permanent errors

2010-01-12 Thread epiq
Cindys, thank you for the answer, but I need to explain some details. This pool is new hardware for my system - 2 x 1 TB WD Green hard drives - but the data on this pool was copied from an old pool of 9 x 300 GB hard drives with a hardware problem. While I copied the data there were many errors, but at the end I see this

[zfs-discuss] set zfs:zfs_vdev_max_pending

2010-01-12 Thread Ed Spencer
We have a zpool made of four 512 GB iSCSI LUNs located on a network appliance. We are seeing poor read performance from the ZFS pool. The release of Solaris we are using is Solaris 10 10/09 s10s_u8wos_08a SPARC. The server itself is a T2000. I was wondering how we can tell if the zfs_vdev_max_pending
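
For context, the tunable in the subject line is normally set in /etc/system, and its current value can be read or changed on a live system with mdb; the value 10 below is only an example, not a recommendation:

  # echo zfs_vdev_max_pending/D | mdb -k           (show the current value)
  # echo zfs_vdev_max_pending/W0t10 | mdb -kw      (change it on the fly)

  /etc/system entry for a persistent setting:
  set zfs:zfs_vdev_max_pending = 10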

Re: [zfs-discuss] Permanent errors

2010-01-12 Thread Cindy Swearingen
Hi, I think you are saying that you copied the data on this system from a previous system with hardware problems. It looks like the data that was copied was corrupt, which is causing the permanent errors on the new system (?) The manual removal of the corrupt files, zpool scrub and zpool clear
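
If the old 9-disk pool is still readable, one way to tell whether a flagged copy is actually bad or was damaged in transit is to compare checksums of the original and the copy (paths hypothetical):

  # digest -a sha1 /oldpool/path/file /green/path/file

Files whose originals are themselves unreadable will need to be restored from elsewhere before a scrub and zpool clear will leave the pool error-free.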

Re: [zfs-discuss] Thin device support in ZFS?

2010-01-12 Thread Miles Nordin
Al Hopper a...@logical-approach.com writes: The main issue is that most flash devices support 128 KB pages, and the smallest chunk (for want of a better word) of flash memory that can be written is a page - or 128 KB. So if you have a write to an SSD that

Re: [zfs-discuss] set zfs:zfs_vdev_max_pending

2010-01-12 Thread Bob Friesenhahn
On Tue, 12 Jan 2010, Ed Spencer wrote: I was wondering how we can tell if the zfs_vdev_max_pending setting is impeding read performance of the zfs pool? (The pool consists of lots of small files). If 'iostat -x' shows that svc_t is quite high, then reducing zfs_vdev_max_pending might help.
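
A hedged sketch of what to watch while experimenting (the 5-second interval is arbitrary):

  # iostat -xnz 5

Compare the actv and asvc_t columns for the four iSCSI LUNs before and after lowering zfs_vdev_max_pending; if the LUNs are being swamped by queued requests, asvc_t should drop noticeably.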

Re: [zfs-discuss] set zfs:zfs_vdev_max_pending

2010-01-12 Thread Robert Milkowski
On 12/01/2010 23:47, Bob Friesenhahn wrote: On Tue, 12 Jan 2010, Ed Spencer wrote: I was wondering how we can tell if the zfs_vdev_max_pending setting is impeding read performance of the zfs pool? (The pool consists of lots of small files). If 'iostat -x' shows that svc_t is quite high,

Re: [zfs-discuss] set zfs:zfs_vdev_max_pending

2010-01-12 Thread Richard Elling
On Jan 12, 2010, at 2:54 PM, Ed Spencer wrote: We have a zpool made of four 512 GB iSCSI LUNs located on a network appliance. We are seeing poor read performance from the ZFS pool. The release of Solaris we are using is Solaris 10 10/09 s10s_u8wos_08a SPARC. The server itself is a T2000. I

Re: [zfs-discuss] rpool mirror on zvol, can't offline and detach

2010-01-12 Thread Daniel Carosone
On Tue, Jan 12, 2010 at 01:26:15PM -0700, Cindy Swearingen wrote: I see now how you might have created this config. I tried to reproduce this issue by creating a separate pool on another disk and a volume to attach to my root pool, but my system panics when I try to attach the volume to the

Re: [zfs-discuss] x4500/x4540 does the internal controllers have a bbu?

2010-01-12 Thread Brad
(Caching isn't the problem; ordering is.) Weird; I was reading about a problem where, with SSDs (Intel X25-E), if the power goes out and the data in the cache is not flushed, you would lose data. Could you elaborate on ordering?

Re: [zfs-discuss] x4500/x4540 does the internal controllers have a bbu?

2010-01-12 Thread Brad
Richard: Yes, write cache is enabled by default, depending on the pool configuration. Is it enabled for a striped (mirrored configuration) zpool? I'm asking because of a concern I've read on this forum about a problem with SSDs (and disks) where, if a power outage occurs, any data in cache would

Re: [zfs-discuss] x4500/x4540 does the internal controllers have a bbu?

2010-01-12 Thread Toby Thain
On 12-Jan-10, at 10:40 PM, Brad wrote: (Caching isn't the problem; ordering is.) Weird; I was reading about a problem where, with SSDs (Intel X25-E), if the power goes out and the data in the cache is not flushed, you would lose data. Could you elaborate on ordering? ZFS integrity
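
The ordering argument depends on the drives honoring cache-flush requests, so one general sanity check (a suggestion of mine, not something raised in this truncated post) is to confirm that flushes haven't been disabled globally on the host:

  # echo zfs_nocacheflush/D | mdb -k

A value of 0 means ZFS is still issuing SYNCHRONIZE CACHE commands at transaction-group and ZIL commit points; setting it to 1 is only safe behind a nonvolatile (battery- or capacitor-backed) cache.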