Re: [zfs-discuss] RFE: Un-dedup for unique blocks

2013-01-22 Thread Gary Mills
real memory to the swap device is certainly beneficial. Swapping out complete processes is a desperation move, but paging out most of an idle process is a good thing. -- -Gary Mills--refurb--Winnipeg, Manitoba, Canada

Re: [zfs-discuss] LUN sizes

2012-10-29 Thread Gary Mills
of their customers encountered this performance problem because almost all of them used their Netapp only for NFS or CIFS. Our Netapp was extremely reliable but did not have the iSCSI LUN performance that we needed.

Re: [zfs-discuss] Zpool LUN Sizes

2012-10-28 Thread Gary Mills
will find no errors. If ZFS does find an error, there's no nice way to recover. Most commonly, this happens when the SAN is powered down or rebooted while the ZFS host is still running.

Re: [zfs-discuss] What happens when you rm zpool.cache?

2012-10-21 Thread Gary Mills
this by specifying the `cachefile' property on the command line. The `zpool' man page describes how to do this.
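A sketch of the `cachefile' usage described in that message (the pool name `tank' is an assumption, not from the thread):

```shell
# Import a pool without recording it in the default cache file
# (pool name "tank" is hypothetical; see the zpool(1M) man page).
zpool import -o cachefile=none tank

# Point an existing pool at an alternate cache file:
zpool set cachefile=/etc/zfs/alt-zpool.cache tank

# Revert to the default cache file:
zpool set cachefile='' tank
```

These commands only apply on a system with ZFS installed; they are shown here as a command sketch, not verified output.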

Re: [zfs-discuss] [developer] Re: History of EPERM for unlink() of directories on ZFS?

2012-06-26 Thread Gary Mills
, then yeah, that's horrible. This all sounds like a good use for LD_PRELOAD and a tiny library that intercepts and modernizes system calls.

Re: [zfs-discuss] zfs and iscsi performance help

2012-01-27 Thread Gary Mills
it. This is a separate problem, introduced with an upgrade to the iSCSI service. The new one has a dependency on the name service (typically DNS), which means that it isn't available when the zpool import is done during the boot. Check with Oracle support to see if they have found a solution.

Re: [zfs-discuss] unable to access the zpool after issue a reboot

2012-01-26 Thread Gary Mills
and Bob mentioned, I saw this issue with iSCSI devices. Instead of export / import, is a zpool clear also working? mpathadm list LU mpathadm show LU /dev/rdsk/c5t1d1s2

Re: [zfs-discuss] unable to access the zpool after issue a reboot

2012-01-24 Thread Gary Mills
the reboot.

Re: [zfs-discuss] zfs defragmentation via resilvering?

2012-01-16 Thread Gary Mills
when the storage became 50% full. It would increase markedly when the oldest snapshot was deleted.

Re: [zfs-discuss] Does raidzN actually protect against bitrot? If yes - how?

2012-01-15 Thread Gary Mills
of ZFS writes to the disk, then data belonging to ZFS will be modified. I've heard of RAID controllers or SAN devices doing this when they modify the disk geometry or reserved areas on the disk.

Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-19 Thread Gary Mills
there are no contiguous blocks available. Deleting a snapshot provides some of these, but only temporarily.

[zfs-discuss] Does the zpool cache file affect import?

2011-08-29 Thread Gary Mills
system.' error, or will the import succeed? Does the cache change the import behavior? Does it recognize that the server is the same system? I don't want to include the `-f' flag in the commands above when it's not needed.

Re: [zfs-discuss] How create a FAT filesystem on a zvol?

2011-07-12 Thread Gary Mills
On Sun, Jul 10, 2011 at 11:16:02PM +0700, Fajar A. Nugraha wrote: On Sun, Jul 10, 2011 at 10:10 PM, Gary Mills mi...@cc.umanitoba.ca wrote: The `lofiadm' man page describes how to export a file as a block device and then use `mkfs -F pcfs' to create a FAT filesystem on it. Can't I do
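The lofiadm route mentioned in this exchange, sketched end to end (the file path, size, and lofi device number are assumptions):

```shell
# Back a FAT filesystem with a plain file via lofi on Solaris.
mkfile 100m /export/fatimage
lofiadm -a /export/fatimage        # prints the block device, e.g. /dev/lofi/1
# mkfs for pcfs wants the raw device; 204800 512-byte sectors = 100 MB.
mkfs -F pcfs -o nofdisk,size=204800 /dev/rlofi/1
mount -F pcfs /dev/lofi/1 /mnt
```

Solaris-only command sketch; see the lofiadm(1M) and mkfs_pcfs(1M) man pages for the exact option set.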

[zfs-discuss] How create a FAT filesystem on a zvol?

2011-07-10 Thread Gary Mills
device?

Re: [zfs-discuss] write cache partial-disk pools (was Server with 4 drives, how to configure ZFS?)

2011-06-20 Thread Gary Mills

Re: [zfs-discuss] JBOD recommendation for ZFS usage

2011-05-30 Thread Gary Mills
bandwidth * Up to 72 Gb/sec of total bandwidth * Four x4-wide 3 Gb/sec SAS host/uplink ports (48 Gb/sec bandwidth) * Two x4-wide 3 Gb/sec SAS expansion ports (24 Gb/sec bandwidth) * Scales up to 48 drives

Re: [zfs-discuss] Best practice for boot partition layout in ZFS

2011-04-06 Thread Gary Mills
to have `setuid=off' for improved security, for example.

[zfs-discuss] One LUN per RAID group

2011-02-14 Thread Gary Mills
scheduling interfere with I/O scheduling already done by the storage device? Is there any reason not to use one LUN per RAID group?

Re: [zfs-discuss] One LUN per RAID group

2011-02-14 Thread Gary Mills
On Mon, Feb 14, 2011 at 03:04:18PM -0500, Paul Kraus wrote: On Mon, Feb 14, 2011 at 2:38 PM, Gary Mills mi...@cc.umanitoba.ca wrote: Is there any reason not to use one LUN per RAID group? [...] In other words, if you build a zpool with one vdev of 10GB and another with two vdev's each

[zfs-discuss] zpool-poolname has 99 threads

2011-01-31 Thread Gary Mills

Re: [zfs-discuss] Sliced iSCSI device for doing RAIDZ?

2010-09-24 Thread Gary Mills
redundancy reside in the storage device on the SAN. ZFS certainly can't do any disk management in this situation. Error detection and correction is still a debatable issue, one that quickly becomes exceedingly complex. The decision rests on probabilities rather than certainties.

Re: [zfs-discuss] Sliced iSCSI device for doing RAIDZ?

2010-09-23 Thread Gary Mills
has to be done on the SAN storage device.

Re: [zfs-discuss] ZFS with Equallogic storage

2010-08-22 Thread Gary Mills
provided by reliable SAN devices.

Re: [zfs-discuss] Solaris startup script location

2010-08-18 Thread Gary Mills
. It should also specify a `single_instance/' and `transient' service. The method script can do whatever the mount requires, such as creating the ramdisk.

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-16 Thread Gary Mills
recipients of any such Covered Software in Executable form as to how they can obtain such Covered Software in Source Code form in a reasonable manner on or through a medium customarily used for software exchange.

[zfs-discuss] ZFS development moving behind closed doors

2010-08-13 Thread Gary Mills

[zfs-discuss] zfs upgrade unmounts filesystems

2010-07-29 Thread Gary Mills
'/space/log': Device busy cannot unmount '/space/mysql': Device busy 2 filesystems upgraded Do I have to shut down all the applications before upgrading the filesystems? This is on a Solaris 10 5/09 system.

Re: [zfs-discuss] zfs upgrade unmounts filesystems

2010-07-29 Thread Gary Mills
. Mapping them to services can be difficult. The server is essentially down during the upgrade. For a root filesystem, you might have to boot off the failsafe archive or a DVD and import the filesystem in order to upgrade it.

[zfs-discuss] ZFS disks hitting 100% busy

2010-06-07 Thread Gary Mills
to the zpool will double the bandwidth. /var/log/syslog is quite large, reaching about 600 megabytes before it's rotated. This takes place each night, with compression bringing it down to about 70 megabytes. The server handles about 500,000 messages a day.

Re: [zfs-discuss] Is the J4200 SAS array suitable for Sun Cluster?

2010-05-17 Thread Gary Mills
that they have special hardware in the SATA version that simulates SAS dual interface drives. That's what lets you use SATA drives in a two-node configuration. There's also some additional software setup for that configuration. That would be the SATA interposer that does that.

Re: [zfs-discuss] Does ZFS use large memory pages?

2010-05-07 Thread Gary Mills
of issues, much the same as what you've had in the past, ps and prstats hanging. are you able to tell me the IDR number that you applied? The IDR was only needed last year. Upgrading to Solaris 10 10/09 and applying the latest patches resolved the problem.

[zfs-discuss] Is the J4200 SAS array suitable for Sun Cluster?

2010-05-03 Thread Gary Mills
paths. I plan to use ZFS everywhere, for the root filesystem and the shared storage. The only exception will be UFS for /globaldevices .

Re: [zfs-discuss] SAS vs SATA: Same size, same speed, why SAS?

2010-04-26 Thread Gary Mills
be easy to find a pair of 1U servers, but what's the smallest SAS array that's available? Does it need an array controller? What's needed on the servers to connect to it?

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-11 Thread Gary Mills
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote: We have an IMAP e-mail server running on a Solaris 10 10/09 system. It uses six ZFS filesystems built on a single zpool with 14 daily snapshots. Every day at 11:56, a cron command destroys the oldest snapshots and creates new ones

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-09 Thread Gary Mills
On Mon, Mar 08, 2010 at 03:18:34PM -0500, Miles Nordin wrote: gm == Gary Mills mi...@cc.umanitoba.ca writes: gm destroys the oldest snapshots and creates new ones, both gm recursively. I'd be curious if you try taking the same snapshots non-recursively instead, does the pause go

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-09 Thread Gary Mills
much physical memory does this system have? Mine has 64 GB of memory with the ARC limited to 32 GB. The Cyrus IMAP processes, thousands of them, use memory mapping extensively. I don't know if this design affects the snapshot recycle behavior.

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-05 Thread Gary Mills
On Thu, Mar 04, 2010 at 04:20:10PM -0600, Gary Mills wrote: We have an IMAP e-mail server running on a Solaris 10 10/09 system. It uses six ZFS filesystems built on a single zpool with 14 daily snapshots. Every day at 11:56, a cron command destroys the oldest snapshots and creates new ones

[zfs-discuss] Snapshot recycle freezes system activity

2010-03-04 Thread Gary Mills
. Is it destroying old snapshots or creating new ones that causes this dead time? What does each of these procedures do that could affect the system? What can I do to make this less visible to users?
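A rough sketch of the kind of daily cron job described in this thread (only the recursive destroy/create pattern comes from the message; the pool name, naming scheme, and script details are assumptions):

```shell
#!/bin/sh
# Daily snapshot rotation: destroy the oldest "daily" snapshot,
# then take a new recursive one. Pool name "space" and the
# "daily-" naming scheme are hypothetical.
POOL=space

# Oldest matching snapshot first, sorted by creation time:
OLDEST=$(zfs list -H -t snapshot -o name -s creation | \
         grep "^${POOL}@daily-" | head -1)
[ -n "$OLDEST" ] && zfs destroy -r "$OLDEST"

zfs snapshot -r "${POOL}@daily-$(date +%Y%m%d)"
```

This is a command sketch for a system with ZFS installed, not verified output.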

Re: [zfs-discuss] Snapshot recycle freezes system activity

2010-03-04 Thread Gary Mills
On Thu, Mar 04, 2010 at 07:51:13PM -0300, Giovanni Tirloni wrote: On Thu, Mar 4, 2010 at 7:28 PM, Ian Collins [1]...@ianshome.com wrote: Gary Mills wrote: We have an IMAP e-mail server running on a Solaris 10 10/09 system. It uses six ZFS filesystems built

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-14 Thread Gary Mills
On Thu, Jan 14, 2010 at 10:58:48AM +1100, Daniel Carosone wrote: On Wed, Jan 13, 2010 at 08:21:13AM -0600, Gary Mills wrote: Yes, I understand that, but do filesystems have separate queues of any sort within the ZIL? I'm not sure. If you can experiment and measure a benefit, understanding

Re: [zfs-discuss] Does ZFS use large memory pages?

2010-01-12 Thread Gary Mills
On Mon, Jan 11, 2010 at 01:43:27PM -0600, Gary Mills wrote: This line was a workaround for bug 6642475 that had to do with searching for large contiguous pages. The result was high system time and slow response. I can't find any public information on this bug, although I assume it's

[zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-12 Thread Gary Mills

Re: [zfs-discuss] How do separate ZFS filesystems affect performance?

2010-01-12 Thread Gary Mills
On Tue, Jan 12, 2010 at 11:11:36AM -0600, Bob Friesenhahn wrote: On Tue, 12 Jan 2010, Gary Mills wrote: Is moving the databases (IMAP metadata) to a separate ZFS filesystem likely to improve performance? I've heard that this is important, but I'm not clear why

[zfs-discuss] Does ZFS use large memory pages?

2010-01-11 Thread Gary Mills
fixed by now. It may have only affected Oracle database. I'd like to remove this line from /etc/system now, but I don't know if it will have any adverse effect on ZFS or the Cyrus IMAP server that runs on this machine. Does anyone know if ZFS uses large memory pages?

[zfs-discuss] ZFS filesystems not mounted on reboot with Solaris 10 10/09

2009-12-19 Thread Gary Mills
start method (/lib/svc/method/fs-local) ] [ Dec 19 08:09:12 Method start exited with status 0 ] Is a dependency missing?

Re: [zfs-discuss] Permanent errors on two files

2009-12-06 Thread Gary Mills
memory. There were no disk errors reported. I suppose we can blame the memory.

Re: [zfs-discuss] Permanent errors on two files

2009-12-06 Thread Gary Mills
. You might be able to identify these object numbers with zdb, but I'm not sure how to do that. You can try to use zdb this way to check if these objects still exist: zdb -d space/dcc 0x11e887 0xba25aa

Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-21 Thread Gary Mills
or I can share in a summary anything else that might be of interest You are welcome to share this information.

Re: [zfs-discuss] ZFS commands hang after several zfs receives

2009-09-15 Thread Gary Mills
for us back in May.

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-07 Thread Gary Mills
On Mon, Jul 06, 2009 at 04:54:16PM +0100, Andrew Gabriel wrote: Andre van Eyssen wrote: On Mon, 6 Jul 2009, Gary Mills wrote: As for a business case, we just had an extended and catastrophic performance degradation that was the result of two ZFS bugs. If we have another one like that, our

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-06 Thread Gary Mills
On Sat, Jul 04, 2009 at 07:18:45PM +0100, Phil Harman wrote: Gary Mills wrote: On Sat, Jul 04, 2009 at 08:48:33AM +0100, Phil Harman wrote: ZFS doesn't mix well with mmap(2). This is because ZFS uses the ARC instead of the Solaris page cache. But mmap() uses the latter. So if anyone

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-04 Thread Gary Mills
to optimize the two caches in this environment? Will mmap(2) one day play nicely with ZFS?

Re: [zfs-discuss] Lots of metadata overhead on filesystems with 100M files

2009-06-18 Thread Gary Mills
increased. Our problem was indirectly a result of fragmentation, but it was solved by a ZFS patch. I understand that this patch, which fixes a whole bunch of ZFS bugs, should be released soon. I wonder if this was your problem.

Re: [zfs-discuss] What causes slow performance under load?

2009-05-13 Thread Gary Mills
On Mon, Apr 27, 2009 at 04:47:27PM -0500, Gary Mills wrote: On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote: We have an IMAP server with ZFS for mailbox storage that has recently become extremely slow on most weekday mornings and afternoons. When one of these incidents happens

Re: [zfs-discuss] What causes slow performance under load?

2009-04-27 Thread Gary Mills
On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote: We have an IMAP server with ZFS for mailbox storage that has recently become extremely slow on most weekday mornings and afternoons. When one of these incidents happens, the number of processes increases, the load average increases

Re: [zfs-discuss] Peculiarities of COW over COW?

2009-04-26 Thread Gary Mills
unlikely) Since the LUN is just a large file on the Netapp, I assume that all it can do is to put the blocks back into sequential order. That might have some benefit overall.

Re: [zfs-discuss] Peculiarities of COW over COW?

2009-04-26 Thread Gary Mills
On Sun, Apr 26, 2009 at 05:02:38PM -0500, Tim wrote: On Sun, Apr 26, 2009 at 3:52 PM, Gary Mills [1]mi...@cc.umanitoba.ca wrote: We run our IMAP spool on ZFS that's derived from LUNs on a Netapp filer. There's a great deal of churn in e-mail folders, with messages

Re: [zfs-discuss] What is the 32 GB 2.5-Inch SATA Solid State Drive?

2009-04-25 Thread Gary Mills
On Fri, Apr 24, 2009 at 09:08:52PM -0700, Richard Elling wrote: Gary Mills wrote: Does anyone know about this device? SESX3Y11Z 32 GB 2.5-Inch SATA Solid State Drive with Marlin Bracket for Sun SPARC Enterprise T5120, T5220, T5140 and T5240 Servers, RoHS-6 Compliant

[zfs-discuss] What is the 32 GB 2.5-Inch SATA Solid State Drive?

2009-04-24 Thread Gary Mills
for ZFS? Is there any way I could use this in a T2000 server? The brackets appear to be different.

Re: [zfs-discuss] What causes slow performance under load?

2009-04-22 Thread Gary Mills
Thread: mail systems using ZFS filesystems? Thanks. Those problems do sound similar. I also see positive experiences with T2000 servers, ZFS, and Cyrus IMAP from UC Davis. None of the people involved seem to be active on either the ZFS mailing list or the Cyrus list.

Re: [zfs-discuss] What causes slow performance under load?

2009-04-21 Thread Gary Mills
code for handling two different sizes of memory pages. You can find more information here: http://forums.sun.com/thread.jspa?threadID=5257060 Also, open a support case with Sun if you haven't already.

Re: [zfs-discuss] What causes slow performance under load?

2009-04-20 Thread Gary Mills
On Sat, Apr 18, 2009 at 04:27:55PM -0500, Gary Mills wrote: We have an IMAP server with ZFS for mailbox storage that has recently become extremely slow on most weekday mornings and afternoons. When one of these incidents happens, the number of processes increases, the load average increases

Re: [zfs-discuss] What causes slow performance under load?

2009-04-19 Thread Gary Mills
On Sat, Apr 18, 2009 at 09:41:39PM -0500, Tim wrote: On Sat, Apr 18, 2009 at 9:01 PM, Gary Mills [1]mi...@cc.umanitoba.ca wrote: On Sat, Apr 18, 2009 at 06:53:30PM -0400, Ellis, Mike wrote: In case the writes are a problem: When zfs sends a sync-command

Re: [zfs-discuss] What causes slow performance under load?

2009-04-19 Thread Gary Mills
On Sat, Apr 18, 2009 at 11:45:54PM -0500, Mike Gerdts wrote: [perf-discuss cc'd] On Sat, Apr 18, 2009 at 4:27 PM, Gary Mills mi...@cc.umanitoba.ca wrote: Many other layers are involved in this server. We use scsi_vhci for redundant I/O paths and Sun's iSCSI initiator to connect

[zfs-discuss] What causes slow performance under load?

2009-04-18 Thread Gary Mills
for the slow performance?

Re: [zfs-discuss] What causes slow performance under load?

2009-04-18 Thread Gary Mills
On Sat, Apr 18, 2009 at 05:25:17PM -0500, Bob Friesenhahn wrote: On Sat, 18 Apr 2009, Gary Mills wrote: How do we determine which layer is responsible for the slow performance? If the ARC size is diminishing under heavy load then there must be excessive pressure for memory from

Re: [zfs-discuss] What causes slow performance under load?

2009-04-18 Thread Gary Mills
? (You're not by chance using any type of ssh-transfers etc as part of the backups are you) No, Networker uses RPC to connect to the backup server, but there's no encryption or compression on the client side.

Re: [zfs-discuss] What causes slow performance under load?

2009-04-18 Thread Gary Mills
On Sat, Apr 18, 2009 at 06:06:49PM -0700, Richard Elling wrote: [CC'ed to perf-discuss] Gary Mills wrote: We have an IMAP server with ZFS for mailbox storage that has recently become extremely slow on most weekday mornings and afternoons. When one of these incidents happens, the number

Re: [zfs-discuss] Any news on ZFS bug 6535172?

2009-04-13 Thread Gary Mills
0 0 0 c4t60A98000433469764E4A2D456A696579d0 ONLINE 0 0 0 c4t60A98000433469764E4A476D2F6B385Ad0 ONLINE 0 0 0 c4t60A98000433469764E4A476D2F664E4Fd0 ONLINE 0 0 0 errors: No known data errors

[zfs-discuss] Any news on ZFS bug 6535172?

2009-04-12 Thread Gary Mills
that are memory-mapped by all processes. I can move these from ZFS to UFS if this is likely to help.

Re: [zfs-discuss] Any news on ZFS bug 6535172?

2009-04-12 Thread Gary Mills
On Sun, Apr 12, 2009 at 10:49:49AM -0700, Richard Elling wrote: Gary Mills wrote: We're running a Cyrus IMAP server on a T2000 under Solaris 10 with about 1 TB of mailboxes on ZFS filesystems. Recently, when under load, we've had incidents where IMAP operations became very slow. The general

Re: [zfs-discuss] Any news on ZFS bug 6535172?

2009-04-12 Thread Gary Mills

Re: [zfs-discuss] Any news on ZFS bug 6535172?

2009-04-12 Thread Gary Mills
him. Is there a way to determine this from the iSCSI initiator side? I do have a test mail server that I can play with. That could make a big difference... (Perhaps disabling the write-flush in zfs will make a big difference here, especially on a write-heavy system)

Re: [zfs-discuss] Efficient backup of ZFS filesystems?

2009-04-10 Thread Gary Mills
On Thu, Apr 09, 2009 at 04:25:58PM +0200, Henk Langeveld wrote: Gary Mills wrote: I've been watching the ZFS ARC cache on our IMAP server while the backups are running, and also when user activity is high. The two seem to conflict. Fast response for users seems to depend on their data being

[zfs-discuss] Efficient backup of ZFS filesystems?

2009-04-06 Thread Gary Mills

[zfs-discuss] How to set a minimum ARC size?

2009-04-02 Thread Gary Mills
that, but in this case ZFS is starved for memory and the whole thing slows to a crawl. Is there a way to set a minimum ARC size so that this doesn't happen? We are going to upgrade the memory, but a lower limit on ARC size might still be a good idea.
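On Solaris 10 the ARC bounds are kernel tunables set in /etc/system; a sketch (the byte values are illustrative, not from the message, and a reboot is needed for them to take effect):

```
* /etc/system fragment: bound the ZFS ARC (sizes in bytes;
* the values here are illustrative only).
set zfs:zfs_arc_min = 0x100000000
set zfs:zfs_arc_max = 0x800000000
```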

Re: [zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-04 Thread Gary Mills

Re: [zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-04 Thread Gary Mills
On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote: gm == Gary Mills mi...@cc.umanitoba.ca writes: gm I suppose my RFE for two-level ZFS should be included, Not that my opinion counts for much, but I wasn't deaf to it---I did respond. I appreciate that. I thought

Re: [zfs-discuss] zfs related google summer of code ideas - your vote

2009-03-04 Thread Gary Mills
On Wed, Mar 04, 2009 at 06:31:59PM -0700, Dave wrote: Gary Mills wrote: On Wed, Mar 04, 2009 at 01:20:42PM -0500, Miles Nordin wrote: gm == Gary Mills mi...@cc.umanitoba.ca writes: gm I suppose my RFE for two-level ZFS should be included, It's a simply a consequence of ZFS's end-to-end

Re: [zfs-discuss] RFE for two-level ZFS

2009-02-21 Thread Gary Mills
On Thu, Feb 19, 2009 at 12:36:22PM -0800, Brandon High wrote: On Thu, Feb 19, 2009 at 6:18 AM, Gary Mills mi...@cc.umanitoba.ca wrote: Should I file an RFE for this addition to ZFS? The concept would be to run ZFS on a file server, exporting storage to an application server where ZFS also

Re: [zfs-discuss] RFE for two-level ZFS

2009-02-20 Thread Gary Mills
On Thu, Feb 19, 2009 at 09:59:01AM -0800, Richard Elling wrote: Gary Mills wrote: Should I file an RFE for this addition to ZFS? The concept would be to run ZFS on a file server, exporting storage to an application server where ZFS also runs on top of that storage. All storage management

[zfs-discuss] RFE for two-level ZFS

2009-02-19 Thread Gary Mills
around these problems.

Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-12 Thread Gary Mills
instead of just the IT professionals. That implies that ZFS will have to detect removable devices and treat them differently than fixed devices. It might have to be an option that can be enabled for higher performance with reduced data security.

Re: [zfs-discuss] Two-level ZFS

2009-02-02 Thread Gary Mills
On Mon, Feb 02, 2009 at 09:53:15PM +0700, Fajar A. Nugraha wrote: On Mon, Feb 2, 2009 at 9:22 PM, Gary Mills mi...@cc.umanitoba.ca wrote: On Sun, Feb 01, 2009 at 11:44:14PM -0500, Jim Dunham wrote: If there are two (or more) instances of ZFS in the end-to-end data path, each instance

Re: [zfs-discuss] Two-level ZFS

2009-02-02 Thread Gary Mills
systems, redundancy only on the file server, and end-to-end error detection and correction, does not exist. What additions to ZFS are required to make this work?

[zfs-discuss] Two-level ZFS

2009-02-01 Thread Gary Mills
can identify the source of the data in the event of an error? Does this additional exchange of information fit into the iSCSI protocol, or does it have to flow out of band somehow?

[zfs-discuss] What are the usual suspects in data errors?

2009-01-14 Thread Gary Mills
checksums reasonably detect? Certainly if some of the other error checking failed to detect an error, ZFS would still detect one. How likely are these other error checks to fail? Is there anything else I've missed in this analysis?

Re: [zfs-discuss] snapshot before patching..

2008-12-30 Thread Gary Mills
'. And how/what do I do to reverse to the non-patched system in case something goes terribly wrong? ;-) Just revert to the old BE.
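With a ZFS root and Live Upgrade, "revert to the old BE" is just a reactivation; a hedged sketch (the BE name is an assumption):

```shell
# Before patching: clone the current boot environment (cheap with
# a ZFS root). The BE name "prepatch" is hypothetical.
lucreate -n prepatch

# ... apply patches to the live system ...

# If something goes terribly wrong, reactivate the pre-patch BE
# and reboot into it:
luactivate prepatch
init 6
```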

Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Gary Mills
On Sat, Dec 20, 2008 at 03:52:46AM -0800, Uwe Dippel wrote: This might sound sooo simple, but it isn't. I read the ZFS Administration Guide and it did not give an answer; at least no simple answer, simple enough for me to understand. The intention is to follow the thread Easiest way to

Re: [zfs-discuss] How to create a basic new filesystem?

2008-12-20 Thread Gary Mills
`zpool' a complete disk, by omitting the slice part, it will write its own label to the drive. If you specify it with a slice, it expects that you have already defined that slice. For a root pool, it has to be a slice.

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-12 Thread Gary Mills
On Thu, Dec 11, 2008 at 10:41:26PM -0600, Bob Friesenhahn wrote: On Thu, 11 Dec 2008, Gary Mills wrote: The split responsibility model is quite appealing. I'd like to see ZFS address this model. Is there not a way that ZFS could delegate responsibility for both error detection and correction

Re: [zfs-discuss] Split responsibility for data with ZFS

2008-12-11 Thread Gary Mills
On Wed, Dec 10, 2008 at 12:58:48PM -0800, Richard Elling wrote: Nicolas Williams wrote: On Wed, Dec 10, 2008 at 01:30:30PM -0600, Nicolas Williams wrote: On Wed, Dec 10, 2008 at 12:46:40PM -0600, Gary Mills wrote: On the server, a variety of filesystems can be created on this virtual

[zfs-discuss] Split responsibility for data with ZFS

2008-12-10 Thread Gary Mills
is responsible for integrity of the filesystem. How can it be made to behave in a reliable manner? Can ZFS be better than UFS in this configuration? Is a different form of communication between the two components necessary in this case?

Re: [zfs-discuss] Separate /var

2008-12-02 Thread Gary Mills
On Mon, Dec 01, 2008 at 04:45:16PM -0700, Lori Alt wrote: On 11/27/08 17:18, Gary Mills wrote: On Fri, Nov 28, 2008 at 11:19:14AM +1300, Ian Collins wrote: On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent: On Fri, Nov 28, 2008 at 07:39:43AM +1100, Edward Irvine wrote: I'm

Re: [zfs-discuss] Separate /var

2008-11-27 Thread Gary Mills
. They believe that a separate /var is still good practice. If your mount options are different for /var and /, you will need a separate filesystem. In our case, we use `setuid=off' and `devices=off' on /var for security reasons. We do the same thing for home directories and /tmp .
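The option split described there might look like this on a ZFS root (the dataset names are assumptions):

```shell
# Illustrative only: restrictive mount options on /var and home
# directories. Dataset names are hypothetical.
zfs set setuid=off rpool/ROOT/s10be/var
zfs set devices=off rpool/ROOT/s10be/var
zfs set setuid=off rpool/export/home
zfs set devices=off rpool/export/home
```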

Re: [zfs-discuss] Separate /var

2008-11-27 Thread Gary Mills
On Fri, Nov 28, 2008 at 11:19:14AM +1300, Ian Collins wrote: On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent: On Fri, Nov 28, 2008 at 07:39:43AM +1100, Edward Irvine wrote: I'm currently working with an organisation who want use ZFS for their full zones. Storage is SAN

Re: [zfs-discuss] Fwd: [osol-announce] IMPT: Do not use SXCE Build 102

2008-11-17 Thread Gary Mills
again. Disabling it with `-t' after the system's up seems to do no harm.

Re: [zfs-discuss] Is there a baby thumper?

2008-11-05 Thread Gary Mills
and the same disk controller would be most suitable.

[zfs-discuss] Is there a baby thumper?

2008-11-04 Thread Gary Mills
One of our storage guys would like to put a thumper into service, but he's looking for a smaller model to use for testing. Is there something that has the same CPU, disks, and disk controller as a thumper, but fewer disks? The ones I've seen all have 48 disks.

Re: [zfs-discuss] Is there a baby thumper?

2008-11-04 Thread Gary Mills
On Tue, Nov 04, 2008 at 03:31:16PM -0700, Carl Wimmi wrote: There isn't a de-populated version. Would X4540 with 250 or 500 GB drives meet your needs? That might be our only choice.
