Re: [zfs-discuss] Best practices for zpools on zfs

2009-11-26 Thread Mike Gerdts
On Thu, Nov 26, 2009 at 8:53 PM, Toby Thain t...@telegraphics.com.au wrote: On 26-Nov-09, at 8:57 PM, Richard Elling wrote: On Nov 26, 2009, at 1:20 PM, Toby Thain wrote: On 25-Nov-09, at 4:31 PM, Peter Jeremy wrote: On 2009-Nov-24 14:07:06 -0600, Mike Gerdts mger...@gmail.com wrote

Re: [zfs-discuss] ZFS Random Read Performance

2009-11-25 Thread Mike Gerdts
from cache. A dtrace analysis of just how random the reads are would be interesting. I think that hotspot.d from the DTrace Toolkit would be a good starting place. -- Mike Gerdts http://mgerdts.blogspot.com/ ___ zfs-discuss mailing list zfs-discuss

Re: [zfs-discuss] proposal partial/relative paths for zfs(1)

2009-11-25 Thread Mike Gerdts
://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/019762.html. http://mail.opensolaris.org/pipermail/zfs-discuss/2008-July/019762.html Mike On Thu, Jul 10, 2008 at 4:42 AM, Darren J Moffat darren.mof...@sun.com wrote: I regularly create new zfs filesystems or snapshots and I find it annoying

[zfs-discuss] Best practices for zpools on zfs

2009-11-24 Thread Mike Gerdts
characteristics in this area? Is there less to be concerned about from a performance standpoint if the workload is primarily read? To maximize the efficacy of dedup, would it be best to pick a fixed block size and match it between the layers of zfs? -- Mike Gerdts http://mgerdts.blogspot.com

Re: [zfs-discuss] Best practices for zpools on zfs

2009-11-24 Thread Mike Gerdts
On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling richard.ell...@gmail.com wrote: Good question!  Additional thoughts below... On Nov 24, 2009, at 6:37 AM, Mike Gerdts wrote: Suppose I have a storage server that runs ZFS, presumably providing file (NFS) and/or block (iSCSI, FC) services

Re: [zfs-discuss] Best practices for zpools on zfs

2009-11-24 Thread Mike Gerdts
On Tue, Nov 24, 2009 at 1:39 PM, Richard Elling richard.ell...@gmail.com wrote: On Nov 24, 2009, at 11:31 AM, Mike Gerdts wrote: On Tue, Nov 24, 2009 at 9:46 AM, Richard Elling richard.ell...@gmail.com wrote: Good question!  Additional thoughts below... On Nov 24, 2009, at 6:37 AM, Mike

Re: [zfs-discuss] PSARC recover files?

2009-11-09 Thread Ellis, Mike
Maybe to create snapshots after the fact as a part of some larger disaster recovery effort. (What did my pool/file-system look like at 10am?... Say 30-minutes before the database barfed on itself...) With some enhancements might this functionality be extendable into a poor man's CDP offering

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Mike Gerdts
and sha256 implemented in hardware? I've been waiting very patiently to see this code go in. Thank you for all your hard work (and the work of those that helped too!). -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Mike Gerdts
, a T2 CPU can do 41 Gb/s of SHA256. The implication here is that this keeps the MAU's busy but the rest of the core is still idle for things like compression, TCP, etc. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] dedup question

2009-11-02 Thread Mike Gerdts
blocks stay deduped in the ARC, it means that it is feasible for every block that is accessed with any frequency to be in memory. Oh yeah, and you save a lot of disk space. -- Mike Gerdts http://mgerdts.blogspot.com/

[zfs-discuss] ZFS near-synchronous replication...

2009-10-26 Thread Mike Watkins
Anyone have any creative solutions for near-synchronous replication between 2 ZFS hosts? Near-synchronous, meaning RPO X---0 I realize performance will take a hit. Thanks, Mike
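One common answer to this question on the list was a tight incremental zfs send/receive loop. The sketch below is hypothetical (pool name tank/data, host host2, and the repl- snapshot naming scheme are all made up); it only prints the commands one replication cycle would issue, so it is safe to run anywhere. Looping it with a short sleep between cycles bounds the RPO at roughly the cycle interval.

```shell
SRC=tank/data   # hypothetical source dataset
DST=host2       # hypothetical receiving host

# Print the commands for one replication cycle.
# $1 = previous replicated snapshot name ("" on the first run).
replicate_once() {
    prev=$1
    snap="repl-$(date -u +%Y%m%dT%H%M%S)"
    echo "zfs snapshot ${SRC}@${snap}"
    if [ -z "$prev" ]; then
        # first cycle: full stream
        echo "zfs send ${SRC}@${snap} | ssh $DST zfs receive -F $SRC"
    else
        # later cycles: incremental stream since the last baseline
        echo "zfs send -i @${prev} ${SRC}@${snap} | ssh $DST zfs receive $SRC"
    fi
    echo "$snap"   # caller keeps this as the new baseline
}

base=$(replicate_once "" | tail -1)
replicate_once "$base"
```

In live use the printed lines would be executed directly; the loop driver (and error handling for a failed receive) is left out of this sketch.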

Re: [zfs-discuss] moving files from one fs to another, splittin/merging

2009-10-20 Thread Mike Bo
Once data resides within a pool, there should be an efficient method of moving it from one ZFS file system to another. Think Link/Unlink vs. Copy/Remove. Here's my scenario... When I originally created a 3TB pool, I didn't know the best way to carve up the space, so I used a single, flat ZFS file

[zfs-discuss] zfs disk encryption

2009-10-13 Thread Mike DeMarco
Does anyone know when this will be available? Project says Q4 2009 but does not give a build. -- This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

[zfs-discuss] zfs on FDE

2009-10-13 Thread Mike DeMarco
Any reason why ZFS would not work on a FDE (Full Data Encryption) Hard drive?

Re: [zfs-discuss] bigger zfs arc

2009-10-02 Thread Mike Gerdts
(arcsize)         Target Size (Adaptive):   4207 MB (c) That looks a lot like ~ 4 * 1024 MB. Is this a 64-bit capable system that you have booted from a 32-bit kernel? -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] New to ZFS: One LUN, multiple zones

2009-09-23 Thread Mike Gerdts
will give each thing X/Y space. This is because it is quite likely that someone will do the operation Y++ and there are very few storage technologies that allow you to shrink the amount of space allocated to each item. -- Mike Gerdts http://mgerdts.blogspot.com

Re: [zfs-discuss] New to ZFS: One LUN, multiple zones

2009-09-23 Thread Mike Gerdts
On Wed, Sep 23, 2009 at 7:32 AM, bertram fukuda bertram.fuk...@hp.com wrote: Thanks for the info Mike. Just so I'm clear.  You suggest 1)create a single zpool from my LUN 2) create a single ZFS filesystem 3) create 2 zone in the ZFS filesystem. Sound right? Correct -- Mike Gerdts http

Re: [zfs-discuss] New to ZFS: One LUN, multiple zones

2009-09-23 Thread Mike Gerdts
...@migrate \ | ssh host2 zfs receive zones/zo...@migrate host2# zonecfg -z zone1 create -a /zones/zone1 host2# zonecfg -z zone1 attach host2# zoneadm -z zone1 boot -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-09-13 Thread Mike Gerdts
release (045C)8626. On August 11 they released firmware revisions 8820, 8850, and 02G9, depending on the drive model. http://downloadcenter.intel.com/Detail_Desc.aspx?agr=YProdId=3043DwnldID=17485lang=eng -- Mike Gerdts http://mgerdts.blogspot.com/

[zfs-discuss] Archiving and Restoring Snapshots

2009-09-02 Thread Mike Gerdts
that will lead them to unsympathetic ears if things go poorly. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Archiving and Restoring Snapshots

2009-09-02 Thread Mike Gerdts
On Wed, Sep 2, 2009 at 4:06 PM, cindy.swearin...@sun.com wrote: Hi Mike, I reviewed this doc and the only issue I have with it now is that it uses /var/tmp as an example of storing snapshots in long-term storage elsewhere. One other point comes from zfs(1M): The format of the stream

Re: [zfs-discuss] Archiving and Restoring Snapshots

2009-09-02 Thread Mike Gerdts
On Wed, Sep 2, 2009 at 4:46 PM, Richard Elling richard.ell...@gmail.com wrote: Thanks Cindy! Mike, et al., I think the confusion is surrounding replacing an enterprise backup scheme with send-to-file. There is nothing wrong with send-to-file, it functions as designed. But it isn't designed

Re: [zfs-discuss] Snapshot creation time

2009-08-28 Thread Ellis, Mike
Try a: zfs get -pH -o value creation snapshot -- MikeE -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Chris Baker Sent: Friday, August 28, 2009 10:52 AM To: zfs-discuss@opensolaris.org Subject: [zfs-discuss]
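The -p flag in the command above makes zfs get print the creation property as parseable epoch seconds rather than a formatted date, which is easy to post-process. A hedged sketch (the snapshot name and the fallback epoch value are made up so the sketch runs on a box with no pools; `date -d @epoch` is GNU date syntax, not Solaris /usr/bin/date):

```shell
# Read a snapshot's creation time as epoch seconds, then render it
# human-readable. tank/home@monday is a hypothetical snapshot; the
# fallback value lets this run where zfs or the snapshot is absent.
snap=tank/home@monday
epoch=$(zfs get -pH -o value creation "$snap" 2>/dev/null) || epoch=1251468720
date -u -d "@$epoch"
```

The parseable form is also convenient for sorting snapshots by age or computing retention windows in scripts.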

Re: [zfs-discuss] How to prevent /usr/bin/chmod from following symbolic links?

2009-08-24 Thread Mike Gerdts
/alice/proj1 Alice$ rm /etc/shadow Alice$ cp myshadow /etc Alice$ su - root# -- Mike Gerdts http://mgerdts.blogspot.com/

[zfs-discuss] Snapshot access from non-global zones

2009-08-20 Thread Mike Futerko
. But if the snapshots were created after the mount - they are not accessible from inside of a zone. So is this correct behavior or is it a bug? Any workarounds? Thanks in advance for all comments. Regards, Mike

Re: [zfs-discuss] file change long - was zfs fragmentation

2009-08-12 Thread Mike Gerdts
, as this type of data is already presented via zpool status -v when corruption is detected. http://docs.sun.com/app/docs/doc/819-5461/gbctx?a=view -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Mike Gerdts
me at the moment. At an average file size of 45 KB, that translates to about 3 MB/sec. As you run two data streams, you are seeing throughput that looks kinda like the 2 * 3 MB/sec. With 4 backup streams do you get something that looks like 4 * 3 MB/s? How does that affect iostat output? -- Mike

Re: [zfs-discuss] zfs fragmentation

2009-08-11 Thread Mike Gerdts
in the parallelism gaps as the longer-running ones finish. 3. That is, there is sometimes benefit in having many more jobs to run than you have concurrent streams. This avoids having one save set that finishes long after all the others because of poorly balanced save sets. -- Mike Gerdts http

Re: [zfs-discuss] pathnames in zfs(1M) arguments

2009-08-09 Thread Mike Gerdts
0 - 9.76G - # rmdir .zfs/snapshot/foo # zfs list | grep foo no output I don't know of a similar shortcut for the create or clone subcommands. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] zfs fragmentation

2009-08-08 Thread Mike Gerdts
iSCSI LUNs is probably already giving you most of this benefit (assuming low latency on network connections). -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] zfs fragmentation

2009-08-08 Thread Mike Gerdts
On Sat, Aug 8, 2009 at 3:25 PM, Ed Spencer ed_spen...@umanitoba.ca wrote: On Sat, 2009-08-08 at 15:12, Mike Gerdts wrote: The DBA's that I know use files that are at least hundreds of megabytes in size. Your problem is very different. Yes, definitely. I'm relating records in a table to my

Re: [zfs-discuss] How Virtual Box handles the IO

2009-07-31 Thread Mike Gerdts
/pipermail/zfs-discuss/2007-September/013233.html Quite likely related to: http://bugs.opensolaris.org/view_bug.do?bug_id=6684721 In other words, it was a buggy Sun component that didn't do the right thing with cache flushes. -- Mike Gerdts http://mgerdts.blogspot.com

Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Mike Gerdts
? It appears as though there is an upgrade path. http://www.c0t0d0s0.org/archives/5750-Upgrade-of-a-X4500-to-a-X4540.html However, the troll that you have to pay to follow that path demands a hefty sum ($7995 list). Oh, and a reboot is required. :) -- Mike Gerdts http://mgerdts.blogspot.com

Re: [zfs-discuss] Another user looses his pool (10TB) in this case and 40 days work

2009-07-27 Thread Mike Gerdts
, Virtual PC) have the same default behaviour as VirtualBox? I've lost a pool due to LDoms doing the same. This bug seems to be related. http://bugs.opensolaris.org/view_bug.do?bug_id=6684721 -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] An amusing scrub

2009-07-15 Thread Mike Gerdts
/2009/US/07/15/quadrillion.dollar.glitch/index.html - Rich (Footnote: I ran ntpdate between starting the scrub and it finishing, and time rolled backwards. Nothing more exciting.) And Visa is willing to waive the $15 over-the-limit fee associated with the errant charge... -- Mike Gerdts http

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Mike Gerdts
report contains more detail of the configuration. One thing not covered in that bug report is that the S10u7 ldom has 2048 MB of RAM and the 2009.06 ldom has 2024 MB of RAM. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Mike Gerdts
On Mon, Jul 13, 2009 at 3:16 PM, Joerg Schilling joerg.schill...@fokus.fraunhofer.de wrote: Bob Friesenhahn bfrie...@simple.dallas.tx.us wrote: On Mon, 13 Jul 2009, Mike Gerdts wrote: FWIW, I hit another bug if I turn off primarycache. http://defect.opensolaris.org/bz/show_bug.cgi?id

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Mike Gerdts
128K. I thought I had seen excessive reads there too, but now I can't reproduce that. Creating another fs with recordsize=8k seems to make this behavior go away - things seem to be working as designed. I'll go update the (nota-)bug. -- Mike Gerdts http://mgerdts.blogspot.com

Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-13 Thread Mike Gerdts
# uname -srvp SunOS 5.11 snv_111b sparc -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] deduplication

2009-07-11 Thread Mike Gerdts
ongoing operation. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Interposing on readdir and friends

2009-07-02 Thread Mike Gerdts
/lib/libbc/libc/gen/common/readdir.c The libbc version hasn't changed since the code became public. You can get to an older libc variant of it by clicking on the history link or using the appropriate hg command to get a specific changeset. -- Mike Gerdts http://mgerdts.blogspot.com

Re: [zfs-discuss] Interposing on readdir and friends

2009-07-02 Thread Mike Gerdts
that returns the entries for . and .. out of order. -- Mike Gerdts http://mgerdts.blogspot.com/

[zfs-discuss] zfs select

2009-06-23 Thread Mike Forey
Hi, I'd like to be able to select zfs filesystems, based on the value of properties. Something like this: zfs select mounted=yes Is anyone aware if this feature might be available in the future? If not, is there a clean way of achieving the same result? Thanks, Mike.
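The proposed zfs select can be approximated today by filtering the scripted output of zfs list. A hedged sketch: the select_by_prop helper and the dataset names below are made up, and the printf line stands in for a live system (on a real box you would pipe `zfs list -H -o name,mounted` into it instead).

```shell
# Approximate a hypothetical "zfs select mounted=yes" by filtering
# tab-separated "name value" lines on stdin.
select_by_prop() {   # usage: ... | select_by_prop <value>
    awk -v v="$1" '$2 == v { print $1 }'
}

# Live use:  zfs list -H -o name,mounted | select_by_prop yes
# Sample data standing in for a live system:
printf 'tank\tyes\ntank/home\tyes\ntank/dump\tno\n' | select_by_prop yes
# prints: tank and tank/home
```

The same pattern works for any property pair that zfs list -o can emit (compression, quota, and so on).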

Re: [zfs-discuss] zfs select

2009-06-23 Thread Mike Forey
very tidy, thanks! :)

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-22 Thread Mike Gerdts
is available through sunsolve if you have a support contract. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] CR# 6574286, remove slog device

2009-05-21 Thread Mike Gerdts
/thread.jspa?messageID=377018 I have no idea of the quality or correctness of this solution. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Areca 1160 ZFS

2009-05-07 Thread Mike Gerdts
=== GuiErrMsg0x00: Success. r...@nfs0009:~# Perhaps you have changed the configuration of the array since the last reconfiguration boot. If you run devfsadm then run format, does it see more disks? -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-06 Thread Mike Gerdts
On Wed, May 6, 2009 at 2:54 AM, casper@sun.com wrote: On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike mike.el...@fmr.com wrote: PS: At one point the old JumpStart code was encumbered, and the community wasn't able to assist. I haven't looked at the next-gen jumpstart framework

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-05 Thread Ellis, Mike
How about a generic zfs options field in the JumpStart profile? (essentially an area where options can be specified that are all applied to the boot-pool (with provisions to deal with a broken-out-var)) That should future proof things to some extent allowing for compression=x, copies=x,

Re: [zfs-discuss] Compression/copies on root pool RFE

2009-05-05 Thread Mike Gerdts
On Tue, May 5, 2009 at 6:09 PM, Ellis, Mike mike.el...@fmr.com wrote: PS: At one point the old JumpStart code was encumbered, and the community wasn't able to assist. I haven't looked at the next-gen jumpstart framework that was delivered as part of the OpenSolaris SPARC preview. Can anyone

Re: [zfs-discuss] What is the 32 GB 2.5-Inch SATA Solid State Drive?

2009-04-27 Thread Mike Watkins
Create the zpool with: zpool create name log dev(s) - for the ZIL zpool create name cache dev(s) - for the L2ARC On Sat, Apr 25, 2009 at 11:13 PM, Richard Elling richard.ell...@gmail.com wrote: Gary Mills wrote: On Fri, Apr 24, 2009 at 09:08:52PM -0700, Richard Elling wrote: Gary Mills

Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-04-27 Thread Mike Gerdts
of those finding this conversation in the archives, this looks like it will be fixed in snv_114. http://bugs.opensolaris.org/view_bug.do?bug_id=6824968 http://hg.genunix.org/onnv-gate.hg/rev/4f68f041ddcd -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Add WORM to OpenSolaris

2009-04-26 Thread Ellis, Mike
Wow... that's seriously cool! Throw in some of this... http://www.nexenta.com/demos/auto-cdp.html and now we're really getting somewhere... Nice to see this level of innovation here. Anyone try to employ these types of techniques on s10? I haven't used nexenta in the past, and I'm not clear in

Re: [zfs-discuss] What causes slow performance under load?

2009-04-19 Thread Mike Gerdts
On Sun, Apr 19, 2009 at 10:58 AM, Gary Mills mi...@cc.umanitoba.ca wrote: On Sat, Apr 18, 2009 at 11:45:54PM -0500, Mike Gerdts wrote: Also, you may want to consider doing backups from the NetApp rather than from the Solaris box. I've certainly recommended finding a different way to perform

Re: [zfs-discuss] What causes slow performance under load?

2009-04-18 Thread Mike Gerdts
to be on the same spindles? What does the network look like from the NetApp side? Are the mail server and the NetApp attached to the same switch, or are they at opposite ends of the campus? Is there something between them that is misbehaving? -- Mike Gerdts http://mgerdts.blogspot.com

[zfs-discuss] Permission problems with nfs-mounted zfs user directories

2009-03-31 Thread Pacey, Mike
544 18137839360 1% /home/users % ls -ld paceytmp drwxr-xr-x+ 2 root root 2 2009-03-31 09:47 paceytmp The owner is root, not the user I set chown for. I also seem to have a facl I never set up. Can someone advise on the correct way to do this so that the permissions are correct? Thanks, Mike

Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-03-31 Thread Mike Gerdts
in the global zone and the dataset is delegated to a non-global zone, display the UID rather than a possibly mistaken username. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] [Fwd: ZFS user/group quotas space accounting [PSARC/2009/204 FastTrack timeout 04/08/2009]]

2009-03-31 Thread Mike Gerdts
on the other end of bugs.opensolaris.org will get confused by the request to enhance a feature that doesn't yet exist. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] j4200 drive carriers

2009-03-30 Thread Mike Futerko
Hello 1) Dual IO module option 2) Multipath support 3) Zone support [multi host connecting to same JBOD or same set of JBOD's connected in series. ] This sounds interesting - where I can read more about connecting two hosts to same J4200 etc? Thanks Mike

Re: [zfs-discuss] Trying to determine if this box will be compatible with Opensolaris or Solaris

2009-03-12 Thread mike
the tools simpler - absolutely no UI for instance. does it really need one to dump out things? :) On Wed, Mar 11, 2009 at 7:15 PM, David Magda dma...@ee.ryerson.ca wrote: On Mar 11, 2009, at 21:59, mike wrote: On Wed, Mar 11, 2009 at 6:53 PM, David Magda dma...@ee.ryerson.ca wrote: If you know

[zfs-discuss] Trying to determine if this box will be compatible with Opensolaris or Solaris

2009-03-11 Thread mike
http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm http://www.intel.com/support/motherboards/server/ssr212mc2/index.htm It's hard to use the HAL sometimes. I am trying to locate chipset info but having a hard time...

Re: [zfs-discuss] Trying to determine if this box will be compatible with Opensolaris or Solaris

2009-03-11 Thread mike
) would be forward compatible... On Wed, Mar 11, 2009 at 5:14 PM, mike mike...@gmail.com wrote: http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm http://www.intel.com/support/motherboards/server/ssr212mc2/index.htm It's hard to use the HAL sometimes. I am

Re: [zfs-discuss] Trying to determine if this box will be compatible with Opensolaris or Solaris

2009-03-11 Thread mike
doesnt it require java and x11? On Wed, Mar 11, 2009 at 6:53 PM, David Magda dma...@ee.ryerson.ca wrote: On Mar 11, 2009, at 20:14, mike wrote: http://www.intel.com/products/server/storage-systems/ssr212mc2/ssr212mc2-overview.htm http://www.intel.com/support/motherboards/server/ssr212mc2

Re: [zfs-discuss] Is there a limit to snapshotting?

2009-03-09 Thread mike
this up by reducing the number of mnttab lookups. And zfs list has been changed to no longer show snapshots by default. But it still might make sense to limit the number of snapshots saved: http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10 -- Rich On Sun, Mar 8, 2009 at 10:10 PM, mike

Re: [zfs-discuss] Is there a limit to snapshotting?

2009-03-09 Thread mike
Stone about rolling up daily snapshots into monthly snapshots, which would roll up into yearly snapshots... On Mon, Mar 9, 2009 at 1:29 PM, Richard Elling richard.ell...@gmail.com wrote: mike wrote: Well, I could just use the same script to create my daily snapshot to remove a snapshot

[zfs-discuss] Is there a limit to snapshotting?

2009-03-08 Thread mike
I do a daily snapshot of two filesystems, and over the past few months it's obviously grown to a bunch. zfs list shows me all of those. I can change it to use the -t flag to not show them, so that's good. However, I'm worried about boot times and other things. Will it get to a point with 1000's

Re: [zfs-discuss] Details on raidz boot + zfs patents?

2009-02-28 Thread Mike Gerdts
/#patents. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Mike Gerdts
snapshots could be very helpful to prevent file system crawls and to avoid being fooled by bogus mtimes. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Mike Gerdts
. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Mike Gerdts
On Sat, Feb 28, 2009 at 8:34 PM, Nicolas Williams nicolas.willi...@sun.com wrote: On Sat, Feb 28, 2009 at 05:19:26PM -0600, Mike Gerdts wrote: On Sat, Feb 28, 2009 at 4:33 PM, Nicolas Williams nicolas.willi...@sun.com wrote: On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote

Re: [zfs-discuss] Is zfs snapshot -r atomic?

2009-02-22 Thread Mike Gerdts
are created together (all at once) or not created at all. The benefit of atomic snapshot operations is that the snapshot data is always taken at one consistent time, even across descendent file systems. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] strange performance drop of solaris 10/zfs

2009-01-29 Thread Mike Gerdts
or ksh, so long as the list of zfs mount points does not overflow the maximum command line length. $ fsstat $(zfs list -H -o mountpoint | nawk '$1 !~ /^(\/|-|legacy)$/') 5 -- Mike Gerdts http://mgerdts.blogspot.com/
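The nawk expression in the fsstat command above keeps only real mountpoints, dropping the pseudo-values "-" and "legacy" (and "/") that zfs list can report. A sketch over canned input (nawk is plain awk on non-Solaris systems; the sample mountpoints are made up and stand in for `zfs list -H -o mountpoint`):

```shell
# Demonstrate the mountpoint filter on sample output; /export/home
# and /tank/data are hypothetical mountpoints.
printf '/\n-\nlegacy\n/export/home\n/tank/data\n' |
    awk '$1 !~ /^(\/|-|legacy)$/'
# prints: /export/home and /tank/data
```

Because the regex is anchored with ^ and $, only the exact pseudo-values are dropped; real paths beginning with "/" pass through.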

Re: [zfs-discuss] replace same sized disk fails with too small error

2009-01-18 Thread Ellis, Mike
Does this all go away when BP-rewrite gets fully resolved/implemented? Short of the pool being 100% full, it should allow a rebalancing operation and possible LUN/device-size-shrink to match the new device that is being inserted? Thanks, -- MikeE -Original Message- From:

Re: [zfs-discuss] Can the new consumer NAS devices run OpenSolaris?

2009-01-12 Thread mike
i'm not sure how many via chips support 64-bit, which seems to be highly recommended. atoms seem to be more suitable. On Mon, Jan 12, 2009 at 1:14 PM, Joe S js.li...@gmail.com wrote: In the last few weeks, I've seen a number of new NAS devices released from companies like HP, QNAP, VIA, Lacie,

Re: [zfs-discuss] zfs send / zfs receive hanging

2009-01-12 Thread Mike Futerko
Hi It would be also nice to be able to specify the zpool version during pool creation. E.g. If I have a newer machine and I want to move data to an older one, I should be able to specify the pool version, otherwise it's a one-way street. zpool create -o version=xx ... Mike

Re: [zfs-discuss] 'zfs recv' is very slow

2009-01-08 Thread Mike Futerko
in practice. Regards Mike

Re: [zfs-discuss] zfs list improvements?

2009-01-08 Thread Mike Futerko
snapshot -r file/system it still takes quite a long time if there are hundreds of snapshots, while ls /file/system/.zfs/snapshot returns immediately. Can this also be improved somehow please? Thanks Mike

Re: [zfs-discuss] Problem with time-slider

2008-12-29 Thread Mike Gerdts
running svcs -v zfs/auto-snapshot The last few lines of the log files mentioned in the output from the above command may provide helpful hints. -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] How to compile mbuffer

2008-12-06 Thread Mike Futerko
--disable-debug CFLAGS=-O -m64 MAKE=gmake 5) gmake gmake install 6) /usr/local/bin/mbuffer -V Regards Mike

[zfs-discuss] Status of zpool remove in raidz and non-redundant stripes

2008-12-05 Thread Mike Brancato
I've seen discussions as far back as 2006 that say development is underway to allow the addition and remove of disks in a raidz vdev to grow/shrink the group. Meaning, if a 4x100GB raidz only used 150GB of space, one could do 'zpool remove tank c0t3d0' and data residing on c0t3d0 would be

[zfs-discuss] redundancy in non-redundant stripes

2008-12-05 Thread Mike Brancato
With ZFS, we can enable copies=[1,2,3] to configure how many copies of data there are. With copies of 2 or more, in theory, an entire disk can have read errors, and the zfs volume still works. The unfortunate part here is that the redundancy lies in the volume, not the pool vdev like with

Re: [zfs-discuss] redundancy in non-redundant stripes

2008-12-05 Thread Mike Brancato
In theory, with 2 80GB drives, you would always have a copy somewhere else. But a single drive, no. I guess I'm thinking in the optimal situation. With multiple drives, copies are spread through the vdevs. I guess it would work better if we could define that if copies=2 or more, that at

Re: [zfs-discuss] Status of zpool remove in raidz and non-redundant stripes

2008-12-05 Thread Mike Brancato
Well, I knew it wasn't available. I meant to ask what is the status of the development of the feature? Not started, I presume. Is there no timeline?

Re: [zfs-discuss] Separate /var

2008-12-02 Thread Mike Gerdts
=/zones/$zone/root/var rpool/zones/$zone/var -- Mike Gerdts http://mgerdts.blogspot.com/

Re: [zfs-discuss] Separate /var

2008-12-02 Thread Mike Gerdts
On Tue, Dec 2, 2008 at 6:13 PM, Lori Alt [EMAIL PROTECTED] wrote: On 12/02/08 10:24, Mike Gerdts wrote: I follow you up to here. But why do the next steps? zonecfg -z $zone remove fs dir=/var zfs set mountpoint=/zones/$zone/root/var rpool/zones/$zone/var It's not strictly required

[zfs-discuss] HELP!!! Need to disable zfs

2008-11-25 Thread Mike DeMarco
My root drive is ufs. I have corrupted my zpool which is on a different drive than the root drive. My system paniced and now it core dumps when it boots up and hits zfs start. I have a alt root drive that can boot the system up with but how can I disable zfs from starting on a different drive?

Re: [zfs-discuss] HELP!!! Need to disable zfs

2008-11-25 Thread Mike Gerdts
Boot from the other root drive, mount up the bad one at /mnt. Then: # mv /mnt/etc/zfs/zpool.cache /mnt/etc/zpool.cache.bad On Tue, Nov 25, 2008 at 8:18 AM, Mike DeMarco [EMAIL PROTECTED] wrote: My root drive is ufs. I have corrupted my zpool which is on a different drive than the root

Re: [zfs-discuss] HELP!!! Need to disable zfs

2008-11-25 Thread Mike DeMarco
Boot from the other root drive, mount up the bad one at /mnt. Then: # mv /mnt/etc/zfs/zpool.cache /mnt/etc/zpool.cache.bad On Tue, Nov 25, 2008 at 8:18 AM, Mike DeMarco [EMAIL PROTECTED] wrote: My root drive is ufs. I have corrupted my zpool which is on a different drive than

Re: [zfs-discuss] Performance bake off vxfs/ufs/zfs need some help

2008-11-23 Thread Mike Gerdts
in ASM. Have you tried ASM on Solaris? It should give you a lot of the benefits you would expect from ZFS (pooled storage, incremental backups, (I think) efficient snapshots). It will only work for oracle database files (and indexes, etc.) and should work for clustered storage as well. -- Mike

Re: [zfs-discuss] zfs w/ SATA port multipliers?

2008-11-20 Thread mike
I think you'll need to get device support first. Last I checked there was still no device support for PMPs, sadly. On Thu, Nov 20, 2008 at 4:52 PM, Krenz von Leiberman [EMAIL PROTECTED] wrote: Does ZFS support pooled, mirrored, and raidz storage with SATA-port-multipliers

[zfs-discuss] ZFS snapshot list

2008-11-15 Thread Mike Futerko
Hello Is there any way to list all snapshots of particular file system without listing the snapshots of its children file systems? Thanks, Mike

Re: [zfs-discuss] ZFS snapshot list

2008-11-15 Thread Mike Futerko
Hi [Default] On Sat, 15 Nov 2008 11:37:50 +0200, Mike Futerko [EMAIL PROTECTED] wrote: Hello Is there any way to list all snapshots of particular file system without listing the snapshots of its children file systems? fsnm=tank/fs;zfs list -rt snapshot ${fsnm}|grep ${fsnm}@ or even
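The grep trick quoted above works because a dataset's own snapshots all begin with "<fs>@", while descendant snapshots insert a further "/" before the "@". A sketch over canned data (tank/fs and the snapshot names are hypothetical; the printf stands in for `zfs list -H -o name -rt snapshot tank/fs`):

```shell
# Keep only tank/fs's own snapshots, excluding children's snapshots.
fsnm=tank/fs
printf 'tank/fs@a\ntank/fs@b\ntank/fs/child@a\n' | grep "^${fsnm}@"
# prints: tank/fs@a and tank/fs@b
```

Anchoring the pattern with ^ guards against a child dataset whose name happens to contain the parent's name mid-string.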

Re: [zfs-discuss] Best SXCE version for ZFS Home Server

2008-11-14 Thread mike
Solaris versions: http://blogs.sun.com/weber/entry/solaris_opensolaris_nevada_indiana_sxde On Fri, Nov 14, 2008 at 2:15 AM, Vincent Boisard [EMAIL PROTECTED] wrote: Do you have an idea if your problem is due to live upgrade or b101 itself ? Vincent On Thu, Nov 13, 2008 at 8:06 PM, mike [EMAIL

Re: [zfs-discuss] Best SXCE version for ZFS Home Server

2008-11-14 Thread mike
On Fri, Nov 14, 2008 at 3:18 PM, Al Hopper [EMAIL PROTECTED] wrote: No clue. My friend also upgraded to b101. Said it was working awesome - improved network performance, etc. Then he said after a few days, he's decided to downgrade too - too many other weird side effects. Any more details

Re: [zfs-discuss] Best SXCE version for ZFS Home Server

2008-11-13 Thread mike
Depends on your hardware. I've been stable for the most part on b98. Live upgrade to b101 messed up my networking to nearly a standstill. It stuck even after I nuked the upgrade. I had to reinstall b98. On Nov 13, 2008, at 10:01 AM, Vincent Boisard [EMAIL PROTECTED] wrote: Thanks for

Re: [zfs-discuss] 10u6 any patches yet?

2008-11-12 Thread Mike Watkins
Will probably have a 10_recommended u6 patch bundle sometime in December... For now, to get to u6 (and ZFS) you must do LU (ie u5 to u6) Just FYI On Wed, Nov 12, 2008 at 12:48 PM, Johan Hartzenberg [EMAIL PROTECTED] wrote: On Wed, Nov 12, 2008 at 8:15 PM, Vincent Fox [EMAIL PROTECTED] wrote:

Re: [zfs-discuss] ZFS on Fit-PC Slim?

2008-11-10 Thread Mike Futerko
to disable some other unnecessary processes, ex: svcs | egrep 'webco|wbem|avahi|print|font|cde|sendm|name-service-cache|opengl' | awk '{print $3}' | xargs -n1 svcadm disable This should make your system more usable on light hardware. Regards Mike

[zfs-discuss] Some Samba questions

2008-11-02 Thread mike
doesn't seem to be remembered or I'm not understanding it properly... The user 'mike' should have -all- the privileges, period, no matter what the client machine is etc. I am mounting it -as- mike from both clients...

Re: [zfs-discuss] Strange result when syncing between SPARC and x86

2008-10-29 Thread Mike Futerko
whole zpool not just sync with send/recv. But I think all will be fine there as it seems the problem is in the send/recv part on the file system itself on different architectures. Thanks Mike

[zfs-discuss] zfs zpool recommendation

2008-10-29 Thread Mike
Hi all, I have been asked to build a new server and would like to get some opinions on how to setup a zfs pool for the application running on the server. The server will be exclusively for running netbackup application. Now which would be better? setting up a raidz pool with 6x146gig drives
