[zfs-discuss] RAM size tuning for ZFS file servers...

2006-05-12 Thread Erik Trimble
(if at all). Ideas? Point me to docs? Thank you! -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http

Re: [zfs-discuss] Re: Re: zfs snapshot for backup, Quota

2006-05-18 Thread Erik Trimble
On the topic of ZFS snapshots: does the snapshot just capture the changed _blocks_, or does it effectively copy the entire file if any block has changed? That is, assuming that the snapshot (destination) stays inside the same pool space. -Erik
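
A quick way to see the block-level behavior for yourself; a minimal sketch, assuming a pool named tank with a dataset tank/fs (all names hypothetical):

    zfs snapshot tank/fs@before
    # overwrite about 1MB in the middle of an existing large file
    dd if=/dev/urandom of=/tank/fs/bigfile bs=128k count=8 conv=notrunc
    zfs list -t snapshot -o name,used
    # the USED column for tank/fs@before grows by roughly the 1MB of
    # overwritten blocks, not by the full size of bigfile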

Re: [zfs-discuss] user undo

2006-05-30 Thread Erik Trimble
Nathan Kroenert wrote: Anyhoo - What do you think the chances are that any application vendor is going to write in special handling for Solaris file removal? I'm guessing slim to none, but have been wrong before... Agreed. However, to this I reply: Who Cares? I'm guessing that 99% of the

Re: [zfs-discuss] New Feature Idea: ZFS Views ?

2006-06-07 Thread Erik Trimble
to handle that data efficiently. Views fit with that model. Cheers, Henk -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] ZFS questions

2006-06-17 Thread Erik Trimble
Saying Solid State disk in the storage arena means battery-backed DRAM (or, rarely, NVRAM). It does NOT include the various forms of solid-state memory (compact flash, SD, MMC, etc.); Flash disk is reserved for those kinds of devices. This is historical, since Flash disk hasn't been

Re: [zfs-discuss] ZFS on 32bit x86

2006-06-22 Thread Erik Trimble
AMD Geodes are 32-bit only. I haven't heard any mention that they will _ever_ be 64-bit. But, honestly, this and the Via chip aren't really ever going to be targets for Solaris. That is, they simply aren't (any substantial) part of the audience we're trying to reach with Solaris x86. Also,

Re: [zfs-discuss] ZFS on 32bit x86

2006-06-22 Thread Erik Trimble
Artem Kachitchkine wrote: AMD Geodes are 32-bit only. I haven't heard any mention that they will _ever_ be 64-bit. But, honestly, this and the Via chip aren't really ever going to be targets for Solaris. That is, they simply aren't (any substantial) part of the audience we're trying to

[zfs-discuss] Priorities (was: ZFS on 32bit x86)

2006-06-23 Thread Erik Trimble
Darren J Moffat wrote: This is an @opensolaris.org alias it is about working together as a community and identifying problems and discovering solutions. I don't think it is at all appropriate to bring up Sun business choices here. Where that is appropriate is when Sun employees need to

Re: [zfs-discuss] recommended hardware for a zfs/nfs NAS?

2006-06-23 Thread Erik Trimble
Dick Davies wrote: I was wondering if anyone could recommend hardware for a ZFS-based NAS for home use. The 'zfs on 32-bit' thread has scared me off a mini-itx fanless setup, so I'm looking at sparc or opteron. Ideally it would: a) run quiet (blade 100/150 is ok, x4100 ain't :) ) b) take

Re: [zfs-discuss] Priorities (moving forums...)

2006-06-23 Thread Erik Trimble
Please refer all followups to this thread over to the [EMAIL PROTECTED] list. On Fri, 2006-06-23 at 11:27 -0700, Stephen Hahn wrote: * Erik Trimble [EMAIL PROTECTED] [2006-06-23 11:15]: It is a good start (yes, I know it's an interface to Bugster, just as the Java one I pointed out is too

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Erik Trimble
Robert Milkowski wrote: Hello Peter, Wednesday, June 28, 2006, 1:11:29 AM, you wrote: PT On Tue, 2006-06-27 at 17:50, Erik Trimble wrote: PT You really need some level of redundancy if you're using HW raid. PT Using plain stripes is downright dangerous. 0+1 vs 1+0 and all PT that. Seems to me

Re: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Erik Trimble
On Wed, 2006-06-28 at 22:13 +0100, Peter Tribble wrote: On Wed, 2006-06-28 at 17:32, Erik Trimble wrote: Given a reasonable number of hot-spares, I simply can't see the (very) marginal increase in safety given by using HW RAID5 as outweighing the considerable speed hit using RAID5 takes

Re: Re[2]: [zfs-discuss] Re: ZFS and Storage

2006-06-28 Thread Erik Trimble
can also survive a complete HW mirror array failure. (3) Both configs can survive AT LEAST 3 drive failures. RAIDZ of HW mirrors is slightly better at being able to survive 4+ drive failures, statistically speaking. -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa

Re: [zfs-discuss] Expanding raidz2

2006-07-12 Thread Erik Trimble
Just out of curiosity, what is the progress on allowing the addition of drives to an existing RAIDZ (whether pool or vdev)? Particularly in the case of vdevs, the ability to add additional drives to expand a vdev is really useful when adding more JBODs to an existing setup... -- Erik Trimble

Re: [zfs-discuss] Re: Expanding raidz2

2006-07-13 Thread Erik Trimble
of configuration and maintenance. At the Medium Business level, less stress on the Admin staff is usually the driving factor after raw cost, since Admin staff tend to be extremely overworked. -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT

[zfs-discuss] Importing ZFS filesystems across architectures...

2006-09-13 Thread Erik Trimble
... :-) -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] jbod questions

2006-09-29 Thread Erik Trimble
On Fri, 2006-09-29 at 09:41 +0200, Roch wrote: Erik Trimble writes: On Thu, 2006-09-28 at 10:51 -0700, Richard Elling - PAE wrote: Keith Clay wrote: We are in the process of purchasing new san/s that our mail server runs on (JES3). We have moved our mailstores to zfs

Re: [zfs-discuss] A versioning FS

2006-10-05 Thread Erik Trimble
On Thu, 2006-10-05 at 16:08 -0700, David Dyer-Bennet wrote: On 10/5/06, Erik Trimble [EMAIL PROTECTED] wrote: Doing versioning at the file-system layer allows block-level changes to be stored, so it doesn't consume enormous amounts of extra space. In fact, it's more efficient than any
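
For anyone wanting a rough approximation of this today, frequent automatic snapshots come close; a sketch using cron (pool and dataset names hypothetical):

    # crontab entry: snapshot the home dataset every 15 minutes
    0,15,30,45 * * * * /usr/sbin/zfs snapshot tank/home@auto-`date +\%Y\%m\%d\%H\%M`
    # older versions are then browsable read-only under
    # /tank/home/.zfs/snapshot/auto-YYYYMMDDHHMM/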

Re: [zfs-discuss] A versioning FS

2006-10-05 Thread Erik Trimble
a workspace for code, and you can do VC inside that workspace without having to do a putback into the main tree. That way, you do frequent VC checkins, but don't putback to the main tree until things actually work. Or, at least, you _claim_ them to work. :-) -- Erik Trimble Java System

Re: [zfs-discuss] A versioning FS

2006-10-06 Thread Erik Trimble
First of all, let's agree that this discussion of File Versioning makes no more reference to its usage as Version Control. That is, we aren't going to talk about it being useful for source code, other than in the context where a source code file is a document, like any other text document.

Re: [zfs-discuss] A versioning FS

2006-10-06 Thread Erik Trimble
Chad Leigh -- Shire.Net LLC wrote: disclaimer: I have not used zfs snapshots a lot as I am still experimenting with zfs, but they appear to be similar to freebsd snapshots, with which I am familiar. The user experience with snapshots, in terms of file versioning (#1, #2, maybe #3) is much

Re: [zfs-discuss] A versioning FS

2006-10-06 Thread Erik Trimble
Chad, I think our problem is that we look at FV from different angles. I look at it from the point of view of people who have NEVER used FV, and you look at it from the view of people who have ALWAYS used FV. For those of us who have never had FV available, technical users have used VC

Re: [zfs-discuss] A versioning FS

2006-10-06 Thread Erik Trimble
David Dyer-Bennet wrote: On 10/6/06, Nicolas Williams [EMAIL PROTECTED] wrote: Maybe Erik would find it confusing. I know I would find it _annoying_. Then leave it set to 1 version Per-directory? Per-filesystem? Whatever. What's the actual issue here? I don't recall that on TOPS-20

Re: [zfs-discuss] A versioning FS

2006-10-06 Thread Erik Trimble
Joseph Mocker wrote: Nicolas Williams wrote: The big question though is: how to snapshot file versions when they are touched/created by applications that are not aware of FV? Certainly not with every write(2). At fsync(2), close(2), open(2) for write/append? What if an application deals in

Re: [zfs-discuss] A versioning FS

2006-10-07 Thread Erik Trimble
Chad Leigh -- Shire.Net LLC wrote: Plus, the number of files being created under typical modern systems is at least two (and probably three or four) orders of magnitude greater. I've got 100,000 files under /usr in Solaris, and almost 1,000 under my home directory. wimp :-) I

Re: [zfs-discuss] A versioning FS

2006-10-07 Thread Erik Trimble
Chad Leigh -- Shire.Net LLC wrote: But see, that assumes you have a logout-type functionality to use. Which indeed is possible for command-line usage, but then only in a very limited way. During a typical session, I access almost 20 NFS-mounted directories. And anyone using autofs/automount

Re: [zfs-discuss] A versioning FS

2006-10-08 Thread Erik Trimble
Joerg Schilling wrote: Erik Trimble [EMAIL PROTECTED] wrote: In order for an FV implementation to be useful for this stated purpose, it must fulfill the following requirements: (1) Clean interface for users. That is, one must NOT be presented with a complete list of all versions unless

Re: [zfs-discuss] A versioning FS

2006-10-09 Thread Erik Trimble
Joseph Mocker wrote: However, would it be great if I could somehow easily FV a file I am working on with some arbitrary (closed) application I am forced to use, without the application really knowing about it, and with little or no actions I have to take to do so? To paraphrase an old wives'

Re: [zfs-discuss] ZFS Inexpensive SATA Whitebox

2006-10-11 Thread Erik Trimble
Generally, I've found the way to go is to get a 4-port SATA PCI controller (something based on the Silicon Image stuff seems to be cheap, common, and supported), and then plunk it into any old PC you can find (or get off of eBay). The major caveat here is that I'd recommend trying to find a

Re: [zfs-discuss] Changing number of disks in a RAID-Z?

2006-10-24 Thread Erik Trimble
The ability to expand (and, to a lesser extent, shrink) a RAIDZ or RAIDZ2 device is actually one of the more critical missing features from ZFS, IMHO. It is very common for folks to add an additional shelf or shelves into an existing array setup, and if you have created a pool which uses RAIDZ

Re: [zfs-discuss] Changing number of disks in a RAID-Z?

2006-10-24 Thread Erik Trimble
Matthew Ahrens wrote: Erik Trimble wrote: The ability to expand (and, to a less extent, shrink) a RAIDZ or RAIDZ2 device is actually one of the more critical missing features from ZFS, IMHO. It is very common for folks to add additional shelf or shelves into an existing array setup

[zfs-discuss] ZFS/NFS issue...

2006-11-03 Thread Erik Trimble
It looks like the Solaris 10 machines aren't mapping the userIDs correctly. All machines belong to the same NIS domain. I suspect NFSv4, but can't be sure. Am I doing something wrong here? -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US

Re: [zfs-discuss] ZFS for Linux 2.6

2006-11-07 Thread Erik Trimble
There have been extensive discussions on loadable modules and licensing w/r/t the GPLv2 in the linux kernel. nVidia, amongst others, pushed hard to allow for non-GPL-compatible licensed code to be allowed as a Linux kernel module. However, the kernel developers' consensus seems to have come

Re: [zfs-discuss] Re: Adding disk to a RAID-Z?

2007-01-11 Thread Erik Trimble
Robert Milkowski wrote: I don't know if ZFS MAN pages should teach people about RAID. If somebody doesn't understand RAID basics then some kind of tool where you just specify pool of disk and have to choose from: space efficient, performance, non-redundant and that's it - all the rest will be

Re: [zfs-discuss] Solid State Drives?

2007-01-11 Thread Erik Trimble
-based HW controllers just fine and don't find their problems to be excessive. And, honestly, I wouldn't think another driver would be needed. Attaching a SSD or similar usually uses an existing driver (it normally appears as a SCSI or FC drive to the OS). -- Erik Trimble Java System Support

Re: [zfs-discuss] Implementation Question

2007-01-17 Thread Erik Trimble
at the same time will cause disk head thrashing. -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org

Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread Erik Trimble
I'd consider it a lower priority than, say, adding a drive to a RAIDZ vdev, but yes, being able to reduce a zpool's size by removing devices is quite useful, as it adds a considerable degree of flexibility that (we) admins crave. -- Erik Trimble Java System Support Mailstop: usca14-102 Phone

Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-19 Thread Erik Trimble
, and the total difference is going to be $300 or so across the whole setup (which will cost you $5000 or more fully populated). So the cost to use SCSI vs eSATA as the host-attach is a rounding error. -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US

Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-20 Thread Erik Trimble
Frank Cusack wrote: On January 19, 2007 6:47:30 PM -0800 Erik Trimble [EMAIL PROTECTED] wrote: Not to be picky, but the X2100 and X2200 series are NOT designed/targeted for disk serving (they don't even have redundant power supplies). They're compute-boxes. The X4100/X4200 are what you

Re: [zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-20 Thread Erik Trimble
line just isn't cheap enough. Of course the opinions expressed herein are my own, and I have no special knowledge of anything relevant to this discussion. (TM) :-) -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800

Re: [zfs-discuss] Re: Re: ZFS or UFS - what to do?

2007-01-28 Thread Erik Trimble
. And, I think we've jumped the shark. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman

Re: [zfs-discuss] solaris - ata over ethernet - zfs - HPC

2007-02-05 Thread Erik Trimble
-- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] Re: solaris - ata over ethernet - zfs - HPC

2007-02-07 Thread Erik Trimble
), and actually frequently has either just crapped out or caused data corruption when used under significant load. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing

Re: Re[2]: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread Erik Trimble
-purposes: the real solution for most enterprise customers is SAN + ZFS, not either just by itself. -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs

Re: Re[4]: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread Erik Trimble
journalling filesystem but in case you do: Feb 13 12:03:16 ufs: [ID 879645 kern.notice] NOTICE: /opt/d1635: unexpected free inode 54305084, run fsck(1M) -o f This file system is on a medium large array (IBM) in a SAN environment. -- Erik Trimble Java System Support Mailstop: usca14-102

[zfs-discuss] Re: opensolaris, zfs rootfs raidz

2007-04-05 Thread Erik Trimble
, which (I'm told) is a bit away (I take it to mean about 10 builds or so - figure a couple of months). -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list

Re: [zfs-discuss] Re: opensolaris, zfs rootfs raidz

2007-04-05 Thread Erik Trimble
://www.opensolaris.org/os/discussions/), as this place is pretty much a ZFS-specific place. -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] Re: Re: simple Raid-Z question

2007-04-08 Thread Erik Trimble
. What most people here would prefer is that you could instead (if desired) end up with a single RAIDZ2 vdev of 6 data drives and 2 parity drives, but that is NOT currently possible. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific
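
What is possible today is appending a second RAIDZ2 vdev, which the pool then stripes across; a sketch with hypothetical device names:

    # adds a new 6-disk raidz2 vdev alongside the existing one
    zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
    # the result is two independent raidz2 vdevs striped together,
    # not the single 6-data/2-parity vdev described above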

[zfs-discuss] Re: simple Raid-Z question

2007-04-08 Thread Erik Trimble
. testpool1 should have a size of approximately 6*64m = 384m, not 7*64m = 448m as in testpool2. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss
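
The experiment is easy to reproduce with file-backed vdevs; a sketch, with hypothetical paths:

    mkfile 64m /var/tmp/d0 /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 \
        /var/tmp/d4 /var/tmp/d5 /var/tmp/d6
    zpool create testpool1 raidz /var/tmp/d0 /var/tmp/d1 /var/tmp/d2 \
        /var/tmp/d3 /var/tmp/d4 /var/tmp/d5 /var/tmp/d6
    zfs list testpool1   # usable space: about 6*64m; one disk's worth goes to parity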

Re: [zfs-discuss] Re: ZFS for Linux (NO LICENSE talk, please)

2007-04-17 Thread Erik Trimble
differentiation. I do not speak for Sun on this matter, nor would I presume that my opinion is held by others here; it's just my opinion. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800

Re: [zfs-discuss] Re: ZFS for Linux (NO LICENSE talk, please)

2007-04-17 Thread Erik Trimble
VFS layer, and port the ZFS code to use that new API. Go look at the aforementioned nVidia drivers for an example of how they do it. Or, maybe even look at the OSS (Open Sound System) code for how to provide this kind of meta-API. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone

Re: [zfs-discuss] zfs block allocation strategy

2007-04-18 Thread Erik Trimble
there are multiple vdevs in a pool. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs

Re: [zfs-discuss] zfs space efficiency

2007-06-23 Thread Erik Trimble
to the next (and possibly prev) block, in effect a doubly-linked list? I'd hope for the former, since that seems most efficient. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs

Re: [zfs-discuss] zfs space efficiency

2007-06-23 Thread Erik Trimble
Matthew Ahrens wrote: Erik Trimble wrote: Under ZFS, any equivalent to 'cp A B' takes up no extra space. The metadata is updated so that B points to the blocks in A. Should anyone begin writing to B, only the updated blocks are added on disk, with the metadata for B now containing the proper

Re: [zfs-discuss] zfs space efficiency

2007-06-24 Thread Erik Trimble
Matthew Ahrens wrote: Will Murnane wrote: On 6/23/07, Erik Trimble [EMAIL PROTECTED] wrote: Now, wouldn't it be nice to have syscalls which would implement cp and mv, thus abstracting it away from the userland app? Not really. Different apps want different behavior in their copying, so

Re: [zfs-discuss] zfs space efficiency

2007-06-25 Thread Erik Trimble
, that definitively indicates they are different), then do a bitwise compare on any that produce the same checksum, to see if they really are the same file. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800
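
A minimal sketch of that two-pass approach (directory and tools are illustrative; Solaris would use digest -a md5 where GNU systems have md5sum):

    # pass 1: checksum every file; matching checksums are only candidates
    find /data -type f -exec md5sum {} + | sort > /tmp/sums
    # pass 2: bitwise-compare each candidate pair to confirm a true duplicate
    awk '$1 == prev { print last, $2 } { prev = $1; last = $2 }' /tmp/sums |
    while read a b; do
        cmp -s "$a" "$b" && echo "duplicate: $a $b"
    done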

Re: [zfs-discuss] Re: Drive Failure w/o Redundancy

2007-06-27 Thread Erik Trimble
resizing A & B (that is, the reboot would be needed to update the new LUN size on the host). -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss

Re: [zfs-discuss] Re: Drive Failure w/o Redundancy

2007-06-27 Thread Erik Trimble
partitions on B & D, causing head seek), though you can still lose up to 2 drives before experiencing data loss. -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss

Re: [zfs-discuss] Re: Drive Failure w/o Redundancy

2007-06-28 Thread Erik Trimble
Richard Elling wrote: Erik Trimble wrote: If you had known about the drive sizes beforehand, then you could have done something like this: Partition the drives as follows: A: 1 20GB partition B: 1 20GB & 1 10GB partition C: 1 40GB partition D: 1 40GB partition & 2 10GB partitions then you do

[zfs-discuss] ZFS on 32-bit...

2007-06-29 Thread Erik Trimble
2TB of space in the system, what (if any) kind of issues does ZFS have with running in only 32-bit mode? I remember some discussions about limitations on certain buffer/structure sizes, but my memory is foggy, so... -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa

Re: [zfs-discuss] enlarge a mirrored pool

2007-10-12 Thread Erik Trimble
Yes. After both drives are replaced, you will automatically see the additional space. -- Erik Trimble Java System Support Mailstop: usca14-102 Phone: x17195 Santa Clara, CA Timezone: US/Pacific
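
The sequence itself is short; a sketch with hypothetical device names:

    zpool replace tank c1t2d0 c1t4d0   # swap in the first, larger drive
    zpool status tank                  # wait for the resilver to complete
    zpool replace tank c1t3d0 c1t5d0   # then swap in the second
    zpool list tank                    # the extra space shows up once both are done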

Re: [zfs-discuss] LowEnd Batt. backed raid controllers that will deal with ZFS commit semantics correctly?

2008-01-24 Thread Erik Trimble
, the various FC-controller-to-host, SCSI-controller-to-jbod solutions are the most flexible and reasonable. HP's StorageWorks 1500cs is an example. But there, you're looking at $10k for a decent solution of a couple TB. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195

Re: [zfs-discuss] Status of ZFS boot for sparc?

2008-03-26 Thread Erik Trimble
, but everyone remembers if it was buggy or caused data loss. : Of course, if it was an MS product, we are constantly reminded of both wink -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800

[zfs-discuss] Any fix for the ZFS pool corruption Bug 6393634 ?

2008-05-15 Thread Erik Trimble
I have a machine that just hit the zpool corruption issue in CR 6393634. It panics and crashes the system. :-( Does anyone have a workaround so that I might be able to recover the zpool(s) without having to destroy & recreate them? -- Erik Trimble Java System Support Mailstop: usca22-123 Phone

Re: [zfs-discuss] The ZFS inventor and Linus sitting in a tree?

2008-05-20 Thread Erik Trimble
engineering, which may provide many more interesting insights into improving what is truly the FS-for-the-new-millennium. (Please, if I'm wrong about our [Sun's] patent protection of ZFS's internals, I want to know _now_. Speak up and correct me please, folks). -- Erik Trimble Java System

Re: [zfs-discuss] ZFS Project Hardware

2008-05-23 Thread Erik Trimble
slots... -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] ZFS: A general question

2008-05-24 Thread Erik Trimble
automatically detect AND repair block-level faults. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org

Re: [zfs-discuss] ZFS Project Hardware

2008-05-30 Thread Erik Trimble
extensions, so running a Windows guest under xVM on them isn't currently possible. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss

Re: [zfs-discuss] ZFS Project Hardware

2008-05-30 Thread Erik Trimble
Brandon High wrote: On Fri, May 30, 2008 at 5:59 PM, Erik Trimble [EMAIL PROTECTED] wrote: One thought on this: for a small server, which is unlikely to ever be CPU bound, I would suggest looking for an older dual-Socket 940 Opteron motherboard. They almost all have many PCI-X slots

Re: [zfs-discuss] ZFS Project Hardware

2008-05-30 Thread Erik Trimble
you new parts for obsolete machines, at cut-rate pricing. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http

Re: [zfs-discuss] ZFS Project Hardware

2008-05-31 Thread Erik Trimble
are in the same price range. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs

Re: [zfs-discuss] ZFS Project Hardware

2008-06-02 Thread Erik Trimble
Keith Bierman wrote: On May 30, 2008, at 6:59 PM, Erik Trimble wrote: The only drawback of the older Socket 940 Opterons is that they don't support the hardware VT extensions, so running a Windows guest under xVM on them isn't currently possible. From the VirtualBox manual

Re: [zfs-discuss] ZFS root finally here in SNV90

2008-06-04 Thread Erik Trimble
spools, or whatever), you'll be creating a new zpool for that purpose. Otherwise, filling /var can be _bad_ (even if on a different ZFS filesystem), so I don't see much benefit. But, with ZFS, the counter (it's so simple, why not?) is also valid. It's just personal whim, now, really. -- Erik

Re: [zfs-discuss] system backup and recovery

2008-06-05 Thread Erik Trimble
to flash archives. I agree that zfs send/receive is not a good backup tool, for all the reasons previously discussed. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs

Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-05 Thread Erik Trimble
confusing is it if I have 6MB in foo/bar/baz, and then can't put more than 4MB in foo/quux? Does quota foo report all the nested quotas, also? Should quota foo/bar/baz also include the quotas of its parents (i.e. foo and foo/bar)? -- Erik Trimble Java System Support Mailstop: usca22-123 Phone
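
The behavior is easy to probe directly; a sketch with hypothetical dataset names:

    zfs set quota=10m tank/foo
    zfs set quota=4m tank/foo/quux
    zfs get -r quota tank/foo   # each dataset reports only its own quota
    # writes into tank/foo/quux are bounded by whichever limit is hit first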

Re: [zfs-discuss] system backup and recovery

2008-06-05 Thread Erik Trimble
pax. The flar file that is actually output is in a special format unique to flar, so there is no filesize limit intrinsic to flar itself. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800
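
Creating one is a single command; a sketch (archive name and destination path hypothetical):

    # archive the running system into one compressed flar file
    flarcreate -n websrv-golden -c /net/backuphost/flars/websrv.flar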

Re: [zfs-discuss] system backup and recovery

2008-06-05 Thread Erik Trimble
the filesystem has been laid out and created. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman

Re: [zfs-discuss] system backup and recovery

2008-06-05 Thread Erik Trimble
, picking your favorite ZFS layout then. (6) wait - after the install, the system will reboot and ask you to input the name/ip/nameservice info. Note: I have not tried this yet, but it _should_ be straightforward. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara

Re: [zfs-discuss] system backup and recovery

2008-06-05 Thread Erik Trimble
Aubrey Li wrote: On Thu, Jun 5, 2008 at 9:49 PM, Erik Trimble [EMAIL PROTECTED] wrote: I'm pretty sure that OpenSolaris 2008.5 is the same as Nevada b89, which definitely _does_ have flar. No, OpenSolaris 200805 is based on b86, not b89. And if you read the indiana mailing list, you'll

Re: [zfs-discuss] zpool with RAID-5 from intelligent storage arrays

2008-06-16 Thread Erik Trimble
time). So, for FC or iSCSI targets, I would HIGHLY recommend that ZFS _ALWAYS_ be configured in a redundant setup. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss

Re: [zfs-discuss] ZFS Mirror Problem

2008-06-16 Thread Erik Trimble
understanding talking with the relevant folks is that the fix will be in 10 Update 6, but not likely available as a patch beforehand. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800

Re: [zfs-discuss] memory hog

2008-06-23 Thread Erik Trimble
trying to do 1000 ops/sec. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs

Re: [zfs-discuss] memory hog

2008-06-23 Thread Erik Trimble
to 4GB of physical RAM. Back on topic: the one thing I haven't tried out is ZFS on a 32-bit-only system with PAE, and more than 4GB of RAM. Anyone? -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800

Re: [zfs-discuss] ZFS configuration for VMware

2008-06-27 Thread Erik Trimble
we want the DRAM SSDs. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs

Re: [zfs-discuss] Solaris 10 does not boat

2008-06-27 Thread Erik Trimble
Sorry about this. I just couldn't resist. Andrius wrote: Solaris 10 does not boat But it does ship! wink -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss

Re: [zfs-discuss] ZFS configuration for VMware

2008-06-28 Thread Erik Trimble
Brian Hechinger wrote: On Fri, Jun 27, 2008 at 03:02:43PM -0700, Erik Trimble wrote: Unfortunately, we need to be careful here with our terminology. You are completely and 100% correct, Erik. I've been throwing the term SSD around, but in the context of what I'm thinking, by SSD I

Re: [zfs-discuss] zpool with RAID-5 from intelligent storage arrays

2008-06-30 Thread Erik Trimble
to any single-vdev zpool). Indeed, there are some nasty problems with using single-LUN zpools, so DON'T DO IT. ZFS is happiest (and you will be too) when you allow some redundancy inside ZFS, and not just at the hardware level. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone
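
Concretely, the unsafe and safe layouts look like this (LUN device names hypothetical):

    # dangerous: a single-LUN zpool; ZFS can detect corruption but not repair it
    zpool create tank c4t0d0
    # better: mirror two LUNs inside ZFS so it can self-heal bad blocks
    zpool create tank mirror c4t0d0 c4t1d0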

Re: [zfs-discuss] zfs on top of 6140 FC array

2008-07-01 Thread Erik Trimble
hardware, for that matter). -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs

Re: [zfs-discuss] ZFS deduplication

2008-07-08 Thread Erik Trimble
/small guy can make a real statement, and back it up. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http

Re: [zfs-discuss] ZFS deduplication

2008-07-22 Thread Erik Trimble
are of dedup before committing to a filesystem-level solution, rather than an application-level one. In particular, we need some real-world data on the actual level of duplication under a wide variety of circumstances. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa

Re: [zfs-discuss] ZFS deduplication

2008-07-22 Thread Erik Trimble
Bob Friesenhahn wrote: On Tue, 22 Jul 2008, Erik Trimble wrote: Dedup Disadvantages: Obviously you do not work in the Sun marketing department which is interested in this feature (due to some other companies marketing it). Note that the topic starter post came from someone

Re: [zfs-discuss] ZFS system requirements

2008-09-16 Thread Erik Trimble
Just one more thing on this: Run with a 64-bit processor. Don't even think of using a 32-bit one - there are known issues with ZFS not quite properly using 32-bit only structures. That is, ZFS is really 64-bit clean, but not 32-bit clean. grin -- Erik Trimble Java System Support Mailstop
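
Checking which kernel you are actually running takes one command on Solaris:

    isainfo -kv
    # "64-bit amd64 kernel modules" is what you want to see;
    # "32-bit i386 kernel modules" means the caveats above apply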

Re: [zfs-discuss] ZFS system requirements

2008-09-17 Thread Erik Trimble
Cyril Plisko wrote: On Wed, Sep 17, 2008 at 6:06 AM, Erik Trimble [EMAIL PROTECTED] wrote: Just one more things on this: Run with a 64-bit processor. Don't even think of using a 32-bit one - there are known issues with ZFS not quite properly using 32-bit only structures. That is, ZFS

[zfs-discuss] Which is better for root ZFS: mlc or slc SSD?

2008-09-24 Thread Erik Trimble
technology? (I'll worry about rated access times/etc of the drives, I'm just wondering about general tech for an OS boot drive usage...) -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800

Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Erik Trimble
cache (4) Host HBA (5) SAN/NAS controller (6) Host RAM (7) Host bus issues -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org

Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Erik Trimble
switches between my hosts and disks. And, with longer cables, comes more of the chance that something gets bent a bit too much. Finally, HBAs are not the most reliable things I've seen (sadly). -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US

Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-01 Thread Erik Trimble
a 7.2k drive, depending on I/O load. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA Timezone: US/Pacific (GMT-0800) ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman

Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-02 Thread Erik Trimble
tell you from real-life experience you're not even remotely correct in your assumptions. --Tim ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss -- Erik Trimble Java System

Re: [zfs-discuss] questions about replacing a raidz2 vdev disk with a larger one

2008-10-11 Thread Erik Trimble
before replacing any more. If you don't care about the data, then just destroy the zpool, replace the drives, and recreate the zpool from scratch. It's faster and easier than waiting for the resilvers. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA
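
The one-at-a-time replacement loop is simple but slow; a sketch with hypothetical device names:

    zpool replace tank c2t0d0 c2t8d0   # replace a single disk with a larger one
    zpool status tank                  # shows resilver progress; wait for it to
                                       # finish before touching the next disk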

Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread Erik Trimble
every time. I _really_ wish rsync had an option to copy in place or something like that, where the updates are made directly to the file, rather than a temp copy. -- Erik Trimble Java System Support Mailstop: usca22-123 Phone: x17195 Santa Clara, CA
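
For what it is worth, rsync does have such a flag: --inplace writes changes directly into the destination file rather than building a temporary copy, at the cost of rsync's usual write-to-temp-then-rename safety (paths and host hypothetical):

    # update the destination in place, so a COW filesystem like ZFS
    # only allocates new blocks for the data that actually changed
    rsync -av --inplace /src/dir/ backuphost:/dst/dir/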
