(if at all).
Ideas? Point me to docs?
Thank you!
--
Erik Trimble
Java System Support
Mailstop: usca14-102
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
On the topic of ZFS snapshots:
does the snapshot just capture the changed _blocks_, or does it
effectively copy the entire file if any block has changed?
That is, assuming that the snapshot (destination) stays inside the same
pool space.
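I suppose I could test it myself with something like this (a rough
sketch; pool, device, and file names are hypothetical):

   zpool create testpool c1t1d0
   mkfile 100m /testpool/bigfile
   zfs snapshot testpool@before
   # overwrite a single 128K record in place
   dd if=/dev/urandom of=/testpool/bigfile bs=128k count=1 conv=notrunc
   zfs list -o name,used,referenced testpool@before

If snapshots hold only changed blocks, the snapshot's USED should grow
by roughly one record's worth, not by the full 100MB.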
-Erik
Nathan Kroenert wrote:
Anyhoo - What do you think the chances are that any application vendor
is going to write in special handling for Solaris file removal? I'm
guessing slim to none, but have been wrong before...
Agreed. However, to this I reply: Who Cares? I'm guessing that 99% of
the
to handle
that data efficiently. Views fit with that model.
Cheers,
Henk
Saying "Solid State disk" in the storage arena means battery-backed DRAM
(or, rarely, NVRAM). It does NOT include the various forms of
solid-state memory (compact flash, SD, MMC, etc.); "Flash disk" is
reserved for those kinds of devices.
This is historical, since Flash disk hasn't been
AMD Geodes are 32-bit only. I haven't heard any mention that they will
_ever_ be 64-bit. But, honestly, this and the Via chip aren't really
ever going to be targets for Solaris. That is, they simply aren't (any
substantial) part of the audience we're trying to reach with Solaris x86.
Also,
Artem Kachitchkine wrote:
AMD Geodes are 32-bit only. I haven't heard any mention that they
will _ever_ be 64-bit. But, honestly, this and the Via chip aren't
really ever going to be targets for Solaris. That is, they simply
aren't (any substantial) part of the audience we're trying to
Darren J Moffat wrote:
This is an @opensolaris.org alias; it is about working together as a
community, identifying problems, and discovering solutions. I don't
think it is at all appropriate to bring up Sun business choices here.
Where that is appropriate is when Sun employees need to
Dick Davies wrote:
I was wondering if anyone could recommend hardware
for a ZFS-based NAS for home use.
The 'zfs on 32-bit' thread has scared me off a mini-itx fanless
setup, so I'm looking at sparc or opteron. Ideally it would:
a) run quiet (blade 100/150 is ok, x4100 ain't :) )
b) take
Please refer all followups to this thread over to the
[EMAIL PROTECTED] list.
On Fri, 2006-06-23 at 11:27 -0700, Stephen Hahn wrote:
* Erik Trimble [EMAIL PROTECTED] [2006-06-23 11:15]:
It is a good start (yes, I know it's an interface to Bugster, just as
the Java one I pointed out is too
Robert Milkowski wrote:
Hello Peter,
Wednesday, June 28, 2006, 1:11:29 AM, you wrote:
PT On Tue, 2006-06-27 at 17:50, Erik Trimble wrote:
PT You really need some level of redundancy if you're using HW raid.
PT Using plain stripes is downright dangerous. 0+1 vs 1+0 and all
PT that. Seems to me
On Wed, 2006-06-28 at 22:13 +0100, Peter Tribble wrote:
On Wed, 2006-06-28 at 17:32, Erik Trimble wrote:
Given a reasonable number of hot-spares, I simply can't see the (very)
marginal increase in safety given by using HW RAID5 as outweighing the
considerable speed hit using RAID5 takes
can also survive a complete HW mirror array
failure.
(3) Both configs can survive AT LEAST 3 drive failures. RAIDZ of HW
mirrors is slightly better at being able to survive 4+ drive failures,
statistically speaking.
Just out of curiosity, what is the progress on allowing the addition of
drives to an existing RAIDZ (whether pool or vdev)? Particularly in the
case of vdevs, the ability to add drives to expand an existing vdev is
really useful when adding more JBODs to an existing setup...
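(In the meantime, the workaround - sketched here with hypothetical
device names - is to add an entirely new RAIDZ vdev alongside the old
one, which works but costs a full set of drives at once:

   zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
   zpool status tank

The pool then stripes across both RAIDZ vdevs; the original vdev itself
still can't be widened.)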
of configuration and maintenance. At the Medium
Business level, less stress on the Admin staff is usually the driving
factor after raw cost, since Admin staff tend to be extremely
overworked.
...
:-)
On Fri, 2006-09-29 at 09:41 +0200, Roch wrote:
Erik Trimble writes:
On Thu, 2006-09-28 at 10:51 -0700, Richard Elling - PAE wrote:
Keith Clay wrote:
We are in the process of purchasing new SANs that our mail server
runs
on (JES3). We have moved our mailstores to zfs
On Thu, 2006-10-05 at 16:08 -0700, David Dyer-Bennet wrote:
On 10/5/06, Erik Trimble [EMAIL PROTECTED] wrote:
Doing versioning at the file-system layer allows block-level changes to
be stored, so it doesn't consume enormous amounts of extra space. In
fact, it's more efficient than any
a workspace for
code, and you can do VC inside that workspace without having to do a
putback into the main tree. That way, you do frequent VC checkins,
but don't putback to the main tree until things actually work. Or, at
least, you _claim_ they work.
:-)
First of all, let's agree that this discussion of File Versioning makes
no further reference to its usage as Version Control. That is, we aren't
going to talk about it being useful for source code, other than in the
context where a source code file is a document, like any other text
document.
Chad Leigh -- Shire.Net LLC wrote:
disclaimer: I have not used zfs snapshots a lot as I am still
experimenting with zfs, but they appear to be similar to freebsd
snapshots, with which I am familiar.
The user experience with snapshots, in terms of file versioning (#1,
#2, maybe #3) is much
Chad,
I think our problem is that we look at FV from different angles. I look
at it from the point of view of people who have NEVER used FV, and you
look at it from the view of people who have ALWAYS used FV.
For those of us who have never had FV available, technical users have
used VC
David Dyer-Bennet wrote:
On 10/6/06, Nicolas Williams [EMAIL PROTECTED] wrote:
Maybe Erik would find it confusing. I know I would find it
_annoying_.
Then leave it set to 1 version
Per-directory? Per-filesystem?
Whatever. What's the actual issue here?
I don't recall that on TOPS-20
Joseph Mocker wrote:
Nicolas Williams wrote:
The big question though is: how to snapshot file versions when they are
touched/created by applications that are not aware of FV?
Certainly not with every write(2). At fsync(2), close(2), open(2) for
write/append? What if an application deals in
Chad Leigh -- Shire.Net LLC wrote:
Plus, the number of files being created under typical
modern systems is at least two (and probably three or four) orders
of magnitude greater. I've got 100,000 files under /usr in Solaris,
and almost 1,000 under my home directory.
wimp :-) I
Chad Leigh -- Shire.Net LLC wrote:
But see, that assumes you have a logout-type functionality to use.
Which indeed is possible for command-line usage, but then only in a
very limited way. During a typical session, I access almost 20
NFS-mounted directories. And anyone using autofs/automount
Joerg Schilling wrote:
Erik Trimble [EMAIL PROTECTED] wrote:
In order for an FV implementation to be useful for this stated purpose,
it must fulfill the following requirements:
(1) Clean interface for users. That is, one must NOT be presented with
a complete list of all versions unless
Joseph Mocker wrote:
However would it be great if I could somehow easily FV a file I am
working on with some arbitrary (closed) application I am forced to use
without the application really knowing about it, and with little or no
actions I have to take to do so?
To paraphrase an old wives'
Generally, I've found the way to go is to get a 4-port SATA PCI
controller (something based on the Silicon Image stuff seems to be
cheap, common, and supported), and then plunk it into any old PC you can
find (or get off of eBay).
The major caveat here is that I'd recommend trying to find a
The ability to expand (and, to a lesser extent, shrink) a RAIDZ or RAIDZ2
device is actually one of the more critical missing features from ZFS,
IMHO. It is very common for folks to add additional shelf or shelves
into an existing array setup, and if you have created a pool which uses
RAIDZ
Matthew Ahrens wrote:
Erik Trimble wrote:
The ability to expand (and, to a lesser extent, shrink) a RAIDZ or
RAIDZ2 device is actually one of the more critical missing features
from ZFS, IMHO. It is very common for folks to add additional shelf
or shelves into an existing array setup
It looks like the Solaris 10 machines aren't mapping the userIDs
correctly. All machines belong to the same NIS domain. I suspect NFSv4,
but can't be sure. Am I doing something wrong here?
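One thing I still need to rule out (a guess at the usual culprit, not a
confirmed diagnosis): NFSv4 maps users by name@domain via nfsmapid, so
the NFSv4 domain has to match on every host. On Solaris 10:

   grep NFSMAPID_DOMAIN /etc/default/nfs
   svcadm restart svc:/network/nfs/mapid

If that domain differs (or is derived differently) across machines, the
userIDs come out wrong exactly as described.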
There have been extensive discussions on loadable modules and licensing
w/r/t the GPLv2 in the linux kernel. nVidia, amongst others, pushed hard
to allow for non-GPL-compatible licensed code to be allowed as a Linux
kernel module. However, the kernel developers' consensus seems to have
come
Robert Milkowski wrote:
I don't know if the ZFS man pages should teach people about RAID.
If somebody doesn't understand RAID basics, then what they need is some
kind of tool where you just specify a pool of disks and choose from:
space efficient, performance, non-redundant - and that's it; all the
rest will be
-based HW controllers just fine and don't find their
problems to be excessive.
And, honestly, I wouldn't think another driver would be needed.
Attaching an SSD or similar usually uses an existing driver (it normally
appears as a SCSI or FC drive to the OS).
at the same time will cause disk head thrashing.
I'd consider it a lower priority than say, adding a drive to a RAIDZ
vdev, but yes, being able to reduce a zpool's size by removing devices
is quite useful, as it adds a considerable degree of flexibility that
(we) admins crave.
, and the total difference is
going to be $300 or so across the whole setup (which will cost you $5000
or more fully populated). So the cost to use SCSI vs eSATA as the host-
attach is a rounding error.
Frank Cusack wrote:
On January 19, 2007 6:47:30 PM -0800 Erik Trimble
[EMAIL PROTECTED] wrote:
Not to be picky, but the X2100 and X2200 series are NOT
designed/targeted for disk serving (they don't even have redundant power
supplies). They're compute-boxes. The X4100/X4200 are what you
line just isn't cheap enough.
Of course the opinions expressed herein are my own, and I have no
special knowledge of anything relevant to this discussion. (TM)
:-)
.
And, I think we've jumped the shark.
), and actually frequently has either just
crapped out or caused data corruption when used under significant load.
-purposes: the real solution for most enterprise customers is
SAN + ZFS, not either just by itself.
journalling
filesystem but in case you do:
Feb 13 12:03:16 ufs: [ID 879645 kern.notice] NOTICE: /opt/d1635:
unexpected free inode 54305084, run fsck(1M) -o f
This file system is on a medium large array (IBM) in a SAN
environment.
, which (I'm told) is a bit away (I
take it to mean about 10 builds or so - figure a couple of months).
://www.opensolaris.org/os/discussions/),
as this place is pretty much a ZFS-specific place.
. What most people here would prefer is that you could
instead (if desired) end up with a single RAIDZ2 vdev of 6 data drives
and 2 parity drives, but that is NOT currently possible.
. testpool1 should have a size of
approximately 6*64m = 384m, not 7*64m = 448m as in testpool2.
differentiation.
I do not speak for Sun on this matter, nor would I presume that my
opinion is held by others here; it's just my opinion.
VFS layer, and port the ZFS code to
use that new API. Go look at the aforementioned nVidia drivers for an
example of how they do it. Or, maybe even look at the OSS (Open Sound
System) code for how to provide this kind of meta-API.
there are multiple vdevs in a pool.
to the next (and
possibly prev) block, in effect a doubly-linked list? I'd hope for
the former, since that seems most efficient.
Matthew Ahrens wrote:
Erik Trimble wrote:
Under ZFS, any equivalent to 'cp A B' takes up no extra space. The
metadata is updated so that B points to the blocks in A. Should
anyone begin writing to B, only the updated blocks are added on disk,
with the metadata for B now containing the proper
Matthew Ahrens wrote:
Will Murnane wrote:
On 6/23/07, Erik Trimble [EMAIL PROTECTED] wrote:
Now, wouldn't it be nice to have syscalls which would implement cp
and
mv, thus abstracting it away from the userland app?
Not really. Different apps want different behavior in their copying,
so
, that
definitively indicates they are different), then do a bitwise compare on
any that produce the same checksum, to see if they really are the same file.
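A minimal sketch of that two-step check (filenames hypothetical; any
strong hash will do):

   digest -a sha256 fileA fileB   # different digests: the files differ
   cmp fileA fileB                # same digest: confirm byte-for-byte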
resizing A B (that is, the
reboot would be needed to update the new LUN size on the host).
partitions
on B D, causing head seek), though you can still lose up to 2 drives
before experiencing data loss.
Richard Elling wrote:
Erik Trimble wrote:
If you had known about the drive sizes beforehand, the you could have
done something like this:
Partition the drives as follows:
A: 1 20GB partition
B: 1 20GB + 1 10GB partition
C: 1 40GB partition
D: 1 40GB partition + 2 10GB partitions
then you do
2TB of space in the system, what (if any) kind of
issues does ZFS have with running in only 32-bit mode? I remember some
discussions about limitations on certain buffer/structure sizes, but my
memory is foggy, so...
Yes.
After both drives are replaced, you will automatically see the
additional space.
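For a mirror, that looks something like this (hypothetical device names;
let each resilver finish before touching the next drive):

   zpool replace tank c1t0d0 c1t2d0
   zpool replace tank c1t1d0 c1t3d0
   zpool list tank     # SIZE now reflects the larger drives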
, the various FC-controller-to-host,
SCSI-controller-to-jbod solutions are the most flexible and reasonable.
HP's StorageWorks 1500cs is an example. But there, you're looking at
$10k for a decent solution of a couple TB.
, but everyone remembers if it was buggy or caused data loss.
:
Of course, if it was an MS product, we are constantly reminded of both
wink
I have a machine that just hit the zpool corruption issue in CR 6393634.
It panics and crashes the system. :-(
Does anyone have a workaround so that I might be able to recover the
zpool(s) without having to destroy and recreate them?
engineering, which may
provide many more interesting insights into improving what is truly the
FS-for-the-new-millennium.
(Please, if I'm wrong about our [Sun's] patent protection of ZFS's
internals, I want to know _now_. Speak up and correct me please, folks).
slots...
automatically detect AND repair block-level faults.
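A scrub exercises exactly that machinery: it walks every allocated
block, verifies checksums, and rewrites bad copies from good replicas
(pool name hypothetical):

   zpool scrub tank
   zpool status -v tank    # progress, plus any repaired errors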
extensions, so running a Windows guest under xVM
on them isn't currently possible.
Brandon High wrote:
On Fri, May 30, 2008 at 5:59 PM, Erik Trimble [EMAIL PROTECTED] wrote:
One thought on this: for a small server, which is unlikely to ever be CPU
bound, I would suggest looking for an older dual-Socket 940 Opteron
motherboard. They almost all have many PCI-X slots
you new parts for obsolete machines, at cut-rate pricing.
are in the same price range.
Keith Bierman wrote:
On May 30, 2008, at 6:59 PM, Erik Trimble wrote:
The only drawback of the older Socket 940 Opterons is that they don't
support the hardware VT extensions, so running a Windows guest
under xVM
on them isn't currently possible.
From the VirtualBox manual
spools, or whatever), you'll be creating a
new zpool for that purpose. Otherwise, filling /var can be _bad_ (even
if on a different ZFS filesystem), so I don't see much benefit.
But, with ZFS, the counter (it's so simple, why not?) is also valid.
It's just personal whim, now, really.
to flash archives.
I agree that zfs send/receive is not a good backup tool, for all the
reasons previously discussed.
confusing is it if I have 6MB in foo/bar/baz, and
then can't put more than 4MB in foo/quux? Does quota foo report all
the nested quotas, also? Should quota foo/bar/baz also include the
quotas of its parents (i.e. foo and foo/bar )?
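To make the scenario concrete (pool, dataset names, and sizes are all
hypothetical):

   zfs create -o quota=10m tank/foo
   zfs create tank/foo/bar
   zfs create tank/foo/bar/baz
   zfs create tank/foo/quux
   # 6MB written under foo/bar/baz leaves only ~4MB for foo/quux,
   # since both count against foo's 10m quota
   zfs get -r quota,used tank/foo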
pax. The actual flar file output
as a result is a special format unique to flar, so there is no limit on
filesize intrinsic to flar itself.
the
filesystem has been laid out and created.
, picking your
favorite ZFS layout then.
(6) wait - after the install, the system will reboot and ask you to
input the name/ip/nameservice info.
Note: I have not tried this yet, but it _should_ be straightforward.
Aubrey Li wrote:
On Thu, Jun 5, 2008 at 9:49 PM, Erik Trimble [EMAIL PROTECTED] wrote:
I'm pretty sure that OpenSolaris 2008.05 is the same as Nevada b89, which
definitely _does_ have flar.
No, OpenSolaris 200805 is based on b86, not b89.
And if you read indiana mailing list, you'll
time).
So, for FC or iSCSI targets, I would HIGHLY recommend that ZFS _ALWAYS_
be configured in a redundant setup.
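Concretely: even when the array does RAID internally, give ZFS a mirror
of two independent LUNs rather than one big LUN (hypothetical device
names):

   zpool create tank mirror c4t0d0 c5t0d0

That way ZFS has a second copy to repair from when a checksum fails,
instead of only being able to report the corruption.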
understanding talking with the relevant folks is that the fix will be
in 10 Update 6, but not likely available as a patch beforehand.
trying to do 1000 ops/sec.
to 4GB of physical RAM.
Back on topic: the one thing I haven't tried out is ZFS on a
32-bit-only system with PAE, and more than 4GB of RAM. Anyone?
we want the DRAM SSDs.
Sorry about this. I just couldn't resist.
Andrius wrote:
Solaris 10 does not boat
But it does ship!
wink
Brian Hechinger wrote:
On Fri, Jun 27, 2008 at 03:02:43PM -0700, Erik Trimble wrote:
Unfortunately, we need to be careful here with our terminology.
You are completely and 100% correct, Erik. I've been throwing the
term SSD around, but in the context of what I'm thinking, by SSD I
to any single-vdev zpool). Indeed, there are some
nasty problems with using single-LUN zpools, so DON'T DO IT. ZFS is
happiest (and you will be too) when you allow some redundancy inside
ZFS, and not just at the hardware level.
hardware, for that matter).
/small guy can make a real statement, and back it up.
are of dedup before committing to a filesystem-level solution, rather
than an application-level one. In particular, we need some real-world
data on the actual level of duplication under a wide variety of
circumstances.
Bob Friesenhahn wrote:
On Tue, 22 Jul 2008, Erik Trimble wrote:
Dedup Disadvantages:
Obviously you do not work in the Sun marketing department, which is
interested in this feature (due to some other companies marketing it).
Note that the topic starter post came from someone
Just one more thing on this:
Run with a 64-bit processor. Don't even think of using a 32-bit one -
there are known issues with ZFS not quite properly using 32-bit only
structures. That is, ZFS is really 64-bit clean, but not 32-bit clean.
grin
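(A quick way to confirm which kernel you're actually running - standard
Solaris, nothing ZFS-specific:

   isainfo -kv

If it reports "64-bit amd64 kernel modules", you're fine.)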
Cyril Plisko wrote:
On Wed, Sep 17, 2008 at 6:06 AM, Erik Trimble [EMAIL PROTECTED] wrote:
Just one more thing on this:
Run with a 64-bit processor. Don't even think of using a 32-bit one -
there are known issues with ZFS not quite properly using 32-bit only
structures. That is, ZFS
technology? (I'll worry about rated access
times/etc of the drives, I'm just wondering about general tech for an OS
boot drive usage...)
cache
(4) Host HBA
(5) SAN/NAS controller
(6) Host RAM
(7) Host bus issues
switches between my hosts and disks. And, with longer cables, comes
more of the chance that something gets bent a bit too much. Finally,
HBAs are not the most reliable things I've seen (sadly).
a 7.2k drive, depending on I/O load.
tell you from real-life experience you're not even remotely
correct in your assumptions.
--Tim
before replacing any more.
If you don't care about the data, then, just destroy the zpool, replace
the drives, and recreate the zpool from scratch. It's faster and easier
than waiting for the resilvers.
every time.
I _really_ wish rsync had an option to copy in place or something like
that, where the updates are made directly to the file, rather than a
temp copy.
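For what it's worth, newer rsync releases do have this: --inplace
writes changed data directly into the destination file rather than
building a temp copy and renaming it over, which keeps snapshot deltas
down to the blocks that actually changed:

   rsync -av --inplace /source/ /backup/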