Orvar Korvar wrote:
Ouch, that seems slow. Do you think ZFS is still the best solution for this
workload, or would, for instance, Veritas do better?
Maybe this workload would be more appropriate for PostgreSQL or Oracle?
How big are the files, how much does their size vary, and how
Carson Gaspar wrote:
And s2 isn't special in any way except convention. _Any_ slice that
starts at sector 0 (and ends far enough away) will behave the same. If
your disk label is broken/missing/whatever, you won't _have_ an s2.
So, can I build a working system without s2?
I think the
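For anyone experimenting: by convention, slice 2 carries the "backup" tag and
spans the whole disk starting at sector 0, which you can confirm from the
VTOC. A hedged one-liner (device name hypothetical):

    # Print the VTOC; conventionally slice 2 (tag 5, "backup")
    # starts at sector 0 and covers every sector on the disk.
    prtvtoc /dev/rdsk/c0t0d0s2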
Darryl wrote:
Is there a consensus that Seagate Barracuda drives are worthwhile, stable,
etc...?
All drives suck. Use RAID. :-)
-Luke
MC wrote:
Putting it into the zpool command would feel odd to me, but I agree that
there may be a useful utility here.
There MAY be a useful utility here? I know this isn't your fight Dave, but
this tipped me over the edge and I have to say something :)
Can we agree that the format command lists the
Volker A. Brandt wrote:
Hmmm... the current scheme seems to be subject verb object.
E.g.
disk list
That would work fine for me!
It would also be easy enough to put on a Rosetta Stone-style reference
card.
[0] Including lspci and lsusb with Solaris would be a great idea --
Well,
Luke Scharf wrote:
3. None of the grey-haired Solaris gurus that I've talked to have
ever been able to explain why.
I do realize that older architectures needed some way to record the disk
geometry. But why do it that way?
-Luke
Brandon High wrote:
Inexpensive cases have inexpensive power supplies.
Most of the Antec cases come with good power supplies. I have a
Thermaltake Matrix case that came with a PSU and it's been reliable
for 2 years. I believe the case and PSU together were about $100.
I just picked up an Antec
On Sunday 18 May 2008 2:30:30 pm Mario Goebbels wrote:
Here's a link to a recent blog entry by Jeff Bonwick, lead engineer of
ZFS, showing him with Linus Torvalds and making mysterious comments in a
post tagged ZFS.
Well, here's the link, anyhow. :S
Dave wrote:
On 05/08/2008 08:11 AM, Ross wrote:
It may be an obvious point, but are you aware that snapshots need to be
stopped any time a disk fails? It's something to consider if you're
planning frequent snapshots.
I've never heard this before. Why would snapshots need to be
-----Original Message-----
From: Stuart Anderson [EMAIL PROTECTED]
Date: Mon, 14 Apr 2008 15:45:03
To: Luke Scharf [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Confused by compressratio
On Mon, Apr 14, 2008 at 05:22:03PM -0400, Luke Scharf wrote:
Stuart Anderson wrote:
zfs list /export/compress
NAME USED AVAIL REFER MOUNTPOINT
export-cit/compress 90.4M 1.17T 90.4M /export/compress
The actual ratio is 2GB / 90.4M = 2048 / 90.4 = 22.65.
That still leaves me puzzled: what is the precise definition of compressratio?
My guess is that
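For what it's worth, my own reading (an assumption, not from the docs) is
roughly:

    compressratio ~= (uncompressed bytes written) / (compressed bytes stored)

counted only over blocks that actually pass through the compressor. All-zero
blocks are stored as holes and never reach it, which would explain a 1.00x
ratio on a zero-filled file.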
Maurice Volaski wrote:
Perhaps providing the computations rather than the conclusions would
be more persuasive on a technical list ;)
2 16-disk SATA arrays in RAID 5
2 16-disk SATA arrays in RAID 6
1 9-disk SATA array in RAID 5.
4 drive failures over 5 years. Of course, YMMV,
Stuart Anderson wrote:
As an artificial test, I created a filesystem with compression enabled
and ran mkfile 1g and the reported compressratio for that filesystem
is 1.00x even though this 1GB file uses only 1kB.
ZFS seems to treat files filled with zeroes as sparse files, regardless
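A minimal reproduction sketch (pool and dataset names hypothetical):

    zfs create -o compression=on tank/compress
    mkfile 1g /tank/compress/zeros        # mkfile writes nothing but zero blocks
    zfs get compressratio tank/compress   # reports 1.00x
    du -h /tank/compress/zeros            # ~1K: the zeros became holes, not compressed data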
Stuart Anderson wrote:
On Mon, Apr 14, 2008 at 09:59:48AM -0400, Luke Scharf wrote:
Stuart Anderson wrote:
As an artificial test, I created a filesystem with compression enabled
and ran mkfile 1g and the reported compressratio for that filesystem
is 1.00x even though this 1GB file
*Platform:*
* OpenSolaris snv79 on an older beige-box Intel x86
* Apple XRaid disk box, with 7 JBOD disks
* LSI FC controller -
http://www.lsi.com/storage_home/products_home/host_bus_adapters/fibre_channel_hbas/lsi7404eplc/index.html?remote=1&locale=EN
I'm running ZFS on a test server against a bunch of drives in an Apple
XRaid (configured in JBOD mode). It works pretty well, except that
when I yank one of the drives, ZFS hangs -- presumably, it's waiting
for a response from the XRaid.
Is there any way to set the device-failure
Richard Elling wrote:
In general, ZFS doesn't manage device timeouts. The lower
layer drivers do. The timeout management depends on which OS,
OS version, and HBA you use. A fairly extreme example may be
Solaris using parallel SCSI and the sd driver, which uses a default
timeout of 60
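If that 60-second default is the problem, the sd timeout is tunable; a hedged
/etc/system sketch (value in seconds; sd_io_time applies to every sd-driven
disk, so test before deploying):

    * Shorten the sd command timeout from the default of 60s.
    set sd:sd_io_time = 30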
Toby Thain wrote:
On 7-May-07, at 3:44 PM, [EMAIL PROTECTED] wrote:
Hi Lee,
You can decide whether you want to use ZFS for a root file system now.
You can find this info here:
http://opensolaris.org/os/community/zfs/boot/
Bearing in mind that his machine is a G4 PowerPC. When Solaris 10
Manoj Joseph wrote:
Brian Hechinger wrote:
After having set my desktop to install (to a pair of 140G SATA disks
that zfs is mirroring) at work, I was trying to skip the dump slice
since in this case, no, I don't really want it. ;)
Don't underestimate the usefulness of a dump device. You
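For the curious, inspecting and moving the dump device is quick (slice path
hypothetical):

    dumpadm                        # show the current dump configuration
    dumpadm -d /dev/dsk/c0t0d0s1   # point the dump device at a dedicated slice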
Toby Thain wrote:
On 25-Apr-07, at 12:17 PM, cedric briner wrote:
hello the list,
After reading the _excellent_ ZFS Best Practices Guide, I've seen in
the section "ZFS and Complex Storage Considerations" that we should
configure the storage system to ignore commands which will flush the
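On builds that have it, the host-side knob for those cache-flush commands is
zfs_nocacheflush; a hedged /etc/system sketch, safe only when every device in
the pool sits behind battery-backed NVRAM:

    * Stop ZFS issuing SYNCHRONIZE CACHE to devices.
    * Dangerous on plain disks: a power cut can lose committed writes.
    set zfs:zfs_nocacheflush = 1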
Anton B. Rang wrote:
This might be impractical for a large file system, of course. It might be
easier to have a 'zscavenge' that would recover data, where possible, from a
corrupted file system. But there should be at least one of these. Losing a
whole pool due to the corruption of a couple
Tim Foster wrote:
And is it possible to add 1 new disk to a raidz configuration
without backups and recreating the zpool from scratch?
You can add a disk to a raidz configuration, but then that makes a pool
containing 1 raidz + 1 additional disk in a dynamic stripe configuration
(which ZFS will
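A quick sketch of what that ends up looking like (device names hypothetical):

    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
    zpool add tank c1t3d0    # warns about a mismatched replication level; -f forces it
    zpool status tank        # now shows the raidz1 vdev plus a plain single-disk vdev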
Has anyone made the Promise Ultra133TX2 2-port PCI-IDE card work with
Solaris x86 11/06?
I've seen some references to the Ultra100TX2, but it doesn't seem to
refer to the version that I'm using.
Thanks,
-Luke
Frank Cusack wrote:
On March 1, 2007 12:19:22 AM -0800 Jeff Bonwick [EMAIL PROTECTED]
wrote:
import it. Assuming this works, you can fix the stupid boot archive
Thank you. I hate the boot archive. I have just spent MANY unnecessary
hours on some machines thanks to the stupid boot archive.
Dave Sneddon wrote:
Can anyone shed any light on whether the actual software side of this can
be achieved? Can I share my entire ZFS pool as a folder or network drive
so WinXP can read it? Will this be fast enough to read/write to at DV speeds
(25mbit/s)? Once the pool is set up and I have
David Magda wrote:
On Jan 30, 2007, at 09:52, Luke Scharf wrote:
Hey, I can take a double-drive failure now! And I don't even need
to rebuild! Just like having a hot spare with raid5, but without the
rebuild time!
Theoretically you want to rebuild as soon as possible, because running
David Magda wrote:
What about a rotating spare?
When setting up a pool a lot of people would (say) balance things
around buses and controllers to minimize single points of failure,
and a rotating spare could disrupt this organization, but would it be
useful at all?
Functionally, that
Luke Scharf wrote:
It sounded to me like he wanted to implement tripwire, but save some
time and CPU power by querying the checksumming-work that was already
done by ZFS.
Nevermind. The e-mail client that I chose to use broke up the thread,
and I didn't see that the issue had already been
Luke Scharf wrote:
The problem is that when I mount from a client, I can only mount if I
specify the IP address of the 1st network interface. If I use the 2nd or 3rd
interface (both also on the internal network), then I get the
following error:
kernel: nfs server 10.1.5.10:/xr7/group/ntnt
Luke Scharf wrote:
zfs set sharenfs='rw=.foo-internal.bar.edu insecure no_root_squash'
xr7/group/ntnt ; zfs share -a
Also, is this the proper syntax for the no_root_squash?
Thanks,
-Luke
Dale Ghent wrote:
On Sep 11, 2006, at 4:49 PM, Luke Scharf wrote:
Luke Scharf wrote:
zfs set sharenfs='rw=.foo-internal.bar.edu insecure
no_root_squash' xr7/group/ntnt ; zfs share -a
Also, is this the proper syntax for the no_root_squash?
no_root_squash is a Linux-ism.
What you're
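Presumably the reply was headed toward share_nfs's root= option, the usual
Solaris counterpart (that's my assumption; hostnames as in the original
example):

    zfs set sharenfs='rw=.foo-internal.bar.edu,root=.foo-internal.bar.edu' xr7/group/ntnt
    zfs share -a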
Neal Miskin wrote:
Your best bet is to run a replicated pool.
Thanks George, will do
BTW, is the best way to learn ZFS reading the ZFS Admin Guide, or does Sun run more in-depth courses?
I found the ZFS Administration Guide to be very-much worth the time I
spent reading
Ricardo Correia wrote:
Wow, congratulations, nice work!
I'm the one porting ZFS to FUSE, and seeing you make such fast progress is
very encouraging :)
I'd like to throw a me too into the pile of thank-you messages!
I spent part of the weekend expanding and manipulating a set of
Karen Chau wrote:
I understand Legato doesn't work with ZFS yet. I looked through the
email archives, cpio and tar were mentioned. What is my best option
if I want to dump approx 40G to tape?
Am I correct in saying that the issue was not getting the files to tape,
but properly storing
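If the goal is a stream that preserves ZFS-level metadata rather than a
per-file walk, a hedged sketch (dataset and tape device hypothetical):

    zfs snapshot tank/data@backup
    zfs send tank/data@backup > /dev/rmt/0n    # raw replication stream to tape
    # restore later with: zfs receive tank/restored < /dev/rmt/0n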
ZFS (Root)
Is there a zfs version command that I don't see?
Thanks,
-Luke
--
Luke Scharf
Virginia Tech Unix Administration Services
Terascale Computing Facility
George Wilson wrote:
Luke,
You can run 'zpool upgrade' to see what on-disk version you are capable
of running. If you have the latest features then you should be running
version 3:
hadji-2# zpool upgrade
This system is currently running ZFS version 3.
Unfortunately this
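The companion commands, for reference (pool name hypothetical):

    zpool upgrade -v     # list every on-disk version this build supports
    zpool upgrade tank   # upgrade pool 'tank' to the newest supported version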
Darren Reed wrote:
On Solaris,
pkginfo -l SUNWzfsr
would give you a package version for that part of ZFS..
and modinfo | grep zfs will tell you something about the kernel
module rev.
No such luck. Modinfo doesn't show the ZFS module as loaded; that's
probably because I'm not running
Darren J Moffat wrote:
But the
real question is how do you tell the admin "it's done, now the filesystem
is safe".
didn't compress (and with the current implementation it has to compress
a certain amount or it gets written uncompressed
David Abrahams wrote:
I've seen people wondering if ZFS was a scam because the claims just
seemed too good to be true. Given that ZFS *is* really great, I don't
think it would hurt to prominently advertise limitations like this one;
it would probably benefit credibility considerably, and it's a
David Dyer-Bennet wrote:
It's easy to corrupt the volume, though -- just copy random data over
*two* disks of a RAIDZ volume. Okay, you have to either do the whole
volume, or get a little lucky to hit both copies of some piece of
information before you get corruption. Or pull two disks out of
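Spelled out, the destructive experiment looks like this (hypothetical devices;
this destroys data, so never aim it at a pool you care about):

    dd if=/dev/urandom of=/dev/rdsk/c1t0d0s0 bs=1024k count=100
    dd if=/dev/urandom of=/dev/rdsk/c1t1d0s0 bs=1024k count=100
    zpool scrub tank
    zpool status -v tank   # expect permanent errors: raidz1 tolerates one
                           # damaged device, not two overlapping ones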