On Tue, Mar 31, 2009 at 01:16:42PM -0700, Matthew Ahrens wrote:
Robert Milkowski wrote:
Hello Matthew,
Excellent news.
Wouldn't it be better if logical disk usage were accounted rather than
physical? I mean, when compression is enabled, should quota be
accounted based on the logical file
On Tue, Mar 31, 2009 at 01:25:35PM -0700, Matthew Ahrens wrote:
quota case materials
These new properties are not printed by zfs get all, since that could
generate a huge amount of output, which would not be very well
organized. The new zfs userspace subcommand should be used instead.
Ah, I
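For context, a minimal sketch of the new interfaces (the dataset name tank/home and user alice are placeholders):

```shell
# Show per-user space consumption and quotas for one filesystem
zfs userspace tank/home

# Group-level equivalent
zfs groupspace tank/home

# The per-user properties can still be set and read individually:
zfs set userquota@alice=1G tank/home
zfs get userquota@alice,userused@alice tank/home
```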
On Wed, Mar 18, 2009 at 11:15:48AM -0400, Moore, Joe wrote:
Posix doesn't require the OS to sync() the file contents on close for
local files like it does for NFS access? How odd.
Why should it? If POSIX is agnostic as to system crashes / power
failures, then why should it say anything about
On Wed, Mar 18, 2009 at 11:43:09AM -0500, Bob Friesenhahn wrote:
In summary, I don't agree with you that the misbehavior is correct,
but I do agree that copious expensive fsync()s should be assured to
work around the problem.
fsync() is, indeed, expensive. Lots of calls to fsync() that are
On Wed, Mar 18, 2009 at 03:01:30PM -0400, Miles Nordin wrote:
IMHO the best reaction to the KDE hysteria would be to make sure
SQLite and BerkeleyDB are as fast as possible and effortlessly correct on
ZFS, and anything that's slow because of too much synchronous writing
I tried to do that for
On Fri, Mar 06, 2009 at 10:05:46AM -0700, Neil Perrin wrote:
On 03/06/09 08:10, Jim Dunham wrote:
A simple test I performed to verify this, was to append to a ZFS file
(no synchronous filesystem options being set) a series of blocks with a
block order pattern contained within. At some random
On Fri, Mar 06, 2009 at 03:10:41PM -0500, Jim Dunham wrote:
Wouldn't one have to quiesce (export) the pool on the primary before
importing it on the secondary?
No. ZFS is always on-disk consistent, so as long as SNDR is in logging
mode, zpool import will work on the secondary node.
As
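A hedged sketch of the failover step described above (the pool name tank is an example):

```shell
# On the secondary node, with SNDR in logging mode:
zpool import        # scan attached devices for importable pools
zpool import tank   # bring the replicated pool online

# The pool was never exported on the primary, so it may be flagged
# as potentially active on another host; -f overrides that check:
zpool import -f tank
```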
On Wed, Mar 04, 2009 at 02:16:53PM -0600, Wes Felter wrote:
T10 UNMAP/thin provisioning support in zvols
That's probably simple enough, and sufficiently valuable too.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On Wed, Mar 04, 2009 at 02:13:51PM -0700, Lisa Week wrote:
(pnfs-17-21:/home/lisagab):6 % zfs list -o name,type,used,avail,refer,mountpoint
NAME   TYPE        USED   AVAIL  REFER  MOUNTPOINT
rpool  filesystem  30.0G  37.0G  32.5K  /rpool
On Wed, Mar 04, 2009 at 03:49:54PM -0700, Lisa Week wrote:
My (humble) opinion is: Even though it is hard to tell if a dataset is
a filesystem or a zvol now, it doesn't mean we can't make it better...
Agreed.
On Tue, Mar 03, 2009 at 11:35:40PM +0200, C. Bergström wrote:
7) vdev evacuation as an upgrade path (which may depend or take
advantage of zfs resize/shrink code)
IIRC Matt Ahrens has said on this list that vdev evacuation/pool
shrinking is being worked on. So (7) would be duplication of
On Sat, Feb 28, 2009 at 09:45:12PM -0600, Mike Gerdts wrote:
On Sat, Feb 28, 2009 at 8:34 PM, Nicolas Williams
Right, but normally each head in a cluster will have only one pool
imported.
Not necessarily. Suppose I have a group of servers with a bunch of
zones. Each zone represents
On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote:
pool-shrinking (and an option to shrink disk A when i want disk B to
become a mirror, but A is a few blocks bigger)
This may be interesting... I'm not sure how often you need to shrink a
pool
though? Could this be
On Sat, Feb 28, 2009 at 05:19:26PM -0600, Mike Gerdts wrote:
On Sat, Feb 28, 2009 at 4:33 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
On Sat, Feb 28, 2009 at 10:44:59PM +0100, Thomas Wagner wrote:
pool-shrinking (and an option to shrink disk A when i want disk B to
become
On Fri, Feb 27, 2009 at 11:25:42PM -0500, Alastair Neil wrote:
I can tell it's been a while since I did this - forgot to uncomment the
correct lines in /etc/nfssec.conf
You're not the only one who gets tripped up by this. If you use
kclient(1M) then that won't happen, but preferably
On Wed, Feb 25, 2009 at 07:33:34PM -0500, Miles Nordin wrote:
You might also have a look at the, somewhat overcomplicated
w.r.t. database-running-snapshot backups, SQLite2 atomic commit URL
Toby posted:
http://sqlite.org/atomiccommit.html
That's for SQLite _3_, not 2.
Also, we don't
On Mon, Feb 23, 2009 at 10:05:31AM -0800, Christopher Mera wrote:
I recently read up on Scott Dickson's blog with his solution for
jumpstart/flashless cloning of ZFS root filesystem boxes. I have to say
that it initially looks to work out cleanly, but of course there are
kinks to be worked
On Tue, Feb 24, 2009 at 07:37:39PM +0100, Mattias Pantzare wrote:
On Tue, Feb 24, 2009 at 19:18, Nicolas Williams
nicolas.willi...@sun.com wrote:
When you snapshot a ZFS filesystem you get just that -- a snapshot at
the filesystem level. That does not mean you get a snapshot
On Tue, Feb 24, 2009 at 10:56:45AM -0800, Brent Jones wrote:
If you are writing a script to handle ZFS snapshots/backups, you could
issue an SMF command to stop the service before taking the snapshot.
Or at the very minimum, perform an SQL dump of the DB so you at least
have a consistent full
On Tue, Feb 24, 2009 at 01:17:47PM -0600, Nicolas Williams wrote:
I don't think there's any way to ask svc.config to pause.
Well, IIRC that's not quite right. You can pstop svc.startd, gently
kill (i.e., not with SIGKILL) svc.configd, take your snapshot, then prun
svc.startd.
Nico
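Sketching the sequence Nico outlines (untested; the snapshot target is a placeholder):

```shell
# Suspend svc.startd so it won't restart svc.configd
pstop $(pgrep -x svc.startd)

# Ask svc.configd to exit cleanly (not SIGKILL), closing the repository
pkill -x -TERM svc.configd

# Take the snapshot while the SMF repository is quiescent
zfs snapshot rpool/ROOT/s10@golden

# Resume svc.startd; svc.configd is restarted on demand
prun $(pgrep -x svc.startd)
```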
On Tue, Feb 24, 2009 at 02:53:14PM -0500, Miles Nordin wrote:
cm == Christopher Mera cm...@reliantsec.net writes:
cm it would be ideal to quiesce the system before a snapshot
cm anyway, no?
It would be more ideal to find the bug in SQLite2 or ZFS. Training
everyone, ``you always
On Tue, Feb 24, 2009 at 02:27:18PM -0600, Tim wrote:
On Tue, Feb 24, 2009 at 2:15 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
On Tue, Feb 24, 2009 at 01:17:47PM -0600, Nicolas Williams wrote:
I don't think there's any way to ask svc.config to pause.
Well, IIRC that's not quite
On Tue, Feb 24, 2009 at 12:19:22PM -0800, Christopher Mera wrote:
There are over 700 boxes deployed using Flash Archives on an S10 system
with a UFS root. We've been working on basing our platform on a ZFS
root and took Scott Dickson's suggestions
On Tue, Feb 24, 2009 at 03:25:53PM -0600, Tim wrote:
On Tue, Feb 24, 2009 at 2:37 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
Hot Backup?
# Connect to the database
sqlite3 db $dbfile
# Lock the database, copy and commit or rollback
if {[catch {db
On Mon, Feb 23, 2009 at 02:36:07PM -0800, Christopher Mera wrote:
panic[cpu0]/thread=dacac880: BAD TRAP: type=e (#pf Page fault)
rp=d9f61850 addr=1048c0d occurred in module zfs due to an illegal
access to a user address
Can you describe what you're doing with your snapshot?
Are you zfs
On Tue, Feb 24, 2009 at 03:08:18PM -0800, Christopher Mera wrote:
It's a zfs snapshot that's then sent to a file.
On the new boxes I'm doing a jumpstart install with the SUNWCreq
package, and using the finish script to mount an NFS filesystem that
contains the *.zfs dump files. Zfs receive
On Fri, Feb 13, 2009 at 10:29:05AM -0800, Frank Cusack wrote:
On February 13, 2009 1:10:55 PM -0500 Miles Nordin car...@ivy.net wrote:
fc == Frank Cusack fcus...@fcusack.com writes:
fc If you're misordering writes
fc isn't that a completely different problem?
no. ignoring the
On Fri, Feb 13, 2009 at 02:00:28PM -0600, Nicolas Williams wrote:
Ordering matters for atomic operations, and filesystems are full of
those.
Also, note that ignoring barriers is effectively as bad as dropping
writes if there's any chance that some writes will never hit the disk
because of, say
On Tue, Feb 10, 2009 at 12:31:05PM -0800, D. Eckert wrote:
(...)
You don't move a pool with 'zfs umount', that only unmounts a single zfs
filesystem within a pool, but the pool is still active. 'zpool export'
releases the pool from the OS, then 'zpool import' on the other machine.
(...)
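The move therefore looks like this (the pool name tank is an example):

```shell
# On the current host: release the whole pool
zpool export tank      # unmounts every filesystem and marks the pool exported

# On the other host:
zpool import           # list pools visible on the attached devices
zpool import tank      # import by name; all filesystems remount automatically
```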
On Mon, Feb 02, 2009 at 08:22:13AM -0600, Gary Mills wrote:
On Sun, Feb 01, 2009 at 11:44:14PM -0500, Jim Dunham wrote:
I wrote:
I realize that this configuration is not supported.
The configuration is supported, but not in the manner mentioned below.
If there are two (or more)
On Sun, Feb 01, 2009 at 04:26:13PM -0600, Gary Mills wrote:
I realize that this configuration is not supported. What's required
It would be silly for ZFS to support zvols as iSCSI LUNs and then say
you can put anything but ZFS on them. I'm pretty sure there's no such
restriction.
(That said,
On Thu, Jan 29, 2009 at 03:02:50PM -0800, Peter Reiher wrote:
Does ZFS currently support actual use of extended attributes? If so, where
can I find some documentation that describes how to use them?
See the runat(1) and openat(2) man pages, etc.
Nico
--
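A small worked example of those interfaces (file and attribute names are made up):

```shell
# Attach an extended attribute to a file; runat(1) runs a command
# with its working directory set to the file's attribute directory.
touch /tmp/demo
echo "some metadata" > /tmp/meta
runat /tmp/demo cp /tmp/meta myattr

# Inspect the attributes
runat /tmp/demo ls -l
runat /tmp/demo cat myattr

# ls -@ marks files that carry extended attributes
ls -@ /tmp/demo
```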
On Wed, Jan 28, 2009 at 09:32:23AM -0800, Frank Cusack wrote:
On January 28, 2009 9:24:21 AM -0800 Richard Elling
richard.ell...@gmail.com wrote:
Frank Cusack wrote:
i was wondering if you have a zfs filesystem that mounts in a subdir
in another zfs filesystem, is there any problem with
On Wed, Jan 28, 2009 at 09:07:06AM -0800, Frank Cusack wrote:
On January 28, 2009 9:41:20 AM -0600 Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Tue, 27 Jan 2009, Frank Cusack wrote:
i was wondering if you have a zfs filesystem that mounts in a subdir
in another zfs filesystem,
On Wed, Jan 28, 2009 at 02:11:54PM -0800, bdebel...@intelesyscorp.com wrote:
Recovering Destroyed ZFS Storage Pools.
You can use the zpool import -D command to recover a storage pool that has
been destroyed.
http://docs.sun.com/app/docs/doc/819-5461/gcfhw?a=view
But the OP destroyed a
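For reference, the recovery commands (the pool name tank is an example):

```shell
zpool import -D        # list pools that were destroyed but are still recoverable
zpool import -D tank   # undo the destroy and import the pool
zpool import -Df tank  # add -f if the pool appears active on another system
```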
On Mon, Jan 12, 2009 at 12:14:09PM -0500, David Briand wrote:
Is there a simple way to convert subdirectories in a ZFS pool into data
sets?
An inefficient and complicated, but fast way is to snapshot and clone
the dataset containing the directory in question, then remove everything
from the
On Mon, Jan 12, 2009 at 01:59:10PM -0500, Moore, Joe wrote:
Nicolas Williams wrote:
It'd be awesome to have a native directory-to-dataset conversion feature
in ZFS. And, relatedly, fast moves of files across datasets
in the same
volume. These two RFEs have been discussed to death
On Fri, Jan 09, 2009 at 12:13:17PM -0800, Richard Elling wrote:
Jerry K wrote:
It was rumored that Nevada build 105 would have ZFS encrypted file
systems integrated into the main source.
No ZFS crypto, but it has lofi crypto. You can use lofi for ZFS,
though. Perhaps that was
On Tue, Jan 06, 2009 at 01:27:41PM -0800, Peter Skovgaard Nielsen wrote:
ls -V file
----------+  1 root     root           0 Jan  6 22:15 file
     user:root:rwxpdDaARWcCos:------:allow
         everyone@:--------------:------:allow
Not bad at all. However, I contend that this
On Sat, Dec 27, 2008 at 02:29:58PM -0800, Ross wrote:
All of which sound like good reasons to use send/receive and a 2nd zfs
pool instead of mirroring.
Yes.
Send/receive has the advantage that the receiving filesystem is
guaranteed to be in a stable state. How would you go about recovering
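A minimal sketch of that send/receive scheme (pool and snapshot names are examples):

```shell
# Initial full replication to a second pool
zfs snapshot tank/data@mon
zfs send tank/data@mon | zfs receive backup/data

# Subsequent runs send only the delta since the common snapshot
zfs snapshot tank/data@tue
zfs send -i tank/data@mon tank/data@tue | zfs receive backup/data
```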
On Thu, Dec 18, 2008 at 07:32:33AM -0800, Pedro Lobo wrote:
I've just installed OpensSolaris 2008.11 and among the great features
(zfs being THE feature) I'm finding some minor annoyances. One of them
is that I can't create a zfs filesystem with an accented character in
its name (the encoding
On Wed, Dec 17, 2008 at 10:02:18AM -0800, Ross wrote:
In fact, thinking about it, could this be more generic than just a USB
backup service?
Absolutely.
The tool shouldn't need to know that the backup disk is accessed via
USB, or whatever. The GUI should, however, present devices
On Thu, Dec 18, 2008 at 07:05:44PM +, Ross Smith wrote:
Absolutely.
The tool shouldn't need to know that the backup disk is accessed via
USB, or whatever. The GUI should, however, present devices
intelligently, not as cXtYdZ!
Yup, and that's easily achieved by simply prompting
On Thu, Dec 18, 2008 at 07:55:14PM +, Ross Smith wrote:
On Thu, Dec 18, 2008 at 7:11 PM, Nicolas Williams
nicolas.willi...@sun.com wrote:
I was thinking more something like:
- find all disk devices and slices that have ZFS pools on them
- show users the devices and pool names
On Thu, Dec 18, 2008 at 12:57:54PM -0800, Richard Elling wrote:
Nicolas Williams wrote:
Device names are, but there's no harm in showing them if there's
something else that's less variable. Pool names are not very variable
at all.
I was thinking of something a little different. Don't
On Wed, Dec 17, 2008 at 12:05:50AM -0800, Ross wrote:
Thinking about it, I think Darren is right. An automatic send/receive to the
external drive may be preferable, and it sounds like it has many advantages:
You forgot *the* most important advantage of using send/recv instead of
mirroring as
On Wed, Dec 17, 2008 at 08:51:54AM -0800, Niall Power wrote:
What serious compat issues ? There has been one and
only one
incompatible change in the stream format and that
only impacted really
really early (before S10 FCS IIRC) adopters.
Here are the issues that I am aware of:
-
On Mon, Dec 15, 2008 at 01:36:46PM -0600, Bob Friesenhahn wrote:
On Mon, 15 Dec 2008, Ross Smith wrote:
I'm not sure I follow how that can happen, I thought ZFS writes were
designed to be atomic? They either commit properly on disk or they
don't?
Yes, this is true. One reason why
On Mon, Dec 15, 2008 at 05:04:03PM -0500, Miles Nordin wrote:
As Tim said, the one-filesystem-per-user thing is not working out.
For NFSv3 clients that truncate MOUNT protocol answers (and v4 clients
that still rely on the MOUNT protocol), yes, one-filesystem-per-user is
a problem. For NFSv4
On Fri, Dec 12, 2008 at 01:52:54PM -0600, Gary Mills wrote:
On Fri, Dec 12, 2008 at 04:30:51PM +1300, Ian Collins wrote:
No matter how good your SAN is, it won't spot a flaky cable or bad RAM.
Of course it will. There's an error-checking protocol that runs over
the SAN cable. Memory will
On Fri, Dec 12, 2008 at 05:31:37PM -0500, Miles Nordin wrote:
nw If you can fully trust the SAN then there's no reason not to
nw run ZFS on top of it with no ZFS mirrors and no RAID-Z.
The best practice I understood is currently to use zpool-layer
redundancy especially with SAN even
On Thu, Dec 11, 2008 at 04:46:33PM -0700, Mark Shellenbaum wrote:
Mark Shellenbaum wrote:
You should probably make sure that you just don't keep continually
adding the same entry over and over again to the ACL. With NFSv4 ACLs
you can insert the same entry multiple times and if you keep
On Fri, Dec 12, 2008 at 12:04:39AM +, Robert Milkowski wrote:
Slightly off-topic, but only slightly.
With ZFS I tend to configure /var/cores as a separate zfs file system
with a quota set on it + coreadm configured that way so all cores go
to /var/cores.
This is especially useful with
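That setup might look like this (the quota size and core-name pattern are examples):

```shell
# A dedicated dataset with a quota, so runaway core dumps can't fill /var
zfs create -o quota=2G -o mountpoint=/var/cores rpool/cores

# Route all cores there with an informative name (%f = executable, %p = pid)
coreadm -g /var/cores/core.%f.%p -e global
```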
On Thu, Dec 11, 2008 at 09:54:36PM -0800, Richard Elling wrote:
I'm not really sure what you mean by split responsibility model. I
think you will find that previous designs have more (blind?) trust in
the underlying infrastructure. ZFS is designed to trust, but verify.
I think he means ZFS w/
On Wed, Dec 10, 2008 at 11:40:16AM -0600, Tim wrote:
On Wed, Dec 10, 2008 at 10:51 AM, Jay Anderson [EMAIL PROTECTED] wrote:
I have many large zfs filesystems on Solaris 10 servers that I would like
to upgrade to OpenSolaris so the filesystems can be shared using the CIFS
Service (I'm
On Wed, Dec 10, 2008 at 12:46:40PM -0600, Gary Mills wrote:
On the server, a variety of filesystems can be created on this virtual
disk. UFS is most common, but ZFS has a number of advantages over
UFS. Two of these are dynamic space management and snapshots. There
are also a number of
On Wed, Dec 10, 2008 at 11:13:21AM -0800, Jay Anderson wrote:
The casesensitivity option is just like utf8only and normalization: it
can only be set at creation time. The result of attempting to change
it on an existing filesystem:
# zfs set casesensitivity=mixed pool0/data1
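Since the property is creation-time only, the workaround is to make a new dataset and copy the data across (names are examples):

```shell
# Create a sibling dataset with the desired setting
zfs create -o casesensitivity=mixed pool0/data1-mixed

# Copy the data over, then swap mountpoints as needed
cp -rp /pool0/data1/. /pool0/data1-mixed/
```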
On Wed, Dec 10, 2008 at 01:30:30PM -0600, Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 12:46:40PM -0600, Gary Mills wrote:
On the server, a variety of filesystems can be created on this virtual
disk. UFS is most common, but ZFS has a number of advantages over
UFS. Two
On Wed, Dec 10, 2008 at 12:58:48PM -0800, Richard Elling wrote:
Nicolas Williams wrote:
But note that the setup you describe puts ZFS in no worse a situation
than any other filesystem.
Well, actually, it does. ZFS is susceptible to a class of failure modes
I classify as kill the canary
On Wed, Dec 10, 2008 at 02:08:28PM -0800, John Smith wrote:
When I create a volume I am unable to mount it locally. I'm pretty sure
it has something to do with the other volumes in the same ZFS pool
being shared out as iSCSI LUNs. For some reason ZFS thinks the base
volume is iSCSI. Is there a
On Tue, Dec 09, 2008 at 09:09:15AM +0100, [EMAIL PROTECTED] wrote:
When I switch away from a session where programs are producing sound
what should happen is this: a) those programs continue to operate, b)
but they don't produce actual sound until I switch back to that VT (and
unlock the
On Sun, Dec 07, 2008 at 03:20:01PM -0600, Brian Cameron wrote:
Thanks for the information. Unfortunately, using chmod/chown does not
seem a workable solution to me, unless I am missing something. Normally
logindevperm(4) is used for managing the ownership and permissions of
device files
On Mon, Dec 08, 2008 at 02:22:01PM -0600, Brian Cameron wrote:
That said, I don't see why di_devperm_login() couldn't stomp all over
the ACL too. So you'll need to make sure that di_devperm_login()
doesn't stomp over the ACL, which will probably mean running an ARC case
and updating the
On Mon, Dec 08, 2008 at 03:27:49PM -0600, Brian Cameron wrote:
Once VT is enabled in the Xserver and GDM, users can start multiple
graphical logins with GDM. So, if a user logs into the first graphical
Ah, right, I'd forgotten this.
login, they get the audio device. Then you can use VT
On Mon, Dec 08, 2008 at 04:46:37PM -0600, Brian Cameron wrote:
Is there a shortcomming in VT here?
I guess it depends on how you think VT should work. My understanding
is that VT works on a first-come-first-served basis, so the first user
who calls logindevperm interfaces gets permission.
On Tue, Nov 25, 2008 at 11:55:17AM +0100, [EMAIL PROTECTED] wrote:
My idea is simply to allow the pool to continue operation while
waiting for the drive to fault, even if that's a faulty write. It
just means that the rest of the operations (reads and writes) can keep
working for the minute
On Tue, Oct 21, 2008 at 05:50:09AM -0700, Marcelo Leal wrote:
If I have many small files (smaller than 128K), wouldn't I waste
time reading 128K? And after ZFS has allocated an FSB of 64K, for
example, if that file gets bigger, ZFS will use 64K blocks, right?
ZFS uses the smallest
On Wed, Oct 22, 2008 at 04:46:00PM -0400, Miles Nordin wrote:
I thought NFSv2 - NFSv3 was supposed to make this prestoserv, SSD,
battery-backed DRAM stuff not needed for good performance any more. I
guess not though.
There are still a number of operations in NFSv3 and NFSv4 which the
client
On Wed, Oct 22, 2008 at 11:05:09PM +0200, Kees Nuyt wrote:
[Default] On Tue, 21 Oct 2008 15:43:08 -0400, Bill Sommerfeld
[EMAIL PROTECTED] wrote:
On Mon, 2008-10-20 at 16:57 -0500, Nicolas Williams wrote:
I've a report that the mismatch between SQLite3's default block size
and ZFS
On Wed, Oct 22, 2008 at 04:31:43PM -0500, Nicolas Williams wrote:
On Wed, Oct 22, 2008 at 11:05:09PM +0200, Kees Nuyt wrote:
Just a remark:
Increasing the SQLite page_size while keeping the same
[default_]cache_size will effectively increase the amount of memory
allocated to the SQLite
On Mon, Oct 20, 2008 at 04:57:22PM -0500, Nicolas Williams wrote:
I've a report that the mismatch between SQLite3's default block size and
ZFS' causes some performance problems for Thunderbird users.
It'd be great if there was an API by which SQLite3 could set its block
size to match
On Tue, Oct 21, 2008 at 03:43:08PM -0400, Bill Sommerfeld wrote:
On Mon, 2008-10-20 at 16:57 -0500, Nicolas Williams wrote:
I've a report that the mismatch between SQLite3's default block size and
ZFS' causes some performance problems for Thunderbird users.
I was seeing a severe
I've a report that the mismatch between SQLite3's default block size and
ZFS' causes some performance problems for Thunderbird users.
It'd be great if there was an API by which SQLite3 could set its block
size to match the hosting filesystem or where it could set the DB file's
record size to
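Absent such an API, the mismatch can be narrowed from either side today (values are examples; note PRAGMA page_size only takes effect for a new database, or after a VACUUM):

```shell
# Shrink the dataset's recordsize before the DB files are created
zfs set recordsize=8K tank/home

# Or raise SQLite's page size toward the filesystem block size
sqlite3 places.sqlite 'PRAGMA page_size = 8192; VACUUM;'
```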
On Thu, Oct 16, 2008 at 12:20:36PM -0700, Marion Hakanson wrote:
I'll chime in here with feeling uncomfortable with such a huge ZFS pool,
and also with my discomfort of the ZFS-over-ISCSI-on-ZFS approach. There
just seem to be too many moving parts depending on each other, any one of
which
On Thu, Oct 16, 2008 at 04:30:28PM -0400, Miles Nordin wrote:
nw == Nicolas Williams [EMAIL PROTECTED] writes:
nw But does it work well enough? It may be faster than NFS if
You're talking about different things. Gray is using NFS period
between the storage cluster and the compute
On Mon, Oct 06, 2008 at 05:38:33PM -0400, Brian Hechinger wrote:
On Sun, Oct 05, 2008 at 11:30:54PM -0500, Nicolas Williams wrote:
There have been threads about adding a feature to support slow mirror
devices that don't stay synced synchronously. At least IIRC. That
would help
On Sun, Oct 05, 2008 at 09:07:31PM -0400, Brian Hechinger wrote:
On Sat, Oct 04, 2008 at 10:37:26PM -0700, Chris Greer wrote:
I'm not sure I could survive a crash of both nodes, going to try and
test some more.
Ok, so taking my idea above, maybe a pair of 15K SAS disks in those
boxes so
On Thu, Oct 02, 2008 at 12:46:59PM -0500, Bob Friesenhahn wrote:
On Thu, 2 Oct 2008, Ahmed Kamal wrote:
What is the real/practical possibility that I will face data loss during
the next 5 years for example ? As storage experts please help me
interpret whatever numbers you're going to throw,
On Tue, Sep 30, 2008 at 09:54:04PM -0400, Miles Nordin wrote:
ok, I get that S3 went down due to corruption, and that the network
checksums I mentioned failed to prevent the corruption. The missing
piece is: belief that the corruption occurred on the network rather
than somewhere else.
On Wed, Oct 01, 2008 at 01:12:08PM -0400, Miles Nordin wrote:
pt == Peter Tribble [EMAIL PROTECTED] writes:
pt I think the term is mirror mounts.
he doesn't need them---he's using the traditional automounter, like we
all used to use before this newfangled mirror mounts baloney.
Oh
On Wed, Oct 01, 2008 at 12:22:56PM -0500, Tim wrote:
- This will mainly be used for NFS sharing. Everyone is saying it will have
bad performance. My question is, how bad is bad? Is it worse than a
plain Linux server sharing NFS over 4 SATA disks, using a crappy 3ware raid
card with
On Wed, Oct 01, 2008 at 01:30:45PM +0100, Peter Tribble wrote:
On Wed, Oct 1, 2008 at 3:42 AM, Douglas R. Jones [EMAIL PROTECTED] wrote:
Any ideas?
Well, I guess you're running Solaris 10 and not OpenSolaris/SXCE.
I think the term is mirror mounts. It works just fine on my SXCE boxes.
On Wed, Oct 01, 2008 at 11:54:55AM -0600, Robert Thurlow wrote:
Miles Nordin wrote:
sounds
like they are not good enough though, because unless this broken
router that Robert and Darren saw was doing NAT, yeah, it should not
have touched the TCP/UDP checksum.
I believe we proved that
On Tue, Sep 30, 2008 at 06:09:30PM -0500, Tim wrote:
On Tue, Sep 30, 2008 at 6:03 PM, Ahmed Kamal
[EMAIL PROTECTED] wrote:
BTW, for everyone saying zfs is more reliable because it's closer to the
application than a netapp, well at least in my case it isn't. The solaris
box will be NFS
On Tue, Sep 30, 2008 at 08:54:50PM -0500, Tim wrote:
As it does in ANY fileserver scenario, INCLUDING zfs. He is building a
FILESERVER. This is not an APPLICATION server. You seem to be stuck on
this idea that everyone is using ZFS on the server they're running the
application. That does a
On Wed, Sep 10, 2008 at 06:35:49PM -0700, Paul B. Henson wrote:
I'd appreciate any feedback, particularly about things that don't work
right :).
I bet you think it'd be nice if we had a public equivalent of
_getgroupsbymember()...
Even better if we just had utility functions to do ACL
On Thu, Sep 11, 2008 at 10:36:38AM -0700, Paul B. Henson wrote:
On Thu, 11 Sep 2008, Nicolas Williams wrote:
I bet you think it'd be nice if we had a public equivalent of
_getgroupsbymember()...
Indeed, that would be useful in numerous contexts. It would be even nicer
if the appropriate
On Thu, Aug 28, 2008 at 11:29:21AM -0500, Bob Friesenhahn wrote:
Which of these do you prefer?
o System waits substantial time for devices to (possibly) recover in
order to ensure that subsequently written data has the least
chance of being lost.
o System immediately
On Thu, Aug 28, 2008 at 01:05:54PM -0700, Eric Schrock wrote:
As others have mentioned, things get more difficult with writes. If I
issue a write to both halves of a mirror, should I return when the first
one completes, or when both complete? One possibility is to expose this
as a tunable,
On Fri, Aug 15, 2008 at 08:15:56PM -0600, Mark Shellenbaum wrote:
We are currently investigating adding more functionality to libsec to
provide many of the things you desire. We will have iterators, editing
capabilities and so on.
I'm still ironing out a design/architecture document. I'll
On Wed, Aug 06, 2008 at 02:23:44PM -0400, Will Murnane wrote:
On Wed, Aug 6, 2008 at 13:57, Miles Nordin [EMAIL PROTECTED] wrote:
If that's really the excuse for this situation, then ZFS is not
``always consistent on the disk'' for single-VDEV pools.
Well, yes. If data is sent, but
On Wed, Aug 06, 2008 at 03:44:08PM -0400, Miles Nordin wrote:
re == Richard Elling [EMAIL PROTECTED] writes:
c If that's really the excuse for this situation, then ZFS is
c not ``always consistent on the disk'' for single-VDEV pools.
re I disagree with your assessment. The
On Thu, Jul 31, 2008 at 01:07:20PM -0500, Paul Fisher wrote:
Stephen Stogner wrote:
True we could have all the syslog data be directed towards the host but the
underlying issue remains the same with the performance hit. We have used
nfs shares for log hosts and mail hosts and we are
[OT, I know.]
On Fri, Jul 25, 2008 at 07:14:09PM +0200, Justin Vassallo wrote:
Meanwhile, I had to permit root login (obviously disabled passwd auth;
PasswordAuthentication no; PAMAuthenticationViaKBDInt no).
Why obviously?
I think instead you may just want to:
PermitRootLogin
On Sat, Jun 28, 2008 at 12:58:31AM +0300, Mertol Ozyoney wrote:
Ability to mount snapshots somewhere else. [this doesn't look easy, perhaps
a proxy kind of setup?]
Snapshots are available through .zfs/snapshot/snapshot-name.
Snapshots are read-only. They can be cloned to create read-write
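Concretely (dataset and snapshot names are examples):

```shell
# Snapshots are already visible, read-only, under .zfs:
ls /tank/home/.zfs/snapshot/yesterday/

# A clone is a writable dataset backed by the snapshot:
zfs clone tank/home@yesterday tank/home-restore
```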
On Fri, Jun 06, 2008 at 10:42:45AM -0500, Bob Friesenhahn wrote:
On Fri, 6 Jun 2008, Brian Hechinger wrote:
On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
- as separate filesystems, they have to be separately NFS mounted
I think this is the one that gets under my
On Fri, Jun 06, 2008 at 07:37:18AM -0400, Brian Hechinger wrote:
On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
- as separate filesystems, they have to be separately NFS mounted
I think this is the one that gets under my skin. If there would be a
way to merge a
On Fri, Jun 06, 2008 at 08:51:13PM +0200, Mattias Pantzare wrote:
2008/6/6 Richard Elling [EMAIL PROTECTED]:
I was going to post some history of scaling mail, but I blogged it instead.
http://blogs.sun.com/relling/entry/on_var_mail_and_quotas
The problem with that argument is that 10.000
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
I expect that mirror mounts will be coming Linux's way too.
They should already have them:
http://blogs.sun.com/erickustarz/en_US/entry/linux_support_for_mirror_mounts
Even better.
On Thu, Jun 05, 2008 at 01:40:21PM -0500, Bob Friesenhahn wrote:
On Thu, 5 Jun 2008, Richard Elling wrote:
Nathan Kroenert wrote:
I'd expect it's the old standard.
if /var/tmp is filled, and that's part of /, then bad things happen.
Such as? If you find a part of Solaris that