The ZFS discuss list was getting heavily spammed, which meant I had to
spend the first 15 minutes of every day sifting through notifications
and rejecting the messages. The current policy for zfs-discuss is
to reject any non-member mail, though all *.sun.com addresses are
Casper == Casper Dik [EMAIL PROTECTED] writes:
Casper How can I configure it to reject non-member mail?
Go to the Mailman list admin page. The control is under Privacy
options -> Sender filters (Action to take for postings from
non-members...).
Never noticed these options; thanks
Can I take an existing Solaris 10 box, current on patches but not
installed from a Solaris Express install disk with ZFS, and add ZFS
support to it?
Not at this time.
- Update 2 needs to be released
- You may need to upgrade to update 2
Though I've heard it said that they want
[EMAIL PROTECTED] wrote:
Robert Milkowski wrote:
But only if compression is turned on for a filesystem.
Of course, and the default is off.
However I think it would be good to have an API so applications can
decide what to compress and what not.
I agree that an API would be good. However I
On 6/19/06, Eric Schrock [EMAIL PROTECTED] wrote:
Simply because we erred on the side of caution. The fewer metacharacters,
the better. It's easy to change if there's enough interest.
You may want to change that since many applications including KDE use
'+' to encode paths (replacing blanks
Also, options such as -nomtime and -noctime have been introduced
alongside -noatime in some free operating systems to limit the amount
of metadata that gets written back to disk.
Those seem rather pointless. (mtime and ctime generally imply other
changes, often to the inode; atime does not)
Processes like ssh-agent that do not need their identities may drop
them. An exploit of these processes then cannot exploit the fact that
the euid/groups of the process allow some file operations that are
denied to everyone else. Only files that are globally
readable/writable/executable may still
On Thu, Jun 22, 2006 at 01:01:38AM +0200, [EMAIL PROTECTED] wrote:
I'm not sure if I like the name, then; nor the emphasis on the
euid/egid (as those terms are not commonly used in the kernel;
there's a reason why the effective uid was cr->cr_uid and not cr_euid.
In other words, what your
Concerning the reopen problem of files created in world writable
directories:
One may use the following algorithm:
First compute the permissions of the newly created file.
For every permission granted to the user or group, check whether the
corresponding identity-privilege is set. If not,
Yes, world readable/writable files can still be accessed after dropping
the new privileges. One reason is library calls that need to read some
public files (like things in /etc). The need to manipulate or remove
world writable files is harder to justify; on the other hand, world
writable
Another concern would be: what UID owns files created by such processes?
I don't think it could be anything other than the current euid;
otherwise it is too easy to create files under a different uid.
For non-basic privs we can always do things with the client's root
credential and, when
Are VIA processor chips 64-bit capable yet?
No, I don't think so.
And Geode?
Casper
On Thu, 2006-06-22 at 10:55, [EMAIL PROTECTED] wrote:
To me, a PRIV_OBJECT_MODIFY which is required for any file modifying
operation would seem to be more useful as often a read-only user is
a worthwhile thing to have; perhaps mirrored with a PRIV_OBJECT_ACCESS
in case you want to prevent any
AMD Geodes are 32-bit only. I haven't heard any mention that they will
_ever_ be 64-bit. But, honestly, this and the Via chip aren't really
ever going to be targets for Solaris. That is, they simply aren't (any
substantial) part of the audience we're trying to reach with Solaris x86.
I'm
Saturating 100Mbit with a 64-bit CPU and redundant disks for $300-400 Pounds
may be tough.
Anything on the market can saturate 100Mbit easily, even with a single
cheap IDE disk. The disks are generally a factor of 5-10 faster than the
100Mbit network (100Mbit/s is roughly 12MB/s; even a cheap IDE disk
streams well above that).
Casper
That's the dilemma, the array provides nice features like RAID1 and
RAID5, but those are of no real use when using ZFS.
RAID5 is not a nice feature when it breaks.
A RAID controller cannot guarantee that all bits of a RAID5 stripe
are written when power fails; then you have data corruption
This is getting pretty picky. You're saying that ZFS will detect any
errors introduced after ZFS has gotten the data. However, as stated
in a previous post, that doesn't guarantee that the data given to ZFS
wasn't already corrupted.
But there's a big difference between the time ZFS gets
Depends on your definition of firmware. In higher end arrays the data is
checksummed when it comes in and a hash is written when it gets to disk.
Of course this is nowhere near end-to-end but it is better than nothing.
The checksum is often stored with the data (so if the data is not
yeah, thought of that, but we put some structure in ages ago
to get around the possible problems with thousands of entries
in one directory - so we have /export/home/NN/username
where NN is a 2 digit number.
I don't think there's any way to specify an automount map
with multiple levels in it.
I understand the copy-on-write thing. That was very well illustrated in
"ZFS: The Last Word in File Systems" by Jeff Bonwick.
But if every block is its own RAID-Z stripe, and the block is lost, how
does ZFS recover the block?
You should perhaps not take block literally; the block is written
Currently, I'm using executable maps to create zfs
home directories.
Casper
Casper, anything you can share with us on that? Sounds interesting.
It's really very lame:
Add to /etc/auto_home as last entry:
+/etc/auto_home_import
And install /etc/auto_home_import as executable script:
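(The script itself is cut off in this excerpt. The following is only a
sketch of what such an executable map could look like -- the automounter
runs an executable indirect map with the looked-up key (the username) as
$1 and expects a map entry on stdout, or no output for unknown keys. The
pool layout, quota, and getent check are illustrative assumptions, not
necessarily the actual script.)

  #!/bin/sh
  # Hypothetical /etc/auto_home_import
  key="$1"
  fs="export/home/$key"

  # Only act for real users; print nothing for unknown keys.
  getent passwd "$key" > /dev/null || exit 0

  # Create the ZFS home filesystem on first reference.
  if ! zfs list "$fs" > /dev/null 2>&1; then
          zfs create "$fs" && zfs set quota=10g "$fs"
  fi

  # Emit the map entry: loopback-mount the local path.
  printf '%s\n' "-fstype=lofs :/$fs"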
Casper;
Does this mean it would be a good practice to, say, increase the amount of
memory and/or swap space we usually recommend if the customer intends to
use ZFS very heavily?
Memory is always good; but it is *virtual* memory (address space) which
matters most.
The 32-bit kernel only has a
Darren J Moffat wrote:
Steven Sim wrote:
Casper;
Does this mean it would be a good practice to, say, increase the amount
of memory and/or swap space we usually recommend if the customer
intends to use ZFS very heavily?
ZFS doesn't necessarily use more memory (physical or virtual) than
You'll also note that there's a line saying "Stopping because process
dumped core" which we shouldn't ignore, IMO.
In case this is a Sun-supported config (s10u2 indicates as much), please
file a case :-)
This looks like the rpcgen issue where the list is encoded using a recursive
rather than
Bart Smaalders wrote:
How much swap space is configured on this machine?
Zero. Is there any reason I would want to configure any swap space?
Yes.
In this particular case:
total: 213728k bytes allocated + 8896k reserved = 222624k used, 11416864k available
you have 9MB of reserved memory
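(For reference, that total: line is the summary format printed by the
Solaris swap command; a quick way to look at both views:

  swap -s    # summary: allocated + reserved = used, available
  swap -l    # list the configured swap devices and their sizes
)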
We've kind of sidetracked, but yes, I do understand the limitations of
running without swap. However, in the interest of performance, I, and in
fact my whole organization which runs about 300 servers, disable swap.
We've never had an out of memory problem in the past because of kernel
Are you trying to convince me that having applications/application data
occasionally swapped out to disk is actually faster than keeping it all
in memory?
Yes. Having more memory available generally causes the
system to be faster.
I have another box, which I LU'd to U1 a while ago. Its
Hi All,
I have looked on the HCL list for Sol 10 x86 without much luck. I am
looking for a 8 or 16 port SATA card for a JBOD Sol 10 x86 ZFS
installation. Anyone know of one that is well supported in Sol 10? I
am starting to do some testing with an LSI Logic 320-XLP SATA RAID card,
but so far
So what does this exercise leave me thinking? Is Linux 2.4.x really
screwed up in NFS-land? This Solaris NFS replaces a Linux-based NFS
server that the clients (linux and IRIX) liked just fine.
Yes; the Linux NFS server and client work together just fine but generally
only because the Linux
Right, but I never had this speed problem when the NFS server was
running Linux on hardware that had the quarter of the CPU power and
half the disk i/o capacity that the new Solaris-based one has.
So either Linux's NFS client was more compatible with the bugs in
Linux's NFS server and ran
Hi,
[EMAIL PROTECTED] cat /etc/release
Solaris Nevada snv_33 X86
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 06 February 2006
I have zfs running
I have the same with my home install server. As a dirty solution I
set mount-at-boot to no for the lofs filesystems, to get the system up.
But with every new OS added by JET the mount at reboot reappears.
The question seems to be when a lofs filesystem should be mounted at boot.
When does a
I believe that add_install_client [with a -b option?] is what is
creating my vfstab entries. I haven't had reboot issues until
overnight (a system move), and I have been doing PXE boot of some x64
systems only recently, i.e. since the most recent power failure.
Install images are being put
Any mkdir in a builds directory on a shared build machine would be
very cool, because then every user/project automatically gets a ZFS
filesystem.
Why map it to mkdir rather than using zfs create? Because mkdir means
it will work over NFS or CIFS.
NFS will be fairly difficult because you
Please elaborate: CIFS just requires the automount hack.
CIFS currently accesses the files through the local file system, so
it can invoke the automounter and use tricky maps there.
Casper
Casper Dik,
Yes, I am familiar with Bonwick's slab allocators and tried them
for a wirespeed test of 64-byte pieces, first on 1Gb, then 100Mb,
and lastly 10Mb Ethernet. My results were not
encouraging. I assume it has improved over time.
Nothing which tries to send 64 byte
the remaining, now (aside from subdirectories) empty directories are
removed silently and successfully. And this is exactly okay when using
the -depth option only, because this guarantees the right directory
traversal, where the exec is applied only on the leaves first and
afterwards on
On 10/5/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Unmount all the ZFS filesystems and check the permissions on the mount
points and the paths leading up to them.
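(Spelled out as commands; the mount-point paths here are only examples:

  zfs unmount -a                  # unmount all ZFS filesystems
  ls -ld /tank /tank/home         # inspect the underlying mount points
  chmod 755 /tank /tank/home      # fix them up if needed
  zfs mount -a                    # mount everything again
)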
I experienced the same problem and narrowed it down to this:
essentially, chdir(..) in rm -rf failed to ascend out of the
directory.
Brian Hechinger wrote:
On Thu, Oct 05, 2006 at 11:19:19AM -0700, David Dyer-Bennet wrote:
On 10/5/06, Jeremy Teo [EMAIL PROTECTED] wrote:
What would a versioning FS buy us that cron + zfs snapshots don't?
Finer granularity; no chance of missing a change.
TOPS-20 did this, and it was
zpool fork -p poolname -n newpoolname [devname ...]
Create the new exported pool newpoolname from poolname by detaching
one side from each mirrored vdev, starting with the
device names listed on the command line. Fails if the pool does not
consist exclusively of mirror
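(A hypothetical invocation of this proposed subcommand, just to make the
syntax concrete -- note that 'zpool fork' does not actually exist:

  # Detach one half of each mirror in 'tank', preferring the named
  # devices, and turn those halves into a new exported pool 'tank2'.
  zpool fork -p tank -n tank2 c3t0d0 c3t1d0
)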
On 11/11/06, Bart Smaalders [EMAIL PROTECTED] wrote:
It would seem useful to separate the user's data from the system's data
to prevent problems with losing mail, log file data, etc, when either
changing boot environments or pivoting root boot environments.
I'll be more concerned about the
Previously I wrote:
I still don't like forcing ZFS on people, though; I've found that ZFS
does not work on 1GB SPARC systems; I found that a rather high lower limit.
(Whenever the NFS find runs over the zpool, the system hangs)
It appears that this is a regression in build 52 or 51, I filed
This is a problem since how can anyone use ZFS on a PC??? My motherboard
is a newly minted AM2 w/ all the latest firmware. I disabled boot
detection on the SATA channels and it still refuses to boot. I had to
purchase an external SATA enclosure to fix the drives. This seems to me
to be a
I suspect a lack of an MBR could cause some BIOS implementations to
barf ..
Why?
Zeroed disks don't have that issue either.
What appears to be happening is more that raid controllers attempt
to interpret the data in the EFI label as the proprietary
hardware raid labels. At least, it seems
While other file systems, when they become corrupt, allow you to
salvage data :-)
They allow you to salvage what you *think* is your data.
But in reality, you have no clue what the disks are giving you.
Casper
On Dec 9, 2006, at 8:59, Jim Mauro wrote:
Anyway... I'm feeling rather naive here, but I've seen the "NFS
enforced synchronous semantics" phrase kicked around many times as the
explanation for suboptimal performance for metadata-intensive
operations when ZFS is the underlying file
Bill Sommerfeld wrote:
Similarly, the bulk of the synchronous I/O done during the import of SMF
manifests early in boot after an install or upgrade is wasted effort.
I've done hundreds of installs. Empirically, my observation is that
the SMF manifest import scales well with processors. In
Darren J Moffat wrote:
I think we need 5 distinct places to set the policy:
1) On file delete
This would be a per dataset policy.
The bleaching would happen in a new transaction group
created by the one that did the normal deletion, and would
run only if the original
After importing some pools after a re-install of the OS, I hit that
"Permission denied" problem. I figured out I could unmount, chmod, and
mount to fix it but that wouldn't be a good situation on a production
box. Is there any way to fix this problem without unmounting?
NFS share the
On 12/29/06, Eric Schrock [EMAIL PROTECTED] wrote:
On Fri, Dec 29, 2006 at 11:23:30PM +0100, Holger Berger wrote:
So the goal is to allow infinite nesting?
That would be my guess, based on the fact that disallowing the opposite
is effectively impossible.
I guess it may be possible by
Basically ZFS passes all my tests (about 3000). I see one problem with UFS
and two differences:
That's good; do you have those tests published anywhere?
1. link(2) manual page states that privileged processes can make
multiple links to a directory. This looks like a general comment, but
Link with the target being a directory and the source any file, or
only directories? And only as superuser?
I'm sorry, I meant unlink(2) here.
Ah, so symmetrical with link(2) to directories.
unlink(2) doesn't always work and rmdir(2) will not remove empty directories
with a link count other
I think removing the ability to use link(2) or unlink(2) on directories
would hurt no-one and would make a few things easier.
I'd be rather careful here; see the standards implications drafted in
4917742.
The standard gives permission to disallow unlink() on directories:
The path
However, it gets interesting when SVID3 comes into play:
snip
The link(BA_OS) and unlink(BA_OS) descriptions in SVID3 both specify that
a process with appropriate privileges is allowed to operate on a
directory.
We have claimed to conform to SVID3 since Solaris 2.0 and have not
Hmmm, so there is lots of evictable cache here (mostly in the MFU
part of the cache)... could you make your core file available?
I would like to take a look at it.
Isn't this just like:
6493923 nfsfind on ZFS filesystem quickly depletes memory in a 1GB system
Which was introduced in b51 (or 52)
Is there some reason why a small read on a raidz2 is not statistically very
likely to require I/O on only one device? Assuming a non-degraded pool of
course.
ZFS stores its checksums for RAIDZ/RAIDZ2 in such a way that all disks must be
read to compute and
verify the checksum.
But why
So actually I misspoke slightly; rather than all disks, I should
have said all data disks.
In practice this has the same effect: No more than one read may be
processed at a time.
But aren't short blocks sometimes stored on only a subset of disks?
Casper
On Thu, Jan 11, 2007 at 11:52:19AM +, Darren J Moffat wrote:
Fabian W??rner wrote:
I am thinking of having Solaris and Mac OS 10.5 on the same machine and
mounting the same filesystem at a different point on each OS.
Is/will it be possible, or do I have to use symlinks?
You can NOT mount the same ZFS
We had a 2TB filesystem. No matter what options I set explicitly, the
UFS filesystem kept getting written with a 1 million file limit.
Believe me, I tried a lot of options, and they kept getting set back
on me.
The limit is documented as 1 million inodes per TB. So something
must not have gone
That said, this definition is not always used consistently, as is the case
with the x2100. I filed a bug against the docs in this case, and unfortunately
it was closed as will not fix. :-(
In the context of a hardware platform it makes little sense to
distinguish between hot-plug and hot-swap.
What I gather from this is that today, SATA drives will either look like IDE
drives or SCSI drives, to some extent. When they look like IDE drives, you
don't get all of the cfgadm or luxadm management options and you have to do
things like hot plug in a more-rather-than-less manual mode. When
Is there a BIOS update for the Ultra 20 to make it understand EFI?
Understanding EFI is perhaps asking too much; but I believe the
latest BIOS no longer hangs/crashes when it encounters EFI labels
on disks it examines (all disks it probes).
Casper
Is there some way to synchronously mount a ZFS filesystem?
'-o sync' does not appear to be honoured.
What does that mean? None of the Solaris filesystems support
a 'sync' option.
What exactly do you want the sync option to do?
Casper
Actually, it was meant to hold the entire electronic transcript of the
George Bush impeachment proceedings ... we were thinking ahead.
Fortunately, larger disks became available in time.
Casper
I have an 800GB raidz2 zfs filesystem. It already has approx 142GB of data.
Can I simply turn on compression at this point, or do you need to start
with compression at creation time? If I turn on compression now, what
happens to the existing data?
Yes. Nothing.
Casper
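(To make that concrete: compression is an ordinary dataset property, and
enabling it only affects blocks written from then on; the existing 142GB
stays uncompressed until it is rewritten. The dataset name below is
illustrative:

  zfs set compression=on tank/data
  zfs get compression,compressratio tank/data
)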
Bryan Cantrill wrote:
well, Thumper is actually a reference to Bambi
You'd have to ask Fowler, but certainly when he coined it, Bambi was the
last thing on anyone's mind. I believe Fowler's intention was one that
thumps (or, in the unique parlance of a certain Commander-in-Chief,
one that
Well Solaris SAS isn't there yet but anyway just found some interesting
high density SAS/SATA enclosures.
http://xtore.com/product_list.asp?cat=JBOD
The XJ 2000 is like the x4500 in that it holds 48 drives, however with
the XJ 2000 2 drives are on each carrier and you can get to them from
the
The other reason is that the machine has been around for years, already
using UFS and quotas extensively. Over winter break we had time to
upgrade to Solaris 10 and migrate the volume from svm to zvol, but not
much more. There are a few thousand users on the machine. The thought of
On 27-Jan-07, at 10:15 PM, Anantha N. Srirama wrote:
... ZFS will not stop alpha particle induced memory corruption
after data has been received by server and verified to be correct.
Sadly I've been hit with that as well.
My brother points out that you can use a rad hardened CPU. ECC
Take note though, that giving zfs the entire disk gives a possible
performance win, as zfs will only enable the write cache for the disk
if it is given the entire disk.
really?
why this?
In the old days, Sun never enabled the write cache on devices because
of reliability issues. (Sun SCSI
Ok, I'll bite. It's been a long day, so that may be why I can't see
why the radioisotopes in lead that was dug up 100 years ago would be
any more depleted than the lead that sat in the ground for the
intervening 100 years. Half-life is half-life, no?
Now if it were something about the
I understand all the math involved with RAID 5/6 and failure rates,
but it's wise to remember that even if the probabilities are small
they aren't zero. :)
And after 3-5 years of continuous operation, you better decommission the
whole thing or you will have many disk failures.
Casper
Hello zfs-discuss,
I've patched a U2 system to 118855-36. Several ZFS-related bug IDs
should be covered between -19 and -36, like HotSpare support.
However, despite -36 being installed, 'zpool upgrade' still claims only
v1 and v2 support. Also there's no zfs promote, etc.
/kernel/drv/zfs is
Looks like 124205-04 is needed.
While I can see it on SunSolve, smpatch doesn't show it.
Also many ZFS bugs listed in 124205-04 are also listed in 118855-36 while
it looks like only 124205-04 is actually covering them and provides
necessary binaries.
Something is messed up with -36.
Sometimes
Many thanks for answering my question. Hopefully my noisy X4200
will be installed in the data centre tomorrow (Thursday); I had
a setback today while fighting with the Remote Console feature
of ILOM 1.1.1 (i.e., it doesn't work). :-(
Just ssh into it and use the serial console from within
With the CPU overhead imposed by ZFS's block checksumming, the CPU was
heavily loaded in a large sequential write test that I ran. Turning off
the checksum greatly reduced the CPU load.
Obviously, this traded reliability for CPU cycles.
What hardware platform
That's how I usually use the console on the X4200. However, that
arrangement doesn't work when one wants to (re)install Solaris.
Unless there's a way of telling the installer to use the serial
console while booting from DVD, rather than using the GUI?
I thought there were a grub use ttya and
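(Presumably this refers to forcing the console to the serial port from
the install media's GRUB menu; a sketch, with details varying by media
revision: edit the GRUB entry and append a console property to the
kernel line, e.g.

  -B console=ttya

so the installer comes up on ttya instead of the graphical console.)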
On 11/2/07 3:04, Ian Collins [EMAIL PROTECTED] wrote:
Rayson Ho wrote:
Interesting...
http://www.rhic.bnl.gov/RCF/LiaisonMeeting/20070118/Other/thumper-eval.pdf
I wonder where they got the information that Solaris 10 doesn't support
dual-core Intel from?
Does OpenSolaris or Solaris
I cannot let you say that.
Here in my company we are very interested in ZFS, but we do not care
about the RAID/mirror features, because we already have a SAN with
RAID-5 disks, and dual fabric connection to the hosts.
But you understand that these underlying RAID mechanisms give absolutely
no
Peter Tribble wrote:
On 3/23/07, Mark Shellenbaum [EMAIL PROTECTED] wrote:
The original plan was to allow the inheritance of owner/group/other
permissions. Unfortunately, during ARC reviews we were forced to remove
that functionality, due to POSIX compliance and security concerns.
What
I'd tend to disagree with that. POSIX/SUS does not guarantee data makes
it to disk until you do an fsync() (or open the file with the right flags,
or other techniques). If an application REQUIRES that data get to disk,
it really MUST DTRT.
Indeed; want your data safe? Use:
Thanks for clarifying! Seems I really need to check the apps with truss or
dtrace to see if they use that sequence. Allow me one more question: why
is fflush() required prior to fsync()?
When you use stdio, you need to make sure the data is in the
system buffers prior to calling fsync.
fclose()
What is slow here is mounting all those FS at boot and
unmounting at shutdown. The most relevant project here in my
mind is:
6478980 zfs should support automount property
which would give ZFS a mount on demand behavior. Fast boot/shutdown
and fewer mounted FS at any one time.
Tricky
From: Darren J Moffat [EMAIL PROTECTED]
...
The other problem is that you basically need a global unique registry
anyway so that compress algorithm 1 is always lzjb, 2 is gzip, 3 is
etc etc. Similarly for crypto and any other transform.
I've two thoughts on that:
1) if there is to be
On Mon, May 21, 2007 at 06:21:40PM -0500, Albert Chin wrote:
On Mon, May 21, 2007 at 06:11:36PM -0500, Nicolas Williams wrote:
On Mon, May 21, 2007 at 06:09:46PM -0500, Albert Chin wrote:
But still, how is tar/SSH any more multi-threaded than tar/NFS?
It's not that it is, but that NFS
You're right of course and lots of people use them. My point is that
Solaris has been 64 bits longer than most others. I think AIX got
64 bits after Solaris and Linux (via Alpha) did.
Irix was 64-bit near the same time as Solaris, but the end of Irix
is visible. Did they port i
Depend on the guarantees. Some RAID systems have built in block
checksumming.
But we all know that block checksums stored with the blocks do
not catch a number of common errors.
(Ghost writes, misdirected writes, misdirected reads)
Casper
On Fri, 2007-05-25 at 15:50 -0600, Lori Alt wrote:
Mike Dotson wrote:
On Fri, 2007-05-25 at 14:29 -0600, Lori Alt wrote:
Would help in many cases where an admin needs to work on a system but
doesn't need, say, 20k users' home directories mounted, to do this work.
So single-user
devils_advocate
So how are you guaranteeing NFS server and automount with autofs are up,
running and working for the user for console-login.
/devils_advocate
Irrelevant; chances are that when someone boots a system (e.g., laptop
or desktop) he/she is sitting there waiting at the console until
Mike Dotson [EMAIL PROTECTED] wrote:
Create 20k zfs file systems and reboot. Console login waits for all the
zfs file systems to be mounted (fully loaded 880, you're looking at
about 4 hours so have some coffee ready).
Does this mean we will get quotas for ZFS in the future?
We need it
What I personally do for ZFS loopback mounts, such as required
for /tftpboot/I86PC.Solaris_11 on install server, is making
them into auto_direct mounts.
OK - I know this is entirely obvious to you (Casper) - but can you
provide more detail for those who are not lucky enough to work on
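(A sketch of what such a direct-map entry could look like; the source
path is illustrative. In /etc/auto_direct:

  /tftpboot/I86PC.Solaris_11  -fstype=lofs  :/export/install/nv/boot

with the usual

  /-  auto_direct

line present in /etc/auto_master so that direct maps are consulted.)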
After one aborted ufsrestore followed by some cleanup I tried
to restore again, but this time ufsrestore faltered with:
bad filesystem block size 2560
The reason was this return value for the stat of . of the
filesystem:
8339: stat(., 0xFFBFF818) = 0
8339:
Oh, I see, this is bug 6479267: st_size (struct stat) is unreliable in
ZFS. Any word on when the fix will be out?
It's a bug in scandir (obviously) and it is filed as such.
Does scandir fail on zfs because of this, or does scandir need to
reallocate and does it use the size as a first-order
What was the reason to make ZFS use directory sizes as the number of
entries rather than the way other Unix filesystems use it? I fear that
several more of the 700 open source packages we've ported to our hosts
are going to exhibit this problem.
It's a choice as good as any.
The scandir
On 9/6/07 10:01, Eric Schrock [EMAIL PROTECTED] wrote:
On Sat, Jun 09, 2007 at 01:56:35PM -0700, Ed Ravin wrote:
I encountered the problem in NetBSD's scandir(), when reading off
a Solaris NFS fileserver with ZFS filesystems. I've already filed a
bug report with NetBSD. They were using
Maybe some additional pragmatism is called for here. If we want NFS
over ZFS to work well for a variety of clients, maybe we should set
st_size to larger values..
+1; let's teach the admins to do st_size /= 24 mentally :-)
Casper
I believe we should rather educate other people that st_size/24 is a bad
solution.
That's all well and good but fixing all clients, including potentially
really old ones, might not be feasible. Being correct doesn't help
our customers.
Casper