arc-discuss doesn't have anything specifically to do with ZFS;
in particular, it has nothing to do with the ZFS ARC. Just an
unfortunate overlap of acronyms.
Cross-posted to zfs-discuss, where this probably belongs.
Hey all!
Recently I've decided to implement OpenSolaris as a target for
This is not the appropriate group/list for this message.
Crossposting to zfs-discuss (where it perhaps primarily
belongs) and to cifs-discuss, which also relates.
Hi,
I have an I/O load issue and after days of searching
wanted to know if anyone has pointers on how to
approach this.
My
On 10/28/10 08:40 AM, Richard L. Hamilton wrote:
I have sharesmb=on set for a bunch of filesystems, including three
that weren't mounted. Nevertheless, all of those are advertised.
Needless to say, the ones that aren't mounted can't be accessed
remotely, even though, since advertised
I have sharesmb=on set for a bunch of filesystems, including three
that weren't mounted. Nevertheless, all of those are advertised.
Needless to say, the ones that aren't mounted can't be accessed
remotely, even though, since advertised, they look like they could be.
# zfs list -o
PS obviously these are home systems; in a real environment,
I'd only be sharing out filesystems with user or application
data, and not local system filesystems! But since it's just
me, I somewhat trust myself not to shoot myself in the foot.
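A quick way to spot the mismatch, for what it's worth (dataset names here
are made up; mounted and sharesmb are both ordinary zfs(1M) properties):

# anything showing mounted=no but sharesmb=on is advertised yet unreachable
zfs list -o name,mounted,sharesmb -r rpool
# mount whatever should have been mounted so the advertised shares work
zfs mount -a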
On Thu, Sep 30, 2010 at 08:14:24PM -0400, Miles
Nordin wrote:
Can the user in (3) fix the permissions from
Windows?
no, not under my proposal.
Then your proposal is a non-starter. Support for multiple remote
filesystem access protocols is key for ZFS and Solaris.
The
Hmm...according to
http://www.mail-archive.com/vbox-users-commun...@lists.sourceforge.net/msg00640.html
that's only needed before VirtualBox 3.2, or for IDE. >= 3.2, non-IDE should
honor flush requests, if I read that correctly.
Which is good, because I haven't seen an example of how to enable
Typically on most filesystems, the inode number of the root
directory of the filesystem is 2, 0 being unused and 1 historically
once invisible and used for bad blocks (no longer done, but kept
reserved so as not to invalidate assumptions implicit in ufsdump tapes).
However, my observation seems
Even the most expensive decompression algorithms generally run
significantly faster than I/O to disk -- at least when real disks are
involved. So, as long as you don't run out of CPU and have to wait for
CPU to be available for decompression, the decompression will win. The
same concept
Losing ZFS would indeed be disastrous, as it would leave Solaris with
only the Veritas File System (VxFS) as a semi-modern filesystem, and a
non-native FS at that (i.e. VxFS is a 3rd-party for-pay FS, which
severely inhibits its uptake). UFS is just way too old to be competitive
these
On Tue, 13 Jul 2010, Edward Ned Harvey wrote:
It is true there's no new build published in the
last 3 months. But you
can't use that to assume they're killing the
community.
Hmm, the community seems to think they're killing the
community:
never make it any better. Just for the record: Solaris 9 and 10 from Sun
were plain crap to work with, and still are inconvenient, conservative
stagnationware. They won't build free, cool tools
Everybody but geeks _wants_ stagnationware, if by that you mean
something that just runs. Even my old Sun
AFAIK, zfs should be able to protect against (if the pool is redundant), or at
least
detect, corruption from the point that it is handed the data, to the point
that the data is written to permanent storage, _provided_that_ the system
has ECC RAM (so it can detect and often correct random
I've googled this for a bit, but can't seem to find
the answer.
What does compression bring to the party that dedupe
doesn't cover already?
Thank you for your patience and answers.
That almost sounds like a classroom question.
Pick a simple example: large text files, of which each is
Another thought is this: _unless_ the CPU is the bottleneck on
a particular system, compression (_when_ it actually helps) can
speed up overall operation, by reducing the amount of I/O needed.
But storing already-compressed files in a filesystem with compression
is likely to result in wasted
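To make that concrete, here's the sort of experiment I'd try (dataset names
are made up; lzjb is the cheap default algorithm):

# enable compression on a dataset of text/log files
zfs set compression=lzjb tank/text
# after writing some representative data, see what it actually bought you
zfs get compressratio tank/text
# a dataset full of already-compressed files will sit near 1.00x - wasted effort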
[...]
To answer Richard's question, if you have to rename a pool during
import due to a conflict, the only way to change it back is to
re-import it with the original name. You'll have to either export the
conflicting pool, or (if it's rpool) boot off of a LiveCD which
doesn't use an rpool
One can rename a zpool on import
zpool import -f pool_or_id newname
Is there any way to rename it (back again, perhaps)
on export?
(I had to rename rpool in an old disk image to access
some stuff in it, and I'd like to put it back the way it
was so it's properly usable if I ever want to boot
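The round trip I have in mind looks roughly like this (the temporary pool
name is invented, and the rename-back has to be done from media that has
no rpool of its own):

# while booted from a LiveCD, find the old pool and give it back its name
zpool import                    # note the old pool's name or numeric id
zpool import -f oldrpool rpool  # re-import under the original name
zpool export rpool              # export cleanly; the name sticks from now on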
[...]
There is a way to do this kind of object-to-name mapping, though
there's no documented public interface for it. See the
zfs_obj_to_path() function and the ZFS_IOC_OBJ_TO_PATH ioctl.
I think it should also be possible to extend it to handle multiple
names (in case of multiple hardlinks) in
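Short of writing code against that ioctl, zdb can be coaxed into showing
the same mapping; this is an assumption on my part about zdb's output
(dataset name and object number below are made up):

# with enough -d's, zdb prints a "path" line for plain-file objects
zdb -ddddd tank/home 12345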
It might be nice if zfs list would check an environment variable for
a default list of properties to show (same as the comma-separated list
used with the -o option). If not set, it would use the current default list;
if set, it would use the value of that environment variable as the list.
I find
Just make 'zfs' an alias to your version of it. A
one-time edit
of .profile can update that alias.
Sure; write a shell function, and add an alias to it.
And use a quoted command name (or full path) within the function
to get to the real command. Been there, done that.
But to do a good job
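Something along these lines is what I mean; ZFS_LIST_OPTS is a made-up
variable name and this is only a sketch for .profile:

# wrap zfs(1M) so that "zfs list" picks up a default -o property list
zfs() {
    if [ "$1" = "list" ] && [ -n "$ZFS_LIST_OPTS" ]; then
        shift
        /usr/sbin/zfs list -o "$ZFS_LIST_OPTS" "$@"
    else
        /usr/sbin/zfs "$@"
    fi
}
ZFS_LIST_OPTS=name,used,avail,mountpoint,sharesmb; export ZFS_LIST_OPTS

It doesn't notice an explicit -o already on the command line, which is part
of why doing a good job takes more than this.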
FYI, the arc and arc-discuss lists or forums are not appropriate for this.
There are two arc acronyms:
* Architecture Review Committee (the arc list is for cases being considered;
  arc-discuss is for other discussion. Non-committee business is most
  unwelcome on the arc list.)
* the ZFS Adaptive Replacement Cache
Cute idea, maybe. But very inconsistent with the size in blocks (reported by
ls -dls dir).
Is there a particular reason for this, or is it one of those just for the heck
of it things?
Granted that it isn't necessarily _wrong_. I just checked SUSv3 for stat() and
sys/stat.h,
and it appears
Richard L. Hamilton rlha...@smart.net wrote:
Cute idea, maybe. But very inconsistent with the
size in blocks (reported by ls -dls dir).
Is there a particular reason for this, or is it one
of those just for the heck of it things?
Granted that it isn't necessarily _wrong_. I just
On Wed, 13 Aug 2008, Richard L. Hamilton wrote:
Reasonable enough guess, but no, no compression,
nothing like that;
nor am I running anything particularly demanding
most of the time.
I did have the volblocksize set down to 512 for
that volume, since I thought
that for the purpose
Hmm...my SB2K, 2GB RAM, 2x 1050MHz UltraSPARC III Cu CPU, seems
to freeze momentarily for a couple of seconds every now and then in
a zfs root setup on snv_90, which it never did with mostly ufs on snv_81;
that despite having much faster disks now (LSI SAS 3800X and a pair of
Seagate 1TB SAS
Are you using
set md:mirrored_root_flag=1
in /etc/system?
See the entry for md:mirrored_root_flag on
http://docs.sun.com/app/docs/doc/819-2724/chapter2-156?a=view
keeping in mind all the cautions...
I wonder if one couldn't reduce (but probably not eliminate) the likelihood
of this sort of situation by setting refreservation significantly lower than
reservation?
Along those lines, I don't see any property that would restrict the number
of concurrent snapshots of a dataset :-( I think that
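Concretely, I mean something like this (volume name and sizes are made up;
reservation covers the dataset including its snapshots, refreservation only
the live data):

zfs set reservation=20g tank/vol
zfs set refreservation=15g tank/vol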
On Sat, 7 Jun 2008, Mattias Pantzare wrote:
If I need to count usage I can use du. But if you can implement space
usage info on a per-uid basis you are not far from quota per uid...
That sounds like quite a challenge. UIDs are just numbers and new
ones can appear at any time.
btw: it seems to me that this thread is a little bit OT.
I don't think it's OT - because SSDs make perfect sense as ZFS log
and/or cache devices. If I did not make that clear in my OP then I
failed to communicate clearly. In both these roles (log/cache)
reliability is of the utmost
On Tue, Jun 10, 2008 at 11:33:36AM -0700, Wyllys Ingersoll wrote:
I'm running build 91 with ZFS boot. It seems that ZFS will not allow
me to add an additional partition to the current root/boot pool
because it is a bootable dataset. Is this a known issue that will be
fixed or a
I'm not even trying to stripe it across multiple
disks, I just want to add another partition (from the
same physical disk) to the root pool. Perhaps that
is a distinction without a difference, but my goal is
to grow my root pool, not stripe it across disks or
enable raid features (for now).
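For what it's worth, my understanding is that the restriction is on adding
a second top-level device; attaching a mirror to the root pool is allowed.
A sketch, with invented device names:

# adding another top-level vdev to a bootable pool is what gets rejected
zpool add rpool c1t0d0s4
# attaching a device as a mirror of the existing root slice is allowed
zpool attach rpool c1t0d0s0 c1t1d0s0
# (the new half still needs boot blocks: installboot on SPARC, installgrub on x86)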
I don't presently have any working x86 hardware, nor do I routinely work with
x86 hardware configurations.
But it's not hard to find previous discussion on the subject:
http://www.opensolaris.org/jive/thread.jspa?messageID=96790
for example...
Also, remember that SAS controllers can usually also
On Thu, Jun 05, 2008 at 09:13:24PM -0600, Keith
Bierman wrote:
On Jun 5, 2008, at 8:58 PM 6/5/, Brad Diggs
wrote:
Hi Keith,
Sure you can truncate some files but that
effectively corrupts
the files in our case and would cause more harm
than good. The
only files in our volume
If I read the man page right, you might only have to keep a minimum of two
on each side (maybe even just one on the receiving side), although I might be
tempted to keep an extra just in case; say near current, 24 hours old, and a
week old (space permitting for the larger interval of the last one).
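As a sketch of that rotation (pool, dataset, and host names invented), each
new incremental only needs the previous snapshot to still exist on both ends:

# take today's snapshot and ship it as an increment against yesterday's
zfs snapshot tank/data@today
zfs send -i tank/data@yesterday tank/data@today | ssh backuphost zfs receive tank/data
# only once it's safely received does the oldest snapshot you keep become expendable
zfs destroy tank/data@lastweek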
I encountered an issue that people using OS X systems as NFS clients
need to be aware of. While not strictly a ZFS issue, it may be
encountered most often by ZFS users since ZFS makes it easy to support
and export per-user filesystems. The problem I encountered was when
using ZFS to
[...]
That's not to say that there might not be other
problems with scaling to
thousands of filesystems. But you're certainly not
the first one to test it.
For cases where a single filesystem must contain
files owned by
multiple users (/var/mail being one example), old
fashioned
Hi All,
I'm new to ZFS but I'm intrigued by the possibilities
it presents.
I'm told one of the greatest benefits is that,
instead of setting
quotas, each user can have their own 'filesystem'
under a single pool.
This is obviously great if you've got 10 users but
what if you have
Hi list,
for Windows we use Ghost for system backup and recovery.
Can we do a similar thing for Solaris with ZFS?
I want to create an image and install it on another machine,
so that the personal configuration will not be lost.
Since I don't do Windows, I'm not familiar with ghost, but I gather
On Thu, 2008-06-05 at 15:44 +0800, Aubrey Li wrote:
for Windows we use Ghost for system backup and recovery.
Can we do a similar thing for Solaris with ZFS?
How about flar?
http://docs.sun.com/app/docs/doc/817-5668/flash-24?a=view
[ I'm actually not sure if it's supported for zfs root
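If it is supported, the usual flow is roughly this (archive name and path
are invented, and I haven't verified it against a zfs root):

# create a (compressed) flash archive of the running system
flarcreate -n mysystem -c /net/archiveserver/flar/mysystem.flar
# then point jumpstart / the installer at that archive on the other machine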
Nathan Kroenert wrote:
For what it's worth, I started playing with USB +
flash + ZFS and was
most unhappy for quite a while.
I was suffering with things hanging, going slow or
just going away and
breaking, and thought I was witnessing something
zfs was doing as I was
trying to
A Darren Dunham [EMAIL PROTECTED] writes:
On Tue, Jun 03, 2008 at 05:56:44PM -0700, Richard
L. Hamilton wrote:
How about SPARC - can it do zfs install+root yet,
or if not, when?
Just got a couple of nice 1TB SAS drives, and I
think I'd prefer to
have a mirrored pool where zfs owns
On Tue, Jun 03, 2008 at 05:56:44PM -0700, Richard L.
Hamilton wrote:
How about SPARC - can it do zfs install+root yet,
or if not, when?
Just got a couple of nice 1TB SAS drives, and I
think I'd prefer to
have a mirrored pool where zfs owns the entire
drives, if possible.
(I'd also
P.S. the ST31000640SS drives, together with the LSI SAS 3800x
controller (in a 64-bit 66MHz slot) gave me, using dd with
a block size of either 1024k or 16384k (1MB or 16MB) and a count
of 1024, a sustained read rate that worked out to a shade over 119MB/s,
even better than the nominal sustained
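For reference, the test itself was nothing fancier than raw-device reads
(the device path is whatever your disk shows up as):

dd if=/dev/rdsk/c2t0d0s2 of=/dev/null bs=1024k count=1024    # 1 GB in 1 MB reads
dd if=/dev/rdsk/c2t0d0s2 of=/dev/null bs=16384k count=1024   # 16 GB in 16 MB reads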
How about SPARC - can it do zfs install+root yet, or if not, when?
Just got a couple of nice 1TB SAS drives, and I think I'd prefer to
have a mirrored pool where zfs owns the entire drives, if possible.
(I'd also eventually like to have multiple bootable zfs filesystems in
that pool, corresponding
On Mon, May 19, 2008 at 10:06 PM, Bill McGonigle
[EMAIL PROTECTED] wrote:
On May 18, 2008, at 14:01, Mario Goebbels wrote:
I mean, if the Linux folks want it, fine. But
if Sun's actually
helping with such a possible effort, then it's
just shooting itself in
the foot here, in my
Dana H. Myers [EMAIL PROTECTED] wrote:
Bob Friesenhahn wrote:
Are there any plans to support ZFS for write-once media such as
optical storage? It seems that if mirroring or even raidz is used
that ZFS would be a good basis for long-term archival storage.
I'm just going to
So, I set utf8only=on and try to create a file with a
filename that is
a byte array that can't be decoded to text using
UTF-8. What's supposed
to happen? Should fopen(), or whatever syscall
'touch' uses, fail?
Should the syscall somehow escape utf8-incompatible
bytes, or maybe
replace
Hello,
I have just done comparison of all the above
filesystems
using the latest filebench. If you are interested:
http://przemol.blogspot.com/2008/02/zfs-vs-vxfs-vs-ufs-on-x4500-thumper.html
Regards
przemol
I would think there'd be a lot more variation based on workload,
such that
On Sat, 16 Feb 2008, Richard Elling wrote:
ls -l shows the length. ls -s shows the size,
which may be
different than the length. You probably want size
rather than du.
That is true. Unfortunately 'ls -s' displays in
units of disk blocks
and does not also consider the 'h' option
New, yes. Aware - probably not.
That users would create many filesystems, given cheap filesystems, was
an easy guess, but I somehow don't think anybody envisioned that users
would be creating tens of thousands of filesystems.
ZFS - too good for its own good :-p
IMO (and given mails/posts
Hello Marc,
Sunday, July 29, 2007, 9:57:13 PM, you wrote:
MB MC rac at eastlink.ca writes:
Obviously 7zip is far more CPU-intensive than
anything in use with ZFS
today. But maybe with all these processor cores
coming down the road,
a high-end compression system is just the thing
Bringing this back towards ZFS-land, I think that
there are some clever
things we can do with snapshots and clones. But the
age-old problem
of arbitration rears its ugly head. I think I could
write an option to expose
ZFS snapshots to read-only clients. But in doing so,
I don't see how
Victor Engle wrote:
Roshan,
As far as I know, there is no problem at all with
using SAN storage
with ZFS and it does look like you were having an
underlying problem
with either powerpath or the array.
Correct. A write failed.
The best practices guide on opensolaris does
Well, I just grabbed the latest SXCE, and just for the heck of it, fooled
around until I got the Java Web Start to work.
Basically, one's browser needs to know the following (how to do that depends
on the browser):
MIME Type: application/x-java-jnlp-file
File Extension: jnlp
Open With:
Intending to experiment with ZFS, I have been
struggling with what
should be a simple download routine.
Sun Download Manager leaves a great deal to be
desired.
In the Online Help for Sun Download Manager there's a
section on
troubleshooting, but if it causes *anyone* this much
I wish there was a uniform way whereby applications could
register their ability to achieve or release consistency on demand,
and if registered, could also communicate back that they had
either achieved consistency on-disk, or were unable to do so. That
would allow backup procedures to
I'd love to be able to serve zvols out as SCSI or FC
targets. Are
there any plans to add this to ZFS? That would be
amazingly awesome.
Can one use a spare SCSI or FC controller as if it were a target?
Even if the hardware is capable, I don't see what you describe as
a ZFS thing really; it
Well, no; his quote did say software or hardware. The theory is apparently
that ZFS can do better at detecting (and with redundancy, correcting) errors
if it's dealing with raw hardware, or as nearly so as possible. Most SANs
_can_ hand out raw LUNs as well as RAID LUNs, the folks that run them
# zpool create pool raidz d1 … d8
Surely you didn't create the zfs pool on top of SVM metadevices? If so,
that's not useful; the zfs pool should be on top of raw devices.
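That is, something along these lines, directly on the whole disks with no
SVM underneath (controller/target numbers invented):

zpool create pool raidz c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0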
Also, because VxFS is extent based (if I understand correctly), not unlike how
MVS manages disk space I might add, _it ought_
So you're talking about not just reserving something for on-disk compatibility,
but also maybe implementing these for Solaris? Cool. Might be fairly useful
for hardening systems (although as long as someone had raw device access,
or physical access, they could of course still get around it; that
I hope this isn't turning into a license flame war. But why do Linux
contributors not deserve the right to retain their choice of license
just as much as Sun, or any other copyright holder, does?
The anti-GPL kneejerk just witnessed on this list is
astonishing. The
BSD license, for
On Wed, Mar 28, 2007 at 06:55:17PM -0700, Anton B.
Rang wrote:
It's not defined by POSIX (or Solaris). You can
rely on being able to
atomically write a single disk block (512 bytes);
anything larger than
that is risky. Oh, and it has to be 512-byte
aligned.
File systems with
and does it vary by filesystem type? I know I ought to know the
answer, but it's been a long time since I thought about it, and
I must not be looking at the right man pages. And also, if it varies,
how does one tell? For a pipe, there's fpathconf() with _PC_PIPE_BUF,
but how about for a regular
_FIOSATIME - why doesn't zfs support this (assuming I didn't just miss it)?
Might be handy for backups.
Could/should zfs support a new ioctl, constrained if needed to files of
zero size, that sets an explicit (and fixed) blocksize for a particular
file? That might be useful for performance in
If I create a mirror, presumably if possible I use two or more identically
sized devices, since it can only be as large as the smallest. However, if
later I want to replace a disk with a larger one, and detach the mirror
(and anything else on the disk), replace the disk (and if applicable
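The mechanics I have in mind are roughly these (device names invented);
once both sides of the mirror are the larger size, the extra space becomes
usable:

# swap one half of the mirror for a bigger disk and let it resilver
zpool replace tank c1t2d0 c1t4d0
zpool status tank          # wait for the resilver to finish
# then do the other half
zpool replace tank c1t3d0 c1t5d0
# (depending on the build, an export/import may be needed before the space shows up)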
I hope there will be consideration given to providing compatibility with UFS
quotas (except that inode limits would be ignored). At least to the point of
having
edquota(1m)
quot(1m)
quota(1m)
repquota(1m)
rquotad(1m)
and possibly quotactl(7i) work with zfs (with the exception
...such that a snapshot (cloned if need be) won't do what you want?
What would a versioning FS buy us that cron + zfs snapshots doesn't?
Some people are making money on the concept, so I
suppose there are those who perceive benefits:
http://en.wikipedia.org/wiki/Rational_ClearCase
(I dimly remember DSEE on the Apollos; also some sort of
versioning file type on
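For the cron + snapshots side of that comparison, something as small as
this (dataset name and schedule made up) already gets you poor-man's
versioning:

# crontab entry: hourly snapshots named by timestamp (% must be escaped in crontab)
0 * * * * /usr/sbin/zfs snapshot tank/home@`date +\%Y\%m\%d-\%H\%M`
# plus an occasional job to destroy snapshots older than whatever you care to keep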
Are both of you doing a umount/mount (or export/import, I guess) of the
source filesystem before both first and second test? Otherwise, there might
still be a fair bit of cached data left over from the first test, which would
give the 2nd an unfair advantage. I'm fairly sure unmounting a
Filed as 6462690.
If our storage qualification test suite doesn't yet
check for support of this bit, we might want to get
that added; it would be useful to know (and gently
nudge vendors who don't yet support it).
Is either the test suite, or at least a list of what it tests
(which it