Hi,
apologies for posting a fishworks-related question here, but I don't know of a
better place (please tell me if that exists).
Can anyone say anything about (planned) options to stretch out a Storage 7000
series cluster for longer distances than what eSAS allows (preferably for more
than
Hi,
I have just observed the following issue and I would like to ask if it is
already known:
I'm using zones on ZFS filesystems which were cloned from a common template
(which is itself an original filesystem). A couple of weeks ago, I did a pkg
image-update, so all zone roots got cloned
BTW, this was on snv_111b - sorry I forgot to mention.
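For anyone wanting to picture the layout, it is roughly this (dataset names are
placeholders, a sketch rather than my exact configuration). The template is an
ordinary filesystem that gets snapshotted once, and every zone root is a clone
of that snapshot:
# zfs snapshot rpool/zones/template@gold
# zfs clone rpool/zones/template@gold rpool/zones/zone1
# zfs clone rpool/zones/template@gold rpool/zones/zone2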
Hi Eric and all,
Eric Schrock wrote:
On Nov 3, 2009, at 6:01 AM, Jürgen Keil wrote:
I think I'm observing the same (with changeset 10936) ...
# mkfile 2g /var/tmp/tank.img
# zpool create tank /var/tmp/tank.img
# zfs set dedup=on tank
# zfs create tank/foobar
This has to do
Well, then you could have more logical space than physical space
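A quick way to see that on the throwaway pool from above (file names are only
for illustration): write the same data twice and compare what the dataset
reports with what the pool has actually allocated:
# cp /usr/dict/words /tank/foobar/copy1
# cp /usr/dict/words /tank/foobar/copy2
# zfs list tank/foobar
# zpool list tank
# zpool get dedupratio tank
With dedup on, the dataset's used/referenced space counts both copies, while
the pool's allocated space grows by roughly one copy only.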
Reconsidering my own question, it seems to me that the question of space
management is probably more fundamental than I had initially thought, and I
assume members of the core team will have thought through much of it.
I
Hi David,
simply can't stand up to reality.
I kind of dislike the idea of talking about naivety here.
Maybe it was a poor choice of words; I mean something more along the lines
of simplistic. The point is, space is no longer as simple a concept
as it was 40 years ago. Even without
Hi Adam,
thank you for your precise statement. Even if only from an engineering
standpoint, this is the kind of argument I was expecting (and hoping
for).
I'm not sure what would lead you to believe that there is a fork between
the open source / OpenSolaris ZFS and what we have in
Hi Bob,
Regarding my bonus question: I haven't yet found a definite answer as to whether
there is a way to read the currently active controller setting. I
still assume that the nvsram settings which can be read with
service -d arrayname -c read -q nvsram region=0xf2 host=0x00
do not necessarily
Hi,
I am trying to find out some definite answers on what needs to be done on an STK
2540 to set the Ignore Cache Sync option. The best I could find is Bob's Sun
StorageTek 2540 / ZFS Performance Summary (Dated Feb 28, 2008, thank you, Bob),
in which he quotes a posting of Joel Miller:
To
Hi Bob and all,
I should update this paper since the performance is now radically
different and the StorageTek 2540 CAM configurables have changed.
That would be great; I think you'd do the community (and Sun, probably) a big
favor.
Is this information still current for F/W 07.35.44.10?
Hi Bob and all,
So this sounds like we need to wait for someone to come up with a definite
answer.
I've received some helpful information on this:
Byte 17 is for Ignore Force Unit Access.
Byte 18 is for Ignore Disable Write Cache.
Byte 21 is for Ignore Cache Sync.
Change ALL settings to 1
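So presumably the write side mirrors the read command quoted earlier in the
thread; this is an assumption by analogy (the offsets are just bytes 17/18/21
in hex), so please double-check against the CAM service CLI documentation
before touching a production array:
# service -d arrayname -c set -q nvsram region=0xf2 offset=0x11 value=0x01 host=0x00
# service -d arrayname -c set -q nvsram region=0xf2 offset=0x12 value=0x01 host=0x00
# service -d arrayname -c set -q nvsram region=0xf2 offset=0x15 value=0x01 host=0x00
(0x11 = byte 17, Ignore Force Unit Access; 0x12 = byte 18, Ignore Disable Write
Cache; 0x15 = byte 21, Ignore Cache Sync.)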
I should add that I have quite a lot of datasets:
and maybe I should also add that I'm still running an old zpool version in order
to keep the ability to boot snv_98:
aggis:~$ zpool upgrade
This system is currently running ZFS pool version 14.
The following pools are out of date, and can
Hi Neil and all,
thank you very much for looking into this:
So I don't know what's going on. What is the typical call stack for those
zil_clean() threads?
I'd say they are all blocking on their respective CVs:
ff0009066c60 fbc2c0300 0 60 ff01d25e1180
PC:
Hi All,
out of curiosity: Can anyone come up with a good idea about why my snv_111
laptop computer should run more than 1000 zil_clean threads?
ff0009a9dc60 fbc2c0300 tq:zil_clean
ff0009aa3c60 fbc2c0300 tq:zil_clean
ff0009aa9c60
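For anyone who wants to count these on their own box, something like this
should do it (assuming mdb -k is usable as root):
# echo ::threadlist | mdb -k | grep -c zil_clean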
Hi,
yesterday, my backup zpool on two USB drives failed with USB errors (I don't know
whether connecting my iPhone played a role) while scrubbing the pool. This led to
all I/O on the zpool hanging, including df, zpool and zfs commands.
init 6 would also hang due to bootadm hanging:
process id
Hi All,
over the last couple of weeks, I had to boot my rpool from various physical
machines because some component on my laptop mainboard blew up (you know that
burned electronics smell?). I can't retrospectively document all I did, but I am
sure I recreated the boot-archive, ran
Hi Miles and All,
this is off-topic, but as the discussion has started here:
Finally, *ALL THIS IS COMPLETELY USELESS FOR NFS* because L4 hashing
can only split up separate TCP flows.
The reason why I have spent some time with
Hi,
in nfs-discuss, Andrew Watkins has brought up the question of why an inheritable
ACE is split into two ACEs when a descendant directory is created.
Ref:
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zfs_acl.c#1506
I must admit that I had observed this
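For reference, the behaviour Andrew describes can be reproduced like this
(paths and the principal are placeholders, a minimal sketch):
# chmod A+user:webservd:read_data/write_data:file_inherit/dir_inherit:allow /export/parent
# mkdir /export/parent/child
# ls -dV /export/parent/child
The single inheritable ACE on the parent shows up on the new directory as two
entries: one that is effective on the directory itself, and one inherit-only
entry carrying the inheritance flags for its future children.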
Well done, Nathan, thank you for taking on the additional effort to write it all up.
Hi Eric and all,
Can anyone point me in the right direction here? Much appreciated!
I have worked on a similar issue this week.
Though I have not worked through all the information you have provided, could
you please try the settings and source code changes I posted here:
If you run id username on the box, does it show the user's
secondary groups?
id never shows secondary groups.
Use id -a
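For illustration (user and group names are made up):
$ id someuser
uid=1001(someuser) gid=10(staff)
$ id -a someuser
uid=1001(someuser) gid=10(staff) groups=10(staff),14(sysadmin)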
Nils
Hi Graham,
(this message was posted on opensolaris-bugs initially; I am CC'ing and
reply-to'ing zfs-discuss as it seems to be a more appropriate place to discuss
this.)
I'm surprised to see that the status of bug 6592835 hasn't moved beyond "yes,
that's a problem".
My understanding is that the
Jürgen,
In a snoop I see that, when the access(2) fails, the NFS client gets
a "Stale NFS file handle" response, which gets translated to an
ENOENT.
What happens if you use the noac NFS mount option on the client?
I'd not recommend to use it for production environments unless you really need
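If you do want to try it, the option goes on the mount (server and paths are
placeholders):
# mount -F nfs -o noac server:/export/home /mnt
or into the corresponding /etc/vfstab entry. Disabling attribute caching
generates a lot of extra GETATTR traffic, which is why I would keep it out of
production.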
Before re-inventing the wheel, does anyone have a nice shell script to do this
kind of thing (to be executed from cron)?
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_11
Tim,
- Frequent snapshots, taken every 15 minutes, keeping the 4 most recent
- Hourly snapshots taken once every hour, keeping 24
- Daily snapshots taken once every 24 hours, keeping 7
- Weekly snapshots taken once every 7 days, keeping 4
- Monthly snapshots taken on the first day of
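The core of each of these policies boils down to the same two steps; here is a
minimal cron-style sketch of the "frequent" one for a single dataset (tank/home
and the snapshot prefix are placeholders):
# zfs snapshot tank/home@frequent-`date +%Y-%m-%d-%H%M`
# zfs list -H -t snapshot -o name -S creation -r tank/home | \
      grep '@frequent-' | tail +5 | \
      while read snap; do zfs destroy "$snap"; done
The first line runs every 15 minutes; the second keeps only the four newest
snapshots with that prefix and destroys the rest.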
Wade,
that order. Also, I guess the use case in my mind would leave a desktop user
more likely to need access to a few minutes, hours or days ago than 12
months ago.
You are guessing that, but I am a desktop user who'd rather like the contrary.
I think Tim has already stated that he would not
Hi Darren,
http://www.opensolaris.org/jive/thread.jspa?messageID=271983#271983
The case mentioned there is one where concatenation in vdevs would be
useful.
That case appears to be about trying to get a raidz sized properly
against disks of different sizes. I don't see a similar issue
See
http://www.opensolaris.org/jive/thread.jspa?messageID=271983#271983
The case mentioned there is one where concatenation in vdevs would be useful.
Hi Pablo,
Why is this step (the touch one) needed?
# make bootadm re-create archive
bootadm update-archive
/boot/solaris/bin/update_grub
This is just an easy way to make sure bootadm will write new archive files.
You could also use
rm /platform/i86pc/amd64/boot_archive \
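(that rm command is cut off above; spelled out, and with the second archive
path being my assumption, the sequence would be roughly:)
# rm /platform/i86pc/amd64/boot_archive /platform/i86pc/boot_archive
# bootadm update-archive
i.e. remove the stale archives so that bootadm has no choice but to rebuild them.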
Hi Richard,
Someone in the community was supposedly working on this, at one
time. It gets brought up about every 4-5 months or so. Lots of detail
in the archives.
Thank you for the pointer and sorry for the noise. I will definitely browse the
archives to find out more regarding this
Hi all,
Ben Rockwood wrote:
You want to keep stripes wide to reduce wasted disk space but you
also want to keep them narrow to reduce the elements involved in parity
calculation.
I Ben's argument, and the main point IMHO is how the RAID behaves in the
degraded state. When a disk fails,
Hi Peter,
Sorry, I read your post only after posting a reply myself.
Peter Tribble wrote:
No. The number of spindles is constant. The snag is that for random reads,
the performance of a raidz1/2 vdev is essentially that of a single disk. (The
writes are fast because they're always full-stripe;
I Ben's argument, and the main point IMHO is how the RAID behaves in the
That should of course have read "I second Ben's argument"; sorry, a word went
missing in my previous post.
Hi Robert,
Basically, the way RAID-Z works is that it spreads an FS block across all
disks in a given VDEV (minus the parity/checksum disks). Because when you
read data back from zfs, before it gets to the application zfs will check
its checksum (the fs checksum, not a raid-z one), so it needs the entire fs
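To put rough numbers on that (recordsize and vdev width are just an example):
with a 128 KB recordsize on a 6-disk raidz1, every block is split over the 5
data disks, about 26 KB each, so a random read of any single block has to touch
all 5 data disks before the checksum can be verified. That is why the
random-read IOPS of the whole vdev end up roughly at the level of a single disk.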
(not sure if this has already been answered)
I have a similar situation and would love some concise suggestions:
Had a working version of 2008.05 running snv_93 with the updated GRUB. I did
a pkg-update to snv_95 and ran the zfs update when it was suggested. System
ran fine until I did a
Not knowing of a better place to put this, I have created
http://www.genunix.org/wiki/index.php/ZFS_rpool_Upgrade_and_GRUB
Please make any corrections there.
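One step worth spelling out on that page: after a zpool upgrade of the root
pool, the GRUB stage2 already installed in the boot blocks may no longer
understand the new pool version, so it has to be reinstalled from the upgraded
boot environment (the disk slice below is a placeholder):
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0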
Thanks, Nils
Hi,
It is important to remember that ZFS is ideal for writing new files from
scratch.
IIRC, maildir MTAs never overwrite mail files. But courier-imap does maintain
some additional index files which will be overwritten and I guess other IMAP
servers will probably do the same.
Nils
Hi David,
have you tried mounting and re-mounting all filesystems which are not
being mounted automatically? See other posts to zfs-discuss.
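Concretely, something along these lines (dataset name is a placeholder):
# zfs unmount tank/export/home
# zfs mount -a
The second command also picks up any filesystems that never got mounted in the
first place.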
Nils
glitch:
have you tried mounting and re-mounting all filesystems which are not
Of course that should have been "unmounting and re-mounting".
zfs itself can't, but Tim Foster has written a nice script, integrated into
SMF, which can be used to automatically create and delete snapshots at various
intervals.
see http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10 for the latest
release and
Hi,
John wrote:
I'm setting up a ZFS fileserver using a bunch of spare drives. I'd like some
redundancy and to maximize disk usage, so my plan was to use raid-z. The
problem is that the drives are considerably mismatched and I haven't found
documentation (though I don't see why it
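For reference, the constraint behind this: in a raidz vdev every member is
treated as if it were the size of the smallest device, so mismatched drives
waste the difference. As an illustration (device names and sizes made up), a
raidz1 of a 250 GB, a 400 GB and a 750 GB disk yields roughly 2 x 250 GB of
usable space:
# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
# zpool list tank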
Hi,
I thought that this question must have been answered already, but I have
not found any explanations. I'm sorry in advance if this is redundant, but:
Why exactly doesn't ZFS let me detach a device from a degraded mirror?
haggis:~# zpool status
pool: rmirror
state: DEGRADED
status: One or
Matthias,
that does not answer my question.
The question is: why can't I decide that I consciously want to destroy the
(two-way) mirror (and, yes, do away with any redundancy)?
Nils
Hi all, especially Matthias,
I am very sorry for having bothered you with this stupid question; I am
embarrassed by the fact that I did not realize it's not a mirror. The
fact that I named it rmirror definitely added confusion on my side.
Apologies in particular for not having taken Matthias'
My previous reply via email did not get linked to this post, so let me resend
it:
can roles run cron jobs ?),
No. You need a user who can take on the role.
Darn, back to the drawing board.
I don't have all the context on this but Solaris RBAC roles *can* run cron
jobs. Roles don't have to
Hi Tim,
So, I've got a pretty basic solution:
Every time the service starts, we check for the existence of a snapshot
[...] - if one doesn't exist, then we take a snapshot under the policy set
down by that instance.
This does sound like a valid alternative solution for this requirement if
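In shell terms the check Tim describes is essentially this (dataset and
snapshot prefix are placeholders, just a sketch):
# zfs list -H -t snapshot -o name -r tank/home | grep '@daily-' >/dev/null || \
      zfs snapshot tank/home@daily-`date +%Y-%m-%d`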
Hi Tim,
Finally getting around to answering Nils' mail properly - only a month
late!
Not a problem.
Okay, after careful consideration, I don't think I'm going to add this
that's fine for me, but ...
but in cases where you're powering down a laptop overnight,
you don't want to just take a
An example from the readme does not work and fails with:
Error: Cant schedule at job: at midnight sun
Change:
--- README.zfs-auto-snapshot.txt.o Sun Jun 29 11:23:35 2008
+++ README.zfs-auto-snapshot.txt Sun Jun 29 11:24:31 2008
@@ -171,7 +171,7 @@
'setprop zfs/at_timespec =
Hi all,
I'll attach a new version of zfs-auto-snapshot including some more
improvements, and probably some new bugs. Seriously, I have
tested it, but certainly not all functionality, so please let me know
about any (new) problems you come across.
Excerpt from the change log:
- Added support to
And how about making this an official project?
and the tar file ...
zfs-auto-snapshot-0.10_atjobs.tar.bz2
see: http://bugs.opensolaris.org/view_bug.do?bug_id=6700597