Hello Matthew,
Friday, March 23, 2007, 2:49:03 AM, you wrote:
MA Robert Milkowski wrote:
MA Ah -- I think that may help explain things. It may be that your file
MA has some runs of zeros in it, which are represented as holes in
MA d100-copy1/m1, but as blocks of zeros in the d100/m1. It
On Thu, Mar 22, 2007 at 08:39:55AM -0700, Eric Schrock wrote:
Again, thanks to devids, the autoreplace code would not kick in here at
all. You would end up with an identical pool.
Eric, maybe I'm missing something, but why does ZFS depend on devids at all?
As I understand it, a devid is something
On Fri, Mar 23, 2007 at 11:31:03AM +0100, Pawel Jakub Dawidek wrote:
On Thu, Mar 22, 2007 at 08:39:55AM -0700, Eric Schrock wrote:
Again, thanks to devids, the autoreplace code would not kick in here at
all. You would end up with an identical pool.
Eric, maybe I'm missing something, but
Hi.
bash-3.00# uname -a
SunOS nfs-14-2.srv 5.10 Generic_125101-03 i86pc i386 i86pc
I created a first zpool (a stripe of 85 disks) and did some simple stress testing -
everything seems almost alright (~700MB/s sequential reads, ~430MB/s sequential writes).
Then I destroyed the pool and put an SVM stripe on top of the same
When I'm trying to do, in kernel, in a zfs ioctl:
1. snapshot destroy PREVIOUS
2. snapshot rename LATEST -> PREVIOUS
3. snapshot create LATEST
code is:
/* delete previous snapshot */
zfs_unmount_snap(snap_previous, NULL);
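From userland, the same three-step rotation can be sketched with the zfs CLI (a hedged sketch; "tank/fs" and the snapshot names are placeholders, not from the thread):

```shell
# Snapshot rotation sketch; dataset and snapshot names are placeholders.
zfs destroy tank/fs@PREVIOUS                  # 1. delete the previous snapshot
zfs rename tank/fs@LATEST tank/fs@PREVIOUS    # 2. demote LATEST to PREVIOUS
zfs snapshot tank/fs@LATEST                   # 3. take a fresh LATEST
```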
Dear all.
I've set up the following scenario:
Galaxy 4200 running OpenSolaris build 59 as iSCSI target; remaining
diskspace of the two internal drives with a total of 90GB is used as zpool
for the two 32GB volumes exported via iSCSI
The initiator is an up to date Solaris 10 11/06 x86 box
Hello,
Our Solaris 10 machine needs to be reinstalled.
Inside we have 2 HDDs in striping ZFS with 4 filesystems.
After Solaris is installed how can I mount or recover the 4 filesystems
without losing the existing data?
Thank you very much!
This message posted from opensolaris.org
See fsattr(5)
It was helpful :). Thanks!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 3/23/07, Ionescu Mircea [EMAIL PROTECTED] wrote:
Hello,
Our Solaris 10 machine needs to be reinstalled.
Inside we have 2 HDDs in striping ZFS with 4 filesystems.
After Solaris is installed how can I mount or recover the 4 filesystems
without losing the existing data?
Check zpool import
--
Hello Robert,
Forget it, silly me.
Pool was mounted on one host, SVM metadevice was created on another
host on the same disk at the same time and both hosts were issuing
IOs.
Once I corrected it I no longer see CKSUM errors with ZFS on top of
SVM and performance is similar.
On Mar 23, 2007, at 6:13 AM, Łukasz wrote:
When I'm trying to do, in kernel, in a zfs ioctl:
1. snapshot destroy PREVIOUS
2. snapshot rename LATEST -> PREVIOUS
3. snapshot create LATEST
code is:
/* delete previous snapshot */
zfs_unmount_snap(snap_previous, NULL);
where the name of the pool is xyz:
zpool export xyz
rebuild the system (stay clear of the pool disks)
zpool import xyz
Ron Halstead
Robert Milkowski wrote:
Basically we've implemented a mechanism to replicate a zfs filesystem by
implementing a new ioctl based on zfs send|recv. The difference is that
we sleep() for a specified time (default 5s) and then ask for a new
transaction, and if there's one we send it out.
More details really
How it got that way, I couldn't really say without looking at your code.
It works like this:
In new ioctl operation
zfs_ioc_replicate_send(zfs_cmd_t *zc)
we open the filesystem (not the snapshot)
dmu_objset_open(zc->zc_name, DMU_OST_ANY,
DS_MODE_STANDARD |
On Fri, Mar 23, 2007 at 11:31:03AM +0100, Pawel Jakub Dawidek wrote:
Eric, maybe I'm missing something, but why does ZFS depend on devids at all?
As I understand it, a devid is something that never changes for a block
device, e.g. a disk serial number, but on the other hand it is optional, so
we can
On 3/23/07, Mark Shellenbaum [EMAIL PROTECTED] wrote:
The original plan was to allow the inheritance of owner/group/other
permissions. Unfortunately, during ARC reviews we were forced to remove
that functionality, due to POSIX compliance and security concerns.
What exactly is the POSIX
Peter Tribble wrote:
On 3/23/07, Mark Shellenbaum [EMAIL PROTECTED] wrote:
The original plan was to allow the inheritance of owner/group/other
permissions. Unfortunately, during ARC reviews we were forced to remove
that functionality, due to POSIX compliance and security concerns.
What
Thanks for advice.
I removed my buffers snap_previous and snap_latest and it helped.
I'm using zc->zc_value as a buffer.
On Fri, 23 Mar 2007, Roch - PAE wrote:
I assume the rsync is not issuing fsyncs (and its files are
not opened O_DSYNC). If so, rsync just works against the
filesystem cache and does not commit the data to disk.
You might want to run sync(1M) after a successful rsync.
A larger rsync would
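The suggested sequence could look like this (a hedged sketch; the source and destination paths are placeholders):

```shell
# Copy, then ask the system to commit cached writes; paths are placeholders.
rsync -a /data/src/ /tank/backup/
sync    # sync(1M) schedules the cached writes out to disk
```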
Thank you all !
The machine crashed unexpectedly so no export was possible.
Anyway just using zpool import pool_name helped me to recover everything.
Thanks again for your help!
On March 23, 2007 5:38:20 PM +0800 Wee Yeh Tan [EMAIL PROTECTED] wrote:
I should be able to reply to you next Tuesday -- my 6140 SATA
expansion tray is due to arrive. Meanwhile, what kind of problem do
you have with the 3511?
I'm not sure that it had anything to do with the raid controller
On March 23, 2007 6:51:10 PM +0100 Thomas Nau [EMAIL PROTECTED] wrote:
Thanks for the hints but this would make our worst nightmares become
true. At least they could because it means that we would have to check
every application handling critical data and I think it's not the apps
I recently integrated this fix into ON:
6536606 gzip compression for ZFS
With this, ZFS now supports gzip compression. To enable gzip compression
just set the 'compression' property to 'gzip' (or 'gzip-N' where N=1..9).
Existing pools will need to upgrade in order to use this feature, and,
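Enabling it would look roughly like this (a hedged sketch; "tank/data" is a placeholder dataset, and the upgrade step applies only to pools created before this feature):

```shell
zpool upgrade tank                     # existing pools must be upgraded first
zfs set compression=gzip tank/data     # default gzip level
zfs set compression=gzip-9 tank/data   # or an explicit level, N=1..9
zfs get compression tank/data          # verify the setting
```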
On Fri, 23 Mar 2007, Adam Leventhal wrote:
I recently integrated this fix into ON:
6536606 gzip compression for ZFS
Cool! Can you recall into which build it went?
--
Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member
CEO,
My Online Home Inventory
Voice: +1 (250) 979-1638
URLs:
On Fri, Mar 23, 2007 at 11:41:21AM -0700, Rich Teer wrote:
I recently integrated this fix into ON:
6536606 gzip compression for ZFS
Cool! Can you recall into which build it went?
I put it back yesterday so it will be in build 62.
Adam
--
Adam Leventhal, Solaris Kernel Development
Peter Tribble wrote:
On 3/23/07, Mark Shellenbaum [EMAIL PROTECTED] wrote:
The original plan was to allow the inheritance of owner/group/other
permissions. Unfortunately, during ARC reviews we were forced to remove
that functionality, due to POSIX compliance and security concerns.
What
Well, I am aware that /tmp can be mounted on swap as tmpfs and that this is
really fast, as almost all writes go straight to memory, but this is of little to
no value to the server in question.
The server in question is running 2 enterprise third party applications. No
compilers are
I'd tend to disagree with that. POSIX/SUS does not guarantee data makes
it to disk until you do an fsync() (or open the file with the right flags,
or other techniques). If an application REQUIRES that data get to disk,
it really MUST DTRT.
Indeed; want your data safe? Use:
On Fri, 23 Mar 2007, Matt B wrote:
The server in question is running 2 enterprise third party
applications. No compilers are installed...in fact its a super minimal
Solaris 10 core install (06/06). The reasoning behind moving /tmp onto
ZFS was to protect against the occasional misdirected
On Fri, Mar 23, 2007 at 11:57:40AM -0700, Matt B wrote:
The server in question is running 2 enterprise third party
applications. No compilers are installed...in fact its a super minimal
Solaris 10 core install (06/06). The reasoning behind moving /tmp onto
ZFS was to protect against the
Anton B. Rang wrote:
Is this because C would already have a devid? If I insert an unlabeled disk,
what happens? What if B takes five minutes to spin up? If it never does?
N.B. You get different error messages from the disk. If a disk is not ready
then it will return a not ready code and the
Peter Tribble wrote:
On 3/23/07, Mark Shellenbaum [EMAIL PROTECTED] wrote:
Peter Tribble wrote:
What exactly is the POSIX compliance requirement here?
The ignoring of a user's umask.
Where in POSIX does it specify the interaction of ACLs and a
user's umask?
Let me try and summarize the
Thomas Nau wrote:
Dear all.
I've set up the following scenario:
Galaxy 4200 running OpenSolaris build 59 as iSCSI target; remaining
diskspace of the two internal drives with a total of 90GB is used as
zpool for the two 32GB volumes exported via iSCSI
The initiator is an up to date Solaris 10
Consider that 18GByte disks are old and their failure rate will
increase dramatically over the next few years.
I guess that's why I am asking about raidz and mirrors, not just creating a huge
stripe of them.
Do something to have redundancy. If raidz2 works for your workload,
I'd go with
Just to clarify:
pool1 - 5 disk raidz2
pool2 - 4 disk raid 10
spare for both pools
Is that correct?
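That layout could be created roughly like this (a hedged sketch; all device names are placeholders, and ZFS does allow the same disk to be listed as a hot spare in more than one pool):

```shell
zpool create pool1 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0   # 5-disk raidz2
zpool create pool2 mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0   # 4-disk "raid 10"
zpool add pool1 spare c3t0d0   # the same spare disk can be
zpool add pool2 spare c3t0d0   # offered to both pools
```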
Ok so you are suggesting that I simply mount /tmp as tmpfs on my existing 8GB
swap slice and then put in the VM limit on /tmp? Will that limit only affect
users writing data to /tmp or will it also affect the system's use of swap?
Robert Milkowski wrote:
Hello Robert,
Forget it, silly me.
Pool was mounted on one host, SVM metadevice was created on another
host on the same disk at the same time and both hosts were issuing
IOs.
Once I corrected it I no longer see CKSUM errors with ZFS on top of
SVM and performance is
For reference...here is my disk layout currently (one disk of two, but both are
identical)
s4 is for the MetaDB
s5 is dedicated for ZFS
partition print
Current partition table (original):
Total disk cylinders available: 8921 + 2 (reserved cylinders)
Part  Tag  Flag  Cylinders
On Fri, 23 Mar 2007, Matt B wrote:
Ok so you are suggesting that I simply mount /tmp as tmpfs on my
existing 8GB swap slice and then put in the VM limit on /tmp? Will that
Yes.
limit only affect users writing data to /tmp or will it also affect the
system's use of swap?
Well, they'd
Ok, since I already have an 8GB swap slice I'd like to use, what would be the
best way of setting up /tmp on this existing swap slice as tmpfs and then apply
the 1GB quota limit?
I know how to get rid of the zpool/tmp filesystem in ZFS, but I'm not sure how
to actually get to the above in a
On Fri, 23 Mar 2007, Matt B wrote:
Ok, since I already have an 8GB swap slice I'd like to use, what
would be the best way of setting up /tmp on this existing SWAP slice as
tmpfs and then apply the 1GB quota limit?
Have a line similar to the following in your /etc/vfstab:
swap    -    /tmp    tmpfs    -    yes    size=1g
And just doing this will automatically target my /tmp at my 8GB swap slice on
s1 as well as put the quota in place?
On Fri, 23 Mar 2007, Matt B wrote:
And just doing this will automatically target my /tmp at my 8GB swap
slice on s1 as well as placing the quota in place?
After a reboot, yes.
Oh, one other thing...s1 (8GB swap) is part of an SVM mirror (on d1)
On Fri, 23 Mar 2007, Matt B wrote:
Oh, one other thing...s1 (8GB swap) is part of an SVM mirror (on d1)
That's not relevant in this case.
Worked great. Thanks
Dear Frank, Casper,
I'd tend to disagree with that. POSIX/SUS does not guarantee data makes
it to disk until you do an fsync() (or open the file with the right flags,
or other techniques). If an application REQUIRES that data get to disk,
it really MUST DTRT.
Indeed; want your data safe?
snv_62
On Fri, 23 Mar 2007, Rich Teer wrote:
On Fri, 23 Mar 2007, Adam Leventhal wrote:
I
Richard,
Like this?
disk--zpool--zvol--iscsitarget--network--iscsiclient--zpool--filesystem--app
exactly
I'm in a way still hoping that it's an iSCSI-related problem as detecting
dead hosts in a network can be a non-trivial problem and it takes quite
some time for TCP to timeout and inform
Thanks for clarifying! Seems I really need to check the apps with truss or
dtrace to see if they use that sequence. Allow me one more question: why
is fflush() required prior to fsync()?
When you use stdio, you need to make sure the data is in the
system buffers prior to calling fsync().
fclose()
Łukasz wrote:
How it got that way, I couldn't really say without looking at your code.
It works like this:
...
we set max_txg
ba.max_txg = (spa_get_dsl(filesystem->os->os_spa))->dp_tx.tx_synced_txg;
So, how do you send the initial stream? Presumably you need to do it
with ba.max_txg = 0?
If I create a mirror, presumably if possible I use two or more identically
sized devices, since it can only be as large as the smallest. However, if later
I want to replace a disk with a larger one, and detach the mirror (and anything
else on the disk), replace the disk (and if applicable
Yes, this is supported now. Replacing one half of a mirror with a larger device;
letting it resilver; then replacing the other half does indeed get a larger
mirror.
I believe this is described somewhere but I can't remember where now.
Neil.
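The replace-and-resilver sequence could look like this (a hedged sketch; the pool and device names are placeholders):

```shell
zpool replace tank c1t0d0 c4t0d0   # swap in the first, larger disk
zpool status tank                  # wait for the resilver to finish
zpool replace tank c1t1d0 c4t1d0   # then swap the second half
zpool list tank                    # capacity grows once both halves are larger
```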
Richard L. Hamilton wrote On 03/23/07 20:45,:
If I
Hi guys!
Please share your experience on how to backup zfs with ACLs using commercially
available backup software. Has anyone tested backup of zfs with ACLs using
Tivoli (TSM)?
thanks
Ayaz
On Fri, Mar 23, 2007 at 11:28:19AM -0700, Frank Cusack wrote:
I'm in a way still hoping that it's an iSCSI-related problem as detecting
dead hosts in a network can be a non-trivial problem and it takes quite
some time for TCP to timeout and inform the upper layers. Just a
guess/hope here that