I'm looking to create a new pool for storing CIFS files. I know that I need to
set casesensitivity=mixed, but it appears I can only set this option when using
the zfs create command; I get told it's not a valid pool property if I try to
use it with zpool create.
Is there no way to create a pool
Ross wrote:
I'm looking to create a new pool for storing CIFS files. I know that I need
to set casesensitivity=mixed, but it appears I can only set this option when
using the zfs create command, I get told it's not a valid pool property if
I try to use it with zpool create.
Is there no way
Ross wrote:
Good god. Talk about non-intuitive. Thanks Darren!
Why isn't that intuitive? It is even documented in the man page.
zpool create [-fn] [-o property=value] ... [-O file-system-property=value] ...
             [-m mountpoint] [-R root] pool vdev ...
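For example, the -O flag in the synopsis above sets a file-system property at pool creation time; a sketch with hypothetical pool and device names:

```shell
# Hypothetical names: pool "tank" on disk c1t0d0.
# -O applies the property to the pool's root filesystem as it is created.
zpool create -O casesensitivity=mixed tank c1t0d0
```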
Is it possible for me to
Dear Candy,
This is the log of zpool status, along with partition of c3d0 and c4d0
1. It appears to me that once I destroy the partition of c4d0 and recreate it,
I get different slices on c4d0. I forgot which fdisk partition type I
chose; it is either Solaris, Solaris2, or Unix System, and it
It's not intuitive because when you know that -o sets options, an
error message saying that it's not a valid property makes you think
that it's not possible to do what you're trying.
Documented and intuitive are very different things. I do appreciate
that the details are there in the manuals,
In August last year I posted this bug, a brief summary of which would be that
ZFS still accepts writes to a faulted pool, causing data loss, and potentially
silent data loss:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6735932
There have been no updates to the bug since
Hello everyone,
I am trying to take ZFS snapshots (i.e. zfs send) and burn them to DVDs for
offsite storage. In many cases, the snapshots greatly exceed the 8GB I can
stuff onto a single DVD-DL.
In order to make this work, I have used the split utility to break the images
into smaller,
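The split-and-rejoin mechanics can be sanity-checked on a small dummy file first (all file names below are placeholders). Note the `>` redirect on the cat line; without it, cat writes to stdout and treats the last name as another input file:

```shell
# Create a dummy "snapshot" file, split it into 100 KB pieces,
# rejoin them, and compare checksums of the original and the copy.
dd if=/dev/urandom of=snap.img bs=1024 count=256 2>/dev/null
split -b 100k snap.img snap.img.split.
cat snap.img.split.a[a-c] > snap.joined
cksum snap.img snap.joined   # the two checksums should match
```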
On Wed, February 4, 2009 05:14, Darren J Moffat wrote:
Ross wrote:
Good god. Talk about non-intuitive. Thanks Darren!
Why isn't that intuitive? It is even documented in the man page.
zpool create [-fn] [-o property=value] ... [-O file-system-property=value] ...
             [-m mountpoint] [-R root] pool vdev ...
Jean-Paul,
Our goofy disk formatting is tripping you up...
Put the disk space of c8t0d0 in c8t0d0s0 and try the
zpool add syntax again. If you need help with the
format syntax, let me know.
This command syntax should have complained:
pfexec zpool add rpool cache /dev/rdsk/c8t0d0
See the zpool
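Presumably, once the disk space is in slice 0, the corrected command would look something like this (a sketch, reusing the device name from the message above):

```shell
pfexec zpool add rpool cache c8t0d0s0
```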
Well, after a quick test today I can confirm that this isn't possible.
You can do a send/receive to an existing filesystem, but you need to use the -F
option, and it overwrites the receiving filesystem, giving it identical
properties to the source.
Looks like this'll have to be a proper backup
Handojo,
Use the format utility to put the disk space of c4d0 into c4d0s0
and try the zpool attach syntax again, like this:
# zpool attach rpool c3d0s0 c4d0s0
Let the newly added disk resilver by monitoring with zpool status.
Then, install the bootblocks on the newly added disk, like this:
#
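On x86 systems with a GRUB-based ZFS root, the truncated command above is typically something like the following (a sketch; the stage file paths are the standard Solaris locations, and SPARC systems use installboot instead):

```shell
# Install GRUB bootblocks on the newly attached mirror disk.
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c4d0s0
```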
On Wed, Feb 4, 2009 at 6:19 PM, Michael McKnight
michael_mcknigh...@yahoo.com wrote:
#split -b8100m ./mypictures.zfssnap mypictures.zfssnap.split.
But when I compare the checksum of the original snapshot to that of the
rejoined snapshot, I get a different result:
#cksum
Good god. Talk about non-intuitive. Thanks Darren!
Is it possible for me to suggest a quick change to the zpool error message in
solaris? Should I file that as an RFE? I'm just wondering if the error
message could be changed to something like:
property 'casesensitivity' is not a valid pool
I jumpstarted my machine with sNV b106, and installed with ZFS root/boot.
It left me at a shell prompt in the JumpStart environment, with my ZFS
root on /a.
I wanted to try out some things that I planned on scripting for the
JumpStart to run; one of these was creating a new ZFS pool from the
Anyway, you could try simply creating standard FDISK/Solaris/vtoc
partitioning on the SD card, with all the free space contained in one
slice, and give that slice to ZFS.
This is what I've done so far.
fdisk -
Total disk size is 1943 cylinders
Cylinder size is 4096 (512 byte) blocks
I am trying to keep a file system (actually quite a few) in sync across two
systems for DR purposes, but I am encountering something that I find strange.
Maybe its not strange, and I just don't understand - but I will pose to you
fine people to help answer my question. This is all scripted, but
Yeah, I knew that zpool creates a root filesystem (since it's listed in zfs
list), but I also knew these properties had to be set on creation, not after,
so I figured zpool -o was the way to do it.
It completely threw me when zpool -o said it wasn't a valid property; I'd never
have thought to
Tony,
I believe you want to use zfs recv -F to force a rollback on the
receiving side.
I'm wondering if your ls is updating the atime somewhere, which would
indeed be a change...
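If atime updates do turn out to be the cause, one option is to disable them on the receiving filesystem (a sketch; the pool/filesystem name is hypothetical):

```shell
zfs set atime=off tank/backup
```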
-Greg
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
If you have an older Solaris release using ZFS and Samba, and you upgrade to a
version with CIFS support, how do you ensure the file systems/pools have
casesensitivity=mixed?
--
This message posted from opensolaris.org
On 4-Feb-09, at 6:19 AM, Michael McKnight wrote:
Hello everyone,
I am trying to take ZFS snapshots (i.e. zfs send) and burn them to
DVDs for offsite storage. In many cases, the snapshots greatly
exceed the 8GB I can stuff onto a single DVD-DL.
In order to make this work, I have used
Greg Mason wrote:
Tony,
I believe you want to use zfs recv -F to force a rollback on the
receiving side.
I'm wondering if your ls is updating the atime somewhere, which would
indeed be a change...
Yes.
If you want to have a look around it, cd into the last snapshot and look
around in
You can check whether it's set with:
$ zfs get casesensitivity pool/filesystem
If you're using CIFS, you need that to return mixed or insensitive. If it
returns sensitive, it will cause you problems.
Unfortunately there's no way to change this setting on an existing filesystem,
so if you do
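Since casesensitivity can only be set at creation time, migrating an existing filesystem means creating a new one with the property set and copying the files across at the file level (a sketch; filesystem names and paths are placeholders):

```shell
zfs create -o casesensitivity=mixed tank/cifs_new
rsync -a /tank/cifs/ /tank/cifs_new/
```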
Thanks ... the -F works perfectly, and provides a further benefit in that the
client can mess with the file system as much as they want for testing purposes,
but when it comes time to ensure it is synchronized each night, it will revert
to the previous state.
Thanks
-Tony
Hello,
I have setup a fileserver using zfs and am able to see the share from my mac.
I am able to create/write to the share as well as read. I've ensured that I
have the same user and uid on both the server (opensolaris snv101b) as well as
the mac. The root folder of the share is owned by
On February 4, 2009 8:39:13 AM -0800 Ross myxi...@googlemail.com wrote:
Yeah, I knew that zpool creates a root filesystem (since it's listed in
zfs list), but I also knew these properties had to be set on creation,
not after, so I figured zpool -o was the way to do it.
sorry, which properties
On Wed, 4 Feb 2009, Fajar A. Nugraha wrote:
On Wed, Feb 4, 2009 at 1:28 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Tue, 3 Feb 2009, Fajar A. Nugraha wrote:
Just wondering, why didn't you compress it first? something like
zfs send | gzip > backup.zfs.gz
The 'lzop'
Frank Cusack wrote:
On February 4, 2009 8:39:13 AM -0800 Ross myxi...@googlemail.com wrote:
Yeah, I knew that zpool creates a root filesystem (since it's listed in
zfs list), but I also knew these properties had to be set on creation,
not after, so I figured zpool -o was the way to do it.
mm == Michael McKnight michael_mcknigh...@yahoo.com writes:
mm #split -b8100m ./mypictures.zfssnap mypictures.zfssnap.split.
mm #cat mypictures.zfssnap.split.a[a-g] > testjoin
mm But when I compare the checksum of the original snapshot to
mm that of the rejoined snapshot, I get a
Aaron wrote:
I have setup a fileserver using zfs and am able to see the share from my mac.
I am able to create/write to the share as well as read. I've ensured that I
have the same user and uid on both the server (opensolaris snv101b) as well
as the mac. The root folder of the share is
On Wed, February 4, 2009 11:05, Ross wrote:
You can check whether it's set with:
$ zfs get casesensitivity pool/filesystem
If you're using CIFS, you need that to return mixed or insensitive.
If it returns sensitive, it will cause you problems.
It will? What symptoms?
Unfortunately
On Wed, February 4, 2009 12:01, Miles Nordin wrote:
* stream format is not guaranteed to be forward compatible with new
kernels, and versioning may be pickier than zfs/zpool versions.
Useful points, all of them. This particular one also points out something
I hadn't previously thought
Well for one, occasional 'file not found' errors when programs (or shortcuts)
check whether a file exists using the wrong case.
And you can expect more problems too. Windows systems are case insensitive, so
there's nothing stopping a program referring to one of its files as file,
File and
r == Ross myxi...@googlemail.com writes:
r Suffice to say that while you might get away with running ZFS
r and CIFS in case sensitive mode for a bit, sooner or later
r it's going to go horribly wrong.
It will work okay with Samba, though.
On Wed, 4 Feb 2009, Toby Thain wrote:
In order to make this work, I have used the split utility ...
I use the following command to convert them back into a single file:
#cat mypictures.zfssnap.split.a[a-g] > testjoin
But when I compare the checksum of the original snapshot to that of
the
Ok, thanks for the info - I was really pulling my hair out over this. Would you
know if sharing over nfs via zfs would fare any better?
Aaron wrote:
Ok, thanks for the info - I was really pulling my hair out over this. Would
you know if sharing over nfs via zfs would fare any better?
I am *quite* happy with the Mac NFS client, and use it against ZFS
files all the time. It's worth the time to make sure you're using
the same
On 4-Feb-09, at 2:29 PM, Bob Friesenhahn wrote:
On Wed, 4 Feb 2009, Toby Thain wrote:
In order to make this work, I have used the split utility ...
I use the following command to convert them back into a single file:
#cat mypictures.zfssnap.split.a[a-g] > testjoin
But when I compare the
Just tried sharing over NFS, and it's a [i]much[/i] improved experience. I'll wait and
see if performance becomes an issue, but this looks pretty good now. The key,
as you mentioned, is keeping the uid/gids in sync.
Thanks!
Aaron
Put the disk space of c8t0d0 in c8t0d0s0 and try the
zpool add syntax again. If you need help with the
format syntax, let me know.
This command syntax should have complained:
pfexec zpool add rpool cache /dev/rdsk/c8t0d0
See the zpool syntax below for pointers.
Cindy
I've tried the
Jean-Paul,
Regarding your comments here:
Expected because s0 is defined as 0 bytes in the partition table I presume?
Yes, you need to put the disk space into s0 by using the format
utility. Use the modify option from format's partition menu is
probably the easiest way. Email me directly if you
tt == Toby Thain t...@telegraphics.com.au writes:
tt I know this was discussed a while back, but in what sense does
tt tar do any of those things? I understand that it is unlikely
tt to barf completely on bitflips, but won't tar simply silently
tt de-archive bad data?
yeah, I
Miles Nordin wrote:
mm == Michael McKnight michael_mcknigh...@yahoo.com writes:
mm #split -b8100m ./mypictures.zfssnap mypictures.zfssnap.split.
mm #cat mypictures.zfssnap.split.a[a-g] > testjoin
mm But when I compare the checksum of the original snapshot to
mm
heya,
I was wondering what's the status on the ability to remove disks from a ZFS
pool? It's mentioned in the ZFS faq, as coming soon, and the BigAdmin Xperts
page on ZFS mentions it as coming out in 2007.
I know this is also discussed elsewhere, e.g.:
Not sure where is best to put something like this.
There are wikis like
http://www.solarisinternals.com/wiki/index.php/Solaris_Internals_and_Performance_FAQ
http://wiki.genunix.org/wiki/index.php/WhiteBox_ZFSStorageServer
But I haven't seen anything which has an active community like
The zilstat tool is very helpful, thanks!
I tried it on an X4500 NFS server, while extracting a 14MB tar archive,
both via an NFS client, and locally on the X4500 itself. Over NFS,
said extract took ~2 minutes, and showed peaks of 4MB/sec buffer-bytes
going through the ZIL.
When run locally on
Interesting, but what does it mean :)
The x4500 for mail (NFS vers=3 on ufs on zpool with quotas):
# ./zilstat.ksh
   N-Bytes  N-Bytes/s  N-Max-Bytes/s    B-Bytes  B-Bytes/s  B-Max-Bytes/s
    376720     376720         376720    1286144    1286144        1286144
    419608     419608         419608
Tony,
On Wed, Feb 04, 2009 at 09:10:26AM -0800, Tony Galway wrote:
Thanks ... the -F works perfectly, and provides a further benefit in that the
client can mess with the file system as much as they want for testing
purposes, but when it comes time to ensure it is synchronized each night, it
Jorgen Lundman wrote:
Interesting, but what does it mean :)
The x4500 for mail (NFS vers=3 on ufs on zpool with quotas):
# ./zilstat.ksh
   N-Bytes  N-Bytes/s  N-Max-Bytes/s    B-Bytes  B-Bytes/s  B-Max-Bytes/s
    376720     376720         376720    1286144    1286144        1286144
    419608
Marion Hakanson wrote:
The zilstat tool is very helpful, thanks!
I tried it on an X4500 NFS server, while extracting a 14MB tar archive,
both via an NFS client, and locally on the X4500 itself. Over NFS,
said extract took ~2 minutes, and showed peaks of 4MB/sec buffer-bytes
going through
Richard Elling wrote:
# ./zilstat.ksh
   N-Bytes  N-Bytes/s  N-Max-Bytes/s    B-Bytes  B-Bytes/s  B-Max-Bytes/s
    376720     376720         376720    1286144    1286144        1286144
    419608     419608         419608    1368064    1368064        1368064
    555256     555256         555256    1732608
Hi Richard,
Richard Elling schrieb:
Yes. I've got a few more columns in mind, too. Does anyone still use
a VT100? :-)
Only when using ILOM ;)
(apologies to anyone using a 72-char/line MUA; the following lines are longer):
Thanks for the great tool, it showed something very interesting
Anybody have a guess as to the cause of this problem?