On 10/13/10 10:20 AM, dirk schelfhout wrote:
Wanted to test the zfs diff command and ran into this.
I turned off all windows sharing.
the rpool has normal permissions for .zfs/shares
How do I fix this?
Dirk
r...@osolpro:/data/.zfs# zfs diff d...@10aug2010 d...@13oct2010
Cannot stat
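For reference, a sketch of what a working zfs diff invocation looks like (the pool and snapshot names below are placeholders, not the ones from this report):

```
# zfs diff prints one line per changed path, prefixed with:
#   M = modified, + = created, - = removed, R = renamed
zfs diff tank/data@10aug2010 tank/data@13oct2010
```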
On 04/21/10 03:24 AM, Darren J Moffat wrote:
On 21/04/2010 05:09, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Nicolas Williams
The .zfs/snapshot directory is most certainly available over NFS.
I'm not sure
On 04/14/10 11:48 AM, Glenn Fowler wrote:
On Wed, 14 Apr 2010 17:54:02 +0200, Olga Kryzhanovskaya wrote:
Can I use getconf to test if a ZFS file system is mounted in case
insensitive mode?
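As far as I can tell, pathconf/getconf has no portable query for this today; the direct route is the dataset property itself (dataset name is a placeholder):

```
# Prints sensitive, insensitive, or mixed:
zfs get -H -o value casesensitivity tank/fs
```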
we would have to put in the zfs query (hopefully more generic than just for zfs)
the
On 04/12/10 09:05 AM, J James wrote:
I have a simple mirror pool with 2 disks. I pulled out one disk to simulate a
failed drive. zpool status shows that the pool is in DEGRADED state.
I want syslog to log these types of ZFS errors. I have syslog running and
logging all sorts of errors to a log
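One caveat worth noting: ZFS device faults are reported through FMA rather than raw syslog, so a sketch of where to look (standard Solaris commands; verify on your build):

```
# Fault events (a faulted/degraded vdev shows up here):
fmdump -v
# Underlying error telemetry:
fmdump -e
# Currently faulted resources:
fmadm faulty
```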
On 04/01/10 01:46 PM, Eiji Ota wrote:
During the IPS upgrade, the file system got full, then I cannot do anything to
recover it.
# df -kl
Filesystem 1K-blocks Used Available Use% Mounted on
rpool/ROOT/opensolaris
4976642 4976642 0 100% /
swap
On 3/29/10 8:02 PM, Daniel Carosone wrote:
On Tue, Mar 30, 2010 at 12:37:15PM +1100, Daniel Carosone wrote:
There will also need to be clear rules on output ordering, with
respect to renames, where multiple changes have happened to renamed
files.
Separately, but relevant in particular to the
On 03/09/10 10:53 AM, Cindy Swearingen wrote:
Hi D,
Is this a 32-bit system?
We were looking at your panic messages and they seem to indicate a
problem with memory and not necessarily a problem with the pool or
the disk. Your previous zpool status output also indicates that the
disk is okay.
There seems to be a rash of posts lately where people are resetting or
rebooting without getting any data, so I thought I'd post a quick
overview on collecting crash dumps. If you think you've got a hang
problem with ZFS and you want to gather data for someone to look at,
then here are a few
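As a minimal sketch of that setup (standard Solaris commands; dump device paths vary per system):

```
# Verify a dump device and savecore directory are configured:
dumpadm
# On a hung system, force a panic so a crash dump is written:
reboot -d
# After the next boot, extract the dump manually if needed:
savecore -v
```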
Rich Teer wrote:
Congrats for integrating dedup! Quick question: in what build
of Nevada will dedup first be found? b126 is the current one
presently.
Cheers,
128
-tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
Dennis Clarke wrote:
Does the dedupe functionality happen at the file level or a lower block
level?
block level, but remember that block size may vary from file to file.
I am writing a large number of files that have the following structure:
-- file begins
1024 lines of random ASCII chars 64
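The block-level point can be illustrated outside ZFS with ordinary tools: split files into fixed-size records and count unique checksums, roughly what dedup sees when record boundaries line up. This is only a sketch; 128 KiB matches the default ZFS recordsize, but real block sizes vary per file.

```shell
# Two identical 256 KiB files: dedup-style accounting sees 2 unique
# 128 KiB blocks, not 4.
set -e
tmp=$(mktemp -d)
head -c 262144 /dev/urandom > "$tmp/file1"
cp "$tmp/file1" "$tmp/file2"

count_unique_blocks() {
  for f in "$@"; do
    split -b 131072 -- "$f" "$tmp/blk."
    cksum "$tmp"/blk.*
    rm -f "$tmp"/blk.*
  done | awk '{print $1, $2}' | sort -u | wc -l
}

count_unique_blocks "$tmp/file1" "$tmp/file2"
```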
Orvar Korvar wrote:
Does this putback mean that I have to upgrade my zpool, or is it a zfs tool? If
I miss upgrading my zpool, am I smoked?
The putback did not bump zpool or zfs versions. You shouldn't have to upgrade
your pool.
-tim
Robert Milkowski wrote:
Miles Nordin wrote:
csb == Craig S Bell cb...@standard.com writes:
csb Two: If you lost data with another filesystem, you may have
csb overlooked it and blamed the OS or the application,
yeah, but with ZFS you often lose the whole pool in certain
classes of
Robert Milkowski wrote:
Kevin Walker wrote:
Hi all,
Just subscribed to the list after a debate on our helpdesk led me to
the posting about ZFS corruption and the need for a fsck repair tool
of some kind...
Has there been any update on this?
I guess the discussion started after
Chris Murray wrote:
Accidentally posted the below earlier against ZFS Code, rather than ZFS Discuss.
My ESXi box now uses ZFS filesystems which have been shared over NFS. Spotted
something odd this afternoon - a filesystem which I thought didn't have any
files in it, weighs in at 14GB.
Allen Eastwood wrote:
Does DNLC even play a part in ZFS, or are the Docs out of date?
Defines the number of entries in the directory name look-up cache
(DNLC). This parameter is used by UFS and NFS to cache elements of path
names that have been resolved.
No mention of ZFS. Noticed that
Miles Nordin wrote:
bmm == Bogdan M Maryniuk bogdan.maryn...@gmail.com writes:
bmm OK, so what is the status of your bugreport about this?
That's a good question if it's meant genuinely, and not to be
obstructionist. It's hard to report one bug with clear information
because the problem
Brent Jones wrote:
On Mon, Jun 8, 2009 at 9:38 PM, Richard Lowe richl...@richlowe.net wrote:
Brent Jones br...@servuhome.net writes:
I've had similar issues with similar traces. I think you're waiting on
a transaction that's never going to come.
I thought at the time that I was hitting:
Brent Jones wrote:
Hello all,
I had been running snv_106 for about 3 or 4 months on a pair of X4540's.
I would ship snapshots from the primary server to the secondary server
nightly, which was working really well.
However, I have upgraded to 2009.06, and my replication scripts appear
to hang
Dave wrote:
C. Bergström wrote:
Bob Friesenhahn wrote:
I don't know if anyone has noticed that the topic is google summer
of code. There is only so much that a starving college student can
accomplish from a dead-start in 1-1/2 months. The ZFS equivalent of
eliminating world hunger is not
Alastair Neil wrote:
x86 snv 108
I have a pool, home, with around 5300 file systems. I can do:
zfs set sharenfs=on home
however
zfs set sharenfs=sec=krb5,rw home
complains:
cannot set property for 'home': 'sharenfs' cannot be set to invalid options
I feel I must be overlooking
Colin Johnson wrote:
I was having CIFS problems on my Mac so I upgraded to build 105.
After getting all my shares populated with data I ran zpool scrub on
the raidz array and it told me the version was out of date so I
upgraded.
One of my shares is now inaccessible and I cannot even
Jerry K wrote:
It was rumored that Nevada build 105 would have ZFS encrypted file
systems integrated into the main source.
In reviewing the Change logs (URL's below) I did not see anything
mentioned that this had come to pass. It's going to be another week
before I have a chance to play
Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 11:13:21AM -0800, Jay Anderson wrote:
The casesensitivity option is just like utf8only and normalization, it
can only be set at creation time. The result from attempting to change
it on an existing filesystem:
# zfs set
Kristof Van Damme wrote:
Hi Tim,
Thanks for having a look.
The 'utf8only' setting is set to off.
Important bit of additional information:
We only seem to have this problem when copying to a zfs filesystem with the
casesensitivity=mixed property. We need this because the filesystem will
Kristof Van Damme wrote:
Hi Tim,
That's splendid!
In case other people want to reproduce the issue themselves, here is how.
Attached is a tar which contains the 2 files (UTF-8 and ISO8859) like the
ones I used in my first post to demonstrate the problem. Here are the
instructions to
Kristof Van Damme wrote:
Hi All,
We have set up a zpool on OpenSolaris 2008.11, but have difficulties copying
files with special chars in the name when the name is encoded in ISO8859-15.
When the name is in UTF-8 we don't have this problem. We get "Operation not
supported".
We want to copy
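A common workaround outside ZFS (a sketch, not from the thread: it assumes the names really are ISO8859-15 and that iconv is available) is to re-encode the filenames to UTF-8 before copying:

```shell
# Rename ISO8859-15-encoded filenames under $dir to UTF-8.
set -e
dir=$(mktemp -d)                       # placeholder; point at your data
touch "$dir/$(printf 'caf\351.txt')"   # sample Latin-9 name (0xE9 = é)

for f in "$dir"/*; do
  base=$(basename -- "$f")
  new=$(printf '%s' "$base" | iconv -f ISO8859-15 -t UTF-8)
  if [ "$base" != "$new" ]; then
    mv -- "$f" "$dir/$new"
  fi
done
ls "$dir"   # café.txt
```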
Ross wrote:
While it's good that this is at least possible, that looks horribly
complicated to me.
Does anybody know if there's any work being done on making it easy to remove
obsolete
boot environments?
If the clones were promoted at the time of their creation the BEs would
stay
Kyle McDonald wrote:
Tim Haley wrote:
Ross wrote:
While it's good that this is at least possible, that looks horribly
complicated to me.
Does anybody know if there's any work being done on making it easy to
remove obsolete
boot environments?
If the clones were promoted
Thomas Nau wrote:
Dear all.
I stumbled over an issue triggered by Samba while accessing ZFS snapshots.
As soon as a Windows client tries to open the .zfs/snapshot folder it
issues the Microsoft equivalent of ls (dir, dir *). It gets translated
by Samba all the way down into
Aaron Moore wrote:
Hello,
I am trying to create a file server that will be used across multiple OSs and
I wanted to create the root pool/zfs system with mixed case. I am not sure
how to do this since the root zfs fs gets automatically created without this
flag when I create the pool.
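For what it's worth, since casesensitivity can only be set at creation time, zpool create's -O flag (which sets filesystem properties on the root dataset as the pool is created) should cover this; a sketch with placeholder pool and device names:

```
# Set the property on the root filesystem when creating the pool:
zpool create -O casesensitivity=mixed tank c0t1d0
# Or create child filesystems with the property afterwards:
zfs create -o casesensitivity=mixed tank/share
```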
Sachin Palav wrote:
Friends,
I have recently built a file server on an x2200 with Solaris x86 having ZFS
(version 4) and running NFS (version 2) and Samba.
The AIX 5.2 client gives an error while running the command cp -R
zfs_nfs_mount_source zfs_nfs_mount_destination, as below:
cp: 0653-440
Roland Mainz wrote:
Bart Smaalders wrote:
Marcus Sundman wrote:
I'm unable to find more info about this. E.g., what does "reject file
names" mean in practice? E.g., if a program tries to create a file
using a UTF-8-incompatible filename, what happens? Does the fopen()
fail? Would this normally
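My understanding (worth verifying on your build) is that with utf8only=on the create fails at the system-call level with EILSEQ, so fopen() returns NULL. Whether a name would be rejected can be checked ahead of time by validating it as UTF-8, e.g.:

```shell
# Exit 0 if the argument is valid UTF-8, non-zero otherwise.
is_valid_utf8() {
  printf '%s' "$1" | iconv -f UTF-8 -t UTF-8 >/dev/null 2>&1
}

is_valid_utf8 'café.txt' && echo accepted || echo rejected                  # accepted
is_valid_utf8 "$(printf 'caf\351.txt')" && echo accepted || echo rejected   # rejected
```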
On Wed, 19 Jul 2006, Eric Lowe wrote:
(Also, BTW, that page has a typo you might want to get fixed; I didn't
know where the doc bugs should go for those messages.)
- Eric
Product: event_registry
Category: events
Sub-Category: msg
-tim