Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 11:13:21AM -0800, Jay Anderson wrote:
The casesensitivity option, like utf8only and normalization, can
only be set at creation time. The result from attempting to change
it on an existing filesystem:
# zfs set
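For illustration, with a hypothetical pool/dataset name (tank/cifs), the sequence looks roughly like this:

# zfs create -o casesensitivity=mixed tank/cifs
# zfs set casesensitivity=sensitive tank/cifs    # fails: the property is fixed at creation time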
Kristof Van Damme wrote:
Hi Tim,
Thanks for having a look.
The 'utf8only' setting is set to off.
Important bit of additional information:
We only seem to have this problem when copying to a zfs filesystem with the
casesensitivity=mixed property. We need this because the filesystem will
Ahhh...I missed the difference between a volume and a FS. That was it...thanks.
Hi Tim,
That's splendid!
In case other people want to reproduce the issue themselves, here is how.
In attach is a tar which contains the 2 files (UTF8 and ISO8859) like the ones
I used in my first post to demonstrate the problem. Here are the instructions
to reproduce the issue:
Create a zfs
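Roughly, the reproduction boils down to the following sketch (pool/dataset names and the file names are stand-ins for the attached test files):

# zfs create -o casesensitivity=mixed tank/mixed
# cd /tank/mixed
# touch "$(printf 'caf\303\251')"    # UTF-8 encoded name, expected to succeed
# touch "$(printf 'caf\351')"        # ISO-8859-1 encoded name, expected to fail on this filesystem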
I would think only the casesensitivity=mixed should have to be set at
creation time, that casesensitivity=insensitive could be set at any
time. Hmmm.
We don't allow this for a couple of reasons. If the file system was
case-sensitive or mixed and you suddenly make it insensitive,
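The ambiguity is easy to demonstrate with a hypothetical dataset: both names below can coexist on a case-sensitive filesystem, so switching it to insensitive afterwards would leave a lookup of either name with two candidates.

# zfs create -o casesensitivity=sensitive tank/src
# touch /tank/src/Makefile /tank/src/makefile    # two distinct files; ambiguous under case-insensitive lookup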
Hi,
ZFSAgent is a small JMX agent for ZFS. It is free and you can find it at
http://www.re.be/zfsagent/.
Best regards,
Werner.
On Wed, 10 Dec 2008, Anton B. Rang wrote:
It sounds like you have access to a source of information that the
rest of us don't have access to.
I think if you read the archives of this mailing list, and compare
it to the discussions on the other Solaris mailing lists re UFS,
it's a
Hello Anton,
Thursday, December 11, 2008, 4:17:15 AM, you wrote:
It sounds like you have access to a source of information that the
rest of us don't have access to.
ABR I think if you read the archives of this mailing list, and
ABR compare it to the discussions on the other Solaris mailing
Whether 'tis nobler.
Just wondering if (excepting the existing zones thread) there are any
compelling arguments to keep /var as its own filesystem for your typical
Solaris server. Web servers and the like.
Or arguments against it.
Maybe this has been discussed before, but I haven't been able to find any
relevant threads.
I have a simple OpenSolaris 2008.11 setup with one ZFS pool consisting of the
whole of the single hard drive on the system. What I want to do is to replace
the present 500 GB drive with a 1.5 TB drive.
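One common approach, sketched here with placeholder device names, is to attach the new disk as a mirror of the root pool, let it resilver, install the boot blocks, and then detach the old disk:

# zpool attach rpool c0t0d0s0 c0t1d0s0    # old slice, new slice (SMI-labeled)
# zpool status rpool                      # wait for the resilver to finish
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0    # x86; use installboot on SPARC
# zpool detach rpool c0t0d0s0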
vincent_b_...@yahoo.com said:
Just wondering if (excepting the existing zones thread) there are any
compelling arguments to keep /var as its own filesystem for your typical
Solaris server. Web servers and the like.
Well, it's been considered a best practice for servers for a lot of
years to
Richard Elling wrote:
Please file a bug.
I have.
-- richard
Ian Collins wrote:
A while back I started a thread Possible ZFS panic on Solaris 10
Update 6 and it now turns out the cause is one incremental stream.
Sending this stream to an x86 system running Solaris 10 Update 6 or SXCE
Vincent Fox wrote:
Whether 'tis nobler.
Just wondering if (excepting the existing zones thread) there are any
compelling arguments to keep /var as its own filesystem for your typical
Solaris server. Web servers and the like.
IMHO, the *only* good reason to create a new file system
Vincent Fox wrote:
Whether 'tis nobler.
Just wondering if (excepting the existing zones thread) there are any
compelling arguments to keep /var as its own filesystem for your typical
Solaris server. Web servers and the like.
Or arguments against
with zfs it's easy to set quotas so
Marion Hakanson wrote:
Personally, I'd like to place a limit on /var/core/; That's the only
consistent out of disk space cause I've seen on our Solaris-10 systems,
and that happens whether /var/ is separate or not. Maybe /var/crash/
as well.
You can specify the volsize on /rpool/dump
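For example (the size is illustrative; the dump device may need to be pointed at again with dumpadm(1M) after a resize):

# zfs get volsize rpool/dump
# zfs set volsize=1G rpool/dump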
It just seems like in a typical ZFS root install the need for a separate /var
is difficult for me to justify now. By default there are no quotas or
reservations set on /var. Okay I set them.
I have a monitoring system able to tell me when disks are getting full. It
seems easier to say just
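If you do set them, it is only two properties on the /var dataset; the dataset name below is hypothetical and depends on the install layout:

# zfs set quota=8G rpool/ROOT/mybe/var
# zfs set reservation=2G rpool/ROOT/mybe/var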
On Thu, Dec 11, 2008 at 12:43 PM, Alex Viskovatoff aufgeho...@imap.cc wrote:
Maybe this has been discussed before, but I haven't been able to find any
relevant threads.
I have a simple OpenSolaris 2008.11 setup with one ZFS pool consisting of
the whole of the single hard drive on the system.
vf == Vincent Fox vincent_b_...@yahoo.com writes:
vf the need for a separate /var is difficult for me to justify
vf now.
so long as you keep the word ``me'' in there! great that you don't
need it, but it's not difficult to justify.
Miles Nordin wrote:
ic == Ian Collins i...@ianshome.com writes:
Personally, I'd like to place a limit on /var/core/;
ic You can specify the volsize on /rpool/dump on a zfs boot
ic system.
so what? so you can truncate each core dump to make it useless before
Thanks, that's what I thought. Just wanted to make sure.
I guess the writers of the documentation think that this is so obviously the
way things would work in a well designed system that there is no reason to
mention it explicitly.
Kyle McDonald wrote:
Tim Haley wrote:
Ross wrote:
While it's good that this is at least possible,
that looks horribly complicated to me.
Does anybody know if there's any work being done
on making it easy to remove obsolete
boot environments?
If the clones were
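For what it's worth, on OpenSolaris builds that ship beadm(1M), removing an obsolete boot environment comes down to (BE name is a placeholder):

# beadm list
# beadm destroy old-be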
Hi Alex,
Not exactly. Just hadn't thought of that specific example yet, but it's a
good one so I'll add it.
In your case, ZFS might not see the expanded capacity of the larger disk
automatically due to a recent bug. For non-root pools, the workaround to
see the expanded space is to export and
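For a non-root pool that workaround would look something like this (pool name hypothetical):

# zpool export tank
# zpool import tank
# zpool list tank    # capacity should now reflect the larger device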
Hi Cindy,
Thanks for clearing that up. I don't mind rebooting, just as long as that makes
the zpool use the additional space. I did read about the export/import
workaround, but wasn't sure if rebooting would have the same effect.
The ZFS documentation convinced me to set up a mirrored pool,
On 11-Dec-08, at 12:28 PM, Robert Milkowski wrote:
Hello Anton,
Thursday, December 11, 2008, 4:17:15 AM, you wrote:
It sounds like you have access to a source of information that the
rest of us don't have access to.
ABR I think if you read the archives of this mailing list, and
ABR
Mark & others:
I think you may have misunderstood what people were suggesting. They
weren't suggesting changing the mode of the file, but using chmod(1M) to
add/modify ZFS ACLs on the device file.
chmod A+user:gdm:rwx:allow file
See chmod(1M) or the zfs admin guide for ZFS ACL
Brian Cameron wrote:
Mark & others:
I think you may have misunderstood what people were suggesting. They
weren't suggesting changing the mode of the file, but using chmod(1M) to
add/modify ZFS ACLs on the device file.
chmod A+user:gdm:rwx:allow file
See chmod(1M) or the zfs admin guide
Mark:
You could call acl(2) directly, but you would have to construct a
complete ACL to set. It would be easier to use acl_get() and acl_set()
which understand the various ACL flavors and will call the syscall with
correct ACL flavor arguments.
Unfortunately, what you are wanting to do
You should probably make sure that you just don't keep continually
adding the same entry over and over again to the ACL. With NFSv4 ACLs
you can insert the same entry multiple times and if you keep doing it
long enough you will eventually get an error back when you reach the
ACE limit on
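A simple guard against piling up duplicate entries from a script is to check the existing ACL first; the device path here is hypothetical and the chmod form is the one quoted above:

# ls -v /dev/audio | grep 'user:gdm' > /dev/null || chmod A+user:gdm:rwx:allow /dev/audio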
Mark Shellenbaum wrote:
You should probably make sure that you just don't keep continually
adding the same entry over and over again to the ACL. With NFSv4 ACLs
you can insert the same entry multiple times and if you keep doing it
long enough you will eventually get an error back when you
On Thu, Dec 11, 2008 at 04:46:33PM -0700, Mark Shellenbaum wrote:
Mark Shellenbaum wrote:
You should probably make sure that you just don't keep continually
adding the same entry over and over again to the ACL. With NFSv4 ACLs
you can insert the same entry multiple times and if you keep
Hello,
Slightly off-topic, but only slightly.
With ZFS I tend to configure /var/cores as a separate zfs file system
with a quota set on it + coreadm configured that way so all cores go
to /var/cores.
This is especially useful with in-house applications running on
servers.
--
Best regards,
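A sketch of that setup (pool/dataset name, quota and core pattern are illustrative):

# zfs create -o mountpoint=/var/cores -o quota=2G rpool/var_cores
# coreadm -g /var/cores/core.%f.%p -e global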
On Fri, Dec 12, 2008 at 12:04:39AM +, Robert Milkowski wrote:
Slightly off-topic, but only slightly.
With ZFS I tend to configure /var/cores as a separate zfs file system
with a quota set on it + coreadm configured that way so all cores go
to /var/cores.
This is especially useful with
Robert Milkowski wrote:
Hello,
Slightly off-topic, but only slightly.
With ZFS I tend to configure /var/cores as a separate zfs file system
with a quota set on it + coreadm configured that way so all cores go
to /var/cores.
While this might cause some issues with programs that expect or
New to this list and I have simple question. It states in the Solaris
ZFS Admin Guide
"You cannot use the standard upgrade program to upgrade your UFS root
file system to a
ZFS root file system. If at least one bootable UFS slice exists, then
the standard upgrade
option should be
On Fri 12/12/08 14:51, Michael Barto mba...@logiqwest.com sent:
I am probably being paranoid, but there is always an upgrade option
using the Solaris CDROM image to install software. It appears this
note in only for Jumpstart and Live Upgrade. So when Sun release the
next Solaris release, I
I compiled just the iscsi initiator this evening and remembered this thread, so
here are the simple instructions. This is not the right way to do it.
It's just the easiest. I should (and probably will) use a Makefile for this,
but for now, this works and I've been able to tweak some of
Michael,
Sure. You can use Solaris 10 10/08 to initially install a ZFS root
file system as long as you're not interested in migrating a UFS
root file system to a ZFS root file system.
But if you want to migrate your existing UFS root file system to
a ZFS root file system, then you must perform
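The migration itself is done with Live Upgrade; roughly, with placeholder pool, slice and BE names:

# zpool create rpool c1t0d0s0    # slice with an SMI label for the root pool
# lucreate -n zfsBE -p rpool     # copy the current UFS boot environment into the pool
# luactivate zfsBE
# init 6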
On Wed, Dec 10, 2008 at 12:58:48PM -0800, Richard Elling wrote:
Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 01:30:30PM -0600, Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 12:46:40PM -0600, Gary Mills wrote:
On the server, a variety of filesystems can be created on this virtual
Gary Mills wrote:
The split responsibility model is quite appealing. I'd like to see
ZFS address this model. Is there not a way that ZFS could delegate
responsibility for both error detection and correction to the storage
device, at least one more sophisticated than a physical disk?
Ok. I figured out how to basically backport snv104's iscsi initiator (version
1.55 versus snv101b's 1.54) into snv101b.
Set everything up as in my previous post then create a new C file:
cat > boot_sym.c <<EOF
#include <sys/bootprops.h>
/*
* Global iscsi boot prop
*/
ib_boot_prop_t
On Thu, 11 Dec 2008, Gary Mills wrote:
The split responsibility model is quite appealing. I'd like to see
ZFS address this model. Is there not a way that ZFS could delegate
responsibility for both error detection and correction to the storage
device, at least one more sophisticated than a
Gary Mills wrote:
On Wed, Dec 10, 2008 at 12:58:48PM -0800, Richard Elling wrote:
Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 01:30:30PM -0600, Nicolas Williams wrote:
On Wed, Dec 10, 2008 at 12:46:40PM -0600, Gary Mills wrote:
On the server, a variety
On Thu, Dec 11, 2008 at 09:54:36PM -0800, Richard Elling wrote:
I'm not really sure what you mean by split responsibility model. I
think you will find that previous designs have more (blind?) trust in
the underlying infrastructure. ZFS is designed to trust, but verify.
I think he means ZFS w/