On Thu, Mar 5, 2009 at 1:09 PM, Kyle Kakligian kaklig...@google.com wrote:
On Wed, Mar 4, 2009 at 7:59 PM, Richard Elling richard.ell...@gmail.com
wrote:
additional comment below...
Kyle Kakligian wrote:
On Mon, Mar 2, 2009 at 8:30 AM, Blake blake.ir...@gmail.com wrote:
that link
How do I make sure any new file inherits the group permission from its
directory in ZFS?
I tried to add a non-trivial ACL (index id 3), but the file's
permissions are still following the user's umask
# ls -dv folder/
drwxrwxr-x+ 2 root other 3 Mar 6 02:09 folder/
0:owner@::deny
A recent increase in email about ZFS and SNDR (the replication
component of Availability Suite) has given me reason to post one of
my replies.
Well, now I'm confused! A colleague just pointed me towards your blog
entry about SNDR and ZFS which, until now, I thought was not a
supported
Jim Dunham wrote:
Unlike UFS filesystems with lockfs -f or lockfs -w, there is no
'supported' way to get ZFS to empty the ZIL to disk on demand. So even
though one will get both ZFS and application filesystem consistency
within the SNDR secondary volume, there can be many seconds' worth of
Jim Dunham wrote:
ZFS, the filesystem, is always on-disk consistent, and ZFS does maintain
filesystem consistency through coordination between the ZPL (ZFS POSIX
Layer) and the ZIL (ZFS Intent Log). Unfortunately for SNDR, ZFS
caches a lot of an application's filesystem data in the ZIL, therefore
Gurus;
I am using OpenSolaris 2008.11 snv_101b_rc2 X86
Prior to this I was using SXCE build 91 (or thereabouts)
On build 91, after the command
# zfs snapshot -r myplace
I could easily see the snapshot using zfs list.
But after installing OpenSolaris 2008.11 snv_101b, I once again created a
On Fri, Mar 6, 2009 at 7:32 AM, Asif Iqbal vad...@gmail.com wrote:
How do I make sure any new file inherits the group permission from its
directory in ZFS?
I tried to add a non-trivial ACL (index id 3), but the file's
permissions are still following the user's umask
# ls -dv folder/
Asif Iqbal wrote:
How do I make sure any new file inherits the group permission from its
directory in ZFS?
I tried to add a non-trivial ACL (index id 3), but the file's
permissions are still following the user's umask
# ls -dv folder/
drwxrwxr-x+ 2 root other 3 Mar 6 02:09 folder/
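For the inheritance to take effect, the ACL entry needs inherit flags and the dataset's aclinherit property must not discard them. A minimal sketch, assuming a hypothetical dataset tank/export and the group 'other' from the example above:

  # zfs set aclinherit=passthrough tank/export
  # chmod A+group:other:read_data/write_data/execute:file_inherit/dir_inherit:allow folder
  # ls -dv folder/     (the new ACE should now show the file_inherit/dir_inherit flags)

New files created under folder/ should then receive the group ACE rather than having it clipped by the creating process's umask.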
Hi Steven,
Try doing 'zfs list -t all'. This is a change that went in late last year
to list only datasets unless snapshots were explicitly requested.
On Fri, 6 Mar 2009, Steven Sim wrote:
Gurus;
I am using OpenSolaris 2008.11 snv_101b_rc2 X86
Prior to this I was using SXCE build 91
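For reference, the relevant commands after that change look like this (myplace is the pool name from the original post):

  # zfs list -t all           list datasets and snapshots together
  # zfs list -t snapshot      list snapshots only
  # zpool set listsnapshots=on myplace

The listsnapshots pool property makes a plain 'zfs list' include snapshots again; note that it is set with zpool, not zfs.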
On Mar 6, 2009, at 8:58 AM, Andrew Gabriel wrote:
Jim Dunham wrote:
ZFS, the filesystem, is always on-disk consistent, and ZFS does
maintain filesystem consistency through coordination between the
ZPL (ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately
for SNDR, ZFS caches a lot
Andrew,
Jim Dunham wrote:
ZFS, the filesystem, is always on-disk consistent, and ZFS does
maintain filesystem consistency through coordination between the
ZPL (ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately
for SNDR, ZFS caches a lot of an application's filesystem data in
the
Steven Sim wrote:
Gurus;
I am using OpenSolaris 2008.11 snv_101b_rc2 X86
Prior to this I was using SXCE build 91 (or thereabouts)
On build 91, after the command
# zfs snapshot -r myplace
I could easily see the snapshot using zfs list.
But after installing OpenSolaris 2008.11 snv_101b, I once
On 03/06/09 09:53, Steven Sim wrote:
Gurus;
I am using OpenSolaris 2008.11 snv_101b_rc2 X86
Prior to this I was using SXCE build 91 (or thereabouts)
On build 91, after the command
# zfs snapshot -r myplace
I could easily see the snapshot using zfs list.
But after installing OpenSolaris 2008.11
I'd like to correct a few misconceptions about the ZIL here.
On 03/06/09 06:01, Jim Dunham wrote:
ZFS, the filesystem, is always on-disk consistent, and ZFS does maintain
filesystem consistency through coordination between the ZPL (ZFS POSIX
Layer) and the ZIL (ZFS Intent Log).
Pool and file
On 03/06/09 08:10, Jim Dunham wrote:
Andrew,
Jim Dunham wrote:
ZFS, the filesystem, is always on-disk consistent, and ZFS does
maintain filesystem consistency through coordination between the ZPL
(ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately for
SNDR, ZFS caches a lot of an
On Fri, Mar 06, 2009 at 10:05:46AM -0700, Neil Perrin wrote:
On 03/06/09 08:10, Jim Dunham wrote:
A simple test I performed to verify this, was to append to a ZFS file
(no synchronous filesystem options being set) a series of blocks with a
block order pattern contained within. At some random
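A minimal sketch of that kind of ordered-append test (the file path is hypothetical):

  i=0
  while :; do
      # write an 8-byte, zero-padded sequence number at block offset $i
      printf '%08d' $i | dd of=/tank/fs/seqfile bs=8 oseek=$i conv=notrunc 2>/dev/null
      i=$((i+1))
  done

Reading the file back from the SNDR secondary and checking that the sequence numbers form an unbroken prefix shows how many of the most recent appends were lost.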
Jonathan Edwards wrote:
On Mar 6, 2009, at 8:58 AM, Andrew Gabriel wrote:
Jim Dunham wrote:
ZFS, the filesystem, is always on-disk consistent, and ZFS does
maintain filesystem consistency through coordination between the ZPL
(ZFS POSIX Layer) and the ZIL (ZFS Intent Log). Unfortunately for
C. Bergström wrote:
Bob Friesenhahn wrote:
I don't know if anyone has noticed that the topic is Google Summer of
Code. There is only so much that a starving college student can
accomplish from a dead start in 1-1/2 months. The ZFS equivalent of
eliminating world hunger is not among the
Dave wrote:
C. Bergström wrote:
Bob Friesenhahn wrote:
I don't know if anyone has noticed that the topic is Google Summer
of Code. There is only so much that a starving college student can
accomplish from a dead start in 1-1/2 months. The ZFS equivalent of
eliminating world hunger is not
Nicolas,
On Fri, Mar 06, 2009 at 10:05:46AM -0700, Neil Perrin wrote:
On 03/06/09 08:10, Jim Dunham wrote:
A simple test I performed to verify this, was to append to a ZFS file
(no synchronous filesystem options being set) a series of blocks with a
block order pattern contained within. At
On Fri, Mar 06, 2009 at 03:10:41PM -0500, Jim Dunham wrote:
Wouldn't one have to quiesce (export) the pool on the primary before
importing it on the secondary?
No. ZFS is always on-disk consistent, so as long as SNDR is in logging
mode, zpool import will work on the secondary node.
As
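A sketch of that recovery sequence on the secondary node, assuming a hypothetical SNDR I/O group zfsgroup and pool tank:

  secondary# sndradm -g zfsgroup -n -l     put the SNDR set(s) into logging mode
  secondary# zpool import -f tank          import the pool from the replicated volumes

The -f is typically needed because the pool was last in use on the primary host.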
I have savecore enabled, but it doesn't look like the machine is
dumping core as it should - that is, I don't think it's a panic - I
suspect interrupt handling.
Speaking of which, does OpenSolaris support Plug'n'Play IRQ assignment?
On Thu, Mar 5, 2009 at 3:12 PM, Mark J Musante
I've gotten knee-deep into learning how to use OpenSolaris and zfs, and I
see now that my goal of a home zfs server may have been better served if
I had partitioned the install disk, leaving some of the 60GB to be
added to a zpool.
First, how much space does a working OS need? I don't mean bare
On Fri, 6 Mar 2009, Blake wrote:
I have savecore enabled, but it doesn't look like the machine is dumping
core as it should - that is, I don't think it's a panic - I suspect
interrupt handling.
Then when you say you had a machine crash, what did you mean?
Did you look in /var/crash/* to see
I have savecore enabled, but nothing in /var/crash:
r...@filer:~# savecore -v
savecore: dump already processed
r...@filer:~# ls /var/crash/filer/
r...@filer:~#
On Fri, Mar 6, 2009 at 4:21 PM, Mark J Musante mmusa...@east.sun.com wrote:
On Fri, 6 Mar 2009, Blake wrote:
I have savecore
jd == Jim Dunham james.dun...@sun.com writes:
jd It is my understanding that the ZFS intent log (ZIL) satisfies
jd POSIX requirements for synchronous transactions, thus
jd filesystem consistency.
maybe ``file consistency'' would be clearer. When you say filesystem
consistency
On Fri, 6 Mar 2009, Blake wrote:
I have savecore enabled, but nothing in /var/crash:
r...@filer:~# savecore -v
savecore: dump already processed
r...@filer:~# ls /var/crash/filer/
r...@filer:~#
OK, just to ask the dumb questions: is dumpadm configured for
/var/crash/filer? Is the dump zvol
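Those checks map onto commands like these (rpool/dump is the default dump zvol on a 2008.11 install; adjust to taste):

  # dumpadm                               show the dump device and savecore directory
  # dumpadm -s /var/crash/filer           point savecore at the expected directory
  # zfs get volsize rpool/dump            confirm the dump zvol is large enough
  # dumpadm -d /dev/zvol/dsk/rpool/dump   (re)assign the dump device if needed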
SOLVED
According to `zdb -l /dev/rdsk/vdev`, one of my drives was missing
two of its four redundant labels (#2 and #3). These two are next to
each other at the end of the device, so it makes some sense that they
could both be garbled.
I'm not sure why `zpool import` choked on this [typical?] error
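A quick way to repeat that label check across candidate devices (the device glob is hypothetical; adjust to your controller/target numbering):

  for d in /dev/rdsk/c*t*d*s0; do
      echo "== $d =="
      zdb -l $d | grep -c version    # a healthy vdev reports one version line per label, 4 in all
  done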
np == Neil Perrin neil.per...@sun.com writes:
np Alternatively, a lockfs will flush just a file system to
np stable storage but in this case just the intent log is
np written. (Then later when the txg commits those intent log
np records are discarded).
In your blog it sounded
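For reference, the flush Neil describes is a one-liner (the mountpoint is hypothetical):

  # lockfs -f /tank/home

On ZFS this commits just that filesystem's intent log to stable storage; the regular data blocks still go out with the next txg commit, after which the intent log records are discarded.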
These are fair questions, answered inline below :)
On Fri, Mar 6, 2009 at 4:45 PM, Mark J Musante mmusa...@east.sun.com wrote:
On Fri, 6 Mar 2009, Blake wrote:
OK, just to ask the dumb questions: is dumpadm configured for
/var/crash/filer? Is the dump zvol big enough? How do you know the
On Fri, 06 Mar 2009 15:17:22 -0600, Harry Putnam
rea...@newsguy.com wrote:
First, how much space does a working OS need? I don't mean the bare
minimum, but enough to be comfortable and have some growing room (on the
install disk).
It depends on the installation you use (Plain Solaris 10,
one of the
It's been suggested before and I've heard it's in the FreeBSD port...
support for spindown?
On Fri, Mar 6, 2009 at 11:40 AM, Dave dave-...@dubkat.com wrote:
C. Bergström wrote:
Bob Friesenhahn wrote:
I don't know if anyone has noticed that the topic is Google Summer of
Code. There is only so
On 03/06/09 14:51, Miles Nordin wrote:
np == Neil Perrin neil.per...@sun.com writes:
np Alternatively, a lockfs will flush just a file system to
np stable storage but in this case just the intent log is
np written. (Then later when the txg commits those intent log
np records
I would really like to see a feature like 'zfs diff f...@snap1 f...@othersnap'
that would report the paths of files that have either been added, deleted,
or changed between snapshots. If this could be done at the ZFS level instead
of the application level it would be very cool.
--
AFAIK,
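No such command existed in ZFS at the time; as a sketch, the requested interface might look something like this (names and output format entirely hypothetical):

  # zfs diff tank/home@snap1 tank/home@othersnap
  M   /tank/home/report.txt     modified
  +   /tank/home/new.txt        added
  -   /tank/home/old.txt        removed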
Kees Nuyt k.n...@zonnet.nl writes:
On Fri, 06 Mar 2009 15:17:22 -0600, Harry Putnam
rea...@newsguy.com wrote:
First, how much space does a working OS need? I don't mean the bare
minimum, but enough to be comfortable and have some growing room (on the
install disk).
It depends on the installation you
Hi!
Today I have ten computers with Xen and Linux, each with two 500G disks in
RAID1; each node sees only its own RAID1 volume. I do not have live migration
of my virtual machines... and moving the data from one hypervisor to another
is a painful task...
Now that I discovered this awesome file
On Sat, Mar 7, 2009 at 11:05 AM, Thiago C. M. Cordeiro | World Web
thiago.mart...@worldweb.com.br wrote:
Hi!
Today I have ten computers with Xen and Linux, each with two 500G disks in
RAID1; each node sees only its own RAID1 volume. I do not have live migration
of my virtual machines... and
On Sat, Mar 7, 2009 at 11:23 AM, Sriram Narayanan sri...@belenix.org wrote:
On Sat, Mar 7, 2009 at 11:05 AM, Thiago C. M. Cordeiro | World Web
thiago.mart...@worldweb.com.br wrote:
Hi!
Today I have ten computers with Xen and Linux, each with two 500G disks in
RAID1; each node sees only
On Sat, Mar 7, 2009 at 11:26 AM, Sriram Narayanan sri...@belenix.org wrote:
On Sat, Mar 7, 2009 at 11:23 AM, Sriram Narayanan sri...@belenix.org wrote:
snip/
I intend to experiment with iSCSI later when I free up some machines
for such an experiment.
My only tip for Linux-based iSCSI
On Sat, Mar 7, 2009 at 8:27 AM, Harry Putnam rea...@newsguy.com wrote:
I'm still a little confused about the various versions, but I guess
since I installed from the official OpenSolaris 2008.11, which gave me
101b, and then updated to dev (208), that would be Indiana, right?
That'd be build
Hello!
I want to know something... Is it okay to export the AoE disks to the virtual
OpenSolaris machine through dom0?
For example, in dom0 I'd like to have:
from node01 via AoE to dom0 - opensolaris01 domU
500G /dev/ether/e1.0 (c3d1 /xpvd/x...@1)
500G /dev/ether/e1.1 (c3d2 /xpvd/x...@2)
from
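A sketch of what that export could look like in the domU configuration on dom0, using the device names from the post (the xm/xend config syntax and guest device names are assumptions):

  disk = ['phy:/dev/ether/e1.0,xvda,w',
          'phy:/dev/ether/e1.1,xvdb,w']

Inside the OpenSolaris domU the paravirtual disks then appear under /xpvd, matching the c3d1/c3d2 names above; the exact @N node numbers depend on the virtual device IDs assigned.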