Marko Milisavljevic wrote:
> Hmm.. my b69 installation understands zfs allow, but man zfs has no info
> at all.
Usually the manpages are updated in the same build that adds the feature, but
the delegated administration manpage changes were extensive and slipped to
build 70.
--matt
Hmm... my b69 installation understands zfs allow, but man zfs has no info at
all. man says it was last modified on June 28, 2007, and also:
-r--r--r-- 1 root bin 59081 Jul 10 12:34 /usr/share/man/man1m/zfs.1m
I installed b69 by using Live Upgrade from, I think, b65.
Is this a bug that needs filing?
Yaniv Aknin wrote:
> When volumes approach 90% usage, and under medium/light load (zpool iostat
> reports 50 MB/s and 750 IOPS reads), some creat64 system calls take over 50
> seconds to complete (observed with 'truss -D touch'). When doing manual
> tests, I've seen similar times on unlink() calls (t...
- A close-sync option on file systems (i.e. when the app calls close() the
  file is flushed, including mmap'd data; no data loss of closed files on a
  system crash).
- Atomic/locked operations across all pools, e.g. snapshot all or selected
  pools at the same time.
- Allowance for offline files, e.g. the first part of a file...
Blake wrote:
> Now I'm curious.
>
> I was recursively removing snapshots that had been generated recursively
> with the '-r' option. I'm running snv_65 - is this a recent feature?
No; it was integrated in snv_43, and is in s10u3. See:
PSARC 2006/388 snapshot -r
6373978 want to take lots of snapshots quickly ('zfs snapshot -r')
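For example (the pool and snapshot names are hypothetical, not from the
original mail), the recursive forms look like:

  zfs snapshot -r tank@today
  zfs destroy -r tank@today

The first command snapshots 'tank' and every descendant dataset atomically;
the second removes the whole set again.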
Thank you very much for this input. I eventually upgraded to snv_69 and did the
ON build of 69 with your patch. I copied the patched kernels over and have now
re-imported the defunct pool. The pool is working after a quick 'resilvering'.
Thanks very much!
Now I'm curious.
I was recursively removing snapshots that had been generated recursively
with the '-r' option. I'm running snv_65 - is this a recent feature?
Example:
2007-07-08.05:48:36 zfs destroy -r [EMAIL PROTECTED]
(this removed each recursive snapshot for all the filesystems contained in
the pool)
Brandon Barker wrote:
> Can a filesystem that is part of a raid-z pool be booted? If not, is support
> for this planned?
>
It is not planned for the first release of ZFS root file system support.
A root pool can be mirrored, but not striped or raid-z.
It is likely to be supported in the next release.
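As a sketch (the device names are assumptions, not from the original mail),
an existing root pool is mirrored by attaching a second disk and then putting
boot blocks on it:

  zpool attach rpool c0t0d0s0 c0t1d0s0
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

(installgrub is the x86 form; SPARC systems use installboot instead.)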
Can a filesystem that is part of a raid-z pool be booted? If not, is support
for this planned?
It seems that this would be a dream come true for the small to mid-sized file
server owner.
On Thu, Aug 16, 2007 at 05:20:25AM -0700, ramprakash wrote:
> #zfs mount -a
> 1. mounts "c" again.
> 2. but not "vol1" [ i.e. /dev/zvol/dsk/mytank/b/c does not contain "vol1" ]
>
> Is this the normal behavior or is it a bug?
That looks like a bug. Please file it.
Adam
--
Adam Leventhal
On 8/16/07, Itay Menahem <[EMAIL PROTECTED]> wrote:
> Hi,
> Are there any Legato backup system considerations that I should take into
> account while building zpools and file systems on a Sun X4500 machine? I was
> thinking of considerations such as the zpool size, file system properties...
D'oh!
Many thanks, and sorry for the noise...
/jim
Joy Marshall - Solaris Revenue Product Engineering wrote:
> Hi Jim,
>
> The handout referenced is in fact the second of the two PDF documents
> posted on the LOSUG website.
>
> Cheers,
> Joy
>
>
> Jim Mauro wrote:
>>
>> Is the referenced Laminated Handout on slide 3 available anywhere in
>> any form electronically?
Hi Jim,
The handout referenced is in fact the second of the two PDF documents
posted on the LOSUG website.
Cheers,
Joy
Jim Mauro wrote:
>
> Is the referenced Laminated Handout on slide 3 available anywhere in
> any form electronically?
>
> If not, I'd be happy to create an electronic copy and make it publicly
> available.
Is the referenced Laminated Handout on slide 3 available anywhere in any
form electronically?
If not, I'd be happy to create an electronic copy and make it publicly
available.
Thanks,
/jim
Joy Marshall wrote:
> It's taken a while but at last we have been able to post the ZFS "Under the
> Hood" presentation slides from the session back at May's LOSUG.
To list your snapshots:

  /usr/sbin/zfs list -H -t snapshot -o name

Then you could use that in a for loop:

  # WARNING: this destroys every snapshot on the system
  for i in $(/usr/sbin/zfs list -H -t snapshot -o name); do
      echo "Destroying snapshot: $i"
      /usr/sbin/zfs destroy "$i"
  done

The above would destroy all your snapshots. You could put a grep in the
pipeline to limit which snapshots get destroyed.
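For instance (the dataset and the naming pattern here are assumptions, just
for illustration), to destroy only the 'daily-' snapshots of tank/home:

  for i in $(/usr/sbin/zfs list -H -t snapshot -o name | grep '^tank/home@daily-'); do
      /usr/sbin/zfs destroy "$i"
  done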
It's taken a while but at last we have been able to post the ZFS "Under the
Hood" presentation slides from the session back at May's LOSUG.
You can view both the presentation slides and a layered overview here:
Presentation:
http://www.opensolaris.org/os/community/os_user_groups/losug/ZFS-UTH_3
> Hi. I want to delete a whole series of snapshots.
>
> How do I go about that?
>
> I have tried doing a rm -rf of one of the snapshot directories but I'm
> not allowed since it's read-only, even though I'm root.
You can't use 'rm -r' because the snapshot contents are immutable.
But you may be able to simply use 'zfs destroy' instead.
Lars-Erik Bjørk wrote:
> Hi all!
>
> I need a non-root user to be able to perform zfs snapshots and rollbacks.
> Does anybody know what privileges that should be specified in
> /etc/user_attr ?
Use the user delegation feature instead; this is exactly what it was
designed for.
# zfs allow -u l
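A sketch of the full form (the username and dataset here are assumptions, not
from the original mail):

  # zfs allow -u lars snapshot,rollback,mount tank/home/lars

This grants the user permission to snapshot, roll back, and mount that dataset
(and, by default, its descendants); rollback generally needs the mount
permission as well.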
Hi all!
I need a non-root user to be able to perform zfs snapshots and rollbacks.
Does anybody know what privileges should be specified in
/etc/user_attr ?
Best regards,
Lars-Erik Bjørk
Hey,
"zfs destroy snapshotname" is the way I use to remove snapshots.
Kind regards,
Steve
On 8/17/07, Luke Vanderfluit <[EMAIL PROTECTED]> wrote:
>
> Hi.
>
> Thanks greatly for your reply.
> Since I am trying _not_ to inadvertently destroy anything but the
> snapshots... could you tell me your...