Please read also http://docs.info.apple.com/article.html?artnum=303503.
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I am using Netbackup 6.0 MP3 on several ZFS systems just fine. I think that NBU won't back up some exotic ACLs of ZFS, but if you are using ZFS like other filesystems (UFS, etc.) then there aren't any issues.
Hmm. ACLs are not so exotic.
This IS a really BIG issue. If you are using
Hi all,
as I am a newbie to ZFS, yesterday I played with it a little bit and there
are so many good things, but I've noted a few things I couldn't explain, so:
1) It's not possible anymore within a pool to create a file system with a
specific size. If I have 2 file systems I can't decide to give
Hi,
Dick Davies wrote:
On 22/09/06, Alf [EMAIL PROTECTED] wrote:
1) It's not possible anymore within a pool to create a file system with a
specific size. If I have 2 file systems I can't decide to give for
example 10g to one and 20g to the other one unless I set a reservation
for them. Also I
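Fixed per-filesystem sizes are indeed set through properties rather than at creation time. A minimal sketch, assuming a pool named tank and illustrative filesystem names:

```shell
# Filesystems are created without a size...
zfs create tank/fs1
zfs create tank/fs2

# ...then capped with a quota and, if the space must be
# guaranteed to that filesystem, a matching reservation.
zfs set quota=10g tank/fs1
zfs set reservation=10g tank/fs1
zfs set quota=20g tank/fs2
zfs set reservation=20g tank/fs2
```

A quota alone caps growth; the reservation additionally withholds that space from the rest of the pool.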
Hi Michael,
I completely agree with you. I was just wondering about the differences
between ZFS and other volume managers, and also whether I got the essence of it.
Also customers could ask about these things, and whether they can use ZFS
filesystems in the old-fashioned way by setting a specific size.
What do you think about
Alf wrote:
What do you think about pulling out a mirror on a D1000 and the complete
hang of the system?
I left that for others to answer on purpose - I don't know the HW well enough by
far :-)
--
Michael Schuster +49 89 46008-2974 / x62974
visit the online support center:
Hi James,
I agree with you, but I think it could take a while.
cheers
Alf
James C. McPherson wrote:
Alf wrote:
Hi Michael,
I completely agree with you. I was just wondering about the
differences between ZFS and other volume managers, and also whether I got
the essence of it.
Also customers could ask
On 9/22/06, Dick Davies [EMAIL PROTECTED] wrote:
On 22/09/06, Alf [EMAIL PROTECTED] wrote:
2) I mirrored 2 disks within the same D1000 and while I was putting a
big tar ball in the FS I tried to physically remove one mirror and
You mean pull it out? Does your hardware support hotswap?
You mean pull it out? Does your hardware support hotswap?
As far as I know the D1000 supports it... does it?
I'm sure the D1000 is fine with the concept. It's probably something in
the software stack that is upset.
I was told that a similar issue that I once had when testing was likely
due to
I believe I am experiencing a similar, but more severe issue and I do
not know how to resolve it. I used liveupgrade from s10u2 to NV b46
(via solaris express release). My second disk is zfs with the file
system fitz. I did a 'zpool export fitz'
Reboot with init 6 into new environment, NV
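An exported pool does not come back by itself after the reboot; it has to be imported into the new environment. A minimal sketch, assuming the pool name fitz from above:

```shell
# Show pools that are visible on disk but not yet imported
zpool import

# Import fitz by name; -f forces it if the pool still
# looks in use by the previous boot environment
zpool import -f fitz
```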
I believe I am experiencing a similar, but more severe issue and I do
not know how to resolve it. I used liveupgrade from s10u2 to NV b46
(via solaris express release). My second disk is zfs with the file
system fitz. I did a 'zpool export fitz'
Reboot with init 6 into new
Apologies for any confusion, but I am now able to give more output
regarding the zpool fitz.
unknown# zfs list    -- returns the list of zfs file system fitz and related snapshots
unknown# zpool status
  pool: fitz
 state: ONLINE
status: The pool is formatted using an older on-disk format. The pool can
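That status message is informational: the pool still works, but its on-disk format predates what the running software supports. Upgrading is one-way, so older releases can no longer import the pool afterwards. A sketch:

```shell
# List supported on-disk versions and per-pool status
zpool upgrade -v

# Upgrade the pool to the current on-disk format
# (irreversible: older releases cannot import it afterwards)
zpool upgrade fitz
```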
Alexei Rodriguez wrote:
Unless they break the spec, yes, it should work. PCI
Excellent to know! I will verify that the motherboard and the PCI-X cards play
well together.
Thanks!
Alexei
On September 22, 2006 10:26:01 AM -0700 Alexei Rodriguez
[EMAIL PROTECTED] wrote:
Alexei Rodriguez wrote:
Unless they break the spec, yes, it should work. PCI
Excellent to know! I will verify that the motherboard and the PCI-X cards
play well together.
You might run into a problem with 3.3V
I have set up a small box to work with zfs (2x 2.4GHz xeons, 4GB memory,
6x scsi disks). I made one drive the boot drive and put the other five
into a pool with the zpool create tank command right out of the admin manual.
The administration experience has been very nice and most
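The setup described above can be sketched as follows; the device names are assumptions, check `format` for the real ones:

```shell
# Five data disks in one pool; with no mirror/raidz
# keyword this is a plain dynamic stripe.
zpool create tank c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

# Confirm the layout and health
zpool status tank
```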
ZFS uses a 128k block size by default. If you change dd to use bs=128k, do you
observe any performance improvement?
| # time dd if=zeros-10g of=/dev/null bs=8k count=102400
| 102400+0 records in
| 102400+0 records out
| real    1m8.763s
| user    0m0.104s
| sys     0m1.759s
It's also worth
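The 128k figure is the default ZFS recordsize, a per-dataset property that can be inspected and tuned; the dataset name here is hypothetical:

```shell
# Inspect the current recordsize of a dataset
zfs get recordsize tank/data

# Match a fixed-record workload (e.g. an 8k-page database)
zfs set recordsize=8k tank/data
```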
Wow! I solved a tricky problem this morning thanks to Zones ZFS integration.
We have a SAS SPDS database environment running on Sol10 06/06. The SPDS
database is unique in that when a table is being updated by one user it is
unavailable to the rest of the user community. Our nightly update
The history is quite simple:
1) Installed nv_b32 or around there on a zeroed drive. Created this
ZFS pool for the first time.
2) Non-live upgraded to nv_b42 when it came out, zpool upgrade on the
zpool in question from v2 to v3.
3) Tried to non-live upgrade to nv_b44, upgrade failed every time,
On Fri, 22 Sep 2006, johansen wrote:
ZFS uses a 128k block size. If you change dd to use a
bs=128k, do you observe any performance improvement?
I had tried other sizes with much the same results, but
hadn't gone as large as 128K. With bs=128K, it gets worse:
| # time dd if=zeros-10g
Haik,
Thank you very much. 'zpool list' yields:
NAME   SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
z     74.5G  22.9G  51.6G   30%  ONLINE  -
How do I confirm that /fitz is not currently a zfs
mountpoint? 'zfs mount' yields:
fitz/home            /fitz/home
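Whether a given path is currently a ZFS mountpoint can also be checked directly from the dataset properties; a sketch using the names above:

```shell
# All mounted ZFS datasets with their mountpoints
zfs mount

# The mountpoint and mounted properties of one dataset
zfs get mountpoint,mounted fitz/home
```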
Harley:
I had tried other sizes with much the same results, but
hadn't gone as large as 128K. With bs=128K, it gets worse:
| # time dd if=zeros-10g of=/dev/null bs=128k count=102400
| 81920+0 records in
| 81920+0 records out
|
| real    2m19.023s
| user    0m0.105s
| sys
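The raw effect of dd's block size can be reproduced on any system with a small test file; the path and sizes here are illustrative:

```shell
#!/bin/sh
# Create a 1 MiB test file, then re-read it with small
# and large block sizes to compare the elapsed times.
dd if=/dev/zero of=/tmp/ddtest.bin bs=8k count=128 2>/dev/null

time dd if=/tmp/ddtest.bin of=/dev/null bs=8k 2>/dev/null
time dd if=/tmp/ddtest.bin of=/dev/null bs=128k 2>/dev/null

rm /tmp/ddtest.bin
```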
Update ...
iostat output during zpool scrub
extended device statistics
device r/sw/s Mr/s Mw/s wait actv svc_t %w %b
sd34 2.0 395.20.10.6 0.0 34.8 87.7 0 100
sd3521.0 312.21.22.9 0.0 26.0 78.0 0
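For reference, output like the above comes from kicking off a scrub and watching the disks while it runs; a sketch, assuming a pool named z:

```shell
# Start a scrub and check its progress
zpool scrub z
zpool status z

# Extended per-device statistics every 5 seconds
iostat -x 5
```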
Update ...
iostat output during zpool scrub:
                    extended device statistics
device    r/s    w/s   Mr/s  Mw/s  wait  actv  svc_t  %w  %b
sd34      2.0  395.2    0.1   0.6   0.0  34.8   87.7   0 100
sd35     21.0  312.2    1.2   2.9   0.0  26.0   78.0   0  79
sd36     20.0    1.0
On Fri, 22 Sep 2006, [EMAIL PROTECTED] wrote:
Are you just trying to measure ZFS's read performance here?
That is what I started looking at. We scrounged around
and found a set of 300GB drives to replace the old ones we
started with. Comparing these new drives to the old ones:
Old 36GB
Harley:
Old 36GB drives:
| # time mkfile -v 1g zeros-1g
| zeros-1g 1073741824 bytes
|
| real    2m31.991s
| user    0m0.007s
| sys     0m0.923s
Newer 300GB drives:
| # time mkfile -v 1g zeros-1g
| zeros-1g 1073741824 bytes
|
| real    0m8.425s
| user    0m0.010s
| sys
On 9/22/06, Gino Ruopolo [EMAIL PROTECTED] wrote:
Update ...
iostat output during zpool scrub:
                    extended device statistics
device    r/s    w/s   Mr/s  Mw/s  wait  actv  svc_t  %w  %b
sd34      2.0  395.2    0.1   0.6   0.0  34.8   87.7   0 100
sd35     21.0  312.2    1.2   2.9   0.0  26.0   78.0   0