On 3/28/07, Fred Oliver [EMAIL PROTECTED] wrote:
Has consideration been given to setting multiple properties at once in a
single zfs set command?
For example, consider attempting to maintain quota == reservation, while
increasing both. It is impossible to maintain this equality without some
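A minimal sketch of the window being described, assuming (as I recall) that ZFS rejects a reservation larger than the quota, so the quota has to be raised first (pool/fs is a hypothetical dataset):

# zfs set quota=20g pool/fs
# zfs set reservation=20g pool/fs

Between the two commands the properties are briefly unequal; a single zfs set accepting several property=value pairs would close that window atomically.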
Hello Selim,
Wednesday, March 28, 2007, 5:45:42 AM, you wrote:
SD talking of which,
SD what's the effort and consequences to increase the max allowed block
SD size in zfs to higher figures like 1M...
Think what would happen then if you try to read 100KB of data - due to
checksumming ZFS would
and does it vary by filesystem type? I know I ought to know the
answer, but it's been a long time since I thought about it, and
I must not be looking at the right man pages. And also, if it varies,
how does one tell? For a pipe, there's fpathconf() with _PC_PIPE_BUF,
but how about for a regular
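For the pipe case at least, the fpathconf() limit is also visible from the shell via getconf(1) (the fifo path here is hypothetical):

$ getconf PIPE_BUF /tmp/myfifo

For regular files there is no corresponding pathconf variable, which is the crux of the question: the largest guaranteed-atomic write is not queryable through that interface.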
Hi All,
Last week we had a panic caused by ZFS and then we had a corrupted zpool!
Today we are doing some tests with the same data, but on a different
server/storage array. While copying the data ... panic!
And again we had a corrupted zpool!!
Mar 28 12:38:19 SERVER144 genunix: [ID 403854
I forgot to mention we are using S10U2.
Gino
Richard L. Hamilton wrote:
and does it vary by filesystem type? I know I ought to know the
answer, but it's been a long time since I thought about it, and
I must not be looking at the right man pages. And also, if it varies,
how does one tell? For a pipe, there's fpathconf() with _PC_PIPE_BUF,
Hi Gino,
this looks like an instance of bug 6458218 (see
http://bugs.opensolaris.org/view_bug.do?bug_id=6458218)
The fix for this bug is integrated into snv_60.
Kind regards,
Victor
Gino Ruopolo wrote:
Hi All,
Last week we had a panic caused by ZFS and then we had a corrupted zpool!
Hello zfs-discuss,
What will happen if I create a stripe pool of 3 disks, then create
some symlinks and then overwrite one disk with 0s.
Ditto blocks should self-heal metadata so file systems will be
consistent. Now when it comes to symlinks...
I was looking into a ZFS code and it looks like if
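One way to test this directly is with file vdevs (paths hypothetical; the dd deliberately skips the first and last megabyte of the 128 MB file so the vdev labels survive and only data/metadata blocks are damaged):

# mkfile 128m /var/tmp/v1 /var/tmp/v2 /var/tmp/v3
# zpool create testpool /var/tmp/v1 /var/tmp/v2 /var/tmp/v3
# ln -s /etc/motd /testpool/mylink
# dd if=/dev/zero of=/var/tmp/v2 bs=1024k seek=1 count=126 conv=notrunc
# zpool scrub testpool
# zpool status -v testpool

If ditto blocks do their job, the scrub should report checksum errors on v2 while the pool metadata (and the symlink) stays readable.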
[EMAIL PROTECTED] wrote on 03/28/2007 06:34:12 AM:
Hi Gino,
this looks like an instance of bug 6458218 (see
http://bugs.opensolaris.org/view_bug.do?bug_id=6458218)
The fix for this bug is integrated into snv_60.
Kind regards,
Victor
I know I may be somewhat of an outsider here,
I was thinking of something similar. When we go to download the various bits
(iso-a.zip through iso-e.zip and the md5sums), it seems like there should
also be Release Notes on the list of files being downloaded. Similar to the
Java release notes, I would expect it to point out which bugs were
We are running Solaris 10 11/06 on a Sun V240 with 2 CPUS and 8 GB of memory.
This V240 is attached to a 3510 FC that has 12 x 300 GB disks. The 3510 is
configured as HW RAID 5 with 10 disks and 2 spares and it's exported to the
V240 as a single LUN.
We create iso images of our product in the
== Reminder: this meeting is tomorrow ==
Also, we will briefly talk about the Project Blackbox tour that is
coming to the Denver area April 12-13. More information is at:
http://www.sun.com/emrkt/blackbox
== Reminder: this meeting is tomorrow ==
This month's FROSUG (Front Range OpenSolaris User
Hi, all
I have one system with a ZFS pool, which has 3 mirrors with 3
partitions each. I cannot remove a mirror. I have been able to remove 2
partitions with "detach", but it does not let me complete it. How can
I remove the mirror?
Thanks
We are running Solaris 10 11/06 on a Sun V240 with 2
CPUS and 8 GB of memory. This V240 is attached to a
3510 FC that has 12 x 300 GB disks. The 3510 is
configured as HW RAID 5 with 10 disks and 2 spares
and it's exported to the V240 as a single LUN.
We create iso images of our product in
On Wed, 28 Mar 2007, prasad wrote:
We create iso images of our product in the following way (high-level):
# mkfile 3g /isoimages/myiso
# lofiadm -a /isoimages/myiso
/dev/lofi/1
# newfs /dev/rlofi/1
# mount /dev/lofi/1 /mnt
# cd /mnt; zcat /product/myproduct.tar.Z | tar xf -
How big does
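For reference, the matching teardown of that recipe, assuming the device really was the /dev/lofi/1 printed above:

# cd /; umount /mnt
# lofiadm -d /dev/lofi/1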
We are currently recommending separate (ZFS) file systems for redo logs.
Did you try that? Or did you go straight to a separate UFS file system for
redo logs?
I'd answered this directly in email originally.
The answer was that yes, I tested using zfs for logpools among a number of
disk
Try throttling back the max # of IOs. I saw a number of errors similar to this
on Pillar and EMC.
In /etc/system, set:
set sd:sd_max_throttle=20
and reboot.
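The /etc/system setting only takes effect at boot. As an aside, the running value can be checked (and, with -w, patched live) via mdb; on systems using the ssd driver rather than sd, the variable is ssd_max_throttle:

# echo sd_max_throttle/D | mdb -k
# echo sd_max_throttle/W 0t20 | mdb -kw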
We are currently recommending separate (ZFS) file systems for redo logs.
Did you try that? Or did you go straight to a separate UFS file system for
redo logs?
I'd answered this directly in email originally.
The answer was that yes, I tested using zfs for logpools among a number of
On HDS arrays we set sd_max_throttle to 8.
gino
Gino Ruopolo wrote:
On HDS arrays we set sd_max_throttle to 8.
HDS provides an algorithm for estimating sd[d]_max_throttle in their
planning docs. It will vary based on a number of different parameters.
AFAIK, EMC just sets it to 20.
-- richard
The thought is to start throttling and possibly tune up or down, depending on
errors or lack of errors. I don't know of a specific NexSAN throttle preference
(we use SATABoy, and go with 20).
Gino Ruopolo wrote:
On HDS arrays we set sd_max_throttle to 8.
HDS provides an algorithm for estimating sd[d]_max_throttle in their
planning docs. It will vary based on a number of different parameters.
AFAIK, EMC just sets it to 20.
-- richard
you're right but after -a lot- of
Hello folks, I have a small problem, originally I had this setup:
[16:39:40] @zglobix1: /root zpool status -x
pool: mypool
state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for
the pool to continue functioning in a degraded state.
action:
First of all I'd like to congratulate the ZFS boot team on the
integration of their work into ON. Great job! I am sure there
are plenty of people waiting anxiously for this putback.
I'd also like to suggest that the material referenced by HEADS UP
message [1] be made available to non-SWAN
Hello Krzys,
Wednesday, March 28, 2007, 10:58:40 PM, you wrote:
K Hello folks, I have a small problem, originally I had this setup:
K [16:39:40] @zglobix1: /root zpool status -x
K pool: mypool
K state: DEGRADED
K status: One or more devices could not be opened. Sufficient replicas exist
Cyril Plisko wrote:
First of all I'd like to congratulate the ZFS boot team with the
integration of their work into ON. Great job ! I am sure there
are plenty of people waiting anxiously for this putback.
I'd also like to suggest that the material referenced by HEADS UP
message [1] be made
Richard Elling wrote:
Cyril Plisko wrote:
First of all I'd like to congratulate the ZFS boot team with the
integration of their work into ON. Great job ! I am sure there
are plenty of people waiting anxiously for this putback.
I'd also like to suggest that the material referenced by HEADS UP
We will make the manual and netinstall instructions available to
non-SWAN folks shortly.
Tim Foster also has a script to do the setup; watch for his blog.
Lin
Richard Elling wrote:
Cyril Plisko wrote:
First of all I'd like to congratulate the ZFS boot team with the
integration of their
Hello Richard,
Wednesday, March 28, 2007, 11:14:41 PM, you wrote:
RE Cyril Plisko wrote:
First of all I'd like to congratulate the ZFS boot team with the
integration of their work into ON. Great job ! I am sure there
are plenty of people waiting anxiously for this putback.
I'd also like to
Sorry, thought I did reply-to-all.
On 3/28/07, Malachi de Ælfweald [EMAIL PROTECTED] wrote:
I think so.
I can access:
http://www.opensolaris.org/os/community/on/flag-days/pages/2007032801/
I can not access:
http://www.fs.central/projects/zfsboot/zfsboot_manual_setup.html
Malachi
On 3/28/07,
Should they also be put up on:
http://opensolaris.org/os/project/zfsboot/
On 3/28/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Richard,
Wednesday, March 28, 2007, 11:14:41 PM, you wrote:
RE Cyril Plisko wrote:
First of all I'd like to congratulate the ZFS boot team with the
JS wrote:
The thought is to start throttling and possibly tune up or down, depending
on errors or lack of errors. I don't know of a specific NexSAN throttle
preference (we use SATABoy, and go with 20).
One guess is as good as another :-) The default is 256, so even with 20
you are a long way
Gino Ruopolo wrote:
Hi All,
Last week we had a panic caused by ZFS and then we had a corrupted
zpool! Today we are doing some tests with the same data, but on a
different server/storage array. While copying the data ... panic!
And again we had a corrupted zpool!!
This is bug 6458218, which
On 3/29/07, Robert Milkowski [EMAIL PROTECTED] wrote:
1. Instructions for Manual set up:
http://fs.central/projects/zfsboot/zfsboot_manual_setup.html
2. Instructions for Netinstall set up:
http://fs.central/projects/zfsboot/how_to_netinstall_zfsboot
I think those documents should be
Cyril Plisko wrote:
On 3/28/07, Richard Elling [EMAIL PROTECTED] wrote:
Cyril Plisko wrote:
First of all I'd like to congratulate the ZFS boot team with the
integration of their work into ON. Great job ! I am sure there
are plenty of people waiting anxiously for this putback.
I'd also like
Adam,
With the blog entry[1] you've made about gzip for ZFS, it raises
a couple of questions...
1) It would appear that a ZFS filesystem can support files of
varying compression algorithm. If a file is compressed using
method A but method B is now active, if I truncate the file
and
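For context, a minimal sketch of switching the algorithm (pool/fs is hypothetical; per later messages in this thread, gzip needs build 62):

# zfs set compression=gzip pool/fs

Compression applies only to blocks written after the change; existing blocks keep the algorithm they were written with, which is exactly why one file can mix algorithms as question 1 observes.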
Hello Nicholas,
Wednesday, March 28, 2007, 11:47:18 PM, you wrote:
Which build is required to try this?
62
--
Best regards,
Robert mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
Could I get your opinion then? I have just downloaded and burnt the b60
ISO. I was just getting ready to follow Tabriz and Tim's instructions from
last year in order to get the ZFS root boot. Seeing the Heads Up, it says
that the old mechanism will no longer work.
Should I:
a) install b60,
On 3/29/07, Malachi de Ælfweald [EMAIL PROTECTED] wrote:
Could I get your opinion then? I have just downloaded and burnt the b60
ISO. I was just getting ready to follow Tabriz and Tim's instructions from
last year in order to get the ZFS root boot. Seeing the Heads Up, it says
that the old
Hello Darren,
Thursday, March 29, 2007, 12:01:21 AM, you wrote:
DRSC Adam,
DRSC With the blog entry[1] you've made about gzip for ZFS, it raises
DRSC a couple of questions...
DRSC 1) It would appear that a ZFS filesystem can support files of
DRSC varying compression algorithm. If a file is
Hello Nicholas,
Thursday, March 29, 2007, 12:15:31 AM, you wrote:
On 3/29/07, Malachi de Ælfweald [EMAIL PROTECTED] wrote:
Could I get your opinion then? I have just downloaded and burnt the b60 ISO. I was just getting ready to follow Tabriz and Tim's instructions from last year in
So would you say use the default layout (most of the extra space going into
/export/home)?
Malachi
On 3/28/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Nicholas,
Thursday, March 29, 2007, 12:15:31 AM, you wrote:
On 3/29/07, Malachi de Ælfweald [EMAIL PROTECTED] wrote:
Hello Robert,
Wednesday, March 21, 2007, 10:36:15 AM, you wrote:
RM Hello Robert,
RM Saturday, March 17, 2007, 6:49:05 PM, you wrote:
RM Hello Thomas,
RM Saturday, March 17, 2007, 11:46:14 AM, you wrote:
TN On Fri, 16 Mar 2007, Anton B. Rang wrote:
It's possible (if unlikely) that you are
Howdy,
This is awesome news that ZFS boot support is available for x86
platforms. Do any of the ZFS developers happen to know when ZFS boot
support for SPARC will be available?
Thanks,
- Ryan
--
UNIX Administrator
http://prefetch.net
Malachi de Ælfweald wrote:
Should I:
a) install b60, figure out how to bfu to b62, then try to convert to
the zfs root
b) wait to install Solaris until b62 comes out
c) follow the original instructions from last year (with b60) and then
figure out how to switch to the new mechanism when it
Robert Milkowski wrote:
Hello Darren,
Thursday, March 29, 2007, 12:01:21 AM, you wrote:
DRSC Adam,
...
DRSC 2) The question of whether or not to use bzip2 was raised in
DRSC the comment section of your blog. How easy would it be to
DRSC implement a pluggable (or more generic) interface
We currently have a working prototype for SPARC (via newboot SPARC project).
We don't have a firm date yet, but shouldn't be too far away :-).
Lin
Matty wrote:
Howdy,
This is awesome news that ZFS boot support is available for x86
platforms. Do any of the ZFS developers happen to know when
Hello Darren,
Thursday, March 29, 2007, 12:55:03 AM, you wrote:
DRSC So, for example, if the interface was pluggable and Sun only
DRSC wanted to ship gzip, but I wanted to create a better ZFS
DRSC based appliance than one based on just OpenSolaris, I might
DRSC build a bzip2 module for the kernel
Hello Malachi,
Thursday, March 29, 2007, 12:33:05 AM, you wrote:
So would you say use the default layout (most of the extra space going into /export/home)?
Rather one large / and a swap slice.
That way once you switch to ZFS you will be able to create /home dataset and manually
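Presumably something like the following once root is on ZFS (pool and dataset names hypothetical):

# zfs create rootpool/home
# zfs set mountpoint=/export/home rootpool/home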
Hello Lin,
Thursday, March 29, 2007, 1:01:32 AM, you wrote:
LL We currently have a working prototype for SPARC (via newboot SPARC project).
LL We don't have a firm date yet, but shouldn't be too far away :-).
SPARC newboot project - can you shed more light on it?
Would ZFS support on SPARC
Ok. Will install b60 with large / and a 2xMemory swap...
Will wait to specify the mirror until I go to do the ZFS
Got the ON instructions up, looking them over to see about BFU'ing
Malachi
On 3/28/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Malachi,
Thursday, March 29, 2007,
Gino Ruopolo wrote:
Hi All,
Last week we had a panic caused by ZFS and then we had a corrupted
zpool! Today we are doing some tests with the same data, but on a
different server/storage array. While copying the data ... panic!
And again we had a corrupted zpool!!
This is bug
Hello Malachi,
Thursday, March 29, 2007, 1:12:41 AM, you wrote:
Ok. Will install b60 with large / and a 2xMemory swap...
Will wait to specify the mirror until I go to do the ZFS
Got the ON instructions up, looking them over to see about BFU'ing
Just one question - why 2xmem for
On 3/29/07, Robert Milkowski [EMAIL PROTECTED] wrote:
BFU - just for testing I guess. I would rather propose waiting for SXCE
b62.
Is there a release date for this? I note that the install iso for b60 seems
to have only been released in the last week.
Nicholas
Robert Milkowski wrote:
Hello Lin,
Thursday, March 29, 2007, 1:01:32 AM, you wrote:
LL We currently have a working prototype for SPARC (via newboot SPARC project).
LL We don't have a firm date yet, but shouldn't be too far away :-).
SPARC newboot project - can you shed more light on it?
Any chance these fixes will make it into the normal Solaris RS patches?
Why 2x(4G)? Hmmm. Good question. I guess I am just used to doing that for
FreeBSD. I do plan on running multiple Xen domU at the same time... Are you
thinking swap shouldn't be that big?
Don't BFU? I'm good with that :) I'd prefer to get to know Solaris before
screwing it up too much.
BTW: Is
That's probably bug 6382683 "lofi is confused about sync/async I/O",
and AFAIK it's fixed in current OpenSolaris releases.
See the thread with subject "bad lofi performance with zfs file backend /
bad mmap write performance" from January/February 2006:
Hello Malachi,
Thursday, March 29, 2007, 1:36:46 AM, you wrote:
Why 2x(4G)? Hmmm. Good question. I guess I am just used to doing that for FreeBSD. I do plan on running multiple Xen domU at the same time... Are you thinking swap shouldn't be that big?
If you have a disk space to
Comments inline
On 3/28/07, Robert Milkowski [EMAIL PROTECTED] wrote:
Hello Malachi,
Thursday, March 29, 2007, 1:36:46 AM, you wrote:
Why 2x(4G)? Hmmm. Good question. I guess I am just used to doing that for
FreeBSD. I do plan on running multiple Xen domU at the same time... Are you
Awesome, that worked great for me... I did not know I had to put c1t2d0 in
there... but hey, it works and that is all that matters. Thank you so very much.
Chris
[19:58:24] @zglobix1: /root zpool attach -f mypool c1t2d0 c1t3d0
[19:58:33] @zglobix1: /root zpool list
NAME
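For anyone following along: attach turns the single disk into a two-way mirror and starts a resilver, which can be watched with zpool status; if an old failed device were still listed, it could then be dropped (the device name below is hypothetical):

# zpool status mypool
# zpool detach mypool c1t1d0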
Robert Milkowski wrote:
Hello zfs-discuss,
What will happen if I create a stripe pool of 3 disks, then create
some symlinks and then overwrite one disk with 0s.
Ditto blocks should self-heal metadata so file systems will be
consistent. Now when it comes to symlinks...
I was looking into a ZFS
For the particular HDS array you're working on, or also on NexSAN storage?
In the BIOS, LBA is set to Auto (with only other option being Disabled)
Running in text mode, it says that the '/' partition has to be 7993MB or
less
Is there any way to get past that? Seems my only other option is the Default
layout, which puts all the space into /export/home
Malachi
On
On 28/03/07, Malachi de Ælfweald [EMAIL PROTECTED] wrote:
In the BIOS, LBA is set to Auto (with only other option being Disabled)
Running in text mode, it says that the '/' partition has to be 7993MB or
less
Is there any way to get past that? Seems my only other option is the
Default layout,
Is that on slice 0? Mine is trying to do it on c1d0s0
On 3/28/07, Shawn Walker [EMAIL PROTECTED] wrote:
On 28/03/07, Malachi de Ælfweald [EMAIL PROTECTED] wrote:
In the BIOS, LBA is set to Auto (with only other option being Disabled)
Running in text mode, it says that the '/' partition has
Constantin Gonzalez wrote:
What is the most elegant way of migrating all filesystems to the new pool,
including snapshots?
Can I do a master snapshot of the whole pool, including sub-filesystems and
their snapshots, then send/receive them to the new pool?
Or do I have to write a script that
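Absent a recursive send, the usual sketch is a per-filesystem loop (pool names hypothetical; the root dataset may need special-casing, and this carries only the state at @migrate; preserving the older snapshots too requires incremental zfs send -i pairs, i.e. the script the poster anticipates):

# zfs snapshot -r oldpool@migrate
# for fs in $(zfs list -H -o name -r oldpool); do
>     zfs send $fs@migrate | zfs receive -d newpool
> done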
According to the Bug Database, bug 6382683 is in 1-Dispatched state; what does
that mean?
Roughly speaking, the bug has been entered into the database, but no developer
has been assigned to it. (State 3 indicates that a developer or team has agreed
that it's a bug; it sounds likely that this bug
It's not defined by POSIX (or Solaris). You can rely on being able to
atomically write a single disk block (512 bytes); anything larger than that is
risky. Oh, and it has to be 512-byte aligned.
File systems with overwrite semantics (UFS, QFS, etc.) will never guarantee
atomicity for more than
On 28/03/07, Malachi de Ælfweald [EMAIL PROTECTED] wrote:
Is that on slice 0? Mine is trying to do it on c1d0s0
Yes, c2d0s0 in my case.
--
Less is only more where more is no good. --Frank Lloyd Wright
Shawn Walker, Software and Systems Analyst
[EMAIL PROTECTED] -
It appears to be a bug with the partitioning tool in the installer.
I clicked Cylinders and noticed that swap started at cylinder 3 and /
started after it.
Changed it around to make swap the last bit of the drive, and make / start
at cylinder 3, and it is continuing fine with 240GB (or so) '/'
On Wed, Mar 28, 2007 at 06:55:17PM -0700, Anton B. Rang wrote:
It's not defined by POSIX (or Solaris). You can rely on being able to
atomically write a single disk block (512 bytes); anything larger than
that is risky. Oh, and it has to be 512-byte aligned.
File systems with overwrite
I should probably clarify my answer.
All file systems provide writes by default which are atomic with respect to
readers of the file. That's a POSIX requirement. In other words, if you're
writing ABC, there's no possibility that a reader might see ABD (if D was
previously contained in the