Vincent Fox writes:
I don't understand. How do you
set up one LUN that has all of the NVRAM on the array dedicated to it?
I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
thick here, but can you be more specific for the n00b?
Do you mean from firmware side or
Matthew Ahrens wrote:
Ross Newell wrote:
What are the issues preventing the root directory from being stored on
raidz?
I'm talking specifically about root, and not boot which I can see
would be
difficult.
Would it be something an amateur programmer could address in a
weekend, or
On Sep 25, 2007, at 19:57, Bryan Cantrill wrote:
On Tue, Sep 25, 2007 at 04:47:48PM -0700, Vincent Fox wrote:
It seems like ZIL is a separate issue.
It is very much the issue: the separate log device work was done
exactly
to make better use of this kind of non-volatile memory. To use
Hi Neel - Thanks for pushing this out. I've been tripping over this for
a while.
You can instrument zfs_read() and zfs_write() to reliably track filenames:
#!/usr/sbin/dtrace -s
#pragma D option quiet

zfs_read:entry,
zfs_write:entry
{
        printf("%s of %s\n", probefunc, stringof(args[0]->v_path));
}
Vincent Fox wrote:
It seems like ZIL is a separate issue.
I have read that putting ZIL on a separate device helps, but what about the
cache?
OpenSolaris has some flag to disable it. Solaris 10u3/4 do not. I have
dual-controllers with NVRAM and battery backup, why can't I make use of it?
Jim, I can't use zfs_read/write as the file is mmap()'d, so there's no read/write!
-neel
On Sep 26, 2007, at 5:07 AM, Jim Mauro [EMAIL PROTECTED] wrote:
Hi Neel - Thanks for pushing this out. I've been tripping over this
for a while.
You can instrument zfs_read() and zfs_write() to reliably track
What sayeth the ZFS team regarding the use of a stable DTrace provider
with their file system?
For the record, the above has a tone to it that I really did not intend
(antagonistic?), so
I had a good chat with Roch about this. The file pathname is derived via
a translator
from the
On Tue, Sep 25, 2007 at 06:09:04PM -0700, Richard Elling wrote:
Actually, you can use the existing name space for this. By default,
ZFS uses /dev/dsk. But everything in /dev is a symlink. So you could
set up your own space, say /dev/myknowndisks, and use more descriptive
names. You might
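The symlink name space suggested above can be sketched as follows. This is a hedged example: the c#t#d# device names and the "rack/bay" labels are made up, and you would normally need root to write under /dev (point DISKDIR at a scratch directory to dry-run).

```shell
# Build a descriptive disk name space out of symlinks, then create the
# pool from those names instead of the raw /dev/dsk entries.
DISKDIR=${DISKDIR:-/dev/myknowndisks}
mkdir -p "$DISKDIR"
ln -sf /dev/dsk/c1t0d0s0 "$DISKDIR/rack1-bay3"   # names are illustrative
ln -sf /dev/dsk/c1t1d0s0 "$DISKDIR/rack1-bay4"
ls -l "$DISKDIR"
# Then, hypothetically:
# zpool create mypool mirror "$DISKDIR/rack1-bay3" "$DISKDIR/rack1-bay4"
```

Since everything in /dev is already a symlink, the pool sees the same devices; only the names in `zpool status` output become human-meaningful.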
Neelakanth Nadgir writes:
io:::start probe does not seem to get zfs filenames in
args[2]->fi_pathname. Any ideas how to get this info?
-neel
Who says an I/O is doing work for a single pathname/vnode
or for a single process? There is no longer that one-to-one
correspondence. Not in the
Hey Neel - Try this:
nv70b# cat zfs_page.d
#!/usr/sbin/dtrace -s
#pragma D option quiet
zfs_putpage:entry
{
printf("zfs write to %s\n", stringof(args[0]->v_path));
}
zfs_getpage:entry
{
printf("zfs read from %s\n", stringof(args[0]->v_path));
}
I did some quick tests with mmap'd ZFS
On Wed, Sep 26, 2007 at 02:10:39PM -0400, Torrey McMahon wrote:
Albert Chin wrote:
On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrote:
I don't understand. How do you
set up one LUN that has all of the NVRAM on the array dedicated to it?
I'm pretty familiar with 3510 and
Quick reset, Greg Shaw asked for a more descriptive output for zpool
status. I've already demonstrated how to do that. We also discussed
the difficulty in making a reliable name to physical location map
without involving humans.
continuing on...
A Darren Dunham wrote:
On Wed, Sep 26, 2007 at
Kugutsumen wrote:
Matthew Ahrens wrote:
Ross Newell wrote:
What are the issues preventing the root directory from being stored on
raidz?
I'm talking specifically about root, and not boot which I can see
would be
difficult.
Would it be something an amateur programmer could
Hi all,
I have an interesting project that I am working on. It is a large volume file
download service that is in need of a new box. Their current systems are not
able to handle the load because for various reasons they have become very I/O
limited. We currently run on Debian Linux with
Howdy,
We are running zones on a number of Solaris 10 update 3 hosts, and we
are bumping into an issue where the kernel doesn't clean up
connections after an application exits. When this issue occurs, the
netstat utility doesn't show anything listening on the port the
application uses (8080 in
I'm about to build a fileserver and I think I'm gonna use OpenSolaris and ZFS.
I've got a 40GB PATA disk which will be the OS disk, and then I've got 4x250GB
SATA + 2x500GB SATA disks. From what you are writing I would think my best
option would be to slice the 500GB disks into two 250GB slices and then
I'm about to build a fileserver and I think I'm gonna
use OpenSolaris and ZFS.
I've got a 40GB PATA disk which will be the OS disk,
and then I've got 4x250GB SATA + 2x500GB SATA disks.
From what you are writing I would think my best
option would be to slice the 500GB disks into two 250GB
On Sep 26, 2007, at 14:10, Torrey McMahon wrote:
You probably don't have to create a LUN the size of the NVRAM
either. As
long as it's dedicated to one LUN, it should be pretty quick. The
3510 cache, last I checked, doesn't do any per-LUN segmentation or
sizing. It's a simple front end
Yes, this is it. Thanks.
David
On Wed, 2007-09-26 at 13:55 -0400, Will Murnane wrote:
David Smith wrote:
Under the GUI, there is an advanced option which shows vdev capacity,
etc. I'm drawing a blank about how to get with the commands...
'zpool iostat -v' gives that level of
Vincent Fox wrote:
Is this what you're referring to?
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
As I wrote several times in this thread, this kernel variable does not work in
Sol 10u3.
Probably not in u4 although I haven't tried it.
I would like
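For reference, the tunable this thread is circling around is the one documented on the Evil Tuning Guide page linked above. On releases that honor it, it goes in /etc/system (a sketch; per this thread it is not honored on Solaris 10u3, and possibly not u4, so verify against your release before relying on it):

```
* /etc/system: tell ZFS not to issue cache-flush commands to the array.
* Only safe when the array's write cache is non-volatile
* (e.g. battery-backed NVRAM on a dual-controller 3510).
set zfs:zfs_nocacheflush = 1
```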
I would keep it simple. Let's call your 250GB disks A, B, C, D,
and your 500GB disks X and Y. I'd either make them all mirrors:
zpool create mypool mirror A B mirror C D mirror X Y
or raidz the little ones and mirror the big ones:
zpool create mypool raidz A B C D mirror X Y
or, as
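A quick back-of-envelope comparison of usable capacity for the two layouts above (a sketch using the stated disk sizes in GB, ignoring metadata overhead):

```shell
# Three 2-way mirrors: each mirror contributes one disk's worth of space.
mirrors=$((250 + 250 + 500))
# A 4-disk raidz gives (n-1) disks of space, plus the 500GB 2-way mirror.
raidz_mix=$((3 * 250 + 500))
echo "all mirrors:   ${mirrors} GB usable"    # -> all mirrors:   1000 GB usable
echo "raidz+mirror:  ${raidz_mix} GB usable"  # -> raidz+mirror:  1250 GB usable
```

The raidz layout trades some redundancy (one parity disk across four spindles instead of full duplication) for an extra 250GB of usable space.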
Vincent Fox wrote:
Vincent Fox wrote:
Is this what you're referring to?
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes
As I wrote several times in this thread, this kernel variable does not work
in Sol 10u3.
Probably not in u4 although I haven't
Nigel Smith wrote:
It's a pity that Sun does not manufacture something like this.
The x4500 Thumper, with 48 disks is way over the top for most companies,
and too expensive. And the new X4150 only has 8 disks.
This Intel box with 12 hot-swap drives and two internal boot drives
looks like the
ZFS should allow 31+NULL chars for a comment against each disk.
This would work well with the host name string (I assume the max hostname is
255+NULL).
If a disk fails it should report "c6t4908029d0 failed: <comment from
disk>"; it should also remember the comment until reboot.
This would be useful for
On 26 September, 2007 - Nigel Smith sent me these 1,2K bytes:
It's a pity that Sun does not manufacture something like this.
The x4500 Thumper, with 48 disks is way over the top for most companies,
and too expensive. And the new X4150 only has 8 disks.
This Intel box with 12 hot-swap drives
zdb?
Damon Atkins wrote:
ZFS should allow 31+NULL chars for a comment against each disk.
This would work well with the host name string (I assume the max hostname is
255+NULL).
If a disk fails it should report "c6t4908029d0 failed: <comment from
disk>"; it should also remember the comment until
On Wed, Sep 26, 2007 at 11:36:57AM -0700, Richard Elling wrote:
AFAIK, VxVM still only expects one private region per disk. The private
region stores info on the configuration of the logical devices on the
disk, and its participation therein. ZFS places this data in the on-disk
format on
I have a raidz zpool comprised of 5 320GB SATA drives and I am seeing the
following numbers.
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
vol01        123G  1.33T     66    182  8.22M
David Runyon david.runyon at sun.com writes:
I'm trying to get maybe 200 MB/sec over NFS for large movie files (need
(I assume you meant 200 Mb/sec with a lower case b.)
large capacity to hold all of them). Are there any rules of thumb on how
much RAM is needed to handle this (probably