Max Bruning wrote:
Hi everybody,
while trying to figure out what on earth has been
going on in my u20m2 due to
6636511 u20m2 bios version 1.45.1 still can't
distinguish disks on sata channel #1,
I engaged in a lot of cable swapping operations for
the internal sata drive cables.
...
Hi
case materials (spec and manpage) should be
visible on www.opensolaris.org/os/community/arc
cheers
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
manage multipathing
then the list is
scsi_vhci
While there are other multipathing solutions out there such
as PowerPath and VxDMP, I don't think they work quite so well
with ZFS.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http
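A side note on the scsi_vhci point above: on Solaris the usual way to hand your paths over to scsi_vhci is the stmsboot utility. A minimal sketch, assuming a release where stmsboot is available (verify the flags against your build's manpage):
stmsboot -e    # enable MPxIO; the tool prompts for the reboot it needs
stmsboot -L    # after the reboot, list the non-STMS to STMS device name mappings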
mdconfig -d -u 1
mdconfig -d -u 2
rm /usr/_disk1
rm /usr/_disk2
/Daniel Eriksson
Has anybody seen anything like this, on Solaris or FreeBSD?
I've got my doubts about whether Daniel's got a valid test.
thanks,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun
At that point, I was all set to export the pool, but realised
that I didn't need to because zfs had picked up that the disks
were larger - an extra ~100Gb just showed up.
I don't know whether using a raidZ or raidZ2 pool requires the
export/import routine, but I hope it doesn't.
cheers,
James C
and /home
conventions. This leaves open the possibility of some NFS mounts
later.
All of which is a workaround for the essential problem that
we don't have an easy and obvious way of adding users to zones
from the command line unless you use the webconsole thingy.
James C. McPherson
--
Senior Kernel
Bob Scheifler wrote:
James C. McPherson wrote:
You can definitely loopback mount the same fs into multiple
zones, and as far as I can see you don't have the multiple-writer
issues that otherwise require Qfs to solve - since you're operating
within just one kernel instance.
Is there any
have
jmcplocalhost:/export/home/jmcp
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
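For anyone wanting to reproduce the loopback-mount-into-a-zone setup described above, a minimal zonecfg sketch; the zone name and paths are placeholders, not taken from the original thread:
zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/export/home
zonecfg:myzone:fs> set special=/export/home
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit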
a loopback
fs mount that might fix things up. However that would require
me to re-jiggle my filesystems a lot and I just can't be
bothered since I do have a workaround.
cheers,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http
cables for the aoc-sat2-mv8 and will report
which ones work when I get them
That will be very good info to have - there's too little information
and personal experience surrounding the SAS cabling world as yet.
cheers,
James C. McPherson
--
Solaris kernel software engineer, system admin
Al Hopper wrote:
On Tue, 18 Dec 2007, James C. McPherson wrote:
... snip .
That will be very good info to have - there's too little information
and personal experience surrounding the SAS cabling world as yet.
Here's a couple of resources:
SAS Integrators Guide:
wget http
he could try to limit the number of queued requests per disk in zfs to
something smaller than the default of 35 (maybe even down to 1?)
Hi Robert,
yup, that's on my list of things for Frank to try. I've
asked for a bit more config information though so we can
get a bit of clarity on that front first.
James C
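The tunable isn't named above, but a per-disk queue depth with a default of 35 most likely refers to zfs_vdev_max_pending; a hedged /etc/system sketch for experimenting with it (treat the name and default as an assumption to verify on your build, and note it takes a reboot):
* cap the number of I/Os ZFS queues per vdev (historical default: 35)
set zfs:zfs_vdev_max_pending = 10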
are stuck - for the moment
at least - with ad hoc measures.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
of your zpool, but if the
devices are disks that are direct or san-attached, I
would be wondering about their outstanding queue depths.
I think it's time to break out some D to find out where
in the stack the bottleneck(s) really are.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
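A hedged example of the kind of D that helps locate the bottleneck; the io provider probes are standard DTrace, and the device names in the output are whatever your system reports:
# quantize physical I/O latency per device; stop with Ctrl-C to print the results
dtrace -n '
io:::start { ts[arg0] = timestamp; }
io:::done /ts[arg0]/ {
  @lat[args[1]->dev_statname] = quantize(timestamp - ts[arg0]);
  ts[arg0] = 0;
}'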
,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
had a chance to analyze it so I've passed the details on to
more knowledgeable ZFS ppl.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
Thank you in advance,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
that your
current patch levels can be identified.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
at the ST5650, myself.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
XChat
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
James C. McPherson wrote:
Got an issue which is rather annoying to me - three of my
ZFS caches are regularly using nearly 1/2 of the 1.09Gb of
allocated kmem in my system
...[snip]
Following suggestions from Andre and Rich that this was
probably the ARC, I've implemented a 256Mb limit for my
to find out why this is happening, and prevent
it recurring?
(If Team ZFS would like access to the core please contact me
directly for access).
thanks in advance,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http
Anton B. Rang wrote:
Got an issue which is rather annoying to me - three of my
ZFS caches are regularly using nearly 1/2 of the 1.09Gb of
allocated kmem in my system.
I think this is just the ARC; you can limit its size using:
hope so.
cheers,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
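The actual line was trimmed out of the excerpt above; the usual way to cap the ARC in that timeframe was the zfs_arc_max tunable, so a hedged sketch of a 256MB cap would look something like this (an assumption, not the poster's exact setting; reboot required):
* limit the ARC to 256MB (0x10000000 bytes)
set zfs:zfs_arc_max = 0x10000000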
it into OpenSolaris
http://bugs.opensolaris.org/view_bug.do?bug_id=6614012
6614012 add Areca SAS/SATA RAID adapter driver
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_and_Zones
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
, with the bringover utility stuck
looping in the same functions as mentioned earlier in this CR's history.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http
Ian Collins wrote:
James C. McPherson wrote:
Ian Collins wrote:
...
I don't know if anything else breaks when you do this, but if you are
building software in a zone on a lofs filesystem, dmake hangs. Regular
make works fine.
The output from truss is:
stat64(/export/home, 0x08045B60
depending on the
card itself to be reliable any more.
1Tb of photos shot on a 10MPx camera in the camera's native
RAW format is around 100,000 photos. It's not difficult to
imagine a consumer having that sort of storage requirement.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun
week.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
cpu, add more ram(tm).
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
that you're suffering from bug 6354872?
I suggest that you log a call with Sun Support if you
haven't already done so, so that you can get the assistance
which you need to resolve the issue in a timely fashion.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
in
a hotplug world?
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
Tim Spriggs wrote:
James C. McPherson wrote:
Gregory Shaw wrote:
...
The above would be very useful should a disk fail to identify what
device is what.
How would you gather that information?
How would you ensure that it stayed accurate in
a hotplug world?
If it is stored on the device
Bill Sommerfeld wrote:
On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
How would you gather that information?
the tools to use would be dependent on the actual storage device in use.
luxadm for A5x00 and V8x0 internal storage, sccli for 3xxx, etc., etc.,
No consistent
Greg Shaw wrote:
James C. McPherson wrote:
Bill Sommerfeld wrote:
On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
How would you gather that information?
the tools to use would be dependent on the actual storage device in use.
luxadm for A5x00 and V8x0 internal
Ellis, Mike wrote:
With Leadville/MPXIO targets in the 32-digit range, identifying the new
storage/LUNs is not a trivial operation.
Have a look at my devid/guid presentation for some details on
how we use them with ZFS/SVM:
http://www.jmcp.homeunix.com/~jmcp/WhatIsAGuide.pdf
James C
A Darren Dunham wrote:
On Tue, Sep 18, 2007 at 10:11:11AM +1000, James C. McPherson wrote:
Have a look at my devid/guid presentation for some details on
how we use them with ZFS/SVM:
http://www.jmcp.homeunix.com/~jmcp/WhatIsAGuide.pdf
Ah, a very silent 'e'... :-)
http
appear in S10u5.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
, and that it's sufficiently different from existing ideas
on how to write filesystems that there wasn't really any
incentive to actually do it.
$X ain't broke (sufficiently) so let's not rock the boat
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
this support for NV - they told me that
the current driver in NV is supposed to work with
ICH6/7/8/9, but only for hard disks. Support for
SATA dvd drives is in the works at the moment, and
yes, this was tested with Solaris.
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
with zfs.
Are you using the marvell88sx driver to attach
your sata disks?
If you are, then perhaps this putback from yesterday
is what you are in need of:
6564677 oracle datafiles corrupted on thumper
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6564677
James C. McPherson
--
Solaris
where to go from there. Is ZFS smart enough to recognize MPxIO
changes? Does anyone have any recommendations?
Yes. You can check the devids of the vdevs that make up the
pool, and you'll see that they don't change under mpxio.
James C. McPherson
--
Solaris kernel software engineer
Sun
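To see those devids for yourself, zdb can dump the pool configuration; a hedged example with a placeholder pool name (prtconf should also show the matching devid property on each disk node, but verify on your release):
zdb -C tank | grep devid        # devids as recorded in the pool's vdev config
prtconf -v | grep -i devid      # devids as seen in the current device tree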
, or
reboot -dq
cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
Some might say that's a waste of space :)
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
Stuart Anderson wrote:
On Tue, Jul 17, 2007 at 02:49:08PM +1000, James C. McPherson wrote:
Stuart Anderson wrote:
Running Solaris 10 Update 3 on an X4500 I have found that it is possible
to reproducibly block all writes to a ZFS pool by running chgrp -R
on any large filesystem in that pool
://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
and
I know Ben Rockwood (now of Joyent) has blogged about how much
storage they're using, all managed with ZFS... I just can't
find the blog entry.
Hope this helps,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
replication.
Hi Roshan,
that's a great shame because if they actually want
to make use of the features of ZFS such as replication,
then they need to be serious about configuring their
storage to play in the ZFS world and that means
replication that ZFS knows about.
James C. McPherson
--
Solaris
per system?
For all practical purposes, it's unlimited.
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
Robert Milkowski wrote:
Hello Louwtjie,
Monday, June 4, 2007, 9:14:26 AM, you wrote:
LB On 5/30/07, James C. McPherson [EMAIL PROTECTED] wrote:
Louwtjie Burger wrote:
I know the above mentioned kit (2530) is new, but has anybody tried a
direct attached SAS setup using zfs? (and the Sun SG
to be supported. It's
one of those contractual obligation things.
As always, please confer with your Sun sales rep for the
actual details. All I've done is work on the software for
the product :-)
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
that it would
just work.
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
Al Hopper wrote:
On Wed, 13 Jun 2007, James C. McPherson wrote:
Robert Milkowski wrote:
...
JCM As far as I understand it, I do not think that a plain
JCM jbod version of the ST2530 is supported. I believe that
JCM a jbod attached to the ST2540 (fc-connected) is supported.
If it works
as part of our project to add support to
mpt for MPxIO. The zfs test suite was one of the required
tests in our suite.
Yes, it worked correctly.
What other questions do you have?
cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
. The backport has been
assigned to me and we're hoping to get patches out
real soon now. No commitment on actual dates though.
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
James W. Abendschan wrote:
On Fri, 20 Apr 2007, James C. McPherson wrote:
Richard Elling wrote:
...
Nice catch... timing is everything :-)
I'll infer from this that the SAS HBA from Sun is based on the mpt
driver which works with LSI controllers.
yes.
As of Solaris 10 update 2, this combo
deserve what you get!
That's 6 years away, surely even linux fs developers
can come up with something better in that time.
cheers,
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
Ivan Wang wrote:
So, is there any date for when the install utility will support a fresh install onto a ZFS root?
I almost can't wait for that.
Hi Ivan,
there's no firm date for this yet, though the install team are
working *really* hard at getting this to happen as soon as humanly
possible.
James C. McPherson
going to
be released, sorry.
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
/src/lib/libzfs_jni
usr/src/lib/libzpool
usr/src/lib/libdiskmgmt
usr/src/cmd/zpool
usr/src/cmd/zfs
usr/src/cmd/ztest
There are possibly others as well. Read through the
source code walkthrough link above and keep asking
questions :)
cheers,
James C
zfs:zio_write_allocate_gang_members+33c (60017964140,
...
Any help? I do zfs file system snaps and I am pretty low on disk space
at this moment but I still have like 2 gig free
this is bug 6452923 which is fixed in Solaris 10 update 3.
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
easily managed.
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
Cluster
3.2 if I want to serve same zpools from many hosts? We may try Sun
Cluster anyway, but I'd like to know if this can be solved without it.
You can't do it *safely* without the protection of a high-
availability framework such as SunCluster.
best regards,
James C. McPherson
--
Solaris kernel
Prashanth Radhakrishnan wrote:
Is there someway to synchronously mount a ZFS filesystem?
'-o sync' does not appear to be honoured.
No there isn't. Why do you think it is necessary?
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http
Frank Cusack wrote:
On January 23, 2007 8:53:30 AM +1100 James C. McPherson
...
Why would you start your numbering at 10?
Because you don't have a choice. It is up to the HBA and getting it
to do the right thing (i.e., what you want) isn't always easy. IIRC,
the LSI Logic HBA(s) I had would
C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
Al Hopper wrote:
On Sun, 21 Jan 2007, James C. McPherson wrote:
... snip
Would you please expand upon this, because I'm really interested
in what your thoughts are. since I work on Sun's SAS driver :)
Hi James - just the man I have a couple of questions for... :)
Will the LsiLogic
it will
be a stripe or raidz or raidz2 or whatever.
From that pool you create as many filesystems as you need.
If from those 8 disks you want two different underlying types
of storage layout, you would create two pools.
James C. McPherson
--
Solaris kernel software engineer, system admin
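As a concrete illustration of the two-pool split described above, with hypothetical device, pool, and filesystem names:
zpool create fastpool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
zpool create bigpool raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0
zfs create fastpool/db
zfs create bigpool/archive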
?
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
James C. McPherson wrote:
Jason J. W. Williams wrote:
I agree with others here that the kernel panic is undesired behavior.
If ZFS would simply offline the zpool and not kernel panic, that would
obviate my request for an informational message. It'd be pretty darn
obvious what was going
0 0
          mirror    ONLINE       0     0     0
            c1d0s3  ONLINE       0     0     0
            c2d0s3  ONLINE       0     0     0
errors: No known data errors
$
libdiskmgmt protects you (mostly) from using slices or
partitions which are already in use
James C. McPherson
.
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
, always on Fridays!
This is the fourth panic, first time with a ZFS error. There are no
errors in zpool status.
Without data, it is difficult to suggest what might have caused
your NFS panics.
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http
into an *Instant PANIC thanks to ZFS*
Yes, as noted, this is by design in order to *protect your data*
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
the maximum number of targets on a
loop, which might also solve your second problem.)
Your alternative option isn't going to happen. The ifp driver and
the card it supports have both been long since EOLd.
James C. McPherson
--
Solaris kernel software engineer
Sun Microsystems
this dependency?
What do you suggest in its place?
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
to get the most from
filesystem that has a huge number of small files?
You're not quite thinking ZFS-style yet. With ZFS you do not have to
worry about block sizes unless you want to - the filesystem handles
that for you.
cheers,
James C. McPherson
--
Solaris kernel software engineer, system admin
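If someone really does want to tune for a huge number of small files, the per-dataset knob is recordsize; a hedged example with a placeholder dataset name (the 128K default is fine for most workloads, and the setting only affects newly written blocks):
zfs get recordsize tank/smallfiles
zfs set recordsize=8k tank/smallfiles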
/kshtest default
You should also have a look at the legacy option in the zfs
manpage, which provides more details on how to get zpools and
zfs integrated into your system.
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http
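A hedged sketch of the legacy arrangement being referred to, using a placeholder dataset name and the /kshtest mountpoint from the excerpt:
zfs set mountpoint=legacy tank/kshtest
# then either mount it by hand or add a vfstab line like:
#   tank/kshtest  -  /kshtest  zfs  -  yes  -
mount -F zfs tank/kshtest /kshtest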
by ZFS, and use it for those same purposes.
Would you want a single checksum per file, or the list of every checksum
for every block that the file referenced?
The second option might get unwieldy.
The first option - a meta-checksum if you like - would require some
interesting design.
James C
*.
Didn't we have the PMC (poor man's cluster) talk last week as well?
James C. McPherson
valid tests do you think you are going to be able to run?
Wait for the SunCluster 3.2 release (or the beta). Don't faff around
with a data-killing test suite in an unsupported configuration.
James C. McPherson
Erik Trimble wrote:
OK, this may seem like a stupid question (and we all know that there are
such things...)
I'm considering sharing a disk array (something like a 3510FC) between
two different systems, a SPARC and an Opteron.
Will ZFS transparently work to import/export pools between the two
to get it working, how well it copes and how you are
addressing any data corruption that might occur.
I tend to refer to SunCluster more than VCS simply because I've got
more in depth experience with Sun's offering.
James C. McPherson
?
thanks in advance,
James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/pub/2/1ab/967
Joe Little wrote:
So, people here recommended the Marvell cards, and one even provided a
link to acquire them for SATA jbod support. Well, this is what the
latest bits (B47) say:
Sep 12 13:51:54 vram marvell88sx: [ID 679681 kern.warning] WARNING:
marvell88sx0: Could not attach, unsupported chip
.
Save yourself the hassle and do things right from the start.
James C. McPherson
Lieven De Geyndt wrote:
I know this is not supported. But we are trying to build a safe configuration
until ZFS is supported in Sun Cluster. The customer did order SunCluster,
but needs a workaround until the release date. And I think it must be
possible to set up.
So build them a configuration
zpool create -R /alternate_root newpoolname vdevlist
You might need to add a -f, but try it without -f first.
cheers,
James C. McPherson
James C. McPherson
.
...
Any pointers muchly appreciated! :-|
Did you try a zpool export on either or both machines?
James C. McPherson
Stuart Low wrote:
Nada.
[EMAIL PROTECTED] ~]$ zpool export -f ax150s
cannot open 'ax150s': no such pool
[EMAIL PROTECTED] ~]$
I wonder if it's possible to force the pool to be marked as inactive? Ideally
all I want to do is get it back online then scrub it for errors. :-|
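Since the pool isn't currently imported, export has nothing to act on; the force path, with the data-safety caveat the rest of this thread is about, would be something like:
zpool import             # list pools visible on the attached devices
zpool import -f ax150s   # force the import even if the pool looks active elsewhere (risky)
zpool scrub ax150s       # then scrub and check zpool status -v for damage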
At this point it
reliable zfs is :)
cheers,
James C. McPherson
(on a permanent search for more disk space)
Robert Milkowski wrote:
Hello zfs-discuss,
S10U2 SPARC + patches
Generic_118833-20
LUNs from 3510 array.
bash-3.00# zpool import
no pools available to import
bash-3.00# zpool create f3-1 mirror c5t600C0FF0098FD535C3D2B900d0
c5t600C0FF0098FD54CB01E1100d0
you want to observe if your system hit the upper
limit in zfs_max_phys_mem?
cheers,
James C. McPherson
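If you want to check that variable directly, mdb can print it, assuming the symbol exists in your kernel build (the variable name is taken from the question above):
echo 'zfs_max_phys_mem/E' | mdb -k     # print the current value as 64-bit decimal
prtconf | head -3                      # compare against installed physical memory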
probably
have to reboot :(
cheers,
James C. McPherson
special-case handle the syscall in an elegant
manner, too :)
best regards,
James C. McPherson
ditto blocks
will be examined, and those which have incorrect checksums will be
detected and fixed.
James C. McPherson
--
Solaris Datapath Engineering
Storage Division
Sun Microsystems
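That verification is what a scrub drives; a hedged example with a placeholder pool name:
zpool scrub tank        # walk every allocated block, including ditto copies, verifying checksums
zpool status -v tank    # progress, plus any files with unrecoverable errors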
my memory back!
This is essentially by design, due to the way that ZFS uses kernel
memory for caching and other stuff.
You can alleviate this somewhat by running a 64-bit processor, which
has a significantly larger address space to play with.
best regards,
James C. McPherson
--
Solaris Datapath
module in the miniroot!
Thanks to Team ZFS for all the help in getting over this
annoying issue.
cheers,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems
for your help with this, I really, really appreciate it.
best regards,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems