Hi.
Someone recently reported an 'ss == NULL' panic in
space_map.c/space_map_add() on FreeBSD's version of ZFS.
I found that this problem was previously reported on Solaris and is
already fixed. I verified that FreeBSD's version has this fix in
place...
This question triggered some silly questions in my
mind:
Actually, they're not silly at all.
Lots of folks are convinced that the whole COW-to-different-locations
approach is a Bad Thing(tm), and in some cases, I guess it might
actually be...
What if ZFS had a pool / filesystem property
some businesses do not accept any kind of risk
Businesses *always* accept risk: they just try to minimize it within the
constraints of being cost-effective. Which is a good thing for ZFS, because it
can't eliminate risk either, just help to minimize it cost-effectively.
However, the subject
...
And how about FAULTS? hw/firmware/cable/controller/ram/...
If you had read either the CERN study or what I already said about it,
you would have realized that it included the effects of such faults.
...and ZFS is the only prophylactic available.
You don't *need* a
Hi;
Does anyone have experience with iSCSI target volumes on ZFS accessed by Linux
clients? (Red Hat, SUSE?)
regards
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
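For what it's worth, a minimal sketch of one way to do this, assuming a
Nevada/SXCE build where the shareiscsi property and the iSCSI target daemon
are available, and open-iscsi on the Linux side; the pool, volume and host
names below are placeholders:

On the Solaris side:
# zfs create -V 10g tank/iscsivol
# zfs set shareiscsi=on tank/iscsivol
# iscsitadm list target
The last command shows the target IQN that was created.

On the Linux (Red Hat / SUSE) client with open-iscsi:
# iscsiadm -m discovery -t sendtargets -p solaris-host
# iscsiadm -m node -p solaris-host --login
The LUN then shows up as an ordinary /dev/sd* device to partition and mount.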
can you guess? wrote:
at the moment only ZFS can give this assurance, plus the ability to
self-correct detected errors.
You clearly aren't very familiar with WAFL (which can do the same).
...
so far as I can tell it's quite irrelevant to me at home; I can't
On 9-Nov-07, at 2:45 AM, can you guess? wrote:
Au contraire: I estimate its worth quite
accurately from the undetected error rates reported
in the CERN Data Integrity paper published last
April (first hit if you Google 'cern data
integrity').
While I have yet to see any checksum
I was able to create a second Solaris partition by running
#fdisk /dev/rdsk/c1t0d0p0
The first was NTFS (40 GB),
the second was the SNV76 installation (40 GB),
and the third was created by me.
Rebooted the system. Double-checked via fdisk that the partition exists.
My intent is to run:
# zpool create pool c1t0d0
Cannot
Hi Boris,
When you create a Solaris2 partition under x86, Solaris sees the
partition as a disk that you can cut into slices. You can find the list of
available disks via the format command.
A slice is much like a partition, but there is a difference; that's about
all you really need to know to
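As a rough sketch (disk and pool names are placeholders, and this assumes
slice 0 has been laid out to cover the Solaris2 partition):
# format
(select c1t0d0, use the partition menu to size slice 0, then label the disk)
# zpool create mypool c1t0d0s0
Pointing zpool at the whole disk (c1t0d0) tells ZFS to take over the entire
device, which is not what you want when NTFS lives in another fdisk
partition; giving it a slice inside the Solaris2 partition leaves the other
partitions alone.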
On 11/14/07, Gary Wright [EMAIL PROTECTED] wrote:
Hope you don't mind me asking, but we are planning to use a CX3-20 Dell/EMC
SAN connected to a T5220 server (Solaris 10). Can you tell me
whether you were forced to use PowerPath or whether you used MPxIO/Traffic Manager?
Did you use LPe11000-E
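Not an answer to the PowerPath question, but for reference, enabling the
bundled multipathing on Solaris 10 is roughly the following (a sketch only;
read stmsboot(1M) first, since enabling it changes device names and wants a
reboot):
# stmsboot -e
# stmsboot -L
# mpathadm list lu
The -L option shows the old-to-new device name mappings after the reboot, and
mpathadm should show multiple paths per LUN once MPxIO is active.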
On Wed, 2007-11-14 at 21:23 +, A Darren Dunham wrote:
On Wed, Nov 14, 2007 at 09:40:59AM -0800, Boris Derzhavets wrote:
I was able to create second Solaris partition by running
#fdisk /dev/rdsk/c1t0d0p0
I'm afraid that won't do you much good.
Solaris only works with one Solaris
On 14-Nov-07, at 12:43 AM, Jason J. W. Williams wrote:
Hi Darren,
Ah, your CPU end was referring to the NFS client CPU, not the storage
device CPU. That wasn't clear to me. The same limitations would apply
to ZFS (or any other filesystem) when running in support of an NFS
server.
On 14-Nov-07, at 7:06 AM, can you guess? wrote:
...
And how about FAULTS? hw/firmware/cable/controller/ram/...
If you had read either the CERN study or what I already said about it,
you would have realized that it included the effects of such faults.
...and ZFS is the only
Louwtjie Burger wrote:
On 11/8/07, Richard Elling [EMAIL PROTECTED] wrote:
Potentially, depending on the write part of the workload, the system may
read 128 kBytes to get a 16 kByte block. This is not efficient and may be
noticeable as a
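If the workload really is dominated by 16 kByte reads and writes (a database,
say), the usual mitigation is to match the dataset's recordsize to the
application block size before loading the data; the dataset name here is a
placeholder:
# zfs set recordsize=16k tank/db
# zfs get recordsize tank/db
Note that this only affects files written after the change; existing files
keep the record size they were created with.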
Quoth Mark Ashley on Mon, Nov 12, 2007 at 11:35:57AM +1100:
Is it possible to tell ZFS to forget those SE6140 LUNs ever belonged to the
zpool? I know that ZFS will have probably put some user data on them, but if
there is a possibility of recovering any of those zvols on the zpool it'd
really
Nathan Kroenert wrote:
...
What if it did a double update: one to a staged area, and another
immediately after that to the 'old' data blocks. Still always have
on-disk consistency etc., at a cost of double the I/Os...
This is a non-starter. Two I/Os is worse than one.
Well, that
On 14-Nov-07, at 7:06 AM, can you guess? wrote:
...
And how about FAULTS? hw/firmware/cable/controller/ram/...
If you had read either the CERN study or what I already said about it,
you would have realized that it included the effects of such faults.
...and ZFS is
...
Well, single-bit error rates may be rare in hard drives under normal
operation, but from a systems perspective, data can be corrupted anywhere
between disk and CPU.
The CERN study found that such errors (if they found any at all,
which they couldn't really be sure of) were
Hi,
I am getting the following error message when I run any ZFS command. I have
attached the script I use to create the ramdisk image for Thumper.
# zfs volinit
internal error: Bad file number
Abort - core dumped
# zpool status
internal error: Bad file number
Abort - core dumped
#
# zfs list
internal
Hi,
I am using s10u3 on an x64 AMD Opteron Thumper.
Thanks,
Manoj Nayak
Manoj Nayak wrote:
Hi,
I am getting the following error message when I run any ZFS command. I have
attached the script I use to create the ramdisk image for Thumper.
# zfs volinit
internal error: Bad file number
Abort - core
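'Bad file number' is EBADF, and the zfs/zpool utilities talk to the kernel
through the /dev/zfs control device, so one guess (purely an assumption on my
part for a hand-built ramdisk image) is that the device node or driver did not
make it into the image. Worth checking:
# ls -lL /dev/zfs
# modinfo | grep zfs
# devfsadm -i zfs
The first shows whether the control device exists, the second whether the zfs
module is loaded, and the last recreates the device node for the zfs driver if
it is missing.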