Hi All,
While pumping IO on a ZFS file system, my system is crashing/panicking. Please
find the crash dump below.
panic[cpu0]/thread=2a100adfcc0: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125
02a100adec40 genunix:assfail+74 (7b652448, 7b652458,
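If anyone wants to dig further: the panic string and the full stack can be pulled back out of the saved dump with mdb. A minimal sketch, assuming savecore wrote unix.0/vmcore.0 under /var/crash:

# cd /var/crash/`hostname`
# mdb unix.0 vmcore.0
> ::status
> ::stack
> $q

::status echoes the panic string, ::stack prints the panicking thread's stack, and $q quits.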
Ima wrote:
3. Can anyone recommend a PCI-Express SATA controller that will work with
64-bit x86 Solaris 10?
I believe these cards support SAS and SATA devices just fine:
http://www.sun.com/storagetek/storage_networking/hba/sas/
On Fri, Oct 05, 2007 at 08:52:17AM +0100, Robert Milkowski wrote:
Hello Eric,
Thursday, October 4, 2007, 5:54:06 PM, you wrote:
ES> On Thu, Oct 04, 2007 at 05:22:58AM -0700, Ivan Wang wrote:
ES> This bug was rendered moot via 6528732 in build snv_68 (and s10_u5). We now store physical
Besides, there are some new results about BWT that I'm sure would be of interest in this context.

I thought bzip2/BWT is a compression scheme that has a heavy footprint and is generally brain-damaging to implement?
-mg
Is there any collateral that I could share with a prospective customer on ZFS and Oracle 10?
Many thanks
Paul
--
Paul Killingback
Sun Microsystems, Inc., GB
Phone  +44 (0)1252 422 554
Mobile +44 (0)7841363767
Fax    +44 (0)1252 422 088
Email  [EMAIL PROTECTED]
http://www.sun.com
I had some trouble installing a zone on ZFS with S10u4 (a bug in the postgres packages) that went away when I used a ZVOL-backed UFS filesystem for the zonepath.
I thought I'd push on with the experiment (in the hope Live Upgrade would be able to upgrade such a zone).
It's a bit unwieldy, but
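A minimal sketch of the ZVOL-backed-UFS approach, assuming a pool called tank and a zonepath of /zones/myzone (the names are made up):

# zfs create -V 8g tank/myzone-root
# newfs /dev/zvol/rdsk/tank/myzone-root
# mkdir -p /zones/myzone
# mount /dev/zvol/dsk/tank/myzone-root /zones/myzone

Add the matching vfstab entry so it mounts at boot, then point zonecfg's zonepath at /zones/myzone.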
Dick Davies wrote:
I had some trouble installing a zone on ZFS with S10u4 (a bug in the postgres packages) that went away when I used a ZVOL-backed UFS filesystem for the zonepath.

Hi,
Out of interest, what was the bug?
Enda

I thought I'd push on with the experiment (in the hope Live Upgrade
On Mon, 8 Oct 2007, Dick Davies wrote:
I had some trouble installing a zone on ZFS with S10u4 (a bug in the postgres packages) that went away when I used a ZVOL-backed UFS filesystem for the zonepath.
I thought I'd push on with the experiment (in the hope Live Upgrade would be able to
Having recently upgraded from snv_57 to snv_73, I've noticed some strange behaviour with the -v option to zpool iostat.
Without the -v option on an idle pool, things look reasonable.

bash-3.00# zpool iostat 1
               capacity     operations    bandwidth
pool         used  avail   read
On Wed, Oct 03, 2007 at 10:02:03PM +0200, Pawel Jakub Dawidek wrote:
On Wed, Oct 03, 2007 at 12:10:19PM -0700, Richard Elling wrote:

# zpool scrub tank
# zpool status -v tank
  pool: tank
 state: ONLINE
status: One or more devices could not be used because the
1) I would use a soft mirror (SVM): during install, dedicate s7 to the metadb (~10MB is plenty).

# cat /etc/lvm/md.tab
/dev/md/dsk/d0   -m   /dev/md/dsk/d10
/dev/md/dsk/d10   1 1   /dev/dsk/c0d0s0
/dev/md/dsk/d20   1 1   /dev/dsk/c0d1s0

# metadb -a -f -c 3 /dev/dsk/c0d0s7 /dev/dsk/c0d1s7

(metadb needs -f the first time, while no state database exists yet.)
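Roughly, the remaining steps to finish the root mirror from that md.tab; this assumes c0d0s0 is the live root slice (hence the -f) and that the second submirror is attached only after rebooting onto d0:

# metainit -f d10 1 1 c0d0s0
# metainit d20 1 1 c0d1s0
# metainit d0 -m d10
# metaroot d0

Reboot, then attach the other half and let it sync:

# metattach d0 d20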
Hi all,
I want to replace a bunch of Apple Xserves with Xraids and HFS+ (brr) with Sun x4200s with SAS JBODs and ZFS. The application will be the Helios UB+ fileserver suite.
I installed the latest Solaris 10 on an x4200 with 8 GB of RAM and two Sun SAS controllers, attached two sas-jbods with 8
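With two JBODs on separate controllers, the usual trick is to build each mirror pair across the enclosures, so that a whole JBOD or controller can fail without losing the pool. A sketch with made-up device names:

# zpool create helios \
    mirror c2t0d0 c3t0d0 \
    mirror c2t1d0 c3t1d0 \
    mirror c2t2d0 c3t2d0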
Hi Mario,
This is common knowledge but not completely true. The bottleneck of BWT is the suffix-sorting step, and there have been many recent advances that significantly reduce the time and space needs of the algorithm. Of course, it will probably never be as fast as a lightweight Ziv-Lempel
Hi
This may be of interest:
http://blogs.sun.com/timthomas/entry/samba_performance_on_sun_fire
I appreciate that this is not a frightfully clever set of tests, but I needed some throughput numbers and the easiest way to share the results is to blog.
Rgds
Tim
--
Tim Thomas
Storage
On 10/8/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Is there any collateral that I could share with a prospective customer on ZFS and Oracle 10?
The following documents (as well as the blogs from the ZFS developers) contain some useful information related to database tuning (including
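One knob those documents all mention: set the data filesystem's recordsize to the database block size before creating the data files. A sketch, assuming an 8K db_block_size and a hypothetical dataset name:

# zfs create tank/oradata
# zfs set recordsize=8k tank/oradata
# zfs get recordsize tank/oradata

recordsize only affects files written after it is set, which is why it should happen before loading data.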
Hi Tim,
Sorry for mailing you directly; I meant to reply to the list. My mistake.

On 10/8/07, Tim Thomas [EMAIL PROTECTED] wrote:
this was a one-day project, which is why I kept it simple, and I don't have detailed data beyond what I collected for the graphs.

The graphs are pretty informative,
Using ZFS for a zone's root is currently planned to be supported in Solaris 10 update 5, but we are working on moving it up to update 4.

Did this make it into update 4? Or is it still in the works for update 5?
statfile1     988ops/s    0.0mb/s   0.0ms/op   22us/op-cpu
deletefile1   991ops/s    0.0mb/s   0.0ms/op   48us/op-cpu
closefile2    997ops/s    0.0mb/s   0.0ms/op    4us/op-cpu
readfile1     997ops/s  139.8mb/s   0.2ms/op
Besides re-inventing the wheel, somebody at Sun should wake up, go ask Mr. Oberhumer, and pay him $$$ to get LZO into ZFS.
This is taken from http://www.oberhumer.com/opensource/lzo/lzodoc.php:
Copyright
-
LZO is Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004,
On 8-Oct-07, at 5:39 PM, roland wrote:
Besides re-inventing the wheel, somebody at Sun should wake up, go ask Mr. Oberhumer, and pay him $$$ to get LZO into ZFS.
This is taken from http://www.oberhumer.com/opensource/lzo/lzodoc.php:
Copyright
-
LZO is Copyright (C) 1996,
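For context, compression in ZFS is already a per-dataset property, with lzjb as the shipped algorithm; an algorithm like LZO would presumably slot in as another value of the same property. Assuming a dataset tank/data:

# zfs set compression=lzjb tank/data
# zfs get compression,compressratio tank/data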
On 10/6/07, Vincent Fox [EMAIL PROTECTED] wrote:
So I went ahead and loaded 10u4 on a pair of V210 units.
I am going to set this nocacheflush option and cross my fingers and see how it goes.
I have my zpool mirroring LUNs off 2 different arrays. I have single controllers in each 3310.
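For the archives: on s10u4 this is a kernel tunable set in /etc/system, active after a reboot, and only sane when every device under the pool sits behind battery-backed cache:

* /etc/system
set zfs:zfs_nocacheflush = 1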
Hi All,
Has anyone had a chance to look into this issue?
-Masthan D

dudekula mastan [EMAIL PROTECTED] wrote:
Hi All,
While pumping IO on a ZFS file system, my system is crashing/panicking. Please find the crash dump below.
panic[cpu0]/thread=2a100adfcc0: assertion
Battery-backed cache...
Interestingly enough, I've seen this configuration in production (V880/SAP on Oracle) running Solaris 8 + Veritas Storage Foundation (for the RAID-1 part).
Speed is good ... redundancy is good ... price is not (2/3).
Uptime 499 days :)
On 10/9/07, Wee Yeh Tan [EMAIL