should be enough for him to make significant progress.
James C. McPherson
--
Oracle
Systems / Solaris / Core
http://www.jmcpdotcom.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On 17/02/13 08:48 AM, Sašo Kiselkov wrote:
On 02/16/2013 10:47 PM, James C. McPherson wrote:
...
Whether that message winds up being something you need
to talk with Oracle about is entirely different.
He got a kernel panic on a completely legitimate operation (booting with
one half
that
MPxIO isn't working.
James C. McPherson
--
Oracle
Systems / Solaris / Core
http://www.jmcpdotcom.com/blog
devid's in
preference to physical paths.
James C. McPherson
--
Oracle
Systems / Solaris / Core
http://www.jmcpdotcom.com/blog
On 19/10/12 09:27 PM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of James C. McPherson
As far as I'm aware, having an rpool on multipathed devices is fine.
Even a year ago
reads and I'm
seeing 20-80% l2arc hits. These have been running for about a week and, given
my understanding of how L2ARC fills, I'd suggest maybe leaving it to warm up
longer (e.g. 1-2 weeks?)
caveat: I'm a complete newbie to zfs so I could be completely wrong ;)
Cheers,
James
inline
On 07/02/12 15:00, Nico Williams wrote:
On Mon, Jul 2, 2012 at 3:32 PM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Mon, 2 Jul 2012, Iwan Aucamp wrote:
I'm interested in some more detail on how the ZFS intent log behaves for
updates done via a memory-mapped file - i.e. will the
Agreed - msync/munmap is the only guarantee.
On 07/ 3/12 08:47 AM, Nico Williams wrote:
On Tue, Jul 3, 2012 at 9:48 AM, James Litchfield
jim.litchfi...@oracle.com wrote:
On 07/02/12 15:00, Nico Williams wrote:
You can't count on any writes to mmap(2)ed files hitting disk until
you msync(2
On 12/06/12 06:40 AM, David Combs wrote:
Actual newsgroup for zfs-discuss?
Actually, no. Where's the value in having a newsgroup
as well as a mailing list?
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
--
ORACLE
James Cypcar | Solaris and Network Domain, Global Systems Support
Oracle Global Customer Services
Log, update, and monitor your Service Request online
using https://support.oracle.com
location information in format, and
using the diskinfo tool.
Otherwise, if you're running S11, you could try using
/usr/lib/fm/fmd/fmti - a tool which blinks LEDs at you
and prompts for label confirmation.
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
in order for those changes to be correctly propagated.
You can (and should) read about this in the stmsboot(1m) manpage,
and there's more information available in my blog post
http://blogs.oracle.com/jmcp/entry/on_stmsboot_1m
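For reference, the invocations involved look roughly like this (a sketch only; check stmsboot(1m) for the exact options on your release):

```shell
# Sketch of the stmsboot commands referred to above; printed rather than
# executed, since they reconfigure a live system.
cmds=$(cat <<'EOF'
stmsboot -e    # enable MPxIO, updating /etc/vfstab and the dump config
stmsboot -d    # disable MPxIO, with the same path updates
stmsboot -u    # propagate device path changes after a config change
EOF
)
echo "$cmds"
```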
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
this cache by running strings over /etc/mpxio/devid_path.cache.
This is all available for your perusal at
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/cmd/stmsboot/
cheers,
James
--
Oracle
http://www.jmcp.homeunix.com/blog
and/or prtconf -v.
hth,
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
that was supplied
to the manufacturer by a third party.
Personally, I'd start looking at the cables first - in my
experience they seem to incur more physical stress through the
connect/disconnect operations than HBAs.
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
The value of zfs_arc_min specified in /etc/system must be over 64MB
(0x4000000 bytes).
Otherwise the setting is ignored. The value is in bytes, not pages.
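Since the value is in bytes, a quick way to generate the /etc/system line for the 64MB floor (a sketch; adjust the size as needed):

```shell
# zfs_arc_min is given in bytes; compute the 64 MB floor and emit the
# corresponding /etc/system line.
bytes=$(( 64 * 1024 * 1024 ))
line=$(printf 'set zfs:zfs_arc_min = 0x%x' "$bytes")
echo "$line"    # prints: set zfs:zfs_arc_min = 0x4000000
```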
Jim
---
On 10/ 6/11 05:19 AM, Frank Van Damme wrote:
Hello,
quick and stupid question: I'm breaking my head over how to tune
zfs_arc_min on a
On 10/07/2011 11:02 AM, James Lee wrote:
Hello,
I had a pool made from a single LUN, which I'll call c4t0d0 for the
purposes of this email. We replaced it with another LUN, c4t1d0, to
grow the pool size. Now c4t1d0 is hosed and I'd like to see about
recovering whatever data we can from
guys
don't really understand ZFS or else I would have made the pool redundant
in the first place.
Thanks,
James
[1] starlight ~ # zdb -l /dev/dsk/c4t0d0s0
LABEL 0
version=22
name='idmtestdb2
Jim wrote:
But I may be wrong, and anyway the single user shell in the u9 DVD also
panics when I try to import tank so maybe that won't help.
Ian wrote:
Put your old drive in a USB enclosure and connect it
to another system in order to read back the data.
Given that update 9 can't import
I am opening a new thread since I found somebody else reported a similar
failure in May and I didn't see a resolution; hopefully this post will be easier
to find for people with similar problems. Original thread was
http://opensolaris.org/jive/thread.jspa?threadID=140861
System: snv_151a 64 bit
I'm opening a new thread since the original subject was not as helpful and I
saw a similar problem mentioned in May of this year (2011) and others going
back to 2009. New thread is found at
http://opensolaris.org/jive/thread.jspa?threadID=140899
--
This message posted from opensolaris.org
Thanks for your comments so far. I'll try to put everything I know into this
post now that I have signed up at the forums.
Solaris 10, update 8 Intel, 500G ZFS root mirror rpool.
I recently received two 320G drives and realized from reading this list it
would have been better if I would have
Anyone know what this means? After a scrub I apparently have an error in a
file name that I don't understand:
zpool status -v pumbaa1
pool: pumbaa1
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may
A reboot and then another scrub fixed this. Reboot made no difference. So
after the reboot I started another scrub and now the pool shows clean.
So the sequence was like this:
1. zpool reported ioerrors after a scrub with an error on a file in a snapshot
2. destroyed the snapshot with the
which tracks the inclusion in a Solaris 10 Update.
I'd also like to know where you're getting your information from
on this topic.
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
On 14/03/11 11:26 PM, Edward Ned Harvey wrote:
From: James C. McPherson [mailto:j...@opensolaris.org]
Sent: Monday, March 14, 2011 9:20 AM
Just for clarity:
The in-kernel CIFS service is indeed available in solaris 10.
Are you really, really sure about that? Please point to the RFE number
which
On 1/03/11 03:00 AM, Dave Pooser wrote:
On 2/27/11 11:13 PM, James C. McPherson j...@opensolaris.org wrote:
/pci@0,0/pci8086,340c@5/pci1000,3020@0
and
/pci@0,0/pci8086,340e@7/pci1000,3020@0
which are in different slots on your motherboard and connected to
different PCI Express Root Ports
employer.
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
to see bay numbers with the
fmtopo command - when you run it as root:
# /usr/lib/fm/fmd/fmtopo -V
If this doesn't work for you, then you'll have to resort to the
tried and tested use of dd to /dev/null for each disk, and see
which lights blink.
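That dd trick can be scripted per disk. Demonstrated here against a scratch file; on a live system the input would be the raw device, e.g. /dev/rdsk/<disk>s2 (a hypothetical name):

```shell
# Generate sustained read activity so a drive's LED blinks; a scratch file
# stands in for the raw disk device here.
f=$(mktemp)
dd if=/dev/urandom of="$f" bs=1024 count=16 2>/dev/null
dd if="$f" of=/dev/null bs=1024 2>/dev/null && status=ok
rm -f "$f"
echo "$status"
```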
James C. McPherson
--
Oracle
http
and has different firmware to what you have
in your 9211 card. The 9211 card is also 2nd generation SAS, not 1st
generation like the 3081.
Personally, having worked on the mpt_sas(7d) project, I'm disappointed
that you believe the card and its driver are a failed bit.
James C. McPherson
--
Oracle
are fairly closely tied
to the PC architecture, that perhaps they do some bios calls to
try to figure out correct order mappings.
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
On 28/02/11 02:08 AM, Dave Pooser wrote:
On 2/27/11 5:15 AM, James C. McPherson j...@opensolaris.org wrote:
On 27/02/11 05:24 PM, Dave Pooser wrote:
On 2/26/11 7:43 PM, Bill Sommerfeld sommerf...@hamachi.org wrote:
On your system, c12 is the mpxio virtual controller; any disk which
On 28/02/11 12:46 PM, Dave Pooser wrote:
On 2/27/11 4:07 PM, James C. McPherson j...@opensolaris.org wrote:
...
PHY   iport@
 0       1
 1       2
 2       4
 3       8
 4      10
 5      20
 6      40
 7      80
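That mapping follows from the iport@ instance name being the hexadecimal bitmask of the PHYs that make up the port, so single-PHY ports 0..7 appear as 1, 2, 4, 8, 10, 20, 40, 80. A quick sketch (the two-PHY wide-port example is hypothetical):

```shell
# Compute the iport@ name (hex bitmask of member PHYs) for a set of PHYs.
phys="0 3"                     # hypothetical wide port built from PHYs 0 and 3
mask=0
for p in $phys; do
  mask=$(( mask | (1 << p) ))
done
iport=$(printf 'iport@%x' "$mask")
echo "$iport"                  # prints: iport@9
```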
OK, bear with me for a moment because I'm feeling extra dense this evening.
The PHY tells me which port
On 28/02/11 02:51 PM, Dave Pooser wrote:
On 2/27/11 10:06 PM, James C. McPherson j...@opensolaris.org wrote:
...
2nd controller
c16t5000CCA222DDD7BAd0
/pci@0,0/pci8086,340c@5/pci1000,3020@0/iport@2/disk@w5000cca222ddd7ba,0
3rd controller
c14t5000CCA222DF8FBEd0
/pci@0,0/pci8086,340e@7
Edward,
Thanks for the reply.
Good point on platter density. I'd considered the benefit of lower
fragmentation but not the possible increase in sequential iops due to density.
I assume while a 2TB 7200rpm drive may have better sequential IOPS than a
500GB, it will not be double and
Thanks Richard Edward for the additional contributions.
I had assumed that maximum sequential transfer rates on datasheets (btw -
those are the same for differing capacity seagate's) were based on large block
sizes and a ZFS 4kB recordsize* would mean much lower IOPS. e.g. Seagate
He says he's using FreeBSD. ZFS recorded names like ada0, which always mean
a whole disk.
In any case FreeBSD will search all block storage for the ZFS dev components if
the cached name is wrong: if the attached disks are connected to the system at
all FreeBSD will find them wherever they may
G'day All.
I’m trying to select the appropriate disk spindle speed for a proposal and
would welcome any experience and opinions (e.g. has anyone actively chosen
10k/15k drives for a new ZFS build and, if so, why?).
This is for ZFS over NFS for VMWare storage, i.e. primarily random 4kB
Chris Eff,
Thanks for your expertise on this and other posts. Greatly appreciated. I've
just been re-reading some of the great SSD-as-ZIL discussions.
Chris,
Cost: Our case is a bit non-representative as we have spare P410/512's that
came with ESXi hosts (USB boot) so I've budgeted them at
.
James
* For NTFS 4kB clusters on VMWare / NFS, I believe 4kB zfs recordsize will
provide best performance (avoid partial writes). Thoughts welcome on that too.
** Assumes 10k SAS can do max 900 sequential writes each striped across 12
mirrors and rounded down (900 based on TomsHardware hdd
I am seeing a zfs recv bug on FreeBSD and am wondering if someone could test
this in the Solaris code. If it fails there then I guess a bug report into
Solaris is needed.
This is a perverse case of filesystem renaming between snapshots.
kraken:/root# cat zt
zpool create rec1 da3
zpool create
On 18/11/10 01:49 PM, Fred Liu wrote:
http://www.lsi.com/channel/about_channel/whatsnew/warpdrive_slp300/index.html
Good stuff for ZFS.
Looks a bit like the Sun/Oracle Flash Accelerator card,
only with a 2nd generation SAS controller - which would
probably use the mpt_sas(7d) driver.
James
On 18/11/10 03:05 PM, Fred Liu wrote:
Yeah, no driver issue.
BTW, any new storage-controller-related drivers introduced in snv151a?
LSI seems the only one who works very closely with Oracle/Sun.
You would have to have a look at what's in the repo,
I'm not allowed to tell you :|
James C
it to the zpool to create a mirror, then detach the
old smaller device. Then run zpool online -e to actually expand the zpool.
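As a sketch, the whole grow-by-mirror sequence looks like this (pool and device names here are made up); printed rather than executed, since it reconfigures a live pool:

```shell
# Grow a pool by mirroring onto a larger device, then expanding.
steps=$(cat <<'EOF'
zpool attach tank c0t0d0 c0t1d0   # mirror onto the new, larger device
zpool detach tank c0t0d0          # drop the old, smaller device
zpool online -e tank c0t1d0       # expand the pool to the new capacity
EOF
)
echo "$steps"
```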
James.
I’m testing the new online zpool expansion feature of Solaris 10 9/10. My
zpool was created using the entire disk (ie. no slice number was used). When I
resize my LUN on our SAN (an HP-EVA4400) the EFI label does not change.
On the zpool, I have autoexpand=on, and I’ve tried using zpool
runs under OSOL build134 or solaris10?
I can.
This card should attach using the mpt_sas(7d) driver.
This is *different* to the mpt(7d) driver.
PSARC 2008/443 Driver for LSI MPT2.0 compliant SAS controller
went into build 118.
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
On 8/10/10 03:28 PM, Anand Bhakthavatsala wrote:
...
--
*From:* James C. McPherson j...@opensolaris.org
*To:* Ramesh Babu rama.b...@gmail.com
On 7/10/10 03:46 PM, Ramesh Babu wrote:
I am trying to create ZPool using
the kernel.
Do you have the panic stack trace we can look at, and/or a
crash dump?
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
which was fixed in snv_135.
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
0 0 0
c5t50024E90037AF38Cd0s0 ONLINE 0 0 0
errors: No known data errors
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
On 4/08/10 12:55 PM, Emily Grettel wrote:
Wow! Thanks for the information James, after consulting with my manager
we're going to install the text-install version.
Better to stick with the supportable methods, imho :-)
I'm going to try that as we're installing it on a new disk. Just
curious
+1
On Wed, Jul 28, 2010 at 6:11 PM, Robert Milkowski mi...@task.gda.pl wrote:
fyi
--
Robert Milkowski
http://milek.blogspot.com
Original Message Subject: zpool import despite missing
log [PSARC/2010/292 Self Review] Date: Mon, 26 Jul 2010 08:38:22 -0600 From:
Tim
I have been working on the same problem now for almost 48 straight hours. I
have managed to recover some of my data using
zpool import -f pool
The command never completes, but you can do a
zpool list
and
zpool status
and you will see the pool.
Then you do
zfs list
and the file systems
I might be mistaken, but it looks like 3ware does have a driver, several in
fact:
http://www.3ware.com/support/downloadpageprod.asp?pcode=9&path=Escalade9500SSeries&prodname=3ware%209500S%20Series
Any comment on this? I'm thinking about picking up a server with this card,
and it would be cool
,
why crypto bits are you using, and what changeset is your own workspace
synced with?
James C. McPherson
--
Oracle
http://www.jmcp.homeunix.com/blog
a service contract. Doesn't take too long
for that kind of math to blow out any savings whiteboxes may have had.
Worst case, someone goes and buys Dell. :-)
--
James Litchfield | Senior Consultant
Phone: +1 4082237059 | Mobile: +1 4082180790
Oracle Oracle ACS
California
Oracle is
On Thu, 8 Jul 2010, Edward Ned Harvey wrote:
Yep. Provided it supported ZFS, a Mac Mini makes for
a compelling SOHO server.
Warning: a Mac Mini does not have eSATA ports for external storage. It's
dangerous to use USB for external storage since many (most? all?) USB-SATA
chips discard SYNC
Under FreeBSD I've seen zpool scrub sustain nearly 500 MB/s in pools with large
files (a pool with eight MIRROR vdevs on two Silicon Image 3124 controllers).
You need to carefully look for bottlenecks in the hardware. You don't indicate
how the disks are attached. I would measure the total
On Fri, Jul 2, 2010 at 1:18 AM, Ray Van Dolson rvandol...@esri.com wrote:
We have a server with a couple X-25E's and a bunch of larger SATA
disks.
To save space, we want to install Solaris 10 (our install is only about
1.4GB) to the X-25E's and use the remaining space on the SSD's for ZIL
,
James
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
4.00G -
James Dickens
http://uadmin.blogspot.com
On 21/06/10 10:38 PM, Edward Ned Harvey wrote:
From: James C. McPherson [mailto:j...@opensolaris.org]
On the build systems that I maintain inside the firewall,
we mandate one filesystem per user, which is a very great
boon for system administration.
What's the reasoning behind
On 22/06/10 01:05 AM, Fredrich Maney wrote:
On Mon, Jun 21, 2010 at 8:59 AM, James C. McPherson
j...@opensolaris.org wrote:
[...]
So when I'm
trying to figure out who I need to yell at because they're
using more than our acceptable limit (30Gb), I have to run
du -s /builds/[zyx
RESPONDING to this thread?
It's not about ZFS at all.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
don't have to traverse
whole directory trees (ala ufs).
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
device.
It really shouldn't be a problem for you.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
/dsk/c5t50014EE1007EE473d0s0(sd39)
unavailable disk-path
/devices/p...@ff,0/pci10de,3...@f/pci1000,3...@0:scsi::6,0
No need to use luxadm.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
On 2/06/10 11:39 AM, Fred Liu wrote:
Thanks.
No.
If you must disable MPxIO, then you do so after installation,
using the stmsboot command.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
plain old targets, or
that no disk devices of any sort show up in your host when
you are installing?
What is your actual problem, and why do you think that
turning off MPxIO will solve it?
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
...@tata_st3320620as_4qf01rze'
And while I'm at it, let me recommend my presentation on
devids and guids
http://www.jmcp.homeunix.com/~jmcp/WhatIsAGuid.pdf
hth,
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
believe you are talking through your hat.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
on a driver that
I worked on (mpt_sas), and I'm still trying to find out from
you and others what you think is a problem with it.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
in progress in
regards to being 'production ready'.
What metric are you using for production ready ?
Are there features missing which you expect to see
in the driver, or is it just oh noes, I haven't
seen enough big customers with it ?
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http
it.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
.
Note that not all of those will be applicable for ZFS.
You should read the ZFS Best Practices Guide
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
and the ZFS Config Guide too
http://www.solarisinternals.com/wiki/index.php/ZFS_Configuration_Guide
James C. McPherson
:-)
I don't know of any other specific difference between Enterprise
SATA and SAS drives.
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
as a base for a NAS system?
James C. McPherson
--
Senior Software Engineer, Solaris
Oracle
http://www.jmcp.homeunix.com/blog
If you're concerned about someone reading the charge level of a Flash cell to
infer the value of the cell before being erased, then overwrite with random
data twice before issuing TRIM (remapping in an SSD probably makes this
ineffective).
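The double overwrite can be sketched like so, against a scratch file standing in for the flash region (on real media you would target the device itself, and as noted above SSD remapping probably defeats this anyway):

```shell
# Overwrite a region twice with random data before releasing it.
# A temp file stands in for a disk region here.
f=$(mktemp)
dd if=/dev/urandom of="$f" bs=1024 count=4 conv=notrunc 2>/dev/null
dd if=/dev/urandom of="$f" bs=1024 count=4 conv=notrunc 2>/dev/null
sync
size=$(wc -c < "$f")
rm -f "$f"
echo "overwrote $size bytes twice"
```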
Most people needing a secure erase feature need it to
Thanks for the clue.
Still not successful, but some hope is there.
that some of the error messages are generated only once.
Joji James
I have a simple mirror pool with 2 disks. I pulled out one disk to simulate a
failed drive. zpool status shows that the pool is in DEGRADED state.
I want syslog to log these type of ZFS errors. I have syslog running and
logging all sorts of error to a log server. But this failed disk in ZFS
My point is not to advocate the TRIM command - those issues are already
well-known - but rather suggest that the code that sends TRIM is also a good
place to securely erase data on other media, such a hard disk.
TRIM is not a Windows 7 command but rather a device command. FreeBSD's CAM
layer
OpenSolaris needs support for the TRIM command for SSDs. This command is
issued to an SSD to indicate that a block is no longer in use and the SSD may
erase it in preparation for future writes.
A SECURE_FREE dataset property might be added that says that when a block is
released to free space
On 6/04/10 11:47 PM, Willard Korfhage wrote:
Yes, I was hoping to find the serial numbers. Unfortunately, it doesn't
show any serial numbers for the disk attached to the Areca raid card.
You'll need to reboot and go into the card bios to
get that information.
James C. McPherson
--
Senior
with an Adaptec 52445 Raid HBA, and
the driver supplied by Opensolaris doesn't support JBOD drives.
FYI, there is a bug report open for this issue:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6862536
Hopefully we'll see some action on it soon.
James
* SII3132-based PCIe X1 SATA card (2 ports)
This chip is slow.
PCIe cards based on the Silicon Image 3124 are much faster, peaking around 1
GB/sec aggregate throughput. However, the 3124 is a PCI-X chip and hence is
used behind an Intel PCI serial-to-parallel bridge for PCIe applications:
)
driver name:mr_sas
This should be using the mpt_sas driver, not the mr_sas driver.
James C. McPherson
--
Senior Software Engineer, Solaris
Sun Microsystems
http://www.jmcp.homeunix.com/blog
user 0m0.458s
sys 0m5.260s
James C. McPherson
--
Senior Software Engineer, Solaris
Sun Microsystems
http://www.jmcp.homeunix.com/blog
sata
Memory 16 GB
Processor: 1 GHz, 6 core
Solaris 10 8/07 s10s_u4wos_12b SPARC
Since you are seeing this on a Solaris 10 update
release, you should log a call with your support
provider to get this investigated.
James C. McPherson
--
Senior Software Engineer, Solaris
Sun Microsystems
build
snv_118.
So you could either wait until 2010.$spring comes out,
or start using the /dev repo instead.
hth,
James C. McPherson
--
Senior Software Engineer, Solaris
Sun Microsystems
http://www.jmcp.homeunix.com/blog
On 8/03/10 01:42 AM, Tim Cook wrote:
On Sun, Mar 7, 2010 at 3:12 AM, James C. McPherson j...@opensolaris.org
wrote:
On 7/03/10 12:28 PM, norm.tallant wrote:
I'm about to try it! My LSI SAS 9211-8i should arrive Monday or
Tuesday. I bought
) exceeds memory, your
performance degrades exponentially probably before that.
James Dickens
http://uadmin.blogspot.com
I.e., I am not using any snapshots and have also turned off automatic
snapshots because I was bitten by system hangs while destroying datasets
with living snapshots.
I am
please post the output of zpool status -v.
Thanks
James Dickens
On Fri, Mar 5, 2010 at 3:46 AM, Abdullah Al-Dahlawi dahl...@ieee.org wrote:
Greeting All
I have created a pool that consists of a hard disk and an SSD as a cache
zpool create hdd c11t0d0p3
zpool add hdd cache c8t0d0p0
to be used or
as extra swap ?
Yes. This is what I do at home, and what we do on the onnv
gate machines - we've got swap in rpool and a separate,
dedicated, swap pool.
Would this have any performance implications ?
Negative performance implications? none that I know of.
James C. McPherson
--
Senior
for the
official word to be announced - as will we all.
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp http://www.jmcp.homeunix.com/blog
=0xc
Feb 17 04:47:57 thecratewall scsi: [ID 365881 kern.info]
/p...@0,0/pci15ad,7...@15/pci1000,3...@0 (mpt_sas0):
Feb 17 04:47:57 thecratewall Log info 0x31110630 received for target 33.
Feb 17 04:47:57 thecratewall scsi_status=0x0, ioc_status=0x804b,
scsi_state=0xc
--
James C. McPherson
--
Senior
the likelihood of getting hit by a bad
batch taking out your pool.
Replace disks early, as soon as you see disk errors. And above all, back up all
data you can't afford to lose.
James Dickens
http://uadmin.blogspot.com
Remember. The goal is damage control. I know 2x raidz2 offers better
Yes, send and receive will do the job. See the zfs manpage for details.
James Dickens
http://uadmin.blogspot.com
On Mon, Feb 15, 2010 at 11:56 AM, Tiernan OToole lsmart...@gmail.com wrote:
Good morning all.
I am in the process of building my V1 SAN for media storage in house, and I
am already