Ian,
It would help to have some config detail (e.g. what options are you using? zpool status output; property lists for specific filesystems and zvols; etc.). Some basic Solaris stats can be very helpful too (e.g. peak-flow samples of vmstat 1, mpstat 1, iostat -xnz 1, etc.)
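The stats listed above can be captured in one pass; a minimal collection sketch (the 30-sample window and output directory are my own arbitrary choices, not anything prescribed in the thread):

```shell
#!/bin/sh
# Capture a short burst of basic Solaris stats plus pool config.
# 30 one-second samples is an arbitrary window; adjust to taste.
OUT=/var/tmp/perfstats.$$
mkdir -p "$OUT"

zpool status -v  > "$OUT/zpool-status.txt"
zfs get all      > "$OUT/zfs-properties.txt"   # property lists for all datasets

vmstat 1 30      > "$OUT/vmstat.txt"  &
mpstat 1 30      > "$OUT/mpstat.txt"  &
iostat -xnz 1 30 > "$OUT/iostat.txt"  &
wait             # let all three samplers finish

echo "stats written to $OUT"
```

Running the three samplers in parallel keeps the samples aligned in time, which matters when correlating CPU spikes with disk latency.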
It would also be
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Phil Harman
I'm wondering whether your HBA has a write-through or write-back cache
enabled? The latter might make things very fast, but could put data at
risk if not sufficiently
Am 14.10.10 17:48, schrieb Edward Ned Harvey:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Toby Thain
I don't want to heat up the discussion about ZFS managed discs vs.
HW raids, but if RAID5/6 were that bad, no one would use it
A customer is running ZFS version 15 on Solaris SPARC 10/08 supporting Oracle
10.2.0.3 databases in a dev and production test environment. We have come
across some cache inconsistencies with one of the Oracle databases, where
fetching a record displays a 'historical value' (that has been
Hi,
So, to be absolutely clear: in the same session, you ran an update, commit
and select, and the select returned an earlier value than the committed update?
Things like
ALTER SESSION set ISOLATION_LEVEL = SERIALIZABLE;
will cause a session to NOT see commits from other sessions, but in
Oracle
As I have mentioned already, we have the same performance issues whether we
READ or we WRITE to the array; shouldn't that rule out caching issues?
Also, we can get great performance with the LSI HBA if we use the JBODs as a
local file system. The issues only arise when it is done through iSCSI
He already said he has SSDs for dedicated log. This means the best
solution is to disable WriteBack and just use WriteThrough. Not only is it
more reliable than WriteBack, it's faster.
And I know I've said this many times before, but I don't mind repeating: if
you have slog devices,
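A sketch of what that looks like in practice (the pool and device names are placeholders, and the cache-policy command shown is for LSI's MegaCli, which may or may not match the controllers in this thread):

```shell
# Check whether the pool already has a dedicated log device:
zpool status tank

# If not, add the SSD as a slog (placeholder device name):
zpool add tank log c5t0d0

# Switching the controller cache to WriteThrough is vendor-specific.
# On an LSI MegaRAID-family controller it is roughly:
#   MegaCli -LDSetProp WT -LAll -aAll
```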
Using snv_111b, and yesterday both the Mac OS X Finder and Solaris File Browser
started reporting that I had 0 space available on the SMB shares. Earlier in
the day I had copied some files from the Mac to the SMB shares, and no problems
were reported by the Mac (Automator will report errors if the
I've had a few people sending emails directly
suggesting it might have something to do with the
ZIL/SLOG. I guess I should have said that the issue
happens both ways, whether we copy TO or FROM the
Nexenta box.
You mentioned a second Nexenta box earlier. To rule out client-side issues,
As I have mentioned already, it would be useful to know more about the
config, how the tests are being done, and to see some basic system
performance stats.
On 15/10/2010 15:58, Ian D wrote:
As I have mentioned already, we have the same performance issues whether we
READ or we WRITE to the
Derek,
The c0t5000C500268CFA6Bd0 disk has some kind of label problem.
You might compare the label of this disk to the other disks.
I agree with Richard that using whole disks (use the d0 device)
is best.
You could also relabel it manually by using the format--fdisk--
delete the current
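That sequence, sketched as commands (the disk name comes from the thread; `tank` and the known-good disk are placeholders, and format's menus are interactive):

```shell
# Compare the suspect disk's label against a healthy one:
prtvtoc /dev/rdsk/c0t5000C500268CFA6Bd0s2
prtvtoc /dev/rdsk/<known-good-disk>s2

# To relabel manually: run format, pick the disk, enter fdisk,
# delete the current partition, create a new Solaris partition
# covering the whole disk, then label.
format -e c0t5000C500268CFA6Bd0

# Once relabeled, give ZFS the whole disk (the d0 device, no slice):
zpool replace tank c0t5000C500268CFA6Bd0
```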
You mentioned a second Nexenta box earlier. To rule
out client-side issues, have you considered testing
with Nexenta as the iSCSI/NFS client?
If you mean running the NFS client AND server on the same box then yes, and it
doesn't show the same performance issues. It's only when a Linux box
As I have mentioned already, it would be useful to
know more about the config, how the tests are being
done, and to see some basic system performance stats.
I will shortly. Thanks!
--
This message posted from opensolaris.org
___
zfs-discuss
On 15/10/2010 19:09, Ian D wrote:
It's only when a Linux box SEND/RECEIVE data to the NFS/iSCSI shares that we
have problems. But if the Linux box send/receive file through scp on the
external disks mounted by the Nexenta box as a local filesystem then there is
no problem.
Does the Linux box have the same issue to any other server?
What if the client box isn't Linux but Solaris or Windows or MacOS X?
That would be a good test. We'll try that.
After contacting LSI they say that the 9200-16e HBA is not supported in
OpenSolaris, just Solaris. Aren't Solaris drivers the same as OpenSolaris?
Is there anyone here using 9200-16e HBAs? What about the 9200-8e? We have a
couple lying around and we'll test one shortly.
Ian
The mpt_sas driver supports it. We've had LSI 2004 and 2008 controllers hang
for quite some time when used with SuperMicro chassis and Intel X25-E SSDs
(OSOL b134 and b147). It seems to be a firmware issue that isn't fixed with
the last update.
Do you mean to include all the PCIe cards, not just
A little setback. We found out that we also have the issue with the Dell
H800 controllers, not just the LSI 9200-16e. With the Dell it's initially
faster as we benefit from the cache, but after a little while it goes sour,
from 350MB/sec down to less than 40MB/sec. We've also tried with a
On 15 oct. 2010, at 22:19, Ian D wrote:
A little setback. We found out that we also have the issue with the Dell
H800 controllers, not just the LSI 9200-16e. With the Dell it's initially
faster as we benefit from the cache, but after a little while it goes sour,
from 350MB/sec down to
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ian D
Sent: Friday, October 15, 2010 4:19 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Performance issues with iSCSI under Linux
A little
On Wed, 13 Oct 2010, Edward Ned Harvey wrote:
raidzN takes a really long time to resilver (code written inefficiently,
it's a known problem.) If you had a huge raidz3, it would literally never
finish, because it couldn't resilver as fast as new data appears. A week
In what way is the code
Has anyone suggested either removing L2ARC/SLOG
entirely or relocating them so that all devices are
coming off the same controller? You've swapped the
external controller but the H700 with the internal
drives could be the real culprit. Could there be
issues with cross-controller IO in this
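For reference, cache and log devices can be shuffled between controllers without rebuilding the pool; a sketch with placeholder pool and device names (log-device removal requires a sufficiently recent pool version, 19 or later):

```shell
# Drop the L2ARC and slog from the external controller...
zpool remove tank c2t0d0        # cache (L2ARC) device
zpool remove tank c2t1d0        # log device (pool version 19+)

# ...and re-add them behind the internal controller:
zpool add tank cache c1t4d0
zpool add tank log   c1t5d0
```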
Sorry, I can't not respond...
Edward Ned Harvey wrote:
whatever you do, *don't* configure one huge raidz3.
Peter, whatever you do, *don't* make a decision based on blanket
generalizations.
If you can afford mirrors, your risk is much lower, because although it's
physically possible for 2
On Fri, Oct 15, 2010 at 3:16 PM, Marty Scholes martyscho...@yahoo.com wrote:
My home server's main storage is a 22 (19 + 3) disk RAIDZ3 pool backed up
hourly to a 14 (11+3) RAIDZ3 backup pool.
How long does it take to resilver a disk in that pool? And how long
does it take to run a scrub?
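For anyone wanting to answer that, the timing is reported directly by the pool; a quick sketch (the pool name is a placeholder):

```shell
zpool scrub tank
zpool status tank   # shows "scrub in progress" with a completion estimate,
                    # then "scrub completed after ..." with the elapsed time;
                    # resilver progress is reported the same way
```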
On Oct 15, 2010, at 9:18 AM, Stephan Budach stephan.bud...@jvm.de wrote:
Am 14.10.10 17:48, schrieb Edward Ned Harvey:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Toby Thain
I don't want to heat up the discussion about ZFS managed
Hello,
I would like to know how to replace a failed vdev in a non redundant pool?
I am using fiber attached disks, and cannot simply place the disk back into
the machine, since it is virtual.
I have the latest kernel from sept 2010 that includes all of the new ZFS
upgrades.
Please, can you
On Oct 15, 2010, at 5:34 PM, Ian D rewar...@hotmail.com wrote:
Has anyone suggested either removing L2ARC/SLOG
entirely or relocating them so that all devices are
coming off the same controller? You've swapped the
external controller but the H700 with the internal
drives could be the real
If the pool is non-redundant and your vdev has failed, you have lost your data.
Just rebuild the pool, but consider a redundant configuration.
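A sketch of that rebuild, with placeholder names (this destroys whatever is left of the pool, so it assumes the data is restorable from backup):

```shell
# Nothing left to salvage; start over with redundancy this time:
zpool destroy -f tank
zpool create tank mirror c1t0d0 c1t1d0

# Restore from backup, e.g. from a saved send stream:
# zfs receive -F tank/data < /backup/tank-data.zfs
```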
On Oct 15, 2010, at 3:26 PM, Cassandra Pugh wrote:
Hello,
I would like to know how to replace a failed vdev in a non redundant pool?
I am using
On Fri, Oct 15, 2010 at 3:16 PM, Marty Scholes
martyscho...@yahoo.com wrote:
My home server's main storage is a 22 (19 + 3) disk
RAIDZ3 pool backed up hourly to a 14 (11+3) RAIDZ3
backup pool.
How long does it take to resilver a disk in that
pool? And how long
does it take to run a
Thanks, James, for the response.
Please find attached herewith the crash dump that we got from the admin.
Regards,
Anand
From: James C. McPherson j...@opensolaris.org
To: Ramesh Babu rama.b...@gmail.com
Cc: zfs-discuss@opensolaris.org; anand_...@yahoo.com
Thank you very much, Victor, for the update.
Regards,
Anand
From: Victor Latushkin victor.latush...@oracle.com
To: j...@opensolaris.org
Cc: Anand Bhakthavatsala anand_...@yahoo.com; zfs-discuss discuss
zfs-discuss@opensolaris.org
Sent: Fri, 8 October, 2010
Am 12.10.10 14:21, schrieb Edward Ned Harvey:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Stephan Budach
c3t211378AC0253d0 ONLINE 0 0 0
How many disks are there inside of c3t211378AC0253d0?
How are they
Hi,
Can someone shed some light on what this ZPOOL_CONFIG is, exactly?
At a guess, is it a bad sector of the disk, non-writable, and thus ZFS
marks it as a HOLE?
cheers
Matt
The following new test versions have had STEP pkgs built for them.
[You are receiving this email because you are listed as the owner of the
testsuite in the STC.INFO file, or you are on the s...@sun.com alias]
tcp v2.7.10 STEP pkg built for Solaris Snv
zfstest v1.23 STEP pkg built for Solaris
You should only see a HOLE in your config if you removed a slog after having
added more stripes. Nothing to do with bad sectors.
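That scenario is easy to reproduce with file-backed vdevs (the paths and pool name are arbitrary; assumes root on a build recent enough to allow log-device removal):

```shell
mkfile 100m /var/tmp/d1 /var/tmp/d2 /var/tmp/slog
zpool create demo /var/tmp/d1
zpool add demo log /var/tmp/slog    # add a slog...
zpool add demo /var/tmp/d2          # ...then another top-level stripe
zpool remove demo /var/tmp/slog     # removal leaves a gap in the vdev tree
zdb -C demo                         # config shows type: 'hole' at the old slog index
zpool destroy demo; rm /var/tmp/d1 /var/tmp/d2 /var/tmp/slog
```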
On 14 Oct 2010, at 06:27, Matt Keenan wrote:
Hi,
Can someone shed some light on what this ZPOOL_CONFIG is, exactly?
At a guess, is it a bad sector of the disk,
From: Stephan Budach [mailto:stephan.bud...@jvm.de]
Point taken!
So, what would you suggest, if I wanted to create really big pools? Say
in the 100 TB range? That would be quite a number of single drives
then, especially when you want to go with zpool raid-1.
You have a lot of disks.
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Cassandra Pugh
I would like to know how to replace a failed vdev in a non redundant
pool?
Non redundant ... Failed ... What do you expect? This seems like a really
simple answer... You
On 10/16/10 12:29 PM, Marty Scholes wrote:
On Fri, Oct 15, 2010 at 3:16 PM, Marty Scholes
martyscho...@yahoo.com wrote:
My home server's main storage is a 22 (19 + 3) disk
RAIDZ3 pool backed up hourly to a 14 (11+3) RAIDZ3
backup pool.
How long does it take to resilver a disk