hakan...@ohsu.edu said:
I get a little nervous at the thought of hooking all that up to a single
server, and am a little vague on how much RAM would be advisable, other
than as much as will fit (:-). Then again, I've been waiting for something
like pNFS/NFSv4.1 to be usable for gluing
Greetings,
Has anyone out there built a 1-petabyte pool? I've been asked to look
into this, and was told low performance is fine, workload is likely
to be write-once, read-occasionally, archive storage of gene sequencing
data. Probably a single 10Gbit NIC for connectivity is sufficient.
We've
rvandol...@esri.com said:
We've come close:
admin@mes-str-imgnx-p1:~$ zpool list
NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
datapool   978T   298T   680T  30%  1.00x  ONLINE  -
syspool    278G   104G   174G  37%  1.00x  ONLINE  -
Ray said:
Using a Dell R720 head unit, plus a bunch of Dell MD1200 JBODs dual pathed
to a couple of LSI SAS switches.
Marion said:
How many HBA's in the R720?
Ray said:
We have qty 2 LSI SAS 9201-16e HBA's (Dell resold[1]).
Sounds similar in approach to the Aberdeen product another sender
Greetings,
We're trying out a new JBOD here. Multipath (mpxio) is not working,
and we could use some feedback and/or troubleshooting advice.
The OS is oi151a7, running on an existing server with a 54TB pool
of internal drives. I believe the server hardware is not relevant
to the JBOD issue,
On Jan 7, 2013, at 1:20 PM, Marion Hakanson hakan...@ohsu.edu wrote:
Greetings,
We're trying out a new JBOD here. Multipath (mpxio) is not working, and we
could use some feedback and/or troubleshooting advice.
. . .
richard.ell...@gmail.com said:
Sometimes the mpxio detection doesn't work properly. You can try to whitelist
them, https://www.illumos.org/issues/644
And I said:
Thanks Richard, I was hoping I hadn't just made up my vague memory of such
functionality. We'll give it a try.
That did the
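For reference, the whitelisting Richard points at lives in /kernel/drv/scsi_vhci.conf. A sketch, assuming an illumos-style scsi-vhci-failover-override entry with a hypothetical drive ID (the quoted string must match your drive's actual INQUIRY data):

```
# /kernel/drv/scsi_vhci.conf -- hypothetical entry; the vendor field is
# 8 characters, space-padded, followed by the product string.
scsi-vhci-failover-override =
        "SEAGATE ST32000SSSUN2.0T", "f_sym";
```

A reboot (or driver reload) is needed before the override takes effect.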
j...@opensolaris.org said:
Output from 'prtconf -v' would help, as would a cogent description of what
you are looking at to determine that MPxIO isn't working.
Sorry James, I must've made a cut-and-paste-o and left out my description
of the symptom. That being, 40 new drives show up as 80 new
tron...@gmail.com said:
That said, I've already migrated far too many times already. I really, really
don't want to migrate the pool again, if it can be avoided. I've already
migrated from raidz1 to raidz2 and then from raidz2 to mirror vdevs. Then,
even though I already had a mix of 512b and
richard.ell...@richardelling.com said:
We are starting to see a number of SAS HDDs that prefer logical-block to
round-robin. I see this with late model Seagate and Toshiba HDDs.
There is another, similar issue with recognition of multipathing by the
scsi_vhci driver. Both of these are being
p...@kraus-haus.org said:
Without knowing the I/O pattern, saying 500 MB/sec. is meaningless.
Achieving 500MB/sec. with 8KB files and lots of random accesses is really
hard, even with 20 HDDs. Achieving 500MB/sec. of sequential streaming of
100MB+ files is much easier.
. . .
For ZFS,
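Paul's point is easy to quantify. A back-of-envelope sketch, with assumed per-disk figures that are not from the thread (roughly 150 random IOPS and 100 MB/s streaming for a 7200RPM drive):

```shell
# Disks needed to sustain 500 MB/s under two different workloads.
TARGET_MB_S=500
IOPS_NEEDED=$(( TARGET_MB_S * 1024 / 8 ))       # 8 KB random records
DISKS_RANDOM=$(( (IOPS_NEEDED + 149) / 150 ))   # ceil at ~150 IOPS per HDD
DISKS_SEQ=$(( (TARGET_MB_S + 99) / 100 ))       # ceil at ~100 MB/s per HDD
echo "$IOPS_NEEDED IOPS -> ~$DISKS_RANDOM disks random, ~$DISKS_SEQ sequential"
```

With those assumptions, random 8K needs hundreds of spindles while sequential streaming needs a handful, which is the gap Paul is describing.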
capcas...@gmail.com said:
I have a file that I can't delete, change permissions or owner. ls -v does
not show any ACLs on the file, not even those for normal unix rw etc.
Permissions from ls -l show -rwx--; chmod gives an error of not owner for
the owner!! and for root just says can't
mattba...@gmail.com said:
We're looking at buying some additional SSD's for L2ARC (as well as
additional RAM to support the increased L2ARC size) and I'm wondering if we
NEED to plan for them to be large enough to hold the entire file or if ZFS
can cache the most heavily used parts of a single
lmulc...@marinsoftware.com said:
. . .
The MySQL server is:
Dell R710 / 80G Memory with two daisy chained MD1220 disk arrays - 22 Disks
each - 600GB 10k RPM SAS Drives Storage Controller: LSI, Inc. 1068E (JBOD)
I have also seen similar symptoms on systems with MD1000 disk arrays
containing
kitty@oracle.com said:
It wouldn't let me
# zpool create test_pool c5t0d0p0
cannot create 'test_pool': invalid argument for this pool operation
Try without the p0, i.e. just:
# zpool create test_pool c5t0d0
Regards,
Marion
jp...@cam.ac.uk said:
I can't speak for this particular situation or solution, but I think in
principle you are wrong. Networks are fast. Hard drives are slow. Put a
10G connection between your storage and your front ends and you'll have the
bandwidth[1]. Actually if you really were
sigbj...@nixtra.com said:
I will do some testing on the loadbalance on/off. We have nearline SAS disks,
which do have dual paths from the disk, however it's still just 7200rpm
drives.
Are you using SATA, SAS or SAS-nearline in your array? Do you have multiple
SAS connections to your
sigbj...@nixtra.com said:
I've played around with turning on and off mpxio on the mpt_sas driver,
disabling it increased the performance from 30MB/sec, but it's still far from
the original performance. I've attached some dumps of zpool iostat before and
after reinstallation.
I find zpool
nat...@tuneunix.com said:
I can confirm that on *at least* 4 different cards - from different board
OEMs - I have seen single bit ZFS checksum errors that went away immediately
after removing the 3114 based card.
I stepped up to the 3124 (PCI-X, up to 133MHz) and 3132 (PCIe) and have
matt.connolly...@gmail.com said:
extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
    1.2   36.0  153.6  4608.0  1.2  0.3   31.9    9.3  16  18 c12d0
    0.0  113.4    0.0  7446.7  0.8  0.1    7.0    0.5  15   5
matt.connolly...@gmail.com said:
After putting the drive online (and letting the resilver complete) I took the
slow drive (c8t1d0 western digital green) offline and the system ran very
nicely.
It is a 4k sector drive, but I thought zfs recognised those drives and didn't
need any special
p...@bolthole.com said:
Any other suggestions for (large-)enterprise-grade, supported JBOD hardware
for ZFS these days? Either fibre or SAS would be okay.
As others have said, it depends on your definition of enterprise-grade.
We're using Dell's MD1200 SAS JBOD's with Solaris-10 and ZFS. Ours
tmcmah...@yahoo.com said:
Interesting. Did you switch to the load-balance option?
Yes, I ended up with load-balance=none. Here's a thread about
it in the storage-discuss mailing list:
http://opensolaris.org/jive/thread.jspa?threadID=130975&tstart=90
Regards,
Marion
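For the record, the load-balance setting discussed in that thread is a scsi_vhci property. A sketch of the relevant line, set in /kernel/drv/scsi_vhci.conf and taking effect after a reboot:

```
# /kernel/drv/scsi_vhci.conf -- multipath I/O policy.
# Accepted values: "round-robin" (default), "logical-block", "none".
load-balance = "none";
```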
Stephan,
The vmstat shows you are not actually short of memory; the pi and po
columns are zero, so the system is not having to do any paging, and it seems
unlikely that the system is slow directly because of RAM shortage. With the
ARC, it's not unusual for vmstat to show little free memory, but the
z...@lordcow.org said:
For example when I 'dd if=/dev/zero of=/dev/ad6', or physically remove the
drive for awhile, then 'online' the disk, after it resilvers I'm typically
left with the following after scrubbing:
r...@file:~# zpool status
  pool: pool
 state: ONLINE
status: One or more
replic...@gmail.com said:
One other question, how can I ensure that the controller's cache is really
being used? (arcconf doesn't seem to show much). Since ZFS would flush the
data as soon as it can, I am curious to see if the caching is making a
difference or not.
Share out a dataset on the
rewar...@hotmail.com said:
ok... we're making progress. After swapping the LSI HBA for a Dell H800 the
issue disappeared. Now, I'd rather not use those controllers because they
don't have a JBOD mode. We have no choice but to make individual RAID0
volumes for each disks which means we need
Folks,
Has anyone seen a panic traceback like the following? This is Solaris-10u7
on a Thumper, acting as an NFS server. The machine was up for nearly a
year, I added a dataset to an existing pool, set compression=on for the
first time on this system, loaded some data in there (via rsync),
then
markwo...@yahoo.com said:
So the question is with a proper ZIL SSD from SUN, and a RAID10... would I be
able to support all the VM's or would it still be pushing the limits a 44
disk pool?
If it weren't a closed 7000-series appliance, I'd suggest running the
zilstat script. It should make it
p...@kraus-haus.org said:
Based on these results, and our capacity needs, I am planning to go with 5
disk raidz2 vdevs.
I did similar tests with a Thumper in 2008, with X4150/J4400 in 2009,
and more recently comparing X4170/J4400 and X4170/MD1200:
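The space trade-off behind a choice like 5-disk raidz2 vdevs is simple to sketch (real pools lose a little more to padding and metadata):

```shell
# Usable share of an n-disk raidz2 vdev: two disks' worth goes to parity,
# so (n-2)/n of the raw capacity remains usable.
for N in 5 6 10; do
  echo "$N-disk raidz2: $(( (N - 2) * 100 / N ))% of raw capacity usable"
done
```

Narrower vdevs give up capacity (60% usable at 5 disks vs 80% at 10) in exchange for faster resilvers and more vdevs, hence more IOPS, per pool.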
doug.lin...@merchantlink.com said:
Apparently, before Outlook there WERE no meetings, because it's clearly
impossible to schedule one without it.
Don't tell my boss, but I use Outlook for the scheduling, and fetchmail
plus procmail to download email out of Exchange and into my favorite
email
frank+lists/z...@linetwo.net said:
I remember, and this was a few years back but I don't see why it would be any
different now, we were trying to add drives 1-2 at a time to medium-sized
arrays (don't buy the disks until we need them, to hold onto cash), and the
Netapp performance kept going
rvandol...@esri.com said:
We have a Silicon Mechanics server with a SuperMicro X8DT3-F (Rev 1.02)
(onboard LSI 1068E (firmware 1.28.02.00) and a SuperMicro SAS-846EL1 (Rev
1.1) backplane.
. . .
The system is fully patched Solaris 10 U8, and the mpt driver is
version 1.92:
Since you're
erik.trim...@sun.com said:
All J4xxx systems are really nothing more than huge SAS expanders hooked to
a bunch of disks, so cache flush requests will either come from ZFS or any
attached controller. Note that I /think/ most non-RAID controllers don't
initiate their own cache flush
bene...@yahoo.com said:
Marion - Do you happen to know which SAS hba it applys to?
Here's the article:
http://sunsolve.sun.com/search/document.do?assetkey=1-66-248487-1
The title is Write-Caching on JBOD SATA Drive is Erroneously Enabled
by Default When Connected to Non-RAID SAS HBAs.
By
car...@taltos.org said:
NetApp does _not_ expose an ACL via NFSv3, just old school POSIX mode/owner/
group info. I don't know how NetApp deals with chmod, but I'm sure it's
documented.
The answer is, It depends. If the NetApp volume is NTFS-only permissions,
then chmod from the Unix/NFS
felix.buenem...@googlemail.com said:
I think I'll try one of thise inexpensive battery-backed PCI RAM drives from
Gigabyte and see how much IOPS they can pull.
Another poster, Tracy Bernath, got decent ZIL IOPS from an OCZ Vertex unit.
Dunno if that's sufficient for your purposes, but it
rvandol...@esri.com said:
I'm trying to figure out where I can find the firmware on the LSI
controller... are the bootup messages the only place I could expect to see
this? prtconf and prtdiag both don't appear to give firmware information.
. . .
Solaris 10 U8 x86.
The raidctl command is
fjwc...@gmail.com said:
Yes, if I was to re-do the hardware config for these servers, using what I
know now, I would do things a little differently:
. . .
- find a case with more than 24 drive bays (any way to get a Thumper
without the extra hardware/software?) ;)
. . .
It's called the
mijoh...@gmail.com said:
I've never had a lun go bad but bad things do happen. Does anyone else use
ZFS in this way? Is this an unrecommended setup?
We used ZFS like this on a Hitachi array for 3 years. Worked fine, not
one bad block/checksum error detected. Still using it on an old Sun
You can always replace them when funding for your Zeus SSD's comes in (:-).
Regards,
--
Marion Hakanson hakan...@ohsu.edu
OHSU Advanced Computing Center
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman
m...@cybershade.us said:
So at this point this looks like an issue with the MPT driver or these SAS
cards (I tested two) when under heavy load. I put the latest firmware for the
SAS card from LSI's web site - v1.29.00 without any changes, server still
locks.
Any ideas, suggestions how to
zfs...@jeremykister.com said:
unfortunately, fdisk won't help me at all:
# fdisk -E /dev/rdsk/c12t1d0p0
# zpool create -f testp c12t1d0
invalid vdev specification
the following errors must be manually repaired:
/dev/dsk/c3t11d0s0 is part of active ZFS pool dbzpool. Please see zpool(1M).
jimkli...@cos.ru said:
Thanks for the link, but the main concern in spinning down drives of a ZFS
pool is that ZFS by default is not so idle. Every 5 to 30 seconds it closes
a transaction group (TXG) which requires a synchronous write of metadata to
disk.
You know, it's just going to
I said:
You'll need to give the same dd treatment to the end of the disk as well;
ZFS puts copies of its labels at the beginning and at the end. Oh, and
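The layout behind that advice: ZFS keeps four 256KB labels, two at the front of the device and two in the last 512KB, so zeroing only the start leaves stale labels behind. A sketch of clearing both ends, run here against a scratch file standing in for the raw device:

```shell
# /tmp/fake_disk.img stands in for the real /dev/rdsk/...p0 device.
IMG=/tmp/fake_disk.img
dd if=/dev/zero of="$IMG" bs=1048576 count=10 2>/dev/null    # 10 MiB "disk"
SIZE=$(wc -c < "$IMG")
# first 512 KiB (labels L0 and L1)
dd if=/dev/zero of="$IMG" bs=524288 count=1 conv=notrunc 2>/dev/null
# last 512 KiB (labels L2 and L3); seek is in units of bs
dd if=/dev/zero of="$IMG" bs=524288 count=1 conv=notrunc \
    seek=$(( SIZE / 524288 - 1 )) 2>/dev/null
```

On a real disk you would substitute the raw device for $IMG and get its size from prtvtoc or similar; triple-check the device name first, since this is destructive.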
zfs...@jeremykister.com said:
I'm not sure what you mean here - I thought p0 was the entire disk in x86 -
and s2 was the whole disk in the
zfs...@jeremykister.com said:
# format -e c12t1d0
selecting c12t1d0
[disk formatted]
/dev/dsk/c3t11d0s0 is part of active ZFS pool dbzpool. Please see zpool(1M).
It is true that c3t11d0 is part of dbzpool. But why is solaris upset about
c3t11 when i'm working with c12t1 ?? So i checked
da...@elemental.org said:
Normally on UFS I would just take the 'nuke it from orbit' route and use clri
to wipe the directory's inode. However, clri doesn't appear to be zfs aware
(there's not even a zfs analog of clri in /usr/lib/fs/ zfs), and I don't
immediately see an option in zdb which
knatte_fnatte_tja...@yahoo.com said:
Is rsync faster? As I have understood it, zfs send gives me an exact
replica, whereas rsync doesn't necessarily do that; maybe the ACLs are not
replicated, etc. Is this correct about rsync vs zfs send?
It is true that rsync (as of 3.0.5, anyway) does not
opensolaris-zfs-disc...@mlists.thewrittenword.com said:
Is it really pointless? Maybe they want the insurance RAIDZ2 provides. Given
the choice between insurance and performance, I'll take insurance, though it
depends on your use case. We're using 5-disk RAIDZ2 vdevs.
. . .
Would love to
jel+...@cs.uni-magdeburg.de said:
2nd) Never had a Sun STK RAID INT before. Actually my intention was to create
a zpool mirror of sd0 and sd1 for boot and logs, and a 2x2-way zpool mirror
with the 4 remaining disks. However, the controller seems not to support
JBODs :( - which is also bad,
mmusa...@east.sun.com said:
What benefit are you hoping zfs will provide in this situation? Examine
your situation carefully and determine what filesystem works best for you.
There are many reasons to use ZFS, but if your configuration isn't set up to
take advantage of those reasons, then
I wrote:
Is anyone else tired of seeing the word redundancy? (:-)
matthias.ap...@lanlabor.com said:
Only in a perfect world (tm) ;-)
IMHO there is no such thing as too much redundancy. In the real world the
possibilities of redundancy are only limited by money,
Sigh. I was just joking
webcl...@rochester.rr.com said:
To verify data, I cannot depend on existing tools since diff is not large
file aware. My best idea at this point is to calculate and compare MD5 sums
of every file and spot check other properties as best I can.
Ray,
I recommend that you use rsync's -c to
David Stewart wrote:
How do I identify which drive it is? I hear each drive spinning (I listened
to them individually) so I can't simply select the one that is not spinning.
You can try reading from each raw device, and looking for a blinky-light
to identify which one is active. If you don't
j...@jamver.id.au said:
For a predominantly NFS server purpose, it really looks like a case of the
slog has to outperform your main pool for continuous write speed as well as
an instant response time as the primary criterion. Which might as well be a
fast (or group of fast) SSDs or 15kRPM
rswwal...@gmail.com said:
Yes, but if it's on NFS you can just figure out the workload in MB/s and use
that as a rough guideline.
I wonder if that's the case. We have an NFS server without NVRAM cache
(X4500), and it gets huge MB/sec throughput on large-file writes over NFS.
But it's
rswwal...@gmail.com said:
It's not the stripes that make a difference, but the number of controllers
there.
What's the system config on that puppy?
The zpool status -v output was from a Thumper (X4500), slightly edited,
since in our real-world Thumper, we use c6t0d0 in c5t4d0's place in the
vidar.nil...@palantir.no said:
I'm trying to move disks in a zpool from one SATA-kontroller to another. Its
16 disks in 4x4 raidz. Just to see if it could be done, I moved one disk from
one raidz over to the new controller. Server was powered off.
. . .
zpool replace storage c10t7d0 c11t0d0
rswwal...@gmail.com said:
There is another type of failure that mirrors help with and that is
controller or path failures. If one side of a mirror set is on one
controller or path and the other on another then a failure of one will not
take down the set.
You can't get that with RAIDZn.
asher...@versature.com said:
And, on that subject, is there truly a difference between Seagate's line-up
of 7200 RPM drives? They seem to now have a bunch:
. . .
Other manufacturers seem to have similar lineups. Is the difference going to
matter to me when putting a mess of them into a SAS
bfrie...@simple.dallas.tx.us said:
No. I am suggesting that all Solaris 10 (and probably OpenSolaris systems)
currently have a software-imposed read bottleneck which places a limit on
how well systems will perform on this simple sequential read benchmark.
After a certain point (which is
jlo...@ssl.berkeley.edu said:
What's odd is we've checked a few hundred files, and most of them don't
seem to have any corruption. I'm thinking what's wrong is the metadata for
these files is corrupted somehow, yet we can read them just fine. I wish I
could tell which ones are really
bfrie...@simple.dallas.tx.us said:
Your IOPS don't seem high. You are currently using RAID-5, which is a poor
choice for a database. If you use ZFS mirrors you are going to unleash a
lot more IOPS from the available spindles.
RAID-5 may be poor for some database loads, but it's perfectly
Greetings,
We have a small Oracle project on ZFS (Solaris-10), using a SAN-connected
array which is need of replacement. I'm weighing whether to recommend
a Sun 2540 array or a Sun J4200 JBOD as the replacement. The old array
and the new ones all have 7200RPM SATA drives.
I've been watching
mi...@cc.umanitoba.ca said:
What would I look for with mpstat?
Look for a CPU (thread) that might be 100% utilized; Also look to see
if that CPU (or CPU's) has a larger number in the ithr column than all
other CPU's. The idea here is that you aren't getting much out of the
T2000 if only one
udip...@gmail.com said:
dick at nagual.nl wrote:
Maybe because on the fifth day some hardware failure occurred? ;-)
That would be which? The system works and is up and running beautifully.
OpenSolaris, as of now.
Running beautifully as long as the power stays on? Is it hard to believe
casper@sun.com said:
I've upgraded my system from ufs to zfs (root pool).
By default, it creates a zvol for dump and swap.
. . .
So I removed the zvol swap and now I have a standard swap partition. The
performance is much better (night and day). The system is usable and I
don't know
james.ma...@sun.com said:
I'm not yet sure what's broken here, but there's something pathologically
wrong with the IO rates to the device during the ZFS tests. In both cases,
the wait queue is getting backed up, with horrific wait queue latency
numbers. On the read side, I don't understand why
bh...@freaks.com said:
Even with a very weak CPU the system is close to saturating the PCI bus for
reads with most configurations.
Nice little machine. I wonder if you'd get some of the bonnie numbers
increased if you ran multiple bonnie's in parallel. Even though the
sequential throughput
n...@jnickelsen.de said:
As far as I know the situation with ATI is that, while ATI supplies
well-performing binary drivers for MS Windows (of course) and Linux, there is
no such thing for other OSs. So OpenSolaris uses standardized interfaces of
the graphics hardware, which have comparatively
, data has been fine. We also do tape backups
of these pools, of course.
Regards,
--
Marion Hakanson hakan...@ohsu.edu
OHSU Advanced Computing Center
The zilstat tool is very helpful, thanks!
I tried it on an X4500 NFS server, while extracting a 14MB tar archive,
both via an NFS client, and locally on the X4500 itself. Over NFS,
said extract took ~2 minutes, and showed peaks of 4MB/sec buffer-bytes
going through the ZIL.
When run locally on
d...@yahoo.com said:
Any recommendations for an SSD to work with an X4500 server? Will the SSDs
used in the 7000 series servers work with X4500s or X4540s?
The Sun System Handbook (sunsolve.sun.com) for the 7210 appliance (an
X4540-based system) lists the logzilla device with this fine print:
richard.ell...@sun.com said:
L2ARC arrived in NV at the same time as ZFS boot, b79, November 2007. It was
not back-ported to Solaris 10u6.
You sure? Here's output on a Solaris-10u6 machine:
cyclops 4959# uname -a
SunOS cyclops 5.10 Generic_137138-09 i86pc i386 i86pc
cyclops 4960# zpool
vincent_b_...@yahoo.com said:
Just wondering if (excepting the existing zones thread) there are any
compelling arguments to keep /var as it's own filesystem for your typical
Solaris server. Web servers and the like.
Well, it's been considered a best practice for servers for a lot of
years to
[EMAIL PROTECTED] said:
Thanks for the tips. I'm not sure if they will be relevant, though. We
don't talk directly with the AMS1000. We are using a USP-VM to virtualize
all of our storage and we didn't have to add anything to the drv
configuration files to see the new disk (mpxio was
[EMAIL PROTECTED] said:
I think we found the choke point. The silver lining is that it isn't the
T2000 or ZFS. We think it is the new SAN, an Hitachi AMS1000, which has
7200RPM SATA disks with the cache turned off. This system has a very small
cache, and when we did turn it on for one of
[EMAIL PROTECTED] said:
I thought to look at df output before rebooting, and there are PAGES and
PAGES like this:
/var/run/.patchSafeModeOrigFiles/usr/platform/FJSV,GPUZC-M/lib/libcpc.so.1
                 7597264   85240 7512024     2%    /usr/platform/FJSV,GPUZC-M/lib/libcpc.so.1
. . .
Hundreds of
[EMAIL PROTECTED] said:
# ludelete beA
ERROR: cannot open 'pool00/zones/global/home': dataset does not exist
ERROR: cannot mount mount point /.alt.tmp.b-QY.mnt/home device
pool00/zones/global/home
ERROR: failed to mount file system pool00/zones/global/home on
/.alt.tmp.b-QY.mnt/home
[EMAIL PROTECTED] said:
It's interesting how the speed and optimisation of these maintenance
activities limit pool size. It's not just full scrubs. If the filesystem is
subject to corruption, you need a backup. If the filesystem takes two months
to back up / restore, then you need really
[EMAIL PROTECTED] said:
In general, such tasks would be better served by T5220 (or the new T5440 :-)
and J4500s. This would change the data paths from:
client --net-- T5220 --net-- X4500 --SATA-- disks
to:
client --net-- T5440 --SAS-- disks
With the J4500 you get the same storage
[EMAIL PROTECTED] said:
but Marion's is not really possible at all, and won't be for a while with
other groups' choice of storage-consumer platform, so it'd have to be
GlusterFS or some other goofy fringe FUSEy thing or not-very-general crude
in-house hack.
Well, of course the magnitude of
[EMAIL PROTECTED] said:
We did ask our vendor, but we were just told that AVS does not support
x4500.
You might have to use the open-source version of AVS, but it's not
clear if that requires OpenSolaris or if it will run on Solaris-10.
Here's a description of how to set it up between two
[EMAIL PROTECTED] said:
I took a snapshot of a directory in which I hold PDF files related to math.
I then added a 50MB pdf file from a CD (Oxford Math Reference; I strongly
recommend this to any math enthusiast) and did zfs list to see the size of
the snapshot (sheer curiosity). I don't have
[EMAIL PROTECTED] said:
That's the one that's been an issue for me and my customers - they get billed
back for GB allocated to their servers by the back end arrays. To be more
explicit about the 'self-healing properties' - To deal with any fs
corruption situation that would traditionally
[EMAIL PROTECTED] said:
Seriously, I don't even care about the cost. Even with the smallest
capacity, four of those gives me 128GB of write cache supporting 680MB/s and
40k IOPS. Show me a hardware raid controller that can even come close to
that. Four of those will strain even 10GB/s
[EMAIL PROTECTED] said:
AFAIK there is no way to tell resilvering to pause, so I want to detach the
inconsistent disk and attach it again tonight, when it won't affect users. To
do that I need to know which disk is inconsistent, but zpool status does not
show me any info in regard.
Is
[EMAIL PROTECTED] said:
I'm curious about your array configuration above... did you create your
RAIDZ2 as one vdev or multiple vdev's? If multiple, how many? On mine, I
have all 10 disks set up as one RAIDZ2 vdev which is supposed to be near the
performance limit... I'm wondering how much I
[EMAIL PROTECTED] said:
I am having trouble destroying a zfs file system (device busy) and fuser
isn't telling me who has the file open:
. . .
This situation appears to occur every night during a system test. The only
peculiar operation on the errant file system is that another system NFS
[EMAIL PROTECTED] said:
It is also interesting to note that this system is now making negative
progress. I can understand the remaining time estimate going up with time,
but what does it mean for the % complete number to go down after 6 hours of
work?
Sorry I don't have any helpful
[EMAIL PROTECTED] said:
i am a little new to zfs so please excuse my ignorance. i have a poweredge
2950 running Nevada B82 with an Apple Xraid attached over a fiber hba. they
are formatted to JBOD with the pool configured as follows:
. . .
i have a filesystem (tpool4/seplog) shared over
[EMAIL PROTECTED] said:
Some of us are still using Solaris 10 since it is the version of Solaris
released and supported by Sun. The 'filebench' software from SourceForge
does not seem to install or work on Solaris 10. The 'pkgadd' command
refuses to recognize the package, even when it is
[EMAIL PROTECTED] said:
I'm creating a zfs volume, and sharing it with zfs set shareiscsi=on
poolname/volume. I can access the iSCSI volume without any problems, but IO
is terribly slow, as in five megabytes per second sustained transfers.
I've tried creating an iSCSI target stored on a UFS
[EMAIL PROTECTED] said:
It may not be relevant, but I've seen ZFS add weird delays to things too. I
deleted a file to free up space, but when I checked no more space was
reported. A second or two later the space appeared.
Run the sync command before you do the du. That flushes the ARC
[EMAIL PROTECTED] said:
This is what I get with the filebench-1.1.0_x86_pkg.tar.gz from SourceForge:
# pkgadd -d .
pkgadd: ERROR: no packages were found in
/home/bfriesen/src/benchmark/filebench
# ls
install/  pkginfo  pkgmap  reloc/
. . .
Um, cd .. and pkgadd -d . again. The
[EMAIL PROTECTED] said:
I also tried using O_DSYNC, which stops the pathological behaviour but makes
things pretty slow - I only get a maximum of about 20MBytes/sec, which is
obviously much less than the hardware can sustain.
I may misunderstand this situation, but while you're waiting for
[EMAIL PROTECTED] said:
difference my tweaks are making. Basically, the problem users experience,
when the load shoots up are huge latencies. An ls on a non-cached
directory, which usually is instantaneous, will take 20, 30, 40 seconds or
more. Then when the storage array catches up,
[EMAIL PROTECTED] said:
It's not that old. It's a Supermicro system with a 3ware 9650SE-8LP.
Open-E iSCSI-R3 DOM module. The system is plenty fast. I can pretty
handily pull 120MB/sec from it, and write at over 100MB/sec. It falls apart
more on random I/O. The server/initiator side is a
[EMAIL PROTECTED] said:
One thought I had was to unconfigure the bad disk with cfgadm. Would that
force the system back into the 'offline' response?
In my experience (X4100 internal drive), that will make ZFS stop trying
to use it. It's also a good idea to do this before you hot-unplug the
[EMAIL PROTECTED] said:
Your finding for random reads with or without NCQ match my findings: http://
blogs.sun.com/erickustarz/entry/ncq_performance_analysis
Disabling NCQ looks like a very tiny win for the multi-stream read case. I
found a much bigger win, but i was doing RAID-0 instead