On May 19, 2011, at 2:09 PM, Paul Kraus wrote:
> I just got a call from another of our admins, as I am the resident ZFS
> expert, and they have opened a support case with Oracle, but I figured
> I'd ask here as well, as this forum often provides better, faster
> answers :-)
>
>We have a server (M4000) with 6 FC attached SE-3511 disk arrays
>-----Original Message-----
>From: zfs-discuss-boun...@opensolaris.org
>[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Paul Kraus
>
>Over the past few months I have seen mention of FreeBSD a couple of times
> in regard to ZFS. My question is how stable (reliable) is ZFS on this platf
I just got a call from another of our admins, as I am the resident ZFS
expert, and they have opened a support case with Oracle, but I figured
I'd ask here as well, as this forum often provides better, faster
answers :-)
We have a server (M4000) with 6 FC attached SE-3511 disk arrays
(some behi
Just a random thought: if two devices have the same IDs and seem to work in
turns, are you certain you have a mirror and not two paths to the same backend?
A few years back I was given a box to support with "sporadically failing
drives", which turned out to be two paths to the same external array, a
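(Not from the thread, but a quick way to test that theory on Solaris, assuming
MPxIO is in the picture and using a placeholder device name:
# mpathadm list lu
# mpathadm show lu /dev/rdsk/c9t...d0s2
If both "failing" nodes resolve to the same logical unit, or one LU reports
two operational paths, you are looking at two paths to one backend rather
than a mirror.)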
I thought this was interesting - it looks like we have a failing drive in our
mirror, but the two device nodes in the mirror are the same:
  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for th
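(An added check, not from the thread: read the on-disk vdev labels and compare
GUIDs, with a placeholder device name:
# zdb -l /dev/dsk/c0t0d0s0 | grep -i guid
Each mirror member carries its own unique vdev guid; if both device nodes
return identical labels, they are almost certainly the same physical disk
seen twice.)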
On May 19, 2011, at 12:55 AM, Evaldas Auryla wrote:
> Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single
> path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with
> sas-addresses such as this in "zpool status" output:
>
>NAME STAT
On May 19, 2011, at 5:35 AM, Sašo Kiselkov wrote:
> Hi all,
>
> I'd like to ask whether there is a way to monitor disk seeks. I have an
> application where many concurrent readers (>50) sequentially read a
> large dataset (>10T) at a fairly low speed (8-10 Mbit/s). I can monitor
> read/write ops
On Thu, May 19, 2011 at 5:35 AM, Sašo Kiselkov wrote:
> I'd like to ask whether there is a way to monitor disk seeks. I have an
> application where many concurrent readers (>50) sequentially read a
> large dataset (>10T) at a fairly low speed (8-10 Mbit/s). I can monitor
> read/write ops using ios
IIRC there are tools from LSI, like the MegaRAID SW, that may display some
info.
Not sure whether you can use Common Array Manager.
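(Editorial sketch: for an LSI SAS2 HBA like the 9200-8e, LSI's sas2ircu
utility can dump the controller's view of attached drives, including SAS
address and enclosure/slot:
# sas2ircu list
# sas2ircu 0 display
where 0 is the controller index from the list. That it is installed on the
Solaris box is an assumption; it is a separate download from LSI.)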
On 5/19/2011 11:20 AM, Eric D. Mudama wrote:
On Thu, May 19 at 9:55, Evaldas Auryla wrote:
Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure,
single path, MP
On Thu, May 19 at 9:55, Evaldas Auryla wrote:
Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure,
single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are
visible with sas-addresses such as this in "zpool status" output:
NAME STATE READ WRI
On Thu, 19 May 2011 15:39:50 +0200, Frank Van Damme wrote:
On 03-05-11 17:55, Brandon High wrote:
> -H: Hard links
If you're going to do this for 2 TB of data, remember to expand your swap
space first (or have tons of memory). Rsync will need it to store every
inode number in the directory.
If you cannot even boot to single user mode on the server, boot from SXCE or
openindiana, then:
1. import syspool:
# zpool import syspool
2. mount affected rootfs:
# mkdir /a; mount -F zfs syspool/rootfs-nmu-### /a
3. remove zpool.cache:
# rm -f /a/etc/zfs/zpool.cache
4. rebuild boot archive:
# bootadm update-archive -R /a
I'll add my 2 cents, since I just suffered some pretty bad pool corruption a
few months ago and went through a lot of pain to get most of it restored. See
http://www.opensolaris.org/jive/thread.jspa?messageID=512687 for the gory
details.
Steps you should take:
1) as mentioned above, delete (o
On 19 May 2011, at 14:44, Evaldas Auryla wrote:
> Hi Chris, there is no sestopo on this box (Solaris Express 11 151a), fmtopo
> -dV works nicely, although it's a bit of "overkill" to parse the output
> manually :)
You need to install pkg:/system/io/tests.
Chris
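(If you do end up parsing fmtopo, a crude filter helps. A sketch, assuming
the slot label and SAS target-port properties appear in the verbose output
of your build:
# /usr/lib/fm/fmd/fmtopo -dV | egrep 'label|target-port'
which trims things down to the bay labels and the SAS addresses you want to
correlate.)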
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Evaldas Auryla
>
> Is there an easy way to map these sas-addresses to the physical disks in
> the enclosure?
Of course in the ideal world, when a disk needs to be pulled, hardware would
know abou
On 05/19/2011 03:35 PM, Tomas Ögren wrote:
> On 19 May, 2011 - Sašo Kiselkov sent me these 0,6K bytes:
>
>> Hi all,
>>
>> I'd like to ask whether there is a way to monitor disk seeks. I have an
>> application where many concurrent readers (>50) sequentially read a
>> large dataset (>10T) at a fai
The same format as in zpool status:
3. c9t5000C50025D5A266d0
/pci@0,0/pci10de,376@e/pci1000,3080@0/iport@f/disk@w5000c50025d5a266,0
4. c9t5000C50025D5AF66d0
/pci@0,0/pci10de,376@e/pci1000,3080@0/iport@f/disk@w5000c50025d5af66,0
On 05/19/11 03:04 PM, Hung
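(Side note, not from the thread: the hex string after disk@w in those
physical paths is the SAS target-port WWN, the same address embedded in the
c9t...d0 device name, so the two can be correlated directly via the /dev/dsk
symlinks, e.g.:
# ls -l /dev/dsk/c9t5000C50025D5A266d0s0
which points at the matching disk@w5000c50025d5a266,0 node under /devices.)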
On 03-05-11 17:55, Brandon High wrote:
> -H: Hard links
If you're going to do this for 2 TB of data, remember to expand your swap
space first (or have tons of memory). Rsync will need it to store every
inode number in the directory.
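(A sketch of the cheap way to add that swap on a ZFS root system, assuming
the root pool is named rpool and 16G is enough headroom: carve a zvol, swap
on it, and drop it again after the rsync finishes:
# zfs create -V 16G rpool/swap2
# swap -a /dev/zvol/dsk/rpool/swap2
... run the rsync ...
# swap -d /dev/zvol/dsk/rpool/swap2
# zfs destroy rpool/swap2
)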
On 19 May, 2011 - Sašo Kiselkov sent me these 0,6K bytes:
> Hi all,
>
> I'd like to ask whether there is a way to monitor disk seeks. I have an
> application where many concurrent readers (>50) sequentially read a
> large dataset (>10T) at a fairly low speed (8-10 Mbit/s). I can monitor
> read/w
On 19 May 2011, at 08:55, Evaldas Auryla wrote:
> Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure, single
> path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible with
> sas-addresses such as this in "zpool status" output:
>
>NAME STATE
On 2011-05-19 17:00, Jim Klimov wrote:
I am not sure you can monitor actual mechanical seeks short
of debugging and interrogating the HDD firmware - because
it is the last responsible logic in the chain of caching,
queuing and issuing actual commands to the disk heads.
For example, a long logical I
What is the output of:
# echo | format
On 5/19/2011 3:55 AM, Evaldas Auryla wrote:
Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure,
single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible
with sas-addresses such as this in "zpool status" output:
NAME
I am not sure you can monitor actual mechanical seeks short
of debugging and interrogating the HDD firmware - because
it is the last responsible logic in the chain of caching, queuing
and issuing actual commands to the disk heads.
For example, a long logical IO spanning several cylinders
would pr
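(You can at least infer seek pressure indirectly from service times with
stock iostat, nothing firmware-specific:
# iostat -xn 5
On a 7200 rpm drive, per-op service times (asvc_t) creeping past 10 ms while
transfer sizes stay small usually means the heads are seeking rather than
streaming.)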
Hi all,
I'd like to ask whether there is a way to monitor disk seeks. I have an
application where many concurrent readers (>50) sequentially read a
large dataset (>10T) at a fairly low speed (8-10 Mbit/s). I can monitor
read/write ops using iostat, but that doesn't tell me how contiguous the
data
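(The closest stock answer is the DTrace io provider. A sketch along the
lines of seeksize.d from the DTrace Toolkit - it reports the distance, in
512-byte blocks, between consecutive requests per device, which is the best
proxy for seeks you will get without firmware help:

#!/usr/sbin/dtrace -s
/* histogram of distance between consecutive I/Os, per device */
io:::start
/last[args[1]->dev_statname] != 0/
{
    this->delta = (int64_t)(args[0]->b_blkno - last[args[1]->dev_statname]);
    @dist[args[1]->dev_statname] =
        quantize(this->delta < 0 ? -this->delta : this->delta);
}
io:::start
{
    /* remember where this request ends */
    last[args[1]->dev_statname] = args[0]->b_blkno + args[0]->b_bcount / 512;
}

A histogram piled up at 0 means mostly contiguous I/O; a wide spread means
the 50+ readers are making the disks thrash.)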
Hi, we have SunFire X4140 connected to Dell MD1220 SAS enclosure,
single path, MPxIO disabled, via LSI SAS9200-8e HBA. Disks are visible
with sas-addresses such as this in "zpool status" output:
NAME STATE READ WRITE CKSUM
cuve ON