On Apr 28, 2013, at 6:06 AM, Ram Chander ramqu...@gmail.com wrote:
Hi,
I am trying to set fsid in the NFS options while sharing. This is for NFS
failover across hosts, so that I can export a pool from one host and import it on
another, and the NFS clients don't see stale NFS errors. The Virtual IP
On May 27, 2013, at 10:01 AM, Ram Chander ramqu...@gmail.com wrote:
Hi,
I am transferring a ZFS snapshot using mbuffer. But after some time, it stalls and
breaks with the error below. I retried 3 times but it breaks with the same error
(it happens randomly after transferring 40G, 120G, etc.).
Please
On Jun 6, 2013, at 12:34 PM, Michael Palmer palmert...@gmail.com wrote:
I guess the main question was graphical statistics. Besides buying
Oracle storage, I would suggest a network management system like
OpenNMS. There are lots of choices out there, but I'm not sure if that is
the best solution.
On Jun 14, 2013, at 12:08 AM, Hafiz Rafibeyli rafibe...@gmail.com wrote:
Hello, I'm running OmniOS (d3950d8) as a NAS base with napp-it;
hardware is a Supermicro SC846E26-R1200B + LSI SAS 9211-8i HBA (IT mode
enabled with the latest firmware).
Everything is working great, but I'm getting errors with my
This is also a symptom of a failing or slow disk. Check for slow disks and
errors using iostat and fmdump -e
-- richard
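The advice above can be turned into a quick scan. This is a sketch only: the sample data below mimics the `iostat -En` line format quoted later in these threads; on a live illumos box you would pipe real `iostat -En` output in instead, and follow up with `fmdump -e` for the error telemetry.

```shell
#!/bin/sh
# Illustrative sample in iostat -En's per-device error-summary format.
iostat_sample() {
cat <<'EOF'
c5t5000C50045561CEAd0 Soft Errors: 0 Hard Errors: 1 Transport Errors: 7
c5t5000C50045561CEBd0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
EOF
}

# Print "<device> hard=N transport=N" for devices with nonzero counts.
flag_error_disks() {
    awk '/Errors:/ {
        hard = 0; trans = 0
        for (i = 1; i <= NF; i++) {
            if ($i == "Hard" && $(i+1) == "Errors:")      hard  = $(i+2)
            if ($i == "Transport" && $(i+1) == "Errors:") trans = $(i+2)
        }
        if (hard + trans > 0)
            printf "%s hard=%d transport=%d\n", $1, hard, trans
    }'
}

iostat_sample | flag_error_disks
```

Any device this flags is a candidate for a closer look with `fmdump -eV`.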
On Jun 21, 2013, at 6:53 AM, Eric Sproul espr...@omniti.com wrote:
Hi Felix,
This sounds like the characteristics of a dedup table (DDT) that will
not fit in RAM. The
On Jul 18, 2013, at 8:23 PM, Schweiss, Chip c...@innovates.com wrote:
I have used LSI HBAs exclusively. Performance and reliability have been very
good.
The only problem I have consistently seen is that if I hotplug a SAS expander, with
or without disks attached, it will crash the system at
Hi Steve,
On Aug 19, 2013, at 9:37 AM, st...@linuxsuite.org wrote:
SNIP
c5t5000C50045561CEAd0 Soft Errors: 0 Hard Errors: 1 Transport Errors: 7
Vendor: ATA  Product: ST3000DM001-9YN1  Revision: CC4H  Serial No: W1F09G4Q
Size: 3000.59GB <3000592982016 bytes>
Media Error: 0 Device Not
FYI, iostat in illumos shows pool stats too :-)
Personally, I don't find zpool iostat to be very useful, especially now that
iostat
knows about pools.
-- richard
On Aug 21, 2013, at 1:06 PM, Eric Sproul espr...@omniti.com wrote:
kstat is awesome, and has tens of thousands of statistics from
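Those kstat statistics are easy to script against: `kstat -p` prints one `module:instance:name:statistic value` pair per line. A sketch computing the ARC hit ratio from the `zfs:0:arcstats` hits/misses counters; the heredoc stands in for live `kstat -p zfs:0:arcstats` output.

```shell
#!/bin/sh
# Illustrative sample in kstat -p's module:instance:name:statistic format.
arcstats_sample() {
cat <<'EOF'
zfs:0:arcstats:hits 900000
zfs:0:arcstats:misses 100000
EOF
}

# Compute hits / (hits + misses) as a percentage.
arc_hit_ratio() {
    awk '
        $1 ~ /:hits$/   { hits = $2 }
        $1 ~ /:misses$/ { misses = $2 }
        END { printf "%.1f%%\n", 100 * hits / (hits + misses) }'
}

arcstats_sample | arc_hit_ratio
```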
On Sep 13, 2013, at 11:46 PM, Hugh McIntyre li...@mcintyreweb.com wrote:
I have a related question to the recent discussion of ashift=12 and advanced
format disks. The problem is that I have 4 disks attached via a LSI 9211-4i
SATA/SAS adapter (which are fine) but several others attached
On Sep 26, 2013, at 2:41 AM, Muhammad Yousuf Khan sir...@gmail.com wrote:
Is this a bug or the result of a wrong configuration? To me it seems like a bug.
I noticed that one of my filesystems, set on top of a pool (acipool), has
grown instead of the base pool.
The zfs command does not show the size
On Sep 26, 2013, at 1:07 PM, Muhammad Yousuf Khan sir...@gmail.com wrote:
...
root@omni:~# zpool list
NAME     SIZE  ALLOC   FREE  EXPANDSZ  CAP  DEDUP  HEALTH  ALTROOT
acipool  928G   406G   522G         -  43%  1.00x  ONLINE  -
rpool     37G  27.4G  9.60G         -  74%  1.00x
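The CAP column in the zpool list output above appears to be simply ALLOC/SIZE truncated to a whole percent; a quick check reproduces the quoted figures (sketch only, using the numbers from this thread).

```shell
#!/bin/sh
# Compute a zpool-list-style CAP percentage from ALLOC and SIZE.
cap_pct() {
    # $1 = ALLOC, $2 = SIZE, both in the same unit (e.g. GiB)
    awk -v a="$1" -v s="$2" 'BEGIN { printf "%d%%\n", int(a / s * 100) }'
}

cap_pct 406 928    # acipool: 406G allocated of 928G
cap_pct 27.4 37    # rpool: 27.4G allocated of 37G
```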
Comment below...
On Nov 8, 2013, at 8:17 AM, Matt Weiss mwe...@cimlbr.com wrote:
I am working on a failover script using OmniOS as a NFS server.
According to VMware, if I mount an NFS datastore via its IP address then I
should be able to move the IP around and still mount it; however, it
On Nov 8, 2013, at 2:07 PM, Jim Klimov jimkli...@cos.ru wrote:
On 2013-11-08 22:54, Richard Elling wrote:
It is a replica, but it isn't the same from an NFS perspective. The
files may have the
same contents, but the NFSv3 file handles are different because they are
in two different
file
On Dec 10, 2013, at 4:01 PM, Tom Robinson tom.robin...@motec.com.au wrote:
OmniOS v11 r151006
Hi,
I'm having many stability/performance issues with NFS. Server end is OmniOS;
client end is CentOS 5.
When the server end is functioning, I can mount OK, but there are really long
waits
Here is another method:
There is a handy mptstat dcmd.
echo ::mptsas -t | mdb -k
mptsas_t inst ncmds suspend power
ff0703785000 0 0 0 ON=D0
The SCSI target
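The `::mptsas -t` dcmd output quoted above has the columns mptsas_t / inst / ncmds / suspend / power, which makes it easy to filter for instances that look stuck. A sketch; the sample data stands in for `echo ::mptsas -t | mdb -k`, and the "stuck" criterion (outstanding commands or suspended) is my assumption.

```shell
#!/bin/sh
# Illustrative sample in the dcmd's column layout.
mptsas_sample() {
cat <<'EOF'
mptsas_t inst ncmds suspend power
ff0703785000 0 0 0 ON=D0
ff0703786000 1 12 1 ON=D0
EOF
}

# Skip the header; flag instances with ncmds > 0 or suspend != 0.
flag_busy_instances() {
    awk 'NR > 1 && ($3 > 0 || $4 > 0) {
        printf "inst %s: ncmds=%s suspend=%s\n", $2, $3, $4
    }'
}

mptsas_sample | flag_busy_instances
```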
On Dec 13, 2013, at 5:13 AM, Tobias Oetiker t...@oetiker.ch wrote:
Hi Paul,
Today Paul Jochum wrote:
Hi Tobias:
Sorry, I don't know why you are having these strange I/O patterns, but I
was wondering if you could share how you
record and display this info?
I created a little
On Dec 17, 2013, at 5:51 AM, Hafiz Rafibeyli rafibe...@gmail.com wrote:
Hello,
I'm getting these errors on my omnios after adding INTEL D14799-001 PRO/1000
PF Dual Port Server Network Adapter.
The newly added NIC is working well.
what is wrong with my system?
Nothing.
This is a common
On Dec 22, 2013, at 4:23 PM, Tobias Oetiker t...@oetiker.ch wrote:
Hi Richard,
Yesterday Richard Elling wrote:
c) shouldn't the smarter write throttle change
https://github.com/illumos/illumos-gate/commit/69962b5647e4a8b9b14998733b765925381b727e
have helped with this by making ZFS
On Feb 17, 2014, at 2:48 PM, Derek Yarnell de...@umiacs.umd.edu wrote:
Hi,
So we bought a new Dell R720xd with 2 Dell SLC SSDs which were shipped
as a Pliant-LB206S-D323-186.31GB via format.
Pliant SSDs (note: Pliant was purchased by SanDisk in 2011) are optimized for
lots of concurrent I/O
On Feb 14, 2014, at 8:29 AM, Dan Swartzendruber dswa...@druber.com wrote:
I've been running an install on r151008. rpool mirrored on two WD black
160GB sata drives. 8 SAS nearline drives in a raid10. Two samsung 840PRO
128GB drives as l2arc. The sas and l2arc drives are on an LSI HBA,
On Feb 18, 2014, at 10:40 AM, Jim Klimov jimkli...@cos.ru wrote:
On 2014-02-18 08:05, Richard Elling wrote:
Fortunately, most real workloads are not tar -x.
Oh, alas, on build farms they are. And the hordes of files produced
as a result of make are not far behind. Oh, well, often
On Feb 18, 2014, at 4:17 PM, Anh Quach anhqu...@me.com wrote:
Is it possible to tell the disk-transport FMA module to ignore
over-temperature on only a certain set of disks?
In Solaris 11, yes this is possible. However, the open source community has not
implemented it
yet, AFAIK.
I’m
Clarification below...
On Feb 13, 2014, at 2:18 AM, Thibault VINCENT thibault.vinc...@smartjog.com
wrote:
On 02/12/2014 09:59 PM, Steamer wrote:
Did you ever find a solution to the overheating faults with the
ST4000NM0023?
I'm currently having the exact same issue with ST1000NM0023
On Feb 27, 2014, at 2:01 PM, Jim Klimov jimkli...@cos.ru wrote:
On 2014-02-27 20:39, Richard Elling wrote:
I hope, NFS cached-data syncs and locks, and ZFS write-syncs are
not very related in this case (i.e. zfs sync=disabled does not
influence co-ordination of NFS data between hosts), right
On Mar 10, 2014, at 8:13 AM, Jim Klimov jimkli...@cos.ru wrote:
On 2014-03-09 03:28, Richard Elling wrote: The basic problem affects other
file systems, too. The general best practice
has always been to keep your hierarchy flat. But...
That is a strange best practice, especially given
On Mar 21, 2014, at 9:48 AM, Tobias Oetiker t...@oetiker.ch wrote:
a zpool on one of our boxes has been degraded with several disks
faulted ...
* the disks are all sas direct attached
* according to smartctl the offending disks have no faults.
* zfs decided to fault the disks after the
On Mar 21, 2014, at 3:23 PM, Tobias Oetiker t...@oetiker.ch wrote:
Today Zach Malone wrote:
On Fri, Mar 21, 2014 at 3:50 PM, Richard Elling
richard.ell...@richardelling.com wrote:
On Mar 21, 2014, at 9:48 AM, Tobias Oetiker t...@oetiker.ch wrote:
a zpool on one of our boxes has been
On Mar 21, 2014, at 10:13 PM, Tobias Oetiker t...@oetiker.ch wrote:
Yesterday Richard Elling wrote:
On Mar 21, 2014, at 3:23 PM, Tobias Oetiker t...@oetiker.ch wrote:
[...]
it happened over time as you can see from the timestamps in the
log. The errors from zfs's point of view were
On Apr 21, 2014, at 9:19 AM, Schweiss, Chip c...@innovates.com wrote:
I suspecting these drives have self-destructed.
Can anyone confirm this firmware issue causes the drives to permanently go
offline?
They are fine. FMA retires them, so you have to coerce the OS to reinstantiate
On Apr 22, 2014, at 10:58 AM, Saso Kiselkov skiselkov...@gmail.com wrote:
On 4/22/14, 5:03 PM, Schweiss, Chip wrote:
Are you sure you have SAS multipath disabled on the disk you are trying
to flash?
I couldn't get these to flash at all with MP enabled. I too kept
getting OS related
going out on a limb...
On Apr 22, 2014, at 2:02 PM, Saso Kiselkov skiselkov...@gmail.com wrote:
On 4/22/14, 10:31 PM, Schweiss, Chip wrote:
On Tue, Apr 22, 2014 at 3:17 PM, Saso Kiselkov skiselkov...@gmail.com
mailto:skiselkov...@gmail.com wrote:
I know, but if I understand it
Hi Filip,
On May 5, 2014, at 3:32 AM, Filip Marvan filip.mar...@aira.cz wrote:
Hello,
I have storage server with OmniOS LTS and 64 GB RAM. This server was
installed one year ago. There are about 50 ZVOLs on that server and they are
shared through Comstar iSCSI to KVM servers, which are
some data on that pools and see what happen with more detailed
monitoring.
Thank you,
Filip
From: Richard Elling [mailto:richard.ell...@richardelling.com]
Sent: Wednesday, May 07, 2014 3:56 AM
To: Filip Marvan
Cc: omnios-discuss@lists.omniti.com
Subject: Re: [OmniOS-discuss
On May 8, 2014, at 9:20 AM, Robin P. Blanchard ro...@coraid.com wrote:
# zfs list -r -t snapshot
NAME                                    USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/omnios@2013-12-08-00:42:38      0      -  3.66G  -
rpool/ROOT/omnios-6@install             292M      -
Hi Svavar,
On May 9, 2014, at 2:27 AM, Svavar Örn Eysteinsson sva...@januar.is wrote:
Hello List.
I recently installed and configured OmniOS on a HP Microserver N40L NAS box.
In time to time my box hangs/freezes for some seconds, when I try to SSH into
the box or execute some
Hi Tim,
On May 12, 2014, at 1:41 PM, Tim Brown timbr...@muskegonisd.org wrote:
I have a question about ZFS usage and how to predictably allocate
space. I have scoured the web trying to get a good answer, but have
yet to find one.
I am advertising 2TB datastores to our VMware cluster
Try the changes embedded below...
On May 16, 2014, at 7:03 AM, Matthew Lagoe matthew.la...@subrigo.net wrote:
I am looking at moving from openindiana to omnios and I have a dtrace script
(below) that works on openindiana however I have tried running it on the
latest omnios r151010 and it
accesses came back to
high numbers, as before the deletion of data (you can see that in the screenshot).
And I tried to delete the same amount of data on a different storage server, and
the accesses to the ARC dropped in the same way as on the first pool.
Interesting.
Filip Marvan
From: Richard Elling
On May 28, 2014, at 7:08 AM, Schweiss, Chip c...@innovates.com wrote:
The 840 Pro doesn't have a supercap, but it does properly honor cache
flushes, which ZFS will do on a log device. This drastically reduces its
write performance and makes it a poor choice for a log device.
This is a
On May 29, 2014, at 12:23 PM, Dan Swartzendruber dswa...@druber.com wrote:
On Thu, 29 May 2014 20:58:50 +0200
Chris Ferebee c...@ferebee.net wrote:
Or, BTW, the cables. They could be marginal, still tolerated @ 3 Gbps by
the Seagate, but rejected by the WDs, for instance.
Since all
Ah, vintage hardware :-)
On May 29, 2014, at 5:59 PM, Michael Rasmussen m...@miras.org wrote:
On Thu, 29 May 2014 17:12:09 -0700
Richard Elling richard.ell...@richardelling.com wrote:
How to troubleshoot:
sasinfo hba-port -y
shows negotiated speeds
sasinfo hba-port
On Jun 24, 2014, at 1:09 AM, Johan Kragsterman johan.kragster...@capvert.se
wrote:
Hi!
How would I do in a proper way when I need to import a pool from a crashed
system into a new system, if that pool has a zil connected?
zpool import should be able to find the slog, if not it will
On Jul 28, 2014, at 5:11 PM, wuffers m...@wuffers.net wrote:
Does this look normal?
maybe, maybe not
pool: rpool
state: ONLINE
scan: scrub repaired 0 in 0h3m with 0 errors on Tue Jul 15 09:36:17 2014
config:
NAME STATE READ WRITE CKSUM
rpool
apologies for the long post, data for big systems tends to do that, comments
below...
On Jul 30, 2014, at 9:10 PM, wuffers m...@wuffers.net wrote:
So as I suspected, I lost 2 weeks of scrub time after the resilver. I started
a scrub again, and it's going extremely slow (~13x slower than
On Aug 8, 2014, at 7:33 AM, Dan McDonald dan...@omniti.com wrote:
On Aug 8, 2014, at 10:10 AM, Stephen Nelson-Smith
step...@atalanta-systems.com wrote:
Hi,
On 8 August 2014 15:06, Eric Sproul espr...@omniti.com wrote:
IMHO the existence of an FTP daemon in the base OS runs counter
On Aug 12, 2014, at 4:38 PM, Scott LeFevre slefe...@indy.rr.com wrote:
This may help (or may not). I presume this will work on OmniOS as it works
on other Illumos variants.
NFS logging is a perfect way to destroy performance. It is very serial and will
not scale well.
One important Sun
Dan,
This causes much angst for x86 systems. Some distros disable fastboot out of
the box.
I suggest OmniOS do likewise.
-- richard
On Aug 13, 2014, at 1:46 PM, Dan McDonald dan...@omniti.com wrote:
On Aug 13, 2014, at 4:43 PM, Matthew Mabis mma...@vmware.com wrote:
Hey all,
I looked
On Aug 28, 2014, at 1:43 AM, Vincenzo Pii p...@zhaw.ch wrote:
Hello,
What software/technologies can be used on OmniOS to get an active/passive
setup between two (OmniOS) nodes?
Basically, one node should be up and running all the time and, in case of
failures, the second one should
On Sep 22, 2014, at 7:58 PM, Matthew Lagoe matthew.la...@subrigo.net
wrote:
Is there anything I can do then to work around this issue on the Seagate
drives?
4TB nearline SAS? You'll need a firmware update from Seagate if you are at rev
3. The fix allows you to change the drive's
On Sep 30, 2014, at 1:16 AM, Yuri Vorobyev vo...@yamalfin.ru wrote:
Hello.
I'm getting errors like in https://www.illumos.org/issues/1787 on last
updated LTS release 151006.
Error for Command: undecoded cmd 0x85    Error Level: Recovered
scsi: [ID 107833 kern.notice] Requested
On Oct 3, 2014, at 6:25 AM, Fábio Rabelo fa...@fabiorabelo.wiki.br wrote:
How can I check how many HBA controllers I have connected in a
system, if the system is in a (very) remote location and I just
have SSH available?
prtconf -v returns too much noise!
prtconf -v | grep RAID
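The same grep-over-prtconf idea works for counting HBA instances over SSH. A sketch: the sample lines below are an illustrative assumption of how mpt_sas HBAs show up in prtconf driver output; adjust the pattern for your HBA's driver (mpt, mpt_sas) and the real output on your box.

```shell
#!/bin/sh
# Illustrative sample of prtconf-style device nodes with driver names.
prtconf_sample() {
cat <<'EOF'
pci15d9,691 (driver name: mpt_sas)
pci15d9,691 (driver name: mpt_sas)
pci8086,1521 (driver name: igb)
EOF
}

# Count nodes bound to the HBA driver.
prtconf_sample | grep -c 'driver name: mpt_sas'
```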
DanMcD will know for sure, but vols do support SCSI UNMAP over comstar.
The missing support is for ZFS to issue SCSI UNMAP commands to the disks.
-- richard
On Oct 9, 2014, at 3:39 AM, Rune Tipsmark r...@steait.net wrote:
Yeah, I searched and found a few threads about it, seems like it won't
On Oct 10, 2014, at 6:15 AM, Schweiss, Chip c...@innovates.com wrote:
On Thu, Oct 9, 2014 at 9:54 PM, Dan McDonald dan...@omniti.com wrote:
On Oct 9, 2014, at 10:23 PM, Schweiss, Chip c...@innovates.com wrote:
Just tried my 2nd system. r151010 nlockmgr starts after clearing
On Oct 9, 2014, at 4:58 PM, Rune Tipsmark r...@steait.net wrote:
Just updated to latest version r151012
Still same... I checked for vdev settings, is there another place I can check?
It won't be a ZFS feature. On the initiator, use something like sg3_utils
thusly:
[root@congo ~]#
On Oct 31, 2014, at 7:14 AM, Eric Sproul eric.spr...@circonus.com wrote:
On Fri, Oct 31, 2014 at 2:33 AM, Rune Tipsmark r...@steait.net wrote:
Why is this pool showing near 100% busy when the underlying disks are doing
nothing at all….
Simply put, it's just how the accounting works in
17.5K 155K 1.04G
pool04 400G 39.5T 0 17.6K 31.5K 1.03G
-Original Message-
From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On
Behalf Of Rune Tipsmark
Sent: Friday, October 31, 2014 12:38 PM
To: Richard Elling; Eric Sproul
Cc: omnios-discuss
On Nov 5, 2014, at 12:42 PM, Schweiss, Chip via illumos-discuss
disc...@lists.illumos.org wrote:
On Wed, Nov 5, 2014 at 2:36 PM, Dan McDonald dan...@omniti.com wrote:
On Nov 5, 2014, at 3:31 PM, Schweiss, Chip via illumos-discuss
disc...@lists.illumos.org wrote:
I had a system
On Nov 14, 2014, at 3:06 PM, Rune Tipsmark r...@steait.net wrote:
I get only about half the bandwidth with Sync=Always compared to
Sync=Disabled.
Using an SLC device that should perform better; it's rated at 750 MB/sec, yet I
only get something like 60% of that at the best of times. If there is a
Hi CJ,
I'm away from my notes at the moment, but know that an mptsas instance's
reported target number is not the same as the sd driver instance number. The
most expeditious way to cross reference is to use sasinfo or, failing that,
echo ::mptsas -t | mdb -k
which dumps the sas WWN to target
On Nov 15, 2014, at 7:49 AM, Richard Elling
richard.ell...@richardelling.com wrote:
Hi CJ,
I'm away from my notes at the moment, but know that an mptsas instance's
reported target number is not the same as the sd driver instance number. The
most expeditious way to cross reference
is that one device can result in a reset of the HBA,
which causes ereports
from potentially all outstanding requests (aka spam). The trick is to filter
out the root cause from the
recovery.
— richard
On 11/15/14, 9:54 AM, Richard Elling wrote:
On Nov 15, 2014, at 7:49 AM, Richard Elling
On Nov 19, 2014, at 6:36 AM, Rune Tipsmark r...@steait.net wrote:
I moved one of my PCI-E IOdrives and the disks changed from c14d0 and c15d0
to c16d0 and c17d0
How do I change it back so I can get my pool back online?
In most cases, you don't need to change it, just import.
-- richard
On Nov 25, 2014, at 11:31 AM, st...@linuxsuite.org wrote:
Howdy
zfs get all
AND
zpool get all
have settings for readonly
readonly off default
if I set readonly=on
will this affect scrub and its ability to fix errors?
On Dec 2, 2014, at 10:02 PM, wuffers m...@wuffers.net wrote:
I'm at home just looking into the health of our SAN and came across a bunch
of errors on the Stec ZeusRAM (in a mirrored log configuration):
# iostat -En
c12t5000A72B300780FFd0 Soft Errors: 0 Hard Errors: 1 Transport Errors:
I'm glad you got it running ok. Helpful hint below...
On Dec 3, 2014, at 2:37 AM, Randy S sim@live.nl wrote:
Anybody else had problems with two or more sas hba's in an omnios system ?
How was this solved?
From: sim@live.nl
To: omnios-discuss@lists.omniti.com
Date: Wed, 3 Dec
# (the opening 'for' line was truncated in the archive; a glob over the
# disk's slice devices is the assumed form)
for i in /dev/rdsk/cXtYdZs*; do
echo $i
zdb -l $i
done
This will show you the ZFS labels on each and every slice/partition for the
disk.
Ideally, you'll only see one set of ZFS labels for each disk.
-- richard
Thank you very much,
Filip
-Original Message-
From: Richard Elling
On Dec 26, 2014, at 2:36 PM, sergei serge...@gmail.com wrote:
Hi
The disks I want to install OmniOS to are TOSHIBA AL13SEB300 model which
scsi_vhci won't take over without proper conf file listing this model under
scsi-vhci-failover-override line. Right now those disk device path
On Dec 31, 2014, at 11:25 AM, Kevin Swab kevin.s...@colostate.edu wrote:
Hello Everyone,
We've been running OmniOS on a number of SuperMicro 36bay chassis, with
Supermicro motherboards, LSI SAS controllers (9211-8i 9207-8i) and
various SAS HDD's. These systems are serving block
: No known data errors
Thanks for your help,
Kevin
On 12/31/2014 3:22 PM, Richard Elling wrote:
On Dec 31, 2014, at 11:25 AM, Kevin Swab kevin.s...@colostate.edu wrote:
Hello Everyone,
We've been running OmniOS on a number of SuperMicro 36bay chassis, with
Supermicro motherboards, LSI
that come as
replacements. Almost makes one wish for *some* tool to add entries to vhci
and make them active at runtime.
This is not my experience. scsi_vhci.conf is a nicety, not a requirement.
— richard
On Sat, Dec 27, 2014 at 10:54 AM, Richard Elling
richard.ell...@richardelling.com
On Jan 2, 2015, at 9:52 AM, Johan Kragsterman johan.kragster...@capvert.se
wrote:
Hmmm again... a lot of hmmm's here today...
Been reading some more, and it looks like it is possible to reserve at LU
level.
You are correct. The spec is for targets as managed by the initiator. If
On Jan 19, 2015, at 3:55 AM, Rune Tipsmark r...@steait.net wrote:
hi all,
just in case there are other people out there using their ZFS box against
vSphere 5.1 or later... I found my storage vMotions were slow... really
slow... not much info available, and so after a while of trial and
On Jan 26, 2015, at 5:16 PM, W Verb wver...@gmail.com wrote:
Hello All,
I am mildly confused by something iostat does when displaying statistics for
a zpool. Before I begin rooting through the iostat source, does anyone have
an idea of why I am seeing high wait and wsvc_t values for
On Jan 24, 2015, at 9:25 AM, Rune Tipsmark r...@steait.net wrote:
hi all, I am just writing some scripts to gather performance data from
iostat... or at least trying... I would like to completely skip the first
output since boot from iostat output and just get right to the period I
On Jan 9, 2015, at 1:33 PM, Randy S sim@live.nl wrote:
Hi all,
Maybe this has been covered already (I saw a bug about this so I thought this
occurence should not be present in omnios r12) but when I do a zdb -d rpool
after having upgraded the rpool to the latest version, I get a :
or HBAs, then is it safe to conclude the fault
lies with the drive?
With high probability.
-- richard
Kevin
On 01/06/2015 02:23 PM, Richard Elling wrote:
On Jan 6, 2015, at 12:18 PM, Kevin Swab kevin.s...@colostate.edu wrote:
SAS expanders are involved in my systems, so I installed
On Jan 7, 2015, at 12:11 PM, Stephan Budach stephan.bud...@jvm.de wrote:
Am 07.01.15 um 18:00 schrieb Richard Elling:
On Jan 7, 2015, at 2:28 AM, Stephan Budach stephan.bud...@jvm.de
mailto:stephan.bud...@jvm.de wrote:
Hello everyone,
I am sharing my zfs via NFS to a couple of OVM
On Jan 7, 2015, at 2:28 AM, Stephan Budach stephan.bud...@jvm.de wrote:
Hello everyone,
I am sharing my zfs via NFS to a couple of OVM nodes. I noticed really bad
NFS read performance, when rsize goes beyond 128k, whereas the performance is
just fine at 32k. The issue is, that the
On Jan 6, 2015, at 9:28 AM, Schweiss, Chip c...@innovates.com wrote:
On Tue, Jan 6, 2015 at 5:16 AM, Filip Marvan filip.mar...@aira.cz
mailto:filip.mar...@aira.cz wrote:
Hi
as few guys before, I'm thinking again about High Availability storage with
ZFS. I know, that there is
On Feb 18, 2015, at 12:04 PM, Rune Tipsmark r...@steait.net wrote:
hi all,
I found an entry about zil_slog_limit here:
http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSWritesAndZILII
http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSWritesAndZILII
it basically explains how writes
On Mar 20, 2015, at 1:09 PM, Chris Siebenmann c...@cs.toronto.edu wrote:
We're running into a situation with one of our NFS ZFS fileservers[*]
where we're wondering if we have enough NFS server threads to handle
our load. Per 'sharectl get nfs', we have 'servers=512' configured,
but we're
On Mar 5, 2015, at 6:00 AM, Nate Smith nsm...@careyweb.com wrote:
I’ve had this problem for a while, and I have no way to diagnose what is
going on, but occasionally when system IO gets high (I’ve seen it happen
especially on backups), I will lose connectivity with my Fibre Channel cards
On Mar 26, 2015, at 11:24 PM, wuffers m...@wuffers.net wrote:
So here's what I will attempt to test:
- Create thin vmdk @ 10TB with vSphere fat client: PASS
- Create lazy zeroed vmdk @ 10 TB with vSphere fat client: PASS
- Create eager zeroed vmdk @ 10 TB with vSphere web client: PASS!
On Mar 30, 2015, at 1:16 PM, wuffers m...@wuffers.net wrote:
On Mar 30, 2015, at 4:10 PM, Richard Elling
richard.ell...@richardelling.com wrote:
is compression enabled?
-- richard
Yes, LZ4. Dedupe off.
Ironically, WRITE_SAME is the perfect workload for dedup
On Feb 25, 2015, at 3:17 PM, Tobias Oetiker t...@oetiker.ch wrote:
experts!
If you were to buy 6TB disks for a RAIDZ2 Pool, would you go for
512n like in the olden days, or use the new 4Kn.
I know ZFS can deal with both ...
So what would be your choice, and WHY?
Better yet, what
On Apr 21, 2015, at 2:41 PM, Theo Schlossnagle je...@omniti.com wrote:
Given that several of the original core OmniOS team work for Circonus, I'd
say the answer from this side would be pretty biased.
Collectd works okay, but certainly isn't my preference as the polling
interval can't
On May 5, 2015, at 12:54 AM, d...@xmweixun.com d...@xmweixun.com wrote:
Hi All,
When I present an LU to HP-UX or AIX, the LU writeback cache is
auto-disabled. Why?
In SCSI, initiators can change the write cache policy.
— richard
LU Name: 600144F05548DC360005
the target's default.
-- richard
Best Regards,
Deng Wei Quan / 邓伟权
Mob: +86 13906055059
Mail: d...@xmweixun.com mailto:d...@xmweixun.com
Xiamen Weixun Information Technology Co., Ltd.
From: dwq+auto_=dengweiquan=139@xmweixun.com
[mailto:dwq+auto_=dengweiquan=139@xmweixun.com] on behalf of Richard Elling
Sent: 2015
On Apr 7, 2015, at 8:49 AM, Chris Siebenmann c...@cs.toronto.edu wrote:
Short story is that /opt is part of a namespace managed by the Solaris
packaging and as such is part of a BE fs tree. If you have privately
managed packages under certain subdirs, turn those sub-dirs into
separate
On Jun 9, 2015, at 8:05 AM, Robert A. Brock robert.br...@2hoffshore.com
wrote:
List,
This is probably a silly question, but I’ve honestly never tried this and
don’t have a test machine handy at the moment – can a pool be safely exported
and re-imported later if it is currently
On Jun 9, 2015, at 12:00 PM, Narayan Desai narayan.de...@gmail.com wrote:
You might also crank up the priority on your resilver, particularly if it is
getting tripped all of the time:
http://broken.net/uncategorized/zfs-performance-tuning-for-scrubs-and-resilvers/
it has been in for a year or so
-- richard
On Jun 27, 2015, at 8:13 AM, Dan McDonald dan...@omniti.com wrote:
On Jun 27, 2015, at 7:24 AM, Tobias Oetiker t...@oetiker.ch wrote:
I am just watching OpenZFS Conference Videos. George Wilson just
showed off his allocation throttle work
On Jun 26, 2015, at 8:47 AM, John Barfield john.barfi...@bissinc.com wrote:
I’ve been interested in configuring omnios to run in memory off of a ram
disk myself.
Does anyone know where you could find a good guide for booting Solaris(h)
kernel into memory with a ramdisk?
:-)
the funny
On May 18, 2015, at 11:25 AM, Jeff Stockett jstock...@molalla.com wrote:
A drive failed in one of our supermicro 5048R-E1CR36L servers running omnios
r151012 last night, and somewhat unexpectedly, the whole system seems to have
panicked.
May 18 04:43:08 zfs01 scsi: [ID 365881
On Jul 16, 2015, at 9:48 AM, Chris Siebenmann c...@cs.toronto.edu wrote:
I wrote:
We have one ZFS-based NFS fileserver that persistently runs at a very
high level of non-ARC kernel memory usage that never seems to shrink.
On a 128 GB machine, mdb's ::memstat reports 95% memory usage by
On Jul 16, 2015, at 11:30 AM, Schweiss, Chip c...@innovates.com wrote:
The 850 Pro should never be used as a log device. It does not have power
fail protection of its ram cache. You might as well set sync=disabled and
skip using a log device entirely because the 850 Pro is not
On Jul 20, 2015, at 7:56 PM, Michael Talbott mtalb...@lji.org wrote:
Thanks for the reply. The bios for the card is disabled already. The 8 second
per drive scan happens after the kernel has already loaded and it is scanning
for devices. I wonder if it's due to running newer firmware. I
additional insight below...
> On Oct 22, 2015, at 12:02 PM, Matej Zerovnik wrote:
>
> Hello,
>
> I'm building a new system and I'm having a bit of a performance problem.
> Well, it's either that or I'm not getting the whole ZIL idea :)
>
> My system is as follows:
> - IBM
> On Oct 7, 2015, at 1:59 PM, Mick Burns wrote:
>
> So... how does Nexenta copes with hot spares and all kinds of disk failures ?
> Adding hot spares is part of their administration manuals so can we
> assume things are almost always handled smoothly ? I'd like to hear
>
On Jul 12, 2015, at 5:26 PM, Derek Yarnell de...@umiacs.umd.edu wrote:
On 7/12/15 3:21 PM, Günther Alka wrote:
First action:
If you can mount the pool read-only, update your backup
We are securing all the non-scratch data currently before messing with
the pool any more. We had backups