> On Dec 14, 2018, at 6:39 AM, Schweiss, Chip wrote:
>
> Has the NVMe support in Illumos come far enough along to properly support two
> servers connected to NVMe JBOF storage such as the Supermicro SSG-136R-N32JBF?
I can't speak to the Supermicro, but I can talk in detail about
fwiw, nfssvrstat breaks down the NFS writes by sync, async, and commits,
explicitly for determining how the workload will impact the ZIL. For writing many
files, the (compound) operations can also include creates and sync-on-close,
which also impact performance.
-- richard
> On Aug 23, 2018,
fmadm allows you to load/unload modules.
-- richard
> On Mar 16, 2018, at 8:24 AM, Schweiss, Chip wrote:
>
> I need to get this JBOD working with OmniOS. Is there a way to get FMD to
> ignore this SES device until this issue is fixed?
>
> It is a RAID, Inc. 4U 96-Bay
Serial Num: not set
> Write Protect : Disabled
> Writeback Cache : Disabled
> Access State : Active
>
> Problem is the same.
>
> Cheers,
>
> Anthony
>
> 2017-09-28 10:33 GMT+02:00 Stephan Budach <stephan.bud...@jvm.de
Comment below...
> On Sep 27, 2017, at 12:57 AM, anthony omnios wrote:
>
> Hi,
>
> I have a problem: I use many iSCSI zvols (one for each VM). Network traffic is
> 2 MB/s between the KVM host and the filer, but I write to the disks much more than that. I
> used a pool with separated
> On Aug 24, 2017, at 5:41 AM, Schweiss, Chip wrote:
>
> I just moved one of my production systems to OmniOS CE 151022m from 151014 and
> my NFS performance has tanked.
>
> Here's a snapshot of nfssvrtop:
>
> 2017 Aug 24 07:34:39, load: 1.54, read: 5427 KB, swrite:
is this effect relevant?
> If a ZIL pre-allocates say 4G and the remaining fragmented poolsize for
> regular writes is 12T
>
> Gea
>
> Am 23.06.2017 um 19:30 schrieb Richard Elling:
>> A slog helps fragmentation because the space for ZIL is pre-allocated based
>> on
A slog helps fragmentation because the space for ZIL is pre-allocated based on a
prediction of how big the write will be. The pre-allocated space includes a
physical-block-sized chain block for the ZIL. An 8k write can allocate 12k for
the ZIL entry that is freed when the txg commits. Thus, a
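The 8k-to-12k arithmetic above can be sketched as a toy model (my illustration, not the actual ZFS allocator): round the record up to the physical block size and add one physical-block-sized chain block.

```python
def zil_alloc(write_bytes: int, phys_block: int = 4096) -> int:
    """Toy model of ZIL pre-allocation as described above: the record is
    rounded up to the physical block size, plus one physical-block-sized
    chain block. Illustrative only, not the actual ZFS code."""
    record = -(-write_bytes // phys_block) * phys_block  # round up to block
    return record + phys_block                           # plus chain block

# An 8k synchronous write pre-allocates 12k, freed when the txg commits.
print(zil_alloc(8192))  # 12288
```

This is also why the ZIL space is only transiently occupied: once the txg commits, the whole pre-allocation is freed.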
But for more modern expanders and HBAs, we see fewer problems with
mixing.
I wouldn’t attempt it for 3G SAS/SATA, but 12G seems more robust.
— richard
>
> I was 100% banking on the ZeusRAM since that is what I could get my hands on
> immediately.
> From: Richard Elling <richard.e
so the smaller
capacity (GB) drives are also slower.
— richard
>
>
> From: Richard Elling <richard.ell...@richardelling.com>
> Sent: Monday, April 10, 2017 4:15:32 PM
> To: Machine Man
> Cc: omnios-discuss@lists.omniti.com
> Subject: Re: [OmniOS-discuss] ZeusRAM - predi
> If they are files which have been deleted they must be very short lived files
> ?
not necessarily.
If you want auditing, you need auditing tools; dtrace is not the right tool.
-- richard
>
> Richard Elling wrote:
>>
>>
>>> On Mar 21, 2017
> On Jan 29, 2017, at 3:10 AM, Stephan Budach wrote:
>
> Hi,
>
> just to wrap this up… I decided to go with 15 additional LUNs on each storage
> zpool, to avoid zfs complaining about replication mismatches. I know, I could
> have done otherwise, but it somehow felt
> On Jan 26, 2017, at 12:20 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
>
> Hi Richard,
>
> gotcha… read on, below…
"thin provisioning" bit you. For "thick provisioning" you’ll have a
refreservation and/or reservation.
— richard
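The difference can be sketched with a toy accounting model (numbers and names are mine): a thick zvol's refreservation charges the full volume size against the pool up front, while a thin one charges only the data actually written.

```python
def pool_free(pool_size_gb: int, vol_size_gb: int, written_gb: int,
              thick: bool) -> int:
    """Toy model of zvol provisioning: with a refreservation ("thick"),
    the pool charges the full volume size immediately; "thin" charges
    only what has been written, so the pool can oversubscribe."""
    charged = vol_size_gb if thick else written_gb
    return pool_size_gb - charged

# 10000 GB pool, 4000 GB zvol, 1000 GB written so far:
print(pool_free(10000, 4000, 1000, thick=True))   # 6000 (space guaranteed)
print(pool_free(10000, 4000, 1000, thick=False))  # 9000 (oversubscribable)
```

The thin case is exactly how you get bitten: the pool can run out of space while the initiator still believes the LU has room.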
>
more below…
> On Jan 25, 2017, at 3:01 PM, Stephan Budach <stephan.bud...@jvm.de> wrote:
>
> Ooops… I should have waited to send that message until after I rebooted the
> S11.1 host…
>
>
> Am 25.01.17 um 23:41 schrieb Stephan Budach:
>> Hi Richard,
>>
Hi Stephan,
> On Jan 25, 2017, at 5:54 AM, Stephan Budach wrote:
>
> Hi guys,
>
> I have been trying to import a zpool, based on a 3way-mirror provided by
> three omniOS boxes via iSCSI. This zpool had been working flawlessly until
> some random reboot of the S11.1
> On Jan 23, 2017, at 5:17 PM, Michael Rasmussen wrote:
>
> On Mon, 23 Jan 2017 19:27:51 -0500
> Dale Ghent wrote:
>
>>
>> Check your BIOS settings for the slots, they might be limited there. I also
>> note that this motherboard model has two physical x8
> On Jan 21, 2017, at 8:07 AM, Dale Ghent wrote:
>
> We developed Zetaback for this.
+1 for zetaback. There are perhaps hundreds of implementations of this over the
years. I think you'll find that zetaback is one of the best designs.
-- richard
> As for how you exactly
one more thing…
> On Jan 4, 2017, at 10:29 AM, Richard Elling
> <richard.ell...@richardelling.com> wrote:
>
>>
>> On Jan 4, 2017, at 10:04 AM, Chris Siebenmann <c...@cs.toronto.edu> wrote:
>>
>> We recently had a server reboot due to the
> On Jan 4, 2017, at 10:04 AM, Chris Siebenmann wrote:
>
> We recently had a server reboot due to the ZFS vdev_deadman/spa_deadman
> timeout timer activating and panicking the system. If you haven't heard
> of this timer before, that's not surprising; triggering it requires
> On Dec 15, 2016, at 11:47 AM, Dale Ghent wrote:
>
>
>> On Dec 15, 2016, at 12:02 PM, Machine Man wrote:
>>
>> Can the smartmontools be used on OmniOS to collect JBOD enclosure status,
>> fan voltages, etc.?
>
> SMARTmontools is a disk device
> On Dec 6, 2016, at 8:16 PM, Lawrence Giam wrote:
>
> Hi All,
>
> Also to note that I have another server running OpenIndiana 151a7, and this is also a
> SuperMicro server; it has the same behaviour, which is that the system keeps
> generating the
> On Dec 6, 2016, at 7:30 AM, Dan McDonald wrote:
>
> I got a link to this commit from the Delphix illumos repo a while back:
>
> https://github.com/openzfs/openzfs/pull/186/
>
> I was curious if NFS-using people in the audience here would like to see this
> one Just
> On Jun 8, 2016, at 12:24 AM, Jim Klimov wrote:
>
> On June 8, 2016, at 7:00:50 CEST, Martijn Fennis wrote:
>> Hi,
>>
>>
>> Does someone have some info about setting up ZFS on top of slices?
>>
>> I have 10 drives for which I would like to have the outer ring
> On Jun 8, 2016, at 7:37 AM, Martijn Fennis wrote:
>
> Hi Dale and Jim,
>
> Thanks for your time.
>
> The reason is to store VMs on the Quick-pool and downloads on the Slow-pool.
>
> I would personally not assign the Slow-pool any of my memory, perhaps
> meta-data.
>
> On May 24, 2016, at 4:28 PM, Robert Fantini wrote:
>
> we've installed napp-it on a couple of systems. They run a lot louder -
> the fans do not seem to be controlled by software.
>
> is there a software package that can be installed to handle sensors and
>
we've been working on an H241 JBOD driver. We have it talking to drives on a
variety of enclosures, with several weeks of constant load. The current
cpqary3 driver is not suitable, even if it attaches.
Let me know if you want to become involved and have enough time in your
schedule to wait
> On May 3, 2016, at 7:57 PM, Ergi Thanasko wrote:
>
> Hi Dan,
> Yes, it is a ZFS pool shared over NFS. Yup, going down the rabbit hole, but
> I can wait for a while; I have patience. Any help is appreciated, thank you
The most direct approach is to use multiple IP
> On Apr 22, 2016, at 10:28 AM, Dan McDonald <dan...@omniti.com> wrote:
>
>
>> On Apr 22, 2016, at 1:13 PM, Richard Elling
>> <richard.ell...@richardelling.com> wrote:
>>
>> If you're running Solaris 11 or pre-2015 OmniOS, then the old write thrott
> On Apr 22, 2016, at 5:00 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
>
> Am 21.04.16 um 18:36 schrieb Richard Elling:
>>> On Apr 21, 2016, at 7:47 AM, Chris Siebenmann <c...@cs.toronto.edu> wrote:
>>>
>>> [About ZFS scrub tunables:]
>
> On Apr 21, 2016, at 7:47 AM, Chris Siebenmann wrote:
>
> [About ZFS scrub tunables:]
>> Interesting read - and it surely works. If you set the tunable before
>> you start the scrub, you can immediately see the throughput being much
>> higher than with the standard setting.
> On Apr 10, 2016, at 3:05 AM, Guenther Alka wrote:
>
> Check this prebuilt machine as a reference
> http://www.supermicro.com/products/system/2U/2028/SSG-2028R-ACR24L.cfm
>
> it comes with 3 x Avago 3008 HBAs in IT mode and 2 X 10 GbE Ethernet
FYI, Avago bought Broadcom,
> On Mar 23, 2016, at 6:37 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us>
> wrote:
>
> On Wed, 23 Mar 2016, Richard Elling wrote:
>
>>
>>> On Mar 23, 2016, at 7:49 AM, Richard Jahnel <rjah...@ellipseinc.com> wrote:
>>>
>>> It sh
> On Mar 23, 2016, at 7:49 AM, Richard Jahnel wrote:
>
> It should be noted that using a 512e disk as a 512n disk subjects you to a
> significant risk of silent corruption in the event of power loss, because
> 512e disks do a read-modify-write operation to modify
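The risk can be sketched with a toy model (assumptions mine): a 512e drive stores eight 512-byte logical sectors per 4 KiB physical sector, so updating one logical sector rewrites the whole physical sector, and a power cut mid-rewrite can corrupt neighbors the host never wrote.

```python
# Toy model of a 512e drive: 8 logical 512 B sectors share one 4 KiB
# physical sector. Writing one logical sector forces the drive to read,
# modify, and rewrite the whole physical sector; a power cut mid-rewrite
# tears all 8, corrupting sectors the host never asked to change.
SECTORS = 8

def rmw(phys, idx, data, power_fails=False):
    updated = list(phys)
    updated[idx] = data
    if power_fails:
        return [None] * SECTORS  # torn write: whole physical sector lost
    return updated

phys = [bytes([i]) * 512 for i in range(SECTORS)]
torn = rmw(phys, idx=3, data=b"\xff" * 512, power_fails=True)
collateral = sum(1 for i in range(SECTORS) if i != 3 and torn[i] != phys[i])
print(collateral)  # 7 sectors corrupted that were never written by the host
```

On a true 512n disk the single-sector write would be atomic, which is why treating 512e as 512n changes the failure mode.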
> On Mar 22, 2016, at 7:41 AM, Chris Siebenmann wrote:
>
>>> This implicitly assumes that the only reason to set ashift=12 is
>>> if you are currently using one or more drives that require it. I
>>> strongly disagree with this view. Since ZFS cannot currently replace
>>> a
a good default.
-- richard
>
> Regards
>
> Richard Jahnel
> Network Engineer
> On-Site.com | Ellipse Design
> 866-266-7483 ext. 4408
> Direct: 669-800-6270
>
> -Original Message-
> From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On
> On Mar 21, 2016, at 12:19 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us>
> wrote:
>
> On Mon, 21 Mar 2016, Richard Elling wrote:
>>>
>>> Adding the ashift argument to zpool was discussed every few years and so
>>> far was always deemed not en
> On Mar 21, 2016, at 11:11 AM, Jim Klimov wrote:
>
> On March 21, 2016, at 10:02:03 CET, Hanno Hirschberger
> wrote:
>> On 21.03.2016 08:00, Fred Liu wrote:
>>> So that means illumos can handle 512n and 4kn automatically and
>> properly?
>>
>>
> On Mar 14, 2016, at 11:42 AM, CJ Keist wrote:
>
> Dan,
> Do you know if this is just a one-shot attempt? Meaning, if I choose the wrong
> TXG to import, can I export and try again with a different TXG?
>
pro tip: try retro import with readonly option
-- richard
comment below...
> On Mar 9, 2016, at 11:05 AM, Michael Rasmussen wrote:
>
> Hi all,
>
> I suddenly noticed one of the disk bays in my storage server going red
> with this logged in dmesg:
> Mar 9 19:19:47 nas genunix: [ID 107833 kern.warning] WARNING:
>
> On Feb 19, 2016, at 7:14 AM, Trey Palmer wrote:
>
> I haven't checked the Supermicro models lately, but the HP DL380Gen9 has a
> model with 24 direct-wired 2.5" slots in 3 modular 8-disk bays. The SAS
> expander is an add-in card which you don't have to buy.
>
> It's
> On Dec 23, 2015, at 12:58 AM, Dan Vatca wrote:
>
> If you need latency, you will most likely need a latency distribution
> histogram, and not an average latency.
> With averages you will lose latency outliers that are very important. Here's
> a good read with lots of
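The point about averages is easy to demonstrate with synthetic numbers (mine): one slow outlier barely moves the mean, while a percentile or histogram view shows it immediately.

```python
# 999 fast I/Os at 0.2 ms plus one 500 ms outlier: the mean stays near
# 0.7 ms and looks healthy, but the latency one client actually saw is
# 500 ms -- visible in a histogram, invisible in the average.
latencies_ms = [0.2] * 999 + [500.0]

mean = sum(latencies_ms) / len(latencies_ms)
p999 = sorted(latencies_ms)[int(0.999 * len(latencies_ms))]

print(round(mean, 3))  # 0.7
print(p999)            # 500.0
```

This is why per-bucket latency distributions are the tool of choice for storage diagnosis rather than a single averaged service time.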
> On Dec 22, 2015, at 4:44 PM, 張峻宇 wrote:
>
> Hi all,
> According to the release note of OmniOS r151016, we could get “IOPS,
> bandwidth, and latency kstats for NFS server”
>
> there is a lot of information shown when I use the command #kstat,
> I
> On Dec 10, 2015, at 12:02 PM, Dave Pooser <dave...@pooserville.com> wrote:
>
> On 12/10/15, 12:13 PM, "OmniOS-discuss on behalf of Richard Elling"
> <omnios-discuss-boun...@lists.omniti.com on behalf of
> richard.ell...@richardelling.com> wrote:
>
>
additional insight below...
> On Oct 22, 2015, at 12:02 PM, Matej Zerovnik wrote:
>
> Hello,
>
> I'm building a new system and I'm having a bit of a performance problem.
> Well, it's either that or I'm not getting the whole ZIL idea :)
>
> My system is the following:
> - IBM
> On Oct 7, 2015, at 1:59 PM, Mick Burns wrote:
>
> So... how does Nexenta cope with hot spares and all kinds of disk failures?
> Adding hot spares is part of their administration manuals, so can we
> assume things are almost always handled smoothly? I'd like to hear
>
> On Sep 11, 2015, at 11:37 AM, Dan McDonald wrote:
>
>
>> On Sep 11, 2015, at 2:33 PM, Michael Rasmussen wrote:
>>
>> What should one look for from the zdb output to identify any errors?
>
> Look for assertion failures, or other non-0 exits.
>
>
On Jul 20, 2015, at 7:56 PM, Michael Talbott mtalb...@lji.org wrote:
Thanks for the reply. The bios for the card is disabled already. The 8 second
per drive scan happens after the kernel has already loaded and it is scanning
for devices. I wonder if it's due to running newer firmware. I
On Jul 16, 2015, at 9:48 AM, Chris Siebenmann c...@cs.toronto.edu wrote:
I wrote:
We have one ZFS-based NFS fileserver that persistently runs at a very
high level of non-ARC kernel memory usage that never seems to shrink.
On a 128 GB machine, mdb's ::memstat reports 95% memory usage by
On Jul 16, 2015, at 11:30 AM, Schweiss, Chip c...@innovates.com wrote:
The 850 Pro should never be used as a log device. It does not have power-fail
protection for its RAM cache. You might as well set sync=disabled and
skip using a log device entirely, because the 850 Pro is not
On Jul 12, 2015, at 5:26 PM, Derek Yarnell de...@umiacs.umd.edu wrote:
On 7/12/15 3:21 PM, Günther Alka wrote:
First action:
If you can mount the pool read-only, update your backup
We are securing all the non-scratch data currently before messing with
the pool any more. We had backups
On Jun 26, 2015, at 8:47 AM, John Barfield john.barfi...@bissinc.com wrote:
I’ve been interested in configuring omnios to run in memory off of a ram
disk myself.
Does anyone know where you could find a good guide for booting a Solaris(ish)
kernel into memory with a ramdisk?
:-)
the funny
it has been in for a year or so
-- richard
On Jun 27, 2015, at 8:13 AM, Dan McDonald dan...@omniti.com wrote:
On Jun 27, 2015, at 7:24 AM, Tobias Oetiker t...@oetiker.ch wrote:
I am just watching OpenZFS Conference Videos. George Wilson just
showed off his allocation throttle work
On Jun 9, 2015, at 8:05 AM, Robert A. Brock robert.br...@2hoffshore.com
wrote:
List,
This is probably a silly question, but I’ve honestly never tried this and
don’t have a test machine handy at the moment – can a pool be safely exported
and re-imported later if it is currently
On Jun 9, 2015, at 12:00 PM, Narayan Desai narayan.de...@gmail.com wrote:
You might also crank up the priority on your resilver, particularly if it is
getting tripped all of the time:
http://broken.net/uncategorized/zfs-performance-tuning-for-scrubs-and-resilvers/
On May 18, 2015, at 11:25 AM, Jeff Stockett jstock...@molalla.com wrote:
A drive failed in one of our supermicro 5048R-E1CR36L servers running omnios
r151012 last night, and somewhat unexpectedly, the whole system seems to have
panicked.
May 18 04:43:08 zfs01 scsi: [ID 365881
the target's default.
-- richard
Best Regards,
Deng Wei Quan / 邓伟权
Mob: +86 13906055059
Mail: d...@xmweixun.com
Xiamen Weixun Information Technology Co., Ltd. (厦门维讯信息科技有限公司)
From: dwq+auto_=dengweiquan=139@xmweixun.com
[mailto:dwq+auto_=dengweiquan=139@xmweixun.com] on behalf of Richard Elling
Sent: 2015
On May 5, 2015, at 12:54 AM, d...@xmweixun.com wrote:
Hi All,
When I present an LU to HP-UX or AIX, the LU writeback cache is automatically
disabled. Why?
In SCSI, initiators can change the write cache policy.
— richard
LU Name: 600144F05548DC360005
On Apr 21, 2015, at 2:41 PM, Theo Schlossnagle je...@omniti.com wrote:
Given that several of the original core OmniOS team work for Circonus, I'd
say the answer from this side would be pretty biased.
Collectd works okay, but certainly isn't my preference as the polling
interval can't
On Apr 7, 2015, at 8:49 AM, Chris Siebenmann c...@cs.toronto.edu wrote:
Short story is that /opt is part of a namespace managed by the Solaris
packaging and as such is part of a BE fs tree. If you have privately
managed packages under certain subdirs, turn those sub-dirs into
separate
On Mar 26, 2015, at 11:24 PM, wuffers m...@wuffers.net wrote:
So here's what I will attempt to test:
- Create thin vmdk @ 10TB with vSphere fat client: PASS
- Create lazy zeroed vmdk @ 10 TB with vSphere fat client: PASS
- Create eager zeroed vmdk @ 10 TB with vSphere web client: PASS!
On Mar 30, 2015, at 1:16 PM, wuffers m...@wuffers.net wrote:
On Mar 30, 2015, at 4:10 PM, Richard Elling
richard.ell...@richardelling.com wrote:
is compression enabled?
-- richard
Yes, LZ4. Dedupe off.
Ironically, WRITE_SAME is the perfect workload for dedup
On Mar 20, 2015, at 1:09 PM, Chris Siebenmann c...@cs.toronto.edu wrote:
We're running into a situation with one of our NFS ZFS fileservers[*]
where we're wondering if we have enough NFS server threads to handle
our load. Per 'sharectl get nfs', we have 'servers=512' configured,
but we're
On Mar 5, 2015, at 6:00 AM, Nate Smith nsm...@careyweb.com wrote:
I’ve had this problem for a while, and I have no way to diagnose what is
going on, but occasionally when system IO gets high (I’ve seen it happen
especially on backups), I will lose connectivity with my Fibre Channel cards
On Feb 25, 2015, at 3:17 PM, Tobias Oetiker t...@oetiker.ch wrote:
experts!
If you were to buy 6TB disks for a RAIDZ2 pool, would you go for
512n like in the olden days, or use the new 4Kn?
I know ZFS can deal with both ...
So what would be your choice, and WHY?
Better yet, what
On Feb 18, 2015, at 12:04 PM, Rune Tipsmark r...@steait.net wrote:
hi all,
I found an entry about zil_slog_limit here:
http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSWritesAndZILII
it basically explains how writes
On Jan 26, 2015, at 5:16 PM, W Verb wver...@gmail.com wrote:
Hello All,
I am mildly confused by something iostat does when displaying statistics for
a zpool. Before I begin rooting through the iostat source, does anyone have
an idea of why I am seeing high wait and wsvc_t values for
On Jan 24, 2015, at 9:25 AM, Rune Tipsmark r...@steait.net wrote:
hi all, I am just writing some scripts to gather performance data from
iostat... or at least trying... I would like to completely skip the first,
since-boot output from iostat and just get right to the period I
On Jan 19, 2015, at 3:55 AM, Rune Tipsmark r...@steait.net wrote:
hi all,
just in case there are other people out there using their ZFS box against
vSphere 5.1 or later... I found my storage vMotions were slow... really
slow... not much info available and so after a while of trial and
On Jan 9, 2015, at 1:33 PM, Randy S sim@live.nl wrote:
Hi all,
Maybe this has been covered already (I saw a bug about this, so I thought this
occurrence should not be present in omnios r12), but when I do a zdb -d rpool
after having upgraded the rpool to the latest version, I get a :
On Jan 7, 2015, at 12:11 PM, Stephan Budach stephan.bud...@jvm.de wrote:
Am 07.01.15 um 18:00 schrieb Richard Elling:
On Jan 7, 2015, at 2:28 AM, Stephan Budach stephan.bud...@jvm.de wrote:
Hello everyone,
I am sharing my zfs via NFS to a couple of OVM
On Jan 7, 2015, at 2:28 AM, Stephan Budach stephan.bud...@jvm.de wrote:
Hello everyone,
I am sharing my zfs via NFS to a couple of OVM nodes. I noticed really bad
NFS read performance, when rsize goes beyond 128k, whereas the performance is
just fine at 32k. The issue is, that the
or HBAs, then is it safe to conclude the fault
lies with the drive?
With high probability.
-- richard
Kevin
On 01/06/2015 02:23 PM, Richard Elling wrote:
On Jan 6, 2015, at 12:18 PM, Kevin Swab kevin.s...@colostate.edu wrote:
SAS expanders are involved in my systems, so I installed
On Jan 6, 2015, at 9:28 AM, Schweiss, Chip c...@innovates.com wrote:
On Tue, Jan 6, 2015 at 5:16 AM, Filip Marvan filip.mar...@aira.cz wrote:
Hi,
like a few guys before, I'm thinking again about High Availability storage with
ZFS. I know that there is
On Jan 2, 2015, at 9:52 AM, Johan Kragsterman johan.kragster...@capvert.se
wrote:
Hmmm, again a lot of hmmm's here today...
Been reading some more, and it looks like it is possible to reserve at LU
level.
You are correct. The spec is for targets as managed by the initiator. If
that come as
replacements. Almost makes one wish for *some* tool to add entries to vhci
and make them active at runtime.
This is not my experience. scsi_vhci.conf is a nicety, not a requirement.
— richard
On Sat, Dec 27, 2014 at 10:54 AM, Richard Elling
richard.ell...@richardelling.com
On Dec 31, 2014, at 11:25 AM, Kevin Swab kevin.s...@colostate.edu wrote:
Hello Everyone,
We've been running OmniOS on a number of SuperMicro 36bay chassis, with
Supermicro motherboards, LSI SAS controllers (9211-8i 9207-8i) and
various SAS HDD's. These systems are serving block
: No known data errors
Thanks for your help,
Kevin
On 12/31/2014 3:22 PM, Richard Elling wrote:
On Dec 31, 2014, at 11:25 AM, Kevin Swab kevin.s...@colostate.edu wrote:
Hello Everyone,
We've been running OmniOS on a number of SuperMicro 36bay chassis, with
Supermicro motherboards, LSI
On Dec 26, 2014, at 2:36 PM, sergei serge...@gmail.com wrote:
Hi
The disks I want to install OmniOS to are the TOSHIBA AL13SEB300 model, which
scsi_vhci won't take over without a proper conf file listing this model under the
scsi-vhci-failover-override line. Right now those disk device path
I'm glad you got it running ok. Helpful hint below...
On Dec 3, 2014, at 2:37 AM, Randy S sim@live.nl wrote:
Anybody else had problems with two or more SAS HBAs in an OmniOS system?
How was this solved?
From: sim@live.nl
To: omnios-discuss@lists.omniti.com
Date: Wed, 3 Dec
echo $i
zdb -l $i
done
This will show you the ZFS labels on each and every slice/partition for the
disk.
Ideally, you'll only see one set of ZFS labels for each disk.
-- richard
Thank you very much,
Filip
-Original Message-
From: Richard Elling
On Dec 2, 2014, at 10:02 PM, wuffers m...@wuffers.net wrote:
I'm at home just looking into the health of our SAN and came across a bunch
of errors on the Stec ZeusRAM (in a mirrored log configuration):
# iostat -En
c12t5000A72B300780FFd0 Soft Errors: 0 Hard Errors: 1 Transport Errors:
On Nov 25, 2014, at 11:31 AM, st...@linuxsuite.org wrote:
Howdy
zfs get all
AND
zpool get all
have settings for readonly
readonly off default
if I set readonly=on
will this affect scrub and its ability to fix errors?
On Nov 19, 2014, at 6:36 AM, Rune Tipsmark r...@steait.net wrote:
I moved one of my PCI-E IOdrives and the disks changed from c14d0 and c15d0
to c16d0 and c17d0
How do I change it back so I can get my pool back online?
In most cases, you don't need to change it, just import.
-- richard
Hi CJ,
I'm away from my notes at the moment, but know that an mptsas instance's
reported target number is not the same as the sd driver instance number. The
most expeditious way to cross reference is to use sasinfo or, failing that,
echo ::mptsas -t | mdb -k
which dumps the sas WWN to target
On Nov 15, 2014, at 7:49 AM, Richard Elling
richard.ell...@richardelling.com wrote:
is that one device can result in a reset of the HBA,
which causes ereports
from potentially all outstanding requests (aka spam). The trick is to filter
out the root cause from the
recovery.
— richard
On 11/15/14, 9:54 AM, Richard Elling wrote:
On Nov 15, 2014, at 7:49 AM, Richard Elling
On Nov 14, 2014, at 3:06 PM, Rune Tipsmark r...@steait.net wrote:
I get only about half the bandwidth with Sync=Always compared to
Sync=Disabled.
Using an SLC device that should perform better; it's rated at 750 MB/sec, and I
only get something like 60% of that at the best of times. If there is a
On Nov 5, 2014, at 12:42 PM, Schweiss, Chip via illumos-discuss
disc...@lists.illumos.org wrote:
On Wed, Nov 5, 2014 at 2:36 PM, Dan McDonald dan...@omniti.com wrote:
On Nov 5, 2014, at 3:31 PM, Schweiss, Chip via illumos-discuss
disc...@lists.illumos.org wrote:
I had a system
17.5K 155K 1.04G
pool04 400G 39.5T 0 17.6K 31.5K 1.03G
-Original Message-
From: OmniOS-discuss [mailto:omnios-discuss-boun...@lists.omniti.com] On
Behalf Of Rune Tipsmark
Sent: Friday, October 31, 2014 12:38 PM
To: Richard Elling; Eric Sproul
Cc: omnios-discuss
On Oct 31, 2014, at 7:14 AM, Eric Sproul eric.spr...@circonus.com wrote:
On Fri, Oct 31, 2014 at 2:33 AM, Rune Tipsmark r...@steait.net wrote:
Why is this pool showing near 100% busy when the underlying disks are doing
nothing at all….
Simply put, it's just how the accounting works in
On Oct 10, 2014, at 6:15 AM, Schweiss, Chip c...@innovates.com wrote:
On Thu, Oct 9, 2014 at 9:54 PM, Dan McDonald dan...@omniti.com wrote:
On Oct 9, 2014, at 10:23 PM, Schweiss, Chip c...@innovates.com wrote:
Just tried my 2nd system. r151010 nlockmgr starts after clearing
On Oct 9, 2014, at 4:58 PM, Rune Tipsmark r...@steait.net wrote:
Just updated to latest version r151012
Still same... I checked for vdev settings, is there another place I can check?
It won't be a ZFS feature. On the initiator, use something like sg3_utils
thusly:
[root@congo ~]#
DanMcD will know for sure, but vols do support SCSI UNMAP over comstar.
The missing support is for ZFS to issue SCSI UNMAP commands to the disks.
-- richard
On Oct 9, 2014, at 3:39 AM, Rune Tipsmark r...@steait.net wrote:
Yeah, I searched and found a few threads about it, seems like it won't
On Oct 3, 2014, at 6:25 AM, Fábio Rabelo fa...@fabiorabelo.wiki.br wrote:
How can I check how many HBA controllers I have connected in a
system, if this system is in a (very) remote location and I just
have SSH available?
prtconf -v returns too much noise!
prtconf -v | grep RAID
On Sep 30, 2014, at 1:16 AM, Yuri Vorobyev vo...@yamalfin.ru wrote:
Hello.
I'm getting errors like in https://www.illumos.org/issues/1787 on last
updated LTS release 151006.
Error for Command: undecoded cmd 0x85    Error Level: Recovered
scsi: [ID 107833 kern.notice] Requested
On Sep 22, 2014, at 7:58 PM, Matthew Lagoe matthew.la...@subrigo.net
wrote:
Is there anything I can do then to work around this issue on the Seagate
drives?
4TB nearline SAS? You'll need a firmware update from Seagate if you are at rev
3. The fix allows you to change the drive's
On Aug 28, 2014, at 1:43 AM, Vincenzo Pii p...@zhaw.ch wrote:
Hello,
What software/technologies can be used on OmniOS to get an active/passive
setup between two (OmniOS) nodes?
Basically, one node should be up and running all the time and, in case of
failures, the second one should
Dan,
This causes much angst for x86 systems. Some distros disable fastboot out of
the box.
I suggest OmniOS do likewise.
-- richard
On Aug 13, 2014, at 1:46 PM, Dan McDonald dan...@omniti.com wrote:
On Aug 13, 2014, at 4:43 PM, Matthew Mabis mma...@vmware.com wrote:
Hey all,
I looked
On Aug 12, 2014, at 4:38 PM, Scott LeFevre slefe...@indy.rr.com wrote:
This may help (or may not). I presume this will work on OmniOS as it works
on other Illumos variants.
NFS logging is a perfect way to destroy performance. It is very serial and will
not scale well.
One important Sun
On Aug 8, 2014, at 7:33 AM, Dan McDonald dan...@omniti.com wrote:
On Aug 8, 2014, at 10:10 AM, Stephen Nelson-Smith
step...@atalanta-systems.com wrote:
Hi,
On 8 August 2014 15:06, Eric Sproul espr...@omniti.com wrote:
IMHO the existence of an FTP daemon in the base OS runs counter