Hi all,
I have run into a situation where ipadm hangs upon trying to create a
new interface on my ixgbe adapters. ipadm create-if ixgbe[0|1|2] didn't
return and was eventually killed, either by the system or by the kill -9
I issued against that process. I just tried to create another if
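For what it's worth, a hung ipadm like this can usually be inspected before resorting to kill -9; a minimal sketch, assuming the stuck process is still named ipadm (this mirrors the pstack approach suggested elsewhere in the thread for a hung dladm):

```shell
# Capture the user-level stack of the stuck ipadm process.
pstack $(pgrep -x ipadm)

# Watch which system call it is blocked in; Ctrl-C detaches without killing it.
truss -p $(pgrep -x ipadm)
```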
On 10.12.14 at 17:20, Dan McDonald wrote:
On Dec 10, 2014, at 11:12 AM, Stephan Budach stephan.bud...@jvm.de wrote:
Hi Dan,
I actually don't know the term 'incantation' yet, but I assume that you wanted
to know the release of OmniOS I am running?
I meant what exact command-line arguments you
On 10.12.14 at 20:28, Dan McDonald wrote:
On Dec 10, 2014, at 2:23 PM, Stephan Budach stephan.bud...@jvm.de wrote:
This worked, but I wanted to change the MTU, so I decided to remove this config
again:
ipadm delete-addr ixgbe3/v4static
ipadm delete-if ixgbe3
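On illumos, the link MTU is a dladm link property that can only be changed while no IP interface is plumbed on the link, which is presumably why the config had to be removed first. A hedged sketch of the full sequence (the MTU value and the 192.0.2.10/24 address are assumptions for illustration):

```shell
# Remove the address and the IP interface so the link is unplumbed.
ipadm delete-addr ixgbe3/v4static
ipadm delete-if ixgbe3

# Set the new MTU on the datalink (9000 is an assumed jumbo-frame value).
dladm set-linkprop -p mtu=9000 ixgbe3

# Re-create the IP interface and the static address.
ipadm create-if ixgbe3
ipadm create-addr -T static -a 192.0.2.10/24 ixgbe3/v4static
```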
Interesting. Your dladm show
On 10.12.14 at 20:56, Stephan Budach wrote:
I don't know… it seems to have serious issues with the MTU specifically…
root@nfsvmpool01:~# dladm show-phys
LINK    MEDIA     STATE  SPEED  DUPLEX  DEVICE
igb0    Ethernet  up     1000   full    igb0
igb1
Hi Dan,
On 10.12.14 at 21:17, Dan McDonald wrote:
On Dec 10, 2014, at 3:14 PM, Stephan Budach stephan.bud...@jvm.de wrote:
Now, this one… I don't get it. ;)
root@nfsvmpool01:~# ipadm create-if ixgbe3
root@nfsvmpool01:~# ipadm show-if
IFNAME   STATE  CURRENT  PERSISTENT
lo0      ok
budy
On Thu, 18 Dec 2014 08:54:05 +0100 Stephan Budach stephan.bud...@jvm.de wrote
Good morning,
I wanted to install gcc to compile znapzend on my r012 box, so I issued:
root@nfsvmpool02:/root# pkg install developer/gcc48
Packages to install: 6
Create boot environment
Ahh… and I almost forgot. I will of course first try that procedure on
the backup OmniOS box… ;)
Should that fail, the next zfs send/recv will take care of that…
Thanks for all your valuable input,
budy
___
OmniOS-discuss mailing list
On 20.01.15 at 14:15, Stephan Budach wrote:
Hi guys,
we just experienced a lock-up on one of our OmniOS r006 boxes in a way
that we had to reset it to get it working again. This box is running
on a SuperMicro storage server and it had been checked using smartctl
by our check_mk client each
On 21.01.15 at 15:38, Schweiss, Chip wrote:
On Wed, Jan 21, 2015 at 2:54 AM, Stephan Budach stephan.bud...@jvm.de wrote:
The sas2flash utility perfectly fits my needs, so I will go with
that. Now, the only thing to check is which FW to use. I have
Hi guys,
I am struggling to get storcli to work - well, not actually to work, but
to show me any of my 9207 HBAs, as storcli reports a number of 0
installed HBAs:
root@nfsvmpool02:/opt/MegaRAID/CLI# ./storcli show
Status Code = 0
Status = Success
Description = None
Number of Controllers = 0
Hi guys,
I do have two zpools of mirrored vdevs, where each vdev is spread over
two LSI 9207-8i HBAs. Is it possible to perform a round-robin FW upgrade
on the LSI HBAs, without rebooting the box? I'd like to keep the zpools
up while I am performing the upgrade and I thought of breaking the
Hi Johan,
On 22.01.15 at 10:39, Johan Kragsterman wrote:
Hi!
-OmniOS-discuss omnios-discuss-boun...@lists.omniti.com wrote: -
To: omnios-discuss@lists.omniti.com
From: Stephan Budach
Sent by: OmniOS-discuss
Date: 2015-01-22 09:50
Subject: [OmniOS-discuss] Rolling FW upgrade on LSI
Hi Carl,
On 22.01.15 at 11:05, Carl Brunning wrote:
Hi
Yes, I've done it using lsiutil
You do the firmware update in that and then do a reset of the port (99)
I will use the sas2flash utility on OmniOS, but it also has the
capability to reset the HBA after the upgrade.
That is, if it is a mirror pool
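The sas2flash flow discussed here can be sketched roughly as follows; the firmware and boot-ROM file names are assumptions, so check LSI's release notes for the exact 9207-8i IT-mode images:

```shell
# List all controllers sas2flash can see, with their current FW versions.
sas2flash -listall

# Flash new IT-mode firmware and boot ROM onto controller 0
# (-o enables advanced mode, required for some flash operations).
sas2flash -o -c 0 -f 9207-8i_IT.bin -b mptsas2.rom
```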
Hi Johan,
On 22.01.15 at 11:19, Johan Kragsterman wrote:
Stephan!
Well, I didn't consider data loss the problem, but rather service downtime; since
you are reluctant to take it down, downtime is probably a problem for you. And
if downtime for a firmware upgrade is a problem for you,
Hi Johan,
On 21.01.15 at 08:33, Johan Kragsterman wrote:
Hi!
-OmniOS-discuss omnios-discuss-boun...@lists.omniti.com wrote: -
To: omnios-discuss@lists.omniti.com
From: Stephan Budach
Sent by: OmniOS-discuss
Date: 2015-01-20 23:21
Subject: [OmniOS-discuss] How to use LSI's storcli
Hi,
so… I settled for the simpler sas2flash option… this works under OmniOS
and basically provides what I need: checking/updating the FW of the HBA(s).
Although the 9207-8i is listed in the readme of the MRM software (and
it actually works if you fire up that big clunky Java app), it is
On 20.01.15 at 16:42, Dan McDonald wrote:
Check the firmware revisions on both mpt_sas controllers. It's possible one
needs up- or down-grading.
There are known-good and known-bad revisions of the mpt_sas firmware. Others on
this list are more cognizant of what those revisions are.
Dan
Hi
So just for everyone to know: that didn't work on my backup host. After
downloading the fw update and resetting the HBA, the communication to
the drives was lost and it couldn't be restored other than through a
host reset.
Seems that the mpt driver didn't like that…
Cheers,
budy
On 06.01.15 at 14:08, Filip Marvan wrote:
Hi Vincenzo,
your solution is much better, so thank you very much for your
notes. I will try that too!
Filip
From: Vincenzo Pii [mailto:p...@zhaw.ch]
Sent: Tuesday, January 06, 2015 1:54 PM
To: Filip Marvan
Cc:
On 08.01.15 at 00:01, Richard Elling wrote:
On Jan 7, 2015, at 1:21 PM, Stephan Budach stephan.bud...@jvm.de wrote:
On 07.01.15 at 21:48, Richard Elling wrote:
On Jan 7, 2015, at 12:11 PM, Stephan Budach stephan.bud...@jvm.de
On 07.01.15 at 21:48, Richard Elling wrote:
On Jan 7, 2015, at 12:11 PM, Stephan Budach stephan.bud...@jvm.de wrote:
On 07.01.15 at 18:00, Richard Elling wrote:
On Jan 7, 2015, at 2:28 AM, Stephan Budach stephan.bud...@jvm.de
Hi,
OmniOS: SunOS nfsvmpool05 5.11 omnios-10b9c79 i86pc i386 i86pc (0.151012)
when trying to run zpool import, the command yields this output:
Assertion failed: rn->rn_nozpool == B_FALSE, file
../common/libzfs_import.c, line 1080, function zpool_open_func
Abort (core dumped)
I don't think
On 09.03.15 at 15:47, Dan McDonald wrote:
On Mar 9, 2015, at 10:23 AM, Eric Sproul eric.spr...@circonus.com wrote:
On Sat, Mar 7, 2015 at 3:56 PM, Brogyányi József bro...@gmail.com wrote:
Has anyone tested this firmware? Is it free from this error message "Parity
Error on path"?
Thanks any
--
Stephan Budach
Managing Director
Jung von Matt/it-services GmbH
Glashüttenstraße 79
20357 Hamburg
Tel: +49 40-4321-1353
Fax: +49 40-4321-1114
E-Mail: stephan.bud...@jvm.de
Internet: http://www.jvm.com
Managing Director
On 13.06.15 at 14:54, Graham Stephens wrote:
Perhaps another dumb question, but here goes...
I currently have an FC disk array attached to a server acting, among
other things, as a Samba file server. I don't need the file serving
all the time, so I mainly start the machine (it is normally off
On 15.08.15 at 04:23, wuffers wrote:
My scrub actually cleared the error, so I don't think it's similar.
So my question remains.. is this block storage compromised or now
marked safe to use?
On Fri, Aug 14, 2015 at 12:21 PM, Michael Rasmussen m...@miras.org wrote:
amount of space in that datastore).
On 16.08.2015 19:11, Stephan Budach wrote:
So, did your first scrub reveal any error at all? Mine didn't, and I
suspect that you issued a zpool clear prior to scrubbing, which made
the errors go away on both of my two
On 19.08.15 at 19:37, Michael Rasmussen wrote:
On Wed, 19 Aug 2015 18:49:05 +0200
Stephan Budach stephan.bud...@jvm.de wrote:
Looking at what we've got here, I don't think that we're actually dealing
with real disk errors, as those should have been reported as read errors, or
maybe
PM, Stephan Budach stephan.bud...@jvm.de wrote:
Hi Joerg,
On 19.08.15 at 14:59, Joerg Goltermann wrote:
Hi,
the PSOD you got can cause the problems in your Exchange database.
Can you check the ESXi logs for the root cause
Today I have experienced the same issue on another OmniOS box, which is
also part of that RAC storage. I had a similar setup running, where I
also had these two RAC nodes connected to one OmniOS R006 box, which
didn't exhibit this error. The only differences being these:
a) OmniOS R006
Hi everyone,
yesterday I was alerted about one of my zpools reporting an
uncorrectable error. When I checked that, I was presented with some sort
of generic error at one of my iSCSI zvols:
root@nfsvmpool08:/root# zpool status -v sataTank
pool: sataTank
state: ONLINE
status: One or more
On 12.08.15 at 17:19, Michael Rasmussen wrote:
On Wed, 12 Aug 2015 16:50:49 +0200
Stephan Budach stephan.bud...@jvm.de wrote:
Ahh… that was too soon… ;) Actually, one of the RAC nodes noticed an
error at, or rather a couple of minutes before, this issue, when it
reported this:
Aug 11 20:25:04
On 10.09.15 at 13:53, Dan McDonald wrote:
On Sep 10, 2015, at 7:53 AM, Dan McDonald wrote:
If you are using a zpool with r151014 and you have an L2ARC ("cache") vdev, I
recommend at this time disabling it. You may disable it by uttering:
This also affects bloody as well.
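The actual command is truncated in the archive; the usual way to disable an L2ARC cache vdev on a live pool is zpool remove, which leaves the data vdevs untouched. A sketch with assumed pool and device names:

```shell
# Identify the cache device under the "cache" heading of the pool layout.
zpool status tank

# Remove the cache vdev; this is safe online and does not touch data vdevs.
zpool remove tank c2t0d0
```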
Hi Dan,
I will apply the upgrade to a couple of my OmniOS boxes today and give
it a go.
Thanks,
Stephan
___
OmniOS-discuss mailing list
OmniOS-discuss@lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss
On 15.09.15 at 03:46, Paul B. Henson wrote:
From: Omen Wild
Sent: Monday, September 14, 2015 3:10 PM
Mostly we are wondering how to clear the corruption off disk, and are worried
about what else might be corrupt, since the scrub turns up no issues.
While looking into possible corruption from the recent
On 15.09.15 at 10:17, Johan Kragsterman wrote:
Hi!
-"OmniOS-discuss" wrote: -
To: "'Dan McDonald'" , "'omnios-discuss'"
From: "Paul B. Henson"
Sent by: "OmniOS-discuss"
Date: 2015-09-15
On 15.09.15 at 12:18, Johan Kragsterman wrote:
Hi!
-"OmniOS-discuss" <omnios-discuss-boun...@lists.omniti.com> wrote: -
To: <omnios-discuss@lists.omniti.com>
From: Stephan Budach
Sent by: "OmniOS-discuss"
Date: 2015-09-15 12:02
Subject: Re: [OmniOS-
On 09.09.15 at 16:30, Dan McDonald wrote:
On Sep 9, 2015, at 10:23 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Is there any option to get rid of dladm without taking down the whole box?
First, see what it's locked on:
pstack `pgrep dladm`
if you know the PID of th
On 23.09.15 at 10:51, Martin Truhlář wrote:
Tests revealed that the problem is somewhere in the disk array itself. Write
performance of a disk connected directly (via iSCSI) to KVM is poor as well;
even write performance measured on OmniOS is very poor. So the loop is
tightening, but there still remains
On 23.09.15 at 18:59, Michael Rasmussen wrote:
On Wed, 23 Sep 2015 17:23:24 +0200
Stephan Budach <stephan.bud...@jvm.de> wrote:
In any case, you will need to get the performance of your zpools straight first,
before even beginning to think about how to tweak the performance over the n
Vendor ID : SUN
Product ID: COMSTAR
Serial Num: not set
Write Protect : Disabled
Writeback Cache : Disabled
Access State : Active
Thanks,
Stephan
On 14.12.15 at 16:35, Dan McDonald wrote:
On Dec 14, 2015, at 10:25 AM, Stephan Budach
Hi guys,
I am trying to configure an FCoE target in OmniOS r016, but I can't
seem to get it right. I started out with the documentation for Solaris
11, which seemed appropriate, and configured an FC target and added a view,
which granted the FCoE port access to that LUN, but the port doesn't seem
further down...
-"OmniOS-discuss" <omnios-discuss-boun...@lists.omniti.com> wrote: -
To: Dan McDonald <dan...@omniti.com>
From: Stephan Budach
Sent by: "OmniOS-discuss"
Date: 2015-12-14 16:48
Cc: omnios-discuss <omnios-discuss@lists.omniti.co
Hi all,
a couple of hours ago one of my OmniOS boxes crashed and rebooted. As
I'd like to determine the reason why that happened, I could use
some advice on how to do that. There is a vmdump.0 available, but I am
lacking the knowledge of what to do with it.
Could anyone fill me in on
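The usual first steps for a vmdump on illumos look roughly like this; the working directory is an assumption (the dump normally lands under /var/crash/<hostname>):

```shell
# Expand the compressed crash dump into unix.0 / vmcore.0.
cd /var/crash/myhost
savecore -f vmdump.0 .

# Open the pair in mdb and pull out the panic summary and stack.
mdb unix.0 vmcore.0 <<'EOF'
::status
::panicinfo
::stack
EOF
```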
A little addendum…
On 20.12.15 at 22:16, Stephan Budach wrote:
Hi all,
a couple of hours ago one of my OmniOS boxes crashed and rebooted. As
I'd like to determine the reason why that happened, I could use
some advice on how to do that. There is a vmdump.0 available, but I am
lacking
On 03.06.16 at 15:42, Fábio Rabelo wrote:
Hi to all
A question:
Is this the board you used?
https://www.supermicro.com/products/motherboard/Xeon/C600/X10DRi-T4_.cfm
If so, this board uses the Intel X540, and this issue occurs only with Intel
X550 chips!
Fábio Rabelo
Yes, this is the board I
Hi Dale,
On 17.05.16 at 20:55, Dale Ghent wrote:
On May 17, 2016, at 8:30 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
I have checked all of my ixgbe interfaces and they all report that flow
control is now in place, as you can see:
root@zfsha01colt:/root# dladm show-linkp
On 27.05.16 at 15:18, Stephan Budach wrote:
Hi,
I just tried to install OmniOS r018 onto a new SuperMicro server and
when the install kernel starts up, I am getting this panic:
cpu1: featureset
WARNING: cpu1 feature mismatch
panic[cpu1/thread=f00f4920c40: unsupported mixed cpu monitor
Hi,
I just tried to install OmniOS r018 onto a new SuperMicro server and
when the install kernel starts up, I am getting this panic:
cpu1: featureset
WARNING: cpu1 feature mismatch
panic[cpu1/thread=f00f4920c40: unsupported mixed cpu monitor/nwait
support
detected
The hosts
On 23.06.16 at 14:10, qutic development wrote:
On 23.06.2016 at 01:06, Josh Barton wrote:
Any ideas why the HP is so much faster? It just has one Smart Array controller,
which I didn't think would be faster than JBOD.
Could you please provide some more information
Hi all,
I am currently thinking of setting up an HA ZFS/NFS filer using RSF-1 for
my Oracle VM infrastructure, and I would like to collect some opinions on
how to do that. Especially, I'd like to know if there are any objections
to setting up ZFS on an iSCSI target which is also set up from
On 16.02.16 at 17:11, Johan Kragsterman wrote:
Hi!
-"OmniOS-discuss" wrote: -
To: omnios-discuss
From: Hafiz Rafiyev
Sent by: "OmniOS-discuss"
Date: 2016-02-16 16:47
Subject: [OmniOS-discuss] zvol snapshot
On 16.02.16 at 18:58, Johan Kragsterman wrote:
Hi!
-"OmniOS-discuss" <omnios-discuss-boun...@lists.omniti.com> wrote: -
To: <omnios-discuss@lists.omniti.com>
From: Stephan Budach
Sent by: "OmniOS-discuss"
Date: 2016-02-16 17:44
Subject: Re: [Om
Hi,
I have been test driving RSF-1 for the last week to accomplish the
following:
- cluster a zpool that is made up of 8 mirrored vdevs, which are
based on 8 x 2 SSD mirrors via iSCSI from another OmniOS box
- export a nfs share from above zpool via a vip
- have RSF-1 provide the
(or bad things will happen).
Just a thought.
Michael
Sent from my iPhone
On Feb 17, 2016, at 10:13 PM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Hi,
I have been test driving RSF-1 for the last week to accomplish the following:
- cluster a zpool, that is made up from 8 mirrored
tried UDP instead?
/dale
On Feb 18, 2016, at 1:13 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Hi,
I have been test driving RSF-1 for the last week to accomplish the following:
- cluster a zpool, that is made up from 8 mirrored vdevs, which are based on 8
x 2 SSD mirrors via iSCS
On 18.02.16 at 09:29, Andrew Gabriel wrote:
On 18/02/2016 06:13, Stephan Budach wrote:
Hi,
I have been test driving RSF-1 for the last week to accomplish the
following:
- cluster a zpool, that is made up from 8 mirrored vdevs, which are
based on 8 x 2 SSD mirrors via iSCSI from another
On 18.02.16 at 12:14, Michael Rasmussen wrote:
On Thu, 18 Feb 2016 07:13:36 +0100
Stephan Budach <stephan.bud...@jvm.de> wrote:
So, when I issue a simple ls -l on the folder of the vdisks, while the
switchover is happening, the command sometimes concludes in 18 to 20 seconds,
but somet
On 18.02.16 at 21:57, Schweiss, Chip wrote:
On Thu, Feb 18, 2016 at 5:14 AM, Michael Rasmussen <m...@miras.org> wrote:
On Thu, 18 Feb 2016 07:13:36 +0100
Stephan Budach <stephan.bud...@jvm.de> wrote:
On Thu, 18 Feb 2016 07:13:36 +0100
Stephan Budach <stephan.bud...@jvm.de> wrote:
>
> So, when I issue a simple ls -l on the folder of the vdisks,
while the switchover is happening, the command sometimes concludes
in 18 to
Hi,
I have set up a rather simple RSF-1 project, where two RSF-1 nodes
connect to two storage heads via iSCSI. I have deployed one network and
two disk heartbeats, and I was trying all sorts of possible failures, when
I noted that one node would panic if I offlined an iSCSI target on one
Hi Dan,
On 07.03.16 at 15:41, Dan McDonald wrote:
On Mar 6, 2016, at 9:44 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
when I noted that one node would panic,
AS A RULE -- if you have an OmniOS box panic, you should save off the corefile
(vmdump.N) and be able to share it with th
Hi Dan,
On 25.04.16 at 16:23, Dan McDonald wrote:
This one is a NULL pointer dereference. If you're still running with kmem_flags
= 0xf, the dump will be especially useful.
Dan
Sent from my iPhone (typos, autocorrect, and all)
On Apr 25, 2016, at 3:27 AM, Stephan Budach <stephan.
On 22.04.16 at 19:28, Dan McDonald wrote:
On Apr 22, 2016, at 1:13 PM, Richard Elling wrote:
If you're running Solaris 11 or pre-2015 OmniOS, then the old write throttle is
impossible
to control and you'll chase your tail trying to balance scrubs/resilvers
Hi,
I have been struck by kernel panics on my OmniOS boxes lately, whenever any
one of the target hosts, where the system gets its LUNs from,
experiences a kernel panic itself. When this happens, my RSF-1 node
immediately panics as well. Looking at the vmdump, it shows this:
On 09.05.16 at 20:43, Dale Ghent wrote:
On May 9, 2016, at 2:04 PM, Stephan Budach <stephan.bud...@jvm.de> wrote:
On 09.05.16 at 16:33, Dale Ghent wrote:
On May 9, 2016, at 8:24 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Hi,
I have a strange behaviour where OmniOS om
On 11.05.16 at 13:36, Stephan Budach wrote:
On 09.05.16 at 20:43, Dale Ghent wrote:
On May 9, 2016, at 2:04 PM, Stephan Budach <stephan.bud...@jvm.de> wrote:
On 09.05.16 at 16:33, Dale Ghent wrote:
On May 9, 2016, at 8:24 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
On 11.05.16 at 14:50, Stephan Budach wrote:
On 11.05.16 at 13:36, Stephan Budach wrote:
On 09.05.16 at 20:43, Dale Ghent wrote:
On May 9, 2016, at 2:04 PM, Stephan Budach <stephan.bud...@jvm.de> wrote:
On 09.05.16 at 16:33, Dale Ghent wrote:
On May 9, 2016, at 8:24 AM, Stephan
On 11.05.16 at 16:48, Dale Ghent wrote:
On May 11, 2016, at 7:36 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
On 09.05.16 at 20:43, Dale Ghent wrote:
On May 9, 2016, at 2:04 PM, Stephan Budach <stephan.bud...@jvm.de> wrote:
On 09.05.16 at 16:33, Dale Ghent wrote:
On May 9
On 11.05.16 at 19:28, Dale Ghent wrote:
On May 11, 2016, at 12:32 PM, Stephan Budach <stephan.bud...@jvm.de> wrote:
I will try to get one node free of all services running on it, as I will have
to reboot the system, since I will have to change ixgbe.conf, won't I?
This is a RSF-
Hi Dan,
I actually ran into this issue when trying to upgrade my 016 to 018:
root@zfsha01colt:/root# pkg update -v --be-name=OmniOS-r151018 entire
Creating Plan (Checking for conflicting actions): /
pkg update: The following packages deliver conflicting action types in
our pkg update.
Dan
Sent from my iPhone (typos, autocorrect, and all)
On Apr 15, 2016, at 12:57 PM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Hi Dan,
I actually ran into this issue when trying to upgrade my 016 to 018:
root@zfsha01colt:/root# pkg update -v --be-name=OmniOS-r151018 ent
Hi all,
I am running a scrub on an SSD-only zpool on r018. This zpool consists of
16 iSCSI targets, which are served from two other OmniOS boxes,
currently still running r016, over 10GbE connections.
This zpool serves as an NFS share for my Oracle VM cluster and it
delivers reasonable
On 17.04.16 at 20:42, Dale Ghent wrote:
On Apr 17, 2016, at 9:07 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Well… searching the net somewhat more thoroughly, I came across an archived
discussion which also deals with a similar issue. Somewhere down the
conversation, this par
Hi,
I have experienced this issue a couple of times now, first on r016 but
just today on r018, too. When trying to remove a LUN by issuing
something like
root@nfsvmpool05:/root# stmfadm delete-lu 600144F04E4653564D504F4F4C303538
packet_write_wait: Connection to 10.11.14.49: Broken pipe
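For reference, the usual COMSTAR sequence takes the logical unit offline and drops its view entries before deleting it; a sketch using the GUID from the command above:

```shell
# Take the logical unit offline so initiators stop using it.
stmfadm offline-lu 600144F04E4653564D504F4F4C303538

# Remove all of its view entries, then delete the LU itself.
stmfadm remove-view -a -l 600144F04E4653564D504F4F4C303538
stmfadm delete-lu 600144F04E4653564D504F4F4C303538
```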
djust zfs_vdev_scrub_max_active."
I had to increase my zfs_vdev_scrub_max_active parameter to higher than
5, but it sounds like the default setting for that tunable is no
longer satisfactory for today's high-performance systems.
On Sun, Apr 17, 2016 at 4:07 PM, Stephan Budach <stephan.
Hi,
I have a strange behaviour where OmniOS omnios-r151018-ae3141d will
break the LACP aggr link on different boxes when Intel X540-T2s are
involved. It first starts with a couple of link downs/ups on one port,
and finally the link on that port negotiates to 1GbE instead of 10GbE,
which
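When chasing an aggregation problem like this, illumos exposes the per-port LACP state via dladm; a minimal sketch (the aggregation name aggr0 and the ixgbe port names are assumptions):

```shell
# Show the aggregation with the state of each member port.
dladm show-aggr -x aggr0

# Show LACP mode and timer information for the aggregation.
dladm show-aggr -L aggr0

# Check the negotiated speed/duplex on the physical links.
dladm show-phys ixgbe0 ixgbe1
```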
On 09.05.16 at 16:33, Dale Ghent wrote:
On May 9, 2016, at 8:24 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Hi,
I have a strange behaviour where OmniOS omnios-r151018-ae3141d will break the
LACP aggr link on different boxes when Intel X540-T2s are involved. It first
Awesome! I will try that out asap.
Cheers,
Stephan
Hi Dan,
I'd love to be able to actually spend any time on this, but my workload
doesn't allow it… I hope to get into this at the end of September.
Don't give up on it, please… ;)
Cheers,
Stephan
On 09.08.16 at 10:05, Peter Tribble wrote:
Dan,
I've not heard from anyone, so I'm going
Hi all,
I am having trouble chasing down some network- or drive-related errors on
one of my OmniOS r018 boxes. It started with me noticing these errors in
the syslog on one of my RSF-1 nodes. These are just a few, but I found
almost every drive/LUN of that target node mentioned in the syslog on
Hi,
I just tried updating one of my 018 nodes to 020 and got this:
DOWNLOAD                 PKGS        FILES      XFER (MB)   SPEED
Completed             397/397  13177/13177  289.9/289.9   1.4M/s
PHASE                                        ITEMS
Hi Fábio,
On 26.01.17 at 12:22, Fábio Rabelo wrote:
sorry, I forgot to change the address to the whole list before sending ...
-- Forwarded message --
From: Fábio Rabelo
Date: 2017-01-26 9:21 GMT-02:00
Subject: Re: [OmniOS-discuss] Install on Supermicro DOM=low
On 31.01.17 at 00:15, Richard Elling wrote:
On Jan 29, 2017, at 3:10 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Hi,
just to wrap this up… I decided to go with 15 additional LUNs on each storage
zpool, to avoid zfs complaining about replication mismatches. I know, I could
hav
Hi Richard,
On 26.01.17 at 20:18, Richard Elling wrote:
On Jan 26, 2017, at 12:20 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Hi Richard,
gotcha… read on, below…
"thin provisioning" bit you. For "thick provisioning
Hi Richard,
On 25.01.17 at 20:27, Richard Elling wrote:
Hi Stephan,
On Jan 25, 2017, at 5:54 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Hi guys,
I have been trying to import a zpool, based on a 3-way mirror provided
by three OmniOS box
Ooops… I should have waited to send that message until after I rebooted the
S11.1 host…
On 25.01.17 at 23:41, Stephan Budach wrote:
Hi Richard,
On 25.01.17 at 20:27, Richard Elling wrote:
Hi Stephan,
On Jan 25, 2017, at 5:54 AM, Stephan Budach <stephan.bud...@jvm.de>
Hi Richard,
gotcha… read on, below…
On 26.01.17 at 00:43, Richard Elling wrote:
more below…
On Jan 25, 2017, at 3:01 PM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Ooops… I should have waited to send that message until after I rebooted
the S11.1
e…
I forgot that I had some older JRE installed, due to some application
which doesn't run with newer ones. I undid that and the update to 020
went just fine.
Sorry for the noise…
Stephan
Just for sanity… these are a couple of errors fmdump outputs using -eV
root@solaris11atest2:~# fmdump -eV
TIME CLASS
Jan 25 2017 10:10:45.011761190 ereport.io.pciex.rc.tmp
nvlist version: 0
class = ereport.io.pciex.rc.tmp
ena = 0xff37bc9a861
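When wading through output like this, fmdump's time filters and fault-event view help narrow things down; a small sketch (the date argument is an assumption matching the timestamps above):

```shell
# Show only error-telemetry events since a given date, verbosely.
fmdump -eV -t 25Jan17

# Summarize the fault events the FMA diagnosis engines actually produced.
fmdump -v
```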
hat system is still
> imprinted in the vdev labels for the zpool (not the hostid of the system you
> are trying to import on)
>
> Have you tried 'zpool import -f vsmPool10' ?
>
> /dale
>
>> On Jan 25, 2017, at 12:14 PM, Stephan Budach <stephan.bud...@jvm.de> wrote:
>
Hi,
just to wrap this up… I decided to go with 15 additional LUNs on each
storage zpool, to avoid zfs complaining about replication mismatches. I
know I could have done otherwise, but it somehow felt better this way.
After all three underlying zpools were "pimped", I was able to mount the
Hi Andre,
well, I wouldn't call it dead… there is a way to accomplish what you want, and
I had already performed such an action before. However, I cannot recall all the
steps necessary. Looking at the link Dan provided, the steps lined out in the
document are still valid, and I don't think
- Original Message -
From: "Adam Feigin"
To: omnios-discuss@lists.omniti.com
Sent: Thursday, 16 February 2017 09:47:50
Subject: [OmniOS-discuss] SMB and Netatalk
On 15/02/17 18:16, omnios-discuss-requ...@lists.omniti.com wrote:
> From: Fábio Rabelo
On 17.01.17 at 23:09, Dale Ghent wrote:
On Jan 17, 2017, at 2:39 PM, Stephan Budach <stephan.bud...@jvm.de> wrote:
On 17.01.17 at 17:37, Dale Ghent wrote:
On Jan 17, 2017, at 11:31 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Hi Dale,
On 17.01.17 at 17:22,
On 18.01.17 at 17:38, Stephan Budach wrote:
On 18.01.17 at 17:32, Dan McDonald wrote:
Generally the X540 has had a good track record. I brought up the support for
this a long time ago, and it worked alright then. I think Dale has an X540
in-house which works fine too (he should confirm
Hi Dale,
On 17.01.17 at 17:22, Dale Ghent wrote:
On Jan 17, 2017, at 11:12 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Hi guys,
I am sorry, but I have to dig up this old topic, since I now have three
hosts running omniOS 018/020, which show these pesky issues with fl
Hi guys,
I am sorry, but I have to dig up this old topic, since I now have
three hosts running omniOS 018/020, which show these pesky issues with
flapping their ixgbeN links on my Nexus FEXes…
Does anyone know if any change has been made to the ixgbe drivers
since 06/2016?
On 17.01.17 at 17:37, Dale Ghent wrote:
On Jan 17, 2017, at 11:31 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Hi Dale,
On 17.01.17 at 17:22, Dale Ghent wrote:
On Jan 17, 2017, at 11:12 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
Hi guys,
I am sorry, but I do h
On 18.01.17 at 09:01, Dale Ghent wrote:
On Jan 18, 2017, at 2:38 AM, Stephan Budach <stephan.bud...@jvm.de> wrote:
On 17.01.17 at 23:09, Dale Ghent wrote:
On Jan 17, 2017, at 2:39 PM, Stephan Budach <stephan.bud...@jvm.de> wrote:
On 17.01.17 at 17:37, Dale Ghent wrote:
On