On 18.01.17 at 17:32, Dan McDonald wrote:
Generally the X540 has had a good track record. I brought up the support for
this a long time ago, and it worked alright then. I think Dale has an X540
in-house which works fine too (he should confirm this).
Some other things to check:
* Is your BIOS set to map the PCI-E space into the …
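Whether the card was enumerated and bound at all, regardless of BIOS mapping,
can be verified from the running system; a minimal sketch (the grep pattern is
only an example):

  # Was the X540 seen on the bus and attached to the ixgbe driver?
  prtconf -D | grep -i ixgbe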
On Jan 19, 2017, at 4:27 AM, Stephan Budach wrote:
Thanks for clarifying that. I just checked the cables and they classify
as Cat6a and they are from a respectable German vendor, not that this
would be any guarantee, but at least they're no bulkware from China. ;)
The X540s are either onboard on some Supermicro X10 boards, or on a
genuine …
On Tue, 17 Jan 2017 20:39:49 +0100, Stephan Budach wrote:
> The cables are actually specifically purchased cat6 cables. They run about
> 2m, not more. It could be the cables, but I am running a couple of those and
> afaik, I only get these issues on these three nodes. …
I'd check your switch, though you're using 10GigBaseT, which shouldn't be as
big of a problem. Hmmm, using cat6 or better cables? 5e isn't going to cut it
for reliable 10Gig service.
Dan
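The negotiated speed and duplex can also be confirmed from the host side; a
minimal sketch (the interface name is an example):

  # Current state, speed, and duplex of the suspect port:
  dladm show-phys ixgbe0
  # Link up/down transitions are normally logged by the driver:
  grep ixgbe /var/adm/messages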
On Jan 17, 2017, at 11:12 AM, Stephan Budach wrote:
Hi guys,
I am sorry, but I have to dig up this old topic, since I now have three
hosts running omniOS 018/020 which show these pesky issues with flapping
their ixgbeN links on my Nexus FEXes…
Does anyone know if any changes have been made to the ixgbe driver since
06/2016?
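The driver revision actually loaded can be read off the running kernel; a
minimal sketch:

  # Show the loaded ixgbe module and its version string:
  modinfo | grep -i ixgbe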
On 03.06.16 at 15:42, Fábio Rabelo wrote:
> Hi to all,
> A question: is this the board you used?
> https://www.supermicro.com/products/motherboard/Xeon/C600/X10DRi-T4_.cfm
> If so, this board uses an Intel X540, and this issue only affects Intel
> X550 chips!
> Fábio Rabelo
Yes, this is the board I …
On May 17, 2016, at 8:30 AM, Stephan Budach wrote:
> I have checked all of my ixgbe interfaces and they all report that flow
> control is now in place, as you can see:
>
> root@zfsha01colt:/root# dladm show-linkprop -p flowctrl ixgbe0
> LINK     PROPERTY    PERM …
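For reference, the flowctrl link property can be both read and changed through
dladm without touching a config file; a minimal sketch (interface name and
value are examples):

  # Inspect the current flow-control setting:
  dladm show-linkprop -p flowctrl ixgbe0
  # Set bidirectional flow control (possible values: no, tx, rx, bi):
  dladm set-linkprop -p flowctrl=bi ixgbe0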
On May 11, 2016, at 12:32 PM, Stephan Budach wrote:
> I will try to get one node free of all services running on it, as I will have
> to reboot the system, since I will have to change ixgbe.conf, won't I? This
> is a RSF-1 host, so this will likely be done over …
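The ixgbe.conf change under discussion would look roughly like this; a minimal
sketch, assuming the flow_control property of the stock illumos
/kernel/drv/ixgbe.conf (it only takes effect on the next driver attach, hence
the reboot):

  # /kernel/drv/ixgbe.conf (excerpt; property name and values assumed)
  # flow_control: 0 - disable, 1 - receive only,
  #               2 - transmit only, 3 - receive and transmit
  flow_control = 3;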
On May 9, 2016, at 8:24 AM, Stephan Budach wrote:
Hi,
I have a strange behaviour where OmniOS omnios-r151018-ae3141d will
break the LACP aggr-link on different boxes when Intel X540-T2s are
involved. It first starts with a couple of link downs/ups on one port,
and finally the link on that port negotiates to 1GbE instead of 10GbE,
which …
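The aggregation and the LACP state of each member port can be inspected with
dladm; a minimal sketch (the aggregation name aggr0 is an example):

  # Per-port state, speed, and duplex inside the aggregation:
  dladm show-aggr -x aggr0
  # LACP activity, timer, and per-port sync/partner flags:
  dladm show-aggr -L aggr0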