Re: UPS, Network UPS Tools and UPD(4)
On 2016-09-12, Lawrence Wieser wrote:
> I have a CyberPower UPS that my OpenBSD 5.8 system sees just fine at uhidev0
> on upd0. But the `usbhid-ups` driver for NUT is unable to talk to it.
>
> There are a handful of older comments in the lists that offer a couple of
> alternatives. One involved disabling the upd driver and messing with usb
> quirks. The other involved a revised NUT driver that talked directly to upd.
> If there's a way for NUT to talk directly to UPD I haven't found it.
> What's the current preferred approach? Or am I better off with a serial
> cable?
>
> Thanks for any insight

Did you follow the instructions in the pkg-readme file that pkg_add pointed
you at after it installed the package?
Re: UPS, Network UPS Tools and UPD(4)
On Sun, 11 Sep 2016 21:35:46 -0400, Lawrence Wieser wrote:
> I have a CyberPower UPS that my OpenBSD 5.8 system sees just fine at uhidev0
> on upd0. But the `usbhid-ups` driver for NUT is unable to talk to it.

I'm successfully using a CyberPower CP1000PFCLCD with NUT and have no
problems with the `usbhid-ups` driver. Did you set the group to _ups on
/dev/usb0 (or whichever USB bus the upd attached to)? If not, that would
explain the problem. E.g.

    % ls -l /dev/usb0
    crw-rw----  1 root  _ups  61, 0 Jun 30 09:33 /dev/usb0

 - todd
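The permission change Todd describes could be applied like this (a minimal sketch, assuming the UPS sits on bus 0 and that the NUT pkg-readme doesn't already prescribe a different mechanism; check dmesg for the actual bus number):

```shell
# Sketch: give NUT's _ups user group access to the USB bus the upd(4)
# device attached to.  /dev/usb0 is an assumption -- substitute the bus
# shown in your dmesg.  OpenBSD's /dev is static, so this persists.
chgrp _ups /dev/usb0
chmod 660 /dev/usb0
ls -l /dev/usb0    # should now show group _ups with rw access
```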
UPS, Network UPS Tools and UPD(4)
I have a CyberPower UPS that my OpenBSD 5.8 system sees just fine at uhidev0
on upd0. But the `usbhid-ups` driver for NUT is unable to talk to it.

There are a handful of older comments in the lists that offer a couple of
alternatives. One involved disabling the upd driver and messing with usb
quirks. The other involved a revised NUT driver that talked directly to upd.
If there's a way for NUT to talk directly to UPD I haven't found it.
What's the current preferred approach? Or am I better off with a serial
cable?

Thanks for any insight
Re: Routing 10-40 Mpps on OpenBSD
K K [kk...@outlook.com] wrote:
> I thought Intel, but I speak out of impressions, not backed by any facts.

David Gwynne, who is working on the Myricom driver, recommends the Intel
card, if that helps.

> What is the take of OpenBSD developers on this?
> Are there any plans?

There's a lot of work going into multi-threading the stack right now. If
you read www.openbsd.org/papers/ and undeadly.org you can keep up with
some of the documented progress. I don't think anyone is using OpenBSD at
10Mpps on a box today, or if they are, that's the upper limit in a
configuration without pf. I'd say that 10Mpps - 40Mpps is a bit past the
"typical small ISP".

> Many options seem available, but I have no idea how they could be
> integrated in OpenBSD. I know clearly nothing of proper software
> development.
>
> - DPDK (now BSD licensed)
> - NETMAP/FW

The general consensus goes against these types of tools at the moment. I
think people want to get the network stack right, first, before making it
hot-pluggable...

Chris
panic: aml_die on 6.0/amd64 (Intel N3050)
Just did a fresh install of 6.0/amd64 on my HP 250 G4 laptop with Celeron
N3050 CPU. 5.9 was working, but 6.0 panics on the first boot immediately
after installing base sets.

I took pictures with cellphone digital camera; it's the only one I have.
The first images are cut off a little, so I took them again at the very
end. In order, there is:

- panic
- dmesg
- trace
- ps
- machine acpi tree; didn't know there would be so many screens!
- (and again everything before the acpi tree)

Maybe I screwed up, but "machine ddbcpu 0" says: Invalid cpu 0, and
"machine ddbcpu 1" just hangs the system.

Here's a link to a tarball with the 288 pictures. It's named
HP250G4_N3050.tar.gz with size 196,539,398 bytes.

https://www.sendspace.com/file/yltbuc
Re: Routing 10-40 Mpps on OpenBSD
> I think Intel and Myricom are going to be the best-supported 10GbE on
> OpenBSD at the moment.

I thought Intel, but I speak out of impressions, not backed by any facts.

> The best performance today will be with a processor that packs a lot
> of punch into a smaller number of cores. I'm using Xeon E5-1630 v3
> right now. The E5-2xxx series tend to have more cores at lower clock
> speeds. They make more sense on a regular server.

Also came to this conclusion when I picked the E5-2697v2.

> There is a lot of ongoing work in this area, OpenBSD doesn't claim to
> be the performance leader today.

What is the take of OpenBSD developers on this? Are there any plans?

Many options seem available, but I have no idea how they could be
integrated in OpenBSD. I know clearly nothing of proper software
development.

- DPDK (now BSD licensed)
- NETMAP/FW

> Chris

Thank you for your insights.
Re: Routing 10-40 Mpps on OpenBSD
On 11.9.2016. 19:17, K wrote:
> All,
>
> This message is a call for people who are interested to benchmark
> commodity hardware with the goal of pushing as much PPS as possible
> through OpenBSD. The initial target is to reach 10 Mpps at 64 bytes (or
> more precisely 84 bytes with interpacket gap) and if the experiment
> proves to be successful, we would then aim at 40+ Mpps.
>
> The ultimate goal of this experiment is to build and share with the
> community a recognized hardware configuration that provides a good
> ground for real-world traffic at a typical small ISP.
>
> We couldn't find such information online. In our case, the final setup
> would be two routers, each with two 10 Gbps uplinks to upstream Internet
> providers and an OSPF and iBGP connection between them. The software
> stack would be based on OpenBSD, OpenBGPD and OpenOSPFD. There is no
> commercial idea around the findings of this experiment.
>
> While our budget is limited and privately funded (by individuals), we
> are open to hearing what hardware specifications people on this list
> would be interested to see. At the moment, we aim for this:
>
> CPUs: Intel Xeon CPU E5-2697v2, E5-2667v2, E5-2680v3, E5-2640v3
> Intel NICs: Intel 82599ES, X520, X540-{T1/T2/AT2}, 85595, 82598,
> AF/82598, AT/82598, EB/82599, EB/82599 EN
> Chelsio NICs: Chelsio T540-CR (although not sure there is an OpenBSD
> driver)
>
> If you consider other hardware options, please feel free to reply and
> let us know. We surely will not be testing all these configurations; we
> will most likely pick one CPU from the list and 2-3 NICs from the list
> as well. This experiment might also be taken to FreeBSD for comparison.
> If necessary, we consider sending this configuration to a test center
> with Spirent hardware to validate this.
>
> Feedback, questions, remarks, doubts, irony, are all welcome :-)
>
> Cheers.
Hi, if you are an optimist like me, buy a 2-socket box with Intel 82599
cards and with more than 200MB of RAM, which is enough for one full BGP
feed :) At first I would buy one 8-core CPU with as high a clock speed as
I can get, and when, and this is the optimistic part :), OpenBSD gets the
multiqueue ix stuff and RSS on top of it, I would buy a second 8-core CPU,
because it seems the 82599 has 16 RSS queues. For now you can get at most
1Mpps with only plain routing, without any pseudo interfaces or pf.
Re: Routing 10-40 Mpps on OpenBSD
On 09/11/16 19:46, K K wrote:
> // Previous email bounced, so I resend it. Sorry for duplicate //

Just curious, if you look at the bounce, would that be a DMARC-worshipper
failing to understand mailing list mail? I'm researching what will likely
be a longish, fact-based rant on the subject.

- P

--
Peter N. M. Hansteen, member of the first RFC 1149 implementation team
http://bsdly.blogspot.com/ http://www.bsdly.net/ http://www.nuug.no/
"Remember to set the evil bit on all malicious network traffic"
delilah spamd[29949]: 85.152.224.147: disconnected after 42673 seconds.
Re: Routing 10-40 Mpps on OpenBSD
K [k...@protonmail.com] wrote:
> All,
>
> This message is a call for people who are interested to benchmark
> commodity hardware with the goal of pushing as much PPS as possible
> through OpenBSD. The initial target is to reach 10 Mpps at 64 bytes (or
> more precisely 84 bytes with interpacket gap) and if the experiment
> proves to be successful, we would then aim at 40+ Mpps.
>
> The ultimate goal of this experiment is to build and share with the
> community a recognized hardware configuration that provides a good
> ground for real-world traffic at a typical small ISP.
>
> We couldn't find such information online. In our case, the final setup
> would be two routers, each with two 10 Gbps uplinks to upstream Internet
> providers and an OSPF and iBGP connection between them. The software
> stack would be based on OpenBSD, OpenBGPD and OpenOSPFD. There is no
> commercial idea around the findings of this experiment.
>
> While our budget is limited and privately funded (by individuals), we
> are open to hearing what hardware specifications people on this list
> would be interested to see. At the moment, we aim for this:
>
> CPUs: Intel Xeon CPU E5-2697v2, E5-2667v2, E5-2680v3, E5-2640v3
> Intel NICs: Intel 82599ES, X520, X540-{T1/T2/AT2}, 85595, 82598,
> AF/82598, AT/82598, EB/82599, EB/82599 EN
> Chelsio NICs: Chelsio T540-CR (although not sure there is an OpenBSD
> driver)

I think Intel and Myricom are going to be the best-supported 10GbE on
OpenBSD at the moment.

The best performance today will be with a processor that packs a lot of
punch into a smaller number of cores. I'm using a Xeon E5-1630 v3 right
now. The E5-2xxx series tend to have more cores at lower clock speeds.
They make more sense on a regular server.

There is a lot of ongoing work in this area; OpenBSD doesn't claim to be
the performance leader today.

Chris
Routing 10-40 Mpps on OpenBSD
// Previous email bounced, so I resend it. Sorry for duplicate //

All,

This message is a call for people who are interested to benchmark
commodity hardware with the goal of pushing as much PPS as possible
through OpenBSD. The initial target is to reach 10 Mpps at 64 bytes (or
more precisely 84 bytes with interpacket gap) and if the experiment proves
to be successful, we would then aim at 40+ Mpps.

The ultimate goal of this experiment is to build and share with the
community a recognized hardware configuration that provides a good ground
for real-world traffic at a typical small ISP.

We couldn't find such information online. In our case, the final setup
would be two routers, each with two 10 Gbps uplinks to upstream Internet
providers and an OSPF and iBGP connection between them. The software stack
would be based on OpenBSD, OpenBGPD and OpenOSPFD. There is no commercial
idea around the findings of this experiment.

While our budget is limited and privately funded (by individuals), we are
open to hearing what hardware specifications people on this list would be
interested to see. At the moment, we aim for this:

CPUs: Intel Xeon CPU E5-2697v2, E5-2667v2, E5-2680v3, E5-2640v3
Intel NICs: Intel 82599ES, X520, X540-{T1/T2/AT2}, 85595, 82598,
AF/82598, AT/82598, EB/82599, EB/82599 EN
Chelsio NICs: Chelsio T540-CR (although not sure there is an OpenBSD driver)

If you consider other hardware options, please feel free to reply and let
us know. We surely will not be testing all these configurations; we will
most likely pick one CPU from the list and 2-3 NICs from the list as well.
This experiment might also be taken to FreeBSD for comparison. If
necessary, we consider sending this configuration to a test center with
Spirent hardware to validate this.

Feedback, questions, remarks, doubts, irony, are all welcome :-)

Cheers.
Routing 10-40 Mpps on OpenBSD
All,

This message is a call for people who are interested to benchmark
commodity hardware with the goal of pushing as much PPS as possible
through OpenBSD. The initial target is to reach 10 Mpps at 64 bytes (or
more precisely 84 bytes with interpacket gap) and if the experiment proves
to be successful, we would then aim at 40+ Mpps.

The ultimate goal of this experiment is to build and share with the
community a recognized hardware configuration that provides a good ground
for real-world traffic at a typical small ISP.

We couldn't find such information online. In our case, the final setup
would be two routers, each with two 10 Gbps uplinks to upstream Internet
providers and an OSPF and iBGP connection between them. The software stack
would be based on OpenBSD, OpenBGPD and OpenOSPFD. There is no commercial
idea around the findings of this experiment.

While our budget is limited and privately funded (by individuals), we are
open to hearing what hardware specifications people on this list would be
interested to see. At the moment, we aim for this:

CPUs: Intel Xeon CPU E5-2697v2, E5-2667v2, E5-2680v3, E5-2640v3
Intel NICs: Intel 82599ES, X520, X540-{T1/T2/AT2}, 85595, 82598,
AF/82598, AT/82598, EB/82599, EB/82599 EN
Chelsio NICs: Chelsio T540-CR (although not sure there is an OpenBSD driver)

If you consider other hardware options, please feel free to reply and let
us know. We surely will not be testing all these configurations; we will
most likely pick one CPU from the list and 2-3 NICs from the list as well.
This experiment might also be taken to FreeBSD for comparison. If
necessary, we consider sending this configuration to a test center with
Spirent hardware to validate this.

Feedback, questions, remarks, doubts, irony, are all welcome :-)

Cheers.
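For scale: the 84-byte figure is the 64-byte minimum Ethernet frame (FCS included) plus the 7-byte preamble, 1-byte start-of-frame delimiter and 12-byte inter-frame gap, i.e. 672 bits on the wire per packet. A quick sanity check of what one 10 Gbps port can carry at that size:

```shell
# Minimum-size frame on the wire: 64 + 7 + 1 + 12 = 84 bytes = 672 bits.
# Line rate of a single 10 Gbps port at minimum frame size:
echo "$((10000000000 / 672)) pps"    # ~14.88 Mpps per 10GbE port
```

So the 10 Mpps target is roughly two-thirds of one port's line rate, and 40 Mpps necessarily spans several 10GbE ports.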
Re: can't find fstab entry ?
On Saturday 10 Sep 2016 13:54:50 Theo de Raadt wrote:
> Summary: The OP has a learning disability. He should probably stay in
> Linux land, where the field is large, and his inability can remain
> hidden. See, once again I am not insulting Linux.

You sell OpenBSD short somewhat. I've vast amounts of inability but I get
on with OpenBSD just fine. But then I take time to read OpenBSD's
excellent documentation - FAQs and man pages, etc.

Gratefully
Tim H
OpenBSD as primary OS
Hi,

I'm moving to OpenBSD for primary use, though I'll have to keep a Windows
OS for some specific purposes as well. Thanks for the development of
OpenBSD: it's very easy to use, since it is logical and well documented,
and I've been enjoying it for the past years for what it was designed to
do. I've also looked at the softraid development. Just a few words to
thank you for the development of the OS and software.

Jeff
Re: nat for ipv6 (RFC4193)
On 09.09.2016 at 20:16, Stuart Henderson wrote:
> On 2016/09/09 18:01, Holger Glaess wrote:
>> On 2016-09-09, Holger Glaess wrote:
>>> inet6 2001:4dd0:af15:483d:20d:48ff:fe26:7a1f -> prefixlen 64 autoconf
>>> pltime 559190 vltime 2546390
>>> inet6 2001:4dd0:af15:cbd9:20d:48ff:fe26:7a1f -> prefixlen 64 autoconf
>>> pltime 604767 vltime 2591967
>
> That's fun, you have autoconfigured addresses from two separate
> prefixes. If the ISP are going to move you around between prefixes, they
> should probably lower pltime/vltime.
>
>> If I do a
>>
>>   pass out on $pppoe_if inet6 from { fe80::/64 , fde0::/64 , fd00::/64 } to any nat-to ($pppoe_if)
>>
>> it uses the :7a1f IP as the NAT address, which does not work.
>
> If it doesn't work, it shouldn't be on the interface..
>
>> With
>>
>>   pass out on $pppoe_if inet6 from { fe80::/64 , fde0::/64 , fd00::/64 } to any nat-to ($pppoe_if:0)
>>
>> it uses the link-local address for NAT, and that fails.
>
> I think that's incorrect behaviour. But fixing it wouldn't necessarily
> solve your problem; any standard addresses (not link-local, etc)
> configured on the interface are meant to be equally valid.
>
> You shouldn't need to nat though - the expected setup for an ISP is for
> them to run DHCPv6 prefix delegation, which would allow them to hand
> over one or more prefixes for you to use on internal networks (a client
> like dhcpcd can configure them for you, and rtadvd will pick up the
> prefixes automatically).
>
>> That's true, but how can I do this with rdomains? In my home setup I
>> have the DSL provider, and as a second line a cable provider, both in
>> separate rdomains. How can I tell rtadvd to listen in one rdomain (this
>> I know) and then advertise into another rdomain? In this case I use
>> private IPv6 addresses in my rdomain 0.
>
> Ah - that wasn't in the original description :) I think that is probably
> not possible to do automatically with the current code. Maybe you could
> parse the address list from ifconfig and update rtadvd's configuration
> from a script and restart it (in that case you will also need to make
> sure you keep pltime/vltime low so that clients are able to change
> network when needed) ...
>
> In general, this is an area that IPv6 copes with poorly. I think that
> the specs expect this to be done either by advertising multiple routable
> v6 prefixes on the inside network (which means that end hosts make
> routing decisions; not very helpful in a controlled environment), or by
> advertising your own prefix with BGP etc.

hi

OK, then the question is why the below is working. The nat rule:

  pass out on $pppoe_if inet6 from { fe80::/64 , fde0::/64 , fd00::/64 } to any nat-to 2001:4dd0:af15:cbd9:74c2:814d:9f0e:7809

  pass out on pppoe0 inet6 from fe80::/64 to any flags S/SA nat-to 2001:4dd0:af15:cbd9:74c2:814d:9f0e:7809
    [ Evaluations: 37  Packets: 0    Bytes: 0      States: 0 ]
    [ Inserted: uid 0 pid 18381 State Creations: 0 ]
  pass out on pppoe0 inet6 from fde0::/64 to any flags S/SA nat-to 2001:4dd0:af15:cbd9:74c2:814d:9f0e:7809
    [ Evaluations: 37  Packets: 107  Bytes: 41932  States: 0 ]
    [ Inserted: uid 0 pid 18381 State Creations: 17 ]
  pass out on pppoe0 inet6 from fd00::/64 to any flags S/SA nat-to 2001:4dd0:af15:cbd9:74c2:814d:9f0e:7809
    [ Evaluations: 37  Packets: 262  Bytes: 114510 States: 0 ]
    [ Inserted: uid 0 pid 18381 State Creations: 18 ]

  # ifconfig pppoe0
  pppoe0: flags=208851 rdomain 4 mtu 1500
          priority: 0
          dev: em3 state: session
          sid: 0x1c PADI retries: 29 PADR retries: 0 time: 2d 05:36:23
          sppp: phase network authproto pap authname "yy-xx...@netcologne.de"
          groups: pppoe
          status: active
          inet6 fe80::20d:48ff:fe26:7a1f%pppoe0 -> prefixlen 64 scopeid 0x15
          inet6 2001:4dd0:af15:483d:20d:48ff:fe26:7a1f -> prefixlen 64 autoconf pltime 411780 vltime 2398980
          inet 84.44.211.173 --> 195.14.226.22 netmask 0x
          inet6 2001:4dd0:af15:cbd9:20d:48ff:fe26:7a1f -> prefixlen 64 autoconf pltime 604776 vltime 2591976
          inet6 2001:4dd0:af15:cbd9:81c:e228:d3d:8b8a -> prefixlen 64 deprecated autoconf autoconfprivacy pltime 0 vltime 498227
          inet6 2001:4dd0:af15:cbd9:98a3:a5b0:eb7b:9fa2 -> prefixlen 64 autoconf autoconfprivacy pltime 65896 vltime 584615

That's all the lines at the moment.

holger
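Stuart's ifconfig-parsing suggestion could look something like the sketch below. The interface names, the rtadvd.conf capabilities and running this from cron or a ppp hook are all assumptions, not a tested setup; see rtadvd.conf(5) before using anything like it.

```shell
#!/bin/sh
# Sketch: derive the currently delegated /64 from the external interface's
# autoconf address and rewrite rtadvd's advertised prefix.  Names and the
# config format are assumptions -- check rtadvd.conf(5).

# Print the /64 prefix (first four groups) of the first global autoconf
# address found in `ifconfig` output read from stdin.  /inet6 2/ crudely
# matches global addresses starting with "2".
prefix_of() {
    awk '/inet6 2/ && /autoconf/ {
        split($2, p, ":")
        printf "%s:%s:%s:%s::", p[1], p[2], p[3], p[4]
        exit
    }'
}

# Intended use (left as comments, since it touches the live system):
#   prefix=$(ifconfig pppoe0 | prefix_of)
#   [ -n "$prefix" ] || exit 1
#   printf 'em0:\\\n\t:addr="%s":prefixlen#64:\n' "$prefix" > /etc/rtadvd.conf
#   rcctl restart rtadvd
```

As noted above, pltime/vltime on the advertised prefix would need to stay low so clients can renumber when the ISP changes the delegated prefix.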