It would be a bit of work, but I'd consider looking at how the host clock
is exposed by vmt(4) and whether vmm(4) and vmmci(4) could/should be
extended in the same way. If so, you could use Ted Unangst's solution to a
similar problem with VMware guests.
http://www.tedunangst.com/flak/post/vmtimed
On 2017-02-08, Michael W. Lucas wrote:
> Hi,
>
> I'm collecting relayd check scripts for the httpd/relayd book.
>
> If you have a check script that you don't mind sharing, please send it
> to me.
>
> Regards,
> ==ml
>
>
There are lots of "nagios" scripts available
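For reference, a minimal external check in the style of relayd's "check script" option could look like the sketch below. The probe, the port, and the assumption that relayd passes the host address as the first argument are mine; adjust to taste and verify against relayd.conf(5):

```shell
#!/bin/sh
# Hypothetical relayd "check script": exit 0 means the host is up,
# anything else means down.  Assumes the host address arrives as $1.

# check_status: succeed if an HTTP status line reports 200
check_status() {
	case "$1" in
	*" 200 "*) return 0 ;;
	*)         return 1 ;;
	esac
}

if [ -n "$1" ]; then
	host="$1"
	port="${2:-80}"
	# Ask the backend for its front-page headers and inspect the status line.
	line=$(printf 'HEAD / HTTP/1.0\r\n\r\n' | nc -w 5 "$host" "$port" | head -1)
	check_status "$line"
	exit $?
fi
```

relayd treats a zero exit status as "host up" and anything else as "down".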
On 2017-02-09, Eric Brown wrote:
> Dear List,
>
> I've recently learned (and discovered) that time in VM's is tricky
> business. I'm looking for the least stupid way to keep any semblance of
> time in vmd instances while I hungrily await a "correct solution" to
> descend from
On 2017-02-09, Eike Lantzsch wrote:
> Hi,
> just out of curiosity I purchased a Delock 95228 MiniPCIe I/O 1 x Gigabit LAN
> card to test in my PC-Engines APU2C4 board.
> Well, it works - somehow.
> If the media is set to 100baseTX at least it does but at 1000baseTX it has
>
Hi,
just out of curiosity I purchased a Delock 95228 MiniPCIe I/O 1 x Gigabit LAN
card to test in my PC-Engines APU2C4 board.
Well, it works - somehow.
If the media is set to 100baseTX it works, at least, but at 1000baseTX it has
about 35 to 72% packet loss on ping.
I guess that the pigtail
Stefan Wollny wrote:
> at least with
>
> $ dmesg | grep Open
> OpenBSD 6.0-current (GENERIC.MP) #166: Wed Feb 8 19:15:03 MST 2017
>
> the issue still persists.
The patch that solves the issue (at least on my machine) was committed today:
Oh, thanks a lot, will wait for current update then :)
2017-02-09 22:25 GMT+03:00 Robert Peichaer :
> On Thu, Feb 09, 2017 at 10:18:44PM +0300, Asbel Kiprop wrote:
> > hi misc.
> > i've moved my -current system from hdd to ssd disk. everything work fine
> > for me, but got
On Thu, Feb 09, 2017 at 10:18:44PM +0300, Asbel Kiprop wrote:
> hi misc.
> i've moved my -current system from hdd to ssd disk. everything work fine
> for me, but got some strange i3bar behavior.
> wireless _first_ {
> format_up = "W: (%signal at %essid) %ip"
> format_down = "W:
hi misc.
i've moved my -current system from hdd to ssd disk. everything works fine
for me, but I got some strange i3bar behavior.
wireless _first_ {
format_up = "W: (%signal at %essid) %ip"
format_down = "W: down"
}
battery 0 {
format = "%status %percentage \% %remaining"
}
Gregor Best writes:
> Hi,
>
>> [...]
>> # tail -4 /var/log/messages
>> Feb 9 11:21:44 air vmd[73442]: parent terminating
>> Feb 9 11:21:47 air vmd[73405]: config_setvm: can't open tap tap: No
>> such file or directory
>> [...]
>
> You're probably missing the device files for
Gregor Best writes:
> Hi,
>
> On Thu, Feb 09, 2017 at 11:33:19AM -0600, Eric Brown wrote:
>> [...]
>> # tail -4 /var/log/messages
>> Feb 9 11:21:44 air vmd[73442]: parent terminating
>> Feb 9 11:21:47 air vmd[73405]: config_setvm: can't open tap tap: No such
>> file or
Eric Brown writes:
> Dear List,
>
> I've recently learned (and discovered) that time in VM's is tricky
> business. I'm looking for the least stupid way to keep any semblance of
> time in vmd instances while I hungrily await a "correct solution" to
> descend from the heavens.
On 02/09/17 at 18:02, Martin Pieuchot wrote:
> On 09/02/17(Thu) 17:55, Stefan Wollny wrote:
>> On 02/08/17 at 17:57, Hrvoje Popovski wrote:
>>> On 8.2.2017. 17:51, Scott Vanderbilt wrote:
Updated a machine to latest (5 Feb.) snapshot of amd64. I'm now seeing
the following message after
Hi,
On Thu, Feb 09, 2017 at 11:33:19AM -0600, Eric Brown wrote:
> [...]
> # tail -4 /var/log/messages
> Feb 9 11:21:44 air vmd[73442]: parent terminating
> Feb 9 11:21:47 air vmd[73405]: config_setvm: can't open tap tap: No such
> file or directory
> [...]
You're probably missing the device
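[The usual cause of that "can't open tap" error is missing tap(4) device nodes. A sketch of the fix, assuming the stock /dev layout on OpenBSD:

```shell
# Create the tap device nodes vmd wants; the MAKEDEV script ships in /dev.
cd /dev
sh MAKEDEV tap0 tap1 tap2 tap3
```

One node is needed per concurrently running VM with a network interface.]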
Dear List,
I've recently learned (and discovered) that time in VMs is tricky
business. I'm looking for the least stupid way to keep any semblance of
time in vmd instances while I hungrily await a "correct solution" to
descend from the heavens.
I've disabled openntpd, installed ntp package (but
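[One blunt workaround, in the spirit of the vmtimed approach mentioned at the top of this page, is to step the clock from cron rather than skew it, since a VM clock can drift further than ntpd is willing to correct. The interval and time server below are assumptions:

```shell
# Hypothetical entry for root's crontab (crontab -e):
# force-step the guest clock every 30 minutes with rdate(8).
*/30	*	*	*	*	/usr/sbin/rdate -nv pool.ntp.org
```
]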
On Wed, Feb 08, 2017 at 10:04:04AM +, Comète wrote:
> Hi,
>
> I use OpenBSD 6.0 amd64 (stable) on a Shuttle XS35v2. I've installed
> "ushare", but I have the same problem with "minidlna", and I don't think the
> problem comes from these apps... When I try to read a big file (e.g. a 1 GB video) from my
>
Dear List,
I am experimenting with virtual machines (vmd) in recent OpenBSD
snapshots. Having gotten a few VMs working, I am eager to make many
more and also run them. I'm pleased to have an autoinstall process
running from a vmd instance.
However, when running more than 4 instances, I run into
On 09/02/17(Thu) 17:55, Stefan Wollny wrote:
> On 02/08/17 at 17:57, Hrvoje Popovski wrote:
> > On 8.2.2017. 17:51, Scott Vanderbilt wrote:
> >> Updated a machine to latest (5 Feb.) snapshot of amd64. I'm now seeing
> >> the following message after booting that I've not recalled seeing before:
>
On 02/08/17 at 17:57, Hrvoje Popovski wrote:
> On 8.2.2017. 17:51, Scott Vanderbilt wrote:
>> Updated a machine to latest (5 Feb.) snapshot of amd64. I'm now seeing
>> the following message after booting that I've not recalled seeing before:
>>
>> splassert: yield: want 0 have 1
>
>
> add
OSPF is sensitive to MTU changes. You probably want the change in
http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/usr.sbin/ospfd/ospfe.c.diff?r1=1.96&r2=1.97
from -current. This will track interface MTU changes.
On 2017 Feb 09 (Thu) at 14:51:05 +0100 (+0100), Maxim Bourmistrov wrote:
:This actually
Hm, seems that I mistyped MTU in my original mail.
lacp system-priority 1
rate-limit cpu direction input pps 1024
system jumbo mtu 1518
It is 1518 by default.
> On 9 Feb 2017, at 14:51, Maxim Bourmistrov wrote:
>
>
> This actually a default setting for this switch, then
This is actually a default setting for this switch, so you don't configure
jumbo at all.
'sh running-config all' shows this.
I had 'ip ospf mtu-ignore' in the config as well, but it didn't help, so
it is gone now.
I'll try with 1518.
As seen in tcpdump, both the OpenBSD box and the Dell are announcing themselves
Are you establishing an OSPF session with the N3048? If you are, then
there is an MTU mismatch.
Either "system jumbo mtu" refers to the IP packet, which doesn't match
the 1500 set on trunk1, or it refers to the Ethernet frame, which should
be 1518 (18 bytes for the Ethernet header and FCS).
Is it
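As a sanity check on those numbers, the standard Ethernet framing arithmetic (a quick sketch; the header and FCS sizes are the usual fixed values):

```python
# Fixed Ethernet framing overhead around an IP packet.
ETH_HEADER = 14   # destination MAC (6) + source MAC (6) + EtherType (2)
ETH_FCS = 4       # trailing frame check sequence
IP_MTU = 1500     # the MTU configured on trunk1 in this thread

frame_size = IP_MTU + ETH_HEADER + ETH_FCS
print(frame_size)  # -> 1518
```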
I see similar behavior with a Cisco Nexus and 5.9-stable.
However, I'm not 100% sure it is the same trigger.
> On 9 Feb 2017, at 14:08, Maxim Bourmistrov wrote:
>
> Hey,
>
> ospfd on 6.0-stable stucks in EXCHG/EXSTA while neighboring with Dell N3048
switch.
> According to
Hey,
ospfd on 6.0-stable gets stuck in EXCHG/EXSTA while neighboring with a Dell N3048
switch.
According to some documentation around, this is due to an MTU mismatch.
That is not the case here.
N3048:
system jumbo mtu 1512
obsd:
trunk1: flags=8943 mtu 1500
0
C Australia
P New South Wales
T Sydney
Z 2000
O NSW IT Support
I Suraj Poudel
A 19 Martin Pl
M binaryit.market...@gmail.com
U http://nswits.com.au/
B 1300 138 600
N NSW IT Support provides OpenBSD and Linux installation services, including
secure antispam web servers, to keep customers happy.
added, thanks
On 02/01/2017 03:41 PM, Erling Westenvik wrote:
I have an OpenBSD 5.9 server at a colocation. It stopped accepting new
connections (ping, ssh, http, whatever) yesterday night but fortunately
I had one ssh session open from my workstation from which I can still
access it.
Did you think about
I came to the same conclusion.
ok benno@
Reyk Floeter(r...@openbsd.org) on 2017.02.09 00:25:31 +0100:
> On Tue, Feb 07, 2017 at 05:04:18PM -0500, Michael W. Lucas wrote:
> > host 104.236.197.233, check send expect (9020ms,tcp read timeout), state
> > unknown -> down, availability 0.00%
>
> The
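For context, the kind of relayd.conf that produces a send/expect check like the one quoted above looks roughly like this. The table name, listener, and probe strings are assumptions, not the actual config from the thread; verify the grammar against relayd.conf(5):

```
# Hypothetical relayd.conf sketch for a send/expect TCP health check.
timeout 9000                       # check timeout in milliseconds

table <websrv> { 104.236.197.233 }

redirect "www" {
	listen on 0.0.0.0 port 80
	forward to <websrv> port 80 check send "HEAD / HTTP/1.0\r\n\r\n" expect "* 200 *"
}
```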
2017-02-09 16:41 GMT+08:00 David Gwynne :
..
> hey mikael,
>
> can you be more specific about what you mean by multiqueuing for disks?
> even a
> reference to an implementation of what you're asking about would help me
> answer this question.
>
> ill write up a bigger reply
> On 9 Feb 2017, at 12:42 pm, Mikael wrote:
>
> Hi misc@,
>
> The SSD reading benchmark in the previous email shows that per-device
> multiqueuing will boost multithreaded random read performance very much
> e.g. by ~7X+, e.g. the current 50MB/sec will increase to