Re: https for pkg_add?

2017-01-05 Thread Antoine Jacoutot
On Thu, Jan 05, 2017 at 06:50:38PM -0800, jungle boogie wrote:
> Hi All,
> 
> With all the recent changes to supporting https on the various mirrors, does
> that mean https may also be used with the PKG_PATH variable?

Yes.
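For example, something like this should work (mirror host, release, and architecture below are illustrative, not a recommendation):

```shell
# Point pkg_add(1) at an https mirror via PKG_PATH
# (host, version, and arch here are illustrative).
PKG_PATH=https://ftp.openbsd.org/pub/OpenBSD/6.0/packages/amd64/
export PKG_PATH
echo "$PKG_PATH"
# pkg_add -v rsync   # would now fetch packages over https
```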

-- 
Antoine



https for pkg_add?

2017-01-05 Thread jungle boogie

Hi All,

With all the recent changes to supporting https on the various mirrors, 
does that mean https may also be used with the PKG_PATH variable?


Thanks,
jb



Re: Why can I waitpid() but can't EVFILT_PROC under pledge("proc")

2017-01-05 Thread Theo de Raadt
>You could possibly make a separate "event" or "wait" pledge to register new
>events or NOTE_EXIT calls, but I suspect that that would complicate things,
>making the large presumption that that could be desired.

Why would we do that?

We've not seen any source code which requires what you propose.

Pledge isn't supposed to adapt to odd-ball software; odd-ball software
should adapt to pledge.  Usually that suggests a move towards privsep.



Re: Why can I waitpid() but can't EVFILT_PROC under pledge("proc")

2017-01-05 Thread Luke Small
You could possibly make a separate "event" or "wait" pledge to register new
events or NOTE_EXIT calls, but I suspect that that would complicate things,
making the large presumption that that could be desired.

On Thu, Jan 5, 2017, 15:42 Theo de Raadt  wrote:

> > I imagine that the mitigation that is sought by pledge is to minimize
> > aberrant code reuse in whatever way a hacker can make code run again in a
> > way that it isn't supposed to. And maybe the programmer can choose what
> can
> > be problematic and what isn't if it runs again with their choice of the
> > calls. What problem could occur that EVFILT_PROC with NOTE_EXIT (as
> opposed
> > to EVFILT_PROC with maybe other fflags) could make that couldn't occur by
> > trying to put a kevent on a file descriptor. Is "abusively" monitoring a
> > process a security hole?
>
> You want to avoid using pledge "proc" in your own code; you want
> EVFILT_PROC to just work.  I guess you wish that EVFILT_PROC was
> always enabled, inside the pledge "stdio" environment, since you
> haven't described another proposal for what would toggle access.
>
> But that means you don't care about everyone else's code, in
> particular the whole base system.  1000+ code-chunks in the tree use
> "stdio" (or something else slightly more) for normal operation, but
> don't need or want EVFILT_PROC access as a matter of course.
>
> So you think those 1000+ code-chunks should gain EVFILT_PROC ability,
> because it is convenient for your code.  Result is if any of those
> code-chunks has a buffer overflow or means for code execution
> achievement, then it can observe all processes in the system.
>
> Basically, you desire that all pledge "stdio" processes can scan for
> the existence of all pids by doing EVFILT_PROC attempts.  YES, that
> is exactly what you want, I am not mincing words!  Basically you want
> to undo the annotation/safety of pledge, because you haven't thought
> things through.
>
> Like, you'd be OK if a bug in the sshd pre-auth sandbox allowed such an
> operation?
>
> > If it is, shouldn't a task manager be run as root to see when a root
> > process dies?
>
> That has nothing to do with pledge.  If you want to ask an entirely
> separate question, ask it in a separate email.



Re: Why can I waitpid() but can't EVFILT_PROC under pledge("proc")

2017-01-05 Thread Theo de Raadt
> I should also clarify a bit. wait() only works for processes you've created
> with fork(), which requires "proc". There's good reason to allow you to watch
> for a child's exit much later, but without the ability to fork again.

that's right.

during development of pledge, we found many instances where a parent
creates a child, then both of them pledge narrowly.  Imagine the
parent goes to "stdio", but still wants to observe termination of the
child, and perform some very small action afterwards.  It is a common
pattern, so waitpid(2) is serviced normally in "stdio".
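As a sketch of that pattern (error handling trimmed; the pledge(2) calls are OpenBSD-specific, so they are shown as comments to keep the fragment portable):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with `code`; fork() needs the "proc" promise. */
static pid_t
spawn_child(int code)
{
	pid_t pid = fork();

	if (pid == 0) {
		/* child: pledge("stdio", NULL); then do its small job */
		_exit(code);
	}
	return pid;
}

/*
 * Parent side: after pledge("stdio", NULL), waitpid(2) is still
 * serviced, so the child's termination can be observed.
 * Returns the child's exit code, or -1 on error.
 */
static int
reap_child(pid_t pid)
{
	int status;

	if (waitpid(pid, &status, 0) != pid || !WIFEXITED(status))
		return -1;
	return WEXITSTATUS(status);
}
```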

> Also, kevent allows exactly this setup with the same set of pledges. After
> calling fork() is when you attach the kevent for the child. Then you drop
> "proc" and can continue to receive notifications about child exits.
> 
> Using kevent() in the same way as wait() requires exactly the same pledge.

Only a subset of kqueue/kevent behaviours are allowed in "stdio" --
basically the poll/select equivalent behaviours exposed by libevent.
That kind of thing occurs in around 60 base programs (sometimes in
2-3 separate event loops due to privsep, sometimes done by hand rather
than using libevent).

In the future if we encounter risky behaviours of kqueue/kevent which
are not critical for "stdio" programs, they may also get blocked.



Re: Why can I waitpid() but can't EVFILT_PROC under pledge("proc")

2017-01-05 Thread Theo de Raadt
> I imagine that the mitigation that is sought by pledge is to minimize
> aberrant code reuse in whatever way a hacker can make code run again in a
> way that it isn't supposed to. And maybe the programmer can choose what can
> be problematic and what isn't if it runs again with their choice of the
> calls. What problem could occur that EVFILT_PROC with NOTE_EXIT (as opposed
> to EVFILT_PROC with maybe other fflags) could make that couldn't occur by
> trying to put a kevent on a file descriptor. Is "abusively" monitoring a
> process a security hole?

You want to avoid using pledge "proc" in your own code; you want
EVFILT_PROC to just work.  I guess you wish that EVFILT_PROC was
always enabled, inside the pledge "stdio" environment, since you
haven't described another proposal for what would toggle access.

But that means you don't care about everyone else's code, in
particular the whole base system.  1000+ code-chunks in the tree use
"stdio" (or something else slightly more) for normal operation, but
don't need or want EVFILT_PROC access as a matter of course.

So you think those 1000+ code-chunks should gain EVFILT_PROC ability,
because it is convenient for your code.  Result is if any of those
code-chunks has a buffer overflow or means for code execution
achievement, then it can observe all processes in the system.

Basically, you desire that all pledge "stdio" processes can scan for
the existence of all pids by doing EVFILT_PROC attempts.  YES, that
is exactly what you want, I am not mincing words!  Basically you want
to undo the annotation/safety of pledge, because you haven't thought
things through.

Like, you'd be OK if a bug in the sshd pre-auth sandbox allowed such an
operation?

> If it is, shouldn't a task manager be run as root to see when a root
> process dies?

That has nothing to do with pledge.  If you want to ask an entirely
separate question, ask it in a separate email.



Re: Why can I waitpid() but can't EVFILT_PROC under pledge("proc")

2017-01-05 Thread Luke Small
Registering a EVFILT_PROC, NOTE_EXIT kevent requires proc

On Thu, Jan 5, 2017, 15:25 Ted Unangst  wrote:

> Theo de Raadt wrote:
> > > Luke Small wrote:
> > > > What if I want to prevent a process from forking while I want to
> create new
> > > > EVFILT_PROC events? Say, to accept the pid of a sibling fork from a
> pipe
> > > > and load it into a kqueue. Is there a reason why waitpid() isn't
> beholden
> > > > to this, or is there a reason that EVFILT_PROC is?
> > >
> > > wait() is a less powerful syscall than kevent().
> >
> > indeed, EVFILT_PROC lets you observe processes other than your own
> > children.
> >
> > that is way outside "stdio": you are reasoning about processes in
> > general, so of course you need pledge "proc".
>
> I should also clarify a bit. wait() only works for processes you've created
> with fork(), which requires "proc". There's good reason to allow you to
> watch
> for a child's exit much later, but without the ability to fork again.
>
> Also, kevent allows exactly this setup with the same set of pledges. After
> calling fork() is when you attach the kevent for the child. Then you drop
> "proc" and can continue to receive notifications about child exits.
>
> Using kevent() in the same way as wait() requires exactly the same pledge.



Re: Why can I waitpid() but can't EVFILT_PROC under pledge("proc")

2017-01-05 Thread Ted Unangst
Theo de Raadt wrote:
> > Luke Small wrote:
> > > What if I want to prevent a process from forking while I want to create 
> > > new
> > > EVFILT_PROC events? Say, to accept the pid of a sibling fork from a pipe
> > > and load it into a kqueue. Is there a reason why waitpid() isn't beholden
> > > to this, or is there a reason that EVFILT_PROC is?
> > 
> > wait() is a less powerful syscall than kevent().
> 
> indeed, EVFILT_PROC lets you observe processes other than your own
> children.
> 
> that is way outside "stdio": you are reasoning about processes in general,
> so of course you need pledge "proc".

I should also clarify a bit. wait() only works for processes you've created
with fork(), which requires "proc". There's good reason to allow you to watch
for a child's exit much later, but without the ability to fork again.

Also, kevent allows exactly this setup with the same set of pledges. After
calling fork() is when you attach the kevent for the child. Then you drop
"proc" and can continue to receive notifications about child exits.

Using kevent() in the same way as wait() requires exactly the same pledge.
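A minimal sketch of that sequence (OpenBSD-specific -- pledge(2) and kqueue -- with error handling trimmed, so treat it as an illustration rather than a complete program):

```c
#include <sys/types.h>
#include <sys/event.h>
#include <unistd.h>
#include <stdio.h>

int
main(void)
{
	struct kevent ev;
	int kq = kqueue();
	pid_t pid;

	pid = fork();		/* not yet pledged, so fork() is allowed */
	if (pid == 0)
		_exit(0);	/* child: its own narrow job */

	/* attach NOTE_EXIT for the child before dropping "proc" */
	EV_SET(&ev, pid, EVFILT_PROC, EV_ADD, NOTE_EXIT, 0, NULL);
	kevent(kq, &ev, 1, NULL, 0, NULL);

	/* drop to "stdio"; exit notifications continue to be delivered */
	pledge("stdio", NULL);

	if (kevent(kq, NULL, 0, &ev, 1, NULL) == 1 && (ev.fflags & NOTE_EXIT))
		printf("child %ld exited\n", (long)ev.ident);
	return 0;
}
```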



Re: Why can I waitpid() but can't EVFILT_PROC under pledge("proc")

2017-01-05 Thread Luke Small
I imagine that the mitigation that is sought by pledge is to minimize
aberrant code reuse in whatever way a hacker can make code run again in a
way that it isn't supposed to. And maybe the programmer can choose what can
be problematic and what isn't if it runs again with their choice of the
calls. What problem could occur that EVFILT_PROC with NOTE_EXIT (as opposed
to EVFILT_PROC with maybe other fflags) could make that couldn't occur by
trying to put a kevent on a file descriptor. Is "abusively" monitoring a
process a security hole? If it is, shouldn't a task manager be run as root
to see when a root process dies? It would be difficult enough to discover a
pid you aren't supposed to know, since pids probably wouldn't be stored
unless they were going to be used. waitpid(pid, &status, WNOHANG) in a loop
could do the same thing, but would be uglier to implement. I
suppose it may be difficult to turn back now after pledging so much in a
certain way.
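The WNOHANG polling loop mentioned here would look roughly like this (portable sketch, illustration only -- and, as noted, uglier than a NOTE_EXIT kevent):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Poll until `pid` exits; returns its exit code, or -1 on error. */
static int
poll_for_exit(pid_t pid)
{
	int status;
	pid_t r;

	for (;;) {
		r = waitpid(pid, &status, WNOHANG);
		if (r == -1)
			return -1;
		if (r == pid)
			return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
		usleep(10000);	/* child still running; back off briefly */
	}
}
```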

On Thu, Jan 5, 2017, 14:41 Ted Unangst  wrote:

Luke Small wrote:
> What if I want to prevent a process from forking while I want to create
new
> EVFILT_PROC events? Say, to accept the pid of a sibling fork from a pipe
> and load it into a kqueue. Is there a reason why waitpid() isn't beholden
> to this, or is there a reason that EVFILT_PROC is?

wait() is a less powerful syscall than kevent().



Re: Why can I waitpid() but can't EVFILT_PROC under pledge("proc")

2017-01-05 Thread Theo de Raadt
> Luke Small wrote:
> > What if I want to prevent a process from forking while I want to create new
> > EVFILT_PROC events? Say, to accept the pid of a sibling fork from a pipe
> > and load it into a kqueue. Is there a reason why waitpid() isn't beholden
> > to this, or is there a reason that EVFILT_PROC is?
> 
> wait() is a less powerful syscall than kevent().

indeed, EVFILT_PROC lets you observe processes other than your own
children.

that is way outside "stdio": you are reasoning about processes in general,
so of course you need pledge "proc".



Re: Why can I waitpid() but can't EVFILT_PROC under pledge("proc")

2017-01-05 Thread Ted Unangst
Luke Small wrote:
> What if I want to prevent a process from forking while I want to create new
> EVFILT_PROC events? Say, to accept the pid of a sibling fork from a pipe
> and load it into a kqueue. Is there a reason why waitpid() isn't beholden
> to this, or is there a reason that EVFILT_PROC is?

wait() is a less powerful syscall than kevent().



Re: Why can I waitpid() but can't EVFILT_PROC under pledge("proc")

2017-01-05 Thread Theo de Raadt
> What if I want to prevent a process from forking while I want to create new
> EVFILT_PROC events? Say, to accept the pid of a sibling fork from a pipe
> and load it into a kqueue. Is there a reason why waitpid() isn't beholden
> to this, or is there a reason that EVFILT_PROC is?

Your use case doesn't seem like boring normal code.

pledge is designed to serve two goals:

(1) encourage refactoring towards safer models

(2) reduce code exposure to the kernel

The pledge you are talking about is called "proc" rather than "fork"
for a reason -- it annotates code that attempts to reason about
process-trees as much as possible.  I can't specifically say what
reasoning about a process means, but a reasonable edge was chosen to
allow code to work.  Pledge contains a few cases where further
syscalls are allowed because too much existing code uses it;
similarly, other syscalls contain narrow checks internally, as you just
discovered, because a feature wasn't found necessary.  waitpid was
left in "stdio" because we found too much code needing to be synchronously
aware of sub-process termination; we also found code which used
SIGCHLD and waitpid, but we found none using this non-POSIX kqueue
extension in this manner.

It is too complicated to explain all these details in the manual page.
Also, a strict explanation would be a trap for program developers and
kernel developers in the future.  The manual page is EXPLICITLY vague
as a result.  Actually, I forced the removal of information that was
too detailed.

If you can't make pledge work in a certain way, don't use it.  Or
refactor your program to not rely on the narrowest extensions you
find.

Oh and you want a final answer?  If we allow EVFILT_PROC to work in
"stdio", then around 200 programs suddenly can flow through that kernel
code path.  See point (2) above.



Why can I waitpid() but can't EVFILT_PROC under pledge("proc")

2017-01-05 Thread Luke Small
What if I want to prevent a process from forking while I want to create new
EVFILT_PROC events? Say, to accept the pid of a sibling fork from a pipe
and load it into a kqueue. Is there a reason why waitpid() isn't beholden
to this, or is there a reason that EVFILT_PROC is?



Re: relayd[66834]: relayd: socketpair: Too many open files

2017-01-05 Thread Kevin
On Thu, Jan 5, 2017 at 10:07 AM, Peter Faiman  wrote:

> Hmm. The default number of files is 128 for daemons, but it's strange
> you'd hit that JUST starting up.
>
> Can you try starting relayd with -v -d to see if it logs anything of
> interest?
>

# /usr/sbin/relayd -vvv -d
startup
init_filter: filter init done
socket_rlimit: max open files 1024
socket_rlimit: max open files 1024
init_tables: created 1 tables
socket_rlimit: max open files 1024
socket_rlimit: max open files 1024
hce_notify_done: 192.168.2.0 (tcp connect ok)
host 192.168.2.0, check tcp (0ms,tcp connect ok), state unknown -> up,
availability 100.00%
hce_notify_done: 192.168.2.1 (tcp connect failed)
host 192.168.2.1, check tcp (1ms,tcp connect failed), state unknown ->
down, availability 0.00%
pfe_dispatch_hce: state 1 for host 1 192.168.2.0
pfe_dispatch_hce: state -1 for host 2 192.168.2.1
table Example.com: 1 added, 0 deleted, 0 changed, 0 killed

Also, take a look at the interesting difference between these two...

# /etc/rc.d/relayd start


relayd(failed)

# /usr/sbin/relayd


# ps uax | grep rel
_relayd  82300  0.0  0.3  1140  1964 ??  Sp11:37AM0:00.00 relayd:
hce (
_relayd  60360  0.0  0.3  1144  2028 ??  Sp11:37AM0:00.00 relayd:
pfe (
root 32087  0.0  0.3  1456  2300 ??  Ss11:37AM0:00.00
/usr/sbin/rel
_relayd  40535  0.0  0.2  1072  1800 ??  Sp11:37AM0:00.00 relayd:
ca (r
_relayd  15864  0.0  0.2  1208  1900 ??  Sp11:37AM0:00.00 relayd:
relay
_relayd  15159  0.0  0.2  1208  1900 ??  Sp11:37AM0:00.00 relayd:
relay
_relayd   7514  0.0  0.3  1208  2004 ??  Sp11:37AM0:00.00 relayd:
relay
_relayd  23861  0.0  0.2  1072  1676 ??  Sp11:37AM0:00.00 relayd:
ca (r
_relayd  16117  0.0  0.2  1072  1680 ??  Sp11:37AM0:00.00 relayd:
ca (r
root 61405  0.0  0.1   336  1128 p0  S+p   11:37AM0:00.00 grep rel




> Can you binary search ulimits until you find the lowest it will start with?
>

I increased ulimit with rational intervals 'til it finally started...


> Reading the source it looks like socket pairs are created between all the
> relayd processes, i.e. n^2 * 2 ish file descriptors, which could exceed 128
> pretty fast. Are you running with a non-default prefork setting?
>


Nope.

My full relayd.conf is in the thread below.



>
> Peter
>
> On Jan 5, 2017, at 09:12, Kevin  wrote:
>
> Nope. I was hoping for another solution, especially given that:
>
> 1. the only things running on this machine are pf and relayd
> 2. there's zero traffic going to it at present
> 3. there's only one site being load balanced
>
> it seems like it shouldn't be necessary.
>
> I'm open to it, if that's the only choice, but it strikes me as outside of
> the bounds of normal operation.
>
> On Thu, Jan 5, 2017 at 9:07 AM, Peter Faiman 
> wrote:
>
>> Have you modified your open file limits in /etc/login.conf? Especially in
>> the daemon section?
>>
>> Peter
>>
>> > On Jan 5, 2017, at 08:50, Kevin  wrote:
>> >
>> >> On Tue, Jan 3, 2017 at 1:16 PM, Kevin  wrote:
>> >>
>> >> Hey gang,
>> >>
>> >> So I'm putting a new firewall in place and have run into issues with
>> >> getting relayd to start using:
>> >>
>> >> # /etc/rc.d/relayd start
>> >>
>> >> When I try starting it like that inevitably I get:
>> >>
>> >>relayd(failed)
>> >>
>> >> checking the log files tells me:
>> >>
>> >>relayd: socketpair: Too many open files
>> >>
>> >> Having trolled through pages of SERPs, I can't find an answer;
>> however, in
>> >> the interest of science, if I do this:
>> >>
>> >> # ulimit -n 512
>> >> # /usr/sbin/relayd
>> >>
>> >> it starts perfectly.
>> >>
>> >> Anyone care to give me a quick strike with the clue stick, please?
>> >>
>> >> Oh yah, here's my relayd.conf
>> >>
>> >> # Example.com
>> >> # 145.176.20.136
>> >> exm_chi01="192.168.2.0"
>> >> exm_chi02="192.168.2.1"
>> >>
>> >> table{ $exm_chi01, $exm_chi02 }
>> >>
>> >> #=#
>> >> # Servers #
>> >> #=#
>> >> redirect "Example.com" {
>> >>listen on 145.176.20.162 port 80 interface vio0
>> >>pftag RELAYD-Example.com
>> >>forward to  check tcp
>> >> }
>> >>
>> >>
>> >> For what it's worth, I'm using a hosts file to point example.com to
>> my IP
>> >> for the time being, as I can't pull the real sites down and move them
>> 'til
>> >> this is working.
>> >>
>> >> Also of interest: pf seems to be working as advertised, as does relayd
>> >> when it's started with the ulimit cranked up.
>> >>
>> >>
>> >> Thanks,
>> >> Kevin
>> >>
>> >
>> >
>> >
>> > Unless there's word to the contrary, and as much as it's not officially
>> the
>> > right thing to do, it seems the only real choice for me here is to run
>> > relayd with ulimit sufficiently cranked, eh?



Re: relayd[66834]: relayd: socketpair: Too many open files

2017-01-05 Thread Kevin
On Thu, Jan 5, 2017 at 10:19 AM, Peter Faiman  wrote:

> Ah yes I see those lines now, thank you.
>
> Kevin, what version of OpenBSD are you using? You mentioned this is a new
> project so I assume 6.0?
>

From my dmesg:

OpenBSD 6.0-stable (GENERIC.MP) #0: Wed Dec 28
14:13:24 PST 2016



Re: maybe move texinfo from base in the ports?

2017-01-05 Thread Todd C. Miller
On Thu, 05 Jan 2017 21:18:45 +0300, Андрей Болконский wrote:

> https://github.com/openbsd/src/tree/master/usr.bin/keynote
> remove this obsolete directory, please...

How is it obsolete?  The keynote binary is still built from there,
it's just that the source files all live in the libkeynote directory.

 - todd



Re: maybe move texinfo from base in the ports?

2017-01-05 Thread Андрей Болконский
https://github.com/openbsd/src/tree/master/usr.bin/keynote
remove this obsolete directory, please...

2016-11-18 21:09 GMT+03:00 Ingo Schwarze :

> Andrey Bolkonsky wrote on Thu, Nov 17, 2016 at 07:47:48PM +0300:
>
> > IMHO, texinfo isn't needed in most cases; it is GPL software and a
> > legacy version
>
> In most cases, it isn't needed, but in some cases, it is still in use:
>
> schwarze@isnote $ ls /usr/share/info/
> amdref.info cppinternals.info   gdb.infolibiberty.info
> annotate.info   cvs.infogdbint.info readline.info
> as.info cvsclient.info  history.infostabs.info
> bfd.infodir info-stnd.info  texinfo
> binutils.info   gcc.infoinfo.info
> cpp.infogccint.info ld.info
>
> So it is not an option to just remove it.
>
> > Use man, like!
>
> I considered converting all these info documents to mdoc(7) format
> with texi2mdoc(1), see http://mdocml.bsd.lv/texi2mdoc/ .
> But it's a non-trivial amount of work.
> And it is not high priority for the following reasons:
>
>  - The above manuals are rarely used on OpenBSD in the first place.
>  - None of the above programs are maintained by the OpenBSD project,
>    so having the manuals in the unfamiliar texinfo format is not a
>    maintenance burden because they rarely change.
>  - Even though upstream changes are rarely (if ever) pulled in for
>    these programs, having as few changes as possible in the
>    directories taken from upstream may help porting in the rare
>    case when something is ported.
>
> I would welcome having the work done, but i'm not going to do it
> myself right now.
>
> Yours,
>   Ingo



Re: relayd[66834]: relayd: socketpair: Too many open files

2017-01-05 Thread Peter Faiman
Ah yes I see those lines now, thank you.

Kevin, what version of OpenBSD are you using? You mentioned this is a new
project so I assume 6.0?

Peter

On Jan 5, 2017, at 10:08, Theo de Raadt  wrote:

>> Hmm. The default number of files is 128 for daemons, but it's strange
>> you'd hit that JUST starting up.
>>
>> Can you try starting relayd with -v -d to see if it logs anything of
>> interest?
>>
>> Can you binary search ulimits until you find the lowest it will start
>> with?
>>
>> Reading the source it looks like socket pairs are created between all the
>> relayd processes, i.e. n^2 * 2 ish file descriptors, which could exceed
>> 128 pretty fast. Are you running with a non-default prefork setting?
>
> This was fixed after 6.0.
>
> date: 2016/11/24 21:01:18;  author: reyk;  state: Exp;  lines: +110 -79;
> commitid: FkVuQgzULddApn9S;
> The new fork+exec mode used too many fds in the parent process on
> startup, for a short time, so we needed a rlimit hack in relayd.c.
> Sync the fix from httpd: rzalamena@ has fixed proc.c and I added the
> proc_flush_imsg() mechanism that makes sure that each fd is
> immediately closed after forwarding it to a child process instead of
> queueing it up.
>
> OK rzalamena@ jca@ benno@



Re: relayd[66834]: relayd: socketpair: Too many open files

2017-01-05 Thread Theo de Raadt
> Hmm. The default number of files is 128 for daemons, but it's strange you'd
> hit that JUST starting up.
> 
> Can you try starting relayd with -v -d to see if it logs anything of
> interest?
> 
> Can you binary search ulimits until you find the lowest it will start with?
> 
> Reading the source it looks like socket pairs are created between all the
> relayd processes, i.e. n^2 * 2 ish file descriptors, which could exceed 128
> pretty fast. Are you running with a non-default prefork setting?

This was fixed after 6.0.

date: 2016/11/24 21:01:18;  author: reyk;  state: Exp;  lines: +110 -79;  
commitid: FkVuQgzULddApn9S;
The new fork+exec mode used too many fds in the parent process on
startup, for a short time, so we needed a rlimit hack in relayd.c.
Sync the fix from httpd: rzalamena@ has fixed proc.c and I added the
proc_flush_imsg() mechanism that makes sure that each fd is
immediately closed after forwarding it to a child process instead of
queueing it up.

OK rzalamena@ jca@ benno@



Re: relayd[66834]: relayd: socketpair: Too many open files

2017-01-05 Thread Peter Faiman
Hmm. The default number of files is 128 for daemons, but it's strange you'd
hit that JUST starting up.

Can you try starting relayd with -v -d to see if it logs anything of
interest?

Can you binary search ulimits until you find the lowest it will start with?

Reading the source it looks like socket pairs are created between all the
relayd processes, i.e. n^2 * 2 ish file descriptors, which could exceed 128
pretty fast. Are you running with a non-default prefork setting?

Peter

> On Jan 5, 2017, at 09:12, Kevin  wrote:
>
> Nope. I was hoping for another solution, especially given that:
>
> 1. the only things running on this machine are pf and relayd
> 2. there's zero traffic going to it at present
> 3. there's only one site being load balanced
>
> it seems like it shouldn't be necessary.
>
> I'm open to it, if that's the only choice, but it strikes me as outside of
> the bounds of normal operation.
>
>> On Thu, Jan 5, 2017 at 9:07 AM, Peter Faiman  wrote:
>> Have you modified your open file limits in /etc/login.conf? Especially in
>> the daemon section?
>>
>> Peter
>>
>> > On Jan 5, 2017, at 08:50, Kevin  wrote:
>> >
>> >> On Tue, Jan 3, 2017 at 1:16 PM, Kevin  wrote:
>> >>
>> >> Hey gang,
>> >>
>> >> So I'm putting a new firewall in place and have run into issues with
>> >> getting relayd to start using:
>> >>
>> >> # /etc/rc.d/relayd start
>> >>
>> >> When I try starting it like that inevitably I get:
>> >>
>> >>relayd(failed)
>> >>
>> >> checking the log files tells me:
>> >>
>> >>relayd: socketpair: Too many open files
>> >>
>> >> Having trolled through pages of SERPs, I can't find an answer; however, in
>> >> the interest of science, if I do this:
>> >>
>> >> # ulimit -n 512
>> >> # /usr/sbin/relayd
>> >>
>> >> it starts perfectly.
>> >>
>> >> Anyone care to give me a quick strike with the clue stick, please?
>> >>
>> >> Oh yah, here's my relayd.conf
>> >>
>> >> # Example.com
>> >> # 145.176.20.136
>> >> exm_chi01="192.168.2.0"
>> >> exm_chi02="192.168.2.1"
>> >>
>> >> table{ $exm_chi01, $exm_chi02 }
>> >>
>> >> #=#
>> >> # Servers #
>> >> #=#
>> >> redirect "Example.com" {
>> >>listen on 145.176.20.162 port 80 interface vio0
>> >>pftag RELAYD-Example.com
>> >>forward to  check tcp
>> >> }
>> >>
>> >>
>> >> For what it's worth, I'm using a hosts file to point example.com to my IP
>> >> for the time being, as I can't pull the real sites down and move them 'til
>> >> this is working.
>> >>
>> >> Also of interest: pf seems to be working as advertised, as does relayd
>> >> when it's started with the ulimit cranked up.
>> >>
>> >>
>> >> Thanks,
>> >> Kevin
>> >>
>> >
>> >
>> >
>> > Unless there's word to the contrary, and as much as it's not officially the
>> > right thing to do, it seems the only real choice for me here is to run
>> > relayd with ulimit sufficiently cranked, eh?



Re: relayd[66834]: relayd: socketpair: Too many open files

2017-01-05 Thread Kevin
Nope. I was hoping for another solution, especially given that:

1. the only things running on this machine are pf and relayd
2. there's zero traffic going to it at present
3. there's only one site being load balanced

it seems like it shouldn't be necessary.

I'm open to it, if that's the only choice, but it strikes me as outside of
the bounds of normal operation.

On Thu, Jan 5, 2017 at 9:07 AM, Peter Faiman  wrote:

> Have you modified your open file limits in /etc/login.conf? Especially in
> the daemon section?
>
> Peter
>
> > On Jan 5, 2017, at 08:50, Kevin  wrote:
> >
> >> On Tue, Jan 3, 2017 at 1:16 PM, Kevin  wrote:
> >>
> >> Hey gang,
> >>
> >> So I'm putting a new firewall in place and have run into issues with
> >> getting relayd to start using:
> >>
> >> # /etc/rc.d/relayd start
> >>
> >> When I try starting it like that inevitably I get:
> >>
> >>relayd(failed)
> >>
> >> checking the log files tells me:
> >>
> >>relayd: socketpair: Too many open files
> >>
> >> Having trolled through pages of SERPs, I can't find an answer; however, in
> >> the interest of science, if I do this:
> >>
> >> # ulimit -n 512
> >> # /usr/sbin/relayd
> >>
> >> it starts perfectly.
> >>
> >> Anyone care to give me a quick strike with the clue stick, please?
> >>
> >> Oh yah, here's my relayd.conf
> >>
> >> # Example.com
> >> # 145.176.20.136
> >> exm_chi01="192.168.2.0"
> >> exm_chi02="192.168.2.1"
> >>
> >> table{ $exm_chi01, $exm_chi02 }
> >>
> >> #=#
> >> # Servers #
> >> #=#
> >> redirect "Example.com" {
> >>listen on 145.176.20.162 port 80 interface vio0
> >>pftag RELAYD-Example.com
> >>forward to  check tcp
> >> }
> >>
> >>
> >> For what it's worth, I'm using a hosts file to point example.com to my IP
> >> for the time being, as I can't pull the real sites down and move them 'til
> >> this is working.
> >>
> >> Also of interest: pf seems to be working as advertised, as does relayd
> >> when it's started with the ulimit cranked up.
> >>
> >>
> >> Thanks,
> >> Kevin
> >>
> >
> >
> >
> > Unless there's word to the contrary, and as much as it's not officially the
> > right thing to do, it seems the only real choice for me here is to run
> > relayd with ulimit sufficiently cranked, eh?



Re: relayd[66834]: relayd: socketpair: Too many open files

2017-01-05 Thread Peter Faiman
Have you modified your open file limits in /etc/login.conf? Especially in the
daemon section?

Peter
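For reference, raising those limits would look something like the sketch below in /etc/login.conf (values are illustrative, and the stock daemon class carries more settings than shown; rebuild the db with cap_mkdb(1) if /etc/login.conf.db exists, then restart the daemon):

```
daemon:\
	:openfiles-cur=512:\
	:openfiles-max=1024:\
	:tc=default:
```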

> On Jan 5, 2017, at 08:50, Kevin  wrote:
>
>> On Tue, Jan 3, 2017 at 1:16 PM, Kevin  wrote:
>>
>> Hey gang,
>>
>> So I'm putting a new firewall in place and have run into issues with
>> getting relayd to start using:
>>
>> # /etc/rc.d/relayd start
>>
>> When I try starting it like that inevitably I get:
>>
>>relayd(failed)
>>
>> checking the log files tells me:
>>
>>relayd: socketpair: Too many open files
>>
>> Having trolled through pages of SERPs, I can't find an answer; however, in
>> the interest of science, if I do this:
>>
>> # ulimit -n 512
>> # /usr/sbin/relayd
>>
>> it starts perfectly.
>>
>> Anyone care to give me a quick strike with the clue stick, please?
>>
>> Oh yah, here's my relayd.conf
>>
>> # Example.com
>> # 145.176.20.136
>> exm_chi01="192.168.2.0"
>> exm_chi02="192.168.2.1"
>>
>> table{ $exm_chi01, $exm_chi02 }
>>
>> #=#
>> # Servers #
>> #=#
>> redirect "Example.com" {
>>listen on 145.176.20.162 port 80 interface vio0
>>pftag RELAYD-Example.com
>>forward to  check tcp
>> }
>>
>>
>> For what it's worth, I'm using a hosts file to point example.com to my IP
>> for the time being, as I can't pull the real sites down and move them 'til
>> this is working.
>>
>> Also of interest: pf seems to be working as advertised, as does relayd
>> when it's started with the ulimit cranked up.
>>
>>
>> Thanks,
>> Kevin
>>
>
>
>
> Unless there's word to the contrary, and as much as it's not officially the
> right thing to do, it seems the only real choice for me here is to run
> relayd with ulimit sufficiently cranked, eh?



Re: relayd[66834]: relayd: socketpair: Too many open files

2017-01-05 Thread Kevin
On Tue, Jan 3, 2017 at 1:16 PM, Kevin  wrote:

> Hey gang,
>
> So I'm putting a new firewall in place and have run into issues with
> getting relayd to start using:
>
> # /etc/rc.d/relayd start
>
> When I try starting it like that inevitably I get:
>
> relayd(failed)
>
> checking the log files tells me:
>
> relayd: socketpair: Too many open files
>
> Having trolled through pages of SERPs, I can't find an answer; however, in
> the interest of science, if I do this:
>
> # ulimit -n 512
> # /usr/sbin/relayd
>
> it starts perfectly.
>
> Anyone care to give me a quick strike with the clue stick, please?
>
> Oh yah, here's my relayd.conf
>
> # Example.com
> # 145.176.20.136
> exm_chi01="192.168.2.0"
> exm_chi02="192.168.2.1"
>
> table{ $exm_chi01, $exm_chi02 }
>
> #=#
> # Servers #
> #=#
> redirect "Example.com" {
> listen on 145.176.20.162 port 80 interface vio0
> pftag RELAYD-Example.com
> forward to  check tcp
> }
>
>
> For what it's worth, I'm using a hosts file to point example.com to my IP
> for the time being, as I can't pull the real sites down and move them 'til
> this is working.
>
> Also of interest: pf seems to be working as advertised, as does relayd
> when it's started with the ulimit cranked up.
>
>
> Thanks,
> Kevin
>



Unless there's word to the contrary, and as much as it's not officially the
right thing to do, it seems the only real choice for me here is to run
relayd with ulimit sufficiently cranked, eh?
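A persistent alternative to cranking ulimit by hand is to raise the open-file
limits of the login class that rc.d daemons run under. A sketch, assuming the
stock daemon class in login.conf(5) and reusing the 512 figure that worked
above:

```
# /etc/login.conf -- amend the existing daemon class entry
daemon:\
        :openfiles-cur=512:\
        :openfiles-max=512:\
        :tc=default:

# then, as root:
#   cap_mkdb /etc/login.conf    (only needed if /etc/login.conf.db exists)
#   /etc/rc.d/relayd restart
```

Each table host and relay session costs descriptors, so size the limit to the
configuration rather than guessing.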



Re: usermod: Invalid password: `*'

2017-01-05 Thread Todd C. Miller
This works in -current.  I've verified that it works with rev 1.112
of user.c but OpenBSD 6.0 has user.c rev 1.111.

 - todd



Re: isakmpd set up

2017-01-05 Thread Stuart Henderson
On 2017-01-02, Peter Fraser  wrote:
> I want the fixed IP address so I don't have to drive there to fix problems.

PS: I haven't used it recently, but I've found ports/sysutils/autossh useful
in the past for these.
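To flesh out the autossh suggestion: the usual shape for a box without
inbound reachability is a persistent reverse tunnel back to a host you
control. A hedged sketch; the host name, port, and user are placeholders:

```
# on the remote firewall (e.g. started from rc.local):
autossh -M 0 -f -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -R 2222:localhost:22 tunnel@reachable-host.example

# from the admin side, log in to the tunnel endpoint, then:
ssh -p 2222 root@localhost      # lands on the remote firewall
```

With -M 0 autossh has no monitor port, so the ServerAlive options are what
detect and restart a dead tunnel.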



Re: isakmpd set up

2017-01-05 Thread Stuart Henderson
On 2017-01-02, Peter Fraser  wrote:
> A charity that I support has been having trouble with its internet provider
> (Rogers).
> The problem I have is that Roger is the only supplier that is available that
> will
> give a fixed IP address.
>
> I want the fixed IP address so I don't have to drive there to fix problems.
>
> It occurred to me that if I could get a VPN set up automatically when their
> OpenBSD  firewall boots.
> I could then use the VPN to reach back into their computer.
>
> Having never set up a VPN using OpenBSD I started by reading, and I was left
> very confused.
>
> I came up with:
>
> On my firewall I have /etc/ipsec.conf
>
> ike passive from egress to 192.168.254/24 peer 192.168.254.1 srcid thinkage.ca
> dstid kwaccessability.ca tag ipsec-kwa
> ike passive from 192.102.11.0/24 to 192.168.254.0/24 peer 192.168.254.1 srcid
> thinkage.ca  dstid kwaccessability.ca tag ipsec-kwa

Because you don't know the other side's IP address, use "to any" here
to set it as the "default peer", i.e. the peer that matches traffic from a
destination where you don't have a specific IP configuration in isakmpd.conf.
Remove "peer 192.168.254.1".

Don't rely on shortcuts like 192.168.254/24; use the proper 192.168.254.0/24.
It might not be a problem, but it's something else to go wrong (or something that
might work now but go wrong later). Not worth the typing it saves.

So I'd try something like

ike passive from {egress, 192.102.11.0/24} to any srcid thinkage.ca \
dstid kwaccessability.ca tag ipsec-kwa

> on their firewall
>
> ike  from egress to 192.102.11/24 peer 192.102.11.1 srcid kwaccessability.ca
> dstid thinkage.ca tag ipsec-kwa
> ike  from 192.168.254/24 to 192.102.11/24 peer 192.102.11.1 srcid
> kwaccessability.ca dstid thinkage.ca tag ipsec-kwa
>
> I also opened up the firewall to allow packets in from both networks without
> restrictions,
> something I will have to clean up later
>
> On both systems I have isakmpd_flags=-K -v -D A=10

After reading code and trying things out I settled on using this as my
standard config for systems where I'm interested in getting logging out of
isakmpd:

isakmpd_flags="-Kv -D0=29 -D1=49 -D2=10 -D3=30 -D5=20 -D6=30 -D8=30 -D9=30 
-D10=20"

Then if there's something particular I need to look at I'll bump that
area's logging based on looking at the source code.

> because of some of the readings I also put on both systems into
> /etc/hostname.enc0
> up

Not needed.

> when I try to start isakmpd on the remote system I get only a message about
> privilege droping.

Are you doing anything funny with logging setup?

Are you actually loading ipsec.conf (ipsec=YES in rc.conf.local or manually
running ipsecctl -f /etc/ipsec.conf)?
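For completeness, the wiring being asked about looks roughly like this
(assuming rcctl(8); the flags and rule file are the ones from the thread):

```
# enable and flag the daemon, and have ipsec.conf loaded at boot
rcctl enable isakmpd
rcctl set isakmpd flags -K
rcctl enable ipsec

# load the rules now and inspect what actually got installed
ipsecctl -f /etc/ipsec.conf
ipsecctl -s all
```

ipsecctl -s all showing no flows after a load is a quick sign the rules never
made it into the kernel.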

> Jan  2 16:24:00 gateway isakmpd[71980]: ipsec_get_id: invalid section 
> to-192.168.254/24 network 192.168.254

That might be connected with your truncated IP addresses.

> Jan  2 16:24:00 gateway isakmpd[71980]: connection_init: could not record 
> passive connection "from-ste0-to-192.168.254/24"

ste, not my favourite nic ;)



Re: nslookup and dig output when using rebound

2017-01-05 Thread Stuart Henderson
On 2017-01-04, Ted Unangst  wrote:
> Glenn Faustino wrote:
>> Hi,
>> 
>> The output of nslookup and dig when using rebound are like these:
>
> this finally annoyed me enough the other day i made a patch.

Oh please don't. This is important information.

>
> Index: bin/dig/dighost.c
> ===================================================================
> RCS file: /cvs/src/usr.sbin/bind/bin/dig/dighost.c,v
> retrieving revision 1.15
> diff -u -p -r1.15 dighost.c
> --- bin/dig/dighost.c 28 Sep 2015 15:55:54 -  1.15
> +++ bin/dig/dighost.c 31 Dec 2016 20:26:11 -
> @@ -2854,7 +2854,7 @@ recv_done(isc_task_t *task, isc_event_t 
>   * sent to 0.0.0.0, :: or to a multicast addresses.
>   * XXXMPA broadcast needs to be handled here as well.
>   */
> - if ((!isc_sockaddr_eqaddr(&query->sockaddr, &any) &&
> + if (0) if ((!isc_sockaddr_eqaddr(&query->sockaddr, &any) &&
>!isc_sockaddr_ismulticast(&query->sockaddr)) ||
>   isc_sockaddr_getport(&query->sockaddr) !=
>   isc_sockaddr_getport(&sevent->address)) {



Re: The right way to delete elements from ohash

2017-01-05 Thread Maciej Adamczyk
Yes, it's the simplest and probably the fastest way (and I care about
simplicity more than speed anyway), so I like it more than the other
option.
It seems there's no third one, except for replacing ohash or forking it
(e.g. to not downsize unless explicitly requested).
I think I'll stay with the simplest option (just fix the bug in my code)
for now.
Thanks to everybody for the answers.

Maciej Adamczyk



From: Ted Unangst
Sent: Wednesday, January 04, 2017 7:16PM
To: Maciej Adamczyk
Cc: Misc
Subject: Re: The right way to delete elements from ohash

Maciej Adamczyk wrote:  

  The task is simple: remove all elements that satisfy some property.  

  * add another layer of looping and keep iterating as long as a pass over 
  the map deleted at least one element.

this should be fast enough. even with looping and resizing, it's
asymptotically linear.