incorrect pf rule?

2020-11-28 Thread Родин Максим

Hello
I have a small 5-year-old home router (upgraded to OpenBSD 6.8-stable) 
with a static public IP from my internet provider (obtained via DHCP) and a 
simple http/https server (OpenBSD httpd) on my network in a VirtualBox 
VM (OpenBSD 6.8) which has the static IP 192.168.1.102.
The http server is available from the internal network on http and https 
ports when 192.168.1.102 is used.
To make the http server reachable from outside, I'm trying to use the 
following PF rule on my router:

...
web_server = "192.168.1.102"
web_ports = "{ http https }"...
...
# Web-server
pass in log on egress inet proto tcp \
from ! to (egress) port $web_ports \
rdr-to $web_server

The problem is that only port 80 seems to be open from the outside.
I used several online port scanners to check this, and all of them report:
port 80 OPEN
port 443 CLOSED

The whole ruleset is below:
"""
router root ~ # grep -v '^#' /etc/pf.conf 




int_if = "{ vether1 em1 em3 athn0 }"
beeline_tv = "{ em0 em2 }"
table  { 0.0.0.0/8 10.0.0.0/8 127.0.0.0/8 169.254.0.0/16 \
   172.16.0.0/12 192.0.2.0/24 \
   192.168.0.0/16 198.18.0.0/15 198.51.100.0/24\
   }
table  persist file "/etc/pf/bad_ip"
asterisk_server = "192.168.1.101"
web_server = "192.168.1.102"
web_ports = "{ http https }"

block log all

set block-policy drop
set skip on lo

match in all scrub (no-df random-id max-mss 1440)
match out on egress inet from (vether1:network) to any nat-to (egress:0)

pass out quick inet
pass in on $int_if inet

pass on $beeline_tv allow-opts

pass in on egress inet proto tcp from ! \
to (egress) port 22 keep state \
(max-src-conn 2, max-src-conn-rate 2/300, \
overload  flush global)

pass in on egress inet proto udp from ! \
to (egress) port 5060 keep state \
(max-src-states 1) rdr-to $asterisk_server
pass in on $int_if inet proto udp from (vether1:network) \
to (egress) port 5060 \
rdr-to $asterisk_server


pass in on egress inet proto udp from ! \
to (egress) port 1:2 keep state \
(max-src-states 1) rdr-to $asterisk_server

pass in on $int_if inet proto udp from (vether1:network) \
to (egress) port 1:2 \
rdr-to $asterisk_server


pass in on egress inet proto { tcp udp } from ! \
to (egress) port { 5 }  rdr-to 192.168.1.65

pass in log on egress inet proto tcp from ! \
to (egress) port $web_ports \
rdr-to $web_server
"""



I added some log options to try to understand which rule might be blocking 
access to the https port from the outside, but the log shows only the following:



"""
router root ~ # tcpdump -n -e -ttt -i pflog0 port 80 or port 443

tcpdump: WARNING: snaplen raised from 116 to 160
tcpdump: listening on pflog0, link-type PFLOG
Nov 29 08:28:44.602109 rule 23/(match) pass in on vether0: 
5.101.123.139.40470 > 89.179.243.222.80: S 2282440086:2282440086(0) win 
29200  (DF) [tos 0x28]

"""


Access to the http port is logged, but access to the https port never 
appears in the log.
The other rdr-to rules in my ruleset all work as expected, e.g. 
udp port 5060 and the udp port range 1:2 are redirected to the 
VirtualBox VM (Asterisk).
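One way to narrow this down (a sketch, untested against this ruleset) is pf's
log (matches) option, which makes every subsequently matching rule log the
packet instead of only the final one:

```
# Illustrative only: after this match rule, each later rule that matches
# web traffic produces its own pflog entry, so
# tcpdump -n -e -ttt -i pflog0 shows the whole decision path for port 443.
match in log (matches) on egress inet proto tcp \
	from any to (egress) port { http https }
```

That would show which rule ends up handling the https connections, e.g.
whether an earlier rule passes them without the rdr-to.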




--
Best regards
Maksim Rodin



Re: ldapd.conf certificate directive not working?

2020-11-28 Thread Jonathan Matthew
On Sun, Nov 29, 2020 at 12:23:51AM +0100, Theo Buehler wrote:
> On Sun, Nov 29, 2020 at 12:00:29AM +0100, Martijn van Duren wrote:
> > On Sat, 2020-11-28 at 23:08 +0100, Theo Buehler wrote:
> > > > "If the certificate name is an absolute path, a .crt and .key
> > > > extension are appended to form the certificate path and key path
> > > > respectively."
> > > > This part does not seem to work at all.
> > > > It neither tries to find the certificates at the absolute path as
> > > > given, nor appends a .crt or .key extension to the absolute path when
> > > > no extension is used in the config.
> > > > 
> > > > Or am I doing it completely wrong?
> > > 
> > > It's a bug. If the certificate path is absolute, faulty short-circuiting
> > > logic would result in first correctly appending ".crt" to the path, then
> > > incorrectly prepending "/etc/ldap/cert".
> > > 
> > > You can see the problem with a config containing
> > > 
> > > listen on lo0 port 6636 tls certificate "/bogus/lo0"
> > > 
> > > $ ldapd -vv -f ldapd.conf -n
> > > ...
> > > loading certificate file /etc/ldap/certs//bogus/lo0.crt
> > > ldapd.conf:5: cannot load certificate: /bogus/lo0
> > > ...
> > > 
> > > The diff below avoids calling bsnprintf() twice for an absolute
> > > certificate path.
> > > 
> > 
> > Wouldn't it be more future idiot proof if we were a little more verbose?
> > But if you prefer, your diff also looks good to me.
> 
> I have no strong preference either way (I would probably use yours if it
> were my code). Feel free to go ahead with your diff and my ok after
> giving jmatthew a bit of time to respond.

I'm ok with either, but I prefer Martijn's diff.



Re: ldapd.conf certificate directive not working?

2020-11-28 Thread Theo Buehler
On Sun, Nov 29, 2020 at 12:00:29AM +0100, Martijn van Duren wrote:
> On Sat, 2020-11-28 at 23:08 +0100, Theo Buehler wrote:
> > > "If the certificate name is an absolute path, a .crt and .key
> > > extension are appended to form the certificate path and key path
> > > respectively."
> > > This part does not seem to work at all.
> > > It neither tries to find the certificates at the absolute path as
> > > given, nor appends a .crt or .key extension to the absolute path when
> > > no extension is used in the config.
> > > 
> > > Or am I doing it completely wrong?
> > 
> > It's a bug. If the certificate path is absolute, faulty short-circuiting
> > logic would result in first correctly appending ".crt" to the path, then
> > incorrectly prepending "/etc/ldap/cert".
> > 
> > You can see the problem with a config containing
> > 
> > listen on lo0 port 6636 tls certificate "/bogus/lo0"
> > 
> > $ ldapd -vv -f ldapd.conf -n
> > ...
> > loading certificate file /etc/ldap/certs//bogus/lo0.crt
> > ldapd.conf:5: cannot load certificate: /bogus/lo0
> > ...
> > 
> > The diff below avoids calling bsnprintf() twice for an absolute
> > certificate path.
> > 
> 
> Wouldn't it be more future idiot proof if we were a little more verbose?
> But if you prefer, your diff also looks good to me.

I have no strong preference either way (I would probably use yours if it
were my code). Feel free to go ahead with your diff and my ok after
giving jmatthew a bit of time to respond.



Re: ldapd.conf certificate directive not working?

2020-11-28 Thread Martijn van Duren
On Sat, 2020-11-28 at 23:08 +0100, Theo Buehler wrote:
> > "If the certificate name is an absolute path, a .crt and .key
> > extension are appended to form the certificate path and key path
> > respectively."
> > This part does not seem to work at all.
> > It neither tries to find the certificates at the absolute path as
> > given, nor appends a .crt or .key extension to the absolute path when
> > no extension is used in the config.
> > 
> > Or am I doing it completely wrong?
> 
> It's a bug. If the certificate path is absolute, faulty short-circuiting
> logic would result in first correctly appending ".crt" to the path, then
> incorrectly prepending "/etc/ldap/cert".
> 
> You can see the problem with a config containing
> 
> listen on lo0 port 6636 tls certificate "/bogus/lo0"
> 
> $ ldapd -vv -f ldapd.conf -n
> ...
> loading certificate file /etc/ldap/certs//bogus/lo0.crt
> ldapd.conf:5: cannot load certificate: /bogus/lo0
> ...
> 
> The diff below avoids calling bsnprintf() twice for an absolute
> certificate path.
> 

Wouldn't it be more future idiot proof if we were a little more verbose?
But if you prefer, your diff also looks good to me.

martijn@

Index: parse.y
===
RCS file: /cvs/src/usr.sbin/ldapd/parse.y,v
retrieving revision 1.36
diff -u -p -r1.36 parse.y
--- parse.y 24 Jun 2020 07:20:47 -  1.36
+++ parse.y 28 Nov 2020 22:54:42 -
@@ -1279,12 +1279,17 @@ load_certfile(struct ldapd_config *env, 
goto err;
}
 
-   if ((name[0] == '/' &&
-!bsnprintf(certfile, sizeof(certfile), "%s.crt", name)) ||
-   !bsnprintf(certfile, sizeof(certfile), "/etc/ldap/certs/%s.crt",
-   name)) {
-   log_warn("load_certfile: path truncated");
-   goto err;
+   if (name[0] == '/') {
+   if (!bsnprintf(certfile, sizeof(certfile), "%s.crt", name)) {
+   log_warn("load_certfile: path truncated");
+   goto err;
+   }
+   } else {
+   if (!bsnprintf(certfile, sizeof(certfile),
+   "/etc/ldap/certs/%s.crt", name)) {
+   log_warn("load_certfile: path truncated");
+   goto err;
+   }
}
 
log_debug("loading certificate file %s", certfile);
@@ -1298,12 +1303,17 @@ load_certfile(struct ldapd_config *env, 
goto err;
}
 
-   if ((name[0] == '/' &&
-!bsnprintf(certfile, sizeof(certfile), "%s.key", name)) ||
-   !bsnprintf(certfile, sizeof(certfile), "/etc/ldap/certs/%s.key",
-   name)) {
-   log_warn("load_certfile: path truncated");
-   goto err;
+   if (name[0] == '/') {
+   if (!bsnprintf(certfile, sizeof(certfile), "%s.key", name)) {
+   log_warn("load_certfile: path truncated");
+   goto err;
+   }
+   } else {
+   if (!bsnprintf(certfile, sizeof(certfile),
+   "/etc/ldap/certs/%s.key", name)) {
+   log_warn("load_certfile: path truncated");
+   goto err;
+   }
}
 
log_debug("loading key file %s", certfile);




Re: ldapd.conf certificate directive not working?

2020-11-28 Thread Theo Buehler
> "If the certificate name is an absolute path, a .crt and .key
> extension are appended to form the certificate path and key path
> respectively."
> This part does not seem to work at all.
> It neither tries to find the certificates at the absolute path as
> given, nor appends a .crt or .key extension to the absolute path when
> no extension is used in the config.
> 
> Or am I doing it completely wrong?

It's a bug. If the certificate path is absolute, faulty short-circuiting
logic would result in first correctly appending ".crt" to the path, then
incorrectly prepending "/etc/ldap/cert".

You can see the problem with a config containing

listen on lo0 port 6636 tls certificate "/bogus/lo0"

$ ldapd -vv -f ldapd.conf -n
...
loading certificate file /etc/ldap/certs//bogus/lo0.crt
ldapd.conf:5: cannot load certificate: /bogus/lo0
...

The diff below avoids calling bsnprintf() twice for an absolute
certificate path.

Index: parse.y
===
RCS file: /cvs/src/usr.sbin/ldapd/parse.y,v
retrieving revision 1.36
diff -u -p -r1.36 parse.y
--- parse.y 24 Jun 2020 07:20:47 -  1.36
+++ parse.y 28 Nov 2020 21:40:13 -
@@ -1281,8 +1281,9 @@ load_certfile(struct ldapd_config *env, 
 
if ((name[0] == '/' &&
 !bsnprintf(certfile, sizeof(certfile), "%s.crt", name)) ||
-   !bsnprintf(certfile, sizeof(certfile), "/etc/ldap/certs/%s.crt",
-   name)) {
+   (name[0] != '/' &&
+!bsnprintf(certfile, sizeof(certfile), "/etc/ldap/certs/%s.crt",
+   name))) {
log_warn("load_certfile: path truncated");
goto err;
}
@@ -1300,8 +1301,9 @@ load_certfile(struct ldapd_config *env, 
 
if ((name[0] == '/' &&
 !bsnprintf(certfile, sizeof(certfile), "%s.key", name)) ||
-   !bsnprintf(certfile, sizeof(certfile), "/etc/ldap/certs/%s.key",
-   name)) {
+   (name[0] != '/' &&
+!bsnprintf(certfile, sizeof(certfile), "/etc/ldap/certs/%s.key",
+   name))) {
log_warn("load_certfile: path truncated");
goto err;
}



Re: pflogd: Corrupted log file, move it away

2020-11-28 Thread Jan Stary
On Nov 28 16:13:35, s...@spacehopper.org wrote:
> On 2020-11-27, Harald Dunkel  wrote:
> > Hi folks,
> >
> > I got a bazillion of error messages in /var/log/daemon
> >
> >:
> > Nov 27 08:33:25 gate6a pflogd[26893]: Corrupted log file.
> > Nov 27 08:33:25 gate6a pflogd[26893]: Invalid/incompatible log file, move 
> > it away
> > Nov 27 08:33:25 gate6a pflogd[26893]: Logging suspended: open error
> > Nov 27 08:33:32 gate6a pflogd[2985]: Corrupted log file.
> > Nov 27 08:33:32 gate6a pflogd[2985]: Invalid/incompatible log file, move it 
> > away
> > Nov 27 08:33:32 gate6a pflogd[2985]: Logging suspended: open error
> >:
> >
> > Problem is, pflogd doesn't tell which one. I am logging to /var/log/\
> > pflog{0..3}. Nothing else but pflogd is writing these files. They are
> > rotated every hour, using the default
> 
> It is easy enough to add the filename, but adding that to the log
> might suggest to users that things are set up to handle multiple pflogd
> processes and that is not the case.
> 
> Various parts of the system would need changing in order to handle this.
> Currently there is no way to distinguish between multiple "priv" processes
> as the process title doesn't show the command-line flags. In order to
> support multiple pflogd processes this would need adding, then the rc.d
> scripts and default newsyslog.conf entry would need updating to use them.
> 
> > I can't remember having seen this problem for 6.7.
> 
> I think you got lucky.

Maybe I got lucky too - please help me understand.

I have two pflog interfaces: pflog0 (created by default)
and pflog1 (created with 'up' in /etc/hostname.pflog1).

pflog0 logs the suspicious network traffic aimed at my machine:

block log all
# pass legit stuff

pflog1 logs all the SIP traffic to and from my SIP phone
(on an internal network):

match in  log (all, to pflog1) on $int from $sip
match out log (all, to pflog1) on $int to   $sip

There are two corresponding pflogd processes: one is started
with pflogd_flags="-s 1500" in /etc/rc.conf.local, and becomes

13680 pflogd: [running] -s 1500 -i pflog0 -f /var/log/pflog
84985 pflogd: [priv]

The other is started in /etc/rc.local as
/sbin/pflogd -s 1500 -i pflog1 -f /var/log/siplog
which runs

10562 pflogd: [running] -s 1500 -i pflog1 -f /var/log/siplog
94396 pflogd: [priv]

The two log files (/var/log/pflog, /var/log/siplog)
are rotated as follows:

/var/log/pflog  600 3650 * @T00 ZB "pkill -HUP -u root -U root -t - -x pflogd"
/var/log/siplog 600 3650 * @T00 ZB "pkill -HUP -u root -U root -t - -x pflogd"

I have had the same messages as the OP describes,
until I realized I had missed the B in 'ZB' for siplog,
which indeed rendered the file 'invalid' on rotation
because of the textual 'logfile turned over' message.

Since fixing that, I haven't seen the message
(that would be a couple of weeks now).

If I'm reading you right, the rotation sends a SIGHUP to each
of the pflogd processes; twice, in fact: after rotating each
of the two files. Is that the case?

That would indeed be a problem; namely, it would break the nice
sequence of one rotated logfile per day.

However, looking at the timestamps of the few first and last entries
in pflog.1, pflog.0 and pflog (and, similarly, siplog{1,0,}), they
seem to follow one another as they should - one beginning just
where the previous one ends, none of them rotated empty.

If I read the newsyslog lines right, each of

13680 pflogd: [running] -s 1500 -i pflog0 -f /var/log/pflog
84985 pflogd: [priv]
10562 pflogd: [running] -s 1500 -i pflog1 -f /var/log/siplog
94396 pflogd: [priv]

is getting HUP'd, right? Would it be enough to HUP the [running] child?

 |-+= 84985 root pflogd: [priv] (pflogd)
 | \--- 13680 _pflogd pflogd: [running] -s 1500 -i pflog0 -f /var/log/pflog 
(pflogd)

 |-+= 94396 root pflogd: [priv] (pflogd)
 | \--- 10562 _pflogd pflogd: [running] -s 1500 -i pflog1 -f /var/log/siplog 
(pflogd)

Probably not, based on what you said about [priv]; but the [running]
processes can be distinguished in newsyslog.conf with "pkill -xf pflog0".
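If HUPping only the [running] children did turn out to suffice, the rotation
entries might look something like this (an untested sketch; the -f patterns
matching the process titles shown above are an assumption):

```
# Illustrative newsyslog.conf lines: pkill -f matches against the full
# argument list, so each pattern selects only the pflogd reading that
# interface (the [priv] titles contain no "-i pflogN" and do not match).
/var/log/pflog  600 3650 * @T00 ZB "pkill -HUP -f 'pflogd: .* -i pflog0'"
/var/log/siplog 600 3650 * @T00 ZB "pkill -HUP -f 'pflogd: .* -i pflog1'"
```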

Jan



Re: Reinstall to upgrade

2020-11-28 Thread Gregory Edigarov



On 11/25/20 3:26 PM, Manuel Giraud wrote:
> Hi,
>
> I'd like to upgrade (on -current) and, in the process, remove some cruft
> accumulated over the years. I usually do sysupgrade and sysclean for
> system.
>
> But for packages, I think I would be better to reinstall everything
> since "pkg_check -F" does not seems to complain and I can see I have,
> for example, some firefox-57 files left.
>
> I think I could do the following but I don't know if it is safe:
> - sysupgrade (+ sysclean)
> - pkg_info -mz > mypkg
> - umount /usr/local
> - newfs partition_of_usr_local
> - mount /usr/local
> - pkg_add -l mypkg
>
> Or maybe, I should dump, do a complete reinstall, pkg_add -l mypkg,
> restore /home and, tediously, restore some /etc files.
> How would you do this?
Here's what I found easy to do periodically on my home computers, when I
feel it is time to de-clutter:

#!/bin/sh
rm -rf /usr/local/* /var/db/pkg/* /var/db/pkg/.* /etc/rc.d/*_daemon \
    /etc/rc.d/avahi*
for i in \
adobe-source-code-pro \
ansible \
borgbackup \
chromium \
emacs--gtk3 \
gnupg-- \
dmenu \
firefox \
thunderbird \
rsync-- \
git \
gpicview \
go \
rust \
inconsolata-font \
ipcalc \
mplayer \
mtr-- \
nmap \
ntfs_3g \
openvpn \
pidgin-- \
pv \
spectrwm \
splint \
tcptraceroute \
telegram-purple \
terminus-font \
transmission \
vim--gtk2 \
xpdf \
zsh ; do pkg_add  -v $i; done

so when I run it I easily get back a system with the most
essential software installed.



Recommendations regarding configuring IPv6 for the first time

2020-11-28 Thread Erik Lauritsen
Hi,

I'm slowly beginning to look at IPv6 in preparation for my ISP's
IPv6 roll-out.

Currently I'm running an IPv4 LAN with physically segmented networks.
I'm using dhcpd with fixed IP addresses based upon MAC, and have these
setup in Unbound as well, as I have many clients and don't want to
remember IP addresses. Also I'm using NAT as I have a couple of boxes
running on a public IP.

I'm old school IPv4, but I think I have understood the basics of IPv6,
however I'm still not quite sure what the best approach to a similar
IPv6 setup to the above would be.

Also, what are the general security recommendations when rolling out IPv6?

Any IPv6 newbie advice would be greatly appreciated.

Am I the only one who finds IPv6 a pain to wrap my head around
compared to IPv4?

Kind regards



Re: development best practices

2020-11-28 Thread Stuart Henderson
Your i3status problem with an out-of-ports build is probably because the
configure script runs "make" with a file that has GNU make syntax. Running
it with "MAKE=gmake" in the environment fixes this (this is one of many
things that are set automatically by the ports infrastructure).

On 2020-11-28, Hannu Vuolasaho  wrote:
> la 28. marrask. 2020 klo 16.11 Stefan Sperling (s...@stsp.name) kirjoitti:
>
>> You can then extract your fix and apply it to an upstream development tree.
>> If additional patches are required to get the software to compile, you
>> might as well attempt to upstream those changes, too, while at it.
>>
> Is there a way to follow the development repository within the ports tree?

Not directly. Point MASTER_SITES/DISTNAME at a tar; if it's on GitHub
you can use their on-the-fly tar generation like this:

GH_ACCOUNT= i3
GH_PROJECT= i3status
GH_COMMIT=  3f27399d730bb9a66bebfed6aff2660828687ca5
DISTNAME=   i3status-2.13pl20201009

and remove MASTER_SITES and EXTRACT_SUFX. "make makesum" to download
and update distinfo. "make patch", if there are conflicts fix them up and
"make update-patches", "make clean", and "make patch" again.
Assuming you get it to build successfully you'll need "make plist" to
update pkg/PLIST.

When you use GH_* variables, DISTNAME is used to set the name of the
file the tar is written to locally, and the default name for the package
created - the format I showed above avoids interfering with possible
future releases (ports has some checks to avoid a version number "going
backwards" and avoid some changes being made to the port without
changing the version or revision number - it can be cleaned separately
but it's easier to avoid it in the first place).

> The scenario is that I write some patch which fixes something and then
> gets to the project tree. Then the testing and fixing cycle starts again.
>
> I know a few programs which are easy to compile in ~/src but writing a
> port is PITA.

There are some things like this, but once you're familiar with ports
it's usually less of a PITA to write a port rather than figure out
what the build/install has done to your system if/when you want to
remove it.




Re: pflogd: Corrupted log file, move it away

2020-11-28 Thread Stuart Henderson
On 2020-11-27, Harald Dunkel  wrote:
> Hi folks,
>
> I got a bazillion of error messages in /var/log/daemon
>
>:
> Nov 27 08:33:25 gate6a pflogd[26893]: Corrupted log file.
> Nov 27 08:33:25 gate6a pflogd[26893]: Invalid/incompatible log file, move it 
> away
> Nov 27 08:33:25 gate6a pflogd[26893]: Logging suspended: open error
> Nov 27 08:33:32 gate6a pflogd[2985]: Corrupted log file.
> Nov 27 08:33:32 gate6a pflogd[2985]: Invalid/incompatible log file, move it 
> away
> Nov 27 08:33:32 gate6a pflogd[2985]: Logging suspended: open error
>:
>
> Problem is, pflogd doesn't tell which one. I am logging to /var/log/\
> pflog{0..3}. Nothing else but pflogd is writing these files. They are
> rotated every hour, using the default

It is easy enough to add the filename, but adding that to the log
might suggest to users that things are set up to handle multiple pflogd
processes and that is not the case.

Various parts of the system would need changing in order to handle this.
Currently there is no way to distinguish between multiple "priv" processes
as the process title doesn't show the command-line flags. In order to
support multiple pflogd processes this would need adding, then the rc.d
scripts and default newsyslog.conf entry would need updating to use them.

> I can't remember having seen this problem for 6.7.

I think you got lucky.

> (Not to mention that syslog should try to avoid printing the same
> message again and again.)

Some kind of "last 3 messages repeated X times" might be nice indeed,
but every one of the messages you pasted are different (at least different
pid).

> I am legally bound to provide log files, so this is a huge problem.
> Every insightful comment is highly appreciated.
> Harri
>
>

I think it would be better to simplify the setup and use a single log
for pflogd. You can split in postprocessing with commands like this

tcpdump -r /var/log/pflog -w out-vlan2.pcap action block and on vlan2

using whatever BPF filter you like (ports, IP addresses, whatever).

Some people like to run tcpdump all the time, reading from the pflog
interface and writing plaintext to syslog in realtime. That is another
possibility, but I don't think it is a good idea, because it will use
tcpdump's dissectors to decode the packets, and the quality of these is
not always great. Better to write pcap files and handle any decoding
later, so that if a dissector causes a crash it doesn't stop logging.




Re: development best practices

2020-11-28 Thread Hannu Vuolasaho
la 28. marrask. 2020 klo 16.11 Stefan Sperling (s...@stsp.name) kirjoitti:

> You can then extract your fix and apply it to an upstream development tree.
> If additional patches are required to get the software to compile, you
> might as well attempt to upstream those changes, too, while at it.
>
Is there a way to follow the development repository within the ports tree?

The scenario is that I write some patch which fixes something and then
gets to the project tree. Then the testing and fixing cycle starts again.

I know a few programs which are easy to compile in ~/src but writing a
port is PITA.

Naturally that kind of port won't ever get to CVS but for testing and
development.

Best regards,
Hannu Vuolasaho



Re: pflogd: Corrupted log file, move it away

2020-11-28 Thread Jan Stary
On Nov 27 09:02:04, harald.dun...@aixigo.com wrote:
> Hi folks,
> 
> I got a bazillion of error messages in /var/log/daemon
> 
> :
> Nov 27 08:33:25 gate6a pflogd[26893]: Corrupted log file.
> Nov 27 08:33:25 gate6a pflogd[26893]: Invalid/incompatible log file, move it 
> away

Last time I had this, it was a missing B in the rotation flag,
slapping a "logfile turned over" text message into a binary log file.

> Nov 27 08:33:25 gate6a pflogd[26893]: Logging suspended: open error
> Nov 27 08:33:32 gate6a pflogd[2985]: Corrupted log file.
> Nov 27 08:33:32 gate6a pflogd[2985]: Invalid/incompatible log file, move it 
> away
> Nov 27 08:33:32 gate6a pflogd[2985]: Logging suspended: open error
> :
> 
> Problem is, pflogd doesn't tell which one. I am logging to /var/log/\
> pflog{0..3}. Nothing else but pflogd is writing these files. They are
> rotated every hour, using the default
> 
> /var/log/pflog   600  3 250  * ZB "pkill -HUP -u root -U root -t - -x 
> pflogd"

Does that mean you only keep the last 4 x 250 kilobytes of logs
(or whatever it accumulates to during the hour)?
Is that your intention?

> in /etc/newsyslog.conf. crontab entry:
> 
> 0 * * * * /usr/bin/newsyslog




Re: pflogd: Corrupted log file, move it away

2020-11-28 Thread Jan Stary
On Nov 27 09:02:04, harald.dun...@aixigo.com wrote:
> Hi folks,
> 
> I got a bazillion of error messages in /var/log/daemon
> 
> :
> Nov 27 08:33:25 gate6a pflogd[26893]: Corrupted log file.
> Nov 27 08:33:25 gate6a pflogd[26893]: Invalid/incompatible log file, move it 
> away
> Nov 27 08:33:25 gate6a pflogd[26893]: Logging suspended: open error
> Nov 27 08:33:32 gate6a pflogd[2985]: Corrupted log file.
> Nov 27 08:33:32 gate6a pflogd[2985]: Invalid/incompatible log file, move it 
> away
> Nov 27 08:33:32 gate6a pflogd[2985]: Logging suspended: open error
> :
> 
> Problem is, pflogd doesn't tell which one. I am logging to /var/log/\
> pflog{0..3}.

To be sure, are these the names?

/var/log/pflog0
/var/log/pflog1
/var/log/pflog2
/var/log/pflog3

> Nothing else but pflogd is writing these files.

How, exactly? What are the pf.conf log rules?
What are the pflogd -i command lines?

> They are
> rotated every hour, using the default
> 
> /var/log/pflog   600  3 250  * ZB "pkill -HUP -u root -U root -t - -x 
> pflogd"

If the above names are correct, this does not rotate any of them.
Do you mean /var/log/pflog.0 etc, being the rotated copies
of a single /var/log/pflog ?




Re: development best practices

2020-11-28 Thread Stefan Sperling
On Sat, Nov 28, 2020 at 12:27:47PM +, björn gohla wrote:
> hi all,
> 
> i'm fairly new to openbsd. and i've run into the following problem,
> where i want to hack a project (most recently trying to fix a possible
> issue with i3status), but building from the git source
> tree fails.
> 
> now, in the specific case, i'm trying to build a version that,
> also exists in ports, so we know it can be built on openbsd; and i
> presume the various patches included with the port are what makes it
> work.
> 
> i could of course try to apply those patches and fix my issue. but then
> when i submit a PR upstream i'd have to remove them again. that seems
> cumbersome, especially if done repeatedly. that seems
> 
> so what is the best practice in this situation? should i just upstream
> the ports patches?

You could edit the source files which are extracted to /usr/ports/pobj/
when the port is built. If you modify a file the port has not patched
yet, create a copy of this file with a .orig filename extension first.

'make update-patches' in the port's directory will diff files against
their .orig versions and update the patches in the port's patches directory.

You can then extract your fix and apply it to an upstream development tree.
If additional patches are required to get the software to compile, you
might as well attempt to upstream those changes, too, while at it.



ldapd.conf certificate directive not working?

2020-11-28 Thread Родин Максим

Hello
When I use the following directive in ldapd.conf:
1)
...
listen on em0 ldaps
...
or
...
listen on em0 tls
...
and the certificate (em0.crt) and key (em0.key) files are in 
/etc/ldap/certs,

then "ldapd -n" shows OK.

When I use:
2)
...
listen on em0 ldaps certificate "/etc/ldap/certs/em0.crt"
or
listen on em0 ldaps certificate "/etc/ldap/certs/em0"
...
or
...
listen on em0 tls certificate "/etc/ldap/certs/em0.crt"
or
listen on em0 tls certificate "/etc/ldap/certs/em0"
...
then "ldapd -n" shows the following:
"/etc/ldapd.conf:10: cannot load certificate: /etc/ldap/certs/em0.crt
/etc/ldapd.conf:11: cannot load certificate: /etc/ldap/certs/em0.crt"
or
"/etc/ldapd.conf:10: cannot load certificate: /etc/ldap/certs/em0
/etc/ldapd.conf:11: cannot load certificate: /etc/ldap/certs/em0"

man ldapd.conf says:
"If no certificate name is specified, the /etc/ldap/certs directory is
searched for a file named by joining the interface name with a
.crt extension, e.g. /etc/ldap/certs/fxp0.crt."

This works OK
But the following:

"If the certificate name is an absolute path, a .crt and .key
extension are appended to form the certificate path and key path
respectively."
This part does not seem to work at all.
It neither tries to find the certificates at the absolute path as
given, nor appends a .crt or .key extension to the absolute path when no 
extension is used in the config.


Or am I doing it completely wrong?

--
Maksim Rodin



Re: development best practices

2020-11-28 Thread Ingo Schwarze
Hi Bjoern,

bjoern gohla wrote on Sat, Nov 28, 2020 at 12:27:47PM +:

> i'm fairly new to openbsd. and i've run into the following problem,
> where i want to hack a project (most recently trying to fix a possible
> issue with i3status), but building from the git source
> tree fails.
> 
> now, in the specific case, i'm trying to build a version that,
> also exists in ports, so we know it can be built on openbsd; and i
> presume the various patches included with the port are what makes it
> work.
> 
> i could of course try to apply those patches and fix my issue. but then
> when i submit a PR upstream i'd have to remove them again. that seems
> cumbersome, especially if done repeatedly.
> 
> so what is the best practice in this situation? should i just upstream
> the ports patches?

It really depends on the case at hand.

Some patches can be upstreamed, only nobody did the work yet.
Some patches cannot be upstreamed because upstream never makes releases.
Some patches cannot be upstreamed because OpenBSD developers
  disagree with upstream about whether they are needed/useful/right.
Some (few) patches cannot be upstreamed because they deliberately
  change functionality in the OpenBSD port only.

So, let's assume you end up with some patches that will have to
stay or at least to stay for now.

Some of those may be required in the port, but the upstream build
system may at least be able to do builds without them.  Sometimes,
development for upstream purposes can be done without having such
patches in your upstream development tree.

But you may well end up in a situation where a small number of
patches may be required in your checkout of the upstream code to
do upstream-style builds at all.  In that case, you just need to
be careful to not include these patches when sending patches upstream.
And whenever sending patches upstream, you should of course consider
whether those are indeed independent of any other patches you can't
help having in your tree.

Nobody can save you from the work of understanding every single
patch you use before trying to send anything upstream...

These ideas are good enough for a port with moderate amounts of
patches (like textproc/groff).  The x11/i3status port might be of
a similar category.  I have no idea whether working on a behemoth
of the www/chromium kind (which has over 750 patches) and submitting
patches upstream without causing mixups would be feasible on OpenBSD.
There is certainly a limit to practicality, somewhere.

Yours,
  Ingo



development best practices

2020-11-28 Thread björn gohla
hi all,

i'm fairly new to openbsd. and i've run into the following problem,
where i want to hack a project (most recently trying to fix a possible
issue with i3status), but building from the git source
tree fails.

now, in the specific case, i'm trying to build a version that,
also exists in ports, so we know it can be built on openbsd; and i
presume the various patches included with the port are what makes it
work.

i could of course try to apply those patches and fix my issue. but then
when i submit a PR upstream i'd have to remove them again. that seems
cumbersome, especially if done repeatedly.

so what is the best practice in this situation? should i just upstream
the ports patches?

thanks.

--
cheers,
björn



Re: Large Filesystem

2020-11-28 Thread infoomatic
On 28.11.20 05:51, Nick Holland wrote:
> I've heard that from a lot of people.
> And yet, those same people, when pressed, will tell you that a ZFS-equipped
> system will crash much more often than simpler file systems.  That's one
> heck of a real penalty to pay for a theoretical advantage.
>
> I've setup some cool stuff using ZFS (dynamically sized partitions,
> snapshots, zfs sends of snapshots to other machines, etc), but man, I
> spent a comical amount of time babysitting and fixing file system
> problems.  The 1980s are over, file systems should Just Work now.
> If you are babysitting them constantly, something ain't right.  If
> someone wants to add a ZFS-like "scrubbing" feature to ffs, I'd be all
> for it. But not for the penalties that come with ZFS.

No idea what you did, but I have never had problems with ZFS (in ~10
years, 250 servers, a few PB of storage) on Solaris and FreeBSD; on Linux, yes.

Other than that, I can only highly recommend reconsidering ZFS; my
experience is that bit rot on modern high-density disks _is_ a problem.
Sorry for the offtopic.