support new

2020-11-15 Thread Emre Kal
0
C Turkey
P
T Istanbul
Z 34330
O Consultant
I Emre Kal
A Levent, Besiktas
M e...@tuta.io
U
B
X
N OpenBSD consulting and support. Experienced in OpenBSD httpd, relayd and 
Packet Filter (PF).



address lists in iked.conf?

2020-11-15 Thread Harald Dunkel

Hi folks,

would it be possible to support address lists in iked.conf(5),
similar to ipsec.conf(5)?
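
For context, ipsec.conf(5) accepts brace-enclosed address lists in flow rules,
along these lines (the addresses are purely illustrative, not taken from this
mail):

flow esp from { 192.168.1.0/24, 192.168.2.0/24 } to { 10.0.1.0/24, 10.0.2.0/24 } peer 203.0.113.1

The question is whether iked.conf(5) policies could accept the same kind of
list in their "from" and "to" fields.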


Regards
Harri



OpenLDAP under 6.8 - no intermediate certs in chain

2020-11-15 Thread Paul B. Henson
I just updated one of my servers running 6.7 to 6.8, and am having a
problem with openldap. I have the intermediate cert and root CA in a
file referenced by the openldap config:

TLSCACertificateFile /etc/openldap/cabundle.crt

Under 6.7 with the openldap port from that version, this results in the
chain being served:

Certificate chain
 0 s:CN = ldap-netsvc.pbhware.com
   i:C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
 1 s:C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
   i:O = Digital Signature Trust Co., CN = DST Root CA X3
 2 s:O = Digital Signature Trust Co., CN = DST Root CA X3
   i:O = Digital Signature Trust Co., CN = DST Root CA X3

However, under 6.8 with the newer openldap 2.4.53 port, only the server
cert itself is being served, not the intermediate or root:

Certificate chain
 0 s:CN = ldap-netsvc.pbhware.com
   i:C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3

This of course causes clients to fail to validate the server cert :(.

I'm running openldap 2.4.53 on other operating systems and as far as I
know there's no change in behavior with it. So I'm guessing there's an
interoperability issue between openbsd libressl and openldap that's
causing this problem?

Do I need to configure something differently? Any other suggestions?
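
(For reference, the chain a server actually presents can be checked with
something like the following; the hostname is taken from the output above and
LDAPS on port 636 is an assumption:

openssl s_client -connect ldap-netsvc.pbhware.com:636 -showcerts < /dev/null

A workaround sometimes tried in this situation, not verified here, is to
append the intermediate certificate to the file named by TLSCertificateFile so
the server sends it regardless of how TLSCACertificateFile is handled.)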

Thanks much...



Re: OpenBSD 6.8 (release) guest (qemu/kvm) on Linux 5.9 host (amd64) fails with protection fault trap

2020-11-15 Thread Bodie




On 15.11.2020 19:20, Gabriel Garcia wrote:

Hi,

I would like to run OpenBSD as stated on the subject - I have been
able, however, to run it successfully with "-cpu Opteron_G2-v1", but I
would rather use "-cpu host" instead. Also note that on an Intel host,
OpenBSD appears to work successfully on the same Linux base.



Can you show what is in /proc/cpuinfo in Linux host and possibly even
its dmesg?

Any chance to try live OpenBSD there to get dmesg too?

What is the OS version and Qemu/KVM version on that Linux host?
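
(On the Linux host, something along these lines should collect that
information; the exact qemu binary name may differ per distribution:

cat /proc/cpuinfo
dmesg
uname -a
qemu-system-x86_64 --version
)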



qemu invocation that yields a trap:
qemu-system-x86_64 -enable-kvm -machine q35 -cpu
host,-nodeid-msr,-vmx-msr-bitmap,-popcnt,-tsc-deadline,-mmxext,-fxsr-opt,-pdpe1gb,-rdtscp,-3dnow,-3dnowext,-cmp-legacy,-svm,-cr8legacy,-abm,-sse4a,-misalignsse,-3dnowprefetch,-osvw,-amd-no-ssb
\

-drive file=/path/to/raw.img,format=raw,if=virtio \

-m 512M  \

-display curses

(note that `-cpu host` without deactivating any flag also yields a 
trap)


dmesg output:
ddb> dmesg

 OpenBSD 6.8 (GENERIC) #1: Tue Nov  3 09:04:47 MST 2020


r...@syspatch-68-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC

 real mem = 519954432 (495MB)

 avail mem = 489299968 (466MB)

 random: good seed from bootblocks

 mpath0 at root

 scsibus0 at mpath0: 256 targets

 mainbus0 at root

 bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xf5aa0 (9 entries)

 bios0: vendor SeaBIOS version "?-20190711_202441-buildvm-armv7-10.arm.fedoraproject.org-2.fc31" date 04/01/2014

 bios0: QEMU Standard PC (Q35 + ICH9, 2009)

 acpi0 at bios0: ACPI 3.0

 acpi0: sleep states S3 S4 S5

 acpi0: tables DSDT FACP APIC HPET MCFG WAET

 acpi0: wakeup devices

 acpitimer0 at acpi0: 3579545 Hz, 24 bits

 acpimadt0 at acpi0 addr 0xfee0: PC-AT compat

 cpu0 at mainbus0: apid 0 (boot processor)

 cpu0: AMD Turion(tm) II Neo N40L Dual-Core Processor, 1497.89 MHz, 10-06-03

 cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,CX16,x2APIC,POPCNT,DEADLINE,HV,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,3DNOW2,3DNOW,LAHF,CMPLEG,SVM,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,SSBDNR

 cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 16-way L2 cache, 16MB 64b/line 16-way L3 cache

 cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped

 cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped


 kernel: protection fault trap, code=0

 Stopped at  amd64_errata_setmsr+0x4e:   wrmsr


Contents of CPU registers:
ddb> show registers

 rdi   0x9c5a203a

 rsi   0x820ff920    errata+0xe0

 rbp   0x824c5740    end+0x2c5740

 rbx   0x18

 rdx   0

 rcx   0xc0011029

 rax   0x3

 r8    0x824c55a8    end+0x2c55a8

 r9    0

 r10   0xbdf7dabff85d847b

 r11   0x51e076fef1dcfa7b

 r12   0

 r13   0

 r14   0x820ff940    acpihid_ca

 r15   0x820ff920    errata+0xe0

 rip   0x81bc6ede    amd64_errata_setmsr+0x4e

 cs    0x8

 rflags   0x10256    __ALIGN_SIZE+0xf256

 rsp   0x824c5730    end+0x2c5730

 ss    0x10

 amd64_errata_setmsr+0x4e:   wrmsr



Working system dmesg (only change from invocation above is "-cpu
Opteron_G2-v1"):
OpenBSD 6.8 (GENERIC) #1: Tue Nov  3 09:04:47 MST 2020


r...@syspatch-68-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC

real mem = 519950336 (495MB)

avail mem = 489304064 (466MB)

random: good seed from bootblocks

mpath0 at root

scsibus0 at mpath0: 256 targets

mainbus0 at root

bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xf5aa0 (9 entries)

bios0: vendor SeaBIOS version
"?-20190711_202441-buildvm-armv7-10.arm.fedoraproject.org-2.fc31" date
04/01/2014

bios0: QEMU Standard PC (Q35 + ICH9, 2009)

acpi0 at bios0: ACPI 3.0

acpi0: sleep states S3 S4 S5

acpi0: tables DSDT FACP APIC HPET MCFG WAET

acpi0: wakeup devices

acpitimer0 at acpi0: 3579545 Hz, 24 bits

acpimadt0 at acpi0 addr 0xfee0: PC-AT compat

cpu0 at mainbus0: apid 0 (boot processor)

cpu0: AMD Opteron 22xx (Gen 2 Class Opteron), 1497.89 MHz, 0f-06-01

cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,

CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,CX16,x2APIC,HV,NXE,LONG,LAHF

cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB
64b/line 16-way L2 cache, 16MB 64b/line 16-way L3 cache

cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped

cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped

cpu0: smt 0, core 0, package 0

mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges

cpu0: apic clock running at 

Re: OpenBSD 6.8 (release) guest (qemu/kvm) on Linux 5.9 host (amd64) fails with protection fault trap

2020-11-15 Thread Bryan Steele
On Sun, Nov 15, 2020 at 06:20:52PM +, Gabriel Garcia wrote:
> Hi,
> 
> I would like to run OpenBSD as stated on the subject - I have been able,
> however, to run it successfully with "-cpu Opteron_G2-v1", but I would
> rather use "-cpu host" instead. Also note that on an Intel host, OpenBSD
> appears to work successfully on the same Linux base.
> 
> qemu invocation that yields a trap:
> qemu-system-x86_64 -enable-kvm -machine q35 -cpu 
> host,-nodeid-msr,-vmx-msr-bitmap,-popcnt,-tsc-deadline,-mmxext,-fxsr-opt,-pdpe1gb,-rdtscp,-3dnow,-3dnowext,-cmp-legacy,-svm,-cr8legacy,-abm,-sse4a,-misalignsse,-3dnowprefetch,-osvw,-amd-no-ssb
> \
> 
>   -drive file=/path/to/raw.img,format=raw,if=virtio \
> 
>   -m 512M  \
> 
>   -display curses
> 
> (note that `-cpu host` without deactivating any flag also yields a trap)
> 
> dmesg output:
> ddb> dmesg
> 
>  OpenBSD 6.8 (GENERIC) #1: Tue Nov  3 09:04:47 MST 2020
> 
> 
> r...@syspatch-68-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC
> 
>  real mem = 519954432 (495MB)
> 
>  avail mem = 489299968 (466MB)
> 
>  random: good seed from bootblocks
> 
>  mpath0 at root
> 
>  scsibus0 at mpath0: 256 targets
> 
>  mainbus0 at root
> 
>  bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xf5aa0 (9 entries)
> 
>  bios0: vendor SeaBIOS version "?-20190711_202441-buildvm-armv7-10.arm.fedoraproject.org-2.fc31" date 04/01/2014
> 
>  bios0: QEMU Standard PC (Q35 + ICH9, 2009)
> 
>  acpi0 at bios0: ACPI 3.0
> 
>  acpi0: sleep states S3 S4 S5
> 
>  acpi0: tables DSDT FACP APIC HPET MCFG WAET
> 
>  acpi0: wakeup devices
> 
>  acpitimer0 at acpi0: 3579545 Hz, 24 bits
> 
>  acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
> 
>  cpu0 at mainbus0: apid 0 (boot processor)
> 
>  cpu0: AMD Turion(tm) II Neo N40L Dual-Core Processor, 1497.89 MHz, 10-06-03
> 
>  cpu0: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,
> MMX,FXSR,SSE,SSE2,SSE3,CX16,x2APIC,POPCNT,DEADLINE,HV,NXE,MMXX,FFXSR,PAGE1GB,
> RDTSCP,LONG,3DNOW2,3DNOW,LAHF,CMPLEG,SVM,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,
> 
> SSBDNR
> 
>  cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 16-way L2 cache, 16MB 64b/line 16-way L3 cache
> 
>  cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
> 
>  cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
> 
>  kernel: protection fault trap, code=0
> 
>  Stopped at  amd64_errata_setmsr+0x4e:   wrmsr
> 
> 
> Contents of CPU registers:
> ddb> show registers
> 
>  rdi   0x9c5a203a
> 
>  rsi   0x820ff920errata+0xe0
> 
>  rbp   0x824c5740end+0x2c5740
> 
>  rbx 0x18
> 
>  rdx0
> 
>  rcx   0xc0011029
> 
>  rax  0x3
> 
>  r80x824c55a8end+0x2c55a8
> 
>  r9 0
> 
>  r10   0xbdf7dabff85d847b
> 
>  r11   0x51e076fef1dcfa7b
> 
>  r120
> 
>  r130
> 
>  r14   0x820ff940acpihid_ca
> 
>  r15   0x820ff920errata+0xe0
> 
>  rip   0x81bc6edeamd64_errata_setmsr+0x4e
> 
>  cs   0x8
> 
>  rflags   0x10256__ALIGN_SIZE+0xf256
> 
>  rsp   0x824c5730end+0x2c5730
> 
>  ss  0x10
> 
>  amd64_errata_setmsr+0x4e:   wrmsr
> 
> 
> 
> Working system dmesg (only change from invocation above is "-cpu
> Opteron_G2-v1"):
> OpenBSD 6.8 (GENERIC) #1: Tue Nov  3 09:04:47 MST 2020
> 
> 
> r...@syspatch-68-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC
> 
> real mem = 519950336 (495MB)
> 
> avail mem = 489304064 (466MB)
> 
> random: good seed from bootblocks
> 
> mpath0 at root
> 
> scsibus0 at mpath0: 256 targets
> 
> mainbus0 at root
> 
> bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xf5aa0 (9 entries)
> 
> bios0: vendor SeaBIOS version
> "?-20190711_202441-buildvm-armv7-10.arm.fedoraproject.org-2.fc31" date
> 04/01/2014
> 
> bios0: QEMU Standard PC (Q35 + ICH9, 2009)
> 
> acpi0 at bios0: ACPI 3.0
> 
> acpi0: sleep states S3 S4 S5
> 
> acpi0: tables DSDT FACP APIC HPET MCFG WAET
> 
> acpi0: wakeup devices
> 
> acpitimer0 at acpi0: 3579545 Hz, 24 bits
> 
> acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
> 
> cpu0 at mainbus0: apid 0 (boot processor)
> 
> cpu0: AMD Opteron 22xx (Gen 2 Class Opteron), 1497.89 MHz, 0f-06-01
> 
> cpu0:
> FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,
> CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,CX16,x2APIC,HV,NXE,LONG,LAHF
> 
> cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB
> 64b/line 16-way L2 cache, 16MB 64b/line 16-way L3 cache
> 
> cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
> 
> cpu0: DTLB 255 4KB entries 

VPN IKEv2 Traffic Flows Only One Direction

2020-11-15 Thread Ian Timothy
I’ve been a long time user of OpenBSD, but this is the first time I’m trying to 
setup a VPN. I’m not sure what I’m doing wrong, or what should be the next step 
to troubleshoot. I’ve probably reviewed every IKEv2 how-to I can find.

I need to end up with a configuration that will support several simultaneous 
roaming users connecting from anywhere they happen to be.

Client:
macOS 10.15.7
Using builtin VPN client

Server:
OpenBSD 6.6
em1 = 23.X.X.128/29
em0 = 10.0.0.0/16
enc0 = 10.1.0.0/16

From the client I can connect to 10.0.0.1, but for anything outside that network 
the traffic goes out but does not return:


# --- client: curl -v ipinfo.io/ip ---

*   Trying 216.239.36.21:80...
[ never connects ]




# --- server: iked -dv ---

ikev2 "vpn" passive esp inet from 0.0.0.0/0 to 0.0.0.0/0 local 23.30.51.129 
peer any ikesa enc aes-256,aes-192,aes-128,3des prf hmac-sha2-256,hmac-sha1 
auth hmac-sha2-256,hmac-sha1 group modp2048,modp1536,modp1024 childsa enc 
aes-256,aes-192,aes-128 auth hmac-sha2-256,hmac-sha1 srcid vpn.ipaperbox.com 
lifetime 10800 bytes 536870912 psk 0x70617373776f7264 config address 10.1.0.0 
config netmask 255.255.0.0 config name-server 10.0.0.1
[--- CLIENT CONNECTS ---]
spi=0x69f90afcc96f7600: recv IKE_SA_INIT req 0 peer 166.X.X.161:62140 local 
23.X.X.129:500, 604 bytes, policy 'vpn'
spi=0x69f90afcc96f7600: send IKE_SA_INIT res 0 peer 166.X.X.161:62140 local 
23.X.X.129:500, 432 bytes
spi=0x69f90afcc96f7600: recv IKE_AUTH req 1 peer 166.X.X.161:54501 local 
23.X.X.129:4500, 544 bytes, policy 'vpn'
spi=0x69f90afcc96f7600: send IKE_AUTH res 1 peer 166.X.X.161:54501 local 
23.X.X.129:4500, 272 bytes, NAT-T
spi=0x69f90afcc96f7600: sa_state: VALID -> ESTABLISHED from 166.X.X.161:54501 
to 23.X.X.129:4500 policy 'vpn'
[--- CLIENT DISCONNECT ---]
spi=0x69f90afcc96f7600: recv INFORMATIONAL req 2 peer 166.X.X.161:54501 local 
23.X.X.129:4500, 80 bytes, policy 'vpn'
spi=0x69f90afcc96f7600: send INFORMATIONAL res 2 peer 166.X.X.161:54501 local 
23.X.X.129:4500, 80 bytes, NAT-T
spi=0x69f90afcc96f7600: ikev2_ikesa_recv_delete: received delete
spi=0x69f90afcc96f7600: sa_state: ESTABLISHED -> CLOSED from 166.X.X.161:54501 
to 23.X.X.129:4500 policy 'vpn'



# --- server: tcpdump -i em1 -n host ipinfo.io and port 80 ---

tcpdump: listening on em1, link-type EN10MB
03:37:34.210823 10.1.114.47.59349 > 216.239.36.21.80: SWE 
3159801057:3159801057(0) win 65535  (DF)
03:37:35.228721 10.1.114.47.59349 > 216.239.36.21.80: S 
3159801057:3159801057(0) win 65535  (DF)
03:37:36.242039 10.1.114.47.59349 > 216.239.36.21.80: S 
3159801057:3159801057(0) win 65535  (DF)
03:37:37.254607 10.1.114.47.59349 > 216.239.36.21.80: S 
3159801057:3159801057(0) win 65535  (DF)
03:37:38.267900 10.1.114.47.59349 > 216.239.36.21.80: S 
3159801057:3159801057(0) win 65535  (DF)
03:37:39.330256 10.1.114.47.59349 > 216.239.36.21.80: S 
3159801057:3159801057(0) win 65535  (DF)
03:37:41.345983 10.1.114.47.59349 > 216.239.36.21.80: S 
3159801057:3159801057(0) win 65535  (DF)
03:37:45.424183 10.1.114.47.59349 > 216.239.36.21.80: S 
3159801057:3159801057(0) win 65535  (DF)
03:37:53.510541 10.1.114.47.59349 > 216.239.36.21.80: S 
3159801057:3159801057(0) win 65535  (DF)
03:38:10.364579 10.1.114.47.59349 > 216.239.36.21.80: S 
3159801057:3159801057(0) win 65535  (DF)



# --- server: tcpdump -i enc0 -n host ipinfo.io and port 80 ---

tcpdump: listening on enc0, link-type ENC
[ no output ]



# --- server: iked.conf ---

# TODO: Change from psk authentication to user-based later.

ikev2 "vpn" passive esp \
from 0.0.0.0/0 to 0.0.0.0/0 \
local egress peer any \
srcid vpn..com \
psk "password" \
config address 10.1.0.0/16 \
config netmask 255.255.0.0 \
config name-server 10.0.0.1 \
tag "IKED” 



# --- server: pf.conf ---

doas cat pf.conf.vpn 
int_if = "em0"

ext_if = "em1"
ext_net = "23.X.X.128/29"

gateway_ip_ext = "{ 23.X.X.129 }"
gateway_ip_int = "{ 10.0.0.1 }"

set skip on {lo, enc0}

block return# block stateless traffic
pass# establish keep-state

pass out on $ext_if from $int_if:network to any nat-to ($ext_if:0)
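
(Side note, an illustration only and not part of the original configuration:
the nat-to rule above matches $int_if:network, i.e. 10.0.0.0/16, so packets
sourced from the 10.1.0.0/16 client pool would leave em1 untranslated, which
is consistent with the tcpdump above. A sketch of a rule that also covers the
pool:

pass out on $ext_if inet from { $int_if:network, 10.1.0.0/16 } to any nat-to ($ext_if:0)
)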



# --- server: sysctl net.inet.{ipcomp.enable,esp.enable,esp.udpencap} ---

net.inet.ipcomp.enable=1
net.inet.esp.enable=1
net.inet.esp.udpencap=1







Re: OpenBSD 6.8 (release) guest (qemu/kvm) on Linux 5.9 host (amd64) fails with protection fault trap

2020-11-15 Thread Ottavio Caruso

On 15/11/2020 18:20, Gabriel Garcia wrote:

Hi,

I would like to run OpenBSD as stated on the subject - I have been able, 
however, to run it successfully with "-cpu Opteron_G2-v1", but I would 
rather use "-cpu host" instead. Also note that on an Intel host, OpenBSD 
appears to work successfully on the same Linux base.




Not sure if this answers your question, but this is how I boot OpenBSD 
6.6 (...yes!) on kvm/qemu:


#!/bin/sh
qemu-system-x86_64 \
-drive if=virtio,file=/home/oc/VM/img/openbsd.image,index=0,media=disk \
-M q35,accel=kvm -m 250M -cpu host,-kvmclock-stable-bit -smp $(nproc) \
-nic user,hostfwd=tcp:127.0.0.1:5556-:22,model=virtio-net-pci \
-daemonize -display none   -vga none \
-serial mon:telnet:127.0.0.1:,server,nowait \
-pidfile /home/oc/VM/pid/openbsd-pid

telnet 127.0.0.1 


(pay attention to "-kvmclock-stable-bit" otherwise it will crash into a 
ddb debug shell)



--
Ottavio Caruso




Re: Failed sysupgrade from 6.6 to 6.7 amd64

2020-11-15 Thread Theo de Raadt
Maxim Khitrov  wrote:

> After all these years of trouble-free upgrades, I ran into my first
> problem. I used sysupgrade to go from 6.6/amd64 to 6.7. The upgrade
> process was successful, but after bsd.upgrade did its thing and
> rebooted the system, the new kernel would not boot.
> 
> It got to the "boot>" prompt, started loading the kernel, but then the
> system would reboot right after showing "booting hd0a:bsd:
> 12957+2753552..." line. I tried booting bsd.sp, bsd.rd, and
> bsd.booted with identical results. Was able to boot from cd67.iso.
> Tried downloading the original kernel, but that didn't work either.
> Re-running the upgrade didn't help.

This seems to be the efi issue.

> Finally, decided to upgrade to 6.8, so did that from cd68.iso, which
> fixed the problem. I also replaced bootx64.efi file on the EFI
> partition after this upgrade, but I'm not actually sure if it was
> different or not.

But the issue was also in 6.8.

It is fixed in current.

> Obviously curious as to what the issue may have been, but mostly
> wondering whether any upgrade steps may have been missed as a result
> of never fully booting the 6.7 OS and running post-upgrade steps
> there.

We don't know.  At this point, no developers have machines sensitive
to the issue.



Re: OpenBSD 6.8 (release) guest (qemu/kvm) on Linux 5.9 host (amd64) fails with protection fault trap

2020-11-15 Thread Gabriel Garcia

On 15/11/2020 20:14, Ottavio Caruso wrote:
(pay attention to "-kvmclock-stable-bit" otherwise it will crash into a 
ddb debug shell)
Thank you very much for the idea; sadly, it didn't work, must be 
something else!




Failed sysupgrade from 6.6 to 6.7 amd64

2020-11-15 Thread Maxim Khitrov
After all these years of trouble-free upgrades, I ran into my first
problem. I used sysupgrade to go from 6.6/amd64 to 6.7. The upgrade
process was successful, but after bsd.upgrade did its thing and
rebooted the system, the new kernel would not boot.

It got to the "boot>" prompt, started loading the kernel, but then the
system would reboot right after showing "booting hd0a:bsd:
12957+2753552..." line. I tried booting bsd.sp, bsd.rd, and
bsd.booted with identical results. Was able to boot from cd67.iso.
Tried downloading the original kernel, but that didn't work either.
Re-running the upgrade didn't help.

Finally, decided to upgrade to 6.8, so did that from cd68.iso, which
fixed the problem. I also replaced bootx64.efi file on the EFI
partition after this upgrade, but I'm not actually sure if it was
different or not.

Obviously curious as to what the issue may have been, but mostly
wondering whether any upgrade steps may have been missed as a result
of never fully booting the 6.7 OS and running post-upgrade steps
there.

Thanks!



Re: Wrong net in vlan

2020-11-15 Thread Mihai Popescu
> What is wrong here?

You show info about vlans, then suddenly complain about a non-working dhcpd.
Hint: show some dhcpd configs.


Re: APU4 hardware network interfaces tied together

2020-11-15 Thread Jordan Geoghegan




On 11/15/20 12:25 PM, Mihai Popescu wrote:

Hello,

In the scenario of building a router with an APU4, one interface is for WAN
and the other three are free to use.
What is the most sane and performance-wise (CPU load, interface load, etc.)
way to tie the remaining three interfaces together as a switch, and avoid
using one IP subnet per interface?
Is it better to use one for lan, leave the remaining two unused and cascade
a dumb switch for other lan connections?

Thank you.


I wouldn't recommend putting the remaining ports into a bridge 
configuration as that will force the interfaces into promiscuous mode, 
and cause higher CPU load. It would be better to just run the LAN off of 
a switch connected to a single port on the APU as that will allow LAN 
traffic to flow without the APU having to touch every single packet. If 
you wanted to be pedantic, an argument could also be made that using a 
single interface would also lend itself to maximally effective interrupt 
coalescing.
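
As a minimal sketch of that layout (the interface name and addresses are
illustrative, not from this thread), the single LAN port carries the whole
network and the dumb switch fans it out:

/etc/hostname.em1:
inet 10.0.0.1 255.255.255.0

The other two ports can simply be left without hostname.if(5) files until
they are needed.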


Regards,

Jordan



APU4 hardware network interfaces tied together

2020-11-15 Thread Mihai Popescu
Hello,

In the scenario of building a router with an APU4, one interface is for WAN
and the other three are free to use.
What is the most sane and performance-wise (CPU load, interface load, etc.)
way to tie the remaining three interfaces together as a switch, and avoid
using one IP subnet per interface?
Is it better to use one for lan, leave the remaining two unused and cascade
a dumb switch for other lan connections?

Thank you.


Re: Large Filesystem

2020-11-15 Thread Otto Moerbeek
On Sun, Nov 15, 2020 at 02:57:49PM -0500, Kenneth Gober wrote:

> On Sun, Nov 15, 2020 at 8:59 AM Mischa  wrote:
> 
> > On 15 Nov at 14:52, Otto Moerbeek  wrote:
> > > fsck will get slower once you start filling it, but since your original
> > > fs had about 104k files I expect it not to get too bad. If the speed
> > > for your use case is good as well, I guess you should be fine.
> >
> > Will see how it behaves and try to document as much as possible.
> > I can always install another BSD on it. ;)
> >
> 
> To give a very rough idea, here is a sample running fsck on an FFS2
> file system with a fairly large number of files:
> 
> 
> $ df -ik /nfs/archive
> 
> Filesystem   1K-blocks    Used        Avail       Capacity  iused    ifree      %iused  Mounted on
> 
> /dev/sd1g    12308149120  7477490128  4215251536  64%       4800726  383546408  1%      /nfs/archive
> 
> $ doas time fsck -f /nfs/archive
> 
> ** /dev/sd1g (6d3438729df51b22.g) (NO WRITE)
> 
> ** Last Mounted on /nfs/archive
> 
> ** Phase 1 - Check Blocks and Sizes
> 
> ** Phase 2 - Check Pathnames
> 
> ** Phase 3 - Check Connectivity
> 
> ** Phase 4 - Check Reference Counts
> 
> ** Phase 5 - Check Cyl groups
> 
> 4800726 files, 934686266 used, 603832374 free (35534 frags, 75474605
> blocks, 0.0% fragmentation)
>  3197.25 real  35.86 user  66.03 sys
> 
> This is on older hardware, and not running the most recent release.
> The server is a Dell PowerEdge 2900 with a PERC H700 controller, and
> 4 WD Red Pro 8TB disks (WD8001FFWX-6) forming a RAID10 volume
> containing 3 small 1TB file systems and 1 large 12TB file system.  The
> OS is OpenBSD 6.1/amd64.  All the file systems on this volume are
> mounted with the softdep option and the big one has noatime as well.

If you upgrade, there's a good chance fsck will be faster.

-Otto
> 
> The time to run fsck is really only an issue when the server reboots
> unexpectedly (i.e. due to a power outage).  Coming up after a proper
> reboot or shutdown is very fast due to the file systems being clean.
> A UPS can help avoid most of these power-related reboots.  Alas, this
> particular server was connected to a UPS with a bad battery so it has
> rebooted due to power outages at least a half-dozen times this year,
> each of them involving a fairly long fsck delay.  I finally took the time
> last week to replace the UPS batteries so going forward this should
> be much less of a problem.  I do recommend the use of a UPS (and
> timely replacement of batteries when needed) if you are going to
> host very large FFS2 volumes.
> 
> I have never lost files due to a problem with FFS2 (or with FFS for that
> matter), but that is no reason not to perform regular backups.  For this
> particular file system I only back it up twice a year, but the data on it
> doesn't change often.  File systems with more 'normal' patterns of usage
> get backed up weekly.  The practice of taking regular backups also helps
> ensure that 'bit rot' is detected early enough that it can be corrected.
> 
> -ken



Re: Large Filesystem

2020-11-15 Thread Kenneth Gober
On Sun, Nov 15, 2020 at 8:59 AM Mischa  wrote:

> On 15 Nov at 14:52, Otto Moerbeek  wrote:
> > fsck will get slower once you start filling it, but since your original
> > fs had about 104k files I expect it not to get too bad. If the speed
> > for your use case is good as well, I guess you should be fine.
>
> Will see how it behaves and try to document as much as possible.
> I can always install another BSD on it. ;)
>

To give a very rough idea, here is a sample running fsck on an FFS2
file system with a fairly large number of files:


$ df -ik /nfs/archive

Filesystem   1K-blocks    Used        Avail       Capacity  iused    ifree      %iused  Mounted on

/dev/sd1g    12308149120  7477490128  4215251536  64%       4800726  383546408  1%      /nfs/archive

$ doas time fsck -f /nfs/archive

** /dev/sd1g (6d3438729df51b22.g) (NO WRITE)

** Last Mounted on /nfs/archive

** Phase 1 - Check Blocks and Sizes

** Phase 2 - Check Pathnames

** Phase 3 - Check Connectivity

** Phase 4 - Check Reference Counts

** Phase 5 - Check Cyl groups

4800726 files, 934686266 used, 603832374 free (35534 frags, 75474605
blocks, 0.0% fragmentation)
 3197.25 real  35.86 user  66.03 sys

This is on older hardware, and not running the most recent release.
The server is a Dell PowerEdge 2900 with a PERC H700 controller, and
4 WD Red Pro 8TB disks (WD8001FFWX-6) forming a RAID10 volume
containing 3 small 1TB file systems and 1 large 12TB file system.  The
OS is OpenBSD 6.1/amd64.  All the file systems on this volume are
mounted with the softdep option and the big one has noatime as well.

The time to run fsck is really only an issue when the server reboots
unexpectedly (i.e. due to a power outage).  Coming up after a proper
reboot or shutdown is very fast due to the file systems being clean.
A UPS can help avoid most of these power-related reboots.  Alas, this
particular server was connected to a UPS with a bad battery so it has
rebooted due to power outages at least a half-dozen times this year,
each of them involving a fairly long fsck delay.  I finally took the time
last week to replace the UPS batteries so going forward this should
be much less of a problem.  I do recommend the use of a UPS (and
timely replacement of batteries when needed) if you are going to
host very large FFS2 volumes.

I have never lost files due to a problem with FFS2 (or with FFS for that
matter), but that is no reason not to perform regular backups.  For this
particular file system I only back it up twice a year, but the data on it
doesn't change often.  File systems with more 'normal' patterns of usage
get backed up weekly.  The practice of taking regular backups also helps
ensure that 'bit rot' is detected early enough that it can be corrected.

-ken


Wrong net in vlan

2020-11-15 Thread Axel Rau
Hi all,

in hostname.vlan11, I have:
- - -
vnetid 11 parent em3
inet 172.16.11.1 255.255.255.0 NONE
- - -
in hostname.vlan12, I have:
- - -
vnetid 12 parent em3
inet 172.16.12.1 255.255.255.0 NONE
- - -

but dhcpd logs:
- - -
DHCPOFFER on 172.16.11.106 to d6:b5:e4:2a:3a:1c via vlan12
- - -

What is wrong here?

Thanks, Axel
---
PGP-Key: CDE74120  ☀  computing @ chaos claudius





Re: seafile client doesn't sync files

2020-11-15 Thread avv. Nicola Dell'Uomo

So, here is what I discovered so far.

1. private ca: apparently everything works fine if you append your 
private ca to /etc/ssl/cert.pem. If somebody has an alternative which 
does not involve modifying stock cert.pem, please let me know;


2. seafile client syncs files only if auto update is disabled. You still 
have to manually update each single library, but in this case everything 
works fine and file count from sysctl kern.nfiles won't go wild.


3. If you enable auto update, the file count suddenly jumps from ~800-2000 
to ~45000; the client then uses a huge amount of CPU time and files don't 
sync (maybe because there are too many open files and the system is not 
efficient?);


4. if I restrict /etc/login.conf openfiles param to lower numbers (i.e. 
~8192), the client just won't work, reporting the same errors as in my 
first post (Too many open files);


5. I let the client sync for ~half an hour and nothing changed, so I think 
this is unlikely to change with more sync time.


6. On other OSes (macOS and Linux) the file count never exceeds 3600.

Do you think this could be due to a bug?

As I told you, I've been having this problem since 6.6 and on different 
(client) hosts, but I never tried to figure it out before because I didn't 
need seafile to sync my files at the time...


Please let me know if I can help with some tests or data.

Nicola

Il 15/11/20 12:27, avv. Nicola Dell'Uomo ha scritto:


Hi Stuart,

thank you for your help!

Now it works (almost).

The matter here was that login.conf limits were not high enough.

With my server configuration the seafile client opens ~43000 files: I 
really didn't expect such a high limit, so I set it to 8192 in 
login.conf.


Now it works, but it definitely wastes an excessive amount of resources.

When I close seafile client, kern.nfiles count drops from ~ 44.000 to 
~ 890!


Moreover seafile client consumes a huge amount of cpu time.

I'll monitor the app to see if it's just a momentary situation and if 
it can be fine tuned...


I also have problems in making it understand I use a private ca.

When I have news, I'll post them.

If somebody has been using the seafile client for a long time and already 
has experience, please join in!


Nicola

> On 2020-11-14, avv. Nicola Dell'Uomo 
 wrote:


>
> Hi,
>
> thank you for answering.
>
> I raised class limit to 4096 current and 8192 max, but nothing changes: 
> the logs report exactly the same error scheme as before.

>
> Do you use this client?

No, but the error messages are pretty clear, and I can see that it uses
kqueue for file monitoring (via libinotify which emulates Linux's inotify
interface using kqueue) so it's a fairly safe bet that's what you're running
into.

Did you re-login after changing login.conf? Check the new values show up
with ulimit.

> Does it work for you?
>
> I have many files in each directory, but it works out of the box with 
> other OS, so I was trying to understand why this is happening in OpenBSD.


OpenBSD has pretty tight default limits, plus kqueue works differently than
the native inotify on Linux causing it to use more file descriptors.
(All dirs and files, not just dirs).

> puffy$ ulimit -s
> 4096

-s is stack. You want -n (or -a to show all values).

> puffy$ sysctl kern.nfiles
> kern.nfiles=708
>
> I do agree kern.maxfiles is wildly high: this limit was obviously not 
> meant for production but just for testing purposes...


It's not uncommon to set things "for testing" and forget that they're
there until the system falls over badly.

> Moreover I can't figure out why first sync works and all files are 
> regularly saved to local destination; then it stops saving to local, but 
> I can still use all other client function...


I suppose if it just syncs all files then it doesn't need to monitor
them for updates.
>




OpenBSD 6.8 (release) guest (qemu/kvm) on Linux 5.9 host (amd64) fails with protection fault trap

2020-11-15 Thread Gabriel Garcia

Hi,

I would like to run OpenBSD as stated on the subject - I have been able, 
however, to run it successfully with "-cpu Opteron_G2-v1", but I would 
rather use "-cpu host" instead. Also note that on an Intel host, OpenBSD 
appears to work successfully on the same Linux base.


qemu invocation that yields a trap:
qemu-system-x86_64 -enable-kvm -machine q35 -cpu 
host,-nodeid-msr,-vmx-msr-bitmap,-popcnt,-tsc-deadline,-mmxext,-fxsr-opt,-pdpe1gb,-rdtscp,-3dnow,-3dnowext,-cmp-legacy,-svm,-cr8legacy,-abm,-sse4a,-misalignsse,-3dnowprefetch,-osvw,-amd-no-ssb 
\


-drive file=/path/to/raw.img,format=raw,if=virtio \

-m 512M  \

-display curses

(note that `-cpu host` without deactivating any flag also yields a trap)

dmesg output:
ddb> dmesg

 OpenBSD 6.8 (GENERIC) #1: Tue Nov  3 09:04:47 MST 2020


r...@syspatch-68-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC

 real mem = 519954432 (495MB)

 avail mem = 489299968 (466MB)

 random: good seed from bootblocks

 mpath0 at root

 scsibus0 at mpath0: 256 targets

 mainbus0 at root

 bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xf5aa0 (9 entries)

 bios0: vendor SeaBIOS version "?-20190711_202441-buildvm-armv7-10.arm.fedoraproject.org-2.fc31" date 04/01/2014

 bios0: QEMU Standard PC (Q35 + ICH9, 2009)

 acpi0 at bios0: ACPI 3.0

 acpi0: sleep states S3 S4 S5

 acpi0: tables DSDT FACP APIC HPET MCFG WAET

 acpi0: wakeup devices

 acpitimer0 at acpi0: 3579545 Hz, 24 bits

 acpimadt0 at acpi0 addr 0xfee0: PC-AT compat

 cpu0 at mainbus0: apid 0 (boot processor)

 cpu0: AMD Turion(tm) II Neo N40L Dual-Core Processor, 1497.89 MHz, 10-06-03

 cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,CX16,x2APIC,POPCNT,DEADLINE,HV,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,3DNOW2,3DNOW,LAHF,CMPLEG,SVM,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,SSBDNR

 cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 16-way L2 cache, 16MB 64b/line 16-way L3 cache

 cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped

 cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped

 kernel: protection fault trap, code=0

 Stopped at  amd64_errata_setmsr+0x4e:   wrmsr


Contents of CPU registers:
ddb> show registers

 rdi   0x9c5a203a

 rsi   0x820ff920    errata+0xe0

 rbp   0x824c5740    end+0x2c5740

 rbx   0x18

 rdx   0

 rcx   0xc0011029

 rax   0x3

 r8    0x824c55a8    end+0x2c55a8

 r9    0

 r10   0xbdf7dabff85d847b

 r11   0x51e076fef1dcfa7b

 r12   0

 r13   0

 r14   0x820ff940    acpihid_ca

 r15   0x820ff920    errata+0xe0

 rip   0x81bc6ede    amd64_errata_setmsr+0x4e

 cs    0x8

 rflags   0x10256    __ALIGN_SIZE+0xf256

 rsp   0x824c5730    end+0x2c5730

 ss    0x10

 amd64_errata_setmsr+0x4e:   wrmsr



Working system dmesg (only change from invocation above is "-cpu 
Opteron_G2-v1"):

OpenBSD 6.8 (GENERIC) #1: Tue Nov  3 09:04:47 MST 2020


r...@syspatch-68-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC

real mem = 519950336 (495MB)

avail mem = 489304064 (466MB)

random: good seed from bootblocks

mpath0 at root

scsibus0 at mpath0: 256 targets

mainbus0 at root

bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xf5aa0 (9 entries)

bios0: vendor SeaBIOS version 
"?-20190711_202441-buildvm-armv7-10.arm.fedoraproject.org-2.fc31" date 
04/01/2014


bios0: QEMU Standard PC (Q35 + ICH9, 2009)

acpi0 at bios0: ACPI 3.0

acpi0: sleep states S3 S4 S5

acpi0: tables DSDT FACP APIC HPET MCFG WAET

acpi0: wakeup devices

acpitimer0 at acpi0: 3579545 Hz, 24 bits

acpimadt0 at acpi0 addr 0xfee0: PC-AT compat

cpu0 at mainbus0: apid 0 (boot processor)

cpu0: AMD Opteron 22xx (Gen 2 Class Opteron), 1497.89 MHz, 0f-06-01

cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,

CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,CX16,x2APIC,HV,NXE,LONG,LAHF

cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 
64b/line 16-way L2 cache, 16MB 64b/line 16-way L3 cache


cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped

cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped

cpu0: smt 0, core 0, package 0

mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges

cpu0: apic clock running at 999MHz

ioapic0 at mainbus0: apid 0 pa 0xfec0, version 11, 24 pins

acpihpet0 at acpi0: 1 Hz

acpimcfg0 at acpi0

acpimcfg0: addr 0xb000, bus 0-255

acpiprt0 at acpi0: bus 0 (PCI0)

"ACPI0006" at acpi0 not configured

acpipci0 at 

Re: dhcpd and pf table with fixed-address

2020-11-15 Thread Stuart Henderson
On 2020-11-15, Joel Carnat  wrote:
> Hello,
>
> I have linked dhcpd(8) and pf(4) using -A, -C and -L dhcpd flags.
> It seems dhcpd only adds IP for dynamic leases and not for leases
> configured using fixed-address.
>
> Is this expected or is there something I misconfigured?

To my mind it's expected, I think of the fixed-address ones more like
assignments rather than leases.
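
(An illustration only, not something proposed in the thread: since a
fixed-address host is known in advance, its address can be kept in the table
directly in pf.conf or added by hand with pfctl, using the table name and
address from the configuration quoted below, e.g.

pfctl -t leased_ip_table -T add 192.168.2.250
)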

> Thanks,
> Jo
>
> PS: configuration extracts
>
> rc.conf.local:
> dhcpd_flags=-A abandoned_ip_table -C changed_ip_table -L leased_ip_table em1
>
> pf.conf:
> table <abandoned_ip_table> persist
> table <changed_ip_table> persist
> table <leased_ip_table> persist
>
> dhcpd.conf:
> (...)
> subnet 192.168.2.0 netmask 255.255.255.0 {
>   range 192.168.2.20 192.168.2.50;
> (...)
>   host Raspberry-Pi-4 {
> hardware ethernet xx:xx:xx:xx:xx:xx;
> fixed-address 192.168.2.250;
>   }
> (...)
>
>



dhcpd and pf table with fixed-address

2020-11-15 Thread Joel Carnat
Hello,

I have linked dhcpd(8) and pf(4) using -A, -C and -L dhcpd flags.
It seems dhcpd only adds IP for dynamic leases and not for leases
configured using fixed-address.

Is this expected or is there something I misconfigured?

Thanks,
Jo

PS: configuration extracts

rc.conf.local:
dhcpd_flags=-A abandoned_ip_table -C changed_ip_table -L leased_ip_table em1

pf.conf:
table <abandoned_ip_table> persist
table <changed_ip_table> persist
table <leased_ip_table> persist

dhcpd.conf:
(...)
subnet 192.168.2.0 netmask 255.255.255.0 {
  range 192.168.2.20 192.168.2.50;
(...)
  host Raspberry-Pi-4 {
hardware ethernet xx:xx:xx:xx:xx:xx;
fixed-address 192.168.2.250;
  }
(...)



Re: Large Filesystem

2020-11-15 Thread Mischa
On 15 Nov at 14:52, Otto Moerbeek  wrote:
> On Sun, Nov 15, 2020 at 02:43:03PM +0100, Mischa wrote:
> 
> > On 15 Nov at 14:25, Otto Moerbeek  wrote:
> > > On Sun, Nov 15, 2020 at 02:14:47PM +0100, Mischa wrote:
> > > 
> > > > On 15 Nov at 13:04, Otto Moerbeek  wrote:
> > > > > On Sat, Nov 14, 2020 at 05:59:37PM +0100, Otto Moerbeek wrote:
> > > > > 
> > > > > > On Sat, Nov 14, 2020 at 04:59:22PM +0100, Mischa wrote:
> > > > > > 
> > > > > > > On 14 Nov at 15:54, Otto Moerbeek  wrote:
> > > > > > > > On Sat, Nov 14, 2020 at 03:13:57PM +0100, Leo Unglaub wrote:
> > > > > > > > 
> > > > > > > > > Hey,
> > > > > > > > > my largest filesystem with OpenBSD on it is 12TB and for the 
> > > > > > > > > minimal usecase
> > > > > > > > > i have it works fine. I did not loose any data or so. I have 
> > > > > > > > > it mounted with
> > > > > > > > > the following flags:
> > > > > > > > > 
> > > > > > > > > > local, noatime, nodev, noexec, nosuid, softdep
> > > > > > > > > 
> > > > > > > > > The only thing i should mention is that one time the server 
> > > > > > > > > crashed and i
> > > > > > > > > had to do a fsck during the next boot. It took around 10 
> > > > > > > > > hours for the 12TB.
> > > > > > > > > This might be something to keep in mind if you want to use 
> > > > > > > > > this on a server.
> > > > > > > > > But if my memory serves me well otto did some changes to fsck 
> > > > > > > > > on ffs2, so
> > > > > > > > > maybe thats a lot faster now.
> > > > > > > > > 
> > > > > > > > > I hope this helps you a little bit!
> > > > > > > > > Greetings from Vienna
> > > > > > > > > Leo
> > > > > > > > > 
> > > > > > > > > Am 14.11.2020 um 13:50 schrieb Mischa:
> > > > > > > > > > I am currently in the process of building a large 
> > > > > > > > > > filesystem with
> > > > > > > > > > 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, 
> > > > > > > > > > to serve as a
> > > > > > > > > > central, mostly download, platform with around 100 
> > > > > > > > > > concurrent
> > > > > > > > > > connections.
> > > > > > > > > > 
> > > > > > > > > > The current system is running FreeBSD with ZFS and I would 
> > > > > > > > > > like to
> > > > > > > > > > see if it's possible on OpenBSD, as it's one of the last 
> > > > > > > > > > two systems
> > > > > > > > > > on FreeBSD left.:)
> > > > > > > > > > 
> > > > > > > > > > Has anybody build a large filesystem using FFS2? Is it a 
> > > > > > > > > > good idea?
> > > > > > > > > > How does it perform? What are good tests to run?
> > > > > > > > > > 
> > > > > > > > > > Your help and suggestions are really appriciated!
> > > > > > > > > 
> > > > > > > > 
> > > > > > > > It doesn't always has to be that bad, on current:
> > > > > > > > 
> > > > > > > > [otto@lou:22]$ dmesg | grep sd[123]
> > > > > > > > sd1 at scsibus1 targ 2 lun 0:  
> > > > > > > > naa.5000c500c3ef0896
> > > > > > > > sd1: 15259648MB, 512 bytes/sector, 31251759104 sectors
> > > > > > > > sd2 at scsibus1 targ 3 lun 0:  
> > > > > > > > naa.5000c500c40e8569
> > > > > > > > sd2: 15259648MB, 512 bytes/sector, 31251759104 sectors
> > > > > > > > sd3 at scsibus3 targ 1 lun 0: 
> > > > > > > > sd3: 30519295MB, 512 bytes/sector, 62503516672 sectors
> > > > > > > > 
> > > > > > > > [otto@lou:20]$ df -h /mnt 
> > > > > > > > Filesystem SizeUsed   Avail Capacity  Mounted on
> > > > > > > > /dev/sd3a 28.9T5.1G   27.4T 0%/mnt
> > > > > > > > 
> > > > > > > > [otto@lou:20]$ time doas fsck -f /dev/rsd3a 
> > > > > > > > ** /dev/rsd3a
> > > > > > > > ** File system is already clean
> > > > > > > > ** Last Mounted on /mnt
> > > > > > > > ** Phase 1 - Check Blocks and Sizes
> > > > > > > > ** Phase 2 - Check Pathnames
> > > > > > > > ** Phase 3 - Check Connectivity
> > > > > > > > ** Phase 4 - Check Reference Counts
> > > > > > > > ** Phase 5 - Check Cyl groups
> > > > > > > > 176037 files, 666345 used, 3875083616 free (120 frags, 484385437
> > > > > > > > blocks, 0.0% fragmentation)
> > > > > > > > 1m47.80s real 0m14.09s user 0m06.36s system
> > > > > > > > 
> > > > > > > > But note that fsck for FFS2 will get slower once more inodes 
> > > > > > > > are in
> > > > > > > > use or have been in use.
> > > > > > > > 
> > > > > > > > Also, creating the fs with both blockszie and fragment size of 
> > > > > > > > 64k
> > > > > > > > will make fsck faster (due to less inodes), but that should 
> > > > > > > > only be
> > > > > > > > done if the files you are going to store ar relatively big 
> > > > > > > > (generally
> > > > > > > > much bigger than 64k).
> > > > > > > 
> > > > > > > Good to know. This will be mostly large files indeed.
> > > > > > > That would be "newfs -i 64"?
> > > > > > 
> > > > > > Nope, newfs -b 65536 -f 65536 
> > > > > 
> > > > > To clarify: the default block size for large filesystems is already
> > > > > 2^16, but this value is taken from the label, so if another fs was on
> > > > > that partition before, it might have changed. The default fragsize is

Re: Large Filesystem

2020-11-15 Thread Otto Moerbeek
On Sun, Nov 15, 2020 at 02:43:03PM +0100, Mischa wrote:

> On 15 Nov at 14:25, Otto Moerbeek  wrote:
> > On Sun, Nov 15, 2020 at 02:14:47PM +0100, Mischa wrote:
> > 
> > > On 15 Nov at 13:04, Otto Moerbeek  wrote:
> > > > On Sat, Nov 14, 2020 at 05:59:37PM +0100, Otto Moerbeek wrote:
> > > > 
> > > > > On Sat, Nov 14, 2020 at 04:59:22PM +0100, Mischa wrote:
> > > > > 
> > > > > > On 14 Nov at 15:54, Otto Moerbeek  wrote:
> > > > > > > On Sat, Nov 14, 2020 at 03:13:57PM +0100, Leo Unglaub wrote:
> > > > > > > 
> > > > > > > > Hey,
> > > > > > > > my largest filesystem with OpenBSD on it is 12TB and for the 
> > > > > > > > minimal usecase
> > > > > > > > i have it works fine. I did not loose any data or so. I have it 
> > > > > > > > mounted with
> > > > > > > > the following flags:
> > > > > > > > 
> > > > > > > > > local, noatime, nodev, noexec, nosuid, softdep
> > > > > > > > 
> > > > > > > > The only thing i should mention is that one time the server 
> > > > > > > > crashed and i
> > > > > > > > had to do a fsck during the next boot. It took around 10 hours 
> > > > > > > > for the 12TB.
> > > > > > > > This might be something to keep in mind if you want to use this 
> > > > > > > > on a server.
> > > > > > > > But if my memory serves me well otto did some changes to fsck 
> > > > > > > > on ffs2, so
> > > > > > > > maybe thats a lot faster now.
> > > > > > > > 
> > > > > > > > I hope this helps you a little bit!
> > > > > > > > Greetings from Vienna
> > > > > > > > Leo
> > > > > > > > 
> > > > > > > > Am 14.11.2020 um 13:50 schrieb Mischa:
> > > > > > > > > I am currently in the process of building a large filesystem 
> > > > > > > > > with
> > > > > > > > > 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to 
> > > > > > > > > serve as a
> > > > > > > > > central, mostly download, platform with around 100 concurrent
> > > > > > > > > connections.
> > > > > > > > > 
> > > > > > > > > The current system is running FreeBSD with ZFS and I would 
> > > > > > > > > like to
> > > > > > > > > see if it's possible on OpenBSD, as it's one of the last two 
> > > > > > > > > systems
> > > > > > > > > on FreeBSD left.:)
> > > > > > > > > 
> > > > > > > > > Has anybody build a large filesystem using FFS2? Is it a good 
> > > > > > > > > idea?
> > > > > > > > > How does it perform? What are good tests to run?
> > > > > > > > > 
> > > > > > > > > Your help and suggestions are really appriciated!
> > > > > > > > 
> > > > > > > 
> > > > > > > It doesn't always has to be that bad, on current:
> > > > > > > 
> > > > > > > [otto@lou:22]$ dmesg | grep sd[123]
> > > > > > > sd1 at scsibus1 targ 2 lun 0:  
> > > > > > > naa.5000c500c3ef0896
> > > > > > > sd1: 15259648MB, 512 bytes/sector, 31251759104 sectors
> > > > > > > sd2 at scsibus1 targ 3 lun 0:  
> > > > > > > naa.5000c500c40e8569
> > > > > > > sd2: 15259648MB, 512 bytes/sector, 31251759104 sectors
> > > > > > > sd3 at scsibus3 targ 1 lun 0: 
> > > > > > > sd3: 30519295MB, 512 bytes/sector, 62503516672 sectors
> > > > > > > 
> > > > > > > [otto@lou:20]$ df -h /mnt 
> > > > > > > Filesystem SizeUsed   Avail Capacity  Mounted on
> > > > > > > /dev/sd3a 28.9T5.1G   27.4T 0%/mnt
> > > > > > > 
> > > > > > > [otto@lou:20]$ time doas fsck -f /dev/rsd3a 
> > > > > > > ** /dev/rsd3a
> > > > > > > ** File system is already clean
> > > > > > > ** Last Mounted on /mnt
> > > > > > > ** Phase 1 - Check Blocks and Sizes
> > > > > > > ** Phase 2 - Check Pathnames
> > > > > > > ** Phase 3 - Check Connectivity
> > > > > > > ** Phase 4 - Check Reference Counts
> > > > > > > ** Phase 5 - Check Cyl groups
> > > > > > > 176037 files, 666345 used, 3875083616 free (120 frags, 484385437
> > > > > > > blocks, 0.0% fragmentation)
> > > > > > > 1m47.80s real 0m14.09s user 0m06.36s system
> > > > > > > 
> > > > > > > But note that fsck for FFS2 will get slower once more inodes are 
> > > > > > > in
> > > > > > > use or have been in use.
> > > > > > > 
> > > > > > > Also, creating the fs with both blockszie and fragment size of 64k
> > > > > > > will make fsck faster (due to less inodes), but that should only 
> > > > > > > be
> > > > > > > done if the files you are going to store ar relatively big 
> > > > > > > (generally
> > > > > > > much bigger than 64k).
> > > > > > 
> > > > > > Good to know. This will be mostly large files indeed.
> > > > > > That would be "newfs -i 64"?
> > > > > 
> > > > > Nope, newfs -b 65536 -f 65536 
> > > > 
> > > > To clarify: the default block size for large filesystems is already
> > > > 2^16, but this value is taken from the label, so if another fs was on
> > > > that partition before, it might have changed. The default fragsize is
> > > > blocksize/8. When not specified on the command line, it is also taken
> > > > from the label.
> > > > 
> > > > Inode density is derived from the number of fragments (normally 1
> > > > inode per 4 fragments); if you increase fragment size, the number of
> > > > fragments drops and 

Re: seafile client doesn't sync files

2020-11-15 Thread avv. Nicola Dell'Uomo

Hi Stuart,

thank you for your help!

Now it works (almost).

The matter here was that login.conf limits were not high enough.

With my server configuration the seafile client opens ~43000 files: I 
really didn't expect such a high limit, so I set it to 8192 in login.conf.
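
For reference, a login.conf(5) stanza raising the per-class open-file limits
looks roughly like this (the class name and numbers are illustrative, not the
values used here); after editing, run "cap_mkdb /etc/login.conf" if a
login.conf.db exists, and log in again so the new limits take effect:

myclass:\
	:openfiles-cur=16384:\
	:openfiles-max=32768:\
	:tc=default: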


Now it works, but it definitely wastes an excessive amount of resources.

When I close seafile client, kern.nfiles count drops from ~ 44.000 to ~ 890!

Moreover seafile client consumes a huge amount of cpu time.

I'll monitor the app to see if it's just a momentary situation and if it 
can be fine tuned...


I also have problems in making it understand I use a private ca.

When I have news, I'll post them.

If somebody has been using the seafile client for a long time and already has 
experience, please join in!


Nicola

> On 2020-11-14, avv. Nicola Dell'Uomo 
 wrote:




Hi,

thank you for answering.

I raised class limit to 4096 current and 8192 max, but nothing changes: 
the logs report exactly the same error scheme as before.


Do you use this client?


No, but the error messages are pretty clear, and I can see that it uses
kqueue for file monitoring (via libinotify which emulates Linux's inotify
interface using kqueue) so it's a fairly safe bet that's what you're running
into.

Did you re-login after changing login.conf? Check the new values show up
with ulimit.


Does it work for you?

I have many files in each directory, but it works out of the box with 
other OS, so I was trying to understand why this is happening in OpenBSD.


OpenBSD has pretty tight default limits, plus kqueue works differently than
the native inotify on Linux causing it to use more file descriptors.
(All dirs and files, not just dirs).


puffy$ ulimit -s
4096


-s is stack. You want -n (or -a to show all values).


puffy$ sysctl kern.nfiles
kern.nfiles=708

I do agree kern.maxfiles is wildly high: this limit was obviously not 
meant for production but just for testing purposes...


It's not uncommon to set things "for testing" and forget that they're
there until the system falls over badly.

Moreover I can't figure out why first sync works and all files are 
regularly saved to local destination; then it stops saving to local, but 
I can still use all other client function...


I suppose if it just syncs all files then it doesn't need to monitor
them for updates.








Re: Large Filesystem

2020-11-15 Thread Mischa
On 15 Nov at 14:25, Otto Moerbeek  wrote:
> On Sun, Nov 15, 2020 at 02:14:47PM +0100, Mischa wrote:
> 
> > On 15 Nov at 13:04, Otto Moerbeek  wrote:
> > > On Sat, Nov 14, 2020 at 05:59:37PM +0100, Otto Moerbeek wrote:
> > > 
> > > > On Sat, Nov 14, 2020 at 04:59:22PM +0100, Mischa wrote:
> > > > 
> > > > > On 14 Nov at 15:54, Otto Moerbeek  wrote:
> > > > > > On Sat, Nov 14, 2020 at 03:13:57PM +0100, Leo Unglaub wrote:
> > > > > > 
> > > > > > > Hey,
> > > > > > > my largest filesystem with OpenBSD on it is 12TB and for the 
> > > > > > > minimal usecase
> > > > > > > i have it works fine. I did not loose any data or so. I have it 
> > > > > > > mounted with
> > > > > > > the following flags:
> > > > > > > 
> > > > > > > > local, noatime, nodev, noexec, nosuid, softdep
> > > > > > > 
> > > > > > > The only thing i should mention is that one time the server 
> > > > > > > crashed and i
> > > > > > > had to do a fsck during the next boot. It took around 10 hours 
> > > > > > > for the 12TB.
> > > > > > > This might be something to keep in mind if you want to use this 
> > > > > > > on a server.
> > > > > > > But if my memory serves me well otto did some changes to fsck on 
> > > > > > > ffs2, so
> > > > > > > maybe thats a lot faster now.
> > > > > > > 
> > > > > > > I hope this helps you a little bit!
> > > > > > > Greetings from Vienna
> > > > > > > Leo
> > > > > > > 
> > > > > > > Am 14.11.2020 um 13:50 schrieb Mischa:
> > > > > > > > I am currently in the process of building a large filesystem 
> > > > > > > > with
> > > > > > > > 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to 
> > > > > > > > serve as a
> > > > > > > > central, mostly download, platform with around 100 concurrent
> > > > > > > > connections.
> > > > > > > > 
> > > > > > > > The current system is running FreeBSD with ZFS and I would like 
> > > > > > > > to
> > > > > > > > see if it's possible on OpenBSD, as it's one of the last two 
> > > > > > > > systems
> > > > > > > > on FreeBSD left.:)
> > > > > > > > 
> > > > > > > > Has anybody build a large filesystem using FFS2? Is it a good 
> > > > > > > > idea?
> > > > > > > > How does it perform? What are good tests to run?
> > > > > > > > 
> > > > > > > > Your help and suggestions are really appriciated!
> > > > > > > 
> > > > > > 
> > > > > > It doesn't always has to be that bad, on current:
> > > > > > 
> > > > > > [otto@lou:22]$ dmesg | grep sd[123]
> > > > > > sd1 at scsibus1 targ 2 lun 0:  
> > > > > > naa.5000c500c3ef0896
> > > > > > sd1: 15259648MB, 512 bytes/sector, 31251759104 sectors
> > > > > > sd2 at scsibus1 targ 3 lun 0:  
> > > > > > naa.5000c500c40e8569
> > > > > > sd2: 15259648MB, 512 bytes/sector, 31251759104 sectors
> > > > > > sd3 at scsibus3 targ 1 lun 0: 
> > > > > > sd3: 30519295MB, 512 bytes/sector, 62503516672 sectors
> > > > > > 
> > > > > > [otto@lou:20]$ df -h /mnt 
> > > > > > Filesystem SizeUsed   Avail Capacity  Mounted on
> > > > > > /dev/sd3a 28.9T5.1G   27.4T 0%/mnt
> > > > > > 
> > > > > > [otto@lou:20]$ time doas fsck -f /dev/rsd3a 
> > > > > > ** /dev/rsd3a
> > > > > > ** File system is already clean
> > > > > > ** Last Mounted on /mnt
> > > > > > ** Phase 1 - Check Blocks and Sizes
> > > > > > ** Phase 2 - Check Pathnames
> > > > > > ** Phase 3 - Check Connectivity
> > > > > > ** Phase 4 - Check Reference Counts
> > > > > > ** Phase 5 - Check Cyl groups
> > > > > > 176037 files, 666345 used, 3875083616 free (120 frags, 484385437
> > > > > > blocks, 0.0% fragmentation)
> > > > > > 1m47.80s real 0m14.09s user 0m06.36s system
> > > > > > 
> > > > > > But note that fsck for FFS2 will get slower once more inodes are in
> > > > > > use or have been in use.
> > > > > > 
> > > > > > Also, creating the fs with both blockszie and fragment size of 64k
> > > > > > will make fsck faster (due to less inodes), but that should only be
> > > > > > done if the files you are going to store ar relatively big 
> > > > > > (generally
> > > > > > much bigger than 64k).
> > > > > 
> > > > > Good to know. This will be mostly large files indeed.
> > > > > That would be "newfs -i 64"?
> > > > 
> > > > Nope, newfs -b 65536 -f 65536 
> > > 
> > > To clarify: the default block size for large filesystems is already
> > > 2^16, but this value is taken from the label, so if another fs was on
> > > that partition before, it might have changed. The default fragsize is
> > > blocksize/8. When not specified on the command line, it is also taken
> > > from the label.
> > > 
> > > Inode density is derived from the number of fragments (normally 1
> > > inode per 4 fragments); if you increase fragment size, the number of
> > > fragments drops and so does the number of inodes.
> > > 
> > > A fragment is the minimal allocation unit. So if you have lots of
> > > small files you will waste a lot of space and potentially run out of
> > > inodes. You only want to increase fragment size if you mostly store
> > > large files.
> > 
> > This is for large 

Re: Large Filesystem

2020-11-15 Thread Otto Moerbeek
On Sun, Nov 15, 2020 at 02:14:47PM +0100, Mischa wrote:

> On 15 Nov at 13:04, Otto Moerbeek  wrote:
> > On Sat, Nov 14, 2020 at 05:59:37PM +0100, Otto Moerbeek wrote:
> > 
> > > On Sat, Nov 14, 2020 at 04:59:22PM +0100, Mischa wrote:
> > > 
> > > > On 14 Nov at 15:54, Otto Moerbeek  wrote:
> > > > > On Sat, Nov 14, 2020 at 03:13:57PM +0100, Leo Unglaub wrote:
> > > > > 
> > > > > > Hey,
> > > > > > my largest filesystem with OpenBSD on it is 12TB and for the 
> > > > > > minimal usecase
> > > > > > i have it works fine. I did not loose any data or so. I have it 
> > > > > > mounted with
> > > > > > the following flags:
> > > > > > 
> > > > > > > local, noatime, nodev, noexec, nosuid, softdep
> > > > > > 
> > > > > > The only thing i should mention is that one time the server crashed 
> > > > > > and i
> > > > > > had to do a fsck during the next boot. It took around 10 hours for 
> > > > > > the 12TB.
> > > > > > This might be something to keep in mind if you want to use this on 
> > > > > > a server.
> > > > > > But if my memory serves me well otto did some changes to fsck on 
> > > > > > ffs2, so
> > > > > > maybe thats a lot faster now.
> > > > > > 
> > > > > > I hope this helps you a little bit!
> > > > > > Greetings from Vienna
> > > > > > Leo
> > > > > > 
> > > > > > Am 14.11.2020 um 13:50 schrieb Mischa:
> > > > > > > I am currently in the process of building a large filesystem with
> > > > > > > 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to 
> > > > > > > serve as a
> > > > > > > central, mostly download, platform with around 100 concurrent
> > > > > > > connections.
> > > > > > > 
> > > > > > > The current system is running FreeBSD with ZFS and I would like to
> > > > > > > see if it's possible on OpenBSD, as it's one of the last two 
> > > > > > > systems
> > > > > > > on FreeBSD left.:)
> > > > > > > 
> > > > > > > Has anybody build a large filesystem using FFS2? Is it a good 
> > > > > > > idea?
> > > > > > > How does it perform? What are good tests to run?
> > > > > > > 
> > > > > > > Your help and suggestions are really appriciated!
> > > > > > 
> > > > > 
> > > > > > It doesn't always have to be that bad, on current:
> > > > > 
> > > > > [otto@lou:22]$ dmesg | grep sd[123]
> > > > > sd1 at scsibus1 targ 2 lun 0:  
> > > > > naa.5000c500c3ef0896
> > > > > sd1: 15259648MB, 512 bytes/sector, 31251759104 sectors
> > > > > sd2 at scsibus1 targ 3 lun 0:  
> > > > > naa.5000c500c40e8569
> > > > > sd2: 15259648MB, 512 bytes/sector, 31251759104 sectors
> > > > > sd3 at scsibus3 targ 1 lun 0: 
> > > > > sd3: 30519295MB, 512 bytes/sector, 62503516672 sectors
> > > > > 
> > > > > [otto@lou:20]$ df -h /mnt 
> > > > > Filesystem     Size    Used   Avail Capacity  Mounted on
> > > > > /dev/sd3a     28.9T    5.1G   27.4T     0%    /mnt
> > > > > 
> > > > > [otto@lou:20]$ time doas fsck -f /dev/rsd3a 
> > > > > ** /dev/rsd3a
> > > > > ** File system is already clean
> > > > > ** Last Mounted on /mnt
> > > > > ** Phase 1 - Check Blocks and Sizes
> > > > > ** Phase 2 - Check Pathnames
> > > > > ** Phase 3 - Check Connectivity
> > > > > ** Phase 4 - Check Reference Counts
> > > > > ** Phase 5 - Check Cyl groups
> > > > > 176037 files, 666345 used, 3875083616 free (120 frags, 484385437
> > > > > blocks, 0.0% fragmentation)
> > > > > 1m47.80s real 0m14.09s user 0m06.36s system
> > > > > 
> > > > > But note that fsck for FFS2 will get slower once more inodes are in
> > > > > use or have been in use.
> > > > > 
> > > > > Also, creating the fs with both blocksize and fragment size of 64k
> > > > > will make fsck faster (due to fewer inodes), but that should only be
> > > > > done if the files you are going to store are relatively big (generally
> > > > > much bigger than 64k).
> > > > 
> > > > Good to know. This will be mostly large files indeed.
> > > > That would be "newfs -i 64"?
> > > 
> > > Nope, newfs -b 65536 -f 65536 
> > 
> > To clarify: the default block size for large filesystems is already
> > 2^16, but this value is taken from the label, so if another fs was on
> > that partition before, it might have changed. The default fragsize is
> > blocksize/8. When not specified on the command line, it is also taken
> > from the label.
> > 
> > Inode density is derived from the number of fragments (normally 1
> > inode per 4 fragments); if you increase fragment size, the number of
> > fragments drops and so does the number of inodes.
> > 
> > A fragment is the minimal allocation unit. So if you have lots of
> > small files you will waste a lot of space and potentially run out of
> > inodes. You only want to increase fragment size if you mostly store
> > large files.
> 
> This is for large files only. 
> 
> 16 partitions:
> #                size           offset  fstype [fsize bsize   cpg]
>   a:     117199339520                0  4.2BSD  65536 65536 52270 # /data
>   c:     117199339520                0  unused
> 
> The new FS now has:
> 
> new# df -hi /data
> Filesystem   

Re: Large Filesystem

2020-11-15 Thread Mischa
On 15 Nov at 13:04, Otto Moerbeek  wrote:
> On Sat, Nov 14, 2020 at 05:59:37PM +0100, Otto Moerbeek wrote:
> 
> > On Sat, Nov 14, 2020 at 04:59:22PM +0100, Mischa wrote:
> > 
> > > On 14 Nov at 15:54, Otto Moerbeek  wrote:
> > > > On Sat, Nov 14, 2020 at 03:13:57PM +0100, Leo Unglaub wrote:
> > > > 
> > > > > Hey,
> > > > > my largest filesystem with OpenBSD on it is 12TB and for the minimal
> > > > > use case i have it works fine. I did not lose any data or so. I have
> > > > > it mounted with the following flags:
> > > > > 
> > > > > > local, noatime, nodev, noexec, nosuid, softdep
> > > > > 
> > > > > The only thing i should mention is that one time the server crashed 
> > > > > and i
> > > > > had to do a fsck during the next boot. It took around 10 hours for 
> > > > > the 12TB.
> > > > > This might be something to keep in mind if you want to use this on a 
> > > > > server.
> > > > > But if my memory serves me well otto did some changes to fsck on 
> > > > > ffs2, so
> > > > > maybe that's a lot faster now.
> > > > > 
> > > > > I hope this helps you a little bit!
> > > > > Greetings from Vienna
> > > > > Leo
> > > > > 
> > > > > Am 14.11.2020 um 13:50 schrieb Mischa:
> > > > > > I am currently in the process of building a large filesystem with
> > > > > > 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve 
> > > > > > as a
> > > > > > central, mostly download, platform with around 100 concurrent
> > > > > > connections.
> > > > > > 
> > > > > > The current system is running FreeBSD with ZFS and I would like to
> > > > > > see if it's possible on OpenBSD, as it's one of the last two systems
> > > > > > on FreeBSD left.:)
> > > > > > 
> > > > > > Has anybody built a large filesystem using FFS2? Is it a good idea?
> > > > > > How does it perform? What are good tests to run?
> > > > > > 
> > > > > > Your help and suggestions are really appreciated!
> > > > > 
> > > > 
> > > > It doesn't always have to be that bad, on current:
> > > > 
> > > > [otto@lou:22]$ dmesg | grep sd[123]
> > > > sd1 at scsibus1 targ 2 lun 0:  
> > > > naa.5000c500c3ef0896
> > > > sd1: 15259648MB, 512 bytes/sector, 31251759104 sectors
> > > > sd2 at scsibus1 targ 3 lun 0:  
> > > > naa.5000c500c40e8569
> > > > sd2: 15259648MB, 512 bytes/sector, 31251759104 sectors
> > > > sd3 at scsibus3 targ 1 lun 0: 
> > > > sd3: 30519295MB, 512 bytes/sector, 62503516672 sectors
> > > > 
> > > > [otto@lou:20]$ df -h /mnt 
> > > > Filesystem     Size    Used   Avail Capacity  Mounted on
> > > > /dev/sd3a     28.9T    5.1G   27.4T     0%    /mnt
> > > > 
> > > > [otto@lou:20]$ time doas fsck -f /dev/rsd3a 
> > > > ** /dev/rsd3a
> > > > ** File system is already clean
> > > > ** Last Mounted on /mnt
> > > > ** Phase 1 - Check Blocks and Sizes
> > > > ** Phase 2 - Check Pathnames
> > > > ** Phase 3 - Check Connectivity
> > > > ** Phase 4 - Check Reference Counts
> > > > ** Phase 5 - Check Cyl groups
> > > > 176037 files, 666345 used, 3875083616 free (120 frags, 484385437
> > > > blocks, 0.0% fragmentation)
> > > > 1m47.80s real 0m14.09s user 0m06.36s system
> > > > 
> > > > But note that fsck for FFS2 will get slower once more inodes are in
> > > > use or have been in use.
> > > > 
> > > > Also, creating the fs with both blocksize and fragment size of 64k
> > > > will make fsck faster (due to fewer inodes), but that should only be
> > > > done if the files you are going to store are relatively big (generally
> > > > much bigger than 64k).
> > > 
> > > Good to know. This will be mostly large files indeed.
> > > That would be "newfs -i 64"?
> > 
> > Nope, newfs -b 65536 -f 65536 
> 
> To clarify: the default block size for large filesystems is already
> 2^16, but this value is taken from the label, so if another fs was on
> that partition before, it might have changed. The default fragsize is
> blocksize/8. When not specified on the command line, it is also taken
> from the label.
> 
> Inode density is derived from the number of fragments (normally 1
> inode per 4 fragments); if you increase fragment size, the number of
> fragments drops and so does the number of inodes.
> 
> A fragment is the minimal allocation unit. So if you have lots of
> small files you will waste a lot of space and potentially run out of
> inodes. You only want to increase fragment size if you mostly store
> large files.

This is for large files only. 

16 partitions:
#                size           offset  fstype [fsize bsize   cpg]
  a:     117199339520                0  4.2BSD  65536 65536 52270 # /data
  c:     117199339520                0  unused

The new FS now has:

new# df -hi /data
Filesystem     Size    Used   Avail Capacity   iused      ifree  %iused  Mounted on
/dev/sd1a     54.5T   64.0K   51.8T     0%         1  229301757     0%   /data

The server I am replacing has:

old# df -hi /data
Filesystem     Size    Used   Avail Capacity   iused      ifree  %iused  Mounted on
data            35T     34T    539G    98%      104k
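
(For the archive, a minimal sketch of the steps that produce a layout like
the one above. The sd1/sd1a names are taken from the df output; the fstab
line is only an illustration using the mount flags Leo mentioned earlier in
the thread, not a recommendation.)

new# disklabel -E sd1                  # add the 'a' partition, fstype 4.2BSD
new# newfs -O 2 -b 65536 -f 65536 sd1a
new# dumpfs /dev/rsd1a | head          # confirm bsize/fsize really are 65536
new# grep data /etc/fstab
/dev/sd1a /data ffs rw,noatime,nodev,noexec,nosuid,softdep 1 2

(In practice the fstab entry would normally use the disk's DUID instead of
the /dev/sd1a device name.)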

Re: seafile client doesn't sync files

2020-11-15 Thread avv. Nicola Dell'Uomo

Hi Stuart,

thank you for your help!

Now it works (almost).

The problem here was that the login.conf limits were not high enough.

With my server configuration the seafile client opens ~43000 files: I
really didn't expect such a high limit, so I had set it to 8192 in login.conf.
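
(For reference, a minimal login.conf sketch of a per-class limit that would
actually cover ~43000 descriptors; the class name "staff" and the 65536
figure are only illustrative, pick the class the client really runs under:)

staff:\
        :openfiles-cur=65536:\
        :openfiles-max=65536:\
        :tc=default:

(If /etc/login.conf.db exists, rebuild it with "cap_mkdb /etc/login.conf",
log in again, and check the new value with "ulimit -n". The per-class value
still has to fit under the global kern.maxfiles sysctl.)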


Now it works, but it definitely wastes an excessive amount of resources.

When I close the seafile client, the kern.nfiles count drops from ~44,000 to ~890!
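
(A quick way to attribute those descriptors to the client itself rather than
to the rest of the system; the "seaf" process name is a guess here, adjust it
to whatever ps shows:)

$ pgrep -lf seaf                  # find the client's pid
$ fstat -p 12345 | wc -l          # count its open descriptors (use the real pid)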

Moreover seafile client consumes a huge amount of cpu time.

I'll monitor the app to see if it's just a temporary situation and if it
can be fine-tuned...


I also have problems making it understand that I use a private CA.

When I have news, I'll post them.

If somebody has been using the seafile client for a long time and already
has experience, please join in!


Nicola

> On 2020-11-14, avv. Nicola Dell'Uomo wrote:




Hi,

thank you for answering.

I raised the class limit to 4096 current and 8192 max, but nothing changes:
the logs report exactly the same errors as before.


Do you use this client?


No, but the error messages are pretty clear, and I can see that it uses
kqueue for file monitoring (via libinotify which emulates Linux's inotify
interface using kqueue) so it's a fairly safe bet that's what you're running
into.

Did you re-login after changing login.conf? Check that the new values show up
with ulimit.


Does it work for you?

I have many files in each directory, but it works out of the box with
other OSes, so I was trying to understand why this is happening on OpenBSD.


OpenBSD has pretty tight default limits, plus kqueue works differently than
the native inotify on Linux causing it to use more file descriptors.
(All dirs and files, not just dirs).


puffy$ ulimit -s
4096


-s is stack. You want -n (or -a to show all values).


puffy$ sysctl kern.nfiles
kern.nfiles=708

I do agree kern.maxfiles is wildly high: this limit was obviously not 
meant for production but just for testing purposes...


It's not uncommon to set things "for testing" and forget that they're
there until the system falls over badly.

Moreover I can't figure out why the first sync works and all files are
properly saved to the local destination; then it stops saving locally, but
I can still use all the other client functions...


I suppose if it just syncs all files then it doesn't need to monitor
them for updates.








Re: Large Filesystem

2020-11-15 Thread Otto Moerbeek
On Sat, Nov 14, 2020 at 05:59:37PM +0100, Otto Moerbeek wrote:

> On Sat, Nov 14, 2020 at 04:59:22PM +0100, Mischa wrote:
> 
> > On 14 Nov at 15:54, Otto Moerbeek  wrote:
> > > On Sat, Nov 14, 2020 at 03:13:57PM +0100, Leo Unglaub wrote:
> > > 
> > > > Hey,
> > > > my largest filesystem with OpenBSD on it is 12TB and for the minimal
> > > > use case i have it works fine. I did not lose any data or so. I have
> > > > it mounted with the following flags:
> > > > 
> > > > > local, noatime, nodev, noexec, nosuid, softdep
> > > > 
> > > > The only thing i should mention is that one time the server crashed and 
> > > > i
> > > > had to do a fsck during the next boot. It took around 10 hours for the 
> > > > 12TB.
> > > > This might be something to keep in mind if you want to use this on a 
> > > > server.
> > > > But if my memory serves me well otto did some changes to fsck on ffs2, 
> > > > so
> > > > maybe that's a lot faster now.
> > > > 
> > > > I hope this helps you a little bit!
> > > > Greetings from Vienna
> > > > Leo
> > > > 
> > > > Am 14.11.2020 um 13:50 schrieb Mischa:
> > > > > I am currently in the process of building a large filesystem with
> > > > > 12 x 6TB 3.5" SAS in raid6, effectively ~55TB of storage, to serve as 
> > > > > a
> > > > > central, mostly download, platform with around 100 concurrent
> > > > > connections.
> > > > > 
> > > > > The current system is running FreeBSD with ZFS and I would like to
> > > > > see if it's possible on OpenBSD, as it's one of the last two systems
> > > > > on FreeBSD left.:)
> > > > > 
> > > > > Has anybody built a large filesystem using FFS2? Is it a good idea?
> > > > > How does it perform? What are good tests to run?
> > > > > 
> > > > > Your help and suggestions are really appreciated!
> > > > 
> > > 
> > > It doesn't always have to be that bad, on current:
> > > 
> > > [otto@lou:22]$ dmesg | grep sd[123]
> > > sd1 at scsibus1 targ 2 lun 0:  
> > > naa.5000c500c3ef0896
> > > sd1: 15259648MB, 512 bytes/sector, 31251759104 sectors
> > > sd2 at scsibus1 targ 3 lun 0:  
> > > naa.5000c500c40e8569
> > > sd2: 15259648MB, 512 bytes/sector, 31251759104 sectors
> > > sd3 at scsibus3 targ 1 lun 0: 
> > > sd3: 30519295MB, 512 bytes/sector, 62503516672 sectors
> > > 
> > > [otto@lou:20]$ df -h /mnt 
> > > Filesystem     Size    Used   Avail Capacity  Mounted on
> > > /dev/sd3a     28.9T    5.1G   27.4T     0%    /mnt
> > > 
> > > [otto@lou:20]$ time doas fsck -f /dev/rsd3a 
> > > ** /dev/rsd3a
> > > ** File system is already clean
> > > ** Last Mounted on /mnt
> > > ** Phase 1 - Check Blocks and Sizes
> > > ** Phase 2 - Check Pathnames
> > > ** Phase 3 - Check Connectivity
> > > ** Phase 4 - Check Reference Counts
> > > ** Phase 5 - Check Cyl groups
> > > 176037 files, 666345 used, 3875083616 free (120 frags, 484385437
> > > blocks, 0.0% fragmentation)
> > > 1m47.80s real 0m14.09s user 0m06.36s system
> > > 
> > > But note that fsck for FFS2 will get slower once more inodes are in
> > > use or have been in use.
> > > 
> > > Also, creating the fs with both blocksize and fragment size of 64k
> > > will make fsck faster (due to fewer inodes), but that should only be
> > > done if the files you are going to store are relatively big (generally
> > > much bigger than 64k).
> > 
> > Good to know. This will be mostly large files indeed.
> > That would be "newfs -i 64"?
> 
> Nope, newfs -b 65536 -f 65536 

To clarify: the default block size for large filesystems is already
2^16, but this value is taken from the label, so if another fs was on
that partition before, it might have changed. The default fragsize is
blocksize/8. When not specified on the command line, it is also taken
from the label.

Inode density is derived from the number of fragments (normally 1
inode per 4 fragments); if you increase fragment size, the number of
fragments drops and so does the number of inodes.

A fragment is the minimal allocation unit. So if you have lots of
small files you will waste a lot of space and potentially run out of
inodes. You only want to increase fragment size if you mostly store
large files.

-Otto
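
(A worked example of that inode-density rule, using the numbers from Mischa's
new filesystem earlier in the thread; all figures are rounded:)

  1 inode per 4 fragments with a fragment size of 65536
    => one inode per 4 * 65536 = 262144 bytes (256 KiB)
  partition size: 117199339520 sectors * 512 bytes ~= 60 TB
  60 TB / 256 KiB ~= 229 million inodes

which roughly matches the ~229M ifree that "df -hi" reported on the freshly
created /data filesystem.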