Re: httpd redirect

2019-11-15 Thread Ted Unangst
Thomas wrote:
> Hi,
> 
> I need to do this redirect with httpd:
> 
> from:
> http://my.old.site/#info
> to:
> https://my.new.site/products/product.html

browsers don't send #fragments to the server. so short answer: impossible.
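
(the path part can still be redirected, of course; a minimal httpd.conf
sketch, hostnames taken from the question:

server "my.old.site" {
	listen on * port 80
	block return 301 "https://my.new.site/products/product.html"
}

whatever follows the # never reaches httpd, so it cannot influence the target.)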



Re: Home NAS

2019-11-15 Thread Jordan Geoghegan



On 2019-11-15 20:47, Predrag Punosevac wrote:

Jan Betlach wrote:

[snip]


2. An HP P222 array controller works right out of the box on
OpenBSD, maybe FreeBSD as well, but the combination of ZFS and a RAID
controller seems weird to me.


FreeBSD has better support for HWRaid cards than OpenBSD. I am talking
about serious HWRaid cards like the former LSI controllers. Only Areca used
to fully support OpenBSD. Also, FreeBSD UFS journaling is more advanced
than OpenBSD journaling.


OpenBSD's UFS doesn't do any journalling.

[snip]


3. OpenBSD actually exceeded my expectations. CIFS and NFS are easy
to set up. The most fabulous thing to me is the full disk
encryption. I had a disk failure, and the array controller got burnt
once because I had a cooling issue. However, I was confident getting
a replacement, and no data was lost.


The OpenBSD NFS server implementation is slow compared to others, but for
home users YMMV.
I was able to get Gigabit line rate from an OpenBSD NAS to CentOS 
clients no problem. The OpenBSD NFS client is admittedly somewhat slow-- 
I was only able to get ~70MB/s out of it when connected to the same NAS 
that gets 100MB/s+ from Linux-based NFS clients.


Code:
# bioctl sd4
Volume  Status   Size Device
softraid0 0 Online  2000396018176 sd4 RAID1
   0 Online  2000396018176 0:0.0   noencl 
   1 Online  2000396018176 0:1.0   noencl 

is very crude. It took me 4 days to rebuild a 1TB mirror after accidentally
powering off one HDD. That is just not something usable for storage
purposes in real life.


I have an OpenBSD NAS at home with 20TB of RAID1 storage made up of 10 
4TB drives. Last time I had to rebuild one of the arrays, it took just 
under 24 hours. This was some months ago, but I remember 
doing the math and I was getting just under 50MB/s rebuild speed. This 
was on a fairly ancient Xeon rig using WD Red NAS drives. If it took 
your machine 4 days to rebuild a 1TB mirror, something must be wrong, 
possibly hardware related, as that's less than 4MB/s rebuild speed.




At work where I have to store petabytes of data I use only ZFS. At home
that is another story.

For the record, BTRFS is vaporware and I would never trust the pictures
of my kids to that crap.

Cheers,
Predrag


Cheers,

Jordan



Re: Home NAS

2019-11-15 Thread Predrag Punosevac
Jan Betlach wrote: 


> - FFS seems to be reliable and stable enough for my purpose. ZFS is too 
> complicated and bloated (of course it has its advantages), however major 
> factor for me has been that it is not possible to encrypt ZFS natively 
> on FreeBSD as of now.

Illumos distro OmniOS CE 

https://omniosce.org/

has support for native encryption since r151032

https://github.com/omniosorg/omnios-build/blob/r151032/doc/ReleaseNotes.md
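
A sketch of what using it looks like there (pool and dataset names
hypothetical):

# create a natively encrypted dataset, prompting for a passphrase
zfs create -o encryption=on -o keyformat=passphrase rpool/secure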

Patrick Marchand wrote:

> Hi,
>
> 
> I'll be playing around with DragonflyBSD Hammer2 (and multiple offsite
> backups) for a home NAS over the next few weeks. I'll probably do a
> presentation about the experience at the Montreal BSD user group
> afterwards. It does not require as many resources as ZFS or BTRFS,
> but offers many similar features.
> 

Been there, done that! 


dfly# uname -a
DragonFly dfly.int.bagdala2.net 5.6-RELEASE DragonFly v5.6.2-RELEASE
#26: Sun Aug 11 16:04:07 EDT 2019
r...@dfly.int.bagdala2.net:/usr/obj/usr/src/sys/X86_64_GENERIC  x86_64

# Device                              Mountpoint      FStype  Options  Dump  Pass#
/dev/serno/B620550018.s1a             /boot           ufs     rw       1     1
# /dev/serno/B620550018.s1b           none            swap    sw       0     0
# Next line adds swapcache on the separate HDD instead of original swap commented out above
/dev/serno/451762B0E46228230099.s1b   none            swap    sw       0     0
/dev/serno/B620550018.s1d             /               hammer  rw       1     1
/pfs/var                              /var            null    rw       0     0
/pfs/tmp                              /tmp            null    rw       0     0
/pfs/home                             /home           null    rw       0     0
/pfs/usr.obj                          /usr/obj        null    rw       0     0
/pfs/var.crash                        /var/crash      null    rw       0     0
/pfs/var.tmp                          /var/tmp        null    rw       0     0
proc                                  /proc           procfs  rw       0     0


# Added by Predrag Punosevac
/dev/serno/ZDS01176.s1a               /data           hammer  rw       2     2
/dev/serno/5QG00WTH.s1a               /mirror         hammer  rw       2     2
# /dev/serno/5QG00XF0.s1e             /test-hammer2   hammer2 rw       2     2


# Mount pseudo file systems from the master drive which is used as a backup for my desktop
/data/pfs/backups                     /data/backups   null    rw       0     0
/data/pfs/nfs                         /data/nfs       null    rw       0     0


H2 lacks a built-in backup mechanism. I was hoping that H2 would get some
kind of "hammer mirror-copy" like H1's, or a "zfs send/receive". My server is
still on H1 and I really enjoy being able to continuously back it up.
That's the only thing I am missing in H2. On a positive note, H2 did
get support for a boot environment manager last year.

https://github.com/newnix/dfbeadm
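
(The H1 mechanism I mean, sketched with hypothetical PFS paths:

# one-shot replication of a HAMMER1 PFS to a slave PFS
hammer mirror-copy /data/pfs/backups /mirror/pfs/backups
# or keep it running continuously
hammer mirror-stream /data/pfs/backups /mirror/pfs/backups
)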

Also, DF jails are stuck in 2004 or something like that. I like their
NFSv3. DragonFly gets its software RAID discipline through the old,
unmaintained FreeBSD natacontrol utility. Hardware RAID cards are not
frequently tested, and the community seems keen on treating DF as a
desktop OS rather than a storage workhorse. Having said that, HDDs are
cheap these days and home users probably don't need anything bigger than
a 12TB mirror. 


Zhi-Qiang Lei wrote:

> 1. FreeBSD was my first consideration because of ZFS, but as far as I
> know, ZFS doesn't work well with a RAID controller, 

Of course not. ZFS is a volume manager and file system in one. How would
ZFS detect errors and do self-healing if it relies on the HW Raid
controller to get the info about block devices?

> and neither FreeBSD
> nor OpenBSD has a driver for the B120i array controller on the
> mainboard (HP is to be blamed). I could use AHCI mode instead of RAID,
> which also suits ZFS on FreeBSD, yet there is a notorious fan noise
> issue with that approach.
> 

That is not a genuine HWRaid card. That is built-in software
RAID. You should not be using that crap. 


> 2. An HP P222 array controller works right out of the box on
> OpenBSD, maybe FreeBSD as well, but the combination of ZFS and a RAID
> controller seems weird to me. 
> 

FreeBSD has better support for HWRaid cards than OpenBSD. I am talking
about serious HWRaid cards like the former LSI controllers. Only Areca used
to fully support OpenBSD. Also, FreeBSD UFS journaling is more advanced
than OpenBSD journaling. However, unless you put H1 or H2 on top of
hardware RAID, you will not get COW, snapshots, history, and all the other
stuff with any version of UFS. 

I know people on this list who prefer HWRaid and also know people on
this list who prefer software RAID (including ZFS).


> 3. OpenBSD actually exceeded my expectations. CIFS and NFS are easy
> to set up. The most fabulous thing to me is the full disk
> encryption. I had a disk failure, and the array controller got burnt
> once because I had a cooling issue. However, I was confident getting
> a replacement, and no data was lost.


The OpenBSD NFS server implementation is slow compared to others, but for
home users YMMV.

Re: heavy CPU consumption and laggy/stuttering video on thinkpad x230

2019-11-15 Thread David Trudgian
On 11/15/19 9:51 AM, Michael H wrote:
> *laptop: thinkpad x230, i7 processor, 8G ram, intel hd 4000 gpu*
> *New OpenBSD user with a fresh install.*

I have a ThinkPad T430 which I'm now typing this on. It's an i5-3320m
(vs your i7-3520m) with 12GB RAM and the same HD4000 class graphics, so
it's pretty close.

> My user account was created during the install process and has the "staff" class,
> though I haven't increased datasize-cur or datasize-max for staff yet.
> Additionally, apmd has been set to -A as suggested by the FAQ.

I'm no expert, having only installed OpenBSD for the first time recently,
but I played around with the staff settings when I couldn't use a browser
or play video at all well. I started with some values from a blog post by
someone setting up a laptop, and ended up with:

:datasize-cur=8192M:\
:datasize-max=8192M:\
:maxproc-max=4096:\
:maxproc-cur=1024:\
:openfiles-max=32768:\
:openfiles-cur=16384:\
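
(Those go in the staff class block of /etc/login.conf; if a compiled
database exists it needs rebuilding afterwards, something like:

# only needed when /etc/login.conf.db is in use
[ -f /etc/login.conf.db ] && cap_mkdb /etc/login.conf

and the new limits apply from the next login.)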

I have also set the following sysctl values:

# shared memory limits (browsers, etc.)
# max shared memory pages (*4096=8GB)
kern.shminfo.shmall=20971552
# max shared memory segment size (2GiB)
kern.shminfo.shmmax=2147483647
# max shared memory identifiers
kern.shminfo.shmmni=1024
# max shared memory segments per process
kern.shminfo.shmseg=1024

# Other
kern.maxproc=32768
kern.maxfiles=131072
kern.maxvnodes=262144
kern.bufcachepercent=50

The large file numbers here are due to using syncthing, and are (I'd guess)
probably not generally advisable. The other stuff is quite likely to be
inadvisable or just plain wrong (due to my inexperience), but it has
given me a responsive system when using Firefox / Chromium, playing
video, etc.
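
(For anyone copying these: they persist across reboots if placed in
/etc/sysctl.conf; to try one out without rebooting, something like:

# apply a single value immediately
sysctl kern.bufcachepercent=50
)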

> *Is this an issue with the system somehow using the modesetting driver
> instead of the inteldrm* *driver*? If so, why is that, and how should I best
> remedy this problem? I thought old ThinkPads were generally fully supported
> by OpenBSD?

Although the login.conf and sysctl settings made the most difference for
me, I do have a smoother experience using the intel driver than the
modesetting one. It's especially noticeable when playing video in
Firefox, and dragging the browser window around on my XFCE desktop. The
intel driver happily plays the video smoothly as the window moves
around. The modesetting driver wouldn't do that for me.

I have the following at /etc/X11/xorg.conf.d/intel.conf

Section "Device"
Identifier "drm"
Driver "intel"
Option "TearFree" "true"
EndSection

Hope some of this might be useful!

Cheers,

Dave Trudgian







httpd redirect

2019-11-15 Thread Thomas
Hi,

I need to do this redirect with httpd:

from:
http://my.old.site/#info
to:
https://my.new.site/products/product.html

Does anyone have an idea how to achieve this? I have tried several variations
using the "location match" statement, but without success.

Thanks a lot,
Thomas




Re: build error on octeon, 6.6

2019-11-15 Thread Christian Groessler

Hi,

On 2019-11-11 12:18, Christian Groessler wrote:


Now I'm going to rebuild again, capturing the "make" output, and try 
to replicate the problem manually.




Interestingly, this time the build fails at a later stage.


c++ -O2 -pipe  -fomit-frame-pointer -std=c++11 
-fvisibility-inlines-hidden -fno-exceptions -fno-rtti -Wall -W 
-Wno-unused-parameter -Wwrite-strings -Wcast-qual 
-Wno-missing-field-initializers -pedantic -Wno-long-long 
-Wdelete-non-virtual-dtor -Wno-comment   -MD -MP 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/obj/../include/llvm/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/lib/Target/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/obj/../include/llvm/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/lib/Target/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/obj/../include/llvm/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/lib/Target/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/obj/../include/llvm/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/lib/Target/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/obj/../include/llvm/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/lib/Target/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/obj/../include/llvm/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/lib/Target/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/obj/../include/llvm/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/lib/Target/AMDGPU 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Analysis 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Analysis 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/BinaryFormat 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Bitcode 
-I/include/llvm/CodeGen -I/include/llvm/CodeGen/PBQP 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/IR 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Transforms 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Transforms/Coroutines 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/ProfileData/Coverage 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/DebugInfo/CodeView 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/DebugInfo/DWARF 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/DebugInfo/MSF 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/DebugInfo/PDB 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Demangle 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/ExecutionEngine 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/IRReader 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Transforms 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Transforms/InstCombine 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/obj/../include/llvm/Transforms/InstCombine 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Transforms 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/LTO 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Linker 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/MC 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/MC/MCParser 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Transforms 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Object 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Option 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Passes 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/ProfileData 
-I/net/sirius/temp/routie-build/6.6/src/gnu/usr.bin/clang/libLLVM/../../../llvm/include/llvm/Transforms 
-I/net/sirius/temp/routie-build/6.6/

Re: build error on octeon, 6.6

2019-11-15 Thread Christian Groessler

Rudolf,

On 2019-11-11 15:23, Rudolf Leitgeb wrote:

Somewhere in his error output it says:

Target: mips64-unknown-openbsd6.6

This would not work with octeon AFAIK. Maybe this is the
reason the build fails? It would at least make sense regarding
the "unable to execute command" message



I think your comment is wrong.

See this:


routie# cc --version
OpenBSD clang version 8.0.1 (tags/RELEASE_801/final) (based on LLVM 8.0.1)
Target: mips64-unknown-openbsd6.6
Thread model: posix
InstalledDir: /usr/bin
routie# cat hello.c
#include <stdio.h>

int main(void)
{
    printf("World, Hello\n");
    return 0;
}
routie# cc -o hello hello.c
routie# ./hello
World, Hello
routie# uname -a
OpenBSD routie.xxx.xxx 6.6 GENERIC.MP#107 octeon
routie#


regards,
chris



Re: vi in ramdisk?

2019-11-15 Thread chohag
U'll Be King of the Stars writes:
> This has gotten me thinking about whether line-based editing is really 
> the best abstraction for simple editors.

Yes. Yes it is. You can prise ed out of my cold dead hands.

I don't get where the desire for an editor in the installer comes
from. If you have even the barest scraps of a system you can boot
a full kernel and get a much richer environment to work in. The
installer kernel itself contains much of what's necessary to run
the installed binaries. boot -s. It's a thing.
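
For the unfamiliar, that dance is roughly this (a sketch; exact steps vary):

boot> boot -s
# then, at the single-user shell:
fsck -p
mount -a -t nonfs
export TERM=vt220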

If any tools should be added to the installer it's something for
network interrogation because that gives you more scope for dealing
with odd scenarios *before* you have an installed system, but what's
already there was apparently enough to get over whatever that problem
was I encountered.

> > The power of ed is in the regular expressions, search and substitution.

The power of ed comes from the power of the unix it runs within,
not any particular string matching technique. It is a simple tool
which does one thing, does it well, operates as a pipeline on text
streams, etc.

It would still have that power with any other string matching
technology or even some other obscure addressing mode.

> I assumed that the canonical reference for ed was K&P, "The Unix 
> Programming Environment".  But since then I have discovered this book:
>
> https://mwl.io/nonfiction/tools#ed
>
> When I return home I will buy it.  (I'm overseas at the moment.)
>
> What are some other good books for learning ed?  How about online 
> resources, e.g., FTP sites with collections of interesting scripts.

I assumed the canonical reference for ed was ed(1).

Well I guess the canonical reference would be the contents of
/usr/src/bin/ed but I'm not about to go trawling through that when
there's some stellar documentation sitting right there.

> I'm particularly interested in its history, usage idioms, different 
> implementations, multilingual capabilities, and using it as a vehicle 
> for mastering regular expressions to the point that they are second nature.

I would recommend perl for this, not ed. ed is a text editor. Perl
is a pattern matching engine with a turing complete programming
environment hanging off the back of it. It's adapted (some might
say warped) the language that represents regular expressions in
strange ways but the machinery that's been built around it is
fantastically easy to use in a "get out of the way" sense, leaving
you to concentrate on the arcana of regular expressions. Deciding
why one would want to do that is left as an exercise for the reader.

But absolutely do keep on top of the differences between perl's
regexes and regular regular expressions as described in re_format(7).

Matthew



Re: teco, and Re: vi in ramdisk?

2019-11-15 Thread gwes

On 11/15/19 1:59 PM, gwes wrote:

TECOC from github...
For general amusement:

without video (curses)
  UID   PID  PPID CPU PRI  NI   VSZ   RSS WCHAN   STAT TT TIME COMMAND
 1000 29775 86827   0  28   0   540  1296 -   T p2 0:00.00 ./tecoc
$ size tecoc
text    data    bss dec hex
102449  13096   13424   128969  1f7c9

with video (curses)
$ size tecoc
text    data    bss dec hex
114772  13456   12432   140660  22574
  UID   PID  PPID CPU PRI  NI   VSZ   RSS WCHAN   STAT TT TIME COMMAND
 1000 82440 86827   0  28   0   808  2296 -   T p2 0:00.01 ./tecoc

for comparison:

$ size /bin/ed
text    data    bss dec hex
207704  10800   24264   242768  3b450

  UID   PID  PPID CPU PRI  NI   VSZ   RSS WCHAN   STAT TT TIME COMMAND
 1000 75971 86827   0   3   0   256   196 -   Tp p2 0:00.00 ed

Interesting to note that the text size of ed(1) is almost twice that 
of tecoc.

RSS is larger for teco. 1.3MB isn't too bad, though.

On disk:

12412$ ls -l tecoc
-rwxr-xr-x  1   xxx  256920 Nov 15 13:48 tecoc*

12494$ ls -l /bin/ed
-r-xr-xr-x  1 root  bin  229928 Apr 13  2019 /bin/ed*


As Mr. Davis kindly points out, everything in /bin is statically linked.

With -Bstatic
$ ls -l tecoc
-rwxr-xr-x  1     1472504 Nov 15 15:47 tecoc*

Still not huge. I don't know what the current upper limit for
programs in the install medium is. As this is a totally irrelevant
thread, I suspect that squashing teco into the single install
executable would only grow it by about 250K, because it uses only very
vanilla libraries.

Geoff Steckel



Re: Home NAS

2019-11-15 Thread gwes

[misc intermediate comments removed]
On 11/15/19 3:54 AM, Andrew Luke Nesbit wrote:

In particular I'm trying to figure out a generally applicable way of taking a 
_consistent_ backup of a disk without resorting to single user mode.


I think COW file systems might help in this regard but I don't think
anything like this exists in OpenBSD.


COW in the filesystem, no. However...
a backup is a precautionary copy-before-write.
The only difference is the time granularity.

Consistency? An arbitrary file system snapshot doesn't guarantee that
you won't see -application level- inconsistent data, just that the files
didn't change during backup. Even a COW system that doesn't reveal
a new version of a file until it's been closed won't protect you
from an inconsistent group of files.

What groups of files --must-- be perfectly archived?

If you (a) can pause user activity
(b) can tolerate some inconsistency in captured system log files,
then just run a backup.
Partial DB transactions had better be OK or get a new DBM.
You might have to pause cron(8).
I don't remember any other daemon that would care.

Some archive programs will complain if the size or modification time
of a file changes during copy or if the directory contents change.
Something could be done to automatically go back for those.

Depending very much on your mix of uses, don't even stop anything.

Breaking up the backup into sections - per project, per file system, etc. -
can make the pauses less objectionable. It can make recovery easier as well.
Assuming you have control over the system files, those only need a couple of
copies when they change, for instance.

Brute force:
  ls -Rl /users /proj1 /proj2 > before0
  $BACKUP -o /$BACKUPFS/backup-$(date)
  ls -Rl /users /proj1 /proj2 > after0

# remove known don't-cares
  sed -f ignores before0 > before
  sed -f ignores after0 > after

# check to see if any action needed
  diff before after > changed

  grep -f vitalfiles changed > urgent
  cat urgent changed | mail -s "changes during backup $(date)" you

# calculate list of files needing recopy
  $SCRIPT changed > newbackup

# copy files missed - should run quickly
  $BACKUP -o /$BACKUPFS/bdelta-$(date) -f newbackup

This worked pretty well for me.
The truly paranoid would put a while loop around the diff & recopy...
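
Sketched, reusing the placeholders from above:

# repeat the delta pass until a listing shows no further changes
  while ! cmp -s before after; do
    $BACKUP -o /$BACKUPFS/bdelta-$(date) -f newbackup
    mv after before
    ls -Rl /users /proj1 /proj2 | sed -f ignores > after
    diff before after > changed
    $SCRIPT changed > newbackup
  done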

Binary files can be regenerated if the source *and* environment
are backed up.


Storing the environment is a tricky problem that I haven't found an 
entirely satisfactory solution for, yet.

The key is for the project never to use an unqualified program -
 always "our current version".

One solution is to copy or link a consistent set of utilities
(compiler, linker, libraries) into the project and always use
those in production. Then a backup will capture everything.
This won't necessarily work if the OS changes its ABI but it
can be pretty effective.
I've been in a project that used this approach and it did work.

Keeping an automatic record of utility and library versions used works as
long as the system itself is backed up well.

The discipline to keep everything tidy, ... well.
Without regard to backups, the precaution to take periodic
snapshots of a project, transplant it into an empty system
and make sure the snapshot actually works
has been erm, revealing.

# mv /usr/bin/cc /usr/bin/saved-cc
$ make
cc: not found

Andrew

It can be a pain to design a procedure that fits your needs
and doesn't need a staff of operators (:-(

Good luck!

Geoff Steckel



Re: vi in ramdisk?

2019-11-15 Thread U'll Be King of the Stars

On 16/11/2019 06:55, Roderick wrote:


On Thu, 22 Jan 1970, Chris Bennett wrote:


Yes, but ed also allows one to easily work with only 1-3 lines of
screen.


I think that is so with every line editor?


I don't know of any line editors aside from ed, Vi's open mode, Sam, 
Edlin, and QED and its derivatives.


This has gotten me thinking about whether line-based editing is really 
the best abstraction for simple editors.


If I understand right then this is what structural regular expressions 
are supposed to expand on.



The power of ed is in the regular expressions, search and substitution.


I assumed that the canonical reference for ed was K&P, "The Unix 
Programming Environment".  But since then I have discovered this book:


https://mwl.io/nonfiction/tools#ed

When I return home I will buy it.  (I'm overseas at the moment.)

What are some other good books for learning ed?  How about online 
resources, e.g., FTP sites with collections of interesting scripts?


I'm particularly interested in its history, usage idioms, different 
implementations, multilingual capabilities, and using it as a vehicle 
for mastering regular expressions to the point that they are second nature.


Sam looks very interesting too, and twenty years after writing my first 
text editor I've returned to my favorite type of personal side project, 
looking for the kindest mix of functionality and simplicity.  The 
key was understanding not to make something "no simpler" than the 
simplest useful design.



The only thing that I find more comfortable in sos and miss in ed
is the line alter mode that allows one to interactively delete and
insert characters in a line.


What is sos?  Is it something like open mode in Vi?


That is also what one wants to do carefully
in configuration files. Normally no big editing.


Indeed.  Sometimes my blood runs cold when I'm writing and deploying a 
hotfix of this nature in a production system.  The example that somebody 
gave earlier in this(?) thread about fixing a `/etc/fstab` is one that I 
have experience with.


Andrew
--
OpenPGP key: EB28 0338 28B7 19DA DAB0  B193 D21D 996E 883B E5B9



Re: vi in ramdisk?

2019-11-15 Thread Roderick



On Thu, 22 Jan 1970, Chris Bennett wrote:


Yes, but ed also allows one to easily work with only 1-3 lines of
screen.


I think that is so with every line editor?

The power of ed is in the regular expressions, search and substitution.

The only thing that I find more comfortable in sos and miss in ed
is the line alter mode that allows one to interactively delete and
insert characters in a line. That is also what one wants to do carefully
in configuration files. Normally no big editing.

Rod.



Re: vi in ramdisk?

2019-11-15 Thread Chris Bennett
On Fri, Nov 15, 2019 at 06:02:16PM +, Roderick wrote:
> 
> On Fri, 15 Nov 2019, Theo de Raadt wrote:
> 
> > Christian Weisgerber  wrote:
> 
> > > How large is a C implementation of TECO?
> > 
> > he probably means cat plus the shell's redirection capability.
> 
> I think, TECO is much more powerful than ed and vi.
> 
> But perhaps DEC 10s SOS?
> 
> I do not know if it runs in unix or if there is a C implementation.
> But I remember it as much simpler than ed, but more comfortable,
> and enough for manual editing.
> 
> Is there really nothing simpler than ed in unix?
> 
> Well, the advantage of ed is, that it is the standard unix editor.
> 
> Rod.
> 

Yes, but ed also allows one to easily work with only 1-3 lines of
screen. Screen size can matter. I have always fallen back to using ed
when running single user. If I'm running off of my phone in landscape,
one line is all I can see. With ed, that is enough. With vi plus making
a mistake, it's harder to see. Vi is more powerful than ed, but I'm used
to using vim, so I keep hitting keys that don't work in vi (which is my
problem, not vi's).
I have never used teco or sos.

I'm neutral overall on this, but the number of screen lines used does
matter to me.

Forgive me if the date is wrong, I can't find the cause. Going to do a
new snapshot right now.

Chris Bennett




teco, and Re: vi in ramdisk?

2019-11-15 Thread gwes

TECOC from github...
For general amusement:

without video (curses)
  UID   PID  PPID CPU PRI  NI   VSZ   RSS WCHAN   STAT  TT TIME COMMAND
 1000 29775 86827   0  28   0   540  1296 -   T p2 0:00.00 ./tecoc
$ size tecoc
text    data    bss dec hex
102449  13096   13424   128969  1f7c9

with video (curses)
$ size tecoc
text    data    bss dec hex
114772  13456   12432   140660  22574
  UID   PID  PPID CPU PRI  NI   VSZ   RSS WCHAN   STAT  TT TIME COMMAND
 1000 82440 86827   0  28   0   808  2296 -   T p2 0:00.01 ./tecoc

for comparison:

$ size /bin/ed
text    data    bss dec hex
207704  10800   24264   242768  3b450

  UID   PID  PPID CPU PRI  NI   VSZ   RSS WCHAN   STAT  TT TIME COMMAND
 1000 75971 86827   0   3   0   256   196 -   Tp    p2 0:00.00 ed

Interesting to note that the text size of ed(1) is almost twice that of tecoc.
RSS is larger for teco. 1.3MB isn't too bad, though.

On disk:

12412$ ls -l tecoc
-rwxr-xr-x  1   xxx  256920 Nov 15 13:48 tecoc*

12494$ ls -l /bin/ed
-r-xr-xr-x  1 root  bin  229928 Apr 13  2019 /bin/ed*



Re: vi in ramdisk?

2019-11-15 Thread Raul Miller
On Fri, Nov 15, 2019 at 1:17 PM Roderick  wrote:
> On Fri, 15 Nov 2019, Ian Darwin wrote:
> > Who needs cat when you have echo?
>
> Echo? Necessary?! Terrible waste of paper in a teletype terminal!
> I remember editing with sos in TOPS 10 after giving the command:
> tty noecho.

This is starting to smell like premature optimization.

Contrast, for example:

$ (echo this; echo is; echo a; echo test) >file

vs

$ cat >file
this
is
a
test

And tell me: which uses more paper?

(Answer: neither, in my case, since I am not using a teletype machine.)

Thanks,

-- 
Raul



Re: vi in ramdisk?

2019-11-15 Thread Roderick



On Fri, 15 Nov 2019, Ian Darwin wrote:


Who needs cat when you have echo?


Echo? Necessary?! Terrible waste of paper in a teletype terminal!
I remember editing with sos in TOPS 10 after giving the command:
tty noecho.

Rod.



Re: vi in ramdisk?

2019-11-15 Thread Roderick



On Fri, 15 Nov 2019, Theo de Raadt wrote:


Christian Weisgerber  wrote:



How large is a C implementation of TECO?


he probably means cat plus the shell's redirection capability.


I think, TECO is much more powerful than ed and vi.

But perhaps DEC 10s SOS?

I do not know if it runs in unix or if there is a C implementation.
But I remember it as much simpler than ed, but more comfortable,
and enough for manual editing.

Is there really nothing simpler than ed in unix?

Well, the advantage of ed is, that it is the standard unix editor.

Rod.



Re: vi in ramdisk?

2019-11-15 Thread Ian Darwin
On Fri, Nov 15, 2019 at 10:08:26AM -0700, Theo de Raadt wrote:
> Christian Weisgerber  wrote:
> 
> > > I think, for editing config files, there are surely editors that
> > > are simpler, smaller, not so powerful, but easier to use than ed.
> > 
> > By all means, do not keep us in suspense and tell us the names of
> > these editors.
> > 
> > How large is a C implementation of TECO?
> 
> he probably means cat plus the shell's redirection capability.
> 

Who needs cat when you have echo? 



Re: Home NAS

2019-11-15 Thread Zhi-Qiang Lei
I have an HP Gen8 Microserver running as a NAS using OpenBSD. It has been 
serving well for like 5 months. I chose OpenBSD over FreeBSD because:

1. FreeBSD was my first consideration because of ZFS, but as far as I know, ZFS 
doesn't work well with a RAID controller, and neither FreeBSD nor OpenBSD has a 
driver for the B120i array controller on the mainboard (HP is to be blamed). I 
could use AHCI mode instead of RAID, which also suits ZFS on FreeBSD, yet there 
is a notorious fan noise issue with that approach.
2. An HP P222 array controller works right out of the box on OpenBSD, maybe 
FreeBSD as well, but the combination of ZFS and a RAID controller seems weird to 
me.
3. OpenBSD actually exceeded my expectations. CIFS and NFS are easy to 
set up. The most fabulous thing to me is the full disk encryption. I had a disk 
failure, and the array controller got burnt once because I had a cooling 
issue. However, I was confident getting a replacement, and no data was lost.

As for the 5TB limitation, I haven't been there.


> On Nov 14, 2019, at 10:26 PM, Jan Betlach  wrote:
> 
> 
> Hi guys,
> 
> I am setting up a home NAS for five users. Total amount of data stored on NAS 
> will not exceed 5 TB.
> Clients are Macs and OpenBSD machines, so that SSHFS works fine from both (no 
> need for NFS or Samba).
> I am much more familiar and comfortable with OpenBSD than with FreeBSD.
> My dilemma while stating the above is as follows:
> 
> Will OpenBSD's UFS be stable and reliable enough for the intended purpose? The NAS 
> will consist of just one encrypted drive, regularly backed up to a hardware-RAID 
> encrypted two-disk drive via rsync.
> 
> Should I bite the bullet and build the NAS on FreeBSD, taking advantage of 
> ZFS, snapshots, replications, etc.? Or is this overkill?
> 
> BTW my most important data is also backed off-site.
> 
> Thank you in advance for your comments.
> 
> Jan
> 



Re: vi in ramdisk?

2019-11-15 Thread Theo de Raadt
Christian Weisgerber  wrote:

> > I think, for editing config files, there are surely editors that
> > are simpler, smaller, not so powerful, but easier to use than ed.
> 
> By all means, do not keep us in suspense and tell us the names of
> these editors.
> 
> How large is a C implementation of TECO?

he probably means cat plus the shell's redirection capability.




Re: vi in ramdisk?

2019-11-15 Thread Christian Weisgerber
On 2019-11-15, Roderick  wrote:

>> ed is included in the ramdisk, but if your use case is using vi to fix a
>
> I imagine, it is there for using it in scripts.

Interestingly enough, the installer itself does not use ed, as far
as I can tell.

* I pretty regularly use ed to perform some configuration tweaks
  before rebooting a freshly installed system.
* I have, rarely, used ed to recover a system from errors in
  /etc/fstab.
* Since the installer itself is just a script, it can be modified
  with ed in the install environment and then re-run.  From time
  to time I do this when debugging the installer or working on some
  feature there.

If you have some passing familiarity with sed, then ed will feel
very familiar.  It's just an interactive sed.  (Historically, it's
the other way around, of course.)
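
A (hypothetical) session, to give the flavour; ed prints byte counts on
read and write and is otherwise silent:

$ ed /etc/myfile
120
/wrong/s//right/
w
120
q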

> I think, for editing config files, there are surely editors that
> are simpler, smaller, not so powerful, but easier to use than ed.

By all means, do not keep us in suspense and tell us the names of
these editors.

How large is a C implementation of TECO?

-- 
Christian "naddy" Weisgerber  na...@mips.inka.de



Re: A sad raid/fsck story

2019-11-15 Thread sven falempin
On Sat, Oct 5, 2019 at 8:39 AM Nick Holland  wrote:
>
> On 10/4/19 8:37 AM, sven falempin wrote:
> ...
> > How [do I] check the state of the MIRROR raid array , to detect large
> > amount of failures on one of the two disk ?
> >
> > Best.
> >
>
> fsck has NOTHING to do with the status of your drives.
> It's a File System ChecKer.  Your disk can be covered with unreadable
> sectors but if the file system on that disk is intact, fsck reports
> no problem.  Conversely, your disks can be fine, but your file system
> can be scrambled beyond recognition; bad news from fsck doesn't mean
> your drive is bad.
>
> To check the status of the disks, you probably want to slip a call
> to bioctl into /etc/daily.local:
>
> # bioctl softraid0
> Volume  Status   Size Device
> softraid0 0 Online  7945693712896 sd2 RAID1
>   0 Online  7945693712896 0:0.0   noencl 
>   1 Online  7945693712896 0:1.0   noencl 
>
> This is a happy array.  If you have a bad drive, one of those
> physical drives is going to not be online.
>
> Nick.
>

The moral of the story for me is:

if your RAID array is not mounting, check SMART, check bioctl, fsck
each disk separately,
and then restore or dump the bad drive.
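
Nick's daily.local idea, sketched (device name and exact status strings
assumed; adjust them to what your bioctl actually prints):

# /etc/daily.local: mail root when the array is not all Online
if /sbin/bioctl softraid0 | egrep -q 'Offline|Degraded|Fail|Rebuild'; then
        /sbin/bioctl softraid0 | mail -s "$(hostname) softraid alert" root
fi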

Next,

RAID 5 is cool. Does it know which disk failed the checksum?



Re: Home NAS

2019-11-15 Thread Patrick Marchand

Hi,


I'll be playing around with DragonflyBSD Hammer2 (and multiple offsite 
backups) for a home NAS over the next few weeks. I'll probably do a 
presentation about the experience at the Montreal BSD user group 
afterwards. It does not require as many resources as ZFS or BTRFS, but 
offers many similar features.


As for OpenBSD as a home NAS, I'm sure it would be fine if you're 
diligent about backups. You might also want to get a small UPS for it (I 
bought a refurbished APC for $80), so frequent but short power 
interruptions do not require an FFS integrity check. (I do this for my 
router)


Pleasure



Re: vi in ramdisk?

2019-11-15 Thread Roderick



On Fri, 15 Nov 2019, Noth wrote:


ed is included in the ramdisk, but if your use case is using vi to fix a


I imagine, it is there for using it in scripts.

I think, for editing config files, there are surely editors that
are simpler, smaller, not so powerful, but easier to use than ed.

Rod.



heavy CPU consumption and laggy/stuttering video on thinkpad x230

2019-11-15 Thread Michael H
*laptop: thinkpad x230, i7 processor, 8G ram, intel hd 4000 gpu*
*New OpenBSD user with a fresh install.*

My user account was created during the install process and has the "staff" class,
though I haven't increased datasize-cur or datasize-max for staff yet.
Additionally, apmd has been set to -A as suggested by the FAQ.

Basically, whenever I play a video, CPU0 and CPU2 (shown in top) spike up to
about 30-58%. The heatsink/fan/ventilation area of the laptop gets
extremely hot.

Videos buffer pretty slowly, and most importantly, when I am watching a
live stream via players such as mpv, it's basically unwatchable because the
video stops every 3-4 seconds.

*Here is a log file of messages from mpv while playing a stream: *
https://pastebin.com/3VRWgv3K
*Here is a log file of messages from mpv while playing a youtube video: *
https://pastebin.com/mn0wEXMf

*here is my dmesg:*
http://ix.io/21Bg
*here is my Xorg.0.log:*
http://ix.io/21Bb

*Here are the firmwares that have been downloaded during installation:*
intel-firmware-20190918v0 microcode update binaries for Intel CPUs
inteldrm-firmware-20181218 firmware binary images for inteldrm(4) driver
iwn-firmware-5.11p1 firmware binary images for iwn(4) driver
uvideo-firmware-1.2p3 firmware binary images for uvideo(4) driver
vmm-firmware-1.11.0p2 firmware binary images for vmm(4) driver

*Is this an issue with the system somehow using the modesetting driver
instead of the inteldrm* *driver*? If so, why is that, and how should I best
remedy this problem? I thought old ThinkPads were generally fully supported
by OpenBSD?

Anyway, if anyone could help I would really appreciate it!

*and if anyone is using this exact machine (thinkpad x230), could you also
recommend some of the other optimizations you have done for this machine? *

thanks in advance!


Re: Home NAS

2019-11-15 Thread Jan Betlach



Hi,

thank you all for comments.

I am restoring the backup to my new OpenBSD-based home NAS as I write 
this.


Why I have decided to go this route and not with another option like ZFS:
- FFS seems to be reliable and stable enough for my purpose. ZFS is too 
complicated and bloated (of course it has its advantages); however, a major 
factor for me has been that it is not possible to encrypt ZFS natively 
on FreeBSD as of now.
I am also more comfortable with OpenBSD than with FreeBSD. I did not 
want to go with Linux at all.
- I have installed OpenBSD on an external unencrypted USB stick, so 
that I don't need to have access to the box in case of a restart. The main 
data NAS disk is a 2TB internal one in the box (Zotac nano), which is 
encrypted. I can easily mount it via SSH in case of a restart. Backups are 
automated via rsync to the encrypted external hardware RAID disks. I am using 
DUIDs for all drives (see the sketch after this list).

- I do keep offsite backup as well.
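
(A DUID-based fstab line, for anyone unfamiliar, looks something like the
following; the identifier and mount point are hypothetical:

52fdd1ce48744600.a /data ffs rw,nodev,nosuid 1 2
)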

I have tested this setup in the last couple of days before going all in. 
So far so good. Performance is plenty acceptable for my usage. Mounting 
the NAS storage via SSHFS on client machines (Macs and OpenBSDs) works 
flawlessly and speed is also OK.


Thanks again

Jan


On 15 Nov 2019, at 16:02, pierre1.bar...@orange.com wrote:


Hello,

I tried a home NAS with ZFS, then BTRFS. Those filesystems need tons 
of RAM (~1 GB of RAM per TB of disk), preferably ECC.

I found it very expensive for home usage, so I wouldn't recommend it.
Recovery systems were also nonexistent at the time (no btrfsck); I don't 
know if it has improved since.


I ended up with LVM: cheap to implement and very easy to extend. I am 
very happy with it.


--
Regards,
Pierre BARDOU

-----Original Message-----
From: owner-m...@openbsd.org  On behalf of 
Rafael Possamai

Sent: Friday, 15 November 2019 14:35
To: Jan Betlach 
Cc: misc@openbsd.org
Subject: Re: Home NAS

My experience with ZFS (FreeNAS for the most part) is that it becomes 
more "expensive" to expand your pool after the fact (for a couple of 
different reasons, see below), but if 5TB is all you're ever going to 
need in this specific case, I think you should be fine and can take 
advantage of ZFS features like you said.


I have sources for this at home (a couple of articles and link to a 
forum thread), but these are saved on my desktop at home. Just let me 
know and I'll share them with you later.


On Thu, Nov 14, 2019, 8:27 AM Jan Betlach  wrote:



Hi guys,

I am setting up a home NAS for five users. Total amount of data 
stored

on NAS will not exceed 5 TB.
Clients are Macs and OpenBSD machines, so that SSHFS works fine from
both (no need for NFS or Samba).
I am much more familiar and comfortable with OpenBSD than with 
FreeBSD.

My dilemma while stating the above is as follows:

Will OpenBSD's UFS be stable and reliable enough for the intended
purpose? The NAS will consist of just one encrypted drive, regularly
backed up to a hardware-RAID encrypted two-disk drive via rsync.

Should I bite the bullet and build the NAS on FreeBSD, taking
advantage of ZFS, snapshots, replications, etc.? Or is this overkill?

BTW my most important data is also backed off-site.

Thank you in advance for your comments.

Jan








Re: Home NAS

2019-11-15 Thread pierre1.bardou
Hello,

I tried a home NAS with ZFS, then BTRFS. Those filesystems need tons of RAM 
(~1 GB of RAM per TB of disk), preferably ECC.
I found it very expensive for home usage, so I wouldn't recommend it.
Recovery systems were also nonexistent at the time (no btrfsck); I don't know if 
it has improved since.

I ended up with LVM: cheap to implement and very easy to extend. I am very happy 
with it.

--
Regards,
Pierre BARDOU

-----Original Message-----
From: owner-m...@openbsd.org  On behalf of Rafael 
Possamai
Sent: Friday, 15 November 2019 14:35
To: Jan Betlach 
Cc: misc@openbsd.org
Subject: Re: Home NAS

My experience with ZFS (FreeNAS for the most part) is that it becomes more 
"expensive" to expand your pool after the fact (for a couple of different 
reasons, see below), but if 5TB is all you're ever going to need in this 
specific case, I think you should be fine and can take advantage of ZFS 
features like you said.

I have sources for this at home (a couple of articles and link to a forum 
thread), but these are saved on my desktop at home. Just let me know and I'll 
share them with you later.

On Thu, Nov 14, 2019, 8:27 AM Jan Betlach  wrote:

>
> Hi guys,
>
> I am setting up a home NAS for five users. Total amount of data stored 
> on NAS will not exceed 5 TB.
> Clients are Macs and OpenBSD machines, so that SSHFS works fine from 
> both (no need for NFS or Samba).
> I am much more familiar and comfortable with OpenBSD than with FreeBSD.
> My dilemma while stating the above is as follows:
>
> Will OpenBSD's UFS be stable and reliable enough for the intended 
> purpose? The NAS will consist of just one encrypted drive, regularly 
> backed up to a hardware-RAID encrypted two-disk drive via rsync.
>
> Should I bite the bullet and build the NAS on FreeBSD, taking advantage 
> of ZFS, snapshots, replications, etc.? Or is this overkill?
>
> BTW my most important data is also backed off-site.
>
> Thank you in advance for your comments.
>
> Jan
>
>




Re: vi in ramdisk?

2019-11-15 Thread Noth



On 08/11/2019 07:06, Philip Guenther wrote:

On Thu, Nov 7, 2019 at 9:57 PM Brennan Vincent 
wrote:


I am asking this out of pure curiosity, not to criticize or start a debate.

Why does the ramdisk not include /usr/bin/vi by default? To date,
it is the only UNIX-like environment I have ever seen without some form
of vi.


The ramdisk space is extremely tight.  We include what we feel is
necessary, PUSHING OUT other stuff as priorities shift.  If you have watched
the commits closely, you would have seen drivers vanish from the ramdisks
on tight archs as new functionality was added.

Given what we want people to use the ramdisks for (installing,
reinstalling, upgrading, fixing boot and set issues), vi is not necessary,
while other functionality and drivers extend their applicability.  We will
keep the latter and not include the former.


Philip Guenther


ed is included in the ramdisk, but if your use case is using vi to fix a 
config file on an existing installation, just do this (assuming you 
mounted everything into /mnt):


chroot /mnt /bin/ksh

export TERM=vt100

vi /etc/yourfile


Cheers,

Noth



Re: Home NAS

2019-11-15 Thread Rafael Possamai
My experience with ZFS (FreeNAS for the most part) is that it becomes more
"expensive" to expand your pool after the fact (for a couple of different
reasons, see below), but if 5TB is all you're ever going to need in this
specific case, I think you should be fine and can take advantage of ZFS
features like you said.

I have sources for this at home (a couple of articles and link to a forum
thread), but these are saved on my desktop at home. Just let me know and
I'll share them with you later.

On Thu, Nov 14, 2019, 8:27 AM Jan Betlach  wrote:

>
> Hi guys,
>
> I am setting up a home NAS for five users. Total amount of data stored
> on NAS will not exceed 5 TB.
> Clients are Macs and OpenBSD machines, so that SSHFS works fine from
> both (no need for NFS or Samba).
> I am much more familiar and comfortable with OpenBSD than with FreeBSD.
> My dilemma while stating the above is as follows:
>
> Will OpenBSD's UFS be stable and reliable enough for the intended
> purpose? The NAS will consist of just one encrypted drive, regularly backed
> up to a hardware-RAID encrypted two-disk drive via rsync.
>
> Should I bite the bullet and build the NAS on FreeBSD, taking advantage
> of ZFS, snapshots, replications, etc.? Or is this overkill?
>
> BTW my most important data is also backed off-site.
>
> Thank you in advance for your comments.
>
> Jan
>
>


Re: Home NAS

2019-11-15 Thread Raymond, David
I don't know how current tape systems are, but I have been burnt by
them in the past.  Either the tape deteriorates or the tape writer
company goes out of business.  My current approach is to keep stuff I
want to keep on current online storage in multiple places plus offline
USB.  Data get migrated to new media as they appear and prove
themselves.  There is still the possibility of undetected bit rot
however...

Dave

On 11/15/19, Andrew Luke Nesbit  wrote:
> On 15/11/2019 10:11, gwes wrote:
>> On 11/14/19 3:52 PM, Andrew Luke Nesbit wrote:
>>> On 15/11/2019 07:44, Raymond, David wrote:
 I hadn't heard about file corruption on OpenBSD.  It would be good to
 get to the bottom of this if it occurred.
>>>
>>> I was surprised when I read mention of it too, without any real claim
>>> or detailed analysis to back it up.  This is why I added my disclaimer
>>> about "correcting me if I'm wrong because I don't want to spread
>>> incorrect information".
>
> [...]
>
>> There was a thread a couple of months ago started by someone either
>> pretty
>> ignorant or a troll.
>> The consensus answer: no more than any other OS, less than many.
>
> Thank you gwes, for the clarification.
>
> The thread is vaguely coming back to my memory now.  I was dipping in
> and out of it at the time as I didn't have time to study the details at
> the time.
>
>> One size definitely doesn't fit all.
>
> That is pretty obvious.  I never mentioned a blanket rule, and I assume
> that OP is able to tailor any suggestion to their needs.
>
>> Backup strategies depend on user's criteria, cost of design and
>> cost of doing the backups - administration & storage, etc.
>
> Sure.  I don't have a personal archival storage system yet for long term
> storage that satisfies my specifications because I don't have the
> infrastructure and medium yet to store it.  I plan on investing in LTO
> tape but I can not afford the initial cost yet.
>
>> In an ideal world every version of every file lasts forever.
>> Given real limitations, versioning filesystems can't and don't.
>
> Indeed.  But having archival snapshots at various points in time
> increases the _probability_ that the version of the file that you need
> will be present if+when you need it.
>
>> If your data are critical, invest in a dozen or more portable
>> USB drives. Cycle them off-site. Reread them (not too often)
>> to check for decay.
>
> Yes, this is part of the backup system that I'm designing for my NAS,
> but it's not so much for archiving.
>
>> If you have much  available, get a
>> modern tape system.
>
> Yes, as I mentioned above LTO would be great if+when I can afford it.
>
>> The backup system used over 50 years ago still suitable for many
>> circumstances looks something like this:
>>daily backups held for 1 month
>>weekly backups held for 6-12 months
>>monthly backups held indefinitely offsite.
>> Hold times vary according to circumstances.
>
> I think something like this is a good plan.
>
>> The backup(8) program can assist this by storing deltas so that
>> more frequent backups only contain deltas from the previous
>> less frequent backup.
>
> I've not used backup(8) before, thanks for the suggestion.  I will have
> a look.
>
>> The compromise between backup storage requirements and granularity
>> of recovery points can be mitigated. The way to do it depends on
>> the type and structure of the data:
>>
>> Some data are really transient and can be left out.
>>
>> Source code control systems (or whatever the name is this week)
>> are a good way for intermittent backups to capture a good history
>> of whatever data is around if it's text.
>
> I don't understand how SCM's are supposed to help with this...
>
>> DBs often have their own built-in backup mechanisms.
>
> This underscores the difference between file system-level backups,
> block-level backups, and (for DBs) application-level backups.  In
> particular I'm trying to figure out a generally applicable way of taking
> a _consistent_ backup of a disk without resorting to single user mode.
>
> I think COW file systems might help in this regard but I don't think
> anything like this exists in OpenBSD.
>
>> Binary files can be regenerated if the source *and* environment
>> are backed up.
>
> Storing the environment is a tricky problem that I haven't found an
> entirely satisfactory solution for, yet.
>
>> been there, mounted the wrong tape... what write protect ring?
>
> O yeah... me too.  My team inherited a hosted service and upon
> auditing we discovered its backup system was stranger than fiction.  But
> it was so bizarre that we couldn't determine whether it was _supposed_
> to be that way or if our reasoning was flawed.  A classic type of problem.
>
> Andrew
> --
> OpenPGP key: EB28 0338 28B7 19DA DAB0  B193 D21D 996E 883B E5B9
>


-- 
David J. Raymond
david.raym...@nmt.edu
http://physics.nmt.edu/~raymond



Re: 10Gbit network work only 1Gbit

2019-11-15 Thread Gregory Edigarov



On 13.11.19 21:18, Hrvoje Popovski wrote:

On 13.11.2019. 16:37, Gregory Edigarov wrote:

could you please do one more test:
"forwarding over ix0 and ix1, pf enabled, 5 tcp states"

with this generator I can't use tcp. Generally, pps with 5 or 50
states is more or less the same ... the problem with tcp testing is that I
can't get precise pps numbers ...

and only for you :)
with iperf3 (8 tcp streams) on client boxes I'm getting these results ...

forwarding over ix0 and ix1, pf and ipsec disabled
9.40Gbps

forwarding over ix0 and ix1, pf enabled, 8 tcp streams
7.40Gbps

forwarding over ix0 and ix1, ipsec established over em0, pf disabled
8.10Gbps

forwarding over ix0 and ix1, ipsec established over em0, pf enabled, 8
TCP streams
5.25Gbps

thanks, Hrvoje
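
(For reference, the client side of an 8-stream run like that is presumably
invoked as something like:

iperf3 -c <server address> -P 8

with -P setting the number of parallel streams.)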



On 13.11.19 12:52, Hrvoje Popovski wrote:

On 13.11.2019. 10:59, Hrvoje Popovski wrote:

On 12.11.2019. 10:54, Szél Gábor wrote:

Dear Hrvoje, Theo,

Thank you for your answers!

answers to the questions:
- who is the parent interface for carp?  -> vlan ( carp10 interface parent
vlan10 -> vlan10 interface parent -> trunk0 )
- why don't the vlan interfaces have an IP address? -> it wasn't needed! I
think the vlan interface only needs to tag packets. The carp (over vlan)
interface has the IP address.

it's a little strange to me not to have an IP address on the carp parent
interface, but if it works for you ... ok..


- vether implies that you have a bridge? -> yes, we have only one bridge
for bridged openvpn clients, but we will eliminate it.


we will do the following:
- refresh our backup firewall to oBSD 6.6
- replace trunk interface with aggr
- remove bridge interface

this is a nice start to make your setup faster. The big performance killers
in your setup are ipsec and old hardware. Maybe oce(4) too, but I never
tested it, so I'm not sure ... if you can, replace oce with ix; an intel
x520 is not that expensive ..

bridge is slow, but only for traffic that goes through it. With ipsec,
the second a tunnel is established, forwarding performance will
drop significantly on the whole firewall ...

I forgot the numbers, so I did quick tests ..


forwarding over ix0 and ix1, pf and ipsec disabled
1.35Mpps

forwarding over ix0 and ix1, pf enabled, 500 UDP states
800Kpps

forwarding over ix0 and ix1, ipsec established over em0, pf disabled
800Kpps

forwarding over ix0 and ix1, ipsec established over em0, pf enabled, 500
UDP states
550Kpps



OpenBSD 6.6-current (GENERIC.MP) #453: Mon Nov 11 21:40:31 MST 2019
  dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 17115840512 (16322MB)
avail mem = 16584790016 (15816MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xcf42c000 (99 entries)
bios0: vendor Dell Inc. version "2.8.0" date 06/26/2019
bios0: Dell Inc. PowerEdge R620
acpi0 at bios0: ACPI 3.0
acpi0: sleep states S0 S4 S5
acpi0: tables DSDT FACP APIC SPCR HPET DMAR MCFG WD__ SLIC ERST HEST
BERT EINJ TCPA PC__ SRAT SSDT
acpi0: wakeup devices PCI0(S5)
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 4 (boot processor)
cpu0: Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.50GHz, 3600.53 MHz, 06-3e-04
cpu0:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN

cpu0: 256KB 64b/line 8-way L2 cache
cpu0: smt 0, core 2, package 0
mtrr: Pentium Pro MTRR support, 10 var ranges, 88 fixed ranges
cpu0: apic clock running at 100MHz
cpu0: mwait min=64, max=64, C-substates=0.2.1.1, IBE
cpu1 at mainbus0: apid 6 (application processor)
cpu1: Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.50GHz, 3600.01 MHz, 06-3e-04
cpu1:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN

cpu1: 256KB 64b/line 8-way L2 cache
cpu1: smt 0, core 3, package 0
cpu2 at mainbus0: apid 8 (application processor)
cpu2: Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.50GHz, 3600.01 MHz, 06-3e-04
cpu2:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,PERF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,IBRS,IBPB,STIBP,L1DF,SSBD,SENSOR,ARAT,XSAVEOPT,MELTDOWN

cpu2: 256KB 64b/line 8-way L2 cache
cpu2: smt 0, core 4, package 0
cpu3 at mainbus0: apid 16 (application pr

Re: Patch suggestion for sysupgrade

2019-11-15 Thread Raimo Niskanen
On Fri, Nov 15, 2019 at 07:00:05AM +0100, NilsOla Nilsson wrote:
> I have upgraded a machine where /home was NFS-mounted,
> like this:
> - check that the / partition has space for the files
>   that will populate /home/_sysupgrade
> - unmount /home
> - comment out the /home line in /etc/fstab
> - upgrade with sysupgrade
> - restore the line in /etc/fstab
> - mount -a
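
In script form, that procedure might look like the sketch below (paths and
the fstab edit are illustrative; sysupgrade reboots into the upgrade, so
the last two steps run after the machine comes back up):

  df -h /                         # check space for the sets first
  umount /home
  cp /etc/fstab /etc/fstab.orig
  sed -i '/\/home/s/^/#/' /etc/fstab
  sysupgrade
  # after the reboot:
  cp /etc/fstab.orig /etc/fstab
  mount -a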

That is another way to do it.

Though, for the last sysupgrade on amd64 6.5 -> 6.6 the _sysupgrade
directory used 443M, and depending on it being fine to put that on the /
partition feels a bit risky...  It _should_ work, since the / partition
is typically 1G and has 833M available, but it feels uncomfortable.

/ Raimo


> 
> All this could be done remotely.
> 
> Note that I can log in to a user where the home
> directory is not NFS-mounted, in our case
> /local_home/
> 
> On Thu, Nov 14, 2019 at 03:01:18PM +0100, Raimo Niskanen wrote:
> > The use case for this patch is that in our lab network we have NFS
> > automounted /home/* directories, so using /home/_sysupgrade
> > for sysupgrade does not work.
> > 
> > With this patch it is easy to modify /usr/sbin/sysupgrade and change
> > just the line SETSDIR=/home/_sysupgrade to point to some other local file
> > system outside hier(7), for example /opt/_sysupgrade
> > or /srv/_sysupgrade.
> > 
> > Even using /var/_sysupgrade or /usr/_sysupgrade should work.  As far as
> > I can tell the sysupgrade directory only has to be on a local file system,
> > and not get overwritten by the base system install.
> > 
> > The change for mkdir -p ${SETSDIR} is to make the script more defensive
> > about the result of mkdir, e.g. in case the umask is wrong, or if the
> > directory containing the sysupgrade directory has got the wrong group, etc.
> > 
> > 
> > A follow-up to this patch, should it be accepted, could be to add an option
> > -d SysupgradeDir, but I do not know if that would be considered too odd
> > and error prone a feature to merit an option.  Or?
> > 
> > The patch is on 6.6 stable.
> > 
> > Index: usr.sbin/sysupgrade/sysupgrade.sh
> > ===
> > RCS file: /cvs/src/usr.sbin/sysupgrade/sysupgrade.sh,v
> > retrieving revision 1.25
> > diff -u -u -r1.25 sysupgrade.sh
> > --- usr.sbin/sysupgrade/sysupgrade.sh   28 Sep 2019 17:30:07 -0000      1.25
> > +++ usr.sbin/sysupgrade/sysupgrade.sh   14 Nov 2019 13:27:34 -
> > @@ -119,6 +119,7 @@
> > URL=${MIRROR}/${NEXT_VERSION}/${ARCH}/
> >  fi
> >  
> > +[[ -e ${SETSDIR} ]] || mkdir -p ${SETSDIR}
> >  if [[ -e ${SETSDIR} ]]; then
> > eval $(stat -s ${SETSDIR})
> > [[ $st_uid -eq 0 ]] ||
> > @@ -127,8 +128,6 @@
> >  ug_err "${SETSDIR} needs to be owned by root:wheel"
> > [[ $st_mode -eq 040755 ]] || 
> > ug_err "${SETSDIR} is not a directory with permissions 0755"
> > -else
> > -   mkdir -p ${SETSDIR}
> >  fi
> >  
> >  cd ${SETSDIR}
> > @@ -185,7 +184,7 @@
> >  
> >  cat <<__EOT >/auto_upgrade.conf
> >  Location of sets = disk
> > -Pathname to the sets = /home/_sysupgrade/
> > +Pathname to the sets = ${SETSDIR}/
> >  Set name(s) = done
> >  Directory does not contain SHA256.sig. Continue without verification = yes
> >  __EOT
> > @@ -193,7 +192,7 @@
> >  if ! ${KEEP}; then
> > CLEAN=$(echo SHA256 ${SETS} | sed -e 's/ /,/g')
> > cat <<__EOT > /etc/rc.firsttime
> > -rm -f /home/_sysupgrade/{${CLEAN}}
> > +rm -f ${SETSDIR}/{${CLEAN}}
> >  __EOT
> >  fi
> > 
> > Best regards
> > --  
> > / Raimo Niskanen, Erlang/OTP, Ericsson AB
> 
> -- 
> Nils Ola Nilsson, email nils...@abc.se, tel +46-70-374 69 89



-- 

/ Raimo Niskanen, Erlang/OTP, Ericsson AB
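
A minimal sketch of what such a -d option could look like, assuming a
getopts loop (the option letter and folding it into SETSDIR are
suggestions, not a committed interface; the real script parses more
options than shown):

  SETSDIR=/home/_sysupgrade       # current default

  # usage() as defined elsewhere in the script
  while getopts d: arg; do
          case ${arg} in
          d)      SETSDIR=${OPTARG} ;;
          *)      usage ;;
          esac
  done
  shift $((OPTIND - 1))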



Re: Patch suggestion for sysupgrade

2019-11-15 Thread Raimo Niskanen
On Thu, Nov 14, 2019 at 04:59:23PM +, gil...@poolp.org wrote:
> A similar patch for this was sent to tech@ by Renaud Allard, you might want
> to go review the "sysupgrade: Allow to use another directory for data sets"
> thread and comment on it.

Thanks for the pointer!  I see in that thread that it is hard
to find a safe solution to this problem...

/ Raimo


> 
> 
> November 14, 2019 3:01 PM, "Raimo Niskanen"  
> wrote:
> 
> > [snip]

-- 

/ Raimo Niskanen, Erlang/OTP, Ericsson AB



Re: Home NAS

2019-11-15 Thread Raf Czlonka
On Fri, Nov 15, 2019 at 08:54:54AM GMT, Andrew Luke Nesbit wrote:
> On 15/11/2019 10:11, gwes wrote:
> 
> > The backup(8) program can assist this by storing deltas so that
> > more frequent backups only contain deltas from the previous
> > less frequent backup.
> 
> I've not used backup(8) before, thanks for the suggestion.  I will have a
> look.
> 

Hi Andrew,

There is no backup(8) - gwes either meant generic "backup" software,
or dump(8) and restore(8) specifically.
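
If it is dump(8) and restore(8), a minimal sketch of a full dump and a
restore, with illustrative paths:

  # level-0 (full) dump of /home; -u records it in /etc/dumpdates
  dump -0au -f /backup/home.dump /home

  # rebuild the dumped filesystem into the current directory
  cd /mnt/newhome && restore -rf /backup/home.dump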

Regards,

Raf



Re: pvclock stability

2019-11-15 Thread Ian Gregory
I continued to investigate this and added some debugging output to the
pvclock driver to attempt to work out what was going on.

In my most recent test I rebooted the client VM at 08:10 yesterday.
Over the following 24h, there were 16 "clock step" events which caused
the time to lag real time by a total of 21.3 seconds. In all but 3 of
the steps, the change in the offset was 1.0 seconds almost exactly.
During the test the VM was loaded with its usual workload (running
net/zabbix) and ntpd was disabled.

I added a printf to the end of pvclock_get_timecount which outputs the
state of the variables within the function once for every 10 steps
of system_time.

Here is an example of the output. The prefixed date is from syslog and
is the incorrect system time. The actual time of the log entry (as
reported from a reliable NTP source) was 09:05:23.

Nov 15 09:05:02 starbug /bsd: pvclock:
tsc_timestamp=3627858654285868 rdtsc=3627858654563637 delta_1=277769
shift=-20 delta=0 mul_frac=342781 system_time=1573808723797914701
ctr=1573808723797914701

The ctr value is the return value of pvclock_get_timecount - the value
1573808723797914701 translates to Fri Nov 15 09:05:23.797
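
Assuming the standard pvclock scaling (the tsc delta shifted right by
-shift, then multiplied by the 32.32 fixed-point mul_frac), the logged
values can be checked with bc(1):

  $ bc <<'EOF'
  /* delta_1 = rdtsc - tsc_timestamp, right-shifted by 20 (shift = -20) */
  d = 277769 / 2^20
  d
  /* nanoseconds contributed: (d * mul_frac) >> 32 */
  d * 342781 / 2^32
  EOF
  0
  0

With the shifted delta truncated to 0, ctr comes out exactly equal to
system_time, which matches the log line above.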

I'm no expert in kernel timekeeping internals (far from it), but it
seems that the pvclock driver is returning correct timestamps from
pvclock_get_timecount and thus I conclude both the pvclock device in
vmm and the pvclock driver in the kernel are working as designed.

Can anyone advise if I've missed something? Happy to provide further
data if needed.

Thanks
Ian



On Fri, 8 Nov 2019 at 13:53, Ian Gregory  wrote:
>
> Hi
>
> Since the 6.6 release I've been experimenting with using pvclock as
> the selected timecounter on a virtual machine running under vmm. Both
> the host and guest are running 6.6-stable (the environment is provided
> by openbsd.amsterdam).
>
> With 6.5 and the tsc source, the clock would drift linearly by about 2
> seconds per minute. This was too large a drift for ntpd to compensate
> for and so I used a cron job to force-correct the clock at regular
> intervals.
>
> With 6.6 I have changed the timecounter source to pvclock. In
> frequency terms this has proven to be much more stable, with minimal
> drift. However, at irregular intervals the clock will step out of time
> by a small whole number of seconds. Over 24 hours following a reboot
> the clock now differs from real time (verified against multiple ntp
> sources) by just over 23 seconds, it having stepped 9 times during
> that time window.
>
> I ran the following command every 60s following a reboot of the guest
> to log the output:
>   echo -n `date`  && rdate -pv time.cloudflare.com | tail -1 | awk
> '{ print "   " $6 }'
> Note that the data points are not consistently 60s apart - I'm using
> 'sleep' to delay the loop.
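
In loop form, that sampling might look like this sketch (interval
illustrative):

  while :; do
          echo -n "$(date) "
          rdate -pv time.cloudflare.com | tail -1 | awk '{ print "   " $6 }'
          sleep 60
  done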
>
> Raw data and chart of the offset over the 24 hours is available in
> this Google sheet: http://bit.ly/34NTaUh
>
> Is this likely to point to a bug in the pvclock implementation or an
> environment/configuration issue?
>
> Thanks
> Ian
>
>
> dmesg (guest)
> =
>
> OpenBSD 6.6 (GENERIC) #0: Sat Oct 26 06:47:50 MDT 2019
> r...@syspatch-66-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC
> real mem = 2130698240 (2031MB)
> avail mem = 2053558272 (1958MB)
> mpath0 at root
> scsibus0 at mpath0: 256 targets
> mainbus0 at root
> bios0 at mainbus0: SMBIOS rev. 2.4 @ 0xf3f40 (10 entries)
> bios0: vendor SeaBIOS version "1.11.0p2-OpenBSD-vmm" date 01/01/2011
> bios0: OpenBSD VMM
> acpi at bios0 not configured
> cpu0 at mainbus0: (uniprocessor)
> cpu0: Intel(R) Xeon(R) CPU X5675 @ 3.07GHz, 3062.26 MHz, 06-2c-02
> cpu0: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,CX8,SEP,PGE,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,PCLMUL,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,AES,HV,NXE,PAGE1GB,LONG,LAHF,ITSC,MELTDOWN
> cpu0: 256KB 64b/line 8-way L2 cache
> cpu0: smt 0, core 0, package 0
> cpu0: using IvyBridge MDS workaround
> pvbus0 at mainbus0: OpenBSD
> pvclock0 at pvbus0
> pci0 at mainbus0 bus 0
> pchb0 at pci0 dev 0 function 0 "OpenBSD VMM Host" rev 0x00
> virtio0 at pci0 dev 1 function 0 "Qumranet Virtio RNG" rev 0x00
> viornd0 at virtio0
> virtio0: irq 3
> virtio1 at pci0 dev 2 function 0 "Qumranet Virtio Network" rev 0x00
> vio0 at virtio1: address fe:e1:bb:d4:c4:03
> virtio1: irq 5
> virtio2 at pci0 dev 3 function 0 "Qumranet Virtio Storage" rev 0x00
> vioblk0 at virtio2
> scsibus1 at vioblk0: 2 targets
> sd0 at scsibus1 targ 0 lun 0: 
> sd0: 51200MB, 512 bytes/sector, 104857600 sectors
> virtio2: irq 6
> virtio3 at pci0 dev 4 function 0 "OpenBSD VMM Control" rev 0x00
> vmmci0 at virtio3
> virtio3: irq 7
> isa0 at mainbus0
> isadma0 at isa0
> com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifo
> com0: console
> vscsi0 at root
> scsibus2 at vscsi0: 256 targets
> softraid0 at root
> scsibus3 at softraid0: 256 targets
> root on sd0a (886e07e83005c94c.a) swap on sd0b dump on sd0b
>
> dmesg (host)
> 
>
> OpenBSD 6.6 (GENERIC.MP) #0: Sat Oct 26 08:08:

ifstated.conf advice needed

2019-11-15 Thread Rachel Roch
Hi,

I'm looking for a bit of help on how to write a sensible and safe (i.e.
avoiding race conditions) ifstated.conf.

I have a scenario where I have a LACP trunk and on top of the trunk, I have 
four carp interfaces.
So: trunk1 => carp0–3

Now, obviously I know I can monitor up/down on trunk1.

But what if I wanted to monitor the carp sub-interfaces too? E.g. if I wanted
to have the flexibility to demote one (or more) of the carp interfaces for the
purposes of maintenance or traffic engineering.

Is there a sensible way to get around the problem of having to write ifstated
rules for every single possible combination/permutation?
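
One hedged aside for the maintenance case: carp demotion can be applied per
interface group rather than per ifstated rule, so a single counter can
cover whatever set of interfaces is placed in a group (group name maint is
illustrative):

  # put the interfaces under maintenance into their own group
  ifconfig carp0 group maint
  ifconfig carp2 group maint

  # raise the demotion counter for that group only, so the peers
  # take over mastership for those carp instances
  ifconfig -g maint carpdemote 50

  # lower it again when maintenance is done
  ifconfig -g maint -carpdemote 50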

Thanks !

Rachel



Re: Home NAS

2019-11-15 Thread Andrew Luke Nesbit

On 15/11/2019 10:11, gwes wrote:

On 11/14/19 3:52 PM, Andrew Luke Nesbit wrote:

On 15/11/2019 07:44, Raymond, David wrote:

I hadn't heard about file corruption on OpenBSD.  It would be good to
get to the bottom of this if it occurred.


I was surprised when I read mention of it too, without any real claim 
or detailed analysis to back it up.  This is why I added my disclaimer 
about "correcting me if I'm wrong because I don't want to spread 
incorrect information".


[...]


There was a thread a couple of months ago started by someone either pretty
ignorant or a troll.
The consensus answer: no more than any other OS, less than many.


Thank you gwes, for the clarification.

The thread is vaguely coming back to my memory now.  I was dipping in
and out of it as I didn't have time to study the details.



One size definitely doesn't fit all.


That is pretty obvious.  I never mentioned a blanket rule, and I assume 
that OP is able to tailor any suggestion to their needs.



Backup strategies depend on the user's criteria, the cost of design, and
the cost of doing the backups - administration & storage, etc.


Sure.  I don't yet have a personal archival storage system for long term
storage that satisfies my specifications, because I don't have the
infrastructure or medium to store it.  I plan on investing in LTO
tape but I cannot afford the initial cost yet.



In an ideal world every version of every file lasts forever.
Given real limitations, versioning filesystems can't and don't.


Indeed.  But having archival snapshots at various points in time 
increases the _probability_ that the version of the file that you need 
will be present if+when you need it.



If your data are critical, invest in a dozen or more portable
USB drives. Cycle them off-site. Reread them (not too often)
to check for decay.


Yes, this is part of the backup system that I'm designing for my NAS, 
but it's not so much for archiving.



If you have much money available, get a
modern tape system.


Yes, as I mentioned above LTO would be great if+when I can afford it.


The backup system used over 50 years ago still suitable for many
circumstances looks something like this:
   daily backups held for 1 month
   weekly backups held for 6-12 months
   monthly backups held indefinitely offsite.
Hold times vary according to circumstances.


I think something like this is a good plan.


The backup(8) program can assist this by storing deltas so that
more frequent backups only contain deltas from the previous
less frequent backup.
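
Assuming the deltas meant here are dump(8) levels (per the correction
elsewhere in the thread that there is no backup(8)), a sketch of such a
schedule with illustrative levels and paths:

  # monthly: level 0 = a full dump, cycled offsite
  dump -0au -f /backup/home.0 /home
  # weekly: level 1 = everything changed since the last level 0
  dump -1au -f /backup/home.1 /home
  # daily: level 2 = everything changed since the last level 1
  dump -2au -f /backup/home.2 /home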


I've not used backup(8) before, thanks for the suggestion.  I will have 
a look.



The compromise between backup storage requirements and granularity
of recovery points can be mitigated. The way to do it depends on
the type and structure of the data:

Some data are really transient and can be left out.

Source code control systems (or whatever the name is this week)
are a good way for intermittent backups to capture a good history
of whatever data is around if it's text.


I don't understand how SCMs are supposed to help with this...


DBs often have their own built-in backup mechanisms.


This underscores the difference between file system-level backups, 
block-level backups, and (for DBs) application-level backups.  In 
particular I'm trying to figure out a generally applicable way of taking 
a _consistent_ backup of a disk without resorting to single user mode.


I think COW file systems might help in this regard but I don't think 
anything like this exists in OpenBSD.



Binary files can be regenerated if the source *and* environment
are backed up.


Storing the environment is a tricky problem that I haven't found an 
entirely satisfactory solution for, yet.



been there, mounted the wrong tape... what write protect ring?


Oh yeah... me too.  My team inherited a hosted service and upon
auditing we discovered its backup system was stranger than fiction.  But
it was so bizarre that we couldn't determine whether it was _supposed_
to be that way or whether our reasoning was flawed.  A classic type of problem.


Andrew
--
OpenPGP key: EB28 0338 28B7 19DA DAB0  B193 D21D 996E 883B E5B9