NetBSD and IP host models

2024-04-30 Thread Sad Clouds
Hi, I have some niche use cases that need support for "strong end
system model" which can perform symmetric IP routing.

As far as I can see, there are some obstacles with NetBSD:

1. NetBSD uses "weak end system model", so a multihomed host is prone
to ARP flux when multiple IP addresses are configured on the same
subnet and on the same host.

2. Sending IP packets from eth0 to eth1 on the same host results in a
fast-forward path, where packets are forwarded internally within the
kernel rather than leaving eth0 and traversing the network, which is
not what I need.

On Solaris 11.3 I can easily switch IP host models with this command:

# ipadm set-prop -p hostmodel=strong ipv4

This enables symmetric routing and forces IP packets to traverse the
network, even when sent between network interfaces on the same host.

I'm not sure I can achieve anything similar on NetBSD and don't see
any relevant sysctl settings for this. Any suggestions?
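For what it's worth, I don't know of a NetBSD equivalent of the Solaris
hostmodel property; the closest starting point I can suggest is simply
enumerating the IPv4 sysctls. A purely exploratory sketch follows
(net.inet.ip.checkinterface, where your kernel has it, only validates
the receive side and is not a full strong end-system model):

```shell
# Exploratory only: list IPv4 sysctls that look host-model related.
# net.inet.ip.checkinterface (where available) drops unicast packets
# that arrive on an interface not matching the destination address,
# which is only the receive-side half of the strong end-system model.
sysctl -a 2>/dev/null | grep -Ei 'net\.inet\.ip\.(checkinterface|forwarding)' || true
echo "done"   # marker so the snippet succeeds even with no matches
```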

Thanks.



ctwm window focus bugs

2022-04-03 Thread Sad Clouds
Hi, are there plans to update the ctwm base binary in NetBSD-9? It has
ctwm version 3.x, which seems to have a broken SaveWorkspaceFocus
setting. For example, switching between multiple workspaces with
multiple windows does not restore focus to the last focused window. The
pkgsrc ctwm version 4.x does not seem to have this issue and works
correctly.

Not a big deal as pkgsrc can be used instead, but would be nice to have
this fixed in base OS.

Thanks.


Re: procfs difference between NetBSD and Linux

2021-06-05 Thread Sad Clouds
On Thu, 3 Jun 2021 10:02:20 +1200
Mark Davies  wrote:

> 
> 
> On 2/06/21 5:49 am, Robert Elz wrote:
> > And while doing that, ask them what they're possibly trying to achieve
> > with the O_CREAT flag - if /proc/$$/fd/N doesn't exist, how is creating
> > (what would be a normal file, if procfs allowed it) going to possibly
> > do anything useful?   It is hard to believe that they're intending that
> > creating a file there will magically cause the fd to open (open to
> > what underlying object?)   If they know the fd is open (which they
> > seem to do here) then they know that /proc/$$/fd/N already exists, in
> > which case O_CREAT is useless (in the best of cases).
> 
> I think that is because after they set the path to /proc/self/fd/N they 
> pass it back to their generic open routines to do the actual open.
> see reopen_from_procfd() in source3/smbd/open.c
> 
> cheers
> mark

And why rely on procfs to open file descriptors? Shell scripts do it,
but a C program should know better.


Re: firefox on NetBSD: files again

2021-05-31 Thread Sad Clouds
On Mon, 31 May 2021 09:33:18 +0200
tlaro...@polynum.com wrote:

> Once more it is a problem of number of files a proc can open. What was
> enough for a previous version is not enough anymore because firefox
> renders pages with literally thousands of pieces/files.

Which version of Firefox is it? Do you have a link to a web site where
Firefox will open thousands of files, as you say, when it renders its
pages?


Re: Status of syspkg or similar

2021-05-22 Thread Sad Clouds
On Sat, 22 May 2021 17:43:31 +0900
Yuuki Enomoto  wrote:

> Hello,
> 
> > From what I can tell the feature is somewhat unfinished/abandoned?
> 
> Yes. "build.sh syspkgs" can create packages, but installing these may
> remove existing files/directories. I think we should fix it completely
> and/or make sysinst support this for testing.
> 
> Sorry, I haven't been able to find the time to work on this for a
> while.

Thanks for the info. From what I understand, there are two separate
packaging systems - traditional NetBSD sets, and syspkgs. The two
systems cannot really coexist, so if/when a more granular packaging
system is implemented, the sets should be deprecated. The
idea is that build.sh would stop generating sets and only generate new
packages. Whether to use pkgsrc pkg_* tools is a different question, I
personally feel that a much simpler set of tools specifically designed
for base system package management would be a better approach.


Status of syspkg or similar

2021-05-22 Thread Sad Clouds
Hello, what is the status/roadmap for NetBSD syspkg? From what I can
tell the feature is somewhat unfinished/abandoned? I wonder if there is
an easy way to only install those base binaries that I need. For
example excluding subsystems like Postfix, Httpd, NFS Client/Server,
etc, without having to manually remove their files and any
dependencies.


Re: Android client SYN packets dropped

2021-05-16 Thread Sad Clouds
On Sun, 16 May 2021 09:23:42 +0300
Arto Huusko  wrote:

> Johnny Billquist kirjoitti 15.5.2021 klo 19.26:
> > On 2021-05-15 15:33, Arto Huusko wrote:
> 
> >> My Android 10 phone is unable to connect to any TCP services hosted
> >> on a NetBSD -current 9.99.73 server. Connections to the services
> >> from other devices on the same LAN work.
> 
> > Is your phone getting into the ARP cache? If not, then the packets
> > would indeed be dropped like that.
> 
> Thank you! After some system update the phone had turned on randomized
> MAC address on the local wifi network.
> arp -a showed the arp entry in incomplete state.
> Switching randomized MAC off fixed the problem.
> 
> Arto

Not sure why this would be a problem. If the device is using a new
randomized MAC address, then it should be added to the ARP cache with a
new IP address, as if a new device joined the network.


Re: Issues building NetBSD with GCC 10

2021-04-29 Thread Sad Clouds
On Thu, 29 Apr 2021 09:41:38 +0200
Martin Husemann  wrote:

> On Thu, Apr 29, 2021 at 08:32:09AM +0100, Sad Clouds wrote:
> > I tried building NetBSD on Linux host with GCC 10 as the default
> > system compiler and ran into various issues:
> > 
> > - Tried building netbsd-9 and that fails to build bmake tool due to
> > duplicate symbols or something. This seems to have been fixed in
> > netbsd-current, so would this be backported to other branches?
> 
> I have changed my local build scripts to do
> 
> export HOST_CFLAGS="-O -fcommon"
> 
> when building older branches. The pax on the older branches needs a
> fix backported to build w/o that. You need to clean the tools pax
> objdir before retrying.
> 
> Martin

Thanks for the hint, I'll try the HOST_CFLAGS as you suggested.


Issues building NetBSD with GCC 10

2021-04-29 Thread Sad Clouds
I tried building NetBSD on Linux host with GCC 10 as the default system
compiler and ran into various issues:

- Tried building netbsd-9 and that fails to build bmake tool due to
duplicate symbols or something. This seems to have been fixed in
netbsd-current, so would this be backported to other branches?

- Tried building netbsd-current but it fails to configure mpfr due to
missing libgmp. Is this a known issue?

configure: error: libgmp not found or uses a different ABI (including static vs 
shared).
Please read the INSTALL file -- see "In case of problem".

*** Failed target: .configure_done
*** Failed commands:
@mkdir build 2>/dev/null || true
@(cd build && ${CONFIGURE_ENV} ${HOST_SH} ${GNUHOSTDIST}/configure 
${CONFIGURE_ARGS})
@echo ${BUILD_PLATFORM} > $@
*** [.configure_done] Error code 1

nbmake[7]: stopped in /netbsd_build/netbsd-current-src/tools/mpfr
1 error


Re: OS-level virtualization

2021-04-06 Thread Sad Clouds
On Tue, 6 Apr 2021 18:11:52 - (UTC)
chris...@astron.com (Christos Zoulas) wrote:

> Well, kauth does authorization checking, we are talking here about
> providing separate namespaces for different processes (networking,
> filesystem etc.)

Which makes sense for multi-tenancy use cases, i.e. you have
different users/environments with conflicting applications/libraries.
If you are the only user and have root access, then this kind of
partitioning is not really needed. There are all kinds of wacky
solutions built around zones/containers, and they simply encourage
bloat and complexity.


Re: zvol performance expecations? zvol swap?

2021-02-18 Thread Sad Clouds
On Wed, 17 Feb 2021 19:04:34 -0500
Greg Troxel  wrote:

> 
> Sad Clouds  writes:
> 
> > On Wed, 17 Feb 2021 13:15:39 -0500
> > Greg Troxel  wrote:
> >
> >> 
> >> Suppose I create a 16G zvol on a pool that is a disklabel
> >> partition on an SSD.   I would expect read/write performance that
> >> is near the native SSD read/write speed.
> >
> > Why would you expect that? In other words, you're expecting that a
> > complex file system like ZFS would have near zero overheads, which
> > is not possible. When you take into considerations things like
> > checksums, compression, encryption, etc, then the overheads could
> > be quite significant.
> 
> I have not enabled compression or encryption, and I forgot about
> checksums.
> 
> However, the use case for zvol is having zfs being a logical volume
> manager, and it would seem reasonably feasible to get high
> efficiencies via allocation of large numbers of blocks at once.   So
> 10-20% slower wouldn't surprise me.
> 
> Have you used zvols?  What was your experience?

Not used it on NetBSD. Used a bit on Solaris and Linux at home, but
never really cared about throughput performance. My personal feeling
about ZFS - it's designed to provide good data integrity, but a bit
complicated and has too many features/options. I prefer simpler designs.

With regard to zvols, the following is normally recommended for Linux
swap:

zfs create -V 8G -b $(getconf PAGESIZE) \
-o logbias=throughput -o sync=always -o checksum=off \
-o compression=off -o dedup=off -o primarycache=metadata \
-o com.sun:auto-snapshot=false <pool>/swap

Also, several tuning options for "zfs create":
atime=off  - Disable atime updates.
recordsize=16K - For small files and random I/O.
recordsize=1M  - For large files and sequential I/O.
xattr=sa   - Linux only, store xattr directly in inodes.


Re: zvol performance expecations? zvol swap?

2021-02-17 Thread Sad Clouds
On Wed, 17 Feb 2021 13:15:39 -0500
Greg Troxel  wrote:

> 
> Suppose I create a 16G zvol on a pool that is a disklabel partition on
> an SSD.   I would expect read/write performance that is near the
> native SSD read/write speed.

Why would you expect that? In other words, you're expecting that a
complex file system like ZFS would have near zero overheads, which is
not possible. When you take into consideration things like checksums,
compression, encryption, etc, then the overheads could be quite
significant.


Re: Firefox alternatives with JS : luakit?

2020-11-23 Thread Sad Clouds
On Mon, 23 Nov 2020 06:24:17 +0100 (CET)
"Thomas Mueller"  wrote:

> from Sad Clouds:
> 
> > On Sun, 22 Nov 2020 16:29:27 +0530
> > Mayuresh  wrote:
> 
> > > In the interim, would appreciate any feedback on luakit or other
> > > alternatives (with JS support).
> 
> > x86 running Linux and Opera web browser. I gave up on Firefox years
> > ago. Unfortunately Opera don't provide binaries for NetBSD, but you
> > could try Linux emulation.
> 
> I use Otter browser, which is in pkgsrc, FreeBSD ports, and
> haikuports.
> 
> One good feature is being able to fake the user-agent for sites like
> Chase online banking that refuse to allow access if the user-agent is
> not to their software's liking
> 
> Tom
> 

I had stability issues with Otter browser, random crashes, etc. Same
thing happened with Firefox on sparc64. I just stick to x86 for desktop
applications, while sparc64 and arm are more for tinkering and server
applications.

Web technology used to be small and lean many years ago, but now there
is too much complexity and bloat. Because of this, it is becoming a
monoculture of a handful of browser engines - Blink and Gecko. Don't
even try to innovate; you'll be buried by the cost and complexity.
That is why everybody is switching to Blink, whether they like it or
not.


Re: Firefox alternatives with JS : luakit?

2020-11-22 Thread Sad Clouds
On Sun, 22 Nov 2020 16:29:27 +0530
Mayuresh  wrote:

> In the interim, would appreciate any feedback on luakit or other
> alternatives (with JS support).

x86 running Linux and Opera web browser. I gave up on Firefox years
ago. Unfortunately Opera don't provide binaries for NetBSD, but you
could try Linux emulation.


Re: sponsor NetBSD for 2020 https://github.com/sponsors/NetBSD

2020-11-10 Thread Sad Clouds
On Tue, 10 Nov 2020 19:48:47 +
m...@netbsd.org wrote:

> Also, I strongly recommend offering a bounty for smaller units of
> work, even when you want a bigger one. It avoids problems with people
> dividing the work.
> 
> If you would like this to be curated "officially" we can create a wiki
> page to coordinate the most up to date version of this.

I think it would need to be a group effort. For example, in order to
complete the port of NetBSD to sun4v in the next 6 months:

NetBSD foundation would estimate the time and cost involved. They could
reach out to various Net/Open/Free BSD developers and negotiate
reasonable rates. Looks like some work on sun4v has been done, so may
need some help to resolve remaining issues and port a few drivers. I
really don't know much about it.

Take the estimates to NetBSD users and do a survey of how many people
would contribute and how much. Based on the responses, establish
viability of this project.

If the project is to go ahead, reach out to those users again and ask
them to pay previously pledged contributions into a special fund
allocated for this project. NetBSD foundation would need to take
responsibility for managing the project and supporting the development
process.

I would be happy to contribute a reasonable amount, but would not be
able to cover all the costs on my own.


Re: sponsor NetBSD for 2020 https://github.com/sponsors/NetBSD

2020-11-10 Thread Sad Clouds
On Tue, 10 Nov 2020 18:25:09 +
m...@netbsd.org wrote:

> As for sun4v, I believe Palle is doing work on this sometimes.

Yes, I've seen from the posts on sparc64 mailing list that some people
are working on it, but often developers have other commitments and bills
to pay, so the progress can be very slow. Paying those developers to
work part or full time on various features would 1) help them pay the
bills, and 2) move features forward quicker.

I suspect NetBSD users would be more willing to sponsor the features
that they can directly benefit from. In fact, this mode of software
development (pay per feature) should be encouraged. I appreciate that
NetBSD is a non-profit organization, but relying on other people to
donate their development time for free is not always the best approach,
i.e. feature development can drag on indefinitely.


Re: sponsor NetBSD for 2020 https://github.com/sponsors/NetBSD

2020-11-10 Thread Sad Clouds
On Tue, 10 Nov 2020 08:32:29 -0500
matthew sporleder  wrote:

> Indeed -- casting a wide net is in our interest.  I hope you are able
> to use one of our many potential donation offerings -- paypal, stripe,
> amazon smile, github sponsorship.. any I am missing?

I think what may be missing is sponsorship for specific features.
NetBSD foundation normally decide where to spend the resources. It may
be used for worthwhile projects, but for different people some features
are more tangible than others.

For example, I'd be interested in sponsoring NetBSD on sun4v
architecture with support for LDOMs. OpenBSD have done a lot of work in
this area and there may be a lot of potential for porting their code.
Some of their developers could be interested in doing it if NetBSD
agreed a full-time paid project.

1. Identify the scope and cost.
2. If cost is feasible, setup project sponsorship page.
3. If enough money is raised over a set period of time, project goes
ahead, otherwise money is returned to the sponsors.
4. NetBSD oversee the delivery and quality of the project.

This approach is more targeted and allows the sponsors to have a say in
how their money is spent. If nobody is interested in a project and not
enough money is raised, then - c'est la vie...


Re: sponsor NetBSD for 2020 https://github.com/sponsors/NetBSD

2020-11-10 Thread Sad Clouds
On Tue, 10 Nov 2020 12:31:26 +0100
Matthias Petermann  wrote:

> Hallo Matthew,
> 
> Am 10.11.2020 um 05:35 schrieb matthew sporleder:
> > Hey -- the end of the year is coming up fast.  Wouldn't you feel
> > better about yourself if you added a github sponsorship to balance
> > out your incredible year? :)
> How does this type of donation compare to a Paypal Monthly
> Subscription? Is it just a different way of transport, or are there
> advantages / disadvantages to Paypal?
> 
> Kind regards
> Matthias

It looks to me like GitHub sponsorship is geared towards small
developers who don't have their own project web page with payment
submission links, etc. The irritating thing about GitHub is that they
don't allow you to submit bug reports or donations unless you set up an
account on their platform. The last thing I want to do is create
accounts on various competing platforms - GitHub, GitLab,
SourceForge, Bitbucket, GNU Savannah, ... and the list goes on. Life is
too short for that nonsense. Luckily NetBSD accept bug reports and
donations on their official web site, and you don't have to sign up for
anything.


Re: NetBSD 9.1 upgrade and file system crash - reboot fails

2020-11-02 Thread Sad Clouds
On Tue, 3 Nov 2020 11:48:41 +1030
Brett Lymn  wrote:

> On Mon, Nov 02, 2020 at 12:34:27PM +0100, BERTRAND Jol wrote:
> > 
> > I refuse Seagate, Samsung and other WD in my servers or
> > workstations, and I refuse SMR technology also.
> > 
> 
> I have heard this sort of thing from many people over the years, they
> get bitten by a few disk failures and form a bias based on that.  I
> used to do this too but found that it seems to change from model to
> model. Here are some interesting stats from a large scale HDD
> deployment:

Which probably also depends on the actual disk usage - sequential vs
random I/O, ambient temperature, etc. I don't trust any of those
statistics, because I don't know if those disks were used under
exactly the same conditions. There is also a big difference between
consumer and enterprise disks, you get what you pay for.


Re: Filesystem checks with noauto

2020-10-27 Thread Sad Clouds
On Tue, 27 Oct 2020 20:36:59 +0300
Dima Veselov  wrote:

> Greetings,
> 
> I have a Gluster volume on NetBSD box. It has long
> description in fstab
> 
> /usr/pkg/sbin/mount_glusterfs#gfs01:/gva  /gvapuffs 
> noauto,direct-io-mode=false,rw,backup-volfile-servers=gfs02:gfs03:gfs04:gfs05:gfs06
>  
> 1 2
> 
> I assume noauto should disable filesystem checks, but the
> server can not boot. The first actual problem is checking
> filesystem that may not be available, and second that it
> tries to mount glusterfs before /usr is available and
> /usr/pkg/sbin/mount_glusterfs exists.
> 
> What may I do to workaround this? For now I have to mount
> it manually. And will it be fixed in future release?
> 
> -- 
> Sincerely yours,
> Dima Veselov
> Physics R Establishment of Saint-Petersburg University

Well, noauto simply means don't automatically mount during boot. As for
the second issue, have you tried critical_filesystems_local="/usr"
in /etc/rc.conf? I'm not sure how fstab determines mount priority;
is it random, or simply the same order as the lines are written in?
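As a concrete illustration, the rc.conf side might look like this
(sketch only; the default value of critical_filesystems_local varies by
release, so check /etc/defaults/rc.conf first):

```
# /etc/rc.conf (sketch): have /usr checked and mounted with the critical
# local filesystems, so /usr/pkg/sbin/mount_glusterfs exists before any
# later (manual or scripted) glusterfs mount is attempted
critical_filesystems_local="OPTIONAL:/var /usr"
```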


Re: ctwm craches in NetBSD 9.1 if move cursor from up to down to apps pictogram on intel driver

2020-10-24 Thread Sad Clouds
On Sat, 24 Oct 2020 06:01:36 +0500
Dmitrii Postolov  wrote:

> In NetBSD 9.1 with ctwm: if I move the mouse cursor to the
> Applications pictogram from left to right then all is OK, but if I
> move the cursor onto it from top to bottom, then ctwm crashes.

Also, a very long time ago I tried ctwm on NetBSD sparc64 and it was
dying due to misaligned memory access, or similar. I'm not sure if this
has been fixed over the years; I haven't yet tried NetBSD-9.1.


Re: ntpdate(8) and unbound(8) dependencies during boot

2020-10-17 Thread Sad Clouds
On Sat, 17 Oct 2020 08:23:57 -0400
Greg Troxel  wrote:

> By default we don't enable NTP, but the default config has the pool.
> I find contacting random pool servers not a real problem, but
> connecting to anything connected with a big company that might think
> it ok to store data of what happened and use it later is potentially
> concerning.
> 
> I also realize this is turtles all the way down and the next question
> is leaking information about DNS.  But I don't think we should be
> configuring talking to Google anything or even Cloudflare.

Sure, it is down to individual users how they configure their systems.
Although I remember the NetBSD sysinst installer already offering to
use Google public DNS servers when setting up the network config.

People have valid concerns about privacy and tracking, but then Google
and CloudFlare are offering something for free (Email, DNS, NTP, etc)
which takes quite a lot of time and resources to manage on a daily
basis. It is reasonable to assume that they may be able to track you,
but that is the price you pay for using something for free. This is
pretty normal these days. There are various government laws that may
require a certain amount of logging/tracking, so there is no absolute
privacy.

If you are extremely paranoid, then a tinfoil hat and off-the-grid
living are the best ways to stay incognito.


Re: ntpdate(8) and unbound(8) dependencies during boot

2020-10-17 Thread Sad Clouds
On Sat, 17 Oct 2020 10:33:28 +0200
Martin Husemann  wrote:

> On Sat, Oct 17, 2020 at 08:41:05AM +0100, Sad Clouds wrote:
> > OK but you still need to connect to some server, be it NTP or HTTPS
> > in order to get the initial time. If you can't rely on DNS (and you
> > don't want to dynamically modify DNS server/resolver config to
> > ignore clock skew), then you still have to hard code IP address
> > somewhere. This was one of the objections raised by some people on
> > this list, as they didn't want to use IP address for some reason.
> 
> For things w/o RTC clock (that are unlikely to travel from airport
> wlan to next airport wlan) I usually do not want them to use any
> *external* IPs at all (while for me hard coded or dhcp provided local
> IPs work fine).
> 
> I also do not want my ISP, Cloudflare, Google, or some hacker having
> access at either of them to be able to tell when "some thing" in my
> local network boots.
> 
> Martin

I'm not sure I follow you. You don't want your NTP traffic to go outside
your local network, so I'm assuming you run your own local NTP servers
that synchronize with some trusted server on the Internet?

I'm not an expert on NTP, but what sort of information do you think it
could leak that could compromise your system security? There are ways
for hackers to abuse NTP protocol, but that is where you should be using
NTS extensions. 


Re: ntpdate(8) and unbound(8) dependencies during boot

2020-10-17 Thread Sad Clouds
On Fri, 16 Oct 2020 11:46:31 -0700
Jordan Geoghegan  wrote:

> Not exactly, there are no NTP servers running over HTTP, it's a
> similar concept to the tlsdate util [1].

OK but you still need to connect to some server, be it NTP or HTTPS in
order to get the initial time. If you can't rely on DNS (and you don't
want to dynamically modify DNS server/resolver config to ignore clock
skew), then you still have to hard code IP address somewhere. This was
one of the objections raised by some people on this list, as they
didn't want to use IP address for some reason.

In this case, why bother with HTTPS when you could use an IP address for
real NTP server? NTP security is currently being addressed with NTS
(Network Time Security) protocol extensions:

https://blog.cloudflare.com/secure-time/
https://blog.apnic.net/2019/11/08/network-time-security-new-ntp-authentication-mechanism/

NetBSD (and others) need an NTS-capable client as part of the base OS and
then people like CloudFlare need to provide NTS capable time servers
(which they may already do) via stable IP addresses that never change,
similar to their DNS servers.

NTS is still at the draft stage, but this is probably the best
purpose-built solution for authenticating (and not necessarily
encrypting) NTP traffic.


Re: ntpdate(8) and unbound(8) dependencies during boot

2020-10-15 Thread Sad Clouds
On Wed, 14 Oct 2020 16:28:22 -0700
Jordan Geoghegan  wrote:

> 1) Have ntp daemon check various trusted http/https servers at boot
> to sanity check our clock and NTP data (no DNS needed, fall back to
> HTTP only if clock is too broken to negotiate TLS)
> 
> 2) Enjoy not having everything break on boot due to unfortunate lack
> of RTC
> 
> Regards,
> Jordan
> 
> 
> [1] https://man.openbsd.org/ntpd
> [2] https://marc.info/?l=openbsd-tech&m=142363400330522&w=2

Hi, you say working DNS is not needed, so are you saying that OpenBSD
default ntpd config comes with a set of static IP addresses that point
to NTP servers running via https protocol?


Re: ntpdate(8) and unbound(8) dependencies during boot

2020-10-12 Thread Sad Clouds
On Sun, 11 Oct 2020 15:46:02 -0700
John Nemeth  wrote:

> On Oct 11, 16:47, Sad Clouds wrote:
> } On Sun, 11 Oct 2020 09:40:36 -0400 Greg Troxel 
> wrote: } 
> } > So, this is a request to explain how a 'default install' has this
> } > problem, or to clarify the problem statement.
> } 
> } Well NetBSD-9 comes with "unbound" which is supposed to replace
> "bind"
> 
>  unbound is a caching resolver, it does not replace BIND.
> 
> } as a recursive/caching name server. If you care about security, then
> 
>  However, NetBSD-9 also comes with nsd which is a full fledged
> name server.  As for myself, I have complex BIND configs that take
> advantage of a number of advanced features, so I have no intention
> of switching.

I'm kinda curious, when specifically talking about non-authoritative,
recursive, caching name server/resolver, what advantages does bind have,
over unbound? The nsd and unbound were developed by the same people,
whether you use nsd or bind as the authoritative name server is really
up to you and I'm in no way suggesting to replace one or the other.
What many people are doing are replacing bind with unbound for
recursive DNS queries, because many stub resolvers don't support
advanced features like DNSSEC and DoT. 
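To make that concrete, a recursive unbound doing validation and
DNS-over-TLS only needs a few lines; this is a hedged sketch (the
upstream address and the cert-bundle path are illustrative and vary by
system):

```
# unbound.conf sketch: DNSSEC validation plus DNS-over-TLS forwarding
server:
    auto-trust-anchor-file: "/var/unbound/root.key"
    tls-cert-bundle: "/etc/openssl/certs/ca-certificates.crt"

forward-zone:
    name: "."
    forward-tls-upstream: yes
    forward-addr: 9.9.9.9@853#dns.quad9.net
```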


Re: ntpdate(8) and unbound(8) dependencies during boot

2020-10-11 Thread Sad Clouds
On Sun, 11 Oct 2020 09:40:36 -0400
Greg Troxel  wrote:

> So, this is a request to explain how a 'default install' has this
> problem, or to clarify the problem statement.

Well NetBSD-9 comes with "unbound" which is supposed to replace "bind"
as a recursive/caching name server. If you care about security, then
you will always use DNSSEC and DoT, which (in my opinion) should be
configured by default. Think of it as http vs https and how most people
are now using https by default. Whether the NetBSD default install
configures those features is a completely different matter.

There is a known issue (which is not exclusive to NetBSD, nor to
unbound) that revolves around a circular dependency with ntpdate/ntpd
and DNSSEC. There are several ways to work around this issue. The fact
that NetBSD does not enable DNSSEC by default, should not preclude it
from implementing or documenting a work around.

The default install is relying on "XXX.netbsd.pool.ntp.org" hostnames
in /etc/ntp.conf for both ntpdate and ntpd. This fails to work
correctly when two conditions occur at the same time:
a) DNSSEC is used and
b) System time is incorrect

as hostnames cannot be resolved, due to DNSSEC signature validation
failures, I think.

This failure is not very obvious and only noticeable when system time
is wrong by some specific value, which depends on the configuration of
the name server (could be minutes or hours or days).

Ideally ntpdate/ntpd should have a backup list of servers given as IP
addresses rather than hostnames, so they don't require functioning DNS.
If this can be automated via rc scripts, then it's one less thing for
NetBSD users to remember. It could be as simple as adding a few stable
IP addresses to /etc/ntp.conf and then marking the hostnames as "prefer".
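Such an /etc/ntp.conf might look like this (sketch only; 192.0.2.123 is
an RFC 5737 documentation placeholder, not a real time server):

```
# Prefer the pool hostnames when DNS works; keep an IP-only fallback
# so initial sync still happens when DNSSEC validation blocks DNS.
server 0.netbsd.pool.ntp.org prefer
server 1.netbsd.pool.ntp.org prefer
server 192.0.2.123
```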



Re: ntpdate(8) and unbound(8) dependencies during boot

2020-10-11 Thread Sad Clouds
On Sun, 11 Oct 2020 09:44:48 -0400
Greg Troxel  wrote:

> 
> Sad Clouds  writes:
> 
> > I don't think this is specifically confined to RTC-less machines.
> > Real hardware and virtual machines can have their clocks set to
> > incorrect time, for various reasons.
> 
> They can, but I see that as basically bugs or operator error.
> 
> > I think default max clock skew for unbound is around 24 hours.
> 
> That seems remarkably unforgiving, and that seems like an unfortunate
> design choice.
> 
> The circularity between time sync and certificate/etc. validation is
> real.  But typically certificates have fairly long lifetimes (90 days
> for letsencrypt is the shortest I tend to see), and it has always
> seemed unwise to me, to require clock sync to high accuracy to make
> validation succeed.
> 
> > The workaround can be easily implemented with various settings in
> > rc.conf, so this should probably be documented at the very least,
> > an not just for RPi.
> 
> Probably it belongs in the unbound man page, as unbound seems to be
> the thing that is being unreasonable here.
> 
> 
> Is it reasonable/feasible to have unbound lighten up on the tight time
> requirement?


You can make adjustments in unbound.conf

   val-sig-skew-min: <seconds>
   val-sig-skew-max: <seconds>

but what exactly is a reasonable time skew? Ideally you'd want to keep
it as small as possible; otherwise you open yourself to replay attacks,
etc. It's not just unbound; I think any DNS resolver implementing
DNSSEC would have such limits.
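For illustration, widening the window in unbound.conf could look like
this (the values, in seconds, are only examples; I believe the shipped
defaults are already in this vicinity):

```
server:
    # accept RRSIGs up to these bounds of clock skew (example values)
    val-sig-skew-min: 3600     # one hour
    val-sig-skew-max: 86400    # one day
```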


Re: ntpdate(8) and unbound(8) dependencies during boot

2020-10-11 Thread Sad Clouds
On Sun, 11 Oct 2020 08:54:15 -0400
Greg Troxel  wrote:

> I would rather not add this magic code, and also not configure
> addresses like these.  To me this is a special case of running NetBSD
> on an RTC-less machine, which I see as a bit irregular.   It's really
> a problem for deployed machines without console access.
> 
> It would be fair to point this out in the raspberry pi howto, though.

I don't think this is specifically confined to RTC-less machines. Real
hardware and virtual machines can have their clocks set to incorrect
time, for various reasons. I think default max clock skew for unbound is
around 24 hours.

The workaround can be easily implemented with various settings in
rc.conf, so this should probably be documented at the very least, and
not just for RPi.


Re: ntpdate(8) and unbound(8) dependencies during boot

2020-10-11 Thread Sad Clouds
On Sun, 11 Oct 2020 12:49:16 +0200 (CEST)
Havard Eidnes  wrote:

> > The question is whether we ought to do something to break this
> > circular dependency in our default install by specifying one or two
> > (depending on "minsane" and resiliency conciderations) ntp servers
> > via IP address?  The issue then becomes "which IP address(es)" and
> > "how can that scale"?
> 
> ...or improve the documentation for potentially RTC-less systems
> about the rc.conf ntpdate_hosts workaround.
> 
> Regards,
> 
> - Håvard

If you look at the /etc/rc.d/ntpdate script, it checks whether
"ntpdate_hosts" is empty and, if so, attempts to extract hosts from
/etc/ntp.conf.

I would suggest adding the following logic:

1. Extract hosts and check if they are hostnames and not IP addresses.

2. Attempt to resolve each one of them. If they all fail, fall back to
hardcoded list of IP addresses that point to public NTP servers.

The hardcoded NTP server list would need to be something whose IP
addresses are unlikely to change and are always available, similar to
CloudFlare or Google public DNS servers. This is not 100% foolproof,
but should work for any machine that is able to route
NTP packets to the Internet.
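The two steps above could be prototyped roughly like this (sketch only,
not the actual /etc/rc.d/ntpdate; the fallback addresses are RFC 5737
placeholders, and getent is assumed to be available for the resolution
check):

```shell
# Self-contained sketch of the proposed fallback logic, using a
# temporary sample file in place of /etc/ntp.conf.
conf=$(mktemp)
printf 'server 0.netbsd.pool.ntp.org\nserver 1.netbsd.pool.ntp.org\n' > "$conf"

# Step 1: extract candidate hosts, as the rc script does from ntp.conf.
hosts=$(awk '$1 == "server" || $1 == "pool" { print $2 }' "$conf")

# Step 2: keep only names that resolve right now; if none do, fall back
# to a hardcoded IP list (placeholders here, not real NTP servers).
resolved=""
for h in $hosts; do
    getent hosts "$h" >/dev/null 2>&1 && resolved="$resolved $h"
done
[ -n "$resolved" ] || resolved="192.0.2.1 192.0.2.2"

echo "would run: ntpdate" $resolved
rm -f "$conf"
```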

The man page for ntpdate talks about NetInfo support. No idea what this
is (maybe somehow similar to SRV records?), but if this is enabled, NTP
server name/IP can be omitted and ntpdate somehow finds a suitable one
automagically. This probably still needs fully working DNS service.


Re: ntpdate(8) and unbound(8) dependencies during boot

2020-10-11 Thread Sad Clouds
On Sat, 10 Oct 2020 13:16:40 +0100
Sad Clouds  wrote:

> On Sat, 10 Oct 2020 10:32:49 +0200
> Martin Husemann  wrote:
> 
> > On Sat, Oct 10, 2020 at 09:04:08AM +0100, Sad Clouds wrote:
> > > Anyway, has anyone come across this issue before and how did you
> > > manage to solve it?
> > 
> > You can specify an ip address (or several) as args to ntpdate.
> > Try:
> > 
> > ntpdate=YES ntpdate_args=${IP-of-your-ntpd-sever}
> > 
> > or something along that line.
> > 
> > Martin
> 
> OK thanks, I'll try that. This is probably better than changing
> ntp.conf file.

OK, turns out this is the correct way to pass NTP hosts to ntpdate:
 
ntpdate=YES
ntpdate_flags="-v"
ntpdate_hosts="95.215.175.2 62.3.77.2 178.18.123.148"


Re: ntpdate(8) and unbound(8) dependencies during boot

2020-10-10 Thread Sad Clouds
On Sat, 10 Oct 2020 16:19:43 +0200
Johnny Billquist  wrote:

> That dns starts failing if you don't have a correct clock seems to be
> a serious brokenness. Plenty of embedded devices that might not have
> a battery backed clock...

Well, you don't have to use dnssec and dns-over-tls, but if you want
stronger security, there is a cost. Many security protocols require
correct system time.


Re: ntpdate(8) and unbound(8) dependencies during boot

2020-10-10 Thread Sad Clouds
On Sat, 10 Oct 2020 13:48:44 +0200 (CEST)
Havard Eidnes  wrote:

> > Hi, I'm having the following issues on RPi-3 which doesn't have
> > battery operated clock. This tends to happen when clock skew is
> > quite large.
> > 
> > 1. DNS resolution no longer works, as unbound(8) needs system time
> > to be correct. I think this is due to "forward-tls-upstream: yes"
> > option.
> 
> I suspct that DNSSEC signature validation also fails with a clock
> which is way off.  RRSIG records specify a validity interval, and
> it's not uncommon for that to span about a month around the
> current time.
> 
> Regards,
> 
> - Håvard

You may be right. I was going on holiday and turned off the machine.
When I came back, the system clock was off by about a week. Didn't
bother finding out exactly where unbound was failing to resolve DNS
names, I knew right away it was something to do with the incorrect time.


Re: ntpdate(8) and unbound(8) dependencies during boot

2020-10-10 Thread Sad Clouds
On Sat, 10 Oct 2020 10:32:49 +0200
Martin Husemann  wrote:

> On Sat, Oct 10, 2020 at 09:04:08AM +0100, Sad Clouds wrote:
> > Anyway, has anyone come across this issue before and how did you
> > manage to solve it?
> 
> You can specify an ip address (or several) as args to ntpdate.
> Try:
> 
>   ntpdate=YES ntpdate_args=${IP-of-your-ntpd-sever}
> 
> or something along that line.
> 
> Martin

OK thanks, I'll try that. This is probably better than changing
ntp.conf file.


ntpdate(8) and unbound(8) dependencies during boot

2020-10-10 Thread Sad Clouds
Hi, I'm having the following issues on an RPi-3, which doesn't have a
battery-backed clock. This tends to happen when the clock skew is quite
large.

1. DNS resolution no longer works, as unbound(8) needs system time to
be correct. I think this is due to "forward-tls-upstream: yes" option.

2. System time cannot be set with ntpdate(8) as it needs DNS to resolve
NTP server.

One obvious solution would be to change ntp.conf and replace NTP server
hostnames with IP addresses, although the ntpdate(8) man page doesn't
specify where it actually gets its NTP servers from. It briefly
mentions NetInfo from ntpd, but even the ntpd man page doesn't explain
what NetInfo is and how it works.

Anyway, has anyone come across this issue before and how did you
manage to solve it?

Thanks.



Re: dhcp server null pointer in option_cache_dereference

2020-08-13 Thread Sad Clouds
On Wed, 12 Aug 2020 13:44:27 +0100 (BST)
Stephen Borrill  wrote:

> On some machines newly upgraded to 9.0_STABLE, dhcp runs fine for,
> say, 24 hours then starts logging the following. After this point, it
> no longer handles any DHCP requests:
> 
> Null pointer in option_cache_dereference: 
> /usr/src/external/mpl/dhcp/bin/server/../../dist/server/dhcp.c(4067)
> 
> That 4067 is meant to be a line number according to
> dist/common/options.c, but there's no call to
> option_cache_dereference() at line 4067. Also, most calls to
> option_cache_dereference() don't specify the line as the third
> option, but I may be missing something.
> 
> Any ideas?
> 
> -- 
> Stephen
> 

I would say build with POINTER_DEBUG and then get a core dump. This
should tell you where this happens.

int option_cache_dereference (ptr, file, line)
        struct option_cache **ptr;
        const char *file;
        int line;
{
        if (!ptr || !*ptr) {
                log_error ("Null pointer in option_cache_dereference: %s(%d)",
                           file, line);
#if defined (POINTER_DEBUG)
                abort ();
#else
                return 0;
#endif
        }


Re: NetBSD/Xen samba performance low (compared to NetBSD/amd64)

2020-08-04 Thread Sad Clouds
On Mon, 3 Aug 2020 20:42:51 +0100
Mike Pumford  wrote:

> On 03/08/2020 17:48, Sad Clouds wrote:
> > 
> > I believe Samba is single threaded, so can't take advantage of
> > multiple CPUs for a single stream. I'm not a Xen expert, however
> > I'm not sure running this in Dom0 is a representative test. I
> > imagine most workloads would be done within a DomU, while Dom0 is
> > just a control domain and allocates CPU and memory resources, so
> > you may have additional overheads + latencies.
> > 
> Samba uses multiple processes for parallelism. So there is a separate 
> smbd process for each connection. So for a single share to a single 
> client its single threaded. Multiple clients or multiple shares will 
> spawn additional smbd processes.
> 
> Mike

Which is not the best design. Might be OK with 1GbE, but when you go to
10GbE or 40GbE and higher, that single stream will saturate one CPU and
your throughput will be capped.

Many use-cases require good single stream performance. I suspect Samba
process uses a lot of global data, hence difficult to share with
multiple threads without race conditions or a lot of locking.


Re: NetBSD/Xen samba performance low (compared to NetBSD/amd64)

2020-08-03 Thread Sad Clouds
On Mon, 3 Aug 2020 15:08:28 +0200
Matthias Petermann  wrote:

> I measured this by copying an 8 GB ISO file from a Windows host.
> In constellation 2, no guests had started and the full main memory of 
> Dom0 was assigned. In my view, the only significant difference is
> that NetBSD can only use one of the two CPU cores under Xen. Since
> the CPU was idle on average at 20% during copying, that doesn't seem
> to be the bottleneck?

I believe Samba is single threaded, so can't take advantage of multiple
CPUs for a single stream. I'm not a Xen expert, however I'm not sure
running this in Dom0 is a representative test. I imagine most workloads
would be done within a DomU, while Dom0 is just a control domain and
allocates CPU and memory resources, so you may have additional
overheads + latencies.

Best to start testing simple use-cases, i.e. simple disk I/O with dd, or
network I/O with iperf, and narrow down the issue.
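
For example, something along these lines (device paths and host names
are illustrative, and iperf3 is assumed to be installed from pkgsrc):

```shell
#!/bin/sh
# Narrow the problem down layer by layer.

# 1. Disk only: raw sequential write, no Samba or network involved.
tmpfile=$(mktemp) || exit 1
dd if=/dev/zero of="$tmpfile" bs=1024k count=16 2>&1 | tail -1
rm -f "$tmpfile"

# 2. Network only: iperf between the client and the domain, no disk.
#    on the NetBSD side:  iperf3 -s
#    on the client side:  iperf3 -c <server-ip> -t 30
# If both look fine in isolation, the bottleneck is in Samba itself or
# in Xen's paravirtualized disk/network path, not the raw hardware.
```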


Re: Working ZFS RAID/SAS controller support/mpii

2020-07-15 Thread Sad Clouds
On Wed, 15 Jul 2020 08:59:00 -0400
Jason Mitchell  wrote:

> There's also a FreeBSD version of the utility:
> 
> https://www.freebsd.org/cgi/man.cgi?query=mfiutil=8
> 
> I'd think the FreeBSD version of the utility would work better given 
> that FreeBSD and NetBSD are similar.

You would still need the FreeBSD emulation packages installed, same as
for Linux. It has been a very long time since I tried anything like
this on NetBSD, so I'm not sure which emulation is better.

I don't understand why manufacturers can't provide binary tools for all
major OSes; there aren't that many and it's mostly x86. Especially if
the tools are needed to manage RAID cards, etc.


Re: Working ZFS RAID/SAS controller support/mpii

2020-07-15 Thread Sad Clouds
On Wed, 15 Jul 2020 00:20:33 +0100
Peter Kay  wrote:

> Configuration : Boot drive on SATA, other drives on LSI 3008 8i SAS in
> JBOD, boot ROM disabled. The mpii driver gets very upset (causes a
> kernel panic on boot, even though the boot drive is on SATA [1]) if
> some of the drive bays aren't occupied, throws unhappy messages about
> drives disappearing from bays, and generally doesn't provide any
> confidence that I could ever remove a drive from a running system and
> have it work.

So the issue only happens when you remove drives from a live system? If
that's the case, the obvious workaround would be to power off the
system and then replace faulty drive.

I've used "LSI 9260-8i" on NetBSD, but it's 6Gb/s per port and no JBOD.
Not actually tried hotplugging with this card. The only issue with not
having JBOD is the virtual disc provided by the card doesn't support
SSD TRIM. The card has flexible config where you can disable cached
I/O, etc, so data is passed directly to SSD without getting slowed down
by onboard cache. I've not used ZFS with it, but I think configuring
each disk individually in RAID0 and then passing them to ZFS would work.

There is LSI binary Linux command line tool (MegaCli64), so I imagine
you could offline/online individual disks, but you'd need Linux
emulation packages setup on NetBSD.


Re: Any advanatages to keeping process virtual memory to a minimum?

2020-07-11 Thread Sad Clouds
On Sat, 11 Jul 2020 14:24:10 +0100
Sad Clouds  wrote:

> On NetBSD default thread stack size seems to be around 8 MiB, which is
> quite a lot and most of the time threads don't need this much space.
> 
> When creating large number of threads, process' virtual memory (VSZ)
> can be quite large. This can be reduced with
> pthread_attr_setstacksize() but I wonder if there are any advantages
> to doing this? Solaris used to reserve swap space based on process
> memory usage, but this doesn't apply to NetBSD. Are there any usage
> scenarios on NetBSD where the size of virtual memory can be important?

Well, I just found one issue, gcore is refusing to work with 100
threads. When I reduce thread stack size to 8 KiB, gcore succeeds.
There doesn't seem to be a limit on core file size, unless I've missed
something. Is this a bug?

rp4$ ulimit -c
unlimited

rp4$ sysctl -a | grep coredumpsize
proc.curproc.rlimit.coredumpsize.soft = unlimited
proc.curproc.rlimit.coredumpsize.hard = unlimited

rp4$ gcore 17185
gcore: ptrace(PT_DUMPCORE) to 17185 failed: Input/output error


Any advanatages to keeping process virtual memory to a minimum?

2020-07-11 Thread Sad Clouds
On NetBSD default thread stack size seems to be around 8 MiB, which is
quite a lot and most of the time threads don't need this much space.

When creating large number of threads, process' virtual memory (VSZ)
can be quite large. This can be reduced with pthread_attr_setstacksize()
but I wonder if there are any advantages to doing this? Solaris used to
reserve swap space based on process memory usage, but this doesn't
apply to NetBSD. Are there any usage scenarios on NetBSD where the size
of virtual memory can be important?


Re: does anyone have a working mozilla firefox-74.0 on 9.0 amd64?

2020-07-03 Thread Sad Clouds
On Fri, 3 Jul 2020 14:32:11 +0100
Ottavio Caruso  wrote:

> On Fri, 3 Jul 2020 at 13:15, Sad Clouds 
> wrote:
> 
> >
> > The current trend of moving native desktop applications to cloud and
> > web browsers, simply frustrates and infuriates me. Yes you could
> > build a house out of Weetabix, but that doesn't mean that you
> > should.
> 
> 
> This trend started more than 10 years ago. Most people I know don't
> even use desktop applications; they use mobile apps.

Sure, most people only need computer/smartphone for two reasons -
facebook and twitter.


Re: does anyone have a working mozilla firefox-74.0 on 9.0 amd64?

2020-07-03 Thread Sad Clouds
On Fri, 3 Jul 2020 14:26:34 +0100
Mike Pumford  wrote:

> Done right there is no reason the app has to be either bloated or
> slow, that's down to the skill of the developer. So you are being put
> off by bad examples rather than a bad platform. Is it a perfect
> platform? No, but its not particularly worse than any other GUI
> framework I've used (which includes GTK, MFC and Win32). However you
> don't tend to see the REALLY BAD native apps as they never escape
> outside the organisation that wrote them most of the time. The really
> bad web apps escape all the time :(

It's more about the design and technology. Web applications rely on
HTTP, HTML, CSS, JavaScript, XML, JSON etc. all of which are plain text
that need to be parsed and converted into binary code/structures. This
is insane for complex applications, as it uses way too much memory and
CPU power, draining your battery in no time. I've lost count of how many
times I open a page and wait for JavaScript bloat to load up, with the
CPU spinning at 100% for about 20 seconds. And it is getting worse, as
people are moving everything into web browsers. I had to write an
HTTP/1.1 parser a long time ago, and dealing with text and parsing
tokens is a real pain in the ass, as it is error prone, time consuming
and resource intensive.
Binary code, protocols and structures are the most efficient and can be
cross-platform with careful design.

You may think web browsers are cross-platform? Well, I remember seeing
a complex compatibility matrix stating which browser version supports
which HTML feature. What about the various extensions and platform
hacks you often find in JavaScript to cater for specific browsers? I
can still go to the BBC news website and be unable to play videos,
because I don't have Flash or some other extension. And some platforms
have no decent support for any browser, e.g. I've given up on browsing
web sites on my Sun Ultra 10; it used to be usable many years ago, but
now it's just too slow and there are too many issues with Firefox.

I can see the appeal of having a web app that works on different desktop
and mobile devices, but the core technology is text-centric, quite
inefficient and requires powerful CPU capabilities that you wouldn't
need with native binary code.


Re: does anyone have a working mozilla firefox-74.0 on 9.0 amd64?

2020-07-03 Thread Sad Clouds
On Fri, 3 Jul 2020 10:17:03 +0100
Mike Pumford  wrote:

> There are many people that hate the fact that this level of
> development in a browser is possible but the is no denying its
> usefulness as its a cross system platform for writing applications
> that far outweighs anything else I've ever had access to. Its so
> powerful that you can actually incredibly common to build a web app,
> bundle it with a web rendering engine and ship it as a native app.

I wouldn't go that far in praising web browsers and the technology they
are built on. It's so clunky, bloated and buggy, it's not even funny.
On a daily basis, I'm forced to use crappy and slow web apps - agile
boards, code review boards, development wiki, bug reporting, etc. All of
them are just awful and laggy. Has anyone tried using VMWare vSphere
cloud app, it's too painful to describe the experience.

The current trend of moving native desktop applications to cloud and
web browsers, simply frustrates and infuriates me. Yes you could build
a house out of Weetabix, but that doesn't mean that you should.


Re: possible new feature: unrm ?

2020-07-01 Thread Sad Clouds
On Tue, 30 Jun 2020 17:18:14 -0700
Michael Cheponis  wrote:
 
> So I'm proposing that "rm" move files to the shadow FS, and some other
> command or switch to rm to really remove them(e.g. temp files).

Of course you can set up your environment whichever way you like it.
Most other people may actually want "rm" to delete files completely.

The proper solution to your problem is to keep backups on
external/remote media. No amount of tinkering with "rm" will save you
from local file system corruption, or disk failure, or equipment theft,
etc.


Re: cvs better than git?

2020-06-21 Thread Sad Clouds
On Sun, 21 Jun 2020 18:26:48 +0200
Johnny Billquist  wrote:

> I hear what you say, but it's still a fact that even good people make 
> errors all the time. So it's not really that much about "trust" as
> about in the end getting the damn thing right. Prestige have no place
> there. If you think you write perfect code, and if you feel anyone
> looking over your should is an insult, then I think you should think
> again. :-)

It's not about being insulted, but more about getting the work done
without being blocked needlessly. As a practical example:

If I have a problem with my car and I think I can fix it better than
anyone else, I roll up my sleeves and do it.

Otherwise, I take it to a garage and let them do it for me. Whether they
do a good or a bad job, I don't stand there reviewing their work and
giving them my critique on every nut and bolt they replace. If they do
a bad job, I just don't come back there again.


Re: cvs better than git?

2020-06-21 Thread Sad Clouds
On Sun, 21 Jun 2020 17:15:44 +0200
Johnny Billquist  wrote:

> Not talking about a specific tool, but the process then.
> 
> There are multiple advantages by having code reviewed.
> First of all, noone is perfect. People make mistakes all the time,
> and having two people look at it already reduce silly errors
> significantly.

I'm not against code reviews, and as I've mentioned, you can easily do
that on the command line against a private branch, before it is merged
with the main branch. Personally, I'm not that keen on the GUI process
with Git, Gerrit, etc. Too much red tape and clicking on buttons.

When I work with people, I want to trust them to do their job right, and
don't want to monitor every line of code they write, unless of course
they ask for it. On the other side of the spectrum, sometimes you get
complete control freaks who block you all the time due to minor
stylistic differences.


Re: cvs better than git?

2020-06-21 Thread Sad Clouds
On Sun, 21 Jun 2020 10:32:32 -0400
g...@duzan.org wrote:

> > On 2020-06-21 16:07, Rhialto wrote:
> >> On Sun 21 Jun 2020 at 15:20:39 +0200, Johnny Billquist wrote:
> >>> I am still curious about how to manage well in git the scenario
> >>> where you do
> >>> have a central repository that holds the actual source of truth,
> >>> and where
> >>> you want to review and approve anything that gets committed.
> >>
> >> Let me point you to the hell that is called "Gerrit"...
> >> Example: https://review.opendev.org/
> >
> > Like I said. We are using gerrit. It's not without a bunch of
> > issues all on its own... :-)
> >
> > Any other options around with similar functionality?
> >
> >Johnny
> 
>Our group at work uses Gerrit. We have had some trouble
> maintaining it, so we have been considering moving to Bitbucket,
> which already exists in the company. Bitbucket seems nice enough,
> though of course it is commercial. If I were looking for a solution
> today I would consider Phabricator/Differential, as it is open
> source, supports Mercurial, and appears decent enough.
> 
>   Gary Duzan

What is the actual benefit of these tools? In a commercial environment
I find them to be a complete distraction and it often results in people
blocking you from committing changes due to various petty reasons.

Ideally, you wouldn't need to manage anything, you trust your team
members to use their own experience and commit correct and well tested
code. You can still review the changes on their private branch, but you
can easily do that on the command line - "svn diff --diff-cmd=meld ..."
was the best review tool for me.


Re: cvs better than git?

2020-06-21 Thread Sad Clouds
On Sun, 21 Jun 2020 11:05:51 +0200
Andreas Krey  wrote:

> But I take objection at the 'seems'. You're arguing
> that some - few - projects are too big for git, and
> hence everybody should stay away from is. Even though
> MS implemented VFS exactly to be able *use* git.
> 

Absolutely not. There are many small and medium projects out there and
if Git works for them, that is great. I'm not trying to persuade anyone
to stay away from Git or Mercurial or anything else. There is no
perfect VCS that will please everyone, but I find it interesting to
hear why one VCS is better or worse than another and the use-cases that
lead to that. Just because I have an opinion on something does not
mean that I expect everyone else to agree with me.


Re: cvs better than git?

2020-06-20 Thread Sad Clouds
On Sat, 20 Jun 2020 17:06:26 +0200
Andreas Krey  wrote:

> > However, a local repo has a big advantage: the possibility to
> > quickly determine the status.
> 
> ...or do logs, diffs, commits, checkouts, merges...

Microsoft developed "VFS for Git" in order to scale it for large repos.
Because let's face it, breaking up codebase into gazillions of smaller
repos is daft and not very user friendly.

The way I understand VFS - data is downloaded on demand, when you try to
access some file. Doesn't this completely negate the idea of having all
history stored locally?

Git local repos seem cool up to a certain size, beyond that you need
hacks like VFS and it's back to the traditional model of central VCS
repos.


Re: Checking out src with Mercurial

2020-06-19 Thread Sad Clouds
On Fri, 19 Jun 2020 21:07:24 +0200
Jan Danielsson  wrote:

>The tl;dr for those who don't want to read all that:  If, five
> years earlier, Rust had been in the shape it was when that post was
> written, the Mercurial developers may have opted to port to Rust
> rather than try to bend Python 3 to their will -- because many common
> assumptions they made about Python were true in 2.x, but not 3.x.
> 

Until perhaps the next awesome version of Rust is unleashed, and you're
back to something like Python 2 to 3 upgrade saga. As you pointed out,
Rust has no spec, so you could wake up tomorrow with some crazy new
features added.

People don't like C because it's too low level, fine. There are mature
programming languages like Ada, that have been around for decades and
have been designed by very smart people. Everything that you want for
writing reliable software at scale - strong typing, automatic overflow
and array bounds checking, built-in concurrency with tasks, object
oriented programming, generics, etc. But Ada is not hip enough for the
Facebook type crowd, so they keep inventing new programming languages
that simply suck when it comes to software engineering.


Re: Checking out src with Mercurial

2020-06-19 Thread Sad Clouds
On Fri, 19 Jun 2020 12:19:30 -0400
Greg Troxel  wrote:

> Sad Clouds  writes:
> 
> >> The idea of having to build rust to get a "less resource demanding
> >> implementation" would be great comedy if it weren't such a serious
> >> problem.
> >
> > The Mercurial developers have an opinion that it is very difficult
> > to develop and maintain reliable software in C. So the search goes
> > on for the silver bullet - Python, Rust, Go, etc.
> 
> Sure - but my point is that rust is extremely difficult to deal with,
> and is basically impossible on low-resource machines or unusual
> architectures.  For those who haven't been following along, there has
> been basicallya continuous tale of woe in pkgsrc-land for the last 6
> months at least, and perhaps that's fairly said to be multiple years.
> 
> So the idea that Johnny can build rust on his vax and then use some
> hg-rust on it would be laughable, if it didn't point out that the
> people deciding to use rust appear not to care about anything other
> than a handful of OS/CPU types.

What I find a bit strange is the idea that developing something in
Python + C, then re-writing it in a hipster language like Rust, is more
reliable than just using well designed C/C++ code.

OK C is not perfect, but if software reliability is the main concern,
how many control systems operating aircraft, railway, or nuclear power
station are written in Python or Rust? Probably none.

And as you mentioned, if you can't use the tool on old VAX or SPARC
machines, that are still supported by NetBSD, the whole point of the
tool becomes rather irrelevant.


Re: Checking out src with Mercurial

2020-06-19 Thread Sad Clouds
On Fri, 19 Jun 2020 09:31:00 -0400
Greg Troxel  wrote:

> Jan Danielsson  writes:
> 
> > On 2020-06-19 13:45, Matthias Petermann wrote:
> >> [---] Or maybe one day a BSD-licensed Mercurial
> >> "light" as a re-implementation in Golang will exist.
> >
> >There's work being done on oxidizing parts of Mercurial.  As far
> > as I have gathered, the plan is to keep the python implementations
> > as well, but if the user chooses to they can build certain parts in
> > Rust to get a less resource demanding implementation.
> 
> (I know you are the messenger here so not directed at you.)
> 
> The idea of having to build rust to get a "less resource demanding
> implementation" would be great comedy if it weren't such a serious
> problem.

The Mercurial developers have an opinion that it is very difficult to
develop and maintain reliable software in C. So the search goes on for
the silver bullet - Python, Rust, Go, etc.


Re: cvs better than git?

2020-06-18 Thread Sad Clouds
On Thu, 18 Jun 2020 07:54:30 +0200
Andreas Krey  wrote:

> E.g. the answer to 'how do I make tags in svn immutable by default'[1]
> is surprisingly long and intricate - so much that nobody ever fully
> answered that to me.

Search "read only branches" for subversion and it'll give you some
ideas. Never done this myself, so no idea how well it works, but from
what I've read, it is relatively simple.
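
The usual approach is a pre-commit hook that rejects any change under
tags/ except the copy that creates a new tag. A rough sketch of the
core check, assuming the conventional trunk/branches/tags layout (the
hook wiring and repository layout are assumptions):

```shell
#!/bin/sh
# Core check for a Subversion pre-commit hook that makes tags/
# append-only. In the real hook this would be fed from:
#   svnlook changed -t "$TXN" "$REPOS" | deny_tag_changes

deny_tag_changes() {
    # Reads "ACTION PATH" lines on stdin (svnlook changed format) and
    # fails on any change below an existing tag directory.
    while read action path; do
        case "$action $path" in
        "A tags/"*/)                    # new tag directory: allowed
            ;;
        *" tags/"*)                     # anything else under tags/
            echo "tags are read-only: $action $path" >&2
            return 1
            ;;
        esac
    done
    return 0
}

# Example: creating a tag passes, modifying one is rejected.
printf 'A tags/rel-1.0/\n' | deny_tag_changes && echo "tag created"
printf 'U tags/rel-1.0/Makefile\n' | deny_tag_changes 2>/dev/null \
    || echo "rejected"
```

Placed as the last pipeline stage in the hook, the function's exit
status becomes the hook's, which is what makes svn refuse the commit.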


Re: painting the bikeshed in transparent colors (Re: cvs better than git?)

2020-06-18 Thread Sad Clouds
On Thu, 18 Jun 2020 09:09:50 +0200
Nikita Gillmann  wrote:

> Sorry, what's the point of this continued thread?

Actually I'm always interested in why X is better than Y, so I can
make my own decisions, instead of following some predetermined
inevitability (e.g. Git has won period, just accept it as a way of
life on Earth).

Morpheus: Do you believe in fate, Neo?
Neo: No.
Morpheus: Why not?
Neo: Because I don't like the idea that I'm not in control of my life.


Re: cvs better than git?

2020-06-17 Thread Sad Clouds
On Wed, 17 Jun 2020 16:08:17 -0400
Jeffrey Walton  wrote:

> My position is, avoid Git if possible. It is a time sink that takes
> time away from real work. Ignore the Fan Boi's when they ask for Git.
> The request often shows nativity or lack of experience with the tool.
> 
> Jeff

Over the years, I worked on different projects and even though I work
for the same company, we keep switching from one VCS to another.

First we used Subversion, which was OK.

Then people decided Subversion was crap, so they set up Git and Github.
This was just awful, especially the Github workflow; never in my life
have I had to jump through so many pointless hoops just to commit a
single-line fix.

Then we switched to Mercurial. There was no Github nonsense so that was
a relief, however it still sucked compared to the simplicity of
Subversion. There is talk now of switching from Mercurial to Git. And
the cycle continues...

OK I don't spend months meditating over Git manuals, so I don't feel the
force and will never become that Git Jedi Master that everybody else
claims to be these days. There is more to life, I want VCS to be as
simple as possible and to get out of the way, so I can focus on real
work at hand.


Re: cvs better than git?

2020-06-17 Thread Sad Clouds
On Wed, 17 Jun 2020 09:11:04 -0700
Michael Cheponis  wrote:

> If you want a thirty-minutes summary presentation, watch "What’s
> Wrong With Git?" from Git Merge 2017
> .
> 
> The end result of this work is Gitless,  a simple version control
> system built on top of Git.

OK I get it - you can't polish a turd, but you can roll it in glitter.


Re: Installing debian packages on netbsd?

2020-06-14 Thread Sad Clouds
On Wed, 10 Jun 2020 12:13:43 +0100
Robert Swindells  wrote:

> I have a few debian compat packages in my tree.
> 
> I created them to use with NetBSD/arm as SUSE didn't support that
> architecture. I didn't get far enough with the kernel side of Linux
> emulation for arm to do much with the packages.
> 

Are you referring to package dependency on kernel features? I imagine
stuff like systemd requires cgroups, etc, which may be missing from
NetBSD emulation. When you install some Linux packages, they need
systemd to register startup scripts, etc.

I wonder if NetBSD compat_linux means you have to use a Linux distro
without systemd baked into its core system and package management tools.


Re: requesting support for persistent memory

2020-06-11 Thread Sad Clouds
On Thu, 11 Jun 2020 13:34:21 +0200
mayur...@kathe.in  wrote:

> i don't know anything about "the machine" project by 'hp' nor do i
> have any idea about why it might've gone quiet, other than that it
> must've gone the "ia64" route of technological innovations from 'hp'
> labs, but intel's optane (the most popular implementation of
> "persistent memory") is selling like hot-cakes in the market right
> now. in fact, my new lenovo laptop (which is a low-end core-i3
> machine) has optane memory chips soldered right onto the motherboard.
> 

They are trying to figure out how to efficiently make "memristors", but
it's probably harder to crack than they originally anticipated.

Slides from 5 years ago:
https://www.mcs.anl.gov/events/workshops/ross/2015/slides/ross2015-keeton.pdf


Re: requesting support for persistent memory

2020-06-10 Thread Sad Clouds
On Wed, 10 Jun 2020 14:13:57 +0200
mayur...@kathe.in  wrote:

> i noticed; https://pmem.io/
> they are supporting windows and linux using some
> 'dax' (direct-access) technology. would netbsd experts too work
> towards bringing-in support for persistent memory? i believe it could
> pave the way for a new breed of applications since it's a whole new
> programming paradigm.
> 

I thought it was mmap(2) followed by msync(2). Job done!

Joking aside, isn't that what HP tried to do with "The Machine" project
years ago, but then it all went rather quiet.


Re: Installing debian packages on netbsd?

2020-06-10 Thread Sad Clouds
On Tue, 09 Jun 2020 20:11:51 -0400
Greg Troxel  wrote:

>   after installing the linux compat packages, chroot into /emul/linux
>   and then run the dpkg tools.  first you might need to install them
>   because the emulation is suse.

Anyone knows why pkgsrc uses Suse Linux? Is it something to do with
long term support for older versions? If not, I wonder if Debian would
be a better choice, as they have far more packages available. 


Mechanical keyboards

2020-06-08 Thread Sad Clouds
I'm thinking of getting a Filco mechanical keyboard:

https://www.diatec.co.jp/en/det.php?prod_c=765

Either Cherry MX Blue or Brown switches, but I can't decide which one is
better for typing (https://www.cherrymx.de/en/mx-original/mx-blue.html)

Never used them before, so no idea how they feel. Anyone used either of
them and could provide some feedback?

I've heard IBM Model M with "buckling spring" switches was the best, but
who knows.

This guy has a pretty cool youtube channel reviewing all sorts of weird
keyboards. He must be building a keyboard museum at home:

https://www.youtube.com/playlist?list=PL285ATFsHGY9JnaQemdjRLEMmLRKpmbfB

Thanks.


Re: Postfix and local mail delivery - still relevant in 2020?

2020-06-08 Thread Sad Clouds
On Sun, 07 Jun 2020 15:12:56 -0700
"Greg A. Woods"  wrote:

> However when you put _all_ the code for _all_ the system's programs
> into one single lone binary, with no shared libraries, then _all_
> text pages are shared entirely for _all_ processes all of the time,
> no matter what program they are running as.

OK thanks for the explanation. So as long as it is a single executable,
text pages are shared, but if you create multiple executables on disk,
i.e. different inodes, then you do get duplicate text pages loaded into
RAM?


Re: Postfix and local mail delivery - still relevant in 2020?

2020-06-07 Thread Sad Clouds
On Sun, 07 Jun 2020 10:35:09 -0700
"Greg A. Woods"  wrote:

> Now ideally what I want to do for embedded systems is static-link
> every binary into one crunchgen binary.  I've done this for 5.2 on
> i386, and the whole base system (or most of it, no compiler, tests,
> or x11; and no ntpd or named or postfix) is just 7.4MB compressed.
> Yes, just 7.4 megabytes:

So when you run this binary as multiple concurrent processes, aren't
they all going to use more RAM as they load duplicate copies of shared
libraries that have been statically linked into it?


Re: Postfix and local mail delivery - still relevant in 2020?

2020-06-07 Thread Sad Clouds
On Sun, 7 Jun 2020 13:59:47 +0200
Johnny Billquist  wrote:

> You are thinking simple metrics, which is a rather user centric item.
> I was trying to make that point before.
> 
> Think instead of things like reports on what binaries have been
> replaced in the system. How could you ever graph and aggregate such
> information?

Depends really on what information you want to see. You could graph the
number of binaries for each machine, or the number of suid/sgid
binaries that got replaced, change in size before/after, etc.

Some data may not be numeric, i.e. ssh or httpd logs, but you could
configure the tools to grep for particular events or error codes and
flag them. You can extract numeric data, i.e. how many ssh users log
in/out per day, how long their sessions last, or how many failed login
attempts happen, etc. You can probably do this to some extent with
email, but it's not as flexible with regard to data mining and pattern
visualisation. In my previous role, I frequently had to sift through
various system logs trying to figure out why system performance was
suboptimal and trust me, manually grepping through log files is no fun
and wastes a huge amount of time. I don't want to be looking at these
logs/messages in my email.
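
As a rough illustration of what I mean by extracting numeric data from
logs, even a one-liner can aggregate failed logins per day. The sample
sshd log lines and file paths below are made up, just to show the idea:

```shell
# Fabricated sample of syslog-style sshd lines, purely for illustration.
cat > /tmp/auth.sample <<'EOF'
Jun  6 10:01:02 host sshd[100]: Failed password for root from 10.0.0.1 port 4000 ssh2
Jun  6 11:02:03 host sshd[101]: Accepted publickey for alice from 10.0.0.2 port 4001 ssh2
Jun  7 09:00:00 host sshd[102]: Failed password for invalid user bob from 10.0.0.3 port 4002 ssh2
Jun  7 09:00:05 host sshd[103]: Failed password for invalid user bob from 10.0.0.3 port 4003 ssh2
EOF

# Count failed attempts per day (month and day are the first two fields).
grep 'Failed password' /tmp/auth.sample |
    awk '{ count[$1 " " $2]++ } END { for (d in count) print d, count[d] }' |
    sort
```

Numbers like these are what you would feed into a grapher, instead of
reading the raw lines in an email.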

I view email as a very simple tool for receiving short messages and
emergency alerts, not for daily system monitoring and trouble shooting.
Of course if you're only interested in short messages and alerts, then
email is quite sufficient. But I get the feeling this style of system
administration has probably outlived much of its usefulness.


Re: Postfix and local mail delivery - still relevant in 2020?

2020-06-07 Thread Sad Clouds
On Sun, 7 Jun 2020 13:03:47 +0200
Johnny Billquist  wrote:

> The alternative, even more so with the VMs, is that you would go into 
> each and every machine and check the state there every day. That do
> not seem to scale very well...
> 
> Having the reports instead sent to some central place seems exactly
> what you would like to do...

I agree with your last statement, however I'm still wondering if doing
this via email is a good approach. It works for simple reports, but as
the details and complexity grow, this may become unmanageable.
Extracting data and trends from simple text emails is possible, but
seems to be a rather cumbersome process.

Have a look at these graphs:
https://oss.oetiker.ch/rrdtool/gallery/index.en.html

This is the type of format that allows you to easily manage complex
subsystems. Especially if the tools can automatically aggregate data
from multiple machines and display metrics on a single graph.

I'm thinking about a design where each system has a small framework for
collecting various telemetry data. You activate various plugins and
they automatically collect metrics + logs and archive them in a
central place. This is somewhat similar to how emails get stored on a
central server, but much more sophisticated, as the framework has
specific tools to graph and visualise daily reports.

There are commercial and free frameworks that do this, but they seem
quite bloated and probably include web servers and SQL databases. It
needs to be much simpler but with very good tools for data analysis.
A bit like DTrace but designed for this specific use case and more
emphasis on automatic graph generation.
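
To make the idea a bit more concrete, the smallest possible version of
such a collector could be a shell loop that archives plugin output under
timestamped paths for a central host to pull later. Everything here
(the /tmp/telemetry path, the plugin list) is made up for illustration:

```shell
# Minimal collector sketch: each "plugin" is just a command whose output
# is archived under host/date, to be pulled or pushed to a central server.
ARCHIVE="/tmp/telemetry/$(hostname)/$(date +%Y%m%d)"
mkdir -p "$ARCHIVE"

for plugin in "uptime" "df -k" "uname -a"; do
    name=$(echo "$plugin" | tr ' ' '_')
    # One file per plugin per run, timestamped so runs never clobber each other.
    $plugin > "$ARCHIVE/$name.$(date +%H%M%S).txt" 2>&1
done

# A central host could then rsync/sftp this tree and feed it to rrdtool etc.
find "$ARCHIVE" -name '*.txt'
```

A real framework would of course need retention, transport and graphing
on top, but the collection side really can be this small.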


Re: Postfix and local mail delivery - still relevant in 2020?

2020-06-07 Thread Sad Clouds
On Sat, 06 Jun 2020 16:19:20 -0700
"Greg A. Woods"  wrote:

> Delivering mail to a remote mail server requires MTA software that is
> capable of delivering mail via the network.
> 
> Postfix is a most excellent MTA that can be configured to deliver mail
> via the network.
> 
> Conveniently Postfix is also a most excellent MTA that can be
> configured to (also) _receive_ mail via the network.

I agree with you and I am not suggesting that Postfix is not good at
what it does. I am questioning this aspect of system administration:

"delivering mail to a remote mail server"

I wonder if this is still relevant in 2020, when many people tend to
use multitudes of VMs and container-per-application configurations.


Re: Postfix and local mail delivery - still relevant in 2020?

2020-06-07 Thread Sad Clouds
On Sat, 06 Jun 2020 19:37:39 -0400
Greg Troxel  wrote:

> I have a number of machines, and almost all of them send either daily
> mail or messages on boot, typically to me someplace else.  This
> basically requires an MTA

I'm curious, what sort of essential information do these emails
provide on a daily basis? Is it simply that some cron job completed
successfully?

Personally, I would like to see graphs and charts of
cpu/memory/disk/network usage. I would like to archive various security
and cron logs in a different location, so that hackers cannot easily
delete them. If I'm running email/web/database services, I would like
to archive all logs, statistics and performance metrics, on a frequent
(maybe hourly?) basis. If the database has performance issues at
specific times, I would like to be able to go back in history and
analyse all logs and metrics. Some of those metrics could be stored in
binary files and may need specific tools to extract visual graphs, etc.

I don't think sending all that data to my email account is going to be
very practical. So at least for how I would like to configure my
systems, Postfix does not seem very useful. Yes I can disable it from
rc.conf so not a big issue. I'm not asking people to remove Postfix,
just trying to understand how useful it is and how many people actually
use it.


Re: Postfix and local mail delivery - still relevant in 2020?

2020-06-06 Thread Sad Clouds
On Sat, 06 Jun 2020 14:55:54 -0700
"Greg A. Woods"  wrote:

> I would suggest it is still very relevant to have a network capable
> MTA in the base system distribution, and for that purpose Postfix is
> a most excellent choice.

What are the use-cases for it? Is it only for emailing occasional
reports to sysadmins?


Re: Postfix and local mail delivery - still relevant in 2020?

2020-06-06 Thread Sad Clouds
On Sat, 6 Jun 2020 18:58:16 +0200
Martin Husemann  wrote:

> This is similar to how we do not need bind in base, unbound as a
> caching name server should be enough (and that move has not been
> completed either yet).
> 
> Martin

Yes, that is another example of a software package that few people use,
but it is still included with all the extras, because some small parts
of it are useful.


Re: Postfix and local mail delivery - still relevant in 2020?

2020-06-06 Thread Sad Clouds
On Sat, 6 Jun 2020 20:29:54 - (UTC)
mlel...@serpens.de (Michael van Elst) wrote:

> >I'm thinking that Postfix and similar, are good for running mail
> >servers, but for system monitoring and alerting, there may be better
> >tools out there.
> 
> Since when does Postfix do system monitoring and alerting?
> Are you still talking about removing postfix or about removing
> daily(5) ?

I'm questioning if having Postfix in the base system is that useful
and how many people actually use it. I'm trying to understand how people
use subsystems like Postfix when they have no requirement to run email
servers. It seems the main reason for having it is to use it as a basic
MTA and email daily/weekly reports to sysadmins. Essentially it is used
as a monitoring tool, that can take the output from other tools like
netstat, vmstat, etc and email it to someone. There doesn't seem to be
any other use case for having it. Or maybe I'm missing something here.

It seems quite often Postfix is pointless on a desktop system. Most
users tend to use some GUI MUA that has built-in MTA + POP3 + IMAP
capabilities and doesn't even talk to a local Postfix subsystem.

If you run a network appliance, you can setup Postfix to email daily
reports to some remote email server. However you now have to setup and
administer this email server, and maybe POP3 or IMAP server. You also
need an email client. You could also use other peoples' email
infrastructure (Gmail) but then they could potentially monitor all your
activity. This seems like a lot of faff just for the sake of some basic
email alerts and reports.


Re: Postfix and local mail delivery - still relevant in 2020?

2020-06-06 Thread Sad Clouds
On Sat, 6 Jun 2020 16:57:11 - (UTC)
mlel...@serpens.de (Michael van Elst) wrote:

> cryintotheblue...@gmail.com (Sad Clouds) writes:
> 
> >I've been wondering - why have Postfix in the base system and why
> >have it enabled by default?
> 
> Simple answer, mail is used by automated tasks to deliver results
> to users.
> 

OK, but does this really require the entire Postfix infrastructure?
A small mail delivery tool would be sufficient, e.g. somebody mentioned
Dragonfly mail agent.

I kind of understand that it is traditional in Unix to email daily
reports to a sysadmin. This may have been reasonable decades ago, but
in 2020 it seems a bit primitive and not very useful for even a medium
number of networked systems.

There is a lot of information that can be collected on a daily basis,
for example - cpu/memory/disk/network metrics, thermal metrics, firewall
logs and alerts, mail/web/database/backup logs/alerts, etc. Sifting
through all of that in emails does not seem very productive.

I'm thinking that Postfix and similar, are good for running mail
servers, but for system monitoring and alerting, there may be better
tools out there. If you want to keep it really simple, why bother with
SMTP and mail clients at all? Just sftp all the results to a central
location and ssh there when you want to view them. 


Postfix and local mail delivery - still relevant in 2020?

2020-06-06 Thread Sad Clouds
I've been wondering - why have Postfix in the base system and why have
it enabled by default?

Most people are not interested in running their own mail server. In
this case, what is the value of having Postfix just to deliver daily(5)
reports to /var/mail/root? Seems like overkill for something that
syslog can do perfectly well.

Is this just a vestige of 1980s time-sharing systems, when computers
were expensive and many different users had to coexist on the same
system?


Re: kernel source : compile from a ubuntu (gnu/linux) system : possible?

2020-05-27 Thread Sad Clouds
On Wed, 27 May 2020 14:47:44 +0200
mayur...@kathe.in  wrote:

> i've heard about the impressive portability of the netbsd system,
> especially it's kernel. i've also heard that it's possible to
> cross-compile the whole system for a different target 'isa'. so, i
> wondered if it's possible to compile the netbsd kernel (only) source
> from a gnu/linux system like ubuntu. i don't expect to "run" the
> kernel, just to modify the source (primarily working on updating from
> older 'c' to c99 and secondarily to identify and patch code issues
> with security) and compile to test whether everything goes through
> and then submit the patches. all this because netbsd won't support
> all of my hardware and also incapability to use virtualisation.
> thanks.
> 

I normally build on Debian. All you need is to download NetBSD sources:

/home/myusername/netbsd_build/src
/home/myusername/netbsd_build/xsrc

Set up mk.conf, e.g. cat /home/myusername/netbsd_build/mk.sparc64.conf
CPUFLAGS+=-mcpu=ultrasparc
MKX11=yes

And just run the script below. NetBSD build.sh will first build
various tools like bmake and cross-compiler and then build your
code. You don't need to be root and can build as a normal user.


#!/bin/sh

# Build NetBSD from sources. Linux needs the following packages:
# apt-get install build-essential zlib1g-dev flex

MACHINE_ARCH=sparc64
MAKECONF=/home/myusername/netbsd_build/mk.$MACHINE_ARCH.conf; export MAKECONF
WORK_DIR=/home/myusername/netbsd_build
JOBS=8

echo "MACHINE_ARCH=$MACHINE_ARCH"
echo "WORK_DIR=$WORK_DIR"
echo "sleeping for 4 seconds..."
for i in 4 3 2 1
do
echo -n "$i "
sleep 1
done
echo " "

# Various build targets, uncomment as needed

# Build sets
#cd $WORK_DIR/src && sh ./build.sh -j $JOBS -U -u -m $MACHINE_ARCH \
#   -x -X $WORK_DIR/xsrc \
#   -O $WORK_DIR/obj.$MACHINE_ARCH -T $WORK_DIR/tools.$MACHINE_ARCH \
#   distribution sets 2>&1 | tee $WORK_DIR/log.$MACHINE_ARCH

# Build complete release
#cd $WORK_DIR/src && sh ./build.sh -j $JOBS -U -u -m $MACHINE_ARCH \
#   -x -X $WORK_DIR/xsrc \
#   -O $WORK_DIR/obj.$MACHINE_ARCH -T $WORK_DIR/tools.$MACHINE_ARCH \
#   release 2>&1 | tee $WORK_DIR/log.$MACHINE_ARCH

# Build only kernel (requires tools)
cd $WORK_DIR/src && sh ./build.sh -j $JOBS -U -u -m $MACHINE_ARCH \
-x -X $WORK_DIR/xsrc \
-O $WORK_DIR/obj.$MACHINE_ARCH -T $WORK_DIR/tools.$MACHINE_ARCH \
kernel=GENERIC 2>&1 | tee $WORK_DIR/log.$MACHINE_ARCH


Re: HP ProLiant server running NetBSD 9 setup suggestions?

2020-05-26 Thread Sad Clouds
On Tue, 26 May 2020 11:03:41 +0100
David Brownlee  wrote:

> If you do find fans spinning more than you like it may be worth
> replacing it/them with quieter/more efficient models - I did this for
> the main case fan on a Dell T320 and now have it running with a 16
> core E5-2450L and 8 SATA drives without any noticeable noise under
> normal conditions
> 
> David

I'm not sure it's as simple as that. With 1U and 2U server cases there
isn't much room for larger fans, so to keep the system cool those tiny
fans have to spin much much faster, creating a lot of noise.


Re: Securing DNS traffic

2020-05-26 Thread Sad Clouds
On Mon, 25 May 2020 20:37:07 -0700
Andy Ruhl  wrote:

> So I'm not big into DNS and I don't have a firm grasp on all of these
> techniques, but I have an idea.
> 
> This is all just a big game of who are you hiding from right? If you
> hide from your ISP, now you have to trust the DNS server provider. Who
> among them are to be trusted?

It's not just ensuring privacy, but security as well. In the last few
years there have been quite a few attempts to subvert DNS. I found this
blog from cisco that is not too technical and most people can follow it:

https://blogs.cisco.com/security/dns-under-attack

Personally I'm not so much concerned about hiding my activity from ISP,
but making sure web and email services cannot be hacked or redirected
to malicious servers. So DNSSEC and DNS over TLS can help to some
extent. But there are many other layers to this security onion.

I've just ordered these two books, they seem like a good read:

"Managing Mission - Critical Domains and DNS: Demystifying nameservers,
DNS, and domain names"

"DNSSEC Mastery: Securing the Domain Name System with BIND"

Within a few days of me starting this thread, I got a notification from
Name.com warning that someone from China attempted to log into my old
and abandoned account.

Failed login notification
Failed on May 24, 2020, 2:42 am
IP Address: 222.173.92.154

I've not used Name.com for the past 10 years, but they seem to keep
your details forever and don't provide a facility to delete your
account. So you also need to be careful about domain registrars, some
of them could have quite lax security policies.


Re: Securing DNS traffic

2020-05-25 Thread Sad Clouds
On Fri, 22 May 2020 22:38:19 +0100
Sad Clouds  wrote:

> It seems there are two main security enhancements for DNS:
> 
> 1. DNSSEC - digital signatures for DNS records to verify they haven't
> been tampered with.
> 
> 2. DNS over TLS - encryption of DNS traffic for privacy. This goes via
> port 853 and could be over TCP or UDP (DTLS), although it's not clear
> to me if both TCP and UDP are always supported, or if it's mainly TCP.

I've been doing some more research and came across this article on DNS
hijacking

https://www.fireeye.com/blog/threat-research/2019/01/global-dns-hijacking-campaign-dns-record-manipulation-at-scale.html

Some of the techniques they describe seem to follow these steps:

1. DNS account is compromised and either A or NS records are changed to
   point to a bogus server.
2. User connects to "email.mydomain.com" which is sent to a bogus
   server that acts as a "man in the middle", collecting credentials
   and then forwarding everything to the real "email.mydomain.com"

I think TLS was designed to avoid "man in the middle" attacks, but it
seems in this case a bogus server is using its own "valid" TLS
certificate and then proxying connections to the real server.

I don't quite understand how this works. Is it the case of somebody
creating a second valid TLS certificate for "email.mydomain.com" in
order to masquerade as a genuine email server? So if different CAs can
issue such certificates, how do you mitigate such attacks? Isn't this a
flaw in the PKI design to have different CAs that can vouch for the same
domain?

Under the "Prevention Tactics" the article talks about "revoking
malicious certificates", but what tools/methods are there to tell you
which certificates are malicious?


Re: Securing DNS traffic

2020-05-25 Thread Sad Clouds
On Mon, 25 May 2020 10:17:56 +0200
Jörn Clausen  wrote:

> Hi!
> 
> I was not arguing for "no security at all". It's just this motivation
> for DoT/DoH (disguising the request from your ISP) that I don't get.
> 
> I have only a cursory knowledge of these technologies, but I think
> DNSSEC is the far better approach against the type of forgery you
> mentioned. Why do you expect CloudFlare or any other DoH provider not
> to be corrupted? I have just as much trust in them as in the
> commercial VPN provider you mentioned, or my ISP for that matter:
> very very little. As a European user, I definitely don't want all my
> DNS traffic to be routed through a single US company by default. But
> YMMV...

They are different technologies that complement each other. You need
both DNSSEC and DoT. You're right, any service provider could be
monitoring your activity and I don't believe US vs Europe makes much
difference here. I live in the UK and there has been "Data Protection
Act" in place well before EU "General Data Protection Regulation". Just
because something is legislated does not mean that everybody follows it
to the letter.

With DNSSEC you validate the integrity of the data, so if somebody
managed to poison the cache of some DNS server and insert a bogus
entry, hopefully DNSSEC should be able to flag it. However, if someone
redirects your DNS traffic (as some ISPs do) they could completely strip
out any DNSSEC data and substitute whatever records they like. For
example when you type "site.nosuchtld" into your web browser, instead of
an error, you get a web page filled with ads or some other nonsense. 

With DoT you nominate some trusted DNS server and TLS certificate
validation should flag if someone attempts to impersonate that server.
It's up to you which server to trust CloudFlare, Google or your own
that you setup in some trusted data centre.



Re: Securing DNS traffic

2020-05-24 Thread Sad Clouds
On Sun, 24 May 2020 20:55:29 +0200
Jörn Clausen  wrote:

> I simply don't get how this is a use case for DoT or DoH. Even if you
> disguise the DNS lookup, the next packet you send will be directed to
> the address you just looked up. Unless this happens to be a virtual
> hosting service, it is quite clear to your ISP what you are doing. I
> recommend this talk by Paul Vixie

There is always potential for surveillance. You may think you're safe
on a VPN, but if you didn't setup the endpoints yourself, on your own
hardware, how can you trust some VPN provider 100%? You can't.

I think the value of DoT is to stop DNS traffic hijacking and
redirection. Even if you configure /etc/resolv.conf to point to some
trusted DNS server, your ISP (or anyone else) can surreptitiously
redirect it to their own DNS server for various purposes (tracking,
filtering, serving ads, etc). Yes there are other ways to track people,
but the less info you leak in plain text the better.


Re: Securing DNS traffic

2020-05-24 Thread Sad Clouds
On Sun, 24 May 2020 11:00:00 +0200 (CEST)
Havard Eidnes  wrote:

> Nope.  There is no specified protocol to direct recursive
> resolution to use TLS towards specific authoritative servers.
> There has been talk about this on the DNSOP IETF working group,
> but nothing has been agreed.  This means that the queries from a
> recursive resolver to authoritative name servers will be in
> cleartext, typically using UDP, may (try to) use TCP in case of
> truncation.

OK thanks for the info.


Re: Securing DNS traffic

2020-05-24 Thread Sad Clouds
On Sat, 23 May 2020 11:38:18 +0200 (CEST)
Havard Eidnes  wrote:

> With your own recursor which implements query minimization, and by
> having multiple clients actively using it, you leak far less about
> your lookup history than by forwarding all your full DNS client
> queries to one of the above.  Obviously, this comes at a price --
> lookup times will be longer while the cache warms up, and caching is
> less effective the fewer clients you have using the cache.  Plus, of
> course, the outgoing queries from your recursor will be in
> cleartext.
> 
> Just saying...
> 
> - Håvard

OK, so I understand that root servers probably won't support TLS, but
some authoritative servers may support TLS (aka ADoT). But I don't seem
to find a way to tell unbound "use TLS opportunistically, wherever
possible". Isn't there some record (similar to DNSSEC RRSIG) that tells
unbound which servers actually support TLS?

So this config doesn't work, and DNS queries time out, as it is always
trying to use DNS over TLS (aka DoT), even if servers don't support it.

server:
  tls-upstream: yes


Re: [SOLVED-ish] Performance weirdness with netbsd-9 /usr/bin/grep

2020-05-23 Thread Sad Clouds
On Sat, 23 May 2020 23:46:43 +1000
Paul Ripke  wrote:

> Additionally, the test is quite repeatable from a tmpfs:
> 

OK, so I wonder if some other locale is set, in which case grep would
be converting multi-byte chars to wide chars before comparing. Years ago
I benchmarked similar conversions with iconv() and it wasn't great.

Or could be something else completely different. Best thing to do is
profile it. Can DTrace be used to profile shared library functions? 


Re: [SOLVED-ish] Performance weirdness with netbsd-9 /usr/bin/grep

2020-05-23 Thread Sad Clouds
On Sat, 23 May 2020 14:04:56 +0200
Martin Husemann  wrote:

> On Sat, May 23, 2020 at 10:19:28AM +0100, Sad Clouds wrote:
> > rp3$ locale
> > LANG=""
> > LC_CTYPE="C"
> > LC_COLLATE="C"
> > LC_TIME="C"
> > LC_NUMERIC="C"
> > LC_MONETARY="C"
> > LC_MESSAGES="C"
> > LC_ALL=""
> 
> I get the same for netbsd-8 and netbsd-7.
> 
> Martin

Probably nothing to do with locales. First time grep is run, data is
read from disk into page cache, which takes time. Second time grep is
run with LC_ALL=C but there is no disk I/O, as data is read straight
from page cache, so much faster.

I tried to simulate with a memory file under /var/shm and don't
see any significant difference. 

Create large ASCII file:
rp3$ strings /netbsd >> /var/shm/out.txt
rp3$ for i in `seq 1 100`; do cat /var/shm/out.txt >> /var/shm/out2.txt; done

Run grep with LC_ALL="":
rp3$ time grep -c 'NoSuchString' /var/shm/out2.txt
0
0.84 real 0.23 user 0.56 sys

Run grep with LC_ALL="C":
rp3$ export LC_ALL=C
rp3$ time grep -c 'NoSuchString' /var/shm/out2.txt
0
0.94 real 0.23 user 0.67 sys

rp3$ rm /var/shm/out*


Re: [SOLVED-ish] Performance weirdness with netbsd-9 /usr/bin/grep

2020-05-23 Thread Sad Clouds
On Sat, 23 May 2020 12:30:59 +1000
Paul Ripke  wrote:
 
> So has the default value of LC_ALL (or LC_.*) changed recently?
> 

I don't use any custom locale settings and this is what I get:

rp3$ uname -rm
9.0_STABLE evbarm

rp3$ locale
LANG=""
LC_CTYPE="C"
LC_COLLATE="C"
LC_TIME="C"
LC_NUMERIC="C"
LC_MONETARY="C"
LC_MESSAGES="C"
LC_ALL=""


Re: Securing DNS traffic

2020-05-23 Thread Sad Clouds
On Sat, 23 May 2020 11:38:18 +0200 (CEST)
Havard Eidnes  wrote:

> If you desire to protect your lookup history from prying eyes, it's
> one thing to protect the communication itself.  However, I would
> personally shy away from all of Google, Cloudflare and Mozilla
> recursors, DoH or not.

I was thinking about this as well, but is there any real evidence that
public DNS providers misuse your personal data? Google probably do a
lot of tracking, but this must be all detailed in their privacy policy.

I'm using Cloudflare DNS server and according to their blog:
 
https://blog.cloudflare.com/announcing-/

- They don't track users.
- They don't use data for targeting ads.
- They don't write IP addresses to disk.
- They delete transaction logs within 24 hours.

So we assume they do as they promise, but if they were to break those
promises, I think the regulators would come down on them quite hard. I
don't know much about the legal aspects of it, so who knows.

Privacy is the main reason I'm slowly moving away from Gmail and
setting up my own email server (Postfix + Dovecot) on NetBSD virtual
private server. I'd like to be managing my own email and nobody else.


Re: Securing DNS traffic

2020-05-23 Thread Sad Clouds
Looking at the responses to my original email and doing some further
research, the summary of pluses/minuses would be:

1) unbound(8) resolving via root DNS servers
  + Most accurate results, since it bypasses any intermediaries.
  - Increased lookup time and higher load on authoritative DNS servers.
  - Some servers won't support DNS over TLS, hence my ISP can monitor
those searches.

2) unbound(8) resolving via external forwarders (e.g Cloudflare)
  + Faster lookup time.
  + DNS over TLS is always supported, hence hidden from my ISP.
  - Cloudflare can monitor those searches.
  ? Have to trust Cloudflare that the results are accurate.
  + Cloudflare DNS servers seem to support encrypted SNI.

There is still a separate issue of unencrypted TLS SNI, leaking
information in plain text. Seems like Firefox can support it when the
following config setting is set to true

network.security.esni.enabled


Re: Securing DNS traffic

2020-05-23 Thread Sad Clouds
On Fri, 22 May 2020 20:05:40 -0400
Jeffrey Walton  wrote:

> On Fri, May 22, 2020 at 5:38 PM Sad Clouds
>  wrote:
> >
> > ...
> > What I'm not sure about is this - unbound(8) has "root-hints" that
> > points to root DNS servers and it will handle recursive queries,
> > but it can also specify "forward-zone" where it can forward to
> > Cloudflare or Google recursive DNS servers. Both of these solutions
> > would resolve DNS names. So which one of them takes precedence and
> > under what conditions? Why have both active at the same time? Is
> > one option better/more secure than the other?
> 
> Another option for DNS over HTTPS is Mozilla's servers:
> https://support.mozilla.org/en-US/kb/firefox-dns-over-https.
> 
> Jeff

OK thanks for the link. Sounds like this is specific to Firefox, i.e.
they bundle their own DoH resolver. Still useful to know.


Securing DNS traffic

2020-05-22 Thread Sad Clouds
I've got some spare time on my hands, so I decided to educate myself on
how to secure DNS traffic. I have a small home network with various
devices and most of them use public (Cloudflare or Google) DNS servers.

It seems there are two main security enhancements for DNS:

1. DNSSEC - digital signatures for DNS records to verify they haven't
been tampered with.

2. DNS over TLS - encryption of DNS traffic for privacy. This goes via
port 853 and could be over TCP or UDP (DTLS), although it's not clear
to me if both TCP and UDP are always supported, or if it's mainly TCP.

I assume that NetBSD (and other OSes) libc simple stub resolver doesn't
support any of these options, so I would need to use something like
unbound(8) on a local network.

So I started looking at configuring unbound(8) and came across this
useful guide - https://calomel.org/unbound_dns.html

What I'm not sure about is this - unbound(8) has "root-hints" that
points to root DNS servers and it will handle recursive queries, but it
can also specify "forward-zone" where it can forward to Cloudflare or
Google recursive DNS servers. Both of these solution would resolve DNS
names. So which one of them takes precedence and under what conditions?
Why have both active at the same time? Is one option better/more secure
than the other?

Thanks.


Re: NetBSD Jails

2020-05-21 Thread Sad Clouds
On Thu, 21 May 2020 11:43:18 - (UTC)
mlel...@serpens.de (Michael van Elst) wrote:

> net...@precedence.co.uk (Stephen Borrill) writes:
> 
> >RBAC using kauth was demonstrated in a talk by Alistair Crooks at 
> >EuroBSDCon 2009. Unfortunately, any slides/recordings seem to have
> >been expunged from the UKUUG website
> 
> >https://www.ukuug.org/events/eurobsdcon2009/history/
> 
> https://web.archive.org/web/20091229053351/http://www.ukuug.org/events/eurobsdcon2009/papers/role-based-access-control.pdf
> 

Yes I've seen this paper, there are also other papers that talk about
a proof-of-concept implementation of "secure sandboxes" with kauth:

https://pdfs.semanticscholar.org/db35/dffa143308ec31eb92b1a37bcdfc56f0d37b.pdf
https://2008.asiabsdcon.org/papers/P3A-paper.pdf

The way I understand it - NetBSD has the kernel framework for
implementing RBAC, but there is still work that needs to be done for
implementing specific security models, which could be used by system
administrators.


Re: NetBSD Jails

2020-05-21 Thread Sad Clouds
On Thu, 21 May 2020 00:17:27 -0400
"Aaron B."  wrote:

> There's still networking to worry about after that, but just isolating
> processes in a more useful way is a huge step forward.

You can probably do that. If you use chroot to emulate containers,
simply partition UID and GID assignment into blocks. Each chroot gets a
unique /etc/passwd and /etc/group where IDs are offset by some value.
You just need to make sure to rebuild binary databases. This way
multiple processes that use the same user name (e.g. sshd, postfix,
httpd, etc) and are started inside chroot, run under unique IDs and
cannot send signals to one another.
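
To sketch the ID-offsetting idea (the 10000-per-chroot offset and the
account names below are arbitrary illustrations, not a NetBSD
convention):

```shell
# Derive a chroot-private passwd file where every non-root account gets
# a unique UID/GID block, offset by 10000 per chroot (arbitrary choice).
CHROOT_ID=2
OFFSET=$((CHROOT_ID * 10000))

# Fabricated base passwd entries, for illustration only.
cat > /tmp/passwd.base <<'EOF'
root:*:0:0:Charlie Root:/root:/bin/sh
postfix:*:12:12:Postfix:/var/spool/postfix:/sbin/nologin
sshd:*:16:16:SSH privsep:/var/chroot/sshd:/sbin/nologin
EOF

# Offset UID ($3) and GID ($4) of everything except root.
awk -F: -v off="$OFFSET" 'BEGIN { OFS=":" }
    $3 != 0 { $3 += off; $4 += off }
    { print }' /tmp/passwd.base > /tmp/passwd.chroot2

# Copy the result into the chroot, then run pwd_mkdb(8) there to rebuild
# the binary password databases from the edited file.
cat /tmp/passwd.chroot2
```

So chroot number 2 would see postfix as 20012:20012, chroot number 3 as
30012:30012, and so on, and the kernel treats them as distinct users.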

There is no isolation for networking. You can assign multiple aliased
IP addresses to a single interface, but they are all visible and
accessible inside chroot. You need to be really careful about which
listening sockets you create and avoid wildcard addresses. NetBSD has
kauth(9) framework which could be used for RBAC, so potentially you could
restrict process access to specific IP addresses, but someone has to
write kernel modules and user applications that implement RBAC.


Re: NetBSD Jails

2020-05-20 Thread Sad Clouds
On Tue, 19 May 2020 21:26:02 -0700
"Greg A. Woods"  wrote:

> So what more is needed, beyond chroot and login classes, to make
> possible the kinds things like allowing a customer to install web-app
> "plugins" to their instance of a web server?  I can't think of
> _anything_ else that's _actually_ needed, other than management
> tooling to make it all clickety-web-GUI-ish.  You certainly don't
> need/want to give them root in their chroot.

Some things can be achieved with chroot and various other tools in
NetBSD, other things are not going to work with chroot. It's nothing to
do with GUI management, but the fundamental architecture of chroot.

I've started looking into this some time ago, as I wanted to partition
my applications into isolated zones, without using Xen or other
hypervisors. I don't use NetBSD for anything serious, so not concerned
about security implications at the moment, as this is mostly a toy
project.

So it is mainly looking at what NetBSD provides to restrict and manage
resources (CPU and memory limits, Veriexec and other security
frameworks, Rump, mount_null and mount_union, QoS for disk and network
I/O, etc). Not quite sure how this will work out in the end.


Re: NetBSD Jails

2020-05-19 Thread Sad Clouds
On Mon, 18 May 2020 18:15:53 -0700
"Greg A. Woods"  wrote:

> I still think the security and complexity issues with containers, are
> a very much bigger concern than the pure efficiency losses of running
> full VMs.  When it's all hidden behind a single command ("docker pull
> nginx") then it's too easy to ignore the problems and so that's what
> people do
> -- they take the easy street.

I agree with you about "security and complexity", which is why I'm not
that keen on virtualization in general. It is part of the way the
industry has evolved and it is going to be around for a while. The
trend has been to build these "Russian doll" architectures - there is a
hypervisor OS that runs various virtualized OSes, which in turn run
various virtual machines and interpreters like Java, Python, etc. And
this of course means paying the Intel tax for bigger and more expensive
CPUs, more memory, more disk space, etc. Instead of conserving
resources, we are wasting them.

I think (and I could be wrong) that a lot of overheads could be
eliminated with simpler designs, where software runs as close as
possible to the hardware. There is an old blog post from 2013 which
compares the performance of Solaris Zones vs Xen vs KVM. And if you
don't need to
run different versions of OSes, then Solaris Zones seem to be the most
efficient approach:

http://dtrace.org/blogs/brendan/2013/01/11/virtualization-performance-zones-kvm-xen/

You may find this video quite entertaining. The guy rambles on
occasionally, but he tells some interesting historical facts:

https://youtu.be/coFIEH3vXPw



Re: NetBSD Jails

2020-05-19 Thread Sad Clouds
On Tue, 19 May 2020 08:10:00 +0930
Brett Lymn  wrote:

> On Sat, May 16, 2020 at 09:51:42AM +0100, Sad Clouds wrote:
> > 
> > Just look at how Solaris does it - it has Zones (aka Jails) and
> > LDOMs (Logical Domains) on SPARC. LDOMs seem to be a much better
> > way of partitioning OS instances versus something like VMware or
> > Xen.
> > 
>  almost but not quite. A SPARC LDOM is more of a hardware
> partitioning, cpu and memory are dedicated to the LDOM for its
> exclusive use.  You can configure some dynamism by allowing cpu to be
> added/removed automatically when the load average increases/decreases
> but this relies on the guest doing dynamic reconfiguration.

That's exactly what I was referring to. Yes this is specific to SPARC
where they have a very small firmware hypervisor. The advantage is how
hardware resources are dedicated to a specific domain, so the OS can
use them directly with very little overhead.

Xen supports "PCI passthrough", which can reduce the overheads when a
guest accesses a passed-through device. But I don't think it has
anything like "CPU or memory passthrough"; it supports "CPU pinning",
however I think you still end up with excessive cache and TLB thrashing.

I haven't actually done any detailed benchmarking at the hardware
level, so could be talking complete nonsense. This is just the
impression I got when reading various technical manuals over the last
few years.


Re: smbd kills inetd?

2020-05-19 Thread Sad Clouds
On Mon, 18 May 2020 20:32:16 +
Hauke Fath  wrote:

> OTOH, when I run '/etc/rc.d/inetd start', my client cannot connect,
> and inetd dies quietly - no core, no log entry anywhere.
> 
> What gives?

Can you trace system calls to see what happens? See ktrace(1). This may
give you an idea.
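A possible sequence (the trace file name is an assumption; see
ktrace(1) and kdump(1)):

```
# Trace inetd and any children it forks (-i), writing to a file
ktrace -i -f /tmp/inetd.kt /etc/rc.d/inetd start
# Once inetd has died, decode the trace to look for the failing call
kdump -f /tmp/inetd.kt | less
```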


Re: NetBSD Jails

2020-05-17 Thread Sad Clouds
On Sun, 17 May 2020 14:07:21 -0500
Ted Spradley  wrote:

> How well will all this modern container and virtualization stuff work
> on the older platforms that only have megabytes of memory, not
> gigabytes?

Quite well, since containers are very lightweight. It's not the
container technology that sucks beyond belief, but the bloated user
applications that need gigabytes of memory to run basic tasks.


Re: NetBSD Jails

2020-05-17 Thread Sad Clouds
On Sun, 17 May 2020 12:06:36 -0700
"Greg A. Woods"  wrote:

> If you actually really need a fully isolated and completely full
> featured environment where you can run complex applications in
> "reasonably secure" sandbox style isolation then why not choose the
> best possible hardware you can afford that supports a full virtual
> machine environment such as Xen, or nvmm/bhyve with qemu or
> virtualbox, etc.? (e.g. I bought a used Dell server for about $500
> and I can run Xen with many domUs on it very efficiently)

Your main gripe about jails/zones/containers is added complexity. Well,
guess what: with Xen/VMware/VirtualBox the complexity is still there,
you have just pushed it over to the hypervisor vendor.

If you run multiple instances of the same OS version in Xen/VMware,
that is a pretty inefficient way to partition your application domains.
Each instance has a duplicate copy of the kernel, drivers, libraries,
etc. The overhead and latency of scheduling and context switching
various processes in multiple OSes are also not insignificant. Also
forget about chroot; it is not an enterprise solution.

Containers scale much better, with lower memory usage and less CPU
overhead. For a home network server it doesn't matter so much, but if
you are in the business of selling cloud services, then containers are
the way to go. Yes, you need a solid container OS with virtualized
network and storage implementations, so for NetBSD it probably means
re-writing parts of those subsystems from scratch. It has been done for
other OSes and works quite well.


Re: NetBSD Jails

2020-05-16 Thread Sad Clouds
On Fri, 15 May 2020 15:23:32 -0700
"Greg A. Woods"  wrote:

> I'm curious about what this means to you -- what do you need/want in
> addition to the chroot environments you now have?

Here is a good comparison:
https://en.wikipedia.org/wiki/OS-level_virtualization#Implementations

Just look at how Solaris does it - it has Zones (aka Jails) and LDOMs
(Logical Domains) on SPARC. LDOMs seem to be a much better way of
partitioning OS instances versus something like VMware or Xen.

NetBSD needs a lot of catching up in this area, so I'm not sure it will
happen any time soon, but who knows.


Re: pkg and downloaded files

2020-05-14 Thread Sad Clouds
On Thu, 14 May 2020 23:25:40 +0200
Riccardo Mottola  wrote:

> Do I miss something in my NFS config here? maybe server-side

By default NFS maps the root user to nobody, hence you won't have the
correct permissions. See the exports(5) man page and the -maproot
option. You probably need to explicitly map root to root, i.e.
-maproot=root:wheel
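For example, an entry on the NFS server might look like this (the
exported path and client name are assumptions):

```
# /etc/exports on the NFS server -- illustrative entry
/usr/pkgsrc/distfiles -maproot=root:wheel client.example.org
```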


Re: pkg and downloaded files

2020-05-14 Thread Sad Clouds
On Thu, 14 May 2020 19:05:56 +0200
Riccardo Mottola  wrote:

> Hi All,
> 
> where do source files get saved?

This setting in your mk.conf tells pkgsrc where to save distfiles:

DISTDIR=
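For example (the path shown is an assumption; pkgsrc's default is
${PKGSRCDIR}/distfiles):

```
# /etc/mk.conf
DISTDIR=/usr/pkgsrc/distfiles
```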

