Re: NetBSD Jails

2020-05-16 Thread Greg A. Woods
At Sat, 16 May 2020 22:52:24 -0400, "Aaron B."  wrote:
Subject: Re: NetBSD Jails
>
> It also doesn't solve the ultimate issue here, which is isolation: a
> user (in the kernel sense of user, not necessarily a human logged in via
> SSH) in one chroot could run 'ls' or equivalent syscalls and see
> activity inside a different chroot.

Hmmm... is this a real threat model?  Or just a "nice to have"?

(and maybe not "ls" -- else your chroot is leaking, but even if I can
run "ps" and see all the processes on the system, is that a problem?)

I know some people do allow human users to login to FreeBSD "jails", but
I really have to wonder why.  I think if you want to give human users
the idea that they have their own machine then you really do need to
give them a whole VM (at least with Unix/POSIX systems -- modernized
multics-like systems might be a better way).

However with just some process(es) running a service in a chroot, isn't
this only a threat when you assume the process(es) _will_ _always_ be
compromised with shell-equivalent or remote-code exploits?  Otherwise
they're running vetted code that can only do as it is told!  Yes there's
always an exploit risk, but what's the threat model where the highest
risk of such an exploit is that one exploited instance can see what
processes another un-exploited instance is running?  Maybe they can find
other like jails which can also be exploited, but still, what's the
threat model?  More web pages to deface?  Really?


> To solve the problem this way, I have to rebuild the chroot with a
> custom nginx configuration for every place I wish to deploy it - or
> manually manipulate its configuration after deployment.
>
> This defeats the entire point of a container system: I can build the
> desired nginx environment once, and then deploy it wherever it is
> needed with no modifications. Being forced to customize it for 30+
> locations, however that customization is done, doesn't scale very well
> with the human as the bottleneck.

I guess this is one thing I just don't understand about the "modern" way
of bundling unrelated but like services onto a single OS instance.

I really would expect to have to configure each individual instance in
at least some minor way.

After all you have to do the per-instance configuration _somewhere_.

Does it really matter whether the configuration is done in the
"container"/"jail" layer, or directly to the in the per-instance config
files?

Personally I'd do it in the per-VM /etc filesystems, which all sit
adjacent to each other on the file server, rather than in each
chroot/jail directory.  You can then edit them all in parallel with one
database-driven configuration management tool.
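
(For instance, something as simple as the following sketch would do it,
assuming each VM's /etc lives under a common tree on the file server and
the per-instance data comes out of one central table -- all of the paths
and file names here are made up:)

#!/bin/sh
# sketch only: regenerate one per-instance config file in every VM's /etc
# from a single central table ("vmname address" pairs, one per line)
table=/export/vm-addresses.txt
for etc in /export/vmroots/*/etc; do
    vm=$(basename "$(dirname "$etc")")
    addr=$(awk -v vm="$vm" '$1 == vm { print $2 }' "$table")
    [ -n "$addr" ] || continue
    printf 'listen_addr=%s\n' "$addr" > "$etc/myservice.conf"
done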


> The way I see it: containers shouldn't be thought of like virtual
> machines, but more like installed packages.

If only they were that simple (i.e. that "containers" are just boxed-up,
replicable add-on packages).  Typically they end up being many orders of
magnitude more complex, especially in the hidden underlying layers.

Actual real VMs (when well designed) will have some (virtual)
layers, but they're all transparent w.r.t. whether there's bare hardware
underneath, or any number of VMs running together; and with fewer
differences, fewer complexities, and fewer parts to break or hide bugs.
It's POSIX all the way down (and across), not a different way of doing
things in each different kind of "container" environment.


> It's fine grained in ways I don't care about, and also doesn't control
> things I do care about.
>
> I don't care how much core memory is locked, that you have a valid home
> directory, how big your stack size is - I'd like to say "you get 1
> gigabyte, use it as you want." It makes perfect sense when you have 100
> humans sharing a computer. It makes a lot less sense when you have 100
> applications sharing a computer that only one human has a shell account
> for.

Regarding existing kernel resource controls (rlimits) being unsuitable:

So, OK, what resource controls do you _really_ need?

I can imagine that bandwidth limits (and accounting) would be
interesting.  I think, though I have never done it, that limits can be
achieved, albeit with some complexity, using altq and bridges.  I
studied ideas for implementing per-user/route/address/etc. bandwidth
accounting some time ago but didn't come up with any great solutions.
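
The bridge half of that is at least straightforward to sketch on NetBSD
(interface names are examples; the actual altq shaping classes would go
in /etc/altq.conf for altqd(8), and I won't guess at those here):

# sketch only: put the physical interface and a per-service tap behind one
# bridge so altq shaping can then be attached to a member interface
ifconfig tap0 create
ifconfig tap0 up
ifconfig bridge0 create
brconfig bridge0 add wm0 add tap0 up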

I/O rate limiting on the filesystem, well that's a whole different
kettle of fish.  I don't even know if it makes sense once the main
bottleneck of network bandwidth is under control.


> Since CPU cycles are cheap, there's also this: the full VM route brings
> quite a bit of human overhead as well. Now I (usually) have to maintain
> a copy of the entire operating system for every application I wish to
> run. I need to centralize authentication services and logging. Tools
> like Ansible come into play to ensure VM's don't drift away from a
> common specification. Different OS versions cause random
> incompatibilities ...

Re: Python flask app behind bozohttpd vs separate webserver for app

2020-05-16 Thread matthew sporleder
On Sat, May 16, 2020 at 11:30 PM Mayuresh  wrote:
>
> On Sat, May 16, 2020 at 03:56:47PM -0400, matthew sporleder wrote:
> > Just use apache :)  bozo has no proxy capabilities.
>
> I can invoke a Flask service from a cgi script using wget/curl - no?
> (Haven't yet tried.)
>
> Apache looks like a monster to me and I don't need the majority of what it offers.
>
> Mayuresh

FWIW I wrote the initial version of this:
https://cwiki.apache.org/confluence/display/HTTPD/Minimal+Config  and
at my job we run zero-config apache on docker by just appending
-c's to the CMD.

With mpm_event, the minimal config, and whatever else you need
(mod_proxy_uwsgi etc.) you can easily live inside of a few MB and still
get high performance.


Re: Python flask app behind bozohttpd vs separate webserver for app

2020-05-16 Thread Mayuresh
On Sat, May 16, 2020 at 03:56:47PM -0400, matthew sporleder wrote:
> Just use apache :)  bozo has no proxy capabilities.

I can invoke a Flask service from a cgi script using wget/curl - no?
(Haven't yet tried.)

Apache looks like a monster to me and I don't need the majority of what it offers.

Mayuresh


Re: NetBSD Jails

2020-05-16 Thread Aaron B.
On Sat, 16 May 2020 10:57:55 -0700
"Greg A. Woods"  wrote:

> Perhaps all that's required is a tool which extracts the minimum
> required entries from the real /etc/master.passwd for each chroot?
> (and some way to maintain chroot copies?)
> 
> (Another way would be a new service behind nsdispatch(3) which would
> provide access through the chroot, e.g. via a socket, to the shared
> /etc/master.passwd, though that would assume all chrooted programs use
> only the "standard" interfaces supported by nsdispatch(3).)
> 

I've thought about keeping a SQLite database outside the chroots that
tracks which deployed instance is using which UIDs, and rewriting
master.passwd within each at deployment.
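
(For reference, a rough sketch of what that database might look like --
the table and column names are made up, and the UNIQUE constraint on uid
is what would catch accidental reuse:)

sqlite3 /var/db/chroot-uids.db <<'EOF'
CREATE TABLE IF NOT EXISTS uid_map (
    instance TEXT NOT NULL,     -- e.g. the chroot directory name
    account  TEXT NOT NULL,     -- account name inside that chroot
    uid      INTEGER NOT NULL,
    UNIQUE (instance, account),
    UNIQUE (uid)                -- refuse to hand out the same UID twice
);
EOF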

...or I could simply not, keep tracking the UIDs in a spreadsheet
like I do now, and not have to deal with maintaining that code should
jails ever be implemented.  I have too many other things asking for my
time to worry about implementing a feature that could be temporary.

It also doesn't solve the ultimate issue here, which is isolation: a
user (in the kernel sense of user, not necessarily a human logged in via
SSH) in one chroot could run 'ls' or equivalent syscalls and see
activity inside a different chroot.


> > - All chroots share the same network stack. If I tell nginx to bind to
> > '0.0.0.0' or '::', the first instance will startup fine, the others
> > will fail with "address already in use."
> 
> Well if you're chrooting multiple instances of the same service, isn't
> it obvious that each has to listen on one and only one specific address?
> If I understand correctly one could also route a subnet via a bridge
> interface to each chrooted service.  Maybe a chrooted process should
> also be prevented from listening to a wildcard address?
> 

To solve the problem this way, I have to rebuild the chroot with a
custom nginx configuration for every place I wish to deploy it - or
manually manipulate its configuration after deployment.

This defeats the entire point of a container system: I can build the
desired nginx environment once, and then deploy it wherever it is
needed with no modifications. Being forced to customize it for 30+
locations, however that customization is done, doesn't scale very well
with the human as the bottleneck.


> I've heard FreeBSD folks go on for days about how FreeBSD's "jails" make
> network management simpler, but I still don't have any real
> understanding of exactly what this means. 

It's a completely different mindset that takes some long held
assumptions and turns them upside down.

The way I see it: containers shouldn't be thought of like virtual
machines, but more like installed packages. Specifically, packages that
have state which is completely disconnected from the state of the base
operating system.  Long story short, I came to see large VMware
deployments running nothing but hundreds of near-identical Linux VMs as
papering over the fact that package managers don't keep track of
application state.

But this is all a digression. No one has to do it this way. I've found
jails are usable as virtual machines if you understand and accept the
(big) limitations.


> > The wiki's projects list has a
> > clean solution to this particular point, which may or may not be within
> > scope of jails:
> >
> > https://wiki.netbsd.org/projects/project/virtual_network_stacks/
> 
> Virtual network stacks seem to be a rather complex solution looking for
> a problem -- i.e. in actual fact more of a problem looking for trouble.
> 

I understood the task as a research project -- it may or may not pay
dividends in unexpected ways.  It would definitely be helpful to me,
but I don't expect to see it implemented anytime soon.  You asked what
I wanted to see; I answered :)


> > - Some way to set per-chroot resource limits would be helpful. I can
> > manipulate ulimits, but that is basically driving screws with a hammer.
> > It's simply the wrong tool.
> 
> Well here's where /etc/login.conf solves most problems for normal chroot
> environments, since only ordinary users should be running inside the
> chroot.
> 
> (Or it could, if there were resource controls related to networking. :-))
> 

It's fine grained in ways I don't care about, and also doesn't control
things I do care about.

I don't care how much core memory is locked, that you have a valid home
directory, how big your stack size is - I'd like to say "you get 1
gigabyte, use it as you want." It makes perfect sense when you have 100
humans sharing a computer. It makes a lot less sense when you have 100
applications sharing a computer that only one human has a shell account
for.


> For anything beyond that I'm pretty certain that a full virtual machine
> is the real solution.  Personally I think full VMs are the K.I.S.S. of
> death for "containers" (especially once you see the nightmare of
> complexity that underlies them in the kernel, i.e. CGroups).  I would
> much rather have the clearly structured overhead of the VM instead of
> the hidden overhead and excessive complexity of "containers", or even
> just some of the bits of them like virtual network stacks.

Re: NetBSD Jails

2020-05-16 Thread Niels Dettenbach



On 16.05.2020 at 00:23, Greg A. Woods  wrote:
> 
> I'm curious about what this means to you -- what do you need/want in
> addition to the chroot environments you now have?
At least dedicated "networking" / network isolation, but ideally also
resource "isolation" / system limits etc., i.e. similar to what FreeBSD
jails have offered for decades.

More and more applications today are developed with "containerization" in
mind with regard to security etc. -- not only in multi-instance / multi-user
/ multi-customer setups.

So I too assume that jails / containers will be getting even more attention
again as a feature (and I would use them too).

niels.

—
Niels Dettenbach
https://www.syndicat.com
https://www.syndicat.com/pub_key.asc




Re: pkgin error

2020-05-16 Thread matthew sporleder
On Sat, May 16, 2020 at 8:42 AM Martin Neitzel
 wrote:
>
> ill> Same here.
> ill>
> ill> $ echo "select file_size from remote_pkg where pkgname like
> ill> 'xmlcatmgr%'" | sqlite3 pkgin.db
> ill> 25004
> ill>
> ill> $ ftp
> ill> 
> https://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.0/All/xmlcatmgr-2.2nb1.tgz
> ill> 24864 bytes retrieved in 00:00 (16.63 MiB/s)
>
> Some observations on this:
>
> % echo 0 1 2 | xargs -n1 -I XX lynx -head -dump 
> http://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.XX/All/xmlcatmgr-2.2nb1.tgz
>  | grep Length
> Content-Length: 24864
> Content-Length: 24864
> Content-Length: 25004
>
> This actually figures with my(!, see below) long "select" info:
>
> sqlite> select *  from remote_pkg where pkgname like 'xmlcat%' ;
>   PKG_ID = 21533
>  FULLPKGNAME = xmlcatmgr-2.2nb1
>  PKGNAME = xmlcatmgr
>  PKGVERS = 2.2nb1
>   BUILD_DATE = 2020-03-28 20:22:48 +
>  COMMENT = XML and SGML catalog manager
>  LICENSE = modified-bsd
> PKGTOOLS_VERSION = 20091115
> HOMEPAGE = http://xmlcatmgr.sourceforge.net/
>   OS_VERSION = 8.0
>  DESCRIPTION =
>  PKGPATH = textproc/xmlcatmgr
>  PKG_OPTIONS =
>   CATEGORIES = textproc
> SIZE_PKG = 50583
>FILE_SIZE = 25004
>OPSYS = NetBSD
>   REPOSITORY = 
> http://cdn.Netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.2/All
>
>
> Looks like Roland is rather using the 8.0 repo?
>
> I essentially noticed the same problem here, too, after...
>
> - an update on the netbsd-8 branch on May 2nd and
> - moving my /usr/pkg/etc/pkgin/repositories.conf
>   from http://cdn.NetBSD.org/pub/pkgsrc/packages/NetBSD/amd64/8.1/All
>   to   http://cdn.Netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.2/All
> - having nothing at all happen on "pkgin update" until
>   May 11 00:52 /var/db/pkgin/pkgin.db
>
> For "pkgin upgrade", this resulted in:
>
> 31 packages to refresh:  (xmlcatmgr-2.2nb1 ... ... ...)
> 19 packages to upgrade:
> 2 packages to install:  heimdal-1.5.3nb24 openssl-1.1.1e
>
> (I was mostly surprised about the "refresh" section.  Where does this
> come from, what is this supposed to mean?)
>
> Since I'm referring to 8.2 pkg repository, details differ for me,
> I guess I am seeing the same problem but maybe from the other side.
>
> I can still confirm the problem / the error message seen from
> my side, albeit with other packages.
>
> I get the error message with:
> download error: p5-Authen-SASL-2.16nb7 size does not match pkg_summary
>
> and [abridged]:
>
> sqlite> select *  from remote_pkg where pkgname like 'p5-Authen-SASL' ;
>   PKG_ID = 6173
>  FULLPKGNAME = p5-Authen-SASL-2.16nb7
>   BUILD_DATE = 2020-04-01 03:57:23 +
>   OS_VERSION = 8.0
> SIZE_PKG = 119267
>FILE_SIZE = 24892
>   REPOSITORY = 
> http://cdn.Netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.2/All
>
> % echo 0 1 2 | xargs -n1 -I XX lynx -head -dump 
> http://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.XX/All/p5-Authen-SASL-2.16nb7.tgz
>  | grep Length
> Content-Length: 24900
> Content-Length: 24892
> Content-Length: 24900
>
> D'oh!
>
>
> This is not the first time this madness happens.  The last time was around
> last summer and some kind fellow on the ircnet #netbsd pointed out to me
> that the URLs
>
> http://cdn.Netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/...
> http://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/...
>^
>|
>
> yielded different data.  Both hostnames resolved to the same IP addresses
> (as they should) but the Fastly CDN servers were apparently treating the
> requests in different ways depending on the case in the Host: headers.
>
> Martin Neitzel

I fixed the host header thing when that was pointed out.

Anyway try it now.


Re: NetBSD Jails

2020-05-16 Thread Greg A. Woods
At Fri, 15 May 2020 20:18:28 -0400, "Aaron B."  wrote:
Subject: Re: NetBSD Jails
>
> - Processes can "see" each other; I have to be careful not to reuse
> UID numbers. For example: if I build a chroot with an instance of nginx
> that runs as UID 2505, and then deploy multiple copies of that chroot,
> all of them can call kill(2) on a process in a different chroot.

Perhaps all that's required is a tool which extracts the minimum
required entries from the real /etc/master.passwd for each chroot?
(and some way to maintain chroot copies?)

(Another way would be a new service behind nsdispatch(3) which would
provide access through the chroot, e.g. via a socket, to the shared
/etc/master.passwd, though that would assume all chrooted programs use
only the "standard" interfaces supported by nsdispatch(3).)

> - All chroots share the same network stack. If I tell nginx to bind to
> '0.0.0.0' or '::', the first instance will startup fine, the others
> will fail with "address already in use."

Well if you're chrooting multiple instances of the same service, isn't
it obvious that each has to listen on one and only one specific address?
If I understand correctly one could also route a subnet via a bridge
interface to each chrooted service.  Maybe a chrooted process should
also be prevented from listening to a wildcard address?
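
Roughly, for the one-address-per-instance approach (interface name and
addresses are examples; each instance's nginx.conf then gets a matching
"listen" line):

# sketch only: one alias per chrooted instance, each nginx binds to its own
ifconfig wm0 inet 192.0.2.11 netmask 255.255.255.255 alias    # instance A
ifconfig wm0 inet 192.0.2.12 netmask 255.255.255.255 alias    # instance B
# instance A's nginx.conf:  listen 192.0.2.11:80;
# instance B's nginx.conf:  listen 192.0.2.12:80;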

I've heard FreeBSD folks go on for days about how FreeBSD's "jails" make
network management simpler, but I still don't have any real
understanding of exactly what this means.  The only thing that seems to
be of interest is in allowing a chrooted "root" user(*) (e.g. to allow
someone inside the jail to muck around with interface addresses, routes,
etc.), but I would suggest that allowing root inside a chroot is a very
very bad idea no matter what "jails" features you might think will
protect you (i.e. I would never trust a chrooted root user, "jails" or
no jails).

(*) "chrooted root user" -- i.e. the "root" user in the jail can only do
things (as superuser ID#0) to those resources, e.g. interfaces, routes,
etc. that are delegated to the jail.

> The wiki's projects list has a
> clean solution to this particular point, which may or may not be within
> scope of jails:
>
> https://wiki.netbsd.org/projects/project/virtual_network_stacks/

Virtual network stacks seem to be a rather complex solution looking for
a problem -- i.e. in actual fact more of a problem looking for trouble.

> - Some way to set per-chroot resource limits would be helpful. I can
> manipulate ulimits, but that is basically driving screws with a hammer.
> It's simply the wrong tool.

Well here's where /etc/login.conf solves most problems for normal chroot
environments, since only ordinary users should be running inside the
chroot.

(Or it could, if there were resource controls related to networking. :-))
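
For the plain-rlimit side of it, the sort of thing I mean is just this
(class name, user name, and the limit values are examples):

# sketch only: a login class for chrooted service accounts
cat >> /etc/login.conf <<'EOF'
chrooted-svc:\
	:datasize=1024M:\
	:memoryuse=1024M:\
	:maxproc=64:\
	:openfiles=1024:
EOF
cap_mkdb /etc/login.conf
usermod -L chrooted-svc nginx   # "nginx" being the example service account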

For anything beyond that I'm pretty certain that a full virtual machine
is the real solution.  Personally I think full VMs are the K.I.S.S. of
death for "containers" (especially once you see the nightmare of
complexity that underlies them in the kernel, i.e. CGroups).  I would
much rather have the clearly structured overhead of the VM instead of
the hidden overhead and excessive complexity of "containers", or even
just some of the bits of them like virtual network stacks.

All this said though I would note that perhaps re-engineering the whole
network stack in the netgraph way (perhaps directly using netgraph[1])
would provide some form of "virtualization" for network things in a
clean and structured way.

[1] https://en.wikipedia.org/wiki/Netgraph

One thing I would like to have for making VMs easier to manage though is
a filesystem that can be shared read-only as a block device --
i.e. through xbdback(4) to xbd(4), or the equivalent for other kinds of VMs.
(I say this because I abhor the use of NFS to share to VM domUs.)

--
Greg A. Woods 

Kelowna, BC +1 250 762-7675   RoboHack 
Planix, Inc.  Avoncote Farms 



Python flask app behind bozohttpd vs separate webserver for app

2020-05-16 Thread Mayuresh
I have a setup where NetBSD's native bozohttpd serves certain static
contents. It uses https and basic auth and the setup is pretty stable.

There are some small requirements to add some Python web apps developed
using Flask.  The apps need to use the same authentication data used by
the basic setup above.  Fortunately that turns out to be easy.

Now I have two options:

Option 1: Run a separate instance of a python based web server such as
gunicorn on a separate port for the apps and use bozohttpd for static
contents.

I have to start gunicorn as root so that it can get access to certificate
files, but run the worker process as a non root user which is an option
provided by gunicorn.

One problem with this is that I don't get the environment variables right
this way.  There are ways to feed those into gunicorn, but that would
amount to replicating my rc scripts that elaborately define the environment.
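
One way around that might be to let the same rc framework start gunicorn,
e.g. a minimal rc.d script along these lines (the paths, port, user and
app module are made-up examples; --user, --certfile and --keyfile are
standard gunicorn options):

#!/bin/sh
#
# PROVIDE: gunicorn
# REQUIRE: NETWORKING
# (enable with gunicorn=YES in /etc/rc.conf)

. /etc/rc.subr

name="gunicorn"
rcvar=$name
pidfile="/var/run/${name}.pid"
command="/usr/pkg/bin/gunicorn"
command_args="--daemon --pid ${pidfile} --bind 0.0.0.0:8443 \
    --user flaskapp --certfile /etc/openssl/certs/server.crt \
    --keyfile /etc/openssl/private/server.key myapp:app"
# gunicorn is a Python script, so rc.subr may need command_interpreter
# set to the matching python binary for "status"/"stop" to work.

load_rc_config $name
run_rc_command "$1"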

Option 2: Run the Python web server on localhost and route the
requests/responses to/from it via bozohttpd CGI.  There may be a slight
overhead to this, but it's not a performance-critical application.

I can also use plain http and Flask's simple builtin server internally,
as it is not exposed to the internet.
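
A sketch of what the CGI shim for option 2 might look like, assuming the
Flask app listens on a made-up localhost port and bozohttpd runs this as
an ordinary CGI script:

#!/bin/sh
# sketch only: forward the request to the local Flask instance and relay
# the response; the port is an example, and binary responses / large POST
# bodies would need more care than this
backend="http://127.0.0.1:5000"
url="${backend}${PATH_INFO}?${QUERY_STRING}"
if [ "$REQUEST_METHOD" = "POST" ]; then
    curl -s -i -X POST --data-binary @- -H "Content-Type: ${CONTENT_TYPE}" "$url"
else
    curl -s -i "$url"
fi | sed '1s,^HTTP/[^ ]* ,Status: ,'   # turn the HTTP status line into a CGI Status: header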

Would appreciate any comments / pros and cons etc.

Mayuresh


firefox68 / netbsd-8 : mostly black window

2020-05-16 Thread is
Hi,

am I the only one experiencing this?

lots of

[4147, Main Thread] WARNING: failed to open shm: Not supported: file 
/pkg_comp/obj/pkgsrc/www/firefox68/default/firefox-68.7.0/ipc/chromium/src/base/shared_memory_posix.cc,
 line 142
Crash Annotation GraphicsCriticalError: |[0][GFX1-]: Failed to lock new back 
buffer. (t=2.47329) |[586][GFX1-]: Failed to lock new back buffer. (t=41.8046) 
|[587][GFX1-]: Failed to lock new back buffer. (t=41.8443) |[588][GFX1-]: 
Failed to 

on stdout/err

The window stays with a white background, with some black rectangles
(e.g., blindly opening a menu adds a corresponding rectangle).

Blindly opening a URL shows the title as the window's title (rendered by
the window manager) -- only the rendering seems to be broken.

This is 

NetBSD henrietta 8.1 NetBSD 8.1 (GENERIC) #3: Sun Jun 16 12:30:46 CEST 2019  
is@neuerserver:/var/itch/sources/8.0/obj.amd64/sys/arch/amd64/compile/GENERIC 
amd64

build environment was 8.0ish, firefox 68.7 from 2020Q1.

-is


Re: pkgin error

2020-05-16 Thread Martin Neitzel
ill> Same here.
ill>
ill> $ echo "select file_size from remote_pkg where pkgname like
ill> 'xmlcatmgr%'" | sqlite3 pkgin.db
ill> 25004
ill>
ill> $ ftp
ill> 
https://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.0/All/xmlcatmgr-2.2nb1.tgz
ill> 24864 bytes retrieved in 00:00 (16.63 MiB/s)

Some observations on this:

% echo 0 1 2 | xargs -n1 -I XX lynx -head -dump 
http://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.XX/All/xmlcatmgr-2.2nb1.tgz
 | grep Length
Content-Length: 24864
Content-Length: 24864
Content-Length: 25004

This actually figures with my(!, see below) long "select" info:

sqlite> select *  from remote_pkg where pkgname like 'xmlcat%' ;
  PKG_ID = 21533
 FULLPKGNAME = xmlcatmgr-2.2nb1
 PKGNAME = xmlcatmgr
 PKGVERS = 2.2nb1
  BUILD_DATE = 2020-03-28 20:22:48 +
 COMMENT = XML and SGML catalog manager
 LICENSE = modified-bsd
PKGTOOLS_VERSION = 20091115
HOMEPAGE = http://xmlcatmgr.sourceforge.net/
  OS_VERSION = 8.0
 DESCRIPTION = 
 PKGPATH = textproc/xmlcatmgr
 PKG_OPTIONS = 
  CATEGORIES = textproc
SIZE_PKG = 50583
   FILE_SIZE = 25004
   OPSYS = NetBSD
  REPOSITORY = 
http://cdn.Netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.2/All


Looks like Roland is rather using the 8.0 repo?

I essentially noticed the same problem here, too, after...

- an update on the netbsd-8 branch on May 2nd and
- moving my /usr/pkg/etc/pkgin/repositories.conf
  from http://cdn.NetBSD.org/pub/pkgsrc/packages/NetBSD/amd64/8.1/All
  to   http://cdn.Netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.2/All
- having nothing at all happen on "pkgin update" until
  May 11 00:52 /var/db/pkgin/pkgin.db

For "pkgin upgrade", this resulted in:

31 packages to refresh:  (xmlcatmgr-2.2nb1 ... ... ...)
19 packages to upgrade:
2 packages to install:  heimdal-1.5.3nb24 openssl-1.1.1e

(I was mostly surprised about the "refresh" section.  Where does this
come from, what is this supposed to mean?)

Since I'm referring to 8.2 pkg repository, details differ for me,
I guess I am seeing the same problem but maybe from the other side.

I can still confirm the problem / the error message seen from
my side, albeit with other packages.

I get the error message with:
download error: p5-Authen-SASL-2.16nb7 size does not match pkg_summary

and [abridged]:

sqlite> select *  from remote_pkg where pkgname like 'p5-Authen-SASL' ;
  PKG_ID = 6173
 FULLPKGNAME = p5-Authen-SASL-2.16nb7
  BUILD_DATE = 2020-04-01 03:57:23 +
  OS_VERSION = 8.0
SIZE_PKG = 119267
   FILE_SIZE = 24892
  REPOSITORY = 
http://cdn.Netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.2/All

% echo 0 1 2 | xargs -n1 -I XX lynx -head -dump 
http://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.XX/All/p5-Authen-SASL-2.16nb7.tgz
 | grep Length
Content-Length: 24900
Content-Length: 24892
Content-Length: 24900

D'oh!


This is not the first time this madness happens.  The last time was around
last summer and some kind fellow on the ircnet #netbsd pointed out to me
that the URLs

http://cdn.Netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/...
http://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/...
   ^
   |

yielded different data.  Both hostnames resolved to the same IP addresses
(as they should) but the Fastly CDN servers were apparently treating the
requests in different ways depending on the case in the Host: headers.
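
That kind of difference is easy to check by hand, e.g. (same URL both
times, only the case in the Host: header differs):

% curl -sI -H 'Host: cdn.netbsd.org' \
    http://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.2/All/p5-Authen-SASL-2.16nb7.tgz \
    | grep -i content-length
% curl -sI -H 'Host: cdn.Netbsd.org' \
    http://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.2/All/p5-Authen-SASL-2.16nb7.tgz \
    | grep -i content-length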

Martin Neitzel


Re: NetBSD Jails

2020-05-16 Thread Sad Clouds
On Fri, 15 May 2020 15:23:32 -0700
"Greg A. Woods"  wrote:

> I'm curious about what this means to you -- what do you need/want in
> addition to the chroot environments you now have?

Here is a good comparison:
https://en.wikipedia.org/wiki/OS-level_virtualization#Implementations

Just look at how Solaris does it - it has Zones (aka Jails) and LDOMs
(Logical Domains) on SPARC. LDOMs seem to be a much better way of
partitioning OS instances versus something like VMware or Xen.

NetBSD needs a lot of catching up in this area, so I'm not sure it will
happen any time soon, but who knows.


Re: pkgin error

2020-05-16 Thread Roland Illig

On 15.05.2020 17:22, nottobay wrote:

I keep getting a bunch of errors saying "download error 
size does not match pkg_summary".  I try just telling it to proceed but
the package still doesn't install.  I have already tried forcing a pkgin
update and it didn't fix it, and I'm using the default repo the NetBSD 9
installer gave me. How would I fix this?


Same here.

$ echo "select file_size from remote_pkg where pkgname like
'xmlcatmgr%'" | sqlite3 pkgin.db
25004

$ ftp
https://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.0/All/xmlcatmgr-2.2nb1.tgz
24864 bytes retrieved in 00:00 (16.63 MiB/s)
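
A quick way to compare both numbers for any one package, using the same
pkgin database and repository as above (the package name is just the
example from this thread):

#!/bin/sh
# sketch only: compare pkgin's recorded file_size with what the CDN serves
pkg=xmlcatmgr-2.2nb1
repo=https://cdn.netbsd.org/pub/pkgsrc/packages/NetBSD/amd64/8.0/All
db_size=$(sqlite3 /var/db/pkgin/pkgin.db \
    "select file_size from remote_pkg where fullpkgname = '$pkg';")
cdn_size=$(ftp -V -o - "$repo/$pkg.tgz" 2>/dev/null | wc -c)
echo "pkgin.db: $db_size   CDN: $cdn_size"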

The strange thing is that the +BUILD_INFO inside that package says:

BUILD_DATE=2019-12-21 05:36:25 +

Yet the mtime on cdn.NetBSD.org is 28-Mar-2020 20:22.

What's going on here? In this situation I'm missing cryptographically
signed packages since the default transport protocol is "http" without
any "s".