Re: Tester(s) needed: ixv(4)

2015-08-31 Thread Justin Cormack
On 18 August 2015 at 12:09, Bert Kiers  wrote:
> On Mon, Aug 17, 2015 at 05:38:09PM +0200, Bert Kiers wrote:
>
>> # uname -a
>> NetBSD  7.99.21 NetBSD 7.99.21 (GENERIC) #0: Mon Aug 17 17:13:23 CEST 2015  
>> ki...@shell.boppelans.net:/tmp/obj25032/sys/arch/amd64/compile/GENERIC amd64
>>
>> # dmesg|grep ixv
>> ixv0 at pci0 dev 3 function 0: Intel(R) PRO/10GbE Virtual Function Network 
>> Driver, Version - 1.1.4
>> ixv0: clearing prefetchable bit
>> ixv0: Using MSIX interrupts with 3 vectors
>> ixv0: for TX/RX, interrupting at msix0 vec 0, bound queue 0 to cpu 0
>> ixv0: for link, interrupting at msix0 vec 1, affinity to cpu 1
>>
>> # ifconfig ixv0
>> ixv0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> mtu 1500
>>         capabilities=bff80<...P4CSUM_Rx,UDP4CSUM_Tx,TCP6CSUM_Rx,TCP6CSUM_Tx,UDP6CSUM_Rx,UDP6CSUM_Tx>
>>         enabled=0
>>         address: 00:00:00:00:00:00
>>         media: Ethernet autoselect
>>         status: no carrier
>>
>>
>> no carrier is correct at the moment; but no MAC-address :(
>
> After connecting the port to a switch, ifconfig still shows no carrier.
> "ifconfig ixv0 up" does not help.  Doing "ifconfig eth2 up" on the host
> does not help.  It is up on the switch.
>
> Btw, this is with 6.1.5 userland.  I don't think that matters.

OK, I got it set up on my machine using KVM, and I see the same thing
(HEAD userland, so that's not it).

Will take a look in the driver and the documentation and see if I can
see what is missing.

Justin


Re: Removing ARCNET stuffs

2015-05-31 Thread Justin Cormack
On 31 May 2015 at 00:09, David Holland dholland-t...@netbsd.org wrote:
 I'm saying that, fundamentally, if you want to run gcc4 or gcc5 on a
 Sparc IPC that you're going to have problems. There is no way around
 this, except maybe to float a new compiler with the specific goal of
 both being modern and running on 25-year-old hardware. (That's an
 enormous project.)

I think pcc is currently the only realistic solution, but it still
needs a lot of work, and we would want to do this without compromising
support for gcc/clang, so it means fixing pcc as well as adding more
architectures. (I would be happy with a no-C++ base system to support
this.) pcc has come a long way, and I think we could get to a pcc base
system for some architectures by 8.0.

Justin


Re: Removal of compat-FreeBSD

2015-02-07 Thread Justin Cormack
On 7 February 2015 at 11:33, Stephen Borrill net...@precedence.co.uk wrote:
 On Sat, 7 Feb 2015, Maxime Villard wrote:

 I intend to remove the compat-FreeBSD support from the system.


 Can tw_cli be run in any other way to manage 3ware RAID cards?

Does the Linux binary for tw_cli work? Or are there driver differences
that mean it doesn't?

Justin


Re: OpenZFS?

2014-12-28 Thread Justin Cormack
On Sun, Dec 28, 2014 at 7:52 PM, Greg Troxel g...@ir.bbn.com wrote:

 The ZFS bits in NetBSD seem old, and it also seems that they don't quite
 100% work.

 Now, it seems OpenZFS is the locus of ZFS activity, and that's how
 FreeBSD's ZFS code is maintained:

   http://open-zfs.org/wiki/Main_Page

 Thus, it seems that it would be good to extend OpenZFS to support NetBSD
 (or extend NetBSD's glue code to support OpenZFS), and to have recent
 OpenZFS code in NetBSD's src/external.

 I have put this notion in the Finish ZFS project page:

   https://wiki.netbsd.org/projects/project/zfs/?updated

 I am curious if anyone who understands ZFS better has opinions on whether
 my notion of heading to OpenZFS makes sense, and how hard it is likely
 to be.

That is definitely the way to go. It is a matter of extending NetBSD to
support OpenZFS - OpenZFS does not really have a core repo. FreeBSD is
the closest, plus the existing NetBSD glue. It all builds on rump, so
you can work in userspace; it might be easiest to start on a FreeBSD
system to cross-check.

The OpenZFS community is pretty friendly.

Justin


Re: driver concurrency

2014-12-01 Thread Justin Cormack
On Mon, Dec 1, 2014 at 7:42 PM, Manuel Bouyer bou...@antioche.eu.org wrote:
 On Mon, Dec 01, 2014 at 02:28:04PM -0500, Thor Lancelot Simon wrote:
 They would, and many are simple enough to make this reasonably easy to do,
 but in practice, the giant locking of our SCSI code makes it pointless.

 Sure, but we could also make the scsi code run without the giant lock.
 Also, some of them don't use the scsi layer, but present a ld(4)
 interface (although it seems that most recent ones present a scsi interface).

NVM Express (NVMe) is a new non-SCSI storage standard with per-CPU
queues. I was planning to port the FreeBSD driver when the hardware
becomes a bit more available.

Justin


Re: posix_madvise(2) should fail with ENOMEM for invalid adresses range

2014-11-23 Thread Justin Cormack
On Sun, Nov 23, 2014 at 4:37 PM, Nicolas Joly nj...@pasteur.fr wrote:

 Hi,

 According the OpenGroup online document for posix_madvise[1], it
 should fail with ENOMEM for invalid addresses ranges :

 [ENOMEM]
 Addresses in the range starting at addr and continuing for len
 bytes are partly or completely outside the range allowed for the
 address space of the calling process.

 But we currently fail with EINVAL (returned value from range_check()
 function).

 Ok to apply the attached patch to fix posix_madvise/madvise ?

 Thanks.

 [1] 
 http://pubs.opengroup.org/onlinepubs/9699919799/functions/posix_madvise.html

There was some discussion on PR 48910

http://gnats.netbsd.org/cgi-bin/query-pr-single.pl?number=48910

That discussion was slightly inconclusive.

Justin


Re: kernel constructor

2014-11-10 Thread Justin Cormack
On Nov 10, 2014 10:02 AM, Masao Uebayashi uebay...@gmail.com wrote:

 __attribute__((constructor(n))), where n being priority, can do
 ordering (hint from pooka@).

 Question is, how to provide __CTOR_LIST__, __CTOR_LIST_END__ equivalent
symbols.

 (It is super easy if MI linker script is there. :)

Constructors have priorities, but it is a single global ordering, which is a
bit ugly to use. Still, it is a nice idea to use this mechanism.
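
To make the ordering concrete, here is a minimal userspace sketch of the
mechanism (the GCC/clang constructor attribute; priorities 0-100 are
reserved, and lower numbers run earlier):

#include <stdio.h>

/* Both functions run before main(), in priority order (101 before 200),
 * regardless of where they appear or which object file they live in;
 * hence the single global ordering. */
static void early_init(void) __attribute__((constructor(101)));
static void late_init(void) __attribute__((constructor(200)));

static void early_init(void) { printf("early_init\n"); }
static void late_init(void) { printf("late_init\n"); }

int
main(void)
{
	printf("main\n");
	return 0;
}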

Justin


Re: kernel constructor

2014-11-10 Thread Justin Cormack
On Mon, Nov 10, 2014 at 11:27 AM, Masao Uebayashi uebay...@gmail.com wrote:
 Hard to be uglier than how init_main.c looks like now...

Can't disagree there.

Justin


Re: CTLTYPE_UINT?

2014-10-04 Thread Justin Cormack
On Sat, Oct 4, 2014 at 9:24 AM, Alan Barrett a...@cequrux.com wrote:
 On Fri, 03 Oct 2014, Justin Cormack wrote:

 Back in the sysctl discussion a while back, core group said:

 http://mail-index.netbsd.org/tech-kern/2014/03/26/msg016779.html

 a) What types are needed?  Currently, CTLTYPE_INT is a signed
   32-bit type, and CTLTYPE_QUAD is an unsigned 64-bit type.
   Perhaps all four possible combinations of signed/unsigned and
   32 bits/64 bits should be supported.


 If you add new sysctl types, please use names that describe the size and
 signedness.  For example, rename CTLTYPE_INT to CTLTPE_INT32, keep
 CTLTYPE_INT as a backward compatible alias for CTLTYPE_INT32, and add
 CTLTYPE_UINT32.  Similarly, rename CTLTYPE_QUAD to CTLTYPE_UINT64, keep
 CTLTYPE_QUAD as an alias, and add CTLTYPE_INT64.  Please don't add a
 CTLTYPE_UINT with no indication of its size.

 A survey of what other OSes do would also be useful.

FreeBSD (and DragonFly):
 CTLTYPE_NODE    This is a node intended to be a parent for other nodes.
 CTLTYPE_INT     This is a signed integer.
 CTLTYPE_STRING  This is a nul-terminated string.
 CTLTYPE_S64     This is a 64-bit signed integer.
 CTLTYPE_OPAQUE  This is an opaque data structure.
 CTLTYPE_STRUCT  Alias for CTLTYPE_OPAQUE.
 CTLTYPE_UINT    This is an unsigned integer.
 CTLTYPE_LONG    This is a signed long.
 CTLTYPE_ULONG   This is an unsigned long.
 CTLTYPE_U64     This is a 64-bit unsigned integer.

OpenBSD has the same types as NetBSD, ie CTLTYPE_INT and CTLTYPE_QUAD
as the int32 and uint64 types.

I agree about being explicit about the 32-bitness, but using S64 and
U64 as the 64-bit names, to be consistent with FreeBSD, might make
sense. The long types seem best avoided if possible: you can see the
temptation to use them for memory amounts, but you could be running
32-bit userspace on a 64-bit kernel.

Justin


CTLTYPE_UINT?

2014-10-03 Thread Justin Cormack
Back in the sysctl discussion a while back, core group said:

http://mail-index.netbsd.org/tech-kern/2014/03/26/msg016779.html

a) What types are needed?  Currently, CTLTYPE_INT is a signed
   32-bit type, and CTLTYPE_QUAD is an unsigned 64-bit type.
   Perhaps all four possible combinations of signed/unsigned and
   32 bits/64 bits should be supported.

I noticed today that there are some cases where a CTLTYPE_INT node is
being fed a uint32_t; for example, in sys/netinet6/ip6_input.c this is
the case for ip6_temp_preferred_lifetime and similar values that are
uint32_t.

It seems to make sense to add a CTLTYPE_UINT for these types, although
there are other options. FreeBSD introduced one way back. We could also
continue to ignore the signedness differences though...
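
To make the signedness problem concrete, here is a small standalone C
sketch (plain userspace code, not the sysctl implementation) of what
happens when a uint32_t above INT32_MAX is viewed through a signed
32-bit type, which is effectively what feeding it to CTLTYPE_INT does:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
	/* A lifetime-style value that legitimately exceeds INT32_MAX. */
	uint32_t ip6_temp_preferred_lifetime = 3000000000U;
	int32_t as_signed;

	/* A signed 32-bit consumer sees the same four bytes... */
	memcpy(&as_signed, &ip6_temp_preferred_lifetime, sizeof(as_signed));

	/* ...and reports a negative number. */
	printf("unsigned view: %" PRIu32 "\n", ip6_temp_preferred_lifetime);
	printf("signed view:   %" PRId32 "\n", as_signed);
	return 0;
}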

Thoughts?

Justin


Re: detect valid fd

2014-09-15 Thread Justin Cormack
On Tue, Sep 16, 2014 at 12:59 AM, Patrick Welche pr...@cam.ac.uk wrote:

 So for both of you, things look correct!

 This is on Sunday's NetBSD 7.99.1 amd64, but this is an old problem for
 me...

Very odd. What does ktrace output look like? Which other versions did
you see this on before?

Justin


Re: ixg(4) performances

2014-08-30 Thread Justin Cormack
On Sat, Aug 30, 2014 at 8:22 AM, Thor Lancelot Simon t...@panix.com wrote:
 On Fri, Aug 29, 2014 at 12:22:31PM -0400, Terry Moore wrote:

 Is the ixg in an expansion slot or integrated onto the main board?

 If you know where to get a mainboard with an integrated ixg, I wouldn't
 mind hearing about it.

They are starting to appear, eg
http://www.supermicro.co.uk/products/motherboard/Xeon/C600/X9SRH-7TF.cfm

Justin


Re: RFC: IRQ affinity (aka interrupt routing)

2014-07-25 Thread Justin Cormack
On Fri, Jul 25, 2014 at 10:06 AM, Kengo NAKAHARA k-nakah...@iij.ad.jp wrote:
 But the UI is rough, so could you comment aboud the UI?

 The implementation is consist of following three pathes:
 (1) IRQ affinity implementation itself
 The usage is sysctl -w  kern.cpu_affinity.irq=18:1 (18 is
 IRQ number, 1 is cpuid). This mean the IRQ 18 interrupts
 route to cpuid 1 cpu core.

I would think that
sysctl -w kern.cpu_affinity.irq.18=1
would be much more natural as an interface.
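
One advantage of per-IRQ nodes is that they are also addressable
programmatically. A hypothetical userspace sketch (the
kern.cpu_affinity.irq.18 node is assumed here, it is not existing code):

#include <sys/param.h>
#include <sys/sysctl.h>

#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
	int cpu = 1;

	/* Hypothetical node: route IRQ 18 to CPU 1 by writing the per-IRQ
	 * leaf directly, rather than encoding "irq:cpu" in a single value. */
	if (sysctlbyname("kern.cpu_affinity.irq.18", NULL, NULL,
	    &cpu, sizeof(cpu)) == -1) {
		perror("sysctlbyname");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}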

Justin


Re: icache sync private rump component

2014-07-19 Thread Justin Cormack
On Jul 19, 2014 10:01 AM, Alexander Nasonov al...@yandex.ru wrote:

 To compile mips/cache.h in rump kernel, I needed to add -DMIPS3=1
 to Makefile.rump for mips platforms. This is the only change outside
 of sljit scope.

You surely can't do that: people may be trying to compile on non-MIPS3
hardware?

Justin


Re: Fixing the ELF priorities

2014-07-01 Thread Justin Cormack
On Tue, Jul 1, 2014 at 9:03 AM, Maxime Villard m...@m00nbsd.net wrote:
 Hi,
 I would like to improve the priorities of the binary loader. When the kernel
 loads a binary, it basically loops and calls different loaders (for aout, ELF,
 ...). There are several ELF loaders, for native and emulated binaries. This
 loop has a particular order: the 32bit compat loaders are called *before*
 the native ones. Which means that when you launch a simple 64bit binary on a
 64bit system, the kernel first tries to load it through the netbsd32 and 
 linux32
 compat loaders. The priority is obviously wrong, the native loaders should be
 called first; it also has a non-negligible performance impact when executing
 many binaries (when compiling something, for example).

FreeBSD recently added ELF header signature parsing to decide how to
execute binaries (based on the Linux binfmt_misc); see
http://svnweb.freebsd.org/base?view=revision&revision=264269 . The main
use case is qemu emulation, but it could also apply to this type of
issue. It is obviously a bigger change, but could be worth considering.
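
As a rough illustration of the idea (not the FreeBSD imgact code; this
uses the glibc <elf.h> names and assumes an x86-64 host), the ELF ident
and machine fields are enough to tell a native binary from a compat or
foreign one before any compat loaders are tried:

#include <elf.h>
#include <stdio.h>
#include <string.h>

/* Classify an executable image from its first bytes: native 64-bit,
 * 32-bit compat candidate, or foreign machine (qemu-style emulation). */
static const char *
classify(const unsigned char *hdr, size_t len)
{
	const Elf64_Ehdr *eh = (const Elf64_Ehdr *)(const void *)hdr;

	if (len < sizeof(*eh) || memcmp(eh->e_ident, ELFMAG, SELFMAG) != 0)
		return "not ELF";
	if (eh->e_ident[EI_CLASS] == ELFCLASS64 && eh->e_machine == EM_X86_64)
		return "native 64-bit";
	if (eh->e_ident[EI_CLASS] == ELFCLASS32)
		return "32-bit compat candidate";
	return "foreign machine, emulation candidate";
}

int
main(int argc, char **argv)
{
	unsigned char buf[sizeof(Elf64_Ehdr)] = { 0 };
	FILE *f;
	size_t n;

	if (argc < 2 || (f = fopen(argv[1], "rb")) == NULL)
		return 1;
	n = fread(buf, 1, sizeof(buf), f);
	fclose(f);
	printf("%s: %s\n", argv[1], classify(buf, n));
	return 0;
}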

Justin


Re: ffsv2 extattr support

2014-06-20 Thread Justin Cormack
On Fri, Jun 20, 2014 at 9:11 AM, Thomas Schmitt scdbac...@gmx.net wrote:

 One peculiarity shows up:

 extattr_get_link(2) et.al. return the attribute content with
 a trailing 0-byte.
 This 0-byte seems to be a bug-or-feature of setextattr(1).

 I overwrote an extattr in the ISO filesystem by xorriso means
 without a trailing 0-byte. Then i extracted it into the FFSv1
 filesystem and recorded it by xorriso again. No 0-byte appeared.
 So the extattr_*(2) functions and xorriso are not the ones
 who created those 0s.

 Neither FreeBSD nor Linux show trailing 0-bytes with their
 cleartext extattr/xattr.

 I care for portability of attributes in namespace user.
 So it would be interesting to know whether the convention on
 NetBSD is to have a 0-byte at the end of attribute content.
 extattr(9) specifies nul-terminated character string for
 names. But content is usually regarded as binary data, governed
 by the length value and not by a terminating character.

 If the 0-byte is convention, then i would consider to strip
 it when recording and to add it when restoring.
 But that would be feasible only if namespace user is reserved
 for C character strings rather than byte arrays.

This looks like a bug to me; attribute values need to be binary. I
have some tests which run on Linux and FreeBSD, but I haven't run them
on NetBSD yet because my current systems lack support (it would be
nice if tmpfs had support).
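
For reference, a hedged sketch of what such a portability test looks
like with the extattr(2) calls; the file name and attribute name below
are placeholders, and the point is that the stored length is
strlen(value), with no trailing NUL:

#include <sys/types.h>
#include <sys/extattr.h>

#include <stdio.h>
#include <string.h>

int
main(void)
{
	const char *path = "testfile";        /* placeholder file on FFS */
	const char value[] = "portable-value";
	char buf[64];
	ssize_t n;

	/* Store exactly strlen(value) bytes: no trailing NUL. */
	if (extattr_set_file(path, EXTATTR_NAMESPACE_USER, "comment",
	    value, strlen(value)) == -1) {
		perror("extattr_set_file");
		return 1;
	}

	/* If this reads back strlen(value) + 1 bytes, something added a NUL. */
	n = extattr_get_file(path, EXTATTR_NAMESPACE_USER, "comment",
	    buf, sizeof(buf));
	if (n == -1) {
		perror("extattr_get_file");
		return 1;
	}
	printf("stored %zu bytes, read back %zd\n", strlen(value), n);
	return 0;
}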

Justin


Re: Big test regression in -current

2014-06-10 Thread Justin Cormack
All the issues I saw have been resolved - thanks everyone.

Justin


Re: Big test regression in -current

2014-06-09 Thread Justin Cormack
On Mon, Jun 9, 2014 at 8:30 PM, Martin Husemann mar...@duskware.de wrote:
 And after updating again we are down to a handfull of unusual failures:

 Summary for 587 test programs:
 3781 passed test cases.
 21 failed test cases.
 32 expected failed test cases.
 75 skipped test cases.

I am looking into one remaining issue, the rest of the failures I saw
were fixed today.

Justin


Re: Big test regression in -current

2014-06-08 Thread Justin Cormack
On Sun, Jun 8, 2014 at 4:24 PM, Mindaugas Rasiukevicius
rm...@netbsd.org wrote:
 Martin Husemann mar...@duskware.de wrote:
 Something seriously happened to -current in the last few days. Between
 June 2 and June 8 the number of test failures jumped from 17 to 80, see:

 http://www.netbsd.org/~martin/sparc64-atf/

 Some rump test crash like this:

 panic: kernel diagnostic assertion curcpu() == ci failed: file
 /usr/src/lib/librump/../../sys/rump/librump/rumpkern/intr.c, line 331


 That is RUMP-specific.  Fixed now.


The rump builds against HEAD (http://build.myriabit.eu:8012/waterfall)
are still failing tests after this fix, so there may be other issues; I
will take a look.

Justin


Re: asymmetric smp

2014-03-27 Thread Justin Cormack
On Mar 27, 2014 2:32 AM, Matt Thomas m...@3am-software.com wrote:


 I recently ordered an ODROID-XU Lite to help beat on the my ARM MP code.

 However, it has a quirk that I don't think our scheduler will deal with.

 It has 4 Cortex-A15 cores @ 1.4Ghz and 4 Cortex-A7 cores @ 1.2Ghz.  Even
if the frequencies weren't different, the A15 cores at least twice as fast
per cycle than the A7.  That asymmetry is going to cause havoc with the
scheduler.  In terms of power, the A7s use a lot less than the A15s so if
you can keep the work on the A7s and leave the A15s sleeping, you'll extend
your battery life a lot.

Yes, these things are odd. I haven't got one yet. I did see that GCC now has a
dual optimization mode for code that needs to run on both. There are a few
research papers around on what works for scheduling.

Justin


[ANN] rumprun

2014-03-21 Thread Justin Cormack
I know a few people have been using this already, but I thought I
would do a more formal announcement as I have now pushed a much
improved version (no longer uses dlopen).

Rumprun, available from https://github.com/rumpkernel/rumprun, is a set
of build scripts that builds NetBSD userspace tools to run against a
rump kernel. This allows easy configuration of rump kernels, for
example in order to write tests, so you can create multiple rump
kernels with a network between them, or create RAID file systems. It
is fairly simple to add more commands; most things should work unless
they use features that the rump kernel does not support (largely signal
handling). It runs on NetBSD, Linux and probably any other Unix that
the rump kernel runs on.
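
For context, this is roughly what the underlying rump kernel client API
looks like from C, independent of the rumprun scripts themselves; a
minimal sketch, assuming it is linked against librump/librumpuser (plus
librumpvfs for the mkdir call):

#include <rump/rump.h>
#include <rump/rump_syscalls.h>

#include <stdio.h>

int
main(void)
{
	/* Boot a rump kernel inside this process; no host privileges needed. */
	if (rump_init() != 0) {
		fprintf(stderr, "rump_init failed\n");
		return 1;
	}

	/* These "system calls" act on the rump kernel's own namespace,
	 * not on the host's. */
	if (rump_sys_mkdir("/scratch", 0777) == -1)
		perror("rump_sys_mkdir");

	if (rump_sys_reboot(0, NULL) == -1)
		perror("rump_sys_reboot");
	return 0;
}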

There are examples on the rump kernel wiki
https://github.com/rumpkernel/wiki/wiki/rumprun:-Howto-to-configure-the-npf-packet-filter
https://github.com/rumpkernel/wiki/wiki/rumprun:-Howto-use-soft-raid-and-encrypted-block-devices

There is a rump kernel list at rumpkernel-us...@lists.sourceforge.net
and IRC #rumpkernel on freenode if you want some help with it; there
are still known and unknown issues that need fixing.

Justin
(jus...@netbsd.org)


Re: Enhance ptyfs to handle multiple instances.

2014-03-14 Thread Justin Cormack
On Fri, Mar 14, 2014 at 1:29 PM, Ilya Zykov net...@izyk.ru wrote:

 | I have few questions about project.
 | Christos, can I ask you about this?
 | Please, if anybody has objections or already doing it, tell me know.

 Nobody is already doing it, and if you have questions, you came to the right
 place.

 christos


 Ok.

 1. The main problem and question in this project(IMHO), it's how get access 
 for every instance through one driver ptm[x].
 First version.
 We can do it as well Linux devpts do. Inside every ptyfs we can create not 
 only slave side files,
 but ptm[x] too for this instance. But who must create(kernel mount function 
 or userspace helper) and what permissions will assign?
 One more version.
 We can do many ptm[x] minor numbers(165:0 165:1 for first instance, 165:2 
 165:3 for second ...) this can be anywhere in fs.
 But then for every mount we must pass for what instance it's mount doing. We 
 can do it with new mount option instance=#(for example).
 Every version has advantages and disadvantage. I think first version more 
 clear. What do you think?

 2. Mount without new option minstance(for example) must keep old behavior. 
 Is it necessarily?
 Or every new mount will mount new instance?

Looking at Linux, it is a little odd: my system does create a ptmx
inside /dev/pts/, but it has no read or write permissions set, and
everything is actually using the standard /dev/ptmx outside the mount,
so the one inside is not much use. All mounted instances (even for
namespaces, though there have been proposals for device namespaces,
e.g. see https://lwn.net/Articles/564854/) are identical, i.e. they
have the same slave devices.

I can see the advantages in the other options, but I think there may
be added complexity, and it might be better to do the first option
unless there are strong use cases for multiple instances. Just having
multiple mounts is useful.

Justin


Re: RFC: stop having a single global page size

2014-01-31 Thread Justin Cormack
On Fri, Jan 31, 2014 at 9:14 AM, Martin Husemann mar...@duskware.de wrote:
 On Fri, Jan 31, 2014 at 12:59:15AM -0800, Matt Thomas wrote:
 Why would anyone want this?  Say you have a system in which the MMU can have
 per translation table page sizes.  a 16KB page size might be desirable for 
 the
 kernel and for LP64 processes.  If you are running a ILP32 process, possibly
 of an older architecture, you might want to use a 4KB page size.

 I guess another usage is a (huge) framebuffer/aperture mmap'd by the
 userland driver, where you could easily save a lot TLB entries by using
 1 MB pages (or bigger) instead of 4/8k ones. I think Solaris does this.

Linux maps the kernel with 4MB pages to save TLB entries too, I believe.

Justin


Re: BPF memstore and bpf_validate_ext()

2013-12-20 Thread Justin Cormack
On Fri, Dec 20, 2013 at 1:13 PM, Alexander Nasonov al...@yandex.ru wrote:
 Sorry for top-posting. I'm replying from my phone.

 I've not looked at linux bpf before. I remember taking a quick look at 
 bpf_jit_compile function but I didn't like emitting binary machine code with 
 macro commands.

 I spent few minutes today looking at linux code and I noticed few interesting 
 things:

 - They use negative offsets to access auxiliary data. So, there is a clear 
 distinction between local memory store and external data. I don't think it's 
 a new addition, though.
 - They have a big enum of commands. Many of them translate to bpf commands 
 but there are also special commands like load protocol number into A. There 
 is a decoder from bpf but I have no clue how it works.
 - Those commands are adapted to work with skbuf data.

I have used some of the non-traditional uses of BPF in Linux, in
particular the syscall filtering code, which is designed to be a bit
like the packet filtering code. But I don't think it is a great model,
and I think the JIT compiler is rather different, as it compiles to
asm. The current route they are going seems to be validation rather
than in-kernel JITting; see https://lwn.net/Articles/575531/
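
For reference, this is roughly what the Linux syscall-filtering use
looks like; a hedged sketch of the seccomp-BPF API (Linux-only, and it
assumes x86-64, where mkdir is a direct syscall), quite separate from
NetBSD's bpf(4):

#include <stddef.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <linux/unistd.h>

int
main(void)
{
	/* Classic BPF program the kernel runs on every syscall:
	 * allow everything except mkdir(2), which fails with EPERM. */
	struct sock_filter filter[] = {
		/* Load the syscall number from struct seccomp_data. */
		BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
		    offsetof(struct seccomp_data, nr)),
		/* mkdir? fall through to the errno return; else skip it. */
		BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_mkdir, 0, 1),
		BPF_STMT(BPF_RET | BPF_K,
		    SECCOMP_RET_ERRNO | (1 & SECCOMP_RET_DATA)),  /* EPERM */
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
	};
	struct sock_fprog prog = {
		.len = sizeof(filter) / sizeof(filter[0]),
		.filter = filter,
	};

	/* Required so an unprivileged process may install a filter. */
	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == -1 ||
	    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) == -1) {
		perror("prctl");
		return 1;
	}
	printf("filter installed\n");
	return 0;
}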

Justin


 Alex


 20.12.13, 04:16, David Laight da...@l8s.co.uk:

 On Fri, Dec 20, 2013 at 01:28:12AM +0200, Mindaugas Rasiukevicius wrote:
  Alexander Nasonov al...@yandex.ru wrote:
  
   Well, if it wasn't needed for many year in bpf, why do we need it now? 
   ;-)
  
 
  Because it was decided to use BPF byte-code for more applications and that
  meant there is a need for improvements.  It is called evolution. :)

 Has anyone here looked closely at the changes linux is making to bpf?

   David

 --
 David Laight: da...@l8s.co.uk


 --
 Alex


Re: [patch] put Lua standard libraries into the kernel

2013-11-29 Thread Justin Cormack
On 29 Nov 2013 14:11, Lourival Vieira Neto lourival.n...@gmail.com
wrote:

 On Fri, Nov 29, 2013 at 10:03 AM, Marc Balmer m...@msys.ch wrote:
  Am 29.11.13 12:38, schrieb Lourival Vieira Neto:
  It will be interesting to see by how much memory the addition of the
  standard libraries will grow lua(4).  lneto claims it does not grow at
  all.  If it should, we can still move the standard libraries to a
kmod.
 
  I just double checked now (using nm to confirm). In fact, I was
  commenting the wrong portion of the Makefile to test. Sorry about that
  =(. Here is the result in amd64: 240K with stdlibs and auxlib, 166K
  with only auxlib and 154K solo. Anyway, I still think that is 86K is
  not that much to have things like {base, string, table}lib. However,
  though I think stdlibs could be in another kmod, I think that is not a
  good idea to have auxlib in another one. Lua auxlib is just an
  extension of the Lua C API and 12K is really a fair price to have a
  more complete Lua library in kernel, IMO.
 
  We could for now just go ahead, put auxlib and the stdlibs in lua(4) as
  foreseen, and when the need arises, we can still factor out the stdlibs
  to their own kmod.

 Agreed. Anyone opposes?


Sounds fine.


Re: in which we present an ugly hack to make sys/queue.h CIRCLEQ work

2013-11-27 Thread Justin Cormack
On 27 Nov 2013 06:50, Mouse mo...@rodents-montreal.org wrote:

  Let me get on the record.  It's basically ridiculous to allow GCC 4.8
  to redefine the set of permitted C expressions such that it breaks
  BSD.

 gcc 4.8 isn't.  C99 did; what's distinctive about gcc 4.8 is that
 before that gcc didn't take advantage of the leeway C99 said it had.
 These macros have been out-of-spec since the day NetBSD decided it was
 going to use C99 rather than some older version of C; it's just that
 only now is that out-of-spec-ness actually biting anyone.  (That
 decision may have been implicit in a compiler version change.)

The decision to detect them and then optimise to broken code, rather than
give a compile-time error, is what annoys me.

Justin


Re: A Library for Converting Data to and from C Structs for Lua

2013-11-20 Thread Justin Cormack
On 20 Nov 2013 08:38, Marc Balmer m...@msys.ch wrote:
  Now we need a name that covers both uses cases.  It could be memory
  because it deals with memory, or just data, which I favour.
 
  Opinions on the name?

 Since no one replied, it will go by the name 'data' and be available for
 both Luas.

 @lneto: I will start with the pack/unpack parts, you can then add your
 stuff whenever you want, ok?

I don't have opinions on the name, but I do have a set of feature
requirements. I am currently using luaffi, which needs some work to remove
the non-portable parts, but that's not far off. I am happy to switch, but I
do need access to struct members like tables, nested structs, unions, casts,
and metatables for structs. If there were an outline design doc, that would
be helpful.


Re: [patch] changing lua_Number to int64_t

2013-11-17 Thread Justin Cormack
On Sun, Nov 17, 2013 at 11:30 AM, Alexander Nasonov al...@yandex.ru wrote:
 Mouse wrote:
 Also, using an exact-width type assumes that the hardware/compiler in
 question _has_ such a type.

 It's possible that lua, NetBSD, or the combination of the two is
 willing to write off portability to machines where one or both of those
 potential portability issues becomes actual.  But that seems to be
 asking for trouble to me; history is full of but nobody will ever want
 to port this to one of _those_ that come back to bite people.

 I was perfectly fine with long long because it's long enough to
 represent all integers in range [-2^53-1, 2^53-1].

 As Marc pointed out, Lua has a single numeric type which is double
 by default. Many Lua libraries don't need FP and they use a subset of
 exactly representable integers (not all of them do range checks, though).
 Extending the range when porting from userspace to kernel will decrease
 the pain factor of porting.

The range [-2^53-1, 2^53-1] is not sufficient: in the kernel you need to
be able to deal with the longest type the kernel uses, and it is
incredibly annoying to have to use userdata to deal with off_t, or to
gratuitously hope that losing some bits is OK (as happens with Lua in
userspace now). As the widest type in the kernel is int64_t, that is
what Lua should use. (The issue of uint64_t is left as an exercise for
the Lua programmer.) When/if the kernel uses something longer then Lua
can change, but using intmax_t is not useful, as the kernel is explicit
about sizes.
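
A small standalone C illustration of the "losing some bits" problem: an
off_t-sized value just above 2^53 no longer survives a round trip
through a double (the stock lua_Number), while int64_t keeps it exact:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* A file offset just above 2^53, well within off_t/int64_t range. */
	int64_t off = (INT64_C(1) << 53) + 1;

	double as_double = (double)off;     /* what a double lua_Number holds */
	int64_t back = (int64_t)as_double;

	printf("original:   %" PRId64 "\n", off);
	printf("via double: %" PRId64 "%s\n", back,
	    back == off ? "" : "   (low bits lost)");
	return 0;
}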

Justin


Re: A Library for Converting Data to and from C Structs for Lua

2013-11-17 Thread Justin Cormack
On Sun, Nov 17, 2013 at 12:05 PM, Marc Balmer m...@msys.ch wrote:
 I came accross a small library for converting data to an from C structs
 for Lua, written by Roberto Ierusalimschy:

 http://www.inf.puc-rio.br/~roberto/struct/

 I plan to import it and to make it available to both lua(1) and lua(4)
 as follows:

 The source code will be imported into
 ${NETBSDSRCDIR}/sys/external/mit/struct unaltered and then be modified
 to compile on NetBSD.

 Then ${NETBSDSRCDIR}/sys/module/luastruct/ and
 ${NETBSDSRCDIR}/lib/lua/struct/ directories will be added with the
 respective Makefiles etc.


I always found this library so user-unfriendly that I would rather
program in C. I am not sure it was meant as much more than a proof of
concept. YMMV.

Justin


Re: [patch] changing lua_Number to int64_t

2013-11-17 Thread Justin Cormack
On Sun, Nov 17, 2013 at 4:52 PM, Lourival Vieira Neto
lourival.n...@gmail.com wrote:
 On Sun, Nov 17, 2013 at 2:02 PM, Christos Zoulas chris...@zoulas.com wrote:
 On Nov 17, 10:46am, lourival.n...@gmail.com (Lourival Vieira Neto) wrote:
 -- Subject: Re: [patch] changing lua_Number to int64_t

 | On Sun, Nov 17, 2013 at 7:37 AM, Marc Balmer m...@msys.ch wrote:
 |  Am 17.11.13 04:49, schrieb Terry Moore:
 |  I believe that if you want the Lua scripts to be portable across NetBSD
 |  deployments, you should choose a well-known fixed width.
 | 
 |  I don't see this as very important.  Lua scripts will hardly depend on
 |  the size of an integer.
 |
 | But they could. I think that the script programmers should know if the
 | numeric data type is enough for their usage (e.g., time diffs).

 By making it the biggest type possible, you never need to be worried.

 Right.. you just convinced me.. if no one opposes, I'll change that to
 intmax_t and get rid of PRI/SCNd64 =).

1. Lua 5.3 will have 64-bit integer support as standard, which will
make interop and reuse between kernel and userspace code much easier,
iff we use int64_t.

2. Code will have to handle the kernel's use of uint64_t, which will
potentially behave differently with a Lua number type of int64_t versus,
say, int128_t (which might happen some day), so it is unlikely to be
tested properly. There is no existing system where intmax_t is not
int64_t, so this breakage is untestable.

Justin


Re: [patch] changing lua_Number to int64_t

2013-11-17 Thread Justin Cormack
On Sun, Nov 17, 2013 at 7:56 PM, Christos Zoulas chris...@zoulas.com wrote:
 On Nov 17,  3:36pm, lourival.n...@gmail.com (Lourival Vieira Neto) wrote:
 -- Subject: Re: [patch] changing lua_Number to int64_t

 |  1. Lua 5.3 will have 64 bit integer support as standard, which will
 |  make interop and reuse between kernel and userspace code much easier,
 |  iff we use int64_t
 |
 | If they are using int64_t for integers, I think it is a good reason to us to
 | stick to int64_t.

 This is not relevant. The numeric type will still be double, so forget
 about compatibility between kernel and userland. There is no need for
 the interpreter to use a fixed width type, but rather it is convenient
 to use the largest numeric type the machine can represent.

There will be two numeric types as standard, int64_t and double. It
should be possible to compile the kernel Lua with only int64_t and no
double support, I would think, so integer-only userland programs would
be compatible, which is a very useful feature. But the semantics of the
Lua integer type will be such that it wraps at 64 bits, unlike some
hypothetical larger type (that doesn't yet exist and which the kernel
doesn't yet use).

Justin


Re: [patch] changing lua_Number to int64_t

2013-11-17 Thread Justin Cormack
On Sun, Nov 17, 2013 at 8:39 PM, Lourival Vieira Neto
lourival.n...@gmail.com wrote:
 Well, I don't think I fully understood that; mainly because I'm not
 aware about Lua 5.3. It will provide two number types for the scripts?
 Or you are just talking about lua_Integer type on the C-side. Lua 5.1
 already has a lua_Integer type that is defined as ptrdiff_t.

Yes, that is correct, so a literal like 2 will be an integer and 5.3 a
float; operations will be defined in terms of how they convert, so there
will be integer and float division. The draft manual is here:
http://www.lua.org/work/doc/manual.html (see 3.4.1). This will not
happen for a while, but it will make it much easier in future for
interfaces like the kernel that need 64-bit int support, which is why
it is being implemented. So not being compatible with this seems a
mistake.

Justin


Re: hf/sf [Was Re: CVS commit: pkgsrc/misc/raspberrypi-userland]

2013-11-12 Thread Justin Cormack
On Tue, Nov 12, 2013 at 6:08 AM, Michael van Elst mlel...@serpens.de wrote:
 The slowdown is already enormous due to lack of floating point
 hardware. That's why emulating the FP hardware is a very common
 way to handle this situation, just look at the other platforms.

 The rationale behind this is, that people who use FP operations
 in any significant way will use hardware that supports it. And
 others will hardly notice the extra slowdown cause by emulation.

 The questions are: does ARM support this and is there a usuable
 implementation. Linux dropped NWFPE due to licensing issues.

In principle, yes, but people who have floating-point hardware that is
not VFP (so they need register emulation even though they can actually
do FP) might complain, as might people who want to run existing
soft-float binaries, probably a larger group.

Generally people seem happy with a smallish number of userspaces; of Matt's
list earm{v[4567],}{hf,}{eb}, except earmv4hf isn't valid.

The most useful are an old earmv4{eb} and a new earmv6hfe (6 could be
7 here). That's fewer userspaces than e.g. are useful on MIPS. The soft
and hard ones can be built and tested on newer hardware as they are
backwards compatible. Almost no software needs fixing to know about
this (just compilers, linkers etc). I don't think this is too terrible
for the best-selling CPU platform there is.

Justin



Re: hf/sf [Was Re: CVS commit: pkgsrc/misc/raspberrypi-userland]

2013-11-11 Thread Justin Cormack
On Mon, Nov 11, 2013 at 6:42 PM, Alistair Crooks a...@pkgsrc.org wrote:

 What I am asking for is a much better way of people describing the
 design decisions they've taken, and for them to attempt the radical
 step of documenting these decisions, and publishing them, so that
 people can understand why these decisions were taken.  This would go a
 long way towards alleviating the WTF moments that we've all been
 experiencing just recently.

 To put this another way - someone has a Beaglebone - what userland
 should they be looking for - hf, sf?  Beyond that - earm or arm?  How
 do people find out what chip is in an embedded appliance?  What web
 page documents the choices of ARM NetBSD userland right now, let alone
 how to work out where to get them once they know they want a hf earm?
 How would they specify that in building packages from pkgsrc?

 I'm concerned that you think that what we have right now is workable.

earm vs arm is simple: anything that does not have legacy requirements
should use earm.

armhf basically requires VFP support, which has been optional since
ARMv5 and is still technically optional but very widespread (i.e. all
the other FP alternatives have gone away; there are still some
softfloat machines; NEON is in addition). In the Linux world most
hardfp versions also target ARMv7, which caused some annoyance from
the Raspberry Pi people, so there are also ARMv6-targeting versions
around. There is almost no *current manufacture* hardware that NetBSD
will currently run on that does not in principle support hardfloat
(that I know of; there is a lot of older stuff around, of course), as
not putting float in at all seems rare outside microcontrollers now
(unlike MIPS, where none of the router-type stuff has float). So your
Beaglebone can run hf. Whether it matters I am not sure; I think the
original quotes for the huge speedup were exaggerated for real-world
use, but it's never going to be slower, and any FP application should
benefit a little.

Justin



Re: hf/sf [Was Re: CVS commit: pkgsrc/misc/raspberrypi-userland]

2013-11-11 Thread Justin Cormack
On Mon, Nov 11, 2013 at 10:56 PM, Michael van Elst mlel...@serpens.de wrote:
 m...@3am-software.com (Matt Thomas) writes:

Exactly.  with hf, floating point values are passed in floating point
registers.  That can not be hidden via a library (this works on x86
since the stack has all the arguments).

 It could be hidden by emulating the floating point hardware.

That's not sane. The slowdown would be enormous; you are emulating
registers as well as operations.

Justin


Re: hf/sf [Was Re: CVS commit: pkgsrc/misc/raspberrypi-userland]

2013-11-10 Thread Justin Cormack
On Sun, Nov 10, 2013 at 7:38 PM, Alistair Crooks a...@pkgsrc.org wrote:
 On Sun, Nov 10, 2013 at 04:56:04AM +, Jun Ebihara wrote:
 Module Name:  pkgsrc
 Committed By: jun
 Date: Sun Nov 10 04:56:04 UTC 2013

 Modified Files:
   pkgsrc/misc/raspberrypi-userland: Makefile

 Log Message:
 support earmhf.
 ONLY_FOR_PLATFORM=  NetBSD-*-*arm*
 oked by jmcneill.

 Thanks for doing this, Jun-san.

 But in the big picture, having hf and sf versions of a platform's
 userland, in the year 2013, is, well, sub-optimal.  I don't think the
 ramifications of the change were considered in enough detail, and we
 need to discuss it, before we have to start growing new architectures
 in pkgsrc for this and that.

 Can't we lean on what was done for i386/i387 twenty years ago, and
 use a userland library to decide whether to use softfloat in the
 absence of hardware?

 So let's discuss...

armhf is not just about whether there is or is not hard float; it is
also a different ABI. It's more like MIPS o32 vs n32, in that it is an
ABI change that also imposes some hardware requirements.
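
To make the calling-convention difference concrete, here is a tiny
example with the usual AAPCS register assignments noted as comments (a
sketch of typical behaviour, not generated compiler output):

/* How a call such as scale(2, 1.5) is marshalled on 32-bit ARM:
 *
 *   soft-float ABI (earm):   n in r0, x in r2/r3 (r1 skipped for
 *                            alignment), double result returned in r0/r1.
 *   hard-float ABI (earmhf): n in r0, x in d0, result returned in d0.
 *
 * Because argument and return registers differ, earm and earmhf objects
 * cannot be mixed in one program, whatever library shims are added. */
double
scale(int n, double x)
{
	return n * x;
}

int
main(void)
{
	return (int)scale(2, 1.5);	/* 3 */
}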

Justin


Re: Lua in-kernel (lbuf library)

2013-10-16 Thread Justin Cormack
On 16 Oct 2013 15:41, Lourival Vieira Neto lourival.n...@gmail.com
wrote:

 Hi Justin,

 On Tue, Oct 15, 2013 at 7:38 PM, Justin Cormack
 jus...@specialbusservice.com wrote:
  On Thu, Oct 10, 2013 at 7:15 PM, Lourival Vieira Neto
  lourival.n...@gmail.com wrote:
  Hi folks,
 
  It has been a long time since my GSoC project and though I have tried
  to come back, I've experienced some personal issues. However, now I'm
  coding again.
 
  I'm developing a library to handle buffers in Lua, named lbuf. It is
  been developed as part of my efforts to perform experimentation in
  kernel network stack using Lua. Initially, I intended to bind mbuf to
  allow, for example, to write protocols dissectors in Lua. For example,
  calling a Lua function to inspect network packets:
 
  function filter(packet)
if packet.field == value then return DROP end
return PASS
  end
 
  Thus, I started to design a Lua binding to mbuf inspired by '#pragma
  pack' and bitfields of C lang. Then, I realized that this Lua library
  could be useful to other kernel (and user-space) areas, such as device
  drivers and user-level protocols. So, I started to develop this
  binding generically as a independent library to give random access to
  bits in a buffer. It is just in the early beginning, but I want to
  share some thoughts.
 
  I have been using the luajit ffi and luaffi, which let you directly
  use C structs (with bitfields) in Lua to do this. It makes it easier
  to reuse stuff that is already defined in C. (luaffi is not in its
  current state portable but my plan is to strip out the non portable
  bits, which are the function call support).
 
  Justin

 I never used luaffi. It sounds very interesting and I think it could
 be very useful to bind already defined C structs, but my purpose is to
 dynamically define data layouts using Lua syntax (without parsing C
 code).


Yes, absolutely, it makes more sense when the structs are already defined
in C. For parsing binary data I would look at Erlang for inspiration too;
it has one of the nicer designs.

Justin


Re: Moving Lua source codes

2013-10-09 Thread Justin Cormack
On Wed, Oct 9, 2013 at 9:26 AM, Thomas Klausner w...@netbsd.org wrote:
 On Wed, Oct 09, 2013 at 08:37:23AM +0200, Marc Balmer wrote:
 So if no one really objects the plan is as follows:

 - Import Lua 5.2 to src/sys/external/
 - Remove Lua 5.1 from src/external/

 apb suggested using src/common/external/licence/name and lots of

Trying to describe a license in a filename is an exercise in futility;
I can see the case for gpl as a generic license class maybe, but in
general you need to read the licenses for all the code, and there are
lots of them.

 people asked for working examples first. What's your reply to that?

Various people have posted things they are working on. While there is
no proposal to remove it, upgrading makes sense; certainly the stuff I
am working on targets 5.2 as a preference. Also, the 5.1 series is no
longer getting bug fixes, and 5.2 has a fix for hostile code that
Wikipedia found in their code audit when they added user-facing Lua to
Wikipedia.

Justin


equivalent of MAP_32BIT

2013-05-26 Thread Justin Cormack
I have been informed that there might be an undocumented ability to
get mmap to return addresses in the lower 4GB of the address space on a
64-bit machine by passing ~(unsigned)0 as the first parameter (or
(1<<31)-1 for 2GB). Is this correct? Or is it possible to use
netbsd32_mmap()? Linux and some other OSs support a MAP_32BIT flag to
achieve this; I am looking for a way to port some code to NetBSD that
requires this.
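
For reference, this is the Linux-side construct the code relies on
(MAP_32BIT is a real Linux flag on x86-64; the NetBSD hint-address trick
mentioned above is the part that needs confirming):

#define _GNU_SOURCE		/* for MAP_32BIT on Linux */
#include <sys/mman.h>
#include <stdio.h>

int
main(void)
{
	size_t len = 1 << 20;

	/* Ask for executable memory at an address that fits in 32 bits
	 * (Linux keeps MAP_32BIT mappings in the low 2GB); typical users
	 * are JIT compilers storing pointers in 32-bit fields. */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
	    MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	printf("mapped at %p\n", p);
	munmap(p, len);
	return 0;
}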

Thanks

Justin