Re: [dev] [ubase] compile using musl

2024-03-15 Thread Mattias Andrée
On Fri, 15 Mar 2024 16:22:19 -0300
Brian Mayer  wrote:

> Hi, I'm Brian, I'm trying to compile ubase using musl as libc on
> buildroot. I use a Pinebook Pro, so aarch64 is my arch.
> 
> By just running make I get this error:
> 
> /home/blmayer/git/distro/buildroot/output/host/bin/aarch64-buildroot-linux-musl-gcc
> -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64  -O2
> -g0  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64
> -o df.o -c df.c
> dd.c: In function 'copy_splice':
> dd.c:179:21: warning: implicit declaration of function 'splice'
> [-Wimplicit-function-declaration]
>  179 | r = splice(ifd, NULL, p[1], NULL, n, SPLICE_F_MORE);
>  | ^~
> /home/blmayer/git/distro/buildroot/output/host/bin/aarch64-buildroot-linux-musl-gcc
> -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64  -O2
> -g0  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64
> -o dmesg.o -c dmesg.c
> dd.c:179:54: error: 'SPLICE_F_MORE' undeclared (first use in this function)
>  179 | r = splice(ifd, NULL, p[1], NULL, n, SPLICE_F_MORE);
>  |  ^
> dd.c:179:54: note: each undeclared identifier is reported only once
> for each function it appears in
> /home/blmayer/git/distro/buildroot/output/host/bin/aarch64-buildroot-linux-musl-gcc
> -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64  -O2
> -g0  -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64
> -o eject.o -c eject.c
> make[1]: *** [Makefile:164: dd.o] Error 1
> 
> My guess is that glibc provides that function but musl doesn't. So since
> my system uses musl, that function is not found. Can I get some
> help?
> 
> Thanks,
> Brian
> 

Add -D_GNU_SOURCE to the compiler flags. splice(2) and SPLICE_F_MORE are
Linux-specific extensions, and musl only declares them when _GNU_SOURCE
is defined.



Re: [dev] [sbase] Defining scope of sbase and ubase

2024-03-09 Thread Mattias Andrée
On Sat, 09 Mar 2024 17:28:49 +0100
Elie Le Vaillant  wrote:

> Страхиња Радић  wrote:
> > Compiling all programs into one binary is currently an option, and IMHO it 
> > should remain an option.  
> 
> I fully agree. However, the single binary situation should be improved.
> 
> > Great, combine the two versions of libutil into a single, separate
> > libutil repository  
> 
> I'm not sure whether or not this is a good idea, because it makes
> sbase and ubase dependent upon a separate repository, which needs to
> be present in the parent directory for it to build. It'd also make
> sbase development cumbersome, because we very frequently change
> libutil when we change sbase. Both are developed as one single
> project, and patches reflect this. libutil should not be isolated I
> think.
> 
> > then have a directory hierarchy like this:
> > 
> > corebox
> > ├──sbase (portable only)  \
> > ├──ubase (nonportable) depend on libutil.so and/or libutil.a
> > ├──xbase (non-POSIX)  /
> > └──libutil (option to produce .so and/or .a)  
> 
> ubase is not only nonportable, it is _linux-specific_. It is also
> non-POSIX. I think ubase should be renamed to reflect this. The
> distinction between POSIX/non-POSIX is, I think, not very useful. As

There are also multiple standards, not just POSIX. For example
tar, true, and false are not POSIX (tar was removed from POSIX, and
true and false are defined only as shell built-ins in POSIX), but they
are defined in the LSB, which is popular, but Linux-specific.
Most of POSIX, but not all of it, is also defined by the LSB.

> Mattias said, pure POSIX is quite cumbersome, and not very descriptive
> as of what you can expect from it. sh and vi are POSIX, but out-of-scope
> for sbase (from the TODO), whereas sponge is crucial for sbase (it
> allows simpler implementation of -i for sed, which _is_ POSIX, or the
> -o flag for sort (POSIX too)) and would thus be excluded from sbase
> and put into xbase.

sed -i is not POSIX.
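Since sed -i is out, the sponge(1) workaround mentioned in the quote can be sketched as a tiny shell function (sponge_sketch here is a stand-in for the real sponge(1), which handles errors and atomicity more carefully):

```shell
# minimal sketch of what sponge(1) buys you: soak up all input before
# opening the output file, so "cmd file | sponge file" is safe
sponge_sketch() {
	tmp=$(mktemp) || return 1
	cat > "$tmp"        # read everything first
	cat "$tmp" > "$1"   # only now clobber the target
	rm -f "$tmp"
}
printf 'hello\n' > demo.txt
sed 's/hello/world/' demo.txt | sponge_sketch demo.txt
cat demo.txt   # -> world
```

The point is ordering: all input is consumed before the target file is truncated, which is what makes an in-place edit safe without a -i flag.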

> 
> The solution Mattias proposed (having one big repository, a portable
> subdir, a linux (and maybe others in the future, like openbsd) subdir
> and a Makefile which includes more descriptive sets than POSIX/non-POSIX
> (well, it _can_ be used, but it is not enough)) is I think the best to
> fix the problem of libutil duplication/drifting away of versions. It
> also allows a broader scope without impeding on the goals of sucklessness.
> 
> One supplementary question, more in line with the original question asked
> by Roberto E. Vargas Caballero, is: would awk and sh be out-of-scope?
> Should we rather try to implement extensions to awk, or follow the 
> specification
> as strictly as possible? Should we implement POSIX sh, or some other shell, such 
> as rc?
> Or is it out-of-scope for us to implement a full-blown shell? I really am
> not sure.

I don't think there is any reason that sbase should implement
all of the standard utilities you need; I think it should only
be the small tools that you can reasonably write in one file.
Large and complicated programs like sh should be their own projects.

> 
> Regards,
> Elie Le Vaillant
> 




Re: [dev] [sbase] Defining scope of sbase and ubase

2024-03-09 Thread Mattias Andrée
On Sat, 9 Mar 2024 14:53:07 +0100
Страхиња Радић  wrote:

> On 24/03/09 12:59AM, Mattias Andrée wrote:
> > I agree, a single repo (or alternatively making libutil it's own repo) is
> > necessary if we want one binary, and I think we do.  
> 
> Compiling all programs into one binary is currently an option, and IMHO it 
> should remain an option. In my own toy distro[1] based on Musl-LFS and using 
> sbase and ubase I compile all programs from {s,u}base separately.
> 
> The reasons why I consider that beneficial over a single executable include, 
> but are not limited to:
> 
> - Easier to maintain: if an administrator decides a utility is unnecessary or 
>   shouldn't be available, it comes down to rm-ing the file vs recompilation 
> of 
>   the entire *box.
> 
> - More robust: in case of disk corruption, all of the utilities are 
> unavailable 
>   vs only those affected.
> 
> - Fine-grained control: separate programs can be compiled using specific
>   compilation options (eg. -g -O0) vs all of the programs sharing compilation 
>   options.
> 
> etc.
> 
> 
> > Even if submodules was possible, I do not think they are a good solution.  
> 
> What makes the git submodules not possible?

I haven't actually checked, but it is safe to assume that if it is not
already a problem it very well could become one (and it probably is).
If the two libutil implementations have functions with the same names
they cannot both be linked into the same binary. A "solution" would
be to use weak linkage, but then you can run into problems when the
functions that share a name do not share behaviour (e.g. have different
prototypes or return types).

> 
> 
> > Using submodules is unpleasant and pointless since all code is under our
> > control. I think submodules should only be used for code that you do not
> > have control over but need the source code for. Either you have separate
> > repos and have normal compile time dependencies (require that the libraries
> > are installed) or you put everything in one place, one repo.  
> 
> Everything in the quoted part seems personal preference. Git submodules offer 
> a 
> way to easily establish a hierarchy of git repositories while keeping them as 
> separate "units".

Of course these are personal preferences. There aren't any technical problems
using git submodules assuming you can put everything in the same repo.

> 
> So the libutil differs in sbase and ubase. Great, combine the two versions of 
> libutil into a single, separate libutil repository, then have a directory 
> hierarchy like this:
> 
> corebox
> ├──sbase (portable only)  \
> ├──ubase (nonportable) depend on libutil.so and/or libutil.a
> ├──xbase (non-POSIX)  /
> └──libutil (option to produce .so and/or .a)
> 
> 
> 
> [1]: https://strahinja.srht.site/galeb
> 




Re: [dev] [sbase] Defining scope of sbase and ubase

2024-03-09 Thread Mattias Andrée
On Sat, 09 Mar 2024 12:10:28 +0100
Eolien55  wrote:

> Mattias Andrée  wrote:
> > I think there should be one directory called "portable" containing only 
> > tools
> > from sbase, and one directory called "linux" containing the tools from ubase
> > and maybe even symlinks to the tools in "portable". This structure would 
> > allow
> > us to add implementations for other operating systems as well. If we add
> > symlinks to the tools in "portable" to "linux", each directory could have
> > it's own makefile. But I'm not sure this is preferable over a single 
> > Makefile
> > in the root directory.  
> 
> This is a great idea! Your mail on the other branch is a great idea too.
> I think we should have platform-specific libutil for unportable functions
> in ubase's libutil (so, linux/libutil, openbsd/libutil and so on, if we
> do actually add implementations for other OSes), and a top-level libutil
> too.
> 
> This could maybe allow adding platform-agnostic implementations of some utils
> (not all because some APIs are so different that it requires full rewrites,
> but maybe some, such as clear, stty, tput, or dd maybe).
> 
> I will start hacking on it, and will post the git repository for it
> when it builds correctly.
> 
> I'm not sure how you combine two repositories into one while keeping the
> history though.

git init .

git pull git://git.suckless.org/sbase
git branch -M master sbase
mkdir sbase
mv * .gitignore sbase  # mv complains it cannot move sbase into itself; that one error is harmless
git add .
git commit -m 'Move all of sbase into its own directory'

git checkout --orphan master
git rm -rf .

git pull git://git.suckless.org/ubase
git branch -M master ubase
mkdir ubase
mv * .gitignore ubase  # mv complains it cannot move ubase into itself; that one error is harmless
git add .
git commit -m 'Move all of ubase into its own directory'

git checkout --orphan master
git rm -rf .
git pull --allow-unrelated-histories . sbase
git pull --allow-unrelated-histories . ubase


> 
> Regards,
> Elie Le Vaillant
> 




Re: [dev] [sbase] Defining scope of sbase and ubase

2024-03-08 Thread Mattias Andrée
On Fri, 08 Mar 2024 23:33:12 +0100
Eolien55  wrote:

> Страхиња Радић  wrote:
> > The problem of having separate *box executables could be solved by creating 
> > an 
> > "umbrella" *box project, perhaps having sbase, ubase and 
> > [insert_letter]base as 
> > git submodules, and deciding what to build based on the contents of 
> > config.mk.  
> 
> The problem is that sbase and ubase each include a version of libutil, with 
> some
> functions which are the same, and some others which serve the same function but
> vary in implementation due to version history.
> 
> git submodules are kinda meh I think for this problem, because it wouldn't 
> change
> the problem of source code duplication (and of versions drifting apart) 
> between
> the 2 projects, as libutil is part of both sbase and ubase trees (not merely 
> the
> umbrella-box project).
> 
> I believe we should put what are currently sbase and ubase in a single git 
> repository,
> sharing all that is portably sharable, but still separating utilities that 
> are portable
> from linux-specific ones.
> 
> I think sbase and ubase should try to provide useful, well-implemented, 
> suckless
> utilities. If we want POSIX utilities, let's add them! But I don't think we 
> should
> restrict ourselves to only those. sbase's sponge and cols are so useful I'm 
> constantly
> using them, even though I normally use busybox.
> 
> Regards,
> Elie Le Vaillant
> 

I agree, a single repo (or alternatively making libutil its own repo) is
necessary if we want one binary, and I think we do.

Even if submodules were possible, I do not think they are a good solution.
Using submodules is unpleasant and pointless since all code is under our
control. I think submodules should only be used for code that you do not
have control over but need the source code for. Either you have separate
repos and have normal compile time dependencies (require that the libraries
are installed) or you put everything in one place, one repo.

I see the separation of sbase and ubase into two repositories as basically
equivalent to a single repo with an sbase directory and a ubase directory,
except better when it comes to tagging new versions; but since there is no
reason to have separate releases for these, it doesn't really make a difference.
So simply putting sbase and ubase in two different directories in the same
repo, with a makefile there to build all of it into one binary, would be a
step up; of course there would be some linkage problems, and making them
share libutil would be the next step up. Of course, it should be possible
to select whether you want ubase included in your binary or not, which is
the point of the separation in the first place.

I think there should be one directory called "portable" containing only tools
from sbase, and one directory called "linux" containing the tools from ubase
and maybe even symlinks to the tools in "portable". This structure would allow
us to add implementations for other operating systems as well. If we add
symlinks to the tools in "portable" to "linux", each directory could have
its own makefile. But I'm not sure this is preferable over a single Makefile
in the root directory.

As mentioned in another branch of this conversation, I think we should
have a base with only the POSIX tools but then have additional optional
tools, which could be grouped into overlapping categories, so you can select
what you want on your system.


Best regards,
Mattias Andrée



Re: [dev] [sbase] Defining scope of sbase and ubase

2024-03-08 Thread Mattias Andrée
I think we should keep the implementation of each tool as minimal as
possible, but POSIX-complete, and of course include common tools such as
install(1) and tar(1). However, actually using a system that is
nothing more than POSIX is very cumbersome. And I think it is a better
solution to implement non-standard tools when possible to address
usability issues, e.g. implementing sponge(1) instead of -i for sed(1).

However, if the system isn't actually intended to be used
interactively via the command line, e.g. on embedded devices
or a service running in a container, there is no need for non-standard
tools such as sponge(1), and it ought to be easy to select what
you want on your system. I suggest that tools be grouped into a
few categories (unless they are organised into separate directories,
I see no reason one tool couldn't be in multiple categories).

These categories could for example be:
- "posix" for all POSIX tools,
- "lsb" for all LSB tools,
- "users" for account management,
- "extra" for other common tools,
- "common" for "posix", "lsb", "users", and "extra",
- "interactive" for tools that only make sense when working
  a lot via the terminal,
- "all" for every implemented tool.

Maybe these should also be subdivided into "portable" and "linux".

The user could then specify the tools to include either by
setting BIN when running make(1) or by saying "yes" or "no" to
each category (of course each category would have a default
option), e.g. POSIX=yes INTERACTIVE=no.
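As a rough sketch of how such category selection could look with the mk/name=value.mk include trick discussed elsewhere in this thread (the category names and tool lists below are invented for illustration):

```shell
# one include file per (category, answer) pair; saying "no" includes an
# empty definition, saying "yes" includes the tool list
mkdir -p mk
printf 'BIN_POSIX = cat ls\n'            > 'mk/posix=yes.mk'
printf 'BIN_POSIX =\n'                   > 'mk/posix=no.mk'
printf 'BIN_INTERACTIVE = cols sponge\n' > 'mk/interactive=yes.mk'
printf 'BIN_INTERACTIVE =\n'             > 'mk/interactive=no.mk'
{
	printf 'POSIX = yes\nINTERACTIVE = no\n'                 # defaults
	printf 'include mk/posix=$(POSIX).mk\n'
	printf 'include mk/interactive=$(INTERACTIVE).mk\n'
	printf 'BIN = $(BIN_POSIX) $(BIN_INTERACTIVE)\n'
	printf 'list:\n\t@echo $(BIN)\n'
} > Makefile
make -s list                    # -> cat ls
make -s list INTERACTIVE=yes    # -> cat ls cols sponge
```

Command-line assignments override the defaults in the Makefile, so the user picks categories exactly as suggested above.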

Best regards,
Mattias Andrée


On Fri, 08 Mar 2024 11:36:27 +0100
Elie Le Vaillant  wrote:

> Hi,
> 
> I think one of the main issues with the current
> organization of sbase's and ubase's code, is that while
> they share parts of code (some parts of libutil are shared),
> they do not actually have it in common. As a result, changes
> to shared parts of libutil in sbase are not reflected in ubase,
> and vice-versa.
> 
> Some parts of ubase's libutil are not portable, so indeed it
> makes sense that they are ubase-specific. But some, such as
> recurse, strtonum, strl*, ealloc, eprintf, and maybe others,
> serve the same exact function as in sbase, but sometimes
> vary in implementation, because they didn't receive the same
> patches.
> 
> So I wonder:
> - Is this a problem that needs fixing? (I think yes)
> - How do we fix it?
> 
> We could sync both periodically, applying whichever patch change
> both *base's libutil to both.
> 
> Another idea could be to have both in the same git repository,
> allowing libutil (and possibily more code, like libutf if we
> ever need to) to be shared between them both without syncing
> them back and forth. My idea would be something like this:
> 
> sbase/
>   portable/
> ls.c
> cols.c
> ...
>   unportable/
> ps.c
> kill.c
> ...
>   libutf
>   libutil/
> portable
> unportable
>   Makefile
> 
> This could fix the "multiple -box" problems. This would require
> rewriting some parts of the Makefile (for example, having PORTABLEBIN
> and UNPORTABLEBIN to select whether or not we want the unportable
> utilities; the mkbox script also), and could also provide a solution for
> the "moretools" repo by having it being a separate directory in this
> hypothetical repository.
> 
> Also I'm not sure whether we should keep the goal of being POSIX-compliant.
> ls doesn't columnate, we have (non-standard) cols to do this. sed doesn't
> have the -i flag, we have sponge for this. cron isn't specified by POSIX,
> only crontab is. Maybe toybox roadmap's section on POSIX is relevant:
> https://landley.net/toybox/roadmap.html
> 
> I think we should try and implement a minimal Unix-like userspace,
> and allow ourselves some freedom on what to implement. We already
> do this with sponge and cols. On ubase it is true also, with ctrlaltdel
> for example. I do not see why not do it more.
> 
> Overall I think bringing everything in the same repository, with
> what is now sbase and ubase in separate directories rather than
> separate repositories, would fix both the current situation, and
> allow for a "sextra"/"uextra" directory for supplementary tools.
> 
> Mattias Andrée already proposed this back when he proposed a patch
> for shuf(1):
> > No, we don't really need shuf(1) in sbase, but I think we
> > should have a suckless implementation available, it can be
> > a useful utility. I have a few more utilities I found useful
> > but I haven't bothered to set up a repository yet. [...]
> > I think it might be a good idea to have sextra for portable
> > utilities and uextra for unportable utilities, if you have
> > any other suggestions I would like to hear them.   
> 
> I think this could fix the current situation, with code on
> different versions split between 2 repositories and ultimately
> 2 -box binaries, and allow a broader scope without impeding the
> goals of minimalness of sbase/ubase.
> 
> Regards,
> Elie Le Vaillant
> 




Re: [dev] getting rid of cmake builds

2023-09-22 Thread Mattias Andrée
On Fri, 22 Sep 2023 15:54:59 +0200
Laslo Hunhold  wrote:

> On Thu, 21 Sep 2023 16:05:17 +0200
> David Demelier  wrote:
> 
> Dear David,
> 
> > It's near to impossible to convert a CMake project to make
> > automatically, CMake is almost like a scripting language given the
> > numerous things you can do with it.
> > 
> > Keep in mind that plain make (POSIX) is enough for really simple
> > projects but can come limited when going portable. Although the use of
> > pkg-config helps there are various places where you will need to pass
> > additional compiler/linker flags. For those I like GNU make even
> > though its syntax is somewhat strange at some points.
> > 
> > Which projects are you referring to?  
> 
> the "POSIX makefiles are not easily portable" aspect hasn't been true
> for a long time. Check out the build system of my project libgrapheme.
> It is 100% POSIX make and portable across all BSDs, macOS, Cygwin,
> MinGW and of course Linux, including automatically naming and embedding
> the semantic version as specified in the Makefile. This is done by
> providing a very simple ./configure script that automatically modifies
> config.mk and tells the user when they are working from a system that
> hasn't been ported to yet.

You can use make to run ./configure automatically: all you need to do is
rename Makefile to makefile.in, let ./configure run `ln -s makefile.in makefile`,
and create a new file named Makefile containing:

.POSIX:
.DEFAULT:
	./configure
	$(MAKE) -f makefile $@

I think running ./configure isn't a big deal; however, this technique is also
very useful if you want to automatically generate rules and variables for the
makefile. It's especially powerful because make(1posix) expressly states that
`-f -` shall be interpreted as using stdin as the makefile.
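A tiny demonstration of that `-f -` behaviour: a makefile generated on the fly and fed to make on stdin (rule name and output are made up for the demo).

```shell
# generate a one-rule makefile and pipe it straight into make;
# any Makefile in the current directory is ignored when -f is given
printf 'hello:\n\t@echo generated rule\n' | make -f - hello
# -> generated rule
```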

Very occasionally POSIX make can feel like it's not enough, or at least not
efficient enough (at write time or build time), and GNU make can fix these
issues; however, using this double-makefile technique, all of these can be
addressed (of course not always as nicely as with non-standard features).
Just look at this beauty:
https://codeberg.org/maandree/simple-icon-theme/src/branch/master/Makefile

> 
> libgrapheme has automatic unit tests and code generators written in
> C99, so it's probably a very extreme example. You could easily adapt a
> library project using the libgrapheme configure+config.mk+Makefile as a
> basic template.
> 
> Feel free to contact me if you have any questions.
> 
> With best regards
> 
> Laslo
> 
> [0]:https://git.suckless.org/libgrapheme/
> 




Re: [dev] Minimalist software. Should I care?

2023-07-05 Thread Mattias Andrée
On Wed, 5 Jul 2023 10:04:47 -0500
Dave Blanchard  wrote:

> On Thu, 06 Jul 2023 00:01:43 +1200
> Miles Rout  wrote:
> 
> > There is a page on the website advertising all the many patches available 
> > to improve st and dwm.
> >  Few if any other software projects provide that these days, and are 
> > offended by forks.  
> 
> Actually few if any other software projects NEED to be patched to provide 
> basic ass functionality, like you know, SCROLLBACK BUFFERS IN A TERMINAL. 
> That patch is an absolute joke, BTW--again, it calls malloc() for EVERY LINE 
> of the scrollback buffer! It takes like a second just to open the terminal 
> with a large scrollback buffer, vs sanely-designed Xterm which starts 
> instantly!

One malloc per line isn't really something to lose any sleep over. And you 
don't necessarily need scrollback in your terminal — most terminals, including 
st, do not support splitting to open new terminals, which is an even more 
important functionality that you don't need your terminal to implement either: 
tmux and similar software can provide this, and you can make your terminal run 
tmux automatically. And if the machine isn't used interactively, if it's just a 
monitor displaying information (surf is commonly used to display Jenkins and 
similar software), you definitely do not need this. Only having the absolute 
basics, and then patching in those things you personally need, is quite nice. 
And if you want to fork the software, or just study it to understand how the 
different functionalities are implemented, it's unbeatable. I personally do not 
have any patches applied to any suckless software, and it works just fine for 
me. A lot of popular terminals, and st's patches, implement a bunch of features 
that I really don't have any interest in, and sometimes I don't even think they 
belong in a terminal emulator, or any software running in it, at all.

> 
> There's also few software packages out there (in the sane real world) that 
> actually require you to EDIT THE SOURCE CODE AND RECOMPILE just to change 
> basic options!
> 
> Want to use a different font in different terminals for different purposes? 
> Sorry, st doesn't support that feature, or ANY other features, AT ALL, unless 
> you personally write a patch to do it. Garbage.
> 
> >  The suckless philosophy embraces forks and patches:   
> 
> Bzzt--WRONG. I suggested a fork of st on this list one time and was violently 
> assaulted as if I was the enemy of mankind. 
> 
> That is the real world. You are living in a delusional fantasy.
> 
> > Ok this is obviously just contrarian trolling,
> >  nobody who has read xterm's source code
> >  thinks it is any good.  
> 
> I read Xterm's source code, and I use it daily. It's my most used application 
> by far. I KNOW that it is good. It beats the brakes off the useless, 
> featureless piece of trash that is ST.
> 




Re: [dev] Minimalist software. Should I care?

2023-07-05 Thread Mattias Andrée
On Wed, 5 Jul 2023 10:19:36 -0500
Dave Blanchard  wrote:

> On Wed, 5 Jul 2023 10:23:57 +0200 (CEST)
> Sagar Acharya  wrote:
> 
> > That is exactly what I'm trying to achieve. Capital is what I lack. Soon I 
> > will be releasing Libre-Ads, a random non-targeted ads system specially for 
> > Freedom respecting people.
> > 
> > So self-hosters can self sustain and they don't have to beg for donations 
> > from companies who sell binaries and target ads.  
> 
> Dude, you are delusional. Plain and simple. 
> 
> Self-hosting has been completely possible since the beginning of time. It 
> costs peanuts. And look what we have instead: Facebook, Instagram, Gmail, and 
> so on. Nobody cares.
> 
> You think 99% of the population gives a fuck about "binaries" or "targeted 
> ads"? These are the people who happily use nothing but Microsoft malware or 
> systemd or whatever and give zero fucks about privacy or freedom. They have 
> their every bowel movement or uttered thought tracked via "smart" devices, 
> and they LOVE IT. Every single "thought" anyone in this "society" ever has is 
> programmed in their minds by some corporate or government entity, and each 
> and every one of these people is perched on the edge of their seat in 
> anticipation of the day when their "smart" devices can directly read their 
> minds also, so they can have a more intimate connection to their slave 
> masters. They're better than you and smarter than you and they're sure of it, 
> and you can't tell them shit. 
> 
> You think ANYONE, particularly corporations who make all their money by 
> siphoning it out of the pockets of these people, collecting all of their 
> personal data and reselling it, while constantly brainwashing them to believe 
> whatever their owners want them to believe, give two shits about any "Libre" 
> ad system, or would have any use for that at all? 
> 
> It's non targeted? Who the fuck wants that? The people who own this world 
> want everybody TRACKED, TARGETED, OWNED--and their slaves WANT to be TRACKED, 
> TARGETED, OWNED, with a slave collar around their necks. Hard truth. The most 
> merciful thing you can actually do for any of these pitiful fools is grant 
> them a quick death. Abandon all hope of reeducating or reaching anyone, other 
> than a select, tiny few. 

In your early 20s?

> 
> If you believe that even 1% of 1% are interested in your dream of "self 
> hosting" anything, you are NOT living on the same planet as the rest of 
> humanity. 
> 
> 
> 




Re: [dev] diff and patch

2022-02-02 Thread Mattias Andrée
On Wed, 2 Feb 2022 07:48:39 -0500
LM  wrote:

> I've been looking at non-GNU implementations of diff and patch.  The
> BSD systems, Plan 9 and toybox all have their own implementations.
> Has anyone found other non-GNU licensed Open Source alternatives for
> these programs?  Does anyone else use diff and patch alternatives that
> are not GNU licensed and if so, which alternatives do you prefer?
> 

I prefer my own implementations, which have not yet been merged
into sbase.

You can find patch(1) here:
https://github.com/maandree/sbase/blob/patch/patch.c

I don't have diff(1) readily available, but you
should be able to find it on the mailing list.



Re: [dev] Special target ".POSIX" in Makefiles

2022-01-03 Thread Mattias Andrée
On Mon, 3 Jan 2022 11:16:48 +0100
Markus Wichmann  wrote:

> On Sat, Jan 01, 2022 at 08:09:38PM +0100, Mattias Andrée wrote:
> > .POSIX:
> >
> > CC = c99
> > # Required as POSIX doesn't mandate that it is defined by default
> >
> > OBJ =\
> > alpha.o\
> > beta.o\
> > gamma.o
> >
> > LINUX_AMD64_OBJ = $(OBJ:.o=.linux-amd64-o)
> > OPENBSD_AMD64_OBJ = $(OBJ:.o=.openbsd-amd64-o)
> >
> > all: myproj-linux-amd64 myproj-openbsd-amd64
> >
> > myproj-linux-amd64: $(LINUX_AMD64_OBJ)
> > 	$(CC) -o $@ $(LINUX_AMD64_OBJ) $(LDFLAGS_LINUX_AMD64)
> >
> > myproj-openbsd-amd64: $(OPENBSD_AMD64_OBJ)
> > 	$(CC) -o $@ $(OPENBSD_AMD64_OBJ) $(LDFLAGS_OPENBSD_AMD64)
> >
> > .c.linux-amd64-o:
> > 	$(CC) -c -o $@ $< $(CFLAGS_LINUX_AMD64) $(CPPFLAGS_LINUX_AMD64)
> >
> > .c.openbsd-amd64-o:
> > 	$(CC) -c -o $@ $< $(CFLAGS_OPENBSD_AMD64) $(CPPFLAGS_OPENBSD_AMD64)
> >
> > .SUFFIXES:
> > .SUFFIXES: .c .linux-amd64-o .openbsd-amd64-o
> >
> > # NB! Cannot use .linux-amd64.o and .openbsd-amd64.o
> > # as POSIX requires a dot at the beginning but
> > # forbids any additional dot
> >
> >  
> 
> OK, that is one way. I do wonder how you would handle configurable
> dependencies. I have always been partial to the Linux Kconfig way of
> doing it, but it requires +=:
> 
> obj-y = alpha.o beta.o gamma.o
> obj-$(CONFIG_FOO) += foo.o
> obj-$(CONFIG_BAR) += bar.o
> obj-$(CONFIG_BAZ) += baz.o

You can always, although it may confuse people, especially if you
don't explain it, have ./Makefile be used to generate ./makefile.
It may be a bit messy, but this would allow you to do anything, and
you can even build from ./makefile automatically once it has been
generated.

What I do, which unfortunately only works well when you have a
few options, and becomes messy when you have a lot of settings
(hopefully that is a rarity), is:

./Makefile

CONFIG_FOO=n
include mk/foo=$(CONFIG_FOO).mk

CONFIG_BAR=n
include mk/bar=$(CONFIG_BAR).mk

CONFIG_BAZ=n
include mk/baz=$(CONFIG_BAZ).mk

OBJ =\
alpha.o\
beta.o\
gamma.o\
$(OBJ_FOO)\
$(OBJ_BAR)\
$(OBJ_BAZ)

./mk/foo=y.mk

OBJ_FOO = foo.o

Similar for ./mk/bar=y.mk and ./mk/baz=y.mk; and
./mk/foo=n.mk, ./mk/bar=n.mk, and ./mk/baz=n.mk are empty.

Another solution would be

./Makefile

CONFIG_FOO = 0
CONFIG_BAR = 0
CONFIG_BAZ = 0

CPPFLAGS = -DCONFIG_FOO=$(CONFIG_FOO)\
   -DCONFIG_BAR=$(CONFIG_BAR)\
   -DCONFIG_BAZ=$(CONFIG_BAZ)

OBJ = alpha.o beta.o gamma.o dependencies.o

dependencies.o: foo.c bar.c baz.c

./dependencies.c

#if CONFIG_FOO != 0
#include "foo.c"
#endif

#if CONFIG_BAR != 0
#include "bar.c"
#endif

#if CONFIG_BAZ != 0
#include "baz.c"
#endif

Of course there are situations where this doesn't work well.

But extensions aren't always that bad; += clearly has its
uses, as these examples demonstrate. Your sample with += is
much cleaner than mine. It's even much better
than using the if-statement extension.


> 
> dir.o: $(obj-y)
>   ld -r -o $@ $^

$^ is non-POSIX, so you need $(obj-y)

> 
> With your scheme, this one in particular would blow up due to
> combinatorics (need a list for none, FOO, BAR, BAZ, FOO+BAR, FOO+BAZ,
> BAR+BAZ, and FOO+BAR+BAZ. Now imagine this for the hundreds of options
> Linux has)
> 
> But with the advent of ninja (yes, I know this list's opinion of C++ and
> Google in particular, but you cannot deny that Ninja is FAST), I have
> come around to the idea of configure scripts creating makefiles (or
> similar). And then you can generate makefiles in as complicated a manner
> as you like.
> 
> Indeed, if the makefile is generated, there is little need for suffix
> rules at all. You can just make direct rules for everything. Repetition
> is no problem if the code generating the Makefile is readable. And then
> you can even build with make -r, because you don't need the default
> database, either. And -r saves time in some implementations.
> 
> Ciao,
> Markus
> 




Re: [dev] Special target ".POSIX" in Makefiles

2022-01-01 Thread Mattias Andrée
.POSIX:

CC = c99
# Required as POSIX doesn't mandate that it is defined by default

OBJ =\
alpha.o\
beta.o\
gamma.o

LINUX_AMD64_OBJ = $(OBJ:.o=.linux-amd64-o)
OPENBSD_AMD64_OBJ = $(OBJ:.o=.openbsd-amd64-o)

all: myproj-linux-amd64 myproj-openbsd-amd64

myproj-linux-amd64: $(LINUX_AMD64_OBJ)
	$(CC) -o $@ $(LINUX_AMD64_OBJ) $(LDFLAGS_LINUX_AMD64)

myproj-openbsd-amd64: $(OPENBSD_AMD64_OBJ)
	$(CC) -o $@ $(OPENBSD_AMD64_OBJ) $(LDFLAGS_OPENBSD_AMD64)

.c.linux-amd64-o:
	$(CC) -c -o $@ $< $(CFLAGS_LINUX_AMD64) $(CPPFLAGS_LINUX_AMD64)

.c.openbsd-amd64-o:
	$(CC) -c -o $@ $< $(CFLAGS_OPENBSD_AMD64) $(CPPFLAGS_OPENBSD_AMD64)

.SUFFIXES:
.SUFFIXES: .c .linux-amd64-o .openbsd-amd64-o

# NB! Cannot use .linux-amd64.o and .openbsd-amd64.o
# as POSIX requires a dot at the beginning but
# forbids any additional dot



On Sat, 1 Jan 2022 14:08:24 +0100
Ralph Eastwood  wrote:

> Hi Mattias,
> 
> Happy New Year!
> 
> > But I also like makel, but I will give it some more though before I  
> rename it a second time.
> 
> So... makellint? :D
> I like it; it seems 'makel' is unused as a project name.
> 
> Are there any suggestions for handling out-of-source builds using
> POSIX makefiles?  I recently had a project where we needed to
> support multiple platforms and so output-ing object files and binaries
> into platform-specific directories was needed.
> I know that GNU's pattern matching rules support this behaviour, i.e.:
> 
> $(OBJDIR)/%.o: $(SRCDIR)/%.c
> $(CC) $(CFLAGS) -c $< -o $@
> 
> Perhaps just listing all of the object files and source +
> dependencies would be the easiest way in this instance?
> 
> Best regards,
> Ralph
> 
> --
> Tai Chi Minh Ralph Eastwood
> tcmreastw...@gmail.com
> 




Re: [dev] Special target ".POSIX" in Makefiles

2022-01-01 Thread Mattias Andrée
On Sat, 1 Jan 2022 13:01:05 +0100
Laslo Hunhold  wrote:

> On Sat, 1 Jan 2022 10:33:22 +0100
> Mattias Andrée  wrote:
> 
> Dear Mattias,
> 
> first off, happy new year to all of you!
> 
> > Thanks for pointing that it, I didn't find it in my search.
> > I renamed it to mklint.  
> 
> This is also confusing as mk(1) by plan9 exists, but you explicitly
> target POSIX make(1).

You're right, that wasn't really a good idea.

> 
> makelint is a really great name and it clearly shows what it does, the
> linked project seems to be dead after the initial commit and nobody
> talks about it.
> 
> Here are some suggestions for a different name:
> 
>- makel ("Makel" means "flaw" in Germany, and it fits so well
> because you want to find flaws in Makefiles)
>- makeorbreak (english idiom, meaning a situation that either brings
>   success or complete failure, might be a nod towards
>   the fact that you either are POSIX compliant or not)
>- shakenmake (like Shake'n'Bake)
> 
> I really like "makel" to be honest, which also fits your theme of
> naming some of your projects after German nouns (e.g. libzahl).

libzahl is my only project with a German name, and it was called
libzahl because the bold Z used to represent the integers stands for Zahl,
but I do have projects with names in other languages (I also have
libskrift (Swedish), libruun (librún; Old Norse, Icelandic, and Faroese;
this one is not published yet), radharc (Irish and Gaelic), and libcantara
(Spanish, Portuguese, Galician, and Asturian; this one hasn't been
published yet, and at the rate it is being developed I'm doubtful it
ever will be)), so giving it a non-English name wouldn't be out of
character.

But I also like makel; I will give it some more thought before I
rename it a second time.

> 
> With best regards
> 
> Laslo
> 




Re: [dev] Special target ".POSIX" in Makefiles

2022-01-01 Thread Mattias Andrée
Thanks for pointing that out; I didn't find it in my search.
I renamed it to mklint.


Regards,
Mattias Andrée


On Sat, 1 Jan 2022 14:43:15 +0600
NRK  wrote:

> On Fri, Dec 31, 2021 at 11:29:11PM +0100, Mattias Andrée wrote:
> > I just started implementing a linter[0]. Even though I just
> > started it today, I think that's enough for this year.
> > 
> > Happy New Year!
> > Mattias Andrée
> > 
> > [0] https://github.com/maandree/makelint/  
> 
> Thank you very much, I'll keep an eye on this.
> 
> Also as for the name, while looking it up I noticed that there's another
> project by the exact same name [0]. Funny enough, this project's goal
> seems to be complete opposite of ours, it advices usage of non-posix
> implicit variables like $(RM) in it's README.
> 
> Might be a good idea to use a different name to avoid potential
> confusion. Something like `makecheck` should be fine, I cannot find any
> other project with that name.
> 
> [0]: https://github.com/christianhujer/makelint
> 
> - NRK
> 




Re: [dev] Special target ".POSIX" in Makefiles

2021-12-31 Thread Mattias Andrée
I just started implementing a linter[0]. Even though I just
started it today, I think that's enough for this year.

Happy New Year!
Mattias Andrée

[0] https://github.com/maandree/makelint/


On Thu, 30 Dec 2021 21:17:32 +0100
Mattias Andrée  wrote:

> On Thu, 30 Dec 2021 21:07:06 +0100
> Laslo Hunhold  wrote:
> 
> > On Thu, 30 Dec 2021 17:49:23 +0100
> > crae...@gmail.com wrote:
> > 
> > Dear craekz,
> >   
> > > As far as I can see, we could add `.POSIX` to the following programs:
> > > dwm, dmenu, dwmstatus, sent and tabbed
> > > I've just looked over the Makefiles very briefly, so I may have
> > > overseen something. Note: I just picked out the "biggest" programs.
> > 
> > sadly the make-implementations out there don't offer a "strict" mode to
> > warn you about non-compliance or undefined behaviour. GNU make (as  
> 
> I've actually being thinking of writing a makefile linter.
> How interested would people be in such a tool?
> 
> The reason to have a linter separate from a make utility itself
> is that it would not have to reject non-standard features that
> you don't want to implement in make. And I also think it would for
> cleaner implementation of both projects. Additionally, I would
> suspect that a lot of people would just stay with GNU make because
> it's in every distro, so having it as a separate project would
> probably give it wider adoption.
> 
> 
> > usual with GNU products) added a lot of GNU-extensions and poisoned the
> > entire ecosystem. It's really easy to write non-compliant makefiles and
> > have things silently break or behave slightly different across
> > implementations.
> > 
> > Adding a .POSIX target is one thing, it's another to actually verify
> > the makefiles are POSIX-compliant.
> > 
> > It would be a cool project-idea to write a very strict POSIX-compliant
> > make-implementation that, if it includes extensions, marks them as such
> > and prints a warning if desired. mk by plan9 is also very nice, of
> > course, but makefiles are the common denominator in the scene and it
> > might be a cool incentive to have such a validating make.
> > 
> > A lot of new systems are used all over the place, like cmake, ninja and
> > whatnot, so it might be cool to show how zen it is to just work with a
> > make-based build-system. One example is the really trivial way to
> > package suckless-projects, e.g. libgrapheme[0]. It doesn't get simpler
> > than that.
> > 
> > With best regards
> > 
> > Laslo
> > 
> > [0]:https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=libgrapheme
> >   
> 
> 




Re: [dev] Special target ".POSIX" in Makefiles

2021-12-31 Thread Mattias Andrée
On Fri, 31 Dec 2021 12:53:59 +0100
crae...@gmail.com wrote:

> > Yeah you make a strong argument. I too think a linting tool would be
> > useful. I imagine it would function like shellcheck does for shells. If
> > nothing else it would really help people identify GNU extensions vs
> > portable behavior.  
> Something like shellcheck would be awesome! Given that the POSIX
> Makefile syntax isn't very large or complicated, the implementation of
> such a tool could be pretty simple (in the manner of suckless).
> Although I wouldn't mind a strict POSIX compliant make(1) without any
> extensions. There a far too few programs that could be used to
> verify compliance with standards (and buying a test suite isn't a
> great solution).
> 
> --
> craekz
> 

I think a make(1) implementation without any extensions isn't the
best idea. There are a lot of makefiles out there that rely on
extensions, and it's not always that bad; it could be that a makefile
just uses +=. I would much rather have a make implementation with a
sane set of commonly used extensions, that nags me if I use
extensions, than a minimal[0] implementation that I use for my
own projects and a garbage implementation for when I compile
software I download (of course the minimal one would do in some cases,
but if I build from the AUR, what would happen is that the garbage
one would be used for every AUR package I download).


Regards,
Mattias Andrée



[0] Minimal only when building from a makefile: optional stuff
like default rules for compiling C code would be omitted.
But there is no reason not to allow such stuff, even for
languages that POSIX does not specify, when running without
a makefile.



Re: [dev] Special target ".POSIX" in Makefiles

2021-12-30 Thread Mattias Andrée
On Thu, 30 Dec 2021 23:07:35 +0100
Laslo Hunhold  wrote:

> On Thu, 30 Dec 2021 21:17:32 +0100
> Mattias Andrée  wrote:
> 
> Dear Mattias,
> 
> > I've actually being thinking of writing a makefile linter.
> > How interested would people be in such a tool?  
> 
> very interested! Even though, when you implement the logic, you might
> as well go all the way and offer a make(1) that also offers linting
> (comparable to mandoc(1)).

Yes, of course you could add it to make(1) also, but for the explained
reasons I think it would be useful as a standalone implementation.

However, a linter could do things that I do not want to do in make(1)
itself. For example, it could do analysis and find a portable
solution (e.g. just removing "$(@:.o=.c)"), or explain that a
non-standard feature is being used in vain (e.g. "PREFIX ?= /usr/local").
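To illustrate the second case (a sketch of the reasoning, not actual linter output):

```makefile
# PREFIX ?= /usr/local   # non-POSIX, and used in vain here:
PREFIX = /usr/local
# A command line like "make PREFIX=/opt install" already overrides
# a plain = assignment, so ?= only adds anything if you want an
# already-defined macro (e.g. from the environment) to win as well.

install: prog
	cp prog $(PREFIX)/bin/prog
```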

make(1) should of course tell you when you are not being POSIX
compliant, but ultimately to what extent it would hold your hand,
and how pedantic it would be, would be determined at time of
implementation. It would probably be very little. For example, I
think that the linter could warn about bugs in other implementations
or about potential whitespace issues which are probably just fine in
every implementation people use, but I do not want to do this in
make(1) itself.


> 
> > The reason to have a linter separate from a make utility itself
> > is that it would not have to reject non-standard features that
> > you don't want to implement in make. And I also think it would for
> > cleaner implementation of both projects. Additionally, I would
> > suspect that a lot of people would just stay with GNU make because
> > it's in every distro, so having it as a separate project would
> > probably give it wider adoption.  
> 
> You wouldn't have to reject non-standard features, but offer printing a
> warning for undefined behaviour and non-standard extensions while still
> supporting them to a certain extent, something like:
> 
>$ snake -l
>Makefile:1:1: warning: Missing ".POSIX" target.
>config.mk:2:4: warning: "?=" is a GNU-extension. 
>Makefile:20:34: warning: A prerequisite must not contain a macro.
>$
> 
> Optionally you could also choose to always print warnings and turn
> them into hard errors with the l-flag.
> 
> It would be necessary to assess how many extensions are necessary
> to implement. With sbase/ubase we found out that while GNU-extensions
> are used, they are not all too widespread and only a small subset of
> the entire GNU bloat.
> 
> With Makefiles you don't really need the GNU extensions and they,
> as usual with GNU extensions, seem to originate from a misunderstanding
> or caving in to simply wrong usage-patterns (just think of cat -v) by
> users who probably don't know it better or about the right tools for the
> job.

I think there are situations where some of the extensions offered by
GNU are useful as they can make things cleaner, but they are hardly
necessary, and often they are inappropriately used, and of course there
are some features I cannot find an appropriate use case for at all.

GNU's strategy is to make things as easy for users as possible, and
offer “more value” than the software they are replacing, which naturally
led to its current miserable situation. They just haven't learned to
say No, which is probably the toughest but most valuable lesson for any
programmer to learn.

> 
> Anyway, tl;dr: Such a strict POSIX-compliant make would be really awesome!
> I'm sure many would pick it up. Null program wrote a great post[0]
> about this topic, indicating that there's no tool out there that is
> explicit about standard conformance, _especially_ undefined behaviour.

A few months ago I started writing a POSIX-like shell[0]. It has
requirements that make it impossible to be fully POSIX, but
if you start it as sh it will be strictly POSIX, and if it, in this
mode, encounters an extension it will warn you that it will not
be recognising it. It will also warn you in some situations that
look like mistakes, and yell at you (in lower case) if you are
using backquotes (read up on the backquote syntax in sh and you
will understand why).

> 
> With best regards
> 
> Laslo
> 
> [0]:https://nullprogram.com/blog/2017/08/20/
> 

Regards,
Mattias Andrée


[0] https://github.com/maandree/apsh



Re: [dev] Special target ".POSIX" in Makefiles

2021-12-30 Thread Mattias Andrée
On Thu, 30 Dec 2021 21:07:06 +0100
Laslo Hunhold  wrote:

> On Thu, 30 Dec 2021 17:49:23 +0100
> crae...@gmail.com wrote:
> 
> Dear craekz,
> 
> > As far as I can see, we could add `.POSIX` to the following programs:
> > dwm, dmenu, dwmstatus, sent and tabbed
> > I've just looked over the Makefiles very briefly, so I may have
> > overseen something. Note: I just picked out the "biggest" programs.  
> 
> sadly the make-implementations out there don't offer a "strict" mode to
> warn you about non-compliance or undefined behaviour. GNU make (as

I've actually been thinking of writing a makefile linter.
How interested would people be in such a tool?

The reason to have a linter separate from a make utility itself
is that it would not have to reject non-standard features that
you don't want to implement in make. And I also think it would
make for cleaner implementations of both projects. Additionally,
I would suspect that a lot of people would just stay with GNU make
because it's in every distro, so having it as a separate project
would probably give it wider adoption.


> usual with GNU products) added a lot of GNU-extensions and poisoned the
> entire ecosystem. It's really easy to write non-compliant makefiles and
> have things silently break or behave slightly different across
> implementations.
> 
> Adding a .POSIX target is one thing, it's another to actually verify
> the makefiles are POSIX-compliant.
> 
> It would be a cool project-idea to write a very strict POSIX-compliant
> make-implementation that, if it includes extensions, marks them as such
> and prints a warning if desired. mk by plan9 is also very nice, of
> course, but makefiles are the common denominator in the scene and it
> might be a cool incentive to have such a validating make.
> 
> A lot of new systems are used all over the place, like cmake, ninja and
> whatnot, so it might be cool to show how zen it is to just work with a
> make-based build-system. One example is the really trivial way to
> package suckless-projects, e.g. libgrapheme[0]. It doesn't get simpler
> than that.
> 
> With best regards
> 
> Laslo
> 
> [0]:https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=libgrapheme
> 




Re: [dev] Special target ".POSIX" in Makefiles

2021-12-30 Thread Mattias Andrée
I agree that it should be added to all Makefiles, because
“otherwise, the behavior of make is unspecified.”
(Quote from the POSIX specifications)

It is probably missing because people don't usually learn
make(1) by reading the entire manual.

.POSIX can be used as long as the makefile does not
use any feature that conflicts with POSIX. But I think
it would mislead people into thinking the non-POSIX
features are POSIX features, so it would be best to skip
it in that case.

I do however think some Makefiles here use the $@
macro in the prerequisite list. This is not a POSIX feature,
is seldom needed, and should be removed, assuming it
is redundant given the inferred prerequisite of
inference rules.
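As a sketch of what the inference rule already gives you (the file names here are hypothetical):

```makefile
.SUFFIXES:
.SUFFIXES: .c .o

# The .c prerequisite is inferred; there is no need for a
# non-POSIX "prog.o: $(@:.o=.c)" style prerequisite.
.c.o:
	$(CC) $(CFLAGS) -c -o $@ $<

# Extra, non-inferred prerequisites can still be listed portably:
prog.o: prog.h
```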


Regards,
Mattias Andrée


On Thu, 30 Dec 2021 15:58:33 +0100
crae...@gmail.com wrote:

> Hi,
> 
> I'm wondering why the Makefiles of some suckless programs (and libs) have the
> special target `.POSIX` in them and some don't.
> For example, dwm doesn't have it in it's Makefile but there isn't a single
> non-POSIX feature used. I think it would be better to include the `.POSIX` 
> target
> simply to follow the standard and guarantee portability.
> If there is a good reason why it isn't included, please let me know.
> 
> --
> craekz
> 




Re: [dev] Ada not Rust

2021-04-26 Thread Mattias Andrée
On Mon, 26 Apr 2021 10:10:20 -0400
Ross Mohn  wrote:

> On 4/23/21 10:12 PM, Jeremy wrote:
> > On 04/20/21 10:23AM, Greg Reagle wrote:  
> >> On Tue, Apr 20, 2021, at 09:45, Jeremy wrote:
> >> I gave up on using dvtm a while ago (now I use tmux which is good) because 
> >> it
> >> would keep crashing.  And I could not figure out how to debug the crashes 
> >> or get
> >> specific information about the cause of the crashes.  If I had known about 
> >> these
> >> options then I would have compiled dvtm with them and maybe gave better bug
> >> reports.  (Though I know C, I am not an expert in C.)  
> > I know what you're talking about & it's a pain in the ass. I believe
> > this is due to the ANSI parser implementation(vt.c) that DVTM uses.
> >
> > I wrote a library, libst(a fork of st), and modified st, dvtm to link 
> > against it:
> > https://github.com/jeremybobbin/libst
> >
> > Try compiling & installing libst, then compile & run dvtm in libst/examples.
> >
> > As much as I love dvtm, I believe it's a captive user interface, and
> > lacks the extensibility that a terminal multiplexer could/should provide.
> >
> > Attempting to address this, I wrote, what I believe to be, a suckless
> > approach to terminal multiplexing - svtm:
> > https://github.com/jeremybobbin/svtm
> >
> > svtm is a composition of primarily 4 programs:
> > - abduco - {at,de}tach
> > - svt- TTY state/dumping/scrolling
> > - bmac   - byte-for-byte macros
> > - itty   - lets you run TTY input through a filter(such as bmac)
> >
> > I'd like to add a "paner" program to that list, but for now, the above
> > is all you need to express any terminal-oriented workflow in a UNIX
> > environment.
> >
> > I'm curious as to what y'all think.
> >
> > Jeremy
> >  
> I and my entire team have been actively and successfully using dvtm for 
> years. I haven't had it crash in a long while now, and I regularly keep 
> sessions alive for months. However, I am very interested in using 
> something as you describe above, with a library version of st that is 
> kept up-to-date. I didn't get your svtm to work out-of-the-box, but I 
> will continue to debug it myself. I got all the programs to compile 
> fine, but did go into each Makefile and, where necessary, added the '?' 
> character to this line "PREFIX ?= /usr/local".

Why do you need `?=`? The only difference between `=` and `?=`,
apart from `=` being the only assignment operator defined by POSIX,
is that `?=` has no effect if the variable is already defined, whereas
`=` does not have any effect if the variable is set on the command line.
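In other words (a sketch; PREFIX is just an example macro):

```makefile
PREFIX = /usr/local
# "make PREFIX=/opt" overrides this: command-line macros win over =.
# With the non-POSIX ?=, an already-defined macro -- e.g. a PREFIX
# exported in the environment -- would also be left alone.
```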

> 
> -Ross
> 
> 




Re: [dev] Ada not Rust

2021-04-19 Thread Mattias Andrée
On Mon, 19 Apr 2021 16:19:18 -0400
Greg Reagle  wrote:

> On Sat, Apr 17, 2021, at 11:57, Laslo Hunhold wrote:
> > Anyway, I can't say it enough: Check out Ada 2012 (and the SPARK
> > subset) if you care about "secure" languages. It's not as lean as C, but
> > you end up solving so many problems with it, especially in regard to
> > software engineering and safety.  
> 
> Okay, I did.  Very interesting.  I briefly studied Ada many years ago.  Do 
> you think that Ada is a viable alternative to Rust?  Do you think it is a 
> decent alternative to C for things like operating systems or utilities like 
> sbase or ubase?
> 
> I made a Hello World program in Ada.  Very fast and small.  However, it 
> depends on libgnat-8.so.1.  Is there a way to build it so that it does not?  
> Like statically linked?
> 

For me, libgnat is only dynamically linked if I run gnatbind
with -shared; with -static it should be statically linked.

I cannot find how to statically link the C runtime.



Re: [dev] Checksums and Sig files for release gzip

2021-04-17 Thread Mattias Andrée
On Sat, 17 Apr 2021 20:38:51 +0200
Mattias Andrée  wrote:

> On Sat, 17 Apr 2021 21:30:58 +0300
> Sergey Matveev  wrote:
> 
> > *** Mattias Andrée [2021-04-17 20:08]:  
> > >No one has an OCaml compiler.
> > 
> > Same applies to Rust.
> > And to Go too, but it is easy bootstrappable with the C compiler, taking
> > just several minutes on modest hardware. Rust is like a JavaScript: just
> > download it and run, because it is seems so convenient modern days.
> >   
> > >If I'm going to write a compiler, I'm going to write it in C
> > 
> > That is good. And nearly everyone does so, or use at least something
> > that can be build with C-compiler.
> >   
> 
Yes, one extra step is acceptable, as long as you have a way,
that isn't too long, to get there from some common starting
point. Self-hosting is a problem, but if you host a
non-self-hosted version that can be used to compile the
self-hosted one, that is also acceptable; you should not
have to manually look through old releases to find a
non-self-hosted version.

So basically, if you are going to make a self-hosted compiler
for your own language, you have to publish the last version
that wasn't self-hosted alongside the self-hosted version,
and then make sure all new versions are compilable with that
version.



Re: [dev] Checksums and Sig files for release gzip

2021-04-17 Thread Mattias Andrée
On Sat, 17 Apr 2021 21:30:58 +0300
Sergey Matveev  wrote:

> *** Mattias Andrée [2021-04-17 20:08]:
> >No one has an OCaml compiler.  
> 
> Same applies to Rust.
> And to Go too, but it is easy bootstrappable with the C compiler, taking
> just several minutes on modest hardware. Rust is like a JavaScript: just
> download it and run, because it is seems so convenient modern days.
> 
> >If I'm going to write a compiler, I'm going to write it in C  
> 
> That is good. And nearly everyone does so, or use at least something
> that can be build with C-compiler.
> 

Yes, one extra step is acceptable, as long as you have a way,
that isn't too long, to get there from some common starting
point. Self-hosting is a problem, but if you host a
non-self-hosted version that can be used to compile the
self-hosted one, that is also acceptable; you should not
have to manually look through old releases to find a
non-self-hosted version.



Re: [dev] Checksums and Sig files for release gzip

2021-04-17 Thread Mattias Andrée
On Sat, 17 Apr 2021 20:50:50 +0300
Sergey Matveev  wrote:

> *** Mattias Andrée [2021-04-17 18:57]:
> >This self-hosted nonsense is ludicrous.  
> 
> Not agree.
> 
> >It's understandable for C compilers  
> 
> Rust, as far as I heard/remember, was written on OCaml, that itself was
> also written on some C -- so nothing prevents its bootstrapping too, unless
> its authors thought about that. Shame on them.
> 

No one has an OCaml compiler. If I'm going to write a compiler, I'm going to
write it in C, even if the language is C, because everyone has a C compiler.
I'm not going to use an older language just because it is older; I'm going
to use the most common language. One day that could be Rust, and then
it would be OK to write a Rust compiler in Rust.



Re: [dev] Checksums and Sig files for release gzip

2021-04-17 Thread Mattias Andrée
This self-hosted nonsense is ludicrous. It's understandable for C compilers:
it's an old language that everyone has a compiler for and there are many
implementations, and even if you wrote it in assembly, you would just shift
the problem to the assembler. So there must be one blessed language, and
C and C++ are good options. But can you not just download an older version
of the compiler that's presumably written in C or C++, and compile the
newest version with that version? Or have they not published one, or would
you need to do it in multiple steps due to frequent language changes?

Is a compiler really open source if it cannot be compiled with another
compiler?


On Sat, 17 Apr 2021 19:42:20 +0300
Sergey Matveev  wrote:

> *** Laslo Hunhold [2021-04-17 17:57]:
> >in regard to my argument: It has abysmal compile times and the compiler
> >is extremely bloated.  
> 
> Also it has bootstrap problem: officially there is no way to build Rust,
> except for downloading some binaries for you platform from the Internet.
> LLVM/Clang, GCC -- all of them can be compiled with more older GCC, tcc
> and whatever C-compilers: GNU Guix with GNU Mes bootstraps C/C++-ecosystem
> that way. But Rust developers... do not bother -- just shut up and
> download our binaries.
> 
> There is mrustc project: Rust written on C++, that can be used to build
> Rust itself. But it is just a side project, not official. 16GB of RAM
> was not enough at all (constant swapping) and I borrowed 32-cores 2-CPUs
> Xeon system with 128GB of RAM just to try to build mrustc with several
> versions of Rust (mrust can build rust 1.29, that can build 1.30, that
> can build 1.31, and so on). I succeeded on Devuan, with taking more than
> 50GB of diskspace. Could not build it on FreeBSD. So personally even if
> I wanted to try Rust, I just have no such powerful hardware for its
> bootstrapping and knowledge how to build mrustc on FreeBSD.
> 




Re: [dev] Checksums and Sig files for release gzip

2021-04-17 Thread Mattias Andrée
On Sat, 17 Apr 2021 16:30:15 +0200
Laslo Hunhold  wrote:

> On Wed, 14 Apr 2021 09:05:01 +0300
> Sergey Matveev  wrote:
> 
> Dear Sergey,
> 
> > If we a talking here about checking software integrity, then speed is
> > important. Millions of people check the hash of downloaded files -- if
> > it is slow, then huge quantity of time/energy is wasted. Less time you
> > spent on hashing, less energy is wasted. SHA2 (and SHA3 too, if we are
> > not talking about KangarooTwelve modifications) is the worst choice
> > from ecology point of view.  
> 
> we would save much more energy by banning autohell, Rust, bloated

I've completely ignored Rust. What's the problem with it?

> electron-apps and Qt. Especially autohell is really a huge waste of
> time and energy, and I often find that packages take longer to
> "configure" (what for?) than to actually compile. Never has configure
> ever helped me; it always stood in the way, e.g. when GHC added a
> warning a few months ago, breaking all autoconf-checks who assumed that
> any output from the compiler was an error.
> 
> With best regards
> 
> Laslo
> 




Re: [dev] Checksums and Sig files for release gzip

2021-04-13 Thread Mattias Andrée
On Tue, 13 Apr 2021 20:17:37 +0200
Markus Wichmann  wrote:

> On Tue, Apr 13, 2021 at 05:08:31PM +0200, Mattias Andrée wrote:
> > On Tue, 13 Apr 2021 16:57:39 +0200
> > Sagar Acharya  wrote:
> >  
> > > Sure, any good signature. SHA512 is stronger than SHA1, MD5 and SHA256. 
> > > It shouldn't take a second more than others. Why use a weaker checksum?  
> >
> > SHA512 is actually more than twice as fast as SHA256 on 64-bit machines.
> > (I don't know which is stronger).
> >  
> 
> Y'know, while we're bikeshedding, why not just use SHA-3? Keccak has
> been out for a while now, and it is also available in 256 and 512 bit
> variants. I keep wondering why people keep using SHA-2 variants. Do you
> want to wait until it is cracked?

I use SHA-3 :) But interestingly, even though Keccak (from which SHA-3 is
derived) won over BLAKE2, BLAKE2 seems to be more popular.

> 
> SHA-3 would have the benefit of always being a 64-bit algorithm (unlike
> SHA-2, which is 32-bit in the 192 and 256 bit variants, and 64-bit in
> the 384 and 512 bit variants, necessitating two very similar processing
> functions in C).

SHA-3 may be 64-bit, but it's just a set of four special configurations of
Keccak, which does not have that restriction at all, and that complicates
the algorithm. Just as you would choose SHA-3, and one specific version of
it, rather than Keccak, you would only choose one specific version of
SHA-2, so if you only implement that version, you can get rid of these
complexities. However, in the real world applications implement all, or
at least four, of the SHA-2 versions, which only require two distinct,
simple implementations. With SHA-3, you can get rid of some complexity by
restricting the implementation to SHA-3, but wouldn't you implement it via
Keccak, so you can easily implement all variants of Keccak? (When I
implemented sha3sum, SHA-3 was not defined yet, we only had Keccak, so I
had to implement it with all those complexities; then I just left it when
SHA-3 was finalised, so it could support more hashing algorithms.)

> Its design also makes HMAC easier, though this is not
> of import for this application.
> 
> > I see no point in having checksums at all, except for detecting bitrot.
> > Signatures are of course good.
> >  
> 
> Signatures only help if you have a known-good public key. Anyone can
> create a key and claim it belongs to, say, Barack Obama. I have no
> public key of anyone affiliated with suckless, and no way to verify if
> any key I get off of a keyserver is actually one of theirs.

That's where the idea of web of trust comes in. During slcon, we can
have key-signing parties. Then other people can sign our keys, and
eventually there is a chain from someone you trust to the suckless
developers. Additionally, the developers can host their signed keys
on other websites, including their own. Then, if you get them off multiple
servers, including well-known ones, they are fairly trustworthy.

> 
> Security is hard.
> 
> Ciao,
> Markus
> 




Re: [dev] Checksums and Sig files for release gzip

2021-04-13 Thread Mattias Andrée
On Tue, 13 Apr 2021 16:57:39 +0200
Sagar Acharya  wrote:

> Sure, any good signature. SHA512 is stronger than SHA1, MD5 and SHA256. It 
> shouldn't take a second more than others. Why use a weaker checksum?

SHA512 is actually more than twice as fast as SHA256 on 64-bit machines.
(I don't know which is stronger).

I see no point in having checksums at all, except for detecting bitrot.
Signatures are of course good.

> Thanking you
> Sagar Acharya
> https://designman.org
> 
> 
> 
> 13 Apr 2021, 20:15 by daniel.cegie...@gmail.com:
> 
> > How/where SHA512 is better than SHA256 or SHA1? I don't see any added
> > value in this. If someone breaks into your server and replace files,
> > may also regenerate check sums (SHA256/512 or SHA3, scrypt etc.). The
> > use of MD5 will be equally (un)safe as SHA512 :)
> >
> > A better solution is e.g. signify from OpenBSD or GnuPG.
> >
> > https://man.openbsd.org/signify
> >
> > Daniel
> >
> > wt., 13 kwi 2021 o 13:36 Sagar Acharya  
> > napisał(a):
> >  
> >>
> >> Can we have SHA512 checksums and sig files for the release gzips of 
> >> suckless software?
> >>
> >> Thanking you
> >> Sagar Acharya
> >> https://designman.org
> >>  
> 




Re: [dev] Completeness suckless

2021-04-09 Thread Mattias Andrée
On Fri, 9 Apr 2021 12:13:23 -0700
Jeremy  wrote:

> On 04/09/21 08:24PM, Sagar Acharya wrote:
> >   
> > >I think, the user's turn to try to understand them to the fullest, even  
> > if it means learning a bit of a programming language. - Laslo
> > Majority of users in this world would never learn a low level programming 
> > language like C. They are incapable.
> >   
> 
> Where would you say their capacities are lacking?

I would say just determination or interest. Most people are capable of
learning programming (that is not to say most people are capable of
being good at it, but probably they can become good enough), and if
you can learn a high level language, you can definitely learn a low
level language.

> 
> Jeremy
> 




Re: [dev] [sbase][tar] GNU tar support

2020-11-25 Thread Mattias Andrée
On Wed, 25 Nov 2020 16:28:02 -0500
Cág  wrote:

> Hi all,
> 
> Laslo Hunhold wrote:
> > Dear Cág,
> > Even if a suckless implementation of GNU tar was possible, would you
> > really want it to be included? I'd rather like to encourage people to
> > use standard non-proprietary file formats.  
> 
> Yeah, I think I would.  tar(1) is one of those cases where a compromise
> between being suckless and being usable has to be found.  suckless (or was
> it just you haha) made their own image format, for example, but there is
> a level of compatibility (i.e. jpg/png converters are bundled).
> 
> You can say that JavaScript is bad and all but without it you can't browse
> the web.  You can say the same about C++ but a modern Unix desktop can't
> exist without it.
> 
> The compatibility article Hadrien linked claims that all examined variants
> support GNU tar.
> 


I'm not sure what's special about GNU tar, but the ability to read tar files
in all common formats is, I would say, desirable; creating new tar files in
those formats, however, is not.

Concerning farbfeld, it is quite a different thing to create a new, simpler
standard than to support an already existing but complex standard. Farbfeld
was a good first step in moving towards simpler image formats, although
even if it had got exceptionally good traction it would take quite some
time before support for older, complex formats could be removed, and in the
meantime support for an additional, but simple, format would be required,
which would add code (not complexity) to image format libraries. I think
this is worth it in the long run, and it gave us a good standard format,
better than the netpbm formats, for programs that don't need to support
user-provided files.



Re: [dev] Culling all the way down

2020-09-08 Thread Mattias Andrée
On Tue, 8 Sep 2020 07:42:21 +0200
Tobias Bengfort  wrote:

> Hi,
> 
> On 07/09/2020 09.13, Alexander Krotov wrote:
> > Maybe a better solution is to send XOFF (see 
> > https://en.wikipedia.org/wiki/Software_flow_control), but I am also
> > not sure how other programs react to it. They will probably block
> > waiting for the write(2) syscall to return instead of continuing to
> > do work in the background.  
> 
> Trying this manually had exactly the desired outcome: CPU usage dropped
> to 0. Nice!
> 
> On 06/09/2020 09.48, Mattias Andrée wrote:
> > You would just be adding unneeded work and complexity for next to no
> > gain.  
> 
> Even if there is little gain it might still make sense if the solution
> is simple. This feels like it could be simple, but I am not sure.
> 
> Concrete steps:
> 
> - dwm sets _NET_WM_STATE_HIDDEN on windows that are currently not displayed
> - st sends XON/XOFF when _NET_WM_STATE_HIDDEN is set
> 
> Do you think this would be reasonable or are there potential issues I am
> missing?
> 
> tobias
> 

This is what I suggested, except XOFF is akin to SIGTSTP whereas I suggested
SIGSTOP. SIGSTOP will stop the process no matter what, and that's probably what
you want: you want it to stop working entirely even if it's not actually
printing anything at the moment. However, in some cases, such as a visualising
audio player or a video player, you would want to suppress output without
stopping the program at all; this would require some other solution, like
resizing the terminal to (0, 0) or sending a string to the program.

Of course, you will have to make this something the program has to opt in for,
otherwise it could be stopped when this is not desired, which in most cases
it isn't.


Regards,
Mattias Andrée



Re: [dev] Culling all the way down

2020-09-07 Thread Mattias Andrée
On Mon, 7 Sep 2020 10:13:26 +0300
Alexander Krotov  wrote:

> > What you could do is to patch a terminal to allow programs to be
> > paused via SIGSTOP when invisible and continued via SIGCONT when
> > visible. Then your program would only need to write some string to
> > the terminal when it starts and when it terminates. Multiplexers
> > could however become an issue, but you could patch one as well.  
> 
> Sending SIGSTOP stops even the programs that simply do something in the
> background, like software update. This will make it impossible to run
> something like "apt-get upgrade" in invisible terminal.

I meant that this would be something the application would explicitly
have to opt in for by writing a special string to the terminal.

> 
> Maybe a better solution is to send XOFF (see
> https://en.wikipedia.org/wiki/Software_flow_control), but I am also not
> sure how other programs react to it. They will probably block waiting
> for the write(2) syscall to return instead of continuing to do work in
> the background.
> 
> If they use separate threads/coroutines for the work and for progress
> bar update, they will continue to work while progress bar will be
> blocked. Once unlocked, they will send the rest of previous progress bar
> update, skip all the "frames" that were never displayed and display the
> final state.
> 
> Looks like doing an XOFF patch for st and fixing all the programs that
> can't handle it properly is the "right thing", but prepare to discover
> that badly written programs will stop or even timeout with an error.
> 




Re: [dev] Culling all the way down

2020-09-06 Thread Mattias Andrée
On Sun, 6 Sep 2020 09:48:06 +0200
Mattias Andrée  wrote:

> On Sun, 6 Sep 2020 08:21:45 +0200
> Tobias Bengfort  wrote:
> 
> > Hi,
> > 
> > I am currently creating a curses application that is updated
> > independently of user input. Think of something like `top`. I realized
> > that the process is using the same amount of CPU whether it is currently
> > visible or not. That bothers me. I would prefer if my app got notified
> > when its visibility changes so it can stop rendering (I think this is
> > called culling).
> > 
> > For this to work it would need to be supported in several places:
> > 
> > - Window managers could set _NET_WM_STATE_HIDDEN
> > - Terminal emulators and multiplexers could set the terminal size to 0,0
> > - Applications need to actually use that information
> > 
> > The only project I could find that seems to do something like this is
> > mutter.
> > 
> > So I am wondering: Is this even worth the effort? Have I missed any
> > prior art?
> > 
> > thanks
> > tobias
> >   
> 
> Hi!
> 
> htop, with a refresh rate of 1, uses about 1% of one of the (3.40GHz) CPUs;
> I don't think this is a problem, as people don't generally max out the
> CPUs. And I wouldn't think people have it running without being able
> to see it either. You would just be adding unneeded work and complexity
> for next to no gain. I would only think about this for intensive programs.
> 
> 
> Regards,
> Mattias Andrée

Hi again!

What you could do is to patch a terminal to allow programs to be
paused via SIGSTOP when invisible and continued via SIGCONT when
visible. Then your program would only need to write some string to
the terminal when it starts and when it terminates. Multiplexers
could, however, become an issue, but you could patch one as well.


Regards,
Mattias Andrée



Re: [dev] Culling all the way down

2020-09-06 Thread Mattias Andrée
On Sun, 6 Sep 2020 08:21:45 +0200
Tobias Bengfort  wrote:

> Hi,
> 
> I am currently creating a curses application that is updated
> independently of user input. Think of something like `top`. I realized
> that the process is using the same amount of CPU whether it is currently
> visible or not. That bothers me. I would prefer if my app got notified
> when its visibility changes so it can stop rendering (I think this is
> called culling).
> 
> For this to work it would need to be supported in several places:
> 
> - Window managers could set _NET_WM_STATE_HIDDEN
> - Terminal emulators and multiplexers could set the terminal size to 0,0
> - Applications need to actually use that information
> 
> The only project I could find that seems to do something like this is
> mutter.
> 
> So I am wondering: Is this even worth the effort? Have I missed any
> prior art?
> 
> thanks
> tobias
> 

Hi!

htop, with a refresh rate of 1, uses about 1% of one of the (3.40GHz) CPUs;
I don't think this is a problem, as people don't generally max out the
CPUs. And I wouldn't think people have it running without being able
to see it either. You would just be adding unneeded work and complexity
for next to no gain. I would only think about this for intensive programs.


Regards,
Mattias Andrée



Re: [dev] Re: [slstatus] temperature module acts wierd on OpenBSD

2020-06-16 Thread Mattias Andrée
I'm assuming temp.value is an `int`, as %d is used. The problem was
probably that `1E6` is actually a `double` rather than an `int`,
so the whole expression is promoted to `double`; because `bprintf` is
(I assume) variadic, the compiler does not know to cast the expression
back to `int`, since only the type of the first argument is specified
in its declaration.


Regards,
Mattias Andrée


On Tue, 16 Jun 2020 20:42:08 +0200
Laslo Hunhold  wrote:

> On Tue, 16 Jun 2020 17:55:03 +
> messw1thdbest  wrote:
> 
> Dear messw1thdbest,
> 
> > <   return bprintf("%d", (temp.value - 27315) / 1E6);
> > ---  
> > >   return bprintf("%d", (temp.value - 27315)/100);
> 
> I'm really intrigued by that; thanks for sending in this patch! What is
> the origin of this problem? Does this have something to do with
> guaranteed constant-sizes in Posix?
> 
> With best regards
> 
> Laslo Hunhold
> 




Re: [dev] [libgrapheme] announcement

2020-03-27 Thread Mattias Andrée
On Fri, 27 Mar 2020 22:24:22 +
 wrote:

> On Fri, Mar 27, 2020 at 10:24:52PM +0100, Laslo Hunhold wrote:
> > ... This will cover 99.5% of all cases...  
> 
> What do you mean? They managed to add in grapheme cluster definition some 
> weird
> edge cases up to 0.5%??
> 
> About string comparison: if I recall well, after utf-8 normalization (n11n), 
> strings
> are supposed to be 100% perfect for comparison byte per byte.
> 
> The more you know: utf-8 n11n got its way in linux filesystems support, and
> that quite recently. This will become a problem for terminal based
> applications. In near future gnu/linux distros, the filenames will become
> normalized using the "right way"(TM) n11n.
> 
> This "right way"(TM) n11n (there are 2 n11ns) produces only non-pre-composed
> grapheme cluster of codepoints (but in the CJK realm, there are exceptions if 
> I
> recall properly). AFAIK, all terminal based applications do expect
> "pre-composed" grapheme codepoint.

This sounds absolutely horrible. Non-pre-composed characters are not widely
well supported and are often rendered terribly; some software (like the Linux
VT) cannot even render them.

Why is the kernel even getting into encoding issues? That should be an
application issue, not a kernel issue. A kernel should only know bytes. Is it
really a security issue?

> 
> For instance the french letter 'è' won't be 1 codepoint anymore, but 'e' + '`'
> (I don't recall the n11n order), namely a sequence of 2 codepoints.
> 
> I am a bit scared because software like ncurses, lynx, links, vim, may use the
> abominations of software we discussed earlier to handle all this.
> 




Re: [dev] Superservers: Yay or Nay?

2020-03-23 Thread Mattias Andrée
On Mon, 23 Mar 2020 21:52:49 +0100
Laslo Hunhold  wrote:

> On Mon, 23 Mar 2020 15:51:03 +0100
> Thomas Oltmann  wrote:
> 
> Dear Thomas,
> 
> > Hi, I hope everybody is doing well.  
> 
> yeah, very much. I hope you do, as well!
> 
> > In the last couple of weeks I've been working on a small TCP server
> > application. It is intended to be used in situations where the
> > typical amount of requests is relatively low and traffic is
> > infrequent. At it's core, it's a simple fork-on-accept style server,
> > much like quark. Because the server will supposedly be idling for
> > most of the time, I thought it may be a good idea to separate the
> > accept() loop and the worker processes into separate executables so
> > that the former can be replaced by the user with a superserver like
> > inetd/xinetd or even systemd (because of course pid 1 has a built-in
> > superserver these days) to save on system resources when no traffic
> > is happening. However, after doing some research it seems like
> > superservers have pretty much fallen out of favor these days.
> > Also, from some back-of-the-envelope calculations I get the feeling
> > you won't be saving much CPU time or RAM usage this way
> > on modern machines anyway, but I might be wrong there.
> > So, what do you guys think?
> > Is superserver-compatibility a desirable feature for suckless
> > server-software? Does anybody know if they still help with reducing
> > resource usage (which is probably the only reason for using one)?
> > Is anybody here using a superserver like inetd for anything anymore?  
> 
> I don't think superservers really reflect the UNIX principle. Why would
> you want to plug in your simple HTTP-server into the behemoth that is
> systemd. What's wrong with quark's idle resource usage, which I think
> is damn low?
> Don't get me wrong, I'm not offended by your remarks or anything, I
> just cannot fathom it given other things eat up orders of magnitude
> more resources than an idle HTTP server.

Consider instead the case where you have a seldom-run behemoth,
which you haven't found a simple alternative to. Why not plug it
into a tiny superserver?

I don't see anything bad with superservers themselves, but I don't
think they are always the right option. You need to consider system
boot time, subserver start time, memory usage, and activity.
A positive side-effect of superservers is that you do not need to
remember to restart the server when you update its software.

A superserver is basically a workaround for shortcomings.


Clean your hands, wear an unventilated FFP3 face mask,
Mattias Andrée

> 
> With best regards
> 
> Laslo
> 




Re: [dev] unsubscribe

2019-10-30 Thread Mattias Andrée
On Wed, 30 Oct 2019 13:02:53 -0400
Yih Lerh Huang  wrote:

> unsubscribe
> 

dev+unsubscr...@suckless.org

Tip: You can almost always find the address to send it
in the List-Unsubscribe header in the e-mail.



Re: [dev] SYNChronous SendMail

2019-09-10 Thread Mattias Andrée
I mean that if you always use the same libc you only have to read it once,
but if every program has its own you have to read all of them. I do
not think it changes its sucklessness. I just wasn't sure whether the
reason was to have a single compilation unit or if there was some
other point to it (as both were listed as features).

Although I do not expect you to do so, I would break out the libc into
a standalone project or, depending on how well it would work (and if
most of it could be done with a script), fork musl libc and make it
a header-only (+crt) library.


Regards,
Mattias Andrée


On Tue, 10 Sep 2019 19:15:12 +
sylvain.bertr...@gmail.com wrote:

> On Tue, Sep 10, 2019 at 06:48:17PM +0200, Mattias Andrée wrote:
> > What is the point of doing your own mini-libc within the
> > program? Aren't you just making it less portable and
> > adding more code to read?  
> 
> More code to read? Have you read the code of a standard libc? Not to mention
> the SDK deps? Moreover, this "mini-lib", as you said it, is actually compiled
> as 1 code unit with the application code (on a pi3 for the aarch64 port in a
> few secs, max optimization, lol). Hardware portability code is really thin, 
> and
> linux devs are smoothing that out (it's a WIP).
> 
> I personally use it on x86_64 (my station) and aarch64 (pi3).
> 
> Your question feels a bit like the following one : what's the point of 
> suckless?
> 
> In the end, if this does not fit your definition of suckless, don't bother and
> just look away, don't be offended, I do not expect less.
> 
> Btw, I have also the receiving part but the code is quite older:
> https://rocketgit.com/user/sylware/lnanosmtp I just did a bit of refactoring
> for the "1 compilation unit" suff and some updating (I personally use it on
> aarch64). As I did explain, gogol did a fine job at pushing me coding all 
> that.
> 




Re: [dev] SYNChronous SendMail

2019-09-10 Thread Mattias Andrée
Hi,

What is the point of doing your own mini-libc within the
program? Aren't you just making it less portable and
adding more code to read?


Regards,
Mattias Andrée


On Tue, 10 Sep 2019 15:49:19 +
 wrote:

> Hi,
> 
> For those who might be interested:
> 
> I did write a lean/suckless-ish sendmail like program: 
> https://rocketgit.com/user/sylware/syncsm
> 
> It is meant for devs/advanced sysadmins/very advanced users dealing themselves
> with their "email server". I am currently using it (not on _this_ email 
> address
> obviously, see below).
> 
> Features: no libc (direct syscalls, x86_64/aarch64), "one compilation unit",
> etc.
> 
> 
> 
> As some of you may already know, gogol/gmail, for many pop/smtp users, is
> blocking usually once a year their email accounts asking for a
> re-authentication via the google account web interface. And since recently (a
> year-ish), you cannot log in anymore without a full javascript www browser, 
> aka
> gecko|blink/webkit (all are same-same). I am still using my gmail account till
> it's blocked.
> 
> They are now way beyond evil, and are re-united with the digital
> organized crime where you have microsoft/apple (irony).
> 
> regards,
> 




Re: [dev] Re: json

2019-06-15 Thread Mattias Andrée
On Sat, 15 Jun 2019 22:11:13 +0200
Wolf  wrote:

> Hello,
> 
> On , Mattias Andrée wrote:
> > Wouldn't it just complicate matters if you needed to specify whether a
> > number is an integer or a real value;  
> 
> Could you not just consider sequence of [0-9]+ to be an integer and
> anything with other characters either invalid or float? Not sure, I'm in
> no means a parser expert, so I might be missing something fundamental
> here.

A bit: floats can be significantly larger than ints (and there are
negative numbers, of course). However, the syntax could be simple: \+?[0-9]+
could be unsigned int, -[0-9]+ could be signed int, and
[+-]?[0-9]*.[0-9]+ and [+-]?[0-9]+.[0-9]* could be float. But still,
only using `long double` or arbitrary-precision floating-point
numbers is simpler in my mind, in usage, in specification, and in
implementation.

I would also prefer if hexadecimal encoding of floats were
supported (to eliminate loss of precision).

> 
> > Additionally, I think you would want to support for arbitrary
> > precision. Again, the software can just check if arbitrary precision
> > is required, no need to complicate the syntax.  
> 
> Agreed, arbitrary precision would be nice, and currently is probably
> done via strings if it's needed. But not sure if it's something you want
> to have in the standard though as a separate type. Passing them via
> string is probably good enough for specialized applications that do need
> them.
> 
> > What should a library do with parsed numbers that are too large or too
> > precise?  
> 
> Report an error and provide flag to do best-possible parsing if user
> wants the number anyway knowing it will not be precise. Not do a silent
> guesstimate.
> 
> 
> 
> Basically, I think the fact that following returns false is stupid:
> 
> +   $ ruby -rjson -e 'puts({ num: (?9 * 100).to_i }.to_json)' \
>   | node -p 'var fs = require("fs");
>   JSON.stringify(JSON.parse(fs.readFileSync(0, "utf-8")));' \
>   | ruby -rjson -e 'puts (?9 * 100).to_i == JSON.parse(STDIN.read)["num"]'
> false
> 
> That means that despite all libraries in the chain fully implementing
> the JSON standard, not silently corrupting the data during the
> round-trip is not guaranteed.
> 
> W.




Re: [dev] Re: json

2019-06-15 Thread Mattias Andrée
`long double` is able to exactly represent all values exactly
representable in `uint64_t`, `int64_t` and `double` (a big-float type
can be used in other languages). Wouldn't it just complicate
matters if you needed to specify whether a number is an integer
or a real value? If there is any need for it, the software can
just check which it best fits. Additionally, I think you would
want support for arbitrary precision. Again, the software
can just check if arbitrary precision is required; no need to
complicate the syntax. What should a library do with parsed
numbers that are too large or too precise? In most cases, the
program knows what size and precision is required.


Regards,
Mattias Andrée

On Sat, 15 Jun 2019 20:37:34 +0200
Wolf  wrote:

> On , sylvain.bertr...@gmail.com wrote:
> > json almost deserves a promotion to suckless format.  
> 
> Except for not putting any limits on sizes of integers. I think it would
> be better to have size the implementation must support to be json
> complient. And also having separate int and float types. Because let's compare
> what happens in ruby:
> 
>   JSON.parse(?9 * 100))
>   => 
> 
>   
> 
> and in firefox (JavaScript):
> 
>   var x = ''; for (var i = 0; i < 100; ++i) { x += '9'; }; JSON.parse(x);
>   => 1e+100  
> 
> So, yy interoperability I guess?
> 
> W.




Re: [dev] Learn C

2019-03-24 Thread Mattias Andrée
Hi,

I would recommend simply doing C projects. suckless.org is a great place
to find sane C code with simple Makefiles. If you are already a programmer,
you should be able to learn C by just using it; no reading necessary.
This way you should get a solid understanding of the basics quite fast,
and with time you will pick up the more advanced topics.


Regards
Mattias Andrée


On Sun, 24 Mar 2019 10:28:35 +0100
Thuban  wrote:

> Hi,
> I want to learn C. I mean, sane C.
> What i read before was based on big IDE such as codeblocks. So, I don't
> know how to write Makefiles from scratch. (just an example).
> As an example, I found gobyexample [1] great. But it's go.
> Any reading advice?
> 
> Thanks.
> Regards
> 
> [1] : https://gobyexample.com/



pgpZpqL1HYl4k.pgp
Description: OpenPGP digital signature


Re: [dev] Re: diff -x

2018-09-28 Thread Mattias Andrée
Why would you? You can just as well write a script that creates a new
directory with hard links to all the files that should be
included in the diff.


On Fri, 28 Sep 2018 17:49:25 +0300
Adrian Grigore  wrote:

> In a diff -ur situation?
> On Fri, Sep 28, 2018 at 5:48 PM Adrian Grigore
>  wrote:
> >
> > How would you implement diff -x in a POSIX compliant manner?
> >
> > --
> > Thanks,
> > Adi  
> 
> 
> 





Re: [dev] [PATCH][blind] alloca #include for BSDs

2018-01-29 Thread Mattias Andrée
On Mon, 29 Jan 2018 10:05:51 +
Nick <suckless-...@njw.me.uk> wrote:

> Quoth Yuri: 
> > You should also change your config.mk files to allow external optimization
> > and other flags. For example:
> >   
> > > CFLAGS   = -std=c99 -Wall -pedantic -O2  
> > 
> > should be changed to
> >   
> > > CFLAGS   ?= -O2  
> >   
> > > CFLAGS   += -std=c99 -Wall -pedantic -O2  
> > 
> > This way you can allow externally supplied optimization flags while still
> > being able to add your own flags. Same with CPPFLAGS and LDFLAGS.  
> 
> I believe the reason suckless projects stick to = rather than += or 
> ?= is that they aren't POSIX / OSI standard. I don't think there's a 
> compelling reason to switch to your syntax, as distribution builders 
> or whoever can still easily see what the CFLAGS were and add to them 
> if they want or need to.
> 
> Nick
> 

Indeed POSIX only defines =. An alternative would be to do

_CFLAGS = -std=c99 -Wall -pedantic -O2 $(CFLAGS)

but I think it is preferable if the package maintainer just
takes the values found in config.mk, makes the necessary changes
(such as adding flags for specific OSes), adds the package
manager's flags, and invokes make(1) with those values. However,
if the package maintainer is comfortable with assuming make(1)
supports ?= and +=, he can use sed(1) on config.mk, and will not
have to change anything if config.mk has changed when a new
version is published.


Mattias Andrée




Re: [dev] [PATCH][blind] alloca #include for BSDs

2018-01-28 Thread Mattias Andrée
On Sun, 28 Jan 2018 13:24:06 -0800
Yuri <y...@rawbw.com> wrote:

> BSDs use different header for alloca than Linux.
> 
> It sucks when it only works on Linux.
> 
> 
> Thanks,
> 
> Yuri
> 
> 
> 
> --- src/blind-split.c.orig  2017-05-06 11:27:39 UTC
> +++ src/blind-split.c
> @@ -2,7 +2,11 @@
>   #include "stream.h"
>   #include "util.h"
> 
> +#if defined(__FreeBSD__) || defined(__OpenBSD__) || defined(__NetBSD__) 
> || defined(__DragonFly__)
> +#include <stdlib.h>
> +#else
>   #include <alloca.h>
> +#endif
>   #include 
>   #include 
>   #include 
> 
> 

Thanks again!

I will be fixing this, but probably via config.mk.


Mattias Andrée



Re: [dev] [BUG REPORT] blind: clang-40 warning: variable 'frames' is uninitialized when used here

2018-01-28 Thread Mattias Andrée
On Sun, 28 Jan 2018 13:21:12 -0800
Yuri <y...@rawbw.com> wrote:

> src/blind-from-video.c:234:25: warning: variable 'frames' is 
> uninitialized when used here [-Wuninitialized]
>      SPRINTF_HEAD_ZN(head, frames, width, height, "xyza", 
> );
>    ^~
> src/stream.h:9:12: note: expanded from macro 'SPRINTF_HEAD_ZN'
>      (size_t)(FRAMES), (size_t)(WIDTH), (size_t)(HEIGHT), 
> PIXFMT, 0, LENP)
>   ^~
> src/blind-from-video.c:178:38: note: initialize the variable 'frames' to 
> silence this warning
>      size_t width = 0, height = 0, frames;
>      ^
>   = 0
> 
> 

Hi, thanks for the report!

I've a lot of changes in the works. I will apply this (if I haven't
already fixed it, that is), but it will take some time before you
will see it.


Mattias Andrée




Re: [dev] [general][discussion] constants: `#define` or `static const`

2017-10-13 Thread Mattias Andrée
On Thu, 12 Oct 2017 15:21:14 +0100
Matthew Parnell  wrote:

> Afternoon suckless community.
> 
> It is made clear in the suckless coding style guide when to use
> #define and enums; however, it doesn't mention general global
> constants.
> 
> I would search through the mailing list to see if this has been asked
> before; but it seems that gmane fails to search.
> 
> I'm writing a header file that will contain constants required.
> Should I use:
> 
> #define FOO 123.456
> 
> or
> 
> static double const foo = 123.456;
> 
> (or `static const double`, for those who prefer the inconsistent const
> style; doesn't matter to the question)
> 
> There are pros and cons to both; pre-processor could go either way,
> "static const" has scope and type safety, etc.
> 
> But specifically about the suckless style; I have seen a lot of
> `#define`s and a few `static const` in suckless code.
> 
> What is more in line with the suckless style, and why?
> 
> Cheers,
> 

In theory I like `static const double`, however I cannot think of any
time I needed a constant that didn't either need to be a #define or
should simply have been hardcoded. Often you want the constant to be
configurable via the CPP when compiling, or the ability to
check its existence with the CPP.

Something I don't quite understand is why people make unnamed
enums and add #defines for each constant (I do get why one would
do this for a named constant, but I wish C had a better mechanism
for this). Like this:

enum
{
  MM_NOTOK = -1,
#define MM_NOTOK MM_NOTOK
  MM_OK = 0,
#define MM_OK MM_OK
  MM_NOMSG = 1,
#define MM_NOMSG MM_NOMSG
  MM_NOCON = 4
#define MM_NOCON MM_NOCON
};

Of course, perhaps they should just name their enums; that always
looks like the right thing to do.




Re: [dev] Writing subtitles for videos

2017-08-31 Thread Mattias Andrée
On Thu, 31 Aug 2017 11:34:00 +0200
Quentin Rameau  wrote:

> Hi Mattias
> 
> > I tried it out, it sucked donkey balls; i'd rather just use
> > ffplay and a text editor; the video playback didn't even work
> > for me.  
> 
> Isn't this something that could be done with blind?
> I wonder if this would be within its goal scope.
> 

I'm not planning to do subtitles; I want to keep it purely
video. However, it is on the todo list to make a simple
graphical tool for finding frames. It will have video and
audio playback, keyframe listing, and maybe waveforms, as
well as timestamps and frame numbers. So it will be useful
as an alternative to the ffplay part.



Re: [dev] Writing subtitles for videos

2017-08-31 Thread Mattias Andrée
On Thu, 31 Aug 2017 11:25:21 +0200
Quentin Rameau  wrote:

> > 99% of the fansubbers use aegisub
> > they're teens and they all managed to install it properly
> > i'm fairly confident my grandma could do it too
> > maybe she can teach you for a small fee  
> 
> We're glad to know you judge the quality of a software by its ability
> te be easily installed only.
> 

I tried it out, it sucked donkey balls; i'd rather just use
ffplay and a text editor; the video playback didn't even work
for me.




Re: [dev] Writing subtitles for videos

2017-08-31 Thread Mattias Andrée
On Thu, 31 Aug 2017 08:04:26 +
Thomas Levine <_...@thomaslevine.com> wrote:

> I want to write some subtitles for some videos. I found several subtitle
> editors through web searches, and their documentation doesn't make them
> look very good. What's more, I haven't managed to install any of them
> properly, which is both inconvenient and further indicative of suck.
> 
> I think that most of what I want could be accomplished by having a video
> player that writes the current time somewhere (like a file) when I press
> a particular key. I would play the video until I get to the position
> where I want to start or stop a subtitle, then I would press the key,
> and then I would copy that time to the subtitles file.
> 
> Another helpful feature would be to reload the subtitles file without
> changing the current time; this way I could review the subtitles more
> quickly.
> 
> Do any video players already do this? Or, does anyone have other
> recommendations about the editing of subtitles?
> 

ffplay prints the playback time in centisecond precision to the terminal
when it is playing, so you can pause to see where in the video you are
and write the time down yourself.




Re: [dev] dl.suckless.org file integrity github project

2017-08-26 Thread Mattias Andrée
On Sat, 26 Aug 2017 21:05:25 +0200
Laslo Hunhold <d...@frign.de> wrote:

> On Fri, 25 Aug 2017 17:13:38 +0200
> Mattias Andrée <maand...@kth.se> wrote:
> 
> Dear Mattias,
> 
> > Each user could have a directory called pgp-keys and dl.suckless.org
> > could list those directories. This would allow us to store old keys
> > in a structured manner.
> > 
> > An alternative is that the owner of a repo commits his key to the
> > repo under /.pgp-keys.  
> 
> this is absolute insanity! This completely defeats the purpose of it.
> If for some reason the suckless.org server is compromised, the
> attacker can sign the fraudulent commits with his key and just replace
> the one for the corresponding user on dl.suckless.org.
> 
> PGP only works if the hosting is diverse, i.e. if the key is for
> instance hosted on every project member's homepage. Can't we just stop
> with this pseudo-security stuff?
> 
> If somebody fiddled with the git-repo in some way, people would notice,
> because many many people have copies of the tree on their computer. If
> somebody somehow modified tags, or rebranched the repository, it would
> be noticed. This is much more logical security approach which is
> already in place.
> Still, I'm not against signing tags with PGP keys, and as always, in
> case I get something wrong, please let me know.
> 
> With best regards
> 
> Laslo
> 

Users must be able to find the appropriate keys somehow the first
time, so suckless must at least have links to them. If suckless is
compromised these can be replaced. PGP keys only ensure that future
keys are not fraudulent, as all new keys should be signed by the old keys.
SSL certificates ensure that the PGP keys are not tampered with by
anyone outside suckless. Thus, hosting the keys on suckless.org, once
it has HTTPS, is more secure than everyone's private home pages outside
suckless.org that do not have SSL certificates.



Re: [dev] dl.suckless.org file integrity github project

2017-08-25 Thread Mattias Andrée
On Fri, 25 Aug 2017 16:48:13 +0200
Anselm R Garbe <garb...@gmail.com> wrote:

> Hi Mattias,
> 
> On 25 August 2017 at 16:32, Mattias Andrée <maand...@kth.se> wrote:
> > On Fri, 25 Aug 2017 13:54:41 +0200
> > Anselm R Garbe <garb...@gmail.com> wrote:
> >  
> >> On 25 August 2017 at 12:56, Laslo Hunhold <d...@frign.de> wrote:  
> >> > On Fri, 25 Aug 2017 08:12:12 +0200
> >> > Anselm R Garbe <garb...@gmail.com> wrote:  
> >> >> - (optional) repo owners/maintainers should sign their future git tags
> >> >> for release creation by using their own private PGP key.  
> >> >
> >> > the public PGP-keys could be put on the
> >> > http://suckless.org/people/*-pages.  
> >>
> >> Either that, or perhaps we can reinstate the old fashion of
> >> suckless.org/~user/ homedir.  
> >
> > Wouldn't it be best to have all keys in one page?  
> 
> Sure it would, probably best is dl.suckless.org as well.
> 
> My only concern with the wiki page is, that everybody could presumably
> tamper the pubkeys there, since we accept upstream wiki changes. Of
> course they need to be reviewed, but how do I know that Laslo's pubkey
> is really Laslo's pubkey without hassle when reviewing some public
> wiki change?
> 
> Hence my suggestion to put them into a URL position that requires ssh
> access for pushing onto suckless.org, which is given for
> maintainers/repo owners.
> 
> BR,
> Anselm

Each user could have a directory called pgp-keys and dl.suckless.org
could list those directories. This would allow us to store old keys
in a structured manner.

An alternative is that the owner of a repo commits his key to the
repo under /.pgp-keys.


pgpwCiVeIBYSN.pgp
Description: OpenPGP digital signature


Re: [dev] dl.suckless.org file integrity github project

2017-08-25 Thread Mattias Andrée
On Fri, 25 Aug 2017 13:54:41 +0200
Anselm R Garbe  wrote:

> On 25 August 2017 at 12:56, Laslo Hunhold  wrote:
> > On Fri, 25 Aug 2017 08:12:12 +0200
> > Anselm R Garbe  wrote:  
> >> - (optional) repo owners/maintainers should sign their future git tags
> >> for release creation by using their own private PGP key.  
> >
> > the public PGP-keys could be put on the
> > http://suckless.org/people/*-pages.  
> 
> Either that, or perhaps we can reinstate the old fashion of
> suckless.org/~user/ homedir.
> 
> BR,
> Anselm
> 

Wouldn't it be best to have all keys in one page?




Re: [dev] dl.suckless.org file integrity github project

2017-08-23 Thread Mattias Andrée
On Wed, 23 Aug 2017 22:29:17 +0200
Markus Teich <markus.te...@stusta.mhn.de> wrote:

> Mattias Andrée wrote:
> > If the server's authenticity can be proven with HTTPS,
> > what additional secure does PGP-signatures provide?  
> 
> Some people trust persons they know more than they trust random corporations
> with questionable security policies. Other people think PGP sucks. I don't 
> know
> which group has the majority in the suckless community, thus I asked for a
> gentle vote by flamewar.
> 
> I count myself to the PGP proponents, but have to admit, that I might be too
> lazy to check the PGP signatures myself.
> 
> --Markus
> 

In general PGP is good (of course, cryptography inherently sucks, but that's
something we have to live with), but it's just a hassle when it comes to
software packages.

There are a few things to take into consideration when deciding what to do here:

* The number of people that actually know the developers of an individual
  package is negligible, so there isn't actually anyone that the users can
  trust.

* It's probably easier to trust the developers than suckless itself.

* If a user verifies that there is no history of malice up to a signed
  release, the user can to some extent trust the developer, and the
  developer's signature can be used to verify that no one else on suckless
  caused the server to upload a malicious version.

* An alternative to signature files is to sign the tags in Git, and those
  that care enough could pull releases from git instead.

* Signature files allow all developers, not just the owner, to sign the
  release.

* If signature files are added, people will probably make packages in
  repositories, such as the AUR, check the signature, which can be a burden
  on the users, who must add the developer's key to the keyring or disable
  signature checks.

* If someone with root access to the suckless servers wants to replace a
  release, he can serve the genuine version of the site to everyone who has
  connected to the server previously, serve a malicious version to new
  visitors, and have the PGP keys changed.

* If a developer publishes a release, only root and that developer should
  be able to replace the release.

* So do PGP keys actually add any security if we have HTTPS, or do they
  just give a false sense of security?



Re: [dev] dl.suckless.org file integrity github project

2017-08-23 Thread Mattias Andrée
On Wed, 23 Aug 2017 22:03:41 +0200
Markus Teich  wrote:

> Hiltjo Posthuma wrote:
> > Checksums are available in each project directory, yesterday I've added
> > SHA256 checksums.
> > 
> > For example:
> > SHA256: http://dl.suckless.org/dwm/sha256sums.txt
> > SHA1:   http://dl.suckless.org/dwm/sha1sums.txt
> > MD5:http://dl.suckless.org/dwm/md5sums.txt
> > 
> > HTTPs will be coming in a few weeks when some things are sorted. Maybe in 
> > the
> > future we can add also add PGP signed releases.  
> 
> Heyho,
> 
> I don't see the benefit of checksums without signatures. We already kind of 
> have
> transmission integrity by IP for release downloads or by git. We really need
> https, but PGP is probably controversial enough to be discussed. Maybe we have
> some time for that at the hackathon, but that would exclude people who cannot
> attend.
> 
> Thus, start flaming your highly valued opinions about PGP-signing releases to
> the list nao! ;P
> 
> --Markus
> 

If the server's authenticity can be proven with HTTPS,
what additional security do PGP signatures provide?




Re: [dev] Problems with farbfeld image editing tools

2017-07-03 Thread Mattias Andrée
On Mon, 3 Jul 2017 19:11:46 +0200
Mattias Andrée <maand...@kth.se> wrote:

> On Mon, 3 Jul 2017 18:55:42 +0200
> Laslo Hunhold <d...@frign.de> wrote:
> 
> > On Mon, 3 Jul 2017 18:47:37 +0200
> > Mattias Andrée <maand...@kth.se> wrote:
> > 
> > Dear Mattias,
> >   
> > > Perhaps farbfeld should specify that it should use linear sRGB, right
> > > now it specifies sRGB, which implies non-linear. It wouldn't make
> > > the format less complicated in my opinion, but it would be easier to
> > > implemented editing tools.
> > 
> > It would make it easier to implement the tools, however, this would on
> > the other hand force everybody trying to display farbfeld images to
> > make the transformations back to non-linear sRGB.  
> 
> Yes. However, if this is not done, the error is probably less than if
> multiple edits have been made to the image without considering this.
> 
> > As you already explained pretty well, the non-linear gamma curve is
> > there for a reason.
> >   
> > > The problem with treating non-linear colour models as linear is that
> > > the error accumulate. Whilst you may not notice the error after one
> > > edit unless you compare the image to the correct one, it will be
> > > noticeable if you apply multiple change.
> > 
> > This is correct, but only applies to cases where we need "exact"
> > transformations. Every non-integer arithmetic operation has the
> > potential to be erroneous. Given we have 16 bits per channel, the
> > accumulated error would be invisble in most cases, even for long
> > pipelines (if you don't do anything crazy).
> >   
> > > 50 % bright in the linear model is at 0.50, but at 0.74 in the
> > > non-linear model. The difference is almost 50 %, the difference is
> > > larger at darker colours.
> > 
> > When was the last time you needed to brighten up your picture by
> > "exactly factor 2"? Most of the time, people open GIMP and move the
> > slider until the brightness suits their taste.  
> 
> That was just an example to illustrate how manipulations should be
> applies. And indeed it is not common that you need exact changes.
> If you need exact colours you probably don't want to use farbfeld
> at all because it is restricted to colours in the sRGB gamut.
> 
> To avoid the problem with the transfer function, it is probably
> enough (since farbfeld uses 16-bit values) to add a tool applies
> the inverse transfer function and a tool that applies the transfer
> function. That way, the editing tools can be as simply as they are
> today, but you can get rather exact results if you need it.

I think some tools must still be aware of sRGB's non-linearity.
For example, if you make a tool that draws a gradient, you probably
want the colours in the gradient to increase linearly, so you have
to be aware that the colour model is not linear.

> 
> > 
> > With best regards
> > 
> > Laslo Hunhold
> >   
> 





Re: [dev] Problems with farbfeld image editing tools

2017-07-03 Thread Mattias Andrée
On Mon, 3 Jul 2017 18:55:42 +0200
Laslo Hunhold <d...@frign.de> wrote:

> On Mon, 3 Jul 2017 18:47:37 +0200
> Mattias Andrée <maand...@kth.se> wrote:
> 
> Dear Mattias,
> 
> > Perhaps farbfeld should specify that it should use linear sRGB, right
> > now it specifies sRGB, which implies non-linear. It wouldn't make
> > the format less complicated in my opinion, but it would be easier to
> > implemented editing tools.  
> 
> It would make it easier to implement the tools, however, this would on
> the other hand force everybody trying to display farbfeld images to
> make the transformations back to non-linear sRGB.

Yes. However, if this is not done, the error is probably less than if
multiple edits have been made to the image without considering this.

> As you already explained pretty well, the non-linear gamma curve is
> there for a reason.
> 
> > The problem with treating non-linear colour models as linear is that
> > the error accumulate. Whilst you may not notice the error after one
> > edit unless you compare the image to the correct one, it will be
> > noticeable if you apply multiple change.  
> 
> This is correct, but only applies to cases where we need "exact"
> transformations. Every non-integer arithmetic operation has the
> potential to be erroneous. Given we have 16 bits per channel, the
> accumulated error would be invisble in most cases, even for long
> pipelines (if you don't do anything crazy).
> 
> > 50 % bright in the linear model is at 0.50, but at 0.74 in the
> > non-linear model. The difference is almost 50 %, the difference is
> > larger at darker colours.  
> 
> When was the last time you needed to brighten up your picture by
> "exactly factor 2"? Most of the time, people open GIMP and move the
> slider until the brightness suits their taste.

That was just an example to illustrate how manipulations should be
applied. And indeed it is not common that you need exact changes.
If you need exact colours you probably don't want to use farbfeld
at all because it is restricted to colours in the sRGB gamut.

To avoid the problem with the transfer function, it is probably
enough (since farbfeld uses 16-bit values) to add a tool that applies
the inverse transfer function and a tool that applies the transfer
function. That way, the editing tools can be as simple as they are
today, but you can get rather exact results if you need them.

> 
> With best regards
> 
> Laslo Hunhold
> 





Re: [dev] Problems with farbfeld image editing tools

2017-07-03 Thread Mattias Andrée
On Mon, 3 Jul 2017 18:25:23 +0200
Laslo Hunhold <d...@frign.de> wrote:

> On Mon, 3 Jul 2017 17:28:29 +0200
> Mattias Andrée <maand...@kth.se> wrote:
> 
> Hey Mattias,
> 
> > Because if the limited number of values that can be stored in an
> > 8-bit integer and because humans don't notices small differences
> > between dark colours as well as small differences in bright
> > colours, sRGB encodes colours non-linearly so there are more
> > bright colours and fewer dark colours.  
> 
> yes, this is correct.
> 
> > However, looking at image editing tools for farbfeld I've found
> > that the tools do not take this into account. For example, making
> > a colour twice a bright, is not as simple as multiply the RGB
> > values with 2  
> 
> It is if you have linear RGB. If you make the proper transformation
> from sRGB to linear sRGB, it is perfectly valid.
> 
> > , rather the algorithm that shall be used is
> > 
> > x = F(2 ⋅ F⁻¹(x₀))
> > 
> > where F is sRGB's transfer function
> > 
> >⎧ -F(-t) if t < 0
> > F(t) = ⎨ 12.92 tif 0 ≤ t ≤ 0.0031306684425217108 [*]
> >⎩ 1.055 t↑(1/2.4) - 0.055  otherwise
> > 
> > [*] Approximate value. You will often find the value
> > 0.0031308 here, but this value is less accurate and
> > only good enough when working with 8-bit integers.  
> 
> How do you expect anyone to understand this? It's better to just work
> with linear sRGB. The transfer function is the inverse of the
> companding function and it honestly does not look very nice. The
> companding function gives more insight in how sRGB works and does not
> hide it behind closed doors.
> 
> sRGB is a pretty complex matter in my opinion. We don't just have an
> exponential gamma function, but one that is companded differently below
> a certain treshold. However, it's not nearly as complicated as general
> colour theory.
> If we have nonlinear sRGB-values V (where V is one of R,G,B), we can
> make the conversion to linear sRGB (v where v is one of r,g,b) via
> 
>   ⎧ V / 12.92V <= 0.04045
>   v = ⎨
>   ⎩ [(V + 0.055) / 1.055]^2.4else
> 
> Credit goes to Bruce Lindbloom[0] for creating this awesome collection
> of transformation formulae.
> 
> After all though, if you mistake linear and nonlinear RGB, you most
> likely won't see the difference anyway. In 99% of the cases, the tools
> work with an immediate visual feedback.
> 
> With best regards
> 
> Laslo Hunhold
> 
> [0]: http://www.brucelindbloom.com/
> 


Perhaps farbfeld should specify that it should use linear sRGB; right
now it specifies sRGB, which implies non-linear. It wouldn't make
the format less complicated in my opinion, but it would be easier to
implement editing tools.

The problem with treating non-linear colour models as linear is that
the errors accumulate. Whilst you may not notice the error after one
edit unless you compare the image to the correct one, it will be
noticeable if you apply multiple changes.

50 % brightness in the linear model is at 0.50, but at 0.74 in the
non-linear model. The difference is almost 50 %, and the difference is
larger at darker colours.




[dev] Problems with farbfeld image editing tools

2017-07-03 Thread Mattias Andrée
Because of the limited number of values that can be stored in an
8-bit integer, and because humans don't notice small differences
between dark colours as well as small differences in bright
colours, sRGB encodes colours non-linearly so there are more
bright colours and fewer dark colours.

However, looking at image editing tools for farbfeld I've found
that the tools do not take this into account. For example, making
a colour twice as bright is not as simple as multiplying the RGB
values by 2; rather the algorithm that shall be used is

x = F(2 ⋅ F⁻¹(x₀))

where F is sRGB's transfer function

   ⎧ -F(-t) if t < 0
F(t) = ⎨ 12.92 tif 0 ≤ t ≤ 0.0031306684425217108 [*]
   ⎩ 1.055 t↑(1/2.4) - 0.055  otherwise

[*] Approximate value. You will often find the value
0.0031308 here, but this value is less accurate and
only good enough when working with 8-bit integers.




Re: [dev] [sbase] rm missing error message?

2017-06-16 Thread Mattias Andrée
On Fri, 16 Jun 2017 21:00:40 +0200
Quentin Rameau  wrote:

> Hi,
> 
> > Sounds like it says you must not write those error message if -f is
> > used. Kind of a strange requirement as `2>/dev/null` would do that.
> >  
> > > I don't fully understand the wording in POSIX on the page[0].
> > > 
> > > [0]
> > > http://pubs.opengroup.org/onlinepubs/9699919799/utilities/rm.html   
> 
> “It is less clear that error messages regarding files that cannot be
> unlinked (removed) should be suppressed. Although this is historical
> practice, this volume of POSIX.1-2008 does not permit the -f option to
> suppress such messages.”
> 

Didn't read that part.




Re: [dev] [sbase] rm missing error message?

2017-06-16 Thread Mattias Andrée
Sounds like it says you must not write those error messages if -f is used.
Kind of a strange requirement, as `2>/dev/null` would do that.

On Fri, 16 Jun 2017 20:14:58 +0200
Hiltjo Posthuma  wrote:

> On Fri, Jun 16, 2017 at 02:08:24PM -0300, Marc Collin wrote:
> > Hello all.
> > 
> > I found a case where sbase rm command fails but doesn't output any
> > error message, making it look like it succeeded.
> > 
> > mkdir ./test
> > mkdir ./test/test
> > sudo chown root:root ./test
> > sudo chown root:root ./test/test
> > rm -rf ./test
> > 
> > rm won't output anything and exit (apparently) cleanly.
> > But the ./test directory won't be deleted.
> > Shouldn't a meaningful message be printed to warn about the failure?
> > 
> > Regards.
> >   
> 
> Hi,
> 
> Regarding the status code: you specify -f so the exit status is not
> modified[0].
> 
> I'm not sure if it's required to print a warning message with the -f option.
> My interpretation is it's not neccesary. However on OpenBSD it says:
> 
>   $ rm -rf test/
>   rm: test/test: Permission denied
>   rm: test: Operation not permitted
> 
> I don't fully understand the wording in POSIX on the page[0].
> 
> [0] http://pubs.opengroup.org/onlinepubs/9699919799/utilities/rm.html
> 





[dev] GNU yes(1) is slow as shit

2017-06-14 Thread Mattias Andrée
My comments on the performance of GNU yes(1):
https://github.com/maandree/yes-silly




RE: [dev] [sbase] Changing BUFSIZ

2017-06-14 Thread Mattias Andrée
I've also seen that. It seems completely silly to care about how quickly
you can print 'y\n', or whatever string you choose, ad infinitum. There
is no real-world situation where this is important. Optimising cat(1)
however is useful, for when you want to cat(1) a large file. For example
you may want to read the first part of a file, write something else, and then
cat the rest of the file. I do this all the time with blind, except I read
output from one process and write it to another. Of course, both
other processes will do processing, so eventually cat(1) will be blocked,
but optimising cat(1) in this case may[0] be useful for reducing CPU
usage when rendering video.

I don't really understand why GNU put so much work into optimising
every last little detail. The only program where this has proven to be
useful (for me) is grep(1). They have so many more things they could
be spending their resources on.

[0] I haven't investigated it yet. Perhaps read(3)/write(3) is not
    sufficient and splice(3) might be required.

P.S. My suggestion puts GNU cat(1) to shame.

From: Ivan Tham [pickf...@riseup.net]
Sent: 14 June 2017 09:18
To: dev@suckless.org
Subject: Re: [dev] [sbase] Changing BUFSIZ

Mattias Andrée <maand...@kth.se> wrote:

> On Linux, the performance of cat(1) can be doubled
> when cat(1):ing from one pipe to another, by compiling
> with -DBUFSIZ=(1<<16) (the default pipe capacity).
> This is close to optimial for a read(3)/write(3)
> implementation.

I have seen people mentioning how is GNU yes fast:

https://www.reddit.com/r/unix/comments/6gxduc/how_is_gnu_yes_so_fast/



[dev] [sbase] Changing BUFSIZ

2017-06-13 Thread Mattias Andrée
On Linux, the performance of cat(1) can be doubled
when cat(1):ing from one pipe to another, by compiling
with -DBUFSIZ=(1<<16) (the default pipe capacity).
This is close to optimal for a read(3)/write(3)
implementation.
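The kind of copy loop this buffer size tunes can be sketched as follows. This is an illustrative sketch, not sbase's actual cat(1) code; copyfd is a name I made up, and the only point is that the buffer matches the 1<<16 pipe capacity mentioned above.

```c
#include <unistd.h>

/* copy everything from ifd to ofd; returns 0 on success, -1 on error */
int copyfd(int ifd, int ofd)
{
	static char buf[1 << 16]; /* default Linux pipe capacity */
	ssize_t n, m, off;

	while ((n = read(ifd, buf, sizeof(buf))) > 0)
		for (off = 0; off < n; off += m)
			if ((m = write(ofd, buf + off, n - off)) < 0)
				return -1;
	return n < 0 ? -1 : 0;
}
```

With a smaller BUFSIZ, each pass through the loop moves less data per syscall pair, which is where the factor-of-two difference between pipes comes from.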




[dev] [blind] 1.1 release

2017-05-06 Thread Mattias Andrée
Hello World!

I am pleased to announce the second release of blind[0]:
version 1.1[1].

This release includes a number of important bug fixes
and improvements, and the following features:

-  absolute value (abs) operator in blind-arithm(1)

-  reading multiple frames (-f) with blind-next-frame(1)

-  make everything outside the selected region
   transparent (-s) with blind-crop(1)

-  make everything inside the selected region
   transparent (-S) with blind-crop(1)

-  reading video from stdin in blind-repeat(1)

-  blind-translate(1): frame-wise translation of video

-  blind-skip-pattern(1): skip frames in pattern

-  blind-compress(1) and blind-decompress(1): very fast
   compression that can be used when sending streams
   across the network when rendering on multiple computers

Happy hacking!
Mattias Andrée

[0] http://tools.suckless.org/blind/
[1] http://dl.suckless.org/tools/blind-1.1.tar.gz




Re: [dev][all] Migrating build system

2017-04-01 Thread Mattias Andrée
On Sat, 1 Apr 2017 14:46:36 +0530
Aditya Goturu  wrote:

> As we know, the greatest weakness of suckless apps is their dependence on 
> bloated build systems like make.
> 
> I tried porting to to gnu autotools, but while it helped, it still needed 
> work.
> 
> I am proposing we migrate to visual studio as our build system immediately. 
> We must also start reimplementing some parts in modern languages like C# or 
> Java.
> 
> I also propose we stop using this mailing list immediately and move to a 
> better communication platform like Microsoft Exchange and that we replace IRC 
> with slack.
> 
> Thanks
> Aditya
> 

+100





Re: [dev] [ubase] pager

2017-02-10 Thread Mattias Andrée
Some pagers also support search, which can be very useful.

On Fri, 10 Feb 2017 11:28:04 -0800
"Leander S. Harding"  wrote:

>  Personally, I've always thought that the VTxx escape
> sequence family is missing one: enable/disable
> scroll-lock. Then, your 'pager' just consists of printing
> the scroll-lock sequences at the beginning and end of
> output and using your multiplexer's scrolling feature,
> and can be accomplished like Eric mentioned above via
> aliases easily, too.
> 
>  -Leander
> 
> On Fri, Feb 10, 2017 at 1:53 AM, hiro <23h...@gmail.com>
> wrote:
> > the problem is when i *know* stuff fill be very long,
> > but I still want to start reading from the beginning.
> > in tmux i don't know how to start scrolling from top of
> > my last command. I don't want to scroll there manually.
> > also in page i can use pgup/down in tmux i have to do
> > crazy emacs-combinations first.
> >
> > On 2/10/17, Eric Pruitt  wrote:  
> >> On Fri, Feb 10, 2017 at 08:26:11AM +0100, robin
> >> wrote:  
> >>> I usually pipe into less whenever something overflows
> >>> the terminal height, but having to type 2>&1 to see
> >>> stderr is a bit cumbersome. In dvtm Shift-PageUp is
> >>> much easier.  
> >>
> >> I use a generic wrapper function in Bash:
> >>
> >> #   $1  Name or path of the command to execute.
> >> #   $2  White-space separated list of options to pass to the command
> >> #       when stdout is a TTY. If there are no TTY-dependent options
> >> #       this should be "--".
> >> #   $@  Arguments to pass to command.
> >> #
> >> function -paginate()
> >> {
> >>     local errfd=1
> >>
> >>     local command="$1"
> >>     local tty_specific_args="$2"
> >>     shift 2
> >>
> >>     if [[ -t 1 ]]; then
> >>         test "$tty_specific_args" != "--" || tty_specific_args=""
> >>         test -t 2 || errfd=2
> >>         "$command" $tty_specific_args "$@" 2>&"$errfd" | less -X -F -R
> >>         return "${PIPESTATUS[0]/141/0}"  # Ignore SIGPIPE failures.
> >>     fi
> >>
> >>     "$command" "$@"
> >> }
> >>
> >> Then I have around 30 aliases for various commands I
> >> use like this:
> >>
> >> alias cat='-paginate cat --'
> >> alias grep='-paginate grep --color=always'
> >> alias ps='-paginate ps --cols=$COLUMNS --sort=uid,pid -N --ppid 2 -p 2'
> >>
> >> Output is only paginated when stdout is a TTY so I can
> >> still use pipes, and the less flags ensure that less
> >> will exit if the output fits on one screen. I also use
> >> tmux, but I find less to be less painful to use than
> >> copy mode in tmux when I don't need to actually copy
> >> text.
> >>
> >> Eric
> >>
> >>  
> >  
> 





Re: [dev] Some core tools

2017-02-07 Thread Mattias Andrée
It looks pretty good, maybe we should recommend it as an
external component.

On Tue, 7 Feb 2017 09:43:42 -0500
stephen Turner <stephen.n.tur...@gmail.com> wrote:

> I think this was blocked by the mailing list, sorry if
> its a duplicate. I wanted to mention that there is a m4
> converted from a bsd rewrite of m4 into a more Linux
> compatible version, he advised it had all the popularly
> used features but may be missing a few of the lesser
> used. I for one have used it for a while with pcc and
> haven't seen issues related to m4. Perhaps this would be
> a helpful starting point for you.
> 
>  http://haddonthethird.net/m4/
> 
> On Mon, Feb 6, 2017 at 9:31 AM, stephen Turner
> <stephen.n.tur...@gmail.com> wrote:
> > As far as m4 is concerned I happened to meet a guy who
> > converted a bsd rewrite of m4 into a more Linux
> > compatible version, he advised it had all the popularly
> > used features but may be missing a few of the lesser
> > used. I for one have used it for a while with pcc and
> > haven't seen issues related to m4. Perhaps this would
> > be a helpful starting point for you.
> >
> > http://haddonthethird.net/m4/
> >
> >
> > On Friday, February 3, 2017,
> > <sylvain.bertr...@gmail.com> wrote:  
> >>
> >> On Thu, Feb 02, 2017 at 06:45:49PM +0100, Mattias
> >> Andrée wrote:  
> >> > I'm work on implementing make(1)  
> >>
> >> In theory, linux kbuild should be a good reference for
> >> the minimum set of makefile extensions to code. Well,
> >> in theory, the guys paid full-time at the
> >> linux fondation to work on kbuild, should have
> >> constraint themselves to use the
> >> bare minimum of makefile extensions, and be honest
> >> about it (they aren't, be
> >> carefull). suckless: better have a bit more roughness
> >> in the makefile than depends on super duper makefile
> >> extensions... which would make coding an alternative
> >> to make something crazy or insane. It's like C, the
> >> bare minimum of extensions would be those required to
> >> compile a kernel like linux (a good part of C89 syntax
> >> is already to much, hence
> >> even more with C99), but the gcc inline assembly is
> >> critical. The "right" answer would be to abstract away
> >> what's really needed (minimal) from a C toolchain for
> >> a reasonable linux build (even clang/llvm people
> >> failed).
> >>
> >> --
> >> Sylvain
> >>  
> >  
> 





Re: [dev] [ubase] pager

2017-02-04 Thread Mattias Andrée
On Sat, 4 Feb 2017 12:13:03 +0100
Josuah Demangeon <m...@josuah.net> wrote:

> On February 4, 2017 10:49:52 AM GMT+01:00, "Mattias
> Andrée" <maand...@kth.se> wrote:
> > On Sat, 4 Feb 2017 10:22:24 +0100
> > Josuah Demangeon <m...@josuah.net> wrote:
> >   
> > > On February 4, 2017 2:03:16 AM GMT+01:00, "Mattias
> > > Andrée" <maand...@kth.se> wrote:  
> > > > 
> > > > Well, this is embarrassing, I forgot to check you
> > > > program before starting on ul(1). However, I just
> > > > ran
> > > >
> > > > MAN_KEEP_FORMATTING=y man man | ./iode
> > > 
> > > I made escape sequences tooglable with flags, maybe
> > > that is the reason.
> > > 
> > > You can try enabling colour interpreting with +R (or
> > > disabling it with -R).  Either as command-line flag
> > > or as keybinding.  
> > 
> > Silly me, I thought it was -R.  
> 
> It is what is written on the man page and what less
> does.  I should update it.
> 
> Maybe -R is a better idea.  Maybe it is what sbase used
> to have.
> 
> >It works, but if you place
> > util-linux's ul(1) between man(1) and iode you will see
> > a problem.  
> 
> I see ^O (shift-in) characters at the end of bold
> sequences.  Is that what you mean?  Less does not handle
> these neither, but I can, I could just set it to ignore
> them.

For some reason ul(1) produces ESC ( B, which means switch
to the US-ASCII character set. These can be ignored, and less(1)
does ignore them.
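Ignoring these charset-designation sequences (ESC ( followed by one final byte) is cheap to do in a filter. A hedged sketch; the function name strip_designate is mine, and real ul(1)/less(1) handle far more than this:

```c
#include <stddef.h>

/* copy src to dst, dropping ESC ( X charset-designation sequences;
 * dst must be at least n bytes; returns number of bytes written */
size_t strip_designate(char *dst, const char *src, size_t n)
{
	size_t i, o = 0;

	for (i = 0; i < n; i++) {
		if (src[i] == 0x1B && i + 2 < n && src[i + 1] == '(') {
			i += 2; /* skip '(' and the final byte, e.g. 'B' */
			continue;
		}
		dst[o++] = src[i];
	}
	return o;
}
```

A truncated sequence at the end of the buffer is passed through unchanged rather than eaten, which is the safer choice for a streaming filter.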

> 
> > I'm working on libgraffiti which ul(1) will use, it will
> > support all escape sequences (I have read long lists of
> > escape sequences to find a pattern that recognise all
> > of them) along will things like combining marks. This
> > library will also be useful for pagers, terminals, and
> > similar programs.
> >
> > iode seem to be able to do everything needed to view
> > man pages, but unlike ul(1) and most(1) it doesn't
> > support e.g. `printf 'aaa\r___'`, but it doesn't have
> > to as ul(1) can be used.  
> 
> This will be very convenient, I was not aware of such a
> diversity
> 
> > I would recommend making +R the default, what would
> > probably reduce confusion for users.  
> 
> Yes, I will change it after line folding.
> 
> Do not hesitate to ask for explanation if something is
> unclear in the source.
> 





Re: [dev] [ubase] pager

2017-02-04 Thread Mattias Andrée
On Sat, 4 Feb 2017 10:22:24 +0100
Josuah Demangeon <m...@josuah.net> wrote:

> On February 4, 2017 2:03:16 AM GMT+01:00, "Mattias
> Andrée" <maand...@kth.se> wrote:
> > 
> > Well, this is embarrassing, I forgot to check you
> > program before starting on ul(1). However, I just ran
> >
> > MAN_KEEP_FORMATTING=y man man | ./iode  
> 
> I made escape sequences tooglable with flags, maybe that
> is the reason.
> 
> You can try enabling colour interpreting with +R (or
> disabling it with -R).  Either as command-line flag or as
> keybinding.

Silly me, I thought it was -R. It works, but if you place
util-linux's ul(1) between man(1) and iode you will see
a problem.

I'm working on libgraffiti, which ul(1) will use; it will
support all escape sequences (I have read long lists of
escape sequences to find a pattern that recognises all
of them) along with things like combining marks. This
library will also be useful for pagers, terminals, and
similar programs.
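For the most common family, the CSI sequences, such a pattern is standardised in ECMA-48: after ESC [ come parameter bytes (0x30-0x3F), then intermediate bytes (0x20-0x2F), then one final byte (0x40-0x7E). A sketch of recognising that standard pattern; this is my illustration, not libgraffiti's actual code:

```c
#include <stddef.h>

/* return the length of a complete CSI sequence starting at s
 * (which must begin with ESC '['), or 0 if it is incomplete or
 * invalid; byte ranges per ECMA-48 */
size_t csi_len(const char *s, size_t n)
{
	size_t i = 2;

	if (n < 3 || s[0] != 0x1B || s[1] != '[')
		return 0;
	while (i < n && s[i] >= 0x30 && s[i] <= 0x3F)
		i++; /* parameter bytes, e.g. "1;31" */
	while (i < n && s[i] >= 0x20 && s[i] <= 0x2F)
		i++; /* intermediate bytes */
	if (i < n && s[i] >= 0x40 && s[i] <= 0x7E)
		return i + 1; /* final byte, e.g. 'm' */
	return 0;
}
```

Returning 0 for an incomplete sequence lets a pager buffer partial input and retry once more bytes arrive.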

iode seems to be able to do everything needed to view
man pages, but unlike ul(1) and most(1) it doesn't
support e.g. `printf 'aaa\r___'`; however, it doesn't have
to, as ul(1) can be used.

I would recommend making +R the default, which would
probably reduce confusion for users.

> 
> > and it is not able to display it properly, which is the
> > only thing I have implemented in ul(1) so far. But I'll
> > see if there are parts that can be reused.
> > 
> > maandree  
> 
> There is also +/-N for line number, and I am trying line
> wrapping / stripping with +/-S in a "folding" branch.
> 
> I will iddle in #suckless if that helps.
> 





Re: [dev] [ubase] pager

2017-02-03 Thread Mattias Andrée
On Thu, 2 Feb 2017 20:42:06 +0100
Josuah Demangeon  wrote:

> I started such a tool recently.  It is probably not
> suckless, as it is already 1600 loc, but can properly
> display a man page, colour escape codes and content from
> UTF-8-test.txt and UTF8-demo.txt without ncurses.
> 
> http://github.com/josuah/iode
> 
> If this one sucks, I would be glad to see the
> implementation, as I will learn a lot from it.
> 
> I have also seen rirc (irc client) that does not use
> ncurses for its interface.
> 
> http://github.com/rcr/rirc/blob/master/draw.c
> 

Well, this is embarrassing, I forgot to check your program
before starting on ul(1). However, I just ran

MAN_KEEP_FORMATTING=y man man | ./iode

and it is not able to display it properly, which is the
only thing I have implemented in ul(1) so far. But I'll
see if there are parts that can be reused.

maandree




Re: [dev] Some core tools

2017-02-02 Thread Mattias Andrée
On Thu, 02 Feb 2017 17:59:14 -0600
Joshua Haase <hah...@gmail.com> wrote:

> Mattias Andrée <maand...@kth.se> writes:
> > Also, I think mk(1) uses rc(1), right?  
> 
> On plan9port it uses the shell defined on the environment.

That's not precisely portable.

> 
> I think `mk` is way more suckless.

I'm not convinced mk(1) is less sucky than POSIX make(1),
but it may be less sucky than many make(1) implementations.
For example, make(1) doesn't need to know anything about
the shell's syntax, whereas mk(1) has to, and if I
understood Greg's post correctly, it has to understand
two different sets of syntaxes: rc(1)'s and sh(1)'s. make(1)
only has to understand its own syntax, which is extremely
simple.

maandree




Re: [dev] Some core tools

2017-02-02 Thread Mattias Andrée
On Thu, 02 Feb 2017 17:08:17 -0600
Joshua Haase <hah...@gmail.com> wrote:

> Mattias Andrée <maand...@kth.se> writes:
> 
> > Greetings!
> >
> > I'm work on implementing make(1), and I have two
> > questions for you:  
> 
> Why make and not mk?

Also, I think mk(1) uses rc(1), right?




Re: [dev] Some core tools

2017-02-02 Thread Mattias Andrée
On Thu, 02 Feb 2017 17:08:17 -0600
Joshua Haase <hah...@gmail.com> wrote:

> Mattias Andrée <maand...@kth.se> writes:
> 
> > Greetings!
> >
> > I'm work on implementing make(1), and I have two
> > questions for you:  
> 
> Why make and not mk?

They are not compatible and make(1) is used almost universally.

maandree




Re: [dev] Some core tools

2017-02-02 Thread Mattias Andrée
On Thu, 2 Feb 2017 20:08:04 +
Connor Lane Smith  wrote:

> On 2 February 2017 at 19:54, Markus Wichmann
>  wrote:
> > GNU make style patsubst rules, i.e.
> >
> > %.o: %.c
> > $(CC) $(CFLAGS) -o $@ $<
> >
> > Those are really useful.  
> 
> While GNU's syntax can be more general, that rule can be
> done in POSIX make:
> 
> > .c.o:
> > $(CC) $(CFLAGS) -c $<  
> 
> Likewise,
> 
> > .o:
> > $(CC) $(CFLAGS) $(LDFLAGS) -o $@ $<  
> 
> cls
> 

However, there are probably a lot of makefiles
in the wild that use %, and you cannot do

%.o: src/%.c
...

without %, unless you do a really ugly hack:

.SCCS_GET:
	cp src/$@ $@
.c.o:
	...
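As a concrete sketch of the portable form for the plain (non-src/) case: the file names Makefile.demo and demo.c below are made up, and `-n` makes make print the commands instead of compiling anything.

```shell
# Write a minimal POSIX makefile with a .c.o suffix rule, the
# portable equivalent of GNU make's "%.o: %.c".
printf '.POSIX:\n.c.o:\n\t$(CC) $(CFLAGS) -c $<\n' > Makefile.demo
touch demo.c
# Dry-run: prints the command that would build demo.o from demo.c.
make -n -f Makefile.demo demo.o
rm -f Makefile.demo demo.c
```

The suffixes .c and .o are in the default .SUFFIXES list, so no extra declaration is needed for this pair.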


maandree




Re: [dev] Some core tools

2017-02-02 Thread Mattias Andrée
On Thu, 2 Feb 2017 20:54:45 +0100
Markus Wichmann <nullp...@gmx.net> wrote:

> On Thu, Feb 02, 2017 at 06:45:49PM +0100, Mattias Andrée
> wrote:
> > Greetings!
> > 
> > I'm work on implementing make(1), and I have two
> > questions for you:
> > 
> > 1) What extensions do you think I shall implement? I
> > think I will add $(foreach), $(shell), and I will
> > definitely add $(SHELL). $(SHELL) is the macro that use
> > to select the shell to use, POSIX only standardises
> > that the macro SHELL not affect or be affected by the
> > environment variable SHELL. I need $(SHELL) be sh(1p)
> > is not sufficient to use blind in a sane manner. I'm
> > not sure that I will add support `-j jobs`, and I don't
> > I will add `-l loadavg`, but it will recognise those
> > options and, if not implementing them, ignore them. 
> 
> GNU make style patsubst rules, i.e.
> 
> %.o: %.c
>   $(CC) $(CFLAGS) -o $@ $<
> 
> Those are really useful. If I recall correctly, mk
> implements something equivalent.
> 
> Ciao,
> Markus
> 

Of course, I forgot that was an extension.




Re: [dev] [ubase] pager

2017-02-02 Thread Mattias Andrée
On Thu, 2 Feb 2017 20:42:06 +0100
Josuah Demangeon  wrote:

> I started such a tool recently.  It is probably not
> suckless, as it is already 1600 loc, but can properly
> display a man page, colour escape codes and content from
> UTF-8-test.txt and UTF8-demo.txt without ncurses.
> 
> http://github.com/josuah/iode
> 
> If this one sucks, I would be glad to see the
> implementation, as I will learn a lot from it.
> 
> I have also seen rirc (irc client) that does not use
> ncurses for its interface.
> 
> http://github.com/rcr/rirc/blob/master/draw.c
> 

It doesn't look too bad, but I would prefer everything
to be in one file. I think it might make it easier to
navigate the code. But I will take a closer look at it
and see if it can be cannibalised.




[dev] [ubase] pager

2017-02-02 Thread Mattias Andrée
Hi!

I'm going to write a pager for ubase, and, because
it is a necessary component of the pager, I will
also implement ul(1). ul(1) will be used by the
pager which is necessary to get properly formatted
output when piping man(1) or groff(1) to the pager.


Mattias Andrée




[dev] Some core tools

2017-02-02 Thread Mattias Andrée
Greetings!

I'm working on implementing make(1), and I have two questions for you:

1) What extensions do you think I shall implement? I think I will
   add $(foreach), $(shell), and I will definitely add $(SHELL).
   $(SHELL) is the macro that is used to select the shell to use;
   POSIX only standardises that the macro SHELL shall not affect or
   be affected by the environment variable SHELL. I need $(SHELL)
   because sh(1p) is not sufficient to use blind in a sane manner.
   I'm not sure that I will add support for `-j jobs`, and I don't
   think I will add `-l loadavg`, but it will recognise those
   options and, if not implementing them, ignore them.

2) Any interest in make(1) for sbase? I'll be implementing it in
   either case, because I will make an implementation with extended
   functionality (but that would not go into sbase).

Reading make(1p), I found out that the built-in value for $(CC) is c99
rather than cc, because cc is not standardised, but c99 is. Therefore I
suggest removing `CC = cc` from our makefiles. Additionally, `.POSIX:`
should be added to the top of makefiles to ensure that $(CC) is in
fact c99 and other standard tools are also defined as expected, as
well as that make(1) implementations follow the make(1p) specification
exactly; for example, this ensures that all command lines in the
makefile are executed in separate shells; some make(1) implementations
run the command lines in one shell unless .POSIX is used.
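The separate-shell rule is easy to demonstrate (Makefile.demo is a throwaway name):

```shell
# Each recipe line gets its own shell, so the cd on the first
# line does not affect the pwd on the second.
printf 'demo:\n\tcd /\n\tpwd\n' > Makefile.demo
make -f Makefile.demo demo   # pwd prints the invoking directory, not /
rm -f Makefile.demo
```

A makefile that relies on state carrying over between lines must join them with `;` and `\` instead.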

After make(1), I may also start working on m4(1p) if there is any
interest in it.

For those interested, I estimate that make(1) will be between 1500
and 2000 lines. I'm currently at 700 lines (of which 500 lines are
just simple preparations).


Mattias Andrée




[dev] Re: Request for video player recommendation with a good playlist

2017-02-02 Thread Mattias Andrée
If anyone is interested, I found that this problem
also occurred with Wine and GPG. However, I have
reinstalled everything and the problem doesn't seem
to happen anymore. Although I don't know what caused
the error or why it's not happening anymore, what
happened was that X crashed because of invalid
window properties.

On Sun, 15 Jan 2017 12:05:50 +0100
Mattias Andrée <maand...@kth.se> wrote:

> Ahoy!
> 
> Does any have a recommendation for a video player
> that has a good playlist where files can easily
> be reordered? I'm getting tired of VLC causing
> X to crash (although it looks like it is exiting
> without actually crashing or aborting). So I don't
> really care how crappy it is, as long as it has a
> good playlist and does not crash X.
> 
> 
> Mattias Andrée.





[dev] [blind] 1.0 release

2017-01-22 Thread Mattias Andrée
Hello World!

I am pleased to announce the first release of blind[0]:
version 1.0[1].

blind is a command line video editor designed primarily
for creating new videos. blind uses a raw video format
with a very simple container. The raw video uses CIE XYZ
encoded with `double`s, with an alpha channel. The blind
tools avoid using parameters given on the command line
as much as possible, and instead use videos; for example,
to blur a video you use blind-gauss-blur, but
blind-gauss-blur does not have an option for selecting the
standard deviation; instead it expects a video file with
these values, which allows for non-uniform blurring and
time-based blurring.

To use blind, you need to have ffmpeg installed. ffmpeg
is used by the tool that converts video files into the
format blind uses, and by the tool that makes the
conversion in the opposite direction. You may also want
to have ImageMagick installed; this is however optional,
but if you do not have it installed, you will have to
manually specify either farbfeld or PAM when converting
to or from frame images. No other image format is
supported without ImageMagick.

blind is a video-only editor; you have to use other
tools for editing the audio. ffmpeg can be used to add
audio into a video file, extract audio, or concatenate
audio files.

One problem at the moment is that, unless you want to
rip your hair out, you will need a shell that supports
process substitution, which unfortunately is not too
common. Korn shell, Bash, and, if I am not mistaken, rc
are the only shells I know of that support this. However,
I'm working on a sh(1p) implementation (that will not
support even close to everything sh(1p) specifies)
that will support process substitution as well as
provide access to pipe(3) for more complicated pipelines.
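For anyone who has not met process substitution: `<(cmd)` lets a command's output stand in for a file name, which is how blind's tools can take parameter streams without temporary files. A sketch in bash, where plain paste(1) stands in for a hypothetical blind tool:

```shell
# Run under bash explicitly, since plain sh(1p) lacks <(...).
bash -c 'paste <(printf "1\n2\n") <(printf "a\nb\n")'
# Each <(...) appears to paste as a readable file (e.g. /dev/fd/63).
```

In a blind pipeline the substituted command would be whatever generates the per-frame parameter video.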

[0] http://tools.suckless.org/blind/
[1] http://dl.suckless.org/tools/blind-1.0.tar.gz




Re: [dev] Request for video player recommendation with a good playlist

2017-01-15 Thread Mattias Andrée
On Sun, 15 Jan 2017 22:34:18 +0100
Felix Van der Jeugt  wrote:

> Excerpts from Mattias Andrée's message of 2017-01-15
> 16:54:46 +0100:
> > On Sun, 15 Jan 2017 10:48:56 -0500 Alexander Keller
> >  wrote:  
> > > The simplest way I can imagine is to link them into a
> > > directory temporarily and/or permanently with:
> > > 
> > > mkdir playlist ln *some_glob_pattern* playlist
> > > 
> > > Then use the vidir(1) program to edit the files to
> > > number them sequentially the way you want. Then you
> > > can either create a playlist and delete the directory
> > > using some basic command line tools or just keep the
> > > directory as a playlist. 
> > 
> > The problem with this approach is that I cannot edit
> > the list whilst it is playing, except for removing
> > files.  
> 
> Unless you didn't play that directory with `mpv
> playlist/*`, but with a simple script which takes the
> alphabetically first file, plays it, removes it when
> played succesfully, and takes the next. `while mpv "$(ls
> playlist | head -1)"; do true; done`, am I right?
> 
> Sincerely,
> Felix
> 
> 

I wrote the following script that does most of what I want.
The only thing it lacks is support for mass reorder.

#!/bin/sh

number=0
if test "$1" = -n; then
	number=1
	shift 1
fi
if test "$1" = --; then
	shift 1
fi
if ! test $# = 0; then
	cd -- "$1"
fi

if test $number = 1; then
	n="$(ls -1 | wc -l)"
	i=1
	ls -1 | while read f; do
		mv -- "$f" "00$(seq -w 0 $n | sed 1,${i}d | sed 1q) - $f"
		i=$(expr $i + 1)
	done
fi

current="$(ls -1 | head -n 1)"
while mpv -- "$current"; do
	index=$(ls -1 | grep -Fnx -- "$current" | cut -d : -f 1)
	index=$(expr $index)
	if test $index -ge $(ls -1 | wc -l); then
		break
	fi
	current="$(ls -1 | sed 1,${index}d | sed 1q)"
done




Re: [dev] Request for video player recommendation with a good playlist

2017-01-15 Thread Mattias Andrée
On Sun, 15 Jan 2017 10:48:56 -0500
Alexander Keller  wrote:

> The simplest way I can imagine is to link them into a
> directory temporarily and/or permanently with:
> 
> mkdir playlist
> ln *some_glob_pattern* playlist
> 
> Then use the vidir(1) program to edit the files to number
> them sequentially the way you want. Then you can either
> create a playlist and delete the directory using some
> basic command line tools or just keep the directory as a
> playlist.
> 

The problem with this approach is that I cannot
edit the list whilst it is playing, except for
removing files.




Re: [dev] Request for video player recommendation with a good playlist

2017-01-15 Thread Mattias Andrée
On Sun, 15 Jan 2017 13:03:53 +0100
Martin Kühne  wrote:

> While vlc is admittedly a bunch of crap, I have no real
> suggestion for a playlist editing player. you could wrap
> mpv with some playlist-managing DIY thing, though, as it
> does IPC but could as well be quit with SIGQUIT...
> 
> What's the reasoning behind this question though, why do
> you need to edit the playlist all the time?

I don't need it all the time; I use mpv too, but sometimes
I have a lot of videos I want to watch, but not in alphabetical
order, so I have to reorder them, and sometimes
I want to move a video down in the playlist because I feel
like watching it later. Sometimes I also want to add new
videos to the playlist.

I forgot to mention that it is preferable if it can play
back videos at faster than normal speed. But I can work
around this with ffmpeg if necessary.

> 
> cheers!
> mar77i
> 





[dev] Request for video player recommendation with a good playlist

2017-01-15 Thread Mattias Andrée
Ahoy!

Does any have a recommendation for a video player
that has a good playlist where files can easily
be reordered? I'm getting tired of VLC causing
X to crash (although it looks like it is exiting
without actually crashing or aborting). So I don't
really care how crappy it is, as long as it has a
good playlist and does not crash X.


Mattias Andrée.




Re: [dev] Request for name suggestions: suckless video editor

2017-01-12 Thread Mattias Andrée
On Fri, 13 Jan 2017 01:34:40 +0530
Mohammed Zohaib Ali Khan  wrote:

> > How about `Tasveer'. It actually means pictorial
> > representation of information and it can alternatively
> > be abbreviated to tsvr - Teletyped Simple Video
> > Revision[er] (or something else with R. I honestly feel
> > `Revision[er]' is cooler than `editor' ;) ).
> >
> > Regards,
> > Zohaib  
> 
> Adding to my previous comment `Tasveer' has a deep
> meaning. It comes from the root letters TSVR. Which when
> used in this sequence form words which are all used to
> express similar meanings. Some of which I know of are:
> 
> 1. Tasavvur - To imagine.
> 2. Tasveer - Pictorial representation.
> 
> There are many coming out of these root letters but
> unfortunately I do not know all, as I am no linguist in
> semetic languages.
> 
> 
> Regards,
> 
> Zohaib
> 

I like the suggestion, but I prefer ‘blind’.




Re: [dev] Request for name suggestions: suckless video editor

2017-01-12 Thread Mattias Andrée
On Thu, 12 Jan 2017 13:55:36 +0100
Laslo Hunhold <d...@frign.de> wrote:

> On Thu, 12 Jan 2017 02:12:11 +0100
> Mattias Andrée <maand...@kth.se> wrote:
> 
> Hey Mattias,
> 
> > I'm working a non-graphical video editor. However, I
> > need a proper name for the project, and I have no idea
> > what to call it, so I'm taking suggestions from you.  
> 
> just call it "blind", the name is not taken afaik.

That's a fun name. I'll use it unless someone comes
up with something even better.

> 
> > Currently there is no support for audio, I'm not sure
> > it's actually needed, but I will investigate this
> > later. If it's added, I will only add tools for audio,
> > it will not be for audio effects, such tools already
> > exists and are not needed.  
> 
> Well, audio should be in it in case you want to anything
> with it other than uploading webm's to 4chan which are
> silent by default.

I meant that I don't think there is a need for the project
to support audio; ffmpeg and tools for effects are probably
enough.

> 
> > The video editor uses it's own format, it is a simply
> > container with the metadata: number of frames, width,
> > heights, and pixel format, followed by a magic value
> > and the video in a raw format. Currently the only
> > support pixel format is CIE XYZ with alpha expressed in
> > `double`:s, but in the future I may also add support
> > for `float`:s for faster computations. So the format is
> > not portable between machines.  
> 
> single- and double-precision floating-point numbers are
> standardized and thus portable, however, you have to deal
> with endianness one way or another. I would generally
> recommend using doubles despite the speed-tradeoff,
> because rounding errors are horrible enough, especially
> if we're talking about an editing-pipeline.

Yes, that's why I chose to use it, and it will be
the default even if single-precision is added.

> 
> Also, keep in mind that the data-format is the most
> crucial component of your software. If you get it right
> and make it "network-safe", you can do anything with it.
> If you mess it up, make it too complex or too simple, you
> might as well not start working on a video editor. If you
> store each frame in a raw format anyway, you might just
> think about using ffmpeg to extract all keyframes,
> convert them to farbfeld and align them in memory in some
> way (as a suggestion). As discussed at the last slcon, I
> am still in the process of finding the right approach
> with farbfeld, so stay tuned.
> 
> Keep us updated here, but keep in mind that video editing
> is a monumental task. I have been working on multiple
> concepts for a suckless image editor for two years now
> and just a few weeks ago I was able to hit a breakthrough
> I hope to be able to pursue in the next months. Image
> editing is a much simpler task than video editing,
> especially given the sucky containers that are used in
> the industry.
> 
> Cheers
> 
> Laslo
> 





Re: [dev] Request for name suggestions: suckless video editor

2017-01-12 Thread Mattias Andrée
On Thu, 12 Jan 2017 12:42:02 +0100
Hadrien Lacour  wrote:

> So this is a bit like Vapoursynth?
> 

I'm not sure how Vapoursynth works, but it sounds like
the basic idea is the same, but the approach is
different.




Re: [dev] Request for name suggestions: suckless video editor

2017-01-12 Thread Mattias Andrée
It's a great idea, what do you think about ‘teletyped video’?

On Thu, 12 Jan 2017 16:47:36 +0800
Ivan Tham <pickf...@riseup.net> wrote:

> I would suggest TV (Text-based Video editor).
> 
> On Wed, Jan 11, 2017 at 08:39:44PM -0800, Louis Santillan
> wrote:
> >(ht) Hot Tub
> >(httm) Hot Tub Time Machine
> >(ufeh) *nix Flying Erase-Head (an early video editing
> >machines) (ued) *nix EditDroid
> >(vzmx) Vision Mixer
> >
> >
> >Github suggests:
> >(pos) psychic-octo-spork
> >(sdd) super-duper-doodle
> >(syen) symmetrical-enigma
> >(vpara) vigilant-parakeet
> >(sspoon) super-spoon
> >
> >On Wed, Jan 11, 2017 at 7:56 PM, Mattias Andrée
> ><maand...@kth.se> wrote:  
> >> On Thu, 12 Jan 2017 05:52:27 +0200
> >> Amer <amer...@gmail.com> wrote:
> >>  
> >>> >I want the tools to have a common prefix of 2 to 4
> >>> >characters plus a dash. Any other ideas of awesome
> >>> >arbitrary things, I cannot think of anything else
> >>> >that is not already used?  
> >>>
> >>> Short names were exhausted, really?)
> >>> At least AUR is free of them.
> >>>
> >>> eiv
> >>> eivy
> >>> evior
> >>> eviour
> >>> ouvie
> >>> muv
> >>> cvq
> >>> veq
> >>> yvi
> >>> vidj
> >>> vwet
> >>> koe
> >>> koan
> >>> fyu
> >>> hevu
> >>> ...
> >>>  
> >>
> >> The name can be long, but it must have a short
> >> abbreviation that I can use as a prefix for the
> >> names of the commands.  
> >  
> 





Re: [dev] Request for name suggestions: suckless video editor

2017-01-11 Thread Mattias Andrée
On Thu, 12 Jan 2017 05:52:27 +0200
Amer  wrote:

> >I want the tools to have a common prefix of 2 to 4
> >characters plus a dash. Any other ideas of awesome
> >arbitrary things, I cannot think of anything else that
> >is not already used?  
> 
> Short names were exhausted, really?)
> At least AUR is free of them.
> 
> eiv
> eivy
> evior
> eviour
> ouvie
> muv
> cvq
> veq
> yvi
> vidj
> vwet
> koe
> koan
> fyu
> hevu
> ...
> 

The name can be long, but it must have a short
abbreviation that I can use as a prefix for the
names of the commands.




Re: [dev] Request for name suggestions: suckless video editor

2017-01-11 Thread Mattias Andrée
On Wed, 11 Jan 2017 21:34:08 -0500
Greg Reagle  wrote:

> Why don't you name it after something totally arbitrary
> but totally awesome.  For example, there is a tree that
> is so toxic that standing under it during the rain will
> burn your skin, and it has a great name in Spanish, arbol
> de la muerte. [1]  There's all sorts of really cool stuff
> in nature.
> 
> [1]
> http://www.sciencealert.com/here-s-why-you-shouldn-t-stand-under-world-s-most-dangerous-tree
> 

Very interesting tree. I guess Manchineel works, but then
the question is, how should I name the tools? I want the
tools to have a common prefix of 2 to 4 characters plus a
dash. Perhaps ‘hm-’, as in ‘hippomane mancinella’, but
that could be confusing.

Any other ideas of awesome arbitrary things? I cannot
think of anything else that is not already used.




Re: [dev] Request for name suggestions: suckless video editor

2017-01-11 Thread Mattias Andrée
On Wed, 11 Jan 2017 17:37:20 -0800
Noah Birnel <nbir...@gmail.com> wrote:

> On Wed, Jan 11, 2017 at 5:12 PM, Mattias Andrée
> <maand...@kth.se> wrote:
> > Greetings!
> >
> > I'm working a non-graphical video editor. However, I
> > need a proper name for the project, and I have no idea
> > what to call it, so I'm taking suggestions from you.
> >
> > A non-graphical video editor may sound a bit insane.
> > There will be a graphical tool for finding locating
> > frames, but that is as graphical as it gets, as far as
> > I can foresee at the moment. The reasons for a
> > non-graphical video editor are:
> >  
> My goodness, you don't need to excuse it. This is a

I just wanted to explain it in case someone thought it
was a waste of time, because it does sound impractical
to use.

> beautiful idea. It reminds me of sox (SOund eXchange), so
> it should of course be vix (VIdeo eXchange) or vex.

Good suggestion, however, there are already projects with
those names, so I cannot use either name.

> 
> Any code worth sharing yet?

Yes, here is the code as it looks right now:
https://github.com/maandree/vutil
However, I started out using RGBA and have not
tested the code since I changed it to CIE XYZ.
Here is the last tested revision:
https://github.com/maandree/vutil/tree/51411b26324ce4a142c817428d4f34c0f94a6d94

> 
> Cheers,
> 
> Noah
> 





[dev] Request for name suggestions: suckless video editor

2017-01-11 Thread Mattias Andrée
Greetings!

I'm working on a non-graphical video editor. However, I need a
proper name for the project, and I have no idea what to call
it, so I'm taking suggestions from you.

A non-graphical video editor may sound a bit insane. There
will be a graphical tool for locating frames, but that is
as graphical as it gets, as far as I can foresee at the
moment. The reasons for a non-graphical video editor
are:

•  It's source control friendly and it's easy for a user to
   resolve merge conflicts and identify changes.

•  Rendering can take a very long time. With this approach,
   the user can use Make to only rerender parts that have
   been changed.

•  No room for buggy GUIs, which currently are a problem in
   the large video editors for Linux.

•  Less chance that the user makes a change by mistake
   without noticing it, such as moving a clip in the editor
   instead of, for example, resizing it.

•  Even old crappy computers can be used for large projects.

•  Very easy to utilise command line image editors for
   modifying frames, or to add your own tools for custom effects.

Currently there is no support for audio; I'm not sure it's
actually needed, but I will investigate this later. If it's
added, I will only add tools for audio; it will not be for
audio effects, as such tools already exist and are not needed.

The video editor uses its own format; it is a simple container
with the metadata: number of frames, width, height, and
pixel format, followed by a magic value and the video in a
raw format. Currently the only supported pixel format is
CIE XYZ with alpha expressed in `double`s, but in the future
I may also add support for `float`s for faster computations.
So the format is not portable between machines.


Mattias Andrée




Re: [dev] [slcon3] preliminary schedule and registration deadline

2016-09-21 Thread Mattias Andrée
For context, the message I replied to but forgot to
quote was Markus saying he will arrive and be at the
lobby around noon on the welcome day (Friday).

On Wed, 21 Sep 2016 16:49:54 +0200
Mattias Andrée <maand...@kth.se> wrote:

> I'll be arrive in the afternoon the day before.
> I'll meet you in the lobby around noon.





Re: [dev] [slcon3] preliminary schedule and registration deadline

2016-09-21 Thread Mattias Andrée
I'll arrive in the afternoon the day before.
I'll meet you in the lobby around noon.




Re: [dev] s - suckless shell

2016-08-12 Thread Mattias Andrée
Also, the names of shells conventionally end with
sh; just don't go with ssh (which incidentally is not
a shell but ends with “sh” because its full name ends
with “shell”).

On Fri, 12 Aug 2016 23:48:29 +0200
Mattias Andrée <maand...@kth.se> wrote:

> Sorry for replying before reading, but I don't think a
> single-character name is a good idea. Two-characters
> should also be avoided, but it's acceptable. The number
> of available names are severely limited and introduces
> an unnecessarily high risk of collision. Short names,
> and single-character name in particular, are best left
> for user-defined aliases.
> 
> I will probably not read this message, because
> writing another shell is not on my priority list.
> 
> 
> On Fri, 12 Aug 2016 22:41:16 +0100 
> <ra...@openmailbox.org> wrote:
> 
> > Hello!
> > 
> > GNU Bash is 138227 lines of code. I wrote a simpler
> > shell* in 800 lines: https://notabug.org/rain1/s/
> > 
> > *It is not a true POSIX shell. You can't run existing
> > scripts with it. It's technically just a command
> > interpreter.
> > 
> > With that out the way here's an overview of how it
> > works:
> > 
> > Tokenization [tokenizer.c]: Instead of the strange and
> > complex way that normal shells work (where "$X" is
> > different to $X for example) s works by a strict
> > tokenize -> variable expansion -> parse -> execute
> > pipeline. This makes it much easier to program with and
> > less likely for scripts to break simply because your
> > CWD has a space in it.
> > 
> > Variable expansion [variables.c]: The expander supports
> > both $FOO and ${FOO} syntax, it just resolves
> > environment variables.
> > 
> > Parsing [parser.c]: There are just 3 binary operations
> > |, && and || and '&' optional at the end of a line.
> > There is no "if" or looping or anything. parser.c is 85
> > lines of code and uses my region [region.c] based
> > allocator to simplify teardown of the structure when it
> > needs to be free'd.
> > 
> > [interpreter.c] The interpreter is a simple recursive
> > process that walks the AST, creating pipes and forking
> > off children.
> > 
> > [supporting/*.c] Instead of redirection operators like
> > <,  
> > > and >> being part of the language they are simply
> > > provided as supporting programs   
> > that should be added to the $PATH: < is basically just
> > cat. The redirection operators are all packaged together
> > in busybox style. Similarly glob is not part of the
> > language, it is a 20 line script instead. You use it
> > like this: glob rm *py
> > 
> > [builtins.c] Of course a shell cannot do everything by
> > external tools - so the builtins cd, source, set, unset
> > are provided (and treated specially by the interpreter).
> > 
> > It can run scripts you supply, shebang works, using it
> > in a terminal interactively works. In theory enough for
> > practical every day use.
> > 
> > Except for the low linecount (it is even smaller than
> > execline) and simplicity of the lexical aspect of the
> > shell language it does not have strong benefits over
> > existing shells (especially since it is not POSIX
> > compatible) but I hope that the code may be interesting
> > or refreshing to others who are unhappy with the excess
> > of bloat most software has.
> > 
> >   
> 





Re: [dev] s - suckless shell

2016-08-12 Thread Mattias Andrée
Sorry for replying before reading, but I don't think a
single-character name is a good idea. Two-character
names should also be avoided, but they're acceptable.
The number of available names is severely limited, which
introduces an unnecessarily high risk of collision. Short
names, and single-character names in particular, are best
left for user-defined aliases.

I will probably not read this message, because
writing another shell is not on my priority list.


On Fri, 12 Aug 2016 22:41:16 +0100 
 wrote:

> Hello!
> 
> GNU Bash is 138227 lines of code. I wrote a simpler
> shell* in 800 lines: https://notabug.org/rain1/s/
> 
> *It is not a true POSIX shell. You can't run existing
> scripts with it. It's technically just a command
> interpreter.
> 
> With that out the way here's an overview of how it works:
> 
> Tokenization [tokenizer.c]: Instead of the strange and
> complex way that normal shells work (where "$X" is
> different to $X for example) s works by a strict tokenize
> -> variable expansion -> parse -> execute pipeline. This
> makes it much easier to program with and less likely for
> scripts to break simply because your CWD has a space in
> it.
> 
> Variable expansion [variables.c]: The expander supports
> both $FOO and ${FOO} syntax, it just resolves environment
> variables.
> 
> Parsing [parser.c]: There are just 3 binary operations |,
> && and || and '&' optional at the end of a line. There is
> no "if" or looping or anything. parser.c is 85 lines of
> code and uses my region [region.c] based allocator to
> simplify teardown of the structure when it needs to be
> free'd.
> 
> [interpreter.c] The interpreter is a simple recursive
> process that walks the AST, creating pipes and forking
> off children.
> 
> [supporting/*.c] Instead of redirection operators like <,
> > and >> being part of the language they are simply
> > provided as supporting programs 
> that should be added to the $PATH: < is basically just
> cat. The redirection operators are all packaged together
> in busybox style. Similarly glob is not part of the
> language, it is a 20 line script instead. You use it like
> this: glob rm *py
> 
> [builtins.c] Of course a shell cannot do everything by
> external tools - so the builtins cd, source, set, unset
> are provided (and treated specially by the interpreter).
> 
> It can run scripts you supply, shebang works, using it in
> a terminal interactively works. In theory enough for
> practical every day use.
> 
> Except for the low line count (it is even smaller than
> execline) and the lexical simplicity of the shell
> language, it does not have strong benefits over existing
> shells (especially since it is not POSIX compatible), but
> I hope that the code may be interesting or refreshing to
> others who are unhappy with the excess of bloat most
> software has.
> 
> 





Re: [dev] What do you guys think about competitive programming?

2016-08-12 Thread Mattias Andrée
On Fri, 12 Aug 2016 22:05:26 +0200
Martin Kühne <mysat...@gmail.com> wrote:

> On Fri, Aug 12, 2016 at 9:58 PM, Mattias Andrée
> <maand...@kth.se> wrote:
> > Programming contests can be fun, but it depends on the
> > competition; some barely focus on programming and are
> > about mathematics instead. I don't see them as
> > promoting bad practices: you are under extraordinary
> > pressure, so this should not influence your programming
> > practices under normal conditions. I don't think the
> > skills that are generally useful for programming
> > contests are generally useful in other contexts. I hope
> > recruiters realise the differences between programming
> > competitions and what the employee will be doing, but
> > competing has merits similar to any other contest:
> > mathematics skills and other problem-solving skills,
> > such as solving puzzles. It shows competitive attitude
> > and cognitive capabilities.
> 
> 
> I even see programming skills wrt free / open source
> projects different to those an employer would expect. An
> employer sooner says they're disappointed of somebody's
> performance, while my personally growing patchset may
> never actually ripen to be submitted to upstream for all
> the various reasons. Maybe it's my own code that sucks,
> but maybe it's the project's design decisions or upstream
> maintainer's understanding which is incompatible with the
> work. Nobody has to be loyal to anybody else in these
> matters, which I see as a core feature of these things.
> 
> cheers!
> mar77i
> 

Agreed.

No one should be fooled into thinking that hobby
programming, which free software and open source projects
often are, is representative of a typical programming job.
And because of the time pressure, programming contests are
closer (although not by much) to a normal programming job
than hobby programming is.




Re: [dev] What do you guys think about competitive programming?

2016-08-12 Thread Mattias Andrée
Programming contests can be fun, but it depends on the
competition; some barely focus on programming and are
about mathematics instead. I don't see them as promoting
bad practices: you are under extraordinary pressure, so
this should not influence your programming practices
under normal conditions. I don't think the skills that
are generally useful for programming contests are
generally useful in other contexts. I hope recruiters
realise the differences between programming competitions
and what the employee will be doing, but competing has
merits similar to any other contest: mathematics skills
and other problem-solving skills, such as solving
puzzles. It shows competitive attitude and cognitive
capabilities.

On Fri, 12 Aug 2016 21:44:48 +0200
Kevin Michael Frick  wrote:

> Hello suckless.org fellows!
> 
> I find myself competing in the national selection for the
> IOI[0] and was wondering: what does the suckless.org
> community think about competitive programming contests?
> 
> They certainly promote bad practices such as namespace
> pollution, non-descriptive naming of variables, lack of
> comments and, most of all, the use of C++[1]; however,
> they also really help in expanding one's knowledge about
> algorithms and well-known computer science problems,
> which in turn means better chances of getting a job in
> the CS field, and they also draw the attention of a lot
> of students towards CS and motivate us to be better
> programmers.
> 
> So, what do you think?
> 
> [0] http://www.olimpiadi-informatica.it (national
> selection), http://www.ioinformatics.org/index.shtml (IOI)
> [1] (you could implement whatever data structure you need
> in C, of course, but since you can't bring templates with
> you on a USB drive, writing std::set is 100x faster
> than implementing a bug-free red-black BST in pure C when
> you have 3 hrs to solve 3 problems)
> 




