Re: [dev] [acmebrowse] Mouse driven interface for edbrowse
I attached a version of acmebrowse that doesn't require tmux. I removed the readme from the script - you can still find it in a previous message. Only pipes are used. Below is a generalized version that could be adapted for other programs:

> #!/bin/rc
>
> . 9.rc
> . $PLAN9/lib/acme.rc
>
> fn event {
>     switch($1$2){
>     case E*        # write to body or tag
>     case F*        # generated by ourselves; ignore
>     case K*        # type away we do not care
>     case Mi        # mouse: text inserted in tag
>     case MI        # mouse: text inserted in body
>     case Md        # mouse: text deleted from tag
>     case MD        # mouse: text deleted from body
>         winwriteevent $*
>     case Mx MX     # button 2 in tag or body
>         echo $9
>     case Ml ML     # button 3 in tag or body
>         echo /$9/
>     }
> }
>
> newwindow
> @{wineventloop} | edbrowse | winwrite body

wineventloop is executed in a subshell and its output is piped into edbrowse (a simple echo is used within the loop). This approach has a few advantages. Buffering issues, like the ones you get with fifos or redirection to a file, are avoided:

$ tail -f input | edbrowse
$ echo "some command" >> input

Blocking the reads from edbrowse's stdout is unnecessary, since it is piped into the acme window all the time. Yet the content of a window can be erased if needed using:

> echo -n , | winwrite addr
> winctl 'dot=addr'
> winwrite data

Blocking in acmebrowse is now used for a different purpose: jumping to the top of a window. If you don't mind the cursor ending up at the bottom of the content, you can get away without the "kill $apid/wait" approach.

The window name in acme now matches the filename/URL opened in edbrowse. I find it more useful than the name of a script.

As if PCRE wasn't enough of a mess, edbrowse doesn't use them in a straightforward way. Some behavior was changed to match ed more closely. I decided that the best way to sanitize regexps is to turn special characters into dots:

$ echo $regexp | tr '^$[]+*?\/' '.'

This avoids the escape hell when you want to repeat a search.

I'm pretty satisfied with the script. Other than that, the naming of functions/commands could be improved I guess.

--
Paul Onyschuk

acmebrowse
Description: Binary data
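For example, the tr sanitization above flattens a selected line like this (assuming a tr that pads the second set by repeating its last character, as GNU and BSD tr do):

$ echo 'GET /index.html?q=foo*' | tr '^$[]+*?\/' '.'
GET .index.html.q=foo.

Every character that could be special to the search becomes ".", which still matches the original text when the selection is replayed as /regexp/.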
Re: [dev] [acmebrowse] Mouse driven interface for edbrowse
On Fri, 25 Jul 2014 11:32:01 +0200 Teodoro Santoni wrote:
> How about using empty [1] instead of tmux? It seems to me that it's
> not the right tool.

Tmux isn't the right tool. It was just at hand and I wanted to hammer out the basics fast. I started with unix fifos, but I'm not that familiar with plan9port and ran into some issues, e.g.

mkfifo ebin
tail -f ebin | edbrowse

echo "do something" > ebin

would send EOF to edbrowse, closing the pipe (tail from p9p). I'll slowly go back to this approach. Some operations, like blocking the read until edbrowse is done (currently tmux wait-for), are very easy to do in rc (this isn't a working script, just an overview):

echo "do something" > ebin
sleep 60 &
echo "!kill $apid" > ebin
wait $apid
cat ebout > acmein

This is even better, since there is a guarantee that sleep will exit after some time. So yes, I plan to remove the dependency on tmux, but I'm looking more into plain pipes than other approaches.

--
Paul Onyschuk
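A minimal sketch of the usual workaround for that EOF problem, assuming a POSIX shell: keep one write end of the fifo open so the reader never sees EOF when individual writers close.

mkfifo ebin
tail -f ebin | edbrowse &
exec 3> ebin              # hold the fifo open for writing
echo 'do something' >&3   # later writers no longer cause EOF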
[dev] [acmebrowse] Mouse driven interface for edbrowse
        echo -n $readme
    if not {
        parsecmd $9
        pipetowin
    }
    case Ml ML        # button 3 in tag or body
        # Send selection as regexp to edbrowse
        tmux send-keys -l $target /$9/ ';' \
            send-keys $target Enter
        pipetowin
    }
}

fn parsecmd {
    switch ($1) {
    case Back         # Go back one level
        tsend '^' Enter
    case Bookmarks    # Open bookmark file and print
        tsend 'b ~/.eb/bookmarks' Enter , Enter
    case Bookmarks!   # Add bookmark, used together with URLs command.
        tsend 'w+ ~/.eb/bookmarks' Enter
    case DDG*         # Searching in duckduckgo.com
        ddg = `{echo $1}
        ddg = $ddg(2-)
        ddg = 'b http://ddg.gg/lite?q=' ^ $"ddg
        tmux send-keys -l $target $ddg ';' \
            send-keys $target Enter , Enter
    case Go           # Follow 1st link in the line and print
        tsend g1 Enter , Enter
    case Go2          # Follow 2nd link and...
        tsend g2 Enter , Enter
    case Go!          # Follow without print - for binary files etc
        tsend g1 Enter
    case Go2!
        tsend g2 Enter
    case Info         # Show title and address of current page
        tsend ft Enter f Enter
    case Interrupt    # Send Ctrl-C to edbrowse
        tsend C-c
    case Javascript   # Toggle off/on javascript
        tsend js Enter
    case Print        # Print whole file
        tsend , Enter
    case Quit!        # With exclamation mark, must be drag-selected.
        tsend qt Enter
        windel sure
        exit
    case Refresh      # Refresh page - can be useful for JS
        tsend rf Enter
    case URLs         # Show addresses behind links in selected line.
        tsend A Enter , Enter
    case Write!       # Save (binary) file to disk.
        tsend w/ Enter
    case *            # Send selection as plain command to edbrowse
        tmux send-keys -l $target $1 ';' \
            send-keys $target Enter
    }
}

fn pipetowin {
    # Block till edbrowse opens web page.
    tmux send-keys $target '!tmux wait-for -S ' ^ $twindow Enter ';' \
        wait-for $twindow
    # Select content of window in acme, so writing to data will erase it.
    echo -n , | winwrite addr
    winctl 'dot=addr'
    # Pipe output from tmux pane to acme window
    tmux capture-pane -p -S -1 $target | winwrite data
    # Jump to the top/beginning of text in acme
    echo -n 0 | winwrite addr
    winctl 'dot=addr'
    winctl show
    # Erase visible part (pane) and scrollback in tmux
    tmux send-keys -R $target ';' clear-history $target
}

fn tmuxinit {
    # Unset $TMUX (in case it's running) and create detached session.
    TMUX=() tmux new-session -d -s ed -n dummy
    # Set history-limit to 10k lines - for long web pages.
    tmux set-option -q -t ed history-limit 10000
    # Postpone starting of edbrowse, otherwise history-limit won't work.
    tmux new-window -a -t ed:dummy -n $twindow edbrowse
    # Close dummy window
    tmux send-keys -t ed:dummy exit Enter
}

fn acmeinit {
    # Create new window in acme, change name
    newwindow
    winname acmebrowse
    # Add commands to tag in acme
    echo '| i? i* i= | DDG | b http://' | winwrite tag
    echo -n 'Back Refresh | Print Go! Go2! | Info URLs Bookmarks!' \
        ' Write! | Javascript Interrupt | Quit!' | winwrite tag
}

# Initialize tmux, acme and start loop
tmuxinit
acmeinit
pipetowin
wineventloop

--
Paul Onyschuk
Re: [dev] Plain text editor that sucks less - an alternative to VIM?
On Sun, 29 Jun 2014 13:24:58 +0200 patrick295767 wrote:
> For many years I have been looking for a lightweight alternative to
> VIM. (sthg else than Emacs, elvis, nano,... and all the billion of
> text editor).

I'll point out two editors that have their issues, but at the same time have some interesting ideas. I'll skip the license cult worship rituals.

VIDEO TECO
- code [1]
- documentation [2]

Everything is a command, there is only one keybinding (the command terminator). It is non-modal: no switching between a command line and a visual mode. Everything is done in a separate part of the screen, somewhat similar to Sam. Example of inserting new text (where "i" is a command and "$" is the terminator):

isome new text$

The syntax is horrible, but that isn't the point. The best example of this convention is probably undo. Since everything is a command and commands are stored in a separate sub-window, you can go as far as editing the command history itself. You can modify a command in the middle of the history.

AOEUI
- code [3]
- documentation [4] (there is a reference at the end of the file, the second letter is valid for QWERTY mode)

This editor goes in the opposite direction, almost eliminating a separate command prompt (except for regular expressions). The usage is oriented around a selection and a clip buffer. Example of piping a paragraph through fmt (where "^" is Ctrl):

^U        start selection
^Space^Y  jump to the end of paragraph
^X        cut selection (put into clip buffer)
^U        start new selection
fmt       type text (command) into selection
^R        pipe clip buffer through command specified in selected text

It takes some time getting used to. I find this editor convenient to the point of using it on a daily basis.

SYNTAX HIGHLIGHTING

This is a bit off topic. I'm somewhere in the middle when it comes to coloring. It is not harmful in itself. The problem as I see it stems from highlighters that almost universally go over the top by taking a "christmas tree" approach to the syntax.

Personally I give comments a different color. I also find alternating colors for pairs of bracketing characters nice - a lighter approach than blinking/jumping to the matching bracket. This is enough for me.

[1] https://github.com/rhaberkorn/videoteco-fork
[2] http://www.copters.com/teco.html
[3] https://code.google.com/p/aoeui/
[4] https://aoeui.googlecode.com/svn/trunk/notes.txt

--
Paul Onyschuk
Re: [dev] Project Oberon
On Fri, 21 Mar 2014 19:38:36 -0400 Aaron Burrow wrote:
> What do you think of minix 3?
>
> > cd ~/code/minix
> > find . -name "*.c" -or -name "*.h" | xargs wc -l
> ...
> 342686 total

It would be comparing apples and oranges. Excluding the Verilog sources, the whole Oberon system is under ten thousand lines of source code. The compiler itself is about 3 KLOC. Even when you take into account that the code is written in a denser style than idiomatic C, it is still small.

As a side note: the limited RISC platform constructed by Wirth has only 16 opcodes. I think this is one of the reasons why an emulator for this platform could be written in less than one thousand lines of C code.

You could read Project Oberon in your spare time, e.g. on the crapper (a place of enlightenment according to Breaking Bad), and understand how exactly it works from top to bottom. In many ways Oberon is primitive, yet I perceive that as a strength for what it is.

--
Paul Onyschuk
Re: [dev] Project Oberon
on. Even if it's only a mere curiosity, reading the "Project Oberon" book can be worthwhile. There just aren't that many places where the clarity and simplicity of software are celebrated.

[1] http://www.eptacom.net/pubblicazioni/pub_eng/wirth.html

--
Paul Onyschuk
[dev] Project Oberon
I'm not completely sure if it's in the interest of this mailing list, but the underlying philosophy isn't far away from that of the suckless community. Hope I don't upset anyone too much by posting this; on the other hand maybe someone will find it useful.

I won't describe in detail what Oberon is, since information about it can be found on Wikipedia [1] and in other sources. One thing I will point out: there is a clear relation between the interface of Oberon and the Acme text editor from Plan 9.

Niklaus Wirth recently revised his Project Oberon [2] - a programming language and operating system implemented for a minimalistic RISC platform on top of an FPGA. Simplicity is assured by the fact that the whole description and/or implementation of the system, language and platform must fit into the book. An FPGA board isn't needed to test and run the system, since an emulator is available [3] (a small implementation in C with a dependency on SDL2).

Wirth talked a bit about the resurrection of the whole project [4] (last video in the list) at the recent conference at ETH, held to celebrate his 80th birthday. Some additional documents are available at his personal website [5]. Probably the most interesting one is the book describing Programming in Oberon [6] (which was also updated).

Personally I find it interesting not only from an educational perspective. At the end of this email I'm attaching an excerpt from the book, which describes a bit of the philosophy behind Project Oberon.

[1] http://en.wikipedia.org/wiki/Oberon_(operating_system)
[2] http://projectoberon.com/
[3] https://github.com/pdewacht/oberon-risc-emu
[4] http://www.multimedia.ethz.ch/conferences/2014/wirth/
[5] http://www.inf.ethz.ch/personal/wirth/
[6] http://www.inf.ethz.ch/personal/wirth/Oberon/PIO.pdf

---

PREFACE TO THE 2013 EDITION

Comments about plans to prepare a second edition to this book varied widely. Some felt that this book is outdated, that nobody is interested in a system of this kind any longer. "Why bother"? Others felt that there is an urgent need for this type of text, which explains an entire system in detail rather than merely proposing strategies and approaches. "By all means"!

Very much has changed in these last 30 years. But even without this change, it would be preposterous to propose and construct a system competing with existing, worldwide "standards". Indeed, very few people would be interested in using it. The community at large seems to be stuck with these gigantic software systems, and helpless against their complexity, their peculiarities, and their occasional unreliability.

But surely new systems will emerge, perhaps for different, limited purposes, allowing for smaller systems. One wonders where their designers will study and learn their trade. There is little technical literature, and my conclusion is that understanding is generally gained by doing, that is, "on the job". However, this is a tedious and suboptimal way to learn. Whereas sciences are governed by principles and laws to be learned and understood, in engineering experience and practice are indispensable. Does Computer Science teach laws that hold for (almost) ever? More than any other field of engineering, it would be predestined to be based on rigorous mathematical principles. Yet, its core hardly is. Instead, one must rely on experience, that is, on studying sound examples. The main purpose of and the driving force behind this project is to provide a single book that serves as an example of a system that exists, is in actual use, and is explained in all detail.
This task drove home the insight that it is hard to design a powerful and reliable system, but even much harder to make it so simple and clear that it can be studied and fully understood. Above everything else, it requires a stern concentration on what is essential, and the will to leave out the rest, all the popular "bells and whistles".

Recently, a growing number of people has become interested in designing new, smaller systems. The vast complexity of popular operating systems makes them not only obscure, but also provides opportunities for "back doors". They allow external agents to introduce spies and devils unnoticed by the user, making the system attackable and corruptible. The only safe remedy is to build a safe system anew from scratch.

Turning now to a practical aspect: The largest chapter of the 1992 edition of this book dealt with the compiler translating Oberon programs into code for the NS32032 processor. This processor is now neither available nor is its architecture recommendable. Instead of writing a new compiler for some other commercially available architecture, I decided to design my own in order to extend the desire for simplicity and regularity to the hardware. The ultimate benefit of this decision is not only that the software, but also the hardware of the Oberon System is described completely and rigorously. The processor is called RISC. The hardware modules are described exclusively in the language Verilog. The decision f
Re: [dev] golang dwm status
On Thu, 13 Mar 2014 20:50:38 +0100 Markus Teich wrote:
> Thanks for the hint. However I wanted to avoid spawning other
> processes as much as possible. Is there another way to count the cpu
> cores just by reading a file in /proc or maybe /sys?

You can get there by using sysconf(3). From the man page: "The sysconf() function conforms to IEEE Std 1003.1-1988 (``POSIX.1''). The constants _SC_NPROCESSORS_CONF and _SC_NPROCESSORS_ONLN are not part of the standard, but are provided by many systems."

Calling "sysconf(_SC_NPROCESSORS_CONF)" should work on many systems. I'm clueless when it comes to accessing C from Go, so this may be the wrong solution in your case.

--
Paul Onyschuk
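If reading a file turns out to be acceptable after all, a rough sketch of how to get the same count without any C (/proc is Linux-specific, and getconf simply exposes the same sysconf values from the shell):

$ grep -c '^processor' /proc/cpuinfo
4
$ getconf _NPROCESSORS_ONLN
4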
Re: [dev] Shell vs C where is the border?
On Mon, 10 Mar 2014 20:39:08 +0100 Szymon Olewniczak wrote:
> But having so many individual programs is more harder to use that just
> one (we need to run more commands), so reasonable would be to combine
> all this commands to one script which would do all this work
> automaticaly. So what solution would be better in your opinion? When
> we should use shell scripts and when write new C programs to achieve
> our goals?

My mail will probably be bounced, but anyway.

It (pipes or the lack of them) can be abused either way. An example of way too many command line flags (GNU ls):

$ ls --help | awk '/^ {2,6}-/{i++} END{print i}'
58

Yet you will likely never use more than a dozen of the options offered by ls (and fewer on a daily basis). Those rare cases when you need something specialized can be solved by sed/awk and other filters (see the sketch below). Still, they somehow managed to squeeze all those flags in there - someone really hates pipes.

A counterexample would be the man(1) command (this is a personal opinion). Some implementations do something similar to this:

$ zcat some-manpage.1.gz | eqn | grap | pic | tbl | vgrind | refer \
    | troff | more

This runs every time you type "man some-manpage". The filter approach of *roff made sense when it was used for creating documents for printing (and it was often the most CPU intensive command on the box, according to some accounts).

--
Paul Onyschuk
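For instance, a couple of rough filter-based stand-ins for specialized ls flags (the field number assumes the usual ls -l output layout):

$ ls -l | sort -n -k5            # roughly what a sort-by-size flag gives you
$ ls -l | awk '$5 > 1048576'     # only entries bigger than 1 MiB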
Re: [dev] tmux/screen alternative
On Sat, 22 Feb 2014 14:33:20 +0100 Anselm R Garbe wrote:
> For terminal detach/attach I could use dtach. I'm only after a
> solution for scrollback buffer. Probably this should become a separate
> tool.
>
> If you don't use screen/tmux with st, what other tool comes to mind
> for a scrollback buffer?

*BSDs have the script [1] utility - not sure how helpful that is.

[1] http://netbsd.gw.com/cgi-bin/man-cgi?script

--
Paul Onyschuk
Re: [dev] Re: Reasonable Makefiles
On Tue, 11 Feb 2014 20:15:06 +0000 (UTC) Thorsten Glaser wrote:
> Ugh, a horrid GNUmakefile… I normally write:
>
> PROG= foo
>
> .include

Not that I defend GNU make, but you can do:

foo:

This will use implicit rules and will compile foo.c. If you have more than one source file, this should work:

foo: bar.o

This will pick up both foo.c and bar.c. Of course pmake/bmake does more than that (and I don't want to argue about it): logic for installing binaries/manpages, cleaning directories etc.

You can write sane rules for GNU make (I avoid the implicit rules described above myself). The problem is that the makefiles auto-generated by autotools are far from sane. I would guess the poison starts there, and that is why GNU make handles some obscure and weird rules/variables and so on.

--
Paul Onyschuk
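For the record, this is roughly what the first form expands to with a lone foo.c in the directory (the exact command and spacing depend on CFLAGS/LDFLAGS and the make version):

$ ls
foo.c
$ make foo
cc     foo.c   -o foo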
Re: [dev] [dwm] Conversion to XCB
On Tue, 21 Jan 2014 15:11:27 +0100 Alexander Huemer wrote:
> IMO it's not a good advice to let of all things google do that.

It is far from a good solution, but often it is the only solution. Custom search engines written in Perl often do a horrible job of searching through the whole text. To sum up the state of mailing list software, here is an excerpt from the GNU Mailman documentation (one of the most popular solutions) [1].

[1] http://wiki.list.org/display/DOC/How+do+I+make+the+archives+searchable

--
Paul Onyschuk
Re: [dev] [dwm] Conversion to XCB
On Tue, 21 Jan 2014 13:07:37 +0100 Markus Wichmann wrote:
> Maybe it was. However, I found no such conversation in my personal
> archive of this list (which goes back maybe a year or so), and the
> official version of the archive has no search function, nor has any
> attempt at sending Google on that archive been successful. And since
> the archive only has index pages by month, I can't even get the entire
> message index, or even the message index per year and search via C-F.

Did you try something like:

https://google.com/search?q=site:http://lists.suckless.org/dev/+dwm+xcb

This applies to most mailing lists: GMANE is not always available, but the HTML archives are there (and the results are similar to GMANE).

--
Paul Onyschuk
Re: [dev] strlcpy and strlcat
There is also Annex K in C11, specifying strcpy_s and friends. Yet it is an optional part of the standard, so it is hard to say if it will be widely adopted.

--
Paul Onyschuk
Re: [dev][announce] Optimizing C compiler & c++ compiler/runtime
On Fri, 20 Dec 2013 18:06:07 +0000 (GMT) Rob wrote:
> I suppose if you can get a stable version of GCC, like you say, the
> platform ABIs aren't going to change, but I can see certain things
> from C11 coming into libraries, such as atomics. Of course glibc
> (should) support all the way back to C89. Not sure if musl is C99 and
> above or not.

Last time I compiled glibc by hand, the kernel version (headers, as glibc uses them) and binutils were more of an issue than the compiler.

As for musl, the dependencies are listed in the INSTALL file [1], and a list of supported compilers is available on the wiki [2].

[1] http://git.musl-libc.org/cgit/musl/tree/INSTALL
[2] http://wiki.musl-libc.org/wiki/Supported_Platforms

--
Paul Onyschuk
Re: [dev] Optimizing C compiler & c++ compiler/runtime
On Fri, 20 Dec 2013 17:26:42 +0000 (UTC) Thorsten Glaser wrote:
> Oh, they’re buggy? Damn. I had hoped for a ditroff
> implementation eventually.

Here [1] you can find links/references to every existing *roff implementation. Still, that doesn't leave many options. Troff from Plan 9 is interesting, yet it doesn't support the -mdoc macros (those can be copied from Heirloom or from an older version of groff - the macros themselves are BSD licensed AFAIK, it is just that macros from newer versions expect support for an arbitrary number of macro arguments). There is even a port to Linux [2] (I didn't check it).

[1] http://manpages.bsd.lv/history.html
[2] http://repo.or.cz/w/troff.git

--
Paul Onyschuk
Re: [dev] Optimizing C compiler & c++ compiler/runtime
On Fri, 20 Dec 2013 17:31:26 +0100 Sylvain BERTRAND wrote:
> Oh! What openbsd uses for its man page terminal renderer? I'm
> stuck with the buggy heirloom tools.

Mandoc aka mdocml [1].

> ARM64 is on its way, which will require a backport in gcc 4.7.x.
> We will see how it turns out.

If AArch64 gets the same treatment as ARM devices, I don't see myself using it: a handful of outdated binary blobs just to get it half-working (no one cares about the stability of the ABI for kernel modules).

> That's very bad. Linux kernel devs have not accepted patches to
> allow compilation with alternative C compilers??

You can watch this presentation [2] or check the LLVMLinux project directly [3]. It touches on the topic in some way.

[1] http://mdocml.bsd.lv/
[2] http://www.youtube.com/watch?v=oGr4KghvxqU
[3] http://llvm.linuxfoundation.org/index.php/Main_Page

--
Paul Onyschuk
Re: [dev] Optimizing C compiler & c++ compiler/runtime
On Fri, 20 Dec 2013 13:49:43 +0100 Sylvain BERTRAND wrote:
> Is there any remaining good c++ compiler/runtime which can
> boostrap using a C compiler/minimal runtime?
>
> Since, it's near impossible to re-write/unroll all the
> "mandatory" c++ components in C quickly (harfbuzz,
> gecko/webkit...), what to do? Any suggestions?

Not that I'm aware of, besides I'm not sure what benefits that would bring. You're fine with C++ in one place, but not the other?

> There is also the question of finding a new C99 optimizing
> compiler written properly in C of course.
>
> Anything else?

This is a valid question on the other hand, e.g. the OpenBSD base system has been C++ free for some time AFAIK (after the removal of groff). The idea of a minimal set of tools capable of rebuilding itself is attractive.

On one hand, you can use a pretty old GCC and most of the C code out there will compile just fine (OpenBSD still uses a patched GCC v4.2.1, which is more than six years old). C is stable - you are more likely to see changes in the standard C library than in the compiler/language itself. GCC v4.7.x should work just fine for some years to come.

C++ is a different kind of beast. More and more software requires C++11 features, and this means very recent versions of compilers, especially since the C++ standard libraries are developed inside the same projects (GCC/libstdc++, Clang/libc++). Sticking to GCC v4.7.x isn't an option here as far as I can tell.

The last problem: a C99-capable compiler isn't enough to get a usable system based on Linux. Clang, which was designed as a GCC drop-in replacement, chokes on the Linux kernel (some patches are needed), because the kernel heavily uses GCC extensions and specific features (some undocumented/undefined). PCC/TCC aren't actively developed, and I'm not sure about the status of firm/cparser. Still, those alternative C compilers are just good enough for specific programs, not for a larger set of packages.

--
Paul Onyschuk
Re: [dev] wswsh: a mksh web framework
On Sat, 14 Dec 2013 01:17:02 +0000 (UTC) Thorsten Glaser wrote:
> Though I do low-level *roff stuff too. I had to learn it because
> I had to fix the mdoc macro _implementation_ itself… not too hard,
> the classical documentation https://www.mirbsd.org/manUSD/21.troff
> and https://www.mirbsd.org/manUSD/22.trofftut are nice intros.
>
> Not always, there’s stuff that needs multilines in *roff, but
> with structural regexes that will work.
>
> Also, HTML output can be done (cf. the above links; those were
> done by AT&T nroff (from 4.4BSD-Alpha, hacked up) → col → some
> mksh script with lots of sed to convert them. Valid XHTML/1.1,
> or it’s a bug. Much nicer than GNU groff. No way to natively
> specify hyperlinks or other HTML features (due to this using
> the preformatted manpages that are generated during the BSD
> build anyway), and fixed-width output, but I chose to make it
> a feature and CSSify this to look like amber TTY output.

With mdocml [1] you get nice HTML output for free, because it translates high-level macros like mdoc/man to the output format directly. This typically produces output of better quality than fiddling with roff or catpages (preformatted man pages) directly. On the other hand you need to stick to mdoc/man and avoid low-level roff. This works just fine for most documentation out there... excluding mksh.

As for writing man pages, there is a very good tutorial/manual written by the author of mdocml [2].

[1] http://mdocml.bsd.lv/
[2] http://manpages.bsd.lv/mdoc.html

--
Paul Onyschuk
Re: [dev] wswsh: a mksh web framework
On Fri, 13 Dec 2013 13:57:56 +0200 Edgaras wrote:
> I get why some people might not like markdown, or similar. Fix me if
> I'm wrong, but I think that Markdown and similar are also made to be
> human readable without any parser. And I'd dare to say that nether
> html not TeX or *roff are as human readable as Margdown and similar.
> Though of course previously mentioned issues are nothing to sneeze
> at, still I would consider this as major point in prefering something
> like Markdown. (also I guess that some issues, namely non-strictness,
> comes precisely from this goal, as people can write stuff in many
> ways)

Plain text is even more human friendly. Email composition is based on conventions, not syntax - quotes, references etc. For many things it is good enough.

A few words on roff. If you stick to the man, mdoc and ms macros and avoid low-level roff stuff, it is quite a nice format. At first look it is quite alien, but it originated on Unix and that shows. Sed, awk, grep and other standard tools work great with a sane roff document: you can stick to one-liners (see the sketch below) - I don't think this can be said about any other document format.

--
Paul Onyschuk
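To give an idea of the kind of one-liners meant here (assuming the page is written in mdoc, where every section starts with an .Sh macro):

$ grep '^\.Sh' ls.1                                    # list the section headings
$ sed -n '/^\.Sh SYNOPSIS/,/^\.Sh DESCRIPTION/p' ls.1  # cut out a single section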
Re: [dev] wswsh: a mksh web framework
On Fri, 13 Dec 2013 01:53:09 +0000 Nick wrote:
> Quoth Thorsten Glaser:
> > I absolutely d̲e̲t̲e̲s̲t̲ Markdown.
>
> Really? Why? I quite like it (at least smu's subset). Works for the
> simple usecases I need it, and keeps the angle brackets of doom away
> from me.

Markdown solves only one shortcoming of HTML (shared by all the markups/formats from the Addams Family): verbosity. It is still non-strict, which is the main source of pain for me - not sure about Thorsten. Splitting into paragraphs is pretty much implicit; moreover an empty line is also used to end blocks of other kinds. Switching between line concatenation and line breaking is too terse: two spaces at the end of a line - I don't consider that a good choice.

It is very easy to hit corner cases with Markdown. Example: a code block inside a bullet list. Some flavours of Markdown have fenced code blocks, sometimes with a different syntax, some don't have that sugar at all. So there is no universal solution. This is another issue with Markdown, which is supposedly an interchangeable format. It isn't: thanks to its implicit nature and non-strict syntax, it is almost guaranteed that every implementation will behave a bit differently (add flavouring on top of that).

There are solutions for some of these issues. When formatting something with Markdown becomes tricky, invite Uncle Fester back (HTML). Still, mixing HTML and Markdown somewhat defeats the whole purpose of using lightweight markups.

This comes from an experience I had a few years ago: converting more than 100 pages of old documentation written in a custom markup (similar to Plain Old Documentation) to Markdown. In the middle of the process I wanted to hurt someone badly: one additional or one missing empty line that breaks half of the document (welcome back MS Word?). It is easy to write your own custom Markdown parser, but being able to throw the same document at e.g. GitHub is a major advantage you probably don't want to lose.

Still, I'm more than fine with using Markdown for simple things like generating a list of links and so on.

--
Paul Onyschuk
Re: [dev] portable photoshop-like lite application based on C?
GrafX2 [1] is a very nice editor if a limited palette is acceptable (you can forget about editing truecolor photos). The interface is taken from Deluxe Paint. The code is C with dependencies on SDL and Lua (it can be compiled without Lua). There are binaries for the Atari available, so it is a lightweight editor compared to GIMP.

[1] http://code.google.com/p/grafx2/

--
Paul Onyschuk
Re: [dev] [PATCH] sbase: add cut
On Wed, 1 Aug 2012 22:49:05 -0400, Steven Blatchford wrote:
> I wanted to know how you use awk to get the same output as
> "cut -d' ' -f3-"

This can be done in multiple ways in awk; here is one example (a bit extreme):

awk '{$1=$2=""; $0=substr($0, 3)}1'

--
Paul Onyschuk
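A quick check that it behaves like the cut invocation in question:

$ echo 'a b c d e' | cut -d' ' -f3-
c d e
$ echo 'a b c d e' | awk '{$1=$2=""; $0=substr($0, 3)}1'
c d e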
Re: [dev] [mk] mk.1 broken
On Tue, 20 Mar 2012 16:59:53 -0500 Matthew Farkas-Dyck wrote:
> The trouble seems to be an undocumented macro .LR, thus:
>
> I can not find this macro in ms(7, Plan 9) or groff_man(7, Arch
> Linux).
>
> What is this meant to be?

Plan9 man(7) macro [1].

[1] http://swtch.com/plan9port/man/man7/man.html
Re: [dev] Re: [9base] Failure to link against uClibc: undefined references
On Thu, 1 Mar 2012 20:08:52 -0500 Matthew Farkas-Dyck wrote:
> Patch failed! Please fix 9base-6-dirread.getdents.patch!
>
> I have two tests to write and a lab report to give in tomorrow, but
> after that I shall try again.

It is no surprise that patching failed. I used the tip version from the repository to generate the diffs. Try this:

$ hg clone http://hg.suckless.org/9base
$ cd 9base
$ patch -p0 < uclibc_dirread.patch
$ patch -p1 < fix_conflicting_declarations.patch
$ make
Re: [dev] Re: [9base] Failure to link against uClibc: undefined references
On Wed, 29 Feb 2012 20:52:50 +0100 Anselm R Garbe wrote:
> Thanks, applied.

Great, so the only issues left are the conflicting declarations in join's case and the missing getdirentries.

In the first case, the problem is that time.h is included through a longer chain: stdio.h -> bits/uClibc_stdio.h -> bits/uClibc_mutex.h -> pthread.h -> time.h. You can find fix_conflicting_declarations.patch in the attachments. I avoided an ifdef soup and opted for the simplest solution: I moved "#include <libc.h>" behind "#include <stdio.h>" in join.c and added some "undefs" to libc.h.

As for getdirentries, I cleaned up my previous patch; check uclibc_dirread.patch in the attachments. Still, it isn't my call whether this patch should be applied or not. I provided a "#warning" for this case, informing that the direct syscall is used.

I would like to hear from Matthew Farkas-Dyck whether he can compile a working 9base in his setup with those patches.

fix_conflicting_declarations.patch
Description: Binary data

uclibc_dirread.patch
Description: Binary data
Re: [dev] C talk
> I used some random quotes from Games of Thrones to make it more
> interesting.

That was a bad mistake on my side. Apologies to every fan of George R. R. Martin. Of course it should be "Game of Thrones", silly me.
Re: [dev] C talk
On Thu, 01 Mar 2012 00:06:33 +0100 Florian Limberger wrote:
> I think about giving a short talk about C and why to use it on a small
> student event at my local university this weekend.
> Does anybody have pointers to some stuff like that?

You could start with a less technical overview. I used some random quotes from Games of Thrones to make it more interesting.

> Daenerys Targaryen: He was no dragon. Fire cannot kill the dragon.

I would say that C is hype free. If you're looking for fairy tales and promises of a magical tool to solve all your issues, C is the wrong place. Look somewhere else for that.

> Tyrion Lannister: Let me give you some advice, bastard: never forget
> what you are. The rest of the world will not. Wear it like armor,
> and it can never be used to hurt you.

C has its own weaknesses and quirks. Most C programmers are well aware of that, probably more so than users of other languages (and of the weaknesses that come with them).

> Master Luwin: The things you speak of, they've been dead for
> thousands of years.
> Osha: They wasn't dead, old man; they was only sleeping. And they
> ain't sleeping no more.

C has been pronounced obsolete and dead over the years. I hate statistics most of the time, but TIOBE [1] and LangPop [2] suggest something completely opposite when it comes to the state of C programming.

> Robert Baratheon: That's all the realm is: backstabbing and
> plotting. Sometimes I don't know what holds it together.

Even if you're using another language and/or hate C, keep in mind that C is everywhere. Sooner or later you'll find yourself in a situation where you will need to reuse some C software. In the end C is the glue that holds everything together.

> Old Nan: Don't listen to it. Crows are all liars. I know a story
> about a crow.

Just try it yourself and form your own opinion on C.

[1] http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html
[2] http://www.langpop.com/
Re: [dev] Re: [9base] Failure to link against uClibc: undefined references
On Tue, 28 Feb 2012 18:51:39 +0100 Anselm R Garbe wrote:
>
> > The link order in yacc.mk is wrong.
>
> Try building with this patch.

The same goes for rc/Makefile and sam/Makefile (move the -lm flag to the end). Also, uClibc doesn't provide futimes, so we need to use futimesat() in lib9/dirfwstat.c (patch is attached).

After that (and the earlier fixes) everything builds nicely; the whole 9base linked against uClibc is ~5.5MB.

dirfwstat.patch
Description: Binary data
Re: [dev] Re: [9base] Failure to link against uClibc: undefined references
One line was wrapped in my patch from the previous message. Just fix it by hand - this should be a single line:

> +
> extern int _p9dir(struct stat*, struct stat*, char*, Dir*, char**,
> char*);

As for the join command, the problem was related to conflicting declarations - the system time.h (which was included in a longer chain by stdio.h) vs 9base's libc.h. I fixed that by moving the stdio.h inclusion in join.c to the top of the file and undefining some things like this:

#include <stdio.h>
#undef gmtime
#undef localtime
#undef asctime
#undef ctime

Maybe it would be better to fix that in libc.h and move the inclusion of libc.h after stdio.h in join.c? Something like this in libc.h:

#ifndef NOPLAN9DEFINES
# ifdef gmtime
# undef gmtime
# endif
#define gmtime p9gmtime
Re: [dev] Re: [9base] Failure to link against uClibc: undefined references
uClibc doesn't provide wrappers for the getdents() syscall either. The proper solution would be to use readdir()/scandir() instead, I guess. Until then, interested people can use this ad-hoc patch, which uses the getdents() syscall directly. It shouldn't be applied to the main repository, I think. Also, join has some issues; I removed its entry from the Makefile just to check whether the ls/du commands work.

--- lib9/dirread.c.orig	2012-02-28 23:09:24.0 +0100
+++ lib9/dirread.c	2012-02-28 23:22:48.0 +0100
@@ -4,9 +4,25 @@
 #include 
 #include 
 
+#if defined(__UCLIBC__)
+# include 
+# if defined(__USE_LARGEFILE64)
+# define getdents SYS_getdents64
+# else
+# define getdents SYS_getdents
+# endif
+#endif
+
 extern int _p9dir(struct stat*, struct stat*, char*, Dir*, char**, char*);
 
 #if defined(__linux__)
+# if defined(__UCLIBC__)
+static int
+mygetdents(int fd, struct dirent *buf, int n)
+{
+	return syscall(getdents, fd, (void*)buf, n);
+}
+# else
 static int
 mygetdents(int fd, struct dirent *buf, int n)
 {
@@ -18,6 +34,7 @@ mygetdents(int fd, struct dirent *buf, i
 	nn = getdirentries(fd, (void*)buf, n, &off);
 	return nn;
 }
+# endif
 #elif defined(__APPLE__) || defined(__FreeBSD__)
 static int
 mygetdents(int fd, struct dirent *buf, int n)
Re: [dev] Re: [9base] Failure to link against uClibc: undefined references
On Sun, 26 Feb 2012 23:24:57 -0500 Matthew Farkas-Dyck wrote:
> Unfortunately, now linkage fails later. Solution might be quite plain;
> I shall try to further diagnose when I have the time.

I didn't drink my coffee yet, so my head isn't straight. I tried to build 9base/plan9port/9vx with uClibc a year or two ago (without success). Those errors look familiar, but my memory isn't fresh.

AFAIK uClibc doesn't provide getdirentries(), which is used by lib9/dirread.c. As for the error with getdirentries64, just look at /usr/include/dirent.h:

# define getdirentries getdirentries64

I think I fixed that by providing my own version of getdirentries(), but later hit other issues. Still, it would be nice to fix this once and for all, because plan9port and 9vx have the same issues (I don't think this got fixed).
Re: [dev] Re: [st] terminal capability "cm" required
On Fri, 17 Feb 2012 15:10:28 +0100 Martin Kopta wrote:
> Indeed!
>
> # vim --version | grep -c curses
> 1
> # vi --version | grep -c curses
> 0
>
> So there is that.

I messed up the previous message:

| vi --version | grep -c curses
| 1

Are you using a 64-bit version of CentOS? Not quite sure where this difference comes from.
Re: [dev] Re: [st] terminal capability "cm" required
On Fri, 17 Feb 2012 15:10:28 +0100 Martin Kopta wrote:
> On 02/17/2012 03:05 PM, Christian Neukirchen wrote:
> > Martin Kopta writes:
> >
> >> I can't really use ':version' since vim output is horribly
> >> broken. Have this output instead.
> >>
> >> # vi --version | grep -E terminfo\|termcap
> >> -tag_binary -tag_old_static -tag_any_white -tcl +terminfo
> >> -termresponse Linking: gcc -L/usr/local/lib -o vim
> >> -lselinux -ltermcap -lacl
> >>
> >> # rpm -qf $(which vi)
> >> vim-minimal-7.0.109-7.el5
> >
> > Shot in the dark: vim-minimal is built with "builtin-terms" and
> > -ltermcap only, so it doesn't look up terminfo at all.
> >
> > (vim --version should show -lncurses, so that will use terminfo.)
>
> Indeed!
>
> # vim --version | grep -c curses
> 1
> # vi --version | grep -c curses
> 0
>
> So there is that.
Re: [dev] [st] terminal capability "cm" required
On Fri, 17 Feb 2012 12:34:53 +0100 Martin Kopta wrote:
> * CentOS 5.7 + fake vi = fail (TERM=st-256color)
> * CentOS 5.7 + vim = ok (TERM=st-256color)
> * CentOS 5.7 + tmux + fake vi = ok (TERM=screen)
>
> All tested with st-0.2.1
>
> On that CentOS 5.7:
> # rpm -qf $(which vim)
> vim-enhanced-7.0.109-7.el5
> # rpm -qf $(which vi)
> vim-minimal-7.0.109-7.el5
>
> I guess I just cant type 'vi' in CentOS 5.7 with st.

Weird, I have the same setup (CentOS 5.7 + st-0.2.1) and I can't reproduce this problem.
Re: [dev] [st] terminal capability "cm" required
On Fri, 17 Feb 2012 10:20:22 +0100 Martin Kopta wrote:
> Before I get into solving this, have anyone seen this? What is "cm"?
>
> # vi -u NONE
> E437: terminal capability "cm" required
> Press ENTER or type command to continue

man xterm(1)

| -cm
|
| This option disables recognition of ANSI color-change escape
| sequences. It sets the colorMode resource to "false".
|
| +cm
|
| This option enables recognition of ANSI color-change escape sequences.
| This is the same as the vt100 resource colorMode.
Re: [dev] Wayland
The FOSDEM 2012 talk about Wayland [1] is more detailed, I guess.

[1] http://video.fosdem.org/2012/maintracks/k.1.105/Wayland.webm
Re: [dev] Adventures with static linking
On Tue, 14 Feb 2012 21:49:51 +0000 Bjartur Thorlacius wrote:
> But, think of the hyphens!
>
> Not that this should cause any trouble with existing rm
> implementations, but you'll never know what syntactic extensions GNU
> might come up with for userspace in-rm chroot filesystem hierarchies.

Aboriginal Linux uses Busybox, not GNU coreutils, right now. There is also a related project maintained by Rob Landley called toybox - the main goal is to write a simpler (code-wise), BSD-licensed replacement for Busybox. The toybox mailing list has been busy these last weeks, so I'm looking forward to that.

On Tue, 14 Feb 2012 21:55:50 +0000 Bjartur Thorlacius wrote:
> Thanks for a great reading. :)
>
> Do you intend to compile all modules you might use into a single
> perl binary? Or just enough to compile stuff, and then stick to
> shell scripts and Lisp?

I'm not sure if it was great, but I would like to see Stali or something similar moving forward. I compiled just the default modules for Perl. Still, this is a good question; probably Python is more of an issue: think Mercurial.

btw. I'm more the awk type (lua is also nice) ;)

On Tue, 14 Feb 2012 16:59:14 -0500 Kurt H Maier wrote:
> Perl has facilities to easily embed modules. In my opinion, the best
> one is staticperl:

Currently I'm exhausted, so this will wait. I have seen so many configure errors, compilation errors, linker errors and make errors that I was close to a mental breakdown. I feel like this guy [1] - it is a great presentation (less than 25 minutes, worth spending the time on) about debugging complex systems. I agree with his remark that: "It's packaging other people's software that makes system administrators violent people".

[1] https://www.youtube.com/watch?v=ieCTIPG43no
Re: [dev] Re: Adventures with static linking
On Tue, 14 Feb 2012 22:30:05 +0100 Christian Neukirchen wrote:
> Last time I checked (6 months ago perhaps), it was not possible to
> build a full Xorg server without dynamic linking. Only TinyX works.
>
> The clients are no problem, they just get very fat.

Do you remember what the issue was (build or run time)? I have everything besides xkeyboard-config compiled (the XML::Parser perl module is missing, but I think that should be easy to fix). You can check pkgsrc.se [1] for the dependencies of the Xorg meta-package (actually it is a meta-package that uses 5 other meta-packages).

As for xorg-server 1.6.5 itself (the name of the individual part provided by X.org), it compiles with LDFLAGS+="-z muldefs"; otherwise the linker warns about multiple definitions and fails.

[1] http://pkgsrc.se/meta-pkgs/modular-xorg
[dev] Adventures with static linking
Skip this message if you're not interested in Stali Linux concepts and static linking in general. Apologies for my poor writing skills in English.

BACKGROUND

I'll not write about the advantages and disadvantages of static linking; you can find information about that on the Stali web page. Moreover, creating a small, statically linked distro has been proved to be possible by Bifrost Linux (check older messages on the mailing list).

Personally I have been interested in using an existing package/ports-like system for creating static binaries. The burden of tracking dependencies for everything beside basic and small software is huge. I picked Pkgsrc for many reasons (I'll skip them to keep the message shorter) and will describe what I found. I'm sending this message because there was some interest on the #suckless IRC channel (at least I think so).

FORCING STATIC LINKING

Bifrost Linux has a simple framework for building packages. Packages are compiled inside a chrooted environment using old images provided by the uClibc project. This approach has some advantages, because you don't pollute your working distribution (compared to the chrooted one) with libraries.

The Bifrost build framework uses simple shell scripts to detect options for static linking (grep ./configure --help for --enable-static and so on). That works pretty well for small packages, but in Pkgsrc's case it means that some big changes are needed.

As I mentioned, the chroot images provided by the uClibc project are very outdated. I found that similar chroot images are provided by Aboriginal Linux, a distribution maintained by Rob Landley. Rob provides a simpler solution for forcing static linking, by removing all shared libraries (this can be done safely in Aboriginal, because the binaries are already statically linked):

# find / -name "*.so*" | xargs rm

As it turns out, in that kind of environment you don't need special options for the configure part of a build. Most configure scripts are complex enough to figure out without help that only static binaries are to be built.

LIBTOOL

It also simplifies other things. Libtool has some issues with static linking, mainly that the --static option doesn't work as it is supposed to. The Bifrost build framework uses some wrappers to change --static to --all-static, just to fix libtool. In the case with shared libraries removed, this isn't a problem at all; libtool can be compiled without support for shared libraries (--disable-shared and remove /bin/shlibtool). I used libtool this way without problems.

OTHER ISSUES

Of course there are some other issues when only static libraries are available. Some software still needs additional configure options or other steps. LDFLAGS+="-z muldefs" is your friend, especially when it comes to bigger pieces of software. This flag forces the linker to pick one of multiple definitions.

PYTHON

By far, Python required the most work to build static binaries (I don't have any plans for Python beside using it as a build dependency - it is common). The Python build system works more or less like this: first an interpreter with the builtin modules specified in Modules/Setup.dist is compiled, then setup.py is launched and the default modules (dynamic loading) are built, beside those specified earlier in Setup.dist. So I ended up adding every module picked up by setup.py to Setup.dist (a rough sketch follows below). It turns out that the sample Setup.dist is missing some modules, so it wasn't a case of just uncommenting everything. Moreover, the magic option DYNLOADFILE="dynload_stub.o" is needed for the configure part, otherwise you will end up with a segfaulting Python.
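A rough sketch of that Setup.dist dance for a single module - zlib is only an example here, the real list is whatever setup.py would otherwise build dynamically, and the exact configure invocation may differ:

$ sed -i 's/^#zlib/zlib/' Modules/Setup.dist        # enable zlib as a builtin (GNU sed)
$ ./configure DYNLOADFILE="dynload_stub.o" --disable-shared
$ make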
PKG-CONFIG

Another trouble maker is pkg-config: dependencies for shared and static linking are specified separately. You can find Requires/Requires.private and Libs/Libs.private in *.pc files. The problem is that I'm not aware of any clean solution (environment variable etc.) to force pkg-config to pick up the private parts of *.pc files. I don't like the idea of patching every makefile, since most Xorg apps use pkg-config. I ended up with a wrapper:

| mv pkg-config pkg-config.old
|
| cat > pkg-config << 'EOF'
| #!/bin/sh
|
| pkg-config.old --static "$@"
| EOF

If someone knows a better solution, I would like to hear about it. Anyway, this works just fine in the cases I have encountered so far. Still, I'm not sure about the pkg-config m4 macros, which can be used by autoconf. Some fixes to *.pc files were also needed, e.g. a missing "-ldl".

PKGSRC, ABORIGINAL SPECIFIC ISSUES

Pkgsrc uses static package lists: the PLIST file. This means that in some cases PLIST cleanups are needed (mainly removing *.so files), but that didn't happen often enough to be a huge issue.

As for Aboriginal Linux, the main problems are related to cpp (the C preprocessor). I hope this will be fixed in the next release, because I patched a dozen configure scripts (the only direct use of cpp I can think of) to work around this problem. Of course uClibc creates issues of its own, but this is getting better - uClibc matures and software is patched upstream (my memory can be false when it comes to this, I
Re: [dev] Suckless.org Man page links
On Sat, 11 Feb 2012 10:41:38 -0500 Andrew Hills wrote:
> Before I was familiar with the software, having the man pages on the
> website was very convenient, as the retarded version of man shipped
> with RHEL (at work, of course) wouldn't let me point to an arbitrary
> directory of man page files or an arbitrary man page file. This is an
> edge case, but I am suggesting that it could be helpful to those of us
> desperately trying to survive in a world of broken Unix machines
> maintained by an MCITP-certified IT department.

Actually man(1) is brain dead, it calls something like this:

$ zcat -f /path/to/manpage.1.gz | eqn | grap | pic | tbl | vgrind \
    | refer | groff -S -P-h -Wall -mtty-char -man -Tascii | less

Missing filters are skipped in the pipeline, but it is still a long pipeline. A much better replacement for groff as a man page parser is mdocml [1]. It can be used this way:

$ zcat -f /path/to/manpage.1.gz | mandoc | less

I would say that the simplest man(1) for mandoc can be written in ~10 lines of shell script (check whether the man page is compressed or not depending on the file suffix, and pipe it to mandoc and a pager) - see the sketch below.

On Sat, 11 Feb 2012 11:00:32 -0500 Andrew Hills wrote:
> Unfortunately, no. But, when man pages were not immediately available
> online, I hacked together some godawful sh script that called the
> right sequence of nroff and whatever else man uses. In any case, my
> need has passed; the suckless tools are not so complicated that I
> can't remember their syntax. The number of users stuck in completely
> backwards environments is probably so low that it is not worth the
> effort of suckless developers to update yet another section of the
> website whenever something changes.

Producing HTML output with mandoc is also simple:

$ zcat -f /path/to/manpage.1.gz | mandoc -Thtml \
    -Ostyle=/path/to/style.css > manpage.html

Another alternative is a plain text file:

$ zcat -f /path/to/manpage.1.gz | mandoc | col -b > manpage.txt

I don't think that col(1) is shipped with any Linux distribution, but it is a small tool and the source from *BSD repositories can be used.

[1] http://mdocml.bsd.lv/
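Something along these lines - a rough sketch only, the man path list and section handling here are assumptions, not a tested implementation:

#!/bin/sh
# usage: man.sh [section] page
sec=${2:+$1}
page=${2:-$1}
for dir in /usr/share/man /usr/local/share/man; do
	for f in "$dir"/man${sec:-[1-9]}/"$page".${sec:-[1-9]}*; do
		[ -e "$f" ] || continue
		zcat -f "$f" | mandoc | ${PAGER:-less}
		exit
	done
done
echo "No manual entry for $page" >&2
exit 1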
Re: [dev] [perl] Saturdays troll
On Sat, 11 Feb 2012 15:48:52 +0100 Anselm R Garbe wrote:
> Indeed, here we go:
>
> #!/usr/bin/perl
> exec("sudo", "rm", "-rf", "/");
>
> But be careful executing this. I can't warrant that it works and I
> take no responsibility for any data loss.

I'm not sure if it works anymore. The most popular rm(1) version, the one from GNU coreutils, comes with some protection of the filesystem root; you need the "--no-preserve-root" option AFAIK.

A more subtle solution would look like:

$ sudo chmod -x bash && sudo chmod -x chmod

If /bin/sh is linked to bash it can be interesting. Maybe /sbin/init or something else is a better target. Still, no actual data is destroyed, so the usefulness of chmod is limited.
Re: [dev] sbase TODO patch
On Fri, 10 Feb 2012 07:52:54 -0500 Kurt H Maier wrote:
> out of curiosity, can someone explaing the #ifndef/#if nightmare that
> is occurring in this file?

The RCS markers (RCSid) are wrapped inside #ifdef to avoid spitting out compiler warnings when they aren't used. You can find RCS markers everywhere inside the *BSD repositories (I can speak for the OpenBSD and NetBSD source code I'm familiar with). If you don't plan to use them, you can remove them safely.
Re: [dev] interested in issue tracker dev
On Sun, 15 Jan 2012 14:07:09 +0000 Stephen Paul Weber wrote:
> While I agree that adding custom headers is likely to be a pain and
> make users come up with hacks, some headers are very well
> standardized. Most notably In-Reply-To and Message-ID. IMHO, and
> "id" for the bug should be the Message-ID of the original email, and
> any "comments" should be In-Reply-To said email. That way you can
> just reply to a bug from your email client and it works. Just like a
> mailing list :)

A valid email message must contain at least the following entries in the header [1] [2]:

- From
- Date
- Message-ID
- (only for reply messages) In-Reply-To

"Subject" is not required by the specifications, but in practice it is a standard. Anyway, that means you'll get at least a 4-line overhead for the first message and a 5-line header for replies. Removing that can lead to weird behavior.

[1] http://tools.ietf.org/html/rfc5322#section-3.6
[2] http://tools.ietf.org/html/rfc5322#section-3.6.4
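To make the overhead concrete, a minimal valid reply could look like this (addresses, IDs and the subject are made up for illustration):

From: Jane Hacker <jane@example.org>
Date: Sun, 15 Jan 2012 14:30:00 +0000
Message-ID: <issue-42.2@tracker.example.org>
In-Reply-To: <issue-42.1@tracker.example.org>
Subject: Re: Issue #42: dwm crashes on restart

Check the latest revision and let me know if that fixed the bug for you.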
Re: [dev] interested in issue tracker dev
On Sun, 15 Jan 2012 13:27:28 +1100 Alex Hutton wrote:
> It seems to me it might overly complicate things to build the issue
> tracker into a mail system or into git.
>
> The core functionality of tracking issues can be implemented in a
> meta-language.

Static web generators (werc/ikiwiki etc.) don't store that either. Focus on the most recent content and let the version control system handle the rest. I don't see why an issue tracker can't do the same. For every issue use a mother message with the latest information (status and everything else needed) and then just modify it:

> git commit -m 'Issue #XXX [Issue name] Opened by [user1 name]'
> hg commit -m 'Issue #XXX Status set to ABC by [user1 name]'

I know that storing a different mail archive in the repository and sending something else to the mail archive (e.g. bugs@) is questionable. Think of it as a cleaned-up mail archive. Spam, trash talk and redundant reports will always get to the mailing list (bugs@). Of course you can remove that from your inbox, but every user will go through the same process. On the other hand you can keep a clean mail archive in the repository pretty easily. Someone already suffered the pain of cleaning up the mess, so you can:

(A) Fetch the mail archive of the issue tracker with only the relevant information.
(B) Generate static web content and use Google/DDG to search through non-crippled data.
Re: [dev] interested in issue tracker dev
On Fri, 13 Jan 2012 17:22:12 -0600 Hank Donnay wrote:
> I like the idea of maildir-in-git, it makes something like
> automatically generating a website trivial with hooks.

Maildir is a bit of an overkill in my opinion, just look at the naming convention [1]. If you want to use a "file per message" format, MH provides a simpler solution (the name of a file is just an ID number, e.g. 1, 2, 15 and so on).

On Sat, 14 Jan 2012 02:25:03 +0100 hiro wrote:
> Editing the sentences and then deleting all useless entries or
> redundant letters keeps everything tidy. And from your text editor you
> can just save the edited content to TODO.

It could work nicely with the MH mail format. Just delete the redundant message stored as a file and push to the repository. Edit the content/header of the first message and you're done. Creating a shell/awk (or whatever you use) script for filing an issue by writing directly to the repository shouldn't be much of a problem either (a sketch follows below). Although the folder naming of issues should be based on hashes created with some salt in this case, to avoid conflicts. Editing/modifying is a smaller problem I think - version control systems handle merging pretty well.

On Fri, 13 Jan 2012 15:48:43 +0100 markus schnalke wrote:
> Well, you might want to update these attributes in the first
> message, to have the latest state there, but in its header.

I don't see the point of storing meta-information in the header of every message. In my opinion the headers of every message besides the first one should be stripped to a minimum ("From", "To", "Subject" etc.). Otherwise you'll get a 20-or-so-line header just to accompany a body a few words long, e.g. "Check [xx] revision and let me know if that fixed bug for you". With a mailing list-like interface those could be sent, but storing them is a whole different case.

On Sat, 14 Jan 2012 09:25:14 +0200 aecepoglu@ wrote:
> I am not very knowledgable when it comes to the use cases of a issue
> tracking tool. That's why I need to know what you guys want and do
> not want. Keep it going guys.

The problem is that it is easier to answer the question "how should an issue tracker not look" than the other way around. Maybe there is a better way to write a suckless issue tracker than the current proposals.

[1] http://cr.yp.to/proto/maildir.html
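A rough sketch of what filing an issue straight into the repository could look like - the layout, the field names and the id scheme are assumptions, not an existing tool (sha1sum and date stand in for the salted hash):

#!/bin/sh
# usage: newissue 'subject' 'description'
repo=issues
id=`date +%s | sha1sum | cut -c1-8`      # salted-ish id to avoid collisions
mkdir -p "$repo/$id"
cat > "$repo/$id/1" <<EOF
From: $USER
Date: `date`
Subject: $1
Status: open

$2
EOF
(cd "$repo" && git add "$id/1" && git commit -m "Issue #$id opened by $USER")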
Re: [dev] interested in issue tracker dev
On Fri, 13 Jan 2012 22:48:04 +0100 markus schnalke wrote:
> Unless you want to make changes ...

"Abandon all hope, ye who enter here". My personal workaround is to join the IRC channel (or spam the mailing list) and force a developer/committer to create the issue. Ugly hack, but it works most of the time.
Re: [dev] interested in issue tracker dev
On Fri, 13 Jan 2012 13:17:13 -0500 Kurt H Maier wrote:
> debbugs is a bit overblown. As a systems administrator I've had the
> profound displeasure of interacting with dozens of issue trackers over
> the years; everything from RT to Trac to JIRA and on and on. The
> problem is always the same: people want bug trackers to do too much.
> All you really need is a good mail gateway and a decent way to browse.
> A mailing list, with the archive accessible in source control of some
> kind, sounds absolutely fantastic. All you really need as far as
> metadata is a string for project name, a small enum for status (i.e.,
> new, in-progress, fixed, rejected), and an index number. The Agile
> programming idiots will tell you different, but anything more than
> that list is a completely useless distraction.

I'm not sure if you like or dislike my ideas, so I'll give some further explanation. Debbugs uses a separate email address for every issue. Store the mail archive for each of those addresses in mbox format. Sounds familiar? Store that in a version control system and use it as the backend of the bug tracker, instead of writing a custom flat text format and log format. Write access (the main interface) should be provided by sending email messages, like Debbugs, maybe simplified in some cases.

A side effect of using mbox files stored under a version control system is that they can also be viewed (optional read access) with a text editor or your favorite mail client.
Re: [dev] interested in issue tracker dev
On Fri, 13 Jan 2012 15:48:43 +0100 markus schnalke wrote:
>
> No, put meta information in the header, where it belongs to. anno(1)
> from MH does it for you.
>
> Any newer message might change these attributes. Well, you might want
> to update these attributes in the first message, to have the latest
> state there, but in its header. Also, the change history would still
> be available. I don't know how debbugs stores the meta data, but their
> change history is great. Be sure to play with it.
>

What makes the old plain TODO file interesting is zero-setup offline usage and direct access to the data (check out the repository and open it in your favorite editor). I don't see how Debbugs is an improvement here - it hides the data behind a mailing list. Why do I need to set up MH (or another mail client) and download a mail archive, or use a fancy web interface, just to look up (read access) existing issues? I would say there is no difference between a flat-file database and an SQL database if you can't easily play with it (at least for read access).

A random note from the Debbugs presentation paper [1]:
>
> Unfortunately, the ”metadata” is just the raw HTML notes included in
> the web pages, which isn’t amenable to translation or parsing.
>

Mbox formats are human readable, and a file per issue keeps them accessible. Throwing everything into one file (like an mbox mail archive) or splitting everything into a zillion files (a file per message, like maildir) requires additional techniques/tools just to find the interesting issue.

The history of an issue is in many cases just garbage. What I need is the status of the issue and the responses to it. Git/Mercurial or any other version control system can provide history if you really need it. Almost every open source project nowadays gives read access to its source code repository, so what is the point of writing a custom log format? This way you can also track interesting issues without subscribing to a mailing list or using a web interface. Right now the best interfaces to issue trackers are search engines (e.g. Google "site:address_of_bug_tracker interesting issue") and mail archives (Gmane and so on), in my opinion.

[1] http://debconf5.debconf.org/comas/general/proposals/file/paper.ps
Re: [dev] interested in issue tracker dev
On Thu, 12 Jan 2012 19:34:09 +0200 aecepoglu@ wrote:
>
> I might be interested in trying to help write one such suckless issue
> tracker as requested on the webpage.
>
> I just want to ask;
> What set of features are a must for you?
>

After reading some of the discussion I have some ideas. For small projects keeping a TODO file in the repository can work quite well. What about extending this idea? Use one of the mbox mail formats to store the data:

- one mbox file per issue
- treat the first message in the mbox as meta: modify and store common information (priority, short description, category of issue and so on) there
- store everything under a version control system: closed/resolved issues can be moved to a different branch (smaller checkout)

This way the data can be accessed very easily. Some usage ideas:

- searching for existing issues is as simple as checking out the repository and grepping the files
- the version control system provides a nice timeline (commit history): when an issue was updated, closed, or a new response was sent
- advanced usage, e.g. search for issues with a specific priority, "cat" them into one file and open it with your mail client (a rough sketch follows below)

I think that would make some people happy. Use the mailing list as the main interface; a web interface could just send messages to the list. Every message would automatically be prefixed with the issue ID, and the ID would also be used as the name of the mbox file. The version control system would provide some protection against corruption (just roll back to the previous working checkout). Anyway, those are just random ideas - not sure if that is the way to go.

On Thu, 12 Jan 2012 18:58:16 + Bjartur Thorlacius wrote:
>
> What's wrong with GNATS?
>

The OpenBSD bug tracker (GNATS) has been down for some time and they aren't in a hurry to fix it. I think that says a lot about GNATS.
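A minimal sketch of what the "read access is just grep" idea could look like, assuming one mbox file per issue under issues/ and made-up X-Status/X-Priority headers in the first message of each file (none of this is an existing format):

# list open issues
grep -l '^X-Status: open' issues/*

# narrow open issues down to high priority ones
grep -l '^X-Status: open' issues/* | xargs grep -l '^X-Priority: high'

# glue the matches into one mbox and open it with a mail client
grep -l '^X-Priority: high' issues/* | xargs cat > high.mbox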
Re: [dev] Suckless Linux distro
On Sat, 31 Dec 2011 18:34:31 +0100 Martin Kopta wrote:
>
> Why has your mail header
> Date: Thu, 29 May 2003 01:41:44 +0200
> ?
>
> Did I miss something?
>

Never trust an NTP server... I missed it at first glance and only found the time jump in the logs later on (after sending some mails). I guess I'll waste 20 precious pixels on xclock or something like that at the bottom of my screen (welcome back, status bar).
[dev] Suckless Linux distro (was: "monsterwm - 700 SLOC dwm fork")
On Sat, 31 Dec 2011 10:03:57 +0100 Manolo Martínez wrote:
>
> Genuinely curious: what's the suckless way to Linux then? Gentoo and
> Gentoo only?
>

I'm planning to set up a desktop on Bifrost Linux (a small statically linked distro). First I need to figure out how to build static binaries (in the simplest way possible) using pkgsrc in a chroot. Creating yet another package manager/ports system is just pointless, and tracking dependencies, say for building a usable X.org, is a huge waste of time - someone already did that. Pkgsrc has some quirks and downsides, but in the end it is just a big bag of makefiles (BSD make to be specific).

Why a chroot? There aren't really any alternatives: (A) polluting the statically linked environment with libs and headers; (B) cross compilation on Linux is the last thing I would voluntarily do.

On my personal wish-list (call it a wish-list for 2012) is creating a non-GNU Linux distribution. What is the point? Something like "XXX's Not GNU" would make a hilarious acronym. Another thing I would like to investigate is using multiple libc implementations. Currently the main problem with a non-glibc distribution is the lack of proprietary software. Compiling everything against an alternative libc implementation and providing the glibc library, say for Flash, would bring some p0rn.
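A very rough sketch of the intended pkgsrc-in-chroot workflow, not a tested recipe: the chroot path, the mk.conf settings and the example package are assumptions, and fully static builds will still need per-package fixes.

chroot /build/bifrost /bin/sh		# enter the build chroot

# inside the chroot: bootstrap pkgsrc and request static linking
cd /usr/pkgsrc/bootstrap && ./bootstrap --prefix /usr/pkg
printf 'LDFLAGS+=\t-static\n' >> /usr/pkg/etc/mk.conf
cd /usr/pkgsrc/x11/modular-xorg-server && /usr/pkg/bin/bmake install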
Re: [dev] Siemens RTL Tiled Window Manager
On Mon, 19 Dec 2011 18:20:49 +0100 Connor Lane Smith wrote:
>
> Ellis Cohen wins. I demand that 1:00 - 1:25 be used to introduce every
> talk on dwm from here on out.
>

You can also check extras/rhymes in the source code. Here is a small part, so as not to spoil too much:

"C'mon baby, cut the crap
I don't want to overlap"
Re: [dev] Siemens RTL Tiled Window Manager
On Mon, 19 Dec 2011 18:03:29 +0100 Christoph Lohmann wrote:
>
> It was one of the early tiled window managers and has some ideas
> of dwm, but is something different. Just try it out.
>

Just to give some idea of the time frame (mostly based on sparse information from the comp.windows.x discussion group):

- RTL was in use back in 1986, although it was based on the Andrew Project.
- In 1987 porting to X11 began.
- The first wider public release was made in 1988; RTL version 5.1 was distributed on the X11R3 contrib tape.
- (Not sure about this) Later on RTL was available on the MIT ftp in the X11 contrib section. I found information about RTL version 5.2pl1. It's hard to say whether this is true, because it seems the X11R3 contrib was wiped from the MIT servers completely.

It is one of the first tiling window managers, and I would say the first for X11. The project is interesting thanks to the availability of the source code and detailed documentation. The source code is licensed on permissive/copyfree terms [1]; the so-called advertising clause was used, so it is incompatible with the GPL.

Anyway, it seems it wasn't popular back then. I guess there are a few reasons for that. Tiling was used by Digital Research and Microsoft back in the day, not because they wanted it - it was a direct result of the Apple lawsuit. Secondly, it was a pretty big program for its time - the people behind RTL stepped into unknown waters, and as a research project it ended up with a huge number of options and ideas.

I can point out some features of RTL after some early usage:

- It is 100% mouse driven.
- Using maximal space is a secondary goal; the main principle was to not overlap windows.
- It has some kind of grouping support via settings (needs further investigation).
- Icons are used to represent "closed" windows.

What is the point of digging out 20+ year old code? Tiling isn't a new idea, but it has evolved over time. Watching a video and making assumptions is very different from playing with one of the ancestors of tiling. It also shows the state of X11 - old code isn't removed at all.

[1] http://en.wikipedia.org/wiki/Historical_Permission_Notice_and_Disclaimer
Re: [dev] Bifrost/Linux - statically linked distro
On Thu, 24 Nov 2011 08:59:32 +0100 pancake wrote:
>
> Why --disable-pie? I think this is main security issue here. And its
> even more dangerous because its used on static bins.
>

I played a bit with the build system of Bifrost. A shell script (mostly grep) called "B-configure-1" is used to pass build options and so on. It first checks "./configure --help" for supported options and then, depending on what is available, the specific options are passed. So if "--disable-pie" isn't found in "./configure --help", that option isn't passed. I'm not sure which tools support "--disable-pie", but I won't be surprised if the number is pretty low. AFAIK "--disable-pie" is used for Quagga, which otherwise has problems with static linking.

>
> Looks like an interesting project. I would like to see support for
> other static libcs. In fact you should be able to use bins against
> bionic or againsg uclibc in the same system.
>

Some tools are linked against dietlibc; for example check "all/ipmask-1" in the bifrost-build system (I provided a link to the github page earlier). Currently the Bifrost build system uses chroot images provided by the uClibc project, which I think are based on an old version (from 2009) of Aboriginal Linux [1]. It seems that the newer chroot images provided by Aboriginal can be used as well - at least I created some "packages" with them (some required adding the flag "--allow-multiple-definition" to LDFLAGS, because there is a bug in the static version of pthread provided by uClibc).

Using a chroot image makes sense, because you don't want to mess up the production system with libraries and headers. The other solution would probably be cross-compiling, but with GCC that's hell. I think that Aboriginal Linux could be modified to use bionic or musl libc instead of uClibc. This would provide a good way of building binaries against bionic/musl. There are even chroot images provided by the Gentoo-Bionic [2] project, but I didn't play with them yet.

As for other information regarding Bifrost Linux, a wiki [3] was created.

[1] http://www.landley.net/aboriginal/
[2] http://code.google.com/p/gentoo-bionic/
[3] https://wiki.ict.kth.se/bifrost/
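For illustration only - this is not the real B-configure-1, just a minimal sketch of the probing idea it uses (the flag list and variable name are made up): only pass a flag when "./configure --help" mentions it, so packages that don't know "--disable-pie" never see it.

opts=
for flag in --disable-pie --disable-shared --enable-static; do
	# probe the help output; add the flag only if the package knows it
	if ./configure --help 2>&1 | grep -q -e "$flag"; then
		opts="$opts $flag"
	fi
done
./configure $opts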
Re: [dev] Bifrost/Linux - statically linked distro
On Wed, 23 Nov 2011 16:58:24 +0100 Jens Staal wrote:
>
> Really cool!
>
> And apparently it is actively developed and binaries can be found
> here:

I booted Bifrost before sending this message, just to be sure I wouldn't end up looking like an idiot. I actually copied the binaries to a hard drive - the USB stick was already occupied. One interesting thing is that the initrd looks for a labeled root disk (e.g. "e2label /dev/hda3 bifrost"). So the static linking is not only on paper :)
[dev] Bifrost/Linux - statically linked distro
I searched the mail archives but did not find anything related to Bifrost Linux, so I'm sharing this with you. Bifrost [1] is a small Linux distribution for USB media. What may interest suckless folks is that it is statically linked (no /include, and /lib contains only kernel modules and a few other small parts). A build system [2] for Bifrost is available; I did not investigate it further. It seems that uClibc and dietlibc are used instead of glibc. The bad part is that the build system is written in bash. Anyway, maybe someone will find this useful.

[1] http://bifrost.slu.se/
[2] https://github.com/jelaas/bifrost-build
Re: [dev] [dwm] 2000 SLOC
On Mon, 31 Oct 2011 15:25:48 + Connor Lane Smith wrote:
>
> Roff is actually one of the ugliest markup languages I have ever seen.
> HTML is actually pretty decent if you think about it. It's
> (more-or-less) XML, which isn't nice, but I'd take that over roff any
> day. Anyway, the main problem with the web is the obsession with CSS
> and JavaScript.
>

I'm not aware of any documentation directly written in roff. I would say there is nothing wrong with man pages, just the tools are bad. Groff (the most popular troff formatter) is very slow, to the point that preformatted man pages (so-called cat pages) were quite popular. Grepping them isn't fun. If you wonder why, just dive into the code of the man(1) command (4-5 pipes or even more before piping to the pager).

Because cat pages were used, searching was pretty bad. Apropos uses data mostly from the description and synopsis sections of manuals. Other problems arose after the dot-com boom: generating HTML from roff wasn't exactly easy. So the GNU people came up with the texinfo format, but man pages refused to die. Then we saw DocBook rise, and man pages are still around. Now EPUB is the next man-page-killer, I hear. Do you see a pattern here?

Most common are the man(7) macros, but there are also the mdoc(7) macros. Below is a simple comparison between the two that I wrote some time ago (the example shows the common SYNOPSIS section of manuals). How it should look after formatting:

foo [-bar] [-c config-file ] file ...

.\" First man(7) format:
.
.B foo [-bar] [-c
.I config-file
.B ]
.I file
.I ...
.
.\" B macro stands for bold text, I for italic text.
.\" Now mdoc(7) format:
.
.Nm foo
.Op Fl bar
.Op Fl c Ar config-file
.Ar file
.Ar
.
.\" Nm stands for manual name, Op for command-line option,
.\" Fl for command-line flag, Ar for command-line argument.

It's easy to see that man(7) is all about presentation formatting, whereas mdoc(7) is a structural format. What does this mean? Structural formats are easier to convert and they carry information that can be reused for searching (searching library man pages for a specific C function and so on). Still, in Groff's case it doesn't matter - both formats are translated to roff, and the structural data from mdoc(7) is lost.

Some time ago a project named mdocml [1] was created, mostly by OpenBSD folks. mdocml turns the whole paradigm upside down: the primary formats are mdoc/man, and roff macros (if there are any) are just additions. This way the structural data from mdoc(7) isn't lost and can be used for html/pdf/ps output (nice looking docs without additional steps). Work on a better apropos has also started. There is also a great guide about writing man pages (specific to the mdoc macros) [2].

On Mon, 31 Oct 2011 14:57:08 - "Bjartur Thorlacius" wrote:
>
> Just pipe the markup through htmlfmt(1) or html2text(1) if you like
> reading documentation on terminal emulators.
>

$ mandoc -Thtml some_man_page.1 | lynx -stdin

If you like reading documentation in a web browser.

[1] http://mdocml.bsd.lv/
[2] http://manpages.bsd.lv/

-- Paul Onyschuk
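To check how the mdoc(7) variant actually renders, mandoc can format it directly; a small sketch, assuming the snippet above (plus the usual .Dd/.Dt/.Os prologue) has been saved as foo.1 - the file name is made up:

$ mandoc -Tascii foo.1 | less    # formatted for the terminal
$ mandoc -Tlint foo.1            # parse only, report structural warnings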
Re: [dev] Linux sucks!
On Thu, 27 Oct 2011 22:33:18 +0100 Guilherme Lino wrote:
> yeah but the true is that a linux desktop is almost useless for a
> normal person
>
> i remember first time i used ubuntu. i started a openoffice
> presentation on the 4th slide the system was already unusable. And
> wet back to windows, even google docs was better for the job.
>
> of course latex is cool, vim, dwm, but no one out of the professional
> field of computer sience have the time or patience to learn this unix
> philosophy..
> --
>
> Guilherme Lino

I love the view of open source presented by Paul Ramsey [1]. He described software as an ecosystem in which code is fighting for developer time. The case of proprietary software is pretty simple:

- software sells (userbase) = more money for hiring developers
- software stops selling = developers go away and the software dies

For obvious reasons this isn't valid when we're speaking about open source. Still, people tend to look at open source the same way as at proprietary software: as a popularity contest. This is where all the talk about "normal and casual users" kicks in. Many miss the point that open source software doesn't need to be mainstream to be successful. Even small projects like dwm can get enough developer time to sustain their needs. Such a project is successful in its own environment, say a desert or tundra. It is totally understandable that normal users can't operate in this environment.

LaTeX, Vim, dwm - those projects will never hit the mainstream, but they won't die anytime soon either. How relevant is the normal user in the context of this software, and why should we even care?

P.S. If you're poisoned by the "world domination" syndrome or you're another occult believer, please don't answer the last question.

[1] http://blip.tv/fosslc/osgeo-foss4g-keynote-part-3-2778270

-- Paul Onyschuk
Re: [dev] Anti-GPL hipsters
On Sun, 23 Oct 2011 14:54:18 +0200 Andreas Krennmair wrote:
>
> Primarily, the GPL balances freedom towards the agenda of the FSF and
> their specific interpretation of the term "freedom".
>
> -ak
>

Copyleft or not? This is a never-ending discussion, mostly ideological. The GPL has many other problems besides that, and they are mostly ignored. Why not choose a simpler copyleft license like the EPL, CDDL or MPL instead?

There is a short article published by Erik Sherman about "Privacy Policies" [1]. Why is no one reading them? The simple conclusion: too long, too much specialized terminology. GPLv3 is longer than most of those policies and just as hard to read.

I'm not surprised that most of the people here tend to like BSD-like licenses. Those licenses are much shorter - to the point where you can memorize them like a poem. The interpretation is even simpler: copy-and-edit ("don't sue me" and "include my name in the source code" are minor restrictions).

The GPL, especially in version three, is a different kind of beast. You can find the "Practical Guide to GPL" [2] on the Software Freedom Law Center website - it's 15 pages long and it only covers the most common questions. Probably more detailed information can be found in the German book "Die GPL kommentiert und erklärt" [3], which is almost 200 pages long. I'm not sure what percentage of people have read and understood the terms of the GPL in the context of their software projects. I wouldn't be surprised if the number is pretty low.

It's pretty easy to shoot yourself in the foot while using the licenses published by the FSF - here is an example. There is a project (I can send you the name in a private mail) which uses the GFDL for documentation and the GPL for the C source code. What is the problem? The documentation is included in the source code as comments, and when make is invoked the text is extracted by a simple script. The GFDL and GPL aren't compatible, so you end up with a mess. On what terms can you distribute the documentation, and how does it affect the software?

The GPL world is full of weird words: derivative work, dual licensing, linking exception and so on. Some people tend to think that the LGPL is better than the GPL because the copyleft is weaker. For me the LGPL is a linking exception applied on top of the GPL like a hack. What is a linking exception? It is the exception that allows software to be linked with the GCC runtime library without infecting the compiled software with the GPL [4].

I don't think the GPL or any GNU license is meant to be read by programmers. AFAIK the FreeBSD project has friendly consultants (copyright lawyers) to help them with the GPL. It seems that the FSF isn't interested in resolving these issues. Instead of simplifying, the newer versions are more complicated than the originals (GPLv1 -> GPLv2 -> GPLv3). GPL version three even introduced some terms relating to patents. "Rumor has it that version four will be printed in hard cover ;)" It's not funny to find a small project where 60% of the source code is autohell scripts, 30% is the license, and the actual code that does something is 10%. I have a theory that Stallman wrote the GPL to accompany the GNU projects (most of which count hundreds of thousands of lines of code).

Back to the copyleft dilemma: it's your choice, but you can at least choose a better license than the GPL. The Mozilla Foundation is currently updating the MPL [5]. A Release Candidate for version two is available, and I can already say that it is well written. The scope of the revision includes simplifying the text, making it compatible with other licenses and resolving issues with non-code works (documentation, multimedia and so on).

Keep in mind that the MPL offers weaker copyleft than the GPL. The EPL, CDDL and MPL avoided the term "derivative work" for a very good reason. Many have heard this term, but no one actually knows what it is - almost every lawyer has a different opinion on the topic. Right now the FSF interpretation is used, "but Onion News reported that the Supreme Court is accepting unlimited donations from private corporations" (small joke). One court ruling could change this dramatically.

I would like to end this with a quote from a well known copyright lawyer (I won't give his name, because the quote is out of context): "The more you read GPL, the less intelligent you become."

btw. I haven't written in English for some time, so humble apologies for the bad spelling and so on.

[1] http://ur1.ca/5hjut
[2] http://www.softwarefreedom.org/resources/2008/compliance-guide.html
[3] http://books.google.pl/books?id=Sg1qFXtVaNUC
[4] http://www.gnu.org/licenses/gcc-exception.html
[5] http://mpl.mozilla.org/

-- Paul Onyschuk
Re: [dev] dmenu-4.4
On Wed, 20 Jul 2011 11:06:37 +0100 Nick wrote:
> as mentioned trusting CAs (HTTPS) is
> pretty problematic.

This is even more problematic because there is no clear way of knowing which CAs your browser trusts, e.g. removing CNNIC (China Internet Network Information Center) doesn't help at all. A CA can have a child CA, the child CA can have another child, and so on. Just check the map [1] of CAs trusted by Mozilla or Microsoft to get the idea. The SSL Observatory project [2] has found some interesting facts about the HTTPS authentication model.

[1] https://www.eff.org/files/colour_map_of_CAs.pdf
[2] http://www.eff.org/observatory

-- Paul Onyschuk
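One way to see those intermediate (child) CAs in practice is to ask a server for the chain it presents, using stock openssl; a sketch for a single host (www.example.com is a placeholder), which of course says nothing about the rest of the browser's trust store - and that is exactly the problem:

$ openssl s_client -connect www.example.com:443 -showcerts </dev/null

The "Certificate chain" section of the output lists the server certificate together with whatever intermediate CA certificates the server sends along.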
Re: [dev] Experimental editor
Connor Lane Smith lubutu.com> wrote:
>
> I've been working on a minimalist UTF-8 library for the editor, based
> on Plan 9's libutf, except designed for native Unix, with support for
> Unicode beyond the Basic Multilingual Plane, and without the
> vulnerabilities on 64-bit systems. I'm not sure if I should release it
> separately as well?
>
> Thanks,
> cls
>

I'm not sure if it's related to your work, but you could check this discussion [1], which is mdocml-oriented [2]. If it has nothing to do with your work, sorry for bothering you; otherwise you could mail the BSD guys.

[1] http://mdocml.bsd.lv/archives/tech/0364.html
[2] http://mdocml.bsd.lv/

-- Paul Onyschuk
Re: [dev] Experimental editor
Yoshi Rokuko rokuko.net> wrote:
>
> if an application needs more windows these windows should be managed
> by the window manager - usually starting multiple instances is
> enough, so imho using something like :sp in vim from inside X is
> stupid.
>
> fullscreen is for me not the point in [2] you can hit alt+m in dwm and
> get a fullscreen vim or something - the padding is the point here, it
> makes text much more readable.
>
> best regards, yoshi
>

This has been discussed before [1]. With Sam's regexps, its own window manager can be handy. Some quotes:

Russ Cox swtch.com> wrote:
>
> The die hard sam users would disagree vehemently with you.
> The nice thing about sam is that it's one window, not many,
> making it comfortable to edit a 30-file project without getting
> caught up in managing windows.
>

Erik Quanstrom quanstro.net> wrote:
>
> back when i was doing distributed search, i'd have sam running for
> months with 200 files in the menu.
>

This may sound like a fairy tale to many ;)

[1] http://thread.gmane.org/gmane.os.plan9.general/26178/

-- Paul Onyschuk
Re: [dev] Experimental editor
After seeing the words "very experimental", I'm willing to share some ideas that might otherwise be too controversial for the suckless folk ;)

First of all, check the Recdit [1] editor. It's a Mac OS X app, but a nice paper and a short video are available, and it has some unique features. Is a vertical side-by-side layout a stupid idea? Maybe not. We've gone from small screens to high resolution widescreen monitors, and 2- or 3-screen setups aren't fancy anymore. Using that much space effectively for text editing is a topic worth considering.

Version control systems have evolved over the past few years, yet more fine-grained history could still be useful sometimes. Recdit introduced deep diffs. That could also be used as an argument for avoiding syntax highlighting (too many colors at once).

I love the text editor Sam. There is one problem with it - it's a stack-based WM on top of a stack-based WM. How to resolve this issue? Just look at the so-called distraction-free editors like FocusWriter [2] - using the full screen is a feature. Maybe that group of editors is silly, but at least they don't pretend to be anything other than text editors. I think they were created as a response to complex WYSIWYG editors and IDEs, which made a religion out of the interface. Sam's power with a more classical interface (tabs - just check how FocusWriter uses them) and a Quake-like console (shown only when needed) for regexps could be quite a useful combo.

It looks like I presented an idea for a pretty much full-blown text editor, so now bring on some bashing ;)

[1] http://www.thimbleby.net/truetext/
[2] http://gottcode.org/focuswriter/

-- Paul Onyschuk
Re: [dev] Distribution
On Fri, 3 Jun 2011 12:41:24 +0100 Sir Cyrus wrote:
>
> What's the most suckless Linux distribution?
>

What about Alpine Linux [1]? As said before, the GNU parts suck so much that even the Linux kernel looks good. Alpine Linux uses Busybox and uClibc by default. No GNU coreutils and no glibc in the base system is a good start.

The Alpine Linux setup is very small - only about a dozen packages in the bare system. At first I found it weird that even man pages are missing after the default installation, but this also means groff is missing and can be replaced by mdocml or even by plan9 troff, depending on the user's choice. One thing that isn't to my taste is the OpenRC init system; I can swallow that compared to the bottlenecks of other distros.

Using plan9 software by default shouldn't be much of a problem either. Just uncomment some options in the Busybox build config for the package [2] and port 9base/plan9port (I didn't have time to resolve the problem with building 9base against uClibc).

[1] http://alpinelinux.org/
[2] http://wiki.alpinelinux.org/wiki/Creating_an_Alpine_package

-- Paul Onyschuk
Re: [dev] Using a different rendering engine for surf
What about the EFL (Enlightenment Foundation Libraries) port of WebKit? It isn't as feature rich as Chromium or WebKit-GTK, but it's getting there, with Samsung pumping in money. There are some desired features [1] on their roadmap which seem interesting:

- Remove the strict X11 dependency, allowing at least DirectFB and FB;
- Less dependency on GNOME technologies, maybe removing Cairo and LibSoup (the last one is already optional).

EFL WebKit is based on the GTK port, so most of the API is probably the same. The main problem is EFL moving very fast, but you can say the same about most WebKit ports to some degree. I didn't evaluate EFL WebKit myself, so I could be utterly wrong. EFL WebKit snapshots are built every week [2].

[1] http://trac.webkit.org/wiki/EFLWebKit
[2] http://labs.hardinfo.org/mindcrisis/2010/10/15/webkit-efl-automated-build-test/

-- Paul Onyschuk