Very late to this thread, but...
Gawk. I'm using Tup in conjunction with Org documents as a literate
programming system. One of the ideas in this system is that a document
should completely encapsulate a feature---including the relevant build
rules. So I just write Tup blocks in the document, and the gawk script
pulls them out. It also writes the "tangle" rules based on the code block
headers.
As a result, I have only one Tupfile, which does nothing but run that
script, using the *.org files as input.
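For concreteness, here is a made-up sketch of what a feature section looks
like under this scheme (all file names are hypothetical). The `: ... |> ... |>'
line is a Tup rule the script copies out, and the `:tangle' header on the
source block tells it which file the block produces:

```org
* Lexer
The table below is generated from this spec.

: foreach src/lexer.spec |> ./gen-lexer %f |> gen/lexer.c

#+BEGIN_SRC c :tangle ../src/lexer-extra.c
  /* supporting code for the feature */
#+END_SRC
```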
One wrinkle concerned the ordering of the build rules. Because the rules
are scanned in an arbitrary order, a rule depending on some generated
output can appear before the rule that produces it. To deal with that, I
use a quick-and-dirty trick: I write a digraph of the input and output
directories and send it to dot, then pull out the node ranks and use those
to order the rules. This has worked without a hitch (so far).
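To make the trick concrete, here is a small standalone illustration; the
graph and the `dot -Tplain' output are written out by hand (so this runs
without Graphviz installed). In plain output, each `node' line carries the
node name and its x coordinate, and with `rankdir=LR' the x coordinate
increases with rank, so sorting on it yields a usable build order:

```shell
# Hand-written sample of `dot -Tplain` output for the graph src/ -> build/
# (fields on a node line: node <name> <x> <y> <width> <height> ...).
plain='graph 1 3 1
node "src/" 0.5 0.5 1 1 "" solid ellipse black lightgrey
node "build/" 2.5 0.5 1 1 "" solid ellipse black lightgrey
edge "src/" "build/" 2 1 0.5 2 0.5 solid black
stop'

# The same pipeline the script uses: print "x name", sort numerically
# on x, then strip the surrounding quotes from the name.
printf '%s\n' "$plain" \
    | awk '/^node/ { print $3 " " $2 }' \
    | sort -n \
    | cut -d'"' -f2
# prints:
# src/
# build/
```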
The script is short, so I'll take the liberty of including it here.
#!/usr/bin/gawk -f

function dot(line) {
    dot_lines[dot_line_count++] = line
}

BEGIN {
    dot_line_count = 0
    dot("digraph {")
    dot("rankdir=LR splines=none quadtree=fast")
    dot("node[fixedsize=true width=1 height=1 label=\"\"]")
}

BEGINFILE {
    delete tangles
}

# Handle line continuations. This is just to support multiline build rules.
/\\$/ {
    continued = continued substr($0, 1, length($0) - 1)
    next
}

continued {
    $0 = continued $0
    continued = ""
}

# Source blocks
match($0, /^#\+BEGIN_SRC.*:tangle \.\.\/(\S+)/, m) {
    # A tangle file may be listed more than once.
    tangles[m[1]] = 1
}

# Build rules
match($0, /^:(.*)\|>.*\|> (\S+\/)?[^/]+/, g) {
    input_expr = g[1]
    output_dir = g[2]

    # Remove anything that's not a directory name, including `foreach' and
    # pipes.
    gsub(/[^/]+(\s|$)/, " ", input_expr)
    input_count = split(input_expr, ins)
    for (i = 1; i <= input_count; i++)
        dot("\"" ins[i] "\" -> \"" output_dir "\"")

    # We create a group called `<all>' for each directory written to (at
    # least for the principal output). This allows any rule to "wait for"
    # that directory by specifying the group as an input. This is strictly
    # for prioritizing the changes that we want to see first during a
    # build, and has nothing to do with ordering for dependencies.
    n = output_dir in rules ? length(rules[output_dir]) : 0
    rules[output_dir][n] = $0 " " output_dir "<all>"
}

ENDFILE {
    if (length(tangles) > 0) {
        # %B and %f are Tup placeholders, so they are escaped for printf.
        printf ": %s | tangle |> ^o tangle %%B^ ./tangle %%f |> ", FILENAME
        for (file in tangles)
            printf "%s ", file
        printf "\n"
    }
}

END {
    dot("}")
    # Dot's "plain" format is fastest to produce and easiest to parse.
    command = "dot -Tplain " \
              "| awk '/^node/ { print $3 \" \" $2 }' " \
              "| sort -n " \
              "| cut -d'\"' -f2"
    # See "Two-Way Communications with Another Process" in the GNU awk
    # manual:
    # https://www.gnu.org/software/gawk/manual/html_node/Two_002dway-I_002fO.html
    for (i = 0; i < dot_line_count; i++)
        print dot_lines[i] |& command
    close(command, "to")
    while ((command |& getline dir) > 0) {
        rule_count = length(rules[dir])
        for (i = 0; i < rule_count; i++)
            print rules[dir][i]
    }
}
This has to run every time I change anything, but it's fast. When I first
created this about three months ago, it took 18ms, although it's slower
under Tup (because of FUSE, I think?). Anyway, it's well worth it to
enable this style of programming, which I've come to prefer.
This project incidentally contains a self-hosted copy of all of its source
documents. The above script is the only "bootstrapper" that can't be
included in the Org documents. It produces thousands of files from dozens
of rules and efficiently updates everything as I work on the documents
(since I keep the monitor running). I will publish it on April 23 and post
about it here, since it may be an interesting use case for Tup.
Thanks,
Gavin
On Tuesday, November 10, 2015 at 10:00:35 AM UTC-6, [email protected] wrote:
>
> Hi all,
>
> If you use run-scripts in your Tupfiles, I'd appreciate your feedback. You
> can reply to the list or reply to me privately if you prefer.
>
> 1) What language(s) do you use in your run-scripts? Eg: python, shell, etc.
>
> 2) What feature(s) do you use run-scripts for that aren't available in the
> Tupfile parser?
>
> 3) Have you tried replacing your run-scripts with the Tupfile.lua files to
> utilize the Lua parser? If you have but decided to stick with run-scripts,
> what caused you to do so?
>
> For some background: I'm curious how useful they are nowadays. I've tried
> using python in run-scripts on a large scale project, and found that it
> didn't work very well. The overhead of a run-script, coupled with the
> overhead of starting up python for each directory meant that parsing took
> way longer than it should. As such, I was considering building in python
> support to improve this workflow. But if we then have regular Tupfiles, Lua
> Tupfiles, and python Tupfiles, I question the need of a generic script
> runner.
>
> Thanks for your feedback!
> -Mike
>