Two people try to get lilypond for 2.0.12, but hit a roadblock

2016-05-11 Thread Arne Babenhauserheide
Hi,

I just found out that there are currently three people trying to get
lilypond to work with guile 2.0.12, but they have hit a roadblock on the
guile side:

http://lists.gnu.org/archive/html/lilypond-devel/2016-04/msg00063.html

Harm wrote:

> > > > [build (dev/my-guilev2)]$ history 20
> > > >53  cd lilypond-git/
> > > >54  git fetch
> > > >55  git branch -a
> > > >56  git checkout origin/dev/guilev2
> > > > 
> > > >60  git branch dev/my-guilev2
> > > >61  git checkout dev/my-guilev2
> > > > 
> > > >67  ./autogen.sh --noconfigure
> > > >68  mkdir build/
> > > >69  cd build/
> > > >70  ../configure --enable-guile2
> > > >71  make -j3
> > > > 
> > > > I've got:
> > > > [...]
> > > > /home/harm/lilypond-git/stepmake/stepmake/c++-rules.make:4: recipe for
> > > > target 'out/source-file.o' failed
> > > > make[1]: *** [out/source-file.o] Error 1
> > > > make[1]: *** Waiting for unfinished jobs
> > > > make[1]: Leaving directory '/home/harm/lilypond-git/build/lily' 

> > > I've now checked out branch origin/stable-2.0, derived a local branch
> > > and compiled it.
> > > 
> > > ~/guile/meta (my-stable-2.0)$ ./guile
> > > GNU Guile 2.0.11.170-4d08e
> > > [...]
> > > 
> > > Should be the version we aim at.
> > > 
> > > Though, how to compile LilyPond with this guile-version?
> > > Which commands do you actually use for it? 

> > That question is easy to answer: I never built with anything but the
> > Ubuntu Guile versions.  So this would appear to be of the "look at what
> > options "./configure --help" offers for this" kind.  And if it's silent
> > about that, see what kind of environment variables might be interpreted.
> >
> > I mean, Gub has to do the same here: build its own library version and
> > use/link it.  So there must be a way.

> "./configure --help" offers some options, e.g.
> --with-python-include=DIR
> --with-python-lib=NAME
> but nothing directly for guile.
> 
> There are several environment variables like
> CFLAGS
> but I don't know how to use them or the syntax they expect.
> 
> Full output of "./configure --help" attached.
> 
> I really hope someone can demonstrate how to point configure to a
> self-compiled guile.

From 
http://lilypond.1069038.n5.nabble.com/guilev2-work-was-LilyPond-boolean-syntax-true-and-false-td185707.html

Can someone here help Andrew Bernard and Thomas Morley build lilypond
with the self-built guile?

LilyPond not working with Guile 2.0 is just about the worst-case
scenario for Guile adoption, so it would be great if we could help them
get this moving again.

Best wishes,
Arne
-- 
To be apolitical
is to be political
without noticing it




Re: wip-ports-refactor

2016-05-11 Thread Ludovic Courtès
Hello!

Andy Wingo  skribis:

> This is in a UTF-8 locale.  OK.  So we have 10M "a" characters.  I now
> want to test these things:
>
>   1. peek-char, 1e7 times.
>   2. read-char, 1e7 times.
>   3. lookahead-u8, 1e7 times.  (Call it peek-byte.)
>   4. get-u8, 1e7 times.  (Call it read-byte.)
>
>| peek-char | read-char | peek-byte | read-byte
>   -+---+---+---+--
>   2.0  | 0.811s| 0.711s| 0.619s| 0.623s
>   master   | 0.410s| 0.331s| 0.428s| 0.411s
>   port-refactor C  | 0.333s| 0.358s| 0.265s| 0.245s
>   port-refactor Scheme | 1.041s| 1.820s| 0.682s| 0.727s
>
> Again, measurements on my i7-5600U, best of three, --no-debug.
>
> Conclusions:
>
>   1. In Guile master and 2.0, reading is faster than peeking, because it
>  does a read then a putback.  In wip-port-refactor, the reverse is
>  true: peeking fills the buffer, and reading advances the buffer
>  pointers.
>
>   2. Scheme appears to be about 3-4 times slower than C in
>  port-refactor.  It's slower than 2.0, unfortunately.  I am certain
>  that we will get the difference back when we get native compilation
>  but I don't know when that would be.
>
>   3. There are some compiler improvements that could help Scheme
>  performance too.  For example the bit that updates the port
>  positions is not optimal.  We could expose it from C of course.
>
> Note that this Scheme implementation passes ports.test, so there
> shouldn't be any hidden surprises.

Thanks for the thorough benchmarks!

My current inclination, based on this, would be to use the
“port-refactor C” version for 2.2, and save the Scheme variant for 2.4
maybe.

This is obviously frustrating, but I think we cannot afford to make I/O
slower than on 2.0, where it’s already too slow for some applications
IMO.

WDYT?

Regardless, your work in this area is just awesome!

Thanks,
Ludo’.



Re: wip-ports-refactor

2016-05-11 Thread Christopher Allan Webber
Andy Wingo writes:

> Greets,
>
> On Sun 17 Apr 2016 10:49, Andy Wingo  writes:
>
>>   | baseline | foo| port-line | peek-char
>> --+--++---+--
>> guile 2.0 | 0.269s   | 0.845s | 1.067s| 1.280s
>> guile master  | 0.058s   | 0.224s | 0.225s| 0.433s
>> wip-port-refactor | 0.058s   | 0.220s | 0.226s| 0.375s
>
> So, I have completed the move to port buffers that are exposed to
> Scheme.  I also ported the machinery needed to read characters and bytes
> to Scheme, while keeping the C code around.  The results are a bit
> frustrating.  Here I'm going to use a file that contains only latin1
> characters:
>
>   (with-output-to-file "/tmp/testies.txt"
>     (lambda () (do-times #e1e6 (write-char #\a))))
>
> This is in a UTF-8 locale.  OK.  So we have 10M "a" characters.  I now
> want to test these things:
>
>   1. peek-char, 1e7 times.
>   2. read-char, 1e7 times.
>   3. lookahead-u8, 1e7 times.  (Call it peek-byte.)
>   4. get-u8, 1e7 times.  (Call it read-byte.)
>
>| peek-char | read-char | peek-byte | read-byte
>   -+---+---+---+--
>   2.0  | 0.811s| 0.711s| 0.619s| 0.623s
>   master   | 0.410s| 0.331s| 0.428s| 0.411s
>   port-refactor C  | 0.333s| 0.358s| 0.265s| 0.245s
>   port-refactor Scheme | 1.041s| 1.820s| 0.682s| 0.727s
>
> Again, measurements on my i7-5600U, best of three, --no-debug.
>
> Conclusions:
>
>   1. In Guile master and 2.0, reading is faster than peeking, because it
>  does a read then a putback.  In wip-port-refactor, the reverse is
>  true: peeking fills the buffer, and reading advances the buffer
>  pointers.
>
>   2. Scheme appears to be about 3-4 times slower than C in
>  port-refactor.  It's slower than 2.0, unfortunately.  I am certain
>  that we will get the difference back when we get native compilation
>  but I don't know when that would be.
>
>   3. There are some compiler improvements that could help Scheme
>  performance too.  For example the bit that updates the port
>  positions is not optimal.  We could expose it from C of course.
>
> Note that this Scheme implementation passes ports.test, so there
> shouldn't be any hidden surprises.
>
> I am not sure what to do, to be honest.  I think I would switch to
> Scheme if it let me throw away the C code, but I don't see the path
> forward on that right now due to bootstrap reasons.  I think if I could
> golf `read-char' down to 1.100s or so it would become more palatable.
>
> Andy

Happily at least, none of these benchmarks are *that much* slower than
Guile 2.0.  So most "present day" users won't be noticing a slowdown in
IO if this slipped into the next release.

You're probably right (is my vague and uninformed suspicion) that native
compilation would speed it up.

My thoughts are: if this refactor could bring us closer to more useful
code for everyday users, a small slowdown over 2.0 is not so bad.  E.g.,
if we could get SSL support, and buffered reads with prompts,
etc... those are good features.  So if you had my vote I'd say: forge
ahead on adding those, and if they come out well, then I think this
merge is worth it anyway, despite a small slowdown in IO over 2.0.
Hopefully we'll get it back in the future anyway!

 - Chris



Re: wip-ports-refactor

2016-05-11 Thread Chris Vine
On Tue, 10 May 2016 16:30:30 +0200
Andy Wingo  wrote:
> I think we have no plans for giving up pthreads.  The problem is that
> like you say, if there is no shared state, and your architecture has a
> reasonable memory model (Intel's memory model is really great to
> program), then you're fine.  But if you don't have a good mental model
> on what is shared state, or your architecture doesn't serialize loads
> and stores... well there things are likely to break.

Hi Andy,

That I wasn't expecting.  So you are saying that some parts of guile
rely on the ordering guarantees of the x86 memory model (or something
like it) with respect to atomic operations on some internal localised
shared state[1]?  Of course, if guile is unduly economical with its
synchronisation on atomics, that doesn't stop the compiler doing some
reordering for you, particularly now there is a C11 memory model.

Looking at the pthread related stuff in libguile, it seems to be
written by someone/people who know what they are doing.  Are you
referring specifically to the guile VM, and if so is guile-2.2 likely
to be more problematic than guile-2.0?

Chris

[1] I am not talking about things like the loading of guile modules
here, which involves global shared state and probably can't be done lock
free (and doesn't need to be) and may require other higher level
synchronisation such as mutexes.