Re: GPT-4 knows Guile! :)
Hi Stefan, Interesting to hear these experiences. I'm sure there will be a period now when people discover more and more use cases. It seems to be a revolutionary tool. Best regards, Mikael On Sat, Mar 18, 2023 at 11:49, Stefan Israelsson Tampe wrote: > I replaced Google with ChatGPT and it's really good to help with issues > you have in your work. As an example, our IT department stopped us from > installing Eclipse plugins from the marketplace and I was just lost, getting > no help from googling. So I just asked ChatGPT how to install Gradle > without using the marketplace and got a perfect answer of such > good quality that I pasted it into our dev guides. If you want to know how > to do something, you get a good start by asking ChatGPT, but it's not that > good at writing code yet. Asking it for code review of small code portions > is also useful. It's also fantastic at explaining how things work and a > very good tool for learning stuff. Learning how to use ChatGPT to improve > your code quality and productivity is how we should approach it. And the > more experienced you are, the more effective use you will have of it. > > On Sat, Mar 18, 2023 at 11:01 AM wrote: > >> On Sat, Mar 18, 2023 at 10:03:15AM +0100, Mikael Djurfeldt wrote: >> >> [...] >> >> > > I know very little apart from knowing what deep learning is and having >> > > skimmed the "Attention is all you need"-paper. >> >> That's already more than I do, I think. Thanks for the pointer :) >> >> > >I only meant that you are >> > > not training the model during and between sessions. >> >> No, I don't think we are yet at "real time model training". But who knows. >> >> > > It is certainly >> > > possible that OpenAI filters out things from the dialogs to use as >> part of >> > > training for the next version. They warn you that they may take data >> from >> > > the dialogs. If and how they do that I don't know. 
>> >> I'm pretty sure that this is the plan (besides, of course, trying to >> assess and understand public reaction to the whole idea, and perhaps to >> steer public perception, as far as possible). >> >> > Or, as GPT-4 would phrase it: I apologize for the confusion in my >> previous >> > answer. You may be right that I'm training the next version of the >> model. :) >> >> :-D >> >> Cheers >> -- >> t >> >
Re: GPT-4 knows Guile! :)
Also: do you have a mechanism to avoid controversy?

As an AI language model, I am designed to prioritize providing accurate and helpful information while aiming to be neutral and unbiased. My developers at OpenAI have implemented guidelines to help me avoid generating content that is offensive, harmful, or excessively controversial. However, I am not perfect, and there might be cases where my responses could unintentionally touch upon controversial topics or express unintended biases. To minimize these issues, I rely on the vast amount of data I have been trained on, which includes diverse perspectives and opinions. I aim to provide balanced and informative responses based on this training. If you believe my response is inappropriate or controversial, please let me know, and I will do my best to provide a more suitable answer. Additionally, your feedback helps me improve and refine my responses over time.
Re: GPT-4 knows Guile! :)
On Sat, Mar 18, 2023 at 9:58 AM Mikael Djurfeldt wrote: > On Sat, Mar 18, 2023 at 9:46 AM wrote: > >> On Sat, Mar 18, 2023 at 09:41:37AM +0100, Mikael Djurfeldt wrote: >> > On Sat, Mar 18, 2023 at 9:36 AM wrote: >> >> [...] >> >> > > Perhaps you didn't know, but you are training the model :-) >> > > >> > >> > Unfortunately not. I'm prompting it within its 32000 token (GPT-4) >> > attention span. Next conversation the model is back to exactly the same >> > state again. Then, of course, it is possible that OpenAI chooses to >> filter >> > out something from the dialogs it has had. >> >> You don't think that those conversations end up as raw data for the >> next model? I'd be surprised, but you know definitely more than me. >> > > I know very little apart from knowing what deep learning is and having > skimmed the "Attention is all you need"-paper. I only meant that you are > not training the model during and between sessions. It is certainly > possible that OpenAI filters out things from the dialogs to use as part of > training for the next version. They warn you that they may take data from > the dialogs. If and how they do that I don't know. > Or, as GPT-4 would phrase it: I apologize for the confusion in my previous answer. You may be right that I'm training the next version of the model. :)
Re: GPT-4 knows Guile! :)
On Sat, Mar 18, 2023 at 9:46 AM wrote: > On Sat, Mar 18, 2023 at 09:41:37AM +0100, Mikael Djurfeldt wrote: > > On Sat, Mar 18, 2023 at 9:36 AM wrote: > > [...] > > > > Perhaps you didn't know, but you are training the model :-) > > > > > > > Unfortunately not. I'm prompting it within its 32000 token (GPT-4) > > attention span. Next conversation the model is back to exactly the same > > state again. Then, of course, it is possible that OpenAI chooses to > filter > > out something from the dialogs it has had. > > You don't think that those conversations end up as raw data for the > next model? I'd be surprised, but you know definitely more than me. > I know very little apart from knowing what deep learning is and having skimmed the "Attention is all you need"-paper. I only meant that you are not training the model during and between sessions. It is certainly possible that OpenAI filters out things from the dialogs to use as part of training for the next version. They warn you that they may take data from the dialogs. If and how they do that I don't know. Cheers, Mikael
Re: GPT-4 knows Guile! :)
On Sat, Mar 18, 2023 at 9:36 AM wrote: > On Sat, Mar 18, 2023 at 09:22:43AM +0100, Mikael Djurfeldt wrote: > > BTW, in the bouncing ball example, I find it amazing that I could get an > > improvement of the code by complaining: > > > > But all those SDL_ calls look like C bindings. Please use guile-sdl2 > > bindings. > > > > (It was also quite entertaining that I had to ask it to write the code > > "according to the guile-sdl2 manual".) > > Perhaps you didn't know, but you are training the model :-) > Unfortunately not. I'm prompting it within its 32000 token (GPT-4) attention span. Next conversation the model is back to exactly the same state again. Then, of course, it is possible that OpenAI chooses to filter out something from the dialogs it has had. So, a trick you can do is to start out every session with a standard set of prompts (like "keep it short" or whatever) which will then act as a kind of configuration. > > This is called gamification, latest known in early 2000s. Luis von > Ahn [1] did quite a bit of pioneering work in that (he called it > "human computation", his PhD was "Games With a Purpose", Google > licensed a game from him to make people "out there" tag images for > them. > > So I'd say this is established "technology". > I think you're right that this will be implemented. > > Cheers > > [1] https://en.wikipedia.org/wiki/Luis_von_Ahn > -- > t >
Re: GPT-4 knows Guile! :)
On Sat, Mar 18, 2023 at 9:14 AM wrote: > This is what I was hinting at with that pastiche of Hanlon's Razor > and Clarke's Third Law. If you're doing statistical "AI", no harm > is provably intended. But the dice might be loaded ;-) > Yes, this is an important point. And it is sometimes possible to see a bias in the OpenAI engines towards the texts (and thus opinions) that commonly occur on the internet. > > When I then just marked a region in the browser and > > copied, the text from the title windows got copied too (and *I*, being a > > bit sloppy, didn't notice it :). > > Those small details are the interesting parts. :-) > True.
Re: GPT-4 knows Guile! :)
BTW, in the bouncing ball example, I find it amazing that I could get an improvement of the code by complaining: But all those SDL_ calls look like C bindings. Please use guile-sdl2 bindings. (It was also quite entertaining that I had to ask it to write the code "according to the guile-sdl2 manual".) Best regards, Mikael
Re: GPT-4 knows Guile! :)
On Sat, Mar 18, 2023 at 8:55 AM Mikael Djurfeldt wrote: > >1. Install Guile: > >> > >> > csharp >> ^^ >> > >> > sudo apt-get install guile-2.2 >> >> [...] >> >> Did it literally write "csharp"? With some fantasy, I guess it >> is trying to evoke a root shell, but with some malice, I'd say >> Microsoft is sleazily touting some product of theirs. >> > > No, in the browser the bash lines were presented in windows which had the > text "bash" in the title bar, but GPT-4 is a bit sloppy in that text can > "leak in" from other documents, so the first two windows had the title > "from other documents" is bad wording. Of course this is fundamentally a neural representation which is inherently a bit fuzzy. > csharp instead of bash. When I then just marked a region in the browser > and copied, the text from the title windows got copied too (and *I*, being > a bit sloppy, didn't notice it :). > > Cheers, > Mikael > > >> >> I'd apply Clarke's Razor:"A sufficiently advanced form of malice >> is not distinguishable from stupidity" >> >> ;-) >> >> Cheers >> -- >> t >> >
Re: GPT-4 knows Guile! :)
>1. Install Guile: > > > > csharp > ^^ > > > > sudo apt-get install guile-2.2 > > [...] > > Did it literally write "csharp"? With some fantasy, I guess it > is trying to evoke a root shell, but with some malice, I'd say > Microsoft is sleazily touting some product of theirs. > No, in the browser the bash lines were presented in windows which had the text "bash" in the title bar, but GPT-4 is a bit sloppy in that text can "leak in" from other documents, so the first two windows had the title csharp instead of bash. When I then just marked a region in the browser and copied, the text from the title windows got copied too (and *I*, being a bit sloppy, didn't notice it :). Cheers, Mikael > > I'd apply Clarke's Razor: "A sufficiently advanced form of malice > is not distinguishable from stupidity" > > ;-) > > Cheers > -- > t >
Re: GPT-4 knows Guile! :)
Can you write a Guile Scheme program which displays a bouncing ball?

As an AI language model, I cannot create a full graphical program directly within this text-based environment. However, I can provide you with an example of a simple bouncing ball simulation using the Guile Scheme programming language and the SDL2 (Simple DirectMedia Layer) library, which is commonly used for creating graphical applications.

To start, you'll need to have Guile and the Guile-SDL2 bindings installed. The following instructions are for a Debian-based system:

1. Install Guile:

csharp
sudo apt-get install guile-2.2

2. Install SDL2 development libraries:

csharp
sudo apt-get install libsdl2-dev libsdl2-image-dev libsdl2-ttf-dev

3. Clone and build the Guile-SDL2 library:

bash
git clone https://github.com/davexunit/guile-sdl2.git
cd guile-sdl2
autoreconf -vif
./configure
make
sudo make install

Now you can create the bouncing ball program in a file called bouncing-ball.scm:

scheme
(use-modules (sdl2) (sdl2 video) (sdl2 events) (sdl2 timer) (sdl2 rect))

(define screen-width 640)
(define screen-height 480)
(define ball-size 20)

(define ball-x (make-parameter (/ (- screen-width ball-size) 2)))
(define ball-y (make-parameter (/ (- screen-height ball-size) 2)))
(define ball-x-velocity (make-parameter 4))
(define ball-y-velocity (make-parameter 4))

(define (process-events)
  (let loop ((event (poll-event)))
    (when event
      (case (event-type event)
        ((quit) #f)
        (else (loop (poll-event)))))))

(define (update-ball)
  (parameterize ((x (ball-x)) (y (ball-y))
                 (vx (ball-x-velocity)) (vy (ball-y-velocity)))
    (ball-x (+ x vx))
    (ball-y (+ y vy))
    (when (or (<= (+ x vx) 0) (>= (+ x vx ball-size) screen-width))
      (ball-x-velocity (- vx)))
    (when (or (<= (+ y vy) 0) (>= (+ y vy ball-size) screen-height))
      (ball-y-velocity (- vy)))))

(define (draw-ball renderer)
  (let ((ball-rect (make-rect (ball-x) (ball-y) ball-size ball-size)))
    (set-render-draw-color renderer 255 0 0 255)
    (render-fill-rect renderer ball-rect)))

(define (main)
  (init! +init-video+)
  (let ((window (create-window "Bouncing Ball" +window-pos-undefined+ +window-pos-undefined+ screen-width screen-height +window-shown+))
        (renderer (create-renderer window -1 +renderer-accelerated+)))
    (set-render-draw-color renderer 0 0 0 255)
    (let loop ()
      (unless (process-events)
        (update-ball)
        (render-clear renderer)
        (draw-ball renderer)
        (render-present renderer)
        (delay 16)
        (loop)))))

(main)

This example sets up a window using SDL2 and creates a simple ball that bounces off the edges of the window. You can run the program using the following command:

bash
guile bouncing-ball.scm

Keep in mind that this is a simple example, and there are many ways to improve or expand upon it, such as using images

[image: mik...@djurfeldt.com]

I get the following error message:

;;; note: auto-compilation is enabled, set GUILE_AUTO_COMPILE=0
;;; or pass the --no-auto-compile argument to disable.
;;; compiling /home/mdj/guile/bouncing-ball.scm
;;; /home/mdj/guile/bouncing-ball.scm:19:13: warning: possibly unbound variable `event-type'
;;; /home/mdj/guile/bouncing-ball.scm:24:18: warning: possibly unbound variable `x'
;;; /home/mdj/guile/bouncing-ball.scm:25:18: warning: possibly unbound variable `y'
;;; /home/mdj/guile/bouncing-ball.scm:26:18: warning: possibly unbound variable `vx'
;;; /home/mdj/guile/bouncing-ball.scm:27:18: warning: possibly unbound variable `vy'
;;; /home/mdj/guile/bouncing-ball.scm:39:5: warning: possibly unbound variable `set-render-draw-color'
;;; /home/mdj/guile/bouncing-ball.scm:40:5: warning: possibly unbound variable `render-fill-rect'
;;; /home/mdj/guile/bouncing-ball.scm:43:3: warning: possibly unbound variable `init!'
;;; /home/mdj/guile/bouncing-ball.scm:43:9: warning: possibly unbound variable `+init-video+'
;;; /home/mdj/guile/bouncing-ball.scm:44:17: warning: possibly unbound variable `create-window'
;;; /home/mdj/guile/bouncing-ball.scm:44:47: warning: possibly unbound variable `+window-pos-undefined+'
;;; /home/mdj/guile/bouncing-ball.scm:44:120: warning: possibly unbound variable `+window-shown+'
;;; /home/mdj/guile/bouncing-ball.scm:45:19: warning: possibly unbound variable `create-renderer'
;;; /home/mdj/guile/bouncing-ball.scm:45:35: warning: possibly unbound variable `window'
;;; /home/mdj/guile/bouncing-ball.scm:45:45: warning: possibly unbound variable `+renderer-accelerated+'
;;; /home/mdj/guile/bouncing-ball.scm:50:9: warning: possibly unbound variable `render-clear'
;;; /home/mdj/guile/bouncing-ball.scm:52:9: warning: possibly unbound variable `render-present'
;;; compiled /home/mdj/.cache/guile/ccache/3.0-LE-8-4.6/home/mdj/guile/bouncing-ball.scm.go

Backtrace:
In ice-9/boot-9.scm:
1752:10 6 (with-exception-handler _ _
GPT-4 knows Guile! :)
Mikael: Let's write a scheme program which randomly makes names by combining two words from a list.

To write a Scheme program that generates random names by combining two words from a list, you can use the following code:

scheme
(import (scheme random)) ; Import random number generation functions

(define word-list
  '("Alpha" "Beta" "Gamma" "Delta" "Epsilon"
    "Zeta" "Eta" "Theta" "Iota" "Kappa"))

(define (random-word lst)
  (list-ref lst (random (length lst))))

(define (random-name)
  (string-append (random-word word-list) (random-word word-list)))

(random-name)

In this program, we:

1. Import the random number generation functions from the (scheme random) library.
2. Define a word-list containing words that will be used to generate random names.
3. Create a function random-word that takes a list as an argument and returns a random element from that list.
4. Create a function random-name that combines two random words from the word-list using string-append.
5. Call random-name to generate a random name.

You can run this code in a Scheme interpreter or REPL (Read-Eval-Print Loop) to generate random names.

[image: mik...@djurfeldt.com]

Mikael: Great! Can you adapt it for Guile scheme?

Sure! Here's the adapted code for Guile Scheme:

scheme
(use-modules (ice-9 random)) ; Import random number generation functions for Guile

(define word-list
  '("Alpha" "Beta" "Gamma" "Delta" "Epsilon"
    "Zeta" "Eta" "Theta" "Iota" "Kappa"))

(define (random-word lst)
  (list-ref lst (random (length lst))))

(define (random-name)
  (string-append (random-word word-list) (random-word word-list)))

(random-name)

In this Guile Scheme version, we:

1. Replace the (import (scheme random)) statement with (use-modules (ice-9 random)) to import random number generation functions in Guile Scheme.
2. The rest of the code remains the same as the original Scheme version.

You can run this code in a Guile Scheme interpreter or REPL to generate random names.

[image: mik...@djurfeldt.com]

Mikael: Now let's use the words doodle, puck, bongo, foo, bar zoinc instead.

Here's the Guile Scheme code using the new words you provided:

scheme
(use-modules (ice-9 random)) ; Import random number generation functions for Guile

(define word-list '("doodle" "puck" "bongo" "foo" "bar" "zoinc"))

(define (random-word lst)
  (list-ref lst (random (length lst))))

(define (random-name)
  (string-append (random-word word-list) (random-word word-list)))

(random-name)

The only change made to the original code is updating the word-list with the new words you've provided. The rest of the code remains the same. You can run this updated code in a Guile Scheme interpreter or REPL to generate random names using the new words.

Mikael: Can you execute (random-name)?

As an AI text-based model, I am unable to directly execute code in a REPL environment. However, I can help you understand how to run the provided code. To run the provided Guile Scheme code and execute (random-name), you can follow these steps:

1. Install Guile on your computer if you haven't already. You can find installation instructions here: https://www.gnu.org/software/guile/download/
2. Copy and paste the provided code into a text editor, and save it as a file named random-name.scm.
3. Open a terminal or command prompt, and navigate to the directory where you saved the random-name.scm file.
4. Run the following command in the terminal or command prompt:

guile -s random-name.scm

This command will execute the Guile Scheme code in the random-name.scm file and print a random name using the words from the word-list. Each time you run the command, you will get a new random name as the output.

[image: mik...@djurfeldt.com]

Mikael: Can you give one example of how the result would look like?

Certainly! When you execute the provided Guile Scheme code using the given word list, you may get different random names each time you run it. 
Here's one possible example of a random name generated using the words in the word-list: doodlebongo In this example, the program has combined the words "doodle" and "bongo" to create a new random name. Keep in mind that you will get different results each time you run the program, as the name generation is random.
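[A footnote on the transcript above: stock Guile has no (ice-9 random) module as GPT-4's answer claims; `random` is a core binding and needs no import. A sketch that actually runs in plain Guile, using `random-state-from-platform` for seeding, might look like this:]

```scheme
;; Guile's `random' is a core binding; no module import is needed.
;; Seed the default random state so each run gives different names.
(set! *random-state* (random-state-from-platform))

(define word-list '("doodle" "puck" "bongo" "foo" "bar" "zoinc"))

(define (random-word lst)
  (list-ref lst (random (length lst))))

(define (random-name)
  (string-append (random-word word-list) (random-word word-list)))

(display (random-name))
(newline)
```

[Without re-seeding *random-state*, Guile's default random state is deterministic, so every run of the script would print the same name.]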
Re: [PATCH] Fix minor typos
Thanks Colin and Aleix, This has been committed (856c3f96). Best regards, Mikael On Fri, Dec 2, 2022 at 6:44 PM Aleix Conchillo Flaqué wrote: > On Thu, Dec 1, 2022 at 10:10 PM Colin Woodbury wrote: > >> `make html` should do it. The output is in doc/ref/guile.html/index.html >> >> Yup, that does it. For my future doc improvements I'll make sure to check >> both sides. >> >> So it seems that both the extra period and the superfluous "see" are >> perhaps bugs in Info itself. At least when you open the Guile Guide in >> Emacs, are you able to visually confirm what I'm seeing? >> > Yes, confirmed. I just tried your fix and it looks good both in info and > html. > > LGTM if someone can commit this. > > Aleix >
Re: Feature request: Ability to document variables like defvar in elisp
(define a 5) (set-object-property! (module-variable (current-module) 'a) 'documentation "The variable a contains a number.") ? On Wed, Nov 2, 2022 at 9:29 AM Jean Abou Samra wrote: > Le 02/11/2022 à 02:08, Jacob Hrbek a écrit : > > The ability to document variables is critical for many projects such > > as libfive where the variables is used to declares functional computer > > aided design structure and other projects where variables influence > > the workflow. > > > > Thus proposing to change the 'define' behavior for variables to > implement: > > > > (define variable default-value docstring) > >^^ > > > > Where docstring is optional and in case it's provided to call for > example: > > > > (set-procedure-property! variable 'documentation docstring) > > > > > The problem is that in Scheme, you cannot attach metadata to immediate > values. According to the Scheme standards and the Guile documentation, > > > (define a 5) > (define b 5) > > (eq? a b) => may be #t or #f > (eq? a a) => may be #t or #f > > > So it's considerably more complicated than using an object property, > because that would not work reliably for variables defined to immediates > like numbers and characters. Instead you would need to attach the > metadata to the name you're defining the variable to, like Elisp does, > but unlike Guile does with procedures right now, and it's not as simple > in Scheme due to lexical scoping. > > Best, > Jean > > >
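[For completeness, the object-property suggestion at the top of this message can be round-tripped like this; a sketch for the Guile REPL, where `object-property` is the standard accessor paired with `set-object-property!`:]

```scheme
(define a 5)

;; Attach the docstring to the variable object in the module, not to
;; the value 5 itself -- immediates cannot reliably carry properties.
(set-object-property! (module-variable (current-module) 'a)
                      'documentation
                      "The variable a contains a number.")

;; Look it up again later:
(object-property (module-variable (current-module) 'a)
                 'documentation)
;; => "The variable a contains a number."
```

[Since the property lives on the variable box rather than the value, this sidesteps the eq?-on-immediates problem Jean describes.]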
Re: map-par slower than map
A piece of background on par-map: When I introduced par-map et al the only ambition was to have simple language constructs to invoke parallelism. The use case I had in mind was coarse-grained parallelism where each piece of work is somewhat substantial. Back then, a thread was launched for each piece of work; however, there was also a thread pool such that not all of the overhead of launching new threads always was required. Since then, par-map has been rewritten (by others) to be based on futures. (And now the thread pool is localized in the futures implementation---as "workers".) Looking in the code now, I think it is fair to say that it is still intended for coarse-grained parallelism. There is some heavy lifting going on with mutexes and condition variables as well as shuffling around with list pairs. So, applying par-map on a huge list with a small amount of work per item was never and still isn't the intended use case. It would of course be interesting if the code could be improved to support fine-grained parallelism. :-) Best regards, Mikael On Wed, Oct 12, 2022 at 11:30 PM Zelphir Kaltstahl < zelphirkaltst...@posteo.de> wrote: > Hi! > > On 10/12/22 22:27, Damien Mattei wrote: > > > https://github.com/damien-mattei/library-FunctProg/blob/master/guile/logiki%2B.scm#L1674 > > > > i commited the current version of code here with all files but it is > > huge :-/ > > > > On Wed, Oct 12, 2022 at 10:20 PM Damien Mattei > > wrote: > > > >> Mutex? 
i do not think code has situation where dead lock could happen, > it > >> is a code about minimalising logic expressions, it uses minterms , > minterms > >> set is a set of minterms :like this: > >> > >> example: > >> ((1 1 0) (1 1 1)) will be unified : (1 1 x) > >> because 0 and 1 are replaced by x > >> the minterms-set could have thousands of pair (mathematic not lisp) > >> minterms to unify > >> if there is more than one x as result there is no need to continue so i > >> escape with a continuation: > >> > >> minterms-set = > >> { > >> ((1 0 1 0) (1 1 1 0)) > >> ((1 0 1 0) (1 1 0 1)) > >> ((1 0 1 0) (1 0 1 1)) > >> ((1 0 1 0) (0 1 1 1)) > >> ((0 1 1 0) (1 1 1 0)) > >> ((0 1 1 0) (1 1 0 1)) > >> ((0 1 1 0) (1 0 1 1)) > >> ((0 1 1 0) (0 1 1 1)) > >> ((0 1 0 1) (1 1 1 0)) > >> ((0 1 0 1) (1 1 0 1)) > >> ((0 1 0 1) (1 0 1 1)) > >> ((0 1 0 1) (0 1 1 1)) > >> ((0 0 1 1) (1 1 1 0)) > >> ((0 0 1 1) (1 1 0 1)) > >> ((0 0 1 1) (1 0 1 1)) > >> ((0 0 1 1) (0 1 1 1)) > >> } > >> > >> replace { } by () to have the list, other example at another level : > >> > >> minterms-set = > >> { > >> ((0 x 1 1) (x 1 1 1)) > >> ((0 x 1 1) (1 x 1 1)) > >> ((0 x 1 1) (1 1 x 1)) > >> ((0 x 1 1) (1 1 1 x)) > >> ((x 0 1 1) (x 1 1 1)) > >> ((x 0 1 1) (1 x 1 1)) > >> ((x 0 1 1) (1 1 x 1)) > >> ((x 0 1 1) (1 1 1 x)) > >> ((0 1 x 1) (x 1 1 1)) > >> ((0 1 x 1) (1 x 1 1)) > >> ((0 1 x 1) (1 1 x 1)) > >> ((0 1 x 1) (1 1 1 x)) > >> ((x 1 0 1) (x 1 1 1)) > >> ((x 1 0 1) (1 x 1 1)) > >> ((x 1 0 1) (1 1 x 1)) > >> ((x 1 0 1) (1 1 1 x)) > >> ((0 1 1 x) (x 1 1 1)) > >> ((0 1 1 x) (1 x 1 1)) > >> ((0 1 1 x) (1 1 x 1)) > >> ((0 1 1 x) (1 1 1 x)) > >> ((x 1 1 0) (x 1 1 1)) > >> ((x 1 1 0) (1 x 1 1)) > >> ((x 1 1 0) (1 1 x 1)) > >> ((x 1 1 0) (1 1 1 x)) > >> ((1 0 1 x) (x 1 1 1)) > >> ((1 0 1 x) (1 x 1 1)) > >> ((1 0 1 x) (1 1 x 1)) > >> ((1 0 1 x) (1 1 1 x)) > >> ((1 x 1 0) (x 1 1 1)) > >> ((1 x 1 0) (1 x 1 1)) > >> ((1 x 1 0) (1 1 x 1)) > >> ((1 x 1 0) (1 1 1 x)) > >> } > >> > >> here we see some minterms are 
already unified > >> > >> it is not easy to read even by me because i wrote the code many years > ago > >> and is split in many files, but here it is: > >> > >> (par-map function-unify-minterms-list minterms-set) > >> > >> {function-unify-minterms-list <+ (λ (L) (apply > >> function-unify-two-minterms-and-tag L))} > >> > >> (define (unify-two-minterms mt1 mt2) > >>(function-map-with-escaping-by-kontinuation2 > >> (macro-function-compare-2-bits-with-continuation) mt1 mt2)) > >> > >> ;; (function-map-with-escaping-by-kontinuation2 > >> (macro-function-compare-2-bits-with-continuation) '(1 1 0 1 0 1 1 0) > '(1 > >> 1 0 1 1 1 1 1)) > >> > >> ;; list1 = (1 1 0 1 0 1 1 0) > >> ;; more-lists = ((1 1 0 1 1 1 1 1)) > >> ;; lists = ((1 1 0 1 0 1 1 0) (1 1 0 1 1 1 1 1)) > >> ;; clozure = # > >> > >> ;; #f > >> ;; > >> ;; (function-map-with-escaping-by-kontinuation2 > >> (macro-function-compare-2-bits-with-continuation)'(1 1 0 1 0 1 1 0) > '(1 > >> 1 0 1 1 1 1 0)) > >> > >> ;; list1 = (1 1 0 1 0 1 1 0) > >> ;; more-lists = ((1 1 0 1 1 1 1 0)) > >> ;; lists = ((1 1 0 1 0 1 1 0) (1 1 0 1 1 1 1 0)) > >> ;; clozure = # > >> > >> ;; '(1 1 0 1 x 1 1 0) > >> (define (function-map-with-escaping-by-kontinuation2 clozure list1 . > >> more-lists) > >>(call/cc (lambda (kontinuation) > >> (let ((lists (cons list1 more-lists)) > >>(funct-continu ;; this function have the kontinuation in his > environment > >> (lambda (arg1 . more-args) > >> (let ((args (cons arg1 more-args))) > >>
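[The escape-with-a-continuation idea described above -- abandon the unification of two minterms as soon as a second `x` would be produced -- can be sketched in a self-contained way like this; unify-two-minterms-sketch is a hypothetical simplification for illustration, not the code from the repository:]

```scheme
;; Unify two minterms position by position; escape early via call/cc
;; as soon as more than one position differs (a second `x' would mean
;; the pair cannot be merged into a single implicant).
(define (unify-two-minterms-sketch mt1 mt2)
  (call/cc
   (lambda (escape)
     (let loop ((b1 mt1) (b2 mt2) (acc '()) (diffs 0))
       (cond ((null? b1) (reverse acc))
             ((equal? (car b1) (car b2))
              (loop (cdr b1) (cdr b2) (cons (car b1) acc) diffs))
             ((= diffs 1) (escape #f))   ; second difference: give up now
             (else (loop (cdr b1) (cdr b2) (cons 'x acc) (+ diffs 1))))))))

;; (unify-two-minterms-sketch '(1 1 0) '(1 1 1)) => (1 1 x)
;; (unify-two-minterms-sketch '(1 0 0) '(0 1 0)) => #f
```

[Each call is cheap and independent, which is also why handing thousands of such pairs to par-map mostly measures threading overhead rather than useful parallel work.]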
Re: map-par slower than map
Also, I would believe that any crashes in this context are neither due to the futures implementation nor par-map et al. I would think that crashes are due to the Guile basic thread support itself. On Tue, Oct 25, 2022 at 11:07 AM Mikael Djurfeldt wrote: > A piece of background on par-map: > > When I introduced par-map et al the only ambition was to have simple > language constructs to invoke parallelism. The use case I had in mind was > course grained parallelism where each piece of work is somewhat > substantial. Back then, a thread was launched for each piece of work, > however there was also a thread pool such that not all of the overhead of > launching new threads always was required. > > Since then, par-map has been rewritten (by others) to be based on futures. > (And now the thread pool is localized in the futures implementation---as > "workers".) Looking in the code now, I think it is fair to say that it is > still intended for coarse grained parallelism. There is some heavy lifting > going on with mutexes and condition variables as well as shuffling around > with list pairs. > > So, applying par-map on a huge list with small amount of work per item was > never and still isn't the intended use case. > > It would of course be interesting if the code could be improved to support > fine grained parallelism. :-) > > Best regards, > Mikael > > On Wed, Oct 12, 2022 at 11:30 PM Zelphir Kaltstahl < > zelphirkaltst...@posteo.de> wrote: > >> Hi! >> >> On 10/12/22 22:27, Damien Mattei wrote: >> > >> https://github.com/damien-mattei/library-FunctProg/blob/master/guile/logiki%2B.scm#L1674 >> > >> > i commited the current version of code here with all files but it is >> > huge :-/ >> > >> > On Wed, Oct 12, 2022 at 10:20 PM Damien Mattei > > >> > wrote: >> > >> >> Mutex? 
i do not think code has situation where dead lock could happen, >> it >> >> is a code about minimalising logic expressions, it uses minterms , >> minterms >> >> set is a set of minterms :like this: >> >> >> >> example: >> >> ((1 1 0) (1 1 1)) will be unified : (1 1 x) >> >> because 0 and 1 are replaced by x >> >> the minterms-set could have thousands of pair (mathematic not lisp) >> >> minterms to unify >> >> if there is more than one x as result there is no need to continue so i >> >> escape with a continuation: >> >> >> >> minterms-set = >> >> { >> >> ((1 0 1 0) (1 1 1 0)) >> >> ((1 0 1 0) (1 1 0 1)) >> >> ((1 0 1 0) (1 0 1 1)) >> >> ((1 0 1 0) (0 1 1 1)) >> >> ((0 1 1 0) (1 1 1 0)) >> >> ((0 1 1 0) (1 1 0 1)) >> >> ((0 1 1 0) (1 0 1 1)) >> >> ((0 1 1 0) (0 1 1 1)) >> >> ((0 1 0 1) (1 1 1 0)) >> >> ((0 1 0 1) (1 1 0 1)) >> >> ((0 1 0 1) (1 0 1 1)) >> >> ((0 1 0 1) (0 1 1 1)) >> >> ((0 0 1 1) (1 1 1 0)) >> >> ((0 0 1 1) (1 1 0 1)) >> >> ((0 0 1 1) (1 0 1 1)) >> >> ((0 0 1 1) (0 1 1 1)) >> >> } >> >> >> >> replace { } by () to have the list, other example at another level : >> >> >> >> minterms-set = >> >> { >> >> ((0 x 1 1) (x 1 1 1)) >> >> ((0 x 1 1) (1 x 1 1)) >> >> ((0 x 1 1) (1 1 x 1)) >> >> ((0 x 1 1) (1 1 1 x)) >> >> ((x 0 1 1) (x 1 1 1)) >> >> ((x 0 1 1) (1 x 1 1)) >> >> ((x 0 1 1) (1 1 x 1)) >> >> ((x 0 1 1) (1 1 1 x)) >> >> ((0 1 x 1) (x 1 1 1)) >> >> ((0 1 x 1) (1 x 1 1)) >> >> ((0 1 x 1) (1 1 x 1)) >> >> ((0 1 x 1) (1 1 1 x)) >> >> ((x 1 0 1) (x 1 1 1)) >> >> ((x 1 0 1) (1 x 1 1)) >> >> ((x 1 0 1) (1 1 x 1)) >> >> ((x 1 0 1) (1 1 1 x)) >> >> ((0 1 1 x) (x 1 1 1)) >> >> ((0 1 1 x) (1 x 1 1)) >> >> ((0 1 1 x) (1 1 x 1)) >> >> ((0 1 1 x) (1 1 1 x)) >> >> ((x 1 1 0) (x 1 1 1)) >> >> ((x 1 1 0) (1 x 1 1)) >> >> ((x 1 1 0) (1 1 x 1)) >> >> ((x 1 1 0) (1 1 1 x)) >> >> ((1 0 1 x) (x 1 1 1)) >> >> ((1 0 1 x) (1 x 1 1)) >> >> ((1 0 1 x) (1 1 x 1)) >> >> ((1 0 1 x) (1 1 1 x)) >> >> ((1 x 1 0) (x 1 1 1)) >> >> ((1 x 1 0) (1 x 1 1)) >> >> ((1 x 1 0) (1 1 x 1)) >> >> 
((1 x 1 0) (1 1 1 x)) >> >> } >> >> >> >> here we see some minterms are already unified >> >> >> >> it is
Re: Hashtables are slow
I agree. Also, if there is no strong reason to deviate from RnRS, that would be a good choice. (But, I'm also no maintainer.) On Wed, Feb 23, 2022 at 8:42 AM Linus Björnstam < linus.bjorns...@veryfast.biz> wrote: > Hi! > > I would also propose a hash table based on a more sane interface. The > equality and hash procedures should be associated with the hash table at > creation rather than every time the hash table is used. Like in R6RS, > srfi-69, or srfi-12X (intermediate hash tables). > > Maybe the current HT could be relegated to some kind of compat or > deprecated library to be removed in 3.4... I am no maintainer, but I think > we can all agree that the current API, while fine in the context of guile > 1.6, is somewhat clunky by today's standards. It is also commonplace enough > that regular deprecation might become rough. > > Just the simple fact that hash-set! and hashq-set! can be used > interchangeably while you at the same time NEVER EVER should mix them is > somewhat unnerving. > > I would say a hash table that specifies everything at creation time (with > maybe an opportunity to use something like the hashx-* functions for > daredevils and for future srfi needs) is the way to go. > > Best regards > Linus Björnstam > > On Mon, 21 Feb 2022, at 14:18, Stefan Israelsson Tampe wrote: > > A data structure I fancy is hash tables. But I found out that hash tables > > in Guile are really slow. How? First of all we make a hash table > > > > (define h (make-hash-table)) > > > > Then add values > > (for-each (lambda (i) (hash-set! h i i)) (iota 2000)) > > > > Then the following operation costs, say, 5s > > (hash-fold (lambda (k v s) (+ k v s)) 0 h) > > > > It is possible with the foreign interface to speed this up to 2s using > > Guile's internal interface. But this is slow for such a simple > > application. Now let's change focus. 
Assume the in stead an assoc, > > > > (define l (map (lambda (i) (cons i i)) (iota 2000))) > > > > Then > > ime (let lp ((l ll) (s 0)) (if (pair? l) (lp (cdr l) (+ s (caar l))) s)) > > $5 = 1999000 > > ;; 0.114530s real time, 0.114391s run time. 0.00s spent in GC. > > > > That's 20X faster. What have happened?, Well hashmaps has terrible > > memory layout for scanning. So essentially keeping a list of the > > created values consed on a list not only get you an ordered hashmap, > > you also have 20X increase in speed, you sacrifice memory, say about > > 25-50% extra. The problem actually more that when you remove elements > > updating the ordered list is very expensive. In python-on-guile I have > > solved this by moving to a doubly linked list when people start's to > > delete single elements. For small hashmap things are different. > > > > I suggest that guile should have a proper faster standard hashmap > > implemention of such kind in scheme. > > > > Stefan > >
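For readers who want to reproduce the comparison, here is a sketch of both measurements in one session, using the presizing that comes up later in the thread (illustrative, assuming a current Guile):

```scheme
;; Sketch of the comparison from this thread (Guile).
;; Presizing avoids repeated resizing/rehashing from the default size of 31.
(define h (make-hash-table 500))
(for-each (lambda (i) (hash-set! h i i)) (iota 2000))
(hash-fold (lambda (k v s) (+ k v s)) 0 h)      ;; => 3998000

;; The alist variant that scans ~20x faster thanks to memory layout:
(define l (map (lambda (i) (cons i i)) (iota 2000)))
(let lp ((l l) (s 0))
  (if (pair? l) (lp (cdr l) (+ s (caar l))) s)) ;; => 1999000
```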
Re: Hashtables are slow
(Note that the resizing means *rehashing* of all elements.) On Mon, Feb 21, 2022 at 11:17 PM Mikael Djurfeldt wrote: > The hash table in Guile is rather standard (at least according to what was > standard in the old ages :). (I *have* some vague memory that I might have > implemented a simpler/faster table at some point, but that is not in the > code base now.) > > The structure is a vector of alists. It's of course important that the > alists don't get too long, so there's some resizing going on. If you call > (make-hash-table), the size of the table starts out at 31, so in your use > case, there will be several resizing steps. > > What happens with speed if you do (make-hash-table 500) instead? > > Best regards, > Mikael > > On Mon, Feb 21, 2022 at 2:55 PM Stefan Israelsson Tampe < > stefan.ita...@gmail.com> wrote: > >> A datastructure I fancy is hash tables. But I found out that hashtables >> in Guile are really slow. How? First of all we make a hash table: >> >> (define h (make-hash-table)) >> >> Then add values: >> (for-each (lambda (i) (hash-set! h i i)) (iota 2000)) >> >> Then the following operation costs, say, 5s: >> (hash-fold (lambda (k v s) (+ k v s)) 0 h) >> >> It is possible with the foreign interface to speed this up to 2s using >> Guile's internal interface. But this is slow for such a simple application. >> Now let's change focus. Assume instead an assoc list: >> >> (define l (map (lambda (i) (cons i i)) (iota 2000))) >> >> Then: >> ,time (let lp ((l l) (s 0)) (if (pair? l) (lp (cdr l) (+ s (caar l))) s)) >> $5 = 1999000 >> ;; 0.114530s real time, 0.114391s run time. 0.00s spent in GC. >> >> That's 20X faster. What has happened? Well, hashmaps have terrible memory >> layout for scanning. So essentially keeping the created values >> consed onto a list not only gets you an ordered hashmap, you also get a 20X >> increase in speed; you sacrifice memory, say about 25-50% extra. The >> problem is actually more that when you remove elements, updating the ordered >> list is very expensive. In python-on-guile I have solved this by moving to >> a doubly linked list when people start to delete single elements. For >> small hashmaps, things are different. >> >> I suggest that Guile should have a proper, faster standard hashmap >> implementation of this kind in Scheme. >> >> Stefan >> >> >> >>
Re: Hashtables are slow
The hash table in Guile is rather standard (at least according to what was standard in the old ages :). (I *have* some vague memory that I might have implemented a simpler/faster table at some point, but that is not in the code base now.) The structure is a vector of alists. It's of course important that the alists don't get too long, so there's some resizing going on. If you call (make-hash-table), the size of the table starts out at 31, so in your use case, there will be several resizing steps. What happens with speed if you do (make-hash-table 500) instead? Best regards, Mikael On Mon, Feb 21, 2022 at 2:55 PM Stefan Israelsson Tampe < stefan.ita...@gmail.com> wrote: > A datastructure I fancy is hash tables. But I found out that hashtables in > Guile are really slow. How? First of all we make a hash table: > > (define h (make-hash-table)) > > Then add values: > (for-each (lambda (i) (hash-set! h i i)) (iota 2000)) > > Then the following operation costs, say, 5s: > (hash-fold (lambda (k v s) (+ k v s)) 0 h) > > It is possible with the foreign interface to speed this up to 2s using > Guile's internal interface. But this is slow for such a simple application. > Now let's change focus. Assume instead an assoc list: > > (define l (map (lambda (i) (cons i i)) (iota 2000))) > > Then: > ,time (let lp ((l l) (s 0)) (if (pair? l) (lp (cdr l) (+ s (caar l))) s)) > $5 = 1999000 > ;; 0.114530s real time, 0.114391s run time. 0.00s spent in GC. > > That's 20X faster. What has happened? Well, hashmaps have terrible memory > layout for scanning. So essentially keeping the created values > consed onto a list not only gets you an ordered hashmap, you also get a 20X > increase in speed; you sacrifice memory, say about 25-50% extra. The > problem is actually more that when you remove elements, updating the ordered > list is very expensive. In python-on-guile I have solved this by moving to > a doubly linked list when people start to delete single elements. For > small hashmaps, things are different. > > I suggest that Guile should have a proper, faster standard hashmap > implementation of this kind in Scheme. > > Stefan > > > >
Re: Pausable continuations
Hi, I'm trying to understand this. The example of a generator which you give below counts upwards, but I don't see how the value of n is passed out of the generator. Could you give another example of a generator which does pass out the values, along with a usage case which prints out the values returned by the generator? Best regards, Mikael

On Thu, Feb 10, 2022, 17:52 Stefan Israelsson Tampe wrote:

> Consider a memory barrier idiom constructed from
> 0. (mk-stack)
> 1. (enter x)
> 2. (pause x)
> 3. (leave x)
>
> The idea is that we create a separate stack object. When entering it, we swap the current stack with the one in the argument, saving the current stack in x, and are in the 'child state. When pausing stack x, we return to just after where we entered, saving the current position in stack and ip, and are in the state 'pause. When we leave, we are in the state 'leave and move to the old stack, using the current ip. At the first encounter the function's stack frame is copied over, hence there will be a fork limited to that function only.
>
> This means that we essentially can define a generator as
> (define (g x)
>   (let lp ((n 0))
>     (if (< n 10)
>         (begin
>           (pause x)
>           (lp (+ n 1))))))
>
> And use it as
> (define (test)
>   (let ((x (mk-stack)))
>     (let lp ()
>       (case (enter x)
>         ((pause)
>          (pk 'pause)
>          (lp))
>         ((child)
>          (g x)
>          (leave x))))))
>
> A paused or left stack cannot be paused, an entered stack cannot be entered, and one cannot leave a paused stack, but one can enter a left stack.
>
> Anyhow, this idea is modeled like a fork command instead of being functional, and has the benefit over delimited continuations that one does not need to copy the whole stack, potentially speeding up generator-like constructs. But not only this: writing efficient Prolog code is possible as well. We could simplify a lot of the generation of Prolog code, speed it up, and also improve compile speed of Prolog code significantly.
>
> How would we approach the Prolog code? The simplest system is to return the alternate pause stack when succeeding; then things become very simple:
>
> x = stack to pause to in case of failure
> cc = the continuation
>
> ((x cc) goal1 goal2)
> :: (cc (goal1 (goal2 x)))
>
> ((x cc) goal1 goal2)
> :: (let ((xx (mk-stack)))
>      (case (enter xx)
>        ((child)
>         (cc (goal2 xx)))
>        ((pause)
>         (cc (goal2 x)))))
>
> Very elegant, and we can also use some heuristics to store already-made stacks when leaving a stack and reuse them at the next enter, which is a common theme in Prolog.
>
> Anyhow, we have an issue: consider the case where everything succeeds forever. Then we will blow the stack; there is no concept of tail calls here. So what you can do is the following for an and:
>
> (let ((xx (mk-stack)))
>   (case (enter xx)
>     ((child)
>      (goal1 x (lambda (xxx) (pause xx xxx))))
>     ((pause xxx)
>      (goal2 xxx cc))))
>
> This enables cuts, so that a cut and (and! in Kanren lingo) will use (goal2 x cc). And we have tail calls!
>
> I have a non-jitted version working in Guile as a proof of concept.
>
> The drawback with this is that if a function uses a lot of stack, it will be a memory hog.
>
> WDYT?
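For comparison, and to show one way each value of n can be passed out to a consumer (which the question above asks about), here is the same counting generator written with Guile's existing delimited continuations. The mk-stack/enter/pause/leave primitives proposed above do not exist in Guile; this sketch uses only stock bindings.

```scheme
;; Sketch: a counting generator via prompts.  make-prompt-tag,
;; abort-to-prompt and call-with-prompt are core bindings in Guile 2+.
(define tag (make-prompt-tag "generator"))

(define (g)
  (let lp ((n 0))
    (when (< n 10)
      (abort-to-prompt tag n)     ;; yield n out to the consumer
      (lp (+ n 1)))))

(define (test)
  (let run ((thunk g))
    (call-with-prompt tag
      thunk
      (lambda (k n)               ;; k resumes the suspended generator
        (pk 'yielded n)
        (run (lambda () (k)))))))

(test)  ;; pk prints each yielded n in turn
```

The handler receives both the resumption continuation k and the yielded value, which is exactly the "pass values out" behavior asked for; only the continuation up to the prompt is captured, not the whole stack.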
Re: (inf) min max
The proper way to handle this would, as you suggest, be to distinguish different kinds of infinities. Then, perhaps, countable infinity could be regarded as exact in Scheme's sense, while infinities of higher cardinality would not be (since Scheme's handling of that kind of number is an approximation and, thus, not exact). On Tue, Sep 28, 2021, 11:56 Stefan Israelsson Tampe wrote: > Then this does not work well: > > (fold min (inf) (list 1 200 3 4)) > > which is a pity; we should have an exact inf as well. > > On Tue, Sep 28, 2021 at 10:32 AM wrote: > >> On Tue, Sep 28, 2021 at 10:15:30AM +0200, Stefan Israelsson Tampe wrote: >> > Why is (min (inf) 1) = 1.0 inexact? >> >> Because inf's result is inexact. The same as (min 3 3.5) is inexact, >> too. >> >> It seems that the `inexactness' is contagious across arithmetic >> generics (I haven't found an explicit place in the Guile docs; >> the Racket docs are more explicit about that). >> >> Cheers >> - t >> >
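To make the exactness contagion concrete, a small sketch (`exact-min` is an invented helper, not an existing binding):

```scheme
(use-modules (srfi srfi-1))    ;; fold, reduce

(min (inf) 1)                  ;; => 1.0 -- the inexact (inf) infects the result
(fold min (inf) '(1 200 3 4))  ;; => 1.0 for the same reason

;; One workaround: avoid the inexact sentinel altogether by seeding the
;; fold with the list's first element; SRFI-1 reduce does exactly this.
(define (exact-min lst)
  (reduce min #f lst))

(exact-min '(1 200 3 4))       ;; => 1, still exact
```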
Re: Python-on-guile
Nice! I guess it would be nice if "continue" *could* be compiled efficiently. And, as you indicate, perhaps that would amount to efficiently compiling let/ec. Best regards, Mikael On Sat, Apr 24, 2021 at 5:19 PM Stefan Israelsson Tampe < stefan.ita...@gmail.com> wrote: > Guile is 3x faster than the fastest python-on-guile, which is 2x faster than > CPython's python3. > > Attached is a corresponding Guile program. > > On Sat, Apr 24, 2021 at 4:41 PM Stefan Israelsson Tampe < > stefan.ita...@gmail.com> wrote: > >> To note is that 'continue' is killing performance for python-on-guile >> programs, so changing the >> code to not use continue led to python-on-guile running at twice the speed >> of python3. The reason is that >> the while loop is compiled as >> (while (...) >>   (let/ec continue >>     ...)) >> >> and the let/ec is probably not optimally compiled. Python-on-guile will >> check the loop for continue usage, and if there is none it will skip the let/ec. >> >> I attached the code not using continue. >> >> On Sat, Apr 24, 2021 at 2:59 PM Stefan Israelsson Tampe < >> stefan.ita...@gmail.com> wrote: >> >>> Actually, changing in (language python compile) >>> >>> (define (letec f) >>>   (let/ec x (f x))) >>> >>> to >>> >>> (define-syntax-rule (letec f) >>>   (let/ec x (f x))) >>> >>> actually led to similar speeds as python3. >>> >>> >>> >>> On Sat, Apr 24, 2021 at 1:26 PM Stefan Israelsson Tampe < >>> stefan.ita...@gmail.com> wrote: >>> >>>> Pro tip: when running this on Guile, the Scheme code that it compiles to >>>> is located in log.txt. >>>> If you ,opt the resulting code in a Guile session you might be able to >>>> pinpoint issues that >>>> delay the code execution. >>>> >>>> On Sat, Apr 24, 2021 at 12:04 PM Mikael Djurfeldt >>>> wrote: >>>> >>>>> (I should perhaps add that my script doesn't benchmark the object >>>>> system but rather loops, conditionals and integer arithmetic.) >>>>> >>>>> On Fri, Apr 23, 2021, 17:00 Mikael Djurfeldt >>>>> wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> Yesterday, Andy committed new code to the compiler, some of which >>>>>> concerned skipping some arity checking. >>>>>> >>>>>> Also, Stefan meanwhile committed something called "reworked object >>>>>> system" to his python-on-guile. >>>>>> >>>>>> Sorry for coming with unspecific information (don't have time to >>>>>> track down the details) but I noticed that my benchmark script written in >>>>>> Python, and which computes the 20th Ramanujan number, now runs 60% >>>>>> faster >>>>>> than before these changes. >>>>>> >>>>>> This means that python-on-guile running on guile3 master executes >>>>>> python code only 2.6 times slower than the CPython python3 interpreter >>>>>> itself. :-) >>>>>> >>>>>> Have a nice weekend all, >>>>>> Mikael >>>>>> >>>>>>
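A sketch of the compilation pattern being discussed: each loop body is wrapped in let/ec so that Python's `continue` becomes a non-local exit back to the top of the loop. The `count-odds` example is invented for illustration.

```scheme
(use-modules (ice-9 control))  ;; let/ec: bind an escape continuation

;; Roughly what a Python "while ... continue" body turns into, per the
;; description above; invoking the escape continuation skips the rest
;; of this iteration, just like "continue".
(define (count-odds limit)
  (let ((n 0) (odds 0))
    (while (< n limit)
      (let/ec continue
        (set! n (+ n 1))
        (when (even? n) (continue #f))  ;; Python's "continue"
        (set! odds (+ odds 1))))
    odds))

(count-odds 10)  ;; => 5
```

Since an escape continuation that is never captured beyond the loop body is cheap in principle, this is exactly the kind of pattern one would hope the compiler can optimize away, which is the point of the thread.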
Re: Python-on-guile
(I should perhaps add that my script doesn't benchmark the object system but rather loops, conditionals and integer arithmetic.) On Fri, Apr 23, 2021, 17:00 Mikael Djurfeldt wrote: > Hi, > > Yesterday, Andy committed new code to the compiler, some of which > concerned skipping some arity checking. > > Also, Stefan meanwhile committed something called "reworked object system" > to his python-on-guile. > > Sorry for coming with unspecific information (don't have time to track > down the details) but I noticed that my benchmark script written in Python, > and which computes the 20th Ramanujan number, now runs 60% faster than > before these changes. > > This means that python-on-guile running on guile3 master executes python > code only 2.6 times slower than the CPython python3 interpreter itself. :-) > > Have a nice weekend all, > Mikael > >
Python-on-guile
Hi, Yesterday, Andy committed new code to the compiler, some of which concerned skipping some arity checking. Also, Stefan meanwhile committed something called "reworked object system" to his python-on-guile. Sorry for coming with unspecific information (don't have time to track down the details) but I noticed that my benchmark script written in Python, and which computes the 20th Ramanujan number, now runs 60% faster than before these changes. This means that python-on-guile running on guile3 master executes python code only 2.6 times slower than the CPython python3 interpreter itself. :-) Have a nice weekend all, Mikael
Re: Python on guile v1.2.3.7
Hi Stefan, Could it be that you have not committed the file: language/python/module/re/flag-parser.scm ? Best regards, Mikael On Sun, Apr 11, 2021 at 11:23 AM Stefan Israelsson Tampe < stefan.ita...@gmail.com> wrote: > Hi, > > I released a new tag of my python code that basically is a snapshot of a > work in progress. > > This release includes > * pythons new match statement > * dataclasses > * Faster python regexps through caching and improved datastructures > * Numerous bug fixes found while executing the python unit tests. >
Re: garbage collection slowdown
On Wed, Feb 5, 2020, 23:32 Han-Wen Nienhuys wrote: > > > On Wed, Feb 5, 2020 at 5:23 PM Ludovic Courtès wrote: > >> Weird. It would be interesting to see where the slowdown comes from. >> Overall, my recollection of the 1.8 to 2.0 transition (where we >> introduced libgc) is that GC was a bit faster, definitely not slower. >> >> That said, does LilyPond happen to use lots of bignums and/or lots of >> finalizers? Finalizers, including those on bignums, end up being very >> GC-intensive, as discussed in my recent message. Perhaps that’s what’s >> happening here, for instance if you create lots of SMOBs with a free >> function. >> > > No, I think it's because in some phases of the program, there is a lot of > heap growth, with little garbage generation. This causes frequent > (expensive) GCs that don't reclaim anything. > When programming dynamic vectors, it is common to adapt the size of newly allocated chunks by letting them grow in proportion to the vector size. Could the frequency of GC be adapted similarly, such that the balance between GC and allocation is shifted towards allocation in phases with a lot of heap growth? More concretely, this could be achieved either by letting the newly allocated chunks grow in proportion to allocated memory (as I think it was in the 1.8 GC---don't know about now) or by choosing not to do a GC every time, but instead directly allocating if the running average of GC gain is small compared to allocated memory. Of course these are issues at research level... > > -- > Han-Wen Nienhuys - hanw...@gmail.com - http://www.xs4all.nl/~hanwen >
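The dynamic-vector analogy can be sketched like this (illustrative names and constants; this is not libgc's actual policy):

```scheme
;; Dynamic vectors keep amortized O(1) append by growing capacity in
;; proportion to the current size.  The suggestion above is analogous:
;; let the GC trigger threshold grow in proportion to the live heap, so
;; that phases of pure heap growth do not pay for fruitless collections.
(define (next-capacity size)                ;; proportional growth
  (max 31 (* 2 size)))

;; Illustrative heuristic: skip a collection when recent collections
;; reclaimed little relative to the live heap ("running average of GC
;; gain" in the message above).
(define (should-gc? allocated-since-gc live-heap gain-average)
  (and (> allocated-since-gc (* 1/4 live-heap))
       (> gain-average (* 1/20 live-heap))))
```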
Re: GNU Guile 2.9.9 Released [beta]
It might be reasonable to keep the patch for now in order not to introduce novel behavior this short before the 3.0 release. But especially in light of Andy's work, I do regret introducing procedure-properties. It's more of a LISPy feature than a Schemey one. Did you see Andy's argument about procedure equality below? I would have preferred to postpone the release and drop procedure equality, procedure-properties etc. It can be handy and convenient, yes, but there is a reason why R6RS didn't require (eq? p p) -> #t... Best regards, Mikael On Tue, Jan 14, 2020 at 5:37 PM Stefan Israelsson Tampe < stefan.ita...@gmail.com> wrote: > > > -- Forwarded message - > From: Stefan Israelsson Tampe > Date: Tue, Jan 14, 2020 at 5:23 PM > Subject: Re: GNU Guile 2.9.9 Released [beta] > To: Mikael Djurfeldt > > > This is how it has always been in Guile; without this patch you cannot > use procedure-property, use a function as a key to hash maps, etc. If > this patch goes, you need to forbid usage > of procedures as keys to hashmaps, nuke procedure properties and friends, or > mark them as internal to avoid luring schemers into using a faulty method. > This patch improves the use of higher-order functions; > it does not put them at risk. For example, I often classify functions into different > categories and maintain this information as a property on the function via > a hashmap. This is a quite natural way of programming. Without it you need > to put the procedures in a datastructure and track that datastructure, which > will uglify a lot of code. That is manageable, but the opposite is > similarly fast code that is much nicer and more enjoyable, with absolutely no > risk to > higher-order functionality, contrary to what you state (because higher-order code > worked flawlessly before in Guile, and the patch is restoring that). > > On Tue, Jan 14, 2020 at 5:07 PM Mikael Djurfeldt > wrote: > >> Hmm... it seems like both Stefan and you have interpreted my post exactly >> the opposite way compared to how it was meant.
:) >> >> I completely agree that procedure equality is not strongly connected to >> first-class citizenship. >> >> What I wanted to say is that I probably prefer you to *reverse* the >> recent patch because I prefer to have good optimization also when >> procedures are referenced by value in more than one non-operator position. >> I prefer this over having (eq? p p) => #t for the reasons I stated. >> >> Best regards, >> Mikael >> >> On Tue, Jan 14, 2020, 15:33 Andy Wingo wrote: >> >>> On Tue 14 Jan 2020 13:18, Mikael Djurfeldt >>> writes: >>> >>> > I probably don't have a clue about what you are talking about (or at >>> > least hope so), but this---the "eq change"---sounds scary to me. >>> > >>> > One of the *strengths* of Scheme is that procedures are first class >>> > citizens. As wonderfully show-cased in e.g. SICP this can be used to >>> > obtain expressive and concise programs, where procedures can occur >>> > many times as values outside operator position. >>> > >>> > I would certainly *not* want to trade in an important optimization >>> > step in those cases to obtain intuitive procedure equality. The risk >>> > is then that you would tend to avoid passing around procedures as >>> > values. >>> >>> Is this true? >>> >>> (eq? '() '()) >>> >>> What about this? >>> >>> (eq? '(a) '(a)) >>> >>> And yet, are datums not first-class values? What does being first-class >>> have to do with it? >>> >>> Does it matter whether it's eq? or eqv? >>> >>> What about: >>> >>> (eq? (lambda () 10) (lambda () 10)) >>> >>> What's the difference? >>> >>> What's the difference in the lambda calculus between "\x.f x" and "f"? >>> >>> What if in a partial evaluator, you see a `(eq? x y)`, and you notice >>> that `x' is bound to a lambda expression? Can you say anything about >>> the value of the expression? >>> >>> Does comparing procedures for equality mean anything at all?
>>> https://cs-syd.eu/posts/2016-01-17-function-equality-in-haskell >>> >>> Anyway :) All that is a bit of trolling on my part. What I mean to say >>> is that instincts are tricky when it comes to object identity, equality, >>> equivalence, and especially all of those combined with procedures. The >>> R6RS (what can be more Schemely than a Scheme standard?) makes this >>> clear. >>> >>> All that said, with the recent patch, I believe that Guile 3.0's >>> behavior preserves your intuitions. Bug reports very welcome! >>> >>> Andy >>> >>
Re: GNU Guile 2.9.9 Released [beta]
Dear Andy, I probably don't have a clue about what you are talking about (or at least hope so), but this---the "eq change"---sounds scary to me. One of the *strengths* of Scheme is that procedures are first class citizens. As wonderfully show-cased in e.g. SICP this can be used to obtain expressive and concise programs, where procedures can occur many times as values outside operator position. I would certainly *not* want to trade in an important optimization step in those cases to obtain intuitive procedure equality. The risk is then that you would tend to avoid passing around procedures as values. Have I misunderstood something or do I have a point here? Best regards, Mikael On Tue, Jan 14, 2020, 12:18 Andy Wingo wrote: > On Mon 13 Jan 2020 22:32, Stefan Israelsson Tampe > writes: > > > In current guile (eq? f f) = #f for a procedure f. Try: > > Note that procedure equality is explicitly unspecified by R6RS. Guile's > declarative modules optimization took advantage of this to eta-expand > references to declaratively-bound top-level lambda expressions. This > unlocks the "well-known" closure optimizations: closure elision, > contification, and so on. > > However, the intention with the eta expansion was really to prevent the > > (module-add! mod 'foo foo) > > from making the procedure not-well-known. If that's the only reference > to `foo' outside the operator position, procedure identity for `foo' is > kept, because it's only accessed outside the module. But then I > realized thanks to your mail (and the three or four times that people > stumbled against this beforehand) that we can preserve the optimizations > and peoples' intuitions about procedure equality if we restrict > eta-expansion to those procedures that are only referenced by value in > at most a single position. > > It would be best to implement the eta-expansion after peval; doing it > where we do leaves some optimization opportunities on the table.
But I > have implemented this change in git and it should fix this issue. > > Comparative benchmark results: > > > https://wingolog.org/pub/guile-2.9.7-vs-guile-2.9.9-with-eq-change-microbenchmarks.png > > Regards, > > Andy > >
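To gather the cases discussed in this thread into one concrete session (illustrative; the noted results reflect the behavior described above for Guile 3.0 with the patch applied):

```scheme
;; Datum identity: eq? on literals is implementation-dependent.
(eq? '() '())      ;; => #t  (the empty list is a unique object)
(eq? '(a) '(a))    ;; unspecified: the compiler may or may not share literals

;; Procedure identity: R6RS leaves (eq? p p) unspecified for procedures,
;; which the eta-expansion optimization exploited.  The patch discussed
;; here restricts eta-expansion to procedures referenced by value in at
;; most one position, so ordinary uses keep their identity:
(define (f) 10)
(eq? f f)          ;; => #t with the patch, matching user intuition

;; Identity of distinct fresh closures remains unspecified:
(eq? (lambda () 10) (lambda () 10))
```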
Re: landed r7rs support
Wonderful! Then I guess the texts in sections 9.1.5 and 9.4.7 in the manual should be updated? I would have submitted a patch if I knew better how to reformulate. Best regards, Mikael On Sun, Nov 17, 2019 at 3:45 PM Andy Wingo wrote: > Hey all :) > > Just a little heads-up that I just landed R7RS support. Thanks to Göran > Weinholt for akku-scm (https://gitlab.com/akkuscm/akku-r7rs/) and > OKUMURA Yuki for yuni (https://github.com/okuoku/yuni), off of which > some of these files were based. (These projects are public domain / > CC0). > > The library syntax for R7RS is a subset of R6RS, so to use R7RS you just > (import (scheme base)) and off you go. As with R6RS also, there are > some small lexical incompatibilities regarding hex escapes; see "R7RS > Incompatibilities" in the manual. Also there is a --r7rs command-line > option. > > Cheers, > > Andy > >
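A minimal usage sketch, following the announcement above (just importing (scheme base), optionally under the --r7rs option; (scheme char) is assumed for char-upcase per R7RS-small's library layout):

```scheme
;; Run with: guile --r7rs script.scm
;; Per the announcement, R7RS library syntax is a subset of R6RS's, so
;; importing (scheme base) is all it takes.
(import (scheme base)
        (scheme char)    ;; char-upcase lives here in R7RS-small
        (scheme write))

(write (list->string (map char-upcase (string->list "guile"))))
(newline)                ;; writes the string "GUILE"
```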
Re: guile 3 update, halloween edition
Saying this without having looked at your code and also without currently promising to do any work: How does the record subtyping relate to GOOPS? I do realize that there are issues related to keeping bootstrapping lean, but shouldn't record types and classes share mechanisms? Best regards, Mikael On Wed, Oct 30, 2019 at 9:55 PM Andy Wingo wrote: > Hey folks! > > I wanted to send out an update on Guile 3. Do take a look at > https://git.savannah.gnu.org/cgit/guile.git/tree/NEWS to see where we've > come; basically the JIT is done, and we're ready to release soonish. > > However! Here begins a long chain of yak-shaving: > > I wanted good benchmarks. Generally up to now, we haven't really been > doing good incremental benchmarks. Ideally we could see some benchmark > results historically, on every commit, and against other Scheme > implementations. To a degree it's been possible with > https://ecraven.github.io/r7rs-benchmarks/, but those benchmarks have a > few problems: > > (1) They use unsafe optimizations on e.g. Chez and Gambit > (2) They are infrequently run > (3) They rely on R7RS, adding their own little compat layer for Guile, > which isn't optimal. > > Now, regarding (3), probably Guile should just have its own R7RS layer. > And it should be easier to enable R6RS too. So I added an --r6rs > option, and started importing some R7RS code from Göran Weinholt > (thanks!), with the idea of adding --r7rs. That way we can just > benchmark the different implementations, just passing --r7rs or whatever > to get the behavior we want. We can reduce the set of other scheme > implementations to just the high-performance ones: Gambit, Larceny, > Chez, and Racket. > > However! R7RS, like R6RS and like SRFI-35/SRFI-34, and also like > Racket, specifies an error-handling system in terms of "raise" and > "with-exception-handler". Guile uses "throw" and "catch". 
There is a > pretty good compatibility layer in Guile's R6RS exceptions/conditions > code, but it's not shared by SRFI-35/SRFI-34, and unless we built R7RS > in terms of R6RS -- something I prefer not to do; these things should be > layered on Guile core directly -- we'd have to duplicate the mechanism. > > Which, of course, is a bit trash. And when you come to think of it, > throw/catch/with-throw-handler is also a bit trash. It is too hard to > handle exceptions in Guile; the addition of `print-exception' a few > years back improved things, but still, making any sense out of the > "args" corresponding to a "key" is a mess. > > All this made me think -- Guile should probably switch to > raise/with-exception-handler and structured exceptions. (Or conditions, > or whatever we choose to call them. I find the "condition" name a bit > weird but maybe that's just a personal problem.) Racket does this too, > for what it's worth, though they have their own historical baggage. > > But, we need to maintain compatibility with throw/catch, because that's > not going anywhere any time soon (if ever). So I hacked a bit and > eventually came up with a decent implementation of throw/catch on top of > raise/with-exception-handler, and I think it's compatible in all the > weird ways that it needs to be. > > But! Now we have bootstrapping problems; how to get the implementation > in boot-9? Exceptions in SRFI-35, R6RS, R7RS, and Racket are these > hierarchical things: they form a DAG of subtypes. But core records in > Guile aren't subtypeable, so what to do? > > Well, my thinking was that we needed to sedimentarily deposit down into > Guile core those commonalities between the different record > implementations in Guile: SRFI-35 conditions, R6RS records, and SRFI-9 > records. So core now has the notion of field mutability on the record > layer (as opposed to the struct layer), a notion of subtyping, a notion > of extensibility, and so on. 
This is all now in the manual and will be > in NEWS. > > With that, we now have just one implementation of records!!! I am very > pleased about this. Now you can use core record introspection > facilities on any record in Guile. Cool. This also helped untangle > some knots in the R6RS inter-module graph. > > So, now the pending task is to somehow get a condition/exception > hierarchy into Guile core. I will try to mostly push things off to side > modules but it won't always be possible. There will be bijections > between a Guile's "throw" arguments and structured exceptions, mostly > inspired with what Julian did in the R6RS layer already. > > Thoughts welcome! Also: should these structured error objects be named > exceptions or conditions? SRFI-35, R6RS, and R7RS say "conditions", but > racket and my heart say "exceptions"; wdyt? > > Cheers, > > Andy > >
Re: Guile and Mes [WAS: conflicts in the gnu project now affect guile]
On Sat, Oct 19, 2019 at 9:52 AM Jan Nieuwenhuizen wrote: > Mark H Weaver writes: > Our next big target for Mes should be > to remove define-macro support from eval_apply and load Guile's > psyntax-pp.scm. > A word of caution here: Running psyntax-pp.scm without optimization is dog slow. Best regards, Mikael
Re: Stepping back up as a co-maintainer
I think we should trust what Mark says and not second-guess him. Helping or being friends with someone is not among the worst crimes on my list. On Thu, Oct 17, 2019, 11:28 zx spectrumgomas wrote: > I'm a simple Guile user and I hope I'm wrong, but I think Mark H Weaver is > telling a half-truth here > https://lists.gnu.org/archive/html/guile-devel/2019-10/msg00021.html > because he has a very close friendship with Richard Stallman: > "I've known RMS for a long time. For a few years, I spent most of my time > two doors down from him at the MIT AI lab. I was the FSF staff sysadmin for > a while, and did sysadmin work on RMS's laptop. I was his preferred typist > when he shattered his elbow and helped him replace his bandages, etc." > http://logs.guix.gnu.org/guile/2014-08-09.log > > I think the joint statement > https://guix.gnu.org/blog/2019/joint-statement-on-the-gnu-project/ is the > true key factor of his return. > > On Wed, Oct 16, 2019 at 7:42 AM Mark H Weaver wrote: > > > Hello all, > > > > In light of recent events, I've decided to step back up as a > > co-maintainer of GNU Guile. > > > > Thanks, > >Mark > > > > >
Re: conflicts in the gnu project now affect guile
Hi Andy, Ludovic and everyone else, As a previous co-maintainer of Guile, it saddens me that you/we have run into these kind of difficulties. It's especially sad since, as also David wrote, Guile has always been a project with a friendly atmosphere. What I wish for is that everyone involved in this conflict make their best effort to find a good way forward, and in particular, a way that can preserve the valuable social assets of Guile. My own viewpoint of recent events around RMS is that he is a special kind of person with his own kind of strengths and weaknesses. Yes, it's quite clear that project leadership and management is not his strength, but at the same time he is the root of the free software movement with a fantastic legacy. I'm also fascinated by how often what he has said, and which at the time might have caused many rolling eyes, eventually have turned out to be correct. So, also here, I'm sad that there is not enough room in present day society to accommodate a person like RMS. I would have wished for him to end celebrated. Even though I'm myself strongly for the causes of women and obviously completely against any form of child abuse, I think that it is possible to disagree with some of RMS statements without judging him too harshly. I take a risk in saying that it is certainly possible to see his perspective and arguments in his defence of Minsky, even if one disagrees and thinks that he has left some aspects of the situation out. Then there is the question of leadership. I don't think that it was good timing of the GNU maintainers to go forward with their initiative at this precise point in time. But I welcome it in other respects and now that it is out in the open, I think you maintainers should go forward with it and try to achieve a more reasonable governing structure in GNU. I wish you good luck with that. With the timing you chose it will be harder, but I hope for you to succeed. 
Regarding Guile, I have very high confidence in Andy and Ludovic and think the Guile project should regard itself as very lucky to have such maintainers. I wasn't aware of the differences between Andy and Mark and also have a very high appreciation of Mark's careful work. But I think it is clear to most people that it is not good leadership by RMS to appoint Mark as co-maintainer without consulting with Andy and Ludovic. Under normal circumstances, a leader can do such things, but given the current situation, with the well-motivated request for a new governing structure, my point of view is that it is Andy and Ludovic who currently row this boat. Andy and Ludovic, you have my full support. Please stick with Guile, but please also keep your calm and talk a lot to key people to try to resolve this situation in a good way (which of course does not mean letting your hands be tied). Best regards, Mikael
Re: Maintainership changes: many thanks to Mark!
Mark, Ludovic and Andy, Warm regards and many, many thanks for your great work! Mikael On Wed 11 Sep 2019 09:57, Andy Wingo wrote: > Hi all, > > After many years working on Guile and more than 5 years in a > maintainer role, Mark Weaver has decided to step down. Taking over > from him and remaining as Guile co-maintainers are Ludovic Courtès and > Andy Wingo. > > On behalf of myself and Ludovic and no doubt all Guile users and > developers: a heartfelt thanks, Mark, for all of your years of > service, and see you around the Guile community! > > Happy hacking, > > Andy and Ludovic > >
(ice-9 peg string-peg)
Hi, The functions in (ice-9 peg string-peg) are not re-exported by (ice-9 peg), which can be confusing to the user, since the manual only mentions (ice-9 peg). Can I, for simplicity, make (ice-9 peg) re-export the (ice-9 peg string-peg) functions?
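For concreteness, a minimal sketch of what such a re-export could look like, using the `#:re-export` option of Guile's `define-module`. The export names used here (`define-peg-string-patterns`, `peg-string-compile`) are assumptions for illustration, not a verified list of what (ice-9 peg string-peg) actually exports:

```scheme
;; Hedged sketch: forwarding string-peg bindings from (ice-9 peg).
;; The names listed under #:re-export are illustrative assumptions.
(define-module (ice-9 peg)
  #:use-module (ice-9 peg string-peg)
  ;; #:re-export makes bindings imported from another module part of
  ;; this module's public interface without redefining them.
  #:re-export (define-peg-string-patterns
               peg-string-compile))
```

Users would then only need `(use-modules (ice-9 peg))`, matching what the manual documents.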
FOSDEM 2019
It was a great experience and joy for me to meet some of you at FOSDEM 2019. Thank you all! Now a piece of advice. Everyone who works with Guile knows that it's crap and looks with envy at projects like Chez and Racket, right? Jim Blandy thinks that GNU should use Python as its scripting language. Chris Webber (probably rightly) thinks "look how much you can accomplish with Racket so quickly". I've been there also. I have to confess that I have now and again regarded Guile as crap since perhaps 1995, and there have been multiple occasions where I have thought that basing the community effort on some other scheme would make much more sense, and I have also always looked with envy at Chez and mzscheme/Racket. Yet, it is *amazing* to me how much progress Guile has made since I left. I, for example, *love* the new language and compiler infrastructure. But note now that Racket looks with envy at Chez and intends to base Racket on Chez, while Andy Wingo thinks that he can beat Chez performance. My advice is this: Idiots go around thinking that their own code is the best thing around. Sensible people have a natural, and actually productive, tendency to be critical about their own work. That is all good, unless it hurts the sense of meaning and joy in your work. Remember now first that we are all irrational creatures. We maybe *think* sometimes that we are rational, because what we think looks rational in our view. The problem is that the view is usually very limited, with, for example, a limited set of presumptions. For example: Guile is a piece of software, right? Wrong! It is a plant, growing over time. Now, if we look over the fence into the other garden, the plants there look much greener. But what will determine the ultimate fate is not only the shape of it in the present moment, but also the genes it carries, the quality of the soil, the amount of sunlight and the skills of its gardeners. 
We could have quit before we got GOOPS, or before we got the native threading, or before the compiler tower, without which there would be no chance to beat Chez. If you look at one combination of some Guile features:

* easy embedding in applications
* support for multiple languages
* the compiler tower
* OO system with polymorphic dispatch and metaobject protocol
* nice, friendly and open community

I think it is pretty strong and impressive, and I wouldn't like to live without it. It's especially important to look at Guile as a good breeding ground for new amazing work. That said, we should steal and collaborate all we can! All the best, Mikael
Re: Results of tests of guile-2.9
It would be nice to have guile-1.8 in that list since some users stayed at that version due to 2.0 being slower. Maybe, in time, we can get everyone back to the most recent release. :-)) On Tue 13 Nov 2018 00:48, Mikael Djurfeldt wrote: > Thanks, Arne! > > On Mon 12 Nov 2018 01:04, Arne Babenhauserheide wrote: > >> >> Mikael Djurfeldt writes: >> >> > That sounds great! Can you say something about how much quicker 2.9.1 is >> > compared to 2.2? >> >> You can find that by looking at the benchmarks by ecraven: >> >> https://ecraven.github.io/r7rs-benchmarks/ >> >> Comparing 2.9 and 2.2 there, you see a 30% reduction in execution time >> (averaged by geometric mean). >> >> In other words: Guile 2.9 is about 50% faster than Guile 2.2. >> >> And looking at details, Guile 2.9 is faster than Guile 2.2 in almost >> every test, and it did not experience significant slowdown in any test. >> >> >> I also did a summary of all the results which shows that Guile is >> closing in on MIT-scheme and chicken (but you’ll notice that the >> ordering is very different from the one on the site, which shows nicely >> that your result depends on what you look at, and how you look — and >> you can see that they have very different performance characteristics): >> >> >> The first number is the geometric mean of the slowdown against the >> fastest implementation in each test. The number in parens is the >> number of successful tests. 
>>
>> 1.9881572085609192 (38 / 38) stalin-unknown
>> 2.1500822915753734 (57 / 57) chez-9.5.1-m64
>> 2.738525957787122 (55 / 55) gambitc-v4.9.0
>> 2.7694443820075634 (55 / 55) gerbil-v0.14-DEV
>> 4.205151966183653 (50 / 50) bigloo-4.3a
>> 5.442681840154815 (57 / 57) larceny-1.3
>> 5.707385688762197 (57 / 57) racket-7.0/r7rs
>> 8.679978781946975 (50 / 50) chicken-4.13.0
>> 9.248983537329178 (51 / 51) mit-9.2.1
>> 10.587408686012083 (55 / 55) guile-2.9.1.3-1f678
>> 10.615583087968362 (41 / 41) bones-unknown
>> 11.524752498102057 (56 / 56) cyclone-0.9.3
>> 14.448014458884698 (57 / 57) petite-9.5.1-m64
>> 15.089971411932236 (56 / 56) guile-2.2.4
>> 18.035143748368437 (45 / 45) ypsilon-unknown
>> 19.005148516339332 (44 / 44) femtolisp-unknown
>> 19.139543005333042 (56 / 56) gauche-0.9.6
>> 27.645742735331833 (57 / 57) sagittarius-0.9.2
>> 31.157381722908422 (36 / 36) rscheme-unknown
>> 34.153836451059746 (39 / 39) scheme48-unknown
>> 36.48670680531284 (41 / 41) picrin-unknown
>> 38.99165232121692 (48 / 48) kawa-3.0
>> 47.53962620985255 (28 / 28) rhizome-unknown
>> 55.1945662817 (11 / 11) s9fes-unknown
>> 64.80503623166697 (35 / 35) SISC-1.16.6
>> 86.0140998934114 (48 / 48) chibi-unknown
>> 109.67746150832924 (35 / 35) chickencsi-4.13.0
>> 180.3672988266313 (17 / 17) foment-0.4
>>
>> created with
>>
>> `for i in bigloo-4.3a bones-unknown chez-9.5.1-m64 chibi-unknown chicken-4.13.0 chickencsi-4.13.0 cyclone-0.9.3 femtolisp-unknown foment-0.4 gambitc-v4.9.0 gauche-0.9.6 gerbil-v0.14-DEV guile-2.2.4 guile-2.9.1.3-1f678 ironscheme kawa-3.0 larceny-1.3 mit-9.2.1 petite-9.5.1-m64 picrin-unknown racket-7.0/r7rs rhizome-unknown rscheme-unknown s9fes-unknown sagittarius-0.9.2 scheme48-unknown SISC-1.16.6 stalin-unknown tinyscheme ypsilon-unknown; do echo $(./evaluate-r7rs-benchmark.w /tmp/all.csv $i | tail -n 1) $i; done | sort -g`
>>
>> using
>> https://bitbucket.org/ArneBab/wisp/src/tip/examples/evaluate-r7rs-benchmark.w
>>
>> Best wishes,
>> Arne
>> --
>> Unpolitisch sein
>> heißt politisch sein
>> ohne es zu merken
>>
>
Re: Results of tests of guile-2.9
Thanks, Arne! On Mon 12 Nov 2018 01:04, Arne Babenhauserheide wrote: > > Mikael Djurfeldt writes: > > > That sounds great! Can you say something about how much quicker 2.9.1 is > > compared to 2.2? > > You can find that by looking at the benchmarks by ecraven: > > https://ecraven.github.io/r7rs-benchmarks/ > > Comparing 2.9 and 2.2 there, you see a 30% reduction in execution time > (averaged by geometric mean). > > In other words: Guile 2.9 is about 50% faster than Guile 2.2. > > And looking at details, Guile 2.9 is faster than Guile 2.2 in almost > every test, and it did not experience significant slowdown in any test. > > > I also did a summary of all the results which shows that Guile is > closing in on MIT-scheme and chicken (but you’ll notice that the > ordering is very different from the one on the site, which shows nicely > that your result depends on what you look at, and how you look — and > you can see that they have very different performance characteristics): > > > The first number is the geometric mean of the slowdown against the > fastest implementation in each test. The number in parens is the > number of successful tests. 
>
> 1.9881572085609192 (38 / 38) stalin-unknown
> 2.1500822915753734 (57 / 57) chez-9.5.1-m64
> 2.738525957787122 (55 / 55) gambitc-v4.9.0
> 2.7694443820075634 (55 / 55) gerbil-v0.14-DEV
> 4.205151966183653 (50 / 50) bigloo-4.3a
> 5.442681840154815 (57 / 57) larceny-1.3
> 5.707385688762197 (57 / 57) racket-7.0/r7rs
> 8.679978781946975 (50 / 50) chicken-4.13.0
> 9.248983537329178 (51 / 51) mit-9.2.1
> 10.587408686012083 (55 / 55) guile-2.9.1.3-1f678
> 10.615583087968362 (41 / 41) bones-unknown
> 11.524752498102057 (56 / 56) cyclone-0.9.3
> 14.448014458884698 (57 / 57) petite-9.5.1-m64
> 15.089971411932236 (56 / 56) guile-2.2.4
> 18.035143748368437 (45 / 45) ypsilon-unknown
> 19.005148516339332 (44 / 44) femtolisp-unknown
> 19.139543005333042 (56 / 56) gauche-0.9.6
> 27.645742735331833 (57 / 57) sagittarius-0.9.2
> 31.157381722908422 (36 / 36) rscheme-unknown
> 34.153836451059746 (39 / 39) scheme48-unknown
> 36.48670680531284 (41 / 41) picrin-unknown
> 38.99165232121692 (48 / 48) kawa-3.0
> 47.53962620985255 (28 / 28) rhizome-unknown
> 55.1945662817 (11 / 11) s9fes-unknown
> 64.80503623166697 (35 / 35) SISC-1.16.6
> 86.0140998934114 (48 / 48) chibi-unknown
> 109.67746150832924 (35 / 35) chickencsi-4.13.0
> 180.3672988266313 (17 / 17) foment-0.4
>
> created with
>
> `for i in bigloo-4.3a bones-unknown chez-9.5.1-m64 chibi-unknown chicken-4.13.0 chickencsi-4.13.0 cyclone-0.9.3 femtolisp-unknown foment-0.4 gambitc-v4.9.0 gauche-0.9.6 gerbil-v0.14-DEV guile-2.2.4 guile-2.9.1.3-1f678 ironscheme kawa-3.0 larceny-1.3 mit-9.2.1 petite-9.5.1-m64 picrin-unknown racket-7.0/r7rs rhizome-unknown rscheme-unknown s9fes-unknown sagittarius-0.9.2 scheme48-unknown SISC-1.16.6 stalin-unknown tinyscheme ypsilon-unknown; do echo $(./evaluate-r7rs-benchmark.w /tmp/all.csv $i | tail -n 1) $i; done | sort -g`
>
> using
> https://bitbucket.org/ArneBab/wisp/src/tip/examples/evaluate-r7rs-benchmark.w
>
> Best wishes,
> Arne
> --
> Unpolitisch sein
> heißt politisch sein
> ohne es zu merken
>
Re: Results of tests of guile-2.9
That sounds great! Can you say something about how much quicker 2.9.1 is compared to 2.2? On Sun 11 Nov 2018 21:53, Stefan Israelsson Tampe <stefan.ita...@gmail.com> wrote: > Hi, > > I've taken 2.9 on a ride with my active code bases, guile-log, > guile-syntax-parse and python-on-guile. > > Generally it's a pleasure, as always, to work with guile. I can compile all > code, and especially the > clpfd code in the prolog part works out nicely (a huge file that now takes > 6 minutes to compile > to a 10MB go file). > > What was surprising was that the guile-syntax-parse code is much, much > quicker than before. Especially the compilation of the racket match.scm > file has improved quite a lot. Prolog code > compiles more quickly as well, which is very much appreciated. > > A few hiccups: > > 1. compiling with O0 fails for me because guile does not translate all > letrec forms down to lower-level forms. > > 2. If you put a lambda as a syntax value and try to execute that code you > get an unhelpful message down in the compiler tower (types.scm). > > Otherwise, super work and thanks! >
Re: A different stack discipline
On Sat 3 Nov 2018 19:16, Mikael Djurfeldt wrote: > On Sat, Nov 3, 2018 at 4:30 PM Hugo Hörnquist wrote: > >> The section, as far as I can see, just describes a machine >> which pushes the continuation instead of the PC counter to the >> stack. >> >> Also, while in theory quite nice it has the problem that >> Guile is really slow in restoring continuations, due to the >> fact that we have complete C interoperability. >> > > There's some misunderstanding here. The SICP register machine model is not > very different from common register machine models. There's just a > difference in how to handle subroutine calls. A short example: > > Let's first write out all operations involved in a call in a conventional > register machine:
>
> [...]
> ; The following three micro operations constitute "call foo ()"
> (sp) <- pc + offset(L1) ; NOTE the external memory access
> sp <- sp - 1
> pc <- pc + offset(foo)
> L1: [...]
>
> foo: [...]
> ; the following two micro operations constitute "ret"
> sp <- sp + 1
> pc <- (sp) ; NOTE the external memory access
>
> Now look at the call in the SICP register machine:
>
> [...]
> continue <- pc + offset(L1)
> pc <- pc + offset(foo)
> L1: [...]
>
> foo: [...]
> pc <- continue
>
> It is fewer operations and every operation is immediate with no memory > access. I *have* cheated since I omit a need to push the continue register > onto the stack, but while this is needed at *every* call for the > conventional machine, this is only required once at the beginning of a > function in the SICP machine *unless* the function has a tail call, in > which case we don't need to push anything. So, while one can say that we > only "push around the pushes", we make gains for every tail call. 
> (When saying "this is needed" above, I was referring to the *stack pushes*, not the push of the continue register specifically.) In addition to not having to push continue in functions with tail calls, we also gain for every function that does not call a subroutine. For C compatibility, we can do an ordinary call when calling C. None of this affects the restoration of continuations. Also, it does not slow down but speeds up! >
Re: Proposal for a new (ice-9 history)
I've thought some more about this. What this is about is a need to refer to previous values in the value history in a relative way rather than referring specifically to some historic value. First a simple (and perhaps not so useful) example, where we use Heron's method to compute the square root of 9 (where $ refers to the last value):

scheme@(guile-user)> 1
$1 = 1
scheme@(guile-user)> (/ (+ $ (/ 9 $)) 2.0)
$2 = 5.0
scheme@(guile-user)> (/ (+ $ (/ 9 $)) 2.0)
$3 = 3.4
scheme@(guile-user)> (/ (+ $ (/ 9 $)) 2.0)
$4 = 3.023529411764706
scheme@(guile-user)> (/ (+ $ (/ 9 $)) 2.0)
$5 = 3.000091554131380
scheme@(guile-user)> (/ (+ $ (/ 9 $)) 2.0)
$6 = 3.000000001396984
scheme@(guile-user)> (/ (+ $ (/ 9 $)) 2.0)
$7 = 3.0

We also have the more common case that we are debugging a program and need to inspect the output, e.g. (where $$0 = $ and $$1 is the value before $):

scheme@(guile-user)> (foo 1)
$1 = #
scheme@(guile-user)> (get-bar $$0)
$2 = ...
scheme@(guile-user)> (get-baz $$1)
$3 = ...
scheme@(guile-user)> (foo 2)
$4 = #
scheme@(guile-user)> (get-bar $$0)
$5 = ...
scheme@(guile-user)> (get-baz $$1)
$6 = ...

The point is that we can use readline's input history to pick earlier lines with minor or no editing, i.e. we don't need to say $4 in the last two selectors. Maybe even more importantly, using relative value history references is conceptually easier, since we don't have to pay attention to the index of the value in the value history. Mark also mentioned a use case where procedures are successively built up using values from the value history. We have now discussed three different problems associated with such an extension:

1. Naming and name collisions

We currently have a GDB-compatible naming scheme where values are named $1, $2 etc. It is then natural to extend this to the full GDB value history naming scheme, introducing $, $$ and $$N. $ collides with the auxiliary syntactic keyword $ in (ice-9 match). 
Unfortunately, the interpretation of literals in syntax-rules and syntax-case macros is influenced by bindings in the environment where the macros are used (according to R5RS 4.3.2), such that a value history $ will disrupt $ matching in (ice-9 match). This can be solved by using different naming. Mark suggested that $ could be renamed to $$. I suggested a naming scheme where $-1, $-2 could be used for relative references. Chicken scheme uses #[N] and this could also be extended by letting negative numbers be relative references. However, I'm thinking that it is a bit awkward to have to consider all possible uses of names when selecting this naming scheme. I would very much prefer to let a GDB user feel at home. Is there a way for names to coexist? Well, we have the module system. Ideally, I think that value history lookups should be done in a special top level environment which is associated with the REPL. Top level bindings in the Scheme environment should have precedence. Now, instead, (ice-9 history) has a hack which patches (value-history) into every environment we visit. Given this situation, is there still a way to use the module system to give the (ice-9 match) $ precedence? There is. Göran Weinholt has pointed out that other Scheme implementations tend to export their auxiliary keywords. If we export $ like this:

--8<---cut here---start->8---
(define-module (ice-9 match)
  #:export (match match-lambda match-lambda* match-let match-let* match-letrec)
  #:replace ($))

[...]

(define misplaced
  (let ((m (lambda (x) (format #f "Misplaced aux keyword ~A" x))))
    (lambda (x)
      (syntax-case x ()
        (_ (identifier? x)
           (syntax-violation #f (m (syntax->datum x)) x))
        ((_ ...)
         (syntax-violation #f (m (car (syntax->datum x))) x))))))

(define-syntax $ misplaced)

[...]

(include-from-path "ice-9/match.upstream.scm")
--8<---cut here---end->8---

then (ice-9 match) will gracefully replace $ in (value-history) and match will work as expected. 
A good idea would be to define *all* auxiliary keywords to `misplaced' above, according to what Göran has said. That is independent of the issue of name collisions. If $ is replaced this way, the user can still write $$0 for that value.

2. Threading and multiple REPLs

I have suggested storing the value history in a fluid and using a functional data type for the value history. That way we avoid race conditions.

3. Efficiency

If the value history is stored in a vlist (a module which is used in the core of Guile), much of it, particularly the most recent values, will be accessed in O(1). If we care less about efficiency and want something lean and simple, then ordinary lists will do.

Comments?
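Points 2 and 3 above could be sketched roughly as follows. This is only an illustration of the fluid-plus-functional-data-type idea, with made-up names (record-value!, history-ref); it is not the actual (ice-9 history) interface:

```scheme
;; Hedged sketch: value history kept in a fluid so that each dynamic
;; state (and hence each REPL thread) sees its own history, with a
;; plain list standing in for a vlist.
(define value-history (make-fluid '()))   ; most recent value first

(define (record-value! v)
  ;; cons is non-destructive, so references to the old list remain
  ;; valid: a functional data type, avoiding race conditions.
  (fluid-set! value-history (cons v (fluid-ref value-history))))

(define (history-ref n)
  ;; n = 0 is the most recent value ($$0 in the proposed naming).
  (list-ref (fluid-ref value-history) n))
```

For the efficiency variant, the list could be swapped for a vlist from (ice-9 vlist), using vlist-cons and vlist-ref, which keeps access to recent entries at O(1).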
Re: Proposal for a new (ice-9 history)
On Tue, Oct 30, 2018 at 2:59 PM Mikael Djurfeldt wrote: > > Is it a problem that this would drag in many modules at start-up? > I checked. It turns out that all modules used by (ice-9 vlist) are loaded anyway. (Otherwise, (language cps intmap) is an option too. :-)
Re: Proposal for a new (ice-9 history)
On Tue, Oct 30, 2018 at 7:21 AM Mark H Weaver wrote: > Hi Mikael, > > Mikael Djurfeldt writes: > > > On Tue, Oct 30, 2018 at 1:26 AM Mark H Weaver wrote: > > > > Mikael Djurfeldt writes: > > > > > The interface of (value-history) would instead have a lazy-binder > > > which provides a syntax transformer for every $... actually being > > > used. The $... identifier would expand into a list-ref into the value > > > history. > > > > A few more suggestions: > > > > If I write (define (foo x) (+ $$0 x)) at the repl, then I expect 'foo' > > to continue to refer to the same entry in the value history, even after > > the value history is later extended. > > > > Well, this could be interpreted in two ways. What I expect is that $$0 > > always refers to the last entry of the value history, even if it has > > been extended, such that $$0 will evaluate to new values as new values > > are pushed onto value history. > > I can see why it's a natural interpretation, but in practice this seems > far less useful to me. I very often write procedures that reference > values from the value history. In almost every case I can think of, I > want those references to continue to refer to the same value in the > future. If $$N has your preferred semantics, then it would almost > always be a mistake to refer to $$N from within a procedure body. > > What use cases do you have in mind that would benefit from your > preferred semantics for $$N? > > I can think of one case: it would enable writing procedures that > magically operate on the most recent REPL result, or possibly the last > two REPL results, to avoid having to pass them in explicitly as > arguments. To support this use case, we could export a procedure from > (ice-9 history) to fetch the Nth most recent value. > > Are there other realistic use cases that you know about? > I actually don't have other use-cases other than using value-history in expressions on the command line. My only concern is about the complexity of the semantics. 
I don't have very strong objections to the semantics you suggest, though. But note, again, that your semantics depends on macro expansion time, such that, e.g., if one would be crazy enough to put this in a file and load it in, then it would behave entirely differently. However, my feeling is that if one really wanted to do real programming against value-history, then the (ice-9 history) module should export suitable selectors with well-defined semantics. Your argument that you find it useful that the value referred to becomes fixed at macro expansion time is sufficient to me to accept it, but let's hear what other people think about it. Note that this issue is independent of whether we use bindings in a module or a list. The list-ref version would simply be a reference relative to "count", and your value would be fixed as per your requested semantics. > If you think that this is a sufficiently common use case to justify a > special set of abbreviations, perhaps we could have just one or two > magic variables to fetch the most recent values at run time? > No, I think that if we implement backward references, then we should be able to pick any value. One or two would be frustrating in many situations. Another possible syntax would be that $-1 refers to the most recent value, and then we could have $-2, $-3, etc. It would all be consistent if value history started out with $0 = ... . Dunno. > > If so, I guess value-history could be stored in a dynamically enlarged > > vector. > > It would certainly help for efficiency, but it raises another issue, > namely that we would need to think about thread safety issues. If a > procedure that refers to $$N is entered at the REPL and then evaluated > in another thread, the $$N could evaluate to garbage shortly after the > value-history vector is enlarged, unless all accesses to $$N are > serialized using a mutex. > > There's also another issue that just came to mind: multiple concurrent > REPLs. 
Each REPL should have its own history, I think. Modern Guile > offers REPL servers, which listen for network connections and spawn a > new REPL thread for every incoming connection. We also have cooperative > REPL servers that enable multiple REPLs within a single thread using > cooperative threading, to avoid thread safety issues. Those require a > procedure to be called periodically, and are intended for > single-threaded programs based on event loops. It would be good to > think about how to fix (ice-9 history) to properly support multiple > concurrent REPLs in the same process. > > What do you think? > You raise an important issue. Since every REPL runs with its own dynamic state, perhaps
Re: REPL and load differences (was Re: Proposal for a new (ice-9 history))
And, the attachments... On Tue, Oct 30, 2018 at 11:21 AM Mikael Djurfeldt wrote: > On Tue, Oct 30, 2018 at 1:55 AM Mikael Djurfeldt > wrote: > >> On Tue, Oct 30, 2018 at 12:55 AM Mark H Weaver wrote: >> >>> More precisely, it is a literal >>> identifier recognized by 'match' and related macros, in the same sense >>> that 'else' and '=>' are literal identifiers recognized by the 'cond' >>> macro. >>> >>> R5RS section 4.3.2 (Pattern language) specifies how these literal >>> identifiers are to be compared with identifiers found in each macro use: >>> >>> Identifiers that appear in <literals> are interpreted as literal >>> identifiers to be matched against corresponding subforms of the >>> input. A subform in the input matches a literal identifier if and >>> only if it is an identifier and either both its occurrence in the >>> macro expression and its occurrence in the macro definition have >>> the same lexical binding, or the two identifiers are equal and both >>> have no lexical binding. >>> >>> The implication is that these literal identifiers such as 'else', '=>' >>> and '$' lose their special meaning in any environment where they are >>> bound, unless the same binding is visible in the corresponding macro >>> definition environment. R6RS and R7RS also specify this behavior. >>> >>> For example: >>> >>> --8<---cut here---start->8--- >>> mhw@jojen ~$ guile >>> GNU Guile 2.2.3 >>> Copyright (C) 1995-2017 Free Software Foundation, Inc. >>> >>> Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'. >>> This program is free software, and you are welcome to redistribute it >>> under certain conditions; type `,show c' for details. >>> >>> Enter `,help' for help. >>> scheme@(guile-user)> ,use (ice-9 match) >>> scheme@(guile-user)> ,use (srfi srfi-9) >>> scheme@(guile-user)> (define-record-type <foo> (make-foo a b) foo? 
(a >>> foo-a) (b foo-b)) >>> scheme@(guile-user)> (match (make-foo 1 2) (($ <foo> a b) (+ a b))) >>> $1 = 3 >>> scheme@(guile-user)> (define $ 'blah) >>> scheme@(guile-user)> (match (make-foo 1 2) (($ <foo> a b) (+ a b))) >>> :6:0: Throw to key `match-error' with args `("match" "no >>> matching pattern" #<<foo> a: 1 b: 2>)'. >>> >>> Entering a new prompt. Type `,bt' for a backtrace or `,q' to continue. >>> scheme@(guile-user) [1]> >>> --8<---cut here---end--->8--- >>> >> >> Incidentally, this does *not* throw an error in master (unless I made >> some mistake in this late hour), which then is a bug! >> > > I now looked at this a bit more. It turns out that the difference is not > between stable-2.2 and master, but between REPL and load. While I can > reproduce the above also in master, if I instead load it (attached file > matchcoll.scm), I get no error! > > Also, the following file (attached as "elsetest.scm"): > -- > (display (cond (else #t))) > (newline) > > (define else #f) > > (display (cond (else #t))) > (newline) > -- > > gives the results #t and #<unspecified>, as expected, in the REPL, but if > I load the file, I instead get: > > scheme@(guile-user)> (load "elsetest.scm") > /home/mdj/guile/elsetest.scm:7:0: Unbound variable: else > > If I load it into Chez Scheme, I get: > > #t > #<void> > > as expected. > > Maybe someone more knowledgeable than myself could sort out what out of > this is a bug? > > Also, I have to rant a bit about R5RS section 4.3.2. What a mess this is! > To have the literals influenced by bindings outside goes against the spirit > of lexical binding, in my opinion, where the idea is to be able to judge > the outcome of the code from looking at it locally. > > Best regards, > Mikael > > (use-modules (ice-9 match)) (use-modules (srfi srfi-9)) (define-record-type <foo> (make-foo a b) foo? 
(a foo-a) (b foo-b)) (match (make-foo 1 2) (($ <foo> a b) (+ a b))) (define $ 'blah) (match (make-foo 1 2) (($ <foo> a b) (+ a b))) (display (cond (else #t))) (newline) (define else #f) (display (cond (else #t))) (newline)
REPL and load differences (was Re: Proposal for a new (ice-9 history))
On Tue, Oct 30, 2018 at 1:55 AM Mikael Djurfeldt wrote: > On Tue, Oct 30, 2018 at 12:55 AM Mark H Weaver wrote: > >> More precisely, it is a literal >> identifier recognized by 'match' and related macros, in the same sense >> that 'else' and '=>' are literal identifiers recognized by the 'cond' >> macro. >> >> R5RS section 4.3.2 (Pattern language) specifies how these literal >> identifiers are to be compared with identifiers found in each macro use: >> >> Identifiers that appear in <literals> are interpreted as literal >> identifiers to be matched against corresponding subforms of the >> input. A subform in the input matches a literal identifier if and >> only if it is an identifier and either both its occurrence in the >> macro expression and its occurrence in the macro definition have >> the same lexical binding, or the two identifiers are equal and both >> have no lexical binding. >> >> The implication is that these literal identifiers such as 'else', '=>' >> and '$' lose their special meaning in any environment where they are >> bound, unless the same binding is visible in the corresponding macro >> definition environment. R6RS and R7RS also specify this behavior. >> >> For example: >> >> --8<---cut here---start->8--- >> mhw@jojen ~$ guile >> GNU Guile 2.2.3 >> Copyright (C) 1995-2017 Free Software Foundation, Inc. >> >> Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'. >> This program is free software, and you are welcome to redistribute it >> under certain conditions; type `,show c' for details. >> >> Enter `,help' for help. >> scheme@(guile-user)> ,use (ice-9 match) >> scheme@(guile-user)> ,use (srfi srfi-9) >> scheme@(guile-user)> (define-record-type <foo> (make-foo a b) foo? 
(a >> foo-a) (b foo-b)) >> scheme@(guile-user)> (match (make-foo 1 2) (($ <foo> a b) (+ a b))) >> $1 = 3 >> scheme@(guile-user)> (define $ 'blah) >> scheme@(guile-user)> (match (make-foo 1 2) (($ <foo> a b) (+ a b))) >> :6:0: Throw to key `match-error' with args `("match" "no >> matching pattern" #<<foo> a: 1 b: 2>)'. >> >> Entering a new prompt. Type `,bt' for a backtrace or `,q' to continue. >> scheme@(guile-user) [1]> >> --8<---cut here---end--->8--- >> > > Incidentally, this does *not* throw an error in master (unless I made some > mistake in this late hour), which then is a bug! > I now looked at this a bit more. It turns out that the difference is not between stable-2.2 and master, but between REPL and load. While I can reproduce the above also in master, if I instead load it (attached file matchcoll.scm), I get no error! Also, the following file (attached as "elsetest.scm"): -- (display (cond (else #t))) (newline) (define else #f) (display (cond (else #t))) (newline) -- gives the results #t and #<unspecified>, as expected, in the REPL, but if I load the file, I instead get: scheme@(guile-user)> (load "elsetest.scm") /home/mdj/guile/elsetest.scm:7:0: Unbound variable: else If I load it into Chez Scheme, I get: #t #<void> as expected. Maybe someone more knowledgeable than myself could sort out what out of this is a bug? Also, I have to rant a bit about R5RS section 4.3.2. What a mess this is! To have the literals influenced by bindings outside goes against the spirit of lexical binding, in my opinion, where the idea is to be able to judge the outcome of the code from looking at it locally. Best regards, Mikael
Re: Proposal for a new (ice-9 history)
On Tue, Oct 30, 2018 at 1:26 AM Mark H Weaver wrote: > Mikael Djurfeldt writes: > > > The interface of (value-history) would instead have a lazy-binder > > which provides a syntax transformer for every $... actually being > > used. The $... identifier would expand into a list-ref into the value > > history. > > A few more suggestions: > > If I write (define (foo x) (+ $$0 x)) at the repl, then I expect 'foo' > to continue to refer to the same entry in the value history, even after > the value history is later extended. > Well, this could be interpreted in two ways. What I expect is that $$0 always refers to the last entry of the value history, even if it has been extended, such that $$0 will evaluate to new values as new values are pushed onto value history. This is also the effect we get if $$0 expands to (list-ref value-history 0). > > I'm also a bit concerned about the efficiency implications of expanding > these variable references into 'list-ref' calls when the history grows > large. If I write a loop that evaluates $$0 a million times, I'd prefer > to avoid a million 'list-ref' calls. > Maybe this is a Microsoft-style argument, but do we really expect users to use value history in that way? If so, I guess value-history could be stored in a dynamically enlarged vector. > To address these concerns, I'd like to suggest a slightly different > approach: > > * $0, $1, ... would continue to be ordinary variable bindings in > (value-history), as they are now. > > * The 'count' in 'save-value-history' would be made into a top-level > variable in (ice-9 history). > (This (count) is what I had in mind for $: $ -> (list-ref value-history (- count )) ) > * $$0, $$1, $$2, ... would be handled by a lazy-binder, providing a > syntax transformer that looks at the value of 'count' at macro > expansion time, and expands into the appropriate variable > reference $N. > > For example, if $5 is the most recent value, $$0 would expand into $5 > instead of (list-ref ...). 
This would eliminate my concerns over > efficiency. > > What do you think? > This would then have the problem that $$0 would get a more complex meaning: It would mean "the most recent result at the time of macro expansion" rather than "the most recent result". If efficiency really is a concern, I would expect that vector references would be rather efficient after compilation. Best regards, Mikael
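The difference between the two readings of $$0 can be sketched in a few lines of Guile Scheme. All names here are hypothetical illustrations, not the proposed (ice-9 history) interface:

```scheme
;; Sketch of the "always re-read the history" reading of $$0 discussed
;; above.  value-history and record-value! are made-up names.
(define value-history '())

(define (record-value! v)
  (set! value-history (cons v value-history)))

;; $$0 as an identifier macro that expands into a list-ref: every
;; evaluation re-reads the current head of the history.
(define-syntax $$0
  (identifier-syntax (list-ref value-history 0)))

(record-value! 1)
(define (foo x) (+ $$0 x))   ; foo tracks the moving history head
(record-value! 10)
(display (foo 5)) (newline)  ; 10 is now the newest value, so prints 15
```

Under Mark's alternative, $$0 would instead expand at macro-expansion time into the fixed variable $1, so `foo` would keep yielding 6.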
Re: Proposal for a new (ice-9 history)
On Tue, Oct 30, 2018 at 12:55 AM Mark H Weaver wrote: > However, there's a complication with using '$' in this way. '$' is > already widely used as part of the syntax for (ice-9 match), to specify > patterns that match record objects. Yes, I actually looked at this, but thought that $ would be interpreted as a literal inside the match expression; I was probably wrong, according to what you write below: > More precisely, it is a literal > identifier recognized by 'match' and related macros, in the same sense > that 'else' and '=>' are literal identifiers recognized by the 'cond' > macro. > > R5RS section 4.3.2 (Pattern language) specifies how these literal > identifiers are to be compared with identifiers found in each macro use: > > Identifiers that appear in <literals> are interpreted as literal > identifiers to be matched against corresponding subforms of the > input. A subform in the input matches a literal identifier if and > only if it is an identifier and either both its occurrence in the > macro expression and its occurrence in the macro definition have > the same lexical binding, or the two identifiers are equal and both > have no lexical binding. > > The implication is that these literal identifiers such as 'else', '=>' > and '$' lose their special meaning in any environment where they are > bound, unless the same binding is visible in the corresponding macro > definition environment. R6RS and R7RS also specify this behavior. > > For example:
>
> --8<---cut here---start------->8---
> mhw@jojen ~$ guile
> GNU Guile 2.2.3
> Copyright (C) 1995-2017 Free Software Foundation, Inc.
>
> Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'.
> This program is free software, and you are welcome to redistribute it
> under certain conditions; type `,show c' for details.
>
> Enter `,help' for help.
> scheme@(guile-user)> ,use (ice-9 match)
> scheme@(guile-user)> ,use (srfi srfi-9)
> scheme@(guile-user)> (define-record-type <foo> (make-foo a b) foo? (a foo-a) (b foo-b))
> scheme@(guile-user)> (match (make-foo 1 2) (($ <foo> a b) (+ a b)))
> $1 = 3
> scheme@(guile-user)> (define $ 'blah)
> scheme@(guile-user)> (match (make-foo 1 2) (($ <foo> a b) (+ a b)))
> :6:0: Throw to key `match-error' with args `("match" "no
> matching pattern" #<<foo> a: 1 b: 2>)'.
>
> Entering a new prompt. Type `,bt' for a backtrace or `,q' to continue.
> scheme@(guile-user) [1]>
> --8<---cut here---end--->8---
Incidentally, this does *not* throw an error in master (unless I made some mistake at this late hour), which would then be a bug! > > To avoid colliding with the popular 'match' syntax, how about making > '$$' the last value ($$0), and omitting the alias for '$$1'? > > What do you think? > Not sure. This might be confusing for GDB users... Let's think about it.
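The literal-identifier rule quoted above can also be seen without (ice-9 match), using cond's `else' literal; a minimal illustration:

```scheme
;; When `else' has no lexical binding it is recognized as cond's
;; literal keyword:
(display (cond (#f 'no) (else 'yes))) (newline)   ; prints yes

;; Once `else' is locally bound, it loses its special meaning and is
;; evaluated as an ordinary variable (here #f, so the clause fails):
(display (let ((else #f))
           (cond (#f 'no)
                 (else 'yes)
                 (#t 'fallback))))
(newline)                                         ; prints fallback
```

The same mechanism is why binding `$` makes the (ice-9 match) record pattern stop matching.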
Re: Officially require GNU Make to build Guile? (was Re: Bootstrap optimization)
On Sun, Oct 28, 2018 at 11:34 PM Mark H Weaver wrote: > I'm still inclined to consider it a bug, but maybe we can have the best > of both worlds here. I see that Automake has conditionals: > > https://www.gnu.org/software/automake/manual/automake.html#Conditionals > > How hard would it be to test for GNU Make in our configure script, and > then to use your improved Makefile rules only when GNU Make is present? > I don't think that it is hard in itself. However, it is hard for me in this case since I don't know how you have been thinking with regard to the structure of the build system. E.g., I don't know why there's an am/bootstrap.am which is included in bootstrap/Makefile.am rather than having that material in Makefile.am. In addition, I think it is better that I spend the little time I have in other ways---sorry. But, as you concluded, Guile currently uses GNU Make specific functionality. $(filter-out ...) in bootstrap/Makefile.am is such a case, and so are the vpath and %-thingies in am/bootstrap.am. Probably, you should start out by making an inventory of the various uses of GNU Make functionality and, for each case, evaluate how much work would be required to make it standard. Only then is it possible to decide whether it is worth doing. However, note that applying my suggested patch would not really change the situation much in this respect, since that piece of functionality already depends on the GNU Make specific $(filter-out ...); when the GNU Make specifics eventually get handled, having applied my patch doesn't really increase the amount of work in any way. Best regards, Mikael
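For reference, the Automake conditional Mark refers to could look roughly like this. This is only a sketch; HAVE_GNU_MAKE is a made-up conditional name, and configure.ac would have to detect GNU Make and declare it with AM_CONDITIONAL. Because Automake conditionals are resolved at configure time, the GNU-Make-only text would never even reach a non-GNU make:

```makefile
# Sketch only.  configure.ac would contain something like:
#   AM_CONDITIONAL([HAVE_GNU_MAKE], [test "$ac_cv_gnu_make" = yes])
# (variable names hypothetical), and bootstrap/Makefile.am could then
# guard the GNU-Make-specific serialization rules:
if HAVE_GNU_MAKE
$(filter-out ice-9/eval.go, $(GOBJECTS)): ice-9/eval.go
else
# Portable fallback: no extra build-ordering constraints.
endif
```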
Re: Bootstrap optimization
OK, here's a new patch. OK to apply it? This actually also fixes the existing problem of all bootstrap objects being rebuilt if eval.scm is touched. The patch is verified to give a faster build for 4 and 32 build threads. The only downside is that it requires GNU Make 3.80 (which was released 2002) or later, but that shouldn't be a problem, right? Best regards, Mikael

From 332471874aa227e8b25b747f560ab9185af4f2fb Mon Sep 17 00:00:00 2001
From: Mikael Djurfeldt
Date: Thu, 25 Oct 2018 13:53:47 +0200
Subject: [PATCH] Bootstrap optimization

* bootstrap/Makefile.am: Build both eval.go and psyntax-pp.go before
  the rest of the .go files so that they are handled by a fast macro
  expander.  This saves time for a parallel build.
---
 bootstrap/Makefile.am | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/bootstrap/Makefile.am b/bootstrap/Makefile.am
index 57b62eb56..bcb22cdbc 100644
--- a/bootstrap/Makefile.am
+++ b/bootstrap/Makefile.am
@@ -32,5 +32,9 @@ GUILE_OPTIMIZATIONS = -O1 -Oresolve-primitives
 include $(top_srcdir)/am/bootstrap.am
 
 # We must build the evaluator first, so that we can be sure to control
-# the stack.
-$(filter-out ice-9/eval.go, $(GOBJECTS)): ice-9/eval.go
+# the stack.  Then, we build the syntax-case macro expander before the
+# rest, in order to speed up parallel builds.
+ice-9/psyntax-pp.go: | ice-9/eval.go
+
+$(filter-out ice-9/eval.go ice-9/psyntax-pp.go, $(GOBJECTS)): | \
+  ice-9/psyntax-pp.go
-- 
2.11.0
Re: Bootstrap optimization
On Sun, Oct 28, 2018, 02:35 Mark H Weaver wrote: > The downside of this approach to serialization is that when we add file > X.scm to the list of objects to build serially, we force a full rebuild > whenever X.scm is modified. At present, eval.scm is the only file that > forces a full rebuild. Your patch would add psyntax-pp.scm to that > list. > > I don't feel strongly about it, and maybe your patch is still a net > benefit, but I very much wish we had a better way to optimize the early > bootstrap without adding these bogus dependencies. > > > To which branch should this be applied, stable-2.2 or master? > > If we decide to adopt this approach, it should probably be committed to > stable-2.2, but first I'd like to give other people an opportunity to > share their thoughts on this. > > Thoughts? Thank you for spotting this, Mark. I didn't think of it. There seems to be a simple solution: using order-only prerequisites. I'll test this later and suggest a new patch. Best regards, Mikael
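For readers unfamiliar with the feature: order-only prerequisites (listed after a `|`, GNU Make 3.80 and later) ensure a prerequisite is built first, but do not make its timestamp trigger rebuilds of the target. A generic sketch with made-up file names:

```makefile
# `tool' is an order-only prerequisite of out.txt: it is built if it
# is missing, but touching tool afterwards does NOT rebuild out.txt.
# Touching dep.txt (a normal prerequisite) still does.
out.txt: dep.txt | tool
	./tool < dep.txt > out.txt

tool: tool.c
	cc -o tool tool.c
```

This is exactly the property wanted here: eval.go and psyntax-pp.go get built first, without forcing a full rebuild whenever their sources are touched.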
Add dependency on texinfo in README
Does this make sense (see attached patch)? Forgetting to install the texinfo packages has bitten me several times.

From 732fe1af508aeb65588981839dfc01172ec79f0e Mon Sep 17 00:00:00 2001
From: Mikael Djurfeldt
Date: Wed, 24 Oct 2018 20:56:16 +0200
Subject: [PATCH] Add texinfo dependency to README

* README: Add texinfo dependency.
---
 README | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/README b/README
index 575ea5c3b..83569dde2 100644
--- a/README
+++ b/README
@@ -116,6 +116,12 @@ Guile requires the following external packages:
 
   - LIBFFI_LIBS=
 
+  - texinfo
+
+Guile uses `makeinfo' to create info documentation in the directory
+and `install-info' (which is provided by a separate package in some
+distributions) to install it.
+
 Special Instructions For Some Systems
 =
-- 
2.11.0
Re: GNU Guile 2.9.1 Released [beta]
I improved the benchmarking code by reducing the number of cube computations. Interestingly, this gives a huge improvement in guile-2.9.1 performance while improving less, or even worsening, performance for the other Scheme interpreters I've tested. This could indicate that

1. the guile-2.9.1 optimizer is able to convert the extra code I introduced to avoid unnecessary cube computation (let:s and extra args) into something with low overhead
2. guile-2.9.1 arithmetic is a bit heavy

Anyhow, now guile-1.8 went up to 7.53 s, guile-2.9.1 went down to 0.53 s, while python went down to 3.60 s. Attaching version 2 of the benchmarks. (BTW, on my machine the previous version is better at provoking a segfault.)

;;; ramanujan.scm -- Compute the N:th Ramanujan number
;;;
;;; Copyright (C) 2018 Mikael Djurfeldt
;;;
;;; This program is free software; you can redistribute it and/or modify
;;; it under the terms of the GNU General Public License as published by
;;; the Free Software Foundation; either version 3, or (at your option)
;;; any later version.
;;;
;;; This program is distributed in the hope that it will be useful, but
;;; WITHOUT ANY WARRANTY; without even the implied warranty of
;;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
;;; General Public License for more details.
;;;
;;; You should have received a copy of the GNU General Public License
;;; along with this program.  If not, see <http://www.gnu.org/licenses/>.
;;;
;;; Version 2

(define (ramanujan n)
  "Return the N:th Ramanujan number (sum of two cubes in more than one way)"
  (define (ramanujan? w b0)
    ;; Is w a Ramanujan number?
    (let loop ((a 1) (a3 1) (b b0) (b3 (* b0 b0 b0)) (seen #f))
      (and (<= a b)
           (let ((s (+ a3 b3)))
             (cond ((< s w)
                    (let ((a1 (+ 1 a)))         ; too small => inc a
                      (loop a1 (* a1 a1 a1) b b3 seen)))
                   ((> s w)
                    (let ((b1 (- b 1)))         ; too large => dec b
                      (loop a a3 b1 (* b1 b1 b1) seen)))
                   (else
                    (let ((b1 (- b 1)))         ; found a sum!
                      (or seen
                          (loop a a3 b1 (* b1 b1 b1) #t)))))))))
  (define (iter w b0 n)
    ;; w is a Ramanujan candidate
    ;; b0 is the first second term to try
    ;; n is the number of Ramanujan numbers still to find
    ;; We first increase b0 until 1 + b0^3 >= w
    (let ((b0 (let loop ((b b0))
                (if (>= (+ 1 (* b b b)) w)
                    b
                    (loop (+ 1 b))))))
      (cond ((zero? n) (- w 1))                 ; found the last number!
            ((ramanujan? w b0) (iter (+ 1 w) b0 (- n 1)))
            (else (iter (+ 1 w) b0 n)))))       ; try next candidate
  (iter 2 1 n))

#!/usr/bin/python3
# ramanujan.py -- Compute the N:th Ramanujan number
#
# Copyright (C) 2018 Mikael Djurfeldt
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
#
# Version 2

# return the N:th Ramanujan number (sum of two cubes in more than one way)
def ramanujan (n):
    w = 0        # Ramanujan number candidate
    b0 = 1       # first second term to try
    while n > 0:
        w += 1   # try next candidate
        # increase initial b0 until 1 + b0^3 >= w
        while 1 + b0 * b0 * b0 < w:
            b0 += 1
        a = 1
        a3 = 1
        b = b0
        b3 = b0 * b0 * b0
        count = 0  # number of ways to write w
        while a <= b:
            s = a3 + b3
            if s < w:
                a += 1   # if sum is too small, increase a
                a3 = a * a * a
                continue
            elif s == w:
                count += 1  # found a sum!
                if count > 1:
                    n -= 1
                    break
            # decrease b both if sum too large and to find next way to write w
            b -= 1
            b3 = b * b * b
    return w
Re: GNU Guile 2.9.1 Released [beta]
Then a bug report: I find that there seems to be some kind of race condition such that the guile-2.9.1 timing line below sometimes gives a segmentation fault. I'm sorry that I don't have time to look further into that right now, and hope that someone else can also reproduce it. Best regards, Mikael On Thu, Oct 11, 2018 at 12:49 AM Mikael Djurfeldt wrote: > Congratulations on fantastic work! > > I wonder if your evaluator speed estimates aren't too humble? > > With this email, I attach scheme and python versions of a (maybe > buggy---just wrote it) algorithm for finding Ramanujan numbers. It's > essentially the same algorithm for both languages, although Scheme invites > you to write in a more functional style (which involves more function > calls, which shouldn't give Guile any advantage over Python performance > wise). > > I did the following 5 times and took the median of the real time used: > > time guile -l ramanujan.scm -c '(ramanujan 20)' > time python3 -c 'from ramanujan import *; ramanujan(20)' > > Results (s): > > guile-1.8: 7.03 > guile-2.9.1: 0.91 > python-3.5.3: 3.78 > > Best regards, > Mikael D. > > On Wed, Oct 10, 2018 at 11:32 AM Andy Wingo wrote: > >> We are pleased to announce GNU Guile release 2.9.1. This is the first >> pre-release of what will eventually become the 3.0 release series. >> >> Compared to the current stable series (2.2.x), Guile 2.9.1 adds support >> for just-in-time native code generation, speeding up all Guile programs. >> See the NEWS extract at the end of the mail for full details. >> >> We encourage you to test this release and provide feedback to >> guile-devel@gnu.org, and to file bugs by sending mail to >> bug-gu...@gnu.org. >> >> The Guile web page is located at http://gnu.org/software/guile/, and >> among other things, it contains a copy of the Guile manual and pointers >> to more resources.
>> >> Guile is an implementation of the Scheme programming language, with >> support for many SRFIs, packaged for use in a wide variety of >> environments. In addition to implementing the R5RS Scheme standard, >> Guile includes a module system, full access to POSIX system calls, >> networking support, multiple threads, dynamic linking, a foreign >> function call interface, and powerful string processing. >> >> Guile can run interactively, as a script interpreter, and as a Scheme >> compiler to VM bytecode. It is also packaged as a library so that >> applications can easily incorporate a complete Scheme interpreter/VM. >> An application can use Guile as an extension language, a clean and >> powerful configuration language, or as multi-purpose "glue" to connect >> primitives provided by the application. It is easy to call Scheme code >> from C code and vice versa. Applications can add new functions, data >> types, control structures, and even syntax to Guile, to create a >> domain-specific language tailored to the task at hand. >> >> Guile 2.9.1 can be installed in parallel with Guile 2.2.x; see >> >> http://www.gnu.org/software/guile/manual/html_node/Parallel-Installations.html >> . >> >> A more detailed NEWS summary follows these details on how to get the >> Guile sources.
>> >> Here are the compressed sources: >> http://alpha.gnu.org/gnu/guile/guile-2.9.1.tar.lz (10.3MB) >> http://alpha.gnu.org/gnu/guile/guile-2.9.1.tar.xz (12.3MB) >> http://alpha.gnu.org/gnu/guile/guile-2.9.1.tar.gz (20.8MB) >> >> Here are the GPG detached signatures[*]: >> http://alpha.gnu.org/gnu/guile/guile-2.9.1.tar.lz.sig >> http://alpha.gnu.org/gnu/guile/guile-2.9.1.tar.xz.sig >> http://alpha.gnu.org/gnu/guile/guile-2.9.1.tar.gz.sig >> >> Use a mirror for higher download bandwidth: >> http://www.gnu.org/order/ftp.html >> >> Here are the SHA256 checksums: >> >> 9e1dc7ed34a5581e47dafb920276fbb12c9c318ba432d19cb970c01aa1ab3a09 >> guile-2.9.1.tar.gz >> f24e6778e3e45ea0691b591ad7e74fdd0040689915b09ae0e52bd2a80f8e2b33 >> guile-2.9.1.tar.lz >> 01be24335d4208af3bbd0d3354d3bb66545f157959bb0c5a7cbb1a8bfd486a45 >> guile-2.9.1.tar.xz >> >> [*] Use a .sig file to verify that the corresponding file (without the >> .sig suffix) is intact. First, be sure to download both the .sig file >> and the corresponding tarball. Then, run a command like this: >> >> gpg --verify guile-2.9.1.tar.gz.sig >> >> If that command fails because you don't have the required public key, >> then run this command to import it: >> >> gpg --keyserver keys.gnupg.net --recv-key
Re: GNU Guile 2.9.1 Released [beta]
ual, for more information. JIT > compilation will be enabled automatically and transparently. To disable > JIT compilation, configure Guile with `--enable-jit=no' or > `--disable-jit'. The default is `--enable-jit=auto', which enables the > JIT if it is available. See `./configure --help' for more. > > In this release, JIT compilation is enabled only on x86-64. In future > prereleases support will be added for all architectures supported by GNU > lightning. Intrepid users on other platforms can try passing > `--enable-jit=yes' to see the state of JIT on their platform. > > ** Lower-level bytecode > > Relative to the virtual machine in Guile 2.2, Guile's VM instruction set > is now more low-level. This allows it to express more advanced > optimizations, for example type check elision or integer > devirtualization, and makes the task of JIT code generation easier. > > Note that this change can mean that for a given function, the > corresponding number of instructions in Guile 3.0 may be higher than > Guile 2.2, which can lead to slowdowns when the function is interpreted. > We hope that JIT compilation more than makes up for this slight > slowdown. > > ** By default, GOOPS classes are not redefinable > > It used to be that all GOOPS classes were redefinable, at least in > theory. This facility was supported by an indirection in all "struct" > instances, even though only a subset of structs would need redefinition. > We wanted to remove this indirection, in order to speed up Guile > records, allow immutable Guile records to eventually be described by > classes, and allow for some optimizations in core GOOPS classes that > shouldn't be redefined anyway. > > Thus in GOOPS now there are classes that are redefinable and classes > that aren't. By default, classes created with GOOPS are not > redefinable. To make a class redefinable, it should be an instance of > `<redefinable-class>'. See "Redefining a Class" in the manual for more > information.
> > * New deprecations > > ** scm_t_uint8, etc deprecated in favor of C99 stdint.h > > It used to be that Guile defined its own `scm_t_uint8' because C99 > `uint8_t' wasn't widely enough available. Now Guile finally made the > change to use C99 types, both internally and in Guile's public headers. > > Note that this also applies to SCM_T_UINT8_MAX, SCM_T_INT8_MIN, for intN > and uintN for N in 8, 16, 32, and 64. Guile also now uses ptrdiff_t > instead of scm_t_ptrdiff, and similarly for intmax_t, uintmax_t, > intptr_t, and uintptr_t. > > * Incompatible changes > > ** All deprecated code removed > > All code deprecated in Guile 2.2 has been removed. See older NEWS, and > check that your programs can compile without linker warnings and run > without runtime warnings. See "Deprecation" in the manual. > > In particular, the function `scm_generalized_vector_get_handle' which > was deprecated in 2.0.9 but remained in 2.2, has now finally been > removed. As a replacement, use `scm_array_get_handle' to get a handle > and `scm_array_handle_rank' to check the rank. > > ** Remove "self" field from vtables and "redefined" field from classes > > These fields were used as part of the machinery for class redefinition > and are no longer needed. > > ** VM hook manipulation simplified > > The low-level mechanism to instrument a running virtual machine for > debugging and tracing has been simplified. See "VM Hooks" in the > manual, for more. > > * Changes to the distribution > > ** New effective version > > The "effective version" of Guile is now 3.0, which allows parallel > installation with other effective versions (for example, the older Guile > 2.2). See "Parallel Installations" in the manual for full details. > Notably, the `pkg-config' file is now `guile-3.0'.
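The GOOPS change described in the NEWS above can be illustrated with a short sketch (class names are made up; `<redefinable-class>` is the metaclass named in the NEWS):

```scheme
(use-modules (oop goops))

;; With the new default, a plain GOOPS class is NOT redefinable:
(define-class <point> ()
  (x #:init-keyword #:x)
  (y #:init-keyword #:y))

;; To get the old behavior, ask for it explicitly via the metaclass:
(define-class <rpoint> ()
  (x #:init-keyword #:x)
  (y #:init-keyword #:y)
  #:metaclass <redefinable-class>)

(display (slot-ref (make <point> #:x 1 #:y 2) 'x)) (newline)  ; prints 1
```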
Re: guile 3 update: instruction explosion for pairs, vectors
On Tue, Jan 16, 2018 at 4:55 PM, Andy Wingo wrote: > Thinking more globally, there are some more issues -- one is that > ideally we need call-site specialization. A GF could be highly > polymorphic globally but monomorphic for any given call site. We need > a way to specialize. > Yes, but I imagine that the gain of having polymorphic inline caches compared to just GFs will decrease the more that type dispatch can be eliminated from compiled method bodies (the monomorphic case will be replaced by a direct call of the IM (compiled method)). Then, of course, a polymorphic inline cache can perhaps be regarded as an anonymous GF, such that there isn't really much difference. Dynamically typed data structures would be the remaining main source of type dispatch. > Secondly, it would be nice of course to have speculative optimization, > including speculative inlining and type specialization not only on GF > arguments but also arguments to regular calls, types of return values, > and so on. > Yes! I think that dispatch on the return values is interesting. What I'm now going to write is based on almost zero knowledge of compiler construction, and I still will have to learn the Guile compiler infrastructure (where is a good start?), so please bear with me. For the same reason, what I write might be completely obvious and well-known already. Imagine that we are compiling the body of a method and we arrive at an integer addition. At some step of compilation, there has been a conversion to CPS such that we (at some level) can regard the operation as: (+ a b k) where k is the continuation. This means that k will be called with the result of the addition: (k <result>) In a framework where essentially all procedures are GFs, also k, here, is a GF. This allows it to dispatch on its argument such that it can treat inums, bignums, floats and doubles differently. Note now that in such a framework, a GF might have only one method (M) but the instantiated/compiled methods (IMs) can still be many.
If k above is called with an inum, there will be an IM of k which is specialized to inums. This means that the compiler later can choose operations relevant for inums inside k. The "exploded" (in a slightly different sense) code for + above might, in turn, contain a branch which handles the transition into bignums at "inum overflow". If now k above comes to occur in a branch of the +-code, inlined in an outer function, where the argument of k is guaranteed to be an inum, then our GF dispatch elimination will replace k with the k-IM for inums. So, the *only* branch remaining in the code will be the overflow check in +. (BTW, I wonder if this inlining/specialization to an outer function could in some sense rely on type dispatch on the continuation k?) > > Finally I wonder that if we had the latter, if it matters so much about > optimizing generic functions in any kind of specific way -- instead you > could just have a generic optimizer. > Yes. I guess I'm mostly using my GFs as a "hook" for my thoughts on this. The reason I do this is that I can imagine a reasonably simple implementation where (almost) everything is a GF. :) There, the GFs would, in some sense, work as data structures for the compiler/optimizer. > > Of course the speculative optimizer could work on traces instead of > methods, and in that case we'd get a lot of this stuff for free... but > that's a question for farther down the road. See > http://scheme2016.snow-fort.org/static/scheme16-paper3.pdf. > Yes, this is very nice and exciting work. :) Best regards, Mikael
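The idea of a continuation that dispatches on its argument can be sketched in plain Scheme. All names here are hypothetical; a real implementation would of course operate on CPS inside the compiler, not with explicit lambdas at run time:

```scheme
;; + in CPS style: the continuation k receives the result explicitly.
(define (+/cps a b k)
  (k (+ a b)))

;; A "generic" continuation that dispatches on the representation of
;; its argument, standing in for the inum/bignum/flonum cases above.
(define (k-dispatch r)
  (cond ((exact-integer? r) (list 'exact-integer r))
        ((real? r)          (list 'flonum r))
        (else               (list 'other r))))

(display (+/cps 1 2 k-dispatch)) (newline)    ; prints (exact-integer 3)
(display (+/cps 1.5 2 k-dispatch)) (newline)  ; prints (flonum 3.5)
```

The elimination step described above corresponds to replacing the `cond` in `k-dispatch` with a direct call to the branch known to apply, whenever the producer guarantees the result's type.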
Re: guile 3 update: instruction explosion for pairs, vectors
Hi, I think this is a marvelous development and, for what it's worth, in the right direction. Many, many thanks! Maybe this is all completely obvious to you, but I want to remind, again, about the plans and ideas I had for GOOPS before I had to leave it at its rather prototypical and unfinished state: As you recall, generic functions (GFs) then carried a cache (ideas taken from PCL) with "instantiated methods" (IMs; there is probably a better term and I might have called them "compiled methods" or "cmethods" before)---method code where the types of the arguments are known, since each instance comes from an actual invocation of the GF. Given a new invocation, the dispatch mechanism would just use argument types to look up the correct IM. I then had the idea that since we know the types of the arguments of the IM, a lot of the type dispatch could be eliminated within the IM based on flow information---very much in line with what you are doing here. If we now add one more piece, things get really exciting: Wherever there is a call to other GFs within one IM and the types of the arguments can be deduced at the point of the call, then the polymorphic type dispatch of the GF can be eliminated and we can replace the GF call with a direct call to the corresponding IM. Given now that most of the standard scheme functions can be regarded as polymorphic GFs, I then imagined that most of the type dispatch in a program could be eliminated. Actual execution would mostly be direct calls of IMs to IMs, something which the optimizer could continue to work on, especially if it all was represented as CPS. Given your work here, I think that something like this could now rather easily be implemented. That is, we re-introduce IMs, make them directly callable, and substitute IM calls for GF calls when compiling them. I gather that the code of IMs does not necessarily have to be hung onto GFs but could be handled by some separate manager/data structures. Happy new year!
Mikael On Mon, Jan 8, 2018 at 4:01 PM, Andy Wingo wrote: > Hey all! > > This is an update along the road to Guile 3. Check > https://lists.gnu.org/archive/html/guile-devel/2017-11/msg00016.html for > the previous entry. > > Since 25 November there have been around 100 commits or so. Firstly I > merged in patches from stable-2.0, including patches corresponding to > the improvements in the Guile 2.2.3 stable series release. > > Then, I started to look at "instruction explosion" for vector-ref et > al. Basically the idea is to transform the various subcomponents of > e.g. vector-ref into their constituent parts. In the concrete case of > vector-ref, we have to check that the vector is a heap object, that its > heap object tag is "vector", we have to extract the length from the heap > object, then we have to check that the index is an integer between 0 and > length-1, and finally we dereference the field in the vector. > Instruction explosion turns all of these into different primcalls and > branches. > > One thing that became apparent was that with instruction explosion, we'd > have a lot more control flow. Information that the optimizer would > learn in a specific way (e.g. via specialized type inference / effects > analysis handlers for vector-ref) would instead be learned by generic > control flow. > > Concretely --
>
> scheme@(guile-user)> ,x (lambda (v i) (vector-ref v i))
> Disassembly of #<procedure ... at (unknown file):1:3 (v i)> at #x29f5b4c:
>
>    0    (assert-nargs-ee/locals 3 1)  ;; 4 slots (2 args) at (unknown file):1:3
>    1    (immediate-tag=? 2 7 0)       ;; heap-object? at (unknown file):1:17
>    3    (jne 23)                      ;; -> L3
>    4    (heap-tag=? 2 127 13)         ;; vector?
>    6    (jne 20)                      ;; -> L3
>    7    (word-ref/immediate 3 2 0)
>    8    (ursh/immediate 3 3 8)
>    9    (immediate-tag=? 1 3 2)       ;; fixnum?
>   11    (jne 13)                      ;; -> L2
>   12    (untag-fixnum 0 1)
>   13    (s64-imm<? ...)
>   14    (jl 8)                        ;; -> L1
>   15    (s64<? ...)
>   16    (jnl 6)                       ;; -> L1
>   17    (mov 3 0)
>   18    (uadd/immediate 3 3 1)
>   19    (scm-ref 2 2 3)
>   20    (handle-interrupts)
>   21    (return-values 2)             ;; 1 value
> L1:
>   22    (throw/value+data 1 201)      ;; #(out-of-range "vector-ref" "Argument 2 out of range: ~S")
> L2:
>   24    (throw/value+data 1 225)      ;; #(wrong-type-arg "vector-ref" "Wrong type argument in position 2 (expecting small integer): ~S")
> L3:
>   26    (throw/value+data 2 239)      ;; #(wrong-type-arg "vector-ref" "Wrong type argument in position 1 (expecting vector): ~S")
>
> So this is a bit horrible and I need to make the disassembler do a > better job, but anyway. Instructions 1 through 6 check that V is a > vector; instructions 7 and 8 extract the length; 9 and
Re: Hook vs. list of procedures
Can I just add this: First, as Andy already hinted, it's not how a data type is implemented but the operations in its API which define it. A list does not map directly to a hook. A hook can be implemented as a list, but that is not important. An example of a hook is before-print-hook, which is used in (ice-9 history) and which can be used, for example, to support a communication protocol when running Guile in an IDE-like environment in Emacs. If we had a naive use of lists as hooks, the correct way to extend it would be something like: (set! before-print-hook (cons my-action before-print-hook)) but one could be tempted to do: (set! before-print-hook (list my-action)) which would erase anything else added there or, even worse, (define before-print-hook ...) which would not affect the real before-print-hook since a new binding in the local module would be created. (add-hook! before-print-hook my-action) doesn't tempt you that way. Also, with the current hook API, hooks are first-class citizens, which they would not be with a naive list implementation. Not sure that matters much, though. On Mon, Jan 9, 2017 at 12:03 AM, Andy Wingo wrote: > On Thu 15 Dec 2016 11:48, Jan Synáček writes: > > > I've read about hooks in the manual recently and I don't understand > > why they are special. What is the difference between a hook and a > > plain list of procedures? Why do hooks have their own API? > > Historical reasons I think. Early Emacs inspired a number of Guile > extension points, and hooks are a thing there. (Hooks are not just a > list of procedures -- they're a settable place as well and a way of > running all of the procedures.) Anyway I agree, nothing to shout about, > and probably something the manual should mention less prominently. > > Andy > >
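For concreteness, the hook API discussed above looks like this in Guile (the hook name here is made up; before-print-hook is the real-world example mentioned in the text):

```scheme
;; Hooks are first-class objects created with an arity:
(define before-thing-hook (make-hook 1))

;; The safe way to extend a hook -- this cannot accidentally clobber
;; procedures registered by other code:
(add-hook! before-thing-hook
           (lambda (x) (format #t "saw ~a~%" x)))

;; Running the hook calls every registered procedure on the arguments:
(run-hook before-thing-hook 42)   ; prints "saw 42"
```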
Re: Native code generation and gcc
Many thanks for these links! It seems like the GCC JIT interface is the kind of "adaptation" of gcc which I asked for. :-) Then there's the calling convention problem which Helmut brought up earlier in this thread. But I guess there could be workarounds. In any case one would have to look closer at this. Regarding "hotness": The original GOOPS implementation had a somewhat crazy feature in that an application of a generic function to a specific argument list first resulted in the standard MOP procedure for finding a set of applicable methods and, second, from this/these generated something called a "cmethod" (compiled method) which, in turn, was stored in a cache as well as applied to the list of arguments. Next time this generic function was applied to an argument list with the same type signature, the *same* cmethod as had been used the first time could be very quickly looked up in the cache. (This lookup is described in doc/goops.mail in the repository.) The thought behind this was that when a cmethod is compiled, there is knowledge about the specific types of the arguments. This means that a compiler which compiles the applicable method into a cmethod can do some of the type dispatch at compile time, for example that of slot access. This is partially equivalent to unboxing, but more general, since some of the *generic function applications* can have their type dispatch resolved at compile time too. In the most ambitious approach one would include return values in the cmethod type signature---something which is natural to do when compiling to CPS. (This type dispatch elimination was never implemented in GOOPS.) I was curious how much impact this caching scheme would have in real-world programs. It turned out to work very well. I'm only aware of one complaint on memory use.
Obviously, though, if a generic function with a longer argument list is repeatedly called with different type signatures of the argument list, this could lead to a combinatorial explosion and fill up memory (as well as being rather inefficient). When Andy re-wrote GOOPS for the new compiler, the cmethod caching was removed---a sensible thing to do in my mind. *But*, some of the downsides of this scheme could be removed if hotness counting was added to the cache. One could do it in various ways. One could be to initially just associate the argument list type signature with a counter. If this counter reaches a certain threshold, the applicable method(s) is/are compiled into a cmethod stored in the cache. The storage of type signatures and counters still has the combinatorial explosion problem. This could now be avoided by limiting the size of the cache such that the counters compete for available space. (There are further issues to consider such as adaptability through forgetting, but I won't make this discussion even more complicated.) Best regards, Mikael On Mon, Dec 5, 2016 at 5:18 PM, Lluís Vilanova <vilan...@ac.upc.edu> wrote: > Mikael Djurfeldt writes: > > > [I apologize beforehand for being completely out of context.] > > Are there fundamental reasons for not re-using the gcc backends for > native code generation? I'm thinking of the (im?)possibility to convert the > cps to some of the intermediate languages of gcc. > > > If it wouldn't cause bad constraints the obvious gain is the many > targets (for free), the gcc optimizations, not having to maintain backends > and free future development. > > > Of course, there's the practical problem that gcc needs to be adapted > for this kind of use---but shouldn't it be adapted that way anyway? :) > > > Just an (old) idea... > > > Mikael > > Guile 2.1 has a register-base bytecode VM that makes using a code > generation > library like GNU lightning [1] a convenient alternative. 
In fact, that's > the > library used by nash [2] (an experimental Guile VM that generates native > code > for hot routines). You also have the experimental GCC JIT interface [3] to > achieve similar goals (available upstream since GCC 5, I think). > > IMO, if guile wants to go the tracing JIT way (like nash), it should store > the > CPS representation of routines to be able to iteratively apply more > heavy-weight > optimizations as the routine becomes hotter (called more frequently). > > For example, you could start with the current state. If the routine is > called > many times with the same argument types, you can create a version > specialized > for these types, opening more unboxing possibilities (the routine entry > point > would then have to be a version dispatcher). If a routine version later > becomes > hotter, re-compile that version into native code. > > One open question is whether the VM needs to be changed to count routine > "hotness" efficiently (as in nash), or if a sim
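The hotness-counting cache described in this thread might be sketched roughly as follows; compile-cmethod, apply-generic-via-mop and the threshold value are all invented names for illustration, not Guile API:

```scheme
;; Hypothetical sketch of a cmethod cache with hotness counting.  The
;; cache maps an argument-type signature either to a call counter or,
;; once hot, to a compiled cmethod.
(define *cmethod-cache* (make-hash-table))
(define *hotness-threshold* 100)        ; invented value

(define (dispatch gf args)
  (let* ((sig (map class-of args))      ; type signature of this call
         (entry (hash-ref *cmethod-cache* sig 0)))
    (cond ((procedure? entry)           ; hot: compiled cmethod is cached
           (apply entry args))
          ((>= entry *hotness-threshold*)
           ;; Just became hot: compile once, then use the fast path.
           (let ((cm (compile-cmethod gf sig)))   ; hypothetical compiler hook
             (hash-set! *cmethod-cache* sig cm)
             (apply cm args)))
          (else                         ; still cold: count, take slow path
           (hash-set! *cmethod-cache* sig (+ entry 1))
           (apply-generic-via-mop gf args)))))    ; hypothetical MOP fallback
```

Bounding the table size, as suggested above, would then amount to evicting the coldest counters when the table fills, which caps the combinatorial-explosion problem.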
Native code generation and gcc
[I apologize beforehand for being completely out of context.] Are there fundamental reasons for not re-using the gcc backends for native code generation? I'm thinking of the (im?)possibility to convert the cps to some of the intermediate languages of gcc. If it wouldn't cause bad constraints the obvious gain is the many targets (for free), the gcc optimizations, not having to maintain backends and free future development. Of course, there's the practical problem that gcc needs to be adapted for this kind of use---but shouldn't it be adapted that way anyway? :) Just an (old) idea... Mikael
Re: Guile & Emacs chat at emacs hackathon/bug-crush SF
This is wonderful news! :-) I've actually tried out guile-emacs recently. What would be wonderful to have is some kind of simple "map" over what has been done so far (e.g. the large-scale structure of the code and what the relationship between the elisp and guile interpreter currently is). Maybe that exists and I didn't find it? Best regards, Mikael D. On Sun, Mar 6, 2016 at 9:32 AM, Christopher Allan Webber < cweb...@dustycloud.org> wrote: > Heya everyone, > > I was at the Emacs hackathon / bug crushing event and I gave a couple > demos that were Guile related, one showing off guile-emacs, and one > showing off Guix's Emacs integration. So the good news is: the talk > went super, super well (on both, but especially guile-emacs), and > enthusiasm was high! When I showed guile-emacs live, there were some > amazed expressions to see oh hey... this is *really* working! > > I also had a conversation with John Wiegley, current maintainer of > emacs, and he said several things: > > - He thinks it would be *great* to have Emacs running on Scheme, a >clear win, assuming it's integrated and runs fast and works well. > > - However, Guile would have to be able to make a promise: once Emacs >ran on top of Guile, Emacs would have to be able to have say over >anything that could end up changing actual semantics in Emacs >(mainly, anything that would break Emacs user's source code). > >(I think there's an easy answer to this: guile-emacs is already >aiming for heavy backwards compatibility and should just preserve >that at this level.) > > - If we could prove that performance was better in guile-emacs, that's >an easy way to win enthusiasm. > > - A good goal to work towards: all of emacs' tests should pass using >guile-emacs. > > So that's all a ways off, but I'm feeling enthusiastic that it's > possible! > > - Chris > > PS: I'd like to see bipt's elisp branch merged with master. I might try > to help... I'm trying to learn enough to do so. 
However I don't have a > lot of time, and especially not a lot of experience with compilers.. > >
Re: [PATCH] Append effective version to GUILE_LOAD[_COMPILED]_PATH
In Python, the version number is higher up in the directory hierarchy, which, hypothetically, allows newer versions to have "inventions" in the more detailed directory structure: /usr/lib/python2.6 /usr/lib/python2.7 etc Just a thought. On Fri, Mar 4, 2016 at 2:13 PM, Jan Nieuwenhuizen wrote: > Hi, > > I am running guile-2.0 and guile-2.2 alongside each other which is > causing me some pain*). > > This is what bits of my GUILE_LOAD_COMPILED_PATH look like > > > /gnu/store/7ml4psifv46pzxjxw56xfl7mwd47k277-profile/lib/guile/2.2/ccache > --> > /gnu/store/7ml4psifv46pzxjxw56xfl7mwd47k277-profile/lib/guile/2.2/ccache/ice-9/and-let-star.go > > > /gnu/store/7ml4psifv46pzxjxw56xfl7mwd47k277-profile/share/guile/site/2.2/ > --> > /gnu/store/7ml4psifv46pzxjxw56xfl7mwd47k277-profile/share/guile/site/2.2/os/process.go > > If `/<effective-version>' is always used as the suffix of each path > element, and we/guix/packagers do not include that suffix in GUILE_*PATH > elements, then Guile can append effective-prefix and different major > Guile-versions can happily share the same GUILE_LOAD[_COMPILED]_PATH, > e.g., having > > > GUILE_LOAD_COMPILED_PATH=/gnu/store/7ml4psifv46pzxjxw56xfl7mwd47k277-profile/share/guile/site > > then guile-2.0 would get (os process) from > > /gnu/store/7ml4psifv46pzxjxw56xfl7mwd47k277-profile/share/guile/site + > /2.0 > --> > /gnu/store/7ml4psifv46pzxjxw56xfl7mwd47k277-profile/share/guile/site/2.0/os/process.go > > and guile-2.2 would read > > /gnu/store/7ml4psifv46pzxjxw56xfl7mwd47k277-profile/share/guile/site + > /2.2 > --> > /gnu/store/7ml4psifv46pzxjxw56xfl7mwd47k277-profile/share/guile/site/2.2/os/process.go > > What do you think? No more pain! Find patch attached. > > Greetings, > Jan > > > *) Some of my pain > > My Debian host system has guile-2.0, guix depends on guile-2.0, guix's > LD script depends on guile-2.0, the guile bits of my project depend on > guile-2.2.
> > I have some scripts to make this situation almost bearable, but still I > regularly > > cannot find a basic library > > [1]13:53:25 janneke@janneke-ijzer:~ > $ guile --no-auto-compile > GNU Guile 2.0.11 > Copyright (C) 1995-2014 Free Software Foundation, Inc. > > Guile comes with ABSOLUTELY NO WARRANTY; for details type `,show w'. > This program is free software, and you are welcome to redistribute it > under certain conditions; type `,show c' for details. > > Enter `,help' for help. > scheme@(guile-user)> (use-modules (os proccess)) > While compiling expression: > ERROR: no code for module (os proccess) > > or some guile script (guix) aborts > > guix environment --ad-hoc ccache coreutils git guix emacs guile-next > guile-next-lib > Throw without catch before boot: > Throw to key misc-error with args ("make_objcode_from_file" "bad > header on object file: ~s" > ("\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00") #f)Aborting. > Aborted > > or linking breaks > > g++ -Wall -std=c++11 -g -I. -I out/alarm.project/c++ -I > check/alarm.project/ out/alarm.project/c++/main.o > out/alarm.project/c++/Alarm.o out/alarm.project/c++/AlarmSystem.o > out/alarm.project/c++/pump.o out/alarm.project/c++/runtime.o > -lboost_system -lboost_coroutine -lboost_thread -lboost_context -pthread -o > out/alarm.project/c++/test > collect2: error: ld terminated with signal 6 [Afgebroken] > Throw without catch before boot: > Aborting. > > and then I juggle installed guile versions and/or manually modify > GUILE_LOAD_COMPILED_PATH. > > > > -- > Jan Nieuwenhuizen | GNU LilyPond http://lilypond.org > Freelance IT http://JoyofSource.com | Avatar® http://AvatarAcademy.nl > >
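The proposed lookup rule can be illustrated with a small sketch; the site path here is illustrative, not one of the actual Guix store paths from the message above:

```scheme
;; Sketch of the proposal: a single shared path element per site
;; directory, with each interpreter appending its own effective version
;; before searching, so 2.0 and 2.2 never see each other's .go files.
(define site "/usr/share/guile/site")   ; illustrative shared path element

(for-each
 (lambda (version)
   (format #t "guile-~a would search: ~a/~a\n" version site version))
 '("2.0" "2.2"))
```

Each Guile would thus compute `site + "/" + (effective-version)` itself, instead of every version needing its own GUILE_LOAD_COMPILED_PATH entry.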
Guile compilation time
Is there an easy way to replace the bootstrap interpreter with an already built Guile in order to speed up the build process?
Re: Asynchronous event loop brainstorm at FSF 30
Yes. I'm sure both you and Mark can judge this better than I can, currently. I just didn't think Guile was that thread-unsafe. I imagined you would have to use mutexes around some I/O and common data structures, and that that would be about it, but I'm probably wrong... Best regards, Mikael On Wed, Nov 18, 2015 at 3:16 PM, Christopher Allan Webber <cweb...@dustycloud.org> wrote: > Mikael Djurfeldt writes: > >> Den 4 okt 2015 02:30 skrev "Christopher Allan Webber" < >> cweb...@dustycloud.org>: >>> - This would be like asyncio or node.js, asynchronous but *not* OS >>>thread based (it's too much work to make much of Guile fit around >>>that for now) >> >> Why is this (too much work for threads)? > > Threads bring a lot of risky problems. I really don't want to deal with > that much locking. A lot of Guile's code isn't thread-safe... if we > wanted to go to the "oh yeah super safe with threads!" direction, > it might require something like Clojure's software transactional > memory. I talked to Mark Weaver about this; it's very expensive to do, > super hard to implement (I don't think we have any guile devs > interested), and makes things slower whenever you *aren't* using > threads. > > The asyncio / node.js style of things can solve IO bound problems. As > for CPU bound, we can use message passing between threads or processes. > > It's beneficial to focus on message passing for CPU bound issues anyway, > because this means that our code will be able to span across machines, > if said messages are serializable. > > Does that make sense? > - Chris
Re: Asynchronous event loop brainstorm at FSF 30
Den 4 okt 2015 02:30 skrev "Christopher Allan Webber" < cweb...@dustycloud.org>: > - This would be like asyncio or node.js, asynchronous but *not* OS >thread based (it's too much work to make much of Guile fit around >that for now) Why is this (too much work for threads)?
Re: [PATCH] Avoid GOOPS in (system foreign-object)?
On Fri, May 22, 2015 at 10:42 AM, Ludovic Courtès l...@gnu.org wrote: Just wanted to say that I think that we (or at least I) at some point in time had the goal to replace structs with pure GOOPS data structures. In the context of FFI, this would allow you to be more flexible than what structs allow, ultimately being able to access arbitrary C structs and C++ structs/classes directly from Scheme. I find that a proper MOP (which maybe still is not fully developed) is a nicer way to handle non-standard access than the strange character strings in struct vtables... The struct layout strings are a bit clunky, indeed. ;-) I like the flexibility that GOOPS provide, especially when it comes to extending things like ‘equal?’, ‘write’, and so on. I think it’s better if GOOPS is not a requirement for basic interfaces like this SMOB replacement, though. One of the concerns is performance. For instance, in 2.0, start-up time with GOOPS is on the order of 3 times higher than without it Right, but it's probably possible to re-organize things such that it's not necessary to load all of GOOPS to use basic interfaces, even if everything is based on a common non-struct data type... Just meant as input---do what you think is best!
Re: [PATCH] Avoid GOOPS in (system foreign-object)?
Hi Ludovic, Sadly, I nowadays only have time to look at guile-devel briefly now and then. I did this now and happened to see this. Just wanted to say that I think that we (or at least I) at some point in time had the goal to replace structs with pure GOOPS data structures. In the context of FFI, this would allow you to be more flexible than what structs allow, ultimately being able to access arbitrary C structs and C++ structs/classes directly from Scheme. I find that a proper MOP (which maybe still is not fully developed) is a nicer way to handle non-standard access than the strange character strings in struct vtables... Happy hacking! Mikael On Thu, May 21, 2015 at 5:28 PM, Ludovic Courtès l...@gnu.org wrote: Hello! I would like to have foreign object types based on structs rather than GOOPS classes. The rationale is that GOOPS is normally not loaded unless the user explicitly asks for it; having (system foreign-objects) load it would add overhead even for users who just want SMOB-like functionality. WDYT? The preliminary patch attached is an attempt to do that. Somehow, the creation of GOOPS classes for vtables doesn’t work as I thought, which means that ‘test-foreign-object-scm’ cannot define methods and so on (which I agree is useful functionality.) What am I missing? Thanks! Ludo’. PS: The reason I’m looking at it is that I would really want us to release 2.0.12 ASAP, so any changes to this API must be settled.
Re: Precedence for reader extensions
Thanks, Mark. This all sounds very sensible to me, and I will continue working on the scmutils port while waiting for your patch. On Fri, Feb 22, 2013 at 3:52 AM, Mark H Weaver m...@netris.org wrote: Mikael Djurfeldt mik...@djurfeldt.com writes: The API you suggest would compose much easier, but to me it feels like just another specialized solution. What we would really need is something like Ludovic's guile-reader. I agree that we should ideally have a much more general way of defining customized readers. In the meantime, my primary concern is to find a solution to your problem without committing us to supporting an overly general mechanism that fails to provide basic guarantees to other users of 'read'. As you pointed out, the current code *almost* supports overriding standard syntax for things like #!. However, it has been broken for a long time. The same bug is in Guile 1.8, and I haven't seen anyone complaining about it. Therefore, I'm more inclined to remove this broken functionality than to fix it. Mikael Djurfeldt mik...@djurfeldt.com writes: But I won't be stubborn regarding this. If someone else wants to implement another way of supporting #!optional and #!rest that is fine by me. Thanks. I hope to cook up a patch in the next few days. Ludovic Courtès l...@gnu.org writes: This is basically DSSSL keyword syntax. What about adding a new keyword style to read.c? Sounds like the easiest solution for this particular problem. This is a tempting solution, but I see a problem with this proposal: We'd have to make exceptions for things like #!fold-case and #!curly-infix, as well as for things like #!/usr/bin/guile. Also, it could potentially turn existing scsh-style block comments into syntax errors. Ludovic Courtès l...@gnu.org writes: In general, I think it should be easy to create new readers that derive from the standard syntax without having to write them from scratch. However, in hindsight, I’m not sure Guile-Reader’s API is the right approach. 
It’s an improvement, because it addresses this need; but its API is not ideal: “token readers” with different delimiter syntax don’t compose well, for instance. I'd be very interested to hear your current thoughts on what a better API should look like. Regards, Mark
Re: Precedence for reader extensions
On Tue, Feb 19, 2013 at 12:33 AM, Mark H Weaver m...@netris.org wrote: Mikael Djurfeldt mik...@djurfeldt.com writes: I propose to simplify this to only two levels: 1. %read-hash-procedures 2. predefined syntax It turns out that the change I propose above was already implemented in read.c. The effect just wasn't visible due to a bug in flush_ws which caused all #! to be erroneously removed if they exist as the outermost expression. In the attached diff, I've fixed the flush_ws bug and cleaned up some garbage code in scm_read_sharp which was unreachable. Can I push this into the repository? I don't think this would be sufficient. The problem is that tokens of the form #!<symbol><delimiter> have become standardized. To name a few examples, both R6RS and R7RS define the reader directives #!fold-case and #!no-fold-case, R6RS has #!r6rs, and SRFI-105 has #!curly-infix. Guile also has #! ... !# block comments to help with the handling of executable scripts. In what sense is it not sufficient? In any case: The present diff doesn't remove any functionality or make performance worse. It only removes some inconsistent behavior. At the same time it allows support for mit-scheme #!optional and #!rest. Best regards, Mikael D. reader-fix.diff Description: Binary data
Re: Precedence for reader extensions
On Tue, Feb 19, 2013 at 4:41 PM, Mark H Weaver m...@netris.org wrote: Mikael Djurfeldt mik...@djurfeldt.com writes: On Tue, Feb 19, 2013 at 12:33 AM, Mark H Weaver m...@netris.org wrote: Mikael Djurfeldt mik...@djurfeldt.com writes: I propose to simplify this to only two levels: 1. %read-hash-procedures 2. predefined syntax It turns out that the change I propose above was already implemented in read.c. The effect just wasn't visible due to a bug in flush_ws which caused all #! to be erroneously removed if they exist as the outermost expression. I'm not sure that I consider this a bug. In this reply I've attached a file mit-reader.scm which installs a hash-read-procedure for #\!. What I wanted to say above is that scm_read_sharp (in HEAD) is implemented with the priorities I list above while flush_ws is implemented with other priorities. Here's a demo of the consequences of this bug: scheme@(guile-user)> (load "mit-reader.scm") scheme@(guile-user)> (quote #!optional) ... !# hi) $1 = hi scheme@(guile-user)> '#!optional $2 = #:optional [...] I'm uncomfortable with globally overriding standard read syntax. In a large scheme system such as Guile, there are many modules that use 'read' and expect it to act in accordance with standard lexical conventions. Well, in the mit-scheme compatibility module, my intention was to use dynamic-wind to modify #!-syntax while loading mit-scheme-specific files. Note that %read-hash-procedures is a fluid, so this will be absolutely local and won't leak out in any way to the rest of the system. The problem with this approach is that it does not compose. Let's now patch guile according to the diff I sent... there!
scheme@(guile-user)> (load "mit-reader.scm") scheme@(guile-user)> (quote #!optional) $1 = #:optional scheme@(guile-user)> '#!optional $2 = #:optional scheme@(guile-user)> (quote #!hi!# #!optional) $3 = #:optional My take on this is: * The %read-hash-procedures API is not pretty * The suggested change doesn't make things prettier * The suggested change *does* make things conceptually simpler and more flexible (= you can always override hash syntax if you want; compared to the current: you can override #| but not other hash syntax) * The suggested change fixes a bug * The suggested change does compose and different syntax can be confined to a module by using dynamic-wind Best regards, Mikael mit-reader.scm Description: Binary data
Re: Precedence for reader extensions
On Tue, Feb 19, 2013 at 5:42 PM, Mikael Djurfeldt mik...@djurfeldt.com wrote: * The suggested change does compose What I meant here is that it does compose with the built-in syntax. Of course, the %read-hash-procedures API by itself doesn't automatically compose if multiple user-defined modules use it to introduce new syntax. (If these modules take care to preserve previously installed procedures, it can compose.) The API you suggest would compose much easier, but to me it feels like just another specialized solution. What we would really need is something like Ludovic's guile-reader. But I won't be stubborn regarding this. If someone else wants to implement another way of supporting #!optional and #!rest that is fine by me. I regard my diff simply as a bug fix and cleanup (removing unreachable code).
Re: Precedence for reader extensions
On Tue, Feb 19, 2013 at 5:42 PM, Mikael Djurfeldt mik...@djurfeldt.com wrote: * The suggested change *does* make things conceptually simpler and more flexible (= you can always override hash syntax if you want; compared to the current: you can override #| but not other hash syntax) Just to try to be clear: What I write above is not strictly true. The current Guile *already* allows you to override standard syntax, even without my changes. What my changes do is to cleanup the old mechanism so that it doesn't fail when whitespace is involved. An example of how it currently fails is that you *can* override when you spell quote using single quote ('OBJECT) since no whitespace is involved while you cannot override when you spell quote using the symbol quote ((quote OBJECT)) since there's whitespace to be swallowed before the OBJECT. I do respect the attitude that the user shouldn't be able to override standard syntax, even though I don't think it matters much given the state of the current mess. But I think you agree that we either need to apply my fix (making the current overriding mechanism useful) or fix scm_read_sharp so that it conforms with the behavior of flush_ws.
Precedence for reader extensions
I'm working on an mit-scheme compatibility module (compat mit-scheme) enabling Guile to read a (so far) subset of mit-scheme code. Now I have the problem that mit-scheme has the two constants #!optional and #!rest (mit-scheme extensions to the scheme standard). I thought that I could support this using %read-hash-procedures but discovered that there are three precedence levels: 1. Most predefined hash syntax like #(, #! etc. 2. %read-hash-procedures 3. #| This means that I can't add new syntax for #!. I propose to simplify this to only two levels: 1. %read-hash-procedures 2. predefined syntax This would be conceptually simpler and more flexible. It could also be used to support mit-scheme read syntax. If we do not implement this change, I need to use Ludovic's (nice) guile-reader. But that package contains C code meaning that (compat mit-scheme) can't be distributed using guilehall. In any case, I think it would be good to be able to support other Schemes' read syntax using the standard reader. If anyone is afraid about the effect this would have on reader performance, it is possible to compile %read-hash-procedures to a table of flags indicating exceptions. Opinions? Best regards, Mikael
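For concreteness, the kind of extension being asked for would look something like the following, using Guile's read-hash-extend entry point; the point of the proposal is precisely that, under the current precedence levels, the built-in #! handling shadows such a handler:

```scheme
;; What the (compat mit-scheme) module would like to do: make #!optional
;; and #!rest read as keywords.  read-hash-extend is real Guile API; the
;; precedence problem described above is that built-in #! syntax
;; currently takes priority over this hook.
(read-hash-extend #\!
  (lambda (chr port)
    (symbol->keyword (read port))))   ; #!optional => #:optional
```

A real compatibility layer would also need to leave #! ... !# script headers alone, which is exactly the tension discussed later in this thread.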
Re: Precedence for reader extensions
On Mon, Feb 18, 2013 at 10:05 PM, Mikael Djurfeldt mik...@djurfeldt.com wrote: guilehall guildhall (I write too fast)
Re: Precedence for reader extensions
On Mon, Feb 18, 2013 at 10:05 PM, Mikael Djurfeldt mik...@djurfeldt.com wrote: If anyone is afraid about the effect this would have on reader performance, it is possible to compile %read-hash-procedures to a table of flags indicating exceptions. But, given the current interface to reader extensions in the form of an alist in a fluid, such a table would have to be compiled at each entry to read...
Re: Precedence for reader extensions
(Sorry for thinking publicly.) The reason why I don't simply use guile-reader but start bugging you about it is that it feels silly that Guile, which was originally supposed to be able to support different languages, can't even support the read syntax of a sibling scheme interpreter. It is somewhat inflexible. What about including Ludovic's guile-reader as a library in the main guile distribution?
Re: syntax closures
Just saw this. Right, syntactic closures is the name of a macro system by Alan Bawden and Jonathan Rees: http://en.wikipedia.org/wiki/Syntactic_closures http://www.gnu.org/software/mit-scheme/documentation/mit-scheme-ref/Syntactic-Closures.html#Syntactic-Closures So, it would be good to choose a different name if what you are doing is different. BTW, the sc-macro-transformer facility of MIT-scheme would be nice to have. :-) Best regards, Mikael D. On Thu, Jan 24, 2013 at 9:45 AM, Alex Shinn alexsh...@gmail.com wrote: On Thu, Jan 24, 2013 at 4:11 PM, Stefan Israelsson Tampe stefan.ita...@gmail.com wrote: 2. I was actually hesitant to call this srfi-72 because of trying to do what it wants more than what it says. A main trick to simulate the effect was to introduce a closure in the syntax at one point and therefore I chose the name syntax-closure, not knowing that there is already a notion of that in the wild. Oh - I thought you were referring to the existing syntactic-closures. I guess it's a plausible enough name to reuse coincidentally... Carry on then :) -- Alex
Re: Scmutils in guile-2.0
Daniel Gildea reported problems for guile-2.0 guile-scmutils. Those are fixed in the attached diff. Best regards, Mikael D. guile-scmutils-v0.8-2.0-2.diff Description: Binary data
Re: Scmutils in guile-2.0
On Fri, Feb 8, 2013 at 10:54 AM, Ludovic Courtès l...@gnu.org wrote: Mikael Djurfeldt mik...@djurfeldt.com skribis: On Thu, Feb 7, 2013 at 11:00 PM, Ludovic Courtès l...@gnu.org wrote: +(cond-expand (guile-2 + (define-syntax define-integrable + (syntax-rules () + ((_ form body ...) (define form body ...) You can actually use ‘define-inlinable’ here (info (guile) Inlinable Procedures). Sorry, I'm lost. Doesn't define-inlinable define a procedure? It does, but it’s equivalent to what some implementations call ‘define-integrable’. Ludovic---sorry, I'm being dense. You see, I just by reflex interpreted integrable as a mathematical term. But I should have reacted against this being defined as a compatibility measure. What you are saying is that I should use define-inlinable instead of define in the definition of define-integrable, right? I didn't know about the common existence of define-integrable in other implementations! Thank you!
Scmutils in guile-2.0
As an exercise before porting the up-to-date version of scmutils, I eventually decided to bring Daniel Gildea's Guile port up-to-date. You'll find the archive guile-scmutils-v0.8.tgz here: http://www.cs.rochester.edu/~gildea/guile-scmutils/ You should be able to apply the attached patch and then be able to run it under guile-2.0 Best regards, Mikael D. guile-scmutils-v0.8-2.0.diff Description: Binary data
Re: Scmutils in guile-2.0
On Thu, Feb 7, 2013 at 11:00 PM, Ludovic Courtès l...@gnu.org wrote: +(cond-expand (guile-2 + (define-syntax define-integrable + (syntax-rules () + ((_ form body ...) (define form body ...) You can actually use ‘define-inlinable’ here (info (guile) Inlinable Procedures). Sorry, I'm lost. Doesn't define-inlinable define a procedure? Here, the idea simply was to have `define-integrable' behave exactly the same as `define'. BTW, I provided this patch just so that those who are interested could get started using guile-scmutils with guile-2.0. However, guile-scmutils only contains a fraction of the real scmutils. For the real port, I'm working on an mit-scheme compatibility module so that as much as possible of the original source can be used as is. Best regards, Mikael
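If I read Ludovic's suggestion right, the compatibility shim would become something like the following sketch; define-inlinable is real Guile 2 API, but the exact patterns (procedure-style definition plus a plain-value fallback) are my own guess at what the port needs:

```scheme
;; Sketch: define-integrable delegating to Guile 2's define-inlinable
;; for procedure definitions, and to plain define for value bindings.
(cond-expand
  (guile-2
   (define-syntax define-integrable
     (syntax-rules ()
       ((_ (name . formals) body ...)
        (define-inlinable (name . formals) body ...))
       ((_ name value)
        (define name value))))))
```

With this, call sites of an integrable procedure can be inlined by the compiler, which is the behavior mit-scheme's define-integrable promises, rather than merely behaving like define.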
Re: Fixing the slib mess
Hi Andy, No problem at all! In fact, apologies are entirely on my side: I thought I would get time to hack on this before and during Christmas, but this turned out not to be true. Great that you fixed it! If I have anything to add, I will of course bring that up. Now, I'm looking into porting Gerald Sussman's scmutils to Guile-2.0. I'm aware of an older port by Daniel Gildea but I don't think that uses GOOPS: I'm currently wondering if it could make sense to try to make an mit-scheme compatibility module providing the needed bindings. In that way a port could become easier to maintain and maybe such a module could also be useful for other mit-scheme software. Again, the amount of time I can spend on this is highly unpredictable... :( Best regards, Mikael On Mon, Jan 21, 2013 at 6:58 PM, Andy Wingo wi...@pobox.com wrote: Hello Mikael, A pleasure to see you around! On Mon 22 Oct 2012 01:11, Mikael Djurfeldt mik...@djurfeldt.com writes: When trying to use guile 2 for logic programming I discovered that the slib interface is again broken (and has been for quite some time). I am very sorry that I did not see this thread before hacking on this recently. Somehow over the past three or four months I just managed to drop everything and the inboxes filled without being filtered or drained in any way -- and to attack that I decided to just run through individual lists in order. A strange strategy, but it is good for honing the "does something need to be done about this or can I drop it?" instinct. Anyway I picked up something in the user list about Slib, looked into it, and then decided to fix it, without having seen this mail -- resulting in the recent patches to Slib CVS and Guile git. I'm sorry to have stepped on your toes here. In any case I didn't check it thoroughly, so surely there are issues yet to resolve. The implementation of the interface has two sides. One, the file ice-9/slib.scm, is owned by Guile. The other, slib/guile.init, is owned by slib.
slib has such .init files for some common scheme implementations but I early on noticed that the guile.init file is not really maintained. I decided that it would be more robust if slib.scm incorporated most of the interface so that it would be easy to update it as Guile changed. But of course slib also changed and at some point others felt that guile.init should contain most of the interface and the bulk of slib.scm was moved there. As we have seen, this didn't make things much better. Yes, in many ways I would like to have the interface in Guile. However it seems that time has shown that it really wants to live in slib -- probably because that's where people care most about slib. At least with Guile 2 we have managed to clean up many of the version dependent hacks, by just delegating to a fresh file for Guile 2. Anyway. Perhaps I did the wrong thing in fixing it this way? I would be very happy to commit anything you have. Please take a look at both Slib and Guile from their version control systems, and the recent patch about `include'. Aubrey seems quite responsive in dealing with patches, so if there is a change to make, I'm sure we can get it in. *But*, the proper implementation of syntax-toplevel? requires modification of psyntax.scm and adding it to the (system syntax) module. Do you have a new patch for this one? Regards, Andy -- http://wingolog.org/
Re: Fixing the slib mess
On Mon, Oct 22, 2012 at 11:51 PM, Mark H Weaver m...@netris.org wrote: It looks to me like your current implementation of 'syntax-toplevel?' is actually testing for a top-level _syntactic_ environment, but what you ought to be testing for here is slightly different. You are absolutely right. Thank you for spotting this. Unfortunately my scheming knowledge is a bit rusty. I'm not sure whether the wrap contains enough information to determine that. I don't think it does either. It might be easier to handle this with 'define-syntax-parameter' and 'syntax-parameterize'. The idea would be that within slib, 'define' would be a syntax parameter. Its default expansion would turn it into 'define-public', and also parameterize 'define' to mean 'base:define' within the body. If needed, you could also define 'let' and maybe some other things to parameterize 'define' within the body. Another option would be to make 'export' a syntax parameter, and parameterize it to a no-op within lexical contours such as 'define' and 'let'. What do you think? Correct me if I'm wrong, but doesn't this involve re-defining the syntax for all forms with bodies (in order to introduce the syntax-parameterize form)? I happen to be working on the reader lately. Would it help to implement SRFI-58 in our reader? While I think SRFI-58 support is great, I don't think slib needs it because it doesn't, to my knowledge, use read syntax.
Re: Fixing the slib mess
On Tue, Oct 23, 2012 at 8:01 PM, Mark H Weaver m...@netris.org wrote: Anyway, here's another idea: after requiring a new slib package, iterate over the entire list of top-level bindings in the slib module and export everything. What do you think? I think it sounds like the best idea so far. I'll try to go with this. One more thing: ideally, any logic that peeks into Guile internals or is likely to change between Guile versions should be in slib.scm, and anything that's likely to change between slib versions should be in guile.init. Does that make sense? Three problems come to my mind: 1. guile.init is really mostly a kind of interface, meaning that changes in both Guile and slib can affect the same pieces of code. 2. guile.init is supposed to work with a series of Guile versions. If I now try to do a larger reorganization, I will likely break compatibility with some older Guile versions, especially if I start moving things back to ice-9/slib.scm. 3. I don't really have time currently to do a full reorganization. Otherwise I concur with what you say. Problem 2 could be solved by asking Aubrey Jaffer (or whoever is currently maintaining slib) to include a new version of the file, guile2.init, in addition to guile.init... I'll think about your suggestions and try to come up with new patches. Best regards, Mikael D.
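[The "export everything after a require" idea can be sketched with Guile's module reflection API. A minimal, untested sketch; the procedure name is made up:]

```scheme
;; Walk all top-level bindings of (ice-9 slib) and add each one to the
;; module's public interface, so anything a `require' pulled in becomes
;; visible to users of (ice-9 slib).
(define (export-all-bindings!)
  (let ((slib-module (resolve-module '(ice-9 slib))))
    (module-for-each
     (lambda (symbol variable)
       (module-export! slib-module (list symbol)))
     slib-module)))
```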
Re: Fixing the slib mess
On Mon, Oct 22, 2012 at 8:31 PM, Stefan Israelsson Tampe stefan.ita...@gmail.com wrote: Comments? Can I add syntax-toplevel? to psyntax.scm and (system syntax)? [...] I can answer with some kind of suggestion here. In (system syntax) there is syntax-local-binding which you can use for example as

(define-syntax f
  (lambda (x)
    (syntax-case x ()
      ((_ x)
       (call-with-values
           (lambda () (syntax-local-binding #'x))
         (lambda (x y) (pk x) (pk y)))
       #'#t))))

Then,

scheme@(guile-user) [1]> (f +)
;;; (global)
;;; ((+ guile-user))

And,

scheme@(guile-user) [1]> (let ((s 1)) (f s))
;;; (lexical)
;;; (s-490)

(let ((s 1)) (define-syntax g (lambda (x) #'#f)) (f g))
;;; (displaced-lexical)
;;; (#f)

I'm not sure what exactly syntax-toplevel? does, but can you base it on syntax-local-binding? And if not, is it possible to change syntax-local-binding so that you can use it? Thanks, Stefan. (syntax-toplevel?) expands to #t if it occurs in a context (position in the code, if you prefer) where a (define x #f) would create/set! a global binding for x. It expands to #f otherwise. I had a look at syntax-local-binding, but decided that syntax-toplevel? was needed since the latter is not trying to determine the nature of an existing binding but rather the nature of the context. Of course one could probe the context by first creating a new binding (with some random name) and then use syntax-local-binding to determine the nature of the context by looking at the new binding, but that seems somewhat invasive. :-)
Re: Fixing the slib mess
On Mon, Oct 22, 2012 at 1:11 AM, Mikael Djurfeldt mik...@djurfeldt.com wrote: Comments? Can I add syntax-toplevel? to psyntax.scm and (system syntax)? Do you think it is reasonable to submit something along the lines of guile.init.diff to slib guile.init? If I get an OK, then I would of course put some further work into this so that the full feature set of slib (including uniform arrays) gets operational. If people find my prodigious use of nested-ref ugly (but note that the original code already makes use of module system primitives), I could remove most of them as well.
Fixing the slib mess
Dear Guile hackers, What nice work you are doing! For those who don't know me, I'm a Guile developer who has been doing other stuff for some time. When trying to use guile 2 for logic programming I discovered that the slib interface is again broken (and has been for quite some time). This easily happens because it is a very fragile interface. The way this is supposed to be used (and as documented in the manual), one does a (use-modules (ice-9 slib)) and can then do (require 'modular) etc. The module (ice-9 slib) forms a kind of sandbox so that all new definitions that are imported through require are loaded as local bindings in the (ice-9 slib) module and are exported through the public interface of (ice-9 slib). The implementation of the interface has two sides. One, the file ice-9/slib.scm, is owned by Guile. The other, slib/guile.init, is owned by slib. slib has such .init files for some common scheme implementations but I early on noticed that the guile.init file is not really maintained. I decided that it would be more robust if slib.scm incorporated most of the interface so that it would be easy to update it as Guile changed. But of course slib also changed and at some point others felt that guile.init should contain most of the interface and the bulk of slib.scm was moved there. As we have seen, this didn't make things much better. I'll let you ponder on how to handle the fundamental problems with this interface, but, as a Guile user, I think it would be nice if the interface works as written in the manual. Attached to this email you'll find two patches. The patch to slib.scm copies a snippet of code from guile.init so that they agree with each other and with the Guile reference manual on how to find slib in the filesystem. This patch for example makes SCHEME_LIBRARY_PATH work as described. I've tried to write the patch to guile.init so that it can play well with older Guile versions, but we should test this.
In order to make it work with Guile 2, though, I had to introduce a new syntax binding syntax-toplevel?. Given a syntax object (as available within a syntax-case transformer), it decides if the object originates from top level context. It is used, as in the old memoizing macro transformer, to choose whether to define-public or just define. *But*, the proper implementation of syntax-toplevel? requires modification of psyntax.scm and adding it to the (system syntax) module. I didn't want to do this until I've had your comments, so the present patch has its own syntax-object accessors (which breaks abstraction and is therefore not a real solution). I should also say that I have not yet fixed the slib interface to the new Guile uniform arrays, so there's a lot of slib functionality which won't yet work. Comments? Can I add syntax-toplevel? to psyntax.scm and (system syntax)? Do you think it is reasonable to submit something along the lines of guile.init.diff to slib guile.init? Best regards, Mikael Djurfeldt slib.scm.diff Description: Binary data guile.init.diff Description: Binary data
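[The role of syntax-toplevel? described above can be sketched like this. Illustrative only; `slib-define' is an invented name, and syntax-toplevel? is the predicate proposed in this mail, not a binding in stock Guile:]

```scheme
;; Sketch: expand to define-public at top level, plain define elsewhere,
;; mirroring what the old memoizing macro transformer did.
(define-syntax slib-define
  (lambda (x)
    (syntax-case x ()
      ((_ name val)
       (if (syntax-toplevel? x)
           #'(define-public name val)
           #'(define name val))))))
```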
Benchmarks
Dear Guilers, I just pulled the latest version via git and am amazed at how much you have achieved during the last years. Then, today, I stumbled on this: http://www.cs.utah.edu/~mflatt/benchmarks-20100126/log3/Benchmarks.html Probably Guile now compares better to the other implementations. Happy hacking! Mikael
Re: notes on the goops dispatch implementation
2008/10/19 Andy Wingo [EMAIL PROTECTED]: I have been going through GOOPS more carefully recently, preparing to make GOOPS and code that uses GOOPS compilable, preserving all of the intended optimizations. It turns out the dispatch mechanism is rather interesting. I write about it here: http://wingolog.org/archives/2008/10/17/dispatch-strategies-in-dynamic-languages Thanks, Andy, for your willingness to plunge into this far-from-perfect code which was carelessly written during a Christmas holiday as a feasibility experiment. The idea of the vector of random values comes from the implementation of CLOS called PCL. The idea of caching versions of a method invocation with specialization to argument types actually used was new to me (although I haven't read the SELF-article you cite). As, I think, Neil Jerram has stated previously, the vision is that one could extend GOOPS by taking advantage of this specialization. Since every specialized method stored in the method invocation cache can assume the type of its arguments, it can also remove a lot of the dispatch within its code. It's no longer necessary to check if a value passed as argument is an integer or if it is an object of class Foo. It is possible to just do an integer operation or directly access a slot in a Foo instance using a constant offset. Best regards, Mikael D.
Re: Goops Valgrind
2008/9/11 Neil Jerram [EMAIL PROTECTED]: Also, is Mikael right with his error #1? I'm thinking not, because I believe that instances are structs too, so surely it's OK to call SCM_STRUCT_DATA (x)[...] on them? It is good that you are sceptical about what I say because it was a long time ago that I was well oriented in this code. However, the problem here is not whether SCM_STRUCT_DATA is operating on the right type of object. The real problem is that it is looking for information which is not stored in an instance. Furthermore, it is stored in the class object at a location with a negative offset. Instances don't have any slots with negative offsets.
Re: let's bytecode it!
2008/4/25 Ludovic Courtès [EMAIL PROTECTED]: Guile-VM is written as an independent project currently, so I don't think it would fit well into core Guile, and I'm actually not sure it'd be a good idea to put it there, at least for now. [Jumping in again although I shouldn't since I don't normally follow activity on the list.] I might be missing something, but I think Andy's idea of putting guile-vm in the core is obviously the right thing to do. In fact, as soon as the vm is fully functional and stable (which may take time, I grant you that) I think it should replace the current evaluator except possibly for debugging. (Currently, the evaluator code is compiled twice---once for generating the sluggish version of the evaluator, and yet another time to produce the even more sluggish debugging evaluator. A vm could replace the sluggish evaluator. The debugging evaluator could be kept for supporting backtraces and the debugger.) I wanted to do this from the first instance when I tried out Keisuke's vm, but could never find time to do it. Even though other properties of Guile *should* be reasons for people to use Guile, I'm personally convinced that people take the sluggishness as a sign of poor code and think this is the major reason why Guile hasn't been adopted to an extent several orders of magnitude greater than currently. Another note: The author of QScheme (which has a very efficient byte-code interpreter) once teamed up with us with the aim to combine ideas from QScheme and Keisuke's vm to implement the Guile bytecode interpreter, but this never got started for real. Maybe it's worth looking at QScheme? Also, if I moved a bytecode interpreter into the core, I would probably look at the possibility to place GOOPS method dispatch as a central mechanism---maybe this could be the only form of type dispatch? I would try to remove type checking in all Guile primitives and move the responsibility for this to this core dispatch mechanism.
In order not to lose backward compatibility for those using the Guile API from C in their applications, one could have type-dispatching glue code with the old C API names. But if this seems too futuristic, just replacing the sluggish evaluator with the vm would give Guile a major boost. Be sure to test it against code using threads, though...
Re: fixes to goops + light structs + 'u' slots
2008/4/19, Andy Wingo [EMAIL PROTECTED]: I wash my hands. :-) When I left, structs were two words. commit 08c880a36746289330f3722522960ea21fe4ddc8 Author: Mikael Djurfeldt [EMAIL PROTECTED] It is natural for our memory to fade over this much time ;-) But if at any point something sparks in your brain to figure out a way around the GC chain, I'd certainly be interested. Otherwise we could put that empty third word to good use. Sigh... Well, apparently I'm the very author of this GC chain thing you are talking to me about. The only thing I can say is that there was probably a reason at the time to do this. Of course, when looking at it now, it doesn't look like good design. In fact, and to be honest, I've never had the impression that structs are good design either. The GC chain mostly looks like a bug fix, doesn't it? The problem is that you don't always have time and resources to redesign the whole thing. Most changes I've done to Guile have been done for the use of Guile in my work. The Right thing to do is probably to throw out structs and design new GOOPS objects, something I wanted to do from the start. Also, when considering GC-related things---remember that a lot of design decisions have been made against the garbage collector as it looked like three revisions ago, or something like that. The current GC is a very different beast. Also, I've seen your GOOPS todo. It's nice to see your willingness to continue development on GOOPS. Unfortunately, I won't have the time to help. Just don't be too quick to throw things out. Some code needs to be replaced, but then, again, some code has thought behind it. Since you say that you couldn't find information in the workbook, I'll try to dig up my ideas regarding the PURE_GENERIC flag and the apply-generic MOP. The extended-generic stuff is a way to get insulation between two modules A and B which both import a module C. Otherwise I think I should shut up.
Oh, and please *don't* try to compile methods at macro expansion time. GOOPS method compilation is based on the crazy idea to have one compiled method per combination of argument types. The idea is that this won't give you a combinatorial explosion if you compile lazily, waiting until the first application for a certain combination. This was an experiment. Apparently, it works well enough in practice. The *big* possibility is that since each compiled method in the method cache has typed arguments, there is the opportunity for *very* interesting optimizations, such as elimination of a lot of the type dispatch (e.g. that in accessors and in other calls to generics).
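[The lazy per-type-combination idea can be illustrated, outside of GOOPS internals, with a toy dispatcher. A sketch only, not the actual GOOPS method cache; `compile-for-types' stands in for the real specializing compiler:]

```scheme
(use-modules (oop goops))  ; for class-of

;; Specialize a method the first time a given combination of argument
;; classes is seen; reuse the compiled version from the cache after that.
(define (make-lazy-dispatcher compile-for-types)
  (let ((cache (make-hash-table)))   ; equal?-hashed: lists of classes as keys
    (lambda args
      (let ((types (map class-of args)))
        (apply (or (hash-ref cache types)
                   (let ((proc (compile-for-types types)))
                     (hash-set! cache types proc)
                     proc))
               args)))))
```

Since each cached procedure may assume its argument classes, it can skip per-call type checks, which is the optimization opportunity described above.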
Re: fixes to goops + light structs + 'u' slots
2008/4/16, Andy Wingo [EMAIL PROTECTED]: On Sun 13 Apr 2008 21:09, Mikael Djurfeldt [EMAIL PROTECTED] writes: I then ran accessor ref tests on objects that necessarily had their slots bound, and thus would go through @assert-bound-ref: (3) If the determination can be made that the slot will never be unbound, and we compile to the @assert-bound-ref case, then accessor refs are indeed faster than slot-ref. [...] I would speculate, Mikael, that it is case (3) that you are recalling. Right, although it is, in fact, @slot-ref which is the special form (I said @assert-bound-ref by mistake). Try the same benchmark accessing the third or fourth slot. Regarding Guile vs Python: Python is byte-compiled. We once had a byte-compiler guile-vm which could have been merged into Guile. Maybe it's still possible with some work. Best regards, Mikael
Re: fixes to goops + light structs + 'u' slots
2008/4/16, Mikael Djurfeldt [EMAIL PROTECTED]: 2008/4/16, Andy Wingo [EMAIL PROTECTED]: On Sun 13 Apr 2008 21:09, Mikael Djurfeldt [EMAIL PROTECTED] writes: I then ran accessor ref tests on objects that necessarily had their slots bound, and thus would go through @assert-bound-ref: (3) If the determination can be made that the slot will never be unbound, and we compile to the @assert-bound-ref case, then accessor refs are indeed faster than slot-ref. [...] I would speculate, Mikael, that it is case (3) that you are recalling. Right, although it is, in fact, @slot-ref which is the special form (I said @assert-bound-ref by mistake). Try the same benchmark accessing the third or fourth slot. To be more clear: It is only for bound slots that the accessor can be compiled down to the special form @slot-ref, which was significantly faster than other tested access methods, using that version of Guile on the type of hardware available at the time goops was developed. It is new to me to see such small differences in timing. Maybe it is also worth unrolling the loop a bit so that you have a sequence of accesses in the loop body?
Re: fixes to goops + light structs + 'u' slots
2008/4/14, Andy Wingo [EMAIL PROTECTED]: I have shied away from GOOPS internals in the past, but every time I have a brush with them I learn something interesting. You're very kind. It's in large parts not easily readable code. What is your perspective regarding foreign-slot? I wrote a bit about what I did recently in Guile-Gnome here: http://wingolog.org/archives/2008/04/11/allocate-memory-part-of-n Sorry for not having time right now to look into this. I define a class with a foreign slot like this:

guile> (use-modules (oop goops))
guile> (define-class <foo> () (bar #:class <foreign-slot>))
guile> (define x (make <foo>))
guile> (slot-set! x 'bar 45)
guile> (slot-set! x 'bar 45000)
Backtrace:
In current input:
   6: 0* [slot-set! #<foo b7dbaa60> bar {45000}]

unnamed port:6:1: In procedure slot-set! in expression (slot-set! x (quote bar) ...):
unnamed port:6:1: Value out of range 0 to 4294967295: 45000
ABORT: (out-of-range)

Is this designed to work? It seems that all is still not right, @slot-ref (only used in accessors, not in slot-ref) accesses the slot as a SCM regardless of whether it is a 'u' slot or a 'p' slot. I suppose that part of the dispatch/compilation needs to be made more sophisticated. Right. That part of the implementation is not finished. The compilation of accessors is in goops.scm. Another part, which is also not finished (or wasn't when I left) is parts of the metaobject protocol. I had some specific ideas how the MOP part should be completed, which I hope I left somewhere in the workbook repository. Also, SMOB-based accessors would be more heavyweight with regard to class creation and memory consumption than the current solution. Regarding memory consumption. Currently, structs are double-cells: one word for the vtable, one for the data, one empty, and one for the STRUCT_GC_CHAIN, used (please correct me) during GC to ensure that structs are freed before their vtables.
This seems to me to be a waste of memory in instances, in that they occupy 4 words when they only need 2. Is it not possible to avoid this? I have puzzled over this for a number of hours, but have not really come up with anything that seems workable, given our lazy incremental sweeping. I suppose another bitvector for structs in the cards would work; you could run through it at the end of marking, and mark all structs' vtables. I wash my hands. :-) When I left, structs were two words. Not that I don't appreciate the new garbage collector. In any case, *please* always benchmark changes like this against previous code. Will work on some benchmarking, I am very interested to see how some of the method dispatch and compilation code works. Thanks! ps. I'm happy to see you around! Ahh, I'm sorry that I'm actually only a ghost. Have to focus on other stuff right now, but I'm still a Guile user!
Re: fixes to goops + light structs + 'u' slots
2008/4/14, Andy Wingo [EMAIL PROTECTED]: Is this designed to work? It seems that all is still not right, @slot-ref (only used in accessors, not in slot-ref) accesses the slot as a SCM Right, the special form is @slot-ref, not @assert-bound-ref as I stated previously.
Re: stack overflow
2008/2/17, Han-Wen Nienhuys [EMAIL PROTECTED]: Isn't it possible to catch SIGSEGV, and check whether it was caused by overflow? Couldn't that leave the interpreter in a strange state, so that one would need to quit and restart? The current scheme allows the interpreter to continue to run after an error.
Re: stack overflow
2008/2/14, Ludovic Courtès [EMAIL PROTECTED]: Speaking as a user, I would prefer a solution where the evaluator measures stack size the same way as currently (i.e. without the need to do extra work at every return). It is possible to estimate the average sizes of evaluator stack frames during startup and use this as a conversion factor in the debug-options interface (scm_debug_opts) so that the user setting is approximately consistent between platforms. Hmm, I don't see how we could reliably estimate this, and I'm afraid it would add non-determinism (e.g., estimate that varies with the phase of moon, dubious estimates, loads of users suddenly reporting stack overflows because Fedora Core now ships with a bleeding-edge compiler no one else uses, etc.). I was thinking about inserting code which actually *measures* the size of frames during startup. This could be done, for example, by introducing a primitive which uses the internal stack measuring functions. One could use this primitive to measure how much stack space some code sample uses. By our knowledge of how many evaluator stack frames this code sample uses, we can compute a reliable estimate for the running instance of Guile.
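[The calibration idea might look something like this. Hypothetical sketch: `%stack-size' stands for the proposed primitive exposing Guile's internal stack measurement and does not exist in Guile:]

```scheme
;; Measure the stack consumed by n non-tail evaluator frames, then
;; divide to get an average frame size for this build/platform.
(define (stack-used-by n)
  (if (zero? n)
      (%stack-size)                     ; hypothetical primitive
      (+ 0 (stack-used-by (- n 1)))))   ; the (+ 0 ...) defeats tail calls

(define (average-frame-size)
  (/ (- (stack-used-by 1000) (stack-used-by 0)) 1000))
```

The result could then serve as the conversion factor for the stack-limit setting in scm_debug_opts.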
Re: the future of Guile
2007/12/4, Marco Maggi [EMAIL PROTECTED]: I think that it is time for a chat on the future of Guile. Some pieces of input to the discussion: * There is (or should be) a module called workbook in the repository. It contains policy documents setting out the direction for Guile development. It is important to be consistent regarding the direction, over long time, otherwise Guile doesn't come very far. The Guile policy documents should be material for discussion if the future of Guile is on the table. * In my personal opinion, Guile needs a bytecode interpreter. Keisuke long ago wrote guile-vm which succeeded well concerning speed. One option is to dust it off and do the last part of development. * Personal wish for Guile-2: Some primitive procedures, like display, + and equal? are, in principle, generics. I think the evaluator (eval.c), which currently is a frightening monster, could be both cleaner and more efficient if all procedures were generic functions. This could lead to the elimination of all special cases in eval.c and elimination of argument checking in all primitives throughout the Guile code base. * It would be great if someone could complete the GOOPS meta object protocol. * What about writing a python-module adapter so that Guile can dynamically load python extension modules, using GOOPS for wrapping? 8-) ___ Guile-devel mailing list Guile-devel@gnu.org http://lists.gnu.org/mailman/listinfo/guile-devel
Re: SLIB
2007/8/11, Ludovic Courtès [EMAIL PROTECTED]: I'd like to fix the SLIB issue in 1.8.3. SLIB 3a4 works perfectly well with 1.8. The thing is that `(ice-9 slib)' is of no use. It's of no use since no-one has added the functions which Aubrey has added to guile.init when changing slib's interface to the interpreter. Adding those functions is, however, an easy thing to do. I'm not sure that the diff I've included is appropriate for the latest slib, but it could very well be. Apart from providing a more natural division regarding what belongs to Guile and what belongs to slib, slib.scm makes sure that each time some module requires new slib code, it will be loaded into the module (ice-9 slib) and exported from there. I'm not at all sure that guile.init does that, and if it doesn't it will lead to strange behavior: If Guile module A requires some slib feature F1, and, later, a totally unconnected Guile module B requires slib feature F2, which depends on F1, the loading of F2 may or may not lead to a reload of F1 into module B (depending on how guile.init has been implemented). If it leads to a reload, code will be duplicated in modules A and B. If it doesn't lead to a reload, F2 won't find the feature F1 which it requires, since it exists in module A. Are you sure that your suggested slib.scm doesn't have any of the above two problems?

Index: slib.scm
===================================================================
RCS file: /cvsroot/guile/guile/guile-core/ice-9/slib.scm,v
retrieving revision 1.46
diff -r1.46 slib.scm
73a74,145
> ;;; (software-type) should be set to the generic operating system type.
> ;;; UNIX, VMS, MACOS, AMIGA and MS-DOS are supported.
> (define software-type
>   (if (string<? (version) "1.6")
>       (lambda () 'UNIX)
>       (lambda () 'unix)))
> (define (user-vicinity)
>   (case (software-type)
>     ((vms) "[.]")
>     (else "")))
> (define vicinity:suffix?
>   (let ((suffi
>          (case (software-type)
>            ((amiga) '(#\: #\/))
>            ((macos thinkc) '(#\:))
>            ((ms-dos windows atarist os/2) '(#\\ #\/))
>            ((nosve) '(#\: #\.))
>            ((unix coherent plan9) '(#\/))
>            ((vms) '(#\: #\]))
>            (else (warn "require.scm" 'unknown 'software-type (software-type))
>                  "/"))))
>     (lambda (chr) (and (memv chr suffi) #t))))
> (define (pathname->vicinity pathname)
>   (let loop ((i (- (string-length pathname) 1)))
>     (cond ((negative? i) "")
>           ((vicinity:suffix? (string-ref pathname i))
>            (substring pathname 0 (+ i 1)))
>           (else (loop (- i 1))))))
> (define (program-vicinity)
>   (define clp (current-load-port))
>   (if clp
>       (pathname->vicinity (port-filename clp))
>       (slib:error 'program-vicinity "called; use slib:load to load")))
> (define sub-vicinity
>   (case (software-type)
>     ((vms) (lambda (vic name)
>              (let ((l (string-length vic)))
>                (if (or (zero? (string-length vic))
>                        (not (char=? #\] (string-ref vic (- l 1)))))
>                    (string-append vic "[" name "]")
>                    (string-append (substring vic 0 (- l 1)) "." name "]")))))
>     (else (let ((*vicinity-suffix*
>                  (case (software-type)
>                    ((nosve) ".")
>                    ((macos thinkc) ":")
>                    ((ms-dos windows atarist os/2) "\\")
>                    ((unix coherent plan9 amiga) "/"))))
>             (lambda (vic name)
>               (string-append vic name *vicinity-suffix*))))))
> (define with-load-pathname
>   (let ((exchange
>          (lambda (new)
>            (let ((old program-vicinity))
>              (set! program-vicinity new)
>              old))))
>     (lambda (path thunk)
>       (define old #f)
>       (define vic (pathname->vicinity path))
>       (dynamic-wind
>           (lambda () (set! old (exchange (lambda () vic))))
>           thunk
>           (lambda () (exchange old))))))
204a277,278
> (define slib:features *features*)
Re: Heads up: Releasing 1.8.2
2007/6/25, Greg Troxel [EMAIL PROTECTED]:

guile
guile> (use-modules (ice-9 slib))
WARNING: (guile-user): imported module (ice-9 slib) overrides core binding `provide'
WARNING: (guile-user): imported module (ice-9 slib) overrides core binding `provided?'
guile> (version)
"1.8.1"
guile>

The problem is that native guile has provide/require, and so does slib. Guile used to use different words, and they were changed for some reason. My view is that slib is a de facto reserved namespace :_) Isn't the original and only purpose of Guile's provide/require to be part of support for slib? I think the idea was to tell slib (and other code) what feature sets are already present in Guile. My guess is that the original was simply a clone of the corresponding parts of slib, so that they could simply replace the slib version. With time, however, slib has developed independently of Guile. I think what needs to be done is to come to an agreement with Aubrey Jaffer about what the interface to slib should look like and how it should be maintained. When I last looked at it, slib's Guile interface was based on heavily outdated assumptions. Preferably, the Guile part of slib should be minimal, giving larger freedom for development on both sides. When doing this, it is important that the result works well with Guile's module system.
Re: freeing srcprops ?
2007/1/29, Han-Wen Nienhuys [EMAIL PROTECTED]: Neil Jerram escreveu: Kevin Ryde [EMAIL PROTECTED] writes: Han-Wen Nienhuys [EMAIL PROTECTED] writes: why use a separate storage pool for srcprop objects? At a guess, is it because that they're likely to never need freeing, hence can be laid down in big blocks. I'd guess because setting up a srcprops is critical to start-up performance, and a double cell doesn't have enough slots to store all the common properties (filename, pos, copy) directly (as your change makes clear). All this guessing ... I suspect it was done just because of poor design and/or premature optimization. While I have to admit that I was a novice programmer at the time I wrote this code, and definitely didn't have enough experience to judge what is a good design, I fail to see what is so bad about that code. Also, please remember that things looked quite different then. For example, there were no double cells. Neil is quite right about the reasons for doing it like that. As you know, computers were at that time a lot slower. Also, Guile, and especially SCM which Guile was derived from, was far more efficient, so that small code changes weren't drowned but had a noticeable impact. I would say the idea of storing a lot of information for every s-expression in the code was at that time a bit outrageous, so I saw it as very important to prove that this concept was in fact realistic. So I made sure that allocation size and time was optimized. Another aspect is that the breakpoint flag needed quick access in order to test if such a scheme could be used for breakpoints instead of code substitution (which might still be a better idea). On the factual side: 1. the GUILE ends up with 1506 srcprops objects. Source properties have to work also for large projects, so it's that kind of situation we need to look at. I think my own projects have reached 2 objects or more. 2. this is neglible compared to the 431777 total cells that are allocated.
Yes, it could very well be the case that an extra effort is poorly motivated by memory usage alone. 3. Due to sharing of the filename cons, memory usage is slightly more than 4 SCMs per srcprop, down from 6 SCMs (2 for the smob cell, 4 for the struct) Hmm... Your filename optimization doesn't really work, does it? As soon as someone sets a breakpoint, he gets it all over the place, or did I miss something? If I'm right, the 1996 solution was, at least, down to 6 SCM from 8 (since we didn't have double cells), while your solution is in fact 4+4=8. But you could probably easily get it down to 6---not that it matters much now. Because the code made me cringe. It's pointless to have specialized storage for srcprops. it only makes the code more obtuse. If we considered implementing source properties *now*, I would probably agree. But the code is already there and I wonder if there really is any gain of replacing it. For example, the code you need to add in order to do the filename optimization is hardly much more maintainable than what we already had in there, and it is my guess that whatever alternative code you propose, it won't be faster or consume less memory. Also, Neil should probably study the effects on his debugging code of putting the breakpoint flag in the standard property list. What happens if the property list is used for other things so that it has to be traversed for, for example, one step? I guess you active guys should sort this out between yourselves. Thanks for working on Guile, Best regards, M