SRFI 201 and 202: a reference implementation and an invitation to discussion
Hi,

A while ago I proposed a few SRFIs, in particular:

- https://srfi.schemers.org/srfi-201, which provides extensions to core bindings (define, lambda, let, let*, or);
- https://srfi.schemers.org/srfi-202, which extends SRFI 2 with the capability of pattern matching.

Recently, with a second draft, I have added their implementations for Guile to the SRFI repos:

https://github.com/scheme-requests-for-implementation/srfi-201/
https://github.com/scheme-requests-for-implementation/srfi-202/

I'd like to invite everyone to participate in the discussion and, if it concludes well, to include the implementation in the distribution.
Re: Wisp as shipped language in Guile?
2017-05-13 3:52 GMT+02:00 Mark H Weaver: > Hi Arne, > > Arne Babenhauserheide writes: > > A few weeks ago I asked in IRC whether wisp[1] could be included with > > Guile in modules/language/wisp to allow every guile users to run wisp > > code from any Guile installation via > > > > > > $ guile --language=wisp [] > > > > > > Essentially this is about making wisp as language part of the > > "batteries" of Guile. > > About 4.5 years ago, I went out on a limb and added SRFI-105 (curly > infix expressions) to core Guile. Even now, I'm of the opinion that > judicious use of curly infix could be beneficial for readability, but as > far as I can tell, there's been essentially no uptake. If I'm wrong > about that, please let me know. > > Although (a subset of) SRFI-105 seems like a clear win to me, I cannot > say the same of either Wisp or SRFI-110 (Sweet expressions). > > "The idea of introducing Algol-like syntax into Lisp keeps popping up and has seldom failed to create enormous controversy between those who find the universal use of S-expressions a technical advantage (and don’t mind the admitted relative clumsiness of S-expressions for numerical expressions) and those who are certain that algebraic syntax is more concise, more convenient, or even more natural (whatever that may mean, considering that all these notations are artificial). We conjecture that Algol-style syntax has not really caught on in the Lisp community as a whole for two reasons. First, there are not enough special symbols to go around. When your domain of discourse is limited to numbers or characters, there are only so many operations of interest, and it is not difficult to assign one special character to each and be done with it. 
But Lisp has a much richer domain of discourse, and a Lisp programmer often approaches an application as yet another exercise in language design; the style typically involves designing new data structures and new functions to operate on them—perhaps dozens or hundreds—and it’s just too hard to invent that many distinct symbols (though the APL community certainly has tried). Ultimately one must always fall back on a general function-call notation; it’s just that Lisp programmers don’t wait until they fail. Second, and perhaps more important, Algol-style syntax makes programs look less like the data structures used to represent them. In a culture where the ability to manipulate representations of programs is a central paradigm, a notation that distances the appearance of a program from the appearance of its representation as data is not likely to be warmly received (and this was, and is, one of the principal objections to the inclusion of loop in Common Lisp). On the other hand, precisely because Lisp makes it easy to play with program representations, it is always easy for the novice to experiment with alternative notations. Therefore we expect future generations of Lisp programmers to continue to reinvent Algol-style syntax for Lisp, over and over and over again, and we are equally confident that they will continue, after an initial period of infatuation, to reject it. (Perhaps this process should be regarded as a rite of passage for Lisp hackers.)" Guy L. Steele, Jr. and Richard P. Gabriel, "The Evolution of Lisp" My personal opinion is that adding new ways for expressing the same thing that can already be expressed in the language is a bad thing, because it only increases the entropy of your code base. When it comes to readability, I don't think that we need to make the reader syntax more complex. 
What we need is an editor with a good typesetting feature, so that various programmers could use the formatting of their liking (say, making their code look more like Haskell or more like Julia or some other mathematical idols like Mathematica) without making any invasive changes to the code base.

When it comes to curly infix, I don't think that it really is an advantage for arithmetic expressions. But I do agree that infix expressions are actually clarifying in the case of asymmetrical binary relations: x < y is a bit more obvious than (< x y). However, this can be achieved using regular Scheme syntax, say, with the following macros:

(define-syntax infix
  (syntax-rules ()
    ((_ x related-to? y)
     (related-to? x y))
    ((_ x related-to? y . likewise)
     (and (infix x related-to? y)
          (infix y . likewise)))))

(define-syntax is
  (syntax-rules (_)
    ((is _ related-to? right . likewise)
     (lambda (_) (infix _ related-to? right . likewise)))
    ((is left related-to? _ . likewise)
     (lambda (_) (infix left related-to? _ . likewise)))
    ((is x related-to? y . likewise)
     (infix x related-to? y . likewise))
    ((is x) ;; thanks to Arne's earlier suggestions
     (lambda (y) (equal? x y)))
    ((is left relation) ;; same as "(is left relation _)"
     (lambda (right) (relation left right)))))

Perhaps the "is" macro allows more than it should, and "infix"
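Assuming the flattened parentheses of those macros have been restored correctly, they can be exercised like this (a sketch; the `_`-as-literal clauses of `is` may need adaptation on implementations where `_` is a reserved wildcard in syntax-rules):

```scheme
(infix 1 < 2 < 3)              ;; ⇒ #t, expands to (and (< 1 2) (< 2 3))
(infix 3 < 2)                  ;; ⇒ #f

;; (is 2 <) curries the relation on its left argument:
(filter (is 2 <) '(1 2 3 4))   ;; ⇒ (3 4)

;; (is x) tests for equality:
(filter (is 'a) '(a b a c))    ;; ⇒ (a a)
```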
Re: srfi-1 take and drop seriously broken
2016-11-21 8:34 GMT+01:00 Jan Synáček:
> Ok. Apart from the fact that it's written in srfi, I wonder what the
> reasoning for such behavior is. I mean, what makes the "i" bigger than
> the length of the list so illegal that you have to bail out? When is
> such behavior useful? On the other hand, not having to worry about the
> list length is very useful. Because now my code is littered with
> things like
>
> ;; Let's hope that string-length is O(1).
> (if (>= width (string-length item))
>     item
>     (string-take item width))
>
> or
>
> (if (string-null? text)
>     ""
>     (string-drop-right text 1))
>
> Maybe I'm just doing something wrong?

The variants of take and drop that you'd like to have would need to perform additional checks in order to find out whether the object that you are trying to drop from or take from is a pair. When it comes to your code, it's really hard to say whether you're doing something wrong or not, because it doesn't say why you're doing what you're doing. But in the case of string operations it is often more convenient to use regular expressions.

(However, I admit that I rarely have the need for take-upto and drop-upto functions. I often use the split-at function, and I really appreciate that it throws an error when invoked with an index that lies beyond a given list; otherwise, I'd have to worry about silent errors that could appear in my code. Likewise, you could have (car '()) and (cdr '()) return an empty list, which would make your code "fool-proof", but I don't think it would have good implications in the long run. Anyway, even Haskell's head and tail don't work like that.)
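The repeated length guards in the quoted code can be factored out once and for all. A minimal sketch; the names string-take-upto and string-drop-right-upto are made up for illustration and are not part of SRFI-13:

```scheme
(use-modules (srfi srfi-13)) ;; string-take, string-drop-right

;; Take at most n characters from the front of s.
(define (string-take-upto s n)
  (string-take s (min n (string-length s))))

;; Drop at most n characters from the end of s.
(define (string-drop-right-upto s n)
  (string-drop-right s (min n (string-length s))))

(string-take-upto "abcdef" 3)    ;; ⇒ "abc"
(string-take-upto "ab" 5)        ;; ⇒ "ab"
(string-drop-right-upto "" 1)    ;; ⇒ ""
```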
Re: srfi-1 take and drop seriously broken
2016-11-20 11:42 GMT+01:00 Jan Synáček:
> >> Please, tell me that this is just a mistake... This can't be true. I
> >> still can't believe it. This is from 2.0.11. Please, tell me that the
> >> implementation is fixed in 2.2.
> >>
> >> Yours truly puzzled,
> >
> > I don't know why you find it so puzzling. You can't take or drop something
> > that "isn't there" (you can't take a car or cdr from an empty list as well,
> > although e.g. in the language of "The Little Prover" (car '()) and (cdr '())
> > both evaluate to '() to assure their totality). If you need, you can define
> > your own variants that take/drop at most n elements of a list.
>
> Not only that you "can", it's also IMHO a fool-proof implementation
> and I can't see any reason why it should behave differently.

Because someone might think that if he took 7 elements, then he has 7 elements, so it is good that he learns earlier that this is not the case. I don't see a point in referring to Haskell documentation when discussing Scheme functions, though (if you try Racket, you'll note that although its take has a reversed order of arguments compared to srfi-1, it still doesn't allow taking or dropping a positive number of elements from an empty list). I agree, though, that the srfi-1 document isn't explicit enough about this point.
Re: srfi-1 take and drop seriously broken
2016-11-19 19:34 GMT+01:00 Jan Synáček:
> Hi,
>
> scheme@(guile-user)> ,use (srfi srfi-1)
> scheme@(guile-user)> (take (list 1 2 3) 4)
> ERROR: In procedure list-head:
> ERROR: In procedure list-head: Wrong type argument in position 1
> (expecting pair): ()
>
> scheme@(guile-user) [1]> (drop (list 1 2 3) 4)
> ERROR: In procedure list-tail:
> ERROR: In procedure list-tail: Wrong type argument in position 1
> (expecting pair): ()
>
> Please, tell me that this is just a mistake... This can't be true. I
> still can't believe it. This is from 2.0.11. Please, tell me that the
> implementation is fixed in 2.2.
>
> Yours truly puzzled,

I don't know why you find it so puzzling. You can't take or drop something that "isn't there" (you can't take a car or cdr from an empty list as well, although e.g. in the language of "The Little Prover" (car '()) and (cdr '()) both evaluate to '() to assure their totality). If you need, you can define your own variants that take/drop at most n elements of a list. Or you could use the take-upto/drop-upto functions from the (grand scheme) library that I maintain: https://github.com/plande/grand-scheme (check the grand/list.scm file for the definition).
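For reference, such at-most-n variants can be sketched in a few lines (an illustration, not necessarily the code from grand/list.scm):

```scheme
;; Take at most n elements from the front of lst.
(define (take-upto lst n)
  (if (or (zero? n) (null? lst))
      '()
      (cons (car lst) (take-upto (cdr lst) (- n 1)))))

;; Drop at most n elements from the front of lst.
(define (drop-upto lst n)
  (if (or (zero? n) (null? lst))
      lst
      (drop-upto (cdr lst) (- n 1))))

(take-upto '(1 2 3) 4) ;; ⇒ (1 2 3), where (take '(1 2 3) 4) raises an error
(drop-upto '(1 2 3) 4) ;; ⇒ ()
```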
Re: Shorter lambda expressions
2016-09-23 18:44 GMT+02:00 Panicz Maciej Godek <godek.mac...@gmail.com>: > I hope you don't mind me having dug this thread up, with an idea that is > only loosely related with the original one. > > Recently I've been doing a small project in Clojure, and I've found that > it provides a function called "partial" that performs a sort of partial > application. > > With guile's curried definitions, it can be defined as > > (define ((partial function . args) . args+) > (apply function `(,@args ,@args+))) > > and it works rather nicely: > > (map (partial cons 2) '((3 4) (3 5) (4 6) (7 1))) > ===> ((2 3 4) (2 3 5) (2 4 6) (2 7 1)) > > I believe that -- since it is just a function -- it is much less > controversial than both the short macros and SRFI-26 (although its range > of applicability is narrower), and it seems to compose well with the spirit > of Scheme, so maybe that would be a nice-have? > I take it back. It is a terrible idea. Using explicit lambda is a better solution, because it gives an opportunity to provide a name for an element of a list. Yet the name "partial" reads terribly. Compound usages such as (partial map (partial cons 1)) are much worse than their regular counterparts, i.e. (lambda (list) (map (lambda (element) (cons 1 element)) list)) because even though the latter are more lengthy, this length actually serves the purpose of exposing the structure of expression. In the former case, it isn't clear (without knowing the arity of map) what will be the arity of the whole expression, nor the role of those arguments. It seems to be a problem even in the case of well-known functions such as cons or map. Sorry for the noise
Re: Shorter lambda expressions
I hope you don't mind me having dug this thread up, with an idea that is only loosely related with the original one. Recently I've been doing a small project in Clojure, and I've found that it provides a function called "partial" that performs a sort of partial application. With guile's curried definitions, it can be defined as (define ((partial function . args) . args+) (apply function `(,@args ,@args+))) and it works rather nicely: (map (partial cons 2) '((3 4) (3 5) (4 6) (7 1))) ===> ((2 3 4) (2 3 5) (2 4 6) (2 7 1)) I believe that -- since it is just a function -- it is much less controversial than both the short macros and SRFI-26 (although its range of applicability is narrower), and it seems to compose well with the spirit of Scheme, so maybe that would be a nice-have?
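For readers on implementations without Guile's curried definitions, the same partial can be written portably with an explicit lambda (an equivalent sketch):

```scheme
(define (partial function . args)
  (lambda args+
    (apply function (append args args+))))

(map (partial cons 2) '((3 4) (3 5) (4 6) (7 1)))
;; ⇒ ((2 3 4) (2 3 5) (2 4 6) (2 7 1))
```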
Re: anyone define port types?
2016-03-30 19:53 GMT+02:00 Marko Rauhamaa:
> I like OOP, only I don't like GOOPS. Its classes and generic functions
> seem so idiomatically out of place, unschemish, if you will.
>
> This is how OOP ought to be done:
>
> https://www.gnu.org/software/guile/manual/html_node/OO-Closure.html#OO-Closure

The problem with closures is, among others, that they are non-serializable (I think that Termite solves some of those issues gracefully).

> I have created a tiny Guile module ("simpleton") that generalizes the
> principle. In particular,
>
> * You don't need classes for OOP. You only need objects.

JavaScript made a similar assumption, which -- I believe -- turned out cumbersome, because I sometimes have to pretend that it has classes despite that it doesn't.

> * Do tie methods to objects. Don't pretend methods are external to
>   objects.
>
> * Don't expose the internal state of objects. Only interact with the
>   object through methods.

I think the latter is a good rule, but it's better if you can do without state. I believe that methods *are* external to objects. Linguistically, you don't "button.press()"; you "press(button)". I also think that tying methods to objects is one of the problems of OOP, because the designer of an object has to know in advance which actions on an object are conceivable. Perhaps certain devices, like tape recorders, are modelled well within OOP -- you have a well-defined interface (like the play, rewind and fast-forward buttons) and a hidden internal state. But I'm afraid that this metaphor doesn't scale well.
Re: anyone define port types?
2016-03-30 13:18 GMT+02:00 Jan Nieuwenhuizen <jann...@gnu.org>:
> Panicz Maciej Godek writes:
>
> > I also used GOOPS, which I regret to this day, and so the
> > whole framework needs a serious rewrite
>
> What is it that you do not like about GOOPS?

Most specifically, I dislike its middle three letters. The problem with OOP is that it requires you to know exactly what you want -- it is difficult to change the design of your program after it's been written (and it is also difficult to come up with a good design from the beginning), and -- since it is based on state mutation -- it makes it difficult to reason about your program. On the practical side, it was a bit counterintuitive that and were unrelated, and I think that there were some issues with and types.
Re: anyone define port types?
Hi Andy, I have been using soft ports to implement a text widget in my GUI framework. I also used GOOPS, which I regret to this day, and so the whole framework needs a serious rewrite, but if you're collecting various species to the museum of make-soft-port, you can have a look: https://bitbucket.org/panicz/slayer/src/8b741c21b7372c24874d7cd1875e2aae3fc43abe/guile-modules/widgets/text-area.scm?at=default=file-view-default Regards, Panicz
Re: What is needed in guildhall to include it in Guile?
Hi Arne!

2016-02-22 16:08 GMT+01:00 Arne Babenhauserheide:
> Hi,
>
> In january there was a thread here about Guildhall with the notion
>
> > I encourage you to hack on Guildhall to make it more usable for your
> > needs.
>
> I finished my PhD last month, so I have some freed-up time — and I would
> like to use some of it to hack on Guildhall and make it ready for
> inclusion in Guile.
>
> However there’s one stumbling block: I don’t see what’s actually missing
> from it. So I want to be bold and request something:
>
> Please tell me what’s missing in Guildhall, so I can implement it.

I'm glad that you wrote about this topic. I admit that I have used neither Guildhall nor Guix, but from what I've seen in other languages, I think it is absolutely sane to have a "language-specific package manager" -- while perhaps some packages depend on additional toolchains, this doesn't concern pure Guile/Scheme modules.

I think that perhaps it would need to focus on community -- I would like to have a place where I could keep my modules easily for me and other people to use -- similar to github, but focused specifically on Guile/Scheme. I think it would be awesome if there were some statistics concerning the popularity of modules, as well as an on-site possibility to report bugs and surprising behaviors. Another thing that I believe would be cool is if there was absolutely no need to install the packages -- the invocation of (use-modules) would fetch them (with dependencies) from the remote server (and verify them as needed). A controversial thing is whether to use the Guile module system or R6RS.
I personally don't like the latter too much, but perhaps it is a question of integrating it nicely with Emacs (after all, it would be a big win for the whole Scheme community if the package manager could be ported to other implementations, and the packages could be shared -- as in the case of SNOW packages[1]) Best regards, Panicz [1] http://snow.iro.umontreal.ca/
Re: Towards De-icing ice-9 modules.
2016-02-12 21:41 GMT+01:00 Chad Albers:
> Hi,
>
> In my attempt to assist the guile project, I thought I would share a
> document on a plan to migrate some of the ice-9 modules into a more
> intuitive, yet to be decided, namespace. Before I proposed a technological
> plan, I have begun really an audit of what ice-9 modules are available (and
> undocumented), and other modules that guile ships with. (there are some
> secrets down there).

Hi, maybe I'm on the conservative side, but as far as I can tell, the recurring suggestion is to rename modules called (ice-9 xxx) to (guile xxx). While I do agree that the "ice-9" name isn't particularly intuitive, it does provide a metaphor that grasps the idea that inspired Guile. Beside this little difference -- that "ice-9" might be slightly unobvious to newcomers -- I see no cognitive advantage in that renaming, while there is a huge disadvantage of breaking backwards compatibility of many programs that use Guile. If the modification were to be meaningful, we should group modules into logical categories -- for example, rename (ice-9 and-let-star) to (syntax and-let*), (ice-9 threads) to (control threads) and (ice-9 readline) to (utils readline).

What I think would be a cooler idea is to provide a mechanism for automatically fetching the required modules (in their required versions) from specified git repositories, so that once a program is written, one wouldn't have to worry about its dependencies. It would also be nice to have a tool that could trace the modifications in the source code to see whether they contain any changes that could break existing functionality compared to some earlier version. (This would probably be difficult to do in general, but perhaps there are some common use cases that could be easily covered.)

Best regards,
Panicz
Re: I wrote fluid advection code: How to make this more elegant?
Hi,

although I cannot be of immediate help with the topic, because I don't know anything about advection, I think that the main problem with your code is that it is imperative. While Python's stylistics handles imperative code quite nicely, it looks rather strange in Scheme. From my experience, the proper Scheme solution should resemble the mathematical formulation of a problem (except that it should be more descriptive), rather than a list of steps for solving it.

Also, it would be more readable to use pattern matching instead of list indexing, so most likely your expression

(let ((newvalue (+ (- (psir i)
                      (* c1 (- (psir (+ i 1)) (psir (- i 1)))))
                   (* c2 (+ (- (psir (+ i 1)) (* 2 (psir i)))
                            (psir (- i 1)))))))
  (array-set! psinew newvalue i))

should look like this (treating psir as a list rather than an accessor, with zero-padded boundaries):

(map (lambda (prev this next)
       (- this
          (* c1 (- next prev))
          (* (- c2) (+ next (* -2 this) prev))))
     `(0 ,@(drop-right psir 1))
     psir
     `(,@(drop psir 1) 0))

While this may also look slightly difficult to read (and write), this isn't solely because of the expression's structure, but because the factors of the expression have no name, and therefore the source code doesn't explain their role (this is a problem of the Python code as well, but Python doesn't prompt you to fix it).

HTH

PS I think that this subject fits better on guile-user.

2016-01-23 11:00 GMT+01:00 Arne Babenhauserheide:
> Hi,
>
> I just recreated a fluid advection exercise in Guile Scheme and I’m not
> quite happy with its readability. Can you help me improve it?
>
> My main gripe is that the math does not look instantly accessible.
>
> The original version was in Python:
>
> psi[i] - c1*(psi[i+1] - psi[i-1]) + c2*(psi[i+1] - 2.0*psi[i] + psi[i-1])
>
> My port to Scheme looks like this:
>
> (let ((newvalue (+ (- (psir i)
>                       (* c1 (- (psir (+ i 1)) (psir (- i 1)))))
>                    (* c2 (+ (- (psir (+ i 1)) (* 2 (psir i)))
>                             (psir (- i 1)))))))
>   (array-set!
psinew newvalue i)) > > > Liebe Grüße, > Arne > > Here’s the full code: > > #!/bin/sh > # -*- scheme -*- > exec guile -e '(@@ (advection) main)' -s "$0" "$@" > !# > > ; Copyright (c) 2015 John Burkardt (original Python), 2016 Corinna > ; Hoose (adaption) and 2016 Arne Babenhauserheide (pep8 + Scheme > ; version). > > ; License: LGPL, built on the Python version from 2015 John Burkardt > ; and Corinna Hoose. License LGPL. > > (define-module (advection) > #:use-module (ice-9 optargs) ; define* > #:use-module (srfi srfi-1) ; iota > #:use-module (ice-9 format) > #:use-module (ice-9 popen)) > > > (define* (fd1d-advection-lax-wendroff #:key (nx 101) (nt 1000) (c 1)) > (let* ((dx (/ 1 (- nx 1))) > (x (iota nx 0 (/ 1 nx))) > (dt (/ 1 nt)) > (c1 (* #e0.5 (* c (/ dt dx > (c2 (* 0.5 (expt (* c (/ dt dx)) 2 > (format #t "CFL condition: dt (~g) ≤ (~g) dx/c\n" dt (/ dx c)) > (let ((psi (make-array 0 nx)) > (X (make-array 0 nx (+ nt 1))) > (Y (make-array 0 nx (+ nt 1))) > (Z (make-array 0 nx (+ nt 1 > (let ((psinew (let ((pn (make-array 0 nx))) > (let loop ((i 0)) > (cond ((= i nx) >pn) > (else >(let ((xi (list-ref x i))) > (when (and (<= 0.4 xi) (<= xi 0.6)) >(array-set! pn >(* (expt (- (* 10 xi) 4) 2) > (expt (- 6 (* 10 xi)) 2)) >i)) > (loop (+ 1 i) > (define (psir i) (array-ref psi i)) > (let loop ((j 0)) > (cond >((> j nt) #t) ; done >(else > (let ((t (/ j nt))) > (when (>= j 1) > (let ((newvalue (+ (- (psir 0) > (* c1 (- (psir 1) >(psir (- nx 1) >(* c2 (+ (- (psir 1) >(* 2 (psir 0))) > (psir (- nx 1))) > (array-set! psinew newvalue 0)) > (let loop ((i 1)) > (when (< i (- nx 1)) > (let ((newvalue (+ (- (psir i) > (* c1 (- (psir (+ i 1)) (psir (- > i 1) >(* c2 (+ (- (psir (+ i 1)) (* 2 > (psir i))) > (psir (- i 1))) > (array-set! psinew newvalue i)) >
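For completeness, the map-based style advocated in the reply can be packaged as a self-contained step function. This is a sketch under the assumption of zero-padded boundaries (the quoted program instead wraps around periodically), with psi represented as a plain list:

```scheme
(use-modules (srfi srfi-1)) ;; drop, drop-right

;; One Lax-Wendroff advection step:
;; psi[i] - c1*(psi[i+1] - psi[i-1]) + c2*(psi[i+1] - 2*psi[i] + psi[i-1])
(define (lax-wendroff-step psi c1 c2)
  (map (lambda (prev this next)
         (+ (- this (* c1 (- next prev)))
            (* c2 (+ next (* -2 this) prev))))
       (cons 0 (drop-right psi 1))   ;; psi[i-1], padded with 0
       psi                           ;; psi[i]
       (append (drop psi 1) '(0))))  ;; psi[i+1], padded with 0

(lax-wendroff-step '(0 1 0) 1/2 1/8) ;; ⇒ (-3/8 3/4 5/8)
```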
Re: Request for feedback on SRFI-126
2015-09-28 10:13 GMT+02:00 Taylan Ulrich Bayırlı/Kammer <taylanbayi...@gmail.com>:
> Panicz Maciej Godek <godek.mac...@gmail.com> writes:
>
> > Maybe you should explain why there are so many implementations of
> > Scheme in the first place? (That isn't the case for Python, Java or
> > Perl)
>
> Because it's too easy to make a standards-compliant implementation
> because the standards ask for too little from compliant implementations.
> And/or other reasons; don't really know your intent with the question.

Because Scheme is constructed by removing unnecessary features, rather than by adding them.

> [...]
>
> > That's the Grand Dream, but I don't even think it's *that* far
> > away:
> >
> > Had I such a "Grand Dream", I'd go with Python, because they already
> > have that.
>
> Python still lacks many of Scheme's greatest features. :-)

The MIT course 6.01 that replaces SICP introduces a language called Spy, which is a mixture of Python and Scheme. Might be worth checking out, although I don't really think so. Anyway, if anyone's interested, it can be found here: http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-01sc-introduction-to-electrical-engineering-and-computer-science-i-spring-2011/Syllabus/MIT6_01SCS11_notes.pdf (page 89)

> > Scheme has very different traits, and very different purpose. The
> > position of Python stems from its particular notion of elegance, which
> > -- although isn't as extreme as Scheme's -- is easier to grasp to the
> > masses. This especially regards syntax. Python has a very intuitive
> > syntax for its basic notions (including dictionaries). Lisp has "all
> > those hairy parentheses", which apparently very few people can
> > appreciate, although they help to work with the code greatly, allowing
> > to see beyond language's limitations.
>
> S-expression syntax, allowing things like Paredit, is one of Scheme's
> many virtues over other languages. So I would like to use Scheme, not
> Python. Currently I can't.
> (I can use Guile, but it still misses some "batteries.")
>
> > I find it daunting that it took us 106 whopping SRFIs to reach a
> > *basic* socket API! Scheme could have been taking over the world
> > by now. :-)
> >
> > Not sure what you mean by "taking over the world". That there will be
> > many people using it? There won't. That there will be important
> > systems based on it? Don't think so.
>
> Why not?

Because no one does them.

> > Scheme's power to change the world doesn't stem from standardized
> > APIs, but (hopefully) from SICP and the way it affects thinking about
> > computer systems.
>
> Why even bother with something like Guile if the only worth of Scheme
> were pedagogical?

No. I never said it's the only worth. I find Guile a very practical and convenient tool, despite its flaws.

> > And as amazing as it would be if that huge pool of libraries
> > existed specifically as Guile modules, I'm not sure if it's
> > realistic.
> >
> > Somehow I can't get amazed with that vision.
> > The best thing that Scheme does for programming is that it promotes
> > writing software that can be read, and not only executed. What you
> > claim to be essential here seems to be a rather minor detail from that
> > point of view.
>
> Libraries *are* software. Currently people *don't* write any of that
> "software that can be read" except in small groups.

Small groups are a good start, I guess.

> > Python lacks many of Scheme's greatest features.
> >
> > The advantage of which rarely manifests in everyday practice,
> > especially if you're not used to working with s-expressions (most
> > programmers I know haven't got a clue what those are).
>
> I recommend that you read this:
>
> http://norvig.com/python-lisp.html
>
> I'm wondering more and more why you use Scheme if you think it has no
> advantages over Python.

I don't think so, and I never said that. Python has some inherent limitations that Scheme manages to avoid, because it isn't over-specified.
I gave that link because it comes from one of the greatest Lisp advocates -- and yet he claims that "Some just couldn't get used to Lisp syntax in the limited amount of class time they had to devote to it", and also "Although some people have initial resistance to the [Python's indentation as block structure]/(Lisp parentheses), most come to [like]/(deeply appreciate) them." Even within the Scheme community there appear voices complaining about the Lisp syntax, like SRFI-105, SRFI-110 or SRFI-119.

> If I can't *
Re: Request for feedback on SRFI-126
Hi, while I have nothing to say regarding the details of your SRFI, I find some of your motivations questionable, and therefore I decided to write this reply. Forgive the somewhat "negative" tone of this e-mail, despite my intentions being positive.

> I've made pretty fine experiences with R7RS-small so far[0][1][2][3],
> and after seeing people's disdain towards R7RS-large's direction and
> agreeing with them (although I wouldn't trust my own judgment alone),
> I've decided to try pushing R7RS-large in a somewhat better direction.

It is unclear to me what you mean by "better direction", and in particular, how you judge which direction is better or worse.

> The benefit for Guile? I shortly summed up my thoughts on that in the
> FOSDEM thread on the Guix ML; here the mail from the archives:
> http://lists.gnu.org/archive/html/guix-devel/2015-09/msg00759.html

You wrote there, among others, that "with a little more work, standard Scheme might actually become a language essentially as usable as Python and the like". If you're looking for a language that is "as usable as Python", then I'd recommend trying out Python, which is very good at what it does. Maybe I'm reading your point wrong, but I don't think that competing with Python or chasing Python or trying to mimic Python would be anything but a waste of time.

> Perhaps a better summary: better Scheme standards -> more libraries that
> work on any implementation -> more total Scheme users & more free-flow
> of users between implementations -> more potential for growth of the
> Guile community.

I don't think that the flow of users between the implementations is the major concern of the Scheme community, and I also haven't got a clue how one can tell what the phrase "better Scheme standards" means. "Better" by which standards?
Actually, I think that if you really wanted to unite communities around various Scheme implementations, you'd need to do that through dialogue rather than standardizations -- because apparently the diversity between various implementations exists for a reason, as probably those communities worship different values > The envisioned direction for R7RS-large? I'll try writing specs which > could have been part of the clean R7RS-small, or could be part of an > R8RS that would be more in the vein of R6RS (without some key bad > parts), that is: not being overly minimalist, not catering to obscure > implementations that are barely maintained and used, being more daring > in requesting modern and advanced features from implementations that > want to be compliant. > To me, minimalism is at the very heart of Scheme, and any departure from it will sooner or later turn out to be harmful. I think that putting hash tables into the language is a particularly good example of something that goes against the spirit of Scheme. What I believe would go along the spirit of Scheme is that in certain circumstances, an assoc list could be optimized to a hash table, because a hash table is essentially an optimized implementation of key-value lookup Not like R7RS-large's apparent current direction[4][5][6][7][8], i.e.: > specifying a ton of questionable libraries that seem to fill imaginary > gaps, invite design bugs through the inclusion of spurious utility > forms, and overall seem more suitable to live as third-party libraries, > because they can be implemented as such without needing support for > additional fundamental features from Scheme implementations. All the > while said fundamental features are neglected from standardization > because X and Y minimalist implementation of Scheme won't be able to > support them. > Which "said fundamental features" do you mean? Does that make sense? Feel free to add to this high-level description > of the desired direction, even if it seems vague. 
I'm trying to sum up > the sentiment of others, so don't see the above as my personal opinion. > I think it would be much more worthwhile to create stunning applications (especially the ones that would make use of the Scheme's particular traits), rather than constantly improving the language which is already good enough. The issue of library interoperability between implementations should be solved only if it really turns out to be an actual problem. Best regards, M.
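The earlier point about hash tables being essentially an optimized implementation of key-value lookup can be made concrete: in Guile, the same lookup reads almost identically against an assoc list and a hash table. This is a sketch of the analogy only, not a proposed unified API:

```scheme
;; The same association, stored two ways.
(define alist '((a . 1) (b . 2)))
(assq-ref alist 'b)          ;; ⇒ 2 (linear scan)

(define table (make-hash-table))
(hashq-set! table 'a 1)
(hashq-set! table 'b 2)
(hashq-ref table 'b)         ;; ⇒ 2 (hashed lookup)
```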
Re: Request for feedback on SRFI-126
> > > > I've made pretty fine experiences with R7RS-small so far[0][1][2] > > [3], and after seeing people's disdain towards R7RS-large's > > direction and agreeing with them (although I wouldn't trust my own > > judgment alone), I've decided to try pushing R7RS-large in a > > somewhat better direction. > > > > It is unclear to me what do you mean by "better direction", and in > > particular, how do you judge which direction is better or worse > > Even more broadly than the summaries I already gave, I could say: I > would like us to reach a state where one can think "I need to write > application X? Let's do it in Scheme since it's such a neat language," > and then proceed to install standard Scheme libraries A, B, and C into > my system either with the distro's package manager or a Scheme package > manager (written in standard Scheme, working with my favorite > implementation), proceed to write my application in standard Scheme > using those libraries, and have this application X using libraries A, B, > and C work across Guile, Racket, Chicken, Gauche, Kawa, and what have > you, without having to change a single line of code. > Maybe you should explain why there are so many implementations of Scheme in the first place? (That isn't the case for Python, Java or Perl) Application X could be anything from a network service to a video game. > Whatever I can do in Perl, Python, shell, Java, etc., and even most > things I could do in C/C++, I should be able to do in standard Scheme. > > That's the Grand Dream, but I don't even think it's *that* far away: > > Had I such a "Grand Dream", I'd go with Python, because they already have that. Scheme has very different traits, and very different purpose. The position of Python stems from its particular notion of elegance, which -- although isn't as extreme as Scheme's -- is easier to grasp to the masses. This especially regards syntax. Python has a very intuitive syntax for its basic notions (including dictionaries). 
Lisp has "all those hairy parentheses", which apparently very few people can appreciate, although they greatly help in working with the code, allowing one to see beyond the language's limitations.

> Many mature Scheme implementations can do those things in one or another non-portable way, although they lack a big pool of utility libraries to help.
>
> In our example we might find ourselves missing libraries A and C, *even* if we specifically choose Guile instead of standard Scheme. That's a big losing point against something like Python or Java.
>
> Now if it were possible to write said pool of utility libraries in a portable way, thanks to the standards unifying all *fundamental* features needed to do so under unified APIs (i.e. things you can't implement as a library in terms of more primitive features, like sockets, multithreading, filesystem commands, etc.), then it would be plausible to have the larger Scheme community start producing such a pool of utility libraries, and then we're done really.
>
> I find it daunting that it took us 106 whopping SRFIs to reach a *basic* socket API! Scheme could have been taking over the world by now. :-)

Not sure what you mean by "taking over the world". That there will be many people using it? There won't. That there will be important systems based on it? I don't think so. Scheme's power to change the world doesn't stem from standardized APIs, but (hopefully) from SICP and the way it affects thinking about computer systems.

> And as amazing as it would be if that huge pool of libraries existed specifically as Guile modules, I'm not sure if it's realistic.

Somehow I can't get amazed by that vision. The best thing that Scheme does for programming is that it promotes writing software that can be read, and not only executed. What you claim to be essential here seems to be a rather minor detail from that point of view.

> That was a huge chunk of text, so I'll try to keep the rest of the mail very terse.
> Don't be irritated by the terseness of the sentences.

Quite the contrary, I am grateful.

> > Maybe I'm reading your point wrong, but I don't think that competing with Python or chasing Python or trying to mimic Python would be anything but a waste of time.
>
> Python lacks many of Scheme's greatest features.

The advantage of which rarely manifests in everyday practice, especially if you're not used to working with s-expressions (most programmers I know haven't got a clue what those are). I recommend that you read this: http://norvig.com/python-lisp.html

> Perhaps a better summary: better Scheme standards -> more libraries that work on any implementation -> more total Scheme users & more free-flow of users between implementations -> more potential for growth of the Guile community.

I don't think that the flow of users between the implementations is the major concern of the Scheme community, and I also haven't got a clue
Re: Guile Assembler
2015-09-04 2:54 GMT+02:00 Mark H Weaver <m...@netris.org>:

> Panicz Maciej Godek <godek.mac...@gmail.com> writes:
>
> > It is not a patch though, but just a separate module called (ice-9
> > nice-9) that is meant to be placed in the "ice-9" directory (e.g.
> > "/usr/share/guile/2.0/ice-9").
> >
> > It would definitely need more elaborate documentation, but the quick
> > note is that it:
> >
> > * allows destructuring of arguments to lambda, e.g.
> >
> > (map (lambda ((a . b)) (+ a b)) '((1 . 2)(3 . 4)(5 . 6)))
> >
> > * blends named-let with match-let and srfi-71-style let for multiple
> > values, legalizing usages like
> >
> > (let loop ((a (b c) (values 1 (list 2 3))))
> >   ...
> >   (loop (values 4 (list 5 6))))
>
> I only just recently noticed this message, but before people start
> writing a lot of code like this, I should warn you that in the procedure
> call (loop (values 4 (list 5 6))), by the semantics of Guile, that is
> supposed to be equivalent to (loop 4). If it does something else,
> that's probably a bug in our optimizer, and it might well act
> differently on our master branch already, because the multiple-values
> stuff has been cleaned up a lot compared with 2.0. Anyway, you
> certainly should not rely on this behavior. Sorry...

Actually, the code is written in such a way that "loop" is actually a macro, so that (loop (values 4 (list 5 6))) would expand to

(loop* (values->list (values 4 (list 5 6))))

where loop* takes lists as inputs and (values->list call) is a syntax defined as (call-with-values (lambda () call) list), so I think it is proper Scheme code which does not rely on any undefined behaviors. It has some drawbacks -- among others, that loop is no longer first class (although it could be made obtainable easily) and the resulting code is rather inefficient, but it ought to behave properly.
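A minimal, self-contained sketch of the expansion described above (loop* is the list-taking helper named in the message; writing it out here is my reconstruction, for illustration only):

```scheme
;; (values->list call) evaluates call and collects all of its returned
;; values into a single list:
(define-syntax values->list
  (syntax-rules ()
    ((_ call)
     (call-with-values (lambda () call) list))))

;; A multiple-value call such as (loop (values 4 (list 5 6))) can thus be
;; rewritten by the loop macro into (loop* (values->list ...)), where
;; loop* receives an ordinary list of arguments:
(values->list (values 4 (list 5 6)))  ;; => (4 (5 6))
```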
There's also a comment in the module that ;; it should generally be discouraged to use the plain let ;; with multiple values, because there's no natural way to implement ;; that when there's more than one (multiple-value) binding, ;; but it's added for completeness but for the time being I've actually only been using let* with multiple values. -- Panicz
Re: Making apostrophe, backtick, etc. hygienic?
2015-08-30 16:47 GMT+02:00 Taylan Ulrich Bayırlı/Kammer <taylanbayi...@gmail.com>:

> Panicz Maciej Godek <godek.mac...@gmail.com> writes:
>
> > Your point is that quote (and unquote, and quasiquote, and syntax, and
> > unsyntax, and quasisyntax) is a reader macro, so one might forget that
> > 'x is really (quote x) -- because that indeed cannot be inferred from
> > the source code.
>
> Yup, exactly.
>
> > You've got the point, but I think that the only reasonable solution
> > would be to make the compiler issue a warning whenever reader macro
> > identifiers are being shadowed.
>
> That's a good idea as well. It might annoy some users though, when they
> really want to shadow 'quote' (or 'syntax'). Dunno.

This could actually be solved by some additional means -- for example, one could be required to write an additional statement confirming awareness that the reader macros are being shadowed. For instance:

(define-syntax match (syntax-rules ( quote ) ))
(assert (syntax-shadows? match 'quote)) ;; removes the warning

But to be honest, I don't think that this is a real problem. The problem manifested itself with the "syntax" form, and not the "quote" form, and I think the reason for that is not just accidental. The "quote" form is more common and more commonly used, while the "syntax" form is exotic and surprising -- especially because everyone unfamiliar would read #' as "hash-quote" rather than "syntax".

> > Putting the issue with "syntax" aside, making 'foo expand to
> > (__quote__ foo) would be surprising to anyone who actually wanted to
> > shadow "quote". As I mentioned earlier, there are libraries that make
> > use of the fact that 'x is (quote x).
> > Take a look in here, for example:
> >
> > http://git.savannah.gnu.org/gitweb/?p=guile.git;a=blob;f=module/ice-9/match.upstream.scm;h=ede1d43c9ff8b085cb5709678c4227f5ecaaa8a5;hb=HEAD#l335
> >
> > (match '(a b)
> >   (('a 'b) #t)
> >   (_ #f))
> >
> > would no longer evaluate to #t, because the ('a 'b) pattern would
> > actually be read as ((__quote__ a) (__quote__ b)). You'd need to
> > replace all occurrences of "quote" with "__quote__" in
> > match.upstream.scm (and in every other library that shadows quote for
> > its purpose) in order to make it work, thus making Guile
> > non-RnRS-compliant.
>
> Hmm, that gets a little complicated, yeah. Still, in highly RnRS
> compliant systems, macros actually match their "literal" inputs by
> (hygienic) "bindings" and not the names of identifiers. I.e., if the
> quote and __quote__ identifiers hold the *same binding*, then a macro
> that has 'quote' in its literals list will also match '__quote__' for
> that literal. (Magic!) I seem to remember Guile 2.2 really does this
> the pedantically right way, while Guile 2.0 is more lax about it.

I think that this is the case for R6RS and R7RS, but as far as I can tell, in R5RS it would be problematic.

> The only RnRS-compliant code that would break is code which itself
> shadows 'quote' and expects its shadowing to work with 'foo. Like:
>
> (let ((quote -)) '9) ;=> -9
>
> Dunno if there's any serious Scheme/Guile code out in the wild which
> actually relies on this working.

You'll never know.
While it may seem unlikely that anyone would redefine quote to mean minus, in mathematical analysis the prime symbol is often used to mean the derivative of a unary function, so it is quite possible that someone would wish to write, in some context,

(let ((quote deriv))
  (+ (f x) ('f x) (''f x)))

On the other hand, your solution would work if someone decided to write

(let ('deriv)
  (+ (f x) ('f x) (''f x)))

(which is IMO more elegant). Nevertheless, I think that even if 'x mapped to (__quote__ x), it could still happen that someone was using the __double_underscore__ convention in her code (for some reason), and your objections would apply to this new situation as well.

As I said earlier, I think that the problem isn't caused by the fact that 'x is (quote x), because it is likely that every lisp programmer reads 'x as "quote x", but by the fact that there is this weird "syntax" form (and its family) which has only one application in Scheme, namely -- syntax-case macros. You could ask the question on the comp.lang.scheme newsgroup, but I think that the solution you suggest would only introduce unnecessary divergence from Lisp and Scheme, and for a dubious reason. Furthermore, while it is common to use these __underscores__ in C, PHP or Python, it is an alien practice in the Scheme code base. (Instead of "quote x", you'd need to read 'x as "underscore underscore quote underscore underscore x", which is unhandy and brain-damaging.)

Regards,
M.
Re: Making apostrophe, backtick, etc. hygienic?
2015-08-30 14:30 GMT+02:00 Taylan Ulrich Bayırlı/Kammer taylanbayi...@gmail.com:

> This is a bit of a crank idea, but here goes. Today I wasted some time
> trying to find the bug in the following piece of code:
>
> (define (syntax-car syntax)
>   (syntax-case syntax ()
>     ((car . cdr) #'car)))
>
> Better error reporting in macro-expansion errors might have made it
> less painful, but maybe we can solve the problem itself. How about
> making 'foo turn into something like (__quote__ foo), and similar for
> `foo, #'foo, etc.? Where __quote__ is just a synonym for quote, and the
> original works too. Ideal would be a symbol that's not as noisy (in
> debug output) but still highly improbable to appear in user code and be
> accidentally shadowed.

You mean that #'x is synonymous with (syntax x), and that's where the problem stems from?

> Maybe it would not be standards-compliant in the strict sense, but I
> believe it would be an improvement. Am I missing any obvious downsides?
> Or any subtle ones?

I think that every lisper should know that 'x is synonymous with (quote x), and in some contexts it might be desirable to bind a new meaning to the quote form (and this is already done by some libraries, notably in the (ice-9 match) module). As to syntax, the use of #'x is much rarer, and the idea that #'x is (syntax x) is indeed a bit controversial. But this regards the whole syntax-case macro system, and I think that it would be more valuable to think how to fix its flaws, rather than change the very fundamentals of the language.

Best regards,
M.
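The reader-sugar equivalence being discussed can be checked directly at a Guile REPL; a minimal sketch:

```scheme
;; 'x is pure reader sugar: the reader turns it into the two-element
;; list (quote x), so a doubly-quoted form compares equal to the
;; explicitly written one:
(equal? ''x '(quote x))   ;; => #t

;; ...which is exactly why shadowing the identifier quote changes what
;; the apostrophe means in that scope:
(let ((quote -)) '9)      ;; => -9
```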
Re: Making apostrophe, backtick, etc. hygienic?
2015-08-30 15:16 GMT+02:00 Taylan Ulrich Bayırlı/Kammer taylanbayi...@gmail.com:

> Panicz Maciej Godek godek.mac...@gmail.com writes:
>
> > You mean that #'x is synonymous with (syntax x), and that's where the
> > problem stems from?
>
> Yup. I shadow 'syntax', but I don't explicitly shadow #'. It gets
> shadowed implicitly. Lexical scoping and hygiene are supposed to let
> the programmer forget about such worries.
>
> > I think that every lisper should know that 'x is synonymous with
> > (quote x), and in some contexts it might be desirable to bind a new
> > meaning to the quote form (and this is already done by some
> > libraries, notably in the (ice-9 match) module).
>
> One may know, but still forget when sufficiently tired, and/or when
> using the word quote to mean something conceptually different than the
> quoting in lisp.

You could say that about practically any keyword in Scheme, or for that matter any other language. The advantage of Scheme is that it really allows you to redefine any keyword you like. Your point is that quote (and unquote, and quasiquote, and syntax, and unsyntax, and quasisyntax) is a reader macro, so one might forget that 'x is really (quote x) -- because that indeed cannot be inferred from the source code.

> For instance, some kind of text processing program might give the term
> quote a specific meaning in the program's problem domain, and once
> you're immersed deeply enough in this domain, you might find yourself
> naming some function parameter quote without giving it a second
> thought. Kind of difficult to explain what I mean, but I know it
> happens to me when I'm not careful.

You've got the point, but I think that the only reasonable solution would be to make the compiler issue a warning whenever reader macro identifiers are being shadowed.

> As another example, it also keeps happening to me that I write code
> like:
>
> (syntax-rules ()
>   ((_ foo)
>    (begin
>      ...
>      (let ((foo bar))
>        ...
> where I forget that the 'foo' there will not be bound freshly by that
> let form, and that despite that I understand how 'syntax-rules' works
> very well (externally, not necessarily internally!). One is just
> accustomed to being able to let-bind whatever one wants, and lexical
> scoping and hygiene take care of all worries ... except when not. :-)
> (In this case it has nothing to do with quote/syntax/etc., just giving
> an example of what silly mistakes I can make when not careful.)

Well, misspellings happen all the time, but I think that just as their consequences can usually be avoided in real life by use of sanity, unit-testing your code is the best way to make sure that it is sane.

> (Nowadays I name all my pattern variables foo for that reason. Reads
> like BNF too, which is nice. And I don't see it ever clashing with
> record type names in practice.)
>
> > As to syntax, the use of #'x is much rarer, and the idea that #'x is
> > (syntax x) is indeed a bit controversial. But this regards the whole
> > syntax-case macro system, and I think that it would be more valuable
> > to think how to fix its flaws, rather than change the very
> > fundamentals of the language.
>
> Hmm, I'm not sure what flaws of syntax-case you have in mind. IMO it's
> a pretty nice system.

syntax-rules macros are nice, because they allow one to comprehend macro transformations in terms of textual substitutions (as is the case with functional programming), but because they expand in normal (rather than applicative) order, it's difficult to write more complex macros. The well-known solution is to use CPS macros (which are very difficult to comprehend) or Oleg Kiselyov's idea of implementing the CK abstract machine in them. The third way, often the most intuitive, to influence the order of expansion, is to use syntax-case. However, if you do so, you can no longer (in general) analyze the macros in terms of textual substitution.
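For illustration, here is a minimal CPS macro in plain syntax-rules (my own example, not from the thread): reverse-k reverses a list at expansion time and then hands the result to a "continuation" form, which is the standard trick for sequencing expansions despite normal-order expansion:

```scheme
(define-syntax reverse-k
  (syntax-rules ()
    ;; input exhausted: splice the accumulated result into the
    ;; continuation form (k ...) and let it expand further
    ((_ (k ...) () (res ...))
     (k ... (res ...)))
    ;; otherwise move the head of the input onto the accumulator
    ((_ k (x . rest) (res ...))
     (reverse-k k rest (x res ...)))))

;; (reverse-k (quote) (a b c) ()) expands to (quote (c b a)):
(reverse-k (quote) (a b c) ())  ;; => (c b a)
```

The continuation here is just quote, but it could equally be another CPS macro, which is how longer expansion pipelines are chained.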
And you need to use some new weird special forms, like the aforementioned syntax.

> But either way, I don't think making #'foo expand to (__syntax__ foo),
> and simply making __syntax__ a synonym for syntax, are fundamental
> changes.

Putting the issue with syntax aside, making 'foo expand to (__quote__ foo) would be surprising to anyone who actually wanted to shadow quote. As I mentioned earlier, there are libraries that make use of the fact that 'x is (quote x). Take a look in here, for example:

http://git.savannah.gnu.org/gitweb/?p=guile.git;a=blob;f=module/ice-9/match.upstream.scm;h=ede1d43c9ff8b085cb5709678c4227f5ecaaa8a5;hb=HEAD#l335

(match '(a b)
  (('a 'b) #t)
  (_ #f))

would no longer evaluate to #t, because the ('a 'b) pattern would actually be read as ((__quote__ a) (__quote__ b)). You'd need to replace all occurrences of quote with __quote__ in match.upstream.scm (and in every other library that shadows quote for its purpose) in order to make it work, thus making Guile non-RnRS-compliant.
Re: Guile Assembler
24 Jun 2015 10:34 to...@tuxteam.de wrote:

> > In addition, the module allows me to use the let and let* forms with
> > multiple values:
> >
> > (let ((a (b c) (values 1 '(2 3))))
> >   (+ a b c))
>
> Any chance to see that in Guile?

The module actually works with Guile (it replaces the default bindings for lambda, let, let* and define), but it provides much more than that (it re-exports most of srfi-1 and (ice-9 regex)), so some may argue that it's too much. I could prepare a smaller version of the module that perhaps could make it into the official distribution, if the Guile maintainers have nothing against it, and send a patch.
Re: Guile Assembler
2015-06-24 11:44 GMT+02:00 Panicz Maciej Godek godek.mac...@gmail.com:

> I could prepare a smaller version of the module that perhaps could make
> it into the official distribution, if the Guile maintainers have
> nothing against it, and send a patch.

I include the aforementioned. It is not a patch though, but just a separate module called (ice-9 nice-9) that is meant to be placed in the ice-9 directory (e.g. /usr/share/guile/2.0/ice-9). It would definitely need more elaborate documentation, but the quick note is that it:

* allows destructuring of arguments to lambda, e.g.

(map (lambda ((a . b)) (+ a b)) '((1 . 2)(3 . 4)(5 . 6)))

* blends named-let with match-let and srfi-71-style let for multiple values, legalizing usages like

(let loop ((a (b c) (values 1 (list 2 3))))
  ...
  (loop (values 4 (list 5 6))))

(although this may not seem to be a good way of programming, I think that imposing artificial limitations on how the language can be used would be even worse)

* blends match-let* with srfi-71-style let* for multiple values

* blends srfi-2 and-let* with the pattern matcher, so that one can finally do things like

(and-let* (((a b c d) '(1 2 3)))
  (+ a b c d))

(which evaluates to false, of course)

* allows curried definitions like (ice-9 curried-definitions), but ones that are already blended with the pattern-matching lambda:

(define ((f (a b)) (c)) (list a b c))

* in addition, it re-exports match from (ice-9 match) and every, any and count from (srfi srfi-1)

I think that this set of extensions is non-controversial and that it is handy to gather them in a single module. It only permits commonly used forms that would otherwise be illegal, and it does so in a predictable way. I re-export every and any because they are used by some of the macros. I also re-export count in order to make the count of (re)exported symbols equal 9, so that the module deserves its name, but this is mostly for a pun.
Also, I just copied and modified the macros in order to get rid of some dependencies, but I only tested the code superficially.

Best regards

(define-module (ice-9 nice-9)
  #:use-module (ice-9 match)
  #:use-module ((srfi srfi-1) #:select (every any count))
  #:re-export (match every any count)
  #:export ((and-let*/match . and-let*))
  #:replace ((cdefine . define)
             (mlambda . lambda)
             (named-match-let-values . let)
             (match-let*-values . let*)))

(define-syntax mlambda
  (lambda (stx)
    (syntax-case stx ()
      ((_ (first-arg ... last-arg . rest-args) body ...)
       (and (every identifier? #'(first-arg ... last-arg))
            (or (identifier? #'rest-args) (null? #'rest-args)))
       #'(lambda (first-arg ... last-arg . rest-args) body ...))
      ((_ arg body ...)
       (or (identifier? #'arg) (null? #'arg))
       #'(lambda arg body ...))
      ((_ args body ...)
       #'(match-lambda* (args body ...))))))

(define-syntax cdefine
  (syntax-rules ()
    ((_ ((head . tail) . args) body ...)
     (cdefine (head . tail) (mlambda args body ...)))
    ((_ (name . args) body ...)
     (define name (mlambda args body ...)))
    ((_ . rest)
     (define . rest))))

(define-syntax list-values
  (syntax-rules ()
    ((_ call)
     (call-with-values (lambda () call) list))))

(define-syntax named-match-let-values
  (lambda (stx)
    (syntax-case stx ()
      ((_ ((identifier expression) ...) ;; optimization: plain let form
          body + ...)
       (every identifier? #'(identifier ...))
       #'(let ((identifier expression) ...)
           body + ...))
      ((_ name ((identifier expression) ...) ;; optimization: regular named-let
          body + ...)
       (and (identifier? #'name) (every identifier? #'(identifier ...)))
       #'(let name ((identifier expression) ...)
           body + ...))
      ((_ name ((structure expression) ...) body + ...)
       (identifier? #'name)
       #'(letrec ((name (mlambda (structure ...) body + ...)))
           (name expression ...)))
      ((_ ((structure expression) ...) body + ...)
       #'(match-let ((structure expression) ...)
           body + ...))
      ;; it should generally be discouraged to use the plain let
      ;; with multiple values, because there's no natural way to implement
      ;; that when there's more than one (multiple-value) binding,
      ;; but it's added for completeness
      ((_ ((structures ... expression) ...) body + ...)
       #'(match-let (((structures ...) (list-values expression)) ...)
           body + ...))
      ((_ name ((structures ... expression) ...) body + ...)
       (identifier? #'name)
       #'(letrec ((loop (mlambda ((structures ...) ...)
                          (let-syntax ((name (syntax-rules ()
                                               ((_ args (... ...))
                                                (loop (list-values args)
                                                      (... ...))))))
                            body + ...))))
           (loop (list-values expression) ...))))))

(define-syntax match-let*-values
  (lambda (stx)
    (syntax-case stx ()
      ((_ ((identifier expression) ...) ;; optimization: regular let*
          body
Re: Guile Assembler
> Hm. What's the difference from Guile's define? And why do you have
> double parentheses in your example? Still a bit lost.

hmm... did you read the pasted code in the repo? ;-)

> Not yet, I must admit. But nevermind, got it. It looks like a
> definition for a parametric func or for a half-curried func, depending
> on how you squint at it ;-) That explains the second set of parens.

Hi, sorry to answer that late, but I erroneously sent the previous answer only to Nala. I have another project (in another repo) which provides the (extra common) module, which redefines the define and lambda forms (among others):

https://bitbucket.org/panicz/slayer/src/cbfb3187edaba890b12b307d84bb9c4538407d20/guile-modules/extra/common.scm?at=default

(The module is rather huge -- it is a bag that I carry around.)

The modification of define originates, I believe, from the book Structure and Interpretation of Classical Mechanics by Gerald Sussman and Jack Wisdom [1], but I stole the idea directly from Guile's (ice-9 curried-definitions) module [2]. The idea is that

(define ((f x) y) (list x y))

is equivalent to

(define f (lambda (x) (lambda (y) (list x y))))

This behaviour is consistent with the idea that

(define (g x) (* x x))

should be equivalent to

(define g (lambda (x) (* x x)))

The expanded variants are easier to read, because they make it apparent that e.g. (g 5) can be substituted with (* 5 5), and -- similarly -- ((f 2) 3) can be substituted with (list 2 3).

The other difficulty is that the original function header looked like this:

(define ((number/base base) (l ...)) ...)

so that the form of the second argument is (l ...). This is because the (extra common) module allows destructuring of the arguments using the (ice-9 match) pattern matcher, so that the above header is actually equivalent to

(define number/base (lambda (base) (lambda x (match x ((l ...) ...)))))

In practice that form of argument doesn't do much.
It just tells the reader that the argument is a proper list (because only a proper list matches such a pattern).

Similarly, I could write code like

(map (lambda ((a . b)) (+ a b)) '((1 . 2)(3 . 4)(5 . 6)))

I also use that trick with the let and let* forms:

(let (((x y) '(1 2)))
  (+ x y))

In addition, the module allows me to use the let and let* forms with multiple values:

(let ((a (b c) (values 1 '(2 3))))
  (+ a b c))

Best regards

[1] http://mitpress.mit.edu/sites/default/files/titles/content/sicm/book-Z-H-11.html
[2] https://www.gnu.org/software/guile/manual/html_node/Curried-Definitions.html
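As a concrete sketch of the curried, pattern-matching header described above, written out by hand with plain lambda and (ice-9 match) so it works without the (extra common) module (the body of number/base was elided in the original mail, so the digit-folding body here is my own guess, for illustration only):

```scheme
(use-modules (ice-9 match) (srfi srfi-1))

;; Hand-expanded equivalent of (define ((number/base base) (l ...)) ...);
;; the (l ...) pattern merely insists that the arguments form a proper list.
(define number/base
  (lambda (base)
    (lambda args
      (match args
        ((l ...)
         ;; hypothetical body: read the digits, most significant first
         (fold (lambda (digit value) (+ digit (* base value))) 0 l))))))

((number/base 10) 1 2 3)  ;; => 123
```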
Re: goops - guile-clutter unexpected bug while using #:virtual slot allocation for a clutter-actor subclass
Hi,

My first impression is that the error might be caused by using the combination of #:allocation #:virtual and #:init-keyword for the colour slot. Since #:init-keyword initializes the slot with a given value, and virtual slots have no actual value, the semantics of such an operation is rather unclear. If your intention is to use the keyword as an argument to the constructor, you should rather provide a custom initialize method (and drop the init-keyword for the colour slot):

(define-method (initialize (self bar) args)
  (next-method)
  (let-keywords args #t ((colour (or #f some-default-value-you-wish)))
    (if colour
        (slot-set! self 'colour colour))))

HTH

2014-12-19 20:46 GMT+01:00 David Pirotte da...@altosw.be:

> Hello,
>
> It would be really nice if someone could help me with the following
> unexpected bug:
>
> http://paste.lisp.org/+33RA
>
> I don't think I'll be able to solve it by myself, at this level of
> knowledge I have of both goops [the implementation I mean, knowledge
> close to zero] and the low-level machinery gnome/gobject/gtype.scm
>
> Many thanks,
> David
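A self-contained sketch of that approach (here <bar> is a hypothetical stand-in class with an ordinary colour slot, since the original case involves a clutter-actor subclass with a virtual slot):

```scheme
(use-modules (oop goops) (ice-9 optargs))

;; Stand-in for the clutter-actor subclass from the report:
(define-class <bar> ()
  (colour #:init-value #f))

;; Pick the #:colour keyword out of the make arguments by hand,
;; instead of relying on #:init-keyword:
(define-method (initialize (self <bar>) args)
  (next-method)
  (let-keywords args #t ((colour #f))
    (if colour
        (slot-set! self 'colour colour))))

(slot-ref (make <bar> #:colour 'red) 'colour)  ;; => red
```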
Re: Add support for record types in GOOPS methods?
2014-10-21 16:57 GMT+02:00 Dave Thompson dthomps...@worcester.edu:

> Hello all,
>
> Last night, I encountered what I consider to be a frustrating
> limitation of GOOPS methods: They do not support record type
> descriptors, only classes. This makes it difficult to take advantage
> of generic procedures without also buying into the rest of the GOOPS
> system. Here's some code that I wish would work:
>
> (define-record-type foo
>   (make-foo bar)
>   foo?
>   (bar foo-bar))
>
> (define-method (foobar (foo foo))
>   (foo-bar foo))
>
> The error thrown by `define-method' is:
>
> ERROR: In procedure class-direct-methods:
> ERROR: In procedure slot-ref: Wrong type argument in position 1
> (expecting instance): #<record-type foo>
>
> There is an ugly workaround. You can use `class-of' on an instance of
> a record type to get a class in return. This code works, but is not
> ideal:
>
> (define-record-type foo
>   (make-foo bar)
>   foo?
>   (bar foo-bar))
>
> (define foo-class (class-of (make-foo #f)))
>
> (define-method (foobar (foo foo-class))
>   (foo-bar foo))
>
> I don't know very much about GOOPS, so I am seeking help. Would it
> make sense for `define-method' to work the way I want? If so, could
> anyone suggest a way to make `define-method' DTRT for record types?
> Perhaps it could auto-generate and cache the class from the record
> type descriptor, but I'm not sure how, since the current workaround
> requires an instance of that record.

Hi!

As I managed to find out, (define-record-type t ...) also introduces a GOOPS class named t. Following your example, you'd need to define your method in the following way:

(define-method (foobar (foo foo))
  (foo-bar foo))

I don't think any further changes are needed (perhaps a section in the documentation would be nice).

HTH
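For reference, a runnable sketch of the class-of workaround quoted above (whether the direct form with the record type name also works depends on the Guile/GOOPS version, as the thread suggests, so this sticks to the portable route):

```scheme
(use-modules (oop goops) (srfi srfi-9))

(define-record-type foo
  (make-foo bar)
  foo?
  (bar foo-bar))

;; Portable workaround: obtain the GOOPS class of the record type from a
;; throwaway instance, then specialize the method on that class:
(define foo-class (class-of (make-foo #f)))

(define-method (foobar (f foo-class))
  (foo-bar f))

(foobar (make-foo 42))  ;; => 42
```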
Re: Dijkstra's Methodology for Secure Systems Development
2014-09-20 14:46 GMT+02:00 Taylan Ulrich Bayirli/Kammer taylanbayi...@gmail.com:

> Panicz Maciej Godek godek.mac...@gmail.com writes:
>
> [...]
>
> First of all let me say I agree with you; guile-devel is the wrong
> place to discuss these things.

Having this settled, let's proceed with our discussion :)

> I also feel uncomfortable about having been painted as the only person
> agreeing with Ian. According to him I was able to understand his idea
> at least, but I'm not clear on how it ties in with the rest of
> reality, like the possibility of hardware exploits... Still:
>
> [...]
>
> > the back doors can be implemented in the hardware, not in the
> > software, and you will never be able to guarantee that no one is
> > able to access your system.
>
> Hopefully hardware will be addressed as well sooner or later.

How can we know that the enemy isn't using some laws of physics that we weren't taught at school (and that he deliberately keeps that knowledge out of schools)? Then our enemy will always be in control! This reasoning, although paranoid, seems completely valid, but it does abuse the notion of the enemy, by putting it in an extremely asymmetrical situation.

> In the meanwhile, we can plug a couple of holes on the software layer.
> Also, if the hardware doesn't know enough about the software's
> workings, it will have a hard time exploiting it. Just like in the
> Thompson hack case: if you use an infected C compiler to compile a
> *new* C compiler codebase instead of the infected family, you will get
> a clean compiler, because the infection doesn't know how to infect
> your new source code.

So if I get it right, the assumption is that the infected compiler detects some pattern in the source code, and once we write the same logic differently, we can be more certain that after compilation, our new compiler is no longer infected? And couldn't we, for instance, take e.g. the Tiny C Compiler, compile it with GCC, look at the binaries to make sure that there are no suspicious instructions, and then compile GCC with TCC?
Or do we assume that the author of the Thompson virus was clever enough that all the programs used for viewing binaries (that were compiled with the infected GCC) are also malicious and show different binary code, hiding anything that could be suspicious? [But if so, then we could detect that by generating all possible binary sequences and checking whether the generated ones are the same as the viewed ones. Or could this process also be sabotaged?]

> I think it's quite difficult to find a good balance between being too
> naive, and entering tinfoil-hat territory. I've been pretty naive for
> most of my life, living under a feeling that everything bad and dark
> is in the past and that only some anomalies are left. That seems to be
> wrong though, so I'm trying to correct my attitude; I hope I haven't
> swayed too much into the tinfoil-hat direction while doing so. :-)

Actually, the direction the discussion eventually took surprised me a bit. So maybe, to lighten the atmosphere, I shall include a reference to an XKCD strip (plainly it was made up to lull our vigilance): http://xkcd.com/792/
Re: Dijkstra's Methodology for Secure Systems Development
2014-09-21 13:11 GMT+02:00 Taylan Ulrich Bayirli/Kammer taylanbayi...@gmail.com:

> [...]
>
> Still, one last political remark from me: Things are more complicated.
> Google might be incapable of evil, but then they might be a tool of
> the US government. Not calling the US government evil either, but
> consider people like Julian Assange or Edward Snowden. Things get
> unpleasant, and someone with good ideals ends up being dubbed a
> terrorist. And they might not be able to become part of the government
> to push their ideals into acceptance, so they should at least have the
> ability to discuss them anonymously without ending up on a watch list.
> That's part of the reason I think free software is important, and I
> think many people would agree. (If you don't, or think my reasoning is
> flawed, then let's just agree to disagree so we don't continue with
> OT.)

I think that I'd be insane to disagree with the need for free software. All I want to say is that the FSF has already done a great deal of work by popularizing the notion of free software, and although I wouldn't want to diminish the significance of Ian's concerns, it's just too hard for me to believe that, if we only tackle the problem post factum (if we actually are endangered), it will already be too late to handle it (but I do agree that I might be deadly wrong on this point; there's even a proverb, "mądry Polak po szkodzie" -- a Pole is wise only after the harm is done). On the other hand, the idea seems very interesting by itself, and this alone makes it worth pursuing. If there are people out there who believe that assuring that GCC binaries are free from the Thompson virus is a crucial part of the FSF's mission, then I have absolutely no intention to argue with that, although I am strongly convinced that it is reckless if a programmer suffers from malnutrition or neglects personal hygiene of his own will.
Re: Dijkstra's Methodology for Secure Systems Development
Hey,

Maybe I'm a fucking ignorant jumped-up little prick, but at least I don't stink ;]

Actually I don't think that you have put yourself into a particularly comfortable position, and even if you don't care what the people around you think, maintaining personal hygiene seems like The Right Thing To Do. Avoiding soap is like raising an invisible fence that keeps the external world away from you, which is not necessarily a good thing. It is also a message -- that either you are a very poor person, or that you are eager to treat every person that you meet with disrespect -- because you're simply making them feel uncomfortable, and for no particular reason.

It is still unclear to me why you chose to move to Bolivia and live in such poor conditions, but I have a feeling that it only lessens the odds of achieving your goal (especially when you run out of money and die of hunger). On the other hand, I do admit that you're pursuing my childhood dream of being focused entirely on programming. I imagined myself sitting on a sleeping bag with my laptop in a tunnel at a train station, with a label beside me stating that I am writing free software for the greater good, please support. On the other hand, now I see how non-linear the process of creating software is -- that basically you need to balance on a thin line between inspiration, motivation and focus. Actually, I think that when your teeth are falling out, the conditions are probably not particularly propitious.

I've taken a look at your work and I have to say that I am really impressed with some of your ideas, and I do agree on many points, but I have a feeling that they are all presented in a rather messy way, so one can find profound ideas lying next to piles of crappy trivia. But I certainly need to read a bit more to form a more adequate opinion.

When it comes to me, I live (and have for most of my life) in Poland.
And I still find it difficult to see anything terrible in the idea that the FSF had been subverted, when I interpret that in terms of software security, because the way I see it, the main premise of the FSF movement is to share code (as opposed to restricting it), and the main goal of the GNU Operating System is to propagate that idea, rather than to provide a secure and reliable operating system (those are only secondary goals that need to be fulfilled in order for the operating system to become popular, respected and desired -- or to advertise the idea well enough). But most of all, I think, software (and free software in particular) is about fun (so in this regard I would agree with Alan Perlis' foreword to SICP). This is also what I like about your ideas -- that they encourage code reuse, thus requiring the programmer to write only what's important. The issue of being afraid of touching other programmers' (or sysadmins') work is also important, because it shows that we haven't yet managed to work out the means of communicating software in a disciplined and comprehensible way, and I agree that we need to work more on that (and I also agree that we should resort to logic). Best regards
Re: Dijkstra's Methodology for Secure Systems Development
Hi. I've observed that some time ago you started sending tons of revolutionary ideas regarding the way software should be written, and criticising the current practices. I am not in a position to comment on those ideas, because I haven't managed to comprehend them fully (although I am trying to figure out what the System F that you mentioned in your thunder essay is). I have also made three other observations: firstly, that you are pointing out significant vulnerabilities of the GNU project as a whole; secondly, that not every addressee wishes to become acquainted with your thoughts; and lastly, that if someone dares to criticise you, you often get impolite. With regard to those observations, I can offer three suggestions. The first one concerns software security and the odds of the aforementioned Thompson virus. As you pointed out, we cannot guarantee that there is no back door in every GNU system installation, but I think that even if we apply your methods, we won't be able to do so. Simply because (as some of the participants of the discussion noted) the back doors can be implemented in the hardware, not in the software, and you will never be able to guarantee that no one is able to access your system. So why should we bother? If there are some people accessing my files, why should I feel uncomfortable with that? Why can't I trust that someone with such great power isn't going to be mean and evil? (There are already so many things that I can't control. I can't know for sure that I'm not going to die tomorrow, but I think that being worried about that wouldn't make that last day of mine any better.) The second suggestion is that perhaps instead of sending all those letters to some newsgroups, you should start a blog? That way, you could watch the statistics and tell how many people are actually interested in your concerns, and you could present your ideas in a more coherent and systematic way.
And people who didn't subscribe to the Ian Grant newsletter would receive a few fewer unwanted e-mails per week. When it comes to the third suggestion, please remember that other people have their own issues, and may see no reason to consider your concerns more important than theirs. When you announce that there's no need to hook Guile to GDB, because if we rewrote all software with the proper methodology there'd be no bugs, you seem to ignore the existing code base and common practices. Of course, if you can present a universal way of creating good software, then I'm all ears, but so far I haven't seen such a presentation (or it might have drowned in the flood of your other thoughts and discussions). I wish you all the best with your endeavour. M.
Re: Docstring as only form in a function
2014-02-20 17:59 GMT+01:00 Arne Babenhauserheide arne_...@web.de: Hi, I recently experimented with docstrings, and I stumbled over not being able to define a function which only has a docstring as its body: (define (foo) "bar") (procedure-documentation foo) => #f Adding a form makes the string act as a docstring: (define (foo) "bar" #f) (procedure-documentation foo) => "bar" I feel that this is inconsistent, which hurts even more because it breaks for the simplest showcase of docstrings. I feel that this is the desired behaviour. According to the semantics of Scheme, (define (foo) "bar") defines a thunk that evaluates to "bar". This makes a lot of sense, and modifying that behaviour would be very surprising. My use case for using docstrings like this is that when I start writing a function, I begin with the docstring. There I explain what I want to do. Then I commit. Then I implement the function. So you can either do (define (new-foo) "function that baz the bar" #f) or, even better, (define (new-foo) "function that baz the bar" (throw 'not-implemented)) We already discussed in #guile @ freenode that it is simple to add a dummy body to make the docstring work. To me that feels like a kludge. And I was asked to move the discussion here. This would completely reverse the priorities. For the evaluator, it's the value of an expression that is important, and a docstring is something optional. It would be very misleading to have a function that has a docstring but no body. Note that this is illegal: (define (dummy-function)) On the other hand, a function that only returns a string is rather trivial and hardly needs documentation. A reason for not wanting a string as the only part of the body to be treated as a docstring is that this would make it harder to write functions which only return a string without turning their return value into their docstring.
This would then require this: (define (foo) #f "bar") I think it would be more consistent to have the first form of the body double as a docstring if it is a string. I don't see any point in this. In a real program you'll never have anything like: (define (a-very-self-descriptive-function) "this is a function that returns this string, and besides does nothing else") If function stubs are the only argument, then I'd rather suggest you change your habits. Also, if you really need to provide a docstring for a function that only returns a string, it would be much better to do: (define (a-function-that-returns-a-string-foo) "this function returns the string 'foo'" "foo") Your suggestion smells like a dirty hack, and the only motivation is to provide documentation for functions that have no bodies -- which is wrong. The current behaviour is that if the first form in the function is a string, it is not part of the body -- except if the body would otherwise be empty. What do you think? I oppose. That would be very misleading, and it makes no sense in real (or ready) programs. Best regards, M.
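For reference, the behaviour the thread is discussing can be summarised in a short sketch (this merely restates the examples quoted above; `procedure-documentation` is used here as in the thread):

```scheme
;; A string that is the ONLY form in the body is the return value,
;; not a docstring:
(define (foo) "bar")
(procedure-documentation foo)  ; => #f
(foo)                          ; => "bar"

;; A string followed by at least one more form acts as a docstring:
(define (foo*) "bar" #f)
(procedure-documentation foo*) ; => "bar"
(foo*)                         ; => #f
```

The rule follows directly from Scheme's evaluation semantics: the last expression of the body is the return value, so a lone string cannot be anything but the result of the thunk.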
Re: Shorter lambda expressions
Hi! 2014/1/23 Mark H Weaver m...@netris.org: Hello all, For a short time I liked 'cut' from SRFI-26, but I soon became frustrated by its limitations, most notably not being able to reference the arguments out of order or within nested expressions. I don't like the inconsistent style that results when I use 'cut' wherever possible and 'lambda' everywhere else. So I just stopped using it altogether. [...] Before getting acquainted with SRFI-26, I came up with something similar. It uses define-macro, because I didn't know syntax-case back then, but I eventually stopped using it, since it was a little bit confusing. It solved the problem by allowing numbered placeholders, so e.g. (\ + _1 _10) created a function of ten arguments which added its first argument to its tenth argument, skipping all the others. There was also a general placeholder, _, which behaved like SRFI-26's <>, so for instance (\ + _ _) was an equivalent of (\ + _1 _2), but I eventually started to find it a little confusing, because the same symbol referred to another entity. Also, the choice of the symbol \ was unfortunate, as it made the syntax unportable among Schemes (certain implementations would require writing \\ to get a backslash). As far as I remember (I'd have to analyze the code to make sure), it did support nested expressions. Also, after some time, I added another placeholder, ..., for variadic arguments, which behaved more or less like <...> from SRFI-26. I think it would be best to extend SRFI-26 with the option of using <1>, <2>, ... placeholders, where the resulting lambda would get the arity indicated by the highest placeholder. Also, it should support nested expressions. Personally, I'd prefer it over the Gauche-style extension.

(use-modules (ice-9 regex) (srfi srfi-1) (srfi srfi-2))

(define-macro (\ f . args)
  (let* ((prefix "_")
         (placeholder '_)
         (ellipsis '...)
         (rest (if (equal? (last args) ellipsis) ellipsis '()))
         (args (if (symbol? rest) (drop-right args 1) args))
         (max-arg 0)
         (next-arg (lambda ()
                     (set! max-arg (+ max-arg 1))
                     (string->symbol
                      (string-append prefix (number->string max-arg))))))
    (letrec ((process-arg
              (lambda (arg)
                (cond ((eq? arg placeholder)
                       (next-arg))
                      ((and-let* (((symbol? arg))
                                  (arg-string (symbol->string arg))
                                  (match-struct
                                   (string-match
                                    (string-append "^" (regexp-quote prefix)
                                                   "([0-9]+)$")
                                    arg-string))
                                  (number (string->number
                                           (match:substring match-struct 1))))
                         (if (> number max-arg)
                             (set! max-arg number))
                         #t)
                       arg)
                      ((and (list? arg)
                            (not (null? arg))
                            (not (memq (first arg) '(\ quote))))
                       (map process-arg arg))
                      (else arg)))))
      (let ((args (map process-arg args)))
        `(lambda ,(append (map (lambda (n)
                                 (string->symbol
                                  (string-append prefix (number->string n))))
                               (iota max-arg 1))
                          rest)
           ,(if (symbol? rest)
                `(apply ,f ,@args ,rest)
                `(,f ,@args)))))))
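Judging by the description above, usage of the \ macro would look roughly like this (a sketch based on the examples given in the text, not verified against the original code):

```scheme
((\ + _1 _2) 1 2)       ; => 3, i.e. ((lambda (_1 _2) (+ _1 _2)) 1 2)
((\ list _2 _1) 'a 'b)  ; => (b a) -- arguments referenced out of order
((\ + _1 _10) 1 2 3 4 5 6 7 8 9 10)
                        ; => 11 -- a ten-argument function adding its
                        ;          first and tenth arguments
```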
Re: [PATCH] Fix thread-unsafe lazy initializations
2014/1/23 Mark H Weaver m...@netris.org: This patch fixes all of the thread-unsafe lazy initializations I could find in stable-2.0, using 'scm_i_pthread_once'. Any comments and/or objections? Does this fix the error that Chris Vine found some time ago? If so, is there any test in the test suite that failed before, and doesn't fail anymore? According to Ludovic, Chris' example corresponds to test-pthread-create-secondary.c -- yet they differ in that Guile's test doesn't use scm_c_eval_string, which helped to reveal the problem. Shouldn't it be added to the test suite along with the patch, as a regression test? Could we come up with any tests that would prove with certainty that there are still some failures caused by lazy initializations of modules? (e.g. would it help to write a test that loads all the modules that are provided by Guile, each in a separate thread?) I hope that this is not a big faux pas :) Thanks
Re: [PATCH] Fix thread-unsafe lazy initializations
2014/1/23 Mark H Weaver m...@netris.org: Does this fix the error that Chris Vine found some time ago? Probably not, but who knows? Would you like to try? Sure, but it will take some time, because I have to set up the build environment first. I think I'll have it by tomorrow, if that's OK. There are definitely still thread-safety problems with module autoloading. I haven't fixed those yet. If so, is there any test in the test suite that failed before, and doesn't fail anymore? No. According to Ludovic, Chris' example corresponds to test-pthread-create-secondary.c -- yet they differ in that Guile's test doesn't use scm_c_eval_string, which helped to reveal the problem. Shouldn't it be added to the test suite along with the patch, as a regression test? It would be a lot of work to add tests for all of these fixes, and I'm not sure it would be easy to write tests that would detect these problems with reasonably high probability. But if you come up with some ideas, I'd be glad to hear them, and then I could perhaps implement them. Maybe it would be a nice feature if the 'make check' target generated a report gathering all the relevant information about the environment, which could be sent to some public address. That way we could increase the probability of detecting various problems (especially those that appear randomly). However, it would be a more serious enterprise, and I'd need more time than I have now to do it. Anyway, I'm already overloaded with work to do. Would you like to do it? As for the regression test, I think I can handle it (and if I don't, I'll ask for some support). As for the other tests, I think I'd need more guidance. Could we come up with any tests that would prove with certainty that there are still some failures caused by lazy initializations of modules? I don't need such a test. It's obvious from the code that module autoloading is not thread safe.
But such tests might prove useful later, once your work has progressed far enough to suspect that the issue is already solved :)
Fwd: Guile interpeter crash
-- Forwarded message -- From: Panicz Maciej Godek godek.mac...@gmail.com Date: 2013/10/1 Subject: Re: Guile interpeter crash To: Dmitry Bogatov kact...@gnu.org 2013/10/1 Dmitry Bogatov kact...@gnu.org Here is code that results in a crash (returns 134). Hope it is interesting. The code you gave is not a proper Scheme program. Cf. http://www.schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-8.html#%_sec_5.3, excerpt: "Although macros may expand into definitions and syntax definitions in any context that permits them, it is an error for a definition or syntax definition to shadow a syntactic keyword whose meaning is needed to determine whether some form in the group of forms that contains the shadowing definition is in fact a definition, or, for internal definitions, is needed to determine the boundary between the group and the expressions that follow the group." In other words, you cannot redefine define. I suppose the interpreter falls into an infinite loop when it tries to substitute define# with define, and then define with define#. That being said, I don't think it's a good idea to use the # character within a symbol. Best regards, M.
Inconsistent behaviour of the pattern matcher
Hi, I've traced something that is not entirely a bug, but which was a little surprise for me. It has to do with the extensions that Guile provides to the Scheme language -- namely, uniform vectors and arrays. The (ice-9 match) module offers the syntax (match #(1 2 3) (#(a b c) (list a b c))) ;=> (1 2 3) However, none of the following behaves as one could expect: (match #u8(1 2 3) (#u8(a b c) (list a b c))) (match #2((1 2)(3 4)) (#2((a b)(c d)) (list a b c d))) (match #u8(1 2 3) ; this is perhaps questionable, but (#(a b c) ; I add it for discussion (list a b c))) After looking into the source of the pattern matcher, I've found out that the problem is probably situated deeper: while it is possible to define macros over regular vectors, like this: (define-syntax nv (syntax-rules () ((nv #(v ...)) (list v ...)))) it doesn't work if we replace the #(v ...) with #u8(v ...), for instance. Best regards, M.
Re: Fun with guile, Erastones + goldbach conjecture
Hey, I see that the style of your code is fairly unorthodox. I'd suggest you read the following chapters of SICP, if you haven't already: Section 3.5 (Streams), which introduces the notion of streams, or lazy lists (which can be infinite), with the most amazing example of an implementation of Eratosthenes' sieve, as well as Sections 4.1 and 4.3 of Chapter 4 (Metalinguistic Abstraction). The first one presents the notion of a meta-circular evaluator, which is then used to implement a non-deterministic evaluator, which allows one to express certain problems extremely elegantly. At the same time, the book shows many examples of the finest programming style in Scheme. I think you might find it quite entertaining and useful. Best regards, M.
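For reference, the SICP-style sieve mentioned above can be sketched in Guile with SRFI-41 streams (which, to my knowledge, ship with Guile 2.0 and later); this is a minimal illustration of the idea, not tuned for efficiency:

```scheme
(use-modules (srfi srfi-41))

;; The infinite lazy list of integers starting at n.
(define (integers-from n)
  (stream-cons n (integers-from (+ n 1))))

;; Keep the head; recursively sieve everything not divisible by it.
(define (sieve s)
  (stream-cons (stream-car s)
               (sieve (stream-filter
                       (lambda (x)
                         (not (zero? (modulo x (stream-car s)))))
                       (stream-cdr s)))))

(define primes (sieve (integers-from 2)))

(stream->list 10 primes) ; => (2 3 5 7 11 13 17 19 23 29)
```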