On Wed, 19 Jan 2022 13:44:13 +0000 Chris Vine wrote:
> On Wed, 19 Jan 2022 13:07:33 +0000 Chris Vine wrote:
> [snip]
> > As I understand it, with linux IPv6 sockets are dual stack capable,
> > and in earlier kernel versions this was enabled by default.  I
> > believe with current versions that is no longer the case, and that
> > you have to specifically enable dual stack by turning off
> > IPV6_V6ONLY using setsockopt before binding on the socket.
> >
> > Then, if receiving an IPv4 connection from address 184.108.40.206,
> > this would be mapped as ::ffff:220.127.116.11.
> >
> > I do not know about other OSes.  I have half a memory that some
> > earlier versions of windows did not support dual stack sockets
> > (XP?).
>
> By the way I did use dual stack some years ago, and I cannot now
> remember all the details, but I think I may have had to bind on
> in6addr_any (which in dual stack would cover INADDR_ANY) or on ::1
> (which would cover 127.0.0.1) to get dual stack to work.  I suggest
> you play around with it to see.
>
> One other correction: when I said there was a mapping to
> ::ffff.18.104.22.168 I meant ::ffff:22.214.171.124.

You have stimulated my interest and this is what I have found.

First, in C the correct call in linux to do what you want and obtain a
dual stack socket is to set the IPV6_V6ONLY option at the IPPROTO_IPV6
level to 0 (off).  However neither IPV6_V6ONLY nor IPPROTO_IPV6 is
defined in guile, so you have to enter the numeric values for your OS
by hand.  In linux this will do it in guile, but of course it is
non-portable:

  (setsockopt [sock] 41 26 0)

However, this only actually seems to accept a connection from an IPv4
address if the socket binds to :: (which is in6addr_any, but the
in6addr_any symbol also appears not to be defined in guile).  Binding
to ::1 (localhost) will not enable you to connect from 127.0.0.1 on my
computer.  Whether binding to :: and so permitting any interface to
access the socket is OK for you depends on what your needs are.  If
not, then it looks as if you are stuck with having two sockets, one
for IPv4 and one for IPv6.
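[Editorial note: for comparison with the numeric (setsockopt [sock] 41 26 0)
call in the message above, here is a sketch in C of the same dual-stack
setup using the named constants (on linux IPPROTO_IPV6 is 41 and
IPV6_V6ONLY is 26).  This is an illustration of the technique under
discussion, not code from the thread; make_dual_stack_listener is an
invented name.]

```c
#include <assert.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a listening socket bound to :: (in6addr_any) with IPV6_V6ONLY
   turned off, so that IPv4 peers show up as ::ffff:a.b.c.d mapped
   addresses.  Pass port 0 to let the kernel pick a free port.  Returns
   the socket fd, or -1 on error. */
int make_dual_stack_listener(unsigned short port)
{
    int s = socket(AF_INET6, SOCK_STREAM, 0);
    if (s < 0) return -1;

    int off = 0;                /* 0 == dual stack enabled */
    if (setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof off) < 0) {
        close(s);
        return -1;
    }

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof addr);
    addr.sin6_family = AF_INET6;
    addr.sin6_addr = in6addr_any;   /* must bind to "::", not "::1" */
    addr.sin6_port = htons(port);

    if (bind(s, (struct sockaddr *)&addr, sizeof addr) < 0
        || listen(s, 5) < 0) {
        close(s);
        return -1;
    }
    return s;
}
```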
On Wed, 19 Jan 2022 13:44:13 +0000 Chris Vine wrote:
> By the way I did use dual stack some years ago, and I cannot now
> remember all the details, but I think I may have had to bind on
> in6addr_any (which in dual stack would cover INADDR_ANY) or on ::
                                                               ^^
::1 of course.
On Wed, 19 Jan 2022 13:07:33 +0000 Chris Vine wrote:
[snip]
> As I understand it, with linux IPv6 sockets are dual stack capable,
> and in earlier kernel versions this was enabled by default.  I
> believe with current versions that is no longer the case, and that
> you have to specifically enable dual stack by turning off IPV6_V6ONLY
> using setsockopt before binding on the socket.
>
> Then, if receiving an IPv4 connection from address 126.96.36.199,
> this would be mapped as ::ffff:188.8.131.52.
>
> I do not know about other OSes.  I have half a memory that some
> earlier versions of windows did not support dual stack sockets (XP?).

By the way I did use dual stack some years ago, and I cannot now
remember all the details, but I think I may have had to bind on
in6addr_any (which in dual stack would cover INADDR_ANY) or on ::
(which would cover 127.0.0.1) to get dual stack to work.  I suggest
you play around with it to see.

One other correction: when I said there was a mapping to
::ffff.184.108.40.206 I meant ::ffff:220.127.116.11.
On Wed, 19 Jan 2022 08:57:51 +0100 "Dr. Arne Babenhauserheide" wrote:
> Hi,
>
> with both fibers server and (web server) there is a split between
> IPv4 and IPv6:
>
> IPv4:
>
> (fibers:run-server handler-with-path #:family AF_INET #:port port
>                    #:addr INADDR_ANY)
>
> (run-server handler-with-path 'http
>             `(#:host "localhost" #:family ,AF_INET
>               #:addr ,INADDR_ANY #:port ,port))
>
> IPv6:
>
> (define s
>   (let ((s (socket AF_INET6 SOCK_STREAM 0)))
>     (setsockopt s SOL_SOCKET SO_REUSEADDR 1)
>     (bind s AF_INET6 (inet-pton AF_INET6 ip) port)
>     s))
> (fibers:run-server handler-with-path #:family AF_INET6 #:port port
>                    #:addr (inet-pton AF_INET6 ip) #:socket s)
>
> (define s
>   (let ((s (socket AF_INET6 SOCK_STREAM 0)))
>     (setsockopt s SOL_SOCKET SO_REUSEADDR 1)
>     (bind s AF_INET6 (inet-pton AF_INET6 ip) port)
>     s))
> (run-server handler-with-path 'http
>             `(#:family ,AF_INET6 #:addr ,(inet-pton AF_INET6 ip)
>               #:port ,port #:socket ,s))
>
> Is there a way to bind to both IPv6 and IPv4, so my server will react
> to requests regardless of whether a client reaches my computer over
> IPv4 or IPv6?

As I understand it, with linux IPv6 sockets are dual stack capable,
and in earlier kernel versions this was enabled by default.  I believe
with current versions that is no longer the case, and that you have to
specifically enable dual stack by turning off IPV6_V6ONLY using
setsockopt before binding on the socket.

Then, if receiving an IPv4 connection from address 18.104.22.168, this
would be mapped as ::ffff:22.214.171.124.

I do not know about other OSes.  I have half a memory that some
earlier versions of windows did not support dual stack sockets (XP?).
On Sun, 13 Dec 2020 09:44:01 -0500 Greg Troxel wrote:
>
> I am packaging guile 3 in pkgsrc-wip.  (It's not promoted to pkgsrc
> proper because I am still having some issues.)
>
> Doing the update from 3.0.3 to 3.0.4 I found this change in installed
> files:
>
> -guile/3.0/lib/libguile-3.0.so.3.0.0-gdb.scm
> +guile/3.0/lib/libguile-3.0.so.1.2.0-gdb.scm
>
> and that strikes me as likely a bug.  Are others seeing this?  Is it
> supposed to be 3.1.0 or 3.0.1, to reflect a micro change?  Having the
> major version go backwards is baffling to me.

More to the point, you will have seen guile-3.0.2's libguile-3.0.so.1
move to guile-3.0.3's libguile-3.0.so.3 and then move back to
guile-3.0.4's libguile-3.0.so.1.

guile-3.0.3 was released with an accidental SO version bump: micro
versions of guile are supposed to retain and not break ABI/SO-version
compatibility.  guile-3.0.4 was released shortly thereafter correcting
the problem.  It looks as if you were unfortunate enough to have
installed guile during the hiatus period.  The best thing is to
install guile-3.0.4 and recompile anything which links to
libguile-3.0.so.
On Sun, 21 Jun 2020 23:04:03 +0200 Ludovic Courtès wrote:
> We are delighted to announce GNU Guile release 3.0.3, the third
> bug-fix release of the new 3.0 stable series.  This release
> represents 170 commits by 17 people since version 3.0.2.  See the
> NEWS excerpt that follows for full details.
[snip]

This has a libguile SO ABI jump from libguile-3.0.so.1 to
libguile-3.0.so.3, which breaks my binaries linked to libguile.  Is
that normal for a micro update in the stable release series and if so
can there be some warning in the announcement?
On Fri, 10 Jan 2020 15:49:49 +0100 Ludovic Courtès wrote:
> Hello Guilers!
>
> I’ve pushed a ‘wip-https-client’ branch that contains improvements
> for HTTPS support in (web client) that I’d like to be part of Guile 3:
>
>   https://git.savannah.gnu.org/cgit/guile.git/log/?h=wip-https-client
>
> In a nutshell:
>
>   • $https_proxy support and a ‘current-https-proxy’ parameter;
>
>   • better TLS alert handling;
>
>   • verification of server certificates (!).
>
> You can test it with a program as simple as:
>
>   (use-modules (web client))
>
>   (call-with-values
>     (lambda ()
>       (http-get "https://guix.gnu.org"))
>     pk)
>
> You can test how expired certificates are handled with:
>
>   guix environment --ad-hoc libfaketime -- \
>     faketime 2022-01-01 ./meta/guile /tmp/https.scm
>
> To check whether $https_proxy is honored, try:
>
>   https_proxy=http://localhost:8118 strace -e connect \
>     ./meta/guile /tmp/https.scm
>
> (I have Privoxy running as a proxy on that port.)
>
> Feedback welcome!

Is the new implementation usable with suspendable ports?  When I last
looked the read-response-body procedure was not, which meant that
http-get and http-put were not, which meant that you could not really
use them with fibers.

Chris
On Mon, 06 Jan 2020 21:34:59 +0100 Andy Wingo wrote:
> On Mon 06 Jan 2020 00:26, Chris Vine writes:
> > I have a 'try' macro which adopts the approach that if an exception
> > arises, the macro unwinds from the dynamic environment of the code
> > where the exception arose to the dynamic environment of the call to
> > 'try', evaluates the cond clauses in that environment, and then if
> > no cond clause matches re-raises the exception in that environment
> > with 'raise' (rather than 'raise-continuable').  In other words, it
> > does stack unwinding in the same way as exception implementations
> > in almost all other mainstream languages which use exceptions.  It
> > would be trivial to implement this with guile-3.0's
> > with-exception-handler with its unwind? argument set to true.
>
> I am not sure this really matches with this use case:
>
> (define (call-with-backtrace thunk)
>   (call/ec
>    (lambda (ret)
>      (with-exception-handler
>       (lambda (exn)
>         (show-backtrace exn) ;; placeholder
>         (ret))
>       thunk))))
>
> (define (false-on-file-errors thunk)
>   (call/ec
>    (lambda (ret)
>      (with-exception-handler
>       (lambda (exn)
>         (if (file-error? exn)
>             (ret #f)
>             (raise-continuable exn)))
>       thunk))))
>
> (define (foo f)
>   (call-with-backtrace
>    (lambda ()
>      (false-on-file-errors f))))
>
> If there's an error while invoking `f' that's not a file error, you
> want to have remained in the context of the error so you can show a
> full backtrace.  To my mind this is central to the exception handler
> design.  So far so good I think.
>
> If I change the implementation of `false-on-file-errors' to be:
>
> (define (false-on-file-errors thunk)
>   (guard (exn ((file-error? exn) #f))
>     (thunk)))
>
> I think this change should preserve the not-unwinding environment
> that `call-with-backtrace' expects.

Good point.
My approach does provide the programmer with less stack information
after the re-raise of an unhandled exception, requiring more manual
intervention to recover that information when debugging the exception.

Before you suggested it I had not previously considered your proposal.
It may turn out to be the optimum solution, but I wonder if it would
surprise the programmer to have the cond conditionals evaluated in a
different dynamic environment from the one in which the cond
consequent is evaluated where there is a conditional which is true.
But I am not sure if that is of any importance.

Chris
On Fri, 22 Nov 2019 16:22:39 +0100 Andy Wingo wrote:
> We are pleased to announce GNU Guile release 2.9.5.  This is the
> fifth pre-release of what will eventually become the 3.0 release
> series.
[snip]
> ** Reimplementation of exceptions
>
> Since Guile's origins 25 years ago, `throw' and `catch' have been the
> primary exception-handling primitives.  However these primitives have
> two problems.  One is that it's hard to handle exceptions in a
> structured way using `catch'.  Few people remember what the
> corresponding `key' and `args' are that an exception handler would
> see in response to a call to `error', for example.  In practice, this
> results in more generic catch-all exception handling than one might
> like.
>
> The other problem is that `throw', `catch', and especially
> `with-throw-handler' are quite unlike what the rest of the Scheme
> world uses.  R6RS and R7RS, for example, have mostly converged on
> SRFI-34-style `with-exception-handler' and `raise' primitives, and
> encourage the use of SRFI-35-style structured exception objects to
> describe the error.  Guile's R6RS layer incorporates an adapter
> between `throw'/`catch' and structured exception handling, but it
> didn't apply to SRFI-34/SRFI-35, and we would have to duplicate it
> for R7RS.
>
> In light of these considerations, Guile has now changed to make
> `with-exception-handler' and `raise-exception' its primitives for
> exception handling and defined a hierarchy of R6RS-style exception
> types in its core.  SRFI-34/35, R6RS, and the exception-handling
> components of SRFI-18 (threads) have been re-implemented in terms of
> this core functionality.  There is also a compatibility layer that
> makes it so that exceptions originating in `throw' can be handled by
> `with-exception-handler', and vice-versa for `raise-exception' and
> `catch'.
>
> Generally speaking, users will see no difference.
> The one significant difference is that users of SRFI-34 will see
> more exceptions flowing through their
> `with-exception-handler'/`guard' forms, because whereas before they
> would only see exceptions thrown by SRFI-34, now they will see
> exceptions thrown by R6RS, R7RS, or indeed `throw'.
>
> Guile's situation is transitional.  Most exceptions are still
> signalled via `throw'.  These will probably migrate over time to
> `raise-exception', while preserving compatibility of course.
>
> See "Exceptions" in the manual, for full details on the new API.

Is this rewrite, and the new with-exception-handler procedure, an
opportunity to think about standardization of guile's implementation
of the R6RS/R7RS 'guard' form, or at least to think about what is
wanted for 'guard'?

The formal semantics (including the specimen implementation) of
'guard' for R6RS with the corrigendum to §7.1 of the standard library
at http://www.r6rs.org/r6rs-errata.html, and for R7RS without
corrigendum (at §4.2.7 and §7.3, page 72 of the standard), is: (i) to
evaluate the guard body within a block with its own continuation (as
constructed by call/cc); (ii) if an exception is thrown, to evaluate
the handler (and its cond clauses) in the dynamic context of the
original caller of 'guard' via that continuation; and (iii) if no
matching cond clause and no else clause is found, to return to the
dynamic environment of the original 'raise' and re-raise the exception
with 'raise-continuable', even for non-continuable exceptions.

If a fully conforming R6RS/R7RS implementation runs this code:

(guard (exn [(equal? exn 5) #f])
  (guard (exn [(equal? exn 6) 'never-reached])
    (dynamic-wind
      (lambda () (display "in") (newline))
      (lambda () (raise 5))
      (lambda () (display "out") (newline)))))

the code evaluates to #f and should print this:

in
out
in
out

In chez scheme it does so.
In most other implementations (including guile and racket) it seems to
print:

in
out

Guile 2.9.5 appears to implement 'guard' this way: (i) to evaluate the
guard body within a block with its own continuation (as constructed by
call/ec); (ii) if an exception is thrown, to evaluate the handler (and
its cond clauses) in the dynamic environment of the guard body within
which the raise occurred (apart from the current exception handler,
which is reset); and (iii) if no matching cond clause and no else
clause is found, to re-raise the exception with 'raise' within the
dynamic context of that guard body.

I don't especially like the mandated behaviour of 'guard', which seems
to be intended to allow the guard form to handle continuable
exceptions as continuable elsewhere in the call stack, which seems
fairly pointless to me.  If this is to be departed from, then how
about doing what most people expect of a high-level exception form,
and unwinding the stack by executing the cond clauses within the
dynamic context of the caller of 'guard' (as R6RS/R7RS do), not in
that of the guard body, and then if a re-throw is necessary doing it
with 'raise' within that context instead of returning to the dynamic
environment of the original 'raise'?
On Thu, 31 Oct 2019 17:20:37 +0100 Andy Wingo wrote:
> Greets :)
>
> On Thu 31 Oct 2019 01:01, Chris Vine writes:
>
> > "Condition" is a strange word for describing structured error
> > objects, I agree.  However, I think it would be quite confusing to
> > describe error objects as exceptions.  "Error object" or "error
> > condition object" seems a reasonable alternative if the bare word
> > "condition" is thought to be inappropriate.
>
> I'm very sympathetic to this argument -- an exception seems like a
> thing-in-motion, not a thing-at-rest.  But perhaps it's just the
> effect of habit, setting up expectations about what good names are.
> (After all, plenty of people seem happy with the term "condition"!)
>
> Perhaps there is a middle ground of sorts: maybe the manual can
> comprehensively describe what R6RS refers to as conditions using the
> term "exception objects".  WDYT?

I think "exception objects" would be fine.

More broadly, I view an exception as something which makes the current
thread of execution follow an exceptional path (say, implemented by
some kind of continuation object), used generally but not exclusively
to indicate that an error has occurred.  An R6RS or SRFI-35 condition
object on the other hand is a structured error information service,
intended to be a thing (but not the only thing) which might be
propagated as the payload of the exception, and which you can
conveniently match on.

Chris
On Wed, 30 Oct 2019 21:13:49 +0100 Andy Wingo wrote:
> Also: should these structured error objects be named exceptions or
> conditions?  SRFI-35, R6RS, and R7RS say "conditions", but racket and
> my heart say "exceptions"; wdyt?

R6RS and R7RS speak of raising an exception, and handling the
exception in an exception handler, and racket uses similar language.
According to R6RS, "when an exception is raised, an object is provided
that describes the nature of the exceptional situation.  The report
uses the condition system described in library section 7.2 to describe
exceptional situations, classifying them by condition types".
However, condition objects are optional when an exception is raised -
you can just as well use a symbol, or a symbol/string pair, for simple
cases.

"Condition" is a strange word for describing structured error objects,
I agree.  However, I think it would be quite confusing to describe
error objects as exceptions.  "Error object" or "error condition
object" seems a reasonable alternative if the bare word "condition" is
thought to be inappropriate.
On Sat, 08 Jun 2019 11:46:10 +0200 Arne Babenhauserheide wrote:
> Chris Vine writes:
> > On Sat, 08 Jun 2019 10:07:45 +0200 Arne Babenhauserheide wrote:
> > [snip]
> >> Wow, I didn’t know that you could do that.
> >>
> >> However: "The details of that allocation are
> >> implementation-defined, and it's undefined behavior to read from
> >> the member of the union that wasn't most recently written."
> >> https://en.cppreference.com/w/cpp/language/union
> >>
> >> Can you guarantee that this works?
> >
> > This is C and not C++ and the provision to which you refer does not
> > apply.
> >
> > Reading from a member of a union other than the one last written to
> > is implementation defined in C89/90, and defined in C99 (with
> > Technical Corrigendum 3) and above
>
> Thank you for the correction and explanation!

You have a good point though if visible type transformations were to
appear in a header rather than a *.c file, because guile headers are
(at the moment) intended to be in the common subset of C and C++ so
that libguile.h can be included in a C++ programme.  Having said that,
gcc and clang support type punning through unions in C++ as well as C.
I don't know if guile is supposed to compile with other compilers
nowadays: but frankly it would be perverse for some other compiler
which supports both C and C++ to invoke different behaviour for unions
in such cases.

Chris
On Sat, 08 Jun 2019 10:07:45 +0200 Arne Babenhauserheide wrote:
[snip]
> Wow, I didn’t know that you could do that.
>
> However: "The details of that allocation are implementation-defined,
> and it's undefined behavior to read from the member of the union that
> wasn't most recently written."
> https://en.cppreference.com/w/cpp/language/union
>
> Can you guarantee that this works?

This is C and not C++ and the provision to which you refer does not
apply.

Reading from a member of a union other than the one last written to is
implementation defined in C89/90, and defined in C99 (with Technical
Corrigendum 3) and above, although it might include a trap
representation (but wouldn't on any platform supported by guile).  You
might want to see in particular footnote 95 of C11 (which isn't
normative but is intended to explain the provisions of §6.5.2.3 which
are).  gcc and clang have always supported type punning through
unions.

Chris
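[Editorial note: a minimal illustration of the union type punning
discussed above, defined behaviour in C99 (TC3) and later.  nth_byte is
an invented helper; the bytes observed depend on endianness, which is
why the example only relies on a property that holds either way.]

```c
#include <assert.h>
#include <stdint.h>

/* Inspect the individual bytes of a 32-bit value by writing one union
   member and reading another -- the type punning under discussion. */
uint8_t nth_byte(uint32_t v, int n)
{
    union {
        uint32_t whole;
        uint8_t bytes[4];
    } u;
    u.whole = v;        /* write one member ...                  */
    return u.bytes[n];  /* ... read another: defined since C99 TC3 */
}
```

Whether bytes[0] is 0x01 or 0x04 is endianness-dependent, but the sum
of the four bytes of 0x01020304 is 10 on any platform.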
On Wed, 15 May 2019 12:09:19 +0200 wrote:
> On Mon, May 13, 2019 at 06:54:38PM +0800, Nala Ginrut wrote:
> > Hi folks!
> > Here's a patch to add current-suspendable-io-status:
> > Its result is a pair: (finished-bytes . rest-bytes)
>
> Sorry for this possibly dumb question, but... is there a way to be
> non-blocking even if there are no readable/writable bytes at all?  Or
> would one have to do multi-threading (and let the single threads
> block) for that?
>
> Thanks

With guile-2.2/3.0, you install suspendable ports and parameterize
current-read-waiter and/or current-write-waiter.  There is an example
here:

  https://github.com/ChrisVine/guile-a-sync2/blob/master/a-sync/await-ports.scm

fibers does something similar.  See also:

  https://www.gnu.org/software/guile/docs/master/guile.html/Non_002dBlocking-I_002fO.html#Non_002dBlocking-I_002fO

Chris
On Tue, 09 Apr 2019 14:24:09 -0400 Mark H Weaver wrote:
> Hi Chris,
>
> Chris Vine writes:
> > On Tue, 09 Apr 2019 04:35:38 -0400 Mark H Weaver wrote:
> >>
> >> I think it's probably fine for 2.2, although a more careful check
> >> should be made for differences in behavior between the old and new
> >> implementations, and tests should be added.  I'll try to get to it
> >> soon.
> >
> > If it is going in 2.2 (or 3.0) it would be nice if the ports could
> > be suspendable.  put-bytevector (used by write!) is suspendable;
> > get-bytevector-some (used by read!) is not.
>
> Unless I'm mistaken, nothing done within custom ports is suspendable,
> regardless of which I/O primitives are used, because the custom port
> implementations themselves are all written in C.  The custom port
> handlers such as 'read!' and 'write!' are always invoked from C code.
> Caleb Ristvedt recently ran into this problem and posted about it on
> guile-user:
>
> https://lists.gnu.org/archive/html/guile-user/2019-03/msg00032.html
>
> I responded here:
>
> https://lists.gnu.org/archive/html/guile-user/2019-04/msg0.html
>
> I'm not sure off-hand what would be required to re-implement custom
> ports in suspendable Scheme code.  Andy Wingo would be a good person
> to ask, since he implemented (ice-9 suspendable-ports).

You are probably right about custom ports' implementations.  I
encountered a similar problem with guile-gnutls ports.  (In the case
of guile-gnutls, because TLS connections are one of the areas where
you would expect suspendable i/o to be useful, that is a shame.)

Chris
On Tue, 09 Apr 2019 04:35:38 -0400 Mark H Weaver wrote:
> Hi Rob,
>
> Rob Browning writes:
>
> > Mark H Weaver writes:
> >
> >> See below for a draft reimplementation of the OPEN_BOTH mode of
> >> open-pipe* based on R6RS custom binary input/output.  On my
> >> machine it increases the speed of your test by a factor of ~1k.
> >
> > Hah, I was about to report that I'd tested something along similar
> > lines (though much more a quick hack to just replace make-rw-port
> > and see what happened), and that I had seen substantial
> > improvements:
> >
> > (define (make-rw-bin-port read-port write-port)
> >   (define (read! dest offset count)
> >     (let ((result (get-bytevector-n! read-port dest offset count)))
> >       (if (eof-object? result) 0 result)))
> >   (define (write! src offset count)
> >     (put-bytevector write-port src offset count)
> >     count)
> >   (define (close x)
> >     (close-port read-port)
> >     (close-port write-port))
> >   (make-custom-binary-input/output-port "open-bin-pipe-port"
> >                                         read! write! #f #f
> >                                         close))
>
> Hah, we had the same idea! :-)
>
> FYI, the reason I didn't use 'get-bytevector-n!', although it leads
> to the much simpler code above, is that it has the wrong semantics
> w.r.t. blocking behavior.  'get-bytevector-n!' blocks until the
> entire requested count is read, returning less than the requested
> amount only if EOF is reached.
>
> In this case, 'read!' is free to return less than 'count' bytes, and
> should block _only_ as needed to ensure that at least one byte is
> returned (or EOF).  Of course, an effort should be made to return
> additional bytes, and preferably a sizeable buffer, but only if it
> can be done without blocking unnecessarily.
>
> Guile has only one I/O primitive capable of doing this job
> efficiently: 'get-bytevector-some'.  Internally, it works by simply
> returning the entire existing port read buffer if it's non-empty.  If
> the read buffer is empty, then read(2) is used to refill the read
> buffer, which is then returned to the user.
>
> The reason for the extra complexity in my reimplementation is that
> there's no way to specify an upper bound on the size of the
> bytevector returned by 'get-bytevector-some', so in general we may
> need to preserve the remaining bytes for future 'read!' calls.
>
> >> Let me know how it works for you.
> >
> > For a first quick test of your patch using the original program I
> > was working on, I see about ~1.4MiB/s without the patch, and about
> > 150MiB/s with it, measured by pv.
>
> It's interesting that I saw a much larger improvement than you're
> seeing.  In my case, the numbers reported by your test program went
> from ~0.35 mb/s to ~333 mb/s, on a Thinkpad X200.
>
> > (If the patch holds up, it'd be nice to have in 2.2, but I suppose
> > that might not be appropriate.)
>
> I think it's probably fine for 2.2, although a more careful check
> should be made for differences in behavior between the old and new
> implementations, and tests should be added.  I'll try to get to it
> soon.

If it is going in 2.2 (or 3.0) it would be nice if the ports could be
suspendable.  put-bytevector (used by write!) is suspendable;
get-bytevector-some (used by read!) is not.

Chris
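[Editorial note: the 'read!' contract described above -- block only
until at least one byte is available, return whatever is there, 0
meaning end-of-file -- is essentially the behaviour of POSIX read(2) on
a blocking descriptor, which is what get-bytevector-some maps onto
internally.  A hedged C sketch of that contract; read_some is an
invented name, not part of Guile.]

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <unistd.h>

/* read!-style callback over a file descriptor: blocks until at least
   one byte is available, stores up to 'count' bytes at dest+offset,
   and returns the number stored, with 0 meaning end-of-file.  Unlike
   get-bytevector-n!, it never waits for the full count. */
size_t read_some(int fd, unsigned char *dest, size_t offset, size_t count)
{
    ssize_t n;
    do {
        n = read(fd, dest + offset, count);
    } while (n < 0 && errno == EINTR);  /* restart if interrupted */
    return n < 0 ? 0 : (size_t)n;       /* real code would report errors */
}
```

On a pipe containing three bytes, a call asking for eight returns
three immediately rather than blocking for the rest.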
On Sun, 14 Oct 2018 00:59:03 -0700 Chris Marusich wrote:
> Hi Mark!
>
> Mark H Weaver writes:
>
> > When the manual says "exit status as returned by ‘waitpid’", it's
> > referring to the "status value" portion of what 'waitpid' returns,
> > i.e. the CDR of 'waitpid's return value.
>
> Thank you for the clarification!  It makes more sense now.
>
> >> scheme@(guile-user)> (status:exit-val $1)
> >> $5 = 0
> >> scheme@(guile-user)> (status:exit-val $3)
> >> $6 = 0
> >
> > Right, these procedures are meant to operate on the status value.
>
> I see.  Then what's the intended use of status:exit-val?  I've read
> its documentation and viewed its source a few times, and it seems
> like this procedure basically behaves like the identity function.
> I'm having trouble thinking of a case where one would use
> status:exit-val instead of simply using the integer status value
> directly.

According to the documentation, status:exit-val returns #f if the
process did not end normally (that is, it did not end by main()
returning or by a call to exit()); otherwise, if the process did end
normally, it returns the process's exit status (that is, 0 if main()
returned, otherwise the value passed to exit()).  It does this by
applying the WIFEXITED and WEXITSTATUS macros to POSIX waitpid()'s
wstatus out parameter (that out parameter being the cdr of guile's
waitpid procedure).

Note that the exit status provided by WEXITSTATUS (and returned by
status:exit-val where it doesn't return #f) is not necessarily the
integer comprising the wstatus out parameter.  According to POSIX it
is in fact the lower 8 bits of that value.  So status:exit-val
definitely is not an identity function.  The documentation is correct,
though perhaps it needs knowledge of POSIX waitpid() to fully
understand it.

Perhaps it would be better if "The integer status value" was replaced
by "The process status code" so the same expression is used for it in
both contexts, thus also helping avoid confusion with "the exit status
value" returned by status:exit-val.  As "the process status code" is
opaque, you don't really need to know what is in it at all.  You just
need to pass it to one or more of the three procedures which decode
it.

> >> Is the documentation incorrect?
> >
> > I'm not sure I'd call it "incorrect", but I agree that it's
> > somewhat confusing and could use clarification.  Would you like to
> > propose a patch?
>
> I'm still a little confused about the intended use of
> status:exit-val, but how does the attached patch look to you?  It's a
> small change, but I think this would have been enough to dispel my
> confusion.
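[Editorial note: the relationship described above -- status:exit-val
amounts to applying WIFEXITED and then WEXITSTATUS (the low 8 bits) to
waitpid's status value -- can be seen directly in C.  exit_val and
run_child_and_decode are invented names for this illustration, with -1
standing in for Scheme's #f.]

```c
#include <assert.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Analogue of Guile's status:exit-val: the exit status if the child
   terminated normally, or -1 (standing in for #f) otherwise. */
int exit_val(int wstatus)
{
    if (!WIFEXITED(wstatus))      /* killed by a signal, stopped, ... */
        return -1;
    return WEXITSTATUS(wstatus);  /* low 8 bits of the exit code */
}

/* Fork a child that exits with the given code and decode its status. */
int run_child_and_decode(int code)
{
    pid_t pid = fork();
    if (pid == 0)
        _exit(code);
    int wstatus;
    waitpid(pid, &wstatus, 0);
    /* wstatus itself is generally NOT equal to code ...          */
    return exit_val(wstatus);  /* ... but the decoded value is it */
}
```

The last assertion below shows why it is not an identity function:
only the low 8 bits of the exit code survive.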
On Fri, 29 Jun 2018 12:34:07 +0200 Hans Åberg wrote:
> > On 29 Jun 2018, at 12:10, Chris Vine wrote:
> >
> >> For C++, these are only optional, cf. [1], as they require no
> >> padding.  So an alternative is to typedef the obligatory
> >> int_fast<2^k>_t types, perhaps leaving the API unchanged.
> >>
> >> 1. https://en.cppreference.com/w/cpp/types/integer
> >
> > The fixed size integer types are optional in C99/11 also, depending
> > on whether the platform provides a fixed size integer of the type
> > in question without padding and (for negative integers) a two's
> > complement representation.
>
> Yes, I saw that, too.  It is important to ensure two's complement,
> too, which the other types do not.
>
> > If, say, uint8_t is available in stdint.h for C, it will be
> > available for C++.  §21.4.1/2 of C++17 makes this even more
> > explicit: "The [cstdint] header defines all types and macros the
> > same as the C standard library header <stdint.h>."
>
> Which C version?  In g++7, __STDC_VERSION__ is not defined, only
> __STDC__.

In C++17, references to "C" are to ISO/IEC 9899:2011.  References to
the C standard library are to "the library described in Clause 7 of
ISO/IEC 9899:2011".  In C++11 and C++14, the references are to ISO/IEC
9899:1999.

By default (if you don't use the -std=c++xx flag) g++-7 compiles
according to C++14.
On Fri, 29 Jun 2018 10:39:33 +0200 Hans Åberg wrote:
> > On 29 Jun 2018, at 09:39, Andy Wingo wrote:
> >
> > It would seem that the first four features of C99 are OK for all
> > platforms that we target, with the following caveats:
> >
> > * We should avoid using C++ keywords (e.g. throw) in Guile API
> >   files.
> >
> > * We might want to avoid mixed decls and statements in inline
> >   functions in Guile API files.
> >
> > We should probably avoid stdbool.h and compound literals, for C++
> > reasons.
>
> You might make a separate C++ header: It turned out too complicated
> for Bison to maintain the compile-as-C++ generated C parser.
>
> > In Guile 3.0 (master branch), the types "scm_t_uint8" and so on are
> > now deprecated.  My recommendation is that all users switch to use
> > e.g. "uint8_t", "ptrdiff_t", etc from <stdint.h> instead of the
> > scm_t_uint8, etc definitions that they are now using.  The
> > definitions are compatible on all systems, AFAIU, and on GNU,
> > scm_t_uint8 has long been a simple typedef for uint8_t.
>
> For C++, these are only optional, cf. [1], as they require no
> padding.  So an alternative is to typedef the obligatory
> int_fast<2^k>_t types, perhaps leaving the API unchanged.
>
> 1. https://en.cppreference.com/w/cpp/types/integer

The fixed size integer types are optional in C99/11 also, depending on
whether the platform provides a fixed size integer of the type in
question without padding and (for negative integers) a two's
complement representation.  If, say, uint8_t is available in stdint.h
for C, it will be available for C++.  §21.4.1/2 of C++17 makes this
even more explicit: "The [cstdint] header defines all types and macros
the same as the C standard library header <stdint.h>."

I imagine guile will not run on any platform that does not support 8
and 32 bit fixed size integers.
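[Editorial note: whether a platform actually provides the optional
exact-width types, with no padding and a two's-complement
representation, can be checked at compile time.  A sketch using C11's
_Static_assert; if the translation unit compiles, the platform has the
properties discussed above.]

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/* uint8_t/int8_t etc. are only defined by <stdint.h> when an unpadded,
   two's-complement representation exists, so these checks mostly
   restate what their mere presence already guarantees. */
_Static_assert(CHAR_BIT == 8, "8-bit bytes required");
_Static_assert(sizeof(uint8_t) == 1 && sizeof(uint32_t) == 4,
               "exact-width types must have no padding");
_Static_assert(INT8_MIN == -128 && INT32_MIN < -INT32_MAX,
               "two's complement required for the signed variants");
```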
On Sat, 5 May 2018 21:03:14 -0300 David Pirotte wrote:
> Hello Guilers,
>
> > 1- no --use-guile-site
> >
> > in this case, imo, locations should be
> >
> > $(datarootdir)/ [ source
>
> FWIW,
>
> This is what guile-gnome does, and it also does it in $(libdir),
> $(includedir) ...
>
> It does it using the guile-gnome API version, not the guile effective
> version, which I also think 'projects' should do, so currently
> guile-gnome pure scheme modules land in:
>
> $(datarootdir)/guile-gnome-2
>
> [ except for the doc, installed in
> [ $(docdir)/guile-gnome-platform/
>
> The only module that guile-gnome installs in
> $(datarootdir)/guile/site (and not
> $(datarootdir)/guile/site/GUILE_EFFECTIVE_VERSION, as I also claim we
> should not do this) is gnome-2, a module that users import to
> 'inform' guile of source and lib locations (which they also can do
> using the guile-gnome-2 script ...)

Instead of using the project's $(libdir), $(datarootdir), and so
forth, I use pkg-config to interrogate guile-2.2.pc or guile-2.0.pc,
and install source (.scm) files in its revealed 'sitedir' and compiled
(.go) files in its revealed 'siteccachedir', with the base project
module name added to the path in each case.  That way, even if the
project (including any of its shared library files) is installed in,
say, the /usr/local prefix, the scheme files are installed in the
correct prefix for guile.  That is presumably why 'sitedir' and
'siteccachedir' are exposed by pkg-config.

Chris
On Mon, 25 Sep 2017 19:59:39 +0100 Chris Vine <vine35792...@gmail.com> wrote: > ... you could consider launching the new process in C code via the > guile FFI so you can ensure that no non-async-signal-safe code is > called at the wrong time; but presumably you would still have by some > means to prevent the garbage collector from being able to start a > memory reclaiming run in the new process after the fork and before the > exec, and again I do not know how you would do that. You would also > need to block system asyncs before forking (and unblock after the fork > in the original process) but that is trivial to do.

On reflection I don't think there is an issue with the garbage collector if you adopt this approach. After forking there is only one thread running in the new process - the thread of execution of the forking thread - and provided that the new process does not attempt to allocate memory after the fork and before the exec, I doubt that the garbage collector can be provoked into trying to reclaim memory in the new process.
On Mon, 25 Sep 2017 19:14:22 +0200 Mathieu Othacehe
wrote: > Hi Chris, > > > This works exactly as you would expect from its POSIX equivalents > > and has the advantage that you can read from the pipe as the > > sub-process is proceeding rather than just collect at the end. > > Thank you! Following your suggestion, I ended up with:
>
> --8<---cut here---start--->8---
> (let* ((err-pipe (pipe))
>        (out-pipe (pipe))
>        (read-out (car out-pipe))
>        (write-out (cdr out-pipe))
>        (read-err (car err-pipe))
>        (write-err (cdr err-pipe))
>        (pid (run-concurrently+
>              (apply tail-call-program "...")
>              (write-out 1)
>              (write-err 2)))
>        (ret (status:exit-val (cdr (waitpid pid)))))
>   (close-port write-out)
>   (close-port write-err)
>   (let ((output (read-string read-out))
>         (error (read-string read-err)))
>     (close-port read-out)
>     (close-port read-err)
>     (case ret
>       ((0) output)
>       (else (raise ...)))))
> --8<---cut here---end--->8---
>
> which seems to work. However, run-concurrently+ uses "primitive-fork" > which is forbidden in a multi-thread context (sadly, mine). > > Do you have any idea on how to overcome this?

Any launching of a new process requires a fork and if (as appears to be your intention) you want to replace the process image with a new one, an exec. As you appear to know, POSIX allows only async-signal-safe functions to be called in a multi-threaded program between the fork and the exec, although glibc does relax this somewhat. Since anything you do in guile between the fork and the exec has the potential to allocate memory, that appears to mean that, as you say, you cannot call primitive-fork in a guile program at a time when it is running more than one thread.
If so, I do not know how to circumvent that: you could consider launching the new process in C code via the guile FFI so you can ensure that no non-async-signal-safe code is called at the wrong time; but presumably you would still have by some means to prevent the garbage collector from being able to start a memory reclaiming run in the new process after the fork and before the exec, and again I do not know how you would do that. You would also need to block system asyncs before forking (and unblock after the fork in the original process) but that is trivial to do. As regards your code, if you do not need to distinguish between stdout and stderr, you would do better to have only one pipe and use the write port of the pipe for both of descriptors 1 and 2. That means that you could read the pipe while the new process is proceeding rather than after it has finished (which risks filling up the pipe): just loop reading the read end of the pipe until an eof-object is received, and then call waitpid after that. Chris
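The single-pipe arrangement suggested above can be sketched with core guile POSIX procedures alone. This is an illustrative sketch, not a definitive implementation: as discussed in this thread it is only safe in a program running a single thread at the time of the fork, and error handling is omitted.

```scheme
(use-modules (ice-9 rdelim))   ; for read-string

;; Run PROG with ARGS, routing the child's stdout and stderr into one
;; pipe; drain the pipe to EOF while the child runs, then reap it.
(define (run-and-collect prog . args)
  (let* ((p (pipe))
         (read-end (car p))
         (write-end (cdr p))
         (pid (primitive-fork)))
    (if (zero? pid)
        ;; Child: point descriptors 1 and 2 at the pipe, then exec.
        (begin
          (close-port read-end)
          (dup2 (fileno write-end) 1)
          (dup2 (fileno write-end) 2)
          (apply execlp prog prog args))
        ;; Parent: close its copy of the write end so that EOF arrives
        ;; when the child exits, read everything, and only then waitpid.
        (begin
          (close-port write-end)
          (let ((output (read-string read-end)))
            (close-port read-end)
            (cons (status:exit-val (cdr (waitpid pid))) output))))))
```

Because the pipe is read before waitpid is called, the child cannot deadlock on a full pipe buffer, which is the risk the paragraph above describes.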
On Sat, 23 Sep 2017 11:58:34 +0200 Mathieu Othacehe
wrote: > Hi, > > I recently used "open-pipe*" to launch a process but was unable to > read from stderr. This subject was already discussed on this ml > here: > > https://lists.gnu.org/archive/html/guile-user/2015-04/msg3.html > > Racket seems to have procedures to provide stdout/stdin/stderr ports > for a given subprocess. > > Mark, you said this subject was on your TODO list, is there anything > available or would it be possible to develop a Racket-like API?

The run-concurrently+ procedure in guile-lib ( http://www.nongnu.org/guile-lib/doc/ref/os.process/ ) should do what you want. Construct a pipe with guile's POSIX 'pipe' procedure and pass the write port to the run-concurrently+ procedure for file descriptor 2 (stderr) and then read from the read port. Close the write port on the reader side before reading so that when the sub-process ends, the read will return with an eof-object. This works exactly as you would expect from its POSIX equivalents and has the advantage that you can read from the pipe as the sub-process is proceeding rather than just collect at the end. Having said that, it would be nice to have this in a more formalized form in guile itself, say something along the lines of ocaml's Unix.create_process function. Chris
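Put concretely, the wiring suggested above looks something like the sketch below. Note the assumptions: run-concurrently+ and tail-call-program come from guile-lib's (os process) module, the (port fd) redirection clauses follow the usage shown later in this thread rather than a verified signature, and "cmd" is a stand-in program name.

```scheme
(use-modules (os process) (ice-9 rdelim))

(let* ((err-pipe (pipe))
       (read-err (car err-pipe))
       (write-err (cdr err-pipe))
       ;; Attach the pipe's write port to the child's descriptor 2.
       (pid (run-concurrently+ (tail-call-program "cmd" "--arg")
                               (write-err 2))))
  ;; Drop our reference to the write end first, so that reading to EOF
  ;; terminates when the sub-process exits.
  (close-port write-err)
  (let ((errors (read-string read-err)))
    (close-port read-err)
    (waitpid pid)
    errors))
```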
On Wed, 17 Aug 2016 09:56:58 -0400 "Thompson, David"
wrote: > On Wed, Aug 17, 2016 at 9:35 AM, Tobias Reithmaier > wrote: > > Hello, > > > > is there a way to program an Inter-Process Communication (IPC) in > > guile like you do it with the Linux-Libc-API with the combo fork, > > pipe, dup2 and exec? If you use the popen-module it's not the same > > because you have to wait until the program has finished. > > But there are use-cases in which the program doesn't finish, e.g. a > > server which outputs data every minute. > > > > So can I use the popen-module to control such a server with pipes? > > Or is there another way? > > Check out the "POSIX" section of the manual. Fork, dup, and exec are > all available.

Tobias might also want to consider the run-with-pipe procedure, and the run-concurrently+ procedure (with tail-call-program), as provided by guile-lib ( http://www.nongnu.org/guile-lib/doc/ref/os.process/ ). Tobias looks as if he has posted to the wrong newsgroup. Follow-ups should, I think, go to guile-user (but I have not set them). Chris
On Sat, 16 Jul 2016 11:07:40 +0200 Andy Wingo
wrote: [snip] > I would like stdint.h though :) I agree. stdint.h is required by C++11, Appendix D5, to be available in C++11 and later, with the same meaning as in C99, but in practice it was available before then. It is provided by gcc-4.4 with the -std=c++0x or -std=c99 flags for example (gcc-4.4 is the oldest compiler I have installed, which I keep for test purposes). I think it is reasonable to assume these days that any reasonable compiler implementation will have the C99 extended integer types available to it, including the optional ones so far as the architecture supports them. Chris
On Thu, 14 Jul 2016 17:41:45 +0200 Daniel Llorens
wrote: [snip] > I think we'd want C89/C90 users to still be able to #include > <libguile.h>. Dunno.

libguile.h can also at present be included in C++98/03/11/14 code by design - all the necessary 'extern "C"' stuff is there. I would hope that would continue, but some C99 things, such as variable length arrays, designated initializers, the _Complex type, the restrict qualifier and compound literals (except in C++11/14) are not available. There is no problem with using these in libguile implementation (*.c) code, but including them in headers will generally stop the headers being usable in C++ code. Having said that, g++ happens to accept some of these in C++ code as an extension. Chris
On Thu, 23 Jun 2016 09:36:48 +0200 Andy Wingo
wrote: [snip] > Excellent. Though I think that eventually we will want to bless one > of these concurrency patterns as the default one, we're a long way > away from that, and even if we do bless one I think we will always > want to allow people to experiment and deploy different ones. So, > great, glad to hear you are doing work in this area :) A few things on that. First, there will always be a use for an event loop to do event-loopy things, irrespective of whether and how a coroutine interface is put around it. Sometimes you want to abstract things away, sometimes you don't. Secondly, as I understand it in the end you want pre-emptive "green" threads for guile, whereas my code equates to co-operative multi-tasking, whilst also working with native threads. I must come clean and say that I don't like "green" threads. Which leads on to the third point, which is that I would like to see guile match its words (in its documentation) with respect to native threads. I have found they work fine, with caution about shared global state. You think they don't, except possibly on intel, because some of its lock-free structures/variables -- and I think you are possibly referring to the VM here -- lack appropriate fences or other atomics. (The higher level C and scheme code has plainly had serious attempts at thread-safety made for it using mutexes.) Chris
On Mon, 20 Jun 2016 10:01:57 +0100 Chris Vine <ch...@cvine.freeserve.co.uk> wrote: > On Mon, 20 Jun 2016 09:34:26 +0200 > Andy Wingo <wi...@pobox.com> wrote: > [snip] > > I must not be communicating clearly because this is definitely not > > what I am proposing. The prompt doesn't service anything, and it's > > just the one user-space thread which is suspended, and when it > > suspends, it suspends back to the main loop which runs as usual, > > timers and all. > > > > [ASCII diagram: main loop --> run-thread -|prompt|-> (user code) --> read-char --> waiter, with the waiter aborting back through the prompt to the main loop; stack grows this way ->] > > > > The current-read-waiter aborts to a prompt. That prompt is instated > > when the thread is run or resumed. When you abort to that prompt, > > you add the FD to the poll set / main loop / *, remember the > > delimited continuation, and return to the main loop. When the fd > > becomes readable or the gsource fires or whatever, you reinstate the > > delimited continuation via a new invocation of run-thread (prompt > > and all). > > Ah right, that is clearer, thank you. There would indeed be a prompt > for each user glib event source comprised in the "thread" abstraction, > which the read-waiter (or whatever) aborts to. It is that abstraction > that I was missing and will need to look at.

I have stirred myself and installed guile-2.1.3. On looking more at the suspendable ports code it became obvious and I haven't needed to adopt anything like ethreads with its "thread" abstraction: instead I have kept the approach already adopted in the guile-a-sync library. However, the consequence of using suspendable ports instead of C ports is that the await-getline! procedure (as an example) has been reduced to a mere 16 lines of code, mainly because it is possible to use (ice-9 rdelim)'s read-line procedure with non-blocking ports.
I have made a new repository for guile-a-sync for use with guile-2.2 and when I am happy with the new interfaces (and assuming nothing else goes wrong) I will put it up on github. This is very nice. Thanks for taking the time to go through it with me. Chris
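The shape of the resulting await-getline! described above is roughly as follows. This is a sketch under stated assumptions: suspend-until-readable! is a hypothetical hook into whatever event loop is in use (not a real guile or guile-a-sync procedure), and the port must first be put into non-blocking mode.

```scheme
(use-modules (ice-9 suspendable-ports)
             (ice-9 rdelim))

;; Replace the C port implementations with the Scheme ones that know
;; how to suspend instead of blocking.
(install-suspendable-ports!)

;; Make PORT non-blocking so reads hit the read waiter rather than
;; blocking in the kernel.
(define (make-non-blocking! port)
  (fcntl port F_SETFL (logior O_NONBLOCK (fcntl port F_GETFL))))

;; With suspendable ports installed, (ice-9 rdelim)'s read-line works
;; unchanged on a non-blocking port: whenever a read would block, the
;; current read waiter runs, and here it hands control back to the
;; event loop until the descriptor is ready again.
(define (await-getline port)
  (parameterize ((current-read-waiter
                  (lambda (p)
                    (suspend-until-readable! (fileno p)))))
    (read-line port)))
```

The point of the reduction in code size mentioned above is visible here: all the byte-level buffering and partial-character state is kept inside the port, not in user code.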
On Mon, 20 Jun 2016 09:34:26 +0200 Andy Wingo
wrote: [snip] > I must not be communicating clearly because this is definitely not > what I am proposing. The prompt doesn't service anything, and it's > just the one user-space thread which is suspended, and when it > suspends, it suspends back to the main loop which runs as usual, > timers and all. > > [ASCII diagram: main loop --> run-thread -|prompt|-> (user code) --> read-char --> waiter, with the waiter aborting back through the prompt to the main loop; stack grows this way ->] > > The current-read-waiter aborts to a prompt. That prompt is instated > when the thread is run or resumed. When you abort to that prompt, you > add the FD to the poll set / main loop / *, remember the delimited > continuation, and return to the main loop. When the fd becomes > readable or the gsource fires or whatever, you reinstate the > delimited continuation via a new invocation of run-thread (prompt and > all).

Ah right, that is clearer, thank you. There would indeed be a prompt for each user glib event source comprised in the "thread" abstraction, which the read-waiter (or whatever) aborts to. It is that abstraction that I was missing and will need to look at. Chris
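Stripped of the I/O details, the suspend-and-resume cycle described in this exchange can be sketched with a prompt directly; the names below are illustrative only, and the "fd" is just a value carried through the abort.

```scheme
(define tag (make-prompt-tag "scheduler"))

;; Run THUNK under the scheduler's prompt.  The handler receives the
;; delimited continuation K when the task suspends; a real scheduler
;; would stash K until the fd becomes readable.
(define (run-task thunk)
  (call-with-prompt tag
    thunk
    (lambda (k fd)
      (format #t "task waiting on fd ~a; resuming~%" fd)
      ;; Resuming is just another run of the continuation under a
      ;; fresh prompt, exactly as described above.
      (run-task (lambda () (k 'ready))))))

(run-task
 (lambda ()
   ;; Suspend: control returns to the handler above, which later
   ;; resumes us, making abort-to-prompt return 'ready.
   (let ((value (abort-to-prompt tag 5)))
     (format #t "resumed with ~a~%" value))))
```

Each resume re-instates the prompt, which is why the diagram shows a prompt sitting between the main loop and the user code on every run of the thread.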
On Mon, 20 Jun 2016 13:38:39 +1000 William ML Leslie <firstname.lastname@example.org> wrote: > On 20 June 2016 at 06:09, Chris Vine <ch...@cvine.freeserve.co.uk> > wrote: > > OK I am grateful for your patience in explaining this. I need to > > think about it, but while this works where all events come from > > user-derived events, I doubt that this would work with guile-gnome > > and the glib main loop in the round, because for gtk+ to function > > the glib main loop must block within glib's own internal call to > > the poll() and not within the guile prompt, or gdk and other events > > will jam up. You would probably need to run the glib main loop in > > its own (pthread) thread, about which you have previously given > > dire warnings so far as guile as it currently stands is concerned. > > > > As I say, I really need to think more about this and look further at > > the code. I may well be missing something. > > If the event loop is external as you suggested, you could use a > different implementation of the loop that used g_io_add_watch and > returned to the glib mainloop when you're in a glib or gtk > application. Sure, but that's not really the difficulty. On further reflection on Andy's post, I think what I understand to be his approach to suspendable-ports could be made to work with a glib main loop without invoking additional pthread threads, but I think it would be barely scalable or tractable. For Andy's sake, can I give a concrete example of an entirely single threaded program? Let's say the program uses guile-gnome and is running a glib main loop, on which the program is (of necessity if using a glib main loop) blocking in g-main-loop-run, so all code is running as events in the main loop. For simplicity, let's say you have a file watch in the glib event loop which has made a non-blocking read of the first byte of a multi-byte UTF-8 character, and the suspendable-ports implementation is in use because it is a non-blocking read of a TCP socket. 
(It could be something different: there may be a non-blocking read request for a complete line of text which has so far only partially been satisfied, but the partially complete character is the easiest example to deal with.) The read request is therefore in read-waiter waiting for a complete UTF-8 byte sequence to arrive. On the current hypothesis, read-waiter comprises a procedure which is suspended on a prompt waiting for an event to occur in the glib main loop which will cause it to resume, comprising the file descriptor becoming ready which will satisfy the read request. But while suspended in read-waiter, this prompt would have to service any user event sources which might become ready in the glib main loop, not just the particular file descriptor in question becoming ready. These could be nested with other non-blocking reads, also in read-waiter, on other descriptors which also have a pending partial read, or with time-outs or with tasks composed with idle handlers. Any user event source which has been attached to the glib main loop which fires would have to cause all pending partial reads to go through the nested non-blocking read cycles to see if anything is available, as well as servicing the particular user event in question. It seems hardly scalable, if it would work at all. My approach on the other hand does not nest events from the glib main loop in this way. It is still possible I have missed something. The proof of all these things is in the pudding. I suppose what is needed is an attempt at a practical implementation using read-waiter: but for the moment I don't see it. Chris
On Sun, 19 Jun 2016 19:48:03 +0200 Andy Wingo <wi...@pobox.com> wrote: > On Sun 19 Jun 2016 17:33, Chris Vine <ch...@cvine.freeserve.co.uk> > writes: > > > The answer I have adopted when reading from TCP sockets is to > > extract individual bytes only from the port into a bytevector using > > R6RS's get-u8 procedure and (if the port is textual rather than > > binary) to reconstruct characters from that using > > bytevector->string at, say, a line end. An EAGAIN/EWOULDBLOCK > > exception is then just treated as an invitation to return to the > > prompt, and read state is retained in the bytevector. > > Yep, makes sense, though it will be really slow of course. And, you'd > have to make people rewrite their code to use your I/O primitives > instead of using read-line from (ice-9 rdelim); a bit of a drag, but > OK. It's not excessively slow provided that the port itself (as opposed to the conversion to unicode characters, which you have to do yourself) can be buffered. You can loop calls to get-u8 on char-ready? to vacate the buffers so far as wanted, without returning to the prompt for every byte. From that point of view it shouldn't be that much slower than repeatedly calling read-char on a port with latin-1 encoding on a normal blocking port. [snip] > > I don't think I have got to grips with how to do that with > > read-waiter, because the read-waiter comprises in effect another > > loop (in which the main event loop with its own prompts would have > > to run) until the read request has been satisfied. I would need to > > think about it. Since ethreads use a poll()/epoll() loop, > > presumably you think it is straightforward enough to integrate the > > two, even if at present I don't. > > Here is the code. First, a helper: > > ;; The AFTER-SUSPEND thunk allows the user to suspend the current > ;; thread, saving its state, and then perform some other nonlocal > ;; control flow. 
> ;;
> (define* (suspend #:optional (after-suspend (lambda (ctx thread) #f)))
>   ((abort-to-prompt (econtext-prompt-tag (current-econtext)) after-suspend)))
>
> Assume there is some current-econtext parameter with a "context" which holds the prompt tag associated with the scheduler. As you see when the continuation resumes, it resumes by calling a thunk, allowing exceptions to be thrown from the context of the suspend.
>
> Here's the main loop function, which you could replace with the GLib main loop or whatever:
>
> (define* (run #:optional (ctx (ensure-current-econtext))
>               #:key (install-suspendable-ports? #t))
>   (when install-suspendable-ports? (install-suspendable-ports!))
>   (parameterize ((current-econtext ctx)
>                  (current-read-waiter wait-for-readable)
>                  (current-write-waiter wait-for-writable))
>     (let lp ()
>       (run-ethread ctx (next-thread ctx))
>       (lp))))
>
> Cool. Now the wait functions:
>
> (define (wait-for-readable port)
>   (wait-for-events port (port-read-wait-fd port) (logior EPOLLIN EPOLLRDHUP)))
>
> (define (wait-for-writable port)
>   (wait-for-events port (port-write-wait-fd port) EPOLLOUT))
>
> Now the wait-for-events function:
>
> (define (wait-for-events port fd events)
>   (handle-events
>    port
>    events
>    (suspend
>     (lambda (ctx thread)
>       ...
>
> Well that's a mess, but the thing to know is that the `suspend' will abort to the relevant prompt, and then invoke the thunk that's its argument. Here's `handle-events' and we are done:
>
> (define (handle-events port events revents)
>   (unless (zero? (logand revents EPOLLERR))
>     (error "error reading from port" port)))
>
> But I guess that error could be "reading or writing"; oh well.
>
> See http://git.savannah.gnu.org/cgit/guile.git/tree/module/ice-9/ethreads.scm?h=wip-ethreads&id=253dc1a7114b89351a3aa330caf173b98c5a65dd, it's not long but it takes some time to read.
> I think I can fairly ask that of you though, given your interest in this area :)

OK I am grateful for your patience in explaining this. I need to think about it, but while this works where all events come from user-derived events, I doubt that this would work with guile-gnome and the glib main loop in the round, because for gtk+ to function the glib main loop must block within glib's own internal call to the poll() and not within the guile prompt, or gdk and other events will jam up. You would probably need to run the glib main loop in its own (pthread) thread, about which you have previously given dire warnings so far as guile as it currently stands is concerned. As I say, I really need to think more about this and look further at the code. I may well be missing something.
On Sun, 19 Jun 2016 11:13:17 +0200 Andy Wingo <wi...@pobox.com> wrote: > Hi :) > > On Sun 12 Jun 2016 10:25, Chris Vine <ch...@cvine.freeserve.co.uk> > writes: > > >> > >> http://www.gnu.org/software/guile/docs/master/guile.html/Input-and-Output.html > >> > > > > The documentation indicates that with the C ports implementation in > > guile-2.2, reads will block on non-blocking file descriptors. > > Correct. > > > This will stop the approach to asynchronicity used in 8sync and > > guile-a-sync (the latter of which I have written) from working > > correctly with sockets on linux operating systems, because at > > present both of these use guile's wrapper for select. > > The trouble is that AFAIU there is no way to make non-blocking input > work reliably with O_NONBLOCK file descriptors in the approach that > Guile has always used. > > As you know, the current behavior for Guile 2.0 is to throw an > exception when you get EAGAIN / EWOULDBLOCK. If I am understanding > you correctly, your approach is to only read from a port if you have > done a select() / poll() / etc on it beforehand indicating that you > can read at least one byte.

My approach would be to do that, if it worked. And it does work with pipes and unix domain sockets provided you only read a byte at a time (see further below), but does not work on linux with TCP sockets because linux's select() and poll() system calls are not POSIX compliant. Therefore with TCP sockets on linux you have to use non-blocking reads and cater for the possibility of an EAGAIN/EWOULDBLOCK exception. > The problem with this is not only spurious wakeups, as you note, but > also buffering. Throwing an exception when reading in Guile 2.0 will > discard input buffers in many cases. Likewise when writing, you won't > be able to know how much you've written.
> > This goes not only for the explicit buffers attached to ports and > which you can control with `setvbuf', but also implicit buffers, and > it's in this case that it's particularly pernicious: if you > `read-char' on a UTF-8 port, you might end up using local variables > in the stack as a buffer for reconstructing that codepoint. If you > throw an exception in the middle, you discard those bytes. Likewise > for writing.

I recognise this problem. The answer I have adopted when reading from TCP sockets is to extract individual bytes only from the port into a bytevector using R6RS's get-u8 procedure and (if the port is textual rather than binary) to reconstruct characters from that using bytevector->string at, say, a line end. An EAGAIN/EWOULDBLOCK exception is then just treated as an invitation to return to the prompt, and read state is retained in the bytevector. In effect this is doing by hand what a more complete non-blocking EAGAIN-safe port implementation might otherwise do for you. Writing is something else. To do it effectively the writer to the port must in any event cater for the fact that when the buffer is full but the underlying file descriptor is ready for writing, the next write will cause a buffer flush, and if the size of the buffer is greater than the number of characters that the file can receive without blocking, blocking might still occur. You usually need to switch off buffering for writes (but you quite often may want to do that anyway on output ports for sockets). > For suspendable ports, you don't throw an exception: you just assume > the operation is going to work, but if you get EAGAIN / EWOULDBLOCK, > you call the current-read-waiter / current-write-waiter and when that > returns retry the operation. Since it operates on the lowest level of > bytes, it's reliable. Looping handles the spurious wakeup case.
> > > However, to cater for other asynchronous implementations of file > > watches, would it be possible to provide a configurable option > > either to retain the guile-2.0 behaviour in such cases (which is to > > throw a system-error with errno set to EAGAIN or EWOULDBLOCK), or > > to provide a non-blocking alternative whereby the read operation > > would, instead of blocking, return some special value such as an > > EAGAIN symbol? Either would enable user code then to resume to its > > prompt and let other code execute. > > Why not just (install-suspendable-ports!) and > > (parameterize ((current-read-waiter my-read-waiter)) ...) > > etc? It is entirely possible with Guile 2.1.3 to build an > asynchronous coroutine-style concurrent system in user-space using > these primitives. See the wip-ethread branch for an example > implementation. I would want to continue using an external event loop implemented with poll() or select() and delimited continuations. This makes it relati
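The byte-at-a-time scheme described in this exchange amounts to something like the following sketch. Here yield-to-event-loop! is a hypothetical stand-in for the return to the prompt, and the example reconstructs a UTF-8 line once a newline byte arrives; it assumes a guile-2.0-style port that throws a system-error with EAGAIN on a would-block read.

```scheme
(use-modules (rnrs io ports)        ; get-u8
             (rnrs bytevectors))    ; u8-list->bytevector, utf8->string

;; Read one byte from a non-blocking port, mapping EAGAIN/EWOULDBLOCK
;; to the symbol 'eagain instead of letting the exception escape and
;; discard state.
(define (get-u8/nonblocking port)
  (catch 'system-error
    (lambda () (get-u8 port))
    (lambda args
      (if (memv (system-error-errno args) (list EAGAIN EWOULDBLOCK))
          'eagain
          (apply throw args)))))

;; Accumulate bytes (most recent first) in ACC until a newline or EOF,
;; retaining the partial read state across suspensions in the
;; accumulator rather than in the port.
(define (read-line-bytes port acc)
  (let ((b (get-u8/nonblocking port)))
    (cond ((eq? b 'eagain)
           (yield-to-event-loop!)          ; back to the prompt
           (read-line-bytes port acc))
          ((or (eof-object? b) (= b 10))   ; 10 = newline
           (utf8->string (u8-list->bytevector (reverse acc))))
          (else (read-line-bytes port (cons b acc))))))
```

Because only whole bytes are pulled out, an EAGAIN can never strand half a codepoint inside hidden port state, which is the hazard described in the quoted text.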
On Sat, 11 Jun 2016 19:02:09 +0200 Andy Wingo
wrote: > On Thu 14 Apr 2016 16:08, l...@gnu.org (Ludovic Courtès) writes: > > > Andy Wingo skribis: > > > >> I am working on improving our port implementation to take > >> advantage of the opportunity to break ABI in 2.2. I am wondering > >> how much I can break C API as well -- there are some changes that > >> would allow better user-space threading > >> (e.g. > >> http://thread.gmane.org/gmane.lisp.guile.devel/14158/focus=15463 > >> or Chris Webber's 8sync). But those changes would require some > >> incompatible changes to the C API related to port internals. This > >> API is pretty much only used when implementing port types. So, I > >> would like to collect a list of people that implement their own > >> ports :) I know Guile-GNOME does it for GNOME-VFS, though > >> GNOME-VFS is super-old at this point... Anyway. Looking forward > >> to your links :) > > > > What do you conclude from this poll? :-) > > > > From what you’ve seen, how do you think current uses would impact > > the refactoring work (and vice versa)? > > Sorry for the late response :) > > My conclusion is that we should not change anything about the Scheme > interface, but that with close communication with C port hackers, we > can feel OK about changing the C interface to make it both more > simple and more expressive. Since libguile is parallel-installable > and you have to select the version of your Guile when you build your > project, of course people will be able to update / upgrade when they > choose to. > > I put in a lot of effort to the documentation; check it out: > > > http://www.gnu.org/software/guile/docs/master/guile.html/Input-and-Output.html The documentation indicates that with the C ports implementation in guile-2.2, reads will block on non-blocking file descriptors. 
This will stop the approach to asynchronicity used in 8sync and guile-a-sync (the latter of which I have written) from working correctly with sockets on linux operating systems, because at present both of these use guile's wrapper for select. This arises because with linux you can get spurious wake-ups with select() or poll() on sockets, whereby a read on a file descriptor for a socket can block even though the descriptor is reported by select() or poll() as ready for reading. Presumably suspendable ports will use some method to work around unwanted blocking arising from spurious select()/poll() wake-ups. However, to cater for other asynchronous implementations of file watches, would it be possible to provide a configurable option either to retain the guile-2.0 behaviour in such cases (which is to throw a system-error with errno set to EAGAIN or EWOULDBLOCK), or to provide a non-blocking alternative whereby the read operation would, instead of blocking, return some special value such as an EAGAIN symbol? Either would enable user code then to resume to its prompt and let other code execute. Chris

The man page for select states: "Under Linux, select() may report a socket file descriptor as 'ready for reading', while nevertheless a subsequent read blocks. This could for example happen when data has arrived but upon examination has wrong checksum and is discarded. There may be other circumstances in which a file descriptor is spuriously reported as ready. Thus it may be safer to use O_NONBLOCK on sockets that should not block."
On Tue, 10 May 2016 16:30:30 +0200 Andy Wingo
wrote: > I think we have no plans for giving up pthreads. The problem is that > like you say, if there is no shared state, and your architecture has a > reasonable memory model (Intel's memory model is really great to > program), then you're fine. But if you don't have a good mental model > on what is shared state, or your architecture doesn't serialize loads > and stores... well there things are likely to break.

Hi Andy, That I wasn't expecting. So you are saying that some parts of guile rely on the ordering guarantees of the x86 memory model (or something like it) with respect to atomic operations on some internal localised shared state? Of course, if guile is unduly economical with its synchronisation on atomics, that doesn't stop the compiler doing some reordering for you, particularly now there is a C11 memory model. Looking at the pthread related stuff in libguile, it seems to be written by someone/people who know what they are doing. Are you referring specifically to the guile VM, and if so is guile-2.2 likely to be more problematic than guile-2.0? Chris

I am not talking about things like the loading of guile modules here, which involves global shared state and probably can't be done lock-free (and doesn't need to be) and may require other higher-level synchronisation such as mutexes.
On Wed, 06 Apr 2016 22:46:28 +0200 Andy Wingo
wrote: > So, right now Guile has a pretty poor concurrency story. We just have > pthreads, which is great in many ways, but nobody feels like > recommending this to users. The reason is that when pthreads were > originally added to Guile, they were done in such a way that we could > assume that data races would just be OK. It's amazing to reflect upon > this, but that's how it is. Many internal parts of Guile are > vulnerable to corruption when run under multiple kernel threads in > parallel. Consider what happens when you try to load a module from > two threads at the same time. What happens? What should happen? > Should it be possible to load two modules in parallel? The system > hasn't really been designed as a whole. Guile has no memory model, > as such. We have patches over various issues, ad-hoc locks, but it's > not in a state where we can recommend that users seriously use > threads. I am not going to comment on the rest of your post, because you know far more about it than I could hope to, but on the question of guile's thread implementation, it seems to me to be basically sound if you avoid obvious global state. I have had test code running for hours, indeed days, without any appearance of data races or other incorrect behaviour on account of guile's thread implementation. Global state is an issue. Module loading (which you mention) is an obvious one, but other things like setting load paths don't look to be thread safe either. It would be disappointing to give up on the current thread implementation. Better I think in the interim is to document better what is not thread-safe. Some attempts at thread safety are in my view a waste of time anyway, including trying to produce individual ports which can safely be accessed in multiple threads. Multi-threading with ports requires the prevention of interleaving, not just the prevention of data races, and that is I think best done by locking at the user level rather than the library level. 
Co-operative multi-tasking using asynchronous frameworks such as 8sync or another one in which I have an interest, and pre-emptive multi-tasking using a scheduler and "green" threads, are all very well but they do not enable use of more than one processor, and more importantly (because I recognise that guile may not necessarily be intended for use cases needing multiple processors for performance reasons), they can be more difficult to use. In particular, anything which makes a blocking system call will wedge the whole program. Chris
On Wed, 18 Nov 2015 10:26:25 + Chris Vine
<ch...@cvine.freeserve.co.uk> wrote:
> On Tue, 17 Nov 2015 11:46:24 -0600
> Christopher Allan Webber <cweb...@dustycloud.org> wrote:
> [snip]
> > This sounds very interesting... is the source available? Could you
> > point to it?
> >
> > Thanks!
> > - Chris
>
> No it's not. I'll email you something.

In case it's of use to others in looking at the options, I have now
put the code here:

  http://www.cvine.plus.com/event-loop/event-loop.scm

I have no problem with any parts of it going into guile if thought
useful. I was inclined to apply the MIT license to it, but that seems
fairly pointless given that guile is covered by the LGPL.

The interesting part is not so much the event loop but the use of
coroutines to give "await" type semantics. This particular
implementation does allow tasks to run in their own threads in a safe
way. It works for my limited purposes but I imagine it could be much
improved.

Chris
On Tue, 17 Nov 2015 11:46:24 -0600 Christopher Allan Webber
wrote:
[snip]
> This sounds very interesting... is the source available? Could you
> point to it?
>
> Thanks!
> - Chris

No it's not. I'll email you something.

Chris
On Tue, 17 Nov 2015 11:53:05 +0100 Jan Synáček
wrote:
> Hello,
>
> I'm getting:
>
> scheme@(guile-user)> (list-head '(1 2 3) 5)
> ERROR: In procedure list-head:
> ERROR: In procedure list-head: Wrong type argument in position 1
> (expecting pair): ()
>
> This looks pretty much like a bug to me. Shouldn't list-head return
> the entire list when the 'k' is bigger than its length? If that is
> not the case, at least the error is really confusing. I'm using
> Guile 2.0.11.

The error message is confusing, but I guess the behaviour of list-head
mirrors R5RS list-tail: instead of mandating the return of an empty
list, R5RS states that "It is an error if list has fewer than k
elements".

Chris
On Tue, 17 Nov 2015 13:52:21 +0100 <to...@tuxteam.de> wrote:
> On Tue, Nov 17, 2015 at 12:59:56PM +, Chris Vine wrote:
> > On Tue, 17 Nov 2015 10:53:19 +0100
> [...]
> > guile's R6RS implementation has get-bytevector-some, which will do
> > that for you, with unix-read-like behaviour.
>
> Thank you a thousand. You made me happy :-)

I suppose it is worth adding that it might not be optimally efficient
for all uses, as there is no get-bytevector-some! procedure which
modifies an existing bytevector and takes a maximum length value. I
guess it is a matter of 'suck it and see', efficiency-wise.

If you are sending/receiving binary packets, it might be better to
make them of fixed size and use get-bytevector-n!. (Unfortunately,
get-bytevector-n! does block until n is fulfilled according to R6RS:
"The get-bytevector-n! procedure reads from binary-input-port,
blocking as necessary, until count bytes are available from
binary-input-port or until an end of file is reached".)

Chris
On Tue, 17 Nov 2015 10:53:19 +0100
wrote:
> On Mon, Nov 16, 2015 at 11:54:33AM +0100, Amirouche Boubekki wrote:
> > On 2015-11-13 21:41, Jan Synáček wrote:
> > [...]
> > > I have an open fd to a unix socket and I want to read data from
> > > it. I know that the data is going to be only strings, but I
> > > don't know the length in advance.
> >
> > Do you know a delimiter? maybe it's the null char?
> >
> > TCP is stream oriented, it's not structured at this layer into
> > messages or segments. You need some knowledge about the byte
> > stream to be able to split it into different meaningful pieces for
> > the upper layer.
>
> I think I "got" Jan's request, because I've been in a similar
> situation before: delimiter is not (yet) part of it. What he's
> looking for is an interface à la read(2), meaning "gimme as much
> as there is in the queue, up to N bytes, and tell me how much
> you gave me". Of course, putting stuff in a byte vector would
> be preferable; the only functions I've seen which "do" that
> interface, read-string!/partial and write-string/partial, operate
> on strings, not byte arrays, alas.

guile's R6RS implementation has get-bytevector-some, which will do
that for you, with unix-read-like behaviour.

You cannot use this for UTF-8 text by trying to convert the bytevector
with utf8->string, because you could have received a partially formed
utf-8 character. So for text, you should use line orientated reading,
such as with ice-9 read-line or R6RS get-line.

Chris
On Sat, 03 Oct 2015 17:29:16 -0500 Christopher Allan Webber
wrote:
> So David Thompson, Mark Weaver, Andrew Engelbrecht and I sat down to
> talk over how we might go about an asynchronous event loop in Guile
> that might be fairly extensible. Here is some of what we discussed,
> in bullet points:
>
> - General idea is to do something coroutine based.
>
> - This would be like asyncio or node.js, asynchronous but *not* OS
>   thread based (it's too much work to make much of Guile fit around
>   that for now)
>
> - If you really need to maximize your multiple cores, you can do
>   multiple processes with message passing anyway
>
> - Initially, this would probably provide a general API for
>   coroutines. Mark thinks delimited continuations would not be as
>   efficient as he'd like, but we think it's okay because we could
>   provide a nice abstraction where maybe something nicer could be
>   swapped out later, so delimited continuations could at least be a
>   starting point.
>
> - So what we really need is a nice API for how to do coroutines,
>   write asynchronous code, and work with some event loop with a
>   scheduler
>
> - On top of this, "fancier" high level systems like an actor model
>   or something like Sly's functional reactive programming system
>   could be done.
>
> - Probably a good way to start on this would be to use libuv (or is
>   it libev?) and prototype this. It's not clear if that's a good
>   long term approach (eg, I think it doesn't work on the HURD for
>   those who care about that, and Guix certainly does)
>
> - Binary/custom ports could be a nice means of abstraction for this
>
> So, that's our thoughts, maybe someone or one of us or maybe even
> *you* will be inspired to start from here?
>
> To invoke Ludo, WDYT?
> - Not Ludo

It is certainly the case that mixing threads with coroutines is
usually best avoided, otherwise it becomes very difficult to know what
code ends up running in which particular thread, and thread safety
becomes a nightmare.
However, it would be good to allow a worker thread to post an event to
the event loop safely, whereby the handler for the posted event would
run in the event loop thread. asyncio allows this.

Although not particularly pertinent to this proposal, which looks
great, I use coroutines implemented with guile's delimited
continuations for a minimalist "await" wrapper over glib's event loop
as provided by guile-gnome (the whole thing is about 20 lines of
code), which appears (to the user) to serialize the GUI or other
events posted to the event loop. When I don't want to use guile-gnome,
which is most of the time, I have my own (also minimalist) thread-safe
event loop using guile's POSIX wrapper for select.

My uses of guile are pretty undemanding, so as I say these are
minimalist. Something like asyncio for guile would be very nice
indeed.

Chris
On Wed, 11 Feb 2015 22:23:43 +0100 Andy Wingo <wi...@pobox.com> wrote:

Hi!

So, threads and ports again. We didn't really come to a resolution in
this thread:

  http://article.gmane.org/gmane.lisp.guile.devel/17023

To recap, in Guile 2.0 a port has mutable internal state that can be
corrupted when multiple threads write to it at once. I ran into this
when doing some multithreaded server experiments, and fixed it in the
same way that libc fixes the issue for stdio streams:

  https://www.gnu.org/software/libc/manual/html_node/Streams-and-Threads.html#Streams-and-Threads

Namely, ports can have associated recursive mutexes. They can be in a
mode in which every operation on a port grabs the mutex. The interface
to set a port into unlocked mode (à la fsetlocking) is unimplemented,
but the machinery is there.

This change fixed the crashes I was seeing, but it slows down port
operations. For an intel chip from a couple of years ago the slowdown
was something on the order of 3x for a tight putchar() loop; for
Loongson it could be as bad as 26x. Mark was unhappy with this.

Mark also made the argument that locking on port operations doesn't
always make sense. Indeed I quote from the libc documentation:

  But there are situations where this is not enough and there are also
  situations where this is not wanted. The implicit locking is not
  enough if the program requires more than one stream function call to
  happen atomically. One example would be if an output line a program
  wants to generate is created by several function calls. The
  functions by themselves would ensure only atomicity of their own
  operation, but not atomicity over all the function calls. For this
  it is necessary to perform the stream locking in the application
  code.

So we don't yet expose the equivalent of flockfile, but at this point,
since there are still concerns out there, I wanted to ask if the
current solution still makes sense.

I hope this is a fair summary of the issue.
My perspective on this is that crashes are unacceptable, and also that
it does make sense to log to stderr from multiple threads at once.
When writing to ports under error conditions you don't always have the
luxury of being able to coordinate access in some nicer way.

I sympathize with the desire to make put-char etc faster, as that
means that more code can be written in Scheme. One possible alternate
solution would be to expose ports more to Scheme and so to make it
easier and safer for Scheme to manipulate port data. This would also
make it possible to implement coroutines in Scheme that yield when IO
would block. Or, we could just make stdout/stderr be locked by
default, and some other things not. Seems squirrely to me though.
Dunno.

I would add that although there is a solution to this issue in master,
it might not make it into 2.2. There will probably be a dozen
prereleases before 2.2.0, so even if a 2.1.1 manages to make it out
the door before we come to a solution, that doesn't mean that the
choices in such a release are the right or final ones.

Here is a comment from someone who is a guile user rather than a guile
developer. Since guile provides native threads, a minimum requirement
seems to me to be that when the guile library writes to stderr on its
own account, it does so in a thread-safe way (in the "doesn't crash
the program" sense). Since guile (at present) writes to stderr via a
global buffered port object, that port object needs to be thread-safe.
An alternative is for the library to write error messages directly to
the stderr file descriptor (which is intrinsically thread safe), but
that would rule out character-by-character error printing as with
put-char/write-char.

Beyond that, different standards which accommodate threads require
different things.
You have referred to POSIX.1c, which requires all its functions that
operate on character streams (represented by pointers to objects of
type FILE) to be thread-safe in the data-race sense, though not the
interleaving sense, and which provides access for user code to the
internal stream locks to deal with interleaving. As far as I can tell,
C11 is silent on the point, even though it adopts threading primitives
and a memory model based on the C++11 ones.

C++11 does not go as far as POSIX for its own i/o streams. Instead,
the global objects for stdout (cout), stdin (cin) and stderr (cerr and
clog), and their wide stream variants, must be thread-safe, but
whether other i/o objects are thread-safe is up to the implementation
- and generally they are not. Synchronization is generally left where
it should be, with the user, since thread safety (in the formal
data-race sense) is not of itself enough: generally you also need to
write to or read from ports in a way which prevents any interleaving
which would corrupt the data format being written or read.

A compromise position is possible for guile. Ports could provide
Hi,

guile-2.0's scm_c_call_with_blocked_asyncs,
scm_c_call_with_unblocked_asyncs, scm_dynwind_block_asyncs and
scm_dynwind_unblock_asyncs will not link for me using 32-bit
ubuntu-14.04 (gcc-4.8.2/guile-2.0.9) or 32-bit slackware-14.1
(gcc-4.9.0/guile-2.0.11). I do not have a machine to test a 64-bit
system at present.

The issue appears to be that this part of the API is not marked as
exported, even though these functions are documented and presumably
intended to be called by user code. A trivial patch, against
guile-2.0.11, is attached dealing with this.

Chris

--- guile-2.0.11/libguile/async.h.orig	2014-05-04 10:38:14.63184 +0100
+++ guile-2.0.11/libguile/async.h	2014-05-04 10:45:45.615763664 +0100
@@ -44,10 +44,10 @@
 SCM_API SCM scm_noop (SCM args);
 
 SCM_API SCM scm_call_with_blocked_asyncs (SCM proc);
 SCM_API SCM scm_call_with_unblocked_asyncs (SCM proc);
-void *scm_c_call_with_blocked_asyncs (void *(*p) (void *d), void *d);
-void *scm_c_call_with_unblocked_asyncs (void *(*p) (void *d), void *d);
-void scm_dynwind_block_asyncs (void);
-void scm_dynwind_unblock_asyncs (void);
+SCM_API void *scm_c_call_with_blocked_asyncs (void *(*p) (void *d), void *d);
+SCM_API void *scm_c_call_with_unblocked_asyncs (void *(*p) (void *d), void *d);
+SCM_API void scm_dynwind_block_asyncs (void);
+SCM_API void scm_dynwind_unblock_asyncs (void);
 
 /* Critical sections */
On Sat, 12 Apr 2014 23:28:18 -0300 David Pirotte <da...@altosw.be>
wrote:
> Hello,
>
> I'm on guile (GNU Guile) 188.8.131.52-0ece4 now
>
> Although I did not update the wrapset.api yet, in order to get a
> general picture of updating a clutter binding

Provide a binding for clutter's gobject-introspection interface and
you will carry most other gobject based libraries with you. I imagine
it would be a significant amount of work though.

Chris