Re: Strong typing vs. strong testing
On 10 Oct, 10:44, Lie Ryan lie.1...@gmail.com wrote: On 10/02/10 20:04, Nick Keighley wrote: In a statically typed language, the "of-the-wrong-type" is something which can, by definition, be caught at compile time. Any time something is true by definition that is an indication that it's not a particularly useful fact. I'm not sure I agree. On occasion knowing something is true-by-definition is very useful! Something that is true by definition is just as useful as saying: my program is correct, by definition, because my requirement is what my code is doing. well no it isn't. By definition a compiler catches (or is capable of catching) type mismatches. And some (all?) type mismatches are a useful diagnostic. You can dance around all you like but it's true. I've worked in a test department and I really have received "but that's what the code says!" as an explanation for an observed misbehaviour. It's a circular argument, your program requirement, for which the program is supposed to be tested against, is the code itself; so whatever undesirable behavior the program might have is part of the requirement, so the program is, by definition, bug free and it's user expectation that's wrong. yes I know. It's a strawman because I never said that or anything resembling it. -- http://mail.python.org/mailman/listinfo/python-list
Re: Strong typing vs. strong testing
On 1 Oct, 11:02, p...@informatimago.com (Pascal J. Bourguignon) wrote: Seebs usenet-nos...@seebs.net writes: On 2010-09-30, Ian Collins ian-n...@hotmail.com wrote: Which is why agile practices such as TDD have an edge. If it compiles *and* passes all its tests, it must be right. So far as I know, that actually just means that the test suite is insufficient. :) Based on my experience thus far, anyway, I am pretty sure it's essentially not what happens that the tests and code are both correct, and it is usually the case either that the tests fail or that there are not enough tests. It also shows that for languages such as C, you cannot limit the unit tests to the types declared for the function, but that you should try all the possible values of the language. I'm not sure what you mean by all the possible values of the language. But some meanings of that statement are plainly wrong! For instance you don't test every possible pair of ints in a max() function (maybe 2^64 distinct tests!) Which basically, is the same as with dynamically typed programming language, only now, some unit tests will fail early, when trying to compile them while others will give wrong results later. snip -- http://mail.python.org/mailman/listinfo/python-list
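The point about not testing "all the possible values of the language" can be made concrete: for a max()-style function over 32-bit ints, exhaustive pairwise testing is roughly 2^64 cases. The usual compromise is boundary values plus a few representatives, checked in both argument orders. A sketch (the `maximum` function and sample set are my invention, not anything from the thread):

```python
# Hypothetical function under test, just for illustration.
def maximum(a, b):
    return a if a >= b else b

# Instead of all ~2^64 pairs of 32-bit ints, test the boundaries
# and a handful of representative values, in both orders.
INT_MIN, INT_MAX = -2**31, 2**31 - 1
samples = [INT_MIN, -1, 0, 1, INT_MAX]

def test_maximum():
    for a in samples:
        for b in samples:
            m = maximum(a, b)
            assert m in (a, b)        # result is one of the arguments...
            assert m >= a and m >= b  # ...and is >= both of them

test_maximum()
```

This checks 25 pairs instead of 2^64, betting (as all finite test suites do) that failures cluster at boundaries.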
Re: Strong typing vs. strong testing
On 1 Oct, 19:33, RG rnospa...@flownet.com wrote: In article slrniabt2j.1561.usenet-nos...@guild.seebs.net, Seebs usenet-nos...@seebs.net wrote: On 2010-10-01, RG rnospa...@flownet.com wrote: snip Those goal posts are sorta red shifted at this point. [...] Red shifted? Moving away fast enough that their color has visibly changed. Doppler shift for instance, or one of them cosmological thingies when space itself stretches snip In a statically typed language, the "of-the-wrong-type" is something which can, by definition, be caught at compile time. Any time something is true by definition that is an indication that it's not a particularly useful fact. I'm not sure I agree. On occasion knowing something is true-by-definition is very useful! The whole concept of type is a red herring. It's like this: there are some properties of programs that can be determined statically, and others that can't. Some of the properties that can't be determined statically matter in practice. But all of the properties that can be determined statically can also be determined dynamically. sometimes with quite a lot of effort. I know the halting problem gets thrown around a lot but to execute every path may take an inordinate amount of effort. The *only* advantage that static analysis has is that *sometimes* it can determine *some* properties of a program faster or with less effort than a dynamic approach would. agreed What I take issue with is the implication made by advocates of static analysis that static analysis is somehow *inherently* superior to dynamic analysis, probably a fair point that static analysis can provide some sort of guarantee of reliability that actually has some sort of practical meaning in the real world. It doesn't. The net effect of static analysis in the real world is to make programmers complacent about properties of programs that can only be determined at run time, to make them think that compiling without errors means something, who are these people? 
I don't think I've ever met any programmer that claimed a clean compile implied a correct program. Ada people come close with things like when you finally get an Ada program to compile you've got a good chance it's actually right. Part of their justification is that you tend to have to think hard about your type system up front. But given the fields Ada programmers typically operate in I don't think they'd dare ship a program that merely compiled. and that if a program compiles without errors then there is no need for run-time checking of any sort. again this is a massive straw man. Yes, people who claim this are wrong. But they are much less prevalent than you claim. Though the lack of enthusiasm I encounter when I suggest the liberal use of assert() is a data point in your favour. *You* may not believe these things, but the vast majority of programmers who use statically typed languages do believe these things, even if only tacitly. not the programmers I've encountered The result is a world where software by and large is a horrific mess of stack overflows, null pointer exceptions, core dumps, and security holes. I'm not saying that static analysis is not useful. It is. What I'm saying is that static analysis is nowhere near as useful as its advocates like to imply that it is. And it's better to forego static analysis and *know* that you're flying without a net at run-time than to use it and think that you're safe when you're really not. obviously some of us disagree -- http://mail.python.org/mailman/listinfo/python-list
Re: Strong typing vs. strong testing
On 27 Sep, 18:46, namekuseijin namekusei...@gmail.com wrote: snip Fact is: almost all user data from the external world comes into programs as strings. No typesystem or compiler handles this fact all that gracefully... snobol? -- http://mail.python.org/mailman/listinfo/python-list
Re: Strong typing vs. strong testing
On 27 Sep, 20:29, p...@informatimago.com (Pascal J. Bourguignon) wrote: namekuseijin namekusei...@gmail.com writes: snip Fact is: almost all user data from the external world comes into programs as strings. No typesystem or compiler handles this fact all that gracefully... I would even go further. Types are only part of the story. You may distinguish between integers and floating points, fine. But what about distinguishing between floating points representing lengths and floating points representing volumes? Worse, what about distinguishing and converting floating points representing lengths expressed in feet and floating points representing lengths expressed in meters. fair points If you start with the mindset of static type checking, you will consider that your types are checked and if the types at the interface of two modules match you'll think that everything's ok. And six months later your Mars mission will crash. do you have any evidence that this is actually so? That people who program in statically typed languages actually are prone to this "well it compiles so it must be right" attitude? On the other hand, with the dynamic typing mindset, you might even wrap your values (of whatever numerical type) in a symbolic expression mentioning the unit and perhaps other meta data, so that when the other module receives it, it may notice (dynamically) that two values are not of the same unit, but if compatible, it could (dynamically) convert into the expected unit. Mission saved! they *may* do this but do they *actually* do it? My (limited) experience of dynamically typed languages is every now and again you attempt to apply an operator to the wrong type of operand and kerblam! If your testing is inadequate then it's inadequate whatever the typiness of your language. -- http://mail.python.org/mailman/listinfo/python-list
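Pascal's "wrap the value with its unit" idea takes only a few lines of Python to sketch. The class name, the conversion factor table, and the two-unit restriction are all my invention, purely to illustrate run-time unit checking, not anything from the actual Mars Climate Orbiter code:

```python
# A minimal sketch of dynamically unit-tagged lengths.
FEET_PER_METRE = 3.28084

class Length:
    def __init__(self, value, unit):
        assert unit in ("m", "ft"), "unknown unit"
        self.value, self.unit = value, unit

    def in_metres(self):
        # Convert on demand; the unit tag travels with the value.
        return self.value if self.unit == "m" else self.value / FEET_PER_METRE

    def __add__(self, other):
        # Units are checked and reconciled at run time, not compile time.
        return Length(self.in_metres() + other.in_metres(), "m")

total = Length(10, "ft") + Length(2, "m")   # mixed units, reconciled
```

Whether real projects actually do this (Nick's objection) is a separate question; the mechanism itself is cheap.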
Re: Strong typing vs. strong testing
On 30 Sep, 11:14, TheFlyingDutchman zzbba...@aol.com wrote: in C I can have a function maximum(int a, int b) that will always work. Never blow up, and never give an invalid answer. Dynamic typed languages like Python fail in this case on "Never blows up". How do you define "Never blows up"? Never has execution halt. I think a key reason in the big rise in the popularity of interpreted languages is that when execution halts, they normally give a call stack and usually a good reason for why things couldn't continue. As opposed to compiled languages which present you with a blank screen and force you to - fire up a debugger, or much much worse, look at a core dump - to try and discern all the information the interpreter presents to you immediately. Personally, I'd consider maximum(8589934592, 1) returning 1 as a blow up, and of the worst kind since it passes silently. If I had to choose between blow up or invalid answer I would pick invalid answer. there are some application domains where neither option would be viewed as a satisfactory error handling strategy. Fly-by-wire, petro-chemicals, nuclear power generation. Hell, you'd expect better than this from your phone! In this example RG is passing a long literal greater than INT_MAX to a function that takes an int, and the compiler apparently didn't give a warning about the change in value as it created the cast to an int, even with the option -Wall (all warnings). I think it's legitimate to consider that an option for a warning/error on this condition should be available. As far as the compiler generating code that checks for a change in value at runtime when a number is cast to a smaller data type, I think that's also a legitimate request for a C compiler option (in addition to other runtime check options like array subscript out of bounds). -- http://mail.python.org/mailman/listinfo/python-list
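For anyone puzzled by maximum(8589934592, 1) returning 1: 8589934592 is 2**33, which truncates to 0 when squeezed into a 32-bit int. Python's ints are arbitrary precision so they don't truncate, but the C behaviour is easy to simulate (a sketch assuming a 32-bit two's-complement int, which is the common case though the conversion is implementation-defined in C):

```python
def to_c_int32(n):
    # Model C's conversion of an out-of-range value to a 32-bit
    # signed int as two's-complement wraparound.
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

def c_style_maximum(a, b):
    # What RG's C maximum() effectively computes after the
    # arguments pass through int parameters.
    a, b = to_c_int32(a), to_c_int32(b)
    return a if a > b else b

big = 8589934592                       # 2**33: truncates to 0 as int32
assert to_c_int32(big) == 0
assert c_style_maximum(big, 1) == 1    # the silent wrong answer
assert max(big, 1) == big              # Python's max is unaffected
```

The truncation, not the comparison, is where the answer goes wrong, which is why it passes silently.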
Re: Strong typing vs. strong testing
On 30 Sep, 15:24, TheFlyingDutchman zzbba...@aol.com wrote: If I had to choose between blow up or invalid answer I would pick invalid answer. there are some application domains where neither option would be viewed as a satisfactory error handling strategy. Fly-by-wire, petro-chemicals, nuclear power generation. Hell, you'd expect better than this from your phone! I wasn't speaking generally, just in the case of which of only two choices RG's code should be referred to - blowing up or giving an invalid answer. I think I'd prefer termination if those were my only choices. What's the rest of the program going to do with the wrong result? When the program finally gives up the cause is lost in the mists of time, and those are hard to debug! I think error handling in personal computer and website software has improved over the years but there is still some room for improvement, as you will still get error messages that don't tell you something you can relay to tech support more than that an error occurred or that some operation can't be performed. But I worked with programmers doing in-house software who were incredibly turned off by exception handling in C++. I thought that meant that they preferred to return and check error codes from functions as they had done in C, and for some of them it did seem to mean that. But for others it seemed that they didn't want to anticipate errors at all (that file is always gonna be there!). that was one of the reasons I liked exceptions. If my library threw an exception then the caller *had* to do something about it. Even to ignore it he had to write some code. I read a Java book by Deitel and Deitel and they pointed out what might have led to that attitude - the homework and test solutions in college usually didn't require much if any error handling - the student could assume files were present, data was all there and in the format expected, user input was valid and complete, etc. plausible. 
Going from beginner to whatever I probably steadily increased the pessimism of my code. The file might not be there. That other team might send us syntactically invalid commands. Even if it can't go wrong it will go wrong. Fortunately my college stuff included some OS kernel stuff. There anything that can go wrong will go wrong. -- http://mail.python.org/mailman/listinfo/python-list
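The "even to ignore it he had to write some code" point is easy to show in Python. Both functions below are hypothetical stand-ins (not a real API), contrasting the C-style error-code convention with exceptions:

```python
# Error-code style: nothing forces the caller to look at `err`.
def read_config_by_code(path):
    # Hypothetical stub that always fails, returning (data, error).
    return None, "no such file"

data, err = read_config_by_code("settings.ini")
# ...we could now happily use `data` without ever checking `err`.

# Exception style: even ignoring the failure is visible code.
def read_config(path):
    # Hypothetical stub that always fails by raising.
    raise FileNotFoundError(path)

try:
    cfg = read_config("settings.ini")
except FileNotFoundError:
    cfg = {}   # falling back to defaults is now a deliberate, written-down decision
```

With the error code, silence is the default; with the exception, silence takes effort.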
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
On 19 Aug, 16:25, c...@tiac.net (Richard Harter) wrote: On Wed, 18 Aug 2010 01:39:09 -0700 (PDT), Nick Keighley nick_keighley_nos...@hotmail.com wrote: On 17 Aug, 18:34, Standish P stnd...@gmail.com wrote: How are these heaps being implemented ? Is there some illustrative code or a book showing how to implement these heaps in C for example ? any book of algorithms I'd have thought. My library is currently inaccessible. Normally I'd have picked up Sedgewick and seen what he had to say on the subject. And possibly Knuth (though that requires taking more of a deep breath). Presumably Plauger's library book includes an implementation of malloc()/free() so that might be a place to start. http://en.wikipedia.org/wiki/Dynamic_memory_allocation http://www.flounder.com/inside_storage_allocation.htm I've no idea how good either of these is. Serves me right for not checking :-( The wikipedia page is worthless. odd really, you'd think basic computer science wasn't that hard... I found even wikipedia's description of a stack confusing and heavily biased towards implementation The flounder page has substantial meat, but the layout and organization is a mess. A quick google search didn't turn up much that was general - most articles are about implementations in specific environments. -- http://mail.python.org/mailman/listinfo/python-list
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
On 17 Aug, 18:34, Standish P stnd...@gmail.com wrote: On Aug 16, 11:09 am, Elizabeth D Rather erat...@forth.com wrote: On 8/15/10 10:33 PM, Standish P wrote: If Forth is a general processing language based on stack, is it possible to convert any and all algorithms to stack based ones and thus avoid memory leaks since a pop automatically releases memory when free is an intrinsic part of it. Forth uses two stacks. The data stack is used for passing parameters between subroutines (words) and is completely under the control of the programmer. Words expect parameters on this stack; they remove them, and leave only explicit results. The return stack is used primarily for return addresses when words are called, although it is also available for auxiliary uses under guidelines which respect the primary use for return addresses. Although implementations vary, in most Forths stacks grow from a fixed point (one for each stack) into otherwise-unused memory. The space involved is allocated when the program is launched, and is not managed as a heap and allocated or deallocated by any complicated mechanism. On multitasking Forth systems, each task has its own stacks. Where floating point is implemented (Forth's native arithmetic is integer-based), there is usually a separate stack for floats, to take advantage of hardware FP stacks. - is forth a general purpose language? Yes - are all algorithms stack based? No Does Forth uses stack for all algorithms ? Does it use pointers , ie indirect addressing ? If it can/must use stack then every algorithm could be made stack based. Forth uses its data stack for parameter passing and storage of temporary values. It is also possible to define variables, strings, and arrays in memory, in which case their addresses may be passed on the data stack. Forth is architecturally very simple. Memory allocations for variables, etc., are normally static, although some implementations include facilities for heaps as needed by applications. 
although some implementations include facilities for heaps as needed by applications. How are these heaps being implemented ? Is there some illustrative code or a book showing how to implement these heaps in C for example ? any book of algorithms I'd have thought http://en.wikipedia.org/wiki/Dynamic_memory_allocation http://www.flounder.com/inside_storage_allocation.htm I've no idea how good either of these is Are dictionaries of forth and postscript themselves stacks if we consider them as nested two column tables which lisp's lists are in essence, but only single row. Multiple rows would just be multiple instances of it at the same level inside parens. I can't make much sense of that. But you seem to see Lisp data structures in all sorts of strange places. I don't see that Lisp lists are nested two column tables. we can peek into stacks which is like car. no. if it is not unusually costly computation, why not allow it ? there is no need to restrict to push and pop. some stacks have a top() operation. roll( stack_name, num) itself can give all those postfix permutations that push and pop can't generate with a single stack. Can we use dictionaries to generate multiple stacks inside one global stack ? I've no idea what you're on about -- http://mail.python.org/mailman/listinfo/python-list
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
On 17 Aug, 21:37, Elizabeth D Rather erat...@forth.com wrote: On 8/17/10 10:19 AM, Standish P wrote: On Aug 17, 12:32 pm, John Passaniti john.passan...@gmail.com wrote: It is true that the other languages such as F/PS also have borrowed lists from lisp in the name of nested-dictionaries and mathematica calls them nested-tables as its fundamental data structure. No. you are contradicting an earlier poster from forth who admitted the part on dicts. he's saying a forth dictionary isn't a lisp s-exp. Well it isn't. Not at all. A Forth dictionary is a simple linked list, not the complicated kind of nested structures you're referring to. You really seem addicted to very complex structures. I thought he had the opposite problem! I thought he was trying to knock in all his programming nails with the same stack-based hammer. They really aren't necessary for general programming. whatever *that* is -- http://mail.python.org/mailman/listinfo/python-list
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
this is heavily x-posted I'm answering from comp.lang.c On 16 Aug, 08:20, Standish P stnd...@gmail.com wrote: [Q] How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ? I'm having trouble understanding your question (I read your whole post before replying). I strongly suspect the only connection your question has with C is that you are using C as your implementation language. I think you're trying to ask a question about memory management. You might be better off asking your question in a general programming newsgroup such as comp.programming (sadly rather quiet these days). Note that C doesn't do automatic garbage collection. Memory is either freed on exit from a scope (stack-like memory lifetime) or explicitly (using free()). Static memory is, conceptually, never freed. Because a stack has push and pop, it is able to release and allocate memory. I'm not sure what you mean by some of the terms you use. In a sense a pop *is* a release. The memory is no longer available for use. We envisage an exogenous stack which has malloc() associated with a push and free() associated with a pop. exogenous? Why would you do this? Are you envisioning a stack of pointers? The pointers pointing to dynamically allocated memory? Well, yes, sure you could implement this in C. It isn't garbage collection by any standard definition of the term. The algorithm using the stack would have to be perfect to prevent stack overflow or condition of infinite recursion depth. the memory lifetimes must be stack-like (or close to stack-like) This would involve data type checking to filter out invalid input. I'd be more concerned about the memory allocation/deallocation pattern rather than the data types. The task must be casted in an algorithm that uses the stack. Then the algorithm must be shown to be heuristically or by its metaphor, to be correct using informal reasoning. why informal reasoning? Why not formal reasoning? 
Are there any standard textbooks or papers that show stacks implemented in C/C++/Python/Forth with malloc/free in push and pop ? well it doesn't sound very hard... If Forth is a general processing language based on stack, is it possible to convert any and all algorithms to stack based ones and thus avoid memory leaks since a pop automatically releases memory when free is an intrinsic part of it. don't understand the question. - is forth a general purpose language? Yes - are all algorithms stack based? No some computations simply need to hang onto memory for a long time:
alloc(obj1)
alloc(obj2)
alloc(obj3)
free(obj2)
long_computation()
free(obj3)
free(obj1)
this simply isn't stack based. the memory for obj2 cannot be reused in a stack-based scheme whilst long_computation() is going on. K&R (ANSI edition) has the example of modular programming showing how to implement a stack but he uses a fixed length array. It is also possibly an endogenous stack. We look for an exogenous stack so that element size can vary. malloc the memory? I see no garbage collection in your post -- http://mail.python.org/mailman/listinfo/python-list
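The alloc/free sequence above can be made concrete with a toy bump allocator that enforces strict LIFO discipline: free only works on the most recent live allocation, so obj2's space stays pinned while the long computation runs. The allocator class is invented for illustration, not taken from any real implementation:

```python
class StackAllocator:
    # A toy LIFO arena: alloc bumps the top pointer; free is only
    # legal on the most recent live block (strict stack discipline).
    def __init__(self, size):
        self.size = size
        self.top = 0
        self.blocks = []          # (offset, length), LIFO order

    def alloc(self, n):
        if self.top + n > self.size:
            raise MemoryError("out of arena space")
        off = self.top
        self.top += n
        self.blocks.append((off, n))
        return off

    def free(self, off):
        last_off, _ = self.blocks[-1]
        if off != last_off:
            raise ValueError("non-LIFO free: block is not on top")
        self.blocks.pop()
        self.top = last_off

a = StackAllocator(1024)
obj1 = a.alloc(100)
obj2 = a.alloc(100)
obj3 = a.alloc(100)

non_lifo_rejected = False
try:
    a.free(obj2)              # free(obj2) before obj3: stack says no
except ValueError:
    non_lifo_rejected = True

a.free(obj3)                  # LIFO order works fine
a.free(obj2)
a.free(obj1)
```

This is exactly why the sequence in the post "simply isn't stack based": a stack discipline either refuses the early free (pinning obj2 through long_computation) or you need a non-stack allocator.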
Re: How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ?
On 16 Aug, 09:33, Standish P stnd...@gmail.com wrote: On Aug 16, 12:47 am, Nick Keighley nick_keighley_nos...@hotmail.com On 16 Aug, 08:20, Standish P stnd...@gmail.com wrote: this is heavily x-posted I'm answering from comp.lang.c I also note that another poster has suggested you are a troll/loon you seem to be using some computer science-like terms but in an oddly non-standard manner [Q] How far can stack [LIFO] solve do automatic garbage collection and prevent memory leak ? not at all. How can a goldfish whistle? I'm having trouble understanding your question (I read your whole post before replying). I strongly suspect the only connection your question has with C is that you are using C as your implementation language. I think you're trying to ask a question about memory management. You might be better off asking your question in a general programming newsgroup such as comp.programming (sadly rather quiet these days). this still applies Note that C doesn't do automatic garbage collection. Memory is either freed on exit from a scope (stack-like memory lifetime) or explicitly (using free()). Static memory is, conceptually, never freed. Because a stack has push and pop, it is able to release and allocate memory. I'm not sure what you mean by some of the terms you use. In a sense a pop *is* a release. The memory is no longer available for use. We envisage an exogenous stack which has malloc() associated with a push and free() associated with a pop. exogenous? Why would you do this? Are you envisioning a stack of pointers? The pointers pointing to dynamically allocated memory? Well, yes, sure you could implement this in C. It isn't garbage collection by any standard definition of the term. I can clarify what I mean. Most books implement a stack with a fixed length array of chars and push chars into it, for e.g. K&R. this isn't inherent to a stack implementation. A stack could be a malloc'd block of memory or a linked list. I have a dynamically allocated array of pointers. 
The cons cell is that *what*? Are you trying to implement Lisp in C or something? Try comp.lang.lisp for some help there. Have you read Lisp In Small Pieces? Good fun. allocated as well as the data is allocated for every push and the pointer points to its_curr_val.next. I'm lost. What does a cons cell have to do with a fixed array of pointers? Why do you need dynamic memory? Aren't cons cells usually of fixed size? How can a Lisp-like language use a stack based memory allocation strategy? Similarly, every pop would move the pointer to its_curr_val.prev. It would also free the cons cell and the data after making a copy of the data. Below I explain your point on memory hanging. The algorithm using the stack would have to be perfect to prevent stack overflow or condition of infinite recursion depth. the memory lifetimes must be stack-like (or close to stack-like) This would involve data type checking to filter out invalid input. I'd be more concerned about the memory allocation/deallocation pattern rather than the data types. The task must be casted in an algorithm that uses the stack. Then the algorithm must be shown to be heuristically or by its metaphor, to be correct using informal reasoning. why informal reasoning? Why not formal reasoning? Are there any standard textbooks or papers that show stacks implemented in C/C++/Python/Forth with malloc/free in push and pop ? well it doesn't sound very hard... If Forth is a general processing language based on stack, is it possible to convert any and all algorithms to stack based ones and thus avoid memory leaks since a pop automatically releases memory when free is an intrinsic part of it. don't understand the question. - is forth a general purpose language? Yes - are all algorithms stack based? No Does Forth uses stack for all algorithms ? don't know. Ask the Forth people. Some algorithms are fundamentally not stack based. 
If you try to implement them in Forth then either some memory isn't reclaimed as soon as possible (a leak) or there is some way to have non-stack based memory management. Does it use pointers , ie indirect addressing ? If it can/must use stack then every algorithm could be made stack based. some computations simply need to hang onto memory for a long time:
alloc(obj1)
alloc(obj2)
alloc(obj3)
free(obj2)
long_computation()
free(obj3)
free(obj1)
this simply isn't stack based. the memory for obj2 cannot be reused in a stack-based scheme whilst long_computation() is going on. In theory the memory can be locked by a long_computation(). But a non-stacked based algorithm can also ignore freeing a memory and cause memory leak, just as an improper usage of stack cause the above problem. The purpose of a stack is to hold intermediate results ONLY. no not really Only
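As far as I can decode the cons-cell description in the post above, it amounts to a linked-list stack where a push allocates a cell and a pop copies the data out and drops the cell. In Python the garbage collector stands in for the explicit free; all the names here are mine, a sketch of one reading of that description:

```python
class Cell:
    # A cons-like cell: the data plus a link to the cell below it.
    def __init__(self, data, below):
        self.data, self.below = data, below

class LinkedStack:
    def __init__(self):
        self.top = None

    def push(self, data):
        # Allocation happens on push (the "malloc" half of the scheme).
        self.top = Cell(data, self.top)

    def pop(self):
        # Copy the data out, then drop the cell; with no remaining
        # reference, the GC reclaims it (the "free" half).
        cell = self.top
        data = cell.data
        self.top = cell.below
        return data

s = LinkedStack()
s.push(1); s.push(2); s.push(3)
popped = [s.pop(), s.pop(), s.pop()]   # LIFO order
```

Note this still only gives stack-like lifetimes: it says nothing about the obj2-outlived-by-obj3 pattern above, which is the actual objection.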
Re: Fascinating interview by Richard Stallman on Russia TV
On 18 July, 09:38, Emmy Noether emmynoeth...@gmail.com wrote: On Jul 18, 1:09 am, Nick 3-nos...@temporary-address.org.uk wrote: Emmy Noether emmynoeth...@gmail.com writes: snip In this video, Stall man makes 4 promises to public but stalls on 2nd of them. I have no idea of the rights or wrongs of this case. But I've found through experience that when someone uses a witty misspelling of someone's name, they are almost always the one in the wrong. Huh, you forgot that the whole of GNU = Gnu Not Unix you know someone named GNU? They must have had strange parents... -- http://mail.python.org/mailman/listinfo/python-list
Re: Fascinating interview by Richard Stallman at KTH on emacs history and internals
On 16 July, 09:24, Mark Tarver dr.mtar...@ukonline.co.uk wrote: On 15 July, 23:21, bolega gnuist...@gmail.com wrote: http://www.gnu.org/philosophy/stallman-kth.html RMS lecture at KTH (Sweden), 30 October 1986 did you really have to post all of this... snip read more »... ...oh sorry only about a third of it... Perhaps as an antidote http://danweinreb.org/blog/rebuttal-to-stallmans-story-about-the-form... ...to add two lines? -- http://mail.python.org/mailman/listinfo/python-list
Re: C interpreter in Lisp/scheme/python
On 7 July, 17:38, Rivka Miller rivkaumil...@gmail.com wrote: Although C comes with a regex library, C does not come with a regexp library Anyone know what the first initial of L. Peter Deutsch stand for ? Laurence according to wikipedia (search time 2s) -- http://mail.python.org/mailman/listinfo/python-list
Re: C interpreter in Lisp/scheme/python
On 8 July, 08:08, Nick Keighley nick_keighley_nos...@hotmail.com wrote: On 7 July, 17:38, Rivka Miller rivkaumil...@gmail.com wrote: Anyone know what the first initial of L. Peter Deutsch stand for ? Laurence according to wikipedia (search time 2s) oops! He was born Laurence but changed it legally to L. including the dot -- http://mail.python.org/mailman/listinfo/python-list
passing data to Tkinter call backs
Hi, If this is the wrong place for Tkinter in python please direct me elsewhere! I'm trapping mouse clicks using canvas.bind(ButtonRelease-1, mouse_clik_event) def mouse_clik_event (event) : stuff What mouse_clik_event does is modify some data and trigger a redraw. Is there any way to pass data to the callback function? Some GUIs give you a user-data field in the event, does Tkinter? Or am I reduced to using spit global data? A Singleton is just Global Data by other means. -- Nick Keighley This led to packs of feral Global Variables roaming the address space. -- http://mail.python.org/mailman/listinfo/python-list
Re: passing data to Tkinter call backs
On 9 June, 10:35, Bruno Desthuilliers bruno. 42.desthuilli...@websiteburo.invalid wrote: Nick Keighley a écrit : I'm trapping mouse clicks using canvas.bind("ButtonRelease-1", mouse_clik_event) def mouse_clik_event (event) : stuff What mouse_clik_event does is modify some data and trigger a redraw. Is there any way to pass data to the callback function? Some GUIs give you a user-data field in the event, does Tkinter? Never used TkInter much, but if event is a regular Python object, you don't need any user-data field - just set whatever attribute you want, ie: [...] class Event(object): pass ... e = Event() e.user_data = "here are my data" e.user_data 'here are my data' But I fail to see how this would solve your problem here - where would you set this attribute ??? Those other GUIs also give you a mechanism to pass the data. Say another parameter in the bind call Or am I reduced to using spit global data? A Singleton is just Global Data by other means. from functools import partial data = dict() def handle_event(event, data): ... data['foo'] = 'bar' ... print event ... p = partial(handle_event, data=data) ah! the first time I read this I didn't get this. But in the meantime cobbled something together using lambda. Is partial doing the same thing but a little more elegantly? p(e) <__main__.Event object at 0xb75383ec> data {'foo': 'bar'} Note that data doesn't have to be global here. # callback for mouse click event def mouse_clik_event (event, data) : dosomething (event.x, event.y, data) draw_stuff (display, data) data = Data(6.0, 0.2, 0.3) draw_stuff (display, data) # snag mouse display.canvas.bind("ButtonRelease-1", lambda event: mouse_clik_event (event, mandelbrot)) -- There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns -- the ones we don't know we don't know. -- http://mail.python.org/mailman/listinfo/python-list
Re: passing data to Tkinter call backs
On 9 June, 13:50, Bruno Desthuilliers bruno. 42.desthuilli...@websiteburo.invalid wrote: Nick Keighley a écrit : On 9 June, 10:35, Bruno Desthuilliers bruno. 42.desthuilli...@websiteburo.invalid wrote: Nick Keighley a écrit : I'm trapping mouse clicks using canvas.bind("ButtonRelease-1", mouse_clik_event) def mouse_clik_event (event) : stuff What mouse_clik_event does is modify some data and trigger a redraw. Is there any way to pass data to the callback function? Some GUIs give you a user-data field in the event, does Tkinter? Never used TkInter much, but if event is a regular Python object, you don't need any user-data field - just set whatever attribute you want, ie: [...] class Event(object): pass ... e = Event() e.user_data = "here are my data" e.user_data 'here are my data' But I fail to see how this would solve your problem here - where would you set this attribute ??? Those other GUIs also give you a mechanism to pass the data. Say another parameter in the bind call Ok, so my suggestion should work, as well as any equivalent (lambda, closure, custom callable object etc). from functools import partial data = dict() def handle_event(event, data): ... data['foo'] = 'bar' ... print event ... p = partial(handle_event, data=data) ah! the first time I read this I didn't get this. But in the meantime cobbled something together using lambda. Is partial doing the same thing Mostly, yes - in both cases you get a callable object that keeps a reference on the data. You could also use a closure: def make_handler(func, data): def handler(event): func(event, data) return handler def mouse_clik_event (event, data) : dosomething (event.x, event.y, data) draw_stuff (display, data) display.canvas.bind( "ButtonRelease-1", make_handler(mouse_click_event, data) ) but a little more elegantly? Depending on your own definition for elegantly... 
Note that the lambda trick you used is very idiomatic - functool.partial being newer and probably not as used - so one could argue that the most common way is also the most elegant !-) I'm somewhat newbie at Python but I'd seen lambda elsewhere (scheme). I like the closure trick... I'm using Python In a Nutshell as my guide. -- http://mail.python.org/mailman/listinfo/python-list
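For what it's worth, the three ways of baking data into a callback discussed in this thread are interchangeable. A self-contained comparison (no Tkinter needed; the handler just records what it saw, and all names here are mine):

```python
from functools import partial

seen = []   # records every (event, data) the handler receives

def handler(event, data):
    seen.append((event, data))

data = {"foo": "bar"}

# 1. lambda: wrap the two-argument handler in a one-argument callable
cb1 = lambda event: handler(event, data)

# 2. functools.partial: pre-bind the data argument
cb2 = partial(handler, data=data)

# 3. closure: a factory that captures data in the inner function
def make_handler(func, data):
    def inner(event):
        func(event, data)
    return inner
cb3 = make_handler(handler, data)

# All three now take just the event, which is the shape bind() wants.
for cb in (cb1, cb2, cb3):
    cb("click")
```

Each callable keeps a live reference to the same `data` object, so mutations made inside the handler are visible to the caller; which spelling you prefer really is just taste.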
Re: Haskell's new logo, and the idiocy of tech geekers
On 3 Oct, 00:33, Xah Lee xah...@gmail.com wrote: Haskell has a new logo. A fantastic one. Beautiful. For creator, context, detail, see bottom of: • A Lambda Logo Tour http://xahlee.org/UnixResource_dir/lambda_logo.html I'm amazed he thinks anyone would donate 3 USD to that site -- http://mail.python.org/mailman/listinfo/python-list