Re: Any marginally usable programming language approaches an ill defined barely usable re-implementation of half of Common-Lisp

2024-05-29 Thread Kaz Kylheku via Python-list
On 2024-05-29, HenHanna  wrote:
> On 5/27/2024 1:59 PM, 2qdxy4rzwzuui...@potatochowder.com wrote:
>> https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule
>
>
> interesting!!!
>
> Are  the Rules 1--9  by  Greenspun   good too?

I don't think they exist; it's a joke.

However, Greenspun's resume of accomplishments is a marvel and
an inspiration, including many projects done in Lisp.

A few highlights:

https://philip.greenspun.com/personal/resume

"Helped architect, simulate and design prototype of HP's Precision
Architecture RISC computer. The prototype took two man-years to complete
and ran at VAX 11/780 speed in June 1983. This architecture became the
basis of HP's computer product line for 15 years and then became the
basis for the 64-bit generation of Intel processors."

https://philip.greenspun.com/personal/resume-list

"Automatic Layout tools for VLSI, with an emphasis on bus cells and
automatic implementation of finite state machines (1984 for Symbolics)"

"Design tools on Symbolics Lisp Machine for RISC CPU implemented in TTL
(1982-3 for Hewlett Packard)" (in reference to the PA-RISC work).

"ConSolve system for automating earthmoving, entirely implemented in
Lisp (1986-1989 for ConSolve), including:

* Delaunay Triangulation-based terrain model, with C0 and C1 surface
   models.
* complete environment for earthworks and road design, including
  software to specify design surfaces, calculate costs of
  realizing design surfaces and automatic design tools
* tree-structured database of zoning laws and automatic testing of
  design compliance
* hydrology modelling to calculate drainage basins, streams and ridges
* simulation of earthmoving vehicles
* automated surveying using vehicles and location systems
* radio interface to Caterpillar vehicle, including CRCC error detection
* automatically generated user interface"

-- 
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
Mastodon: @kazina...@mstdn.ca
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: f python?

2012-04-11 Thread Kaz Kylheku
[Followup-To: header set to comp.lang.lisp.]
On 2012-04-11, Shmuel Metz spamt...@library.lspace.org.invalid wrote:
 In 87wr5nl54w@sapphire.mobileactivedefense.com, on 04/10/2012
at 09:10 PM, Rainer Weikusat rweiku...@mssgmbh.com said:

'car' and 'cdr' refer to cons cells in Lisp, not to strings. How the
first/rest terminology can be sensibly applied to 'C strings' (which
are similar to linked-lists in the sense that there's a 'special
termination value' instead of an explicit length)

 A syringe is similar to a sturgeon in the sense that they both start
 with S. LISP doesn't have arrays, and C doesn't allow you to insert
 into the middle of an array.

Lisp, however, has arrays. (Not to mention hash tables, structures, and
classes). Where have you been since 1960-something?

  (let ((array #(1 2 3 4)))
    (aref array 3)) ;; -> 4, O(1) access
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: f python?

2012-04-09 Thread Kaz Kylheku
On 2012-04-09, Shmuel Metz spamt...@library.lspace.org.invalid wrote:
 In 20120408114313...@kylheku.com, on 04/08/2012
at 07:14 PM, Kaz Kylheku k...@kylheku.com said:

Null-terminated strings are infinitely better than the ridiculous
encapsulation of length + data.

 ROTF,LMAO!

For one thing, if s is a non-empty null terminated string then,
cdr(s) is also a string representing the rest of that string 
without the first character,

 Are you really too clueless to differentiate between C and LISP?

In Lisp we can burn a list literal like '(a b c) into ROM, and compute (b c)
without allocating any memory.

Null-terminated C strings do the same thing.

In some Lisp systems, in fact, CDR coding was used to save space when
allocating a list all at once. This created something very similar to
a C string: a vector-like object of all the CARs, with a terminating
convention marking the end.

It's logically very similar.

I need not repeat the elegant recursion example for walking a C string.

That example is not possible with the length + data representation.
(Not without breaking the encapsulation and passing the length as a separate
recursion parameter to a recursive routine that works with the raw data part of
the string.)

Null terminated strings have simplified all kinds of text
manipulation, lexical scanning, and data storage/communication 
code resulting in immeasurable savings over the years.

 Yeah, especially code that needs to deal with lengths and nulls.

To get the length of a string, you call a function, in either representation,
so it is not any more complicated from a coding point of view. The function is,
of course, more expensive if the string is null terminated, but you can code
with awareness of this and not call length wastefully.

If all else was equal (so that the expense of the length operation were
the /only/ issue) then of course the length + data would be better.

However, all else is not equal.

One thing that is darn useful, for instance, is that
p + strlen(p) still points to a string which is length zero, and this
sort of thing is widely exploited in text processing code. e.g.

   size_t digit_prefix_len = strspn(input_string, "0123456789");
   const char *after_digits = input_string + digit_prefix_len;

   if (*after_digits == 0) {
 /* string consists only of digits: nothing after digits */
   } else {
 /* process part after digits */
   }

It's nice that after_digits is a bona-fide string just like input_string,
without any memory allocation being required.

We can lexically analyze a string without ever asking it what its length is,
and as we march down the string, the remaining suffix of that string is always
a string so we can treat it as one, recurse on it, whatever.

Code that needs to deal with null characters is manipulating binary data, not
text, and should use a suitable data structure for that.

 It's great for buffer overruns too.

If we scan for a null terminator which is not there, we have a buffer overrun.

If a length field in front of string data is incorrect, we also have a buffer
overrun.

A pattern quickly emerges here: invalid, corrupt data produced by buggy code
leads to incorrect results, and behavior that is not well-defined!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: f python?

2012-04-09 Thread Kaz Kylheku
On 2012-04-09, Roy Smith r...@panix.com wrote:
 In article 4f82d3e2$1$fuzhry+tra$mr2...@news.patriot.net,
  Shmuel (Seymour J.) Metz spamt...@library.lspace.org.invalid wrote:

 Null terminated strings have simplified all kinds of text
 manipulation, lexical scanning, and data storage/communication 
 code resulting in immeasurable savings over the years.
 
 Yeah, especially code that needs to deal with lengths and nulls. It's
 great for buffer overruns too.

 I once worked on a C++ project that used a string class which kept a 
 length count, but also allocated one extra byte and stuck a null at the 
 end of every string.

Me too! I worked on numerous C++ projects with such a string template
class.

It was usually called 

  std::basic_string

and came from this header called:

  #include <string>

which also instantiated it into two flavors under two nicknames:
std::basic_string<char> being introduced as std::string, and
std::basic_string<wchar_t> as std::wstring.

This class had a c_str() function which retrieved a null-terminated
string and so most implementations just stored the data that way, but
some of the versions of that class cached the length of the string
to avoid doing a strlen or wcslen operation on the data.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: f python?

2012-04-08 Thread Kaz Kylheku
[Followup-To: header set to comp.lang.lisp.]
On 2012-04-08, David Canzi dmca...@uwaterloo.ca wrote:
 Xah Lee  xah...@gmail.com wrote:
hi guys,

sorry am feeling a bit prolifit lately.

today's show, is: 'Fuck Python'
http://xahlee.org/comp/fuck_python.html


Fuck Python
 By Xah Lee, 2012-04-08

fuck Python.

just fucking spend 2 hours and still going.

here's the short story.

so recently i switched to a Windows version of python. Now, Windows
version takes path using win backslash, instead of cygwin slash. This
fucking broke my find/replace scripts that takes a dir level as input.
Because i was counting slashes.

Ok no problem. My sloppiness. After all, my implementation wasn't
portable. So, let's fix it. After a while, discovered there's the
'os.sep'. Ok, replace / to 'os.sep', done. Then, bang, all hell
went lose. Because, the backslash is used as escape in string, so any
regex that manipulate path got fucked majorly.

 When Microsoft created MS-DOS, they decided to use '\' as
 the separator in file names.

This is false. The MS-DOS (dare I say it) kernel accepts both forward and
backslashes as separators.

The application-level choice was once configurable through a variable
in COMMAND.COM. Then they hard-coded it to backslash.

However, Microsoft operating systems continued to (and until this day)
recognize slash as a path separator.

Only, there are broken userland programs on Windows which don't know this.

 So, when you say fuck Python, are you sure you're shooting at the
 right target?

I would have to say, probably yes.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: f python?

2012-04-08 Thread Kaz Kylheku
On 2012-04-08, Peter J. Holzer hjp-usen...@hjp.at wrote:
 On 2012-04-08 17:03, David Canzi dmca...@uwaterloo.ca wrote:
 If you added up the cost of all the extra work that people have
 done as a result of Microsoft's decision to use '\' as the file
 name separator, it would probably be enough money to launch the
 Burj Khalifa into geosynchronous orbit.

 So we have another contender for the Most Expensive One-byte Mistake?

The one byte mistake in DOS and Windows is recognizing two characters as path
separators.  All code that correctly handles paths is complicated by having to
look for a set of characters instead of just scanning for a byte.
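A minimal sketch of that complication (my own illustration, not from the
original post): finding the next component boundary in a DOS/Windows path
cannot use a single-byte scan like strchr; it needs a character-set scan such
as strpbrk over both separators.

```c
#include <string.h>

/* On DOS/Windows, path-handling code must look for *either* separator.
   strchr(p, '/') alone would miss backslash-delimited components. */
const char *next_separator(const char *p)
{
    return strpbrk(p, "/\\");  /* scan for a set of two characters */
}
```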

 http://queue.acm.org/detail.cfm?id=2010365

DOS backslashes are already mentioned in that page, but alas it perpetuates the
clueless myth that DOS and Windows do not recognize any other path separator.

Worse, the one byte Unix mistake being covered is, disappointingly, just a
clueless rant against null-terminated strings.

Null-terminated strings are infinitely better than the ridiculous encapsulation 
of length + data. 

For one thing, if s is a non-empty null terminated string then, cdr(s) is also
a string representing the rest of that string without the first character,
where cdr(s) is conveniently defined as s + 1.

Not only can compilers compress storage by recognizing that string literals are
the suffixes of other string literals, but a lot of string manipulation code is
simplified, because you can treat a pointer to interior of any string as a
string.
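Both points fit in a few lines of C (a sketch of my own, illustrating the
claim rather than quoting the post):

```c
#include <string.h>

/* cdr(s) is just s + 1: the suffix of a null-terminated string is
   itself a bona fide string, with no allocation or copying. */
#define cdr(s) ((s) + 1)

/* Suffix sharing: "string" is a suffix of "substring", so a compiler
   (or a programmer) can make both point into one array. */
static const char substring[] = "substring";
static const char *string = substring + 3;  /* points at "string" */
```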

Because they are recursively defined, you can do elegant tail recursion on null
terminated strings:

  const char *rec_strchr(const char *in, int ch)
  { 
if (*in == 0)
  return 0;
else if (*in == ch)
  return in;
else
  return rec_strchr(in + 1, ch);
  }

length + data also raises the question: what type is the length field? One
byte? Two bytes? Four? And then you have issues of byte order.  Null terminated
C strings can be written straight to a binary file or network socket and be
instantly understood on the other end.
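The contrast can be sketched as follows (hypothetical illustration; the width
and byte order chosen for the counted variant are arbitrary, which is exactly
the point):

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* A null-terminated string goes out as-is: the terminator is the framing. */
size_t emit_c_string(FILE *out, const char *s)
{
    return fwrite(s, 1, strlen(s) + 1, out);  /* data plus terminator */
}

/* A counted string forces wire-format decisions: how wide is the length
   field, and in what byte order? Here: 4 bytes, big-endian, by fiat. */
size_t emit_counted(FILE *out, const char *s)
{
    uint32_t n = (uint32_t)strlen(s);
    unsigned char hdr[4] = {
        (unsigned char)(n >> 24), (unsigned char)(n >> 16),
        (unsigned char)(n >> 8),  (unsigned char)n
    };
    size_t w = fwrite(hdr, 1, sizeof hdr, out);
    return w + fwrite(s, 1, n, out);
}
```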

Null terminated strings have simplified all kinds of text manipulation, lexical
scanning, and data storage/communication code resulting in immeasurable
savings over the years.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Xah's Edu Corner: The importance of syntax notations.

2009-08-17 Thread Kaz Kylheku
[Followup-To: header set to comp.lang.lisp.]
On 2009-08-17, Peter Keller psil...@merlin.cs.wisc.edu wrote:
 In comp.lang.scheme Peter Keller psil...@merlin.cs.wisc.edu wrote:
 The distance() function in this new model is the centroid of the syntactic
 datum which represent the semantic object.

 Oops.

 I meant to say:

 The distance() function in this new model uses the centroid of each
 individual syntactic datum (which represents the semantic object) as
 the location for each semantic object.

Don't sweat it; either way it makes no sense. The rewrite does have a more
journal-publishable feel to it, though: the centroid of the whole aromatic
diffusion seems to hover more precisely above the site of the bovine waste from
which it apparently emanates.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-05 Thread Kaz Kylheku
On 2009-06-05, Vend ven...@virgilio.it wrote:
 On Jun 4, 8:35 pm, Roedy Green see_webs...@mindprod.com.invalid
 wrote:
 On Thu, 4 Jun 2009 09:46:44 -0700 (PDT), Xah Lee xah...@gmail.com
 wrote, quoted or indirectly quoted someone who said :

 • Why Must Software Be Rewritten For Multi-Core Processors?

 Threads have been part of Java since Day 1.  Using threads complicates
 your code, but even with a single core processor, they can improve
 performance, particularly if you are doing something like combing
 multiple websites.

 The nice thing about Java is whether you are on a single core
 processor or a 256 CPU machine (We got to run our Athena Integer Java
 spreadsheet engine on such a beast), does not concern your code.

 You just have to make sure your threads don't interfere with each
 other, and Java/the OS, handle exploiting all the CPUs available.

 You need to decompose your problem in 256 independent tasks.

 It can be trivial for some problems and difficult or perhaps
 impossible for some others.

Even for problems where it appears trivial, there can be hidden
issues, like false cache coherency communication where no actual
sharing is taking place. Or locks that appear to have low contention and
negligible performance impact on ``only'' 8 processors suddenly turn into
bottlenecks. Then there is NUMA. A given address in memory may be
RAM attached to the processor accessing it, or to another processor,
with very different access costs.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-04 Thread Kaz Kylheku
[Followup-To: header set to comp.lang.lisp.]
On 2009-06-04, Roedy Green see_webs...@mindprod.com.invalid wrote:
 On Thu, 4 Jun 2009 09:46:44 -0700 (PDT), Xah Lee xah...@gmail.com
 wrote, quoted or indirectly quoted someone who said :

• Why Must Software Be Rewritten For Multi-Core Processors?

 Threads have been part of Java since Day 1.

Unfortunately, not sane threads designed by people who actually understand
multithreading.

 The nice thing about Java is whether you are on a single core
 processor or a 256 CPU machine (We got to run our Athena Integer Java
 spreadsheet engine on such a beast), does not concern your code.

You are dreaming if you think that there are any circumstances (other than
circumstances in which performance doesn't matter) in which you don't have to
concern yourself about the difference between a uniprocessor and a 256 CPU
machine.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Function Application is not Currying

2009-01-28 Thread Kaz Kylheku
On 2009-01-28, Xah Lee xah...@gmail.com wrote:
 Function Application is not Currying

That's correct, Xah. Currying is a special case of function application. 
A currying function is applied to some other function, and returns a function
that has fewer arguments.

In some languages, you don't see the currying function. It's invisibly
performed whenever you forget an argument. Hit a three argument function with
only two arguments, and you don't get a nice ``insufficient arguments in
function call'' error, but the call is diverted to the currying function, which
gives you back a function of one argument, which you can then call with the
missing argument to compute the original function.
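In C there is no invisible currying, but the mechanism can be emulated
explicitly, which makes the idea concrete (a hypothetical sketch of my own,
not from the original thread):

```c
/* Explicit partial application of a three-argument function in C:
   applying it to two arguments yields a value that remembers them
   and waits for the third. */
typedef int (*fn3)(int, int, int);

struct partial {
    fn3 fn;
    int a, b;  /* the two captured arguments */
};

struct partial curry2(fn3 fn, int a, int b)
{
    struct partial p = { fn, a, b };
    return p;
}

int apply1(struct partial p, int c)
{
    return p.fn(p.a, p.b, c);  /* supply the missing argument */
}

int volume(int x, int y, int z) { return x * y * z; }
```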

 Xah Lee, 2009-01-28

 In Jon Harrop's book Ocaml for Scientist at
 http://www.ffconsultancy.com/products/ocaml_for_scientists/chapter1.html

Figures you'd be reading this. Learning anything?

 It says:

 Currying

 A curried function is a function which returns a function as its
 result.

 LOL. That is incorrect.

Yawn. Say it isn't so.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Mathematica 7 compares to other languages

2008-12-10 Thread Kaz Kylheku
On 2008-12-05, Xah Lee [EMAIL PROTECTED] wrote:
 Let's say for example, we want to write a function that takes a vector
 (of linear algebra), and return a vector in the same direction but
 with length 1. In linear algebar terminology, the new vector is called
 the “normalized” vector of the original.

 For those of you who don't know linear algebra but knows coding, this

If I were to guess who that would be ...

 means, we want a function whose input is a list of 3 elements say
 {x,y,z}, and output is also a list of 3 elements, say {a,b,c}, with
 the condition that

 a = x/Sqrt[x^2+y^2+z^2]
 b = y/Sqrt[x^2+y^2+z^2]
 c = z/Sqrt[x^2+y^2+z^2]

 In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
 you'll have 50 or hundreds lines.

Really? ``50 or hundreds'' of lines in C?

  #include <math.h> /* for sqrt */

  void normalize(double *out, double *in)
  {
double denom = sqrt(in[0] * in[0] + in[1] * in[1] + in[2] * in[2]);

out[0] = in[0]/denom;
out[1] = in[1]/denom;
out[2] = in[2]/denom;
  }

Doh?

Now try writing a device driver for your wireless LAN adapter in Mathematica.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Mathematica 7 compares to other languages

2008-12-10 Thread Kaz Kylheku
On 2008-12-10, Xah Lee [EMAIL PROTECTED] wrote:
 Xah Lee wrote:
  means, we want a function whose input is a list of 3 elements say
^^ ^^^

 Kaz, pay attention:

[ reformatted to 7 bit USASCII ]

 Xah wrote: Note, that the norm
 of any dimention, i.e. list of any length.

It was coded to the above requirements. 
--
http://mail.python.org/mailman/listinfo/python-list


Re: Mathematica 7 compares to other languages

2008-12-03 Thread Kaz Kylheku
On 2008-12-04, Jürgen Exner [EMAIL PROTECTED] wrote:
 toby [EMAIL PROTECTED] wrote:
On Dec 3, 4:15 pm, Xah Lee [EMAIL PROTECTED] wrote:
 On Dec 3, 8:24 am, Jon Harrop [EMAIL PROTECTED] wrote:

  My example demonstrates several of Mathematica's fundamental limitations.

 enough babble Jon.

 Come flying $5 to my paypal account, and i'll give you real code,

I'll give you $5 to go away

 if you add and never come back then count me in, too.

Really? I will trade you one Xah Lee for three Jon Harrops and I will even
throw in a free William James.
--
http://mail.python.org/mailman/listinfo/python-list


Re: what's so difficult about namespace?

2008-11-26 Thread Kaz Kylheku
On 2008-11-26, Xah Lee [EMAIL PROTECTED] wrote:
 comp.lang.lisp,comp.lang.functional,comp.lang.perl.misc,comp.lang.python,comp.lang.java.programmer

 2008-11-25

 Recently, Steve Yegge implemented Javascript in Emacs lisp, and
 compared the 2 languages.

 http://steve-yegge.blogspot.com/
 http://code.google.com/p/ejacs/

 One of his point is about emacs lisp's lack of namespace.

 Btw, there's a question i have about namespace that always puzzled me.

 In many languages, they don't have namespace and is often a well known
 sour point for the lang. For example, Scheme has this problem up till
 R6RS last year.

Scheme hasn't officially supported breaking a program into multiple files until
R6RS. If the language is defined in terms of one translation unit, it doesn't
make sense to have a namespace feature.

 PHP didn't have namespace for the past decade till
 about this year. Javascript, which i only have working expertise,
 didn't have namespace as he mentioned in his blog.

Javascript programs are scanned at the character level by the browser as part
of loading a page.  So there are severe practical limitations on how large
Javascript programs can be.

Namespaces are useful only in very large programs.

 Elisp doesn't have
 name space and it is a well known major issue.

C doesn't have namespaces, and yet you have projects like the Linux kernel
which get by without this.

Given that the Linux kernel can do without namespaces, it's easy to see how
Javascript and Elisp can survive without it.

 Of languages that do have namespace that i have at least working
 expertise: Mathematica, Perl, Python, Java. Knowing these langs
 sufficiently well, i do not see anything special about namespace. The
 _essence_ of namespace is that a char is choosen as a separator, and
 the compiler just use this char to split/connect identifiers.

The essence of a true namespace or package system or whatever is that you can
establish situations in which you use the unqualified names.

You can emulate namespaces by adding prefixes to identifiers, which is
how people get by in C.

In C, you can even get the functionality of short names using the preprocessor.

I have done this before (but only once). In a widely-used public header file, I
prefixed all of the names of struct members (because struct members are not
immune to clashes: they clash with preprocessor symbols!)

  struct foo {
      int foo_category;
      time_t foo_timestamp;
      /* ... */
  };

Inside the implementation module, I made macros for myself:

  #define category foo_category
  #define timestamp foo_timestamp

In this way, I didn't have to edit any of the code in order to move the struct
members into the namespace. Expressions like ``pf->category'' continued to work
as before.

 Although i have close to zero knowledge about compiler or parser, but
 from a math point of view and my own 18 years of programing
 experience, i cannot fathom what could possibly be difficult of
 introducing or implementing a namespace mechanism into a language.

The same things that make it difficult to add anything to a language, namely
the stupid way in which languages are designed to get in the way of extension.

What would it take to add namespaces to C, for instance?

If you have any proposal for a new C feature, the ISO C people encourage you to
develop a proof-of-concept, which means: hack it into an existing
implementation to demonstrate that it's feasible.

If you need new syntax, hacking it into an implementation means hacking it into
the parser. Everyone who wants to experiment with your feature needs to get
your compiler patches (for GCC for instance) and rebuild the compiler.

The process is painful enough that it's only worth doing if it solves something
that is perceived as being a critical issue.

 do not understand, why so many languages that lacks so much needed
 namespace for so long? If it is a social problem, i don't imagine they
 would last so long. It must be some technical issue?

I recently read about a very useful theory which explains why technologies
succeed or fail. 

See: 
http://arcfn.com/2008/07/why-your-favorite-programming-language-is-unpopular.html

The perceived crisis which namespaces solve is not significant enough relative
to the pain of adoption.  No, wrong, let's restate that. Programmers do use
namespaces even when there is no language feature. It's not about adoption of
namespaces, but adoption of proper support for namespaces.  Not using
namespaces is a crisis in a software project, but (at least in the perception
held by many programmers) that crisis is adequately mitigated by using a naming
convention.  Proper namespaces only address the small remaining vestiges
of the original crisis.

Thus: the perceived crisis which is solved by properly incorporating namespaces
into a programming language (rather than living with a prefix-based emulation)
is not significant enough relative to the perceived pain of adopting proper
namespaces into the programming language.

Re: what's so difficult about namespace?

2008-11-26 Thread Kaz Kylheku
On 2008-11-26, Xah Lee [EMAIL PROTECTED] wrote:
 Can you see, how you latched your personal beef about anti software
 crisis philosophy into this no namespace thread?

I did no such thing. My post was about explaining the decision process
that causes humans to either adopt some technical solution or not.

In doing so, I put on the ``hat'' of an Emacs Lisp, Javascript or C programmer.

If enough programmers share this type of perception (be it right or wrong),
that explains why there is no rush to add namespace support to these languages.

The theory is that the adoption of some technology is driven by
a function of the ratio:

   perceived crisis
   --------------------------
   perceived pain of adoption

These perceptions may not be accurate. The crisis may be larger than perceived,
or the pain of adoption may be smaller than perceived.

But, you missed the point that I don't necessarily agree or disagree with the
perceptions. They are what they are.

 Nobody is saying that lacking namespace is a CRISIS.

I.e. you agree that it's not widely perceived as a crisis. Hence, small
numerator in the above ratio.
--
http://mail.python.org/mailman/listinfo/python-list


Re: calling python from lisp

2008-10-28 Thread Kaz Kylheku
[Followup-To: header set to comp.lang.lisp.]
On 2008-10-29, Martin Rubey [EMAIL PROTECTED] wrote:
 Dear all,

 I'm trying to call from common lisp functions written for Sage
 (www.sagemath.org), which in turn is written in python. 

Maybe those functions will work under CLPython?

CLPython is different from python-on-lisp; it's a front-end for Lisp that
understands Python syntax, and provides Python run-time support.

If the Python code can be ported to CLPython, it gets compiled,
and you can call it much more directly from other Lisp code.
--
http://mail.python.org/mailman/listinfo/python-list


Re: Microsoft's Dynamic Languages Runtime (DLR)

2007-05-04 Thread Kaz Kylheku
On May 2, 5:19 pm, sturlamolden [EMAIL PROTECTED] wrote:
 On May 3, 2:15 am, Kaz Kylheku [EMAIL PROTECTED] wrote:

  Kindly refrain from creating any more off-topic, cross-posted threads.
  Thanks.

 The only off-topic posting in this thread is your own (and now this
 one).

You are making a very clumsy entrance into these newsgroups. So far
you have started two cross-posted threads. The first is only topical
in comp.lang.python (how to emulate macros in Python). This one is
topical in neither one, since it is about Microsoft DLR.

It's quite possible that some Lisp and Python programmers have a
strong interest in Microsoft DLR. Those people who have such an
interest (regardless of whether they are Lisp and Python user also)
and who like to read Usenet will almost certainly find a Microsoft DLR
newsgroup for reading about and discussing Microsoft DLR. Do you not
agree?

Also note that there is very rarely, if ever, any good reason for
starting a thread which is crossposted among comp.lang.* newsgroups,
even if the subject contains elements that are topical in all of them
(yours does not).

 Begone.

You are childishly beckoning Usenet etiquette to be gone so that you
may do whatever you wish. But I trust that you will not, out of spite
for being rebuked, turn a few small mistakes into a persistent style.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Microsoft's Dynamic Languages Runtime (DLR)

2007-05-02 Thread Kaz Kylheku
On May 2, 11:22 am, sturlamolden [EMAIL PROTECTED] wrote:
 On Monday Microsoft announced a new runtime for dynamic languages,

Kindly refrain from creating any more off-topic, cross-posted threads.
Thanks.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Anyone persuaded by merits of Lisp vs Python?

2006-12-29 Thread Kaz Kylheku
Paddy wrote:
 Carl Banks wrote:
  If you were so keen on avoiding a flame war, the first thing you should
  have done is to not cross-post this.

 I want to cover Pythonistas looking at Lisp and Lispers looking at

That's already covered in the original thread. Same two newsgroups, same
crowd of people. What's the difference?

Keep it in the original thread where uninterested people can continue
to ignore it.

 Python because of the thread. The cross posting is not as flame bait.

You're re-starting the same thread under a new root article, thereby
evading kill filters set up on the original thread.

In halfway decent newsreaders, people can killfile by thread, whereby
all articles associated with the same ancestral root article are
removed.

It's very bad practice to re-introduce continuations of long flamebait
threads under different thread identities.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Anyone persuaded by merits of Lisp vs Python?

2006-12-29 Thread Kaz Kylheku
Steven Haflich wrote:
 Ray wrote:
  Can one really survive knowing just
  one language these days, anyway?

 いいえ! 違います。

iie! chigaimas.

No, I beg to differ!

(Hey, I'm right in the middle of preparing my Kanji-drilling Lisp
program for distribution).

-- 
http://mail.python.org/mailman/listinfo/python-list

Re: merits of Lisp vs Python

2006-12-17 Thread Kaz Kylheku
Paul Rubin wrote:
 Raffael Cavallaro [EMAIL PROTECTED]'espam-s'il-vous-plait-mac.com writes:
  For example, a common lisp with optional static typing on demand would
  be strictly more expressive than common lisp. But, take say, haskell;
  haskell's static typing is not optional (you can work around it, but
  you have to go out of your way to do so); haskell's pure functional
  semantics are not optional (again, workarounds possible to a limited
  extent). This requires you to conceive your problem solution (i.e.,
  program) within the framework of a particular paradigm. This lock-in
  to a particular paradigm, however powerful, is what makes any such
  language strictly less expressive than one with syntactic abstraction
  over a uniform syntax.

 Incorrect, I believe.  The above is like saying Lisp's lack of
 optional manual storage allocation and machine pointers makes Lisp
 less powerful.

That is true. By itself, that feature makes Lisp less powerful for
real-world software dev, which is why we have implementation-defined
escape hatches for that sort of thing.

 It's in fact the absence of those features that lets
 garbage collection work reliably.

This is a bad analogy to the bondage-and-discipline of purely
functional languages.

The removal for the need for manual object lifetime computation does
not cause a whole class of useful programs to be rejected.

In fact, all previously correct programs continue to work as before,
and in addition, some hitherto incorrect programs become correct.
That's an increase in power: new programs are possible without losing
the old ones.

Whereas programs can't be made to conform to the pure functional paradigm
by adjusting the semantics of some API function. Programs which don't
conform have to be rejected.

  Reliable GC gets rid of a large and
 important class of program errors and makes possible programming in a
 style that relies on it.

Right. GC gets rid of /program errors/. Pure functional programming
gets rid of /programs/.

 You can make languages more powerful by removing features as well as by 
 adding them.

Note that changing the storage liberation request from an imperative to
a hint isn't the removal of a feature. It's the /addition/ of a
feature. The new feature is that objects can still be reliably used
after the implementation was advised by the program that they are no
longer needed. Programs which do this are no longer buggy. Another new
feature is that programs can fail to advise the implementation that
some objects are no longer needed, without causing a leak, so these
programs are no longer buggy. The pool of non-buggy programs has
increased without anything being rejected.

Okay, that is not quite true, which brings me back to my very first
point. GC does (effectively) reject programs which do nasty things with
pointers. For instance, hiding pointers from being visible to GC.
However, such things can be made to coexist with GC. GC and non-GC
stuff /can/ and does, for pragmatic reasons, live in the same image.

Likewise, functional programming and imperative programming can also
coexist in the same image.

/Pure/ functional programming isn't about adding the feature of
functional programming. It's about eliminating other features which
are not functional programming.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-14 Thread Kaz Kylheku
Bruno Desthuilliers wrote:
 André Thieme wrote:
  Bruno Desthuilliers wrote:
 
 (snip)
  Both are highly dynamic. Neither are declarative.
 
 
  Well, Lisp does support some declarative features in the ansi standard.

 If you go that way, there are declarative stuff in Python too... But
 neither Lisp nor Python are close to, say, SQL.

False. Common Lisp can be made to support SQL syntax.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-13 Thread Kaz Kylheku
Rob Warnock wrote:
 And for any of you who are rejecting this because you don't want to
 learn or use Emacs, Raffael's point is even true in the Vi family of
 editors (nvi & vim, at least). The y% command yanks (copies)
 everything through the matching paren into the anonymous buffer;
 d% deletes likewise [and saves in the anonymous buffer]; p (or P)
 pastes after (or before) the current location. All can be prefixed
 with a buffer (Q-register) name for more flexibility.

If you're on Vim, you also have the ib and ab commands that work
under visual selection. You repeat them to broaden the selection to the
next level of nesting.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-12 Thread Kaz Kylheku
I V wrote:
 To be a little provocative, I wonder if the idea that you're talking to
 the interpreter doesn't apply more to lisp than to python; you can have
 any syntax you like, as long as it looks like an AST.

Actually, that is false. You can have any syntax you like in Common
Lisp. For instance, the syntax of Python:

http://trac.common-lisp.net/clpython/

What thesaurus are you using which lists provocative as a synonym for
uninformed?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-12 Thread Kaz Kylheku
Bill Atkins wrote:
 (Why are people from c.l.p calling parentheses brackets?)

Because that's what they are often called outside of the various
literate fields.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-11 Thread Kaz Kylheku
Paddy wrote:
   http://en.wikipedia.org/wiki/Doctest

I pity the hopelessly anti-intellectual douche-bag who inflicted this
undergraduate misfeature upon the programming language.

This must be some unofficial patch that still has a hope of being shot
down in flames, right?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-11 Thread Kaz Kylheku
Paddy wrote:
 Does Lisp have a doctest-like module as part of its standard
 distribution?

No, and it never will.

The wording you are using betrays cluelessness. Lisp is an ANSI
standard language. Its distribution is a stack of paper.

There isn't a ``standard distribution'' of Lisp any more than there is
such a thing of C++.

 There are advantages to
 doctest being one of Pythons standard modules.

There are also advantages in being able to tell idiots who have
terrible language extension ideas that they can simply roll their own
crap---and kindly keep it from spreading.

This is generally what happens in intelligent, mature programming
language communities. For instance, whenever someone comes along who
thinks he has a great idea for the C programming language, the standard
answer is: Wonderful! Implement the feature into a major compiler like
GCC, to show that it's feasible. Gain some experience with it in some
community of users, work out any wrinkles, and then come back.

In the Lisp community, we can do one better than that by saying: Your
feature can be easily implemented in Lisp and loaded by whoever wants
to use it. So, i.e. don't bother.

Lisp disarms the nutjobs who want to inflict harm on the world by
fancying themselves as programming language designers. They are reduced
to the same humble level as other software developers, because the best
they can do is write something which is just optionally loaded like any
other module, and easily altered beyond their original design or
rejected entirely. Those who are not happy with the lack of worship run
off and invent shitty little languages for hordes of newbies, being
careful that in the designs of those languages, they don't introduce
anything from Lisp which would land them in the same predicament from
which they escaped.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-11 Thread Kaz Kylheku
Kay Schluehr wrote:
 Juan R. wrote:
  A bit ambiguous my reading. What is not feasible in general? Achieving
  compositionality?

 Given two languages L1 = (G1,T1), L2 = (G2, T2 ) where G1, G2 are
 grammars and T1, T2 transformers that transform source written in L1 or
 L2 into some base language
 L0 = (G0, Id ). Can G1 and G2 be combined to create a new grammar G3
 s.t. the transformers T1 and T2 can be used also to transform  L3 = (G3
 = G1(x)G2, T3 = T1(+)T2) ? In the general case G3 will be ambiguous and
 the answer is NO. But it could also be YES in many relevant cases. So
 the question is whether it is necessary and sufficient to check whether
 the crossing between G1 and G2 is feasible i.e. doesn't produce
 ambiguities.

See, we don't have this problem in Lisp, unless some of the transformers
in T1 have names that clash with those in T2. That problem can be
avoided by placing the macros in separate packages, or by renaming. In
the absence of naming conflicts, the two macro languages L1 and L2
combine seamlessly into L3, because the transformers T are defined on
structure, not on lexical grammar. The read grammar doesn't change (and
is in fact irrelevant, since the whole drama is played out with
objects, not text). In L1, the grammar is nested lists. In L2, the
grammar is, again, nested lists. And in L3: nested lists. So that in
fact, at one level, you don't even recognize them as being different
languages, but on a different level you can.

The problems you are grappling with are in fact created by the
invention of an unsuitable encoding. You are in effect solving a puzzle
that you or others created for you.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-11 Thread Kaz Kylheku
Paul Rubin wrote:
 André Thieme [EMAIL PROTECTED] writes:
   import module
   module.function = memoize(module.function)
 
  Yes, I mentioned that a bit earlier in this thread (not about the
  during runtime thing).
  I also said that many macros only save some small bits of code.
  Your python example contains 4 tokens / brain units.
  The Lisp version only has 2.

 You shouldn't count the import statement, since you'd need the
 equivalent in Lisp as well.

 Contrast the much more common

   a[i] = b[n]

 with

   (setf (aref a i) (aref b n))

 and the attractions of Python may make more sense.


Actual Lisp session transcript:

[1] (load "infix.cl")
;; Loading file infix.cl ...
;;;
*
;;;   Infix notation for Common Lisp.
;;;   Version 1.3  28-JUN-96.
;;;   Written by Mark Kantrowitz, CMU School of Computer Science.
;;;   Copyright (c) 1993-95. All rights reserved.
;;;   May be freely redistributed, provided this notice is left intact.
;;;   This software is made available AS IS, without any warranty.
;;;
*
;; Loaded file infix.cl
T
[2] #i( if x < y then a[i] = b[j] else a[i] = c[j,j] ^^ w )

*** - EVAL: variable X has no value
The following restarts are available:
USE-VALUE  :R1  You may input a value to be used instead of X.
STORE-VALUE:R2  You may input a new value for X.
ABORT  :R3  ABORT
Break 1 [3] :a
[4] (quote #i( if x < y then a[i] = b[j] else a[i] = c[j,j] ^^ w ))
(IF (< X Y) (SETF (AREF A I) (AREF B J))
 (SETF (AREF A I) (EXPT (AREF C J J) W)))


In spite of such possibilities, things like this just don't catch on in
Lisp programming. Once people know that they /can/ get it if they want,
they no longer want it.

What doesn't make sense is writing entire language implementations from
scratch in order to experiment with notations.

I think someone may have been working on a Python interface built on
Common Lisp.

Ah, here! 

http://trac.common-lisp.net/clpython/wiki/WikiStart

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-10 Thread Kaz Kylheku
Steven D'Aprano wrote:
 I'd love to say it has been fun, but it has been more frustrating than
 enjoyable. I don't mind an honest disagreement between people who

Honest disagreement requires parties who are reasonably informed, and
who are willing not to form opinions about things that they have no
experience with.

 So now I have an even better understanding for why Lispers have a reputation 
 for being difficult and
 arrogant.

Whereas, for instance, lecturing a Lisp newsgroup (of all places) in
the following manner isn't arrogant, right?

``Turing Complete. Don't you Lisp developers know anything about
computer science? ''

If that had been intended to be funny, you should have made that more
clear by choosing, say, lambda calculus as the subject.

 But I also gained a little more insight into why Lispers love their
 language. I've learnt that well-written Lisp code isn't as hard to read as
 I remembered, so I'd just like to withdraw a comment I made early in the
 piece.

You /think/ you learned that, but in reality you only read some
/opinions/ that Lisp isn't as hard to read as was maintained by your
previously held opinions. Second-hand opinions are only little better
than spontaneous opinions.  It's, at best, a slightly favorable trade.

 I no longer believe that Lisp is especially strange compared to natural 
 languages.

Natural languages are far less completely understood than any
programming language. Only your own, and perhaps some others in nearby
branches of the language tree do not appear strange to you.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-10 Thread Kaz Kylheku
[EMAIL PROTECTED] wrote:
 Well, having read a lot of this thread, I can see one of the
 reasons why the software profession might want to avoid
 lispies.  With advocacy like this, who needs detractors?

And thus your plan for breaking into the software profession is ... to
develop Usenet advocacy skills.

``That guy we just interviewed, I don't know. Perfect score on the C++
test, lots of good architectural knowledge, but he seems to care more
about being correct than convincing people. He'd be fine for now, but
what does that say about his ability when the crunch comes, and he's
called upon to ... advocate?''

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-10 Thread Kaz Kylheku
Paul Rubin wrote:
 Kaz Kylheku [EMAIL PROTECTED] writes:
   Lisp just seems hopelessly old-fashioned to me these days.  A
   modernized version would be cool, but I think the more serious
   Lisp-like language designers have moved on to newer ideas.
 
  What are some of their names, and what ideas are they working on?

 http://caml.inria.fr
 http://www.haskell.org

Right. What these have in common with each other is that they use
manifest typing, whereas Lisp uses latent typing. But after that, they
mostly diverge.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-09 Thread Kaz Kylheku
Steven D'Aprano wrote:
 But Lisp's syntax is so unlike most written natural languages that that it
 is a whole different story.

Bahaha!

 Yes, the human brain is amazingly flexible,
 and people can learn extremely complex syntax and grammars (especially if
 they start young enough) so I'm not surprised that there are thousands,
 maybe tens or even hundreds of thousands of Lisp developers who find the
 language perfectly readable.

1 '(especially if they start young enough)
(ESPECIALLY IF THEY START YOUNG ENOUGH)
2 (sixth *)
ENOUGH

... said!

Yeah, so /unlike/ written natural languages! 

What a fucking moron.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-09 Thread Kaz Kylheku
Kirk Sluder wrote:
 unnecessary abstraction.  The question I have is why do critics
 single out macros and not other forms of abstraction such as
 objects, packages, libraries, and functions?

The answer is: because they are pitiful morons.

But you knew that already.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-08 Thread Kaz Kylheku
Mark Tarver wrote:
 I don't mind controversy - as long as there is intelligent argument.
 And since it involves Python and Lisp, well it should be posted to both
 groups.   The Lispers will tend to say that Lisp is better for sure -
 so it gives the Python people a chance to defend this creation.

And that would be our confirmation that this is another trolling
asshole.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: merits of Lisp vs Python

2006-12-08 Thread Kaz Kylheku
Paul Rubin wrote:
 Lisp just seems hopelessly old-fashioned to me these days.  A
 modernized version would be cool, but I think the more serious
 Lisp-like language designers have moved on to newer ideas.

What are some of their names, and what ideas are they working on?

Also, who are the less serious designers?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: a different question: can you earn a living with *just* python?

2006-09-26 Thread Kaz Kylheku
John Salerno wrote:
 But what if you are an expert Python program and have zero clue about
 other languages?

Then I would say that you are not a mature computing professional, but
merely a Python jockey.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-09 Thread Kaz Kylheku
Xah Lee wrote:
 Has anyone read this paper? And, would anyone be interested in giving a
 summary?

Not you, of course. Too busy preparing the next diatribe against UNIX,
Perl, etc. ;)

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python less error-prone than Java

2006-06-04 Thread Kaz Kylheku
Christoph Zwerschke wrote:
 You will often hear that for reasons of fault minimization, you should
 use a programming language with strict typing:
 http://turing.une.edu.au/~comp284/Lectures/Lecture_18/lecture/node1.html

Quoting from that web page:

A programming language with strict typing and run-time checking should
be used.

This doesn't prescribe latent or manifest typing, only that there be
type checking.

There is no question that for reliability, it is necessary to have type
checking, whether at run time or earlier.

You can have statically typed languages with inadequate type safety,
and you can have dynamically typed languages with inadequate type
safety.

 Now the same thing, directly converted to Python:

 def binarySearch(a, key):
     low = 0
     high = len(a) - 1
     while low <= high:
         mid = (low + high) / 2
         midVal = a[mid]
         if midVal < key:
             low = mid + 1
         elif midVal > key:
             high = mid - 1
         else:
             return mid # key found
     return -(low + 1) # key not found.

 What's better about the Python version? First, it will operate on *any*
 sorted array, no matter which type the values have.

Uh huh! With hard-coded < and = operators, how stupid. What if you want
to use it on strings?

Would that be a case-sensitive lexicographic comparison, or a
case-insensitive one? How do you specify what kind of less-than and
equal you want to do?

-1 to indicate not found? Why copy Java braindamage induced by an
antiquated form of static typing? The Java version has to do that
because the return value is necessarily declared to be of type integer.


;; Common Lisp
;; Binary search any sorted sequence SEQ for ITEM, returning
;; the position (starting from zero) if the item is found,
;; otherwise returns NIL.
;;
;; :REF specifies positional accessing function, default is ELT
;; :LEN specifies function for retrieving sequence length
;; :LESS specifies function for less-than item comparison
;; :SAME specifies function for equality comparison

(defun binary-search (seq item
                      &key (ref #'elt) (len #'length)
                           (less #'<) (same #'=))
  (loop with low = 0
        and high = (1- (funcall len seq))
        while (<= low high)
        do
          (let* ((mid (truncate (+ low high) 2))
                 (mid-val (funcall ref seq mid)))
            (cond
              ((funcall less mid-val item)
               (setf low (1+ mid)))
              ((funcall same mid-val item)
               (return mid))
              (t (setf high (1- mid)))))))

Common Lisp integers are mathematical, so the overflow problem
described in your referenced article doesn't exist here.
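For comparison, the same parameterization is just as natural in Python; a sketch (the names are mine, and it returns None instead of copying Java's -(low + 1) encoding):

```python
import operator

def binary_search(seq, item, less=operator.lt, same=operator.eq):
    """Binary search a sorted sequence; index of item, or None."""
    low, high = 0, len(seq) - 1
    while low <= high:
        mid = (low + high) // 2
        mid_val = seq[mid]
        if less(mid_val, item):
            low = mid + 1
        elif same(mid_val, item):
            return mid
        else:
            high = mid - 1
    return None

# Answering the "what if you want strings?" objection: pass in a
# case-insensitive less-than and equality instead of hard-coding them.
words = ["Ant", "bee", "Cat", "dog"]
ci_less = lambda a, b: a.lower() < b.lower()
ci_same = lambda a, b: a.lower() == b.lower()
print(binary_search(words, "CAT", less=ci_less, same=ci_same))  # -> 2
```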

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python less error-prone than Java

2006-06-04 Thread Kaz Kylheku
Ilpo Nyyssönen wrote:
 This is one big thing that makes code
 less error-prone: using existing well made libraries.
 You can find binary search from python standard library too (but actually the 
 API
 in Java is a bit better, see the return values).
 Well, you can say that the binary search is a good example and in real
 code you would use the stuff from the libraries.

The trouble with your point is that Christoph's original posting refers
to an article, which, in turn, at the bottom, refers to a bug database
which shows that the very same defect had been found in Sun's Java
library!

Buggy library code is what prompted that article.

 I'd say it is not
 good example: How often will you write such algorithms? Very rarely.

 Integer overflows generally are not those errors you run into in
 programs.

Except when you feed those programs inputs which are converted to
integers which are then fed into some operation whose result doesn't
fit into the range type.

Other than that, you are okay!

Like when would that happen, right?

 The errors happening most often are from my point of view:

 1. null pointer errors
 2. wrong type (class cast in Java, some weird missing attribute in python)
 3. array/list index out of bounds

 First and third ones are the same in about every language.

... other than C and C++, where their equivalents just crash or stomp
over memory, but never mind; who uses those? ;)

 The second
 one is one where the typing can make a difference.

Actually, the first one is also where typing can make a difference.
Instead of this stupid idea of pointers or references having a null
value, you can make a null value which has its own type, and banish
null pointers.

So null pointer errors are transformed into type errors: the special
value NIL was fed into an operation where some other type was expected.
And by means of type polymorphism, an operation can be extended to
handle the case of NIL.
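A sketch of that idea in Python terms, with `functools.singledispatch` standing in for Lisp-style method dispatch (the example operation is mine): the null value has its own type, and the operation is extended to it instead of crashing.

```python
from functools import singledispatch

@singledispatch
def display_name(name):
    return name.title()

# Extend the operation to the null value's own type (NoneType),
# turning a would-be null pointer error into ordinary dispatch.
@display_name.register(type(None))
def _(name):
    return "(anonymous)"

print(display_name("kaz"))   # -> Kaz
print(display_name(None))    # -> (anonymous)
```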

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: number of different lines in a file

2006-05-19 Thread Kaz Kylheku
Bill Pursell wrote:
 Have you tried
 cat file | sort | uniq | wc -l ?

The standard input file descriptor of sort can be attached directly to
a file. You don't need a file catenating process in order to feed it:

  sort < file | uniq | wc -l

Sort has the uniq functionality built in:

  sort -u < file | wc -l

 sort might choke on the large file, and this isn't python, but it
 might work.

Solid implementations of sort can use external storage for large files,
and perform a poly-phase type sort, rather than doing the entire sort
in memory.

I seem to recall that GNU sort does something like this, using
temporary files.

Naively written Python code is a lot more likely to choke on a large
data set.

 You might try breaking the file into
 smaller pieces, maybe based on the first character, and then
 process them separately.

No, the way this is done is simply to read the file and insert the data
into an ordered data structure until memory fills up. After that, you
keep reading the file and inseting, but each time you insert, you
remove the smallest element and write it out to the segment file.  You
keep doing it until it's no longer possible to extract a smallest
element which is greater than all that have been already written to the
file. When that happens, you start a new file.  That does not happen
until you have filled memory at least twice. So for instance with half
a gig of RAM, you can produce merge segments on the order of a gig.
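The scheme just described is replacement selection; a rough Python sketch, using a heap as the ordered structure and returning in-memory runs instead of writing segment files (the names are mine):

```python
import heapq

_END = object()   # sentinel marking input exhaustion

def replacement_selection(items, capacity):
    """Split an iterable into sorted runs, holding at most
    `capacity` items in the heap at once."""
    it = iter(items)
    heap = [x for _, x in zip(range(capacity), it)]  # fill "memory"
    heapq.heapify(heap)
    runs, current, pending = [], [], []
    while heap:
        smallest = heapq.heappop(heap)
        current.append(smallest)           # "write out" the smallest
        nxt = next(it, _END)
        if nxt is not _END:
            if nxt >= smallest:
                heapq.heappush(heap, nxt)  # still fits the current run
            else:
                pending.append(nxt)        # must wait for the next run
        if not heap:                       # current run is finished
            runs.append(current)
            current = []
            heap, pending = pending, []
            heapq.heapify(heap)
    return runs

print(replacement_selection([5, 1, 9, 2, 8, 3, 7, 4, 6, 0], 3))
# -> [[1, 2, 5, 8, 9], [3, 4, 6, 7], [0]]
```

On random input the runs average about twice the heap capacity, which is the "filled memory at least twice" observation above.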

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: number of different lines in a file

2006-05-19 Thread Kaz Kylheku
Bill Pursell wrote:
 Have you tried
 cat file | sort | uniq | wc -l ?

The standard input file descriptor of sort can be attached directly to
a file. You don't need a file catenating process in order to feed it:

  sort < file | uniq | wc -l

And sort also takes a filename argument:

  sort file | uniq | wc -l

And sort has the uniq functionality built in:

  sort -u file | wc -l

Really, all this piping between little utilities is inefficient
bullshit, isn't it!  All that IPC through the kernel, copying the data.

Why can't sort also count the damn lines?

There should be one huge utility which can do it all in a single
address space.

 sort might choke on the large file, and this isn't python, but it
 might work.

Solid implementations of sort can use external storage for large files,
and perform a poly-phase type sort, rather than doing the entire sort
in memory.

I seem to recall that GNU sort does something like this, using
temporary files.

Naively written Python code is a lot more likely to choke on a large
data set.

 You might try breaking the file into
 smaller pieces, maybe based on the first character, and then
 process them separately.

No, the way this is done is simply to read the file and insert the data
into an ordered data structure until memory fills up. After that, you
keep reading the file and inseting, but each time you insert, you
remove the smallest element and write it out to the segment file.  You
keep doing it until it's no longer possible to extract a smallest
element which is greater than all that have been already written to the
file. When that happens, you start a new file.  That does not happen
until you have filled memory at least twice. So for instance with half
a gig of RAM, you can produce merge segments on the order of a gig.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: number of different lines in a file

2006-05-19 Thread Kaz Kylheku
Paddy wrote:
 If the log has a lot of repeated lines in its original state then
 running uniq twice, once up front to reduce what needs to be sorted,
 might be quicker?

Having the uniq and sort steps integrated in a single piece of software
allows for the most optimization opportunities.

The sort utility, under -u, could squash duplicate lines on the input
side /and/ the output side.

  uniq log_file | sort | uniq | wc -l

Now you have two more pipeline elements, two more tasks running, and
four more copies of the data being made as it travels through two extra
pipes in the kernel.

Or, only two more copies if you are lucky enough to have a zero-copy
pipe implementation which allows data to go from the writer's buffer
directly to the reader's one without intermediate kernel buffering.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tabs versus Spaces in Source Code

2006-05-16 Thread Kaz Kylheku
Xah Lee wrote:
 Tabs vs Spaces can be thought of as parameters vs hard-coded values, or
 HTML vs ascii format, or XML/CSS vs HTML 4, or structural vs visual, or
 semantic vs format. In these, it is always easy to convert from the
 former to the latter, but near impossible from the latter to the
 former.

Bahaha, looks like someone hasn't thought things through very well.

Spaces, under a mono font, offer greater precision and expressivity in
achieving specific alignment. That expressivity cannot be captured by
tabs.

The difficulty in converting spaces to tabs rests not in any bridgeable
semantic gap, but in the lack of having any way whatsoever to express
using tabs what the spaces are expressing.

It's not /near/ impossible, it's /precisely/ impossible.

For instance, tabs cannot express these alignments:

  /*
   * C block
   * comment
   * in a common style.
   */

   (lisp
    (nested list
            with symbols
            and things))

  (call to a function
        with many parameters)
  ;; how do you align to and with using tabs?
  ;; only if to lands on a tab stop; but dependence on specific tab
  ;; stops destroys the whole idea of tabs being parameters.

To do these alignments structurally, you need something more expressive
than spaces or tabs. But spaces do the job under a mono font, /and/
they do it in a completely unobtrusive way.

If you want to do nice typesetting of code, you have to add markup
which has to be stripped away if you actually want to run the code.

Spaces give you decent formatting without markup. Tabs do not. Tabs are
only suitable for aligning the first non-whitespace character of a line
to a stop. Only if that is the full extent of the formatting that you
need to express in your code can you achieve the ideal of being able to
change your tab parameter to change the indentation amount. If you need
to align characters which aren't the first non-whitespace in a line,
tabs are of no use whatsoever, and proportional fonts must be banished.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Tabs versus Spaces in Source Code

2006-05-16 Thread Kaz Kylheku
achates wrote:
 Kaz Kylheku wrote:

  If you want to do nice typesetting of code, you have to add markup
  which has to be stripped away if you actually want to run the code.

 Typesetting code is not a helpful activity outside of the publishing
 industry.

Be that as it may, code writing involves an element of typesetting. If
you are aligning characters, you are typesetting, however crudely.

 You might like the results of your typsetting; I happen not
 to. You probably wouldn't like mine. Does that mean we shouldn't work
 together? Only if you insist on forcing me to conform to your way of
 displaying code.

Someone who insists that everyone should separate line indentation into
tabs which achieve the block level, and spaces that achieve additional
alignment, so that code could be displayed in more than one way based
on the tab size without loss of alignment, is probably a space cadet,
who has a bizarre agenda unrelated to developing the product.

There is close to zero value in maintaining such a scheme, and
consequently, it's hard to justify with a business case.

Yes, in the real world, you have to conform to someone's way of
formatting and displaying code. That's how it is.

You have to learn to read, write and even like more than one style.

 You are correct in pointing out that tabs don't allow for 'alignment'
 of the sort you mention:

That alignment has a name: hanging indentation.

All forms of aligning the first character of a line to some requirement
inherited from the previous line are called indentation.

Granted, a portion of that indentation is derived from the nesting
level of some logically enclosing programming language construct, and
part of it may be derived from the position of a character of some
parallel constituent within the construct.

 (lisp
  (nested list
          with symbols
          and things))
 But then neither does Python. I happen to think that's a feature.

Python has logical line continuation which gives rise to the need for
hanging indents to line up with parallel constituents in a folded
expression.

Python also allows for the possibility of statements separated by
semicolons on one line, which may need to be lined up in columns.

   var = 42; foo = 53
   x   =  2; y   = 10

 (And of course you can do what you like inside a comment. That's
 because tabs are for indentation, and indentation is meanigless in that
 context.

A comment can contain example code, which contains indentation.

What, I can't change the tab size to display that how I want? Waaah!!!
(;_;)

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multi-line lambda proposal.

2006-05-11 Thread Kaz Kylheku
Duncan Booth wrote:
 One big problem with this is that with the decorator the function has a
 name but with a lambda you have anonymous functions so your tracebacks are
 really going to suck.

Is this an issue with this particular design that is addressed by other
designs?

Are the existing one-line lambdas free from this problem?

Is there really a problem? The Python tracebacks identify every frame
by file and line number, as well as name, if one is available.

Let me make the observation that the name of an inner function is,
alone, insufficient to identify that function in a debugging scenario.
If you
have some inner function called I, defined within function F, you need
to know that it's the I inside F, and not some other I.

Look at what happens with:

>>> def err():
...   def inner():
...     return nonexistent
...   return inner
...
>>> err()
<function inner at 0x8177304>
>>> err()()
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "<stdin>", line 3, in inner
NameError: global name 'nonexistent' is not defined

In the traceback, the programmer is told that the error occured in a
function called inner. However, there could be many functions called
inner, including a global one. The programmer is not told that it's the
function inner which is defined inside the function err, which could be
done with some qualified name syntax like err.inner or whatever. So
this piece of information is about as useful as being told that it was
inside a lambda.

The real key which lets the programmer correlate the error back to the
source code is the file name and line number. When he locates that line
of code, then it's obvious---aha---it is in the middle of an inner
function called inner, which also happens to be inside a global
function called err.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multi-line lambda proposal.

2006-05-11 Thread Kaz Kylheku
Sybren Stuvel wrote:
 [EMAIL PROTECTED] enlightened us with:
  this is how I think it should be done with multi-line lambdas:
 
  def arg_range(inf, sup, f):
return lambda(arg):
   if inf <= arg <= sup:
return f(arg)
  else:
raise ValueError

 This is going to be fun to debug if anything goes wrong. Ever seen
 such a traceback?

I'm looking at various tracebacks now. I don't see any special problems
with lambdas. Inner functions are inadequately identified, so that the
file and line number has to be used, which works just as well as for
lambdas.

 A function decorator is supposed to add something to a function. The
 syntax that sticks the closest to that of defining a function seems
 most Pythonic to me.

Which proposed lambda syntax is closest in this sense?

 I can already foresee that I'll have a tougher time explaining your
 lambda-based decorators, than the current decorators, to people
 learning Python.

Is it unusual to have a tougher time explaining X than Y to people who
are learning a language, where X and Y are different features?

For instance, an assignment statement is easier to explain than a
metaclass. Therefore, assignment statements are probably going to be
covered in an earlier lecture of a Python course than metaclasses.

In what language are features equally easy to explain?

It's possible to use Python while pretending that lambdas don't exist
at all. (That's true of the lambda in its current form, as well as the
proposed forms).

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multi-line lambda proposal.

2006-05-11 Thread Kaz Kylheku
Terry Reedy wrote:
 So name it err_inner.  Or _err.

Right. The C language approach to namespaces.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multi-line lambda proposal.

2006-05-11 Thread Kaz Kylheku
Duncan Booth wrote:
 Kaz Kylheku wrote:

  Duncan Booth wrote:
  One big problem with this is that with the decorator the function has
  a name but with a lambda you have anonymous functions so your
  tracebacks are really going to suck.
 
  Is this an issue with this particular design that is addressed by
  other designs?

 Yes. Decorators don't interfere with the name of the underlying function
 displayed in tracebacks.

No, I mean: do other multi-line lambda designs fix this problem somehow?

It looks to me like the programmer's choice, quite simply.

Both programs shown by [EMAIL PROTECTED] use lambda. The first one uses
a decorator to actually define the wrapped function:

  @arg_range(5, 17)
  def f(arg):
    return arg*2

The arg_range function uses a nesting of two lambdas, yet the decorated
function still has a name that nicely shows up in tracebacks.

So in other words, lambdas and decorators play along nicely.
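For reference, here is a runnable present-day version of that first
program (my sketch, not from the thread: the quoted code relies on the
hypothetical multi-line lambda, so the inner functions are written with
def, and functools.wraps is an addition of mine to preserve the wrapped
function's name):

```python
import functools

def arg_range(inf, sup):
    def check(f):
        @functools.wraps(f)        # keep f's name for tracebacks
        def call(arg):
            if inf <= arg <= sup:
                return f(arg)
            raise ValueError(arg)
        return call
    return check

@arg_range(5, 17)
def f(arg):
    return arg * 2

print(f(6))        # 12
print(f.__name__)  # 'f', thanks to functools.wraps
```

A traceback through f(4) names both call and f, so the decorated
function remains identifiable.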

  f = arg_range(5, 17, lambda(arg)):
    return arg*2

Here, the programmer made a decision to define a global function using
an assignment operator instead of def. The underlying function is the
lambda itself. That is not a problem with the multi-line lambda. A
lambda feature is not even required to create a version of this
problem:

  foo = arg_range(5, 17, bar)

Now the function is called as foo(), but what the programmer sees in
the traceback is bar, which is potentially confusing.  The traceback
will show that bar() is being called from some given file and line
number, but when that is inspected, there is no bar() there, only an
expression which contains the function call foo() (and possibly other
calls).
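A tiny illustration of that confusion (my example, not from the
thread, written for today's Python): the traceback reports a frame
under the function's def-time name, even when the caller only knows it
by another name.

```python
import traceback

def bar(x):
    return 1 // x              # ZeroDivisionError when x == 0

foo = bar                      # the caller binds it to a different name

try:
    foo(0)
except ZeroDivisionError:
    tb = traceback.format_exc()

# The failing frame is reported as "in bar", even though the call
# site says foo(0); the file and line number disambiguate.
assert "in bar" in tb
print("traceback names the frame 'bar'")
```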

I'm only interested in discussing the relative merits of my multi-line
lambda proposal versus others.

I'm not interested in debating people who think that other people who
want multi-line lambdas should not have them.

I also hope that everyone understands that lambdas, multi-line or not,
are not the best tool for every situation for which they are a possible
candidate. I agree with that, and am not interested in debating it
either. It's off topic to the question of designing that lambda,
except insofar as the design of the lambda influences whether or not
lambda is a good choice in a situation. I.e. lambda is a bad choice
for this situation if it is designed like this, but not (or less so) if
it is designed like this.

  Are the existing one-line lambdas free from this problem?

 No, but since a single line lambda does virtually nothing it isn't as
 serious. Decorators are useful enough that in some situation you might
 decorate every method in a class (e.g. for a web application you might
 apply security settings with decorators). In that situation you have just
 messed up every stack frame in every traceback.

The problem there is that the programmer uses anonymous functions for
class methods, rather than decorating named class methods. (Are
anonymous class methods even possible? Lambdas can take an object as
their first argument, but that's not the same thing.)

 I end up reading tracebacks quite a lot, and it is the sequence of the
 function names which matter first, I don't usually need to go and look
 at the file and code lines.
 
  Let me make the observation that  name of an inner function is, alone,
  insufficient to identify that function in a debugging scenario. If you
  have some inner function called I, defined within function F, you need
  to know that it's the I inside F, and not some other I.
 
 If the stack frame shows I called from F then it is usually a pretty good
 guess that it means the I inside F.

A pretty good guess doesn't cut it. The fact is that the identities of
these functions are not unambiguously pinned down by their names alone;
moreover, the line number and file information alone actually does
precisely pinpoint the location of the exception.

If only the names of functions appeared in the traceback, it would be
less useful. If foo calls bar in three different places, you would not
know which of those three places is responsible for the call of bar
from foo. Moreover, in the bottom-most frame, you would not know which
line in the function actually triggered the traceback.

So I do not believe your claim that you rarely need to look at the line
number information when comprehending tracebacks.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multi-line lambda proposal.

2006-05-10 Thread Kaz Kylheku
Antoon Pardon wrote:
 Could you give me an example. Suppose I have the following:

 def arg_range(inf, sup):

   def check(f):

     def call(arg):
       if inf <= arg <= sup:
         return f(arg)
       else:
         raise ValueError

     return call

   return check

def arg_range(inf, sup):
  return lambda(f):
    return lambda(arg):
      if inf <= arg <= sup:
        return f(arg)
      else:
        raise ValueError

Nice; now I can see what this does: returns a function that, for a
given function f, returns a function which passes its argument arg to f
if the argument is in the [inf, sup] range, but otherwise raises a
ValueError. The English description pops out from the nested lambda.

The names in the inner-function version only serve to confuse. The
function check doesn't check anything; it takes a function f and
returns a validating function wrapped around f.

In fact, an excellent name for the outer-most inner function is that of
the outer function itself:

def range_checked_function_maker(inf, sup):

  def range_checked_function_maker(f):

    def checked_call_of_f(arg):
      if inf <= arg <= sup:
        return f(arg)
      else:
        raise ValueError

    return checked_call_of_f

  return range_checked_function_maker

This alone makes a good case for using an anonymous function: when you
have a function which does nothing but return an object, and that
function has some noun as its name, it's clear that the name applies to
the returned value.

This:

  def forty_two():
return 42

is not in any way made clearer by:

  def forty_two():
forty_two = 42
return forty_two

:)

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multi-line lambda proposal.

2006-05-09 Thread Kaz Kylheku
Kaz Kylheku wrote:
 But suppose that the expression and the multi-line lambda body are
 reordered? That is to say, the expression is written normally, and the
 mlambda expressions in it serve as /markers/ indicating that body
 material follows. This results in the most Python-like solution.

Unfortunately, while this idea has intuitive appeal, it leaves some
problems to solve; namely, lambdas that occur in expressions embedded
within statement syntax which has body material of its own. For
instance

  # lambda defined and immediately called here
  if lambda(x)(4) < 0:
     print "a"
  elif y >= 4:
     print "b"
  else:
     print "foo"

Where to stick the lambda body? It's not hard to come up with
straightforward answers, except that they are not particularly
appealing. One rule might be that the lambda bodies have to be placed
immediately after the statement body material, set off by the lambda:
thing. In the case of if/elif/else, they have to be placed behind the
closest suite that follows the expression in the syntax of the
statement:

  if lambda(x)(4) < 0:
     print "a"
  lambda:
     return x + 1
  elif y >= 4:
     print "b"
  else:
     print "foo"

The overall rule is simple and uniform: each suite can have lambda:
clauses. These have to match lambda() occurrences in the expression (or
expression list) immediately to the left in the same grammar
production.

On the other hand, some people might find this compromise more
attractive: simply do not allow multi-line lambdas in compound
statements.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A critic of Guido's blog on Python's lambda

2006-05-08 Thread Kaz Kylheku

Steve R. Hastings wrote:
 On Fri, 05 May 2006 21:16:50 -0400, Ken Tilton wrote:
  The upshot of
  what he wrote is that it would be really hard to make semantically
  meaningful indentation work with lambda.

 Pretty much correct.  The complete thought was that it would be painful
 all out of proportion to the benefit.

 See, you don't need multi-line lambda, because you can do this:


 def make_adder(x):
     def adder_func(y):
         sum = x + y
         return sum
     return adder_func

Now imagine you had to do this with every object.

  def add_five(x):
    # return x + 5   <--  anonymous integer literal, not allowed!!!
    five = 5   # define it first
    return x + five

Think about the ramifications of every object having to have a name in
some environment, so that at the leaves of all expressions, only names
appear, and literals can only be used in definitions of names.

Also, what happens in the caller who invokes make_adder? Something like
this:

   adder = make_adder(42)

Or perhaps even something like this

  make_adder(2)(3)  -->  5

Look, here the function has no name. Why is that allowed? If anonymous
functions are undesirable, shouldn't there be a requirement that the
result of make_adder has to be bound to a name, and then the name must
be used?

 Note that make_adder() doesn't use lambda, and yet it makes a custom
 function with more than one line.  Indented, even.

That function is not exactly custom. What is custom are the environment
bindings that it captures. The code body comes from the program itself.

What about actually creating the source code of a function at run-time
and compiling it?

  (let ((source-code (list 'lambda (list 'x 'y) ...)))
(compile nil source-code))

Here, we are applying the compiler (available at run-time) to syntax
which represents a function. The compiler analyzes the syntax and
compiles the function for us, giving us an object that can be called.

Without that syntax which can represent a function, what do you pass to
the compiler?
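Python does have a rough analogue of this run-time compilation (my
sketch, not part of the original exchange): source text containing a
lambda expression can be compiled and evaluated to yield a callable,
and the lambda syntax is precisely what lets a source string denote a
function-valued expression.

```python
# Build the source of a function at run time and compile it.
source = "lambda x, y: x + y"
code = compile(source, "<generated>", "eval")
fn = eval(code)        # a callable object, analogous to COMPILE's result
print(fn(2, 3))        # 5
```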

If we didn't have lambda in Lisp, we could still take advantage of the
fact that the compiler can also take an interpreted function object and
compile that, rather than source code. So we could put together an
expression which looks like this:

  (flet ((some-name (x y) ...)) #'some-name)

We could EVAL this expression, which would give us a function object,
which can then be passed to COMPILE. So we have to involve the
evaluator in addition to the compiler, and it only works because the
compiler is flexible enough to accept function objects in addition to
source code.

 No; lambda is a bit more convenient.  But this doesn't seem like a very
 big issue worth a flame war.  If GvR says multi-line lambda would make
 the lexer more complicated and he doesn't think it's worth all the effort,
 I don't see any need to argue about it.

I.e. GvR is the supreme authority. If GvR rationalizes something as
being good for himself, that's good enough for me and everyone else.

 I won't say more, since Alex Martelli already pointed out that Google is
 doing big things with Python and it seems to scale well for them.

That's pretty amazing for something that doesn't even have a native
compiler, and that has a big mutex in its interpreter core.

Look at docs.python.org, section 8.1, entitled "Thread State and
the Global Interpreter Lock":

The Python interpreter is not fully thread safe. In order to support
multi-threaded Python programs, there's a global lock that must be held
by the current thread before it can safely access Python objects.
Without the lock, even the simplest operations could cause problems in
a multi-threaded program: for example, when two threads simultaneously
increment the reference count of the same object, the reference count
could end up being incremented only once instead of twice. Therefore,
the rule exists that only the thread that has acquired the global
interpreter lock may operate on Python objects or call Python/C API
functions. In order to support multi-threaded Python programs, the
interpreter regularly releases and reacquires the lock -- by default,
every 100 bytecode instructions (this can be changed with
sys.setcheckinterval()).

That doesn't mean you can't develop scalable solutions to all kinds of
problems using Python. But it does mean that the scalability of the
overall solution comes from architectural details that are not related
to Python itself. Like, say, having lots of machines linked by a fast
network, working on problems that decompose along those lines quite
nicely.
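A minimal sketch of what the quoted passage means in practice (my
example, not from the post): the GIL serializes individual bytecodes,
but a statement like counter += 1 compiles to several bytecodes, so a
thread switch can still land in the middle of it and shared state still
needs an explicit lock.

```python
import threading

counter = 0
lock = threading.Lock()

def bump(n):
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        # Without this lock, increments can be lost when the
        # interpreter switches threads mid-statement.
        with lock:
            counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000
```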

-- 
http://mail.python.org/mailman/listinfo/python-list


Multi-line lambda proposal.

2006-05-08 Thread Kaz Kylheku
I've been reading the recent cross-posted flamewar, and read Guido's
article where he posits that embedding multi-line lambdas in
expressions is an unsolvable puzzle.

So for the last 15 minutes I applied myself to this problem and came up
with this off-the-wall proposal for you people. Perhaps this idea has
been proposed before, I don't know.

The solutions I have seen all assume that the lambda must be completely
inlined within the expression: the expression is interrupted by the
lambda, which is then completely specified (arguments and body) and the
expression somehow continues (and this is where syntactic problems
occur, giving rise to un-Python-like repugnancies).

But suppose that the expression and the multi-line lambda body are
reordered? That is to say, the expression is written normally, and the
mlambda expressions in it serve as /markers/ indicating that body
material follows. This results in the most Python-like solution.

Suppose lambda() occurs without a colon:

  a = lambda(x, y), lambda(s, t), lambda(u, w): u + w
    statement1
    statement2
  lambda:
    statement3
    statement4

The first two lambdas do not have a colon after them. This means that
they have multi-line bodies which follow this statement, and there must
be as many bodies as there are lambdas. The third lambda is a regular
one-expression lambda, entirely written right there.

The bodies are made up of the statements which follow. If there is only
one body, it's simply the indented material. If there are two or more
lambdas in the expression, additional bodies are required, introduced
by lambda: statements, which are at the same indentation level as the
expression which contains the lambda markers.

Of course, the bodies have their respective lambda parameters in scope.
So statement1 and statement2 have access to x and y, and statement3 and
statement4 have access to s and t.

Problem solved, with no Python repugnancies.

The way you can embed indented material in an expression is by not
physically embedding it.

If you want to be completely anally retentive, you can require that the
expression which has lambda bodies after it has to be terminated by a
colon:

  a = lambda(x, y), lambda(s, t), lambda(u, w): u + w:
    statement1
    statement2
  lambda:
    statement3
    statement4

If we take out the last two lambdas, this reduces to:

  a = lambda(x, y):
    statement1
    statement2

Here, the colon terminates the lambda-containing statement. It is not
the colon which introduces the body of a one-expression lambda. E.g.:

  a = lambda(x, y): x + y

  a = lambda(x, y):
    return x + y

The two are disambiguated by what follows.  You get the picture.

More examples: lambda defined in a function call argument

  a = foo(lambda (x, y)):
    return x + y

Confusing? Not if you read it properly. A lambda function is
constructed with arguments x, y and passed to foo, and the result is
assigned to a. Oh, and by the way, the body of the
lambda is: return x + y.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Xah's Edu Corner: What Languages to Hate

2006-04-28 Thread Kaz Kylheku
John Bokma wrote:
 Alex Buell [EMAIL PROTECTED] wrote:

  Send your complaints to:
  abuse at sbcglobal dott net
  abuse at dreamhost dott com

 Yup, done. If he's still with dreamhost he probably is in trouble now. If
 not, next.

Hahaha, right. Your complaints probably go straight to /dev/null. Do
you think any ISP out there cares about someone cross-posting a little
troll on Usenet, cross-posted to a handful of groups? They have bigger
abuse issues to worry about.

Xah Lee is an intelligent guy, just a bit nutty.

He has an interesting and funny website.

He pisses some of you people off because a lot of his ranting is
approximately on the mark. In my home country of Slovakia people say
"Trafená hus zagága!", which means "It is the goose that is hit which
will honk up."

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: (was Re: Xah's Edu Corner: Criticism vs Constructive Criticism)

2006-04-28 Thread Kaz Kylheku

Chris Uppal wrote:
 Tagore Smith wrote:

  It's much easier to use a killfile than to complain to an ISP, and I
  think that that should be the preferred response to messages you don't
  like.

 I'm inclined to agree.  The problem is not Xah Lee (whom I have killfiled), 
 but

What is the point of killfiling Xah Lee? Xah Lee does not enter into
random debates.

He always starts a new thread, which you can clearly identify by its
subject line and who it is from. Xah Lee does not use sock puppets, nor
does he otherwise conceal himself. He almost goes out of his way to be
clearly identifiable.

If you don't want to read Xah Lee, it is extremely easy to do so
without killfile support.

Intelligent people have learned that Xah Lee threads are extremely well
identified and easy to avoid. So that leaves behind only complete
idiots, and Xah Lee fans. :)

 the people who insist on making my killfile useless by posting loads of
 follow-ups saying things amounting to "stop this insane gibberish".  Every
 bloody time.

This means that you are going into that thread anyway! Maybe if you
un-killfiled Xah Lee, you would see the root article of the thread and
then avoid stepping into it. Maybe you are stepping into these threads
because you want to.

If you truly don't like this stuff, maybe you should killfile by
thread: kill the root article by Xah Lee, and, recursively, anything
else which refers to it directly or transitively by parent references.

But then, even that is superfluous if you have a threaded reader, since
the thread is condensed to a single line on the screen which you have
to explicitly open.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Programming challenge: wildcard exclusion in cartesian products

2006-03-17 Thread Kaz Kylheku
[EMAIL PROTECTED] wrote:
 The wildcard exclusion problem is interesting enough to have many
 distinct, elegant solutions in as many languages.

In that case, you should have crossposted to comp.lang.python also.

Your program looks like a dog's breakfast.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: references/addrresses in imperative languages

2005-06-20 Thread Kaz Kylheku
Walter Roberson wrote:
 In article [EMAIL PROTECTED],
 Xah Lee [EMAIL PROTECTED] wrote:
 In hindsight analysis, such language behavior forces the programer to
 fuse mathematical or algorithmic ideas with implementation details. A
 easy way to see this, is to ask yourself: how come in mathematics
 there's no such thing as addresses/pointers/references.

 There is. Each variable in predicate calculas is a reference.
 No matter how large the formulae, a change in a variable
 is, in mathematics, immediately propagated to all occurances
 of the variable (potentially changing the references of other
 variables).

Variables don't change in mathematics, at least the run-of-the-mill
everyday mathematics. :)

 If the predicate calculas variables were not equivilent to references,
 then the use of the variable in a formula would have to be a
 non-propogating copy. and a change to the original value whence not
 be reflected in all parts of the formula and would not change
 what the other variables referenced.

 Consider for example the proof of Goedel's Incompleteness
 theorem, which involves constructing a formula with a free
 variable, and constructing the numeric encoding of that
 formula, and then substituting the numeric encoding in as
 the value of the free variable, thus ending up with
 a number that is talking about iteelf.

All these substitutions ``work'' in a way that is analogous to
functional programming. For example, substituting a variable into a
formula generates a new formula with occurrences of that variable
replaced by the given value. You haven't destroyed the old formula.

 The process of
 the proof is *definitely* one of reference to a value
 in the earlier stages, with the formula being evaluated
 at a later point -- very much like compiling a program
 and then feeding the compiled program as input to itelf.

Actually no. The process specifically avoids the pointer problem by
using an arithmetic coding for the formula, the Goedel numbering. The
formula talks about an encoded version of itself. That's how the
self-reference is smuggled in, via the Goedel numbering.

 You
 cannot do it without a reference, because you need to
 have the entire number available as data at the time
 you start evaluating the mathematical formula.

The final result just /is/ self-referential. It's not constructed bit
by bit like a data structure inside a digital computer that starts out
being non-self-referential and is then backpatched to point to itself.

A mathematical derivation may give you the idea that something is
changing in place, because you always hold the most recent version of
the formula at the forefront of your mind, and can visualize the whole
process as a kind of in-place animation in your head. But really, at
each step you are making something completely new which stands on its
own.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: references/addrresses in imperative languages

2005-06-20 Thread Kaz Kylheku
SM Ryan wrote:
 # easy way to see this, is to ask yourself: how come in mathematics
 # there's no such thing as addresses/pointers/references.

 The whole point of Goedelisation was to add to name/value references into
 number theory.

Is that so? That implies that there is some table where you can
associate names (or whatever type of locators: call them pointers,
whatever) with arbitrary values. But in fact that's not the case.

 Thus Goedel was able to add back pointers contrary to the
 set hierarchy of the theory of types and reintroduce Russel's paradox.

Nope. Goedel didn't add anything, he just revealed what was already
there: that you can have statements of number theory, well-formed
formulas constructed out of existing operators without any backpointer
tricks, which have true interpretations, but are not theorems.

Everything in a Goedel string can be recursively expanded to yield an
ordinary formula! There is no infinite regress, no unchased embedded
pointer-like things left behind.

Self-reference is achieved using two tricks: Goedel numbering, and
indirect reference. Goedel numbering allows a formula to talk about
formulas, by way of embedding their Goedel numbers, and translating
formula-manipulations into arithmetic manipulations (of interest are
finite ones that will nicely unfold into finite formulas). In essence,
that which used to be ``code'' can now be treated as ``data'', and
operations on code (logical derivations) become arithmetic operations
on data.

Indirect reference is needed because a formula G's Goedel number cannot
be inserted into itself directly. If you start with a formula G which
has some free variable V, and then produce some new formula by
substituting G's Goedel number into it directly for occurrences of V,
you no longer have G but that other formula.  You want whatever comes
out to be G, and so the input formula, the one with the free variable,
cannot be G, but perhaps a close relative which either talks about G,
or whose constituent formulas cleverly end up talking about G after the
substitution takes place.
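A toy illustration of the first trick (my sketch; Goedel's actual
numbering assigns codes to logical symbols, not characters, but the
idea is the same): encode the i-th symbol's character code as the
exponent of the i-th prime, so the whole formula becomes one integer,
and decoding is just factoring.

```python
def primes(n):
    """First n primes by trial division (fine for toy formulas)."""
    ps = []
    c = 2
    while len(ps) < n:
        if all(c % p for p in ps):
            ps.append(c)
        c += 1
    return ps

def godel_number(formula):
    """Encode symbol i (its character code) as the exponent of prime i."""
    n = 1
    for p, sym in zip(primes(len(formula)), formula):
        n *= p ** ord(sym)
    return n

def decode(n):
    """Recover the formula by extracting prime exponents in order."""
    out = []
    i = 0
    while n > 1:
        i += 1
        p = primes(i)[-1]
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(chr(e))
    return "".join(out)

g = godel_number("0=0")
print(decode(g))   # 0=0
```

Every piece of the decoded formula is the direct image of some feature
of the number; nothing is looked up in an external table.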

Instead of a direct (Goedel number) reference, you can insert into a
formula some /procedure/ for making that formula's Goedel number out of
another formula's Goedel number and talk about it that way. As an
example, instead of saying ``here is a number'' by inserting its
literal digits, you can say ``the number that results by applying this
formula to this other number''. For instance, instead of writing the
number 4 we can write successor(3), or 2 + 2. We explicitly
mention some other number, and say how 4 is constructed out of it.

Douglas Hofstadter's exposition of all this is very good. To allow the
formula G to mention its own Goedel number, Douglas Hofstadter uses
another formula which he calls U, the ``uncle''. The trick is that: the
procedure for making G's Goedel number out of U's number is the
arithmetic equivalent of the same procedure that's used to substitute
the Goedel number of U for the free variable in U itself. So as U (the
formula) is treated by substituting its own Godel number for its one
and only free variable, it produces G, and, at the same time, the
arithmetic version of that substitution, fully contained inside the U
formula itself, turns the now substituted copy of U into G also.  U
contains the fully expanded ``source code'' for the arithmetic version
of the free-variable substitution procedure, and it contains a free
variable representing the arithmetic version of the formula on which
that algorithm is to be done. As that free variable within U is
replaced by the Goedel number of U, the overall formula becomes G, and the
embedded free-variable-replacement procedure is instantiated concretely
over U's Goedel number, so it becomes a constant, unparameterized
calculation that produces G's Goedel number.

Voila, G contains a procedure that computes the arithmetic object
(Goedel number) that represents G's ``source code'' (the symbolic
formula), out of the embedded number representing U's source code.
Using that giant formula, G can assert things about itself, like ``I am
not a theorem'' (i.e. ``there exists no integer representing the Goedel
numbers of a list of true statements that represent a derivation
deducing myself as the conclusion'').

There are no name references or pointers or anything. All the functions
are primitive recursive, and so can be expanded into finite-length
formulas which contain only numbers and operators and variables---dummy
ones that are bound to existential quantifiers, not to concrete values
in some external name/value table.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: references/addrresses in imperative languages

2005-06-20 Thread Kaz Kylheku
Lawrence D’Oliveiro wrote:
 In article [EMAIL PROTECTED],
  Xah Lee [EMAIL PROTECTED] wrote:

 A[n] easy way to see this, is to ask yourself: how come in mathematics
 there's no such thing as addresses/pointers/references.

 Yes there are such things in mathematics, though not necessarily under
 that name.

 For instance, in graph theory, edges can be considered as pointers.
 After all, make a change to a node, and that change is visible via all
 edges pointing to that node.

Oh yeah, by the way, note how such destructive changes to a variable
become whole-environment derivations in the discipline of proving the
correctness of imperative programs.

E.g. say you have this assignment:

   x <- x + 1

and you want to deduce what preconditions must exist in order for the
desired outcome   x = 42   to be true after the execution of the
statement. What do you do? You pretend that the program is not
modifying a variable in place, but rather manufacturing a new
environment out of an old one. In the new environment, the variable X
has a value that is one greater than the corresponding variable in the
old environment. To distinguish the two variables, you call the one in
the old environment X' .

You can then transform the assignment by substituting X' for X in the
right hand side and it becomes

  x = x' + 1

and from that, the precondition  x' = 41  is readily deduced from the
x = 42  postcondition.

Just to be able to sanely reason about the imperative program and its
destructive variable assignment, you had to nicely separate past and
future, rename the variables, and banish the in-place modification from
the model.
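That renaming can be checked mechanically. A small brute-force sketch
(mine, not from the original post) confirming that across the
assignment x <- x + 1, the postcondition x = 42 holds exactly when the
precondition x' = 41 held:

```python
# For every candidate pre-state, run the assignment and check that the
# postcondition (x == 42) holds iff the precondition (x' == 41) did.
for x_before in range(100):
    x_after = x_before + 1                      # the assignment x <- x + 1
    assert (x_after == 42) == (x_before == 41)
print("wp(x <- x + 1, x = 42) is x' = 41")
```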

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: references/addrresses in imperative languages

2005-06-20 Thread Kaz Kylheku


SM Ryan wrote:
 Kaz Kylheku [EMAIL PROTECTED] wrote:
 # SM Ryan wrote:
 #  # easy way to see this, is to ask yourself: how come in mathematics
 #  # there's no such thing as addresses/pointers/references.
 # 
 #  The whole point of Goedelisation was to add to name/value references into
 #  number theory.
 #
 # Is that so? That implies that there is some table where you can
 # associate names (or whatever type of locators: call them pointers,
 # whatever) with arbitrary values. But in fact that's not the case.

 Do you really believe the Goedel number of a statement is the statement
 itself? Is everything named Kaz the same as you?

The Goedel number is a representation of the statement in a way that
the name Kaz isn't a representation of me. You cannot identify parts of
the name Kaz with parts of me; there is no isomorphism there at all. I
am not the translated image of the name Kaz, nor vice versa.

A Goedel number isn't anything like a name or pointer. It's an encoding
of the actual typographic ``source code'' of the expression. There is
nothing external to refer to other than the encoding scheme, which
isn't particular to any given Goedel number. The encoding scheme is
shallow, like a record player; it doesn't contribute a significant
amount of context. If I decode a Goedel number, I won't have the
impression that the formula was hidden in the numbering scheme, and the
Goedel number simply triggered it out like a pointer. No, it will be
clear that each piece of the resulting formula is the direct image of
some feature of the Goedel number.

-- 
http://mail.python.org/mailman/listinfo/python-list

