On 8/3/2011 7:32 AM, Chris Warburton wrote:
On Tuesday 02 August 2011 00:43:57 BGB wrote:
On 8/1/2011 3:24 PM, Simon Forman wrote:
On 7/27/11, Chris Warburton <[email protected]> wrote:
<snip>
(maybe relevant, but nothing really to comment on).
Another reason I would argue against something like types based on
Physics is that Physics tries to work out the inconceivable ways that
the Universe actually behaves by systematically throwing away all of our
intuitions that turn out to be wrong. With a computer system, we want
the opposite; we want a system that requires as little study as
possible, and for which our intuitions are accurate.
I respectfully disagree.
Jef Raskin pointed out that humans have no innate intuition regarding
computer systems, only familiarity. The word "intuitive" in reference
to computer languages and UIs is incorrect.
We want a computer system that allows us to specify our guesses
(intuitions) about the world concretely and communicate them and test
them, but that does so without unduly getting in our way.
I think that creating computer systems that support naive or unfounded
"intuitions" (whether about how computers work or about the world
outside the computer system) actually does a disservice.
yep, "intuitive" is itself often ill-defined.
usually, it is used in reference to one of several ideas:
it is conventional, so it is "intuitive" in that most of its aspects are
common with other similar systems (like, say, a language looks about
like C++ or Java, so people have a good idea what it will do without
really knowing the language in particular);
it follows conventions from somewhere else, usually related to the
domain in which it is being applied (it is "intuitive" in that it is
similar to what the users are likely to expect in their domain);
it strives for being "intuitive" in the sense of "intuitively
understood", which generally boils down to striving for minimalism and
a high level of orthogonality (this is sometimes used to argue the merit
of languages like Forth and Factor).
sometimes things are "intuitive" but in unusual ways ("out in left
field"), usually more when "intuitive" is used more in the sense of
"being creative", or having a distinctive "flair" or "style".
in some sense, these are relevant goals, but none are "intuitive"
per se, and could often just as easily be defined as "well, it is
similar to these other things in these particular ways, and differs from
them in these other ways".
but, then for whatever reason there is a cultural bias against admitting
that one's "original works" are actually created more like a jigsaw
puzzle, from bits and pieces of ideas from other works.
but, sometimes, it is not about how original the pieces themselves are,
but whether they can be put together in interesting ways to deliver a
product which is both "unique" and "original", but also matches
reasonably well with peoples' expectations as to what the product should
be (all of the parts should fit together into a cohesive whole,
appearing neither as a mishmash nor as something strange and alien, ...).
or such...
I agree with the point that humans cannot be assumed to have a hard-wired
intuition, based on Physics or otherwise. I suppose my intended meaning would
be better expressed as "consistent", ie. intuition gained from using feature A
will be a reasonable approach to using feature B.
yep. breaking implied consistency or orthogonality may be a much bigger
problem.
if something works in one case, but breaks hard in another, where there
is no obvious difference between them, then there is a problem.
A simple example where this is the case is iterable objects, eg. in Python
"for a in b:". Experience of basic objects such as lists and strings carries
over quite straightforwardly to, for example, XML parsing.
yep.
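fwiw, that consistency can be shown concretely in Python: the same "for a in b:" works unchanged whether b is a list, a string, or a parsed XML element (first_items here is just an illustrative helper, not from any library):

```python
import xml.etree.ElementTree as ET

def first_items(b, n=3):
    """Take up to n items from any iterable, using the same loop for each."""
    out = []
    for a in b:            # "for a in b:" -- the one iteration idiom
        out.append(a)
        if len(out) == n:
            break
    return out

print(first_items([10, 20, 30, 40]))   # list  -> its elements
print(first_items("hello"))            # string -> its characters

# the same idiom carries over to XML: iterating an element yields children
root = ET.fromstring("<r><x/><y/><z/></r>")
print([child.tag for child in first_items(root)])
```

experience with the first two cases transfers directly to the third, which is about the best one can hope for from "intuitive".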
An example where this isn't the case is multithreaded programming (eg. in
Python, Java, etc.). The language looks and feels like it does when non-
threaded code is being written, but under the hood all sorts of assumptions
are being violated in an incredibly non-obvious way.
yeah.
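the violated assumption is easy to show: "counter += 1" looks like one step, but is really a read, an add, and a write, so unsynchronized threads can silently lose updates. a minimal Python sketch (the unsafe variant is shown only for contrast; the asserted run uses the lock):

```python
import threading

def unsafe_increment(counter, n):
    # looks like ordinary sequential code, but another thread can slip
    # in between the read and the write, losing updates non-obviously
    for _ in range(n):
        counter[0] += 1

def safe_increment(counter, n, lock):
    for _ in range(n):
        with lock:              # the lock makes read-add-write atomic
            counter[0] += 1

lock = threading.Lock()
counter = [0]
threads = [threading.Thread(target=safe_increment,
                            args=(counter, 100000, lock))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter[0])   # with the lock: always 400000; without: often less
```

nothing in the syntax warns that the unsafe version is broken, which is exactly the consistency problem.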
maybe some good alternative is needed to the traditional "threading and
locks" model so prevalent in modern mainstream languages.
Rob Pike had a few interesting ideas, mostly involving synchronous
"channels", where a thread sending a message to a channel will block
until another thread reads from the channel, and a thread reading will
block until another sends a message.
I had implemented them before, but personally had not found much real
use for them; or, at least not in C, mostly because OS-level threads are
fairly expensive (typically requiring 4 or 8 MB of address space for each
thread stack, ...), and using them this way would essentially "waste"
the threads (preventing them from doing what they do best, and offering
little real advantage over single-threaded code for this).
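for illustration, such a synchronous channel can be faked in Python on top of ordinary semaphores (a minimal sketch, not how Pike's implementations work internally; Channel is just an illustrative name):

```python
import threading

class Channel:
    """Unbuffered channel: send() blocks until a receiver takes the
    value, and receive() blocks until a sender provides one."""
    def __init__(self):
        self._lock = threading.Lock()          # one sender at a time
        self._item_ready = threading.Semaphore(0)
        self._item_taken = threading.Semaphore(0)
        self._item = None

    def send(self, value):
        with self._lock:
            self._item = value
            self._item_ready.release()   # wake a receiver
            self._item_taken.acquire()   # block until it took the value

    def receive(self):
        self._item_ready.acquire()       # block until a sender arrives
        value = self._item
        self._item_taken.release()       # let the sender continue
        return value

# demo: one thread sends, the main thread receives, in lock-step
ch = Channel()
sender = threading.Thread(target=lambda: [ch.send(i) for i in range(3)])
sender.start()
received = [ch.receive() for _ in range(3)]
sender.join()
print(received)   # [0, 1, 2]
```

note that each send/receive pair is a full rendezvous, which is why doing this with heavyweight OS threads mostly wastes them.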
"better" for the above would be if C/... had continuations, but
continuations are, sadly, one of those features which can't really be
effectively implemented without the cooperation of the compiler/OS/...
(one can fake them using stack-backup/restoration, but this would likely
make them fairly expensive as well, albeit less so if only live portions
of the stack are preserved, however a more subtle issue would be that
continuations created this way would be thread-specific).
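in languages that do have some cooperation from the runtime, the cheap approximation is generators/coroutines: each yield point acts roughly like a one-shot continuation, which is enough for "soft threads". a toy round-robin scheduler in Python (scheduler and worker are illustrative names, not a real VM):

```python
from collections import deque

def scheduler(tasks):
    """Round-robin over generator-based 'soft threads'; each yield is a
    suspension point, i.e. roughly a one-shot continuation."""
    ready = deque(tasks)
    trace = []
    while ready:
        task = ready.popleft()
        try:
            trace.append(next(task))   # resume the task where it left off
            ready.append(task)         # re-queue it after it yields
        except StopIteration:
            pass                       # task finished; drop it
    return trace

def worker(name, steps):
    for i in range(steps):
        yield (name, i)                # suspend; the scheduler resumes us

print(scheduler([worker("a", 2), worker("b", 2)]))
# interleaved: [('a', 0), ('b', 0), ('a', 1), ('b', 1)]
```

this avoids the per-thread stack cost entirely, at the price of explicit suspension points.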
as a result, I personally lean more towards message-queues, where one
can, asynchronously, add messages to the queue, and try to take messages
from the queue, with operations on the queue themselves being
thread-safe. this is simple and reasonably effective.
although the above is potentially less "elegant", it allows for much
more "economical" use of threads (my usual use of threads is to have
them run in a loop dealing with some particular task, dealing mostly
with external code by sending/receiving messages or similar).
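the message-queue pattern described above looks roughly like this in Python (queue.Queue is already thread-safe; the None sentinel for shutdown is just one common convention):

```python
import queue
import threading

def worker_loop(inbox, outbox):
    """A long-lived thread looping on its own task: take a message,
    handle it, post any reply asynchronously."""
    while True:
        msg = inbox.get()       # blocks until a message arrives
        if msg is None:         # sentinel: shut down cleanly
            break
        outbox.put(msg * 2)     # "handle" the message

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker_loop, args=(inbox, outbox))
t.start()
for i in range(3):
    inbox.put(i)                # asynchronous: put() does not block
inbox.put(None)
t.join()
results = [outbox.get() for _ in range(3)]
print(results)   # [0, 2, 4]
```

unlike the rendezvous channel, neither side has to wait for the other except when a queue is empty, which is what makes the threads "economical".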
otherwise:
my VMs generally also allow for lighter-weight "soft" threads, where
these threads are generally "scheduled" on top of OS-level threads
(typically, OS threads dedicated to running VM threads, with more OS
threads spawning as needed), but this gets far more complex when dealing
with mixed-language frames (if a call through native C code is made,
then the thread can no longer be preempted, ...). (FWIW, there is a
similar ugly mess for mixed-language exception handling as well, and I
have not yet gotten to the stage of ping-ponging exceptions between
interpreted code and Win32 SEH and similar).
usually, the main difference between them and native code is that my VMs
very often tend to use heap-allocated call frames. the advantage is that
this allows lighter weight continuations and threads, and the
possibility of "nifty" operations, such as "forking" threads, ... but at
the cost that raw execution speed is lower. this can be partly helped
though by combining this with a "frame stack" where prior (non-captured)
frames are retained and reused when possible.
another cost though is that this can't be done cleanly/easily with the
native ABIs, again causing problems with mixed-language code, although
there are, again, partial solutions to this.
typically, within the VM, "function objects" are used in place of raw
function pointers, which are handled directly if the VM knows how to do
so (deferring to handlers or the native ABI if this fails). another
remote possibility is "multi-entry-point" functions, which would
basically be function objects which can be called as native function
pointers by external code, but recognized and given special treatment by
the VM.
sadly though, none of this can really fix all of the problems of
mixed-language call frames.
in an ideal world though, one could create however many threads are
needed, and there would be more nice/elegant means of creating and
managing multithreaded code.
examples could include asynchronous calls and code blocks, ...
in my own language, there is the "async" modifier which can
(theoretically) be used for a lot of this:
"async function foo(x, y) { ... }"
where calls to foo implicitly create their own thread.
"async bar(x, 3);"
would create a new thread for the call (IIRC, the initial version of the
language had also allowed the "bar!(x, 3);" syntax as well).
"async { ... }"
would execute the code block in a new thread.
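the "async" call form can be approximated in present-day Python with plain threads (async_call is a hypothetical helper, not the BGBScript implementation; the lock is needed because the threads share the results list):

```python
import threading

def async_call(fn, *args):
    """Rough analogue of 'async bar(x, 3);': run the call on its own
    thread; return the Thread so the caller can join if it cares."""
    t = threading.Thread(target=fn, args=args)
    t.start()
    return t

results = []
lock = threading.Lock()

def bar(x, y):
    with lock:                  # shared list, so guard the append
        results.append(x + y)

threads = [async_call(bar, i, 3) for i in range(4)]
for t in threads:
    t.join()
print(sorted(results))   # [3, 4, 5, 6] -- completion order varies
```

the language-level modifier would mostly just hide this boilerplate.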
granted, a "thread" modifier could make sense here instead?... (maybe
aliased to "async" or maybe replacing "async").
sadly, the "async" modifier was used in the first incarnation of
BGBScript (2004-2006), but was never fully reimplemented when the
language was later re-implemented (it has been on a long-term to-do
list, but there were many more pressing features to get implemented, and
alternative, albeit less convenient, mechanisms exist...).
the issue, granted, is not so much whether one can type, say:
for(i=0; i<100; i++)
fun(i) { async printf("%d\n", i); } (i);
but whether or not it will "own" their computer in the process...
(note: ugly closure hack needed to give each thread a proper unique
value for 'i', again probably another weak point of the existing model).
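the same closure hack exists in Python, where loop variables are also captured late; the extra immediately-applied function is the direct analogue of the fun(i){...}(i) pattern above:

```python
# naive version: every closure sees the *final* value of i (late binding)
naive = [lambda: i for i in range(3)]

# the "closure hack": an extra function call captures each i by value,
# analogous to fun(i){ ... }(i) in the loop above
fixed = [(lambda i: (lambda: i))(i) for i in range(3)]

print([f() for f in naive])   # [2, 2, 2] -- all share the last i
print([f() for f in fixed])   # [0, 1, 2] -- each got its own i
```

so this particular weak point is not unique to the model; it is the usual cost of by-reference closure capture in a loop.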
note, the effective opposite of "async" (or "thread", if added) would be
"synchronized", as in:
"synchronized { ... }" which would use a mutex (or some other means) to
synchronize execution within a given block.
it can also be (theoretically) applied to methods and classes (sadly
also not presently implemented).
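applied to a method, "synchronized" would behave about like this decorator sketch in Python (synchronized and append_pair are illustrative names; a real implementation would likely use one lock per object, not a module-level one):

```python
import threading

_sync = threading.Lock()   # assumption: one global lock, for simplicity

def synchronized(fn):
    """Approximates a 'synchronized' method: a mutex ensures only one
    thread runs the body at a time."""
    def wrapper(*args, **kwargs):
        with _sync:
            return fn(*args, **kwargs)
    return wrapper

log = []

@synchronized
def append_pair(x):
    # without the lock, another thread could interleave between these
    log.append(x)
    log.append(x)

threads = [threading.Thread(target=append_pair, args=(i,))
           for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
# pairs are never split: log is [a, a, b, b, c, c] in some thread order
print(all(log[i] == log[i + 1] for i in range(0, 6, 2)))   # True
```

whether sugar like this actually makes threading *easier*, rather than just shorter, is the open question.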
now whether any of this could make threading easier to use... I really
have little idea...
maybe there is some fundamentally better way to approach multi-threaded
code?...
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc