Re: [fonc] Stephen Wolfram on the Wolfram Language

2014-09-25 Thread BGB

On 9/24/2014 6:39 PM, David Leibs wrote:
I think Stephen is misrepresenting the Wolfram Language when he says 
it is a big language. He is really talking about the built in library 
which is indeed huge.  The language proper is actually simple, 
powerful, and lispy.

-David



I think it is partly a matter of size along two axes:
features built into the language core and the language's core syntax;
features that can be built on top of the language via library facilities
and extensibility mechanisms.


a lot of mainstream languages have tended to be bigger in terms of
built-in features and basic syntax (e.g., C++ and C#);
a lot of other languages have had more in the way of extensibility
features, with less distinction between library code and the core language.


of course, if a language generally has neither, it tends to be regarded 
as a "toy language".


more so if the implementation lacks sufficient scalability to allow
implementing a reasonably sized set of library facilities (say, for
example, if it always loads from source and there is a relatively high
overhead for loaded code).



sometimes, it isn't as clear-cut as "apparent
complexity" == "implementation complexity".


for example, a more complex-looking language could reduce down to a
somewhat simpler underlying architecture (say, the language is itself
largely syntax sugar);
OTOH, a simple-looking language could actually have a somewhat more
complicated implementation (say, because a lot of complex analysis and
internal machinery is needed to make it work acceptably).


in many cases, the way things are represented in the high-level language
vs nearer the underlying implementation may be somewhat different, so
the representational complexity may be reduced at one point and
expanded at another.



another related factor I have seen is whether the library API design
focuses more on core abstractions and building things from these, or
more on a large number of specific use-cases. for example, Java has
classes for nearly every way they could think up that a person might
want to read/write a file, as opposed to, say, a more generic
multipurpose IO interface (roughly as sketched below).
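
as a rough illustration of the "generic interface" style (a hypothetical
sketch in C, not any particular library's API): one small stream
interface behind function pointers, with files, memory buffers, sockets,
etc. each supplying their own backend behind it.

#include <stdio.h>
#include <stdlib.h>

/* one small generic stream interface, instead of a class per use-case */
typedef struct Stream Stream;
struct Stream {
    size_t (*read)(Stream *s, void *buf, size_t n);
    size_t (*write)(Stream *s, const void *buf, size_t n);
    void   (*close)(Stream *s);
    void   *impl;                    /* backend-specific state */
};

/* one backend: wrap a stdio FILE* behind the generic interface */
static size_t file_read(Stream *s, void *buf, size_t n)
    { return fread(buf, 1, n, (FILE *)s->impl); }
static size_t file_write(Stream *s, const void *buf, size_t n)
    { return fwrite(buf, 1, n, (FILE *)s->impl); }
static void file_close(Stream *s)
    { fclose((FILE *)s->impl); free(s); }

Stream *stream_open_file(const char *path, const char *mode)
{
    FILE *fd = fopen(path, mode);
    if (!fd) return NULL;
    Stream *s = malloc(sizeof(*s));
    s->read = file_read; s->write = file_write; s->close = file_close;
    s->impl = fd;
    return s;
}

callers only ever see the Stream type, so adding a new backend adds no
new API surface, whereas the "one class per scenario" style grows the
API with each new case.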



generally, complexity has tended to be less of an issue than utility and 
performance though.
for most things, it is preferable to have a more useful language if 
albeit at the cost of a more complex compiler, at least up until a point 
where the added complexity outweighs any marginal gains in utility or 
performance.


where is this point exactly? it is subject to debate.


On Sep 24, 2014, at 3:32 PM, Reuben Thomas wrote:


On 24 September 2014 23:20, Tim Olson wrote:


Interesting talk by Stephen Wolfram at the Strange Loop conference:

https://www.youtube.com/watch?v=EjCWdsrVcBM

He goes in the direction of creating a "big" language, rather
than a small kernel that can be built upon, like Smalltalk, Maru,
etc.


Smalltalk and Maru are rather different: Ian Piumarta would argue, I 
suspect, that the distinction between "small" and "large" languages 
is an artificial one imposed by most languages' inability to change 
their syntax. Smalltalk can't, but Maru can. Here we see Ian making 
Maru understand Smalltalk, ASCII state diagrams, and other things:


https://www.youtube.com/watch?v=EGeN2IC7N0Q

That's the sort of small kernel you could build Wolfram on.

Racket is a production-quality example of the same thing: 
http://racket-lang.org 


--
http://rrt.sc3d.org 


Re: [fonc] Modern General Purpose Programming Language

2013-11-08 Thread BGB

On 11/6/2013 3:55 AM, Chris Warburton wrote:

BGB writes:


it is sad, in premise, that hard-coded Visual Studio projects and raw
Makefiles are often easier to get working when things don't go "just
right". well, that, and recently managing to apparently get on the bad
side of some developers of a FOSS GPL project by building part of it
using MSVC (for plugging some functionality into the app); in this case
it was the path of least effort (the other code I was using with it was
already being built with MSVC, and I couldn't get the main project to
rebuild from source via the "approved" routes anyway, ...).

weirder yet, some of the better development experiences I have had
have been in developing extensions for closed-source commercial
projects (without any ability to see their source-code, or for that
matter, even worthwhile API documentation), which "should not be".

This is probably an artifact of using Windows (I assume you're using
Windows, since you mention Windows-related programs). Unfortunately
building GNU-style stuff on Windows is usually an edge case; many *NIX
systems use source-level packages (ports, emerge, etc.) which forces
developers to make their stuff build on *NIX. If a GNU-style project
even works on Windows at all, most people will be grabbing a pre-built
binary (programs as executables, libraries as DLLs bundled with whatever
needs them).

It's probably a cultural thing too; you find GNU-style projects awkward
on your Microsoft OS, I find Microsoft-style projects awkward on my GNU
OS.


primarily Windows, but often getting Linux apps rebuilt on Linux doesn't 
go entirely smoothly either, like trying to get VLC Media Player rebuilt 
from source on Ubuntu, and having some difficulties mostly due to things 
like library version issues.



granted, generally it does work a little better, in that at least "most
of the time" the provided configure scripts won't blow up in one's face,
in contrast to Windows and Cygwin or similar, where "most of the time"
things will fail to build (short of some fair amount of manual
intervention and hackery...).


and, a few times, I have found it easier to just port things over to
building with MSVC (and "fix up" any cases where they try to use
GCC-specific functionality).




granted, it would also require a shift in focus:
rather than an application being simply a user of APIs and resources,
it would instead need to be a "provider" of an interface for other
components to reuse parts of its functionality and APIs, ideally with
some decoupling such that neither the component nor the application
need to be directly aware of each other.

Sounds a lot like Web applications. I've noticed many projects using
locally-hosted Web pages as their UI; especially with languages like
Haskell where processing HTTP streams and automatically generating
Javascript fits the language's style more closely than wrapping an OOP
UI toolkit like GTK.


can't say; I haven't done much with web apps.

I was thinking more of some of the (limited) experience I have had with
things like driver development.


like, there is some bit of hair in the mix (handling event messages
and/or dealing with COM+), but in some ways, a sort of more subtle
"elegance" in being able to see one's code "just work" in a lot of
various apps without having to mess with anything in those apps to make
it work.



but, in general, I try to write code that minimizes dependencies on
"uncertain" code or functionality.


a few times I have guessed wrong, such as allowing my codec code to
directly depend on my project's VFS and MM/GC system, and later ended
up hacking on it a little to make the thing operate as a
self-contained codec driver.


this doesn't necessarily mean shunning use of any functionality outside
of one's control (in a "Not Invented Here" sense), but rather it
involves some level of "routing" such that functionality can enable or
disable itself depending on whether or not the required functionality is
available (for example, a Linux build of a program can't use
Windows-specific APIs, ..., but it sort of misses out if it can't use
them when built for Windows simply because they are not also available
for a Linux build); a rough sketch of this follows.
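
something like the following (a hypothetical sketch; the DLL and feature
names here are made up for illustration, not from any actual project)
shows the sort of compile-time plus runtime gating meant here:

#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
/* probe at runtime: the DLL (and thus the API) may or may not be present */
static int try_enable_win_feature(void)
{
    HMODULE h = LoadLibraryA("dwmapi.dll");   /* arbitrary example DLL */
    if (!h) return 0;
    /* ... look up entry points with GetProcAddress() and register them ... */
    return 1;
}
#else
/* not built for Windows: the feature simply stays disabled */
static int try_enable_win_feature(void) { return 0; }
#endif

int main(void)
{
    if (try_enable_win_feature())
        printf("Windows-specific path enabled\n");
    else
        printf("falling back to the generic path\n");
    return 0;
}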




sort of like codecs on Windows: you don't have to go write a plugin
for every app that uses media (or, worse, hack on their code), nor
does every media player or video-editing program have to be aware of
every possible codec or media container format, they seemingly, "just
work", you install the appropriate drivers and it is done.

the GNU-land answer more tends to be "we have FFmpeg / libavcodec and
VLC Media Player", then lots of stuff is built by building lots of
things on top of 

Re: [fonc] Modern General Purpose Programming Language (Was: Task management in a world without apps.)

2013-11-05 Thread BGB

On 11/5/2013 7:15 AM, Miles Fidelman wrote:
Casey Ransberger <casey.obrien.r at gmail.com> writes:



A fun, but maybe idealistic idea: an "application" of a computer 
should just be what one decides to do with it at the time.


I've been wondering how I might best switch between "tasks" (or 
really things that aren't tasks too, like toys and documentaries and 
symphonies) in a world that does away with most of the application 
level modality that we got with the first Mac.


The dominant way of doing this with apps usually looks like either 
the OS X dock or the Windows 95 taskbar. But if I wanted less shrink 
wrap and more interoperability between the virtual things I'm 
interacting with on a computer, without forcing me to "multitask" 
(read: do more than one thing at once very badly), what does my best 
possible interaction language look like?


I would love to know if these tools came from some interesting 
research once upon a time. I'd be grateful for any references that 
can be shared. I'm also interested in hearing any wild ideas that 
folks might have, or great ideas that fell by the wayside way back when.




For a short time, there was OpenDoc - which really would have turned 
the application paradigm on its head.  Everything you interacted with 
was an object, with methods incorporated into its "container."  E.g., 
if you were working on a "document," there was no notion of a word 
processor, just the document with embedded methods for interacting 
with it.




a while ago, I had started writing (but didn't finish, or at least not
to a level I would want to send) something about the relationship between
object-based and dataflow-based approaches to modular systems (where in
both cases, the "application" could be largely dissolved in favor of
interacting components and "generic" UIs).


but, the line gets kind of fuzzy, as what people often call "OOP" 
actually covers several distinct sets of methodologies, and people so
often focus on lower-level aspects (class vs not-a-class,
inheritance trees, ...) that there is a tendency to overlook
higher-level aspects, like whether the system is composed of objects 
interacting via passing messages using certain interfaces, or whether it 
is working with a data-stream where the objects don't really interact at 
all and rather produce and consume data in a set of shared representations.



then, there is the "bigger" issue from an architectural POV, namely: can
App A access anything from within App B, short of both developers
having access to, and the ability to hack on, each other's source code
(and, often, to get the thing rebuilt from source)?


so, we have some problems:
lack of shared functionality (often short of what has explicitly been 
made into shared libraries or similar);
frequent inability to add new functionality to existing apps (or "UIs"), 
short of having access to, and the ability to modify, their source code
for one's own uses;

lots of software that is a PITA to get to rebuild from source (*1);
...

*1: especially in GNU land, where they pride themselves on freely
available source, but the ever-present GNU Autoconf system has a problem:

it very often has a tendency not to work;
it is often annoyingly painful to get it to work when it has decided it 
doesn't want to;
very often developers set some rather arbitrary restrictions on a
project's build-probing, like "must have exactly this version of this
build", even when it will often still build and work with later (and 
earlier) versions of the library;

...

it is sad, in premise, that hard-coded Visual Studio projects and raw
Makefiles are often easier to get working when things don't go "just
right". well, that, and recently managing to apparently get on the bad
side of some developers of a FOSS GPL project by building part of it
using MSVC (for plugging some functionality into the app); in this case
it was the path of least effort (the other code I was using with it was
already being built with MSVC, and I couldn't get the main project to
rebuild from source via the "approved" routes anyway, ...).


weirder yet, some of the better development experiences I have had have 
been in developing extensions for closed-source commercial projects 
(without any ability to see their source-code, or for that matter, even 
worthwhile API documentation), which "should not be".



not that I don't think these problems are solvable, but maybe the 
"spaghetti string mess" that is GNU-land at present isn't really an 
ideal solution. like, there might be a need to address "general 
architectural issues" (provide solid core APIs, ...), rather than just 
daisy-chaining everything in a somewhat ad-hoc manner.



but, as an assertion:
with increasing modularity and ability to share functi

Re: [fonc] Macros, JSON

2013-07-21 Thread BGB

On 7/21/2013 12:28 PM, John Carlson wrote:


Hmm.  I've been thinking about creating a macro language written in 
JSON that operates on JSON structures.  Has someone done similar 
work?  Should I just create a JavaScript AST in JSON? Or should I 
create an AST specifically for JSON manipulation?




my scripting language mostly uses S-Expressions for its AST format.

my C frontend mostly used an XML variant.
I had a few times considered a hybrid, essentially like XML with a more 
compact syntax (*1).


in the future, most likely I would just use S-Expressions.
while S-Exps are slightly more effort in some cases to extend, they are 
generally faster than manipulating XML, and are easier to work with.



JSON could work, but its syntax is slightly more than what is needed for 
something like this, and its data representation isn't necessarily ideal.


EDN looks ok.


*1:
node := '<' tag (key'='value)* node* '>'

where value was a literal value with one of several types, IIRC:
integer type;
floating-point type;
string.

note that there were no free-floating raw values.
a free-floating value would instead be wrapped in a node.

had also considered using square brackets:
[array [int val=1234] [real val=3.14] [string val="string"] [symbol 
val="symbol"]]


the allowed forms would otherwise have been fairly constrained.
the constrained structure would be mostly for sake of performance and 
similar.
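
as a very rough sketch (in C, with illustrative names, not taken from
the actual frontend) of what such a constrained node representation
might look like:

/* nodes have a tag, typed key=value attributes, and child nodes;
 * leaf values are always wrapped in a node rather than free-floating */
typedef enum { VAL_INT, VAL_REAL, VAL_STRING } ValueKind;

typedef struct Value {
    ValueKind kind;
    union { long long i; double f; char *s; } u;
} Value;

typedef struct Attr {              /* one key=value attribute */
    char *key;
    Value value;
    struct Attr *next;
} Attr;

typedef struct Node {              /* <tag key=value ... child ...> */
    char *tag;
    Attr *attrs;
    struct Node *children;         /* first child */
    struct Node *next;             /* next sibling */
} Node;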


note:
the XML variant used by my C frontend also ended up (quietly) adding 
support for raw numeric values, mostly because of the added overhead of 
converting between strings and numbers.




Thanks,

John





Re: [fonc] Natural Language Wins

2013-04-06 Thread BGB

On 4/6/2013 12:13 PM, Reuben Thomas wrote:
On 6 April 2013 18:09, Eugen Leitl wrote:


On Sat, Apr 06, 2013 at 12:08:35PM -0500, John Carlson wrote:
> The Lord will return like a thief in the night:
> http://bible.cc/1_thessalonians/5-2.htm
> Is this predictable?  Is there more than one return?  Jews
believe in one
> Messiah.  Christians believe in 2 Messiahs (Jesus and his
return).  Anyone
> for 3 or 4 or more?

Can the list moderator please terminate this thread?
Anyone? Anyone? Bueller?


Indeed, this is a list about the Foundations of New Computing; please 
stay on-topic.




yeah...

it is possibly a notable property that most topics on the internet tend 
to diverge into a debate about religion and/or politics (regardless of 
the original topic in question).



but, elsewhere, one can then find people into getting into inflamed 
debates about other things as well, including in computing:

UTF-16 vs UTF-32;
RGBA vs DXT;
little-endian vs big-endian in file-formats;
x86 vs ARM;
choice of programming language;
...

so, in a way, people arguing about stuff like this may be inevitable.


one assertion that can be made here is that people seem to be 
overzealous in their choice and application of universals, often without 
a lot of evidence to support their choices.


so, one thing ends up being true to one person and false to another, if 
for no other reason than differences in terms of basic assumptions, and 
a tendency to regard these assumptions as absolute (rather than, say, as 
probabilities).


well, along with an excess of people making value judgements, say, 
rather than things being more in terms of cost/benefit or similar, ...


but, sometimes it seems that regardless of ones' choice of basic 
assumptions, someone somewhere will still take issue with it.


expecting everyone to agree on much of anything is probably unrealistic...



Re: [fonc] Natural Language Wins

2013-04-06 Thread BGB

On 4/6/2013 10:59 AM, John Carlson wrote:


When I was studying Revelation in the 1980s, we thought this same 
scripture referred to the European Union.  We also thought that Jesus 
had to return by 1988, because that was one generation past when the 
Jews returned to Israel in 1948. It seems that god has a way of 
overturning predictions.




read something recently that asserted that the prediction still held, 
only that the generation was 80 years rather than 40, thus putting the 
end-event somewhere around 2028 (with the rebuilding of the temple and 
tribulation and so on happening before this).


it still remains to be seen how things will turn out.



Some answered questions: http://reference.bahai.org/en/t/ab/SAQ/

On Apr 6, 2013 5:32 AM, "Kirk Fraser" wrote:

>
> Most likely your personal skills at natural language are 
insufficient to understand Revelation in the Bible, like mine were 
until I spent lots of time learning.  Now I can predict the current 
Pope Francis will eventually help create the 7 nation Islamic 
Caliphate with 3 extra-national military powers like Hamas in Rev. 13, 
17:3.  You must understand natural language well if you want to 
program it well.  Many grad students hack out an NLP project that 
works at an uninspiring level.  To go beyond the state of the art, you 
must learn to understand beyond state of the art.

>
> Claims that the world has progressed beyond some past century are 
true for technology but not in human behavior.  People still have wars 
large and small.  Some of the worst human behavior can be seen in 
courts during wars between family members, and some of that behavior 
comes from lawyers.  Human behavior can only be improved by everyone 
pursuing the absolute perfection of God and his human form Jesus 
Christ, the Creator.  We must go beyond the state of the art churches, 
to learn from the true church which Jesus practiced with His students, 
before He left and they quit doing much of what He did and taught.

>
> Because his first graduates ignored His teaching of equal status 
under Him  instead pursuing positions over others and their money, 
today we have inherited that culture instead of Jesus' life where it 
is possible to be fed directly by God's miracles without need of 
money.  So I propose a return from today's "advanced culture" to 
Jesus' absolute perfection. www.freetom.info.truechurch

>
> In view of the human spiritual awakening possible that way, 
computers are only a temporary support until we get there.  Watch 
videos archived at www.sidroth.org  some of 
which are lame but others are impressive showing what is happening now 
giving the idea the perfect culture of Jesus' church is possible.

>
> Love Absolute Truth,
> Kirk W. Fraser
>
>
> On Fri, Apr 5, 2013 at 10:23 PM, Steve Taylor wrote:

>>
>> Charlie Derr wrote:
>>
>>> Nevertheless I'm finding some of this conversation truly 
fascinating (though I'm having a little trouble figuring out

>>> what is "truth" and what isn't).
>>
>>
>> I'm just waiting for Kirk to mention Atlantis or the Rosicrucians. 
It feels like it could be any moment...

>>
>>
>>
>> Steve
>>
>
>
>
>
> --
> Kirk W. Fraser
> http://freetom.info/TrueChurch - Replace the fraud churches with the 
true church.
> http://congressionalbiblestudy.org - Fix America by first fixing its 
Christian foundation.

> http://freetom.info - Example of False Justice common in America
>
>





Re: [fonc] DSL Engineering: Designing, Implementing and Using Domain-Specific Languages

2013-01-25 Thread BGB

On 1/25/2013 10:11 AM, Kurt Stephens wrote:

Don't know if this has been posted/discussed before:

http://dslbook.org/

The 560-page book is donationware.  Lots to read here.  :)


nifty, may have to go read it...


it does make me wonder:
how viable is donationware as an option for software, vs, say, the 
shareware model?


(I wonder this partly as my 3D engine is sort-of donationware at 
present, but I had considered potentially migrating to a shareware model 
eventually...).


note: this doesn't apply to my scripting stuff, which is intended to be
open source (currently uses the MIT license) but is presently bundled
with the 3D engine; it could be made available separately (some people
have expressed concern over downloading proprietary code to get the free
code, though admittedly I am not entirely sure what the issue is there).



also, I am left wondering whether anyone has a good idea for determining
when things like bounds checks and null checks can safely be omitted
(say, with array and pointer operations).


I guess it requires statically determining both that the array exists
and that the index will always be within the array bounds, but I am not
entirely sure what this check would itself look like (or whether there
is a good heuristic approximation, ...).


type-checking is a little easier, at least as far as one can determine 
when/where things have known types and propagate them forwards, which 
can be used in combination with explicitly declared types, ... however, 
the size of the array or value-range of an integer is not part of the 
type and may not always be immediately obvious to a compiler.


then again, looking online, this seems to be a non-trivial issue (and
naive bounds checking does add several compares and conditional jumps to
each array access).
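
as a rough example of the sort of case such an analysis would want to
recognize (a sketch, assuming a simple length-carrying array type; the
hard part is doing this in general):

typedef struct { int length; int *data; } IntArray;

/* if the guard below establishes 0 <= i < n <= arr->length for every
 * iteration, the per-access bounds check inside the loop is redundant
 * and can be hoisted or dropped entirely */
int sum_first_n(IntArray *arr, int n)
{
    int sum = 0;
    if (!arr || n < 0 || n > arr->length)   /* single up-front range check */
        return -1;                          /* error signal, illustrative */
    for (int i = 0; i < n; i++) {
        /* a naive VM would re-check (i >= 0 && i < arr->length) here */
        sum += arr->data[i];
    }
    return sum;
}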



well, at least the security checks luckily have a good solution:
if the code belongs to the VM root, most security checks can be omitted.
the only one that can't really be safely omitted relates to method calls 
(we don't want root calling blindly into insecure code).


often this check can be done as a static check (by checking the source 
and destination rights when generating a "call-info" structure or 
similar, which will be flushed whenever new code is loaded or the VM 
otherwise suspects that a significant scope-change has occurred).


actually, static-call caching is itself a heuristic:
it assumes that if the site being called is a known function or static 
method declaration, then the call is static;
if the site being called is a generic variable or similar, a dynamic 
call is used (which will use lookups, type-checks, ... to go about 
making the call).


most of this is largely transparent to high-level code, and mostly is 
figured out in the bytecode -> threaded-code stage, and most of this 
information may be reused by the JIT.
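
a very rough sketch of what such a cached call-site decision might look
like (illustrative names and layout, not the actual VM's structures):

typedef struct VM VM;
typedef struct Value Value;

typedef struct CallInfo {
    void *target;          /* resolved function, if statically known */
    int   is_static;       /* 1: direct call; 0: generic dynamic call */
    int   caller_rights;   /* security domain of the call site */
    int   callee_rights;   /* security domain of the target */
} CallInfo;

/* generic dynamic-call path: lookups, type checks, per-call security
 * check (stub here just to keep the sketch self-contained) */
static Value *dynamic_call(VM *vm, CallInfo *ci, Value **args, int nargs)
{
    (void)vm; (void)ci; (void)args; (void)nargs;
    return 0;
}

static Value *call_site(VM *vm, CallInfo *ci, Value **args, int nargs)
{
    /* static path: the rights check was done when ci was built, and ci
     * gets flushed whenever new code is loaded */
    if (ci->is_static && ci->caller_rights >= ci->callee_rights) {
        typedef Value *(*Fn)(VM *, Value **, int);
        return ((Fn)ci->target)(vm, args, nargs);
    }
    return dynamic_call(vm, ci, args, nargs);
}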



or such...



Re: [fonc] deriving a POL from existing code

2013-01-23 Thread BGB

On 1/9/2013 11:53 AM, David Barbour wrote:


On Wed, Jan 9, 2013 at 9:37 AM, John Carlson wrote:


I've been collecting references to game POLs on:
http://en.wikipedia.org/wiki/Domain-specific_entertainment_language


That's neat. I'll definitely peruse.



interesting...


my own language may loosely fit in there, being mostly developed in 
conjunction with a 3D engine, and not particularly intended for 
general-purpose programming tasks...


like, beyond just ripping off JS and AS3 and similar, it has some amount
of specialized constructs, mostly for 3D-game-related stuff (like
vector math and similar).


well, and some just plain obscure stuff, ...


BTW: recently (mostly over the past few days), I went and wrote a 
simplistic JIT for the thing (I have not otherwise had a working JIT for 
several years).


it turns out that if one factors out most of the hard parts in advance,
writing a JIT isn't actually all that difficult (*1).


in my case it gave an approx 20x speedup, bringing it from around 60x 
slower than C with (plain C) threaded code, to around 3x slower than C, 
or at least for my selection-sort test and similar... (the recursive 
Fibonacci test is still fairly slow though, at around 240x slower than 
C). as-is, it mostly just directly compiles a few misc things, like 
arithmetic operators and variable loads/stores and similar, leaving most 
everything else as call-threaded code (where the ASM code mostly just 
directly calls C functions to carry out operations).
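
for reference, plain C call-threaded code is essentially the following
(a toy sketch): each operation is a C function, and dispatch is just
walking an array of function pointers; a simple JIT then emits native
code that makes these same calls directly (and inlines the cheap ones
like arithmetic and variable loads/stores).

#include <stdio.h>

typedef struct Ctx { double stack[64]; int sp; } Ctx;
typedef void (*Op)(Ctx *);

/* each "opcode" is just a C function operating on the context */
static void op_push1(Ctx *c) { c->stack[c->sp++] = 1.0; }
static void op_push2(Ctx *c) { c->stack[c->sp++] = 2.0; }
static void op_add(Ctx *c)   { c->sp--; c->stack[c->sp - 1] += c->stack[c->sp]; }
static void op_print(Ctx *c) { printf("%g\n", c->stack[--c->sp]); }

int main(void)
{
    /* "compiled" form of: print(1 + 2); */
    Op prog[] = { op_push1, op_push2, op_add, op_print, NULL };
    Ctx ctx = { {0}, 0 };
    for (Op *ip = prog; *ip; ip++)      /* the threaded dispatch loop */
        (*ip)(&ctx);
    return 0;
}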


in the selection sort test, the goal is basically to sort an array using 
selection sort. for a 64k-element array, this is currently about 15s for 
C, and around 49s for BS. with the interpreter, this operation takes 
a good number of minutes.
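
for reference, the sort kernel in question is just the usual textbook
selection sort (the C version sketched below; the script version is
structurally the same):

/* selection sort: for each position, find the smallest remaining element
 * and swap it into place; O(n^2) comparisons, which is what makes it a
 * reasonable interpreter-overhead benchmark */
void selection_sort(int *arr, int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[min])
                min = j;
        if (min != i) {
            int tmp = arr[i];
            arr[i] = arr[min];
            arr[min] = tmp;
        }
    }
}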



*1:
current JIT, 1.2 kloc;
rest of core interpreter, 18 kloc;
rest of script interpreter (parser, front-end bytecode compiler, ...), 
32 kloc;
VM runtime (dynamic typesystem, OO facilities, C FFI, ...) + 
assembler/linker + GC: 291 kloc;

entire project, 1.32 Mloc.

so, yes, vs 291 kloc (or 1.32 Mloc), 1.2 kloc looks pretty small.

language rankings in project (excluding tools) by total kloc:
C (1.32 Mloc), BGBScript (16.33 kloc), C++ (16.29 kloc).

there may be some amount more script-code embedded in data files or 
procedurally generated, but this is harder to measure.



or such...



Re: [fonc] Current topics

2013-01-03 Thread BGB

On 1/3/2013 7:27 PM, Miles Fidelman wrote:

BGB wrote:

Whoa, I think you just invented "nanotech organelles", at least this

is the first time I've heard that idea and it seems pretty
mind-blowing.  What would a cell use a cpu for?


mostly so that microbes could be programmed in a manner more like 
larger-scale computers.


say, the microbe has its basic genome and capabilities, which can be 
treated more like hardware, and then a person can write behavioral 
programs in a C-like language or similar, and then compile them and 
run them on the microbes.


for larger organisms, possibly the cells could network together and 
form into a sort of biological computer, then you can possibly have 
something the size of an insect with several GB of storage and 
processing power rivaling a modern PC, as well as possibly other 
possibilities, such as the ability to communicate via WiFi or similar.


you might want to google "biological computing" - you'll start finding 
things like this:
http://www.guardian.co.uk/science/blog/2009/jul/24/bacteria-computer 
(title: "Bacteria make computers look like pocket calculators")




FWIW: this is like comparing a fire to an electric motor.


yes, but you can't use a small colony of bacteria to do something like 
drive an XBox360; they just don't work this way.


with bacteria containing CPUs, you could potentially do so.

and, by the time you got up to a colony the size of an XBox360, the 
available processing power would be absurd...



this is not a deficiency of the basic biological mechanisms (which are 
in fact quite powerful), but rather of their inability to readily organize 
themselves into a larger-scale computational system.





alternatively could be the possibility of having an organism with 
more powerful neurons, such that rather than neurons communicating 
via simple impulses, they can send more complex messages (neuron 
fires with extended metadata, ...). then neurons can make more 
informed decisions about whether to fire off a message.



cells do lots of nifty stuff, but most of their functionality is more 
based around cellular survival than about computational tasks.


Ummm have you heard of:

1. Brains (made up of cells),

2. Our immune systems,

3. The complex behaviors of fungi



yes, but observe just how pitifully these things do *at* traditional 
computational tasks...


for all the raw power in something like the human brain, and the ability 
of humans to possess things like general intelligence, ..., we *still* 
have to single-step in a stupid graphical debugger and require *hours* 
to think about and write chunks of code (and weeks or months to write a 
program), and a typical human can barely even add or subtract numbers in 
a reasonable time-frame (with the relative absurdity that, with all 
their raw power, a human finds it easier just to tap the calculation 
into a calculator, in the first place).



meanwhile, a C compiler can churn through and compile around a million 
lines of code in around 1 minute or so, a task which a human has no 
hope of even attempting.


something is clearly deficient for the human mind at this task.


Think massively parallel/distributed  computation focused on organism 
level survival and behavior.  If you want to program colonies of nano 
machines (biological or otherwise), you're going to have to start 
thinking of some very different kinds of algorithms, running on 
something a lot more powerful than a small CPU programmed in C.




I am thinking of billions of small CPUs programmed in C, and probably 
organized into micrometer or millimeter scale networks. there would be a 
reason why each cell would have its own CPU (built out of basic 
biological components).


also, humans would probably use a C-like language mostly because it 
would be most familiar, but need not be executed exactly like how it 
would on a modern computer (they may or may not have an ISA as would be 
currently understood).


probably these would need to mesh together somehow and simulate the 
functionality of larger computers, and would likely work by distributing 
computation and memory storage among individual cells.


even if the signaling and organization is moderately inefficient, likely 
it could be made up for by using redundancy and bulk.



similarly, tasks that would, at the larger scale, be accomplished via 
robots and bulk mechanical forces, could be performed instead by 
cooperative actions by individual cells (say, millions of cells all push 
on something in the same direction at the same time, or they start 
building a structure by secreting specific chemicals at specific 
locations, ...).



Start thinking billions of actors, running on highly parallel 
hardware, and we might start approaching what cells do today.  (FYI, 
try googling "micro-tubules" and you'll find some interesting papers 
on how these sub-cellular structures just might act

Re: [fonc] Current topics

2013-01-02 Thread BGB

On 1/2/2013 10:31 PM, Simon Forman wrote:

On Tue, Jan 1, 2013 at 7:53 AM, Alan Kay  wrote:

The most recent discussions get at a number of important issues whose
pernicious snares need to be handled better.

In an analogy to sending messages "most of the time successfully" through
noisy channels -- where the noise also affects whatever we add to the
messages to help (and we may have imperfect models of the noise) -- we have
to ask: what kinds and rates of error would be acceptable?

We humans are a noisy species. And on both ends of the transmissions. So a
message that can be proved perfectly "received as sent" can still be
interpreted poorly by a human directly, or by software written by humans.

A wonderful "specification language" that produces runable code good enough
to make a prototype, is still going to require debugging because it is hard
to get the spec-specs right (even with a machine version of human level AI
to help with "larger goals" comprehension).

As humans, we are used to being sloppy about message creation and sending,
and rely on negotiation and good will after the fact to deal with errors.

We've not done a good job of dealing with these tendencies within
programming -- we are still sloppy, and we tend not to create negotiation
processes to deal with various kinds of errors.

However, we do see something that is "actual engineering" -- with both care
in message sending *and* negotiation -- where "eventual failure" is not
tolerated: mostly in hardware, and in a few vital low-level systems which
have to scale pretty much "finally-essentially error-free" such as the
Ethernet and Internet.

My prejudices have always liked dynamic approaches to problems with error
detection and improvements (if possible). Dan Ingalls was (and is) a master
at getting a whole system going in such a way that it has enough integrity
to "exhibit its failures" and allow many of them to be addressed in the
context of what is actually going on, even with very low level failures. It
is interesting to note the contributions from what you can say statically
(the higher the level the language the better) -- what can be done with
"meta" (the more dynamic and deep the integrity, the more powerful and safe
"meta" becomes) -- and the tradeoffs of modularization (hard to sum up, but
as humans we don't give all modules the same care and love when designing
and building them).

Mix in real human beings and a world-wide system, and what should be done?
(I don't know, this is a question to the group.)

There are two systems I look at all the time. The first is lawyers
contrasted with engineers. The second is human systems contrasted with
biological systems.

There are about 1.2 million lawyers in the US, and about 1.5 million
engineers (some of them in computing). The current estimates of "programmers
in the US" are about 1.3 million (US Dept of Labor counting "programmers and
developers"). Also, the Internet and multinational corporations, etc.,
internationalizes the impact of programming, so we need an estimate of the
"programmers world-wide", probably another million or two? Add in the ad hoc
programmers, etc? The populations are similar in size enough to make the
contrasts in methods and results quite striking.

Looking for analogies, to my eye what is happening with programming is more
similar to what has happened with law than with classical engineering.
Everyone will have an opinion on this, but I think it is partly because
nature is a tougher critic on human built structures than humans are on each
other's opinions, and part of the impact of this is amplified by the simpler
shorter term liabilities of imperfect structures on human safety than on
imperfect laws (one could argue that the latter are much more of a disaster
in the long run).

And, in trying to tease useful analogies from Biology, one I get is that the
largest gap in complexity of atomic structures is the one from polymers to
the simplest living cells. (One of my two favorite organisms is Pelagibacter
ubique, which is the smallest non-parasitic standalone organism. Discovered
just 10 years ago, it is the most numerous known bacterium in the world, and
accounts for 25% of all of the plankton in the oceans. Still it has about
1300+ genes, etc.)

What's interesting (to me) about cell biology is just how much stuff is
organized to make "integrity" of life. Craig Ventor thinks that a minimal
hand-crafted genome for a cell would still require about 300 genes (and a
tiniest whole organism still winds up with a lot of components).

Analogies should be suspect -- both the one to the law, and the one here
should be scrutinized -- but this one harmonizes with one of Butler
Lampson's conclusions/prejudices: that you are much better off making --
with great care -- a few kinds of relatively big modules as basic building
blocks than to have zillions of different modules being constructed by
vanilla programmers. One of my favorite examples of this was the "Beings"
master

Re: [fonc] Wrapping object references in NaN IEEE floats for performance (was Re: Linus...)

2013-01-01 Thread BGB

On 1/1/2013 6:36 PM, Paul D. Fernhout wrote:

On 1/1/13 3:43 AM, BGB wrote:

here is mostly that this still allows for type-tags in the
references, but would likely involve a partial switch to the use of
64-bit tagged references within some core parts of the VM (as a partial
switch away from "magic pointers"). I am currently leaning towards
putting the tag in the high-order bits (to help reduce 64-bit arithmetic
ops on x86).


One idea I heard somewhere (probably on some Squeak-related list 
several years ago) is to have all objects stored as floating point NaN 
instances (NaN == "Not a Number"). The biggest bottleneck in practice 
for many applications that need computer power these days (like 
graphical simulations) usually seems to be floating point math, 
especially with arrays of floating point numbers. Generally when you 
do most other things, you're already paying some other overhead 
somewhere. But multiplying arrays of floats efficiently is 
what makes or breaks many interesting applications. So, by wrapping 
all other objects as instances of floating point numbers using the NaN 
approach, you are optimizing for the typically most CPU intensive case 
of many user applications. Granted, there is going to be tradeoffs 
like integer math and so looping might then probably be a bit slower? 
Perhaps there is some research paper already out there about the 
tradeoffs for this sort of approach?




I actually tried this already...

I had originally borrowed the idea from Lua (a paper I was reading
mentioned it as having been used in Lua).



the problems were, primarily on 64-bit targets:
my other code assumed value-ranges which didn't fit nicely in the 52-bit 
mantissa;

being a NaN obscured the pointers from the GC;
it added a fair bit of cost to pointer and integer operations;
...

granted, you only really need 48 bits for current pointers on x86-64;
the problem was that other code had already assumed the use of a 56-bit
tagged space when using pointers ("spaces"), leaving a little bit of a
problem of 56 > 52.


so, everything was crammed into the mantissa somewhat inelegantly, and 
the costs regarding integer and pointer operations made it not really an 
attractive option.


all this was less of an issue with 32-bit x86, as I could essentially 
just shove the whole pointer into the mantissa ("spaces" and all), and 
the GC wouldn't be confused by the value.
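
the 32-bit case is roughly the following (a minimal sketch of the
general NaN-boxing idea, with an encoding made up for illustration; a
real scheme has to reserve the collision cases, e.g. a genuine quiet
NaN with a zero payload would collide with a boxed NULL here):

#include <stdint.h>
#include <string.h>
#include <assert.h>
#include <stdio.h>

/* any double whose top 13 bits match BOX_PTR is a quiet NaN carrying a
 * 32-bit payload in its low mantissa bits; real doubles pass through */
#define BOX_MASK 0xFFF8000000000000ULL
#define BOX_PTR  0x7FF8000000000000ULL

static double box_ptr(uint32_t p)
{
    uint64_t bits = BOX_PTR | (uint64_t)p;
    double d; memcpy(&d, &bits, sizeof d);
    return d;
}
static int is_boxed_ptr(double d)
{
    uint64_t bits; memcpy(&bits, &d, sizeof bits);
    return (bits & BOX_MASK) == BOX_PTR;
}
static uint32_t unbox_ptr(double d)
{
    uint64_t bits; memcpy(&bits, &d, sizeof bits);
    return (uint32_t)bits;
}

int main(void)
{
    double v = box_ptr(0x12345678u);
    assert(is_boxed_ptr(v) && unbox_ptr(v) == 0x12345678u);
    assert(!is_boxed_ptr(3.14159));     /* ordinary doubles are unaffected */
    printf("ok\n");
    return 0;
}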



basically, what "spaces" is, is that a part of the address space will 
basically be used and divided up into a number of regions for various 
dynamically typed values (the larger ones being for fixnum and flonum).


on 32-bit targets, spaces is 30 bits, and located between the 3GB and 
4GB address mark (which the OS generally reserves for itself). on 
x86-64, currently it is a 56-bit space located at 0x7F00_.
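
as a rough illustration of the 32-bit case (the region boundaries and
the fixnum sub-region below are made up; only the 3GB starting point
comes from the description above):

#include <stdint.h>

#define SPACES_BASE      0xC0000000u   /* the 3GB mark */
#define SPACE_FIXNUM     0xC0000000u   /* hypothetical fixnum sub-region */
#define SPACE_FIXNUM_END 0xE0000000u

/* a pointer-sized value either points at a real object, or falls in a
 * reserved region and encodes a dynamically typed value by its offset */
static int is_in_spaces(uint32_t v) { return v >= SPACES_BASE; }
static int is_fixnum(uint32_t v)
    { return v >= SPACE_FIXNUM && v < SPACE_FIXNUM_END; }
static uint32_t fixnum_value(uint32_t v)
    { return v - SPACE_FIXNUM; }       /* unsigned offset; a real scheme
                                          would also handle the sign */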




For more background, see:
  http://en.wikipedia.org/wiki/NaN
"For example, a bit-wise example of a IEEE floating-point standard 
single precision (32-bit) NaN would be: s111 1111 1axx xxxx xxxx xxxx 
xxxx xxxx, where s is the sign (most often ignored in applications), a 
determines the type of NaN, and x is an extra payload (most often 
ignored in applications)"


So, information about other types of objects would start in that 
"extra payload" part. There may be some inconsistency in how hardware 
interprets some of these bits, so you'd have to think about if that 
could be worked around if you want to be platform-independent.


See also:
  http://en.wikipedia.org/wiki/IEEE_floating_point

You might want to just go with 64 bit floats, which would support 
wrapping 32 bit integers (including as pointers to an object table if 
you wanted, even up to probably around 52 bit integer pointers); see:

  "IEEE 754 double-precision binary floating-point format: binary64"
  http://en.wikipedia.org/wiki/Binary64



yep...

my current tagging scheme partly incorporates parts of double, mostly in 
the sense that some tags were chosen such that a certain range of 
doubles could be passed through unmodified and with full precision.


the drawback is that 0 is special, and I haven't yet thought up a "good" 
way around this issue.


admittedly I am not entirely happy with the handling of fixnums either 
(more arithmetic and conditionals than I would like).



here is what I currently have:
http://cr88192.dyndns.org:8080/wiki/index.php/Tagged_references



does seem like I am going in circles at times though...


I know that feeling myself, as I've been working on semantic-related 
generally-triple-based stuff for going on 30 years, and I still feel 
like the basics could be improved. :-)




yes.

well, in this case, it is that I have bounced back and forth between 
tagged-references and "magic pointers" multiple times over the years.


granted, this would be 

Re: [fonc] Incentives and Metrics for Infrastructure vs. Functionality

2013-01-01 Thread BGB

On 1/1/2013 2:12 PM, Loup Vaillant-David wrote:

On Mon, Dec 31, 2012 at 04:36:09PM -0700, Marcus G. Daniels wrote:

On 12/31/12 2:58 PM, Paul D. Fernhout wrote:
2. The programmer has a belief or preference that the code is easier
to work with if it isn't abstracted. […]

I have evidence for this poisonous belief.  Here is some production
C++ code I saw:

   if (condition1)
   {
 if (condition2)
 {
   // some code
 }
   }

instead of

   if (condition1 &&
   condition2)
   {
 // some code
   }

-

   void latin1_to_utf8(std::string & s);

instead of

   std::string utf8_of_latin1(std::string s)
or
   std::string utf8_of_latin1(const std::string & s)

-

(this one is more controversial)

   Foo foo;
   if (condition)
 foo = bar;
   else
 foo = baz;

instead of

   Foo foo = condition
   ? bar
   : baz;

I think the root cause of those three examples can be called "step by
step thinking".  Some people just can't deal with abstractions at all,
not even functions.  They can only make procedures, which do their
thing step by step, and rely on global state.  (Yes, global state,
though they do have the courtesy to fool themselves by putting it in a
long lived object instead of the toplevel.)  The result is effectively
a monster of mostly linear code, which is cut at obvious boundaries
whenever `main()` becomes too long ("too long" generally being a
couple hundred lines).  Each line of such code _is_ highly legible,
I'll give them that.  The whole, however, would frighten even Cthulhu.


part of the issue may be a tradeoff:
does the programmer think in terms of abstractions and using high-level 
overviews?
or, does the programmer mostly think in terms of step-by-step operations 
and make use of their ability to keep large chunks of information in memory?


it is a question maybe of whether the programmer sees the forest or the 
trees.


these sorts of things may well have an impact on the types of code a 
person writes, and what sorts of things the programmer finds more readable.



like, for a person who can mentally more easily deal with step-by-step 
thinking, but can keep much of the code in their mind at-once, and 
quickly walk around and explore the various possibilities and scenarios, 
this kind of bulky low-abstraction code may be preferable, since when 
they "walk the graph" in their mind, they don't really have to stop and 
think too much about what sorts of items they encounter along the way.


in their mind's eye, it may well look like a debugger stepping at a rate 
of roughly 5-10 statements per second or so. they may or may not 
be fully aware of how their mind does it, but they can vaguely see the 
traces along the call-stack, ghosts of intermediate values, and the 
sudden jump of attention to somewhere where a crash has occurred or an 
exception has been thrown.


actually, I have before compared it to ants:
it is like one's mind has ants in it, which walk along trails, either 
stepping code, or trying out various possibilities, ...
once something "interesting" comes up, it starts attracting more of 
these mental ants, until it has a whole swarm, and then a clearer 
image of the scenario or idea may emerge in one's mind.


but, abstractions and difficult concepts are like oil to these ants, 
where if ants encounter something they don't like (like oil) they will 
back up and try to walk around it (and individual ants aren't 
particularly smart).



and, probably, other people use other methods of reasoning about code...




Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2013-01-01 Thread BGB

On 12/31/2012 10:47 PM, Marcus G. Daniels wrote:

On 12/31/12 8:30 PM, Paul D. Fernhout wrote:
So, I guess another meta-level bug in the Linux Kernel is that it is 
written in C, which does not support certain complexity management 
features, and there is no clear upgrade path from that because C++ 
has always had serious linking problems.
But the ABIs aren't specified in terms of language interfaces, they 
are architecture-specific.  POSIX kernel interfaces don't need C++ 
link level compatibility, or even extern "C" compatibility 
interfaces.  Similarly on the device side, that's packing command 
blocks and such, byte by byte.  Until a few years ago, GCC was the 
only compiler ever used (or able) to compile the Linux kernel.  It is 
a feature that it all can be compiled with one open source toolchain.  
Every aspect can be improved.




granted.

typically, the actual call into kernel-land is a target-specific glob of 
ASM code, which may then be wrapped up to make all the various system calls.
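
for example (a sketch of the x86-64 Linux case using GCC-style inline
assembly; other targets and kernels differ in the details):

#include <stddef.h>

/* the raw target-specific glue for one system call: the call number goes
 * in rax, arguments in rdi/rsi/rdx, then the `syscall` instruction;
 * the C library wraps blobs like this behind the usual C functions */
static long sys_write(int fd, const void *buf, size_t len)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(1L),            /* __NR_write on x86-64 Linux */
                        "D"((long)fd), "S"(buf), "d"(len)
                      : "rcx", "r11", "memory");
    return ret;
}

int main(void)
{
    sys_write(1, "hello from a raw syscall\n", 25);
    return 0;
}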



as for ABIs, a few things could help:
if the C++ ABI were defined *along with* the C ABI for a given target;
if the C++ compilers would use said ABI, rather than each rolling their own;
if the ABI were sufficiently general to be more useful to multiple 
languages (besides just C and C++);

...

in this case, the C ABI could be considered a formal subset of the C++ ABI.


admittedly, if I could have my say, I would make some changes to the way
struct/class passing and returning is handled in SysV / AMD64; namely,
make it less complicated/evil: say, a struct is either passed in
a single register or passed by reference (no decomposition and
passing via multiple registers).


more so, probably also provide spill space for arguments passed in 
registers (more like in Win64).



granted, this itself may illustrate part of the problem:
with many of these ABIs, not everyone is happy, so there is a lot of
temptation for compiler vendors to go their own way (making "mix
and match" with code compiled by different compilers, or sometimes with
different compiler options, unsafe...).


often it will mostly work, but sometimes fail due to minor ABI 
differences.



From that thread I read that those in the Linus camp are fine with 
abstraction, but it has to be their abstraction on their terms. And 
later in the thread, Theodore Ts'o gave an example of opacity in the 
programming model:


a = b + "/share/" + c + serial_num;

Arguing "where you can have absolutely no idea how many memory 
allocations are

done, due to type coercions, overloaded operators"

Well, I'd say just write the code in concise notation.  If there are 
memory allocations they'll show up in valgrind runs, for example. Then 
disassemble that function and understand what the memory allocations 
actually are.  If there is a better way to do it, then either change 
abstractions, or improve the compiler to do it more efficiently.   
Yes, there can be an investment in a lot of stuff. But just defining 
any programming model with a non-obvious performance model as a bad 
programming model is shortsighted advice, especially for developers 
outside of the world of operating systems.   That something is 
non-obvious is not necessarily a bad thing.   It just means a bit more 
depth-first investigation.   At least one can _learn_ something from 
the diversion.




yep.

some of this is also a bit of a problem for many VM based languages, 
which may, behind the scenes, chew through memory, while giving little 
control of any of this to the programmer.


in my case, I have been left fighting performance in many areas with my 
own language, admittedly because its present core VM design isn't 
particularly high performance in some areas.



though, one can still be left looking at a sort of ugly wall:
the wall separating static and dynamic types.

dynamic typing is a land of relative ease, but not particularly good 
performance.
static typing is a land of pain and implementation complexity, but also 
better performance.


well, there is also the "fixnum issue", where a fixnum may be just 
slightly smaller than an analogous native type (it is the curse of the 
28-30 bit fixnum, or the 60-62 bit long-fixnum...).


this issue is annoying specifically because it gets in the
way of having an efficient fixnum type that also maps to a sensible
native type (like "int") while keeping intact the usual definition that
"int" is exactly 32 bits and/or that "long" is exactly 64 bits.


but, as a recent attempt at trying to switch to untagged value types 
revealed, even with an interpreter core that is "mostly statically 
typed", making this switch may still open a "big can of worms" in some 
other cases (because there are still "holes" in the static type-system).



I have been left considering the possibility of instead making a compromise:
"int", "float", and "double" can be represented directly;
"long", however, would (still) be handle

Re: [fonc] Linus Chews Up Kernel Maintainer For Introducing Userspace Bug - Slashdot

2012-12-30 Thread BGB

On 12/30/2012 10:49 PM, Paul D. Fernhout wrote:
Some people here might find of interest my comments on the situation 
in the title, posted in this comment here:

http://slashdot.org/comments.pl?sid=3346421&cid=42430475

After citing Alan Kay's OOPSLA 1997 "The Computer Revolution Has Not 
Happened Yet" speech, the key point I made there is:
"Yet, I can't help but feel that the reason Linus is angry, and 
fearful, and shouting when people try to help maintain the kernel and 
fix it and change it and grow it is ultimately because Alan Kay is 
right. As Alan Kay said, you never have to take a baby down for 
maintenance -- so why do you have to take a Linux system down for 
maintenance?"


Another comment I made in that thread cited Andrew Tanenbaum's 1992 
comment "that it is now all over but the shoutin'":
http://developers.slashdot.org/comments.pl?sid=3346421&threshold=0&commentsort=0&mode=thread&cid=42426755 



So, perhaps now, twenty years later, we finally see the shouting begin as the 
monolithic Linux kernel reaches its limits as a community process? :-) 
Still, even if true, it was a good run.


The main article can be read here:
http://developers.slashdot.org/story/12/12/29/018234/linus-chews-up-kernel-maintainer-for-introducing-userspace-bug 



This is not to focus on personalities or the specifics of that mailing 
list interaction -- we all make mistakes (whether as leaders or 
followers or collaborators), and I don't fully understand the culture 
of the Linux Kernel community. I'm mainly raising an issue about how 
software design affects our emotions -- in this case, making someone 
angry probably about something they fear -- and how that may point the 
way to better software systems like FONC aspired to.




dunno...

in this case, I think Torvalds was right; however, he could have handled 
it a little more gracefully.


code-breaking changes are generally something to be avoided wherever 
possible, which seems to be the main issue here.


sometimes it is necessary, but usually it needs to be "for a damn good 
reason".
more often, though, this leads to a shim, such that new functionality can 
be provided while keeping whatever already exists working.
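
a trivial sketch of the idea (hypothetical names here, not any particular 
API): the old entry point stays around, but becomes a thin wrapper over 
the newer, more general interface:

#include <stdio.h>
#include <stddef.h>

/* the new, extended interface (flags reserved for future extension) */
static int io_read(const char *path, void *buf, size_t size, int flags) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    size_t n = fread(buf, 1, size, f);
    fclose(f);
    (void)flags;   /* unused for now */
    return (int)n;
}

/* the old entry point, kept as a thin shim so existing callers still work */
static int old_read_file(const char *path, void *buf, size_t size) {
    return io_read(path, buf, size, 0);   /* 0 = "behave like the old API did" */
}

existing code keeps calling old_read_file() and never notices anything 
changed; new code can go use the extra flags.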


once a limit is hit, then often there will be a "clean break", with a 
new shiny whatever provided, which is not backwards compatible with the 
old interface (and will generally be redesigned to address prior 
deficiencies and open up routes for future extension).


then usually, both will coexist for a while, usually until one or the 
other dies off (either people switch to the new interface, or people 
rebel and stick to the old one).


in a few cases in history, this has instead led to forks, with the old 
and new versions developing in different directions, and becoming 
separate and independent pieces of technology.


for example, seemingly unrelated file formats that have a common 
ancestor, or different CPU ISA's that were once a single ISA, ...


likewise, at each step, backwards compatibility may be maintained, but 
this doesn't necessarily mean that things will remain static. sometimes 
there may still be a common subset buried in there somewhere; in other 
cases, the loss of occasional "archaic" details will cause what remains 
of this common subset to gradually fade away.




as for design and emotions:
I think people mostly prefer to stay with familiar things.
unfamiliar things will often drive people away, especially if they look 
scary or different, whereas people will be more forgiving of things 
which look familiar, even if they are different internally.


often this may well amount to shims as well, where something familiar 
will be emulated as a shim on top of something different. even if it is 
actually fake, people will not care; they can just keep on doing what 
they were doing before.


granted, yes, when some people look into the "heart of computing", and 
see this seeming mountain of things held together mostly by shims and 
some amount of duct tape, they regard it as a thing of horror. others 
may see it, and be like "this is just how it is".


luckily, it doesn't go on indefinitely: often, with enough shims, a 
sufficiently thick "layer of abstraction" builds up to where it may 
become more reasonable to rip out a lot of what is underneath, while only 
maintaining the surface-level details (for the sake of compatibility). 
compatibility may be maintained, even if a lot of what goes on in-between 
has since changed, and things can be extended that much longer...


granted, by this point, it is often less "the thing it once was" so much 
as an emulator.
but, under the surface, what is the real thing and what is an emulator 
isn't really always all that certain. what usually defines an emulator, 
then, is not so much what it actually does, but how much of a big 
ugly seam there is in it doing so.



or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/list

Re: [fonc] Falun Dafa

2012-12-23 Thread BGB

On 12/23/2012 3:48 PM, John Pratt wrote:

Respectfully, go read Zhuan Falun and then comment on this thread.



eastern religious stuff is not compatible with my religious background, 
so I will decline...


(more so, this group is theoretically more about science and computers 
than it is about eastern religion...).



did end up going and reading some more about neurology though, since 
this is more religiously neutral, and seemingly more directly related to 
all this (like, interactions between the amygdala and parietal and 
frontal lobes and so on).


but, in any case, little obviously productive here anyways...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Falun Dafa

2012-12-23 Thread BGB

On 12/23/2012 11:25 AM, John Carlson wrote:




On Sun, Dec 23, 2012 at 12:37 AM, BGB <cr88...@gmail.com> wrote:


On 12/22/2012 9:11 PM, Julian Leviston wrote:

I think you've missed the point.

The point is... you need to use your body and your emotions as
well as your mind. Our society is overly focussed on the mind.



could be, fair enough...


The point is, if you don't use your body and emotions, they'll be sure 
to let you know.  Perhaps in 15 years or so.  Check out half-life of 
an IT worker, relevant post on /.: 
http://tech.slashdot.org/story/11/12/03/1435217/half-life-of-a-tech-worker-15-years 
... The mind is co-dependent on the emotions and body, not independent.




well, except I am already in my late 20s (will be 29 in a matter of 
days), and by this point arguably already using "dated" technologies. 
(but, the usual "catch up" is absurd, as most of these "new technologies" 
end up largely forgotten in a few years anyways, while the older 
technologies remain in full force...).


IOW: mostly still using C, as Java is still lame, and C# still isn't 
very good on non-Windows targets (as many of the advantages it has on 
Windows cease to exist on VMs like Mono). but, seriously, what is the 
point of playing catch-up? or of taking C# seriously as a tool for much 
more than quick/dirty GUI apps and writing Paint.NET plugins and similar?...


the biggest thing I have written in C# thus far was a codec for a custom 
JPEG-based image format (it is like JPEG but adds more features, *1), 
mostly in the form of a Paint.NET plugin. in many ways, C# is much 
less well-suited to this sort of thing than C is (for example, for the 
image codec, I have both C and C# versions).


*1: alpha-channels, expanded components (normal, luma, depth, ...), 
layers, lossless encoding, some additional transforms and filters (which 
can help improve compression), ... basically, it ended up bolting on some 
block-filters derived from those in PNG as well, which can help compress 
things better when dealing with certain types of images (flat colors and 
gradients, or blocks containing sharp edges). it is, however, not 
strictly backwards-compatible with existing JPEG decoders (depending on 
which features are enabled). when the alternate filters are enabled, it 
also uses a different entropy-coding / VLC scheme.
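
for reference, the general idea behind one of the PNG-style block filters 
(a minimal sketch of the standard "Sub" filter; this is not code from my 
codec): each byte stores the difference from the byte one pixel to the 
left, which turns flat colors and smooth gradients into long runs of 
small values that compress much better:

#include <stdint.h>
#include <stddef.h>

/* forward filter: out[i] = in[i] - in[i - bpp], wrapping mod 256 like PNG */
static void filter_sub(const uint8_t *in, uint8_t *out,
                       size_t len, size_t bpp) {
    for (size_t i = 0; i < len; i++) {
        uint8_t left = (i >= bpp) ? in[i - bpp] : 0;
        out[i] = (uint8_t)(in[i] - left);
    }
}

/* inverse filter: reconstructs each byte from the already-decoded left byte */
static void unfilter_sub(const uint8_t *in, uint8_t *out,
                         size_t len, size_t bpp) {
    for (size_t i = 0; i < len; i++) {
        uint8_t left = (i >= bpp) ? out[i - bpp] : 0;
        out[i] = (uint8_t)(in[i] + left);
    }
}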



now, back in time, my early/mid 20s were a time of strongish and poorly 
controlled emotions, and I put a lot of time and effort into getting 
things mostly under control (such that being upset about something need 
not interfere with my external behavior or ability to complete tasks). 
(like, say, if a person is upset about something, it interferes with 
them writing code or working on things, ...).


after a while though, a person largely stops feeling upset about things. 
granted, there is always a risk of them "coming back" in some more 
aggressive form (or, occasionally, playing tricks and bypassing their 
usual "sandbox"). likewise, there is still the issue of memory retrieval, 
where emotions can apparently interfere with the types of memories that 
are brought up (so, emotions are sort of like a cat that keeps getting 
up on the keyboard when it wants something, and one usually wants the 
cat to not be on the keyboard).


sometimes it is necessary to "get involved" and try to stabilize them 
though, because otherwise emotions can go into a sort of feedback loop, 
resulting in adverse psychological and behavioral effects (often: 
conscious fragmentation, *2, partial loss of sensory input, reduced 
ability to move, ...), but things will usually return to normal once 
emotions burn themselves out and dissipate (I think the last time this 
happened was ~ 5 years ago though).


*2: this state is a bit complicated to describe. I am left to realize 
that I don't really want to describe it, nor is it probably really 
topical here anyways.



as-is, lacking a job, I am mostly trying to "make it on my own", 
admittedly without a whole lot of success thus far.


as for the future, I don't really know...



John


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Falun Dafa

2012-12-22 Thread BGB

On 12/22/2012 9:11 PM, Julian Leviston wrote:

I think you've missed the point.

The point is... you need to use your body and your emotions as well as 
your mind. Our society is overly focussed on the mind.




could be, fair enough...


emotions are hard though: nearly completely absent at one moment, then 
showing up and being distracting at another, and generally not very easy 
to make much sense of. but, I guess, if ignored too much they can start 
to fade away altogether, and if not controlled, they can make a mess of 
things, leading to poor judgement and irrational behavior. most often, 
when emotions do show up, they are like "I am bored and lonely, and this 
kind of sucks", which isn't really all that helpful. in other cases, they 
might show up, cause a sense of sadness, and erode one's ability to do 
stuff, which also isn't really helpful.


for most things in life, it doesn't seem to make much difference, but it 
does apparently have a bit of a dampening effect in the relationship 
sense, like no one is really interested, which probably doesn't help 
matters all that much (and it doesn't help much when one can know with 
statistical near-certainty that it won't go anywhere, most often because 
there is some critical incompatibility, or, more often, the other person 
has only a short period of time before they lose interest and go elsewhere).


most else is the sort of short-lived emotional state which arises from 
watching TV shows or similar, but, when the show ends, everything is as 
it was before.


nevermind things like poetry or similar, which are more just confusing 
and cryptic than anything else ("what does it mean? who knows? wait, it 
was about drinking coffee? oh well, whatever...").


even for as ineffective as it ultimately is, a person can still get a 
lot more of an effect by watching a pile of anime or similar (say, a 
person can get ~ 75 hours of emotional stimulation by watching ~ 150 
episodes of InuYasha, then be looked down on by others for doing so, or 
similar...).


and, sometimes, there are good shows, some of which a person can wish 
there were more of (like, say, "Invader Zim"), but then again, there are 
always new shows (like "MLP: FiM"...).
and elsewhere, there are videos on YouTube, like all the endless 
"Gangnam Style" parodies.


otherwise, a person is left to realize that their life is kind of empty 
and unproductive, and seemingly all their emotions can really do is 
remind them about how lame their life is (and there isn't even really 
much to want; like, say, there is no real way to build a newer/better 
computer without dumping lots of money into overly expensive parts, and 
better would be trying to find a way to earn some sort of income...).


but, even so, it is hard to imagine if/how it could be any different.



in a way, such is life...


but, at least I am sort of making a game, and putting some videos of it 
on YouTube:

http://www.youtube.com/watch?v=GRVaCPgVxb8

and, a video about some of the high-level architecture:
http://www.youtube.com/watch?v=TlamKh8vUJ0

nevermind if it amounts to anything much more than this (hardly anyone 
cares, no one makes donations).


but, keeping going is still better than falling into despair, even if 
everything does eventually all amount to nothing.



or such...



Julian

On 23/12/2012, at 1:52 PM, BGB <cr88...@gmail.com> wrote:



On 12/22/2012 5:52 PM, Julian Leviston wrote:

Thank you, captain obvious.

Man is a three-centered (three-brained if you will) being. Focussing 
on only one of the brains is by definition imbalanced.


Bring back the renaissance man.



so, if, say, a person likes computers, but largely lacks either an 
emotional or creative side, is this implying that computers somehow 
took away their emotions and creativity, or is it more likely the 
case that they didn't really have them to begin with?...


like, a person after a while, observing that they rarely feel much of 
anything, no longer have much of any real sense of romantic interest, 
have little intrinsic creative motivation, are unable to understand 
symbolism, tend to see the world in a literal manner, ...


and, then wonder: "so it is? what now?..."

doesn't really seem like it is the computer's fault anymore than a 
person also noting that they are also partially color-blind.


unless I have missed the point?...


a more obvious downside though is that generally, doing lots of stuff 
on a computer keeps the user nailed down to their chair. even though 
they might realize that getting up and doing stuff might be better 
for their health, doing so is time away from working on stuff...


I guess a mystery then would be if, some time in the future, there 
will be ways of using computers which don't effectively require the 
users to be sitting in a chair all d

Re: [fonc] Falun Dafa

2012-12-22 Thread BGB

On 12/22/2012 5:52 PM, Julian Leviston wrote:

Thank you, captain obvious.

Man is a three-centered (three-brained if you will) being. Focussing 
on only one of the brains is by definition imbalanced.


Bring back the renaissance man.



so, if, say, a person likes computers, but largely lacks either an 
emotional or creative side, is this implying that computers somehow took 
away their emotions and creativity, or is it more likely the case that 
they didn't really have them to begin with?...


like, a person after a while, observing that they rarely feel much of 
anything, no longer have much of any real sense of romantic interest, 
have little intrinsic creative motivation, are unable to understand 
symbolism, tend to see the world in a literal manner, ...


and, then wonder: "so it is? what now?..."

doesn't really seem like it is the computer's fault anymore than a 
person also noting that they are also partially color-blind.


unless I have missed the point?...


a more obvious downside though is that generally, doing lots of stuff on 
a computer keeps the user nailed down to their chair. even though they 
might realize that getting up and doing stuff might be better for their 
health, doing so is time away from working on stuff...


I guess a mystery then would be if, some time in the future, there will 
be ways of using computers which don't effectively require the users to 
be sitting in a chair all day (ideally without compromising either the 
user experience or capabilities). (granted, yes, traditional exercise 
can be tiring/unpleasant though...).



as for the mentioned practice, it seems like it could conflict with a 
person's religious beliefs (many people consider these types of things 
to be occult).


more often a person might do something like memory-verses or similar 
instead (like, memorize and recite John 3:16 or similar, ...).


or such...



Julian

On 23/12/2012, at 4:28 AM, John Pratt wrote:



I want to tell everyone on this list about something I found.

Maybe someone out there hears what I say, thinks I am pretty
crazy for saying it to an entire mailing list, but appreciates it.

That is the kind of person I am sometimes.  I might tell a CEO
not to use high-class mustard on a hotdog and genuinely wonder afterwards
why he gets angry.  So, similarly, I am going to tell all of you to
go to FalunDafa.org because this is the best thing I have done
to extricate myself cognitively from computer prison that we all live in.

It is true that computers are impressive, but they are also injurious
in other respects and if people won't acknowledge the downsides
to what they do to our cognition, I don't think that is ok, either. I am
actually a generalist on this subject, so I don't take technical stances
on this minor subject or that minor subject inside the vast field of
computer science.  But what holds true for me also holds true for you,
that computers draw you in to a certain, narrow type of thinking that
needs to be balanced by true, traditional, /human/ things like music 
or dance or art.

___
fonc mailing list
fonc@vpri.org 
http://vpri.org/mailman/listinfo/fonc




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread BGB

On 10/3/2012 2:46 PM, Paul Homer wrote:

I think it's because that's what we've told them to ask for :-)

In truth we can't actually program 'everything', I think that's a 
side-effect of Godel's incompleteness theorem. But if you were to take 
'everything' as being abstract quantity, the more we write, the closer 
our estimation comes to being 'everything'. That perspective lends 
itself to perhaps measuring the current state of our industry by how 
much code we are writing right now. In the early years, we should be 
writing more and more. In the later years, less and less (as we get 
closer to 'everything'). My sense of the industry right now is that 
pretty much every year (factoring in the economy and the waxing or 
waning of the popularity of programming) we write more code than the 
year before. Thus we are only starting :-)





yeah, this seems about right.

from my own experience, new code being written in any given area tends 
to drop off once that part is reasonably stable or complete, apart from 
occasional tweaks/extensions, ...


but, there is always more to do somewhere else, so on average the code 
gradually gets bigger, as more functionality gets added in various areas.


and, I often have to decide where I will not invest time and effort.

so, yeah, this falls well short of "everything"...



Paul.


*From:* Pascal J. Bourguignon 
*To:* Paul Homer 
*Cc:* Fundamentals of New Computing 
*Sent:* Wednesday, October 3, 2012 3:32:34 PM
*Subject:* Re: [fonc] How it is

Paul Homer <paul_ho...@yahoo.ca> writes:

> The on-going work to enhance the system would consistent of
modeling data, and creating
> transformations. In comparison to modern software development,
these would be very little
> pieces, and if they were shared are intrinsically reusable (and
recombination).

Yes, that gives L4Gs.  Eventually (when we'll have programmed
everything) all computing will be only done with L4Gs: managers
specifying their data flows.

But strangely enough, users are always asking for new programs...  Is it
because we've not programmed every function already, or because we will
never have them all programmed?


-- 
__Pascal Bourguignon__ http://www.informatimago.com/

A bad day in () is better than a good day in {}.




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] How it is

2012-10-03 Thread BGB
ly used 
in languages like Java and C#. in dynamically-typed languages, the 
object is often itself its own metadata.



meanwhile, in my case, I build and use reflective metadata for C.

the big part of the "power" of my VM project is not that I have such big 
fancy code, or fancy code generation, but rather that my C code has 
reflection facilities. even for plain old C code, reflection can be 
pretty useful sometimes (allowing things that would otherwise be 
impossible, or at least rather impractical).


so, in a way, reflection metadata is what makes my fancy C FFI possible.

this part was made to work fairly well, even if, admittedly, this is 
something of a fairly limited scope.



much larger "big concept" things though would likely require "big 
concept" metadata, and this is where the pain would begin.


with FFI gluing, the task is simpler, like:
"on one side, I have a 'string' or 'char[]' type, and on the other, a 
'char *' type, what do I do?...".


usually, the types are paired in a reasonably straightforward way, and 
the number of arguments match, ... so the code can either succeed (and 
generate the needed interface glue), or fail at doing so.
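
a very rough sketch of what this sort of pairing step looks like (the 
table entries and names here are made up for illustration, not from my 
actual FFI glue):

#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* map a script-side type name to a C-side type name; if a 1:1 marshaling
 * rule exists, the glue generator can proceed, otherwise it gives up. */
struct type_pair { const char *script_type; const char *c_type; };

static const struct type_pair pairs[] = {
    { "int",    "int"    },
    { "real",   "double" },
    { "string", "char *" },
};

static int can_marshal(const char *script_type, const char *c_type) {
    for (size_t i = 0; i < sizeof(pairs) / sizeof(pairs[0]); i++) {
        if (!strcmp(pairs[i].script_type, script_type) &&
            !strcmp(pairs[i].c_type, c_type))
            return 1;
    }
    return 0;   /* no rule: fail, and fall back to manual boilerplate */
}

int main(void) {
    printf("string -> char *: %s\n", can_marshal("string", "char *") ? "ok" : "no glue");
    printf("string -> float : %s\n", can_marshal("string", "float")  ? "ok" : "no glue");
    return 0;
}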


but, admittedly, figuring out something like "how do I make these two 
unrelated things interact?", where there is not a clear 1:1 mapping, is 
not such an easy task.



this is then where we get into APIs, ABIs, protocols, ... where each 
side defines a particular (and usually narrow) set of interactions, and 
interacts in a particular way.


and this is, itself, ultimately limited.

for example, a person can plug all manner of filesystems into the Linux 
VFS subsystem, but ultimately there is a restriction here: it has to be 
able to present itself as a hierarchical filesystem.


more so, given the way it is implemented in Linux, it has to be possible 
to enumerate the files, so sad as it is, you can't really just implement 
"the internet" as a Linux VFS driver (say, 
"/mnt/http/www.google.com/#hl=en&..."), albeit some other VFS-style 
systems allow this.
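
roughly, the shape of the constraint (a made-up interface for 
illustration; this is not the actual Linux VFS API): anything mounted 
behind an interface like this has to be able to fill in list_dir, which 
"the internet" can't meaningfully do:

#include <stddef.h>

struct fs_ops {
    int (*open)(const char *path);                     /* returns a handle, or -1  */
    int (*read)(int handle, void *buf, size_t size);   /* returns bytes read       */
    int (*close)(int handle);
    int (*list_dir)(const char *path,                  /* enumeration is mandatory */
                    int (*emit)(const char *name, void *ctx),
                    void *ctx);
};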



so, in this sense, it still requires "intelligence" to put the pieces 
together, and design the various ways in which they may interoperate...



I really don't know if this helps, or is just me going off on a tangent.



Paul.



*From:* BGB 
*To:* fonc@vpri.org
*Sent:* Tuesday, October 2, 2012 5:48:14 PM
*Subject:* Re: [fonc] How it is

On 10/2/2012 12:19 PM, Paul Homer wrote:

It always seems to be that each new generation of programmers
goes straight for the low-hanging fruit, ignoring that most of it
has already been solved many times over. Meanwhile the real
problems remain. There has been progress, but over the couple of
decades I've been working, I've always felt that it was '2 steps
forward, 1.99 steps back".



it depends probably on how one measures things, but I don't think
it is quite that bad.

more like, I suspect, a lot has to do with pain-threshold:
people will clean things up so long as they are sufficiently
painful, but once this is achieved, people no longer care.

the rest is people mostly recreating the past, often poorly,
usually under the idea "this time we will do it right!", often
without looking into what the past technologies did or did not do
well engineering-wise.

or, they end up trying for "something different", but usually this
turns out to be recreating something which already exists and
turns out to typically be a dead-end (IOW: where many have gone
before, and failed). often the people will think "why has no one
done it before this way?" but, usually they have, and usually it
didn't turn out well.

so, a blind "rebuild starting from nothing" probably wont achieve
much.
like, it requires taking account of history to improve on it
(classifying various options and design choices, ...).


it is like trying to convince other language/VM
designers/implementers that expecting the end programmer to have
to write piles of boilerplate to interface with C is a problem
which should be addressed, but people just go and use terrible
APIs usually based on "registering" the C callbacks with the VM
(or they devise something like JNI or JNA and congratulate
themselves, rather than being like "this still kind of sucks").

though in a way it sort of makes sense:
many language designers end up thinking like "this language will
replace C anyways, why bother to have a half-decent FFI?...".
whereas it is probably a minority position to design a language
and VM with the attitude "C and C++ aren'

Re: [fonc] How it is

2012-10-02 Thread BGB

On 10/2/2012 5:48 PM, Pascal J. Bourguignon wrote:

BGB  writes:


On 10/2/2012 12:19 PM, Paul Homer wrote:

 It always seems to be that each new generation of programmers goes
 straight for the low-hanging fruit, ignoring that most of it has
 already been solved many times over. Meanwhile the real problems
 remain. There has been progress, but over the couple of decades
 I've been working, I've always felt that it was '2 steps forward,
 1.99 steps back".

it depends probably on how one measures things, but I don't think it
is quite that bad.

more like, I suspect, a lot has to do with pain-threshold: people will
clean things up so long as they are sufficiently painful, but once
this is achieved, people no longer care.

the rest is people mostly recreating the past, often poorly, usually
under the idea "this time we will do it right!", often without looking
into what the past technologies did or did not do well
engineering-wise.

or, they end up trying for "something different", but usually this
turns out to be recreating something which already exists and turns
out to typically be a dead-end (IOW: where many have gone before, and
failed). often the people will think "why has no one done it before
this way?" but, usually they have, and usually it didn't turn out
well.

One excuse for this however, is that sources for old research projects
are not available generally, the more so for failed projects. At most,
there's a paper describing the project and some results, but no source,
much less machine readable sources.  (The fact is that those sources
were on punch cards or other unreadable media).


a lot of this holds for things which are much more recent as well.



so, a blind "rebuild starting from nothing" probably wont achieve
much.  like, it requires taking account of history to improve on it
(classifying various options and design choices, ...).

Sometimes while not making great scientific or technological advances,
it still improves things.  Linus wanted to learn unix and wrote Linux
and Richard wanted to have the sources and wrote GNU, and we get
GNU/Linux which is better than the other unices.


well, except, in both of these cases, they were taking account of things 
which happened before:

both of them knew about, and were basing their design efforts off of, Unix.


the bigger problem is not with people being like "I am going to write my 
own version of X", but, rather, a person running into the problem 
without really taking into account that "X" ever existed, or without 
putting any effort into understanding how it worked.




it is like trying to convince other language/VM designers/implementers
that expecting the end programmer to have to write piles of
boilerplate to interface with C is a problem which should be
addressed, but people just go and use terrible APIs usually based on
"registering" the C callbacks with the VM (or they devise something
like JNI or JNA and congratulate themselves, rather than being like
"this still kind of sucks").

though in a way it sort of makes sense: many language designers end up
thinking like "this language will replace C anyways, why bother to
have a half-decent FFI?...". whereas it is probably a minority
position to design a language and VM with the attitude "C and C++
aren't going away anytime soon".

but, at least I am aware that most of my stuff is poor imitations of
other stuff, and doesn't really do much of anything actually original,
or necessarily even all that well, but at least I can try to improve
on things (like, rip-off and refine).

even, yes, as misguided and wasteful as it all may seem sometimes...

in a way it can be distressing though when one has created something
that is lame and ugly, but at the same time is aware of the various
design tradeoffs that have caused them to design it that way (like, a
cleaner and more elegant design could have been created, but might
have suffered in another way).

in a way, it is a slightly different experience I suspect...

I would say that for one thing the development of new ideas would have
to be done in autarcy: we don't want and can't support old OSes and old
languages, since the fundamental principles will be different.

But then I'd observe the fate of those different systems, even with a
corporation such as IBM backing them, such as OS/400, or BeOS.  Even if
some of them could find a niche, they remain quite confidential.


yeah. the problem is, "a new thing" is hard.
it is one thing to sell, for example, an x86 chip with a few more 
features hacked on, and quite another to try to sell something like an 
Itanium.


people really like their old stuff to keep on working, and for better or 
worse, it makes sense to keep the new thing as a backwards-compatible 
extension.




On the other hand, more or 

Re: [fonc] How it is

2012-10-02 Thread BGB

On 10/2/2012 12:19 PM, Paul Homer wrote:
It always seems to be that each new generation of programmers goes 
straight for the low-hanging fruit, ignoring that most of it has 
already been solved many times over. Meanwhile the real problems 
remain. There has been progress, but over the couple of decades I've 
been working, I've always felt that it was '2 steps forward, 1.99 
steps back".




it depends probably on how one measures things, but I don't think it is 
quite that bad.


more like, I suspect, a lot has to do with pain-threshold:
people will clean things up so long as they are sufficiently painful, 
but once this is achieved, people no longer care.


the rest is people mostly recreating the past, often poorly, usually 
under the idea "this time we will do it right!", often without looking 
into what the past technologies did or did not do well engineering-wise.


or, they end up trying for "something different", but usually this turns 
out to be recreating something which already exists and turns out to 
typically be a dead-end (IOW: where many have gone before, and failed). 
often the people will think "why has no one done it before this way?" 
but, usually they have, and usually it didn't turn out well.


so, a blind "rebuild starting from nothing" probably wont achieve much.
like, it requires taking account of history to improve on it 
(classifying various options and design choices, ...).



it is like trying to convince other language/VM designers/implementers 
that expecting the end programmer to have to write piles of boilerplate 
to interface with C is a problem which should be addressed, but people 
just go and use terrible APIs usually based on "registering" the C 
callbacks with the VM (or they devise something like JNI or JNA and 
congratulate themselves, rather than being like "this still kind of sucks").


though in a way it sort of makes sense:
many language designers end up thinking like "this language will replace 
C anyways, why bother to have a half-decent FFI?...". whereas it is 
probably a minority position to design a language and VM with the 
attitude "C and C++ aren't going away anytime soon".



but, at least I am aware that most of my stuff is poor imitations of 
other stuff, and doesn't really do much of anything actually original, 
or necessarily even all that well, but at least I can try to improve on 
things (like, rip-off and refine).


even, yes, as misguided and wasteful as it all may seem sometimes...


in a way it can be distressing though when one has created something 
that is lame and ugly, but at the same time is aware of the various 
design tradeoffs that have caused them to design it that way (like, a 
cleaner and more elegant design could have been created, but might have 
suffered in another way).


in a way, it is a slightly different experience I suspect...



Paul.


*From:* John Pratt 
*To:* fonc@vpri.org
*Sent:* Tuesday, October 2, 2012 11:21:59 AM
*Subject:* [fonc] How it is

Basically, Alan Kay is too polite to say what
we all know to be the case, which is that things
are far inferior to where they could have been
if people had listened to what he was saying in the 1970's.

Inefficient chip architectures, bloated frameworks,
and people don't know at all.

It needs a reboot from the core, all of it, it's just that
people are too afraid to admit it.  New programming languages,
not aging things tied to the keyboard from the 1960's.

It took me 6 months to figure out how to write a drawing program
in cocoa, but a 16-year-old figured it out in the 1970's easily
with Smalltalk.
___
fonc mailing list
fonc@vpri.org 
http://vpri.org/mailman/listinfo/fonc




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Deployment by virus

2012-07-19 Thread BGB

On 7/19/2012 7:32 AM, Eugen Leitl wrote:

On Thu, Jul 19, 2012 at 02:28:18PM +0200, John Nilsson wrote:

More work relative to an approach where full specification and control is
feasible. I was thinking that in a not to distant future we'll want to
build systems of such complexity that we need to let go of such dreams.

It could be enough with one system. How do you evolve a system that has
emerged from some initial condition directed by user input. Even with only
one instance of it running you might have no way to recreate it so you must
patch it, and given sufficient complexity you might have no way to know how
a binary diff should be created.

It seems a great idea for evolutionary computation (GA/GP) but an
awful idea for human engineering.


it comes back to the idea of total complexity vs perceived or external 
complexity:
as the system gets larger and more complex, the level of abstraction 
tends to increase.


the person can still design and engineer a system, just typically 
working at a higher level of abstraction (and a fair amount of 
conceptual layering).


so, yeah, traditional engineering and development practices will 
probably continue on well into the foreseeable future.




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Deployment by virus

2012-07-18 Thread BGB

On 7/18/2012 9:14 PM, John Nilsson wrote:

Random as in where it's applied or random in what's applied?

I was thinking that the viral part was a means to counter the seeming
randomness in an otherwise chaotic system. Similar in spirit in how
gardening creates some amount of order and predictability, a gardener
who can apply DNA tweaks as well as pruning.

As I understand it CFEngine does something like this wile limited to
"simple" configuration.


it is not clear to me what the exact intention of this use of randomness 
is, but security comes to mind as a partial application.

there is a lot that could probably be done already regarding the use of 
randomization and recombination to increase the security of software, 
yet it is not widely used for whatever reason.


for example, if both the address space and image layouts were 
randomized, traditional buffer-overflow exploits would become largely 
ineffective.



another issue though is that, very often, it is high-level aspects of 
the software which are being exploited.


for example, I recently read some about security exploits which had been 
going on in the login servers for a game, where a minor issue 
(apparently involving operator precedence or similar) had resulted in 
people being able to do a hacked login by using a valid (but different) 
login to connect initially, and then swapping out for a different 
user-account name (the server didn't notice that the user-name was being 
substituted, and so would act as if this other user had logged in to the 
server).


now, could randomization help with this? this is much less certain.

similar also goes for cases where the software technically works 
as-written (or designed), but this design is itself flawed (and so can 
be exploited).



now, potentially, someone could try combining genetic programming with 
unit tests: say, there are servers off somewhere basically grinding 
against the code, randomly mutating it in various ways, but still 
accepting whatever variants manage to still pass the unit tests.


maybe elsewhere, there could be servers (probably running on the same 
physical hardware) designed mostly to run "assault tests", essentially 
feeding the first servers bad data, basically trying to find ways to 
"crack" or otherwise compromise the first set of programs.


say:
server A mutates the program, favoring variants which:
still pass the unit tests;
successfully resist attacks by server B (attacks which cause crashes, or 
gain unauthorized access).


server B tries to breed up attack programs to use against server A, 
favoring variants which are successful.


this is partly based on the observation that genetic programs are at 
least moderately good at breaking things and finding ways to cheat at 
tests (a lot of work is often needed to fiddle with and "harden" the 
test itself).


I guess this is basically analogous to a more advanced case of 
"randomized adaptive stress testing" (or "hardening"), where the 
stress-tester logic feeds garbage loads at the code in question, and may 
involve limited feedback logic (such as trying to adapt parameters 
in the direction of worse performance, or maybe just to weed out cases 
where crashing is likely).
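
a minimal sketch of the basic loop (parse_blob() here is a made-up 
stand-in for whatever code is being hardened; a real setup would also 
mutate known-good inputs, track crashes, adapt parameters, ...):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* toy function under test: expects a 4-byte magic plus a length byte */
static int parse_blob(const unsigned char *buf, size_t len) {
    if (len < 5 || memcmp(buf, "BLOB", 4) != 0) return -1;
    if (buf[4] > len - 5) return -1;   /* would be an overrun if unchecked */
    return 0;
}

int main(void) {
    unsigned char buf[64];
    int accepted = 0, rejected = 0;
    srand((unsigned)time(NULL));

    /* feed random garbage at the parser and see whether it survives */
    for (int i = 0; i < 100000; i++) {
        size_t len = (size_t)(rand() % sizeof(buf));
        for (size_t j = 0; j < len; j++)
            buf[j] = (unsigned char)(rand() & 0xFF);
        if (parse_blob(buf, len) == 0) accepted++; else rejected++;
    }
    printf("accepted=%d rejected=%d (no crashes = good)\n", accepted, rejected);
    return 0;
}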


I have done testing of this sort to some extent in the past, but, sadly, 
my codebase is far too large and complicated to be reasonably tested in 
this way (far more often, I use hands-on interactive testing, often 
aided somewhat by interactive use of my scripting language).



most of my past tests had involved basically specialized bytecode or 
string-code formats, though a possibility could be to attempt to use a 
(functionally limited) abstraction over HLL syntax (maybe or maybe not 
token based, possibly also operator based), which could (potentially) 
result in more readable and/or usable output (for example, the output 
could be spit out as C or JS code or similar).


some of this sort of stuff could potentially be applied to game AIs 
(such as behavioral adaptation for NPCs), but thus far I have not 
personally done anything like this (others have done things like this 
though IIRC), partly as there isn't really any clear gain (gameplay 
wise) from doing so (at best, it would probably just make them harder to 
kill, as there is no good way to tune them for something like "player is 
having the most fun"). likewise this, in turn, leads mostly to 
interactive tuning


I make a lot of use of randomness for smaller things though.



BR,
John

On Thu, Jul 19, 2012 at 3:55 AM, Pascal J. Bourguignon
 wrote:

Joke apart, people are still resiting a lot to stochastic software.
One problem with random spreading of updates is that its random.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Component-based software

2012-07-18 Thread BGB

On 7/18/2012 4:04 PM, Miles Fidelman wrote:

Tomasz Rola wrote:

On Sun, 15 Jul 2012, Ivan Zhao wrote:

By "Victorian plumbing", I meant the standardization of the plumbing 
and
hardware components at the end of the 19th century. It greatly 
liberated
plumbers from fixing each broken toilet from scratch, to simply 
picking and

assembling off the shelf pieces.



There was (or even still is) a proposition to make software from
prefabricated components. Not much different to another proposition 
about
using prefabricated libraries/dlls etc. Anyway, seems like there is a 
lot

of component schools nowadays, and I guess they are unable to work with
each other - unless you use a lot of chewing gum and duct tape.



It's really funny, isn't it - how badly "software components" have 
failed.  The world is littered with "component libraries" of various 
sorts, that are unmitigated disasters.


Except. when it actually works.  Consider:
- all the various c libraries
- all the various java libraries
- all the various SDKs floating around
- cpan (perl)

Whenever we use an include statement, or run a make, we're really 
assembling from huge libraries of components.  But we don't quite 
think of it that way for some reason.




yeah.

a few factors I think:
how much is built on top of the language;
how much is "mandatory" when interacting with the system (basically, in 
what ways does it impose itself on the way the program is structured or 
works, what sorts of special treatment does it need when being used, ...).


libraries which tend to be more successful are those which operate at a 
level much closer to that of the base language, and which avoid placing 
too many special requirements on the code using the library (must always 
use memory-allocator X, object system Y, must register global roots with 
the GC, ...).
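
as a small sketch of what "not imposing" can look like (hypothetical 
code, not from any particular library): a container that takes 
caller-supplied allocation hooks can sit on top of plain malloc, an 
arena, or a GC'd heap, without forcing that choice on the rest of the 
program:

#include <stdlib.h>
#include <string.h>

typedef void *(*alloc_fn)(size_t);
typedef void  (*free_fn)(void *);

/* a tiny growable byte buffer; the caller decides where memory comes from */
typedef struct {
    unsigned char *data;
    size_t len, cap;
    alloc_fn alloc;
    free_fn  release;
} bytebuf;

static void bb_init(bytebuf *b, alloc_fn a, free_fn f) {
    b->data = NULL; b->len = b->cap = 0;
    b->alloc   = a ? a : malloc;   /* default to malloc/free if none given */
    b->release = f ? f : free;
}

static int bb_push(bytebuf *b, unsigned char c) {
    if (b->len == b->cap) {
        size_t ncap = b->cap ? b->cap * 2 : 16;
        unsigned char *nd = b->alloc(ncap);
        if (!nd) return -1;
        if (b->data) { memcpy(nd, b->data, b->len); b->release(b->data); }
        b->data = nd;
        b->cap  = ncap;
    }
    b->data[b->len++] = c;
    return 0;
}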


say, a person building a component library for C is like:
"ok, I will build a GC and some OO facilities";
"now I am going to build some generic containers on top of said OO library";
"now I am going to write a special preprocessor to make it easier to use";
...

while ignoring issues like:
"what if the programmer still wants or needs to use something like 
malloc or mmap?";
"how casual may the programmer be regarding the treatment of object 
references?";
"what if the programmer wants to use the containers without using the OO 
facilities?";
"what if for some reason the programmer wants to write code which does 
not use the preprocessor, and call into code which does?";

...

potentially, the library can build a large collection of components, but 
they "don't play well with others" (say, due to large amounts of 
internal dependencies and assumptions in the design). this means that, 
potentially, interfacing a codebase built on the library with another 
codebase may require an inordinate amount of additional pain.



in my case I have tried to, where possible, avoid these sorts of issues 
in my own designs, partly by placing explicit restrictions on what sorts 
of internal dependencies and assumptions are allowed when writing 
various pieces of code, and trying to keep things, for the most part, 
"fairly close to the metal".



or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-17 Thread BGB

On 7/17/2012 9:47 PM, David-Sarah Hopwood wrote:

[Despite my better judgement I'm going to respond to this even though it is
seriously off-topic.]


in all likelihood, the topic will probably end pretty soon anyways.
don't really know how much more can really be said on this particular 
subject anyways.


but, yeah, probably this topic has gone on long enough.



On 17/07/12 17:18, BGB wrote:

an issue though is that society will not tend to see a person as they are as a 
person, but
will rather tend to see a person in terms of a particular set of stereotypes.

"Society" doesn't "see people as" anything. We do live in/with a culture where
stereotyping is commonplace, but the metonymy of letting the society stand for 
the
people in it is inappropriate here, because it is individual people who 
*choose* to
see other people in terms of stereotypes, or choose not to do so.

You're also way too pessimistic about the extent to which most reasonably 
well-educated
people in practice permit cultural stereotypes to override independent thought. 
Most
people are perfectly capable of recognizing stereotypes -- even if they 
sometimes need a
little prompting -- and understanding what is wrong with them.


a big factor here is how well one person knows another.
stereotypes and generalizations are a much larger part of the 
interaction process when dealing with people who are either strangers or 
casual acquaintances.


if the person is known by much more than a name and a face and a few 
other bits of general information, yes, then maybe they will take a 
little more time to be a little more understanding.




I speak from experience: it is entirely possible to live your life in a way 
that is
quite opposed to many of those cultural stereotypes that you've expressed 
concerning
sexuality, gender expression, employment, reproductive choices, etc., and still 
be
accepted as a matter of course by the vast majority of people. As for the 
people who don't
accept that, *it's they're fault* that they don't get it. No excuses of the form
"society made me think that way".


I think it depends some on the cultural specifics as well, since how 
well something may go over may depend a lot on where a person is, and 
who they are interacting with.


if a person is located somewhere where these things are fairly common 
and generally considered acceptable (for example: California), it may go 
over a lot easier with people than somewhere where it is less commonly 
accepted (for example: Arkansas or Alabama or similar).


likewise, it may go over a bit easier with people who are generally more 
accepting of these forms of lifestyle (such as more non-religious / 
secular type people), than it will with people who are generally less 
accepting of these behaviors (say, people with a more conservative leaning).



(I would prefer not go too much more into this, since yeah, here 
generally isn't really the place for all this.).



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-17 Thread BGB

On 7/17/2012 8:56 AM, Pascal J. Bourguignon wrote:

BGB  writes:


but you can't really afford a house without a job, and can't have a
job without a car (so that the person can travel between their job and
their house).

Job is an invention of the Industrial era.  AFAIK, our great great grand
parents had houses.


yes, but OTOH, they probably also didn't have things like utility bills 
and property tax.



the only real way to eliminate some sort of need for income would 
involve also eliminating the need to pay bills and taxes.


however, the big issue is whether this would be any better than, say, a 
free-market capitalist system, or, say, the current mixed-economy system.




I don't really think it is about gender role or stereotypes, but
rather it is more basic:
people mostly operate in terms of the pursuit of their best personal
interests.

Ok.


so, typically, males work towards having a job, getting lots money,
... and will choose females based mostly how useful they are to
themselves (will they be faithful, would they make a good parent,
...).

Well it's clear that it's not their best interest to do that: only about
40% males reproduce in this setup.


it is in the best interest of those who are successful.

if a person works in their own best interests, it may benefit 
themselves, but this is not to say that it necessarily benefits everyone.


I suspect, though, that modern reproductive statistics are probably a 
bit better than this, given that general survival and mate-finding are 
probably a bit more balanced in modern times (as well as most westernized 
societies holding negative views on things like polygamy, which was a lot 
more common in past societies, ...).




in this case, then society works as a sort of sorting algorithm, with
"better" mates generally ending up together (rich business man with
trophy wife), and worse mates ending up together (poor looser with a
promiscuous or otherwise undesirable wife).

And this is also the problem, not only for persons, but for society: the
sorting is done on criteria that are bad.  Perhaps they were good to
survive in the savanah, but they're clearly an impediment to develop a
safe technological society.


whether or not it is "good" is a separate issue, but this is largely how 
society seems to work, from what I can tell.


similarly, not everyone is equal in terms of abilities, or in various 
factors of desirability, ...



the result then is usually that people with higher desirability tend to 
end up together, and those with lower desirability tend to end up with 
whoever is left over (though, it seems to take a bit longer, as many 
people also tend to try to "aim high", and will often reject those of 
similar social standing).


for example, there are also many females who basically end up remaining 
alone, waiting for some "Mr. Right" to come along, but if the bar is set 
too high, no one will ever come along who is "good enough" for them.


I don't personally believe that the genders are all that different in 
terms of how they behave, nor necessarily in terms of relative ability, 
but may differ more in terms of what they look for, for example, due to 
things like societal expectations and similar.


but, likely, societal expectations is the hard one.
very possibly, much of the current media may actually serve to make this 
problem worse.




Well, perhaps.  This is not my way to learn how to program (once really)
or to learn a new programming language.

dunno, I learned originally partly by hacking on pre-existing
codebases, and by cobbling things together and seeing what all did and
did not work (and was later partly followed by looking at code and
writing functionally similar mock-ups, ...).

some years later, I started writing a lot more of my own code, which
largely displaced the use of cobbled-together code.

from what I have seen in code written by others, this sort of cobbling
seems to be a fairly common development process for newbies.


I learn programming languages basically by reading the reference, and by
exploring the construction of programs from the language rules.



this is more of an "advanced" strategy though, as in, probably something 
used by someone already generally familiar with the topic.



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-17 Thread BGB

On 7/17/2012 11:12 AM, Loup Vaillant wrote:

Pascal J. Bourguignon a écrit :

BGB  writes:

dunno, I learned originally partly by hacking on pre-existing
codebases, and by cobbling things together and seeing what all did and
did not work (and was later partly followed by looking at code and
writing functionally similar mock-ups, ...).

some years later, I started writing a lot more of my own code, which
largely displaced the use of cobbled-together code.

from what I have seen in code written by others, this sort of cobbling
seems to be a fairly common development process for newbies.



I learn programming languages basically by reading the reference, and by
exploring the construction of programs from the language rules.


When I started learning programming on my TI82 palmtop in high school, 
I started by copying programs verbatim.  Then, I gradually started to 
do more and more from scratch. Like BGB.


But when I learn a new language now, I do read the reference (if any), 
and construct programs from the language rules. Like Pascal.


Maybe there's two kinds of beginners: beginners in programming itself, 
and beginners in a programming language.




yep.


likewise, many people who aren't really programmers, but are just trying 
to get something done, probably aren't really going to take a formal 
approach to learning programming, but are more likely going to try to 
find code fragments off the internet they can cobble together to make 
something that basically works.


sometimes, it takes a while to really make the transition, from being 
someone who wrote a lot of what they had by cobbling and imitation, to 
being someone who really understands how it all actually works.




Loup.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-17 Thread BGB

On 7/17/2012 9:04 AM, Pascal J. Bourguignon wrote:

David-Sarah Hopwood  writes:


On 17/07/12 02:15, BGB wrote:

so, typically, males work towards having a job, getting lots money, ... and 
will choose
females based mostly how useful they are to themselves (will they be faithful, 
would they
make a good parent, ...).

meanwhile, females would judge a male based primarily on their income, 
possessions,
assurance of continued support, ...

not that it is necessarily that way, as roles could be reversed (the female 
holds a job),
or mutual (both hold jobs). at least one person needs to hold a job though, and 
by
default, this is the social role for a male (in the alternate case, usually the 
female is
considerably older, which has a secondary limiting factor in that females have 
a viable
reproductive span that is considerably shorter than that for males, meaning 
that the
older-working-female scenario is much less likely to result in offspring, ...).

in this case, then society works as a sort of sorting algorithm, with "better" 
mates
generally ending up together (rich business man with trophy wife), and worse 
mates ending
up together (poor looser with a promiscuous or otherwise undesirable wife).

Way to go combining sexist, classist, ageist, heteronormative, cisnormative, 
ableist
(re: fertility) and polyphobic (equating multiple partners with undesirability)
assumptions, all in the space of four paragraphs. I'm not going to explain in 
detail
why these are offensive assumptions, because that is not why I read a mailing 
list
that is supposed to be about the "Fundamentals of New Computing". Please stick 
to
that topic.

It is, but it is the reality, and the reason of most of our problems
too.  And it's not by putting an onus on the expression of these choices
that you will repress them: they come from the deepest, our genes and
the genetic selection that has been applied on them for millena.

My point here being that what's needed is a change in how selection of
reproductive partners is done, and obviously, I'm not considering doing
it based on money or political power.   Of course, I have none of either
:-)


yeah.

don't think that this is me saying that everything "should" operate this 
way, rather that, at least from my observations, this is largely how it 
does already. (whether it is good or bad then is a separate and 
independent issue).


the issue with a person going outside the norm may not necessarily be 
that it is bad or wrong for them to do so, but that it may risk putting 
them at a social disadvantage.


in the original context, it was in relation to a person trying to 
maximize their own pursuit of self-interest, which would tend to 
probably overlap somewhat with adherence to societal norms.



granted, that is not to say, for example, that everything I do is 
socially advantageous:
for example, being a programmer / "computer nerd" carries its own set of 
social stigmas and negative stereotypes (and in many ways I still hold 
minority views on things, ...).


an issue though is that society will not tend to see a person as they 
are as a person, but will rather tend to see a person in terms of a 
particular set of stereotypes.




And yes, it's perfectly on-topic, if you consider how science and
technology developments are directed.  Most of our computing technology
has been created for war.


yes.



Or said otherwise, why do you think this kind of refundation project
hasn't the same kind of resources allocated to the commercial or
military projects?



I am not entirely sure I understand the question here.

if you mean, why don't people go and try to remake society in a 
different form?

well, I guess that would be a hard one.

about as soon as people start trying to push for any major social 
changes, there is likely to be a large amount of resistance and backlash.


it is much like, if you have one person pushing for "progressive" 
ideals, you will end up with another pushing for "conservative" ideals, 
typically with relatively little net change. (so, sort of a societal 
equal-and-opposite effect). (by "progressive" and "conservative" here, I 
don't necessarily mean them exactly as they are used in current US 
politics, but more "in general").


there will be changes though in a direction where nearly everyone agrees 
that this is the direction they want to go, but people fighting or 
trying to impose their ideals on the other side is not really a good 
solution. people really don't like having their personal freedoms and 
choices being hindered, or having their personal ideals and values torn 
away simply because this is how someone else feels things "should" be 
(the problem is that "promoting" something for one person also tends to 
come at the cost of "imposing" it on someone else).


a better question wo

Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread BGB

On 7/16/2012 8:59 PM, David-Sarah Hopwood wrote:

On 17/07/12 02:15, BGB wrote:

so, typically, males work towards having a job, getting lots money, ... and 
will choose
females based mostly how useful they are to themselves (will they be faithful, 
would they
make a good parent, ...).

meanwhile, females would judge a male based primarily on their income, 
possessions,
assurance of continued support, ...

not that it is necessarily that way, as roles could be reversed (the female 
holds a job),
or mutual (both hold jobs). at least one person needs to hold a job though, and 
by
default, this is the social role for a male (in the alternate case, usually the 
female is
considerably older, which has a secondary limiting factor in that females have 
a viable
reproductive span that is considerably shorter than that for males, meaning 
that the
older-working-female scenario is much less likely to result in offspring, ...).

in this case, then society works as a sort of sorting algorithm, with "better" 
mates
generally ending up together (rich business man with trophy wife), and worse 
mates ending
up together (poor loser with a promiscuous or otherwise undesirable wife).

Way to go combining sexist, classist, ageist, heteronormative, cisnormative, 
ableist
(re: fertility) and polyphobic (equating multiple partners with undesirability)
assumptions, all in the space of four paragraphs. I'm not going to explain in 
detail
why these are offensive assumptions, because that is not why I read a mailing 
list
that is supposed to be about the "Fundamentals of New Computing". Please stick 
to
that topic.



sorry to anyone who was offended by any of this, it was not my intent to 
cause any offense here.





Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread BGB

On 7/16/2012 3:15 PM, Pascal J. Bourguignon wrote:

BGB  writes:


and, one can ask: does your usual programmer actually even need to
know who the past US presidents were and what things they were known
for? or the differences between Ruminant and Equine digestive systems
regarding their ability to metabolize cellulose?

maybe some people have some reason to know, most others don't, and for
them it is just the educational system eating their money.

My answer is that it depends on what civilization you want.  If you want
a feudal civilization with classes, indeed, some people don't have to
know.  Let's reserve weapon knowledge to the lords, letter and cheese
knowledge to the monks, agriculture knowledge to the peasants.

Now if you prefer a technological civilization including things like
nuclear power (but a lot of other science applications are similarly
"delicate"), then I argue that you need widespread scientific, technical
and general culture (history et al) knowledge.

Typically, the problems the Japanese have with their nuclear power
plants, and not only since Fukushima, are due to the lack of general and
scientific knowledge, not in the nuclear power plant engineers, but in
the general population, including politicians.


the issue is basically that humans have finite time and mental capacity, 
so no single person can know everything about everything.


a person will likely either have very general knowledge across a wide range 
of fields, which ultimately hinders them because they lack any specific 
skills, or they will need to be very specialized, with deep knowledge of a 
particular topic, largely to the exclusion of most other areas.


the issue then is that a person specializing in one area, who tries to learn 
other information which is of little use to them, will ultimately be 
using up their memory's storage capacity for stuff which is not 
particularly useful to them, as well as using up time that could be 
spent improving their skills at their particular craft.



granted, this could change if/when computer augmentation allows people 
to have a near unlimited amount of long-term memory storage (rather than 
having to pick and choose what they want to commit to memory, they can 
store much of their knowledge on a very large implanted SSD or similar).




so, the barrier to entry is fairly high, often requiring people who
want to be contributors to a project to have the same vision as the
project leader. sometimes leading to an "inner circle of yes-men", and
making the core developers often not accepting of, and sometimes
adversarial to, the positions held by groups of fringe users.

This concerns only CS/programmer professionals.  This is not the
discussion I was having.


who says casual developers would not be project contributors?...

the more serious ones would likely be part of the inner-circle or the 
project leaders.




so, the main goal in life is basically finding employment and basic
job competence, mostly with education being as a means to an end:
getting higher paying job, ...

Who said that?

I think this is a given.

people need to live their lives, and to do this, they need a job and
money (and a house, car, ...).

No.  In what you cite, the only thing needed is a house.


but you can't really afford a house without a job, and can't have a job 
without a car (so that the person can travel between their job and their 
house).



What people need are food, water, shelter, clothes, some energy for a
few appliances.  All the rest is not NEEDED, but may be convenient.

Now specific activities or persons may require additional specific
things.  Eg. we programmers need an internet connection and a computer.
Other people may have some other specific needs.  But a job or money is
of use to nobody (unless you want to run some pack rat race).


jobs and money are needed to have a place to live, otherwise, how will 
the person pay for their cost of living expenses (taxes, mortgage/rent, 
utility bills, food, ...)?





likewise goes for finding a mate: often, potential mates may make
decisions based largely on how much money and social status a person
has, so a person who is less well off will be overlooked (well, except
by those looking for short-term hook-ups and flings, who usually more
care about looks and similar, and typically just go from one
relationship to the next).

This is something to be considered too, but even if it's greatly
influenced by genes,
http://www.psy.fsu.edu/~baumeistertice/goodaboutmen.htm
I'm of the opinion that humans are not beasts, and we can also run a
cultural "program" superseding our genetic programming in a certain
measure.  (Eg. women don't necessarily have to send 2/3 of men to war or
prison and reproduce with, ie. select, only 1/3 of psychopathic males).
Now of course we're not on the way to any kind of improvement there.
But this is not th

Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread BGB

On 7/16/2012 11:22 AM, Pascal J. Bourguignon wrote:

BGB  writes:


general programming probably doesn't need much more than pre-algebra
or maybe algebra level stuff anyways, but maybe touching on other
things that are useful to computing: matrices, vectors, sin/cos/...,
the big sigma notation, ...

Definitely.  Programming needs discrete mathematics and statistics much
more than the mathematics that are usually taught (which are more useful
eg. to physics).


yes, either way.

college experience was basically like:
go to math classes, which tend to be things like Calculus and similar;
brain melting ensues;
no degree earned.

then I had to move, and the college here would require taking a bunch 
more different classes, and I would still need the math classes, making 
it not terribly worthwhile to try.




but, a person can get along pretty well provided they get basic
literacy down fairly solidly (can read and write, and maybe perform
basic arithmetic, ...).

most other stuff is mostly optional, and won't tend to matter much in
daily life for most people (and most will probably soon enough forget
anyways once they no longer have a school trying to force it down
their throats and/or needing to "cram" for tests).

No, no, no.  That's the point of our discussion.  There's a need to
increase "computer"-literacy, actually "programming"-literacy of the
general public.


well, I mean, they could have a use for computer literacy, ... depending 
on what they are doing.
but, do we need all the other stuff, like "US History", "Biology", 
"Environmental Science", ... that comes along with it, and which doesn't 
generally transfer from one college to another?...


they are like, "no, you have World History, we require US History" or 
"we require Biology, but you have Marine Biology".


and, one can ask: does your usual programmer actually even need to know 
who the past US presidents were and what things they were known for? or 
the differences between Ruminant and Equine digestive systems regarding 
their ability to metabolize cellulose?


maybe some people have some reason to know, most others don't, and for 
them it is just the educational system eating their money.




The situation where everybody would be able (culturally, with a basic
knowing-how, an with the help of the right software tools and system) to
program their applications (ie. something totally contrary to the
current Apple philosophy), would be a better situation than the one
where people are dumbed-down and are allowed to use only canned software
that they cannot inspect and adapt to their needs.


yes, but part of the problem here may be more about the way the software 
industry works, and general culture, rather than strictly about education.


in a world where typically only closed binaries are available, and where 
messing with what is available may risk a person facing legal action, 
then it isn't really a good situation.


likewise, the main way in which newbies tend to develop code is by 
copy-pasting from others and by making tweaks to existing code and data, 
again, both of which may put a person at legal risk (due to copyright, 
...), and often results in people creating programs which they don't 
actually have the legal right to possess, much less distribute or sell to 
others.



yes, granted, it could be better here.
FOSS sort of helps, but still has limitations.

something like, the ability to move code between a wider range of 
"compatible" licenses, or safely discard the license for "sufficiently 
small" code fragments (< 25 or 50 or 100 lines or so), could make sense.



all this is in addition to technical issues, like reducing the pain and 
cost by which a person can go about making changes (often, it requires 
the user to be able to rebuild the program from source before they have 
much hope of being able to mess with it, 
limiting this activity more to "serious" developers).


likewise, it is very often overly painful to make contributions back 
into community projects, given:
usually only core developers have write access to the repository (for 
good reason);

fringe developers typically submit changes via diff patches;
usually this itself requires communication with the developers (often 
via subscribing to a developer mailing-list or similar);
nevermind the usual hassles of making the patches "just so", so that the 
core developers will actually look into them (they often get fussy over 
things like which switches they want used with diff, ...);

...

ultimately, this may mean that the vast majority of minor fixes will 
tend to remain mostly in the hands of those who make them, and not end 
up being committed back into the main branch of the project.


in other cases, it may lead to forks, mostly because non-core 
developers can't really deal w

Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-16 Thread BGB

On 7/16/2012 8:00 AM, Pascal J. Bourguignon wrote:

Miles Fidelman  writes:


Pascal J. Bourguignon wrote:

Miles Fidelman  writes:

And seems to have turned into something about needing to recreate the
homebrew computing milieu, and everyone learning to program - and
perhaps "why don't more people know how to program?"

My response (to the original question) is that folks who want to
write, may want something more flexible (programmable) than Word, but
somehow turning everyone into C coders doesn't seem to be the answer.

Of course not.  That's why there are languages like Python or Logo.



More flexible tools (e.g., HyperCard, spreadsheets) are more of an
answer -  and that's a challenge to those of us who develop tools.
Turning writers, or mathematicians, or artists into coders is simply a
recipe for bad content AND bad code.

But everyone learns mathematics, and even if they don't turn out
professional mathematicians, they at least know how to make a simple
demonstration (or at least we all did when I was in high school, so it's
possible).

Similarly, everyone should learn CS and programming, and even if they
won't be able to manage software complexity at the same level as
professional programmers (ought to be able to), they should be able to
write simple programs, at the level of emacs commands, for their own
needs, and foremost, they should understand enough of CS and programming
to be able to have meaningful expectations from the computer industry
and from programmers.

Ok... but that begs the real question: What are the core concepts that
matter?

There's a serious distinction between computer science, computer
engineering, and programming.  CS is theory, CE is architecture and
design, programming is carpentry.

In math, we start with arithmetic, geometry, algebra, maybe some set
theory, and go on to trigonometry, statistics, calculus, .. and
pick up some techniques along the way (addition, multiplication, etc.)


in elementary school, I got out of stuff, because I guess the school 
figured my skills were better spent doing IT stuff, so that is what I 
did (and I guess also because, at the time, I was generally a bit of a 
"smart kid" compared to a lot of the others, since I could read and do 
arithmetic pretty well already, ...).


by high school, it was the "Pre-Algebra / Algebra 1/2" route (basically, 
the lower route), so essentially the entirety of high school was spent 
solving linear equations (well, apart from the first one, which was 
mostly about hammering out the concept of variables and PEMDAS).


took "151A" at one point, which was basically like algebra + matrices + 
complex numbers + big sigma, generally passed this.



tried to do other higher-level college level math classes later, total 
wackiness ensues, me having often little idea what is going on and 
getting lost as to how to actually do any of this stuff.


although, on the up-side, I did apparently manage to impress some people 
in a class by mentally calculating the inverse of a matrix... (nevermind 
ultimately bombing on nearly everything else in that class).



general programming probably doesn't need much more than pre-algebra or 
maybe algebra level stuff anyways, but maybe touching on other things 
that are useful to computing: matrices, vectors, sin/cos/..., the big 
sigma notation, ...




In science, it's physics, chemistry, biology,  and we learn some
lab skills along the way.

What are the core concepts of CS/CE that everyone should learn in
order to be considered "educated?"  What lab skills?  Note that there
still long debates on this when it comes to college curricula.

Indeed.  The French National Education is answering to that question
with its educational "programme", and the newly edited manual.

https://wiki.inria.fr/sciencinfolycee/TexteOfficielProgrammeISN

https://wiki.inria.fr/wikis/sciencinfolycee/images/7/73/Informatique_et_Sciences_du_Num%C3%A9rique_-_Sp%C3%A9cialit%C3%A9_ISN_en_Terminale_S.pdf



can't say much on this.


but, a person can get along pretty well provided they get basic literacy 
down fairly solidly (can read and write, and maybe perform basic 
arithmetic, ...).


most other stuff is mostly optional, and won't tend to matter much in 
daily life for most people (and most will probably soon enough forget 
anyways once they no longer have a school trying to force it down their 
throats and/or needing to "cram" for tests).


so, the main goal in life is basically finding employment and basic job 
competence, mostly with education being as a means to an end: getting 
higher paying job, ...


(so, person pays colleges, goes through a lot of pain and hassle, gets a 
degree, and employer pays them more).




Some of us greybeards (or fuddy duddies if you wish) argue for
starting with fundamentals:
- boolean logic
- information theory
- theory of computing
- hardware design
- machine language programming (play with microcontrollers in the lab)
- operating systems
- language design

Re: [fonc] Historical lessons to escape the current sorry state of personal computing?

2012-07-15 Thread BGB

On 7/14/2012 5:11 PM, Iian Neill wrote:

Ivan,

I have some hope for projects like the Raspberry Pi computer, which aims to 
replicate the 'homebrew' computing experience of the BBC Micro in Britain in 
the 1980s. Of course, hardware is only part of the equation -- even versatile 
hardware that encourages electronic tinkering -- and the languages and software 
that are bundled with the Pi will be key.


yeah, hardware is one thing, software another.



Education is ultimately the answer, but what kind of education? Our computer 
science education is itself a product of our preconceptions of the field of 
computing, and to some degree fails to bridge the divide between the highly skilled 
technocratic elite and the personal computer consumer. The history of home 
computing in the Eighties shows the power of cheap hardware and practically 'bare 
metal' systems that are conceptually graspable. And I suspect the fact that BASIC 
was an interpreted language had a lot to do with fostering experimentation & 
play.


maybe it would help if education people would stop thinking that CS is 
some sort of extension of Calculus or something... (and stop assigning 
scary-level math classes as required for CS majors). this doesn't really 
help for someone whose traditional math skills sort of run dry much past 
the level of algebra (and who finds things like set-theory to not really 
make any sense, where these classes like to use it like gravy they put 
on everything... the class about SQL, yes, set theory was mentioned 
there as well, and put up on the board, but at least for that class it was 
almost never mentioned again once the actual SQL part got going and the 
teacher made his way past the select statement).


along with "programming" classes which might leave a person for the 
first few semesters using pointy-clicky graphical things, and drawing 
flowcharts in Visio or similar (and/or writing out "desk checks" on paper).


now, how might it be better taught in schools?...
I don't know.


maybe something that up front goes into the basic syntax and behavior of 
the language, then has people go write stuff, and is likewise maybe 
taught starting earlier.


for example, I started learning programming in elementary school (on my 
own), and others could probably do likewise.



classes could maybe teach from a similar basis: like, here is the 
language, and here is what you can type to start making stuff happen, 
... (with no flowcharting, desk-checks, or set-notation, anywhere to be 
seen...).


the rest then is basically "climbing up the tower" and learning about 
various stuff...

like, say, if there were a semester-long class for the OpenGL API, ...



Imagine if some variant of Logo had been built in, that allowed access to the 
machine code subroutines in the way BASIC did...


could be nifty.
I don't really think the problem is as much about language though, as 
much as it is about disinterest + perceived difficulty + lack of 
sensible education strategies + ...




Regards,
Iian


Sent from my iPhone

On 15/07/2012, at 7:41 AM, Miles Fidelman  wrote:


Ivan Zhao wrote:

45 years after Engelbart's demo, we have a read-only web and Microsoft Word 2011, a gulf between 
"users" and "programmers" that can't be wider, and the scariest part is that 
most people have been indoctrinated long enough to realize there could be alternatives.

Naturally, this is just history repeating itself (a la pre-Gutenberg scribes, 
Victorian plumbers). But my question is, what can we learn from these 
historical precedents, in order to consciously design our escape path? A 
revolution? An evolution? An education?

HyperCard meets the web + P2P?

--
In theory, there is no difference between theory and practice.
In practice, there is.    Yogi Berra







Re: [fonc] memristors and the changing landscape of systems architectures

2012-07-11 Thread BGB

On 7/11/2012 4:25 AM, Pascal J. Bourguignon wrote:

BGB  writes:


On 7/10/2012 8:53 PM, Daniel Gackle wrote:

 I watched the video and got excited too. Petabits of on-chip
 non-volatile storage? that also can do logic? That's more than a
 game changer.

same here, it seems like an interesting technology.

However, I'd bet on it because of the scale of the TiO2/TiO2-x device.
Other memristive devices weren't so small AFAIK.


yeah, this is a good point at least.

at least from what was said, if it can be done, does sound promising.



but, I am also left sometimes wondering about things like space-travel, 
cybernetics, and robotics as well, these
also sometimes being fields seemingly doomed to go nowhere fast (and one can wonder 
if/when anything "cool" will
ever really happen).

http://www.youtube.com/watch?v=W1czBcnX1Ww



well, that is something at least.


not meaning to be pessimistic, or if this is inappropriate for here:

a downside of most current (real-world) robotics is that they are 
functionally fairly limited, and generally only available to a small 
number of people (IOW: the people researching robotics).


the contrast would be if there were general-purpose robots which were 
available to the general public (sort of like how cars are commonly 
available, not necessarily cheap, but where people can still go and buy 
them).



yes, at least computers are cool, but then again, given that PCs have 
existed longer than I have, these aren't really all that new from my 
POV. most other current/modern technologies are, similarly, much older 
than I am.


granted, yes, technologies still change and improve over time...

for example, the world now would be pretty lame if the computers of now 
were the same as those which existed around the time I was born (a lot 
of stuff I am doing now probably wouldn't likely be possible or 
practical on the hardware which existed at the time).



or such...



Re: [fonc] memristors and the changing landscape of systems architectures

2012-07-10 Thread BGB

On 7/10/2012 8:53 PM, Daniel Gackle wrote:

I watched the video and got excited too. Petabits of on-chip
non-volatile storage? that also can do logic? That's more than a
game changer.



same here, it seems like an interesting technology.

large+fast non-volatile storage, effective FPGAs, neural-net processors, 
... all of this stuff at least sounds promising.




But it seems that HP's memristor claims are controversial within the
research community:

http://vixra.org/abs/1205.0004
http://www.slideshare.net/blaisemouttet/mythical-memristor

Some of the dispute is about priority, which may not matter so much; I
care less about *whom* I get massive on-chip non-volatile storage from
than that I get it at all. But that too appears to be under dispute
(e.g. "Myth #3" in the second link above).

I would love to hear more from people who know about this.



dunno.

I think the debate seems to be more who will be selling it and what 
technology will be used (or if it will replace current technologies), 
rather than whether or not there will be high-speed non-volatile storage.



also, neural net chips would be cool, provided they work well.

but, most likely it will do the usual thing and go with whatever gives a 
reasonable amount of capability for the least cost.



but, I am also left sometimes wondering about things like space-travel, 
cybernetics, and robotics as well, these also sometimes being fields 
seemingly doomed to go nowhere fast (and one can wonder if/when anything 
"cool" will ever really happen).


given what all has happened thus far (as long as I have been alive), it 
can raise doubts that much of anything notable or interesting will happen, 
and as well there is always the risk that in the future things may just fall 
apart or begin to backslide (society starts breaking down, and 
technology either stagnates or generally becomes worse). (sometimes I 
also wonder if 80s era cyberpunk is the future...).


then again, now is now, and all of this is likely the distant future.


or such...





On Tue, Jul 10, 2012 at 12:22 PM, David Barbour wrote:


Thanks for bringing this to my attention, Shawn. Real memristors
could seriously change the programming landscape, and have much
potential for directly embedding dataflow programming models and
neural networks.

I think object dispatch and imperative C programs won't be the
most effective use.





On Mon, Jul 9, 2012 at 11:23 PM, Shawn Morel <shawnmo...@me.com> wrote:

Just watched a very interesting talk on memristors:
https://www.youtube.com/watch?v=bKGhvKyjgLY&feature=related

I hadn't bothered going into very much detail so far - for
some reason, I thought memristors would end up being primarily
used as memory elements that supplant the traditional sram,
dram, HDD hierarchy. That on its own is kind of cool and would
probably help shift us away from files and more towards
long-lived objects.

The talk, however, describes ways that memristors can be
organized to be an arbitrary combination of switching, memory,
logic or even analog emulations of synaptic behaviour. The
talk touches briefly on compiling from C down to logic gates
(Russell's material implication). Some key aspects is that, as
opposed to FPGAs the "reprogramming" can take place in a very
short time and they addressing capabilities of a HW
associative memory are quite large.

For example,  it could take a few nanoseconds to create HW
N-way associative lookup - that's to say, I could on the fly
configure a piece of HW to actually represent object message
dispatch!

shawn






-- 
bringing s-words to a pen fight











Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 2:20 PM, Miles Fidelman wrote:

BGB wrote:


a problem is partly how exactly one defines "complex":
one definition is in terms of "visible complexity", where basically 
adding a feature causes code to become harder to understand, more 
tangled, ...


another definition, apparently more popular among programmers, is to 
simply obsess on the total amount of code in a project, and just 
automatically assume that a 1 Mloc project is much harder to 
understand and maintain than a 100 kloc project.


And there are functional and behavioral complexity - i.e., REAL 
complexity, in the information theory sense.


I expect that there is some correlation between minimizing visual 
complexity and lines of code (e.g., by using domain specific 
languages), and being able to deal with more complex problem spaces 
and/or develop more sophisticated approaches to problems.




a lot depends on what code is being abstracted, and how much code can be 
reduced by how much.


if the DSL makes a lot of code a lot smaller, it will have a good effect;
if it only makes a little code only slightly smaller, it may make the 
total project larger.



personally, I tend not to worry too much about total LOC, and am more 
concerned with how much personal effort is required (to 
implement/maintain/use it), and how well it will work (performance, 
memory use, reliability, ...).


but, I get a lot of general criticism for how I go about doing things...




Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 1:39 PM, Wesley Smith wrote:

If things are expanding then they have to get more complex, they encompass
more.

Aside from intuition, what evidence do you have to back this statement
up?  I've seen no justification for this statement so far.  Biological
systems naturally make use of objects across vastly different scales
to increase functionality with a much less significant increase in
complexity.  Think of how early cells incorporated mitochondria whole
hog to produce a new species.


in code, the latter example is often called "copy / paste".
some people demonize it, but if a person knows what they are doing, it 
can be used to good effect.


a problem is partly how exactly one defines "complex":
one definition is in terms of "visible complexity", where basically 
adding a feature causes code to become harder to understand, more 
tangled, ...


another definition, apparently more popular among programmers, is to 
simply obsess on the total amount of code in a project, and just 
automatically assume that a 1 Mloc project is much harder to understand 
and maintain than a 100 kloc project.


if the difference is that the smaller project consists almost entirely 
of hacks and jury-rigging, it isn't necessarily much easier to understand.


meanwhile, building abstractions will often increase the total code size 
(IOW: adding complexity), but consequently make the code easier to 
understand and maintain (reducing visible complexity).


often the code using an abstraction will be smaller, but usually adding 
an abstraction will add more total code to the project than that saved 
by the code which makes use of it (except past a certain point, namely 
where the redundancy from the client code will outweigh the cost of the 
abstraction).



for example:
MS-DOS is drastically smaller than Windows;
but, if most of what we currently have on Windows were built directly on 
MS-DOS (with nearly every app providing its own PMode stuff, driver 
stack, ...), then the total wasted HD space would likely be huge.


and, developing a Windows-like app on Windows is much less total effort 
than doing similar on MS-DOS would be.




Also, I think talking about minimum bits of information is not the
best view onto the complexity problem.  It doesn't account for
structure at all.  Instead, why don't we talk about Gregory Chaitin's
[1] notion of a minimal program.  An interesting biological parallel
to compressing computer programs can be found in looking at bacteria
DNA.  For bacteria near undersea vents where it's very hot and genetic
code transcriptions can easily go awry due to thermal conditions, the
bacteria's genetic code has evolved into a compressed form that reuses
chunks of itself to express the same features that would normally be
spread out in a larger sequence of DNA.


yep.

I have sometimes wondered what an organism which combined most of the 
best parts of "what nature has to offer" would look like (an issue seems 
to be that most major organisms seem to be more advanced in some ways 
and less advanced in others).




wes

[1] http://www.umcs.maine.edu/~chaitin/




Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 11:36 AM, Randy MacDonald wrote:
@BGB, by the 'same end' i meant transforming a statement into something 
that a flow control operator can act on, like if () {...} else {}  The 
domain of the execute operator in APL is quoted strings.  I did not 
mean that the same end was allowing asynchronous execution.




side note:
a lot of how this is implemented came from how it was originally 
designed/implemented.


originally, the main use of the "call_async" opcode was not for async 
blocks, but rather for explicit asynchronous function calls:
foo!(...);//calls function, doesn't wait for return (return value is 
a thread-handle).

likewise:
join(foo!(...));
would call a function asynchronously, and join against the result 
(return value).


async also was latter added as a modifier:
async function bar(...) { ... }

where the function will be called asynchronously by default:
bar(...);//will perform an (implicit) async call

for example, it was also possible to use a lot of this to pass messages 
along channels:

chan!(...);//send a message, don't block for receipt.
chan(...);//send a message, blocking (would wait for other end to join)
join(chan);//get message from channel, blocks for message

a lot of this though was in the 2004 version of the language (the VM was 
later re-implemented, twice), and some hasn't been fully reimplemented 
(the 2004 VM was poorly implemented and very slow).


the async-block syntax was added later, and partly built on the concept 
of async calls.
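
for illustration, a rough analogy in plain C, assuming POSIX threads 
(just a sketch of the general idea, not how the VM actually does it): an 
explicit asynchronous call amounts to spawning a thread that runs the 
function, and join() amounts to waiting on it and collecting the return 
value.

/* hypothetical sketch: "foo!(arg)" as pthread_create, "join(...)" as
   pthread_join; illustration only, not the actual VM machinery. */
#include <pthread.h>
#include <stdio.h>

static void *foo(void *arg)
{
    int x = *(int *)arg;
    return (void *)(long)(x * x);   /* the async call's "return value" */
}

int main(void)
{
    pthread_t th;       /* plays the role of the thread-handle */
    int arg = 7;
    void *ret;

    pthread_create(&th, NULL, foo, &arg);  /* foo!(arg); caller keeps going */
    /* ... other work could happen here ... */
    pthread_join(th, &ret);                /* join(foo!(arg)); */
    printf("result: %ld\n", (long)ret);
    return 0;
}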



but, yeah, probably a lot of people here have already seen stuff like 
this before.





On 6/16/2012 1:23 PM, BGB wrote:

On 6/16/2012 10:05 AM, Randy MacDonald wrote:
@BGB, if the braces around the letters defer execution, as my 
memories of Perl confirm, this is perfect.  With APL, quoting an 
expression accomplishes the same end: '1+1'




no, the braces indicate a code block (in statement context), and it 
is the "async" keyword which indicates that there is deferred 
execution. (in my language, quoting indicates symbols or strings, as 
in "this is a string", 'a', or 'single-quoted string', where "a" is 
always a string, but 'a' is a character-literal).


in an expression context, the braces indicate creation of an ex-nihilo 
object, as in "{x: 3, y: 4}".


the language sort-of distinguishes between statements and 
expressions, but this is more relaxed than in many other languages 
(it is more built on "context" than on a strict syntactic divide, and 
in most cases an explicit "return" is optional since any 
statement/expression in "tail position" may implicitly return a value).



the letters in this case were just placeholders for the statements 
which would go in the blocks.


for example:
if(true)
{
printf("A\n");
sleep(1000);
printf("B\n");
sleep(1000);
}
printf("Done\n");

executes the print statements synchronously, causing the thread to 
sleep for 1s in the process (so, "Done" is printed 1s after "B").


and, with a plain "async" keyword:
async {
sleep(1000);
printf("A\n");
}
printf("Done\n");

will print "Done" first, and then print "A" about 1 second later 
(since the block is folded into another thread).


technically, there is another operation, known as a join.

var a = async { ... };
...
var x = join(a);

where the "join()" will block until the given thread has returned, 
and return the return value from the thread.
generally though, a "join" in this form only makes sense with a 
single argument (and would be implemented in the VM using a special 
bytecode op).


an extension would be to implicitly allow multiple joins, as in:
join(a, b, c);//wait on 3 threads
except, now, the return value doesn't make much sense anymore, and 
likewise:

join(
async{A},
async{B},
async{C});
is also kind of ugly.

in this case, the syntax:
async! {A}&{B}&{C};
although, this could also work:
async! {A}, {B}, {C};

either would basically mean "async with join", and essentially mean 
something similar to the 3-way join (basically, as syntax sugar). it 
may also imply "we don't really care what the return value is".


basically, the "!" suffix has ended up on several of my keywords to 
indicate "alternate forms", for example: "a as int" and "a as! int" 
will have slightly different semantics (the former will return "null" 
if the cast fails, and the latter will throw an exception).



but, since I got to thinking about it again, I started writing up 
more of the logic for this (adding multiway join logic, ...).





On another note, I agree with the thesis that OO is just message 
passing:


  aResult ? s

Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 11:36 AM, Randy MacDonald wrote:
@BGB, by the 'same end' i meant transforming a statement into something 
that a flow control operator can act on, like if () {...} else {}  The 
domain of the execute operator in APL is quoted strings.  I did not 
mean that the same end was allowing asynchronous execution.




yes, ok.

just, a lot of this logic is hard-coded into the parser and compiler 
logic though, but yeah I think I understand what is meant here.



FWIW, it is possible to use code-blocks at runtime by using the syntax 
"fun{...}", which is basically a shorthand equivalent to "function() { 
... }" (this will create closures if there are any captured bindings, 
but otherwise will create a raw block).


(by default, bindings are captured by identity, and may outlive the 
parent scope).


note that, "async{...}" will also work similar to a closure by default, 
so that variables will be captured by-reference. technically, it is also 
possible to write something like: "async(i){...}" which would capture 
'i' by-value (this being because, internally, async is implemented by 
calling a closure in a newly-created green-thread, and in the case where 
variables are used, they are treated as arguments, with the closure 
having a matching argument list).


the reason for this latter form of async is to allow things like:
for(i=0; i<16; i++)
async(i) { ... }

where each would capture the value of 'i' (rather than the variable 'i').
vaguely similar could be possible with closures, say: "fun[i]{...}", but 
thus far nothing along these lines has been implemented (and would 
require altering how closures work).
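
the same distinction shows up with plain OS threads in C (a hypothetical 
sketch, not the actual green-thread machinery): handing each thread a 
pointer to the loop variable behaves like capture by-reference and races, 
while copying the value into a per-thread slot corresponds to the 
"async(i){...}" by-value form.

/* hypothetical sketch of by-value vs by-reference capture with POSIX
   threads; illustration only. */
#include <pthread.h>
#include <stdio.h>

static void *work(void *arg)
{
    int i = *(int *)arg;    /* the value as this particular thread sees it */
    printf("i=%d\n", i);
    return NULL;
}

int main(void)
{
    pthread_t th[16];
    int vals[16];

    for (int i = 0; i < 16; i++) {
        /* passing &i directly would be the "by reference" behavior:
           each thread might see whatever i happens to be when it runs.
           copying i into its own slot mirrors "async(i) { ... }". */
        vals[i] = i;
        pthread_create(&th[i], NULL, work, &vals[i]);
    }
    for (int i = 0; i < 16; i++)
        pthread_join(th[i], NULL);
    return 0;
}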





On 6/16/2012 1:23 PM, BGB wrote:

On 6/16/2012 10:05 AM, Randy MacDonald wrote:
@BGB, if the braces around the letters defer execution, as my 
memories of Perl confirm, this is perfect.  With APL, quoting an 
expression accomplishes the same end: '1+1'




no, the braces indicate a code block (in statement context), and it 
is the "async" keyword which indicates that there is deferred 
execution. (in my language, quoting indicates symbols or strings, as 
in "this is a string", 'a', or 'single-quoted string', where "a" is 
always a string, but 'a' is a character-literal).


in an expression context, the braces indicate creation of an ex-nihilo 
object, as in "{x: 3, y: 4}".


the language sort-of distinguishes between statements and 
expressions, but this is more relaxed than in many other languages 
(it is more built on "context" than on a strict syntactic divide, and 
in most cases an explicit "return" is optional since any 
statement/expression in "tail position" may implicitly return a value).



the letters in this case were just placeholders for the statements 
which would go in the blocks.


for example:
if(true)
{
printf("A\n");
sleep(1000);
printf("B\n");
sleep(1000);
}
printf("Done\n");

executes the print statements synchronously, causing the thread to 
sleep for 1s in the process (so, "Done" is printed 1s after "B").


and, with a plain "async" keyword:
async {
sleep(1000);
printf("A\n");
}
printf("Done\n");

will print "Done" first, and then print "A" about 1 second later 
(since the block is folded into another thread).


technically, there is another operation, known as a join.

var a = async { ... };
...
var x = join(a);

where the "join()" will block until the given thread has returned, 
and return the return value from the thread.
generally though, a "join" in this form only makes sense with a 
single argument (and would be implemented in the VM using a special 
bytecode op).


an extension would be to implicitly allow multiple joins, as in:
join(a, b, c);//wait on 3 threads
except, now, the return value doesn't make much sense anymore, and 
likewise:

join(
async{A},
async{B},
async{C});
is also kind of ugly.

in this case, the syntax:
async! {A}&{B}&{C};
although, this could also work:
async! {A}, {B}, {C};

either would basically mean "async with join", and essentially mean 
something similar to the 3-way join (basically, as syntax sugar). it 
may also imply "we don't really care what the return value is".


basically, the "!" suffix has ended up on several of my keywords to 
indicate "alternate forms", for example: "a as int" and "a as! int" 
will have slightly different semantics (the former will return "null" 
if the cast fails, and the latter will throw an exception).



but, since I got to thinking about it again, I started writing up 
more of the logic for this (adding multiway join logic, ...).





On anoth

Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 10:05 AM, Randy MacDonald wrote:
@BGB, if the braces around the letters defer execution, as my 
memories of Perl confirm, this is perfect.  With APL, quoting an 
expression accomplishes the same end: '1+1'




no, the braces indicate a code block (in statement context), and it is 
the "async" keyword which indicates that there is deferred execution. 
(in my language, quoting indicates symbols or strings, as in "this is a 
string", 'a', or 'single-quoted string', where "a" is always a string, 
but 'a' is a character-literal).


in an expression context, the braces indicate creation of an ex-nihilo 
object, as in "{x: 3, y: 4}".


the language sort-of distinguishes between statements and expressions, 
but this is more relaxed than in many other languages (it is more built 
on "context" than on a strict syntactic divide, and in most cases an 
explicit "return" is optional since any statement/expression in "tail 
position" may implicitly return a value).



the letters in this case were just placeholders for the statements which 
would go in the blocks.


for example:
if(true)
{
printf("A\n");
sleep(1000);
printf("B\n");
sleep(1000);
}
printf("Done\n");

executes the print statements synchronously, causing the thread to sleep 
for 1s in the process (so, "Done" is printed 1s after "B").


and, with a plain "async" keyword:
async {
sleep(1000);
printf("A\n");
}
printf("Done\n");

will print "Done" first, and then print "A" about 1 second later (since 
the block is folded into another thread).


technically, there is another operation, known as a join.

var a = async { ... };
...
var x = join(a);

where the "join()" will block until the given thread has returned, and 
return the return value from the thread.
generally though, a "join" in this form only makes sense with a single 
argument (and would be implemented in the VM using a special bytecode op).


an extension would be to implicitly allow multiple joins, as in:
join(a, b, c);//wait on 3 threads
except, now, the return value doesn't make much sense anymore, and likewise:
join(
async{A},
async{B},
async{C});
is also kind of ugly.

in this case, the syntax:
async! {A}&{B}&{C};
although, this could also work:
async! {A}, {B}, {C};

either would basically mean "async with join", and essentially mean 
something similar to the 3-way join (basically, as syntax sugar). it may 
also imply "we don't really care what the return value is".


basically, the "!" suffix has ended up on several of my keywords to 
indicate "alternate forms", for example: "a as int" and "a as! int" will 
have slightly different semantics (the former will return "null" if the 
cast fails, and the latter will throw an exception).



but, since I got to thinking about it again, I started writing up more 
of the logic for this (adding multiway join logic, ...).





On another note, I agree with the thesis that OO is just message passing:

  aResult ? someParameters 'messageName' to anObject ?? so, once 
'to' is defined, APL does OO.


I was thinking 'new' didn't fit, but

   'new' to aClass

convinced me otherwise.

It also means that 'object oriented language' is a category error.



my language is a bit more generic, and loosely borrows much of its 
current syntax from JavaScript and ActionScript.


however, it has a fair number of non-JS features and semantics exist as 
well.
it is hardly an elegant, cleanly designed, or minimal language, but it 
works, and is a design more based on being useful to myself.




On 6/16/2012 11:40 AM, BGB wrote:


I recently thought about it off-list, and came up with a syntax like:
async! {A}&{B}&{C}



--
---
|\/| Randy A MacDonald   | If the string is too tight, it will snap
|\\|array...@ns.sympatico.ca|   If it is too loose, it won't play...
  BSc(Math) UNBF '83  | APL: If you can say it, it's done.
  Natural Born APL'er | I use Real J
  Experimental webserverhttp://mormac.homeftp.net/
<-NTP>{ gnat }-






Re: [fonc] The Web Will Die When OOP Dies

2012-06-16 Thread BGB

On 6/16/2012 9:19 AM, Randy MacDonald wrote:

On 6/10/2012 1:15 AM, BGB wrote:
meanwhile, I have spent several days on-off pondering the mystery of 
if there is any good syntax (for a language with a vaguely C-like 
syntax), to express the concept of "execute these statements in 
parallel and continue when all are done".

I believe that the expression in Dyalog APL is:

⍎&¨statements

or

{execute}{spawn}{each}statements.



I recently thought about it off-list, and came up with a syntax like:
async! {A}&{B}&{C}

but, decided that this isn't really needed at the moment, and is a 
bit "extreme" of a feature anyways (and would need to devise a mechanism 
for implementing a multi-way join, ...).


actually, probably in my bytecode it would look something like:
mark
mark; push A; close; call_async
mark; push B; close; call_async
mark; push C; close; call_async
multijoin

(and likely involve adding some logic into the green-thread scheduler...).
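
as a rough sketch of what the multi-way join amounts to, here it is with 
plain POSIX threads in C (hypothetical illustration; the real thing would 
presumably live in the VM's green-thread scheduler rather than on OS 
threads):

/* sketch: "async! {A}&{B}&{C}" as three spawned threads plus a join on
   all of them; the block names and sleeps are made up for illustration. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *block_a(void *p) { sleep(1); printf("A\n"); return NULL; }
static void *block_b(void *p) { sleep(2); printf("B\n"); return NULL; }
static void *block_c(void *p) { sleep(3); printf("C\n"); return NULL; }

int main(void)
{
    void *(*blocks[3])(void *) = { block_a, block_b, block_c };
    pthread_t th[3];

    for (int i = 0; i < 3; i++)    /* mark; push X; close; call_async */
        pthread_create(&th[i], NULL, blocks[i], NULL);
    for (int i = 0; i < 3; i++)    /* multijoin */
        pthread_join(th[i], NULL);
    printf("Done\n");              /* continue only once all are done */
    return 0;
}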


ended up basically opting in this case for something simpler which I had 
used in the past:
callback events on timers. technically, timed callbacks aren't really 
"good", but they work well enough for things like animation tasks, ...


but, I may still need to think about it.



Re: [fonc] The Web Will Die When OOP Dies

2012-06-15 Thread BGB

On 6/15/2012 12:27 PM, Paul Homer wrote:
I wouldn't describe complexity as a problem, but rather an attribute 
of the universe we exist in, affecting everything from how we organize 
our societies to how the various solar systems interact with each other.


Each time you conquer the current complexity, your approach adds to 
it. Eventually all that conquering needs to be conquered itself ...




yep.

the world of software is layers upon layers of stuff.
one thing is made, and made easier, at the cost of adding a fair amount 
of complexity somewhere else.


this is generally considered a good tradeoff, because the reduction of 
complexity in things that are seen is perceptually more important than 
the increase in internal complexity in the things not seen.


although it may be possible to reduce complexity, say by finding ways to 
do the same things with less total complexity, this will not actually 
change the underlying issue (or in other cases may come with costs worse 
than internal complexity, such as poor performance or drastically higher 
memory use, ...).




Paul.


*From:* Loup Vaillant 
*To:* fonc@vpri.org
*Sent:* Friday, June 15, 2012 1:54:04 PM
*Subject:* Re: [fonc] The Web Will Die When OOP Dies

Paul Homer wrote:
> It is far more than obvious that OO opened the door to allow massive
> systems. Theoretically they were possible before, but it gave us
a way
> to manage the complexity of these beasts. Still, like all
technologies,
> it comes with a built-in 'threshold' that imposes a limit on
what we can
> build. If we are to exceed that, then I think we are in the
hunt for
> the next philosophy and as Zed points out the ramification of
finding it
> will cause yet another technological wave to overtake the last one.

I find that a bit depressing: if each tool that tackles complexity
better than the previous ones leads us to increase complexity (just
because we can), we're kinda doomed.

Can't we recognize complexity as a problem, instead of an unavoidable
law of nature?  Thank goodness we have the STEPS project to shed some
light.

Loup.








Re: [fonc] The Web Will Die When OOP Dies

2012-06-14 Thread BGB

On 6/14/2012 10:19 PM, John Zabroski wrote:


Folks,

Arguing technical details here misses the point. For example, a 
different conversation can be started by asking Why does my web 
hosting provider say I need an FTP client? Already technology is way 
too much in my face and I hate seeing programmers blame their tools 
rather than their misunderstanding of people.


Start by asking yourself how would you build these needs from scratch 
to bootstrap something like the Internet.


What would a web browser look like if the user didn't need a separate 
program to put data somewhere on their web server and could just use 
one uniform mechanism? Note I am not getting into "nice to have" 
features like resumption of paused uploads due to weak or episodic 
connectivity, because that too is basically a technical problem -- and 
it is not regarded as academically difficult either. I am simply 
taking one example of how users are forced to work today and asking 
why not something less technical. All I want to do is upload a file 
and yet I have all these knobs to tune and things to "install" and 
none of it takes my work context into consideration.




idle thoughts:
there is Windows Explorer, which can access FTP;
would be better if it actually remembered login info, had automatic 
login, and could automatically resume uploads, ...


but, the interface is nice, as an FTP server looks much like a 
directory, ...



also, at least in the past, pretty much everything *was* IE:
could put HTML on the desktop, in directories (directory as webpage), ...
but most of this went away AFAICT (then again, not really like IE is 
"good").


maybe, otherwise, the internet would look like local applications or 
similar. they can sit on desktop, and maybe they launch windows. IMHO, I 
don't as much like tabs, as long ago Windows basically introduced its 
own form of tabs:

the Windows taskbar.

soon enough, it added another nifty feature:
it lumped various instances of the same program into popup menus.


meanwhile, browser tabs are like Win95 all over again, with the thing 
likely to experience severe lag whenever more than a few pages are open 
(and often have responsiveness and latency issues).


better maybe if more of the app ran on the client, and if people would 
use more asynchronous messages (rather than request/response).


...

so, then, webpages could have a look and feel more like normal apps.




Why do I pay even $4 a month for such crappy service?

On Jun 11, 2012 8:17 AM, "Tony Garnock-Jones" 
<tonygarnockjo...@gmail.com> wrote:


On 9 June 2012 22:06, Toby Schachman <t...@alum.mit.edu> wrote:

Message passing does not necessitate a conceptual dependence on
request-response communication. Yet most code I see in the
wild uses
this pattern.


Sapir-Whorf strikes again? ;-)

I rarely
see an OO program where there is a "community" of objects who
are all
sending messages to each other and it's conceptually ambiguous
which
object is "in control" of the overall system's behavior.


Perhaps you're not taking into account programs that use the
observer/observable pattern? As a specific example, all the uses
of the "dependents" protocols (e.g. #changed:, #update:) in
Smalltalk are just this. In my Squeak image, there are some 50
implementors of #update: and some 500 senders of #changed:.

In that same image, there is also protocol for "events" on class
Object, as well as an instance of Announcements loaded. So I think
what you describe really might be quite common in OO /systems/,
rather than discrete programs.

All three of these aspects of my Squeak image - the "dependents"
protocols, triggering of "events", and Announcements - are
encodings of simple asynchronous messaging, built using the
traditional request-reply-error conversational pattern, and
permitting conversational patterns other than the traditional
request-reply-error.

As an aside, working with such synchronous simulations of
asynchronous messaging causes all sorts of headaches, because
asynchronous events naturally involve concurrency, and the
simulation usually only involves a single process dispatching
events by synchronous procedure call.

Regards,
  Tony
-- 
Tony Garnock-Jones

tonygarnockjo...@gmail.com 
http://homepages.kcbbs.gen.nz/tonyg/








Re: [fonc] The Web Will Die When OOP Dies

2012-06-09 Thread BGB

On 6/9/2012 9:28 PM, Igor Stasenko wrote:

While i agree with guy's bashing on HTTP,
the second part of his talk is complete bullshit.


IMO, he did raise some valid objections regarding JS and similar though 
as well.


these are also yet more areas though where BS differs from JS: it uses 
different semantics for "==" and "===" (in BS, "==" compares by value 
for compatible types, and "===" compares values by identity).
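
as a loose analogy in C (not how BS implements it, just to illustrate the 
distinction): comparing two strings by content vs comparing the pointers 
themselves is the same value-vs-identity split.

/* hypothetical illustration of value vs identity comparison */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char a[] = "hello";
    char b[] = "hello";
    char *c = a;

    printf("%d\n", strcmp(a, b) == 0);  /* 1: equal by value     ("==")  */
    printf("%d\n", a == b);             /* 0: distinct objects   ("===") */
    printf("%d\n", a == c);             /* 1: same identity      ("===") */
    return 0;
}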



granted, yes, bashing OO isn't really called for, at least not without 
making more specific criticisms.


for example, I am not necessarily a fan of Class/Instance OO or deeply 
nested class hierarchies, but I really do like having "objects" to hold 
things like fields and methods, but don't necessarily like it to be a 
single form with a single point of definition, ...


would this mean I am "for" or "against" OO?...

I had before been accused of being anti-OO because I had asserted that, 
rather than making deeply nested class hierarchies, a person could 
instead use some interfaces.


the problem is partly that "OO" often means one thing for one person and 
something different for someone else.




He mentions a kind of 'signal processing' paradigm,
but we already have it: message passing.
Before i learned smalltalk, i was also thinking that OOP is about
structures and hierarchies, inheritance.. and all this
private/public/etc etc bullshit..
After i learned smalltalk , i know that OOP it is about message
passing. Just it. Period.
And no other implications: the hierarchies and structures is
implementation specific, i.e.
it is a way how an object handles the message, but it can be
completely arbitrary.

I think that indeed, it is largely the industry's fault, being unable to
grasp the simple and basic idea of message passing
and replacing it with horrible crutches and tons of additional
concepts, which make it hard
for people to learn (and therefore be effective with OOP programming).


yeah.


although a person may still implement a lot of this for the sake of 
convention, partly because it is just sort of expected.


for example, does a language really need classes or instances (vs, say, 
cloning or creating objects ex-nihilo)? not really.


then why have them? because people expect them; they can be a little 
faster; and they provide a convenient way to define and answer the 
question "is X a Y?", ...


I personally like having both sets of options though, so this is 
basically what I have done.




meanwhile, I have spent several days on-off pondering the mystery of if 
there is any good syntax (for a language with a vaguely C-like syntax), 
to express the concept of "execute these statements in parallel and 
continue when all are done".


practically, I could allow doing something like:
join( async{A}, async{B}, async{C} );
but this is ugly (and essentially abuses the usual meaning of "join").

meanwhile, something like:
do { A; B; C; } async;
would just be strange, and likely defy common sensibilities (namely, in 
that the statements would not be executed sequentially, in contrast to 
pretty much every other code block).


I was left also considering another possibly ugly option:
async![ A; B; C; ];
which just looks weird...

for example:
async![
    { sleep(1000); printf("A, "); };
    { sleep(2000); printf("B, "); };
    { sleep(3000); printf("C, "); }; ];
printf("Done\n");

would print "A, B, C, Done" with 1s delays before each letter.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] iconic representations of powerful ideas

2012-06-04 Thread BGB

On 6/4/2012 12:59 PM, Miles Fidelman wrote:

BGB wrote:

On 6/4/2012 6:48 AM, Miles Fidelman wrote:

BGB wrote:


and, recently devised a hack for creating "component layered JPEG 
images", or, basically, a hack to allow creating JPEGs which also 
contained alpha-blending, normal maps, specular maps, and luma maps 
(as an essentially 16-component JPEG image composed of multiple 
component layers, with individual JPEG images placed end-to-end 
with marker tags between them to mark each layer).



dunno if anyone would find any of this all that interesting though.


well, I'd certainly be interested in seeing that hack!

Mile Fidelman



from a comment in my JPEG code:
<--
BGB Extensions:
APP11: BGBTech Tag
ASCIZ TagName
Tag-specific data until next marker.




Pardon my cluelessness here, but what exactly are you showing as "JPEG 
code?"  Is this part of a JPEG file header, part of code that's 
generating a JPEG file, what?




the "JPEG code" reference was, basically, to my JPEG encoder/decoder 
(which implements ITU T.81 and JFIF), and is written in C.


the design shouldn't be too hard to retrofit onto a "standard" encoder 
though (such as "libjpeg"), given it doesn't significantly alter the 
file-format, but as-is I use my own JPEG implementation.


in this case, basically it is implemented as wrapper functions, which 
essentially just daisy-chain to the normal JPEG encoder/decoder functions.



but, anyways, to try to explain the JPEG format:
basically, it consists mostly of "marker tags", defined as byte values.

there is no particular "file header" in JPEG, only marker tags, and data 
which follows tags, and the relative positioning and ordering of these 
marker tags.



the byte value 0xFF is "magic" in JPEG, and serves to escape various 
tags (given in the following byte). it may not be used directly as a 
data value in encoded data (it needs to be escaped).


for example, "0xFF, 0x00" escapes an 0xFF byte.

so, a few major tags.
SOI (Start Of Image, 0xD8) and EOI (End Of Image, 0xD9) are also 
major, as these mark the start and end of an encoded image (all other 
image-encoding markers exist between these tags).


DHT (Define Huffman Table, 0xC4), defines the Huffman table.
DQT (Define Quantization Table, 0xDB), defines the quantization table.
SOF0 (Start Of Frame 0, 0xC0), defines the resolution, components, 
and sub-sampling of the image (for color images, always 3 components and 
4:2:0 in JFIF, and the color-space is always YUV / YCbCr).


SOS (Start Of Scan, 0xDA), marks the start of the compressed image data.


and, in this case:
APP0-APP15 (0xE0-0xEF), which define application-specific extension tags.

these are commonly used to encode various extension features. if a 
codec sees one it doesn't recognize, it will typically skip over it 
(skip until the next marker tag).
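
as a minimal sketch of that "skip until the next marker" behavior (real 
decoders usually use the segment's length field where one is present; 
this just follows the convention described here, and isn't taken from 
any particular codec):

#include <stdio.h>

/* scan forward to the next marker: 0xFF followed by a non-zero byte
   (0xFF 0x00 is an escaped data byte; runs of 0xFF are fill bytes) */
static int skip_to_next_marker(FILE *in)
{
    int c;
    while ((c = fgetc(in)) != EOF) {
        if (c != 0xFF)
            continue;
        do { c = fgetc(in); } while (c == 0xFF);
        if (c == EOF)
            break;
        if (c == 0x00)
            continue;   /* escaped 0xFF data byte, keep scanning */
        return c;       /* marker code following 0xFF */
    }
    return EOF;
}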


APP0 is used by JFIF, and APP1 and APP2 are used by EXIF.
APP3-APP15 are used by various other people for various extensions.
I ended up using APP11 because, AFAICT, no one else was using it.

the usual convention for using these tags is to have them followed by an 
ASCIIZ string, which is what I was doing. I also ended up using an 
ASCIIZ string to encode the parameters as well.



but, all this means that the layer tag will come before the SOI tag.

so, for example (leaving out 0x prefixes):
FF, E0, xx, xx, "JFIF", 00, ...,//JFIF "header" (often optional)
FF, EB, "CompLayer", 00, "RGB", 00,//layer marker
FF, D8, ..., FF, D9,//encoded image
FF, EB, "CompLayer", 00, "XYZ", 00,
FF, D8, ..., FF, D9,//encoded image
...
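
a minimal sketch of the wrapper-function approach, assuming a 
hypothetical jpeg_encode_rgb() standing in for an ordinary single-image 
encoder (this is illustrative only, not the actual BGBTech code):

#include <stdio.h>
#include <string.h>

/* hypothetical stand-in for a normal encoder; emits FF D8 ... FF D9 */
extern void jpeg_encode_rgb(FILE *out, const unsigned char *pix, int w, int h);

/* write the APP11 "CompLayer" marker followed by the layer name */
static void write_comp_layer_marker(FILE *out, const char *layer)
{
    fputc(0xFF, out);
    fputc(0xEB, out);                          /* APP11 */
    fwrite("CompLayer", 1, 10, out);           /* tag name, incl. NUL */
    fwrite(layer, 1, strlen(layer) + 1, out);  /* layer name, incl. NUL */
}

/* daisy-chain: one marker plus one complete JPEG image per layer,
   RGB first so ordinary decoders still see a plain RGB image */
void encode_layered(FILE *out, const unsigned char *rgb,
                    const unsigned char *xyz, int w, int h)
{
    write_comp_layer_marker(out, "RGB");
    jpeg_encode_rgb(out, rgb, w, h);
    write_comp_layer_marker(out, "XYZ");
    jpeg_encode_rgb(out, xyz, w, h);
}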



And... I don't suppose you have any examples of such files - either 
behind URLs, or for download?




sadly, not as of yet.


I only recently implemented it, and it is mostly intended for AVI videos 
(I don't currently use it for textures, which typically give all this 
information as multiple image files).


given the way AVI works, there would be no good way to use multiple 
images per frame in an AVI (thus, the major reason for creating this). 
(note: OpenDML specifies something similar already for encoding 
interlaced MJPEG videos).


even if files were available though, most likely only the basic RGB 
image would be visible to existing applications (the other layers would 
be ignored).




Thanks,

Miles



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] iconic representations of powerful ideas

2012-06-04 Thread BGB

On 6/4/2012 6:48 AM, Miles Fidelman wrote:

BGB wrote:


and, recently devised a hack for creating "component layered JPEG 
images", or, basically, a hack to allow creating JPEGs which also 
contained alpha-blending, normal maps, specular maps, and luma maps 
(as an essentially 16-component JPEG image composed of multiple 
component layers, with individual JPEG images placed end-to-end with 
marker tags between them to mark each layer).



dunno if anyone would find any of this all that interesting though.


well, I'd certainly be interested in seeing that hack!

Mile Fidelman



from a comment in my JPEG code:
<--
BGB Extensions:
APP11: BGBTech Tag
ASCIZ TagName
Tag-specific data until next marker.

"AlphaColor":
AlphaColor
RGBA as string ("red green blue alpha").

APP11 markers may indicate component layer:
FF,APP11,"CompLayer\0", 
"RGB": Base RGB
"XYZ": Normal XYZ (XZY ordering)
"SpRGB": Specular RGB
"DASe": Depth, Alpha, Specular-Exponent
"LuRGB": Luma RGB
"Alpha": Mono alpha layer

Component Layouts:
3 component: (no marker, RGB)
4 component: RGB+Alpha
7 component: RGB+Alpha+LuRGB
8 component: RGB+XYZ+DASe
12 component: RGB+XYZ+SpRGB+DASe
16 component: RGB+XYZ+SpRGB+DASe+LuRGB
-->

"AlphaColor" was an prior extension, basically for in-image chroma-keys.
the RGB color specifies the color to be matched, and A specifies how 
"strongly" the color is matched (IIRC, it is the "distance" to Alpha=128 
or so).


it was imagined that this could be calculated dynamically per-image, but 
doing so is costly, so typically a fixed color is specified during 
encoding (such as cyan or magenta).
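
as a very rough illustration only (the exact distance-to-alpha mapping 
isn't pinned down above, so the formula here is a guess), a decoder 
applying an AlphaColor chroma-key might do something along these lines:

#include <stdlib.h>

/* hypothetical: derive per-pixel alpha from the distance to the key color;
   (kr,kg,kb) come from the AlphaColor tag, ka scales the match strength */
static unsigned char alphacolor_alpha(int r, int g, int b,
                                      int kr, int kg, int kb, int ka)
{
    int d = abs(r - kr) + abs(g - kg) + abs(b - kb);   /* crude color distance */
    int a = (ka > 0) ? (d * 128) / ka : 255;           /* scale by key alpha */
    return (a > 255) ? 255 : (unsigned char)a;
}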



"CompLayer" is the component layers.
currently, this tag precedes the SOI tags.

example:
FF,APP11, "CompLayer\0", "RGB\0"
FF,SOI
...
FF,EOI
FF,APP11, "CompLayer\0", "XYZ\0"
FF,SOI
...
FF,EOI
...


basically:
most component-layers are generic 4:2:0 RGB/YUV layers (except the mono 
alpha layer, which is monochrome).


the layers may share the same Huffman and Quantization tables (by having 
only the first layer encode them).


the RGB layer always comes first, so a decoder that doesn't know of the 
extension will just see the basic RGB components. also, all layers are 
the same resolution.



this is hardly an "ideal" design, but was intended more to allow a 
simple encoder/decoder tweak to handle it (currently, it is 
encoded/decoded by a function which may accept 4 RGBA buffers, and may 
shuffle things around slightly to encode them into the layers).


the in-program layers are:
RGBA;
XYZD ('D' may be used for parallax mapping, and represents the relative 
depth of the pixel);
Specular (RGBe), this basically gives the reflection color and shininess 
of surface pixels;

Luma (RGBA).


so, yes, it is all a bit of a hack...


there was also a little fudging to my AVI code to allow these videos 
to be used for surface video-mapping (basically, the video is streamed 
into all 4 layers at the same time).


example use-cases of something like this would likely be things like 
making animated textures which resemble moving parts (such as metal 
gears and blinking lights), or as a partial alternative to using 3D 
modeled character faces (the face moving is really the texture and 
animation frames, rather than 3D geometry); presently though, this is 
likely a better fit for animated textures than for video-maps.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] iconic representations of powerful ideas

2012-06-03 Thread BGB

On 6/3/2012 8:31 PM, Shawn Morel wrote:

I'm a very visual learner / thinker. I usually find it mentally painful (yes 
brow furrowing, headache inducing) to think of hard (distant) ideas until I can 
find an image in my mind's eye. Understood that not everyone thinks like this :)


I guess I often think visually as well, though with both a lot of 
pictures and text (but, how does one really know for certain?...).


I also tend to be a bit of a "concrete" thinker (or, a "sensing type" in 
psychology terms).




I was re-reading the original NSF grant proposal, in particular after reading 
this passage:

"Key to the tractability of this approach is the separation of the kernel into two 
complementary facets: representation of executable specifications (structures of 
message-passing objects) forming symbolic expressions and the meaning of those 
specifications (interpretation of their structure) that yields concrete behavior."

I was gliding along the surface of a dynamically shifting Klein bottle.

Curious what other people might think.


personally I don't much understand the core goals of the project all 
that well either.


I lurk some, and respond if something interesting shows up, and 
sometimes make a fool of myself in the process, but oh well...


as well, it sometimes seems to me like maybe I am some sort of 
"generalized antagonist" for many people or something, at least given 
how many often pointless arguments seem to pop up (in general).



but, thinking of visual things:

I had recently looked over the SWF spec, and noticed that to some 
degree, at this level Flash looks a good deal like "some sort of 
animated photoshop-like thing" (both seem to be composed of stacks of 
layers and similar). or, at least, I found it kind of interesting.


then was recently left dealing with the idea of systems being driven 
from the top-down, rather than how I am more familiar with them in 
games: basically as interacting finite-state-machines (although top-down 
wouldn't likely replace FSMs, but they could be used in combination).



and, recently devised a hack for creating "component layered JPEG 
images", or, basically, a hack to allow creating JPEGs which also 
contained alpha-blending, normal maps, specular maps, and luma maps (as 
an essentially 16-component JPEG image composed of multiple component 
layers, with individual JPEG images placed end-to-end with marker tags 
between them to mark each layer).


the main purpose was mostly though that I could have more advanced 
video-mapped surfaces (and, for the most part, I use MJPEG AVIs for 
these). there wasn't any other "clearly better" way.



among other things...

dunno if anyone would find any of this all that interesting though.


___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] The problem with programming languages

2012-05-09 Thread BGB

On 5/9/2012 12:13 AM, Jarek Rzeszótko wrote:

There is an excellent video by Feynman on a related note:

http://www.youtube.com/watch?v=Cj4y0EUlU-Y

A damn good way to spend six minutes IMO...



yep.


I was left previously trying to figure out whether "thinking using text" 
was more linguistic/verbal or visual thinking, given it doesn't really 
match well with either:
verbal thinking is generally described as people thinking with words and 
sounds;

visual thinking is generally described as pictures / colors / emotions / ...

so, one can wonder, where does text fit?...

granted, yes, there is some mode-changing as well, as not everything 
seems to happen the same way all the time, and I can often "push things 
around" if needed (natural language can alternate between auditory and 
textual forms, ...).


I have determined though that I can't really read and also "visualize" 
the story (apparently, many other people do this), as all I can really 
see at the time is the text. probably because my mind is more busy 
trying to buffer up the text, and the space is already used up and so 
can't be used for drawing pictures (unless I use up a lot of the space 
for drawing a picture, in which case there isn't much space for holding 
text, ...).


I can also write code while also listening to someone talk, such as in a 
technical YouTube video or similar, since the code and person talking 
are independent (and usually relevant visuals are sparse and can be 
looked at briefly). but, I can't compose an email and carry on a 
conversation with someone at the same time, because they interfere (but 
I can often read and carry on a conversation though, though it is more 
difficult to entirely avoid "topical bleed-over").



despite thinking with lots of text, I am also not very good at math, as 
I still tend to find both arithmetic and "symbolic manipulation" type 
tasks as fairly painful (but, these are used heavily in math classes).


when actually working with math, in a form that I understand, it is 
often more akin to wireframe graphics. for example, I can "see" the 
results of a dot-product or cross-product (I can see the orthogonal 
cross-bars of a cross-product, ...), and can mentally let the system 
"play out" (as annotated/diagrammed 3D graphics) and alter the results 
and see what happens (and the "math" is the superstructure of lines and 
symbols interconnecting the objects).


yet, I can't usually do this effectively in math classes, and usually 
have to resort to much less effective strategies, such as trying to 
convert the problem into a C-like form, and then evaluating this 
in-head, to try to get an answer. similarly, this doesn't work unless I 
can figure out an algorithm for doing it, or just what sort of thing the 
question is even asking for, which is itself often problematic.


another irony is that I don't really like flowcharts, as I personally 
tend to see them as often a very wasteful/ineffective way of 
representing many of these sorts of problems. despite both being 
visually-based, my thinking is not composed of flow-charts (and I much 
prefer more textual formats...).



or such...



Cheers,
Jarosław Rzeszótko

2012/5/9 BGB <cr88...@gmail.com>

On 5/8/2012 2:56 PM, Julian Leviston wrote:

Isn't this simply a description of your "thought clearing process"?

You think in English... not Ruby.

I'd actually hazard a guess and say that really, you think in a
semi-verbal semi-phyiscal pattern language, and not very well
formed one, either. This is the case for most people. This is why
you have to write hard problems down... you have to bake them
into physical form so you can process them again and again,
slowly developing what you mean into a shape.



in my case I think my thinking process is a good deal different.

a lot more of my thinking tends to be a mix of visual/spatial
thinking, and thinking in terms of glyphs and text (often
source-code, and often involving glyphs and traces which I suspect
are unique to my own thoughts, but are typically laid out in the
same "character cell grid" as all of the text).

I guess it could be sort of like if text were rammed together with
glyphs and PCB traces or similar, with the lines weaving between
the characters, and sometimes into and out of the various glyphs
(many of which often resemble square boxes containing circles and
dots, sometimes with points or corners, and sometimes letters or
numbers, ...).

things may vary somewhat, depending on what I am thinking about
the time.


my memory is often more like collections of images, or almost like
"pages in a book", with lots of information drawn onto them,
usually in a white-on-black color-scheme. there is typically very
l

Re: [fonc] The problem with programming languages

2012-05-08 Thread BGB

On 5/8/2012 2:56 PM, Julian Leviston wrote:

Isn't this simply a description of your "thought clearing process"?

You think in English... not Ruby.

I'd actually hazard a guess and say that really, you think in a 
semi-verbal semi-physical pattern language, and not very well formed 
one, either. This is the case for most people. This is why you have to 
write hard problems down... you have to bake them into physical form 
so you can process them again and again, slowly developing what you 
mean into a shape.




in my case I think my thinking process is a good deal different.

a lot more of my thinking tends to be a mix of visual/spatial thinking, 
and thinking in terms of glyphs and text (often source-code, and often 
involving glyphs and traces which I suspect are unique to my own 
thoughts, but are typically laid out in the same "character cell grid" 
as all of the text).


I guess it could be sort of like if text were rammed together with 
glyphs and PCB traces or similar, with the lines weaving between the 
characters, and sometimes into and out of the various glyphs (many of 
which often resemble square boxes containing circles and dots, sometimes 
with points or corners, and sometimes letters or numbers, ...).


things may vary somewhat, depending on what I am thinking about at the time.


my memory is often more like collections of images, or almost like 
"pages in a book", with lots of information drawn onto them, usually in 
a white-on-black color-scheme. there is typically very little color or 
movement.


sometimes it may include other forms of graphics, like pictures of 
things I have seen, objects I can imagine, ...



thoughts may often use natural-language as well, in a spoken-like form, 
but usually this is limited either to when talking to people or when 
writing something (if I am trying to think up what I am writing, I may 
often hear "echoes" of various ways the thought could be expressed, and 
of text as it is being written, ...). reading often seems to bypass this 
(and go more directly into a visual form).



typically, thinking about programming problems seems to be more like 
being in a "storm" of text flying all over the place, and then bits of 
code flying together from the pieces.


if any math is involved, often any relevant structures will be 
themselves depicted visually, often in geometry-like forms.


or, at least, this is what it "looks like", I really don't actually know 
how it all works, or how the thoughts themselves actually work or do 
what they do.


I think all this counts as some form of "visual thinking" (though I 
suspect probably a non-standard form based on some stuff I have read, 
given that "colors, movement, and emotions" don't really seem to be a 
big part of this).



or such...



On 09/05/2012, at 2:20 AM, Jarek Rzeszótko wrote:

Example: I have been programming in Ruby for 7 years now, for 5 years 
professionally, and yet when I face a really difficult problem the 
best way still turns out to be to write out a basic outline of the 
overall algorithm in pseudo-code. It might be a personal thing, but 
for me there are just too many irrelevant details to keep in mind 
when trying to solve a complex problem using a programming language 
right from the start. I cannot think of classes, method names, 
arguments etc. until I get a basic idea of how the given computation 
should work like on a very high level (and with the low-level details 
staying "fuzzy"). I know there are people who feel the same way, 
there was an interesting essay from Paul Graham followed by a very 
interesting comment on MetaFilter about this:




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc




Re: [fonc] The problem with programming languages

2012-05-08 Thread BGB

On 5/7/2012 11:56 PM, Julian Leviston wrote:

Naming poses no problem so long as you define things a bit. :P

Humans parsing documents without proper definitions are like coders 
trying to read programming languages that have no comments


(pretty much all the source code I ever read unfortunately)



I think this is probably why (at least in my case), I tend to "think in 
code" a lot more than "think in natural language" or "think in concepts".


like, a person is working with code, so they have this big pile of code 
in-mind, and see it and think about its behavior, ...



because, yes, comments often are a bit sparse.

personally though, I think that the overuse of comments to describe what 
code is doing is kind of pointless, as the person reading the code will 
generally figure this one out easily enough on their own ("x=i;
//assign i to x", yes, I really needed to know this...).


comments are then more often useful to provide information about why 
something is being done, or information about intended behavior or 
functioning.


as well as describing things which "should be" the case, but aren't yet 
(stuff which is not yet implemented, design problems or bugs, ...).



nevermind side-uses for things like:
putting in license agreements;
putting in documentation comments;
embedding commands for various kinds of tools (although, in many cases, 
"magic macros" may make more sense, such as in C);

...


oddly, I suspect I may be a bit less brittle than some people when it 
comes both to reading natural language, and reading code, especially 
given how many long arguments about "pedantics A vs pedantics B" there 
seem to be going around.


this is usually justified with claims like "programming is all about 
being precise", even when it amounts to a big argument over things like 
"whether or not X and Y are 'similar' despite being 
not-functionally-equivalent due to some commonality in terms of the ways 
they are used or in terms of the aesthetic properties of their interface 
and similarity in terms of externally visible behavior", or "does 
feature X if implemented as a library feature in language A still count 
as X, when X would normally be implemented as a compiler-supported 
feature in language B", ... (example, whether or not it is possible to 
implement and use "dynamic typing" in C and C++ code).


and, also recently, an argument over "waterfall method" vs "agile 
development", ...


nevermind the issue of "meaning depends on context" vs "meaning is built 
on absolute truths".


I have usually been more on the "lax side" of "the externally visible 
behavior is mostly what people actually care about" and "it doesn't 
really matter if a feature is built-in to the compiler or a 
library-provided feature, provided it works" (and, yes, I also just so 
happen to believe that "meaning depends on context" as well, as well as 
that the "waterfall method" is also "inherently broken", ...).


but, alas, better would be if there were a good way to avoid these sorts 
of arguments altogether.



but alas...



J


On 08/05/2012, at 4:36 PM, David Barbour wrote:

On Mon, May 7, 2012 at 11:07 PM, Clinton Daniel <clinton...@yahoo.com.au> wrote:


The other side of that coin is burdening users with a bunch of new
terms to learn that don't link to existing human concepts and words.
"Click to save the document" is easier for a new user to grok than
"Flarg to flep the floggle" ;)

Seriously though, in the space of programming language design, there
is a trade-off in terms of quickly conveying a concept via reusing a
term, versus coining a new term to reduce the impedance mismatch that
occurs when the concept doesn't have exactly the same properties
as an
existing term.


Yeah. I've had trouble with this balance before. We need to 
acknowledge the path dependence in human understanding.


My impression: it's connotation, more than denotation, that 
interferes with human understanding.


"Naming is two-way: a strong name changes the meaning of a thing, and 
a strong thing changes the meaning of a name." - Harrison Ainsworth 
(@hxa7241)


Regards,

Dave

--
bringing s-words to a pen fight
___
fonc mailing list
fonc@vpri.org 
http://vpri.org/mailman/listinfo/fonc






Re: [fonc] The problem with programming languages

2012-05-07 Thread BGB

On 5/7/2012 7:26 AM, Carl Gundel wrote:

People do that every day without using a programming language at all.  ;-)


I think pretty much every field does this.

programmers, doctors, lawyers, engineers, ... all have their own 
specialized versions of the language, with many terms particular to the 
domain, and many common-use terms which are used in particular ways with 
particular definitions which may differ from those of the "common use" 
of the words.


decided not really to go into examples.


sometimes, mixed with "tradition" and similar, this can lead to some 
often rather confusing language-use patterns: constructions involving 
obscure grammar, words and phrases from different languages (Latin, 
Italian, French, ...), ...


so, programming is by no means unique here.



-Carl

-Original Message-
From: fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of John
Pratt
Sent: Monday, May 07, 2012 10:15 AM
To: fonc@vpri.org
Subject: [fonc] The problem with programming languages


The problem with programming languages and computers in general is that they
hijack existing human concepts and words, usurping them from everyday usage
and flattening out their meanings.
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc



Re: [fonc] Quiet and light weight devices

2012-04-22 Thread BGB

On 4/21/2012 1:57 PM, Andre van Delft wrote:

TechCrunch has an interview with Linus Torvalds. He uses a MacBook Air (iOS, 
BTW):


sure it is not OS X?...

although, it is kind of funny that he would be using a computer not 
running his own OS...




[Start of Quote]
I’m have to admit being a bit baffled by how nobody else seems to have done 
what Apple did with the Macbook Air – even several years after the first 
release, the other notebook vendors continue to push those ugly and *clunky* 
things. Yes, there are vendors that have tried to emulate it, but usually 
pretty badly. I don’t think I’m unusual in preferring my laptop to be thin and 
light.

Btw, even when it comes to Apple, it’s really just the Air that I think is 
special. The other apple laptops may be good-looking, but they are still the 
same old clunky hardware, just in a pretty dress.

I’m personally just hoping that I’m ahead of the curve in my strict requirement 
for “small and silent”. It’s not just laptops, btw – Intel sometimes gives me 
pre-release hardware, and the people inside Intel I work with have learnt that 
being whisper-quiet is one of my primary requirements for desktops too. I am 
sometimes surprised at what leaf-blowers some people seem to put up with under 
their desks.

I want my office to be quiet. The loudest thing in the room – by far – should 
be the occasional purring of the cat. And when I travel, I want to travel 
light. A notebook that weighs more than a kilo is simply not a good thing 
(yeah, I’m using the smaller 11″ macbook air, and I think weight could still be 
improved on, but at least it’s very close to the magical 1kg limit).

[End of Quote]
http://techcrunch.com/2012/04/19/an-interview-with-millenium-technology-prize-finalist-linus-torvalds/

I agree with Linus, especially on the importance of silence (I don't travel 
that much yet). I intent never to buy a noisy Windows PC any more. My 1 year 
old 4GB iMac is pretty silent, but with every tab I open in Chrome I hear some 
soft rumble that irritates me heavily. My iPad is nicely quiet when it should 
be.


well, it would be an improvement...

I suspect it is possible that PC fan noise may be a source of mild 
tinnitus in my case.
it is not noticeable when near a computer, but often when away from a 
computer or somewhere quiet, there is a ringing noise oddly similar to 
that of PC fans.


otherwise, I don't usually mind the fans all that much (it is a 
tradeoff...).



but, I once got a fan with a fairly high CFM rating (I forget now the 
exact number, something like 1600 CFM or something...), and ended up not 
using it, as the thing sounded more like a vacuum cleaner than a normal 
PC fan (and could also be heard from other rooms in the house).


IIRC, it was in 120mm form (and fairly thick as well), and it had 
double-sided metal grates (and also metal blades IIRC). if powered, it 
was also strong enough to propel itself along on a surface if kept 
upright (but, sadly, not strong enough to lift its own weight, which 
would have been funny...), and IIRC had thicker-than-usual wiring and a 
dedicated molex connector as well (it claimed that molex connector all 
to itself). it was also extra heavy as well.


so, it served more as a novelty than anything else...

funny would have been to tape it onto a CD and have it slide around like 
an airboat or similar, apart from the wire-length issues with a typical PSU.



luckily, fans like this are not exactly standard components...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Article "Lisp as the Maxwell’s equations of software"

2012-04-13 Thread BGB

On 4/12/2012 4:50 PM, Andre van Delft wrote:

FYI: Michael Nielsen wrote a large article "Lisp as the Maxwell’s equations of 
software", about the famous page 13 of the LISP 1.5 Programmer’s Manual; see
http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equations-of-software/

The article is discussed on Reddit:
http://www.reddit.com/r/programming/comments/s5jzt/lisp_as_the_maxwells_equations_of_software/


partial counter-assertion:
it is one thing to implement something;
it is quite another to implement it effectively.

although a simple interpreter for a language is very possible, it is far 
from being competitive.


what about performance?
what about libraries?
what about interoperability?
...

much like writing a 3D renderer:
it isn't too hard to get a few polygons on the screen;
it is much harder to get it to handle scene-complexities like those in 
commercial games, with a similar feature-set, and comparable performance.


the eventual result is that a "simple" core will often end up hideously 
more complex, and considerably less flexible, than it could otherwise be.



now, as far as Lisp itself, it faces a few problems:
generally unfamiliar language constructions and control flow;
relatively few people are terribly fond of S-Expressions;
implementations with significant library and interoperability problems;
...

but, its supporters are fairly dedicated, and not really open to any 
sorts of change or variation (suggest throwing a C-style syntax on it, 
and many will balk...).



a problem I think is that many people tend to operate in a mindset of 
"either it is perfect or it is unacceptable", and assume that 
"perfection" is somehow part of an "ontology" or similar; however, many 
people disagree, even regarding the question of which options are 
"better" or "worse".


part of the problem I think is that many people are prone to classify 
things along an "axis of interest" or similar, and then prone to assume 
that this axis is an "absolute" ranking of the options. typically this 
classification is done at the absence of considering other aspects or 
relative tradeoffs.



better I think would be to think more in terms of how "locally 
optimized" something is for a given "problem domain" or "common special 
case".


for example, the problem domains C is optimal for are very different 
from those of Java and C#, and still different from those of C++, which 
are very different from those of ECMAScript, which are all very 
different from those of Lisp and Scheme.



nevermind such peculiarities (in math land) as set-theory, which is (for 
whatever reason) commonly used despite being similarly incomprehensible 
both to humans and machines (and lacking any real obvious/direct 
application in computing, 1). presumably math people have some sort of 
reason for why they do things the way they do (throwing set notation 
and operations at pretty much everything, whether or not the topic in 
question really has much of anything to do with sets: "well, I have a 
hammer, and that screw sure does look like a nail").


1: there are some debatable "indirect" cases, like SQL and algorithms 
built on walking or culling linked lists, but I don't really consider 
these to be applicable much beyond being "vaguely similar".



better I think could be to try to isolate out which elements are more 
foundational, and figure out what sorts of things can be built from 
these elements, rather than trying to make things be direct 
manifestations of said elements.


sort of like, with chemistry: the pure elements are fine and well, but 
focusing solely on these would ignore the large and diverse world of 
various compounds and material properties.



or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-12 Thread BGB

On 4/11/2012 11:14 PM, Josh Gargus wrote:

On Apr 8, 2012, at 7:31 PM, BGB wrote:


now, why, exactly, would anyone consider doing rendering on the server?...


One reason might be to amortize the cost of  global illumination calculations.  
Since much of the computation is view-independent, a Really Big Server could 
compute this once per frame and use the results to render a frame from the 
viewpoint of each connected client.  Then, encode it with H.264 and send it 
downstream.  The total number of watts used could be much smaller, and the 
software architecture could be much simpler.

I suspect that this is what OnLive is aiming for... supporting existing 
PC/console games is an interim step as they try to boot-strap a platform with 
enough users to encourage game developers to make this leap.


but, the bandwidth and latency requirements would be terrible...

nevermind that currently, AFAIK, no HW exists which can do full-scene 
global-illumination in real-time (at least using radiosity or similar), 
much less handle this *and* do all of the 3D rendering for a potentially 
arbitrarily large number of connected clients.


another problem is that there isn't much in the rendering process which 
can be aggregated between clients which isn't already done (between 
frames, or ahead-of-time) in current games.


in effect, the rendering costs at the datacenter are likely to scale 
linearly with the number of connected clients, rather than at some 
shallower curve.



much better I think is just following the current route:
getting client PCs to have much better HW, so that they can do their own 
localized lighting calculations (direct illumination can already be done 
in real-time, and global illumination can be done small-scale in real-time).


the cost at the datacenters is also likely to be much lower, since they 
need much less powerful servers, and have to spend much less money on 
electricity and bandwidth.


likewise, the total watts used tends to be fairly insignificant for an 
end user (except when operating on batteries), since PC power-use 
requirements are small vs, say, air-conditioners or refrigerators, 
whereas people running data-centers have to deal with the full brunt of 
the power-bill.


the power-use issue (for mobile devices) could, just as easily, be 
solved by some sort of much higher-capacity battery technology (say, a 
laptop or cell-phone battery which, somehow, had a capacity well into 
the kWh range...).


at this point, people won't really care much if, say, plugging in their 
cell-phone to recharge is drawing, say, several amps, given power is 
relatively cheap in the greater scheme of things (and, assuming a 
migration away from fossil fuels, could likely still get considerably 
cheaper over time).


meanwhile, no obvious current/near-term technology is likely to make 
internet bandwidth considerably cheaper, or latency significantly lower, ...


even with fairly direct fiber-optic connections, long distance ping 
times are still likely to be an issue, and it is much harder to LERP 
video, so short of putting the servers in a geographically nearby 
location (like, in the same city as the user), or somehow bypassing the 
speed of light, it isn't all that likely that people will, in general, 
do much better than about 50-100ms ping (with a world average likely 
closer to about 400ms ping).


this would lead to a generally unsatisfying gaming experience, as there 
would be an obvious delay between attempting an action and the results 
of this action becoming visible (which, at least, with local rendering, 
the results of ping times can be partly glossed over). (video quality 
and framerate are currently also issues, but could improve over time as 
overall bandwidth improves).


to deliver a high quality experience with point-to-point video, a ping 
time of around 10-20ms would likely be needed, which could then compete 
with the frame-rates of locally rendered video. at a 15ms ping, results 
would be "immediately" visible at a 30Hz frame-rate (it wouldn't be 
obviously different from being locally rendered).



granted, this "could" change if people either manage to develop 
faster-than-light communication faster than they manage better GPUs 
and/or higher-capacity battery technology, or people become generally 
tolerate of the latencies involved.



granted, "hybrid" strategies could just as easily work:
a lot of "general visibility" is handled on the servers, and pushed down 
as video streams, with the actual rendering being done on the client 
(essentially streamed video-mapped textures).


by analogy, this would be sort of like if people could use YouTube 
videos as textures in a 3D scene.



or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-09 Thread BGB

On 4/9/2012 10:53 AM, David Barbour wrote:



On Mon, Apr 9, 2012 at 8:25 AM, BGB <cr88...@gmail.com> wrote:



Running on a cluster is very different between having all the
intelligence on the individual clients.  As far as I can tell,
MMOs by and large run most of the simulation on centralized
clusters (or at least within the vendor's cloud).  Military
sims do EVERYTHING on the clients - there are no central
machines, just the information distribution protocol layer.


yes, but there are probably drawbacks with this performance-wise
and reliability wise.


There are some security and performance drawbacks. It would be easy to 
`cheat` any of the simulation protocols used by military sims. But 
there isn't much motive to do so; it isn't as though you win virtual 
items to sell on e-bay. Some computations are many times redundant. 
But it's good enough, and the extensibility and interop advantage are 
worth more than efficiency would be.


yeah, probably fair enough.

though, it could be like the "Instant Messaging" network, which allows 
to some extent for heterogeneous protocols (typically by bridging 
between the networks and protocols).





now, why, exactly, would anyone consider doing rendering on the
server?...


Ask the developers of Second Life ;).

They basically `stream` polygons and textures to the player, 
continuously, improving the resolution of your view if you aren't 
moving too quickly. Unfortunately, you have this continuous experience 
of it always being somewhat awful during normal movement. (In general, 
that's what eventual consistency is like, too.)


actually, to some extent, I was also considering the possibility of 
something like this, but I don't generally consider this "rendering" so 
much as "streaming".


a very naive strategy would be, say, doing it like HTTP and using 
ping/pong requests to grab things as they come into view.


better latency-wise is likely to use more of a "push-down" strategy, 
where the server would speculate what the client can potentially see and 
push down the relevant geometry.


in my case though, typically geometry is sent in terms of whole brushes 
or mesh objects, rather than individual polygons.


presumably, client-side caching could also be done...


functionally, this already exists to some extent in the form of the 
real-time mapping capabilities (which is currently handled by the client 
pushing the updated geometry back to the server).



maybe textures could be sent 2-stage, with the first stage maybe sending 
textures at 1/4 or 1/8 resolution, and then sending the full resolution 
texture later.


say, first the client receives a 64x64 or 128x128 texture, and later 
gets the 256x256 or 512x512 version (probably with a mechanism to avoid 
re-sending textures the player already has).


like, unlike on a web-page, the user doesn't need to endlessly 
re-download a generic grass or concrete texture...





ironically, all this leads to more MMOs using client-side physics,
and more FPS games using server-side physics, with an MMO
generally having a much bigger problem regarding cheating than an FPS.


If you ensure deterministic physics, it would be a lot easier to 
transparently spot-check players for cheating. But I agree it is a 
very difficult problem, unless you can control the player's hardware.


likely unworkable in practice.
more practical could be to perform "sanity checks", which will fail if 
something happens which couldn't reasonably occur.


better though, reliability-wise, could be to leave the server in control 
of most things where players would likely want to cheat:

general movement;
dealing damage;
keeping track of inventory and stats;
...

this way, tampering on the client end is only likely to impact the 
client and make things buggy/annoying, but doesn't actually compromise 
world integrity.





though Capt. Kirk's "I don't believe in the no win scenario"
line comes to mind


Same here. ;)


it is not clear that client-to-client would lead to necessarily
all that much better handling of latency either, for that matter.


Client-to-client usually does improve latency since you skip an 
intermediate communication step. There are exceptions to prove the 
rule, though - e.g. if you have control over routing or can put 
servers between the clients.


fair enough. the bigger issue then is likely working around NAT though, 
since typical broadband routers only work well for outgoing connections, 
but are poorly behaved for incoming connections.



I guess the main question is whether one is measuring strict 
client-to-client latency, or client-to-world latency.


client-to-client latency would mean how long before a client performs an 
action and another client sees the result of thi

Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-09 Thread BGB

On 4/8/2012 8:26 PM, Miles Fidelman wrote:

BGB wrote:

On 4/4/2012 5:26 PM, Miles Fidelman wrote:

BGB wrote:
Not so sure.  Probably similar levels of complexity between a 
military sim. and, say, World of Warcraft.  Fidelity to real-world 
behavior is more important, and network latency matters for the 
extreme real-time stuff (e.g., networked dogfights at Mach 2), but 
other than that, IP networks, gaming class PCs at the endpoints, 
serious graphics processors.  Also more of a need for 
interoperability - as there are lots of different simulations, 
plugged together into lots of different exercises and training 
scenarios - vs. a MMORPG controlled by a single company.




ok, so basically a heterogeneous MMO. and distributed




well, yes, but I am not entirely sure how many non-distributed 
(single server) MMO's there are in the first place.


presumably, the world has to be split between multiple servers to 
deal with all of the users.


some older MMOs had "shards", where users on one server wouldn't be 
able to see what users on a different server were doing, but this is 
AFAIK generally not really considered acceptable in current MMOs 
(hence why the world would be divided up into "areas" or "regions" 
instead, presumably with some sort of load-balancing and similar).


unless of course, this is operating under a different assumption of 
what a distributed-system is than one which allows a load-balanced 
client/server architecture.


Running on a cluster is very different between having all the 
intelligence on the individual clients.  As far as I can tell, MMOs by 
and large run most of the simulation on centralized clusters (or at 
least within the vendor's cloud).  Military sims do EVERYTHING on the 
clients - there are no central machines, just the information 
distribution protocol layer.


yes, but there are probably drawbacks with this performance-wise and 
reliability-wise.


not that all of the servers need to be run in a single location or be 
owned by a single company, but there are some general advantages to the 
client/server model.





reading some stuff (an overview for the DIS protocol, ...), it 
seems that the "level of abstraction" is in some ways a bit higher 
(than game protocols I am familiar with), for example, it will 
indicate the "entity type" in the protocol, rather than, say, the 
name of, its 3D model.
Yes.  The basic idea is that a local simulator - say a tank, or an 
airframe - maintains a local environment model (local image 
generation and position models maintained by dead reckoning) - what 
goes across the network are changes to it's velocity vector, and 
weapon fire events.  The intent is to minimize the amount of data 
that has to be sent across the net, and to maintain speed of image 
generation by doing rendering locally.




now, why, exactly, would anyone consider doing rendering on the 
server?...


Well, render might be the wrong term here.  Think more about map 
tiling.  When you do map applications, the GIS server sends out map 
tiles.  Similarly, at least some MMOs do most of the scene generation 
centrally.  For that matter, think about moving around Google Earth in 
image mode - the data is still coming from Google servers.


The military simulators come from a legacy of flight simulators - VERY 
high resolution imagery, very fast movement.  Before the simulation 
starts, terrain data and imagery are distributed in advance - every 
simulator has all the data needed to generate an out-the-window view, 
and to do terrain calculations (e.g., line-of-sight) locally.




ok, so sending polygons and images over the net.

so, by "very", is the implication that they are sending large numbers of 
1024x1024 or 4096x4096 texture-maps/tiles or similar?...


typically, I do most texture art at 256x256 or 512x512.

but, anyways, presumably JPEG or similar could probably make it work.


ironically, all this leads to more MMOs using client-side physics, 
and more FPS games using server-side physics, with an MMO generally 
having a much bigger problem regarding cheating than an FPS.


For the military stuff, it all comes down to compute load and network 
bandwidth/latency considerations - you simply can't move enough data 
around, quickly enough to support high-res. out-the-window imagery for 
a pilot pulling a 2g turn.  Hence you have to do all that locally.  
Cheating is less of an issue, since these are generally highly managed 
scenarios conducted as training exercises.  What's more of an issue is 
if the software in one sim. draws different conclusions than the 
software in an other sim. (e.g., two planes in a dogfight, each 
concluding that it shot down the other one) - that's usually the 
result of a design bug rather than cheating (though Capt. Kirk's "I 
don't believe in the no win scenario" line comes to mind).




this is why most modern ga

Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-08 Thread BGB

On 4/4/2012 5:26 PM, Miles Fidelman wrote:

BGB wrote:


Not so sure.  Probably similar levels of complexity between a 
military sim. and, say, World of Warcraft.  Fidelity to real-world 
behavior is more important, and network latency matters for the 
extreme real-time stuff (e.g., networked dogfights at Mach 2), but 
other than that, IP networks, gaming class PCs at the endpoints, 
serious graphics processors.  Also more of a need for 
interoperability - as there are lots of different simulations, 
plugged together into lots of different exercises and training 
scenarios - vs. a MMORPG controlled by a single company.





ok, so basically a heterogeneous MMO.

and distributed



well, yes, but I am not entirely sure how many non-distributed (single 
server) MMO's there are in the first place.


presumably, the world has to be split between multiple servers to deal 
with all of the users.


some older MMOs had "shards", where users on one server wouldn't be able 
to see what users on a different server were doing, but this is AFAIK 
generally not really considered acceptable in current MMOs (hence why 
the world would be divided up into "areas" or "regions" instead, 
presumably with some sort of load-balancing and similar).


unless of course, this is operating under a different assumption of what 
a distributed-system is than one which allows a load-balanced 
client/server architecture.






reading some stuff (an overview for the DIS protocol, ...), it seems 
that the "level of abstraction" is in some ways a bit higher (than 
game protocols I am familiar with), for example, it will indicate the 
"entity type" in the protocol, rather than, say, the name of, its 3D 
model.
Yes.  The basic idea is that a local simulator - say a tank, or an 
airframe - maintains a local environment model (local image generation 
and position models maintained by dead reckoning) - what goes across 
the network are changes to it's velocity vector, and weapon fire 
events.  The intent is to minimize the amount of data that has to be 
sent across the net, and to maintain speed of image generation by 
doing rendering locally.




now, why, exactly, would anyone consider doing rendering on the server?...

presumably, the server would serve mostly as a sort of message relay 
(bouncing messages from one client to any nearby clients), and 
potentially also handling physics (typically split between the client 
and server in FPS games, where the main physics is done on the server, 
such as to help prevent cheating and similar, as well as the server 
running any monster/NPC AI).


although less expensive for the server, client-side physics has the 
drawback of making it harder to prevent hacks (such as moving really 
fast and/or teleporting), typically instead requiring the use of 
detection and banning strategies.


ironically, all this leads to more MMOs using client-side physics, and 
more FPS games using server-side physics, with an MMO generally having a 
much bigger problem regarding cheating than an FPS.


typically (in an FPS or similar), rendering is purely client-side, and 
usually most network events are extrapolated (based on origin and 
velocity and similar), to compensate for timing between the client and 
server (and the results of network ping-time and similar).
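
a minimal sketch of that kind of extrapolation (basically dead 
reckoning): the position is advanced from the last received origin and 
velocity. the struct and field names here are made up for illustration:

/* last state received from the server for one entity (illustrative) */
typedef struct {
    float org[3];       /* origin at the time of the update */
    float vel[3];       /* velocity at the time of the update */
    float recv_time;    /* client clock when the update arrived */
} ent_state_t;

/* extrapolate the entity's current origin from the last update */
static void extrapolate_org(const ent_state_t *ent, float now, float out[3])
{
    float dt = now - ent->recv_time;
    int i;
    for (i = 0; i < 3; i++)
        out[i] = ent->org[i] + ent->vel[i] * dt;
}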


it is desirable for players and enemies to be in about the right spot, 
even with maybe 250-750 ms or more between the client and server (though 
many 3D engines will kick players if the ping time is more than 2000 or 
3000 ms).



in my own 3D engine, it is partially split, currently with player 
movement physics being split between the client and server, and most 
other physics being server-side.


there is currently no physics involved in the entity extrapolation, 
although doing more work here could be helpful (mostly to avoid 
extrapolation occasionally putting things into walls or similar).



sadly, even single-player, it can still be a little bit of an issue 
dealing with the matter of the client and server updating at different 
frequencies (say, the "server" runs internally at 10Hz, and the "client" 
runs at 30Hz - 60Hz), so extrapolating the position is still necessary 
(camera movements at 10Hz are not exactly pleasant).


so, this leaves allowing the client-side camera to partly move 
independently of the "player" as known on the server, and using 
interpolation trickery to reconcile the client and server versions of 
the player's position, and occasionally using flags to deal with things 
like teleporters and similar (the player will be teleported on the 
server, which will send a flag to be like "you are here and looking this 
direction").



but, I meant "model" in this case more in the sense of the server sends 
a message more like, say:

(delta 492
(classname "npc_plane_fa18")
(org 6714 4932 5184)
(a

Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread BGB

On 4/4/2012 1:06 PM, Miles Fidelman wrote:

BGB wrote:

On 4/4/2012 9:29 AM, Miles Fidelman wrote:

- game-like simulations (which I'm more familiar with): but these 
are serious games, with lots of people and vehicles running around 
practicing techniques, or experimenting with new weapons and 
tactics, and so forth; or pilots training in team techniques by 
flying missions in a networked simulator (and saving jet fuel); or 
decision makers practicing in simulated command posts -- simulators 
take the form of both person-in-the-loop (e.g., flight sim. with a 
real pilot) and CGF/SAF (an enemy brigade is simulated, with 
information inserted into the simulation network so enemy forces 
show up on radar screens, heads-up displays, and so forth)


For more on the latter, start at:

http://en.wikipedia.org/wiki/Distributed_Interactive_Simulation
http://www.sisostds.org/



so, sort of like: this stuff is to gaming what IBM mainframes are to 
PCs?...


Not so sure.  Probably similar levels of complexity between a military 
sim. and, say, World of Warcraft.  Fidelity to real-world behavior is 
more important, and network latency matters for the extreme real-time 
stuff (e.g., networked dogfights at Mach 2), but other than that, IP 
networks, gaming class PCs at the endpoints, serious graphics 
processors.  Also more of a need for interoperability - as there are 
lots of different simulations, plugged together into lots of different 
exercises and training scenarios - vs. a MMORPG controlled by a single 
company.




ok, so basically a heterogeneous MMO.


reading some stuff (an overview for the DIS protocol, ...), it seems 
that the "level of abstraction" is in some ways a bit higher (than game 
protocols I am familiar with), for example, it will indicate the "entity 
type" in the protocol, rather than, say, the name of, its 3D model.


however, it appears fairly low-level in some ways as well, using 
magic-numbers in place of, say, "entity type names", as well as 
apparently being generally byte-oriented.



in my case, my network protocol is currently based more on the use of 
specially-compressed lists / S-Expressions (with the compression and 
"message protocol" existing as separate and independent layers).


the lower-layer is concerned primarily with efficiently and compactly 
serializing the messages, but doesn't concern itself much with the 
contents of said messages. it is list-based, but theoretically, also 
supporting XML or JSON wouldn't likely be terribly difficult.


the upper-layer is mostly concerned with the message contents, and 
doesn't really care how or where they are transmitted.


I originally considered XML for the message protocol, but ended up 
opting for lists as they were both less effort and more efficient in my 
case. lists are easier to compose and process, generally require less 
memory, and natively support numeric types, ...



most entity fields are identified mnemonics in the protocol (such as 
"org" for origin, "ang" for angles or "rot" for rotation). entity types 
are given both as type-names and also as names for 3D models/sprites/...


I personally generally dislike the use of magic numbers, and in most 
cases they are avoided. some magic numbers exist though, mostly in the 
case of things like "effects flags" and similar (for stuff like whether 
an entity glows, spins, ...).


however, this doesn't mean that any strings are endlessly re-sent, as 
the protocol will compress these (typically into single Huffman-coded 
values). note that recently encoded values may be reused.


beyond entities, things like geometry and light-sources can also be 
synchronized.


nothing obvious comes to mind for why it wouldn't scale; one would probably 
just split the world across multiple servers (by area) and have the 
clients hop between servers as needed (with some server-to-server 
communication).


probably, free-form client-to-client messages would also make sense, and 
maybe also the ability to broadcast messages more like in a chat-style 
system. this way, specialized clients could devise their own specialized 
messages.


(currently, I am not doing anything of the sort, mostly focusing more on 
small-scale network gaming).


...


(if by any chance anyone wants code or specs for any of this stuff, they 
can email me off-list...).



I had mostly heard about military people doing all of this stuff 
using decommissioned vehicles and paintball and similar, but either way.


I guess game-like simulations are probably cheaper.
In terms of jet fuel, travel costs, and other logistics, absolutely.  
But... when you figure in the huge dollars spent paying large systems 
integrators to write software, I'm not sure how much cheaper it all 
becomes.  (The big systems integrators are not known for brilliance of 
their coders, or efficiencies in their process -- not a lot of 20-hou

Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread BGB

On 4/4/2012 9:29 AM, Miles Fidelman wrote:

BGB wrote:

On 4/4/2012 6:35 AM, Miles Fidelman wrote:

BGB wrote:


still not heard the term CGF before though.
If you do military simulations, CGF (Computer Generated Forces) and 
SAF (Semi-Automated Forces) are the equivalent terms of art to "game 
engine."  Sort of.




"military simulations" as in RTS (Real Time Strategy) or similar, or 
something different?... (or, maybe even like realistic simulations 
used by actual military, rather than for purposes of gaming?...).


Well, there are really two types of simulations in use in the military 
(at least that I'm familiar with):


- very detailed engineering models of various sorts (ranging from 
device simulations to simulations of say, a sea-skimming missile vs. a 
gattling gun point-defense weapon).  (think MATLAB and SIMULINK type 
models)




I don't know all that much about MATLAB or SIMULINK, but do know 
about things like FEM (Finite Element Method) and CFD (Computational 
Fluid Dynamics) and similar.


(left out a bunch of stuff, mostly about FEM, CFD, and particle systems 
in games technology, and wondering how some of this stuff compares 
with their analogues as used in an engineering context).



- game-like simulations (which I'm more familiar with): but these are 
serious games, with lots of people and vehicles running around 
practicing techniques, or experimenting with new weapons and tactics, 
and so forth; or pilots training in team techniques by flying missions 
in a networked simulator (and saving jet fuel); or decision makers 
practicing in simulated command posts -- simulators take the form of 
both person-in-the-loop (e.g., flight sim. with a real pilot) and 
CGF/SAF (an enemy brigade is simulated, with information inserted into 
the simulation network so enemy forces show up on radar screens, 
heads-up displays, and so forth)


For more on the latter, start at:

http://en.wikipedia.org/wiki/Distributed_Interactive_Simulation
http://www.sisostds.org/



so, sort of like: this stuff is to gaming what IBM mainframes are to PCs?...

I had mostly heard about military people doing all of this stuff using 
decommissioned vehicles and paintball and similar, but either way.


I guess game-like simulations are probably cheaper.





Wikipedia hasn't been very helpful here regarding a lot of this 
(it doesn't seem to know about most of these terms).


well, it does know about "game engines" and RTS though.


Maybe check out 
http://www.mak.com/products/simulate/computer-generated-forces.html 
for an example of a CGF.




looked briefly, yes, ok.




Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-04 Thread BGB

On 4/4/2012 6:35 AM, Miles Fidelman wrote:

BGB wrote:


still not heard the term CGF before though.
If you do military simulations, CGF (Computer Generated Forces) and 
SAF (Semi-Automated Forces) are the equivalent terms of art to "game 
engine."  Sort of.




"military simulations" as in RTS (Real Time Strategy) or similar, or 
something different?... (or, maybe even like realistic simulations used 
by actual military, rather than for purposes of gaming?...).


Wikipedia hasn't been very helpful here regarding a lot of this 
(it doesn't seem to know about most of these terms).


well, it does know about "game engines" and RTS though.


I guess maybe this confusion is sort of like the confusion over the use 
of the term "brush" for "a piece of static world geometry, typically 
defined as a complex polyhedron represented by a collection of bounding 
planes (but which may also potentially include bezier patches and mesh 
geometry)".


but, as-is, there is no clearly better term for this, so people have to 
live with it (it's not as if Quake / Source / Unreal / ... don't all use 
the same term, grr...).



or such...



Re: [fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-03 Thread BGB

On 4/3/2012 9:29 PM, Miles Fidelman wrote:

BGB wrote:


On 4/3/2012 10:47 AM, Miles Fidelman wrote:


Hah.  You've obviously never been involved in building a CGF 
simulator (Computer Generated Forces) - absolute spaghetti code when 
you have to have 4 main loops, touch 2000 objects (say 2000 tanks) 
every simulation frame.  Comparatively trivial if each tank is 
modeled as a process or actor and you run asynchronously.


I have not encountered this term before, but does it have anything to 
do with an RBDE (Rigid Body Dynamics Engine), often called simply 
a "physics engine"? this would be something like Havok or ODE or 
Bullet or similar.


There is some overlap, but only some - for example, when modeling 
objects in flight (e.g., a plane flying at constant velocity, or an 
artillery shell in flight) - but for the most part, the objects being 
modeled are active, and making decisions (e.g., a plane or tank, with 
a simulated pilot, and often with the option of putting a 
person-in-the-loop).


So it's really impossible to model these things from the outside 
(forces acting on objects), but more from the inside (run 
decision-making code for each object).




fair enough...

but, yes, very often in cases where one is using a physics engine, this 
may be combined with the use of internal logic and forces as well, 
albeit admittedly there is a split:
technically, these forces are applied directly by whatever code is using 
the physics engine, rather than by the physics engine itself.


for example: just because it is a physics engine doesn't mean that it 
necessarily has to be "realistic", or that objects can't supply their 
own forces.


I guess, however, that this would be closer to the main "server end" in 
my case, namely the part that manages the entity system and NPC AIs and 
similar (and, also, the game logic is more FPS style).


still not heard the term CGF before though.


in this case, the basic timestep update is basically to loop over all 
the entities in the scene and call their "think" methods (things like AI 
and animation are generally handled via think methods), and maybe do 
things like updating physics (if relevant), ...


this process is single threaded with a single loop though.
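
a rough sketch of what this kind of loop looks like in plain C (the
struct and function names here are made up for illustration, not the
engine's actual code):

/* hypothetical entity with a per-frame "think" callback and a think timer */
typedef struct Entity Entity;
struct Entity {
    Entity *next;                 /* linked list of entities in the scene */
    float   next_think;           /* time at which think() should next run */
    void  (*think)(Entity *self); /* AI / animation / timing logic */
};

/* single-threaded timestep: walk all entities, run any pending think methods */
void Scene_RunFrame(Entity *entities, float now)
{
    Entity *ent;
    for (ent = entities; ent; ent = ent->next) {
        if (ent->think && now >= ent->next_think)
            ent->think(ent);      /* the entity may reschedule next_think itself */
        /* physics updates, if relevant, would also go here */
    }
}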

I guess it is arguably "event-driven" though:
handling timing is done via events ("think" being a special case);
most interactions between entities involve events as well;
...

many entities and AIs are themselves essentially finite-state-machines.


or such...




[fonc] Physics Simulation (Re: Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future)

2012-04-03 Thread BGB
(changed subject, as this was much more about physics simulation than 
about concurrency).


yes, this is a big long "personal history dump" type thing, please 
ignore if you don't care.



On 4/3/2012 10:47 AM, Miles Fidelman wrote:

David Barbour wrote:


Control flow is a source of much implicit state and accidental 
complexity.


A step processing approach at 20Hz isn't all bad, though, since at 
least you can understand the behavior of each frame in terms of the 
current graph of objects. The only problem with it is that this 
technique doesn't scale. There are easily up to 15 orders of 
magnitude in update frequency between slow-updating and fast-updating 
data structures. Object graphs are similarly heterogeneous in many 
other dimensions - trust and security policy, for example.


Hah.  You've obviously never been involved in building a CGF simulator 
(Computer Generated Forces) - absolute spaghetti code when you have to 
have 4 main loops, touch 2000 objects (say 2000 tanks) every 
simulation frame.  Comparatively trivial if each tank is modeled as a 
process or actor and you run asynchronously.




I have not encountered this term before, but does it have anything to do 
with an RBDE (Rigid Body Dynamics Engine), often called simply a 
"physics engine"?

this would be something like Havok or ODE or Bullet or similar.

I have written such an engine before, but my effort was single-threaded 
(using a fixed-frequency virtual timer, with time-step subdivision to 
deal with fast-moving objects).


probably would turn a bit messy though if it had to be made internally 
multithreaded (it is bad enough just trying to deal with irregular 
timesteps, blarg...).


however, I originally considered potentially running it in a separate 
thread from the main 3D engine, but never really bothered as there 
turned out to not be much point.



granted, one could likely still parallelize it while keeping everything 
frame-locked though, like having the threads essentially just subdivide 
the scene-graph and each work on a certain part of the scene, doing the 
usual thing of all of them predicting/handling contacts within a single 
time step, and then all updating positions in-sync, and preparing for 
the next frame.


in the above scenario, the main issue would likely be how best to go 
about efficiently dividing up work among the threads (the usual strategy 
I use is work-queues, but I have doubts regarding their scalability).


side note:
in my own experience, simply naively handling/updating all objects 
in-sequence doesn't tend to work out very well when mixed with things 
like contact forces (example: check if the object can make its move, if 
so, update its position, move on to the next object, ...). although, this 
does work reasonably well for "Quake-style" physics (where objects merely 
update positions linearly, and have no actual contact forces).


better seems to be:
for all moving objects, predict where the object wants to be in the next 
frame;

determine which objects will collide with each other;
calculate contact forces and apply these to objects;
update movement predictions;
apply movement updates.
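
as a rough sketch of that ordering in plain C (toy sphere-vs-sphere
contacts and made-up names, purely for illustration, not the actual
engine code):

#include <math.h>

/* toy rigid body: a sphere, to keep the sketch short */
typedef struct { float pos[3], vel[3], pred[3], radius; } Body;

/* naive penalty-style contact: if the predicted positions overlap,
   push the two velocities apart along the line between the centers */
static void contact(Body *a, Body *b, float dt)
{
    float d[3], dist = 0, overlap, push;
    int k;
    for (k = 0; k < 3; k++) { d[k] = b->pred[k] - a->pred[k]; dist += d[k]*d[k]; }
    dist = sqrtf(dist);
    overlap = (a->radius + b->radius) - dist;
    if (overlap <= 0 || dist <= 0) return;   /* no contact */
    push = 0.5f * overlap / dt;              /* split the correction between both */
    for (k = 0; k < 3; k++) {
        float nk = d[k] / dist;              /* contact normal */
        a->vel[k] -= nk * push;
        b->vel[k] += nk * push;
    }
}

void physics_step(Body *bodies, int n, float dt)
{
    int i, j, k;
    /* 1. predict where each object wants to be in the next frame */
    for (i = 0; i < n; i++)
        for (k = 0; k < 3; k++)
            bodies[i].pred[k] = bodies[i].pos[k] + bodies[i].vel[k] * dt;
    /* 2-4. find colliding pairs and apply contact forces (adjusting velocities) */
    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            contact(&bodies[i], &bodies[j], dt);
    /* 5. apply movement updates using the (possibly corrected) velocities */
    for (i = 0; i < n; i++)
        for (k = 0; k < 3; k++)
            bodies[i].pos[k] += bodies[i].vel[k] * dt;
}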

however, interpenetration is still not avoided (sufficient forces will 
still essentially push objects into each other). theoretically, one can 
disallow interpenetration (by doing Quake-style physics and simply 
disallowing any post-contact updates which would result in subsequent 
interpenetration), but in my prior attempts to enable such a feature, 
the objects would often become "stuck" and seemingly entirely unable to 
move, and were in fact far more prone to violently explode (a pile of 
objects will seemingly become stuck together and immovable, maybe for 
several seconds, until ultimately all of them violently explode 
outward at high velocities).


allowing objects to interpenetrate was thus seen as the "lesser evil", 
since, even though objects were violating the basic assumption that 
"rigid bodies aren't allowed to exist in the same place at the same 
time", typically (assuming the collision-detection and force-calculation 
functions are working correctly, itself easier said than done), this 
will generally correct itself reasonably quickly (the contact forces 
will push the objects back apart, until they reach a sort of 
equilibrium), and with far less incidence of random "explosions".


sadly, the whole physics engine ended up a little "rubbery" as a result 
of all of this, but it seemed reasonable, as I have also observed 
similar behavior to some extent in Havok, and have figured out that I 
could deal with matters well enough by using a simpler (Quake-style) 
physics engine for most non-dynamic objects. IOW: for things using AABBs 
(Axis-Aligned Bounding-Boxes) and other related "solid objects which 
can't undergo rotation", a very naive "check and update" strategy works 
fairly well, since such objects only ever undergo translational movement.
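
for the Quake-style case, a minimal sketch of that naive "check and
update" strategy over AABBs (again just an illustration, not the actual
engine code):

typedef struct { float min[3], max[3], vel[3]; } AABBEnt;

/* axis-aligned boxes overlap iff they overlap on every axis */
static int aabb_overlap(const AABBEnt *a, const AABBEnt *b)
{
    int k;
    for (k = 0; k < 3; k++)
        if (a->max[k] < b->min[k] || b->max[k] < a->min[k]) return 0;
    return 1;
}

/* naive per-object update: try the move, keep it only if nothing is hit */
void move_ents(AABBEnt *ents, int n, float dt)
{
    int i, j, k, blocked;
    for (i = 0; i < n; i++) {
        AABBEnt moved = ents[i];
        for (k = 0; k < 3; k++) {
            moved.min[k] += moved.vel[k] * dt;
            moved.max[k] += moved.vel[k] * dt;
        }
        blocked = 0;
        for (j = 0; j < n && !blocked; j++)
            if (j != i && aabb_overlap(&moved, &ents[j])) blocked = 1;
        if (!blocked) ents[i] = moved;   /* check passed: commit the move */
    }
}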


admittedly, I also never was able 

Re: [fonc] Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future

2012-04-03 Thread BGB

On 4/3/2012 9:46 AM, Miles Fidelman wrote:

David Barbour wrote:


On Tue, Apr 3, 2012 at 8:25 AM, Eugen Leitl wrote:


It's not just imperative programming. The superficial mode of human
cognition is sequential. This is the problem with all of mathematics
and computer science as well.


Perhaps human attention is basically sequential, as we're only able 
to focus our eyes on one point and use two hands.  But I think humans 
understand parallel behavior well enough - maintaining multiple 
relationships, for example, and predicting the behaviors of multiple 
people.


And for that matter, driving a car, playing a sport, walking and 
chewing gum at the same time :-)




yes, but people have built-in machinery for this, in the form of the 
cerebellum.
relatively little conscious activity is generally involved in these 
sorts of tasks.


if people had to depend on their higher reasoning powers for basic 
movement tasks, people would likely be largely unable to operate 
effectively in basic day-to-day tasks.





If you look at MPI debuggers, it puts people into a whole other
universe of pain than just multithreading.


I can think of a lot of single-threaded interfaces that put people in 
a universe of pain. It isn't clear to me that distribution is at 
fault there. ;)




Come to think of it, tracing flow-of-control through an 
object-oriented system REALLY is a universe of pain (consider the 
difference between a simulation - say a massively multiplayer game - 
where each entity is modeled as an object, with one or two threads 
winding their way through every object, 20 times a second; vs. 
modeling each entity as a process/actor).




FWIW, in general I don't think much about global control-flow.

however, there is a problem with the differences between:
global behavior (the program as a whole);
local behavior (a local collection of functions and statements).

a person may tend to use generally fuzzy / "intuitive" reasoning about 
"the system as a whole", but will typically use fairly rigid sequential 
logic for thinking about the behavior of a given piece of code.


there is a problem if the individual pieces of code are no longer 
readily subject to analysis.



the problem I think with multithreading isn't so much that things are 
parallel or asynchronous, but rather that things are very often 
inconsistent.


if two threads try to operate on the same piece of data at the same 
time, this will often create states which would be impossible had either 
been operating on the data alone (and, very often, changes made in one 
thread will not be immediately visible to others, say, because the 
compiler never actually wrote the change back to memory, or never 
reloaded the variable in the other thread).


hence, people need things like the "volatile" modifier, use of atomic 
operations, things like "mutexes" or "synchronized" regions, ...
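
a small concrete example of the kind of inconsistency being described,
and the usual mutex fix, assuming a pthreads host (remove the two lock
calls and updates start getting lost):

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* two threads doing read-increment-write on the same variable: without
   the lock the interleaving loses updates, with it each increment is
   made to happen "alone" */
static void *worker(void *arg)
{
    int i;
    (void)arg;
    for (i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return 0;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, 0, worker, 0);
    pthread_create(&b, 0, worker, 0);
    pthread_join(a, 0);
    pthread_join(b, 0);
    printf("counter=%ld (expected 2000000)\n", counter);
    return 0;
}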



this leaves several possible options:
systems go further in this direction, with little expectation of global 
synchronization unless some specific mechanism is used (two threads 
working on a piece of memory may temporarily each see their own local copy);
or, languages/compilers go the other direction, so that one thread 
changing a variable is mandated to be immediately visible to other threads.


one option is more costly than the other.

as-is, the situation seems to be that compilers lean to one side (only 
locally consistent), whereas the HW tries to be globally consistent.


a question, then, is: assuming the HW is not kept strictly consistent, how 
best to handle this (regarding both language design and performance).



however, personally I think abandoning local sequential logic and 
consistency would be a bad move.


I am personally more in favor of message passing, and either the ability 
to access objects synchronously, or to "pass messages to the object", 
which may in turn be synchronous.


consider, for example:
class Foo
{
    sync function methodA() { ... }        //synchronous (only one such method executes at a time)
    sync function methodB() { ... }        //synchronous
    async function methodC() { ... }       //asynchronous / concurrent (calls will not block)
    sync async function methodD() { ... }  //synchronous, but calls will not block
}

...
var obj=new Foo();

//thread A
obj.methodA();
//thread B
obj.methodB();

the VM could enforce that the object only executes a single such method 
at a time (but does not globally lock the object, unlike "synchronized").


similarly:
//thread A
async obj.methodA();
//thread B
async obj.methodB();

which works similarly, except neither thread blocks (in this case, "obj" 
functions as a virtual process, and the method call serves more as a 
message pass). note that, if methods are not "sync", then they may 
execute concurrently.
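
one possible way a VM could enforce this (purely a sketch of the
semantics, assuming a pthreads host, not how any actual VM does it) is a
per-object lock that only the "sync"-marked methods acquire, while
"async" methods skip it and would instead enqueue a message on the
object's queue:

#include <pthread.h>

/* sketch: per-object lock acquired only by "sync" methods */
typedef struct {
    pthread_mutex_t sync_lock;   /* serializes this object's sync methods */
    int state;
} Obj;

void obj_methodA_sync(Obj *o)    /* sync: at most one sync method runs at a time */
{
    pthread_mutex_lock(&o->sync_lock);
    o->state++;
    pthread_mutex_unlock(&o->sync_lock);
}

void obj_methodC_async(Obj *o)   /* async: no lock taken, may run concurrently;
                                    a real VM would enqueue a message here */
{
    (void)o;
}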


note that "obj.methodC();" will behave as if the async keyword were 
given (it may be called concurrently). "obj.methodD();" will beha

Re: [fonc] Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future

2012-03-27 Thread BGB

On 3/27/2012 6:16 PM, Miles Fidelman wrote:

BGB wrote:


granted, language-design may still need some work to find an ideal 
programming model for working with concurrent systems, but I still 
more suspect it will probably end up looking more like "existing 
language with better concurrency features bolted on" than "some 
fundamentally new approach to programming" (like, say, C or C++ or C# 
or ActionScript or similar, with message-passing and constraints or 
similar bolted on).



actually, Erlang seems to have solved a lot of the issues - functional 
language, shared-nothing actor model, run-time environment optimized 
for massive (and distributed concurrency) - the few other folks who've 
made some strides (e.g., some of the Scala and Haskell folks) seem to 
be copying from Erlang - really amazing stuff




well, yes, but Erlang was itself basically a functional language with 
message-passing bolted on.
little says one can't do similar stuff with other languages (functional, 
or otherwise).


so, take some language, add concurrent message passing, and maybe a 
concurrent constraint mechanism, and call it good.



shared-nothing could be optional.

other possibilities:
data may be shared, but there is no guarantee that non-local reads are 
up to date;

data may be shared, but doing so comes at a potential performance hit;
...


another issue I can think of:
how does Tilera compare, say, with AMD Fusion?...


can't talk knowledgeably here



fair enough, I am not sure either; it may require more looking into.

a mystery is what ISA-level concurrency features are available, such as 
things like inter-core interrupts or message passing (or message 
routing) mechanisms or similar.




Re: [fonc] Everything You Know (about Parallel Programming) Is Wrong!: A Wild Screed about the Future

2012-03-27 Thread BGB

On 3/27/2012 12:23 PM, Miles Fidelman wrote:

karl ramberg wrote:

Slides/pdf:
http://www.dynamic-languages-symposium.org/dls-11/program/media/Ungar_2011_EverythingYouKnowAboutParallelProgrammingIsWrongAWildScreedAboutTheFuture_Dls.pdf 






Granted that their approach to an OLAP cube is new, but the folks 
behind Erlang, and Carl Hewitt have been talking about massive 
concurrency, an assuming inconsistency as the future trend, for years.


Miles Fidelman



yeah, it seems a bit like an overly dramatic and hand-wavy way of saying 
stuff that probably many of us knew already (besides maybe some guy off 
in the corner being like "but I thought mutex-based locking could scale 
up forever?!...").


granted, language-design may still need some work to find an ideal 
programming model for working with concurrent systems, but I still more 
suspect it will probably end up looking more like "existing language 
with better concurrency features bolted on" than "some fundamentally new 
approach to programming" (like, say, C or C++ or C# or ActionScript or 
similar, with message-passing and constraints or similar bolted on).



another issue I can think of:
how does Tilera compare, say, with AMD Fusion?...

a quick skim over what information I could find didn't show any 
strong reasons (technical or economic) leaning in Tilera's favor vs AMD 
Fusion (maybe there is something more subtle?...).


both seem to be highly parallel VLIW architectures, ...


granted, as-is, one is still stuck using things like CUDA or OpenCL, but 
maybe something can be found to largely eliminate the need for these 
(or, gloss over them).


a partial idea that comes up is that of having a sort of bytecode format 
which can be compiled into the particular ISAs of the particular cores.


or, alternatively, throwing some sort of "x86 to VLIW JIT / 
trans-compiler" or similar into the mix.



or such...



Re: [fonc] OT? Polish syntax

2012-03-19 Thread BGB

On 3/19/2012 5:24 AM, Martin Baldan wrote:

but, hmm... one could always have 2 stacks: create a stack over the stack,
in turn reversing the RPN into PN, and also gets some "meta" going on...

Uh, I'm afraid one stack is one too many for me. But then again, I'm
not sure I get what you mean.


in traditional RPN (PostScript, Forth, ...), one directly executes 
commands from left-to-right.

in this case, one pushes commands left to right, and then pops and 
executes them in a loop.
so, there is a stack for values, and a stack for "the future" (commands 
awaiting execution).


naturally enough, it flips the notation (since sub-expressions are 
executed first).



+ 2 * 3 4 =>  24

Wouldn't that be "+ 2 * 3 4 =>  14" in Polish notation? Typo?


or mental arithmetic fail, either way...
I vaguely remember writing this, and I think the mental arithmetic came 
out wrong.
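
for what it's worth, a minimal sketch of one way to read the two-stack
scheme (push all the commands left to right, then pop and execute), using
toy single-character tokens; it does come out to 14:

#include <stdio.h>

/* toy two-stack evaluator: commands are pushed left to right onto a
   "future" stack, then popped and executed; values accumulate on a
   separate value stack */
int eval_pn(const char *toks)
{
    char cmd[64]; int vals[64];
    int nc = 0, nv = 0, i;

    for (i = 0; toks[i]; i++)                 /* push commands left to right */
        if (toks[i] != ' ') cmd[nc++] = toks[i];

    while (nc > 0) {                          /* pop/execute until empty */
        char c = cmd[--nc];
        if (c >= '0' && c <= '9') { vals[nv++] = c - '0'; }
        else {
            int a = vals[--nv], b = vals[--nv];
            vals[nv++] = (c == '+') ? a + b :
                         (c == '-') ? a - b :
                         (c == '*') ? a * b : a / b;
        }
    }
    return vals[0];
}

int main(void)
{
    printf("%d\n", eval_pn("+ 2 * 3 4"));     /* prints 14 */
    return 0;
}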




commands are pushed left to right, execution then consists of popping off and
executing said commands (the pop/execute loop continues until the stack is
empty). execution then proceeds left-to-right.

Do you have to get busy with "rot", "swap", "drop",  "over" etc?
That's the problem I see with stack-based languages.


if things are designed well, these mostly go away.

mostly it is a matter of making every operation "do the right thing" and 
expect arguments in a sensible order.


a problem, for example, in the design of PostScript, is that people 
tried to give the operations their "intuitive" ordering, but this leads 
to both added awkwardness and additional need for explicit stack operations.



say, for example, one can have a language with:
  /<name> <value> bind
or:
  <value> /<name> bind

though seemingly a trivial difference, one form is likely to need more 
swap/exch calls than the other.


likewise:
  <array> <index> <value> setindex
vs:
  <value> <array> <index> setindex
...


"dup" is a little harder, but generally I have found that dup tended to 
appear in places where a higher-level / compound operation was more 
sensible.


granted, for example, such compound operations are a large portion of my 
interpreter's bytecode ISA, but many help improve performance by 
"optimizing" common operations.


an example, suppose one compiles for an operation like:
j=i++;

one could emit, say:
load i; dup; push 1; binary add; store i; store j;

with all of the lookups and type-checks along the way.

also possible is the sequence:
lpostinc_fn 1; lstore_f 2;
(assume 1 and 2 are the lexical variable indices for i and j, both 
inferred fixnum).

or:
postinc_s i; store j;
(collapsed operation, but not knowing/giving exact types or locations).

now, what has happened?:
the first 5 operations collapse into a single operation, in the former 
case, specialized also for a lexical variable and for fixnums (say, due 
to type-inference);

what is left is a simple store.

as noted, most of this was due to interpreter-specific 
micro-optimizations, and a lot of this is ignored in the 
(newer/incomplete) JIT (which mostly decomposes the operations again, 
and uses more specialized variable allocation and type-inference logic).


these sorts of optimizations are also to some extent language and 
use-case specific, but they do help somewhat with performance of a plain 
interpreter.
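
a tiny sketch of what the fused/collapsed opcode buys in a plain
switch-dispatch interpreter (opcode names are made up for illustration,
not the actual bytecode ISA):

/* toy switch-dispatch interpreter: one fused opcode replaces a 5-op sequence */
enum { OP_LOAD, OP_DUP, OP_PUSH1, OP_ADD, OP_STORE, OP_LPOSTINC_STORE, OP_HALT };

void run(const int *code, int *vars)
{
    int stack[64], sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {
        /* generic sequence for j=i++: load i; dup; push 1; add; store i; store j */
        case OP_LOAD:  stack[sp++] = vars[code[pc++]];     break;
        case OP_DUP:   stack[sp] = stack[sp-1]; sp++;      break;
        case OP_PUSH1: stack[sp++] = 1;                    break;
        case OP_ADD:   sp--; stack[sp-1] += stack[sp];     break;
        case OP_STORE: vars[code[pc++]] = stack[--sp];     break;
        /* fused form: one dispatch and no stack traffic: vars[dst] = vars[src]++ */
        case OP_LPOSTINC_STORE: {
            int src = code[pc++], dst = code[pc++];
            vars[dst] = vars[src]++;
            break;
        }
        case OP_HALT: return;
        }
    }
}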



similar could likely be applied to a stack language designed for use by 
humans, where lots of operations/words are dedicated to common 
constructions likely to be used by a user of the language.




_ I *hate* infix notation. It can only make sense where everything has
arity 3, like in RDF.


many people would probably disagree.
whether culture or innate, infix notations seem to be fairly popular.

My beef with infix notation is that you get ambiguity, and then this
ambiguity is usually eliminated with arbitrary policies of operator
priority, and then you still have to use parens, even with fixed
arity. In contrast, with pure Polish notation, once you accept fixed
arity you get unambiguity for free and you get rid of parens for
everything (except, maybe, explicit lists).

For instance, in infix notation, when I see:

2 + 3 * 4

I have to remember that it means:

2 + (3*4)

But then I need the parens when I mean:

(2 + 3) * 4

In contrast, with Polish notation, the first case would be:

+ 2 * 3 4

And the second case would be:

* 4 + 2 3

Which is clearly much more elegant. No parens, no operator priority.


many people are not particularly concerned with elegance though, and 
tend to take it for granted what the operator precedences are and where 
the parens go.


this goes even for the (arguably poorly organized) C precedence hierarchy:
many new languages don't change it because people expect it a certain way;
in my case, I don't change it mostly for sake of making at least some 
effort to conform with ECMA-262, which defines the hierarchy a certain way.


the advantage is that, assuming the precedences are sensible, much more 
commonly used operations have higher precedence, and so don't need 
explicit parenthesis. on average, this tends to work out fa

Re: [fonc] OT? Polish syntax

2012-03-18 Thread BGB

On 3/18/2012 6:54 PM, Martin Baldan wrote:

BGB, please see my answer to shaun. In short:

_ I'm not looking for stack-based languages. I want a Lisp which got
rid of (most of the) the parens by using fixed arity and types,
without any loss of genericity, homoiconicity or other desirable
features. REBOL does just that, but it's not so good regarding
performance, the type system, etc.


fair enough...


but, hmm... one could always have 2 stacks: create a stack over the 
stack, in turn reversing the RPN into PN, and also gets some "meta" 
going on...


+ 2 * 3 4 => 24

commands are pushed left to right, execution then consists of popping off 
and executing said commands (the pop/execute loop continues until the 
stack is empty). execution then proceeds left-to-right.


ironically, I have a few specialized interpreters that actually sort of 
work this way:
one such interpreter uses a similar technique to implement a naive 
batch-style command language. something similar was also previously used 
in a text-to-speech engine of mine.


nifty features:
no need to buffer intermediate expressions during execution (no ASTs or 
bytecode);
no need for an explicit procedure call/return mechanism (the process is 
largely implicit, however one does need a mechanism to push the contents 
of a procedure, although re-parsing works fairly well);

easily handles recursion;
it also implicitly performs tail-call optimization;
fairly quick/easy to implement;
handles pause/resume easily (since the interpreter is non-recursive).

possible downsides:
not particularly likely to be high-performance (although an 
implementation using objects or threaded code seems possible);

behavior can potentially be rather counter-intuitive;
...



_ I *hate* infix notation. It can only make sense where everything has
arity 3, like in RDF.


many people would probably disagree.
whether culture or innate, infix notations seem to be fairly popular.

actually, it can be noted that many of the world's languages are SVO 
(and many others are SOV), so there could be a pattern here.


a reasonable tradeoff IMO is using prefix notation for commands and 
infix notation for arithmetic.




_ Matching parens is a non-issue. Just use Paredit or similar ;)


I am currently mostly using Notepad2, which does have parenthesis 
matching via highlighting.


however, the issue isn't as much with just using an editor with 
parenthesis matching, but more an issue when quickly typing something 
interactively. one may have to make extra mental effort to get the 
counts of opening and closing parentheses right, potentially distracting 
from "the task at hand" (typing in a command or math expression or 
similar). it also doesn't help matters that the parentheses are IMO more 
effort to type than some other keys.



granted, C style syntax isn't perfect for interactive use either. IMO, 
probably the more notable issue in this case is having to type commas. 
one can fudge it though (say, by making commas and semicolons generally 
optional).


one of the better syntax designs for interactive use seems to be the 
traditional shell-command syntax. behind this is probably C-like syntax, 
followed by RPN, followed by S-Expressions.


although physically RPN is probably a little easier to type than C style 
syntax, a downside is that one may have to mentally rework the 
expressions prior to typing them. another downside is that of being 
difficult to read or decipher later.



something like REBOL could possibly work fairly well here, given it has 
some structural similarity to shell-command syntax.




_ Umm, "whitespace sensitive" sounds a bit dangerous. I have enough
with Python :p


small-scale whitespace sensitivity actually seems to work out a bit 
nicer than larger scale whitespace sensitivity IMO. large-scale 
constrains the overall formatting and may end up needing to be worked 
around. small-scale generally has a much smaller impact, and need not 
influence overall code formatting.


the main merit it has is that it can reduce the need for commas (and/or 
semicolons), since the parser can use whitespace as a separator (and 
space is an easier key to hit).


however, many people like to use whitespace in weird places in code, 
and with such a parser those tendencies would lead to incorrect 
parses.


example:
foo (x)
x = 3
  +4
...
could likely lead to the code being parsed incorrectly in several places.

otherwise, one has to write instead:
foo(x)
x=3+4
or possibly also allowed:
foo(
  x)
x=3+
  4
which would be more obvious to the parser.


or, alternatively, whitespace sensitivity can allow things like:
"dosomething 2 -3 4*9-2"

to be parsed without being ambiguous (except maybe to human readers due 
to variable-width font evilness, where font designers seem to like to 
often "hide" the spaces, but one can assume that most "real" 
programmers,

Re: [fonc] OT? Polish syntax

2012-03-15 Thread BGB

On 3/15/2012 9:21 AM, Martin Baldan wrote:

I have a little off-topic question.
Why are there so few programming languages with true Polish syntax? I
mean, prefix notation, fixed arity, no parens (except, maybe, for
lists, sequences or similar). And of course, higher order functions.
The only example I can think of is REBOL, but it has other features I
don't like so much, or at least are not essential to the idea. Now
there are some open-source clones, such as Boron, and now Red, but
what about very different languages with the same concept?

I like pure Polish notation because it seems as conceptually elegant
as Lisp notation, but much closer to the way spoken language works.
Why is it that this simple idea is so often conflated with ugly or
superfluous features such as native support for infix notation, or a
complex type system?


because, maybe?...
harder to parse than Reverse-Polish;
less generic than S-Expressions;
less familiar than more common syntax styles;
...

for example:
RPN can be parsed very quickly/easily, and/or readily mapped to a stack, 
giving its major merit. this gives it a use-case for things like textual 
representations of bytecode formats and similar. languages along the 
lines of PostScript or Forth can also make reasonable assembler 
substitutes, but with higher portability. downside: typically hard to read.
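
to make the "harder to parse" point above a bit more concrete: a
fixed-arity Polish-notation reader has to know each word's arity and
recurse, whereas RPN needs neither; a toy sketch (hypothetical operator
set, not any particular language):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* arity table: the four binary operators take 2 operands, numbers take 0 */
static int arity(const char *w) { return (strchr("+-*/", w[0]) && !w[1]) ? 2 : 0; }

static int eval(char **toks, int *pos)
{
    char *w = toks[(*pos)++];
    if (arity(w) == 2) {
        int a = eval(toks, pos);     /* recurse once per fixed operand */
        int b = eval(toks, pos);
        switch (w[0]) {
        case '+': return a + b;
        case '-': return a - b;
        case '*': return a * b;
        default:  return a / b;
        }
    }
    return atoi(w);                  /* arity 0: a plain number */
}

int main(void)
{
    char *expr[] = { "+", "2", "*", "3", "4" };   /* + 2 * 3 4 */
    int pos = 0;
    printf("%d\n", eval(expr, &pos));             /* prints 14 */
    return 0;
}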


S-Expressions, however, can represent a wide variety of structures. 
nearly any tree-structured data can be expressed readily in 
S-Expressions, and all they ask for in return is a few parenthesis. 
among other things, this makes them fairly good for compiler ASTs. 
downside: hard to match parens or type correctly.


common syntax (such as C-style), while typically harder to parse, and 
typically not all that flexible either, has all the usual stuff people 
expect in a language: infix arithmetic, precedence levels, statements 
and expressions, ... and the merit that it works fairly well for 
expressing most common things people will care to try to express with 
them. some people don't like semicolons and others don't like 
sensitivity to line-breaks or indentation, and one generally needs 
commas to avoid ambiguity, but most tend to agree that they would much 
rather be using this than either S-Expressions or RPN.


(and nevermind some attempts to map programming languages to XML based 
syntax designs...).


or, at least, this is how it seems to me.


ironically, IMO, it is much easier to type C-style syntax interactively 
while avoiding typing errors than it is to type S-Expression syntax 
interactively while avoiding typing errors (maybe experience, maybe not, 
dunno). typically, the C-style syntax requires fewer total characters as 
well.


I once designed a language syntax specially for the case of being typed 
interactively (for terseness and taking advantage of the keyboard 
layout), but it turned out to be fairly difficult to remember the syntax 
later.


some of my syntax designs have partly avoided the need for commas by 
making the parser whitespace sensitive regarding expressions, for 
example "a -b" will parse differently than "a-b" or "a - b". however, 
there are some common formatting quirks which would lead to frequent 
misparses with such a style. "foo (x+1);" (will parse as 2 expressions, 
rather than as a function call).


a partial downside is that it can lead to visual ambiguity if code is 
read using a variable-width font (as opposed to the "good and proper" 
route of using fixed-width fonts for everything... yes, this world is 
filled with evils like variable-width fonts and the inability to tell 
apart certain characters, like the Il1 issue and similar...).


standard JavaScript also uses a similar trick for "implicit semicolon 
insertion", with the drawback that one needs to use care when breaking 
expressions otherwise the parser may do its magic in unintended ways.



the world likely goes as it does due to lots of many such seemingly 
trivial tradeoffs.


or such...



Re: [fonc] Apple and hardware

2012-03-14 Thread BGB

On 3/14/2012 3:55 PM, Jecel Assumpcao Jr. wrote:




If you have a good version of confinement (which is pretty simple HW-wise) you
can use Butler Lampson's schemes for Cal-TSS to make a workable version of a
capability system.

The 286 protected mode was good enough for this, and was extended in the
386. I am not sure all modern x86 processors still implement these, and
if they do it is likely that actually using them will hurt performance
so much that it isn't an option in practice.


the TSS?...

it is still usable on x86 in 32-bit Protected-Mode.

however, it generally wasn't used much by operating systems, and in the 
transition to x86-64, was (along with the GDT and LDT) mostly reduced to 
a semi-vestigial structure.


its role is generally limited to holding register state and the stack 
pointers when doing inter-ring switches (such as an interrupt-handler 
transferring control into the kernel, or when transferring control into 
userspace).


however, it can no longer be used to implement process switching or 
similar on modern chips.




And, yep, I managed to get them to allow interpreters to run on the iPad, but 
was
not able to get Steve to countermand the "no sharing" rule.

That is a pity, though at least having native languages makes these
devices a reasonable replacement for my old Radio Shack PC-4 calculator.
I noticed that neither Matlab nor Mathematica are available for the
iPad, but only simple terminal apps that allow you to access these
applications running on your PC. What a waste!


IMHO, this is at least one reason to go for Android instead...

not that Android is perfect though, as admittedly I would prefer if I 
could have a full/generic ARM version of Linux or similar, but alas.


sadly, I am not getting a whole lot out of the tablet I have, 
development-wise, which is lame considering that was a major reason I 
bought it (I ended up doing far more development in Linux ARMv5TEL in 
QEMU preparing to try to port stuff to Android).


more preferable would have been:
if the NDK didn't suck as badly;
if there were, say, a C API for the GUI stuff (so one could more easily 
just use C without having to deal with Java or the JNI).


I would likely have been a little happier had Android been more like just 
an ARM build of a more generic Linux distro or something.



or such...



Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-14 Thread BGB

On 3/14/2012 11:31 AM, Mack wrote:

On Mar 13, 2012, at 6:27 PM, BGB wrote:



the issue is not that I can't imagine anything different, but rather that doing 
anything different would be a hassle with current keyboard technology:
pretty much anyone can type ASCII characters;
many other people have keyboards (or key-mappings) that can handle 
region-specific characters.

however, otherwise, typing unusual characters (those outside their current 
keyboard mapping) tends to be a bit more painful, and/or introduces editor 
dependencies, and possibly increases the learning curve (now people have to 
figure out how these various unorthodox characters map to the keyboard, ...).

more graphical representations, however, have a secondary drawback:
they can't be manipulated nearly as quickly or as easily as text.

one could be like "drag and drop", but the problem is that drag and drop is 
still a fairly slow and painful process (vs, hitting keys on the keyboard).


yes, there are scenarios where keyboards aren't ideal:
such as on an XBox360 or an Android tablet/phone/... or similar, but people 
probably aren't going to be using these for programming anyways, so it is 
likely a fairly moot point.

however, even in these cases, it is not clear there are many "clearly better" 
options either (on-screen keyboard, or on-screen tile selector, either way it is likely 
to be painful...).


simplest answer:
just assume that current text-editor technology is "basically sufficient" and call it 
"good enough".

Stipulating that having the keys on the keyboard "mean what the painted symbols 
show" is the simplest path with the least impedance mismatch for the user, there are 
already alternatives in common use that bear thinking on.  For example:

On existing keyboards, multi-stroke operations to produce new characters 
(holding down shift key to get CAPS, CTRL-ALT-TAB-whatever to get a special 
character or function, etc…) are customary and have entered average user 
experience.

Users of IDE's like EMACS, IntelliJ or Eclipse are well-acquainted with special 
keystrokes to get access to code completions and intention templates.

So it's not inconceivable to consider a similar strategy for "typing" non-character graphical elements.  One 
could think of say… CTRL-O, UP ARROW, UP ARROW, ESC to "type" a circle and size it, followed by CTRL-RIGHT 
ARROW, C to "enter" the circle and type a "c" inside it.

An argument against these strategies is the same one against command-line 
interfaces in the CLI vs. GUI discussion: namely, that without visual 
prompting, the possibilities that are available to be typed are not readily 
visible to the user.  The user has to already know what combination gives him 
what symbol.

One solution for mitigating this, presuming "rich graphical typing" was desirable, would 
be to take a page from the way "touch" type cell phones and tablets work, showing symbol 
maps on the screen in response to user input, with the maps being progressively refined as the user 
types to guide the user through constructing their desired input.

…just a thought :)


typing, like on phones...
I have seen 2 major ways of doing this:
hit a key multiple times to indicate the desired letter, with a certain 
timeout before it moves to the next character;
type out characters, the phone shows the first/most-likely possibility, 
and hit a key a bunch of times to cycle through the options.



another idle thought would be some sort of graphical/touch-screen 
keyboard, but it would be a matter of finding a way to make it not suck. 
using on-screen inputs in Android devices and similar kind of sucks:
pressure and sensitivity issues, comfort issues, lack of tactile 
feedback, smudges on the screen if one uses their fingers, and 
potentially scratches if one is using a stylus, ...


so, say, a touch-screen with these properties:
similarly sized to (or larger than) a conventional keyboard;
resistant to smudging, fairly long lasting, and easy to clean;
soft contact surface (me thinking sort of like those gel insoles for 
shoes), so that ideally typing isn't an experience of constantly hitting 
a piece of glass with ones' fingers (ideally, both impact pressure and 
responsiveness should be similar to a conventional keyboard, or at least 
a laptop keyboard);
ideally, some sort of tactile feedback (so, one can feel whether or not 
they are actually hitting the keys);
being dynamically reprogrammable (say, any app which knows about the 
keyboard can change its layout when it gains focus, or alternatively the 
user can supply per-app keyboard layouts);
maybe, there could be tabs to change between layouts, such as a US-ASCII 
tab, ...

...

with something like the above being common, I can more easily imagine 
people using non-ASCII based input methods.


say, one is typing in US-ASCII, hits a "math-sy

Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-14 Thread BGB

On 3/14/2012 8:57 AM, Loup Vaillant wrote:

Michael FIG wrote:

Loup Vaillant  writes:


You could also play the human compiler: use the better syntax in the
comments, and implement a translation of it in code just below.  But
then you have to manually make sure they are synchronized.  Comments
are good.  Needing them is bad.


Or use a preprocessor that substitutes the translation inline
automatically.


Which is a way of implementing the syntax… How is this different than
my "Then you write the parser"?  Sure you can use a preprocessor, but
you still have to write the macros for your new syntax.



in my case, this can be theoretically done already (writing new 
customized parsers), and was part of why I added block-strings.


most likely route would be translating code into ASTs, and maybe using 
something like "(defmacro)" or similar at the AST level.


another route could be I guess to make use of "quote" and "unquote", 
both of which can be used as expression-building features (functionally, 
they are vaguely similar to "quasiquote" in Scheme, but they haven't 
enjoyed so much use thus far).



a more practical matter though would be getting things "nailed down" 
enough so that larger parts of the system can be written in a language 
other than C.


yes, there is the FFI (generally seems to work fairly well), and one can 
shove script closures into C-side function pointers (provided arguments 
and return types are annotated and the types match exactly, but I don't 
entirely trust its reliability, ...).


slightly nicer would be if code could be written in various places which 
accepts script objects (either via interfaces or ex-nihilo objects).


abstract example (ex-nihilo object):
var obj={render: function() { ... } ... };
lbxModelRegisterScriptObject("models/script/somemodel", obj);

so, if some code elsewhere creates an object using the given model-name, 
then the script code is invoked to go about rendering it.


alternatively, using an interface:
public interface IRender3D { ... }//contents omitted for brevity
public class MyObject implements IRender3D { ... }
lbxModelRegisterScriptObject("models/script/somemodel", new MyObject());

granted, there are probably better (and less likely to kill performance) 
ways to make use of script objects (as-is, using script code to write 
objects for use in the 3D renderer is not likely to turn out well 
regarding the framerate and similar, at least until if/when there is a 
good solid JIT in place, and it can compete more on equal terms with C 
regarding performance).



mostly the script language was intended for use in the game's server 
end, where typically raw performance is less critical, but as-is, there 
is still a bit of a language-border issue that would need to be worked 
on here (I originally intended to write the server end mostly in script, 
but at the time the VM was a little less "solid" (poorer 
performance, more prone to leak memory and trigger GC, ...), and so the 
server end was written more "quick and dirty" in plain C, using a design 
fairly similar to a mix of the Quake 1 and 2 server-ends). as-is, it is 
not entirely friendly to the script code, so a little more work is needed.


another possible use case is related to world-construction tasks 
(procedural world-building and similar).


but, yes, all of this is a bit more of a "mundane" way of using a 
scripting language, but then again, everything tends to be built from 
the bottom up (and this just happens to be where I am currently at, at 
this point in time).


(maybe at the point when I am less stuck worrying about which 
language is used where and about cross-language interfacing issues, 
allowing things like alternative syntax, ... could be more worth 
exploring. but, in many areas, both C and C++ have a bit of a "gravity 
well"...).



or such...



Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-13 Thread BGB

On 3/13/2012 4:37 PM, Julian Leviston wrote:


On 14/03/2012, at 2:11 AM, David Barbour wrote:




On Tue, Mar 13, 2012 at 5:42 AM, Josh Grams wrote:


On 2012-03-13 02:13PM, Julian Leviston wrote:
>What is "text"? Do you store your "text" in ASCII, EBCDIC,
SHIFT-JIS or
>UTF-8?  If it's UTF-8, how do you use an ASCII editor to edit
the UTF-8
>files?
>
>Just saying' ;-) Hopefully you understand my point.
>
>You probably won't initially, so hopefully you'll meditate a bit
on my
>response without giving a knee-jerk reaction.

OK, I've thought about it and I still don't get it.  I understand
that
there have been a number of different text encodings, but I
thought that
the whole point of Unicode was to provide a future-proof way out
of that
mess.  And I could be totally wrong, but I have the impression
that it
has pretty good penetration.  I gather that some people who use the
Cyrillic alphabet often use some code page and China and Japan use
SHIFT-JIS or whatever in order to have a more compact representation,
but that even there UTF-8 tools are commonly available.

So I would think that the sensible thing would be to use UTF-8 and
figure that anyone (now or in the future) will have tools which
support
it, and that anyone dedicated enough to go digging into your data
files
will have no trouble at all figuring out what it is.

If that's your point it seems like a pretty minor nitpick.  What am I
missing?


Julian's point, AFAICT, is that text is just a class of storage that 
requires appropriate viewers and editors, doesn't even describe a 
specific standard. Thus, another class that requires appropriate 
viewers and editors can work just as well - spreadsheets, tables, 
drawings.


You mention `data files`. What is a `file`? Is it not a service 
provided by a `file system`? Can we not just as easily hide a storage 
format behind a standard service more convenient for ad-hoc views and 
analysis (perhaps RDBMS). Why organize into files? Other than 
penetration, they don't seem to be especially convenient.


Penetration matters, which is one reason that text and filesystems 
matter.


But what else has penetrated? Browsers. Wikis. Web services. It 
wouldn't be difficult to support editing of tables, spreadsheets, 
drawings, etc. atop a web service platform. We probably have more 
freedom today than we've ever had for language design, if we're 
willing to stretch just a little bit beyond the traditional 
filesystem+text-editor framework.


Regards,

Dave


Perfectly the point, David. A "token/character" in ASCII is equivalent 
to a byte. In SHIFT-JIS, it's two, but this doesn't mean you can't 
express the equivalent meaning in them (ie by selecting the same 
graphemes) - this is called translation) ;-)


this is partly why there are "codepoints".
one can work in terms of codepoints, rather than bytes.

a text editor may internally work in UTF-16, but saves its output in 
UTF-8 or similar.

ironically, this is basically what I am planning/doing at the moment.

now, if/how the user will go about typing UTF-16 codepoints, this is not 
yet decided.
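
as a small aside, the codepoint-vs-bytes distinction mostly comes down to
the usual UTF-8 encoding dance when saving; a minimal sketch of emitting
one codepoint as UTF-8 bytes (ignoring surrogate/range validation):

/* encode a single Unicode codepoint as UTF-8; returns the byte count */
int utf8_encode(unsigned cp, unsigned char *out)
{
    if (cp < 0x80)    { out[0] = (unsigned char)cp; return 1; }
    if (cp < 0x800)   { out[0] = 0xC0 | (cp >> 6);
                        out[1] = 0x80 | (cp & 0x3F); return 2; }
    if (cp < 0x10000) { out[0] = 0xE0 | (cp >> 12);
                        out[1] = 0x80 | ((cp >> 6) & 0x3F);
                        out[2] = 0x80 | (cp & 0x3F); return 3; }
    out[0] = 0xF0 | (cp >> 18);
    out[1] = 0x80 | ((cp >> 12) & 0x3F);
    out[2] = 0x80 | ((cp >> 6) & 0x3F);
    out[3] = 0x80 | (cp & 0x3F);
    return 4;
}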



One of the most profound things for me has been understanding the 
ramifications of OMeta. It doesn't "just" parse streams of 
"characters" (whatever they are) in fact it doesn't care what the 
individual tokens of its parsing stream is. It's concerned merely with 
the syntax of its elements (or tokens) - how they combine to form 
certain rules - (here I mean "valid patterns of grammar" by rules). If 
one considers this well, it has amazing ramifications. OMeta invites 
us to see the entire computing world in terms of sets of 
problem-oriented-languages, where language is a liberal word that 
simply means a pattern of sequence of the constituent elements of a 
"thing". To PEG, it basically adds proper translation and true 
object-orientism of individual parsing elements. This takes a while to 
understand, I think.


Formats here become "languages", protocols are "languages", and so are 
any other kind of representation system you care to name (computer 
programming languages, processor instruction sets, etc.).


possibly.

I was actually sort of aware of a lot of this already though, but didn't 
consider it particularly relevant.



I'm postulating, BGB, that you're perhaps so ingrained in the current 
modality and approach to thinking about computers, that you maybe 
can't break out of it to see what else might be possible. I think it 
was turing, wasn't it, who postulated that his turing machines could 
work off ANY symbols... so if that's the case, and your p

Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-13 Thread BGB

On 3/12/2012 9:01 PM, David Barbour wrote:


On Mon, Mar 12, 2012 at 8:13 PM, Julian Leviston wrote:



On 13/03/2012, at 1:21 PM, BGB wrote:


although theoretically possible, I wouldn't really trust not
having the ability to use conventional text editors whenever
need-be (or mandate use of a particular editor).

for most things I am using text-based formats, including for
things like world-maps and 3D models (both are based on arguably
mutilated versions of other formats: Quake maps and AC3D models).
the power of text is that, if by some chance someone does need to
break out a text editor and edit something, the format wont
hinder them from doing so.



What is "text"? Do you store your "text" in ASCII, EBCDIC,
SHIFT-JIS or UTF-8? If it's UTF-8, how do you use an ASCII editor
to edit the UTF-8 files?

Just saying' ;-) Hopefully you understand my point.

You probably won't initially, so hopefully you'll meditate a bit
on my response without giving a knee-jerk reaction.




I typically work with the ASCII subset of UTF-8 (where ASCII and UTF-8 
happen to be equivalent).


most of the code is written to assume UTF-8, but languages are designed 
to not depend on any characters outside the ASCII range (leaving them 
purely for comments, and for those few people who consider using them 
for identifiers).


EBCDIC and SHIFT-JIS are sufficiently obscure that one can generally 
pretend that they don't exist (FWIW, I don't generally support codepages 
either).


a lot of code also tends to assume Modified UTF-8 (basically, the same 
variant of UTF-8 used by the JVM). typically, code will ignore things 
like character normalization or alternative orderings. a lot of code 
doesn't particularly know or care what the exact character encoding is.


some amount of code internally uses UTF-16 as well, but this is less 
common as UTF-16 tends to eat a lot more memory (and, some code just 
pretends to use UTF-16, when really it is using UTF-8).




Text is more than an arbitrary arcane linear sequence of characters. 
Its use suggests TRANSPARENCY - that a human could understand the 
grammar and content, from a relatively small sample, and effectively 
hand-modify the content to a particular end.


If much of our text consisted of GUIDs:
  {21EC2020-3AEA-1069-A2DD-08002B30309D}
This might as well be
  {BLAHBLAH-BLAH-BLAH-BLAH-BLAHBLAHBLAH}

The structure is clear, but its meaning is quite opaque.



yep.

this is also a goal, and many of my formats are designed to at least try 
to be human editable.
some number of them are still often hand-edited as well (such as texture 
information files).



That said, structured editors are not incompatible with an underlying 
text format. I think that's really the best option.


yes.

for example, several editors/IDEs have expand/collapse, but still use 
plaintext for the source-code.


Visual Studio and Notepad++ are examples of this, and a more advanced 
editor could do better (such as expand/collapse on arbitrary code blocks).


there are also things like auto-completion, ... which are also nifty and 
work fine with text.



Regarding multi-line quotes... well, if you aren't fixated on ASCII 
you could always use unicode to find a whole bunch more brackets:

http://www.fileformat.info/info/unicode/block/cjk_symbols_and_punctuation/images.htm
http://www.fileformat.info/info/unicode/block/miscellaneous_technical/images.htm
http://www.fileformat.info/info/unicode/block/miscellaneous_mathematical_symbols_a/images.htm
Probably more than you know what to do with.



AFAIK, the common consensus in much of programmer-land, is that using 
Unicode characters as part of the basic syntax of a programming language 
borders on evil...



I ended up using:
<[[ ... ]]>
and:
""" ... """ (basically, same syntax as Python).

these seem probably like good enough choices.

currently, the <[[ and ]]> braces are not real tokens, and so will only 
be parsed specially as such in the particular contexts where they are 
expected to appear.


so, if one types:
2<[[3, 4], [5, 6]]
the '<' will be parsed as a less-than operator.

but, if one writes instead:
var str=<[[
some text...
more text...
]]>;

it will parse as a multi-line string...

both types of string are handled specially by the parser (rather than 
being handled by the tokenizer, as are normal strings).
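
(for illustration, a rough Python sketch of this context-dependent handling; the function names and details are invented here, this is not the actual parser:)

def read_block_string(src: str, pos: int):
    # assumes src[pos:] starts with '<[['; returns (string, new_pos)
    pos += 3                      # skip the opener
    depth = 1                     # <[[ ... ]]> pairs may nest evenly
    out = []
    while pos < len(src) and depth > 0:
        if src.startswith("<[[", pos):
            depth += 1; out.append("<[["); pos += 3
        elif src.startswith("]]>", pos):
            depth -= 1
            if depth > 0:
                out.append("]]>")
            pos += 3
        else:
            out.append(src[pos]); pos += 1
    return "".join(out), pos

def next_value(src: str, pos: int):
    # called only where the parser expects a value (for example, after '=')
    if src.startswith("<[[", pos):
        return read_block_string(src, pos)
    # ... fall through to numbers, quoted strings, identifiers, etc.
    raise NotImplementedError

# where a value is expected, <[[ ... ]]> reads as a multi-line string:
text, _ = next_value("<[[\nsome text...\nmore text...\n]]>", 0)
print(repr(text))
# in expression position, '2<[[3, 4], [5, 6]]' would instead be tokenized
# with '<' as an ordinary less-than operator (not shown here).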



or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-12 Thread BGB

On 3/12/2012 6:31 PM, Josh McDonald wrote:
Since it's your own system end-to-end, why not just stop editing 
source as a stream of ascii characters? Some kind of simple structured 
editor would let you put whatever you please in strings without 
requiring any escaping at all. It'd also make the parsing simpler :)




although theoretically possible, I wouldn't really trust not having the 
ability to use conventional text editors whenever need-be (or mandate 
use of a particular editor).


for most things I am using text-based formats, including for things like 
world-maps and 3D models (both are based on arguably mutilated versions 
of other formats: Quake maps and AC3D models). the power of text is 
that, if by some chance someone does need to break out a text editor and 
edit something, the format won't hinder them from doing so.



but, yes, that "Inventing on Principle / Magic Ink" video did rather get 
my interest up in terms of wanting to support a much more streamlined 
script-editing interface.


I recently had a bit of fun writing small script fragments to blow up 
light sources and other things, and figure if I can get a more advanced 
text-editing interface thrown together, more interesting things might 
also be possible.


"blow the lights", all nearby light sources explode (with fiery particle 
explosion effects and sounds), and the area goes dark.


current leaning is to try to throw something together vaguely 
QBasic-like (with a "proper" text editor, and probably F5 as the "Run" 
key, ...).


as-is, I already have an ed / edlin-style text editor, and ALT + 1-9 as 
console-change keys (and now have multiple consoles, sort of like Linux 
or similar), ... was considering maybe the fancier text editor would use 
ALT-SHIFT + A-Z for switching between modules. will see what I can do here.



or such...



--

"Enjoy every sandwich." - WZ

Josh 'G-Funk' McDonald
   - j...@joshmcdonald.info 




___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Error trying to compile COLA

2012-03-12 Thread BGB

On 3/12/2012 10:24 AM, Martin Baldan wrote:


that is a description of random data, which granted, doesn't apply to most
(compressible) data.
that wasn't really the point though.

I thought the original point was that there's a clear-cut limit to how
much redundancy can be eliminated from computing environments, and
that thousand-fold (and beyond) reductions in code size per feature
don't seem realistic. Then the analogy from data compression was used.
I think it's a pretty good analogy, but I don't think there's a
clear-cut limit we can estimate in advance, because meaningful data
and computations are not random to begin with. Indeed, there are
islands of stability where you've cut all the visible cruft and you
need new theoretical insights and new powerful techniques to reduce
the code size further.


this is possible, but it assumes, essentially, that one doesn't run into 
such a limit.


if one gets to a point where every "fundamental" concept is only ever 
expressed once, and everything is built from preceding fundamental 
concepts, then this is a limit, short of dropping fundamental concepts.




for example, I was able to devise a compression scheme which reduced
S-Expressions to only 5% of their original size. now what if I want 3%, or 1%?
this is not an easy problem. it is much easier to get from 10% to 5% than to
get from 5% to 3%.

I don't know, but there may be ways to reduce it much further if you
know more about the sexprs themselves. Or maybe you can abstract away
the very fact that you are using sexprs. For instance, if those sexprs
are a Scheme program for a tic-tac-toe player, you can say "write a
tic-tac-toe player in Scheme" and you capture the essence.


the sexprs were mostly related to scene-graph delta messages (one could 
compress a Scheme program, but this isn't really what it is needed for).


each expression basically tells about what is going on in the world at 
that moment (objects appearing and moving around, lights turning on/off, 
...). so, basically, a semi-constant message stream.


the specialized compressor was doing better than Deflate, but was also 
exploiting a lot more knowledge about the expressions as well: what the 
basic types are, how things fit together, ...


theoretically, about the only way to really do much better would be 
using a static schema (say, where the sender and receiver have a 
predefined set of message symbols, predefined layout templates, ...). 
personally though, I really don't like these sorts of compressors (they 
are very brittle, inflexible, and prone to version issues).


this is essentially what "write a tic-tac-toe player in Scheme" implies:
both the sender and receiver of the message need to have a common notion 
of both "tic-tac-toe player" and "Scheme". otherwise, the message can't 
be decoded.


a more general strategy is basically to build a model "from the ground 
up", where the sender and reciever have only basic knowledge of basic 
concepts (the basic compression format), and most everything else is 
built on the fly based on the data which has been seen thus far (old 
data is used to build new data, ...).


in LZ77 based algos (Deflate: ZIP/GZ/PNG; LZMA: 7zip; ...), this takes 
the form of a "sliding window", where any recently seen character 
sequence is simply reused (via an offset/length run).


in my case, it is built from primitive types (lists, symbols, strings, 
fixnums, flonums, ...).
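
(a toy Python sketch of the LZ77 sliding-window idea, for illustration only; this is not the specialized compressor described above:)

def lz77_tokens(data: bytes, window: int = 4096, min_match: int = 3):
    pos, out = 0, []
    while pos < len(data):
        start = max(0, pos - window)
        best_len, best_off = 0, 0
        # naive scan of the window for the longest match starting at 'pos';
        # matches may overlap the current position (run-length style), as in LZ77
        for cand in range(start, pos):
            length = 0
            while (pos + length < len(data)
                   and data[cand + length] == data[pos + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, pos - cand
        if best_len >= min_match:
            out.append(("match", best_off, best_len))    # offset/length run
            pos += best_len
        else:
            out.append(("lit", data[pos]))               # literal byte
            pos += 1
    return out

print(lz77_tokens(b"abcabcabcabd"))
# [('lit', 97), ('lit', 98), ('lit', 99), ('match', 3, 8), ('lit', 100)]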




I expect much of future progress in code reduction to come from
automated integration of different systems, languages and paradigms,
and this integration to come from widespread development and usage of
ontologies and reasoners. That way, for instance, you could write a
program in BASIC, and then some reasoner would ask you questions such
as "I see you used a GOTO to build a loop. Is that correct?" or "this
array is called 'clients'  , do you mean it as in server/client
architecture or in the business sense?" . After a few questions like
that, the system would have a highly descriptive model of what your
program is supposed to do and how it is supposed to do it. Then it
would be able to write an equivalent program in any other programming
language. Of course, once you have such a system, there would be much
more powerful user interfaces than some primitive programming
language. Probably you would speak in natural language (or very close)
and use your hands to point at things. I know it sounds like full-on
AI, but I just mean an expert system for programmers.


and, of course, such a system would likely be, itself, absurdly complex...

this is partly the power of information entropy though:
it can't really be created or destroyed, only really moved around from 
one place to another.



so, one can express things simply to a system, and it gives powerful 
outputs, but likely the system itself is very complex. one can express 
things to a simple system, but generally this act of expression tends to 
be much more complex. in either case, th

Re: [fonc] Error trying to compile COLA

2012-03-11 Thread BGB

On 3/11/2012 4:51 PM, Martin Baldan wrote:

I won't pretend I really know what I'm talking about, I'm just
guessing here, but don't you think the requirement for "independent
and identically-distributed random variable data" in Shannon's source
coding theorem may not be applicable to pictures, sounds or frame
sequences normally handled by compression algorithms?


that is a description of random data, which granted, doesn't apply to 
most (compressible) data.

that wasn't really the point though.

once one gets to a point where one's data looks like this, then further 
compression is no longer possible (hence why there is a limit).


typically, compression will transform low-entropy data (with many 
repeating patterns and redundancies) into a smaller amount of 
high-entropy compressed data (with almost no repeating patterns or 
redundancy).
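
(a small Python sketch of that point, measuring byte-level Shannon entropy before and after Deflate; the test string here is made up:)

import math
import zlib
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    # byte-level Shannon entropy, in bits per byte
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

redundant = b"the quick brown fox " * 500       # lots of repetition
packed = zlib.compress(redundant, 9)

print(len(redundant), round(entropy_bits_per_byte(redundant), 2))   # big, low entropy
print(len(packed),    round(entropy_bits_per_byte(packed), 2))      # small, close to 8 bits/byte
print(len(zlib.compress(packed, 9)))    # recompressing gains almost nothing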




I mean, many
compression techniques rely on domain knowledge about the things to be
compressed. For instance, a complex picture or video sequence may
consist of a well-known background with a few characters from a
well-known inventory in well-known positions. If you know those facts,
you can increase the compression dramatically. A practical example may
be Xtranormal stories, where you get a cute 3-D animated dialogue from
a small script.


yes, but this can only compress what redundancies exist.
once the redundancies are gone, one is at a limit.

specialized knowledge allows one to do a little better, but does not 
change the basic nature of the limit.


for example, I was able to devise a compression scheme which reduced 
S-Expressions to only 5% of their original size. now what if I want 3%, or 
1%? this is not an easy problem. it is much easier to get from 10% to 5% 
than to get from 5% to 3%.



the big question then is how much redundancy exists within a typical OS, 
or other large piece of software?


I expect one can likely reduce it by a fair amount (such as by 
aggressive refactoring and DSLs), but there will likely be a bit of a 
limit, and once one approaches this limit, there is little more that can 
be done (as it quickly becomes a fight against diminishing returns).


otherwise, one can start throwing away features, but then there is still 
a limit, namely how much can one discard and still keep the "essence" of 
the software intact.



although many current programs are, arguably, huge, the vast majority of 
the code is likely still there for a reason, and is unlikely to be the 
result of programmers just endlessly writing the same stuff over and over 
again, or of other simple patterns. rather, it more likely consists of 
piles of special-case logic and optimizations and similar.



(BTW: now have in-console text editor, but ended up using full words for 
most command names, seems basically workable...).



Best,

-Martin

On Sun, Mar 11, 2012 at 7:53 PM, BGB  wrote:

On 3/11/2012 5:28 AM, Jakub Piotr Cłapa wrote:

On 28.02.12 06:42, BGB wrote:

but, anyways, here is a link to another article:
http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem


Shannon's theory applies to lossless transmission. I doubt anybody here
wants to reproduce everything down to the timings and bugs of the original
software. Information theory is not thermodynamics.


Shannon's theory also applies, to some extent, to lossy transmission, as it also sets a
lower bound on the size of the data as expressed with a certain degree of
loss.

this is why, for example, with JPEGs or MP3s, getting a smaller size tends
to result in reduced quality. the higher quality can't be expressed in a
smaller size.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc




Re: [fonc] Error trying to compile COLA

2012-03-11 Thread BGB

On 3/11/2012 5:28 AM, Jakub Piotr Cłapa wrote:

On 28.02.12 06:42, BGB wrote:

but, anyways, here is a link to another article:
http://en.wikipedia.org/wiki/Shannon%27s_source_coding_theorem


Shannon's theory applies to lossless transmission. I doubt anybody 
here wants to reproduce everything down to the timings and bugs of the 
original software. Information theory is not thermodynamics.




Shannon's theory also applies, to some extent, to lossy transmission, as it also 
sets a lower bound on the size of the data as expressed with a certain 
degree of loss.


this is why, for example, with JPEGs or MP3s, getting a smaller size 
tends to result in reduced quality. the higher quality can't be 
expressed in a smaller size.
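
(a toy Python illustration of that trade-off, standing in for JPEG/MP3: quantizing a signal more coarsely shrinks the compressed size but grows the reconstruction error; the signal and step sizes are arbitrary choices here:)

import math
import zlib

samples = [math.sin(i / 10.0) for i in range(5000)]    # a smooth test signal

for step in (0.01, 0.05, 0.25):                        # quantization step = "quality knob"
    quantized = [round(s / step) for s in samples]
    encoded = zlib.compress(bytes(str(quantized), "ascii"), 9)
    error = max(abs(q * step - s) for q, s in zip(quantized, samples))
    # coarser steps give a smaller encoding but a larger maximum error
    print(f"step={step:<5} compressed={len(encoded):>6} bytes  max_error={error:.4f}")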



I had originally figured that the assumption would have been to try to 
recreate everything in a reasonably feature-complete way.



this means such things in the OS as:
an OpenGL implementation;
a command-line interface, probably implementing ANSI / VT100 style 
control-codes (even in my 3D engine, my in-program console currently 
implements a subset of these codes);

a loader for program binaries (ELF or PE/COFF);
POSIX or some other similar OS APIs;
probably a C compiler, assembler, linker, run-time libraries, ...;
network stack, probably a web-browser, ...;
...

then it would be a question of how small one could get everything while 
still implementing a reasonably complete (if basic) feature-set, using 
any DSLs/... one could think up to shave off lines of code.


one could probably shave off OS-specific features which few people use 
anyways (for example, no need to implement support for things like GDI 
or the X11 protocol). a "simple" solution being that OpenGL largely is 
the interface for the GUI subsystem (probably with a widget toolkit 
built on this, and some calls for things not directly supported by 
OpenGL, like managing mouse/keyboard/windows/...).


also, potentially, a vast amount of what would be standalone tools, 
could be reimplemented as library code and merged (say, one has the 
"shell" as a kernel module, which directly implements nearly all of the 
basic command-line tools, like ls/cp/sed/grep/...).


the result of such an effort, under my estimates, would likely still end 
up in the Mloc range, but maybe one could get from say, 200 Mloc (for a 
Linux-like configuration) down to maybe about 10-15 Mloc, or if one 
tried really hard, maybe closer to 1 Mloc, and much smaller is fairly 
unlikely.



apparently this wasn't the plan though, rather the intent was to 
substitute something entirely different in its place, but this sort of 
implies that it isn't really feature-complete per-se (and it would be a 
bit difficult trying to port existing software to it).


someone asks: "hey, how can I build Quake 3 Arena for your OS?", and 
gets back a response roughly along the lines of "you will need to 
largely rewrite it from the ground up".


much nicer and simpler would be if it could be reduced to maybe a few 
patches and modifying some of the OS glue stubs or something.



(tangent time):

but, alas, there seems to be a bit of a philosophical split here.

I tend to be a bit more conservative, even if some of this stuff is put 
together in dubious ways. one adds features, but often ends up 
jerry-rigging things, and using bits of functionality in different 
contexts: like, for example, an in-program command-entry console is not 
normally where one expects ANSI codes, but at the time, it seemed a sane 
enough strategy (adding ANSI codes was a fairly straightforward way to 
support things like embedding color information in console message 
strings, ...). so, the basic idea still works, and so was applied in a 
new context (a console in a 3D engine, vs a terminal window in the OS).


side note: internally, the console is represented as a 2D array of 
characters, and another 2D array to store color and modifier flags 
(underline, strikeout, blink, italic, ...).
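
(a rough Python sketch of that representation, with the details invented here: parallel 2D arrays for characters and color/attribute flags, fed by a tiny subset of ANSI SGR codes:)

class Console:
    def __init__(self, cols=80, rows=25):
        self.cols, self.rows = cols, rows
        self.chars = [[" "] * cols for _ in range(rows)]
        self.attrs = [[0x07] * cols for _ in range(rows)]   # low 4 bits: fg color
        self.cur = 0x07
        self.x = self.y = 0

    def write(self, text):
        i = 0
        while i < len(text):
            ch = text[i]
            if ch == "\x1b" and text[i + 1:i + 2] == "[":    # ANSI escape
                j = text.index("m", i)                       # only SGR codes handled here
                for code in text[i + 2:j].split(";"):
                    n = int(code or "0")
                    if n == 0:
                        self.cur = 0x07                      # reset attributes
                    elif 30 <= n <= 37:
                        self.cur = (self.cur & ~0x0F) | (n - 30)   # foreground color
                i = j + 1
            elif ch == "\n":
                self.x, self.y = 0, self.y + 1
                i += 1
            else:
                self.chars[self.y][self.x] = ch
                self.attrs[self.y][self.x] = self.cur
                self.x += 1
                i += 1

con = Console()
con.write("lights: \x1b[31mexploded\x1b[0m\n")
print("".join(con.chars[0]).rstrip())
print(con.attrs[0][8])    # the cell under 'e' carries the red (1) foreground flag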


the console can be used both for program-related commands, accessing 
"cvars", and for evaluating script fragments (sadly, limited to what can 
be reasonably typed into a console command, which can be a little 
limiting for much more than "make that thing over there explode" or 
similar). functionally, the console is less advanced than something like 
bash or similar.


I have also considered the possibility of supporting multiple consoles, 
and maybe a console-integrated text-editor, but have yet to decide on 
the specifics (I am torn between a specialized text-editor interface, or 
making the text editor be a console command which hijacks the console 
and probably does most of its user-interface via ANSI codes or similar...).


but, it is not obvious what is the "best" way to integrate a text-editor 
into the UI for a 3D engine, hence why I have had this idea floating 
around for months, but haven't really acted on 

[fonc] Block-Strings / Heredocs (Re: Magic Ink and Killing Math)

2012-03-10 Thread BGB


On 3/10/2012 2:21 AM, Wesley Smith wrote:

most notable thing I did recently (besides some fiddling with getting a new
JIT written), was adding a syntax for block-strings. I used<[[ ... ]]>
rather than triple-quotes (like in Python), mostly as this syntax is more
friendly to nesting, and is also fairly unlikely to appear by accident, and
couldn't come up with much "obviously better" at the moment, "<{{ ... }}>"
was another considered option (but is IIRC already used for something), as
was the option of just using triple-quote (would work, but isn't readily
nestable).


You should have a look at Lua's long string syntax if you haven't already:

[[ my
long
string]]


this was briefly considered, but would have a much higher risk of clashes.

consider someone wants to type a nested array:
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
which is not so good if this array is (randomly) parsed as a string.

preferable is to try to avoid syntax which is likely to appear by 
chance, as then programmers have to use extra caution to avoid any 
"magic sigils" which might have unintended behaviors, but can pop up 
randomly as a result of typing code using only more basic constructions 
(I try to avoid this much as I do ambiguities in general, and it is partly 
also why, IMO, the common angle-bracket syntax for templates/generics is a 
bit nasty).



the syntax:
<[[ ... ]]>

was chosen as it had little chance of clashing with other valid syntax 
(apart from, potentially, the CDATA end marker for XML, which at present 
would need to be escaped if using this syntax for globs of XML).


it is possible, as the language does include unary "<" and ">" 
operators, which could, conceivably, be applied to a nested array. this 
is, however, rather unlikely, and could be fixed easily enough with a space.


as-is, they have an even-nesting rule.
WRT uneven-nesting, they can be escaped via '\' (don't really like, as 
it leaves the character as magic...).


<[[
this string has an embedded \]]>...
but this is ok.
]]>


OTOH (other remote possibilities):
<{ ... }>
was already used for "insert-here" expressions in XML literals:
<{generateSomeNode()}>

<(...)> or <((...))> just wouldn't work (high chance of collision).

#(...), #[...], and #{...} are already in use (tuple, float vector or 
matrix, and list).


example:
vector: #[0, 0, 0]
quaternion: #[0, 0, 0, 1]Q
matrix: #[[1, 0, 0] [0, 1, 0] [0, 0, 1]]
list: #{#foo, 2, 3; #v}
note: (...) parens, [...] array, {...} dictionary/object (example: "{a: 
3, y: 4}").


@(...), @[...], and @{...} are still technically available.

also possible:
/[...]/ , /[[...]]/
would be "passable" mostly only as /.../ is already used for regex 
syntax (inherited from JS...).


hmm:

<>
(available, currently syntactically invalid).

likewise:
<\ ... \>, ...
<| ... |>

...

so, the issue is mostly lacking sufficient numbers of available (good) 
brace types.
in a few other cases, this lack has been addressed via the use of 
keywords and type-suffixes.



but, a keyword would be lame for a string, and a suffix wouldn't work.



You can nest by matching the number of '=' between the brackets:

[===[
a
long
string [=[ with a long string inside it ]=]
xx
]===]


this would be possible, as this syntax is not otherwise syntactically 
valid in the language.


[=[...]=]
would be at least possible.

not that I particularly like this syntax though...
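
(for illustration, a small Python sketch of how such level-matched long brackets could be scanned, were this syntax adopted; not an existing implementation:)

def read_long_string(src: str, pos: int):
    # the opener [=*[ records how many '=' it carries; only a closer ]=*]
    # with the same count terminates the string
    assert src[pos] == "["
    level = 0
    pos += 1
    while pos < len(src) and src[pos] == "=":
        level += 1
        pos += 1
    assert src[pos] == "[", "not a long-bracket opener"
    pos += 1
    closer = "]" + "=" * level + "]"
    end = src.index(closer, pos)          # raises ValueError if unterminated
    return src[pos:end], end + len(closer)

text, _ = read_long_string("[==[ outer [=[ inner ]=] still outer ]==]", 0)
print(text)   # " outer [=[ inner ]=] still outer "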


(inlined):

On 3/10/2012 2:43 AM, Ondřej Bílka wrote:

On Sat, Mar 10, 2012 at 01:21:42AM -0800, Wesley Smith wrote:
You should have a look at Lua's long string syntax if you haven't 
already: 

Better to be consistent with rest of scripting languages(bash,ruby,perl,python)
and use heredocs.


blarg...

heredoc syntax is nasty IMO...

I deliberately didn't use heredocs.

if I did, I would probably use the syntax:
#
possibly also with the Python syntax:
"""
...
"""


or such...

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Magic Ink and Killing Math

2012-03-10 Thread BGB

On 3/8/2012 9:32 PM, David Barbour wrote:
Bret Victor's work came to my attention due to a recent video, 
Inventing on Principle


http://vimeo.com/36579366

If you haven't seen this video, watch it. It's especially appropriate 
for the FoNC audience.




although I don't normally much agree with the concept of "principle", 
most of what was shown there was fairly interesting.


kind of makes some of my efforts (involving interaction with things via 
typing code fragments into a console) seem fairly weak... OTOH, at the 
moment, I am not entirely sure how such a thing would be implemented 
either (very customized JS interpreter? ...).


it also much better addresses the problem of "how do I easily get an 
object handle for that thing right there?", which is an unsolved issue 
in my case (one can directly manipulate things in the scene via script 
fragments entered as console commands, provided they can get a reference 
to the entity, which is often a much harder problem).



timeliness of feedback is something I can sort of relate to, as I tend 
to invest a lot more effort in things I can do fairly quickly and test, 
than in things which may take many hours or days before I see much of 
anything (and is partly a reason I invested so much effort into my 
scripting VM, rather than simply just write everything in C or C++ and 
call it good enough, even despite the fair amount of code that is 
written this way).



most notable thing I did recently (besides some fiddling with getting a 
new JIT written), was adding a syntax for block-strings. I used <[[ ... 
]]> rather than triple-quotes (like in Python), mostly as this syntax is 
more friendly to nesting, and is also fairly unlikely to appear by 
accident, and couldn't come up with much "obviously better" at the 
moment, "<{{ ... }}>" was another considered option (but is IIRC already 
used for something), as was the option of just using triple-quote (would 
work, but isn't readily nestable).


this was itself a result of a quick thing which came up while writing 
about something else:
how to deal with the problem of easily allowing user-defined syntax in a 
language without the (IMO, relatively nasty) feature of allowing 
context-dependent syntax in a parser?...


most obvious solution: some way to easily create large string literals, 
which could then be fed into a user-defined parser / eval. then one can 
partly sidestep the matter of "syntax within a syntax". granted, yes, it 
is a cheap hack...
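
(a minimal Python sketch of the idea, with a triple-quoted string standing in for the <[[ ... ]]> block string and a made-up key/value mini-syntax as the user-defined parser:)

def parse_config(text: str) -> dict:
    # a user-defined parser: "key = value" lines, '#' comments, blank lines ok
    out = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if line:
            key, _, value = line.partition("=")
            out[key.strip()] = value.strip()
    return out

blob = """
# texture information, hand-editable
diffuse  = base_wall01
normal   = base_wall01_n
scale    = 2.0
"""

print(parse_config(blob))
# {'diffuse': 'base_wall01', 'normal': 'base_wall01_n', 'scale': '2.0'}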




Anyhow, since then I've been perusing Bret Victor's other works at:
http://worrydream.com

Which, unfortunately, renders painfully slowly in my Chrome browser 
and relatively modern desktop machine. But sludging through the first 
page to reach content has been rewarding.


One excellent article is Magic Ink:

http://worrydream.com/MagicInk/

In this article, Victor asks `What is Software?` and makes a case that 
`interaction` should be a resource of last resort, that what we often 
need to focus on is presenting context-relevant information and 
graphics design, ways to put lots of information in a small space in a 
non-distracting way.


And yet another article suggests that we Kill Math:

http://worrydream.com/KillMath/

Which focuses on using more concrete and graphical representations to 
teach concepts and perform computations. Alan Kay's work teaching 
children (Doing With Images Makes Symbols) seems to be relevant here, 
and receives a quote.


Anyhow, I think there's a lot here everyone, if you haven't seen it 
all before.




nifty...

(although, for me, at the moment, it is after 2AM, I am needing to 
sleep...).




Regards,

Dave



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc




Re: [fonc] OT: Hypertext and the e-book

2012-03-09 Thread BGB

On 3/9/2012 7:59 AM, Eugen Leitl wrote:

On Thu, Mar 08, 2012 at 03:00:35PM -0800, Casey Ransberger wrote:


Books? First, the smell. Especially old books. I have a friend who has a 
Kindle. It smells *nothing* like a library, and I do think something is lost 
there.

Some people get olfactorically imprinted on dead tree
during their formative years. I personally like the smell
having basically grown up in libraries, but it's not
integral to the experience (and easily simulable, in
principle, for someone who would care to bring a cryotrap
into a library, and GC-MS the results thereof to be
able to synthesize the most relevant fragrances --
you could even encapsulate the result in the
polymer skin of an ebook reader to be given off
during use).
  


yeah. I personally don't really much like libraries, nor get much from 
the smell.


it is much like the smell of money: some people like it (because 
it is associated with money and wealth, somehow?...). many other people 
think money tends to smell nasty.



there are some smells I find more preferable, like the smell of coffee:
maybe because smelling coffee soon often leads to drinking coffee?...

likewise for food: smell food, eat food...


but, I have lived most of my life in a world where the internet is 
readily available and most desirable information has been available online.




It's also, ironically, the weight of them. The sense of holding something *real* that in turn holds 
information. When you move, it takes work to keep a book, so one tends to keep the most 
"important" books one has, whereas with digital we just keep whatever we have 
"rights" to read, because there's no real expense in keeping. We also can't really share, 
at least not yet. Not in any legal model.

You can have heat maps of things you access, or
order items on virtual bookshelves. As to legality of
sharing: nobody cares. It's not enforcible, anyway.
  


yeah.

there is also a fair amount which is free to download and free to share.


many people and companies often only ask money for the printed versions:
expensive Intel docs in printed form;
free Intel docs in PDF form;
...

granted, it is not always so:
ISO wants money for PDFs;
...


granted, I am not talking here about "online bookstores" or 
device-specific formats, which exist, but I don't really deal with them 
(I don't actually have an e-book reader device, so am generally free of 
what hassles exist with proprietary e-book formats...).


presumably, a "better" solution is generic text/HTML/PDFs/...

PostScript could be nice, if there were decent viewers for it.

also possible is "Office Open XML" or "Open Document Format", both of 
which would allow e-books to be read in roughly the same form as in a 
word processor.




Second: when I finish a book, I usually give it away to someone else who'd 
enjoy it. Unless I've missed a headline, I can't do this with ebooks any more 
readily than that dubstep-blackmetal-rap album we still need to record when I 
buy it on iTunes (or whatever.)

Funny, I send ebooks as email attachments just fine.
  


yep.

email attachments work, and don't require physical proximity.
physical proximity works, when people have social contacts.
otherwise, requiring physical proximity is an inconvenience.


sending something via the USPS is also far less convenient than passing 
a physical object (one has to buy stamps, wrap it, write the address, 
and take the resulting package either to the post-office or to a drop 
box). how convenient is this? not all that convenient.


by comparison, an email attachment is trivially simple.



___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] OT: Hypertext and the e-book

2012-03-08 Thread BGB

On 3/8/2012 12:34 PM, Max Orhai wrote:



On Thu, Mar 8, 2012 at 7:07 AM, Martin Baldan wrote:


> - Print technology is orders of magnitude more environmentally benign
> and affordable.

That seems a pretty strong claim. How do you back it up? Low cost and
environmental impact are supposed to be some of the strong points of
ebooks.


Glad you asked! That was a pretty drastic simplification, and I'm 
conflating 'software' with 'hardware' too. Without wasting too much 
time, hopefully, here's what I had in mind.


I live in a city with some amount of printing industry, still. In the 
past, much more. Anyway, small presses have been part of civic life 
for centuries now, and the old-fashioned presses didn't require much 
in the way of imports, paper mills aside. I used to live in a smaller 
town with a mid-sized paper mill, too. No idea if they're still in 
business, but I've made my own paper, and it's not that hard to do 
well in small batches. My point is just that print technology 
(specifically the letterpress) can be easily found in the real world 
which is local, nontoxic, and "sustainable" (in the sense of only 
needing routine maintenance to last indefinitely) in a way that I 
find hard to imagine of modern electronics, at least at this point in 
time. Have you looked into the environmental cost of manufacturing and 
disposing of all our fragile, toxic gadgets which last two years or 
less? It's horrifying.




I would guess, apart from macro-scale parts/materials reuse (from 
electronics and similar), one could maybe:
grind them into dust and extract reusable materials by means of 
mechanical separation (magnetism, density, ..., which could likely 
separate out most bulk glass/plastic/metals/silicon/... which could then 
be refined and reused);
maybe feed whatever is left over into a plasma arc, and maybe use either 
magnetic fields or a centrifuge to separate various raw elements (dunno 
if this could be made practical), or maybe dissolve it with strong acids 
and use chemical means to extract elements (could also be expensive), or 
lacking a better (cost effective) option, simply discard it.



the idea for a magnetic-field separation could be:
feed material through a plasma arc, which will basically convert it into 
mostly free atoms;

a large magnetic coil accelerates the resultant plasma;
a secondary horizontal magnetic field is applied (similar to the one in 
a CRT), causing elements to deflect based on relative charge (valence 
electrons);
depending on speed and distance, there is likely to be a gravity based 
separation as well (mostly for elements which have similar charge but 
differ in atomic weight, such as silicon vs carbon, ...);
eventually, all of them ram into a wall (probably chilled), with a more 
or less 2D distribution of the various elements (say, one spot on the 
wall has a big glob of silicon, and another a big glob of gold, ...). 
(apart from mass separation, one will get mixes of "similarly charged" 
elements, such as globs of silicon carbide and titanium-zirconium and 
similar)


an advantage of a plasma arc is that it will likely result in some 
amount of carbon-monoxide and methane and similar as well, which can be 
burned as fuel (providing electricity needed for the process). this 
would be similar to a traditional gasifier.



but, it is possible that in the future, maybe some more advanced forms 
of manufacturing may become more readily available at the small scale.


a particular example is that it is now at least conceivably possible 
that lower-density lower-speed semiconductor electronics (such as 
polymer semiconductors) could be made at much smaller scales and cheaper 
than with traditional manufacturing (silicon wafers and optical 
lithography), but at this point there is little economic incentive for 
this (companies don't care, as they have big expensive fabs to make 
chips, and individuals and communities don't care as they don't have 
much reason to make their own electronics vs just buying those made by 
said large semiconductor manufacturers).


similarly, few people have much reason to invest much time or money in 
technologies which are likely to max out in the MHz range.


but, conceivably, one could make a CPU, and memory, essentially using 
conductive and semiconductive inks and old-style printing plates 
(possibly, say, on a celluloid substrate), if needed (the CPU itself 
probably sort of resembling a book...). also sort of imagining here the 
idle thought of movable-type logic gates and similar, ...



granted, such a scenario is very unlikely at present (it would likely 
only occur due to a collapse of current manufacturing or distribution 
architecture). any restoration of the ability to do large scale 
manufacture is likely to result in a quick return to faster and more 
powerful technologies (such as optical lithography).


apart from a loss of knowledge, it is 

Re: [fonc] OT: Hypertext and the e-book

2012-03-08 Thread BGB

On 3/8/2012 7:51 AM, David Corking wrote:

BGB said:

by contrast, a wiki is often a much better experience, and similarly allows
the option of being presented sequentially (say, by daisy chaining articles
together, and/or writing huge articles). granted, it could be made maybe a
little better with a good WYSIWYG style editing system.

potentially,  maybe, something like MediaWiki or similar could be used for
fiction and similar.

Take a look at both Wikibooks and the booki project (which publishes
flossmanuals.net)


so, apparently, yes...



a mystery is why, say, LCD panels can't be made to better utilize ambient
light

Why isn't the wonderful dual-mode screen used by the OLPC XO more widely used?


it is a mystery.

seems like it could be useful (especially for anyone who has ever tried 
to use a laptop... outside...).
back-lights just can't match up to the power of the sun, as even with 
full brightness, ambient background light makes the screen look dark (a 
loss of color is a reasonable tradeoff).



my brother also had a "Neo Geo Pocket", which was a handheld gaming 
device which was usable in direct sunlight (because it used reflection 
rather than a backlight).


apparently, there is also a type of experimental LCD which pulls off 
color without using a color mask, which could also be nifty if combined 
with the use of reflected light.



personally, I would much rather have an LCD than an electronic paper 
display, given a device with an LCD could presumably also be used as a 
computer of some sort, without very slow refreshing. like, say, a tablet 
style thing which is usable in direct sunlight. likewise, ones' e-books 
can be PDF's (vs some obscure device-specific format).




the one area I think printed books currently have a slight advantage (vs
things like Adobe Reader and similar), is the ability to quickly place
custom bookmarks (would be nice if one could define user-defined bookmarks
in Reader, and if it would remember wherever was the last place the user was
looking in a given PDF).

Apple Preview, and perhaps other PDF readers, already do this.


except, like many Apple products, it is apparently Mac only...

it seems like an obvious enough feature, but Adobe Reader doesn't have it.
I haven't really thought to check whether there are other PDF viewers that 
could do so.



Have fun! David
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc




Re: [fonc] OT: Hypertext and the e-book

2012-03-08 Thread BGB
(from ISO or ECMA or W3C or others).


OTOH, despite being fairly expensive, I have seen stuff which is fairly 
obviously crap in some of the required textbooks for college courses 
(some fairly solidly worse than the usual "teach yourself X in Y 
units-of-time" books), with the major difference that a "teach yourself" 
book is something like $15, rather than more like $150 for typical 
college textbooks...


one of the worst I saw was primarily screenshots from Visual Studio with 
arrows drawn on them with comments like "click here" and "drag this 
there", and pretty much the whole book was this, and I wasn't exactly 
impressed... (and each "chapter" was basically a bunch of "follow the 
screenshots and you will end up with whatever program was the assignment 
for this chapter...").


and the teacher didn't even accept any homework, it was simply "click to 
check yes that you have read chapter/done assignment, and receive credit".


not that I necessarily have a problem with easy classes, but there 
probably needs to be some sort of reasonable limit.


nevermind several classes that have apparently outsourced the homework 
to the textbook distributor.


sometimes though, it is kind of difficult to have a very positive 
opinion of "education" (or, at least, CS classes... general-education 
classes tend to be actually fairly difficult... but then the classes 
just suck).


a person is probably almost better off learning CS being self-taught, 
FWIW (except when it is the great will of parents that a person goes and 
gets a degree and so on).


unless maybe it is just personal perception, and to others the general 
education classes are "click yes to pass" and the CS classes are 
actually difficult (and actually involve doing stuff...). sometimes 
though, the classes are harder, and may actually ask the student to 
write some code (and I know someone who is having difficulty, having 
encountered such a class, with an assignment mostly consisting of using 
linked-lists and reading/writing the contents of text files).



Anyway, I probably wouldn't have replied to this post at all except 
that I wanted to let you all know about an especially relevant project 
which is trying to raise money to publish a book. I joined Kickstarter 
just to support this thing, and I am very reluctant to "join" websites 
these days. If you were at SPLASH 2010 in Reno, you might recall Slim 
from his Onward! film presentation. I think he's really onto 
something, although his language is a little touchy-feely. Please, 
check it out. If you believe better design is a necessary part of 
better computing (as do I) then please consider a pledge.


http://kck.st/whvn03

(Oops, I just checked, and he's made the goal! Well, I've already 
wrote this, and I still mean it, but perhaps with a little less urgency.)


-- Max



ok.



On Wed, Mar 7, 2012 at 4:11 PM, Mack <m...@mackenzieresearch.com> wrote:


I am a self-admitted Kindle and iPad addict, however most of the
people I know are "real book" aficionados for relatively
straight-forward reasons that can be summed up as:

-   Aesthetics:  digital readers don't even come close to
approximating the experience of reading a printed and bound paper
text.  To some folks, this matters a lot.

-   A feeling of connectedness with history: it's not a
difficult leap from turning the pages of a modern edition of
'Cyrano de Bergerac' to perusing a volume that was current in
Edmund Rostand's time.  Imagining that the iPad you hold in your
hands was once upon a shelf in Dumas Pere's study is a much bigger
suspension of disbelief.  For some people, this contributes to a
psychological distancing from the material being read.

-   Simplicity of sharing:  for those not of the technical
elite, sharing a favored book more closely resembles the kind of
matching of intrinsics that happens during midair refueling of
military jets than the simple act of dropping a dog-eared
paperback on a friend's coffee table.

-   Simplicity.  Period.  (Manual transmissions and paring
knives are still with us and going strong in this era of
ubiquitous automatic transmissions and food processors.  Facility
and convenience doesn't always trump simplicity and reliability.
 Especially when the power goes out.)

Remember Marshall Mcluhan's observation: "The medium is the
message"?  Until we pass a generational shift where the bulk of
readers have little experience of analog books, these
considerations will be with us.

-- Mack

m...@mackenzieresearch.com



On Mar 7, 2012, at 3:13 PM, BGB wrote:

> On 3/7/2012 3:24 AM, Ryan Mitchley wrote:

Re: [fonc] OT: Hypertext and the e-book

2012-03-07 Thread BGB
tended to prefer visual media 
(anime, TV, games, ...), since these tend to provide the stimulus 
up-front (everything that happens, one sees directly, so no need trying 
to burn mental energy imagining it...).


I guess how things go will depend mostly on the common majority or similar.


it is likely similar with books and programming:
people who like lots of books and reading, will tend to like doing so, 
and will make up the majority position of readers (as strange and alien 
as their behaviors may seem to others);
those who like programming will, similarly, continue doing so, and thus 
make up the majority position of programmers (as similarly strange and 
alien as this may seem, given how often and negatively many people 
depict "nerds" and similar...).


ultimately, whoever makes up the fields, controls the fields, and 
ultimately holds control over how things will be regarding said field. 
so, books are controlled by "literature culture", much like computers 
remain mostly under the control of "programmer culture" (except those 
parts under the control of "business culture" and similar...).



or such...





On Mar 7, 2012, at 3:13 PM, BGB wrote:


On 3/7/2012 3:24 AM, Ryan Mitchley wrote:

May be of interest to some readers of the list:

http://nplusonemag.com/bones-of-the-book


thoughts:
admittedly, I am not really much of a person for reading fiction (I tend mostly 
to read technical information, and most fictional material is more often 
experienced in the form of movies/TV/games/...).

I did find the article interesting though.

I wonder: why really do some people have such a thing for traditional books?

they are generally inconvenient, can't be readily accessed:
they have to be physically present;
one may have to go physically retrieve them;
it is not possible to readily access their information (searching is a pain);
...

by contrast, a wiki is often a much better experience, and similarly allows the 
option of being presented sequentially (say, by daisy chaining articles 
together, and/or writing huge articles). granted, it could be made maybe a 
little better with a good WYSIWYG style editing system.

potentially,  maybe, something like MediaWiki or similar could be used for 
fiction and similar.
granted, this is much less graphically elaborate than some stuff the article 
describes, but I don't think text is dead yet (and generally doubt that fancy 
graphical effects are going to kill it off any time soon...). even in digital 
forms (where graphics are moderately cheap), likely text is still far from dead.

it is much like how magazines filled with images have not killed books filled 
solely with text, despite both being printed media (granted, there are college 
textbooks, which are sometimes in some ways almost closer to being very 
large and expensive magazines in these regards: filled with lots of graphics, a new 
edition for each year, ...).


but, it may be a lot more about the information being presented, and who it is 
being presented to, than about how the information is presented. graphics work 
great for some things, and poor for others. text works great for some things, 
and kind of falls flat for others.

expecting all one thing or the other, or expecting them to work well in cases 
for which they are poorly suited, is not likely to turn out well.


I also suspect maybe some people don't like the finite resolution or usage of 
back-lighting or similar (like in a device based on a LCD screen). there are 
"electronic paper" technologies, but these generally have poor refresh times.

a mystery is why, say, LCD panels can't be made to better utilize ambient light 
(as opposed to needing all the light to come from the backlight). idle thoughts 
include using either a reflective layer, or a layer which responds strongly to 
light (such as a phosphorescent layer), placed between the LCD and the 
backlight.


but, either way, things like digital media and hypertext displacing the use of 
printed books may be only a matter of time.

the one area I think printed books currently have a slight advantage (vs things 
like Adobe Reader and similar), is the ability to quickly place custom 
bookmarks (would be nice if one could define user-defined bookmarks in Reader, 
and if it would remember wherever was the last place the user was looking in a 
given PDF).

the above is a place where web-browsers currently have an advantage, as one can more easily 
bookmark locations in a web-page (at least apart from "frames" evilness). a minor 
downside though is that bookmarks are less good for "temporarily" marking something.

that is, it would help if one could not only easily add bookmarks, but 
easily remove or update them as well.


the bigger possible issues (giving books a partial advantage):
they are much better for very-long-term archival storage (print a book with 
high-quality paper, and with luck, a person finding it in 1

Re: [fonc] OT: Hypertext and the e-book

2012-03-07 Thread BGB

On 3/7/2012 3:24 AM, Ryan Mitchley wrote:

May be of interest to some readers of the list:

http://nplusonemag.com/bones-of-the-book



thoughts:
admittedly, I am not really much of a person for reading fiction (I tend 
mostly to read technical information, and most fictional material is 
more often experienced in the form of movies/TV/games/...).


I did find the article interesting though.

I wonder: why really do some people have such a thing for traditional books?

they are generally inconvenient, can't be readily accessed:
they have to be physically present;
one may have to go physically retrieve them;
it is not possible to readily access their information (searching is a 
pain);

...

by contrast, a wiki is often a much better experience, and similarly 
allows the option of being presented sequentially (say, by daisy 
chaining articles together, and/or writing huge articles). granted, it 
could be made maybe a little better with a good WYSIWYG style editing 
system.


potentially,  maybe, something like MediaWiki or similar could be used 
for fiction and similar.
granted, this is much less graphically elaborate than some stuff the 
article describes, but I don't think text is dead yet (and generally 
doubt that fancy graphical effects are going to kill it off any time 
soon...). even in digital forms (where graphics are moderately cheap), 
likely text is still far from dead.


it is much like how magazines filled with images have not killed books 
filled solely with text, despite both being printed media (granted, 
there are college textbooks, which are sometimes in some ways almost 
closer to being very large and expensive magazines in these regards: 
filled with lots of graphics, a new edition for each year, ...).



but, it may be a lot more about the information being presented, and who 
it is being presented to, than about how the information is presented. 
graphics work great for some things, and poor for others. text works 
great for some things, and kind of falls flat for others.


expecting all one thing or the other, or expecting them to work well in 
cases for which they are poorly suited, is not likely to turn out well.



I also suspect maybe some people don't like the finite resolution or 
usage of back-lighting or similar (like in a device based on a LCD 
screen). there are "electronic paper" technologies, but these generally 
have poor refresh times.


a mystery is why, say, LCD panels can't be made to better utilize 
ambient light (as opposed to needing all the light to come from the 
backlight). idle thoughts include using either a reflective layer, or a 
layer which responds strongly to light (such as a phosphorescent layer), 
placed between the LCD and the backlight.



but, either way, things like digital media and hypertext displacing the 
use of printed books may be only a matter of time.


the one area I think printed books currently have a slight advantage (vs 
things like Adobe Reader and similar), is the ability to quickly place 
custom bookmarks (would be nice if one could define user-defined 
bookmarks in Reader, and if it would remember wherever was the last 
place the user was looking in a given PDF).


the above is a place where web-browsers currently have an advantage, as 
one can more easily bookmark locations in a web-page (at least apart 
from "frames" evilness). a minor downside though is that bookmarks are 
less good for "temporarily" marking something.


that is, it would help if one could not only easily add bookmarks, but 
easily remove or update them as well.



the bigger possible issues (giving books a partial advantage):
they are much better for very-long-term archival storage (print a book 
with high-quality paper, and with luck, a person finding it in 1000 or 
2000 years can still read it), but there is far less hope of most 
digital media remaining intact for anywhere near that long (most current 
digital media tends to have a life-span more measurable in years or 
maybe decades, rather than centuries).


most digital media requires electricity and is weak against things like 
EMP and similar, which also contributes to possible fragility.


these need not prevent use of electronic devices for convenience-sake or 
similar, but does come with the potential cost that, if things went 
particularly bad (societal collapse or widespread death or similar), the 
vast majority of all current information could be lost.


granted, it is theoretically possible that people could make bunkers 
with hard-copies of large amounts of information and similar printed on 
high-quality acid-free paper and so on (and then maybe further treat 
them with wax or polymers).


say, textual information is printed as text, and maybe data either is 
represented in a textual format (such as Base-85), or is possibly 
represented via a more compact system (a non-redundant or semi-redundant 
dot pattern).


say (quick calculation) one could fit up to around 34MB on a page at 72 
DPI, though possibly 16MB/pa
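
(for what it's worth, a rough version of that capacity arithmetic; the page size, DPI, and bits-per-dot below are assumptions, and the figures quoted above depend on choices the post doesn't spell out:)

def page_capacity_bytes(width_in=8.5, height_in=11.0, dpi=600, bits_per_dot=1):
    # raw dot count times bits per dot, before any error-correction overhead
    dots = (width_in * dpi) * (height_in * dpi)
    return dots * bits_per_dot / 8

for dpi in (72, 300, 600, 1200):
    kb = page_capacity_bytes(dpi=dpi) / 1024
    print(f"{dpi:>5} DPI, 1 bit/dot: ~{kb:,.0f} KB per page (before redundancy)")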

[fonc] on script performance and scalability (Re: Error trying to compile COLA)

2012-03-03 Thread BGB
basically, the same thing as before, but I encountered this elsewhere 
(on Usenet), and figured I might see what people here thought about it:

http://www.codingthewheel.com/game-dev/john-carmack-script-interpreters-considered-harmful

yes, granted, this is a different domain from what the people here are 
dealing with.



BTW: I was recently doing some fiddling with working on a new JIT 
(unrelated to the video), mostly restarting a prior effort (started 
writing a new JIT a few months ago, stopped working on it as there were 
more important things going on elsewhere, this effort is still not a 
high-priority though, ...).


not really sure if stuff related to writing a JIT is particularly 
relevant here, and no, I am not trying to spam, even if it may seem like 
it sometimes.



On 3/2/2012 10:25 AM, BGB wrote:

On 3/2/2012 3:07 AM, Reuben Thomas wrote:

On 2 March 2012 00:43, Julian Leviston  wrote:
What if the aim that superseded this was to make it available to the next
set of people, who can do something about real fundamental change around
this?

Then it will probably fail: why should anyone else take up an idea
that its inventors don't care to promote?


yeah.

most people are motivated essentially by "getting the job done", and 
if a technology doesn't exist yet for them to use, most often they 
will not try to implement one (instead finding something which exists 
and making it work), or if they do implement something, it will be 
"their thing, their way".


so, it makes some sense to try to get a concrete working system in 
place, which people will build on, and work on.


granted, nearly everything tends towards "big and complex", so there 
is not particularly much to gain by fighting it. if one can get more done 
in less code, this may be good, but I don't personally believe 
minimalism to be a good end-goal in itself (if it doesn't offer much 
over the existing options).



Perhaps what is needed is to ACTUALLY clear out the cruft. Maybe it's not
easy or possible through the "old" channels... too much work to convince too
many people who have so much history of the merits of tearing down the
existing systems.

The old channels are all you have until you create new ones, and
you're not going to get anywhere by attempting to tear down existing
systems; they will be organically overrun when alternatives become
more popular. But this says nothing about which alternatives become
more popular.



yep.

this is a world built on evolutions, rather than on revolutions.
a new thing comes along, out-competes the old thing, and eventually 
takes its place.

something new comes along, and does the same.
and so on...

the most robust technologies then are those which have withstood the 
test of time despite lots of competition, and often which have been 
able to adapt to better deal with the new challenges.


so, if one wants to defeat what exists, they may need to be able to 
create something decidedly "better" than what exists, at the things it 
does well, and should ideally not come with huge costs or drawbacks 
either.



consider, a new systems language (for competing against C):
should generate more compact code than C;
should be able to generate faster code;
should not have any mandatory library dependencies;
should ideally compile faster than C;
should ideally be easier to read and understand than C;
should ideally be more expressive than C;
...

and, avoid potential drawbacks:
impositions on the possible use cases (say, unsuitable for writing an 
OS kernel, ...);
high costs of conversion (can't inter-operate with C, is difficult to 
port existing code to);
steep learning curve or "weirdness" (should be easy to learn for C 
developers, shouldn't cause them to balk at weird syntax or semantics, 
...);

language or implementation is decidedly more complex than C;
most of its new features are useless for the use-case;
it poorly handles features essential to the use case;
...

but, if one has long lists, and compares many other popular languages 
against them, it is possible to generate estimates for how and why 
they could not displace C from its domain.


not that it means it is necessarily ideal for every use case, for 
example, Java and C# compete in domains where both C and C++ have 
often done poorly.


neither really performs ideally in the other's domain, as C works about 
as well for developing user-centric GUI apps as Java or C# works as a 
systems language: not very well.


and, neither side works all that well for high-level scripting, hence 
a domain largely controlled by Lua, Python, Scheme, JavaScript, and so 
on.


but, then these scripting languages often have problems scaling to the 
levels really needed for application software, often showing 
weaknesses, such as the merit (in the small) of more easily declaring 
variables while mostly ignoring types, becomes a mess of 
diff

Re: [fonc] Sorting the WWW mess

2012-03-02 Thread BGB

On 3/2/2012 8:37 AM, Martin Baldan wrote:

Julian,

I'm not sure I understand your proposal, but I do think what Google
does is not something trivial, straightforward or easy to automate. I
remember reading an article about Google's ranking strategy. IIRC,
they use the patterns of mutual linking between websites. So far, so
good. But then, when Google became popular, some companies started to
build link farms, to make themselves look more important to Google.
When Google finds out about this behavior, they kick the company to
the bottom of the index. I'm sure they have many secret automated
schemes to do this kind of thing, but it's essentially an arms race,
and it takes constant human attention. Local search is much less
problematic, but still you can end up with a huge pile of unstructured
data, or a huge bowl of linked spaghetti mess, so it may well make
sense to ask a third party for help to sort it out.

I don't think there's anything architecturally centralized about using
Google as a search engine, it's just a matter of popularity. You also
have Bing, Duckduckgo, whatever.


yeah.

the main thing Google does is scavenging and aggregating data.
and, they have done fairly well at it...

and they make money mostly via ads...



  On the other hand, data storage and bandwidth are very centralized.
Dropbox, Google docs, iCloud, are all symptoms of the fact that PC
operating systems were designed for local storage. I've been looking
at possible alternatives. There's distributed fault-tolerant network
filesystems like Xtreemfs (and even the Linux-based XtreemOS), or
Tahoe-LAFS (with object-capabilities!), or maybe a more P2P approach
such as Tribler (a tracker-free bittorrent), and for shared bandwidth
apparently there is a BittorrentLive (P2P streaming). But I don't know
how to put all that together into a usable computing experience. For
instance, squeak is a single file image, so I guess it can't benefit
from file-based capabilities, except if the objects were mapped to
files in some way. Oh, well, this is for another thread.


agreed.

just because I might want to have better internet file-systems, doesn't 
necessarily mean I want all my data to be off on someone's server somewhere.


much more preferable would be if I could remotely access data stored on 
my own computer.


the problem is that neither OSes nor networking hardware were really 
designed for this:
broadband routers tend to assume by default that the network is being 
used purely for pulling content off the internet, ...


at this point, it means convenience either requires some sort of central 
server to pull data from, or bouncing off of such a server (sort of like 
some sort of Reverse FTP, the computer holding the data connects to a 
server, and in turn makes its data visible on said server, and other 
computers connect to the server to access data stored on that PC, 
probably with some file-proxy magic and mirroring and similar...).


technically, the above could be like a more "organized" version of a P2P 
file-sharing system, and could instead focus more on sharing for 
individuals (between their devices) or between groups. unlike with a 
central server, it allows for much more storage space (one can easily 
have TB of shared space, rather than worrying about several GB or 
similar on some server somewhere).


nicer would be if it could offer a higher-performance alternative to a 
Mercurial- or Git-style system (rather than simply being a raw shared 
filesystem).



better though would be if broadband routers and DNS worked in a way 
which made it fairly trivial for pretty much any computer to be easily 
accessible remotely, without having to jerk off with port-forwarding and 
other things.



potentially, if/when the "last mile" internet migrates to IPv6, this 
could help (as then presumably both NAT and dynamic IP addresses can 
partly go away).


but, it is taking its time, and neither ISPs nor broadband routers seem 
to support IPv6 yet...





-Best

  Martin

On Fri, Mar 2, 2012 at 6:54 AM, Julian Leviston  wrote:

Right you are. Centralised search seems a bit silly to me.

Take object orientedism and apply it to search and you get a thing where
each node searches itself when asked...  apply this to a local-focussed
topology (i.e. spider-web search out) and utilise intelligent caching (so
search the localised caches first) and you get a better thing, no?

Why not do it like that? Or am I limited in my thinking about this?

Julian

On 02/03/2012, at 4:26 AM, David Barbour wrote:




Re: [fonc] Error trying to compile COLA

2012-03-02 Thread BGB

On 3/2/2012 3:07 AM, Reuben Thomas wrote:

On 2 March 2012 00:43, Julian Leviston  wrote:

What if the aim that superseded this was to make it available to the next
set of people, who can do something about real fundamental change around
this?

Then it will probably fail: why should anyone else take up an idea
that its inventors don't care to promote?


yeah.

most people are motivated essentially by "getting the job done", and if 
a technology doesn't exist yet for them to use, most often they will not 
try to implement one (instead finding something which exists and making 
it work), or if they do implement something, it will be "their thing, 
their way".


so, it makes some sense to try to get a concrete working system in 
place, which people will build on, and work on.


granted, nearly everything tends towards "big and complex", so there is 
not particularly much to gain by fighting it. if one can get more done in 
less code, this may be good, but I don't personally believe minimalism 
to be a good end-goal in itself (if it doesn't offer much over the 
existing options).




Perhaps what is needed is to ACTUALLY clear out the cruft. Maybe it's not
easy or possible through the "old" channels... too much work to convince too
many people who have so much history of the merits of tearing down the
existing systems.

The old channels are all you have until you create new ones, and
you're not going to get anywhere by attempting to tear down existing
systems; they will be organically overrun when alternatives become
more popular. But this says nothing about which alternatives become
more popular.



yep.

this is a world built on evolutions, rather than on revolutions.
a new thing comes along, out-competes the old thing, and eventually 
takes its place.

something new comes along, and does the same.
and so on...

the most robust technologies then are those which have withstood the 
test of time despite lots of competition, and often which have been able 
to adapt to better deal with the new challenges.


so, if one wants to defeat what exists, they may need to be able to 
create something decidedly "better" than what exists, at the things it 
does well, and should ideally not come with huge costs or drawbacks either.



consider, a new systems language (for competing against C):
should generate more compact code than C;
should be able to generate faster code;
should not have any mandatory library dependencies;
should ideally compile faster than C;
should ideally be easier to read and understand than C;
should ideally be more expressive than C;
...

and, avoid potential drawbacks:
impositions on the possible use cases (say, unsuitable for writing an OS 
kernel, ...);
high costs of conversion (can't inter-operate with C, is difficult to 
port existing code to);
steep learning curve or "weirdness" (should be easy to learn for C 
developers, shouldn't cause them to balk at weird syntax or semantics, ...);

language or implementation is decidedly more complex than C;
most of its new features are useless for the use-case;
it poorly handles features essential to the use case;
...

but, given lists like these, one can compare other popular languages 
against them, and generate estimates for how and why they could not 
displace C from its domain.


not that this means C is necessarily ideal for every use case: for 
example, Java and C# compete in domains where both C and C++ have often 
done poorly.


neither really performs ideally in the other's domain: C works about as 
well for developing user-centric GUI apps as Java or C# work as systems 
languages, which is to say, not very well.


and, neither side works all that well for high-level scripting, hence a 
domain largely controlled by Lua, Python, Scheme, JavaScript, and so on.


but, then, these scripting languages often have problems scaling to the 
levels really needed for application software, and weaknesses start to 
show: the merit (in the small) of easily declaring variables while mostly 
ignoring types becomes a mess of difficult-to-track bugs and run-time 
exceptions as the code gets bigger, and one may start running into 
problems with managing visibility and scope (as neither lexical nor 
global scope scales ideally well), ...


more so, a typical scripting language is likely to fail miserably as a 
systems language.

so, it also makes sense to define which sorts of domains a language targets.

...


for example, although my own language is gaining some features intended 
to increase scalability and performance (and some use in the 
"application" domain), its primary role remains that of a scripting 
language, generally for apps written in C and C++ (since C remains my 
main language, with C++ in second place).


in my case, assuming it is "good enough", it may eventually subsume some 
areas which are currently handled mostly by C code, but development is 
slow in some ways (and "change" often happens one piece at a time). 
things are stil

Re: [fonc] Error trying to compile COLA

2012-03-01 Thread BGB

On 3/1/2012 3:56 PM, Loup Vaillant wrote:

On 01/03/2012 22:58, Casey Ransberger wrote:

Below.

On Feb 29, 2012, at 5:43 AM, Loup Vaillant  wrote:


Yes, I'm aware of that limitation.  I have the feeling however that
IDEs and debuggers are overrated.


When I'm Squeaking, sometimes I find myself modeling classes with the 
browser but leaving method bodies to 'self break' and then write all 
of the actual code in the debugger. Doesn't work so well for hacking 
on the GUI, but, well.


Okay I take it back. Your use case sounds positively awesome.


I'm curious about 'debuggers are overrated' and 'you shouldn't need 
one.' Seems odd. Most people I've encountered who don't use the 
debugger haven't learned one yet.



Spot on.  The only debugger I have used up until now was a semi-broken
version of gdb (it tended to miss stack frames).



yeah...

sadly, the Visual Studio debugger will apparently miss stack frames, 
since it often doesn't know how to back-trace through code in areas it 
doesn't have debugging information for, even though presumably pretty 
much everything is using the EBP-chain convention for 32-bit code (one 
gets the address, followed by question-marks, and the little message 
"stack frames beyond this point may be invalid").



a lot of the time, this happens in my case in stack frames where the crash 
has occurred in code which has a call-path going through the BGBScript 
VM (and the debugger apparently isn't really sure how to back-trace 
through the generated code).


note: although I don't currently have a full/proper JIT, some amount of 
the execution path often does end up being through generated code (often 
through piece-wise generated code fragments).


ironically, in AMD Code Analyst, this apparently shows up as "unknown 
module", and often accounts for more of the total running time than does 
the interpreter proper (although typically still only 5-10%, as the bulk 
of the running time tends to be in my renderer and also in "nvogl32.dll" 
and "kernel.exe" and similar...).



or such...



Re: [fonc] Error trying to compile COLA

2012-03-01 Thread BGB

On 3/1/2012 2:58 PM, Casey Ransberger wrote:

Below.

On Feb 29, 2012, at 5:43 AM, Loup Vaillant  wrote:


Yes, I'm aware of that limitation.  I have the feeling however that
IDEs and debuggers are overrated.

When I'm Squeaking, sometimes I find myself modeling classes with the browser 
but leaving method bodies to 'self break' and then write all of the actual code 
in the debugger. Doesn't work so well for hacking on the GUI, but, well.

I'm curious about 'debuggers are overrated' and 'you shouldn't need one.' Seems 
odd. Most people I've encountered who don't use the debugger haven't learned 
one yet.


agreed.

the main reason I can think of why one wouldn't use a debugger is 
because none are available.
however, otherwise, a debugger is a fairly useful piece of software 
(generally used in combination with debug-logs and unit-tests and similar).


sadly, I don't yet have a good debugger in place for my scripting 
language, as mostly I am currently using the Visual Studio debugger 
(which, granted, can't really see into script code). this is less of an 
immediate issue though, as most of the project is plain C.




At one company (I'd love to tell you which but I signed a non-disparagement agreement) 
when I asked why the standard dev build of the product didn't include the debugger 
module, I was told "you don't need it." When I went to install it, I was told 
not to.

I don't work there any more...


makes sense.




Re: [fonc] Can semantic programming eliminate the need for Problem-Oriented Language syntaxes?

2012-03-01 Thread BGB

On 3/1/2012 10:25 AM, Martin Baldan wrote:
Yes, namespaces provide a form of "jargon", but that's clearly not 
enough. If it were, there wouldn't be so many programming languages. 
You can't use, say, Java imports to turn Java into Smalltalk, or 
Haskell or Nile. They have different syntax and different semantics. 
But in the end you describe the syntax and semantics with natural 
language. I was wondering about using a powerful controlled language, 
with a backend of, say, OWL-DL, and a suitable syntax defined using 
some tool like GF (or maybe OMeta?).




as for Java:
this is due in large part to Java's lack of flexibility and expressiveness.

but, for a language which is a good deal more flexible than Java, why not?

I don't think user-defined syntax is strictly necessary, but things 
would be very sad and terrible if one were stuck with Java's syntax 
(IMO: as far as C-family languages go, it is probably one of the least 
expressive).


a better example, I think, is Lisp's syntax: even though it is fairly 
limited at its core, and not particularly customizable (apart from reader 
macros or similar), it still allows a fair amount of customization via macros.



but, anyways, yes, the "language" problem is still a long way from 
solved, and so instead it is a constant stream of new languages trying 
to improve things here and there vs the ones which came before.





Re: [fonc] Error trying to compile COLA

2012-03-01 Thread BGB

On 3/1/2012 10:12 AM, Loup Vaillant wrote:

BGB wrote:

there is also, at this point, a reasonable lack of "industrial strength
scripting languages".
there are a few major "industrial strength" languages (C, C++, Java, C#,
etc...), and a number of scripting languages (Python, Lua, JavaScript,
...), but not generally anything to "bridge the gap" (combining the
relative dynamic aspects and ease of use of a scripting language, with
the power to "get stuff done" as in a more traditional language).


What could you possibly mean by "industrial strength scripting language"?

When I hear about an "industrial strength" tool, I mostly infer that 
the tool:

 - spurs low-level code (instead of high-level meaning),
 - is moderately difficult to learn (or even use),
 - is extremely difficult to implement,
 - has paid-for support.



expressiveness is a priority (I borrow many features from scripting 
languages, like JavaScript, Scheme, ...). the language aims to have a 
high level of dynamic abilities in most areas as well (it supports 
dynamic types, prototype OO, lexical closures, scope delegation, ...).



neither learning curve nor implementation complexity was a huge concern 
(the main concession I make to learning curve is that it is in 
many regards "fairly similar" to current mainstream languages, so if a 
person knows C++ or C# or similar, they will probably understand most of 
it easily enough).


the main target audience is generally people who already know C and C++ 
(and who will probably keep using them as well). so, the language is 
mostly intended to be used mixed with C and C++ codebases. the default 
syntax is more ActionScript-like, but Java/C# style declaration syntax 
may also be used (the only significant syntax differences are those 
related to the language's JavaScript heritage, and the use of "as" and 
"as!" operators for casts in place of C-style cast syntax).


generally, its basic design is intended to be a bit less obtuse than C 
or C++ though (the core syntax is more like that in Java and 
ActionScript in most regards, and more advanced features are intended 
mostly for special cases).



the VM is intended to be free, and I currently have it under the MIT 
license, but I don't currently have any explicit plans for "support". it 
is more of a "use it if you want" proposition, provided "as-is", and so on.


it is currently "given on request via email", mostly due to my server 
being offline and probably will be for a while (it is currently 1600 
miles away, and my parents don't know how to fix it...).



but, what I mostly meant was that it is designed in such a way to 
hopefully deal acceptably well with writing largish code-bases (like, 
supporting packages/namespaces and importing and so on), and should 
hopefully be competitive performance-wise with similar-class languages 
(still needs a bit more work on this front, namely to try to get 
performance to be more like Java, C#, or C++ and less like Python).


as-is, performance is less of a critical failing though, since one can 
put most performance-critical code in C land and work around the weak 
performance somewhat (also, my projects are currently more bound by 
video-card performance than by CPU performance).



in a few cases, things were done which favored performance over strict 
ECMA-262 conformance though (most notably, there are currently 
differences regarding default floating-point precision and similar, due 
mostly to the VM presently needing to box doubles, and generally double 
precision being unnecessary, ... however, the VM will use double 
precision if it is used explicitly).




If you meant something more positive, I think Lua is a good candidate:
 - Small (and hopefully reliable) tools.
 - Fast implementations.
 - Widely used in the gaming industry.
 - Good C FFI.
 - Spurs quite higher-level meaning.



Lua is small, and fairly fast (by scripting-language standards).

its use in the gaming industry is moderate (it still faces competition 
against several other languages, namely Python, Scheme, and various 
engine-specific languages).


not everyone (myself included) is entirely fond of its Pascal-ish syntax 
though.


I also have doubts about how well it will hold up to large-scale codebases though.


its native C FFI is "moderate" (in that it could be a lot worse), but 
AFAIK most of its ease of use here comes from the common use of SWIG 
(since SWIG shaves away most need for manually-written boilerplate).
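
as a concrete illustration of the per-function boilerplate being referred 
to, here is a minimal hand-written binding against the standard Lua C API 
(Lua 5.1-style; error handling kept to a bare minimum):

    /* hand-written glue for exposing one C function to Lua scripts --
       the sort of code SWIG generates automatically. */
    #include <math.h>
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    /* the plain C function we want scripts to be able to call */
    static double vec2_length(double x, double y)
    {
        return sqrt(x * x + y * y);
    }

    /* the glue: pull arguments off the Lua stack, call, push the result */
    static int l_vec2_length(lua_State *L)
    {
        double x = luaL_checknumber(L, 1);
        double y = luaL_checknumber(L, 2);
        lua_pushnumber(L, vec2_length(x, y));
        return 1;   /* number of return values */
    }

    int main(void)
    {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);
        lua_register(L, "vec2_length", l_vec2_length);
        luaL_dostring(L, "print(vec2_length(3, 4))");   /* prints 5 */
        lua_close(L);
        return 0;
    }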


the SWIG strategy though is itself a tradeoff IMO, since it requires 
some special treatment on the part of the headers, and works by 
producing intermediate glue code.


similarly, it doesn't address the matter of potential semantic 
mismatches between the languages (so the interfaces tend to be fairly 
basic).



in my case, a similar system to

Re: [fonc] Error trying to compile COLA

2012-03-01 Thread BGB

On 3/1/2012 8:04 AM, Reuben Thomas wrote:

On 1 March 2012 15:02, Julian Leviston  wrote:

Is this one of the aims?

It doesn't seem to be, which is sad, because however brilliant the
ideas you can't rely on other people to get them out for you.


this is part of why I am personally trying to work more to develop 
"products" than doing pure research, and focusing more on trying to 
"improve the situation" (by hopefully increasing the number of viable 
options) rather than "remake the world".


there is also, at this point, a reasonable lack of "industrial strength 
scripting languages".
there are a few major "industrial strength" languages (C, C++, Java, C#, 
etc...), and a number of scripting languages (Python, Lua, JavaScript, 
...), but not generally anything to "bridge the gap" (combining the 
relative dynamic aspects and ease of use of a scripting language, with 
the power to "get stuff done" as in a more traditional language).


a partial reason I suspect:
many script languages don't scale well (WRT either performance or 
usability);
many script languages have laughably bad FFIs, combined with a lack of 
good native libraries;

...


or such...



Re: [fonc] Can semantic programming eliminate the need for Problem-Oriented Language syntaxes?

2012-03-01 Thread BGB

On 3/1/2012 5:25 AM, Martin Baldan wrote:

Hi,

What got me wondering this was the fact that people, as far as I know, 
don't use domain-specific languages in natural speech. What they do 
use is jargon, but the syntax is always the same. What if one could 
program in something like ACE, specify a jargon and start describing 
data structures concisely and conveniently in a controlled language? 
That way, whenever there is a new problem, you would only have to 
specify what kind of entities you want to use, what properties they 
can have, and so on.


I guess I want something like this:

http://en.wikipedia.org/wiki/Semantic-oriented_programming


What are your thoughts?



(not entirely sure I understand "SOP" at the moment, so responding more 
"in general").



to some extent, this is a role served by packages/namespaces in 
languages which have them.
namely, each package may have its own collections of various objects, 
which one can use via importing it.


otherwise, it is useful to have a reasonably expressive core language 
(IMO: this is where languages like Java fall on their face...), such 
that ideally it is not really necessary for people to roll their own 
syntax and semantics for various tasks.


otherwise, the language should not be rigidly stupid in brain-damaged 
ways (also, IMO, a bit of a problem with Java and friends). this is 
where something "could" be presumably trivially expressed in the 
language, if only the compiler allowed it.


this was one area where C did a lot better than Java (and, to some 
extent, C++).
in C, nearly everything in the syntax which wasn't a statement was an 
expression;
more so, C code is essentially just a list of expressions separated by 
semicolons (with a few exceptions, such as declarations and 
block-statements).


now, Java sort of "watered it down" a bit, by adopting a more watered 
down concept of "an expression", and essentially making most everything 
else be fixed form statements. not everything was bad: they watered down 
declarations in a way that made parsing them syntactically unambiguous 
(a problem in both C and C++ is that one needs to deal with context to 
be able to correctly parse the code, but both Java and C# mostly 
addressed this problem).
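
a small illustrative example of the context-dependence in question: the 
same token sequence "T * x;" parses as either a declaration or an 
expression, depending on what the parser already knows about the name T 
(which is exactly what the Java/C# style declaration syntax avoids):

    typedef int T;

    void f(double T)    /* here the parameter re-declares the name 'T'... */
    {
        double x = 2.0;
        T * x;          /* ...so this parses as a multiplication expression */
    }

    void g(void)
    {
        T * x;          /* here 'T' is the typedef, so this declares a pointer */
        (void)x;
    }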


a weakness in Java, though, is that one can only call "methods", say:
"object.method()", "package.class.method()", ..., rather than being able 
to call arbitrary expressions (like in C and C++);
there are also no first-class functions or methods (C and C++ had these 
implicitly, C# "added" them as "delegates"), seriously it took until the 
JDK7 for Sun/Oracle to get around to adding them (never-mind the 
half-baked "lambdas"/"closures", which don't properly capture scope, and 
which require using a method to call "lambdaObj.invoke(...)", ...);

for fairly obvious reasons, one can't do curried functions in Java.

nevermind some ways Java's type-system is brain-damaged, ...


ultimately, all of its limitations and arbitrary restrictions IMO make 
Java a bit lame as a language (coupled with the weak JVM architecture), 
which is part of why, although I could technically support both, I didn't 
really bother maintaining it (I personally found it preferable to port 
Java code to my BGBScript language, which is, IMHO, less arbitrarily 
stupid, albeit at the cost of some minor syntactic differences which are 
kind of a hassle, such as the different type-cast syntax, ...).



a language which allows, say:
first-class functions, and closures with semantics that make sense (they 
retain the parent scope itself, rather than some read-only by-value copy, 
and the closure does not become invalid as soon as the parent returns);

calling arbitrary expressions, and using curried function calls;
ability to use dynamic and inferred types without a lot of extra pain;
ability to roll ones' own scoping as needed (such as via the use of 
delegation);

...

is a bit more useful IMO.
if the core language can compactly express one's intent, why does one 
need a specialized DSL to try to be more compact?...



the weakness of C++, like that of Java, is that many of its newly added 
features were bolted on, often in one-off forms, where orthogonality 
either fails or introduces new arbitrary "features" to address cases 
which should theoretically have followed from taking the prior features 
to their logical extremes (granted, it is not like C++ hasn't taken many 
of its features to extremes...).


a bigger weakness though is that both the syntax and semantics are, in 
some ways, ad-hoc and over-complicated, and its dependence on context has 
become a bit excessive.


or, at least, these are a few of my thoughts at the moment.


or such...






Re: [fonc] Error trying to compile COLA

2012-02-29 Thread BGB

On 2/29/2012 4:09 PM, Alan Kay wrote:

Hi Duncan

The short answers to these questions have already been given a few 
times on this list. But let me try another direction to approach this.


The first thing to notice about the overlapping windows interface 
"personal computer experience" is that it is logically independent of 
the code/processes running underneath. This means (a) you don't have 
to have a single religion "down below" (b) the different kinds of 
things that might be running can be protected from each other using 
the address space mechanisms of the CPU(s), and (c) you can think 
about allowing "outsiders" to do pretty much what they want to create 
a really scalable really expandable WWW.


If you are going to put a "browser app" on an "OS", then the "browser" 
has to be a mini-OS, not an app.




agreed.

I started writing up my own response, but this one beat mine, and seems 
to address things fairly well.



But "standard apps" are a bad idea (we thought we'd gotten rid of them 
in the 70s) because what you really want to do is to integrate 
functionality visually and operationally using the overlapping windows 
interface, which can safely get images from the processes and 
composite them on the screen. (Everything is now kind of 
"super-desktop-publishing".) An "app" is now just a kind of integration.


yep. even on the PC with native apps, typically much of what is going on 
is in the domain of shared components in DLLs and similar, often with 
"the apps" being mostly front-end interfaces for this shared functionality.



one doesn't really need to see all of what is going on in the 
background, and the front-end UI and the background library 
functionality may be unrelated.


an annoyance though (generally seen among newbie developers) is that they 
confuse the UI with the app as a whole, thinking that throwing together a 
few forms in Visual Studio or similar, and invoking some functionality 
from these shared DLLs, suddenly makes them a big-shot developer (taking 
for granted all of the hard work that various people put into the 
libraries their app depends on).


sort of makes it harder to get much respect though, when much of one's 
work is going into this sort of functionality rather than a big 
flashy-looking GUI (with piles of buttons everywhere, ...). it is also sad when 
many people judge how "advanced" an app is based primarily on how many 
GUI widgets they see on screen at once.



But the route that was actually taken with the WWW and the browser was 
in the face of what was already being done.


Hypercard existed, and showed what a WYSIWYG authoring system for 
end-users could do. This was ignored.


Postscript existed, and showed that a small interpreter could be moved 
easily from machine to machine while retaining meaning. This was ignored.


And so forth.



yep. PostScript was itself a notable influence on me, and my VM designs 
actually tend to borrow somewhat from PostScript, albeit often using 
bytecode in place of text, generally mapping nearly all "words" to 
opcode numbers, and using blocks more sparingly (typically using 
internal jumps instead), ... so, partway between PS and more traditional 
bytecode.
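
as a rough illustration of that sort of design (a toy sketch, not the 
actual BGBScript VM): the "words" are reduced to opcode numbers up front, 
and execution is just a dispatch loop over a small operand stack:

    /* toy stack-machine dispatch loop (illustrative only) */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    static void run(const int *code)
    {
        double stack[64];
        int sp = 0, ip = 0;

        for (;;) {
            switch (code[ip++]) {
            case OP_PUSH:  stack[sp++] = code[ip++];         break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
            case OP_PRINT: printf("%g\n", stack[--sp]);      break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* the PostScript-ish program "2 3 add 4 mul print", as opcodes */
        int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                       OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
        run(prog);   /* prints 20 */
        return 0;
    }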



19 years later we see various attempts at inventing things that were 
already around when the WWW was tacked together.


But the thing that is amazing to me is that in spite of the almost 
universal deployment of it, it still can't do what you can do on any 
of the machines it runs on. And there have been very few complaints 
about this from the mostly naive end-users (and what seem to be mostly 
naive computer folks who deal with it).


yep.

it is also notable that I can easily copy files around within the same 
computer, but putting files online or sharing them with others quickly 
turns into a big pile of suck. a partial reason, I think, is a lack 
of good integration between local and network file storage (in both 
Windows and Linux, there has often been this "thing" of implementing 
access to network resources more as an Explorer / File Manager / ... hack 
than doing it "properly" at the OS filesystem level).


at this point, Windows has since integrated things at the FS level 
(one can mount SMB/CIFS shares, FTP servers, and WebDAV shares as 
drive letters).


on Linux, it is still partly broken though, with GVFS and Samba dealing 
with the issues, but in a kind of half-assed way (and it is lame to have 
to route through GVFS something which should be theoretically handled by 
the OS filesystem).


never mind that Java and Flash fail to address these issues as well, 
given that both knew full well what they were doing, yet both proceeded 
to retain an obvious separation between "filesystem" and "URLs".


why not be like "if you open a file, and its name is a URL, you open the 
URL", and more so, have the ability to have a URL path as part of the 
working directory (well, not like Java didn't do "something" to file 
IO... good or sen

Re: [fonc] Error trying to compile COLA

2012-02-29 Thread BGB

On 2/29/2012 5:34 AM, Alan Kay wrote:
With regard to your last point -- making POLs -- I don't think we are 
there yet. It is most definitely a lot easier to make really powerful 
POLs fairly quickly  than it used to be, but we still don't have a 
nice methodology and tools to automatically supply the IDE, debuggers, 
etc. that need to be there for industrial-strength use.




yes, agreed.

the "basic infrastructure" is needed, and to a large degree this is the 
harder part, but it is far from a huge or impossible undertaking (it is 
more a matter of scaling: namely tradeoffs between performance, 
capabilities, and simplicity).


another issue though is the cost of implementing the POL/DSL/... vs the 
problem area being addressed: even if creating the language is fairly 
cheap, if the problem area is one-off, it doesn't really buy much.


a typical result is that of creating "cheaper" languages for more 
specialized tasks, and considerably more "expensive" languages for more 
general-purpose tasks (usually with a specialized language "falling on 
its face" in the general case, and a general-purpose language often 
being a poorer fit for a particular domain).



the goal is, as I see it, to make a bigger set of reusable parts, which 
can ideally be put together in new and useful ways. ideally, the IDEs 
and debuggers would probably work similarly (by plugging together logic 
from other pieces).




in my case, rather than trying to make very flexible parts, I had been 
focused more on making modular parts. so, even if the pipeline is itself 
fairly complicated (as are the parts themselves), one could presumably 
split the pipeline apart, maybe plug new parts in at different places, 
swap some parts out, ... and build something different with them.


so, it is a goal of trying to move from more traditional software 
design, where everything is tightly interconnected, to one where parts 
are only loosely coupled (and typically fairly specialized, but 
reasonably agnostic regarding their "use-case").


so, say one wants a new language with a new syntax; there are 2 major 
ways to approach this:
route A: have a "very flexible" language (or meta-language), where one 
can change the syntax and semantics at will, ... this is what VPRI seems 
to be working towards.


route B: allow the user to throw together a new parser and front-end 
language compiler, reusing what parts from the first language are 
relevant (or pulling in other parts maybe intended for other languages, 
and creating new parts as needed). how easy or difficult it is, is then 
mostly a product of how many parts can be reused.


so, a language looks like an integrated whole, but is actually 
internally essentially built out of LEGO blocks... (with parts 
essentially fitting together in a hierarchical structure). it is also 
much easier to create languages with similar syntax and semantics, than 
to create ones which are significantly different (since more differences 
mean more unique parts).


granted, most of the languages I have worked on implementing thus far, 
have mostly been "bigger and more expensive" languages (I have made a 
few small/specialized languages, but most have been short-lived).


also, sadly, my project currently also contains a few places where there 
are "architecture splits" (where things on opposite sides work in 
different ways, making it difficult to plug parts together which exist 
on opposite sides of the split). by analogy, it is like where the screw 
holes/... don't line up, and where the bolts are different sizes and 
threading, requiring ugly/awkward "adapter plates" to make them fit.


essentially, such a system would need a pile of documentation, hopefully 
to detail what all parts exist, what each does, what inputs and outputs 
are consumed and produced, ... but, writing documentation is, sadly, 
kind of a pain.



another possible issue is that parts from one system wont necessarily 
fit nicely into another:
person A builds one language and VM, and person B makes another language 
and VM;
even if both are highly modular, there may be sufficient structural 
mismatches to make interfacing them be difficult (requiring much pain 
and boilerplate).



some people have accused me of "Not Invented Here", mostly for the sake 
of re-implementing things theoretically found in libraries, but sometimes 
this is due to legal reasons (don't like the license terms), and other 
times because the library would not integrate cleanly into the project. 
often, its essential aspects can be "decomposed" and its functionality 
is re-implemented from more basic parts. another advantage is that these 
parts can often be again reused in implementing other things, or allow 
better inter-operation between a given component and those other 
components it may share parts with (and the parts may be themselves more 
useful and desirable than the thing itself).


this also doesn't mean creating a "standard of non-standard" (some 
people have acc

Re: [fonc] Error trying to compile COLA

2012-02-28 Thread BGB

On 2/28/2012 5:36 PM, Julian Leviston wrote:


On 29/02/2012, at 10:29 AM, BGB wrote:


On 2/28/2012 2:30 PM, Alan Kay wrote:
Yes, this is why the STEPS proposal was careful to avoid "the 
current day world".


For example, one of the many current day standards that was 
dismissed immediately is the WWW (one could hardly imagine more of a 
mess).




I don't think "the web" is entirely horrible:
HTTP basically works, and XML is "ok" IMO, and an XHTML variant could 
be ok.


Hypertext as a structure is not beautiful nor is it incredibly useful. 
Google exists because of how incredibly flawed the web is and if 
you look at their process for organising it, you start to find 
yourself laughing a lot. The general computing experience these days 
is an absolute shambles and completely crap. Computers are very very 
hard to use. Perhaps you don't see it - perhaps you're in the trees - 
you can't see the forest... but it's intensely bad.




I am not saying it is particularly "good", just that it is "ok" and "not 
completely horrible...".


it is, as are most things in life, generally adequate for what it does...

it could be better, and it could probably also be worse...


It's like someone crapped their pants and google came around and said 
hey you can wear gas masks if you like... when what we really need to 
do is clean up the crap and make sure there's a toilet nearby so that 
people don't crap their pants any more.




IMO, this is more when one gets into the SOAP / WSDL area...


granted, moving up from this, stuff quickly turns terrible (poorly 
designed, and with many "shiny new technologies" which are almost 
absurdly bad).



practically though, the WWW is difficult to escape, as a system 
lacking support for this is likely to be rejected outright.


You mean like email? A system that doesn't have anything to do with 
the WWW per se that is used daily by millions upon millions of people? 
:P I disagree intensely. In exactly the same was that facebook was 
taken up because it was a slightly less crappy version of myspace, 
something better than the web would be taken up in a heartbeat if it 
was accessible and obviously better.


You could, if you chose to, view this mailing group as a type of 
"living document" where you can peruse its contents through your email 
program... depending on what you see the web as being... maybe if you 
squint your eyes just the right way, you could envisage the web as 
simply being a means of sharing information to other humans... and 
this mailing group could simply be a different kind of web...


I'd hardly say that email hasn't been a great success... in fact, I 
think email, even though it, too is fairly crappy, has been more of a 
success than the world wide web.




I don't think email and the WWW are mutually exclusive, by any means.

yes, one probably needs email as well, as well as probably a small 
mountain of other things, to make a viable end-user OS...



however, technically, many people do use email via webmail interfaces 
and similar.
nevermind that many people use things like "Microsoft Outlook Web 
Access" and similar.


so, it is at least conceivable that a future exists where people read 
their email via webmail and access usenet almost entirely via Google 
Groups and similar...


not that it would be necessarily a good thing though...





Re: [fonc] Error trying to compile COLA

2012-02-28 Thread BGB

On 2/28/2012 2:30 PM, Alan Kay wrote:
Yes, this is why the STEPS proposal was careful to avoid "the current 
day world".


For example, one of the many current day standards that was dismissed 
immediately is the WWW (one could hardly imagine more of a mess).




I don't think "the web" is entirely horrible:
HTTP basically works, and XML is "ok" IMO, and an XHTML variant could be ok.

granted, moving up from this, stuff quickly turns terrible (poorly 
designed, and with many "shiny new technologies" which are almost 
absurdly bad).



practically though, the WWW is difficult to escape, as a system lacking 
support for this is likely to be rejected outright.



But the functionality plus more can be replaced in our "ideal world" 
with encapsulated confined migratory VMs ("Internet objects") as a 
kind of next version of Gerry Popek's LOCUS.


The browser and other storage confusions are all replaced by the 
simple idea of separating out the safe objects from the various modes 
one uses to send and receive them. This covers files, email, web 
browsing, search engines, etc. What is left in this model is just a UI 
that can integrate the visual etc., outputs from the various 
encapsulated VMs, and send them events to react to. (The original 
browser folks missed that a scalable browser is more like a kernel OS 
than an App)


it is possible.

in my case, I had mostly assumed file and message passing.
theoretically, script code could be passed along as well, but the 
problem with passing code is how to best ensure that things are kept secure.



in some of my own uses, an option is to throw a UID/GID+privileges 
system into the mix, but there are potential drawbacks with this 
(luckily, the performance impact seems to be relatively minor). granted, 
a more comprehensive system (making use of ACLs and/or "keyrings") could 
potentially be a little more costly than simple UID/GID rights checking, 
but all this shouldn't be too difficult to mostly optimize away in most 
cases.
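
as a rough sketch of what a cheap UID/GID-style check amounts to (the 
structure and function names here are hypothetical, purely for 
illustration, and not the actual BGBScript VM internals):

    /* hypothetical per-object rights check in a script VM: each object or
       function carries an owner uid/gid plus Unix-like mode bits, and the
       running task carries its effective uid/gid. */
    #include <stdbool.h>

    typedef struct { int uid, gid; unsigned mode; } vm_acl;
    typedef struct { int uid, gid; } vm_task;

    #define VM_EXEC_OWNER  0100
    #define VM_EXEC_GROUP  0010
    #define VM_EXEC_OTHER  0001

    static bool vm_can_exec(const vm_task *t, const vm_acl *a)
    {
        if (t->uid == 0)      return true;   /* "root" bypasses the checks */
        if (t->uid == a->uid) return (a->mode & VM_EXEC_OWNER) != 0;
        if (t->gid == a->gid) return (a->mode & VM_EXEC_GROUP) != 0;
        return (a->mode & VM_EXEC_OTHER) != 0;
    }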


the big issue is mostly to set up all the security in a "fairly secure" way.

currently, nearly everything defaults to requiring root access. 
unprivileged code would thus require interfaces to be exposed to it 
directly (probably via "setuid" functions). however, as-is, this is 
defeated by most application code defaulting to "root".


somehow though, I think I am probably the only person I know who thinks 
this system is "sane".


however, it did seem like it would probably be easier to set up and 
secure than one based on scoping and visibility.



otherwise, yeah, maybe one can provide a bunch of APIs, and "apps" can 
be mostly implemented as scripts which invoke these APIs?...




These are old ideas, but the vendors etc didn't get it ...



maybe:
browser vendors originally saw the browser merely as a document viewing 
app (rather than as a "platform").



support for usable network file systems and for "applications which 
aren't raw OS binaries" is slow in coming.


AFAIK, the main current contenders in the network filesystem space are 
SMB2/CIFS and WebDAV.


possibly useful could be integrating things in a form which is not 
terrible, for example:

OS has a basic HTML layout engine (doesn't care where the data comes from);
the OS's VFS can directly access HTTP, ideally without having to "mount" 
things first;

...

in this case, the "browser" is essentially just a fairly trivial script, 
say:
creates a window, and binds an HTML layout object into a form with a few 
other widgets;
passes any HTTP requests to the OS's filesystem API, with the OS 
managing getting the contents from the servers.


a partial advantage then is that other apps may not have to deal with 
libraries or sockets or similar to get files from web-servers, and 
likewise shell utilities would work, by default, with web-based files.


"cp http://someserver/somefile ~/myfiles/"

or similar...
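
for contrast, without that sort of VFS integration, even the trivial "cp" 
above means each program pulls in an HTTP library and writes its own 
glue, roughly like the following libcurl sketch (minimal error handling, 
illustrative only):

    /* roughly what "cp http://someserver/somefile ~/myfiles/" has to do
       in userspace today, using libcurl. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);

        FILE *out = fopen("somefile", "wb");
        CURL *curl = curl_easy_init();
        if (!out || !curl) return 1;

        curl_easy_setopt(curl, CURLOPT_URL, "http://someserver/somefile");
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);  /* default callback fwrite()s here */
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

        CURLcode res = curl_easy_perform(curl);

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        fclose(out);
        return (res == CURLE_OK) ? 0 : 1;
    }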


actually, IIRC, my OS project may have done this (or it was 
planned, either way). I do remember though that sockets were available 
as part of the filesystem (under "/dev/" somewhere), so no sockets API 
was needed (it was instead based on opening the socket and using 
"ioctl()" calls...).



side note: what originally killed my OS project was, at the time, 
reaching the conclusion that it would likely not have been possible for 
me to compete on equal terms with Windows and Linux, rendering the effort 
pointless versus just developing purely in userspace. it does bring up 
some interesting thoughts though.



or such...



Cheers,

Alan






*From:* Reuben Thomas 
*To:* Fundamentals of New Computing 
*Sent:* Tuesday, February 28, 2012 1:01 PM
*Subject:* Re: [fonc] Error trying to compile COLA

On 28 February 2012 20:51, Niklas Larsson wrote:
>
> But Linux contains much more duplication than drivers only, it
> supports 
