I forgot the most important part!

Haskell's purity unlocks higher abstractions (important for high-level AI),
whereas stateful languages limit the capacity for abstraction.
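A minimal illustration of that claim (my own sketch, not from the thread): because a Haskell function cannot mutate anything, an abstraction written once works for every pure function and every type.

```haskell
-- Purity makes higher-order abstractions safe: `twice` cannot be broken
-- by hidden state in `f`, so it works uniformly for any pure function.
twice :: (a -> a) -> a -> a
twice f = f . f

main :: IO ()
main = do
  print (twice (+ 3) 10)                 -- 16
  print (twice reverse [1, 2, 3 :: Int]) -- [1,2,3]
```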

On Thu, Aug 13, 2015 at 2:04 PM, Juan Carlos Kuri Pinto <[email protected]>
wrote:

> Haskell probably has the steepest learning curve of all programming
> languages and perhaps the most rewarding one. Haskell has a relatively tiny
> grammar since everything is functions and monads. So, its grammar is
> relatively easy to learn. But the concepts and abstract algebra behind
> Haskell's infrastructure are somewhat hard to understand and new to most
> programmers.
>
> The A-Z of Programming Languages: Haskell
> Simon Peyton-Jones tells us why he is most proud of Haskell's purity, type
> system and monads.
>
> http://www.computerworld.com.au/article/261007/a-z_programming_languages_haskell/
>
> Regarding massive parallelism, Haskell is naturally parallelizable because
> it is 100% pure. That is, Haskell doesn't have mutable state. You never
> modify memory locations. You only create new values and let the garbage
> collector remove them when they go out of scope. So you will never have
> concurrency issues or memory collisions. That is perfect for massively
> parallel and concurrent software like AI. The results of functions are
> always stored in fresh memory locations (pointers to constants), which
> avoids memory collisions.
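That claim can be sketched with GHC's evaluation strategies (a minimal example assuming the `parallel` package, which is not part of `base`):

```haskell
import Control.Parallel.Strategies (parMap, rseq)

-- A pure function: no shared mutable state, so the runtime may evaluate
-- separate calls on different cores without any locks.
fib :: Int -> Integer
fib n | n < 2     = fromIntegral n
      | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (sum (parMap rseq fib [20 .. 28])) -- 821094
```

Compiled with `ghc -threaded` and run with `+RTS -N`, the map fans out across cores; the result is identical either way, precisely because nothing is mutated.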
>
> Parallel and Concurrent Programming in Haskell
> By Simon Marlow
> http://chimera.labs.oreilly.com/books/1230000000929/index.html
>
> Regarding 100% purity, Haskell gives you the peace of mind necessary to
> focus on programming 95% of the time. You spend only 5% of the time
> debugging, and that debugging happens at compile time, not at runtime,
> unless you make a logical error. Other imperative, stateful, and dirty
> languages are very error-prone: you debug 95% of the time, at both
> compile time and run time, and you are actually programming only 5% of
> the time. Imperative, stateful, and dirty programming languages make you
> paranoid about bugs, whereas Haskell gives you peace of mind because your
> implementation is correct on the first run. That's also due to Haskell's
> strong, static, and expressive type system. Haskell is designed for hard
> problems like AI.
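One small illustration of how the type system moves errors to compile time (my sketch, not from the original email): a computation that can fail must say so in its type, so a caller that forgets the failure case gets a compile error rather than a runtime crash.

```haskell
-- Division that can fail must say so in its type. A caller that does not
-- handle Nothing fails to type-check, so the bug never reaches runtime.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

main :: IO ()
main = do
  print (safeDiv 10 2) -- Just 5
  print (safeDiv 10 0) -- Nothing
```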
>
> Stop wasting billions of dollars using the wrong software languages
>
> http://venturebeat.com/2013/10/11/stop-wasting-billions-of-dollars-using-the-wrong-software-languages/
>
> If you have more questions, please ask me or join our Facebook group on
> Haskell:
>
> Programming Haskell
> https://www.facebook.com/groups/programming.haskell/?ref=bookmarks
>
>
> On Thu, Aug 13, 2015 at 1:33 PM, Steve Richfield <
> [email protected]> wrote:
>
>> Juan Carlos,
>>
>> On Thu, Aug 13, 2015 at 1:58 AM, Juan Carlos Kuri Pinto <[email protected]
>> > wrote:
>>
>>> Bro, do you even Haskell? :)
>>>
>>
>> No. I looked at the links you provided, but don't (yet) grok the
>> connection to HIGHLY parallel (e.g. no program counter) FPGA programming.
>> The problem is that these and other descriptions are all oriented toward
>> touting Haskell's abilities in other areas, which, if done just right,
>> might also work for FPGAs.
>>
>> Note that the language I am attempting to identify/design probably would
>> NOT be popular or efficient when run on conventional processors, with some
>> possible exceptions like robotics applications. That conventional
>> processors are SO ill adapted to such things has stood in the way of
>> language development, because however you do these things they are likely
>> to run sloooowly on conventional processors.
>>
>> It is my belief that GPUs will (eventually) be obsoleted by FPGAs, but
>> NOT before a suitable programming language has been found. Once languages
>> are available in which to better describe processes so a compiler can see
>> them from a data chaining point of view, there are orders of magnitude in
>> performance just waiting to be harvested from the silicon foundries.
>> Haskell's "purity" might do this, but I am resistant to learning a new
>> language just to evaluate whether it could work for an application - when
>> the language was designed to do other things.
>>
>> Anyway, perhaps you could provide a few paragraphs explaining how Haskell
>> has what it takes to bridge this gap? It would sure be nice if I could
>> avoid re-inventing the wheel.
>>
>> It may also be possible that Haskell has 90% of what it takes, which
>> would greatly simplify my task.
>>
>> If Haskell has what it takes to program FPGAs, then I am the guy to
>> convince, because I have the ears of others who want to solve this problem,
>> so they can propel FPGAs to replace other forms of processors, and
>> hopefully make millions/billions in the process.
>>
>> Steve
>> ==============
>>
>>>
>>>
>>> https://www.facebook.com/notes/juan-carlos-kuri-pinto/how-to-program-stateful-intertwined-ai-networks-graphs-in-stateless-modular-prog/10151687175972712
>>>
>>>
>>> http://www.computerworld.com.au/article/261007/a-z_programming_languages_haskell/
>>>
>>> On Thu, Aug 13, 2015 at 2:01 AM, Steve Richfield <
>>> [email protected]> wrote:
>>>
>>>> Mike,
>>>>
>>>> On Mon, Aug 3, 2015 at 12:34 PM, Mike Archbold <[email protected]>
>>>> wrote:
>>>>
>>>>> This is a great classic book on programming languages, the Programming
>>>>> Language Landscape.
>>>>>
>>>>>
>>>>> http://www.amazon.com/Programming-Language-Landscape-Semantics-Implementation/dp/0023758716/ref=sr_1_1?ie=UTF8&qid=1438630302&sr=8-1&keywords=programming+language+landscape
>>>>>
>>>>> My favorite chapter is "The Swamp of Complexity."  In a nutshell --
>>>>> too many languages with too much crap in them!
>>>>
>>>>
>>>> The author completely missed the REAL problem with language complexity
>>>> - that when it becomes necessary to radically alter the execution, e.g.
>>>> vectorize the program to run on a supercomputer, then the size of the
>>>> compiler grows as the SQUARE of the size of the language (actually,
>>>> n*(n-1)/2, the number of interactions of components). Where more than two
>>>> elements must be considered together, there is often a cubic component that
>>>> can swamp even the quadratic component. If you have twice the language
>>>> complexity, it takes four (or eight) times as much compiler code to compile
>>>> it to a radically different architecture than that of the language. THAT is
>>>> why so many supercomputers start out with APL and FORTRAN compilers, and
>>>> why C compilers only vectorize simple loops that utilize a small subset of
>>>> the language.
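A quick numeric sketch of the n*(n-1)/2 figure above (illustrative code, not from the thread): each doubling of language size roughly quadruples the number of pairwise feature interactions a vectorizing compiler must handle.

```haskell
-- Pairwise feature interactions among n language components: n*(n-1)/2.
interactions :: Int -> Int
interactions n = n * (n - 1) `div` 2

main :: IO ()
main = mapM_ (print . interactions) [10, 20, 40] -- 45, 190, 780
```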
>>>>
>>>> Of course, having lots of cute statements that all translate to
>>>> arithmetic and IF statements doesn't affect the compiler complexity much
>>>> at all.
>>>>
>>>> Unfortunately, this book never considered truly parallel
>>>> implementations, where everything runs at once - but rather they considered
>>>> "parallel" programming to be simple multi-threaded programming.
>>>>
>>>> Indeed, on page 6 they list "the" dozen classes of computer languages,
>>>> none of which come close to what I am trying to create.
>>>>
>>>> Still - I got my dollar's worth.
>>>>
>>>> Thanks.
>>>> Steve
>>>> ================
>>>>
>>>>> It looks like you can
>>>>> get a copy for less than a buck....
>>>>> Mike A
>>>>>
>>>>>
>>>>> On 8/3/15, Steve Richfield <[email protected]> wrote:
>>>>> > Hi all,
>>>>> >
>>>>> > I am working on a high-level FPGA programming language that should
>>>>> also
>>>>> > serve better than existing languages as an AGI implementation
>>>>> language and
>>>>> > a robotics programming language. This is designed to be executed on
>>>>> FPGAs
>>>>> > rather than CPUs, though a PC version is contemplated.
>>>>> >
>>>>> > Here are my early thoughts. All comments are welcome.
>>>>> >
>>>>> > Parallel Computing Language
>>>>> > *Design Notes by Steve Richfield *as of Aug 2, 2015
>>>>> >
>>>>> > The goal of PCL is to provide a language to express algorithms in
>>>>> parallel
>>>>> > form for easy compilation to either parallel or sequential platforms,
>>>>> > rather than forcing programmers to express their algorithms in a
>>>>> probably
>>>>> > inefficient sequential form, for a (nonexistent) compiler to
>>>>> translate to a
>>>>> > parallel form.
>>>>> >
>>>>> >
>>>>> > The special need is to be able to translate to FPGA implementations,
>>>>> which
>>>>> > presently require efficient translation to be able to fit into
>>>>> existing
>>>>> > hardware.
>>>>> >
>>>>> >
>>>>> > *Existing Technology from which to Borrow*
>>>>> >
>>>>> > *APL structure:* In APL, everything is a matrix of varying
>>>>> dimensionality,
>>>>> > including zero dimensions (a simple variable). It includes numerous
>>>>> array
>>>>> > operations as operators in the language. Unfortunately, its
>>>>> promoters have
>>>>> > adopted syntax reminiscent of Sanskrit, which is enough to chase away
>>>>> > anyone not well versed in matrix inversions, etc. Some of the IBM-360
>>>>> > architecture was first worked out in APL.
>>>>> >
>>>>> >
>>>>> > *Dartmouth BASIC MAT statements: *The original Dartmouth BASIC
>>>>> recognized
>>>>> > MAT at the beginning of statements to indicate that the statements
>>>>> > specified matrix operations, rather than operations on variables.
>>>>> Hence,
>>>>> > *MAT
>>>>> > C=A*B* multiplied matrix *A* by matrix *B*, and stored the result in
>>>>> matrix
>>>>> > *C*. APL-like procedures are MUCH less opaque in this syntax.
>>>>> >
>>>>> >
>>>>> > *COBOL PICTURE clauses:* COBOL provided an easy (though now arcane)
>>>>> way of
>>>>> > describing variable structure, which could be easily extended
>>>>> to
>>>>> > meet present needs. Specifying *PICTURE 9999*, which could be
>>>>> abbreviated
>>>>> > *PIC
>>>>> > 9(4)*, a programmer could easily state that a variable had to hold 4
>>>>> > decimal digit values. In our implementation, *PICTURE 111111111111*
>>>>> or *PIC
>>>>> > 1(12)* could specify a 12-bit field, as could *PICTURE 7777* or
>>>>> *PICTURE
>>>>> > FFF*. COBOL also allowed for fixed-point notation, which is also
>>>>> important
>>>>> > in FPGA context, e.g. with *PICTURE 999V99* to represent 3 digits to
>>>>> the
>>>>> > left and two digits to the right of the implied decimal point.
>>>>> Provision
>>>>> > would have to also be made for logarithmic notation. Note that in
>>>>> addition
>>>>> > to precisely specifying “variables”, this also guides debuggers on
>>>>> how to
>>>>> > display what they find. This approach would allow for specifying
>>>>> pipeline
>>>>> > widths to be as narrow as possible for each operation.
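The PIC 1(12) idea could be sketched in a language like Haskell as a width-masked wrapper type (a hypothetical `U12`, my own illustration, not an existing library type):

```haskell
import Data.Word (Word16)

-- Hypothetical 12-bit field, analogous to PIC 1(12): stored in a Word16
-- but masked to 12 bits, the way an FPGA pipeline would be sized to
-- exactly the declared width.
newtype U12 = U12 Word16 deriving (Eq, Show)

mkU12 :: Word16 -> U12
mkU12 w = U12 (w `mod` 4096) -- keep only the low 12 bits

main :: IO ()
main = print (mkU12 5000) -- wraps to U12 904
```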
>>>>> >
>>>>> >
>>>>> > *FORTRAN Arithmetic Statement Functions:* FORTRAN provides a
>>>>> one-line way
>>>>> > of specifying simple function subroutines, e.g.
>>>>> > *RMS(A,B)=SQRT((A**2)+(B**2))* that are usually implemented by simple
>>>>> > string substitution into their references, so they are executed as an
>>>>> > in-line subroutine in C, but without the need to specify they are
>>>>> in-line.
>>>>> > Data chaining in complex operations would be easy to specify with
>>>>> such
>>>>> > syntax.
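The statement-function idea maps directly onto a one-line pure definition in a language like Haskell (my sketch of the same RMS example):

```haskell
-- One-line pure function; the compiler is free to inline it at every
-- call site, much like a FORTRAN statement function's text substitution.
rms :: Double -> Double -> Double
rms a b = sqrt (a * a + b * b)

main :: IO ()
main = print (rms 3 4) -- 5.0
```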
>>>>> >
>>>>> >
>>>>> > *Eliminating **GOTO** statements: *Parallel processing aside, there
>>>>> are
>>>>> > plenty of good reasons to eliminate *GOTO* statements. In the
>>>>> process, we
>>>>> > should probably eliminate everything else that specifies anything
>>>>> > conditional beyond conditional storage of computed results. The
>>>>> presence of
>>>>> > a particular condition that necessitates particular processing
>>>>> should be
>>>>> > handled as an event, though it would be possible to fake it by
>>>>> translating
>>>>> > conditional logic into an event handler.
>>>>> >
>>>>> >
>>>>> > *All “procedure” will be event-driven:* Where sequence is needed, it
>>>>> will
>>>>> > be triggered step-by-step, e.g. by *WHEN* statements. Where a long
>>>>> sequence
>>>>> > is needed, each step must be triggered by completing the previous
>>>>> step. To
>>>>> > avoid programming flags and *WHEN* clauses for each step, a
>>>>> *PROCEDURE*
>>>>> > will be declared, that necessarily starts with a *WHEN* clause,
>>>>> after which
>>>>> > the compiler will assume that each step starts when the previous
>>>>> step has
>>>>> > completed. There may be any number of procedures simultaneously
>>>>> active at
>>>>> > any one time, but only one instance of any particular procedure,
>>>>> unless it
>>>>> > is declared as being *RECURSIVE* and/or *REENTRANT*. Where a
>>>>> procedure
>>>>> > requires conditional operation within it, the conditional operation
>>>>> will be
>>>>> > triggered and entered via a *WHEN* statement. Note that complex
>>>>> *WHEN*
>>>>> > statements, when implemented in hardware, only cost gates and NOT
>>>>> any time.
>>>>> >
>>>>> >
>>>>> > *Familiar Operations: *Familiar operations like SELECT ... CASE
>>>>> statements
>>>>> > will be provided, though they will “execute” in unfamiliar ways. For
>>>>> > example, a SELECT statement will simultaneously “execute” all CASEs
>>>>> for
>>>>> > which the stated conditions are satisfied.
>>>>> >
>>>>> >
>>>>> > *Syntax:* Three different syntaxes will be supported, which can be
>>>>> > intermixed on input. They are mathematical, familiar (similar to C),
>>>>> and
>>>>> > verbose (similar to COBOL). For example, familiar *MAT C=A*B *in the
>>>>> > example above would be simply *C=A*B* in mathematical form, and
>>>>> *Multiply
>>>>> > matrix A by matrix B giving matrix C*  in verbose form. Error
>>>>> messages from
>>>>> > the compiler would show both the input and the equivalent verbose
>>>>> forms, to
>>>>> > show how the compiler interpreted the statements.
>>>>> >
>>>>> >
>>>>> > *Early implementations:* Initially this PCL will be a publication
>>>>> language
>>>>> > to specify the construction of complex programmable logic. Then, a
>>>>> > translator will be written in a portable language like C to translate
>>>>> > programs from PCL to C so that programs can be tested on personal
>>>>> > computers, etc. Then, translators will be written to translate to the
>>>>> FPGA
>>>>> > programming languages *Verilog* and *VHDL*, and finally, FPGAs will
>>>>> be
>>>>> > adapted to become better targets for code produced by this process,
>>>>> much as
>>>>> > IBM 360/370 mainframes were designed as prime targets for COBOL
>>>>> programs.
>>>>> >
>>>>> >
>>>>> > *Other Applications:* This language comes VERY close to also meeting
>>>>> the
>>>>> > needs for robotics applications, with many simultaneous tasks and
>>>>> close
>>>>> > coupling to I/O, so it should be expanded to include anything that
>>>>> might be
>>>>> > missing to also serve robotics.
>>>>> >
>>>>> >
>>>>> > *Comments:*  PLEASE comment on this at any level, most especially
>>>>> what
>>>>> > other languages might serve this need, what features of other
>>>>> languages
>>>>> > should be incorporated, what it might be missing, what might be
>>>>> wrong, etc.
>>>>> >
>>>>> >
>>>>> > Steve
>>>>> >
>>>>> >
>>>>> >
>>>>> > -------------------------------------------
>>>>> > AGI
>>>>> > Archives: https://www.listbox.com/member/archive/303/=now
>>>>> > RSS Feed:
>>>>> https://www.listbox.com/member/archive/rss/303/11943661-d9279dae
>>>>> > Modify Your Subscription:
>>>>> > https://www.listbox.com/member/?&;
>>>>> > Powered by Listbox: http://www.listbox.com
>>>>> >
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Full employment can be had with the stroke of a pen. Simply institute a
>>>> six hour workday. That will easily create enough new jobs to bring back
>>>> full employment.
>>>>
>>>>
>>>
>>>
>>
>>
>>
>>
>
>



