> On Sun, Feb 5, 2012 at 6:54 PM, Viktor Cerovski
> <[email protected]> wrote:
>> Raul Miller-4 wrote:
>>> http://blog.lab49.com/archives/3011
>>> It's amusing to try to think of how to characterize J arrays using
>>> that methodology.
>>>
>>> Conceptually speaking, J has one type: array, and it's statically
>>> typed.
>>>
>> In other words, we are having here dynamic type.

> This is a matter of perspective.
>
> It's also a single static type.
>
> More importantly, J's operations on this type do not, as a general
> rule, exhibit "polymorphism", where we have conflicting definitions
> which we choose from depending on the type of the data.
>
> [There are exceptions to this rule, especially if we include bugs,
> where the implementation conflicts with the dictionary.]



> But a bigger problem is that we don't really have a good definition of
> "type".  And by "good" I mean:
> 
> a. concise,
> b. accurate, and
> c. consistent.
> 
I don't know what's wrong with types as we use them today,
either in the simplest cases or in some deep theoretical sense.

[...]


> Note also that i. i. 4 has 0 ints... and a shape of four.  So the type
> here might be "1" though if you also consider the shape information
> the type here might be int*int*int*int.
> 
Oh, I see where you're headed.  You're asking for a type
representation for arrays, including empty ones of arbitrary
shape.  Well, the ADT might simply be:
  [int]*[int]
so your array would be represented as [0,1,2,3]*[]
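
For illustration, the same shape*ravel pairing could be sketched in
Haskell roughly like this (JArray, shape, ravel, wellFormed are names
of my own choosing, nothing from J's actual implementation):

  -- A J-style array as a shape list paired with a ravel (the row-major
  -- list of atoms); purely an illustrative model.
  data JArray = JArray { shape :: [Int], ravel :: [Int] } deriving Show

  -- i. 4    ~  JArray [4] [0,1,2,3]
  -- i. i. 4 ~  JArray [0,1,2,3] []    (shape 0 1 2 3, hence no atoms)
  emptyRank4 :: JArray
  emptyRank4 = JArray [0,1,2,3] []

  -- A well-formed value satisfies: product of shape = length of ravel.
  wellFormed :: JArray -> Bool
  wellFormed (JArray s r) = product s == length r

so every empty array, of whatever shape, gets its own representation,
distinguished purely by the shape component.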

Here is where ADTs show some power in reasoning about data,
because we can ask what it means, for instance, that "everything
is an array".  Well, as I wrote some time ago, to me scalars in J
are not arrays, because the equation
 a-:a,x
has no solution for any scalar a, but does have a solution for an array a.
So ADTs could be helpful in sorting out this kind of question with
some rigour.
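
To make that argument concrete, here is a rough Haskell sketch of it
(Noun, Scalar, List, append are my own illustrative names, and append
only approximates J's dyadic ,):

  -- Model a J noun as either a scalar atom or a list of atoms.
  data Noun = Scalar Int | List [Int] deriving (Eq, Show)

  -- Rough approximation of dyadic , : it ravels both arguments and
  -- chains their atoms, so the result is always a List, never a Scalar.
  append :: Noun -> Noun -> Noun
  append a x = List (atoms a ++ atoms x)
    where atoms (Scalar n) = [n]
          atoms (List ns)  = ns

  -- append (List [1,2,3]) (List [])  ==  List [1,2,3]   -- a -: a,x holds
  -- append (Scalar 5)     (List [])  ==  List [5]       -- never a Scalar

Since append can only ever build a List, no Scalar can satisfy the
equation, which is exactly the scalar/array distinction above.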



> But there's a bigger fallacy here -- and that fallacy is the idea I
> introduced, which is that algebraic data types can be used to describe
> individual values.  They can't.  A value only has one possible value
> -- itself -- so its algebraic representation would be 1.
> 
Yes, indeed, but why is that a problem at all?


>   Algebraic
> data types maybe describe regions of memory (or other concepts which
> can be used to represent multiple values...)
> 
>>>   So the value representing the type of an
>>> empty array is 1, and the type of a bit is 2, and the type of 2 bits
>>> is 4...
> 
>> What you're doing here, and then continue with more examples,
>> is to try to build the type system starting from bits.  That's not
>> how we would like to use types however, but rather to just specify
>> explicitly some types called, say, Int32, Int64, Word32, SingleFloat,
>> etc.
> 
> Yes, though my "doing here" is largely motivated by the structure of
> algebraic data types.
> 
>> Types are only half of the story---there are also operations over types,
>> and that's also what we need to take into account when talking about
>> types.
> 
> That sounds good, though this also introduces other issues.  For
> example, every operation needs at least two types (an argument type --
> the domain -- and a result type -- the range).  Also, the types which
> are associated with operations are often "smaller" than the underlying
> type systems.  (I remember the terms "one-to-one" and "onto"
> describing the exceptional cases -- where the full range of a "type"
> would be used.)
> 
Again, I don't see a problem here with ADTs.  Also, don't forget that
one type (a "dynamic" type, or, for example, strings) suffices to have
arbitrarily interesting programs.  ADTs merely try to systematically
break down, or build up, all the interesting values that
we can make or use in programs into separate classes of values,
which are individual types.
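
As a rough Haskell sketch of that contrast (all names below are made
up): a single catch-all type is already enough to write programs in,
while the ADT style carves the same values into separate types:

  -- One universal "dynamic" type: every value a program handles fits here.
  data Univ = UInt Int | UStr String | UList [Univ] deriving Show

  -- The ADT style instead: one class of values per type, kept apart.
  newtype Age  = Age Int     deriving Show
  newtype Name = Name String deriving Show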


>> We should be provided with some operation like + that sums Ints,
>> etc, and it is not necessary to have them defined at the bit level at
>> all.
> 
> I do not know what this sentence means.  But I will agree that in the
> general case of mathematical work we often work with entities which
> have no concrete, finite representation.
> 
All I'm saying is that types describe data, and we also need
operations that do something with that data to get to programs.
So when we talk only and exclusively about ADTs, we are still not
programming unless we also talk about the functions/procedures that
use them.
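
A tiny Haskell sketch of the point (Temp and toCelsius are invented
names): the data declaration by itself only describes values, and the
function over it is what actually computes something:

  -- The ADT only describes data ...
  data Temp = Celsius Double | Fahrenheit Double

  -- ... the operation over it is what turns the data into a program.
  toCelsius :: Temp -> Double
  toCelsius (Celsius c)    = c
  toCelsius (Fahrenheit f) = (f - 32) * 5 / 9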


>> So we start with some number of types and operations, and then
>> algebraically build new types as well as operations, but the starting
>> choice of types and corresponding operations is not necessarily fixed.
> 
> This sounds contextual.  In some contexts this would be true, in other
> contexts it would be false.
> 
>> More importantly, there is no some canonical choice of starting types
>> from which everything else could be built up.
> 
> Here, I imagine you are talking about the general case of mathematics
> rather than the specific case of a programming language
> implementation?
> 
Well, you're free to start with any number of types you want,
bit or Int32 or tree, list, house, *, whatever, so long as you can construct
values of the(se) type(s) and have something that transforms values into
values.  That's the starting point.  In the case of programming languages,
we might as well assume that we have some type Int64 with operations
like + as given.  It certainly would depend on what we are trying to
describe, but in any case it is not necessary to always start with bits.
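
For instance, a rough Haskell sketch of taking Int64 and + as the given
starting point and building a new type and a new operation on top of
them, never mentioning bits (Complex and addC are names I made up):

  import Data.Int (Int64)

  -- A new type built algebraically from the primitive Int64: a product of two.
  data Complex = Complex Int64 Int64 deriving (Eq, Show)

  -- A new operation built from the given + , never looking at the bit level.
  addC :: Complex -> Complex -> Complex
  addC (Complex a b) (Complex c d) = Complex (a + c) (b + d)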

-- 
View this message in context: 
http://old.nabble.com/algebraic-data-types-tp33260423s24193p33276054.html
Sent from the J Chat mailing list archive at Nabble.com.

----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
