Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Benjamin Franksen
On Friday 08 July 2005 17:46, Henning Thielemann wrote:
> Vectors can be used and abused for many things but
> an object which can be called a vector (because of its ability to
> be added and to be scaled) is not a linear operator itself and does
> not naturally represent one.

At least for finite-dimensional spaces (and these are the only ones 
under consideration here, right?) the scalar product is a very nice 
and natural way to view a vector as a linear map (into the scalar 
field). I know, in linear algebra they told us all that this 
correspondence is not a 'natural' or 'canonical' one because it depends on 
a chosen basis. Well, well. For practical purposes of programming, we 
always use the 'canonical basis', right? So if the basis is canonical, then 
so is the correspondence between a vector space and its dual.

On a different note, one could argue that a 1xn matrix M is indeed a 
vector of dimension n, and then M' = M^T is an nx1 matrix, which is also a 
vector of dimension n -- and in fact the same vector as the 
non-transposed version. Now, the two things (M and M') are the same if 
viewed as vectors, but not the same if viewed as matrices. Can we 
express this in Haskell?
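
One way to express it, as a rough sketch with hypothetical types (no
particular library): the same underlying data can be given two different
matrix shapes by newtype wrappers, so M and M' share their vector but
differ in type.

  newtype Vec = Vec [Double]    -- a plain vector of dimension n
  newtype Row = Row Vec         -- the same data viewed as a 1xn matrix
  newtype Col = Col Vec         -- the same data viewed as an nx1 matrix

  transposeRow :: Row -> Col
  transposeRow (Row v) = Col v  -- same vector, different matrix view

  asVec :: Row -> Vec
  asVec (Row v) = v             -- forget the matrix view again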

Ben
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Henning Thielemann

On Fri, 8 Jul 2005, Keean Schupke wrote:

> Henning Thielemann wrote:
>
> >On Fri, 8 Jul 2005, Keean Schupke wrote:
> >
> >>So the linear operator is translation (ie: + v)... effectively 'plus'
> >>could be viewed as a function which takes a vector and returns a matrix
> >>(operator)
> >>
> >>(+) :: Vector -> Matrix
> >
> >Since a matrix _is_ not a linear map but only its representation, this
> >would not make sense. As I said, (v+) is not a linear map, thus there is
> >no matrix which represents it. A linear map f must fulfill
> > f 0 == 0
> >
> >But since
> > v+0 == v
> >  the function (v+) is only a linear map if 'v' is zero.
> >
> > I can't see how your extension of vectors by the 1-component fits in here.
> >
> >
> >
> Eh?

Today:

+----------------+
| The Linear Map |
+----------------+

If F is a field, V a set of vectors, and (F,V,+,*) a vector space, then a
function f : V -> V is linear if:

for all x and y from V:        f (x+y) == f x + f y
for all k from F and x from V: f (k*x) == k * f x


Lemma

If f is a linear map, then f 0 = 0.

Proof: For any v from V we have v+(-1)*v = 0, hence
  f 0 = f (v+(-1)*v) = f v + f ((-1)*v) = f v + (-1) * f v = 0


Theorem

If (v+) is a linear map, then v == 0.

Proof: By the lemma, (v+) 0 == 0, i.e. v+0 == 0.
 Since v+0 == v, it follows that v == 0.


8-]
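
For what it's worth, the theorem can also be checked mechanically; a small
QuickCheck sketch, with vectors as lists of Doubles and a hypothetical
helper addV (not any particular library):

  import Test.QuickCheck

  addV :: [Double] -> [Double] -> [Double]
  addV = zipWith (+)

  -- (v+) sends the zero vector to v itself, so by the lemma it can only
  -- be a linear map when v is the zero vector.
  prop_translationNotLinear :: [Double] -> Property
  prop_translationNotLinear v =
    any (/= 0) v ==> (v `addV` replicate (length v) 0) /= replicate (length v) 0

  main :: IO ()
  main = quickCheck prop_translationNotLinear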


> (or the second by the first - the two are isomorphic)... A translation
> can be represented
> by the matrix:
>
> 1   0   0   0
> 0   1   0   0
> 0   0   1   0
> dx  dy  dz  1
>
> So the result of "v+" is this matrix.

But the vectors I can add to v have one component less than is necessary for
multiplication with this matrix.

Indeed you can map all v's with three components to an affine sub-space of
the 4-dimensional space, namely to those vectors with a 1 in the last
component. But then you are no longer working with three-dimensional vectors
but with four-dimensional ones. Again: isomorphism is not identity.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: matrix computations based on the GSL

2005-07-08 Thread Henning Thielemann

On Fri, 8 Jul 2005, Keean Schupke wrote:

> Henning Thielemann wrote:
>
> >>does it not make sense to define matrix application:
> >>
> >>mapply :: Matrix -> Vector -> Vector
> >>
> >>Then you can define say:
> >>
> >>rotate90 = mapply rotationMatrix90
> >>
> >>v' = rotate90  v
> >
> >... that's what I said about mulVec.
> >
> I guess that means we agree...

... yes, and it makes me wonder whether you read my earlier mails. I feel I'm
repeating myself.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Confused about Cyclic struture

2005-07-08 Thread Henning Thielemann

On Fri, 8 Jul 2005, Dinh Tien Tuan Anh wrote:

>   Another question: it's said in the book that using a cyclic structure (like
> ones = 1:ones), the list is represented by a fixed amount of memory.
>
>   Does it mean [1,1,1..] only occupies one cell of memory ?
>   How about in " take 100 [1,1,...] " ?

'take' will certainly ignore the cyclic structure. Even if it could detect
the cycle in its argument, there is no way to represent the list of 100
elements more efficiently.
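
A small sketch of the point (nothing library-specific):

  ones :: [Int]
  ones = 1 : ones                 -- one cons cell whose tail points back to itself

  firstHundred :: [Int]
  firstHundred = take 100 ones    -- 100 fresh cons cells; the cycle is not preserved

  main :: IO ()
  main = print (length firstHundred)   -- 100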

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Confused about Cyclic struture

2005-07-08 Thread Dinh Tien Tuan Anh


 Another question: it's said in the book that using a cyclic structure (like
ones = 1:ones), the list is represented by a fixed amount of memory.


 Does it mean [1,1,1..] only occupies one cell of memory ?
 How about in " take 100 [1,1,...] " ?




From: "Dinh Tien Tuan Anh" <[EMAIL PROTECTED]>
To: haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Confused about Cyclic struture
Date: Fri, 08 Jul 2005 11:41:02 +


 So is sharing already implemented in Haskell ?

Do I have to use a "where" clause to implement the sharing ?

Thanks a lot for your help
Cheers



To understand cyclic structures it is useful to think of "graph
reduction", because these graphs allow us to conveniently represent
sharing between objects. Cycles are simply "self-sharing".





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: matrix computations based on the GSL

2005-07-08 Thread Keean Schupke
Henning Thielemann wrote:

>>does it not make sense to define matrix application:
>>
>>mapply :: Matrix -> Vector -> Vector
>>
>>Then you can define say:
>>
>>rotate90 = mapply rotationMatrix90
>>
>>v' = rotate90  v
>>
>>
>
>... that's what I said about mulVec.
>  
>
I guess that means we agree...

Keean.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Keean Schupke
Henning Thielemann wrote:

>On Fri, 8 Jul 2005, Keean Schupke wrote:
>
>  
>
>>So the linear operator is translation (ie: + v)... effectively 'plus'
>>could be viewed as a function which takes a vector and returns a matrix
>>(operator)
>>
>>(+) :: Vector -> Matrix
>>
>>
>
>Since a matrix _is_ not a linear map but only its representation, this
>would not make sense. As I said, (v+) is not a linear map, thus there is
>no matrix which represents it. A linear map f must fulfill
> f 0 == 0
>
>But since
> v+0 == v
>  the function (v+) is only a linear map if 'v' is zero.
>
> I can't see how your extension of vectors by the 1-component fits in here.
>
>  
>
Eh?

Translation is a linear operation, no? Adding vectors translates the
first by the second (or the second by the first - the two are isomorphic)...
A translation can be represented by the matrix:

1   0   0   0
0   1   0   0
0   0   1   0
dx  dy  dz  1

So the result of "v+" is this matrix.

In other words this matrix is the 'vector addition operator'...
providing you pad the vectors
with an additional '1' at the end.

So if:

translate :: Vector -> Matrix

translate [x,y,z,1] = [[1,0,0,0],[0,1,0,0],[0,0,1,0],[x,y,z,1]]

 we can create a matrix representing translation from:

translate [3,4,5,1]

and can apply this translation to another vector:

mapply (translate [3,4,5,1]) [2,3,4,1] = [5,7,9,1]


All I was saying is that, following this, partial application of vector
addition:

[3,4,5] + [2,3,4] = [5,7,9]

but partially applying:

([3,4,5] +)

would be the matrix defined above as (translate [3,4,5,1]) ... Of course
this has the drawback that you need an extra dimension in your vectors and
matrices to cope with translation.
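
As a rough sketch of this idea with plain lists standing in for Vector and
Matrix (hypothetical helper names, row-vector convention as above):

  import Data.List (transpose)

  type Vector = [Double]
  type Matrix = [[Double]]          -- stored as a list of rows

  -- Row-vector convention: v' = v * M.
  mapply :: Matrix -> Vector -> Vector
  mapply m v = [ sum (zipWith (*) v col) | col <- transpose m ]

  translate :: Vector -> Matrix
  translate [x, y, z, _] = [ [1, 0, 0, 0]
                           , [0, 1, 0, 0]
                           , [0, 0, 1, 0]
                           , [x, y, z, 1] ]
  translate _ = error "translate: expects a 4-component homogeneous vector"

  -- mapply (translate [3,4,5,1]) [2,3,4,1]  ==  [5,7,9,1]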


Anyway I have more or less convinced myself that separating vectors and
matrices is the
right thing to do... I was just trying to define vector addition in
terms of a matrix operation
for neatness.

Keean.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Henning Thielemann

On Fri, 8 Jul 2005, Keean Schupke wrote:

> So the linear operator is translation (ie: + v)... effectively 'plus'
> could be viewed as a function which takes a vector and returns a matrix
> (operator)
>
> (+) :: Vector -> Matrix

Since a matrix _is_ not a linear map but only its representation, this
would not make sense. As I said, (v+) is not a linear map, thus there is
no matrix which represents it. A linear map f must fulfill
 f 0 == 0

But since
 v+0 == v
  the function (v+) is only a linear map if 'v' is zero.

 I can't see how your extension of vectors by the 1-component fits in here.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: matrix computations based on the GSL

2005-07-08 Thread Henning Thielemann

On Fri, 8 Jul 2005, Keean Schupke wrote:

> If a matrix is a linear operation on a vector,

It _is_ not a linear map, but it is a canonical representation of a linear
map.

> does it not make sense to define matrix application:
>
> mapply :: Matrix -> Vector -> Vector
>
> Then you can define say:
>
> rotate90 = mapply rotationMatrix90
>
> v' = rotate90  v

... that's what I said about mulVec.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Henning Thielemann

On Fri, 8 Jul 2005, David Roundy wrote:

> I don't particularly care what API you use for svd, since it's trivial to
> convert from one API to the other.  It's matrix arithmetic I care about,
> since that's the complicated part of the API.

Of course I want to use the results of more complicated routines with
basic matrix arithmetic, and I would like to reduce the number of conversions.
The other reason for the debate is that if you don't like the extra vector
type then you will not use it. If I want to apply a routine of yours to,
say, my audio data, I will have to decide whether to store it as a column
or as a row vector/matrix, although this decision seems rather irrelevant
and annoying to me.

> On the other hand, the most natural return value for svd would be a
> diagonal matrix, since that is what the objects are, right?

Hm, since SVD means Singular Value Decomposition, I would like to have the
singular values as they are. I don't want to search for them in a sparse
matrix.

> svd returns three matrices, which when multiplied together give the
> original matrix ...

This would be a nice property, though. I could do it by converting the
singular value list to a diagonal matrix. So I need a conversion, hm.

> or at least that's how I think of it.  But I'll grant that a diagonal
> matrix isn't the most convenient representation, and certainly is far from
> the most efficient, unless we introduce a diagonal matrix constructor
> (which would certainly be nice).

I would not like to obtain a value of the general Matrix type, since then it
is not statically guaranteed that it is diagonal.

>  I guess you'd prefer that svd returns a list of doubles and two lists
> of vectors? Or a list of triplets of a double and two vectors?

 Either there is a function to scale the columns of a matrix separately;
then two matrices and a list/vector of doubles are ok. Or there is a
function to multiply two vectors to obtain a matrix of rank 1 (outer
product or tensor product might be the right name. Btw. the (<>) operator
has the problem that it is always interpreted as the scalar product, but it
could also mean this kind of multiplication.) Then adding such products of
left and right singular vectors, scaled by the corresponding singular value,
lets me reconstruct the matrix.
 The triplet approach has the advantage that it is statically certain that
the number of singular values and singular vectors match. The matrix
approach has the advantage that it is statically guaranteed that the
dimensions of all singular vectors match.
 With the (orthogonal) singular vector matrices we map vectors from one
basis to their representation in another basis. So these matrices naturally
represent linear maps and it makes sense to use them.
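
For illustration, a sketch of that reconstruction from (sigma, u, v) triples
with plain lists and hypothetical names (no particular API):

  type Vector = [Double]
  type Matrix = [[Double]]      -- list of rows

  -- Rank-1 matrix from two vectors (outer/tensor product).
  outer :: Vector -> Vector -> Matrix
  outer u v = [ [ ui * vj | vj <- v ] | ui <- u ]

  addM :: Matrix -> Matrix -> Matrix
  addM = zipWith (zipWith (+))

  scaleM :: Double -> Matrix -> Matrix
  scaleM s = map (map (s *))

  -- A = sum over i of sigma_i * (u_i `outer` v_i)
  fromSVD :: [(Double, Vector, Vector)] -> Matrix
  fromSVD = foldr1 addM . map (\(s, u, v) -> scaleM s (outer u v))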

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Keean Schupke
Henning Thielemann wrote:

>
>
>In general a vector need not be a linear operator. You talked about
>vector translation; translation is not a linear operator. You gave some
>process to map the problem to somewhere where it becomes a linear
>operator. Other people said that the scalar product with a fixed vector is
>a linear operator. That's true. Now what is a natural interpretation of a
>vector as a linear operator? The scalar product or the translation? Vectors
>can be used and abused for many things, but an object which can be called a
>vector (because of its ability to be added and to be scaled) is not a
>linear operator itself and does not naturally represent one.
>
>  
>
So the linear operator is translation (ie: + v)... effectively 'plus'
could be
viewed as a function which takes a vector and returns a matrix (operator)

(+) :: Vector -> Matrix

Which could also be called 'translate'. So 'translate' takes a vector
and returns
a linear-operator which can be applied to a vector:

mapply (translate vector1) vector2

So I guess I could now ask, why allow vector addition at all?

Keean.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Henning Thielemann

On Fri, 8 Jul 2005, Keean Schupke wrote:

> Henning Thielemann wrote:
>
> >Do you mean
> > [x,y,z,1] * [[1,0,0,0],[0,1,0,0],[0,0,1,0],[dx,dy,dz,dw+1]]
> >?
> >
> Erm, yes that's what I meant ... but you obviously got the point.
>
> >>but how is this different from adding vectors? If we allow vector
> >>addition then we no longer have the nice separation between values and
> >>linear operators, as a value can also be a linear operator (a
> >>translation)?
> >
> >???
>
> Well if a vector can be a linear-operator, then surely it _is_ a matrix!

In general a vector need not be a linear operator. You talked about
vector translation; translation is not a linear operator. You gave some
process to map the problem to somewhere where it becomes a linear
operator. Other people said that the scalar product with a fixed vector is
a linear operator. That's true. Now what is a natural interpretation of a
vector as a linear operator? The scalar product or the translation? Vectors
can be used and abused for many things, but an object which can be called a
vector (because of its ability to be added and to be scaled) is not a
linear operator itself and does not naturally represent one.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: matrix computations based on the GSL

2005-07-08 Thread Keean Schupke
If a matrix is a linear operation on a vector, does it not make sense to
define matrix application:

mapply :: Matrix -> Vector -> Vector

Then you can define say:

rotate90 = mapply rotationMatrix90

v' = rotate90  v


Keean.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread David Roundy
On Fri, Jul 08, 2005 at 03:33:16PM +0200, Henning Thielemann wrote:
> 
> On Thu, 7 Jul 2005, David Roundy wrote:
> 
> > On the other hand, this is sort of a silly debate, since the API *I*
> > want is a subset of the API *you* want.
> 
> The API you want is certainly not just a subset of what I want. E.g. the
> singular value decomposition in your favorite API will probably return a
> 1xN matrix or an MxN diagonal matrix containing the singular values, while
> with my favorite API it will return a vector or a list. Right?

I don't particularly care what API you use for svd, since it's trivial to
convert from one API to the other.  It's matrix arithmetic I care about,
since that's the complicated part of the API.

On the other hand, the most natural return value for svd would be a
diagonal matrix, since that is what the objects are, right? svd returns
three matrices, which when multiplied together give the original
matrix... or at least that's how I think of it.  But I'll grant that a
diagonal matrix isn't the most convenient representation, and certainly is
far from the most efficient, unless we introduce a diagonal matrix
constructor (which would certainly be nice).  I guess you'd prefer that svd
returns a list of doubles and two lists of vectors? Or a list of triplets
of a double and two vectors?

(Answering another email...) I certainly agree that fft is not a function
of a matrix.
-- 
David Roundy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Keean Schupke
Henning Thielemann wrote:

>
>Do you mean
> [x,y,z,1] * [[1,0,0,0],[0,1,0,0],[0,0,1,0],[dx,dy,dz,dw+1]]
>?
>
>  
>
Erm, yes that's what I meant ... but you obviously got the point.

>>but how is this different from adding vectors? If we allow vector
>>addition then we no longer have the nice separation between values and
>>linear operators, as a value can also be a linear operator (a
>>translation)?
>>
>>
>
>???
>
>  
>
Well if a vector can be a linear-operator, then surely it _is_ a matrix!

Keean.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Keean Schupke
Henning Thielemann wrote:

>
>I'm curious to see whether your code gets swamped by conversions between
>Double and Matrix then. I really plead for representing everything with
>strings; this is the simplest and most flexible solution! :-]
>  
>
Surely it's a case of balancing the advantage of type safety against the
added complexity. The point, after all, is to reduce bugs. Type-sigs reduce
one type of bug, at the cost of an increase in complexity. My argument is
that at some point the probability of introducing errors due to the
increased complexity outweighs the reduction in the probability of error
due to type-safety.

I think this is a pragmatic point that cannot be argued with; all you can
argue over is where on the continuum this point lies.

As such we were discussing the complexity/type-safety trade-off with
respect to matrices - to go from this to such a sweeping statement... is
like saying "if I can't have it my way I don't want anything at all"...

Keean.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Henning Thielemann

On Fri, 8 Jul 2005, Keean Schupke wrote:

> Okay, this approach is starting to make sense to me... I can see now
> that Vectors are a different type of object to Matrices. Vectors
> represent points in N-Space and matrices represent operations on those
> points

That's what I wanted to express.

> (say rotations or translations)... But it seems we can represent
> translations as adding vectors or matrix operations (although we need to
> introduce the 'extra' dimension W... and have an extra field in vectors
> that contains the value '1').
>
> (3D translation)
>
> [x,y,z,1] * [[0,0,0,0],[0,0,0,0],[0,0,0,0],[dx,dy,dz,dw]] =
> [x+dx,y+dy,z+dz,1+dw]

Do you mean
 [x,y,z,1] * [[1,0,0,0],[0,1,0,0],[0,0,1,0],[dx,dy,dz,dw+1]]
?

> but how is this different from adding vectors? If we allow vector
> addition then we no longer have the nice separation between values and
> linear operators, as a value can also be a linear operator (a
> translation)?

???

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Henning Thielemann

On Fri, 8 Jul 2005, Keean Schupke wrote:

> Henning Thielemann wrote:
>
> >My objections to making everything a matrix were the objections I sketched
> >for MatLab.
> >
> >The example, again: If you write some common expression like
> >
> >transpose x * a * x
>
> Which just goes to show why haskell limits the '*' operator to
> multiplying the same types. Keep this to Matrix-times-Matrix operators.

Btw. the interface file by Alberto gives you uniform usage of an operator
symbol (<>) while retaining full type safety by functional dependencies.

http://dis.um.es/~alberto/hmatrix/doc/LinearAlgebra.Interface.html

This is a good solution, I think.
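
Not Alberto's actual interface, just a rough sketch of the technique: a
multi-parameter class with a functional dependency lets one operator symbol
cover several combinations while the result type stays statically determined
(all names here are hypothetical).

  {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}
  import Prelude hiding ((<>))   -- avoid the clash with the modern Prelude's (<>)

  newtype Vector = Vector [Double]
  newtype Matrix = Matrix [[Double]]   -- list of rows

  class Mul a b c | a b -> c where
    (<>) :: a -> b -> c

  instance Mul Matrix Vector Vector where
    Matrix m <> Vector v = Vector [ sum (zipWith (*) row v) | row <- m ]

  instance Mul Double Vector Vector where
    s <> Vector v = Vector (map (s *) v)
    -- e.g.  (2 :: Double) <> Vector [1,2,3]  gives  Vector [2,4,6]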

> I feel using separate types for vectors and scalars just overcomplicates
> things...

I'm curious to see whether your code gets swamped by conversions between
Double and Matrix then. I really plead for representing everything with
strings; this is the simplest and most flexible solution! :-]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Keean Schupke
Henning Thielemann wrote:

>
>Let me elaborate on that:
> In some cases putting vectors as columns into a matrix and then applying a
>matrix operation to this matrix leads to the same result as 'map'ping a
>matrix-vector operation over a list of vectors. But in other cases (as the
>one above) this is not what you want. I consider it a coincidence, not a
>general principle, when this kind of extension works.
>
>Let's consider another example: The basic definition of the Fourier
>transform is for vectors. MatLab wants to make the effect of vector
>operations consistent for row and column vectors, thus
>  
>
Okay, this approach is starting to make sense to me... I can see now that
Vectors are a different type of object to Matrices. Vectors represent
points in N-space and matrices represent operations on those points
(say rotations or translations)...

But it seems we can represent translations either as adding vectors or as
matrix operations (although we need to introduce the 'extra' dimension W...
and have an extra field in vectors that contains the value '1').

(3D translation)

[x,y,z,1] * [[0,0,0,0],[0,0,0,0],[0,0,0,0],[dx,dy,dz,dw]] =
[x+dx,y+dy,z+dz,1+dw]

but how is this different from adding vectors? If we allow vector addition
then we no longer have the nice separation between values and linear
operators, as a value can also be a linear operator (a translation)?

Keean.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Henning Thielemann

On Fri, 8 Jul 2005, Alberto Ruiz wrote:

> The server is working again.
>
> On Thursday 07 July 2005 20:58, Alberto Ruiz wrote:
> > I'm sorry, our web server is temporarily down :-(
> >
> > >
> > > http://dis.um.es/~alberto/hmatrix/matrix.html

I would also prefer a vector of complex numbers for the FFT function
instead of an nx2 matrix.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Henning Thielemann

On Thu, 7 Jul 2005, David Roundy wrote:

> On the other hand, this is sort of a silly debate, since the API *I*
> want is a subset of the API *you* want.

The API you want is certainly not just a subset of what I want. E.g. the
singular value decomposition in your favorite API will probably return a
1xN matrix or an MxN diagonal matrix containing the singular values, while
with my favorite API it will return a vector or a list. Right?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Keean Schupke
Henning Thielemann wrote:

>My objections to making everything a matrix were the objections I sketched
>for MatLab.
>
>The example, again: If you write some common expression like
>
>transpose x * a * x
>  
>
Which just goes to show why haskell limits the '*' operator to
multiplying the same types. Keep this to Matrix-times-Matrix operators.

Surely a vector is a 1xN matrix? And in terms of matrices a 1x1 matrix
and a scalar are the same? I don't see the point of having separate
matrix and vector types... just have matrix constructors:

matrix :: [[a]] -> Matrix a
vector :: [a] -> Matrix a
scalar :: a -> Matrix a

I don't see any problems with this, and it lets you define the matrix
exponential:

instance Floating a => Floating (Matrix a) where
exp = undefined

The type of (**) in Floating is (a -> a -> a), so substituting the class
parameter gives (**) :: Matrix a -> Matrix a -> Matrix a; however, you can
only compute the matrix power (x**y) for "scalar-x matrix-y" or
"matrix-x scalar-y"...

I feel using separate types for vectors and scalars just overcomplicates
things...

>then both the human reader and the compiler don't know whether x is a
>"true" matrix or if it simulates a column or a row vector. It may be that
>'x' is a row vector and 'a' a 1x1 matrix, then the expression denotes a
>square matrix of the size of the vector simulated by 'x'. It may be that
>'x' is a column vector and 'a' a square matrix. Certainly in most cases I
>want the latter one and I want to have a scalar as a result. But if
>everything is a matrix then I have to check at run-time if the result is a
>1x1 matrix and then I have to extract the only element from this matrix.
>If I omit the 1x1 test, mistakes can remain undiscovered. I blame the
>common notation x^T * A * x for this trouble, since the alternative
>notation <x, A*x> can be immediately transcribed into computer language
>code while retaining a strong distinction between a vector and a matrix
>type.
>  
>
If x is a matrix and y is a matrix then

x * y

can only be interpreted in one simple way - no special cases. Sometimes you
may need to transpose a 1xN into an Nx1 but at least you will get an
'error' if you
try to use the wrong type...

>More examples? More theory?
>  
>
More complexity?

Keean.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread David Roundy
On Thu, Jul 07, 2005 at 03:08:50PM +0200, Henning Thielemann wrote:
> 
> On Thu, 7 Jul 2005, David Roundy wrote:
> 
> > On Tue, Jul 05, 2005 at 08:17:58PM +0200, Henning Thielemann wrote:
> > > The example, again: If you write some common expression like
> > >
> > > transpose x * a * x
> > >
> > > then both the human reader and the compiler don't know whether x is a
> > > "true" matrix or if it simulates a column or a row vector. It may be that
> > > 'x' is a row vector and 'a' a 1x1 matrix, then the expression denotes a
> > > square matrix of the size of the vector simulated by 'x'. It may be that
> > > 'x' is a column vector and 'a' a square matrix. Certainly in most cases I
> > > want the latter one and I want to have a scalar as a result. But if
> > > everything is a matrix then I have to check at run-time if the result is a
> > > 1x1 matrix and then I have to extract the only element from this matrix.
> > > If I omit the 1x1 test, mistakes can remain undiscovered. I blame the
> > > common notation x^T * A * x for this trouble, since the alternative
> > > notation <x, A*x> can be immediately transcribed into computer language
> > > code while retaining a strong distinction between a vector and a matrix
> > > type.
> >
> > The issue is that Haskell (as far as I understand, and no one has suggested
> > anything to the contrary) doesn't have a sufficiently powerful type system
> > to represent matrices or vectors in a statically typed way.  It would be
> > wonderful if we could represent matrix multiplication as
> >
> > matrix_mul :: Matrix n m -> Matrix m o -> Matrix n o
> 
> Even if we had that I would vote for distinction of Vector and Matrix! I
> wanted to show with my example that vectors and matrices are different
> enough that it makes sense to give them different types. In my opinion
> multiplying a matrix with a so-called column vector is a more fundamental
> bug than multiplying a matrix with a vector of non-compatible size. That
> is I like static check of matrix-vector combinations and a dynamic check
> of their size. The latter also because in many applications you don't know
> the vector size at compile time.

You wouldn't need to know the vector size at compile time in order to get
static type checking of vector sizes--if the type system is sufficiently
powerful.

> > Also note that if you have several vectors x for which you want to compute
> > the dot product with metric A, and if you want to do this efficiently,
> > you'll have to convert your list of vectors into a matrix anyways.
> 
> If you bundle some vectors as columns in matrix B and want to compute
> norms with respect to matrix A writing
>  B^T * A * B
>   you will not only get the norms of the vectors in B but also many mixed
> scalar products. This example shows to me that matrices are not simply
> collections of vectors.

Good point, and that does illustrate the pain of doing this (treating sets
of vectors as matrices).
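
For concreteness, a sketch of the two approaches with plain lists and
hypothetical helpers (not any particular library):

  import Data.List (transpose)

  type Vector = [Double]
  type Matrix = [[Double]]   -- list of rows

  mulMV :: Matrix -> Vector -> Vector
  mulMV a x = [ sum (zipWith (*) row x) | row <- a ]

  dot :: Vector -> Vector -> Double
  dot x y = sum (zipWith (*) x y)

  mulMM :: Matrix -> Matrix -> Matrix
  mulMM p q = [ [ sum (zipWith (*) row col) | col <- transpose q ] | row <- p ]

  -- The 'map' way: one quadratic form per vector.
  norms :: Matrix -> [Vector] -> [Double]
  norms a = map (\x -> x `dot` mulMV a x)

  -- The 'bundle into B' way: B^T * A * B holds the same numbers on its
  -- diagonal, plus all the mixed products x_i^T A x_j that we don't want.
  normsViaMatrix :: Matrix -> [Vector] -> [Double]
  normsViaMatrix a xs =
    let b = transpose xs                     -- the vectors become columns of B
        g = mulMM (transpose b) (mulMM a b)  -- B^T A B
    in [ row !! i | (i, row) <- zip [0 ..] g ]   -- keep only the diagonal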

> Btw. we should not mess up the design with decisions on efficiency
> concerns. If you want efficiency then you can abuse matrices for that but
> I consider a 'map' over a list of vectors as the cleaner way (and more
> type safe, because you can't make transposition errors).

Mapping is cleaner, but something like 10 times slower... not for my
example above, but for a simple y = A*x computation--you then could do the
vector-vector inner product better than transpose y * x.

Efficiency concerns shouldn't be separated from design decisions.  An API
shouldn't force an implementation to be inefficient.  On the other hand,
this is sort of a silly debate, since the API *I* want is a subset of the
API *you* want.  (Except that I'd like matrices to be an instance of
Floating, but that's easy enough to do once we've got all the matrix
operations...)
-- 
David Roundy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Confused about Cyclic struture

2005-07-08 Thread Dinh Tien Tuan Anh


 So is sharing already implemented in Haskell ?

Do I have to use a "where" clause to implement the sharing ?

Thanks a lot for your help
Cheers



To understand cyclic structures it is useful to think of "graph
reduction", because these graphs allow us to conveniently represent
sharing between objects. Cycles are simply "self-sharing".





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Henning Thielemann

On Thu, 7 Jul 2005, Alberto Ruiz wrote:

> Hello! Thank you very much for all your suggestions. A new version of the
> library can be found at:
>
> http://dis.um.es/~alberto/hmatrix/matrix.html

If the Matrix type were parametrised, then Matrix.fromBlocks could use
more natural indexing:

Matrix.fromBlocks :: [[Matrix a b]] -> Matrix (Int,a) (Int,b)

The Int of the index pair would store the block number and the type-a
index would store the index within the block. Hm, but it would not be
possible to store the sizes of the sub-matrices.  It would be possible
only if the sub-matrices all have the same size.  Maybe it is better to
allow matrices of matrices and to be able to apply a matrix-matrix
multiplication which in turn employs a matrix-matrix multiplication of
its elements. But the matrix decompositions will fail on this structure -
though possibly those which fail aren't appropriate for block matrices
anyway.
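
A minimal sketch of what such a parametrised type might look like (not the
GSL binding's Matrix, just an illustration; bounds and size checks ignored):

  -- A matrix indexed by arbitrary index types; block structure lives in the index.
  newtype Matrix i j = Matrix { at :: i -> j -> Double }

  -- The outer Int selects the block, the inner index addresses within it.
  fromBlocks :: [[Matrix a b]] -> Matrix (Int, a) (Int, b)
  fromBlocks blocks = Matrix $ \(bi, i) (bj, j) -> at (blocks !! bi !! bj) i j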

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Henning Thielemann

On Fri, 8 Jul 2005, Alberto Ruiz wrote:

> The server is working again.
>
> On Thursday 07 July 2005 20:58, Alberto Ruiz wrote:
> > I'm sorry, our web server is temporarily down :-(
> >
> > >
> > > http://dis.um.es/~alberto/hmatrix/matrix.html

I would remove the 'matrix' portions of the function names, since this
information is already contained in the module name. E.g. after importing
LinearAlgebra.Matrix as Matrix, Matrix.matrix looks strange, but
Matrix.fromList says everything.

I suggest

 matrix         -> fromList
 matrixElements -> toList
 displayMatrix  -> display
 readMatrix     -> fromFile :: FilePath -> IO Matrix
 writeMatrix    -> toFile :: FilePath -> Matrix -> IO ()
 blockMatrix    -> fromBlocks or fromMatrices
 subMatrix      -> clip or cut or take
 matrixScale    -> scale
 matrixOffset   -> offset
 matrixAdd      -> add
 matrixSub      -> sub
 matrixMul      -> mul  (element-wise multiplication?
  then better zipMul or elementwiseMul, elMul, or whatever :-)
 matrixDiv      -> div
 matrixSigns    -> signs
 matrixAbs      -> abs
 matrix_x_matrix  ... hm, difficult
 matrix_x_vector
 vector_x_matrix

 msin :: Matrix -> Matrix
 mcos :: Matrix -> Matrix
 mtan :: Matrix -> Matrix
 masin :: Matrix -> Matrix
 macos :: Matrix -> Matrix
 matan :: Matrix -> Matrix
 matan2 :: Matrix -> Matrix -> Matrix
 msinh :: Matrix -> Matrix
 mcosh :: Matrix -> Matrix
 mtanh :: Matrix -> Matrix
 masinh :: Matrix -> Matrix
 macosh :: Matrix -> Matrix
 matanh :: Matrix -> Matrix
 mexp :: Matrix -> Matrix
 mlog :: Matrix -> Matrix

If there are no efficiency concerns, I would drop the element-wise operations
and prefer a matrix-map and a matrix-zipWith. If these operations are to
remain, I would somehow indicate their element-wise nature in the name.
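
A sketch of that alternative with hypothetical names (assuming a Matrix
wrapped around a list of rows):

  newtype Matrix = Matrix [[Double]]

  mmap :: (Double -> Double) -> Matrix -> Matrix
  mmap f (Matrix rows) = Matrix (map (map f) rows)

  mzipWith :: (Double -> Double -> Double) -> Matrix -> Matrix -> Matrix
  mzipWith f (Matrix xs) (Matrix ys) = Matrix (zipWith (zipWith f) xs ys)

  -- The element-wise functions then come for free:
  msin, mexp :: Matrix -> Matrix
  msin = mmap sin
  mexp = mmap exp

  elMul :: Matrix -> Matrix -> Matrix    -- element-wise multiplication
  elMul = mzipWith (*)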

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] matrix computations based on the GSL

2005-07-08 Thread Alberto Ruiz
The server is working again.

On Thursday 07 July 2005 20:58, Alberto Ruiz wrote:
> I'm sorry, our web server is temporarily down :-(
>
> >
> > http://dis.um.es/~alberto/hmatrix/matrix.html
>
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Overlapping Instances with Functional Dependencies

2005-07-08 Thread Martin Sulzmann

Simon's dead right, too :) The issue raised here is of a general nature and
doesn't depend on the particular (syntactic) formalism used to specify
type dependencies (be it FDs, ATs, ...). The consequence is that
instances and type dependencies are more closely linked to each other
than one might think (in the case of instance/improvement overlap at least).

Martin


Simon Peyton-Jones writes:
 > Martin's dead right.  GHC uses a less sophisticated mechanism to do
 > matching when it's thinking about functional dependencies than when it's
 > doing straight instance matching.  Maybe something cleverer for fundeps
 > would make sense, as you point out.  I hadn't thought of that before;
 > it's a good point.
 > 
 > Nowadays, whenever a fundep question comes up I think "how would it show
 > up if we had associated type synonyms instead of fundeps?" (see the
 > paper on my home page).  In this case I think the answer is "GHC's
 > current instance-matching mechanism would work unchanged"; or to put it
 > another way, what ever mechanism is used for instance matching, the same
 > would be used for type dependencies.
 > 
 > Simon
 >   
 > | I wouldn't call this a bug, overlapping instances
 > | and in particular the combination with functional dependencies
 > | are something which is not well studied yet.
 > | Hence, GHC is very conservative here.
 > | 
 > | I feel like you, this program should work.
 > | As you correctly point out, there's a conflict among the
 > | two improvement rules (resulting from the instances and FD).
 > | A sensible decision is to apply the same "ad-hoc"
 > | mechanism to improvement rules that is currently
 > | applied to overlapping instances. Of course, we need some
 > | formal system to express such conditions precisely.
 > | You find some hints how to achieve this in
 > | 
 > | G. J. Duck, S. Peyton-Jones, P. J. Stuckey, and M. Sulzmann.
 > | Sound and decidable type inference for functional dependencies.
 > | In Proc. of ESOP'04
 > | 
 > | Martin
 > | 
 > | 
 > | Daniel Brown writes:
 > |  > I have the following three programs:
 > |  >
 > |  >class Foo a b
 > |  >instance Foo (a -> b) (a -> [b])
 > |  >instance Foo a a
 > |  >
 > |  >class Bar a b | a -> b
 > |  >instance Bar (a -> b) (a -> b)
 > |  >instance Bar a a
 > |  >
 > |  >class Baz a b | a -> b
 > |  >instance Baz (a -> b) (a -> [b])
 > |  >instance Baz a a
 > |  >
 > |  > When compiled in ghc 6.4 (with -fglasgow-exts
 > |  > -fallow-overlapping-instances -fallow-undecidable-instances) Foo
 > |  > and Bar compile fine, but Baz fails with this error:
 > |  >
 > |  >    Baz.hs:2:0:
 > |  >        Functional dependencies conflict between instance declarations:
 > |  >          Baz.hs:2:0: instance Baz (a -> b) (a -> [b])
 > |  >          Baz.hs:3:0: instance Baz a a
 > |  >
 > |  > This is how I interpret the error: The fundep says "a uniquely
 > |  > determines b", but if you have `Baz (Int -> Int) b`, b is `Int -> [Int]`
 > |  > according to the first instance and `Int -> Int` according to the second
 > |  > instance. b isn't uniquely determined by a, so the functional dependency
 > |  > isn't functional -- thus the conflict.
 > |  >
 > |  > When confronted with overlapping instances, the compiler chooses the
 > |  > most specific one (if it is unique), e.g. `Baz (a -> b) (a -> [b])` is
 > |  > more specific than `Baz a a`.
 > |  >
 > |  > But it seems that the combination of the two features is broken: if the
 > |  > most specific instance is chosen before checking the functional
 > |  > dependency, then the fundep is satisfied; if the fundep is checked
 > |  > before choosing the most specific instance, then it isn't.
 > |  >
 > |  > Is this a bug, or am I confused?
 > |  >
 > |  >   Dan
 > |  > ___
 > |  > Haskell-Cafe mailing list
 > |  > Haskell-Cafe@haskell.org
 > |  > http://www.haskell.org/mailman/listinfo/haskell-cafe
 > | ___
 > | Haskell-Cafe mailing list
 > | Haskell-Cafe@haskell.org
 > | http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Overlapping Instances with Functional Dependencies

2005-07-08 Thread Simon Peyton-Jones
Martin's dead right.  GHC uses a less sophisticated mechanism to do
matching when it's thinking about functional dependencies than when it's
doing straight instance matching.  Maybe something cleverer for fundeps
would make sense, as you point out.  I hadn't thought of that before;
it's a good point.

Nowadays, whenever a fundep question comes up I think "how would it show
up if we had associated type synonyms instead of fundeps?" (see the
paper on my home page).  In this case I think the answer is "GHC's
current instance-matching mechanism would work unchanged"; or to put it
another way, what ever mechanism is used for instance matching, the same
would be used for type dependencies.
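
For illustration only, a sketch of how the Baz example might look with an
associated type synonym, written in the TypeFamilies syntax GHC gained later
(not available at the time); note that associated type instances may not
overlap, so the catch-all instance is replaced here by a specific one:

  {-# LANGUAGE TypeFamilies #-}

  class Baz a where
    type BazResult a
    baz :: a -> BazResult a

  instance Baz (a -> b) where
    type BazResult (a -> b) = a -> [b]
    baz f = \x -> [f x]

  -- A non-overlapping stand-in for the original "instance Baz a a".
  instance Baz Int where
    type BazResult Int = Int
    baz = id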

Simon
  
| I wouldn't call this a bug, overlapping instances
| and in particular the combination with functional dependencies
| are something which is not well studied yet.
| Hence, GHC is very conservative here.
| 
| I feel like you, this program should work.
| As you correctly point out, there's a conflict among the
| two improvement rules (resulting from the instances and FD).
| A sensible decision is to apply the same "ad-hoc"
| mechanism to improvement rules that is currently
| applied to overlapping instances. Of course, we need some
| formal system to express such conditions precisely.
| You find some hints how to achieve this in
| 
| G. J. Duck, S. Peyton-Jones, P. J. Stuckey, and M. Sulzmann.
| Sound and decidable type inference for functional dependencies.
| In Proc. of ESOP'04
| 
| Martin
| 
| 
| Daniel Brown writes:
|  > I have the following three programs:
|  >
|  >class Foo a b
|  >instance Foo (a -> b) (a -> [b])
|  >instance Foo a a
|  >
|  >class Bar a b | a -> b
|  >instance Bar (a -> b) (a -> b)
|  >instance Bar a a
|  >
|  >class Baz a b | a -> b
|  >instance Baz (a -> b) (a -> [b])
|  >instance Baz a a
|  >
|  > When compiled in ghc 6.4 (with -fglasgow-exts
|  > -fallow-overlapping-instances -fallow-undecidable-instances) Foo
|  > and Bar compile fine, but Baz fails with this error:
|  >
|  >    Baz.hs:2:0:
|  >        Functional dependencies conflict between instance declarations:
|  >          Baz.hs:2:0: instance Baz (a -> b) (a -> [b])
|  >          Baz.hs:3:0: instance Baz a a
|  >
|  > This is how I interpret the error: The fundep says "a uniquely
|  > determines b", but if you have `Baz (Int -> Int) b`, b is `Int -> [Int]`
|  > according to the first instance and `Int -> Int` according to the second
|  > instance. b isn't uniquely determined by a, so the functional dependency
|  > isn't functional -- thus the conflict.
|  >
|  > When confronted with overlapping instances, the compiler chooses the
|  > most specific one (if it is unique), e.g. `Baz (a -> b) (a -> [b])` is
|  > more specific than `Baz a a`.
|  >
|  > But it seems that the combination of the two features is broken: if the
|  > most specific instance is chosen before checking the functional
|  > dependency, then the fundep is satisfied; if the fundep is checked
|  > before choosing the most specific instance, then it isn't.
|  >
|  > Is this a bug, or am I confused?
|  >
|  >   Dan
|  > ___
|  > Haskell-Cafe mailing list
|  > Haskell-Cafe@haskell.org
|  > http://www.haskell.org/mailman/listinfo/haskell-cafe
| ___
| Haskell-Cafe mailing list
| Haskell-Cafe@haskell.org
| http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe