On 23/10/2010 5:19 PM, Daniel Peebles wrote:
Just out of curiosity, why do you (and many others I've seen with similar proposals) talk about additive monoids? are they somehow fundamentally different from multiplicative monoids?

People usually use additive notation for commutative monoids, and multiplicative notation for generic monoids. It's a convention, nothing else. Beyond that convention, they are of course isomorphic.

When I was playing with building an algebraic hierarchy, I picked a "neutral" operator for my monoids (I actually started at magma, but it's the same idea) and only introduced the addition/multiplication distinction at semirings, since it seemed pointless to distinguish the two operations before you have a distributive law relating them.
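As a rough sketch of what I mean (class and method names here are illustrative, not from any particular library): the monoid layer has a single unnamed-flavour operator, and only the semiring layer names two distinguished monoid structures, because distributivity is what finally relates them.

```haskell
-- Illustrative hierarchy sketch; MyMonoid/Semiring and their method
-- names are made up for this email, not taken from base or any package.

-- One neutral, associative operator; no commitment to + or * yet.
class MyMonoid a where
  op      :: a -> a -> a  -- associative
  neutral :: a            -- identity for op

-- Only at semirings do we name an additive and a multiplicative
-- structure, because mul is now required to distribute over add.
class Semiring a where
  zero :: a               -- identity of add
  one  :: a               -- identity of mul
  add  :: a -> a -> a     -- commutative monoid
  mul  :: a -> a -> a     -- monoid, distributes over add

instance Semiring Integer where
  zero = 0
  one  = 1
  add  = (+)
  mul  = (*)

main :: IO ()
main = print (mul (add 2 (3 :: Integer)) 4)  -- prints 20, same as add (mul 2 4) (mul 3 4)
```

The point is only that nothing forces the additive/multiplicative naming before this level; below it, a single `op`/`neutral` pair suffices.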

How you do this really depends on what features you have for naming/renaming, how notions of inheritance/subtyping work, etc.

Having multiple names for structures which are, in fact, isomorphic turns out to be really convenient when you want to combine the structures without duplicating declarations. Let me stress that again: 'really convenient'. Not required, mandatory, or anything else like that.

Of course, all we really require is any Turing complete language. So if 'require' is the main criterion, we should all still be programming in assembler. There really is a reason we're having this discussion on the Haskell-cafe mailing list...

Jacques

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
