Wikipedia has a list:
https://en.wikipedia.org/wiki/Comparison_of_programming_languages_%28array%29#Array_system_cross-reference_list
On Wed, Sep 2, 2020 at 11:53 PM Mario Carneiro wrote:
> Don't forget Fortran arrays, which have user-configurable ordering (!) but
> by default start with 1.
Don't forget Fortran arrays, which have user-configurable ordering (!) but
by default start with 1.
On Wed, Sep 2, 2020 at 11:11 PM David A. Wheeler
wrote:
> I think there has been a slow march from one-based indexing to zero-based
> indexing in computing.
>
> Many systems designed two decades
I think there has been a slow march from one-based indexing to zero-based
indexing in computing.
Many systems designed two decades ago supported one-based indexing, such as R
and MATLAB/Octave. But newer systems are pretty much uniformly zero-based.
I realize that abstract mathematics is not
No, I'm not proposing a translation to or from lean. (Well, I am looking
into it, but that's an entirely separate discussion.) My purpose was only
to draw a parallel to another theorem prover library driven in large part
by mathematicians, to send the message that this is a minor variation that
is
Hi,
Like Mario mentioned, we have to look at the way matrix multiplication
with vectors works. The existing ` maVecMul ` takes a matrix and a
function of a single variable, so it would be easy to use words as-is if
matrices were indexed starting with 0.
About concatenation, there are
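Thierry's point can be illustrated in Lean-style type theory. The names below are illustrative, not set.mm's actual definitions: with 0-based indices, a word of length N is literally a function on the index set (0..^N), so it can be passed to matrix-vector multiplication unchanged.

```lean
-- Illustrative sketch (not set.mm's definitions): a vector over index set
-- `Fin n` is a function of a single variable, and a word <" A B C "> is
-- exactly such a function when indices start at 0.
abbrev Vec (n : Nat) (R : Type) := Fin n → R

-- Matrix-vector multiplication: row i of M dotted with v, summing over
-- columns 0 .. n-1 (core Lean only, hence the explicit bound check).
def mulVec {m n : Nat} (M : Fin m → Fin n → Int) (v : Vec n Int) : Vec m Int :=
  fun i => (List.range n).foldl
    (fun acc j => if h : j < n then acc + M i ⟨j, h⟩ * v ⟨j, h⟩ else acc) 0
```

With 1-based words, every use of a word as a vector would instead need an explicit index shift at this boundary.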
Is the issue here the ability to translate to Lean more easily?
On the flip side, there has been some interest in translation from set.mm
to Mathematica, which would be more difficult with 0-based matrices.
Why is there a need for words in the development of matrices? Is any of
this already
In Coq and Lean, the structure of inductive types, specifically the Peano
natural numbers starting at 0, *strongly* prefers 0-based indexing. The
difference is stark enough that it's not really ever a discussion. You can
do induction on nat, you can't do induction on positive nat (at least the
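Mario's point can be made concrete in Lean (a sketch with illustrative names): `Nat` is an inductive type whose recursor has base case 0, so structural recursion from 0 comes for free, while a positive-nat type has no recursor of its own.

```lean
-- Structural recursion on Nat: the base case is 0, forced by the shape of
-- the inductive type itself.
def sumUpTo : Nat → Nat
  | 0     => 0
  | n + 1 => sumUpTo n + (n + 1)

-- A "positive nat" is merely a subtype carved out of Nat; it inherits no
-- recursor, so any induction on it must detour through Nat.
def PosNat : Type := { n : Nat // 0 < n }
```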
Wouldn’t they be clearly differentiated in a type-theory setup? How do
Coq/Lean treat those? Are they the same concept or separate ones?
-stan
On Wed, Sep 2, 2020 at 8:51 PM 'fl' via Metamath
wrote:
> > I do not like the idea to regard words as vectors - these are two
> completely
* Mathematica
Le mercredi 2 septembre 2020 à 21:08:00 UTC+2, fl a écrit :
> No difference in Mathematic between a word and a tuple as whas to be
> expected.
>
> https://mathematica.stackexchange.com/search?q=concatenation
* was
Le mercredi 2 septembre 2020 à 21:08:00 UTC+2, fl a écrit :
> No difference in Mathematic between a word and a tuple as whas to be
> expected.
>
> https://mathematica.stackexchange.com/search?q=concatenation
No difference in Mathematic between a word and a tuple as whas to be
expected.
https://mathematica.stackexchange.com/search?q=concatenation
https://mathematica.stackexchange.com/questions/37705/concatenation-of-lists-in-a-fibonacci-like-pattern
--
FL
> I do not like the idea to regard words as vectors - these are two
completely different concepts
Simply write a computer program and you will soon realize that tuples can be
concatenated at will.
Length is a concept relevant to words and matrices.
There is an abstract theory of words
I do not like the idea to regard words as vectors - these are two
completely different concepts. Words have prefixes and suffixes, can be
concatenated, and subwords can be extracted - what would all of these mean
for vectors? On the other side, vectors can be added and multiplied by
scalars
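The distinction fl draws is visible in the types themselves. An illustrative Lean sketch (not set.mm's definitions): concatenation and prefixes belong to the word interface, addition and scalar multiplication to the vector interface, and neither transfers automatically to the other.

```lean
abbrev Word (α : Type) := List α              -- supports ++, take, drop
abbrev Vec (n : Nat) (R : Type) := Fin n → R  -- supports pointwise operations

-- Word operations: concatenation and prefix extraction.
def cat (u v : Word Char) : Word Char := u ++ v
def prefixOf (w : Word Char) (k : Nat) : Word Char := w.take k

-- Vector operations: addition and scalar multiplication, pointwise.
def vadd {n : Nat} (u v : Vec n Int) : Vec n Int := fun i => u i + v i
def smul {n : Nat} (c : Int) (v : Vec n Int) : Vec n Int := fun i => c * v i
```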
> I don't want to be the one making the decision on this, in part because I
won't be doing the bulk of the work,
There are ~300 proofs about words. No more. It's not a big deal. If
everybody takes 30 proofs it's done in two weeks. Changing
the start of the indices is a pretty mechanical
I don't want to be the one making the decision on this, in part because I
won't be doing the bulk of the work, and I don't really have special
expertise in the matter. It just seems that starting at 0 goes against
virtually all of published mathematics on matrices.
I would feel better if
On 02/09/2020 23:25, 'fl' via Metamath wrote:
The best thing to do is to redefine the ` <" ... "> ` operator so that it
takes (1 ... N) as its set of indices, and then fix up all the proofs
referring to the definition.
You should have only one definition for matrices, tuples and words
since all that is the
That looks great!
I did not see all of the app; are you offering only algebra problems? If
so, I feel this would give too limited a view of what "Maths" is, and I
would suggest also including a bit of logic, maybe one inductive proof,
and why not the MIU system?
On 02/09/2020 06:18, Norman Megill
> Note that an expression such as <" <" A B C "> <" D E F "> <" G H I "> ">
> has the type (0..^N) -> ((0..^N) -> R), which would have to be converted
> anyway to match the expected I x I -> R type used by square matrices. If we
> take I to be (0..^N) then this is just an uncurry, but if the
On Wed, Sep 2, 2020 at 7:28 AM Thierry Arnoux
wrote:
> I’ve done a mock-up of this conversion (including index translation), you
> can find it here:
> http://us2.metamath.org:88/mpeuni/lmat22e11.html
>
That's a whole lot of boilerplate; I would hope that most of it can be
broken into a lemma
I’ve done a mock-up of this conversion (including index translation), you can
find it here:
http://us2.metamath.org:88/mpeuni/lmat22e11.html
On the other hand, does anyone have any idea or suggestion about how I could
have expressed the Laplace expansion without integer indices, on any finite
Note that an expression such as <" <" A B C "> <" D E F "> <" G H I "> ">
has the type (0..^N) -> ((0..^N) -> R), which would have to be converted
anyway to match the expected I x I -> R type used by square matrices. If we
take I to be (0..^N) then this is just an uncurry, but if the official
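Norm's conversion can be written out explicitly (a Lean sketch with illustrative names): the nested-word expression has a curried type, the square matrix is a function on pairs, and when the index set is (0..^N) the translation is exactly an uncurry.

```lean
-- A row of rows, as produced by <" <" A B C "> ... ">, has a curried type:
abbrev Rows (n : Nat) (R : Type) := Fin n → Fin n → R

-- A square matrix over an index set I is a function on pairs I × I:
abbrev Matrix (I R : Type) := I × I → R

-- When I = Fin n, converting one to the other is just uncurrying.
def toMatrix {n : Nat} {R : Type} (rows : Rows n R) : Matrix (Fin n) R :=
  fun p => rows p.1 p.2
```

If the official index set is anything other than (0..^N), the conversion must also translate indices, which is where the boilerplate in Thierry's mock-up comes from.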
For clarification: there *is* already a definition of (square) matrices
based on arbitrary finite sets used as index sets (see ~df-mat, contributed
by Stefan O'Rear). The theorems which I needed to prove Cramer's rule are
based on this definition. If I understand Thierry correctly, he does not
Indeed, we do have matrix vector multiplication, ` maVecMul `,
introduced by Alexander, and used in his proof of Cramer's rule.
That operation considers vectors as functions of a single variable,
indexed by any finite set.
This can only identify with Words (` Word X `, which can be input as `
<"
>
> I will reiterate my suggestion to use finite sets for indexing when
> possible, which sidesteps the question
Whatever the final decision on indexing start will be, I really hope that
there still will be support for matrices indexed by arbitrary finite sets.
I'm not sure if this example
I will reiterate my suggestion to use finite sets for indexing when
possible, which sidesteps the question, but while I would like to defer to
Norm here, I really think a lack of consistency is going to cause actual
problems in this area. What is the definition of a matrix multiplied by a
vector?