On Sunday, 26 June 2016 at 04:25:07 UTC, "Smoke" Adams wrote:
Languages:
C#: https://msdn.microsoft.com/en-us/library/0w4e0fzs.aspx
Java:
https://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.17.3
C99/C11:
http://www.open-std.org/jtc1/sc22/wg14/www/docs/C99RationaleV5.10.pdf
(see 6.5.5 for the change to the % operator; a related example is at 7.20.6).
Python2: https://docs.python.org/2/reference/expressions.html
Python3: https://docs.python.org/3/reference/expressions.html
CPUs:
ARMv7 (AEABI):
https://github.com/wayling/xboot-clone/blob/master/src/arch/arm/lib/gcc/__aeabi_idivmod.S
ARMv7 (Darwin):
http://opensource.apple.com//source/clang/clang-163.7.1/src/projects/compiler-rt/lib/arm/modsi3.S
MIPS:
http://www.mrc.uidaho.edu/mrc/people/jff/digital/MIPSir.html
(see the DIV instruction)
x86: http://x86.renejeschke.de/html/file_module_x86_id_137.html
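For anyone who doesn't want to dig through all of those, here is a minimal C sketch (my own, assuming a C99-conforming compiler) of the semantics C99, C#, Java, x86 IDIV and ARM's __aeabi_idivmod all pin down: division truncates toward zero, the remainder takes the sign of the dividend, and (a/b)*b + a%b == a holds for every sign combination.

#include <assert.h>
#include <stdio.h>

int main(void)
{
    // Truncated division, per C99 6.5.5: the quotient rounds toward
    // zero, so the remainder always carries the dividend's sign.
    int pairs[][2] = { {7, 3}, {-7, 3}, {7, -3}, {-7, -3} };

    for (int i = 0; i < 4; ++i) {
        int a = pairs[i][0], b = pairs[i][1];
        assert((a / b) * b + a % b == a); // the defining identity
        printf("%3d / %2d = %2d   %3d %% %2d = %2d\n",
               a, b, a / b, a, b, a % b);
    }
    return 0;
}

This prints -7 % 3 = -1 and 7 % -3 = 1. Python is the one oddball in that list: its // floors instead of truncating, so its % follows the divisor's sign, but it preserves exactly the same identity.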
Now, I'm sure there's some weird CPU that hasn't been produced since the 80s, and that D will never support, which does it some other way; but for every platform that matters today, this isn't the case.
This is not MY definition; this is the definition everybody except you uses. Even PHP gets this right (http://php.net/manual/en/language.operators.arithmetic.php).
Now, champion, what do you have supporting your definition?
http://mathworld.wolfram.com/Congruence.html
https://en.wikipedia.org/wiki/Modulo_operation
https://en.wikipedia.org/wiki/Modular_arithmetic
http://stackoverflow.com/questions/1082917/mod-of-negative-number-is-melting-my-brain
https://www.securecoding.cert.org/confluence/pages/viewpage.action?pageId=6422581
http://www.mathworks.com/help/matlab/ref/mod.html?requestedDomain=www.mathworks.com
Except for the MATLAB one, these are all irrelevant. The claim is that programming languages and CPUs define % in a specific way, not that mathematicians define it the same way.
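The gap is easy to show, too: those links define mod as the canonical non-negative residue, so to a mathematician -7 mod 3 is 2, while the % that all of the languages and CPUs above define yields -1. A quick C sketch (euclid_mod is just my name for the textbook version):

#include <stdio.h>

// The congruence-style mod from the links above: the result is
// always in [0, |b|), whatever the sign of a. This is NOT what
// the % operator computes -- hence this whole argument.
int euclid_mod(int a, int b)
{
    int r = a % b;              // C's truncated remainder
    if (r < 0)
        r += (b < 0) ? -b : b;  // shift into [0, |b|)
    return r;
}

int main(void)
{
    printf("-7 %% 3            = %d\n", -7 % 3);            // -1
    printf("euclid_mod(-7, 3) = %d\n", euclid_mod(-7, 3));  // 2
    return 0;
}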
Please read this again (you may want to follow along with your finger to make sure you don't skip anything):
> This isn't a proof, this is a definition. This is the definition
> that is used by all programming languages out there and by all
> CPUs. It isn't going to change because someone on the internet
> thinks he has a better definition that provides no clear advantage
> over the current one.
You mentioned you had information supporting that this was not true. That would be very easy to demonstrate: you could, for instance, provide a link to a CPU that does NOT implement the % operation that way. I was able to show you that all major CPUs and many major languages do it that way (see, there's a claim and evidence to support it; that is how arguing works). The best you've been able to present is a DSL (MATLAB) and no CPU.
Bonus points for the Stack Overflow question, which isn't a spec and actually supports my point: languages and CPUs do it that way. Once again, one has to wonder whether you actually understand what you are responding to.
> Of course, I don't expect a Neanderthal like yourself to
> understand that. Have fun, lemming.
> Oh, hey, I'm going to define that you're an idiot! Thanks for
> agreeing with me.
I see I've hurt your feelings. That's OK, you'll survive. Next time, make sure you understand the difference between a definition and a proof; then I won't have to point it out, and your feelings won't get hurt.