But why isn't float32 promoting to float64 on basic arithmetic then? 

As Markus pointed out, once we get SIMD support (in the form of the @simd 
macro or via LLVM autovectorization) there can be a factor of two between 
32-bit and 64-bit integer computations.
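
For illustration, a sketch of the kind of loop where that difference would 
show up once @simd (or autovectorization) kicks in; a vector of Int32 packs 
twice as many elements per SIMD register as Int64 (this is just my own 
sketch, not a measured benchmark):

function simd_sum(v)
    s = zero(eltype(v))
    @simd for i in 1:length(v)
        @inbounds s += v[i]
    end
    return s
end

v32 = ones(Int32, 10^7)
v64 = ones(Int64, 10^7)
@time simd_sum(v32)   # with vectorization, potentially ~2x the throughput
@time simd_sum(v64)   # of the 64-bit version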

P.S.: Hope you don't take this the wrong way. It's just a very minor nitpick 
on one of the most awesome type systems I have ever seen. 


On Thursday, January 16, 2014 06:37:50 UTC+1, Stefan Karpinski wrote:
>
> It's worth discussion, but there are a few significant differences. In 
> general, our policy is not "you're on your own, kid", but rather "we'll do 
> what we can, but not if it's going to cost too much". In other words, we 
> tend towards safety except where safety is unacceptably slow. BigInt 
> arithmetic everywhere? Not gonna happen. Int64 arithmetic everywhere? 
> Doable. Using 64-bit integer ops usually does not cost that much; this is 
> a particularly bad case specifically because a 32-bit mod is *much* 
> faster than a 64-bit mod. With the recent change to integer division, 
> it only takes *one* type annotation beyond giving the storage type of `p` 
> to get near-C speed:
>
> function test2()
>     p = zeros(Int32,20000)
>     k = 0
>     n::Int32 = 2   # the one extra annotation: keeps n (and the hot n % p[i+1]) 32-bit
>     result = 0
>     @time while k < 20000
>         i = 0
>         while i < k && (n % p[i+1]) != 0
>             i = i + 1
>         end
>         if i == k
>             k = k + 1
>             p[k] = n
>             result += n
>         end
>         n = n + 1
>     end
>     println(result)
> end
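>
> (For completeness, running it is just a call from the REPL, e.g.
>
> julia> test2()
>
> which prints the @time output for the loop and then the sum of the first 
> 20,000 primes.)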
>
>
> That's not bad. It's certainly no huge stretch of the imagination to think 
> that some additional optimization could remove the need for even that 
> annotation.
>
>
> On Wed, Jan 15, 2014 at 11:38 PM, John Myles White <[email protected]> wrote:
>
>> +1 for Iain’s point of view.
>>
>>  — John
>>
>> On Jan 15, 2014, at 5:16 PM, Iain Dunning <[email protected]> wrote:
>>
>> > From a philosophical POV alone, I think it's inconsistent that we
>> > a) don't save people from overflows, but
>> > b) silently do Int32 math as Int64 behind the scenes, presumably to save 
>> > people from themselves.
>> >
>> > I think the overflow behaviour surprises some people, but only because 
>> > they've been trained on Python etc. instead of C; the Int32 behaviour, 
>> > though, would surprise pretty much everyone given how Julia normally acts 
>> > (as the manual says, it falls more into the "no automatic conversion" 
>> > family of languages).
>> >
>> > On Wednesday, January 15, 2014 4:28:15 PM UTC-5, Földes László wrote:
>> > Sorry for the wrong info; I was switching between a 32-bit and a 64-bit 
>> > machine (SSH terminal), and I just happened to run the script on the 
>> > 32-bit machine...
>> >
>> > On Wednesday, January 15, 2014 12:37:07 AM UTC+1, Przemyslaw Szufel 
>> wrote:
>> > Foldes,
>> >
>> > I went for your solution and got a time increase from
>> > 2.1 seconds (64-bit integers) to 17.78 seconds (32-bit down-casting).
>> > Seems like casting is not cheap...
>> >
>> > Any other ideas or possibilities?
>> >
>> > All best,
>> > Przemyslaw
>> >
>> > P.S.
>> > Naturally I realize that this is a toy example and that in typical 
>> > production code we would rather use real numbers for computations, not 
>> > ints.
>> > I am asking just out of curiosity ;-)
>> >
>> >
>> > On Wednesday, 15 January 2014 00:25:20 UTC+1, Földes László wrote:
>> > You can force the literals by enclosing them in int32():
>> >
>> >     p = [int32(0) for i=1:20000]
>> >     result = [int32(0) for i=1:20000]
>> >     k = int32(0)
>> >     n = int32(2)
>> >     while k < int32(20000)
>> >         i = int32(0)
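>> >
>> > Roughly, wired into the whole benchmark loop it would look something like 
>> > this (a sketch only; I am guessing at the surrounding code from prime.jl, 
>> > and in later Julia versions the constructor is spelled Int32(...) rather 
>> > than int32(...)):
>> >
>> > function primes32(kmax::Int32)
>> >     p = zeros(Int32, kmax)
>> >     k = Int32(0)
>> >     n = Int32(2)
>> >     while k < kmax
>> >         i = Int32(0)
>> >         # keep the hot modulo in 32-bit arithmetic
>> >         while i < k && n % p[i+1] != Int32(0)
>> >             i += Int32(1)
>> >         end
>> >         if i == k
>> >             k += Int32(1)
>> >             p[k] = n
>> >         end
>> >         n += Int32(1)
>> >     end
>> >     return p
>> > end
>> >
>> > primes32(Int32(20000))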
>> >
>> >
>> >
>> > On Wednesday, January 15, 2014 12:04:23 AM UTC+1, Przemyslaw Szufel 
>> wrote:
>> > Simon,
>> > Thanks!
>> > I changed the Cython code to
>> > def primes_list(int kmax):
>> >     cdef int k, i
>> >     cdef long long n
>> >     cdef long long p[20000]
>> > and now I am getting 2.1 seconds - exactly the same time as Julia and 
>> > Java with longs...
>> >
>> > Since the performance difference between 64-bit longs and 32-bit ints is 
>> > so large - is there any way to rewrite my toy example to force Julia to 
>> > do 32-bit int calculations?
>> >
>> > All best,
>> > Przemyslaw Szufel
>> >
>> >
>> > On Tuesday, 14 January 2014 23:55:12 UTC+1, Simon Kornblith wrote:
>> > In C, long is only guaranteed to be at least 32 bits (IIRC it's 64 bits 
>> > on 64-bit *nix but 32 bits on 64-bit Windows). long long is guaranteed to 
>> > be at least 64 bits (and is 64 bits on all systems I know of).
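>> >
>> > A quick way to check those widths for a given platform from the Julia 
>> > side is via the C-compatible type aliases (just a small sanity-check 
>> > sketch):
>> >
>> > # widths of the platform's C integer types, as seen from Julia
>> > println("Cint:      ", 8 * sizeof(Cint),      " bits")   # C int
>> > println("Clong:     ", 8 * sizeof(Clong),     " bits")   # C long
>> > println("Clonglong: ", 8 * sizeof(Clonglong), " bits")   # C long long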
>> >
>> > Simon
>> >
>> > On Tuesday, January 14, 2014 5:46:04 PM UTC-5, Przemyslaw Szufel wrote:
>> > Simon,
>> > Thanks for the explanation!
>> > In Java, int is 32 bits as well.
>> > I have just replaced ints with longs in Java and found that the Java 
>> > speed is now also very similar to Julia's.
>> >
>> > However I tried in Cython:
>> > def primes_list(int kmax):
>> >     cdef int k, i
>> >     cdef long n
>> >     cdef long p[20000]
>> > ...
>> >
>> > and surprisingly the speed did not change... At first I thought that 
>> > maybe something did not compile or was cached - but I made sure - it's 
>> > not the cache.
>> > The Cython speed remains unchanged regardless of using int or long?
>> > I know that this is now a question about another language... but maybe 
>> > someone can explain?
>> >
>> > All best,
>> > Przemyslaw Szufel
>> >
>> >
>> > On Tuesday, 14 January 2014 23:29:40 UTC+1, Simon Kornblith wrote:
>> > With a 64-bit build, Julia integers are 64-bit unless otherwise 
>> > specified. In C, you use ints, which are 32-bit. Changing them to long 
>> > long makes the C code perform similarly to the Julia code on my system. 
>> > Unfortunately, it's hard to operate on 32-bit integers in Julia, since + 
>> > promotes to 64-bit by default (am I missing something?).
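>> >
>> > The quickest way to see what + does here is to check the result type 
>> > directly (a tiny sketch, written with the Int32(...) constructor spelling 
>> > of later Julia versions; on the 0.2/0.3-era builds in this thread the 
>> > lowercase int32() constructor was used, and the sum came back as Int64, 
>> > whereas current Julia keeps it at Int32):
>> >
>> > a = Int32(1)
>> > b = Int32(2)
>> > println(typeof(a + b))   # Int64 under the old promotion rules, Int32 today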
>> >
>> > Simon
>> >
>> > On Tuesday, January 14, 2014 4:32:16 PM UTC-5, Przemyslaw Szufel wrote:
>> > Dear Julia users,
>> >
>> > I am considering using Julia for computational projects.
>> > As a first step, to get a feeling for the new language, I tried to 
>> > benchmark Julia's speed against other popular languages.
>> > I used example code from the Cython tutorial: 
>> > http://docs.cython.org/src/tutorial/cython_tutorial.html [the code for 
>> > finding the first n prime numbers].
>> >
>> > Rewriting the code in different languages and measuring the times on my 
>> Windows laptop gave me the following results:
>> >
>> > Language | Time in seconds (less=better)
>> >
>> > Python: 65.5
>> > Cython (with MinGW): 0.82
>> > Java : 0.64
>> > Java (with -server option) : 0.64
>> > C (with MinGW): 0.64
>> > Julia (0.2): 2.1
>> > Julia (0.3 nightly build): 2.1
>> >
>> > All the code for my experiments is attached to this post (Cython and 
>> > Python are both run starting from the prim.py file).
>> >
>> > The thing that worries me is that Julia takes much, much longer than 
>> > Cython...
>> > I am a beginner to Julia and would like to kindly ask what I am doing 
>> > wrong with my code.
>> > I start the Julia console and use the command include("prime.jl") to 
>> > execute it.
>> >
>> > This code looks very simple, and I think the compiler should be able to 
>> > optimise it to at least the speed of Cython?
>> > Maybe my code has been written in a non-Julia-style way and the compiler 
>> > has problems with it?
>> >
>> > I will be grateful for any answers or comments.
>> >
>> > Best regards,
>> > Przemyslaw Szufel
>>
>>
>
