Re: HDF5 bindings for D

2014-12-22 Thread Laeeth Isharc via Digitalmars-d-announce
On Monday, 22 December 2014 at 05:04:10 UTC, Rikki Cattermole 
wrote:
You seem to be missing your dub file. Would be rather hard to 
get it onto dub repository without it ;)
Oh and keep the bindings separate from wrappers in terms of 
subpackages.


Thanks - added now.

I'll work on separating out the bindings when I have a bit more time, 
but it should be easy enough.


Swiss Ephemeris / Nelder-Mead simplex

2014-12-22 Thread Laeeth Isharc via Digitalmars-d-announce
Last one for a while, I think.  I wish you all a very peaceful 
Christmas and New Year, and let's hope 2015 brings some more 
positive energy to the world.


Links here:
https://github.com/Laeeth/d_simplex
https://github.com/Laeeth/d_swisseph



1. D bindings/wrappers for the swiss ephemeris

http://www.astro.com/swisseph/swephinfo_e.htm
The SWISS EPHEMERIS is the high precision ephemeris developed by 
Astrodienst, largely based upon the DExxx ephemerides from NASA's 
JPL. The original release in 1997 was based on the DE405/406 
ephemeris. Since release 2.00 in February 2014, it is based on 
the DE431 ephemeris released by JPL in September 2013.


NB - Swiss Ephemeris is not free for commercial use.

2. D port of a simple Nelder-Mead simplex minimisation (original C 
version written by Michael F. Hutt), with constraints.  From 
Wikipedia:


https://en.wikipedia.org/wiki/Nelder-Mead_method
The Nelder–Mead method or downhill simplex method or amoeba 
method is a commonly used nonlinear optimization technique, which 
is a well-defined numerical method for problems for which 
derivatives may not be known. However, the Nelder–Mead technique 
is a heuristic search method that can converge to non-stationary 
points[1] on problems that can be solved by alternative methods.



Links here:
https://github.com/Laeeth/d_simplex
https://github.com/Laeeth/d_swisseph


dfl2 can work for 64 bit, and is based on D 2.067b1

2014-12-22 Thread FrankLike via Digitalmars-d-announce

Now you can use dfl2 to get 64-bit WinForms.

Frank.


dco can work for 64 bit, and is based on D 2.067b1: code.dlang.org

2014-12-22 Thread FrankLike via Digitalmars-d-announce
dco is a build tool that is very easy to use. It can build dfl64.lib, 
dgui.lib, or your other projects, and it can automatically copy 
dfl.lib to dmd2\windows\lib or lib64.
After you have worked with dfl2, use build.bat and you will find it 
very easy to use.


dco:
https://github.com/FrankLIKE/dco/
dfl2:
https://github.com/FrankLIKE/dfl2/

Frank


Re: dfl2 can work for 64 bit, and is based on D 2.067b1

2014-12-22 Thread FrankLike via Digitalmars-d-announce

On Monday, 22 December 2014 at 11:33:14 UTC, FrankLike wrote:

Now you can use dfl2 to get 64-bit WinForms.

Frank.


dfl2:
https://github.com/FrankLIKE/dfl2/



Re: Facebook is using D in production starting today

2014-12-22 Thread FrankLike via Digitalmars-d-announce
On Thursday, 18 December 2014 at 09:18:06 UTC, Rune Christensen 
wrote:
On Monday, 18 November 2013 at 17:23:25 UTC, Andrei 
Alexandrescu wrote:

On 11/18/13 6:03 AM, Gary Willoughby wrote:
On Friday, 11 October 2013 at 00:36:12 UTC, Andrei 
Alexandrescu wrote:
In all likelihood we'll follow up with a blog post 
describing the

process.


Any more news on this Andrei?


Not yet. I'm the bottleneck here - must find the time to work 
on that.


Andrei


Are you still using D in production? Are you using it more than 
before?


Regards,
Rune


D is useful. I now use dfl2 (https://github.com/FrankLIKE/dfl2/) and 
the build tool dco (https://github.com/FrankLIKE/dco/); they work 
very well.

Frank


Re: Facebook is using D in production starting today

2014-12-22 Thread Stefan Koch via Digitalmars-d-announce

On Monday, 22 December 2014 at 12:12:19 UTC, FrankLike wrote:
D is useful. I now use dfl2 (https://github.com/FrankLIKE/dfl2/) and 
the build tool dco (https://github.com/FrankLIKE/dco/); they work 
very well.

Frank


Do you work for Facebook?


Re: dco can work for 64 bit, and is based on D 2.067b1: code.dlang.org

2014-12-22 Thread uri via Digitalmars-d-announce

On Monday, 22 December 2014 at 11:41:16 UTC, FrankLike wrote:
dco is a build tool that is very easy to use. It can build dfl64.lib, 
dgui.lib, or your other projects, and it can automatically copy 
dfl.lib to dmd2\windows\lib or lib64.
After you have worked with dfl2, use build.bat and you will find it 
very easy to use.


dco:
https://github.com/FrankLIKE/dco/
dfl2:
https://github.com/FrankLIKE/dfl2/

Frank


Thanks, I'm in the process of looking at CMake/SCons alternatives 
right at the moment and will have a look at dco.


I'm trying dub at the moment and it's working perfectly fine so 
far as a build tool. The alternatives, such as CMake and SCons, 
are proven technologies with D support that have also worked for 
me in the past.


Can I ask what the existing tools were missing, and why you felt it 
necessary to reinvent your own build tool?


Thanks,
uri







Re: dco can work for 64 bit, and is based on D 2.067b1: code.dlang.org

2014-12-22 Thread Dejan Lekic via Digitalmars-d-announce

On Monday, 22 December 2014 at 12:57:01 UTC, uri wrote:

On Monday, 22 December 2014 at 11:41:16 UTC, FrankLike wrote:
dco is a build tool that is very easy to use. It can build 
dfl64.lib, dgui.lib, or your other projects, and it can automatically 
copy dfl.lib to dmd2\windows\lib or lib64.
After you have worked with dfl2, use build.bat and you will find 
it very easy to use.


dco:
https://github.com/FrankLIKE/dco/
dfl2:
https://github.com/FrankLIKE/dfl2/

Frank


Thanks, I'm in the process of looking at CMake/SCons 
alternatives right at the moment and will have a look at dco.


I'm trying dub at the moment and it's working perfectly fine so 
far as a build tool. The alternatives, such as CMake and SCons, 
are proven technologies with D support that have also worked 
for me in the past.


Can I ask what the existing tools were missing, and why you felt it 
necessary to reinvent your own build tool?


Thanks,
uri


Then try waf as well. :) https://code.google.com/p/waf/


Re: HDF5 bindings for D

2014-12-22 Thread John Colvin via Digitalmars-d-announce

On Monday, 22 December 2014 at 04:51:44 UTC, Laeeth Isharc wrote:

https://github.com/Laeeth/d_hdf5

HDF5 is a very valuable tool for those working with large data 
sets.


From hdfgroup.org:

HDF5 is a unique technology suite that makes possible the 
management of extremely large and complex data collections. The 
HDF5 technology suite includes:


* A versatile data model that can represent very complex data 
objects and a wide variety of metadata.
* A completely portable file format with no limit on the number 
or size of data objects in the collection.
* A software library that runs on a range of computational 
platforms, from laptops to massively parallel systems, and 
implements a high-level API with C, C++, Fortran 90, and Java 
interfaces.
* A rich set of integrated performance features that allow for 
access time and storage space optimizations.
* Tools and applications for managing, manipulating, viewing, 
and analyzing the data in the collection.
* The HDF5 data model, file format, API, library, and tools are 
open and distributed without charge.


From h5py.org:
[HDF5] lets you store huge amounts of numerical data, and 
easily manipulate that data from NumPy. For example, you can 
slice into multi-terabyte datasets stored on disk, as if they 
were real NumPy arrays. Thousands of datasets can be stored in 
a single file, categorized and tagged however you want.


H5py uses straightforward NumPy and Python metaphors, like 
dictionary and NumPy array syntax. For example, you can iterate 
over datasets in a file, or check out the .shape or .dtype 
attributes of datasets. You don't need to know anything special 
about HDF5 to get started.


In addition to the easy-to-use high level interface, h5py rests 
on an object-oriented Cython wrapping of the HDF5 C API. Almost 
anything you can do from C in HDF5, you can do from h5py.


Best of all, the files you create are in a widely-used standard 
binary format, which you can exchange with other people, 
including those who use programs like IDL and MATLAB.


===
As far as I know there has not really been a complete set of 
HDF5 bindings for D yet.


Bindings should have three levels:
1. pure C API declaration
2. 'nice' D wrapper around C API (eg that knows about strings, 
not just char*)

3. idiomatic D interface that uses CTFE/templates

I borrowed Stefan Frijters' work on (1) above to get started.  
I cannot keep track of things when split over too many source 
files, so I put everything in one file - hdf5.d.


I have implemented a basic version of (2).  It includes throwOnError 
rather than forcing C-style status checking, but the exception 
code is not very good/complete (time constraints + lack of 
experience with D exceptions).


(3) will have to come later.
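
To make the difference between (1) and (2) concrete, here is a 
minimal sketch: the C declarations follow the real HDF5 C API 
(types simplified), while the D wrapper names are hypothetical 
rather than the actual d_hdf5 API.

  // Level 1: raw C API declaration (hid_t is an opaque integer
  // handle; signatures simplified from the real hdf5.h).
  extern (C) nothrow @nogc
  {
      alias hid_t = int;
      hid_t H5Fcreate(const(char)* name, uint flags, hid_t fcpl, hid_t fapl);
      int H5Fclose(hid_t file);
  }

  // Level 2: a 'nice' D wrapper that accepts D strings and throws on
  // error (throwOnError style) instead of returning C status codes.
  // createFile is a made-up name, not the actual d_hdf5 API.
  import std.string : toStringz;
  import std.exception : enforce;

  enum uint H5F_ACC_TRUNC = 0x0002; // value from the C headers
  enum hid_t H5P_DEFAULT = 0;

  hid_t createFile(string name, uint flags = H5F_ACC_TRUNC)
  {
      auto id = H5Fcreate(name.toStringz, flags, H5P_DEFAULT, H5P_DEFAULT);
      enforce(id >= 0, "H5Fcreate failed for " ~ name);
      return id;
  }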

It's more or less complete, and the examples I have translated 
so far mostly work.  But it is still a work in progress.  Any 
help/suggestions appreciated.  [I am doing this for myself, so the 
project is not as pretty as I would like in an ideal world].



https://github.com/Laeeth/d_hdf5


Also relevant to some: http://code.dlang.org/packages/netcdf


Re: dco can work for 64 bit, and is based on D 2.067b1: code.dlang.org

2014-12-22 Thread Russel Winder via Digitalmars-d-announce

On Mon, 2014-12-22 at 12:57 +0000, uri via Digitalmars-d-announce wrote:
 […]
 
 Thanks, I'm in the process of looking at CMake/SCons alternatives 
 right at the moment and will have a look at dco.

May I ask why SCons is insufficient for you?

 I'm trying dub at the moment and it's working perfectly fine so far 
 as a build tool. The alternatives, such as CMake and SCons, are 
 proven technologies with D support that have also worked for me in 
 the past.
 
 Can I ask what the existing tools were missing, and why you felt it 
 necessary to reinvent your own build tool?

The makers of Dub chose to invent a new build tool despite Make, CMake 
and SCons. Although it is clear Dub is the current de facto standard 
build tool for pure D codes, there is nothing wrong with alternate 
experiments. I hope we can have an open technical discussion of these 
points; it can only help all the build systems with D support.
-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder



GCCJIT Bindings for D

2014-12-22 Thread Iain Buclaw via Digitalmars-d-announce

Hi,

Apparently I've never announced this here, so here we go.

I have written, and started maintaining D bindings for the GCCJIT 
library, available on github at this location:


https://github.com/ibuclaw/gccjitd


What is GCCJIT?
---
GCCJIT is a new front-end for gcc that aims to provide an 
embeddable shared library with an API for adding compilation to 
existing programs using GCC as the backend.


This shared library can then be dynamically-linked into bytecode 
interpreters and other such programs that want to generate 
machine code on the fly at run-time.


The library is of alpha quality and the API is subject to change. 
 It is however in development for the next GCC release (5.0).



How can I use it?
---
See the following link for a hello world program.

https://github.com/ibuclaw/gccjitd/blob/master/tests/dapi.d

I am currently in the process of Ddoc-ifying the documentation 
that comes with the C API binding and moving that across to the D 
API.  Improvements shall come over the next months - and any 
assistance in making the Ddocs prettier is a welcome contribution.



Regards
Iain.


Re: Swiss Ephemeris / Nelder-Mead simplex

2014-12-22 Thread bachmeier via Digitalmars-d-announce

On Monday, 22 December 2014 at 08:43:56 UTC, Laeeth Isharc wrote:
Last one for a while, I think.  I wish you all a very peaceful 
Christmas and New Year, and let's hope 2015 brings some more 
positive energy to the world.


Links here:
https://github.com/Laeeth/d_simplex
https://github.com/Laeeth/d_swisseph



1. D bindings/wrappers for the swiss ephemeris

http://www.astro.com/swisseph/swephinfo_e.htm
The SWISS EPHEMERIS is the high precision ephemeris developed 
by Astrodienst, largely based upon the DExxx ephemerides from 
NASA's JPL. The original release in 1997 was based on the 
DE405/406 ephemeris. Since release 2.00 in February 2014, it is 
based on the DE431 ephemeris released by JPL in September 2013.


NB - Swiss Ephemeris is not free for commercial use.

2. D port of a simple Nelder-Mead simplex minimisation (original 
C version written by Michael F. Hutt), with constraints.  From 
Wikipedia:


https://en.wikipedia.org/wiki/Nelder-Mead_method
The Nelder–Mead method or downhill simplex method or amoeba 
method is a commonly used nonlinear optimization technique, 
which is a well-defined numerical method for problems for which 
derivatives may not be known. However, the Nelder–Mead 
technique is a heuristic search method that can converge to 
non-stationary points[1] on problems that can be solved by 
alternative methods.



Links here:
https://github.com/Laeeth/d_simplex
https://github.com/Laeeth/d_swisseph


It's been ages since I read the paper, but there is a parallel 
version of Nelder-Mead that is supposed to give very large 
performance improvements, even when used on a single processor:


http://www.cs.ucsb.edu/~kyleklein/publications/neldermead.pdf

It is not difficult to implement. I may look into modifying your 
code to implement it when I get some time.


Re: Swiss Ephemeris / Nelder-Mead simplex

2014-12-22 Thread via Digitalmars-d-announce

On Monday, 22 December 2014 at 20:46:23 UTC, bachmeier wrote:
It's been ages since I read the paper, but there is a parallel 
version of Nelder-Mead that is supposed to give very large 
performance improvements, even when used on a single processor:


http://www.cs.ucsb.edu/~kyleklein/publications/neldermead.pdf

It is not difficult to implement. I may look into modifying 
your code to implement it when I get some time.


It will certainly also be advantageous to pass the functions as 
aliases, so that they can get inlined.
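
A tiny sketch of the difference (the names are illustrative only, 
not the d_simplex API): an alias template parameter lets the 
compiler see and inline the objective, while a delegate parameter 
stays an indirect call.

  // Objective passed as an alias: resolved at compile time, inlinable.
  double evaluate(alias f)(double[] x)
  {
      return f(x);
  }

  // Objective passed as a delegate: indirect call, normally not inlined.
  double evaluateDg(double delegate(double[]) f, double[] x)
  {
      return f(x);
  }

  unittest
  {
      static double sphere(double[] x)
      {
          double s = 0;
          foreach (v; x) s += v * v;
          return s;
      }
      assert(evaluate!sphere([1.0, 2.0]) == 5.0);
      assert(evaluateDg((double[] x) => sphere(x), [1.0, 2.0]) == 5.0);
  }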


Re: Swiss Ephemeris / Nelder-Mead simplex

2014-12-22 Thread Laeeth Isharc via Digitalmars-d-announce

On Monday, 22 December 2014 at 21:39:08 UTC, Marc Schütz wrote:

On Monday, 22 December 2014 at 20:46:23 UTC, bachmeier wrote:
It's been ages since I read the paper, but there is a parallel 
version of Nelder-Mead that is supposed to give very large 
performance improvements, even when used on a single processor:


http://www.cs.ucsb.edu/~kyleklein/publications/neldermead.pdf

It is not difficult to implement. I may look into modifying 
your code to implement it when I get some time.


It will certainly also be advantageous to pass the functions as 
aliases, so that they can get inlined.


Thanks, Marc.  I appreciate the pointer, and it would be great if 
you do have time to look at the code.  I confess that it can't 
really be called my own implementation, as I simply ported it to 
D.  There is some more clever stuff within QuantLib (a C++ 
project), but I quite liked the idea of starting with this one as 
it is simple, and speed is not yet vital at this stage.



Laeeth.



Calypso: Direct and full interfacing to C++

2014-12-22 Thread Elie Morisse via Digitalmars-d-announce

Hi everyone,

I have the pleasure to announce to you all the existence of a 
modified LDC able to interface directly to C++ libraries, wiping 
out the need to write bindings:


 https://github.com/Syniurge/Calypso

It's at a prototype stage, but its C++ support is pretty wide 
already:


 • Global variables
 • Functions
 • Structs
 • Unions (symbol only)
 • Enums
 • Typedefs
 • C++ class creation with the correct calls to ctors 
(destruction is disabled for now)

 • Virtual function calls
 • Static casts between C++ base and derived classes (incl. 
multiple inheritance offsets)
 • Mapping template implicit and explicit specializations already 
in the PCH to DMD ones, no new specialization on the D side yet
 • D classes inheriting from C++ ones, including the correct 
vtable generation for the C++ part of the class


So what is this sorcery? Let's remind ourselves that this isn't 
supposed to be feasible:


Being 100% compatible with C++ means more or less adding a fully 
functional C++ compiler front end to D. Anecdotal evidence 
suggests that writing such is a minimum of a 10 man-year 
project, essentially making a D compiler with such capability 
unimplementable.

http://dlang.org/cpp_interface.html

Well.. it is :D
Calypso introduces the modmap keyword, as in:

  modmap (C++) cppheader.h;

to generate with the help of Clang libraries a virtual tree of 
C++ modules. Then after making Clang generate a PCH for all the 
headers, the PCH is loaded and classes, structs, enums are placed 
inside modules named after them, while global variables and 
functions are in a special module named _. For example:


  import (C++) Namespace.SomeClass;  // imports 
Namespace::SomeClass
  import (C++) Namespace._;  // imports all the global variables 
and functions in Namespace
  import (C++) _ : myCfunc, myGlobalVar;  // importing the global 
namespace = bad idea, but selective imports work


Being a prototype, I didn't really pay attention to code 
conventions or elegance and instead focused on getting things 
working.
And being tied to LDC and Clang (I have no idea how feasible a 
GCC version would be), it's going to stay like this for some time 
until I get feedback from the contributors on how this all should 
really be implemented. For example, Calypso introduces language 
plugins, to minimize the amount of code specific to C++ and to 
make support of foreign languages cleaner and less intrusive, 
although it of course needs numerous hooks here and there in DMD 
and LDC.


Calypso is still WIP, but it's in pretty good shape and already 
works in a lot of test cases (see tests/calypso/), and is almost 
ready to use for C++ libraries at least. Since C libraries are in 
the global namespace, it's not a convenient replacement yet for 
bindings until I implement the Clang module map format. More info 
in this blog post detailing some of the history behind Calypso:


http://syniurge.blogspot.com/2013/08/calypso-to-mars-first-contact.html

So.. Merry Christmas dear D community? :)


My take on the current talks of feature freezing D: the 
strength of D is its sophistication. The core reason why D fails 
to attract more users isn't the frequent compiler bugs or 
regressions, but the huge amount of time needed to get something 
done because neither equivalent nor bindings exist for most big 
and widespread C++ libraries like Qt. All these talks about 
making D a more minimalist language won't solve much and will 
only result in holding back D, which has the potential to become 
a superset of all the good in other system languages, as well as 
bringing its own powerful unique features such as metaprogramming 
done right.
By removing the main reason why D wasn't a practical choice, this 
will hopefully unlock the situation and make D gain momentum as 
well as attract more contributors to the compilers to fix bugs 
and regressions before releases.


Re: dco can work for 64 bit, and is based on D 2.067b1: code.dlang.org

2014-12-22 Thread uri via Digitalmars-d-announce
On Monday, 22 December 2014 at 18:33:42 UTC, Russel Winder via 
Digitalmars-d-announce wrote:


On Mon, 2014-12-22 at 12:57 +0000, uri via 
Digitalmars-d-announce wrote:

[…]

Thanks, I'm in the process of looking at CMake/SCons 
alternatives right at the moment and will have a look at dco.


May I ask why SCons is insufficient for you?


It isn't. We review our build system every 12 months over the Xmas 
quiet period and tidy it all up. Part of the process is trying 
alternatives.


We use Python +SCons to drive our builds and CMake to generate 
native makefiles. We find this approach scales better in terms of 
speed and system load.


It is a pity CMake invented its own noisy scripting language 
though. I also find that with CMake it can be extremely difficult 
to establish context when looking at the code. This is why we're 
slowly migrating to SCons.




I'm trying dub at the moment and it's working perfectly fine 
so far as a build tool. The alternatives, such as CMake and 
SCons, are proven technologies with D support that have also 
worked for me in the past.


Can I ask what the existing tools were missing, and why you felt it 
necessary to reinvent your own build tool?


The makers of Dub chose to invent a new build tool despite Make, 
CMake and SCons. Although it is clear Dub is the current de facto 
standard build tool for pure D codes, there is nothing wrong with 
alternate experiments. I hope we can have an open technical 
discussion of these points; it can only help all the build systems 
with D support.


I really like DUB for quick development, but in its current form 
I don't see it scaling to larger builds. IMO the use of JSON puts 
it on par with the Java build tool Ant. JSON and XML (Ant) are 
data formats, not scripting languages, and in my experience a 
large build system requires logic and flow control. I've had to 
do this before in Ant XML and it isn't pretty, nor is it flexible.
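
For reference, a dub package description is purely declarative - 
roughly like the sketch below (the package name and dependency are 
made up) - which is exactly why expressing build logic and flow 
control in it is awkward:

  {
      "name": "myapp",
      "description": "Hypothetical example application",
      "targetType": "executable",
      "dependencies": {
          "vibe-d": "~>0.7.21"
      }
  }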


I use SCons for personal D projects that I think will be long 
lived and DUB for quick experiments. I was using CMake for 
personal work but that script is too ugly :)


Cheers,
uri








Re: Calypso: Direct and full interfacing to C++

2014-12-22 Thread Rikki Cattermole via Digitalmars-d-announce

On 23/12/2014 12:14 p.m., Elie Morisse wrote:

Hi everyone,

I have the pleasure to announce to you all the existence of a modified
LDC able to interface directly to C++ libraries, wiping out the need to
write bindings:

  https://github.com/Syniurge/Calypso

It's at a prototype stage, but its C++ support is pretty wide already:

  • Global variables
  • Functions
  • Structs
  • Unions (symbol only)
  • Enums
  • Typedefs
  • C++ class creation with the correct calls to ctors (destruction is
disabled for now)
  • Virtual function calls
  • Static casts between C++ base and derived classes (incl. multiple
inheritance offsets)
  • Mapping template implicit and explicit specializations already in
the PCH to DMD ones, no new specialization on the D side yet
  • D classes inheriting from C++ ones, including the correct vtable
generation for the C++ part of the class

So what is this sorcery? Let's remind ourselves that this isn't supposed
to be feasible:


Being 100% compatible with C++ means more or less adding a fully
functional C++ compiler front end to D. Anecdotal evidence suggests
that writing such is a minimum of a 10 man-year project, essentially
making a D compiler with such capability unimplementable.

http://dlang.org/cpp_interface.html

Well.. it is :D
Calypso introduces the modmap keyword, as in:

   modmap (C++) cppheader.h;


That really should be a pragma:
pragma(modmap, C++, cppheader.h);
since pragmas are the way to instruct the compiler to do something.


to generate with the help of Clang libraries a virtual tree of C++
modules. Then after making Clang generate a PCH for all the headers, the
PCH is loaded and classes, structs, enums are placed inside modules
named after them, while global variables and functions are in a special
module named _. For example:

   import (C++) Namespace.SomeClass;  // imports Namespace::SomeClass
   import (C++) Namespace._;  // imports all the global variables and
functions in Namespace
   import (C++) _ : myCfunc, myGlobalVar;  // importing the global
namespace = bad idea, but selective imports work

Being a prototype, I didn't really pay attention to code conventions or
elegance and instead focused on getting things working.
And being tied to LDC and Clang (I have no idea how feasible a GCC
version would be), it's going to stay like this for some time until I
get feedback from the contributors on how this all should really be
implemented. For example, Calypso introduces language plugins, to
minimize the amount of code specific to C++ and to make support of
foreign languages cleaner and less intrusive, although it of course
needs numerous hooks here and there in DMD and LDC.

Calypso is still WIP, but it's in pretty good shape and already works in
a lot of test cases (see tests/calypso/), and is almost ready to use for
C++ libraries at least. Since C libraries are in the global namespace,
it's not a convenient replacement yet for bindings until I implement the
Clang module map format. More info in this blog post detailing some of the
history behind Calypso:

http://syniurge.blogspot.com/2013/08/calypso-to-mars-first-contact.html

So.. Merry Christmas dear D community? :)


My take on the current talks of feature freezing D: the strength of D
is its sophistication. The core reason why D fails to attract more users
isn't the frequent compiler bugs or regressions, but the huge amount of
time needed to get something done because neither equivalent nor
bindings exist for most big and widespread C++ libraries like Qt. All
these talks about making D a more minimalist language won't solve much
and will only result in holding back D, which has the potential to
become a superset of all the good in other system languages, as well as
bringing its own powerful unique features such as metaprogramming done
right.
By removing the main reason why D wasn't a practical choice, this will
hopefully unlock the situation and make D gain momentum as well as
attract more contributors to the compilers to fix bugs and regressions
before releases.


Will you be upstreaming this? Or maintaining this completely yourself?


Re: dco can work for 64 bit, and is based on D 2.067b1: code.dlang.org

2014-12-22 Thread FrankLike via Digitalmars-d-announce

On Monday, 22 December 2014 at 12:57:01 UTC, uri wrote:

On Monday, 22 December 2014 at 11:41:16 UTC, FrankLike wrote:
dco is a build tool that is very easy to use. It can build 
dfl64.lib, dgui.lib, or your other projects, and it can 
automatically copy dfl.lib to dmd2\windows\lib or lib64.
After you have worked with dfl2, use build.bat and you will find 
it very easy to use.


dco:
https://github.com/FrankLIKE/dco/
dfl2:
https://github.com/FrankLIKE/dfl2/

Frank


Thanks, I'm in the process of looking at CMake/SCons 
alternatives right at the moment and will have a look at dco.


I'm trying dub at the moment and it's working perfectly fine so 
far as a build tool. The alternatives, such as CMake and SCons, 
are proven technologies with D support that have also worked 
for me in the past.


Can I ask what the existing tools were missing, and why you felt it 
necessary to reinvent your own build tool?


Thanks,
uri
dco makes building a project easy: it automatically adds itself to 
the bin folder, can automatically add D files, automatically 
ignores the D files in the ignoreFiles folder, and automatically 
adds -L library flags from dco.ini. In the next version you will 
be able to set your often-used libs in dco.ini.


Re: Calypso: Direct and full interfacing to C++

2014-12-22 Thread Elie Morisse via Digitalmars-d-announce
On Tuesday, 23 December 2014 at 00:01:30 UTC, Rikki Cattermole 
wrote:
Will you be upstreaming this? Or maintaining this completely 
yourself?


The ultimate goal is upstream, but first I need to agree with the 
main DMD and LDC contributors about how this should really be 
done. I.e., at the moment the Calypso code coexists with the 
vanilla C++ support, which has a different coding philosophy and 
is more intertwined with the rest of the code.


So I expect that I'll have to maintain it myself for quite some 
time before this happens. And of course I'll make Calypso catch up 
with upstream LDC frequently.


Re: Calypso: Direct and full interfacing to C++

2014-12-22 Thread Dicebot via Digitalmars-d-announce

On Monday, 22 December 2014 at 23:14:44 UTC, Elie Morisse wrote:
Being 100% compatible with C++ means more or less adding a 
fully functional C++ compiler front end to D. Anecdotal 
evidence suggests that writing such is a minimum of a 10 
man-year project, essentially making a D compiler with such 
capability unimplementable.

http://dlang.org/cpp_interface.html

Well.. it is :D


Well, technically speaking you DO include a fully functional C++ 
compiler front end with much more than 10 man-years of time 
invested - it just happens that it is already available and is 
called Clang :)


The project itself is very cool, but I have doubts about the 
possibility of merging this upstream. Doing so would make a full D 
implementation effectively impossible without some C++ compiler 
already available as a library on the same platform - quite a 
restriction!


I think it is better suited as an LDC extension, and I would 
discourage its usage in public open-source projects, which should 
stick to the old way of generating C bindings instead. For more 
in-house projects it looks like an absolute killer and exactly the 
thing the Facebook guys wanted :)


Re: DIP69 - Implement scope for escape proof references

2014-12-22 Thread Dicebot via Digitalmars-d

On Monday, 22 December 2014 at 03:07:53 UTC, Walter Bright wrote:

On 12/21/2014 2:06 AM, Dicebot wrote:
No, it is exactly the other way around. The very point of what 
I am saying is that you DON'T CARE about ownership as long as the 
worst-case scenario is assumed. I have no idea why you identify it 
as conflating with ownership when 

it is explicitly designed to be distinct.


The point of transitive scoping would be if the root owned the 
data reachable through the root.


Quoting myself:

For me scope-ness is a property of the view, not of the object 
itself - this also makes the ownership method of the actual data 
irrelevant. The only difference between GC-owned data and 
stack-allocated data is that the former can optionally have a 
scoped view, while for the latter the compiler must force it as 
the only one available.


It doesn't matter if the root owns the data. We _assume_ that as 
the worst-case scenario, and the allowed actions form a strict 
subset of the allowed actions for any other ownership situation. 
Such `scope` is for stack/GC what `const` is for mutable/immutable 
- a common denominator.


The point of transitive scope is to make it easy to expose complex 
custom data structures without breaking memory safety.


Re: What is the D plan's to become a used language?

2014-12-22 Thread Daniel Murphy via Digitalmars-d
Ola Fosheim Grøstad  wrote in message 
news:aimenbdjdflzgkkte...@forum.dlang.org...


Hardly, you have to be specific and make the number of issues covered in 
the next release small enough to create a feeling of being within reach in 
a short time span. People who don't care about fixing current issues 
should join a working group focusing on long term efforts (such as new 
features, syntax changes etc).


Saying it will work doesn't make it so.

That's good, people should not expect experimental features or unpolished 
implementations to be added to the next release. What goes into the next 
release should be decided on before you start on it.


That's nice and all, but if you can't get developers to work on the features 
you've decided on, then all you end up doing is indefinitely postponing other 
contributions.


I do agree that work should be polished before it is merged, but good luck 
convincing Walter to stop merging work-in-progress features into master. 
I've been on both sides of that argument and neither way is without 
drawbacks, with the current contributor energy we have available. 



Davidson/TJB - HDF5 - Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread Laeeth Isharc via Digitalmars-d

On Saturday, 22 March 2014 at 14:33:02 UTC, TJB wrote:
On Saturday, 22 March 2014 at 13:10:46 UTC, Daniel Davidson 
wrote:
Data storage for high volume would also be nice. A D 
implementation of HDF5, via wrappers or otherwise, would be a 
very useful project. Imagine how much more friendly the API 
could be in D. Python's tables library makes it very simple. 
You have to choose a language to not only process and 
visualize data, but store and access it as well.


Thanks
Dan


Well, I for one, would be hugely interested in such a thing.  A
nice D API to HDF5 would be a dream for my data problems.

Did you use HDF5 in your finance industry days then?  Just
curious.

TJB


Well, for HDF5 - the bindings are here now - pre-alpha but they 
will get there soon enough - and wrappers are coming along also.


Any thoughts/suggestions/help appreciated.  Github here:

https://github.com/Laeeth/d_hdf5


I wonder how much work it would be to port or implement Pandas 
type functionality in a D library.


Re: Invariant for default construction

2014-12-22 Thread Daniel Murphy via Digitalmars-d

Walter Bright  wrote in message news:m78i71$1c2h$1...@digitalmars.com...

It all depends on how invariant is defined. It's defined as an invariant 
on what it owns, not whatever is referenced by the object.


Whether or not it owns the data it references is application specific. 
Where are you saying is the correct place to put a check like the one in my 
example, to ensure that an owned object correctly references its parent? 
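
For concreteness, a minimal sketch of the kind of check I mean (the 
names are made up, not from the thread): the Child merely references 
its Parent, yet the consistency condition spans both objects, and the 
question is whether an invariant is the right home for it.

  class Parent
  {
      Child child;
      this() { child = new Child(this); }
  }

  class Child
  {
      Parent parent;
      this(Parent p)
      {
          parent = p;
          p.child = this; // link back before the invariant first runs
      }

      // Checks state the Child does not own: it only references parent.
      invariant
      {
          assert(parent !is null && parent.child is this);
      }
  }

  unittest
  {
      auto p = new Parent;
      assert(p.child.parent is p);
  }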



Re: Rectangular multidimensional arrays for D

2014-12-22 Thread Laeeth Isharc via Digitalmars-d

On Friday, 11 October 2013 at 22:41:06 UTC, H. S. Teoh wrote:
What's the reason Kenji's pull isn't merged yet? As I see it, it does
not introduce any problematic areas, but streamlines multidimensional
indexing notation in a nice way that fits in well with the rest of
the language. I, for one, would push for it to be merged.

In any case, I've seen your multidimensional array implementation
before, and I think it would be a good thing to have it in Phobos. In
fact, I've written my own as well, and IIRC one or two other people
have done the same. Clearly, the demand is there.

See also the thread about std.linalg; I think before we can even talk
about having linear algebra code in Phobos, we need a solidly-designed
rectangular array API. As I said in that other thread, matrix algebra
really should be built on top of a solid rectangular array API, and
not be yet another separate kind of type that's similar to, but
incompatible with, rectangular arrays. A wrapper type can be used to
make a rectangular array behave in the linear algebra sense (i.e.
matrix product instead of per-element multiplication).



Hi.

I wondered how things were developing with the rectangular arrays 
(not sure who is in charge of reviewing, but I guess it is not HS 
Teoh).  It would be interesting to see this being available for 
D, and I agree with others that it is one of the key foundation 
blocks one would need to see in place before many other useful 
libraries can be built on top.
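
For anyone who hasn't followed the earlier threads, the heart of it 
is multidimensional opIndex; a minimal sketch (purely illustrative, 
not Kenji's pull or any existing library):

  // A minimal row-major 2-D rectangular array with a[i, j] indexing.
  struct Matrix(T)
  {
      T[] data;
      size_t rows, cols;

      this(size_t r, size_t c)
      {
          rows = r;
          cols = c;
          data = new T[](r * c);
      }

      ref T opIndex(size_t i, size_t j)
      {
          assert(i < rows && j < cols);
          return data[i * cols + j];
      }
  }

  unittest
  {
      auto m = Matrix!double(2, 3);
      m[1, 2] = 4.5;
      assert(m[1, 2] == 4.5);
  }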


Let me know if there is anything I can help with (although I cannot 
promise to have time, I will try).



Laeeth.


Re: What is the D plan's to become a used language?

2014-12-22 Thread Bienlein via Digitalmars-d


People have already suggested that you actually try vibe.d at 
least once before repeating the "CSP is necessary for easy async" 
mantra.


I was trying to point out in a previous thread that the value 
of CSP is that concurrent things in the code look like 
sync calls (not async, but sync). The statement above again 
says async and not sync (in the "CSP is necessary for easy async" 
mantra). So I'm not sure the point was understood.


Asynchronous programming is inherently difficult and very hard to 
get right. Programming with channels, where things look like 
synchronous calls, makes concurrent programming immensely easier 
than asynchronous programming. If you have done asynchronous 
programming for some years and then spend only half an hour 
looking at concurrency in Go, you will grasp immediately that this 
is a lot simpler. All cores are used very evenly out of the box 
and are constantly under high load. You have to work very hard for 
a long time to achieve the same in Java/C/C++/C#/whatever, because 
the threading model is conventional. With CSP-style concurrency in 
Go it is a lot easier to write concurrent server-side 
applications, and whatever you write can hold 40,000 network 
connections out of the box. Yes, you can do that with vibe.d as 
well. But for Go you only need to learn a dead simple language and 
you can start writing your server application, because all you 
need for concurrency is in the language.
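
To be fair, D's std.concurrency already gives the closest 
standard-library analogue - not Go channels, but the receiving side 
reads like an ordinary synchronous call even though another thread 
produces the values. A minimal sketch:

  import std.concurrency;
  import std.stdio;

  void producer(Tid owner)
  {
      foreach (i; 0 .. 5)
          owner.send(i * i); // roughly a "channel" send
      owner.send(-1);        // sentinel: no more values
  }

  void main()
  {
      spawn(&producer, thisTid);
      for (;;)
      {
          int v = receiveOnly!int(); // reads like a blocking sync call
          if (v < 0) break;
          writeln(v);
      }
  }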


One idea would be to add a drop dead simple abstraction layer for 
vibe.d to provide the same and sell D as a language for server 
side development like Go. There is a need for a unique selling 
point. Let's say the guys at Docker had chosen D, because it had 
that already. Then they would realize that they also can use D 
for general purpose programming and be happy. But first there has 
to be a unique selling point. The selling point of a better C++ 
has not worked out. You have to accept that and move on. Not 
accepting that time moves on is not an option.


Sorry, but wrong and wrong. Go has a model of concurrency and 
parallelism that works very well and no other language has, so 
Go has technical merit.


The technical merit is in the concurrency model, as already said 
in the statement above. And now is the time of server-side 
software development. When C++ was started, it was the time for a 
better C. That time is over. Things change constantly and there is 
nothing you can do about that. You can accept that things have 
moved on and make use of the new opportunity of server-side 
programming as a new selling point, or continue living in the 
past. Go might be simplistic. So add CSP-style channels to D and 
you can overtake Go in all respects very easily. Besides, Haskell 
also has channel-based inter-process communication. If that is 
not academic/scientific backing then I don't know what is.


Re: What's missing to make D2 feature complete?

2014-12-22 Thread Peter Alexander via Digitalmars-d

On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak wrote:

Just wondering what the general sentiment is.

For me it's these 3 points.

- tuple support (DIP32, maybe without pattern matching)
- working import, protection and visibility rules (DIP22, 313, 
314)

- finishing non-GC memory management


In my mind there are a few categories of outstanding issues.

First, there are cases where the language just does not work as
advertised. Imports are an example of this. Probably scope as
well and maybe shared (although I'm not sure what the situation
with that is).

Second, there are cases where the language works as designed, but
the design makes it difficult to get work done. For example,
@nogc and exceptions, or const with templates (or const
altogether). Order of conditional compilation needs to be defined
(see deadalnix's DIP).
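
To make one of those concrete, the @nogc/exceptions friction fits in
a few lines (just a sketch of the general problem, not a proposal):

  @nogc void fail()
  {
      // rejected by the compiler: allocating the Exception with 'new'
      // is a GC allocation, which a @nogc function is not allowed to do
      throw new Exception("boom");
  }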

And finally there's the things we would really like for D to be
successful. Tuple support and memory management are examples of
those. This category is essentially infinite.

I really think the first two categories need to be solved before
anything is frozen.


Re: What's missing to make D2 feature complete?

2014-12-22 Thread Dejan Lekic via Digitalmars-d

On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak wrote:

Just wondering what the general sentiment is.

For me it's these 3 points.

- tuple support (DIP32, maybe without pattern matching)
- working import, protection and visibility rules (DIP22, 313, 
314)

- finishing non-GC memory management


There is no feature-complete language. What makes mainstream 
languages more likely candidates for future software projects is 
the fact that they are properly maintained by a team of 
professionals the language community trusts.


I can give Java and C++ as perfect examples. (I am doing this 
mostly because these two are what I have used most of the time in 
my professional career.) Neither of them is feature complete, yet 
they are the most likely candidate languages for many future 
software projects. Why? I believe the major reason is that there 
is a well-defined standardization process and, what is more 
important, there are companies behind these languages. Naturally, 
this makes new features come to the language *extremely slowly* 
(we are talking 10+ years here).


Perhaps the best course of action is to extract the stable 
features that D has now, and fork a stable branch that is 
maintained by people who are actually using that stable version 
of D in *their products*. This is crucial because it is in their 
own interest to have this branch as stable as possible.


The problem with D is that it is a pragmatic language, and this 
"problem" is why I love D. The reason I say it is a problem is 
that there are subcommunities and people with their own views on 
how things should be. Examples are numerous: GC vs. no-GC, 
functional vs. OOP, pro- and anti- heavily templated D code. The 
point is: it is hard to satisfy everyone.


Re: What is the D plan's to become a used language?

2014-12-22 Thread via Digitalmars-d

On Monday, 22 December 2014 at 08:22:35 UTC, Daniel Murphy wrote:
Ola Fosheim Grøstad  wrote in message 
news:aimenbdjdflzgkkte...@forum.dlang.org...


Hardly, you have to be specific and make the number of issues 
covered in the next release small enough to create a feeling 
of being within reach in a short time span. People who don't 
care about fixing current issues should join a working group 
focusing on long term efforts (such as new features, syntax 
changes etc).


Saying it will work doesn't make it so.


You need a core team, and the core team needs to be able to 
cooperate on the most important features for the greater good. 
Then you have outside contributors with special interests, perhaps 
even educational ones (like a master's student), who could make 
great long-term contributions if you established working groups 
headed by people who know the topic well.


More importantly: it makes no business sense to invest in an open 
source project that shows clear signs of being mismanaged. Create 
a spec that has business value, manage the project well and 
people with a commercial interest will invest. Why would I 
contribute to the compiler if I see no hope of it ever reaching a 
stable release that is better than the alternatives from a 
commercial perspective?


Re: What's missing to make D2 feature complete?

2014-12-22 Thread bioinfornatics via Digitalmars-d

On Saturday, 20 December 2014 at 20:14:21 UTC, Ola Fosheim
Grøstad wrote:
On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak 
wrote:

Just wondering what the general sentiment is.


I think the main problem is what is there already, which 
prevents more sensible performance features from being added 
and also is at odds with ensuring correctness.


By priority:

1. A well thought out ownership system to replace GC with 
compiler protocols/mechanisms that makes good static analysis 
possible and pointers alias free.  It should be designed before 
scope is added and a GC-free runtime should be available.


2. Redesign features and libraries to better support AVX 
auto-vectorization as well as explicit AVX programming.


3. Streamlined syntax.

4. Fast compiler-generated allocators with pre-initialization 
for class instancing (get rid off emplace). Profiling based.


5. Monotonic integers (get rid of modular arithmetics) with 
range constraints.


6. Constraints/logic based programming for templates

7. Either explict virtual or de-virtualizing class functions 
(whole program optimization).


8. Clean up the function signatures: ref, in, out, inout and 
get rid of call-by-name lazy which has been known to be a bug 
inducing feature since Algol60. There is a reason for why other 
languages avoid it.


9. Local precise GC with explicit collection for catching 
cycles in graph data-structures.


10. An alternative to try-catch exceptions that enforce 
error-checking without a performance penalty. E.g. separate 
error tracking on returns or transaction style exceptions 
(jump to root and free all resources on failure).


+1000

I will add: be consistent within Phobos:
- remove old modules such as std.mmfile and std.stream
- put @safe/@system/@trusted everywhere
- use immutability (const ref, in, immutable) wherever possible
- aim for a smaller project with only working, non-deprecated
modules
- consistent use of ranges throughout Phobos


Re: What's missing to make D2 feature complete?

2014-12-22 Thread Francesco Cattoglio via Digitalmars-d

On Saturday, 20 December 2014 at 20:13:31 UTC, weaselcat wrote:
On Saturday, 20 December 2014 at 17:40:06 UTC, Martin Nowak 
wrote:

Just wondering what the general sentiment is.

For me it's these 3 points.

- tuple support (DIP32, maybe without pattern matching)
- working import, protection and visibility rules (DIP22, 313, 
314)

- finishing non-GC memory management


Unique! and RefCounted! in a usable state.


+1

No RefCounted classes and a non-reentrant GC make it really 
awkward to write libraries that handle non-memory resources in a 
nice way.
My experience with (old versions of) GFM has been horrible at 
times: you have to close() everything yourself; if you forget, 
sooner or later the GC will collect something and trigger a call 
to close(), which in turn triggers a call to the logger, which 
ends up with an InvalidMemoryOperationError.
Not being able to allocate during ~this() can be extremely 
annoying for me.


Re: What's missing to make D2 feature complete?

2014-12-22 Thread via Digitalmars-d

On Monday, 22 December 2014 at 11:03:33 UTC, bioinfornatics wrote:

- use everywhere as possible immutability ( const ref, in,
immutable )


Thanks, I forgot that one. Immutable values by default is indeed 
an important improvement. All by-value parameters to functions 
should be immutable, period.


Re: Rectangular multidimensional arrays for D

2014-12-22 Thread aldanor via Digitalmars-d
The gap in multi-dimensional rectangular array functionality in D 
is surely a huge blocker when trying to use it for data science 
tasks. I wonder what the general consensus on this is?


Re: BNF grammar for D?

2014-12-22 Thread Kingsley via Digitalmars-d

On Sunday, 21 December 2014 at 00:34:06 UTC, Kingsley wrote:
On Friday, 19 December 2014 at 02:53:02 UTC, Rikki Cattermole 
wrote:

On 19/12/2014 10:19 a.m., Kingsley wrote:
On Wednesday, 17 December 2014 at 21:05:05 UTC, Kingsley 
wrote:



Hi Bruno,

Thanks very much. I do have a couple of questions about DDT 
in

relation to my plugin.

Firstly - I'm not too familiar with parsing/lexing, but at the 
moment the Psi structure I have implemented that comes from the 
DDT parser/lexer is not in any kind of hierarchy. All the 
PsiElements are available, but all at the same level. Is this how 
the DDT parser works? Or is it down to my implementation of the 
Parser/Lexer that wraps it to create some hierarchy?

For IntelliJ it's going to be vastly easier to have a hierarchy 
with nested elements in order to get hold of a structure 
representing a class or a function, for example - in order to do 
things like get the start and end lines of a class definition to 
apply code folding, and to use for searching for classes and so on.

Secondly - how active is the development of DDT - does it keep up 
with the D2 releases?

--Kingsley


After doing a bit more research it looks like I have to create the 
psi hierarchy myself - my current psi structure is flat because 
I'm just converting the DeeTokens into PsiElements directly. I've 
still got some experimentation to do. On the plus side, I have 
implemented commenting and code folding, but everything else needs 
a psi hierarchy.


I've done some more investigation and I do need to build the 
parser myself in order to create the various constructs. I've made 
a start, but I haven't gotten very far yet because I don't fully 
understand the correct way to proceed.

I also had a look at using the DeeParser, because it already does 
most of what I want. However, the IntelliJ plugin wants a 
PsiParser which returns an IntelliJ ASTNode in the primary parse 
method. I can't see an easy way to hook this up with the 
DeeParser, because the ParsedResult - although it has a node 
method on it - gives back the wrong type of ASTNode.


Any pointers on how I might get the DeeParser to interface to an 
IntelliJ ASTNode would be appreciated.


Read my codebase again; it'll answer a lot of questions. Your 
parser is different, but what it produces shouldn't be. And yes, 
it supports hierarchies.


Hi

So, after a lot of wrestling with the internals of IntelliJ, I 
finally managed to get a working parser implementation that 
produces a psi hierarchy based on the DeeParser from the DDT code.


The main issue was that IntelliJ only wants you to create a parser 
using their toolset - either with a BNF grammar from which you can 
then generate the parser, or with a hand-written parser. Since I'm 
already using the DDT lexer and there is a perfectly good DDT 
parser as well, I just wanted to re-use the DDT parser.


Hi Bruno - would it be easy to return the list of tokens included 
for each node in the DeeParser?


However, IntelliJ does not provide any way to create a custom 
AST/PSI structure or use an external parser. So I basically had 
to wrap the DeeParser inside the IntelliJ parser and sync them 
up programmatically. It's not the most efficient way in the 
world, but it at least works.


In the long term I will write a BNF grammar for Intellij (using 
their toolkit) but I can see that will take me several months 
so this is a quick way to get the plugin up and running with 
all the power of intellij extras without spending several 
months stuck learning all about the complexities of grammar 
parsing and lexing.


Thanks very much for your help. Once I get a bit more of the 
cool stuff done I will release the plugin.




Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-22 Thread logicchains via Digitalmars-d

On Sunday, 21 December 2014 at 09:48:24 UTC, Dicebot wrote:
On Saturday, 20 December 2014 at 21:47:24 UTC, Walter Bright 
wrote:

I did notice this:

I updated the ldc D compiler earlier today (incidentally, as 
part of upgrading my system with pacman -Syu), and now it 
doesn't compile at all. It was previously compiling, and ran 
at around 90% the speed of C++ on ARM.


Sigh.


I have deployed experimental LDC package exactly to be able to 
detect such issues, otherwise it will never get there. It will 
be either fixed within a week or reverted to old mode.


I installed the new Arch Linux LDC package but it still fails 
with the same error: /usr/lib/libldruntime.so: undefined 
reference to `__mulodi4'


I did get GDC to work on ARM, but for some reason the resulting 
executable is horribly slow, around five times slower than what 
LDC produced. Are there any flags other than -O3 that I should be 
using?


Re: What's missing to make D2 feature complete?

2014-12-22 Thread Kagamin via Digitalmars-d
- delegates are another type system hole; if it's not going to be 
fixed, then it should be documented

- members of Object
- evaluate contracts at the caller side
- streams
- reference type AA


Re: Davidson/TJB - HDF5 - Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread aldanor via Digitalmars-d

On Monday, 22 December 2014 at 08:35:59 UTC, Laeeth Isharc wrote:

On Saturday, 22 March 2014 at 14:33:02 UTC, TJB wrote:
On Saturday, 22 March 2014 at 13:10:46 UTC, Daniel Davidson 
wrote:
Data storage for high volume would also be nice. A D 
implementation of HDF5, via wrappers or otherwise, would be a 
very useful project. Imagine how much more friendly the API 
could be in D. Python's tables library makes it very simple. 
You have to choose a language to not only process and 
visualize data, but store and access it as well.


Thanks
Dan


Well, I for one, would be hugely interested in such a thing.  A
nice D API to HDF5 would be a dream for my data problems.

Did you use HDF5 in your finance industry days then?  Just
curious.

TJB


Well, for HDF5 - the bindings are here now - pre-alpha but they 
will get there soon enough - and wrappers are coming along also.


Any thoughts/suggestions/help appreciated.  Github here:

https://github.com/Laeeth/d_hdf5


I wonder how much work it would be to port or implement Pandas 
type functionality in a D library.


@Laeeth

As a matter of fact, I've been working on HDF5 bindings for D as 
well -- I'm done with the binding/wrapping part so far (with 
automatic throwing of D exceptions whenever errors occur in the C 
library, and other niceties) and am hacking at the higher level 
OOP API -- can publish it soon if anyone's interested :) Maybe we 
can join efforts and make it work (that and standardizing a 
multi-dimensional array library in D).


Re: Davidson/TJB - HDF5 - Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread Laeeth Isharc via Digitalmars-d

On Monday, 22 December 2014 at 11:59:11 UTC, aldanor wrote:

@Laeeth

As a matter of fact, I've been working on HDF5 bindings for D 
as well -- I'm done with the binding/wrapping part so far (with 
automatic throwing of D exceptions whenever errors occur in the 
C library, and other niceties) and am hacking at the higher 
level OOP API -- can publish it soon if anyone's interested :) 
Maybe we can join efforts and make it work (that and 
standardizing a multi-dimensional array library in D).



Oh, well :)  I would certainly be interested to see what you 
have, even if not finished yet.  My focus was sadly getting 
something working soon in a sprint, rather than building 
something excellent later, and I would think your work will be 
cleaner.


In any case, I would very much be interested in exchanging ideas 
or working together - on HDF5, on multi-dim or on other projects 
relating to finance/quant/scientific computing and the like.  So 
maybe you could send me a link when you are ready - either post 
here or my email address is my first name at my first name.com


Thanks.


Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread Laeeth Isharc via Digitalmars-d

On Saturday, 22 March 2014 at 00:14:11 UTC, Daniel Davidson wrote:

On Friday, 21 March 2014 at 21:14:15 UTC, TJB wrote:

Walter,

I see that you will be discussing High Performance Code Using 
D at the 2014 DConf. This will be a very welcomed topic for 
many of us.  I am a Finance Professor.  I currently teach and 
do research in computational finance.  Might I suggest that 
you include some finance (say Monte Carlo options pricing) 
examples?  If you can get the finance industry interested in D 
you might see a massive adoption of the language.  Many are 
desperate for an alternative to C++ in that space.


Just a thought.

Best,

TJB


Maybe a good starting point would be to port some of QuantLib 
and see how the performance compares. In High Frequency Trading 
I think D would be a tough sell, unfortunately.


Thanks
Dan


In case it wasn't obvious from the discussion that followed: 
finance is a broad field with many different kinds of creature 
within, and there are different kinds of problems faced by 
different participants.


High Frequency Trading has peculiar requirements (relating to 
latency, amongst other things) that will not necessarily be 
representative of other areas.  Even within this area there is a 
difference between the needs of a Citadel in its option 
marketmaking activity versus the activity of a pure delta HFT 
player (although they also overlap).


A JP Morgan that needs to be able to price and calculate risk for 
large portfolios of convex instruments in its vanilla and exotic 
options books has different requirements, again.


You would typically use Monte Carlo (or quasi MC) to price more 
complex products for which there is not a good analytical 
approximation.  (Or to deal with the fact that volatility is not 
constant).  So that fits very much with the needs of large banks 
- and perhaps some hedge funds - but I don't think a typical HFT 
guy would be all that interested to know about this.  They are 
different domains.


Quant/CTA funds also have decent computational requirements, but 
these are not necessarily high frequency.  Winton Capital, for 
example, is one of the larger hedge funds in Europe by assets, 
but they have talked publicly about emphasizing longer-term 
horizons because even in liquid markets there simply is not the 
liquidity to turn over the volume they would need to to make an 
impact on their returns.  In this case, whilst execution is 
always important, the research side of things is where the value 
gets created.  And its not unusual to have quant funds where 
every portfolio manager also programs.  (I will not mention 
names).  One might think that rapid iteration here could have 
value.


http://www.efinancialcareers.co.uk/jobs-UK-London-Senior_Data_Scientist_-_Quant_Hedge_Fund.id00654869

Fwiw having spoken to a few people the past few weeks, I am 
struck by how hollowed-out front office has become, both within 
banks and hedge funds.  It's a nice business when things go well, 
but there is tremendous operating leverage, and if one builds up 
fixed costs then losing assets under management and having a poor 
period of performance (which is part of the game, not necessarily 
a sign of failure) can quickly mean that you cannot pay people 
(more than salaries) - which hurts morale and means you risk 
losing your best people.


So people have responded by paring down quant/research support to 
producing roles, even when that makes no sense.  (Programmers are 
not expensive).  In that environment, D may offer attractive 
productivity without sacrificing performance.


cross post hn: (Rust) _ _ without GC

2014-12-22 Thread Vic via Digitalmars-d

https://news.ycombinator.com/item?id=8781522

http://arthurtw.github.io/2014/12/21/rust-anti-sloppy-programming-language.html

c'est possible!

Oh how much free time and stability there would be if D core 
*moved* GC downstream.


Vic
ps: more cows waiting for slaughter: 
http://dlang.org/comparison.html


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-22 Thread Iain Buclaw via Digitalmars-d
On 22 December 2014 at 11:45, logicchains via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On Sunday, 21 December 2014 at 09:48:24 UTC, Dicebot wrote:

 On Saturday, 20 December 2014 at 21:47:24 UTC, Walter Bright wrote:

 I did notice this:

 I updated the ldc D compiler earlier today (incidentally, as part of
 upgrading my system with pacman -Syu), and now it doesn't compile at all. It
 was previously compiling, and ran at around 90% the speed of C++ on ARM.

 Sigh.


 I have deployed experimental LDC package exactly to be able to detect such
 issues, otherwise it will never get there. It will be either fixed within a
 week or reverted to old mode.


 I installed the new Arch Linux LDC package but it still fails with the same
 error: /usr/lib/libldruntime.so: undefined reference to `__mulodi4'

 I did get GDC to work on ARM, but for some reason the resulting executable
 is horribly slow, around five times slower than what LDC produced. Are there
 any flags other than -O3 that I should be using?

Other than -frelease (to turn off most non-release code generation), no.

Can you get a profiler on it to see where it's spending most of its time?

Thanks
Iain.


Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread aldanor via Digitalmars-d

On Monday, 22 December 2014 at 12:24:52 UTC, Laeeth Isharc wrote:


In case it wasn't obvious from the discussion that followed: 
finance is a broad field with many different kinds of creature 
within, and there are different kinds of problems faced by 
different participants.


High Frequency Trading has peculiar requirements (relating to 
latency, amongst other things) that will not necessarily be 
representative of other areas.  Even within this area there is 
a difference between the needs of a Citadel in its option 
marketmaking activity versus the activity of a pure delta HFT 
player (although they also overlap).


A JP Morgan that needs to be able to price and calculate risk 
for large portfolios of convex instruments in its vanilla and 
exotic options books has different requirements, again.


You would typically use Monte Carlo (or quasi MC) to price more 
complex products for which there is not a good analytical 
approximation.  (Or to deal with the fact that volatility is 
not constant).  So that fits very much with the needs of large 
banks - and perhaps some hedge funds - but I don't think a 
typical HFT guy would be all that interested to know about 
this.  They are different domains.


Quant/CTA funds also have decent computational requirements, 
but these are not necessarily high frequency.  Winton Capital, 
for example, is one of the larger hedge funds in Europe by 
assets, but they have talked publicly about emphasizing 
longer-term horizons because even in liquid markets there 
simply is not the liquidity to turn over the volume they would 
need to to make an impact on their returns.  In this case, 
whilst execution is always important, the research side of 
things is where the value gets created.  And its not unusual to 
have quant funds where every portfolio manager also programs.  
(I will not mention names).  One might think that rapid 
iteration here could have value.


http://www.efinancialcareers.co.uk/jobs-UK-London-Senior_Data_Scientist_-_Quant_Hedge_Fund.id00654869

Fwiw having spoken to a few people the past few weeks, I am 
struck by how hollowed-out front office has become, both within 
banks and hedge funds.  It's a nice business when things go 
well, but there is tremendous operating leverage, and if one 
builds up fixed costs then losing assets under management and 
having a poor period of performance (which is part of the game, 
not necessarily a sign of failure) can quickly mean that you 
cannot pay people (more than salaries) - which hurts morale and 
means you risk losing your best people.


So people have responded by paring down quant/research support 
to producing roles, even when that makes no sense.  
(Programmers are not expensive).  In that environment, D may 
offer attractive productivity without sacrificing performance.


I agree with most of these points.

For some reason, people often relate quant finance / high 
frequency trading to one of two things: either ultra-low-latency 
execution or option pricing, which is just wrong. In all 
likelihood, the execution is performed on FPGA co-located grids, 
so that part is out of the question; and options trading is just one 
of so many things hedge funds do. What takes the most time and 
effort is the usual data science (which in many cases boils down 
to data munging), as in, managing huge amounts of raw 
structured/unstructured high-frequency data; extracting the 
valuable information and learning strategies; implementing 
fast/efficient backtesting frameworks, simulators etc. The need 
for efficiency here naturally comes from the fact that a 
typical task in the pipeline requires dozens or hundreds of GB of 
RAM and dozens of hours of runtime on a high-grade box (so no one 
would really care if that GC is going to stop the world for 0.05 
seconds).


In this light, as I see it, D's main advantage is a high 
runtime-efficiency / time-to-deploy ratio (whereas one of the 
main disadvantages for practitioners would be the lack of 
standard tools for working with structured multidimensional data 
+ linalg, something like numpy or pandas).


Cheers.


Re: What is the D plan's to become a used language?

2014-12-22 Thread Joakim via Digitalmars-d
On Monday, 22 December 2014 at 10:30:47 UTC, Ola Fosheim Grøstad 
wrote:
More importantly: it makes no business sense to invest in an 
open source project that shows clear signs of being mismanaged. 
Create a spec that has business value, manage the project well 
and people with a commercial interest will invest. Why would I 
contribute to the compiler if I see no hope of it ever reaching 
a stable release that is better than the alternatives from a 
commercial perspective?


It is not clear that the core team wants commercial investment, 
that's merely my guess of how D might become more polished and 
popular.  AFAICT, Andrei and Walter hope to get to a million 
users mostly through volunteers, which is a pipe dream if you ask 
me, though they don't appear to be against commercial 
involvement.  As you say, without presenting a more organized 
front, maybe such commercial investment is unlikely.


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-22 Thread via Digitalmars-d

On Monday, 22 December 2014 at 12:43:19 UTC, Iain Buclaw via
Digitalmars-d wrote:

On 22 December 2014 at 11:45, logicchains via Digitalmars-d
digitalmars-d@puremagic.com wrote:

On Sunday, 21 December 2014 at 09:48:24 UTC, Dicebot wrote:


On Saturday, 20 December 2014 at 21:47:24 UTC, Walter Bright 
wrote:


I did notice this:

I updated the ldc D compiler earlier today (incidentally, 
as part of
upgrading my system with pacman -Syu), and now it doesn't 
compile at all. It
was previously compiling, and ran at around 90% the speed of 
C++ on ARM.


Sigh.



I have deployed experimental LDC package exactly to be able 
to detect such
issues, otherwise it will never get there. It will be either 
fixed within a

week or reverted to old mode.



I installed the new Arch Linux LDC package but it still fails 
with the same
error: /usr/lib/libldruntime.so: undefined reference to 
`__mulodi4'


I did get GDC to work on ARM, but for some reason the 
resulting executable
is horribly slow, around five times slower than what LDC 
produced. Are there

any flags other than -O3 that I should be using?


Other than -frelease (to turn off most non-release code 
generation), no.


Can you get a profiler on it to see where it's spending most of 
it's time?


Thanks
Iain.


With the GDC build, the GC stops the main thread every single
time getLongestPath is executed. This does not happen with the
LDC build.

See :
http://unix.cat/d/lpathbench/callgrind.out.GDC
http://unix.cat/d/lpathbench/callgrind.out.LDC


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-22 Thread logicchains via Digitalmars-d
On Monday, 22 December 2014 at 12:43:19 UTC, Iain Buclaw via 
Digitalmars-d wrote:

On 22 December 2014 at 11:45, logicchains via Digitalmars-d
digitalmars-d@puremagic.com wrote:

On Sunday, 21 December 2014 at 09:48:24 UTC, Dicebot wrote:


On Saturday, 20 December 2014 at 21:47:24 UTC, Walter Bright 
wrote:


I did notice this:

I updated the ldc D compiler earlier today (incidentally, 
as part of
upgrading my system with pacman -Syu), and now it doesn't 
compile at all. It
was previously compiling, and ran at around 90% the speed of 
C++ on ARM.


Sigh.



I have deployed experimental LDC package exactly to be able 
to detect such
issues, otherwise it will never get there. It will be either 
fixed within a

week or reverted to old mode.



I installed the new Arch Linux LDC package but it still fails 
with the same
error: /usr/lib/libldruntime.so: undefined reference to 
`__mulodi4'


I did get GDC to work on ARM, but for some reason the 
resulting executable
is horribly slow, around five times slower than what LDC 
produced. Are there

any flags other than -O3 that I should be using?


Other than -frelease (to turn off most non-release code 
generation), no.


Can you get a profiler on it to see where it's spending most of 
it's time?


Thanks
Iain.


I ran callgrind on it, 75% of the runtime is spent in 
_D2gc2gc2GC6malloc, and 5% in reduce.


Re: Rewrite rules for ranges

2014-12-22 Thread renoX via Digitalmars-d

On Saturday, 20 December 2014 at 14:16:05 UTC, bearophile wrote:
When you use UFCS chains there are many coding patterns that 
probably are hard to catch for the compiler, but are easy to 
optimize very quickly:

[cut]

.reverse.reverse = id


.reverse.reverse is a coding pattern??

;-)

renoX


Re: Rewrite rules for ranges

2014-12-22 Thread bearophile via Digitalmars-d

renoX:


.reverse.reverse is a coding pattern??


Yes, similar patterns can come out after inlining.
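
A contrived sketch (D spells reversal for ranges as retro, but the 
point is the same):

import std.range : retro;

// two generic helpers written independently of each other
auto newestFirst(R)(R r) { return r.retro; }
auto oldestFirst(R)(R r) { return r.retro; }

// after inlining, oldestFirst(newestFirst(arr)) becomes
// arr.retro.retro, which a rewrite rule could reduce to plain arr

(std.range already special-cases the double retro at the library level, 
if I remember correctly, but the compiler sees nothing of the sort for 
user-defined wrappers.)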

Bye,
bearophile


Re: Do everything in Java…

2014-12-22 Thread via Digitalmars-d
On Thursday, 18 December 2014 at 08:56:29 UTC, ketmar via 
Digitalmars-d wrote:

On Thu, 18 Dec 2014 08:09:08 +
via Digitalmars-d digitalmars-d@puremagic.com wrote:

On Thursday, 18 December 2014 at 01:16:38 UTC, H. S. Teoh via 
Digitalmars-d wrote:
 On Thu, Dec 18, 2014 at 12:37:43AM +, via Digitalmars-d 
 wrote:
 Regular HD I/O is quite slow, but with fast SSD on PCIe and 
 a good

 database-like index locked to memory…

 That's hardly a solution that will work for the general D 
 user, many of

 whom may not have this specific setup.

By the time this would be ready, most programmers will have 
PCIe interfaced SSD. At 100.000 IOPS it is pretty ok.


didn't i say that the whole 64-bit hype sux? ;-) that's about 
memory

as database.


Heh, btw, I just read on osnews.com that HP is going to create a 
new hardware platform The Machine and a new operating system for 
it based on resistor based non-volatile memory called memristors 
that is comparable to dram in speed. Pretty interesting actually:


http://www.technologyreview.com/news/533066/hp-will-release-a-revolutionary-new-operating-system-in-2015/



Allocators stack

2014-12-22 Thread Allocator stack via Digitalmars-d

How about allocators stack? Allocator e.g. one of these
https://github.com/andralex/phobos/blob/allocator/std/allocator.d
-
allocatorStack.push(new GCAllocator);
// Some code that uses memory allocation
auto a = ['x', 'y'];
a ~= ['a', 'b']; // use allocatorStack.top.realloc(...);
allocatorStack.pop();
-
Allocators must be equipped with dynamic polymorphism. For those
cases where that is too expensive, an attribute
@allocator(yourAllocator) applied to a declaration sets the
allocator statically.

-
@allocator(Mallocator.instance)
void f()
{
    // Implicitly use the global (tls?) Mallocator allocator when
    // allocating an object, resizing an array, etc.
}

@allocator(StackAllocator)
void f()
{
    // Implicitly use the allocatorStack.top() allocator when
    // allocating an object, resizing an array, etc.
}
-

There are some issues to solve, e.g. how to avoid mixing memory 
from different allocators.
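
A minimal sketch of what the dynamic part could look like (the names 
and the allocator interface are purely illustrative, not the 
std.allocator API):

interface IAllocator
{
    void[] allocate(size_t n);
    void deallocate(void[] b);
}

struct AllocatorStack
{
    private IAllocator[] stack;

    void push(IAllocator a) { stack ~= a; }
    void pop()              { stack = stack[0 .. $ - 1]; }
    @property IAllocator top() { return stack[$ - 1]; }
}

// module-level variables are thread-local by default in D,
// so this gives one stack per thread
AllocatorStack allocatorStack;

The hard part is the mixing issue mentioned above: a buffer allocated 
while one allocator was on top must still be freed/reallocated through 
that same allocator later, which probably means tagging allocations or 
keeping the owning allocator next to the data.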


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-22 Thread Iain Buclaw via Digitalmars-d
On 22 December 2014 at 13:45, via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On Monday, 22 December 2014 at 12:43:19 UTC, Iain Buclaw via
 Digitalmars-d wrote:

 On 22 December 2014 at 11:45, logicchains via Digitalmars-d
 digitalmars-d@puremagic.com wrote:

 On Sunday, 21 December 2014 at 09:48:24 UTC, Dicebot wrote:


 On Saturday, 20 December 2014 at 21:47:24 UTC, Walter Bright wrote:


 I did notice this:

 I updated the ldc D compiler earlier today (incidentally, as part of
 upgrading my system with pacman -Syu), and now it doesn't compile at
 all. It
 was previously compiling, and ran at around 90% the speed of C++ on
 ARM.

 Sigh.



 I have deployed experimental LDC package exactly to be able to detect
 such
 issues, otherwise it will never get there. It will be either fixed
 within a
 week or reverted to old mode.



 I installed the new Arch Linux LDC package but it still fails with the
 same
 error: /usr/lib/libldruntime.so: undefined reference to `__mulodi4'

 I did get GDC to work on ARM, but for some reason the resulting
 executable
 is horribly slow, around five times slower than what LDC produced. Are
 there
 any flags other than -O3 that I should be using?


 Other than -frelease (to turn off most non-release code generation), no.

 Can you get a profiler on it to see where it's spending most of it's time?

 Thanks
 Iain.


 With the GDC build, the GC stops the main thread every single
 time getLongestPath is executed. This does not happen with the
 LDC build.

 See :
 http://unix.cat/d/lpathbench/callgrind.out.GDC
 http://unix.cat/d/lpathbench/callgrind.out.LDC


Thanks, looks like getLongestPath creates a closure - this causes
memory to be allocated every single time the function is called !!!

I imagine that LDC can boast smarter heuristics here - I recall David
talking about a memory optimisation that moves the heap allocation to
the stack if it can verify that the closure doesn't escape the
function.

We are a little behind the times on this - and so is DMD.
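
As an illustration only (this is not the lpathbench source), the pattern 
looks roughly like this - the lambda captures the locals `graph` and 
`visited`, so dmd/gdc allocate a closure on every call:

import std.algorithm : map, reduce, max;

int longestFrom(size_t node, const size_t[][] graph, bool[] visited)
{
    visited[node] = true;
    auto best = reduce!max(0, graph[node]
        .map!(n => visited[n] ? 0 : 1 + longestFrom(n, graph, visited)));
    visited[node] = false;
    return best;
}

// the same body as a plain loop needs no delegate, hence no allocation
int longestFromLoop(size_t node, const size_t[][] graph, bool[] visited)
{
    visited[node] = true;
    int best = 0;
    foreach (n; graph[node])
        if (!visited[n])
            best = max(best, 1 + longestFromLoop(n, graph, visited));
    visited[node] = false;
    return best;
}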

Regards
Iain.


Re: Do everything in Java…

2014-12-22 Thread deadalnix via Digitalmars-d
On Sunday, 21 December 2014 at 10:00:36 UTC, Russel Winder via 
Digitalmars-d wrote:
Although the vast majority of Java is used in a basically I/O 
bound
context, there is knowledge of and desire to improve Java in a 
CPU-
bound context. The goal here is to always be as fast as C and 
C++ for
all CPU-bound codes. A lot of people are already seeing Java 
being
faster than C and C++, but they have to use primitive types to 
achieve
this. With the shift to internal iteration and new JITS, the 
aim is to

achieve even better but using reference types in the code.



That is quite a claim. It may be true in some contexts - I'd go 
as far as to say that vanilla C/C++ code tends to be slower than 
the vanilla Java version - but ultimately C and C++ offer more 
flexibility, which means that if you are willing to spend the time 
to optimize, Java won't be as fast. Generally, the killer is 
memory layout, which lets you fit more in cache and be faster. 
Java is addicted to indirections.


Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread Daniel Davidson via Digitalmars-d

On Monday, 22 December 2014 at 13:37:55 UTC, aldanor wrote:
For some reason, people often relate quant finance / high 
frequency trading with one of the two: either ultra-low-latency 
execution or option pricing, which is just wrong. In most 
likelihood, the execution is performed on FPGA co-located 
grids, so that part is out of question; and options trading is 
just one of so many things hedge funds do. What takes the most 
time and effort is the usual data science (which in many 
cases boil down to data munging), as in, managing huge amounts 
of raw structured/unstructured high-frequency data; extracting 
the valuable information and learning strategies;



This description feels too broad. Assume that it is the data 
munging that takes the most time and effort. That usually 
involves some transformations like (Data -> Numeric Data -> 
Mathematical Data Processing -> Mathematical 
Solutions/Calibrations -> Math consumers (trading systems, low 
frequency/high frequency/in general)). The quantitative data 
science is about turning data into value using numbers. The 
better you are at first getting to an all-numbers world to start 
analyzing, the better off you will be. But once in the all-numbers 
world isn't it all about math, statistics, mathematical 
optimization, insight, iteration/mining, etc? Isn't that right 
now the world of R, NumPy, Matlab, etc and more recently now 
Julia? I don't see D attempting to tackle that at this point. If 
the bulk of the work for the data sciences piece is the maths, 
which I believe it is, then the attraction of D as a data 
sciences platform is muted. If the bulk of the work is 
preprocessing data to get to an all numbers world, then in that 
space D might shine.



implementing fast/efficient backtesting frameworks, simulators 
etc. The need for efficiency here naturally comes from the 
fact that a typical task in the pipeline requires 
dozens/hundreds GB of RAM and dozens of hours of runtime on a 
high-grade box (so noone would really care if that GC is going 
to stop the world for 0.05 seconds).




What is a backtesting system in the context of Winton Capital? Is 
it primarily a mathematical backtesting system? If so it still 
may be better suited to platforms focusing on maths.


Re: Checksums of files from Downloads

2014-12-22 Thread Andrei Alexandrescu via Digitalmars-d

On 12/10/14 11:20 PM, AndreyZ wrote:

I wanted to join D community, but I realized that I even cannot
install tools from the site securely. (Correct me if I wrong.)

To dlang.org maintainers:

I trust you but I don't trust man-in-the-middle.

So, could you at least provide checksums (e.g. sha1) for all
files which are available on the following pages, please.
http://dlang.org/download.html
http://code.dlang.org/download

Also It would be great if you:
1) install good (not self-signed) SSL certificate to allow
visitors use HTTPS;
2) sign all *.exe files provided in download sections.


Added https://issues.dlang.org/show_bug.cgi?id=13887 -- Andrei


DConf 2015?

2014-12-22 Thread Adam D. Ruppe via Digitalmars-d
By this time last year, dconf 2014 preparations were already 
under way but I haven't heard anything this year. Is another one 
planned?


Re: What is the D plan's to become a used language?

2014-12-22 Thread deadalnix via Digitalmars-d

On Monday, 22 December 2014 at 01:08:00 UTC, ZombineDev wrote:
NO. Just don't use features that you don't understand or like, 
but

don't punish happy D users by demanding a crippled D version.


On Sunday, 21 December 2014 at 22:21:21 UTC, Vic wrote:

...


That is a valid argument if features are orthogonal. When they 
aren't, it is just rhetorical bullshit.


Re: DIP66 v1.1 (Multiple) alias this.

2014-12-22 Thread Joseph Rushton Wakeling via Digitalmars-d

On 21/12/14 11:11, Dicebot via Digitalmars-d wrote:

On Sunday, 21 December 2014 at 08:23:34 UTC, deadalnix wrote:

See also: https://issues.dlang.org/show_bug.cgi?id=10996


I have nothing against this, but this is, indeed, completely out of the scope
(!) of the DIP.


I think it belongs to DIP22


In fact it's already in there:

A public alias to a private symbol makes the symbol
accessible through the alias. The alias itself needs
to be in the same module, so this doesn't impair
protection control.

It's just not implemented for alias this.


Re: DIP66 v1.1 (Multiple) alias this.

2014-12-22 Thread Joseph Rushton Wakeling via Digitalmars-d

On 21/12/14 09:23, deadalnix via Digitalmars-d wrote:

I have nothing against this, but this is, indeed, completely out of the scope
(!) of the DIP.


Fair enough.  I wanted to make sure there was nothing here that could interact 
nastily with protection attributes.




Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread aldanor via Digitalmars-d
On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson 
wrote:

I don't see D attempting to tackle that at this point.
If the bulk of the work for the data sciences piece is the 
maths, which I believe it is, then the attraction of D as a 
data sciences platform is muted. If the bulk of the work is 
preprocessing data to get to an all numbers world, then in that 
space D might shine.
That is one of my points exactly -- the bulk of the work, as 
you put it, is quite often the data processing/preprocessing 
pipeline (all the way from raw data parsing, aggregation, 
validation and storage to data retrieval, feature extraction, and 
then serialization, various persistency models, etc). One thing 
is fitting some model on a pandas dataframe on your laptop in an 
ipython notebook, another thing is running the whole pipeline on 
massive datasets in production on a daily basis, which often 
involves very low-level technical stuff, whether you like it or 
not. Coming up with cool algorithms and doing fancy maths is fun 
and all, but it doesn't take nearly as much effort as integrating 
that same thing into an existing production system (or developing 
one from scratch). (and again, production != execution in this 
context)


On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson 
wrote:
What is a backtesting system in the context of Winton Capital? 
Is it primarily a mathematical backtesting system? If so it 
still may be better suited to platforms focusing on maths.
Disclaimer: I don't work for Winton :) Backtesting in trading is 
usually a very CPU-intensive (and sometimes RAM-intensive) task 
that can be potentially re-run millions of times to fine-tune 
some parameters or explore some sensitivities. Another common 
task is reconciling with how the actual trading system works 
which is a very low-level task as well.


Does anyone want to render D with gsource ?

2014-12-22 Thread Anoymous via Digitalmars-d

A few months ago I saw this:

https://code.google.com/p/gource/

Does anyone want to render D (dmd/phobos) with gource?



Re: Does anyone want to render D with gsource ?

2014-12-22 Thread Andrej Mitrovic via Digitalmars-d
On 12/22/14, Anoymous via Digitalmars-d digitalmars-d@puremagic.com wrote:
 A few monthes ago I've seen this:

 https://code.google.com/p/gource/

Ahh I always wanted to see this visualization for dlang repos!!

Whoever makes this happen, 1000 internets to you.


Re: DIP69 - Implement scope for escape proof references

2014-12-22 Thread Walter Bright via Digitalmars-d

On 12/22/2014 12:04 AM, Dicebot wrote:

Point of transitive scope is to make easy to expose complex custom data
structures without breaking memory safety.


I do understand that. Making it work with the type system is another matter 
entirely - it's far more complex than just adding a qualifier. 'inout' looks 
simple but still has ongoing issues.


And the thing is, wrappers can be used instead of qualifiers, in the same places 
in the same way. It's much simpler.


Re: DConf 2015?

2014-12-22 Thread Walter Bright via Digitalmars-d

On 12/22/2014 9:40 AM, Adam D. Ruppe wrote:

By this time last year, dconf 2014 preparations were already under way but I
haven't heard anything this year. Is another one planned?


Yes. Still working on getting confirmation of the date.


Re: DConf 2015?

2014-12-22 Thread Iain Buclaw via Digitalmars-d
On 22 December 2014 at 20:52, Walter Bright via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On 12/22/2014 9:40 AM, Adam D. Ruppe wrote:

 By this time last year, dconf 2014 preparations were already under way but
 I
 haven't heard anything this year. Is another one planned?


 Yes. Still working on getting confirmation of the date.

You mean to say that it's moving from its usual time slot next year?
(Weekend before spring bank holiday)


Re: Do everything in Java…

2014-12-22 Thread Jacob Carlborg via Digitalmars-d

On 2014-12-21 20:37, Adam D. Ruppe wrote:


1) versions don't match. Stuff like rvm and bundler can mitigate this,


I'm not exactly sure what you mean, but using Rails without bundler 
is just mad.



but they don't help searching the web. Find a technique and try it...
but it requires Rails 2.17 and the app depends in 2.15 or something
stupid like that. I guess you can't blame them for adding new features,
but I do wish the documentation for old versions was always easy to get
to and always easily labeled so it would be obvious. (D could do this too!)


This page [1] contains documentation for Rails, for 4.1.x, 4.0.x, 3.2.x 
and 2.3.x. It's basically the latest version of a given branch. This 
page [2] contains the API reference for Rails, it's not easy to find but 
you can append vX.Y.Z to that URL to get a specific version.



2) SSL/TLS just seems to randomly fail in applications and the tools
like gem and bundle. Even updating the certificates on the system didn't
help most recently, I also had to set an environment variable, which
seems just strange.


I think I have seen that once or twice when upgrading to a new version 
of OS X. But that's usually because your gems and other software is 
still built for the older version. I can't recall seeing this for a new 
project.



3) Setting up the default WEBrick isn't too bad, but making it work on a
production system (like apache passenger) has been giving us trouble.
Got it working for the most part pretty fast, but then adding more stuff
became a painful config nightmare. This might be the application (based
on Rails 2 btw) more than the platform in general, but it still irked me.


I haven't been too involved in that part. I have set up one or two apps 
with passenger and it was pretty easy to just follow the installation. 
Although those weren't production servers.



4) It is abysmally slow, every little thing takes forever. DB changes,
slow. Asset recompiles: slow. Tests: slow. Restarting the server: slow.
The app itself: slow. I'm told Ruby on the JVM is faster though :)


Yeah, that's one major issue. It can be very, very slow. But I also 
think it's too easy to write slow code with something like ActiveRecord. 
It's easy to forget there's actually a database behind it.



My main problems with ruby on rails though are bad decisions and just
underwhelming aspect of actually using it. Everyone sells it as being
the best thing ever and so fast to develop against but I've seen better
like everything. Maybe it was cool in 2005 (if you could actually get it
running then...), but not so much anymore.


I find it difficult to find something better. I think that's mostly 
because of the existing ecosystem with plugins and libraries available. 
I feel the same with D vs Ruby. At some point I just get tired of 
developing my own libraries and just want to get something done.


[1] http://guides.rubyonrails.org/
[2] http://api.rubyonrails.org

--
/Jacob Carlborg


Re: Do everything in Java…

2014-12-22 Thread Paulo Pinto via Digitalmars-d

On Monday, 22 December 2014 at 17:25:48 UTC, deadalnix wrote:
On Sunday, 21 December 2014 at 10:00:36 UTC, Russel Winder via 
Digitalmars-d wrote:
Although the vast majority of Java is used in a basically I/O 
bound
context, there is knowledge of and desire to improve Java in a 
CPU-
bound context. The goal here is to always be as fast as C and 
C++ for
all CPU-bound codes. A lot of people are already seeing Java 
being
faster than C and C++, but they have to use primitive types to 
achieve
this. With the shift to internal iteration and new JITS, the 
aim is to

achieve even better but using reference types in the code.



That is quite a claim. If it is true in some context, and I'd 
go as far as to say that vanilla code in C/C++ tend to be 
slower than the vanilla version in java, ultimately, C and C++ 
offer more flexibility, which mean that if you are willing to 
spend the time to optimize, Java won't be as fast. Generally, 
the killer is memory layout, which allow to fit more in cache, 
and be faster. Java is addicted to indirections.


If one is willing to spend time (aka money) optimizing, there are 
also a few tricks that are possible in Java and used in high 
frequency trading systems.


By Java 10 at the latest, indirections in Java will be a thing of 
the past, assuming all the features being discussed make their way 
into the language.


C and C++ are becoming niche languages in distributed computing 
systems.


--
Paulo




Re: Allocators stack

2014-12-22 Thread Meta via Digitalmars-d
On Monday, 22 December 2014 at 16:51:30 UTC, Allocator stack 
wrote:

How about allocators stack? Allocator e.g. one of these
https://github.com/andralex/phobos/blob/allocator/std/allocator.d
-
allocatorStack.push(new GCAllocator);
//Some code that use memory allocation
auto a = ['x', 'y'];
a ~= ['a', 'b']; // use allocatorStack.top.realloc(...);
allocatorStack.pop();
-
Allocators must be equipped with dynamic polymorphism. For those
cases when it is too expensive attribute
@allocator(yourAllocator) applied to declaration set allocator
statically.

-
@allocator(Mallocator.instance)
void f()
{
// Implicitly use global(tls?) allocator Mallocator when 
allocate an

object or resize an array or etc.
}

@allocator(StackAllocator)
void f()
{
// Implicitly use allocatorStack.top() allocator when allocate 
an

object or resize an array or etc.
}
-

There is some issues to solve. E.g. how to eliminate mix memory 
from different allocators.


There are only a couple of constructs in D that allocate, so it 
may be worthwhile to let the allocator control the primitives, 
i.e.:


auto gc = new GCAllocator(...);
gc.Array!char a = ['x', 'y'];
a ~= ['a', 'b']; // uses gc's realloc(...) under the hood
// I don't remember the proposed API for
// allocators, but you get the idea
gc.destroyAll();

This looks a bit uglier, but it also doesn't require any new 
language constructs. The downside is, how do you manage things 
like closures doing it this way?


Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread Paulo Pinto via Digitalmars-d

On Monday, 22 December 2014 at 19:25:51 UTC, aldanor wrote:
On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson 
wrote:

I don't see D attempting to tackle that at this point.
If the bulk of the work for the data sciences piece is the 
maths, which I believe it is, then the attraction of D as a 
data sciences platform is muted. If the bulk of the work is 
preprocessing data to get to an all numbers world, then in 
that space D might shine.
That is one of my points exactly -- the bulk of the work, as 
you put it, is quite often the data processing/preprocessing 
pipeline (all the way from raw data parsing, aggregation, 
validation and storage to data retrieval, feature extraction, 
and then serialization, various persistency models, etc). One 
thing is fitting some model on a pandas dataframe on your lap 
in an ipython notebook, another thing is running the whole 
pipeline on massive datasets in production on a daily basis, 
which often involves very low-level technical stuff, whether 
you like it or not. Coming up with cool algorithms and doing 
fancy maths is fun and all, but it doesn't take nearly as much 
effort as integrating that same thing into an existing 
production system (or developing one from scratch). (and again, 
production != execution in this context)


On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson 
wrote:
What is a backtesting system in the context of Winton Capital? 
Is it primarily a mathematical backtesting system? If so it 
still may be better suited to platforms focusing on maths.
Disclaimer: I don't work for Winton :) Backtesting in trading 
is usually a very CPU-intensive (and sometimes RAM-intensive) 
task that can be potentially re-run millions of times to 
fine-tune some parameters or explore some sensitivities. 
Another common task is reconciling with how the actual trading 
system works which is a very low-level task as well.



From what I have learned from Skills Matter presentations, for that 
type of use case D has to fight against Scala/F# code running 
in Hadoop/Spark/Azure clusters, backed by big data databases.


--
Paulo


Re: Do everything in Java…

2014-12-22 Thread ketmar via Digitalmars-d
On Mon, 22 Dec 2014 15:36:27 +
via Digitalmars-d digitalmars-d@puremagic.com wrote:

 Heh, btw, I just read on osnews.com that HP is going to create a 
 new hardware platform The Machine and a new operating system for 
 it based on resistor based non-volatile memory called memristors 
 that is comparable to dram in speed. Pretty interesting actually:
 
 http://www.technologyreview.com/news/533066/hp-will-release-a-revolutionary-new-operating-system-in-2015/

yes, i read about that some time ago. it's a fun conception, yet they
will be forced to emulate files anyway. i.e. such a machine can either be
single-user, single-task (a-la Oberon, for example), or maddening to
program. everyone is using files today, and nobody will rewrite
everything for some OS without files.

so it will be a toy, or will try hard to emulate the current arch. either
way nothing new from the programmer's POV.




Re: Allocators stack

2014-12-22 Thread via Digitalmars-d
On Monday, 22 December 2014 at 16:51:30 UTC, Allocator stack 
wrote:

How about allocators stack? Allocator e.g. one of these
https://github.com/andralex/phobos/blob/allocator/std/allocator.d
-
allocatorStack.push(new GCAllocator);
//Some code that use memory allocation
auto a = ['x', 'y'];
a ~= ['a', 'b']; // use allocatorStack.top.realloc(...);
allocatorStack.pop();
-
Allocators must be equipped with dynamic polymorphism. For those
cases when it is too expensive attribute
@allocator(yourAllocator) applied to declaration set allocator
statically.

-
@allocator(Mallocator.instance)
void f()
{
// Implicitly use global(tls?) allocator Mallocator when 
allocate an

object or resize an array or etc.
}

@allocator(StackAllocator)
void f()
{
// Implicitly use allocatorStack.top() allocator when allocate 
an

object or resize an array or etc.
}
-

There is some issues to solve. E.g. how to eliminate mix memory 
from different allocators.


I've put together some semi-realistic code that touches on this 
(ignore the `scope` thingies in there):

http://wiki.dlang.org/User:Schuetzm/RC,_Owned_and_allocators

The goal was to allow returning owned objects from functions that 
don't know what kind of memory management strategy the end user 
wants to use. The caller can then decide to convert them into ref 
counted objects, or release them and let the GC take care of 
them, or just use them in a UFCS chain and let them get destroyed 
at the end of the current statement automatically.


This is achieved by storing a pointer to the allocator (which is 
an interface) next to the object. (As a side note, these 
interfaces can be auto-implemented by templates when needed, 
similar to `std.range.InputRange` et al.)


This design makes it possible to switch allocators at runtime in 
any order, either through a global variable, or by passing a 
parameter to a function. It also avoids template bloat, because 
the functions involved just need to return Owned!MyType, which 
does not depend on the allocator type. The downside is, of 
course, the overhead of the indirect calls. (The call to 
`reallocate` can be left out in this example if `Owned` already 
preallocates some space for the refcount, therefore it's just two 
indirect calls for the lifetime of a typical object: one each for 
creation and destruction.)


As for static allocators, I think it's not possible without 
sacrificing the ability to switch allocators at runtime and 
without templating on the allocator.
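
For reference, a stripped-down sketch of the shape of it (struct 
payloads only; IAllocator here is a stand-in for whatever allocator 
interface Phobos ends up with, and the names are mine, not the wiki 
page's):

interface IAllocator
{
    void[] allocate(size_t n);
    void deallocate(void[] b);
}

struct Owned(T)
{
    private T* payload;
    private IAllocator alloc;   // stored next to the object

    static Owned create(Args...)(IAllocator a, Args args)
    {
        import std.conv : emplace;
        auto mem = a.allocate(T.sizeof);
        Owned result;
        result.payload = emplace(cast(T*) mem.ptr, args);
        result.alloc = a;
        return result;
    }

    @disable this(this);        // unique ownership, no implicit copies

    ~this()
    {
        if (payload !is null)
        {
            destroy(*payload);
            alloc.deallocate((cast(void*) payload)[0 .. T.sizeof]);
            payload = null;
        }
    }

    ref T get() { return *payload; }
    alias get this;
}

The two indirect calls per object lifetime mentioned above are then 
allocate() in create and deallocate() in the destructor.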


Re: What is the D plan's to become a used language?

2014-12-22 Thread ketmar via Digitalmars-d
On Mon, 22 Dec 2014 08:51:15 +
Bienlein via Digitalmars-d digitalmars-d@puremagic.com wrote:

 But for Go you only need to learn a 
 drop simple language and you can start writing your server 
 application, because all you need for concurrency is in the 
 language.
i can assure you that concurrency in the language is not the only
thing one needs to know before starting to write a server. you keep
telling us that everything else in Go is so cheap to learn that only CSP
matters. oh, really? can Go magically do all the header parsing, database
management and other things for me? or are we talking about echo servers?

Go is just hyped, that's all. there is NOTHING hard about creating a simple
HTTP(S) server with templated pages, forms and data processing with
vibe.d. hey, it even ships a sample of such a server out of the box! it's
dead easy. and vibe.d can utilize threads to use all CPU cores (you
will lose some handiness here, but hey: don't use global variables! ;-).

even adding the whole Go to D will not make D overtake Go. Go is hyped.
D is not. that's all.




Re: What's missing to make D2 feature complete?

2014-12-22 Thread ketmar via Digitalmars-d
On Mon, 22 Dec 2014 11:17:39 +
via Digitalmars-d digitalmars-d@puremagic.com wrote:

 On Monday, 22 December 2014 at 11:03:33 UTC, bioinfornatics wrote:
  - use everywhere as possible immutability ( const ref, in,
  immutable )
 
 Thanks, I forgot that one. Immutable values by default is indeed 
 an important improvement. All by-value parameters to functions 
 should be immutable, period.

but why? O_O




Re: cross post hn: (Rust) _ _ without GC

2014-12-22 Thread ketmar via Digitalmars-d
On Mon, 22 Dec 2014 12:28:16 +
Vic via Digitalmars-d digitalmars-d@puremagic.com wrote:

 https://news.ycombinator.com/item?id=8781522
 
 http://arthurtw.github.io/2014/12/21/rust-anti-sloppy-programming-language.html
 
 c'est possible!
 
 Oh how much free time and stability there would be if D core 
 *moved* GC downstream.
 
 Vic
 ps: more cows waiting for slaughter: 
 http://dlang.org/comparison.html

oh, how much usefulness there would be if D core suspended all other
activities and made the GC on par with industrial-strength GCs!

i'm trying to tell you that i LOVE the GC. and i can't share the (growing?)
attitude that the GC is a poor stepson. what we really need is a better GC,
not no GC.




Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread Daniel Davidson via Digitalmars-d

On Monday, 22 December 2014 at 19:25:51 UTC, aldanor wrote:
On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson 
wrote:

I don't see D attempting to tackle that at this point.
If the bulk of the work for the data sciences piece is the 
maths, which I believe it is, then the attraction of D as a 
data sciences platform is muted. If the bulk of the work is 
preprocessing data to get to an all numbers world, then in 
that space D might shine.
That is one of my points exactly -- the bulk of the work, as 
you put it, is quite often the data processing/preprocessing 
pipeline (all the way from raw data parsing, aggregation, 
validation and storage to data retrieval, feature extraction, 
and then serialization, various persistency models, etc).


I don't know about low frequency, which is why I asked about 
Winton. Some of this is true in HFT, but it is tough to break into 
that pipeline that exists in C++. Take live trading vs backtesting: 
for live trading you need all that data processing, before you ever 
get to the math, to be as low-latency as possible, which is why you 
use C++ in the first place. To break into that pipeline with another 
language like D to add value, say for backtesting, is risky, not 
just because of the duplicated development cost but also because of 
the risk of live not matching backtesting.


Maybe you have some ideas in mind where D would help that data 
processing pipeline, so some specifics might help?


Re: cross post hn: (Rust) _ _ without GC

2014-12-22 Thread Kapps via Digitalmars-d
On Monday, 22 December 2014 at 22:02:46 UTC, ketmar via 
Digitalmars-d wrote:


oh, how much usefullness there would be if D core suspended all 
other

activities and maked GC on par with industrial strength GCs!

i'm trying to tell you that i LOVE GC. and i can't share the 
(growing?)
attitute that GC is a poor stepson. what we really need is a 
better GC,

not no GC.


The GC is good for 90% of programs. It's not good for some 
programs (games in particular are a common use case). Though even 
in games it's fine in some situations, such as during loading. D 
needs to support these cases.


Re: cross post hn: (Rust) _ _ without GC

2014-12-22 Thread Paulo Pinto via Digitalmars-d

On Monday, 22 December 2014 at 12:28:18 UTC, Vic wrote:

https://news.ycombinator.com/item?id=8781522

http://arthurtw.github.io/2014/12/21/rust-anti-sloppy-programming-language.html

c'est possible!

Oh how much free time and stability there would be if D core 
*moved* GC downstream.


Vic
ps: more cows waiting for slaughter: 
http://dlang.org/comparison.html


On the other side, 
http://www.inf.ethz.ch/personal/wirth/ProjectOberon/


http://www.ethoberon.ethz.ch/native/WebScreen.html

--
Paulo


Re: cross post hn: (Rust) _ _ without GC

2014-12-22 Thread H. S. Teoh via Digitalmars-d
On Tue, Dec 23, 2014 at 12:02:36AM +0200, ketmar via Digitalmars-d wrote:
[...]
 oh, how much usefullness there would be if D core suspended all other
 activities and maked GC on par with industrial strength GCs!
 
 i'm trying to tell you that i LOVE GC. and i can't share the
 (growing?) attitute that GC is a poor stepson. what we really need is
 a better GC, not no GC.

+1000.


--T


Re: Rectangular multidimensional arrays for D

2014-12-22 Thread H. S. Teoh via Digitalmars-d
On Mon, Dec 22, 2014 at 11:35:17AM +, aldanor via Digitalmars-d wrote:
 A gap in multi-dimensional rectangular arrays functionality in D is
 sure a huge blocker when trying to use it for data science tasks.
 Wonder what's the general consensus on this?

Kenji's PR has been merged in the meantime, so now we have the tools to
build a solid multi-dim array library. Somebody just needs to do the
work, that's all.


T

-- 
Debian GNU/Linux: Cray on your desktop.


Re: cross post hn: (Rust) _ _ without GC

2014-12-22 Thread ketmar via Digitalmars-d
On Mon, 22 Dec 2014 22:09:22 +
Kapps via Digitalmars-d digitalmars-d@puremagic.com wrote:

 On Monday, 22 December 2014 at 22:02:46 UTC, ketmar via 
 Digitalmars-d wrote:
 
  oh, how much usefullness there would be if D core suspended all 
  other
  activities and maked GC on par with industrial strength GCs!
 
  i'm trying to tell you that i LOVE GC. and i can't share the 
  (growing?)
  attitute that GC is a poor stepson. what we really need is a 
  better GC,
  not no GC.
 
 The GC is good for 90% of programs. It's not good for some 
 programs (games in particular are a common use case). Though even 
 in games it's fine in some situations, such as during loading. D 
 needs to support these cases.

there is nothing really hard about supporting that: just don't allocate
in the main game loop. ;-)

on the other side, a good concurrent GC can work alongside the game
logic. the game engine can fine-tune it if necessary.

also, nobody says that a game engine can't use other allocators. as game
engines tend to skip standard libraries anyway, i can't see much sense
in making Phobos good for game engines.
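
for the inner-loop case, @nogc (new in 2.066) even makes the
don't-allocate rule mechanically checkable - a sketch:

// the compiler rejects any GC allocation inside an @nogc function
@nogc void update(float dt)
{
    // auto tmp = new int[16];       // error: can't use 'new' in @nogc code
    // auto s = "frame " ~ label;    // error: '~' may allocate
}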




Re: Rectangular multidimensional arrays for D

2014-12-22 Thread H. S. Teoh via Digitalmars-d
On Mon, Dec 22, 2014 at 08:49:45AM +, Laeeth Isharc via Digitalmars-d wrote:
 On Friday, 11 October 2013 at 22:41:06 UTC, H. S. Teoh wrote:
 What's the reason Kenji's pull isn't merged yet? As I see it, it does
 not introduce any problematic areas, but streamlines multidimensional
 indexing notation in a nice way that fits in well with the rest of
 the language. I, for one, would push for it to be merged.

FYI, Kenji's PR has since been merged. So now the stage is set for
somebody to step up and write a nice multidimensional array
implementation.


 In any case, I've seen your multidimensional array implementation
 before, and I think it would be a good thing to have it in Phobos. In
 fact, I've written my own as well, and IIRC one or two other people
 have done the same. Clearly, the demand is there.
 
 See also the thread about std.linalg; I think before we can even talk
 about having linear algebra code in Phobos, we need a
 solidly-designed rectangular array API. As I said in that other
 thread, matrix algebra really should be built on top of a solid
 rectangular array API, and not be yet another separate kind of type
 that's similar to, but incompatible with rectangular arrays. A
 wrapper type can be used to make a rectangular array behave in the
 linear algebra sense (i.e. matrix product instead of per-element
 multiplication).
 
 
 Hi.
 
 I wondered how things were developing with the rectangular arrays (not
 sure who is in charge of reviewing, but I guess it is not HS Teoh).
 It would be interesting to see this being available for D, and I agree
 with others that it is one of the key foundation blocks one would need
 to see in place before many other useful libraries can be built on
 top.
 
 Let me know if anything I can help with (although cannot promise to
 have time, I will try).
[...]

Well, just like almost everything in D, it just takes somebody to step
up to the plate and do the work. :-) Now that language support is there,
all that's left is for a good, solid design to be made, a common API
that all (multi-dimensional) rectangular arrays will conform to, and a
nice Phobos module to go along with it.

What I envision is a set of traits for working generically with
multidimensional arrays, plus some adaptors for common operations like
subarrays (rectangular windows or views), and a concrete
implementation that serves both as a basic packed rectangular array
container and also an example of how to use the traits/adaptors.

The traits would include things like determining the dimensionality of a
given array, the size(s) along each dimension, and element type. Common
operations include a subarray adaptor that does index remappings,
iteration (in various orderings), etc.. 

The concrete implementation provides a concrete multidimensional
rectangular array type that implements the aforementioned traits. It
supports per-element operators via overloading, but not matrix algebra
(which belongs in a higher-level API).

Along with this, I have found in my own experiments that it is helpful
to include a standard 1-dimensional short array type that serves as a
common type for storing index sets, representing array dimensions, for
use in representing (sub)regions, etc.. This short array type, perhaps
we can call it a Vector, is basically an n-tuple of array indices
(whatever the array index type is -- usually size_t, but in some
applications it might make sense to allow negative array indices). A
rectangular range of array indices can then be represented as a pair of
Vectors (the n-dimensional equivalent of upperleft and lowerright
corners). Index remappings for subarrays can then be implemented via a
simple subtraction and bound on the incoming index (e.g., subarray[i1]
gets remapped to originalArray[i1 - subarray.upperleft], where i1 and
upperleft are Vectors). To allow convenient interoperability with
explicit index lists (e.g., array[i,j,k,l]), Vectors should easily
expand into argument tuples, so that writing array[v1] is equivalent to
writing array[v1[0], v1[1], v2[2], ...].

None of this is groundbreaking new territory; somebody just has to sit
down and sort out the API and write the code for it.
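
To make the index-remapping idea concrete, a very rough sketch (the
names are invented here, not a proposal):

// n-tuple of indices; the building block for lengths, positions, corners
struct Vector(size_t N)
{
    size_t[N] idx;

    Vector opBinary(string op : "-")(Vector rhs) const
    {
        Vector r;
        foreach (i; 0 .. N) r.idx[i] = idx[i] - rhs.idx[i];
        return r;
    }
}

// densely packed rectangular array, row-major layout
struct Dense(T, size_t N)
{
    T[] data;
    size_t[N] lengths;

    ref T opIndex(Vector!N v)
    {
        size_t offset = 0;
        foreach (i; 0 .. N) offset = offset * lengths[i] + v.idx[i];
        return data[offset];
    }
}

// a window into a parent array: just shift the incoming index
struct SubArray(A, size_t N)
{
    A* parent;
    Vector!N upperleft;

    auto ref opIndex()(Vector!N v) { return (*parent)[v - upperleft]; }
}

The traits side would then just check for things like a compile-time
dimensionality, a lengths property and an opIndex taking that many
indices, so that Dense, SubArray and any user-defined container can be
used interchangeably by generic code.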


T

-- 
Why are you blatanly misspelling blatant? -- Branden Robinson


Re: Rectangular multidimensional arrays for D

2014-12-22 Thread aldanor via Digitalmars-d
On Monday, 22 December 2014 at 22:36:16 UTC, H. S. Teoh via 
Digitalmars-d wrote:
FYI, Kenji's merge has since been merged. So now the stage is 
set for

somebody to step up and write a nice multidimensional array
implementation.


One important thing to wish for, in my opinion, is that the 
design of such implementation would allow for (future potential) 
integration with linear algebra libraries like blas/lapack 
without having to be rewritten from scratch (e.g. so it doesn't 
end up like Python's array module which got completely superseded 
by numpy).
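
One design constraint that would help a lot there: if the concrete type 
guarantees contiguous (or at least strided) storage and exposes the raw 
pointer, the leading dimension and the element type, then hooking into 
BLAS later is little more than a declaration away. A sketch against the 
standard C interface (cblas names as in the usual cblas.h; link with 
-lcblas or -lopenblas):

extern (C) void cblas_dgemm(int order, int transA, int transB,
                            int m, int n, int k,
                            double alpha, const(double)* a, int lda,
                            const(double)* b, int ldb,
                            double beta, double* c, int ldc);

enum CblasRowMajor = 101;
enum CblasNoTrans = 111;

// C = A * B for row-major, densely packed matrices
void matmul(const double[] a, const double[] b, double[] c,
            int m, int n, int k)
{
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k,
                1.0, a.ptr, k,
                b.ptr, n,
                0.0, c.ptr, n);
}

An array type that hides its memory behind opaque iteration only would 
close that door.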


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-22 Thread Iain Buclaw via Digitalmars-d
On 22 December 2014 at 17:01, Iain Buclaw ibuc...@gdcproject.org wrote:
 On 22 December 2014 at 13:45, via Digitalmars-d
 digitalmars-d@puremagic.com wrote:
 On Monday, 22 December 2014 at 12:43:19 UTC, Iain Buclaw via
 Digitalmars-d wrote:

 On 22 December 2014 at 11:45, logicchains via Digitalmars-d
 digitalmars-d@puremagic.com wrote:

 On Sunday, 21 December 2014 at 09:48:24 UTC, Dicebot wrote:


 On Saturday, 20 December 2014 at 21:47:24 UTC, Walter Bright wrote:


 I did notice this:

 I updated the ldc D compiler earlier today (incidentally, as part of
 upgrading my system with pacman -Syu), and now it doesn't compile at
 all. It
 was previously compiling, and ran at around 90% the speed of C++ on
 ARM.

 Sigh.



 I have deployed experimental LDC package exactly to be able to detect
 such
 issues, otherwise it will never get there. It will be either fixed
 within a
 week or reverted to old mode.



 I installed the new Arch Linux LDC package but it still fails with the
 same
 error: /usr/lib/libldruntime.so: undefined reference to `__mulodi4'

 I did get GDC to work on ARM, but for some reason the resulting
 executable
 is horribly slow, around five times slower than what LDC produced. Are
 there
 any flags other than -O3 that I should be using?


 Other than -frelease (to turn off most non-release code generation), no.

 Can you get a profiler on it to see where it's spending most of it's time?

 Thanks
 Iain.


 With the GDC build, the GC stops the main thread every single
 time getLongestPath is executed. This does not happen with the
 LDC build.

 See :
 http://unix.cat/d/lpathbench/callgrind.out.GDC
 http://unix.cat/d/lpathbench/callgrind.out.LDC


 Thanks, looks like getLongestPath creates a closure - this causes
 memory to be allocated every single time the function is called !!!

 I imagine that LDC can boast smarter heuristics here - I recall David
 talking about a memory optimisation that moves the heap allocation to
 the stack if it can verify that the closure doesn't escape the
 function.

 We are a little behind the times on this - and so is DMD.


Having another look this evening, I see that the following commit
avoids the closure ever being created.

https://github.com/logicchains/LPATHBench/commit/e82bc6c2a7ce544d43728e36eb53332eb40a5419

So I would expect that the ARM runtime will have improved.

Regards
Iain.


Re: cross post hn: (Rust) _ _ without GC

2014-12-22 Thread Vic via Digitalmars-d
On Monday, 22 December 2014 at 22:02:46 UTC, ketmar via 
Digitalmars-d wrote:

On Mon, 22 Dec 2014 12:28:16 +

. what we really need is a

better GC,
not no GC.


I am not saying no GC; I am saying:
a) something needs to be moved out of core. If not GC, what would 
you move downstream?


b) move, not remove. So you can plug in any gc implementation you 
like - the current GC if you still like it.

Like the linux kernel needs GNU userland to do 'cd'.

hth,
Vic


Re: What's missing to make D2 feature complete?

2014-12-22 Thread via Digitalmars-d
On Monday, 22 December 2014 at 21:52:12 UTC, ketmar via 
Digitalmars-d wrote:
Thanks, I forgot that one. Immutable values by default is 
indeed an important improvement. All by-value parameters to 
functions should be immutable, period.


but why? O_O


Because it is safer in long functions where you might miss a 
modification of the input parameter when editing an existing 
function, and copying from immutable to mutable is free if the 
parameter is left alone after the copy.
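
A small illustration of the failure mode being guarded against (a
made-up function, obviously):

// with a mutable parameter nothing stops an accidental reassignment,
// after which `n` silently no longer means "the argument"
int scaled(int n, int factor)
{
    n = n * factor;
    return n;
}

// declared immutable (or const), the same slip becomes a compile error
int scaledSafe(immutable int n, int factor)
{
    // n = n * factor;   // Error: cannot modify immutable expression n
    return n * factor;
}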




Re: Allocators stack

2014-12-22 Thread Rikki Cattermole via Digitalmars-d

On 23/12/2014 5:51 a.m., Allocator stack wrote:

How about allocators stack? Allocator e.g. one of these
https://github.com/andralex/phobos/blob/allocator/std/allocator.d
-
allocatorStack.push(new GCAllocator);
//Some code that use memory allocation
auto a = ['x', 'y'];
a ~= ['a', 'b']; // use allocatorStack.top.realloc(...);
allocatorStack.pop();
-
Allocators must be equipped with dynamic polymorphism. For those
cases when it is too expensive attribute
@allocator(yourAllocator) applied to declaration set allocator
statically.

-
@allocator(Mallocator.instance)
void f()
{
// Implicitly use global(tls?) allocator Mallocator when allocate an
object or resize an array or etc.
}

@allocator(StackAllocator)
void f()
{
// Implicitly use allocatorStack.top() allocator when allocate an
object or resize an array or etc.
}
-

There is some issues to solve. E.g. how to eliminate mix memory from
different allocators.


I've also come up with a way to do this.
Using the with statement and a couple of extra bits of functionality.

with (new MyAllocator()) {    // myAllocator.opWithIn();
    Foo foo = new Foo(1);     // myAllocator.alloc!Foo(1);
}                             // myAllocator.opWithOut();

class MyAllocator : Allocator {
    private {
        Allocator old;
    }

    void opWithIn() {
        old = RuntimeThread.allocator;
        RuntimeThread.allocator = this;
    }

    void opWithOut() {
        RuntimeThread.allocator = old;
    }
}

I'm sure you get the gist.


Re: DConf 2015?

2014-12-22 Thread Walter Bright via Digitalmars-d

On 12/22/2014 12:59 PM, Iain Buclaw via Digitalmars-d wrote:

On 22 December 2014 at 20:52, Walter Bright via Digitalmars-d
digitalmars-d@puremagic.com wrote:

On 12/22/2014 9:40 AM, Adam D. Ruppe wrote:


By this time last year, dconf 2014 preparations were already under way but
I
haven't heard anything this year. Is another one planned?



Yes. Still working on getting confirmation of the date.


You mean to say that it's moving from it's usual time slot next year?
(Weekend before spring bank holiday)



Looks like it'll be May 27-29.


Re: ddox-generated Phobos documentation is available for review

2014-12-22 Thread Andrei Alexandrescu via Digitalmars-d

On 12/11/14 5:05 PM, Martin Nowak wrote:

On 03/10/2014 03:56 PM, Andrei Alexandrescu wrote:

All: how does one turn off css hyphenation?

Andrei


You're again using that crappy JS hyphenation?
Last time we had a performance problem with it, I wrote this super
efficient D library http://code.dlang.org/packages/hyphenate.
It could easily be hooked up with ddox or if someone has time for that
wire it up with gumbo to postprocess HTML files
(https://github.com/MartinNowak/hyphenate/issues/1).


That was quite a while ago, I forgot the context since :o). I did see 
your hyphenate library, looks pretty rad! Should we add an enhancement 
request to add hyphenation to generated docs? -- Andrei


Re: cross post hn: (Rust) _ _ without GC

2014-12-22 Thread anonymous via Digitalmars-d

On Monday, 22 December 2014 at 23:21:17 UTC, Vic wrote:

I am not saying no GC; I am saying:
a) something needs to be moved out of core.


And many don't agree.


If not GC, what would you move downstream?

b) move, not remove.


Move where? There is no downstream. There are no hordes of
developers waiting for the GC to be decoupled from druntime, so
that they can finally start working on it. The same people would
work on it. There would be no concentration of effort on other
areas.

So you can plug in any gc implementation you like - the current 
GC if you still like it.


I think the GC is supposed to be somewhat pluggable. If it can be
made more pluggable, I don't think anyone would object.

Would the non-core GC still be shipped with releases? Could the
language and Phobos still rely on a GC being there?

If so, I see no point in any re-branding. The personnel wouldn't
change. It would still be perceived as simply D's GC.

If not, that would be a huge breaking change. The language and
Phobos would need a major overhaul to be compatible with an
optional GC. Everything D under the sun would either have the
non-core GC as a dependency, need a major overhaul, or die. The
cost to make it happen would be high. The only benefit I can see
would be a signal to end-users that the GC isn't quite there yet.
I don't think it would be a good move.


Re: Jonathan Blow demo #2

2014-12-22 Thread Andrei Alexandrescu via Digitalmars-d

On 12/12/14 2:47 AM, bearophile wrote:



OK, I think that it will be enough to add a Phobos function like this
(what's the right Phobos module to put it in?) (why isn't this @trusted?)
(why isn't this returning a T*?):



import core.memory : GC;  // needed for GC.malloc

ref T uninitializedAlloc(T)() @system pure nothrow
{
    return *cast(T*)GC.malloc(T.sizeof);
}


https://issues.dlang.org/show_bug.cgi?id=13859

Bye,
bearophile


There should be a minimallyInitializedAlloc, too. -- Andrei
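
For reference, a rough sketch of what that might look like, by analogy
with std.array.minimallyInitializedArray - the name and behaviour here
are my assumptions, not an agreed API: leave plain data uninitialized,
but zero anything the GC might later scan as a pointer.

import core.memory : GC;
import std.traits : hasIndirections;

ref T minimallyInitializedAlloc(T)() @system nothrow
{
    auto p = cast(T*) GC.malloc(T.sizeof);
    static if (hasIndirections!T)
        (cast(ubyte*) p)[0 .. T.sizeof] = 0;  // clear only potential pointers
    return *p;
}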


Re: What's missing to make D2 feature complete?

2014-12-22 Thread ketmar via Digitalmars-d
On Mon, 22 Dec 2014 23:25:11 +
via Digitalmars-d digitalmars-d@puremagic.com wrote:

 On Monday, 22 December 2014 at 21:52:12 UTC, ketmar via 
 Digitalmars-d wrote:
  Thanks, I forgot that one. Immutable values by default is 
  indeed an important improvement. All by-value parameters to 
  functions should be immutable, period.
 
  but why? O_O
 
 Because it is safer in long functions where you might miss a 
 modification of the input parameter when editing an existing 
 function, and copying from immutable to mutable is free if the 
 parameter is left alone after the copy.

i really really hate immutable integer args, for example, and can't
see any sense in doing it. that's why i wondered.




Re: DConf 2015?

2014-12-22 Thread Vic via Digitalmars-d

On Tuesday, 23 December 2014 at 00:25:33 UTC, Walter Bright wrote:

On 12/22/2014 12:59 PM, Iain Buclaw via Digitalmars-d wrote:

On 22 December 2014 at 20:52, Walter Bright via Digitalmars-d
digitalmars-d@puremagic.com wrote:

On 12/22/2014 9:40 AM, Adam D. Ruppe wrote:


By this time last year, dconf 2014 preparations were already 
under way but

snip




Looks like it'll be May 27-29.


By the way, whoever is coordinating this can also get help 
from the SV D meetup and likely Berlin, and they can help each other.


Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread Laeeth Isharc via Digitalmars-d

Hi.

Sorry if this is a bit long, but perhaps it may be interesting to 
one or two.


On Monday, 22 December 2014 at 22:00:36 UTC, Daniel Davidson 
wrote:

On Monday, 22 December 2014 at 19:25:51 UTC, aldanor wrote:
On Monday, 22 December 2014 at 17:28:39 UTC, Daniel Davidson 
wrote:

I don't see D attempting to tackle that at this point.
If the bulk of the work for the data sciences piece is the 
maths, which I believe it is, then the attraction of D as a 
data sciences platform is muted. If the bulk of the work is 
preprocessing data to get to an all numbers world, then in 
that space D might shine.
That is one of my points exactly -- the bulk of the work, as 
you put it, is quite often the data processing/preprocessing 
pipeline (all the way from raw data parsing, aggregation, 
validation and storage to data retrieval, feature extraction, 
and then serialization, various persistency models, etc).


I don't know about low frequency which is why I asked about 
Winton. Some of this is true in HFT but it is tough to break 
that pipeline that exists in C++. Take live trading vs 
backtesting: you require all that data processing before 
getting to the math of it to be as low latency as possible for 
live trading which is why you use C++ in the first place. To 
break into that pipeline with another language like D to add 
value, say for backtesting, is risky not just because of the 
duplication of development cost but also because of the risk of live 
not matching backtesting.


Maybe you have some ideas in mind where D would help that data 
processing pipeline, so some specifics might help?


I have been working as a PM for quantish buy side places since 
98, after starting in a quant trading role on sell side in 96, 
with my first research summer job in 93.  Over time I have become 
less quant and more discretionary, so I am less in touch with the 
techniques the cool kids are using when it doesn't relate to what 
I do.  But more generally there is a kind of silo mentality where 
in a big firm people in different groups don't know much about 
what the guy sitting at the next bank of desks might be doing, 
and even within groups the free flow of ideas might be a lot less 
than you might think.  Against that, firms with a pure research 
orientation may be a touch different, which just goes again to say 
that from the outside it may be difficult to make useful 
generalisations.


A friend of mine who wrote certain parts of the networking stack 
in linux is interviewing with HFT firms now, so I may have a 
better idea about whether D might be of interest.  He has heard 
of D but suggests Java instead.  (As a general option, not for 
HFT).  Even smart people can fail to appreciate beauty ;)


I think it's public that GS use a Python-like language internally, 
JPM do use Python for what you would expect, and so do AHL (one 
of the largest lower freq quant firms).  More generally, in every 
field, but especially in finance, it seems like the data 
processing aspect is going to be key - not just a necessary evil. 
 Yes, once you have it up and running you can tick it off, but it 
is going to be some years before you start to tick off items 
faster than they appear.  Look at what Bridgewater are doing with 
gauging real time economic activity (and look at Google Flu 
prediction if one starts to get too giddy - it worked and then 
didn't).


There is a spectrum of different qualities of data.   What is 
most objective is not necessarily what is most interesting.  Yet 
work on affect, media, and sentiment analysis is in its very 
early stages.  One can do much better than just 'affect bad, buy 
stocks once they stop going down'...  Someone who asked me to 
help with something is close to Twitter, and I have heard the 
number of firms, and the rough breakdown by sector, taking their 
full feed.  It is shockingly small in the financial services field, 
and that's probably in part just that it takes people time to 
figure out something new.


Ravenpack do interesting work from the point of view of a 
practitioner, and I heard a talk by their former technical 
architect, and he really seemed to know his stuff.  Not sure what 
they use as a platform.


I can't see why the choice of language will affect your back 
testing results (except that it is painful to write good 
algorithms in a klunky language and the risk of bugs is higher - but 
that isn't what you meant).


Anyway, back to D and finance.  I think this mental image people 
have of back testing as being the originating driver of research 
may be mistaken.  It's funny but sometimes it seems the moment you 
take a scientist out of his lab and put him on a trading floor he 
wants to know if such and such beats transaction costs.  But what 
you are trying to do is understand certain dynamics, and one 
needs to understand that markets are non linear and have highly 
unstable parameters.  So one must be careful about just jumping 
to a back test.  (And then of course, questions of risk 

Re: Rectangular multidimensional arrays for D

2014-12-22 Thread Laeeth Isharc via Digitalmars-d

On Monday, 22 December 2014 at 22:46:57 UTC, aldanor wrote:
On Monday, 22 December 2014 at 22:36:16 UTC, H. S. Teoh via 
Digitalmars-d wrote:
FYI, Kenji's merge has since been merged. So now the stage is 
set for

somebody to step up and write a nice multidimensional array
implementation.


One important thing to wish for, in my opinion, is that the 
design of such implementation would allow for (future 
potential) integration with linear algebra libraries like 
blas/lapack without having to be rewritten from scratch (e.g. 
so it doesn't end up like Python's array module which got 
completely superseded by numpy).


You mean especially for sparse matrices ?  What is it that needs 
to be borne in mind for regular matrices ?


Re: cross post hn: (Rust) _ _ without GC

2014-12-22 Thread Vic via Digitalmars-d

On Tuesday, 23 December 2014 at 00:49:57 UTC, anonymous wrote:

On Monday, 22 December 2014 at 23:21:17 UTC, Vic wrote:

I am not saying no GC; I am saying:
a) something needs to be moved out of core.


And many don't agree.


snip

Dear Anonymous,

IMO D needs to be more stable; the alternative is more and more 
features, fewer and fewer commercial users, and eventually fewer 
maintainers, since nobody wants to work on a dead project. D 
'marketing' brings people in, who then leave.
So I proposed: do like Go does w/ exceptions (I can send the link 
again if needed).


*What* would be another way to get stability that is not fantasy?

I consider the surface area of features vs available volunteer 
maintainers at hand. If you are not sold, look at CLR and JRE 
features relative to D features. Now look at the size of their 
maintenance teams as relative FTE.


Does that look sustainable?

Hence a prediction: major things will be moved out of core to 3rd 
party plugins to slim down the lang, because now it's more than a 
lang: it is a platform.


Vic



Re: cross post hn: (Rust) _ _ without GC

2014-12-22 Thread Elvis Zhou via Digitalmars-d

On Tuesday, 23 December 2014 at 03:32:12 UTC, Vic wrote:

On Tuesday, 23 December 2014 at 00:49:57 UTC, anonymous wrote:

On Monday, 22 December 2014 at 23:21:17 UTC, Vic wrote:

I am not saying no GC; I am saying:
a) something needs to be moved out of core.


And many don't agree.


snip

Dear Anonymous,

IMO D needs to be more stable; the alternative is more and more 
features, fewer and fewer commercial users, and eventually fewer 
maintainers, since nobody wants to work on a dead project. D 
'marketing' brings people in, who then leave.
So I proposed: do like Go does w/ exceptions (I can send the link 
again if needed).


*What* would be another way to get stability that is not 
fantasy?


I consider the surface area of features vs available volunteer 
maintainers at hand. If you are not sold, look at CLR and JRE 
features relative to D features. Now look at the size of their 
maintenance teams as relative FTE.


Does that look sustainable?

Hence a prediction: major things will be moved out of core to 
3rd party plugins to slim down the lang, because now it's more 
than a lang: it is a platform.


Vic


+1


Re: Rectangular multidimensional arrays for D

2014-12-22 Thread uri via Digitalmars-d

On Tuesday, 23 December 2014 at 03:11:20 UTC, Laeeth Isharc wrote:

On Monday, 22 December 2014 at 22:46:57 UTC, aldanor wrote:
On Monday, 22 December 2014 at 22:36:16 UTC, H. S. Teoh via 
Digitalmars-d wrote:
FYI, Kenji's merge has since been merged. So now the stage is 
set for

somebody to step up and write a nice multidimensional array
implementation.


One important thing to wish for, in my opinion, is that the 
design of such implementation would allow for (future 
potential) integration with linear algebra libraries like 
blas/lapack without having to be rewritten from scratch (e.g. 
so it doesn't end up like Python's array module which got 
completely superseded by numpy).


You mean especially for sparse matrices ?  What is it that 
needs to be borne in mind for regular matrices ?


The layout in lapack/blas is column-major, so it can be handy to use 
a wrapper around arrays to provide FORTRAN-style indexing.


Also you need to pass the .ptr property of the array or &a[0]; D 
arrays are fat and include their length.
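
A minimal sketch of what such a wrapper could look like - my own
illustration, not an existing library - with 1-based FORTRAN-style
indexing over a flat column-major D array and a raw pointer to hand to
a C BLAS/LAPACK routine:

struct ColMajorMatrix(T)
{
    T[] data;          // rows*cols elements, stored column by column
    size_t rows, cols;

    this(size_t rows, size_t cols)
    {
        this.rows = rows;
        this.cols = cols;
        data = new T[rows * cols];
    }

    // FORTRAN-style 1-based indexing: A(i, j)
    ref T opCall(size_t i, size_t j)
    {
        return data[(j - 1) * rows + (i - 1)];
    }

    // what a C interface wants: a bare pointer, not a fat D slice
    @property T* ptr() { return data.ptr; }
}

// usage (a LAPACK/BLAS call would take a.ptr plus the dimensions):
// auto a = ColMajorMatrix!double(3, 3);
// a(1, 1) = 2.0;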


Cheers,
uri


Re: cross post hn: (Rust) _ _ without GC

2014-12-22 Thread ketmar via Digitalmars-d
On Tue, 23 Dec 2014 03:32:11 +
Vic via Digitalmars-d digitalmars-d@puremagic.com wrote:

 Hence a prediction: major things will be moved out of core to 3rd 
 party plugins to slim down the lang, because now it's more than a 
 lang: it is a platform.

D is not a platform. besides, GC is a core feature of D.




Re: Walter's DConf 2014 Talks - Topics in Finance

2014-12-22 Thread Daniel Davidson via Digitalmars-d

On Tuesday, 23 December 2014 at 03:07:10 UTC, Laeeth Isharc wrote:
At one very big US hf I worked with, the tools were initially 
written in Perl (some years back).  They weren't pretty, but 
they worked, and were fast and robust enough.  I has many new 
features I needed for my trading strategy.  But the owner - who 
liked to read about ideas on the internet - came to the 
conclusion that Perl was not institutional quality and that we 
should therefore cease new development and rewrite everything 
in C++.  Two years later a new guy took over the larger group, 
and one way or the other everyone left.  I never got my new 
tools, and that certainly didn't help on the investment front.  
After he left a year after that they scrapped the entire code 
base and bought Murex as nobody could understand what they had.


If we had had D then, its possible the outcome might have been 
different.




Interesting perspective on the FI group's use of Perl. Yes, that 
group was one of the reasons a whole new architecture committee 
was established, to prevent IT tool selections (like Perl and 
especially Java) that the firm did not want used or supported. 
Imagine after that being prohibited from using Python: having to 
beg to use it embedded from C++, and when finally granted 
permission, having to rewrite much of Boost.Python since Boost 
was not a sanctioned tool.
differently than others. I believe D would not have been a help 
in that organization and requesting its use would have been the 
surest way to get a termination package. That said, in other 
organizations D might have been a good choice.


So in any case, hard to generalise, and better to pick a few 
sympathetic people that see in D a possible solution to their 
pain, and use patterns will emerge organically out of that.  I 
am happy to help where I can, and that is somewhat my own 
perspective - maybe D can help me solve my pain of tools not up 
to scratch because good investment tool design requires 
investment and technology skills to be combined in one person 
whereas each of these two is rarely found on its own.  (D 
makes a vast project closer to brave than foolhardy),


It would certainly be nice to have matrices, but I also don't 
think it would be right to say D is dead in the water here because 
it is so far behind.  It also seems like the cost of writing 
such a library is v small vs possible benefit.




I did not say D is dead in the water here. But when it comes to 
math platforms it helps to have lots of people behind the 
solution. For math, Julia seems to have that momentum now. Maybe 
you can foster that in D.





Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-22 Thread Dicebot via Digitalmars-d

On Monday, 22 December 2014 at 11:45:55 UTC, logicchains wrote:
I installed the new Arch Linux LDC package but it still fails 
with the same error: /usr/lib/libldruntime.so: undefined 
reference to `__mulodi4'


I did get GDC to work on ARM, but for some reason the resulting 
executable is horribly slow, around five times slower than what 
LDC produced. Are there any flags other than -O3 that I should 
be using?


Argh, sorry, now this is my personal failure - have applied the 
workaround only to i686 and forgot about arm CARCH (it works on 
x86_64 without workarounds). Pushed update again.


Unfortunately I don't have any ARM device myself to test and this 
process is very sub-optimal :(


Re: ARMv7 vs x86-64: Pathfinding benchmark of C++, D, Go, Nim, Ocaml, and more.

2014-12-22 Thread Iain Buclaw via Digitalmars-d
On 23 Dec 2014 07:15, Dicebot via Digitalmars-d 
digitalmars-d@puremagic.com wrote:

 On Monday, 22 December 2014 at 11:45:55 UTC, logicchains wrote:

 I installed the new Arch Linux LDC package but it still fails with the
same error: /usr/lib/libldruntime.so: undefined reference to `__mulodi4'

 I did get GDC to work on ARM, but for some reason the resulting
executable is horribly slow, around five times slower than what LDC
produced. Are there any flags other than -O3 that I should be using?


 Argh, sorry, now this is my personal failure - have applied the
workaround only to i686 and forgot about arm CARCH (it works on x86_64
without workarounds). Pushed update again.

 Unfortunately I don't have any ARM device myself to test and this process
is very sub-optimal :(

Maybe you could set up a qemu-arm chroot?

