On Wednesday, 24 June 2015 at 13:15:52 UTC, Jacob Carlborg wrote:
On 23/06/15 21:22, Laeeth Isharc wrote:

The thing is, there are different use cases. For example, I pull data from Quandl: the metadata is standard and won't change in format often, but the data for a particular series will. If I pull volatility data, it will have different fields from price or economic data, and I don't know the total set of possibilities beforehand. This must be quite a common use case, and indeed I just hit another one recently with a poorly-documented internal corporate database for securities.

If the data can change between calls or is not consistent, my serialization library is not a good fit. But if the data is consistent and only changes over time, say once a month, my serialization library could work, provided you update the data structures when the data changes.

My serialization library can also work with optional fields if custom serialization is used.

Thanks, Jacob.

Some series shouldn't change too often. On the other hand, Quandl alone has 10 million data series drawn from a whole range of different sources, some of them rather unpolished, and it's hard to know in advance.

My needs are not relevant for the library, except that I think people often want to explore new data sets iteratively (over the course of weeks and months). Of course, it doesn't take long to write the struct (or to make something that will write it, given the data and some guidance), but that's one more layer of friction.
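To make that concrete, here is roughly the kind of struct-writing I mean, using std.json (the SeriesMeta shape and its fields are invented for illustration, not Quandl's actual schema):

import std.json;
import std.stdio;
import std.typecons : Nullable;

// Invented shape of one series' metadata, for illustration only.
struct SeriesMeta {
    string code;
    string name;
    Nullable!long rows;  // not every source reports this
}

SeriesMeta parseMeta(string text) {
    auto j = parseJSON(text);
    SeriesMeta m;
    m.code = j["code"].str;
    m.name = j["name"].str;
    if (auto p = "rows" in j)   // optional field: present for some series only
        m.rows = p.integer;
    return m;
}

void main() {
    auto m = parseMeta(`{"code":"CBOE/VIX","name":"VIX volatility","rows":42}`);
    writeln(m.code, ": ", m.rows.isNull ? -1 : m.rows.get);
}

That is fine for one series, but with ten million of them and shifting fields, each new shape means another struct, which is the friction I mean.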

So from the perspective of D succeeding, I would think it would pay off to give people the option of static or dynamic typing as they prefer, within a coherent framework rather than one library here and another there, when other language ecosystems are not so fragmented.

I don't know if you have looked at pandas and the IPython notebook much. But now that one can call D code from an IPython notebook (again, a 'trivial' piece of glue, but ingenious, and removing this small friction makes getting work done much easier), maybe having the option of dynamic types with JSON will have more value.
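PyD is one way to do that glue, for anyone who hasn't seen it. A minimal sketch of a D module exposed to Python (meanOfSquares is an invented example function; assumes the pyd package):

import pyd.pyd;

// Invented example function; PyD converts a Python list of floats
// to double[] on the way in.
double meanOfSquares(double[] xs) {
    double total = 0;
    foreach (x; xs)
        total += x * x;
    return xs.length ? total / xs.length : 0.0;
}

extern(C) void PydMain() {
    def!(meanOfSquares)();  // register the function with the Python module
    module_init();          // create the module object
}

Once built, you import it from Python and call meanOfSquares([1.0, 2.0, 3.0]) like any other Python function.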

See here, as one simple example:
http://nbviewer.ipython.org/gist/wesm/4757075/PandasTour.ipynb

So it would be nice to be able to do something like Adam Ruppe does here:
https://github.com/adamdruppe/arsd/blob/master/jsvar.d

import std.stdio;
import arsd.jsvar;

void main() {
    // Build a dynamic value directly from a JSON literal.
    var j = json!q{
        "hello": {
            "data": [1, 2, "giggle", 4]
        },
        "world": 20
    };

    writeln(j.hello.data[2]);
}
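Here j is an ordinary var, so j.hello.data[2] resolves at runtime and prints "giggle"; no struct declaration needed, which is exactly the exploratory mode I have in mind.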

Obviously this is outside the scope of a serialization library; I'm just thinking about the broader, integrated and coherent library offering we should have.
