On Sat, Sep 27, 2014 at 12:22 PM, mmarco <mma...@unizar.es> wrote:
>>
>> How long ?
>>
>
> Around five seconds on a very fast SSD.
>>
>>
>>
>> What kind of Python/Sage object do you want to store at the end ?
>> Dictionaries of strings (easy to store in an SQL table) or less standard
>> Sage objects? What takes most of the processing time, parsing the file or
>> creating adequate Sage objects from well-stored strings? Is the data in
>> the upstream RDF database organised as a graph, and will you use this
>> structure in your queries (in which case relational SQL may not be
>> appropriate; note that Python has libraries for dealing with RDF instead of
>> parsing it as raw text)? What kind of queries will the database deal
>> with? Are the kinds of queries already stored in the upstream file, or
>> should you precompute new columns for better performance? Will the
>> queries involve testing properties of complex Sage objects, or looking
>> for the existence of strings and comparing integers?
>
>
> The file contains basically lines of the form:
>
> knot name / invariant name /invariant value
>
> Of course the invariant values are stored as strings, but we need to convert
> them to objects like polynomials.
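For illustration, parsing such a line could look like the sketch below. The separator, the knot name, and the sample invariant value are assumptions about the format, not taken from the real file:

```python
# Minimal sketch of parsing one line of the upstream file.
# The "/" separator and the sample data are assumptions, not the real format.
def parse_line(line):
    """Split a 'knot name / invariant name / invariant value' line
    into a 3-tuple of stripped strings."""
    knot, invariant, value = (field.strip() for field in line.split("/"))
    return knot, invariant, value

record = parse_line("3_1 / Alexander polynomial / t^2 - t + 1")
print(record)  # ('3_1', 'Alexander polynomial', 't^2 - t + 1')

# In Sage, the value string could then be coerced into a polynomial ring,
# e.g.  R.<t> = LaurentPolynomialRing(ZZ); R(record[2])  -- not run here.
```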
>
> The main idea I had was to be able to "identify" a knot. That is, given an
> arbitrary knot by the user, compute the corresponding invariants, and then
> query the database for possible candidates (that is, knots with the same
> invariants). That means we would need fast comparisons against the
> objects stored in the database.
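A candidate lookup of that kind can be sketched with a plain SQLite query: match on every computed invariant and keep only knots that match all of them. The table layout and the toy data here are assumptions for illustration, not the real database:

```python
import sqlite3

# Toy in-memory table in the assumed (knot, invariant name, value) shape.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE invariants (knot TEXT, name TEXT, value TEXT)")
con.executemany(
    "INSERT INTO invariants VALUES (?, ?, ?)",
    [
        ("K1", "determinant", "3"),
        ("K1", "signature", "-2"),
        ("K2", "determinant", "3"),
        ("K2", "signature", "0"),
    ],
)

# Invariants computed for the user's knot (as strings, to match storage).
computed = {"determinant": "3", "signature": "-2"}

# A knot is a candidate only if it matches on every computed invariant:
# count the matching rows per knot and require all of them to match.
placeholders = " OR ".join(["(name = ? AND value = ?)"] * len(computed))
params = [x for pair in computed.items() for x in pair]
rows = con.execute(
    f"SELECT knot FROM invariants WHERE {placeholders} "
    "GROUP BY knot HAVING COUNT(*) = ?",
    params + [len(computed)],
).fetchall()
print(rows)  # [('K1',)]
```

Here K2 matches on the determinant but not the signature, so only K1 survives the `HAVING` clause.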

Or if the data types are standard (e.g., integers, floats, etc.), then you
can build an index. This is a one-liner in SQLite, and it makes queries
super-fast.
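Concretely, the one-liner is a `CREATE INDEX` on the columns the lookups filter by (the table shape here is again an assumption):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE invariants (knot TEXT, name TEXT, value TEXT)")
con.execute("INSERT INTO invariants VALUES ('K1', 'determinant', '3')")

# The one-liner: an index on the columns that lookups filter by.
con.execute("CREATE INDEX idx_name_value ON invariants (name, value)")

# Equality queries on (name, value) can now use the index instead of a
# full table scan; EXPLAIN QUERY PLAN will report a SEARCH using it.
rows = con.execute(
    "SELECT knot FROM invariants WHERE name = ? AND value = ?",
    ("determinant", "3"),
).fetchall()
print(rows)  # [('K1',)]
```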

William

>
> Of course it could also be used in the opposite direction: to construct
> knots in Sage just by their identifier in the database.
>
>
>>
>>
>> Also, the answer may depend on whether the upstream database is evolving
>> fast and whether you will ensure long-term maintenance of the package.
>> Depending on this, an option could be to have a command within Sage that
>> fetches, parses/preprocesses (if there is a benefit), and stores the
>> database on the user's demand. This can be compatible with offering a
>> preprocessed package as well, if preprocessing takes more time than
>> fetching (like distributing sources vs distributing binaries).
>
>
> I think the upstream database is essentially stabilized. Maybe at some point
> they could add more knots, but that doesn't seem likely.
>>
>>
>> Ciao,
>> Thierry
>>
> --
> You received this message because you are subscribed to the Google Groups
> "sage-devel" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to sage-devel+unsubscr...@googlegroups.com.
> To post to this group, send email to sage-devel@googlegroups.com.
> Visit this group at http://groups.google.com/group/sage-devel.
> For more options, visit https://groups.google.com/d/optout.



-- 
William Stein
Professor of Mathematics
University of Washington
http://wstein.org
wst...@uw.edu
