Interesting trick.
I remember having seen something similar for Trees ...
Found it ...
    
http://www.tclcommunityassociation.org/wub/proceedings/Proceedings-2011/StephenHuntley/Huntley_Tcl2011.pdf

Well, he uses bignums.
Maybe his scheme could be degenerated into a linear list, which is what
you are looking at.

<title>A Novel Method for Representing Hierarchies in a Relational
Database Using Bignums and Sqlite
<abstract>I introduce a method of using a rapidly-converging infinite
series to generate integer values which, when stored in relational
database table rows, act as tags allowing each row to be interpreted
and queried as a node in a hierarchy. To overcome integer precision
limitations, I use Tcl 8.5's Bignum feature and tcllib's
math::bigfloat package. I use Sqlite's ability to store arbitrary
binary data in its BLOB data type to manage overflow precision digits.
The resulting code provides a fast and efficient way to store and
query tree-structure data of theoretically unlimited size.
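The core idea carries over to a flat list directly. A minimal sketch of the arbitrary-precision variant, using Python's Fraction instead of Tcl bignums/bigfloats (names and structure here are mine, not from the paper):

```python
from fractions import Fraction

def midpoint(a, b):
    # Exact rational midpoint; numerator and denominator grow as
    # needed (arbitrary precision), so precision is never exhausted.
    return (a + b) / 2

# Start with two rows keyed 1 and 2, then insert between them.
rows = [(Fraction(1), "First Row"), (Fraction(2), "Last Row")]
key = midpoint(rows[0][0], rows[1][0])   # Fraction(3, 2)
rows.append((key, "New Second Row"))
rows.sort(key=lambda r: r[0])
```

Unlike 64-bit floats, these keys can be halved indefinitely; the cost is that the key representation grows by about one bit per insertion into the same gap.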



On Wed, Sep 24, 2014 at 9:59 AM, Alessandro Marzocchi
<alessandro.marzoc...@gmail.com> wrote:
> Hello,
>     Why don't you try the same approach with 64 bit integers? You could
> insert consecutive items at 0x1000000000 intervals, in this way you would
> be allowed about 4G items to me inserted in any order you like. You could
> also change the interval to change the maximum consecutive items inserted
> vs out of order item inserted tradeoff, or easily check for duplicate order
> and in this case do a costly rearrangement of sort order field (the later
> could be done also with floating points)
> On 24 Sep 2014 at 18:49, RSmith <rsm...@rsweb.co.za> wrote:
>
>> I'm trying to find what the limit is for dividing in terms of accuracy.
>>
>> Basically I have one program that inserts values into a table and
>> determines sort order using a standard trick: a REAL column named
>> "SortOrder" gets the value Highest_previous_value+1 if an insert
>> happens with something that needs to sort at the end of the table.
>>
>> For any other position, the SortOrder gets assigned the value:
>> ((Prev.Sortorder + Next.Sortorder) / 2)
>>
>> So to be clear (in case anyone is not familiar with this method), let's
>> start with a small table with 1 item added like this:
>>
>> ID | SortOrder | Data
>> 1  |       1     | 'First Row'
>>
>> Adding new rows to the end uses the previous highest SortOrder+1 each
>> time, so that the result is:
>>
>> ID | SortOrder | Data
>> 1  |       1     | 'First Row'
>> 2  |       2     | 'Eventual Fourth Row'
>> 3  |       3     | 'Last Row'
>>
>> Adding a new row that should Sort in between IDs 1 and 2 above will take
>> those SortOrders and find a new Order value by dividing the total for IDs 1
>> and 2 (=3) by 2 (=1.5):
>>
>> ID | SortOrder | Data
>> 1  |       1     | 'First Row'
>> 2  |       2     | 'Eventual Fourth Row'
>> 3  |       3     | 'Last Row'
>> 4  |     1.5   | 'New Second Row'
>>
>> Adding another row that should Sort in between IDs 2 and 4 will again
>> total and divide by 2 (=(2+1.5)/2):
>>
>> ID | SortOrder | Data
>> 1  |       1     | 'First Row'
>> 2  |       2     | 'Eventual Fourth Row'
>> 3  |       3     | 'Last Row'
>> 4  |     1.5   | 'New Second Row'
>> 5  |     1.75  | 'New Third Row'
>>
>> So that if the Query 'SELECT Data FROM t ORDER BY SortOrder' executes it
>> goes like this:
>>
>> Data
>> 'First Row'
>> 'New Second Row'
>> 'New Third Row'
>> 'Eventual Fourth Row'
>> 'Last Row'
>>
>>
>> This seems like a clever method and I've seen it used a few times, but
>> it can break easily if you keep dividing by two: there is a hard limit
>> in accuracy beyond which one value can no longer be meaningfully
>> divided by two. In 64-bit floating-point math that limit is quite a few
>> iterations away (assuming a normal floating-point mantissa - the
>> exponent size does not matter, since any two such values will be
>> adjacent in the same order of magnitude and only the representable
>> numbers between them count), but if inserts happen 2 to 3 times a
>> second, and if the sort happens to hit the same spot every time, those
>> divisions could be exhausted quickly.
>>
>> The question is: how can I accurately establish how many
>> total-then-divide-by-2 steps a pair of adjacent 64-bit floating-point
>> values can withstand before the difference becomes too small for
>> SQLite's sorter to distinguish?
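The worst case can be measured directly rather than derived: keep inserting into the gap just above the lower key, exactly as the SortOrder scheme would. A quick Python sketch (Python floats are IEEE-754 doubles, the same 8-byte REALs SQLite stores):

```python
# Worst case: every new row sorts into the same gap, just above `a`.
a, b = 1.0, 2.0
halvings = 0
while True:
    mid = (a + b) / 2
    if mid == a or mid == b:   # no representable double strictly between
        break
    b = mid                    # the freshly inserted row becomes the upper bound
    halvings += 1
print(halvings)                # 52 for doubles starting from 1.0 and 2.0
```

So two adjacent integer keys tolerate about 52 consecutive same-spot inserts, one per mantissa bit. At 2-3 inserts a second hitting the same gap, that is under half a minute before the order silently stops distinguishing rows.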
>>
>> Reason: the fix is easy but costly on a large DB - sort the rows and
>> reassign SortOrders in simple integer steps: 1, 2, 3, etc. I want to
>> establish how often this must be done: as seldom as possible, but not
>> so seldom that the order breaks or div0 errors or such creep in.
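For completeness, one way that renumbering pass could look (a sketch, assuming the table layout from the example above; reading the IDs in sort order and rewriting keys in one transaction sidesteps any question of a self-referencing UPDATE seeing partially updated rows):

```python
import sqlite3

# Rebuild the example table, including the fractional keys.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (ID INTEGER PRIMARY KEY, SortOrder REAL, Data TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, 1.0,  "First Row"),
    (2, 2.0,  "Eventual Fourth Row"),
    (3, 3.0,  "Last Row"),
    (4, 1.5,  "New Second Row"),
    (5, 1.75, "New Third Row"),
])

with con:  # one transaction for the whole renumbering pass
    ids = [r[0] for r in con.execute("SELECT ID FROM t ORDER BY SortOrder")]
    con.executemany("UPDATE t SET SortOrder = ? WHERE ID = ?",
                    list(enumerate(ids, start=1)))

rows = [r[0] for r in con.execute("SELECT Data FROM t ORDER BY SortOrder")]
```

After the pass the keys are back to 1, 2, 3, ... and the full halving budget is available again in every gap.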
>>
>>
>>
>> _______________________________________________
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>



-- 
Andreas Kupries
Senior Tcl Developer
Code to Cloud: Smarter, Safer, Faster™
F: 778.786.1133
andre...@activestate.com, http://www.activestate.com
Learn about Stackato for Private PaaS: http://www.activestate.com/stackato

21st Tcl/Tk Conference: Nov 10-14, Portland, OR, USA --
http://www.tcl.tk/community/tcl2014/
Send mail to tclconfere...@googlegroups.com, by Sep 8
Registration is open.
