Hello,
    Why don't you try the same approach with 64-bit integers? You could
insert consecutive items at 0x1000000000 (2^36) intervals; within a signed
64-bit range that leaves room for about 2^27 (roughly 134 million) items,
to be inserted in any order you like. You could also change the interval to
tune the tradeoff between the maximum number of consecutively appended
items and the number of out-of-order items inserted between them, or easily
check for a duplicate sort order and, in that case, do a costly renumbering
of the sort-order field (the latter could also be done with floating
point).
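
A minimal sketch of the idea (Python with the built-in sqlite3 module; the
table and column names are made up for illustration):

    import sqlite3

    GAP = 0x1000000000  # 2**36 between consecutive appended items

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY,"
                " sortorder INTEGER, data TEXT)")

    def append(data):
        # Append after the current maximum; COALESCE covers the empty table.
        con.execute(
            "INSERT INTO t(sortorder, data) "
            "SELECT COALESCE(MAX(sortorder), 0) + ?, ? FROM t", (GAP, data))

    def insert_between(prev_order, next_order, data):
        # Integer midpoint: exact arithmetic until the neighbours collide.
        mid = (prev_order + next_order) // 2
        if mid == prev_order:
            # Gap exhausted (after 36 same-spot midpoints with a 2**36 gap):
            # renumber the whole column in GAP steps, then retry.
            raise RuntimeError("renumber needed")
        con.execute("INSERT INTO t(sortorder, data) VALUES (?, ?)",
                    (mid, data))

A 2**36 gap survives 36 midpoint insertions at any one spot before a
renumbering is forced, and the arithmetic is exact the whole way.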
On 24 Sep 2014 at 18:49, "RSmith" <rsm...@rsweb.co.za> wrote:

> I'm trying to establish where the accuracy limit lies for repeated
> division.
>
> Basically I have a program that inserts values into a table and determines
> sort order using a standard trick: a REAL column named "SortOrder" gets
> the value Highest_previous_value + 1 whenever the new row needs to be
> sorted at the end of the table.
>
> For any other position, the SortOrder gets assigned the value:
> ((Prev.Sortorder + Next.Sortorder) / 2)
>
> So to be clear (in case anyone is not familiar with this method), let's
> start with a small table with 1 item added like this:
>
> ID | SortOrder | Data
> 1  | 1         | 'First Row'
>
> Adding two new rows to the end, each using the previous highest
> SortOrder + 1, gives:
>
> ID | SortOrder | Data
> 1  | 1         | 'First Row'
> 2  | 2         | 'Eventual Fourth Row'
> 3  | 3         | 'Last Row'
>
> Adding a new row that should sort in between IDs 1 and 2 above will take
> those SortOrders and find the new value by dividing their total
> (1 + 2 = 3) by 2 (= 1.5):
>
> ID | SortOrder | Data
> 1  | 1         | 'First Row'
> 2  | 2         | 'Eventual Fourth Row'
> 3  | 3         | 'Last Row'
> 4  | 1.5       | 'New Second Row'
>
> Adding another row that should sort in between IDs 2 and 4 will again
> take the total and divide by 2 (= (2 + 1.5) / 2 = 1.75):
>
> ID | SortOrder | Data
> 1  | 1         | 'First Row'
> 2  | 2         | 'Eventual Fourth Row'
> 3  | 3         | 'Last Row'
> 4  | 1.5       | 'New Second Row'
> 5  | 1.75      | 'New Third Row'
>
> So that the query 'SELECT Data FROM t ORDER BY SortOrder' now returns:
>
> Data
> 'First Row'
> 'New Second Row'
> 'New Third Row'
> 'Eventual Fourth Row'
> 'Last Row'
>
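The method described above, condensed into a runnable sketch (Python's
sqlite3, reusing the names from the example tables):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (ID INTEGER PRIMARY KEY,"
                " SortOrder REAL, Data TEXT)")

    def append(data):
        # End of table: highest existing SortOrder + 1 (1 when empty).
        con.execute(
            "INSERT INTO t(SortOrder, Data) "
            "SELECT COALESCE(MAX(SortOrder), 0) + 1, ? FROM t", (data,))

    def insert_between(prev_order, next_order, data):
        # Any other position: (Prev.Sortorder + Next.Sortorder) / 2
        con.execute("INSERT INTO t(SortOrder, Data) VALUES (?, ?)",
                    ((prev_order + next_order) / 2, data))

    append("First Row")                      # SortOrder 1
    append("Eventual Fourth Row")            # SortOrder 2
    append("Last Row")                       # SortOrder 3
    insert_between(1, 2, "New Second Row")   # SortOrder 1.5
    insert_between(1.5, 2, "New Third Row")  # SortOrder 1.75

    for (data,) in con.execute("SELECT Data FROM t ORDER BY SortOrder"):
        print(data)
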
>
> This seems like a clever method and I've seen it used a few times, but
> it can break easily if you keep dividing by two: accuracy very quickly
> reaches a limit where one value can no longer be meaningfully divided
> by two. In 64-bit floating-point math that limit is quite far away,
> good for quite a few iterations (assuming normal floating-point
> mantissa accuracy - the exponent size does not matter, since any two
> such values will be adjacent in the same realm of magnitude and only
> the representable numbers between them count), but if inserts happen
> 2 to 3 times a second, and imagining for a moment that the sort might
> hit the same spot every time, the available halvings might be
> exhausted quickly.
>
> The question is: how can I accurately establish how many
> total-then-divide-by-2 steps a pair of adjacent 64-bit FP values can
> withstand before the difference becomes too small to matter to the
> sorter in SQLite?
>
> Reason: the fix is easy but costly on a large DB - sort the rows and
> reassign the SortOrders in simple integer steps (1, 2, 3, etc.). I want
> to establish how often this should be done: as seldom as possible is
> preferred, but not so seldom that the order breaks, or div0 errors and
> the like appear.
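
To put a number on it: an IEEE 754 double carries 52 explicit mantissa
bits, so two adjacent sort values about one unit apart survive roughly 52
same-spot halvings before the midpoint rounds onto one of its neighbours.
A quick empirical check (plain Python; the helper name is made up):

    def halvings_until_collision(lo, hi):
        # Count midpoint inserts at one spot until the new value no
        # longer sorts strictly between its two neighbours.
        n = 0
        while True:
            mid = (lo + hi) / 2
            if not (lo < mid < hi):
                return n
            hi = mid  # worst case: every insert lands just above lo
            n += 1

    print(halvings_until_collision(1.0, 2.0))  # prints 52

At 2 to 3 inserts per second all landing in the same spot, that is about
20 seconds of headroom, which is why the integer gaps suggested at the top
are the safer bet.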
