I've read through the rest of the thread on this so far. I like the linked-list idea, since updating three rows seems to be the better way of doing things, though the question does remain: what drives the sort reasoning? If you don't want to get that deep into CTEs and such for the linked lists, though, one option would be to use strings instead of integers for the sort order. It might chew up a LOT more drive space, but from the sounds of it, space isn't a huge concern.
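On the code side you would have to replace the divide-by-two with something that picks a string falling between the two neighbouring keys. Just as a rough sketch of the kind of decision that code ends up making (my own illustration, assuming the keys stick to A..Z under SQLite's default BINARY collation; the ELSE branch is where the real work would live):

  -- Cheap case: a row needs to land between 'A' and 'B', so try appending a
  -- mid-alphabet letter to the lower neighbour.
  SELECT CASE WHEN 'A' || 'M' < 'B'
              THEN 'A' || 'M'     -- 'AM' sorts after 'A' and before 'B'
              ELSE NULL           -- neighbours too close for the cheap trick
         END AS NewKey;
  -- Between 'A' and 'AB' the cheap trick fails ('AM' sorts after 'AB'), so
  -- smarter code would extend the shared prefix instead, e.g. 'AAM'.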
So on the code side of things you'll have to do a bit more thinking, as you just won't be able to do a simple math equation to work out where things should go. The ordering itself, though, comes out right:

C:\Users\Stephen>sqlite3
SQLite version 3.7.14.1 2012-10-04 19:37:12
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table test (ID integer, SortOrder char, Data char);
sqlite> insert into Test values (1,'A','First Row');
sqlite> insert into Test values (2,'B','Eventual Fifth Row');
sqlite> insert into Test values (3,'C','Last Row');
sqlite> insert into Test values (4,'AA','New Second Row');
sqlite> insert into Test values (5,'AB','New Fourth Row');
sqlite> insert into Test values (6,'AAA','New Third Line');
sqlite> select * from Test order by SortOrder;
1|A|First Row
4|AA|New Second Row
6|AAA|New Third Line
5|AB|New Fourth Row
2|B|Eventual Fifth Row
3|C|Last Row
sqlite>

What happens when you have to go beyond Z? Tack on an A, so AAZA as an example. What comes before A? ... I don't know. Since I don't know how you're deciding whether a row goes at the bottom of the sorted data or somewhere in the middle, I can't say how viable this is.

On Wed, Sep 24, 2014 at 12:49 PM, RSmith <rsm...@rsweb.co.za> wrote:
> I'm trying to find what the limit is for dividing in terms of accuracy.
>
> Basically I have one program that inserts values into a table and
> determines sort order using one standard trick: a REAL column named
> "SortOrder" which gets the value Highest_previous_value+1 if an insert
> happens with something that needs to be sorted at the end of the table.
>
> For any other position, the SortOrder gets assigned the value:
> ((Prev.SortOrder + Next.SortOrder) / 2)
>
> So to be clear (in case anyone is not familiar with this method), let's
> start with a small table with 1 item added like this:
>
> ID | SortOrder | Data
> 1  | 1         | 'First Row'
>
> Adding two new rows to the end will use previous highest SortOrder+1 each
> time, so that the result is:
>
> ID | SortOrder | Data
> 1  | 1         | 'First Row'
> 2  | 2         | 'Eventual Fourth Row'
> 3  | 3         | 'Last Row'
>
> Adding a new row that should sort in between IDs 1 and 2 above will take
> those SortOrders and find a new value by totalling them (=3) and dividing
> by 2 (=1.5):
>
> ID | SortOrder | Data
> 1  | 1         | 'First Row'
> 2  | 2         | 'Eventual Fourth Row'
> 3  | 3         | 'Last Row'
> 4  | 1.5       | 'New Second Row'
>
> Adding another row that should sort in between IDs 2 and 4 will again
> total and divide by 2 (=(2+1.5)/2):
>
> ID | SortOrder | Data
> 1  | 1         | 'First Row'
> 2  | 2         | 'Eventual Fourth Row'
> 3  | 3         | 'Last Row'
> 4  | 1.5       | 'New Second Row'
> 5  | 1.75      | 'New Third Row'
>
> So that if the query 'SELECT Data FROM t ORDER BY SortOrder' executes, it
> goes like this:
>
> Data
> 'First Row'
> 'New Second Row'
> 'New Third Row'
> 'Eventual Fourth Row'
> 'Last Row'
>
> This seems like a clever method and I've seen it used a few times, but it
> can break easily if you keep dividing by two: there is a very quick limit
> in accuracy beyond which one value can no longer be divided by two
> meaningfully.
> In 64-bit floating point math that limit is very far away, quite a few
> iterations (assuming normal floating point mantissa accuracy; the exponent
> size does not matter, since any two such values will be adjacent in the
> same realm of magnitude and only the available real numbers in between
> them count), but if inserts happen 2 to 3 times a second, and imagining
> for a moment that the sort might hit the same spot every time, the
> consecutive divisions might be exhausted quickly.
>
> The question is: how can I accurately establish how many
> total-then-divide-by-2's a set of co-values in 64-bit FP guise can
> withstand before the difference is too small to make sense to the sorter
> in SQLite?
>
> Reason: the fix is easy but costly on a large DB (sort and reassign the
> SortOrders in simple integer steps: 1, 2, 3, etc.), but I want to
> establish how often this should be done: as seldom as possible, but not
> so seldom that the order breaks or div0 errors creep in.
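As for the actual 'how many total-then-divide-by-2's' question: you can get SQLite to measure it on itself with a recursive CTE (that needs 3.8.3 or later for WITH RECURSIVE, so not the 3.7.14 shell above). A rough sketch, taking the worst case where every new row lands in the same gap just above SortOrder 1:

  WITH RECURSIVE halve(n, lo, hi) AS (
    SELECT 0, 1.0, 2.0                  -- the gap between two adjacent SortOrders
    UNION ALL
    SELECT n+1, lo, (lo+hi)/2.0         -- every insert goes into the lower half
      FROM halve
     WHERE (lo+hi)/2.0 > lo AND (lo+hi)/2.0 < hi  -- only while the midpoint is still distinct
  )
  SELECT max(n) AS inserts_before_collapse FROM halve;

That should come back right around 52, which is simply the mantissa width of a 64-bit double. At 2 to 3 inserts a second all hitting the exact same gap, that is only about 20 seconds of headroom; in practice inserts spread across different gaps, so the renumber can be much rarer, but 50-odd per gap is the floor to plan around.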