On Fri, Sep 6, 2013 at 11:15 PM, Jared Albers <jalb...@mymail.mines.edu> wrote:
> I already grasp the reasoning for why the creation of tables can be
> expensive and inefficient. What I don't understand is why the length
> of a table name affects performance so greatly, while the length of
> data in a row has almost no effect on performance. Does a longer table
> name somehow require more pages? This is what I mean to point out.
>
> On Stack Overflow, Richard Hipp made the argument that "it should come
> as no surprise that SQLite takes 4x longer to write 4x as much
> content." This argument doesn't make sense to me when 4x the data in a
> row has a negligible effect on performance (when inserting the row).

Right. It is surprising indeed. In my own tests, the DB file itself grows
from 11MB to 15MB with long names, so there's some additional I/O overhead
for sure, but 4MB of I/O clearly doesn't justify a 2.5x increase in the
time it takes to create all these tables. But given that 10K tables is
clearly considered an outlier use case by many, there doesn't seem to be
much will to investigate further and actually profile this use case.

FWIW, I tried to take disk I/O out of the equation by using a :memory: DB,
and got some weird results: after two initial runs with run times similar
to the on-disk case, subsequent runs were almost instantaneous, and I again
saw a 60sec runtime for the long-name case when at other times I get 27sec.
All in all, I'm puzzled, but since I have far fewer tables, I'm not too
worried despite my inconclusive experiments. --DD
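
For anyone who wants to reproduce this kind of comparison, below is a
minimal Python sketch of the measurement being discussed, not the poster's
actual benchmark: it times creating many tables with short vs. long names,
both on disk and in a :memory: database, inside a single transaction. The
table count, the "long" name length, and the column layout are illustrative
assumptions.

    # Sketch: time CREATE TABLE for many tables, short vs. long names,
    # on disk and in :memory:. Numbers below are assumptions, not the
    # original test's parameters.
    import os
    import sqlite3
    import time

    NUM_TABLES = 10000      # assumption: the thread mentions ~10K tables
    LONG_NAME_LEN = 200     # assumption: arbitrary "long" name length

    def create_tables(db_path, name_len):
        # Start from a fresh database file for each on-disk run.
        if db_path != ":memory:" and os.path.exists(db_path):
            os.remove(db_path)
        conn = sqlite3.connect(db_path)
        start = time.perf_counter()
        with conn:  # one transaction around all CREATE TABLE statements
            for i in range(NUM_TABLES):
                # Unique prefix, padded with 'x' up to the target length.
                name = "t{:06d}".format(i).ljust(name_len, "x")
                conn.execute(
                    'CREATE TABLE "{}" (id INTEGER PRIMARY KEY, val TEXT)'
                    .format(name))
        elapsed = time.perf_counter() - start
        size = os.path.getsize(db_path) if db_path != ":memory:" else 0
        conn.close()
        return elapsed, size

    for db_path in ("bench.db", ":memory:"):
        for name_len in (8, LONG_NAME_LEN):
            elapsed, size = create_tables(db_path, name_len)
            print("{:10s} name_len={:3d} time={:6.2f}s size={:.1f}MB"
                  .format(db_path, name_len, elapsed, size / 1e6))

Comparing the reported times and file sizes between the short-name and
long-name runs is the same kind of experiment described above; the :memory:
runs are meant to show how much of the difference survives once disk I/O is
removed.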