Hello, sorry, I'm not very familiar with databases, so I may not use the right terms.
I wonder what the best approach is in my specific case. I have a table "A", and each row in "A" is connected to about 50 rows of other information. These 50 rows have an identical structure for every row in table "A". The probability that one of these 50 rows is identical to a row in another set of 50 is approximately zero.

So I have two choices:

1. create one table holding all the sets of 50 rows, adding a reference in each row to the corresponding row in table "A"
2. create a separate table for each set of 50 rows

(3. I could also flatten the data, so I'd have just one huge table "A", but that doesn't sound very efficient to me.)

There may be one other issue that influences the choice, and that's how I use the data. When I extract data from this database:

- I search for a row in table "A"
- I read all 50 extra rows of other information

thanks,
Stef Mientki

_______________________________________________
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
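For concreteness, option 1 above (one child table whose rows each reference their parent row in "A") could be sketched like this using Python's sqlite3 module. All table and column names here are hypothetical, just to illustrate the shape of the schema and the "find a row in A, then read its 50 extra rows" access pattern:

```python
import sqlite3

# Hypothetical sketch of option 1: a single child table "A_detail"
# holds every set of 50 rows, each row carrying a reference (a_id)
# to its parent row in table "A".
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.execute("""
    CREATE TABLE A (
        id   INTEGER PRIMARY KEY,
        name TEXT
    )
""")
conn.execute("""
    CREATE TABLE A_detail (
        id    INTEGER PRIMARY KEY,
        a_id  INTEGER NOT NULL REFERENCES A(id),
        value REAL
    )
""")
# An index on the foreign-key column makes "read the 50 extra rows"
# an indexed lookup instead of a full scan of the child table.
conn.execute("CREATE INDEX idx_A_detail_a_id ON A_detail(a_id)")

# One parent row plus its 50 detail rows.
conn.execute("INSERT INTO A (id, name) VALUES (1, 'first')")
conn.executemany(
    "INSERT INTO A_detail (a_id, value) VALUES (1, ?)",
    [(float(i),) for i in range(50)],
)

# The access pattern from the question:
# 1. search for a row in table "A"
(a_id,) = conn.execute("SELECT id FROM A WHERE name = 'first'").fetchone()
# 2. read all 50 extra rows of other information
details = conn.execute(
    "SELECT value FROM A_detail WHERE a_id = ?", (a_id,)
).fetchall()
print(len(details))  # 50
```

With this layout there is only one extra table regardless of how many rows "A" has, and each set of 50 rows is retrieved with a single indexed query.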