Simon Slavin-3 wrote:
> 
> By looking at the file on disk ?  Are you taking into account the journal
> file ? 
> 
Yes, I do all the counts, save the data to a file, and then, if the file is
newer than the database, I use the file; otherwise I count again.
No, I am not taking the journal file into account; I don't know what you
mean by that.
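For what it's worth, the caching scheme described above can be sketched as below. The file names and the "director" filter are hypothetical, not from the thread; note also that comparing file timestamps is fragile, because SQLite may hold committed-but-unapplied changes in a separate rollback journal file, which is presumably what Simon's journal question was getting at.

```python
import os
import sqlite3

DB_PATH = "movies.db"        # hypothetical database file name
CACHE_PATH = "counts.cache"  # hypothetical cache file name

def distinct_director_count():
    """Recompute the expensive distinct count from the database."""
    con = sqlite3.connect(DB_PATH)
    try:
        (n,) = con.execute(
            "SELECT count(DISTINCT people_ID) FROM \"movies people\" "
            "WHERE capacity_ID = 'director'"
        ).fetchone()
    finally:
        con.close()
    return n

def cached_count():
    """Use the cached value only if the cache file is newer than the database."""
    if (os.path.exists(CACHE_PATH)
            and os.path.getmtime(CACHE_PATH) > os.path.getmtime(DB_PATH)):
        with open(CACHE_PATH) as f:
            return int(f.read())
    n = distinct_director_count()
    with open(CACHE_PATH, "w") as f:
        f.write(str(n))
    return n
```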


BareFeetWare-2 wrote:
> 
> Does this answer your needs? If not, please explain further, but it will
> probably only require modifying a constraint in the schema I proposed,
> rather than denormalizing or partitioning. 
> 
I will try to explain it.
I have the table "movies people" with columns movie_ID, capacity_ID and
people_ID.
Each row is unique, but people_ID by itself is not.
For example, let's say I have the following data:
1|director|1
2|director|2
3|director|1

In this example the total count for directors is 2: there are two distinct
directors (people 1 and 2).
In the table "Capacity Statistics" I will have:
director|2
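The totals described above correspond to a count(DISTINCT ...) grouped by capacity. A minimal sketch of that query, using the sample data and the table/column names from this thread:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE "movies people" (movie_ID, capacity_ID, people_ID)')
con.executemany('INSERT INTO "movies people" VALUES (?, ?, ?)',
                [(1, 'director', 1), (2, 'director', 2), (3, 'director', 1)])

# Two distinct directors (people 1 and 2), even though there are three rows.
rows = con.execute(
    'SELECT capacity_ID, count(DISTINCT people_ID) '
    'FROM "movies people" GROUP BY capacity_ID'
).fetchall()
print(rows)  # [('director', 2)]
```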

The triggers you made add/subtract 1 in "Capacity Statistics" on every
insert/delete in "movies people".
What happens if I add the following row to "movies people"?
4|director|2

The trigger would add 1 to "Capacity Statistics", giving:
director|3

But there are still only 2 distinct directors:
1|director|1
2|director|2
3|director|1
4|director|2
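One possible fix for the overcounting described above (my own sketch, not the schema BareFeetWare posted; the "Capacity Statistics" column names are assumptions) is a trigger that bumps the count only when the inserted (capacity, person) pair does not already appear on another row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript('''
CREATE TABLE "movies people" (movie_ID, capacity_ID, people_ID);
CREATE TABLE "Capacity Statistics" (capacity_ID PRIMARY KEY, total);

-- Increment only when this (capacity, person) pair is new, i.e. no
-- other row already holds the same pair.  In an AFTER INSERT trigger
-- the new row is already in the table, so exclude it by rowid.
CREATE TRIGGER mp_insert AFTER INSERT ON "movies people"
WHEN NOT EXISTS (
    SELECT 1 FROM "movies people"
    WHERE capacity_ID = NEW.capacity_ID
      AND people_ID = NEW.people_ID
      AND rowid <> NEW.rowid
)
BEGIN
    UPDATE "Capacity Statistics" SET total = total + 1
     WHERE capacity_ID = NEW.capacity_ID;
    INSERT OR IGNORE INTO "Capacity Statistics" VALUES (NEW.capacity_ID, 1);
END;
''')

for row in [(1, 'director', 1), (2, 'director', 2),
            (3, 'director', 1), (4, 'director', 2)]:
    con.execute('INSERT INTO "movies people" VALUES (?, ?, ?)', row)

print(con.execute('SELECT * FROM "Capacity Statistics"').fetchall())
# [('director', 2)]
```

A matching DELETE trigger would need the mirror-image check: decrement only when no other row with the same (capacity, person) pair remains.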


BareFeetWare-2 wrote:
> 
> Erm ... there are also movies which have more than one person directing. 
> You need to copy across the rowid from the MoviePeople table, and delete
> just based on that particular record. 
> 
Yes, of course.
-- 
View this message in context: 
http://old.nabble.com/Speed-up-count%28distinct-col%29-tp30864622p30898156.html
Sent from the SQLite mailing list archive at Nabble.com.
