My solution:

1. Store it normalized, with `user_id | thread_id | last_view_time` columns and a composite index covering the first two columns.
2. As soon as the user opens a forum section, fetch the states of all threads on the page in a single query and cache them in the session.
3. Whenever you need to know whether a thread has been read, check the session first.
4. On thread open, update the session and push the write to a queue. A worker then flushes the changes for all users to the database in bulk.
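To make this concrete, below is a rough sketch of steps 1-4, assuming PDO against MySQL and PHP's native session. All table, column and function names are illustrative, and the queue is stubbed as a session array for brevity; a real deployment would push to something like a Redis list, Gearman, or a holding table that a cron worker drains.

<?php
// Step 1: the normalized table. The composite PRIMARY KEY covers the
// first two columns, so lookups by (user_id, thread_id) use the index:
//
//   CREATE TABLE thread_views (
//       user_id        INT UNSIGNED NOT NULL,
//       thread_id      INT UNSIGNED NOT NULL,
//       last_view_time DATETIME     NOT NULL,
//       PRIMARY KEY (user_id, thread_id)
//   ) ENGINE=InnoDB;

// Step 2: on opening a forum section, fetch the view state of every
// thread on the page in one query and cache it in the session.
function loadThreadStates(PDO $db, $userId, array $threadIds)
{
    if (empty($threadIds)) {
        $_SESSION['thread_views'] = array();
        return;
    }
    $in   = implode(',', array_fill(0, count($threadIds), '?'));
    $stmt = $db->prepare(
        "SELECT thread_id, last_view_time FROM thread_views
         WHERE user_id = ? AND thread_id IN ($in)"
    );
    $stmt->execute(array_merge(array($userId), $threadIds));
    $_SESSION['thread_views'] = $stmt->fetchAll(PDO::FETCH_KEY_PAIR);
}

// Step 3: "has this thread been read?" is answered from the session,
// not the database. DATETIME strings compare correctly as strings.
function isThreadRead($threadId, $lastPostTime)
{
    return isset($_SESSION['thread_views'][$threadId])
        && $_SESSION['thread_views'][$threadId] >= $lastPostTime;
}

// Step 4a: on thread open, update the session and queue the write
// instead of hitting the database on every page view.
function markThreadRead($userId, $threadId)
{
    $now = date('Y-m-d H:i:s');
    $_SESSION['thread_views'][$threadId] = $now;
    $_SESSION['pending_views'][] = array($userId, $threadId, $now);
}

// Step 4b (worker side): flush the queued rows for all users in a
// single multi-row upsert instead of one write per page view.
function flushPendingViews(PDO $db, array $pending)
{
    if (empty($pending)) {
        return;
    }
    $rows   = implode(',', array_fill(0, count($pending), '(?,?,?)'));
    $params = array();
    foreach ($pending as $row) {
        $params = array_merge($params, $row);
    }
    $stmt = $db->prepare(
        "INSERT INTO thread_views (user_id, thread_id, last_view_time)
         VALUES $rows
         ON DUPLICATE KEY UPDATE last_view_time = VALUES(last_view_time)"
    );
    $stmt->execute($params);
}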
On 8 August 2013 16:05, Ivan Kurnosov <[email protected]> wrote:

> A serialized solution would behave even worse: you need resources to:
>
> 1. deserialize
> 2. modify
> 3. store it back
>
> And as the user keeps using the forum, every run will take more CPU
> (serialization), more memory (to hold the unserialized data) and more
> network IO (to transfer it from the DBMS to the application).
>
>
> On 8 August 2013 16:03, ashley etchell <[email protected]> wrote:
>
>> Would using INSERT DELAYED help with the write process?
>>
>> You could consider storing viewed posts per user in a serialized or CSV
>> text file, or look at using MERGE tables somehow.
>>
>> Regards
>> Ash
>>
>> On 8 Aug 2013 15:56, "matt thomson" <[email protected]> wrote:
>>
>>> Hi everybody,
>>>
>>> I'm working on a forum-like web app where I record which topics have
>>> been read by a logged-in user. Usually, for a many-to-many relation like
>>> this, I would use a third table with a user_id and a topic_id column.
>>> I'm not sure that is scalable in this case though; the table will meet
>>> three conditions:
>>>
>>> It will be written to on almost all page views (to say the current
>>> topic has been read).
>>> It will be read from on most page views (to check which topics have
>>> been read and mark this in the HTML).
>>> It may get very large (e.g. 1 million+ records).
>>>
>>> Usually, if a table is going to be this big and queried a lot, I will
>>> put an index on the columns that are queried. However, as well as being
>>> read from frequently, the table is also written to frequently. As
>>> indexes take time to update on write, updating the index constantly
>>> might end up slowing the table down so much that this method is not
>>> scalable.
>>>
>>> Does anyone know a way around this issue?
>>>
>>> Thanks.
>>>
>
> --
> With best regards, Ivan Kurnosov

--
With best regards, Ivan Kurnosov

--
--
NZ PHP Users Group: http://groups.google.com/group/nzphpug
To post, send email to [email protected]
To unsubscribe, send email to [email protected]
---
You received this message because you are subscribed to the Google Groups "NZ PHP Users Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
For more options, visit https://groups.google.com/groups/opt_out.
