> Seems like the real problem here is that the application itself is not
> aware of which requests have been submitted already and which haven't.
> Of course you could do some things to skip out the error (error
> suppression with @, using "INSERT IGNORE INTO ..." to prevent errors on
> duplicate keys, etc.), but I think you will make your life easier by
> addressing the root of the problem: if the page is reloaded, make sure
> the code isn't run a second time, unless that's what's supposed to happen.
>
> In my systems, I'll assign a unique ID (based on microtime) to each
> submitted request (added as a hidden input in forms, or else appended to
> the query string), to be stored in the database. For each request, the
> system checks to see if the ID has been submitted; if so, the action
> code is skipped and only the display code is run; if not, the request ID
> is recorded in the database, and the action code is processed as normal.
> Garbage collection routines keep the request-ID cache from getting too
> big, but keep the contents long enough to avoid false negatives.
>
> That's just one way to do it. Others here can probably recommend better
> methods, but I think you have to find some way to do it. You probably
> don't want to go down the path of just suppressing errors all over the
> place, because when a real error surfaces you need to be able to see it
> and handle it properly.
>
> - Allen
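For anyone following along, here's a minimal sketch of the duplicate-request guard Allen describes. The table and function names are illustrative (his actual schema isn't shown), and it uses an in-memory SQLite database via PDO just so the example is self-contained; with MySQL you'd use `INSERT IGNORE` instead of SQLite's `INSERT OR IGNORE`.

```php
<?php
// Hypothetical sketch of the "unique request ID" pattern described above.
// Names (request_log, claimRequest) are made up for illustration.

// Generate a unique request ID when rendering the form.
// uniqid() is itself derived from microtime, as in Allen's description.
function makeRequestId(): string {
    return uniqid('', true);
}

// Atomically record the request ID. Returns true if this is the first
// time we've seen it (run the action code), false on a duplicate
// (skip the action code, run only the display code).
function claimRequest(PDO $db, string $requestId): bool {
    $stmt = $db->prepare(
        'INSERT OR IGNORE INTO request_log (request_id, seen_at) VALUES (?, ?)'
    );
    $stmt->execute([$requestId, time()]);
    return $stmt->rowCount() === 1; // 0 rows affected => already submitted
}

// Garbage collection: drop IDs old enough that a resubmit is implausible.
function gcRequests(PDO $db, int $maxAgeSeconds = 3600): void {
    $stmt = $db->prepare('DELETE FROM request_log WHERE seen_at < ?');
    $stmt->execute([time() - $maxAgeSeconds]);
}

// Demo with an in-memory SQLite database.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE request_log (request_id TEXT PRIMARY KEY, seen_at INTEGER)');

$id = makeRequestId();                 // embed this as a hidden form input
var_dump(claimRequest($db, $id));      // first submission: bool(true)
var_dump(claimRequest($db, $id));      // page reload/resubmit: bool(false)
```

The key design point is that the check and the insert are a single statement, so two simultaneous submissions of the same ID can't both pass the "have I seen this?" test; the unique key on `request_id` arbitrates.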
Thanks. Your comment made me realize I already do this; I just had it disabled for testing -- late-day brain fart! But I tend to be paranoid, and check at probably too many levels.

_______________________________________________
New York PHP Community Talk Mailing List
http://lists.nyphp.org/mailman/listinfo/talk

NYPHPCon 2006 Presentations Online
http://www.nyphpcon.com

Show Your Participation in New York PHP
http://www.nyphp.org/show_participation.php