Robert Collins wrote:
> I certainly hope it's uncommon!
I don't think it is; it's a natural assumption that the same user will not make the same request twice at the same time.


> Squid really isn't the point to fix race conditions: synchronisation is
> your friend, in the server.
Yes, you are right, but if I put the synchronisation in the webserver I have already lost: I'll have an Apache process sucking mud while it waits for the proper locks.

The only way to fix this correctly in the application is to check whether the job has already been done and react intelligently to that; come to think of it, that might be a much better solution.

I have seen the light and will shut up about that point now.
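The "check if the job is already done" idea above can be sketched roughly like this (a minimal illustration, not Flemming's actual code; the names and the in-memory store are hypothetical stand-ins for whatever shared state the app uses, e.g. the session database):

```python
# Sketch: before running an expensive request (say, a ticket purchase),
# look up a per-session job id; a duplicate submission finds the stored
# result and is answered from it instead of being re-run.

completed_jobs = {}  # stand-in for a shared store (session DB, etc.)

def handle_request(session_id, job_id, run_job):
    key = (session_id, job_id)
    if key in completed_jobs:
        # Duplicate submission: reuse the earlier result.
        return completed_jobs[key]
    result = run_job()
    completed_jobs[key] = result
    return result

first = handle_request("sess42", "buy-ticket-1", lambda: "ticket reserved")
# The impatient user resubmits; the job is not run a second time:
dup = handle_request("sess42", "buy-ticket-1", lambda: "double booking!")
```

A real implementation would also need to handle the window where the first job is still running (e.g. a "pending" marker), but the shape is the same.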


> Do you mean webserver as in ip X, ip Y, or as in apache forked() X,
> apache forked() Y.
Yes.

Different apache forks, not different machines.


> squid will stop serving once write() returns an error.
Yes, but that won't happen before the request has been run, which means you have just run a request for a user who has given up and submitted a new one. In other words: because the content can never be cached, you have lost.


> Squid offloads disk io, so writing cachable data to disk won't affect
> performance much. If your app is sending non-cachable data marked as
> cachable, then you have a bug!
Yes, I know (Henrik told me), and no, I don't send non-cachable data marked as cachable.


>> [lock webserver access by session id]
> This seems very painful to me - you will slow down graphics as well as
> database pages.
Not at all; there /are/ no graphics. Pages typically consist of two things, the dynamically generated page and a static stylesheet, and the stylesheet will be cached either in the client or in squid.
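On the wire, that split looks roughly like this (hypothetical paths and header values; the point is just that the dynamic page forbids caching while the stylesheet allows it):

```
GET /page.cgi HTTP/1.1     ->  Cache-Control: no-cache, private   (regenerated every time)
GET /style.css HTTP/1.1    ->  Cache-Control: public, max-age=86400  (served from client or squid cache)
```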

Anyway, this was a stupid idea; I'll go and fix my application instead :)


>> header (like X-calm-down-beavis).
> This won't work. If you have *any* downstream proxies,
Luckily it's rare to have people who use proxies, and those that do deserve to be punished ;-)

No, not really, but it really does seem like very few users sit behind a proxy, and in the situation where people are likely to need this hack they are also likely to have read the pre-sales instructions telling them to turn off their proxy.

This may sound like bs, but the root of the problem is that when we sell tickets for events that usually sell out, the users go non-linear in their frenzy to get their hands on tickets: they will happily create an account, or update it, weeks ahead of the release. So telling them to turn off proxies is not that big a deal in the grand scheme of things.


[session webserver affinity]
> * This needs two new (and worthwhile) concepts in squid - 1) session ID
> awareness, and 2) an access list for allowing connection reuse on a
> per-forwarding-attempt basis.
Yes, I do think this is generally a good idea: if a webserver has just served a request, it is likely to be faster at serving a new request from the same user, as some of the data that user works with will already be cached.
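The session-affinity idea can be sketched as a simple hash of the session ID onto the pool of backends, so repeated requests from one user keep hitting the same warm webserver (a minimal illustration; the backend names and the use of MD5 are arbitrary choices, not anything squid actually does today):

```python
# Sketch: map a session id deterministically onto one parent webserver.
import hashlib

BACKENDS = ["app1.example.com", "app2.example.com", "app3.example.com"]

def backend_for(session_id):
    # Hash the session id and use it to index the backend list; the
    # same session therefore always lands on the same backend.
    digest = hashlib.md5(session_id.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]
```

A real squid implementation would additionally need the access-list machinery Robert describes, plus a fallback when the chosen backend is down.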


> * Ah, squid doesn't start new processes :}. Anyway, this is exactly what
> squid does today, with one exception: squid doesn't read the entire
> object in advance of the client - it only reads a few kb ahead - to
> avoid huge memory races. This is tunable IIRC.
Hmm, how, where? I'd be more than happy to spend some RAM to get more free apaches.
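If memory serves, the squid.conf directive Robert is alluding to is read_ahead_gap, which controls how far squid reads ahead of the slowest client; raising it spends RAM to drain the backend (and free the Apache process) sooner. A sketch of the setting:

```
# squid.conf fragment - trade RAM for earlier release of the backend
# connection (the default read-ahead is on the order of 16 KB)
read_ahead_gap 256 KB
```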


> * Again, this is *exactly* what squid does today.
Great minds think alike :-)

--
 Regards Flemming Frandsen - http://dion.swamp.dk
 PartyTicket.Net co founder & Yet Another Perl Hacker
