Hi Willy,

On Mon, Mar 19, 2012 at 9:56 AM, Willy Tarreau <[email protected]> wrote:
> Hi Kamil,
>
> On Mon, Mar 19, 2012 at 09:02:25AM +0100, Kamil Gorlo wrote:
>> To use our API you need to create a session on the server. You call /signin
>> and then the backend sets the session cookie - it is highly preferred (for
>> some crucial features to work) that from then on all requests go
>> to the same backend (just for backward compatibility (Nginx) I have a
>> separate cookie for routing - this cookie might be set by the LB or the
>> backend; for now it is set by the LB when the client makes a request
>> without one). Some clients can easily pass cookies in headers, but not
>> Flash.
>>
>> The flow with Flash is like this:
>>
>> JavaScript initializes the session and makes some requests in it,
>> then Flash must make other requests (file upload) using the same
>> session from the browser - so JS passes session_id and route_id to the
>> Flash application, and later on these parameters are passed in the URL
>> (since there is no way to send cookies from Flash when uploading a file
>> with POST - AFAIK).
>
> OK, so are those cookies always passed on the URL even with POST
> requests ? In what format are they passed ? Do they always have
> the same name as the cookie learned from the application, or does
> the name need to be configurable ? Is the flash server farm shared
> with the HTTP farm ?
"Cookies" are passed in two ways only: - as regular http cookies - as parameters in query string when first option is impossible There is no such thing as flash server farm. There is only one application server farm, they all understand HTTP. > > Do you have an example of a full request coming from your flash > application so that we get a better idea ? Typical session look like this (with Nginx setup where Nginx sets route_id cookie which provides stickiness with backend - backend supplies only session_id cookie which I prefer should not be used for routing purposes; of course we can change setup to one where backend supplies also route_id cookie): >> PUT /signin HTTP/1.1 >> X-bla: 42 << HTTP/1.1 200 OK << Set-Cookie: sessionID=312123 (this is from backend) << Set-Cookie: routeID=asd676 (this is from nginx, but could be changed) >> GET / HTTP/1.1 >> X-asd: 3123 >> X-qwe: 545 >> Cookie: sessionID=312123; routeID=asd676 << .... >> POST /upload/file.txt?sessionID=312123&routeID=asd676 << ... > It is possible that current development version is already able to > do everything you need using stick tables (stick store-response > set-cookie() and stick match url_param()), but we need to check > more precisely. > > If instead of learning the cookie we were doing a hash on it to > always send the same cookie to the same server, would the application > work or not ? It would mean that most initial requests would not be > sent to the server that delivered the cookie, but all subsequent > requests would be performed on the same server as the first one. What do you mean by: "It would mean that most initial requests would not be sent to the server that delivered the cookie"? For me ideal solution will be like in my first post. 
Everything would work like with the "cookie" option, so only hashing is
involved (there is no need to "learn" cookies from the backend, store
information in memory and share that information between haproxy instances),
except that the "cookie" option could optionally read the cookie value from
the query string when it is not present in the headers (as implemented in
appsession, if I understand correctly).

>> >> I can't use appsession since, as far as I understand, it only works if I
>> >> have one process - in my environment this is not acceptable (I will
>> >> have multiple LBs to get high availability and scalability - haproxy
>> >> will be paired with stud to terminate SSL, and this will probably need
>> >> more than two machines).
>> >
>> > Well, it could probably be cheaper to have only stud on some machines
>> > and then send everything to a single haproxy instance. Also, do you
>> > really need multiple studs ? BTW, what will you use to balance your
>> > studs and using what algorithm ?
>>
>> Yes, from my tests it looks like I will need probably more than two
>> studs in front, for simplicity/high availability, paired with haproxy.
>> All machines will share a pool of IPs managed by Wackamole/Heartbeat.
>> Balancing will be by DNS.
>
> OK, so that implies stick table sharing if haproxy has to learn the cookies
> anyway, since DNS balancing will send the requests to random nodes. Cookie
> hashing would work without sharing however.
>
> Regards,
> Willy

Cheers,
Kamil
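P.S. If hashing turns out to be the way to go, I suppose the Flash/URL side
could already be covered by something like the following (again an untested
sketch; backend name, server names and addresses are made up):

    backend app
        # hash the session id taken from the query string; consistent hashing
        # keeps the id->server mapping mostly stable when servers come and go
        balance url_param sessionID
        hash-type consistent
        server app1 10.0.0.1:8080
        server app2 10.0.0.2:8080

What I do not see is the equivalent for requests that carry the id only in the
Cookie header - is that the part your cookie-hashing proposal would add?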

