Re: Codepage, UTF-8
On Mar 3, 2009, at 9:00 AM, Christopher Barker wrote:
> menshikoval...@gmail.com wrote:
>> In my controller I use flash('Регистрация'), but it is sent to the template form as \u0420\u0435\u0433\u0438\u0441\u0442\u0440\u0430\u0446\u0438\u044f. How do I use the u prefix here?
>
> Does: flash(u'Регистрация') not work?

On Mar 3, 2009, at 10:33 PM, menshikoval...@gmail.com wrote:
> No =(

You probably don't have a magic encoding comment. Try adding:

# coding: utf-8

(or whatever encoding your editor uses) to the top of your .py file.

--
Philip Jenvey

--~--~-~--~~~---~--~~ You received this message because you are subscribed to the Google Groups pylons-discuss group. To post to this group, send email to pylons-discuss@googlegroups.com To unsubscribe from this group, send email to pylons-discuss+unsubscr...@googlegroups.com For more options, visit this group at http://groups.google.com/group/pylons-discuss?hl=en -~--~~~~--~~--~--~---
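For reference, the magic comment must appear on the first or second line of the module; with it in place, a u'' literal containing non-ASCII characters parses correctly under Python 2 (Python 3 assumes UTF-8 source by default). A minimal sketch of the equivalence involved:

```python
# coding: utf-8
# The line above is the "magic comment"; Python 2 needs it to decode
# non-ASCII bytes in a source file (Python 3 assumes UTF-8 by default).

# A u'' literal with the raw characters is the same unicode string as
# the \uXXXX-escaped form that was showing up in the template:
label = u'Регистрация'
escaped = u'\u0420\u0435\u0433\u0438\u0441\u0442\u0440\u0430\u0446\u0438\u044f'
same = label == escaped  # the two spellings denote one string
```

Without the declaration, Python 2 decodes the source as ASCII and the literal raises a SyntaxError before flash() is ever called.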
Re: Pylons, HTTP 201 Accepted, Task Queues and Background threads
On Tue, Mar 3, 2009 at 9:57 PM, kmw kochhar...@gmail.com wrote:
> When I get the request I add the item to a synchronized queue (which the processing thread blocks on) and return an HTTP 201 Accepted to the client. The processing thread picks up tasks from the queue and they are completed in the order received. The 201 response also has an additional Location header to poll the status of the task.

Just to clarify one thing: 201 is CREATED, 202 is ACCEPTED. Don't send 201 instead of 202, because they have different meanings: http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html

About the matter at hand: we spawn some processes in that kind of situation, not threads. If you have scalability issues you can think about using the AMQP protocol with an implementation such as RabbitMQ.

I'm +1 on using 202 ACCEPTED :-)

--
Lawrence, http://oluyede.org - http://twitter.com/lawrenceoluyede
"It is difficult to get a man to understand something when his salary depends on not understanding it" - Upton Sinclair
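In WSGI terms (which is what a Pylons controller ultimately produces), the accepted-for-processing response looks something like the sketch below; the /tasks/42 polling URL is a made-up example:

```python
# Minimal WSGI app returning 202 Accepted with a status-polling URL.
# The '/tasks/42' path is a hypothetical example.
def accept_task(environ, start_response):
    headers = [
        ('Content-Type', 'text/plain'),
        ('Location', '/tasks/42'),  # where the client should poll for status
    ]
    start_response('202 Accepted', headers)
    return [b'queued\n']
```

The key point is the status line: '202 Accepted' tells the client the work was queued but not performed, whereas '201 Created' would claim a resource already exists.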
Re: Alternative routing system?
Many thanks

On Mar 4, 7:35 pm, Ben Bangert b...@groovie.org wrote:
> On Mar 3, 2009, at 7:14 PM, The Devil's Programmer wrote:
>> So what I am wondering is: is it possible to use a Django-style routing system with Pylons? And also, why does Pylons want me to add 'Controller' at the end of everything? Can I change this behavior? Why doesn't Pylons just let the user decide whether they want to call their classes BlaBlaController or not?
>
> It's a convention; you can avoid it and name your controller classes in the module whatever you want by including in the module:
>
> __controller__ = ProfileEditor
>
> etc. This is a convention, and one that you don't have to retain. It's customizable by subclassing PylonsApp (initialized in your project's config/middleware.py) and changing how it looks up the controller reference. This isn't as complex as you might believe, and the code contains a lot of documentation to make it easier: http://pylonshq.com/docs/en/0.9.7/modules/wsgiapp/#pylons.wsgiapp.Pyl...
>
> There are some messages on the mailing list in the past which also cover this process.
>
> Cheers,
> Ben
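The `__controller__` override Ben mentions can be emulated outside Pylons. A sketch of the kind of lookup involved (the function and module names here are hypothetical, not Pylons' actual internals): prefer a module-level `__controller__`, else fall back to the `NameController` convention:

```python
import types

# Build a stand-in for an imported controller module.
mod = types.ModuleType('profile')

class ProfileEditor(object):
    def index(self):
        return 'editing'

mod.ProfileEditor = ProfileEditor
mod.__controller__ = ProfileEditor  # opt out of the Name + 'Controller' convention

def find_controller(module, name):
    # Prefer an explicit __controller__; else fall back to FooController.
    if hasattr(module, '__controller__'):
        return module.__controller__
    return getattr(module, name.capitalize() + 'Controller')

cls = find_controller(mod, 'profile')
```

Subclassing PylonsApp to change the real lookup follows the same shape: override the method that resolves a controller name to a class.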
Re: authkit render template error
Hello. I get this error message both ways =(

TypeError: not all arguments converted during string formatting

Alexy
Re: formbuild.handle missing
On Mar 3, 3:59 am, Chris Miles miles.ch...@gmail.com wrote:
> Hi, there were a number of errors I had to correct to get it working. Notes pasted below.

Thank you Chris, that was extremely helpful. I apologise for not doing a better job.

> However, after getting it working I decided I didn't like how formbuild worked and I have switched to using ToscaWidgets (tw.forms) instead.

No problem; my aim in providing the updated docs is merely to allow people to make an informed choice. Thank you again for your help.
Re: 0.9.7 and Elixir
On Feb 23, 2:27 pm, Philip Jenvey pjen...@underboss.org wrote:
> On Feb 23, 2009, at 10:58 AM, Chris Curvey wrote:
>> I put the setup_all in load_environment (in environment.py), and that seemed to do no harm. But I can't find a method called setup_config anywhere.
>
> websetup's setup_config changed to setup_app in 0.9.7:
>
> def setup_config(command, filename, section, vars) --> def setup_app(command, conf, vars):
>
> Either one will work, regardless of version.

Hmm. I still haven't solved this, and it's becoming more pressing. If I run the script as delivered, I get the UnboundExecutionError. So I went into websetup.py, found the setup_app function, and added this to it:

# create the tables if they are not there already
from sqlalchemy import create_engine
meta.metadata.bind = create_engine(conf['sqlalchemy.url'])
meta.metadata.create_all(checkfirst=True)

Now "paster setup-app development.ini" runs without errors (yay!) but it does not create my one new table (boo!). I even tried pulling out the checkfirst parameter, but that seems to make no difference.

(Incidentally, I need to leave setup_all() in my __init__ file, because my model directory is not in the Pylons project tree -- it's somewhere else and imported by the Pylons model/__init__.)
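A common cause of create_all() silently skipping a table is that the Table object is registered on a different MetaData instance than the one being bound and created (easy to do when the model lives outside the project tree, as here). A stdlib-only sketch of the registry behaviour, with a hypothetical FakeMetaData standing in for SQLAlchemy's class:

```python
import sqlite3

class FakeMetaData:
    """Stands in for sqlalchemy.MetaData: create_all only emits DDL
    for tables registered on *this* instance."""
    def __init__(self):
        self.tables = {}

    def create_all(self, conn):
        for name, columns in self.tables.items():
            conn.execute(
                'CREATE TABLE IF NOT EXISTS %s (%s)' % (name, ', '.join(columns))
            )

def define_table(metadata, name, columns):
    metadata.tables[name] = columns

meta_a = FakeMetaData()   # the metadata that websetup.py binds and creates
meta_b = FakeMetaData()   # a second instance, e.g. inside an external model package

define_table(meta_b, 'users', ['id INTEGER PRIMARY KEY', 'name TEXT'])

conn = sqlite3.connect(':memory:')
meta_a.create_all(conn)   # creates nothing: 'users' lives on meta_b
missing = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()

meta_b.create_all(conn)   # creating from the right metadata works
present = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
```

If Elixir's setup_all() attaches the new table to its own metadata rather than to meta.metadata, binding and creating meta.metadata will never touch it, no matter what checkfirst is set to.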
Re: Response hangs when using SQLAlchemy
First off, thanks big time for pointing me towards repoze.profile. I have integrated it into my app and love it.

The total time for a request without DB stuff is about .855 CPU seconds. The total time for a request with DB stuff is about 1.470 CPU seconds.

These numbers are saying that the entire Pylons request/response cycle is only taking 1.47 sec, but my client sits there waiting for a response for about a minute, even when the client is being run on the same machine. Could the response be getting tied up in the network layer? Why would my use of SQLAlchemy and MySQL interfere with how long it takes a response to get sent to a client?

I have never looked at network traffic directly before, but it seems like that is what I need to do next. I want to find out exactly when the response is being sent over the wire. Any suggestions on how to proceed?

On Mar 3, 3:26 pm, Paweł Stradomski pstradom...@gmail.com wrote:
> In Bryan's message of Tuesday, 3 March 2009:
>> Inside of an XMLRPCController I have a function that inserts items into a database, and returns 'OK' to the client if everything works. Everything runs quickly and correctly, the rows are inserted into the DB, but Pylons hangs for about a minute when generating the response. The client hangs there waiting for a response while Pylons does *something*.
>
> Maybe try to profile your application to check where it spends that time? Try repoze.profile.
>
> --
> Paweł Stradomski
Re: Response hangs when using SQLAlchemy
In Bryan's message of Wednesday, 4 March 2009:
> First off, thanks big time for pointing me towards repoze.profile. I have integrated it into my app and love it.
>
> The total time for a request without DB stuff is about .855 CPU seconds. The total time for a request with DB stuff is about 1.470 CPU seconds.
>
> These numbers are saying that the entire Pylons request/response cycle is only taking 1.47 sec, but my client sits there waiting for a response for about a minute, even when the client is being run on the same machine. Could the response be getting tied up in the network layer? Why would my use of SQLAlchemy and MySQL interfere with how long it takes a response to get sent to a client?

I'd be more inclined to suspect that some lock is being held until a timeout rather than blame the network, especially if the output is identical in both cases, though you can check that too. What exactly is the difference between the "with db" and "without db" versions, code-wise? Could you try to remove functionality in small chunks to see which lines cause the slowdown?

--
Paweł Stradomski
Re: Response hangs when using SQLAlchemy
On Mar 5, 2009, at 7:09 AM, Bryan bryanv...@gmail.com wrote:
> I have never looked at network traffic directly before, but it seems like that is what I need to do next. I want to find out exactly when the response is being sent over the wire. Any suggestions on how to proceed?

Wireshark and/or strace are miracle workers.
Re: Response hangs when using SQLAlchemy
Using Wireshark on the client side, the XML-RPC data is actually getting back to the client right away. Then there is the large pause (always right around 5 minutes).

This is how the conversation goes. The 5-minute gap appears only when using SQLAlchemy w/ MySQL:

1. Last packet that has actual XML-RPC data sent to client
2. Client sends packet to server (confirmation??)
   -- 5-minute gap --
3. Server sends 1 packet to client
4. Client sends back 2 packets
5. Server sends 1 packet to client

It is the first packet sent by the server *after* the data is received by the client and the client responds that is stalling. After the data is received by the client, I assume there needs to be some TCP/IP confirmation and such, and that is what the last 4 packets being sent between client/server are. But what is so special about this packet that is taking 300 seconds for the server to send?

On Mar 3, 11:17 am, Noah noah.g...@gmail.com wrote:
> Wireshark and/or strace are miracle workers.
Re: Response hangs when using SQLAlchemy
The plot thickens: while the 5-minute pause is happening, I can kill the Pylons app, and the last 4 packets are still exchanged between client and server, albeit 5 minutes later. Could Paste still be hanging around after I kill the Pylons app?

I have had problems in the past when SQLAlchemy encounters a DB error: when I try to restart my Pylons app, I get a socket.error: (98, 'Address already in use') exception that starts in SocketServer.py and bubbles up to paste/serve.py. And if I do "netstat -l" at that point, I see that something is listening on my app's port 5004, but there is no pid or program name there, just a dash. 5 minutes later, that port no longer has something listening on it.

It seems that there is some sort of 5-minute zombie state in Paste or the TCP/IP stack. How could MySQL+SQLAlchemy happenings have this sort of effect?? The server is running Ubuntu 8.04.

On Mar 4, 1:04 pm, Bryan bryanv...@gmail.com wrote:
> Using Wireshark on the client side, the XML-RPC data is actually getting back to the client right away. Then there is the large pause (always right around 5 minutes).
Thread-safety in Pylons (Python?)
Hi,

For the last couple of days I was creating a simple WSGI app based on Paste components. Pylons is also based on Paste, so I started reading the Pylons sources to learn something new :) I've found something that seems strange to me, maybe because I don't fully understand how threading works in Python.

If I understand correctly, the PylonsApp object (from the wsgiapp.py module) is created only once per application (it is shared between all worker threads in a WSGI server like Paste#http) and its __call__ method is called from many threads. Am I right?

Inside the __call__ method, after some processing, we are in the find_controller method -- and here we are *accessing and modifying* the dictionary self.controller_classes. But this dictionary is shared between many threads, so is it thread-safe? I could not find an unambiguous answer anywhere on whether accessing Python primitives from many threads is safe or not -- to me it looks like it might not be safe (because modifying/iterating/accessing e.g. a dictionary may result in context switches). So if it is not safe, we need some RWLock here, right?

I am asking also because in my application I want to store some global data as a dict (shared between all worker threads) and I am wondering if I have to use locking primitives.

Thanks in advance for all explanations.

--
Cheers,
Kamil Gorlo
Re: Response hangs when using SQLAlchemy
The problem seems to be that the server is supposed to send a couple more confirmation packets, but does not for 5 minutes. So the client is correct in waiting: it has not received the last few packets of the transaction yet, even though it has received the actual XML-RPC data.

I have seen a couple of posts on the internet about the socket.error: (98, 'Address already in use') problem, but have never seen an explanation or solution. Has anyone run into this problem?

On Mar 4, 1:53 pm, Wichert Akkerman wich...@wiggy.net wrote:
> Previously Bryan wrote:
>> Using Wireshark on the client side, the XML-RPC data is actually getting back to the client right away. Then there is the large pause (always right around 5 minutes).
>
> Perhaps you are sending an incorrect Content-Length header and the client is waiting for more data to arrive?
>
> Wichert.
>
> --
> Wichert Akkerman wich...@wiggy.net    It is simple to make things.
> http://www.wiggy.net/                  It is hard to make things simple.
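The "Address already in use" error is the classic symptom of the old socket lingering in the kernel (e.g. in TIME_WAIT) after an unclean shutdown; setting SO_REUSEADDR before bind lets a restarted server rebind the port immediately. A sketch, not specific to Paste's server:

```python
import socket

def bind_listener(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Allow rebinding the port even if a previous socket on it is
    # still lingering in TIME_WAIT.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('127.0.0.1', port))
    s.listen(5)
    return s

# Simulate a restart: bind, close, then bind the same port again.
first = bind_listener(0)        # port 0: let the OS pick a free port
port = first.getsockname()[1]
first.close()
second = bind_listener(port)    # rebind succeeds with SO_REUSEADDR set
```

Whether Paste's HTTP server sets this option (and whether it applies before or after an ungraceful kill) is version-dependent, so treat this as a diagnostic direction rather than a guaranteed fix.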
Re: Pylons, HTTP 202 Accepted, Task Queues and Background threads [was Re: Pylons, HTTP 201 Accepted ... ]
Lawrence Oluyede wrote:
> Just to clarify one thing: 201 is CREATED, 202 is ACCEPTED. Don't send 201 instead of 202, because they have different meanings: http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
>
> About the matter at hand: we spawn some processes in that kind of situation, not threads. If you have scalability issues you can think about using the AMQP protocol with an implementation such as RabbitMQ.
>
> I'm +1 on using 202 ACCEPTED :-)

I meant to say 202 Accepted, not 201; that's what you get for trying to remember HTTP status codes. I am also leaning towards using processes instead of threads (using the multiprocessing module), but to play devil's advocate for a moment: why do you prefer processes to threads?

Secondly, and this is pertinent when spawning processes, how do you hook into the Pylons shutdown process to get the external process to stop?

Cheers,
- Kochhar
Re: Pylons, HTTP 201 Accepted, Task Queues and Background threads
Ian Bicking wrote:
> On Tue, Mar 3, 2009 at 2:57 PM, kmw kochhar...@gmail.com wrote:
>> Hi everyone,
>>
>> I'm trying to find some docs or perhaps old discussions about implementing a task queue within a Pylons application. The scenario I'm trying to support involves a request coming into the app server to perform an action which takes a long time to complete, such as rebuilding an index or updating a value across hundreds of thousands of objects.
>>
>> My thought was to create a processing thread when the app is loaded. When I get the request I add the item to a synchronized queue (which the processing thread blocks on) and return an HTTP 201 Accepted to the client. The processing thread picks up tasks from the queue and they are completed in the order received. The 201 response also has an additional Location header to poll the status of the task.
>>
>> The question that remained was how to create and manage the processing thread. I've read a couple of threads on this subject, and hunted around Google a bit, and found a couple of options:
>>
>> - http://groups.google.com/group/pylons-discuss/browse_thread/thread/e30fb912ca79b000/7cc1d4a6b1d9919d?lnk=gstq=background#7cc1d4a6b1d9919d
>> - http://groups.google.com/group/pylons-discuss/browse_thread/thread/3e9dfda05af50634/bc914b96e2b96a1b?lnk=gstq=background#bc914b96e2b96a1b
>>
>> Now I'm leaning towards creating a process using the Python multiprocessing module, which interfaces like a thread but skips issues with the GIL and Pylons thread management. However, I didn't find any information about how to manage the process lifecycle and allow it to shut down gracefully when the server is stopped.
>>
>> I'd appreciate feedback on this approach and any pointers to resources that will allow me to hook into the app lifecycle and manage my subprocess as well. Hopefully I can get a working recipe out of this and put it all together in the Pylons cookbook for future reference.
> If you are thinking about user-visible long-running tasks, maybe give a look at http://pythonpaste.org/waitforit/ -- it seems like you are more thinking about APIs, but it's at least similar. FYI, I think there's actually an HTTP header to indicate when the client should poll next.

Thanks Ian, that's pretty useful to know about. It doesn't fit my case because I'm working on server-side APIs, but I'll take a look at the implementation to see if I can glean some ideas.

- Kochhar
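The design kmw describes (synchronized queue, single processing thread, tasks completed in arrival order) is a few lines with the stdlib, and a sentinel value gives the graceful shutdown he is after. A sketch, not Pylons-specific, with the doubling stand-in for the real long-running work:

```python
import queue
import threading

SENTINEL = object()  # pushed at shutdown to tell the worker to exit

def worker(tasks, results):
    while True:
        task = tasks.get()         # blocks until a task arrives
        if task is SENTINEL:
            break
        results.append(task * 2)   # stand-in for the real long-running work

tasks = queue.Queue()
results = []
t = threading.Thread(target=worker, args=(tasks, results))
t.start()

for n in (1, 2, 3):       # what the controller would do per request
    tasks.put(n)

tasks.put(SENTINEL)       # shutdown: worker drains the queue, then exits
t.join()
```

Because the sentinel goes in last, the worker finishes every queued task before stopping; the same pattern carries over to a multiprocessing.Process with a multiprocessing.Queue.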
Re: Pylons, HTTP 202 Accepted, Task Queues and Background threads [was Re: Pylons, HTTP 201 Accepted ... ]
On Wed, Mar 4, 2009 at 11:56 PM, Kochhar kochhar...@gmail.com wrote:
> I am also leaning towards using processes instead of threads (using the multiprocessing module) but to play devil's advocate for a moment: why do you prefer processes to threads?

The answer is kind of easy: I do not have computations which would benefit more from a threading model than from a process model. Our Apache frontend uses multiple processes and our async computations are done in processes. We do not need to share anything, and if we do, we just copy the data to the process. With an API such as pyprocessing's or subprocess, that is kind of easy.

If we have to go into the reasons why I generally don't like threading with shared state, there's a great resource which I suggest reading from cover to cover: http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.html

It's also useful if you want to know about the pros and cons of the usual way threads are used. There's nothing intrinsically bad; it's the common style of threading some environments taught us which is bad. By the way, you can combine the two techniques if you have to.

> Secondly, and this is pertinent when spawning processes, how do you hook into the pylons shutdown process to get the external process to stop?

Not sure how to respond to this. Pylons is a framework; it can't be turned on or shut down. You can start and stop the web server, and how to sync this process with some daemon processes depends largely on the operating system, and so on. If you have worker processes, usually those worker processes are created on demand. If you need some kind of process pool, the processes are killed by the pool, and so on.

Take a look at: http://pyprocessing.berlios.de/

If you use Python 2.6, use the builtin module multiprocessing; otherwise you can use the backport of that: http://pypi.python.org/pypi/multiprocessing/

I'm not entirely sure of the compatibility issues between the standard library version of the package and the original one.
--
Lawrence Oluyede
[eng] http://oluyede.org - http://twitter.com/lawrenceoluyede
[ita] http://neropercaso.it - http://twitter.com/rhymes
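On the shutdown question: one pragmatic option is to register an interpreter-exit handler that terminates the worker process. A sketch using subprocess and atexit; the worker command is a placeholder, and atexit handlers only fire on a normal interpreter exit (not an unhandled kill signal), so this is a starting point rather than a complete solution:

```python
import atexit
import subprocess
import sys

# Placeholder worker: a child interpreter that would normally loop forever.
worker = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(60)'])

def stop_worker():
    # Runs when the server process exits normally; a no-op if the
    # worker has already finished.
    if worker.poll() is None:
        worker.terminate()
        worker.wait()

atexit.register(stop_worker)
```

With multiprocessing you would join() the process after sending it a shutdown message instead of terminating it, which lets it finish its current task cleanly.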
Re: Thread-safety in Pylons (Python?)
On Mar 4, 2009, at 1:58 PM, kgs wrote:
> Inside the __call__ method, after some processing, we are in the find_controller method -- and here we are *accessing and modifying* the dictionary self.controller_classes. But this dictionary is shared between many threads, so is it thread-safe?

Access to Python objects is thread-safe in that Python itself has a Global Interpreter Lock (GIL) that prevents, say, two threads from updating the exact same key at the exact same time. The GIL locks on the dict access/setting.

The GIL does not generally lock on many operations that occur at the C layer, like I/O, and while waiting on database access. Searching for "Python GIL" should turn up quite a few discussions about it.

It's for this reason that, to ensure you're effectively using a multi-core processor to its full potential, you should run a Pylons process for every core. This is what I do for all the sites I run.

Cheers,
Ben
Re: Thread-safety in Pylons (Python?)
On Mar 4, 2009, at 1:58 PM, kgs wrote:
> I could not find an unambiguous answer anywhere on whether accessing Python primitives from many threads is safe or not -- to me it looks like it might not be safe (because modifying/iterating/accessing e.g. a dictionary may result in context switches).

This is the most authoritative page: http://effbot.org/pyfaq/what-kinds-of-global-value-mutation-are-thread-safe.htm

It's still a bit ambiguous on what is really safe, but I can guarantee the two basic dict operations in question (a getitem and setitem on a steady dict value) are in fact safe.

--
Philip Jenvey
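The guarantee can be exercised directly: many threads doing plain d[key] = value on one shared dict, no lock, and nothing is lost. This is CPython-specific behaviour; on other implementations, or for read-modify-write sequences like d[k] += 1, you would still want a lock:

```python
import threading

shared = {}

def writer(tid, count):
    # Each plain setitem is a single bytecode-level operation that
    # CPython's GIL makes atomic.
    for i in range(count):
        shared[(tid, i)] = i

threads = [threading.Thread(target=writer, args=(t, 1000)) for t in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the join, all 8 * 1000 distinct keys are present, with no corruption and no lock taken in user code.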
Re: Thread-safety in Pylons (Python?)
On Thu, Mar 5, 2009 at 3:46 PM, Philip Jenvey pjen...@underboss.org wrote:
> This is the most authoritative page: http://effbot.org/pyfaq/what-kinds-of-global-value-mutation-are-thread-safe.htm
>
> It's still a bit ambiguous on what is really safe, but I can guarantee the two basic dict operations in question (a getitem and setitem on a steady dict value) are in fact safe.

Although, if I read this correctly, you are counting on the fact that the current implementation of the GIL will always remain exactly this way for the getitem and setitem operations. That seems to be a fairly safe bet though, right?

--
Cheers,
Noah
Re: Thread-safety in Pylons (Python?)
On Mar 4, 2009, at 7:50 PM, Noah Gift wrote:

> Although, if I read this correctly, you are counting on the fact that
> the current implementation of the GIL will always remain exactly this
> way for the getitem and setitem operations. That seems to be a fairly
> safe bet, though, right?

If it stops working this way, there'll be significantly larger problems in Python code than just Pylons. :)

- Ben
Looking for Senior Web Developer at Weta Digital
Just an FYI: Weta Digital is looking for a Senior Web Developer who knows Python inside and out. Being an expert at Pylons is a plus! http://www.wetafx.co.nz/jobs/

-- Cheers, Noah
Re: Thread-safety in Pylons (Python?)
On Thu, Mar 5, 2009 at 3:46 AM, Philip Jenvey pjen...@underboss.org wrote:

> On Mar 4, 2009, at 1:58 PM, kgs wrote:
>
>> I could not find an unambiguous answer anywhere on whether accessing
>> Python primitives from many threads is safe or not -- to me it looks
>> like it might not be safe (because modifying/iterating/accessing e.g.
>> a dictionary may be interrupted by a context switch).
>
> This is the most authoritative page:
>
> http://effbot.org/pyfaq/what-kinds-of-global-value-mutation-are-thread-safe.htm
>
> It's still a bit ambiguous about what is really safe, but I can
> guarantee that the two basic dict operations in question (a getitem
> and a setitem on a steady dict value) are in fact safe.

Yeah, this site is a bit ambiguous. E.g. it says:

> Operations that replace other objects may invoke those other objects'
> __del__ method when their reference count reaches zero, and that can
> affect things. This is especially true for the mass updates to
> dictionaries and lists. When in doubt, use a mutex!

But before that statement the author said that the following operations are atomic:

D[x] = y
D1.update(D2)

where x, y are objects and D1 and D2 are dicts. So how can both statements be true?

But even if we assume that getitem and setitem on a dict are atomic (on a 'steady' value only? does that mean primitives?), how do I solve the problem I am facing now: I want to start another thread T1 in my Pylons app (manually, not a Paste#http worker) and have some global dict which all HTTP workers will read. T1 will periodically update this dictionary (in fact, all it wants to do is swap the global dict with a local dict prepared during its work). The dict will hold Python primitives (other dicts too) and simple classes which act only as 'structs'. Do I need a lock?

Cheers, Kamil Gorlo
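[Editor's note] For the specific pattern Kamil describes (a background thread that prepares a replacement dict and then swaps it in whole), a lock is arguably unnecessary in CPython, because rebinding a name is itself a single atomic operation: readers see either the old dict or the fully built new one, never a half-updated structure. A hypothetical sketch (the names `config`, `refresher`, and `read_snapshot` are made up for illustration, not from the thread):

```python
import threading

# Hypothetical sketch of the "prepare locally, then swap" pattern.
# Rebinding a global name is atomic in CPython, so readers observe
# either the old dict or the complete new one -- never a partially
# updated structure -- and no lock is needed for this pattern.
config = {"version": 0}

def refresher():
    """Background thread: build the replacement off to the side, then publish."""
    global config
    for version in range(1, 4):
        fresh = {"version": version}  # fully prepared before anyone sees it
        config = fresh                # atomic name rebind: the "swap"

def read_snapshot():
    # Take one local reference up front; all subsequent lookups then hit
    # a consistent dict even if the global is swapped mid-request.
    snapshot = config
    return snapshot["version"]

t = threading.Thread(target=refresher)
t.start()
t.join()
print(read_snapshot())  # 3 once the refresher has finished
```

The one discipline readers must follow is grabbing a single local reference (`snapshot`) before doing multiple related lookups; since the refresher never mutates a published dict in place, that reference stays internally consistent for the rest of the request.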