Giampaolo Rodola' g.rod...@gmail.com added the comment:
Closing out as per msg128969.
--
___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11257
___
Changes by Giampaolo Rodola' g.rod...@gmail.com:
--
resolution:  -> wont fix
status: open -> closed
Giampaolo Rodola' g.rod...@gmail.com added the comment:
Unless there's a way to automatically call close() when a dispatcher instance
is no longer referenced (and I can't think of any way to do that), I'd say
we'd better close this as rejected.
--
Éric Araujo mer...@netwok.org added the comment:
Well, isn’t WeakRefDictionary faster than a dict here?
--
nosy: +eric.araujo
stage:  -> patch review
type: behavior -> performance
versions: -Python 2.6, Python 2.7, Python 3.1, Python 3.2
Éric Araujo mer...@netwok.org added the comment:
WeakValue*
--
Antoine Pitrou pit...@free.fr added the comment:
Éric, the weak dicts are implemented in pure Python while built-in dicts are in
C. That can make quite a difference.
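The gap Antoine describes can be seen with a rough benchmark along these lines (a sketch only: the Channel class, sizes, and repeat counts are made up here, and the exact ratio depends on the interpreter version):

```python
# Rough timing sketch: insert/look up/clear the same objects through a plain
# dict and through weakref.WeakValueDictionary (implemented in pure Python).
import timeit
import weakref

class Channel:          # hypothetical stand-in for a dispatcher object
    pass

def bench(mapping, n=10000):
    objs = [Channel() for _ in range(n)]   # strong refs keep entries alive
    def run():
        for i, o in enumerate(objs):
            mapping[i] = o
        for i in range(n):
            mapping[i]
        mapping.clear()
    return timeit.timeit(run, number=10)

plain = bench({})
weak = bench(weakref.WeakValueDictionary())
print("plain dict: %.3fs  WeakValueDictionary: %.3fs" % (plain, weak))
```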
--
nosy: +pitrou
Éric Araujo mer...@netwok.org added the comment:
Oh, I guess I didn’t understand at all what the numbers meant. I shouldn’t
have compared the values and assumed that less was better, my bad.
--
Giampaolo Rodola' g.rod...@gmail.com added the comment:
I'd be fine with this. My only concern is performance.
I've tried this:
http://code.google.com/p/pyftpdlib/issues/attachmentText?id=152&aid=-7106494857544071944&name=bench.py&token=bd350bbd6909c7c2a70da55db15d24ed
Results:
plain dict:
: 128909
nosy: mmarkk
priority: normal
severity: normal
status: open
title: asyncore stores unnecessary object references
type: behavior
versions: Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3
Changes by Antoine Pitrou pit...@free.fr:
--
assignee:  -> giampaolo.rodola
nosy: +giampaolo.rodola
Giampaolo Rodola' g.rod...@gmail.com added the comment:
Such code should be rewritten via weakref.
Can you write a patch?
--
Марк Коренберг socketp...@gmail.com added the comment:
--- asyncore.py 2010-09-15 22:18:21.0 +0600
+++ asyncore.py 2011-02-21 09:43:15.033839614 +0500
@@ -58,7 +58,7 @@
 try:
     socket_map
 except NameError:
-    socket_map = {}
+    socket_map = weakref.WeakValueDictionary()
 def
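A minimal sketch of what the proposed change buys: entries in a WeakValueDictionary vanish as soon as the last strong reference to the stored object goes away, so a forgotten dispatcher no longer keeps its socket_map entry alive (the Dispatcher class and fd number below are made up for illustration):

```python
import weakref

class Dispatcher:           # stand-in for asyncore.dispatcher
    pass

socket_map = weakref.WeakValueDictionary()
d = Dispatcher()
socket_map[4] = d           # 4 plays the role of a file descriptor
print(4 in socket_map)      # True while a strong reference exists
del d                       # drop the last strong reference
print(4 in socket_map)      # False: the entry was removed automatically
```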
Марк Коренберг socketp...@gmail.com added the comment:
sorry, forgot
import weakref
--
I have a question. Suppose I do the following:
def myfunc(a,b):
    return a+b
myfunc2=myfunc
is there any way to find all of the references to myfunc? That is, can I find
out all of the functions that may be aliased to myfunc?
second question:
class MyClass(object):
    def __init__(a,b):
In article 786181.46665...@web110610.mail.gq1.yahoo.com,
William abecedarian314...@yahoo.com wrote:
I have a question. Suppose I do the following:
def myfunc(a,b):
return a+b
myfunc2=myfunc
is there anyway to find all of the references to myfunc? That is, can I find
out all of
Dennis Lee Bieber wrote:
If a process is known to be CPU bound, I think it is typical
practice to nice the process... Lowering its priority by direct
action.
Yes, but one usually only bothers with this for long-running
tasks. It's a nicety, not an absolute requirement.
It seems like
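For reference, lowering a worker's priority from Python itself is a one-liner on POSIX systems (the increment below is illustrative; os.nice is unavailable on Windows):

```python
import os

current = os.nice(0)     # nice(0) reads the current niceness without changing it
print("niceness before:", current)
os.nice(5)               # raise niceness by 5, i.e. lower our own priority
print("niceness after:", os.nice(0))
```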
Dennis Lee Bieber [EMAIL PROTECTED] wrote:
...
Think VMS was the most applicable for that behavior... Haven't seen
any dynamic priorities on the UNIX/Linux/Solaris systems I've
encountered...
Dynamic priority scheduling is extremely common in Unixen today (and has
been for many
Karthik Gurusamy wrote:
On Jul 2, 10:57 pm, Martin v. Löwis [EMAIL PROTECTED] wrote:
I have found the stop-and-go between two processes on the same machine
leads to very poor throughput. By stop-and-go, I mean the producer and
consumer are constantly getting on and off of the CPU since the pipe
John Nagle wrote:
C gets to
run briefly, drains out the pipe, and blocks. P gets to run,
fills the pipe, and blocks. The compute-bound thread gets to run,
runs for a full time quantum, and loses the CPU to C. Wash,
rinse, repeat.
I thought that unix schedulers were usually a bit more
I have found the stop-and-go between two processes on the same machine
leads to very poor throughput. By stop-and-go, I mean the producer and
consumer are constantly getting on and off of the CPU since the pipe
gets full (or empty for consumer). Note that a producer can't run at
its top speed
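The stop-and-go behavior described above comes from the pipe's fixed kernel buffer: a producer that gets ahead of the consumer blocks as soon as the buffer fills. A small sketch that makes the write end non-blocking just to observe the point at which a writer would have been descheduled (POSIX only; the 64 KiB figure is typical of Linux, not guaranteed):

```python
import errno
import fcntl
import os

r, w = os.pipe()
# Make the write end non-blocking so a full pipe raises instead of blocking.
fcntl.fcntl(w, fcntl.F_SETFL, fcntl.fcntl(w, fcntl.F_GETFL) | os.O_NONBLOCK)

written = 0
chunk = b"x" * 4096
try:
    while True:
        written += os.write(w, chunk)
except OSError as e:
    assert e.errno == errno.EAGAIN   # the point where a blocking writer stalls
print("pipe buffer filled after %d bytes" % written)   # often 65536 on Linux
os.close(r)
os.close(w)
```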
dlomsak wrote:
Paul Rubin wrote:
dlomsak [EMAIL PROTECTED] writes:
knowledge of the topic to help. If the above are not possible but you
have a really good idea for zipping large amounts of data from one
program to another, I'd like to hear it.
Well, I was using the regular pickle at first
Steve Holden wrote:
Karthik Gurusamy wrote:
On Jul 1, 12:38 pm, dlomsak [EMAIL PROTECTED] wrote:
[...]
I have found the stop-and-go between two processes on the same machine
leads to very poor throughput. By stop-and-go, I mean the producer and
consumer are constantly getting on and
On Jul 2, 10:57 pm, Martin v. Löwis [EMAIL PROTECTED] wrote:
I have found the stop-and-go between two processes on the same machine
leads to very poor throughput. By stop-and-go, I mean the producer and
consumer are constantly getting on and off of the CPU since the pipe
gets full (or
If the problem does not require two way communication, which is
typical of a producer-consumer, it is a lot faster to allow P to fully
run before C is started.
Why do you say it's *a lot* faster. I find that it is a little faster.
The only additional overhead from switching forth and back
On Jul 3, 2:33 pm, Martin v. Löwis [EMAIL PROTECTED] wrote:
If the problem does not require two way communication, which is
typical of a producer-consumer, it is a lot faster to allow P to fully
run before C is started.
Why do you say it's *a lot* faster. I find that it is a little faster.
Karthik Gurusamy [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
|If all you had is just two processes, P and C and the amount of data
|flowing is less (say on the order of 10's of buffer-size ... e.g. 20
|times 4k), *a lot* may not be the right quantifier.
Have pipe buffer sizes really
Okay, I'm back at work and got to put some of these suggestions to use.
cPickle is doing a great job of hiking up the serialization rate and
cutting out the +=data helped a lot too. The entire search process now
for this same data set is down to about 4-5 seconds from pressing
'search' to having the
On Jul 1, 12:38 pm, dlomsak [EMAIL PROTECTED] wrote:
Thanks for the responses folks. I'm starting to think that there is
merely an inefficiency in how I'm using the sockets. The expensive
part of the program is definitely the socket transfer because I timed
each part of the routine
Karthik Gurusamy wrote:
On Jul 1, 12:38 pm, dlomsak [EMAIL PROTECTED] wrote:
[...]
I have found the stop-and-go between two processes on the same machine
leads to very poor throughput. By stop-and-go, I mean the producer and
consumer are constantly getting on and off of the CPU since the
On Jul 2, 3:01 pm, Steve Holden [EMAIL PROTECTED] wrote:
Karthik Gurusamy wrote:
On Jul 1, 12:38 pm, dlomsak [EMAIL PROTECTED] wrote:
[...]
I have found the stop-and-go between two processes on the same machine
leads to very poor throughput. By stop-and-go, I mean the producer and
Karthik Gurusamy wrote:
On Jul 2, 3:01 pm, Steve Holden [EMAIL PROTECTED] wrote:
Karthik Gurusamy wrote:
On Jul 1, 12:38 pm, dlomsak [EMAIL PROTECTED] wrote:
[...]
I have found the stop-and-go between two processes on the same machine
leads to very poor throughput. By stop-and-go, I mean
On Jul 2, 6:32 pm, Steve Holden [EMAIL PROTECTED] wrote:
Karthik Gurusamy wrote:
On Jul 2, 3:01 pm, Steve Holden [EMAIL PROTECTED] wrote:
Karthik Gurusamy wrote:
On Jul 1, 12:38 pm, dlomsak [EMAIL PROTECTED] wrote:
[...]
I have found the stop-and-go between two processes on the same
if both the search server and the web server/script are on the same
computer you could use POSH (http://poshmodule.sourceforge.net/) for
memory sharing, or if you are on UNIX you can use mmap.
this is way faster than using sockets and doesn't require the
serialization/deserialization step.
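The mmap suggestion in a minimal form: two related processes can share an anonymous mapping and exchange bytes with no socket and no pickling. A sketch for POSIX (fork-based; sizes and the payload are made up):

```python
import mmap
import os

SIZE = 1024
buf = mmap.mmap(-1, SIZE)          # anonymous mapping, shared across fork()

pid = os.fork()
if pid == 0:                       # child: write into the shared region
    buf[:5] = b"hello"
    os._exit(0)

os.waitpid(pid, 0)                 # parent: wait, then read what the child wrote
print(bytes(buf[:5]))              # b'hello'
buf.close()
```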
--
I have searched a good deal about this topic and have not found
any good information yet. It seems that the people asking all want
something a bit different than what I want and also don't divulge much
about their intentions. I wish to improve the rate of data transfer
between two python
b) use a single Python server (possibly shared with the database
process), and connect this to Apache through the
reverse proxy protocol.
Following up to myself: Instead of using a reverse proxy, you can
also implement the FastCGI protocol in the server.
Regards,
Martin
--
Martin v. Löwis [EMAIL PROTECTED] writes:
No. The CGI script has a file handle, and it is not possible to pass
a file handle to a different process.
If there is not a good Pythonic way to do the above, I am open to
mixing in some C to do the job if that is what it takes.
No, it's not
If this is a Linux server, it might be possible to use the SCM_RIGHTS
message to pass the socket between processes.
I very much doubt that the OP's problem is what he thinks it is,
i.e. that copying over a local TCP connection is what makes his
application slow.
That would require a
patch to
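At the time this required raw sendmsg() work, but modern Python (3.9+) wraps SCM_RIGHTS directly as socket.send_fds()/recv_fds(). A sketch of passing an open descriptor between two endpoints in one process (the payload and buffer sizes are illustrative; between real processes the AF_UNIX socket would span the fork or a named socket path):

```python
import os
import socket

parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_DGRAM)

r, w = os.pipe()                       # some descriptor worth passing
socket.send_fds(parent, [b"fd"], [w])  # SCM_RIGHTS: ship the write end across

msg, fds, flags, addr = socket.recv_fds(child, 16, 1)
os.write(fds[0], b"via passed fd")     # write through the received descriptor
print(os.read(r, 32))                  # b'via passed fd'

for fd in (r, w, fds[0]):
    os.close(fd)
parent.close()
child.close()
```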
Martin v. Löwis [EMAIL PROTECTED] writes:
If this is a Linux server, it might be possible to use the SCM_RIGHTS
message to pass the socket between processes.
I very much doubt that the OP's problem is what he thinks it is,
i.e. that copying over a local TCP connection is what makes his
Thanks for the responses folks. I'm starting to think that there is
merely an inefficiency in how I'm using the sockets. The expensive
part of the program is definitely the socket transfer because I timed
each part of the routine individually. For a small return, the whole
search and return takes
I guess now I'd like to know what are good practices in general to get
better results with sockets on the same local machine. I'm only
instantiating two sockets total right now - one client and one server,
and the transfer is taking 15 seconds for only 8.3MB.
It would be good if you had
Martin v. Löwis wrote:
I guess now I'd like to know what are good practices in general to get
better results with sockets on the same local machine. I'm only
instantiating two sockets total right now - one client and one server,
and the transfer is taking 15 seconds for only 8.3MB.
It
dlomsak [EMAIL PROTECTED] wrote:
...
search and return takes a fraction of a second. For a large return (in
this case 21,000 records - 8.3 MB) is taking 18 seconds. 15 of those
seconds are spent sending the serialized results from the server to
the client. I did a little bit of a blind
Hello,
I have searched a good deal about this topic and have not found
any good information yet. It seems that the people asking all want
something a bit different than what I want and also don't divulge much
about their intentions. I wish to improve the rate of data transfer
between two
On 6/30/07, dlomsak [EMAIL PROTECTED] wrote:
If there is not a good Pythonic way to do the above, I am open to
mixing in some C to do the job if that is what it takes. I apologize
if this topic has been brought up many times before but hopefully I
have stated my intentions clearly enough for
dlomsak [EMAIL PROTECTED] writes:
knowledge of the topic to help. If the above are not possible but you
have a really good idea for zipping large amounts of data from one
program to another, I'd like to hear it.
One cheesy thing you might try is serializing with marshal rather than
pickle. It
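The marshal idea in sketch form: for flat containers of builtin types, marshal.dumps() is often quicker than pickling, at the cost of being CPython-version-specific and unsafe for untrusted input. The record shape and counts below are invented to mirror the 21,000-record result set mentioned in this thread; actual timings vary by interpreter and pickle protocol:

```python
import marshal
import pickle
import timeit

records = [(i, "record-%d" % i, float(i)) for i in range(21000)]

t_pickle = timeit.timeit(
    lambda: pickle.dumps(records, pickle.HIGHEST_PROTOCOL), number=10)
t_marshal = timeit.timeit(lambda: marshal.dumps(records), number=10)
print("pickle: %.3fs  marshal: %.3fs" % (t_pickle, t_marshal))

# marshal only handles builtin types, but round-trips them faithfully:
assert marshal.loads(marshal.dumps(records)) == records
```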
Paul Rubin wrote:
dlomsak [EMAIL PROTECTED] writes:
knowledge of the topic to help. If the above are not possible but you
have a really good idea for zipping large amounts of data from one
program to another, I'd like to hear it.
One cheesy thing you might try is serializing with marshal
are mapped to method objects; URL parameters become
method parameters. See http://wiki.python.org/moin/WebFrameworks
Okay will look. I have checked out cherrypy, but it does not seem to
support direct object references, i.e. the server-side objects are
really stateless and all calls to an object
in.
This is more or less what several web frameworks do. You publish
objects; URLs are mapped to method objects; URL parameters become
method parameters. See http://wiki.python.org/moin/WebFrameworks
Okay will look. I have checked out cherrypy, but it does not seem to
support direct object
Is it possible to convert an object into a string that identifies the
object in a way, so it can later be looked up by this string.
Technically this should be possible, because things like
<__main__.Foo instance at 0xb7cfb6ac>
say everything about an object. But how can I look up the real object,
Martin Drautzburg wrote:
Is it possible to convert an object into a string that identifies the
object in a way, so it can later be looked up by this string.
Technically this should be possible, because things like
<__main__.Foo instance at 0xb7cfb6ac>
say everything about an object. But how
On Sun, 22 Apr 2007 08:07:27 -0300, Martin Drautzburg
[EMAIL PROTECTED] wrote:
Is it possible to convert an object into a string that identifies the
object in a way, so it can later be looked up by this string.
Technically this should be possible, because things like
__main__.Foo
On Apr 22, 5:07 am, Martin Drautzburg [EMAIL PROTECTED]
wrote:
<__main__.Foo instance at 0xb7cfb6ac>
But how can I look up the real object,
when I only have this string?
You can't because that identifies the instance with an address, and
pointers are not part of the python language.
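The usual workaround, since the address in the repr cannot be dereferenced: keep your own string-to-object registry. A WeakValueDictionary keeps the registry from prolonging the objects' lifetimes. The key format and helper name below are invented for illustration:

```python
import weakref

_registry = weakref.WeakValueDictionary()

def remember(obj):
    """Return a string key under which obj can be looked up later."""
    key = "%s-%#x" % (type(obj).__name__, id(obj))
    _registry[key] = obj
    return key

class Foo:
    pass

f = Foo()
key = remember(f)
assert _registry[key] is f      # the string really maps back to the object
```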
--
Gabriel Genellina wrote:
On Sun, 22 Apr 2007 08:07:27 -0300, Martin Drautzburg
[EMAIL PROTECTED] wrote:
Is it possible to convert an object into a string that identifies the
object in a way, so it can later be looked up by this string.
Technically this should be possible, because things
Is it possible to convert an object into a string that identifies the
object in a way, so it can later be looked up by this string.
Technically this should be possible, because things like
<__main__.Foo instance at 0xb7cfb6ac>
say everything about an object. But how can I look up the real
On Sun, 22 Apr 2007 12:47:10 -0300, Martin Drautzburg
[EMAIL PROTECTED] wrote:
I was thinking that it would be nice if a web application could talk to
real objects. The client side does not need to know the internals of an
object, it acts as a view for server-side models. All it has to be
DrConti wrote:
Hi Bruno, hi folks!
thank you very much for your advice.
I didn't know about the property function.
I learned also quite a lot now about references.
Ok everything is a reference but you can't get a reference of a
reference...
I saw a lot of variations on how to solve this
On Sat, 25 Mar 2006 21:33:24 -0800, DrConti wrote:
Dear Python developer community,
I'm quite new to Python, so perhaps my question is well known and the
answer too.
I need a variable alias (what in other languages you would call a
pointer (C) or a reference (Perl))
Others have given
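The idiomatic substitute discussed in this thread: share one mutable object instead of trying to alias a name. Both names see every mutation because they are bound to the same list, while rebinding a name never affects the other name (a sketch; the names are invented):

```python
box = ["initial"]
alias = box               # a second name for the *same* object, not a copy

alias[0] = "changed"
print(box[0])             # 'changed': mutation is visible through both names

box = ["rebound"]         # rebinding one name never affects the other
print(alias[0])           # still 'changed'
```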
Steven D'Aprano wrote:
On Sat, 25 Mar 2006 21:33:24 -0800, DrConti wrote:
Dear Python developer community,
I'm quite new to Python, so perhaps my question is well known and the
answer too.
I need a variable alias (what in other languages you would call a
pointer (C) or a reference (Perl))
DrConti wrote:
I need a variable alias (what in other languages you would call a
pointer (C) or a reference (Perl))
Or, you think you need it.
I read some older mail articles and I found that the official position
about that was that variable referencing wasn't implemented because
it's
Hi Bruno, hi folks!
thank you very much for your advice.
I didn't know about the property function.
I learned also quite a lot now about references.
Ok everything is a reference but you can't get a reference of a
reference...
I saw a lot of variations on how to solve this problem, but I find
DrConti wrote:
class ObjectClass:
    """Test primary Key assignment"""

if __name__ == '__main__':
    ObjectClassInstantiated=ObjectClass()
    ObjectClassInstantiated.AnAttribute='First PK Elem'
    ObjectClassInstantiated.AnotherOne='Second PK Elem'
to
the object the attribute is bound to. When you later rebind the
attribute, it only impact this binding - there's no reason it should
impact other bindings.
so my question is:
is it still true that there is no possibility to get object
references directly?
But object references *are* what
', 'Second PK Elem']
i.e. the assignment
ObjectClassInstantiated.Identifier.append(ObjectClassInstantiated.AnAttribute)
assigns only the attribute value, not the reference.
so my question is:
is it still true that there is no possibility to get object
references directly?
Is there a solution
On Sat, 2006-03-25 at 21:33 -0800, DrConti wrote:
[snip]
There was also a suggestion to write a real problem where referencing
is really needed.
I have one...:
[snap]
There are loads of discussions about the code you wrote... but... isn't
it bad practice to put the same data in two places? Or
Felipe Almeida Lessa wrote:
On Sat, 2006-03-25 at 21:33 -0800, DrConti wrote:
[snip]
There was also a suggestion to write a real problem where referencing
is really needed.
I have one...:
[snap]
There are loads of discussions about the code you wrote... but... isn't
it bad practice