[EMAIL PROTECTED] wrote:
I'm soon going to be starting on a little program that needs to output
tabular information to a large LCD or Plasma screen. Python is, of
course, my preferred language.
My first instinct is PyGame, which I have programmed for a PC monitor
before.
If all you want to
[EMAIL PROTECTED] wrote:
the queue holds references to the images, not the images themselves,
so the size should be completely irrelevant. I use one instance of
imageQueue.
hmmm.. true. And it also fails when I use PIL Image objects instead of
arrays. Any idea why compressing the string
Levi Campbell wrote:
Any and all mixing would probably happen in some sort of multimedia
library written in C (it would be both clumsy to program and slow to
execute if the calculations of raw samples/bytes were done in python) so
there shouldn't be a noticeable performance hit.
Actually,
Hmm, good ideas.
I've made some refinements and posted to the cookbook. The refinements
allow for multiple function arguments and keywords.
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/466315
-Sw.
--
http://mail.python.org/mailman/listinfo/python-list
Recently, I needed to provide a number of game sprite classes with
references to assorted images.
I needed a class to:
- allow instances to specify which image files they need.
- ensure that a name should always refer to the same image.
- only load the image if it is actually used.
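A minimal sketch of a class meeting those three requirements (the name ImageRegistry and the injected loader callable are my own; a real game would pass something like pygame.image.load as the loader):

```python
class ImageRegistry:
    """Maps names to image files, loading each image only on first use."""

    def __init__(self, loader):
        # 'loader' is any callable mapping a filename to an image object
        # (e.g. pygame.image.load); injected so this sketch needs no pygame.
        self._loader = loader
        self._files = {}   # name -> filename
        self._cache = {}   # name -> loaded image

    def register(self, name, filename):
        # A name must always refer to the same image.
        if self._files.get(name, filename) != filename:
            raise ValueError("name %r is already bound to %r"
                             % (name, self._files[name]))
        self._files[name] = filename

    def get(self, name):
        # Only load the image if it is actually used.
        if name not in self._cache:
            self._cache[name] = self._loader(self._files[name])
        return self._cache[name]


loads = []
registry = ImageRegistry(lambda f: loads.append(f) or "image:" + f)
registry.register("ship", "ship.png")
print(loads)                 # []  -- nothing loaded yet
print(registry.get("ship"))  # image:ship.png  -- first use triggers the load
print(loads)                 # ['ship.png']  -- loaded exactly once
```

Sprite classes then just call register() at definition time and get() when they first draw.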
F. GEIGER wrote:
I've def'ed a handler for EVT_IDLE in the app's main frame. There I'd like
to call the nanothreads' __iter__ method, somehow.
When I copy the __iter__ method into a, say, runOnce() method and call the
next() method of the generator returned by runOnce(), it works. But I can't
NanoThreads v11
NanoThreads allows the programmer to simulate concurrent processing
using generators as tasks, which are registered with a scheduler.
While the scheduler is running, a NanoThread can be:
- paused
- resumed
- ended (terminate and call all registered exit functions)
- killed
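The idea above can be sketched in a few lines (hypothetical names, and using the next() builtin of newer Pythons; NanoThreads also runs registered exit functions when a task ends, which is elided here to a comment):

```python
from collections import deque

class Scheduler:
    """Cooperative 'nanothreads': each task is a generator that yields
    whenever it is willing to hand control back to the scheduler."""

    def __init__(self):
        self.pool = deque()
        self.paused = set()

    def add(self, task):
        self.pool.append(task)
        return task

    def pause(self, task):
        self.paused.add(task)

    def resume(self, task):
        self.paused.discard(task)

    def kill(self, task):
        # Remove the task without letting it run again.
        try:
            self.pool.remove(task)
        except ValueError:
            pass

    def run(self):
        # Round-robin until every task finishes or is killed.  (This
        # simple loop would spin if every remaining task were paused.)
        while self.pool:
            task = self.pool.popleft()
            if task in self.paused:
                self.pool.append(task)
                continue
            try:
                next(task)
            except StopIteration:
                continue   # task finished; exit functions would run here
            self.pool.append(task)


trace = []
def worker(name, steps):
    for i in range(steps):
        trace.append("%s%d" % (name, i))
        yield

s = Scheduler()
s.add(worker("a", 2))
s.add(worker("b", 2))
s.run()
print(trace)   # ['a0', 'b0', 'a1', 'b1'] -- the two tasks interleave
```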
yoda wrote:
I'm considering moving to Stackless Python so that I can use
continuations to open a massive number of connections to the
gateway and pump the messages out to each user simultaneously. (I'm
thinking of one connection per user.)
This won't help if your gateway works
I guess because the function name may be re-bound between loop iterations.
Are there good applications of this? I don't know.
I have iterator-like objects which dynamically rebind the .next method
in order to do different things. When I want a potentially infinite
iterator to stop, I rebind
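That trick can be sketched as follows (my own illustration, not the poster's code; in modern Python the iterator protocol looks __next__ up on the type, so the rebindable behaviour has to live in an instance attribute that __next__ delegates to):

```python
class Switchable:
    """Iterator whose behaviour can be swapped at runtime by rebinding
    an instance attribute that __next__ delegates to."""

    def __init__(self):
        self._step = self._counting   # current behaviour
        self._n = 0

    def __iter__(self):
        return self

    def __next__(self):
        # Delegate so that rebinding self._step changes behaviour.
        return self._step()

    def _counting(self):
        self._n += 1
        return self._n

    def _stopped(self):
        raise StopIteration

    def stop(self):
        # Rebind the step function: the very next call ends iteration.
        self._step = self._stopped


it = Switchable()
first = [next(it), next(it), next(it)]
it.stop()
print(first)      # [1, 2, 3]
print(list(it))   # []  -- the potentially infinite iterator now stops
```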
I found LGT http://lgt.berlios.de/ but it didn't seem as if the
NanoThreads module had the same capabilities as stackless.
What specific capabilities of Stackless are you looking for, that are
missing from NanoThreads?
Sw.
This article describes a system very similar to my own.
shameless plug
The LGT library (http://developer.berlios.de/projects/lgt) provides a
simple, highly tuned 'microthread' implementation using generators. It
is called NanoThreads. It allows a microthread to be paused, resumed,
and killed,
>>> gen = iterator()
>>> gen.next
<method-wrapper object at 0x009D1B70>
>>> gen.next
<method-wrapper object at 0x009D1BB0>
>>> gen.next
<method-wrapper object at 0x009D1B70>
>>> gen.next
<method-wrapper object at 0x009D1BB0>
>>> gen.next is gen.next
False
What is behind this apparently strange behaviour? (The .next
Why is that? I thought gen.next is a callable and gen.next() actually
advances the iterator. Why shouldn't gen.next always be the same object?
That is, in essence, my question.
Executing the below script, rather than typing at a console, probably
clarifies things a little.
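The script itself seems to have been lost in the archive; a reconstruction along these lines (shown with Python 3's gen.__next__, where the thread used Python 2's gen.next) demonstrates the point:

```python
def iterator():
    yield 1

gen = iterator()

# Every attribute access builds a fresh method-wrapper object that
# binds the type's slot to this particular generator instance.
a = gen.__next__
b = gen.__next__
print(a is b)   # False: two distinct wrapper objects
print(a == b)   # True: they wrap the same slot on the same instance
print(gen.__next__ is gen.__next__)   # False, for the same reason
# The alternating id() values seen at the console arise because each
# temporary wrapper is freed at once, so its memory address is reused.
```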
Sw.
Paul McGuire wrote:
I still think there are savings to be had by looping inside the
try-except block, which avoids many setup/teardown exception handling
steps. This is not so pretty in another way (repeated while on
check()), but I would be interested in your timings w.r.t. your current
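Paul's suggestion might look like this (a hypothetical free-standing shape of the loop, written with the next() builtin of newer Pythons where the thread's code calls task.next()):

```python
from collections import deque

def loop(pool, call_exit_funcs):
    """Run generator tasks round-robin.  The hot loop sits inside a
    single try block, so exception-handling setup is paid once per
    StopIteration rather than once per task step."""
    popleft = pool.popleft
    append = pool.append
    while pool:
        try:
            while True:
                task = popleft()
                next(task)
                append(task)
        except StopIteration:
            # next(task) raised: the finished task is simply never
            # re-appended, and its exit functions run here.
            call_exit_funcs(task)


finished = []
def worker(steps):
    for _ in range(steps):
        yield

pool = deque([worker(1), worker(3)])
loop(pool, finished.append)
print(len(finished))   # 2 -- both tasks ran to completion
```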
Hello People.
I have a very tight inner loop (in a game app, so every millisecond
counts) which I have optimised below:
def loop(self):
    self_pool = self.pool
    self_call_exit_funcs = self.call_exit_funcs
    self_pool_popleft = self.pool.popleft
    self_pool_append = self.pool.append
I guess it is hard to see what the code is doing without a complete
example.
The StopIteration is actually raised by task.next(), at which point
task is removed from the list of generators (self.pool). So the
StopIteration can be raised at any time.
The specific optimisation I am after, which
Yes. It slows down the loop when there are only a few iterators in the
pool, and speeds it up when there are 2000.
My use case involves 1000 iterators, so psyco is not much help. It
doesn't solve the magic creation of locals from instance vars either.
Sw.
--
def loop(self):
    self_pool = self.pool
    self_call_exit_funcs = self.call_exit_funcs
    self_pool_popleft = self.pool.popleft
    self_pool_append = self.pool.append
    check = self.pool.__len__
    while check() > 0:
        task = self_pool_popleft()
        try:
            task.next()
        except StopIteration:
            self_call_exit_funcs(task)
            continue
        self_pool_append(task)
Psyco actually slowed down the code dramatically.
I've fixed up the code (replaced the erroneous return statement) and
uploaded the code for people to examine:
The test code is here: http://metaplay.dyndns.org:82/~xerian/fibres.txt
These are the run times (in seconds) of the test file.
without
Michael Hoffman wrote:
I think this is going to be much harder than you think, and I imagine
this will only end in frustration for you. You will not be able to do it
well with just Python. I would recommend a different fun project.
Actually, it's pretty easy, using the pyHook and Python win32
I know it's been done before, but I'm hacking away on a simple Vector
class.
class Vector(tuple):
    def __add__(self, b):
        return Vector([x+y for x,y in zip(self, b)])
    def __sub__(self, b):
        return Vector([x-y for x,y in zip(self, b)])
    def __div__(self, b):
        return Vector([x/y for x,y in zip(self, b)])
>>> class Vector(tuple):
...     x = property(lambda self: self[0])
...     y = property(lambda self: self[1])
...     z = property(lambda self: self[2])
...
>>> Vector('abc')
('a', 'b', 'c')
>>> Vector('abc').z
'c'
>>> Vector('abc')[2]
'c'
Aha! You have simultaneously proposed a neat solution, and
And what should happen for vectors of size != 3 ? I don't think that a
general purpose vector class should allow it; a Vector3D subclass would
be more natural for this.
That's the 'magic' good idea I'm looking for. I think a unified Vector
class for vectors of all sizes is a worthy goal!
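A minimal sketch of what such a unified class could look like, combining the arithmetic and the property trick from the thread (the length() and scalar __mul__ methods are my own additions for illustration):

```python
import math

class Vector(tuple):
    """Immutable vector of any dimension.  x, y and z are read-only
    views onto the first three components when they exist; indexing a
    shorter vector raises IndexError, as a tuple would."""

    x = property(lambda self: self[0])
    y = property(lambda self: self[1])
    z = property(lambda self: self[2])

    def __add__(self, other):
        return Vector(a + b for a, b in zip(self, other))

    def __sub__(self, other):
        return Vector(a - b for a, b in zip(self, other))

    def __mul__(self, scalar):
        return Vector(a * scalar for a in self)

    def length(self):
        return math.sqrt(sum(a * a for a in self))


v = Vector((3, 4))
print(v + Vector((1, 1)))    # (4, 5)
print(v.length())            # 5.0
print(Vector((1, 2, 3)).z)   # 3
```

Because Vector subclasses tuple, instances stay hashable and comparable, and any iterable of numbers can construct one.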
--
Ok, I've attached the proto PEP below.
Comments on the proto PEP and the implementation are appreciated.
Sw.
Title: Secure, standard serialization of simple Python types.
Abstract
This PEP suggests the addition of a module to the standard library,
which provides a serialization
I think you should implement it as a C extension and/or write a PEP.
This has been an unfilled need in Python for a while (SF RFE 467384).
I've submitted a proto PEP to python-dev. It's coming up against many of
the same objections as the RFE.
Sw.
--
See also bug# 471893 where jhylton suggests a PEP. Something really
ought to be done about this.
I know this, you know this... I don't understand why the suggestion is
meeting so much resistance. This is something I needed for a real world
system which moves lots of data around to untrusted
If anyone is interested, I've implemented a faster and more
space-efficient gherkin with a few bug fixes.
http://developer.berlios.de/project/showfiles.php?group_id=2847
For simple 2D graphics, your best option is pygame.
http://pygame.org/
If you need assistance, join the pygame mailing list, where you should
find someone to help you out.
I can't reproduce your large times for marshal.dumps. Could you
post your test code?
Certainly:
import sencode
import marshal
import time
value = [r for r in xrange(100)] + \
        [{1:2, 3:4, 5:6}, {'simon': 'wittber'}]
t = time.clock()
x = marshal.dumps(value)
print "marshal enc T:", time.clock() - t
Andrew Dalke wrote:
This is with Python 2.3; the stock one provided by Apple
for my Mac.
Ahh, that is the difference. I'm running Python 2.4. I've checked my
benchmarks on a friend's machine, also in Python 2.4, and received the
same results as my machine.
I expected the numbers to be like
I've written a simple module which serializes these python types:
IntType, TupleType, StringType, FloatType, LongType, ListType, DictType
It's available for perusal here:
http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/415503
It appears to work faster than pickle, however, the decode
For simple data types consider marshal as an alternative to pickle.
From the marshal documentation:
Warning: The marshal module is not intended to be secure against
erroneous or maliciously constructed data. Never unmarshal data
received from an untrusted or unauthenticated source.
BTW, your
Ahh, I had forgotten that. Though I can't recall what an attack
might be, I think it's because the C code hasn't been fully vetted
for unexpected error conditions.
I tried out the marshal module anyway.
marshal can serialize small structures very quickly; however, using the
below test
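The test itself was lost from the archive; a round-trip benchmark in the spirit of the thread's test might look like this (a modern Python 3 rewrite, hence perf_counter and print() rather than the thread's time.clock()):

```python
import marshal
import pickle
import time

# Roughly the shape of data used in the thread: a run of ints
# plus a couple of small dicts.
value = list(range(100)) + [{1: 2, 3: 4, 5: 6}, {"simon": "wittber"}]

def bench(dumps, loads, repeat=1000):
    """Time 'repeat' encode passes and 'repeat' decode passes."""
    t = time.perf_counter()
    for _ in range(repeat):
        data = dumps(value)
    encode = time.perf_counter() - t
    t = time.perf_counter()
    for _ in range(repeat):
        loads(data)
    decode = time.perf_counter() - t
    return encode, decode

print("marshal enc/dec:", bench(marshal.dumps, marshal.loads))
print("pickle  enc/dec:", bench(pickle.dumps, pickle.loads))

# Sanity check: both modules round-trip this structure losslessly.
assert marshal.loads(marshal.dumps(value)) == value
```

Note the warning quoted above still applies: marshal is only safe for data you produced yourself.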