Re: Memory leak in Python

2006-05-09 Thread Peter Tillotson
1) Review your design - you say you are processing a large data set;
just make sure you are not trying to store > 3 versions of it. If you are
missing a design, create a flow chart or something that is true to the
code you have produced. You could probably even post the design if you
are brave enough.

2) Check your implementation - make sure you manage lists, arrays etc.
correctly. You need to sever links (references) to objects for them to
get swept up. I know it is obvious, but it is easy to slip in a hasty
implementation.
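To illustrate point 2 - an object only becomes collectable once the last reference to it is severed. A minimal sketch using the stdlib gc and weakref modules (the class name is just an illustration):

```python
import gc
import weakref

class Blob(object):
    """Stand-in for a large intermediate result."""
    pass

b = Blob()
probe = weakref.ref(b)   # observe the object without keeping it alive
big_list = [b]           # a second reference, e.g. a cache or results list

del b                    # not enough: big_list still holds a reference
assert probe() is not None

big_list = []            # sever the last reference ...
gc.collect()             # ... and the object can be reclaimed
assert probe() is None
```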

3) Verify and test the problem characteristics - profilers, top etc. It is
hard for us to help you much without more info. Test your assumptions.

Problem solving and debugging is a process, not some mystic art. Though
sometimes the Gremlins disappear after a pint or two :-)

p

[EMAIL PROTECTED] wrote:
 I have some Python code which runs on a huge data set. After
 starting the program the computer becomes unstable and it gets very
 difficult even to open Konsole to kill that process. What I am assuming
 is that I am running out of memory.
 
 What should I do to make sure that my code runs fine without becoming
 unstable? How should I address the memory leak problem, if any? I have
 a gig of RAM.
 
 Every help is appreciated.
 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Probability Problem

2006-04-25 Thread Peter Tillotson
I had a possibly similar problem calculating probabilities related to
premium bond permutations. With ~10^12 combinations, memory ran out very
quickly. In the end I got round it by writing a recursive function and
quantising the probability density function.
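For the dice problem quoted below, the blow-up comes from keeping every individual outcome: the list grows by a factor of 101 each pass. Keeping a table of counts instead (one convolution per die) makes the whole job a few hundred thousand additions. A sketch, assuming the same ten 0-100 dice (modern print syntax):

```python
def sum_counts(ndice=10, faces=101):
    # counts[s] == number of ways the dice seen so far can total s.
    counts = [1]                          # zero dice: one way to total 0
    for _ in range(ndice):
        new = [0] * (len(counts) + faces - 1)
        for total, ways in enumerate(counts):
            for face in range(faces):
                new[total + face] += ways
        counts = new
    return counts

counts = sum_counts()
print(counts[800])   # number of ways ten 0-100 dice sum to 800
```

From the full table you can also read off the probability of any gap between two independent sums, which is the original question.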

Elliot Temple wrote:
 Problem: Randomly generate 10 integers from 0-100 inclusive, and sum
 them. Do that twice. What is the probability the two sums are 390 apart?
 
 I have code to do part of it (below), and I know how to write code to do
 the rest. The part I have calculates the number of ways the dice can
 come out to a given number. The problem is the main loop has 9
 iterations and it takes about 2.5 minutes to begin the 4th one, and each
 iteration is about 101 times longer than the previous one. So:
 
 x = 2.5 * 101**6
 x /= (60*24*365.25)
 x
 5045631.5622908585
 
 It'd take 5,000 millennia. (If my computer didn't run out of memory
 after about 4 minutes, that is.)
 
 Any suggestions? Either a way to do the same thing much more efficiently
 (enough that I can run it) or a different way to solve the problem.
 
 Code:
 
 li = range(101)
 li2 = []
 range101 = range(101)
 for x in xrange(9):
     print "x is %s" % x
     li2 = []
     for y in li:
         for z in range101:
             li2 += [y+z]
     li = li2
 print li.count(800)
 # prints how many ways the dice can add to 800
 
 
 This link may help:
 http://www.math.csusb.edu/faculty/stanton/m262/intro_prob_models/calcprob.html
 
 
 -- Elliot Temple
 http://www.curi.us/blog/
 


Re: multiline comments

2006-04-19 Thread Peter Tillotson
Ben Finney wrote:
 Atanas Banov [EMAIL PROTECTED] writes:
 
 Edward Elliott wrote:
 Saying coders shouldn't use multiline comments to disable code
 misses the point.  Coders will comment out code regardless of the
 existence of multiline comments.  There has to be a better
 argument for leaving them out.
 i beg to differ: you'd be surprised how much effect little
 inconveniences can have.

 want to comment out a block of code? use triple-quotes. does not nest?
 ahhh, maybe it's time to get rid of that block you commented out a
 month ago just in case the new code doesn't work.
 
 Indeed. Using revision control means never needing to comment out
 blocks of code.
 
 If your revision control system is so inconvenient to use that you'd
 rather have large blocks of commented-out code, it's time to start
 using a better RCS -- perhaps a distributed one, so you can commit to
 your own local repository with abandon while trying out changes.
 

I'm not sure I agree; revision control is great but not the only answer.
In multi-developer teams working on the trunk, it is kind of
inconvenient if someone checks in broken code. It also blocks critical
path development if the person responsible for the code you conflict
with happens to be out on holiday. Block commenting is a clear flag to
that developer that something has changed - ideally he'd notice, see
sensible revision control comments, see the flag on the wiki, or you
would remember to tell him. But if all of that fails, if it is commented
in the code it should get picked up at a code review.

Personally, I prefer clear code, minimally commented, with good high
level descriptions of particularly complex sections / algorithms. The
latter doesn't always fit neatly on one line. There is an argument that
these should go into their own functions and be commented at the
function level. Again I'm not sure I agree entirely - function comments
that are auto-extracted to create API docs (sorry, Java background :-))
need only outline What a function does. There is a place for multiline
comments to describe How that is achieved.

Having said all that, I generally don't like comments: they are often
maintained poorly, too numerous, too verbose (red rag :-)) - so I'm
undecided whether they should be made easier for developers or
discouraged except where vital. Perhaps we should make them really hard
and elegant - mandate latex/mathml markup so good editors can display
the equations we are implementing :-)


Re: multiline comments

2006-04-19 Thread Peter Tillotson
nice one Jorge :-)

Jorge Godoy wrote:
 Peter Tillotson wrote:
 
 I'm not sure I agree, revision control is great but not the only answer.
 In multi-developer teams working on the trunk, it is kind of
 inconvenient if someone checks in broken code. It also blocks critical
 
 This is something that should be a policy: no untested or non-working
 code should be committed to the trunk; if you need to commit it for any
 reason, create a branch and do it there.

committing broken code to trunk should be punishable by stocks at least --
perhaps a public flogging.

Though on a serious note, you need a good reason for creating arbitrary
branches and a clearly defined naming policy. You also need to remember
to get off the branch asap before you inadvertently start a major fork.

 path development if the person responsible for the code you conflict
 with happens to be out on holiday. Block commenting is a clear flag to
 
 Here I believe that no code revision system is a substitute for project
 management.  If you have two conflicting changes on the same line of code,
 then you surely are missing some discussion on the code and more
 documentation on what is being done / has been done.  Revision management
 is no substitute for meetings and planning.
 
 that developer that something has changed - ideally he'd notice, see
 sensible revision control comments, see the flag on the wiki or you
 would remember to tell him. But if all of that fails, if it is commented
 in the code it should get picked up at a code review.
 
 I believe that it is easier to see multiple commented-out lines than just
 the beginning and ending of a multiline comment, especially when you're
 screening the code instead of reading it line by line.
 
 Personally, I prefer clear code, minimally commented with good high
 level descriptions of particularly complex section / algorithms. The
 
 We have the same taste, except that I prefer documenting more things than
 just complex algorithms so I have a lot of comments and docstrings in my
 code.
 latter doesn't always fit neatly on one line. There is an argument that
 these should go into their own functions and be commented at the
 function level. Again I'm not sure I agree entirely - function comments
 that are auto-extracted to create API docs (sorry, Java background :-))
 need only outline What a function does. There is a place for multiline
 comments to describe How that is achieved.
 
 I still believe that you're working with an inappropriate environment if
 your editor can't use some extension for the language you choose (coming
 from a Java background you might like PyDev on Eclipse, even though its
 indentation features aren't as nice as Emacs' features...) or being able to
 repeat the comment symbol from one line to the next when it wraps (keeping
 indentation, of course!)
Sadly I don't get to code at the coalface much recently. I've tinkered
in Python with PyDev and vi over the last couple of years - I just
really dislike coding on a white background. I'm sure Eclipse can do it
- I'm just not sure I've got the perseverance to work out how.

 Having said all that, I generally don't like comments: they are often
 maintained poorly, too numerous, too verbose (red rag :-)) - so I'm
 undecided whether they should be made easier for developers or
 discouraged except where vital. Perhaps we should make them really hard
 and elegant - mandate latex/mathml markup so good editors can display
 the equations we are implementing :-)
 
 :-)  There's an approach that allows using those...  I don't remember which
 docsystem allows for MathML markup.  But then, I'd go with DocBook + MathML
 + SVG ;-)  (Hey!  You started!  And you even said that you didn't like
 verbose comments... ;-))
 
oooh - but surely a picture is worth a thousand words, and with SVG no
truer word was spoken :-)

I was only half kidding about latex. I can just about read pure latex,
but that human-readable xml stuff always defeats me.


Re: Python advocacy in scientific computation

2006-02-17 Thread Peter Tillotson
Hi,

Like it - an area that doesn't come out strongly enough for me is 
Python's ability to drop down to and integrate with low level 
algorithms. This allows me to optimise the key bits of design in 
Python very quickly, and then if I still need more poke I can drop down 
to low level programming languages. Optimise design, not code, unless I 
really need to.

To be fair the same is at least partly true for Java (though supporting 
JNI code scares me), but my prototyping productivity isn't as high.

The distributed / HPC packages may also be worth noting - PyMPI and 
PyGlobus.

p

Michael Tobis wrote:
 Someone asked me to write a brief essay regarding the value-add
 proposition for Python in the Fortran community. Slightly modified to
 remove a few climatology-related specifics, here it is.
 
 I would welcome comments and corrections, and would be happy to
 contribute some version of this to the Python website if it is of
 interest.
 
 ===
 
 The established use of Fortran in continuum models such as climate
 models has some benefits, including very high performance and
 flexibility in dealing with regular arrays, backward compatibility with
 the existing code base, and the familiarity with the language among the
 modeling community. Fortran 90 and later versions have taken many of
 the lessons of object oriented programming and adapted them so that
 logical separation of modules is supported, allowing for more effective
 development of large systems. However, there are many purposes to which
 Fortran is ill-suited which are increasingly part of the modeling
 environment.
 
 These include: source and version control and audit trails for runs,
 build system management, test specification, deployment testing (across
 multiple platforms), post-processing analysis, run-time and
 asynchronous visualization, distributed control and ensemble
 management. To achieve these goals, a combination of shell scripts,
 specialized build tools, specialized applications written in several
 object-oriented languages, and various web and network deployment
 strategies have been deployed in an ad hoc manner. Not only has much
 duplication of effort occurred, a great deal of struggling up the
 learning curves of various technologies has been required as one need
 or another has been addressed in various ad hoc ways.
 
 A new need arises as the ambitions of physical modeling increase; this
 is the rapid prototyping and testing of new model components. As the
 number of possible configurations of a model increases, the expense and
 difficulty of both unit testing and integration testing becomes more
 demanding.
 
 Fortunately, there is Python. Python is a very flexible language that
 has captured the enthusiasm of commercial and scientific programmers
 alike. The perception of Python programmers coming from almost any
 other language is that they are suddenly dramatically several times
 more productive than previously, in terms of functionality delivered
 per unit of programmer time.
 
 One slogan of the Python community is that the language "fits your
 brain". Why this might be the case is an interesting question. There
 are no startling computer science breakthroughs original to the
 language. Rather, Python aficionados will claim that the language
 combines the best features of such various languages as Lisp, Perl,
 Java, and Matlab. Eschewing allegiance to a specific theory of how to
 program, Python's design instead offers the best practices from many
 other software cultures.
 
 The synergies among these programming modes are in some ways harder to
 explain than to experience. The Python novice may nevertheless observe
 that a single language can take the place of shell scripts, makefiles,
 desktop computation environments, compiled languages to build GUIs, and
 scripting languages to build web interfaces. In addition,  Python is
 useful as a wrapper for Fortran modules, facilitating the
 implementation of true test-driven design processes in Fortran models.
 
 Another Python advocacy slogan is "batteries included". The point here
 is that (in part because Python is dramatically easier to write than
 other languages) there is a very broad range of very powerful standard
 libraries that make many tasks which are difficult in other languages
 astonishingly easy in Python. For instance, drawing upon the standard
 libraries (no additional download required)  a portable webserver
 (runnable on both Microsoft and Unix-based platforms) can be
 implemented in seven lines of code. (See
 http://effbot.org/librarybook/simplehttpserver.htm ) Installation of
 pure python packages is also very easy, and installation of mixed
 language products with a Python component is generally not
 significantly harder than a comparable product with no Python
 component.
 
 Among the Python components and Python bindings of special interest to
 scientists are the elegant and powerful matplotlib plotting package,
 which began by emulating and now 

Re: python concurrency proposal

2006-01-03 Thread Peter Tillotson
I'd really like to see a concurrency system come into Python based on 
theories such as Communicating Sequential Processes (CSP) or its 
derivatives, lambda or pi calculus. These provide an analytic framework 
for developing multi thread / process apps. CSP-like concurrency is one 
of the hidden gems in the Java Tiger release (java.util.concurrent). 
The advantage of the analytic framework is that it minimises livelock 
and deadlock and facilitates debugging.

I'm no expert on the theory but I've developed under these frameworks 
and found them a more reliable way of developing distributed agent systems.
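For anyone wanting to experiment today: none of the CSP guarantees, but the flavour of channel-based communication can be had in standard Python with threads and synchronised queues. A rough sketch (modern Python; a small buffer loosely approximates a synchronous channel):

```python
import threading
import queue

def producer(chan):
    # Send five values down the channel, then a sentinel.
    for i in range(5):
        chan.put(i)
    chan.put(None)

def consumer(chan, results):
    # Receive until the sentinel arrives.
    while True:
        item = chan.get()
        if item is None:
            break
        results.append(item * 2)

chan = queue.Queue(maxsize=1)
results = []
threads = [threading.Thread(target=producer, args=(chan,)),
           threading.Thread(target=consumer, args=(chan, results))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # [0, 2, 4, 6, 8]
```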

You may also be interested in looking at 
http://sourceforge.net/projects/pympi

p
[EMAIL PROTECTED] wrote:
 Alright, so I've been following some of the arguments about enhancing
 parallelism in python, and I've kind of been struck by how hard things
 still are.  It seems like what we really need is a more pythonic
 approach.  One thing I've been seeing suggested a lot lately is that
 running jobs in separate processes, to make it easy to use the latest
 multiprocessor machines.  Makes a lot of sense to me, those processors
 are going to be more and more popular as time goes on. But it would
 also be nice if it could also turn into a way to make standard
 threading a little easier and trouble free.  But I'm not seeing an easy
 way to make it possible with the current constraints of the language,
 so it seems like we're going to need some kind of language improvement.
   Thinking of it from that perspective, I started thinking about how it
 would be easy to deal with in a more idealized sense.  It would be nice
 to abstract out the concept of running something in parallel to
 something that can be easily customized, is flexible enough to use in a
 variety of concepts, and is reasonably hard to screw up and fairly easy
 to use.  Perhaps a new built-in type might be just the trick.  Consider
 a new suite:
 
 pardef Name(self, par type, arguments...):
     self.send(dest pardef, tag, arguments)
     self.receive(tag, arguments)
     return arguments
     yield arguments
 
 so the object would then be something you can create an instance of,
 and set up like a normal object, and it would have other interface
 functions as well.  Consider your basic vector add operation:
 
 import concurrent
 import array
 
 pardef vecadd(self, concurrent.subprocess, veca, vecb, arrtype):
     import array
     output = array.array(arrtype)
     for a, b in zip(veca, vecb):
         output.append(a + b)
     return output
 
 a = array.array('d')
 b = array.array('d')
 for i in range(1000):
     a.append(float(i))
     b.append(float(i))
 
 h1 = vecadd(a[:500], b[:500], 'd')
 h2 = vecadd()
 h2.veca = a[500:]
 h2.vecb = b[500:]
 h2.arrtype = 'd'
 
 h1.run()
 h2.run()
 c = h1.result + h2.result
 
 You can see a few things in this example.  First off, you'll notice
 that vecadd has the import for array inside it.  One of the most
 important things about the pardef is that it must not inherit anything
 from the global scope, all variable passing must occur through either
 the arguments or .receive statements.  You'll also notice that it's
 possible to set the arguments like instance values.  This isn't as
 important in this case, but it could be very useful for setting
 arguments for other pardefs.  Take this example of your basic SIMD-ish
 diffusion simulation:
 
 import concurrent
 
 pardef vecadd(self, concurrent.subprocess, right, left, up, down,
               initval):
     current = initval
     maxruns = 100
     updef = not (isinstance(up, int) or isinstance(up, float))
     downdef = not (isinstance(down, int) or isinstance(down, float))
     rightdef = not (isinstance(right, int) or isinstance(right, float))
     leftdef = not (isinstance(left, int) or isinstance(left, float))
     for i in range(maxruns):
         if updef:
             upval = self.receive(up, 'up')
         else:
             upval = up
         if downdef:
             downval = self.receive(down, 'down')
         else:
             downval = down
         if rightdef:
             rightval = self.receive(right, 'right')
         else:
             rightval = right
         if leftdef:
             leftval = self.receive(left, 'left')
         else:
             leftval = left
         current = (upval + downval + leftval + rightval) / 4
         if updef:
             up.send('down', current)
         if downdef:
             down.send('up', current)
         if rightdef:
             right.send('left', current)
         if leftdef:
             left.send('right', current)
     return current
 
 diffgrid = {}
 for x, y in zip(range(10), range(10)):
   diffgrid[(x, y)] = vecadd()
 for x, y in zip(range(10), range(10)):
   

Re: Send password over TCP connection

2005-10-10 Thread Peter Tillotson
The simplest approach is to one-way hash the password ... perhaps using md5.

Normally with passwords the server only has to check whether it is the same 
word; assuming the same hash algorithm, the same hash value can be 
created at the client.

It's not hugely secure ... anyone sniffing can grab your hash value and 
then try to crack it at their leisure. It would be better to communicate 
over SSL.

Anyone know of a simple ssl api in python :-)
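One step up from a bare hash (which, as noted, can be sniffed and replayed) is a challenge-response exchange. A sketch using the stdlib hmac/hashlib; the helper names are illustrative, and a real deployment should still prefer SSL/TLS:

```python
import hashlib
import hmac
import os

# Server sends a random challenge; client returns HMAC(password, challenge),
# so neither the password nor a replayable hash crosses the wire.
def make_challenge():
    return os.urandom(16)

def client_response(password, challenge):
    return hmac.new(password, challenge, hashlib.md5).hexdigest()

def server_check(stored_password, challenge, response):
    expected = hmac.new(stored_password, challenge, hashlib.md5).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = client_response(b"s3cret", challenge)
print(server_check(b"s3cret", challenge, response))   # True
```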

dcrespo wrote:
 Hi all,
 
 I have a program that serves client programs. The server has a login
 password, which has to be used by each client for logging in. So, when
 the client connects, it sends a string with a password, which is then
 validated on the server side. The problem is obvious: anyone can get
 the password just sniffing the network.
 
 How can I solve this?
 
 Daniel
 


Re: Python profiler

2005-10-04 Thread Peter Tillotson
look in the gc module ...
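For example, gc.get_objects() gives you every object the collector is tracking, which is enough for a rough memory census (a sketch only; it counts objects, it is not a timing profiler):

```python
import gc
from collections import Counter

def object_census(top=10):
    # Tally every object currently tracked by the garbage collector,
    # by type name -- a crude but quick way to spot what piles up.
    counts = Counter(type(o).__name__ for o in gc.get_objects())
    return counts.most_common(top)

for name, n in object_census():
    print("%-20s %d" % (name, n))
```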

Celine  Dave wrote:
 Hello All,
 
 I am trying to find a profiler that can measure the
 memory usage in a Python program. I would like to
 gather some statistics about object usages. For
 example, I would like to be able to see how much time
 it takes to search for an item in a dict object, how
 many times it has to access the symbol table to
 retrieve a specific item, and things like that.
 
 Thanks,
 
 Dave
 
 
   


Re: Advanced concurrancy

2005-08-01 Thread Peter Tillotson
I've not yet had a chance to try some examples, but I've looked through 
the documentation. It feels quite familiar, but I'd say that it is 
closer to Jade, the FIPA (Foundation for Intelligent Physical Agents) 
compliant agent framework, than to CSP or pi calculus. I like the 
behaviour (component microthread) model, but the advantage of CSP / pi 
calculus is that the resulting distributed system remains open to 
mathematical analysis.

For concurrency it is the best framework I've seen :-) Have you come 
across the PyLinda tuplespace implementation? It might be well worth a 
look. I might be barking up the wrong tree - but it seems to me that 
there could be considerable overlap between tuplespaces and mailboxes, 
though you did mention that you were moving towards Twisted as the 
underlying platform for the future.

I'm quite interested in the mini version and also in using the modules as 
mobile code rather than installing them formally. I'll probably zip the 
Axon directory and distribute the zip with the code, adding the zip to 
the python path dynamically.

cheers

p

Michael Sparks wrote:
 Peter Tillotson wrote:
 
 
Hi,

I'm looking for an advanced concurrency module for python and don't seem
to be able to find anything suitable. Does anyone know where I might
find one? I know that there is CSP like functionality built into
Stackless but i'd like students to be able to use a standard python build.
 
 
 Please take a look at Kamaelia* - it /probably/ has what you're after by the
 sounds of things. Currently the unit for sequential process can be either
 generators or threads, and is single CPU, single process, however we do
 expect to make the system multi-process and multi-system.
* http://kamaelia.sourceforge.net/
 
 Currently it runs on Linux, Mac OS X, Windows and a subset works nicely on
 Series 60 mobiles. (That has separate packaging) It works with standard
 Python versions 2.2 and upwards.
 
 The basic idea in Kamaelia is you have a class that represents a concurrent
 unit that communicates with local interfaces only which are essentially
 queues. The specific metaphor we use is that of an office worker with
 inboxes and outboxes with deliveries made between outboxes to inboxes.
 There also exists a simple environmental/service lookup facility which acts
 like an assistant in the above metaphor, and has natural similarities to a
 Linda type system.
 
 (The actual rationale for the assistant facility though is based on
 biological systems. We have communicating linked concurrent components -
 which is much like many biological systems. However in addition to that
 most biological systems also have a hormonal system - which is part of the
 thinking behind the assistant system)
 
 Generators (when embedded in a class) lend themselves very nicely to this
 sort of model in our experience /because/ they are limited to a single
 level (with regard to yield).
 
 It's probably suitable for your students because we've tested the system on
 pre-university trainees, and vacation trainees, and found they're able to
 pick up the system, learn the basic ideas within a week or so (I have some
 exercises on how to build a mini- version if that helps), and build
 interesting systems. 
 
 For example we had a pre-university trainee start with us at the beginning
 of the year, learn python, Kamaelia, and build a simple streaming system
 taking a video file, taking snapshots and sending those to mobile phones
 and PC's - this was over a period of 3 months. He'd only done a little bit
 of Access in the past, and a little bit of VB. Well, that as well as a
 simple learning system simulating a digital TV decode chain, but taking a
 script instead of a transport stream.
 
 We recently made a 0.2.0 release of the system (not announced on c.l.p yet)
 that includes (basic) support for a wide range of multimedia/networked apps
 which might help people getting started. Some interesting new additions in
 the release are an IPython integration - allowing you to build Kamaelia
 systems on the fly using a shell, much like you can build unix pipelines,
 as well as a visual introspection tool (and network graph visualiser) which
 allows you to see inside systems as they are running. (This has turned out
 to be extremely useful - as you can expect with any CSP-type system)
 
 The specific use cases you mention are also very closed aligned with our
 aims for the project. 
 
 We're essentially working on making concurrency easy and natural to use,
 (starting from the domain of networked multimedia). You can do incremental
 development and transformation in exactly the way it sounds like you want,
 and build interesting systems. We're also working on the assumption that if
 you do that you can get performance later by specific optimisations (eg
 more CPUs).
* Example of incremental component development here:
  http://tinyurl.com/dp8n7
 
 By the time we reach a 1.0 release (of Kamaelia) we're also aiming to be
 able

Re: Advanced concurrancy

2005-07-29 Thread Peter Tillotson
Cheers Guys,

I have come across Twisted and used it in async code. What I'm really 
looking for is something that provides concurrency based on CSP or pi 
calculus, or something that looks much more like Java's JSR 166, which 
is now integrated in Tiger.

Peter Tillotson wrote:
 Hi,
 
 I'm looking for an advanced concurrency module for python and don't seem 
 to be able to find anything suitable. Does anyone know where I might 
 find one? I know that there is CSP like functionality built into 
 Stackless but i'd like students to be able to use a standard python build.
 
 I'm trying to develop distributed / Grid computing modules based on 
 python. The aim is to be able to use barriers for synchronisation and 
 channels for communication between processes running on a single box. 
 Then the jump to multiple processes on multiple boxes and eventually to 
 MPI implementations. Hopefully, each jump should not be that big a leap.
 
 Of course it would be nice if there was a robust way of managing 
 concurrency in python aswell ;-)
 
 p


Advanced concurrancy

2005-07-28 Thread Peter Tillotson
Hi,

I'm looking for an advanced concurrency module for Python and don't seem 
to be able to find anything suitable. Does anyone know where I might 
find one? I know that there is CSP-like functionality built into 
Stackless, but I'd like students to be able to use a standard Python build.

I'm trying to develop distributed / Grid computing modules based on 
python. The aim is to be able to use barriers for synchronisation and 
channels for communication between processes running on a single box. 
Then the jump to multiple processes on multiple boxes and eventually to 
MPI implementations. Hopefully, each jump should not be that big a leap.

Of course it would be nice if there was a robust way of managing 
concurrency in python as well ;-)

p


Re: Detecting computers on network

2005-07-22 Thread Peter Tillotson
You could use a sniffer in promiscuous mode - pypcap or something 
like it. This will record every packet seen by your network card. Whether 
it will work depends on whether you are on a true broadcast network.

If a box is on and completely inactive you'll never see it, but most 
boxes do something. Windows boxes positively shout about their presence :-)

Basically this is passive nmap; nmap itself will try to open a tcp or udp 
connection on every machine, so you're going to generate a lot of traffic.

If you've got access to the boxes, enabling ICMP and using that is the 
proper way.
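A quick sketch of the ICMP route, shelling out to the system ping (flags below are for Linux iputils ping; adjust for other platforms):

```python
import subprocess

def alive(hosts):
    # Ping each address once; exit status 0 means a reply came back.
    up = []
    for host in hosts:
        try:
            rc = subprocess.call(
                ["ping", "-c", "1", "-W", "1", host],
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL)
        except OSError:          # no ping binary available
            continue
        if rc == 0:
            up.append(host)
    return up

print(alive(["127.0.0.1"]))      # typically ['127.0.0.1'] on a live box
```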

Sandeep Arya wrote:
 Hello dear
 
 I had sent my earlier queries regarding the same topic. However, just to
 be more specific this time..
 
 I just want to detect whether the IP addresses in a list of IP addresses
 are alive or not.
 
 How can I do this?
 
 Shall I try to connect to them and check whether my connection works?
 If it works, that means alive (connection based)
 
 Shall I send some buffer value (whatever) to a socket using sendto(...)
 and then check the return value? (Connectionless)
 
 Well, for me it does not matter whether it is connection based or
 connectionless. I just want to know who is alive in my LAN.
 
 This application will be for the computers in my LAN, not for a mega
 network.
 
 The LAN will have just some bridges and computers.
 
 I need to detect them all..
 
 However, it does not matter whether all of them reply or not. I just
 want to know that at least some of them reply. The rest I will take care
 of...
 
 
 Sandeep
 
 


Re: Calculating average time

2005-07-08 Thread Peter Tillotson
have a look at the timeit module as well
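Two quick sketches: timeit for micro-timings of a snippet, and plain timedelta arithmetic for averaging timestamps you have already collected - no need to split out min/sec/msec by hand:

```python
import datetime
import timeit

# 1) timeit repeats a snippet and reports total time; divide for the average.
n = 1000
total = timeit.timeit("sum(range(100))", number=n)
print("average: %.6f s per run" % (total / n))

# 2) timedeltas sum and divide directly (values here are illustrative).
durations = [datetime.timedelta(seconds=s) for s in (1.2, 0.8, 1.0)]
avg = sum(durations, datetime.timedelta()) / len(durations)
print(avg)   # 0:00:01
```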

GregM wrote:
 Hi,
 I'm hoping that someone can point me in the right direction with this.
 What I would like to do is calculate the average time it takes to load
 a page. I've been searching the net and reading lots but I haven't
 found anything that helps too much. I'm testing our web site and hiting
 +6000 urls per test. Here is a subset of what I'm doing.
 
 import IEC
 #IE controller from http://www.mayukhbose.com/python/IEC/index.php
 from win32com.client import Dispatch
 import re
 import time
 import datetime
 from sys import exc_info, stdout, argv, exit
 failedlinks = []
 links = open(testfile).readlines()
 totalNumberTests = len(links)
 ie = IEC.IEController()
 start = datetime.datetime.today()
 # asctime() returns a human readable time stamp whereas time() doesn't
 startTimeStr = time.asctime()
 for link in links:
     start = datetime.datetime.today()
     ie.Navigate(link)
     end = datetime.datetime.today()
     pagetext = ie.GetDocumentText()
     # check the returned web page for some things
     if not re.search(searchterm, pagetext):
         failedlinks.append(link)
 ie.CloseWindow()
 finished = datetime.datetime.today()
 finishedTimeStr = time.asctime()
 # then I print out results, times, etc.
 
 So:
 1. Is there a better time function to use?
 
 2. To calculate the average times do I need to split up min, sec, and
 msec and then just do a standard average calculation or is there a
 better way?
 
 3. is there a more efficient way to do this?
 
 4. kind of OT but is there any control like this for Mozilla or
 firefox?
 
 This is not intended to be any sort of load tester just a url
 validation and page check.
 
 Thanks in advance.
 Greg.
 


importing packages from a zip file

2005-06-29 Thread Peter Tillotson
Hi all,

I was wondering if this is possible. In Python v2.3 the import system
was extended via PEP 302 to cope with packages. *.py files in a directory
hierarchy can be imported as modules; each level in the directory
hierarchy needs to contain at least an empty __init__.py file.

eg. With the file system

base/
__init__.py
branch1/
__init__.py
myModule.py

I can import myModule as follows

import base.branch1.myModule

At the same time it's possible to store modules in a flat zip-file and
import modules with the following.

from myZip.zip import myModule.py

but is there a way to do both at the same time? eg.

from myZip.zip import base.branch1.myModule

I'm interested in this in the development of mobile code for some Grid
applications :-)

thanks in advance

p


Re: importing packages from a zip file

2005-06-29 Thread Peter Tillotson
solution: you have to add the zip archives to the PYTHONPATH; it can be
done in the env but also as below

import sys, os.path
zipPackages = ['base.zip']
for package in zipPackages:
    sys.path.insert(0, os.path.join(sys.path[0], package))

import base.branch1.myModule
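The same recipe end-to-end, as a self-contained sketch (modern Python; the stdlib zipimport hook picks the package up once the archive is on sys.path - file and module contents here are illustrative):

```python
import importlib
import sys
import zipfile

# Build a small zip containing a package hierarchy, then import through it.
with zipfile.ZipFile("base.zip", "w") as z:
    z.writestr("base/__init__.py", "")
    z.writestr("base/branch1/__init__.py", "")
    z.writestr("base/branch1/myModule.py", "VALUE = 42\n")

sys.path.insert(0, "base.zip")
mod = importlib.import_module("base.branch1.myModule")
print(mod.VALUE)   # 42
```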

Peter Tillotson wrote:
 Hi all,
 
 I was wondering if this is possible. In Python v2.3 the import system
 was extended via PEP 302 to cope with packages. *.py files in a directory
 hierarchy can be imported as modules; each level in the directory
 hierarchy needs to contain at least an empty __init__.py file.
 
 eg. With the file system
 
 base/
 __init__.py
 branch1/
 __init__.py
 myModule.py
 
 I can import myModule as follows
 
 import base.branch1.myModule
 
 At the same time its possible to store modules in a flat zip-file and
 import modules with the following.
 
 from myZip.zip import myModule.py
 
 but is there a way to do both at the same time? eg.
 
 from myZip.zip import base.branch1.myModule
 
 I'm interested in this in the development of mobile code for some Grid
 applications :-)
 
 thanks in advance
 
 p


Re: importing packages from a zip file

2005-06-29 Thread Peter Tillotson
cheers Scott

should have been
from myZip.zip import base.branch1.myModule.py

and no, it didn't work. anyone know a reason why this syntax is not
preferred??

sorry - posted the soln again; it works but feels nasty

Scott David Daniels wrote:
 Peter Tillotson wrote:
 
 ... With the file system

 base/
 __init__.py
 branch1/
 __init__.py
 myModule.py

 At the same time its possible to store modules in a flat zip-file and
 import modules with the following.

 
 
 
 Does this work for you?  It gives me a syntax error.
 
 Typically, put the zip file on the sys.path list, and import modules
 and packages inside it.  If you zip up the above structure, you can use:
 
 sys.path.insert(0, 'myZip.zip')
 import base.branch1.myModule
 
 --Scott David Daniels
 [EMAIL PROTECTED]