Issue combining gzip and subprocess

2009-07-21 Thread Iwan Vosloo
Hi there,

We tried to gzip the output of a shell command, but this gives a strange
result: the resulting file seems to be the plaintext output of the
command concatenated with the zipped content.

For example:

import gzip
import subprocess

f = gzip.open(filename, 'w')
subprocess.check_call(['ls', '-la'], stdout=f)
f.close()

Using a normal file works as expected, but a GzipFile results in a file
containing what looks like the unzipped data, followed by the zipped
data.

I suspect this may have something to do with limitations of GzipFile,
such as the fact that it does not implement truncate().

Does anyone have an explanation / can you suggest a nice solution for
doing what we are trying to do?
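
In the meantime we are considering a workaround (just a minimal sketch,
and it assumes the problem is that the child process writes straight to
the underlying file descriptor rather than through the GzipFile):
capture the output in Python and write it into the GzipFile ourselves:

import gzip
import subprocess

# Capture the command's stdout in Python, then write the bytes through
# the GzipFile so they actually pass through the compressor.
proc = subprocess.Popen(['ls', '-la'], stdout=subprocess.PIPE)
output, _ = proc.communicate()

f = gzip.open(filename, 'wb')  # 'filename' as in the example above
f.write(output)
f.close()

This reads the whole output into memory, though, which is fine for ls
but would need chunked reads for anything large. We would still prefer
to understand why the direct redirection misbehaves.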

Regards
- Iwan

-- 
http://mail.python.org/mailman/listinfo/python-list


How to debug this import problem?

2009-05-08 Thread Iwan Vosloo
Hi there,

We have a rather complicated program which does a bit of os.chdir and
sys.path manipulations.  In between all of this, it imports the decimal
module several times.

However, it sometimes ends up importing a fresh copy of decimal.  (This
is a problem, because decimal.Decimal as imported at point A is then not
the same class as decimal.Decimal imported somewhere else.)

In trying to figure out what's happening, we changed the code in
decimal to print out id(sys) whenever decimal gets imported.  This, too,
gives different values at different times.  My suspicion is that, when
importing, Python checks sys.modules - but each time sys is already a
different sys.

Any ideas on how we can gather more data to find out exactly what causes
this? The path manipulations are a suspect, but unfortunately we cannot
remove them.  If we can trace the problem from the import end, though,
we may be able to isolate the exact manipulation that is the culprit.
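
For example, a small helper like this (the function name and labels are
just illustrative) called at each suspect point would show which copy of
sys and decimal that call site sees, and where decimal was loaded from:

import sys

def report(label):
    # Show which copy of sys and decimal this call site is looking at,
    # and from which file decimal was loaded.
    decimal_mod = sys.modules.get('decimal')
    print('%s: id(sys)=%s id(decimal)=%s loaded from %s' % (
        label, id(sys), id(decimal_mod),
        getattr(decimal_mod, '__file__', None)))

report('before path manipulation')
import decimal
report('after importing decimal')

If the "loaded from" path differs between calls, at least we would know
which sys.path entry the second copy comes from.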

This code incidentally also runs in a virtualenv environment AND uses
setuptools.  None of these complications can be removed...

Regards
- Iwan

--
http://mail.python.org/mailman/listinfo/python-list


Re: Structuring larger applications - ideas

2005-05-16 Thread Iwan Vosloo

I know my (foreign to Python) one-class-per-module idea is what makes
life more difficult for me here.  And there is an argument to be made
that it is a stupid remnant I stick to from having used it in other
programming languages (do I have to admit C++ in my background?).  Two
small examples of where it is useful for me:  my development
environment is run by make to a large extent.  Many standard UNIX tools
have the intelligence to deal with filenames, and if you know a file
corresponds to a class, you have a lot more info available in your
makefile.  Also, if I use my version control software (currently GNU
Arch) to see what's changed, for example, I get a list of changed files,
which again gives me more info because I know each file corresponds to a
class.
So I can do a number of such small things using that convention, and it
works well with standard old UNIX tools.  And I find that valuable.

As for the dependencies - I'm trying to keep the specification of them
simple.  From a design point of view, it makes sense to me to specify
only ONCE that largish-collection-of-classes-A depends on
largish-collection-of-classes-B.  As another small example: say I change
the name of largish-collection-of-classes-B - this way I only need to do
it once; otherwise I need to grep and sed all over the place in order to
do it.

It just feels cleaner and easier.

I know that, essentially, if you stick to having many classes in a
Python module, you get what I referred to in the previous paragraph,
because you can stick to importing other modules at the beginning of a
module and so have them and their contents available in the entire file.
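
As a small sketch of what I mean (package and class names purely
hypothetical): with one class per module I can still let the package's
__init__.py re-export its classes, so client code names the collection
only once:

# collection_b/__init__.py -- one class per module, re-exported here
from collection_b.widget import Widget
from collection_b.gadget import Gadget

# client.py -- the dependency on collection_b is stated only once
from collection_b import Widget, Gadget

class Client(object):
    def build(self):
        return [Widget(), Gadget()]

If collection_b is ever renamed, only the import lines change, not every
reference scattered through the client code.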

Still, I'm investigating whether I can bend the ever-so-flexible Python
to work well with my admittedly somewhat strange requirement.  And, of
course, I'd like to hear what others think.

-i

-- 
http://mail.python.org/mailman/listinfo/python-list

