I run gimp and mod_perl on a production server and trigger longer jobs
using the 'at' command so as not to keep users hanging around for a
result. Some of the runs can generate upwards of 1,000 images, so the
connections were timing out long before the job ever finished.
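The hand-off itself is just a one-liner from the request handler. Here's a rough sketch; the render script path and its arguments are made-up placeholders, not my real script:

```shell
#!/bin/sh
# Sketch: queue the long render with at(1) so the mod_perl request
# returns immediately. The script path below is a placeholder.
job='/usr/local/bin/render-images.pl --count 1000'

# at reads the command from stdin; "now" runs it as soon as atd
# picks it up, detached from the Apache child that queued it.
printf '%s\n' "$job" | at now 2>/dev/null || echo "atd not running"
echo "dispatched: $job"
```

The client gets its response as soon as the job is queued; 'atq' lists pending jobs and 'atrm' cancels one if needed.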

For shorter jobs, gimp start-up time can be significantly reduced - by
upwards of 8 seconds on my laptop - by giving the Apache process its
own gimp configuration directory in its home directory.

On my Debian systems the Apache process runs as user 'www-data' and its
home directory is /var/www. I copy over a '.gimp-2.2' directory and then
set the permissions:

cp -r ~/.gimp-2.2 /var/www
chown -R www-data:www-data /var/www/.gimp-2.2
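A quick way to double-check the copy took (the path and user match the Debian layout above; adjust for other distros):

```shell
#!/bin/sh
# Confirm the config directory landed where Apache's home points and
# that the ownership change stuck; path matches the Debian setup above.
d=/var/www/.gimp-2.2
if [ -d "$d" ]; then
    # -ld shows the owner and group of the directory itself,
    # without recursing into the config tree
    ls -ld "$d"
else
    echo "missing: $d"
fi
```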

I have a tarball of a simple conf directory which I use to do the same
thing on production servers, but the above is simple and effective and I
was very chuffed when I figured it out! It's not as good as keeping
persistent connections going, but gives instant results for little work :)


P.S. I'll upgrade to gimp-2.2 soon, honest!

Tom Rathborne wrote:
> On Sun, Oct 04, 2009 at 04:33:28PM -0400, Vio wrote:
>> You're right. Keeping a running Gimp daemon in memory would most
>> certainly speed things up (for subsequent calls)
> This worked very well with gimp-perl and mod_perl 10 years ago.
> It was relatively easy to establish a persistent connection from
> mod_perl to the gimp-perl server. Just as in database connections,
> each Apache child maintained a socket connection to gimp-perl.
> It's just a matter of sorting out the socket permissions, as you have
> learned. For security purposes, I ran gimp as a different user, but
> granted permissions on the socket to www-data, and allowed gimp to
> write to a specific directory in the document root to enable caching
> of output images.
> The performance was amazing, even 10 years ago.
> I'm not sure if gimp-python has a similar server mode, but if it does,
> I'm sure that mod_python could cache the connections for similar
> performance levels.
> Cheers,
> Tom
Gimp-developer mailing list