[Gimp-developer] Wish: A better animation system inside GIMP

2010-08-29 Thread LightningIsMyName
Hello,

Currently, creating animations in GIMP is not done in a very clean
way. To specify the duration of each frame, and whether it should
remain visible or be hidden after that period, we abuse the layer
names. Until recently we also had a limit of one layer per frame, so a
frame couldn't be composed of several layers (maintaining the frames
had to be done in other files), but layer groups can now solve this,
so the animation process is already a bit better.
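
For reference, this is roughly what the current hack looks like from
the Python-Fu console - a minimal sketch against the GIMP 2.6
Python-Fu API, where the "(100ms)" delay and "(combine)"/"(replace)"
disposal tags are the bits that file-gif-save parses out of the layer
names:

# Minimal sketch of the current convention: frame timing and disposal
# live in the *layer names*, which the GIF plug-in parses on export.
from gimpfu import *

img = gimp.Image(64, 64, RGB)
for i, color in enumerate([(255, 0, 0), (0, 255, 0), (0, 0, 255)]):
    # "(100ms)" = frame delay, "(replace)"/"(combine)" = disposal hint;
    # the rest of the name is ignored by file-gif-save.
    layer = gimp.Layer(img, "Frame %d (100ms) (replace)" % (i + 1),
                       64, 64, RGBA_IMAGE, 100, NORMAL_MODE)
    img.add_layer(layer, 0)
    pdb.gimp_context_set_foreground(color)
    pdb.gimp_edit_fill(layer, FOREGROUND_FILL)

gimp.Display(img)
# Exporting this image as a GIF animation honours the tags above;
# nothing but the layer names carries that data.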

I'm posting a small wishlist for using GIMP for animations here, to
get general approval before filing it as an enhancement request in
Bugzilla:

* More supported frame disposal modes - GIF has three frame disposal
modes (taken from
http://www.webreference.com/content/studio/disposal.html ; they map to
disposal method values 1, 3 and 2 of the GIF89a Graphic Control
Extension):
1. Leave - after the frame's period has ended, its pixels remain
visible and the next frames are displayed on top of it. This mode IS
supported in GIMP.
2. Restore Previous - after the frame's period has ended, its pixels
become transparent, letting you see what was in that region before the
current frame covered it. This mode IS supported in GIMP.
3. Restore Background - after the frame's period has ended, the
rectangle of that frame is replaced by a transparent rectangle. It's
similar to Restore Previous, except that after the frame disappears
you can't see what was below it. This mode IS NOT supported in GIMP,
and this is the mode I want to add. Supporting it means modifying the
GIF load/save plug-in, the animation preview plug-in and possibly the
un/optimize plug-ins.

* A better animation editing mode. Abusing the layer names is a very
bad way to do this (at least in my opinion) - see a similar case in
https://bugzilla.gnome.org/show_bug.cgi?id=344910 comment 3. We should
instead remember the animation parameters in parasites and manage them
from an animation editor plug-in (which leads to my next point - an
animation editor; see the sketch after this list for what the parasite
side could look like).

* GIMP should have an animation editor that is integrated with a
preview of the animation. Editing the animation and then closing the
editor just to open the preview plug-in is a very bad workflow; you
should be able to preview from the same place you edit.
The editor should also allow editing the layer parasites that hold the
animation parameters.
I have something in mind similar to Microsoft GIF Animator
(http://en.wikipedia.org/wiki/Microsoft_GIF_Animator), or anything
with a similarly simple UI.
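
To make the parasite idea from the second point concrete, here is a
rough sketch of what storing the frame parameters per layer could look
like from Python-Fu. The parasite name "gimp-anim-frame" and the
"key=value" payload are invented for illustration - only the parasite
calls themselves are existing API - and a real animation editor would
of course hide all of this behind a UI:

from gimpfu import *

def set_frame_params(layer, delay_ms, dispose):
    # 1 == persistent flag, so the parasite is saved inside the XCF file.
    data = "delay=%d;dispose=%s" % (delay_ms, dispose)
    layer.attach_new_parasite("gimp-anim-frame", 1, data)

def get_frame_params(layer):
    parasite = layer.parasite_find("gimp-anim-frame")
    if parasite is None:
        # Sensible defaults when a layer carries no animation data yet.
        return {"delay": 100, "dispose": "combine"}
    fields = dict(item.split("=") for item in parasite.data.split(";"))
    return {"delay": int(fields["delay"]), "dispose": fields["dispose"]}

img = gimp.image_list()[0]          # whatever image is currently open
for layer in img.layers:
    set_frame_params(layer, 100, "background")
    print layer.name, get_frame_params(layer)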

This is how I imagine the animation workflow with GIMP - I'd
appreciate some feedback.
~LightningIsMyName
___
Gimp-developer mailing list
Gimp-developer@lists.XCF.Berkeley.EDU
https://lists.XCF.Berkeley.EDU/mailman/listinfo/gimp-developer


Re: [Gimp-developer] Wish: A better animation system inside GIMP

2010-08-29 Thread Olivier
You seem to be interested only in animated GIFs?

Even in this case, are you aware of the GIMP Animation Package?

Olivier Lecarme


Re: [Gimp-developer] Wish: A better animation system inside GIMP

2010-08-29 Thread LightningIsMyName
Hello,

Forgot to mention that it would also require changing most animation
scripts and GAP...

On Sun, Aug 29, 2010 at 10:15 AM, Olivier oleca...@gmail.com wrote:
 You seem to be interested only in animated GIF?

 Even in this case, are you aware of the GIMP Animation Package?

 Olivier Lecarme

First of all, yes - I am aware of GAP and I have used it several times
(although I'm still not completely familiar with it).

Still, abusing layer names must stop, and this is my main request - and
in order to stop this we must introduce a very simple animation editor
(since we have many animation scripts and it should be possible to
edit their results). GAP is very good but complicated, and I'm
referring to something much simpler, only for editing the frame
duration/disposal (instead of the ugly layer name hack) - nothing
more.

~LightningIsMyName


Re: [Gimp-developer] Wish: A better animation system inside GIMP

2010-08-29 Thread Alexandre Prokoudine
On 8/29/10, LightningIsMyName wrote:

 Still, abusing layer names must stop and this is my main request - and
 in order to stop this we must introduce a very simple animation editor

With keyframing, presumably?

Alexandre Prokoudine
http://libregraphicsworld.org


Re: [Gimp-developer] Wish: A better animation system inside GIMP

2010-08-29 Thread Olivier
2010/8/29 LightningIsMyName lightningismyn...@gmail.com

 Still, abusing layer names must stop, and this is my main request - and
 in order to stop this we must introduce a very simple animation editor
 (since we have many animation scripts and it should be possible to
 edit their results). GAP is very good but complicated, and I'm
 referring to something much simpler, only for editing the frame
 duration/disposal (instead of the ugly layer name hack) - nothing
 more.

GAP is not very complicated; it is simply not well described.

If you are interested in multi-layer animations, the duration and mode
of a layer are obviously layer properties. Why not use the layer name
for them? It is available, and not very useful for anything else in
this particular case.
-- 
Olivier Lecarme


Re: [Gimp-developer] Dummy Layer with particular dynamic effect

2010-08-29 Thread David Gowers
On Sun, Aug 29, 2010 at 4:24 AM, Jacopo Corzani corz...@gmail.com wrote:

 Well, adjustment layers is what GIMP developers seem to refer to as
 layer abuse :)

 There are different ways to implement non-destructive editing. Would
 you be interested to find out more?

 Alexandre Prokoudine
 http://libregraphicsworld.org




 Hi Alexandre,
 I could agree with your view of this as layer abuse, but adjustment
 layers are a quick and robust way to manage effects whenever I want.
 Changing a curve (or any other effect) dynamically, without
 duplicating layers, would be great.
 Sure, I'm interested in another quick way to handle effects over time
 without abusing layers, but I don't have any idea that speeds up work
 the way adjustment layers do... :)
 Thanks for your response.

Peter Sikking has already expressed a way of doing this: you attach a
list of effects to a layer or layer group.

http://www.mmiworks.net/eng/publications/2007/05/lgm-top-gimp-user-requests_25.html

As he observes, this avoids the problem of 'adjustment layers' not
actually being layers in any normal sense (that is, their lack of
content) while allowing the same or greater functionality.
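
As a purely illustrative toy model (none of this is GIMP or GEGL API),
the idea is that a layer keeps its pixels untouched and carries an
ordered list of operations that are applied only when the layer is
composited, so the 'adjustment' never has to pretend to be a layer:

# Toy model of "a list of effects attached to a layer" -- invented
# names, no real GIMP/GEGL API involved.
class Effect(object):
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn                      # pixels -> pixels

class Layer(object):
    def __init__(self, name, pixels):
        self.name = name
        self.pixels = pixels              # stored content, never modified
        self.effects = []                 # ordered, non-destructive chain

    def rendered(self):
        # Effects run only at composite time, on a copy of the content.
        result = list(self.pixels)
        for effect in self.effects:
            result = effect.fn(result)
        return result

layer = Layer("photo", [10, 20, 30])
layer.effects.append(Effect("brighten",
                            lambda px: [min(255, p + 40) for p in px]))
print layer.pixels      # [10, 20, 30] -- untouched
print layer.rendered()  # [50, 60, 70] -- effect applied on the fly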


Re: [Gimp-developer] Dummy Layer with particular dynamic effect

2010-08-29 Thread Jacopo Corzani
On 08/29/2010 12:46 PM, David Gowers wrote:
  
 Peter Sikking has already expressed a way of doing this: you attach a
 list of effects to a layer or layer group.

 http://www.mmiworks.net/eng/publications/2007/05/lgm-top-gimp-user-requests_25.html

 As he observes, this avoids the problem of 'adjustment layers' not
 actually being layers in any normal sense (that is, their lack of
 content) while allowing the same or greater functionality.

Is this kind of per-layer effect stack currently under development
(the post you linked is dated May 2007)? It could be a great step
towards making GIMP totally non-destructive and easy to use. If this
feature is under development, I would contribute in the development or
test phase. Does anyone know anything? Is there an open task about it?

Jacopo


Re: [Gimp-developer] Dummy Layer with particular dynamic effect

2010-08-29 Thread Øyvind Kolås
On Sun, Aug 29, 2010 at 12:39 PM, Jacopo Corzani corz...@gmail.com wrote:

 Is this kind of per-layer effect stack currently under development
 (the post you linked is dated May 2007)? It could be a great step
 towards making GIMP totally non-destructive and easy to use. If this
 feature is under development, I would contribute in the development or
 test phase. Does anyone know anything? Is there an open task about it?

There is an ongoing architectural refactoring of GIMP towards
migrating the internal representation of buffers (or rather, what the
user traditionally sees as layers) and all processing happening on
them to GEGL (http://gegl.org/). An initial refactoring has been done
for the color processing tools and for the processing of the layer
stack; it can be enabled by the user through the menus of the
development version.

Once GIMP uses GEGL by default for the composition of the layers,
adding any GEGL-based operation as a filter on a layer is easy from a
programming perspective. I have myself done an experiment where I
replaced the per-layer opacity slider in the layers dialog with a
gaussian blur, and it works as expected for both text and other
layers.
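
To give an idea of what chaining a GEGL operation into a processing
graph looks like, here is a rough standalone sketch using GEGL's
GObject-introspected Python bindings (the namespace version, the init
call and the linking helper may differ between GEGL releases, so treat
it as a sketch with real operation names rather than gospel):

import gi
gi.require_version("Gegl", "0.4")    # version depends on installed GEGL
from gi.repository import Gegl

Gegl.init(None)

graph = Gegl.Node()                  # parent node holding the graph
src = graph.create_child("gegl:load")
src.set_property("path", "frame.png")

blur = graph.create_child("gegl:gaussian-blur")
blur.set_property("std-dev-x", 4.0)
blur.set_property("std-dev-y", 4.0)

sink = graph.create_child("gegl:png-save")
sink.set_property("path", "frame-blurred.png")

# load -> gaussian-blur -> png-save; swapping or inserting operations
# in such a chain is what "a filter on a layer" boils down to once the
# layer contents themselves are GeglBuffers.
src.link(blur)
blur.link(sink)
sink.process()
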

The UI for this is not fully decided upon, and when it comes to the UI
side and this proclaimed layer abuse, I am personally in favor of
adding adjustment layers within the layer tree. This keeps the flow of
compositing in one place in the UI, making it more readable. The
styling and interactions of these operations can be a bit different
from those of the actual content layers, allowing the list to be
collapsed and the layers dialog to be simplified to a tree of just the
content layers. Having this chain embedded within the layer tree also
solves a recursion issue that becomes problematic if the list is kept
external to the layer tree: you want to be able to use layer subtrees
as masks/parameters for operations performed on a layer or layer
group. Note that the transition to GEGL would also enable you to use
render nodes other than text, such as gradients, noise, fractals,
checkerboards, vector layers and more.

Thus, helping out with this refactoring, and improving both GEGL and
how GIMP uses GEGL, is probably what would be most beneficial.

One thing that would probably help accelerate the refactoring of
various subsystems is adding a tile backend to GeglBuffer that is
backed by a GIMP TileManager. This would simplify the code used during
the transition; at the moment proxy GEGL operations are used, which
both requires more code and adds some overhead that could be avoided.

/Øyvind K.


Re: [Gimp-developer] Wish: A better animation system inside GIMP

2010-08-29 Thread Martin Nordholts
On 08/29/2010 09:01 AM, LightningIsMyName wrote:
 Hello,

 Currently, creating animations in GIMP is not done in a very clean
 way.

That's not a problem, since creating animations is not a defined goal
in the GIMP product vision. I rather think we should remove support for
animations from GIMP for 3.0; plug-ins can supply that functionality
instead.

  / Martin


-- 

My GIMP Blog:
http://www.chromecode.com/
Automatic tab style and removed tab title bar


[Gimp-developer] GPU-accelerated Image Filtering w/ CUDA

2010-08-29 Thread Alan Reiner
This is a long message, so let me start with the punchline:  *I have a 
lot of CUDA code that harnesses a user's GPU to accelerate very tedious 
image processing operations, potentially 200x speedup.  I am ready to 
donate this code to the GIMP project.  This code can be run on Windows 
or Linux, and probably Mac, too.*  *It only works on NVIDIA cards, but 
can detect at runtime whether the user has acceptable hardware, and 
disables itself if not.*

Hi all, I'm new here.  I work on real-time image processing applications 
that must run at 60-240Hz, which is typically too fast for doing things 
like convolutions on large images.  However, the new fad is to use CUDA 
to harness the parallel computing power of your graphics card to do 
computations, instead of just rendering graphics.  The speedups are
phenomenal.

For instance, I implemented a basic convolution algorithm (blurring)
which operates on a 4096x4096 image with a 15x15 kernel/PSF.  On my CPU
it took *27 seconds* (AMD Athlon X3 440).  Running the identical
algorithm in CUDA, I get it done in *0.1 to 0.25 seconds*, so between a
110x and 250x speedup (NVIDIA GTX 460).  Which end of that range you
land on depends on whether the memory already resides in GPU device
memory, or it needs to be copied in/out on each operation.
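
To give a feel for what such a kernel looks like, here is a
deliberately naive sketch (written with PyCUDA purely for brevity, and
without the shared-memory tiling that my code relies on for its speed;
the image size and the box kernel are made up for illustration):

import numpy as np
import pycuda.autoinit                 # fails here if no CUDA device
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

kernel_src = r"""
__global__ void convolve(const float *img, const float *kern, float *out,
                         int w, int h, int kw, int kh)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h)
        return;

    float acc = 0.0f;
    for (int ky = 0; ky < kh; ++ky)
        for (int kx = 0; kx < kw; ++kx) {
            int ix = x + kx - kw / 2;
            int iy = y + ky - kh / 2;
            if (ix >= 0 && ix < w && iy >= 0 && iy < h)  /* zeros outside */
                acc += img[iy * w + ix] * kern[ky * kw + kx];
        }
    out[y * w + x] = acc;
}
"""
convolve = SourceModule(kernel_src).get_function("convolve")

w = h = 512
img = np.random.rand(h, w).astype(np.float32)
kern = np.full((15, 15), 1.0 / 225.0, dtype=np.float32)   # 15x15 box blur
out = np.empty_like(img)

# One thread per output pixel, 16x16 threads per block.
block = (16, 16, 1)
grid = ((w + 15) // 16, (h + 15) // 16)
convolve(cuda.In(img), cuda.In(kern), cuda.Out(out),
         np.int32(w), np.int32(h), np.int32(15), np.int32(15),
         block=block, grid=grid)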

Any operation that resembles convolution, such as edge detection,
blurring, morphology operations, etc., is highly parallelizable and
ideal for GPU acceleration.  *I have a lot of this code already written
for grayscale images, and it can be donated to the GIMP project.*  I
would be interested in expanding the code to work on color images
(though I suspect just running it three times, once per channel, would
not be ideal), and I don't think it will be that hard to integrate into
the existing GIMP project (only a couple of extra libraries need to be
added for a user's computer to benefit from it).

Additionally, CUDA comes with convenient functions for determining
whether a user has a CUDA-enabled GPU, and the code can default to
regular CPU operations if they don't have one.  It can determine how
many cards they have, select the fastest one, and adjust the function
calls to accommodate older GPU cards.  Therefore, I believe the code
can safely be integrated and dynamically enable itself only when it can
be used.

My solution works for any image size (within the limit of GPU memory),
but the kernel/PSF size must be odd and no larger than 25x25.  That's
not to say larger kernel sizes can't be done in CUDA, but my solution
is elegant for sizes up to that limit, due to the limited amount of
shared memory.  I believe it will still work up to a 61x61 kernel, but
with a substantial slowdown (though probably still much faster than the
CPU).  Beyond that, I believe a different algorithm is needed.

I have implemented basic convolution (which assumes 0s outside the
edge of the image), a bilateral filter (blurring without destroying
edges), and most of the basic binary morphological operations
(kernel-based erode, dilate, opening, closing).  I believe it would be
possible to develop a morphology plug-in that lets you start with a
binary image, click buttons for erode, dilate, opening, etc., and have
it respond immediately.  This would allow someone to start with an
image and try lots of different combinations of morphological
operations to determine whether their problem can be solved with
morphology (which usually requires a long and complex sequence of
morph ops).

Unfortunately, I don't have much time to become a GIMP developer, but I
feel I can still contribute.  I will happily develop the algorithms to
be run on the GPU, as they will probably benefit my job too (I'm open
to suggestions for functions that operate on the whole image, but
independently).  And I can help with linking to the CUDA libraries,
which NVIDIA claims can be done quickly by someone with no CUDA
experience.

Please let me know if anyone is interested to work with me on this:
etothe...@gmail.com

-Alan


Re: [Gimp-developer] GPU-accelerated Image Filtering w/ CUDA

2010-08-29 Thread Jacopo Corzani
On 08/29/2010 08:13 PM, Alan Reiner wrote:
 This is a long message, so let me start with the punchline:  *I have a
 lot of CUDA code that harnesses a user's GPU to accelerate very tedious
 image processing operations, potentially 200x speedup.  I am ready to
 donate this code to the GIMP project.  This code can be run on Windows
 or Linux, and probably Mac, too.*  *It only works on NVIDIA cards, but
 can detect at runtime whether the user has acceptable hardware, and
 disables itself if not.*

 [...]

Hi Alan,
CUDA comes with a free but proprietary license (far from the GPL
ideology), it is strictly system dependent (there is no FreeBSD
version, for example), and it is not open source.
It's important to say that CUDA is enabled only on a certain number of
NVIDIA chips, not on accelerated graphics cards in general, so there
are big restrictions.
Gaining speed by sacrificing portability and code openness and adding
a closed-source library is not a good idea in my opinion; in addition,
NVIDIA is two-faced towards the open source community, and I don't
trust it to be responsive if there are any problems or suggestions.
To date I have seen only problematic NVIDIA closed-source products,
like the GPU drivers, and too little interest in the open source world
and its users.

Jacopo

Re: [Gimp-developer] GPU-accelerated Image Filtering w/ CUDA

2010-08-29 Thread Jon Nordby
On 29 August 2010 20:13, Alan Reiner etothe...@gmail.com wrote:
 This is a long message, so let me start with the punchline:  *I have a
 lot of CUDA code that harnesses a user's GPU to accelerate very tedious
 image processing operations, potentially 200x speedup.  I am ready to
 donate this code to the GIMP project.
Hi

The first thing you should do if you want an (open source) project to
be able to use your code is to provide the code, preferably in the
form of a public source control repository. Without this first step,
nothing can happen. :)

 Please let me know if anyone is interested to work with me on this:
 etothe...@gmail.com
Please note that in open source projects communication is expected to
be kept in public channels (e.g. a mailing list like this one) unless
there is a very good reason not to. For this reason I've CC'ed the
list on this reply, and I urge you to do the same.

-- 
Regards Jon Nordby - www.jonnor.com


Re: [Gimp-developer] GPU-accelerated Image Filtering w/ CUDA

2010-08-29 Thread Alan Reiner
I forgot that CUDA is not OSS.  We don't have to worry about that because we
only use it for in-house simulations.  I only remembered it was free for
such use.

I know that similar stuff can be done with OpenGL, but that's a completely
different beast.  There's also OpenCL but I don't know anything about that
either.   At least those two solutions should work on both NVIDIA and ATI,
but I believe the code still needs to be tailored specifically for each
architecture.

As for portability, I don't see that as a concern for any of these.
For unsupported platforms it would be preprocessed out; everywhere
else it can detect the resident card and disable itself if it won't
work on it.

I might look a little bit into the OpenGL solution to see if that's
feasible, but my understanding is that it's more archaic and not as
powerful.  And I personally don't have a reason to learn it.  Perhaps one
day when I have time to contribute directly to an OSS project.

-Alan



Re: [Gimp-developer] GPU-accelerated Image Filtering w/ CUDA

2010-08-29 Thread Øyvind Kolås
On Sun, Aug 29, 2010 at 11:40 PM, Alan Reiner etothe...@gmail.com wrote:
 I might look a little bit into the OpenGL solution to see if that's
 feasible, but my understanding is that it's more archaic and not as
 powerful.  And I personally don't have a reason to learn it.  Perhaps one
 day when I have time to contribute directly to an OSS project.

Doing image processing on the GPU, using OpenGL and GLSL, is planned
for GIMP's next-generation engine, and an initial proof of concept of
such a system, deeply integrated with GEGL, exists in a branch of the
git repository at http://git.gnome.org/browse/gegl/log/?h=gsoc2009-gpu .
The approach taken there is to implement automatic migration of tiles
between the CPU and the GPU.

/Øyvind K.


Re: [Gimp-developer] GPU-accelerated Image Filtering w/ CUDA

2010-08-29 Thread Dov Grobgeld
Alan,

Your code certainly sounds very useful, and I would love to see it open
sourced. May I suggest, as was already stated, that you decide upon a
license, find a name for your library, and then open a GitHub
(http://github.com) account (or use any other free hosting) where you
upload the code. Whether it will be made part of GIMP or not is a
different issue, and I agree that introducing closed-source
dependencies into such a project is not a good idea.

Btw, there is an open standard for CUDA-like operations being
developed, called OpenCL, but it is not very well supported yet. See
http://en.wikipedia.org/wiki/OpenCL . Perhaps you want to investigate
whether there is NVIDIA support for the operations that you use, and
if so, recode the algorithms in OpenCL? But again, I would do the work
in a separate repository on GitHub.

Regards,
Dov


On Mon, Aug 30, 2010 at 01:46, Øyvind Kolås pip...@gimp.org wrote:

 Doing image processing on the GPU, using OpenGL and GLSL, is planned
 for GIMP's next-generation engine, and an initial proof of concept of
 such a system, deeply integrated with GEGL, exists in a branch of the
 git repository at http://git.gnome.org/browse/gegl/log/?h=gsoc2009-gpu .
 The approach taken there is to implement automatic migration of tiles
 between the CPU and the GPU.

 /Øyvind K.
