What happens right before the trigger? You could try overriding
repaint()
repaint() is not getting called excessively. It is only update() that
gets called all the time. I know this since I tried this:
public void repaint() {
System.out.println("repaint");
super.repaint();
}
.. without
[EMAIL PROTECTED] wrote:
This is a very thorough answer. I will try to reply ...
On 8/25/05, Jim Graham [EMAIL PROTECTED] wrote:
From the Component javadocs:
The update method of Component calls this component's paint
method to redraw this component.
and:
Subclasses
Hi Olof,
There is no hard requirement to use Swing. It does provide a number of
features, such as automatic double buffering and Timers, which could save
you some coding and debugging time, but it is also possible to roll your
own as you have already done. Swing also provides a lot of features
...
Thanks for your help,
/Olof
On 8/25/05, Jim Graham [EMAIL PROTECTED] wrote:
Hi Olof,
There is no hard requirement to use Swing. It does provide a number of
features, such as automatic double buffering and Timers, which could save
you some coding and debugging time, but it is also possible
no AWT
code either. The important classes I should guess are
TerraformerApplet and ResourceManager which do the AWT/graphics stuff,
alone. ResourceManager uses a MediaTracker to load the images
synchronously.
Cya and thanks for your help,
/Olof
On 8/26/05, Jim Graham [EMAIL PROTECTED] wrote:
We've
I submitted bug 6369452:
SampleModel and Raster methods do not specify the order
of data in the array for multiple pixels
to track this issue. It should appear in the bug parade in a day or two.
Note that you can access the data a band at a time using the
get/setSamples methods
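The band-at-a-time access mentioned above can be sketched like this (class name and sample values are mine, for illustration): getSamples returns one band of a region as a flat row-major array, and setSamples writes it back, so the per-band ordering is unambiguous even when the interleaved ordering is not.

```java
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

public class BandAccessDemo {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        WritableRaster raster = img.getRaster();

        // Read one band (band 1 = green for TYPE_INT_RGB) for the 4x4 region.
        int[] green = raster.getSamples(0, 0, 4, 4, 1, (int[]) null);

        // Modify the band (row-major order for a single band) and write it back.
        for (int i = 0; i < green.length; i++) {
            green[i] = 128;
        }
        raster.setSamples(0, 0, 4, 4, 1, green);

        System.out.println((img.getRGB(0, 0) >> 8) & 0xFF); // prints 128
    }
}
```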
Depending on the complexity of your drawing, simply rerendering the
scene with the objects in the new position (with double buffering) may
work well for 10 frames/second performance. Switch to using Swing with
a JPanel (you'll have to override paintComponent instead of paint) and
call repaint()
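A minimal sketch of the Swing approach described above (class name and the drawn shape are illustrative): override paintComponent, rerender the whole scene at the current positions, and drive the animation with a javax.swing.Timer at ~100ms for the 10 frames/second mentioned.

```java
import java.awt.Color;
import java.awt.Graphics;
import javax.swing.JPanel;
import javax.swing.Timer;

public class ScenePanel extends JPanel {
    private int x = 0;

    // Advance the animation one step and schedule a repaint.
    public void step() { x += 2; repaint(); }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);   // clears the background; Swing double buffers
        g.setColor(Color.BLUE);
        g.fillOval(x, 20, 16, 16); // rerender the object at its new position
    }

    public static void main(String[] args) {
        ScenePanel p = new ScenePanel();
        // ~10 frames/second; the Timer fires on the event dispatch thread.
        new Timer(100, e -> p.step()).start();
        // ... install p in a JFrame as usual ...
    }
}
```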
Hi Erik,
There are likely 2 different issues that you are dealing with here.
First is the issue of dithering during drawing vs. during imaging. We
currently do dithering when rendering an image into a lower color
resolution destination, but we don't do any dithering at all when we do
draw and
No, by sub-pixel I mean geometry which places coordinates at
fractional screen coordinates for purposes of expressing the locations
of those vertices more accurately than integers would allow. If the
display has sub-components that are smaller than a pixel, that could be
taken advantage of
Does the dispose() call potentially flush some drawing operations?
Is it simply not needed if you don't write to the graphics context?
Jim would probably be a better person to answer this question, but
since JDK 1.4 or so, Graphics.dispose() doesn't really do anything
heavyweight; we have other
Hi Ben,
What is the ColorModel in the destination you are rendering to?
What kind of device are you displaying on?
Can you send a test case that demonstrates it?
...jim
Ben Galbraith wrote:
Hi all,
I'm seeking to create a smooth gradient background for a JComponent
Sorry for the delay, I just noticed that no one answered this.
Typically PURE means don't do anything to the path and NORMALIZE means
tweak the values, which should mean that PURE would be faster, but the
difference should be small as the tweaking is pretty minimal and done on
a per path element
When I run this case I can just make out 29 extremely faint bars on the
screen which is exactly how many possible colors there are between the
endpoints you chose so it looks like I am seeing adjacent colors on a
true-color display. I'm a bit surprised that my eyes are that sensitive
and that my
some graphics gurus could
tutor me on how to create grayscale gradients of limited range that
don't have banding problems.
It sounds like from your last paragraph that I should try a different
set of colors above 50% luminance?
Ben
On Jun 20, 2006, at 11:48 PM, Jim Graham wrote:
When I run
The CAP decoration is applied at the end of each dash as documented
(though admittedly this phrase isn't repeated in the docs of the
constructors, just the class comments and the constants). Thus, the
round caps will eat into the null space of the dashes by half of the
line width on both ends of
A question though. Why should the choice of end cap be affecting the
rendering of a line between the end points? IMO each dash should be unaffected
by the cap, except for the dashes at each end of the line. Seems like an
undesirable feature to me.
It's not just Java - X11 (for OnOff dashes)
Jim Graham wrote:
Other things to try would be STROKE_PURE and ANTIALIASING hints...
...jim
Russell East wrote:
Jim,
thanks for the clarification.
which release and platform is that on?
both 1.5.0_07-b03 and 1.6.0-rc-b89 on Fedora Linux
We currently only worry about dithering on 8-bit
indexed destinations.
We have a request outstanding for dithering on
15/16-bit displays, but
we haven't gotten to it. This would be the first
request for dithering
on a TrueColor display. ;-)
Presumably if you get it working for 15/16-bit then
You can use the Area class to add in the original Shape:
public Shape expandShape(Shape s, float d) {
    BasicStroke bs = new BasicStroke(d);
    // or new BasicStroke(d, CAP_ROUND, JOIN_ROUND);
    Area a = new Area(bs.createStrokedShape(s));
    a.add(new Area(s));
    return a;
}
it that way because it was easier to
implement, which made me think there might be some tricky fast way.
On 7/17/06, Jim Graham [EMAIL PROTECTED] wrote:
You can use the Area class to add in the original Shape
Hi ylzhao,
There are some missing details in your post that affect the type of
solution that can achieve your goals. For instance, how often are the
pixels updated as compared to how often the updated image is drawn to
the screen? Also, what percentage of the pixels are modified on a
typical
There are really 2 questions here:
- How do I manipulate a BufferedImage without interfering with its
ability to be accelerated under the covers?
- Is there a type of image that lives in hardware accelerated memory and
to which I can write directly to its pixels?
To answer the first question,
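The reply is cut off here, so the following is my own summary of a common answer, not the original author's words: modify the image through drawImage into a screen-compatible copy rather than grabbing its DataBuffer array, since pulling the array out is what tends to disable caching/acceleration. Class and method names are illustrative.

```java
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.GraphicsEnvironment;
import java.awt.image.BufferedImage;

public class ManagedImageDemo {
    // Copy src into an image created from the screen's GraphicsConfiguration;
    // such images use a layout the pipeline can cache under the covers.
    public static BufferedImage toCompatible(BufferedImage src) {
        if (GraphicsEnvironment.isHeadless()) {
            return src; // no screen to be compatible with
        }
        GraphicsConfiguration gc = GraphicsEnvironment
                .getLocalGraphicsEnvironment()
                .getDefaultScreenDevice()
                .getDefaultConfiguration();
        BufferedImage dst = gc.createCompatibleImage(
                src.getWidth(), src.getHeight(), src.getTransparency());
        Graphics2D g = dst.createGraphics();
        g.drawImage(src, 0, 0, null); // pixel writes via drawImage, not getRaster()
        g.dispose();
        return dst;
    }
}
```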
I'm curious what your goals are here so that we can understand if we are
aware of your needs for future API development.
The usual reason for specifying a storage format is that you need to
access the pixels directly and so you need to know how they will be
stored in the image buffers. But with
Another issue is that the getSubImage() method on BufferedImage
currently revokes the ability to manage either the parent or the child
image. This prevents us from caching or accelerating either image in
the future permanently. Thus, with this code you don't even get the
benefits of managed
, and there is no method to compute the
geometrical center of the area such as:
Point2D center()
The point is, if the Area interface were extended to include the center
method, the expandShape method would be very simple and direct to implement
by using AffineTransform.
Yinhe Cao
Jim Graham wrote On 07/17/06 14
except at those corners
where the outer normal directions are not well defined. Are you going
to close the gap with an arc of radius d, or simply extend the outer
edges? For small d, there is not much difference, but for a large d, the
difference will be visible.
Thanks,
Yinhe
Jim Graham
Have you tried setting the STROKE_CONTROL hint to PURE? The default is
NORMALIZE which allows the renderer to round various values so that
line widths appear uniform (though they are sticky to the integers as
a result). The PURE setting tells the renderer to honor sub-pixel
positions more
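Setting the hint described above looks like this (the drawn line is just an example); PURE is most visible when combined with antialiasing, since sub-pixel positions then show up as partial coverage.

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.geom.Line2D;
import java.awt.image.BufferedImage;

public class StrokePureDemo {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2d = img.createGraphics();

        // Honor sub-pixel positions instead of snapping to pixel boundaries.
        g2d.setRenderingHint(RenderingHints.KEY_STROKE_CONTROL,
                             RenderingHints.VALUE_STROKE_PURE);
        g2d.setRenderingHint(RenderingHints.KEY_ANTIALIASING,
                             RenderingHints.VALUE_ANTIALIAS_ON);

        // Fractional coordinates are now rendered where you put them.
        g2d.draw(new Line2D.Double(10.25, 10.0, 50.75, 10.0));
        g2d.dispose();
    }
}
```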
Currently the only mechanism to get access to these sources is via the
Peabody project:
http://java.sun.com/developer/technicalArticles/J2SE/peabody/
You will have to sign a contributor agreement to see the source code as
is described on that page.
The code was developed completely in
The AA pipeline currently works by feeding the path to the renderer,
then looping along getting chunks of graybits coverage values for each
piece of the shape. Each chunk of graybits is then used to color the
destination surface.
Most of this code is reentrant, but one critical piece is not -
When you set the animated attribute to true on a MIS it will never be
complete so drawImage will never return true.
The return value of drawImage should probably be ignored for 99.99%
of code - it doesn't tell you anything that you really need to know.
All it is telling you is if all of the
now.
Thanks for responding
Ken
Jim Graham wrote:
When you set the animated attribute to true on a MIS it will never be
complete so drawImage will never return true.
The return value of drawImage should probably be ignored for
99.99% of code - it doesn't tell you anything that you really
What values are you using for rMask, gMask and bMask?
Internally we have native support for packed formats of xRGB, xBGR,
ARGB, and RGBx. If the masks you are using don't match one of those
formats then we may back off to some pretty generalized method call per
pixel code which might explain
It would be nice to get a method in Image that allows me to 'make this Image
object managed if possible' without this workaround.
If you want more explicit control over acceleration then the best route
would be to create your own VolatileImage objects and manage them
yourself. That is
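Managing your own VolatileImage means running the standard validate/render/contentsLost loop; a sketch (method and variable names are mine, not from the original post):

```java
import java.awt.Graphics2D;
import java.awt.GraphicsConfiguration;
import java.awt.image.VolatileImage;

public class VolatileDemo {
    // Render into a self-managed VolatileImage, recreating the surface
    // whenever it is lost or becomes incompatible with the display.
    static VolatileImage render(GraphicsConfiguration gc, VolatileImage vimg,
                                int w, int h) {
        do {
            if (vimg == null ||
                vimg.validate(gc) == VolatileImage.IMAGE_INCOMPATIBLE) {
                // Surface was lost or the display mode changed: recreate it.
                vimg = gc.createCompatibleVolatileImage(w, h);
            }
            Graphics2D g = vimg.createGraphics();
            try {
                g.fillRect(0, 0, w, h); // ... draw the scene here ...
            } finally {
                g.dispose();
            }
        } while (vimg.contentsLost()); // redraw if contents were lost meanwhile
        return vimg;
    }
}
```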
Hi Mikael,
Have you tried using the STROKE_PURE hint?
...jim
[EMAIL PROTECTED] wrote:
Hi Chris,
I'll leave the AA thickness as explained in the reply to Kirill above.
What bothers me a tad more is the flat circles that still haunt us. :) I
think this is what is the
Hi Mikael,
[EMAIL PROTECTED] wrote:
I don't know what to say. You go to great length explaining why circles aren't
round and why the corner radius on rounded rectangles are different on the
different sides, and why you will not fix it. I guess other platforms (Flash,
.NET and similar) also
Hi Ken,
QUESTION: does anyone have an idea of how to handle this situation?
Shouldn't ImageFetcher.run() throw something? Like an OutOfMemoryError
or something so that grabPixels() can throw an InterruptedException like
the documentation says it will???
The ImageFetcher should capture any
Hi Ken,
Ken Warner wrote:
Hi Jim,
You raise some interesting points. The reason I used Toolkit was
because of the simplicity of the code that I needed to write. I wrote
an early version of my applet using ImageIO and BufferedImage. The code
ended up being pretty complicated. Right now I
I have a couple more items that may affect OOM when using the Toolkit
image code...
Try using the Toolkit.createImage(...) variants instead of the
Toolkit.getImage(...) variants. The latter getImage calls all keep a
copy of the image that was returned in an internal hashmap in case they
get a
[I sent this earlier, but it doesn't appear to have gone anywhere so I
am resending it in hopes that it makes it through to the forum and
mailing list to clarify things. Apologies if this causes duplicate
messages for some...jim]
That translates the device space 100 pixels over. Great. But the
[I sent this out a few days ago, but replied to the wrong address so I
don't believe it went through - sending again...jim]
The Bilinear filtering algorithm is a well known algorithm that has a
very specific definition which we follow. If we automatically applied
multiple steps based on the
[I sent this out a few days ago, but my mailer replied to the wrong
header address so it never went anywhere - sending again to the list in
hopes it can still clarify some things...jim]
Hi Nick,
[EMAIL PROTECTED] wrote:
Now for the real problem. I'm running the following code with 4 threads.
rectangle.getBounds().setLocation(shapeLocation) which doesn't work
I think this is the source of your misunderstanding (that snippet of
code was never intended to work) which you found out later in the thread
you linked in:
which is where my misunderstanding started from, also on this post-
[EMAIL PROTECTED] wrote:
Jim- right on and well said- that was exactly my confusion
Kirillcool said:
you're using core view classes in your model. The Shape and its derived classes
are for painting on the screen. Although they might seem like a good option to
use in the model, they
I wanted to follow up this suggestion with a warning.
The Area object is not a generalized Shape. It has some very specific
properties which make it inappropriate for some Shapes. Most of these
should be documented in the latest javadoc for the Area class.
1. Area cannot represent shapes
The premultiplication problem will not be an issue for my little
interpolator since I do not composite. But it would be for someone
trying to use my interpolator for another purpose than what it was
designed for.
You said that you don't have the premultiplication problem because
you do
It is made final to lock in its implementation since internal code
relies on its internal structure.
In 1.6 it is just a thin veneer over Path2D.Float which is not final
(though its implementation-based methods are final).
Out of curiosity - why are you wanting to subclass it?
Polygon and Rectangle were part of the original integer-based AWT
Graphics API before it was fleshed out in Java 2 to become the 2D API
that we know today. Thus it makes sense that Polygon was created as an
integer polygon (for handing directly to the platform APIs that did all
of the rendering
Olivier Lefevre wrote:
GeneralPath in Java 2 [...] can handle polygons and much more
API-wise yes but is there a performance angle or is Polygon
really redundant now?
It depends on how you would use Polygon as to whether or not there would
be a performance penalty or not.
First, there are 2
One thing to add...
I was tracking down an issue that someone was having internally with
using the Disposer and thought that maybe the problem was that they
didn't understand how we use the DisposerTarget interface. Strange that
their query and your email were separated only by minutes. ;-)
There are 2 advantages to Disposer:
- priority of disposal process
- the DisposerTarget mechanism
One problem with the finalize method is that the specifications intentionally leave
some of the aspects of the process undocumented. One thing that is not
documented is the priority of the finalizer thread -
Hi Mark,
In an ideal world we would have made TexturePaint, or at least its
createContext() method, final. Internally it is useful for us to know
that a texture is being applied so we can use optimized code to paint
with the texture directly rather than having to call createContext() and
cause
This is probably due to the fact that the Area class does not optimize
the case of 2 rectangular areas which abut each other top and bottom
into a single rectangular area, but it does optimize the horizontal case.
The isRectangular() method performs a trivial check for a single outline
with 4
The per-pixel overhead of accessing VRAM would probably be more
noticeable than the per-operation overhead of getting the pixels into
the pipeline - unless there was a lot of overlap between the sysmem images.
With little or no overlap then it might even be faster to blit the
component sysmem
If VRAM is expensive to store per-pixel then the cost you pay to do the
blit(s) is dependent on the number of pixels.
If you are writing to the same VRAM pixels over and over because the
sysmem images overlap then you pay the cost to access pixels that will
later be overwritten.
By contrast
Just a suggestion about techniques - make sure you use some sort of
interpolation filtering hint when you try to rotate an image like this.
The default algorithm NEAREST_NEIGHBOR is the fastest, but you'd get
better quality with BILINEAR filtering. I'm not sure that BICUBIC
(slower still)
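Setting the interpolation hint for a rotation looks like this (method and class names are illustrative); BILINEAR replaces the default NEAREST_NEIGHBOR for noticeably smoother edges.

```java
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

public class RotateDemo {
    public static BufferedImage rotate(BufferedImage src, double theta) {
        BufferedImage dst = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
        Graphics2D g = dst.createGraphics();
        // BILINEAR is a good speed/quality trade-off for rotations;
        // the default NEAREST_NEIGHBOR is faster but blockier.
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                           RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.rotate(theta, src.getWidth() / 2.0, src.getHeight() / 2.0);
        g.drawImage(src, 0, 0, null);
        g.dispose();
        return dst;
    }
}
```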
[EMAIL PROTECTED] wrote:
True - but that's an array. Depending on the internal representation of
DataBuffer (which I have not looked into yet), this could mean that the casting
has to be done for each element of the array.
Once you grab the Java array, you are done grabbing it and done
Just a small correction...
[EMAIL PROTECTED] wrote:
Can you try to modify my example code using your
suggestion, so that we can see if it produces
similar memory leaking behavior or if it solves it?
Sorry, no ... I don't have the time.
I post a snippet which should be easy to use:
[code]
Hi Ken,
ImageI/O reads images into a BufferedImage which provides easy access to
a Java int/byte array with the pixels. Details of pixel access for
BufferedImage objects were discussed recently on this thread:
http://forums.java.net/jive/thread.jspa?messageID=269294
handling api.
Now to all those who are gnashing your teeth about why I don't want to
use BufferedImages -- take a valium and have a beer and a ciggy -- relax
and let me live in my own little world without BufferedImages.
Jim Graham wrote:
Hi Ken,
ImageI/O reads images into a BufferedImage which
will see something I don't.
Ken
Jim Graham wrote:
Hi Ken,
Do you really need to rewrite it all the way through? For example,
the massaged data (that has been run through the panoramic projection)
could be stored in integer format - it's just the code that reads a
pixel out of the source tiles
.
Ken
Jim Graham wrote:
Hi Ken,
Do you really need to rewrite it all the way through? For example,
the massaged data (that has been run through the panoramic
projection) could be stored in integer format - it's just the code
that reads a pixel out of the source tiles that needs to change
into an integer array like I first asked.
Something like pixels = imageReader.decodeFileBytesToIntegerArray(...)
But all I seem to get from you is ...rewrite the applet... Ok.
I think I'm done.
Jim Graham wrote:
The MemoryImageSource relies on the ColorModel to define whether or
not the data contains
That should be the best way. It is independent of what kind of image
you are using...
...jim
Ken Warner wrote:
I need to draw just a small rectangular portion of a BufferedImage.
What is the absolute best, fastest way to do that?
Is there a tutorial that will show how to do
A line width of 0 should produce the thinnest line representable on the
device. If the line disappears then that is a bug...
...jim
[EMAIL PROTECTED] wrote:
Since I have not received an answer to my previous question, I will try to
simplify it even further.
For an
How are you taking the snap shot? How do you receive that snapshot into
Java? By loading it as an image, or are you using the built-in robot
facilities to do the snapshot?
If you are using robot, then the image will already be a BufferedImage
so you don't need to convert it. If you are
Hi Adam,
I haven't read through everything you've written here, but it sounds
like you've already learned about the major players here - there are
access methods at a number of levels - BufferedImage, Raster,
DataBuffer, and some helper methods on SampleModel and ColorModel as
well. The
You shouldn't be using Antialiased rendering on an Indexed image.
Antialiasing requires the ability to choose a color that is N% of the
way from the foreground to the background and store that color, but that
color may not be in the colormap of an Indexed image.
One could argue that we should
Hi Lew,
A line from 20 to 21 should only be one pixel long as the distance
between those points is only 1 pixel.
The bug is that we drew 2 pixels when you gave us the first stroke. I
think I know what's causing that. We have a test to see if the line
width is small enough to generate
Hi Lew,
You are describing the default behavior which assumes CAP_SQUARE.
The line itself is infinitely thin and goes from one coordinate to the
next, don't think of pixels. So, 20,60 to 20,60 goes nowhere - it has a
length of 0. 20,60 to 21,60 goes a total distance of 1 unit and has a
Just to be clear, you are calling g.drawImage(img, width, height,
null) in the pseudo-code below which draws img at the location w,h -
don't you want to draw it at (0,0) as in:
g.drawImage(img, 0, 0, null);
or was that a typo in the pseudo-code?
...jim
[EMAIL
Hi Michael,
I think you are getting confused by some under-documented APIs that
never had a useful purpose for developers and, in consequence, that have
some undiscovered bugs in them.
coerceData is not the way to make a premultiplied image. Even worse, it
can have dangerous effects on an
It depends on your definition of the term "from the contents of".
The simplest way to get a snapshot of a subrectangle of an existing
image is to simply create a new image and draw the original into the new
one, as in:
// assuming:
Image src;
int subx, suby, subw, subh;
Yes, if you want alpha you should pass in TYPE_INT_ARGB.
Note that the default compositing mode of the freshly retrieved graphics
object is SRC_OVER. The code sample still works fine since the newly
constructed image is all 0's (transparent pixels) so copying your source
image over the fully
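Putting the two snippets above together, a completed sketch of the "create a new image and draw the original into it" approach (variable names follow the truncated snippet; the class name is mine):

```java
import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.image.BufferedImage;

public class SubSnapshot {
    // Snapshot of the subrectangle (subx, suby, subw, subh) of src.
    public static BufferedImage snapshot(Image src,
                                         int subx, int suby, int subw, int subh) {
        BufferedImage dst = new BufferedImage(subw, subh,
                                              BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = dst.createGraphics();
        // Shift the subrectangle's origin to (0,0) of the new image.
        // SRC_OVER onto the freshly constructed, all-transparent pixels
        // copies the source unchanged.
        g.drawImage(src, -subx, -suby, null);
        g.dispose();
        return dst;
    }
}
```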
Hi Charles,
This is really a question for JAI since I believe they own the TIFF
writer. I forwarded your question on to Brian Burkhalter for more
information. He may draw the question off onto a JAI list as well.
Either way you should hear from him...
...jim
[EMAIL
[EMAIL PROTECTED] wrote:
If I draw a rectangle in the center of a panel at say (x, y) and then I draw it
again after applying a scale of (sx, sy), how do I get it so that the center of
the rectangle at both scale levels is in the same location on the screen, i.e.
(x, y)? It looks like I need
Choose the coordinate that you want to remain constant and use that
coordinate in the first and last translates. If 100,100 is the
coordinate that you want to not move during the transform, then use that
coordinate. If there is some other coordinate that you want to remain
unchanged then use
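The translate-scale-translate recipe described above, as a small sketch (class and method names are mine): the chosen coordinate appears in the first and last translate, so it maps to itself.

```java
import java.awt.geom.AffineTransform;

public class ScaleAboutPoint {
    // Scale by (sx, sy) while keeping (cx, cy) fixed on screen:
    // translate(cx, cy); scale(sx, sy); translate(-cx, -cy);
    public static AffineTransform scaleAbout(double sx, double sy,
                                             double cx, double cy) {
        AffineTransform at = new AffineTransform();
        at.translate(cx, cy);
        at.scale(sx, sy);
        at.translate(-cx, -cy);
        return at;
    }
}
```

For example, scaleAbout(2, 2, 100, 100) maps (100, 100) to itself and (110, 100) to (120, 100): points move away from the fixed coordinate, not from the origin.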
There appears to be a bug in your method. Was this a cut-and-paste error?
[EMAIL PROTECTED] wrote:
Hi.
I am writing an Applet for viewing large (~2600x3500) tiff files. Here is my
scaling method
public static BufferedImage getScaledInstance( Image img, int targetWidth,
int
Hi,
[EMAIL PROTECTED] wrote:
And here infos that you asked;
Image info: [EMAIL PROTECTED] // source image that I get with Jimi.
As I read this you are saying that Jimi returned to you a ToolkitImage.
Is that correct? I think that was to be expected. Unfortunately
printing a ToolkitImage
If you are using JComponent.paintComponent() to render into your image
then it will render the background color unless you turn that off. You
might want to start with this article for some pointers/tips:
http://java.sun.com/products/jfc/tsc/articles/painting/
You can try, but it will only draw the current frame of the GIF to the
BufferedImage. BufferedImage objects are just static buckets of pixels
so they can't animate.
The last parameter to the drawImage call is an optional reference to an
ImageObserver that will be notified if/when a new frame
In other words each new arc is effectively underneath all of the
previous arcs?
You could use intermediate images for this.
Create 2 INT_ARGB images the size of your drawing (or the portion with
this waterfall-ish effect). On each frame you have the one that holds
all of the previous arcs
Are these 1-pixel wide arcs or wider?
Currently we have 2 different paths for drawArc. If we determine that
the stroke size is less than or equal to a pixel then we trace along
the arc and use a bresenham-like algorithm to touch the pixels. If it
is wider then we invoke a line widening
Unfortunately the Stroke interface in 2D is defined as just a geometric
effect - it can turn a Path into an outline to fill, but cannot
specify any color properties for the result. As a result, there would
be no custom Stroke implementation you could use to achieve this
result and I can't
Since both shapes are being rendered with an opaque color, why not just
render shape1 followed by shape3?
The problem you are running into with antialiasing is a common one and
it is due to the fact that the two operations work independently to fill
C (for fraction of coverage) of the pixel
Since the 2f stroke straddles the path it only contributes 1f on every
side, so the 96-pixel ellipse should be 98 pixels.
It should be reliable on all platforms, but there may be +/- half a
pixel slop depending on rounding directions, stroke normalization hints
and algorithm choices.
Yes,
Answers inline...
[EMAIL PROTECTED] wrote:
Sorry, fixed now.
And several questions about the code.
1. Save/Restore of transformation in Graphics2D. Is this way:
AffineTransform saved = g2d.getTransform();
...
g2d.setTransform(saved);
efficient???
It works well for 1 or 2 attributes, but
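The reply is truncated here; my reading (an assumption, not the original author's words) is that for more than a couple of attributes the usual alternative is a child context via Graphics2D.create(), so no state has to be saved or restored by hand:

```java
import java.awt.Graphics2D;

public class SaveRestoreDemo {
    static void paint(Graphics2D g2d) {
        // A child context is simpler than saving each attribute
        // individually when you change many of them at once.
        Graphics2D child = (Graphics2D) g2d.create();
        try {
            child.rotate(Math.PI / 4);
            child.scale(2, 2);
            // ... draw with the modified state ...
        } finally {
            child.dispose(); // the parent g2d state was never touched
        }
    }
}
```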
I have some general comments about the code you included. Hopefully
they can help you figure out the problem you are having...
[EMAIL PROTECTED] wrote:
I have a question
If I have a buffered image. I apply some changes to the image, such as smoothen
the image. How can I display the new
Phil already talked about why you are having resolution problems, but I
wanted to point out a mistake in your code that was causing some of the
operations to be lost.
[EMAIL PROTECTED] wrote:
Hi, I'm trying to build up a BufferedImage using its Graphics component, and
then print that image
What format are you saving it as? Note that the JPG image format is
lossy so the image written to disk might be different from the image you
handed to the writer (off by 1s would be common). PNG is a lossless
format so it will write out exactly the pixels you hand to it.
Also, what format
jav...@javadesktop.org wrote:
First, note what the API doc says about setClip(Shape clip):
[b]Not all objects that implement the Shape interface can be used to set the clip. The only Shape objects that are guaranteed to be supported are Shape objects that are obtained via the getClip method and
The BasicStroke.createStrokedShape() is the intended method to test for
intersection with the stroked shape as you implemented in your later
example. Note that the default graphics setting for the STROKE_CONTROL
hint is STROKE_NORMALIZE which allows tweaking of lines for aesthetic
purposes
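A minimal version of the intersection test described above (class and method names are illustrative): stroke the path into an outline Shape, then test the point against that outline.

```java
import java.awt.BasicStroke;
import java.awt.Shape;
import java.awt.geom.Line2D;

public class StrokeHitTest {
    // Does point (px, py) fall on the stroked outline of a shape?
    public static boolean hits(Shape path, float width, double px, double py) {
        Shape outline = new BasicStroke(width).createStrokedShape(path);
        return outline.contains(px, py);
    }

    public static void main(String[] args) {
        Line2D line = new Line2D.Double(0, 0, 100, 0);
        System.out.println(hits(line, 4f, 50, 1));  // point on the 4-wide stroke
        System.out.println(hits(line, 4f, 50, 10)); // point well off the stroke
    }
}
```

Note this tests the exact geometry; as the paragraph above says, what actually gets rendered may differ slightly under STROKE_NORMALIZE.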
Note that, in Java 2D, a shape cannot know its outline design because
that depends on another graphics attribute - the Stroke object. The
default implementation of that interface is a BasicStroke object with a
line width of 1.0f, but that is just the default for the Graphics
objects you get
If I understand your question correctly, you are saying that the
performance meets your needs, so you are mainly asking if there are any
hidden gotchas in that API?
If we are in a hw accelerated pipeline then I think the flipping is
probably guaranteed to be free since image transformation is
This is probably due to the fact that we use different rendering
pipelines in some cases when the colors are translucent (for example, we
may use X11 requests to render on Linux and Solaris when the colors are
solid, and then we may use readback, modify pixels, writeback
mechanisms to deal
Can ADD and MULTIPLY be expressed using Porter/Duff equations as per the
existing rules? I seem to recall that they require clipping the result
which none of the other modes require. The P/D chapter in Foley and
van Dam mentions PLUS, but it doesn't describe it in the same terms as
the
Dmitri already pointed out your problem with the placing of the command
line options, but I wanted to mention some issues with the options you
were using:
I don't think opengl=True provides trace output (Chris?). I think all
it does is it prints a message saying Yes, indeed, I did
jav...@javadesktop.org wrote:
Thank you Dmitri for your answer.
Let me just say a bit more about the problem and why I went with this method:
My goal is to find the fastest way to load an image from the disk (a tif file,
either binary or colored) and display it on the screen in a compatible way
(e.g.
I think I've discovered at least one problem that you are running into -
we don't have an optimized loop to convert from 1-bit binary images to
INT_RGB images. Further, the general code that gets used doesn't take
the best advantage of the existing optimized loops that do exist.
We only have
jav...@javadesktop.org wrote:
- let us know what image formats you are seeing in the loaded image and
which Reader is loading it so we can consider adding loops that deal
with that format more directly (or modifying our format detection code
to recognize it if there is something amiss with our