threads blocking on EncodeRepresentation
Hi, I am getting a whole lot of blocked threads when using the EncodeRepresentation class (shown below). Has anyone else experienced such a problem?

Daemon Threads [Thread-108] through [Thread-111] are all suspended with this identical stack:

Daemon Thread [Thread-108] (Suspended)
    Unsafe.park(boolean, long) line: not available [native method]
    LockSupport.park() line: 118
    AbstractQueuedSynchronizer$ConditionObject.await() line: 1767
    ArrayBlockingQueue<E>.put(E) line: 368
    ByteUtils$PipeStream$2.write(int) line: 331
    ByteUtils$PipeStream$2(OutputStream).write(byte[], int, int) line: 99
    GZIPOutputStream.finish() line: 95
    EncodeRepresentation.write(OutputStream) line: 235
    ByteUtils$2.run() line: 133
Re: EncodeRepresentation.write leaving blocked threads around?
In this case, as with a subsequent message from Jim Alateras, the problem appears to be that the representation size exceeds the capacity of the underlying pipe (1024), and no one is reading from the input stream side of the pipe.

But who made these daemon threads? Mixing blocking data structures with daemon threads is a bad idea. To quote from Java Concurrency in Practice:

"Daemon threads should be used sparingly -- few processing activities can be safely abandoned at any time with no cleanup. In particular, it is dangerous to use daemon threads for tasks that might perform any sort of I/O. Daemon threads are best saved for housekeeping tasks, such as a background thread that periodically removes expired entries from an in-memory cache. Daemon threads are not a good substitute for properly managing the life-cycle of services within an application."

--tim

On 10/15/07, J. Matthew Pryor [EMAIL PROTECTED] wrote:

My running app has about 15 threads with stack traces identical to this:

Daemon Thread [Thread-44] (Suspended)
    Unsafe.park(boolean, long) line: not available [native method]
    LockSupport.park() line: 118
    AbstractQueuedSynchronizer$ConditionObject.await() line: 1767
    ArrayBlockingQueue<E>.put(E) line: 368
    ByteUtils$PipeStream$2.write(int) line: 331
    ByteUtils$PipeStream$2(OutputStream).write(byte[], int, int) line: 99
    GZIPOutputStream.finish() line: 91 [local variables unavailable]
    EncodeRepresentation.write(OutputStream) line: 235
    ByteUtils$2.run() line: 133

We are using EncodeRepresentation, and the POST of our resources is working fine; not sure why these threads are staying around.

Thanks,
Matthew
Re: EncodeRepresentation.write leaving blocked threads around?
The thread is created by the ByteUtils class in its getStream() method:

/**
 * Returns an input stream based on the given representation's content and
 * its write(OutputStream) method. Internally, it uses a writer thread and a
 * pipe stream.
 *
 * @return A stream with the representation's content.
 */
public static InputStream getStream(final Representation representation)
        throws IOException {
    if (representation != null) {
        final PipeStream pipe = new PipeStream();

        // Creates a thread that will handle the task of continuously
        // writing the representation into the input side of the pipe
        Thread writer = new Thread() {
            public void run() {
                try {
                    OutputStream os = pipe.getOutputStream();
                    representation.write(os);
                    os.write(-1);
                    os.close();
                } catch (IOException ioe) {
                    ioe.printStackTrace();
                }
            }
        };

        // Starts the writer thread
        writer.start();
        return pipe.getInputStream();
    } else {
        return null;
    }
}

The problem is that, at least on my Mac OS X machine running Java 5, the above call to representation.write(os) blocks at this line:

encoderOutputStream.finish();

in the method below:

/**
 * Writes the representation to a byte stream.
 *
 * @param outputStream
 *            The output stream.
 */
public void write(OutputStream outputStream) throws IOException {
    if (canEncode()) {
        DeflaterOutputStream encoderOutputStream = null;

        if (this.encoding.equals(Encoding.GZIP)) {
            encoderOutputStream = new GZIPOutputStream(outputStream);
        } else if (this.encoding.equals(Encoding.DEFLATE)) {
            encoderOutputStream = new DeflaterOutputStream(outputStream);
        } else if (this.encoding.equals(Encoding.ZIP)) {
            encoderOutputStream = new ZipOutputStream(outputStream);
        } else if (this.encoding.equals(Encoding.IDENTITY)) {
            // Encoder unnecessary for identity encoding
        }

        if (encoderOutputStream != null) {
            getWrappedRepresentation().write(encoderOutputStream);
            encoderOutputStream.finish();
        } else {
            getWrappedRepresentation().write(outputStream);
        }
    } else {
        getWrappedRepresentation().write(outputStream);
    }
}

On 16/10/2007, at 11:05 PM, Tim Peierls wrote:
[earlier message quoted in full, trimmed]
[remainder of quoted message trimmed]
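Restlet's PipeStream is internal, but the blocking behavior described above can be reproduced with the JDK's own bounded piped streams. A minimal sketch (the class and sizes here are hypothetical, not Restlet code): the writer thread parks as soon as it gets 1024 bytes ahead of the reader, and only a consumer that drains the input side completely lets it terminate.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class PipeDrainSketch {

    // Writes 'size' bytes into a bounded pipe from a helper thread and
    // returns the input side, mimicking the shape of ByteUtils.getStream().
    static InputStream stream(final int size) throws IOException {
        final PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out, 1024); // bounded, like PipeStream
        Thread writer = new Thread(() -> {
            try {
                for (int i = 0; i < size; i++) {
                    out.write(i & 0xFF); // parks whenever the 1024-byte buffer is full
                }
                out.close();
            } catch (IOException ioe) {
                ioe.printStackTrace();
            }
        });
        writer.start();
        return in;
    }

    // Fully draining the input side is what lets the writer thread finish.
    static int drain(InputStream in) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        byte[] buf = new byte[512];
        int n;
        while ((n = in.read(buf)) != -1) {
            sink.write(buf, 0, n);
        }
        in.close();
        return sink.size();
    }

    public static void main(String[] args) throws IOException {
        // 10000 bytes > 1024-byte pipe capacity: without a reader, the writer
        // thread would park forever, exactly like the stack traces above.
        System.out.println(drain(stream(10000)));
    }
}
```

If the InputStream returned here is abandoned before EOF, the writer thread stays parked in write() indefinitely, which matches the suspended threads in the traces.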
Re: Newbie question: How to stream data between RESTlets
Why does no one go further and propose a concrete schema and classes for doing this? I really have no great knowledge in the area, so the following is just a rough expectation of what we would need to do while elaborating such a system.

Let's play with the HTTP protocol and say we have just one EVENT server that exposes the following REST interfaces:

1) http://newbie.com/feeds lists the HTML docs with the following information:

<div class="feeds">
  <ul>
    <li class="feed">
      <a href="http://newbie.com/feed/msf-17">Medical Sensor Feed # 17</a>
    </li>
    <li class="feed">
      <a href="http://newbie.com/feed/msf-27">Radio Sensor Feed # 27</a>
    </li>
    <!-- etc -->
  </ul>
</div>

2) http://newbie.com/feed/msf-11 lists the stream:

<div class="feed">
  This is a feed from
  <span class="source"><a href="http://newbie.com/feeds">source feed list</a></span>
  with the name
  <span class="name">http://newbie.com/feed/msf-11</span>
  <ul>
    <!-- data in the same manner -->
    <!-- never stops -->

Each GET to the service starts propagation of the current EVENT window to the client, and the request never stops:

- GET http://newbie.com/feed/msf-11?n=1500 means "GET 1500 data items from the feed, then refuse the connection"
- GET http://newbie.com/feed/msf-11?t=2007-10-17T21:00Z refuses the connection near the specified time
- GET http://newbie.com/feed/msf-11?e=fastinfoset produces the same content using some kind of binary encoding to compress
- GET http://newbie.com/feed/msf-11?e=plain

We would NEED to populate the proposed schema data per row/number. So this protocol would only be used by closed, proprietary systems that are fine doing things in such a manner. No interoperability here.

3) I don't know what to set in Content-Length; maybe we need to work as in the partial GET scheme (the one used by download managers).
4) We need to work with HTTP 1.1 KEEP-ALIVE switched on and use the Continuation feature (Grizzly / AsyncWeb / Jetty).

5) We need to create a StreamRepresentation class -- it's simple -- and probably use the write(WritableChannel) methods to produce content.

6) We MAY use direct NIO transfer of repeatable chunks like:

    </a>
  </li>
  <li class="feed">
    <a href="http://newbie.com/feed/msf-27">

7) AND last, we may look at the ability to use Restlet with a specific protocol for this purpose. I suspect MINA is a good way to connect Restlet to do this easily.

Hmm, has anyone tried to use the 4th item (Continuation) with Restlet?
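Item 5 can be sketched without any Restlet types. Assuming a StreamRepresentation would ultimately just be handed an OutputStream to fill (the class and method names below are hypothetical, not an existing API), the core of it is a loop that serializes one event per item and flushes so a keep-alive client sees data before the stream ends:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class EventStreamSketch {

    // Hypothetical stand-in for a StreamRepresentation.write(OutputStream):
    // serializes each event as one list item and flushes immediately so a
    // keep-alive client receives data items as they are produced.
    static void writeEvents(List<String> events, OutputStream out) throws IOException {
        for (String event : events) {
            out.write(("<li class=\"data\">" + event + "</li>\n")
                    .getBytes(StandardCharsets.UTF_8));
            out.flush(); // push each item; don't buffer the whole (endless) feed
        }
    }

    static String render(List<String> events) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeEvents(events, out);
        return out.toString("UTF-8");
    }

    public static void main(String[] args) throws IOException {
        System.out.print(render(List.of("temp=36.6", "temp=36.7")));
    }
}
```

The flush per item is the design point: with an unbounded feed there is no moment where the whole representation is "done", so each chunk has to reach the wire on its own.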
Re: EncodeRepresentation.write leaving blocked threads around?
On 10/16/07, J. Matthew Pryor [EMAIL PROTECTED] wrote:

The thread is created by the ByteUtils class in its getStream() method

Yes, but what are you doing with the InputStream returned by getStream()? I suspect that you aren't reading this stream completely, so the writing thread blocks forever. The writer thread is putting things in a capacity-bound queue; it might be better if the ByteUtils code used a timed offer() instead of put().

Independent of the blocking issue, if the thread created by the call to ByteUtils.getStream isDaemon() without a call to setDaemon(), that implies that the calling thread isDaemon(). And that's bad.

--tim
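Tim's timed-offer suggestion can be sketched against a plain ArrayBlockingQueue (the timeout value and failure handling here are assumptions, not what Restlet actually does): offer(e, timeout, unit) returns false instead of parking forever, giving the writer thread a chance to give up and clean up.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class TimedOfferSketch {

    // Try to hand 'value' to the reader side; unlike put(), this cannot
    // block forever when nobody is draining the queue.
    static boolean offerByte(BlockingQueue<Integer> pipe, int value, long timeoutMs)
            throws InterruptedException {
        return pipe.offer(value, timeoutMs, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> pipe = new ArrayBlockingQueue<>(2);
        System.out.println(offerByte(pipe, 1, 50)); // capacity available
        System.out.println(offerByte(pipe, 2, 50)); // fills the queue
        // Queue is full and no reader exists: put() would park here forever,
        // as in the stack traces; offer() just times out and reports failure.
        System.out.println(offerByte(pipe, 3, 50));
    }
}
```

A writer using this pattern would treat a false return as "reader has gone away" and abort the write, rather than leaving a parked thread behind.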
RE: Issue using StatusService
Hi Davide,

Good suggestion, I've just updated the StatusService Javadocs in SVN:

/**
 * Service to handle error statuses. If an exception is thrown within your
 * application or Restlet code, it will be intercepted by this service if it is
 * enabled.
 *
 * When an exception or an error is caught, the
 * {@link #getStatus(Throwable, Request, Response)} method is first invoked to
 * obtain the status that you want to set on the response. If this method isn't
 * overridden or returns null, the {@link Status#SERVER_ERROR_INTERNAL} constant
 * will be set by default.
 *
 * Also, when the status of a response returned is an error status (see
 * {@link Status#isError()}), the
 * {@link #getRepresentation(Status, Request, Response)} method is then invoked
 * to give your service a chance to override the default error page.
 *
 * If you want to customize the default behavior, you need to create a subclass
 * of StatusService that overrides some or all of the methods mentioned above.
 * Then, just create an instance of your class and set it on your Component or
 * Application via the setStatusService() methods.
 */
Best regards,
Jerome

-----Original Message-----
From: Davide Angelocola [mailto:[EMAIL PROTECTED]]
Sent: Friday, October 12, 2007 20:50
To: discuss@restlet.tigris.org
Subject: Re: Issue using StatusService

Hi Jerome,

On Thursday 11 October 2007 21:57:02 Jerome Louvel wrote:

When an exception or error is thrown, it is ultimately caught by the Application's StatusFilter:

public void doHandle(Request request, Response response) {
    try {
        super.doHandle(request, response);
    } catch (Throwable t) {
        response.setStatus(getStatus(t, request, response));
    }
}

The ApplicationStatusFilter, a subclass of StatusFilter, has this implementation, which calls your StatusService instance:

public Status getStatus(Throwable throwable, Request request, Response response) {
    Status result = getApplication().getStatusService().getStatus(
            throwable, request, response);

    if (result == null)
        result = super.getStatus(throwable, request, response);

    return result;
}

Your getRepresentation(Status, Request, Response) method is called later, but only if your status is an error status. See this StatusFilter code:

public void afterHandle(Request request, Response response) {
    // If no status is set, then the success ok status is assumed.
    if (response.getStatus() == null) {
        response.setStatus(Status.SUCCESS_OK);
    }

    // Do we need to get a representation for the current status?
    if (response.getStatus().isError()
            && ((response.getEntity() == null) || overwrite)) {
        response.setEntity(getRepresentation(response.getStatus(),
                request, response));
    }
}

This implementation looks fine to me (I've just tested it). I hope this helped clarify the expected behavior on your side. Let me know if it still doesn't work.

Yeah... now it works. Thanks very much. Moreover, could this behavior be better documented in the API or in the tutorial?

But I've got another issue with the representation.
Here is the code:

@Override
public Status getStatus(Throwable throwable, Request request, Response response) {
    final Writer result = new StringWriter();
    final PrintWriter printWriter = new PrintWriter(result);
    throwable.printStackTrace(printWriter);
    return new Status(Status.SERVER_ERROR_INTERNAL, result.toString());
}

@Override
public Representation getRepresentation(Status status, Request request, Response response) {
    StringBuilder buffer = new StringBuilder();
    buffer.append("<html><head><title>").append(status.getName()).append("</title></head><body>");
    buffer.append(status.getCode()).append("<br/>");
    buffer.append(status.getName()).append("<br/>");
    buffer.append(status.getUri()).append("<br/>");
    buffer.append(status.getDescription()).append("<br/>");
    buffer.append("</body></html>");
    return new StringRepresentation(buffer.toString(), MediaType.TEXT_HTML);
}

In the browser I see the HTTP response, something like:

Date: Fri, 12 Oct 2007 00:03:07 GMT
Server: Noelios-Restlet-Engine/1.0.5
Content-Type: text/html; charset=ISO-8859-1
Content-Length: 4498

<html><head> [...]

Am I wrong?

--
Best Regards,
Davide Angelocola
RE: Cache-control header long term strategy
Hi Rob,

> trunk sends the deprecation INFO message when setting Cache-control via the addAdditionalHeaders hack.

The message indicates that the manual usage of this header is discouraged (not deprecated).

> The Cache-control header setting is important for trying to interoperate with certain browsers.

Absolutely.

> I searched but did not find any traffic on the planned means for supporting it.

We have a pending RFE to add full caching support:
http://restlet.tigris.org/issues/show_bug.cgi?id=25

> I'd love to try to contribute bits to land this one item in 1.1 if somebody can point me to the right guidance on how Restlet should support it.

That would be a great addition to the project. I can provide you design/impl support.

> I have always treated this as an explicit setting made by code, never intuited by other behavior, so a simple setCacheControl(Message.CACHE_CONTROL_NO_CACHE) or something like that would do for my purposes ... but happy to run down the road of something more elegant if someone's already thunk it up.

Looking at the specs:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9

The idea would be to map the *semantics* of those headers into a dedicated cacheControl property attached to the Message class. This property would be an instance (possibly null) of a CacheControl class. Instead of passing a raw list of caching directives, it would provide a higher-level API, easier to understand and less fragile.
Here is a proposition for a list of properties for this CacheControl class:

1) cacheable : Enum (CACHEABLE_NO, CACHEABLE_PUBLIC, CACHEABLE_PRIVATE)
   See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.1

2) storable : boolean (true by default)
   See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.2

3) sharedMaximumAge : Date
   maximumAge : Date
   minimumFreshness : Long
   maximumStaleness : Long
   See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.3

4) returnOnlyIfCached : boolean (false by default)
   cacheMustRevalidate : boolean (false by default)
   proxyMustRevalidate : boolean (false by default)
   See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.4

5) transformable : boolean (true by default)
   See http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.5

* Maybe we should also consider the support of cache extensions, maybe later only.

* We should also clearly indicate in the Javadocs which properties are supported by:
  - responses only
  - requests only
  - both

* We should also support the mapping from the older "Pragma: no-cache" HTTP 1.0 header to those properties when parsing HTTP requests.

Best regards,
Jerome
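The proposed CacheControl class doesn't exist yet, so the following is only a sketch of the property list above with the stated defaults. The enum name, the default cacheability, and the use of Long seconds for the age properties (the proposal lists Date) are assumptions for illustration:

```java
// Sketch of the proposed CacheControl message property (not an existing
// Restlet class); defaults follow the proposal above where stated.
public class CacheControl {

    public enum Cacheability { CACHEABLE_NO, CACHEABLE_PUBLIC, CACHEABLE_PRIVATE }

    // 14.9.1 - which caches may store the response (default is a guess)
    private Cacheability cacheable = Cacheability.CACHEABLE_PUBLIC;

    // 14.9.2 - "no-store" maps to storable = false
    private boolean storable = true;

    // 14.9.3 - expiration directives (null = directive not present)
    private Long sharedMaximumAge;
    private Long maximumAge;
    private Long minimumFreshness;
    private Long maximumStaleness;

    // 14.9.4 - revalidation and reload controls
    private boolean returnOnlyIfCached = false;
    private boolean cacheMustRevalidate = false;
    private boolean proxyMustRevalidate = false;

    // 14.9.5 - "no-transform" maps to transformable = false
    private boolean transformable = true;

    public Cacheability getCacheable() { return cacheable; }
    public void setCacheable(Cacheability cacheable) { this.cacheable = cacheable; }
    public boolean isStorable() { return storable; }
    public void setStorable(boolean storable) { this.storable = storable; }
    public boolean isTransformable() { return transformable; }
    public void setTransformable(boolean transformable) { this.transformable = transformable; }
    public Long getMaximumAge() { return maximumAge; }
    public void setMaximumAge(Long seconds) { this.maximumAge = seconds; }

    public static void main(String[] args) {
        CacheControl cc = new CacheControl();
        System.out.println(cc.isStorable() + " " + cc.isTransformable());
    }
}
```

The point of the higher-level API is that a connector can serialize these typed properties into a syntactically valid Cache-Control header, instead of callers concatenating raw directive strings.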
Re: EncodeRepresentation.write leaving blocked threads around?
Matt,

It will be interesting to see whether we get the same problem on Linux or whether it is specific to the Mac. On Windows, at least, I don't see any of these threads. You said in the email that the POST actually completes but the thread doesn't terminate. Is this correct?

cheers
/jima

J. Matthew Pryor wrote:
[earlier message quoted in full, trimmed]
Re: Issue using StatusService
On Tuesday 16 October 2007 20:43:33 Jerome Louvel wrote:

> Hi Davide,
> Good suggestion, I've just updated the StatusService Javadocs in SVN.

Thanks very much :-)

--
Davide Angelocola
RE: HEAD not well supported?
Hi all,

Thanks for the quality of the feedback. I feel like I'm now grasping all aspects of the problem and can propose a solution:

1) Split the Resource class into an abstract Handler class and a Resource subclass.

2) Handler only works at the lower API level and specifies only the handle*() and allow*() methods. The default behavior is very basic:
- nothing is allowed
- handleHead() redirects to handleGet(), and this is clearly documented
- handleOptions() uses updateAllowedMethods() to automatically update the Response based on the available allow*() methods and their return values
- handleDelete(), handleGet(), handlePost() and handlePut() set the status to SERVER_ERROR_INTERNAL

3) Handler has convenience methods getApplication() and getLogger(), plus the context, request and response properties. It also has a new allowHead() method, which is invoked by the Finder.

4) Resource offers a higher-level API that, as Tim said, is easier to map to domain objects, and it handles content negotiation and conditional methods:
- handleGet() is implemented based on the variants property and the getPreferredVariant() and getRepresentation(Variant) methods
- handlePost() is implemented by calling an acceptRepresentation(Representation) method, to match the REST/HTTP 1.1 terminology and have less parallel names
- handlePut() is implemented by calling a storeRepresentation(Representation) method, for the same reasons
- handleDelete() is implemented by calling a deleteAll() or delete() or removeAll() or remove()
See REST terminology here: http://roy.gbiv.com/pubs/dissertation/rest_arch_style.htm#sec_5_2_1_2
See HTTP 1.1 methods terminology here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html

5) Also, in order to provide an equivalent of the Handler.allow*() methods, I propose to map them (for GET, POST, PUT and DELETE only):
- Handler.allowGet() -> Resource.readable property
- Handler.allowPost() -> Resource.modifiable property
- Handler.allowPut() -> Resource.modifiable property
- Handler.allowDelete() -> Resource.modifiable property

That's it. Finder would also require a few minor changes to only know about the Handler class. This would be backward compatible. Feedback welcome.

Best regards,
Jerome

-----Original Message-----
From: Michael Terrington [mailto:[EMAIL PROTECTED]]
Sent: Sunday, October 14, 2007 01:25
To: discuss@restlet.tigris.org
Subject: Re: HEAD not well supported?

Tim Peierls wrote:

There's a false parallel here that I don't think should be encouraged by providing parallel names. getRepresentation takes a Variant argument, handleGet does not; post takes a Representation argument, handlePost does not. If anything, I'd argue for names that were *less* parallel, e.g., add instead of post, and remove instead of delete.

+1 for less parallel names, especially if handle* is moved to Finder. The only difficulty I see is coming up with a good name for post. "Add" to me corresponds to "remove", so perhaps createResource?

Regards,
Michael.
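The points above can be sketched as a class skeleton. All method bodies here are guesses at the described defaults (using a bare int status in place of Restlet's Response), not actual Restlet 1.1 code:

```java
// Sketch of the proposed Handler/Resource split; the defaults follow the
// proposal above, but none of this is real Restlet code.
abstract class Handler {
    protected int status = 200;

    // Point 2: nothing is allowed by default.
    public boolean allowGet() { return false; }
    public boolean allowPost() { return false; }
    public boolean allowPut() { return false; }
    public boolean allowDelete() { return false; }

    // Point 3: allowHead() piggybacks on GET support.
    public boolean allowHead() { return allowGet(); }

    // Point 2: default handlers report an internal error.
    public void handleGet() { status = 500; }
    public void handlePost() { status = 500; }
    public void handlePut() { status = 500; }
    public void handleDelete() { status = 500; }

    // Point 2: HEAD redirects to GET; a connector would drop the entity.
    public void handleHead() { handleGet(); }
}

class Resource extends Handler {
    private boolean readable = true;
    private boolean modifiable = false;

    // Point 5: map the allow*() methods onto two properties.
    @Override public boolean allowGet() { return readable; }
    @Override public boolean allowPost() { return modifiable; }
    @Override public boolean allowPut() { return modifiable; }
    @Override public boolean allowDelete() { return modifiable; }

    public void setReadable(boolean readable) { this.readable = readable; }
    public void setModifiable(boolean modifiable) { this.modifiable = modifiable; }
}

public class HandlerSketch {
    public static void main(String[] args) {
        Resource r = new Resource();
        System.out.println(r.allowGet() + " " + r.allowPost());
        r.setModifiable(true);
        System.out.println(r.allowPut());
    }
}
```

With this shape, Finder only needs to know about Handler, while applications subclass Resource, which matches the backward-compatibility claim.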
Re: EncodeRepresentation.write leaving blocked threads around?
None of this is code we are writing; it's all internal to Restlet. All we are doing is calling EncodeRepresentation.getText(), and that method call causes all the other stuff to happen. I can't see how our code has any impact on the closing of the stream and the shutting down of the thread.

It also appears that this is not a problem under Linux. We will test under Windows, but it may be a Mac-specific thing.

In terms of end-to-end functionality, our use of GZIP encoding works fine and the encoded representations make it from client to server OK, so the concern is simply that there end up being hundreds of these dead corpse threads left over as our software runs for longer.

Thanks for your help,
Matthew

On 17/10/2007, at 1:16 AM, Tim Peierls wrote:
[earlier message quoted in full, trimmed]