Re: [gwt-contrib] Re: Allow RPC stats system extensions (issue751801)
I don't know anything about the *current* stats support in RPC. Judging from the revision logs, it looks like Bob might have added that system. Added to the CC. -Lex

On Tue, Aug 24, 2010 at 5:29 PM, Pascal Muetschard pmuetschard...@google.com wrote: Could I please get some feedback on this and get it submitted? Thanks.

On Aug 10, 2:13 pm, pmuetschard...@google.com wrote: Reviewers: Dan Rice, scottb, Lex. Description: This patch allows for the extension of the RPC stats system by moving the stats methods into an object, making them non-static. This would allow application developers to extend the ProxyCreator to use a different implementation of the stats methods. Please review this at http://gwt-code-reviews.appspot.com/751801/show

Affected files:
user/src/com/google/gwt/rpc/client/impl/RpcCallbackAdapter.java
user/src/com/google/gwt/rpc/client/impl/RpcServiceProxy.java
user/src/com/google/gwt/user/client/rpc/impl/RemoteServiceProxy.java
user/src/com/google/gwt/user/client/rpc/impl/RequestCallbackAdapter.java
user/src/com/google/gwt/user/client/rpc/impl/RpcStatsContext.java
user/src/com/google/gwt/user/rebind/rpc/ProxyCreator.java

-- http://groups.google.com/group/Google-Web-Toolkit-Contributors
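The refactoring the patch describes can be sketched in plain Java. Everything below is illustrative, not the actual GWT API: the class names, constructor, and `statsEvent` method are assumptions used only to show the static-to-instance pattern, where a subclass can swap in a different stats implementation without touching the proxy code.

```java
// Hypothetical sketch of moving static stats calls behind an overridable instance.
public class StatsDemo {
    /** Default stats implementation; instance methods can be overridden. */
    static class RpcStatsContext {
        private final int requestId;
        RpcStatsContext(int requestId) { this.requestId = requestId; }
        String statsEvent(String method, String eventType) {
            return "rpc:" + method + ":" + requestId + ":" + eventType;
        }
    }

    /** A subclass supplies a different stats format; the proxy code is unchanged. */
    static class CustomStatsContext extends RpcStatsContext {
        CustomStatsContext(int requestId) { super(requestId); }
        @Override String statsEvent(String method, String eventType) {
            return "custom/" + method + "/" + eventType;
        }
    }

    public static void main(String[] args) {
        // The proxy only sees the base type; behavior is chosen by the subclass.
        RpcStatsContext ctx = new CustomStatsContext(1);
        System.out.println(ctx.statsEvent("getUser", "begin")); // custom/getUser/begin
    }
}
```

With static methods, the second implementation would be impossible without editing the original class; that is the whole point of the patch.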
[gwt-contrib] Re: Faster edit-distance computation in JsFunctionClusterer (issue669801)
Ray, does the patch look good to you? -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Phasing in a new, unified linker
I don't have a strong opinion about it. They can be non-final, with simply no particular effort to truly make them extensible. I think it might be possible to move the template JS files to GWT-translated code with extension points managed through rebinding and overriding. Until then, making changes that involve JS modifications effectively requires you to cut and paste the whole file. There is now some templating, by the way. You asked about the __COMPUTE_SCRIPT_BASE__ reference. SelectionScriptLinker -- the base class for all the linkers in the subject line -- substitutes that string with the contents of computeScriptBase.js, a file included within gwt-user.jar. Thus, linkers that want the standard implementation of computeScriptBase() can simply include that string rather than copying the whole chunk of JS. Such templating is pretty limited even in principle, however, and in practice it's so far only done for that file and one other. Incidentally, you mention moving code into Java. That strategy actually came to pass for runAsync code fetching. Originally, linkers would emit a function that the code loader calls to download code. Now, there is a deferred binding, and the choice of linker causes the deferred binding choice to differ. Thus, code loaders are now simply Java classes that implement a simple Java interface. It's much easier to maintain. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
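The substitution being described is simple marker replacement. A minimal sketch follows; the template text and the contents of the substituted file here are invented, only the __COMPUTE_SCRIPT_BASE__ marker name comes from the thread.

```java
// Minimal sketch of the marker substitution SelectionScriptLinker performs.
public class TemplateDemo {
    static String substitute(String template, String marker, String contents) {
        // Plain literal replacement: every occurrence of the marker is
        // replaced by the file contents.
        return template.replace(marker, contents);
    }

    public static void main(String[] args) {
        String template = "function init() { __COMPUTE_SCRIPT_BASE__ start(); }";
        String base = "var base = computeScriptBase();";
        System.out.println(substitute(template, "__COMPUTE_SCRIPT_BASE__", base));
        // prints: function init() { var base = computeScriptBase(); start(); }
    }
}
```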
[gwt-contrib] Re: Faster edit-distance computation in JsFunctionClusterer (issue669801)
On Wed, Jul 28, 2010 at 6:15 PM, avassalo...@google.com wrote: Oh, sorry. I made this comment somewhere else. The problem is the endStatements() method doesn't use the regex to recognize the other declaration style. Ah, yes! Well, at the least this code should be moved to a subroutine. I believe the version with the regex is the desired version. In addition, I believe the current regex doesn't match the declaration emitted by the cross-linker. The dot in the name prevents a match. I thought so at first, but it's using find(), so it should still match. Perhaps it matches too many. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
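The find()-versus-matches() distinction is the crux here: matches() must span the entire input, so a dotted name before the declaration defeats it, while find() only needs a matching substring anywhere. A small sketch, with a made-up declaration string and regex:

```java
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        // Hypothetical declaration in the style a cross-site linker might emit;
        // the dotted prefix precedes the "function" keyword.
        String decl = "pkg.MyWidget = function MyWidget(){}";
        Pattern p = Pattern.compile("function \\w*\\(");
        // matches() anchors at both ends, so the dotted prefix breaks it.
        System.out.println(p.matcher(decl).matches()); // false
        // find() scans for a matching substring, so it still succeeds.
        System.out.println(p.matcher(decl).find());    // true
    }
}
```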
Re: [gwt-contrib] Phasing in a new, unified linker
On Mon, Jul 26, 2010 at 6:56 PM, John Tamplin j...@google.com wrote: Well, we do know there will be other linkers, and if there aren't extension points defined they will be done via cut-and-paste, which is what led to the current state we are in. No question that extension points are useful. Let's add them, but only when we have an idea of what we are supporting with them. Note that IFrameLinker and XSLinker have several extension points, and yet nonetheless there is a lot of cut and paste going on. We didn't add (all of) the ones that people really ended up needing. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Phasing in a new, unified linker
On Mon, Jul 26, 2010 at 12:37 PM, Scott Blum sco...@google.com wrote: SGTM as far as process. Is the new linker designed to curtail extension, or to sanely encourage it? The existing primary linkers ended up getting extended in brittle ways. That's a good point. Let's make it a final class to start with, and open up extension points later as issues come up. There are no known needs for extensions at this point. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] proposing a hypothetically breaking change to gwt-servlet.jar: comments?
On Mon, Jun 28, 2010 at 3:57 PM, Freeland Abbott fabb...@google.com wrote: So, how breaking are we willing to be to correct that? My knee-jerk reaction is that we don't want to do a lot of breaking just to tidy up the definition of gwt-servlet. The only benefit is to reduce the size of the whitelist, right? That's a small benefit. Nonetheless, we would still benefit from making gwt-servlet based on a whitelist rather than a blacklist. It would immediately shrink the jar size, and it would prevent us from accidentally sprawling it even larger. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] proposing a hypothetically breaking change to gwt-servlet.jar: comments?
I think everyone is saying the same thing. Use Miroslav's suggested ASM tool to get a first cut, and then bake the resulting whitelist into the Ant files. Then, little by little, refactor things so that the whitelist can shrink, until all that's left is **/shared/** and **/server/**. I'd only add that the ASM tool does get into 4+ hours of work. So if the list Freeland already has looks pretty good, we might instead start there. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: How To Simulate KeyPress
I have a question related to this. I have a KeyPressHandler in the TextBox. When the user presses Enter, I want to simulate it as Tab. I have added the necessary code to capture the Enter event, and then I fire the new event using DomEvent.fireNativeEvent:

NativeEvent tabEvent = Document.get().createKeyPressEvent(false, false, false, false, 9, 9);
DomEvent.fireNativeEvent(tabEvent, textBox);

What I am hoping to see is the 'tabbing' of the current cursor to the next Focusable widget, but it doesn't seem to happen. What am I missing? It basically re-enters the KeyPressHandler, since this is the widget on which the event is fired. In short, I need a way to suppress the current KeyPressHandler and have the browser-level 'tabbing' behavior occur. Any help is appreciated. Regards

On May 5, 11:01 am, Thomas Broyer t.bro...@gmail.com wrote: On May 4, 9:43 pm, Lukas Laag laa...@gmail.com wrote: It is possible to generate events programmatically in JavaScript. This page gives a good overview of the topic, including key events (look for 'Manually firing events'): http://www.howtocreate.co.uk/tutorials/javascript/domevents. I do not know if generating a tab event like this in a JSNI method [you actually don't need JSNI: just use Document.get().createXxxEvent() and DomEvent.fireNativeEvent()] would solve your focus problem. I doubt it would, otherwise people would have proposed it as a workaround: http://stackoverflow.com/questions/2398528/set-textbox-focus-in-mobil... (note: according to the web, it's a bug in WebKit)

-- You received this message because you are subscribed to the Google Groups Google Web Toolkit group. To post to this group, send email to google-web-tool...@googlegroups.com. To unsubscribe from this group, send email to google-web-toolkit+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/google-web-toolkit?hl=en.
Re: [gwt-contrib] Command pattern and GWT.runAsync
On Fri, Jun 4, 2010 at 5:13 AM, David david.no...@gmail.com wrote: Less maintenance on the async, declarative transaction management, undo, batching, less web.xml tweaking, ... there are many reasons why we also use a command pattern. With the system as it is, I believe your best bet is to make an abstract class plus *templates* of key methods. The templates are then copied down to each subclass. An example is in GWT's Showcase sample, where each content pane implements the following method:

@Override
protected void asyncOnInitialize(final AsyncCallback<Widget> callback) {
  GWT.runAsync(CwAbsolutePanel.class, new RunAsyncCallback() {
    public void onFailure(Throwable caught) {
      callback.onFailure(caught);
    }

    public void onSuccess() {
      callback.onSuccess(onInitialize());
    }
  });
}

This method calls onInitialize() to do the work specific to each example pane. You could imagine it also calling some other protected methods in key places. In general, it would be nicer if runAsync included some extra callbacks in key places so that this kind of thing isn't necessary. Then whenever you change the template, you could change it in one place instead of having to modify all the copies in all the subclasses. Initially, the issue was that it wasn't obvious what hook points people would want. At this point, the issue is more a matter of priorities. To do it well would require surveying what everyone is doing with runAsync and making sure the right hooks are there for the majority of them. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Tabless
On Thu, Jun 3, 2010 at 12:56 PM, Ray Ryan rj...@google.com wrote: No argument. And since we've never, ever managed to actually delete a deprecated class so far as I know, the issue may not come up for a while… There are some counterexamples. For example: http://gwt-code-reviews.appspot.com/139804/show To get deprecated things removed, it's key that users have something to switch to. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: CompilePerms and classpath loading problem.
On Tue, Jun 1, 2010 at 11:56 AM, Ray Ryan rj...@google.com wrote: Yup. Is the fix to make it use the resource oracle? To play it safe: First check resource oracle. Next check the context class loader, as in Marko's email. Then check wherever it looks now (the system class loader?). Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: CompilePerms and classpath loading problem.
On Tue, Jun 1, 2010 at 2:35 PM, Marko Vuksanovic markovuksano...@gmail.com wrote: Class loaders are checked in parent-to-child direction - so if you try to fetch a resource from a context class loader, the system class loader is the first that will be checked, and only if the resource is not found there will the next child be checked... and so on. So if something is found in the context class loader, all parent class loaders have been checked. Somebody correct me if I'm wrong. I don't believe it's necessarily true that the system loader is a parent of the context loader. It's common, but not necessary. The only loader you can't get away from is the boot class loader. That said, if it's not on the context loader, you might want to ignore it if you can get away with it. For that matter, the same goes for things not in the resource oracle. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
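The parent-first delegation being discussed can be demonstrated in plain Java. A child URLClassLoader created with no URLs of its own can still resolve classes, because its parents (ultimately the bootstrap loader) are consulted first; anything it finds was, by construction, found by a parent.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class DelegationDemo {
    public static void main(String[] args) throws Exception {
        ClassLoader parent = ClassLoader.getSystemClassLoader();
        // No URLs of its own: everything this loader resolves must come
        // from its parent chain, illustrating parent-first delegation.
        try (URLClassLoader child = new URLClassLoader(new URL[0], parent)) {
            Class<?> c = Class.forName("java.lang.String", false, child);
            // Same Class object: it was loaded by a parent (the bootstrap
            // loader), not by the child.
            System.out.println(c == String.class); // true
        }
    }
}
```

Note this only shows the common arrangement; as the reply points out, nothing forces the context class loader to have the system loader as a parent.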
Re: The new 2.1M1 release appears to have a few challenges
The new data-flow optimizer is choking on the code. Some sort of uncommon syntax in the code is breaking one of its assertions. Let me poke around a little to see what's up. Filed as Issue 4957: http://code.google.com/p/google-web-toolkit/issues/detail?id=4957 Lex
Adding CSS style dynamically
Hello. I have an odd situation where I need to add style attributes dynamically/programmatically, depending on what the saved data is. In short, instead of defining the CSS style definitions in a file and referencing them, I need to create those per widget, depending on the data. I tried to use DOM.setElementProperty, but it doesn't seem to work properly. It doesn't set the background or the foreground, and the absolute positioning also gets screwed up. Can anyone share tips/insights on how I can achieve this?

private void setStyle(Element dom, ScreenElement e) {
  DOM.setElementProperty(dom, "position", "absolute");
  DOM.setElementProperty(dom, "top", e.getYpos() + "px");
  DOM.setElementProperty(dom, "left", e.getXpos() + "px");
  DOM.setElementProperty(dom, "height", e.getHeight() + "px");
  DOM.setElementProperty(dom, "width", e.getWidth() + "px");
  DOM.setElementProperty(dom, "fontSize", e.getFontSize() + "pt");
  DOM.setElementProperty(dom, "fontFamily", e.getFontFamily());
  DOM.setElementProperty(dom, "backgroundColor", e.getBackColor());
  DOM.setElementProperty(dom, "color", e.getForeColor());
  if (e.isFontStyleUnderline()) {
    DOM.setElementProperty(dom, "textDecoration", "underlined");
  } else {
    DOM.setElementProperty(dom, "textDecoration", "normal");
  }
  if (e.isFontStyleItalic()) {
    DOM.setElementProperty(dom, "fontStyle", "italic");
  } else {
    DOM.setElementProperty(dom, "fontStyle", "normal");
  }
  if (e.isFontStyleBold()) {
    DOM.setElementProperty(dom, "fontWeight", "bold");
  } else {
    DOM.setElementProperty(dom, "fontWeight", "normal");
  }
}
Re: [gwt-contrib] Re: Fix GWT logging in Devmode (issue437801)
On Wed, May 5, 2010 at 2:15 PM, Ray Ryan rj...@google.com wrote: You should be able to throw subclasses of RuntimeException, no? For example, UnsupportedOperationException. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: Question about CompilePerms and adding gwt module source at runtime.
On Mon, May 3, 2010 at 7:36 AM, Marko Vuksanovic markovuksano...@gmail.com wrote: I solved the problem... This had nothing to do with GWT. The problem was with adding a folder to the Java classpath dynamically. Although at first I thought I had done it correctly, it turned out that that's a little tricky... For anyone else with the same problem - here's a gist on how to solve it: http://gist.github.com/387972 As you can see, a call to a protected method is required in order to add a folder to the classpath. It would also be possible to create a new class loader and set that as the context class loader. Then GWT should use the new one. Alternatively, when the JVM is launched on the remote machine, pass the extra directories in using the -classpath option. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: Question about CompilePerms and adding gwt module source at runtime.
On Tue, May 4, 2010 at 12:25 PM, Marko Vuksanovic markovuksano...@gmail.com wrote: Hi Lex, The first solution seems interesting... could you please provide a code snippet just to get me started... Did you mean something like Thread.currentThread().setContextClassLoader(urlClassloader); Exactly. Where urlClassloader is a freshly made URLClassLoader that you create. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
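A minimal, self-contained sketch of that suggestion follows. The "extra-sources/" directory is a placeholder assumption; point the URL at whatever directory or jar holds the module sources you want visible. Restoring the previous loader afterwards is good hygiene, not something the thread requires.

```java
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;

public class ContextLoaderDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder path: substitute the directory you need on the classpath.
        URL extra = new File("extra-sources/").toURI().toURL();
        ClassLoader old = Thread.currentThread().getContextClassLoader();
        // Freshly made URLClassLoader, chained to the old context loader.
        URLClassLoader urlClassloader = new URLClassLoader(new URL[] { extra }, old);
        Thread.currentThread().setContextClassLoader(urlClassloader);
        try {
            // Code that consults the context class loader now sees the extra URL.
            System.out.println(
                Thread.currentThread().getContextClassLoader() == urlClassloader); // true
        } finally {
            // Restore the original loader when done.
            Thread.currentThread().setContextClassLoader(old);
        }
    }
}
```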
How to add remove icon on TabLayoutPanel
Hello there.. First time here, so be gentle with me. :-) I've searched around but couldn't find the answer to my question, so here it goes. I have added tabs to the TabLayoutPanel, both through UiBinder and programmatically. However, I need a way to remove a tab from the TabLayoutPanel when the user clicks an 'x' icon (or similar) in the tab header. I can't seem to find a way to add this 'x' icon to the header, nor a way to extend the g:header to provide an image or text. Can anyone help or direct me to a link where I can find more info? Cheers.
[gwt-contrib] Re: When JsStaticEval converts a number to a string, use the JS (issue335801)
Interesting subthreads aside, does the change in this patch LGT everyone? On Fri, Apr 9, 2010 at 6:58 PM, John Tamplin j...@google.com wrote: On Fri, Apr 9, 2010 at 6:29 PM, Lex Spoon sp...@google.com wrote: Changing it is fine. However, the ideal change would be to whichever way takes the fewest bytes! Well, not if we are using it in contexts where it is expected to be all digits, as in the original bug. True. In this particular subthread, the context is JsToStringVisitor. This class already discards whitespace and drops comments. It also does rewrites such as 1.23456E5 to 123456, and those are more than fine. Emit it whichever way is shortest. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: When JsStaticEval converts a number to a string, use the JS (issue335801)
On Fri, Apr 9, 2010 at 4:50 PM, Ray Cromwell cromwell...@google.com wrote: What about JS string promotion, though? Imagine the following:

var x = "Hello";
var y = 2.0041234E3;
alert(x + y);

If y was originally 20041234000 but JsToStringGenerationVisitor serialized it in scientific notation, then this would be wrong, since the user would expect "Hello 20041234000". Testing this in the Chrome console, it works, because y = 2.0041234E3 gets toString'ed by the JS runtime as 20041234, so maybe you're right. Still doesn't make me feel warm and fuzzy inside. Changing it is fine. However, the ideal change would be to whichever way takes the fewest bytes! Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors To unsubscribe, reply using remove me as the subject.
Re: [gwt-contrib] GWT 2.0.3 Compiler Bug?
Hey Jay, I haven't replicated the problem using your example, but I think I see the bug anyway. There is an optimization to simplify at compile time expressions like 2004318071+'', and it converts the number part to a string using the equivalent of Double.toString. That is where the scientific notation comes from. However, a real JS VM apparently has a different algorithm for turning a number into a string. I'll work on the JsStaticEval bug. Here's an issue for it: http://code.google.com/p/google-web-toolkit/issues/detail?id=4830 Jay, if you can minimize the problem down to something I can replicate, I can test that it is in fact the problem you are seeing. Alternatively, have you ever built and used GWT from trunk? If so, you could verify the fix once it's committed. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
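The divergence is easy to see from plain Java: Double.toString switches to scientific notation for magnitudes of 1e7 and above, whereas a JS engine's number-to-string conversion would render the same value as plain digits ("20041234000").

```java
public class NumberFormatDemo {
    public static void main(String[] args) {
        // Below 1e7, Java prints plain decimal digits, as JS would.
        System.out.println(Double.toString(9999999d));     // 9999999.0
        // At 1e7 and above, Java switches to scientific notation,
        // which is where JsStaticEval's output diverges from a real JS VM.
        System.out.println(Double.toString(20041234000d)); // 2.0041234E10
    }
}
```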
Re: [gwt-contrib] Re: Array implementation for Lightweight Collections. Pure Java implementation only. (issue232801)
On Thu, Mar 25, 2010 at 2:16 PM, Freeland Abbott fabb...@google.com wrote: Am I right to think that the problem with builder.create() is that it implies a defensive copy, to allow you to do builder.add() afterwards, and we want to avoid that? (If not, then what is the problem?) The solution to that could indeed be a more clever builder: at create() time, we return the existing array as an ImmutableArray, *and let go of it in the builder, moving it to a previousList field or some such.* If the user does indeed later reuse the builder with some add() (or remove() or whatever), we do the defensive copy then, lazily. Presumably two back-to-back create() calls could just return the same list, since it's immutable anyway. That works fine. It won't always optimize as well, though. For a builder created in one method, built up, and then turned into an immutable collection, anything goes. Unwrap the fields of the builder object, inline the add/remove/etc. methods, and use data flow for the rest. However, for a builder passed around to multiple methods, this looks much harder, and surely the optimizer won't always figure it out. For freeze, I presume it works out that in the prod-mode version the frozen collection really is the same object? In that case, we should get the tightest possible code just by inlining add/remove/etc. There isn't any fancy inference needed to prove that add() is never called after create(). -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors To unsubscribe from this group, send email to google-web-toolkit-contributors+unsubscribe@googlegroups.com or reply to this email with the words REMOVE ME as the subject.
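Freeland's lazy-defensive-copy builder can be sketched in a few lines of plain Java. All names here are invented for illustration: create() hands out the internal list, and the copy happens only if the builder is mutated afterwards, so back-to-back create() calls share one backing list.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LazyBuilderDemo {
    /** Sketch of the lazy-copy builder discussed above (names invented). */
    static class Builder<T> {
        private List<T> current = new ArrayList<>();
        private List<T> shared;   // list handed out by the last create(), if any

        Builder<T> add(T item) {
            if (shared != null) {
                // The builder is being reused after create(): do the
                // defensive copy now, lazily, instead of at create() time.
                current = new ArrayList<>(shared);
                shared = null;
            }
            current.add(item);
            return this;
        }

        List<T> create() {
            if (shared == null) {
                shared = current;  // let go of the list; no copy made
            }
            return Collections.unmodifiableList(shared);
        }
    }

    public static void main(String[] args) {
        Builder<String> b = new Builder<>();
        List<String> first = b.add("a").add("b").create();
        List<String> again = b.create();           // no copy: same backing list
        List<String> bigger = b.add("c").create(); // copy happens here, lazily
        System.out.println(first);                 // [a, b]
        System.out.println(first.equals(again));   // true
        System.out.println(bigger);                // [a, b, c]
    }
}
```

Note how `first` stays [a, b] even after the later add("c"): the handed-out list is never mutated, which is exactly the property the defensive copy exists to protect.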
[gwt-contrib] whirlwind overview of distributed GWT builds
I can't find them documented anywhere, so I have tossed up a brief web page. http://code.google.com/p/google-web-toolkit/wiki/DistributedBuilds Most GWT apps build fast enough that a distributed build isn't worth the effort, so this is a pretty specialized topic. However, an occasional project has a lot of translations plus a lot of supported browsers, and when you multiply the two numbers together you easily get over a hundred permutations being built. For such projects, a distributed build can help, because the permutations can be built in parallel. The details are on the wiki page. Ideas about improving the page (or the feature!) are welcome. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: Adds the -XdisableGflow flag. (issue260801)
On Tue, Mar 23, 2010 at 2:24 PM, sco...@google.com wrote: Just a thought... but instead of a new flag, what if we redefined aggressive optimizations to mean just GFlow, since all the other optimizations have a lot of miles on them now. What do you think? I like the idea of updating the aggressive-optimizations flag, and started to suggest that instead. However, it would still probably include more than GFlow. Potential examples would be function clustering, function deduping, and same-parameter-value substitution. So we'd likely end up wanting a separate flag anyway. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: RR : Soft permutations (issue160801)
On Thu, Mar 18, 2010 at 7:38 PM, John Tamplin j...@google.com wrote: On Thu, Mar 18, 2010 at 7:25 PM, sp...@google.com wrote: The main issue is that I don't believe that sharded builds will take full advantage of the collapsing. We need for Precompile to emit the number of *collapsed* permutations, but it looks like it emits the number before collapsing. Also, CompilePerms needs to treat its input number as an index into the collapsed permutations. Wouldn't you have to run generators in precompile if you wanted to collapse equivalent permutations down? There are different kinds of collapsing. I mean the collapsing that this patch adds, which not coincidentally does not depend on generator results. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: RR : Soft permutations
On Thu, Mar 18, 2010 at 12:32 PM, BobV b...@google.com wrote: Depending on the string name of an enum looks suboptimal. What are you referring to? A deferred binding option is a selection from an enum. Normally the name of an enum is not significant. It's only used in debuggers and for binding identifiers to the chosen enum. Thus, globbing on the name of a deferred binding property looks to me like doing this in Java:

enum Locale { EN_UK, EN_US, ES_AR, ES_ES }

boolean useit(Locale locale) {
  return locale.name().startsWith("EN_");
}

It breaks down because the person writing the enums always has to write them with a careful hierarchy represented in the names. Instead of safari, g1, nexus, and chrome, we have webkit.safari, webkit.chrome, webkit.android.g1, webkit.android.nexus. This is awkward when it works, but it completely breaks down once there's another axis to classify the browsers with. For example, what's the glob that combines all mobile browsers? Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: RR : Soft permutations
On Thu, Mar 18, 2010 at 12:20 PM, John Tamplin j...@google.com wrote: On Thu, Mar 18, 2010 at 12:11 PM, Lex Spoon sp...@google.com wrote: Yes, that's what I was thinking for complex cases. For simple cases, can't users already specify en and get a permutation with all the Englishes combined? Perhaps I misunderstand, though. Is the plan to stop using en and switch to having people use en_* to collapse down the Englishes? Right now, people who use runtime locales mostly just list the compile-time locales they want translations for and inherit CldrLocales. If we add new locales, as happened recently, they don't have to change anything, and they automatically get the localized date/time/number formats and currency names for each runtime locale. That is what I have been pushing to have a solution for in soft permutations; requiring users to manually create the collapse lists is a problem. Having to run some pre-compile tool is problematic, since how do you tell Eclipse, for example, what to run and when? Granted, runtime locales aren't going to be ported to soft permutations immediately, but it still shouldn't be something that is going to make life harder for our users when we do it. I thought the conclusion of the discussion we had about that was to allow a module to specify a class that could synthesize module entries. It sounds like the design isn't so nailed down after all for how runtime locales will work with soft permutations. Let's please talk this over in a higher-bandwidth forum. I thought I had mentioned that developing what is essentially a macro system is possible but will take some substantial effort. Perhaps it's the thing to do, but it increases the scope of what module files support. If we don't need it, this is a place we can save a lot of effort. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: RR : Soft permutations
On Wed, Mar 17, 2010 at 2:48 PM, BobV b...@google.com wrote: On Wed, Mar 17, 2010 at 2:33 PM, sp...@google.com wrote: Still, what is the use case for globs other than a bare * ? You mean like: gecko* ie* Those are trivial already. Is there any new use case enabled by it? -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Support runAsync with the cross-site linker.
On Mon, Mar 15, 2010 at 8:34 PM, Joel Webber j...@google.com wrote: That's great news, and will really help with efforts to make our linkers more sane. Out of curiosity, what's the strategy for loading fragments into the enclosing namespace (and yes, that's the sound of me being too lazy to dig into the patch)? No, it should be documented separately from the patch. Here you go: http://code.google.com/p/google-web-toolkit/wiki/CrossSiteRunAsync -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Support runAsync with the cross-site linker.
On Tue, Mar 16, 2010 at 11:54 AM, Eric Ayers zun...@google.com wrote: Something is screwy with Rietveld, and I can't leave a comment on XSTemplate.js. My comment isn't really about your patch in particular, but about the patterns we are using in the linker templates. I wish that stanzas of code common to many linkers, like the calling of __gwtStatsEvent(), could be extracted and put into the template with variable substitution. For example, adding the sessionId field to the event works fine here, but there are several linkers outside of GWT that will need to be updated to get the same fix. Yes. Indeed, this fix originally went into the iframe linker and was overlooked for the cross-site linker. Using more templating should help us avoid duplicating code so much. I was thinking in the short term to pull computeScriptBase and processMetas out into their own files. Code that wants to pull them in could insert COMPUTE_SCRIPT_BASE or PROCESS_METAS at the place the code should go. As well, the code could then be tested more conveniently. Perhaps, though, we may as well define a general INCLUDE(processMetas.js)? Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: One-line fix to SelectionScript's fallback logic for
2010/3/11 Miguel Méndez mmen...@google.com +1 to Ray's question. I know that you were simply doing a fix Lex, but we need to think about how we test these features. I agree. I'll work out a test. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: allow skipping unit tests in development or production mode
On Wed, Mar 10, 2010 at 7:17 PM, amitman...@google.com wrote: LGTM on this quick and dirty solution. Eventually, we do want something like John suggests -- it is mostly up to you to either go with this or the general solution. I feel the same way. A quick and dirty solution would be very valuable so that I can add tests for the cross-site linker. However, it looks like we ultimately want to support an 'or' of a bunch of 'ands'. Did you review the implementation, Amit? So far, everyone is okay with the API as something to work with for now. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: Give a better error message when RunAsyncCode.runAsyncCode is passed something
replacements, it may as well check the GWT.create for sanity as well. As an outside example, look again at the header of Scala's RefChecks. It's called Ref*Checks*, but in addition to doing checks it also does a few transformations. If you look at the header and scroll down past the list of checks, you will see a list of transformations. I don't know the history behind these particular transformations, but I can only imagine that this was a convenient place to park them. Another example is that many type checkers also do a little bit of rewriting. One such type checker is the one for Go: https://code.google.com/p/go/source/browse/src/cmd/gc/typecheck.c?spec=svn5e69b9812415e81b0e2566192e80c8735a46088e&r=5e69b9812415e81b0e2566192e80c8735a46088e#45 Look closely at the signature of that method. See how it returns a different AST node than it starts with? That's because some of the type checks logically require the type checker to do a conversion and then see if the resulting code is safe. Once the type checker has done such a logical conversion, it may as well do an actual conversion. Otherwise, some later phase would have to write nearly identical code to do the actual conversion. The simplest example where this comes up is implicit conversions, which are at line 885 for this particular type checker: https://code.google.com/p/go/source/browse/src/cmd/gc/typecheck.c?spec=svn5e69b9812415e81b0e2566192e80c8735a46088e&r=5e69b9812415e81b0e2566192e80c8735a46088e#885 Overall, I don't see any general principle we can apply to where checks should be placed versus where transformations should be placed. The decision has to be made using more generic programming guidelines. Don't duplicate code. Don't compound two complicated operations with each other. Things like that. From that perspective, I think ReplaceRunAsyncs does well as it is. It's highly maintainable for it to run on our ASTs rather than the JDT structures.
Also, the general conversion of Java source code to GWT ASTs is already tricky and complicated, so we should strive to avoid entangling more with it when possible. ReplaceRunAsyncs is 324 lines long as it is, and it does a few closely intertwined things related to replacing runAsync calls. The rewrites that the class is named after are only about 54 lines. The rest is checking, helper methods, visiting, and Java boilerplate. I'm not *strongly* against chopping this code up different ways, but given these proportions and this logical cohesion, it looks hard to do much better. Finally, you mentioned sharing code between the compiler and IDE plugins. I'm quite excited by that development, and Miguel and I have spoken several times about how the relevant APIs can end up looking. It's very early days, however. There are no APIs agreed on, and the options for the AST alone range all over the place, including: APT, JSR 308, our TypeOracle, our JJS nodes, or maybe a new API. One option clearly won't work, though, and that's the internal JDT trees that the compiler currently uses. The internal JDT trees are an internal API of the Eclipse project, and they change from release to release. They're also just plain miserable to work against. So, while it's still too early to do very much to help that upcoming refactoring, one thing that's already clear is that the internal JDT trees aren't suitable. We would do well to use them as little as possible. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Updates the IFRame and XS selection script templates to support
On Mon, Mar 8, 2010 at 12:37 PM, Ian Petersen ispet...@gmail.com wrote: Just idle curiosity here, but why did you have to make the same change twice? I know you're modifying selection scripts, and maybe that means you need to violate DRY for some reason, but it strikes me as a potential refactoring. Agreed. It just grew like that over time. To address it, I was thinking that perhaps we could pull that code out into separate files, and then use replaceWith calls to insert the code, the same way the linkers currently fill in __MODULENAME__. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
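The kind of substitution proposed here — pulling shared code out into separate files and splicing each file's contents in where a placeholder appears, the same way __MODULENAME__ is filled in — amounts to plain string replacement. A minimal sketch with assumed names, not GWT's actual linker code:

```java
import java.util.Map;

class ScriptTemplater {
    // Replace each placeholder occurrence with the corresponding snippet,
    // e.g. the contents of computeScriptBase.js pulled out into its own file.
    static String expand(String template, Map<String, String> snippets) {
        String result = template;
        for (Map.Entry<String, String> e : snippets.entrySet()) {
            result = result.replace(e.getKey(), e.getValue());
        }
        return result;
    }
}
```

With this in place, two selection-script templates can share one copy of the inserted code instead of duplicating it.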
Re: [gwt-contrib] meta tag applying to just one module?
Seems conceptually simpler, and doesn't introduce the idea of module-specific meta props. Also should be faster since it means less DOM crawling during startup? Do you think any of these are blocking problems? In particular, if we went with the 7-line meta-prop solution, would users be harmed in any measurable way? -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] meta tag applying to just one module?
On Tue, Mar 2, 2010 at 6:23 PM, Scott Blum sco...@google.com wrote: Meh. I've always thought our selection scripts were too big and complicated as is. I'd rather we could figure out a way to get rid of meta properties altogether. :( That may be, but do we have the time for it? Such a project would easily take a couple of weeks. Would you object, Scott, to extending the existing system for now? It's a two-hour job once we decide on a convention. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: when-linker-added
On Wed, Mar 3, 2010 at 6:35 PM, Lex Spoon sp...@google.com wrote: Sure, will do. The discussions preceding this patch are in several different places. I can summarize them on the wiki. http://code.google.com/p/google-web-toolkit/wiki/WhenLinkerAdded -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] meta tag applying to just one module?
On Thu, Mar 4, 2010 at 12:47 PM, Scott Blum sco...@google.com wrote: It seems to me, if you have any kind of process for inlining a selection script into a host page, that that process *ought* to be able to update the module base URL correctly. I mean... if you have to generate the correct meta tag to do the exact same thing, why not just modify the selection script being inlined? It seems to me that would avoid extra load-time overhead and complication, and we wouldn't have to extend meta props to work on a per-module basis. Am I crazy? Can you give more detail on what you are thinking people should do? The way the selection scripts currently are, such a rewriter would be brittle. What the rewriter would need to do is recognize our computeScriptBase() function and replace it with its own logic. I don't see a way to do that that won't easily break the next time we tweak our selection script. So, it seems we'd need to develop a less fragile way to do rewrites of selection scripts. I can imagine several ways to do that, but they would all require a substantial, multi-week effort. To contrast, the running proposal would need ~7 lines of code. Here's the meta tag part:

name = name.replace('__MODULENAME__::', '');
if (name.indexOf('::') >= 0) { continue; }

Here's the base URL change:

if (base = getMetaProp('baseUrl')) { return; }

Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: when-linker-added
On Wed, Mar 3, 2010 at 12:25 PM, Bruce Johnson br...@google.com wrote: Is there some sort of lightweight design doc for this? I'm pretty sure I remember it being discussed somewhere, but we need a short writeup on the project wiki to capture the context. Sure, will do. The discussions preceding this patch are in several different places. I can summarize them on the wiki. FWIW, the original discussion thread is here: http://groups.google.com/group/google-web-toolkit-contributors/browse_thread/thread/ebca7dc3b6171ea9/0b7c699b5775c046?lnk=raot The initial patch coming out of that discussion is here: http://gwt-code-reviews.appspot.com/143811/show Seeing the actual patch seems to have gotten people to really think about the compatibility breakage involved in changing the primary linker selection, as discussed here: https://groups.google.com/group/google-web-toolkit-contributors/browse_thread/thread/bd7936cd2f323484/87009f6a5c5b6606?lnk=raotpli=1 Everyone then very quickly agreed to go back to when-linker-added, which is this patch. This one is a lot simpler and is better about compatibility. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] meta tag applying to just one module?
On Mon, Mar 1, 2010 at 3:55 PM, Scott Blum sco...@google.com wrote: Avoiding the larger issue of meta tags applying globally, I'd think for this case there should be a more direct way to do it. What I mean is, you have to load the *.cache.html files from *somewhere*. So (from the inlined selection script), you have to do something roughly equivalent to: iframe.src = baseUrl + strongHash + '.cache.html'; It seems like... whatever process is used to inline the selection script in the first place has to be able to specify the base URL, at least relative to the host page base URL. Am I understanding this right? That's right. It would also be possible to modify the server-side support for inlining the script. However, barring other changes, that would mean people are doing regex rewrites on our selection scripts, which seems rather brittle. The meta property system looks like a good, clean solution. All we need to do is decide on a convention for having them be module-specific. I don't know the history of this feature or what all it is intended to support, but it looks awfully straightforward to allow prepending moduleName:: before the property name. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] meta tag applying to just one module?
The meta tag system is a general system for a host page to influence a loaded GWT module. One aspect I don't understand, however: is there an existing way to have it apply to just one module, if multiple GWT modules are loaded on the same page? The reason this comes up is that I would like to make the magic base URL be settable by the host page for people who have inlined the selection script into the host page. For such people, the built-in magic can do the wrong thing. The first idea that was suggested was to use a meta tag. However, wouldn't that by default apply to all GWT apps loaded on the page? It might be that one app comes from one server and the other from a different one. If there is no other idea around, then perhaps we could bake the module into the name parameter, like this: <meta name="com.google.gwt.sample.mail.Mail::baseUrl" content="http://static-content.service.com/mail"> If no :: is present, then the setting applies to every module. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
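The proposed convention is easy to pin down precisely. Here is a hedged sketch (the class and method names are mine, not GWT's) of how a loader for a given module would interpret a meta tag name under the `::` rule:

```java
class MetaProps {
    // Returns the effective property name for this module, or null if the
    // meta tag is scoped to some other module. A name containing no "::"
    // applies to every module on the page.
    static String resolve(String name, String moduleName) {
        String prefix = moduleName + "::";
        if (name.startsWith(prefix)) {
            return name.substring(prefix.length());
        }
        if (name.contains("::")) {
            return null; // scoped to a different module; ignore it
        }
        return name; // global meta property
    }
}
```

This mirrors the two-branch logic of the proposal: strip your own module's prefix, skip names scoped to other modules, and accept everything else globally.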
[gwt-contrib] Re: when-linker-added
Hey, Bob, This patch is ready for review. It's to support cross-site runAsync: LoadingStrategy is a deferred binding that will end up depending on which linker is active. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: RFC: sharded linking
On Tue, Feb 16, 2010 at 3:32 PM, Scott Blum sco...@google.com wrote: On Fri, Feb 12, 2010 at 7:00 PM, Lex Spoon sp...@google.com wrote: On Fri, Feb 12, 2010 at 9:50 AM, Alex Moffat alex.mof...@gmail.com wrote: Where can I read a description of what -XshardPrecompile does, or see the code for it? It sounds very useful to me personally. -XshardPrecompile is an experiment that everyone wants to change, so it seems unlikely to be released in its current form. We can talk about it if it helps, but I would propose that we focus more on what we want to do for real. It seemed relevant because it sounded like you propose to essentially make -XshardPrecompile the default (only?) behavior for Precompile? Or did I misread? No, that's the idea. The reason that makes me cautious has to do with a desire for a future change to the Generator API to support things like minimal rebuild. I imagine a world where the work each Generator does could be sharded out in a way that's independent of the number of permutations. Are you saying that you want to not have to shard, with future developments? I don't think that should be a problem with this patch. As a case in point, the Compiler entry point *could* shard out generating and linking, but it chooses not to. We have the flexibility to play around with these choices over time. - I'm not sure why development mode wouldn't run a sharded link first. Wouldn't it make sense if development mode worked just like production compile, just running a single development mode permutation shard link before running the final link? Sure, we can do that. Note, though, that they will be running against an empty ArtifactSet, because there aren't any compiles for them to look at. Thus, they won't typically do anything. Do public resources and generated resources show up during the sharded phase? Everyone is happy, I think, with having dev mode run a single on-shard linking step. So, these are just details. FWIW, here is how it is in the patch: 1.
Resources are available via ResourceOracle. 2. Public artifacts are there. They are identical on all permutations, so they aren't added to the artifact set until the final link step. 3. Generated artifacts are there for compilation, but not for development mode. With development mode, all linking is done before the generators run, and generators run on demand. --- you write (gmail just messed up my reply quotes): Now that I am thinking along those lines, it almost begs the question. If we are willing to break the world, is this the best possible way to model the new link process? In other words, it seems worth re-examining the design without regard to the existing API and asking ourselves if it's the thing we'd have designed from scratch. Maybe you guys all already did that and I'm the only one late to the party. For example, if we're going from scratch, then we could avoid the transition entirely and just mandate what the new rules are. We wouldn't need a @Shardable annotation since all linkers would need to be sharding aware. We might rather have two separate methods for sharded vs. non-sharded link than a boolean parameter. We might revisit the whole PRE, PRIMARY, POST thing with regards to sharding and decide the right answer is SHARD, PRE, PRIMARY, POST. Or something. I don't know what the right answers are. All I'm saying is, breaking things is awesome when you're doing something revolutionary and the end result is awesome. I just want to be sure, if we're going to break things, that we believe we'll end up somewhere revolutionary and awesome as opposed to evolutionary and incremental, but less than awesome. I initially proposed simply breaking the world. However, at your encouragement, this patch has developed to be backwards compatible. As things stand, this patch both gets a large improvement and is evolutionary. On those specific changes: 1. @Shardable can certainly be dropped after a deprecation period. Is there any urgency to drop it immediately? 2.
Two separate methods versus one with a boolean looks fine to me. It's changed back and forth as the patch developed. 3. PRE/PRIMARY/POST still appear to be useful. All linkers care whether they are primary or not, because there is one primary linker and it must deal with generating a selection script. Additionally, a few linkers care whether they go before or after the primary linker. 4. SHARD as a separate linker order is very tempting but turns out to have some problems. First, many linkers have both an on-shard and an on-final part, and if SHARD were a separate order then those linkers would have to be subdivided into two linkers. Instead of IframeLinker, we'd have to have IframeShardLinker and IframeFinalLinker. Second, the SHARD part also has PRE/PRIMARY/POST, so you really have six linker orders, not four. It's tidier
Re: [gwt-contrib] RR: Two key GWT.create to avoid boilerplate
On Sat, Feb 13, 2010 at 6:22 PM, Ray Cromwell cromwell...@gmail.com wrote: I'm not sure how my proposal creates more boilerplate; it adds the ability to specify arbitrary parameters to GWT.create, e.g. GWT.create(A.class, LiteralParam1, LiteralParam2, ...) Sweet. It looks like everyone is in favor of this part. Unless someone speaks to the contrary, it sounds like the lights are green for this one, as soon as someone has time to do it. public class MyLibrary { @GwtCreate public static <T> T create(Class<?> clazz) { return GWT.create(clazz); } } Usage: MyLibrary.create(Binder.class); This part has the downside that it requires a special class loader for it to work. To contrast, GWT.create can be intercepted using regular Java code by using GWT.setBridge. This has proven useful for writing test cases. It would be very helpful to refine this to the point that it doesn't strictly require using a special class loader. For example, maybe we could insist (and verify) that the body of one of these methods has the dev mode implementation, and that it is in a stylized form? Also, the proposal on the blog entry mentions a variation of @GwtCreate that takes the generator as an argument: @GwtCreate(generator = com.foo.BarGenerator.class) <T extends Bar> T create(Class<T> clazz) This version looks at odds with the way deferred binding normally works. Normally, the module file chooses the generator, and it can even choose a different generator for each permutation of the compile. For this, let's try and come up with a way to distinguish the separate interface types inherited by clazz. It might not even be necessary, though. Note that when looking at the above create() call, the supplied clazz argument has already been upcast to the Bar interface. Thus, even though the original class implemented Bar as well as a bunch of other generatable interfaces, this particular call should be deferred bound however Bar is deferred bound.
Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
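The interception point mentioned above — GWT.setBridge letting tests supply their own create() behavior with plain Java, no special class loader — can be mimicked in miniature. This sketch is not GWT's real API; MiniGwt and CreateBridge are invented names for illustration only:

```java
interface CreateBridge {
    <T> T create(Class<?> classLiteral);
}

final class MiniGwt {
    private static CreateBridge bridge;

    // Tests install a bridge to intercept create(); when no bridge is set,
    // this toy version falls back to plain reflective instantiation.
    static void setBridge(CreateBridge b) { bridge = b; }

    @SuppressWarnings("unchecked")
    static <T> T create(Class<?> classLiteral) {
        if (bridge != null) {
            return bridge.create(classLiteral);
        }
        try {
            return (T) classLiteral.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The design point is that the hook is ordinary Java: a test swaps in its own CreateBridge, with no bytecode tricks involved.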
Re: [gwt-contrib] Re: RFC: sharded linking
On Fri, Feb 12, 2010 at 9:50 AM, Alex Moffat alex.mof...@gmail.com wrote: Where can I read a description of what -XshardPrecompile does, or see the code for it? It sounds very useful to me personally. -XshardPrecompile is an experiment that everyone wants to change, so it seems unlikely to be released in its current form. We can talk about it if it helps, but I would propose that we focus more on what we want to do for real. It's not in 2.0.0 as far as I can see. My concerns about the sharded linking proposal came from what I understood the original flow to be from my looking at it and from the original sharded linking proposal. Your understanding is correct as far as I can tell. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: RFC: sharded linking
On Thu, Feb 11, 2010 at 8:58 PM, Brendan Kenny bcke...@gmail.com wrote: If this is indeed the direction to go in (and I'm a big fan of the goals as well), it's probably also worth making a more formal definition of "won't step on each other's toes." As a use case, I'm working on a PRE linker that (currently) removes CompilationResults, alters them based on information collected from across all permutations, and then emits new ones. Obviously this isn't ideal--it's expensive and CompilationResults were written to be (mostly) immutable--but it's also perfectly acceptable within the current design of the artifactSet/linker chain. The primary linker only cares about the set of compilation results it receives, and if an earlier linker altered them, it need never know. Hey, Brendan, it sounds like you are already pressing the limits of what is doable with linkers. Can you describe in more detail what this linker accomplishes? For this linker to be used in distributed builds, I believe you'd really want to come up with a way to do the JS rewrites on the sharded part. Otherwise, the final link node is going to have to do the JS rewrites for the whole build sequentially. What exact information is used as input to the rewrites? It seems (and I could definitely be misinterpreting here) that in both the simulated sharding procedure and Scott's alternate proposal, there will be sections of primary and post linkers running before a non-shardable pre linker. If that's true, then neither will be able to fully honor the ordering of linkers when shardable and non-shardable linkers are mixed. That's a large part of why I suggested that we phase out non-sharded linkers. In mixed mode, there isn't a perfect ordering to choose. With all sharded linkers, the order is simple and predictable. All sharded parts run before all final parts, and within either of those groups, PRE/PRIMARY/POST are respected.
Continuing to think out loud, it seems that the way to alter my linker is probably either to statically derive what all permutations will need in every shard (as opposed to just having each triggered generator emit an artifact and collecting them at the end), or keeping that the same and creating a custom primary linker, which I was hoping not to do as it would tend to limit adoption. It might help to know that both generators and linkers have access to the full set of *possible* values of a deferred binding, not just the values for the current permutation. As an example, the LocaleListLinker reads off all possible values of locale and generates a file containing them: http://code.google.com/p/google-web-toolkit-incubator/source/browse/trunk/src/com/google/gwt/libideas/linker/LocaleListLinker.java Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
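The LocaleListLinker trick mentioned above — reading all *possible* values of a binding property rather than only the current permutation's value — boils down to emitting one artifact from the full value set. A hedged, GWT-free sketch (the class name and format are assumptions modeled on what the linked source describes):

```java
import java.util.List;

class LocaleList {
    // Given every possible value of the 'locale' binding property (which both
    // generators and linkers can see, independent of the current permutation),
    // produce the contents of a simple one-locale-per-line artifact.
    static String contents(List<String> allLocaleValues) {
        return String.join("\n", allLocaleValues);
    }
}
```

A generator or linker would then write this string out as an emitted artifact rather than deriving the list permutation by permutation.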
Re: [gwt-contrib] RFC: sharded linking
On Thu, Feb 11, 2010 at 7:43 PM, Scott Blum sco...@google.com wrote: I have a few comments, but first I wanted to raise the point that I'm not sure why we're having this argument about maximally sharded Precompiles at all. For one thing, it's already implemented, and optional, via -XshardPrecompile. I can't think of any reason to muck with this, or why it would have any relevance to sharded linking. Can we just table that part for now, or is there something I'm missing? There are still two modes, but there's no more need for an explicit argument. For Compiler, precompile is never sharded. For the three-stage entry points, full sharding happens iff all linkers are shardable. - I'm not sure why development mode wouldn't run a sharded link first. Wouldn't it make sense if development mode works just like production compile, it just runs a single development mode permutation shard link before running the final link? Sure, we can do that. Note, though, that they will be running against an empty ArtifactSet, because there aren't any compiles for them to look at. Thus, they won't typically do anything. 2) Instead of trying to do automatic thinning, we just let the linkers themselves do the thinning. For example, one of the most serialization-expensive things we do is serialize/deserialize symbolMaps. To avoid this, we update SymbolMapsLinker to do most of its work during sharding, and update IFrameLinker (et al) to remove the CompilationResult during the sharded link so it never gets sent across to the final link. In addition to the other issues pointed out, note that this adds ordering constraints among the linkers. Any linker that deletes something must run after every linker that wants to look at it. Your example wouldn't work as is, because it would mean no POST linker can look at CompilationResults. It also wouldn't work to put the deletion in a POST linker, for the same reason.
We'd have to work out a way for the deletions to happen last, after all the normal linkage activity. Suppose, continuing that idea, we add a POSTPOST order that is used only for deletion. If it's really only for deletion, then the usual link() API is overly general, because it lets linkers both add and remove artifacts during POSTPOST, which is not desired. So, we want a POSTPOST API that is only for deletion. Linkers somehow or another mark artifacts for deletion, but not anything else. At this point, though, isn't it pretty much the same as the automated thinning in the initial proposal? The pros to this idea are (I think) that you don't break anyone... instead you opt-in to the optimization. If you don't do anything, it should still work, but maybe slower than it could. The proposal that started this thread also does not break anyone. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
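The deletion-only API contemplated above can be made concrete: linkers may mark artifacts for deletion but cannot add anything, which is exactly what makes it equivalent to automated thinning. A sketch with invented names (this is not a proposed GWT API, just the shape of the idea):

```java
import java.util.LinkedHashSet;
import java.util.Set;

final class DeletionPass {
    private final Set<String> doomed = new LinkedHashSet<>();

    // The only operation a POSTPOST-style linker would get: marking an
    // existing artifact for removal. There is deliberately no way to add.
    void markForDeletion(String artifactName) {
        doomed.add(artifactName);
    }

    // Applied once, after all normal linkage activity has finished, so no
    // earlier linker can be starved of an artifact it wanted to read.
    Set<String> apply(Set<String> artifacts) {
        Set<String> remaining = new LinkedHashSet<>(artifacts);
        remaining.removeAll(doomed);
        return remaining;
    }
}
```

Because removal is deferred until apply(), the ordering constraint from the discussion (deleters must run after all readers) is satisfied by construction.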
Re: [gwt-contrib] Re: RFC: sharded linking
On Wed, Feb 10, 2010 at 10:58 AM, John Tamplin j...@google.com wrote: On Wed, Feb 10, 2010 at 10:45 AM, Lex Spoon sp...@google.com wrote: Is copying source code so inconvenient that it would be worth having a slower build? I would have thought any of the following would work to move source code from one machine to another: 1. rsync 2. jar + scp 3. svn up on the slave machines Do any of those seem practical for your situation, Alex? Overall, it's easy to provide an extra build staging as an option, but we support a number of build stagings already. What does make it difficult is that you can't have a pool of worker machines that can build any project that is asked of them without copying the sources to the worker for each request. For a large project, this can get problematic, especially when you have to send the transitive dependencies. You assume the answer here, John. The question is, just why is copying source code problematic to begin with? Can anyone put their finger on it? One concern is that the copying might take too long. However, is there any project where it would take more than a few seconds? A few seconds seems like not a big deal for any build large enough to bother with parallel building. Another possible concern is the need to do some extra build configuration. It doesn't take much *build time* to copy the dependencies, but it takes *developer time* to set it up. Here I agree that it is some amount of extra work. However, it doesn't seem like much. You have to know what your dependencies are, and you have already worked out how to copy precompilation.ser, so how much more work is it to also send over the source code? Overall, I see that it worries people to send source code to the CompilePerms nodes. Yet, it seems entirely normal to me. When you do a distributed build, all the remote workers must have their inputs copied over to them over the network.
Besides, what is gained by having the user have to arrange this copying themselves rather than the current method of sending it as part of the compile process? For example, distributed C/C++ compilers send the preprocessed source to the worker nodes, so they don't have to have the source or the same include files; we currently send the AST, which is a representation of the source; etc. Compared to the status quo, we gain much faster builds. Compared to automatically copying, we have a fully specced out proposal. :) If we try to automatically copy dependencies, how would we know exactly what to copy? Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: RFC: sharded linking
On Wed, Feb 10, 2010 at 11:25 AM, James Northrup northrup.ja...@gmail.com wrote: the usecases being described as a point of deliberation, defining dependencies, repository access, and bundling automation, are well solved items in the maven stable. how hard can it be to define a multiproject descriptor, assign channels of build-stage progression, and have a top-level project build coordinated by one maven instance publish artifacts to successive build-channels served elsewhere by daemons which trigger maven sub-builds? That's a nice idea. Has anyone heard of a project using Maven to support distributed builds? What the little bit of web searching I did turned up did not look good. People were saying it would be logical to build that way, but that Maven has a fundamental showstopper: the local repositories are not thread safe. Perhaps that has changed by now? Maven aside, there are other options. Hudson and Pulse should work well. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: RFC: sharded linking
What you describe, Alex, is available via the Compiler entry point, though it hasn't been particularly well documented. There is a PermutationWorkerFactory that can create CompilePerms workers. The default worker factory spawns Java VMs on the same machine, but it is possible to write a replacement worker that uses ssh or whatnot to do the work on a separate machine. The way to plug in a replacement worker factory is to set the Java property gwt.jjs.permutationWorkerFactory. That said, I thought the reason for existence of Precompile, CompilePerms, and Link is to get the best build time but at the expense of needing extra configuration. We are finding that by spending a few seconds copying source code over, we save 10+ minutes in Precompile and 10+ minutes in Link. Is copying source code so inconvenient that it would be worth having a slower build? I would have thought any of the following would work to move source code from one machine to another: 1. rsync 2. jar + scp 3. svn up on the slave machines Do any of those seem practical for your situation, Alex? Overall, it's easy to provide an extra build staging as an option, but we support a number of build stagings already. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
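The plug-in mechanism described — choosing the worker factory via a Java system property — follows a standard pattern: read a class name from the property and instantiate it reflectively, falling back to a default. The interface and classes below are stand-ins for illustration, not GWT's real PermutationWorkerFactory:

```java
interface WorkerFactory {
    String describe();
}

// Stand-in for the default factory that spawns local Java VMs.
class LocalVmWorkerFactory implements WorkerFactory {
    public String describe() { return "spawns Java VMs on this machine"; }
}

class WorkerFactoryLoader {
    // Mirrors the mechanism in the post: if the system property names a
    // factory class (e.g. an ssh-based one), load it; otherwise fall back
    // to the default local-VM factory.
    static WorkerFactory load(String propertyName) throws Exception {
        String className = System.getProperty(propertyName);
        if (className == null) {
            return new LocalVmWorkerFactory();
        }
        return (WorkerFactory) Class.forName(className)
            .getDeclaredConstructor().newInstance();
    }
}
```

A replacement factory then only needs a public no-arg constructor and the interface implementation; no compiler changes are required to plug it in.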
[gwt-contrib] RFC: sharded linking
This is a design doc about speeding up the link phase of GWT. If you don't maintain a linker, and if you don't have a multi-machine GWT build, then none of this should matter to you. If you do maintain a linker, let's make sure your linker can be updated with the proposed changes. If you do have a multi-machine build, or if you have some ideas about them, then perhaps you can help us get the best speed benefit possible out of this. I want to speed up linking for multi-machine builds in two ways:

1. Allow more parts of linking to run in parallel. In particular, anything that happens once per permutation and does not need information from other permutations can run in parallel. As an example, the iframe linker chunks the JavaScript of each permutation into multiple script tags. That work can happen in parallel once the linker API supports it.

2. Link does a lot of Java serialization for its artifacts, but the majority of the artifacts in a compile are emitted artifacts that have no structure. They are just a named bag of bits, from the compiler's perspective. It would help if such artifacts did not need a round of Java serialization on the Link node and could instead be bulk copied.

=== Transition ===

The compiler will support two compilation modes: maximal sharding and simulated sharding. Maximal sharding is used when all linkers support it and the Precompile/CompilePerms/Link entry points are used. Simulated sharding is used when either some linker can't shard or when the Compiler entry point is used. Linkers individually indicate whether they implement the sharding or non-sharding API. This allows linkers to be updated one by one and to leave the non-sharding API behind once they do. It does not cause trouble with other linkers, because in practice linkers are highly independent. I've looked at as many linkers as I could find to verify this.
Occasionally one linker depends on another; in such a case they'll have to be updated in tandem, but the need for that should be rare. By default, a linker is assumed to want the legacy non-sharding API. For such linkers, it isn't safe to assume that its generators or its associated artifacts can be safely serialized and then deserialized on a different computer. The non-sharding API will be deprecated. After the sharding API has been out for one GWT release cycle, support for non-shardable linkers will be dropped.

=== Maximal sharding ===

Currently, Precompile parses Java into ASTs and runs generators. CompilePerms then runs one copy for each permutation, in parallel. Each instance optimizes the AST for one permutation and then converts it into JavaScript plus some additional artifacts. Finally, Link takes the JavaScript and all the produced artifacts, runs the individual linkers, and produces the final output. In summary, the three stages are:

current Precompile:
- parse Java and run generators
- output: number of permutations, AST, generated artifacts

current CompilePerms:
- input: permutation id, AST
- compile one permutation to JavaScript
- output: JavaScript, generated artifacts

current Link:
- input: JavaScript from all permutations, generated artifacts
- run linkers on all artifacts
- emit EmittedArtifacts into the final output

With maximal sharding, Precompile does no work except to count the number of permutations. Each CompilePerms instance parses Java into ASTs, runs generators, and optimizes for a specific permutation. Additionally, each CompilePerms instance also runs the shardable part of linkers on the results for that permutation. It then thins the artifacts (see below) and emits them. Finally, Link takes these results from the CompilePerms instances, runs the final, non-shardable part of each linker, and emits all the artifacts designated as emitted artifacts.
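As a toy model of this staging (the method names and artifact names below are illustrative only, not the real entry points): each CompilePerms instance produces and thins its own artifacts independently, and the final Link merely merges them and adds the non-shardable output.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class ShardedLinkModel {
    // On-shard work for one permutation: compile plus the shardable part of
    // each linker, producing this permutation's artifacts.
    static List<String> compilePerm(int permId) {
        return Arrays.asList("perm" + permId + ".cache.js",
                             "perm" + permId + ".symbolMap");
    }

    // Thinning: artifacts fully handled on the shard never cross the wire
    // to the final link node (symbol maps, in this made-up example).
    static List<String> thin(List<String> artifacts) {
        List<String> kept = new ArrayList<>();
        for (String a : artifacts) {
            if (!a.endsWith(".symbolMap")) kept.add(a);
        }
        return kept;
    }

    // Final link: merge the thinned per-permutation results and run the
    // non-shardable part, e.g. emitting the selection script.
    static List<String> finalLink(List<List<String>> perPermutation) {
        List<String> all = new ArrayList<>();
        for (List<String> shard : perPermutation) {
            all.addAll(shard);
        }
        all.add("module.nocache.js");
        return all;
    }
}
```

The parallelism win falls out of the structure: compilePerm() and thin() depend only on one permutation, so they can run on separate machines, while finalLink() sees only the thinned sets.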
In summary, the maximal-sharding staging looks like this:

new Precompile:
- output: number of permutations

new CompilePerms:
- input: permutation id
- compile one permutation to JavaScript, including running generators
- run the on-shard part of linkers
- thin down the resulting artifacts, as defined below
- output: JavaScript and the thinned down set of artifacts

new Link:
- input: JavaScript and transferable artifacts from each permutation
- run the final part of linkers, which can add more files to the final output
- output: resulting emitted artifacts

=== Simulated Sharding ===

Simulated sharding uses the in-trunk compiler staging, but runs the linkers as much as possible as if they were using the maximal sharding staging. The sequence is the same whether the Compiler entry point is used or the Precompile/CompilePerms/Link trio of entry points is used. Under simulated sharding, the Precompile and CompilePerms steps run exactly as in trunk. The Link stage, however, runs the linkers in a careful order so as to use the sharded API for those linkers that have been updated:

- For each compiled permutation,
[gwt-contrib] Re: RR : Record selected annotations in Java AST
I was thinking JAnnotationArgument would be a marker interface, like in the patch you just posted. If it was in the hierarchy, it would have to be a subtype of expression, which looks a little weird to me. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: Server-side Class object on client-side
On Wed, Jan 27, 2010 at 5:14 PM, Nathan Wells nwwe...@gmail.com wrote: interface RpcService { call(Class<?> procedureClassLiteral, AsyncCallback callback, Arguments args) } Makes sense. Have you considered, though, the security implications of client code sending an arbitrary class literal to the server? If you end up writing a server-side check that the class literal comes from a list of class literals that are considered safe, then you could go one step further. Instead of sending a class literal, send the class literal's index in that list. Or, similarly, make up an enum that has an entry for each possible class literal, and send an element of the enum across the wire. Convert the enum to a class literal on the server side. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
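The enum suggestion in the message above can be sketched in a few lines. This is a hypothetical illustration, not RPC code from the patch: the `Procedure` enum, its entries, and the classes they map to are all invented names. The point is that only enum values can arrive over the wire, so the server can only ever resolve to a class it has explicitly vetted.

```java
// Sketch of the enum-based dispatch suggested above. All names here
// (Procedure, resolve, the sample entries) are invented for illustration.
import java.util.List;

public class EnumDispatchSketch {
    // One entry per procedure the server is willing to run; each maps
    // to the class literal the server converts it back into.
    enum Procedure {
        FETCH_ACCOUNTS(List.class),   // stand-ins for real procedure classes
        SAVE_PROFILE(String.class);

        private final Class<?> impl;
        Procedure(Class<?> impl) { this.impl = impl; }
        Class<?> toClassLiteral() { return impl; }
    }

    // Server side: an arbitrary class literal can never arrive, only an
    // enum value, so no safety check against a whitelist is needed.
    static Class<?> resolve(Procedure p) {
        return p.toClassLiteral();
    }

    public static void main(String[] args) {
        System.out.println(resolve(Procedure.SAVE_PROFILE).getSimpleName());
    }
}
```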
[gwt-contrib] Re: JSNI references with * as the parameter list
On Tue, Jan 5, 2010 at 12:20 PM, sco...@google.com wrote: I haven't looked at this, but should JsniCheckerTest get a new case as well? Sure, I'll add one. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: JSNI references with * as the parameter list
On Tue, Jan 5, 2010 at 4:38 PM, Lex Spoon sp...@google.com wrote: The main trick is in dealing with bridge methods. I believe they should be ignored, because you can't normally call them in Java. However, we unfortunately currently allow them. So, as a compromise, the latest patch allows *direct* access to bridge methods, but it ignores them when resolving a wildcard. To give an example, suppose StringHolder extends Settable<String>, like this:

  abstract class Settable<T> { abstract void set(T x); }
  class StringHolder extends Settable<String> { void set(String x) { /*...*/ } }

From the point of view of source code, StringHolder has only a single set() method, and it takes a String as an argument. However, in the Java bytecode, it will have two set methods: it will have the String one, and a bridge method that takes an Object as an argument. The Object one just forwards to the String one. The Object one would be used in code like this:

  void setToNull(Settable<String> settable) {
    settable.set(null); // calls the bridge method
  }

The current patch considers @StringHolder::set(*) to be unambiguous, because it ignores bridge methods. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
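The two-methods-in-bytecode claim above is easy to verify with reflection. This standalone demo (class names reused from the example, but otherwise not code from the patch) counts the `set` methods that javac actually emits for `StringHolder`:

```java
// Verifies the bridge-method situation described above: StringHolder's
// class file contains both the declared set(String) and a synthetic
// set(Object) bridge that forwards to it.
import java.lang.reflect.Method;

public class BridgeDemo {
    abstract static class Settable<T> {
        abstract void set(T x);
    }

    static class StringHolder extends Settable<String> {
        @Override
        void set(String x) { /* ... */ }
    }

    public static void main(String[] args) {
        int bridges = 0, regular = 0;
        for (Method m : StringHolder.class.getDeclaredMethods()) {
            if (!m.getName().equals("set")) continue;
            if (m.isBridge()) bridges++; else regular++;
        }
        // One real set(String) plus one compiler-generated set(Object) bridge.
        System.out.println("regular=" + regular + " bridges=" + bridges);
    }
}
```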
[gwt-contrib] Re: JSNI references with * as the parameter list
On Mon, Jan 4, 2010 at 1:26 PM, b...@google.com wrote: http://gwt-code-reviews.appspot.com/126817/diff/2006/2007 File dev/core/test/com/google/gwt/dev/jjs/impl/JsniRefLookupTest.java (right): http://gwt-code-reviews.appspot.com/126817/diff/2006/2007#newcode234 Line 234: JMethod res = (JMethod) lookup("test.Bar::foo(*)", errors); It seems weird that this should work. If Bar didn't declare foo() (and the lookup went up to the supertype Foo), this would fail to compile. It breaks the otherwise-Java style of name lookups used by JSNI to care about where the method is declared. It also rewards the unnecessary (and ugly) practice of re-defining a method overload just to make it the default. It's simplest to explain to users that the @class::method(*) wildcard syntax selects the one method out of the entire supertype/superinterface hierarchy whose name is "method" without having to get into discussions about which type a method is declared on. As would be expected, the trickiest part of this patch has to do with overloading and inheritance. These are dark corners that good code will not rely on, but our tools have to do something with them. I'm not in love with the current algorithm, but what precisely shall we do? It's certainly important, but let's be careful not to bikeshed the corner cases. For this particular test case, I thought that if someone defines Bar and Bar has exactly one foo method, then Bar::foo(*) should be allowed, regardless of what Bar inherits. It's easy enough to disallow such a reference, but should we? We have a choice here between supporting good code and eliminating bad code. I fear that if we try to eliminate all the bad code programmers could write, we will never be able to accept any code at all. It's just a design guideline, though. Does anyone else have an opinion about this question? If we want to rule that case out, then my next suggestion would be to count overloads exactly as a Java compiler would. 
That would mean that methods overriding each other would not count as extra overloads. The main trickiness would be dealing with inheritance that includes bridge methods. That wouldn't be insurmountable, but it's more complicated than the current solution. FWIW, the current algorithm is as follows. I think it's pretty easy to work with. If the user asks for Foo::bar(*), then start in class Foo and look for methods named bar. If you see exactly one, then that's the one they mean. If you see more than one, then the reference is ambiguous. If you see zero, then recurse into the inherited superclasses and interfaces. Also, in that vein, add test code for looking up wildcards on interfaces. Good idea, I will. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
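The lookup algorithm described in that last message can be sketched with plain reflection. This is not GWT's actual JsniRef lookup code, just a minimal transliteration of the stated rules (one match wins, several matches are ambiguous, zero matches recurse upward), including the ignore-bridge-methods compromise from earlier in the thread; for simplicity it only recurses into superclasses, not interfaces:

```java
// Reflection-based sketch of the wildcard lookup rules described above.
// Not GWT's real implementation; names Foo/Bar are from the thread's example.
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class WildcardLookup {
    static Method lookup(Class<?> cls, String name) {
        if (cls == null) {
            throw new IllegalArgumentException("no method named " + name);
        }
        List<Method> found = new ArrayList<>();
        for (Method m : cls.getDeclaredMethods()) {
            if (m.getName().equals(name) && !m.isBridge()) { // ignore bridges
                found.add(m);
            }
        }
        if (found.size() == 1) return found.get(0);  // exactly one: that's it
        if (found.size() > 1) {                      // more than one: ambiguous
            throw new IllegalArgumentException(name + "(*) is ambiguous in " + cls);
        }
        return lookup(cls.getSuperclass(), name);    // zero: try the supertype
    }

    static class Foo { void bar(String s) {} }
    static class Bar extends Foo { void foo(int i) {} }

    public static void main(String[] args) {
        // Bar::foo(*) resolves in Bar itself; Bar::bar(*) resolves up in Foo.
        System.out.println(lookup(Bar.class, "foo").getDeclaringClass().getSimpleName());
        System.out.println(lookup(Bar.class, "bar").getDeclaringClass().getSimpleName());
    }
}
```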
[gwt-contrib] Re: when-linkers-include name='xs' /
On Mon, Dec 21, 2009 at 11:24 PM, BobV b...@google.com wrote: Make add-linker accept conditionals based on module properties? Then rebinds and linkers can have unified predicates. Do you mean, instead of doing this:

  <add-linker name="xs"/>

People would normally do this:

  <set-configuration-property name="linker" value="xs"/>

Then the linker setting influences a bunch of others?

  <add-linker name="xs">
    <when-property-is name="linker" value="xs"/>
  </add-linker>

If that's what you mean, it sounds rather elegant. The things users configure are properties, and then everything else, including linker addition, is rule-driven. The main question I have, though, is how to transition to such a scheme? Users currently write explicit add-linkers all over the place. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: JSNI references with ?? as the parameter list
On Tue, Dec 22, 2009 at 2:29 PM, John Tamplin j...@google.com wrote: On Tue, Dec 22, 2009 at 2:24 PM, cromwell...@gmail.com wrote: Very cool. Lex, wasn't there also a proposal a while back to allow the current class to be omitted if you're referring to a method in the same class, e.g. @this::someMethod(??)(a,b) or @class::someMethod(??)(a,b) ? or maybe just @::someMethod(??)(a,b) I believe this last one was the syntax we settled on (though at the time we discussed * to mean any arguments). I would like to be able to leave out the class type. It's simply extra work, and somebody has to get around to it; this patch alone has already been in progress for months now. Do you remember, John, if we included the :: if the class type was left off? I was thinking we could do @someMethod(??)(a,b), which is more concise. Regarding * vs. ?, I used * to begin with, and there was a request to change it to ? at the meeting. I changed it to ?? in this patch because potentially we'd want to use a single ? as an individual wildcard type rather than a sequence of them. Also, we discussed using imports for class resolution, though I am not sure how hard it is to get at them for JDT (and it doesn't impact IHM, since we already have to read the source to get JSNI source anyway). Yes, that would be good, too. This improvement is the hardest one of the bunch. In theory, the JDT should provide the information via its scope and/or environment objects. That's good, because then we can use the true Java naming rules, including accessing inner types from parent classes. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: when-linkers-include name='xs' /
On Tue, Dec 22, 2009 at 2:58 PM, BobV b...@google.com wrote: The main question I have, though, is how to transition to such a scheme? Users currently write explicit add-linkers all over the place. Since the choice of link style is an application, as opposed to library, kind of choice, and the switch is a one-liner, I propose that we break existing users in a helpful manner. Ray Cromwell and I just talked about this, and it looks like legacy code should not be broken by the changes immediately foreseen. Legacy users would have a raw add-linker, but that's all they'll need so long as they don't use runAsync. So it looks like it's just a matter of emitting good deprecation messages. 1) Add support for deprecated and error attributes to module, define-linker, define-property, define-configuration-property, generate-with, and replace-with tags - This allows messaging to be done up-front, instead of waiting for the entire link cycle. - The messages are emitted when the definitions are used with add-linker, set-property, or when the rebind rule matches - Library developers also get some benefit in being able to turn down existing .gwt.xml API Hmm, I see only two things we want users to change: 1. Set the linker property rather than using add-linker, for primary linkers. 2. If you define a new primary linker, set up a way for users to select it via the linker property. Given this, a single deprecation looks like enough: check for any use of add-linker that (a) has no conditions in it, and (b) adds a primary linker. Do you see a need for more deprecation? If that's all it is, then it seems reasonable to hard-code the specific deprecations in the compiler rather than adding a general deprecation system for module components. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Revisiting the script-via-iframe default linkage
Hey, Matt, I've now double checked on several browsers other than Opera, and I agree that onerror works on non-IE and onreadystatechange works on IE. Details here: http://blog.lexspoon.org/2009/12/detecting-download-failures-with-script.html One tricky aspect is that I don't see how to get IE to say whether or not the download really failed. Sometimes the "loaded" state is reached when loading a page that is not in cache. Ideas would be welcome about how to deal with that. In the experiments I did, the callback always happens after the script evaluation. If that sequencing is reliable, then it will work to always call the on-failed handler but to have AsyncFragmentLoader quietly ignore such calls if the fragment has already loaded successfully. It tracks the already-loaded fragments anyway, these days, so this would be easy to do. As a bonus, always calling, whether in the "loaded" or "complete" state, would give good handling to situations where the browser downloads *some* content but it's not the real JS code, e.g. the "please log in" pages that hotel wifi networks insert. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
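The "always call the failure handler, but ignore it for fragments that already loaded" idea above can be sketched in a few lines. This is an invented illustration, not AsyncFragmentLoader's real code: the loader tracks which fragments have loaded, so a spurious failure callback arriving after a successful load becomes a no-op.

```java
// Sketch (invented names, not GWT's AsyncFragmentLoader) of ignoring
// failure callbacks for fragments that are already known to have loaded.
import java.util.HashSet;
import java.util.Set;

public class FragmentLoaderSketch {
    static final Set<Integer> loaded = new HashSet<>();
    static int failuresReported = 0;

    static void fragmentLoaded(int fragment) {
        loaded.add(fragment);
    }

    static void fragmentFailedToLoad(int fragment) {
        if (loaded.contains(fragment)) {
            return; // script reached "loaded"/"complete" after success: ignore
        }
        failuresReported++; // a genuine failure: report it
    }

    public static void main(String[] args) {
        fragmentLoaded(3);
        fragmentFailedToLoad(3); // spurious onreadystatechange after success
        fragmentFailedToLoad(4); // a real failure
        System.out.println("failures=" + failuresReported);
    }
}
```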
Re: [gwt-contrib] Revisiting the script-via-iframe default linkage
On Tue, Dec 15, 2009 at 2:53 PM, John Tamplin j...@google.com wrote: On Tue, Dec 15, 2009 at 2:48 PM, Lex Spoon sp...@google.com wrote: Ideas would be welcome about how to deal with that. Could the fragments include some JS at the end which calls a well-known I loaded successfully method? Yes, and in fact they already do. I'm leaning at this point toward indicating failure on any of: onload, onerror, onreadystatechange(loaded), onreadystatechange(complete). Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Revisiting the script-via-iframe default linkage
On Wed, Dec 9, 2009 at 12:47 PM, Matt Mastracci matt...@mastracci.com wrote: Do you know how to get onerror to fire in IE? It didn't seem to work in my testing. No, but why do you need it if you have onreadystatechange? It should be no problem to hook up both callbacks. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Revisiting the script-via-iframe default linkage
On Mon, Nov 30, 2009 at 10:28 PM, Matt Mastracci matt...@mastracci.com wrote: 2. onerror works some of the time in some of the browsers. It fails on various combinations of resolve errors, error status codes and other failure conditions. For all browsers (except Opera) that don't support it directly, It can be emulated with onreadystatechange/onload and lack of a JSONP callback. Can you expand on that? IE has script-tag callbacks that should be usable to detect download errors. What did you get working on other browsers? If there's a way to detect download failures on Firefox and on Webkit-based browser, then JSONP downloads are better than I thought. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Revisiting the script-via-iframe default linkage
Thanks for the test code and data, Matt! It sounds like enough browsers are covered that error reporting is no longer a major decider between XHR vs. script tags. Regarding iframes, be aware that some GWT users can't use them. I don't know all the reasons why, but one example reason is that iframes don't work reasonably on iPhones. So, we need to support non-iframe linkers for at least some use cases. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: Flow analysis framework definition and solver.
On Tue, Dec 1, 2009 at 4:52 PM, mike.aizat...@gmail.com wrote: I can create a separate CFG changelist + CFG-based analyses changelist. I just worry that I will have to maintain several changelist branches + main branch with all the code together. I would certainly prefer to land this code into SVN even before it's actually plugged into compiler. Well, it would be sitting in svn and not being tested in any way except that it compiles. If you want to commit the pieces somewhere, why don't we make an svn branch? Shall I do that? We can then put in the patches as they are committed on the branch, and merge it to trunk once enough is in that it does something. For using this framework to walk the callgraph, I understand that the implementation is simple, but simple things often don't perform well. An easily O(n^3) graph traversal is okay if n is only the size of one method, but is problematic if we are talking about a whole program. Mostly, though, I'd simply like to divide the issue. Since the patch is already large, let's do the intra-proc part first. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: Flow analysis framework definition and solver.
On Tue, Dec 1, 2009 at 5:59 PM, Mike Aizatsky mike.aizat...@gmail.com wrote: Well, it would be sitting in svn and not being tested in any way except that it compiles. If you want to commit the pieces somewhere, why don't we make an svn branch? I don't think svn branch will help much. It will only add headaches. What do you say if I would merge all LGTM'ed changes together into single git branch on my workstation? This would at least help me reduce the number of branches to maintain to 3: dev branch, under review branch, LGTM'ed branch. I can also actively export that branch to github if you need. Sounds good. Git is optional; please do if it's easy, but don't worry if it's not. I can patch in multiple patches just fine. For using this framework to walk the callgraph, I understand that the implementation is simple, but simple things often don't perform well. An easily O(n^3) graph traversal is okay if n is only the size of one method, but is problematic if we are talking about a whole program. I don't see where you get O(n^3). Solver algorithm complexity is O(e * l) where e is number of edges, and l is lattice height. For a simple boolean lattice l = 2, so this should be quite efficient from algorithmic point of view. I said n^3 because frequently the information is a set that is proportional to the size of the program. For example, we'd like to analyze clinits that have definitely been called. Then the lattice height is the number of classes in the program, which is proportional to n. When you multiply it all out you get n^3 or n^4, depending on how big you assume the call graphs are. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Revisiting the script-via-iframe default linkage
(Reposting to get it on the mailing list; first try bounced.) Hey, Matt, I agree with your analysis about the code-splitting issues. I've worked out a preliminary patch to do var renaming, but I haven't shared it yet because it's in a pretty early state. I could share it if you or someone is eager enough to see it that you're willing to hack some code to get to use it. To really get it polished up into a committable state, the main issue will be figuring out when to enable the rewrites. Whether to enable it or not depends on the choice of linker. For the off-domain loading, I was thinking to look into a JSONP-like downloader. That, too, is something that should only optionally be enabled, because it has worse download failure reporting. Thus, again the hardest part will be figuring out when to enable it. -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: Use 1-based counting for permutations in Compile Report
Consistency is good, but this patch breaks a different consistency! Specifically, the symbolMaps files count permutations from 0. So permutation 3 in the compile report would be permutation 2 in the symbolMaps files. I think that consistency is more important than that the permutation counting and split point counting use the same numbering. If someone reads about permutation 3 in their compile report, they should be able to safely look at permutation 3 in the symbol information and in any other files the compiler emits. The easier of the two numberings to change is the permutation numbering. The symbol maps numbering must be coming from some PermutationResult or something. We could trace backward to find out where that number originally comes from, and shift them all to be 1-based. Making the split point numbers 0-based is certainly possible but could touch a lot of code. The biggest issue is that CodeSplitter already uses split point 0, which doesn't really exist, as a convention indicating the initial download. If split point 0 was a real split point, then some other number would have to be used. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: Allow declared RuntimeExceptions to be correctly propagated
On Fri, Nov 20, 2009 at 9:01 AM, BobV b...@google.com wrote: Thanks for the review. My only question is anything required for STOB to handle declared RuntimeExceptions, or is this only for deRPC? ProxyCreator.addRemoteServiceRootTypes() iterates over the declared exception types for any given RPC method. It imposes a constraint that they're derived from Exception. STOB has no mention of RuntimeException, nor does it have any reason to treat exception types differently from return types. That looks like an accident. It looks like the intent was never to serialize an unchecked exception. When John and I talked about this issue previously, we couldn't come up with an example where you'd really need to serialize an unchecked exception. You could always either make it checked, or throw a different exception instead of the unchecked one. Are there any use cases for serializing an unchecked exception, or could we instead have STOB skip RuntimeExceptions just like it skips Errors? Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
Re: [gwt-contrib] Re: Allow declared RuntimeExceptions to be correctly propagated
On Fri, Nov 20, 2009 at 2:08 PM, BobV b...@google.com wrote: Including declared RTE's has been there since 1.5. Since the user has explicitly asked for those throwable types, I'm thinking that it's no worse than any other exception type. Okay, I suppose we are stuck with it, then. I don't immediately see an easy way to wean people off of them if they are already using them. A warning added to STOB output would not get paid attention to. What we should do as a sanity check is to explicitly disallow

  void serviceMethod() throws Exception;
  void serviceMethod() throws RuntimeException;

because that would be hideously expensive. Tempting, but those have also been supported since at least 1.5 Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: adding new names to the blackout list
On Tue, Nov 17, 2009 at 3:11 PM, Freeland Abbott fabb...@google.com wrote: I don't promise this is exhaustive, but it catches up to the mozilla and IE references, plus uneval from issue 3965. (Which wasn't on the mozilla pages, despite being reserved there, so I'm in fact almost sure this isn't exhaustive...) LGTM. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: Inline Polymorphic Function Declarations
I can review it. -Lex --~--~-~--~~~---~--~~ http://groups.google.com/group/Google-Web-Toolkit-Contributors -~--~~~~--~~--~--~---
[gwt-contrib] Re: Inline Polymorphic Function Declarations
This is neat, Bob! It's also timely. I've been looking into prefetching for runAsync code, and the folks I've talked to are worried about locking up the browser with giant evals. Lazy eval as in this patch would enable people to prefetch code more aggressively. Like you say, it likely needs to chunk more than one function at a time. It would be a very helpful next step on this if it could chunk, say, 10 functions at a time rather than just one. We would then be in a position to tune the chunk size. Speaking of which, unless I am mistaken we currently lack the key performance numbers we need to make a good tuning. Most pressingly, we need to know the overhead for each extra call to eval, and we need to know how gzipped script size changes with lazy eval under different chunk sizes. One last thing. It would help if there were a way to eval the functions without running the resulting functions. That way, code could be evalled in the background whenever the app is idle. For example, functions like this:

  function foo(args) { implementation; }

could be replaced with this:

  function install_foo() {
    install_foo = noop;
    eval(code_to_install_foo_for_real);
  }
  function foo(args) { install_foo(); foo(args); }

Now install_foo can be called by itself during idle time. Lex PS -- In the implementation, Bob, I believe the class comment could safely be expanded without becoming a wall of text. --~--~-~--~~~---~--~~ http://groups.google.com/group/Google-Web-Toolkit-Contributors -~--~~~~--~~--~--~---
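The install-on-first-call pattern from that message can be transliterated into plain Java to show the mechanics. The JS version replaces the function itself via eval; this sketch (all names invented) mimics that with a self-replacing Runnable, and the install step could equally well be invoked on its own during idle time:

```java
// Java transliteration of the JS install-on-first-call pattern above.
// The stub installs the real body exactly once, then all later calls
// go straight to the installed body.
public class LazyInstall {
    static int installs = 0;
    static StringBuilder log = new StringBuilder();

    // Stand-in for eval(code_to_install_foo_for_real).
    static Runnable realFoo = () -> log.append("foo!");

    // foo starts out as a stub that installs the real body, then re-invokes.
    static Runnable foo = new Runnable() {
        public void run() {
            installs++;        // the install happens at most once...
            foo = realFoo;     // ...because the stub replaces itself
            foo.run();
        }
    };

    public static void main(String[] args) {
        foo.run();             // triggers the install, then runs the body
        foo.run();             // runs the installed body directly
        System.out.println("installs=" + installs + " log=" + log);
    }
}
```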
[gwt-contrib] Re: Some disk-cache optimizations
LGTM. Lex Spoon --~--~-~--~~~---~--~~ http://groups.google.com/group/Google-Web-Toolkit-Contributors -~--~~~~--~~--~--~---
[gwt-contrib] Re: Any plans for supporting Scala ?
On Mon, Oct 5, 2009 at 12:23 PM, John Tamplin j...@google.com wrote: Why would that be easier than just parsing Scala and building an extended GWT AST from the Scala AST? It seems like inventing a new language (even if it is close to Java) and modifying tools on both sides to use this would be more work. Parsing isn't enough. It would also be necessary to type check, so that the meaning of the ASTs can be understood. After that, it would be necessary to desugar the Scala ASTs down into Java equivalents. That adds up to the bulk of a Scala compiler. Especially the type checking part would take a lot of time to reimplement. The next best thing would be to call into the existing Scala compiler. However, doing it that way means that we have to figure out a way to supply people with compatible versions of the GWT compiler and Scala compiler. If we don't come up with a stable API between the two code bases, then we'll have to figure out a way to supply users binaries of each language that work well together. We can do that, but it will mean people have an extra constraint when they choose which version of GWT and which version of Scala they want to use. Coming up with an API looks better. The text format would essentially be such an API. It is unlikely to change much over time, because both code bases are tracking Java's glacial motion. Further, it shouldn't be too time consuming to develop, because its only use would be to make two known code bases able to talk to each other. Lex --~--~-~--~~~---~--~~ http://groups.google.com/group/Google-Web-Toolkit-Contributors -~--~~~~--~~--~--~---
[gwt-contrib] Re: Any plans for supporting Scala ?
On Mon, Oct 5, 2009 at 12:37 PM, Ray Cromwell cromwell...@gmail.com wrote: if I understand, Lex is not really talking about a new syntax/text representation, but an intermediate Java AST that has none of the restrictions that Java does. I was thinking it would be a text format, but it could be a binary format if that looks easier. Lex --~--~-~--~~~---~--~~ http://groups.google.com/group/Google-Web-Toolkit-Contributors -~--~~~~--~~--~--~---
[gwt-contrib] Re: -soyc runs the dashboard, too
On Sat, Oct 3, 2009 at 2:30 AM, Sami Jaber sami.ja...@gmail.com wrote: ping ? Is it possible to know when this new feature will be committed ? as I understand it, soyc-vis.jar will be taken out and replaced by gwtc -report that will generate at the same time soyc reports and the dashboard. can we still use the term SOYC in GWT 2 or is it going to be replaced by a more marketing-friendly name, as dev mode replaced OOPHM and production mode replaced web mode ? It's committed to GWT's trunk. It will hit Perforce this week or next. The soyc-vis.jar will still exist but will be deprecated. The option name is still -soyc. The thinking is to rename it to "compile report" and to update the option accordingly. Lex --~--~-~--~~~---~--~~ http://groups.google.com/group/Google-Web-Toolkit-Contributors -~--~~~~--~~--~--~---
[gwt-contrib] Re: Any plans for supporting Scala ?
Let me try and sum up where things are. Regarding manpower, I spent a month or two spending 20% time on this, following all of the threads discussed in this email. From where things are, I would estimate a 3-week full time effort to get a basic version working that, for example, translated function literals the same way that inner classes are translated. Getting structural method calls to work would take a little while longer, but they aren't used that much in most Scala code. Getting Scala function literals translated to JavaScript function literals would require updating GWT's IR analogous to what Max describes, and nobody has really thought through a specific plan for that. If anyone wants to work on this, I can give all the relevant Scala and GWT pointers. Just let me know. It would involve writing some Scala code, but would not require any deep knowledge of compilers. Let me now update all the subthreads I see on here with what I found. Please ping if I have overlooked something. Decompilation does not work on Scala. I tried all the ones I could get my hands on, and they all fail. It's been about a year, so if anyone knows of a decompiler breakthrough in that time, it might be possible now. However, it would take large changes. The decompiled output I looked at was fine for 90% of the code, but the remaining 10% included some really big problems that needed solving. Generating Java source code from scalac might be possible, but it's terribly difficult. Java source code does not want to be a target language. Java rejects all kinds of code that are logical but that wouldn't make sense to write by hand. When I started on this path, I enumerated all the problems I knew of and found a solution to all of them. However, along the way, more of them popped up. None of them were ever fatal by themselves, but there are a bunch of them. A better tactic would be to define a modified Java source language that does not have the restrictions. 
It would be pretty much the same as Java, but with a few changes. A partial list of changes would be:

1. There is no rule in constructors about calling the super() constructor as the syntactically first thing in a constructor.
2. There is a comma operator expression, just like in JavaScript.
3. Any expression can be used in an expression statement, even useless things like literals.
4. The types would all be Java erased types.
5. Probably imports would be removed.
6. Overloading and overriding would be as in Java bytecode, including overriding on return type.
7. That probably implies that method calls are also as in Java bytecode, and specify the full method signature.

This plan has the huge advantage that everything in the chain is under our control. The Scala compiler could generate this language, and GWT could read it. Also, per John's comments about decompiling Java, note that we could plan to update this language to support more source languages than just Java. It should be much easier for them to emit what they mean than for GWT to reverse engineer what their bytecode intended. Lex Spoon --~--~-~--~~~---~--~~ http://groups.google.com/group/Google-Web-Toolkit-Contributors -~--~~~~--~~--~--~---
[gwt-contrib] Re: CloneStatementVisitor
On Fri, Sep 11, 2009 at 4:25 PM, Scott Blum sco...@google.com wrote: Gotcha. Okay that makes sense. At the same time, I can't help but wonder if the idea of retargeting to new params/locals couldn't somehow be baked into the Cloners to force the issue. Conceivably a statement could be cloned and then put back in the same method. So, a clone of a parameter ref makes sense on its own. It depends on what you do with it, or with what other transformations you do afterwards. A check at the place the item is added into a tree would certainly make sense, but it would probably slow down the compiles. As one possibility, Context.replaceMe could, if some paranoid mode flag is turned on, verify that the inserted tree makes sense. Lex --~--~-~--~~~---~--~~ http://groups.google.com/group/Google-Web-Toolkit-Contributors -~--~~~~--~~--~--~---
[gwt-contrib] Re: RR: STOB short circuit path must also compute some side information
Committed at r5982. --~--~-~--~~~---~--~~ http://groups.google.com/group/Google-Web-Toolkit-Contributors -~--~~~~--~~--~--~---
[gwt-contrib] Re: RR: STOB short circuit path must also compute some side information
Here is an updated patch with the method rename and the updated docs. I'm still waiting on trunk to stabilize before committing. -Lex --~--~-~--~~~---~--~~ http://groups.google.com/group/Google-Web-Toolkit-Contributors -~--~~~~--~~--~--~--- stobStringArrayArray2-r5972.patch Description: Binary data
[gwt-contrib] RR: STOB short circuit path must also compute some side information
I'm posting directly to the list, because I got error 500 on Rietveld. This is a simple patch. There is a small problem in a short-circuit path in SerializableTypeOracleBuilder (STOB). The short circuit is to check if a type has already been examined and, if so, return "yes, it's serializable". The problem is that in that same place, it is necessary to compute extra information about the queried type: the instantiable subtypes of the requested type. Currently that doesn't happen on the short-circuit path. The empty set is returned, leading to the serialization policy having too few types, leading to run-time serialization failures. To fix this, the attached patch stores the instantiable-types information in TypeInfoComputed. That way, the short-circuit path has the information available. This percolated into a few other small changes:

1. checkTypeInstantiable returns a whole TypeInfoComputed instead of a boolean.
2. TypeInfoComputed can now handle an arbitrary JType, not just a reference type.
3. checkTypesInstantiable no longer takes a set as an argument for passing back the instantiable subtypes, because that information is in the returned TypeInfoComputed.

Lex --~--~-~--~~~---~--~~ http://groups.google.com/group/Google-Web-Toolkit-Contributors -~--~~~~--~~--~--~--- stobStringArrayArray-r5972.patch Description: Binary data
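The bug class this patch fixes is general enough to illustrate without STOB itself: when a memoized check also computes side information, the cache must store that side information too, or the short-circuit path silently returns an empty result. This sketch (all names invented; `Info` plays the role TypeInfoComputed plays in the patch) caches the full result object rather than just the boolean:

```java
// Generic illustration (not STOB's actual code) of caching the full
// result object so the short-circuit path also returns the side
// information (here, a set of "instantiable subtypes").
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class MemoWithSideInfo {
    // Analogue of TypeInfoComputed: the yes/no answer plus side information.
    static final class Info {
        final boolean instantiable;
        final Set<String> subtypes;
        Info(boolean instantiable, Set<String> subtypes) {
            this.instantiable = instantiable;
            this.subtypes = subtypes;
        }
    }

    static final Map<String, Info> cache = new HashMap<>();
    static int fullComputations = 0;

    static Info check(String type) {
        Info cached = cache.get(type);
        if (cached != null) {
            return cached; // short circuit: side info comes along for free
        }
        fullComputations++;
        Set<String> subs = new HashSet<>();
        subs.add(type + "Impl"); // stand-in for the real subtype walk
        Info info = new Info(true, subs);
        cache.put(type, info);
        return info;
    }

    public static void main(String[] args) {
        check("Shape");
        Info second = check("Shape"); // hits the short-circuit path
        System.out.println(fullComputations + " " + second.subtypes);
    }
}
```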
[gwt-contrib] Re: RR: STOB short circuit path must also compute some side information
2009/8/18 Freeland Abbott fabb...@google.com: Looks fine... the only comments are pretty cosmetic: changing checkTypeInstantiable() (and checkArrayInstantiable() also) to return non-boolean should probably also change its name; something like computeInstantiability() may be a better name. It also may warrant more explicit javadoc. It just feels very, very odd to me to have the return value of a check and then have additional methods to call! Agreed! I'll change the name to computeInstantiability. Also, I simply forgot about the javadoc; I'll update it. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: add names to runAsync calls
On Tue, Aug 11, 2009 at 5:05 PM, sco...@google.com wrote: Hi Lex, I reviewed everything except CodeSplitter. I looked at it briefly, but that class is fairly unfamiliar to me. Maybe Kathrin or Bob is a better choice? Both of them already have reviews for me (or soon will). It's not really a code splitter change, but new code to associate names with AST nodes (in ReplaceRunAsync). CodeSplitter then looks up the names as recorded. I was really wondering what you'd think is good style for this sort of thing. On little issues: http://gwt-code-reviews.appspot.com/56814/diff/1/7#newcode70 Line 70: // run FindDeferredBindingSitesVisitor because it detects errors, too This raises a flag for me since this is part of web mode compilation... it's not part of TypeOracle build or testing it. Maybe you want to use the new compiler test infrastructure I just committed? I think that would do exactly what you want. It also includes a basic definition of the GWT compilation unit (with create()) and you could add your runAsync decls to that. Sure, I can do that. http://gwt-code-reviews.appspot.com/56814/diff/1/6#newcode96 Line 96: code.append("public Class<?> getClass() { return null; }\n"); I checked in a conflict to this line. My version returns Object.class... will that work for you? Yeah, I saw. That version is also fine. http://gwt-code-reviews.appspot.com/56814/diff/1/4 File user/src/com/google/gwt/core/client/GWT.java (right): http://gwt-code-reviews.appspot.com/56814/diff/1/4#newcode211 Line 211: public static void runAsync(Class<?> name, RunAsyncCallback callback) { The fundamental question I have is why this is a class name instead of a string literal? If it's literally just an arbitrary string identifier, it seems misleading to attach a class to it. http://gwt-code-reviews.appspot.com/56814 I should have dug up the thread that discussed the design of this naming scheme. 
Here it is: http://groups.google.com/group/Google-Web-Toolkit-Contributors/browse_thread/thread/99751ee4ccd02d06/2ecbc72988b91a5b It was a long discussion, and this is the scheme that came out of it. I initially pushed for strings, myself, because they are simple and they are enough for applications. Looking forward, though, libraries might include runAsync calls, in which case class literals have some advantages: 1. They reuse the global Java package hierarchy, so it's hard for two libraries to use the same name by accident. 2. Despite being in a global hierarchy--and thus long--you can use Java imports to shorten them back up. 3. They can have Javadoc attached. 4. They can have other annotations attached, if we ever come up with a need to attach more information to a split point. -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] lightweight events for runAsync in draft mode
Okay, I recently wrote a test for runAsync lightweight metrics, but -- oops -- that test fails in draft mode. In draft mode, no code splitting happens, so no events are generated, and so the test rightfully complains. So, what should be done? I'm thinking to have draft mode generate some different events, and am wondering what people think. My first thought was to leave the events alone, because after all there are no actual downloads in draft mode. However, there are several problems with that approach: 1. The test really should fail if no events are generated in regular compilation modes. So it wouldn't be good to simply change the test to tolerate a complete lack of events. 2. It's awkward to have a test that only runs in certain compilation modes. The list of exceptions would have to live somewhere, and where would that be? 3. It's also awkward to have the test disable itself, because it needs to query some API to figure out whether code splitting really happened. What API would that be? Am I in draft compile mode? Did code splitting happen for real? I can't think of an API that wouldn't be fragile. It's fully intended that the compiler is flexible in the kinds of optimization it does, and it should be possible for the code splitter to have its own decision making as well. It would be better if this test were robust against such changes. Further, the API would be hard to keep private to the test; application code might start using it, thus locking GWT into supporting it for some amount of time. So, instead of enabling the test selectively, how about generating a different event when in draft compile mode? 
The current event sequence for calling a single runAsync is as follows:
- leftoversDownload -- download of the leftovers fragment
- download1 -- download of code for split point 1
- runCallbacks1 -- run the callbacks for split point 1
In draft compile mode, maybe the events could be like this:
- codeAlreadyLoaded1 -- code for split point 1 requested but already present
- runCallbacks1 -- run the callbacks for split point 1
I could then update the test to tolerate either sequence. Thoughts? Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
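The tolerant test described above can be sketched as a simple check against the two acceptable sequences. This is an illustration, not the actual GWT lightweight-metrics test; the event names are taken from the message, the class and method names are hypothetical:

```java
import java.util.*;

// Sketch of a test helper that accepts either the normal code-splitting
// event sequence or the proposed draft-mode sequence.
class RunAsyncEvents {
    static final List<String> NORMAL =
        Arrays.asList("leftoversDownload", "download1", "runCallbacks1");
    static final List<String> DRAFT =
        Arrays.asList("codeAlreadyLoaded1", "runCallbacks1");

    // A run is acceptable if it matches either sequence; a complete lack
    // of events (or a partial sequence) still fails the test.
    static boolean isAcceptable(List<String> observed) {
        return observed.equals(NORMAL) || observed.equals(DRAFT);
    }
}
```

Note that an empty event list is rejected, preserving the property that the test still fails if code splitting produces no events at all in a regular compile.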
[gwt-contrib] Re: lightweight events for runAsync in draft mode
On Fri, Aug 7, 2009 at 11:35 AM, Bruce Johnson br...@google.com wrote: I would go back and push on your dismissal of option #2: tests that really do only run in certain modes. I think we're going to have to embrace that on many different levels, and perhaps we simply need to come up with better infrastructure for managing it. That's a great list of examples you give! I'm now convinced we need selective enabling of tests in at least some cases. Your list is just too long and varied to rule them all out. One place I would quibble is compiler transforms that we consider to be optimizations, because an optimization should preserve behavior. Thus, a test case should not have any easy way to be sensitive to the choice. Going back to the code splitting example, it would be within spec for the code splitter *not* to generate a download event for a particular split point, even though it's not that clever right now. That flexibility makes it very hard to think up an API for querying the code splitter's settings. Certainly "on" and "off" isn't enough information, at least going forward. Would anyone see anything bad, for the narrow code splitting example, with adding a LWM event saying "split point requested, but code already present"? This approach would mean I can finish up the runAsync LWM patch on Monday or Tuesday. So, would it be too much of a digression to brainstorm about a uniform way to handle all these variations? We have @DoNotRunWith() right now. Could we generalize that to: @DoNotRunWhen(from=deferred-bound-type, to=chosen-replacement-class) such as @DoNotRunWhen(from=DOMImpl.class, to=DOMImplOpera.class) We'd have to figure out how to compose these using a decent syntax, but something like this seems like it might work. It would also mean that we'd have to introduce potentially artificial classes to represent optional modes that are just flags right now, including classes such as AggressiveCompilerOptimizationsDisabled or CodeSplittingEnabled, etc. 
Looks good to me. For the short term, how does it strike you to also have a way to query config and binding properties? While it might not be what we want in the long run, it looks very practical right now. For example, disable this test when user.agent=opera. Another example would be, disable this test if class metadata is off. Given the near-term state of module specifications, querying the property looks more direct than needing to define a class that corresponds to them. Also, while thinking about how to set up the annotations, "or" and "and" would seem helpful in some cases. Run this test if the agent is iphone *or* the agent is android. Run this test if the agent is iphone *and* class metadata is on. So it would be good to include them, or something equally powerful, in the supported annotations. At Ian's suggestion, I just took a quick look at the JUnit 4 documentation, and they support an assumeThat() method that we could mimic. The way it's used is that a test method or class can run assumeThat(someExpression) as the first thing it does. If the assumption fails, then the rest of the test simply doesn't run. Large benefits of this approach are that it is very general, and that we don't have to spec up and implement && and ||. Instead, the built-in Java versions could be used: // test a deferred binding result assumeThat(!(GWT.create(com.google.gwt.user.DOMImpl.class) instanceof DOMImplOpera)); // test a property assumeThat(GWT.bindingPropertyIs("user.agent", "safari")); // test two properties assumeThat(GWT.bindingPropertyIs("user.agent", "safari") && GWT.configurationPropertyIs("gwt.metadata.disabled", "false")); On the downside, using assumeThat() means that the test runner still has to run the tests. With annotations, the test runner can know ahead of time which tests are relevant on which platforms. Would our test runners make good use of that information? Enough to merit the implementation time and the larger GWT user manual? 
-Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
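The assumeThat() mechanism discussed above is easy to sketch outside of JUnit. This is a minimal illustration of the skipping behavior, modeled on JUnit's Assume; the class and method names here are hypothetical, not GWT or JUnit API:

```java
// If the assumption fails, the runner treats the test as skipped rather
// than failed. JUnit signals this with a dedicated unchecked exception.
class AssumptionViolatedException extends RuntimeException {
    AssumptionViolatedException(String msg) { super(msg); }
}

class Assume {
    static void assumeThat(boolean condition) {
        if (!condition) {
            throw new AssumptionViolatedException("assumption failed; skipping test");
        }
    }
}

class ExampleTest {
    // Returns "ran" if the test body executed, "skipped" if the
    // assumption failed first. A real runner would catch the exception
    // itself and report the test as skipped.
    static String runWithAgent(String agent) {
        try {
            Assume.assumeThat("safari".equals(agent));
            return "ran"; // ... test body would go here ...
        } catch (AssumptionViolatedException e) {
            return "skipped";
        }
    }
}
```

The trade-off noted in the message is visible here: the runner cannot know a test will be skipped until it starts executing it, whereas annotations would let it filter tests up front.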
[gwt-contrib] Re: faster size breakdown with non-fractional billing
On Mon, Aug 3, 2009 at 1:58 PM, Bruce Johnson br...@google.com wrote: Because it's easy to bikeshed: can we make the -soyc (-soycExtra) flag more like -style in that it has multiple values rather than having two separate flags? Or is there a rationale for this style that I'm missing. When Bob V's permutation control changes land, we want to make all of this sort of stuff fall into the category of deferred binding properties, so that you could, for example, create one permutation with style PRETTY, another with style DETAILED, etc. Having the -soyc flag follow a name/value pattern would make it more amenable to this change. There is no immediate use case for the detailed information. However, I hated to remove all that code when we might need it later. Thus, I left -soyc as the normal use case, and added -XsoycExtra for those use cases that might conceivably need it in the future. It's not a documented option, and it's not listed when you run the compiler with -help. How does that sound, Bruce and Kathrin (and anyone else interested)? It seems very helpful if -soyc is the only option users need to supply. There is even talk of having the -soyc option go ahead and run the dashboard generator, thus giving you final HTML output without needing to add the second step. On a related note, I agree with Kathrin that it's not precisely detailed or extra information that this flag gives you. It's different information, different enough that you can't compute one from the other. I named it "extra" in a hurry. Can anyone think of anything that would be less misleading? -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: don't download class factories in the initial download
On Tue, Jul 28, 2009 at 3:23 PM, Scott Blum sco...@google.com wrote: There was an unused import of com.google.gwt.dev.jjs.ast.JModVisitor left in here. Sorry, I just eyeballed Rietveld and didn't patch it in. My bad! Thanks for fixing it. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: RR : Emulate JS stack traces (phase 1)
On Mon, Jul 20, 2009 at 9:20 AM, b...@google.com wrote: http://gwt-code-reviews.appspot.com/47816 I'll review it. Man, what a great talking point this is going to be. Because GWT has a compiler, we get to do fine-grained rewrites like this one. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: SOYC generates more dependency trees
I fully agree about the development pattern! That's why I started by writing the CodeSplitting wiki page as if SOYC already supported the use cases I was thinking about. The design doc was therefore a user manual. As an aside, the code splitting document might sound like a weird home for SOYC development, but in this case the use cases are the same. If we add more kinds of information to SOYC, then that will change. At the risk of being redundant, here are those two use cases I thought about: 1. Size breakdowns of different parts of your code, so you know what parts to work most on shrinking and/or splitting out. 2. Dependency information related to code splitting, so you can debug what is happening when splitting doesn't go as expected. For this scope, my remaining task list to get the information complete is: 1. Dependency information for strings. 2. Depict the initial load sequence. 3. Deal with the divide between pre-optimization size breakdowns and after-optimization dependencies. If we go with the SoyLite size breakdowns, then this issue will disappear. For the UI itself, doubtless it can be improved. I only went so far as to think about the work flow for common debugging tasks, which I documented in the CodeSplitting page. However, the UI is still sparse, and it could doubtless use more guidance and cross-linking. In particular, the current implementation makes it nearly impossible to re-sort the output. Additionally, it takes a lot of time to generate even the relatively limited set of HTML files that are currently supported. Both of these could be remedied by making it a GWT app plus a servlet. Finally, I like the idea of supporting more detailed queries. I haven't looked into it because covering the basics has already exhausted my available time. A good start would be... use cases that aren't already covered. :) Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: SoyLite
On Sun, Jun 28, 2009 at 12:19 AM, John Tamplin j...@google.com wrote: On Fri, Jun 26, 2009 at 3:20 PM, Lex Spoon sp...@google.com wrote: I've been trying to think of ways to speed up the -soyc option, and here is the result of one attempt. What do people think? The idea is to mimic some aspects of the speedy symbolMaps files. Instead of using the enhanced SourceInfos to track links between before-optimization and after-optimization code, bill size information only to the program as it stands at the end of Java optimization. Additionally, be careful to avoid needing any massaging of the data in StoryRecorder; instead, make a single pass through all the size information. How much would it be skewed by JS-level optimizations? What about JSNI code? The basic framework is the same in both cases. Output bytes get billed back to Java code. That's final output bytes, after all optimization is complete. Both would bill JSNI code to the associated Java native method. The difference is whether to map each byte to multiple methods, or to pick just one. Let me give some examples. Suppose Point is a class with a method getX() that is always inlined. Thus, Point.getX() is inlined away during Java optimization. Then suppose some method TextArea.getArea() calls Point.getX(). In trunk, Point.getX() is billed for every place it gets inlined, so it will show up in the size breakdown. TextArea.getArea(), meanwhile, will not be billed for all of its output bytes; the ones that it got by inlining Point.getX() will be partially billed back to Point.getX(). To contrast, with SoyLite, Point.getX() would not show up in the size output, and TextArea.getArea() would be billed slightly more. As another example, suppose Java method Integer.toString ends up compiling to JavaScript function toString_3. Also, suppose the compiler creates a static version Integer.toString$, which then compiles to function toString_4. 
In trunk, Integer.toString is given full blame for toString_3 and half blame for toString_4; Integer.toString$ is given half blame for toString_4. With SoyLite, Integer.toString is given full blame for toString_3, and Integer.toString$ is given full blame for toString_4. Does that clear things up? If not, perhaps we should examine Showcase in more detail to find real examples of differences to look at. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
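The Integer.toString example above reduces to simple arithmetic, which can be sketched as follows. This is only an illustration of the two billing schemes being compared, with hypothetical names; it is not the SOYC implementation:

```java
import java.util.*;

// Illustration: how trunk's fractional billing and SoyLite's whole
// billing would split the output bytes of one JavaScript function.
class SizeBilling {
    // Fractional billing (trunk): each contributing Java method gets an
    // equal share of the function's bytes.
    static Map<String, Double> fractional(int bytes, String... origins) {
        Map<String, Double> bill = new HashMap<>();
        for (String m : origins) {
            bill.put(m, bytes / (double) origins.length);
        }
        return bill;
    }

    // Whole billing (SoyLite): all bytes go to the single method the
    // program had at the end of Java optimization.
    static Map<String, Double> whole(int bytes, String origin) {
        return Collections.singletonMap(origin, (double) bytes);
    }
}
```

So if toString_4 were 100 bytes, trunk would bill 50 each to Integer.toString and Integer.toString$, while SoyLite would bill all 100 to Integer.toString$.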
[gwt-contrib] Re: naming runAsync calls
Is there any other design criterion that people can see? The main ones I see are that it's easy to implement and maintain, it's easy to spec, and that developers can use it without needing any major code refactor. That narrows it down to either an annotation on the method (option 1), or an extra argument to runAsync (option 4). Does anyone object to flipping a coin between these two? On Tue, Jun 23, 2009 at 12:04 PM, Ian Petersen ispet...@gmail.com wrote: On Tue, Jun 23, 2009 at 8:43 AM, Lex Spoon sp...@google.com wrote: That's a tough question to answer. I don't have a Java environment at hand, so can someone remind me whether or not you're allowed to annotate local variable declarations? If so, you could assign your CakeMaker to an annotated local and then pass the local to runAsync. I suspect you can't annotate locals, though. It seems that they can. Proposal number 6. :) @RunAsyncName("Foo") RunAsyncCallback callback = chooseMyCallback(...); GWT.runAsync(callback); However, I would dearly like to get this implemented and not add new proposals. Is there a reason to prefer this over the others? In another vein, I am having second thoughts about Bruce's original suggestion of annotating the type that's instantiated and passed to runAsync. Is the ambiguity I mentioned a real problem or just a phantom? Is the fragment beyond a given split point defined by the call site, the type of the argument to runAsync, or both? In other words, suppose I have a concrete implementation of AsyncCallback called MyCallback. If I invoke runAsync in two distinct places but pass an instance of MyCallback to both, are the fragments on the other side of the invocation the same or different? Does it matter how MyCallback is implemented or if it's parameterized somehow? If the concrete type of the argument to runAsync defines the fragment, maybe it's enough to annotate the type. It's the *call* of runAsync that the compiler pays direct attention to, not its argument. 
That's why I am pushing back against approaches that annotate some aspect of the argument. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: naming runAsync calls
On Wed, Jun 24, 2009 at 12:48 PM, Ian Petersen ispet...@gmail.com wrote: The following should be allowed: @SplitPointName("foo") final AsyncCallback callback = chooseACallback(); if (flipACoin()) GWT.runAsync(callback); else GWT.runAsync(callback); I don't see how to literally allow that, because there are two calls to runAsync. They aren't allowed to have the same name. However, they could be named differently even though they use the same callback, like this: AsyncCallback callback = chooseACallback(); if (flipACoin()) { @SplitPointName("foo") AsyncCallback callback1 = callback; GWT.runAsync(callback1); } else { @SplitPointName("bar") AsyncCallback callback2 = callback; GWT.runAsync(callback2); } It works fine, but IMHO it's verbose. Overall, unless I missed something, it's down to style and taste. I'd pick 1, then 4, then 6. Ian has indicated preferring 6, then 4, then 1. I presume Cameron prefers 4 over anything else. Shall we go with 4, then, everyone? // proposal 4 GWT.runAsync("NameGoesHere", new RunAsyncCallback() { }) Going once... going twice... Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: naming runAsync calls
On Wed, Jun 24, 2009 at 2:15 PM, Lex Spoon sp...@google.com wrote: Overall, unless I missed something, Okay, Bruce pointed out a new constraint to me: if different libraries name their runAsync calls, then we want to be able to refer to those calls reliably even if different libraries choose the same name. This isn't an issue immediately, but it likely will be in the future. Thinking about libraries, I would add another constraint: we don't want libraries to have to expose their implementation. Library writers should ideally be able to document a named runAsync call without exposing the precise arrangement of their internal classes. After some discussion at the office, a tweak to option 4 fixes things up handily. Instead of passing in a string to the method, act like a Java framework and require passing in a class literal. A typical use in application code would pass in the enclosing top-level class: package org.foodity.impl; class Cookies { ... GWT.runAsync(Cookies.class, new RunAsyncCallback() { ... }); ... } A library writer could instead specify a public class, so as not to expose their internal factoring. A user of the name in a gwt.xml file would use the fully qualified version: <extend-configuration-property name="compiler.splitpoint.initial.sequence" value="org.foodity.impl.Cookies" /> A user in another Java file would use imports to make the name short: import org.foodity.impl.Cookies; RunAsyncQueue.startPrefetching(Cookies.class); Thoughts? The main downside I know of is the one John Tamplin has pointed out: if there are multiple runAsync calls within a single class -- as sometimes happens -- then the programmer has to code up some new classes that will only be used for naming. Can we live with that? Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: naming runAsync calls
On Wed, Jun 24, 2009 at 5:08 PM, Ray Cromwell cromwell...@gmail.com wrote: I prefer 4 as well, because I think it will be less prone to error and it is more directly associated with the runAsync call. However, I'm curious, what is the effect of the following: GWT.runAsync("foo", callback1); GWT.runAsync("bar", callback1); That would appear to me to generate identical code, but with two different named output files. Ideally, the compiler would figure out that they are the same and do something smart. Right now, though, the results would tend to be bad. On the other hand, what about this: GWT.runAsync("foo", callback1); GWT.runAsync("foo", callback2); Here, two different callbacks try to use the same name. This is a subtopic common to any naming scheme: what happens if the same name is specified for two calls? It either needs to be a compile error, or a warning. Either way, the name is not allowed to be used. Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
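The duplicate-name rule above amounts to a uniqueness check during compilation. A minimal sketch of that check, with hypothetical names (the real compiler would report an error or warning at the offending call site):

```java
import java.util.*;

// Sketch: a registry that rejects a split-point name that is already taken.
class SplitPointNames {
    private final Set<String> seen = new HashSet<>();

    // Returns true if the name was registered, false if it was already
    // taken -- the case the compiler must turn into an error or warning.
    boolean register(String name) {
        return seen.add(name);
    }
}
```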
[gwt-contrib] Re: naming runAsync calls
On Mon, Jun 22, 2009 at 7:33 PM, Ian Petersen ispet...@gmail.com wrote: Here's what I mean: // ... surrounding code ... GWT.runAsync(new AsyncCallback() { public void onFailure(Throwable caught) { // deal with failure } @SplitPointName("I like Bruce's idea") public void onSuccess() { // deal with success } }); // ... surrounding code ... I'm not sure if it's better or worse, but it seems more flexible than requiring a surrounding method. Okay, call it proposal 5, annotation of onSuccess methods. I'm not clear on how we should associate onSuccess methods with runAsync calls in the general case. Note that the argument to runAsync doesn't currently have to be an anonymous inner class: class Bakery { private static class CakeMaker implements RunAsyncCallback {...} public static void makeOrder(CakeType cake) { GWT.runAsync(new CakeMaker(cake)); } } It gets even more interesting if the CakeMaker type is abstract, and the implementation is made by a builder class. The builder might even have multiple implementations: class Bakery { private static abstract class CakeMaker implements RunAsyncCallback { } private static class CakeMakerBuilder { ... public CakeMaker build() { class CakeMakerImpl1 extends CakeMaker { public void onSuccess() { ... } } class CakeMakerImpl2 extends CakeMaker { public void onSuccess() { ... } } if (someConfigProperty) { return new CakeMakerImpl1(); } else { return new CakeMakerImpl2(); } } } } Which onSuccess method(s) should be annotated, and how should GWT interpret those annotations? Lex Spoon -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: Add getJsniSignature() to typeinfo.JAbstractMethod
Excellent. I agree that that's an annoying piece of code to keep rewriting. -Lex On Mon, Jun 22, 2009 at 9:42 AM, b...@google.com wrote: Reviewers: Lex, scottb, Message: Review requested Description: I keep writing the same utility method in my Generator types to create a JSNI reference to a given JMethod. -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: naming runAsync calls
Okay, we now have a suite of options. Does anyone see any particularly strong reason to pick among them? The options I see are: 1. Annotate the surrounding method with something like @RunAsyncName("Foo"). 2. Use the fully-qualified method name surrounding the call. 3. Use the fully-qualified type name of the callback object. 4. Use a new parameter to runAsync indicating the name. #2 is what's in trunk right now, but it is quite verbose. I think we shouldn't stick with that unless we can make it more concise. One way to do that would be to make JSNI references more concise if they refer to a method that isn't overloaded. We could let * be used as the parameter list in that case. As much as I like that idea, it would take a few days to implement, so the opportunity cost would be high. For #3, it's not necessarily unambiguous, as pointed out. Thus, it loses one of the main features of #2. Additionally, it looks harder to spec to me. We would have to tell people something like: to name a runAsync call, make sure to put the call within a method with at least one argument, and then specify the type of that first argument. Can anyone tighten up that spec and make it competitive? Overall, both #2 and #3 take extra work compared to the others. #2 requires implementation work, and #3 requires at the least some extra work to spec it. #1 and #4 both look good to me, though I admit a slight preference for #1. That slight preference is mainly because it would mean we get to stick with a single GWT.runAsync method in the magic GWT class, which seems better for a few small reasons. That said, we could certainly go with #4 if there's a reason to prefer it. Thoughts welcome. I still think the annotations (option 1) look good, but I don't want to push for that if it will cause some trouble or miss some opportunity. Is there a reason to prefer 4 over 1? Is 2 or 3 worth the extra work, or else, does anyone see how to make 2 or 3 easier to implement? 
Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
[gwt-contrib] Re: More if-statement optimizations in JsStaticEval
On Tue, Jun 9, 2009 at 11:01 AM, Matt Mastracci matt...@mastracci.com wrote: Yeah... I would love to have a single AST to represent both of the language states - many optimizations don't happen because we toss all the Java type info during the conversion to JS. For instance, you can't safely optimize this: if (blah != null) to this: if (blah) without knowing what type blah was at the beginning. I expect this would work well, too. In addition, a lot of optimizations would be a lot simpler to write (and some would just fall out of that structure) if we were dealing with a more abstract SSA version of the code, rather than a more direct AST. To be precise, I think what you're getting at is that within-method data flow would be helpful to represent. It's safer to stay with an AST representation, because the GWT compiler must output valid JavaScript syntax, and in particular does not have the luxury of using any kind of goto. However, it could certainly use an AST but either have the AST follow the SSA invariant, or else augment the AST with a data-flow graph. Either way, the goal is to get data-flow information in there, so that silly code like (x=10, x) can be replaced by simply 10. -Lex -- http://groups.google.com/group/Google-Web-Toolkit-Contributors
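The (x=10, x) rewrite mentioned above can be illustrated with a toy AST. This is not the GWT compiler's AST; it is a minimal, hypothetical sketch of the kind of fold that def-use information enables, under the simplifying assumption that the assigned variable has no other uses:

```java
// Toy expression AST: numbers, assignments, variable reads, and the
// JavaScript comma operator.
abstract class Expr {}
class Num extends Expr {
    final int value;
    Num(int value) { this.value = value; }
}
class Assign extends Expr {
    final String var; final Expr rhs;
    Assign(String var, Expr rhs) { this.var = var; this.rhs = rhs; }
}
class VarRef extends Expr {
    final String var;
    VarRef(String var) { this.var = var; }
}
class Comma extends Expr {
    final Expr first, second;
    Comma(Expr first, Expr second) { this.first = first; this.second = second; }
}

class Simplifier {
    // If the left side assigns a constant to x and the right side reads x,
    // fold the whole comma expression to the constant. (A real pass would
    // first verify, via data-flow information, that x has no other uses.)
    static Expr simplify(Expr e) {
        if (e instanceof Comma) {
            Comma c = (Comma) e;
            if (c.first instanceof Assign && c.second instanceof VarRef) {
                Assign a = (Assign) c.first;
                VarRef r = (VarRef) c.second;
                if (a.var.equals(r.var) && a.rhs instanceof Num) {
                    return a.rhs;
                }
            }
        }
        return e;
    }
}
```

With this in place, (x = 10, x) simplifies to the literal 10, which is exactly the kind of cleanup that is awkward without data-flow information attached to the AST.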