On Thu, Feb 11, 2010 at 4:43 PM, Scott Blum <sco...@google.com> wrote:
> - I dislike the whole transition period followed by having to forcibly
> update all linkers, unless there's a really compelling reason to do so.

In general, I'd agree, but the number of linkers in the wild appears
to be small; this may be a case of trying to preserve an API that only
5 or 10 people in the world are using.

>  Maybe I'm missing some use cases, but I don't see what problems result from
> having some linkers run early and others run late.  As Lex noted, all the
> linkers are largely independent of each other and mostly won't step on each
> other's toes.

In theory, you could have a non-sharded pre-linker whose job is to
pre-filter the results before any other linker sees them, for example
by substituting text into compiled artifacts that a later linker
depends on. Admittedly, this would only cause a problem if you had
written a sharded linker that cooperates with something a non-sharded
pre-linker is supposed to do, and I can't really think of any
practical cases.
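
Just to make the theoretical interaction concrete, such a pre-linker
might look roughly like the following. This is entirely made up: the
@BUILD_ID@ token and the linker itself are illustrations, and
readStreamAsString is gwt-dev's Util helper.

import com.google.gwt.core.ext.Linker;
import com.google.gwt.core.ext.LinkerContext;
import com.google.gwt.core.ext.TreeLogger;
import com.google.gwt.core.ext.UnableToCompleteException;
import com.google.gwt.core.ext.linker.ArtifactSet;
import com.google.gwt.core.ext.linker.EmittedArtifact;
import com.google.gwt.core.ext.linker.LinkerOrder;
import com.google.gwt.core.ext.linker.LinkerOrder.Order;
import com.google.gwt.core.ext.linker.SyntheticArtifact;
import com.google.gwt.dev.util.Util;

// Made-up pre-linker: rewrites a token in every emitted artifact before
// later linkers see it. If it ran only at the final link while another
// linker ran on the shards, that other linker would see the raw token.
@LinkerOrder(Order.PRE)
public class TokenSubstitutingLinker extends Linker {
  @Override
  public String getDescription() {
    return "Replaces @BUILD_ID@ in emitted artifacts";
  }

  @Override
  public ArtifactSet link(TreeLogger logger, LinkerContext context,
      ArtifactSet artifacts) throws UnableToCompleteException {
    ArtifactSet result = new ArtifactSet(artifacts);
    for (EmittedArtifact ea : artifacts.find(EmittedArtifact.class)) {
      String text = Util.readStreamAsString(ea.getContents(logger));
      if (text != null && text.contains("@BUILD_ID@")) {
        // Replace the original artifact with a rewritten copy.
        result.remove(ea);
        result.add(new SyntheticArtifact(TokenSubstitutingLinker.class,
            ea.getPartialPath(),
            text.replace("@BUILD_ID@", "12345").getBytes()));
      }
    }
    return result;
  }
}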

> - It seems unnecessary to have to annotate Artifacts to say which ones are
> transferable, because I thought we already mandated that all Artifacts have
> to be transferable.

Should all artifacts have to be transferable? A linker could generate
temporary artifacts that are only used within a shard and don't need
to be sent back for the final link, right?
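
For example, a linker might produce a scratch artifact like the one
below that is written and read entirely within one shard's link. The
class is made up, and SymbolMapsLinker is only named as the owning
linker for illustration.

import com.google.gwt.core.ext.linker.Artifact;
import com.google.gwt.core.linker.SymbolMapsLinker;

// Hypothetical intermediate artifact: produced and consumed within a
// single shard, so it never needs to reach the final link.
public class ScratchSymbolData extends Artifact<ScratchSymbolData> {
  private final String data;

  public ScratchSymbolData(String data) {
    super(SymbolMapsLinker.class);
    this.data = data;
  }

  public String getData() {
    return data;
  }

  @Override
  public int hashCode() {
    return data.hashCode();
  }

  @Override
  protected int compareToComparableArtifact(ScratchSymbolData o) {
    return data.compareTo(o.data);
  }

  @Override
  protected Class<ScratchSymbolData> getComparableArtifactType() {
    return ScratchSymbolData.class;
  }
}

Under an opt-in whitelist, simply not marking this class transferable
keeps it on the shard; if every artifact must be transferable, we pay
to serialize it back even though nothing reads it at the final link.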


> 2) Instead of trying to do automatic thinning, we just let the linkers
> themselves do the thinning.  For example, one of the most
> serialization-expensive things we do is serialize/deserialize symbolMaps.  To
> avoid this, we update SymbolMapsLinker to do most of its work during
> sharding, and update IFrameLinker (et al) to remove the CompilationResult
> during the sharded link so it never gets sent across to the final link.

It sounds to me like almost every linker will want to do thinning,
so if thinning is going to be used 100% of the time, won't requiring
everyone to reimplement it themselves invite bugs?
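
Concretely, as I read option 2, every shardable linker ends up
carrying some variant of the snippet below. The boolean-flagged link()
entry point is my guess at the proposal, not existing API, and the
emit* helpers are made-up stand-ins for the linker's real work.

import com.google.gwt.core.ext.Linker;
import com.google.gwt.core.ext.LinkerContext;
import com.google.gwt.core.ext.TreeLogger;
import com.google.gwt.core.ext.UnableToCompleteException;
import com.google.gwt.core.ext.linker.ArtifactSet;
import com.google.gwt.core.ext.linker.CompilationResult;

public class ThinningByHandLinker extends Linker {
  @Override
  public String getDescription() {
    return "Example of hand-rolled thinning";
  }

  @Override
  public ArtifactSet link(TreeLogger logger, LinkerContext context,
      ArtifactSet artifacts) throws UnableToCompleteException {
    // Non-sharded compiles would still come through the old entry point.
    return link(logger, context, artifacts, false);
  }

  // Hypothetical sharded entry point; the flag distinguishes a
  // per-permutation (shard) link from the final link.
  public ArtifactSet link(TreeLogger logger, LinkerContext context,
      ArtifactSet artifacts, boolean onePermutation)
      throws UnableToCompleteException {
    ArtifactSet result = new ArtifactSet(artifacts);
    if (onePermutation) {
      emitPermutationOutput(logger, context, result);
      // The thinning step: strip the CompilationResults here, or they
      // get serialized back to the final link anyway.
      result.removeAll(result.find(CompilationResult.class));
    } else {
      emitSelectionScript(logger, context, result);
    }
    return result;
  }

  private void emitPermutationOutput(TreeLogger logger,
      LinkerContext context, ArtifactSet artifacts) {
    // made-up helper: write the permutation JS, symbol maps, etc.
  }

  private void emitSelectionScript(TreeLogger logger,
      LinkerContext context, ArtifactSet artifacts) {
    // made-up helper: write the selection script and anything
    // cross-permutation.
  }
}

The removeAll() line is the part that's easy to forget, and forgetting
it doesn't break anything visibly; the build just quietly ships more
data from the shards to the final link.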

I thought Lex's design was essentially to make things network-efficient
by doing the right thing in the common case (automatic thinning, with a
whitelist of the things you want transferred). I'm not saying the
manual/opt-out approach wouldn't result in similar savings, but it seems
like Lex's design would make it harder for people to write linkers that
blow up on sharded compiles, especially since most third parties and
external contributors aren't using the shard feature yet and so don't
have much of a way to detect that they've done something bad.
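
To make the contrast concrete, the whitelist direction (as I
understand Lex's proposal) boils down to a marker along these lines;
the name, retention, and exact semantics here are my guesses, not
settled API.

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical opt-in marker: only artifact classes tagged with this
// would be serialized from a shard back to the final link; everything
// else is dropped automatically by the framework.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Transferable {
}

A linker author tags the one or two artifact types they actually need
at final-link time and gets correct thinning everywhere else for free.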

-Ray

-- 
http://groups.google.com/group/Google-Web-Toolkit-Contributors
