First, to make sure we are on the same page, it might be useful to think of this change as a composition of two related changes.
The first one is jar file caching on the other side of the channel. When one side wants to load class "Foo" from the other side, it used to be that we sent the class file image in the response. With this branch, the response can be "that class is in the jar file e11f8a3c1ee73e09b19e344f9fe99cb5" (which refers to the checksum of the jar file). The receiver, once it gets this response, can either look up that jar file in its local cache, or make a separate call to retrieve it. So the response to the classloading request actually gets a lot smaller, because now we are mostly just sending around the checksum of a jar file instead of the actual class file image.

The second part of what this branch does is prefetching. When one side wants to load "Foo" from the other side, we not only tell them that Foo is in the jar file e11f8a3c1ee73e09b19e344f9fe99cb5, but also that Bar is from the same jar file, and that Zot is from the jar file cb58012189e5b726c347219d6d5c73c8. This eats up some of the saving we achieved in the first change, but the response should still come in far smaller than what we had before.

Now that we are on the same page, I think you are mixing up latency and bandwidth. As you noted, this is an optimization for a high-latency network, and I still assume there is fair bandwidth. Adding to the size of the request/response is not too bad in such a network, because the time it takes is a function of bandwidth; for example, even on a 1MB/sec network you can get 100KB sent in 100ms. In contrast, the delay induced by a roundtrip is a function of latency, so if your roundtrip time is 100ms, saving a single roundtrip wins you 100ms. Thus, in this hypothetical network, saving one roundtrip by adding 50KB to the response is a win. This branch does hurt the initial start-up cost of a new slave, but for most situations I think it is a net win.
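To make the shape of the exchange concrete, here is a rough sketch of what a classloading response could carry under this scheme. This is illustrative only; the class and field names are made up and do not correspond to the actual remoting API.

```java
import java.io.Serializable;
import java.util.Map;

// Illustrative only -- not the actual remoting types. A classloading response
// that identifies the containing jar by checksum and piggybacks prefetch hints.
public class ClassLoadResponse implements Serializable {
    // The class that was requested, e.g. "Foo".
    public final String requestedClass;
    // Checksum of the jar that contains it, e.g. "e11f8a3c1ee73e09b19e344f9fe99cb5".
    // The receiver either finds this jar in its local cache or fetches it once
    // with a separate call, after which every class in that jar is local.
    public final String jarChecksum;
    // Prefetch hints: other class names the sender expects to be needed soon,
    // mapped to the checksums of the jars that contain them,
    // e.g. "Bar" -> same checksum as above, "Zot" -> "cb58012189e5b726c347219d6d5c73c8".
    public final Map<String, String> prefetchedClasses;

    public ClassLoadResponse(String requestedClass, String jarChecksum,
                             Map<String, String> prefetchedClasses) {
        this.requestedClass = requestedClass;
        this.jarChecksum = jarChecksum;
        this.prefetchedClasses = prefetchedClasses;
    }
}
```

The point is that the per-class payload shrinks to a class-name-to-checksum mapping, and the jar image itself crosses the wire at most once per checksum.

And to spell out the bandwidth-vs-latency arithmetic above, using the same assumed numbers (1MB/sec bandwidth, 100ms roundtrip, 50KB of extra prefetch data in a response):

```java
public class BreakEven {
    public static void main(String[] args) {
        // Assumed figures from the example above, not measurements.
        double bytesPerMs = 1_000_000 / 1000.0; // 1MB/sec is roughly 1000 bytes per millisecond
        double roundtripMs = 100;               // roundtrip time on the high-latency link
        double extraBytes = 50_000;             // extra prefetch data added to one response

        double extraTransferMs = extraBytes / bytesPerMs;   // 50ms to push the extra bytes
        double netSavingMs = roundtripMs - extraTransferMs; // 50ms net saving per avoided roundtrip
        System.out.printf("extra transfer %.0fms, net saving per avoided roundtrip %.0fms%n",
                extraTransferMs, netSavingMs);
    }
}
```

Every prefetch hit is a net win as long as the extra transfer time stays below the roundtrip time it avoids.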
2013/5/13 Dean Yu <[email protected]>

> I've been noodling over this, and I'm not convinced that optimistic
> prefetching of classes is going to lead to an overall win. In the end this
> is about latency, and the hypothesis is that you save on the latency of
> round trip calls for loading a single class at a time by sending many
> classes up front. The only way this would be true is if the amount of
> latency you add by increasing the size of the original payload is less
> than the sum of the latency for each round trip call that is saved. You
> also reduce the amount you could save if you wind up sending more classes
> than is actually needed as part of the remote calling chain. I find it
> hard to believe that you can achieve the necessary savings, at least
> purely from the remoting layer. It seems that you'd need some code in
> Jenkins to help tune the prefetch algorithm.
>
>   -- Dean
>
> On 5/12/13 8:08 AM, "Dean Yu" <[email protected]> wrote:
>
> >Do you also prefetch inherited classes? I wonder if that would help.
> >Also, can you measure the number of classes that were prefetched but
> >never used?
> >
> >  -- Dean
> >
> >On Saturday, May 11, 2013 2:10:18 PM UTC-7, Kohsuke Kawaguchi wrote:
> >> (Context: see https://github.com/jenkinsci/remoting/pull/10)
> >>
> >> I've got the new code working under the Maven job type to see the
> >> effect of prefetching. Here is the summary of classloader activities
> >> in building
> >> https://svn.jenkins-ci.org/trunk/jenkins/test-projects/model-maven-project/
> >>
> >> Class loading count=801
> >> Class loading prefetch hit=372 (46%)
> >> Resource loading count=11
> >>
> >> The new code manages to avoid sending individual class/resource file
> >> images on the wire completely, and they are instead all retrieved from
> >> locally cached jar files.
> >>
> >> The prefetch hit ratio 46% means we were able to cut the number of
> >> roundtrips to 54% of what it was before. Interestingly, this 46% number
> >> is very consistent across different call patterns --- the slave itself
> >> had 48% prefetch hit ratio.
> >>
> >> I haven't measured the difference in the number of bytes transferred.
> >>
> >> I wonder what can be done to further improve the prefetch hit ratio.
> >>
> >> The complete call sequence details at
> >> https://gist.github.com/kohsuke/5561414
> >>
> >> --
> >> Kohsuke Kawaguchi

--
Kohsuke Kawaguchi
