Adam,

> To start with, we're approaching C/C++ compilation the same way as Java:
> Compile all the C/C++ source in a single task. If any of the inputs to this
> task change (source, headers, compiler, compiler options, etc), then all of
> the source is recompiled. Later, we will add incremental compilation, where
> only those source files whose inputs have changed will be recompiled. We'll
> also clean up stale output files, where the associated source file no
> longer exists (this is a nice advantage of the 'source set' approach over
> the 'action per source file' approach).


  Because C++ compilation can take a long time, it would
  be very nice if Gradle were multiprocessor- and
  multi-machine-friendly.


> > SCons and Waf both use MD5 signatures not just timestamps for
> > determining whether a source file needs to be recompiled.  This is
> > deemed a huge win, for the really big builds -- we are talking here of
> > massive systems taking 20-40 hours to build from scratch.
> 
> Gradle's incremental build already uses MD5 signatures, not timestamps, so
> we're ok in this respect.
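Content signatures of this sort are cheap to compute with the JDK alone. A minimal sketch of the idea (the class and method names here are my own, illustrative only, and not Gradle's actual API):

```java
import java.security.MessageDigest;

// Minimal sketch of content-based up-to-date checking: an input needs
// rebuilding only if its content hash differs from the signature recorded
// on the previous run. (Names are illustrative, not Gradle's API.)
public class SignatureDemo {
    public static String md5Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // A file is "dirty" if its current signature differs from the stored one.
    public static boolean isDirty(String storedSignature, byte[] currentContent)
            throws Exception {
        return !md5Hex(currentContent).equals(storedSignature);
    }
}
```

Unlike timestamps, this stays correct across clock skew between machines, which matters for the distributed scenarios below.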

   
   It would be great if there were an easy way to collect
   stats on how long different parts of a build have taken,
   and then to query a running Gradle process about
   how things are going.
   
   For example, if you had a CLI, web-based, or ncurses-based console, 
   you could get a reasonable time-to-completion estimate / progress bar.  

   On a related note, if a stable interface exposed the dependency 
   state machine & work queue, one could even imagine custom logic
   for transferring work to an idle (or specially licensed) cluster.
   You could then 
   
        o  Share artifacts over NFS 

        o  Ship artifacts around via sockets & tar.gz

        o  Have the remote build cluster upload artifacts to
           something like Artifactory, with a lazy poll thread
           and/or signaling mechanism watching for them


> >>> This means parsing the files to determine the graph of
> >>> relationships.  
> >> 
> >> This will be challenging, but there are Java libraries around to do this 
> >> that we can leverage when the time comes.
> > 
> > I don't know of any tools other than the compiler for doing scans; what Java
> > libraries are you thinking of here?  SCons and Waf use text processing
> > for the scanning, which, given they are Python systems, is fairly
> > straightforward.  In C and C++ you have to process #include; in D and Go
> > it is import; in LaTeX it is \input or \include -- though LaTeX is
> > exceedingly complicated since it is a macro-based notation with
> > expansion.
> 
> There are quite a few places where a task might want to extract dependencies 
> out of source files, and do its work on a subset of its source files based on 
> this. It might be good for us to introduce some common infrastructure for 
> these tasks, such as persistence of the dependency graph, cleaning up stale 
> outputs, and so on. And maybe even some way to say from the command-line 
> 'build this particular output file' or 'rebuild everything affected by this 
> source file'.
> 


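  The SCons/Waf-style text processing is not hard to sketch in
  Java either.  A naive regex pass -- deliberately ignoring
  conditional compilation and computed includes, which a real
  scanner would have to handle -- might look like:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Naive #include scanner in the spirit of SCons/Waf text processing.
// It ignores conditional compilation (#ifdef) and computed includes,
// so it over-approximates the true dependency set.
public class IncludeScanner {
    private static final Pattern INCLUDE =
        Pattern.compile("^\\s*#\\s*include\\s*[<\"]([^>\"]+)[>\"]",
                        Pattern.MULTILINE);

    public static List<String> scanIncludes(String source) {
        List<String> headers = new ArrayList<>();
        Matcher m = INCLUDE.matcher(source);
        while (m.find()) {
            headers.add(m.group(1));
        }
        return headers;
    }
}
```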
  There's also the '-MM' switch in gcc:

  http://scottmcpeak.com/autodepend/autodepend.html
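  gcc -MM emits make-style rules like "main.o: main.c util.h",
  with long lines continued by a trailing backslash, so letting
  the compiler do the scan and parsing its output is just string
  processing.  A rough sketch (assuming plain -MM output with no
  escaped spaces in file names):

```java
import java.util.ArrayList;
import java.util.List;

// Rough parser for the make-style rules that 'gcc -MM' prints, e.g.
//   main.o: main.c util.h \
//    defs.h
// Assumes no escaped spaces in file names.
public class DepParser {
    public static List<String> parseDeps(String mmOutput) {
        // Drop the "target:" part, then join backslash-continued lines.
        String body = mmOutput.substring(mmOutput.indexOf(':') + 1)
                              .replace("\\\n", " ");
        List<String> deps = new ArrayList<>();
        for (String token : body.trim().split("\\s+")) {
            if (!token.isEmpty()) {
                deps.add(token);
            }
        }
        return deps;
    }
}
```

  This has the advantage that the scan agrees exactly with what
  the compiler will actually include, conditional compilation
  and all.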
 


                        -Jon

