What I'm doing with Messenger Tutorial Docs -- Reviews Welcome
  ==============================================================
  {

    1. How it works
    ------------------------------
    {
      There will be a custom cmake target, probably called 'docs'.
      When you 'make' this target, it runs a bunch of simple Messenger
      tests in all supported languages.

      The purpose of these tests is not the same as the other tests
      that already exist; they are not trying to prove correctness
      of all the Messenger features.  Their purpose is to provide
      the documents with good code snippets -- that do the same thing
      in each language.

      If any of these tests fail, we stop right there and the docs
      do not get made.

      After the tests run successfully, the code snippets are all
      extracted.

      The docs I write are kind of ... templates for docs.  Each doc
      gets expanded into L distinct docs, where L is the number of
      languages that Messenger supports.  So you get the cross-product
      of docs and languages.

      Each doc has little markers in it, like
         pmdocproc 13
      that tell the processor which snippet to put there.
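      The substitution step can be sketched in a few lines.  This is
      just an illustration, not the real processor; the function name
      and data shapes here are made up:

```python
# Minimal sketch of the marker-substitution step: replace each
# "pmdocproc N" marker line in a target doc with the snippet
# registered under ID N, and copy every other line through.

def expand(target_lines, snippets):
    """snippets maps a chunk ID like '13' to a list of code lines."""
    out = []
    for line in target_lines:
        words = line.split()
        if len(words) == 2 and words[0] == 'pmdocproc':
            out.extend(snippets[words[1]])   # splice the snippet in
        else:
            out.append(line)                 # ordinary line: copy as-is
    return out

doc = ["Intro text", "pmdocproc 13", "Closing text"]
print(expand(doc, {'13': ["messenger = Messenger()"]}))
# -> ['Intro text', 'messenger = Messenger()', 'Closing text']
```

      Running this once per language, with that language's snippet
      table, is what produces the cross-product of docs and languages.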

      Each program has little snippet-markers like:

        /* pmdocproc_start 13 c */
        code ( goes, here );
        /* pmdocproc_end 13 */

      to tell the processor where to get the snippet from, and what
      language it's in.

      The markers in the code are always comments -- written however
      that language writes its comments -- and must be on a line by
      themselves.
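      The extraction side can be sketched the same way.  Again the
      names here are invented for illustration (the real processor
      also records file names, languages, and line numbers), but the
      scan-and-stack idea is the same, including support for nested
      chunks:

```python
# Sketch of the extraction step: collect the lines between each
# start marker and its matching end marker.  Markers are comments
# in the host language and sit on lines of their own, so we only
# look at whole lines.  A stack lets chunks nest.

def extract_snippets(lines):
    """Return {chunk_id: [lines]} for every marked snippet."""
    snippets = {}
    stack = []                                # chunks still open
    for line in lines:
        words = line.split()
        if 'pmdocproc_start' in line:
            stack.append((words[2], []))      # (id, collected lines)
        elif 'pmdocproc_end' in line:
            chunk_id, body = stack.pop()
            snippets[chunk_id] = body
        else:
            for _, body in stack:             # a line belongs to every open chunk
                body.append(line)
    return snippets

source = [
    "/* pmdocproc_start 13 c */",
    "code ( goes, here );",
    "/* pmdocproc_end 13 */",
]
print(extract_snippets(source))   # -> {'13': ['code ( goes, here );']}
```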
    }





    2. Why it works that way
    --------------------------------
    {
      We get two cool benefits out of this:

        * You can see the whole doc tree in your favorite language,
          and compare languages.

        * The code snippets will never go out of date.  If they
          quit working, the docs don't get built.
    }



    3. Where I am Now
    ------------------------------
    {
      * the python version of the doc-maker is working

      * I know how to integrate with cmake, and am doing that now.

      * I'll be away next week, but I would like to check stuff in
        shortly after returning.
    }



    4. Feedback I would like
    ---------------------------------
    {
      Anything you are inspired to volunteer, about any aspect of this.
      I am attaching the Python doc maker below just in case anyone wants
      to look at it.   ( I was so uncomfortable with Python that I prototyped
      the project in C, so I don't know how *pythonic* my code is.... )

      Thanks in advance!

      And please send any comments to the list.....
    }



    5. The Python doc processor
    ----------------------------------
    {
      #! /usr/bin/python

      import os
      import subprocess
      import shutil





      #====================================================
      #  First, run all the tests for each language.
      #  We will not create any documents if any of
      #  these tests fail.
      #====================================================
      def run_examples(languages):
          saved_dir = os.getcwd()

          for language in languages:
              test_dir = './doc_examples/' + language
              os.chdir ( test_dir )
              subprocess.check_call ( "./run_all" )
              print "---------------------------------------------"
              print "Tests in" , test_dir , " were successful."
              print "---------------------------------------------"
              os.chdir ( saved_dir )

          print "\n================================================="
          print "  All language example tests were successful."
          print "=================================================\n"





      #====================================================
      # Make new output dirs for each language.
      # These will hold final docs.
      #====================================================
      def make_output_dirs ( output_dir, html_dir, languages ):
          if os.path.exists ( output_dir ):
              shutil.rmtree ( output_dir )
          os.mkdir ( output_dir )

          if os.path.exists ( html_dir ):
              shutil.rmtree ( html_dir )
          os.mkdir ( html_dir )

          for language in languages:
              os.mkdir ( output_dir + '/' + language )
              os.mkdir ( html_dir   + '/' + language )





      #====================================================
      #  For each example program name, there
      #  should be an instance of it in each
      #  example/language directory.
      #  I.e., example program "foo" should exist as
      #     doc_examples/c/foo.c
      #     doc_examples/rb/foo.rb
      #     doc_examples/py/foo.py
      #====================================================
      def make_example_file_names ( example_dir, languages, example_names,
                                    example_file_names ):
          for language in languages:
              for example in example_names:
                 example_file_name = example_dir + '/' + language + '/' + example + '.' + language
                 print "example file: " , example_file_name
                 example_file_names.append ( example_file_name )





      def load_file(file_name):
          with open(file_name) as f:
              content = f.readlines()
              return content





      #===========================================================
      # Load all the example and target files into memory.
      #===========================================================
      def load_files ( example_file_names, target_files, example_files,
                       targets_dir, target_file_names, markdown_suffix ):
          for file_name in target_file_names:
              target_file_name = targets_dir + '/' + file_name + '.' + markdown_suffix
              target_files.append ( load_file ( target_file_name ) )

          for example_file_name in example_file_names:
              example_files.append ( load_file ( example_file_name ) )





      #==================================================
      #  Extract a text-chunk from the given file.
      #  We have previously found the starting and
      #  ending line numbers.
      #==================================================
      def extract_chunk ( chunks, file ):
          # The "Markdown" markup language recognizes a line of
          # text as code if it is indented by at least 4 spaces.
          # All of my text-chunks are code, so precede each one
          # with 8 spaces.
          # I will prepend these extra spaces onto each line of the
          # chunk as I suck it in from the file and store it here.

          code_indent = '        '
          #             -12345678-

          first_line = chunks[3]
          last_line  = chunks[4] - 1
          chunk_lines = file[first_line:last_line]
          final_chunk_lines = []
          for line in chunk_lines:
              if -1 < line.find ( 'pmdocproc_start' ):
                  continue
              elif -1 < line.find ( 'pmdocproc_end' ):
                  continue
              else:
                  # This is a normal line.  Prepend the code prefix.
                  final_chunk_lines.append ( code_indent + line )
          return final_chunk_lines





      #=========================================================
      #  Find the starting and ending lines of each text-chunk
      # in this file.  Chunks can be nested.
      #=========================================================
      def locate_chunks(file_name, file, chunks):
          # chunks_stack is a temporary place to put chunks
          # that are under construction.  It's a stack so that chunks
          # can be nested.  Once they are fully constructed, they
          # go to the 'chunks' list.
          chunks_stack = []
          i = 0
          for line in file:
             i += 1
             #-------  The start of a text chunk ----------------
             if -1 < line.find ( 'pmdocproc_start' ):
                 # format of a chunk start marker:
                 # COMMENT_SYMBOL pmdocproc_start  ID  LANGUAGE
                 words = line.split()

                 # Here is what a chunk looks like:
                 #   ( file_name,
                 #     chunk_id,
                 #     language,
                 #     start_line,
                 #     end_line,
                 #     lines   # this is an array of all the chunk's file-lines.
                 #   )
                 # At this point, I know all but the ending line number,
                 # so push what I know on the stack.
                 chunk_so_far = ( file_name, words[2] , words[3] , i )
                 chunks_stack.append ( chunk_so_far )

             #-------  The end of a text chunk ----------------
             if -1 < line.find ( 'pmdocproc_end' ):
                 words = line.split()
                 chunk_start = chunks_stack.pop()

                 # The ID that I get off the top of the stack had
                 # better agree with the ID that I see in the chunk-end
                 # statement from this line.
                 end_id = words[2]
                 start_id = chunk_start[1]

                 if end_id != start_id:
                     print "mismatched IDs:", start_id, end_id
                     os._exit(1)

                 # Grab the lines of the chunk and append them.
                 complete_chunk = chunk_start + ( i , )
                 chunk_lines = extract_chunk ( complete_chunk, file )
                 really_complete_chunk = complete_chunk + ( chunk_lines, )
                 chunks.append ( really_complete_chunk )





      def get_chunk ( chunk_id, language, chunks ):
          for chunk in chunks:
              if chunk_id == chunk[1] and language == chunk[2] :
                return chunk





      #==================================================================
      # Read through each target file.  Find in it all the
      # markers that tell us to insert a chunk.  Make a list
      # of lines that is the file, with all requested chunks inserted.
      #==================================================================
      def place_chunks ( target_file_names, target_dir, markdown_suffix,
                         target_files, chunks, result_files, languages ):
          for language in languages:
              i = 0
              for target_file in target_files:
                  result_file_lines = []
                  target_file_name = target_dir + '/' + \
                      target_file_names[i] + '.' + markdown_suffix
                  print "placing chunks in target file: ", language, target_file_name
                  i = i + 1
                  for line in target_file:
                      if -1 >= line.find ( 'pmdocproc' ):
                          # This is a normal line --- just copy it to result.
                          result_file_lines.append ( line )
                      else:
                          # This line is a chunk-specifier
                          # find and spit out the chunk.
                          words = line.split()
                          chunk = get_chunk ( words[1], language, chunks )
                          result_file_lines.extend ( chunk[5] )
                  result_file = ( target_file_name, language, result_file_lines )
                  result_files.append ( result_file )





      #========================================================
      # Make the result-file name from a given target file
      # and language.
      #========================================================
      def make_result_name ( target_name, language, output_base_dir ):

          # From a file name like /s/b/c/foo.bar ,
          # get just the  "foo.bar"
          words = target_name.split('/')
          n_words = len(words)
          if n_words <= 1:
              tail_name = target_name
          else:
              tail_name = words[n_words-1]

          return output_base_dir + '/' + language + '/' + tail_name





      #===========================================================
      # Write all the result files to their final destinations.
      #===========================================================
      def store_result_files ( result_files, output_base_dir, result_file_names ):
          for result_file in result_files:
              target_file_name = result_file[0]
              language         = result_file[1]
              result_data      = result_file[2]
              result_name = make_result_name ( target_file_name, language, output_base_dir )
              result_file_names.append ( result_name )
              print "output file: ", result_name
              f = open ( result_name, 'w' )
              for line in result_data:
                  f.write ( line )
              f.close()




      #=============================================
      # Debugging print for chunks.
      #=============================================
      def print_chunks ( chunks ):
          for chunk in chunks:
              print "==================================="
              print "chunk ", chunk[1], "language", chunk[2], "from file", chunk[0]
              print "lines:", chunk[3], "to", chunk[4]
              print "lines begin ----------------------------------"
              for line in chunk[5]:
                print line
              print "lines end ----------------------------------"





      #========================================================
      # From each markdown file, make the corresponding final
      # result file, in HTML format.
      #========================================================
      def make_html(target_file_names, html_dir, output_dir, targets_dir,
                    markdown_suffix, html_suffix, languages):
          for file_name in target_file_names:
              for language in languages:
                  markdown_file_name = output_dir + '/' +  \
                                       language   + '/' +  \
                                       file_name  + '.' + markdown_suffix

                  html_file_name = html_dir   + '/' + \
                                   language   + '/' + \
                                   file_name  + '.' + html_suffix

                  return_code = subprocess.call ( "pandoc -f markdown -t html " + markdown_file_name + ' > ' + html_file_name, shell=True )
                  if return_code != 0:
                      print "pandoc error! on file: ", markdown_file_name
                      os._exit(1)
                  print "HTML file ", html_file_name, "created."

          # Make the index file as a special case. -----------------
          markdown_index_file_name = targets_dir + '/' + "index" + '.' + markdown_suffix
          html_index_file_name     = html_dir    + '/' + "index" + '.' + html_suffix

          return_code = subprocess.call ( "pandoc -f markdown -t html " + markdown_index_file_name + ' > ' + html_index_file_name, shell=True )

          if return_code != 0:
              print "pandoc error! on file: ", markdown_index_file_name
              os._exit(1)





      # ------------------------------------------------------
      #               M A I N
      # ------------------------------------------------------

      #------------------- data ---------------------------

      example_dir = './doc_examples'
      output_dir  = './output_dir'
      html_dir    = "./html"
      targets_dir = "./targets"

      markdown_suffix = 'md'
      html_suffix     = 'html'

      languages = [ 'c',
                    'py',
                    'rb'
                  ]


      #=================================================================
      # I write each of these files once.  The one I write
      # will reside, e.g., in
      #    ./targets/sending_and_receiving.md
      # Each of these target files has markers in it that show where
      # to stick the text chunks, e.g.
      #    pmdocproc 3
      # means 'replace this line with chunk 3'.
      # But that chunk will exist in several different versions:
      # one for each language, i.e. C, Python, and Ruby.
      # So we end up with a separate copy of each 'target' file
      # for each language.
      #=================================================================
      target_file_names = [ "sending_and_receiving",
                            "blocking_and_nonblocking_io",
                            "timeouts"
                          ]

      #=================================================================
      # These are the example code files from which text-chunks
      # are extracted.
      # There will be one of these for each language.
      # e.g. "doc_1_send" means that there will exist:
      #     c/doc_1_send.c
      #     py/doc_1_send.py
      #     rb/doc_1_send.rb
      # These are runnable files -- in fact if they do not run
      # successfully, the documents will not get made.
      # These files contain the code snippets that get placed into
      # the target documents to produce the final language-specialized
      # markdown files.
      #=================================================================
      example_names = [ 'doc_1_send',
                        'doc_1_recv'
                      ]


      #=================================================================
      # A bunch of lists that get filled in during processing.
      #=================================================================
      chunks             = []
      example_file_names = []
      example_files      = []
      result_file_names  = []
      result_files       = []
      target_files       = []



      #------------------- code ---------------------------

      run_examples(languages)

      make_output_dirs ( output_dir, html_dir, languages )

      make_example_file_names ( example_dir, languages, example_names,
                                example_file_names )

      load_files ( example_file_names, target_files, example_files,
                   targets_dir, target_file_names, markdown_suffix )


      j = 0
      for file in example_files:
          print "Locating chunks in file:" , example_file_names[j]
          locate_chunks ( example_file_names[j], file, chunks )
          j += 1



      place_chunks ( target_file_names, targets_dir, markdown_suffix,
                     target_files, chunks, result_files, languages )

      store_result_files ( result_files, output_dir, result_file_names )

      make_html ( target_file_names, html_dir, output_dir, targets_dir,
                  markdown_suffix, html_suffix, languages )



    }
