Hello,

I think this is a bug.  I have about 1000 URLs that I need to cache
for a web application.  Since there are a thousand URLs, the
possibility of invalid URLs makes using a ManagedResourceStore an
unreliable solution.  With a ManagedResourceStore, the URLs are cached
one by one until a 404 error is encountered, and then caching halts!

The documentation states that all the URLs in the manifest file must
be valid, which is unfortunate, because an invalid URL at slot 252
means that 251 of the 1000 URLs are cached successfully while the
remainder are just tossed out by Google Gears!

I have been exploring using a ResourceStore.  I was looking at the
sample application 
http://code.google.com/apis/gears/samples/hello_world_resourcestore.html
and noticed that if I removed one of the files to force a 404 error,
the system continued to cache the remaining files!  This is exactly
what I needed!

So I took my manifest file, loaded all of the files into an array, and
used the ResourceStore's explicit capture method to pass in the array
of URLs.

I get anywhere from 2 to 30 files cached, and then the system --
without warning -- just stops caching!  Even if all of the URLs return
valid 200 responses, it just suddenly stops!

This to me seems like a bug in the ResourceStore class/object, and
I've found no documentation suggesting that ResourceStores have a
timeout.  Any suggestions?  Is this a bug that needs to be reported to
the Gears development team?  Are there any workarounds to ensure that
the system continues to cache valid URLs even if one of them is
missing?

Thanks,
James Mortensen

// I take the manifest file containing 998 URLs and use a for loop to
// load the contents into the filesToCapture array (from the sample
// app).
var filesToCapture = [];  // filled in by loadManifest below

function loadManifest() {
    var url = "large_manifest.json";

    var req = new XMLHttpRequest();
    req.onreadystatechange = function() {
        if(req.readyState == 4 && req.status == 200) {
            alert("loaded manifest file...");

            // strip out the unneeded manifest metadata ahead of the
            // JSON payload (the prefix happens to be 16 characters)
            var response = req.responseText.substring(16);
            eval("response = " + response);
            alert(response.entries[0].url);

            // take the entries[i].url properties and put them in a
            // plain array, just like filesToCapture in the sample app
            for(var i=0; i < response.entries.length; i++) {
                filesToCapture[i] = response.entries[i].url;
            }

            alert("length = " + filesToCapture.length);
        }
    };

    req.open("GET",url,true);
    req.send(null);
}
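A side note on that substring(16) call: the length of the metadata
prefix is hard-coded, so if it ever changes, the eval will break.  A
more defensive sketch (assuming the JSON payload is an object literal
starting at the first '{' in the response, and that the prefix itself
contains no '{') would be:

```javascript
// Strip any metadata prefix by scanning for the first '{' rather than
// hard-coding its length.  Assumes the manifest body is a JSON object
// and the prefix contains no '{' character.
function stripManifestPrefix(text) {
  var start = text.indexOf('{');
  return start >= 0 ? text.substring(start) : text;
}
```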

// I tried passing the entire array into the capture function, as well
// as using a for loop to call capture N times, where N is the number
// of files in the array.  I only cached 1 or 2 files with the for
// loop, but about 10-30 files by passing in the entire array, as
// demonstrated in this function.
function capture() {
  var store = localServer.openStore(STORE_NAME);
  if (!store) {
    setError('Please create a store for the captured resources');
    return;
  }

  clearStatus();
  addStatus('Capturing...');

  count = 0;  // global counter used by captureCallback

  // Capture every URL in the array with a single call.  (The
  // commented-out loop was the one-capture-call-per-URL attempt.)
  // for (count = 0; count < filesToCapture.length; count++) {
    store.capture(filesToCapture, captureCallback);
  // }
}

// This is the callback function that is called as each file is
// captured.  I added the count variable to the status so that I could
// enumerate each attempt.  I get 15-30 files cached before the system
// just stops doing anything, with no warnings or errors!
function captureCallback(url, success, captureId) {
  addStatus(count + " " + url + ' capture ' + (success ? 'succeeded' : 'failed'));
  count++;
}
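One workaround I'm considering, in case anyone else hits this:
serialize the captures myself, starting each capture only from inside
the previous capture's callback, so that whatever is killing the batch
can't take the rest of the queue down with it.  This is only a sketch,
assuming a Gears-style capture(url, callback) method that always fires
its callback:

```javascript
// Capture URLs one at a time, kicking off the next capture only after
// the previous callback fires.  `store` is anything exposing a
// Gears-style capture(url, callback) method; failed URLs are recorded
// and skipped rather than halting the queue.
function captureSequentially(store, urls, onDone) {
  var results = [];
  function next(i) {
    if (i >= urls.length) {
      onDone(results);  // every URL has been attempted
      return;
    }
    store.capture(urls[i], function(url, success, captureId) {
      results.push({url: url, success: success});
      next(i + 1);      // keep going even after a failure
    });
  }
  next(0);
}
```

With a real store this would replace the single store.capture(filesToCapture, captureCallback) call, at the cost of one capture round-trip per URL.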
