Actually that's exactly what I did.  We cache the JSON response, not the
actual backend content.

Pushed out to production on hi5modules.com.  Seems like it's helping...
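
Roughly, the idea looks like this (a toy in-memory sketch only -- the real
mechanism is the browser's HTTP cache, and all names here are made up): cache
the JSON envelope the proxy returns, keyed by request URL, not the backend
content itself.

```javascript
// Toy sketch: cache the proxied JSON response string (not the raw
// backend content), keyed by request URL, with a fixed time-to-live.
// Illustrative only; the actual caching happens in the browser.
function JsonResponseCache(ttlMillis) {
  this.ttl = ttlMillis;
  this.entries = {};
}

// Return the cached JSON string if still fresh, otherwise null.
JsonResponseCache.prototype.get = function (url, now) {
  var entry = this.entries[url];
  return (entry && now - entry.storedAt < this.ttl) ? entry.json : null;
};

// Store the JSON string along with the time it was cached.
JsonResponseCache.prototype.put = function (url, json, now) {
  this.entries[url] = { json: json, storedAt: now };
};
```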

On 3/29/08 2:54 PM, "Kevin Brown" <[EMAIL PROTECTED]> wrote:

> I'd actually recommend sending the same request that we send now, only using
> GET instead of POST, and putting the parameters in the query string instead
> of the POST body.  Since there are no custom headers or signing involved,
> packing it into the query string is no big deal.
> 
> This ensures that the output is still always JSON, which ensures that the
> open proxy can't be used for malicious things.  Ideally the "open" proxy
> should only be used for static content that can be safely filtered (images,
> for example).
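
A minimal sketch of the packing described above, moving what would otherwise
be POST-body parameters into the query string of a GET.  The endpoint and
parameter names here are illustrative assumptions, not Shindig's actual proxy
interface:

```javascript
// Pack parameters that would normally travel in a POST body into the
// query string of a GET, so the browser can cache the response.
// Endpoint and parameter names are illustrative assumptions.
function buildProxyGetUrl(proxyBase, params) {
  var pairs = [];
  for (var key in params) {
    if (params.hasOwnProperty(key)) {
      pairs.push(encodeURIComponent(key) + '=' +
                 encodeURIComponent(params[key]));
    }
  }
  // The same parameters always yield the same URL, letting repeat
  // requests be served straight from the browser cache.
  return proxyBase + '?' + pairs.join('&');
}
```

Since nothing is signed and no custom headers are involved, the only practical
constraint is URL length.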
> 
> On Sat, Mar 29, 2008 at 2:17 PM, Paul Lindner <[EMAIL PROTECTED]> wrote:
> 
>> On 3/29/08 2:12 PM, "Brian Eaton" <[EMAIL PROTECTED]> wrote:
>> 
>>> On Sat, Mar 29, 2008 at 1:45 PM, Paul Lindner <[EMAIL PROTECTED]> wrote:
>>>> Here's a patch I cranked out to help us out for the moment.  It adds a
>>>> REFRESH_INTERVAL param for makeRequest().  If present, we use a
>>>> cacheable GET request instead of a POST request.
>>> 
>>> No patch came through for me for some reason.  Maybe you could attach
>>> it to a jira issue instead?
>>> 
>>> Why require gadget code changes to take advantage of the cache?  Are
>>> there heuristics that could be used to enable this by default?  (For
>>> example, if the makeRequest parameters specify a 'GET' with no query
>>> parameters, that's a strong indication that the response should be
>>> cacheable.)
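
The heuristic described above could be as simple as this (sketch only; the
signature and names are made up, not the real makeRequest internals):

```javascript
// Heuristic sketch: a plain GET with no query string and no request
// body is a strong candidate for browser caching by default.
// Signature and names are illustrative, not Shindig's actual API.
function looksCacheable(method, url, postData) {
  return method === 'GET' && url.indexOf('?') === -1 && !postData;
}
```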
>> 
>> Right now none of the calls gadgets.io.makeRequest() makes are cacheable in
>> the browser.  This is very different from iGoogle/gmodules.com and is
>> causing us issues.
>> 
>> The patch ensures that content is fetched in such a way that the browser
>> will cache it.
>> 
>> I attached the patch to SHINDIG-162; however, this is only a stopgap and
>> there may be a much better way of achieving these goals.
>> 
>> 
> 
