Hi,

I'd like to do something similar to Julian, but rather than interpret 
JavaScript code with a robot, I'd like to record complete client 
browsing sessions for scriptable playback later...basically be able 
to record what a web client does, then be able to later put it on 
autopilot.  I need to automate surfing patterns, but I don't want to 
hire hundreds of people to point and click if it can be automated.  
My thinking is that if I can run through the browsing session 
manually, there must be a way to have the client go through the same 
session later.

I do not relish the idea of needing to completely understand the
ins and outs of every current and future web technology language, 
since they are changing all the time.  I shouldn't have to write my 
own web client that is smart enough to handle anything a web server 
can throw at me.  Why re-invent the wheel?  There should be a way to 
use existing web clients, which DO understand web technology 
languages, to "record" and "play back" surfing sessions...to make web 
browsers scriptable.  Essentially, the best way to "simply" (yeah) 
record a user's clicks is at the client level, namely by using the 
client itself.

I need to have complete control over whatever a web server can throw 
at the client, so the playback has to happen on the client side.  It 
needs to be able to play back anything that a user might do in a 
browsing session.  It needs to:

1) Record everything that happens, whether the user clicked a link, 
   filled out a form, responded to a JavaScript alert, interacted 
   with a Java applet, or anything else.
2) Be able to play back the session later, making smart decisions 
   dynamically if need be.  Now it's a "smarter" autopilot: one that 
   knows when a form changes location on a page, or how to complete 
   forms even when subtle HTML changes have occurred.
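To sketch what requirement 2 might mean in practice (the log format and the findTarget helper below are my own invention, just to illustrate, not any existing API): record each user action as a plain data record, then at playback time re-locate the target by its name or visible text instead of by its position on the page, so subtle HTML changes don't break the script.

```javascript
// Hypothetical session log: each recorded user action is one record.
var session = [
  { type: "click", target: { tag: "A",     name: "",      text: "Next page" } },
  { type: "fill",  target: { tag: "INPUT", name: "email" }, value: "ted@example.org" }
];

// "Smarter" playback: re-locate the element by name (falling back to
// visible text) rather than replaying a fixed page position.
function findTarget(pageElements, recorded) {
  for (var i = 0; i < pageElements.length; i++) {
    var el = pageElements[i];
    if (recorded.target.name && el.name === recorded.target.name) return el;
    if (recorded.target.text && el.text === recorded.target.text) return el;
  }
  return null; // not found: the page changed too much to recover
}
```

Even if the email field moves from the top of the form to the bottom, findTarget still resolves it by name, which is the kind of resilience a dumb pixel- or position-based replay can't give.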

Ideas I've considered exploring are:
1) Proxy schemes.  I don't like these.  I don't want to run 
   everything through a custom proxy server or proxy script.  
   That isn't seamless enough, and since it's not client-side, it 
   can't handle things like recording a user's Java interaction.  
   It could handle JavaScript interaction, but I don't want to 
   worry about writing or using a JavaScript interpreter.

2) The new DOM 2 could be key.  Netscape 6 has some support for the 
   new DOM 2 features.  I need something that will let me know the 
   state of every page object at all times, but I don't know how far 
   JavaScript and the DOM can take me.  I only need something that 
   works for me, so I don't care if it only works in Netscape or IE.
   I'm going to get the new O'Reilly Designing with JavaScript, 2nd 
   Edition, when it comes out in March.  See:
   http://www.oreilly.com/catalog/designjs2/
   JavaScript can be used to record to some extent, but it has its 
   limitations for now.  See what the Tango Group is doing with their
   JavaScript Shared Browser:
   http://www.npac.syr.edu/users/gcf/jssbmarch99/fullhtml.html

3) Netscape's new XUL support.  I'm thinking that perhaps a XUL
   application can know how a user is interacting with the DOM 
   (better than JavaScript?) and be programmed to record/play back
   browsing sessions.  XUL uses JavaScript, so will XUL have the 
   same type of security restriction, where you can't monitor 
   objects on other sites' pages?  As an example of how to use XUL, 
   see "http://www.webtechniques.com/news/2000/07/powers/".  In 
   listings 3 and 4, the WebTechniques author used XUL to customize 
   the browser application to show a page-loading progress 
   meter...really cool.  See this for more about XUL:
   http://www.mozilla.org/docs/codestock99/xul/xul_files/v3_document.htm

4) Customize a Java web browser like HotJava.  I don't know Java
   yet, but this solution seems like it would offer the most
   flexibility to record/play back interactions with Java applets.

5) Learn a Windows windowing library (OWL, MFC, Delphi, Tcl/Tk 
   for Windows, etc.), then customize the Mozilla browser (or its 
   SpiderMonkey JavaScript engine) to do what I want.  I'll do 
   this if it's the only solution.
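To make idea 2 a bit more concrete: under the DOM 2 event model, a capture-phase listener on the document sees every click before the page's own handlers do, which is exactly the hook a recorder needs.  Below is a minimal sketch; serializeEvent is my own hypothetical naming, not an existing API, and the addEventListener wiring (shown as a comment) would only run inside a DOM 2 browser:

```javascript
// Hypothetical helper: reduce a DOM event to a small, replayable record.
// Assumes the event target exposes tagName / name / id, as DOM 2 nodes do.
function serializeEvent(evt) {
  var t = evt.target || {};
  return {
    type: evt.type,          // e.g. "click", "submit"
    tag:  t.tagName || "",   // element type, e.g. "A" or "INPUT"
    name: t.name    || "",   // form-field name, if any
    id:   t.id      || ""    // element id, if any
  };
}

// Inside a DOM 2 browser, the recorder would be wired up like this:
//   var log = [];
//   document.addEventListener("click", function (evt) {
//     log.push(serializeEvent(evt));
//   }, true);   // true = capture phase: fires before the page's handlers
```

The same listener, registered for "submit", "change", and so on, would cover form fills; Java applet interaction would still be invisible to it, which is why idea 4 stays on the table.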

Does anybody have some ideas?  Thanks for your help.

--Ted
<[EMAIL PROTECTED]>

On 27 Nov 00 18:04, Julian Monteiro wrote:

> Hi,
> 
> I'm doing some stuff with LWP and I'm trying to make a robot that
> interprets JavaScript pages. I already looked at Mozilla's JavaScript
> interpreters:
> 
> http://www.mozilla.org/js/
> 
> They have SpiderMonkey (C) and Rhino (Java).
> 
> Has anyone already used them? Or do you know of some Perl modules
> to do the job?
> 
> Thanks for the help
> 
> 
> Julian
> <[EMAIL PROTECTED]>
> 
> 

---------------------------------------------------------------- 
    Ted Peterson                  |  IRE/NICAR 
      Webmaster                   |  http://www.ire.org
    (573) 882-2042                |  http://www.nicar.org
---------------------------------------------------------------- 
 "From then on, when anything went wrong with a computer, we 
  said it had bugs in it."  -- Grace Murray Hopper, on the
  removal of a 2-inch-long moth from an experimental computer
  at Harvard in 1945, quoted in Time 16 Apr 84.

 "Each grain of sand has an architecture, but a desert displays
  the structure of the wind."  -- Keith Waldrop, 1975
