Re: [CODE4LIB] httpRequest javascript.... grrr

2007-11-29 Thread pkeane

I'd highly recommend getting a good clear handle on the underlying
javascript workings before moving to a library like jQuery (which I am
quite fond of) especially when using XMLHTTPRequest.  If you don't,
mysterious problems may arise that are all the more difficult to debug
since you have the library between you and the executed javascript.

I find that the most common problems with XHR are quite often due to its
asynchronous behavior.  You cannot simply read the response right after
sending the request and expect it to be there, because your code has no way
of knowing when (or if) that response will arrive.  You instead need to
create a callback function that gets invoked when the response comes in
(much as you are accustomed to doing when setting an event handler).
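
To make that concrete, here is a minimal sketch of the difference (the
'./index.cgi' URL is just a placeholder):

// Broken: assumes the response is available right after send()
var xhr = new XMLHttpRequest();
xhr.open('GET', './index.cgi', true);
xhr.send(null);
alert(xhr.responseText);  // empty -- the response has not arrived yet

// Working: hand the response-handling code to the request as a callback
var handleResponse = function(text) { alert(text); };
var xhr2 = new XMLHttpRequest();
xhr2.open('GET', './index.cgi', true);
xhr2.onreadystatechange = function() {
    if (xhr2.readyState == 4 && xhr2.status == 200) {
        handleResponse(xhr2.responseText);  // fires only once the response is in
    }
};
xhr2.send(null);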

I am not 100% sure if that's the problem here, but I would try this:

before defining httpRequest.onreadystatechange, define:

callback_alert = function(msg) { alert(msg); };

then:

httpRequest.onreadystatechange = function() {
[...]
callback_alert( root_node.firstChild.data );
}

The problem is that your anonymous onreadystatechange function is
effectively compiled now but runs later, and it forms a closure: when it
is finally invoked, it remembers the environment in which it was created.
And at the time it was created, xmldoc did not yet exist.
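
As a quick aside, and independent of XHR, here is a tiny (hypothetical)
illustration of a closure remembering the environment it was created in:

var makeGreeter = function(name) {
    // the returned function closes over 'name'
    return function() { alert('Hello, ' + name); };
};
var greet = makeGreeter('code4lib');
greet();  // alerts 'Hello, code4lib' -- 'name' was remembered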

One thing I WOULD recommend is to study some of the libraries to see how
they construct their XHR code.

Here's my standard XHR:

Dase.ajax = function(url, my_func) {
    var xmlhttp = Dase.createXMLHttpRequest();
    xmlhttp.open('GET', url, true);
    // register the handler before sending so no state change is missed
    xmlhttp.onreadystatechange = function() {
        if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
            var returnStr = xmlhttp.responseText;
            if (my_func) {
                my_func(returnStr);
            }
        } else {
            // not done yet -- wait for the call to complete
        }
    };
    xmlhttp.send(null);
};

Note that I always pass in a (callback) function to do what needs doing
with the response text.
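
For example (the URL and callback here are just placeholders):

Dase.ajax('./index.cgi?cmd=add_tag&username=fkilgour', function(response) {
    // do whatever needs doing with the response text
    alert(response);
});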

I hope that helps-
Peter Keane
daseproject.org


On Thu, 29 Nov 2007, Eric Lease Morgan wrote:


Why doesn't my httpRequest Javascript function return unless I add an
alert? Grrr.

I am writing my first AJAX-y function called add_tag. This is how it
is supposed to work:

1. define a username
2. create an httpRequest object
3. define what is supposed to happen when it gets a response
4. open a connection to the server
5. send the request

When the response is complete, it simply echoes the username. I know
the remote CGI script works because the following URL works correctly:

http://mylibrary.library.nd.edu/demos/tagging/?cmd=add_tag&username=fkilgour

My Javascript is below, and it works IF I retain the alert( 'Grrr!' )
line. Once I take the alert out of the picture I get a Javascript error:
xmldoc has no properties. Here's my code:


function add_tag() {

 // define username
 var username  = 'fkilgour';

 // create an httpRequest
 var httpRequest;
 if ( window.XMLHttpRequest ) { httpRequest = new XMLHttpRequest(); }
 else if ( window.ActiveXObject ) { httpRequest = new ActiveXObject
( 'Microsoft.XMLHTTP' ); }

 // give the httpRequest some characteristics and send it off
 httpRequest.onreadystatechange = function() {

  if ( httpRequest.readyState == 4 ) {

   var xmldoc = httpRequest.responseXML;
   var root_node = xmldoc.getElementsByTagName( 'root' ).item( 0 );
   alert ( root_node.firstChild.data );

  }

 };

 httpRequest.open( 'GET', './index.cgi?cmd=add_tag&username=' +
username, true );
 httpRequest.send( '' );
 alert ( 'Grrr!' );

}


What am I doing wrong? Why do I seem to need a pause at the end of my
add_tag function? I know the anonymous function -- function() -- is
getting executed because I can insert other httpRequest.readyState
checks into the function and they return. Grrr.

--
Eric Lease Morgan
University Libraries of Notre Dame

(574) 631-8604


Re: [CODE4LIB] httpRequest javascript.... grrr [resolved]

2007-11-29 Thread pkeane

Indeed, my proposed fix was incorrect -- an alert does NOT need to be
passed into the function as a callback (it's always globally available),
and since the argument to the alert is the response text, that's A-OK.  If
you want to insert that response into the page (and not just alert it),
you WOULD need to create a callback function which makes reference to the
page element 'target' (thus serving as a closure).
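
A minimal sketch of what that might look like, reusing the Dase.ajax
function from my earlier message (the 'target' element id and the URL are
hypothetical):

var target = document.getElementById('target');
// the callback closes over 'target' and can use it whenever the
// response finally arrives
var insertResponse = function(text) {
    target.innerHTML = text;
};
Dase.ajax('./index.cgi?cmd=add_tag&username=fkilgour', insertResponse);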

One thing about XHR -- you have all four HTTP verbs at your disposal: GET,
POST, PUT, DELETE.  So you may wish to use one of the non-safe (i.e.
state-changing) methods for your XHR call (probably POST in the case of
adding a tag) to make things more RESTful.  XHR is actually a very good
way to hijack links that perform state-changing operations (and would
otherwise simply issue a GET).  Then when your application starts exposing
web services, you'll be that much more aligned with RESTful principles
(I'm convinced that's v. important, although plenty of successful services
expose unsafe GETs).  Just a thought...
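
For example, a rough sketch of the add-tag call done as a POST (the
endpoint and parameter names are just placeholders):

var xhr = new XMLHttpRequest();
xhr.open('POST', './index.cgi', true);
// parameters travel in the request body rather than the query string
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.onreadystatechange = function() {
    if (xhr.readyState == 4 && xhr.status == 200) {
        alert(xhr.responseText);
    }
};
xhr.send('cmd=add_tag&username=fkilgour');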

best-
Peter Keane


On Thu, 29 Nov 2007, Eric Lease Morgan wrote:



On Nov 29, 2007, at 9:21 AM, Eric Lease Morgan wrote:


Why doesn't my httpRequest Javascript function return unless I add
an alert? Grrr.



I have resolved my problem, but I'm not exactly sure how.

First of all, my httpRequest (XMLHttpRequest) code was just fine; I
made no significant changes to it. Instead, I separated my form input/
validation routine from the httpRequest functionality and the problem
disappeared. Don't ask me why. I don't know.  It makes for better
modular programming, though.  javascript--

BTW, I appreciate the links to various Javascript libraries, but
since I am really only starting out in this regard I think I need to
get my hands dirtier before I lean on someone else's code.

Finally, for posterity's sake, I have included my resulting code in
an attachment to this message. I don't know whether or not the list
will accept attachments.

--
Eric Lease Morgan
University Libraries of Notre Dame

(574) 631-8604



[CODE4LIB] Distributed Models & the Library (was: Re: [CODE4LIB] RFC 5005 ATOM extension and OAI)

2007-10-25 Thread pkeane

Hi Jakob-

Yes, I think you are correct that a distributed archiving model is a bit
much for libraries to even consider now, but I do think there are useful
insights to be gained here.

As it stands now, linux developers using Git can carry around the entire
change history of the linux kernel (well, I think they just included the
2.6 kernel when they moved to Git) on their laptops, make changes, create
patches, etc., and then make that available to others.  Undoubtedly, full
change history is a bit much for the library to think about, but why
not, for instance, an entire library catalog?  If I could check out the
library catalog onto my computer & use whatever tools I wished to search,
organize, annotate, etc., then perhaps mix in data (say, holdings data
from other libraries that are near me) OR even create the sort of
relationships between records that the Open Library folks are talking about
(http://www.hyperorg.com/blogger/mtarchive/berkman_lunch_aaron_swartz_on.html)
and then share that added data, we would have quite a powerful distributed
development model.  It may seem a bit far-fetched, but I think that some
of the pieces (or at least a better understanding of how this might all
work) are beginning to take shape.

-Peter

On Thu, 25 Oct 2007, Jakob Voss wrote:


Peter wrote:


Also, re: blog mirroring, I highly recommend the current discussions
floating around the blogosphere regarding distributed source control (Git,
Mercurial, etc.).  It's a fundamental paradigm shift from centralized
control to distributed control that points the way toward the future of
libraries as they (we) become less and less the gatekeepers for the
'stuff' (be it digital or physical) and more and more the facilitators of
the bidirectional replication that assures ubiquitous access and
long-term preservation.  The library becomes (actually, it has already
happened) simply a node on a network of trust and should act accordingly.

See the thoroughly entertaining/thought-provoking Google tech talk by
Linus Torvalds on Git:  http://www.youtube.com/watch?v=4XpnKHJAok8


Thanks for pointing to this interesting discussion. This goes even
further than the current paradigm shift from the old model
(author - publisher - distributor - reader) to a world of
user-generated content and collaboration! I would be glad if we finally
got to model and archive Weblogs and Wikis - modelling and archiving the
whole process of content copying, changing, remixing and republication is
far beyond libraries' capabilities!

Greetings,
Jakob

--
Jakob Voß [EMAIL PROTECTED], skype: nichtich
Verbundzentrale des GBV (VZG) / Common Library Network
Platz der Goettinger Sieben 1, 37073 Göttingen, Germany
+49 (0)551 39-10242, http://www.gbv.de


Re: [CODE4LIB] Distributed Models & the Library (was: Re: [CODE4LIB] RFC 5005 ATOM extension and OAI)

2007-10-25 Thread pkeane

Very interesting!  I will check it out

-Peter

On Thu, 25 Oct 2007, Jason Stirnaman wrote:


not, for instance, an entire library catalog?  If I could check out the
library catalog onto my computer & use whatever tools I wished to search,


Peter,

You might be interested in Art Rhyno's experiment.  Here's Jon Udell's summary:

Art Rhyno's science project
Art Rhyno's title is Systems Librarian but he should consider adding Mad
Scientist to his business card because he is full of wild and crazy and, to
me at least, brilliant ideas. Last year, when I was a judge for the Talis
'Mashing up the Library' competition, one of my favorite entries was this one
from Art. The project mirrors a library catalog to the desktop and integrates
it with desktop search. The searcher in this case is Google Desktop, but could
be another, and the integration is accomplished by exposing the catalog as a
set of Web Folders, which Art correctly describes as 'Microsoft's in-built and
oft-overlooked WebDAV option.'

http://blog.jonudell.net/2007/03/16/art-rhynos-science-project/

Jason
--

Jason Stirnaman
OME/Biomedical  Digital Projects Librarian
A.R. Dykes Library
The University of Kansas Medical Center
Kansas City, Kansas
Work: 913-588-7319
Email: [EMAIL PROTECTED]






Re: [CODE4LIB] RFC 5005 ATOM extension and OAI

2007-10-24 Thread pkeane


This conversation about Atom is, I think, really an important one to have.
As well designed and thought out as protocols & standards such as OAI-PMH,
METS (and the budding OAI-ORE spec) are, they don't have that 'viral
technology' attribute of utter simplicity.  Sure there are trade-offs, but
the tool support and interoperability on a much larger scale that Atom
could provide cannot be denied.  I, too, have pondered the possibility of
Atom (& AtomPub for writing back) as a simpler replacement for all sorts
of similar technologies (METS, OAI-PMH, WebDAV, etc.) --
http://efoundations.typepad.com/efoundations/2007/07/app-moves-to-pr.html.
The simple fact that Google has standardized all of its web services on
GData (a flavor of Atom) cannot be ignored.

I have had some very interesting discussions over on atom-syntax about
thoroughly integrating Atom as a standard piece of infrastructure in a
large digital library project here at UT Austin (daseproject.org), and
while I don't necessarily think it provides a whole lot of benefit as an
internal data transfer mechanism, I see numerous advantages to
standardizing on Atom for any number of outward-facing
services/end-points. I think it would be sad if Atom and AtomPub were seen
only as technologies used by and for blogs/blogging.

Also, re: blog mirroring, I highly recommend the current discussions
floating around the blogosphere regarding distributed source control (Git,
Mercurial, etc.).  It's a fundamental paradigm shift from centralized
control to distributed control that points the way toward the future of
libraries as they (we) become less and less the gatekeepers for the
'stuff' (be it digital or physical) and more and more the facilitators of
the bidirectional replication that assures ubiquitous access and
long-term preservation.  The library becomes (actually, it has already
happened) simply a node on a network of trust and should act accordingly.

See the thoroughly entertaining/thought-provoking Google tech talk by
Linus Torvalds on Git:  http://www.youtube.com/watch?v=4XpnKHJAok8

-peter keane
daseproject.org

On Tue, 23 Oct 2007, Jakob Voss wrote:


Hi Ed,

You wrote:


I completely agree.  When developing software it's really important to
focus on the cleanest/clearest solution, rather than getting bogged
down in edge cases and the comments from naysayers. I hope that my
response didn't come across that way.


:-)


A couple of follow-on questions for you:

In your vision for this software are you expecting that content
providers would have to implement RFC 5005 for your archiving system
to work?


Probably yes - at least for older entries. New posts can also be
collected with the default feeds. Instead of working out exceptions and
special solutions for how to get blog archives by other methods, you
should provide RFC 5005 plugins for common blog software like Wordpress
and advertise their use ('We are sorry - the blog that you asked to
archive does not support RFC 5005, so we can only archive new postings.
Please ask its provider to implement archived feeds so we can archive the
postings before {TIMESTAMP}. More information and plugins for RFC 5005
can be found {HERE}. Thank you!').
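
For anyone curious about the mechanics: RFC 5005 archived feeds chain
together via 'prev-archive' links, so a harvester simply follows those
links back through time. A rough sketch in browser Javascript (same-origin
fetching assumed, and the feed parsing is deliberately simplified):

// Follow RFC 5005 'prev-archive' links back through a feed's history,
// collecting each archive document along the way.
var harvestArchives = function(url, collected, done) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    xhr.onreadystatechange = function() {
        if (xhr.readyState == 4 && xhr.status == 200) {
            var doc = xhr.responseXML;
            collected.push(doc);
            // look for <link rel="prev-archive" href="..."/>
            var links = doc.getElementsByTagName('link');
            for (var i = 0; i < links.length; i++) {
                if (links[i].getAttribute('rel') == 'prev-archive') {
                    harvestArchives(links[i].getAttribute('href'),
                                    collected, done);
                    return;
                }
            }
            done(collected);  // no prev-archive link: oldest archive reached
        }
    };
    xhr.send(null);
};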


Are you considering archiving media files associated with a blog entry
(images, sound, video, etc.)?


Well, it depends. There are hundreds of ways to associate media files -
I doubt that you can easily archive YouTube and SlideShare widgets
etc., but images included with <img src="..."/> should be doable. However,
I prefer iterative development - once basic archiving works, you can
start to think about media files. By the way, I would place more value on
the comments - which are also additional and non-trivial to archive.

To begin with, a WordPress plugin is surely the right step. Up to now
RFC 5005 is so new that no one has implemented it yet, although it's not
complicated.

Greetings,
Jakob

--
Jakob Voß [EMAIL PROTECTED], skype: nichtich
Verbundzentrale des GBV (VZG) / Common Library Network
Platz der Goettinger Sieben 1, 37073 Göttingen, Germany
+49 (0)551 39-10242, http://www.gbv.de