What you have described is basic deep linking, but it does not solve the problem I have been trying to articulate. Regardless of what goes on on the server, if you enter some path info after the .com part of the URL, the server thinks it is getting its data from that location (foo.com/bar/, for example).
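For reference, the hash-based workaround discussed below looks roughly like this - a minimal sketch, not taken from any particular framework (the `showSection` router function is a hypothetical stand-in for your app's own navigation code):

```javascript
// Turn the URL fragment into an app "route" without touching the server.
// parseHash("#/section/id") -> ["section", "id"]
function parseHash(hash) {
  return hash.replace(/^#\/?/, "").split("/").filter(function (s) {
    return s.length > 0;
  });
}

// In the browser, poll for hash changes (current browsers have no event
// for this, so polling is the usual trick).
if (typeof window !== "undefined") {
  var lastHash = window.location.hash;
  setInterval(function () {
    if (window.location.hash !== lastHash) {
      lastHash = window.location.hash;
      // Hand the route to the app - no page reload occurs, because
      // only the fragment changed.
      showSection(parseHash(lastHash)); // showSection: hypothetical router
    }
  }, 100);
}
```

Changing anything before the `#` triggers a full reload; changing only the fragment does not, which is the whole reason the hack works at all.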
The problem occurs after the initial page has loaded in whatever app you are running on the client. Say, for example, that you want to move the user from foo.com/bar/ to foo.com/foo - there is no way to update the browser's URL without causing a full page reload. This is why we rely on the hash portion of the URL: that part does not cause the page to reload, since it is generally meant to refer to a named anchor within the HTML. This fact makes the current method of deep linking and history manipulation a huge hack, BTW - which IMO means the browser vendors ought to figure out some kind of extension to make history management and deep linking more robust for client runtimes (a simple JavaScript API would work - maybe something plugins could interact with directly as well). We could just move the user within the app and update the URL client side, but we'd end up with foo.com/bar/#foo, which is just terrible (though again, it's not the end of the world - I guess).

Thanks!

Kevin N.

dorkie dork from dorktown wrote:
> First, this structure already works for these content management /
> blog sites. They have a single index page that shows content based on
> the url. There is only one file. So we know it works.
>
> #1 - yes. i am thinking that at its most basic level, on that single
> catch-all index page, we have a function that gets called when the
> page loads. that function is passed the broken-down url information,
> i.e. an object similar to the location object (contains the pathname,
> protocol, etc. - or we define a set of global variables). these are
> set by the mod rewrite or the referrer - i'd have to check how they
> do it. back to the function - your function can then parse and use
> the logic you set up to deliver the content you want to make
> available to the search engines, based on the url the user or search
> bot requested. in your logic you may be going to the database and
> pulling information based on the page they request.
> we store that information in a variable that will eventually get
> written to the page underneath the swf, as xml hidden inside of a
> script tag.
>
> #2 - the search engine does not know it is getting redirected to the
> index.php, index.asp or index.jsp page. all it knows is that when it
> asks for animals.com/carnivores/bears it is getting the page on
> bears.
>
> #3 - yes, that's right. this is described in #1. i really think some
> variables would be better than an object.
>
> #4 - that is not necessary. remember, the user (and their browser)
> and the search bot only believe in the url. they don't care or know
> if it is a single page or a different page that gets served up.
>
> One more thing we would have to do, which I did not mention in #1, is
> to pass a list of links to the page to be written out. These can be
> in a display:none tag or in the noscript section.
>
> And finally, that url path and variables should be passed to the
> flash swf in the flashvars section.
>
> Dang. I think this can be done rather quickly. This would be basic. A
> front end / front end manager could be created on top of this later.
>
> On 12/22/06, Kevin Newman <[EMAIL PROTECTED]> wrote:
> > dorkie dork from dorktown wrote:
> > > One of the open source solutions I occasionally use *cough*
> > > *cough* *drupal* *cough* has a mod redirect / mod rewrite
> > > htaccess file (i'm combining words). Any url that is entered into
> > > the site gets rewritten or redirected. It is a dynamic system
> > > that allows you to redirect to the content you want without
> > > hard-coding paths or directories.
> > >
> > > So you would create a dynamic page (it doesn't really exist -
> > > only in the database) and then an alias to reach it. For example,
> > > www.test.com/myalias. Actually a lot of systems use this
> > > (wordpress, etc).
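The catch-all rewrite being described is typically just a few lines of .htaccess. A sketch, assuming Apache with mod_rewrite enabled (this mirrors the classic Drupal pattern, but treat it as illustrative rather than a drop-in config):

```apache
# Send every request that isn't a real file or directory to index.php,
# passing the original path along as a query parameter.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]
```

index.php can then read the requested path from the `q` parameter (or from the request URI) and decide what content to emit - the browser and the search bot only ever see the clean alias URL.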
> > > All urls entered into the site would be redirected to index.php.
> > > At this point you could, with the super awesome power of server
> > > side code, deliver page links and content dynamically. I recently
> > > did a project with FXT and did exactly that. You pass the data as
> > > an xml model into a script tag under the body tag. Search engines
> > > pick this up. My Flex app pulled this xml in and then used it as
> > > a dataprovider for numerous controls.
> > >
> > > I've asked in the wish list for Macromedia, back before it was
> > > Adobe, to let us specify the type of extension for the published
> > > html wrapper. Right now if you click publish or run, it creates
> > > an HTML page. So somewhere in the options it would be nice to
> > > choose the page extension type (php, asp, jsp, etc.) of the page
> > > we are publishing. I've also investigated and requested a way to
> > > pick the template page that Flex finds and then replaces the
> > > tokens it is looking for inside of it. This may all be possible
> > > already. I've mostly only been learning the API.
> > >
> > > Now that I've had a chance to think about it, a default Adobe
> > > Page and Link Management admin site might be the solution. It
> > > would be created and deployed to bin or bin-admin with every
> > > project. Then you would upload that to the server along with the
> > > contents of your bin directory. A developer could log in to it on
> > > the server. The Adobe Page Manager would use an xml file or
> > > database to create the aliases. The alias would be for different
> > > states in a Flex app. When you create an alias there would be a
> > > place where you could call the services or page associated with
> > > that state and alias. It would then wrap that content in an xml
> > > tag, like in FXT, for indexing. You could also pass any links
> > > back to the page.
> > > The content and links would be in the noscript / script area of
> > > the page - not visible to the user, but visible to the search
> > > bots. Because it uses a single index.php page and .htaccess mod
> > > rewrite, it would redirect all traffic to the single index page
> > > (php, asp, java). That page would take the url alias, search the
> > > xml file or database, and serve up the appropriate content (in
> > > the background, hidden from users). I hope that makes sense.
> > >
> > > It would manage aliases, content for each alias, links for the
> > > alias, and state to pass to the embedded flash swf. I can
> > > actually imagine how it would work.
> >
> > Hmm.. I'm not sure I understand exactly what you are talking about,
> > but let's see if this is close:
> >
> > 1. Set up the homepage to pull content from the database using
> > whatever server side technology (php/asp.net, etc., over
> > WebServices/FMS, etc.) - front loader style (design pattern).
> >
> > 2. Set up a 404 handler (using .htaccess or iisadmin) to detect
> > what url was passed in - say domain.com/section/contentid - and
> > redirect to the homepage. (How would the search engine deal with
> > something like that - would it just keep replacing the index
> > information for the home page? Can a spider or bot store multiple
> > versions of a webpage? Maybe the Vary header would help here?)
> >
> > 3. Redirect the user (or bot/spider) to the homepage, and display
> > the information based on the "section/contentid" part of the url.
> > I'm not sure how you would do that. Is there a server var that can
> > be relied upon to know where the user or bot was redirected from?
> >
> > 4. Use client side technology and location.replace to fix up the
> > url so the user ends up in the right place in the app, with the
> > standard one-link type (with the #).
> >
> > Would something like that actually work?
> > I'm sure I could make it work for a web browser, but I'm not sure
> > what would work with search engine spider bots.
> >
> > Kevin N.
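For what it's worth, the serving side of steps 1-3 above can be sketched as a single front controller. This is a rough illustration in JavaScript rather than the php/asp mentioned in the thread, and every name in it (the `pages` table, `renderPage`) is my own invention, not part of any tool discussed:

```javascript
// A toy content table standing in for the database lookup in step 1.
var pages = {
  "carnivores/bears": { title: "Bears", body: "All about bears." }
};

// Given the rewritten path (handed over by a mod_rewrite rule or 404
// handler), build the single index page: the swf embed receives the
// path via flashvars, and the same content is written out below it in
// a noscript block where a search bot can read it.
function renderPage(path, db) {
  var page = db[path] || { title: "Home", body: "Welcome." };
  return [
    "<object data='app.swf'><param name='flashvars' value='path=" +
      encodeURIComponent(path) + "'/></object>",
    "<noscript><h1>" + page.title + "</h1><p>" + page.body +
      "</p></noscript>"
  ].join("\n");
}
```

The open question from step 4 - fixing up the visible URL on the client without a reload - is exactly the part that still forces the `#` hack.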

