Hello there

Does anybody have any information, please, on the specific
difficulties search engine spiders have with cookies and on what
methods can be used to work around the problems they face?

I am a marketing consultant working with a range of clients who
use log file analysis tools such as WebTrends and RedSheriff.
Both of these tools (and others) use (or at least have an option
to use) cookies in order to track unique visitors to web sites.
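
As far as I can tell, that tracking relies on an HTTP exchange
along the following lines (the cookie name here is purely
illustrative, not one that either product actually uses):

    First response from the server to a new visitor:

        HTTP/1.1 200 OK
        Set-Cookie: VISITOR_ID=abc123; path=/

    Every subsequent request from that same browser:

        GET /page.html HTTP/1.1
        Host: www.example.com
        Cookie: VISITOR_ID=abc123

The analysis tool then counts each distinct cookie value as one
unique visitor.
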
My understanding is that search engine spiders cannot accept
cookies and consequently will not crawl a site that tries to serve
cookies to them. I understand that cloaking is a POSSIBLE
solution; however, this in itself raises a few questions for me:

1) If cloaking is an acceptable way to get around the problem of
turning away a search engine spider (and consequently leaving
pages within a site unindexed), how should the technique be
implemented, and what pitfalls need to be considered?
2) Most search engines state clearly that they will ban sites from
their indexes altogether if they discover cloaking has been used.
Are there any other methods that are preferable? (Something along
the lines of the sketch below is what I imagine, but I am happy to
be corrected.)
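
From my own limited reading, one alternative to full cloaking
might be to serve identical page content to everyone and simply
withhold the tracking cookie when the User-Agent looks like a
spider. Below is a rough sketch, in Python, of what I mean; the
spider signatures and function names are entirely my own
assumptions, not anything WebTrends or RedSheriff have described:

    KNOWN_SPIDER_SIGNATURES = ("googlebot", "slurp", "msnbot")

    def is_spider(user_agent):
        """Guess whether a request comes from a search engine spider."""
        ua = (user_agent or "").lower()
        return any(sig in ua for sig in KNOWN_SPIDER_SIGNATURES)

    def response_headers(user_agent, visitor_id):
        """Build the HTTP response headers for a page request.

        The page content is identical for everyone; the only
        difference is that recognised spiders are never sent the
        tracking cookie.
        """
        headers = [("Content-Type", "text/html")]
        if not is_spider(user_agent):
            # Ordinary visitor: issue the unique-visitor cookie that
            # the log analysis tool expects to see on later requests.
            headers.append(
                ("Set-Cookie", "VISITOR_ID=%s; path=/" % visitor_id))
        return headers

Even I can see two pitfalls with this: the signature list has to
be maintained by hand, and a spider that does not identify itself
would still be served the cookie. Whether the engines would treat
this as cloaking at all is exactly what I am asking in question 2.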

The vendors of WebTrends and RedSheriff have not provided me with
any information on this matter, so I would like to put the issues
highlighted in any responses to them so that they can respond
formally.

I should say that although I have a technical background, I am not
a programmer or software developer. I have, however, been learning
a lot simply by subscribing to this list (keep up the great work).
What I am looking for is a reasonably technical answer, or a
pointer to where I might find one, so that I can pass specific
suggestions on to my clients and to WebTrends and RedSheriff.

Thanks for any help you can give.

Regards

Dave Watson.

