Hmmm. This is of significance to me, as I work in a secure environment...
I'm not sure how much compute power and secondary storage you have, but in
the end, if it's possible, my humble recommendation is to have two
databases, and use some webserver/htaccess voodoo to arrange it so that
outside people see a different search page and database backend from
internal people.
Of course, depending on resources, you might prefer to implement that at the
database layer, with some sort of balancing built in, but we're getting into
hideous implementation details by now...
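To give you an idea, the webserver side of the two-database setup could be
as simple as something like the following (Apache 1.3-style config; the
paths, CGI names and the 10.* internal range are made up for the example,
and each CGI would read its own template pointing at its own database):

    # Public front end for everyone; internal front end only reachable
    # from internal addresses.  Names and addresses are placeholders.
    ScriptAlias /search  "/usr/local/search/cgi-bin/search-public.cgi"
    ScriptAlias /isearch "/usr/local/search/cgi-bin/search-internal.cgi"

    <Location /isearch>
        Order deny,allow
        Deny from all
        Allow from 10.0.0.0/255.0.0.0
    </Location>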
If that's not viable, I'm working on a wholesome way of adding ACLs to the
engine, but it's not a simple exercise to work out what the ACLs need to be,
especially on the scale of many many thousands of URLs.
That would also require an authentication method on the search page, which
is intrusive... NTLM is NOT my friend, before you mention it (=
And along the same lines, are you using MySQL? [I plan to move to Sybase at
some point in the future, but until then...]
If so, how have you arranged for the database user used for /searching/ to
be effectively read-only, while the indexer/admin user is read-write? I've
basically created a HUGE permission list, but I suspect that's probably not
the best solution...
The problem, you see, is that the read-only user still needs to be able to
create and drop tables, and needs write access to some parts of the
qtracking table. [Note: only some parts of it, because I have greatly
enhanced the query tracking here, and have several daemons running that
gather more information about the users than just what they supply
themselves.]
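For what it's worth, the direction I've been leaning is table-level (and
column-level) GRANTs rather than the huge permission list; roughly along
these lines, where the database name, table name, column names and
passwords are placeholders rather than my real schema:

    -- Search user: read-only on the index tables...
    GRANT SELECT ON search.* TO search@localhost IDENTIFIED BY 'changeme';
    -- ...but it still needs to create/drop its temporary result tables...
    GRANT CREATE, DROP ON search.* TO search@localhost;
    -- ...and to write to parts of the query-tracking table.  Column-level
    -- grants keep it to just those parts (the column names here are made up).
    GRANT INSERT ON search.qtrack TO search@localhost;
    GRANT UPDATE (qtime, found) ON search.qtrack TO search@localhost;

    -- Indexer/admin user: full rights, but only on the search database.
    GRANT ALL PRIVILEGES ON search.* TO indexer@localhost IDENTIFIED BY 'changeme';
    FLUSH PRIVILEGES;

As far as I can tell, column-level grants (or splitting the table in two)
are about the only way in MySQL to give write access to "only some parts"
of a table.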
Gary (-;
PS e-mail me personally if you want snippets of source code, etc. In fact,
feel free to e-mail me personally about this anyway...
> -----Original Message-----
> From: [EMAIL PROTECTED] [SMTP:[EMAIL PROTECTED]]
> Sent: Tuesday, August 21, 2001 6:27 PM
> To: [EMAIL PROTECTED]
> Subject: security: private or public pages
>
> Hello,
>
> I would like to arrange things so that, on the one hand, the whole set of
> documents on our site is indexed and retrievable as usual when the search
> comes from one of our team members, and, on the other hand, any external
> web visitor only sees references to public documents. Since the spider
> works from inside the site, access controls are not effective: the search
> engine may return references to, and display, pages that are strictly for
> internal use.
>
> One solution might be to use tags or categories, but at search time it
> would still be possible to hack the URL and remove the CGI parameters
> that limit the search (t=, cat=).
>
> Another way around this problem would be to force the CGI parameters we
> want in the variables section of the template file (<!--variables ... -->),
> so that they cannot be removed or overridden in a search. It would still
> be possible to bypass this with the "tmplt=" CGI parameter, but if the
> restriction is added as a default option in the template, the protection
> seems sufficient, since it is not possible to guess the names of the other
> templates.
>
> What do you think about this?
>
> Sincerely.
>
> Dominique Asselineau
> --
> Dominique Asselineau
> E.N.S.T. - Dep. TSI
> 46, rue Barrault
> 75634 PARIS Cedex 13 - France
> E-mail: [EMAIL PROTECTED]
> Phone: (33/0) 1 45 81 78 91
> Fax: (33/0) 1 45 81 37 94
___________________________________________
If you want to unsubscribe send "unsubscribe general"
to [EMAIL PROTECTED]