On Tue, 25 Jan 2000 [EMAIL PROTECTED] wrote:
> OK, here is an update: it basically works (verified with tcpdump here),
> including usernames with "\" in them, provided you use the long form
> described below. In the current version, using a URL won't always work
> in combination with user/pass authentication because of two bugs. The
> correct way to do it is:
> 
> print read [
>       scheme: 'http
>       user: "my\user"
>       pass: "mypassword"
>       host: "www.somewhere.com"
>       path: "dir/file.html"
> ]

Here's what I type into the REBOL console...

>> print read [
[    scheme: 'http
[    user: "webuser"
[    pass: "letmein"
[    host: "127.0.0.1"
[    path: "mysite/default.htm"
[    ]
Net-log: ["Opening tcp for" http]
connecting to: 127.0.0.1
Net-log: {GET /mysite/default.htm HTTP/1.0
Accept: */*
User-Agent: REBOL 2.2.0.3.1
Host: 127.0.0.1
Authorization: Basic d2VidXNlcjpsZXRtZWlu

}
Net-log: "HTTP/1.1 401 Access Denied"
** User Error: Error.  Target url: http://127.0.0.1/mysite/default.htm
could not be retrieved
.  Server response: HTTP/1.1 401 Access Denied.
** Where: print read [
    scheme: 'http
    user: "webuser"
    pass: "letmein"
    host: "127.0.0.1"
    path: "mysite/default.htm"
]

My next guess is that the problem centers on the "User-Agent" header,
or possibly the "Accept: */*" line.  However, I'm not overly familiar
with the low-level details of HTTP.  Should I override the User-Agent
with some sort of Mozilla or IE string?  If so, what strings are
acceptable?  (I've put a sketch of what such an override might look
like at the end of this message.)
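
For what it's worth, the Basic credential itself looks right: the value
in the Authorization header is just the base64 of "user:pass", and a
quick console check (enbase defaults to base 64) matches what shows up
in the trace above:

>> enbase "webuser:letmein"
== "d2VidXNlcjpsZXRtZWlu"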

Here's the modified version of the script to get a password-protected page:
--------------------------------------------------------------------
REBOL [
    Title: "Password Page Retreival"
    Date:  25-Jan-2000
    Purpose: "A script to fetch a web page that uses basic authentication"
    File:  %getsecurepage.r
    Notes: {
        A quick test of the scheme mechanism of reading a URL -
        this should allow for retrieval of web pages that are
        located behind a password protected challenge.
    }
]
;host: "209.85.159.166"

http-port: open [
  scheme: 'tcp      ;-- raw TCP, so the request can be built by hand
  port-id: 80
  timeout: 0:30
  host: "127.0.0.1"
]

msg: rejoin [{GET /mysite/default.htm HTTP/1.1
Host: 127.0.0.1
Authorization: Basic }
  enbase "webuser:letmein"  ;-- base64 of "user:pass"
  "^M^J^M^J"                ;-- CRLF CRLF ends the headers
]

print msg

insert http-port msg

while [data: copy http-port] [prin data]  ;-- keep reading until copy returns none (connection closed)

print ""

close http-port
--------------------------------------------------------------------

Here's the response that I get:
--------------------------------------------------------------------
>> do %getsecurepage.r
Script: "Password Page Retreival" (25-Jan-2000)
GET /mysite/default.htm HTTP/1.1
Host: 127.0.0.1
Authorization: Basic d2VidXNlcjpsZXRtZWlu


HTTP/1.1 401 Access Denied
WWW-Authenticate: NTLM
Connection: close
Content-Length: 835
Content-Type: text/html

<html><head><title>Error 401.3</title>

<meta name="robots" content="noindex">
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1"></head>

<body>

<h2>HTTP Error 401</h2>

<p><strong>401.3 Unauthorized: Unauthorized due to ACL on
resource</strong></p>

<p>This error indicates that the credentials passed by the client do not
have access to the particular resource on the server. This resource could
be either the page or file listed in the address line of the client, or
it could be another file on the server that is needed to process the file
listed on the address line of the
client.</p>

<p>Please make a note of the entire address you were trying to access and
then contact the Web server's administrator to verify that you have
permission to access the requested resource.
</p>

</body></html>
--------------------------------------------------------------------

I've done some basic searches on IIS configuration, and it looks to be
set up correctly - keep in mind, I'm just testing this to see whether
I can retrieve pages that require passwords.
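
On the User-Agent question above: I don't know whether IIS pays any
attention to it, but here is a rough sketch of how the same hand-built
request could carry an explicit User-Agent line, with CRLF line endings
throughout, as HTTP expects.  The Mozilla-compatible string below is
only an example value, not one I know to be required:
--------------------------------------------------------------------
http-port: open [
  scheme: 'tcp
  port-id: 80
  timeout: 0:30
  host: "127.0.0.1"
]

eol: "^M^J"    ;-- CR LF, the line terminator HTTP expects

insert http-port rejoin [
  "GET /mysite/default.htm HTTP/1.0" eol  ;-- 1.0, so the server closes when done
  "Host: 127.0.0.1" eol
  "User-Agent: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT)" eol  ;-- example only
  "Authorization: Basic " enbase "webuser:letmein" eol
  eol    ;-- blank line ends the headers
]

while [data: copy http-port] [prin data]  ;-- print the response as it arrives
close http-port
--------------------------------------------------------------------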

- Porter Woodward
