RE: Successfully built after 2-week struggle but...!

2019-01-25 Thread Salisbury, Mark via curl-library
Hi Himanshu,

“I successfully built a Windows x86 static libcurl library after struggling for 
two weeks. I'm new to building libraries but in the end it built. I built all 
dependencies statically like…”

When you create a static library, you don’t have to worry about satisfying the 
link dependencies your code has (functions defined in external modules).
Your code still has these dependencies, but they aren’t a problem until you try 
to link your static library into a program (.exe) or a .dll (.so on Linux).

So at the time you link your program (not libcurl, but the program that depends 
on libcurl – it looks like you called it CurlLib2UploadFile.exe), you need to 
tell the linker about all the libraries that satisfy these missing dependencies.
The missing symbols you listed (read, write, CertOpenStore, etc.) look like 
functions defined in standard Windows libraries (ws2_32.lib, Crypt32.lib, etc.)
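
For illustration only (a sketch - the exact library list depends on which 
features your libcurl build enabled, and the static library name may differ), 
with MSVC you can either list the system libraries on the linker command line 
or pull them in from source with pragmas:

/* Sketch: a program linking a static libcurl must also link the Windows
   system libraries that satisfy libcurl's dependencies.  Which ones you
   need depends on your libcurl build options. */
#define CURL_STATICLIB                /* tell curl.h we link statically */
#include <curl/curl.h>

#pragma comment(lib, "libcurl_a.lib") /* your static libcurl (name may differ) */
#pragma comment(lib, "ws2_32.lib")    /* Winsock */
#pragma comment(lib, "crypt32.lib")   /* CertOpenStore etc. (schannel) */
#pragma comment(lib, "wldap32.lib")   /* LDAP support, if not disabled */

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  /* ... use libcurl ... */
  curl_global_cleanup();
  return 0;
}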

Hope this helps.

-Mark
---
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library
Etiquette:   https://curl.haxx.se/mail/etiquette.html

RE: schannel: next InitializeSecurityContext failed: Unknown error

2019-01-04 Thread Salisbury, Mark via curl-library
This error message is actually pretty helpful:

Trying https://www.hollywood-mal.de/ OK!
Trying https://www.hollywood-mal.com/ FAIL: 35 
schannel: next InitializeSecurityContext failed: Unknown error (0x80092013) - 
Die Sperrfunktion konnte die Sperrung nicht überprüfen, da der Sperrserver 
offline war. (NB: In English the error is probably "schannel: next 
InitializeSecurityContext failed: Unknown error (0x80092013) - The revocation 
function was unable to check revocation because the revocation server was 
offline.")

I checked the CRL distribution point for both sites (you can see this info in 
the details of the site’s certificate); it’s the same for both:

[1]CRL Distribution Point
 Distribution Point Name:
  Full Name:
   URL=http://crl.starfieldtech.com/sfig2s1-103.crl

I copied your code, compiled it, and tested it:

C:\Users\MASALI1\source\repos\Debug>curl-test.exe
Trying https://www.hollywood-mal.de/ OK!
Trying https://www.hollywood-mal.com/ OK!

So it looks like it was a temporary problem.  Is the problem continuing for you?

Thanks,
Mark

Here are a couple pages to help understand certificate revocation checks:
https://blogs.msdn.microsoft.com/ieinternals/2011/04/07/understanding-certificate-revocation-checks/
https://www.digicert.com/util/utility-test-ocsp-and-crl-access-from-a-server.htm
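
If the revocation server is unreachable and you accept the reduced security, you 
can also tell the schannel backend to skip the revocation check entirely 
(CURLSSLOPT_NO_REVOKE, available since libcurl 7.44.0).  A minimal sketch, not a 
recommendation:

#include <curl/curl.h>

int main(void)
{
  CURL *curl;
  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://www.hollywood-mal.com/");
    /* skip schannel's certificate revocation check so an offline
       CRL/OCSP server does not fail the handshake */
    curl_easy_setopt(curl, CURLOPT_SSL_OPTIONS, (long)CURLSSLOPT_NO_REVOKE);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  curl_global_cleanup();
  return 0;
}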


From: curl-library  On Behalf Of Andreas 
Falkenhahn via curl-library
Sent: Friday, January 4, 2019 5:31 AM
To: curl-library@cool.haxx.se
Cc: Andreas Falkenhahn 
Subject: schannel: next InitializeSecurityContext failed: Unknown error

I know people have had problems with this before and I did my googling about 
it, but I don't really understand how to solve this problem because in my case 
it's particularly weird. Consider this little snippet:

#include <stdio.h>
#include <curl/curl.h>

static void tryconnect(const char *address)
{
    CURL *curl = curl_easy_init();
    CURLcode res;
    char buf[CURL_ERROR_SIZE];

    curl_easy_setopt(curl, CURLOPT_URL, address);
    curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
    curl_easy_setopt(curl, CURLOPT_ERRORBUFFER, buf);

    printf("Trying %s ", address);
    if(!(res = curl_easy_perform(curl))) {
        printf("OK!\n");
    } else {
        printf("FAIL: %d %s\n", res, buf);
    }

    curl_easy_cleanup(curl);
}

int main(int argc, char *argv[])
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    tryconnect("https://www.hollywood-mal.de/");   /* --> works! */
    tryconnect("https://www.hollywood-mal.com/");  /* --> fails with schannel error */
    curl_global_cleanup();
    return 0;
}

Why on earth does https://www.hollywood-mal.de/ work fine while 
https://www.hollywood-mal.com/ doesn't work at all? I'm the owner of both 
domains and they are hosted by the very same company with the very same 
settings, yet one works and the other one doesn't. Of course, in a browser both 
work fine, but with curl only the *.de one works; the *.com one fails.

This is the output:

Trying https://www.hollywood-mal.de/ OK!
Trying https://www.hollywood-mal.com/ FAIL: 35 
schannel: next InitializeSecurityContext failed: Unknown error (0x80092013) - 
Die Sperrfunktion konnte die Sperrung nicht überprüfen, da der Sperrserver 
offline war. (NB: In English the error is probably "schannel: next 
InitializeSecurityContext failed: Unknown error (0x80092013) - The revocation 
function was unable to check revocation because the revocation server was 
offline.")

How can I solve this please? Some people seem to suggest using the OpenSSL 
backend instead of schannel, but is this really the only way to go? Isn't this 
possible with in-house Windows solutions?

I'm on curl 7.57.0, Windows 7, x64.

Thanks for ideas!

--
Best regards,
Andreas Falkenhahn mailto:andr...@falkenhahn.com


---
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library
Etiquette:   https://curl.haxx.se/mail/etiquette.html

RE: bug in schannel connection shutdown?

2012-09-19 Thread Salisbury, Mark
Marc,

 I took a look at this issue, but was unable to reproduce it myself.
 Even with a testcase provided by Mark, I still was not running into invalid 
 memory access errors.

 Anyway, I created a patch that should avoid such issues by reference counting 
 the credential handle and only allowing it to be freed if it's not needed 
 anymore.

 Please take a look at the attached patch and tell me if it fixes the problem 
 for you. Thanks.

 Best regards,
 Marc

Nice job.  I tested and confirmed that the memory access error is gone with 
your change.

(Frank actually reported the error and provided the test case.)
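
The idea boils down to roughly this (an illustrative sketch with made-up names, 
not the actual patch): the credential handle is shared and only freed when the 
last connection using it lets go.

#include <stdlib.h>
#include <stdio.h>

struct shared_cred {
  int refcount;
  /* CredHandle cred; would live here in the real code */
};

static struct shared_cred *cred_acquire(struct shared_cred *c)
{
  if(!c) {
    c = calloc(1, sizeof(*c));
    if(!c)
      return NULL;
    /* AcquireCredentialsHandle() would be called here */
  }
  c->refcount++;
  return c;
}

static void cred_release(struct shared_cred *c)
{
  if(c && --c->refcount == 0) {
    /* FreeCredentialsHandle() would be called here */
    free(c);
  }
}

int main(void)
{
  struct shared_cred *c = cred_acquire(NULL);  /* connection 1 */
  cred_acquire(c);                             /* connection 2 reuses it */
  cred_release(c);   /* connection 2 done; handle stays alive */
  cred_release(c);   /* last user gone; handle freed */
  printf("done\n");
  return 0;
}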

Mark Salisbury

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


RE: bug in schannel connection shutdown?

2012-08-09 Thread Salisbury, Mark
Frank,

 I'm using libcurl 7.27.0 with schannel on windows with MSVC2008, with https.
 ...
If I then call curl_multi_cleanup() (when shutting down the entire program), I 
get accesses to free()d memory in schannel connection cleanup. I don't get 
such issues on linux with gnutls.
 The attached file reproduces the issue.

I've reproduced the issue you reported and I am taking a look at it now.

Thanks,
Mark

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


RE: schannel_connect_step3 failures

2012-06-21 Thread Salisbury, Mark
Yang, Marc, et al.,

 I am worried that the flags change in your use cases. And I really don't 
 like the
 idea of ignoring or just warning about non-matching flags. 
 ISC_RET_CONFIDENTIALITY, ISC_RET_REPLAY_DETECT and ISC_RET_SEQUENCE_DETECT 
 are pretty important to make sure that the SSL connection is actually 
 secure. Why
 would you want to communicate through an SSL connection that is actually not
 secure? There should be some other way to fix this.

 I am pretty busy with final exams during the following weeks, so I 
 would like to ask whether you or someone else could put a little 
 more research into this issue before simply ignoring the source of the 
 actual problem. Thanks in advance, I would really appreciate it!

No intention to ignore it on this side. Actually I'm raising the issue 
publicly, and listening to your recommendation of not disabling the check.

I loaded the URL Yang mentioned the problem with -  https://www.digicert.com/ - 
without issues on WinXP and Win7.  I don't have a Win2k machine to duplicate 
the problem on.

MSDN says InitializeSecurityContext() and the flags we care about here are 
supported from Win2k onwards.
http://msdn.microsoft.com/en-us/library/windows/desktop/aa375924(v=vs.85).aspx

One possibility is to disable the checks only on Win2k (something like #if 
WINVER <= 0x400).  I'd recommend not making any change, though, until we learn 
more.  I tried a quick search of the web but did not find anything.
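
For reference, the kind of check under discussion looks roughly like this (a 
simplified sketch, not the actual libcurl code):

/* Simplified sketch of the attribute check under discussion: after
   InitializeSecurityContext() the granted context attributes come back
   through pfContextAttr and should include the security properties we
   asked for. */
#define SECURITY_WIN32
#include <windows.h>
#include <security.h>
#include <stdio.h>

static int context_flags_ok(ULONG granted)
{
  const ULONG required = ISC_RET_CONFIDENTIALITY |
                         ISC_RET_REPLAY_DETECT |
                         ISC_RET_SEQUENCE_DETECT;
  return (granted & required) == required;
}

int main(void)
{
  /* pretend the platform only granted two of the three attributes */
  ULONG granted = ISC_RET_CONFIDENTIALITY | ISC_RET_REPLAY_DETECT;
  printf("flags ok: %d\n", context_flags_ok(granted));
  return 0;
}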

Mark

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html



RE: Unicode and NTLM

2012-06-20 Thread Salisbury, Mark
I doubt the change I made to be able to use the wide versions of functions 
(when UNICODE is defined) actually fixed anything besides compiling on systems 
that don't have an ANSI method available.  (It'd be cool if it did though!).

UTF-8 (which is passed around using char *) is still Unicode, even though it 
represents many characters (ASCII) with a single byte.  My change just 
converted char * (which I assume to be UTF-8) to wchar_t * (UTF-16) before 
invoking SSPI functions.  I think there is no change in the range of characters 
that are represented with the conversion.  Does Windows do anything differently 
on the backend when you use the wide version of one of these functions?
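
For reference, the conversion is essentially this (a trimmed sketch, not the 
exact libcurl helper):

/* Trimmed sketch: turn a UTF-8 char* string into a UTF-16 wchar_t*
   string before handing it to a wide (...W) SSPI function. */
#include <windows.h>
#include <stdlib.h>

static wchar_t *utf8_to_utf16(const char *utf8)
{
  wchar_t *out;
  int len = MultiByteToWideChar(CP_UTF8, 0, utf8, -1, NULL, 0);
  if(len <= 0)
    return NULL;
  out = malloc(len * sizeof(wchar_t));
  if(out && !MultiByteToWideChar(CP_UTF8, 0, utf8, -1, out, len)) {
    free(out);
    out = NULL;
  }
  return out;  /* caller frees */
}

int main(void)
{
  wchar_t *w = utf8_to_utf16("user\xC3\xA9");  /* "useré" encoded as UTF-8 */
  if(w) {
    /* w can now be passed to a wide SSPI function */
    free(w);
  }
  return 0;
}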

Mark

-Original Message-
From: curl-library-boun...@cool.haxx.se 
[mailto:curl-library-boun...@cool.haxx.se] On Behalf Of Daniel Stenberg
Sent: Wednesday, June 20, 2012 7:20 AM
To: libcurl development
Subject: Re: Unicode and NTLM

On Wed, 20 Jun 2012, Christian Hägele wrote:

 recently SSPI support was added to libcurl on windows

I just checked. We added that in 2005... ;-)

 and additional to that Mark Salisbury added support for the 
 wide-character-functions on Windows.

 I looked through the code a bit and it seems like that fixes the 
 limitation curl had that no Unicode user names and passwords are 
 supported. But this is only true if SSPI is used.

 My question is if that also fixes Known Bug 75?
 (http://curl.haxx.se/docs/knownbugs.html)
 If not, could the same technique be used to address that issue?

Can you just test and figure it out?

And if it does fix it, it is still only for SSPI-enabled builds so known bug
75 would still remain - just need a little update to clarify for which builds 
it doesn't apply!

-- 

  / daniel.haxx.se

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


RE: further schannel improvements

2012-06-20 Thread Salisbury, Mark
Marc, et al.,

Thanks for providing the test URL.  I found that I was seeing the infinite loop 
during renegotiation even before the doread change (72a5813), so it doesn't seem 
to have introduced the problem.

Anyway, I have a fix for the renegotiate problem.   It's very simple - if the 
connect state is ssl_connect_2_writing, doread is set to false.

Mark

-Original Message-
From: curl-library-boun...@cool.haxx.se 
[mailto:curl-library-boun...@cool.haxx.se] On Behalf Of Marc Hoersken
Sent: Wednesday, June 20, 2012 1:49 AM
To: libcurl development
Subject: Re: further schannel improvements

Hi there,

2012/6/20 Yang Tse yangs...@gmail.com:
 On Tue, Jun 19, 2012 at 5:22 AM, Yang Tse yangs...@gmail.com wrote:

 Relative to seven patch files posted Fri, Jun 15, 2012 at 2:24 AM by 
 Mark Salisbury Mark...

 Patches 0002-* and 0004-* not yet integrated. All other five somewhat 
 adjusted and pushed to repo.

 All seven patch files integrated/adjusted/pushed to repo.

 Please test

thanks a lot, Yang and Mark!

But there is one new problem introduced by some of those patches. It seems like 
the new handshake logic is unable to handle renegotiation if requested by the 
remote party. The new doread variable will make curl try to read more data in an 
endless loop, even though the data is already in the encrypted data buffer. This 
means we need to change that logic to support doread being set to FALSE from the 
beginning for renegotiation. I suggest that doread or a similar variable is made 
a parameter to the step2 function. This would allow the schannel_recv function 
to pass FALSE into it.

What do you think?

You can test the renegotiation against
https://stuff.marc-hoersken.de/renegotiate/

Best regards,
Marc
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


0001-schannel-SSL-fix-for-renegotiate-problem.patch
Description: 0001-schannel-SSL-fix-for-renegotiate-problem.patch
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

2 fixes for schannel handshake

2012-06-19 Thread Salisbury, Mark
Hello,

I have a couple more fixes for schannel SSL (see attached).

1.  Process extra data buffer before returning from schannel_connect_step2.  
Without this change I've seen WinCE hang when schannel_connect_step2 returns 
and calls Curl_socket_ready.

2.  If the encrypted handshake does not fit in the initial buffer (seen with a 
large certificate chain), increasing the encrypted data buffer is necessary.  I 
double the size when it grows but limit it to 16x the original size.

Also, a warning is fixed.  The diff looks bigger than it really is, as most of 
the change is increased indentation.
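
The growth logic for fix 2 amounts to roughly this (illustrative names and 
sizes, not the actual patch):

/* Illustrative sketch of fix 2: double the encrypted-data buffer when
   the handshake does not fit, but never grow beyond 16x the initial
   size. */
#include <stdlib.h>

#define INITIAL_ENC_BUF_SIZE 4096
#define MAX_ENC_BUF_SIZE (16 * INITIAL_ENC_BUF_SIZE)

static int grow_enc_buffer(unsigned char **buf, size_t *size)
{
  unsigned char *newbuf;
  size_t newsize = *size * 2;

  if(*size >= MAX_ENC_BUF_SIZE)
    return -1;                    /* refuse to grow any further */
  if(newsize > MAX_ENC_BUF_SIZE)
    newsize = MAX_ENC_BUF_SIZE;
  newbuf = realloc(*buf, newsize);
  if(!newbuf)
    return -1;
  *buf = newbuf;
  *size = newsize;
  return 0;
}

int main(void)
{
  size_t size = INITIAL_ENC_BUF_SIZE;
  unsigned char *buf = malloc(size);
  while(buf && grow_enc_buffer(&buf, &size) == 0)
    ;                             /* doubles to 8K, 16K, 32K, 64K, then stops */
  free(buf);
  return 0;
}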

Yang - thanks very much for working through my patches.  Looks like there is 
just one patch left - patch 4 
schannel-SSL-Made-send-method-handle-unexpected-case.patch.

Mark



0001-schannel-SSL-changes-in-schannel_connect_step2.patch
Description: 0001-schannel-SSL-changes-in-schannel_connect_step2.patch
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

RE: schannel and cacert verification

2012-06-13 Thread Salisbury, Mark
That's correct.  

Desktop Windows has multiple cert stores - there is a machine store and a user 
store.  The user store is what you see when you open up the Certificates view 
from IE.  By default I think all code uses this store too.
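
If you want to poke at it yourself, this is roughly how that user store is 
opened (a standalone sketch using the Crypt32 API, not libcurl code):

/* Sketch: enumerate the current user's "ROOT" certificate store, which
   is what the IE Certificates dialog shows and what schannel consults
   when verifying a server certificate chain. */
#include <windows.h>
#include <wincrypt.h>
#include <stdio.h>
#pragma comment(lib, "crypt32.lib")

int main(void)
{
  HCERTSTORE store = CertOpenSystemStoreA(0, "ROOT");
  PCCERT_CONTEXT cert = NULL;
  int count = 0;
  if(!store)
    return 1;
  while((cert = CertEnumCertificatesInStore(store, cert)) != NULL)
    count++;
  printf("%d root certificates in the current user's ROOT store\n", count);
  CertCloseStore(store, 0);
  return 0;
}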

Mark

-Original Message-
From: curl-library-boun...@cool.haxx.se 
[mailto:curl-library-boun...@cool.haxx.se] On Behalf Of Daniel Stenberg
Sent: Wednesday, June 13, 2012 3:18 PM
To: libcurl hacking
Subject: schannel and cacert verification

Hi guys,

Am I right when I assume that the new schannel code uses the Windows cert 
store when a server's SSL certificate gets verified?

-- 

  / daniel.haxx.se
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html



RE: after 7.26.0

2012-06-06 Thread Salisbury, Mark
Hi Marc,

I'd like to test your native windows SSL work and try integrating some of what 
I've done on top of your work.

What state is your work in currently?  I recall one of the concerns was around 
write buffering.

Does it make sense for you to send me a patch I can use to base work off?  Did 
you have approval to push your work directly to the repo?  As I browse the git 
repo I don't think any of your work has come in yet; I just see patches you 
sent to the list on 4/21.

Thanks,
Mark


-Original Message-
From: curl-library-boun...@cool.haxx.se 
[mailto:curl-library-boun...@cool.haxx.se] On Behalf Of Marc Hoersken
Sent: Thursday, May 24, 2012 3:39 PM
To: libcurl development
Subject: Re: after 7.26.0

2012/5/24 Daniel Stenberg dan...@haxx.se
 3 - schannel support for libcurl, by Marc Hoersken

I will be happy to work on the required changes to my current schannel 
implementation.
I am still not sure how to proceed on the general SSPI route, we will need to 
talk about this in the separate thread.
We should probably also try to incorporate at least some of the functionality 
and advantages of the implementation created by Mark Salisbury.

 Thanks for flying curl!

Nice release ;-)

After Schannel/SSPI I will also be interested in adding IPv6 SOCKS5 Proxy 
support. I will probably need it for another project of my own.
But it will probably happen after the next release.
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html



RE: SSL/TLS support using Windows SSPI Schannel API

2012-04-23 Thread Salisbury, Mark
Marc, Daniel, All,

Thinking about this a little bit more, I wonder if the SSL write function 
really needs to write ALL the bytes the client passes in before it returns.  
Here's why.  Suppose the client passes in 100 bytes of data to write (a small 
HTTP GET request).  When we use SSL, we may encrypt that 100 bytes into a 
different number of bytes; it depends on the encryption algorithm that was 
negotiated (without extra work we're not going to even know which one was 
used).  Suppose the plain bytes are converted into 125 bytes. If we write only 
the first 30 bytes, and we return that to the client, they will call the write 
function again and start us off at position 30.  Since the previous message was 
not written fully, it probably wasn't coherent to the receiver when it was 
decrypted.  Writing 30 bytes (of encrypted data) may not mean that 30 bytes of 
the unencrypted data was decrypted by the client.

What do you guys think?  Do you agree?  When SSL write methods are passed a 
buffer to write, do they need to write it all, or return an error if they are 
unable to (respecting configured timeouts)?

Any encryption experts want to weigh in?

Mark

-Original Message-
From: curl-library-boun...@cool.haxx.se 
[mailto:curl-library-boun...@cool.haxx.se] On Behalf Of Marc Hoersken
Sent: Monday, April 23, 2012 12:15 PM
To: libcurl development
Subject: Re: SSL/TLS support using Windows SSPI Schannel API

2012/4/23 Daniel Stenberg dan...@haxx.se:
 On Mon, 23 Apr 2012, Salisbury, Mark wrote:

 Thanks a lot for your contribution Mark. Let's combine these into 
 something great!


Yep, I am also for combining the solutions into something great!


 - write buffering implemented (though this is very easy to do).  It 
 continues in a loop until all bytes are written.  Not sure if this is 
 what Daniel intended as correct when he said "The code considers 
 swrite() returns that are less than full to be errors".  The 
 alternative is to maintain a 'bytes to write' buffer and check that first 
 when a send call is invoked.


 That's the issue I meant, yes. Just looping is however not the ideal 
 solution since that is a blocking behavior which will waste CPU cycles 
 and degrade the multi interface experience. It is however better than 
 not handling the case at all... =)


What about returning CURL_AGAIN? Will this make libcurl re-call the functions 
for further writing?


 See my implementation attached.  (of course there are some changes in 
 some other libcurl header files too, just the main implementation 
 file is attached).


 Mark and Marc! What do you consider the best way forward to be?

 Will you merge your two efforts first, or should we get one of them 
 into the master first and then work on adjusting that with code from 
 the other way afterwards? I'm open for either way.

I guess Mark's solution is more mature, but he also correctly identified 
advantages in both implementations. As I am pretty busy until the 7th May, I am 
not sure if I can help much with the merge, but I would really like to see the 
advantages of both approaches being merged into libcurl.

Best regards,
Marc

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


RE: SSL/TLS support using Windows SSPI Schannel API

2012-04-23 Thread Salisbury, Mark
I think I didn't explain my concern very well - let me try again with fewer 
words :)

If you are asked to send 100 bytes, which is translated into 125 encrypted 
bytes, but only 30 bytes (encrypted) are actually sent, how do you know how 
many unencrypted bytes were sent?  (how do you know what to return to the 
caller for bytes written?)  I don’t think the caller cares how many encrypted 
bytes were sent.

If you tell the caller 30 bytes were sent, it will call you back at an offset 
30 bytes into its buffer.

Mark

-Original Message-
From: curl-library-boun...@cool.haxx.se 
[mailto:curl-library-boun...@cool.haxx.se] On Behalf Of Marc Hoersken
Sent: Monday, April 23, 2012 2:18 PM
To: libcurl development
Subject: Re: SSL/TLS support using Windows SSPI Schannel API

2012/4/23 Salisbury, Mark mark.salisb...@hp.com:
 Thinking about this a little bit more, I wonder if the SSL write function 
 really needs to write ALL the bytes the client passes in before it returns.  
 Here's why.  Suppose the client passes in 100 bytes of data to write (a small 
 HTTP GET request).  When we use SSL, we may encrypt that 100 bytes into a 
 different number of bytes; it depends on the encryption algorithm that was 
 negotiated (without extra work we're not going to even know which one was 
 used).  Suppose the plain bytes are converted into 125 bytes. If we write 
 only the first 30 bytes, and we return that to the client, they will call the 
 write function again and start us off at position 30.  Since the previous 
 message was not written fully, it probably wasn't coherent to the receiver 
 when it was decrypted.  Writing 30 bytes (of encrypted data) may not mean 
 that 30 bytes of the unencrypted data was decrypted by the client.

 What do you guys think?  Do you agree?  When ssl write methods are passed a 
 buffer to write do they need to write it all or return an error if they are 
 unable to, respecting configured timeouts?

 Any encryption experts want to weigh in?

I am not an encryption expert, but as the Schannel decryption routines 
themselves are perfectly fine with handling incomplete encrypted data by asking 
for more data, I think it should be fine to write partial data to the socket. 
This is the only way to handle a big data transfer over a slow connection by 
buffering inside the client.

Remember: packets may be split up anyway, so you can never be sure that a packet 
completely arrives at the target endpoint in the same timeframe and can be read 
at once. TCP connections are not message oriented but stream oriented, and this 
means that the receiving endpoint should never care about the original send 
size and should just figure out how much more data is required.

Best regards,
Marc

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


RE: SSL/TLS support using Windows SSPI Schannel API

2012-04-23 Thread Salisbury, Mark
So you're saying that the SSL write method could consume all the data (and thus 
return bytes written = full amount), not send all the encrypted data, and it 
will be called again to write the remaining bytes?

This is good.  I'm trying to follow how this works but it's not 
straightforward...  Curl_write calls the send method, and Curl_write is called 
from many places, but a rather common one I think is transfer.c.

This makes me think that if the send method indicates it has written (consumed) 
all the data it may not be called again to write out the remaining bytes -

if(k->writebytecount == data->set.infilesize) {
  /* we have sent all data we were supposed to */
  k->upload_done = TRUE;
  infof(data, "We are completely uploaded and fine\n");
}
 
Thanks,
Mark
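
To make the "consumed vs. sent" distinction concrete, here is a toy sketch 
(made-up names, fake encrypt/socket stand-ins - not the schannel code) of the 
buffering layer Daniel describes below: report the plaintext bytes consumed, 
stash any encrypted bytes the socket would not take, and drain them first on the 
next call.

#include <string.h>
#include <stdio.h>

struct pending {
  unsigned char buf[4096];   /* encrypted bytes not yet written */
  size_t len;
};

/* stand-in for the real socket write; pretend it accepts at most 30 bytes */
static size_t fake_socket_write(const unsigned char *p, size_t n)
{
  (void)p;
  return n > 30 ? 30 : n;
}

/* stand-in for EncryptMessage: copies the data and adds fake TLS overhead */
static size_t fake_encrypt(const unsigned char *in, size_t inlen,
                           unsigned char *out)
{
  memcpy(out, in, inlen);
  memset(out + inlen, 0xAA, 25);     /* pretend 25 bytes of record overhead */
  return inlen + 25;
}

static size_t buffered_send(struct pending *pend,
                            const unsigned char *plain, size_t plainlen)
{
  size_t written;

  /* 1. drain leftover encrypted data from a previous call first */
  if(pend->len) {
    written = fake_socket_write(pend->buf, pend->len);
    memmove(pend->buf, pend->buf + written, pend->len - written);
    pend->len -= written;
    if(pend->len)
      return 0;                 /* still blocked; nothing new consumed */
  }

  /* 2. encrypt the whole plaintext chunk, send what we can, stash the rest */
  {
    unsigned char enc[4096];
    size_t enclen = fake_encrypt(plain, plainlen, enc);
    written = fake_socket_write(enc, enclen);
    memcpy(pend->buf, enc + written, enclen - written);
    pend->len = enclen - written;
  }
  return plainlen;              /* report *plaintext* bytes as consumed */
}

int main(void)
{
  struct pending pend = { {0}, 0 };
  unsigned char msg[100] = {0};
  size_t c = buffered_send(&pend, msg, sizeof(msg));
  printf("consumed %zu plaintext bytes, %zu encrypted bytes pending\n",
         c, pend.len);
  return 0;
}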

-Original Message-
From: curl-library-boun...@cool.haxx.se 
[mailto:curl-library-boun...@cool.haxx.se] On Behalf Of Daniel Stenberg
Sent: Monday, April 23, 2012 2:49 PM
To: libcurl development
Subject: RE: SSL/TLS support using Windows SSPI Schannel API

On Mon, 23 Apr 2012, Salisbury, Mark wrote:

 If you are asked to send 100 bytes, which is translated into 125 
 encrypted bytes, but only 30 bytes (encrypted) are actually sent, how 
 do you know how many unencrypted bytes were sent?  (how do you know 
 what to return to the caller for bytes written?)  I don’t think the 
 caller cares how many encrypted bytes were sent.

 If you tell the caller 30 bytes were sent, it will call you back at an 
 offset 30 bytes into its buffer.

As a user of such a function we don't care how many bytes were sent; we care how 
many bytes from the buffer were *consumed*, and that's how most network-crypto 
related send functions tend to work.

I haven't checked how the API we're using here works in this case, so perhaps 
there's a need to add a buffering layer in between that would provide the 
above-mentioned functionality.

-- 

  / daniel.haxx.se

---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

RE: IP address support for no proxy feature

2009-11-12 Thread Salisbury, Mark
The patch isn't specific to numbers; the key to the patch is that it matches 
based on the beginning of the string instead of the end.  So it should work 
with IPv6 subnets too (I assume the same concept applies with IPv6, where the 
beginning of the address identifies the larger network, eventually getting down 
to a specific host at the right end of the address).

Mark

-Original Message-
From: curl-library-boun...@cool.haxx.se 
[mailto:curl-library-boun...@cool.haxx.se] On Behalf Of johan...@sun.com
Sent: Wednesday, November 11, 2009 7:09 PM
To: libcurl development
Subject: Re: IP address support for no proxy feature

On Wed, Nov 11, 2009 at 10:12:13PM +, Salisbury, Mark wrote:
 With this change, it is possible to specify a no proxy list like
 192.168.*, which would prevent the proxy from being used to access
 any host that begins with 192.168., for instance http://192.168.0.1/
 would not go through the proxy server with this change.

I think you should use a different syntax for specifying IP addresses
that are going to be included with the NOPROXY option.

A few concerns:

1. RFC 952 allows hostnames with numbers, it's just that the hostname
can't start with a number.  What happens if the user sets something like
this, A-1.* ?

http://tools.ietf.org/html/rfc952

2. What's your approach for dealing with IPv6 addresses?

3. How do you cope with subnets?  192.168.0.0/25 and 192.168.0.0/23 both
generate a range of addresses that can't be expressed in a single entry.

I would suggest you switch to CIDR notation.  Perhaps someone else on
the list can provide more detailed comments about IPv6.

-j
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


RE: IP address support for no proxy feature

2009-11-12 Thread Salisbury, Mark
Yang, 

Yes, you make a good point.  If I modify the method to also take the resolved 
IP address of the hostname as an argument and test either the hostname or the 
IP address against each no proxy host string, that should solve the problem, 
right?  Do you have any other concerns?

The case I was solving involved accessing a website by its IP address, which 
probably isn't as useful generally as the case you pointed out.

Thanks,
Mark

-Original Message-
From: curl-library-boun...@cool.haxx.se 
[mailto:curl-library-boun...@cool.haxx.se] On Behalf Of Yang Tse
Sent: Wednesday, November 11, 2009 9:18 PM
To: libcurl development
Subject: Re: IP address support for no proxy feature

2009/11/12, Salisbury, Mark wrote:

 Here's how I'd propose updating it:

 .IP CURLOPT_NOPROXY
 Pass a pointer to a zero terminated string. This should be a comma-separated
 list of hosts which do not use a proxy, if one is specified.  To disable the
 proxy, set a single * character.  Each name in this list is matched as either
 a domain which contains the hostname, the hostname itself, or an IP address
 range. For example, local.com would match local.com, local.com:80, and
 www.local.com, but not www.notlocal.com.  192.* will match all hosts
 beginning with 192, like 192.168.1.100.  It will not match 10.10.10.192.
 (Added in 7.19.4, updated to support IP address ranges in 7.19.8)

Ok, the patch does not bring the capability that the RE seemed to
imply, and it doesn't bring the capabilities that the modified
documentation for CURLOPT_NOPROXY states.

Even with the patch applied, if a destination URL is specified with a
format such as 'host1.local.com' while --noproxy specifies the IP
address of host1.local.com, the request will go through the proxy,
ignoring the --noproxy setting.

What the patch brings is the capability of filtering, using an IP
subnet notation, those URLs which have been given using the IP address
of the host.

Given this, in order to be committed, either the code part of the patch
is reworked to fulfill the description, or the contrary is done.

-- 
-=[Yang]=-
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html


IP address support for no proxy feature

2009-11-11 Thread Salisbury, Mark
Hello,

I would like to submit a patch to allow IP addresses to be included in the list 
of no proxy hosts.

Currently the no proxy logic allows for domain names to be specified.  For 
instance, specifying hp.com will prevent the proxy from being used for 
accessing all hosts that end in hp.com.  However, if I want to prevent the 
proxy from being used for a range of IP addresses I don't have any way to do 
this, since the logic in check_noproxy only compares the end of the requested 
host against the entry in the no proxy list.

With this change, it is possible to specify a no proxy list like 192.168.*, 
which would prevent the proxy from being used to access any host that begins 
with 192.168.; for instance, http://192.168.0.1/ would not go through the 
proxy server.
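
A usage sketch of what the patch enables (the proxy host below is made up, and 
the 192.168.* prefix form is what this patch adds - it is not something stock 
libcurl supports):

#include <curl/curl.h>

int main(void)
{
  CURL *curl;
  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");
    /* with the patch, any host starting with "192.168." bypasses the proxy */
    curl_easy_setopt(curl, CURLOPT_NOPROXY, "192.168.*");
    curl_easy_setopt(curl, CURLOPT_URL, "http://192.168.0.1/");
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  curl_global_cleanup();
  return 0;
}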

I created a short list of hosts and a no proxy list with this change and could 
not see any ill side effects.

I used Tortoise SVN to create my patch, against a repository where I have 
libcurl code checked in (version 7.19.7).

Thanks,
Mark Salisbury
Hewlett-Packard Company



noproxy.patch
Description: noproxy.patch
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html

RE: IP address support for no proxy feature

2009-11-11 Thread Salisbury, Mark
curl_easy_setopt.3 (docs\libcurl\curl_easy_setopt.3) is the man page, right?

I also see curl_easy_setopt.html and curl_easy_setopt.pdf.  Are these 
automatically generated based off the man page?

Here's the current entry in curl_easy_setopt.3:

.IP CURLOPT_NOPROXY
Pass a pointer to a zero terminated string. The should be a comma- separated
list of hosts which do not use a proxy, if one is specified.  The only
wildcard is a single * character, which matches all hosts, and effectively
disables the proxy. Each name in this list is matched as either a domain which
contains the hostname, or the hostname itself. For example, local.com would
match local.com, local.com:80, and www.local.com, but not www.notlocal.com.
(Added in 7.19.4)

Here's how I'd propose updating it:

.IP CURLOPT_NOPROXY
Pass a pointer to a zero terminated string. This should be a comma-separated
list of hosts which do not use a proxy, if one is specified.  To disable the
proxy, set a single * character.  Each name in this list is matched as either 
a domain which contains the hostname, the hostname itself, or an IP address 
range. For example, local.com would match local.com, local.com:80, and 
www.local.com, but not www.notlocal.com.  192.* will match all hosts 
beginning with 192, like 192.168.1.100.  It will not match 10.10.10.192.
(Added in 7.19.4, updated to support IP address ranges in 7.19.8)

Thanks,
Mark

-Original Message-
From: curl-library-boun...@cool.haxx.se 
[mailto:curl-library-boun...@cool.haxx.se] On Behalf Of Yang Tse
Sent: Wednesday, November 11, 2009 3:45 PM
To: libcurl development
Subject: Re: IP address support for no proxy feature

2009/11/11, Salisbury, Mark wrote:

 I would like to submit a patch to allow IP addresses to be included in the
 list of no proxy hosts.

At least CURLOPT_NOPROXY section in curl_easy_setopt.3 should be
updated accordingly to reflect and document the new capability this
would introduce.

Cheers,
-- 
-=[Yang]=-
---
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette:  http://curl.haxx.se/mail/etiquette.html