Hello community,

here is the log from the commit of package curl.3001 for openSUSE:12.3:Update 
checked in at 2014-09-17 22:48:07
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:12.3:Update/curl.3001 (Old)
 and      /work/SRC/openSUSE:12.3:Update/.curl.3001.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "curl.3001"

Changes:
--------
New Changes file:

--- /dev/null   2014-07-24 01:57:42.080040256 +0200
+++ /work/SRC/openSUSE:12.3:Update/.curl.3001.new/curl.changes  2014-09-17 22:48:08.000000000 +0200
@@ -0,0 +1,1112 @@
+-------------------------------------------------------------------
+Tue Sep  2 15:56:05 UTC 2014 - [email protected]
+
+- fix for CVE-2014-3613 (bnc#894575)
+  * libcurl cookie leaks
+  * added curl-CVE-2014-3613.patch
+
+-------------------------------------------------------------------
+Wed Apr  2 10:43:38 UTC 2014 - [email protected]
+
+- fixes for two security vulnerabilities:
+  * CVE-2014-0138 (bnc#868627)
+    - curl: wrong re-use of connections
+    - added: curl-CVE-2014-0138.patch
+    - removed: curl-CVE-2014-138-bad-reuse.patch
+  * CVE-2014-0139 (bnc#868629)
+    - curl: IP address wildcard certificate validation
+    - added: curl-CVE-2014-0139.patch
+    - removed: curl-CVE-2014-139-reject-cert-ip-wildcards.patch
+
+-------------------------------------------------------------------
+Mon Mar 17 11:16:10 UTC 2014 - [email protected]
+
+- fixes for two security vulnerabilities:
+  * CVE-2014-138 (bnc#868627)
+    - curl: wrong re-use of connections
+    - added curl-CVE-2014-138-bad-reuse.patch
+  * CVE-2014-139 (bnc#868629)
+    - curl: IP address wildcard certificate validation
+    - curl-CVE-2014-139-reject-cert-ip-wildcards.patch
+
+-------------------------------------------------------------------
+Tue Jan 14 12:33:28 UTC 2014 - [email protected]
+
+- fix for CVE-2014-0015 (bnc#858673)
+  * re-use of wrong HTTP NTLM connection in libcurl
+  * added curl-CVE-2014-0015-NTLM_connection_reuse.patch
+- fix test failure because of an expired cookie (bnc#862144)
+  * added curl-test172_cookie_expiration.patch
+
+-------------------------------------------------------------------
+Mon Dec  2 11:26:06 UTC 2013 - [email protected]
+
+- fix CVE-2013-4545 (bnc#849596)
+  = acknowledge VERIFYHOST without VERIFYPEER
+
+-------------------------------------------------------------------
+Thu Jun 13 10:06:23 UTC 2013 - [email protected]
+
+- fix for CVE-2013-2174 (bnc#824517)
+  added curl-CVE-2013-2174.patch
+
+-------------------------------------------------------------------
+Fri Apr 12 11:01:51 UTC 2013 - [email protected]
+
+- fixed CVE-2013-1944 (bnc#814655)
+  added curl-CVE-2013-1944.patch
+
+-------------------------------------------------------------------
+Thu Feb  7 10:54:15 UTC 2013 - [email protected]
+
+- fixed CVE-2013-0249 (bnc#802411)
+- refreshed patches
+
+-------------------------------------------------------------------
+Fri Jan 11 21:34:38 CET 2013 - [email protected]
+
+- Break build loop and make GPG signature verification optional.
+
+-------------------------------------------------------------------
+Tue Nov 27 20:05:00 CET 2012 - [email protected]
+
+- Verify GPG signature.
+
+-------------------------------------------------------------------
+Tue Nov 20 23:43:24 UTC 2012 - [email protected]
+
+- Curl 7.28.1
+* FTP: prevent the multi interface from blocking
+  Obsoletes curl-ftp-prevent-the-multi-interface-from-blocking.patch
+* don't send '#' fragments when using proxy
+* OpenSSL: Disable SSL/TLS compression - avoid the "CRIME" attack
+* TFTP: handle resend
+* memory leak: CURLOPT_RESOLVE with multi interface
+* SSL: Several SSL-backend related fixes 
+
+-------------------------------------------------------------------
+Sun Nov  4 19:57:33 UTC 2012 - [email protected]
+
+- added curl-ftp-prevent-the-multi-interface-from-blocking.patch in
+  order to prevent the multi interface from blocking when using ftp
+  and the remote end responds very slowly (sf#3579064)
+
+-------------------------------------------------------------------
+Sun Jul 29 22:14:25 UTC 2012 - [email protected]
+
+- Curl 7.27.0
+* support metalinks
+* Add sasl authentication support
+* various bugfixes
+- Fix previous change, _GNU_SOURCE --> AC_USE_SYSTEM_EXTENSIONS
+
+-------------------------------------------------------------------
+Mon Jul  9 13:12:24 UTC 2012 - [email protected]
+
+- define _GNU_SOURCE for oS/SLES <= 11.4, as O_CLOEXEC is
+  defined inside an ifdef __USE_GNU
+
+-------------------------------------------------------------------
+Sat May 12 23:24:56 UTC 2012 - [email protected]
+
+- Update to new upstream release 7.25.0
+* Added CURLOPT_TCP_KEEPALIVE, CURLOPT_TCP_KEEPIDLE,
+  CURLOPT_TCP_KEEPINTVL
+* use new library-side TCP_KEEPALIVE options
+* Added a new CURLOPT_MAIL_AUTH option
+* Added support for --mail-auth
+* (for more see the shipped CHANGES file)
+
+-------------------------------------------------------------------
+Wed Feb  8 00:45:18 UTC 2012 - [email protected]
+
+- Problem with the c-ares backend, workaround for [bnc#745534] 
+
+-------------------------------------------------------------------
+Thu Feb  2 18:47:10 UTC 2012 - [email protected]
+
+- Update to version curl 7.24.0
+- refresh patches to fix broken build
+
+-------------------------------------------------------------------
+Wed Jan 18 13:49:56 CET 2012 - [email protected]
+
+- use the rpmoptflags unconditionally, don't do own compiler flag
+  magic. Fixes debuginfo package build
+
+-------------------------------------------------------------------
+Wed Dec 28 10:30:28 UTC 2011 - [email protected]
+
+- Package /usr/share/aclocal to avoid build dependency on automake.
+
+-------------------------------------------------------------------
+Wed Nov 30 22:39:35 UTC 2011 - [email protected]
+
+- Use O_CLOEXEC in library code. 
+
+-------------------------------------------------------------------
+Tue Nov 29 11:51:38 UTC 2011 - [email protected]
+
+- Remove redundant/unwanted tags/section (cf. specfile guidelines)
+
+-------------------------------------------------------------------
+Tue Nov 29 08:20:23 UTC 2011 - [email protected]
+
+- Use original source tarball 
+
+-------------------------------------------------------------------
+Mon Nov 28 12:00:00 UTC 2011 - [email protected]
+
+- Update to version 7.23.1:
+  + Empty headers can be sent in HTTP requests by terminating with a semicolon
+  + SSL session sharing support added to curl_share_setopt()
+  + Added support to MAIL FROM for the optional SIZE parameter
+  + smtp: Added support for NTLM authentication
+  + curl tool: code split into tool_*.[ch] files
+  + lots of bugfixes
+-------------------------------------------------------------------
+Mon Oct  3 15:44:17 UTC 2011 - [email protected]
+
+- Update to version 7.22.0:
+  + Added CURLOPT_GSSAPI_DELEGATION
+  + Added support for NTLM delegation to Samba's winbind daemon
+    helper ntlm_auth
+  + Display notes from setup file in testcurl.pl
+  + BSD-style lwIP TCP/IP stack experimental support on Windows
+  + OpenSSL: Use SSL_MODE_RELEASE_BUFFERS if available
+  + --delegation was added to set CURLOPT_GSSAPI_DELEGATION
+  + nss: start with no database if the selected database is broken
+  + telnet: allow programmatic use on Windows
+  + for a list of bugfixes, see
+    http://curl.haxx.se/changes.html#7_22_0
+- Drop curl-openssl-release-buffers.patch: fixed upstream.
+- Add curl-fix-m4.patch: Use 'x' in configure scripts. Fixes issues
+  when configure is run with -Werror -Wall.
+
+-------------------------------------------------------------------
+Sun Sep 18 00:10:42 UTC 2011 - [email protected]
+
+- Remove redundant tags/sections from specfile
+- Use %_smp_mflags for parallel build
+
+-------------------------------------------------------------------
+Fri Sep 16 17:22:44 UTC 2011 - [email protected]
+
+- Add curl-devel to baselibs
+
+-------------------------------------------------------------------
++++ 915 more lines (skipped)
++++ between /dev/null
++++ and /work/SRC/openSUSE:12.3:Update/.curl.3001.new/curl.changes

New:
----
  baselibs.conf
  curl-7.28.1.tar.lzma
  curl-7.28.1.tar.lzma.asc
  curl-CVE-2013-0249.patch
  curl-CVE-2013-1944.patch
  curl-CVE-2013-2174.patch
  curl-CVE-2013-4545.patch
  curl-CVE-2014-0015-NTLM_connection_reuse.patch
  curl-CVE-2014-0138.patch
  curl-CVE-2014-0139.patch
  curl-CVE-2014-3613.patch
  curl-test172_cookie_expiration.patch
  curl.changes
  curl.keyring
  curl.spec
  dont-mess-with-rpmoptflags.diff
  libcurl-ocloexec.patch

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ curl.spec ++++++
#
# spec file for package curl
#
# Copyright (c) 2014 SUSE LINUX Products GmbH, Nuernberg, Germany.
#
# All modifications and additions to the file contributed by third parties
# remain the property of their copyright owners, unless otherwise agreed
# upon. The license for this file, and modifications and additions to the
# file, is the same license as for the pristine package itself (unless the
# license for the pristine package is not an Open Source License, in which
# case the license is the MIT License). An "Open Source License" is a
# license that conforms to the Open Source Definition (Version 1.9)
# published by the Open Source Initiative.

# Please submit bugfixes or comments via http://bugs.opensuse.org/
#


%bcond_without openssl
%bcond_with mozilla_nss
%bcond_without testsuite

Name:           curl
Version:        7.28.1
Release:        0
Summary:        A Tool for Transferring Data from URLs
License:        BSD-3-Clause and MIT
Group:          Productivity/Networking/Web/Utilities
Url:            http://curl.haxx.se/
Source:         http://curl.haxx.se/download/%{name}-%{version}.tar.lzma
Source2:        http://curl.haxx.se/download/%{name}-%{version}.tar.lzma.asc
Source3:        baselibs.conf
Source4:        %{name}.keyring
Patch:          libcurl-ocloexec.patch
Patch1:         dont-mess-with-rpmoptflags.diff
Patch2:         curl-CVE-2013-0249.patch
Patch3:         curl-CVE-2013-1944.patch
Patch4:         curl-CVE-2013-4545.patch
Patch5:         curl-CVE-2013-2174.patch
Patch6:         curl-CVE-2014-0015-NTLM_connection_reuse.patch
Patch7:         curl-test172_cookie_expiration.patch
Patch8:         curl-CVE-2014-0138.patch
Patch9:         curl-CVE-2014-0139.patch
Patch10:        curl-CVE-2014-3613.patch
# Use rpmbuild -D 'VERIFY_SIG 1' to verify signature during build or run
# one-shot check by "gpg-offline --verify --package=curl curl-*.asc".
%if 0%{?VERIFY_SIG}
BuildRequires:  gpg-offline
%endif
BuildRequires:  libidn-devel
BuildRequires:  libtool
BuildRequires:  lzma
BuildRequires:  openldap2-devel
BuildRequires:  pkg-config
BuildRequires:  zlib-devel
%if %{with openssl}
BuildRequires:  openssl-devel
%endif
%if %{with mozilla_nss}
BuildRequires:  mozilla-nss-devel
%endif
BuildRequires:  krb5-devel
BuildRequires:  libssh2-devel
BuildRequires:  openssh
%if 0%{?_with_stunnel:1}
# used by the testsuite
BuildRequires:  stunnel
%endif
BuildRoot:      %{_tmppath}/%{name}-%{version}-build
# bug437293
%ifarch ppc64
Obsoletes:      curl-64bit
%endif

%description
Curl is a client to get documents and files from or send documents to a
server using any of the supported protocols (HTTP, HTTPS, FTP, FTPS,
TFTP, DICT, TELNET, LDAP, or FILE). The command is designed to work
without user interaction or any kind of interactivity.

%package -n libcurl4
Summary:        Version 4 of cURL shared library
Group:          Productivity/Networking/Web/Utilities

%description -n libcurl4
The cURL shared library version 4 for accessing data using different
network protocols.

%package -n libcurl-devel
Summary:        A Tool for Transferring Data from URLs
Group:          Development/Libraries/C and C++
Requires:       glibc-devel
Requires:       libcurl4 = %{version}
# curl-devel (v 7.15.5) was last used in 10.2
Provides:       curl-devel <= 7.15.5
Obsoletes:      curl-devel < 7.16.2

%description -n libcurl-devel
Curl is a client to get documents and files from or send documents to a
server using any of the supported protocols (HTTP, HTTPS, FTP, GOPHER,
DICT, TELNET, LDAP, or FILE). The command is designed to work without
user interaction or any kind of interactivity.

%prep
%if 0%{?VERIFY_SIG}
%gpg_verify %{S:2}
%endif
%setup -q
%patch
%patch1
%patch2 -p1
%patch3 -p1
%patch4 -p1
%patch5 -p1
%patch6 -p1
%patch7 -p1
%patch8 -p1
%patch9 -p1
%patch10 -p1

%build
autoreconf -fi
# local hack to make curl-config --libs stop printing libraries it depends on
# (currently, libtool sets link_all_deplibs=(yes|unknown) everywhere,
# will hopefully change in the future)
sed -i 's/link_all_deplibs=unknown/link_all_deplibs=no/' configure
%configure \
        --enable-ipv6 \
%if %{with openssl}
        --with-ssl \
        --with-ca-path=/etc/ssl/certs/ \
%else
        --without-ssl \
%if %{with mozilla_nss}
        --with-nss \
%endif
%endif
        --with-gssapi=/usr/lib/mit \
        --with-libssh2\
        --enable-hidden-symbols \
        --disable-static \
    --enable-threaded-resolver

: if this fails, the above sed hack did not work
./libtool --config | grep -q link_all_deplibs=no
# enable-hidden-symbols needs gcc4 and causes that curl exports only its API
make %{?_smp_mflags}

%if %{with testsuite}

%check
cd tests
make
# make sure the testsuite runs don't race on MP machines in autobuild
if test -z "$BUILD_INCARNATION" -a -r /.buildenv; then
        . /.buildenv
fi
if test -z "$BUILD_INCARNATION"; then
        BUILD_INCARNATION=0
fi
base=$((8990 + $BUILD_INCARNATION * 20))
perl ./runtests.pl -a -b$base || {
%if 0%{?curl_testsuite_fatal:1}
        exit
%else
        echo "WARNING: runtests.pl failed with code $?, continuing nevertheless"
%endif
}
%endif

%install
%{makeinstall}
rm $RPM_BUILD_ROOT%_libdir/libcurl.la
install -d $RPM_BUILD_ROOT/usr/share/aclocal
install -m 644 docs/libcurl/libcurl.m4 $RPM_BUILD_ROOT/usr/share/aclocal/

%post -n libcurl4 -p /sbin/ldconfig

%postun -n libcurl4 -p /sbin/ldconfig

%files
%defattr(-,root,root)
%doc README RELEASE-NOTES
%doc docs/{BUGS,FAQ,FEATURES,MANUAL,RESOURCES,TODO,TheArtOfHttpScripting}
%doc lib/README.curl_off_t
%{_prefix}/bin/curl
%doc %{_mandir}/man1/curl.1%{ext_man}

%files -n libcurl4
%defattr(-,root,root)
%{_libdir}/libcurl.so.4*

%files -n libcurl-devel
%defattr(-,root,root)
%{_prefix}/bin/curl-config
%{_prefix}/include/curl
%dir %{_prefix}/share/aclocal
%{_prefix}/share/aclocal/libcurl.m4
%{_libdir}/libcurl.so
%{_libdir}/pkgconfig/libcurl.pc
%{_mandir}/man1/curl-config.1%{ext_man}
%{_mandir}/man1/mk-ca-bundle.1%{ext_man}
%{_mandir}/man3/*
%doc docs/libcurl/symbols-in-versions

%changelog
++++++ baselibs.conf ++++++
libcurl4
  obsoletes "curl-<targettype> <= <version>"
  provides "curl-<targettype> = <version>"
curl-devel
  requires -curl-<targettype>
  requires "libcurl4-<targettype> = <version>"
++++++ curl-CVE-2013-0249.patch ++++++
From ee45a34907ffeb5fd95b0513040d8491d565b663 Mon Sep 17 00:00:00 2001
From: Eldar Zaitov <[email protected]>
Date: Wed, 30 Jan 2013 23:22:27 +0100
Subject: [PATCH] Curl_sasl_create_digest_md5_message: fix buffer overflow

When negotiating SASL DIGEST-MD5 authentication, the function
Curl_sasl_create_digest_md5_message() uses the data provided from the
server without doing the proper length checks and that data is then
appended to a local fixed-size buffer on the stack.

This vulnerability can be exploited by someone who is in control of a
server that a libcurl based program is accessing with POP3, SMTP or
IMAP. For applications that accept user provided URLs, it is also
thinkable that a malicious user would feed an application with a URL to
a server hosting code targeting this flaw.

Bug: http://curl.haxx.se/docs/adv_20130206.html
---
 lib/curl_sasl.c |   23 ++++++-----------------
 1 file changed, 6 insertions(+), 17 deletions(-)

Index: curl-7.28.1/lib/curl_sasl.c
===================================================================
--- curl-7.28.1.orig/lib/curl_sasl.c    2012-08-08 22:45:18.000000000 +0200
+++ curl-7.28.1/lib/curl_sasl.c 2013-02-07 11:55:15.183277599 +0100
@@ -345,9 +345,7 @@ CURLcode Curl_sasl_create_digest_md5_mes
     snprintf(&HA1_hex[2 * i], 3, "%02x", digest[i]);
 
   /* Prepare the URL string */
-  strcpy(uri, service);
-  strcat(uri, "/");
-  strcat(uri, realm);
+  snprintf(uri, sizeof(uri), "%s/%s", service, realm);
 
   /* Calculate H(A2) */
   ctxt = Curl_MD5_init(Curl_DIGEST_MD5);
@@ -391,20 +389,11 @@ CURLcode Curl_sasl_create_digest_md5_mes
   for(i = 0; i < MD5_DIGEST_LEN; i++)
     snprintf(&resp_hash_hex[2 * i], 3, "%02x", digest[i]);
 
-  strcpy(response, "username=\"");
-  strcat(response, userp);
-  strcat(response, "\",realm=\"");
-  strcat(response, realm);
-  strcat(response, "\",nonce=\"");
-  strcat(response, nonce);
-  strcat(response, "\",cnonce=\"");
-  strcat(response, cnonce);
-  strcat(response, "\",nc=");
-  strcat(response, nonceCount);
-  strcat(response, ",digest-uri=\"");
-  strcat(response, uri);
-  strcat(response, "\",response=");
-  strcat(response, resp_hash_hex);
+  snprintf(response, sizeof(response),
+           "username=\"%s\",realm=\"%s\",nonce=\"%s\","
+           "cnonce=\"%s\",nc=\"%s\",digest-uri=\"%s\",response=%s",
+           userp, realm, nonce,
+           cnonce, nonceCount, uri, resp_hash_hex);
 
   /* Base64 encode the reply */
   return Curl_base64_encode(data, response, 0, outptr, outlen);
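
The essence of the fix above is replacing unbounded strcpy()/strcat() calls
with length-bounded snprintf() into the fixed stack buffers. A standalone
sketch of that pattern (made-up buffer size and inputs, not curl's code):

#include <stdio.h>

/* Illustration only: snprintf() bounds every write to the destination,
 * while the old strcpy()/strcat() chain wrote as much as the server sent. */
int main(void)
{
  const char *service = "smtp";
  const char *realm = "server-supplied-realm-that-could-be-arbitrarily-long";
  char uri[128];

  /* old, overflow-prone shape:
   *   strcpy(uri, service); strcat(uri, "/"); strcat(uri, realm);        */

  /* fixed shape: never writes more than sizeof(uri) bytes, including NUL */
  int needed = snprintf(uri, sizeof(uri), "%s/%s", service, realm);
  if(needed < 0 || (size_t)needed >= sizeof(uri))
    fprintf(stderr, "input did not fit, output was truncated\n");

  printf("uri = %s\n", uri);
  return 0;
}
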
++++++ curl-CVE-2013-1944.patch ++++++
diff --git a/lib/cookie.c b/lib/cookie.c
index 35a3731..1aaf669 100644
--- a/lib/cookie.c
+++ b/lib/cookie.c
@@ -118,15 +118,29 @@ static void freecookie(struct Cookie *co)
   free(co);
 }
 
-static bool tailmatch(const char *little, const char *bigone)
+static bool tailmatch(const char *cooke_domain, const char *hostname)
 {
-  size_t littlelen = strlen(little);
-  size_t biglen = strlen(bigone);
+  size_t cookie_domain_len = strlen(cooke_domain);
+  size_t hostname_len = strlen(hostname);
 
-  if(littlelen > biglen)
+  if(hostname_len < cookie_domain_len)
     return FALSE;
 
-  return Curl_raw_equal(little, bigone+biglen-littlelen) ? TRUE : FALSE;
+  if(!Curl_raw_equal(cooke_domain, hostname+hostname_len-cookie_domain_len))
+    return FALSE;
+
+  /* A lead char of cookie_domain is not '.'.
+     RFC6265 4.1.2.3. The Domain Attribute says:
+       For example, if the value of the Domain attribute is
+       "example.com", the user agent will include the cookie in the Cookie
+       header when making HTTP requests to example.com, www.example.com, and
+       www.corp.example.com.
+   */
+  if(hostname_len == cookie_domain_len)
+    return TRUE;
+  if('.' == *(hostname + hostname_len - cookie_domain_len - 1))
+    return TRUE;
+  return FALSE;
 }
 
 /*
diff --git a/tests/data/Makefile.am b/tests/data/Makefile.am
index 0528a25..b51f524 100644
--- a/tests/data/Makefile.am
+++ b/tests/data/Makefile.am
@@ -78,6 +78,7 @@ test1118 test1119 test1120 test1121 test1122 test1123 test1124 test1125       \
 test1126 test1127 test1128 test1129 test1130 test1131 test1132 \
 test1200 test1201 test1202 test1203 test1204 test1205 test1206 test1207 \
 test1208 test1209 test1210 test1211 \
+test1218 \
 test1220 \
 test1300 test1301 test1302 test1303 test1304 test1305  \
 test1306 test1307 test1308 test1309 test1310 test1311 test1312 test1313 \
diff --git a/tests/data/test1218 b/tests/data/test1218
new file mode 100644
index 0000000..7d86547
--- /dev/null
+++ b/tests/data/test1218
@@ -0,0 +1,61 @@
+<testcase>
+<info>
+<keywords>
+HTTP
+HTTP GET
+HTTP proxy
+cookies
+</keywords>
+</info>
+
+# This test is very similar to 1216, only that it sets the cookies from the
+# first site instead of reading from a file
+<reply>
+<data>
+HTTP/1.1 200 OK
+Date: Tue, 25 Sep 2001 19:37:44 GMT
+Set-Cookie: domain=.example.fake; bug=fixed;
+Content-Length: 21
+
+This server says moo
+</data>
+</reply>
+
+# Client-side
+<client>
+<server>
+http
+</server>
+ <name>
+HTTP cookies and domains with same prefix
+ </name>
+ <command>
+http://example.fake/c/1218 http://example.fake/c/1218 http://bexample.fake/c/1218 -b nonexisting -x %HOSTIP:%HTTPPORT
+</command>
+</client>
+
+# Verify data after the test has been "shot"
+<verify>
+<strip>
+^User-Agent:.*
+</strip>
+<protocol>
+GET http://example.fake/c/1218 HTTP/1.1
+Host: example.fake
+Accept: */*
+Proxy-Connection: Keep-Alive
+
+GET http://example.fake/c/1218 HTTP/1.1
+Host: example.fake
+Accept: */*
+Proxy-Connection: Keep-Alive
+Cookie: bug=fixed
+
+GET http://bexample.fake/c/1218 HTTP/1.1
+Host: bexample.fake
+Accept: */*
+Proxy-Connection: Keep-Alive
+
+</protocol>
+</verify>
+</testcase>
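
The tightened tailmatch() rule described above can be shown with a small
standalone sketch (not the library code): a cookie domain only matches the
host when it equals the host or is a whole-label suffix, i.e. the character
just before the matching suffix is a dot.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <strings.h>

/* Standalone sketch of the CVE-2013-1944 rule: "ample.com" must not
 * tail-match "example.com"; only whole-label suffixes count. */
static bool tailmatch(const char *cookie_domain, const char *hostname)
{
  size_t dlen = strlen(cookie_domain);
  size_t hlen = strlen(hostname);

  if(hlen < dlen)
    return false;
  if(strcasecmp(cookie_domain, hostname + hlen - dlen) != 0)
    return false;
  /* exact match, or the suffix starts right after a '.' */
  return hlen == dlen || hostname[hlen - dlen - 1] == '.';
}

int main(void)
{
  printf("%d\n", tailmatch("example.com", "www.example.com")); /* 1 */
  printf("%d\n", tailmatch("ample.com", "example.com"));       /* 0: prefix trick */
  printf("%d\n", tailmatch("example.com", "example.com"));     /* 1 */
  return 0;
}
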
++++++ curl-CVE-2013-2174.patch ++++++
commit 45030219bf8b44270d40fc62e8a02411612d00cc
Author: Daniel Stenberg <[email protected]>
Date:   Sun May 19 23:24:29 2013 +0200

    Curl_urldecode: no peeking beyond end of input buffer
    
    Security problem: ....
    
    If a program would give a string like "%" to curl_easy_unescape(), it
    would still consider the % as start of an encoded character. The
    function then not only read beyond the buffer but it would also deduct
    the *unsigned* counter variable for how many more bytes there's left to
    read in the buffer by two, making the counter wrap. Continuing this, the
    function would go on reading beyond the buffer and soon writing beyond
    the allocated target buffer...
    
    Reported-by: Timo Sirainen

Index: curl-7.19.0/lib/escape.c
===================================================================
--- curl-7.19.0.orig/lib/escape.c       2013-06-13 12:17:06.251345362 +0200
+++ curl-7.19.0/lib/escape.c    2013-06-13 12:17:07.228374970 +0200
@@ -149,7 +149,8 @@ char *curl_easy_unescape(CURL *handle, c
 
   while(--alloc > 0) {
     in = *string;
-    if(('%' == in) && ISXDIGIT(string[1]) && ISXDIGIT(string[2])) {
+    if(('%' == in) && (alloc > 2) &&
+       ISXDIGIT(string[1]) && ISXDIGIT(string[2])) {
       /* this is two hexadecimal digits following a '%' */
       char hexstr[3];
       char *ptr;
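
A standalone sketch of the bounds rule the patch introduces (illustrative
only, not curl's decoder): a '%' is only treated as the start of an escape
when at least two more input characters remain, so a trailing "%" or "%A"
is copied through literally instead of being read past the end of the buffer.

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char *percent_decode(const char *in)
{
  size_t left = strlen(in);
  char *out = malloc(left + 1), *o = out;

  if(!out)
    return NULL;
  while(left > 0) {
    /* only decode when two more characters actually exist */
    if(in[0] == '%' && left > 2 && isxdigit((unsigned char)in[1]) &&
       isxdigit((unsigned char)in[2])) {
      char hex[3] = { in[1], in[2], 0 };
      *o++ = (char)strtol(hex, NULL, 16);
      in += 3;
      left -= 3;
    }
    else {
      *o++ = *in++;
      left--;
    }
  }
  *o = 0;
  return out;
}

int main(void)
{
  char *s = percent_decode("a%20b%");  /* trailing '%' is kept as-is */
  puts(s ? s : "(alloc failed)");
  free(s);
  return 0;
}
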
++++++ curl-CVE-2013-4545.patch ++++++
commit 3c3622b66221d89509cffaa693fc7dcd5c5b96cf
Author: Daniel Stenberg <[email protected]>
Date:   Wed Oct 2 15:31:10 2013 +0200

    OpenSSL: acknowledge CURLOPT_SSL_VERIFYHOST without VERIFYPEER
    
    Setting only CURLOPT_SSL_VERIFYHOST without CURLOPT_SSL_VERIFYPEER set
    should still verify that the host name fields in the server certificate
    is fine or return failure.
    
    Bug: http://curl.haxx.se/mail/lib-2013-10/0002.html
    Reported-by: Ishan SinghLevett

diff --git a/lib/ssluse.c b/lib/ssluse.c
index 4f3c1e1..9974ac8 100644
--- a/lib/ssluse.c
+++ b/lib/ssluse.c
@@ -2351,7 +2351,7 @@ ossl_connect_step3(struct connectdata *conn,
    * operations.
    */
 
-  if(!data->set.ssl.verifypeer)
+  if(!data->set.ssl.verifypeer && !data->set.ssl.verifyhost)
     (void)servercert(conn, connssl, FALSE);
   else
     retcode = servercert(conn, connssl, TRUE);
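
From the application side, the safe configuration is simply to leave both
certificate checks enabled; the bug only affected the unusual
VERIFYHOST-without-VERIFYPEER combination. A minimal libcurl usage sketch
(the URL is a placeholder):

#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(!curl)
    return 1;

  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L); /* verify the chain */
  curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L); /* verify the name  */

  CURLcode rc = curl_easy_perform(curl);
  if(rc != CURLE_OK)
    fprintf(stderr, "curl: %s\n", curl_easy_strerror(rc));

  curl_easy_cleanup(curl);
  return rc == CURLE_OK ? 0 : 1;
}
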
++++++ curl-CVE-2014-0015-NTLM_connection_reuse.patch ++++++
commit 8ae35102c43d8d06572c3a1292eb6e27e663c78d
Author: Daniel Stenberg <[email protected]>
Date:   Tue Jan 7 09:33:54 2014 +0100

    ConnectionExists: fix NTLM check for new connection
    
    When the requested authentication bitmask includes NTLM, we cannot
    re-use a connection for another username/password as we then risk
    re-using NTLM (connection-based auth).
    
    This has the unfortunate downside that if you include NTLM as a possible
    auth, you cannot re-use connections for other usernames/passwords even
    if NTLM doesn't end up the auth type used.
    
    Reported-by: Paras S
    Patched-by: Paras S
    Bug: http://curl.haxx.se/mail/lib-2014-01/0046.html

Index: curl-7.28.1/lib/url.c
===================================================================
--- curl-7.28.1.orig/lib/url.c  2014-01-14 14:35:12.627855282 +0100
+++ curl-7.28.1/lib/url.c       2014-01-14 14:35:52.164309737 +0100
@@ -2942,8 +2942,8 @@ ConnectionExists(struct SessionHandle *d
   struct connectdata *check;
   struct connectdata *chosen = 0;
   bool canPipeline = IsPipeliningPossible(data, needle);
-  bool wantNTLM = (data->state.authhost.want==CURLAUTH_NTLM) ||
-                  (data->state.authhost.want==CURLAUTH_NTLM_WB);
+  bool wantNTLM = (data->state.authhost.want & CURLAUTH_NTLM) ||
+                  (data->state.authhost.want & CURLAUTH_NTLM_WB);
 
   for(i=0; i< data->state.connc->num; i++) {
     bool match = FALSE;
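
The one-line change above swaps an equality test for a bitmask test. A
standalone sketch of why that matters (the AUTH_* constants here are
illustrative stand-ins for the CURLAUTH_* bits): a request for "any"
authentication includes the NTLM bit, which '==' misses.

#include <stdbool.h>
#include <stdio.h>

#define AUTH_BASIC (1u << 0)
#define AUTH_NTLM  (1u << 3)
#define AUTH_ANY   (~0u)

static bool wants_ntlm(unsigned int want)
{
  /* fixed form: bit test.  The old, broken form was: want == AUTH_NTLM,
   * which misses AUTH_ANY and any other combination containing NTLM. */
  return (want & AUTH_NTLM) != 0;
}

int main(void)
{
  printf("NTLM only: %d\n", wants_ntlm(AUTH_NTLM));  /* 1 */
  printf("ANY:       %d\n", wants_ntlm(AUTH_ANY));   /* 1 (old code said 0) */
  printf("BASIC:     %d\n", wants_ntlm(AUTH_BASIC)); /* 0 */
  return 0;
}
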
++++++ curl-CVE-2014-0138.patch ++++++
From 9db36827fb5eade403143b36566914ee9dc37d7b Mon Sep 17 00:00:00 2001
From: Steve Holme <[email protected]>
Date: Thu, 20 Feb 2014 23:51:36 +0000
Subject: [PATCH] url: Fixed connection re-use when using different log-in
 credentials

In addition to FTP, other connection based protocols such as IMAP, POP3,
SMTP, SCP, SFTP and LDAP require a new connection when different log-in
credentials are specified. Fixed the detection logic to include these
other protocols.

Bug: http://curl.haxx.se/docs/adv_20140326A.html
---
 lib/http.c    | 2 +-
 lib/url.c     | 7 ++++---
 lib/urldata.h | 2 ++
 3 files changed, 7 insertions(+), 4 deletions(-)

Index: curl-7.28.1/lib/http.c
===================================================================
--- curl-7.28.1.orig/lib/http.c 2014-04-10 13:48:24.391462756 +0200
+++ curl-7.28.1/lib/http.c      2014-04-10 13:48:26.799485773 +0200
@@ -148,7 +148,7 @@ const struct Curl_handler Curl_handler_h
   ZERO_NULL,                            /* readwrite */
   PORT_HTTPS,                           /* defport */
   CURLPROTO_HTTP | CURLPROTO_HTTPS,     /* protocol */
-  PROTOPT_SSL                           /* flags */
+  PROTOPT_SSL | PROTOPT_CREDSPERREQUEST /* flags */
 };
 #endif
 
Index: curl-7.28.1/lib/url.c
===================================================================
--- curl-7.28.1.orig/lib/url.c  2014-04-10 13:48:26.800485782 +0200
+++ curl-7.28.1/lib/url.c       2014-04-10 13:50:40.772766689 +0200
@@ -3117,10 +3117,10 @@ ConnectionExists(struct SessionHandle *d
             continue;
           }
         }
-        if((needle->handler->protocol & CURLPROTO_FTP) ||
-           ((needle->handler->protocol & CURLPROTO_HTTP) && wantNTLM)) {
-          /* This is FTP or HTTP+NTLM, verify that we're using the same name
-             and password as well */
+      if((!(needle->handler->flags & PROTOPT_CREDSPERREQUEST)) ||
+       ((needle->handler->protocol & CURLPROTO_HTTP) && wantNTLM)) {
+        /* This protocol requires credentials per connection or is HTTP+NTLM,
+           so verify that we're using the same name and password as well */
           if(!strequal(needle->user, check->user) ||
              !strequal(needle->passwd, check->passwd)) {
             /* one of them was different */
Index: curl-7.28.1/lib/urldata.h
===================================================================
--- curl-7.28.1.orig/lib/urldata.h      2014-04-10 13:48:24.392462766 +0200
+++ curl-7.28.1/lib/urldata.h   2014-04-10 13:48:26.801485792 +0200
@@ -755,6 +755,8 @@ struct Curl_handler {
                                       gets a default */
 #define PROTOPT_NOURLQUERY (1<<6)   /* protocol can't handle
                                         url query strings (?foo=bar) ! */
+#define PROTOPT_CREDSPERREQUEST (1<<7) /* requires login creditials per request
+                                          as opposed to per connection */
 
 
 /* return the count of bytes sent, or -1 on error */
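
The new PROTOPT_CREDSPERREQUEST flag gates whether a cached connection may
be handed to a request that carries different credentials. A hypothetical
sketch of that decision (struct layout and names are illustrative, and the
additional HTTP+NTLM case is omitted):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define PROTOPT_CREDSPERREQUEST (1 << 7)

struct handler { const char *scheme; unsigned int flags; };
struct conn    { const struct handler *h; const char *user, *passwd; };

/* Protocols whose login is bound to the connection (FTP, IMAP, POP3, SMTP,
 * SCP, SFTP, LDAP) must not reuse a connection opened with other
 * credentials; per-request protocols such as plain HTTP may. */
static bool can_reuse(const struct conn *want, const struct conn *have)
{
  if(want->h != have->h)
    return false;
  if(!(want->h->flags & PROTOPT_CREDSPERREQUEST))
    return strcmp(want->user, have->user) == 0 &&
           strcmp(want->passwd, have->passwd) == 0;
  return true;
}

int main(void)
{
  static const struct handler ftp  = { "ftp",  0 };
  static const struct handler http = { "http", PROTOPT_CREDSPERREQUEST };
  struct conn a = { &ftp, "alice", "a" }, b = { &ftp, "bob", "b" };
  struct conn c = { &http, "alice", "a" }, d = { &http, "bob", "b" };

  printf("ftp  alice->bob reuse: %d\n", can_reuse(&b, &a)); /* 0 */
  printf("http alice->bob reuse: %d\n", can_reuse(&d, &c)); /* 1 */
  return 0;
}
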
++++++ curl-CVE-2014-0139.patch ++++++
From f44e3a4d0df9397278735d1520f7681715b83b59 Mon Sep 17 00:00:00 2001
From: Daniel Stenberg <[email protected]>
Date: Mon, 3 Mar 2014 11:46:36 +0100
Subject: [PATCH] Curl_cert_hostcheck: reject IP address wildcard matches

There are server certificates used with IP address in the CN field, but
we MUST not allow wildcard certs for hostnames given as IP addresses
only. Therefore we must make Curl_cert_hostcheck() fail such attempts.

Bug: http://curl.haxx.se/docs/adv_20140326B.html
Reported-by: Richard Moore
---
 lib/hostcheck.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/lib/hostcheck.c b/lib/hostcheck.c
index 24ddd89..d144f31 100644
--- a/lib/hostcheck.c
+++ b/lib/hostcheck.c
@@ -28,6 +28,7 @@
 
 #include "hostcheck.h"
 #include "rawstr.h"
+#include "inet_pton.h"
 
 /*
  * Match a hostname against a wildcard pattern.
@@ -43,11 +44,23 @@ static int hostmatch(const char *hostname, const char *pattern)
   const char *pattern_label_end, *pattern_wildcard, *hostname_label_end;
   int wildcard_enabled;
   size_t prefixlen, suffixlen;
+  struct in_addr ignored;
+#ifdef ENABLE_IPV6
+  struct sockaddr_in6 si6;
+#endif
   pattern_wildcard = strchr(pattern, '*');
   if(pattern_wildcard == NULL)
     return Curl_raw_equal(pattern, hostname) ?
       CURL_HOST_MATCH : CURL_HOST_NOMATCH;
 
+  /* detect IP address as hostname and fail the match if so */
+  if(Curl_inet_pton(AF_INET, hostname, &ignored) > 0)
+    return CURL_HOST_NOMATCH;
+#ifdef ENABLE_IPV6
+  else if(Curl_inet_pton(AF_INET6, hostname, &si6.sin6_addr) > 0)
+    return CURL_HOST_NOMATCH;
+#endif
+
   /* We require at least 2 dots in pattern to avoid too wide wildcard
      match. */
   wildcard_enabled = 1;
-- 
1.9.0
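
The pre-check added above is a plain inet_pton() probe: if the name being
verified parses as an IPv4 or IPv6 literal, wildcard matching is refused.
A standalone sketch (not curl's hostcheck code):

#include <arpa/inet.h>
#include <stdbool.h>
#include <stdio.h>

/* If the host the client connected to is a literal IP address, a wildcard
 * certificate name such as "*.example.com" (or "10.0.0.*") must never
 * match it. */
static bool is_ip_literal(const char *host)
{
  struct in_addr  a4;
  struct in6_addr a6;
  return inet_pton(AF_INET, host, &a4) == 1 ||
         inet_pton(AF_INET6, host, &a6) == 1;
}

int main(void)
{
  const char *hosts[] = { "www.example.com", "192.0.2.7", "2001:db8::1" };
  for(int i = 0; i < 3; i++)
    printf("%-16s wildcard match allowed: %s\n", hosts[i],
           is_ip_literal(hosts[i]) ? "no" : "yes");
  return 0;
}
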

++++++ curl-CVE-2014-3613.patch ++++++
From eac573ea9c368f5e3c07de4d5ec5c5d0f84a021a Mon Sep 17 00:00:00 2001
From: Tim Ruehsen <[email protected]>
Date: Tue, 19 Aug 2014 21:01:28 +0200
Subject: [PATCH 1/2] cookies: only use full host matches for hosts used as IP
 address

By not detecting and rejecting domain names for partial literal IP
addresses properly when parsing received HTTP cookies, libcurl can be
fooled to both send cookies to wrong sites and to allow arbitrary sites
to set cookies for others.

Bug: http://curl.haxx.se/docs/adv_20140910.html
---
 lib/cookie.c        | 50 ++++++++++++++++++++++++++++++++++++++----------
 tests/data/test1105 |  3 +--
 tests/data/test31   | 55 +++++++++++++++++++++++++++--------------------------
 tests/data/test8    |  3 ++-
 4 files changed, 71 insertions(+), 40 deletions(-)

Index: curl-7.28.1/lib/cookie.c
===================================================================
--- curl-7.28.1.orig/lib/cookie.c       2014-09-10 15:47:19.149052494 +0200
+++ curl-7.28.1/lib/cookie.c    2014-09-10 15:47:19.191053031 +0200
@@ -94,6 +94,7 @@ Example set of cookies:
 #include "strtoofft.h"
 #include "rawstr.h"
 #include "curl_memrchr.h"
+#include "inet_pton.h"
 
 /* The last #include file should be: */
 #include "memdebug.h"
@@ -177,6 +178,27 @@ static void strstore(char **str, const c
   *str = strdup(newstr);
 }
 
+/*
+ * Return true if the given string is an IP(v4|v6) address.
+ */
+static bool isip(const char *domain)
+{
+  struct in_addr addr;
+#ifdef ENABLE_IPV6
+  struct in6_addr addr6;
+#endif
+
+  if(Curl_inet_pton(AF_INET, domain, &addr)
+#ifdef ENABLE_IPV6
+     || Curl_inet_pton(AF_INET6, domain, &addr6)
+#endif
+    ) {
+    /* domain name given as IP address */
+    return TRUE;
+  }
+
+  return FALSE;
+}
 
 /****************************************************************************
  *
@@ -290,6 +312,8 @@ Curl_cookie_add(struct SessionHandle *da
           }
         }
         else if(Curl_raw_equal("domain", name)) {
+          bool is_ip;
+          const char *dotp;
           /* note that this name may or may not have a preceding dot, but
              we don't care about that, we treat the names the same anyway */
 
@@ -333,16 +357,22 @@ Curl_cookie_add(struct SessionHandle *da
             if('.' == whatptr[0])
               whatptr++; /* ignore preceding dot */
 
-            if(!domain || tailmatch(whatptr, domain)) {
-              const char *tailptr=whatptr;
-              if(tailptr[0] == '.')
-                tailptr++;
-              strstore(&co->domain, tailptr); /* don't prefix w/dots
-                                                 internally */
+          is_ip = isip(domain ? domain : whatptr);
+
+          /* check for more dots */
+          dotp = strchr(whatptr, '.');
+          if(!dotp)
+            domain=":";
+
+          if(!domain
+             || (is_ip && !strcmp(whatptr, domain))
+             || (!is_ip && tailmatch(whatptr, domain))) {
+            strstore(&co->domain, whatptr);
               if(!co->domain) {
                 badcookie = TRUE;
                 break;
               }
+            if(!is_ip)
               co->tailmatch=TRUE; /* we always do that if the domain name was
                                      given */
             }
@@ -819,10 +849,14 @@ struct Cookie *Curl_cookie_getlist(struc
   time_t now = time(NULL);
   struct Cookie *mainco=NULL;
   size_t matches = 0;
+  bool is_ip;
 
   if(!c || !c->cookies)
     return NULL; /* no cookie struct or no cookies in the struct */
 
+  /* check if host is an IP(v4|v6) address */
+  is_ip = isip(host);
+
   co = c->cookies;
 
   while(co) {
@@ -834,8 +868,8 @@ struct Cookie *Curl_cookie_getlist(struc
 
       /* now check if the domain is correct */
       if(!co->domain ||
-         (co->tailmatch && tailmatch(co->domain, host)) ||
-         (!co->tailmatch && Curl_raw_equal(host, co->domain)) ) {
+         (co->tailmatch && !is_ip && tailmatch(co->domain, host)) ||
+         ((!co->tailmatch || is_ip) && Curl_raw_equal(host, co->domain)) ) {
         /* the right part of the host matches the domain stuff in the
            cookie data */
 
Index: curl-7.28.1/tests/data/test1105
===================================================================
--- curl-7.28.1.orig/tests/data/test1105        2012-11-19 11:49:15.000000000 +0100
+++ curl-7.28.1/tests/data/test1105     2014-09-10 15:47:19.192053044 +0200
@@ -59,8 +59,7 @@ userid=myname&password=mypassword
 # This file was generated by libcurl! Edit at your own risk.
 
 127.0.0.1      FALSE   /we/want/       FALSE   0       foobar  name
-.127.0.0.1     TRUE    "/silly/"       FALSE   0       mismatch        this
-.0.0.1 TRUE    /       FALSE   0       partmatch       present
+127.0.0.1      FALSE   "/silly/"       FALSE   0       mismatch        this
 </file>
 </verify>
 </testcase>
Index: curl-7.28.1/tests/data/test31
===================================================================
--- curl-7.28.1.orig/tests/data/test31  2012-11-19 11:49:15.000000000 +0100
+++ curl-7.28.1/tests/data/test31       2014-09-10 15:53:38.164885768 +0200
@@ -49,7 +49,8 @@ Set-Cookie: novalue; domain=reallysilly
 Set-Cookie: test=yes; domain=foo.com; expires=Sat Feb 2 11:56:27 GMT 2030
 Set-Cookie: test2=yes; domain=se; expires=Sat Feb 2 11:56:27 GMT 2030
 Set-Cookie: magic=yessir; path=/silly/; HttpOnly
-Set-Cookie: blexp=yesyes; domain=.0.0.1; domain=.0.0.1; expiry=totally bad;
+Set-Cookie: blexp=yesyes; domain=127.0.0.1; domain=127.0.0.1; expiry=totally bad;
+Set-Cookie: partialip=nono; domain=.0.0.1;
 
 boo
 </data>
@@ -93,30 +94,30 @@ Accept: */*
 # http://curl.haxx.se/docs/http-cookies.html
 # This file was generated by libcurl! Edit at your own risk.
 
-.127.0.0.1     TRUE    /silly/ FALSE   0       ismatch this
-.127.0.0.1     TRUE    /secure1/       TRUE    0       sec1value       secure1
-.127.0.0.1     TRUE    /secure2/       TRUE    0       sec2value       secure2
-.127.0.0.1     TRUE    /secure3/       TRUE    0       sec3value       secure3
-.127.0.0.1     TRUE    /secure4/       TRUE    0       sec4value       secure4
-.127.0.0.1     TRUE    /secure5/       TRUE    0       sec5value       secure5
-.127.0.0.1     TRUE    /secure6/       TRUE    0       sec6value       secure6
-.127.0.0.1     TRUE    /secure7/       TRUE    0       sec7value       secure7
-.127.0.0.1     TRUE    /secure8/       TRUE    0       sec8value       secure8
-.127.0.0.1     TRUE    /secure9/       TRUE    0       secure  very1
-#HttpOnly_.127.0.0.1   TRUE    /p1/    FALSE   0       httpo1  value1
-#HttpOnly_.127.0.0.1   TRUE    /p2/    FALSE   0       httpo2  value2
-#HttpOnly_.127.0.0.1   TRUE    /p3/    FALSE   0       httpo3  value3
-#HttpOnly_.127.0.0.1   TRUE    /p4/    FALSE   0       httpo4  value4
-#HttpOnly_.127.0.0.1   TRUE    /p4/    FALSE   0       httponly        myvalue1
-#HttpOnly_.127.0.0.1   TRUE    /p4/    TRUE    0       httpandsec      myvalue2
-#HttpOnly_.127.0.0.1   TRUE    /p4/    TRUE    0       httpandsec2     myvalue3
-#HttpOnly_.127.0.0.1   TRUE    /p4/    TRUE    0       httpandsec3     myvalue4
-#HttpOnly_.127.0.0.1   TRUE    /p4/    TRUE    0       httpandsec4     myvalue5
-#HttpOnly_.127.0.0.1   TRUE    /p4/    TRUE    0       httpandsec5     myvalue6
-#HttpOnly_.127.0.0.1   TRUE    /p4/    TRUE    0       httpandsec6     myvalue7
-#HttpOnly_.127.0.0.1   TRUE    /p4/    TRUE    0       httpandsec7     myvalue8
-#HttpOnly_.127.0.0.1   TRUE    /p4/    TRUE    0       httpandsec8     myvalue9
-.127.0.0.1     TRUE    /       FALSE   0       partmatch       present
+127.0.0.1      TRUE    /silly/ FALSE   0       ismatch this
+127.0.0.1      TRUE    /secure1/       TRUE    0       sec1value       secure1
+127.0.0.1      TRUE    /secure2/       TRUE    0       sec2value       secure2
+127.0.0.1      TRUE    /secure3/       TRUE    0       sec3value       secure3
+127.0.0.1      TRUE    /secure4/       TRUE    0       sec4value       secure4
+127.0.0.1      TRUE    /secure5/       TRUE    0       sec5value       secure5
+127.0.0.1      TRUE    /secure6/       TRUE    0       sec6value       secure6
+127.0.0.1      TRUE    /secure7/       TRUE    0       sec7value       secure7
+127.0.0.1      TRUE    /secure8/       TRUE    0       sec8value       secure8
+127.0.0.1      TRUE    /secure9/       TRUE    0       secure  very1
+#HttpOnly_127.0.0.1    TRUE    /p1/    FALSE   0       httpo1  value1
+#HttpOnly_127.0.0.1    TRUE    /p2/    FALSE   0       httpo2  value2
+#HttpOnly_127.0.0.1    TRUE    /p3/    FALSE   0       httpo3  value3
+#HttpOnly_127.0.0.1    TRUE    /p4/    FALSE   0       httpo4  value4
+#HttpOnly_127.0.0.1    TRUE    /p4/    FALSE   0       httponly        myvalue1
+#HttpOnly_127.0.0.1    TRUE    /p4/    TRUE    0       httpandsec      myvalue2
+#HttpOnly_127.0.0.1    TRUE    /p4/    TRUE    0       httpandsec2     myvalue3
+#HttpOnly_127.0.0.1    TRUE    /p4/    TRUE    0       httpandsec3     myvalue4
+#HttpOnly_127.0.0.1    TRUE    /p4/    TRUE    0       httpandsec4     myvalue5
+#HttpOnly_127.0.0.1    TRUE    /p4/    TRUE    0       httpandsec5     myvalue6
+#HttpOnly_127.0.0.1    TRUE    /p4/    TRUE    0       httpandsec6     myvalue7
+#HttpOnly_127.0.0.1    TRUE    /p4/    TRUE    0       httpandsec7     myvalue8
+#HttpOnly_127.0.0.1    TRUE    /p4/    TRUE    0       httpandsec8     myvalue9
+127.0.0.1      FALSE   /       FALSE   0       partmatch       present
 127.0.0.1      FALSE   /we/want/       FALSE   2054030187      nodomain        value
 #HttpOnly_127.0.0.1    FALSE   /silly/ FALSE   0       magic   yessir
 .0.0.1 TRUE    /we/want/       FALSE   0       blexp   yesyes
Index: curl-7.28.1/tests/data/test8
===================================================================
--- curl-7.28.1.orig/tests/data/test8   2012-11-19 11:49:15.000000000 +0100
+++ curl-7.28.1/tests/data/test8        2014-09-10 15:47:19.192053044 +0200
@@ -42,7 +42,8 @@ Set-Cookie: duplicate=test; domain=.0.0.
 Set-Cookie: cookie=yes; path=/we;
 Set-Cookie: cookie=perhaps; path=/we/want;
 Set-Cookie: nocookie=yes; path=/WE;
-Set-Cookie: blexp=yesyes; domain=.0.0.1; domain=.0.0.1; expiry=totally bad;
+Set-Cookie: blexp=yesyes; domain=%HOSTIP; domain=%HOSTIP; expiry=totally bad;
+Set-Cookie: partialip=nono; domain=.0.0.1;
 
 </file>
 <precheck>
@@ -59,7 +60,7 @@ perl -e 'if ("%HOSTIP" !~ /\.0\.0\.1$/)
 GET /we/want/8 HTTP/1.1
 Host: %HOSTIP:%HTTPPORT
 Accept: */*
-Cookie: cookie=perhaps; cookie=yes; partmatch=present; foobar=name; blexp=yesyes
+Cookie: cookie=perhaps; cookie=yes; foobar=name; blexp=yesyes
 
 </protocol>
 </verify>
Index: curl-7.28.1/tests/data/test61
===================================================================
--- curl-7.28.1.orig/tests/data/test61  2012-08-08 23:38:25.000000000 +0200
+++ curl-7.28.1/tests/data/test61       2014-09-10 15:47:19.192053044 +0200
@@ -23,6 +23,7 @@ Set-Cookie: test3=maybe; domain=foo.com;
 Set-Cookie: test4=no; domain=nope.foo.com; path=/moo; secure
 Set-Cookie: test5=name; domain=anything.com; path=/ ; secure
 Set-Cookie: fake=fooledyou; domain=..com; path=/;
+Set-Cookie: supercookie=fooledyou; domain=.com; path=/;^M
 Content-Length: 4
 
 boo
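
The resulting cookie rule can be summarised in a small standalone sketch
(illustrative only): for a host that is an IP literal, a cookie applies only
when its domain is exactly that address; tail matching such as ".0.0.1" is
reserved for real host names.

#include <arpa/inet.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <strings.h>

static bool isip(const char *host)
{
  struct in_addr a4; struct in6_addr a6;
  return inet_pton(AF_INET, host, &a4) == 1 ||
         inet_pton(AF_INET6, host, &a6) == 1;
}

static bool cookie_applies(const char *cookie_domain, const char *host)
{
  if(isip(host))
    return strcasecmp(cookie_domain, host) == 0;   /* full match only */
  /* otherwise: whole-label suffix ("tail") match */
  size_t dlen = strlen(cookie_domain), hlen = strlen(host);
  return hlen >= dlen &&
         strcasecmp(cookie_domain, host + hlen - dlen) == 0 &&
         (hlen == dlen || host[hlen - dlen - 1] == '.');
}

int main(void)
{
  printf("%d\n", cookie_applies("0.0.1", "127.0.0.1"));            /* 0 */
  printf("%d\n", cookie_applies("127.0.0.1", "127.0.0.1"));        /* 1 */
  printf("%d\n", cookie_applies("example.com", "www.example.com"));/* 1 */
  return 0;
}
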
++++++ curl-test172_cookie_expiration.patch ++++++
Index: curl-7.19.7/tests/data/test172
===================================================================
--- curl-7.19.7.orig/tests/data/test172 2008-11-19 22:12:35.000000000 +0100
+++ curl-7.19.7/tests/data/test172      2014-02-04 15:05:46.817554144 +0100
@@ -36,7 +36,7 @@ http://%HOSTIP:%HTTPPORT/we/want/172 -b
 
 .%HOSTIP       TRUE    /silly/ FALSE   0       ismatch this
 .%HOSTIP       TRUE    /       FALSE   0       partmatch       present
-%HOSTIP        FALSE   /we/want/       FALSE   1391252187      nodomain        value
+%HOSTIP        FALSE   /we/want/       FALSE   2139150993      nodomain        value
 </file>
 </client>
 
++++++ curl.keyring ++++++
pub   1024D/279D5C91 2003-04-28
uid                  Daniel Stenberg (Haxx) <[email protected]>
sub   1024g/B70B3510 2003-04-28

-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v2.0.19 (GNU/Linux)

mQGiBD6tnnoRBACRPnFBVoapBrTpPrCNZ2rq3DcmW6n/soQJW47+zP+vcrcxQ1WJ
QiWSzLGO+QOIUZSYfnliR22r8HkFX9EUSW3IAcRMJMsaO3wMJ0a+78a9QqWLp6RV
0arcQkuuCvG79h+yJ6NnoAXe1geRt8vNGsaWtsS91CtYlTSs6JVtaRLnYwCg/Ly1
EFgvNZ6SJRc/8I5rRv0lrz8D/0goih2kZ5z4SI+r2hgABNcN7g565YwGKaQDbIch
soh3OBzgETWc3wuAZqmCzQXPXMpMx+ziqX6XDzDKNiGL1CdrBJQd0II8UutWVDje
f9UxLfo02YQ8diGYeq0u9k1RezC13w4TVUmQfg0Uqn4xM6DNzO1O6yCK8rlNwsvL
gHNJA/9m1pfzjpvdxtmJNKRU3C4cRCjXhxNdM7laSEj0/wOGaR2QWWEge51orWwo
SLQUIe4BDPvtRStQHC+tI7qr7d12rMMEBXviJC5EkGBOzlgWr9virjM/u/pkGMc2
m5r3pVuWH/JSsHsV952y2kWP64uP4zdLXOpVzX/xs0sYJ9nOPLQnRGFuaWVsIFN0
ZW5iZXJnIChIYXh4KSA8ZGFuaWVsQGhheHguc2U+iFkEExECABkFAj6tnnoECwcD
AgMVAgMDFgIBAh4BAheAAAoJEHjhHGsnnVyRjngAn1gK6Q0qUTHwYJBAhIDmrRi0
ebfDAJ4qDSHd6UU2MEkkFCgGfYgEBXKbb7kBDQQ+rZ59EAQAmYsA8gPjJ75gOIPb
XNg9Z31QzIz65qS9XdNsFNAdKxnY4b72nhc0oaS9/7Dcdf2Q+1mDa2p72DWk+9iz
7knmBL++csBP2z9eMe5h8oV53prqNOHDHyL3WLOa25ga9381gZnzWoQME74iSBBM
wDw8vbLEgIZ34JaQ7Oe+9N3+6n8AAwcD/Av+Ms+3gCc5pLp4nx36qqi36fodaG9+
dwIcMbr9bivEtjmDHeuPsD6X1J9+Y/ikUBIDpMPv33lJxLoubOtpLhEuN2XN/ojT
rueVPDKA1f+GyfHnyfpf/78IgX1hGVqu/3RBWKPpXFwSZA4q8vFR+FaPC5WbU68t
FLJpYuC9ZO/LiEYEGBECAAYFAj6tnn0ACgkQeOEcayedXJGtPQCgxrbd59afemZ9
OIadZD8kUGC29dUAoJ94aGUkWCwoEiPyEZRGXv9XRlfx
=yTQx
-----END PGP PUBLIC KEY BLOCK-----
++++++ dont-mess-with-rpmoptflags.diff ++++++
Index: configure.ac
===================================================================
--- configure.ac.orig   2013-02-07 11:55:15.150276599 +0100
+++ configure.ac        2013-02-07 11:55:15.167277116 +0100
@@ -288,10 +288,6 @@ dnl platform/compiler/architecture speci
 dnl **********************************************************************
 
 CURL_CHECK_COMPILER
-CURL_SET_COMPILER_BASIC_OPTS
-CURL_SET_COMPILER_DEBUG_OPTS
-CURL_SET_COMPILER_OPTIMIZE_OPTS
-CURL_SET_COMPILER_WARNING_OPTS
 
 if test "$compiler_id" = "INTEL_UNIX_C"; then
   #
++++++ libcurl-ocloexec.patch ++++++
Open library file descriptors with O_CLOEXEC
This patch is non-portable: it needs Linux 2.6.23 and glibc 2.7
or later; other combinations (old kernel with new glibc and vice versa)
will result in a crash.

To make it portable you have to test O_CLOEXEC support at *runtime*;
a compile-time check is not enough.
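
The usual portable fallback alluded to above is to set FD_CLOEXEC with
fcntl() right after opening, accepting that this is not atomic with respect
to a concurrent fork+exec. A sketch of that pattern (not part of this patch):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Where O_CLOEXEC (or the fopen "e" mode flag) cannot be assumed, set
 * FD_CLOEXEC right after open().  There is a window before fcntl() in
 * which a concurrent fork+exec could still inherit the descriptor, which
 * is why the patch prefers the atomic O_CLOEXEC flag on new enough systems. */
static int open_cloexec(const char *path, int flags)
{
#ifdef O_CLOEXEC
  return open(path, flags | O_CLOEXEC);
#else
  int fd = open(path, flags);
  if(fd >= 0)
    fcntl(fd, F_SETFD, FD_CLOEXEC);
  return fd;
#endif
}

int main(void)
{
  int fd = open_cloexec("/etc/hostname", O_RDONLY);
  if(fd >= 0) {
    printf("opened fd %d with close-on-exec set\n", fd);
    close(fd);
  }
  return 0;
}
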


Index: lib/cookie.c
===================================================================
--- lib/cookie.c.orig   2012-08-08 23:38:25.000000000 +0200
+++ lib/cookie.c        2013-02-07 11:55:15.146276477 +0100
@@ -736,7 +736,7 @@ struct CookieInfo *Curl_cookie_init(stru
     fp = NULL;
   }
   else
-    fp = file?fopen(file, "r"):NULL;
+    fp = file?fopen(file, "re"):NULL;
 
   c->newsession = newsession; /* new session? */
 
@@ -1060,7 +1060,7 @@ static int cookie_output(struct CookieIn
     use_stdout=TRUE;
   }
   else {
-    out = fopen(dumphere, "w");
+    out = fopen(dumphere, "we");
     if(!out)
       return 1; /* failure */
   }
Index: lib/file.c
===================================================================
--- lib/file.c.orig     2012-11-13 22:04:27.000000000 +0100
+++ lib/file.c  2013-02-07 11:55:15.147276507 +0100
@@ -249,7 +249,7 @@ static CURLcode file_connect(struct conn
   fd = open_readonly(actual_path, O_RDONLY|O_BINARY);
   file->path = actual_path;
 #else
-  fd = open_readonly(real_path, O_RDONLY);
+  fd = open_readonly(real_path, O_RDONLY|O_CLOEXEC);
   file->path = real_path;
 #endif
   file->freepath = real_path; /* free this when done */
@@ -347,7 +347,7 @@ static CURLcode file_upload(struct conne
   else
     mode = MODE_DEFAULT|O_TRUNC;
 
-  fd = open(file->path, mode, conn->data->set.new_file_perms);
+  fd = open(file->path, mode | O_CLOEXEC, conn->data->set.new_file_perms);
   if(fd < 0) {
     failf(data, "Can't open %s for writing", file->path);
     return CURLE_WRITE_ERROR;
Index: lib/formdata.c
===================================================================
--- lib/formdata.c.orig 2012-08-08 22:45:18.000000000 +0200
+++ lib/formdata.c      2013-02-07 11:55:15.147276507 +0100
@@ -1207,7 +1207,7 @@ CURLcode Curl_getformdata(struct Session
         FILE *fileread;
 
         fileread = strequal("-", file->contents)?
-          stdin:fopen(file->contents, "rb"); /* binary read for win32  */
+          stdin:fopen(file->contents, "rbe"); /* binary read for win32  */
 
         /*
          * VMS: This only allows for stream files on VMS.  Stream files are
@@ -1338,7 +1338,7 @@ static size_t readfromfile(struct Form *
   else {
     if(!form->fp) {
       /* this file hasn't yet been opened */
-      form->fp = fopen(form->data->line, "rb"); /* b is for binary */
+      form->fp = fopen(form->data->line, "rbe"); /* b is for binary */
       if(!form->fp)
         return (size_t)-1; /* failure */
     }
Index: lib/hostip6.c
===================================================================
--- lib/hostip6.c.orig  2012-03-08 20:35:24.000000000 +0100
+++ lib/hostip6.c       2013-02-07 11:55:15.147276507 +0100
@@ -45,7 +45,7 @@
 #ifdef HAVE_PROCESS_H
 #include <process.h>
 #endif
-
+#include <fcntl.h>
 #include "urldata.h"
 #include "sendf.h"
 #include "hostip.h"
@@ -113,7 +113,7 @@ bool Curl_ipv6works(void)
   static int ipv6_works = -1;
   if(-1 == ipv6_works) {
     /* probe to see if we have a working IPv6 stack */
-    curl_socket_t s = socket(PF_INET6, SOCK_DGRAM, 0);
+    curl_socket_t s = socket(PF_INET6, SOCK_DGRAM | SOCK_CLOEXEC, 0);
     if(s == CURL_SOCKET_BAD)
       /* an ipv6 address was requested but we can't get/use one */
       ipv6_works = 0;
Index: lib/if2ip.c
===================================================================
--- lib/if2ip.c.orig    2012-03-08 20:35:24.000000000 +0100
+++ lib/if2ip.c 2013-02-07 11:55:15.148276537 +0100
@@ -153,7 +153,7 @@ char *Curl_if2ip(int af, const char *int
   if(len >= sizeof(req.ifr_name))
     return NULL;
 
-  dummy = socket(AF_INET, SOCK_STREAM, 0);
+  dummy = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0);
   if(CURL_SOCKET_BAD == dummy)
     return NULL;
 
Index: lib/netrc.c
===================================================================
--- lib/netrc.c.orig    2012-08-08 22:45:18.000000000 +0200
+++ lib/netrc.c 2013-02-07 11:55:15.148276537 +0100
@@ -107,7 +107,7 @@ int Curl_parsenetrc(const char *host,
     netrc_alloc = TRUE;
   }
 
-  file = fopen(netrcfile, "r");
+  file = fopen(netrcfile, "re");
   if(file) {
     char *tok;
     char *tok_buf;
Index: lib/ssluse.c
===================================================================
--- lib/ssluse.c.orig   2012-11-13 23:01:17.000000000 +0100
+++ lib/ssluse.c        2013-02-07 11:55:15.149276568 +0100
@@ -437,7 +437,7 @@ int cert_stuff(struct connectdata *conn,
       STACK_OF(X509) *ca = NULL;
       int i;
 
-      f = fopen(cert_file,"rb");
+      f = fopen(cert_file,"rbe");
       if(!f) {
         failf(data, "could not open PKCS12 file '%s'", cert_file);
         return 0;
@@ -2274,7 +2274,7 @@ static CURLcode servercert(struct connec
 
     /* e.g. match issuer name with provided issuer certificate */
     if(data->set.str[STRING_SSL_ISSUERCERT]) {
-      fp=fopen(data->set.str[STRING_SSL_ISSUERCERT],"r");
+      fp=fopen(data->set.str[STRING_SSL_ISSUERCERT],"re");
       if(!fp) {
         if(strict)
           failf(data, "SSL: Unable to open issuer cert (%s)",
Index: lib/connect.c
===================================================================
--- lib/connect.c.orig  2012-11-13 22:02:15.000000000 +0100
+++ lib/connect.c       2013-02-07 11:55:15.149276568 +0100
@@ -1238,7 +1238,7 @@ CURLcode Curl_socket(struct connectdata
                                     (struct curl_sockaddr *)addr);
   else
     /* opensocket callback not set, so simply create the socket now */
-    *sockfd = socket(addr->family, addr->socktype, addr->protocol);
+    *sockfd = socket(addr->family, addr->socktype | SOCK_CLOEXEC, addr->protocol);
 
   if(*sockfd == CURL_SOCKET_BAD)
     /* no socket, no connection */
Index: configure.ac
===================================================================
--- configure.ac.orig   2012-09-08 22:39:18.000000000 +0200
+++ configure.ac        2013-02-07 11:58:27.875122101 +0100
@@ -180,6 +180,7 @@ AC_CANONICAL_HOST
 dnl Get system canonical name
 AC_DEFINE_UNQUOTED(OS, "${host}", [cpu-machine-OS])
 
+AC_USE_SYSTEM_EXTENSIONS
 dnl Checks for programs.
 CURL_CHECK_PROG_CC
 
@@ -193,6 +194,7 @@ dnl Our configure and build reentrant se
 CURL_CONFIGURE_THREAD_SAFE
 CURL_CONFIGURE_REENTRANT
 
+
 dnl check for how to do large files
 AC_SYS_LARGEFILE
 
Index: m4/curl-compilers.m4
===================================================================
--- m4/curl-compilers.m4.orig   2012-11-16 13:02:23.000000000 +0100
+++ m4/curl-compilers.m4        2013-02-07 11:55:15.151276630 +0100
@@ -1272,7 +1272,7 @@ dnl CPPFLAGS from being unexpectedly cha
 AC_DEFUN([CURL_CHECK_PROG_CC], [
   ac_save_CFLAGS="$CFLAGS"
   ac_save_CPPFLAGS="$CPPFLAGS"
-  AC_PROG_CC
+  AC_PROG_CC_STDC
   CFLAGS="$ac_save_CFLAGS"
   CPPFLAGS="$ac_save_CPPFLAGS"
 ])