[squid-users] UFS or ext4

2023-11-22 Thread Mohsen Pahlevanzadeh

Dear All,


I need to install a www cache. However, I don't know whether to use UFS or ext4.

Please help me with the facts.
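
For reference, "ufs" in Squid names a cache_dir storage scheme selected in squid.conf, while ext4 is a Linux on-disk filesystem that the cache directory can live on, so the two are not mutually exclusive. A minimal squid.conf sketch, where the path, the 10000 MB size, and the 16/256 L1/L2 directory counts are illustrative assumptions only:

    # a "ufs" cache_dir stored on, for example, an ext4-mounted disk
    cache_dir ufs /var/spool/squid 10000 16 256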


Best regards,




Re: [squid-users] Kerberos pac ResourceGroups parsing

2023-11-22 Thread Alex Rousskov

On 2023-11-21 23:05, Andrey K wrote:

I have posted a PR: https://github.com/squid-cache/squid/pull/1597 

This is my first contribution to open source. Could you please verify that 
everything is OK?


Thank you for posting that pull request! Let's continue this 
conversation on GitHub, since the squid-users mailing list is not meant for 
code reviews.


Alex.



Thu, 16 Nov 2023 at 17:01, Alex Rousskov:

On 2023-11-16 07:48, Andrey K wrote:

 > I have slightly patched negotiate_kerberos_pac.cc to
 > implement ResourceGroupIds-block parsing.

Please consider posting tested changes as a GitHub Pull Request:
https://wiki.squid-cache.org/MergeProcedure#pull-request
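
For a first contribution, a rough sketch of the usual flow, assuming a personal fork of squid-cache/squid (the account placeholder, branch name, and commit message are illustrative only):

    # clone your fork, branch, commit the tested change, and push
    git clone https://github.com/<your-account>/squid.git
    cd squid
    git checkout -b kerberos-pac-resource-groups
    # ... apply and test the patch ...
    git commit -a -m "negotiate_kerberos_pac: parse ResourceGroupIds"
    git push origin kerberos-pac-resource-groups
    # then open the pull request against squid-cache/squid on GitHub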



Thank you,

Alex.


 > Maybe it will be useful for the community.
 > This patch can be included in future Squid releases.
 >
 > Kind regards,
 >     Ankor.
 >
 > The patch for the file
 > src/auth/negotiate/kerberos/negotiate_kerberos_pac.cc is below:
 >
 > @@ -362,6 +362,123 @@
 >       return ad_groups;
 >   }
 >
 > +
 > +char *
 > +get_resource_group_domain_sid(uint32_t ResourceGroupDomainSid){
 > +
 > +    if (ResourceGroupDomainSid != 0) {
 > +        uint8_t rev;
 > +        uint64_t idauth;
 > +        char dli[256];
 > +        char *ag;
 > +        int l;
 > +
 > +        align(4);
 > +
 > +        uint32_t nauth = get4byt();
 > +
 > +        size_t length = 1+1+6+nauth*4;
 > +
 > +        ag = (char *)xcalloc((length+1)*sizeof(char), 1);
 > +        // the first byte is the length of the SID
 > +        ag[0] = (char) length;
 > +        memcpy((void *)&ag[1], (const void*)&p[bpos], 1);
 > +        memcpy((void *)&ag[2], (const void*)&p[bpos+1], 1);
 > +        ag[2] = ag[2]+1;
 > +        memcpy((void *)&ag[3], (const void*)&p[bpos+2], 6+nauth*4);
 > +
 > +        /* mainly for debug only */
 > +        rev = get1byt();
 > +        bpos = bpos + 1; /*nsub*/
 > +        idauth = get6byt_be();
 > +
 > +        snprintf(dli, sizeof(dli), "S-%d-%lu", rev, (long unsigned int)idauth);
 > +        for ( l=0; l<(int)nauth; l++ ) {
 > +            uint32_t sauth;
 > +            sauth = get4byt();
 > +            snprintf((char *)&dli[strlen(dli)], sizeof(dli)-strlen(dli), "-%u", sauth);
 > +        }
 > +        debug((char *) "%s| %s: INFO: Got ResourceGroupDomainSid %s\n", LogTime(), PROGRAM, dli);
 > +        return ag;
 > +    }
 > +
 > +    return NULL;
 > +}
 > +
 > +char *
 > +get_resource_groups(char *ad_groups, char *resource_group_domain_sid, uint32_t ResourceGroupIds, uint32_t ResourceGroupCount){
 > +    size_t group_domain_sid_len = resource_group_domain_sid[0];
 > +    char *ag;
 > +    size_t length;
 > +
 > +    resource_group_domain_sid++; // now it points to the actual data
 > +
 > +    if (ResourceGroupIds != 0) {
 > +        uint32_t ngroup;
 > +        int l;
 > +
 > +        align(4);
 > +        ngroup = get4byt();
 > +        if ( ngroup != ResourceGroupCount) {
 > +            debug((char *) "%s| %s: ERROR: Group encoding error => ResourceGroupCount: %d Array size: %d\n",
 > +                  LogTime(), PROGRAM, ResourceGroupCount, ngroup);
 > +            return NULL;
 > +        }
 > +        debug((char *) "%s| %s: INFO: Found %d Resource Group rids\n", LogTime(), PROGRAM, ResourceGroupCount);
 > +
 > +        // make a group template which begins with the ResourceGroupDomainID
 > +        length = group_domain_sid_len+4;  // +4 for a rid
 > +        ag = (char *)xcalloc(length*sizeof(char), 1);
 > +        memcpy((void *)ag, (const void*)resource_group_domain_sid, group_domain_sid_len);
 > +
 > +        for ( l=0; l<(int)ResourceGroupCount; l++) {
 > +            uint32_t sauth;
 > +            memcpy((void *)&ag[group_domain_sid_len], (const void*)&p[bpos], 4);
 > +
 > +            if (!pstrcat(ad_groups, " group=")) {
 > +                debug((char *) "%s| %s: WARN: Too many groups ! size > %d : %s\n",
 > +                      LogTime(), PROGRAM, MAX_PAC_GROUP_SIZE, ad_groups);
 > +                xfree(ag);
 > +                return NULL;
 > +            }
 > +
 > +            struct base64_encode_ctx ctx;
 > +            base64_encode_init(&ctx);
 > +            const uint32_t expectedSz = base64_encode_len(length) + 1 /* terminator */;
 > +            char *b64buf = static_cast<char*>(xcalloc(expectedSz, 1));
 > +            size_t blen = base64_encode_update(&ctx, b64buf, length, reinterpret_c

Re: [squid-users] What's this 'errorno=104' error?

2023-11-22 Thread Amos Jeffries

On 22/11/23 07:01, Wen Yue wrote:
I configured Squid 6.3 as a MITM proxy and used Chrome to browse web 
pages through this Squid proxy, such as twitter.com. However, I noticed 
these error messages in the cache.log:

...
2023/11/22 01:33:38 kid1| ERROR: system call failure while accepting a TLS connection on conn8925690 local=10.0.0.5:3128 remote=171.221.64.188:33454 FD 12 flags=1: SQUID_TLS_ERR_ACCEPT+TLS_IO_ERR=5+errno=104



That depends on the OS the proxy was built for.

Assuming Linux, it would be POSIX "ECONNRESET", meaning "Connection reset 
by peer".

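A minimal sketch to confirm the mapping on the machine in question (only standard POSIX errno values and C++ headers are assumed):

    // prints the OS's text for errno 104 and the local value of ECONNRESET;
    // on Linux this shows "Connection reset by peer" and 104
    #include <cerrno>
    #include <cstdio>
    #include <cstring>

    int main() {
        std::printf("errno 104: %s\n", std::strerror(104));
        std::printf("ECONNRESET: %d\n", ECONNRESET);
        return 0;
    }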


HTH
Amos


Re: [squid-users] how to avoid use http/1.0 between squid and the target

2023-11-22 Thread Amos Jeffries

On 22/11/23 23:03, David Komanek wrote:

Hello,

I have a strange problem (definitely some kind of my own ignorance):

If I try to access anything on the site https://www.samba.org WITHOUT 
proxy, my browser happily negotiates the http/2 protocol and receives all 
the data. For http://www.samba.org WITHOUT proxy it starts with http/1.1, 
which is auto-redirected from http to https and continues with http/2. 
So far so good.


But WITH proxy, it happens that squid is using http/1.0.


That is odd. Squid should always be sending requests as HTTP/1.1.

Have a look at the debug level "11,2" cache.log records to see whether 
Squid is actually sending 1.0, or whether it is just relaying CONNECT 
requests that may carry HTTP/1.0 inside.
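
A minimal squid.conf sketch for that, keeping everything else at the default level (debug section 11 covers HTTP traffic):

    # raise HTTP protocol tracing to level 2, leave the rest at level 1
    debug_options ALL,1 11,2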



HTH
Amos


Re: [squid-users] 6.x gives frequent connection to peer failed - spurious?

2023-11-22 Thread Stephen Borrill

On 21/11/2023 15:55, Alex Rousskov wrote:

On 2023-11-21 08:38, Stephen Borrill wrote:

On 15/11/2023 21:55, Alex Rousskov wrote:

On 2023-11-10 05:46, Stephen Borrill wrote:

With 6.x (currently 6.5) there are very frequent (every 10 seconds 
or so) messages like:

2023/11/10 10:25:43 kid1| ERROR: Connection to 127.0.0.1:8123 failed




why is this logged as a connection failure


The current error wording assumes too much and, in your case, is 
evidently misleading. The phrase "Connection to X failed" should be 
changed to something more general, like "Cannot contact cache_peer X" 
or "Cannot communicate with cache_peer X".


CachePeer::countFailure() patches welcome.
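
For reference, a sketch of the kind of one-line rewording meant here, assuming the message is produced near CachePeer::countFailure(); the exact file, debug section, and final wording are open to the patch author:

    // hypothetical rewording of the existing failure message:
    // old: debugs(15, DBG_IMPORTANT, "ERROR: Connection to " << *p << " failed");
    debugs(15, DBG_IMPORTANT, "ERROR: Cannot contact cache_peer " << *p);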


But the point is that it _can_ communicate with the peer; the peer 
itself can't service the request. The peer returning 503 shouldn't be 
logged as a connection failure.



My bad. I missed the fact that the described DNS error happens at a 
_peer_ Squid. Sorry.



Currently, Squid v6 treats most CONNECT-to-peer errors as a sign of a 
broken peer. In 2022, 4xx errors were excluded from that set[1]. At that 
time, we also proposed to make that decision configurable using a new 
cache_peer_fault directive[2], but the new directive was blocked as an 
"overkill"[3], so we hard-coded 4xx exclusion instead.


Going forward, you have several options, including these two:

1. Convince others that Squid should treat all 503 CONNECT errors from 
peers as it already treats all 4xx errors. Hard-code that new logic.


2. Convince others that cache_peer_fault or a similar directive is a 
good idea rather than an overkill. Resurrect its implementation[2].



[1] 
https://github.com/squid-cache/squid/commit/022dbabd89249f839d1861aa87c1ab9e1a008a47


[2] 
https://github.com/squid-cache/squid/commit/25431f18f2f5e796b8704c85fc51f93b6cc2a73d


[3] https://github.com/squid-cache/squid/pull/1166#issuecomment-1295806530



2) seems sensible, especially in the case where you have a single 
cache_peer and cannot go direct. There is no benefit to marking it as dead.


However, I'm currently running with 1) as per below, and this stops 
non-existent domains from counting against the peer (which would otherwise 
surely open it to a DoS attack):


--- src/CachePeer.cc.orig	2023-11-22 08:30:17.524266325 +0000
+++ src/CachePeer.cc	2023-11-22 08:31:05.394052184 +0000
@@ -71,7 +71,7 @@
 void
 CachePeer::noteFailure(const Http::StatusCode code)
 {
-    if (Http::Is4xx(code))
+    if (Http::Is4xx(code) || code == Http::scServiceUnavailable)
         return; // this failure is not our fault
 
     countFailure();



 > do I need to worry about it beyond it filling up the logs needlessly?

In short, "yes".

I cannot accurately assess your specific needs, but, in most 
environments, one should indeed worry that their cache_peer server 
names cannot be reliably resolved because failed resolution attempts 
waste Squid resources and increase transaction response time. 
Moreover, if these failures are frequent enough (relative to peer 
usage attempts), the affected cache_peer will be marked as DEAD (as 
you have mentioned):


 > 2023/11/09 08:55:22 kid1| Detected DEAD Parent: 127.0.0.1:8123


The problem seems to be easily reproducible:

1# env https_proxy=http://127.0.0.1:8084 curl https://www.invalid.domain/
curl: (56) CONNECT tunnel failed, response 503
2# grep invalid /usr/local/squid/logs/access.log|tail -1
1700573429.015  4 127.0.0.1:8084 TCP_TUNNEL/503 0 CONNECT www.invalid.domain:443 - FIRSTUP_PARENT/127.0.0.1:8123 -
3# date -r 1700573429 '+%Y/%m/%d %H:%M:%S'
2023/11/21 13:30:29
4# grep '2023/11/21 13:30:29' /usr/local/squid/logs/cache.log
2023/11/21 13:30:29 kid1| ERROR: Connection to 127.0.0.1:8123 failed


With 4.x there were no such messages.

By comparing to the peer squid's logs, these seem to tally with DNS 
failures:
peer_select.cc(479) resolveSelected: PeerSelector1688 found all 0 destinations for bugzilla.tucasi.com:443


Full ALL,2 log at the time of the reported connection failure:

2023/11/10 10:25:43.162 kid1| 5,2| TcpAcceptor.cc(214) doAccept: New connection on FD 17
2023/11/10 10:25:43.162 kid1| 5,2| TcpAcceptor.cc(316) acceptNext: connection on conn3 local=127.0.0.1:8123 remote=[::] FD 17 flags=9
2023/11/10 10:25:43.162 kid1| 11,2| client_side.cc(1332) parseHttpRequest: HTTP Client conn13206 local=127.0.0.1:8123 remote=127.0.0.1:57843 FD 147 flags=1
2023/11/10 10:25:43.162 kid1| 11,2| client_side.cc(1336) parseHttpRequest: HTTP Client REQUEST:
2023/11/10 10:25:43.162 kid1| 85,2| client_side_request.cc(707) clientAccessCheckDone: The request CONNECT bugzilla.tucasi.com:443 is ALLOWED; last ACL checked: localhost
2023/11/10 10:25:43.162 kid1| 85,2| client_side_request.cc(683) clientAccessCheck2: No adapted_http_access configuration. default: ALLOW
2023/11/10 10:25:43.162 kid1| 85,2| client_side_request.cc(707) clientAccessCheckDone: The request CONNECT bugzilla.tucasi.com:443 is ALLOWED; last ACL checked: localhost
2023/11/10 10:25:43.162 kid1| 44,2| peer_select.

[squid-users] how to avoid use http/1.0 between squid and the target

2023-11-22 Thread David Komanek

Hello,

I have a strange problem (definitely some kind of my own ignorance):

If I try to access anything on the site https://www.samba.org WITHOUT 
proxy, my browser happily negotiates the http/2 protocol and receives all 
the data. For http://www.samba.org WITHOUT proxy it starts with http/1.1, 
which is auto-redirected from http to https and continues with http/2. 
So far so good.


But WITH proxy, it happens that squid is using http/1.0. The remote site 
is blocking this protocol, requiring at least http/1.1 (confirmed by the 
samba.org website maintainer), so the site remains inaccessible. This is 
the only site where I have encountered the problem; if I connect WITH 
proxy to other sites, squid uses http/1.1 as expected.


So I'm lost here, unable to find the reason why http/1.1 couldn't be 
used by squid in these rare cases. What am I missing? I am not aware 
of any configuration directives which could cause this.


browsers: chrome, firefox (both updated)
squid: freebsd package (now version 6.5, but I had the same problem 
with 5.9 before)


Thanks in advance for some hints here.

Best regards,

  David Komanek
  Charles University in Prague
  Faculty of Science

