[squid-users] NTLM V2 Set up for Squid issue
From: K R, Bharath
Sent: Thursday, September 29, 2022 10:31 PM
To: squid-users@lists.squid-cache.org
Cc: Kasat, Puneeth Kumar ; Boddupalli, Vikram ; Uppal, Tanjot Singh
Subject: NTLM V2 Set up for Squid issue

Hi Team,

We see the below error while configuring Squid for NTLM V2.

1664469456.486     73 10.65.140.107 TCP_DENIED/407 4408 GET http://detectportal.firefox.com/canonical.html - HIER_NONE/- text/html
1664469461.446     67 10.65.140.107 TCP_DENIED/407 4408 GET http://detectportal.firefox.com/canonical.html - HIER_NONE/- text/html
1664469466.478     96 10.65.140.107 TCP_DENIED/407 4408 GET http://detectportal.firefox.com/canonical.html - HIER_NONE/- text/html
1664469471.497    102 10.65.140.107 TCP_DENIED/407 4408 GET http://detectportal.firefox.com/canonical.html - HIER_NONE/- text/html
1664469476.478     88 10.65.140.107 TCP_DENIED/407 4408 GET http://detectportal.firefox.com/canonical.html - HIER_NONE/- text/html
1664469481.454     46 10.65.140.107 TCP_DENIED/407 4408 GET http://detectportal.firefox.com/canonical.html - HIER_NONE/- text/html
1664469612.625     34 10.65.140.107 TCP_DENIED/407 4326 CONNECT push.services.mozilla.com:443 - HIER_NONE/- text/html

Our configuration:

auth_param ntlm program /usr/bin/ntlm_auth --diagnostics --helper-protocol=squid-2.5-ntlmssp --domain=x.com
auth_param ntlm children 10
auth_param ntlm keep_alive off
auth_param ntlm program /usr/lib/squid/ntlm_auth .com/x.informatica.com
auth_param ntlm children 5
auth_param ntlm max_challenge_reuses 0
auth_param ntlm max_challenge_lifetime 2 minutes
acl ntlm_users proxy_auth REQUIRED
http_access allow ntlm_users
#http_access deny all

NOTE: Our wbinfo component is working as expected. We made use of
https://wiki.squid-cache.org/ConfigExamples/Authenticate/Ntlm for doc.

Regards,
Bharath

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
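[Editor's note] One visible problem in the configuration above is that `auth_param ntlm program` is declared twice with two different helper binaries, so at most one declaration is actually in effect. A minimal single-helper sketch (not a verified fix; the helper path and domain below are placeholders to substitute with your own values):

```
# Keep exactly ONE "auth_param ntlm program" line per auth scheme.
# /usr/bin/ntlm_auth and EXAMPLE are placeholders for your Samba
# ntlm_auth path and your NT domain name.
auth_param ntlm program /usr/bin/ntlm_auth --helper-protocol=squid-2.5-ntlmssp --domain=EXAMPLE
auth_param ntlm children 10
auth_param ntlm keep_alive off

acl ntlm_users proxy_auth REQUIRED
http_access allow ntlm_users
http_access deny all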
Re: [squid-users] sarg error in squid 5.7
On 30/09/22 03:45, Majed Zouhairy wrote:
> Peace,
> does squid still not support long urls?

Squid supports long URLs. If it did not, those log entries would be shorter and indicate clients being rejected with "414 URI Too Long".

Your problem is the length of the log line. The logging modules have limits which may be shorter than what you have configured the acceptable URL / request size to be. I suggest trying TCP logging instead of the default stdio or daemon.

Sarg itself may also have problems with such long URLs.

HTH
Amos
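[Editor's note] Amos's TCP-logging suggestion corresponds roughly to the following squid.conf fragment (a sketch only; the address is a placeholder, and the exact `access_log` module syntax should be checked against the documentation for your Squid version):

```
# Send access-log lines to a TCP receiver instead of the default
# stdio/daemon logging module. 127.0.0.1:1514 is a placeholder where
# a log collector (e.g. a syslog daemon) must already be listening.
access_log tcp://127.0.0.1:1514 squid
```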
Re: [squid-users] TCP_MISS only
On 30/09/22 02:59, Andy Armstrong wrote:
> Hi, Excellent I understand and agree with what you are saying. Is this
> behaviour documented within the Squid documentation anywhere, or is this
> more ‘how does the HTTP specification handle caching’?

It is HTTP basics. I have just re-phrased details from RFC 9110 and RFC 9111 to make them clearer in context of this conversation. The details about methods can be found in https://www.rfc-editor.org/rfc/rfc9110#name-methods and the requirements on caches are covered in https://www.rfc-editor.org/rfc/rfc9111#name-overview-of-cache-operation.

> I am moving forward with a HTTP GET to see if that works per my use
> case. I assume therefore that any other verb is simply not going to
> work out the box?

Without knowing the details of your service API it is hard to answer that question. From what you have mentioned so far I believe so. Or at least it is the method most easily altered when you want non-default behaviour to occur.

HTH
Amos
[squid-users] sarg error in squid 5.7
Peace,
does squid still not support long urls?

sudo sarg -x -z
SARG: Init
SARG: Loading configuration file "/usr/share/sarg/sarg.conf"
SARG: TAG: access_log /var/log/squid/access.log
SARG: TAG: title "Squid User Access Reports"
SARG: TAG: font_face Tahoma,Verdana,Arial
SARG: TAG: font_size 18px
SARG: TAG: header_font_size 12px
SARG: TAG: temporary_dir /tmp
SARG: TAG: output_dir /srv/www/sarg
SARG: TAG: resolve_ip yes
SARG: Chaining IP resolving module "dns"
SARG: TAG: date_format e
SARG: TAG: lastlog 25
SARG: TAG: remove_temp_files yes
SARG: TAG: index yes
SARG: TAG: index_tree file
SARG: TAG: overwrite_report yes
SARG: TAG: topsites_num 180
SARG: TAG: exclude_codes /usr/share/sarg/exclude_codes
SARG: TAG: max_elapsed 2880
SARG: TAG: report_type topusers topsites denied sites_users users_sites date_time denied auth_failures site_user_time_date downloads
SARG: TAG: long_url no
SARG: TAG: privacy no
SARG: TAG: show_successful_message yes
SARG: TAG: show_read_statistics yes
SARG: TAG: www_document_root /srv/www/htdocs
SARG: TAG: download_suffix "zip,arj,bzip,gz,ace,doc,iso,adt,bin,cab,com,dot,drv$,lha,lzh,mdb,mso,ppt,rtf,src,shs,sys,exe,dll,mp3,avi,mpg,mpeg"
SARG: Purging temporary directory "/tmp/sargnX2FZX"
SARG: Parameters:
SARG:   Hostname or IP address (-a) =
SARG:   Exclude file (-c) =
SARG:   Date from-until (-d) =
SARG:   Email address to send reports (-e) =
SARG:   Config file (-f) = /usr/share/sarg/sarg.conf
SARG:   Date format (-g) = Europe (dd/mm/)
SARG:   IP report (-i) = No
SARG:   Keep temporary files (-k) = No
SARG:   Input log (-l) = /var/log/squid/access.log
SARG:   Resolve IP Address (-n) = Yes
SARG:   Output dir (-o) = /srv/www/sarg/
SARG:   Use Ip Address instead of userid (-p) = No
SARG:   Accessed site (-s) =
SARG:   Time (-t) =
SARG:   User (-u) =
SARG:   Temporary dir (-w) = /tmp/sargnX2FZX
SARG:   Debug messages (-x) = Yes
SARG:   Process messages (-z) = 1
SARG:   Previous reports to keep (--lastlog) = 25
SARG:
SARG: sarg version: 2.4.0 Jan-16-2020
SARG: Reading access log file:
/var/log/squid/access.log SARG: Log format identified as "squid log format" for /var/log/squid/access.log SARG: The following line read from /var/log/squid/access.log could not be parsed and is ignored 1664429404.809 12 10.32.0.2 NONE_NONE/400 23894 POST https://yandex.by/clck/safeclick/data=AiuY0DBWFJ5Hyx_fyvalFLOD-Yhiku6D0pBKde9dUC5JtQWHFepEIISjW65MNM0MbWwwbD826Q6PbhmH8wOygGEGZrEbAD17l5ZxdNR-cdjO5PX6BPXQlCuhboWrsuvOUWLzCF5DolY3QBpPflNk0GmYurtACNOj7wBvT5DBkWS6Bceio12NXfa8_IXI5oBhrJfhfaLy5st7wUykxNAvNbUHEnwTieLfdmpWv04zCFacPnnvLUqOdGAbFXQPEbyGKDhVsb12swdUr4_IwTWnXwp1t0lXq_Cm415e6rMpeCdg8vTFFT-D0k0UQ39arRjMuSDzE_8MtoDT6M3q7UB3qA8i-DQ4QupIf-bI_8lhs0d1gtj3wWUrmNrATp6i_Xjf6leJ-IJ_gsU7hxp1BhpvQxPiBPbdz63vO8xdfmupx6dQMbYcoSywDppIGo1X80S4dK9_ZpoKlMP_Onn8Flh8s6Dl4aqqh1AOvNaWnEirAtxtHbtfvUBAQz8N-SYAaN2sQJKUnjJGewpawVeLOZ1qQQ,,/sign=214e4bac6a7f5b6913d67e7a333db887/keyno=0/ref=orjY4mGPRjkHVRqRT7scnl9k3ZfzgjFj0NXM8QCXJ87Lp4yaofJuZIGyAcDCDs-Qxz1NDGQsk0PYaiHTlBS-kNrrDW1IyfUgrvoaMQQfJjvAaeeXw0yAWiEPHr2ZWCcw2tYm73H8gZ89Y5jlFWYscV9f6rVDBKocPyT7JEQI25xWZJZvkh_O-JV9DU8JbHJ5_cwFb_FmjJRcNITbQwr24MQ3UrFCfWmBzhTmAoRmzxtZN3RIbSvTFJQccXH-nU0Awm8IViyObMOih6WKmYmDFq_lMhfJi21t9kshE1JzTbLXdJIck-bAmjvX_VZn2cVPXKt10hpVpsTpMICr52rGzA,,/installation_info=eyJnZW8iOi 
J2bGEiLCJjdHlwZSI6InByb2QiLCJ2ZXJ0aWNhbCI6IldFQiJ9/events=%5B%7B%22event%22%3A%22append%22%2C%22tree%22%3A%7B%22id%22%3A%2265olw0e-0-115%22%2C%22name%22%3A%22%24subresult%22%2C%22attrs%22%3A%7B%22schema-ver%22%3A0%2C%22ui%22%3A%22desktop%22%2C%22parent-id%22%3A%2265olw0e-0-10%22%2C%22trigger-event-trusted%22%3Afalse%2C%22main-search%22%3Afalse%7D%2C%22children%22%3A%5B%7B%22id%22%3A%2265olw0e-0-114%22%2C%22name%22%3A%22image%22%2C%22attrs%22%3A%7B%22row%22%3A0%2C%22item%22%3A0%2C%22docid%22%3A%22ZA4F90CD9CBA93F43%22%2C%22image%22%3A%22%2F%2Favatars.mds.yandex.net%2Fi%3Fid%3De9a6a5515c67cae851b07b9f229b5e83-5220043-images-thumbs%26n%3D13%22%2C%22width%22%3A90%2C%22height%22%3A150%7D%2C%22children%22%3A%5B%7B%22id%22%3A%2265olw0e-0-116%22%2C%22name%22%3A%22thumb%22%2C%22attrs%22%3A%7B%22row%22%3A0%2C%22item%22%3A0%7D%7D%5D%7D%2C%7B%22id%22%3A%2265olw0e-0-117%22%2C%22name%22%3A%22image%22%2C%22attrs%22%3A%7B%22row%22%3A0%2C%22item%22%3A1%2C%22docid%22%3A%22Z9DDC9719A5955EEE%22%2C%22ima
Re: [squid-users] TCP_MISS only
Hi,

Excellent I understand and agree with what you are saying. Is this behaviour documented within the Squid documentation anywhere, or is this more ‘how does the HTTP specification handle caching’?

I am moving forward with a HTTP GET to see if that works per my use case. I assume therefore that any other verb is simply not going to work out the box?

Kind regards,

Andy Armstrong
安迪 阿姆斯特朗
Principal Specialist for Z Technologies
EMEA Squad Leader for Hybrid Cloud
Worldwide Community Leader for Hybrid Cloud
Member of the CTO Office Server & Storage EMEA
Distinguished Technical Specialist – The Open Group
IBM Master Inventor
Mobile: +447500103874

From: squid-users on behalf of Amos Jeffries
Date: Thursday, 29 September 2022 at 13:06
To: squid-users@lists.squid-cache.org
Subject: [EXTERNAL] Re: [squid-users] TCP_MISS only

On 28/09/22 07:56, Andy Armstrong wrote:
> Okay – but what happens if you are communicating with a non REST
> endpoint. You are still communicating over HTTP.

To interact with and benefit from HTTP agents like caches you need to comply to the HTTP semantics they use. IMO, REST is just a useful tool to define (in abstract) an API's operation when considering what/how it needs to be implemented.

> Consider a Web services endpoint for example where a request
> is only interacted with via POST but the operation for example may
> frequently be a read based function akin to a HTTP GET?

That is by definition a broken implementation of HTTP. The agent is using a *delivery* API (POST) for retrieval (GET). If you can separate the delivery and fetch operations HTTP becomes much easier to use.

> Is Squid just
> simply not going to help cache those requests?

Not *by default*, no. POST implies changing some arbitrary resource *other* than the URL presented, based on data and logic which may not be provided in the request message URL+headers.

To use POST with caching both the client *and* the server have to explicitly tell the HTTP cache agent(s) what to do on every single HTTP message.

- The client has to tell the cache whether a stored response is able to be produced as reply, what object-ID it is trying to retrieve, what object-ID's it already knows about (if any), and how old the stored object is allowed to be.

- The server has to tell the cache whether the response can be stored, what to use for a unique-ID of the reply object, how old it already is, how long it can be stored for, how and when to update it when it becomes stale.

The Squid refresh_pattern can provide defaults for the storage times when they are omitted. But all the ID related things and whether to use cache at all can only come from the client/server.

As you can see, by limiting yourself to POST-only you have imposed a huge amount of complexity. Using GET instead for fetches makes all the above *optional* where now it is mandatory.

> It is only helpful for
> more strict alignment to REST principles?

You lost me here. Squid implements HTTP. REST is a very abstract simplification of basic HTTP/1.0 semantics. So the closer one's code aligns to REST the *easier* it is to implement HTTP properly. But HTTP/1.1+ are vastly more than REST.

HTH
Amos

Unless otherwise stated above:
IBM United Kingdom Limited
Registered in England and Wales with number 741598
Registered office: PO Box 41, North Harbour, Portsmouth, Hants. PO6 3AU
Re: [squid-users] Use ICP RTT with HTTPS request
On 27/09/22 02:25, Alex Rousskov wrote:
> On 9/26/22 05:51, Théo BARRAGUE wrote:
>> entry is null so peerGetSomeNeighbor is never called
>
> I did not check all the details, but it looks like Squid ICMP code
> (ab)uses StoreEntry-linked metadata. Basic CONNECT tunnels lack
> StoreEntry because they are not reading/writing data from/to Store. The
> combination is essentially a Squid bug -- basic CONNECT tunnels cannot
> use ICMP features.

CONNECT tunnels should be able to use data from ICMP like other code doing peer selection. I see several bugs here:

1) ICMP relying on StoreEntry as a data source. The server (if not a cache_peer) being ping'ed should come from the CONNECT request object URI.

2) Peer selection initiating ICMP directly. It should be retrieving RTT values from NetDB, which indirectly uses ICMP to get updates.

Cheers
Amos
Re: [squid-users] Frequent disruption of Microsoft Teams and other online VC applications through Squid Proxy
On 28/09/22 23:11, Punyasloka Arya wrote:
> We have our squid 3.5 proxy running on centos 6 for 500 users. Desktop
> video conference application Microsoft Team Meetings sessions and other
> VC apps often get disconnected. Audio goes off and reconnects
> automatically. At the same time browsing and other applications work
> fine. At the same time SQUID load is normal. We are clueless where to
> look in the system. Any clues to fine tune the underlying parameters
> like file descriptors, cache parameters etc.?

Please first check whether Squid is actually involved with those software transactions. Typically those types of communications should be using RTSP/RTMP or VoIP protocols, which Squid-3 does not support.

When Squid-3 does handle CONNECT for non-HTTP protocols (including encrypted HTTPS) its involvement is purely that of shuffling bytes between client and server. So the things to look for there are TCP-level network issues (eg. TCP, NAT, or router cache timeouts) closing connections unexpectedly.

If you have Squid Delay Pools configured, they may also be interfering with connections that need high traffic flow rates.

Check how much traffic Squid is handling at the time(s) these issues occur. Squid-3.5 has upper limits of around 19K requests/second and ~63K concurrent client connections (on Linux/BSD, much lower on Windows). If either of these limits is encountered, traffic speeds *will* drastically reduce until the clients adjust to a lower level.

- The fix, if you actually hit the upper limits, is to use more Squid instances (on different hardware, not VMs on the same HW) to share the traffic load.

- If the speed drop occurs before hitting the limits, you can try optimizing squid.conf settings (ACL sequence in particular) and TCP stack settings (ie. ephemeral port use and/or TCP flow controls) for performance.

Also check cache.log to see if Squid, workers, or helpers are halting/crashing at all. These are not very consistent in Squid-3 but should show up as one or more "FATAL", "ERROR", "assertion failed", "unhandled exception", or "halted" messages.

- The fix here is obviously to figure out and prevent the crashes occurring. Each log message and the ones above it should provide some hints to help with troubleshooting.

FWIW, Squid-3.5 is long out of support and AFAIK CentOS 6 is also.

HTH
Amos
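[Editor's note] The cache.log check Amos describes can be scripted by filtering for the crash markers he lists. A minimal sketch (the sample lines are illustrative, not from the poster's system; in practice read /var/log/squid/cache.log):

```python
import re

# The crash/halt markers Amos suggests searching cache.log for.
CRASH_MARKERS = re.compile(
    r"FATAL|ERROR|assertion failed|unhandled exception|halted"
)

def crash_lines(lines):
    """Return only the log lines containing a crash/halt marker."""
    return [line for line in lines if CRASH_MARKERS.search(line)]

# Illustrative sample log lines (placeholders, not real output).
sample = [
    "2022/09/28 23:10:01| Starting Squid Cache version 3.5.20...",
    "2022/09/28 23:12:44| assertion failed: comm.cc:178: \"fd >= 0\"",
    "2022/09/28 23:12:45| Squid Parent: child process 1234 started",
]
print(crash_lines(sample))
```

Reviewing the lines just above each hit usually gives the context Amos mentions for troubleshooting.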
Re: [squid-users] TCP_MISS only
On 28/09/22 07:56, Andy Armstrong wrote:
> Okay – but what happens if you are communicating with a non REST
> endpoint. You are still communicating over HTTP.

To interact with and benefit from HTTP agents like caches you need to comply to the HTTP semantics they use. IMO, REST is just a useful tool to define (in abstract) an API's operation when considering what/how it needs to be implemented.

> Consider a Web services endpoint for example where a request
> is only interacted with via POST but the operation for example may
> frequently be a read based function akin to a HTTP GET?

That is by definition a broken implementation of HTTP. The agent is using a *delivery* API (POST) for retrieval (GET). If you can separate the delivery and fetch operations HTTP becomes much easier to use.

> Is Squid just
> simply not going to help cache those requests?

Not *by default*, no. POST implies changing some arbitrary resource *other* than the URL presented, based on data and logic which may not be provided in the request message URL+headers.

To use POST with caching both the client *and* the server have to explicitly tell the HTTP cache agent(s) what to do on every single HTTP message.

- The client has to tell the cache whether a stored response is able to be produced as reply, what object-ID it is trying to retrieve, what object-ID's it already knows about (if any), and how old the stored object is allowed to be.

- The server has to tell the cache whether the response can be stored, what to use for a unique-ID of the reply object, how old it already is, how long it can be stored for, how and when to update it when it becomes stale.

The Squid refresh_pattern can provide defaults for the storage times when they are omitted. But all the ID related things and whether to use cache at all can only come from the client/server.

As you can see, by limiting yourself to POST-only you have imposed a huge amount of complexity. Using GET instead for fetches makes all the above *optional* where now it is mandatory.

> It is only helpful for
> more strict alignment to REST principles?

You lost me here. Squid implements HTTP. REST is a very abstract simplification of basic HTTP/1.0 semantics. So the closer one's code aligns to REST the *easier* it is to implement HTTP properly. But HTTP/1.1+ are vastly more than REST.

HTH
Amos

___
squid-users mailing list
squid-users@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
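[Editor's note] The client/server obligations Amos lists map onto standard HTTP caching headers. A hypothetical GET exchange (URL, header values, and ETags below are illustrative only) carries all of the IDs and ages a cache such as Squid needs:

```
Client -> cache: which object, the known stored variant, and how
stale a reply is acceptable:

    GET /catalog/item/42 HTTP/1.1
    Host: api.example.com
    Cache-Control: max-age=60
    If-None-Match: "v1-abc123"

Server -> cache: whether it may be stored, its unique ID (ETag),
its current age, and how long it stays fresh:

    HTTP/1.1 200 OK
    Cache-Control: public, max-age=300
    ETag: "v2-def456"
    Age: 0
```

With POST, no such defaults apply: every message would have to spell out the equivalents explicitly, and even then most caches (Squid included) do not store POST responses by default.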