[squid-users] SSL decryption problem using Mozilla Firefox
Hi All, I am able to decrypt packets with Squid's ssl bump feature when Internet Explorer is used, but when Mozilla Firefox is used I cannot decrypt the dumped packets. The root CA is the same as in IE, so the private key is the same. What's wrong? Thank you for any advice! Andrew -- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/SSL-decryption-problem-using-Mozilla-Firefox-tp4666567.html Sent from the Squid - Users mailing list archive at Nabble.com.
RE: [squid-users] SSL decryption problem using Mozilla Firefox
So when you use Internet Explorer, in the access.log, you see a GET request, but if you use Firefox you see a CONNECT? Does Firefox get passed the certificate from the Squid proxy, or the certificate from the website? I don't think it is possible to make squid ssl-bump only some clients, but I'm not 100% sure. I'm pretty new to ssl in squid, but for people to help you, a copy of your squid.conf would be useful, as well as the part of your access.log showing Internet Explorer and Firefox browsing the internet. -Original Message- From: Makkok [mailto:szemolb...@yahoo.com] Sent: Tuesday, 1 July 2014 8:02 p.m. To: squid-users@squid-cache.org Subject: [squid-users] SSL decryption problem using Mozilla Firefox
[squid-users] RE: SSL decryption problem using Mozilla Firefox
Hi Liam! Thanks for your reply. It's very interesting, because in the access.log file I see the correct things: GET and POST requests with both browsers. But when I start a tcpdump capture on the squid listening interface, only the traffic generated by Internet Explorer can be decrypted. The squid CA was successfully inserted into the root CAs in Firefox, and I see Firefox is using my cert, just as Internet Explorer does. I use the dynamic ssl certificate generation feature with the ssl_bump server-first directive:

ssl_bump server-first all
https_port 3129 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/etc/MyCA.pem

Thanks! -- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/SSL-decryption-problem-using-Mozilla-Firefox-tp4666567p4666569.html
[squid-users] Help on squid external proxy configuration
Hi all, I'm new to this environment and I have a problem with an application in an environment that uses NTLM authentication. This kind of authentication is not supported by the software I'm using, so the support team told me that the best way to solve the issue is to install a Squid proxy server: my box acts as a proxy without authentication internally, and Squid connects to another, external proxy using the normal credentials. I hope my explanation is clear. Practically, my box with a Squid proxy server installed must receive the HTTP requests from my software without credentials, and forward them to the external proxy with the normal credentials; that proxy, finally, will connect to the internet site I'm looking for. How can I do this? What simple kind of configuration must I use in my Squid proxy server? HELP ME. Thanks ROBERTO
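What Roberto describes is plain proxy chaining. A minimal squid.conf sketch (hostname, port, network and credentials below are placeholders, not from the thread — and note that login=user:password sends Basic credentials, so it only helps if the external proxy also accepts Basic; if it insists on NTLM specifically, look at login=PASSTHRU together with connection-auth instead):

```
# Accept unauthenticated requests from the local application
http_port 3128
acl localnet src 192.168.0.0/16
http_access allow localnet
http_access deny all

# Chain everything to the external proxy, adding credentials on squid's side
cache_peer proxy.example.com parent 8080 0 no-query default login=username:password

# Never go direct -- always use the parent proxy
never_direct allow all
```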
[squid-users] Caching a specific site (Part two)
Hello list, I have had a problem with squid caching a specific site for a couple of minutes, and I thought I could resolve the problem, but it now seems like that won't work. Here are some parts of my squid.conf:

cache_mem 24 MB
error_directory /usr/lib/squid/errors/de
digest_generation off
cache_dir aufs /var/log/cache 50 16 256
access_log stdio:/var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
strip_query_terms off
...
refresh_pattern -i ^http://mydomain\.de/vneu/(.*)\.htm$ 5 99% 10 override-lastmod override-expire ignore-reload ignore-private ignore-no-cache
refresh_pattern -i ^http://mydomain\.de/plaueler/* 10 99% 10 override-lastmod override-expire ignore-reload ignore-private ignore-no-cache
...

I have no idea what I should do to get squid to cache these two sites for a few minutes again and again. Greetings
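For reference, the numeric fields of refresh_pattern work as follows (a summary of the documented semantics, with a hypothetical rewrite of the second pattern above):

```
# refresh_pattern [-i] regex  MIN  PERCENT%  MAX  [options]
#   MIN      minutes an object without an explicit expiry stays fresh
#   PERCENT  freshness as a fraction of the object's age since Last-Modified
#   MAX      upper bound, in minutes, on how long the object may stay fresh
# So "5 99% 10" keeps a match fresh for at least 5 and at most 10 minutes,
# with the override-*/ignore-* options suppressing the server's own headers.
# Note the trailing "/*" in the second pattern matches zero or more slashes;
# it still matches everything under /plaueler because the regex is not
# end-anchored, but "/.*" states the intent more clearly:
refresh_pattern -i ^http://mydomain\.de/plaueler/.* 10 99% 10 override-lastmod override-expire ignore-reload ignore-private ignore-no-cache
```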
[squid-users] assertion failed: comm.cc:185: fd_table[conn->fd].halfClosedReader != NULL
Hi all, we have two squid servers on a large enterprise LAN for caching internet traffic. We currently use squid 3.3.9 with these (relevant?) settings:

pipeline_prefetch on
shutdown_lifetime 1 second
## Cache Settings ##
cache_dir diskd /squid/cache/active 51200 20 512 Q1=288 Q2=256
cache_mem 8192 MB
minimum_object_size 0 KB
maximum_object_size 50 MB
maximum_object_size_in_memory 1 MB
memory_cache_mode disk
store_dir_select_algorithm least-load
cache_swap_low 94
cache_swap_high 95
max_filedescriptors 65536
## Networking ##
http_port 8080
## USE ufdbGuard ##
url_rewrite_program /usr/local/ufdbguard/bin/ufdbgclient -l /usr/local/ufdbguard/logs
url_rewrite_children 50

We encountered the error *assertion failed: comm.cc:185: fd_table[conn->fd].halfClosedReader != NULL* in the cache.log file, and the squid server stopped working without accepting any connections. Can you please help us with this error? Many thanks, Enrico -- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/assertion-failed-comm-cc-185-fd-table-conn-fd-halfClosedReader-NULL-tp4666572.html
[squid-users] Re: Probs with squid 3.4.4 and cache_peer parent
Then let's try to get rid of the error messages in squid.log. This is my standard command for a parent proxy to which all requests are forwarded:

cache_peer xxx.xxx.xxx.xx parent 3128 0 no-query no-digest no-netdb-exchange

This should get rid of the errors regarding the pinger. Correct? Still crashing? -- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Probs-with-squid-3-4-4-and-cache-peer-parent-tp4666557p4666573.html
[squid-users] delay pool negative value
Hi, if I watch the delay pools with squidclient mgr:delay, what does a negative value in the current field mean? Is there a description of the values in that output? Kind regards marc
Re: [squid-users] delay pool negative value
On Tue, Jul 1, 2014 at 12:47 PM, Grooz, Marc (regio iT) marc.gr...@regioit.de wrote: If I watch the delay pool with squidclient mgr:delay, what does a negative value in the current field mean? Small or smallish values mean that the pool is depleted. Until the values get positive and big enough again, no more data will be sent. Is there a description of the values in that output? It should be more or less self-explanatory. What aspects are you mostly concerned about? -- Francesco
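As background to Francesco's answer: a class-1 delay pool is a token bucket, and the `current` field is the remaining bucket content in bytes — refilled at the restore rate, drained by traffic, so it can dip to small or negative values when the pool is overdrawn by a large read. A minimal sketch (rate, size and ACL are illustrative only):

```
# One class-1 pool: a single aggregate bucket for all matched traffic
delay_pools 1
delay_class 1 1
# restore-rate/maximum, both in bytes:
# refill at 64 KB/s, never hold more than 256 KB of "credit"
delay_parameters 1 65536/262144
delay_access 1 allow localnet
```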
Re: [squid-users] RE: SSL decryption problem using Mozilla Firefox
I am really trying to understand the issue.. Really. Eliezer On 07/01/2014 12:13 PM, Makkok wrote: Hi Liam! Thanks for your reply. ...
Re: [squid-users] SSL bump working on most site...cert pinning issue?
On 2014-06-30 20:21, James Lay wrote: On Mon, 2014-06-30 at 22:56 +1000, Dan Charlesworth wrote: Yeah, pinned SSL ain't gonna be bumped. The Twitter apps are another popular one that use pinning. As far as your broken_sites ACL goes, you can't use `dstdomain` because the only thing Squid can see of the destination before bumping an intercepted connection is the IP address. So for `ssl_bump none` you'll need to use `dst` ACLs instead. ProTip: Here are the Apple and Akamai public IP blocks (to use in a dst equivalent of your broken_sites), respectively: 17.0.0.0/8, 23.0.0.0/12. Good luck On 30 Jun 2014, at 10:38 pm, James Lay j...@slave-tothe-box.net wrote: Topic pretty much says it... most sites work fine using my below setup, but some (Apple's app store) do not. I'm wondering if cert pinning is the issue? Since this setup is basically two separate sessions, I packet captured both. The side that I have control over gives me a TLS Record Layer Alert Close Notify. I am unable to decrypt the other side, as the device in question is an iDevice and I can't capture the master secret. I've even tried to ACL certain sites to not bump, but they don't go through. Below is my complete setup. This is running the below: Squid Cache: Version 3.4.6 configure options: '--prefix=/opt' '--enable-icap-client' '--enable-ssl' '--enable-linux-netfilter' '--enable-follow-x-forwarded-for' '--with-large-files' '--sysconfdir=/opt/etc/squid' Any assistance with troubleshooting would be wonderful... thank you.
James

$IPTABLES -t nat -A PREROUTING -i eth0 -s 192.168.1.96/28 -p tcp --dport 80 -j REDIRECT --to-port 3128
$IPTABLES -t nat -A PREROUTING -i eth0 -s 192.168.1.96/28 -p tcp --dport 443 -j REDIRECT --to-port 3129

acl localnet src 192.168.1.0/24
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl broken_sites dstdomain textnow.me
acl broken_sites dstdomain akamaiedge.net
acl broken_sites dstdomain akamaihd.net
acl broken_sites dstdomain apple.com
acl allowed_sites url_regex /opt/etc/squid/url.txt
acl all_others dst all
acl SSL method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow manager localhost
http_access deny manager
http_access allow allowed_sites
http_access deny all_others
http_access allow localnet
http_access allow localhost
http_access deny all
icp_access deny all
sslproxy_cert_error allow broken_sites
sslproxy_cert_error deny all
sslproxy_options ALL
ssl_bump none broken_sites
ssl_bump server-first all
http_port 192.168.1.253:3128 intercept
https_port 192.168.1.253:3129 intercept ssl-bump generate-host-certificates=on cert=/opt/sslsplit/sslsplit.crt key=/opt/sslsplit/sslsplitca.key options=ALL sslflags=NO_SESSION_REUSE
always_direct allow all
hierarchy_stoplist cgi-bin ?
access_log syslog:daemon.info common
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (cgi-bin|\?) 0 0% 0
refresh_pattern . 0 20% 4320
icp_port 3130
coredump_dir /opt/var

So adding:

acl broken_sites dst 23.0.0.0/12

now gives me the below:

Jun 30 20:16:51 gateway (squid-1): 192.168.1.100 - - [30/Jun/2014:20:16:51 -0600] CONNECT 23.204.162.217:443 HTTP/1.1 403 3385 TCP_DENIED:HIER_NONE
Jun 30 20:16:51 gateway (squid-1): 192.168.1.100 - - [30/Jun/2014:20:16:51 -0600] NONE error:invalid-request HTTP/0.0 400 3981 TAG_NONE:HIER_NONE

So something is off. Any help on this beastie? Thank you. James

Bah.. had to add:

http_access allow broken_sites

Go me! Thank you. James
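Putting the thread's resolution together, exempting pinned destinations from bumping ends up needing all three pieces below (the IP blocks are the ones Dan suggested; adjust for your environment):

```
# Pinned destinations can't be bumped; match them by IP, since dstdomain
# can't see the hostname before an intercepted connection is bumped
acl broken_sites dst 17.0.0.0/8    # Apple
acl broken_sites dst 23.0.0.0/12   # Akamai

http_access allow broken_sites    # the piece James was missing
ssl_bump none broken_sites        # tunnel these without bumping
ssl_bump server-first all
```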
[squid-users] RE: SSL decryption problem using Mozilla Firefox
Hi Eliezer, Let me clarify the issue: I successfully set up squid with the ssl bumping feature (man-in-the-middle proxy) on my home network. Everything works perfectly; I see every https connection URL in the access.log. To check if it is really working, besides the access.log, I started sniffing the traffic on the proxy interface. And the issue begins from here. If I sniff traffic generated by Internet Explorer, I can decrypt the sniffed traffic via Wireshark. But in the case of Firefox or Opera, I cannot decrypt the packets, even though I own the private key of the root CA (of course). What could the problem be? Thanks! -- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/SSL-decryption-problem-using-Mozilla-Firefox-tp4666567p4666578.html
Re: [squid-users] RE: SSL decryption problem using Mozilla Firefox
Using Wireshark?? If it works in squid, you can use squid's debug sections to debug the connection. If you have an issue with Wireshark or tcpdump traffic dumping/decryption, you should really ask on the tcpdump channel/mailing list to verify why their software does not show the content as you expect it to; maybe there is an option at the OS level, combined with tcpdump, that you are not using which would allow what you are talking about. All The Bests, Eliezer On 07/01/2014 05:33 PM, Makkok wrote: Hi Eliezer, Let me clarify the issue ...
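For completeness, Wireshark-side decryption with a server's private key is configured via the "RSA keys list" preference (entry below uses placeholder address, port and key path). One caveat worth checking in this thread, offered as a guess: this method only works for sessions that used plain RSA key exchange — sessions negotiated with (EC)DHE cipher suites have forward secrecy and cannot be decrypted from the private key alone, and browsers differ in which suites they prefer, which could explain an IE-versus-Firefox difference.

```
# Wireshark "RSA keys list" entry (Edit -> Preferences -> Protocols -> SSL),
# one record per line: address,port,protocol,keyfile
192.168.1.253,3129,http,/etc/MyCA-private-key.pem
```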
[squid-users] Re: Probs with squid 3.4.4 and cache_peer parent
It looks like your problem is caused by the failing pinger, which means there is an --enable-icmp in the config options squid was built with. So another possibility would be to remove this config option. AFAIK, in your situation the pinger would only be an advantage (or even necessary) if there were alternatives for your upstream proxy and squid had to detect the closest one. But as your squid has no choice, you should be able to disable it completely — or give proper rights to the pinger, although that is redundant. -- View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Probs-with-squid-3-4-4-and-cache-peer-parent-tp4666557p4666580.html
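If rebuilding without --enable-icmp is not convenient, squid 3.x also has a runtime knob for this (a sketch; it simply stops the pinger helper from being started):

```
# squid.conf: keep the ICMP pinger from starting at all
pinger_enable off
```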
Re: [squid-users] FATAL: No valid signing SSL certificate configured for https_port
Eliezer I have now re-created the SSL certificates by creating the CSR, sending them to the CA and getting the new certificate back. Unfortunately, I'm still getting the same error:

2014/07/01 19:14:47| Startup: Initializing Authentication Schemes ...
2014/07/01 19:14:47| Startup: Initialized Authentication Scheme 'basic'
2014/07/01 19:14:47| Startup: Initialized Authentication Scheme 'digest'
2014/07/01 19:14:47| Startup: Initialized Authentication Scheme 'negotiate'
2014/07/01 19:14:47| Startup: Initialized Authentication Scheme 'ntlm'
2014/07/01 19:14:47| Startup: Initialized Authentication.
2014/07/01 19:14:47| Processing Configuration File: /etc/squid/squid.conf (depth 0)
2014/07/01 19:14:47| Processing: hosts_file /etc/hosts
2014/07/01 19:14:47| Processing: http_port X.X.X.90:80 accel defaultsite=domain.local
2014/07/01 19:14:47| Processing: http_port X.X.X.95:80 accel defaultsite=server_1..co.uk
2014/07/01 19:14:47| Processing: https_port X.X.X.95:443 accel cert=/usr/newrprgate/CertAuth/www_domain_info/14735441.crt key=/usr/newrprgate/CertAuth/www_domain_info/domain_info_key.pem defaultsite=server_1..co.uk
2014/07/01 19:14:47| Processing: cache_peer X.X.125.205 parent 8025 0 no-query originserver name=server_1
2014/07/01 19:14:47| Processing: acl sites_server_1 dstdomain www.domain.info
2014/07/01 19:14:47| Processing: cache_peer_access server_1 allow sites_server_1
2014/07/01 19:14:47| Processing: cache_peer_access server_1 deny all
2014/07/01 19:14:47| Processing: http_port X.X.X.96:80 accel defaultsite=server_2..co.uk
2014/07/01 19:14:47| Processing: cache_peer X.X.125.2X parent 8026 0 no-query originserver name=server_2_http
2014/07/01 19:14:47| Processing: cache_peer X.X.125.2X parent 8061 0 no-query originserver ssl sslflags=DONT_VERIFY_PEER name=server_2_https
2014/07/01 19:14:47| Processing: acl sites_server_2 dstdomain www.domainhomes.org.uk
2014/07/01 19:14:47| Processing: cache_peer_access server_2_http allow sites_server_2
2014/07/01 19:14:47| Processing: cache_peer_access server_2_https allow sites_server_2
2014/07/01 19:14:47| Processing: cache_peer_access server_2_http deny all
2014/07/01 19:14:47| Processing: cache_peer_access server_2_https deny all
2014/07/01 19:14:47| Processing: http_port X.X.X.97:80 accel defaultsite=server_3..co.uk
2014/07/01 19:14:47| Processing: cache_peer X.X.125.205 parent 8025 0 no-query originserver name=server_3_http
2014/07/01 19:14:47| Processing: cache_peer X.X.125.205 parent 8061 0 no-query originserver ssl sslflags=DONT_VERIFY_PEER name=server_3_https
2014/07/01 19:14:47| Processing: acl sites_server_3 dstdomain www.domain2.info
2014/07/01 19:14:47| Processing: cache_peer_access server_3_http allow sites_server_3
2014/07/01 19:14:47| Processing: cache_peer_access server_3_https allow sites_server_3
2014/07/01 19:14:47| Processing: cache_peer_access server_3_http deny all
2014/07/01 19:14:47| Processing: cache_peer_access server_3_https deny all
2014/07/01 19:14:47| Processing: acl localnet src X.0.0.0/8 # RFCX8 possible internal network
2014/07/01 19:14:47| Processing: acl localnet src 172.X.0.0/12 # RFCX8 possible internal network
2014/07/01 19:14:47| Processing: acl localnet src 192.X8.0.0/X # RFCX8 possible internal network
2014/07/01 19:14:47| Processing: acl localnet src fc00::/7 # RFC 4193 local private network range
2014/07/01 19:14:47| aclIpParseIpData: IPv6 has not been enabled.
2014/07/01 19:14:47| Processing: acl localnet src fe80::/X # RFC 4291 link-local (directly plugged) machines
2014/07/01 19:14:47| aclIpParseIpData: IPv6 has not been enabled.
2014/07/01 19:14:47| Processing: acl SSL_ports port 443
2014/07/01 19:14:47| Processing: acl Safe_ports port 80 # http
2014/07/01 19:14:47| Processing: acl Safe_ports port 21 # ftp
2014/07/01 19:14:47| Processing: acl Safe_ports port 443 # https
2014/07/01 19:14:47| Processing: acl Safe_ports port 70 # gopher
2014/07/01 19:14:47| Processing: acl Safe_ports port 2X # wais
2014/07/01 19:14:47| Processing: acl Safe_ports port X25-65535 # unregistered ports
2014/07/01 19:14:47| Processing: acl Safe_ports port 280 # http-mgmt
2014/07/01 19:14:47| Processing: acl Safe_ports port 488 # gss-http
2014/07/01 19:14:47| Processing: acl Safe_ports port 591 # filemaker
2014/07/01 19:14:47| Processing: acl Safe_ports port 777 # multiling http
2014/07/01 19:14:47| Processing: acl CONNECT method CONNECT
2014/07/01 19:14:47| Processing: http_access deny !Safe_ports
2014/07/01 19:14:47| Processing: http_access deny CONNECT !SSL_ports
2014/07/01 19:14:47| Processing: http_access allow localhost manager
2014/07/01 19:14:47| Processing: http_access deny manager
2014/07/01 19:14:47| Processing: acl all_internet src all
2014/07/01 19:14:47| Processing: http_access allow tte_network
2014/07/01 19:14:47| Processing: http_access allow ltdc_network
2014/07/01 19:14:47| Processing: http_access allow lldc_network
2014/07/01 19:14:47|
Re: [squid-users] FATAL: No valid signing SSL certificate configured for https_port
What is the output of squid -v when using 3.4.3? I am not sure what the issue is, and I can test it with my own certificate later on. If you see that I have not tested it within the next week, try to send me an email to remind me that it was not verified yet. Eliezer On 07/01/2014 09:25 PM, John Gardner wrote:

2014/07/01 19:14:47| Initializing https_port X.X.X.95:443 SSL context
2014/07/01 19:14:47| Using certificate in /usr/newrprgate/CertAuth/www_domain_info/14735441.crt
2014/07/01 19:14:47| storeDirWriteCleanLogs: Starting...
2014/07/01 19:14:47| Finished. Wrote 0 entries.
2014/07/01 19:14:47| Took 0.00 seconds ( 0.00 entries/sec).
FATAL: No valid signing SSL certificate configured for https_port X.X.X.95:443
Squid Cache (Version 3.4.3): Terminated abnormally.
CPU Usage: 0.064 seconds = 0.051 user + 0.013 sys
Maximum Resident Size: 32032 KB
Page faults with physical i/o: 0

I think I might try the Oracle 6.5 repo version Squid 3.1 RPM which comes with the distro first, before I start compiling a new version of Squid. John
Re: [squid-users] FATAL: No valid signing SSL certificate configured for https_port
On 07/01/2014 09:25 PM, John Gardner wrote: Eliezer I have now re-created the SSL certificates by creating the CSR, sending them to the CA and getting the new certificate back. Unfortunately, I'm still getting the same error... I have just understood something: I did not release an Oracle 3.4.3 RPM but a 3.4.5, so the output of squid -v should be the first thing to verify, then the list of installed packages using: yum list installed Also, I just noticed: did you generate a PEM certificate from the CSR?? There is a very detailed process described at: http://wiki.squid-cache.org/ConfigExamples/Reverse/SslWithWildcardCertifiate which also helps to create a rootCA, a CSR and then a certificate. It will probably not be authorized by your browser, but it will be accepted by squid. The wiki page will give you all the details you should know about the process at a quick look. Try to follow the instructions to make sure whether squid is working or not working with some certificates. I will try to provide later a certificate that works with my server (not my real one...). Eliezer
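The rootCA/CSR/certificate process the wiki describes boils down to a few openssl commands. A minimal sketch with throwaway names (MyTestCA, www.example.com — placeholders, not the thread's real CA):

```shell
# 1. Create a root CA key and a self-signed CA certificate
openssl genrsa -out myCA.key 2048
openssl req -x509 -new -key myCA.key -days 365 -subj "/CN=MyTestCA" -out myCA.pem

# 2. Create the server key and a CSR (the CSR is what gets sent to a real CA)
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=www.example.com" -out server.csr

# 3. Sign the CSR with the CA to obtain the PEM certificate squid needs --
#    the CSR itself is NOT usable as the cert= argument
openssl x509 -req -in server.csr -CA myCA.pem -CAkey myCA.key \
    -CAcreateserial -out server.crt -days 365
```

After this, server.crt and server.key are what go into the cert=/key= options of an https_port line.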
Re: [squid-users] FATAL: No valid signing SSL certificate configured for https_port
Eliezer A! I've just found the problem... SELinux. Despite initially running setenforce Permissive, I must have forgotten to set it on reboot. I'm now running the 3.4.5 RPM from here: http://www1.ngtech.co.il/rpm/oracle/6/x86_64/ I apologise for wasting your time; it's now all running successfully. Thanks John On 1 July 2014 20:26, Eliezer Croitoru elie...@ngtech.co.il wrote: I have just understood something: I did not release an Oracle 3.4.3 RPM but a 3.4.5 ...
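For reference, setenforce only changes the running mode until the next reboot; to make it stick, the mode has to be set in the SELinux config file (standard path on RHEL/Oracle Linux):

```
# /etc/selinux/config -- unlike "setenforce", this survives a reboot
SELINUX=permissive
SELINUXTYPE=targeted
```

The longer-term fix is usually to keep SELinux enforcing and grant squid access to the non-standard certificate paths (e.g. via a local policy module) rather than staying permissive.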
Re: [squid-users] FATAL: No valid signing SSL certificate configured for https_port
OK, I have tested a brand new 3.4.5 RPM that I have just built (not the one that is in the repo, but from the same SRPM) and it works just fine:

1404244361.354 3 192.168.10.99 TCP_MISS/500 4054 GET https://192.168.10.124:8443/favicon.ico - HIER_NONE/- text/html

To verify it against your settings:

https_port 8443 accel cert=/etc/squid/cloud.ngtech.co.il.crt key=/etc/squid/cloud.ngtech.co.il.key cipher=ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM options=NO_SSLv2 defaultsite=server_1.uk

You can download the public and private key at: http://www1.ngtech.co.il/squid/cert.tar It's not a valid certificate due to the expiration date, but it was in use until somewhere in 2013... and it still runs. My assumption is that instead of using a key and a certificate, you are using the CSR, which is only the middle of the process of getting a valid key. All The Bests, Eliezer P.S. I am working on the 3.4.6 RPM for CentOS 6 and Oracle 6; it will probably be released next week. On 07/01/2014 09:39 PM, Eliezer Croitoru wrote: What is the output of squid -v when using 3.4.3? ...
Re: [squid-users] FATAL: No valid signing SSL certificate configured for https_port
It's ok. But it shows one nasty thing: squid doesn't show a permission-denied error or output that could point us to the issue at hand and verify why. This is a BUG in my opinion, but I do not know (yet) how to look at it. It states that an error occurred, but it reads like a syntax error to me rather than an access error. What do you think about the description of the bug? Eliezer On 07/01/2014 10:58 PM, John Gardner wrote: Eliezer A! I've just found the problem... SELinux. ...