[squid-users] Re: squid caching dynamic content

2014-05-01 Thread babajaga
You should start here:
http://wiki.squid-cache.org/ConfigExamples/DynamicContent






Re: [squid-users] squid caching dynamic content

2014-05-01 Thread Eliezer Croitoru

On 05/01/2014 08:26 AM, Mario Almeida wrote:

How to set up dynamic caching of youtube, facebook etc.?

facebook is cache friendly as the key...
youtube is another story...

Eliezer


[squid-users] Issue: client_delay_pools and related directives

2014-05-01 Thread Laz C. Peterson
Hi there Squid users —

I’m looking for some insight into this matter … I can’t figure out what I’m doing 
wrong.  I am running Squid version 3.3.8 on Ubuntu 14.04.

I have a very basic squid.conf, though it links to multiple include files.  Is 
there a squid3 command I can run to show the entire active configuration?

Anyhow, the issue that I’m having is that when configuring client_delay_pools per 
the Squid configuration documentation, all requests immediately get “connection 
reset by peer”.  It does not seem like Squid restarts or anything, though I 
only have production servers to “test” on right now.  I have not turned debug 
mode on yet, but nothing is logged.

Essentially, this is the relevant configuration:

acl slowdown src 10.3.2.82/32 10.0.2.162/32
client_delay_pools 1
client_delay_initial_bucket_level 50
client_delay_parameters 1 1024 2048
client_delay_access 1 allow slowdown
client_delay_access 1 deny all
client_delay_parameters 1 2048 32000

Has anyone had success with this?  We have users who upload large images and 
suck up the entire bandwidth when doing so.  Currently we are limiting 
bandwidth at the router, but that’s getting to be a pain when it’s just a 
couple systems uploading to various remote locations.

Any insight or direction would be greatly appreciated.  Thanks so much!
~Laz

[squid-users] Re: squid caching dynamic content

2014-05-01 Thread babajaga
youtube is another story... 
Yep. What's written in the wiki seems to be obsolete (once again): AFAICS,
the id for a video is no longer unique, which means it can no longer be used
as part of the Store-ID :-(

Obviously, the guys from youtube are reading here as well, and doing
everything they reasonably can to harm caching.





Re: [squid-users] Re: squid caching dynamic content

2014-05-01 Thread Eliezer Croitoru

On 05/01/2014 07:36 PM, babajaga wrote:

Yep. What's written in the wiki seems to be obsolete (once again): AFAICS,
the id for a video is no longer unique, which means it can no longer be used
as part of the Store-ID :-(

Obviously, the guys from youtube are reading here as well, and doing
everything they reasonably can to harm caching.

It is unique, but there are other properties which force a re-fetch or 
full fetch.
Actually I was thinking about it, and I do not mind that youtube\google 
will read squid.
Quite the opposite: they should read squid and also participate, 
and tell us what they think and how to do things (if they want).


Using special scripts\programs youtube can be cached.
The main issue is the algorithm that should be used to invalidate 
objects and rate them.
Let's say we have 3000 users that have watched a video: should we continue to 
cache it in preference to other videos? etc.
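For reference, those scripts hook into squid 3.4+ through the Store-ID helper 
interface, wired up roughly like this (the helper path and URL pattern are 
purely illustrative, and writing a helper that copes with the URL churn 
described above is the hard part). The helper reads one URL per line on stdin 
and replies "OK store-id=<key>" or "ERR":

# sketch only: not the wiki's helper
store_id_program /usr/local/bin/storeid_youtube
store_id_children 5 startup=1
acl yt_chunks url_regex -i \.googlevideo\.com/videoplayback
store_id_access allow yt_chunks
store_id_access deny all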


Eliezer


[squid-users] Some https sites load

2014-05-01 Thread Eric Vanderveer
Hi, just recently my dansguardian/squid server started having problems
loading https sites.  Not all https sites do this: when I go to
twitter.com I have no problems, but when I go to facebook.com it
doesn't load.  I have no idea where to look for this.  Any ideas?

Thanks

Eric Vanderveer


[squid-users] https interception some whitelisted sites not working properly

2014-05-01 Thread Ikna Nou
Hello List, 

This is my situation:
squid3.4.4 on Debian compiled from source (with options --enable-ssl and 
--enable-ssl-crtd)

It works quite well.

Now, I'm trying to create a list of ssl whitelisted sites, using the ssl_bump 
feature and following: 

http://wiki.squid-cache.org/Features/SslBump


With some sites added to this list (like Google, Hotmail, etc.) the certificate 
presented to the client isn't the original one but the one created by squid. 


It happens with some sites, particularly these; others are OK.

Have you folks gone through these issues?





Below are my squid.conf settings relating to this. Thanks in advance!



http_port 3129
http_port 3128 intercept
https_port 3127 intercept ssl-bump generate-host-certificates=on 
dynamic_cert_mem_cache_size=256MB cert=/etc/squid3/certs/ssl/public.pem 
key=/etc/squid3/certs/ssl/private.pem
## --\
acl broken_sites dstdomain /etc/squid3/acl/ssl_whitelist.acl
#acl broken_sites dstdomain .cisco.com .virustotal.com .mail-archive.com 
.facebook.com

always_direct allow broken_sites
ssl_bump none localhost
ssl_bump none broken_sites
#ssl_bump server-first !broken_sites 
sslproxy_cert_error allow all
sslproxy_flags DONT_VERIFY_PEER
ssl_bump server-first all 

[squid-users] Re: squid caching dynamic content

2014-05-01 Thread babajaga
It is unique 
That is past. I checked my favourite video today once more in detail, as I
was wondering about the dropping hit rate.
Might be country dependent, though.
Please verify:
Video from Dire Straits: http://www.youtube.com/watch?v=8Pa9x9fZBtY
From my logs:
1398957788.924 40 88.78.165.175 TCP_MISS/200 55727 GET
http://r7---sn-a8au-nuae.googlevideo.com/videoplayback?c=web&clen=10270797&cpn=W_QOZclsT2zLDigl&cver=as3&dur=646.652&expire=1398981332&fexp=900225%2C912524%2C945043%2C910207%2C916611%2C937417%2C913434%2C936923%2C3300073%2C3300114%2C3300131%2C3300137%2C3300164%2C3310366%2C3310635%2C3310649&gcr=us&gir=yes&id=o-ALN3b5RL_HFbn0uq6wVdN7ok381e5Klr5aXnm2-j_rVg&ip=209.239.112.105&ipbits=0&itag=140&keepalive=yes&key=yt5&lmt=1389150223974581&ms=au&mt=1398957076&mv=m&mws=yes&range=10215424-10455039&ratebypass=yes&signature=C94B087DF4A3CF0CCF06AFE4FDC7A729F5C6B606.BF87024E4845DCD025BF2AAB9E408F4824E32085&source=youtube&sparams=clen%2Cdur%2Cgcr%2Cgir%2Cid%2Cip%2Cipbits%2Citag%2Clmt%2Csource%2Cupn%2Cexpire&sver=3&upn=LnXkp11yenY
re27md0x54bl26 CARP/127.0.0.1 application/octet-stream

1398959279.449 41 88.78.165.175 TCP_MISS/200 55727 GET
http://r7---sn-a8au-nuae.googlevideo.com/videoplayback?c=web&clen=10270797&cpn=MCtgQsjQaEvCa6lM&cver=as3&dur=646.652&expire=1398981332&fexp=947338%2C916624%2C929313%2C902534%2C937417%2C913434%2C936923%2C902408%2C3300073%2C3300114%2C3300131%2C3300137%2C3300164%2C3310366%2C3310635%2C3310649&gcr=us&gir=yes&id=o-AJTyH4Z0kYlbLupmzkm6UGeWxO2d2KyQNUiluKAS28Ku&ip=209.239.112.105&ipbits=0&itag=140&keepalive=yes&key=yt5&lmt=1389150223974581&ms=au&mt=1398958548&mv=m&mws=yes&range=10215424-10455039&ratebypass=yes&signature=77AB96CA03864D43DC0A917F6AA4D0EC4A0C739B.21F0DE6753599D7298439D8B2C0496574061A641&source=youtube&sparams=clen%2Cdur%2Cgcr%2Cgir%2Cid%2Cip%2Cipbits%2Citag%2Clmt%2Csource%2Cupn%2Cexpire&sver=3&upn=xv54PP7C3To
re27md0x54bl26 CARP/127.0.0.2 application/octet-stream





Re: [squid-users] Some https sites load

2014-05-01 Thread Eliezer Croitoru

Hey Eric,

What version of squid?
How are squid and dansguardian connected?
What issues with HTTPS? It is pretty hard to understand the issue at 
hand.

What error do you get in the web browser? What shows up in the logs?

Eliezer

On 05/01/2014 11:24 PM, Eric Vanderveer wrote:

Hi, just recently my dansguardian/squid server started having problems
loading https sites.  Not all https sites do this: when I go to
twitter.com I have no problems, but when I go to facebook.com it
doesn't load.  I have no idea where to look for this.  Any ideas?

Thanks

Eric Vanderveer




RE: [squid-users] how to use refresh_pattern correct

2014-05-01 Thread Lawrence Pingree
Thanks Dan,
I get about a 40% cache hit rate with no real issues with websites, and my web
surfing performance is sub-3-second response times in most cases.



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Dan Charlesworth [mailto:d...@getbusi.com] 
Sent: Monday, April 28, 2014 9:18 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] how to use refresh_pattern correct

Hi Lawrence

I think that's the most extensive list of refresh patterns I've seen in one
place, for a forward proxy. Props.

Is anyone else using a collection like this and care to comment on its
performance / viability?

Dan

On 29 Apr 2014, at 2:05 pm, Lawrence Pingree geek...@geek-guy.com wrote:

 Try using my refresh patterns:
 http://www.lawrencepingree.com/2014/01/01/optimal-squid-config-conf-for-3-3-9/
 
 
 
 
 Best regards,
 The Geek Guy
 
 Lawrence Pingree
 http://www.lawrencepingree.com/resume/
 
 Author of The Manager's Guide to Becoming Great
 http://www.Management-Book.com
  
 
 
 -Original Message-
 From: Amos Jeffries [mailto:squ...@treenet.co.nz]
 Sent: Monday, April 28, 2014 10:15 AM
 To: squid-users@squid-cache.org
 Subject: Re: [squid-users] how to use refresh_pattern correct
 
 On 29/04/2014 2:02 a.m., tile1893 wrote:
 Hi,
 
 I'm running squid on openwrt and I want squid to cache all requests
 which are made.
 I think that this is done by defining refresh_pattern in the squid config.
 But in my opinion, no matter how I configure them, they are always
 ignored by squid and will never be used.
 
 for example:
 refresh_pattern www  5000   100%  1  override-expire
 override-lastmod
 ignore-reload ignore-no-store ignore-must-revalidate ignore-private 
 ignore-auth store-stale
 
 or:
 refresh_pattern  www  1200  100% 6000 override-expire
 
 But they both don't work.
 Any idea how to configure squid so that it caches every request?! Do I
 have to enable those refresh_patterns somehow?!
 
 FYI: Caching everything is not possible. HTTP protocol requires at 
 least some non-cached traffic just to operate.
 
 Now that your expectations have been lowered ...
 
 *correct* usage is not to have any of the override-* or ignore-* 
 options at all. But correct and practical are not always the same. Use 
 the options if you are required to, but only then.
 
 There is also a very large difference between HTTP/1.0 caching and
 HTTP/1.1 caching you need to be aware of. In HTTP/1.0 there was 
 HIT/MISS and very little else. In HTTP/1.1 there is also revalidation 
 (304, REFRESH, IMS,
 INM) which is caching the [large] bodies of objects while still 
 sending the [small] headers back and forth - giving the best of both
worlds.
 
 
 So tile1893...
 what version of Squid do you have?
 how are you testing it?
 what makes you think it's not caching?
 how much cache space do you have?
 what are your maximum object limits?
 what order is your cache, store and object related config options?
 what traffic rate (requests per second/minute) are you serving?
 what does redbot.org say about the URLs you are trying to cache?
 
 (Maybe more later but that should do for starters.)
 
 Amos
 
 
 





Re: [squid-users] Re: squid caching dynamic content

2014-05-01 Thread Eliezer Croitoru

Have not seen this video but:
Have you ever heard about multiplexing? multi-streaming? using multiple 
streams for the same video?


Eliezer

On 05/02/2014 12:46 AM, babajaga wrote:

It is unique 

That is past. I checked my favourite video today once more in detail, as I
was wondering about the dropping hit rate.
Might be country dependent, though.
Please verify:
Video from Dire Straits: http://www.youtube.com/watch?v=8Pa9x9fZBtY
 From my logs:
1398957788.924 40 88.78.165.175 TCP_MISS/200 55727 GET




Re: [squid-users] https interception some whitelisted sites not working properly

2014-05-01 Thread Eliezer Croitoru

Hey there,

This was asked twice in the past month, if I'm not wrong.
At the stage when ssl_bump decides, squid doesn't have any sense of 
dstdomain.
That means by the time squid has bumped the connection and knows the site 
name, the connection is already bumped; when you want to apply a whitelist 
at that point, squid only works at the IP level.

So instead use iptables and/or a squid dst ACL as the whitelist level.
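A rough sketch of the dst-based variant (the address file is hypothetical and
you would have to maintain it yourself, since one domain can map to many IPs):

# sketch only: ssl_whitelist_ips.acl is a hypothetical file of IPs/CIDRs
acl nobump_ips dst "/etc/squid3/acl/ssl_whitelist_ips.acl"
ssl_bump none nobump_ips
ssl_bump server-first all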

Eliezer

On 05/02/2014 12:21 AM, Ikna Nou wrote:

acl broken_sites dstdomain /etc/squid3/acl/ssl_whitelist.acl





Re: [squid-users] Issue: client_delay_pools and related directives

2014-05-01 Thread Eliezer Croitoru

Hey,

As for this:
On 05/01/2014 05:51 PM, Laz C. Peterson wrote:

Anyhow, the issue that I’m having is when configuring client_delay_pools per 
the Squid configuration documentation, all requests immediately get “connection 
reset by peer”.  It does not seem like Squid restarts or anything, though I 
only have production servers to “test” on right now.  Have not turned debug 
mode on yet, but there is nothing logged.


To understand the issue:
You have a squid instance that runs fine, right? Until you apply the client 
delay pools?
If it is as you say then it's very simple to test and verify.

The first thing to do is run:
squid -kparse
and see if there are any errors in the parsing.
After that I can test it on a test node here without any problem.
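As a side note, the parse output walks every include file, so it also shows 
the whole configuration squid actually processes.

For reference, a minimal single-pool setup of these directives looks roughly 
like this (the rate and burst values, in bytes per second and bytes, are only 
illustrative; note that client_delay_parameters appears exactly once per pool):

acl slowdown src 10.3.2.82/32
client_delay_pools 1
client_delay_initial_bucket_level 50
client_delay_parameters 1 1024 2048
client_delay_access 1 allow slowdown
client_delay_access 1 deny all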

Eliezer


RE: [squid-users] feature requests

2014-05-01 Thread Lawrence Pingree
Hi Amos,
Thanks for your help in understanding my request. I have attempted to create
a rock store but was unsuccessful. There doesn't seem to be very good
guidance on the proper step-by-step process of creating a rock store, and I
ran into crashes the last time I attempted it. Also, I am using an x86
platform (32-bit) with multiple cores; when I attempted to use SMP mode with
multiple workers, my intercept mode instantly stopped functioning. I
couldn't figure out what was wrong, so I'd love to get better guidance on
this as well.



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, April 29, 2014 1:20 AM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] feature requests

On 29/04/2014 4:17 p.m., Lawrence Pingree wrote:
 
 I would like to request two features that could potentially help with 
 performance.
 

See item #1 Wait ...

http://wiki.squid-cache.org/SquidFaq/AboutSquid#How_to_add_a_new_Squid_feature.2C_enhance.2C_of_fix_something.3F

Some comments to think about before you make the formal feature request bug.
Don't let these hold you back, but they are the bigger details that will
need to be overcome for these features to be accepted and useful.

I will also suggest you test out the 3.HEAD Squid code to see what we have
done recently with collapsed_forwarding, SMP support and large rock caches.
Perhaps the startup issues that make you want these are now resolved.
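For reference, a minimal SMP-plus-rock setup of the kind being discussed looks
roughly like this (the path and sizes are illustrative; the 32 KB per-object
limit is what the small-slot rock store in 3.3/3.4 enforces and what large rock
in 3.HEAD relaxes):

# sketch only: two kid workers sharing one rock cache_dir
workers 2
cache_dir rock /var/spool/squid3/rock 4096 max-size=32768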


 1. I would like to specify a max age for memory-stored hot objects 
 different than those specified in the generic cache refresh patterns.

refresh_patterns are not generic. They are as targeted as the regex pattern
you write.

The only difference between memory, disk or network sources for a cache is
access latency. Objects are promoted from disk to memory when used, and
pushed from memory to disk when more memory space is needed.


I suspect this feature will result in disk objects' maximum age stabilizing
at the same value as the memory cache is set to.
 - with a memory age limit higher than disk: objects needing to push to disk
will get erased as they are too old.
 - with a memory age limit lower than disk: objects promoted from disk will get
erased or revalidated to be within the memory limit (erasing the obsoleted
disk copy).
So either way anything not meeting the memory limit is erased. Disk will
only be used for the objects younger than the memory limit which need to
overspill into the slower storage area where they can age a bit before next
use ... which is effectively how it works today.


Additionally, there is the fact that objects *are* cached past their max-age
values. All that happens in HTTP/1.1 when an old object is requested is a
revalidation check over the network (used to be a re-fetch in HTTP/1.0). The
revalidation MAY supply a whole new object, or just a few new headers.
 - a memory age limit higher than disk causes the disk (already slow) to
have additional network lag for revalidation which is not applied to the in
memory objects.
 - a memory age limit lower than disk places the extra network lag on memory
objects.

... what benefit is gained from adding latency to one of the storage areas
which is not applicable to the same object when it is stored to the other
area?


The overarching limit on all this is the *size* of the storage areas, not
the object age. If you are in the habit of setting very large max-age values
on refresh_pattern to increase caching, take a look at your storage LRU/LFU
age statistics sometime. You might be in for a bit of a surprise.


 
 2. I would like to pre-load hot disk objects during startup so that 
 squid is automatically re-launched with the memory cache populated. 
 I'd limit this to the maximum memory cache size amount.
 

This one is not as helpful as it seems when done by a cache. Loading on
demand avoids several performance problems which pre-loading encounters in
full.

 1) Loading the objects takes time. Resulting in a slower time until first
request.

Loading on-demand we can guarantee that the first client starts receiving
its response as fast as possible. There is no waiting for GB of other
objects to fully load first, or even the end of the current object to
complete loading.


 2) Loading based on previous experience is at best an educated guess.
That can still load the wrong things, wasting the time spent.

Loading on-demand guarantees that only the currently hot objects are loaded.
Regardless of what was hot a few seconds, minutes or days ago when the proxy
shutdown. Freeing up CPU cycles and disk waiting time for servicing more
relevant requests.


 3) A large portion of traffic in HTTP/1.1 needs to be validated over the
network using the new client's request header details before use.

This comes back to (1). As soon as the headers are loaded the network

[squid-users] Re: squid caching dynamic content

2014-05-01 Thread babajaga
No. But what has that to do with the varying id for the same video, which makes
the documented Store-ID algorithm obsolete?
This varying id yt also used some time ago already, for quite a while.

(BTW: about a year ago yt also used real range requests, and changed it after a
few months to their own range spec in the URL. Never mentioned in the wiki, AFAIK.)





RE: [squid-users] how to use refresh_pattern correct

2014-05-01 Thread Lawrence Pingree
I actually have no real issues. It works very well to be quite honest. :)



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Amos Jeffries [mailto:squ...@treenet.co.nz] 
Sent: Tuesday, April 29, 2014 6:45 AM
To: Dan Charlesworth; Lawrence Pingree
Cc: squid-users@squid-cache.org
Subject: Re: [squid-users] how to use refresh_pattern correct

On 29/04/2014 4:16 p.m., Dan Charlesworth wrote:
 Hi Lawrence
 
 I think that's the most extensive list of refresh patterns I've seen 
 in one place, for a forward proxy. Props.
 
 Is anyone else using a collection like this and care to comment on its 
 performance / viability?

You can find many similar lists in the archives of this mailing list if you
pick a really weird file extension type and search for it plus
refresh_pattern. eg.
https://www.google.co.nz/search?q=tgz+refresh_pattern+site%3Awww.squid-cache.org


As for viability: it uses most of the HTTP-violation refresh_pattern options
based on nothing more than file-extension patterns. I have no doubt that they
cause many things to be stored and logged as HITs, but the reliability of the
responses coming out of that cache is suspect.
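To make the distinction concrete, compare these two lines (the extension list
is arbitrary): the first forces caching through HTTP violations, the second
only supplies a heuristic for responses that carry no explicit freshness
information:

refresh_pattern -i \.(zip|tgz)$ 10080 90% 43200 override-expire ignore-no-store ignore-private
refresh_pattern -i \.(zip|tgz)$ 1440 20% 10080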

Amos





Re: [squid-users] Issue: client_delay_pools and related directives

2014-05-01 Thread Laz C. Peterson
Hello Eliezer,

Yes, the squid instance runs wonderfully.  And actually we use standard 
delay_pools with no problems.  Only when introducing client_delay_pools do 
the problems start.

Here are the results of “squid -kparse” … Again, this is using Squid 3.3.8 on 
Ubuntu 14.04.

2014/05/01 15:08:21| Startup: Initializing Authentication Schemes ...
2014/05/01 15:08:21| Startup: Initialized Authentication Scheme 'basic'
2014/05/01 15:08:21| Startup: Initialized Authentication Scheme 'digest'
2014/05/01 15:08:21| Startup: Initialized Authentication Scheme 'negotiate'
2014/05/01 15:08:21| Startup: Initialized Authentication Scheme 'ntlm'
2014/05/01 15:08:21| Startup: Initialized Authentication.
2014/05/01 15:08:21| Processing Configuration File: /etc/squid3/squid.conf 
(depth 0)
2014/05/01 15:08:21| Processing: ident_lookup_access allow all
2014/05/01 15:08:21| Processing: dns_nameservers 127.0.0.1
2014/05/01 15:08:21| Processing: visible_hostname ocr-sab-lx0.ocretina.corp
2014/05/01 15:08:21| Processing: acl snmp_squid snmp_community p4r4v1s
2014/05/01 15:08:21| Processing: acl paravis_hq src 10.0.0.0/16
2014/05/01 15:08:21| Processing: snmp_port 3401
2014/05/01 15:08:21| Processing: snmp_access allow snmp_squid paravis_hq
2014/05/01 15:08:21| Processing: snmp_access deny all
2014/05/01 15:08:21| Processing: snmp_incoming_address 10.3.1.11
2014/05/01 15:08:21| Processing: snmp_outgoing_address 10.3.1.11
2014/05/01 15:08:21| Processing: acl SSL_ports port 443
2014/05/01 15:08:21| Processing: acl Safe_ports port 80  # http
2014/05/01 15:08:21| Processing: acl Safe_ports port 21  # ftp
2014/05/01 15:08:21| Processing: acl Safe_ports port 443 # https
2014/05/01 15:08:21| Processing: acl Safe_ports port 70  # gopher
2014/05/01 15:08:21| Processing: acl Safe_ports port 210 # wais
2014/05/01 15:08:21| Processing: acl Safe_ports port 1025-65535  # unregistered 
ports
2014/05/01 15:08:21| Processing: acl Safe_ports port 280 # http-mgmt
2014/05/01 15:08:21| Processing: acl Safe_ports port 488 # gss-http
2014/05/01 15:08:21| Processing: acl Safe_ports port 591 # filemaker
2014/05/01 15:08:21| Processing: acl Safe_ports port 777 # multiling 
http
2014/05/01 15:08:21| Processing: acl CONNECT method CONNECT
2014/05/01 15:08:21| Processing: include /etc/squid3/conf.d/ocr.conf
2014/05/01 15:08:21| Processing Configuration File: /etc/squid3/conf.d/ocr.conf 
(depth 1)
2014/05/01 15:08:21| Processing: include /etc/squid3/conf.d/ocr/ocr.acls
2014/05/01 15:08:21| Processing Configuration File: 
/etc/squid3/conf.d/ocr/ocr.acls (depth 2)
2014/05/01 15:08:21| Processing: acl ocr_unrest_users ident -i 
/etc/squid3/conf.d/ocr/unrest.users
2014/05/01 15:08:21| strtokFile: /etc/squid3/conf.d/ocr/unrest.users not found
2014/05/01 15:08:21| Warning: empty ACL: acl ocr_unrest_users ident -i 
/etc/squid3/conf.d/ocr/unrest.users
2014/05/01 15:08:21| Processing: acl ocr_unrest_comps src 
/etc/squid3/conf.d/ocr/unrest.comps
2014/05/01 15:08:21| Processing: acl adsites dstdomain -i 
/etc/squid3/conf.d/ads.sites
2014/05/01 15:08:21| Processing: acl adregex url_regex -i 
/etc/squid3/conf.d/ads.regex
2014/05/01 15:08:21| Processing: acl paravis src 10.0.0.0/16
2014/05/01 15:08:21| Processing: acl ocr_unrest_doc src 10.3.1.231-10.3.1.235/32
2014/05/01 15:08:21| Processing: acl ocr_gary src 10.3.1.181-10.3.1.189/32
2014/05/01 15:08:21| Processing: acl laz src 10.3.1-6.31/32
2014/05/01 15:08:21| Processing: acl ocr_chen src 10.3.1.191/32
2014/05/01 15:08:21| Processing: acl ocr_chen src 10.3.2.191/32
2014/05/01 15:08:21| Processing: acl ocr_chen src 10.3.3.191/32
2014/05/01 15:08:21| Processing: acl ocr_chen src 10.3.4.191/32
2014/05/01 15:08:21| Processing: acl ocr_chen src 10.3.5.191/32
2014/05/01 15:08:21| Processing: acl ocr_chen src 10.3.6.191/32
2014/05/01 15:08:21| Processing: acl ocr_clinic src 10.3.2-6.101-110/32
2014/05/01 15:08:21| Processing: acl ocr_exam src 10.3.2-6.111-120/32
2014/05/01 15:08:21| Processing: acl ocr_va src 10.3.2-6.121-130/32
2014/05/01 15:08:21| Processing: acl ocr_insurance src 10.3.1.101-10.3.1.110/32 
10.3.1.161-10.3.1.170/32
2014/05/01 15:08:21| Processing: acl ocr_admin src 10.3.1.121-10.3.1.130/32
2014/05/01 15:08:21| Processing: acl ocr_study src 10.3.1.141-142/32
2014/05/01 15:08:21| Processing: acl ocr_testing src 10.3.2-6.81-90/32
2014/05/01 15:08:21| Processing: acl ocr_doctor_personal src 10.3.1-6.231-240/32
2014/05/01 15:08:21| Processing: acl ocr_doctor_systems src 10.3.2-6.131-135/32
2014/05/01 15:08:21| Processing: acl ocr_dhcp src 10.3.1-6.201-230/32
2014/05/01 15:08:21| Processing: acl ocr src 10.3.0.0/16 10.0.2.0/24
2014/05/01 15:08:21| Processing: acl ocr_white dstdomain 
/etc/squid3/conf.d/ocr/white.list
2014/05/01 15:08:21| Processing: acl ocr_audio url_regex -i 
/etc/squid3/conf.d/ocr/audio.stream
2014/05/01 15:08:21| Processing: acl ocr_audiosites dstdomain 
/etc/squid3/conf.d/ocr/audio.sites
2014/05/01 15:08:21| 

Re: [squid-users] Re: Access denied when in intercept mode

2014-05-01 Thread Tobias Krais

Hi nettrino,

I am very interested in your question, because I plan to do the same. 
But I cannot see any configuration in your mails (see below). Were they 
sent correctly?


Greetings,

Tobias

On 01.05.2014 at 07:04, nettrino wrote:

Sry for the second post, my iptables-save output is







RE: [squid-users] Access denied when in intercept mode

2014-05-01 Thread Lawrence Pingree
If you are getting access denied, it is most likely a squid ACL. By default
squid.conf has most things blocked by its ACLs.
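The stock rules look roughly like this (the localnet ranges vary between squid
versions); if the intercepted clients' subnet is not covered by an allow rule
before the final deny, they get the Access Denied page:

acl localnet src 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localnet
http_access allow localhost
http_access deny all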



Best regards,
The Geek Guy

Lawrence Pingree
http://www.lawrencepingree.com/resume/

Author of The Manager's Guide to Becoming Great
http://www.Management-Book.com
 


-Original Message-
From: Eliezer Croitoru [mailto:elie...@ngtech.co.il] 
Sent: Wednesday, April 30, 2014 8:34 PM
To: squid-users@squid-cache.org
Subject: Re: [squid-users] Access denied when in intercept mode

Hey there,

it depends on the topology and your current iptables and squid.conf state.
Can you share:
squid -v
iptables-save
cat squid.conf
(remove any confidential data and spaces + comments from the squid.conf)

Eliezer

On 05/01/2014 03:51 AM, nettrino wrote:
 Hello all, I am trying to set up a squid proxy to filter HTTPS 
 traffic. In particular, I want to connect an Android device to the 
 proxy and examine the requests issued by various applications.

 When 'intercept' mode is on, I get 'Access Denied'. I would be most 
 grateful if you gave me some hint on what is wrong with my 
 configuration.

 I have been following this tutorial:
 http://pen-testing-lab.blogspot.com/2013/11/squid-3310-transparent-pro
 xy-for-http.html

 My squid version is 3.4.4.2 running in Ubuntu with kernel
3.5.0-48-generic.



 Currently I am testing by connecting from my laptop (squidCA.pem is added 
 in my browser).
 On the machine where squid is running I have flushed all my iptables 
 rules and only have


 Any help is greatly appreciated.

 Thank you









[squid-users] Re: Access denied when in intercept mode

2014-05-01 Thread nettrino
I don't know why this does not show properly. Could you try  this link
http://squid-web-proxy-cache.1019090.n4.nabble.com/Access-denied-when-in-intercept-mode-td4665775.html
 
?





Re: [squid-users] Re: Access denied when in intercept mode

2014-05-01 Thread Amos Jeffries
On 2/05/2014 10:53 a.m., nettrino wrote:
 I don't know why this does not show properly. Could you try  this link
 http://squid-web-proxy-cache.1019090.n4.nabble.com/Access-denied-when-in-intercept-mode-td4665775.html
  
 ?

FYI: it is happening because you are posting using a web interface by
Nabble. Attachments, quotations and HTML submissions are not delivered
to the mailing list the project runs.

Amos