[ https://issues.apache.org/jira/browse/TS-2643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14340043#comment-14340043 ]

Faysal Banna commented on TS-2643:
----------------------------------

Good day, guys.
The long-awaited bandwidth throttling from origin servers has finally come
through.
I have been testing it for the last 12 hours and I can say it works great so far.
Thanks to Sudheer Vinukonda for his efforts and for the patch he provided in
TS-3414.

Idea:

Clients (xxxPort) <----------------> (8080) ATS (YYY-Port) <----------------> (80) Origin Server

The client has a bandwidth rate of 512kbit/s, while ATS has 10Mbit/s of bandwidth.
If a single request is made, the client gets the requested object at 512kbit/s,
while ATS fetches it from the origin server at the maximum speed it can pull, in
this case up to 10Mbit/s, regardless of how big or small the fetched object is.
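
As a rough worked illustration (my own numbers, assuming a 10 MB object and
ignoring protocol overhead): 10 MB is about 80 Mbit, so the origin-side fetch at
10 Mbit/s finishes in roughly 8 seconds, while delivering it to the client at
512 kbit/s takes about 156 seconds. The origin link therefore sees a short
full-rate burst for every such object, even though the client never needs more
than 512 kbit/s.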

From a cache perspective this is good: the cache can store the object faster and
release file descriptors to handle other requests.
From the client's perspective it makes no difference, since the client rate is
fixed at 512kbit/s.
From the ISP's (Internet Service Provider's) perspective it looks as though more
bandwidth is consumed than delivered (in reality it is not, since whatever was
fetched is eventually delivered), but what the ISP observes is that the
bandwidth from origin to ATS is higher than what goes from ATS to the client,
even though this difference disappears as soon as ATS closes the connection to
the origin server and continues delivering to the client.

In an ideal situation, with one request at a time, this is not a problem. But on
hybrid and busy servers it creates a bottleneck, where ATS eats up connections
and bandwidth shared with other protocols (https, chat, WhatsApp, Viber, VoIP).

Many of us have looked at the internal stats of ATS and seen 50% bandwidth
savings, yet on a real monitor of origin incoming vs. client outgoing bandwidth,
ATS seems to consume more bandwidth than it delivers.

The wait is over, and I hope that with this patch things will start to interact
much better.

Requirements:

OS : Fedora Linux (any Linux distribution may be used, but I mostly use Red Hat
derivatives)
ATS : latest master branch from github.com/apache/trafficserver
patch : TS-3414 applied to trafficserver
firewall : iptables + iproute2 (the firewall is actually not used except for
filtering and protection; iproute2 holds the magic)
sudo + visudo (to enable ATS to interact with the kernel, adding and removing
rules)
grep + awk utilities

Steps:
1- Downloads:
     # git clone https://github.com/apache/trafficserver
     # download ts_lua_ehemeral.patch (attached to this issue)
2- Compilation:
     # cd trafficserver
     # patch -p1 < ts_lua_ehemeral.patch
     # autoreconf -if
     # ./configure --enable-experimental-plugins --enable-example-plugins
     # make && make install
3- OS Configuration:
    # Identify which Ethernet interface faces the internet; in my case it is eth0.
    # Apply traffic control rules to eth0:
         /sbin/ip link add name eth0-ifb type ifb
         /sbin/ip link set dev eth0-ifb up
         /sbin/tc qdisc del dev eth0-ifb root
         /sbin/tc qdisc add dev eth0-ifb root handle 1: htb default FFFF r2q 10
         /sbin/tc qdisc del dev eth0 ingress
         /sbin/tc qdisc add dev eth0 ingress
         /sbin/tc filter add dev eth0 parent ffff: protocol all prio 39999 u32 match u32 0 0 action mirred egress redirect dev eth0-ifb
         /sbin/tc class add dev eth0-ifb parent 1: classid 1:1 htb rate 100000kbit ceil 100000kbit quantum 1500
         /sbin/tc class add dev eth0-ifb parent 1:1 classid 1:FFFF htb rate 128kbit ceil 100000kbit prio 4 quantum 1500
         /sbin/tc qdisc add dev eth0-ifb parent 1:FFFF handle 11: sfq perturb 10 quantum 1500
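
    # Optional sanity check (my own addition, not part of the original steps):
    # you should see the 1: htb root with the 1:FFFF default class on eth0-ifb,
    # and the ingress redirect filter on eth0.
         /sbin/tc -s qdisc show dev eth0-ifb
         /sbin/tc -s class show dev eth0-ifb
         /sbin/tc filter show dev eth0 parent ffff: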

  # Permit ATS to use /sbin/tc
     Note: this can be done in various other ways, but this is how I did it; any
     approach is fine as long as it produces the same result.
      # Add the ATS group to the wheel group
         For my tests I used the default nobody user/group for ATS, so my setup
         goes like this:
         gpasswd -a nobody wheel
      # Change sudo and sudoers to enable the group to use /sbin/tc
         # visudo
         then scroll down and enable
         %wheel  ALL=(ALL)       NOPASSWD: ALL
      While this is not recommended from an IT and security perspective, for my
      quick and dirty test it works well enough.
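
      A tighter sudoers entry (my own sketch, not what I actually tested) would
      whitelist only the tc binary instead of all commands:
         %wheel  ALL=(ALL)       NOPASSWD: /sbin/tc
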
4- ATS Configuration and ts_lua
    # In my setup only files with the extension .gz are rate limited, as a test
      case and proof of concept.
        The ATS installation went to /usr/local/etc/trafficserver
        # in remap.config
        regex_map http://(.*)/ http://$1/ @plugin=tslua.so @pparam=/usr/local/etc/trafficserver/tspull.lua
       # in /usr/local/etc/trafficserver/tspull.lua
        -- Build per-transaction tc rules keyed on the ephemeral outgoing port
        -- of the origin connection, apply them when the request is sent to the
        -- origin, and remove them when the transaction closes.
        function send_request()
            local outPort = ts.server_request.server_addr.get_outgoing_port()
            local outPortHex = string.format("%x", outPort)
            -- HTB class for this port, limited to 128kbit with a 512kbit ceiling
            ts.ctx['tcEnable'] = 'sudo /sbin/tc class add dev eth0-ifb parent 1: classid 1:'..outPortHex..' htb rate 128kbit ceil 512kbit prio 4 quantum 1500'
            -- filter steering traffic from origin port 80 to this ephemeral port into the class
            ts.ctx['tcFilter'] = 'sudo /sbin/tc filter add dev eth0-ifb parent 1:0 protocol ip prio 10 u32 match ip protocol 6 0xff match ip sport 80 0xffff match ip dport '..outPort..' 0xffff flowid 1:'..outPortHex
            ts.ctx['tcQdisc'] = 'sudo /sbin/tc qdisc add dev eth0-ifb parent 1:'..outPortHex..' handle '..outPortHex..': fq_codel'
            -- matching delete commands, run later from Closed_connection()
            ts.ctx['tcDisable'] = 'sudo /sbin/tc class del dev eth0-ifb parent 1: classid 1:'..outPortHex..' htb rate 128kbit ceil 512kbit prio 4 quantum 1500'
            ts.ctx['tcFilterDisable'] = [[sudo /sbin/tc filter del dev eth0-ifb parent 1:0 handle `tc filter sh dev eth0-ifb | grep "flowid 1:]]..outPortHex..[[" | awk '{print $10}'` protocol ip prio 10 u32 match ip protocol 6 0xff match ip sport 80 0xffff match ip dport ]]..outPort..[[ 0xffff flowid 1:]]..outPortHex
            -- apply the rules for this transaction
            os.execute(ts.ctx['tcEnable'])
            os.execute(ts.ctx['tcFilter'])
            os.execute(ts.ctx['tcQdisc'])
        end

        function Closed_connection()
            -- tear the per-port rules down again when the transaction closes
            if ts.ctx['tcDisable'] then
                os.execute(ts.ctx['tcFilterDisable'])
                os.execute(ts.ctx['tcDisable'])
            end
        end

        function do_remap()
            ts.ctx['url'] = ts.client_request.get_url()
            -- only throttle objects whose URL ends in .gz
            local s_r, e_r = string.find(ts.ctx['url'], '%.gz$')
            if s_r ~= nil then
                ts.hook(TS_LUA_HOOK_SEND_REQUEST_HDR, send_request)
                ts.hook(TS_LUA_HOOK_TXN_CLOSE, Closed_connection)
            end
        end
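
   # Quick end-to-end test (my own suggestion; replace the URL with a real .gz
   # object reachable through your ATS). Fetch a .gz via the proxy and watch a
   # per-port 1:<hex> class appear on eth0-ifb, capped at 512kbit, then vanish
   # when the transaction closes:
        curl -o /dev/null -x http://127.0.0.1:8080 http://example.com/archive.tar.gz &
        watch -n1 '/sbin/tc -s class show dev eth0-ifb'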


Happy Coding :) 


> Origin Server Throttling 
> -------------------------
>
>                 Key: TS-2643
>                 URL: https://issues.apache.org/jira/browse/TS-2643
>             Project: Traffic Server
>          Issue Type: New Feature
>          Components: Network, Plugins
>            Reporter: Faysal Banna
>             Fix For: sometime
>
>         Attachments: ts_lua_ehemeral.patch
>
>
> Hi Guys,
> I wonder if it is too much work to request a feature for rate limiting
> (throttling) bandwidth (bytes/sec) of data coming from origin servers. I do
> not know where it would best be implemented, but since we take most of the
> requested headers and responses in header_rewrite, and we can apply/override
> rules per txn in header_rewrite (we already do time conditions, set background
> fill, and change some other things there),
> why can't we expose a configurable setting in records.config to limit this
> txn?
> The reason it is needed is so that at peak hours clients who are running
> downloads or watching some porn movies can be rate limited at those times,
> and so that, for example, Windows updates working in the background do not
> consume much of my bandwidth from the origin server,
> leaving more space for interactive pages and such to pass through.
> That is why I asked whether it is possible to do it in the header_rewrite
> plugin, since as far as I have seen it is the only plugin that can override
> configurable rules per request.
> I am not sure how the internals of ATS work per transaction; that is why my
> thought was header_rewrite. It might instead land in some new plugin,
> throttle_plugin or so, taking into consideration that it needs to inspect the
> requested headers and the time they are requested at.
> Much regards,
> Faysal Banna


