I think it is intended for use with the CONNECT method.
Friday, June 8, 2012, 4:48:19 PM, you wrote:
So I'm confused what the use case for 'T' is?
Yes. A range request that cannot be satisfied from cache is just passed on
to the origin server. There is nothing better to do with it: partial
responses cannot be cached, and (as noted) always fetching the full source
object is not a good thing.
This hasn't come up
I did a bit of netsearching and I think I see the problem, although I don't
have a good solution. It is the result of an optimization in the call sequence:
test_append calls str.append(value, len), which calls reserve(length() +
count). length() is inlined to a reference to n, so the argument is
I think the best option is to move the burden to the backport proposer. While
backports serve a critical function, there's good reason to make them a bit of a
challenge, so as to keep them down to the ones that are really needed. Therefore
we could make the proposer responsible for (1) detecting a +3
I would like to note that I am actively working on less drastic changes to
range handling, specifically to move the range data from the object header to
the alternate header. That will be needed before the further improvements here
are implemented.
In all seriousness, why do you want that? Any browser will handle the redirects
and all of the content will be cached.
Tuesday, August 7, 2012, 7:52:14 PM, you wrote:
Thank you..
But I do not want curl to follow redirects. Instead I want ATS to follow the
redirects and give me the responses.
Tuesday, August 21, 2012, 2:43:06 PM, you wrote:
Thank you Alan for stepping in!
ATS will only cache full object requests. It never caches range requests.
Is this happening also for large files (video files for example) ? I assume
that the cache will fill up quite quickly with this approach,
Wednesday, August 22, 2012, 9:31:55 AM, you wrote:
In practice, because ATS does not by default cache CGI-looking URLs, very
few video files will be cached. Certainly nothing from YouTube, for
instance, unless you write a customized plugin to do so (you might search
Are you sure? The only
Wednesday, August 22, 2012, 6:24:49 AM, you wrote:
Who currently does the actual work of retrieving the file to be cached and
placing it into the cache?
The ATS core handles all cache data. In fact it's an issue that plugins have
very limited access.
valid option to allow the plugin to
Since this has been a topic for a while, I will just throw out an idea to see
how fast you guys can shoot it down.
A cache object is stored as a series of fragments. If we subdivided each
fragment into chunks, we could have 64 chunks / fragment and represent them
with a bitmap in a single
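A minimal sketch of that bitmap idea (the names are hypothetical, not ATS code): with 64 chunks per fragment, presence fits in one `uint64_t`, and a range is servable from cache only when every chunk it touches is present.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch: track which of a fragment's 64 chunks are
// present in cache with a single 64-bit bitmap.
struct FragmentChunkMap {
  uint64_t present = 0; // bit i set => chunk i is cached

  void mark(int chunk) { present |= (uint64_t{1} << chunk); }
  bool has(int chunk) const { return (present >> chunk) & 1; }

  // A chunk range [first, last] is servable from cache only if every
  // chunk in it is present.
  bool covers(int first, int last) const {
    uint64_t span = (last - first == 63)
                        ? ~uint64_t{0}
                        : ((uint64_t{1} << (last - first + 1)) - 1) << first;
    return (present & span) == span;
  }
};
```

The appeal of a single word per fragment is that presence checks and updates are one atomic-friendly load/store, at the cost of fixed chunk granularity.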
Monday, August 27, 2012, 4:25:14 PM, you wrote:
I don't know if you were being sarcastic in your last email, so I will continue
with the same idea.
TS-974 (https://issues.apache.org/jira/browse/TS-974) talks about the same
thing I was trying to describe: hold partial objects in the cache for a
large
Kinda sorta. I have made a simple diagram with some of this information:
http://people.apache.org/~amc/ats-cache-layout.jpg
which you might find useful. The proposed change would be quite similar to the
existing cache structure but would have a couple of major differences -
1) Support
Thursday, August 30, 2012, 5:29:16 PM, you wrote:
1. Is there a one to many relation between a cached object and a Doc
structure?
Yes, that's the chained Docs in the lower right. Each Doc represents a fragment
and we have discussed previously how objects are stored as an ordered set of
Wednesday, September 5, 2012, 3:54:03 AM, you wrote:
Just for clarification: I did some tests and it seems the entries in the
alternate vector stored in the first Doc are different versions of the
same cached object?
Yes, see here:
Tuesday, September 4, 2012, 12:43:50 AM, you wrote:
It seems OK to me. Why not put it in TS-1339, so that we all can
review and test it?
On my list of things to do.
Wednesday, September 5, 2012, 4:16:42 PM, you wrote:
Are all Doc structures of the same size?
Mostly, but you don't need their actual size, only the size of the content per
Doc instance, which is implicit in the fragment offset table. Given a range
request we can walk the fragment offset
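A hedged sketch of that walk, assuming the offset table stores each fragment's starting byte offset (the names and layout here are illustrative, not the actual Doc structures): the content size per Doc is the difference of adjacent offsets, so mapping a range-request byte to a fragment is a binary search.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: offsets[i] is the starting byte of fragment i, so
// fragment i holds bytes [offsets[i], offsets[i+1]) and its content size
// is implicit in the table. Find the fragment containing `byte`.
std::size_t fragment_for_offset(const std::vector<uint64_t> &offsets,
                                uint64_t byte) {
  // First entry strictly greater than `byte`, then step back one fragment.
  auto it = std::upper_bound(offsets.begin(), offsets.end(), byte);
  return static_cast<std::size_t>((it - offsets.begin()) - 1);
}
```

With this, servicing a range request means locating the first and last fragments for the requested byte range and reading only those Docs.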
Your write-up seems basically accurate to me. I would note again that reading
the earliest Doc of an alternate is a historical artifact of the
implementation and not at all a functional requirement.
The other half of the proposal is that, when servicing a range request that has
been forwarded
Monday, September 10, 2012, 5:14:40 PM, you wrote:
Can we also consider the other way: make the request to the origin server
with a larger range so that we may 'join' two disjoint parts of the data?
(try to avoid having many empty chunks in between filled chunks)
We can consider it but I
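For illustration, the "join two disjoint parts" idea from the question could be sketched as below; the threshold, types, and names are invented for this example and are not anything in ATS.

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

using ByteRange = std::pair<uint64_t, uint64_t>; // inclusive [first, last]

// Hypothetical sketch: given two disjoint missing ranges (a entirely before
// b), decide whether to issue one joined origin request covering both.
// Joining avoids empty chunks between filled ones, at the cost of
// re-fetching the gap; a threshold keeps the waste bounded.
bool should_join(const ByteRange &a, const ByteRange &b, uint64_t max_gap) {
  uint64_t gap = (b.first > a.second) ? b.first - a.second - 1 : 0;
  return gap <= max_gap;
}

// The single request that covers both ranges and the gap between them.
ByteRange joined(const ByteRange &a, const ByteRange &b) {
  return {a.first, b.second};
}
```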
There seems to be a lot of pressure on updating HostDB and how DNS is handled,
so I want to go ahead and push this out to trunk. Please review and let me know
if you think there are structural problems with this patch. The essence is to
support origin servers that have both IPv4 and IPv6 addresses
I have had multiple reports of crashing involving the range support on trunk. I
have finally managed to get my new (hopefully not magic) dev box up and running
but I can't replicate the problem. I can do thousands of range requests through
ATS trunk and it works perfectly. I can leave the
I have a library for doing this for my own codebase. I would be happy to donate
a license and source to TS.
Basically there is an XML file that describes the messages. During a build it
generates C++ code to define a table of messages and creates a set of constants
for the message values or
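A hedged sketch of what such generated output might look like; the message names and texts are invented for illustration, not the actual generated code.

```cpp
#include <cassert>
#include <cstddef>
#include <string_view>

// Hypothetical sketch of generated output: each <message> in the XML file
// becomes an enum constant plus an entry in a lookup table, emitted at
// build time.
enum MsgId : int {
  MSG_CACHE_MISS = 0,
  MSG_CACHE_HIT  = 1,
  MSG_ORIGIN_ERR = 2,
};

struct MsgEntry {
  MsgId id;
  std::string_view text;
};

constexpr MsgEntry msg_table[] = {
  {MSG_CACHE_MISS, "cache miss for %s"},
  {MSG_CACHE_HIT,  "cache hit for %s"},
  {MSG_ORIGIN_ERR, "origin error %d"},
};

constexpr std::string_view msg_text(MsgId id) {
  return msg_table[static_cast<std::size_t>(id)].text;
}
```

Generating both the constants and the table from one description keeps the IDs and strings from drifting apart.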
Although we try to keep fixes separate, the fix for these two bugs ended up
being too intertwined to readily separate.
The primary change of interest is to split the HostDB into separate IPv4,
IPv6, and SRV sections. The SRV records were already split in an ad hoc way;
this change formalizes
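As a rough illustration only (these types are stand-ins, not the actual HostDB structures), splitting by record family might look like:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: instead of one mixed record store, keep a separate
// section per record family so IPv4, IPv6, and SRV answers for the same
// name never collide.
struct SrvRecord {
  std::string target;
  uint16_t port;
  uint16_t priority;
};

struct HostDB {
  std::map<std::string, std::vector<std::string>> ipv4; // name -> A records
  std::map<std::string, std::vector<std::string>> ipv6; // name -> AAAA records
  std::map<std::string, std::vector<SrvRecord>>   srv;  // name -> SRV records
};
```

The point of the split is that a lookup states which family it wants up front, rather than filtering a mixed result set.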
This is a nasty mutex leak that has a one line fix which basically restores it
to the codebase of 3.0.X.
Monday, December 10, 2012, 11:33:36 PM, you wrote:
Hmm, a change in the interface; have you tested SplitDNS?
The interface change at the top level is minimal. I did test SplitDNS but I'll
run some more tests today.
Your interpretation seems reasonable; the catch is that the server must be known
to be 1.1 compliant. Is it reasonable to infer that if the request is marked as 1.1?
Friday, December 14, 2012, 6:51:20 AM, you wrote:
The comment seems to indicate that a POST request always needs a
Content-Length header.
+1 Works for me on my test system - Fedora 17 64 bit.
Depends what you mean by doesn't work properly. As noted, there is no content
length. But ATS should dechunk then rechunk the data stream for the plugin.
Wednesday, February 13, 2013, 10:12:45 AM, you wrote:
Hi,
I have an ats plugin which performs transform.
Is there any reason it doesn't
I am running trunk, which I think is currently identical to 3.3.1, and it
crashes on the first request. FC 18, forward full transparent, null transform
enabled. It seems to work without the null transform, though. The error is at
P_ProtectionQueue.h:90 with a failed assert. Running in the
Saturday, March 9, 2013, 10:35:30 PM, you wrote:
On 3/9/13 4:26 PM, Leif Hedstrom wrote:
Hi all,
I've prepared a release for v3.3.1, which has quite a few improvements and
bug fixes over 3.3.0. Please see the CHANGES for more details.
The artifacts are available at
Trunk doesn't build for me because of an issue in experimental/jtest. I have a
fix, but when I checked the build bots there are no logs of the build. Is this
on purpose, or a problem? I'm concerned about checking in my fix because I
would have no way of knowing if I broke anything for anyone
Igor tagged me with TS-1366, but as I looked into it things went badly.
As far as I can tell, the pristine header flag has no effect at all on any
logging output. Instead, there are additional fields to handle that (e.g.,
cquuc for cquc), although none that correspond to cqtx. Looking through
Wednesday, May 1, 2013, 8:58:50 AM, you wrote:
On May 1, 2013, at 12:48 AM, Igor Galić i.ga...@brainsware.org wrote:
Following this basic idea here
http://nadeausoftware.com/articles/2012/10/c_c_tip_how_detect_compiler_name_and_version_using_compiler_predefined_macros
This is a great
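The technique in the linked article boils down to checking each toolchain's predefined macros; a minimal sketch (note that Clang also defines `__GNUC__`, so it must be tested first):

```cpp
#include <cassert>
#include <string>

// Detect the compiler from its predefined macros. These macro names are
// standard for each toolchain: __clang__ (Clang), __GNUC__ (GCC and
// compatibles), _MSC_VER (MSVC).
std::string compiler_name() {
#if defined(__clang__)
  return "clang"; // check before __GNUC__: clang defines both
#elif defined(__GNUC__)
  return "gcc";
#elif defined(_MSC_VER)
  return "msvc";
#else
  return "unknown";
#endif
}
```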
I may tweak that to use ats_is_ip_any() rather than having a bool flag.
Wednesday, May 1, 2013, 1:51:23 PM, you wrote:
Yes, this patch is already committed to 3.3.x/Master. Below is a patch
that will cleanly apply to 3.2.0.
Brian
--- iocore/net/UnixConnection.cc	2012-06-14
Wednesday, June 12, 2013, 3:29:17 PM, you wrote:
+ else if (!strncmp(name, "proxy.config.http.flow_control.enabled", length))
+   cnf = TS_CONFIG_HTTP_FLOW_CONTROL_ENABLED;
+ break;
Not a big deal, but I've seen some recent efforts to use {} consistently,
and I kinda like
I was requested to provide the API that would go with this fix. Here is the
preliminary version. If any one has suggestions for better names, speak up.
/** Plugin lifecycle hooks.
These are called during lifecycle events of a plugin. They
should be set in the plugin initialization
Wednesday, July 17, 2013, 2:46:41 PM, you wrote:
What event ID and edata are delivered to the continuation?
The event ID will be the hook ID and a null edata. I can't think of anything
useful to return there.
Tuesday, July 30, 2013, 12:25:09 AM, you wrote:
https://issues.apache.org/jira/browse/TS-1976 - Invalid httpport
argument passed
I thought there was already a commit for 1976, although I have a suggested
alternative patch.
All;
On a somewhat different but related topic, has any consideration been given to
using a buddy allocation system, or is the cross access between block free
lists too much for that?
The range transforms worked as of 3.0 but were broken for 3.2. This was masked
by the addition of range acceleration, but when that was determined to be too
risky the original range transform issue was re-exposed. I'm currently trying
to get some time to work on that; I see what the problem is,
Friday, August 9, 2013, 3:01:04 PM, you wrote:
should we generalize ts:cv to extend it to other configs?
That's one of the things I was planning on at some point. Having a domain means
we can store all sorts of useful data for this kind of thing and do fancy stuff
with it (e.g., letting you
Saturday, August 10, 2013, 8:49:45 PM, you wrote:
1) Is the quality the same as a tier level? If so, 4,294,967,296 different
tiers seems incredibly excessive.
Yes. We thought it more hassle than benefit to restrict the quality value to
less than 32 bits. It does give some flexibility in the
Thursday, August 22, 2013, 2:16:13 AM, you wrote:
btw, the only reason I see right now for having an option for passing an
additional, configurable parameter to a script, is to use it for sending
instance name in a multi-instance setup.
Question: does anyone in our community run multiple
+1
I have run into an issue with a client which I think is of general interest.
The issue arises in cases where you have a set of origin servers that share a
fully qualified domain name but have distinct IP addresses. In this case the
DNS query returns all of the IP addresses, each one
Tuesday, October 1, 2013, 12:35:23 PM, you wrote:
How about matching on FQDN, but frequently(-ish) expiring sessions from the
pool? This would cause the pool to slowly traverse the full set of origins.
I'm not sure how that would help. The goal is to keep server sessions alive as
long as
After chatting with James Peach and Ming Zym, both of whom I utterly confused,
I see it a bit more clearly.
The root is how server sessions are shared, and what constitutes a valid
session for a specific client transaction in a session.
The current behavior is to match both the fully qualified
Thursday, October 3, 2013, 10:31:07 AM, you wrote:
There's also a bug filed on this already:
https://issues.apache.org/jira/browse/TS-1893
I've assumed control of that bug :-) and updated it with this proposal. Any one
interested should move the discussion there.
I have added a new section, proposals, to the architecture area of the
documentation. I have placed my HostDB proposal there for review and comment
before the summit. This is not a complete and detailed design, but more of a
sketch of how I think we should proceed, and to provide at least one
Bowing to popular demand, I moved the proposal to the wiki --
https://cwiki.apache.org/confluence/display/TS/HostDB+upgrade
I have added a page for partial object caching -
https://cwiki.apache.org/confluence/display/TS/Partial+Object+Caching
This will be changing because as I typed it up I realized I had a much better
understanding of cache mechanics which is changing my view on the best options
to do this.
I have
+1
On Oct 9, 2013, at 2:33 PM, James Peach jpe...@apache.org wrote:
Hi all,
I'd like to propose that we add formal API reviews to our development
process. The rationale for this is:
- API is important enough that we should go to extra effort to make it
consistent and
How do you return the EOS event? You would need to do it in response to a
normal IO call, instead of returning WRITE_READY or similar. If you've already
read all the input, you'd need to return it on the write side.
You might try adjusting the VIO nbytes to convince the VIO that it's finished all
of
I will be out tomorrow but then I will look at this again when I am back
online. I am sure I have seen this before and fixed it, I just need to find the
right issue.
Friday, November 1, 2013, 11:07:53 AM, you wrote:
So, is it right to conclude that compiling 4.0.2 will probably not help me.
Sorry, running too many bugs. The fix is in TS-1424, commit f63b27d52.
It got mixed in because it was entangled with propagating a related error
condition for TS-1424.
Hey, Igor, could you take a look at this?
Friday, November 15, 2013, 3:22:03 AM, you wrote:
In this particular instance,
https://issues.apache.org/jira/browse/TS-2351
I made some notes on the bug, I think we've seen this before. IMHO it's part of
the general range handling problems we've
I've been failing at solving the range transform issue for quite a while. I
think now we need to do something a little bit bigger to make it work correctly
and I have outlined that as an API proposal in the wiki.
https://cwiki.apache.org/confluence/display/TS/Transform+Plugin+Content+Length+Control
I read the proposal and found it very reasonable. Better yet, it
sounds like it could be possible to make this API/ABI compatible.
I think 4.2.X seems reasonable, although I will try for 4.1.X.
Tuesday, December 10, 2013, 10:29:36 PM, you wrote:
On Dec 9, 2013, at 2:53 PM, Alan M. Carroll a...@network-geographics.com
wrote:
I've been failing at solving the range transform issue for quite a while. I
think now we need to do something a little bit bigger to make it work
correctly
Wednesday, December 18, 2013, 4:51:31 PM, you wrote:
- where does the original Content-Length header come from?
The cached response header.
Or the upstream response - right?
Will the first plugin transform receive the original CL + the 'range
transformed' CL?
Well, the range transform is
I am working on command line access to the state of physical storage in the
cache. Currently the interface looks like
traffic_line --device /dev/sdb --cmd offline
The --device must match a path in storage.config and if so, that physical
device is marked offline as if it had failed. I'm in the
Tuesday, January 21, 2014, 6:01:17 AM, you wrote:
Hi,
I implemented a solution for HTTP Upgrade in ATS under Seamless Access SW (a
Mobixell product).
We'll be happy to contribute with the solution but I don't know what the
procedure is.
I can take point on that. Do you have access to
I am working on adding a couple of configuration variables and have run into
the problem of sharing enumerations between the plugin API and the internals.
As far as I can tell this is currently done in one of two ways:
1) Use straight numeric values (e.g. if (share_server_session == 2))
2)
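A third option, sketched here with an invented enum (`TSServerSessionSharing` and its values are hypothetical, not the actual API), is to declare the enum once in the plugin API header, consume it directly in the core, and pin any legacy numeric values with `static_assert`:

```cpp
#include <cassert>

// Hypothetical API-header enum: the single definition both plugins and the
// core use, instead of bare numeric values scattered through the code.
typedef enum {
  TS_SERVER_SESSION_SHARING_NONE   = 0,
  TS_SERVER_SESSION_SHARING_POOL   = 1,
  TS_SERVER_SESSION_SHARING_GLOBAL = 2,
} TSServerSessionSharing;

// Internal code consumes the API type instead of raw ints like
// (share_server_session == 2).
inline bool session_is_shared(TSServerSessionSharing mode) {
  return mode != TS_SERVER_SESSION_SHARING_NONE;
}

// Guard the config compatibility of the raw values at compile time.
static_assert(TS_SERVER_SESSION_SHARING_GLOBAL == 2,
              "existing configs rely on the numeric value 2 for global sharing");
```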
Thursday, January 30, 2014, 12:24:43 PM, you wrote:
https://code.google.com/p/cpp-btree/
I personally need this because it implements an ordered set where all our
hash table implementations are obviously unordered.
Have you looked at the red/black tree implementation in lib/ts/IpMap.h?
Thursday, January 30, 2014, 12:43:22 PM, you wrote:
On Thu, Jan 30, 2014 at 11:39 AM, Alan M. Carroll
a...@network-geographics.com wrote:
Thursday, January 30, 2014, 12:24:43 PM, you wrote
https://code.google.com/p/cpp-btree/
I personally need this because it implements an ordered set where all our
This is ready for initial review. This is an alpha, which means it compiles and
works on my magic dev box. However, this is a sufficiently major code change
that I would like some review.
A few key points:
1) This effectively reverts TS-1925 because, it turns out, 3.2.X used the MMH
hash for
I'm still researching the presence bits, but as far as I can tell so far they
should be written to and read from the cache. There may be some alignment issues,
but I don't see how that can happen and not scrozzle the rest of the HdrHeap. I
need to trace through a bit more and see if it's possibly a
Thursday, March 20, 2014, 1:28:45 PM, you wrote:
So when ATS receives a regex purge request, it must scan the disk for each
and every object in order to retrieve its URL and compare it to the
pattern being searched. Is this assumption correct?
Yes.
Are our assumptions correct? If so, is
Jean,
Please look at TS-2564. Also, the traffic.out logfile should have more details
on the crash.
Wednesday, April 9, 2014, 2:25:57 AM, you wrote:
Hello,
I tried to upgrade a 2-server cluster from ATS 4.0.2 to 4.2.0.
Everything is fine on one server while the other one keeps crashing with
Wednesday, April 9, 2014, 11:18:28 AM, you wrote:
As per the ATS documentation, raw disk performance is better than
formatted disk.
I verified page load performance; it degrades page load time with raw disk.
During my test, I specified the path of the raw disk partition in storage.config
+1
I'd like to propose that we pull libck into our tree and use it to replace
some of our stuff like the freelist, ink_atomic_list and hash tables.
http://concurrencykit.org/
Right now there are not enough distros to make just linking against system
libs feasible, but I'd like to set it
+1
Given the nature of ATS and its focus exclusively on high performance
environments, I suggest we throw off the bonds of 32bit support going
forward. I’m unaware of anyone running ATS on 32bit systems or developing
ATS on 32bit systems.
I propose removing 32bit support in Apache
Brendan,
This might be TS-2564. I'm currently investigating another crash that seems
related.
Phil,
+1. Works in local testing and in some field deployments.
I've been thinking about some of the SPDY issues that have come up and have a
couple of ideas.
First, the SPDY SM is really a client session. It handles input from a client
socket and drives the transactions through the system, without interacting
(directly) with any of the origin servers or
Hmmm, is that always going to be the case? I’d imagine that we (long term)
support the following types of sessions:
I think it will be. In fact, I would argue that the possible future
proliferation of session mixing is another reason to have a SPDY client
session, so that we can have a
James,
I still don't understand this focus on client session chaining. AFAICT there
is no client session chaining other than an implementation detail in the
current SPDY implementation. I don't see how client session is a general
concept at all.
In order to do other things (such as
James,
Monday, May 19, 2014, 7:45:29 PM, you wrote:
I will bow out of the naming issue; just do whatever the consensus is.
The default value of 1000 is *huge*. 10 would be better IMHO.
1000 was the existing hardwired value.
We should not be adding more default entries to records.config; the
James,
+# if TS_HAS_SPDY
+extern int spdy_config_load ();
+spdy_config_load(); // must be before HttpProxyPort init.
+# endif
This seems really fragile, is it going to stay, or do you have a plan to
remove it?
That's the way everything else is initialized, a specific method is
TSHttpConnectWithPluginID
=========================
Allows the plugin to initiate an HTTP connection. This will tag the HTTP state
machine with extra data that can be accessed by the logging interface.
Synopsis
`#include ts/ts.h`
.. c:function:: TSVConn
William,
I've thought about that. I think this is at the limits of what I think is
reasonable for argument lists. If we want more (and we would want at least
transparency) I would rather go with an options structure which can contain a
much larger number of options. This is a bit ugly in C, but
James,
Monday, May 26, 2014, 9:19:27 PM, you wrote:
Hi Alan,
So AFAICT this is not a true superset of the TSClientProtoStack API. FWIW, I
also think that removing that API should have been reviewed.
This proposal is not intended as a replacement for TSClientProtoStack, but to
solve a
James,
Monday, May 26, 2014, 8:00:59 PM, you wrote:
Where is proxy.process.spdy.total_streams decremented?
I don't think it is - it is counting the total number of streams, which can
only increase. The ACTIVE variants count the number of objects currently
extant. This is parallel to things
James,
The correct syntax for referring to function arguments is :c:arg:`foo`
Except sockaddr in this case is not a function argument, it is a type. The
argument is addr.
:data:`NULL`
WARNING: py:data reference target not found: NULL
I would argue against using constructs that generate
James,
Monday, June 9, 2014, 10:44:15 AM, you wrote:
:data:`NULL`
WARNING: py:data reference target not found: NULL
Weird, that's used in other places. Does :c:data:`NULL` generate an error too?
Yes, I get dozens of messages from that every time I build the docs.
SPDY never makes the
Nick,
You should talk to Brian G. He had a bug last month in transform that looked a
lot like this. See TS-2820.
https://issues.apache.org/jira/browse/TS-2820
Tuesday, June 17, 2014, 11:16:35 AM, you wrote:
There's a common idiom in /plugins/ and /examples/.
Where a plugin uses a
Based on work trying to clean up Coverity issues, we have run into a pattern
that looks like this:
X x = alloc_some_resource();
if (! cond1) { Log(error); destroy(x); return FAIL; }
if (! cond2) { Log(error); destroy(x); return FAIL; }
// ... etc ...
return x;
The problem is that frequently
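One way to avoid repeating destroy(x) before every early return is a scope guard. This is a sketch under the assumption that destroy() is the matching cleanup for alloc_some_resource(); the Resource type and both functions are placeholders for whatever the real code allocates.

```cpp
#include <cassert>
#include <cstdio>
#include <memory>

// Placeholder resource and its alloc/free pair, standing in for the real ones.
struct Resource { int value; };
Resource *alloc_some_resource() { return new Resource{42}; }
void destroy(Resource *r) { delete r; }

// Bind the resource to a scope guard up front; every early return frees it
// automatically, and only the success path releases ownership to the caller.
Resource *make_checked(bool cond1, bool cond2) {
  std::unique_ptr<Resource, void (*)(Resource *)> x(alloc_some_resource(), destroy);
  if (!cond1) { std::fprintf(stderr, "cond1 failed\n"); return nullptr; } // guard frees x
  if (!cond2) { std::fprintf(stderr, "cond2 failed\n"); return nullptr; } // guard frees x
  return x.release(); // success: caller now owns the resource
}
```

The win is that adding another condition check cannot introduce a leak, which is exactly the class of defect Coverity keeps flagging in the repeated-destroy pattern.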
Theo,
Sunday, July 20, 2014, 11:19:11 AM, you wrote:
Any reason to not simply use ck_hs and ck_ht from concurrency kit? That was
one of the points of looking at that library.
On Jul 19, 2014 11:33 PM, Alan M. Carroll a...@network-geographics.com
wrote:
Only administrative reasons
James,
Friday, July 25, 2014, 3:59:18 PM, you wrote:
Still reviewing, but I wish this had been a number of smaller patches. For
example, const'ing the MD5 stuff and adding ~PluginIdentity() could have been
independent patches ...
Yes, the ~PluginIdentity() is a leak from some other work I
As part of TS-2362 (Cache backwards compatibility) I must be able to switch the
URL hashing at run time. Some release after 3.2 changed the URL hash from MMH to
MD5, so it must be switched back to be compatible, but not if there are no old
cache stripes. There has also been a pending desire for other
I would like to note that another purpose of the FQDN change is to provide
better encapsulation for server session management with an eye to future work
in making that more accessible to plugins and other core code.
All;
It's getting close to time for beginning the 5.1 release cycle. I would like to
close it out on Monday, 18 Aug. If this is an issue let me know ASAP. Thanks.
This came up yesterday on the IRC. The problem is that every call to
TSMimeHdrFieldNext allocates a MIME field handle which gets very slow if you
use the function heavily. One suggested approach was to switch the allocator
from a global to a per thread.
I think it might be better to add
Leif,
Is the goal here to iterate over all headers? If so, maybe some sort of
iterator functionality would be more appropriate, similar to how we added the
iterator (with a callback) for lib records (i.e. TSRecordDump() )? Can that
help simplifying / improve such operations?
If you added
Leif,
But if the goal is to always iterate over all headers, I’m guessing (but not
sure?) that it could be done more efficiently with an API that assumes so ?
But alas, I don’t know what the use case here is, I’ve yet to see one where
iterating over all headers is required (you usually
I think there are two reasonable approaches -
1) Iterator style. Here we either create a new iterator type or re-use
the FieldHandle as an iterator: you get a handle to the first field and then
re-use it for each subsequent field. This saves the expense of allocating
a new handle
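The handle-reuse idea in option 1 could be sketched like this; it is a toy stand-in, not the real MIME field API, with the header represented as a plain list of field names.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Toy stand-in for a MIME field handle: get it once for the first field,
// then advance the *same* handle in place, so each step reuses the one
// allocation instead of creating a fresh handle per field.
struct FieldHandle {
  const std::vector<std::string> *fields;
  std::size_t index;

  bool valid() const { return fields && index < fields->size(); }
  const std::string &name() const { return (*fields)[index]; }
  void next() { ++index; } // advance in place; no new handle allocated
};

FieldHandle first_field(const std::vector<std::string> &hdr) {
  return FieldHandle{&hdr, 0};
}
```

Usage mirrors the proposed style: obtain the handle once, loop with valid()/next(), and never allocate inside the loop.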
James,
I can answer a few of these.
Thanks for the docs, this looks very promising. When you are ready to submit
patches, this will need API review
https://cwiki.apache.org/confluence/display/TS/API+Review+Process.
Actually, Susan forgot to mention that you can review the code at
Igor,
I'm using the TSSslVConn to cache a pointer to the global cert table
(loaded from ssl_multicert.config). Since in theory the
ssl_multicert.config could be reloaded at any point, we acquire() a copy
does this mean we would now support reloading of the ssl config w/o restart?
No. It
All;
I have created the 5.1.x branch. This is in preparation for the 5.1.0 release
of Traffic Server. All commits to this branch must be explicitly approved by
the Release Manager (amc). If you have something you really need to get into
5.1.0, fix it on master first and then ask to have that
Adam,
adace@mound:~/Src/apache/trafficserver git checkout 5.1.x
Branch 5.1.x set up to track remote branch 5.1.x from origin.
Switched to a new branch '5.1.x'
adace@mound:~/Src/apache/trafficserver
Does that sound right?
Yes.
All;
I forgot to note that as part of the last triage a number of bugs were moved
from 5.1.0 to 5.2.0 because of the lack of an available patch. If one of these
was yours, you will need to work on it on master for 5.2.0 and explicitly
request a backport to 5.1.0.