Not an expert; I don't even do much in this area these days.
Historically, I have seen a number of REST URL API patterns, more often
than not (when these things were less tepid than they are now) in the
Semantic Web area (e.g. MS OData), where parens, which as I recall are
allowed by the RFC, are [...]
Just now, David Vanderson wrote:
> No guru here, but my experience has been that every url encoder is
> slightly different - I don't think there's a broad consensus on edge
> cases. I'd say go for it.
The problem is not just being different from others; it's the
possibility of old code breaking.
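As an illustration of how encoders disagree (a sketch in Python, not the net/uri-codec behavior under discussion): Python's urllib.parse.quote percent-encodes parens by default, while JavaScript's encodeURIComponent leaves them as-is, so the same string round-trips differently depending on which encoder produced it.

```python
from urllib.parse import quote

# Python's quote() treats only letters, digits, and "_.-~" as
# always-safe, so parens are percent-encoded by default...
print(quote("(foo)"))             # "%28foo%29"

# ...unless the caller opts them back in via `safe` -- exactly the
# kind of per-library knob that makes encoders drift apart.
print(quote("(foo)", safe="()"))  # "(foo)"
```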
On 12/17/2012 06:59 AM, Eli Barzilay wrote:
> For many people there is a constant source of annoyance when you
> copy+paste doc URLs in [...]
p.s. Also the current docs[1] say this in the second paragraph:
The URI encoding allows a few characters to be
represented as-is: a through z, A through Z, 0-9, -,
_, ., !, ~, *, ', ( and ).
But this in the final sentence:
In addition, since there appear to be some brain-dead [...]
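For what it's worth, a quick check in Python (an illustration, not the Racket library) shows that the "as-is" set quoted above is not universal among encoders: Python's quote() still encodes several of those characters.

```python
from urllib.parse import quote

# The non-alphanumeric characters the quoted docs say are
# represented as-is.
listed = "-_.!~*'()"

# With no extra safe characters, Python keeps only the RFC 3986
# unreserved marks and percent-encodes ! * ' ( and ).
print(quote(listed, safe=""))  # "-_.%21~%2A%27%28%29"
```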
Although I'm hardly a web "expert", I think net/uri-codec is currently
a little confusing.
I get the impression that it was originally written prior to 2005,
because the detailed introduction talks only about RFCs 1738 and
2396.[1]
It looks like perhaps functions such as uri-path-segment-encode w[...]
For many people, a constant source of annoyance is copy+pasting doc
URLs into a markdown context such as stackoverflow and others. The
problem is that these URLs have parens in them, and at least in
Chrome the copied URL still has them -- and because markdown texts
use parens for URLs [...]
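A workaround sketch (in Python, with a made-up URL; real Racket doc URLs look similar but longer): percent-encoding just the parens before pasting keeps markdown's [text](url) syntax from ending the link early.

```python
# Hypothetical doc URL with parens in the fragment.
url = "https://docs.example.org/ref.html#(def.(car))"

# Encode only the parens so ")" can't terminate a markdown link.
md_safe = url.replace("(", "%28").replace(")", "%29")
print(f"[car]({md_safe})")
# [car](https://docs.example.org/ref.html#%28def.%28car%29%29)
```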