Re: [go-nuts] net/http: question about RoundTripper

2020-05-11 Thread Kevin Conway
I'll make an attempt to answer this question but I could be wrong given
some deeper historical context between the early Go developers.

There are two chunks of the http.RoundTripper comments that folks typically
ask about: "should not attempt to handle higher-level protocol details" and
"should not modify the request". Unfortunately, the rationale for why these
statements are made is neither presented in the comments nor in the patches
that introduced them.

It appears "should not modify the request" most likely refers to a
concurrency issue that must be mitigated by copying the request before
mutating it in the RoundTripper. Here's the change that claims to have
resolved an issue surrounding this: https://codereview.appspot.com/5284041.
The original comments explicitly allowed for mutating the request but this
was changed to "should not" after this patch due to the bug that it
resolved.

It's a little harder to find direct evidence of the author's intent for
"should not attempt to handle higher-level protocol details". This part of
the comment has been in the code for nearly a decade and it becomes fairly
difficult to track the origin past a set of major renames and large
movements of files from place to place within the early Go source code.
Reading
https://github.com/golang/go/commit/e0a2c5d4b540934e06867710fe7137661a2a39ec
makes it seem like these notes were meant for the author or for other Go
core devs who were building the original HTTP stack rather than those of us
who would use it later. For example, it appears to signal that standard
library developers should isolate higher level features within the Client
type rather than in the ClientTransport (now RoundTripper) type. I haven't
found anything, yet, that suggests the comments are meant for anyone other
than developers of the http package in the Go standard library.

From a more practical perspective, you don't really have another choice
for generally useful HTTP client middleware in Go applications than the
http.RoundTripper interface. If everyone applied the "accept interfaces,
return structs" guideline then you would have more options. For example,
if everything that needed an HTTP client accepted a
"type Doer interface { Do(r *http.Request) (*http.Response, error) }"
style interface then you could target your middleware as wrappers for the
http.Client. Unfortunately, most projects that allow for injection of a
custom HTTP client do so by accepting an instance of *http.Client.
Accepting that specific, concrete type makes wrapping anything other than
the http.RoundTripper a practical impossibility.
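
To illustrate (this is only a sketch; the Doer and LoggingDoer names are
hypothetical and not part of any standard library or project API):

type Doer interface {
    Do(r *http.Request) (*http.Response, error)
}

// LoggingDoer is a hypothetical middleware layer. It wraps any Doer, and
// *http.Client already satisfies the interface, so it could wrap the
// client itself rather than the transport.
type LoggingDoer struct {
    Next Doer
}

func (d *LoggingDoer) Do(r *http.Request) (*http.Response, error) {
    log.Printf("outbound %s %s", r.Method, r.URL)
    return d.Next.Do(r)
}

// assumes: import ("log"; "net/http")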

Personally, I've been using http.RoundTripper middleware for several years
without issue. It's a solid pattern that can provide an enormous amount of
value by allowing re-usable layers of behavior that can be injected into
virtually any library or framework that uses an HTTP client. I don't worry
about the comments in the standard library for the reasons I listed.
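
For anyone looking for a concrete starting point, a header-setting layer
like the one described in the question below can be as small as this (the
type name is mine; only http.RoundTripper is from the standard library):

// AuthRoundTripper adds an Authorization header to every outgoing request.
type AuthRoundTripper struct {
    Next  http.RoundTripper
    Token string
}

func (t *AuthRoundTripper) RoundTrip(r *http.Request) (*http.Response, error) {
    // Clone before mutating, per the "should not modify the request" note.
    r2 := r.Clone(r.Context())
    r2.Header.Set("Authorization", "Bearer "+t.Token)
    return t.Next.RoundTrip(r2)
}

// usage:
// client := &http.Client{Transport: &AuthRoundTripper{
//     Next:  http.DefaultTransport,
//     Token: token,
// }}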

On Mon, May 11, 2020 at 2:01 PM Anuj Agrawal 
wrote:

> I am trying to understand in what cases would it make sense to
> implement my own RoundTripper.
>
> If I search about it, I come across examples of RoundTripper that try
> to do things like caching, retries, authentication, etc. I also read
> somewhere that there are many RoundTripper implementations that just
> set the User-Agent header on a request.
>
> I know that the documentation says "RoundTrip should not attempt to
> handle higher-level protocol details such as redirects,
> authentication, or cookies." And, I also understand that RoundTripper
> would be a bad place for things like caching.
>
> However, I am not able to figure out why is it such a bad idea to use
> RoundTripper as a middleware that allows me to do some of the higher
> level things like authentication. After all, authentication is just
> about interpreting and/or manipulating some headers. In some cases, it
> could be just as good as setting the User-Agent where all that happens
> is setting the Authorization header with a token. In some other cases,
> it could mean interpreting the challenge thrown by the server and then
> making the same call again with a response to the challenge.
>
> Can someone please help me understand this better?
>


-- 
Kevin Conway


Re: [go-nuts] Re: View http open connections

2019-02-13 Thread Kevin Conway
> Does anyone know a way of getting current open connections?

You might consider using https://golang.org/pkg/net/http/#Server.ConnState
for this instead of a middleware for the http.Handler. The different
connection states are documented on
https://golang.org/pkg/net/http/#ConnState and the callback should fire
each time a connection changes state (including new and closed). This
should also help distinguish between connections which are not really the
same as active requests.
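
A minimal sketch of that approach (the counter and server setup are mine;
Server.ConnState and the http.ConnState values are the standard library
pieces):

var open int64

srv := &http.Server{
    Addr:    ":8080",
    Handler: mux, // whatever handler you already have
    ConnState: func(c net.Conn, state http.ConnState) {
        switch state {
        case http.StateNew:
            atomic.AddInt64(&open, 1)
        case http.StateHijacked, http.StateClosed:
            // both are terminal states, so count them as "no longer open"
            atomic.AddInt64(&open, -1)
        }
    },
}

// atomic.LoadInt64(&open) reports the current number of open connections.
// assumes: import ("net"; "net/http"; "sync/atomic")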

On Wed, Feb 13, 2019 at 7:54 AM Dany Xu 
wrote:

> what does "heavy load" mean? If it is not under "heavy load", does it
> work? On the other hand, do you want the tcp connections or the http
> request numbers? From the go package, it appears that getting the basic
> tcp connections in http.Server is impossible.
>
> On Tuesday, February 12, 2019 at 10:37:52 PM UTC+8, jose.albe...@gmail.com wrote:
>>
>> I was trying to get current HTTP connections in HTTP.Server. Internally
>> transport tracks it under connPerHostCount. However, I don't see a way of
>> getting this value.
>>
>> I tried this:
>>
>> type HTTPRequestMetrics struct {
>>  activeConnections int64
>>  next  http.Handler
>> }
>>
>>
>> func HTTPRequestMetricsHandler(h http.Handler) *HTTPRequestMetrics {
>>  return &HTTPRequestMetrics{next: h}
>> }
>>
>>
>> func (h *HTTPRequestMetrics) ServeHTTP(w http.ResponseWriter, r *http.
>> Request) {
>>  atomic.AddInt64(&h.activeConnections, 1)
>>  defer atomic.AddInt64(&h.activeConnections, -1)
>>
>>
>>  h.next.ServeHTTP(w, r)
>> }
>>
>>
>> But looks like defer never gets called when under heavy load (and clients
>> see connection reset by peer)
>>
>> Does anyone know a way of getting current open connections?
>>


-- 
Kevin Conway



Re: [go-nuts] Can I say a stack overflow is a panic?

2019-01-04 Thread Kevin Conway
There are several conditions that bypass recover. Another example is "fatal
error: concurrent map read and map write". These error messages typically
start with "fatal error" and represent a non-recoverable exception in the
runtime rather than a user-defined condition.

I don't know the official term for these exceptions. My team has been
calling them "super panic" to distinguish them from actual calls to panic.

On Jan 4, 2019 09:43, "伊藤和也"  wrote:

I tried to recover a stack overflow but I couldn't.

func main() {
    defer func() {
        src := recover()
        fmt.Println(src)
    }()
    main()
}




Re: [go-nuts] Json decode : Monitoring errors properly

2018-12-26 Thread Kevin Conway
> I believe https://golang.org/pkg/encoding/json/#Decoder.Buffered was
added for this purpose.

I just caught on that this method is exactly what you showed in the
original message. I guess my input can be reduced to "I think that's your
only option when using the decoder".

I do think you've correctly identified that the buffer isn't guaranteed to
contain the whole object since the decoder has early exit error conditions.
I'm not sure how you'd change that without rewriting the decoder.



Re: [go-nuts] Json decode : Monitoring errors properly

2018-12-26 Thread Kevin Conway
> For example, the error returned may show : "*invalid character 'G'
looking for beginning of value*". But you can't see the original message.

I believe https://golang.org/pkg/encoding/json/#Decoder.Buffered was added
for this purpose. For example, https://play.golang.org/p/yAn2fypIELc
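
For reference, the pattern looks roughly like this (variable names are
mine; Decoder.Buffered is the standard library call and, as noted later,
it only exposes what has been read so far):

dec := json.NewDecoder(r.Body) // or any other io.Reader you are decoding
var payload map[string]interface{}
if err := dec.Decode(&payload); err != nil {
    // Buffered returns a reader over data already read but not consumed.
    raw, _ := ioutil.ReadAll(dec.Buffered())
    log.Printf("decode failed: %v; buffered input so far: %q", err, raw)
}

// assumes: import ("encoding/json"; "io/ioutil"; "log")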

> 1- I'm not sure the buffer is always complete at the time I call the
ReadAll

ReadAll (https://golang.org/pkg/io/ioutil/#ReadAll) reads until the io.EOF.
If you're using long-lived TCP connections on which you expect to have
multiple request/response cycles then you likely don't want to use ReadAll
since the stream won't encounter an EOF until some later point in time. The
Decoder is made for the streaming case and is very likely the right choice
over reading things in with other tools.

> 2- It can be time-consuming

I don't know that you're necessarily going to avoid this problem using the
json.Decoder. As it was pointed out, the current implementation of Decoder
continues to read and buffer bytes from the stream until it has enough to
represent a whole object before it decodes it. There may be some early exit
cases from errors in the "decoderState" that is used to manage the internal
buffer but I don't expect they should be relied on for performance.



Re: [go-nuts] Re: What is the behavior of an HTTP persistent connection in Go?

2018-08-21 Thread Kevin Conway
> Are requests sent serially ...  so that other concurrent requests to the
same host are blocked

When sending requests using HTTP/1, each connection will handle only one
request at a time except when using pipelining. When sending requests using
HTTP/2, each connection may manage any number of requests. The
http.Transport does a lot to abstract this behavior so it looks the same to
a developer. As far as blocking, the http.Transport will never "block"
while waiting for a connection to return to the pool. Unlike the connection
pool in Java, for example, the http.Transport connection pool does not
limit the number of open connections. If there are no idle connections in
the pool then the http.Transport will make another one on-demand and use
that for the request.

> forget to Close() a Response.Body

This can definitely become a problem and is a common mistake made by new go
devs. A good answer to this question with more detail is here:
https://stackoverflow.com/a/33238755

> Transport.IdleConnTimeout will close any connections automatically for me
right?

When configuring your connection pool, it is important to keep in mind that
all of the logic for removing an old connection from the pool and closing
it is based on *idle* time. For example, IdleConnTimeout of 30s will cause
connections that have gone unused for 30s to close. There is, as of go1.10,
still no way to define a maximum lifetime of a connection that is in-use.

> pool of clients ... to allow for parallel requests

Each client already manages any number of connections for HTTP/1 calls and
handles multiple, concurrent HTTP/2 calls on a single connection per host.
You do not need to do anything else to get the behavior you want.

>  I could increase the Transport.MaxIdleConns or
Transport.MaxIdleConnsPerHost

Like the connection timeout, it is important to recognize that the
connection limits are based on *idle* connections. There is no setting, as
of go1.10, that allows you to set a maximum number of active connections
for HTTP/1. The HTTP/2 support in http.Transport already has a built-in
limit because it enforces use of a single connection for all concurrent
requests so you don't have to do anything special for HTTP/2. However,
there is no limit to the number of connections http.Transport will make for
a HTTP/1 calls.
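
For reference, all of those knobs live on http.Transport; a minimal sketch
of the configuration being discussed (the numbers are only illustrative):

client := &http.Client{
    Transport: &http.Transport{
        MaxIdleConns:        100,              // total idle connections kept across all hosts
        MaxIdleConnsPerHost: 10,               // idle connections kept per host
        IdleConnTimeout:     30 * time.Second, // close a connection after 30s of no use
    },
}

// Note that every one of these limits applies to *idle* connections only;
// none of them caps the number of active HTTP/1 connections.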



On Tue, Aug 21, 2018 at 12:32 AM  wrote:

> Also do you have any recommendations for deriving appropriate values for 
> Transport.MaxIdleConns
> or Transport.MaxIdleConnsPerHost?
>
>
> On Monday, August 20, 2018 at 12:59:07 AM UTC-7, golang...@gmail.com
> wrote:
>>
>> The http.Transport caches connections for future re-use
>> https://golang.org/pkg/net/http/#Transport
>>
>> Are requests sent serially by which I mean only one request and response
>> can be handled by this persistent connection at a time, so that other
>> concurrent requests to the same host are blocked until a response is
>> received and closed?
>>
>> If I forget to Close() a Response.Body, does that leave the connection
>> open so that it can't be shared? Do other concurrent requests open a new
>> connection during that time?
>>
>> And if I do forget, by default the Transport.IdleConnTimeout will close
>> any connections automatically for me right?
>>
>> I understand that persistent connections save on protocol overhead, but
>> if it can only handle one request at a time wouldn't in some cases it be
>> better to have a pool of clients each with their own persistent connection
>> to the same host to allow for parallel requests?
>>
>> Alternatively, I could increase the Transport.MaxIdleConns or
>> Transport.MaxIdleConnsPerHost right? Which would trade throughput for
>> server resources to maintain those connections.
>>



Re: [go-nuts] Best way to add retries to existing library using net/http.Client?

2017-12-11 Thread Kevin Conway
A few of us had a very short discussion on the subject here:
https://groups.google.com/forum/#!topic/golang-nuts/rjYdaFM5OBw

Almost all go code I've seen that requires an HTTP client takes an
`*http.Client` as one of the parameters. This doesn't leave consumers with
many other options than to leverage the `http.RoundTripper` interface that
is consumed via `http.Client.Transport` and implemented by
`http.Transport`. I've seen a great deal of success in using that interface
to implement resiliency features like retries even though the documentation
explicitly forbids using the `http.RoundTripper` for that purpose. My team
has settled on a decorator pattern where we define all new functionality as
an `http.RoundTripper` and apply it with a
`func(http.RoundTripper) http.RoundTripper`. Ex:

var client = &http.Client{
  Transport: Retry(Timeout(Log(&http.Transport{}))),
}
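
For anyone curious what one of those decorators looks like, here is a
rough sketch (Retry and roundTripperFunc are illustrative names, not a
real package; only http.RoundTripper comes from the standard library, and
requests with bodies need extra care before being retried):

// roundTripperFunc adapts a plain function to the http.RoundTripper interface.
type roundTripperFunc func(*http.Request) (*http.Response, error)

func (f roundTripperFunc) RoundTrip(r *http.Request) (*http.Response, error) { return f(r) }

// Retry re-issues a request up to three times on 502/503 responses.
// Only safe for requests without bodies unless Request.GetBody is set.
func Retry(next http.RoundTripper) http.RoundTripper {
    return roundTripperFunc(func(r *http.Request) (*http.Response, error) {
        var resp *http.Response
        var err error
        for attempt := 0; attempt < 3; attempt++ {
            resp, err = next.RoundTrip(r)
            if err == nil && resp.StatusCode != http.StatusBadGateway && resp.StatusCode != http.StatusServiceUnavailable {
                return resp, nil
            }
            if err == nil && attempt < 2 {
                resp.Body.Close() // discard the failed response before retrying
            }
        }
        return resp, err
    })
}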

On Sun, Dec 10, 2017 at 1:19 PM Alex Buchanan 
wrote:

> I'm considering adding http client retries to an existing library, owned
> by someone else, in order to add resiliency to errors such as http 503. I'd
> like to keep the changes as minimal as possible, with maximum effect, as
> usual. I'm eyeing http.Client.Transport. Possibly I could use that to wrap
> the DefaultTransport, intercept responses, check the code against some
> rules, and retry on failure.
>
> Am I on the right track here?
>
> Thanks,
> Alex
>



Re: [go-nuts] Re: Maximum number of requests per HTTP connection?

2017-07-06 Thread Kevin Conway
> You can have a filter which increments the counter and records for  each
request  from the  user and kicks him/her when the limit is crossed.

We're actually interested in this problem from the other direction. When
making an outbound HTTP request we want to limit the number of times (or
the maximum duration of use) for a single connection. The idea is that we
want to re-use the connection for some time to amortise the cost of DNS and
TLS across multiple requests. However, there's a point at which we need to
snap the connection and start a new one to ensure that changes to DNS are
respected in a timely manner.

On Thu, Jul 6, 2017 at 7:34 AM  wrote:

> Hi there,
> You can have a filter which increments the counter and records for  each
> request  from the  user and kicks him/her when the limit is crossed. I do a
> similar type of check where in I check whether the user has a authenticated
> session at the point of time of for each request. I did this for a web app
> hosted on Tomcat server in java. Hope this idea may help for your situation.
> Regards
> Pavan
>
>
> On Tuesday, July 4, 2017 at 5:32:18 AM UTC+5:30, Mikhail Mazurskiy wrote:
>>
>> Hello there,
>>
>> Is there a way to limit the number of http requests a client will issue
>> via a single tcp connection? I cannot find anything relevant to this
>> question. Maybe a snippet of code to implement this...
>>
>> The use case is to force a highly active http connection to be closed
>> after certain number of requests and/or time period. It is actively used so
>> idle timeout will not work. I'd like it to be closed and re-established to
>> pick up DNS changes - start using new ips before the remote side closes the
>> connection. Trying to use weighted dns to progressively (in steps)
>> transition load from one group of servers to another group.
>>
>> Thanks in advance for any input.
>>
>> Cheers,
>> Mikhail.
>>



Re: [go-nuts] Maximum number of requests per HTTP connection?

2017-07-04 Thread Kevin Conway
Unfortunately, neither the http.Client nor the *http.Request exposes a
reference to the underlying connection such that you could implement that
behaviour. At least, that's what I've found when trying to do this as well.

One possible strategy we've considered is implementing a wrapper around the
transport that will recycle the wrapped transport after a given time or
number of uses.
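
A rough sketch of that wrapper idea, to make it concrete (the type is my
own invention; only http.RoundTripper, http.Transport, and
CloseIdleConnections are standard library pieces):

// recyclingTransport replaces its wrapped *http.Transport after a fixed
// number of requests, forcing new connections and fresh DNS lookups.
type recyclingTransport struct {
    mu      sync.Mutex
    inner   *http.Transport
    uses    int
    maxUses int
}

func (t *recyclingTransport) RoundTrip(r *http.Request) (*http.Response, error) {
    t.mu.Lock()
    t.uses++
    if t.uses >= t.maxUses {
        t.inner.CloseIdleConnections() // the old pool drains as requests finish
        t.inner = &http.Transport{}
        t.uses = 0
    }
    inner := t.inner
    t.mu.Unlock()
    return inner.RoundTrip(r)
}

// usage: client := &http.Client{Transport: &recyclingTransport{inner: &http.Transport{}, maxUses: 1000}}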

I'm definitely interested in hearing other ideas if this is already solved
in the community.

On Mon, Jul 3, 2017, 19:02 Mikhail Mazurskiy 
wrote:

> Hello there,
>
> Is there a way to limit the number of http requests a client will issue
> via a single tcp connection? I cannot find anything relevant to this
> question. Maybe a snippet of code to implement this...
>
> The use case is to force a highly active http connection to be closed
> after certain number of requests and/or time period. It is actively used so
> idle timeout will not work. I'd like it to be closed and re-established to
> pick up DNS changes - start using new ips before the remote side closes the
> connection. Trying to use weighted dns to progressively (in steps)
> transition load from one group of servers to another group.
>
> Thanks in advance for any input.
>
> Cheers,
> Mikhail.
>



Re: [go-nuts] RFC: Blog post: How to not use an HTTP router

2017-06-18 Thread Kevin Conway
If I understand correctly, you're describing a simplified version of
https://twistedmatrix.com/documents/current/web/howto/using-twistedweb.html
which
provides the concept of "path" by having a system that generates a graph of
resource nodes that can be rendered. Routing to any specific endpoint is a
matter of graph traversal until the system reaches a leaf node and calls a
relevant HTTP method. It's a model I've used successfully in the past and
found it to be enjoyable.

That being said, having access to easy-to-use, parameterized routes makes
things much simpler IMO. It cleanly separates the logic required for
resource graph traversal from the endpoint rendering. Having them combined
was one of my major complaints about the "resource as a router" model.

A large concept that this article also ignores is middleware. The decorator
pattern is quite a powerful one and is facilitated by nearly every 3rd
party mux implementation. Top level support for middleware makes adding
decorators to some, or all, endpoint rendering resources  an easy task
regardless of the specific resource graph traversal required to activate
them. The ability to take a single purpose, well tested endpoint and wrap
it in other single purpose, well tested functionality (such as logging,
stats, tracing, retries, backoffs, circuit breaking, authentication, etc.)
without modifying the core logic of the endpoint is a large value add. It's
unclear how the "resource as a router" model could easily provide such a
feature. This is not to say it's impossible, I simply haven't seen it done
well outside the mux model before.
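
For context, the decorator pattern being referenced is roughly this (names
are mine; only http.Handler and http.HandlerFunc come from the standard
library):

// WithLogging wraps any http.Handler with request logging without
// touching the wrapped handler's own logic.
func WithLogging(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        next.ServeHTTP(w, r)
        log.Printf("%s %s took %s", r.Method, r.URL.Path, time.Since(start))
    })
}

// Decorators compose, e.g. handler = WithLogging(WithAuth(endpoint)),
// and a mux can apply them to some or all routes.
// assumes: import ("log"; "net/http"; "time")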

On Sun, Jun 18, 2017 at 5:02 PM 'Axel Wagner' via golang-nuts <
golang-nuts@googlegroups.com> wrote:

> Hey gophers,
>
> in an attempt to rein in the HTTP router epidemic, I tried writing down a)
> why I think *any* router/muxer might not be a good thing to use (much
> less write) and b) what I consider good, practical advice on how to route
> requests instead. It's not rocket science or especially novel, but I wanted
> to provide more useful advice than just saying "just use net/http" and
> haven't seen that a lot previously.
>
> Feedback is welcome :)
> http://blog.merovius.de/2017/06/18/how-not-to-use-an-http-router.html
>



Re: [go-nuts] Re: Generic service via interfaces...

2017-05-29 Thread Kevin Conway
Before we put the focus on criticism of the OP's code organisation, it's
well worth investigating the likely code bug first.

>but when I use var m storage.ModelInterface and then call a member
function of the interface (SetName) Go panics because nil pointer
dereference
Based on this problem description and the code snippet you provided, it
looks like you are referencing the storage interface incorrectly. `var m
storage.UserModelInterface` will declare a variable `m` of type
`storage.UserModelInterface` and assign to it a zero value (nil). I believe
what you meant to do was reference the store attached to the `UserService`
instance. In that case, the correct reference would be `s.storage.SetName`.

On Mon, May 29, 2017 at 5:24 AM Egon  wrote:

> Don't organize by structure, this is what seems to be causing the most
> of the damage,
>
> I would suggest implementing as:
>
> assets/*
> cmd/root.go
> cmd/server.go
> storage/mongo/user.go
> storage/id.go
> user/user.go
> user/service.go
> forum/server.go
> main.go
>
> User package would look like:
>
> package user
>
> import (
> "git.icod.de/dalu/forum/server/storage"
> )
>
> type User struct {
> ID storage.ID `bson:"_id,omitempty" json:"id"`
> Name   string `bson:"name" json:"name"`
> Email  string `bson:"email" json:"email"`
> Password   []byte `bson:"password" json:"-"`
> ActivationCode string `bson:"activation_code,omitempty"
> json:"activation_code,omitempty"`
> RecoveryCode   string `bson:"recovery_code,omitempty"
> json:"recovery_code,omitempty"`
> Roles  []string   `bson:"roles" json:"roles"`
> }
>
> type Storage interface {
> Create(m *User) error
> Update(id string, m *User) error
>
> ReadOne(id string) (*User, error)
> ReadOneBy(query map[string]interface{}) (*User, error)
>
> ReadAll(sort string, start, limit int) ([]*User, error)
> ReadAllBy(query map[string]interface{}, sort string, start, limit int)
> ([]*User, error)
>
> Delete(id string) error
> DeleteBy(query map[string]interface{}) error
> }
>
> type Service struct {
> storage Storage
> }
>
> func NewService(storage Storage) *Service {
> return &Service{storage}
> }
>
> func (s *Service) CreateUser(name, email, password string) error {
> user := &User{}
> user.Name = name
> user.Email = email
>
> // ...
> return s.storage.Create(user)
> }
>
> func (s *Service) VerifyActivationCode(email string) {
> // ...
> }
>
> It will make your whole code easier to manage. Names will become much
> nicer and clearer. Lots of interfaces will disappear.
>
> For storage.ID, it can be implemented as *interface{ Set(v string);
> String() string }* or *struct { I uint64; B bson.ObjectId }* (with custom
> marshalers)...
>
> + Egon
>
> On Sunday, 28 May 2017 21:40:01 UTC+3, Darko Luketic wrote:
>>
>> I'm stuck and I hoped it wouldn't come to that.
>>
>> I wanted to have interfaces for various databases aka "I wanted to
>> support multiple databases" (not debatable).
>> The idea was have ModelInterface (UserModelInterface,
>> CategoryModelInterface etc) which would wrap the model with getters setters
>> StorageInterface which would CRUD the modelinterfaces
>> and finally services which would implement higher level and more
>> convenient functions and use storage interfaces to store data.
>>
>> Well up to the point where I started creating services everything went
>> mostly smooth.
>> But I hoped I could keep the services database-agnostic.
>> However I can't.
>>
>>
>> https://github.com/dalu/forum/tree/f39df77f5003f71f08f473970b3df1fbd29a5a43
>>
>> as you can see in line 19 and 20
>>
>> https://github.com/dalu/forum/blob/f39df77f5003f71f08f473970b3df1fbd29a5a43/server/service/user.go#L19
>>
>> when I use a concrete model.User (aka line 20) everything works without
>> error.
>> but when I use var m storage.ModelInterface and then call a member
>> function of the interface (SetName)
>> Go panics because nil pointer dereference
>>
>> === RUN   TestNewUserService
>> --- FAIL: TestNewUserService (0.00s)
>> panic: runtime error: invalid memory address or nil pointer dereference
>> [recovered]
>> panic: runtime error: invalid memory address or nil pointer
>> dereference
>> [signal SIGSEGV: segmentation violation code=0x1 addr=0x80 pc=0x5ba9f8]
>>
>> goroutine 5 [running]:
>> testing.tRunner.func1(0xc420068b60)
>> /usr/lib/go/src/testing/testing.go:622 +0x29d
>> panic(0x5f1520, 0x721990)
>> /usr/lib/go/src/runtime/panic.go:489 +0x2cf
>>
>> git.icod.de/dalu/forum/server/service.(*UserService).CreateUser(0xc420055f40,
>> 0x62f170, 0x8, 0x630677, 0xd, 0x62e844, 0x6, 0x7ffdb1d66ba1, 0xc420031f68)
>> /home/darko/go/src/
>> git.icod.de/dalu/forum/server/service/user.go:19 +0x28
>> git.icod.de/dalu/forum/server/service.TestNewUserService(0xc420068b60)
>> /home/darko/go/src/
>> git.icod.de/dalu/forum/server/service/user_test.go:23 +0x1f9
>> testing.tRunner(0xc420068b60, 0x63af98)
>> 

Re: [go-nuts] Re: How fast can gopacket handles?

2017-05-27 Thread Kevin Conway
>  Any actual processing of the packet in the same thread significantly
hurt the rate
>  offload the actual packet processing to a different goroutine

As Rajanikanth points out, you'll need to put your work in other goroutines
to make use of your other cores for processing them. One goroutine per
packet is likely going to cause its own issues. I'd suggest adding a
configurable batch size to let you iterate and find the ideal number
packets to spin off for processing in a goroutine. Maybe experiment with a
few different patterns. For example you might start with a naive goroutine
creation on each batch of a significant size (
https://play.golang.org/p/GH16HEJgiy) or implement something like a worker
pool model where you send segments of work to available workers (
https://play.golang.org/p/3D_JuWdA4a).

Also, given that you are attempting to provide as much active time to the
packet collector as possible, it might be worthwhile to investigate usage of
https://golang.org/pkg/runtime/#LockOSThread which allows you to isolate
your consumer goroutine to an OS thread and force all other goroutines to
operate in other OS threads.
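
A rough sketch that combines both suggestions (the channel sizes, batch
size, rawPackets source, and process function are all placeholders):

batches := make(chan [][]byte, 64)

// Dedicated collector goroutine pinned to its own OS thread.
go func() {
    runtime.LockOSThread()
    defer runtime.UnlockOSThread()
    batch := make([][]byte, 0, 256)
    for pkt := range rawPackets { // however you read raw packets off the wire
        batch = append(batch, pkt)
        if len(batch) == cap(batch) {
            batches <- batch
            batch = make([][]byte, 0, 256)
        }
    }
    close(batches)
}()

// Worker pool doing the actual processing on the remaining cores.
workers := runtime.NumCPU() - 1
if workers < 1 {
    workers = 1
}
for i := 0; i < workers; i++ {
    go func() {
        for batch := range batches {
            for _, pkt := range batch {
                process(pkt) // your per-packet work
            }
        }
    }()
}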

On Sat, May 27, 2017 at 9:15 AM Rajanikanth Jammalamadaka <
rajanika...@gmail.com> wrote:

> Can you offload the actual packet processing to a different goroutine?
>



Re: [go-nuts] Re: How fast can gopacket handles?

2017-05-27 Thread Kevin Conway
> can only handle up to 250Mbps-ish traffic

I'm not familiar with gopacket, but I have seen multiple occasions where
logging to a file or stdout became a bottleneck. Your code snippet is
logging on every packet which seems excessive. Try logging with less
frequency and, if using stdout, consider using a log destination with
different buffering characteristics like a file or syslog over UDP.

On Fri, May 26, 2017 at 3:59 PM Egon  wrote:

> On Friday, 26 May 2017 20:51:55 UTC+3, Chun Zhang wrote:
>>
>> Good point.
>> as a comparison: tcpdump -w /dev/null can handle up to 750Mbps, where
>> sending machine's  speed limit reached. I think it should be able to handle
>> line rate.
>>
>> Are those two packages lighter/faster than gopacket?
>>
>
> Nevermind, just noticed... gopacket/pcap is a fork of akrennmair/gopcap
>
> Anyways, to get more information on what is taking time in your program
> see https://blog.golang.org/profiling-go-programs
>
> Maybe try something like this:
>
> handle, err := pcap.OpenLive(device, snapshot_len, promiscuous, timeout)
>
> // ...
> for {
> data, ci, err := handle.ZeroCopyReadPacketData()
> // ...
>
>
> This should remove allocations from critical path.
>
> *PS: code untested and may contain typos :P*
>
>
>>
>>
>> Thanks,
>> Chun
>>
>> On Friday, May 26, 2017 at 12:37:55 PM UTC-4, Egon wrote:
>>>
>>> As a baseline measurement I suggest writing the same code in C; this
>>> shows how much your VM / config / machine can handle.
>>>
>>> With gopacket -- use src.NextPacket instead of Packets.
>>>
>>> There are also: https://github.com/akrennmair/gopcap and
>>> https://github.com/miekg/pcap
>>>
>>> + Egon
>>>
>>> On Friday, 26 May 2017 19:01:20 UTC+3, Chun Zhang wrote:

 Hi, All,

 I am trying to write a small program to handle packets coming from a
 GigE wire. The testing code snip is as below.

 The problem I am facing is that this is extremely slow, can only handle
 up to 250Mbps-ish traffic with normal ipv4 sized packets, anything above
 that resulting significant packet drop.  Note that I am doing nothing with
 the packet at this moment. If I try to do any packet processing, then
 apparently it gets slower.

 Has anybody measured the efficiency of the gopacket package? Is there
 any other faster alternatives?

 PS: the host machine is an ubuntu VM with 12-core and 12G memory, but
 looks only 2 cores are used for this program.

 Thanks,
 Chun



 // Open device
 handle, err = pcap.OpenLive(device, snapshot_len, promiscuous, timeout)
 if err == nil {
Info.Println("Open interface ", device, "successfully")

 }
 defer handle.Close()


//fmt.Println("In the deafult reading case ", time.Now())
// Use the handle as a packet source to process all packets
packetSource := gopacket.NewPacketSource(handle, handle.LinkType())
Info.Println("pcketsourc is ", packetSource, time.Now())
for packet := range packetSource.Packets() {
   Debug.Println("---")
   count++
   Warning.Println("packet count ", count)

   // write to a pcap for testing
   /*err = w.WritePacket(packet.Metadata().CaptureInfo, packet.Data())
   if err != nil {
      fmt.Println(err)
   }*/

   continue




Re: [go-nuts] netutil.LimitListener(lis, n) limits to n-2 requests

2017-05-23 Thread Kevin Conway
I'm speculating a bit here until I can dig back into the listener stack,
but at least one variable you can tweak is the connection keep alive. On
the surface, limit listener appears to gate on the socket accept call. I
feel it is an important distinction because it limits the listener and not
the request handler, so we're really talking about concurrent connections
rather than requests.

The default client will reuse established connections as available. It's
possible you've found a poor interaction between connection pooling and
connection limiting. You might try running your experiment again with keep
alive disabled on your HTTP client.
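
For the experiment, disabling keep-alive on the client side is a one-line
transport setting (a minimal sketch):

client := &http.Client{
    Transport: &http.Transport{
        DisableKeepAlives: true, // every request opens and closes its own connection
    },
}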

On Tue, May 23, 2017, 05:57 Pablo Rozas Larraondo <
p.rozas.larrao...@gmail.com> wrote:

> Hi gophers,
>
> I'm using netutil.LimitListener to limit the number of concurrent requests
> that can be handled in a server to limit the memory usage of the service.
> I've noticed that when I send a bunch of requests it serves the number
> passed to the LimitListerner but after that it serves n-2 requests. I
> wonder what the problem might be, not a big deal but maybe there is some
> kind of problem in the library.
>
> Here is the code that I've created to test this problem:
>
> Server (waits 5 seconds before replying with concurrency limited to 10):
> package main
>
> import (
> "golang.org/x/net/netutil"
> "log"
> "net"
> "net/http"
> "time"
> )
>
> type fooHandler struct {
> }
>
> func (fh *fooHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
> time.Sleep(5 * time.Second)
> w.Write([]byte("Done"))
> }
>
> func main() {
> lis, err := net.Listen("tcp", ":8080")
> if err != nil {
> log.Fatalf("failed to listen: %v", err)
> }
> lis = netutil.LimitListener(lis, 10)
> fH := &fooHandler{}
>
> s := &http.Server{
> Handler:fH,
> ReadTimeout:60 * time.Second,
> WriteTimeout:   60 * time.Second,
> MaxHeaderBytes: 1 << 20,
> }
>
> if err := s.Serve(lis); err != nil {
> log.Fatalf("failed to serve: %v", err)
> }
> }
>
>
> Client (Sends 40 concurrent requests to the server):
> package main
>
> import (
> "fmt"
> "io/ioutil"
> "net/http"
> "sync"
> )
>
> func Ask(wg *sync.WaitGroup, n int) {
> resp, err := http.Get("http://localhost:8080")
> if err != nil {
> return
> }
> defer resp.Body.Close()
> body, err := ioutil.ReadAll(resp.Body)
> fmt.Println(err, string(body), n)
> wg.Done()
> }
> func main() {
> wg := &sync.WaitGroup{}
> for i := 0; i < 40; i++ {
> wg.Add(1)
> go Ask(wg, i)
> }
> wg.Wait()
> }
>
> In this example I see 10 responses after the first 5 seconds and then 8
> responses every five seconds.
>
> go version go1.8 darwin/amd64
>
> Thanks,
> Pablo
>



Re: [go-nuts] Re: Recover considered harmful

2017-04-25 Thread Kevin Conway
> To try to postpone the exit of a program after a critical error to me
implies a much more complex testing and validation process that has
identified all the shared state in the program and verified that it is
correct in the case that a panic is caught

There's an implicit argument here that the panic is, in fact, the result of
a critical error. This is my primary contention with the general use of
panic(). There is no guarantee for me as the consumer of a panicking
library that the panic in question is truly related to an unrecoverable
exception state that can only be resolved by a process exit

I posit the question one last time: How can the author of shared code
understand, in sufficient detail, all the possible ways that the code could
be leveraged such that she or he could determine, objectively, that any
given process must stop when a particular error state is encountered?

> There are a host of other reasons that can take a server offline
abruptly. It seems like a odd misallocation of resources to try to prevent
one specific case.

This, generally, is the argument that "if you can't stop all exceptions
then why bother to stop any?". Contrary to exception states such as my
cloud provider has terminated my instance abruptly or my data center has
lost power, panic() uses are entirely defined by developers and not
strictly related to unrecoverable exception states. The process exit in the
case of a panic is entirely preventable unlike a true, systemic failure. To
say that panic leads to process termination and, therefore, panic is
equivalent to all process termination events is fallacious. I stand firm
that only the process developer knows when the process should exit.

To put it more succinctly: The idea that your exception state should stop
my process is, well, that's just, like, your opinion, man.

On Tue, Apr 25, 2017, 21:52 Dave Cheney  wrote:

> > Yes, and then crashes the program. In the scenario I described, with
> thousands of other requests in flight that meet an abrupt end.  That could
> be incredibly costly, even if it's been planned for
>
> There are a host of other reasons that can take a server offline abruptly.
> It seems like a odd misallocation of resources to try to prevent one
> specific case - a goroutine panics due to a programming error or input
> validation failure -- both which are far better addressed with testing.
>
> To try to postpone the exit of a program after a critical error to me
> implies a much more complex testing and validation process that has
> identified all the shared state in the program and verified that it is
> correct in the case that a panic is caught.
>
> To me it seems simpler and more likely to have the root cause of the panic
> addressed to just let the program crash. The alternative, somehow
> firewalling the crash, and its effects on the internal state of your
> program, sounds unworkably optimistic.
>



Re: [go-nuts] Recover considered harmful

2017-04-24 Thread Kevin Conway
On Mon, Apr 24, 2017, 21:31 Sam Whited  wrote:

> On Mon, Apr 24, 2017 at 6:31 PM, Dan Kortschak
>  wrote:
> > We (gonum) would extend the security exception to include scientific
> > code; there are far too many peer reviewed works that depend on code
> > that will blithely continue after an error condition that should stop
> > execution or log failure.
>
> Also a great example! The main take away here is that we should always
> design for failure, and sometimes the primary failure mode should be
> "abort at all costs and let the application developer know that
> something catastrophic happened which could lead to worse things
> happening in the future".
>
> —Sam
>

In this example we're considering panic as a mechanism of preventing
otherwise avoidable code bugs. What happens when the same code begins
silencing panics and continuing on? Do we add a new level of panic that
overcomes the normal recovery method? The fundamental assertion being made
by panic advocates is that you know better than I when my program should
end and you want some mechanism to enforce that opinion on me.

I'll argue that sticking to idiomatic errors returned by function calls
combined with static analysis tools, like errcheck, are sufficient in
solving for all scenarios where panic might otherwise be used to signal an
error state. If you want to use panic internally within an API that's
completely acceptable so long as that panic is never exposed beyond the API
boundary. To quote the golang blog on the subject:

The convention in the Go libraries is that even when a package uses panic
internally, its external API still presents explicit error return values.
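
That convention looks roughly like this (a minimal sketch; the Parse and
parse functions are made up for illustration):

// Parse presents an explicit error at the API boundary even though the
// internal implementation uses panic for its own control flow.
func Parse(input string) (n int, err error) {
    defer func() {
        if r := recover(); r != nil {
            err = fmt.Errorf("parse %q: %v", input, r)
        }
    }()
    return parse(input), nil
}

// parse panics on malformed input; that panic never escapes Parse.
func parse(input string) int {
    if input == "" {
        panic("empty input")
    }
    return len(input) // real parsing elided
}

// assumes: import "fmt"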



Re: [go-nuts] Recover considered harmful

2017-04-24 Thread Kevin Conway
I'd say that recover() is not a problem but, instead, a symptom of panic()
being available to developers. I'd flip the title and say panic() should be
considered harmful. To quote from
https://blog.golang.org/defer-panic-and-recover :
> The process continues up the stack until all functions in the current
goroutine have returned, at which point the program crashes

Any code that invokes panic is very clearly stating that an error has
occurred that is completely unrecoverable and the _only_ choice of action
that could possibly be taken is to end the program. The recover() builtin
must exist to account for the fact that _all_ uses of panic in user space
are, in fact, recoverable errors.

As someone developing in go, it is infuriating when external libraries
(whether 3rd party or std lib) make decisions about when my program should
stop. Code related bugs, such as nil pointer dereferences or invalid
interface conversions, should result in a process failure just like a
segfault in any other runtime. However, some library using the same process
ending mechanism to let me know that it doesn't like the format of my
string input is unacceptable.

On Mon, Apr 24, 2017 at 5:41 AM Christian von Pentz 
wrote:

> On 04/24/2017 11:02 AM, Sokolov Yura wrote:
> > I mean: panic usually means programmer error, so if it happens, then
> > program behaves incorrectly, and there is always a chance of serious
> > state corruption. So, is there reason to recover at all?
>
> I encountered many cases of panics when using external tools/libraries
> which were completely "fine" to recover from. magicmime was such a
> package that had a few "hiccups" when used in a multi-threaded
> environment mostly due to the underlying libmagic. That being said, very
> easy and convenient to recover from, so yeah, I would say recover is a
> perfectly valid strategy sometimes.
>



Re: [go-nuts] What's the recommended way to determine if a type.Type is exported?

2017-04-24 Thread Kevin Conway
Alternatively, if you are walking an AST and only interested in the
exported names then you can use
https://golang.org/pkg/go/ast/#PackageExports . It will filter your tree to
only elements that are exported before you begin traversal.
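
A minimal sketch of that filter (the directory path is a placeholder):

fset := token.NewFileSet()
pkgs, err := parser.ParseDir(fset, "./somepkg", nil, 0)
if err != nil {
    log.Fatal(err)
}
for _, pkg := range pkgs {
    ast.PackageExports(pkg) // trims the tree in place to exported declarations only
    ast.Inspect(pkg, func(n ast.Node) bool {
        if fn, ok := n.(*ast.FuncDecl); ok {
            fmt.Println("exported func:", fn.Name.Name)
        }
        return true
    })
}

// assumes: import ("fmt"; "go/ast"; "go/parser"; "go/token"; "log")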

As to your specific case of determining if a type used in a method
signature is exported, I'd ask: Does it matter if the source package is the
same one where the method is defined or is it any non-builtin, exported
types used in the signature?

I believe the case of "exported name used in a method within the same
package" can be determined by iterating over the 'Params.List' attribute of
the ast.FuncType and looking for elements of the slice that are of type
'*ast.Ident' and checking the corresponding 'IsExported' method call
results. For complete coverage, you'd also need to check for
*ast.ArrayType, *ast.ChanType, *ast.MapType, *ast.Ellipsis (for uses of the
type as a variadic), *ast.FuncType (and its corresponding arguments and
return types), and *ast.StarExpr (for uses of the type as a pointer).

The case of "any non-builtin, exported type from any package" would use the
previous logic but also add in checking for elements of 'Params.List' that
are of type '*ast.SelectorExpr'. While I'm sure there is a case where this
is not true, the elements of type *ast.SelectorExpr in the parameter list
of an *ast.FuncType are usually references to a type exported by another
package (think "http.Handler"). The 'X' attribute can be converted to
*ast.Ident for the source package name and the 'Sel' attribute contains the
name of the referenced type. Note that the package name might actually be a
local alias when the file imports with ' import myname "net/http" '.

https://play.golang.org/p/yNKAQe_YSV

PS the implementation of 'IsExported' used by everything checks the
capitalisation of the first letter to make its choice:
https://golang.org/src/go/ast/ast.go?s=16357:16390#L516

On Mon, Apr 24, 2017 at 2:49 AM Tejas Manohar  wrote:

> Agreed! I ended up casting to types.NamedType (or pointer then elem())...
> I'll try your method too!
> On Sun, Apr 23, 2017 at 11:11 PM Axel Wagner <
> axel.wagner...@googlemail.com> wrote:
>
>> I'd say, probably type-asserting to a *types.TypeName and then use
>> Exported() (the code of which also leads to ast.IsExported).
>>
>> Side-note: You probably shouldn't rely on parsing the output of String()
>> of any value, just as you shouldn't rely on parsing the output of
>> error.Error().
>>
>> On Mon, Apr 24, 2017 at 7:06 AM,  wrote:
>>
>>> https://golang.org/pkg/go/types/#Type
>>>
>>> Is there a helper function to determine whether a *types.Type* is
>>> exported? It seems like I can do some string parsing like
>>>
>>> t := something.Type()
>>> parts := strings.Split(t.String(), ".") // go/ast.Node => ["go/ast",
>>> "Node"]
>>> ident := parts[len(parts)-1] // "Node"
>>> exported := strings.IsUpper(ident[0])
>>>
>>> but I imagine there's a simpler, more robust way. The end goal is to
>>> find out whether a type of a method argument is exported-- e.g.
>>> namedType := obj.Type().(*types.Named)
>>> method := namedType.Method(0)
>>> signature := method.Type().(*types.Signature)
>>> signature.Params().At(0).Type() // is this exported?
>>>
>>> And, for some context, all of this is from walking go/ast
>>>
>>
>> --
>
> Best regards,
>
> Tejas Manohar
>



Re: [go-nuts] Re: go-swagger dynamic API creation

2017-03-28 Thread Kevin Conway
Having used go-swagger (https://github.com/go-swagger/go-swagger) at one
point, I'd say that these YAML generators are possibly useful for
generating documentation from your code. One pain point of the
documentation generators, though, is that most require that I create and
maintain code objects exclusively for the benefit of doc generation. At
that point, I'd rather maintain docs than unused code objects. That's a
personal preference, though.

Now, if you're doing any amount of contract negotiation with consumers,
attempting to implement an API contract, intending to perform any amount of
contract testing, or are targeting any form of contract driven development
then these generator tools are a complete inversion of the model you want.
If any of the previous statements are true then you should consider finding
code generators that consume swagger/OpenAPI documents and generate code
skeletons to fill in rather than YAML generators that leverage your code.

On Mon, Mar 27, 2017 at 2:01 PM Matt Ho  wrote:

> Before writing github.com/savaki/swag, I gave goswagger a try.  I think
> goswagger is a fantastic library with lots of useful features.  However,
> for my own use, I found things like:
>
> var findTodos = runtime.OperationHandlerFunc(func(params interface{}) 
> (interface{}, error) {
> log.Println("received 'findTodos'")
> log.Printf("%#v\n", params)
>
> return items, nil})
>
>
> a little cumbersome.  I also wanted to be able to use automatic code
> reload tools like https://github.com/codegangsta/gin and code generation
> made that a little more problematic.
>
> Hence was born:
>
> https://github.com/savaki/swag
>
>
> M
>
> On Monday, March 27, 2017 at 11:39:48 AM UTC-7, Johann Höchtl wrote:
>
> The last time I used it, swagger was called swagger.
>
> Lots has changed since it's OpenAPI. A huge framework evolved around it
> https://goswagger.io/
>
> I really like the approach of defining the API entirely dynamically in
> code (and announcements like
> https://groups.google.com/forum/#!topic/golang-nuts/3ebgsgF6W2c, nice!) .
> Unless I misunderstand goswagger.io, nothing prevents the drifting apart
> of the generated code from the YML - api spec.
>
> There is also an example to dynamically generate the swagger spec using
> goswagger.io
> https://goswagger.io/tutorial/dynamic.html
> and I wonder if there is experience using that. Especially is it in
> feature parity with the go generate approach of  goswagger.io?
>
> Thank you!
>
>



Re: [go-nuts] Handle gziped request bodies

2017-03-04 Thread Kevin Conway
>> AFAIK, there is (still?) no out-of-the-box support for gzip.
I believe this is the answer. I figured I'd fish around to see if anyone
had already solved and open sourced a solution to this problem.

>> See, for example, https://github.com/NYTimes/gziphandler
While certainly related, this example only provides gzip _compression_ for
responses emitted from an HTTP server when the request has the appropriate
accept-encoding header. I'm also looking to gzip _decompress_ the incoming
request bodies when they are marked with an appropriate content-encoding
header.
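
For anyone who finds this thread later, the workaround looks roughly like
this (a minimal sketch; the middleware name is mine):

// withGzipRequests decompresses request bodies marked with
// Content-Encoding: gzip before handing them to the wrapped handler.
func withGzipRequests(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.Header.Get("Content-Encoding") == "gzip" {
            gz, err := gzip.NewReader(r.Body)
            if err != nil {
                http.Error(w, "malformed gzip body", http.StatusBadRequest)
                return
            }
            defer gz.Close()
            r.Body = gz
            r.Header.Del("Content-Encoding")
        }
        next.ServeHTTP(w, r)
    })
}

// assumes: import ("compress/gzip"; "net/http")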

On Sat, Mar 4, 2017 at 4:16 PM John Kemp <stable.pseudo...@gmail.com> wrote:

> AFAIK, there is (still?) no out-of-the-box support for gzip.
>
> See, for example, https://github.com/NYTimes/gziphandler
>
> - johnk
>
> > On Mar 4, 2017, at 5:11 PM, Kevin Conway <kevinjacobcon...@gmail.com>
> wrote:
> >
> > I'm running a go 1.7 HTTP server. One of my clients is applying gzip to
> the POST body of their request and applying the appropriate
> content-encoding header. The current server implementation in go, unlike
> the client, does not appear to automatically handle decompression of the
> body. This is causing calls to JSON unmarshal to fail because it, rightly,
> expects uncompressed data to work with.
> >
> > Certainly, we could add our own gzip reader on top of the request body
> to handle this. It seems strange, though, that this case is accounted for
> in the go HTTP client but not the server.
> >
> > Have I missed something obvious?
> >



[go-nuts] Handle gziped request bodies

2017-03-04 Thread Kevin Conway
I'm running a go 1.7 HTTP server. One of my clients is applying gzip to the
POST body of their request and applying the appropriate content-encoding
header. The current server implementation in go, unlike the client, does
not appear to automatically handle decompression of the body. This is
causing calls to JSON unmarshal to fail because it, rightly, expects
uncompressed data to work with.

Certainly, we could add our own gzip reader on top of the request body to
handle this. It seems strange, though, that this case is accounted for in
the go HTTP client but not the server.

Have I missed something obvious?



Re: [go-nuts] http.Client interface

2016-12-11 Thread Kevin Conway
>  How would I swap out a http.Client that is required by a third-party
library with a another that supports, say retries with exponential back-off?
I'd suggest looking at http.Transport and http.RoundTripper [
https://golang.org/pkg/net/http/#RoundTripper]. The docs explicitly forbid
using RoundTripper to implement higher level protocol features like
retry-with-backoff based on HTTP response codes, but those are your only
hooks into making this a feature of the client rather than moving the retry
logic somewhere else.


On Sun, Dec 11, 2016 at 4:07 AM  wrote:

> Just a quick question - I've noticed that the http package doesn't have an
> interface for Client. How would I swap out a http.Client that is required
> by a third-party library with a another that supports, say retries with
> exponential back-off? It appears that it is not possible? Would the http
> package benefit from an interface for the http.Client?
>
> Cheers!
> Ben
>
