increasing response time.
Sweet.
Thanks again!
Evan
On Monday, 12 December 2016 14:36:14 UTC-8, Evan Digby wrote:
>
> Hi Dave,
>
> Thanks for the insight. I'll look further into the heap creep. I'll also
> try to put together a log of a run past 8 hours over the next day or
of the trace. GC sweep time is proportional to the size of the heap,
> so this is expected.
>
> On Tuesday, 13 December 2016 09:20:07 UTC+11, Evan Digby wrote:
>>
>> Hi Dave,
>>
>> Thanks for the reply. Attached is the full GCTRACE for ~8 hours of run. I
>
Hi all,
Under what circumstances could I expect the Go GC time per pause (currently
calculated as PauseTotalNS / NumGC from the runtime stats) to creep up
slowly over time?
Assuming the number of allocations we do per second is consistent, could it
be that the GC is somehow not keeping up
November 2016 12:57:07 UTC-8, Evan Digby wrote:
>
> I think I've eyeballed a bug in my code that *might* cause this but I
> won't be at a computer for a day or two to verify. I'll keep everyone here
> posted!
--
You received this message because you are subscribed to the Google Groups
"golang-nuts" group.
https://play.golang.org/p/w9WYsNnkv6
On Thursday, 10 November 2016 17:34:08 UTC-8, Evan Digby wrote:
>
> I'm actually struggling to come up with a toy example that identically
> reproduces the results. I wonder if there are some compiler optimizations
> happening here.
>
> The e
Is it expected that if multiple sub-benchmarks are run in the same
benchmark, the cost of the setup will impact the results of the first
sub-benchmark but not the second?
func BenchmarkIntersection(b *testing.B) {
	// Long setup -- ~5.5 seconds
	b.Run("basic 1", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			// benchmark body (truncated in the original)
		}
	})
}
On Tuesday, 13 September 2016 22:41:44 UTC-7, Egon wrote:
>
> On Wednesday, 14 September 2016 00:58:39 UTC+3, Evan Digby wrote:
>>
>> Hi Egon,
>>
>> Thanks for that. It seems to implement the same requirements as
>> implemented in my example, although I prefer my implement
for miscommunicating the
original question.
Evan
On Tue, 13 Sep 2016 at 14:48 Egon <egonel...@gmail.com> wrote:
>
>
> On Wednesday, 14 September 2016 00:18:26 UTC+3, Evan Digby wrote:
>>
>> Hi Egon,
>>
>> My requirements are simpler than a grac
ven this, there was no need
> for the h.closed channel.
>
>
>
> Back in a few. :)
>
>
>
> John
>
> John Souvestre - New Orleans LA
>
>
>
> *From:* Evan Digby [mailto:evandi...@gmail.com]
> *Sent:* 2016 September 13, Tue 15:59
> *To:* John Souvestre; golang-nuts
>
> John
>
> John Souvestre - New Orleans LA
>
>
>
> *From:* golang-nuts@googlegroups.com [mailto:golang-nuts@googlegroups.com]
> *On Behalf Of *Evan Digby
> *Sent:* 2016 September 13, Tue 15:32
> *To:* golang-nuts
> *Cc:* aro...@gmail.com
> *Subject:* Re:
libraries that do that? If you don't have a way to account for
> the time between when Handle(..) is called and the goroutine starts, you
> might always miss a task that was submitted near the time Close() was called.
>
> - Augusto
>
>
> On Tuesday, September 13, 2016 at 12:50:50 PM UTC-7, Evan Digby wrote:
>>
>> Hi Aroman,
>>
>> Your ap
before the goroutine is spawned (among other sync requirements to ensure no
new connections are accepted, etc).
Thanks again,
Evan
On Tuesday, 13 September 2016 13:11:17 UTC-7, Egon wrote:
>
> On Tuesday, 13 September 2016 22:52:27 UTC+3, Evan Digby wrote:
>>
> one to show when all the tasks are running and one to show when all the
> tasks are done. No mutex and no blocking channels.
>
>
>
> John
>
> John Souvestre - New Orleans LA
>
>
>
> *From:* golan...@googlegroups.com [mailto:golan...@googlegroups.com]
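The two-WaitGroup suggestion above (one signalling that all tasks are running, one that all are done) can be sketched as follows; the task body here is a placeholder:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// runAll starts n tasks and uses two WaitGroups: "started" shows when
// every task is running, "done" shows when every task has finished.
// No mutex and no blocking channels are needed.
func runAll(n int) int32 {
	var started, done sync.WaitGroup
	var completed int32
	started.Add(n)
	done.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			started.Done()                 // this task is now running
			atomic.AddInt32(&completed, 1) // stand-in for real work
			done.Done()                    // this task is now finished
		}()
	}
	started.Wait() // all n tasks are running
	done.Wait()    // all n tasks are done
	return atomic.LoadInt32(&completed)
}

func main() {
	fmt.Println(runAll(5), "tasks completed") // 5 tasks completed
}
```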
; // alternatively use runtime.Gosched() instead of Sleep
> }()
>
> h.Close()
>
> if atomic.LoadInt64(&count) > 0 { // "count" stands in for the counter variable truncated above
> // fail
> }
>
> It's not completely fool-proof, but should work well enough in practice.
>
> On Tuesday, 13 September 2016 21:56:08 UTC+3, Evan Digby wrote:
ould block an outstanding task. The key to using waitgroups is to call
> Add() outside of goroutines that might call done:
>
> https://play.golang.org/p/QVWoy8fCmI
>
> On Tuesday, September 13, 2016 at 12:19:16 PM UTC-7, Evan Digby wrote:
>>
>> Hi John,
>>
>>
John Souvestre - New Orleans LA
>
>
>
> *From:* golan...@googlegroups.com [mailto:golan...@googlegroups.com] *On Behalf Of *Evan Digby
> *Sent:* 2016 September 13, Tue 13:56
> *To:* golang-nuts
> *Subject:* [go-nuts] Having difficulty testing this "cleanly"
>
>
>
>
Has anyone come across a good, non-racy way to ensure that N tasks are
guaranteed to be completed after a function is called? Essentially I have a
“Close” function that must be guaranteed to block until all tasks are
finished. Achieving this was pretty simple: wrap each task in an RLock,
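A minimal sketch of that RLock approach (type and method names are assumptions, not the original code): each task holds the read side of an RWMutex, and Close takes the write side, so it cannot proceed while any task still holds an RLock:

```go
package main

import (
	"fmt"
	"sync"
)

// Service guarantees that Close blocks until all running tasks finish.
type Service struct {
	mu     sync.RWMutex
	closed bool
}

// Do runs task under the read lock; it refuses to start once closed.
func (s *Service) Do(task func()) bool {
	s.mu.RLock()
	defer s.mu.RUnlock()
	if s.closed {
		return false
	}
	task()
	return true
}

// Close blocks until in-flight tasks release their RLocks, then
// prevents any new task from starting.
func (s *Service) Close() {
	s.mu.Lock()
	s.closed = true
	s.mu.Unlock()
}

func main() {
	var s Service
	before := s.Do(func() { fmt.Println("task ran") })
	s.Close()
	after := s.Do(func() {})
	fmt.Println(before, after) // true false
}
```

Checking `closed` under the RLock, with writes only under the full Lock, keeps the flag access race-free.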
Sorry, I didn't see your longer high-level overview post--I see the vision
a bit more clearly now.
Have you used Slack? It might be a good place to start a discussion to hammer
out some details before moving on to a Google Doc. I find shared Google Docs
can be challenging without at least a
ine6-4 1000 145 ns/op
Since my full use case (which wasn't included in the original post) requires
appending more than one thing in a loop, that exacerbates this issue
further.
Thanks!
Evan
On Sunday, 17 July 2016 16:31:33 UTC-7, kortschak wrote:
>
> On Sun, 2016-07-17 at 09:
Hi TL,
It's as identical between the two runs as it can be, given a rebuild in
between the two runs. The values used are the same.
I'm going to dig into the assembly a bit when I get the time.
For now the solution is to explicitly make copies, which was the desired result
in the first place.
The
Hi Nate,
Thanks for the suggestions. We've definitely backed off appending to a
separate value.
Our code ended up looking like this (the requirements were actually a bit
more complex than the core example):
func (n Namespace) Combine(with ...Namespace) Namespace {
// Benchmarks show it's
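The body above is truncated; a hypothetical preallocate-and-copy version, consistent with the "explicitly make copies" fix mentioned earlier (the Namespace definition here is assumed purely for illustration):

```go
package main

import "fmt"

// Namespace is a minimal stand-in for the type in the thread; the real
// definition was not shown, so this is an assumption for illustration.
type Namespace []string

// Combine returns a new Namespace backed by a freshly allocated array,
// so the result can never alias (and later clobber) the receiver's
// underlying array the way a bare append can.
func (n Namespace) Combine(with ...Namespace) Namespace {
	size := len(n)
	for _, w := range with {
		size += len(w)
	}
	out := make(Namespace, 0, size) // one allocation, exact capacity
	out = append(out, n...)
	for _, w := range with {
		out = append(out, w...)
	}
	return out
}

func main() {
	a := Namespace{"acme", "prod"}
	b := a.Combine(Namespace{"svc"})
	b[0] = "changed"
	fmt.Println(a[0]) // acme: the receiver is unaffected
}
```

Sizing the capacity up front keeps the loop to a single allocation, which matters when combining repeatedly.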
share the same underlying array.
>
> On Friday, July 15, 2016 at 7:28:32 PM UTC+2, Evan Digby wrote:
>>
>> I can't reproduce this in go playground (yet), but under what
>> circumstances would/could/should:
>>
>> nss := []namespace.Namespace{
>> appe
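The circumstances are two appends that share a source slice with spare capacity; a self-contained demonstration:

```go
package main

import "fmt"

func main() {
	base := make([]int, 2, 4) // len 2, cap 4: spare room to grow in place
	base[0], base[1] = 1, 2

	x := append(base, 3) // writes 3 into base's spare slot
	y := append(base, 4) // writes 4 into the SAME slot, clobbering x[2]

	fmt.Println(x[2], y[2]) // 4 4: both results alias one underlying array
}
```

When cap is exhausted, append allocates a fresh array instead, which is why the symptom appears only under certain lengths and is hard to reproduce in a toy example.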
I'm noticing that if I accidentally create a second template with the same
name but no content, no error is reported when I "parse" it, but when I
attempt to execute it I do see an error:
https://play.golang.org/p/Rj3433vvju
Is this expected behaviour? Why wouldn't it simply return
One approach to deal with this would be to abstract your internal model as
an interface from the model you receive and mutate them to match. This also
gives you the power to truncate/round/mutate a float however best suits
your needs rather than hoping the library truncates/rounds/mutates in a
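A sketch of that shape (the type and field names are assumptions): the internal interface is the only thing the rest of the code sees, so the rounding policy lives in one place:

```go
package main

import (
	"fmt"
	"math"
)

// wireModel stands in for what the external library hands back;
// the field is an assumption for illustration.
type wireModel struct {
	Amount float64
}

// Model is the internal abstraction the rest of the code depends on.
type Model interface {
	Amount() float64
}

// internalModel adapts a wireModel and applies our own rounding
// policy (2 decimal places) instead of hoping the library's matches.
type internalModel struct{ w wireModel }

func (m internalModel) Amount() float64 {
	return math.Round(m.w.Amount*100) / 100
}

func main() {
	var m Model = internalModel{w: wireModel{Amount: 19.999999}}
	fmt.Println(m.Amount()) // 20
}
```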
Unfortunately there's a reason why password managers still require a
passphrase. It's simply not secure to store the key anywhere near the
secure files.
I'm not saying password managers are perfect--they have their own security
issues--but it seems that you're attempting to closely mimic that