This is a somewhat known issue. Each token in a parsed .go file is
represented by a Node structure inside the compiler. The Node structure is
large, especially on 64-bit systems.

Normally this is not a problem, but when the source contains large tables of
data, memory usage while compiling can be unexpectedly high.

This problem is being worked on, but no solution exists in a shipping version
of Go yet.

The mitigation is to reduce the number of parsed tokens, so rather than

var data = []byte{65, 66, 67, 68, 69, 70}

Do

const data = "ABCDEF"

The latter produces O(1) tokens per declaration versus O(n) tokens for the
former.

If the data cannot be represented as text, compressing it and base64 encoding
the result can help.

I'm not sure what strategy go-bindata uses, but you can check for yourself by
looking at its generated output.

Dave

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
