[go-nuts] Re: Aren't package declarations (and per file imports) redundant?

2017-03-18 Thread damien . stanton
Hi Sunder,

You can see the reasoning behind the package identifier in the language 
spec.  Hope this helps!

Damien

On Saturday, March 18, 2017 at 7:49:57 AM UTC-4, Sunder Rajan Swaminathan 
wrote:
>
> Before anyone flames, I love Go! There. Ok, now to the issue at hand --
>
> The toolchain already seems to understand the directory layout, so why
> bother littering the sources with package declarations? Also, is there a
> point to specifying imports at the file level? I mean, doesn't the linker
> bring in symbols at the package level anyway? My reason for bringing this
> up is that I'm trying to generate a codebase from a custom-built
> specification, and having to constantly tweak the imports and packages (in
> over 200 files) is getting in the way of smooth development. I'm sure
> others have had the same problem.
>
> In the spirit of bringing a solution and not just a problem, how about the
> toolchain assume a package to be "main" if there's a main function therein.
> Imports could be specified at the package level, as in D or Rust, in a
> separate file.
>
> Thanks!
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[go-nuts] Re: Different latency inside and outside

2017-03-18 Thread David Collier-Brown
Are you seeing the average response time / latency of the cache from 
outside? 

If so, you should see lots of really quick responses, and a few slow ones 
that average out to what you're seeing.

--dave


On Saturday, March 18, 2017 at 3:52:21 PM UTC-4, Alexander Petrovsky wrote:
>
> Hello!
>
> Colleagues, I need your help!
>
> And so, I have an application that accepts dynamic JSON over HTTP 
> (fasthttp) and unmarshals it into a map[string]interface{} using ffjson; 
> after that some fields are read into a struct, then using this struct I 
> make some calculations, and the struct fields are written back into the 
> map[string]interface{}. This map is written to Kafka (asynchronously), 
> and finally the result is returned to the client over HTTP. Also, I have 
> 2 caches, one containing 100 million items and the second 20 million; 
> these caches are built using freecache to avoid long GC pauses. The 
> incoming rate is 4k rps per server (5 servers in total); total CPU 
> utilisation is about 15% per server.
>
> The problem: my latency measurements show that latency inside the 
> application is significantly less than outside.
> 1. How do I measure latency?
> - I've added timings into the HTTP handler functions, and make graphs 
> from them.
> 2. How did I determine that latency inside the application is 
> significantly less than outside?
> - I installed an nginx server in front of my application, log 
> $request_time and $upstream_response_time, and make graphs from those too.
>
> The graphs show that latency inside the application is about 500 
> microseconds at the 99th percentile, and about 10-15 milliseconds outside 
> (nginx). nginx and my app run on the same server. My graphs show that GC 
> occurs every 30-40 seconds and takes less than 3 milliseconds.
>
> Could someone help me find the problem and profile my application?
>



[go-nuts] Different latency inside and outside

2017-03-18 Thread Alexander Petrovsky
Hello!

Colleagues, I need your help!

And so, I have an application that accepts dynamic JSON over HTTP 
(fasthttp) and unmarshals it into a map[string]interface{} using ffjson; 
after that some fields are read into a struct, then using this struct I 
make some calculations, and the struct fields are written back into the 
map[string]interface{}. This map is written to Kafka (asynchronously), and 
finally the result is returned to the client over HTTP. Also, I have 2 
caches, one containing 100 million items and the second 20 million; these 
caches are built using freecache to avoid long GC pauses. The incoming 
rate is 4k rps per server (5 servers in total); total CPU utilisation is 
about 15% per server.

The problem: my latency measurements show that latency inside the 
application is significantly less than outside.
1. How do I measure latency?
- I've added timings into the HTTP handler functions, and make graphs 
from them.
2. How did I determine that latency inside the application is significantly 
less than outside?
- I installed an nginx server in front of my application, log $request_time 
and $upstream_response_time, and make graphs from those too.

The graphs show that latency inside the application is about 500 
microseconds at the 99th percentile, and about 10-15 milliseconds outside 
(nginx). nginx and my app run on the same server. My graphs show that GC 
occurs every 30-40 seconds and takes less than 3 milliseconds.

Could someone help me find the problem and profile my application?



[go-nuts] atomic bugs

2017-03-18 Thread T L
At the end of the sync/atomic package docs, it says:

On x86-32, the 64-bit functions use instructions unavailable before the 
Pentium MMX. 


On non-Linux ARM, the 64-bit functions use instructions unavailable before 
the ARMv6k core. 


So when Go programs that call the 64-bit atomic functions run on the 
above-mentioned machines, will they crash?


If so, is it a good idea to add a compiler option to convert the 64-bit 
function calls to mutex calls?

And is it possible to do the conversion at run time?


I also read somewhere that the Go authors somewhat regret exposing the 
atomic functions, since they were intended for internal use by the 
standard packages.

So is it a good idea to recommend that gophers use mutexes over atomics, 
and have the compiler convert some mutex calls to atomic calls 
automatically?




Re: [go-nuts] Does it make sense to make expensive syscalls from different goroutines?

2017-03-18 Thread Vitaly Isaev


On Saturday, March 18, 2017 at 14:37:11 UTC+3, Konstantin Khomoutov 
wrote:
>
> On Sat, 18 Mar 2017 03:50:39 -0700 (PDT) 
> Vitaly Isaev  wrote: 
>
> [...] 
> > Assume that the application does some heavy lifting with multiple file 
> > descriptors (e.g., opening - writing data - syncing - closing), what 
> > actually happens to the Go runtime? Does it block all the goroutines at 
> > the time when an expensive syscall occurs (like syscall.Fsync)? Or is 
> > only the calling goroutine blocked while the others are still operating? 
>
> IIUC, since there's no general mechanism to have the kernel somehow notify 
> the process of the completion of any generic syscall, when a goroutine 
> enters a syscall, it essentially locks its underlying OS thread and 
> waits until the syscall completes.  The scheduler detects the goroutine 
> is about to sleep in the syscall and schedules another goroutine(s) to 
> run, but the underlying OS thread is not freed. 
>
> This is in contrast to network I/O which uses the platform-specific 
> poller (such as IOCP on Windows, epoll on Linux, kqueue on FreeBSD and 
> so on) so when an I/O operation on a socket is about to block, the 
> goroutine which performed that syscall is suspended, put on the wait 
> list, its socket is added to the set the poller monitors and its 
> underlying OS thread is freed to be able to serve a runnable goroutine. 
>
> > So does it make sense to write programs with multiple workers that do 
> > a lot of user space - kernel space context switching? Does it make 
> > sense to use multithreading patterns for disk input? 
>
> It may or may not.  A syscall-heavy workload might degrade the 
> goroutine scheduling to actually be N×N instead of M×N.  This might not 
> be a problem in itself (not counting a big number of OS threads 
> allocated and mostly sleeping), but concurrent access to the same slow 
> resource such as a rotating medium is almost always a bad idea: say, your 
> HDD (and the file system on it) might only provide such and such read 
> bandwidth, so spreading the processing of the data being read across 
> multiple goroutines is only worth the deal if this processing is so 
> computationally complex that a single goroutine won't cope with that 
> full bandwidth.  If one goroutine is OK with keeping up with that full 
> bandwidth, having two goroutines read that same data will make each deal 
> with only half the bandwidth, so they will sleep > 50% of the time. 
> Note that reading two files in parallel off a filesystem located on 
> the same rotating medium will usually result in lowered total 
> bandwidth due to the seek times required to jump around the blocks of 
> different files.
>
> SSDs and other kinds of media might have way better performance 
> characteristics, so it's worth measuring. 
>
> IOW, I'd say that trying to parallelize might be a premature 
> optimization.  It's worth keeping in mind that goroutines serve two 
> separate purposes: 1) they allow you to write natural sequential 
> control flow instead of callback-ridden spaghetti code; 2) they allow 
> performing tasks truly in parallel--if the hardware supports it 
> (multiple CPUs and/or cores). 
>
> This (2) is tricky because it assumes such goroutines have something to 
> do; if they instead contend on some shared resource, the parallelization 
> won't really happen. 
>

Thanks, that's a very good point.  



Re: [go-nuts] Re: Go - language of the future!

2017-03-18 Thread mhhcbon
wait. We should speak in the same kind of comparisons.

If we compare to languages like Haskell, Idris, Coq,
I'm not sure we are targeting the heart of the industry.
They do not represent so large a share of the code written every day
that Go is made to replace.

My understanding is that Go would like to be at the point of junction
of those two worlds, academic / enterprise.
In that regard, we should compare it to Java / PHP / Ruby / Python / D / C 
/ C++.

For those who are language designers, comparing the language to things
like Haskell is done in an attempt to make the best language design (small 
ego trip here?),
not the most practical, effective real-life language.
In real life we need both: a good language design to serve an efficient 
programming experience.
But, as with everything, too much of this or that is not good.

I don't think it's appropriate to compare Go to JS in the browser, 
but this is very much personal taste.

Also, JS as a backend language has been overused, imho; 
it serves a great purpose for stuff like p2p/cli (I'm still waiting to see 
a 100% Go project like this one: https://github.com/mafintosh/hypervision),
but it sounds unreasonable for a large web app backend,
in so many ways.
I like to think JS and Go complement each other.

What I find more interesting in your post is this quote:

> Go is an extremely innovative language

I would not say that, strictly speaking about the language itself,
as opposed to the environment it provides (test, compile etc.).

I'd rather say
Go was designed to implement, for real, some/lots of innovations;
since then,
I would say it is stuck in its compatibility promise. Don't you feel this 
way?
Note, I'm not criticizing;
it's a fact which I think does not help its innovative designers to 
innovate =)

On that topic, JS has been able to find its way to
test/try/evict-or-keep with projects like babel.
Not claiming here that babel is 100% perfect,
just saying: in practice they made that,
and it was useful to the language's evolution.

Anyway, in real life, I simply think we don't read often enough that *the 
language is great*,
but it's so easy to find threads of 100 messages
about such proven-useless things as semicolons.

You know, like a person, a language needs to be told it is great from time 
to time; it contributes to its well-being, in my very humble opinion.

On Friday, March 17, 2017 at 12:21:46 AM UTC+1, Jesper Louis Andersen wrote:
>
> I think it is premature to make this call. Javascript is a language 
> with no type system and it relies a lot on runtime behavior. Yet, it 
> was a "language of the future" when it was created, and I don't think it 
> was envisioned to become as big as it got. Attempts at replacing it are 
> going slowly because a language has momentum.
>
> Go is an extremely innovative language. The designers made a choice: keep 
> the language rather simple and conservative, but innovate in other areas of 
> language design: fast compilation speeds, fast linking, tooling, loose 
> coupling, and so on. Every language feature runs the danger of becoming a 
> long term liability, so adding those requires a lot of careful thought. 
> Keeping a language small also requires innovation and lots of hard 
> cost/benefit trade-offs.
>
> I think Go was right in requiring a garbage collector, rather than going 
> the C/C++ way of manually managing memory, the Swift way of using ARC, or 
> the Rust way of using ownership, borrowing and lifetimes. Concurrency 
> demands a sacrifice---in this case that you have to free yourself from 
> thinking about freeing of memory.
>
> Haskell is a language which is far smaller than Go. But it allows so many 
> combinations of its feature set that the learning curve is rather steep. It 
> is not a priori clear that such a language is more destined to be one for 
> the future, save for perhaps a fact that functional programs tend to be 
> easier to formally verify.
>
> A language such as Idris has dependent types, which essentially means that 
> the world of types and the world of terms (programs) get merged into one 
> and you program both at the same time. Yet, there are some real hard 
> problems which are only being overcome now for these kinds of languages. 
> Many of the ideas toy with restricting turing completeness in different 
> ways, often by limiting oneself to programs which are provably terminating. 
> Perhaps they are the languages of the future?
>
> Coq is a proof assistant. Not only can you write mathematics in such a 
> system, but you can write programs which you can then prove have certain 
> desirable properties. A good example is that you can prove the invariants 
> of a Red-Black search tree are maintained for your operations. The machine 
> is able to help you with your proof as you work through it (therein 
> "assistant"). But once the program is written, Coq can *extract* that 
> program to OCaml, Haskell, and so on. This has been used in a 
> tour-de-force, the CompCert project: Take the C specification. 

Re: [go-nuts] Aren't package declarations (and per file imports) redundant?

2017-03-18 Thread 'Axel Wagner' via golang-nuts
Hi,

the compiler (not just the linker) needs to know what identifiers refer to,
and package imports are file-local, so they necessarily need to be included
in all files. Similarly for the package declaration: AFAIK it's used in the
object files to disambiguate symbols.
What you can do to reduce the annoyance in generated code is to just
include all possibly needed packages and then mark them as used, by adding
something like "var _ bytes.Buffer" in the generated header. It then won't
matter whether the generated code actually uses the package; it counts as
used for correctness.

On Sat, Mar 18, 2017 at 12:49 PM,  wrote:

> Before anyone flames, I love Go! There. Ok, now to the issue at hand --
>
> The toolchain already seems to understand the directory layout, so why
> bother littering the sources with package declarations? Also, is there a
> point to specifying imports at the file level? I mean, doesn't the linker
> bring in symbols at the package level anyway? My reason for bringing this
> up is that I'm trying to generate a codebase from a custom-built
> specification, and having to constantly tweak the imports and packages (in
> over 200 files) is getting in the way of smooth development. I'm sure
> others have had the same problem.
>
> In the spirit of bringing a solution and not just a problem, how about the
> toolchain assume a package to be "main" if there's a main function therein.
> Imports could be specified at the package level, as in D or Rust, in a
> separate file.
>
> Thanks!
>



Re: [go-nuts] Aren't package declarations (and per file imports) redundant?

2017-03-18 Thread Jan Mercl
On Sat, Mar 18, 2017, 12:49  wrote:

> The toolchain already seems to understand the directory layout, so why
> bother littering the sources with package declarations?

The package declaration is not related to the directory layout.
-- 

-j



[go-nuts] Aren't package declarations (and per file imports) redundant?

2017-03-18 Thread u89012
Before anyone flames, I love Go! There. Ok, now to the issue at hand -- 

The toolchain already seems to understand the directory layout, so why 
bother littering the sources with package declarations? Also, is there a 
point to specifying imports at the file level? I mean, doesn't the linker 
bring in symbols at the package level anyway? My reason for bringing this 
up is that I'm trying to generate a codebase from a custom-built 
specification, and having to constantly tweak the imports and packages (in 
over 200 files) is getting in the way of smooth development. I'm sure 
others have had the same problem.

In the spirit of bringing a solution and not just a problem, how about the 
toolchain assume a package to be "main" if there's a main function therein. 
Imports could be specified at the package level, as in D or Rust, in a 
separate file.

Thanks!



Re: [go-nuts] Does it make sense to make expensive syscalls from different goroutines?

2017-03-18 Thread Konstantin Khomoutov
On Sat, 18 Mar 2017 03:50:39 -0700 (PDT)
Vitaly Isaev  wrote:

[...]
> Assume that the application does some heavy lifting with multiple file 
> descriptors (e.g., opening - writing data - syncing - closing), what 
> actually happens to the Go runtime? Does it block all the goroutines at
> the time when an expensive syscall occurs (like syscall.Fsync)? Or is only
> the calling goroutine blocked while the others are still operating?

IIUC, since there's no general mechanism to have the kernel somehow notify
the process of the completion of any generic syscall, when a goroutine
enters a syscall, it essentially locks its underlying OS thread and
waits until the syscall completes.  The scheduler detects the goroutine
is about to sleep in the syscall and schedules another goroutine(s) to
run, but the underlying OS thread is not freed.

This is in contrast to network I/O which uses the platform-specific
poller (such as IOCP on Windows, epoll on Linux, kqueue on FreeBSD and
so on) so when an I/O operation on a socket is about to block, the
goroutine which performed that syscall is suspended, put on the wait
list, its socket is added to the set the poller monitors and its
underlying OS thread is freed to be able to serve a runnable goroutine.

> So does it make sense to write programs with multiple workers that do
> a lot of user space - kernel space context switching? Does it make
> sense to use multithreading patterns for disk input?

It may or may not.  A syscall-heavy workload might degrade the
goroutine scheduling to actually be N×N instead of M×N.  This might not
be a problem in itself (not counting a big number of OS threads
allocated and mostly sleeping), but concurrent access to the same slow
resource such as a rotating medium is almost always a bad idea: say, your
HDD (and the file system on it) might only provide such and such read
bandwidth, so spreading the processing of the data being read across
multiple goroutines is only worth the deal if this processing is so
computationally complex that a single goroutine won't cope with that
full bandwidth.  If one goroutine is OK with keeping up with that full
bandwidth, having two goroutines read that same data will make each deal
with only half the bandwidth, so they will sleep > 50% of the time.
Note that reading two files in parallel off a filesystem located on
the same rotating medium will usually result in lowered total
bandwidth due to the seek times required to jump around the blocks of
different files.

SSDs and other kinds of media might have way better performance
characteristics, so it's worth measuring.

IOW, I'd say that trying to parallelize might be a premature
optimization.  It's worth keeping in mind that goroutines serve two
separate purposes: 1) they allow you to write natural sequential
control flow instead of callback-ridden spaghetti code; 2) they allow
performing tasks truly in parallel--if the hardware supports it
(multiple CPUs and/or cores).

This (2) is tricky because it assumes such goroutines have something to
do; if they instead contend on some shared resource, the parallelization
won't really happen.



[go-nuts] Does it make sense to make expensive syscalls from different goroutines?

2017-03-18 Thread Vitaly Isaev


I would appreciate it if someone could clarify how the Go runtime operates 
under these circumstances:


Assume that the application does some heavy lifting with multiple file 
descriptors (e.g., opening - writing data - syncing - closing), what 
actually happens to the Go runtime? Does it block all the goroutines at the 
time when an expensive syscall occurs (like syscall.Fsync)? Or is only the 
calling goroutine blocked while the others are still operating?


So does it make sense to write programs with multiple workers that do a lot 
of user space - kernel space context switching? Does it make sense to use 
multithreading patterns for disk input?


Minimal example: https://play.golang.org/p/O0omcPBMAJ



Re: [go-nuts] golint if/else stmt and early returns

2017-03-18 Thread mhhcbon
Indeed, thanks!


On Friday, March 17, 2017 at 5:04:30 AM UTC+1, Nigel Tao wrote:
>
> This is tangential, but if we're talking about style, you might be 
> able to simplify this line 
> ret = PropertiesList(ret).Append(PropertiesList(temp)) 
> to be 
> ret = PropertiesList(ret).Append(temp) 
> if the PropertiesList underlying type and the temp variable's type are 
> what I'm guessing they are: []*Properties. The relevant sections of 
> the spec are https://golang.org/ref/spec#Calls and the link to 
> "assignable". 
>
> In a similar fashion, you could probably further simplify to: 
> ret = ret.Append(temp) 
> if you replaced 
> ret := []*Properties{} 
> with either 
> ret := PropertiesList(nil) 
> or 
> var ret PropertiesList 
>



[go-nuts] Re: Guetzli perceptual JPEG encoder for Go

2017-03-18 Thread Val
Thanks Chai!
Do you think this is something we could translate to pure Go, not requiring 
cgo?
I understand this would be a fair amount of work. I did a similar job 
recently (translated some PVRTC stuff from C++ to Go by copy-pasting, then 
fixing everything), and it went pretty well. I may try the same for Guetzli.

Cheers
Val

On Friday, March 17, 2017 at 6:37:43 PM UTC+1, chais...@gmail.com wrote:
>
> https://github.com/chai2010/guetzli-go
> https://github.com/google/guetzli
>
>



[go-nuts] Re: Setting up GOPATH

2017-03-18 Thread Robbie Wright


On Saturday, 18 March 2017 13:49:13 UTC+10:30, samgusa...@gmail.com wrote:
>
> Hello,
> For some reason, no matter what i do, i just can't seem to get the go path 
> to work. 
> My tree(not entirely sure what it is):
> -documents
> - golang
>- src
>- hello1
> hello1 is where i have my code
>
> whenever i try and do go install i get this message. 
>
> go install: no install location for directory 
> /Users/samgreenhill/documents/golang/src/hello1 outside GOPATH
>
> For more details see: 'go help gopath'
>
>
> After looking it up, i tried to customize my .bash_profile and i put this
>
> export GOROOT="/usr/local/go"
>
> export GOPATH="/Users/samgreenhill/documents/golang"
>
> export PATH="/Users/samgreenhill/documents/golang/bin:$PATH"
>
>
> and it still returned the same message
>
> Sams-MBP:hello1 samgreenhill$ go install
>
> go install: no install location for directory 
> /Users/samgreenhill/documents/golang/src/hello1 outside GOPATH
>
> For more details see: 'go help gopath'
>
> Sams-MBP:hello1 samgreenhill$ 
>
>
>
> i am curious what i am doing wrong. 
>
> thank you
>

Do you have directories called pkg, src and bin inside $GOPATH?
 



[go-nuts] Re: Code review/Advice - reading and editing a CSV

2017-03-18 Thread Robbie Wright
Thanks Peter, Cecil and Charles!
I have attached the code as it stands, sorry for not getting back to you 
sooner.

It's still messy but it works, that is, it outputs a column of economic 
block values in IJK order to be input into an open pit optimiser.
The code is for an open source pit optimiser; I am currently trying to port 
C to Go to use the pseudoflow algorithm outlined here (Berkeley.edu).

If you're interested below is a short introduction:

# Introduction

The mining industry is a very relevant economic sector. In Chile, copper 
exports account for about 62.5% of total exports and represent 12% of 
GDP [1]. Mines can be either open-pit or underground, the actual 
decision depending on different economic and technical considerations. 
Open-pit mines are preferred to underground mines because they can reach 
higher production levels and have smaller operational costs. However, 
most of the time it is necessary to remove material with poor or no ore 
content (waste) in order to have access to economically profitable 
material. In order to define what portions of the terrain must be mined at 
different moments during the lifetime of the mine, the planning horizon is 
discretized into time periods (or time slots). In turn, the terrain is 
divided into regular blocks, which are arranged in a 3-dimensional 
array (columns 2,3,4). For each block, estimations of the ore 
content, density and other relevant attributes (...columns) are 
constructed using geostatistical methods. A block model, namely 
the set of all blocks and their attributes, is the main input to the mine 
planning process. My csv is a block model. 

Lerchs and Grossman [2] proposed a very simplified version of the problem 
in which block destinations are fixed in advance, slope constraints are 
considered, but capacity or blending constraints are not. In this case, 
the problem reduces to selecting a subset of blocks such that the 
contained value is maximized while the precedence constraints 
induced by the slope angles are held. This problem is known as the 
ultimate or final pit problem. Lerchs and Grossman presented an 
efficient (polynomial) algorithm for solving the ultimate pit problem, and 
showed that reducing the economic value of any given block makes the 
optimal solution of the ultimate pit problem shrink, in the sense that, 
if the values of the blocks decrease, the new solution is a subset of the 
original one. Therefore, it is possible to produce nested pits and, by 
trial and error, construct block schedules that satisfy other constraints 
like the capacity of the mill. Present-day commercial software, like 
Whittle [3][4], is based on these facts.

Picard [5] showed that the ultimate pit problem is equivalent to the 
maximum closure problem in which, given a directed graph G = (V,A) with a 
weight function w defined over the nodes, one looks for a subset of nodes 
U ⊂ V such that ∑ u ∈ U w(u) is maximal but u ∈ U, (u,v) ∈ A ⇒ v ∈ U. The 
maximum closure problem, in turn, can be reduced to the min cut 
problem (for more details see [27]). Using this fact, Hochbaum [6] 
proposes to attack the ultimate pit problem by means of existing efficient 
algorithms for the min cut problem.

[1]  Boland, N., Dumitrescu, I., Froyland, G., and Gleixner, A. (2009). 
Lp-based disaggregation approaches to solving the open pit mining 
production scheduling problem with block processing selectivity. Computers 
& Operations Research , 36(4):1064– 1089.
[2] Lerchs, H and Grossmann, I F, (1965). Optimum design of open pit mines, 
The Canadian Mining and Metallurgical Bulletin, Vol. 58, January, pp.47-54. 
[3] Alford, C.G. and Whittle, J., (1986). Application of Lerchs– Grossmann 
pit optimization to the design of open pit mines, In Large Open Pit Mining 
Conference, AusIMM–IE Aust Newman Combined Group, 1986, 201–207.
[4] Osanloo, M., Gholamnejad, J., and Karimi, B. (2008). Long-term open pit 
mine production planning:  a review of models and algorithms. International 
Journal of Mining, Reclamation and Environment, 22(1):3–35.
[5] Picard, J. (1976). Maximal closure of a graph and applications to 
combinatorial problems. Management Science, 22(11):pp.1268–1272.
[6] Hochbaum, D. and Chen, A. (2000).  Performance analysis and best 
implementation of old and new algorithms for the open-pit mining problem. 
Operations Research, 48:894–914.







csv4.go
Description: Binary data