Re: [go-nuts] why is my program retaining memory when len([]byte)>16384 ?

2016-07-04 Thread mhhcbon
Hi,

thanks for help!


But the problem might just be that you're writing many more bytes to 
> your encoder than you're reading from the file. 
>

Did you mean I should write it like this? Write(srcBuf[0:lenRead])

nrRead, err := src.Read(bufRead) // read from file
if err != nil {
  break
}
fmt.Printf("read read len: %d\n", nrRead)

nwEnc, err := encoder.Write(bufRead[0:nrRead]) // write to encoder


This means the size of buf2 could be much larger than the original 
> file and also full of junk data from previous reads and zeroed bytes. 
>

One question I have here: isn't buf2 supposed to be drained, at least partially, by the following call to Read?

_, err = buf2.Write(r) // write to decoder
fmt.Println(err)
if decoder == nil {
  decoder = flate.NewReader(buf2)
}
r2 := make([]byte, dReadSize*2)
_, err = decoder.Read(r2) // read from decoder  <- here, it should remove some data from buf2, no?
fmt.Println(err)


Besides that, I totally agree this example code was full of mistakes; I rewrote it:

package main

import (
  "fmt"
  "os"
  "bytes"
  // "encoding/base64"
  "compress/flate"
  "io"
  "time"
)

func main() {
  // okSize := 16384
  // koSize := 64384
  // koSize := 16385 // yes 1 more and it breaks :s

  // change read size to 16384 and everything is ok
  fReadSize := 16385
  dReadSize := 64384 // dread seems not impacting

  src, _ := os.Open("src.avi")
  bufRead := make([]byte, fReadSize)

  bufEnc := new(bytes.Buffer)
  encoder, _ := flate.NewWriter(bufEnc, 9)

  bufDec := new(bytes.Buffer)
  var decoder io.ReadCloser

  for {
    fmt.Println("---")
    nrRead, err := src.Read(bufRead) // read from file
    if err != nil {
      break
    }
    fmt.Printf("read read len: %d\n", nrRead)

    nwEnc, err := encoder.Write(bufRead[0:nrRead]) // write to encoder
    fmt.Println(err)
    fmt.Printf("encode write len: %d\n", nwEnc)
    err = encoder.Flush()
    fmt.Println(err)
    encSlice := bufEnc.Bytes() // read from encoder
    // fmt.Println(encSlice[0:20])
    fmt.Printf("encode read len: %d\n", len(encSlice))
    fmt.Printf("bufEnc len: %d\n", bufEnc.Len())
    fmt.Printf("bufEnc cap: %d\n", bufEnc.Cap())

    bufEnc.Truncate(0)
    // fmt.Println("___")
    // fmt.Println(encSlice[0:20])

    nwDec, err := bufDec.Write(encSlice) // write to decoder
    fmt.Println(err)
    fmt.Printf("decode write len: %d\n", nwDec)
    if decoder == nil {
      decoder = flate.NewReader(bufDec)
    }
    sliceDec := make([]byte, dReadSize*2)
    nrDec, err := decoder.Read(sliceDec) // read from decoder
    fmt.Println(err)
    fmt.Printf("decode read len: %d\n", nrDec)
    fmt.Printf("bufDec len: %d\n", bufDec.Len())
    fmt.Printf("bufDec cap: %d\n", bufDec.Cap())

    time.Sleep(1 * time.Second)
  }
}

Which after a few iterations will yield

read read len: 16385

encode write len: 16385

encode read len: 15150
bufEnc len: 15150
bufEnc cap: 16320

decode write len: 15150

decode read len: 8
bufDec len: 51234
bufDec cap: 140569

So the decoder is definitely holding on to something: bufDec's len and cap keep growing.

The thing now is that if I try to read it twice consecutively, the second read will always yield 0 bytes and give me an UnexpectedEOF error.
Does this error leave the decoder in an unrecoverable state? I'm unsure.

Looks like I can't drain it :x


On Monday, July 4, 2016 at 06:17:51 UTC+2, Jesse McNelis wrote:
>
> On Mon, Jul 4, 2016 at 3:35 AM, mhhcbon  > wrote: 
> > Hi, 
> > 
> > I have this program which reads file, flate encode then flate decode the 
> > data. 
> > 
> > I noticed that when i used different size for the slice of []byte to 
> read 
> > data, the program will retain memory when the size is > 16384. 
> > When its lower than this value everything is fine, but 16385 breaks. 
> > 
> > I don t quite understand the reason of this behavior, can someone help 
> me to 
> > understand what s going on there ? 
>
> I can't see anywhere that this program could be holding on to extra 
> memory it's not using. 
> But the problem might just be that you're writing many more bytes to 
> your encoder than you're reading from the file. 
>
> _, err := src.Read(b); // read from file 
>
> Read() isn't required to fill the whole buffer it's given. It can read 
> a single byte and return. 
> Because you're ignoring the value telling you how many bytes it read 
> you're passing the whole 16385 slice to your encoder even though you 
> might have read much less than 16385 bytes. 
>
> This means the size of buf2 could be much larger than the original 
> file and also full of junk data from previous reads and zeroed bytes. 
>
> Read() is a low level call that you should avoid calling directly 
> because it's tricky to get right. 
>
> For an example of how to properly call a Read() see the implementation 
> of io.Copy() 
> https://golang.org/src/io/io.go?#L366 
>


Re: [go-nuts] why is my program retaining memory when len([]byte)>16384 ?

2016-07-03 Thread Jesse McNelis
On Mon, Jul 4, 2016 at 3:35 AM, mhhcbon  wrote:
> Hi,
>
> I have this program which reads file, flate encode then flate decode the
> data.
>
> I noticed that when i used different size for the slice of []byte to read
> data, the program will retain memory when the size is > 16384.
> When its lower than this value everything is fine, but 16385 breaks.
>
> I don t quite understand the reason of this behavior, can someone help me to
> understand what s going on there ?

I can't see anywhere that this program could be holding on to extra
memory it's not using.
But the problem might just be that you're writing many more bytes to
your encoder than you're reading from the file.

_, err := src.Read(b); // read from file

Read() isn't required to fill the whole buffer it's given. It can read
a single byte and return.
Because you're ignoring the value telling you how many bytes it read
you're passing the whole 16385 slice to your encoder even though you
might have read much less than 16385 bytes.

This means the size of buf2 could be much larger than the original
file and also full of junk data from previous reads and zeroed bytes.

Read() is a low level call that you should avoid calling directly
because it's tricky to get right.

For an example of how to properly call a Read() see the implementation
of io.Copy()
https://golang.org/src/io/io.go?#L366

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [go-nuts] why is my program retaining memory when len([]byte)>16384 ?

2016-07-03 Thread mhhcbon
Sorry for my unclear explanations.

I meant that when I use a len([]byte) <= 16384,
the program can read lots of megabytes and remains
at a stable memory usage of about 5 MB.

When I use higher values, the memory grows and keeps growing;
by higher value I mean anything > 16384.


The slice is valid for use only until the next buffer 
> modification (that is, only until the next call to a method like Read, 
> Write, Reset, or Truncate). The slice aliases the buffer content at 
> least until the next buffer modification, so immediate changes to the 
> slice will affect the result of future reads. 
>

Indeed, if I write some data to buf, I see the changes in r.

r := buf.Bytes() // read from encoder
fmt.Println(r)
buf.Truncate(0)
buf.Write([]byte("aaa"))
fmt.Println("___")
fmt.Println(r)

I thought it was a copy.

But now I don't understand why Truncate(0) does not erase r,
since in https://golang.org/src/bytes/buffer.go?s=2594:2626#L57
there is this

b.buf = b.buf[0 : b.off+n]

Which resolves to b.buf = b.buf[0:0]


:x

On Sunday, July 3, 2016 at 21:08:01 UTC+2, Janne Snabb wrote:
>
> On 2016-07-03 20:35, mhhcbon wrote: 
>
> > r := buf.Bytes() // read from encoder 
> > buf.Truncate(0) 
>
>
> I did not understand your explanation of the problem, but surely there 
> is a bug in the code quoted above. 
>
> Read bytes.Buffer Bytes() function documentation: 
>
> func (b *Buffer) Bytes() []byte 
>
> Bytes returns a slice of length b.Len() holding the unread portion of 
> the buffer. The slice is valid for use only until the next buffer 
> modification (that is, only until the next call to a method like Read, 
> Write, Reset, or Truncate). The slice aliases the buffer content at 
> least until the next buffer modification, so immediate changes to the 
> slice will affect the result of future reads. 
>
>
> Janne Snabb 
> sn...@epipe.com  
>
>



Re: [go-nuts] why is my program retaining memory when len([]byte)>16384 ?

2016-07-03 Thread Janne Snabb
On 2016-07-03 20:35, mhhcbon wrote:

> r := buf.Bytes() // read from encoder
> buf.Truncate(0)


I did not understand your explanation of the problem, but surely there
is a bug in the code quoted above.

Read bytes.Buffer Bytes() function documentation:

func (b *Buffer) Bytes() []byte

Bytes returns a slice of length b.Len() holding the unread portion of
the buffer. The slice is valid for use only until the next buffer
modification (that is, only until the next call to a method like Read,
Write, Reset, or Truncate). The slice aliases the buffer content at
least until the next buffer modification, so immediate changes to the
slice will affect the result of future reads.


Janne Snabb
sn...@epipe.com



[go-nuts] why is my program retaining memory when len([]byte)>16384 ?

2016-07-03 Thread mhhcbon
Hi,

I have this program which reads a file, flate-encodes and then flate-decodes the 
data.

I noticed that when I use different sizes for the []byte slice that reads the 
data, the program retains memory when the size is > 16384.
When it's lower than this value everything is fine, but 16385 breaks.

I don't quite understand the reason for this behavior; can someone help me 
understand what's going on there?

package main

import (
  "fmt"
  "os"
  "bytes"
  // "encoding/base64"
  "compress/flate"
  "io"
)

func main() {
  // okSize := 16384
  // koSize := 64384
  // koSize := 16385 // yes 1 more and it breaks :s

  // change read size to 16384 and everything is ok
  fReadSize := 16385
  dReadSize := 64384 // dread seems not impacting

  src, _ := os.Open("big.file 1gb")
  b := make([]byte, fReadSize)

  buf := new(bytes.Buffer)
  encoder, _ := flate.NewWriter(buf, 9)

  buf2 := new(bytes.Buffer)
  var decoder io.ReadCloser
  for {
    fmt.Println("---")
    _, err := src.Read(b) // read from file
    if err != nil {
      break
    }
    // fmt.Println(b)
    _, err = encoder.Write(b) // write to encoder
    fmt.Println(err)
    err = encoder.Flush()
    fmt.Println(err)
    r := buf.Bytes() // read from encoder
    buf.Truncate(0)
    // fmt.Println(r)
    _, err = buf2.Write(r) // write to decoder
    fmt.Println(err)
    if decoder == nil {
      decoder = flate.NewReader(buf2)
    }
    r2 := make([]byte, dReadSize*2)
    _, err = decoder.Read(r2) // read from decoder
    fmt.Println(err)
  }
}



Does this help?

$ uname -a
Linux pc15-home 4.6.3-300.fc24.x86_64 #1 SMP Fri Jun 24 20:52:41 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux


thanks!



PS: no, I'm not interested in that version

package main

import (
  "os"
  "io"
  "compress/gzip"
)

func main() {
  pr, pw := io.Pipe()
  go func() {
decoder, _ := gzip.NewReader(pr)
io.Copy(os.Stdout, decoder)
  }()
  archiver := gzip.NewWriter(pw)
  defer archiver.Close()
  io.Copy(archiver, os.Stdin)
}
