On Tue, Aug 28, 2018 at 02:31:04PM +0300, Alexey Budankov wrote:
> Hi,
> 
> On 28.08.2018 11:57, Jiri Olsa wrote:
> > On Mon, Aug 27, 2018 at 09:16:55PM +0300, Alexey Budankov wrote:
> > 
> > SNIP
> > 
> >> +  int trace_fd = rec->session->data->file.fd;
> >> +  struct aiocb **mmap_aio = rec->evlist->mmap_aio;
> >> +  int mmap_aio_size = 0;
> >> +  off_t off;
> >>  
> >>    if (!evlist)
> >>            return 0;
> >> @@ -528,14 +632,17 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
> >>    if (overwrite && evlist->bkw_mmap_state != BKW_MMAP_DATA_PENDING)
> >>            return 0;
> >>  
> >> +  off = lseek(trace_fd, 0, SEEK_CUR);
> >> +
> >>    for (i = 0; i < evlist->nr_mmaps; i++) {
> >>            struct auxtrace_mmap *mm = &maps[i].auxtrace_mmap;
> >>  
> >>            if (maps[i].base) {
> >> -                  if (perf_mmap__push(&maps[i], rec, record__pushfn) != 0) {
> >> -                          rc = -1;
> >> +                  rc = perf_mmap__push(&maps[i], rec, record__pushfn, &off);
> >> +                  if (rc < 0)
> >>                            goto out;
> >> -                  }
> >> +                  else if (rc > 0)
> >> +                          mmap_aio[mmap_aio_size++] = &maps[i].cblock;
> > 
> > I understand the purpose of mmap_aio array, but I don't see a reason
> > to fill it in every time we call record__mmap_read_evlist
> 
> The loop performs the same number of iterations over the kernel 
> buffers on every call of record__mmap_read_evlist(). The called 
> perf_mmap__push() checks whether the corresponding buffer has data 
> ready to spill, and returns 0 if there is none. So each time we 
> execute the loop we get a different set of buffers to spill, and 
> under these circumstances dynamic filling of mmap_aio looks 
> preferable. Lifetime management of the perf_mmap object and the 
> memory it references is a separate, unrelated matter.

argh.. the perf_mmap__push returns 0 for empty buffer, ok

jirka
