Hi Robbie,
yes, we have seen intermittent errors when requesting sample buffer
memory. However, I'm not sure why this should only happen occasionally.
Perhaps Stefane knows?
As for overflow_force_software, that is untested on Montecito. I
wouldn't worry about it, as you won't be using software overflow on
Monte, just hardware. I'm not sure what the reason for the error is...
Phil
>
> Is this reasonable?
>
> As for overflow_force_software, it fails with the following error messages:
>
> ......
>
> PAPI Error: read(1): short 56 vs. 92592 bytes.
> PAPI Error: unexpected msg type 10.
> PAPI Error: pfm_restart(0): not supported
>
> .....
>
> What's the reason?
>
> Robbie
>
> On 11/3/06, Philip J. Mucci <[EMAIL PROTECTED]> wrote:
>
>
> Hi Robbie,
>
> great catch on this one again. Fixed in CVS.
>
> As for your question below...
> > .inline static int check_multiplex_timeout(hwd_context_t *ctx,
> >                                            unsigned long *timeout)
> > {
> >     int ret, ctx_fd;
> >     pfarg_setdesc_t set;
> >     pfarg_ctx_t newctx;
> >
> >     return (PAPI_OK); // Why return PAPI_OK immediately without
> >                       // doing any processing?
> >     ....
> > }
>
> I really hated to have to build a context and do all the gory bits
> just to check that the timeout is valid to the kernel. None of the
> other platforms do this... but I will get to this eventually.
>
> > PS: When will you get access to a Montecito platform? Current PAPI
> > needs considerable effort to work well with Montecito. I have spent
> > many hours fixing bugs; however, some tests still fail on my
> > platform, such as byte_profile, earprofile, kufrin,
> > overflow3_pthreads, overflow_force_software, sdsc-mpx, sdsc2-mpx
> > and sdsc4-mpx.
>
> As soon as the Gelato folks send me one. ;-) Sorry to say I don't
> know when that is. Christmas maybe?
>
> The only tests in your list I'm truly concerned about are earprofile
> and overflow_force_software.
>
> Can you run them and send me the output?
>
> I had EAR profiling running at one point the last time I had access
> to a Monte. What happens now?
>
> Phil
>
> > Best regards,
> >
> > Robbie
>
_______________________________________________
perfmon mailing list
[email protected]
http://www.hpl.hp.com/hosted/linux/mail-archives/perfmon/