The x86 Linux boot test is a component of the '--length long' regression. You need the kernel and disk image that are installed when the regression is run.
command line:

./build/X86/gem5.opt ./tests/gem5/x86-boot-tests/run_exit.py --kernel boot-test/vmlinux-4.19.83 --disk boot-test/base.img --cpu-type atomic --num-cpus 1 --boot-type init

On Sat, Oct 10, 2020 at 10:22 PM Gabe Black via gem5-dev <[email protected]> wrote:

> I'm planning to give this a shot, but is there a simple command line for
> this? It would take a lot of time to set up a system and all that, and it
> would be much nicer to just fire off, for instance, a Linux boot test if we
> already have something lying around.
>
> Gabe
>
> On Wed, Oct 7, 2020 at 10:41 AM Jason Lowe-Power <[email protected]> wrote:
>
>> I think Linux boot is pretty reasonable. Or Linux boot plus some
>> multithreaded tests (PARSEC is available for x86 in gem5-resources). If
>> there isn't much performance impact there, I think that would be strong
>> evidence for little performance impact generally.
>>
>> Cheers,
>> Jason
>>
>> On Mon, Oct 5, 2020 at 3:07 AM Gabe Black via gem5-dev <[email protected]> wrote:
>>
>>> Hey folks. I'm trying out using dynamically allocated arrays to track
>>> source and destination register indices in static and dynamic instructions
>>> rather than fixed-size arrays, and I would like to check what the impact on
>>> performance is. I used to use the twolf SPEC benchmark for that, since it
>>> was fairly quick and easy to run but still ran long enough to get
>>> meaningful results, but do we have something like that now that's maybe
>>> even easier to set up, or easier for other people to run?
>>>
>>> As far as the arrays go, what I'm aiming at is to make it unnecessary to
>>> measure the max number of indices needed, and hence the minimum size of
>>> those arrays, since that centralized global value needs to reflect every
>>> instruction in gem5, and it would be a bit of a pain to coordinate that
>>> across multiple ISAs.
>>>
>>> Allocating those arrays statically as part of the
>>> StaticInst or DynInst classes makes allocation cheaper, since it just makes
>>> the classes a little bigger, and making them dynamic will inevitably
>>> involve secondary allocations to give the vectors (for instance) their
>>> backing store. I'm hopeful it won't be that bad, though, since StaticInsts
>>> are usually reused from a cache and not reallocated, and dynamic
>>> instructions are used in CPUs which already have lots of other, more
>>> substantial overhead.
>>>
>>> Gabe
_______________________________________________
gem5-dev mailing list -- [email protected]
To unsubscribe send an email to [email protected]
