I changed the default value of "--num-compute-units" to 8 in apu_se.py, but I 
still get the segmentation fault. The trace from the gem5 simulation is below: 
all 8 CUs are initialized successfully, but the failure occurs during the 
instruction-fetch stage. -- Tsung Tai
parser.add_option("-u", "--num-compute-units", type="int", default=8,
                  help="number of GPU compute units")
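As a sanity check that the option is actually picked up, here is a minimal standalone sketch of the same optparse declaration (the rest of the parser setup in apu_se.py is not shown); a value passed on the command line still overrides the raised default:

```python
from optparse import OptionParser

parser = OptionParser()
# Same declaration as in apu_se.py, with the default raised to 8.
parser.add_option("-u", "--num-compute-units", type="int", default=8,
                  help="number of GPU compute units")

# No flag given: the new default of 8 applies.
opts, _ = parser.parse_args([])
print(opts.num_compute_units)  # 8

# An explicit flag still overrides the default.
opts, _ = parser.parse_args(["-u", "32"])
print(opts.num_compute_units)  # 32
```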

51620963000: system.cpu2.CUs0: trueWgSize[0] =  256
51620963000: system.cpu2.CUs0: trueWgSize[1] =  1
51620963000: system.cpu2.CUs0: trueWgSize[2] =  1
51620963000: system.cpu2.CUs0: trueWgSizeTotal =  256
51620963000: system.cpu2.CUs0: Free WF slots =  36, Mapped WFs = 0, VGPR Availability = 0, SGPR Availability = 0
51620963000: system.cpu2.CUs1: trueWgSize[0] =  256
51620963000: system.cpu2.CUs1: trueWgSize[1] =  1
51620963000: system.cpu2.CUs1: trueWgSize[2] =  1
51620963000: system.cpu2.CUs1: trueWgSizeTotal =  256
51620963000: system.cpu2.CUs1: Free WF slots =  36, Mapped WFs = 0, VGPR Availability = 0, SGPR Availability = 0
51620963000: system.cpu2.CUs2: trueWgSize[0] =  256
51620963000: system.cpu2.CUs2: trueWgSize[1] =  1
51620963000: system.cpu2.CUs2: trueWgSize[2] =  1
51620963000: system.cpu2.CUs2: trueWgSizeTotal =  256
51620963000: system.cpu2.CUs2: Free WF slots =  36, Mapped WFs = 0, VGPR Availability = 0, SGPR Availability = 0
51620963000: system.cpu2.CUs3: trueWgSize[0] =  256
51620963000: system.cpu2.CUs3: trueWgSize[1] =  1
51620963000: system.cpu2.CUs3: trueWgSize[2] =  1
51620963000: system.cpu2.CUs3: trueWgSizeTotal =  256
51620963000: system.cpu2.CUs3: Free WF slots =  36, Mapped WFs = 0, VGPR Availability = 0, SGPR Availability = 0
51620963000: system.cpu2.CUs4: trueWgSize[0] =  256
51620963000: system.cpu2.CUs4: trueWgSize[1] =  1
51620963000: system.cpu2.CUs4: trueWgSize[2] =  1
51620963000: system.cpu2.CUs4: trueWgSizeTotal =  256
51620963000: system.cpu2.CUs4: Free WF slots =  36, Mapped WFs = 0, VGPR Availability = 0, SGPR Availability = 0
51620963000: system.cpu2.CUs5: trueWgSize[0] =  256
51620963000: system.cpu2.CUs5: trueWgSize[1] =  1
51620963000: system.cpu2.CUs5: trueWgSize[2] =  1
51620963000: system.cpu2.CUs5: trueWgSizeTotal =  256
51620963000: system.cpu2.CUs5: Free WF slots =  36, Mapped WFs = 0, VGPR Availability = 0, SGPR Availability = 0
51620963000: system.cpu2.CUs6: trueWgSize[0] =  256
51620963000: system.cpu2.CUs6: trueWgSize[1] =  1
51620963000: system.cpu2.CUs6: trueWgSize[2] =  1
51620963000: system.cpu2.CUs6: trueWgSizeTotal =  256
51620963000: system.cpu2.CUs6: Free WF slots =  36, Mapped WFs = 0, VGPR Availability = 0, SGPR Availability = 0
51620963000: system.cpu2.CUs7: trueWgSize[0] =  256
51620963000: system.cpu2.CUs7: trueWgSize[1] =  1
51620963000: system.cpu2.CUs7: trueWgSize[2] =  1
51620963000: system.cpu2.CUs7: trueWgSizeTotal =  256
51620963000: system.cpu2.CUs7: Free WF slots =  36, Mapped WFs = 0, VGPR Availability = 0, SGPR Availability = 0
51620963000: system.cpu0.workload.drivers.device.dispatcher: kernel 0 failed to launch
51620963000: system.cpu0.workload.drivers.device.dispatcher: Returning 0 Kernels
51620964000: global: WF[0][0]: Id28 reserved fetch buffer entry for PC = 0x7ffeebde9100
51620964000: global: CU7: WF[0][0]: Id28: Initiate fetch from pc: 140732855652608 0x7ffeebde9100
51620964000: global: CU7: WF[0][0]: Initiating fetch translation: 0x7ffeebde9100
gem5 has encountered a segmentation fault!
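One plausible reading of the trace above: each CU reports 36 free wavefront slots but zero VGPR/SGPR availability, so the dispatcher cannot place the 256-work-item workgroup on any CU and reports "kernel 0 failed to launch"; the fetch that then segfaults targets what looks like a host stack address (0x7ffeebde9100). A schematic version of such a dispatch-feasibility check (simplified pseudologic, not gem5's actual code; the wavefront size of 64 is an assumption):

```python
def can_dispatch_wg(free_wf_slots, vgpr_avail, sgpr_avail,
                    wfs_needed, vgprs_needed, sgprs_needed):
    """Schematic workgroup-dispatch check: every resource must suffice."""
    return (free_wf_slots >= wfs_needed
            and vgpr_avail >= vgprs_needed
            and sgpr_avail >= sgprs_needed)

# Trace values: 36 free slots, but 0 VGPRs and 0 SGPRs reported available.
# A 256-work-item workgroup needs 4 wavefronts at a wavefront size of 64,
# plus at least some registers per wavefront, so dispatch fails on every CU.
print(can_dispatch_wg(36, 0, 0, wfs_needed=4,
                      vgprs_needed=1, sgprs_needed=1))  # False
```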
________________________________
From: Tsungtai Yeh
Sent: Friday, July 27, 2018 7:20:55 AM
To: [email protected]
Cc: [email protected]
Subject: [gem5-dev] How to increase the default CU counts in GCN_X86 gem5-apu?


I tried to increase the CU count to 32 in the gem5-apu, but changing only the 
"--num-compute-units" default (4) was not enough. Do you know which other 
parameters I also need to change when increasing the CU count? Thank you.
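For reference, in some apu_se.py configurations other resources (for example, instruction caches shared across groups of CUs) are sized from the CU count, so they may need to scale together. A hypothetical invocation for overriding the count at the command line instead of editing the default; all paths and the benchmark binary are placeholders for the local setup:

```shell
# Hypothetical example: pass the CU count as a flag rather than editing
# the default in apu_se.py; substitute your own build path and benchmark.
build/GCN3_X86/gem5.opt configs/example/apu_se.py \
    --num-compute-units=32 \
    -c <benchmark-binary>
```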
_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev
