[gem5-users] Running Tensorflow application on gem5

2020-08-21 Thread Abhishek Singh via gem5-users
Hello Everyone,


I am trying to run LeNet and AlexNet (TensorFlow code) from this GitHub
repository: https://github.com/iCAS-Lab/IMAC/tree/master/Tensorflow. I have
made a disk image using Step 2, "Using gem5 utils and chroot to create a
disk image".

I am using the Linux 4.8.13 kernel and Ubuntu 16. I am able to run
the benchmark by chroot-ing into the image.

I get the following error when I try to run the application in FS mode of
gem5:
"segfault at d52beb20 ip d52beb20 sp 7ffeab147908 error 14 in
libstdc++.so.6.0.25[7f6c8396a000+179000]"

My command line is "./build/X86/gem5.opt configs/example/fs.py
--disk-image=/home/abs218/image_kernel/ubuntu-16.img
--kernel=/home/abs218/image_kernel/linux-4.8.13/vmlinux"

I am using the gem5 develop branch (commit e63504b) and the x86 ISA.

If anyone has run AlexNet or LeNet using gem5, please let me know how you
did it (source of the benchmark, which gem5 commit, which ISA).

Best regards,

Abhishek

[gem5-users] Re: Full system files Makefile. missing

2020-08-21 Thread Abhishek Singh via gem5-users
Thanks!

Best regards,

Abhishek


On Fri, Aug 21, 2020 at 1:47 PM Chongzhi Zhao  wrote:

> The new method is here:
> https://www.gem5.org/documentation/general_docs/m5ops/
>
> *Chongzhi "Paul" Zhao*
> Doctoral Student in Computer Engineering
> Texas A&M University
> Email: chongzhizhao4 (at) gmail (dot) com
>
>
> On Fri, Aug 21, 2020 at 12:44 PM Abhishek Singh via gem5-users <
> gem5-users@gem5.org> wrote:
>
>> Hello Everyone,
>>
>> I am trying to build a full-system image using Step 2, "Using gem5 utils
>> and chroot to create a disk image", described here:
>> https://www.gem5.org/documentation/general_docs/fullsystem/disks.
>>
>> I cannot locate "Makefile." in util/m5 on gem5_20 or on the develop branch.
>> Is this method no longer supported?
>>
>>
>>
>> Best regards,
>>
>> Abhishek

[gem5-users] Re: SLICC: Main memory overwhelmed by requests?

2020-08-21 Thread Jason Lowe-Power via gem5-users
Hi Theo,

It's possible that if you increase the deadlock timeout your protocol will
"just work". There's an infinite queue between the memory controller
(DRAMCtrl) and the Ruby directory (which sends the memory requests to the
memory controller). We've made some progress to correctly model
backpressure there, but in some circumstances, the queue size can grow very
large (i.e., when the request bandwidth far exceeds the memory's
bandwidth). The fact that your protocol works with more channels (i.e.,
more bandwidth) makes me suspect this is the problem.

As far as debugging... memory requests are (usually) sent from a Ruby
directory back into gem5's "normal" memory system. They are sent via a
special message buffer, always called "requestToMemory". In the
AbstractController::serviceMemoryQueue() function, this message buffer is
checked, and if it is non-empty, a new packet is created and sent across the
directory's (or whichever state machine it is) "memory" port.

On the DRAM side, you can use the "MemAccess" debug flag to see when the
memory is *functionally* accessed and "DRAM" for the DRAM transactions.
Finally, you might want to use the "PacketQueue" debug flag because there
is an (infinite) QueuedPort between the memory and your Ruby controllers.
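
For reference, an invocation along these lines should dump that activity to a
trace file in m5out/ (just a sketch; the fs.py arguments are placeholders for
your own kernel, disk, and protocol options):

  # Enable the MemAccess, DRAM and PacketQueue debug flags and log them to a file
  ./build/X86/gem5.opt \
      --debug-flags=MemAccess,DRAM,PacketQueue \
      --debug-file=mem_trace.out \
      configs/example/fs.py --kernel=/path/to/vmlinux --disk-image=/path/to/disk.img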

Hopefully this helps track down the problem. Let us know if you have more
questions! This is a complicated code path.

Cheers,
Jason


On Fri, Aug 21, 2020 at 8:56 AM tolausso--- via gem5-users <
gem5-users@gem5.org> wrote:

> Hi all,
>
> I am trying to run a Linux kernel in FS mode, with a custom-rolled
> SLICC/Ruby directory-based cache coherence protocol, but it seems like the
> memory controller is dropping some requests in rare circumstances --
> possibly due to it being overwhelmed with requests.
>
> The protocol seems to work fine for a long time but about 90% of the way
> into booting the kernel, around the same time as the "mounting
> filesystems..." message appears, gem5 crashes and reports a deadlock.
> Inspecting the trace, it seems that the deadlock occurs during a period of
> very high main memory traffic; the trace looks something like this:
> > Directory receives DMA read request for Address 1, sends MEMORY_READ to
> memory controller
> > Directory receives DMA read request for Address 2, sends MEMORY_READ to
> memory controller
> > ...
> >  Directory receives DMA read request for Address N, sends MEMORY_READ to
> memory controller
> > Directory receives CPU read request for Address A, sends MEMORY_READ to
> memory controller
>
> After some time, the Directory receives responses for all of the
> DMA-induced requests (Address 1...N). However, it never hears back about
> the MEMORY_READ to Address A, and so eventually gem5 calls it a day and
> reports a deadlock. Address A is distinct from addresses 1..N and its read
> should therefore not be affected by the requests to the other addresses.
>
> I have tried:
> * Using the same kernel with one of the example SLICC protocols
> (MOESI_CMP_directory). No error occurred, so the underlying issue must be
> with my protocol.
> * Upping the memory size to 8192MB (from 512MB) and increasing the number
> of channels to 4 (from 1). Under this configuration the above issue does
> not occur, and the Linux kernel happily finishes booting. This combined
> with the fact that it takes so long for any issues to occur makes me think
> that my protocol is somehow overwhelming the memory controller, causing it
> to drop the request to read Address A. In other words, I am pretty
> confident that the error is not something as simple as forgetting to pop
> the memory queue, for example.
>
> If anyone has any clues as to what might be going on I would very much
> appreciate your comments.
> I was especially wondering about the following:
> * Is it even possible for requests to main memory to fail due to, for
> example, network congestion? If so, is there any way to catch this and retry
> the request?
> * (Noob question): Where in gem5 do the main memory requests "go to"? Is
> there a debugging flag I could use to check whether the main memory
> receives the request?
>
> Best,
> Theo Olausson
> Univ. of Edinburgh

[gem5-users] Re: Full system files Makefile. missing

2020-08-21 Thread Chongzhi Zhao via gem5-users
The new method is here:
https://www.gem5.org/documentation/general_docs/m5ops/
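
In short, the Makefiles in util/m5 were replaced by an SCons build, which is
why "Makefile." is gone. A rough sketch of the new build (the exact target
name per ISA is on the page above; "x86" here is my assumption for an X86
build):

  # Build the m5 utility with SCons instead of the old per-ISA Makefiles
  cd util/m5
  scons build/x86/out/m5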

*Chongzhi "Paul" Zhao*
Doctoral Student in Computer Engineering
Texas A&M University
Email: chongzhizhao4 (at) gmail (dot) com


On Fri, Aug 21, 2020 at 12:44 PM Abhishek Singh via gem5-users <
gem5-users@gem5.org> wrote:

> Hello Everyone,
>
> I am trying to build a full-system image using Step 2, "Using gem5 utils
> and chroot to create a disk image", described here:
> https://www.gem5.org/documentation/general_docs/fullsystem/disks.
>
> I cannot locate "Makefile." in util/m5 on gem5_20 or on the develop branch.
> Is this method no longer supported?
>
>
>
> Best regards,
>
> Abhishek

[gem5-users] Full system files Makefile. missing

2020-08-21 Thread Abhishek Singh via gem5-users
Hello Everyone,

I am trying to build a full-system image using Step 2, "Using gem5 utils and
chroot to create a disk image", described here:
https://www.gem5.org/documentation/general_docs/fullsystem/disks.

I cannot locate "Makefile." in util/m5 on gem5_20 or on the develop branch.
Is this method no longer supported?



Best regards,

Abhishek

[gem5-users] SLICC: Main memory overwhelmed by requests?

2020-08-21 Thread tolausso--- via gem5-users
Hi all,

I am trying to run a Linux kernel in FS mode, with a custom-rolled SLICC/Ruby 
directory-based cache coherence protocol, but it seems like the memory 
controller is dropping some requests in rare circumstances -- possibly due to 
it being overwhelmed with requests.

The protocol seems to work fine for a long time but about 90% of the way into 
booting the kernel, around the same time as the "mounting filesystems..." 
message appears, gem5 crashes and reports a deadlock.
Inspecting the trace, it seems that the deadlock occurs during a period of very 
high main memory traffic; the trace looks something like this:
> Directory receives DMA read request for Address 1, sends MEMORY_READ to 
> memory controller
> Directory receives DMA read request for Address 2, sends MEMORY_READ to 
> memory controller
> ...
>  Directory receives DMA read request for Address N, sends MEMORY_READ to 
> memory controller
> Directory receives CPU read request for Address A, sends MEMORY_READ to 
> memory controller

After some time, the Directory receives responses for all of the DMA-induced 
requests (Address 1...N). However, it never hears back about the MEMORY_READ to 
Address A, and so eventually gem5 calls it a day and reports a deadlock. 
Address A is distinct from addresses 1..N and its read should therefore not be 
affected by the requests to the other addresses.

I have tried:
* Using the same kernel with one of the example SLICC protocols 
(MOESI_CMP_directory). No error occurred, so the underlying issue must be with 
my protocol.
* Upping the memory size to 8192MB (from 512MB) and increasing the number of 
channels to 4 (from 1). Under this configuration the above issue does not 
occur, and the Linux kernel happily finishes booting. This combined with the 
fact that it takes so long for any issues to occur makes me think that my 
protocol is somehow overwhelming the memory controller, causing it to drop the 
request to read Address A. In other words, I am pretty confident that the error 
is not something as simple as forgetting to pop the memory queue, for example.

If anyone has any clues as to what might be going on I would very much 
appreciate your comments.
I was especially wondering about the following:
* Is it even possible for requests to main memory to fail due to, for example,
network congestion? If so, is there any way to catch this and retry the request?
* (Noob question): Where in gem5 do the main memory requests "go to"? Is there 
a debugging flag I could use to check whether the main memory receives the 
request?

Best,
Theo Olausson
Univ. of Edinburgh


[gem5-users] Re: issues in FS mode with TimingSimpleCPU+Multicore

2020-08-21 Thread Chongzhi Zhao via gem5-users
Hi Jaspinder,
What you observed is consistent with this page:
https://www.gem5.org/documentation/benchmark_status/
The common advice is usually that you boot with AtomicSimpleCPU, make a
checkpoint, and then restore the checkpoint with a more detailed CPU model.
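
Roughly, the fs.py workflow looks like the sketch below (reusing the paths
from your command; here I assume the checkpoint is triggered by an
"m5 checkpoint" call inside the guest-side runs.rcS script, though options
such as --checkpoint-at-end also exist):

  # 1) Boot with the atomic CPU and write a checkpoint into ./ckpt
  ./build/X86/gem5.fast configs/example/fs.py --cpu-type=AtomicSimpleCPU \
      --num-cpus=2 --script=/home/user1/simulators/gem5/FS/runs.rcS \
      --kernel=/home/user1/simulators/gem5/FS/binaries/x86-4.8.13.smp \
      --disk-image=/home/user1/simulators/gem5/FS/disks/x86-security.img \
      --checkpoint-dir=ckpt

  # 2) Restore checkpoint 1 with the detailed CPU and caches
  ./build/X86/gem5.fast configs/example/fs.py --cpu-type=TimingSimpleCPU --caches \
      --num-cpus=2 \
      --kernel=/home/user1/simulators/gem5/FS/binaries/x86-4.8.13.smp \
      --disk-image=/home/user1/simulators/gem5/FS/disks/x86-security.img \
      --checkpoint-dir=ckpt -r 1
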
Sincerely,
*Chongzhi "Paul" Zhao*
Doctoral Student in Computer Engineering
Texas A&M University
Email: chongzhizhao4 (at) gmail (dot) com


On Fri, Aug 21, 2020 at 2:23 AM JASPINDER KAUR via gem5-users <
gem5-users@gem5.org> wrote:

>
> Dear All,
>   I am trying to boot gem5 in FS mode for multiple cores. However, I am
> facing a problem as mentioned below:
>
>    1. Atomic CPU - working properly.
>    2. TimingSimpleCPU - Single core - working properly.
>    3. TimingSimpleCPU - multiple cores (2 or 4) - the execution gets stuck
>    after partial booting.
>
> *Please let me know where I am making a mistake. The command I am using
> is:*
>
> >build/X86/gem5.fast configs/example/fs.py --cpu-type=TimingSimpleCPU
> --script=/home/user1/simulators/gem5/FS/runs.rcS --num-cpus=2 --caches
>  --kernel=/home/user1/simulators/gem5/FS/binaries/x86-4.8.13.smp
> --disk-image=/home/user1/simulators/gem5/FS/disks/x86-security.img
>
> *The execution gets stuck after the last line of the following output:*
>
>   x86: Booting SMP configuration:
>  node  #0, CPUs:  #1
> CPU: CPU feature xsave disabled, no CPUID level 0xd
> x86: Booted up 1 node, 2 CPUs
> smpboot: Total of 2 processors activated (31999.69 BogoMIPS)
> devtmpfs: initialized
> clocksource: jiffies: mask: 0x max_cycles: 0x,
> max_idle_ns: 1911260446275000 ns
> NET: Registered protocol family 16
> cpuidle: using governor ladder
> PCI: Using configuration type 1 for base access
> HugeTLB registered 2 MB page size, pre-allocated 0 pages
> ACPI: Interpreter disabled.
> vgaarb: loaded
> SCSI subsystem initialized
> usbcore: registered new interface driver usbfs
> usbcore: registered new interface driver hub
> usbcore: registered new device driver usb
> pps_core: LinuxPPS API ver. 1 registered
> pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <
> giome...@linux.it>
> PTP clock support registered
> PCI: Probing PCI hardware
> PCI host bridge to bus :00
> pci_bus :00: root bus resource [io  0x-0x]
> pci_bus :00: root bus resource [mem 0x-0x]
> pci_bus :00: No busn resource found for root bus, will use [bus 00-ff]
> pci :00:04.0: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
> pci :00:04.0: legacy IDE quirk: reg 0x14: [io  0x03f6]
> pci :00:04.0: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
> pci :00:04.0: legacy IDE quirk: reg 0x1c: [io  0x0376]

[gem5-users] Re: [ARM system] Question about the classic cache system

2020-08-21 Thread chenboya via gem5-users
Hi, Ciro

Thank you for sharing this.
I saw Jason initiated a code review for Tiago's update last month.
So I guess this work will be added to the main repository soon.









[gem5-users] Re: [ARM system] Question about the classic cache system

2020-08-21 Thread Ciro Santilli via gem5-users
I'm not sure about the cache hierarchy issue.

But about Ruby support, I don't think there's any known ARM-specific problem,
and ARM contributors have been specifically pushing Ruby recently; see e.g.
Tiago's CHI announcement: https://www.gem5.org/2020/05/29/flexible-cache.html

From: chenboya via gem5-users 
Sent: Friday, August 21, 2020 10:50 AM
To: gem5-users@gem5.org 
Cc: chenboya 
Subject: [gem5-users] [ARM system] Question about the classic cache system


Hi all,

I'm doing some design-space exploration work using gem5.
I am exploring different cache structures, using ARM cores, the classic
cache system, and PARSEC 3.0 to measure multi-core performance.
My system has a 4-level cache hierarchy, with an L2XBar connecting each level.
I use big.LITTLE clusters, and each cluster has an L2 cache shared by the
cores in that cluster.

I am now running into the following problems with the cache structure:

1. If I connect all the clusters to one L3 cache and then to one L4 cache,
the full-system run is OK.

2. If I connect the big clusters directly to the L4 cache, and the other
clusters to an L3 cache that then connects to the L4 cache, I see the error
below (in system.terminal):

/home/root/parsec-3.0
 [0.831546] Unable to handle kernel paging request at virtual address
aaecd674a901
 [0.831563] Unable to handle kernel paging request at virtual address
aaecde551e41
 [0.831573] Unable to handle kernel paging request at virtual address
aaecde527e41

3. If I connect the big clusters to one L3, the other clusters to another L3,
and both L3 caches to the L4 cache, I get the SAME error.

If I change all cores to the Atomic CPU, the page-fault error does not occur.

All three experiments use the same image (aarch64-ubuntu-trusty-headless.img
with PARSEC added) and an automatically generated DTB file.
At instantiation, all three structures are generated successfully.

So, are there any limits on the classic memory system for ARM? For example,
can it not support more than one L3 cache, or asymmetric hierarchies?
Should I use Ruby instead?

In Andreas Hansson's 2015 slides, he said Ruby has some compatibility problems
for ARM.
FSConfig.py also warns that Ruby on ARM is not working properly yet. Have those
problems been solved now?



[gem5-users] [ARM system] Question about the classic cache system

2020-08-21 Thread chenboya via gem5-users

Hi all,

I'm doing some design-space exploration work using gem5.
I am exploring different cache structures, using ARM cores, the classic
cache system, and PARSEC 3.0 to measure multi-core performance.
My system has a 4-level cache hierarchy, with an L2XBar connecting each level.
I use big.LITTLE clusters, and each cluster has an L2 cache shared by the
cores in that cluster.

I am now running into the following problems with the cache structure:

1. If I connect all the clusters to one L3 cache and then to one L4 cache,
the full-system run is OK.

2. If I connect the big clusters directly to the L4 cache, and the other
clusters to an L3 cache that then connects to the L4 cache, I see the error
below (in system.terminal):

/home/root/parsec-3.0
 [0.831546] Unable to handle kernel paging request at virtual address
aaecd674a901
 [0.831563] Unable to handle kernel paging request at virtual address
aaecde551e41
 [0.831573] Unable to handle kernel paging request at virtual address
aaecde527e41

3. If I connect the big clusters to one L3, the other clusters to another L3,
and both L3 caches to the L4 cache, I get the SAME error.

If I change all cores to the Atomic CPU, the page-fault error does not occur.

All three experiments use the same image (aarch64-ubuntu-trusty-headless.img
with PARSEC added) and an automatically generated DTB file.
At instantiation, all three structures are generated successfully.

So, are there any limits on the classic memory system for ARM? For example,
can it not support more than one L3 cache, or asymmetric hierarchies?
Should I use Ruby instead?

In Andreas Hansson's 2015 slides, he said Ruby has some compatibility problems
for ARM.
FSConfig.py also warns that Ruby on ARM is not working properly yet. Have those
problems been solved now?



[gem5-users] issues in FS mode with TimingSimpleCPU+Multicore

2020-08-21 Thread JASPINDER KAUR via gem5-users
Dear All,
  I am trying to boot gem5 in FS mode for multiple cores. However, I am
facing a problem as mentioned below:

   1. Atomic CPU - working properly.
   2. TimingSimpleCPU - Single core - working properly.
   3. TimingSimpleCPU - multiple cores (2 or 4) - the execution gets stuck
   after partial booting.

*Please let me know where I am making a mistake. The command I am using
is:*

>build/X86/gem5.fast configs/example/fs.py --cpu-type=TimingSimpleCPU
--script=/home/user1/simulators/gem5/FS/runs.rcS --num-cpus=2 --caches
 --kernel=/home/user1/simulators/gem5/FS/binaries/x86-4.8.13.smp
--disk-image=/home/user1/simulators/gem5/FS/disks/x86-security.img

*The execution gets stuck after the last line of the following output:*

  x86: Booting SMP configuration:
 node  #0, CPUs:  #1
CPU: CPU feature xsave disabled, no CPUID level 0xd
x86: Booted up 1 node, 2 CPUs
smpboot: Total of 2 processors activated (31999.69 BogoMIPS)
devtmpfs: initialized
clocksource: jiffies: mask: 0x max_cycles: 0x, max_idle_ns:
1911260446275000 ns
NET: Registered protocol family 16
cpuidle: using governor ladder
PCI: Using configuration type 1 for base access
HugeTLB registered 2 MB page size, pre-allocated 0 pages
ACPI: Interpreter disabled.
vgaarb: loaded
SCSI subsystem initialized
usbcore: registered new interface driver usbfs
usbcore: registered new interface driver hub
usbcore: registered new device driver usb
pps_core: LinuxPPS API ver. 1 registered
pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <
giome...@linux.it>
PTP clock support registered
PCI: Probing PCI hardware
PCI host bridge to bus :00
pci_bus :00: root bus resource [io  0x-0x]
pci_bus :00: root bus resource [mem 0x-0x]
pci_bus :00: No busn resource found for root bus, will use [bus 00-ff]
pci :00:04.0: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
pci :00:04.0: legacy IDE quirk: reg 0x14: [io  0x03f6]
pci :00:04.0: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
pci :00:04.0: legacy IDE quirk: reg 0x1c: [io  0x0376]