Hey all,

I'm running into a strange issue where threads are not spawning when
launched with std::thread. It seems to work once, and then I try to launch
again using a newly allocated thread pointer (after deleting the old one)
and it hangs.

Minimal example:

#include <cstdio>
#include <thread>

void foo()
{
  printf("Foo alive from tid %lu\n", m5_cpu_id());
  // m5_cpu_id is a pseudo-instruction I added that returns tc->cpuId()
}

int main()
{
  printf("Launching foo 1\n");
  std::thread *myThread = new std::thread(foo /*, ...*/);
  printf("Done Launching foo 1\n");

  printf("Joining foo 1\n");
  myThread->join();
  delete myThread;

  printf("Launching foo 2\n");
  myThread = new std::thread(foo /*, ...*/);
  printf("Done Launching foo 2\n");

  printf("Joining foo 2\n");
  myThread->join(); // <<<<< IT HANGS HERE
  printf("Done Everything!\n");
  delete myThread;
  return 0;
}

______

It works fine with TimingSimpleCPU, but with DerivO3CPU it fails.

Output for DerivO3CPU:
  Launch 1
  Done Launch 1
  I'm alive on tid 1
  Launch 2
  Done Launch 2

And there it hangs.

FYI, I am using apu_se.py, though I've managed to reproduce the bug with the
minimal example above with no GPU code (not even hipcc) involved.

I went back to the original code I found that showed std::thread could be
used here:
https://www.gem5.org/documentation/learning_gem5/part3/running/

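Roughly, the part of that example I'm referring to looks like this (paraphrased
from memory rather than copied verbatim, so the function and variable names may
not match the page exactly):

// Spawn one std::thread per CPU except the last.
std::thread **threads = new std::thread*[cpus];
// NOTE: the -1 is required for this to work in SE mode.
for (int i = 0; i < cpus - 1; i++) {
  threads[i] = new std::thread(array_add, a, b, c, i, cpus, num_values);
}
// Execute the last chunk with this thread context to appease SE mode.
array_add(a, b, c, cpus - 1, cpus, num_values);
// Join the spawned threads.
for (int i = 0; i < cpus - 1; i++) {
  threads[i]->join();
}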

There is a comment there saying the -1 is required for SE mode, and then a
subsequent comment about appeasing SE mode.

What exactly do those comments mean?

I'm going to keep debugging, but any suggestions for debug flags that might be
helpful would be appreciated. (I'm using SyscallAll and am going to look into
some of the syscalls SE mode ignores.)
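
For reference, I'm enabling that flag on the command line in the usual way,
roughly:

gem5.opt --debug-flags=SyscallAll configs/example/apu_se.py <my usual options>

so suggestions can just be extra names to add to that --debug-flags list.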

I'm wondering if calling join() multiple times might be the problem, though I'm
unsure why that would matter at this point.
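
To be clear, each std::thread object above is only joined once; it's just that
join() is called twice over the program's lifetime. If a double-join on the
same object were somehow the issue, my understanding is that a joinable() guard
like this sketch would rule it out (I haven't verified it changes anything
here):

if (myThread->joinable()) {
  myThread->join(); // only join if this thread hasn't already been joined or detached
}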

Thanks!

Dan