Re: CVS commit: src/sys/arch/x86/x86
"Kamil Rytarowski" wrote:

> Module Name:	src
> Committed By:	kamil
> Date:		Fri Jun  5 21:48:04 UTC 2020
>
> Modified Files:
> 	src/sys/arch/x86/x86: cpu_rng.c
>
> Log Message:
> Change const unsigned to preprocessor define
>
> Fixes GCC -O0 build with the stack protector.

Surely a gcc bug?  This almost certainly needs an
/* XXX gcc stack protector -O0 bug */ comment and
possibly an entry in doc/HACKS as well otherwise
someone will come along later and de-uglify this
change.

Cheers,
Simon.
Re: CVS commit: src/sys/arch/x86/x86
On 06.06.2020 09:42, Simon Burge wrote:
> "Kamil Rytarowski" wrote:
>
>> Module Name:	src
>> Committed By:	kamil
>> Date:		Fri Jun  5 21:48:04 UTC 2020
>>
>> Modified Files:
>> 	src/sys/arch/x86/x86: cpu_rng.c
>>
>> Log Message:
>> Change const unsigned to preprocessor define
>>
>> Fixes GCC -O0 build with the stack protector.
>
> Surely a gcc bug?  This almost certainly needs an
> /* XXX gcc stack protector -O0 bug */ comment and
> possibly an entry in doc/HACKS as well otherwise
> someone will come along later and de-uglify this
> change.
>
> Cheers,
> Simon.

This is not really a GCC bug, as C const is not constexpr.  It's also
not the only place with such logic and such a workaround.  C++ fixed it
and has real const.
Re: CVS commit: src/sys/arch/x86/x86
On Sat, Jun 06, 2020 at 11:25:19 +0200, Kamil Rytarowski wrote:

> On 06.06.2020 09:42, Simon Burge wrote:
> > "Kamil Rytarowski" wrote:
> >
> >> Module Name:	src
> >> Committed By:	kamil
> >> Date:		Fri Jun  5 21:48:04 UTC 2020
> >>
> >> Modified Files:
> >> 	src/sys/arch/x86/x86: cpu_rng.c
> >>
> >> Log Message:
> >> Change const unsigned to preprocessor define
> >>
> >> Fixes GCC -O0 build with the stack protector.
> >
> > Surely a gcc bug?  This almost certainly needs an
> > /* XXX gcc stack protector -O0 bug */ comment and
> > possibly an entry in doc/HACKS as well otherwise
> > someone will come along later and de-uglify this
> > change.
>
> This is not really a GCC bug, as C const is not constexpr.  It's
> also not the only place with such logic and such a workaround.  C++
> fixed it and has real const.

Doesn't -Wvla help catching these?  Should we enable it?

-uwe
Re: CVS commit: src/sys/arch/x86/x86
In article <20200606135850.ge14...@pony.stderr.spb.ru>,
Valery Ushakov wrote:

>On Sat, Jun 06, 2020 at 11:25:19 +0200, Kamil Rytarowski wrote:
>
>> On 06.06.2020 09:42, Simon Burge wrote:
>> > "Kamil Rytarowski" wrote:
>> >
>> >> Module Name:	src
>> >> Committed By:	kamil
>> >> Date:		Fri Jun  5 21:48:04 UTC 2020
>> >>
>> >> Modified Files:
>> >> 	src/sys/arch/x86/x86: cpu_rng.c
>> >>
>> >> Log Message:
>> >> Change const unsigned to preprocessor define
>> >>
>> >> Fixes GCC -O0 build with the stack protector.
>> >
>> > Surely a gcc bug?  This almost certainly needs an
>> > /* XXX gcc stack protector -O0 bug */ comment and
>> > possibly an entry in doc/HACKS as well otherwise
>> > someone will come along later and de-uglify this
>> > change.
>>
>> This is not really a GCC bug, as C const is not constexpr.  It's
>> also not the only place with such logic and such a workaround.  C++
>> fixed it and has real const.
>
>Doesn't -Wvla help catching these?  Should we enable it?

I think it might catch too much...  But we can try it...

christos
Re: CVS commit: src/lib/libpthread
On Thu, Jun 04, 2020 at 09:28:09PM +0000, Andrew Doran wrote:

> On Thu, Jun 04, 2020 at 06:26:07PM +0000, Martin Husemann wrote:
>
> > On Wed, Jun 03, 2020 at 10:10:24PM +0000, Andrew Doran wrote:
> > > Module Name:	src
> > > Committed By:	ad
> > > Date:		Wed Jun  3 22:10:24 UTC 2020
> > >
> > > Modified Files:
> > > 	src/lib/libpthread: pthread.c pthread_cond.c pthread_mutex.c
> > >
> > > Log Message:
> > > Deal with a couple of problems with threads being awoken early due to
> > > timeouts or cancellation where:
> >
> > Not sure if it is caused by this commit or joergs TSD/malloc, but today
> > most of the libpthread tests time out on aarch64, while everything but
> > a few minor nits was fine on May 30.
>
> I'll try to reproduce it over the weekend.

This works OK for me on aarch64 with libpthread updated in isolation.
I'll try updating more stuff tomorrow.

Andrew

> Andrew
>
> > Martin
> >
> > (this is on a hummingboard pulse 4-core board)
> >
> > lib/libpthread/t_barrier (398/848): 1 test cases
> >     barrier: [300.034098s] Failed: Test case timed out after 300 seconds
> > [300.034568s]
> >
> > lib/libpthread/t_cond (399/848): 9 test cases
> >     bogus_timedwaits: [300.016677s] Failed: Test case timed out after 300 seconds
> >     broadcast: [300.023040s] Failed: Test case timed out after 300 seconds
> >     cond_timedwait_race: [300.034567s] Failed: Test case timed out after 300 seconds
> >     condattr: [0.013756s] Passed.
> >     destroy_after_cancel: [300.025649s] Failed: Test case timed out after 300 seconds
> >     signal_before_unlock: [300.023630s] Failed: Test case timed out after 300 seconds
> >     signal_before_unlock_static_init: [300.022909s] Failed: Test case timed out after 300 seconds
> >     signal_delay_wait: [300.031310s] Failed: Test case timed out after 300 seconds
> >     signal_wait_race: [300.025552s] Failed: Test case timed out after 300 seconds
> > [2400.221956s]
> >
> > lib/libpthread/t_condwait (400/848): 2 test cases
> >     cond_wait_mono: [2.023083s] Passed.
> >     cond_wait_real: [2.017972s] Passed.
> > [4.042078s]
> >
> > lib/libpthread/t_detach (401/848): 1 test cases
> >     pthread_detach: [300.021242s] Failed: Test case timed out after 300 seconds
> > [300.021708s]
> >
> > lib/libpthread/t_equal (402/848): 1 test cases
> >     pthread_equal: [300.027646s] Failed: Test case timed out after 300 seconds
> > [300.028295s]
> >
> > lib/libpthread/t_fork (403/848): 1 test cases
> >     fork: [300.018077s] Failed: Test case timed out after 300 seconds
> > [300.018975s]
> >
> > lib/libpthread/t_fpu (404/848): 1 test cases
> >     fpu: [0.020578s] Passed.
> > [0.021314s]
> >
> > lib/libpthread/t_join (405/848): 1 test cases
> >     pthread_join: [0.021632s] Passed.
> > [0.022411s]
> >
> > lib/libpthread/t_kill (406/848): 1 test cases
> >     simple: [300.033759s] Failed: Test case timed out after 300 seconds
> > [300.034528s]
> >
> > lib/libpthread/t_mutex (407/848): 7 test cases
> >     mutex1: [300.017542s] Failed: Test case timed out after 300 seconds
> >     mutex2: [300.023064s] Failed: Test case timed out after 300 seconds
> >     mutex3: [300.023259s] Failed: Test case timed out after 300 seconds
> >     mutex4: [300.023015s] Failed: Test case timed out after 300 seconds
> >     mutex5: [300.023495s] Failed: Test case timed out after 300 seconds
> >     mutexattr1: [0.013769s] Passed.
> >     mutexattr2: [0.025483s] Passed.
> > [1500.154160s]
> >
> > lib/libpthread/t_name (408/848): 1 test cases
> >     name: [0.014262s] Passed.
> > [0.014951s]
> >
> > ...
Re: CVS commit: src/tests/lib/libc/sys
On Sat, Jun 06, 2020 at 06:11:21PM +0000, Jason R Thorpe wrote:

> Module Name:	src
> Committed By:	thorpej
> Date:		Sat Jun  6 18:11:21 UTC 2020
>
> Modified Files:
> 	src/tests/lib/libc/sys: t_lwp_create.c
>
> Log Message:
> Add a test case to ensure that _lwp_create() fails with the
> expected error code when a bad new-lwp-id pointer is passed.

Was thinking of just this yesterday - thanks!  It's a fragile bit of
code and has been fixed 3x already this year.

Andrew
re: CVS commit: src
"Jason R Thorpe" writes:
> Module Name:	src
> Committed By:	thorpej
> Date:		Sat Jun  6 21:26:00 UTC 2020
>
> Modified Files:
> 	src/lib/libprop: Makefile shlib_version

i wonder, since this adds to the kernel ABI, you should have
also bumped the kernel version, since modules built against this
new version won't run on older kernels now.

.mrg.
Re: CVS commit: src
> On Jun 6, 2020, at 10:51 PM, matthew green wrote:
>
> "Jason R Thorpe" writes:
>> Module Name:	src
>> Committed By:	thorpej
>> Date:		Sat Jun  6 21:26:00 UTC 2020
>>
>> Modified Files:
>> 	src/lib/libprop: Makefile shlib_version
>
> i wonder, since this adds to the kernel ABI, you should have
> also bumped the kernel version, since modules built against this
> new version won't run on older kernels now.

I guess I could bump it... but I have some related kernel changes coming
that will require a bump regardless.  Worth bumping twice?  I guess
version #s are cheap.

-- thorpej