I printed the values:

lut 25 15:46:27 r201 vnet[190022]: clib_time_verify_frequency:222: verify clock: CPS(2100000000.0000000) dtc/dtr(2095001738.5946480)
lut 25 15:46:27 r201 vnet[190022]: clib_time_verify_frequency:222: verify clock: CPS(2090000000.0000004) dtc/dtr(2094997431.9388570)
lut 25 15:46:29 r201 vnet[190022]: clib_time_verify_frequency:222: verify clock: CPS(2100000000.0000000) dtc/dtr(2095000928.3322970)
(all the rest look similar enough)
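
For anyone who wants to reproduce this kind of cross-check outside of vpp, here is a minimal standalone sketch (my own approximation, not the vppinfra code): it compares raw TSC ticks against CLOCK_MONOTONIC over one second, which is essentially what the dtc/dtr ratio in the log lines above measures.

#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <unistd.h>

/* Read the raw time-stamp counter (x86 only). */
static inline uint64_t read_tsc (void)
{
  uint32_t lo, hi;
  __asm__ volatile ("rdtsc" : "=a" (lo), "=d" (hi));
  return ((uint64_t) hi << 32) | lo;
}

/* Reference wall time from the OS clock, in seconds. */
static double ref_time (void)
{
  struct timespec ts;
  clock_gettime (CLOCK_MONOTONIC, &ts);
  return ts.tv_sec + 1e-9 * ts.tv_nsec;
}

int main (void)
{
  uint64_t tc0 = read_tsc ();
  double tr0 = ref_time ();
  sleep (1);
  double dtc = (double) (read_tsc () - tc0); /* delta TSC ticks */
  double dtr = ref_time () - tr0;            /* delta reference seconds */
  printf ("dtc/dtr = %.0f ticks/second\n", dtc / dtr);
  return 0;
}

On the system above one would expect output hovering near the dtc/dtr values in the log, regardless of what the kernel believes the nominal frequency to be.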

I have no clue why this wouldn't come up with a single-socket setup; unfortunately, I can't swap the CPUs out right now.

On 25.02.2018 at 14:43, Konrad Gutkowski <kgutkowski-...@ffs.pl> wrote:

For 6130:
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin intel_pt mba tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke

Ubuntu 17.10, kernel 4.13.0-32-generic

What kind of problems do you have in mind?

On 25.02.2018 at 14:13, Dave Barach <dbar...@cisco.com> wrote:

What does this say?

$ grep constant_tsc /proc/cpuinfo

You'll run into all sorts of issues if the frequency underlying the rdtsc instruction varies all over the roadmap. clib_time_verify_frequency is intended to deal with clock drift, not with speed-step...
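
As an aside, the same information can be checked programmatically: CPUID leaf 0x80000007 advertises the invariant TSC in EDX bit 8, which is roughly what the kernel surfaces as the constant_tsc / nonstop_tsc flags. A minimal sketch, assuming GCC/Clang's <cpuid.h>:

#include <cpuid.h>
#include <stdio.h>

int main (void)
{
  unsigned int eax, ebx, ecx, edx;
  /* CPUID.80000007H:EDX[8] is the invariant-TSC bit; on parts where
     it is clear, the TSC tick rate follows core frequency scaling. */
  if (__get_cpuid (0x80000007, &eax, &ebx, &ecx, &edx))
    printf ("invariant TSC: %s\n", (edx & (1u << 8)) ? "yes" : "no");
  else
    printf ("CPUID leaf 0x80000007 not supported\n");
  return 0;
}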

Which distro are you running?

Unless you really need to run with the iommu on, don't do that. The isolcpus / nohz_full sort of config would be the last things to set up when trying to squeeze maximum performance from a given system.

D.

-----Original Message-----
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of Konrad Gutkowski
Sent: Saturday, February 24, 2018 9:07 AM
To: vpp-dev@lists.fd.io
Subject: Re: [vpp-dev] clib_time_now

Hardware as requested:
Platform: Intel S2600WF
Affected CPUs: 6130, 6130T
Various NICs, all connected to socket 2

Same platform with 5122 and 6134 seems to work fine.

Kernel params for 6130:
intel_iommu=on iommu=pt isolcpus=16-31,48-63 nohz_full=16-31,48-63 rcu_nocbs=16-31,48-63 hugepagesz=1GB hugepages=16

vpp runs on cores 16-19 and 48-51, with the master thread on 31.

If I add "notsc clocksource=hpet" the problem is less pronounced.

But what I'm asking about is the code itself; it seems wrong. The hardware just brings the problem out much more often.

I've added a few prints to see how the values change over time;
c->clocks_per_second, while mostly stable, changes now and again
(2100000000 vs 2090000000 in my case).
This makes "now" jump backward and forward. If the jump happens at a specific moment, like (maybe only?) when vm->barrier_no_close_before is set, vpp will stall for some time or throw an assert.

The code as it stands relies on the clock parameters not changing frequently, or at all, while in fact they can change (and vpp calls clib_time_verify_frequency for exactly that purpose).
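
To make the failure mode concrete, here is a small standalone sketch (my own simplified model, not the actual vppinfra code) of what happens when a re-estimated clocks_per_second is applied retroactively to every tick since program start:

#include <stdio.h>
#include <stdint.h>

int main (void)
{
  /* One hour of uptime worth of TSC ticks at a nominal 2.1 GHz. */
  uint64_t total_ticks = 2100000000ULL * 3600;

  double old_cps = 2100000000.0;  /* estimate before re-verify */
  double new_cps = 2090000000.0;  /* estimate after, as in the log */

  /* "now" modeled as total ticks scaled by seconds-per-clock. */
  double now_before = total_ticks / old_cps;
  double now_after  = total_ticks / new_cps;

  /* Roughly +17 seconds in a single call; the reverse transition
     jumps backward by the same amount. */
  printf ("jump: %+.2f seconds\n", now_after - now_before);
  return 0;
}

A ~0.5% change in the estimate rescales the entire elapsed time, so the size of the jump grows with uptime rather than staying bounded.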

I hope this makes sense.

On 24.02.2018 at 14:19, Dave Barach <dbar...@cisco.com> wrote:

If you don't describe the "specific hardware configuration," we can't
help you.

-----Original Message-----
From: vpp-dev@lists.fd.io [mailto:vpp-dev@lists.fd.io] On Behalf Of
Konrad Gutkowski
Sent: Friday, February 23, 2018 9:10 PM
To: vpp-dev@lists.fd.io
Subject: [vpp-dev] clib_time_now

Hi,

If I'm reading this correctly, clib_time_verify_frequency takes into
consideration only recent ticks, while the result is applied to the
total number of ticks since program start.

From what I saw, under normal circumstances with CPU speed scaling this
will only very rarely lead to the main thread stalling.

With some specific hardware configuration it makes vpp almost inoperable.

Thanks,

--
Konrad Gutkowski