On 9/24/2012 7:56 AM, Charles Steinkuehler wrote:
I believe most or all of what you're wanting to do can be implemented
using the cpuset commands:
https://rt.wiki.kernel.org/index.php/Cpuset_Management_Utility
I didn't see much change in the very limited testing I did with this,
but I did not try ver
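For reference, the cpuset utility from that wiki page can shield a CPU roughly like this. This is a sketch only: the CPU number, priority, and the `./rtapi_app` path are example values, and the commands need root.

```shell
# Create a "shield": reserve CPU 1 and move movable kernel threads off it.
cset shield --cpu 1 --kthread on

# Run the time-critical process inside the shielded cpuset.
cset shield --exec ./rtapi_app

# Tear the shield down when finished.
cset shield --reset
```

The shield is just a pair of cpusets ("system" and "user"); anything started with `--exec` lands in the reserved set, everything else stays on the remaining CPUs.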
On 9/23/2012 3:34 AM, Michael Haberler wrote:
I just stumbled upon an idea (yet untested) to improve latency with RT_PREEMPT
on multicore CPUs like the Atom.
This assumes that rtapi_app in sim mode (plus hal_parport in usermode) will be
the time-critical component.
It borrows from the isolcpus=1 idea for RTAI where CPU1 would be reserved for
<...>
what would neede
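Michael's isolcpus idea could be wired up along these lines. A sketch under assumptions: the process name `rtapi_app`, the CPU number, and the priority are example values, and the commands need root.

```shell
# On the kernel command line (e.g. in the GRUB config), keep the
# scheduler from placing ordinary tasks on CPU 1:
#   isolcpus=1

# After boot, pin the time-critical process onto the reserved CPU...
taskset -cp 1 $(pidof rtapi_app)

# ...and give it a SCHED_FIFO realtime priority.
chrt -f -p 80 $(pidof rtapi_app)
```

With isolcpus in place, only explicitly pinned tasks (and per-CPU kernel threads) ever run on CPU 1, which is the effect the RTAI setup gets for free.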
On 9/15/2012 1:10 PM, Kent A. Reed wrote:
> Absolutely, and this is why I'm spending some time with this board.
> It already works fine for me in a traditional RTAI-based LinuxCNC
> system. I decided to play with it because of the general,
> well-foun
On 9/15/2012 12:22 PM, Jan de Kruyf wrote:
To
Hallo Kent,
Please let it run overnight, my poor old Dell wanted to go to bed sometime
after midnight and gave some horrible results next morning (been sleeping
at the wheel, obviously). Although it looked quite happy for the first hour
or so.
The other thing to watch out for is frequency throttli
Gentle persons:
As a brief summary...
I reset the BIOS on my ASUS AT5NM10-I to nearly default settings,
including enabled Hyperthreading. I did leave audio processing disabled.
I built and ran the OSADL cycletest (hint: first install the libnuma1
and libnuma-dev packages) with essentially the
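The OSADL-style test Kent describes can be reproduced roughly like this. Assumptions flagged: cyclictest normally ships in the Debian `rt-tests` package (or is built from the OSADL/rt-tests sources as Kent did), and the option values mirror the ones OSADL publishes for its latency plots.

```shell
# Dependencies named in the thread, plus the usual cyclictest package.
apt-get install libnuma1 libnuma-dev rt-tests

# One measurement thread per core (-S), SCHED_FIFO priority 90,
# 200 us interval, locked memory (-m), 400-bucket histogram, quiet run.
cyclictest -l 100000000 -m -S -p 90 -i 200 -h 400 -q
```

The histogram output is what OSADL feeds into the latency plots linked elsewhere in this thread; the max column is the number to compare against the 25-40 us figures reported here.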
On 9/14/2012 1:15 AM, Jan de Kruyf wrote:
Unfortunately true for x86 hardware ... and even then, sometimes you
have bad luck with the hardware :-(
Hmm, I am missing an rttop program, like powertop, but different.
:-) ... Lars
2012/9/14 Kent A. Reed:
Thanks, Jan, Lars, Michael, Charles, Ken, and Steve for your thoughts.
I'll have to cogitate a bit on what to do next, but I can answer Steve
off the top of my head. Two pre-built realtime kernels are available
from the Debian Wheezy repository. The amd64 build is for 64-bit AMD and
Intel proce
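The two pre-built realtime kernels Kent mentions can be pulled in with the usual metapackages. The package names below are my recollection of the Wheezy archive, so treat them as an assumption and check `apt-cache search linux-image rt` first.

```shell
# 64-bit AMD/Intel realtime kernel (metapackage tracking the 3.2-rt build).
apt-get install linux-image-rt-amd64

# 32-bit PAE counterpart, if running a 32-bit userland.
apt-get install linux-image-rt-686-pae
```

For what it's worth on Steve's question: the Atom D510 is a 64-bit-capable part, so the amd64 flavor is not automatically a mismatch, but the installed userland has to agree with the kernel architecture.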
On 9/13/2012 11:30 PM, Kent A. Reed wrote:
> "ERROR: Missed scheduling deadline for task 0 [ times]"
> "Now is x.xxx, deadline is x.xxx"
> "Absolute number of pagefaults in realtime context: 1030"
Does the pagecount number increase, or does
Which still doesn't affect the pagefaults ... tricky, this. I will
have a look when I get home today; I don't have systems running the
code here.
/ Lars
2012/9/14 Steve Stallings:
Hi Kent,
I have NO experience with doing this sort of build,
but have to wonder if it is significant that your
target is an Atom D510 cpu, yet you list a kernel
for amd64.
Regards,
Steve Stallings
I did not do it for EMC; I hinted that we should look at doing it.
I have used exactly the code you linked for my rt-code (not EMC)
... basically you mlockall() and then fault all pages you are going
to use, after tuning glibc to never trim the heap.
So basically lock all rt parts in
Michael,
as far as I am aware that is in the code, but obviously somewhere there is
a slip-up.
j.
On Fri, Sep 14, 2012 at 8:15 AM, Michael Haberler wrote:
On 14.09.2012 at 06:30, Kent A. Reed wrote:
> With the RT PREEMPT system, this board gives lousy results even running
> headless. Using latency-test, for some minutes I see the base (25us)
> thread showing a max latency of anywhere from 25us to 40us and the servo
> (1000us) thread 40us to 50u
That graph looks a bit funky.
Perhaps you should try to run hwlat? (module) ... to see if there
are some SMIs or similar; you can also use latency tracing in the
kernel to see where the offender is.
Can you see what rt-priority the system is running with? 110 looks
like a bit over t
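Lars's three checks (rt-priority, hwlat, kernel latency tracing) map onto commands roughly like these. A sketch: the debugfs paths and the module name vary by kernel version, and everything below needs root.

```shell
# Which tasks hold a realtime priority? (cls = scheduling class,
# rtprio = realtime priority; '-' means a normal SCHED_OTHER task)
ps -eo pid,cls,rtprio,comm

# Probe for SMIs/firmware stalls with the hwlat detector module
# on kernels of that era...
modprobe hwlat_detector

# ...or, on newer kernels, with the equivalent ftrace tracer.
echo hwlat > /sys/kernel/debug/tracing/current_tracer
cat /sys/kernel/debug/tracing/tracing_max_latency
```

If hwlat reports gaps, the latency is happening below the OS (SMI/firmware) and no kernel or cpuset tuning will remove it; that distinction is worth settling before chasing scheduler priorities.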
Hallo,
this is the plot from this board with a slightly different cpu, under
reasonable load:
https://www.osadl.org/Latency-plot-of-system-in-rack-4-slot.qa-latencyplot-r4s7.0.html
You might look up there also exactly how they do their tests.
The Pagefault message says that the LCNC memory is not