Re: x86_64 BSP : Status Update [ticket #2898]

2021-03-29 Thread Amaan Cheval
I linked the wrong GSoC status update of mine there - here's the final
report that you may find useful:
https://blog.whatthedude.com/post/gsoc-final/#future-to-do

On Mon, Mar 29, 2021 at 3:38 PM Amaan Cheval  wrote:
>
> Hey Shashvat!
>
> I've CC'd Chris who may have something to add given that the original
> ticket seems to have an update from John Millard - not sure if John's
> made progress since my work on the x86-64 BSP was upstreamed, so I'll
> let Chris speak to that.
>
> I wouldn't recommend running it on real hardware yet - I don't think
> anyone has tested it on hardware.
> Not all tests in the testsuite pass in QEMU either, from what I
> remember (some basic ones do), so that will likely be what you'll need
> to work on.
>
> To run the BSP in QEMU, you'll need to follow these instructions:
> https://docs.rtems.org/branches/master/user/bsps/bsps-x86_64.html
>
> Let me know if you run into any issues, since the setup can be a bit
> complicated. In summary, for the setup, you'll want to:
>
> - Build RTEMS/RSB with x86-64 as the BSP (this should be the same as
> what you did for your GSoC proof in terms of building the BSP and
> samples/tests)
> - Get QEMU
> - Build OVMF's open-source UEFI firmware
> - Get FreeBSD booting in QEMU with UEFI, and then replace its
> `kernel` with a built RTEMS application (such as the ticker tests or
> hello.exe, etc.)
> - Run FreeBSD image with RTEMS app as its kernel
>
> We need to do this because for the x86-64 BSP, we use FreeBSD's
> bootloader. This is slightly problematic, because FreeBSD's bootloader
> only supports UFS/ZFS for filesystems.
> I think ideally, we'll want a UEFI-compatible bootloader which can
> support more filesystems - FreeBSD's bootloader is functional, but
> perhaps not the best for a dev/prod environment long-term - maybe
> Joel/Chris can comment on this.
> (For example, most Linux systems can't mount UFS/ZFS unless specifically
> compiled for that support, which means the dev-environment is quite
> hacky and slow - I had to use the network to get my RTEMS apps into
> the FreeBSD filesystem for the bootloader to use it.)
>
> Once the bootloader workflow is smoother (so we don't need to replace
> FreeBSD's kernel every time we want to recompile our RTEMS app and
> re-run it), the next aim will probably be to make as many tests pass
> as possible, and to improve automated testing, such as adding a
> configuration for rtems-test[1].
> I recall there being some edge-cases in the clock driver, so you'll
> likely have the failing tests to guide which drivers you need to work
> on in the BSP.
>
> If there's still time after that, I think we can figure out which
> specific portions need to be worked on (i.e. running on hardware,
> improving existing drivers, adding libbsd support, SMP support, etc.).
>
> In case you haven't seen this already, this is my blog post from my
> GSoC on the x86-64 BSP, summarizing the status as of then, as well as
> potential areas for improvement next:
> https://blog.whatthedude.com/post/gsoc-phase-2-status/#upcoming
>
> [1] https://docs.rtems.org/branches/master/user/tools/tester.html
>
> On Mon, Mar 29, 2021, 12:58 PM Shashvat  wrote:
> >
> > Hello everyone !
> >
> > I wanted to know the status of the x86_64 BSP's development.
> > Also, it would be a great help if someone could guide me on getting it running
> > on QEMU or on my x64-based laptop running legacy BIOS (not UEFI).
> >
> >
> > Regards
> > Shashvat
> >
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


Re: x86_64 BSP : Status Update [ticket #2898]

2021-03-29 Thread Amaan Cheval
Hey Shashvat!

I've CC'd Chris who may have something to add given that the original
ticket seems to have an update from John Millard - not sure if John's
made progress since my work on the x86-64 BSP was upstreamed, so I'll
let Chris speak to that.

I wouldn't recommend running it on real hardware yet - I don't think
anyone has tested it on hardware.
Not all tests in the testsuite pass in QEMU either, from what I
remember (some basic ones do), so that will likely be what you'll need
to work on.

To run the BSP in QEMU, you'll need to follow these instructions:
https://docs.rtems.org/branches/master/user/bsps/bsps-x86_64.html

Let me know if you run into any issues, since the setup can be a bit
complicated. In summary, for the setup, you'll want to:

- Build RTEMS/RSB with x86-64 as the BSP (this should be the same as
what you did for your GSoC proof in terms of building the BSP and
samples/tests)
- Get QEMU
- Build OVMF's open-source UEFI firmware
- Get FreeBSD booting in QEMU with UEFI, and then replace its
`kernel` with a built RTEMS application (such as the ticker tests or
hello.exe, etc.)
- Run FreeBSD image with RTEMS app as its kernel
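For reference, the final QEMU invocation from those docs boils down to something like this (the OVMF and disk image paths are placeholders for wherever you built or downloaded them):

```shell
# Boot the FreeBSD disk image under OVMF's UEFI firmware; the image's
# /boot/kernel/kernel has been replaced with an RTEMS executable.
# OVMF.fd and freebsd.img are placeholder paths - adjust to your setup.
qemu-system-x86_64 \
  -m 1024 \
  -bios OVMF.fd \
  -drive format=raw,file=freebsd.img \
  -serial stdio \
  -display none
```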

We need to do this because for the x86-64 BSP, we use FreeBSD's
bootloader. This is slightly problematic, because FreeBSD's bootloader
only supports UFS/ZFS for filesystems.
I think ideally, we'll want a UEFI-compatible bootloader which can
support more filesystems - FreeBSD's bootloader is functional, but
perhaps not the best for a dev/prod environment long-term - maybe
Joel/Chris can comment on this.
(For example, most Linux systems can't mount UFS/ZFS unless specifically
compiled for that support, which means the dev-environment is quite
hacky and slow - I had to use the network to get my RTEMS apps into
the FreeBSD filesystem for the bootloader to use it.)
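For what it's worth, the network workaround I used was shaped roughly like the following (the port, file names, and paths are hypothetical; 10.0.2.2 is QEMU's usual user-mode-networking alias for the host):

```shell
# On the host: serve the freshly built RTEMS executable over HTTP.
python3 -m http.server 8000 --directory build/

# Inside the FreeBSD guest: fetch it and install it as the "kernel"
# that FreeBSD's bootloader will load on the next boot.
fetch http://10.0.2.2:8000/hello.exe
cp hello.exe /boot/kernel/kernel
```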

Once the bootloader workflow is smoother (so we don't need to replace
FreeBSD's kernel every time we want to recompile our RTEMS app and
re-run it), the next aim will probably be to make as many tests pass
as possible, and to improve automated testing, such as adding a
configuration for rtems-test[1].
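(For context, once such a tester configuration exists, a testsuite run would presumably look something like this - the BSP name and paths here are hypothetical:)

```shell
# Hypothetical rtems-test invocation; an amd64 tester configuration
# does not exist yet - adding one is part of the proposed work.
rtems-test \
  --rtems-bsp=amd64 \
  --rtems-tools=$HOME/development/rtems/6 \
  build/x86_64-rtems6/amd64/testsuites/samples
```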
I recall there being some edge-cases in the clock driver, so you'll
likely have the failing tests to guide which drivers you need to work
on in the BSP.

If there's still time after that, I think we can figure out which
specific portions need to be worked on (i.e. running on hardware,
improving existing drivers, adding libbsd support, SMP support, etc.).

In case you haven't seen this already, this is my blog post from my
GSoC on the x86-64 BSP, summarizing the status as of then, as well as
potential areas for improvement next:
https://blog.whatthedude.com/post/gsoc-phase-2-status/#upcoming

[1] https://docs.rtems.org/branches/master/user/tools/tester.html

On Mon, Mar 29, 2021, 12:58 PM Shashvat  wrote:
>
> Hello everyone !
>
> I wanted to know the status of the x86_64 BSP's development.
> Also, it would be a great help if someone could guide me on getting it running
> on QEMU or on my x64-based laptop running legacy BIOS (not UEFI).
>
>
> Regards
> Shashvat
>
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


Re: Current master fails make

2021-03-15 Thread Amaan Cheval
Hey Richi,

You probably need to upgrade and rebuild your toolchain using RSB to
support these new ftw.h related tests.

See this relevant mail from the list announcing this:
https://lists.rtems.org/pipermail/users/2021-March/068264.html
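Rebuilding the tools with RSB generally takes this shape (the prefix and the `6/rtems-sparc` buildset are examples - substitute your architecture and install prefix):

```shell
# Example RSB tool rebuild from a rtems-source-builder checkout.
cd rtems-source-builder/rtems
../source-builder/sb-set-builder \
  --prefix=$HOME/development/rtems/6 \
  6/rtems-sparc
```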

Hope that helps!

On Mon, Mar 15, 2021, 12:02 PM Richi Dubey  wrote:

> Hi,
>
> When I run make, I get this error
> /home/richi/quick-start/LatestStrong/src/rtems/c/src/../../testsuites/psxtests/psxftw01/init.c:44:10:
> fatal error: ftw.h: No such file or directory
>44 | #include <ftw.h>
>
> I think it's because of this commit:
> https://git.rtems.org/rtems/commit/testsuites/psxtests/psxftw01/init.c?id=a26a326e55922f3d52639d361e49c3ed0c3834c0
>
> Please let me know how to resolve this.
> ___
> devel mailing list
> devel@rtems.org
> http://lists.rtems.org/mailman/listinfo/devel
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

Re: [PATCH v2] Add Amaan to MAINTAINERS

2020-02-21 Thread Amaan Cheval
Done!
https://git.rtems.org/rtems/commit/?id=486829b2766119275f74e7a2a11d7bf3a9561f54

On Fri, Feb 21, 2020 at 10:35 PM Gedare Bloom  wrote:

> This second one looks good, please push!
>
> On Thu, Feb 20, 2020 at 9:55 PM Amaan Cheval  wrote:
> >
> > ---
> >  MAINTAINERS | 2 ++
> >  1 file changed, 2 insertions(+)
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index f00f2753f2..437b55418b 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -51,6 +51,7 @@ Ben Gras   b...@rtems.org
> >  Pavel Pisa pp...@pikron.com
> >  Christian Mauderer christian.maude...@embedded-brains.de
> >  Hesham Almataryheshamelmat...@gmail.com
> > +Amaan Cheval   am...@rtems.org
> >
> >  Localized Write Permission
> >  ==
> > @@ -58,4 +59,5 @@ sparc  Daniel Hellstrom (dan...@gaisler.com)
> >  beagle Ben Gras (b...@rtems.org)
> >  tms570 Pavel Pisa (p...@cmp.felk.cvut.cz)
> >  raspberrypiPavel Pisa (p...@cmp.felk.cvut.cz)
> > +x86_64 Amaan Cheval (am...@rtems.org)
> >
> > --
> > 2.23.0
> >
> > ___
> > devel mailing list
> > devel@rtems.org
> > http://lists.rtems.org/mailman/listinfo/devel
>
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

[PATCH v2] Add Amaan to MAINTAINERS

2020-02-20 Thread Amaan Cheval
---
 MAINTAINERS | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index f00f2753f2..437b55418b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -51,6 +51,7 @@ Ben Gras   b...@rtems.org
 Pavel Pisa pp...@pikron.com
 Christian Mauderer christian.maude...@embedded-brains.de
 Hesham Almataryheshamelmat...@gmail.com
+Amaan Cheval   am...@rtems.org
 
 Localized Write Permission
 ==
@@ -58,4 +59,5 @@ sparc  Daniel Hellstrom (dan...@gaisler.com)
 beagle Ben Gras (b...@rtems.org)
 tms570 Pavel Pisa (p...@cmp.felk.cvut.cz)
 raspberrypiPavel Pisa (p...@cmp.felk.cvut.cz)
+x86_64 Amaan Cheval (am...@rtems.org)
 
-- 
2.23.0

___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


Re: [PATCH] Add Amaan to MAINTAINERS

2020-02-20 Thread Amaan Cheval
Made this patch before fetching/rebasing master, so it won't apply cleanly.
v2 after rebasing is on its way.

On Fri, Feb 21, 2020 at 10:15 AM Amaan Cheval  wrote:

> ---
>  MAINTAINERS | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 2732d773c4..99f473efe0 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -51,6 +51,7 @@ Ben Gras   b...@rtems.org
>  Martin Galvan  martin.gal...@tallertechnologies.com
>  Pavel Pisa pp...@pikron.com
>  Christian Mauderer christian.maude...@embedded-brains.de
> +Amaan Cheval   am...@rtems.org
>
>  Localized Write Permission
>  ==
> @@ -58,4 +59,5 @@ sparc  Daniel Hellstrom (dan...@gaisler.com)
>  beagle Ben Gras (b...@rtems.org)
>  tms570 Pavel Pisa (p...@cmp.felk.cvut.cz)
>  raspberrypiPavel Pisa (p...@cmp.felk.cvut.cz)
> +x86_64 Amaan Cheval (am...@rtems.org)
>
> --
> 2.23.0
>
>
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

[PATCH] Add Amaan to MAINTAINERS

2020-02-20 Thread Amaan Cheval
---
 MAINTAINERS | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 2732d773c4..99f473efe0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -51,6 +51,7 @@ Ben Gras   b...@rtems.org
 Martin Galvan  martin.gal...@tallertechnologies.com
 Pavel Pisa pp...@pikron.com
 Christian Mauderer christian.maude...@embedded-brains.de
+Amaan Cheval   am...@rtems.org
 
 Localized Write Permission
 ==
@@ -58,4 +59,5 @@ sparc  Daniel Hellstrom (dan...@gaisler.com)
 beagle Ben Gras (b...@rtems.org)
 tms570 Pavel Pisa (p...@cmp.felk.cvut.cz)
 raspberrypiPavel Pisa (p...@cmp.felk.cvut.cz)
+x86_64 Amaan Cheval (am...@rtems.org)
 
-- 
2.23.0

___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


Re: GSOC: Call for Mentors

2020-02-17 Thread Amaan Cheval
Hey! I'd like to co-mentor a project too. Thanks!

On Tue, Feb 18, 2020 at 8:56 AM Vijay Kumar Banerjee <vijaykumar9...@gmail.com> wrote:

> Hi Gedare,
>
> Please add me to the list of co-mentors.
> Looking forward to a great experience. :)
>
> Best regards,
> Vijay
>
>
> On Tue, Feb 18, 2020, 1:28 AM Gedare Bloom  wrote:
>
>> Assuming we get accepted on 2/20, I will be able to start adding mentors
>> shortly after. Please let me know if you're interested to mentor this
>> summer, either in a primary or co-mentor capacity. As usual, I will provide
>> "backup" mentoring and high-level organization of all projects with weekly
>> contact to each student. Primary mentors are responsible for multiple
>> interactions per week (daily is best) with their student and providing
>> strong technical guidance. Co-mentors should stay in the loop, and possibly
>> step in if the primary mentor needs a break or other emergency comes up.
>>
>> Gedare
>> ___
>> devel mailing list
>> devel@rtems.org
>> http://lists.rtems.org/mailman/listinfo/devel
>
> ___
> devel mailing list
> devel@rtems.org
> http://lists.rtems.org/mailman/listinfo/devel
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

Re: x86_64 gcc missing crti/n

2019-03-14 Thread Amaan Cheval
Quick note: I'm not at the computer, but I haven't tested with the RTEMS 6
buildsets - I can do that tomorrow too.

On Thu, Mar 14, 2019, 10:52 PM Amaan Cheval  wrote:

> Hey!
>
> It seems like the RSB patch for crt[i/n] was mistakenly taken out
> along with other stuff:
>
> https://git.rtems.org/rtems-source-builder/commit/?id=258129e140a2f0c7f579492bda2a86c6c1b93080
>
> I'm on a new computer where I'll have to compile git from source to
> use `send-email` to send the patch, so I've just attached it here -
> hope that's okay! (I can send it properly tomorrow if we want the
> community to review it.)
>
> I've tested that with this patch, the RTEMS kernel does successfully build.
>
>
> On Thu, Mar 14, 2019 at 8:52 PM Joel Sherrill  wrote:
> >
> > Hi
> >
> > Sebastian mentioned that x86_64 gcc is missing crti/n. I looked at the
> > GCC master and the code is in libgcc/config.host. I am assuming it isn't
> > in gcc 7.4.0 and the RSB is missing a patch.
> >
> > x86_64-*-elf* | x86_64-*-rtems*)
> > tmake_file="$tmake_file i386/t-crtstuff t-crtstuff-pic t-libgcc-pic"
> > case ${host} in
> >   x86_64-*-rtems*)
> > extra_parts="$extra_parts crti.o crtn.o"
> > ;;
> > esac
> > ;;
> >
> > Thanks to git blame, the code is from Amaan and the github URL
> > for the patch is:
> >
> > https://github.com/gcc-mirror/gcc/commit/ab55f7db3694293e4799d58f7e1a556c0eae863a
> >
> > Amaan.. care to prepare an RSB patch so crti/n is included in the tools from the
> > RSB. Please and thank you.
> >
> > --joel
> > ___
> > devel mailing list
> > devel@rtems.org
> > http://lists.rtems.org/mailman/listinfo/devel
>
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

Re: x86_64 gcc missing crti/n

2019-03-14 Thread Amaan Cheval
Hey!

It seems like the RSB patch for crt[i/n] was mistakenly taken out
along with other stuff:
https://git.rtems.org/rtems-source-builder/commit/?id=258129e140a2f0c7f579492bda2a86c6c1b93080

I'm on a new computer where I'll have to compile git from source to
use `send-email` to send the patch, so I've just attached it here -
hope that's okay! (I can send it properly tomorrow if we want the
community to review it.)

I've tested that with this patch, the RTEMS kernel does successfully build.


On Thu, Mar 14, 2019 at 8:52 PM Joel Sherrill  wrote:
>
> Hi
>
> Sebastian mentioned that x86_64 gcc is missing crti/n. I looked at the
> GCC master and the code is in libgcc/config.host. I am assuming it isn't
> in gcc 7.4.0 and the RSB is missing a patch.
>
> x86_64-*-elf* | x86_64-*-rtems*)
> tmake_file="$tmake_file i386/t-crtstuff t-crtstuff-pic t-libgcc-pic"
> case ${host} in
>   x86_64-*-rtems*)
> extra_parts="$extra_parts crti.o crtn.o"
> ;;
> esac
> ;;
>
> Thanks to git blame, the code is from Amaan and the github URL
> for the patch is:
>
> https://github.com/gcc-mirror/gcc/commit/ab55f7db3694293e4799d58f7e1a556c0eae863a
>
> Amaan.. care to prepare an RSB patch so crti/n is included in the tools from the
> RSB. Please and thank you.
>
> --joel
> ___
> devel mailing list
> devel@rtems.org
> http://lists.rtems.org/mailman/listinfo/devel
From edd8d1090eaf35a56346af8130b1d1a73fcb2d56 Mon Sep 17 00:00:00 2001
From: Amaan Cheval 
Date: Thu, 14 Mar 2019 22:26:48 +0530
Subject: [PATCH] 5: Fix x86_64 GCC build to include crt[in].o

This was mistakenly removed in commit 258129e, but is still required.
---
 rtems/config/5/rtems-x86_64.bset | 4 
 1 file changed, 4 insertions(+)

diff --git a/rtems/config/5/rtems-x86_64.bset b/rtems/config/5/rtems-x86_64.bset
index 2d6ff5f..fef043d 100644
--- a/rtems/config/5/rtems-x86_64.bset
+++ b/rtems/config/5/rtems-x86_64.bset
@@ -3,3 +3,7 @@
 %define with_libgomp
 
 %include 5/rtems-default.bset
+
+# Have gcc build crti.o and crtn.o
+%patch add gcc --rsb-file=gcc-f8fd78279d353f6959e75ac25571c1b7b2dec110.patch https://gcc.gnu.org/git/?p=gcc.git;a=blobdiff_plain;f=libgcc/config.host;h=f8fd78279d353f6959e75ac25571c1b7b2dec110;hp=11b4acaff55e00ee6bd3c182e9da5dc597ac57c4;hb=ab55f7db3694293e4799d58f7e1a556c0eae863a;hpb=344c180cca810c50f38fd545bb9a102fb39306b7
+%hash sha512 gcc-f8fd78279d353f6959e75ac25571c1b7b2dec110.patch aef76f9d45a53096a021521375fc302a907f78545cc57683a7a00ec61608b8818115720f605a6b1746f479c8568963b380138520e259cbb9e8951882c2f1567f
-- 
2.21.0

___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

Re: Query Regarding Old Projects, for GSoC 2019

2019-03-06 Thread Amaan Cheval
I'm not sure if the project is open, but if it is, I'd be willing to
co-mentor.

On Wed, Mar 6, 2019, 2:45 PM Vaibhav Gupta  wrote:

> I was exploring for more open projects and found the following one.
>
> - Port V8 Javascript Engine :
> https://devel.rtems.org/wiki/Developer/Projects/Open/V8
>
> Not much information is given about it and even the above link was
> modified in 2015. I want to know if the project is open for GSOC 2019?
> If it is open, then It would be great to know if someone would like to
> mentor it. I would like to discuss further on it.
> ___
> devel mailing list
> devel@rtems.org
> http://lists.rtems.org/mailman/listinfo/devel
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

Re: git.rtems.org LetsEncrypt TLS certificate expired

2018-08-24 Thread Amaan Cheval
Hi!

Seems like rtems.org now has this issue.
On Thu, Aug 16, 2018 at 6:09 AM Chris Johns  wrote:
>
> On 15/8/18 11:19 pm, Amaan Cheval wrote:
> > FYI: https://docs.rtems.org has an expired certificate too.
> >
> > On Wed, Aug 15, 2018 at 3:30 PM, Amaan Cheval  
> > wrote:
> >> The HTTPS certificate for https://git.rtems.org has expired (~15
> >> minutes ago). Are the auto-renewal scripts failing?
>
> Thank you for the notice. Amar has resolved this issue.
>
> Chris
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


Re: Aligned operations mismatch

2018-08-22 Thread Amaan Cheval
On Wed, Aug 22, 2018 at 8:20 PM Mikhail Svetkin  wrote:
>
> Hi all,
>
> I have found a problem with assembler code generation in the network stack on
> an ARM processor (alignment fault).
>
> I am using:
> arm-rtems5-gcc (GCC) 7.2.0 20170814 (RTEMS 5, RSB 
> a6d011e028a0776cedf0823940eb882e917a44e5, Newlib 2.5.0.20170922)
>
> The compiler generates invalid instruction 'ldmia' here 
> (https://git.rtems.org/rtems/tree/cpukit/libnetworking/netinet/udp_usrreq.c#n160).
>
> The instruction does not support unaligned accesses.
>
> Also i have found the bug fix in FreeBSD repository 
> (https://github.com/freebsd/freebsd/commit/6cc0e8d2a0b583db5707f811d4ebfbe1ad05e628)
>
> I tried their solution and it works.
>

Do you mean you changed the alignment to 2 in RTEMS'
cpukit/libnetworking/netinet/ip.h:85?
https://git.rtems.org/rtems/tree/cpukit/libnetworking/netinet/ip.h#n85

If yes, and if you could confirm it worked / tests passed, feel free
to submit a patch: https://devel.rtems.org/wiki/Developer/Contributing

If not, a ticket with all the information is highly appreciated too!

P.S. - I believe this is a part of the "old" networking stack for
RTEMS - the "new" stack uses rtems-libbsd, which already includes the
FreeBSD patch you mentioned:
https://github.com/RTEMS/rtems-libbsd/blob/master/freebsd/sys/netinet/ip.h#L70

Perhaps someone can shed some light on when the old stack can be used
vs. the new one, and how - I haven't seen much documentation on it.
Patch fixes for the old stack should likely still be accepted while
there are still BSPs using it.

> Should I create a ticket for that?
>
> Best regards,
> Mikhail
> ___
> devel mailing list
> devel@rtems.org
> http://lists.rtems.org/mailman/listinfo/devel
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


[PATCH] user: Update x86-64 chapter with end-of-GSoC status

2018-08-20 Thread Amaan Cheval
---
 user/bsps/bsps-x86_64.rst | 52 ++-
 1 file changed, 51 insertions(+), 1 deletion(-)

diff --git a/user/bsps/bsps-x86_64.rst b/user/bsps/bsps-x86_64.rst
index 19c4461..c13f369 100644
--- a/user/bsps/bsps-x86_64.rst
+++ b/user/bsps/bsps-x86_64.rst
@@ -136,10 +136,60 @@ After rebooting, the RTEMS kernel should run after the UEFI firmware and
 FreeBSD's bootloader. The ``-serial stdio`` QEMU flag will let the RTEMS console
 send its output to the host's ``stdio`` stream.
 
+Paging
+------
+
+During the BSP's initialization, the paging tables are setup to identity-map the
+first 512GiB, i.e. virtual addresses are the same as physical addresses for the
+first 512GiB.
+
+The page structures are set up statically with 1GiB super-pages.
+
+.. note::
+  Page-faults are not handled.
+
+.. warning::
+  RAM size is not detected dynamically and defaults to 1GiB, if the
+  configuration-time ``RamSize`` parameter is not used.
+
+Interrupt Setup
+---
+
+Interrupt vectors ``0`` through ``32`` (i.e. 33 interrupt vectors in total) are
+setup as "RTEMS interrupts", which can be hooked through
+``rtems_interrupt_handler_install``.
+
+The Interrupt Descriptor Table supports a total of 256 possible vectors (0
+through 255), which leaves a lot of room for "raw interrupts", which can be
+hooked through ``_CPU_ISR_install_raw_handler``.
+
+Since the APIC needs to be used for the clock driver, the PIC is remapped (IRQ0
+of the PIC is redirected to vector 32, and so on), and then all interrupts are
+masked to disable the PIC. In this state, the PIC may _still_ produce spurious
+interrupts (IRQ7 and IRQ15, redirected to vector 39 and vector 47 respectively).
+
+The clock driver triggers the initialization of the APIC and then the APIC
+timer.
+
+The I/O APIC is not supported at the moment.
+
+.. note::
+  IRQ32 is reserved by default for the APIC timer (see following section).
+
+  IRQ255 is reserved by default for the APIC's spurious vector.
+
+.. warning::
+  Besides the first 33 vectors (0 through 32), and vector 255 (the APIC spurious
+  vector), no other handlers are attached by default.
+
 Clock Driver
 ------------
 
-The clock driver currently uses the idle thread clock driver.
+The clock driver currently uses the APIC timer. Since the APIC timer runs at the
+CPU bus frequency, which can't be detected easily, the PIT is used to calibrate
+the APIC timer, and then the APIC timer is enabled in periodic mode, with the
+initial counter setup such that interrupts fire at the same frequency as the
+clock tick frequency, as requested by ``CONFIGURE_MICROSECONDS_PER_TICK``.
 
 Console Driver
 --------------
-- 
2.18.0

___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


Re: [GSoC - x86_64] Pre-merge issues (at -O2 optimization level) and WIP review

2018-08-16 Thread Amaan Cheval
On Tue, Aug 14, 2018 at 7:34 PM, Joel Sherrill  wrote:
>
>
> On Sun, Aug 12, 2018 at 4:47 PM, Amaan Cheval  wrote:
>>
>> Hi!
>>
>> I've narrowed the issue down to this bintime function:
>>
>> https://github.com/RTEMS/rtems/blob/b2de4260c5c71e518742731a8cdebe3411937181/cpukit/score/src/kern_tc.c#L548
>>
>> The watchdog ticks in _Per_CPU_Information / Clock_driver_ticks are at
>> "1000", when that function is called (rtems_clock_get_tod ->
>> _TOD_Get_timeval -> _Timecounter_Microtime -> microtime). The bt and
>> tvp values there are:
>>
>> (gdb) p bt
>> $2 = {sec = 599562004, frac = 18446744073709551536}
>> (gdb) p *tvp
>> $3 = {tv_sec = 599562004, tv_usec = 99}
>>
>> The full (relevant) debug log for the "wrong" timing despite the
>> Clock_driver_ticks being correct is here:
>> https://gist.github.com/AmaanC/c59caf5232b03054d457dcacb5ab1c54
>>
>> I'm quite unfamiliar with how the low-level internals work and it
>> looks like it comes from FreeBSD. This is likely a bug from the
>> timecounter being "too" precise - it dispatches the task at _exactly_
>> the tc_freq it promised - if it slips by 1 tick, then the values start
>> looking correct.
>>
>> This looks more like an off-by-one in the low-level code, in that
>> case, since my clock driver's timecounter returns exactly the value it
>> ought to be returning (100 when 1 second has passed, for eg., when the
>> tc_frequency=100Hz - in that case the bintime's returned "now.tv_sec"
>> value in clockgettod.c causes the wrong second to be set in
>> "time_buffer").
>>
>>
>> https://github.com/AmaanC/rtems-gsoc18/blob/ac/daily-03-post-hello/bsps/x86_64/amd64/clock/clock.c#L51
>>
>> On Sun, Aug 12, 2018 at 2:48 PM, Amaan Cheval 
>> wrote:
>> > There's another issue I'm having now:
>> >
>> > At -O0, ticker.exe works well and passes reliably. At -O2, the TOD
>> > seems to be rushed a bit:
>> >
>> > TA1  - rtems_clock_get_tod - 09:00:00   12/31/1988
>> > TA2  - rtems_clock_get_tod - 09:00:00   12/31/1988
>> > TA3  - rtems_clock_get_tod - 09:00:00   12/31/1988
>> > TA1  - rtems_clock_get_tod - 09:00:04   12/31/1988
>> > TA2  - rtems_clock_get_tod - 09:00:09   12/31/1988
>> > TA1  - rtems_clock_get_tod - 09:00:09   12/31/1988
>> > TA3  - rtems_clock_get_tod - 09:00:14   12/31/1988
>> > TA1  - rtems_clock_get_tod - 09:00:14   12/31/1988
>> > TA2  - rtems_clock_get_tod - 09:00:19   12/31/1988
>> > TA1  - rtems_clock_get_tod - 09:00:19   12/31/1988
>> > TA1  - rtems_clock_get_tod - 09:00:24   12/31/1988
>> > TA3  - rtems_clock_get_tod - 09:00:29   12/31/1988
>> > TA2  - rtems_clock_get_tod - 09:00:29   12/31/1988
>> > TA1  - rtems_clock_get_tod - 09:00:29   12/31/1988
>> > TA1  - rtems_clock_get_tod - 09:00:34   12/31/1988
>> >
>> > I'm not sure what it could be - I suspected my get_timecount somehow
>> > not realizing that Clock_driver_ticks was volatile, but that seems to
>> > be in order. The relevant code is here:
>> >
>> > https://github.com/AmaanC/rtems-gsoc18/blob/ac/daily-03-post-hello/bsps/x86_64/amd64/clock/clock.c
>> >
>> > On Sun, Aug 12, 2018 at 3:43 AM, Amaan Cheval 
>> > wrote:
>> >> Figured it out; turns out my code to align the stack so I could make
>> >> calls without raising exceptions was messing up and corrupting the
>> >> stack-pointer.
>> >>
>> >> Running the -O2 code now makes the clock run a bit too quickly - the
>> >> calibration may have a minor issue. I'll fix that up and send patches
>> >> tomorrow or Monday hopefully.
>> >>
>> >> I'll be traveling Tuesday, so I'd appreciate if we can get them merged
>> >> upstream Monday itself - I'm okay to have a call and walk someone
>> >> through the patches and whatnot if need be.
>> >>
>> >> Cheers!
>> >>
>> >> On Sun, Aug 12, 2018 at 1:25 AM, Amaan Cheval 
>> >> wrote:
>> >>> Hi!
>> >>>
>> >>> In the process of cleaning my work up, I've run into an odd problem
>> >>> which only shows up when I set the optimization level to -O2. At -O0,
>> >>> it's perfectly fine.
>> >>>
>> >>> The issue is that somehow execution ends up at address 0x0.
>> >>>
>> >>> This likely happens due to a _CPU_Context_switch, where the rsp is se

Re: git.rtems.org LetsEncrypt TLS certificate expired

2018-08-15 Thread Amaan Cheval
FYI: https://docs.rtems.org has an expired certificate too.

On Wed, Aug 15, 2018 at 3:30 PM, Amaan Cheval  wrote:
> The HTTPS certificate for https://git.rtems.org has expired (~15
> minutes ago). Are the auto-renewal scripts failing?
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


git.rtems.org LetsEncrypt TLS certificate expired

2018-08-15 Thread Amaan Cheval
The HTTPS certificate for https://git.rtems.org has expired (~15
minutes ago). Are the auto-renewal scripts failing?
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


[GSoC - x86_64] Final report

2018-08-13 Thread Amaan Cheval
Hi!

I've written my final report up here:
https://blog.whatthedude.com/post/gsoc-final/

Let me know if you have any comments! It's been really fun working
with all of you, and I look forward to more!

Cheers!
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


[PATCH 3/5] bsps/x86_64: Add paging support with 1GiB super pages

2018-08-13 Thread Amaan Cheval
Updates #2898.
---
 bsps/x86_64/amd64/start/bspstart.c|   2 +
 bsps/x86_64/amd64/start/page.c| 172 ++
 bsps/x86_64/headers.am|   5 +
 bsps/x86_64/include/libcpu/page.h |  68 +++
 c/src/lib/libbsp/x86_64/amd64/Makefile.am |   1 +
 .../cpu/x86_64/include/rtems/score/cpu_asm.h  |  13 ++
 6 files changed, 261 insertions(+)
 create mode 100644 bsps/x86_64/amd64/start/page.c
 create mode 100644 bsps/x86_64/headers.am
 create mode 100644 bsps/x86_64/include/libcpu/page.h

diff --git a/bsps/x86_64/amd64/start/bspstart.c b/bsps/x86_64/amd64/start/bspstart.c
index 784748ce3f..5a5b46bcec 100644
--- a/bsps/x86_64/amd64/start/bspstart.c
+++ b/bsps/x86_64/amd64/start/bspstart.c
@@ -26,7 +26,9 @@
 
 #include 
 #include 
+#include <libcpu/page.h>
 
 void bsp_start(void)
 {
+  paging_init();
 }
diff --git a/bsps/x86_64/amd64/start/page.c b/bsps/x86_64/amd64/start/page.c
new file mode 100644
index 00..64bdf21707
--- /dev/null
+++ b/bsps/x86_64/amd64/start/page.c
@@ -0,0 +1,172 @@
+/*
+ * This file sets up page sizes to 1GiB (i.e. huge pages, using only the PML4
+ * and PDPT, skipping the PDT, and PT).
+ * We set up identity-page mapping for the 512 GiBs addressable by using static
+ * PML4 and PDPT tables.
+ *
+ * Section 4.5 "4-Level Paging" of Volume 3 of the Intel Software Developer
+ * Manual guides a lot of the code used in this file.
+ */
+
+/*
+ * Copyright (c) 2018.
+ * Amaan Cheval 
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *notice, this list of conditions and the following disclaimer in the
+ *documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+uint64_t amd64_pml4[NUM_PAGE_TABLE_ENTRIES] RTEMS_ALIGNED(4096);
+uint64_t amd64_pdpt[NUM_PAGE_TABLE_ENTRIES] RTEMS_ALIGNED(4096);
+
+bool paging_1gib_pages_supported(void)
+{
+  /*
+   * If CPUID.80000001H:EDX.Page1GB [bit 26] = 1, 1-GByte pages are supported
+   * with 4-level paging.
+   */
+  uint32_t a, b, c, d;
+  cpuid(0x80000001, &a, &b, &c, &d);
+  return (d >> 26) & 1;
+}
+
+uint8_t get_maxphysaddr(void)
+{
+  /*
+   * CPUID.80000008H:EAX[15:8] reports the linear-address width supported by the
+   * processor. Generally, this value is 48 if CPUID.80000001H:EDX.LM [bit 29] =
+   * 1 and 32 otherwise.
+   */
+  uint32_t a, b, c, d;
+  cpuid(0x80000008, &a, &b, &c, &d);
+
+  uint8_t maxphysaddr = (a >> 8) & 0xff;
+  /* This width is referred to as MAXPHYADDR. MAXPHYADDR is at most 52. */
+  assert(maxphysaddr <= 52);
+
+  return maxphysaddr;
+}
+
+uint64_t get_mask_for_bits(uint8_t start, uint8_t end)
+{
+  /*
+   * Create a mask that lets you select bits start:end when logically ANDed with
+   * a value. For eg.
+   *   get_mask_for_bits(48, 64) = 0xffff000000000000
+   */
+  uint64_t mask = (((uint64_t) 1 << (end - start)) - 1) << start;
+  return mask;
+}
+
+RTEMS_INLINE_ROUTINE void assert_0s_from_bit(uint64_t entry, uint8_t bit_pos)
+{
+  /* Confirm that bit_pos:64 are all 0s */
+  assert((entry & get_mask_for_bits(bit_pos, 64)) == 0);
+}
+
+uint64_t create_cr3_entry(
+  uint64_t phys_addr, uint8_t maxphysaddr, uint64_t flags
+)
+{
+  /* Confirm PML4 address is aligned on a 4KiB boundary */
+  assert((phys_addr & 0xfff) == 0);
+  uint64_t entry = (phys_addr & get_mask_for_bits(12, maxphysaddr)) | flags;
+
+  /* Confirm that bits maxphysaddr:64 are 0s */
+  assert_0s_from_bit(entry, maxphysaddr);
+  return entry;
+}
+
+uint64_t create_pml4_entry(
+  uint64_t phys_addr, uint8_t maxphysaddr, uint64_t flags
+)
+{
+  /* Confirm address we're writing is aligned on a 4KiB boundary */
+  assert((phys_addr & 0xfff) == 0);
+  uint64_t entry = (phys_addr & get_mask_for_bits(12, maxphysaddr)) | flags;
+
+  /*
+   * Con

[PATCH 5/5] bsps/x86_64: Add APIC timer based clock driver

2018-08-13 Thread Amaan Cheval
The APIC timer is calibrated by running the i8254 PIT for a fraction of a
second (determined by PIT_CALIBRATE_DIVIDER) and counting how many times the
APIC counter has ticked. The calibration can be run multiple times (determined
by APIC_TIMER_NUM_CALIBRATIONS) and averaged out.

Updates #2898.
---
 bsps/x86_64/amd64/clock/clock.c   | 299 ++
 bsps/x86_64/amd64/headers.am  |   3 +
 bsps/x86_64/amd64/include/apic.h  |  62 
 bsps/x86_64/amd64/include/clock.h |  99 ++
 bsps/x86_64/amd64/include/pic.h   |  75 +
 bsps/x86_64/amd64/interrupts/pic.c|  76 +
 c/src/lib/libbsp/x86_64/amd64/Makefile.am |   4 +-
 .../cpu/x86_64/include/rtems/score/cpu_asm.h  |  23 ++
 8 files changed, 640 insertions(+), 1 deletion(-)
 create mode 100644 bsps/x86_64/amd64/clock/clock.c
 create mode 100644 bsps/x86_64/amd64/include/apic.h
 create mode 100644 bsps/x86_64/amd64/include/clock.h
 create mode 100644 bsps/x86_64/amd64/include/pic.h
 create mode 100644 bsps/x86_64/amd64/interrupts/pic.c

diff --git a/bsps/x86_64/amd64/clock/clock.c b/bsps/x86_64/amd64/clock/clock.c
new file mode 100644
index 00..76e537755a
--- /dev/null
+++ b/bsps/x86_64/amd64/clock/clock.c
@@ -0,0 +1,299 @@
+/*
+ * Copyright (c) 2018.
+ * Amaan Cheval 
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *notice, this list of conditions and the following disclaimer in the
+ *documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+/* Use the amd64_apic_base as an array of 32-bit APIC registers */
+volatile uint32_t *amd64_apic_base;
+static struct timecounter amd64_clock_tc;
+
+extern volatile uint32_t Clock_driver_ticks;
+extern void apic_spurious_handler(void);
+extern void Clock_isr(void *param);
+
+static uint32_t amd64_clock_get_timecount(struct timecounter *tc)
+{
+  return Clock_driver_ticks;
+}
+
+/*
+ * When the CPUID instruction is executed with a source operand of 1 in the EAX
+ * register, bit 9 of the CPUID feature flags returned in the EDX register
+ * indicates the presence (set) or absence (clear) of a local APIC.
+ */
+bool has_apic_support(void)
+{
+  uint32_t eax, ebx, ecx, edx;
+  cpuid(1, &eax, &ebx, &ecx, &edx);
+  return (edx >> 9) & 1;
+}
+
+/*
+ * Initializes the APIC by hardware and software enabling it, and sets up the
+ * amd64_apic_base pointer that can be used as a 32-bit addressable array to
+ * access APIC registers.
+ */
+void apic_initialize(void)
+{
+  if ( !has_apic_support() ) {
+printf("warning: cpuid claims no APIC support - trying anyway.\n");
+  }
+
+  /*
+   * The APIC base address is a 36-bit physical address.
+   * We have identity-paging setup at the moment, which makes this simpler, but
+   * that's something to note since the variables below use virtual addresses.
+   *
+   * Bits 0-11 (inclusive) are 0, making the address page (4KiB) aligned.
+   * Bits 12-35 (inclusive) of the MSR point to the rest of the address.
+   */
+  uint64_t apic_base_msr = rdmsr(APIC_BASE_MSR);
+  amd64_apic_base = (uint32_t*) apic_base_msr;
+  amd64_apic_base = (uint32_t*) ((uintptr_t) amd64_apic_base & 0x0ffffff000);
+
+  /* Hardware enable the APIC just to be sure */
+  wrmsr(
+APIC_BASE_MSR,
+apic_base_msr | APIC_BASE_MSR_ENABLE,
+apic_base_msr >> 32
+  );
+
+  DBG_PRINTF("APIC is at 0x%" PRIxPTR "\n", (uintptr_t) amd64_apic_base);
+  DBG_PRINTF(
+    "APIC ID at *0x%" PRIxPTR "=0x%" PRIx32 "\n",
+    (uintptr_t) &amd64_apic_base[APIC_REGISTER_APICID],
+    amd64_apic_base[APIC_REGISTER_APICID]
+  );
+
+  DBG_PRINTF(
+"APIC spurious vector register *0x%" PRIxPTR &qu

[PATCH 2/5] bsps/x86_64: Reduce default RamSize to 1GiB

2018-08-13 Thread Amaan Cheval
Simulators may not always be able to allocate 4GiB easily, and using an
artificially lower RAM may cause a broken heap.

Updates #2898.
---
 bsps/x86_64/amd64/start/linkcmds | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/bsps/x86_64/amd64/start/linkcmds b/bsps/x86_64/amd64/start/linkcmds
index 46b1ccbfe7..ecb4a2b835 100644
--- a/bsps/x86_64/amd64/start/linkcmds
+++ b/bsps/x86_64/amd64/start/linkcmds
@@ -23,15 +23,15 @@ HeapSize = DEFINED(HeapSize)  ? HeapSize  :
 RamBase = DEFINED(RamBase)? RamBase   :
   DEFINED(_RamBase)   ? _RamBase  : 0x0;
 
-/* XXX: Defaulting to 4GiB.
+/* XXX: Defaulting to 1GiB.
  */
 RamSize = DEFINED(RamSize)? RamSize   :
-  DEFINED(_RamSize)   ? _RamSize  : 0xffffffff;
+  DEFINED(_RamSize)   ? _RamSize  : 0x40000000;
 
 SECTIONS
 {
   /* Read-only sections, merged into text segment: */
-  PROVIDE (__executable_start = SEGMENT_START("text-segment", 0x400000)); . = SEGMENT_START("text-segment", 0x400000) + SIZEOF_HEADERS;
+  PROVIDE (__executable_start = SEGMENT_START("text-segment", 0x00100000)); . = SEGMENT_START("text-segment", 0x00100000) + SIZEOF_HEADERS;
   .interp : { *(.interp) }
   .note.gnu.build-id : { *(.note.gnu.build-id) }
   .hash   : { *(.hash) }
-- 
2.18.0

___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


[PATCH 1/5] bsps/x86_64: Reorganize header files and compile-options

2018-08-13 Thread Amaan Cheval
Updates #2898.
---
 bsps/x86_64/amd64/config/amd64.cfg|  3 ++
 cpukit/score/cpu/x86_64/headers.am|  1 +
 cpukit/score/cpu/x86_64/include/rtems/asm.h   | 10 
 .../cpu/x86_64/include/rtems/score/cpu.h  |  5 +-
 .../cpu/x86_64/include/rtems/score/cpu_asm.h  | 50 +++
 .../cpu/x86_64/include/rtems/score/cpuimpl.h  | 16 +-
 .../cpu/x86_64/include/rtems/score/x86_64.h   | 13 -
 .../score/cpu/x86_64/x86_64-context-switch.S  |  8 +--
 8 files changed, 84 insertions(+), 22 deletions(-)
 create mode 100644 cpukit/score/cpu/x86_64/include/rtems/score/cpu_asm.h

diff --git a/bsps/x86_64/amd64/config/amd64.cfg b/bsps/x86_64/amd64/config/amd64.cfg
index 3c4492d9d3..ad861cb867 100644
--- a/bsps/x86_64/amd64/config/amd64.cfg
+++ b/bsps/x86_64/amd64/config/amd64.cfg
@@ -11,3 +11,6 @@ CPU_CFLAGS  = -mno-red-zone
 # way we can avoid linker-time relocation errors spawning from values being
 # larger than their optimized container sizes.
 CPU_CFLAGS += -mcmodel=large
+CPU_CFLAGS += -Werror=return-type
+
+LDFLAGS = -Wl,--gc-sections
diff --git a/cpukit/score/cpu/x86_64/headers.am b/cpukit/score/cpu/x86_64/headers.am
index b3792d00b1..d23c39d99b 100644
--- a/cpukit/score/cpu/x86_64/headers.am
+++ b/cpukit/score/cpu/x86_64/headers.am
@@ -11,6 +11,7 @@ include_rtems_HEADERS += include/rtems/asm.h
 include_rtems_scoredir = $(includedir)/rtems/score
 include_rtems_score_HEADERS =
 include_rtems_score_HEADERS += include/rtems/score/cpu.h
+include_rtems_score_HEADERS += include/rtems/score/cpu_asm.h
 include_rtems_score_HEADERS += include/rtems/score/cpuatomic.h
 include_rtems_score_HEADERS += include/rtems/score/cpuimpl.h
 include_rtems_score_HEADERS += include/rtems/score/x86_64.h
diff --git a/cpukit/score/cpu/x86_64/include/rtems/asm.h b/cpukit/score/cpu/x86_64/include/rtems/asm.h
index 36699140b7..76efc07db3 100644
--- a/cpukit/score/cpu/x86_64/include/rtems/asm.h
+++ b/cpukit/score/cpu/x86_64/include/rtems/asm.h
@@ -84,6 +84,16 @@
 #define r14 REG (r14)
 #define r15 REG (r15)
 
+/*
+ * Order of register usage for function arguments as per the calling convention
+ */
+#define REG_ARG0 rdi
+#define REG_ARG1 rsi
+#define REG_ARG2 rdx
+#define REG_ARG3 rcx
+#define REG_ARG4 r8
+#define REG_ARG5 r9
+
 // XXX: eax, ax, etc., segment registers
 
 /*
diff --git a/cpukit/score/cpu/x86_64/include/rtems/score/cpu.h b/cpukit/score/cpu/x86_64/include/rtems/score/cpu.h
index 5c40af7647..557d11109d 100644
--- a/cpukit/score/cpu/x86_64/include/rtems/score/cpu.h
+++ b/cpukit/score/cpu/x86_64/include/rtems/score/cpu.h
@@ -40,6 +40,7 @@ extern "C" {
 #endif
 
 #include 
+#include 
 #include 
 
 #define CPU_SIMPLE_VECTORED_INTERRUPTS FALSE
@@ -54,7 +55,7 @@ extern "C" {
 #define CPU_PROVIDES_IDLE_THREAD_BODYFALSE
 #define CPU_STACK_GROWS_UP   FALSE
 
-#define CPU_STRUCTURE_ALIGNMENT __attribute__((aligned ( 64 )))
+#define CPU_STRUCTURE_ALIGNMENT RTEMS_ALIGNED(64)
 #define CPU_CACHE_LINE_BYTES 64
 #define CPU_MODES_INTERRUPT_MASK   0x0001
 #define CPU_MAXIMUM_PROCESSORS 32
@@ -104,7 +105,7 @@ typedef struct {
 uint32_t   special_interrupt_register;
 } CPU_Interrupt_frame;
 
-#endif /* ASM */
+#endif /* !ASM */
 
 
 #define CPU_CONTEXT_FP_SIZE sizeof( Context_Control_fp )
diff --git a/cpukit/score/cpu/x86_64/include/rtems/score/cpu_asm.h b/cpukit/score/cpu/x86_64/include/rtems/score/cpu_asm.h
new file mode 100644
index 00..ac43a6366d
--- /dev/null
+++ b/cpukit/score/cpu/x86_64/include/rtems/score/cpu_asm.h
@@ -0,0 +1,50 @@
+/*
+ * Copyright (c) 2018.
+ * Amaan Cheval 
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *notice, this list of conditions and the following disclaimer in the
+ *documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED.  IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#ifndef _RTEMS_SCORE_CPU_ASM_H
+#define _RTEMS_SCORE_CPU_ASM_H
+
+

[PATCH 0/5] [GSoC - x86_64] Add interrupts and clock driver

2018-08-13 Thread Amaan Cheval
This patch series includes all of my remaining work so far on the x86_64 BSP. It
supports:

* Static paging support using 1GiB superpages
* RTEMS interrupts
* A fairly accurate clock driver based on the APIC timer calibrated by the PIT

ticker.exe passes reliably on -O0 optimization level, and it seems like it
_should_ on -O2 as well, except for the issue I've been describing on this
thread:

https://lists.rtems.org/pipermail/devel/2018-August/022825.html

 bsps/x86_64/amd64/clock/clock.c   | 299 ++
 bsps/x86_64/amd64/config/amd64.cfg|   3 +
 bsps/x86_64/amd64/headers.am  |   3 +
 bsps/x86_64/amd64/include/apic.h  |  62 
 bsps/x86_64/amd64/include/clock.h |  99 ++
 bsps/x86_64/amd64/include/pic.h   |  75 +
 bsps/x86_64/amd64/interrupts/idt.c| 151 +
 bsps/x86_64/amd64/interrupts/isr_handler.S| 191 +++
 bsps/x86_64/amd64/interrupts/pic.c|  76 +
 bsps/x86_64/amd64/start/bspstart.c|   4 +
 bsps/x86_64/amd64/start/linkcmds  |   6 +-
 bsps/x86_64/amd64/start/page.c| 172 ++
 bsps/x86_64/headers.am|   9 +
 bsps/x86_64/include/bsp/irq.h |  46 +++
 bsps/x86_64/include/libcpu/page.h |  68 
 c/src/lib/libbsp/x86_64/amd64/Makefile.am |   9 +-
 cpukit/score/cpu/x86_64/cpu.c |  17 +-
 cpukit/score/cpu/x86_64/headers.am|   2 +
 cpukit/score/cpu/x86_64/include/rtems/asm.h   |  10 +
 .../cpu/x86_64/include/rtems/score/cpu.h  | 112 +--
 .../cpu/x86_64/include/rtems/score/cpu_asm.h  | 104 ++
 .../cpu/x86_64/include/rtems/score/cpuimpl.h  |  16 +-
 .../cpu/x86_64/include/rtems/score/idt.h  | 131 
 .../cpu/x86_64/include/rtems/score/x86_64.h   |  13 +-
 .../cpu/x86_64/x86_64-context-initialize.c|   9 +-
 .../score/cpu/x86_64/x86_64-context-switch.S  |   8 +-
 26 files changed, 1636 insertions(+), 59 deletions(-)
 create mode 100644 bsps/x86_64/amd64/clock/clock.c
 create mode 100644 bsps/x86_64/amd64/include/apic.h
 create mode 100644 bsps/x86_64/amd64/include/clock.h
 create mode 100644 bsps/x86_64/amd64/include/pic.h
 create mode 100644 bsps/x86_64/amd64/interrupts/idt.c
 create mode 100644 bsps/x86_64/amd64/interrupts/isr_handler.S
 create mode 100644 bsps/x86_64/amd64/interrupts/pic.c
 create mode 100644 bsps/x86_64/amd64/start/page.c
 create mode 100644 bsps/x86_64/headers.am
 create mode 100644 bsps/x86_64/include/bsp/irq.h
 create mode 100644 bsps/x86_64/include/libcpu/page.h
 create mode 100644 cpukit/score/cpu/x86_64/include/rtems/score/cpu_asm.h
 create mode 100644 cpukit/score/cpu/x86_64/include/rtems/score/idt.h

-- 
2.18.0



Re: [GSoC - x86_64] Pre-merge issues (at -O2 optimization level) and WIP review

2018-08-12 Thread Amaan Cheval
Hi!

I've narrowed the issue down to this bintime function:
https://github.com/RTEMS/rtems/blob/b2de4260c5c71e518742731a8cdebe3411937181/cpukit/score/src/kern_tc.c#L548

The watchdog ticks in _Per_CPU_Information / Clock_driver_ticks are at
"1000", when that function is called (rtems_clock_get_tod ->
_TOD_Get_timeval -> _Timecounter_Microtime -> microtime). The bt and
tvp values there are:

(gdb) p bt
$2 = {sec = 599562004, frac = 18446744073709551536}
(gdb) p *tvp
$3 = {tv_sec = 599562004, tv_usec = 99}

The full (relevant) debug log for the "wrong" timing despite the
Clock_driver_ticks being correct is here:
https://gist.github.com/AmaanC/c59caf5232b03054d457dcacb5ab1c54

I'm quite unfamiliar with how the low-level internals work and it
looks like it comes from FreeBSD. This is likely a bug from the
timecounter being "too" precise - it dispatches the task at _exactly_
the tc_freq it promised - if it slips by 1 tick, then the values start
looking correct.

This looks more like an off-by-one in the low-level code, in that
case, since my clock driver's timecounter returns exactly the value it
ought to be returning (100 when 1 second has passed, for eg., when the
tc_frequency=100Hz - in that case the bintime's returned "now.tv_sec"
value in clockgettod.c causes the wrong second to be set in
"time_buffer").

https://github.com/AmaanC/rtems-gsoc18/blob/ac/daily-03-post-hello/bsps/x86_64/amd64/clock/clock.c#L51

On Sun, Aug 12, 2018 at 2:48 PM, Amaan Cheval  wrote:
> There's another issue I'm having now:
>
> At -O0, ticker.exe works well and passes reliably. At -O2, the TOD
> seems to be rushed a bit:
>
> TA1  - rtems_clock_get_tod - 09:00:00   12/31/1988
> TA2  - rtems_clock_get_tod - 09:00:00   12/31/1988
> TA3  - rtems_clock_get_tod - 09:00:00   12/31/1988
> TA1  - rtems_clock_get_tod - 09:00:04   12/31/1988
> TA2  - rtems_clock_get_tod - 09:00:09   12/31/1988
> TA1  - rtems_clock_get_tod - 09:00:09   12/31/1988
> TA3  - rtems_clock_get_tod - 09:00:14   12/31/1988
> TA1  - rtems_clock_get_tod - 09:00:14   12/31/1988
> TA2  - rtems_clock_get_tod - 09:00:19   12/31/1988
> TA1  - rtems_clock_get_tod - 09:00:19   12/31/1988
> TA1  - rtems_clock_get_tod - 09:00:24   12/31/1988
> TA3  - rtems_clock_get_tod - 09:00:29   12/31/1988
> TA2  - rtems_clock_get_tod - 09:00:29   12/31/1988
> TA1  - rtems_clock_get_tod - 09:00:29   12/31/1988
> TA1  - rtems_clock_get_tod - 09:00:34   12/31/1988
>
> I'm not sure what it could be - I suspected my get_timecount somehow
> not realizing that Clock_driver_ticks was volatile, but that seems to
> be in order. The relevant code is here:
> https://github.com/AmaanC/rtems-gsoc18/blob/ac/daily-03-post-hello/bsps/x86_64/amd64/clock/clock.c
>
> On Sun, Aug 12, 2018 at 3:43 AM, Amaan Cheval  wrote:
>> Figured it out; turns out my code to align the stack so I could make
>> calls without raising exceptions was messing up and corrupting the
>> stack-pointer.
>>
>> Running the -O2 code now makes the clock run a bit too quickly - the
>> calibration may have a minor issue. I'll fix that up and send patches
>> tomorrow or Monday hopefully.
>>
>> I'll be traveling Tuesday, so I'd appreciate if we can get them merged
>> upstream Monday itself - I'm okay to have a call and walk someone
>> through the patches and whatnot if need be.
>>
>> Cheers!
>>
>> On Sun, Aug 12, 2018 at 1:25 AM, Amaan Cheval  wrote:
>>> Hi!
>>>
>>> In the process of cleaning my work up, I've run into an odd problem
>>> which only shows up when I set the optimization level to -O2. At -O0,
>>> it's perfectly fine.
>>>
>>> The issue is that somehow execution ends up at address 0x0.
>>>
>>> This likely happens due to a _CPU_Context_switch, where the rsp is set
>>> to a corrupted value, leading to a corrupt (i.e. 0) return address at
>>> the end of the context switch.
>>>
>>> What's curious is that this corruption _seems_ to occur in
>>> _ISR_Handler's call to _Thread_Dispatch, by somehow messing the value
>>> of rsp up - I honestly don't know this for sure because gdb says one
>>> thing (i.e. that rsp = 0), but setting up some code (cmpq $0, rsp) to
>>> check this seems to say rsp is non-zero, at least.
>>>
>>> This is an odd heisenbug I'd like to investigate for sure - I just
>>> thought I'd shoot this email out because:
>>>
>>> - If I can't figure it out tomorrow, soon, I'll just drop it so I can
>>> create more logical commits to send as patches upstream (thereby
>>> leaving -O0 upstream, at least temporarily)
>>>
>>> - If anyone's seen an odd stack corruption like 

Re: [GSoC - x86_64] Pre-merge issues (at -O2 optimization level) and WIP review

2018-08-12 Thread Amaan Cheval
There's another issue I'm having now:

At -O0, ticker.exe works well and passes reliably. At -O2, the TOD
seems to be rushed a bit:

TA1  - rtems_clock_get_tod - 09:00:00   12/31/1988
TA2  - rtems_clock_get_tod - 09:00:00   12/31/1988
TA3  - rtems_clock_get_tod - 09:00:00   12/31/1988
TA1  - rtems_clock_get_tod - 09:00:04   12/31/1988
TA2  - rtems_clock_get_tod - 09:00:09   12/31/1988
TA1  - rtems_clock_get_tod - 09:00:09   12/31/1988
TA3  - rtems_clock_get_tod - 09:00:14   12/31/1988
TA1  - rtems_clock_get_tod - 09:00:14   12/31/1988
TA2  - rtems_clock_get_tod - 09:00:19   12/31/1988
TA1  - rtems_clock_get_tod - 09:00:19   12/31/1988
TA1  - rtems_clock_get_tod - 09:00:24   12/31/1988
TA3  - rtems_clock_get_tod - 09:00:29   12/31/1988
TA2  - rtems_clock_get_tod - 09:00:29   12/31/1988
TA1  - rtems_clock_get_tod - 09:00:29   12/31/1988
TA1  - rtems_clock_get_tod - 09:00:34   12/31/1988

I'm not sure what it could be - I suspected my get_timecount somehow
not realizing that Clock_driver_ticks was volatile, but that seems to
be in order. The relevant code is here:
https://github.com/AmaanC/rtems-gsoc18/blob/ac/daily-03-post-hello/bsps/x86_64/amd64/clock/clock.c

On Sun, Aug 12, 2018 at 3:43 AM, Amaan Cheval  wrote:
> Figured it out; turns out my code to align the stack so I could make
> calls without raising exceptions was messing up and corrupting the
> stack-pointer.
>
> Running the -O2 code now makes the clock run a bit too quickly - the
> calibration may have a minor issue. I'll fix that up and send patches
> tomorrow or Monday hopefully.
>
> I'll be traveling Tuesday, so I'd appreciate if we can get them merged
> upstream Monday itself - I'm okay to have a call and walk someone
> through the patches and whatnot if need be.
>
> Cheers!
>
> On Sun, Aug 12, 2018 at 1:25 AM, Amaan Cheval  wrote:
>> Hi!
>>
>> In the process of cleaning my work up, I've run into an odd problem
>> which only shows up when I set the optimization level to -O2. At -O0,
>> it's perfectly fine.
>>
>> The issue is that somehow execution ends up at address 0x0.
>>
>> This likely happens due to a _CPU_Context_switch, where the rsp is set
>> to a corrupted value, leading to a corrupt (i.e. 0) return address at
>> the end of the context switch.
>>
>> What's curious is that this corruption _seems_ to occur in
>> _ISR_Handler's call to _Thread_Dispatch, by somehow messing the value
>> of rsp up - I honestly don't know this for sure because gdb says one
>> thing (i.e. that rsp = 0), but setting up some code (cmpq $0, rsp) to
>> check this seems to say rsp is non-zero, at least.
>>
>> This is an odd heisenbug I'd like to investigate for sure - I just
>> thought I'd shoot this email out because:
>>
>> - If I can't figure it out tomorrow, soon, I'll just drop it so I can
>> create more logical commits to send as patches upstream (thereby
>> leaving -O0 upstream, at least temporarily)
>>
>> - If anyone's seen an odd stack corruption like this, or has any
>> advice on debugging it, could you let me know? I suspect something
>> like interrupting tasks which ought not to be interrupted (perhaps I
>> forgot to implement some kind of "CPU_ISR_Disable") - is there
>> anything you can think of of that sort?
>>
>> Also, here's a Github PR like last time with all the work (just for
>> the overall changes, not the specific commits!). I'd appreciate a
>> quick review if anyone could - sorry about sending this out over the
>> weekend! I've had a surprising share of Heisenbugs with QEMU in the
>> past week.
>>
>> https://github.com/AmaanC/rtems-gsoc18/pull/3/files


[GSoC - x86_64] Pre-merge issues (at -O2 optimization level) and WIP review

2018-08-11 Thread Amaan Cheval
Hi!

In the process of cleaning my work up, I've run into an odd problem
which only shows up when I set the optimization level to -O2. At -O0,
it's perfectly fine.

The issue is that somehow execution ends up at address 0x0.

This likely happens due to a _CPU_Context_switch, where the rsp is set
to a corrupted value, leading to a corrupt (i.e. 0) return address at
the end of the context switch.

What's curious is that this corruption _seems_ to occur in
_ISR_Handler's call to _Thread_Dispatch, by somehow messing the value
of rsp up - I honestly don't know this for sure because gdb says one
thing (i.e. that rsp = 0), but setting up some code (cmpq $0, rsp) to
check this seems to say rsp is non-zero, at least.

This is an odd heisenbug I'd like to investigate for sure - I just
thought I'd shoot this email out because:

- If I can't figure it out tomorrow, soon, I'll just drop it so I can
create more logical commits to send as patches upstream (thereby
leaving -O0 upstream, at least temporarily)

- If anyone's seen an odd stack corruption like this, or has any
advice on debugging it, could you let me know? I suspect something
like interrupting tasks which ought not to be interrupted (perhaps I
forgot to implement some kind of "CPU_ISR_Disable") - is there
anything you can think of of that sort?

Also, here's a Github PR like last time with all the work (just for
the overall changes, not the specific commits!). I'd appreciate a
quick review if anyone could - sorry about sending this out over the
weekend! I've had a surprising share of Heisenbugs with QEMU in the
past week.

https://github.com/AmaanC/rtems-gsoc18/pull/3/files


Re: [PATCH] Rework to minimize and eventually eliminate RTEMS use of bsp_specs

2018-08-10 Thread Amaan Cheval
Haha, don't worry about it. It's really a non-blocker we can
absolutely handle after GSoC just as well. I just wanted to confirm in
case I'd missed something!

On Fri, Aug 10, 2018 at 6:55 PM, Joel Sherrill  wrote:
> I am sorry. I will have to dig this up and commit it.
>
> I will try to do this before I leave about lunch.
>
> Looks like we both have work to do before the end of GSoC. :)
>
> --joel
>
> On Fri, Aug 10, 2018 at 6:11 AM, Amaan Cheval 
> wrote:
>>
>> Hey Joel!
>>
>> I'm not sure if this ever made it upstream - if it did, could you dig
>> the commit up?
>>
>> I'll leave the x86_64's bsp_specs empty and make the RSB backporting
>> patches accordingly. If not, no rush, we should just add a ticket or
>> something so as to not lose track of it entirely after GSoC ends.
>>
>> On Mon, Jul 9, 2018 at 10:30 AM, Amaan Cheval 
>> wrote:
>> > To make my previous email clearer, here's what I meant with the
>> > "minimal" GCC patch required (attached).
>> >
>> > To manually test, you can place gcc-STARTFILE_SPEC.patch in
>> > $RSB/rtems/patches/ and then "git apply rsb-startfile.diff" to the RSB
>> > repo. Then build GCC and confirm that "x86_64-rtems5-gcc -dumpspecs"
> >> > includes crti and crtbegin in the startfile substitution.
>> >
>> > Let me know if we aim to have this GCC work done before merging the
>> > x86_64 BSP (see
>> > https://lists.rtems.org/pipermail/devel/2018-July/022388.html) so I
>> > can leave bsp_specs in or clear it out accordingly?
>> >
>> > For now, I'm going to leave it in.
>> >
>> > On Fri, Jul 6, 2018 at 10:46 AM, Amaan Cheval 
>> > wrote:
>> >> Hey, Joel!
>> >>
>> >> The x86_64 BSP currently uses an empty bsp_specs file contingent on
>> >> (at least the x86-64 parts of) this email thread's patch making it
>> >> upstream to GCC, and making their way into the RSB.
>> >>
>> >> 2 options:
>> >> - 1. Make the upstream GCC commit (at least the parts adding
>> >> rtemself64.h, editing config.gcc, and "#if 0"ing out
>> >> gcc/config/rtems.h)
>> >> - 2. Use a bsp_specs in the new BSP for the merge now, and empty it out
>> >> later
>> >>
>> >> I can test and send you an x86_64 specific patch for GCC if you'd
>> >> like. Or if you prefer to have all the work together, we can go with
>> >> #2.
>> >>
>> >> Let me know!
>> >>
>> >> On Sat, May 19, 2018 at 3:17 AM, Joel Sherrill  wrote:
>> >>> Thanks. I will try to deal with this Monday.
>> >>>
>> >>> My specs patches are not ready to push to gcc so I need to focus on
>> >>> just the parts to make x86_64 right.
>> >>>
>> >>> On Fri, May 18, 2018 at 3:41 PM, Amaan Cheval 
>> >>> wrote:
>> >>>>
>> >>>> To be clear, I applied this patch (with my fixes) on the 7.3 release
>> >>>> through the RSB to test, not on GCC's master branch.
>> >>>>
>> >>>> > to add i386/rtemself64.h
>> >>>>
>> >>>> What you sent in this email thread adds rtemself64.h already. Do you
>> >>>> mean you'd like to split the commits up or something?
>> >>>>
>> >>>> The only changes I made on top of yours were:
>> >>>>
>> >>>> - Readd "rtems.h" to config.gcc
>> >>>> - Fix comments
>> >>>>
>> >>>> I've attached the patch file I used within the RSB here (sorry if you
>> >>>> meant a patch of _just_ the fixes I made on top of yours, this is
>> >>>> just
>> >>>> the cumulative diff I used to patch GCC 7.3 to test).
>> >>>>
>> >>>> Regards,
>> >>>>
>> >>>> On Fri, May 18, 2018 at 7:00 PM, Joel Sherrill 
>> >>>> wrote:
>> >>>> >
>> >>>> >
>> >>>> >
>> >>>> > On Fri, May 18, 2018 at 1:38 AM, Amaan Cheval
>> >>>> > 
>> >>>> > wrote:
>> >>>> >>
>> >>>> >> I just compiled my local fixed copy (adding rtems.h back in) and
>> >>>> >> there's good news! With the patch, the x86_64 compile stub works
>> >>>> >> wi

Re: [PATCH] Rework to minimize and eventually eliminate RTEMS use of bsp_specs

2018-08-10 Thread Amaan Cheval
Hey Joel!

I'm not sure if this ever made it upstream - if it did, could you dig
the commit up?

I'll leave the x86_64's bsp_specs empty and make the RSB backporting
patches accordingly. If not, no rush, we should just add a ticket or
something so as to not lose track of it entirely after GSoC ends.

On Mon, Jul 9, 2018 at 10:30 AM, Amaan Cheval  wrote:
> To make my previous email clearer, here's what I meant with the
> "minimal" GCC patch required (attached).
>
> To manually test, you can place gcc-STARTFILE_SPEC.patch in
> $RSB/rtems/patches/ and then "git apply rsb-startfile.diff" to the RSB
> repo. Then build GCC and confirm that "x86_64-rtems5-gcc -dumpspecs"
> includes crti and crtbegin in the startfile substitution.
>
> Let me know if we aim to have this GCC work done before merging the
> x86_64 BSP (see
> https://lists.rtems.org/pipermail/devel/2018-July/022388.html) so I
> can leave bsp_specs in or clear it out accordingly?
>
> For now, I'm going to leave it in.
>
> On Fri, Jul 6, 2018 at 10:46 AM, Amaan Cheval  wrote:
>> Hey, Joel!
>>
>> The x86_64 BSP currently uses an empty bsp_specs file contingent on
>> (at least the x86-64 parts of) this email thread's patch making it
>> upstream to GCC, and making their way into the RSB.
>>
>> 2 options:
>> - 1. Make the upstream GCC commit (at least the parts adding
>> rtemself64.h, editing config.gcc, and "#if 0"ing out
>> gcc/config/rtems.h)
>> - 2. Use a bsp_specs in the new BSP for the merge now, and empty it out later
>>
>> I can test and send you an x86_64 specific patch for GCC if you'd
>> like. Or if you prefer to have all the work together, we can go with
>> #2.
>>
>> Let me know!
>>
>> On Sat, May 19, 2018 at 3:17 AM, Joel Sherrill  wrote:
>>> Thanks. I will try to deal with this Monday.
>>>
>>> My specs patches are not ready to push to gcc so I need to focus on
>>> just the parts to make x86_64 right.
>>>
>>> On Fri, May 18, 2018 at 3:41 PM, Amaan Cheval 
>>> wrote:
>>>>
>>>> To be clear, I applied this patch (with my fixes) on the 7.3 release
>>>> through the RSB to test, not on GCC's master branch.
>>>>
>>>> > to add i386/rtemself64.h
>>>>
>>>> What you sent in this email thread adds rtemself64.h already. Do you
>>>> mean you'd like to split the commits up or something?
>>>>
>>>> The only changes I made on top of yours were:
>>>>
>>>> - Re-add "rtems.h" to config.gcc
>>>> - Fix comments
>>>>
>>>> I've attached the patch file I used within the RSB here (sorry if you
>>>> meant a patch of _just_ the fixes I made on top of yours, this is just
>>>> the cumulative diff I used to patch GCC 7.3 to test).
>>>>
>>>> Regards,
>>>>
>>>> On Fri, May 18, 2018 at 7:00 PM, Joel Sherrill  wrote:
>>>> >
>>>> >
>>>> >
>>>> > On Fri, May 18, 2018 at 1:38 AM, Amaan Cheval 
>>>> > wrote:
>>>> >>
>>>> >> I just compiled my local fixed copy (adding rtems.h back in) and
>>>> >> there's good news! With the patch, the x86_64 compile stub works with
>>>> >> a blank bsp_specs file!
>>>> >
>>>> >
>>>> > Awesome!
>>>> >
>>>> > Can you send me your changes as a patch? I am thinking I need to make
>>>> > sure we agree on what the gcc master for x86_64-rtems looks like.
>>>> >
>>>> > Apparently I owe committing a patch to add i386/rtemself64.h since it is
>>>> > missing on the master. And the comment is wrong.  What else?
>>>> >
>>>> >> On Fri, May 18, 2018 at 12:59 AM, Amaan Cheval 
>>>> >> wrote:
>>>> >> > Hey!
>>>> >> >
>>>> >> > Thanks so much for sharing this, it's quite useful to put your
>>>> >> > earlier
>>>> >> > email[1] about minimizing the bsp_specs in context.
>>>> >> >
>>>> >> > From looking ahead a bit without testing (still compiling), the patch
>>>> >> > may need an ENDFILE_SPEC definition as well for "crtend.o" (it
>>>> >> > defines
>>>> >> > __TMC_END__ which crtbegin.o has left undefined, for example) and possibly
>>>> >> > "crtn.o", at least to eliminate the x86_64

Re: [GSoC - x86_64] Interrupt manager and port-specific glue - was Re: [GSoC - x86_64 - automake] Limit CFLAGS to specific source for librtemsbsp.a

2018-08-09 Thread Amaan Cheval
Haha, my tc_frequency was set all wrong, causing the date to be wonky.

The dispatching issue turned out to be a (potential) QEMU bug where
"decq" wouldn't set the ZF in EFLAGS even if it resulted in a 0 value,
causing the "jne" to always be taken.

Anyway, here's where we're at now:

Start @ 0x1027f9 ...
EFI framebuffer information:
addr, size 0x8000, 0x1d4c00
dimensions 800 x 600
stride 800
masks  0x00ff, 0xff00, 0x00ff, 0xff00
Filling 512 page tables
1gib pages not supported!
maxphysaddr = 48
sidt = ff f a8 39 34 0 0 0 0 0
us_per_tick = 1
Desired frequency = 100 irqs/sec
APIC was at fee0
APIC is now at fee0
APIC ID at *fee00020=0
APIC spurious vector register *fee000f0=10f
APIC spurious vector register *fee000f0=1ff
CPU frequency: 0x57c60
APIC ticks/sec: 0x57c6
qemu-system-x86_64: warning: I/O thread spun for 1000 iterations


*** BEGIN OF TEST CLOCK TICK ***
*** TEST VERSION: 5.0.0.2f10634899719c2857e2c8dd5088fb93a425fc83-modified
*** TEST STATE: EXPECTED-PASS
*** TEST BUILD: RTEMS_NETWORKING RTEMS_POSIX_API
*** TEST TOOLS: 7.3.0 20180125 (RTEMS 5, RSB
25f4db09c85a52fb1640a29f9bdc2de8c2768988, Newlib 3.0.0)
TA1  - rtems_clock_get_tod - 09:00:00   12/31/1988
TA2  - rtems_clock_get_tod - 09:00:00   12/31/1988
TA3  - rtems_clock_get_tod - 09:00:00   12/31/1988
TA1  - rtems_clock_get_tod - 09:00:05   12/31/1988
TA2  - rtems_clock_get_tod - 09:00:10   12/31/1988
TA1  - rtems_clock_get_tod - 09:00:10   12/31/1988
TA3  - rtems_clock_get_tod - 09:00:15   12/31/1988
TA1  - rtems_clock_get_tod - 09:00:15   12/31/1988
TA2  - rtems_clock_get_tod - 09:00:20   12/31/1988
TA1  - rtems_clock_get_tod - 09:00:20   12/31/1988
TA1  - rtems_clock_get_tod - 09:00:25   12/31/1988
TA3  - rtems_clock_get_tod - 09:00:30   12/31/1988
TA2  - rtems_clock_get_tod - 09:00:30   12/31/1988
TA1  - rtems_clock_get_tod - 09:00:30   12/31/1988

*** END OF TEST CLOCK TICK ***


*** FATAL ***
fatal source: 5 (RTEMS_FATAL_SOURCE_EXIT)
fatal code: 0 (0x)
RTEMS version: 5.0.0.2f10634899719c2857e2c8dd5088fb93a425fc83-modified
RTEMS tools: 7.3.0 20180125 (RTEMS 5, RSB
25f4db09c85a52fb1640a29f9bdc2de8c2768988, Newlib 3.0.0)
executing thread ID: 0x08a010002
executing thread name: TA1
qemu-system-x86_64: terminating on signal 2

-

2 issues:
- It isn't reliably this way - sometimes it may start at 9:00:01 (and
then the rest are 6, 11, etc.). I'm using a very naive timecounter
(number of IRQs occurred so far) right now - I'll have it account for
the ticks since the last IRQ too, which I imagine may help with this?
- It is much slower than real time - I'm not sure if this is just due
to QEMU or perhaps from ISRs piling up on each other due to the handler
taking too long. I'm not quite sure how to find out either.
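For the first issue, a timecounter that also reads the APIC timer's current-count register would advance smoothly between IRQs. A minimal sketch of just the arithmetic - names like `lapic_reload` are illustrative, not the actual BSP code:

```c
#include <stdint.h>

/*
 * Hypothetical sketch: combine the IRQ count with the APIC timer's
 * countdown register so the timecounter advances between tick IRQs.
 * The timer counts down from lapic_reload to 0, so the ticks elapsed
 * since the last IRQ are (lapic_reload - lapic_current).
 */
uint64_t tc_combined_count(uint64_t irqs_so_far, uint32_t lapic_reload,
                           uint32_t lapic_current)
{
    return irqs_so_far * (uint64_t)lapic_reload
           + (lapic_reload - lapic_current);
}
```

With this, two reads within the same tick period still return increasing values, which should smooth out the whole-second jumps seen above.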

Let me know if you have any suggestions!

Patches incoming soon too, so I'd appreciate reviews! :)

On Thu, Aug 9, 2018 at 8:12 PM, Joel Sherrill  wrote:
>
>
> On Thu, Aug 9, 2018 at 7:43 AM, Amaan Cheval  wrote:
>>
>> Addition to status: it doesn't seem like the RTEMS interrupt's call to
>> _Thread_Dispatch works either - ticker.exe has outputs like the
>> following (yeah, the counter is running too quickly right now):
>>
>> *** BEGIN OF TEST CLOCK TICK ***
>> *** TEST VERSION: 5.0.0.2f10634899719c2857e2c8dd5088fb93a425fc83-modified
>> *** TEST STATE: EXPECTED-PASS
>> *** TEST BUILD: RTEMS_NETWORKING RTEMS_POSIX_API
>> *** TEST TOOLS: 7.3.0 20180125 (RTEMS 5, RSB
>> 25f4db09c85a52fb1640a29f9bdc2de8c2768988, Newlib 3.0.0)
>> TA1  - rtems_clock_get_tod - 11:34:12   05/12/1990
>> TA2  - rtems_clock_get_tod - 11:34:12   05/12/1990
>> TA3  - rtems_clock_get_tod - 11:34:12   05/12/1990
>
>
> Congratulations! But why is the date 5/12/1990? I think it is supposed
> to be 12/31/1989. :)
>>
>>
>> (And then the _Thread_Idle_body is never preempted due to the
>> interrupt dispatching a new thread - not sure if it just thinks it's
>> "too late" to even bother or if simply never even tries. I'll keep
>> investigating.)
>
>
> This means your "outer" assembly language _ISR_Handler does not
> yet deal with "needs dispatch". On the 5 second tick, a task is unblocked
> and set up to preempt. The end of the ISR path has to be right to
> make this work.
>
> --joel
>>
>>
>> On Thu, Aug 9, 2018 at 6:03 PM, Amaan Cheval 
>> wrote:
>> > Hi everyone!
>> >
>> > Good news! The APIC timer _does_ work now (after implementing 1GiB
>> > pages)! I see Clock_isr_ticks increasing steadily, though I don't have
>> > tc_get_timecount implemented yet - I've yet to figure out the
>> > specifics of the clock driver (how
>> > rtems_configuration_get

Re: [GSoC - x86_64] Interrupt manager and port-specific glue - was Re: [GSoC - x86_64 - automake] Limit CFLAGS to specific source for librtemsbsp.a

2018-08-09 Thread Amaan Cheval
Addition to status: it doesn't seem like the RTEMS interrupt's call to
_Thread_Dispatch works either - ticker.exe has outputs like the
following (yeah, the counter is running too quickly right now):

*** BEGIN OF TEST CLOCK TICK ***
*** TEST VERSION: 5.0.0.2f10634899719c2857e2c8dd5088fb93a425fc83-modified
*** TEST STATE: EXPECTED-PASS
*** TEST BUILD: RTEMS_NETWORKING RTEMS_POSIX_API
*** TEST TOOLS: 7.3.0 20180125 (RTEMS 5, RSB
25f4db09c85a52fb1640a29f9bdc2de8c2768988, Newlib 3.0.0)
TA1  - rtems_clock_get_tod - 11:34:12   05/12/1990
TA2  - rtems_clock_get_tod - 11:34:12   05/12/1990
TA3  - rtems_clock_get_tod - 11:34:12   05/12/1990

(And then the _Thread_Idle_body is never preempted due to the
interrupt dispatching a new thread - not sure if it just thinks it's
"too late" to even bother or if simply never even tries. I'll keep
investigating.)

On Thu, Aug 9, 2018 at 6:03 PM, Amaan Cheval  wrote:
> Hi everyone!
>
> Good news! The APIC timer _does_ work now (after implementing 1GiB
> pages)! I see Clock_isr_ticks increasing steadily, though I don't have
> tc_get_timecount implemented yet - I've yet to figure out the
> specifics of the clock driver (how
> rtems_configuration_get_microseconds_per_tick influences the
> counter_ticks, specifically).
>
> I suspect we'll barely just make ticker.exe work by EOD tomorrow,
> leaving just the weekend for me to clean the patches up and Monday to
> actually merge them.
>
> Would someone be willing to have a meeting on Hangouts (or whatever)
> with me to speed up the process of (1) upstreaming my patches and (2)
> checking that my "work package" looks good enough at any convenient
> time on Monday?
>
> (I'm a bit busy on Monday, so I'd really prefer to have this whole
> thing done by EOD Monday for me.)
>
> On Thu, Aug 9, 2018 at 7:03 AM, Gedare Bloom  wrote:
>> On Wed, Aug 8, 2018 at 12:21 PM, Amaan Cheval  wrote:
>>> Status update: The code is at a point where the APIC timer _should_
>>> work, but doesn't (it never starts ticking away, so when calibrating
>>> with the PIT, and later starting the APIC timer to generate IRQs,
>>> pretty much nothing happens).
>>>
>>> I suspect the cause is the APIC base relocation not working (the
>>> APIC is located at 0xfee00000 in physical memory by default, and in
>>> the code we write to an MSR to relocate it, because the page-mapping
>>> scheme FreeBSD set up doesn't let us access such high physical memory -
>>> only the first 1GiB of physical memory).
>>>
>>> On QEMU, the MSR accepts our write for the relocation and happily
>>> spits it back out when read, but given the unresponsiveness of the
>>> APIC timer despite enabling all the right bits, I suspect it's just a
>>> "fake" in that regard (QEMU's "info lapic" doesn't reflect any of our
>>> changes to the APIC configuration either, supporting this theory).
>>> QEMU _does_ reflect changes to the APIC by other operating systems
>>> which don't relocate it, so I don't suspect its emulation being a
>>> problem.
>>>
>>> On VirtualBox, the MSR simply silently swallows the write, and upon a
>>> read, returns the original 0xfee00000 value again. This means that if
>>> we can't relocate it, we can't access it at the moment either.
>>>
>>> The only real way to work around this is to have a paging scheme that
>>> lets us access physical address 0xfee00000 - in that case, we could
>>> support page-faults and dynamically map pages in, _or_ have static
>>> pages that are absurdly large (such as 1GiB), letting the virtual
>>> address do the heavy-lifting in terms of finding the
>>> virtual-to-physical mapping.
>>>
>>
>> I recommend a few static super pages to get it working. It is simple
>> and fits the prevailing RTEMS model.
>>
>>> Either way, hitting this issue this close to the deadline basically
>>> means the APIC timer won't be functional or make it upstream.
>>>
>>> I'll clean things up and send patches tomorrow for everything so far,
>>> including all the stub-code which will become usable once our paging
>>> scheme works fine.
>>>
>>> If anyone has any last-minute swooping ideas on how to save the APIC
>>> timer, let me know! (Interrupts aren't masked, and as far as I can
>>> tell, changing the "-cpu" flag on QEMU doesn't make a difference. I
>>> don't have any ideas as to what else the problem could be.)
>>>
>>> In my final report, I'll make sure I document what's remaining in
>>> clearer terms than I have in this email, 

Re: [GSoC - x86_64] Interrupt manager and port-specific glue - was Re: [GSoC - x86_64 - automake] Limit CFLAGS to specific source for librtemsbsp.a

2018-08-09 Thread Amaan Cheval
Hi everyone!

Good news! The APIC timer _does_ work now (after implementing 1GiB
pages)! I see Clock_isr_ticks increasing steadily, though I don't have
tc_get_timecount implemented yet - I've yet to figure out the
specifics of the clock driver (how
rtems_configuration_get_microseconds_per_tick influences the
counter_ticks, specifically).
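One plausible linkage (an assumption on my part, not the actual driver code): the PIT calibration yields APIC timer ticks per second, and the configured microseconds-per-tick then scales that into the APIC timer's initial-count (reload) value:

```c
#include <stdint.h>

/*
 * Sketch: derive the APIC timer reload value from the configured tick
 * length.  apic_ticks_per_sec would come from the PIT-based calibration;
 * the function name is illustrative, not the BSP's actual code.
 */
uint32_t apic_reload_for_tick(uint64_t apic_ticks_per_sec,
                              uint64_t microseconds_per_tick)
{
    /* ticks/sec * (us/tick) / (us/sec) = APIC ticks per clock tick */
    return (uint32_t)(apic_ticks_per_sec * microseconds_per_tick / 1000000u);
}
```

E.g. with the ~0x57c60 (359520) ticks/sec from the log above and a 10 ms tick, the reload value would be 3595.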

I suspect we'll barely just make ticker.exe work by EOD tomorrow,
leaving just the weekend for me to clean the patches up and Monday to
actually merge them.

Would someone be willing to have a meeting on Hangouts (or whatever)
with me to speed up the process of (1) upstreaming my patches and (2)
checking that my "work package" looks good enough at any convenient
time on Monday?

(I'm a bit busy on Monday, so I'd really prefer to have this whole
thing done by EOD Monday for me.)

On Thu, Aug 9, 2018 at 7:03 AM, Gedare Bloom  wrote:
> On Wed, Aug 8, 2018 at 12:21 PM, Amaan Cheval  wrote:
>> Status update: The code is at a point where the APIC timer _should_
>> work, but doesn't (it never starts ticking away, so when calibrating
>> with the PIT, and later starting the APIC timer to generate IRQs,
>> pretty much nothing happens).
>>
>> I suspect the cause is the APIC base relocation not working (the
>> APIC is located at 0xfee00000 in physical memory by default, and in
>> the code we write to an MSR to relocate it, because the page-mapping
>> scheme FreeBSD set up doesn't let us access such high physical memory -
>> only the first 1GiB of physical memory).
>>
>> On QEMU, the MSR accepts our write for the relocation and happily
>> spits it back out when read, but given the unresponsiveness of the
>> APIC timer despite enabling all the right bits, I suspect it's just a
>> "fake" in that regard (QEMU's "info lapic" doesn't reflect any of our
>> changes to the APIC configuration either, supporting this theory).
>> QEMU _does_ reflect changes to the APIC by other operating systems
>> which don't relocate it, so I don't suspect its emulation being a
>> problem.
>>
>> On VirtualBox, the MSR simply silently swallows the write, and upon a
>> read, returns the original 0xfee00000 value again. This means that if
>> we can't relocate it, we can't access it at the moment either.
>>
>> The only real way to work around this is to have a paging scheme that
>> lets us access physical address 0xfee00000 - in that case, we could
>> support page-faults and dynamically map pages in, _or_ have static
>> pages that are absurdly large (such as 1GiB), letting the virtual
>> address do the heavy-lifting in terms of finding the
>> virtual-to-physical mapping.
>>
>
> I recommend a few static super pages to get it working. It is simple
> and fits the prevailing RTEMS model.
>
>> Either way, hitting this issue this close to the deadline basically
>> means the APIC timer won't be functional or make it upstream.
>>
>> I'll clean things up and send patches tomorrow for everything so far,
>> including all the stub-code which will become usable once our paging
>> scheme works fine.
>>
>> If anyone has any last-minute swooping ideas on how to save the APIC
>> timer, let me know! (Interrupts aren't masked, and as far as I can
>> tell, changing the "-cpu" flag on QEMU doesn't make a difference. I
>> don't have any ideas as to what else the problem could be.)
>>
>> In my final report, I'll make sure I document what's remaining in
>> clearer terms than I have in this email, so it's easier for other
>> contributors to pick it up too, if any are interested.
>>
>> 
>>
>> On Tue, Aug 7, 2018 at 6:03 AM, Chris Johns  wrote:
>>> On 07/08/2018 09:27, Joel Sherrill wrote:
>>>> On Mon, Aug 6, 2018 at 8:13 AM, Amaan Cheval wrote:
>>>>
>>>> Thanks for all the help! I have a simple test using the RTEMS
>>>> interrupt manager working successfully (tested by calling
>>>> rtems_interrupt_handler_install for vector 0, and then triggering a
>>>> divide-by-0 exception).
>>>>
>>>> Yeah!
>>>>
>>>> Could someone shed any light on why the i386 only hooks the first 17
>>>> vectors as "RTEMS interrupts"?
>>>>
>>>> You are making me feel very old especially since I have the real
>>>> IBM manual in my office which corresponds to the answer.
>>>
>>> Grandchildren, grey hair or Sebastian posting he is feeling old do not make 
>>> you
>>> feel old? Interesting! ;) :)
>>>
> I fee

Re: [GSoC - x86_64] Interrupt manager and port-specific glue - was Re: [GSoC - x86_64 - automake] Limit CFLAGS to specific source for librtemsbsp.a

2018-08-08 Thread Amaan Cheval
Status update: The code is at a point where the APIC timer _should_
work, but doesn't (it never starts ticking away, so when calibrating
with the PIT, and later starting the APIC timer to generate IRQs,
pretty much nothing happens).

I suspect the cause is the APIC base relocation not working (the
APIC is located at 0xfee00000 in physical memory by default, and in
the code we write to an MSR to relocate it, because the page-mapping
scheme FreeBSD set up doesn't let us access such high physical memory -
only the first 1GiB of physical memory).
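For reference, the relocation described here goes through the IA32_APIC_BASE MSR (0x1B). A hedged sketch of the access pattern - the helper names are mine, and the `rdmsr`/`wrmsr` wrappers are privileged (ring 0 only):

```c
#include <stdint.h>

#define IA32_APIC_BASE_MSR 0x1b
#define APIC_BASE_ENABLE   (1ull << 11)   /* xAPIC global enable bit */

/* Privileged: these fault outside ring 0; shown for illustration only. */
static inline uint64_t rdmsr(uint32_t msr)
{
    uint32_t lo, hi;
    __asm__ volatile ("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
    return ((uint64_t)hi << 32) | lo;
}

static inline void wrmsr(uint32_t msr, uint64_t val)
{
    __asm__ volatile ("wrmsr" : : "c"(msr),
                      "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
}

/* Pure helper: extract the 4KiB-aligned APIC base from the MSR value,
 * masking off the enable/BSP flag bits in the low 12 bits. */
uint64_t apic_base_from_msr(uint64_t msr_val)
{
    return msr_val & 0x000ffffffffff000ull;
}
```

As described above, a hypervisor is free to ignore the write: reading the MSR back (as VirtualBox shows) is the only cheap sanity check, and even a successful read-back (as on QEMU) doesn't guarantee the APIC actually moved.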

On QEMU, the MSR accepts our write for the relocation and happily
spits it back out when read, but given the unresponsiveness of the
APIC timer despite enabling all the right bits, I suspect it's just a
"fake" in that regard (QEMU's "info lapic" doesn't reflect any of our
changes to the APIC configuration either, supporting this theory).
QEMU _does_ reflect changes to the APIC by other operating systems
which don't relocate it, so I don't suspect its emulation being a
problem.

On VirtualBox, the MSR simply silently swallows the write, and upon a
read, returns the original 0xfee00000 value again. This means that if
we can't relocate it, we can't access it at the moment either.

The only real way to work around this is to have a paging scheme that
lets us access physical address 0xfee00000 - in that case, we could
support page-faults and dynamically map pages in, _or_ have static
pages that are absurdly large (such as 1GiB), letting the virtual
address do the heavy-lifting in terms of finding the
virtual-to-physical mapping.
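The 1GiB-page option can be sketched as a single PDPT entry that identity-maps the gigabyte containing the APIC. This is an illustrative computation of the entry's bits, not the BSP's actual page-table code:

```c
#include <stdint.h>

#define PG_PRESENT 0x1ull
#define PG_RW      0x2ull
#define PG_PS      0x80ull        /* page-size bit: 1GiB page in a PDPTE */
#define GIB        (1ull << 30)

/*
 * Sketch: build a PDPT entry identity-mapping the 1GiB region that
 * contains phys, so e.g. the APIC at 0xfee00000 becomes addressable
 * without relocating it at all.
 */
uint64_t pdpte_for_1gib(uint64_t phys)
{
    return (phys & ~(GIB - 1)) | PG_PRESENT | PG_RW | PG_PS;
}
```

Note that 1GiB pages require CPUID support (the "1gib pages not supported!" log line elsewhere in this thread is checking exactly that); on CPUs without them, 2MiB pages at the PD level are the fallback.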

Either way, hitting this issue this close to the deadline basically
means the APIC timer won't be functional or make it upstream.

I'll clean things up and send patches tomorrow for everything so far,
including all the stub-code which will become usable once our paging
scheme works fine.

If anyone has any last-minute swooping ideas on how to save the APIC
timer, let me know! (Interrupts aren't masked, and as far as I can
tell, changing the "-cpu" flag on QEMU doesn't make a difference. I
don't have any ideas as to what else the problem could be.)

In my final report, I'll make sure I document what's remaining in
clearer terms than I have in this email, so it's easier for other
contributors to pick it up too, if any are interested.



On Tue, Aug 7, 2018 at 6:03 AM, Chris Johns  wrote:
> On 07/08/2018 09:27, Joel Sherrill wrote:
>> On Mon, Aug 6, 2018 at 8:13 AM, Amaan Cheval wrote:
>>
>> Thanks for all the help! I have a simple test using the RTEMS
>> interrupt manager working successfully (tested by calling
>> rtems_interrupt_handler_install for vector 0, and then triggering a
>> divide-by-0 exception).
>>
>> Yeah!
>>
>> Could someone shed any light on why the i386 only hooks the first 17
>> vectors as "RTEMS interrupts"?
>>
>> You are making me feel very old especially since I have the real
>> IBM manual in my office which corresponds to the answer.
>
> Grandchildren, grey hair or Sebastian posting he is feeling old do not make 
> you
> feel old? Interesting! ;) :)
>
>> It is dated Sept 1985. In fairness, I saved it from the garbage heap
>> years later when someone was cleaning out their office. :)
>
> Ah the good old days before the internet and search engines!!
>
>> The x86 architecture is really vectored and the original i386
>> port actually used simple direct vectoring since the first BSP wasn't
>> a PC. Imagine that!  Another board using an i386 which didn't
>> look like a PC at all.
>>
>> For better or worse, the PC/AT (286) and later used two i8259 PICs
>> in a master and slave configuration. The slave PIC cascaded off the
>> master PIC. This all fed into one CPU IRQ so many of the direct
>> vectors were unused. The PIC arrangement is described here:
>>
>> https://en.wikipedia.org/wiki/Interrupt_request_(PC_architecture)
>>
>> Here's what I'm aiming to get done before the GSoC deadline:
>>
>> - Remap PIC (masking/disabling the PIC doesn't stop it from generating
>> spurious interrupts (IRQ7), which would look like exceptions to us)
>> - Disable PIC
>> - Enable APIC (done already, but confirm it plays well with the recent
>> changes to the IDT)
>> - Enable the PIT timer and use it to calibrate the APIC timer
>> - Clock driver using the APIC timer - (1) generate interrupts on ticks
>> and (2) a tc_get_timecount function which calculates total time passed
>> as (number of IRQs occurred * time_per_irq +
>> time_since_last_irq (from the tick counter))
>>
>> This does seem a bit

Re: [GSoC - x86_64] Interrupt manager and port-specific glue - was Re: [GSoC - x86_64 - automake] Limit CFLAGS to specific source for librtemsbsp.a

2018-08-06 Thread Amaan Cheval
Thanks for all the help! I have a simple test using the RTEMS
interrupt manager working successfully (tested by calling
rtems_interrupt_handler_install for vector 0, and then triggering a
divide-by-0 exception).

Could someone shed any light on why the i386 only hooks the first 17
vectors as "RTEMS interrupts"?

Here's what I'm aiming to get done before the GSoC deadline:

- Remap PIC (masking/disabling the PIC doesn't stop it from generating
spurious interrupts (IRQ7), which would look like exceptions to us)
- Disable PIC
- Enable APIC (done already, but confirm it plays well with the recent
changes to the IDT)
- Enable the PIT timer and use it to calibrate the APIC timer
- Clock driver using the APIC timer - (1) generate interrupts on ticks
and (2) a tc_get_timecount function which calculates total time passed
as (number of IRQs occurred * time_per_irq +
time_since_last_irq (from the tick counter))
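The PIC remap/disable steps above would look roughly like the classic 8259 initialization sequence. This is a sketch assuming the conventional 0x20/0x28 vector bases, not the BSP's actual code:

```c
#include <stdint.h>

#define PIC1_BASE 0x20   /* master PIC: vectors 0x20..0x27 (assumed bases) */
#define PIC2_BASE 0x28   /* slave PIC:  vectors 0x28..0x2f */

/* Port I/O; privileged, shown for illustration only. */
static inline void outb(uint16_t port, uint8_t val)
{
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

/* Remap both 8259 PICs away from the CPU-exception vectors, then mask
 * every IRQ line.  A spurious IRQ7 then arrives at PIC1_BASE + 7 instead
 * of looking like an exception. */
void pic_remap_and_disable(void)
{
    outb(0x20, 0x11); outb(0xa0, 0x11);            /* ICW1: init, ICW4 follows */
    outb(0x21, PIC1_BASE); outb(0xa1, PIC2_BASE);  /* ICW2: vector bases */
    outb(0x21, 0x04); outb(0xa1, 0x02);            /* ICW3: cascade wiring */
    outb(0x21, 0x01); outb(0xa1, 0x01);            /* ICW4: 8086 mode */
    outb(0x21, 0xff); outb(0xa1, 0xff);            /* mask all lines */
}

/* Pure helper: the IDT vector an IRQ line lands on after remapping. */
uint8_t pic_vector_for_irq(uint8_t irq)
{
    return irq < 8 ? (uint8_t)(PIC1_BASE + irq)
                   : (uint8_t)(PIC2_BASE + (irq - 8));
}
```

The masking alone doesn't stop spurious interrupts (as noted in the list above), which is why the remap has to happen even if the PIC is otherwise unused.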

This does seem a bit ambitious given how short we are on time - I'll
finish this up even after the deadline if need be.

What should our minimum deliverable be for this period? Should we try
to upstream the interrupt support before I finish the clock driver? (I
think we can have this discussion on Wednesday or so, since by then
I'll likely know how much progress on the clock driver remains.)

Sorry about the rush near the deadline - getting the APIC functioning
had a bunch more yak-shaving style issues than I anticipated.

On Wed, Aug 1, 2018 at 10:51 PM, Joel Sherrill  wrote:
> I have started to reply twice but you jumped in ahead. :)
>
> On Wed, Aug 1, 2018 at 12:12 PM, Amaan Cheval 
> wrote:
>>
>> If my previous email _is_ in fact correct, could someone confirm?
>> Because this excerpt in the documentation here seems to contradict it
>> (which was what led to the confusion in the first place):
>>
>>
>> https://docs.rtems.org/branches/master/c-user/interrupt_manager.html#establishing-an-isr
>>
>> With my emphasis:
>>
>> > The rtems_interrupt_catch directive establishes an ISR for the system.
>> > The address of the ISR and its associated CPU vector number are specified 
>> > to
>> > this directive. This directive installs the **RTEMS interrupt wrapper in 
>> > the
>> > processor’s Interrupt Vector Table and the address of the user’s ISR in the
>> > RTEMS’ Vector Table**. This directive returns the previous contents of the
>> > specified vector in the RTEMS’ Vector Table.
>
>
> Almost but Gedare and I left out a detail. rtems_interrupt_catch is ONLY
> used on pure simple vectored architectures which do not use the
> bsp_interrupt_*
> or  interfaces. Some embedded MCUs are so simple and have
> plenty of vectors so you don't need the complexity of supporting a PIC. For
> example, the m68k family had 256 direct vectors and I don't recall ever
> seeing
> a PIC.[1]
>
> You should assume that you can ignore rtems_interrupt_catch and simple
> vectored support for x86_64. See cpukit/rtems/intrcatch.c and I hope you
> see an ifdef that results in the code disappearing on your port. Simple
> vectored is FALSE in your cpu.h. :)
>
> [1] Disclaimer: The support for the  interfaces is critical to
> the
> libbsd stack. We haven't discussed it but any architecture that is
> sufficient
> to run the new stack will have to support this interface. If someone wants
> the
> new stack on a 68040 VME board or a Coldfire board, we will have to find
> the simplest, non-bloated way to support this. When doing the MIPS Malta,
> we just converted the MIPS architecture away from simple vectored.
>
> So support the bsp_interrupt_* infrastructure. :)
>
> --joel
>
>>
>>
>> On Wed, Aug 1, 2018 at 10:39 PM, Amaan Cheval 
>> wrote:
>> > Okay, I think I understand finally. Sorry about the rambling!
>> >
>> > When rtems_interrupt_catch is called, that's installing a "raw" ISR by
>> > modifying the processor specific table itself, so _ISR_Handler is
>> > never called, but the user ISR is.
>> >
>> > When rtems_interrupt_handler_install is called, that's an "RTEMS
>> > interrupt", and we go through the _ISR_Handler -> dispatch route I
>> > laid out earlier, leading to eventually the user's ISR.
>> >
>> > Thank you for letting me rubber-duck with you, everyone (let me know
>> > if anything above sounds off, though!) :P
>> >
>> > On Wed, Aug 1, 2018 at 10:20 PM, Amaan Cheval 
>> > wrote:
>> >> Thanks for the background!
>> >>
>> >> Let's use the gen5200 as the ongoing example - my confusion arises
>> >> here (correct me if any of the following points is incorrect!

Re: [GSoC - x86_64] Interrupt manager and port-specific glue - was Re: [GSoC - x86_64 - automake] Limit CFLAGS to specific source for librtemsbsp.a

2018-08-01 Thread Amaan Cheval
If my previous email _is_ in fact correct, could someone confirm?
Because this excerpt in the documentation here seems to contradict it
(which was what led to the confusion in the first place):

https://docs.rtems.org/branches/master/c-user/interrupt_manager.html#establishing-an-isr

With my emphasis:

> The rtems_interrupt_catch directive establishes an ISR for the system. The 
> address of the ISR and its associated CPU vector number are specified to this 
> directive. This directive installs the **RTEMS interrupt wrapper in the 
> processor’s Interrupt Vector Table and the address of the user’s ISR in the 
> RTEMS’ Vector Table**. This directive returns the previous contents of the 
> specified vector in the RTEMS’ Vector Table.

On Wed, Aug 1, 2018 at 10:39 PM, Amaan Cheval  wrote:
> Okay, I think I understand finally. Sorry about the rambling!
>
> When rtems_interrupt_catch is called, that's installing a "raw" ISR by
> modifying the processor specific table itself, so _ISR_Handler is
> never called, but the user ISR is.
>
> When rtems_interrupt_handler_install is called, that's an "RTEMS
> interrupt", and we go through the _ISR_Handler -> dispatch route I
> laid out earlier, leading to eventually the user's ISR.
>
> Thank you for letting me rubber-duck with you, everyone (let me know
> if anything above sounds off, though!) :P
>
> On Wed, Aug 1, 2018 at 10:20 PM, Amaan Cheval  wrote:
>> Thanks for the background!
>>
>> Let's use the gen5200 as the ongoing example - my confusion arises
>> here (correct me if any of the following points is incorrect!). In
>> overly simplified call chains:
>>
>> Register interrupt handler:
>> -  bsp_interrupt_facility_initialize() -> ppc_exc_set_handler(vec,
>> C_dispatch_irq_handler) -> ppc_exc_handler_table[vec] =
>> C_dispatch_irq_handler
>>
>> Interrupt handler called:
>> - C_dispatch_irq_handler -> dispatch -> bsp_interrupt_handler_dispatch
>> (irq-generic.h) -> bsp_interrupt_handler_table[index].handler()
>>
>> What I'm confused about is how the bsp_interrupt_handler_table is
>> updated at all - I just haven't found the link between how the entries
>> in the two tables are synchronized, where the tables are:
>>
>> 1) the ppc_exc_handler_table (the processor IDT) and
>> 2) the bsp_interrupt_handler_table (the RTEMS interrupt table)
>>
>> 
>> Another similar chain of confusion for i386 is:
>> - rtems_interrupt_catch (intrcatch.c) -> _ISR_Install_vector (isr.h)
>> -> _CPU_ISR_install_vector (i386/idt.c) -> idt_entry_tbl[vector]
>> updated
>>
>> But the i386 dispatch code chain is:
>> - _ISR_Handler (i386/irq_asm.S) -> BSP_dispatch_isr (i386/irq.c) ->
>> bsp_interrupt_handler_dispatch (irq-generic.h) ->
>> bsp_interrupt_handler_table[index].handler()
>>
>> But I don't see any updates to bsp_interrupt_handler_table that would
>> let this work.
>> -
>>
>> Would you happen to know what that "missing link" is?
>>
>> On Wed, Aug 1, 2018 at 9:07 PM, Joel Sherrill  wrote:
>>>
>>>
>>> On Wed, Aug 1, 2018 at 10:11 AM, Gedare Bloom  wrote:
>>>>
>>>> On Wed, Aug 1, 2018 at 9:15 AM, Amaan Cheval 
>>>> wrote:
>>>> > That's definitely very illuminating, thank you so much for all the
>>>> > details!
>>>> >
>>>> > A few more questions that have arisen for me. Feel free to skip over
>>>> > them (I'll likely figure them out given enough time, so I'm only
>>>> > asking in case any of them are obvious to anyone):
>>>> >
>>>> > - The i386 doesn't use CPU_Interrupt_frame at all. It seems like it
>>>> > stores some of the data onto the stack?
>>>> >
>>>> the interrupt frame structure was introduced during 4.11 development.
>>>> probably i386 never got updated to use a struct to encapsulate the
>>>> interrupt frame. the interrupt frame should contain the registers that
>>>> are preserved by the interrupt entry code I believe.
>>>
>>>
>>> +1
>>>
>>> Historically, there was no structure to represent the set of
>>> registers and information saved on interrupt entry. Over time
>>> this has been added.
>>>
>>> i386  also is missing the SMP synchronization check in the
>>> middle of the context which ensures it is safe for a thread to
>>> be migrated.
>>>

Re: [GSoC - x86_64] Interrupt manager and port-specific glue - was Re: [GSoC - x86_64 - automake] Limit CFLAGS to specific source for librtemsbsp.a

2018-08-01 Thread Amaan Cheval
Okay, I think I understand finally. Sorry about the rambling!

When rtems_interrupt_catch is called, that's installing a "raw" ISR by
modifying the processor specific table itself, so _ISR_Handler is
never called, but the user ISR is.

When rtems_interrupt_handler_install is called, that's an "RTEMS
interrupt", and we go through the _ISR_Handler -> dispatch route I
laid out earlier, leading to eventually the user's ISR.
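The "two tables" mechanism can be modeled in miniature: every IDT slot funnels into one common stub, while `rtems_interrupt_handler_install` fills a separate handler table that the stub dispatches through. A toy model of just that arrangement (deliberately simplified, not RTEMS code):

```c
#include <stddef.h>

#define VECTORS 32

typedef void (*handler_fn)(void *arg);

/* The "RTEMS side" table that handler_install() fills in; the CPU IDT
 * entries would all point at the common stub below. */
static struct { handler_fn handler; void *arg; } handler_table[VECTORS];

int handler_install(unsigned vector, handler_fn fn, void *arg)
{
    if (vector >= VECTORS || fn == NULL)
        return -1;
    handler_table[vector].handler = fn;
    handler_table[vector].arg = arg;
    return 0;
}

/* What a common _ISR_Handler-style stub would do per interrupt. */
void common_dispatch(unsigned vector)
{
    if (vector < VECTORS && handler_table[vector].handler != NULL)
        handler_table[vector].handler(handler_table[vector].arg);
}

/* Example handler for demonstration. */
void count_hits(void *arg)
{
    ++*(int *)arg;
}
```

So the two tables never need synchronizing: the processor table is fixed at init time to point at the stub, and only the software-side table changes at runtime.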

Thank you for letting me rubber-duck with you, everyone (let me know
if anything above sounds off, though!) :P

On Wed, Aug 1, 2018 at 10:20 PM, Amaan Cheval  wrote:
> Thanks for the background!
>
> Let's use the gen5200 as the ongoing example - my confusion arises
> here (correct me if any of the following points is incorrect!). In
> overly simplified call chains:
>
> Register interrupt handler:
> -  bsp_interrupt_facility_initialize() -> ppc_exc_set_handler(vec,
> C_dispatch_irq_handler) -> ppc_exc_handler_table[vec] =
> C_dispatch_irq_handler
>
> Interrupt handler called:
> - C_dispatch_irq_handler -> dispatch -> bsp_interrupt_handler_dispatch
> (irq-generic.h) -> bsp_interrupt_handler_table[index].handler()
>
> What I'm confused about is how the bsp_interrupt_handler_table is
> updated at all - I just haven't found the link between how the entries
> in the two tables are synchronized, where the tables are:
>
> 1) the ppc_exc_handler_table (the processor IDT) and
> 2) the bsp_interrupt_handler_table (the RTEMS interrupt table)
>
> 
> Another similar chain of confusion for i386 is:
> - rtems_interrupt_catch (intrcatch.c) -> _ISR_Install_vector (isr.h)
> -> _CPU_ISR_install_vector (i386/idt.c) -> idt_entry_tbl[vector]
> updated
>
> But the i386 dispatch code chain is:
> - _ISR_Handler (i386/irq_asm.S) -> BSP_dispatch_isr (i386/irq.c) ->
> bsp_interrupt_handler_dispatch (irq-generic.h) ->
> bsp_interrupt_handler_table[index].handler()
>
> But I don't see any updates to bsp_interrupt_handler_table that would
> let this work.
> -
>
> Would you happen to know what that "missing link" is?
>
> On Wed, Aug 1, 2018 at 9:07 PM, Joel Sherrill  wrote:
>>
>>
>> On Wed, Aug 1, 2018 at 10:11 AM, Gedare Bloom  wrote:
>>>
>>> On Wed, Aug 1, 2018 at 9:15 AM, Amaan Cheval 
>>> wrote:
>>> > That's definitely very illuminating, thank you so much for all the
>>> > details!
>>> >
>>> > A few more questions that have arisen for me. Feel free to skip over
>>> > them (I'll likely figure them out given enough time, so I'm only
>>> > asking in case any of them are obvious to anyone):
>>> >
>>> > - The i386 doesn't use CPU_Interrupt_frame at all. It seems like it
>>> > stores some of the data onto the stack?
>>> >
>>> the interrupt frame structure was introduced during 4.11 development.
>>> probably i386 never got updated to use a struct to encapsulate the
>>> interrupt frame. the interrupt frame should contain the registers that
>>> are preserved by the interrupt entry code I believe.
>>
>>
>> +1
>>
>> Historically, there was no structure to represent the set of
>> registers and information saved on interrupt entry. Over time
>> this has been added.
>>
>> i386  also is missing the SMP synchronization check in the
>> middle of the context which ensures it is safe for a thread to
>> be migrated.
>>
>>>
>>> > - There used to be defines in cpu.h regarding hardware/software based
>>> > interrupt stacks, and how they'd be setup, which were made
>>> > superfluous[1] - I'm not quite sure how these are meant to work - I
>>> > see references to "stack high" and "stack low" and I'm not quite sure
>>> > what the code is referencing when using those.
>>> >
>>>
>>> a hardware interrupt stack is one that the hardware switches to during
>>> an interrupt. i think m68k has such.
>>>
>>> most interrupt stacks in RTEMS are software-managed, meaning that
>>> RTEMS explicitly switches the stack region off the task stack and to
>>> an interrupt stack region.
>>>
>>> some stacks start high and grow down, and some stacks start low and
>>> grow up. maybe this is what the "stack high" and "stack low" you
>>> mention are in relation to?
>>
>>
>> They are used to denote the top and bottom of the memory reserved
>> for the interrupt s

Re: [GSoC - x86_64] Interrupt manager and and port-specific glue - was Re: [GSoC - x86_64 - automake] Limit CFLAGS to specific source for librtemsbsp.a

2018-08-01 Thread Amaan Cheval
Thanks for the background!

Let's use the gen5200 as the ongoing example - my confusion arises
here (correct me if any of the following points is incorrect!). In
overly simplified call chains:

Register interrupt handler:
-  bsp_interrupt_facility_initialize() -> ppc_exc_set_handler(vec,
C_dispatch_irq_handler) -> ppc_exc_handler_table[vec] =
C_dispatch_irq_handler

Interrupt handler called:
- C_dispatch_irq_handler -> dispatch -> bsp_interrupt_handler_dispatch
(irq-generic.h) -> bsp_interrupt_handler_table[index].handler()

What I'm confused about is how the bsp_interrupt_handler_table is
updated at all - I just haven't found the link between how the entries
in the two tables are synchronized, where the tables are:

1) the ppc_exc_handler_table (the processor IDT) and
2) the bsp_interrupt_handler_table (the RTEMS interrupt table)


Another similar chain of confusion for i386 is:
- rtems_interrupt_catch (intrcatch.c) -> _ISR_Install_vector (isr.h)
-> _CPU_ISR_install_vector (i386/idt.c) -> idt_entry_tbl[vector]
updated

But the i386 dispatch code chain is:
- _ISR_Handler (i386/irq_asm.S) -> BSP_dispatch_isr (i386/irq.c) ->
bsp_interrupt_handler_dispatch (irq-generic.h) ->
bsp_interrupt_handler_table[index].handler()

But I don't see any updates to bsp_interrupt_handler_table that would
let this work.
-

Would you happen to know what that "missing link" is?

On Wed, Aug 1, 2018 at 9:07 PM, Joel Sherrill  wrote:
>
>
> On Wed, Aug 1, 2018 at 10:11 AM, Gedare Bloom  wrote:
>>
>> On Wed, Aug 1, 2018 at 9:15 AM, Amaan Cheval 
>> wrote:
>> > That's definitely very illuminating, thank you so much for all the
>> > details!
>> >
>> > A few more questions that have arisen for me. Feel free to skip over
>> > them (I'll likely figure them out given enough time, so I'm only
>> > asking in case any of them are obvious to anyone):
>> >
>> > - The i386 doesn't use CPU_Interrupt_frame at all. It seems like it
>> > stores some of the data onto the stack?
>> >
>> the interrupt frame structure was introduced during 4.11 development.
>> probably i386 never got updated to use a struct to encapsulate the
>> interrupt frame. the interrupt frame should contain the registers that
>> are preserved by the interrupt entry code I believe.
>
>
> +1
>
> Historically, there was no structure to represent the set of
> registers and information saved on interrupt entry. Over time
> this has been added.
>
> i386  also is missing the SMP synchronization check in the
> middle of the context which ensures it is safe for a thread to
> be migrated.
>
>>
>> > - There used to be defines in cpu.h regarding hardware/software based
>> > interrupt stacks, and how they'd be setup, which were made
>> > superfluous[1] - I'm not quite sure how these are meant to work - I
>> > see references to "stack high" and "stack low" and I'm not quite sure
>> > what the code is referencing when using those.
>> >
>>
>> a hardware interrupt stack is one that the hardware switches to during
>> an interrupt. i think m68k has such.
>>
>> most interrupt stacks in RTEMS are software-managed, meaning that
>> RTEMS explicitly switches the stack region off the task stack and to
>> an interrupt stack region.
>>
>> some stacks start high and grow down, and some stacks start low and
>> grow up. maybe this is what the "stack high" and "stack low" you
>> mention are in relation to?
>
>
> They are used to denote the top and bottom of the memory reserved
> for the interrupt stack. One important use is in
> cpukit/libmisc/stackchk/check.c
> to report on usage.
>
>>
>>
>> > - c/src/lib/libbsp/no_cpu/no_bsp/Makefile.am doesn't include
>> > irq-sources.am, by the way (this is part of why I used to think a lot
>> > of what your email mentioned was unnecessary, until you...ahem,
>> > pre-empted that line of thought and helped clarify it :P). Should I
>> > add a ticket to update the no_bsp code to be more in line with current
>> > use?
>> >
>> Sure. I don't know that anyone is in particular maintaining
>> no_cpu/no_bsp since we can't compile it, it is basically best effort
>> stuff that sometimes we miss updating.
>
>
> +1
>
> Also there are variations based on simple vectored and PIC vectored
> architectures.
>
> The architecture is responsible for managing the minimal actions
> based on what the CPU does for an interrupt/exception. Logicall

[GSoC - x86_64] Interrupt manager and and port-specific glue - was Re: [GSoC - x86_64 - automake] Limit CFLAGS to specific source for librtemsbsp.a

2018-08-01 Thread Amaan Cheval
That's definitely very illuminating, thank you so much for all the details!

A few more questions that have arisen for me. Feel free to skip over
them (I'll likely figure them out given enough time, so I'm only
asking in case any of them are obvious to anyone):

- The i386 doesn't use CPU_Interrupt_frame at all. It seems like it
stores some of the data onto the stack?

- There used to be defines in cpu.h regarding hardware/software based
interrupt stacks, and how they'd be setup, which were made
superfluous[1] - I'm not quite sure how these are meant to work - I
see references to "stack high" and "stack low" and I'm not quite sure
what the code is referencing when using those.

- c/src/lib/libbsp/no_cpu/no_bsp/Makefile.am doesn't include
irq-sources.am, by the way (this is part of why I used to think a lot
of what your email mentioned was unnecessary, until you...ahem,
pre-empted that line of thought and helped clarify it :P). Should I
add a ticket to update the no_bsp code to be more in line with current
use?

- My understanding of _ISR_Handler is that it'll be the handler for
_all_ interrupt vectors by default - it'll then dispatch interrupts to
user-handlers (or internal handlers, for the timer, for eg.). Is that
right? (I don't quite understand its interaction with the RTEMS
interrupt manager yet, but irq-generic's "bsp_interrupt_handler_table"
seems to be the RTEMS equivalent to the processor-specific vector
table, and "bsp_interrupt_handler_dispatch" seems to call the actual
handler within that table as appropriate. Accurate?) (I just haven't
found how that table actually gets its handlers setup besides during
initialization, since rtems_interrupt_catch just calls
_CPU_install_vector, which updates the processor vector table, not the
RTEMS interrupt manager vector table.)

- My understanding of the interaction between RTEMS' interrupt manager
(i.e. support for nested interrupts and thread dispatch once an
interrupt ends) and the BSP's processor-specific interrupt manager
(code to use the APIC and IDT in my case) is that they're tied
together through the use of irq-generic.c's "bsp_interrupt_initialize"
- is that right? (m68k never seems to call it, though, so perhaps
not?)

Sorry about the rambling! To reiterate, I'll likely figure it out
given enough time, so if the answers aren't at the top of your head, I
can figure it out without wasting your time :)

[1] https://devel.rtems.org/ticket/3459#comment:11

On Wed, Aug 1, 2018 at 3:18 AM, Joel Sherrill  wrote:
>
>
> On Tue, Jul 31, 2018 at 3:05 PM, Amaan Cheval 
> wrote:
>>
>> Hm, I'm not sure what to look for in the other ports specifically, really.
>> The BSP porting documentation doesn't have a section on interrupts, so I'm
>> doing this on more of an "as it comes up" basis.
>>
>> What I've got right now (the interrupt handlers in C) are what I need for
>> calibrating the APIC timer (through the PIT) - so simply hooking IRQ0 (for
>> the timer) and IRQ7 (spurious vector), since those are needed for the timer
>> work to continue.
>>
>> What constitutes a requirement for basic interrupt support?
>
>
> There used to be a generic porting guide. I can see that this particular
> section
> has bit rotted some, but the interrupt dispatching section is still worth reading. Some of this
> will have evolved to support SMP and fine grained locking but the
> pseudo-code
> here will give you a push toward the right line of thinking:
>
> https://docs.rtems.org/releases/rtemsdocs-4.10.2/share/rtems/html/porting/porting00034.html
>
> The idea is that you need to ensure RTEMS knows it is inside an interrupt
> and the current locking scheme (old was dispatching, new is ...) is honored.
>
> The ARM and PowerPC (plus RISCV) are good ports to look at for how SMP
> plays into this. But the CPU supplement is thin for their interrupt
> processing.
>
>
> This is the CPU Architecture supplement section for the m68k. This is a
> relatively simple
> architecture to describe. There is also a section for the i386 which reads
> similarly.
>
> https://docs.rtems.org/branches/master/cpu-supplement/m68xxx_and_coldfire.html#interrupt-processing
>
> Personally, I find the m68k a fairly easy processor to read assembly in.
> Look at cpukit/score/cpu/m68k/cpu_asm.S and _ISR_Handler to see what
> is done there w/o SMP. On the m68k _ISR_Handler is directly put into the
> vector table. But this isn't the most similar example for you.
>
> For the i386 (better example), it is in bsps/i386/shared/irq/irq_asm.S with
> the
> same name. There _ISR_Handler is installed via the DISTINCT_INTERRUPT_ENTRY
> macros at the bottom of the file where some prologue jumps to the common
> _ISR_Handler and then the actions are similar. Usually _ISR_Handler type of
> code ends up

Re: [GSoC - x86_64 - automake] Limit CFLAGS to specific source for librtemsbsp.a

2018-07-31 Thread Amaan Cheval
Hm, I'm not sure what to look for in the other ports specifically, really.
The BSP porting documentation doesn't have a section on interrupts, so I'm
doing this on more of an "as it comes up" basis.

What I've got right now (the interrupt handlers in C) are what I need for
calibrating the APIC timer (through the PIT) - so simply hooking IRQ0 (for
the timer) and IRQ7 (spurious vector), since those are needed for the timer
work to continue.

What constitutes a requirement for basic interrupt support?

On Wed, Aug 1, 2018, 1:29 AM Joel Sherrill  wrote:

>
>
> On Tue, Jul 31, 2018 at 2:52 PM, Amaan Cheval 
> wrote:
>
>> Hi Chris!
>>
>> I currently have code like this in
>> c/src/lib/libbsp/x86_64/amd64/Makefile.am:
>>
>> librtemsbsp_a_SOURCES +=
>> ../../../../../../bsps/x86_64/amd64/interrupts/handlers.c
>> # XXX: Needed to use GCC "interrupt" attribute directives - can we
>> pass these
>> # flags only for the handlers.c source file (compile to an object
>> file first and
>> # then link with the rest for librtemsbsp.a?)
>> librtemsbsp_a_CFLAGS = -mgeneral-regs-only
>>
>> The CFLAGS arg is required to allow us to use
>> "__attribute__((interrupt))" to set up interrupt handlers in C. (See
>> [1] and ctrl+f "interrupt" for more.)
>>
>> Is there a way to not force the CFLAGS for _all_ of librtemsbsp, but
>> to limit it only to handlers.c?
>>
>> If not, is the above code something that would be acceptable to have
>> upstream?
>>
>> [1]
>> https://gcc.gnu.org/onlinedocs/gcc/x86-Function-Attributes.html#x86-Function-Attributes
>
>
> Are we basically talking about the outermost layer of your interrupt
> dispatching?
>
>
> Have you looked at the basic approach taken by the other ports? They end
> up switching the stack pointer to a dedicated stack on the outermost
> interrupt
> and, if a context switch/dispatch is needed, arrange for the interrupted
> task to call _Thread_Dispatch. But tinker with its stack so some registers
> are saved and it looks like it made the call itself.
>
> If you can do it in C, I am ok with an attribute. I just don't think you
> can pull off all the stack and return to dispatch magic that way.
>
> --joel
>
>
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

[GSoC - x86_64 - automake] Limit CFLAGS to specific source for librtemsbsp.a

2018-07-31 Thread Amaan Cheval
Hi Chris!

I currently have code like this in c/src/lib/libbsp/x86_64/amd64/Makefile.am:

librtemsbsp_a_SOURCES +=
../../../../../../bsps/x86_64/amd64/interrupts/handlers.c
# XXX: Needed to use GCC "interrupt" attribute directives - can we
pass these
# flags only for the handlers.c source file (compile to an object
file first and
# then link with the rest for librtemsbsp.a?)
librtemsbsp_a_CFLAGS = -mgeneral-regs-only

The CFLAGS arg is required to allow us to use
"__attribute__((interrupt))" to set up interrupt handlers in C. (See
[1] and ctrl+f "interrupt" for more.)

Is there a way to not force the CFLAGS for _all_ of librtemsbsp, but
to limit it only to handlers.c?

If not, is the above code something that would be acceptable to have upstream?

[1] 
https://gcc.gnu.org/onlinedocs/gcc/x86-Function-Attributes.html#x86-Function-Attributes
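[Editorial aside: one commonly used automake idiom for scoping flags to a single source file is to compile that file into its own convenience archive with per-target CFLAGS, then fold the resulting object into the main library. The fragment below is an untested sketch against the RTEMS tree - the library name and LIBADD wiring are hypothetical:]

```makefile
# Sketch (untested): give handlers.c its own convenience archive so
# -mgeneral-regs-only applies to it alone, not all of librtemsbsp.a.
noinst_LIBRARIES += libhandlers.a
libhandlers_a_SOURCES = ../../../../../../bsps/x86_64/amd64/interrupts/handlers.c
libhandlers_a_CFLAGS = $(AM_CFLAGS) -mgeneral-regs-only

# Per-target CFLAGS make automake emit a renamed object
# (libhandlers_a-handlers.o); add that object to librtemsbsp.a
# instead of listing handlers.c in librtemsbsp_a_SOURCES.
librtemsbsp_a_LIBADD = libhandlers_a-handlers.$(OBJEXT)
```

The key point is that automake's per-target flags (librtemsbsp_a_CFLAGS) always apply to every object of that target, so per-file flags require a separate target.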


Re: [GSoC - x86_64] Clock driver - which hardware source to support primarily?

2018-07-30 Thread Amaan Cheval
Cool, that's the plan.

Yes, I believe it will limit the accessible RAM since page-faults never
occur (due to the repetitive mapping), and we never map anything beyond the
first 1 GiB of physical memory.

I think the simplest paging scheme later will be identity mapping virtual
to physical addresses. I'm just not sure of unintended consequences due to
this (for eg. with the linker script's structure influencing memory-mapped
devices). I guess we'll look into this when we come to it, though.

On Tue, Jul 31, 2018, 1:28 AM Joel Sherrill  wrote:

>
>
> On Mon, Jul 30, 2018, 2:27 PM Amaan Cheval  wrote:
>
>> Quick status update: in working on the APIC timer, as a prerequisite,
>> I've had to set up access to the Interrupt Descriptor Table (which is
>> great because it helps us have the basic interrupt support we need at
>> least).
>>
>> Another minor issue I've run into is the fact that the APIC is located
>> at physical address 0xFEE00000 by default on most x86 processors - per
>> the FreeBSD bootloader's paging scheme, they map every GiB of virtual
>> memory to the first 1 GiB of physical memory (0x40000000).
>>
>> As a workaround, I'll probably just move the APIC address to within
>> this accessible range (through a Model Specific Register) with an
>> "XXX" comment about creating a better paging scheme later.
>>
>> Let me know if anyone thinks better paging support should also come
>> first! If not, what's next is initializing the PIT to calibrate the
>> APIC timer, and then we should have a pretty nice and self-contained
>> clock driver for the x86_64 port too.
>>
>
> Make the clock tick and timer driver work first.
>
> Will this have any impact on the amount of RAM accessible until this is
> fixed?
>
>>
>> On Wed, Jul 18, 2018 at 7:53 PM, Gedare Bloom  wrote:
>> > On Wed, Jul 18, 2018 at 10:17 AM, Joel Sherrill  wrote:
>> >>
>> >>
>> >> On Wed, Jul 18, 2018 at 12:31 AM, Sebastian Huber
>> >>  wrote:
>> >>>
>> >>> Hello Amaan,
>> >>>
>> >>> On 17/07/18 19:18, Amaan Cheval wrote:
>> >>>>
>> >>>> Hi!
>> >>>>
>> >>>> Now that I'm working on the clock driver, we need to pick what we
>> >>>> support first. Our options in brief are:
>> >>>
>> >>>
>> >>> The clock driver needs an interrupt. What is the status of the
>> interrupt
>> >>> controller support in the BSP?
>> >>>
>> >>> For timekeeping we use a port of the FreeBSD timecounter in RTEMS.
>> You may
>> >>> have a look at the FreeBSD timecounter for this architecture, e.g.
>> >>> sys/x86/x86/tsc.c. It looks quite complicated. I would not take too
>> much care
>> >>> about legacy support, e.g. ignore hardware which is older than five
>> years?
>> >>
>> >>
>> >> That's not a good rule for PCs at all. The APIC was first introduced
>> as an
>> >> external controller with the i486,
>> >> Based on your rule, we wouldn't support it even though it is the most
>> likely
>> >> choice.
>> >>
>> >
>> > I believe he meant ignore hardware that is not available from products
>> > in the last five years.
>> >
>> >
>> >> Avoid things that are deemed legacy. The starting point for this is
>> the old
>> >> PC
>> >> System Design Guide.
>> >>
>> >> https://en.wikipedia.org/wiki/PC_System_Design_Guide
>> >>
>> >> If it was deemed obsolete in PC2001, then you definitely want to avoid
>> it.
>> >> Those
>> >> things are just now really disappearing.
>> >>
>> >
>> > This is consistent with my interpretation.
>> >
>> >> --joel
>> >>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Sebastian Huber, embedded brains GmbH
>> >>>
>> >>> Address : Dornierstr. 4, D-82178 Puchheim, Germany
>> >>> Phone   : +49 89 189 47 41-16
>> >>> Fax : +49 89 189 47 41-09
>> >>> E-Mail  : sebastian.hu...@embedded-brains.de
>> >>> PGP : Public key available on request.
>> >>>
>> >>> Diese Nachricht ist keine geschäftliche Mitteilung im Sinne des EHUG.
>> >>>
>> >>
>>
>

Re: [GSoC - x86_64] Clock driver - which hardware source to support primarily?

2018-07-30 Thread Amaan Cheval
Quick status update: in working on the APIC timer, as a prerequisite,
I've had to set up access to the Interrupt Descriptor Table (which is
great because it helps us have the basic interrupt support we need at
least).

Another minor issue I've run into is the fact that the APIC is located
at physical address 0xFEE00000 by default on most x86 processors - per
the FreeBSD bootloader's paging scheme, they map every GiB of virtual
memory to the first 1 GiB of physical memory (0x40000000).

As a workaround, I'll probably just move the APIC address to within
this accessible range (through a Model Specific Register) with an
"XXX" comment about creating a better paging scheme later.

Let me know if anyone thinks better paging support should also come
first! If not, what's next is initializing the PIT to calibrate the
APIC timer, and then we should have a pretty nice and self-contained
clock driver for the x86_64 port too.

On Wed, Jul 18, 2018 at 7:53 PM, Gedare Bloom  wrote:
> On Wed, Jul 18, 2018 at 10:17 AM, Joel Sherrill  wrote:
>>
>>
>> On Wed, Jul 18, 2018 at 12:31 AM, Sebastian Huber
>>  wrote:
>>>
>>> Hello Amaan,
>>>
>>> On 17/07/18 19:18, Amaan Cheval wrote:
>>>>
>>>> Hi!
>>>>
>>>> Now that I'm working on the clock driver, we need to pick what we
>>>> support first. Our options in brief are:
>>>
>>>
>>> The clock driver needs an interrupt. What is the status of the interrupt
>>> controller support in the BSP?
>>>
>>> For timekeeping we use a port of the FreeBSD timecounter in RTEMS. You may
>>> have a look at the FreeBSD timecounter for this architecture, e.g.
>>> sys/x86/x86/tsc.c. It looks quite complicated. I would not take too much care
>>> about legacy support, e.g. ignore hardware which is older than five years?
>>
>>
>> That's not a good rule for PCs at all. The APIC was first introduced as an
>> external controller with the i486,
>> Based on your rule, we wouldn't support it even though it is the most likely
>> choice.
>>
>
> I believe he meant ignore hardware that is not available from products
> in the last five years.
>
>
>> Avoid things that are deemed legacy. The starting point for this is the old
>> PC
>> System Design Guide.
>>
>> https://en.wikipedia.org/wiki/PC_System_Design_Guide
>>
>> If it was deemed obsolete in PC2001, then you definitely want to avoid it.
>> Those
>> things are just now really disappearing.
>>
>
> This is consistent with my interpretation.
>
>> --joel
>>
>>>
>>>
>>>
>>> --
>>> Sebastian Huber, embedded brains GmbH
>>>
>>> Address : Dornierstr. 4, D-82178 Puchheim, Germany
>>> Phone   : +49 89 189 47 41-16
>>> Fax : +49 89 189 47 41-09
>>> E-Mail  : sebastian.hu...@embedded-brains.de
>>> PGP : Public key available on request.
>>>
>>> Diese Nachricht ist keine geschäftliche Mitteilung im Sinne des EHUG.
>>>
>>

Re: [PATCH] sptests/spfatal26: Use an illegal instruction

2018-07-20 Thread Amaan Cheval
On Fri, Jul 20, 2018 at 12:18 AM, Joel Sherrill  wrote:
>
>
> On Thu, Jul 19, 2018 at 1:37 PM, Sebastian Huber
>  wrote:
>>
>>
>>
>> - Am 19. Jul 2018 um 17:03 schrieb joel j...@rtems.org:
>>
>> > On Thu, Jul 19, 2018 at 8:49 AM, Gedare Bloom  wrote:
>> >
>> >> For now we don't need to generalize this approach or make any kind of
>> >> facility like this available outside of testing.
>> >>
>> >> (FYI: 0 is a "nop" on some architectures)
>> >>
>> >> Gedare
>> >>
>> >> On Thu, Jul 19, 2018 at 9:37 AM, Sebastian Huber
>> >>  wrote:
>> >> > I thought about adding a _CPU_Illegal_instruction() function to
>> >> > . But, do you want such a toxic function in a
>> >> > header
>> >> file
>> >> > or librtemscpu.a? Now it is isolated in the test and can do no harm.
>> >>
>> >
>> > I have wondered if there enough architectural oddities like this in
>> > the tests where a central place to address them would be helpful
>> > when porting.
>>
>> I am not really happy about the use of architecture defines in the tests.
>> I will add a _CPU_Instruction_illegal() and _CPU_Instruction_no_operation()
>> (used by testsuites/sptests/spcache01/init.c) to 
>> tomorrow.
>>
>> >
>> > Where all do you have to check now when porting?
>>
>> You always have to check the test results.
>
>
> I meant how many places in the source do you have to touch that
> you don't expect? For example, RPC has some architecture conditionals
> in it that are easy to forget.

Yep, the xdr_float.c update was definitely not something I expected to
have to do:
https://git.rtems.org/rtems/commit/?id=76c03152e110dcb770253b54277811228e8f78df

Thankfully, IIRC, it was a compile-time error, so it called attention
to itself pretty easily.

Others that were unexpected / hard to understand at first for me:

- Not sure why I need
cpukit/score/cpu/x86_64/include/machine/elf_machdep.h (and it wasn't
important enough to dig into at the time)
- The bsps/*/*/config/bsp.cfg file and what magic variables affect
compilation of which parts of the system (CPU_CFLAGS vs.
CFLAGS_OPTIMIZE_V)
- The hacky use of bsp_specs to override some GCC defaults (the
inclusion of the default crt0 earlier, with __getreent being redefined
erroneously)
- How our GCC toolchains implicitly have "-lrtemsbsp -lrtemscpu" for
when -qrtems is used[1]

[1] https://github.com/gcc-mirror/gcc/blob/master/gcc/config/rtems.h#L41

>
> --joel
>
>


Re: [PATCH 0/1] [GSoC - x86_64] User documentation for BSP

2018-07-18 Thread Amaan Cheval
On Wed, Jul 18, 2018 at 6:56 PM, Gedare Bloom  wrote:
> On Wed, Jul 18, 2018 at 9:09 AM, Amaan Cheval  wrote:
>> Hi!
>>
>> On Fri, Jul 13, 2018 at 7:32 PM, Joel Sherrill  wrote:
>>>
>>>
>>> On Fri, Jul 13, 2018 at 8:25 AM, Gedare Bloom  wrote:
>>>>
>>>> Hello Amaan,
>>>>
>>>>
>>>> On Fri, Jul 13, 2018 at 3:32 AM, Amaan Cheval 
>>>> wrote:
>>>> > The built documentation can more easily be viewed here:
>>>> > http://whatthedude.com/rtems/user/html/bsps/bsps-x86_64.html
>>>> >
>>>> > It feels a bit convoluted to me at the moment. I'd appreciate feedback
>>>> > on how
>>>> > the documentation may be made more understandable, and on whether the
>>>> > current
>>>> > approach even seems sustainable - specifically, using FreeBSD's
>>>> > bootloader ties
>>>> > us into using the UFS filesystem and can slow down the
>>>> > iterative-development
>>>> > process.
>>>> >
>>>> I agree. It looks like you have to build FreeBSD at least one time to
>>>> use this? Alternatives should be again considered for iterative
>>>> improvement.
>>
>> That's right, and I agree. The only question is, at what point should
>> we consider alternatives? After interrupts and the clock driver work
>> is completed, if there's time left, or before (i.e. right now)?
>>
>> Is this too large a barrier to entry, and we'd rather make it easier
>> for future contributors to enter, than to have a more complete BSP
>> that users and contributors find to be too much effort to practically
>> use?
>>
>> (I'm asking about priorities assuming that there isn't time to do
>> _all_ the work, which may or may not be true.)
>>
>
> It is better to have a working basic BSP, even if it is a bit
> complicated to build from scratch, and have a roadmap for how to make
> it simpler to use.
>

Gotcha, thanks.

>>>
>>>
>>> Reducing what has to be built is an important goal. But an alternative is
>>> to host a pre-built binary of what is required to boot. I did that for a
>>> boot
>>> floppy for the pc386. There were instructions for making one and they
>>> worked but, in practice, just grabbing the floppy image was easier.
>>
>> Since we likely need the entire FreeBSD hard disk image, not just its
>> loader.efi file, this may be a file that's a GB or more. (I've been
>> using 8GB just to be safe, so I don't know how small we can make
>> this.)
>>
>> We need the entire FreeBSD image because their kernel is one of the
>> few that lets us mount a UFS/ZFS filesystem as read-write (which are
>> the only filesystems their loader supports loading the kernel from).
>>
>> IMO, looking for an alternate solution may well be for the best, but I
>> also think it can wait until after we have interrupts and the clock
>> driver.
>>
>> Roughly in the order listed here:
>> https://blog.whatthedude.com/post/gsoc-phase-2-status/#upcoming
>>
>>>
>>> Note; you didn't need to use a real floppy. Telling qemu what file
>>> was the 1.44 MB image was all that was needed. Combine that with
>>> vfat for c: and it worked fine.
>>>
>>> So I think we need both -- RSB for building from source and a pre-built
>>> binary.
>>>
>>>>
>>>>
>>>> > In my opinion, this system is good _enough_ for now - we can explore
>>>> > other
>>>> > options later if time permits, but I'd love to hear differing opinions.
>>>> >
>>>> > P.S. - Joel asked earlier if the QEMU that the RSB builds will suffice -
>>>> > for me,
>>>> > it didn't because in it "SDL support is disabled" (and so are all other
>>>> > graphics
>>>> > options). It's likely possible to install FreeBSD without graphics, it
>>>> > may not
>>>> > be worth the effort of setting up - it's likely easier to update the
>>>> > RSB's QEMU
>>>> > to also build graphics support.
>>>> >
>>>> I was going to recommend this. You can make it an option of the qemu
>>>> configuration in RSB to enable the support needed. I suggest you talk
>>>> to Vijay as he has some experience now with RSB, and also this will
>>>> require Chris Johns approval.
>>>
>>>
>>> Everything that wasn't needed was

Re: [PATCH 0/1] [GSoC - x86_64] User documentation for BSP

2018-07-18 Thread Amaan Cheval
Hi!

On Fri, Jul 13, 2018 at 7:32 PM, Joel Sherrill  wrote:
>
>
> On Fri, Jul 13, 2018 at 8:25 AM, Gedare Bloom  wrote:
>>
>> Hello Amaan,
>>
>>
>> On Fri, Jul 13, 2018 at 3:32 AM, Amaan Cheval 
>> wrote:
>> > The built documentation can more easily be viewed here:
>> > http://whatthedude.com/rtems/user/html/bsps/bsps-x86_64.html
>> >
>> > It feels a bit convoluted to me at the moment. I'd appreciate feedback
>> > on how
>> > the documentation may be made more understandable, and on whether the
>> > current
>> > approach even seems sustainable - specifically, using FreeBSD's
>> > bootloader ties
>> > us into using the UFS filesystem and can slow down the
>> > iterative-development
>> > process.
>> >
>> I agree. It looks like you have to build FreeBSD at least one time to
>> use this? Alternatives should be again considered for iterative
>> improvement.

That's right, and I agree. The only question is, at what point should
we consider alternatives? After interrupts and the clock driver work
is completed, if there's time left, or before (i.e. right now)?

Is this too large a barrier to entry, and we'd rather make it easier
for future contributors to enter, than to have a more complete BSP
that users and contributors find to be too much effort to practically
use?

(I'm asking about priorities assuming that there isn't time to do
_all_ the work, which may or may not be true.)

>
>
> Reducing what has to be built is an important goal. But an alternative is
> to host a pre-built binary of what is required to boot. I did that for a
> boot
> floppy for the pc386. There were instructions for making one and they
> worked but, in practice, just grabbing the floppy image was easier.

Since we likely need the entire FreeBSD hard disk image, not just its
loader.efi file, this may be a file that's a GB or more. (I've been
using 8GB just to be safe, so I don't know how small we can make
this.)

We need the entire FreeBSD image because their kernel is one of the
few that lets us mount a UFS/ZFS filesystem as read-write (which are
the only filesystems their loader supports loading the kernel from).

IMO, looking for an alternate solution may well be for the best, but I
also think it can wait until after we have interrupts and the clock
driver.

Roughly in the order listed here:
https://blog.whatthedude.com/post/gsoc-phase-2-status/#upcoming

>
> Note; you didn't need to use a real floppy. Telling qemu what file
> was the 1.44 MB image was all that was needed. Combine that with
> vfat for c: and it worked fine.
>
> So I think we need both -- RSB for building from source and a pre-built
> binary.
>
>>
>>
>> > In my opinion, this system is good _enough_ for now - we can explore
>> > other
>> > options later if time permits, but I'd love to hear differing opinions.
>> >
>> > P.S. - Joel asked earlier if the QEMU that the RSB builds will suffice -
>> > for me,
>> > it didn't because in it "SDL support is disabled" (and so are all other
>> > graphics
>> > options). It's likely possible to install FreeBSD without graphics, it
>> > may not
>> > be worth the effort of setting up - it's likely easier to update the
>> > RSB's QEMU
>> > to also build graphics support.
>> >
>> I was going to recommend this. You can make it an option of the qemu
>> configuration in RSB to enable the support needed. I suggest you talk
>> to Vijay as he has some experience now with RSB, and also this will
>> require Chris Johns approval.
>
>
> Everything that wasn't needed was disabled to make it easier to build
> on multiple hosts. Too many projects are Linux monoculture and
> getting things built on multiple hosts can be a pain.
>
> But I think SDL support needs to be enabled since otherwise we have
> no way to test graphics at all.
>
>>
>>
>> Relatedly, does it make sense for you to look at creating an RSB
>> "recipe" for building the UEFI firmware?
>
>
> See above. I think we need RSB recipes for everything required that
> we expect to be put together by a user. And we should host binaries
> along with matching sources for some pre-built versions. I don't want
> this BSP to be an order of magnitude harder to get started with than
> any of the others.

+1.

>>
>>
>> > P.P.S. - Some of the documentation is double-spaced, but this patch
>> > isn't. Let me
>> > know if it ought to be (the README didn't say anything of the sort, and
>> > it isn't
>> > consistent throughout).
>> >
>>
>> Stick to one consistent style.

Re: [GSoC - x86_64] Clock driver - which hardware source to support primarily?

2018-07-18 Thread Amaan Cheval
Hey!

On Wed, Jul 18, 2018 at 11:01 AM, Sebastian Huber
 wrote:
> Hello Amaan,
>
> On 17/07/18 19:18, Amaan Cheval wrote:
>>
>> Hi!
>>
>> Now that I'm working on the clock driver, we need to pick what we
>> support first. Our options in brief are:
>
>
> The clock driver needs an interrupt. What is the status of the interrupt
> controller support in the BSP?

TL;DR: There is none. We've been using the idle-thread simulation and
left interrupts disabled from the get-go.

>
> For timekeeping we use a port of the FreeBSD timecounter in RTEMS. You may
> have a look at the FreeBSD timecounter for this architecture, e.g.
> sys/x86/x86/tsc.c. It looks quite complicated.

Right, it goes to show the flaw with the TSC - it needs calibration
(FreeBSD seems to try to use CPUID and model names, etc. to detect the
frequency instead of using the PIT), and synchronization across
different cores in SMP configurations.

The APIC timers are local to each core, but are also synchronized
because they are based on a common bus signal.

(See this for more pros/cons of each timer:
https://www.halobates.de/timers2.pdf)

> I would not take too much care
> about legacy support, e.g. ignore hardware which is older than five years?

That's a fair guideline, thanks!

>
> --
> Sebastian Huber, embedded brains GmbH
>
> Address : Dornierstr. 4, D-82178 Puchheim, Germany
> Phone   : +49 89 189 47 41-16
> Fax : +49 89 189 47 41-09
> E-Mail  : sebastian.hu...@embedded-brains.de
> PGP : Public key available on request.
>
> Diese Nachricht ist keine geschäftliche Mitteilung im Sinne des EHUG.
>
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel

Re: [GSoC - x86_64] Clock driver - which hardware source to support primarily?

2018-07-18 Thread Amaan Cheval
Hi!

Thanks a ton for all the details, they definitely help clarify the
direction a fair bit. I think we're settling on working on interrupt
support and the APIC timer (and some PIT work to calibrate the APIC
timer) in the immediate future.

Once ticker.exe passes using the APIC timer, we'll have a nearly
complete BSP, and then the rtems-tools can be fixed up to let us run
tests more easily; we can also look into alternate bootloaders for
UEFI-awareness instead of FreeBSD's.

Let me know if anyone disagrees with that?

On Tue, Jul 17, 2018 at 11:04 PM, Joel Sherrill  wrote:
>
>
> On Tue, Jul 17, 2018 at 12:18 PM, Amaan Cheval 
> wrote:
>>
>> Hi!
>>
>> Now that I'm working on the clock driver, we need to pick what we
>> support first. Our options in brief are:
>>
>> - rdtsc; will need calibration through the PIT or one of the other options
>
>
> Can this generate an interrupt?

Nope, it's only an instruction that reads out the value held in the
timestamp counter. It can be used to determine the time since the last
interrupt - something as simple as:

last_interrupt = rdtsc();
// ...;
time_since_last_interrupt = rdtsc() - last_interrupt;

I'm now leaning towards the APIC timer (read on), so this may not be
used at all.

>
> This is currently used to get the offset since the last tick and
> there is calibration code in pc386. Of course, that likely uses
> the PIT so is just a guide.
>
>>
>> - PIT; likely not given how legacy it is, but it may be quite simple
>
>
> Legacy. Preferably off the table as a permanent solution.
>
> How legacy is it? It doesn't seem to have disappeared like IO ports,
> non-UEFI booting, etc.

It's one of the "classic" timers - basically guaranteed to be there
(unlike _most_ of the other options in consideration). It's not our
ideal option, but I think we'll inevitably need it as a fallback, and
to synchronize the other timers during initialization. Our support may
not need to be full-fledged if we can assume processors recent enough
to support APIC timers, which really _should_ be the case on just
about anything modern enough to include UEFI firmware anyway.

>
>>
>> - APIC timer[1]; better for long-term as it's independent of multiple
>> core time sync problems in general, and it also has multiple modes,
>> including having the timer generate deadline interrupts instead of the
>> "polled" ticking method. It'll need better interrupt support and ACPI
>> to detect/use.
>
>
> I had hoped we would end up here.
>
> Looks like it needs to be calibrated and likely you can read the
> counter to see how long it has been running since the IRQ was
> generated. Plus you can read it for the time since last IRQ.
>
> Good and self-contained I think.

Agreed - if interrupts are an absolute must, the APIC timer can do
everything we need, with only a little help during calibration from
the PIT.

>
>>
>>
>> - HPET[2] (high-precision event timer); may be the "best" option -
>> it's fairly modern, highly accurate. Only downside is it needing ACPI
>> to detect and use (as opposed to usually CPUID for the others)
>
>
> Is this present on all CPUs? What's the lowest model? Would this
> be a limiting a factor?

I don't really know, nor do I know how to find it out without much
more research.

>
> Another consideration is that this may best be left alone since
> it may make sense to have it available for applications needing
> a secondary timer.
>
> Finally, it simply looks fairly complex to deal with. You have to
> detect it, can be at different IO addresses, and (worst) may not
> be present at all and you have to have APIC timer support anyway.
>

Right, I think I'm leaning away from even considering this for the
reasons you listed.

>>
>>
>> - RTC; legacy real-time clock - not really a good option in my
>> understanding because it's often too slow for real use-cases needing
>> the high precision
>
>
> This is a bad option and was even for pc386. I don't recall if it
> could even generate an interrupt but the granularity was bad.
>
>>
>>
>> I suspect using rdtsc+pit (option 1) is likely the best for now in
>> providing us with speed (rdtsc is much faster than the PIT, I've read)
>> and ease-of-development (for the port, that is).
>
>
> If speed == ease-of-development, this is the best option since you
> have code to reuse.
>
> APIC timer has to be there even if in the future, the HPET is
> supported.
>
>>
>>
>> Using the APIC timer may work well as well since basic ACPI and
>> interrupt support are likely important for this port (for eg.
>> currently, the port doesn't know how to reset the s

[GSoC - x86_64] Clock driver - which hardware source to support primarily?

2018-07-17 Thread Amaan Cheval
Hi!

Now that I'm working on the clock driver, we need to pick what we
support first. Our options in brief are:

- rdtsc; will need calibration through the PIT or one of the other options

- PIT; likely not given how legacy it is, but it may be quite simple

- APIC timer[1]; better for long-term as it's independent of multiple
core time sync problems in general, and it also has multiple modes,
including having the timer generate deadline interrupts instead of the
"polled" ticking method. It'll need better interrupt support and ACPI
to detect/use.

- HPET[2] (high-precision event timer); may be the "best" option -
it's fairly modern, highly accurate. Only downside is it needing ACPI
to detect and use (as opposed to usually CPUID for the others)

- RTC; legacy real-time clock - not really a good option in my
understanding because it's often too slow for real use-cases needing
the high precision

I suspect using rdtsc+pit (option 1) is likely the best for now in
providing us with speed (rdtsc is much faster than the PIT, I've read)
and ease-of-development (for the port, that is).

Using the APIC timer may work well as well since basic ACPI and
interrupt support are likely important for this port (for eg.
currently, the port doesn't know how to reset the system at all, until
we look into ACPI more). This will likely take a while but help the
port be much more well-rounded for future growth.

The HPET option is great if the high-precision is important - from
other ports, it doesn't look like we need _such_ high precision, but
let me know if that's wrong (especially given that most other BSPs
don't usually run on hardware as beefy as typical x86_64 systems).

For now, I'm probably working on setting the PIT up regardless since
it'll likely be useful as a fallback no matter what (and it can be
used for some of the other methods too).

Cheers!

[1] https://wiki.osdev.org/APIC_timer
[2] https://wiki.osdev.org/HPET


[PATCH 1/1] user: Add x86_64 BSP chapter

2018-07-13 Thread Amaan Cheval
---
 user/bsps/bsps-x86_64.rst | 143 +-
 1 file changed, 142 insertions(+), 1 deletion(-)

diff --git a/user/bsps/bsps-x86_64.rst b/user/bsps/bsps-x86_64.rst
index 18f80d2..19c4461 100644
--- a/user/bsps/bsps-x86_64.rst
+++ b/user/bsps/bsps-x86_64.rst
@@ -1,7 +1,148 @@
 .. comment SPDX-License-Identifier: CC-BY-SA-4.0
+.. comment Copyright (c) 2018 Amaan Cheval 
 .. comment Copyright (c) 2018 embedded brains GmbH
 
 x86_64
******
 
-There are no x86_64 BSPs yet.
+amd64
+=====
+
+This BSP offers only one variant, ``amd64``. The BSP can run on UEFI-capable
+systems by using FreeBSD's bootloader, which then loads the RTEMS executable (an
+ELF image).
+
+Currently only the console driver and context initialization and switching are
+functional (to a bare minimum), but this is enough to run the ``hello.exe`` sample
+in the RTEMS testsuite.
+
+Build Configuration Options
+---------------------------
+
+There are no options available to ``configure`` at build time, at the moment.
+
+Testing with QEMU
+-----------------
+
+To test with QEMU, we need to:
+
+- Build / install QEMU (most distributions should have it available on the
+  package manager).
+- Build UEFI firmware that QEMU can use to simulate an x86-64 system capable of
+  booting a UEFI-aware kernel, through the ``--bios`` flag.
+
+Building TianoCore's UEFI firmware, OVMF
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Complete detailed instructions are available at `TianoCore's GitHub wiki
+<https://github.com/tianocore/tianocore.github.io/wiki/How-to-build-OVMF>`_.
+
+Quick instructions (which may fall out of date) are:
+
+.. code-block:: shell
+
+$ git clone git://github.com/tianocore/edk2.git
+$ cd edk2
+$ make -C BaseTools
+$ . edksetup.sh
+
+Then edit ``Conf/target.txt`` to set:
+
+::
+
+ACTIVE_PLATFORM   = OvmfPkg/OvmfPkgX64.dsc
+TARGET= DEBUG
+TARGET_ARCH   = X64
+# You can use GCC46 as well, if you'd prefer
+TOOL_CHAIN_TAG= GCC5
+
+Then run ``build`` in the ``edk2`` directory - the output should list the
+location of the ``OVMF.fd`` file, which can be used with QEMU to boot into a UEFI
+shell.
+
+You can find the ``OVMF.fd`` file like this as well in the edk2 directory:
+
+.. code-block:: shell
+
+$ find . -name "*.fd"
+./Build/OvmfX64/DEBUG_GCC5/FV/MEMFD.fd
+./Build/OvmfX64/DEBUG_GCC5/FV/OVMF.fd <-- the file we're looking for
+./Build/OvmfX64/DEBUG_GCC5/FV/OVMF_CODE.fd
+./Build/OvmfX64/DEBUG_GCC5/FV/OVMF_VARS.fd
+
+Boot RTEMS via FreeBSD's bootloader
+-----------------------------------
+
+The RTEMS executable produced (an ELF file) needs to be placed where FreeBSD's
+``/boot/kernel/kernel`` normally resides.
+
+To do that, we first need a hard-disk image with FreeBSD installed on
+it. `Download FreeBSD's installer "memstick" image for amd64
+<https://www.freebsd.org/where.html>`_ and then run the following commands,
+replacing paths as appropriate.
+
+.. code-block:: shell
+
+   $ qemu-img create freebsd.img 8G
+   $ OVMF_LOCATION=/path/to/ovmf/OVMF.fd
+   $ FREEBSD_MEMSTICK=/path/to/FreeBSD-11.2-amd64-memstick.img
+   $ qemu-system-x86_64 -m 1024 -serial stdio --bios $OVMF_LOCATION \
+   -drive format=raw,file=freebsd.img \
+   -drive format=raw,file=$FREEBSD_MEMSTICK
+
+The first time you do this, continue through and install FreeBSD. `FreeBSD's
+installation guide may prove useful
+<https://www.freebsd.org/doc/handbook/bsdinstall-start.html>`_ if required.
+
+Once installed, build your RTEMS executable (an ELF file), for
+eg. ``hello.exe``. We need to transfer this executable into ``freebsd.img``'s
+filesystem, at either ``/boot/kernel/kernel`` or ``/boot/kernel.old/kernel`` (or
+elsewhere, if you don't mind using FreeBSD's ``loader`` prompt to boot your
+custom kernel).
+
+If your host system supports mounting UFS filesystems as read-write
+(eg. FreeBSD), go ahead and:
+
+1. Mount ``freebsd.img`` as read-write
+2. Within the filesystem, back the existing FreeBSD kernel up (i.e. effectively
+   ``cp -r /boot/kernel /boot/kernel.old``).
+3. Place your RTEMS executable at ``/boot/kernel/kernel``
+
+If your host doesn't support mounting UFS filesystems (eg. most Linux kernels),
+do something to the effect of the following.
+
+On the host
+
+.. code-block:: shell
+
+   # Upload hello.exe anywhere accessible within the host
+   $ curl --upload-file hello.exe https://transfer.sh/rtems
+
+Then on the guest (FreeBSD), login with ``root`` and
+
+.. code-block:: shell
+
+   # Back the FreeBSD kernel up
+   $ cp -r /boot/kernel/ /boot/kernel.old
+   # Bring networking online if it isn't already
+   $ dhclient em0
+   # You may need to add the --no-verify-peer depending on your server
+   $ fetch https://host.com/path/to/rtems/hello.exe
+   # Replace default kernel
+   $ cp hello.exe /boot/kernel/kernel
+   $ reboot
+
+After rebooting, the RTEMS kernel should run after th

[PATCH 0/1] [GSoC - x86_64] User documentation for BSP

2018-07-13 Thread Amaan Cheval
The built documentation can more easily be viewed here:
http://whatthedude.com/rtems/user/html/bsps/bsps-x86_64.html

It feels a bit convoluted to me at the moment. I'd appreciate feedback on how
the documentation may be made more understandable, and on whether the current
approach even seems sustainable - specifically, using FreeBSD's bootloader ties
us into using the UFS filesystem and can slow down the iterative-development
process.

In my opinion, this system is good _enough_ for now - we can explore other
options later if time permits, but I'd love to hear differing opinions.

P.S. - Joel asked earlier if the QEMU that the RSB builds will suffice - for me,
it didn't because in it "SDL support is disabled" (and so are all other graphics
options). It's likely possible to install FreeBSD without graphics, but it may
not be worth the effort of setting up - it's likely easier to update the RSB's
QEMU to also build graphics support.

P.P.S. - Some of the documentation is double-spaced, but this patch isn't. Let me
know if it ought to be (the README didn't say anything of the sort, and it isn't
consistent throughout).

Amaan Cheval (1):
  user: Add x86_64 BSP chapter

 user/bsps/bsps-x86_64.rst | 143 +-
 1 file changed, 142 insertions(+), 1 deletion(-)

-- 
2.16.0.rc0



Re: [PATCH 2/2] x86_64/console: Add NS16550 polled console driver

2018-07-12 Thread Amaan Cheval
Thanks a lot for the kind words! It definitely wouldn't have come even
this far without this brilliant community, so I really can't take most
of the credit. Thanks for all the help!

P.S. - For those interested, documentation on running is coming soon!

On Thu, Jul 12, 2018 at 2:14 AM, Joel Sherrill  wrote:
> After discussion with Chris and a confirmation from Amaan, I have pushed
> this patch set which means we now have x86_64 and the amd64 BSP.
> Amaan would be the first to tell you that it needs more love but it
> does run hello world and is far enough along where others can
> begin to experiment with and enhance it.
>
> Thanks Amaan. We all look forward to you guiding this port and
> BSP to maturity.  :)
>
> --joel
>
> On Mon, Jul 9, 2018 at 6:12 AM, Amaan Cheval  wrote:
>>
>> This addition allows us to successfully run the sample hello.exe test.
>>
>> Updates #2898.
>> ---
>>  bsps/x86_64/amd64/console/console.c| 123
>> +
>>  c/src/lib/libbsp/x86_64/amd64/Makefile.am  |   2 +
>>  .../score/cpu/x86_64/include/rtems/score/cpuimpl.h |  14 +++
>>  .../score/cpu/x86_64/include/rtems/score/x86_64.h  |   3 +
>>  4 files changed, 49 insertions(+), 93 deletions(-)
>>
>> diff --git a/bsps/x86_64/amd64/console/console.c
>> b/bsps/x86_64/amd64/console/console.c
>> index b272b679d7..5408c57fe7 100644
>> --- a/bsps/x86_64/amd64/console/console.c
>> +++ b/bsps/x86_64/amd64/console/console.c
>> @@ -24,112 +24,49 @@
>>   * SUCH DAMAGE.
>>   */
>>
>> -#include 
>> +#include 
>>  #include 
>> -#include 
>> -
>> -/*  console_initialize
>> - *
>> - *  This routine initializes the console IO driver.
>> - *
>> - *  Input parameters: NONE
>> - *
>> - *  Output parameters:  NONE
>> - *
>> - *  Return values:
>> - */
>> +#include 
>> +#include 
>> +#include 
>>
>> -rtems_device_driver console_initialize(
>> -  rtems_device_major_number  major,
>> -  rtems_device_minor_number  minor,
>> -  void  *arg
>> -)
>> +static uint8_t amd64_uart_get_register(uintptr_t addr, uint8_t i)
>>  {
>> -  (void) major;
>> -  (void) minor;
>> -  (void) arg;
>> -  return RTEMS_SUCCESSFUL;
>> +  return inport_byte(addr + i);
>>  }
>>
>> -/*
>> - *  Open entry point
>> - */
>> -
>> -rtems_device_driver console_open(
>> -  rtems_device_major_number major,
>> -  rtems_device_minor_number minor,
>> -  void* arg
>> -)
>> +static void amd64_uart_set_register(uintptr_t addr, uint8_t i, uint8_t
>> val)
>>  {
>> -  (void) major;
>> -  (void) minor;
>> -  (void) arg;
>> -  return RTEMS_SUCCESSFUL;
>> +  outport_byte(addr + i, val);
>>  }
>>
>> -/*
>> - *  Close entry point
>> - */
>> -
>> -rtems_device_driver console_close(
>> -  rtems_device_major_number major,
>> -  rtems_device_minor_number minor,
>> -  void* arg
>> -)
>> -{
>> -  (void) major;
>> -  (void) minor;
>> -  (void) arg;
>> -  return RTEMS_SUCCESSFUL;
>> -}
>> +static ns16550_context amd64_uart_context = {
>> +  .base = RTEMS_TERMIOS_DEVICE_CONTEXT_INITIALIZER("UART"),
>> +  .get_reg = amd64_uart_get_register,
>> +  .set_reg = amd64_uart_set_register,
>> +  .port = (uintptr_t) COM1_BASE_IO,
>> +  .initial_baud = COM1_CLOCK_RATE
>> +};
>>
>>  /*
>> - * read bytes from the serial port. We only have stdin.
>> + * XXX: We should use the interrupt based handler once interrupts are
>> supported
>>   */
>> +const console_device console_device_table[] = {
>> +  {
>> +.device_file = "/dev/console",
>> +.probe = console_device_probe_default,
>> +.handler = &ns16550_handler_polled,
>> +.context = &amd64_uart_context.base
>> +  }
>> +};
>> +const size_t console_device_count =
>> RTEMS_ARRAY_SIZE(console_device_table);
>>
>> -rtems_device_driver console_read(
>> -  rtems_device_major_number major,
>> -  rtems_device_minor_number minor,
>> -  void* arg
>> -)
>> +static void output_char(char c)
>>  {
>> -  (void) major;
>> -  (void) minor;
>> -  (void) arg;
>> -  return RTEMS_SUCCESSFUL;
>> -}
>> +  rtems_termios_device_context *ctx = console_device_table[0].context;
>>
>> -/*
>> - * write bytes to the serial port. Stdout and stderr are the same.
>> - */
>>

[PATCH 2/2] x86_64/console: Add NS16550 polled console driver

2018-07-09 Thread Amaan Cheval
This addition allows us to successfully run the sample hello.exe test.

Updates #2898.
---
 bsps/x86_64/amd64/console/console.c| 123 +
 c/src/lib/libbsp/x86_64/amd64/Makefile.am  |   2 +
 .../score/cpu/x86_64/include/rtems/score/cpuimpl.h |  14 +++
 .../score/cpu/x86_64/include/rtems/score/x86_64.h  |   3 +
 4 files changed, 49 insertions(+), 93 deletions(-)

diff --git a/bsps/x86_64/amd64/console/console.c 
b/bsps/x86_64/amd64/console/console.c
index b272b679d7..5408c57fe7 100644
--- a/bsps/x86_64/amd64/console/console.c
+++ b/bsps/x86_64/amd64/console/console.c
@@ -24,112 +24,49 @@
  * SUCH DAMAGE.
  */
 
-#include 
+#include 
 #include 
-#include 
-
-/*  console_initialize
- *
- *  This routine initializes the console IO driver.
- *
- *  Input parameters: NONE
- *
- *  Output parameters:  NONE
- *
- *  Return values:
- */
+#include 
+#include 
+#include 
 
-rtems_device_driver console_initialize(
-  rtems_device_major_number  major,
-  rtems_device_minor_number  minor,
-  void  *arg
-)
+static uint8_t amd64_uart_get_register(uintptr_t addr, uint8_t i)
 {
-  (void) major;
-  (void) minor;
-  (void) arg;
-  return RTEMS_SUCCESSFUL;
+  return inport_byte(addr + i);
 }
 
-/*
- *  Open entry point
- */
-
-rtems_device_driver console_open(
-  rtems_device_major_number major,
-  rtems_device_minor_number minor,
-  void* arg
-)
+static void amd64_uart_set_register(uintptr_t addr, uint8_t i, uint8_t val)
 {
-  (void) major;
-  (void) minor;
-  (void) arg;
-  return RTEMS_SUCCESSFUL;
+  outport_byte(addr + i, val);
 }
 
-/*
- *  Close entry point
- */
-
-rtems_device_driver console_close(
-  rtems_device_major_number major,
-  rtems_device_minor_number minor,
-  void* arg
-)
-{
-  (void) major;
-  (void) minor;
-  (void) arg;
-  return RTEMS_SUCCESSFUL;
-}
+static ns16550_context amd64_uart_context = {
+  .base = RTEMS_TERMIOS_DEVICE_CONTEXT_INITIALIZER("UART"),
+  .get_reg = amd64_uart_get_register,
+  .set_reg = amd64_uart_set_register,
+  .port = (uintptr_t) COM1_BASE_IO,
+  .initial_baud = COM1_CLOCK_RATE
+};
 
 /*
- * read bytes from the serial port. We only have stdin.
+ * XXX: We should use the interrupt based handler once interrupts are supported
  */
+const console_device console_device_table[] = {
+  {
+.device_file = "/dev/console",
+.probe = console_device_probe_default,
+.handler = &ns16550_handler_polled,
+.context = &amd64_uart_context.base
+  }
+};
+const size_t console_device_count = RTEMS_ARRAY_SIZE(console_device_table);
 
-rtems_device_driver console_read(
-  rtems_device_major_number major,
-  rtems_device_minor_number minor,
-  void* arg
-)
+static void output_char(char c)
 {
-  (void) major;
-  (void) minor;
-  (void) arg;
-  return RTEMS_SUCCESSFUL;
-}
+  rtems_termios_device_context *ctx = console_device_table[0].context;
 
-/*
- * write bytes to the serial port. Stdout and stderr are the same.
- */
-
-rtems_device_driver console_write(
-  rtems_device_major_number major,
-  rtems_device_minor_number minor,
-  void* arg
-)
-{
-  (void) major;
-  (void) minor;
-  (void) arg;
-  return 0;
-}
-
-/*
- *  IO Control entry point
- */
-
-rtems_device_driver console_control(
-  rtems_device_major_number major,
-  rtems_device_minor_number minor,
-  void* arg
-)
-{
-  (void) major;
-  (void) minor;
-  (void) arg;
-  return RTEMS_SUCCESSFUL;
+  ns16550_polled_putchar(ctx, c);
 }
 
-BSP_output_char_function_type BSP_output_char = NULL;
-BSP_polling_getchar_function_type BSP_poll_char   = NULL;
+BSP_output_char_function_type BSP_output_char   = output_char;
+BSP_polling_getchar_function_type BSP_poll_char = NULL;
diff --git a/c/src/lib/libbsp/x86_64/amd64/Makefile.am 
b/c/src/lib/libbsp/x86_64/amd64/Makefile.am
index f05b40f3f9..aa40f6224f 100644
--- a/c/src/lib/libbsp/x86_64/amd64/Makefile.am
+++ b/c/src/lib/libbsp/x86_64/amd64/Makefile.am
@@ -29,6 +29,8 @@ librtemsbsp_a_SOURCES += 
../../../../../../bsps/shared/start/bspreset-empty.c
 # clock
 librtemsbsp_a_SOURCES += 
../../../../../../bsps/shared/dev/clock/clock-simidle.c
 # console
+librtemsbsp_a_SOURCES += 
../../../../../../bsps/shared/dev/serial/console-termios-init.c
+librtemsbsp_a_SOURCES += 
../../../../../../bsps/shared/dev/serial/console-termios.c
 librtemsbsp_a_SOURCES += ../../../../../../bsps/x86_64/amd64/console/console.c
 # timer
 librtemsbsp_a_SOURCES += ../../../../../../bsps/shared/dev/btimer/btimer-stub.c
diff --git a/cpukit/score/cpu/x86_64/include/rtems/score/cpuimpl.h 
b/cpukit/score/cpu/x86_64/include/rtems/score/cpuimpl.h
index bac092c320..67fe712a32 100644
--- a/cpukit/score/cpu/x86_64/include/rtems/score/cpuimpl.h
+++ b/cpukit/score/cpu/x86_64/include/rtems/score/cpuimpl.h
@@ -28,6 +28,20 @@
 extern "C" {
 #endif
 
+static inline uint8_t inport_byte(uint16_t port)
+{
+  uint8_t ret;
+  __asm__ volatile ( "inb %1, %0"
+ : "=a" (ret)
+  

[PATCH 1/2] bsp/x86_64: Minimal bootable BSP

2018-07-09 Thread Amaan Cheval
Current state:

  - Basic context initialization and switching code.
  - Stubbed console (empty functions).
  - Mostly functional linker script (may need tweaks if we ever want to move
    away from the large code model; see CPU_CFLAGS).
  - Fully functional boot, by using FreeBSD's bootloader to load RTEMS's ELF for
UEFI-awareness.

In short, the current state with this commit lets us boot, go through the system
initialization functions, and then call the user application's Init task too.

Updates #2898.
---
 bsps/x86_64/amd64/config/amd64.cfg |  13 +
 bsps/x86_64/amd64/console/console.c| 135 
 bsps/x86_64/amd64/headers.am   |   7 +
 bsps/x86_64/amd64/include/bsp.h|  51 +++
 bsps/x86_64/amd64/include/start.h  |  47 +++
 bsps/x86_64/amd64/include/tm27.h   |   1 +
 bsps/x86_64/amd64/start/bsp_specs  |   9 +
 bsps/x86_64/amd64/start/bspstart.c |  32 ++
 bsps/x86_64/amd64/start/linkcmds   | 281 
 bsps/x86_64/amd64/start/start.c|  36 +++
 c/src/aclocal/rtems-cpu-subdirs.m4 |   1 +
 c/src/lib/libbsp/x86_64/Makefile.am|   7 +
 c/src/lib/libbsp/x86_64/acinclude.m4   |  10 +
 c/src/lib/libbsp/x86_64/amd64/Makefile.am  |  40 +++
 c/src/lib/libbsp/x86_64/amd64/configure.ac |  19 ++
 c/src/lib/libbsp/x86_64/configure.ac   |  20 ++
 cpukit/configure.ac|   1 +
 cpukit/librpc/src/xdr/xdr_float.c  |   3 +-
 cpukit/score/cpu/x86_64/Makefile.am|  12 +
 cpukit/score/cpu/x86_64/cpu.c  |  83 +
 cpukit/score/cpu/x86_64/headers.am |  16 +
 .../score/cpu/x86_64/include/machine/elf_machdep.h |   4 +
 cpukit/score/cpu/x86_64/include/rtems/asm.h| 134 
 cpukit/score/cpu/x86_64/include/rtems/score/cpu.h  | 359 +
 .../cpu/x86_64/include/rtems/score/cpuatomic.h |  14 +
 .../score/cpu/x86_64/include/rtems/score/cpuimpl.h |  37 +++
 .../score/cpu/x86_64/include/rtems/score/x86_64.h  |  41 +++
 .../score/cpu/x86_64/x86_64-context-initialize.c   |  95 ++
 cpukit/score/cpu/x86_64/x86_64-context-switch.S|  98 ++
 29 files changed, 1605 insertions(+), 1 deletion(-)
 create mode 100644 bsps/x86_64/amd64/config/amd64.cfg
 create mode 100644 bsps/x86_64/amd64/console/console.c
 create mode 100644 bsps/x86_64/amd64/headers.am
 create mode 100644 bsps/x86_64/amd64/include/bsp.h
 create mode 100644 bsps/x86_64/amd64/include/start.h
 create mode 100644 bsps/x86_64/amd64/include/tm27.h
 create mode 100644 bsps/x86_64/amd64/start/bsp_specs
 create mode 100644 bsps/x86_64/amd64/start/bspstart.c
 create mode 100644 bsps/x86_64/amd64/start/linkcmds
 create mode 100644 bsps/x86_64/amd64/start/start.c
 create mode 100644 c/src/lib/libbsp/x86_64/Makefile.am
 create mode 100644 c/src/lib/libbsp/x86_64/acinclude.m4
 create mode 100644 c/src/lib/libbsp/x86_64/amd64/Makefile.am
 create mode 100644 c/src/lib/libbsp/x86_64/amd64/configure.ac
 create mode 100644 c/src/lib/libbsp/x86_64/configure.ac
 create mode 100644 cpukit/score/cpu/x86_64/Makefile.am
 create mode 100644 cpukit/score/cpu/x86_64/cpu.c
 create mode 100644 cpukit/score/cpu/x86_64/headers.am
 create mode 100644 cpukit/score/cpu/x86_64/include/machine/elf_machdep.h
 create mode 100644 cpukit/score/cpu/x86_64/include/rtems/asm.h
 create mode 100644 cpukit/score/cpu/x86_64/include/rtems/score/cpu.h
 create mode 100644 cpukit/score/cpu/x86_64/include/rtems/score/cpuatomic.h
 create mode 100644 cpukit/score/cpu/x86_64/include/rtems/score/cpuimpl.h
 create mode 100644 cpukit/score/cpu/x86_64/include/rtems/score/x86_64.h
 create mode 100644 cpukit/score/cpu/x86_64/x86_64-context-initialize.c
 create mode 100644 cpukit/score/cpu/x86_64/x86_64-context-switch.S

diff --git a/bsps/x86_64/amd64/config/amd64.cfg 
b/bsps/x86_64/amd64/config/amd64.cfg
new file mode 100644
index 00..3c4492d9d3
--- /dev/null
+++ b/bsps/x86_64/amd64/config/amd64.cfg
@@ -0,0 +1,13 @@
+include $(RTEMS_ROOT)/make/custom/default.cfg
+
+RTEMS_CPU = x86_64
+
+CFLAGS_OPTIMIZE_V += -O2 -g
+CFLAGS_OPTIMIZE_V += -ffunction-sections -fdata-sections
+
+# We can't have the red zone because interrupts will not respect that area.
+CPU_CFLAGS  = -mno-red-zone
+# This flag tells GCC to not assume values will fit in 32-bit registers. This
+# way we can avoid linker-time relocation errors spawning from values being
+# larger than their optimized container sizes.
+CPU_CFLAGS += -mcmodel=large
diff --git a/bsps/x86_64/amd64/console/console.c 
b/bsps/x86_64/amd64/console/console.c
new file mode 100644
index 00..b272b679d7
--- /dev/null
+++ b/bsps/x86_64/amd64/console/console.c
@@ -0,0 +1,135 @@
+/*
+ * Copyright (c) 2018.
+ * Amaan Cheval 
+ *
+ * Redistribution and use in source and binary forms, with or without

[PATCH 0/2] [GSoC - x86_64] Minimal BSP patch

2018-07-09 Thread Amaan Cheval
This patchset is also available on Github as a pull-request for anyone who would
rather review it there (personally that's what I'd prefer):

  https://github.com/AmaanC/rtems-gsoc18/pull/2

For posterity, my concerns (also listed in the PR description) are:

- The use of '-mcmodel=large' in 'amd64.cfg'

- The lack of 'LDFLAGS = -Wl,--gc-sections' in 'amd64.cfg' (see

https://github.com/AmaanC/rtems-gsoc18/commit/153c1c7addec6f95ee15505c1de17220b8257ecb
  for why)

- The folder structure and filenames (where do we use 'console/console.c',
  vs. just 'console.c'? Where do we place files that _may_ be shared for other
  BSPs in the same computer family (for eg. 'x86_64/amd64/include/start.h')?)

- 'XXX' comments in code - I'm not sure if all of them should be upstream, and
  would appreciate someone keeping an eye out for any that may be out of place
  or should simply be tickets on Trac instead

- The way 'x86_64-context-switch.S' works directly on the 'Context_Control'
  structure (Ctrl+F 'CPU_SIZEOF_POINTER')

Amaan Cheval (2):
  bsp/x86_64: Minimal bootable BSP
  x86_64/console: Add NS16550 polled console driver

 bsps/x86_64/amd64/config/amd64.cfg |  13 +
 bsps/x86_64/amd64/console/console.c|  72 +
 bsps/x86_64/amd64/headers.am   |   7 +
 bsps/x86_64/amd64/include/bsp.h|  51 +++
 bsps/x86_64/amd64/include/start.h  |  47 +++
 bsps/x86_64/amd64/include/tm27.h   |   1 +
 bsps/x86_64/amd64/start/bsp_specs  |   9 +
 bsps/x86_64/amd64/start/bspstart.c |  32 ++
 bsps/x86_64/amd64/start/linkcmds   | 281 
 bsps/x86_64/amd64/start/start.c|  36 +++
 c/src/aclocal/rtems-cpu-subdirs.m4 |   1 +
 c/src/lib/libbsp/x86_64/Makefile.am|   7 +
 c/src/lib/libbsp/x86_64/acinclude.m4   |  10 +
 c/src/lib/libbsp/x86_64/amd64/Makefile.am  |  42 +++
 c/src/lib/libbsp/x86_64/amd64/configure.ac |  19 ++
 c/src/lib/libbsp/x86_64/configure.ac   |  20 ++
 cpukit/configure.ac|   1 +
 cpukit/librpc/src/xdr/xdr_float.c  |   3 +-
 cpukit/score/cpu/x86_64/Makefile.am|  12 +
 cpukit/score/cpu/x86_64/cpu.c  |  83 +
 cpukit/score/cpu/x86_64/headers.am |  16 +
 .../score/cpu/x86_64/include/machine/elf_machdep.h |   4 +
 cpukit/score/cpu/x86_64/include/rtems/asm.h| 134 
 cpukit/score/cpu/x86_64/include/rtems/score/cpu.h  | 359 +
 .../cpu/x86_64/include/rtems/score/cpuatomic.h |  14 +
 .../score/cpu/x86_64/include/rtems/score/cpuimpl.h |  51 +++
 .../score/cpu/x86_64/include/rtems/score/x86_64.h  |  44 +++
 .../score/cpu/x86_64/x86_64-context-initialize.c   |  95 ++
 cpukit/score/cpu/x86_64/x86_64-context-switch.S|  98 ++
 29 files changed, 1561 insertions(+), 1 deletion(-)
 create mode 100644 bsps/x86_64/amd64/config/amd64.cfg
 create mode 100644 bsps/x86_64/amd64/console/console.c
 create mode 100644 bsps/x86_64/amd64/headers.am
 create mode 100644 bsps/x86_64/amd64/include/bsp.h
 create mode 100644 bsps/x86_64/amd64/include/start.h
 create mode 100644 bsps/x86_64/amd64/include/tm27.h
 create mode 100644 bsps/x86_64/amd64/start/bsp_specs
 create mode 100644 bsps/x86_64/amd64/start/bspstart.c
 create mode 100644 bsps/x86_64/amd64/start/linkcmds
 create mode 100644 bsps/x86_64/amd64/start/start.c
 create mode 100644 c/src/lib/libbsp/x86_64/Makefile.am
 create mode 100644 c/src/lib/libbsp/x86_64/acinclude.m4
 create mode 100644 c/src/lib/libbsp/x86_64/amd64/Makefile.am
 create mode 100644 c/src/lib/libbsp/x86_64/amd64/configure.ac
 create mode 100644 c/src/lib/libbsp/x86_64/configure.ac
 create mode 100644 cpukit/score/cpu/x86_64/Makefile.am
 create mode 100644 cpukit/score/cpu/x86_64/cpu.c
 create mode 100644 cpukit/score/cpu/x86_64/headers.am
 create mode 100644 cpukit/score/cpu/x86_64/include/machine/elf_machdep.h
 create mode 100644 cpukit/score/cpu/x86_64/include/rtems/asm.h
 create mode 100644 cpukit/score/cpu/x86_64/include/rtems/score/cpu.h
 create mode 100644 cpukit/score/cpu/x86_64/include/rtems/score/cpuatomic.h
 create mode 100644 cpukit/score/cpu/x86_64/include/rtems/score/cpuimpl.h
 create mode 100644 cpukit/score/cpu/x86_64/include/rtems/score/x86_64.h
 create mode 100644 cpukit/score/cpu/x86_64/x86_64-context-initialize.c
 create mode 100644 cpukit/score/cpu/x86_64/x86_64-context-switch.S

-- 
2.15.0

___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


Re: [GSoC - x86_64] Minimal BSP to review

2018-07-08 Thread Amaan Cheval
Cool, I'll squash and post here again after I have (likely later today
or possibly early tomorrow) - I just noticed that using -O2 as an
optimization flag deterministically causes relocation errors, for eg.:

/home/amaan/repos/rtems/b-jul6/x86_64-rtems5/c/amd64/lib/libbsp/x86_64/amd64/../../../../../../../../kernel/c/src/lib/libbsp/x86_64/amd64/../../../../../../bsps/shared/start/bspgetworkarea-default.c:54:(.text.bsp_work_area_initialize+0xd): relocation truncated to fit: R_X86_64_32S against symbol `RamSize' defined in *ABS* section in dhrystone.exe

Using -O0 reliably compiles for me, so for the merge patch, I'll leave
it at -O0 with an XXX comment.

Besides that, the only other potential blocker is the GCC patch
that'll allow the port to use an empty bsp_specs file - for now, I'm
leaving bsp_specs in so the port remains functional.

On Fri, Jul 6, 2018 at 6:31 AM, Gedare Bloom  wrote:
> On Thu, Jul 5, 2018 at 10:02 AM, Amaan Cheval  wrote:
>> Hi!
>>
>> Yeah, the individual commits aren't logical at all. I just thought we
>> could review the actual code contents (in the "files changed" tab) on
>> Github (https://github.com/AmaanC/rtems-gsoc18/pull/1/files). Is this
>> what you meant by "final patchset"?
>>
>> Once this PR is polished up sufficiently, I can submit the squashed,
>> rebased commits to devel - that way we maintain the commit history for
>> how things progressed temporally, and still have the logical commits
>> we want upstream.
>>
>> If you'd prefer, I can squash on Github too (it just means I'd then
>> for a brief period not have the "daily" commit branch we aim to have
>> for GSoC, as I'd have to rebase to make changes instead of adding new
>> commits to the daily branch to indicate more "work done"). Let me know
>> if you prefer this!
>>
>
> You can do your squash/fixup on a new branch, and then restart your
> daily when needed. It is OK for the merger period to have less code
> activity.
>
> I took a quick look and commented on your github.
>
>> Thanks for catching the missing copyright in linkcmds!
>>
>> On Thu, Jul 5, 2018 at 7:16 PM, Sebastian Huber
>>  wrote:
>>> Hello Amaan,
>>>
>>> I think it is quite confusing to review the individual commits. I will review
>>> the final patch set.
>>>
>>> Just one thing:
>>>
>>> x86_64-rtems5-ld --verbose | head -n 20
>>> GNU ld (GNU Binutils) 2.30
>>>   Supported emulations:
>>>elf_x86_64
>>>elf_i386
>>>elf_iamcu
>>>elf32_x86_64
>>>elf_l1om
>>>elf_k1om
>>> using internal linker script:
>>> ==
>>> /* Script for -z combreloc: combine and sort reloc sections */
>>> /* Copyright (C) 2014-2018 Free Software Foundation, Inc.
>>>Copying and distribution of this script, with or without modification,
>>>are permitted in any medium without royalty provided the copyright
>>>notice and this notice are preserved.  */
>>> OUTPUT_FORMAT("elf64-x86-64", "elf64-x86-64",
>>>   "elf64-x86-64")
>>> OUTPUT_ARCH(i386:x86-64)
>>> ENTRY(_start)
>>> SEARCH_DIR("/build/rtems/5/x86_64-rtems5/lib");
>>>
>>> Please add the FSF notice to your file if necessary.
>>>
>>> --
>>> Sebastian Huber, embedded brains GmbH
>>>
>>> Address : Dornierstr. 4, D-82178 Puchheim, Germany
>>> Phone   : +49 89 189 47 41-16
>>> Fax : +49 89 189 47 41-09
>>> E-Mail  : sebastian.hu...@embedded-brains.de
>>> PGP : Public key available on request.
>>>
>>> This message is not a business communication within the meaning of the EHUG.
>>>

Re: [PATCH] Rework to minimize and eventually eliminate RTEMS use of bsp_specs

2018-07-08 Thread Amaan Cheval
To make my previous email clearer, here's what I meant with the
"minimal" GCC patch required (attached).

To manually test, you can place gcc-STARTFILE_SPEC.patch in
$RSB/rtems/patches/ and then "git apply rsb-startfile.diff" to the RSB
repo. Then build GCC and confirm that "x86_64-rtems5-gcc -dumpspecs"
includes crti and crtbegin in the startfile substitution.

Let me know if we aim to have this GCC work done before merging the
x86_64 BSP (see
https://lists.rtems.org/pipermail/devel/2018-July/022388.html) so I
can leave bsp_specs in or clear it out accordingly.

For now, I'm going to leave it in.

On Fri, Jul 6, 2018 at 10:46 AM, Amaan Cheval  wrote:
> Hey, Joel!
>
> The x86_64 BSP currently uses an empty bsp_specs file contingent on
> (at least the x86-64 parts of) this email thread's patch making it
> upstream to GCC, and making their way into the RSB.
>
> 2 options:
> - 1. Make the upstream GCC commit (at least the parts adding
> rtemself64.h, editing config.gcc, and "#if 0"ing out
> gcc/config/rtems.h)
> - 2. Use a bsp_specs in the new BSP for the merge now, and empty it out later
>
> I can test and send you an x86_64 specific patch for GCC if you'd
> like. Or if you prefer to have all the work together, we can go with
> #2.
>
> Let me know!
>
> On Sat, May 19, 2018 at 3:17 AM, Joel Sherrill  wrote:
>> Thanks. I will try to deal with this Monday.
>>
>> My specs patches are not ready to push to gcc so I need to focus on
>> just the parts to make x86_64 right.
>>
>> On Fri, May 18, 2018 at 3:41 PM, Amaan Cheval 
>> wrote:
>>>
>>> To be clear, I applied this patch (with my fixes) on the 7.3 release
>>> through the RSB to test, not on GCC's master branch.
>>>
>>> > to add i386/rtemself64.h
>>>
>>> What you sent in this email thread adds rtemself64.h already. Do you
>>> mean you'd like to split the commits up or something?
>>>
>>> The only changes I made on top of yours were:
>>>
>>> - Readd "rtems.h" to config.gcc
>>> - Fix comments
>>>
>>> I've attached the patch file I used within the RSB here (sorry if you
>>> meant a patch of _just_ the fixes I made on top of yours, this is just
>>> the cumulative diff I used to patch GCC 7.3 to test).
>>>
>>> Regards,
>>>
>>> On Fri, May 18, 2018 at 7:00 PM, Joel Sherrill  wrote:
>>> >
>>> >
>>> >
>>> > On Fri, May 18, 2018 at 1:38 AM, Amaan Cheval 
>>> > wrote:
>>> >>
>>> >> I just compiled my local fixed copy (adding rtems.h back in) and
>>> >> there's good news! With the patch, the x86_64 compile stub works with
>>> >> a blank bsp_specs file!
>>> >
>>> >
>>> > Awesome!
>>> >
>>> > Can you send me your changes as a patch? I am thinking I need to make
>>> > sure we agree on what the gcc master for x86_64-rtems looks like.
>>> >
>>> > Apparently I owe committing a patch to add i386/rtemself64.h since it is
>>> > missing on the master. And the comment is wrong.  What else?
>>> >
>>> >> On Fri, May 18, 2018 at 12:59 AM, Amaan Cheval 
>>> >> wrote:
>>> >> > Hey!
>>> >> >
>>> >> > Thanks so much for sharing this, it's quite useful to put your
>>> >> > earlier
>>> >> > email[1] about minimzing the bsp_specs in context.
>>> >> >
>>> >> > From looking ahead a bit without testing (still compiling), the patch
>>> >> > may need an ENDFILE_SPEC definition as well for "crtend.o" (it
>>> >> > defines
>>> >> > __TMC_END__ which crtbegin.o has left undefined for eg.) and possibly
>>> >> > "crtn.o", at least to eliminate the x86_64 port's bsp_specs entirely
>>> >> > (see here[2]).
>>> >>
>>> >> Just noticed that ENDFILE_SPEC already includes crtend in i386elf.h,
>>> >> so there's no need for this change.
>>> >>
>>> >> >
>>> >> > I've also left some comments inline below.
>>> >> >
>>> >> > +1 on upstreaming this into GCC (making sure it also backports to 7.3
>>> >> > for simplicity, so we don't need to write a 7.3-specific patch for
>>> >> > the
>>> >> > RSB as well) with a few additons (at least for the x86_64 target, to
>>> >> > try to have 

Re: [GSoC - x86_64] Minimal BSP to review

2018-07-05 Thread Amaan Cheval
Hi!

Yeah, the individual commits aren't logical at all. I just thought we
could review the actual code contents (in the "files changed" tab) on
Github (https://github.com/AmaanC/rtems-gsoc18/pull/1/files). Is this
what you meant by "final patchset"?

Once this PR is polished up sufficiently, I can submit the squashed,
rebased commits to devel - that way we maintain the commit history for
how things progressed temporally, and still have the logical commits
we want upstream.

If you'd prefer, I can squash on Github too (it just means I'd then
for a brief period not have the "daily" commit branch we aim to have
for GSoC, as I'd have to rebase to make changes instead of adding new
commits to the daily branch to indicate more "work done"). Let me know
if you prefer this!

Thanks for catching the missing copyright in linkcmds!

On Thu, Jul 5, 2018 at 7:16 PM, Sebastian Huber
 wrote:
> Hello Amaan,
>
> I think it is quite confusing to review the individual commits. I will review
> the final patch set.
>
> Just one thing:
>
> x86_64-rtems5-ld --verbose | head -n 20
> GNU ld (GNU Binutils) 2.30
>   Supported emulations:
>elf_x86_64
>elf_i386
>elf_iamcu
>elf32_x86_64
>elf_l1om
>elf_k1om
> using internal linker script:
> ==
> /* Script for -z combreloc: combine and sort reloc sections */
> /* Copyright (C) 2014-2018 Free Software Foundation, Inc.
>Copying and distribution of this script, with or without modification,
>are permitted in any medium without royalty provided the copyright
>notice and this notice are preserved.  */
> OUTPUT_FORMAT("elf64-x86-64", "elf64-x86-64",
>   "elf64-x86-64")
> OUTPUT_ARCH(i386:x86-64)
> ENTRY(_start)
> SEARCH_DIR("/build/rtems/5/x86_64-rtems5/lib");
>
> Please add the FSF notice to your file if necessary.
>
> --
> Sebastian Huber, embedded brains GmbH
>
> Address : Dornierstr. 4, D-82178 Puchheim, Germany
> Phone   : +49 89 189 47 41-16
> Fax : +49 89 189 47 41-09
> E-Mail  : sebastian.hu...@embedded-brains.de
> PGP : Public key available on request.
>
> This message is not a business communication within the meaning of the EHUG.
>

[GSoC - x86_64] Minimal BSP to review

2018-07-05 Thread Amaan Cheval
Hi!

I've made a pull-request that's nearly complete on Github:
https://github.com/AmaanC/rtems-gsoc18/pull/1/

I'd appreciate a review before I squash it into 2 commits (1. minimal
BSP reaching Init task, and 2. adding the NS16550 console driver) and
submit the squashed patches to devel@ after this review.

I'll work on a blog post summarizing the work so far and the BSP
specific user documentation in the meantime.

Cheers,
Amaan


Re: [GSoC - x86_64] Current state, next steps, and minimal mergable BSP

2018-07-04 Thread Amaan Cheval
Hi!

Quick question - when I squash and submit the patches to devel, do 2
commits make sense?
1. Initial BSP that can build completely and get to the user's Init task
2. NS16550 console driver that lets printf/printk work

Another question - my code has a bunch of XXX comments in it - are we
okay to leave those as-is? Should I change some to TODOs where
appropriate? (That seems like unnecessary effort since the BSP is
still in flux a fair bit.)

I've made a WIP pull-request on Github to make some early feedback
easier (patches on the mailing list would likely feel cluttered for
this much code, I think). There's a bunch of clean-up left before I'll
squash (see the to-do in the PR), but I'd appreciate a skimmed review
if possible for anything else I need to do pre-merge:
https://github.com/AmaanC/rtems-gsoc18/pull/1

Let me know what you think!

On Sun, Jul 1, 2018 at 7:06 AM, Chris Johns  wrote:
>
>> On 29 Jun 2018, at 11:37 pm, Amaan Cheval  wrote:
>>
>> On Fri, Jun 29, 2018 at 6:46 PM, Sebastian Huber
>>  wrote:
>>>
>>> From my point of view we can merge this stuff right now if the license and
>>> copyright status is clear of all files and it builds all tests.
>>
>> Noted. I'll start cleaning right away, then, unless someone disagrees soon.
>>
>
> I am happy to see this code merged as soon as possible.
>
> Chris


Re: GSoC IRC Meeting reschedule: 7/4 to 7/3

2018-07-03 Thread Amaan Cheval
Hey!

Sorry, I totally forgot we meant to reschedule this week's meeting - I
have another conflicting commitment at that time. I'll try to pop in
for a bit if possible.

I'll update the wiki page with this week's work, but here's the update
I'd have sent on IRC:
- Got basic UART/console driver working using RTEMS' existing NS16550
driver code
- Rebase with master and fix misc. issues that cropped up (commit -
https://github.com/AmaanC/rtems-gsoc18/commit/a28badd8df1904496c27b262ae515d22da544713)
- WIP: Misc. clean up (cpu.h primarily for now, but a lot of things
that _work_ but may not be in their ideal state (for eg.
CPU_Context_switch))

Up next:
- Finish clean up for port and send a v1 patch for review to devel@
(estimate: done end of this week or early next week)
- Document method of using FreeBSD's bootloader in the user BSP
documentation (and on my blog)
- Document current state of BSP in the user BSP documentation

No blockers at the moment.

I haven't blogged my progress because I've been putting those status
updates on the mailing list - it didn't feel like it deserved a blog
post (since it would be so highly context-specific, it wouldn't stand
on its own). Let me know if you'd still prefer to have those blog
posts, even if only in summarizing updates/decisions made on the
mailing list.

Have a good fourth of July!

On Mon, Jul 2, 2018 at 11:29 PM, Dannie Huang  wrote:
> Got it! Enjoy the break.
>
> On Mon, Jul 2, 2018 at 12:21 PM, Gedare Bloom  wrote:
>>
>> Hello All,
>>
>> We will hold the weekly IRC meeting on Tuesday 7/3 (tomorrow!) instead
>> of 7/4 at the same usual time (10am Eastern Daylight Time) due to the
>> US holiday on 7/4. Joel will likely lead the meeting.
>>
>> Gedare
>
>


Re: [GSoC - x86_64] Current state, next steps, and minimal mergable BSP

2018-06-29 Thread Amaan Cheval
On Fri, Jun 29, 2018 at 6:46 PM, Sebastian Huber
 wrote:
> Hello Amaan,
>
> On 29/06/18 14:31, Amaan Cheval wrote:
>>
>> Hi!
>>
>> There are 3 sections to this email:
>> - An update on the current state
>> - What I plan to work on next
>> - An open question on when we want to merge this upstream
>>
>>
>> 
>>
>> The current state of the BSP (available at
>> https://github.com/AmaanC/rtems-gsoc18/) is:
>>
>> - Using FreeBSD's bootloader (loader.efi) to load RTEMS' ELF image
>> (replacing the existing FreeBSD /boot/kernel/kernel file)
>>
>> - Likely complete linker script (linkcmds includes TLS sections and
>> SYSINIT seems to work)
>
>
> I reworked the initialization and interrupt stack allocation and checked in
> the patch for this today:
>
> https://devel.rtems.org/ticket/3459
>
> I will update the documentation next week. You need a new section in the
> linker command file:
>
>   .rtemsstack (NOLOAD) : {
> *(SORT(.rtemsstack.*))
>   }
>

Thanks for the heads-up - those patches are part of why I haven't
rebased recently :)
I likely will soon.

>>
>> - bspgetworkarea does _NOT_ detect available memory size right now, it
>> just uses all stub values (I believe FreeBSD's bootloader leaves this
>> information in a struct somewhere, but I need to look into it more to
>> know for sure)
>>
>> - Untidy context-switching (how do we decide which registers should or
>> shouldn't be saved? For eg. rdi, rsi are part of the calling
>> convention and are hence clobbered by merely calling
>> _CPU_Context_switch - should everything but those 2 be excluded?)
>
>
> Since the _CPU_Context_switch() is a function call, you only have to
> save/restore the non-volatile (callee-saved) registers and thread-local
> registers.
>
>>
>> - Polled console driver using ns16550-context, console-termios,
>> console-termios-init (hello.exe works
>> https://gist.github.com/AmaanC/9d95e50d3ae3dacbe7c91169b7633cfe, the
>> "Test" on L58 is me adding a printk to confirm printk works too.)
>>NOTE: The test never ends by itself - we don't have a shutdown
>> routine yet, so it just loops idly, forever.
>>
>>
>> 
>>
>> My rough next steps (subject to reshuffling based on your feedback,
>> and realizing I didn't know all the requirements / possibilities) are:
>>
>> - Work on ticker.exe passing with the idle-clock task
>> (clock-simidle.c) if possible
>> - Clean up the existing code we have and we ought to leave some time
>> for code reviews
>> - Document anything that isn't already documented (how to load the
>> RTEMS ELF into a FreeBSD image, for eg. - it's not friendly to
>> iterative development because you need QEMU to edit the UFS filesystem
>> if your host is a standard Linux kernel - see[1].)
>
>
> We have a new place for BSP documentation:
>
> https://docs.rtems.org/branches/master/user/bsps/bsps-x86_64.html
>

Ooh, brilliant, I'd missed that. I'll send patches there instead of
the wiki after the code is closer to being merged and more stable.

>> - Look into ISR code needed
>
>
> In the ISR code you have to save/restore the volatile (caller-saved)
> registers and thread-local registers.
>
>> - Move console code to interrupt mode (from current polled mode)
>> - Look into ACPI (specifically at least be able to shutdown / reset
>> the system to cleanly exit)
>> - Misc. subtle issues with specific tests possibly failing
>> - Bonus items, if there's time
>>
>>
>> 
>>
>> Is there anything from the above list you'd like sooner, as part of
>> the BSP we merge in the coming week or two? Is the current technical
>> state sufficient to be merged (after cleanups)?
>>
>> My understanding is that we don't really have a _hard_ requirement on
>> what the minimal BSP is that gets merged, but given that we reach the
>> Init task and printf/console drivers work, do we want to merge ASAP?
>> Or do we prefer to have ticker.exe passing, a real interrupt based
>> clock driver, etc. functioning too, if we can (i.e. should I see if I
>> can rush those)?
>
>
> From my point of view we can merge this stuff right now if the license and
> copyright status is clear of all files and it builds all tests.

Noted. I'll start cleaning right away, then, unless someone disagrees soon.

>

[GSoC - x86_64] Current state, next steps, and minimal mergable BSP

2018-06-29 Thread Amaan Cheval
Hi!

There are 3 sections to this email:
- An update on the current state
- What I plan to work on next
- An open question on when we want to merge this upstream



The current state of the BSP (available at
https://github.com/AmaanC/rtems-gsoc18/) is:

- Using FreeBSD's bootloader (loader.efi) to load RTEMS' ELF image
(replacing the existing FreeBSD /boot/kernel/kernel file)

- Likely complete linker script (linkcmds includes TLS sections and
SYSINIT seems to work)

- bspgetworkarea does _NOT_ detect available memory size right now, it
just uses all stub values (I believe FreeBSD's bootloader leaves this
information in a struct somewhere, but I need to look into it more to
know for sure)

- Untidy context-switching (how do we decide which registers should or
shouldn't be saved? For eg. rdi, rsi are part of the calling
convention and are hence clobbered by merely calling
_CPU_Context_switch - should everything but those 2 be excluded?)

- Polled console driver using ns16550-context, console-termios,
console-termios-init (hello.exe works
https://gist.github.com/AmaanC/9d95e50d3ae3dacbe7c91169b7633cfe, the
"Test" on L58 is me adding a printk to confirm printk works too.)
  NOTE: The test never ends by itself - we don't have a shutdown
routine yet, so it just loops idly, forever.



My rough next steps (subject to reshuffling based on your feedback,
and realizing I didn't know all the requirements / possibilities) are:

- Work on ticker.exe passing with the idle-clock task
(clock-simidle.c) if possible
- Clean up the existing code we have and we ought to leave some time
for code reviews
- Document anything that isn't already documented (how to load the
RTEMS ELF into a FreeBSD image, for eg. - it's not friendly to
iterative development because you need QEMU to edit the UFS filesystem
if your host is a standard Linux kernel - see[1].)
- Look into ISR code needed
- Move console code to interrupt mode (from current polled mode)
- Look into ACPI (specifically at least be able to shutdown / reset
the system to cleanly exit)
- Misc. subtle issues with specific tests possibly failing
- Bonus items, if there's time



Is there anything from the above list you'd like sooner, as part of
the BSP we merge in the coming week or two? Is the current technical
state sufficient to be merged (after cleanups)?

My understanding is that we don't really have a _hard_ requirement on
what the minimal BSP is that gets merged, but given that we reach the
Init task and printf/console drivers work, do we want to merge ASAP?
Or do we prefer to have ticker.exe passing, a real interrupt based
clock driver, etc. functioning too, if we can (i.e. should I see if I
can rush those)?

Cheers, and sorry about the lengthy email!

[1] https://lists.rtems.org/pipermail/devel/2018-June/022166.html


devel.rtems.org unavailabe?

2018-06-29 Thread Amaan Cheval
Hi!

Here's what I see when visiting https://devel.rtems.org/:

Service Unavailable

The server is temporarily unable to service your request due to
maintenance downtime or capacity problems. Please try again later.

Regards,
Amaan


Re: [GSoC - x86_64] Console / serial communication

2018-06-28 Thread Amaan Cheval
I believe it isn't, but QEMU is being "helpful" and multiplexing the COM1
(RTEMS) stream with the VGA/VBE/etc. text mode streams.

I'm looking into how to either:
- Make QEMU only print COM1 to stdio
- Or, how to quiet FreeBSD through configuration (loader.conf lets us set
console to comconsole, vidconsole, nullconsole, and efi)

I'll let y'all know as things progress!

Would it be a blocker if we couldn't quiet the UEFI firmware and loader.efi?

On Thu, Jun 28, 2018, 3:39 AM Chris Johns  wrote:

> On 28/06/2018 00:37, Amaan Cheval wrote:
> > Since we skipped our meeting today, here's a quick screengrab of UART
> > working with -serial stdio on QEMU (just inb/outb instructions
> > directly, without termios or ns16550):
> > https://i.imgur.com/tumtD3Z.png
>
> Nice. Is the loader.efi also using the UART?
>
> Chris
>

Re: [GSoC - x86_64] Console / serial communication

2018-06-26 Thread Amaan Cheval
Hi!

Quick question since the BSP guide is outdated - I see several
"methods" of RTEMS' console management. The guide says to use
console-termios.c (and rtems_termios_device_install) as the "new"
method.
https://docs.rtems.org/branches/master/bsp-howto/console.html#build-system-and-files

Most BSPs (beagle, pc386, malta) using NS16550 use legacy-console.c,
though - are NS16550 and Termios mutually exclusive for now?

Or is it simply that none of the old NS16550 users have been ported
over to using console-termios.c as well?

What would be the expectation from a new BSP?

On Tue, Jun 26, 2018 at 12:42 PM, Amaan Cheval  wrote:
> On Tue, Jun 26, 2018 at 12:15 PM, Chris Johns  wrote:
>> On 26/06/2018 14:29, Amaan Cheval wrote:
>>> On Tue, Jun 26, 2018 at 4:10 AM, Chris Johns  wrote:
>>>> On 25/06/2018 21:40, Amaan Cheval wrote:
>>>>
>>>> Will a text based video console be supported?
>>>
>>> As above, I can't say :P
>>> The only guiding factor I have for GSoC is to do the _minimal_ needed
>>> port first, because the project is quite wide - once _all_ the basics
>>> (clock next, some of IRQ, maybe ACPI) are done, we can look into what
>>> to improve further if we have the time.
>>
>> Sure this is best approach.
>>
>>> Given the possibility that we may not be able to add more than one
>>> console this summer, which would we want the most? Which driver? (I'm
>>> trying to do my own research too, of course, but if there's an obvious
>>> one you can think of, let me know!)
>>
>> Let's see how the serial UART goes first.
>
> Cool.
>
> Just as a note for the future: loader.efi does seem to set some EFI
> framebuffer (efifb) work up - I suspect it's similar to how some
> bootinfo is passed to the kernel
> (http://fxr.watson.org/fxr/source/i386/include/bootinfo.h?v=FREEBSD11#L48).
> This is worth investigating later for:
> - Seeing how the ELF kernel detects memory info
> - How to access the UEFI UGA/GOP graphics APIs
> - Accessing UEFI's RuntimeServices (most of them seem unnecessary, but
> ResetSystem may be useful in lieu of ACPI work)
>
> For now I'm just focusing on the UART work, and then hopefully we can
> get hello.exe passing and get the minimal BSP upstreamed after major
> cleanups (- if the UART work takes too long, I believe we ought to
> clean up and merge _some_ before phase 2 regardless).
>
>>
>>>
>>> I only picked UART because it seems to be what's used when we use
>>> QEMU's "-serial stdio" flags.
>>>
>>
>> This is a good option and I often use it on a PC.
>>
>> Chris


Re: [GSoC - x86_64] Console / serial communication

2018-06-26 Thread Amaan Cheval
On Tue, Jun 26, 2018 at 12:15 PM, Chris Johns  wrote:
> On 26/06/2018 14:29, Amaan Cheval wrote:
>> On Tue, Jun 26, 2018 at 4:10 AM, Chris Johns  wrote:
>>> On 25/06/2018 21:40, Amaan Cheval wrote:
>>>
>>> Will a text based video console be supported?
>>
>> As above, I can't say :P
>> The only guiding factor I have for GSoC is to do the _minimal_ needed
>> port first, because the project is quite wide - once _all_ the basics
>> (clock next, some of IRQ, maybe ACPI) are done, we can look into what
>> to improve further if we have the time.
>
> Sure this is best approach.
>
>> Given the possibility that we may not be able to add more than one
>> console this summer, which would we want the most? Which driver? (I'm
>> trying to do my own research too, of course, but if there's an obvious
>> one you can think of, let me know!)
>
> Let's see how the serial UART goes first.

Cool.

Just as a note for the future: loader.efi does seem to set some EFI
framebuffer (efifb) work up - I suspect it's similar to how some
bootinfo is passed to the kernel
(http://fxr.watson.org/fxr/source/i386/include/bootinfo.h?v=FREEBSD11#L48).
This is worth investigating later for:
- Seeing how the ELF kernel detects memory info
- How to access the UEFI UGA/GOP graphics APIs
- Accessing UEFI's RuntimeServices (most of them seem unnecessary, but
ResetSystem may be useful in lieu of ACPI work)

For now I'm just focusing on the UART work, and then hopefully we can
get hello.exe passing and get the minimal BSP upstreamed after major
cleanups (- if the UART work takes too long, I believe we ought to
clean up and merge _some_ before phase 2 regardless).

>
>>
>> I only picked UART because it seems to be what's used when we use
>> QEMU's "-serial stdio" flags.
>>
>
> This is a good option and I often use it on a PC.
>
> Chris


Re: [GSoC - x86_64] Console / serial communication

2018-06-25 Thread Amaan Cheval
On Tue, Jun 26, 2018 at 4:10 AM, Chris Johns  wrote:
> On 25/06/2018 21:40, Amaan Cheval wrote:
>> Hi!
>>
>> In the last thread about using FreeBSD's bootloader to bring UEFI
>> support to our port, Chris said this:
>>
>>> It has been a couple of years but I think FreeBSD contains some of the Intel
>>> code to interface to UEFI and via this you can get to the UEFI console. This
>>> should be easy but it comes with a side effect.
>>>
>>> UEFI boots in graphics mode and so its console on a PC is a slow scroll
>>> one. On boards like a Minnow using the UEFI console has the advantage of
>>> being able to support any redirection UEFI has enabled such as a serial
>>> port. The disadvantage of this is performance and overhead. In time this
>>> may be a boot option.
>>>
>>> What I am not sure is the boundary between UEFI and the kernel and what is
>>> enabled or available when the kernel is loaded.
>>
>> Source: https://lists.rtems.org/pipermail/devel/2018-June/022136.html
>>
>> According to my understanding, the efi_console is not available to the
>> kernel - only to the loader.efi (the PE image of the bootloader, which
>> eventually loads the ELF kernel).
>
> OK.
>
>> (In brief, my reasoning for this belief is that efi_console is defined
>> and initialized only through the bootloader code (in stand/efi/main.c,
>> when cons_probe is called). The efi_console relies on UEFI's
>> BootServices, which won't be accessible when the ELF kernel is loaded,
>> because ExitBootServices() is called before loading the kernel (as
>> part of bi_load).)
>
> I suspect I was looking at a direct entry to RTEMS from EFI rather than the
> better way of using loader.efi.
>
>> Given that information, it makes more sense to actually port a console
>> driver that the ELF kernel uses, not that the bootloader uses.
>
> Yes this makes sense. Thank for you for researching this and resolving it.

Sure thing!

>
>>
>> I'll look into how FreeBSD implements their UART console
>> (https://github.com/freebsd/freebsd/blob/1cfbfa1fae9926303f69532a97a5a766fa672582/sys/dev/uart/uart_tty.c),
>> and look into possibly reusing RTEMS' existing drivers for UART
>> (NS16550?) as applicable.
>
> There are working drivers for this device in RTEMS already (libchip) and the
> i386 BSP has some support.

Ah, yeah, that's why I mentioned it - I'm completely unfamiliar with
these drivers, so I don't really have a way of picking _the_ one to
implement first, for our minimal BSP.

>
> Will a text based video console be supported?

As above, I can't say :P
The only guiding factor I have for GSoC is to do the _minimal_ needed
port first, because the project is quite wide - once _all_ the basics
(clock next, some of IRQ, maybe ACPI) are done, we can look into what
to improve further if we have the time.

Given the possibility that we may not be able to add more than one
console this summer, which would we want the most? Which driver? (I'm
trying to do my own research too, of course, but if there's an obvious
one you can think of, let me know!)

I only picked UART because it seems to be what's used when we use
QEMU's "-serial stdio" flags.

>
>>
>> I'm not sure I know enough about serial programming _yet_, so do let
>> me know if you think you have any resources that would be useful or if
>> you'd like to fine-tune my plan!
>
> I suggest you review the ARM and SPARC bsps and follow those models. The i386
> has support for PCI based UARTS which this BSP will need to support.

Will do!

>
>>
>> P.S. - Since we last discussed it, the context initialization and
>> switching are working, and the port does now make it to the user's
>> Init task! :D
>>
>
> Awesome.
>
> Chris
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


[GSoC - x86_64] Console / serial communication

2018-06-25 Thread Amaan Cheval
Hi!

In the last thread about using FreeBSD's bootloader to bring UEFI
support to our port, Chris said this:

> It has been a couple of years but I think FreeBSD contains some of the Intel
> code to interface to UEFI and via this you can get to the UEFI console. This
> should be easy but it comes with a side effect.
>
> UEFI boots in graphics mode and so its console on a PC is a slow scroll one.
> On boards like a Minnow using the UEFI console has the advantage of being
> able to support any redirection UEFI has enabled such as a serial port. The
> disadvantage of this is performance and overhead. In time this may be a boot
> option.
>
> What I am not sure is the boundary between UEFI and the kernel and what is
> enabled or available when the kernel is loaded.

Source: https://lists.rtems.org/pipermail/devel/2018-June/022136.html

According to my understanding, the efi_console is not available to the
kernel - only to the loader.efi (the PE image of the bootloader, which
eventually loads the ELF kernel).

(In brief, my reasoning for this belief is that efi_console is defined
and initialized only through the bootloader code (in stand/efi/main.c,
when cons_probe is called). The efi_console relies on UEFI's
BootServices, which won't be accessible when the ELF kernel is loaded,
because ExitBootServices() is called before loading the kernel (as
part of bi_load).)

Given that information, it makes more sense to actually port a console
driver that the ELF kernel uses, not that the bootloader uses.

I'll look into how FreeBSD implements their UART console
(https://github.com/freebsd/freebsd/blob/1cfbfa1fae9926303f69532a97a5a766fa672582/sys/dev/uart/uart_tty.c),
and look into possibly reusing RTEMS' existing drivers for UART
(NS16550?) as applicable.

I'm not sure I know enough about serial programming _yet_, so do let
me know if you think you have any resources that would be useful or if
you'd like to fine-tune my plan!

P.S. - Since we last discussed it, the context initialization and
switching are working, and the port does now make it to the user's
Init task! :D

Cheers


Outdated documentation hosted on the RTEMS FTP (and indexed by search engines)

2018-06-25 Thread Amaan Cheval
Hi!

A month or so ago we discussed some possibly outdated docs being
hosted on the RTEMS FTP for specific users.
https://lists.rtems.org/pipermail/devel/2018-May/021480.html

Here's a few others I've found indexed on Google:
- https://ftp.rtems.org/pub/rtems/people/sebh/c-user/html/glossary.html
- https://ftp.rtems.org/pub/rtems/people/joel/doc/c_user/configuring_a_system.html

Regards,
Amaan


Re: [GSoC - x86_64 port] Status update - June 21

2018-06-21 Thread Amaan Cheval
On Thu, Jun 21, 2018 at 5:42 PM, Amaan Cheval  wrote:
> Hi!
>
> In lieu of our weekly meeting, here's a quick status update. The past
> week, I worked on:
> - Making FreeBSD's bootloader load a custom ELF kernel, and then RTEMS
> static binaries
> - Fixing cpu.h up to reflect the x86_64 feature set
> - Fixing the linker script up for my stub to allow for sysinit
> functions to be included and called
> - WIP: Context initialization and switching code
>
> Next up (estimating 2 weeks until complete):
> - Finish all of the RTEMS initialization required to get to bsp_start
> successfully (for eg. it seems like sysinit calls filesystem
> initialization routines, which I'm stubbing out for now, but will need
> to implement to avoid having the testsuite end up throwing
> "rtems_fatal_error_occurred").

Quick update to my update - we're already reaching bsp_start since
it's called through the sysinit functions. What I meant was getting to
the user's Init task - i.e. going through stubs of the entire
initialization chain as much as is possible, and hitting the
user-defined Init task.

Once that's done, I'll be filling in the stub with working / minimal
code enough to get the console working and hello.exe passing.

> - Port FreeBSD's console code to get to hello world with our stub
>
> I'm aiming to get to a hacky working hello.exe as soon as possible,
> and then we can work on making all the initialization routines, the
> linker script, etc. be clean and perfectly correct instead of merely
> functional in the emulator.
>
> Cheers


[GSoC - x86_64 port] Status update - June 21

2018-06-21 Thread Amaan Cheval
Hi!

In lieu of our weekly meeting, here's a quick status update. The past
week, I worked on:
- Making FreeBSD's bootloader load a custom ELF kernel, and then RTEMS
static binaries
- Fixing cpu.h up to reflect the x86_64 feature set
- Fixing the linker script up for my stub to allow for sysinit
functions to be included and called
- WIP: Context initialization and switching code

Next up (estimating 2 weeks until complete):
- Finish all of the RTEMS initialization required to get to bsp_start
successfully (for eg. it seems like sysinit calls filesystem
initialization routines, which I'm stubbing out for now, but will need
to implement to avoid having the testsuite end up throwing
"rtems_fatal_error_occurred").
- Port FreeBSD's console code to get to hello world with our stub

I'm aiming to get to a hacky working hello.exe as soon as possible,
and then we can work on making all the initialization routines, the
linker script, etc. be clean and perfectly correct instead of merely
functional in the emulator.

Cheers


Copyright and license notices in new and ported code

2018-06-15 Thread Amaan Cheval
Hi!

In some RTEMS source files I see a series of copyright notices,
including those for individuals and corporations, for eg.:
https://git.rtems.org/rtems/tree/cpukit/score/cpu/arm/include/rtems/score/cpu.h#n7
https://git.rtems.org/rtems/commit/?id=660db8c86fa16dc67c40bdeebbf671e50a7f3087
(again, in cpu.h)

I was just wondering how this ought to be for new code. Is this meant
to be handled on a file-by-file basis? I don't see anything about it
on the wiki either.

For now I'm using this, let me know if I need to add OAR to the comment too:

https://github.com/AmaanC/rtems-gsoc18/blob/ac/daily-01-compile-stub/cpukit/score/cpu/x86_64/include/rtems/score/cpu.h#L10

I haven't yet, but I'll likely also port FreeBSD sources soon - in
that case I ought to just preserve the file as much as possible to
make future updates easier, right? (That is, no changes made to the
copyright or added to the code if we can avoid it (by adding new files
for customizations on top, for eg.)).

Cheers


Re: [GSoC - x86_64] Using FreeBSD's UEFI loader for RTEMS static binaries

2018-06-15 Thread Amaan Cheval
On Thu, Jun 14, 2018 at 11:25 AM, Chris Johns  wrote:
> On 14/06/2018 05:33, Joel Sherrill wrote:
>> On Wed, Jun 13, 2018, 6:57 PM Amaan Cheval > <mailto:amaan.che...@gmail.com>> wrote:
>>
>> On Wed, Jun 13, 2018 at 9:35 PM, Gedare Bloom > <mailto:ged...@rtems.org>> wrote:
>>     > On Wed, Jun 13, 2018 at 11:33 AM, Amaan Cheval > <mailto:amaan.che...@gmail.com>> wrote:
>> >> Hi!
>> >>
>> >> As we discussed in the last thread on the topic[1], I'm trying to use
>> >> FreeBSD's loader.efi directly with RTEMS' generated static binaries
>> >> (since FreeBSD's loader.efi has an ELF loader).
>> >>
>> >> In brief, I did this by:
>> >> - Installing FreeBSD in QEMU with UEFI firmware
>> >> - Confirming that FreeBSD's loader.efi is in fact used
>> >> - Replacing FreeBSD's ELF kernel with a "custom" kernel[2] with an 
>> RTEMS ELF
>> >> - Verifying that the code running after FreeBSD's loader.efi is in
>> >> fact the "RTEMS ELF" by attaching gdb to QEMU (the rtems ELF is simply
>> >> a while(1) loop compiled with RTEMS' tools - see later on why I can't
>> >> do something more elaborate)
>> >>
>> >> Some more details of the process I followed for testing this:
>> >> https://gist.github.com/AmaanC/42faa131ee97a1d6c4c7c25c29f0fde9z
>> >>
>> >> I think this method is superior to the PIC RTEMS method because:
>> >> - FreeBSD uses it
>> >> - RTEMS retains static ELF binaries, which can likely easily be
>> >> combined with a Multiboot header + protect mode starter code
>> >> - FreeBSD has methods to provide ACPI related hints to their ELF
>> >> kernel - this might make our implementation with regards to ACPI
>> >> simpler too
>
> I agree this is the best approach. In time we can host on our file server a
> package of FreeBSD binaries that boot an RTEMS kernel.

Something worth noting here is that most Linux kernels (at least
Ubuntu and Debian, from my tests) would need to be recompiled to
enable write-support for UFS filesystems, which FreeBSD uses.
https://unix.stackexchange.com/questions/24589/mounting-ufs-partition-with-read-write-permissions-on-ubuntu-10-04

What this means is that it's likely best to provide to users a disk
image _including_ the FreeBSD kernel too, not just the bootloader,
since it can then be used to edit the image itself in QEMU to replace
the kernel with the RTEMS static binary.

Procedure would look like:
- Download freebsd.img
- Run qemu ... freebsd.img
- Edit filesystem to replace /boot/kernel/kernel with RTEMS binary
(leaving old FreeBSD kernel as a backup in /boot/kernel.old/)

I just thought I'd bring this caveat up. We can write up documentation
on the exact steps needed to accomplish this on the wiki soon too.

>
>> >>
>> >> Regarding some concerns Chris had with linker options and whatnot,
>> >> here's what FreeBSD uses:
>> >> https://www.freebsd.org/doc/en/books/arch-handbook/boot-kernel.html
>> >>
>> >> Here's what I used (with the code being a simple while(1) loop):
>> >>   x86_64-rtems5-gcc ktest.c -c -nostdlib
>> >>   x86_64-rtems5-ld ktest.o -e main -o kernel
>> >>
>
> Nice, this looks fine. It is normal for a bare metal piece of C code.
>
>> >>
>> 
>> -
>> >>
>> >> What I need input on:
>> >> - Right now, we use the following RTEMS code for testing:
>> >>
>> >> int main() {
>> >>   while(1) {}
>> >> }
>> >>
>> >
>> > It's not really an RTEMS code, it is a C program (ktest.c) compiled
>> > with the RTEMS-flavored toolchain, right?
>>
>> Yeah, for now that's right. I'm going to conduct the same gdb based
>> debug-stepping style test for RTEMS setting boot_card as the entry
>> point soon - for now, it crashes QEMU with:
>>
>> qemu: fatal: Trying to execute code outside RAM or ROM at 
>> 0x000b
>>
>> RAX=006004c0 RBX=006003d8 RCX=37f36000
>> RDX=0040
>> RSI=0400 RDI=0180 RBP=006003d8
>> RSP=3c589fb8
>> ...
>>
>> I see that it reaches that stage even from some code it ought not to
>> be executing, so I'll look into what that may be about.

Re: [GSoC - x86_64] Using FreeBSD's UEFI loader for RTEMS static binaries

2018-06-14 Thread Amaan Cheval
You can't return to loader.efi from the kernel, from what I can tell.
loader.efi executes the ELF kernel through a trampoline:
https://github.com/freebsd/freebsd/blob/433bd38e3a0349f9f89f9d54594172c75b002b74/stand/efi/loader/arch/amd64/elf64_freebsd.c#L197-L200

Defined here:
https://github.com/freebsd/freebsd/blob/433bd38e3a0349f9f89f9d54594172c75b002b74/stand/efi/loader/arch/amd64/amd64_tramp.S#L40

The only way to get back to EFI that I see is through a system reset
or some dark voodoo that switches back to efi_main directly (the
mystic arts will likely be harder than just figuring ACPI out, though
:P).

On Fri, Jun 15, 2018 at 5:33 AM, Chris Johns  wrote:
> On 15/06/2018 03:25, Amaan Cheval wrote:
>>
>> For shutdown and whatnot, I imagine I'll need to get our ACPI support
>> out there too, so that may be on the backburner for a little bit.
>>
>
> What happens if you return to loader.efi?
>
> Does it return to the EFI OS?
>
> Chris


Re: [GSoC - x86_64] Using FreeBSD's UEFI loader for RTEMS static binaries

2018-06-14 Thread Amaan Cheval
Gotcha. We can leave an #error directive in what we merge initially too,
like we did for the i386 SMP support.

On Thu, Jun 14, 2018, 11:55 PM Gedare Bloom  wrote:

> On Thu, Jun 14, 2018 at 1:25 PM, Amaan Cheval 
> wrote:
> > Sounds good!
> >
> > For shutdown and whatnot, I imagine I'll need to get our ACPI support
> > out there too, so that may be on the backburner for a little bit.
> >
> > Maybe this should be a new thread, but; what is the minimal BSP we
> > will be willing to merge upstream? I'm thinking that this process of
> > booting, initialization, and reaching the init task (but not having
> > full or possibly any console support) can be the minimal.
> >
> > Is that a reasonable assumption?
> >
>
> The preferred "minimal" BSP is one with working console and clock
> driver. It would be OK to start merging once you reach the milestone
> of context switching to the Init task, though, and then add the
> console support, and then the clock driver, before you begin the
> other, more advanced BSP features.
>
> > On Thu, Jun 14, 2018 at 9:42 PM, Joel Sherrill  wrote:
> >>
> >>
> >> On Thu, Jun 14, 2018, 6:08 PM Amaan Cheval 
> wrote:
> >>>
> >>> Thanks for your input, everyone! I appreciate it! :)
> >>>
> >>> On Thu, Jun 14, 2018 at 11:25 AM, Chris Johns 
> wrote:
> >>> > On 14/06/2018 05:33, Joel Sherrill wrote:
> >>> >> On Wed, Jun 13, 2018, 6:57 PM Amaan Cheval  >>> >> <mailto:amaan.che...@gmail.com>> wrote:
> >>> >>
> >>> >> On Wed, Jun 13, 2018 at 9:35 PM, Gedare Bloom  >>> >> <mailto:ged...@rtems.org>> wrote:
> >>> >> > On Wed, Jun 13, 2018 at 11:33 AM, Amaan Cheval
> >>> >>  >>> >> <mailto:amaan.che...@gmail.com>> wrote:
> >>> >> >> Hi!
> >>> >> >>
> >>> >> >> As we discussed in the last thread on the topic[1], I'm
> trying
> >>> >> to use
> >>> >> >> FreeBSD's loader.efi directly with RTEMS' generated static
> >>> >> binaries
> >>> >> >> (since FreeBSD's loader.efi has an ELF loader).
> >>> >> >>
> >>> >> >> In brief, I did this by:
> >>> >> >> - Installing FreeBSD in QEMU with UEFI firmware
> >>> >> >> - Confirming that FreeBSD's loader.efi is in fact used
> >>> >> >> - Replacing FreeBSD's ELF kernel with a "custom" kernel[2]
> with
> >>> >> an RTEMS ELF
> >>> >> >> - Verifying that the code running after FreeBSD's loader.efi
> is
> >>> >> in
> >>> >> >> fact the "RTEMS ELF" by attaching gdb to QEMU (the rtems ELF
> is
> >>> >> simply
> >>> >> >> a while(1) loop compiled with RTEMS' tools - see later on
> why I
> >>> >> can't
> >>> >> >> do something more elaborate)
> >>> >> >>
> >>> >> >> Some more details of the process I followed for testing this:
> >>> >> >>
> https://gist.github.com/AmaanC/42faa131ee97a1d6c4c7c25c29f0fde9z
> >>> >> >>
> >>> >> >> I think this method is superior to the PIC RTEMS method
> because:
> >>> >> >> - FreeBSD uses it
> >>> >> >> - RTEMS retains static ELF binaries, which can likely easily
> be
> >>> >> >> combined with a Multiboot header + protect mode starter code
> >>> >> >> - FreeBSD has methods to provide ACPI related hints to their
> ELF
> >>> >> >> kernel - this might make our implementation with regards to
> ACPI
> >>> >> >> simpler too
> >>> >
> >>> > I agree this is the best approach. In time we can host on our file
> >>> > server a
> >>> > package of FreeBSD binaries that boot an RTEMS kernel.
> >>> >
> >>> >> >>
> >>> >> >> Regarding some concerns Chris had with linker options and
> >>> >> whatnot,
> >>> >> >> here's what FreeBSD uses:
> >>> >> >>
> >>> >> https://www.freebsd.org/doc/en/books/arch-handbook/

Re: [GSoC - x86_64] Using FreeBSD's UEFI loader for RTEMS static binaries

2018-06-14 Thread Amaan Cheval
Sounds good!

For shutdown and whatnot, I imagine I'll need to get our ACPI support
out there too, so that may be on the backburner for a little bit.

Maybe this should be a new thread, but; what is the minimal BSP we
will be willing to merge upstream? I'm thinking that this process of
booting, initialization, and reaching the init task (but not having
full or possibly any console support) can be the minimal.

Is that a reasonable assumption?

On Thu, Jun 14, 2018 at 9:42 PM, Joel Sherrill  wrote:
>
>
> On Thu, Jun 14, 2018, 6:08 PM Amaan Cheval  wrote:
>>
>> Thanks for your input, everyone! I appreciate it! :)
>>
>> On Thu, Jun 14, 2018 at 11:25 AM, Chris Johns  wrote:
>> > On 14/06/2018 05:33, Joel Sherrill wrote:
>> >> On Wed, Jun 13, 2018, 6:57 PM Amaan Cheval > >> <mailto:amaan.che...@gmail.com>> wrote:
>> >>
>> >> On Wed, Jun 13, 2018 at 9:35 PM, Gedare Bloom > >> <mailto:ged...@rtems.org>> wrote:
>> >> > On Wed, Jun 13, 2018 at 11:33 AM, Amaan Cheval
>> >> > >> <mailto:amaan.che...@gmail.com>> wrote:
>> >> >> Hi!
>> >> >>
>> >> >> As we discussed in the last thread on the topic[1], I'm trying
>> >> to use
>> >> >> FreeBSD's loader.efi directly with RTEMS' generated static
>> >> binaries
>> >> >> (since FreeBSD's loader.efi has an ELF loader).
>> >> >>
>> >> >> In brief, I did this by:
>> >> >> - Installing FreeBSD in QEMU with UEFI firmware
>> >> >> - Confirming that FreeBSD's loader.efi is in fact used
>> >> >> - Replacing FreeBSD's ELF kernel with a "custom" kernel[2] with
>> >> an RTEMS ELF
>> >> >> - Verifying that the code running after FreeBSD's loader.efi is
>> >> in
>> >> >> fact the "RTEMS ELF" by attaching gdb to QEMU (the rtems ELF is
>> >> simply
>> >> >> a while(1) loop compiled with RTEMS' tools - see later on why I
>> >> can't
>> >> >> do something more elaborate)
>> >> >>
>> >> >> Some more details of the process I followed for testing this:
>> >> >> https://gist.github.com/AmaanC/42faa131ee97a1d6c4c7c25c29f0fde9z
>> >> >>
>> >> >> I think this method is superior to the PIC RTEMS method because:
>> >> >> - FreeBSD uses it
>> >> >> - RTEMS retains static ELF binaries, which can likely easily be
>> >> >> combined with a Multiboot header + protect mode starter code
>> >> >> - FreeBSD has methods to provide ACPI related hints to their ELF
>> >> >> kernel - this might make our implementation with regards to ACPI
>> >> >> simpler too
>> >
>> > I agree this is the best approach. In time we can host on our file
>> > server a
>> > package of FreeBSD binaries that boot an RTEMS kernel.
>> >
>> >> >>
>> >> >> Regarding some concerns Chris had with linker options and
>> >> whatnot,
>> >> >> here's what FreeBSD uses:
>> >> >>
>> >> https://www.freebsd.org/doc/en/books/arch-handbook/boot-kernel.html
>> >> >>
>> >> >> Here's what I used (with the code being a simple while(1) loop):
>> >> >>   x86_64-rtems5-gcc ktest.c -c -nostdlib
>> >> >>   x86_64-rtems5-ld ktest.o -e main -o kernel
>> >> >>
>> >
>> > Nice, this looks fine. It is normal for a bare metal piece of C code.
>> >
>> >> >>
>> >>
>> >> -
>> >> >>
>> >> >> What I need input on:
>> >> >> - Right now, we use the following RTEMS code for testing:
>> >> >>
>> >> >> int main() {
>> >> >>   while(1) {}
>> >> >> }
>> >> >>
>> >> >
>> >> > It's not really an RTEMS code, it is a C program (ktest.c)
>> >> compiled
>> >> > with the RTEMS-flavored toolchain, right?
>> >>
>> >> Yeah, for now that's right. I'm going to conduct the same gdb based
>> debug-stepping style test for RTEMS setting boot_card as the entry
>> point soon.

Re: [GSoC - x86_64] Using FreeBSD's UEFI loader for RTEMS static binaries

2018-06-14 Thread Amaan Cheval
Thanks for your input, everyone! I appreciate it! :)

On Thu, Jun 14, 2018 at 11:25 AM, Chris Johns  wrote:
> On 14/06/2018 05:33, Joel Sherrill wrote:
>> On Wed, Jun 13, 2018, 6:57 PM Amaan Cheval > <mailto:amaan.che...@gmail.com>> wrote:
>>
>> On Wed, Jun 13, 2018 at 9:35 PM, Gedare Bloom > <mailto:ged...@rtems.org>> wrote:
>>     > On Wed, Jun 13, 2018 at 11:33 AM, Amaan Cheval > <mailto:amaan.che...@gmail.com>> wrote:
>> >> Hi!
>> >>
>> >> As we discussed in the last thread on the topic[1], I'm trying to use
>> >> FreeBSD's loader.efi directly with RTEMS' generated static binaries
>> >> (since FreeBSD's loader.efi has an ELF loader).
>> >>
>> >> In brief, I did this by:
>> >> - Installing FreeBSD in QEMU with UEFI firmware
>> >> - Confirming that FreeBSD's loader.efi is in fact used
>> >> - Replacing FreeBSD's ELF kernel with a "custom" kernel[2] with an 
>> RTEMS ELF
>> >> - Verifying that the code running after FreeBSD's loader.efi is in
>> >> fact the "RTEMS ELF" by attaching gdb to QEMU (the rtems ELF is simply
>> >> a while(1) loop compiled with RTEMS' tools - see later on why I can't
>> >> do something more elaborate)
>> >>
>> >> Some more details of the process I followed for testing this:
>> >> https://gist.github.com/AmaanC/42faa131ee97a1d6c4c7c25c29f0fde9z
>> >>
>> >> I think this method is superior to the PIC RTEMS method because:
>> >> - FreeBSD uses it
>> >> - RTEMS retains static ELF binaries, which can likely easily be
>> >> combined with a Multiboot header + protect mode starter code
>> >> - FreeBSD has methods to provide ACPI related hints to their ELF
>> >> kernel - this might make our implementation with regards to ACPI
>> >> simpler too
>
> I agree this is the best approach. In time we can host on our file server a
> package of FreeBSD binaries that boot an RTEMS kernel.
>
>> >>
>> >> Regarding some concerns Chris had with linker options and whatnot,
>> >> here's what FreeBSD uses:
>> >> https://www.freebsd.org/doc/en/books/arch-handbook/boot-kernel.html
>> >>
>> >> Here's what I used (with the code being a simple while(1) loop):
>> >>   x86_64-rtems5-gcc ktest.c -c -nostdlib
>> >>   x86_64-rtems5-ld ktest.o -e main -o kernel
>> >>
>
> Nice, this looks fine. It is normal for a bare metal piece of C code.
>
>> >>
>> 
>> -
>> >>
>> >> What I need input on:
>> >> - Right now, we use the following RTEMS code for testing:
>> >>
>> >> int main() {
>> >>   while(1) {}
>> >> }
>> >>
>> >
>> > It's not really an RTEMS code, it is a C program (ktest.c) compiled
>> > with the RTEMS-flavored toolchain, right?
>>
>> Yeah, for now that's right. I'm going to conduct the same gdb based
>> debug-stepping style test for RTEMS setting boot_card as the entry
>> point soon - for now, it crashes QEMU with:
>>
>> qemu: fatal: Trying to execute code outside RAM or ROM at 
>> 0x000b
>>
>> RAX=006004c0 RBX=006003d8 RCX=37f36000
>> RDX=0040
>> RSI=0400 RDI=0180 RBP=006003d8
>> RSP=3c589fb8
>> ...
>>
>> I see that it reaches that stage even from some code it ought not to
>> be executing, so I'll look into what that may be about.

It was quite simple, really - my stub doesn't define
_CPU_Context_restore yet - rtems_initialize_executive calls that
function expecting it to never return, but when it does, we lose
control and just start running code from virtual address 0 (or
possibly whatever happens to be on the stack as the return instruction
pointer).

What we _do_ know is a positive sign, though - an actual RTEMS static
binary does seem to be loaded just fine, and starts executing too,
until we call _CPU_Context_restore and lose control.

Next up: I'll work on the context-switching code to move past this,
and then we can follow the original plan in my proposal
(context-switching, basic IRQ, idle thread based clock driver,
printk/console support).

Re: [GSoC - x86_64] Using FreeBSD's UEFI loader for RTEMS static binaries

2018-06-13 Thread Amaan Cheval
On Wed, Jun 13, 2018 at 9:35 PM, Gedare Bloom  wrote:
> On Wed, Jun 13, 2018 at 11:33 AM, Amaan Cheval  wrote:
>> Hi!
>>
>> As we discussed in the last thread on the topic[1], I'm trying to use
>> FreeBSD's loader.efi directly with RTEMS' generated static binaries
>> (since FreeBSD's loader.efi has an ELF loader).
>>
>> In brief, I did this by:
>> - Installing FreeBSD in QEMU with UEFI firmware
>> - Confirming that FreeBSD's loader.efi is in fact used
>> - Replacing FreeBSD's ELF kernel with a "custom" kernel[2] with an RTEMS ELF
>> - Verifying that the code running after FreeBSD's loader.efi is in
>> fact the "RTEMS ELF" by attaching gdb to QEMU (the rtems ELF is simply
>> a while(1) loop compiled with RTEMS' tools - see later on why I can't
>> do something more elaborate)
>>
>> Some more details of the process I followed for testing this:
>> https://gist.github.com/AmaanC/42faa131ee97a1d6c4c7c25c29f0fde9z
>>
>> I think this method is superior to the PIC RTEMS method because:
>> - FreeBSD uses it
>> - RTEMS retains static ELF binaries, which can likely easily be
>> combined with a Multiboot header + protect mode starter code
>> - FreeBSD has methods to provide ACPI related hints to their ELF
>> kernel - this might make our implementation with regards to ACPI
>> simpler too
>>
>> Regarding some concerns Chris had with linker options and whatnot,
>> here's what FreeBSD uses:
>> https://www.freebsd.org/doc/en/books/arch-handbook/boot-kernel.html
>>
>> Here's what I used (with the code being a simple while(1) loop):
>>   x86_64-rtems5-gcc ktest.c -c -nostdlib
>>   x86_64-rtems5-ld ktest.o -e main -o kernel
>>
>> -
>>
>> What I need input on:
>> - Right now, we use the following RTEMS code for testing:
>>
>> int main() {
>>   while(1) {}
>> }
>>
>
> It's not really an RTEMS code, it is a C program (ktest.c) compiled
> with the RTEMS-flavored toolchain, right?

Yeah, for now that's right. I'm going to conduct the same gdb based
debug-stepping style test for RTEMS setting boot_card as the entry
point soon - for now, it crashes QEMU with:

qemu: fatal: Trying to execute code outside RAM or ROM at 0x000b

RAX=006004c0 RBX=006003d8 RCX=37f36000
RDX=0040
RSI=0400 RDI=0180 RBP=006003d8
RSP=3c589fb8
...

I see that it reaches that stage even from some code it ought not to
be executing, so I'll look into what that may be about.

>
> It would be nice to get an RTEMS x86-64 BSP to start, at least to
> confirm that you reach _start, and then even you can try to make it to
> the "boot_card" startup sequence.

Right, I'll aim to have that working soon (using boot_card as the
entry, since "_start" usually does the bootloader stuff that we're now
offloading to FreeBSD, and then calls boot_card anyway).

>
>> That's literally it, because we have no access to standard libraries,
>> and loader.efi calls ExitBootServices, after which we can't just
>> easily directly access video memory (at 0xb8000 for eg.) to print to
>> the screen. The way FreeBSD handles this is by initializing the
>> console and printing to that - I haven't been able to easily port that
>> yet.
>>
>> The question is - should I start with that effort (i.e. bringing
>> printk console functionality to RTEMS) the way FreeBSD does? This way,
>> we skip the bootloader for now by simply using the one built on the
>> real FreeBSD - if the console prints and more elaborate linking tests
>> work fine, we can be certain that this works. If _not_, I believe the
>> console initialization code will likely still remain the same since
>> we'll want to do it similar to how FreeBSD does it.
>>
>
> I think this approach to getting a console to work may be reasonable,
> assuming the FreeBSD console is not much more complicated than what
> RTEMS needs. ...

I can't say about this yet, but I'll look into it (and perhaps
simplifying it as we port it if it _is_ too complicated).

>
>> What do you think?
>>
>> Cheers,
>> Amaan
>>
>> [1] https://lists.rtems.org/pipermail/devel/2018-June/022052.html
>> [2] https://www.freebsd.org/doc/handbook/kernelconfig-building.html


[GSoC - x86_64] Using FreeBSD's UEFI loader for RTEMS static binaries

2018-06-13 Thread Amaan Cheval
Hi!

As we discussed in the last thread on the topic[1], I'm trying to use
FreeBSD's loader.efi directly with RTEMS' generated static binaries
(since FreeBSD's loader.efi has an ELF loader).

In brief, I did this by:
- Installing FreeBSD in QEMU with UEFI firmware
- Confirming that FreeBSD's loader.efi is in fact used
- Replacing FreeBSD's ELF kernel with a "custom" kernel[2] with an RTEMS ELF
- Verifying that the code running after FreeBSD's loader.efi is in
fact the "RTEMS ELF" by attaching gdb to QEMU (the rtems ELF is simply
a while(1) loop compiled with RTEMS' tools - see later on why I can't
do something more elaborate)

Some more details of the process I followed for testing this:
https://gist.github.com/AmaanC/42faa131ee97a1d6c4c7c25c29f0fde9z

I think this method is superior to the PIC RTEMS method because:
- FreeBSD uses it
- RTEMS retains static ELF binaries, which can likely easily be
combined with a Multiboot header + protect mode starter code
- FreeBSD has methods to provide ACPI related hints to their ELF
kernel - this might make our implementation with regards to ACPI
simpler too

Regarding some concerns Chris had with linker options and whatnot,
here's what FreeBSD uses:
https://www.freebsd.org/doc/en/books/arch-handbook/boot-kernel.html

Here's what I used (with the code being a simple while(1) loop):
  x86_64-rtems5-gcc ktest.c -c -nostdlib
  x86_64-rtems5-ld ktest.o -e main -o kernel

-

What I need input on:
- Right now, we use the following RTEMS code for testing:

int main() {
  while(1) {}
}

That's literally it, because we have no access to standard libraries,
and loader.efi calls ExitBootServices, after which we can't just
easily directly access video memory (at 0xb8000 for eg.) to print to
the screen. The way FreeBSD handles this is by initializing the
console and printing to that - I haven't been able to easily port that
yet.

The question is - should I start with that effort (i.e. bringing
printk console functionality to RTEMS) the way FreeBSD does? This way,
we skip the bootloader for now by simply using the one built on the
real FreeBSD - if the console prints and more elaborate linking tests
work fine, we can be certain that this works. If _not_, I believe the
console initialization code will likely still remain the same since
we'll want to do it similar to how FreeBSD does it.

What do you think?

Cheers,
Amaan

[1] https://lists.rtems.org/pipermail/devel/2018-June/022052.html
[2] https://www.freebsd.org/doc/handbook/kernelconfig-building.html


Re: [PATCH] x86_64/binutils: Add PEI target to build UEFI application images

2018-06-11 Thread Amaan Cheval
Bump :)
On Tue, Jun 5, 2018 at 10:54 PM Amaan Cheval  wrote:
>
> Original commit in binutils:
> https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commitdiff;h=421acf18739edb54111b64d2b328ea2e7bf19889
>
> Update #2898
> ---
>  rtems/config/5/rtems-x86_64.bset | 5 +
>  1 file changed, 5 insertions(+)
>
> diff --git a/rtems/config/5/rtems-x86_64.bset 
> b/rtems/config/5/rtems-x86_64.bset
> index 9b92538..6041971 100644
> --- a/rtems/config/5/rtems-x86_64.bset
> +++ b/rtems/config/5/rtems-x86_64.bset
> @@ -14,4 +14,9 @@
>  %patch add gcc --rsb-file=gcc-f8fd78279d353f6959e75ac25571c1b7b2dec110.patch 
> https://gcc.gnu.org/git/?p=gcc.git;a=blobdiff_plain;f=libgcc/config.host;h=f8fd78279d353f6959e75ac25571c1b7b2dec110;hp=11b4acaff55e00ee6bd3c182e9da5dc597ac57c4;hb=ab55f7db3694293e4799d58f7e1a556c0eae863a;hpb=344c180cca810c50f38fd545bb9a102fb39306b7
>  %hash sha512 gcc-f8fd78279d353f6959e75ac25571c1b7b2dec110.patch 
> aef76f9d45a53096a021521375fc302a907f78545cc57683a7a00ec61608b8818115720f605a6b1746f479c8568963b380138520e259cbb9e8951882c2f1567f
>
> +#
> +# Binutils PEI target for UEFI support
> +#
> +%patch add binutils 
> --rsb-file=binutils-f8ca72b332396939c7c04a8774ce4c54f5a82d42.patch 
> https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=blobdiff_plain;f=bfd/config.bfd;h=f8ca72b332396939c7c04a8774ce4c54f5a82d42;hp=0db8ed4562b2c11ce51e6a3b138c317f4014a1aa;hb=421acf18739edb54111b64d2b328ea2e7bf19889;hpb=f7c6f42310233479ea6339430b7c1ca1f9ec68e1
> +%hash sha512 binutils-f8ca72b332396939c7c04a8774ce4c54f5a82d42.patch 
> f8af6906871a95a6fb234d0c72c44e9b1823ed835ec91bd84b466aad1f2f5f021ff5fb37835e6132899dcaa6c7e52e3e73f5a2dc9f0efab97aa6fffce2f06d9e
>  %include 5/rtems-default.bset
> --
> 2.16.0.rc0
>


Re: [GSoC - x86_64 BSP] Using fPIC to compile RTEMS as a shared library

2018-06-11 Thread Amaan Cheval
Minor update: I'll work on Chris' suggestion of using FreeBSD's
loader.efi and having that load our static hello.exe - it ought to be
a quicker test that way.
On Sun, Jun 10, 2018 at 9:34 PM Amaan Cheval  wrote:
>
> On Sun, Jun 10, 2018 at 12:38 AM Joel Sherrill  wrote:
> >
> >
> >
> > On Fri, Jun 8, 2018 at 7:45 PM, Chris Johns  wrote:
> >>
> >> On 9/6/18 10:00 am, Joel Sherrill wrote:
> >> > On Thu, Jun 7, 2018, 9:01 PM Chris Johns  >> > <mailto:chr...@rtems.org>> wrote:
> >> > > and what
> >> > > discussions we need to have to decide between the "bundled 
> >> > kernel.so approach"
> >> > > (the one implemented here) vs. the "FreeBSD loader.efi+hello.exe"
> >> > approach. Let
> >> > > me know!
> >> > >
> >> >
> >> > I do not think I can help too much here. I understand the 
> >> > loader.efi+exe
> >> > solution and it should work because all RTEMS applications we have 
> >> > are
> >> > statically linked (I am assuming it is here). I have not looked at 
> >> > the details
> >> > being used with the -fPIC and .so solution so I cannot comment. I do 
> >> > have some
> >> > concerns the relocatable exe might expose some dark corners and 
> >> > issues in the
> >> > host tools we have, for example how does GDB find the base address 
> >> > of the image
> >> > so you can debug it? and is this just working or is it really 
> >> > suppose to work
> >> > this way?
> >> >
> >> >
> >> > All I can say is that with Deos/RTEMS, we use PIC on arm, PowerPC, and 
> >> > x86.
> >>
> >> I would hope a solution like Deos did provide a seamless way to do this 
> >> and I
> >> would also hope they support you.
> >
> >
> > I am not using their normal recommended tool setup for users. This is normal
> > RTEMS tools building our test executables. At one point, it was our gdb with
> > their qemu build. They use something like this strictly internally.
> >
> > These executables are statically linked EXCEPT for references to, in
> > the minimum case, two .so's from their environment. I set an argument to
> > ld to ensure all symbols are resolved at link time. Their boot process 
> > associates
> > the .so files with the partitions. It is dynamic loading but it is 
> > statically configured
> > and not touched after boot.
> >
> > I haven't had any special help from them in this area except figuring out 
> > the
> > arguments and linker scripts. When I have access to a build log, I am happy
> > to post the build of hello world for comparison.
> >
> > I have no idea how this compares to UEFI booting except to say that PIC
> > hasn't introduced any tool issues in our GNU tools. libdl may have issues
> > but we aren't using it yet. I can check if the tests pass or are disabled. I
> > don't remember. But that may be illuminating.
> >
> >>
> >>
> >> > We
> >> > have spent a lot of time debugging with gdb attached to qemu.
> >> How does GDB get the relocatable load address to map to the symbol table?
> >>
> >> The libdl code supports the same protocol/design as NetBSD and other 
> >> systems in
> >> informing GDB about the address of loaded modules. There is a series of 
> >> symbols
> >> and tables maintained that GDB knows to examine to find the load addresses 
> >> of
> >> object files.
> >>
> >> > I haven't seen any tools issues yet.
> >>
> >> Yet?  Once the path is settled it will be difficult to change so all I am 
> >> asking
> >> is the detail be checked and understood. RTEMS does not support shared 
> >> libraries
> >> the same way Linux or other Unix systems do. I do not understand enough of 
> >> the
> >> low level and the standards all this is based on to help decide.
> >>
> >> An example of an issue where a relocatable kernel with an unknown load 
> >> address
> >> creates a problem is libdl. The testsuite uses the 2-pass approach 
> >> (rtems-syms
> >> --embed) which should be OK however the other approach where the symbol 
> >> table is
> >> not embedded and built on the host would fail. It is a small issue but it 
> >> shows
> >> how things can subtly break.

Re: [GSoC - x86_64 BSP] Using fPIC to compile RTEMS as a shared library

2018-06-10 Thread Amaan Cheval
On Sun, Jun 10, 2018 at 12:38 AM Joel Sherrill  wrote:
>
>
>
> On Fri, Jun 8, 2018 at 7:45 PM, Chris Johns  wrote:
>>
>> On 9/6/18 10:00 am, Joel Sherrill wrote:
>> > On Thu, Jun 7, 2018, 9:01 PM Chris Johns > > > wrote:
>> > > and what
>> > > discussions we need to have to decide between the "bundled kernel.so 
>> > approach"
>> > > (the one implemented here) vs. the "FreeBSD loader.efi+hello.exe"
>> > approach. Let
>> > > me know!
>> > >
>> >
>> > I do not think I can help too much here. I understand the 
>> > loader.efi+exe
>> > solution and it should work because all RTEMS applications we have are
>> > statically linked (I am assuming it is here). I have not looked at the 
>> > details
>> > being used with the -fPIC and .so solution so I cannot comment. I do 
>> > have some
>> > concerns the relocatable exe might expose some dark corners and issues 
>> > in the
>> > host tools we have, for example how does GDB find the base address of 
>> > the image
>> > so you can debug it? and is this just working or is it really suppose 
>> > to work
>> > this way?
>> >
>> >
>> > All I can say is that with Deos/RTEMS, we use PIC on arm, PowerPC, and x86.
>>
>> I would hope a solution like Deos did provide a seamless way to do this and I
>> would also hope they support you.
>
>
> I am not using their normal recommended tool setup for users. This is normal
> RTEMS tools building our test executables. At one point, it was our gdb with
> their qemu build. They use something like this strictly internally.
>
> These executables are statically linked EXCEPT for references to, in
> the minimum case, two .so's from their environment. I set an argument to
> ld to ensure all symbols are resolved at link time. Their boot process 
> associates
> the .so files with the partitions. It is dynamic loading but it is statically 
> configured
> and not touched after boot.
>
> I haven't had any special help from them in this area except figuring out the
> arguments and linker scripts. When I have access to a build log, I am happy
> to post the build of hello world for comparison.
>
> I have no idea how this compares to UEFI booting except to say that PIC
> hasn't introduced any tool issues in our GNU tools. libdl may have issues
> but we aren't using it yet. I can check if the tests pass or are disabled. I
> don't remember. But that may be illuminating.
>
>>
>>
>> > We
>> > have spent a lot of time debugging with gdb attached to qemu.
>> How does GDB get the relocatable load address to map to the symbol table?
>>
>> The libdl code supports the same protocol/design as NetBSD and other systems 
>> in
>> informing GDB about the address of loaded modules. There is a series of 
>> symbols
>> and tables maintained that GDB knows to examine to find the load addresses of
>> object files.
>>
>> > I haven't seen any tools issues yet.
>>
>> Yet?  Once the path is settled it will be difficult to change so all I am 
>> asking
>> is the detail be checked and understood. RTEMS does not support shared 
>> libraries
>> the same way Linux or other Unix systems do. I do not understand enough of 
>> the
>> low level and the standards all this is based on to help decide.
>>
>> An example of an issue where a relocatable kernel with an unknown load 
>> address
>> creates a problem is libdl. The testsuite uses the 2-pass approach 
>> (rtems-syms
>> --embed) which should be OK however the other approach where the symbol 
>> table is
>> not embedded and built on the host would fail. It is a small issue but it 
>> shows
>> how things can subtly break.
>
>
> I'm not relocating any RTEMS code with Deos. Our code is linked to static
> address ranges and invokes Deos methods in the shared library.
>
> I know this doesn't prove anything concretely about the UEFI exe  but it is 
> the closest
> example we have. PIC is likely OK. The .so magic could be problematic as it 
> looks
> like I effectively build a static exe.

The -Bsymbolic flag isn't a requirement for this system to work - it
just eliminates unnecessary use of the GOT/PLT. Personally, I don't
think I have the clarity to say whether this is or isn't safe - I
think the only way to tell will be to continue my work on it and prove
that it either does or doesn't hold up, at least in the general case.

What we aim for with the hello.so method is the same as you said -
effectively building a static, fully resolved exe, with the difference
that this _is_ relocatable, depending on the memory the UEFI firmware
has access to.

I can't speak to how libdl is affected - but the bit from the ld
manual about "platforms which support shared libraries" doesn't imply
to me that libdl / RTEMS need to support shared libraries. What we
build in this method is hello.so (which includes all of the RTEMS
kernel + the user application (hello world here)) - the platform that
needs to support shared libraries then is 

Re: [GSoC - x86_64 BSP] Using fPIC to compile RTEMS as a shared library

2018-06-10 Thread Amaan Cheval
On Sat, Jun 9, 2018 at 5:26 AM Chris Johns  wrote:
>
> On 9/6/18 2:39 am, Amaan Cheval wrote:
> > On Fri, Jun 8, 2018 at 7:31 AM, Chris Johns  wrote:
> >> On 08/06/2018 01:50, Amaan Cheval wrote:
> >>>
> >>> Joel, Chris, I'd appreciate guidance on what I ought to work on next
> >>
> >> I would like to see the focus on the kernel context switcher, FPU support, 
> >> and
> >> then interrupts so we have the basic drivers we need like a tick interrupt 
> >> running.
> >>
> >> This assumes the loader issues we have not resolved do not effect this 
> >> work.
> >>
> >
> > They do affect it to a certain extent - I'd be working semi-blind
> > since without tying the loose ends required to boot, I wouldn't be
> > able to test the implementations of the areas you mentioned. I'm not
> > at that point yet, but I suspect I will be within a week or so, so the
> > sooner we determine the approach required for booting, the better.
>
> OK.
>
> >>> and what
> >>> discussions we need to have to decide between the "bundled kernel.so 
> >>> approach"
> >>> (the one implemented here) vs. the "FreeBSD loader.efi+hello.exe" 
> >>> approach. Let
> >>> me know!
> >>>
> >>
> >> I do not think I can help too much here. I understand the loader.efi+exe
> >> solution and it should work because all RTEMS applications we have are
> >> statically linked (I am assuming it is here). I have not looked at the 
> >> details
> >> being used with the -fPIC and .so solution so I cannot comment. I do have 
> >> some
> >> concerns the relocatable exe might expose some dark corners and issues in 
> >> the
> >> host tools we have, for example how does GDB find the base address of the 
> >> image
> >> so you can debug it? and is this just working or is it really suppose to 
> >> work
> >> this way?
> >
> > Well, these images won't simply _run_ through GDB, no - but here's
> > some stuff that may be helpful to see:
> > https://gist.github.com/AmaanC/4e1aaa2cbdda974b93c5a3e1eac5318a
>
> Interesting and thanks. Is this with QEMU?

No, it was only the dynamic/shared library "hello.so" (that I attached
in an earlier email, in case you want to play with it yourself).

>
> > One concern of yours was the unnecessary addition of the GOT/PLT.
> > Thankfully, through options like -Bsymbolic, we can circumvent the
> > GOT/PLT for all symbols which have already been resolved (as you'll
> > see happens for "InitializeLib", "Print", "boot_card", etc. (boot_card
> > because this is from my WIP version trying to get boot_card to work
> > with this method too)).
>
> The -Bsymbolic option is an example of my concern and stepping into a dark
> corner. It could also be my lack of understanding. Yes it works but is that
> always going to be the case? For example the GNU ld manual states:
>
>  "This option is only meaningful on ELF platforms which support shared
>   libraries and position independent executables."
>
> and technically we do not support shared libraries so is this usage a normal 
> use
> case? I do not know. Also Oracle says the option is somewhat historic:
>
>  https://docs.oracle.com/cd/E19957-01/806-0641/chapter4-16/index.html

That's a good point. I came across this article, which suggests it
_is_ a fair concern, especially in the case of something as complex as
RTEMS:
https://software.intel.com/en-us/articles/performance-tools-for-software-developers-bsymbolic-can-cause-dangerous-side-effects

>
> > I'm definitely concerned, but having looked at it more, I can't find
> > anything specific that would genuinely cause problems besides
> > unresolved symbols - we could have a build-time check for them,
> > though, failing the build when that's the case (-Wl,-z,defs does it).
>
> If loader.efi+exe avoids this then it makes sense to me to do so. The less we
> bend or stretch the more stable the support will be over time.
>
> > So for next steps, I guess you'll be looking into how the use of -fPIC
> > may affect us, and we can work on addressing those concerns, right?
>
> I do not know because I do not know risks.
>
> > I
> > personally preferred the static build approach too, since that way we
> > can "plug" loaders in:
> > - loader.efi for UEFI firmware
> > - multiboot header + 32 to 64 bit mode code for Multiboot
> >
>
> Agreed.
>
> > That's a slight oversimplification since (1) needs 

Re: [GSoC - x86_64 BSP] Using fPIC to compile RTEMS as a shared library

2018-06-08 Thread Amaan Cheval
On Fri, Jun 8, 2018 at 7:31 AM, Chris Johns  wrote:
> On 08/06/2018 01:50, Amaan Cheval wrote:
>>
>> Joel, Chris, I'd appreciate guidance on what I ought to work on next
>
> I would like to see the focus on the kernel context switcher, FPU support, and
> then interrupts so we have the basic drivers we need like a tick interrupt 
> running.
>
> This assumes the loader issues we have not resolved do not affect this work.
>

They do affect it to a certain extent - I'd be working semi-blind,
since without tying up the loose ends required to boot, I wouldn't be
able to test the implementations of the areas you mentioned. I'm not
at that point yet, but I suspect I will be within a week or so, so the
sooner we determine the approach required for booting, the better.

>> and what
>> discussions we need to have to decide between the "bundled kernel.so 
>> approach"
>> (the one implemented here) vs. the "FreeBSD loader.efi+hello.exe" approach. 
>> Let
>> me know!
>>
>
> I do not think I can help too much here. I understand the loader.efi+exe
> solution and it should work because all RTEMS applications we have are
> statically linked (I am assuming it is here). I have not looked at the details
> being used with the -fPIC and .so solution so I cannot comment. I do have some
> concerns the relocatable exe might expose some dark corners and issues in the
> host tools we have, for example how does GDB find the base address of the 
> image
> so you can debug it? and is this just working or is it really supposed to work
> this way?

Well, these images won't simply _run_ through GDB, no - but here's
some stuff that may be helpful to see:
https://gist.github.com/AmaanC/4e1aaa2cbdda974b93c5a3e1eac5318a

One concern of yours was the unnecessary addition of the GOT/PLT.
Thankfully, through options like -Bsymbolic, we can circumvent the
GOT/PLT for all symbols which have already been resolved (as you'll
see happens for "InitializeLib", "Print", "boot_card", etc. (boot_card
because this is from my WIP version trying to get boot_card to work
with this method too)).

I'm definitely concerned, but having looked at it more, I can't find
anything specific that would genuinely cause problems besides
unresolved symbols - we could have a build-time check for them,
though, failing the build when that's the case (-Wl,-z,defs does it).

So for next steps, I guess you'll be looking into how the use of -fPIC
may affect us, and we can work on addressing those concerns, right? I
personally preferred the static build approach too, since that way we
can "plug" loaders in:
- loader.efi for UEFI firmware
- multiboot header + 32 to 64 bit mode code for Multiboot

That's a slight oversimplification since (1) needs to be packaged on
the filesystem with the static hello.exe and (2) needs to be linked in
with it, but I think the build system retains more of its separation
from the boot method this way (though I can't be sure, given my
relatively naive view of RTEMS + autotools).

>
> Chris


Re: [GSoC - x86_64 BSP] Using fPIC to compile RTEMS as a shared library

2018-06-06 Thread Amaan Cheval
Chris, I've blogged a summary of the 2 approaches we can take in
integrating gnu-efi in case you missed the discussions earlier, btw.
The FreeBSD approach isn't as fleshed out, but I'll let you all know
what I find there.

https://blog.whatthedude.com/post/gnu-efi-kernel-so/

On Wed, Jun 6, 2018 at 2:30 PM, Amaan Cheval  wrote:
> I don't know yet, but I'll look into it. I'll pause the "hello.efi" approach
> work until we settle on a direction, yes? For now, primarily improving the
> stub, looking into the FreeBSD ld-elf.so, etc. Sound good?
>
> On Wed, Jun 6, 2018, 1:01 PM Chris Johns  wrote:
>>
>> On 6/6/18 5:22 pm, Amaan Cheval wrote:
>> > On Wed, Jun 6, 2018 at 12:45 PM, Chris Johns  wrote:
>> >> On 6/6/18 5:06 pm, Amaan Cheval wrote:
>> >>> On Wed, Jun 6, 2018 at 6:55 AM, Chris Johns  wrote:
>> >>>> On 4/6/18 7:49 pm, Amaan Cheval wrote:
>> >>>>> Please let me know if that approach doesn't make sense - given that
>> >>>>> there is no dynamic loader in the RTEMS kernel as far as I know,
>> >>>>
>> >>>> There is a dynamic loader in RTEMS called libdl. It is not based
>> >>>> around the ELF
>> >>>> standard loader used for shared libraries and will not be. GOT and
>> >>>> PLT do not
>> >>>> add value to RTEMS because we do not have an active virtual address
>> >>>> space with
>> >>>> paging.
>> >>>>
>> >>>> The libdl code is currently supported with the i386 static builds. I
>> >>>> would hope
>> >>>> this would continue to be supported without major refactoring of that
>> >>>> code.
>> >>>> There are tests in the testsuite under libtests.
>> >>>
>> >>> Hm, so libdl can load static ELFs and handle those relocations itself?
>> >>> In that case we could go the "FreeBSD" way more easily as I outlined
>> >>> earlier; a loader UEFI application image (loader.efi) + the
>> >>> application image to be found on the filesystem and loaded through
>> >>> libdl, which we somehow call through loader.efi.
>> >>>
>> >>> loader.efi -> libdl -> hello.exe (static executable ELF now)
>> >>>
>> >>
>> >> libdl is not designed to do this and do not think it could be made too.
>> >> It needs
>> >> a full RTEMS kernel located to run.
>> >
>> > Ah, I see. In that case porting FreeBSD's ELF loader is the only
>> > viable option in this direction, I believe, since the ELF to be loaded
>> > would be the RTEMS kernel+the user app.
>> >
>>
>> Do we need to port it or can we use a released version from an installed
>> FreeBSD?
>>
>> >>
>> >>>>
>> >>>>> what
>> >>>>> we really want _is_ a static file, but for it to be a relocatable
>> >>>>> PE,
>> >>>>> we need to convince GCC to spit out a relocatable but fully resolved
>> >>>>> shared library.
>> >>>>
>> >>>> It is not clear to me yet making the kernel relocatable is free of
>> >>>> other
>> >>>> possible issues. I will need to find time to review all the low level
>> >>>> parts here
>> >>>> and my time is currently limited.
>> >>>>
>> >>>> Does this UEFI work effect the score context switcher, interrupts,
>> >>>> etc? If is
>> >>>> does we will need to resolve what happens and if not should we
>> >>>> consider leaving
>> >>>> it if we can?
>> >>>
>> >>> It affects it in the sense that it's all compiled with fPIC, yes. It
>> >>> needs to be if we're bundling it all together (creating hello.efi,
>> >>> which includes the UEFI boot code, all of RTEMS, and the user app).
>> >>>
>> >>>> For example can iPXE chain load a multiboot image?
>> >>>
>> >>> Yes, we could just use Multiboot too. One thing to note, though -
>> >>> Multiboot will drop us in 32-bit protected mode, whereas 64-bit UEFI
>> >>> firmware will load is into 64-bit protected mode.
>> >>
>> >> Ah OK this is a good point and boards like a Minnow have a 32bit or
>> >> 64bit BIOS
>> >> or what ever it is called on those boards so this may not work.
>> >>
>> >>> Supporting Multiboot
>> >>> too will involve adding some code to move to 64-bit mode before
>> >>> actually calling into the generalized 64-bit mode code.
>> >>
>> >> Can it be used as a step towards another solution?
>> >
>> > Not sure what you mean - like if it would be useful anywhere outside
>> > of the Multiboot work? If so, no, likely not, unless we also want
>> > legacy BIOS support eventually, in which case it could be part of the
>> > real mode->protected mode->long mode transition chain, but that's
>> > unlikely :P
>> >
>>
>> I was just wondering if a temporary short cut can be taken so we do not
>> spend
>> all our time on booting an image.
>>
>> Chris


Re: [GSoC - x86_64 BSP] Using fPIC to compile RTEMS as a shared library

2018-06-06 Thread Amaan Cheval
I don't know yet, but I'll look into it. I'll pause the "hello.efi"
approach work until we settle on a direction, yes? For now, primarily
improving the stub, looking into the FreeBSD ld-elf.so, etc. Sound good?

On Wed, Jun 6, 2018, 1:01 PM Chris Johns  wrote:

> On 6/6/18 5:22 pm, Amaan Cheval wrote:
> > On Wed, Jun 6, 2018 at 12:45 PM, Chris Johns  wrote:
> >> On 6/6/18 5:06 pm, Amaan Cheval wrote:
> >>> On Wed, Jun 6, 2018 at 6:55 AM, Chris Johns  wrote:
> >>>> On 4/6/18 7:49 pm, Amaan Cheval wrote:
> >>>>> Please let me know if that approach doesn't make sense - given that
> >>>>> there is no dynamic loader in the RTEMS kernel as far as I know,
> >>>>
> >>>> There is a dynamic loader in RTEMS called libdl. It is not based
> around the ELF
> >>>> standard loader used for shared libraries and will not be. GOT and
> PLT do not
> >>>> add value to RTEMS because we do not have an active virtual address
> space with
> >>>> paging.
> >>>>
> >>>> The libdl code is currently supported with the i386 static builds. I
> would hope
> >>>> this would continue to be supported without major refactoring of that
> code.
> >>>> There are tests in the testsuite under libtests.
> >>>
> >>> Hm, so libdl can load static ELFs and handle those relocations itself?
> >>> In that case we could go the "FreeBSD" way more easily as I outlined
> >>> earlier; a loader UEFI application image (loader.efi) + the
> >>> application image to be found on the filesystem and loaded through
> >>> libdl, which we somehow call through loader.efi.
> >>>
> >>> loader.efi -> libdl -> hello.exe (static executable ELF now)
> >>>
> >>
> >> libdl is not designed to do this and do not think it could be made too.
> It needs
> >> a full RTEMS kernel located to run.
> >
> > Ah, I see. In that case porting FreeBSD's ELF loader is the only
> > viable option in this direction, I believe, since the ELF to be loaded
> > would be the RTEMS kernel+the user app.
> >
>
> Do we need to port it or can we use a released version from an installed
> FreeBSD?
>
> >>
> >>>>
> >>>>> what
> >>>>> we really want _is_ a static file, but for it to be a relocatable PE,
> >>>>> we need to convince GCC to spit out a relocatable but fully resolved
> >>>>> shared library.
> >>>>
> >>>> It is not clear to me yet making the kernel relocatable is free of
> other
> >>>> possible issues. I will need to find time to review all the low level
> parts here
> >>>> and my time is currently limited.
> >>>>
> >>>> Does this UEFI work effect the score context switcher, interrupts,
> etc? If is
> >>>> does we will need to resolve what happens and if not should we
> consider leaving
> >>>> it if we can?
> >>>
> >>> It affects it in the sense that it's all compiled with fPIC, yes. It
> >>> needs to be if we're bundling it all together (creating hello.efi,
> >>> which includes the UEFI boot code, all of RTEMS, and the user app).
> >>>
> >>>> For example can iPXE chain load a multiboot image?
> >>>
> >>> Yes, we could just use Multiboot too. One thing to note, though -
> >>> Multiboot will drop us in 32-bit protected mode, whereas 64-bit UEFI
> >>> firmware will load is into 64-bit protected mode.
> >>
> >> Ah OK this is a good point and boards like a Minnow have a 32bit or
> 64bit BIOS
> >> or what ever it is called on those boards so this may not work.
> >>
> >>> Supporting Multiboot
> >>> too will involve adding some code to move to 64-bit mode before
> >>> actually calling into the generalized 64-bit mode code.
> >>
> >> Can it be used as a step towards another solution?
> >
> > Not sure what you mean - like if it would be useful anywhere outside
> > of the Multiboot work? If so, no, likely not, unless we also want
> > legacy BIOS support eventually, in which case it could be part of the
> > real mode->protected mode->long mode transition chain, but that's
> > unlikely :P
> >
>
> I was just wondering if a temporary short cut can be taken so we do not
> spend
> all our time on booting an image.
>
> Chris
>

Re: [GSoC - x86_64 BSP] Using fPIC to compile RTEMS as a shared library

2018-06-06 Thread Amaan Cheval
On Wed, Jun 6, 2018 at 12:45 PM, Chris Johns  wrote:
> On 6/6/18 5:06 pm, Amaan Cheval wrote:
>> On Wed, Jun 6, 2018 at 6:55 AM, Chris Johns  wrote:
>>> On 4/6/18 7:49 pm, Amaan Cheval wrote:
>>>> Please let me know if that approach doesn't make sense - given that
>>>> there is no dynamic loader in the RTEMS kernel as far as I know,
>>>
>>> There is a dynamic loader in RTEMS called libdl. It is not based around the 
>>> ELF
>>> standard loader used for shared libraries and will not be. GOT and PLT do 
>>> not
>>> add value to RTEMS because we do not have an active virtual address space 
>>> with
>>> paging.
>>>
>>> The libdl code is currently supported with the i386 static builds. I would 
>>> hope
>>> this would continue to be supported without major refactoring of that code.
>>> There are tests in the testsuite under libtests.
>>
>> Hm, so libdl can load static ELFs and handle those relocations itself?
>> In that case we could go the "FreeBSD" way more easily as I outlined
>> earlier; a loader UEFI application image (loader.efi) + the
>> application image to be found on the filesystem and loaded through
>> libdl, which we somehow call through loader.efi.
>>
>> loader.efi -> libdl -> hello.exe (static executable ELF now)
>>
>
> libdl is not designed to do this and I do not think it could be made to. It
> needs a full RTEMS kernel located to run.

Ah, I see. In that case porting FreeBSD's ELF loader is the only
viable option in this direction, I believe, since the ELF to be loaded
would be the RTEMS kernel+the user app.

>
>>>
>>>> what
>>>> we really want _is_ a static file, but for it to be a relocatable PE,
>>>> we need to convince GCC to spit out a relocatable but fully resolved
>>>> shared library.
>>>
>>> It is not clear to me yet making the kernel relocatable is free of other
>>> possible issues. I will need to find time to review all the low level parts 
>>> here
>>> and my time is currently limited.
>>>
>>> Does this UEFI work effect the score context switcher, interrupts, etc? If 
>>> is
>>> does we will need to resolve what happens and if not should we consider 
>>> leaving
>>> it if we can?
>>
>> It affects it in the sense that it's all compiled with fPIC, yes. It
>> needs to be if we're bundling it all together (creating hello.efi,
>> which includes the UEFI boot code, all of RTEMS, and the user app).
>>
>>> For example can iPXE chain load a multiboot image?
>>
>> Yes, we could just use Multiboot too. One thing to note, though -
>> Multiboot will drop us in 32-bit protected mode, whereas 64-bit UEFI
>> firmware will load us into 64-bit long mode.
>
> Ah OK this is a good point and boards like a Minnow have a 32bit or 64bit BIOS
> or what ever it is called on those boards so this may not work.
>
>> Supporting Multiboot
>> too will involve adding some code to move to 64-bit mode before
>> actually calling into the generalized 64-bit mode code.
>
> Can it be used as a step towards another solution?

Not sure what you mean - like if it would be useful anywhere outside
of the Multiboot work? If so, no, likely not, unless we also want
legacy BIOS support eventually, in which case it could be part of the
real mode->protected mode->long mode transition chain, but that's
unlikely :P

>
>>
>>>
>>> Sorry to have disappeared for a period at a critical time in this 
>>> discussion, it
>>> was beyond my control.
>>
>> No worries! Hope everything's okay!
>>
>
> Thanks and yes.
>
>> I'll look into libdl further in the meantime and continue to polish up
>> the stub to reflect more of the x64 features, while we figure out our
>> boot direction.
>
> Great. I hope to be back full time next week and I can have a look.
>
> Chris


Re: [GSoC - x86_64 BSP] Using fPIC to compile RTEMS as a shared library

2018-06-06 Thread Amaan Cheval
On Wed, Jun 6, 2018 at 6:55 AM, Chris Johns  wrote:
> On 4/6/18 7:49 pm, Amaan Cheval wrote:
>> Please let me know if that approach doesn't make sense - given that
>> there is no dynamic loader in the RTEMS kernel as far as I know,
>
> There is a dynamic loader in RTEMS called libdl. It is not based around the 
> ELF
> standard loader used for shared libraries and will not be. GOT and PLT do not
> add value to RTEMS because we do not have an active virtual address space with
> paging.
>
> The libdl code is currently supported with the i386 static builds. I would 
> hope
> this would continue to be supported without major refactoring of that code.
> There are tests in the testsuite under libtests.

Hm, so libdl can load static ELFs and handle those relocations itself?
In that case we could go the "FreeBSD" way more easily as I outlined
earlier; a loader UEFI application image (loader.efi) + the
application image to be found on the filesystem and loaded through
libdl, which we somehow call through loader.efi.

loader.efi -> libdl -> hello.exe (static executable ELF now)

>
>> what
>> we really want _is_ a static file, but for it to be a relocatable PE,
>> we need to convince GCC to spit out a relocatable but fully resolved
>> shared library.
>
> It is not clear to me yet making the kernel relocatable is free of other
> possible issues. I will need to find time to review all the low level parts 
> here
> and my time is currently limited.
>
> Does this UEFI work effect the score context switcher, interrupts, etc? If is
> does we will need to resolve what happens and if not should we consider 
> leaving
> it if we can?

It affects it in the sense that it's all compiled with fPIC, yes. It
needs to be if we're bundling it all together (creating hello.efi,
which includes the UEFI boot code, all of RTEMS, and the user app).

> For example can iPXE chain load a multiboot image?

Yes, we could just use Multiboot too. One thing to note, though -
Multiboot will drop us in 32-bit protected mode, whereas 64-bit UEFI
firmware will load us into 64-bit long mode. Supporting Multiboot
too will involve adding some code to move to 64-bit mode before
actually calling into the generalized 64-bit mode code.

>
> Sorry to have disappeared for a period at a critical time in this
> discussion, it was beyond my control.

No worries! Hope everything's okay!

I'll look into libdl further in the meantime and continue to polish up
the stub to reflect more of the x64 features, while we figure out our
boot direction.

>
> Chris
___
devel mailing list
devel@rtems.org
http://lists.rtems.org/mailman/listinfo/devel


Re: [GSoC - x86_64 BSP] Using fPIC to compile RTEMS as a shared library

2018-06-05 Thread Amaan Cheval
Hi! Thanks for the guidance, both of you! I think we're quite close
now to integrating gnu-efi in and finally having the stub port boot as
a UEFI application image. All the test exes generated on my local
tree are now dynamic libraries, so both Newlib and crtbeginS/crtendS
issues have been resolved (see point 2 below for the relevant WIP
patches).

Here's what I've done:

- Updated amd64.cfg as Joel instructed earlier - here I put "-shared"
in LDFLAGS instead of CPU_CFLAGS because configure tries to use
CPU_CFLAGS in general, runs into unresolved symbols coming from
GCC's stub crt0.c, and fails. Putting it in LDFLAGS lets me
work around this, but I'd appreciate a thumbs-up on this being okay:

https://github.com/AmaanC/rtems-gsoc18/commit/547ef85a7f176046b2cb06a34b1e312c4986e97f#diff-c64f46c71f828120e08bc4c8667f0525R18
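For context, the arrangement described above would look roughly like the following .cfg fragment. This is a hypothetical sketch, not the actual file contents - the variable names follow the usual RTEMS make/custom conventions, but the exact values are assumptions:

```make
# Hypothetical excerpt of amd64.cfg (values are illustrative).
# -fPIC goes in CPU_CFLAGS so all objects are position-independent;
# -shared stays in LDFLAGS so configure's link tests (which use
# CPU_CFLAGS together with GCC's stub crt0) don't hit unresolved symbols.
CPU_CFLAGS = -fPIC
LDFLAGS += -shared
```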

- Updated GCC as follows, building on top of Joel's WIP patch to
minimize bsp_specs:

https://github.com/AmaanC/gcc/pull/1

I made it a PR on my fork so viewing it commit-wise or as a whole diff
is easier for you to review. I'll look into how to make and use
multilib variants next.

Let me know what you think (about the LDFLAGS workaround, and the
approach for the GCC patch so far).

On Mon, Jun 4, 2018 at 6:42 PM, Joel Sherrill  wrote:
>
>
> On Mon, Jun 4, 2018 at 5:44 AM, Sebastian Huber
>  wrote:
>>
>>
>>
>> - On 4 Jun 2018 at 12:20, Amaan Cheval amaan.che...@gmail.com wrote:
>>
>> > That's a good idea. That way based on the multilib variant, Newlib would
>> > be
>> > compiled using fPIC, yes?
>>
>> Yes.
>
>
> This would be desirable for the i386 as well. Having the RTEMS libraries as
> dynamic libraries would be more natural under Deos.
>
> Just a statement. Not a requirement on the GSoC project.
>>
>>
>> >
>> > Then I could simply figure out how to solve the crtbegin and crtend
>> > dilemma
>> > (which I believe should be easier), and use those to have a
>> > dynamic/shared
>> > RTEMS kernel + user application.
>>
>> These files will be compiled with -fPIC as well.
>
>
>
>>
>>
>> >
>> > If that sounds right, I'll look into that first. Not familiar with the
>> > GCC
>> > source yet, but it should be doable.
>>
>> Sorry, I have no idea how the x86_64 configuration of GCC works for RTEMS.
>>
>> >
>> >
>> > On Mon, Jun 4, 2018, 3:43 PM Sebastian Huber <
>> > sebastian.hu...@embedded-brains.de> wrote:
>> >
>> >> Hello Amaan,
>> >>
>> >> can't you add a new multilib variant which includes -fPIC to the GCC
>> >> configuration for RTEMS?


[PATCH] x86_64/binutils: Add PEI target to build UEFI application images

2018-06-05 Thread Amaan Cheval
Original commit in binutils:
https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commitdiff;h=421acf18739edb54111b64d2b328ea2e7bf19889

Update #2898
---
 rtems/config/5/rtems-x86_64.bset | 5 +
 1 file changed, 5 insertions(+)

diff --git a/rtems/config/5/rtems-x86_64.bset b/rtems/config/5/rtems-x86_64.bset
index 9b92538..6041971 100644
--- a/rtems/config/5/rtems-x86_64.bset
+++ b/rtems/config/5/rtems-x86_64.bset
@@ -14,4 +14,9 @@
%patch add gcc --rsb-file=gcc-f8fd78279d353f6959e75ac25571c1b7b2dec110.patch https://gcc.gnu.org/git/?p=gcc.git;a=blobdiff_plain;f=libgcc/config.host;h=f8fd78279d353f6959e75ac25571c1b7b2dec110;hp=11b4acaff55e00ee6bd3c182e9da5dc597ac57c4;hb=ab55f7db3694293e4799d58f7e1a556c0eae863a;hpb=344c180cca810c50f38fd545bb9a102fb39306b7
%hash sha512 gcc-f8fd78279d353f6959e75ac25571c1b7b2dec110.patch aef76f9d45a53096a021521375fc302a907f78545cc57683a7a00ec61608b8818115720f605a6b1746f479c8568963b380138520e259cbb9e8951882c2f1567f

+#
+# Binutils PEI target for UEFI support
+#
+%patch add binutils --rsb-file=binutils-f8ca72b332396939c7c04a8774ce4c54f5a82d42.patch https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=blobdiff_plain;f=bfd/config.bfd;h=f8ca72b332396939c7c04a8774ce4c54f5a82d42;hp=0db8ed4562b2c11ce51e6a3b138c317f4014a1aa;hb=421acf18739edb54111b64d2b328ea2e7bf19889;hpb=f7c6f42310233479ea6339430b7c1ca1f9ec68e1
+%hash sha512 binutils-f8ca72b332396939c7c04a8774ce4c54f5a82d42.patch f8af6906871a95a6fb234d0c72c44e9b1823ed835ec91bd84b466aad1f2f5f021ff5fb37835e6132899dcaa6c7e52e3e73f5a2dc9f0efab97aa6fffce2f06d9e
 %include 5/rtems-default.bset
-- 
2.16.0.rc0



Re: [GSoC - x86_64 BSP] Using fPIC to compile RTEMS as a shared library

2018-06-04 Thread Amaan Cheval
That's a good idea. That way based on the multilib variant, Newlib would be
compiled using fPIC, yes?

Then I could simply figure out how to solve the crtbegin and crtend dilemma
(which I believe should be easier), and use those to have a dynamic/shared
RTEMS kernel + user application.

If that sounds right, I'll look into that first. Not familiar with the GCC
source yet, but it should be doable.


On Mon, Jun 4, 2018, 3:43 PM Sebastian Huber <
sebastian.hu...@embedded-brains.de> wrote:

> Hello Amaan,
>
> can't you add a new multilib variant which includes -fPIC to the GCC
> configuration for RTEMS?
>

[GSoC - x86_64 BSP] Using fPIC to compile RTEMS as a shared library

2018-06-04 Thread Amaan Cheval
Hi!

I figured I'd quickly confirm the direction I'm taking towards
compiling RTEMS as a dynamic/shared library.

Problems I've run into in setting up amd64.cfg to compile all of RTEMS
as a shared library:

- In the x86_64 tools, gcc's "-shared" flag has a different effect
than the "-Wl,-shared" flag used to pass the flag to the linker - the
former silently seems to compile a static library. The latter leads us
to our next issue:

- Newlib seems to be compiled as a static libc.a. This leads to errors
such as the following:

https://gist.github.com/AmaanC/475bc0298697d22b944577ac80ec2736#file-rtems-make-fpic-log-L178

I believe this _should_ be solvable by compiling newlib as a shared
library, and _then_ linking the shared libc and RTEMS together. See
[1][2][3] for more.

Please let me know if that approach doesn't make sense - given that
there is no dynamic loader in the RTEMS kernel as far as I know, what
we really want _is_ a static file, but for it to be a relocatable PE,
we need to convince GCC to spit out a relocatable but fully resolved
shared library.

- Similarly, the gcc-compiled crtbegin and crtend also include static
relocations which are invalid when compiling with fPIC.

I think we can use crtbeginS.o and crtendS.o[4] in place of those -
that way, we might still be able to have GCC handle their inclusion,
without needing our bsp_specs. I'll look into this after figuring
newlib out.

---

I'd just like some confirmation on this being the correct path to
follow. I'm not quite sure, because if newlib is a shared library, I
think we'll need to divide the current build stage up to add stages
like:
- Compile librtemscpu.a and librtemsbsp.a with -fPIC
- Compile test_xyz.c while linking it with libc, librtems*, lib*efi,
etc. to create a fully resolved test_xyz.so
- Convert .so to PE using objcopy

Does that make sense to you all?

---

[1] 
https://cygwin.com/git/gitweb.cgi?p=newlib-cygwin.git;a=blob;f=newlib/README;h=e793d57ce75e56d1eb044e2c0325631e9eeef1af;hb=HEAD#l498
[2] https://sourceware.org/ml/newlib/2016/msg01106.html
[3] 
https://forum.osdev.org/viewtopic.php?p=276046=c7798911615bef866354e92a64125b1c#p276046
[4] https://dev.gentoo.org/~vapier/crt.txtz


Re: [PATCH] Updating trace buffer configuration

2018-05-30 Thread Amaan Cheval
On Wed, May 30, 2018 at 6:33 PM, Vidushi Vashishth  wrote:
> Could you please change the
>
> struct _Thread_Control
>
> to
>
> Thread_Control
>
> and check if it still works.
>
> In RTEMS, we use typedefs for structures in general.
>
> I tried to include the threadq.h header file so that I could use the
> variable Thread_Control instead of _Thread_Control. This header file has the
> following typedef statement:
>
> typedef struct _Thread_Control Thread_Control;
>
> However this leads to the following error:
>
> fileio-wrapper.c:389:31: warning: initialization from incompatible pointer
> type [-Wincompatible-pointer-types]
>struct Thread_Control* tc = _Thread_Get_executing();
>^

Based on this error, I believe you need to drop the "struct", given
that Thread_Control is the typedef for "struct _Thread_Control".

Minor, but in most other places in RTEMS, I've seen pointers declared
with the asterisk on the variable not on the type ("int *ptr;" instead
of "int* ptr;").

> fileio-wrapper.c:390:32: warning: passing argument 1 of
> '_Thread_Get_priority' from incompatible pointer type
> [-Wincompatible-pointer-types]
>return (_Thread_Get_priority(tc) << 8) | tc->Real_priority.priority;
>
> I had tried this earlier too. So it doesn't work.
>
>
>
> On Wed, May 30, 2018 at 10:59 AM, Sebastian Huber
>  wrote:
>>
>>
>>
>> On 29/05/18 17:36, Vidushi Vashishth wrote:
>>>
>>> ---
>>>   linkers/rtld-trace-buffer.ini | 5 +++--
>>>   1 file changed, 3 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/linkers/rtld-trace-buffer.ini
>>> b/linkers/rtld-trace-buffer.ini
>>> index af9fc31..ee68d55 100644
>>> --- a/linkers/rtld-trace-buffer.ini
>>> +++ b/linkers/rtld-trace-buffer.ini
>>> @@ -26,6 +26,7 @@ buffer-local = " uint8_t* in;"
>>>   header = "#include "
>>>   header = "#include "
>>>   header = "#include "
>>> +header = "#include "
>>> [trace-buffer-tracers]
  code = <<>>
@@ -84,8 +85,8 @@ static inline uint32_t __rtld_tbg_executing_id(void)
>>>   static inline uint32_t __rtld_tbg_executing_status(void)
>>>   {
>>> /* @fixme Add the current CPU for SMP. */
>>> -  struct Thread_Control* tc = _Thread_Get_executing();
>>
>>
>> Could you please change the
>>
>> struct _Thread_Control
>>
>> to
>>
>> Thread_Control
>>
>> and check if it still works.
>>
>> In RTEMS, we use typedefs for structures in general.
>>
>>> -  return (tc->current_priority << 8) | tc->real_priority;
>>> +  struct _Thread_Control* tc = _Thread_Get_executing();
>>> +  return (_Thread_Get_priority(tc) << 8) | tc->Real_priority.priority;
>>>   }
>>> static inline uint32_t __rtld_tbg_executing_state(void)
>>
>>
>> --
>> Sebastian Huber, embedded brains GmbH
>>
>> Address : Dornierstr. 4, D-82178 Puchheim, Germany
>> Phone   : +49 89 189 47 41-16
>> Fax : +49 89 189 47 41-09
>> E-Mail  : sebastian.hu...@embedded-brains.de
>> PGP : Public key available on request.
>>
>> This message is not a business communication within the meaning of the EHUG.
>>

Re: [GSoC] Ways to make the x86_64 port work with UEFI

2018-05-30 Thread Amaan Cheval
On Wed, May 30, 2018 at 4:30 AM, Joel Sherrill  wrote:
>
>
> On Tue, May 29, 2018 at 11:26 AM, Amaan Cheval 
> wrote:
>>
>> On Tue, May 29, 2018 at 9:28 PM, Amaan Cheval 
>> wrote:
>> > Noted, thanks a ton for the details! Unrelated to the topic at hand,
>> > but out of interest, is this the only reading material for further
>> > details? http://ceur-ws.org/Vol-1697/EWiLi16_12.pdf
>> >
>> > In brief: My tests for keeping a libfake.a (compiled without -fpic)
>> > and a loader.so with a user-appXYZ.c have been successful, but I'm not
>> > sure if my assumptions hold for all cases. See the details below for
>> > more.
>> >
>> > Next actions:
>> > - Read more of Linkers and Loaders since it seems to be the only
>> > detailed resource I've found at this point
>> > - Experiment with actually using the existing librtems*.a I've got and
>> > making them boot as a PE UEFI application image
>> >
>> >
>> > 
>> >
>> > In more detail:
>> >
>> > The problem:
>> > That UEFI needs a relocatable PE file, i.e. one that can function
>> > regardless of the physical address it's loaded at (no virtual
>> > addresses that early).
>> > To build an ELF of that kind, the resources I've seen all build their
>> > source with -fpic, and then use objcopy to convert the ELF into a
>> > relocatable PE, with an embedded runtime self-relocator (akin to
>> > load-time relocation, if I'm understanding correctly).
>> >
>> > What Joel suggested seems to be the simplest option - see if not using
>> > -fpic for _all_ of RTEMS' build system is fine. I think it might be
>> > from some testing, but I'm not sure if this is conclusive since I need
>> > to understand the specifics of the entire development process better.
>> >
>> > So here's my understanding of the situation at the moment:
>> >
>> > - librtems*.a is made up of object files, compiled without -fpic, and
>> > that should be fine because I believe object files will use RIP
>> > relative addressing code by default on x64 where it can, and leave the
>> > rest for link-time relocations to handle. IF this is true, this works
>> > perfectly for us because all memory accesses and jumps/calls are
>> > relative.
>
>
> Just to be clear. For Deos, I am compiling all code with -fPIC. This
> includes all librtemscpu.a and librtemsbsp.a. When I accidentally
> missed an adapter file, that caused an issue.
>

Oh my, that's me misreading majorly, sorry about that. I'm still
unclear on how this works:

> Unless the loader forces something, you can use PIC with no build system 
> changes.

I meant that to build with fpic for x86_64/amd64 and nothing else,
we'd end up having to add a lot of special cases within the build
system for even the "generic" parts of RTEMS, such as what's in cpukit
(all the subdirs except cpukit/score/cpu), right? My concern was that
in doing this, the already overcomplicated build system gets more
complicated. If y'all don't think that kind of "if amd64, then use
-fPIC" logic feels hacky, I'm okay with this approach 100%, and we can
work to find a way to make that special-casing as explicit as
possible.

>>
>> >
>> > - We can have a loader.c which acts as the core with the efi_main
>> > function - compile it with -fpic into loader.so, and then link
>> > loader.so, librtems*.a, and user-appXYZ.c together to form a
>> > relocatable ELF, then convert it into a PE using objcopy. Note that
>> > from what I can tell, the ELF generated from this still has type EXEC,
>> > not DYN, according to readelf.
>>
>> Correction: This was a leftover file that I'd forgotten to take out
>> after renaming a target. Sorry about the confusion. Just using the
>> "-shared" flag does cause the resulting ELF to be of type DYN.
>>
>> >
>> > The concerns I have are about my assumptions; if GCC generates any
>> > code that uses absolute addressing and that is resolved as a link-time
>> > relocation, that could be problematic because the final relocatable PE
>> > may not match up with the resolved absolute address.
>
>
> The Deos kernel guys had me check readelf on a known good executable
> with the ones I was producing. The loadable sections should match up.
> For example, on one architecture I missed an alignment in the linkcmds
> and on another, an argument hidden in bsp_specs made a section writable
> whic

Re: [GSoC] Ways to make the x86_64 port work with UEFI

2018-05-29 Thread Amaan Cheval
On Tue, May 29, 2018 at 9:28 PM, Amaan Cheval  wrote:
> Noted, thanks a ton for the details! Unrelated to the topic at hand,
> but out of interest, is this the only reading material for further
> details? http://ceur-ws.org/Vol-1697/EWiLi16_12.pdf
>
> In brief: My tests for keeping a libfake.a (compiled without -fpic)
> and a loader.so with a user-appXYZ.c have been successful, but I'm not
> sure if my assumptions hold for all cases. See the details below for
> more.
>
> Next actions:
> - Read more of Linkers and Loaders since it seems to be the only
> detailed resource I've found at this point
> - Experiment with actually using the existing librtems*.a I've got and
> making them boot as a PE UEFI application image
>
> 
>
> In more detail:
>
> The problem:
> That UEFI needs a relocatable PE file, i.e. one that can function
> regardless of the physical address it's loaded at (no virtual
> addresses that early).
> To build an ELF of that kind, the resources I've seen all build their
> source with -fpic, and then use objcopy to convert the ELF into a
> relocatable PE, with an embedded runtime self-relocator (akin to
> load-time relocation, if I'm understanding correctly).
>
> What Joel suggested seems to be the simplest option - see if not using
> -fpic for _all_ of RTEMS' build system is fine. I think it might be
> from some testing, but I'm not sure if this is conclusive since I need
> to understand the specifics of the entire development process better.
>
> So here's my understanding of the situation at the moment:
>
> - librtems*.a is made up of object files, compiled without -fpic, and
> that should be fine because I believe object files will use RIP
> relative addressing code by default on x64 where it can, and leave the
> rest for link-time relocations to handle. IF this is true, this works
> perfectly for us because all memory accesses and jumps/calls are
> relative.
>
> - We can have a loader.c which acts as the core with the efi_main
> function - compile it with -fpic into loader.so, and then link
> loader.so, librtems*.a, and user-appXYZ.c together to form a
> relocatable ELF, then convert it into a PE using objcopy. Note that
> from what I can tell, the ELF generated from this still has type EXEC,
> not DYN, according to readelf.

Correction: This was a leftover file that I'd forgotten to take out
after renaming a target. Sorry about the confusion. Just using the
"-shared" flag does cause the resulting ELF to be of type DYN.

>
> The concerns I have are about my assumptions; if GCC generates any
> code that uses absolute addressing and that is resolved as a link-time
> relocation, that could be problematic because the final relocatable PE
> may not match up with the resolved absolute address.
>
> My tests with a fake static archive library, and creating a PE have
> been successful, but I'm unsure of how to trigger the relocation
> behavior by the UEFI firmware (i.e. loading the UEFI image at an
> address other than its preferred one). One idea is to have a UEFI
> application image that loads this test UEFI application image through
> the "LoadImage" function UEFI provides as a service and then to use
> QEMU's monitor / gdb inspection capabilities to see if the address the
> image is loaded at genuinely changes.
>
> If any of you have any resources, that'd be highly appreciated. Some
> resources I'm using so far are:
> - 
> https://eli.thegreenplace.net/2011/11/03/position-independent-code-pic-in-shared-libraries/
> - 
> https://eli.thegreenplace.net/2011/08/25/load-time-relocation-of-shared-libraries/
> - https://eli.thegreenplace.net/2012/01/03/understanding-the-x64-code-models
>
> Sorry about the length of the email!
>
> On Fri, May 25, 2018 at 10:51 PM, Joel Sherrill  wrote:
>>
>>
>> On Fri, May 25, 2018, 12:11 PM Amaan Cheval  wrote:
>>>
>>> Hey! Could you link me to some code that you used for the Deos setup
>>> you mentioned?
>>> My understanding is that the -shared option can link static archives
>>> to create a "shared" library in the sense that it doesn't include the
>>> usual crt0 runtime environment and whatnot, but the code within is
>>> still position-dependent. Given that the PE image that EFI needs is
>>> one that needs to be truly relocatable, this may not work - BUT, I've
>>> only just noticed the ./gnuefi/reloc_x86_64.c file which seems to
>>> handle some kinds of runtime relocations encoded within the converted
>>> PE file, so maybe this will work after all. I'll continue to
>>> investigate and let you know how it goes!

Re: [GSoC] Ways to make the x86_64 port work with UEFI

2018-05-29 Thread Amaan Cheval
Noted, thanks a ton for the details! Unrelated to the topic at hand,
but out of interest, is this the only reading material for further
details? http://ceur-ws.org/Vol-1697/EWiLi16_12.pdf

In brief: My tests for keeping a libfake.a (compiled without -fpic)
and a loader.so with a user-appXYZ.c have been successful, but I'm not
sure if my assumptions hold for all cases. See the details below for
more.

Next actions:
- Read more of Linkers and Loaders since it seems to be the only
detailed resource I've found at this point
- Experiment with actually using the existing librtems*.a I've got and
making them boot as a PE UEFI application image



In more detail:

The problem:
That UEFI needs a relocatable PE file, i.e. one that can function
regardless of the physical address it's loaded at (no virtual
addresses that early).
To build an ELF of that kind, the resources I've seen all build their
source with -fpic, and then use objcopy to convert the ELF into a
relocatable PE, with an embedded runtime self-relocator (akin to
load-time relocation, if I'm understanding correctly).

What Joel suggested seems to be the simplest option - see if not using
-fpic for _all_ of RTEMS' build system is fine. I think it might be
from some testing, but I'm not sure if this is conclusive since I need
to understand the specifics of the entire development process better.

So here's my understanding of the situation at the moment:

- librtems*.a is made up of object files, compiled without -fpic, and
that should be fine because I believe object files will use RIP
relative addressing code by default on x64 where it can, and leave the
rest for link-time relocations to handle. IF this is true, this works
perfectly for us because all memory accesses and jumps/calls are
relative.

- We can have a loader.c which acts as the core with the efi_main
function - compile it with -fpic into loader.so, and then link
loader.so, librtems*.a, and user-appXYZ.c together to form a
relocatable ELF, then convert it into a PE using objcopy. Note that
from what I can tell, the ELF generated from this still has type EXEC,
not DYN, according to readelf.

The concerns I have are about my assumptions; if GCC generates any
code that uses absolute addressing and that is resolved as a link-time
relocation, that could be problematic because the final relocatable PE
may not match up with the resolved absolute address.

My tests with a fake static archive library, and creating a PE have
been successful, but I'm unsure of how to trigger the relocation
behavior by the UEFI firmware (i.e. loading the UEFI image at an
address other than its preferred one). One idea is to have a UEFI
application image that loads this test UEFI application image through
the "LoadImage" function UEFI provides as a service and then to use
QEMU's monitor / gdb inspection capabilities to see if the address the
image is loaded at genuinely changes.

If any of you have any resources, that'd be highly appreciated. Some
resources I'm using so far are:
- 
https://eli.thegreenplace.net/2011/11/03/position-independent-code-pic-in-shared-libraries/
- 
https://eli.thegreenplace.net/2011/08/25/load-time-relocation-of-shared-libraries/
- https://eli.thegreenplace.net/2012/01/03/understanding-the-x64-code-models

Sorry about the length of the email!

On Fri, May 25, 2018 at 10:51 PM, Joel Sherrill  wrote:
>
>
> On Fri, May 25, 2018, 12:11 PM Amaan Cheval  wrote:
>>
>> Hey! Could you link me to some code that you used for the Deos setup
>> you mentioned?
>> My understanding is that the -shared option can link static archives
>> to create a "shared" library in the sense that it doesn't include the
>> usual crt0 runtime environment and whatnot, but the code within is
>> still position-dependent. Given that the PE image that EFI needs is
>> one that needs to be truly relocatable, this may not work - BUT, I've
>> only just noticed the ./gnuefi/reloc_x86_64.c file which seems to
>> handle some kinds of runtime relocations encoded within the converted
>> PE file, so maybe this will work after all. I'll continue to
>> investigate and let you know how it goes!
>
>
> Deos isn't a good example except that you can compile with -fPIC and put
> that code into a static library. Deos is a closed source Level A (man rated
> flight) ARINC 653 RTOS. Its boot process reads configuration information
> about each partition and associates .so's with each address space per the
> configuration. It can't change after that.
>
> The RTEMS exe is mostly linked as normal except to use some arguments to say
> some symbols are from a shared library.
>
> The base address of the exe is that of the provided virtual address space
> with .data and .bss in their respective spaces.
>
> And our entry point is in C so th

Re: [GSoC] Ways to make the x86_64 port work with UEFI

2018-05-25 Thread Amaan Cheval
Hey! Could you link me to some code that you used for the Deos setup
you mentioned?
My understanding is that the -shared option can link static archives
to create a "shared" library in the sense that it doesn't include the
usual crt0 runtime environment and whatnot, but the code within is
still position-dependent. Given that the PE image that EFI needs is
one that needs to be truly relocatable, this may not work - BUT, I've
only just noticed the ./gnuefi/reloc_x86_64.c file which seems to
handle some kinds of runtime relocations encoded within the converted
PE file, so maybe this will work after all. I'll continue to
investigate and let you know how it goes!

Regarding how TLS differs with PIC - could you elaborate? Is it
something we'll need to solve for if we go with the -fPIC option, or
is it something we need to keep in mind as a limitation, but isn't
really a blocker?

On Fri, May 25, 2018 at 10:13 PM, Joel Sherrill <j...@rtems.org> wrote:
>
>
> On Fri, May 25, 2018, 11:15 AM Amaan Cheval <amaan.che...@gmail.com> wrote:
>>
>> Hey!
>>
>> Skippable details about how FreeBSD handles the UEFI boot process!
>>
>> 
>>
>> Having looked into it a bit more, my understanding of how FreeBSD
>> handles this process is:
>> - They build a two-stage bootloader for EFI, called boot1.efi and
>> loader.efi[1]
>> - loader.efi is an interactive prompt which may autoboot, or a "boot
>> kernelImg" command can be used to load the actual kernel
>> - The kernel is loaded as an ELF through helper functions. The
>> command_boot[2] function drives this:
>>   - In brief, the calls go through:
>> command_boot -> mod_loadkld -> file_load ->
>> file_formats[i]->l_load (actually the loadfile function in
>> load_elf.c[3])
>>   - The loadfile function parses the program and section headers of
>> the ELF file (through more function detours that are not really
>> important)
>>   - Once the ELF has been loaded at the correct entry_addr that it
>> expects to be loaded at in memory, the l_exec[4] function is called,
>> which is actually elf64_exec in elf64_freebsd.c[5], at which hopefully
>> through trampolining magic, the control flow will transfer to the
>> kernel or ELF module
>>
>>
>> 
>>
>> What this means for RTEMS if we go with gnu-efi is essentially 2
>> options, given that the objcopy method of converting from ELF->PE
>> requires the ELF to be a position-independent shared library:
>>
>> - Using -fPIC to compile all of RTEMS, including the RTEMS user's
>> application code. This way we'd have librtemsbso.so, librtemscpu.so,
>> etc. which would then be linked into user_app.c through -fPIC and
>> -shared flags still, creating one singular hello.so, which can then
>> finally be converted into hello.efi and put on a FAT filesystem and
>> booted. This seems doable, but I'm fairly concerned about it further
>> complicating our build system and likely being quite singular in its
>> focus on EFI.
>
>
> I'm using PIC on the Deos BSP. RTEMS is still a .a and exes are linked with
> our static libraries and Deos .so.
>
> Unless the loader forces something, you can use PIC with no build system
> changes.
>
> note that thread local storage is different on i386 with and without PIC.
>>
>>
>> - The FreeBSD way of a (loader.efi) and a hello.exe (ELF64) put on
>> possibly the same partition on the FAT filesystem required for UEFI
>> application images anyway. The loader.efi can find the hello.exe file
>> through perhaps a config file it can read or by having a magic-name
>> like rtems.exe or something. This effectively means we need an ELF
>> dynamic linker / loader (akin to ld.so) within RTEMS' source. I think
>> using FreeBSD's code for this should be fine. One added benefit of
>> this method is that librtems* and user applications remain as ELF64s,
>> which in the future could also be used with Multiboot with a slightly
>> modified "loader" (i.e. one which generates the apt Multiboot magic
>> header, and boots the PC from 32-bit protected mode to 64-bit long
>> mode).
>>
>> I prefer the latter approach personally. If both of these seem too
>> complicated, we can of course go back to considering generating the PE
>> header format in ASM the way Linux distros use EFISTUB and the code
>> Chris shared (as I mentioned in my original blog post) for wimboot.
>> Those approaches may be significantly simpler i

Re: [GSoC] Ways to make the x86_64 port work with UEFI

2018-05-25 Thread Amaan Cheval
Hey!

Skippable details about how FreeBSD handles the UEFI boot process!


Having looked into it a bit more, my understanding of how FreeBSD
handles this process is:
- They build a two-stage bootloader for EFI, called boot1.efi and loader.efi[1]
- loader.efi is an interactive prompt which may autoboot, or a "boot
kernelImg" command can be used to load the actual kernel
- The kernel is loaded as an ELF through helper functions. The
command_boot[2] function drives this:
  - In brief, the calls go through:
command_boot -> mod_loadkld -> file_load ->
file_formats[i]->l_load (actually the loadfile function in
load_elf.c[3])
  - The loadfile function parses the program and section headers of
the ELF file (through more function detours that are not really
important)
  - Once the ELF has been loaded at the correct entry_addr that it
expects to be loaded at in memory, the l_exec[4] function is called,
which is actually elf64_exec in elf64_freebsd.c[5], at which hopefully
through trampolining magic, the control flow will transfer to the
kernel or ELF module



What this means for RTEMS if we go with gnu-efi is essentially 2
options, given that the objcopy method of converting from ELF->PE
requires the ELF to be a position-independent shared library:

- Using -fPIC to compile all of RTEMS, including the RTEMS user's
application code. This way we'd have librtemsbso.so, librtemscpu.so,
etc. which would then be linked into user_app.c through -fPIC and
-shared flags still, creating one singular hello.so, which can then
finally be converted into hello.efi and put on a FAT filesystem and
booted. This seems doable, but I'm fairly concerned about it further
complicating our build system and likely being quite singular in its
focus on EFI.

- The FreeBSD way: a loader.efi plus a hello.exe (ELF64), both placed
(possibly on the same partition) on the FAT filesystem that UEFI
requires for application images anyway. The loader.efi could find the
hello.exe file through a config file it reads, or through a magic name
like rtems.exe. This effectively means we need an ELF dynamic
linker/loader (akin to ld.so) within RTEMS' source - I think reusing
FreeBSD's code for this should be fine. One added benefit of this
method is that librtems* and user applications remain ELF64s, which in
the future could also be used with Multiboot through a slightly
modified "loader" (i.e. one which generates the appropriate Multiboot
magic header, and boots the PC from 32-bit protected mode into 64-bit
long mode).

I personally prefer the latter approach. If both of these seem too
complicated, we can of course go back to considering generating the PE
header in assembly, the way Linux distros do with EFISTUB, or the
wimboot code Chris shared (as I mentioned in my original blog post).
Those approaches may be significantly simpler in a sense, but they may
limit how we use UEFI services - I'm not sure about the details yet -
I can investigate if y'all aren't fond of the options I laid out
above.

Let me know!

[1] 
https://www.freebsd.org/cgi/man.cgi?query=loader=0=8=FreeBSD+11.1-RELEASE+and+Ports=default=html
[2] 
https://github.com/freebsd/freebsd/blob/433bd38e3a0349f9f89f9d54594172c75b002b74/stand/common/boot.c#L53
[3] 
https://github.com/freebsd/freebsd/blob/d8596f6f687a64b994b065f3058155405dfc39db/stand/common/load_elf.c#L150
[4] 
https://github.com/freebsd/freebsd/blob/433bd38e3a0349f9f89f9d54594172c75b002b74/stand/common/boot.c#L107
[5] 
https://github.com/freebsd/freebsd/blob/d8596f6f687a64b994b065f3058155405dfc39db/stand/efi/loader/arch/amd64/elf64_freebsd.c#L93

Re: Compiler error with U_FORTIFY_SOURCE, D_FORTIFY_SOURCE options

2018-05-25 Thread Amaan Cheval
Hi!

I don't know the specifics of the issue you're describing here, so
others should definitely weigh in on that if they can, but regarding
the header file: the include syntax used (<ssp.h>, as opposed to
"ssp.h") is the one used for system header files (e.g. <stdio.h> vs
"my_header.h").

I see 2 possibilities:
- The project expects these files to be in a standard include directory
(such as /usr/include), which the compiler checks for system headers by
default
- You need to add the directory where ssp.h exists to the command-line
flags for the compiler, for eg. through "gcc -I. ssp.c" (assuming ssp.c
includes ssp.h, which exists in the same directory). See this for more:
https://gcc.gnu.org/onlinedocs/gcc-6.4.0/cpp/Search-Path.html


On Fri, May 25, 2018, 8:08 PM Udit agarwal  wrote:

> Hi all,
> While cross-compiling fio for RTEMS, by default in the compiler call
>
>>  -U_FORTIFY_SOURCE   -D_FORTIFY_SOURCE=2
>
> options are used which raises the following error:
>
>> /home/uka_in/development/benchmark/sandbox/5/lib/gcc/arm-rtems5/7.3.0/include/ssp/unistd.h:38:10:
>> fatal error: ssp.h: No such file or directory
>>  #include <ssp.h>
>>   ^~~
>> compilation terminated.
>> make: *** [crc/crc32c-arm64.o] Error 1
>>
>> However, ssp.h is in the same folder.
> Without these options, only sys/unistd.h is included and not
> ssp/unistd.h. Any idea why these compiler options raise this error?
>
> Also, FIO needs a BSP-independent method of determining the size of RAM
> for its internal workings. I'm unable to find any such
> implementation. Any help on this would be great too.
>
> Regards,
> Udit agarwal
> ___
> devel mailing list
> devel@rtems.org
> http://lists.rtems.org/mailman/listinfo/devel

Re: [GSoC] Ways to make the x86_64 port work with UEFI

2018-05-20 Thread Amaan Cheval
On Sat, May 19, 2018 at 6:51 PM, Gedare Bloom <ged...@rtems.org> wrote:
> On Fri, May 18, 2018 at 5:53 PM, Joel Sherrill <j...@rtems.org> wrote:
>>
>>
>> On Fri, May 18, 2018 at 3:24 PM, Amaan Cheval <amaan.che...@gmail.com>
>> wrote:
>>>
>>> Hi everyone!
>>>
>>> I've written a quick blog post summarizing the options I've considered
>>> to make the x86_64 port work with UEFI firmware - the primary winner
>>> seems to be in my eyes to use "gnu-efi" and to add support for the
>>> target "pei-x86-64" (aliased to "efi-app-x86_64") to
>>> "x86_64-rtems5-objcopy" in binutils. I've submitted a patch for this
>>> here[1].
>>
>>
>> That patch is quite simple so shouldn't be a problem if this is the
>> direction
>> that gets consensus.
>>>
>>>
>>> The blog post is here:
>>> https://blog.whatthedude.com/post/uefi-app-options/
>>>
>>> I'd appreciate all feedback (and please do let me know if I haven't
>>> provided enough context)!
>>>
>>> Specifically, some concerns I'd like to discuss are:
>>>
>>> - Does everyone agree with me on choosing gnu-efi + objcopy as our
>>> method of choice?
>>
>>
>> Does using gnu-efi add code that runs on the target? Can you point
>> us to the files, if so.

Sure. The files would run on the target, yes. These are the ones
listed here (as linked to in my blog post, perhaps without sufficient
emphasis):
https://wiki.osdev.org/UEFI#Developing_with_GNU-EFI

>>
>> Can you tell which approach FreeBSD takes?

FreeBSD takes the gnu-efi approach I see as the "winner" here (also a
link in the post):
https://github.com/freebsd/freebsd/blob/996b0b6d81cf31cd8d58af5d8b45f0b4945d960d/stand/efi/loader/Makefile#L98-L119

>>
>>>
>>> - How do we integrate gnu-efi into our build process? A part of the
>>> RSB, making sure the path to the libraries are in an exported
>>> variable? Or perhaps a part of the RTEMS kernel itself if the licenses
>>> are compatible (I don't see any on the project[2], only copyright
>>> notices within the source files of the release versions).
>>
>>
>> GNU-efi would be built like qemu or the device tree compiler would
>> be my guess and x86_64-rtems toolset might add that to the standard
>> set of tools. License on host tools being GPL isn't an issue.
>>
>
> It appears to be a standard 2-clause BSD released by Intel as
> specified in the README file of gnu-efi.
>
>>
>>>
>>> - Regardless of how we manage UEFI, do we require Multiboot support
>>> too? Multiboot drops us in a 32-bit protected mode environment,
>>> whereas 64-bit UEFI firmware will boot us into 64-bit long mode - this
>>> would mean the kernel would need to support separate code-paths for
>>> the 2 if we want to support both methods.
>>
>>
>> That's a good question. For GSoC, I think UEFI is fine and perhaps a ticket
>> under the general "modern PC support" ticket for multiboot support. Unless
>> that eliminates a LOT of PCs.
>>
>> I don't want you to spend all summer getting an image to boot both
>> ways. Personally, I want you to have a working BSP one way. :)
> +1
>

Noted, thanks!

>>>
>>>
>>> [1] https://www.sourceware.org/ml/binutils/2018-05/msg00197.html
>>> [2] https://sourceforge.net/projects/gnu-efi/
>>
>>
>> --joel
>>


Re: [PATCH] Rework to minimize and eventually eliminate RTEMS use of bsp_specs

2018-05-18 Thread Amaan Cheval
To be clear, I applied this patch (with my fixes) on the 7.3 release
through the RSB to test, not on GCC's master branch.

> to add i386/rtemself64.h

What you sent in this email thread adds rtemself64.h already. Do you
mean you'd like to split the commits up or something?

The only changes I made on top of yours were:

- Readd "rtems.h" to config.gcc
- Fix comments

I've attached the patch file I used within the RSB here (sorry if you
meant a patch of _just_ the fixes I made on top of yours, this is just
the cumulative diff I used to patch GCC 7.3 to test).

Regards,

On Fri, May 18, 2018 at 7:00 PM, Joel Sherrill <j...@rtems.org> wrote:
>
>
>
> On Fri, May 18, 2018 at 1:38 AM, Amaan Cheval <amaan.che...@gmail.com>
> wrote:
>>
>> I just compiled my local fixed copy (adding rtems.h back in) and
>> there's good news! With the patch, the x86_64 compile stub works with
>> a blank bsp_specs file!
>
>
> Awesome!
>
> Can you send me your changes as a patch? I am thinking I need to make
> sure we agree on what the gcc master for x86_64-rtems looks like.
>
> Apparently I owe committing a patch to add i386/rtemself64.h since it is
> missing on the master. And the comment is wrong.  What else?
>
>> On Fri, May 18, 2018 at 12:59 AM, Amaan Cheval <amaan.che...@gmail.com>
>> wrote:
>> > Hey!
>> >
>> > Thanks so much for sharing this - it's quite useful for putting your
>> > earlier email[1] about minimizing the bsp_specs in context.
>> >
>> > From looking ahead a bit without testing (still compiling), the patch
>> > may need an ENDFILE_SPEC definition as well for "crtend.o" (which
>> > defines __TMC_END__, left undefined by crtbegin.o, for example) and
>> > possibly "crtn.o", at least to eliminate the x86_64 port's bsp_specs
>> > entirely (see here[2]).
>>
>> Just noticed that ENDFILE_SPEC already includes crtend in i386elf.h,
>> so there's no need for this change.
>>
>> >
>> > I've also left some comments inline below.
>> >
>> > +1 on upstreaming this into GCC (making sure it also backports to 7.3
>> > for simplicity, so we don't need to write a 7.3-specific patch for the
>> > RSB as well) with a few additions (at least for the x86_64 target, to
>> > try to have an empty bsp_specs to begin with).
>> >
>> > [1] https://lists.rtems.org/pipermail/devel/2018-May/021430.html
>> > [2]
>> > https://github.com/AmaanC/rtems-gsoc18/blob/ac/daily-01-compile-stub/bsps/x86_64/amd64/start/bsp_specs
>> >
>> > On Wed, May 16, 2018 at 8:46 PM, Joel Sherrill <j...@rtems.org> wrote:
>> >> ---
>> >>  gcc/config.gcc|  2 +-
>> >>  gcc/config/arm/rtems.h|  4 ++++
>> >>  gcc/config/bfin/rtems.h   |  4 ++++
>> >>  gcc/config/i386/rtemself.h|  6 +++++-
>> >>  gcc/config/i386/rtemself64.h  | 39 +++++++++++++++++++++++++++++++++++++++
>> >>  gcc/config/m68k/rtemself.h|  4 ++++
>> >>  gcc/config/microblaze/rtems.h |  4 ++++
>> >>  gcc/config/mips/rtems.h   |  4 ++++
>> >>  gcc/config/moxie/rtems.h  |  4 ++++
>> >>  gcc/config/nios2/rtems.h  |  4 ++++
>> >>  gcc/config/riscv/rtems.h  |  4 ++++
>> >>  gcc/config/rs6000/rtems.h |  5 +++++
>> >>  gcc/config/rtems.h|  6 +++++-
>> >>  gcc/config/sh/rtems.h |  4 ++++
>> >>  gcc/config/sh/rtemself.h  |  4 ++++
>> >>  gcc/config/sparc/rtemself.h   |  4 ++++
>> >>  gcc/config/v850/rtems.h   |  4 ++++
>> >>  17 files changed, 103 insertions(+), 3 deletions(-)
>> >>  create mode 100644 gcc/config/i386/rtemself64.h
>> >>
>> >> diff --git a/gcc/config.gcc b/gcc/config.gcc
>> >> index d509800..de27e5c 100644
>> >> --- a/gcc/config.gcc
>> >> +++ b/gcc/config.gcc
>> >> @@ -1499,7 +1499,7 @@ x86_64-*-elf*)
>> >> tm_file="${tm_file} i386/unix.h i386/att.h dbxelf.h elfos.h
>> >> newlib-stdint.h i386/i386elf.h i386/x86-64.h"
>> >> ;;
>> >>  x86_64-*-rtems*)
>> >> -   tm_file="${tm_file} i386/unix.h i386/att.h dbxelf.h elfos.h
>> >> newlib-stdint.h i386/i386elf.h i386/x86-64.h i386/rtemself.h rtems.h"
>> >> +   tm_file="${tm_file} i386/unix.h i386/att.h dbxelf.h elfos.h
>> >> newlib-stdint.h i386/i386elf.h i386/x86-64.h i386/rtemself64.h"
>> >
>> > In rebasing with upstream, this commit m
