[coreboot] Re: [coreboot - Bug #524] `CONFIG_X2APIC_ONLY=y` or `CONFIG_X2APIC_RUNTIME=y` cause Linux in emulation/qemu-i440fx to crash

2024-02-19 Thread ron minnich
This is another example of "don't try to support impossible hardware" :-)

The real bug is that the coreboot build system lets you build the i440fx with
x2APIC, right? I assume that's what Paul meant.

OTOH, it is a way to test that Linux properly fails when told to use
impossible hardware :-)

On Mon, Feb 19, 2024 at 12:39 PM Felix Held  wrote:

> Issue #524 has been updated by Felix Held.
>
>
> This isn't really a coreboot bug, since qemu doesn't support x2apic
> emulation: https://gitlab.com/qemu-project/qemu/-/issues/330
> Also the i440fx chipset is much older than the first CPU supporting x2apic
> mode.
>
> 
> Bug #524: `CONFIG_X2APIC_ONLY=y` or `CONFIG_X2APIC_RUNTIME=y` cause Linux
> in emulation/qemu-i440fx to crash
> https://ticket.coreboot.org/issues/524#change-1754
>
> * Author: Paul Menzel
> * Status: New
> * Priority: Normal
> * Target version: none
> * Start date: 2024-02-04
> * Affected versions: 4.21
> 
> Building coreboot for `emulation/qemu-i440fx` with `CONFIG_X2APIC_ONLY=y` or
> `CONFIG_X2APIC_RUNTIME=y` causes Linux to crash:
>
> ### `CONFIG_X2APIC_ONLY=y`
>
> ```
> $ qemu-system-x86_64 -bios /dev/shm/coreboot/build/coreboot.rom -L
> /dev/shm -enable-kvm -smp cpus=2 -m 1G -hda /dev/shm/debian-32.img -serial
> stdio -net nic -net user,hostfwd=tcp::2-:22
> WARNING: Image format was not specified for '/dev/shm/debian-32.img' and
> probing guessed raw.
>  Automatically detecting the format is dangerous for raw images,
> write operations on block 0 will be restricted.
>  Specify the 'raw' format explicitly to remove the restrictions.
>
>
> [NOTE ]  coreboot-4.22-671-g416cc665929e Fri Feb 02 20:40:20 UTC 2024
> x86_64 bootblock starting (log level: 7)...
> [INFO ]  Timestamp - end of bootblock: 18374588
> [INFO ]  Timestamp - start of romstage: 20847149
> [INFO ]  QEMU: firmware config interface detected
> [INFO ]  Firmware config version id: 3
> [INFO ]  QEMU: firmware config: Found 'etc/e820'
> [DEBUG]  CBMEM:
> [DEBUG]  IMD: root @ 0x3000 254 entries.
> [DEBUG]  IMD: root @ 0x3fffec00 62 entries.
> [DEBUG]  FMAP: Found "FLASH" version 1.1 at 0x0.
> [DEBUG]  FMAP: base = 0xffc0 size = 0x40 #areas = 3
> [DEBUG]  FMAP: area COREBOOT found @ 200 (4193792 bytes)
> [INFO ]  CBFS: mcache @0x00014e00 built for 13 files, used 0x294 of 0x4000
> bytes
> [INFO ]  CBFS: Found 'fallback/postcar' @0x1c140 size 0x91b0 in mcache
> @0x00014fe0
> [DEBUG]  Loading module at 0x3ffce000 with entry 0x3ffce042. filesize:
> 0x7d90 memsize: 0xe118
> [DEBUG]  Processing 636 relocs. Offset value of 0x3dfce000
> [INFO ]  Timestamp - end of romstage: 69275568
> [DEBUG]  BS: bootblock times (exec / console): total (unknown) / 12 ms
>
>
> [NOTE ]  coreboot-4.22-671-g416cc665929e Fri Feb 02 20:40:20 UTC 2024
> x86_64 postcar starting (log level: 7)...
> [INFO ]  Timestamp - start of postcar: 113911520
> [INFO ]  Timestamp - end of postcar: 117105771
> [INFO ]  Timestamp - starting to load ramstage: 121657944
> [DEBUG]  FMAP: area COREBOOT found @ 200 (4193792 bytes)
> [INFO ]  CBFS: Found 'fallback/ramstage' @0xd040 size 0xf072 in mcache
> @0x3fffeaec
> [INFO ]  Timestamp - starting LZMA decompress (ignore for x86): 133785778
> [INFO ]  Timestamp - finished LZMA decompress (ignore for x86): 158178845
> [DEBUG]  Loading module at 0x3fe9d000 with entry 0x3fe9d000. filesize:
> 0x234e0 memsize: 0x12fe28
> [DEBUG]  Processing 3390 relocs. Offset value of 0x3be9d000
> [INFO ]  Timestamp - finished loading ramstage: 193391473
> [DEBUG]  BS: postcar times (exec / console): total (unknown) / 5 ms
>
>
> [NOTE ]  coreboot-4.22-671-g416cc665929e Fri Feb 02 20:40:20 UTC 2024
> x86_64 ramstage starting (log level: 7)...
> [INFO ]  Timestamp - start of ramstage: 216358307
> [INFO ]  Timestamp - device enumeration: 219906562
> [DEBUG]  BS: BS_DEV_INIT_CHIPS run times (exec / console): 0 / 1 ms
> [INFO ]  Enumerating buses...
> [DEBUG]  Root Device scanning...
> [DEBUG]  CPU_CLUSTER: 0 enabled
> [DEBUG]  DOMAIN:  enabled
> [DEBUG]  DOMAIN:  scanning...
> [DEBUG]  PCI: pci_scan_bus for segment group 00 bus 00
> [DEBUG]  PCI: 00:00:00.0 [8086/1237] enabled
> [DEBUG]  PCI: 00:00:01.0 [8086/7000] enabled
> [DEBUG]  PCI: 00:00:01.1 [8086/7010] enabled
> [DEBUG]  PCI: 00:00:01.3 [8086/7113] enabled
> [DEBUG]  PCI: 00:00:02.0 [1234/] enabled
> [DEBUG]  PCI: 00:00:03.0 [8086/100e] enabled
> [DEBUG]  PCI: 00:00:01.0 scanning...
> [DEBUG]  scan_bus: bus PCI: 00:00:01.0 finished in 0 msecs
> [DEBUG]  PCI: 00:00:01.3 scanning...
> [DEBUG]  scan_bus: bus PCI: 00:00:01.3 finished in 0 msecs
> [DEBUG]  scan_bus: bus DOMAIN:  finished in 14 msecs
> [DEBUG]  scan_bus: bus Root Device finished in 18 msecs
> [INFO ]  done
> [DEBUG]  BS: BS_DEV_ENUMERATE run times (exec / console): 1 / 21 ms
> [INFO ]  Timestamp - device configuration: 298893234
> [DEBUG]  found VGA at PCI: 00:00:02.0
> [DEBUG]  Setting up VGA for PCI: 00:00:02.0

[coreboot] Re: QEMU x86 i440fx/piix4 build fails for >= 32MB ROMs - Assertion IS_HOST_SPACE_ADDRESS(host_space_address) failed

2024-02-19 Thread ron minnich
I guess what I’m thinking is: I’m not sure it’s worth the effort to make a
build work for something that is physically impossible.

On Mon, Feb 19, 2024 at 12:11 Felix Held 
wrote:

> Hi Mike,
>
> SPI NOR flash chips with more than 16MByte use 4 byte addresses while
> ones with up to 16MBytes use 3 byte addresses. The SPI flash controllers
> on older systems often only support the 3 byte address mode. Also
> typically only up to 16 MBytes worth of SPI flash contents can be mapped
> right below the 4GB boundary, since the 16MByte below that contain the
> MMIO of for example LAPIC and IOAPIC.
> Had a quick look at the BKDG for family 16h model 30h, which is newer
> than the chip used on G505S or A88XM-E, and it didn't have the registers
> in the SPI controller that I'd expect to be present if it supports the 4
> byte address mode.
>
> Regards,
> Felix
>
> On 19/02/2024 19:55, Mike Banon wrote:
> > Theoretically - yes, if someone finds & solders there a 32 MB (256
> > megabit) SPI Flash chip with 8 pins. Hopefully, as the proprietary
> > UEFIs become more & more bloated, these large capacity chips will
> > become more widely available in the near future. And, since coreboot
> > itself consumes less than 1MB on these "opensource AGESA" AMD systems
> > such as G505S and A88XM-E, all this room will allow some very
> > interesting experiments! If even 3 MB is enough for me to put 9 of 10
> > floppies of the collection described here (thanks to LZMA compression)
> > -
> http://dangerousprototypes.com/docs/Lenovo_G505S_hacking#Useful_floppies
> > , guess what wonders we can do with 31 MB... ;-)
> >
> > On Mon, Feb 19, 2024 at 7:17 PM ron minnich  wrote:
> >>
> >> Can the system you are discussing actually use larger than 16 MB rom?
> >>
> >>   I am wondering about your use of the phrase “out of curiosity”
> >>
> >> On Mon, Feb 19, 2024 at 07:05 Mike Banon  wrote:
> >>>
> >>> Small bump, I am still having this error while (out of curiosity)
> >>> trying to build the Lenovo G505S ROM for 32 MB or 64 MB spi flash:
> >>>
> >>>  OBJCOPY    bootblock.raw.bin
> >>> Created CBFS (capacity = 33488356 bytes)
> >>>  BOOTBLOCK
> >>>  CBFS   cbfs_master_header
> >>>  CBFS   fallback/romstage
> >>> Image SIZE 33554432
> >>> cbfstool:
> /media/mint/2183183a-158f-476a-81af-b42534a68706/shared/core/coreboot/util/cbfstool/cbfstool.c:1186:
> >>> cbfstool_convert_mkstage: Assertion
> >>> `IS_HOST_SPACE_ADDRESS(host_space_address)' failed.
> >>> Aborted (core dumped)
> >>> make: *** [Makefile.mk:1210: build/coreboot.pre] Error 134
> >>>
> >>> Meanwhile, it builds fine for 4 MB / 8 MB / 16 MB , only these large
> >>> sizes are a problem
> >>>
> >>> On Sat, Jun 25, 2022 at 12:55 AM Julius Werner 
> wrote:
> >>>>
> >>>> I can see a little bug that makes this return a confusing error (it
> >>>> should have really failed with `SPI flash address(0x300) not in any
> >>>> mmap window!`), and we can fix that if you want. But that still won't
> >>>> make this build (and my patch didn't cause the underlying problem,
> >>>> before that it may have built an image but it probably wouldn't have
> >>>> booted). By default cbfstool only expects the top 16MB of the flash to
> >>>> be memory-mapped, so it cannot link XIP stages into areas outside of
> >>>> that. The real solution is to either change your FMAP to put the
> >>>> COREBOOT section into the top 16MB (we might want to change the
> >>>> auto-generated default FMAP to do that), or pass
> >>>> --ext-win-base/--ext-win-size options to cbfstool to tell it how to
> >>>> map areas below the top 16MB.
> >>>>
> >>>> On Thu, Jun 23, 2022 at 1:09 AM Paul Menzel 
> wrote:
> >>>>>
> >>>>> Dear Mike,
> >>>>>
> >>>>>
> >>>>> Am 23.06.22 um 09:49 schrieb Mike Banon:
> >>>>>> If I use a default config for i440fx/piix4, building a 16MB ROM
> works
> >>>>>> fine, but 32MB or 64MB doesn't work anymore:
> >>>>>>
> >>>>>> ...
> >>>>>>   CC postcar/southbridge/intel/common/rtc.o
> >>>>>>   LINK   cbfs/fallback/postcar.debug
> >>>>>>   OBJCOPY    cbfs/fallback/romstage.el

[coreboot] Re: QEMU x86 i440fx/piix4 build fails for >= 32MB ROMs - Assertion IS_HOST_SPACE_ADDRESS(host_space_address) failed

2024-02-19 Thread ron minnich
Can the system you are discussing actually use larger than 16 MB rom?

 I am wondering about your use of the phrase “out of curiosity”

On Mon, Feb 19, 2024 at 07:05 Mike Banon  wrote:

> Small bump, I am still having this error while (out of curiosity)
> trying to build the Lenovo G505S ROM for 32 MB or 64 MB spi flash:
>
> OBJCOPY    bootblock.raw.bin
> Created CBFS (capacity = 33488356 bytes)
> BOOTBLOCK
> CBFS   cbfs_master_header
> CBFS   fallback/romstage
> Image SIZE 33554432
> cbfstool:
> /media/mint/2183183a-158f-476a-81af-b42534a68706/shared/core/coreboot/util/cbfstool/cbfstool.c:1186:
> cbfstool_convert_mkstage: Assertion
> `IS_HOST_SPACE_ADDRESS(host_space_address)' failed.
> Aborted (core dumped)
> make: *** [Makefile.mk:1210: build/coreboot.pre] Error 134
>
> Meanwhile, it builds fine for 4 MB / 8 MB / 16 MB , only these large
> sizes are a problem
>
> On Sat, Jun 25, 2022 at 12:55 AM Julius Werner 
> wrote:
> >
> > I can see a little bug that makes this return a confusing error (it
> > should have really failed with `SPI flash address(0x300) not in any
> > mmap window!`), and we can fix that if you want. But that still won't
> > make this build (and my patch didn't cause the underlying problem,
> > before that it may have built an image but it probably wouldn't have
> > booted). By default cbfstool only expects the top 16MB of the flash to
> > be memory-mapped, so it cannot link XIP stages into areas outside of
> > that. The real solution is to either change your FMAP to put the
> > COREBOOT section into the top 16MB (we might want to change the
> > auto-generated default FMAP to do that), or pass
> > --ext-win-base/--ext-win-size options to cbfstool to tell it how to
> > map areas below the top 16MB.
> >
> > On Thu, Jun 23, 2022 at 1:09 AM Paul Menzel 
> wrote:
> > >
> > > Dear Mike,
> > >
> > >
> > > Am 23.06.22 um 09:49 schrieb Mike Banon:
> > > > If I use a default config for i440fx/piix4, building a 16MB ROM works
> > > > fine, but 32MB or 64MB doesn't work anymore:
> > > >
> > > > ...
> > > >  CC postcar/southbridge/intel/common/rtc.o
> > > >  LINK   cbfs/fallback/postcar.debug
> > > >  OBJCOPY    cbfs/fallback/romstage.elf
> > > >  CREATE
>  build/mainboard/emulation/qemu-i440fx/cbfs-file.vqaXlP.out (from
> /home/my_custom_path_to/coreboot/.config)
> > > >  CC+STRIP   src/lib/cbfs_master_header.c
> > > >  OBJCOPY    cbfs/fallback/bootblock.elf
> > > >  OBJCOPY    bootblock.raw.elf
> > > >  OBJCOPY    bootblock.raw.bin
> > > > Created CBFS (capacity = 33553892 bytes)
> > > >  BOOTBLOCK
> > > >  CBFS   cbfs_master_header
> > > >  CBFS   fallback/romstage
> > > > cbfstool:
> /home/my_custom_path_to/coreboot/util/cbfstool/cbfstool.c:1145:
> cbfstool_convert_mkstage: Assertion
> `IS_HOST_SPACE_ADDRESS(host_space_address)' failed.
> > > > make: *** [Makefile.inc:1116: build/coreboot.pre] Aborted
> > >
> > > Thank you for the report. It looks like a regression of commit
> > > 20ad36547e25 (cbfstool: Do host space address conversion earlier when
> > > adding files) [1].
> > >
> > > Building a 32 MB ROM also fails for emulation/qemu-q35
> > > (`CONFIG_BOARD_EMULATION_QEMU_X86_Q35=y`).
> > >
> > >
> > > Kind regards,
> > >
> > > Paul
> > >
> > >
> > > [1]: https://review.coreboot.org/c/coreboot/+/60018
> > ___
> > coreboot mailing list -- coreboot@coreboot.org
> > To unsubscribe send an email to coreboot-le...@coreboot.org
>
>
>
> --
> Best regards, Mike Banon
> Open Source Community Manager of 3mdeb - https://3mdeb.com/


[coreboot] Re: 2023-08-23 - coreboot Leadership meeting minutes

2023-09-11 Thread ron minnich
While it is true that there are binary blobs in coreboot today, that
was not always the case.

From 1999 to about 2011, all coreboot images were built from 100%
source code, most of it GPL, some of it BSD-style license (e.g.
microcode).

Around 2006, Intel made it impossible to continue with open source for
server platforms by restricting access to chipset data.

In 2014, AMD decided they would no longer release AGESA as open
source, breaking with a commitment they made in 2006 at the LinuxBIOS
meeting in Hamburg.

I don't think that the existence of blobs justifies adding more blobs.

On Mon, Sep 11, 2023 at 10:52 AM Peter Stuge  wrote:
>
> Hello Hannah,
>
> Williams, Hannah wrote:
> > Already there are binaries FSP, AGESA, PSP being used in Coreboot
>
> We consider these historical problems in the coreboot project, and they
> are absolutely not something we intend to lean into. Since you have
> a mandate to work with coreboot I guess you are on board with this;
> after all, our culture is inherent to our project.
>
>
> > and because of IP and licensing issues everything cannot be open sourced.
>
> I consider this a temporary problem for Intel that it needs to solve
> to avoid losing future business to competing, open source solutions.
>
>
> > This is the fastest method for this specific product
>
> Since coreboot isn't owned by Intel I'm afraid Intel has to accept
> that the request gets denied by coreboot.
>
> I guess you can understand that coreboot has no incentive to make
> further compromise that would really only benefit Intel and actually
> hurt coreboot.
>
>
> > binary rule (by the way where is this stated in coreboot.org?).
>
> Do you really not understand the spirit of open source and what
> coreboot is doing since more than 20 years? I guess you do but that
> you need a reference to something written for some silly higher-ups. :\
>
>
> > We will also work closely with our other open source Linux graphics
> > team to see how we can leverage common code for future Silicon.
>
> This is a good idea. Maybe libgfxinit could even become the primary
> codebase at some point. In any case it's just silly to duplicate work
> at all, including for i915+GOP.
>
>
> Kind regards
>
> //Peter


[coreboot] Re: 2023-08-23 - coreboot Leadership meeting minutes

2023-09-07 Thread ron minnich
Were it not for the fact that we've been having the general open
source discussion with Intel for 24 years, and this graphics
discussion for 10 years, we might believe the claim that a future
open source release is possible.

Intel did not take open source into account when Intel wrote this
code; why would we expect Intel to take open source into account now?

It's very easy to predict that the open source rewrite, or release,
will never happen, because there are decades of history to draw upon.

I hope this explains a certain apparent disbelief on the community's
part that we should take binaries now, and an open source version
tomorrow. Because we know it won't happen. It seems pointless to
accept a binary blob, in the short term, when we know there are only
binary blobs in the long term.

ron


On Thu, Sep 7, 2023 at 1:50 PM Williams, Hannah
 wrote:
>
> It is not possible to open source uGOP today without re-writing it. We do not 
> have time to re-write considering our product timeline and hence the request 
> to allow to use binary now. We acknowledge that we will make effort to open 
> source uGOP for future SOC by working internally with the other teams in 
> Intel like i915 team. We have to see how to write common code between the two 
> so that we can open source at the same time.
>
> Hannah
>
>
>


[coreboot] Re: How to best handle new Intel Sapphire Rapids server boards based on Intel Archer City?

2023-06-20 Thread ron minnich
The approach in the last 24 years (of this unsustainable project :-) has
been to get several mainboards of a type, and, once we have them, try to
work out what code is truly common and what code is similar but not truly
common.

Code that is truly common can then be factored out into places such as
src/lib. Code then migrates out of specific mainboards and into common
spaces. This progression has happened many times.

And, yes, no question, this is an activity that likely occurs less than it
should. Such is our industry.

It is not possible to know, a priori, what those common pieces will be. So
we are left with this admittedly non-ideal situation.

There is the further risk that a seemingly harmless change is discovered,
1.5 years later, to break some board. This has happened. To me.

As has been pointed out, always in motion, the future is.

On Tue, Jun 20, 2023 at 4:08 PM Peter Stuge  wrote:

> (sorry if you receive this twice)
>
> David Hendricks wrote:
> > it will be easier to refactor portions of the code with the large
> > patches merged in a buildable and (hopefully) usable/testable state.
>
> That's pretty weak sauce and I think you all know it deep down.
>
> Who pays for refactoring? Probably someone else.
>
> That's unsustainable for the project.
>
> It externalizes refactoring cost from those creating the initial
> mainboard ports but that's not how any platform code can grow
> into something well-engineered and versatile.
>
> One can certainly argue that well-engineered is just too costly for
> our community to strive for but while that does seem the prevailing
> group think I have to say I would consider it a sad resignation.
>
>
> //Peter
>


[coreboot] Re: How to best handle new Intel Sapphire Rapids server boards based on Intel Archer City?

2023-06-19 Thread ron minnich
There have always been two approaches to new mainboards in coreboot.

The first, "copy and convert", is to take the most similar board, copy the
code, and then change it. I believe this is the strategy paul is unhappy
with.

The second, "embrace and extend", is to modify existing board in place to
support a new board.

The question of "copy and convert" vs "embrace and extend" has come up many
times in the last 23 years. There are no ideal answers, we have found.

When the boards are variants from a single vendor, it can work well.
chromebook boards are a good example. One vendor will (we hope) make sure
that, as new variants are added, they do not break old variants.

We've had trouble over the years merging boards from different vendors.
Even in the simple case, with similar boards from more than one vendor, a
merged board will look like the union, with a greatly expanded set of build
options. But in some early coreboot experience, e.g. with Opteron ca 2004,
we found that some options were incompatible. CKE was a big pain.

In the earliest days, we did make an effort to have common code for similar
boards. Problems arose as boards from one vendor changed in ways
incompatible with other vendors. We saw this from the beginning with boards
that used SiS 630 chipsets, which led to a panic debug session for SC 2000
as we dealt with a new board, allegedly compatible with an earlier board,
which had some key differences that could not be accommodated in a single
board source tree. We went with copy and convert.

I think it's great to have as much common code as possible. But calling an
inventec board and a bytedance board variants just because they happen to
use SPR? I think that's a mistake.



On Mon, Jun 19, 2023 at 2:02 PM Paul Menzel  wrote:

> Dear Martin,
>
>
> Thank you very much for taking the time to answer.
>
>
> Am 19.06.23 um 22:31 schrieb Martin Roth:
> > Duplicated code between mainboards isn't a big issue in my opinion.
> > It allows the boards to be customized without worrying about other
> > companies' mainboards. We've tried to make mainboards as small as we
> > can, and we can keep refactoring things out where it makes sense.
>
> Are you talking in general, or about these specific SPR boards? As stated
> by others, a lot of code was copy-pasted.
>
> > If some common code fits under the SoC, that's great, and I'm all for
> > moving it there, but let's not force the burden of that refactoring
> > work onto inexperienced mainboard maintainers. Doing so just
> > encourages vendors to keep their mainboards in private repositories -
> > the opposite of what we should be working for. Even if this means
> > that it doesn't get refactored and gets a bit out-of-date, I find
> > that preferable to making contributors (more) frustrated about
> > getting their boards accepted.
>
> I haven’t seen any frustration. Do you know more? I agree, nobody should
> be frustrated. The question has to be answered though, who is going to
> do that general maintenance work – I think Kyösti raised a similar
> question a few weeks back. I welcome all new contributors, but it has to
> be clear, that coreboot – especially for commercial offerings – needs
> and expects some kind of maintenance commitment. I am pretty sure, the
> vendors are going to understand the benefits. It just has to be
> communicated transparently.
>
> > CRBs are (as the name says) reference boards and it should be
> > absolutely fine to duplicate their code when another mainboard vendor
> > uses the CRB as their base - that's what the CRB is for - as an
> > example. Forcing the board to be under the intel vendor directory
> > tells me that intel is responsible for the board, when that isn't the
> > case.
>
> Isn’t `MAINTAINERS` the source for the responsibilities?
>
> > In my opinion, Mainboards should be free to customize anything and
> > everything in their directories without having to worry about what
> > other mainboards are affected. I think for the most part, variants
> > should be reserved for duplicates that are owned/maintained by the
> > same vendor. Whitelabel vendors like Clevo can be an exception to
> > this, but the chip vendors' CRBs should not be forced as a baseboard
> > for some other company's design.
> Well, that is the thing to discuss. What differences are there going to
> be and can variants be extended or a new model be found to make it
> easier to maintain the hopefully vast number of new server boards?
>
> Maybe Angel’s suggestion to submit the boards and then refactor is good.
> The other way would be to try variants first, and if that does not work,
> split them out. I’d welcome, that there is some commitment for further
> work on that though, and I try to avoid that the AGESA experience
> repeats, where coreboot didn’t want to frustrate AMD and trusted AMD’s
> promises, but, due to whatever reasons, it only resulted in frustration
> in the project itself.
>
>
> Kind regards,
>
> Paul
> 

[coreboot] Re: proposal for cross compilation: add Rust support

2023-04-07 Thread ron minnich
I agree with you on both points.

On Fri, Apr 7, 2023 at 9:50 AM Maximilian Brune <
maximilian.br...@9elements.com> wrote:

> I am not against it, but I would like to avoid making it a hard
> dependency, since then I would need to install it in order to build
> coreboot.
> Furthermore if the coreboot toolchain installs cargo, then I would like it
> to install rust (and all the cargo packages) locally in the coreboot
> repository instead of in the HOME directory of the user. I don't want the
> coreboot build system to touch anything outside of the coreboot repository.
>
> Am Sa., 1. Apr. 2023 um 18:07 Uhr schrieb ron minnich  >:
>
>> https://review.coreboot.org/c/coreboot/+/74124
>>
>> First step, which I almost certainly did incorrectly, is to add Rust
>> toolchain support to the Makefile.
>>
>> Next, adding a rust payload was suggested. I'll look into that, probably
>> rewriting linuxcheck in rust (since I'm the only user anyway :-)
>>
>>


[coreboot] proposal for cross compilation: add Rust support

2023-04-01 Thread ron minnich
https://review.coreboot.org/c/coreboot/+/74124

First step, which I almost certainly did incorrectly, is to add Rust
toolchain support to the Makefile.

Next, adding a rust payload was suggested. I'll look into that, probably
rewriting linuxcheck in rust (since I'm the only user anyway :-)


[coreboot] Re: Toolchain build on Ubuntu 22.04 fails (at GCC 8.3.0)?

2023-01-09 Thread ron minnich
Thanks Paul, I had not realized I'd not yet installed uuid-dev on the new
machine.

That said, I have seen, more recently, some libraries that don't come with
a .a any more. But that test for libuuid was clearly broken, it seems.

On Mon, Jan 9, 2023 at 1:23 AM Paul Menzel  wrote:

> Dear Ron,
>
>
> Am 08.01.23 um 20:16 schrieb ron minnich:
> > But in some cases static libs are no longer provided at all. Would be
> nice
> > to know if that's the case for libuuid.
>
> The static library is part of `uuid-dev` [1]:
>
>  /usr/include/uuid/uuid.h
>  /usr/lib/x86_64-linux-gnu/libuuid.a
>  /usr/lib/x86_64-linux-gnu/libuuid.so
>  /usr/lib/x86_64-linux-gnu/pkgconfig/uuid.pc
>  […]
>
>
> Kind regards,
>
> Paul
>
>
> [1]: https://packages.ubuntu.com/kinetic/amd64/uuid-dev/filelist
>


[coreboot] Re: Toolchain build on Ubuntu 22.04 fails (at GCC 8.3.0)?

2023-01-08 Thread ron minnich
What does it do to see if the library is present? strace may help.

On Sun, Jan 8, 2023, 12:48 PM Rafael Send 
wrote:

> Hi,
> The weird part is that if I build from the root directory, it DOES say
> uuid-dev is found yet the build still fails. Might that be for the
> aforementioned reason* " Many tests will indicate that a library is
> present if they find the .so. But the .so is useless if you need static
> linking."* ?
>
> I've attached logs of that, in case the solution is different for any
> reason.
>
> My goal is to build the older fork I mentioned, not the current master. I
> just tried that again; the Tianocore setup / selection has changed, and that
> DOES work (it's now called edk2 instead of Tianocore, right?).
>
> Thanks,
> Rafael
>
> Am So., 8. Jan. 2023 um 11:31 Uhr schrieb Martin Roth <
> gauml...@tutanota.com>:
>
>> If it would help, we could supply a VM with each release that had the
>> coreboot toolchain pre-built.  It's also possible to use the released
>> docker images to rebuild coreboot - those already contain the toolchain.
>>
>> The advantages to both of these is that you'd be building in the
>> environment that was originally used at the time the code was developed -
>> that works around a significant number of problems that you could be
>> running int.
>> If you're interested in either of those options, let me know and I'll
>> supply images and instructions to get you up and running.
>>
>> Martin
>>
>> Jan 8, 2023, 12:16 by rminn...@gmail.com:
>>
>> > But in some cases static libs are no longer provided at all. Would be
>> nice to know if that's the case for libuuid.
>> >
>> > On Sun, Jan 8, 2023, 9:24 AM Nico Huber <> nic...@gmx.de> > wrote:
>> >
>> >> On 08.01.23 17:42, ron minnich wrote:
>> >>  > For reasons I still don't understand, the various linux distros no
>> longer
>> >>  > ship .a as part of the library package.
>> >>
>> >>  They ship them separately. On Ubuntu, usually a -dev package. I even
>> >>  recall -devel-static packages (-devel was headers only and such)
>> >>  ~20 years back.
>> >>
>> >>  The reason to ship them separately is simple: not everybody has the
>> >>  space/bandwidth to spare. These habits are decades old. Some newer
>> >>  distros moved to ship everything in one package, though.
>> >>
>> >>  One odd thing about libuuid: The Ubuntu package is called uuid-dev
>> >>  (not libuuid-dev). That's something I don't understand ;)
>> >>
>> >>  Nico
>> >>
>>
>>


[coreboot] Re: Toolchain build on Ubuntu 22.04 fails (at GCC 8.3.0)?

2023-01-08 Thread ron minnich
But in some cases static libs are no longer provided at all. Would be nice
to know if that's the case for libuuid.

On Sun, Jan 8, 2023, 9:24 AM Nico Huber  wrote:

> On 08.01.23 17:42, ron minnich wrote:
> > For reasons I still don't understand, the various linux distros no longer
> > ship .a as part of the library package.
>
> They ship them separately. On Ubuntu, usually a -dev package. I even
> recall -devel-static packages (-devel was headers only and such)
> ~20 years back.
>
> The reason to ship them separately is simple: not everybody has the
> space/bandwidth to spare. These habits are decades old. Some newer
> distros moved to ship everything in one package, though.
>
> One odd thing about libuuid: The Ubuntu package is called uuid-dev
> (not libuuid-dev). That's something I don't understand ;)
>
> Nico
>


[coreboot] Re: Toolchain build on Ubuntu 22.04 fails (at GCC 8.3.0)?

2023-01-08 Thread ron minnich
If i look on ubuntu 18  I see this: /usr/lib/x86_64-linux-gnu/libuuid.a

If I look on ubuntu 22, it's no longer there.

To build firmware with this sort of library, you need the .a (which is for
static linking). Somewhat ironic, for UEFI, since the entire DXE model is
built on the DLL model, that they can't handle a uuid .so, but so it goes
:-)

For reasons I still don't understand, the various Linux distros no longer
ship .a as part of the library package.

Many tests will indicate that a library is present if they find the .so.
But the .so is useless if you need static linking. In parts of the
discussion you posted, it looks like people are finding the .so and not
realizing they need the .a?

Hence, the policy of distros like ubuntu, to remove .a files that support
static linking, is a dagger aimed at the heart of firmware projects.

The weirdest tithing: linux guarantees syscall ABI compatibility that goes
back, at this point, the better part of TWO DECADES. So why do these
distros remove support for static linking? I never have trouble with kernel
ABI, it's the .so ABI that tends to break. IOW, statically linked binaries
are more portable than ones that use .so!
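To make the failure mode concrete, here is a small sketch (paths are illustrative; a scratch directory stands in for /usr/lib) of why a check that only looks for the .so will happily report "uuid found" on a system where static linking is impossible:

```shell
# Scratch dir standing in for /usr/lib on a distro that dropped the .a.
libdir=$(mktemp -d)
touch "$libdir/libuuid.so.1"        # the runtime shared object is shipped

# Naive check, roughly what many configure-style tests effectively do:
ls "$libdir"/libuuid.so* >/dev/null 2>&1 && echo "uuid: found"

# The check that actually matters for a statically linked firmware tool:
if [ -f "$libdir/libuuid.a" ]; then
    echo "libuuid.a: present, static link possible"
else
    echo "libuuid.a: MISSING, 'gcc -static ... -luuid' will fail"
fi
```

The first check passes and the second fails, which is exactly the "library is present but the build breaks" situation described above.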

Anyway, I think that's your problem. Ubuntu is breaking firmware now, with
a poorly thought-out change to what is shipped in a package. What about
other distros? We might want to recommend people stop trying to build with
ubuntu?

Sorry.

On Sat, Jan 7, 2023 at 10:57 PM Rafael Send 
wrote:

> Actually, that's not quite right. When I "make" from the root directory,
> it appears not to fully build Tianocore and throws an error saying
> "UEFIPAYLOAD.fd - no such file or directory". If I enter
> payloads/external/tianocore directory and attempt to "make" there as I've
> done before (I think), I receive the previously mentioned error.
>
> Apologies for the inaccuracy.
> R
>
> Am Sa., 7. Jan. 2023 um 22:52 Uhr schrieb Rafael Send <
> flyingfishfin...@gmail.com>:
>
>> Hi,
>> Thanks for the suggestion. It built in master, then I copied it to the
>> xgcc folder from the fork I'm using. Close, but no dice just yet. I'm now
>> running into a different problem, both on master and on my fork:
>> Tianocore says it lacks uuid-dev (libuuid-devel). I'm on Ubuntu, and
>> uuid-dev is already installed. What am I missing? It sounds similar to this
>> here:
>> https://www.reddit.com/r/coreboot/comments/kigu2z/uuiddev_libuuiddev_tianocore/
>> but that doesn't appear to have a resolution that I can see...
>>
>> Cheers,
>> Rafael
>>
>> Am Sa., 7. Jan. 2023 um 15:15 Uhr schrieb Nico Huber :
>>
>>> On 07.01.23 23:53, Rafael Send wrote:
>>> > If I wanted to try your first suggestion, does the toolchain build
>>> > transcend directories? In order to not mix up  trees & versions so far
>>> I
>>> > have just checked out the latest master and the version in question
>>> into
>>> > two totally separate coreboot folders.
>>>
>>> Not automatically, but you can point to the
>>>   .../util/crossgcc/xgcc/bin
>>> directory in $XGCCPATH or add it to your $PATH.
>>>
>>> You can also pull out the whole xgcc/ dir, so it doesn't live in a
>>> specific tree checked out.
>>>
>>> Nico
>>>
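Concretely, pulling the toolchain out of a tree and reusing it might look like this (directory names are illustrative; the trailing slash on XGCCPATH is customary because the build system prepends it to tool names, so double-check against the top-level Makefile):

```shell
# Toolchain moved out of any checkout, e.g. to $HOME/xgcc (illustrative).
# XGCCPATH names the bin/ directory holding the cross tools:
export XGCCPATH="$HOME/xgcc/bin/"

# Alternatively (or additionally), make the cross tools visible via PATH:
export PATH="$HOME/xgcc/bin:$PATH"
```

With that in the environment, any coreboot tree you check out can build against the same toolchain without rebuilding it.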
>> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: FSP 2.4: runtime blobs!

2022-09-30 Thread ron minnich
Laurie, thanks for the invite. If ever a single statement crystallized the
problem, this is it:
"Intel® FSP is an ingredient to a full system firmware solution and
incorporates multiple bootloaders including coreboot."

That's the problem: " incorporates multiple bootloaders including
coreboot". With all due respect, this statement displays a lack of
understanding of coreboot. coreboot is not a bootloader. Intel keeps
wanting to box it in to that role, but that's not what coreboot is or ever
was.

coreboot incorporates FSP. FSP provides a support library to coreboot. On
Intel, that is. The FSP model is not a desirable solution on other open
platforms, particularly ARM, and hence is a point solution for Intel only.

What you are seeing in this discussion is our disappointment in Intel's
continuing push to maximize what FSP does, and to minimize what open source
code such as coreboot does -- and, further, to Intel's unwillingness to
listen.

Even on AMD, the trend (see: 0xide) is to have a full open source stack on
the AP. FSP as a model is better than UEFI, but not anywhere near what many
Intel customers would like to see. Hence the open letter; hence our
disappointment that it seems to have had zero impact on Intel's behavior;
hence many companies' continuing search for alternatives to Intel products.

I had hopes, with the new CEO in charge, and the statement that Intel was
going to change, that Intel would in fact change; now I have only
disappointments. At the same time, I'm not at all surprised that nothing
has changed. I've been down this road with Intel many times in the last 23
years.

Also, note: from 1999-2005, we had 100% GPL firmware for Intel CPUs, in
coreboot (back then, called LinuxBIOS). Why is that not the case today?
Because Intel would not allow it.

Thanks

ron

On Fri, Sep 30, 2022 at 10:01 AM Arthur Heymans  wrote:

> Hi Laurie
>
> I created a github issue about this
> https://github.com/UniversalScalableFirmware/documentation/issues/37
> almost 11 months ago with no reply.
> I also don't see any discussions about FSP-I in particular in that
> community but I posted my github issue again on that mailing list.
>
> Arthur
>
> On Fri, Sep 30, 2022 at 6:45 PM Jarlstrom, Laurie <
> laurie.jarlst...@intel.com> wrote:
>
>> Hi,
>>
>> I would like to invite you to the Universal Scalable Firmware (USF)
>> Community where Intel® FSP is an ingredient to a full system firmware
>> solution and incorporates multiple bootloaders including coreboot.
>>
>> There have been discussions on the updates to Intel® FSP within this
>> community
>>
>> https://universalscalablefirmware.groups.io/g/discussion
>>
>>
>>
>> What is USF?  Links to  USF training videos:
>> https://www.youtube.com/playlist?list=PLehYIRQs6PR5N73cbW8CPvU_stDTAG_j5
>>
>>
>>
>> Thanks,
>>
>> *Laurie*
>>
>>
>>
>> laurie.jarlst...@intel.com,
>>
>> System Firmware Products
>>
>> Firmware Ecosystem & Business Dev.
>>
>> (503) 880 5648 Mobile
>>
>>
>>
>> *From:* ron minnich 
>> *Sent:* Friday, September 30, 2022 9:00 AM
>> *To:* Nico Huber 
>> *Cc:* Arthur Heymans ; coreboot <
>> coreboot@coreboot.org>
>> *Subject:* [coreboot] Re: FSP 2.4: runtime blobs!
>>
>>
>>
>>  note that I am having this exact same problem in the RISC-V community:
>> https://github.com/riscv-non-isa/riscv-sbi-doc/issues/102
>>
>>
>>
>> People just like their SMM. It's hard to kill.
>>
>>
>>
>> I fear that you're not going to get much luck with Intel, which is why I
>> try to work with non-Intel CPUs as much as I can nowadays.
>>
>>
>>
>> On Fri, Sep 30, 2022 at 5:58 AM Nico Huber  wrote:
>>
>> Hi Arthur, coreboot fellows,
>>
>> On 30.09.22 13:53, Arthur Heymans wrote:
>> > What are your thoughts?
>>
>> printing, bonfire...
>>
>> > Do we take a stance against FSP-I integration in coreboot?
>>
>> I think we already do. From coreboot.org:
>>
>>   "coreboot is an extended firmware platform that delivers a lightning
>>fast and secure boot experience on modern computers and embedded
>>systems. As an Open Source project it provides auditability and
>>maximum control over technology."
>>
>> FSP-I means exactly the opposite of most of the above points. It's
>> inherently incompatible.
>>
>> IMO, unless we discuss if we want to change how we define coreboot
>> first, there can't be a discussion about integrating FSP-I nor any
>> action in that direction.
>>
>> > Are there precedent

[coreboot] Re: FSP 2.4: runtime blobs!

2022-09-30 Thread ron minnich
 note that I am having this exact same problem in the RISC-V community:
https://github.com/riscv-non-isa/riscv-sbi-doc/issues/102

People just like their SMM. It's hard to kill.

I fear that you're not going to get much luck with Intel, which is why I
try to work with non-Intel CPUs as much as I can nowadays.

On Fri, Sep 30, 2022 at 5:58 AM Nico Huber  wrote:

> Hi Arthur, coreboot fellows,
>
> On 30.09.22 13:53, Arthur Heymans wrote:
> > What are your thoughts?
>
> printing, bonfire...
>
> > Do we take a stance against FSP-I integration in coreboot?
>
> I think we already do. From coreboot.org:
>
>   "coreboot is an extended firmware platform that delivers a lightning
>fast and secure boot experience on modern computers and embedded
>systems. As an Open Source project it provides auditability and
>maximum control over technology."
>
> FSP-I means exactly the opposite of most of the above points. It's
> inherently incompatible.
>
> IMO, unless we discuss if we want to change how we define coreboot
> first, there can't be a discussion about integrating FSP-I nor any
> action in that direction.
>
> > Are there precedents where blob runtimes are installed on the main CPU,
> > that I don't know of which could justify FSP-I?
>
> There's something for the main CPU but definitely not the same: I was
> told AMD's binary pi can provide runtime ACPI code. But running ACPI is
> an opt-in for the OS, whilst FSP-I wouldn't even allow an opt-out, I
> guess.
>
> >
> > P.S. It's quite sad to see this happen after an open letter 361 people
> > signed for a more open FSP.
> >
> https://openletter.earth/adopting-open-source-firmware-approach-for-intel-fsp-59d7a0c6
>
> Sad, but not unexpected. I believe this is part of a more than a
> decade old strategy. It seems to me Intel never really supported
> open-source OS drivers for their server platforms. They just hid
> everything in SMM with a nice open-source facade for Linux. We
> turned a blind eye to that. Now it seems that the ecosystem around
> Intel servers is rather unprepared for open source. Even if they'd
> open up their SMM code, it would just be wrong to keep the code in
> SMM, IMO. Proper OS drivers should be written instead.
>
> Nico
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Verifiable UEFI OS

2022-09-05 Thread ron minnich
You just confirmed my guess that I was going to learn something I didn't
know 

On Mon, Sep 5, 2022, 3:04 PM Matt DeVillier 
wrote:

> Ron,
> if I had to hazard a guess, for most users with EOL ChromeOS hardware,
> it's simply several orders of magnitude easier to flash my upstream
> coreboot + edk2 firmware and install ChromeOS Flex, than to build
> their own ChromiumOS (vs ChromeOS, since the private overlays are not
> available) and manage updates
>
> On Mon, Sep 5, 2022 at 4:21 PM ron minnich  wrote:
> >
> > I'm completely lost. Why would you update a chromebook to chromeos flex
> when you can build chromeos and install that?
> >
> > Did you know that you can build chromeos from source, rekey the
> chromebook, and then it will boot in normal mode with your build? You can
> even run the chromeos OTA service from a machine you own, which is kinda
> fun. This is a talk I gave at ELC in 2014 (15? I forget) about doing just
> that:
> https://docs.google.com/presentation/d/1jSUJteAjEgHCFyx6VsqhWmNGTKipvTmAdsoW0gme7qA/edit?usp=sharing
> >
> > This is also the year the vendor LF hired to record the talks lost all
> the videos of the talks, so the slides are all I have.
> >
> > But to the original question: chromeos flex IIUC is set up to boot on
> non-chromebook environments like UEFI, so I'm a bit lost on why you'd want
> it on a real chromebook.
> >
> > This probably means I'm about to learn something I did not know.
> >
> >
> >
> > On Mon, Sep 5, 2022 at 9:37 AM Matt DeVillier 
> wrote:
> >>
> >> This has nothing to do with coreboot, the message is from the UEFI
> >> payload (Tianocore/edk2). It's telling you that whatever boot device
> >> it is trying to boot (and it tells you in the error msg) does not
> >> contain a UEFI-bootable 64-bit OS. If you didn't install ChromeOS Flex
> >> to your internal storage, then that is why (since ChromeOS proper is
> >> not UEFI-bootable).
> >>
> >> On Mon, Sep 5, 2022 at 9:10 AM CJ 
> wrote:
> >> >
> >> > First, I just want to thank all of you for what you do.  Using
> Coreboot is delightful.  I had a question about how the program verifies
> UEFI OS's.  I'm using it to update a chromebook to ChromeOS flex and
> receiving the error: doesn't contain a verifiable 64-bit UEFI OS. From the
> documentation, ChromeOS Flex is supposed to be a 64-bit UEFI OS, so I'm
> wondering if it could be something I did while creating the boot media, a
> problem with updating chromebook, or how coreboot is verifying the OS?
> After entering the menu I can boot to the USB and install just fine so no
> big deal just curious why it gave that error.  Thanks again for putting in
> the work!  We all appreciate it.
> >> > ___
> >> > coreboot mailing list -- coreboot@coreboot.org
> >> > To unsubscribe send an email to coreboot-le...@coreboot.org
> >> ___
> >> coreboot mailing list -- coreboot@coreboot.org
> >> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Verifiable UEFI OS

2022-09-05 Thread ron minnich
I'm completely lost. Why would you update a chromebook to chromeos flex
when you can build chromeos and install that?

Did you know that you can build chromeos from source, rekey the chromebook,
and then it will boot in normal mode with your build? You can even run the
chromeos OTA service from a machine you own, which is kinda fun. This is a
talk I gave at ELC in 2014 (15? I forget) about doing just that:
https://docs.google.com/presentation/d/1jSUJteAjEgHCFyx6VsqhWmNGTKipvTmAdsoW0gme7qA/edit?usp=sharing

This is also the year the vendor LF hired to record the talks lost all the
videos of the talks, so the slides are all I have.

But to the original question: chromeos flex IIUC is set up to boot on
non-chromebook environments like UEFI, so I'm a bit lost on why you'd want
it on a real chromebook.

This probably means I'm about to learn something I did not know.



On Mon, Sep 5, 2022 at 9:37 AM Matt DeVillier 
wrote:

> This has nothing to do with coreboot, the message is from the UEFI
> payload (Tianocore/edk2). It's telling you that whatever boot device
> it is trying to boot (and it tells you in the error msg) does not
> contain a UEFI-bootable 64-bit OS. If you didn't install ChromeOS Flex
> to your internal storage, then that is why (since ChromeOS proper is
> not UEFI-bootable).
>
> On Mon, Sep 5, 2022 at 9:10 AM CJ  wrote:
> >
> > First, I just want to thank all of you for what you do.  Using Coreboot
> is delightful.  I had a question about how the program verifies UEFI OS's.
> I'm using it to update a chromebook to ChromeOS flex and receiving the
> error: doesn't contain a verifiable 64-bit UEFI OS. From the documentation,
> ChromeOS Flex is supposed to be a 64-bit UEFI OS, so I'm wondering if it could
> be something I did while creating the boot media, a problem with updating
> chromebook, or how coreboot is verifying the OS?  After entering the menu I
> can boot to the USB and install just fine so no big deal just curious why
> it gave that error.  Thanks again for putting in the work!  We all
> appreciate it.
> > ___
> > coreboot mailing list -- coreboot@coreboot.org
> > To unsubscribe send an email to coreboot-le...@coreboot.org
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Intel Quark - a quick update

2022-07-19 Thread ron minnich
Do we have criteria on which to decide if quark is worth keeping? Is
there a deadline for the work? At some point, you're going to find
code changing out from under you; are you committing to be the
maintainer?

On Tue, Jul 19, 2022 at 7:45 AM Angel Pons  wrote:
>
> Hi Andy,
>
> On Tue, Jul 19, 2022 at 2:19 PM Andy Pont  wrote:
> >
> > Ron wrote…
> >
> > >so how's it going?
> > Slowly! The day job has got in the way a bit but I have been struggling
> > to build the FSP binaries based on the instructions at [1].  I’m not
> > sure whether that is down to me not fully understanding the instructions
> > (always possible) or whether there is work-in-progress that needs to be
> > completed.
> >
> > I managed to build the FSP 1.1 binary using the six year old version of
> > edk2 that Lee also has on his GitHub using an Ubuntu 16.04 development
> > machine.  I haven’t yet managed to find a way to successful build the
> > FSP 2.0 binary.  Trying to build EDK2 BaseTools throws a pile of Python
> > syntax errors which may or may not be critical.  I’ve assumed they
> > aren’t for now and am working on getting the binary to build.
>
> Instructions on how to build QuarkFsp were added here:
> https://review.coreboot.org/29029
>
> > -Andy.
> >
> > 1 - https://github.com/LeeLeahy/quarkfsp
>
> Best regards,
> Angel
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Reconsidering the Gerrit submission strategy

2022-07-14 Thread ron minnich
I like it. We do this on most of my other projects.

On Thu, Jul 14, 2022, 6:12 AM Peter Stuge  wrote:

> Patrick Georgi via coreboot wrote:
> > I was recently made aware that Gerrit now supports adding metadata to
> > commit messages in the "rebase" strategy.
>
> That's cool!
>
>
> > It's a matter of trade-offs, but given that "rebase always" can now add
> the
> > metadata that was the deal breaker for anything but cherry-pick back in
> the
> > day, I wanted to know how y'all feel about changing or keeping the
> > submission strategy.
>
> I find the benefits quite desirable.
>
>
> > David proposed that we could try out "rebase always" for a while (maybe a
> > month) to see how it feels.
>
> Good idea!
>
>
> Thanks
>
> //Peter
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Intel Quark - a quick update

2022-07-12 Thread ron minnich
so how's it going?

On Tue, Apr 26, 2022 at 8:41 AM Martin Roth via coreboot
 wrote:
>
> Thanks Andy, I think that's totally reasonable.
> Martin
>
> Apr 26, 2022, 06:56 by andy.p...@sdcsystems.com:
>
> > Felix wrote...
> >
> > >So, will you also step up as a maintainer for it?
> > I’m going to reserve judgement on that until I see how things go with
> > trying to get the existing coreboot code running on the boards.  The Gen
> > 1 should be here tomorrow (I think).
> >
> > -Andy.
> > ___
> > coreboot mailing list -- coreboot@coreboot.org
> > To unsubscribe send an email to coreboot-le...@coreboot.org
> >
>
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: POST codes and PCI Post card

2022-07-11 Thread ron minnich
They are incredibly useful, which is why they are still there. That
first post-bist code has been there since the first code in 1999.

If you have jtag, it still helps. But many BMC also have ways to see
port 80 writes. You can see it before PCI is up.

On Mon, Jul 11, 2022 at 8:00 AM Pedro Erencia  wrote:
>
> I've noticed that POST codes are sent very early. In x86 the first code is 
> sent just after storing the bist.
> Are codes sent so early useful? They are sent ages before any PCI 
> configuration. Is there any guarantee of them reaching a PCI POST card?
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Cisco Meraki use coreboot in some MX products and will not provide the source code

2022-06-29 Thread ron minnich
I've asked the software freedom conservancy to take a look.

On Wed, Jun 29, 2022, 2:48 PM Hal Martin  wrote:

> Hello,
>
> Several Cisco Meraki products (MX84, MX250) are using the coreboot
> bootloader. Meraki are also distributing coreboot builds for these products
> via their update mechanism.
>
> In October 2021, I requested the corresponding coreboot source code for
> the MX84 from open-sou...@meraki.com. Another individual requested the
> coreboot source code for the MX250 around the same time. We own the devices
> in question.
>
> To date, Meraki have not provided the source code or provided an
> explanation as to the delay in providing the source code. The last reply I
> received was in January, and they have not replied to any of my follow up
> requests.
>
> As coreboot is GPL licensed software, I wanted to inform the coreboot
> community that I believe Cisco Meraki are not acting in good faith and are,
> in my opinion, violating the GPL by not providing the coreboot source code
> upon request.
>
> Kind regards,
> Hal Martin
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: [RFC] #pragma once

2022-05-17 Thread ron minnich
David, apropos your comment that we can assume clang will catch any
real problems; your comment reminds me of a lesson we learned on
Harvey-OS.

What we learned on Harvey-OS, when we were compiling a working kernel
across 3 versions of gcc, and 3 versions of clang, was that at one
time or another, one of these 6 compilers gave us a warning on some
critical code issue in Harvey-0S, that the other 5 did not. Older
compilers warned of something a newer one did not; gcc and clang each
had warnings the other missed.

This was surprising and a useful lesson to us. It's also why I'll
never be able to accept the idea that coreboot should be written in a
gcc-specific dialect. But that's another argument for another day.

On Mon, May 16, 2022 at 9:53 PM David Hendricks
 wrote:
>
> Thanks for digging up all that useful information, Martin! So `#pragma once` 
> is not the clear winner after all. Too bad since we could eliminate some 
> boilerplate code with that approach.
>
> Is there another way to solve the problem Arthur raised in this thread? The 
> LMKL thread had a python script and people were also talking about adding 
> something to checkpatch.pl. Maybe these days it's sufficient to assume clang 
> will catch any real problems (CB:62173).
>
>
>
> On Mon, May 16, 2022 at 1:03 PM ron minnich  wrote:
>>
>> we have, in the past, used Linux kernel style as our guideline on
>> coreboot style.
>>
>> I'd say, based on Martin's note, that #pragma once is not such a good
>> idea after all. If we decide NOT to use it, however, let's put a note
>> about it in our style guide?
>>
>> This is not the first time this question has come up.
>>
>>
>> On Mon, May 16, 2022 at 12:34 PM Martin Roth  wrote:
>> >
>> > After reading what I've included below, I'm going to have to vote to keep 
>> > the guards.
>> > Martin
>> >
>> > May 16, 2022, 10:59 by david.hendri...@gmail.com:
>> >
>> > > On Mon, May 16, 2022 at 8:59 AM ron minnich  wrote:
>> > >
>> > >>
>> > >> btw, sometimes this has gone the other direction ..
>> > >> https://github.com/lowRISC/opentitan/pull/5693
>> > >>
>> > >
>> > > It looks like they did that solely to conform to Google's style guide
>> > > which, dogmatic as it may appear, makes sense since OpenTitan is a
> >> > > Google-led project.
>> > >
>> > The question then is 'why does Google require the use of guards?'.  
>> > Whatever you think of google, they're not going to mandate something like 
>> > this without a good reason.
>> >
>> > I went searching for where this rule came from, and found this:
>> >
>> > ```
>> > If you trust our in-house C++ compiler gurus, here's the most salient part 
> >> > of the whole thread linked above. Matt Austern (4/11/2013): If you talk to 
>> > the authors of [most C++] compilers, I think you'll find that most of them 
>> > consider "#pragma once" (or the equivalent #import) to be a flaky feature 
>> > -- something that works almost all of the time and that can cause 
>> > seriously annoying bugs when it doesn't work.
>> >
>> > Chandler Carruth (4/12/2013): As one of the authors of one of those 
>> > compilers, I couldn't agree more.
>> > ```
>> > any interested Googlers can find this here:
>> >https://yaqs.corp.google.com/eng/q/5768278562045952
>> >
>> >
>> > Further digging:
>> > ```
>> > To support #pragma once, the compiler tries to identify duplicate 
>> > encounters with the same file, but the check gcc actually performs to 
>> > establish the identity of the file is weak. Here's someone who made two 
>> > copies of the same header with different names, each with a #pragma once, 
>> > and it screwed up his build.
>> >
>> >  http://gcc.gnu.org/bugzilla/show_bug.cgi?id=52566
>> >
>> > The two headers had the same size, same content, and same timestamps 
>> > because he had run "touch *" on them, but they were intended to both be 
>> > included. Only one was included, and the other was falsely identified as a 
> >> > re-inclusion and ignored. One might say he was asking for trouble by 
>> > running "touch *", but timestamp collisions are EASY to come by. First of 
>> > all, they're only 1sec resolution. You might patch all the relevant files 
>> > and they'd have matching timestamps. You might be using a network 
>> > filesystem that just doesn't bother with timestamps (common).

[coreboot] Re: [RFC] #pragma once

2022-05-16 Thread ron minnich
we have, in the past, used Linux kernel style as our guideline on
coreboot style.

I'd say, based on Martin's note, that #pragma once is not such a good
idea after all. If we decide NOT to use it, however, let's put a note
about it in our style guide?

This is not the first time this question has come up.


On Mon, May 16, 2022 at 12:34 PM Martin Roth  wrote:
>
> After reading what I've included below, I'm going to have to vote to keep the 
> guards.
> Martin
>
> May 16, 2022, 10:59 by david.hendri...@gmail.com:
>
> > On Mon, May 16, 2022 at 8:59 AM ron minnich  wrote:
> >
> >>
> >> btw, sometimes this has gone the other direction ..
> >> https://github.com/lowRISC/opentitan/pull/5693
> >>
> >
> > It looks like they did that solely to conform to Google's style guide
> > which, dogmatic as it may appear, makes sense since OpenTitan is a
> > Google-led project.
> >
> The question then is 'why does Google require the use of guards?'.  Whatever 
> you think of google, they're not going to mandate something like this without 
> a good reason.
>
> I went searching for where this rule came from, and found this:
>
> ```
> If you trust our in-house C++ compiler gurus, here's the most salient part of 
> the whole thread linked above. Matt Austern (4/11/2013): If you talk to the 
> authors of [most C++] compilers, I think you'll find that most of them 
> consider "#pragma once" (or the equivalent #import) to be a flaky feature -- 
> something that works almost all of the time and that can cause seriously 
> annoying bugs when it doesn't work.
>
> Chandler Carruth (4/12/2013): As one of the authors of one of those 
> compilers, I couldn't agree more.
> ```
> any interested Googlers can find this here:
>https://yaqs.corp.google.com/eng/q/5768278562045952
>
>
> Further digging:
> ```
> To support #pragma once, the compiler tries to identify duplicate encounters 
> with the same file, but the check gcc actually performs to establish the 
> identity of the file is weak. Here's someone who made two copies of the same 
> header with different names, each with a #pragma once, and it screwed up his 
> build.
>
>  http://gcc.gnu.org/bugzilla/show_bug.cgi?id=52566
>
> The two headers had the same size, same content, and same timestamps because 
> he had run "touch *" on them, but they were intended to both be included. 
> Only one was included, and the other was falsely identified as a re-inclusion 
> and ignored. One might say he was asking for trouble by 
> timestamp collisions are EASY to come by. First of all, they're only 1sec 
> resolution. You might patch all the relevant files and they'd have matching 
> timestamps. You might be using a network filesystem that just doesn't bother 
> with timestamps (common).
> ```
>
> Now both of these are almost a decade old, so things might have changed quite 
> a bit since then.
>
>
> Linux kernel threads:
> https://lkml.iu.edu/hypermail/linux/kernel/1401.0/02048.html
> https://lore.kernel.org/lkml/CAHk-=wi54descexxpmro+q2nag_tup+y5ybhc_9_xglerfp...@mail.gmail.com/
>
> ```
> On Sun, Feb 28, 2021 at 11:34 AM Alexey Dobriyan  wrote:
> >
> > End result: #pragma is fundamentally less reliable than the
> > traditional #ifdef guard. The #ifdef guard works fine even if you
> > re-read the file for whatever reason, while #pragma relies on some
> > kind of magical behavior.
>
> You continue to not even answer this very fundamental question.
> "#pragma once" doesn't seem to have a _single_ actual real advantage.
> Everybody already does the optimization of not even opening - much
> less reading and re-parsing - headers that have the traditional
> #ifdef guard. And even if you _don't_ do that optimization, the
> #ifdef guard fundamentally semantically guarantees the right behavior.
>
> So the #ifdef guard is (a) standard (b) simple (c) reliable (d)
> traditional and you have yet to explain a _single_ advantage of
> "#pragma once". Why add this incredible churn that has no upside?
>
> So no. We're not using #pragma once unless you can come up with some
> very strong argument for it.
>
> And no, having to come up with a name for the #ifdef guard is not a
> strong argument. It's simply not that complicated.
>
>Linus
> ```
>
>
>
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: [RFC] #pragma once

2022-05-16 Thread ron minnich
I was sure I'd looked at  this at one point and found this from years ago ...

"The discussion evolved to a related question, around #pragma once. A
few years back, on the Akaros project (kernel written in C, FWIW), a
Linux kernel luminary convinced us to get rid of file guards and go to
#pragma once. I am not sure it was worth the trouble but we did it. It
*can* speed up compile time; cpp doesn't need to process a whole file
and then conclude it did not have to process it; it can realize it can
skip the open. A significant downside is that it's not in any standard
-- just all the compilers out there, it seems, save romcc.

I did a simple test: apply #pragma once to coreboot. A coreboot build
for watson opens 80K .h files today. #pragma once makes barely any
difference; this says we are doing a good job in how we use our .h
files."

Anyway, all this said, #pragma once seems a good idea.


On Mon, May 16, 2022 at 9:59 AM David Hendricks
 wrote:
>
> On Mon, May 16, 2022 at 8:59 AM ron minnich  wrote:
> >
> > btw, sometimes this has gone the other direction ..
> > https://github.com/lowRISC/opentitan/pull/5693
>
> It looks like they did that solely to conform to Google's style guide
> which, dogmatic as it may appear, makes sense since OpenTitan is a
> Google-led project.
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: [RFC] #pragma once

2022-05-16 Thread ron minnich
btw, sometimes this has gone the other direction ..
https://github.com/lowRISC/opentitan/pull/5693

On Mon, May 16, 2022 at 3:04 AM Angel Pons  wrote:
>
> Hi Arthur, list,
>
> On Sun, May 15, 2022 at 6:56 PM Arthur Heymans  wrote:
> >
> > Hi
> >
> > To make sure headers don't create conflicts, guards are added to all of 
> > them.
> > But the guard needs to be correct: e.g. 
> > https://review.coreboot.org/c/coreboot/+/64360/2
> > Most compilers implement '#pragma once ' as an alternative.
> > Should we use this instead across the tree, as it is less error prone and 
> > less code?
>
> Given that coreboot is built with a very specific toolchain, it seems
> very reasonable. The only thing that worries me are headers used to
> build stuff with the system toolchain, e.g. util/ and src/commonlib/
> headers. Still, it's highly unlikely that the system toolchain doesn't
> know about #pragma once provided that it is able to build crossgcc.
>
> > Sidenote: clang warns about wrong header guards.
> > https://review.coreboot.org/c/coreboot/+/62173/23 hooks up clang to our CI 
> > for some platforms ;-).
>
> And mismatched names in #ifndef and #define is not the only problem. I
> recently pondered about the scenario in which a compilation unit
> includes two different header files that use the same name in their
> guard. Using #pragma once would fundamentally eliminate both problems.
>
> > Kind regards
> > Arthur
>
> Best regards,
> Angel
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Automated Engineering Notes was Re: Deprecation of the Intel Quark SoC

2022-05-03 Thread ron minnich
yeah, I ran out of time for now.

On Mon, May 2, 2022 at 10:57 AM Karl Semich <0xl...@gmail.com> wrote:
>
> I'm not working on this but am still interested in the idea, in my hobby 
> capacity. The below emails were exchanged. I didn't yet receive further reply.
>
> -- Forwarded message -
> Date: Sun, Apr 24, 2022, 10:31 AM
> Subject: Re: [coreboot] Re: Deprecation of the Intel Quark SoC
> To: Karl Semich <0xl...@gmail.com>
>
>
> I'm very interested in your ideas.
>
> Yep, I'm interested, I've been trying to find someone interested in
> this for 5 years now
>
> -- Forwarded message -
> Date: Sun, Apr 24, 2022, 10:32 AM
> Subject: Re: [coreboot] Re: Deprecation of the Intel Quark SoC
> To: Karl Semich <0xl...@gmail.com>
>
>
> I think we should start with a binary blob for an open system like the
> allwinner d1. We have the blob, it's pretty simple code.
>
> -- Forwarded message -
> From: Karl Semich <0xl...@gmail.com>
> Date: Sun, Apr 24, 2022, 2:41 PM
> Subject: Re: [coreboot] Re: Deprecation of the Intel Quark SoC
>
>
> hi
>
> is this too many words below? are you still interested? how do you feel about 
> making the conversation more public?
>
> my biggest issue is neurological and psychological, it's hard for me to do 
> things, so the more people the better
>
> the transformer model approach works on data. they need a lot of examples of 
> something to begin copying well, thousands or millions or more. it seems 
> fewer and fewer examples are needed the more similar data a model has been 
> trained on already, down to finally needing none. ("few-shot", "zero-shot"). 
> so one could take a pretrained model and e.g. adjust it (via training) to 
> predict comments from uncommented code, or predict source from binaries of an 
> architecture where more binaries are available, or from a specific set of 
> compilers where many training binaries could be generated from 
> commented code. then by combining these separate models one might likely be 
> able to perform a task on which less data is available.
>
> for me at this point, finding or generating that data, and finding paths to 
> fit the model better to the target tasks, seems equally challenging compared 
> to just setting up a basic training framework to fine tune models. there are 
> a few different infrastructures, and sometimes people make their own if doing 
> a new kind of task.
>
> most people are using jupyter notebooks via various online portals. there are 
> also compute servers at places like vast.ai. but I am out of money and most 
> of my devices are very low end.
>
> I expect this to happen and am interested in helping it happen but I'll be 
> pretty slow and flaky on my own :)
>
> do you have any thoughts on generating or finding data similar to many 
> pairings of embedded binary blobs and license-free interface specifications? 
> is the allwinner d1 much data?
>
> most of the similar models I know of out there continue streams of human text 
> or source code, selecting up to 2048 words from a fixed vocabulary maybe 
> 10k to 100k words large. There are many known techniques to extend this to 
> more words but finding and implementing them is part of the effort. This also 
> informs how to pick or develop a training framework: something that makes it 
> easier to rearchitect the models will be needed to work with or generate data 
> larger than 2048 symbols long.
>
> the challenge of lengthening may also be postponeable if the meaningful 
> patterns of the data can be squeezed to fit within that size, as is done for 
> human text in the mainstream. the model may learn appropriate patterns and 
> then take only a little further adjustment to handle larger ones once 
> rearchitectured to do so.
>
>
> -- Forwarded message -
> From: Karl Semich <0xl...@gmail.com>
> Date: Sun, Apr 24, 2022, 2:58 PM
> Subject: Re: [coreboot] Re: Deprecation of the Intel Quark SoC
>
>
> Basically, there's a class of neural networks called transformer models that 
> have become mainstream for successfully solving problems across domains.
>
> Common systems used to work with them that I'm aware of include pytorch, jax, 
> huggingface transformers, keras, and tensorflow: all python libraries. A 
> number of groups are offering hubs where people share pretrained models for 
> others to use. I've heard https://fast.ai/ has a good starting course, but it 
> didn't quite seem for me, I haven't taken it.
>
> I think most people are learning from example jupyter notebooks. I've mostly 
> learned by skimming the huggingface transformers source code and occasional 
> research papers.
>
> Each transformer model is roughly made of "layers" of "self attention" , 
> which is roughly just a class of a few operations performed on data 
> represented as matrices, and then passed to the next layer where the 
> operations are performed again. Each layer has many "weights" -- coefficients 
> -- which are 

[coreboot] Re: Deprecation of the Intel Quark SoC

2022-04-23 Thread ron minnich
On Fri, Apr 22, 2022 at 11:59 PM Karl Semich <0xl...@gmail.com> wrote:
>>
>> We are deprecating ALL boards on oreboot that need FSP, as we took the
>> decision a few weeks ago to drop boards
>> that require blobs on the main CPU (we're accepting PSP blobs for now)
>
>
> Just a quick note that our society basically has the technology to 
> automatically reimplement binary blobs as differently licensed source.

Absent seeing a demonstration, I'll leave this as "important if true." :-)
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Open letter to Intel regarding the PSE on Elkhart Lake

2022-04-22 Thread ron minnich
Nice job Werner, I'm completely shocked!

On Thu, Apr 21, 2022, 10:24 PM Zeh, Werner  wrote:

> Hi everybody.
>
> It has been a while now that we started the open letter to Intel regarding
> open-sourcing the PSE firmware.
> I am now happy to announce that all this effort was not worthless!
> Intel pushed the PSE firmware sources yesterday to github [1]!
>
> A big "Thank You!" to all the supporters of the initiative out there.
>
> Werner
>
> [1]: https://github.com/intel/pse-fw
>
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Deprecation of the Intel Quark SoC

2022-04-22 Thread ron minnich
oh oops the person doing that misunderstood me, we'll have to fix it

On Fri, Apr 22, 2022 at 5:41 PM Martin Roth  wrote:
>
> Hey Ron,
>   I think this is a good plan.  We can make a markdown file doing the same.  
> I'm not sure that coreboot wants to record where it's deleted, but instead 
> the branch where it would be maintained.
> This is the solution I was talking about in the coreboot leadership meeting:
> https://review.coreboot.org/c/coreboot/+/63754
>
> Take care.
> Martin
> Apr 22, 2022, 17:24 by rminn...@gmail.com:
>
> > The discussion here has been pretty helpful to my thinking. I think
> > the concerns people are raising are important.
> >
> > We are deprecating ALL boards on oreboot that need FSP, as we took the
> > decision a few weeks ago to drop boards
> > that require blobs on the main CPU (we're accepting PSP blobs for now)
> >
> > This leaves two x86 boards behind, not counting qemu.
> >
> > But, based on what has come up here, we decided we did not want to
> > leave all memory of old efforts behind.
> >
> > This is the result: https://github.com/oreboot/oreboot/pull/570/files
> >
> > Not sure if this would be desired for coreboot, but I am mentioning it
> > here for reference.
> >
> > Thanks
> >
> > ron
> >
> > On Mon, Apr 18, 2022 at 11:05 AM Sam Kuper  wrote:
> >
> >>
> >> (I'm not a Coreboot dev/maintainer, so apologies for commenting from the
> >> peanut gallery...)
> >>
> >>
> >> On Mon, Apr 18, 2022 at 04:32:36AM +0200, Martin Roth wrote:
> >> > [...]
> >> > 2) Decide on a set of criteria that we can use to evaluate whether or
> >> > not things should be removed from the master branch and maintained on
> >> > a release branch.
> >>
> >> Makes sense.
> >>
> >>
> >> > Currently, the only reason we have specified that a platform will be
> >> > moved to a release branch is if it's hindering progress.  Basically if
> >> > it's not using an API we want to mark as required, or using code that
> >> > is being deprecated.
> >> >
> >> > If hindering progress is the only reason that something should ever be
> >> > removed from the master branch, I'm fine with that.  Let's decide that
> >> > and document it so we don't have to have this discussion in the
> >> > future.
> >>
> >> Surely there will sometimes be platforms/chips that, for good reason,
> >> the community wants to keep in master - even if this would mean rewrites
> >> for those platforms/chips re the two matters you mentioned above:
> >>
> >> a.  not using APIs that future versions of coreboot will require;
> >>
> >> b.  using code that is being deprecated?
> >>
> >>
> >> Such "good reasons" could include that the platform/chip:
> >>
> >> 1.  is in widespread use with coreboot, even if long out of production
> >>  (e.g. certain server boards or Thinkpad models); or
> >>
> >> 2.  is targeted by related projects (e.g. Heads) that coreboot
> >>  developers would prefer to avoid unnecessarily inconveniencing; or
> >>
> >> 3.  has active coreboot testers/maintainers able to integrate relevant
> >>  updates, and is passing all significant CI tests.
> >>
> >>
> >> > Other options we could look at:
> >> >
> >> > - A platform has not been sold in X years.
> >> >
> >> > - A chip has not been produced for X years.
> >>
> >> I can see the appeal of these criteria: they are easy to define.
> >> However, they are probably not wise criteria, as they may conflict with
> >> one or more of the "good reasons" above, especially reason 1.
> >>
> >>
> >> > - A platform has not been tested in X years.
> >> >
> >> > - A platform hasn't had an active maintainer for X years.  (Maybe the
> >> >   maintainer should be asked to test a platform as well?)
> >>
> >> These seem much better criteria.
> >>
> >> To make them easier to apply, should coreboot comprehensively track, for
> >> platforms/chips (roughly as Debian does for packages):
> >>
> >> -   the current maintainer(s) for that platform/chip,
> >>
> >> -   the current tester(s) for that platform/chip,
> >>
> >> -   when that platform/chip was last tested, and
> >>
> >> -   what the test results were?
> >>
> >> I think coreboot already tracks some of this data via
> >> https://lava.9esec.io or https://qa.coreboot.org/ - I'm not certain.
> >>
> >> That being so, I propose some draft policy wording (please
> >> change/improve...):
> >>
> >>  "For any given platform or chip that has ever been targeted by the
> >>  coreboot project:
> >>
> >>  -   For each coreboot "version" (point release, master, or
> >>  hardware-specific branch):
> >>
> >>  -   if the platform/chip has been tested on that version, but
> >>  the test was unsuccessful, the platform/chip shall be
> >>  labelled *broken* re that version; else
> >>
> >>  -   if the test was successful, the platform/chip shall be
> >>  labelled *working* re that version; else
> >>
> >>  -   the platform/chip shall be labelled *untested* re that
> >>  version.
> >>
> >>  -   If the platform/chip has known security vulnerabilities on that
> >>  version, the 

[coreboot] Re: Deprecation of the Intel Quark SoC

2022-04-22 Thread ron minnich
we may be saying the same thing, but that commit in our file is the
"check this out  to get this board" ref.


On Fri, Apr 22, 2022 at 5:41 PM Martin Roth  wrote:
>
> Hey Ron,
>   I think this is a good plan.  We can make a markdown file doing the same.  
> I'm not sure that coreboot wants to record where it's deleted, but instead 
> the branch where it would be maintained.
> This is the solution I was talking about in the coreboot leadership meeting:
> https://review.coreboot.org/c/coreboot/+/63754
>
> Take care.
> Martin
> Apr 22, 2022, 17:24 by rminn...@gmail.com:
>
> > The discussion here has been pretty helpful to my thinking. I think
> > the concerns people are raising are important.
> >
> > We are deprecating ALL boards on oreboot that need FSP, as we took the
> > decision a few weeks ago to drop boards
> > that require blobs on the main CPU (we're accepting PSP blobs for now)
> >
> > This leaves two x86 boards behind, not counting qemu.
> >
> > But, based on what has come up here, we decided we did not want to
> > leave all memory of old efforts behind.
> >
> > This is the result: https://github.com/oreboot/oreboot/pull/570/files
> >
> > Not sure if this would be desired for coreboot, but I am mentioning it
> > here for reference.
> >
> > Thanks
> >
> > ron
> >
> > On Mon, Apr 18, 2022 at 11:05 AM Sam Kuper  wrote:
> >
> >>
> >> (I'm not a Coreboot dev/maintainer, so apologies for commenting from the
> >> peanut gallery...)
> >>
> >>
> >> On Mon, Apr 18, 2022 at 04:32:36AM +0200, Martin Roth wrote:
> >> > [...]
> >> > 2) Decide on a set of criteria that we can use to evaluate whether or
> >> > not things should be removed from the master branch and maintained on
> >> > a release branch.
> >>
> >> Makes sense.
> >>
> >>
> >> > Currently, the only reason we have specified that a platform will be
> >> > moved to a release branch is if it's hindering progress.  Basically if
> >> > it's not using an API we want to mark as required, or using code that
> >> > is being deprecated.
> >> >
> >> > If hindering progress is the only reason that something should ever be
> >> > removed from the master branch, I'm fine with that.  Let's decide that
> >> > and document it so we don't have to have this discussion in the
> >> > future.
> >>
> >> Surely there will sometimes be platforms/chips that, for good reason,
> >> the community wants to keep in master - even if this would mean rewrites
> >> for those platforms/chips re the two matters you mentioned above:
> >>
> >> a.  not using APIs that future versions of coreboot will require;
> >>
> >> b.  using code that is being deprecated?
> >>
> >>
> >> Such "good reasons" could include that the platform/chip:
> >>
> >> 1.  is in widespread use with coreboot, even if long out of production
> >>  (e.g. certain server boards or Thinkpad models); or
> >>
> >> 2.  is targeted by related projects (e.g. Heads) that coreboot
> >>  developers would prefer to avoid unnecessarily inconveniencing; or
> >>
> >> 3.  has active coreboot testers/maintainers able to integrate relevant
> >>  updates, and is passing all significant CI tests.
> >>
> >>
> >> > Other options we could look at:
> >> >
> >> > - A platform has not been sold in X years.
> >> >
> >> > - A chip has not been produced for X years.
> >>
> >> I can see the appeal of these criteria: they are easy to define.
> >> However, they are probably not wise criteria, as they may conflict with
> >> one or more of the "good reasons" above, especially reason 1.
> >>
> >>
> >> > - A platform has not been tested in X years.
> >> >
> >> > - A platform hasn't had an active maintainer for X years.  (Maybe the
> >> >   maintainer should be asked to test a platform as well?)
> >>
> >> These seem much better criteria.
> >>
> >> To make them easier to apply, should coreboot comprehensively track, for
> >> platforms/chips (roughly as Debian does for packages):
> >>
> >> -   the current maintainer(s) for that platform/chip,
> >>
> >> -   the current tester(s) for that platform/chip,
> >>
> >> -   when that platform/chip was last tested, and
> >>
> >> -   what the test results were?
> >>
> >> I think coreboot already tracks some of this data via
> >> https://lava.9esec.io or https://qa.coreboot.org/ - I'm not certain.
> >>
> >> That being so, I propose some draft policy wording (please
> >> change/improve...):
> >>
> >>  "For any given platform or chip that has ever been targeted by the
> >>  coreboot project:
> >>
> >>  -   For each coreboot "version" (point release, master, or
> >>  hardware-specific branch):
> >>
> >>  -   if the platform/chip has been tested on that version, but
> >>  the test was unsuccessful, the platform/chip shall be
> >>  labelled *broken* re that version; else
> >>
> >>  -   if the test was successful, the platform/chip shall be
> >>  labelled *working* re that version; else
> >>
> >>  -   the platform/chip shall be labelled *untested* re that
> >>  version.
> >>
> >>  -   If the platform/chip has known security 

[coreboot] Re: Deprecation of the Intel Quark SoC

2022-04-22 Thread ron minnich
The discussion here has been pretty helpful to my thinking. I think
the concerns people are raising are important.

We are deprecating ALL boards on oreboot that need FSP, as we took the
decision a few weeks ago to drop boards
that require blobs on the main CPU (we're accepting PSP blobs for now)

This leaves two x86 boards behind, not counting qemu.

But, based on what has come up here, we decided we did not want to
leave all memory of old efforts behind.

This is the result: https://github.com/oreboot/oreboot/pull/570/files

Not sure if this would be desired for coreboot, but I am mentioning it
here for reference.

Thanks

ron

On Mon, Apr 18, 2022 at 11:05 AM Sam Kuper  wrote:
>
> (I'm not a Coreboot dev/maintainer, so apologies for commenting from the
> peanut gallery...)
>
>
> On Mon, Apr 18, 2022 at 04:32:36AM +0200, Martin Roth wrote:
> > [...]
> > 2) Decide on a set of criteria that we can use to evaluate whether or
> > not things should be removed from the master branch and maintained on
> > a release branch.
>
> Makes sense.
>
>
> > Currently, the only reason we have specified that a platform will be
> > moved to a release branch is if it's hindering progress.  Basically if
> > it's not using an API we want to mark as required, or using code that
> > is being deprecated.
> >
> > If hindering progress is the only reason that something should ever be
> > removed from the master branch, I'm fine with that.  Let's decide that
> > and document it so we don't have to have this discussion in the
> > future.
>
> Surely there will sometimes be platforms/chips that, for good reason,
> the community wants to keep in master - even if this would mean rewrites
> for those platforms/chips re the two matters you mentioned above:
>
> a.  not using APIs that future versions of coreboot will require;
>
> b.  using code that is being deprecated?
>
>
> Such "good reasons" could include that the platform/chip:
>
> 1.  is in widespread use with coreboot, even if long out of production
> (e.g. certain server boards or Thinkpad models); or
>
> 2.  is targeted by related projects (e.g. Heads) that coreboot
> developers would prefer to avoid unnecessarily inconveniencing; or
>
> 3.  has active coreboot testers/maintainers able to integrate relevant
> updates, and is passing all significant CI tests.
>
>
> > Other options we could look at:
> >
> > - A platform has not been sold in X years.
> >
> > - A chip has not been produced for X years.
>
> I can see the appeal of these criteria: they are easy to define.
> However, they are probably not wise criteria, as they may conflict with
> one or more of the "good reasons" above, especially reason 1.
>
>
> > - A platform has not been tested in X years.
> >
> > - A platform hasn't had an active maintainer for X years.  (Maybe the
> >   maintainer should be asked to test a platform as well?)
>
> These seem much better criteria.
>
> To make them easier to apply, should coreboot comprehensively track, for
> platforms/chips (roughly as Debian does for packages):
>
> -   the current maintainer(s) for that platform/chip,
>
> -   the current tester(s) for that platform/chip,
>
> -   when that platform/chip was last tested, and
>
> -   what the test results were?
>
> I think coreboot already tracks some of this data via
> https://lava.9esec.io or https://qa.coreboot.org/ - I'm not certain.
>
> That being so, I propose some draft policy wording (please
> change/improve...):
>
> "For any given platform or chip that has ever been targeted by the
> coreboot project:
>
> -   For each coreboot "version" (point release, master, or
> hardware-specific branch):
>
> -   if the platform/chip has been tested on that version, but
> the test was unsuccessful, the platform/chip shall be
> labelled *broken* re that version; else
>
> -   if the test was successful, the platform/chip shall be
> labelled *working* re that version; else
>
> -   the platform/chip shall be labelled *untested* re that
> version.
>
> -   If the platform/chip has known security vulnerabilities on that
> version, the platform/chip shall be labelled *vulnerable* re that version.
>
> -   If the platform/chip has a person/team assigned to test/maintain
> it re master, it shall be labelled *maintained*, unless it has
> been *vulnerable*, *broken*, or *untested* re master for at
> least 6 months in which case it shall be labelled
> *unmaintained*,
>
> -   If a platform/chip has been labelled *unmaintained* for at least
> 6 months, a branch shall be created for it, from the last
> coreboot point-release for which it was tested and found to be
> working.  Such a platform/chip shall be labelled *relegated*.
>
> -   A person/team who merges subsequent updates from master into
> such a branch, such that the branch becomes acceptable to the
> gatekeepers of 

[coreboot] Re: Two Payloads

2022-04-21 Thread ron minnich
for a good example of matt's (1) above, see the pc engines apu2
coreboot image. It's very nice.

On Thu, Apr 21, 2022 at 7:05 AM Matt DeVillier  wrote:
>
> there are two ways to handle multiple payloads in coreboot:
>
> 1) have a single primary payload, which provides the mechanism to
> execute secondary payloads. Grub and SeaBIOS can do this; Heads and
> Tianocore cannot
>
> 2) have two primary payloads, and use the normal/fallback mechanism to
> select between them using nvramtool
>
> Either way you'd need to read/modify/write your existing coreboot
> image, and in many cases simply rebuilding is easier.
>
> For the example you gave, you'd want to use option #2 for
> Heads+Tianocore, since launching one of those from the other is not
> (currently) doable
>
> On Thu, Apr 21, 2022 at 4:54 AM Lukas Jungbauer  wrote:
> >
> > Hi,
> >
> > is it possible to run two Payloads on Coreboot at the same Time, like Heads 
> > and TianoCore?
> >
> > Payloads can only be installed by building a new Image and 
> > Software-Flashing it, correct?
> >
> > Best Regards,
> > Lukas
> > ___
> > coreboot mailing list -- coreboot@coreboot.org
> > To unsubscribe send an email to coreboot-le...@coreboot.org
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: 2022-03-29 - coreboot UEFI working group meeting minutes

2022-04-20 Thread ron minnich
Arthur, your proposal would actually make things worse, surprisingly.

While your proposal would fix a problem, it would change the binary
layout, and create a problem.

Consider the case of a 10y old coreboot, with a modern kernel (Linux)
booting from it. How does linux parse the structures? They're going to
be different, as you fixed the problem Rudolf describes. The kernel
needs two parsers: which one should it use?

Well, you can put a version tag in -- see the MP table. That generally
works poorly -- in the end, there was only ever one version of the MP
table, because ... what do you do if you have an old kernel and a
newer MP table, compiled in such a way that the layout has somehow
changed? oops.

Well, ok, maybe you can find some way to make it work for, say, 15
years of gcc, coreboot, and linux.

Now I want to boot plan 9.
Plan 9 rules for struct layout mainly match gcc, but not quite. In
particular, Plan 9, having been written by the inventors of C, was
averse to packed. In fact, they did not use the packed keyword: in
Plan 9 C, you got packed with
#pragma hjdicks
And, yes, this word was an expression of the opinion of the inventors
of the C language on the idea of packed structures. Humorous back and
forth from years ago:
https://comp.os.plan9.narkive.com/P06B6nkZ/9fans-am-i-nuts-does-8c-support-packed-structs,
which I found by accident just now.

I don't  think we want to version lock coreboot to particular versions
of linux :-)

Rudolf's point is crucial: "Challenge accepted. They aren't [self
defining] because they are defined with ABI/compiler:"

As Rudolf points out, we are defining a binary layout with a c
compiler. That's known not to work. It's why things like xdr compilers
(and protobufs) exist.

You don't have to use an XDR compiler.
If you look in the linux kernel, you'll see stuff like this:
req = p9_client_rpc(clnt, P9_TLOCK, "dbdqqds", fid->fid, flock->type,
What is that thing in quotes? It's the data layout of the packet:
dword, byte, dword, quad, string, etc. 's' in this case means 2 bytes of
length, then bytes.

I'm not saying we can use this, but if you use this string to generate
an array of uint8_t, then you package the string with the array, you
now have a self-describing structure, I believe.

Again, what we have today is not self-describing, not portable to
non-gcc toolchains or other kernels, and not portable even across
kernel and compiler versions (gcc 2 to gcc 3 exposed this kind of
thing, years ago).

Further, because coreboot depends on gcc features, kernels like Plan 9
will not compile those structs the same way.

coreboot tables are nice, but like it or not (and I was part of the
problem, I was there for the creation), they have a gcc and x86 bias
-- I've seen this in Plan 9.

We can do better. And as the kernel source I referenced shows, it
doesn't have to be super complex.

On Tue, Apr 19, 2022 at 11:04 PM Rudolf Marek  wrote:
>
> Hi,
>
> On 19. 04. 22 11:42, Arthur Heymans wrote:
> > Nice catch!
> > Regardless of the upshot of this it's worth fixing this problem in the 
> > coreboot tables implementation.
> > I'm not very knowledgeable on the topic but don't a lot of CPU ARCH support 
> > unaligned pointer access in hardware but it slows things down?
>
> Yes a bit. But mostly for SIMD instructions.
>
> > The way you find coreboot table structs is by looping over ->size, since 
> > payloads may not know
> > some tags. So aligning the size up to 8 bytes when generating the coreboot 
> > tables should do the trick.
>
> Yes and no. Problem is if you have following layout in your struct:
>
> u32 x
> u64 y
>
> On 64-bit you will get "space" in the middle of your struct.
>
> u32 x
> u32 hidden_padding
> u64 y (aligned to 8 byte boundary)
>
> and on 32-bit you will get:
>
> u32 x
> u64 y (aligned to 4 byte boundary)
>
> And a similar thing happens at the end of the struct where extra padding is 
> inserted, but the size depends on the actual ABI/Architecture.
> Linux uses other ABI than UEFI for example. You can play with that using gcc. 
> Just add -Wpadded to your
> compiler flags.
>
> The problem above exactly happens with coreboot memory entries (when there is 
> more than one). See libpayload/include/coreboot_tables.h
> which introduces for this reason struct cbuint64, a 64-bit datatype 
> split into 2 32-bit parts to fix this oversight.
>
> Have a look what multiboot2 folks did. In general multiboot2 [1] got this 
> "right". It aligns the start of the entries and I think there are no such 
> issues. Also it defines the machine state at the point of handoff which is 
> nice. Also, it has some infrastructure to pass various strange future memory 
> lists in the memory tag.
>
>
> Thanks,
> Rudolf
>
> [1] https://www.gnu.org/software/grub/manual/multiboot2/multiboot.html
>
> >
> > Kind regards
> > Arthur
> >
> >
> > On Tue, Apr 19, 2022 at 10:44 AM Rudolf Marek  wrote:
> >
> > Dne 12. 04. 22 v 0:04 Peter Stuge napsal(a):
> > 

[coreboot] Re: lb_serial: drop 'uart_pci_addr' entry

2022-04-17 Thread ron minnich
Wow, that one chip-specific kludge impacts almost 10 source files.

On Sun, Apr 17, 2022 at 11:15 AM ron minnich  wrote:
>
> yes, and this is a perfect example of how one platform, which is not
> used, can cause unneeded features to persist and make the codebase
> more complex than it needs to be.
>
> I support dropping it.
>
>
> On Sun, Apr 17, 2022 at 2:12 AM Arthur Heymans  wrote:
> >
> > Hi
> >
> > In 2016 'uart_pci_addr' was added to the coreboot table entry for serial 
> > devices.
> > (https://review.coreboot.org/c/coreboot/+/14609)
> > It was done for the Intel Quark platform which has its uart on a PCI device 
> > like other
> > Intel hardware. Right now only Quark sets this to a non zero value using an 
> > awkwardly defined Kconfig parameter: CONFIG_UART_PCI_ADDR. It looks like 
> > only tianocore uses this and it's
> > pretty much a NOOP used only to get the VID/DID of the PCI device.
> >
> > Should we update tianocore and just drop this for the lb_table?
> > Most other payloads don't even have this struct entry updated to contain 
> > this entry...
> > Now our codebase has awkward code with "serial.uart_pci_addr = 
> > CONFIG_UART_PCI_ADDR;" on a lot of platforms that don't even feature PCI 
> > and there is no real use case as far as I can tell.
> >
> > Do any of your payloads use this in a meaningful way?
> > If not, can we just drop it?
> >
> > Kind regards
> >
> > Arthur
> > ___
> > coreboot mailing list -- coreboot@coreboot.org
> > To unsubscribe send an email to coreboot-le...@coreboot.org
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: lb_serial: drop 'uart_pci_addr' entry

2022-04-17 Thread ron minnich
yes, and this is a perfect example of how one platform, which is not
used, can cause unneeded features to persist and make the codebase
more complex than it needs to be.

I support dropping it.


On Sun, Apr 17, 2022 at 2:12 AM Arthur Heymans  wrote:
>
> Hi
>
> In 2016 'uart_pci_addr' was added to the coreboot table entry for serial 
> devices.
> (https://review.coreboot.org/c/coreboot/+/14609)
> It was done for the Intel Quark platform which has its uart on a PCI device 
> like other
> Intel hardware. Right now only Quark sets this to a non zero value using an 
> awkwardly defined Kconfig parameter: CONFIG_UART_PCI_ADDR. It looks like only 
> tianocore uses this and it's
> pretty much a NOOP used only to get the VID/DID of the PCI device.
>
> Should we update tianocore and just drop this for the lb_table?
> Most other payloads don't even have this struct entry updated to contain this 
> entry...
> Now our codebase has awkward code with "serial.uart_pci_addr = 
> CONFIG_UART_PCI_ADDR;" on a lot of platforms that don't even feature PCI and 
> there is no real use case as far as I can tell.
>
> Do any of your payloads use this in a meaningful way?
> If not, can we just drop it?
>
> Kind regards
>
> Arthur
> ___
> coreboot mailing list -- coreboot@coreboot.org
> To unsubscribe send an email to coreboot-le...@coreboot.org
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Deprecation of the Intel Quark SoC

2022-04-16 Thread ron minnich
btw, if you are interested in looking at new abstractions intended to
resolve some of the issues we are discussing here, that's what oreboot
is all about.

We did just drop the fsp platforms, because we decided there is no
interest in binary blobs for oreboot, but other than that ... if you
like the idea of full source, Rust, an effort to work out new
abstractions, and to remove some legacy coreboot ideas that did not
work out ... we're there every Friday, with lots of RISC-V focus
nowadays.

I am taking this discussion to heart, and I think in oreboot we're
going to try to find a way to record mainboards we've dropped, with
their commit, so memory is not lost. Maybe with tags, maybe with plain
text file, not sure. It's possible whatever we try out will work for
coreboot, but we'll see.

On Sat, Apr 16, 2022 at 8:25 AM ron minnich  wrote:
>
> nico, it was not so much a matter of me jumping on the bandwagon, as
> my reluctance to get involved in another never-ending discussion over
> retiring a platform that nobody uses or cares about.
>
> But let's keep it simple. I think it's clear that the effort to
> maintain the Quark is > 0. The number of users is zero. The effort per
> user, depending on what you get when you divide a number by zero, is
> "high" :-)
>
> But there's a bigger issue here. If I have a board, which ran coreboot
> at some time, which version of coreboot do I use? In many cases, the
> working version of coreboot will not be master. That is true for
> chromebooks in many cases, which is why Google maintains a fork for
> each chromebook, once it is known to work. If you've been doing this
> for long enough, you've experienced building a mainboard at tip of
> tree and having it not work; then finding an older commit for which
> the board does work.
>
> This discussion has led me to believe that we should change how we
> name branches. People get upset that boards are not in master. Should
> we get rid of the entire idea of a master branch which works for all
> boards, since that is not quite true anyway? Or is the problem that
> people don't want to see a name lost to memory (I understand that)?
> Can we maintain a record of boards, along with the last working
> commit?
>
> In other words, builds != boots. But we continue to act as though
> boards that build will also boot. This is known not to be true.
>
> The issue is not whether my board is in master. The issue is, what's
> the last known commit in coreboot for which a board was tested and
> known to work?
>
> So we would no longer deprecate boards, or drop them, or do whatever
> gets people upset. We would have a way to know, for any given board,
> which commit to check out to build it. The list of boards would grow
> over time, and it would be easy to checkout a board and build it.
> Boards would not be lost to memory.
>
> We could acknowledge this reality by naming master to tip, or some
> similar name, which is less likely to get people upset.
>
> This was my original goal for the mainboards status page, but we never
> got there. Maybe it's time to bring that to life.
>
> On Sat, Apr 16, 2022 at 7:33 AM Nico Huber  wrote:
> >
> > Hi Sheng,
> >
> > On 16.04.22 11:01, Sheng Lean Tan wrote:
> > > Personally I think moving Galileo soc to stable branch is a win-win 
> > > situation for all of us.
> >
> > it looks like nobody is maintaining such a stable branch yet. Would you
> > volunteer to maintain one for Quark? AIUI, some people already want to
> > take care of testing. So you'd only have to maintain compatibility with
> > newer toolchain and payload versions and such.
> >
> > > Enthusiasts who still want to use it are free to do so without 
> > > the baggage, and for others it’s a great savings on resources spent, so 
> > > that we could leave more room (and also testing resources) to the 
> > > upcoming coreboot products and architectures (I think much more will come; 
> > > the public has only just warmed up to coreboot ;) ).
> >
> > FWIW, most resources for newer platforms are wasted by copying code
> > (kind of forking the original code in the same repository). So there
> > is much more potential to save resources by adding proper abstraction
> > instead. And what would be better to get the abstractions right than
> > a diverse set of platforms in the tree? I'm not saying, you need Quark
> > for that, but so far I also don't see how it could hurt.
> >
> > Nico


[coreboot] Re: Deprecation of the Intel Quark SoC

2022-04-16 Thread ron minnich
nico, it was not so much a matter of me jumping on the bandwagon, as
my reluctance to get involved in another never-ending discussion over
retiring a platform that nobody uses or cares about.

But let's keep it simple. I think it's clear that the effort to
maintain the Quark is > 0. The number of users is zero. The effort per
user, depending on what you get when you divide a number by zero, is
"high" :-)

But there's a bigger issue here. If I have a board, which ran coreboot
at some time, which version of coreboot do I use? In many cases, the
working version of coreboot will not be master. That is true for
chromebooks in many cases, which is why Google maintains a fork for
each chromebook, once it is known to work. If you've been doing this
for long enough, you've experienced building a mainboard at tip of
tree and having it not work; then finding an older commit for which
the board does work.

This discussion has led me to believe that we should change how we
name branches. People get upset that boards are not in master. Should
we get rid of the entire idea of a master branch which works for all
boards, since that is not quite true anyway? Or is the problem that
people don't want to see a name lost to memory (I understand that)?
Can we maintain a record of boards, along with the last working
commit?

In other words, builds != boots. But we continue to act as though
boards that build will also boot. This is known not to be true.

The issue is not whether my board is in master. The issue is, what's
the last known commit in coreboot for which a board was tested and
known to work?

So we would no longer deprecate boards, or drop them, or do whatever
gets people upset. We would have a way to know, for any given board,
which commit to check out to build it. The list of boards would grow
over time, and it would be easy to checkout a board and build it.
Boards would not be lost to memory.

We could acknowledge this reality by naming master to tip, or some
similar name, which is less likely to get people upset.

This was my original goal for the mainboards status page, but we never
got there. Maybe it's time to bring that to life.

On Sat, Apr 16, 2022 at 7:33 AM Nico Huber  wrote:
>
> Hi Sheng,
>
> On 16.04.22 11:01, Sheng Lean Tan wrote:
> > Personally I think moving Galileo soc to stable branch is a win-win 
> > situation for all of us.
>
> it looks like nobody is maintaining such a stable branch yet. Would you
> volunteer to maintain one for Quark? AIUI, some people already want to
> take care of testing. So you'd only have to maintain compatibility with
> newer toolchain and payload versions and such.
>
> > Enthusiasts who still want to use it are free to do so without the 
> > baggage, and for others it’s a great savings on resources spent, so that we 
> > could leave more room (and also testing resources) to the upcoming 
> > coreboot products and architectures (I think much more will come; the public 
> > has only just warmed up to coreboot ;) ).
>
> FWIW, most resources for newer platforms are wasted by copying code
> (kind of forking the original code in the same repository). So there
> is much more potential to save resources by adding proper abstraction
> instead. And what would be better to get the abstractions right than
> a diverse set of platforms in the tree? I'm not saying, you need Quark
> for that, but so far I also don't see how it could hurt.
>
> Nico


[coreboot] Re: Deprecation of the Intel Quark SoC

2022-04-15 Thread ron minnich
I think quark revival should come with a reasonable deadline. IOW, if
people are serious about keeping this platform, I think we ought to
see commitments as to when they report that it works. I'd suggest July
1. We've had a lot of commitments before, but everyone is busy, and
hopes can outrun reality. It should not take more than a few hours to
verify whether this board does or does not work.

Keeping an old platform is not zero cost. It comes with costs for
running CI, keeping it up to date as other parts of coreboot evolve,
and dealing with build failures that can occur as it falls out of
date. Those costs are all externalized, for most of us, to Patrick and
Martin, but they do exist.

In round numbers, coreboot is at about 5k commits/year (last time I
looked; maybe it's higher or lower now).  Assuming each CL takes
around ten builds, that's 50,000 builds, times 350 boards, which
translates to "a lot." It keeps Martin's house warm, I suspect. That's
not counting the continuous builds that go on for Chromebooks at
Google, Intel, and many other places. These builds all include Quark.
To put it another way, Quark has a CO2 footprint. There ought to be
usage to justify this cost.

I'm told that 1% or so of our mainboards are dependent on quark. As
far as I know, there are 0 quark boards out there using coreboot. We
seem to be putting an awful lot of effort into a board with no users
-- a board and chip that, furthermore, has been dead for several
years, and was never that great to begin with.

On Fri, Apr 15, 2022 at 11:29 AM Zimmer, Vincent
 wrote:
>
> Sure. Send me a mailing address. Units should have Europe-friendly 
> wall-wart/power supplies and cables, etc. in the box.
>
>
>
> Vincent
>
>
>
> From: Andy Pont 
> Sent: Friday, April 15, 2022 11:13 AM
> To: Zimmer, Vincent ; Karl Semich 
> <0xl...@gmail.com>; lpleah...@gmail.com
> Cc: Coreboot 
> Subject: Re: [coreboot] Re: Deprecation of the Intel Quark SoC
>
>
>
> Vincent wrote...
>
>
> > I can provide some Galileo h/w for folks if there is interest in supporting.
>
>
>
> Looking at the configs, it looks like the Gen 1 and Gen 2 Galileo boards 
> are the place to start?
>
>
>
> If you have both and can get them shipped to the UK that would be great. I 
> suspect I have power supplies and debug cables for them.
>
>
>
> -Andy
>


[coreboot] Re: Deprecation of the Intel Quark SoC

2022-04-14 Thread ron minnich
do you have a way to recover if the flash fails?

On Thu, Apr 14, 2022 at 1:09 PM Andy Pont  wrote:
>
> Karl wrote…
>
> >Obviously a way to sidestep all this would be to simply test the board
> >in question, which is a small investment of money and time.
> There is still one of these boards (Intel Galileo) available on eBay
> here in the UK.  I can likely commit the time to test coreboot on that
> board but don’t have the money spare to purchase it due to the ongoing
> cost of living crisis here.
>
> If someone wants to purchase and donate the board, I can have a look at
> how coreboot is on it.
>
> -Andy.


[coreboot] Re: 2022-03-29 - coreboot UEFI working group meeting minutes

2022-04-12 Thread ron minnich
My goal was pretty simple: kill the UEFI HOBs, and the FSP UPD, and
put something better in their place. coreboot tables could easily
replace HOBs, save that Intel will never accept that; but I don't see
coreboot tables replacing UPD.

[one might argue that what Intel will accept matters a lot less than
it did 5 years ago, and I would agree. So maybe we can worry less
about what Intel will accept, but still ... :-) ]

I like self-describing data, as it avoids the mess we are in with
UPD today, where you can end up with problems if the compilers you use
for FSP and (e.g.) coreboot don't agree totally on how to lay out data
structures. UPDs are also a major pain for non-C firmware, such as
oreboot.
So I'd like a data format that is not defined by a compiler or
language. But maybe I'm the only person who wants that :-)

On Tue, Apr 12, 2022 at 11:45 AM Peter Stuge  wrote:
>
> ron minnich wrote:
> > peter, you are right about CBOR, and that says to me it does not
> > really meet the original goal of self-describing data.
>
> Hm, whose goal is that?
>
> Anyway, using some data structure serialized in CBOR requires
> defining the structure somewhere. Using coreboot tables requires
> definitions too, they are currently defined in coreboot,
> standardizing coreboot tables would probably see them move to a
> repo of their own.
>
>
> > But coreboot tables, at least in my understanding, are also not
> > self-describing.
>
> I don't know? What do you mean by self-describing actually?
>
>
> > Do you have some thoughts on a good format that is self-describing?
>
> So what's the expectation there; what does a self-describing format
> enable or need to enable? And what's the complexity tradeoff involved?
>
> As Arthur pointed out, coreboot tables have the quite significant
> advantage of being very very simple to read and write.
>
>
> I think this is still interesting to pursue:
>
> > > > So if the idea is to create a payload handoff format that can be
> > > > shared and used by multiple different firmware packages, do you have
> > > > a better option?
> > >
> > > I'd ask what other boot firmware is missing from coreboot tables for
> > > them to be universally acceptable.
>
> Martin wrote that the goal is to create a handoff format that can be
> shared and I'm asking what coreboot tables are missing to serve others,
> because I think we have a really good (simple) technical solution there.
>
>
> //Peter


[coreboot] Re: 2022-03-29 - coreboot UEFI working group meeting minutes

2022-04-12 Thread ron minnich
peter, you are right about CBOR, and that says to me it does not
really meet the original goal of self-describing data. But coreboot
tables, at least in my understanding, are also not self-describing.

Do you have some thoughts on a good format that is self-describing?

On Mon, Apr 11, 2022 at 3:05 PM Peter Stuge  wrote:
>
> Martin Roth via coreboot wrote:
> > > Your concern is valid and I think a key point. CBOR may not be bad
> > > over a socket, but such a complex and arbitrarily extensible format
> > > is much too error prone to be a good technical choice during boot.
> >
> > So if the idea is to create a payload handoff format that can be
> > shared and used by multiple different firmware packages, do you have
> > a better option?  Yes, coreboot can just continue with just the
> > coreboot tables, but that seems a little like sticking our head in
> > the sand and refusing to recognize that other boot firmware exists.
>
> I'd ask what other boot firmware is missing from coreboot tables for
> them to be universally acceptable.
>
>
> > > I agree that it could be a step forward, but I think the devil is in
> > > the details. CBOR data structures can also be unneccessarily complex
> > > and error prone, beyond the parser itself.
> >
> > So maybe we try to limit the complexity?  I'm not really familiar with
> > CBOR, so I don't know the issues with it.
>
> CBOR (RFC 8949) is a binary serialization of JSON with some extensions.
>
> So "CBOR" itself says nothing about the data within.
>
>
> > Intel did say that they were willing to look at other alternatives if
> > we had any.
>
> That's a positive signal!
>
> I propose that coreboot tables are a good alternative - fight me! :)
>
>
> > I hope nobody takes any of this as criticism - I appreciate the
> > open discussion, and am sincerely looking for the best path forward here.
>
> Not at all.
>
> Let's see if coreboot tables can grow to cover all needs?
>
>
> Kind regards
>
> //Peter


[coreboot] Re: Another day, another SMM loader vulnerability

2022-04-11 Thread ron minnich
arthur, what might we do with either the build process or startup to
avoid this problem in future? Do you think we could find a way to
catch this programmatically soon, rather than humanly too late?

On Mon, Apr 11, 2022 at 2:48 AM Arthur Heymans  wrote:
>
> Hi
>
> After last week's SMM loader problem on all but the BSP, I noticed another 
> problem in the SMM setup.
> The permanent smihandler is currently built as a relocatable module such that 
> coreboot
> can place it wherever it thinks it's a good idea. (TSEG is not known at 
> buildtime).
> These relocatable modules have an alignment requirement.
>
> It looks, however, like the code dealing with the alignment requirement is 
> also wrong:
> it aligns the handler upwards instead of downwards, which makes it encroach 
> on either an SSE2
> FX_SAVE area or an SMM register save state. It's hard to know whether this is 
> easily exploitable.
> I would think that with a carefully crafted SMM save state on the right AP, 
> arbitrary code execution might be possible. On the other hand, I noticed last 
> week that launching SMM on APs is broken too, so this is likely a lesser 
> problem.
>
> Anyway the fix is in https://review.coreboot.org/c/coreboot/+/63475
> (It has a comment indicating what code was causing this problem)
> Please review and update your coreboot code!
>
> Kind regards
> Arthur


[coreboot] Re: Deprecation of the Intel Quark SoC

2022-04-01 Thread ron minnich
Does anyone even have one? Has anyone done a build and burn recently
to test? Has anyone volunteered to maintain it? How much does it
impact other code as a special case?

I threw mine out years ago.

On Fri, Apr 1, 2022 at 6:19 AM Peter Stuge  wrote:
>
> Felix Singer wrote:
> > to me it seems like the Intel Quark SoC has been unmaintained and
> > unused for a long time now. So I'm proposing to deprecate the support
> > for it with coreboot release 4.17 [1], in order to drop the support
> > with release 4.19 so that the community has less maintenance overhead.
> >
> > Does anyone use this platform? Any opinions against this?
>
> What Paul wrote; are there some practical concerns beyond the
> academic concern seemingly based on perceived utility?
>
> Quark is funny and as I understand also the ME CPU, I find those to
> be two more good reasons not to delete it.
>
>
> //Peter


[coreboot] Re: Deprecation of the Intel Quark SoC

2022-03-31 Thread ron minnich
go for it.

Long overdue.

On Thu, Mar 31, 2022 at 1:55 PM Felix Singer  wrote:
>
> Hi all,
>
> to me it seems like the Intel Quark SoC has been unmaintained and
> unused for a long time now. So I'm proposing to deprecate the support
> for it with coreboot release 4.17 [1], in order to drop the support
> with release 4.19 so that the community has less maintenance overhead.
>
> Does anyone use this platform? Any opinions against this?
>
>
> // Felix
>
> [1] https://review.coreboot.org/c/coreboot/+/63283


[coreboot] Re: [GSoC 2022] Interested in Contributing to coreboot.

2022-03-12 Thread ron minnich
gotcha. Makes sense to use the UEFI code from u-boot as a payload.

note that I used that code to tell me how to write rundxerun,
https://talks.osfc.io/osfc2021/talk/VSPKZG/,  it's very readable ;-)

ron

On Sat, Mar 12, 2022 at 10:27 AM Ahamed Husni  wrote:
>
> + Simon Glass
>
> The current U-Boot payload in coreboot supports UEFI.
> In the UEFI working group meeting there was a suggestion to take just
> the UEFI code from U-Boot and build our own payload for coreboot.
>
>
> On Sat, Mar 12, 2022 at 11:41 PM ron minnich  wrote:
>>
>> so is your plan to port the UEFI code as a payload, or to use u-boot
>> as a payload and then use their UEFI support?
>>
>> On Sat, Mar 12, 2022 at 9:36 AM Ahamed Husni  wrote:
>> >
>> > Hi everyone,
>> >
>> > On Fri, Feb 25, 2022 at 4:21 PM Ahamed Husni  
>> > wrote:
>> > >
>> > > Dear All,
>> > >
>> > > I am hoping to apply for the GSoC 2022 program. I am really interested 
>> > > in contributing to the coreboot project as I love to work with low level 
>> > > development.
>> > >
>> > > I completed the pre-requisites mentioned in the documentation.
>> > >
>> > > I built a 32-bit EFI binary in the gnu-efi environment. I used U-Boot as 
>> > > the payload for coreboot to test this EFI binary. The target is 
>> > > x86_64(QEMU).
>> > > Serial output: 
>> > > https://gist.github.com/drac98/6166d29f6c3a2baf2f4e791925ea98d3
>> > >
>> > > I sent a patch for review about a year ago related to adding 
>> > > heap-out-of-bound detection support to AddressSanitizer port. If this is 
>> > > not acceptable as my first patch for review, please let me know. I can 
>> > > fix some coding styles or do a simple task given by the community.
>> > > PATCH: https://review.coreboot.org/c/coreboot/+/51787
>> > >
>> > > I find the "Add U-BOOT's UEFI payload as a coreboot payload" project 
>> > > mentioned in the coreboot project collection interesting to me. I have 
>> > > no prior experience with UEFI. As a starting point I learned to build 
>> > > and run an EFI binary with coreboot using the U-Boot payload.
>> > >
>> > > Please let me know what I should do next to learn and prepare for this 
>> > > project.
>> > >
>> > > Thanks & Best regards,
>> > > Husni.
>> >
>> > I reached out to the U-Boot community to check the current status of the 
>> > UEFI. Please find the thread here: 
>> > https://lists.denx.de/pipermail/u-boot/2022-March/477682.html
>> >
>> > I have a few questions about the project.
>> > What should be the deliverables for this project?
>> > Do we need 64-bit EFI support (U-Boot doesn't support 64bit EFI)?
>> > How do we develop a payload? Is it done using libpayload?
>> > Is this a medium sized project or a large project?
>> >
>> > --Husni


[coreboot] Re: [GSoC 2022] Interested in Contributing to coreboot.

2022-03-12 Thread ron minnich
so is your plan to port the UEFI code as a payload, or to use u-boot
as a payload and then use their UEFI support?

On Sat, Mar 12, 2022 at 9:36 AM Ahamed Husni  wrote:
>
> Hi everyone,
>
> On Fri, Feb 25, 2022 at 4:21 PM Ahamed Husni  wrote:
> >
> > Dear All,
> >
> > I am hoping to apply for the GSoC 2022 program. I am really interested in 
> > contributing to the coreboot project as I love to work with low level 
> > development.
> >
> > I completed the pre-requisites mentioned in the documentation.
> >
> > I built a 32-bit EFI binary in the gnu-efi environment. I used U-Boot as 
> > the payload for coreboot to test this EFI binary. The target is 
> > x86_64(QEMU).
> > Serial output: 
> > https://gist.github.com/drac98/6166d29f6c3a2baf2f4e791925ea98d3
> >
> > I sent a patch for review about a year ago related to adding 
> > heap-out-of-bound detection support to AddressSanitizer port. If this is 
> > not acceptable as my first patch for review, please let me know. I can fix 
> > some coding styles or do a simple task given by the community.
> > PATCH: https://review.coreboot.org/c/coreboot/+/51787
> >
> > I find the "Add U-BOOT's UEFI payload as a coreboot payload" project 
> > mentioned in the coreboot project collection interesting to me. I have no 
> > prior experience with UEFI. As a starting point I learned to build and run 
> > an EFI binary with coreboot using the U-Boot payload.
> >
> > Please let me know what I should do next to learn and prepare for this 
> > project.
> >
> > Thanks & Best regards,
> > Husni.
>
> I reached out to the U-Boot community to check the current status of the 
> UEFI. Please find the thread here: 
> https://lists.denx.de/pipermail/u-boot/2022-March/477682.html
>
> I have a few questions about the project.
> What should be the deliverables for this project?
> Do we need 64-bit EFI support (U-Boot doesn't support 64bit EFI)?
> How do we develop a payload? Is it done using libpayload?
> Is this a medium sized project or a large project?
>
> --Husni


[coreboot] Re: How to maintain AGESA-based ports long-term?

2021-12-04 Thread ron minnich
based on what I'm seeing so far, 100 hours just means "compiles",
which is only a fraction of the possible effort to get it to "works".
You then have 50 boards to get working.

and even then, at real-world rates, 100 hours -> 25,000.

There's only so much we can do. I at least would be way happier to see
effort going into new boards.

On Sat, Dec 4, 2021 at 8:48 PM Keith Emery  wrote:
>
>
> I only said 100 hours because that was the figure that somebody stated
> to shift all the listed boards onto the new Resource Allocator. We need
> that to happen if these boards are to see maintenance in the future, so
> I figured it made sense to just start with that.
>
>
> On 5/12/21 5:53 am, ron minnich wrote:
> > 100 hours of work for 50 boards? 2 hours per board? Each one fully
> > tested and working as before? 'm pretty surprised.
> >
> > On Sat, Dec 4, 2021 at 4:38 AM Keith Emery  
> > wrote:
> >> Getting the listed AGESA boards operating on the new scheduler is
> >> estimated to take around 100 hours of work. So do we have some idea of
> >> what might be considered an acceptable hourly rate? Also do we have a
> >> clear estimate of how many people have one of the effected boards? That
> >> at least gives us a funding goal to work with.
> >>
> >> Alternatively is there some other way to determine a price to at least
> >> get my specific board working with the new infrastructure?
> >>
> >>
> >> On 4/12/21 12:37 pm, ron minnich wrote:
> >>> I think the reason the question comes up time and time again is
> >>> because the effort is non trivial. Were it reasonably easy, it would
> >>> have been done by now. It's easy to get it to compile, but getting it
> >>> to work solidly is not at all easy.
> >>>
> >>> It's very hard to let old systems go. But there's always a tradeoff.
> >>>
> >>>   From my point of view, I'd be very grateful if we could get this
> >>> community strongly engaged in getting upstream coreboot builds working
> >>> on, e.g., chromebooks.
> >>>
> >>> I.e., upstream coreboot working on systems that are designed for, and
> >>> ship with, coreboot. Even things that look easy are not always easy.
> >>>
> >>> ron
> >>>
> >>> On Fri, Dec 3, 2021 at 1:07 PM Matt B  wrote:
> >>>>> It's just software, so it could certainly be done. How much work would
> >>>>> be involved is the right question. Alas, I have no idea. One needs to
> >>>>> study the AGESA sources to tell, I guess.
> >>>> This question has come up time and time again:
> >>>> What would actually be involved in {"cleaning up","doing a 'real' 
> >>>> port","whatever else makes sense'} to make these platforms based on 
> >>>> AGESA as maintainable as corresponding intel platforms?
> >>>>
> >>>> I'll happily buy a round of beer (or equivalent) for anyone who can 
> >>>> provide a clear picture of what the road forward looks like. Then we can 
> >>>> at least talk in grounded terms.
> >>>>
> >>>> -Matt
> >>>>
> >>>> On Wed, Dec 1, 2021 at 12:51 PM ron minnich  wrote:
> >>>>> We've always deprecated platforms. And they're still in tree --  you
> >>>>> can build for DEC Alpha if you want. There's no shame in not being in
> >>>>> the latest release.
> >>>>>
> >>>>> Given unlimited time and money and people, we could fix all the
> >>>>> problems. We live in a world of limits, and must do what we can with
> >>>>> the resources we have.
> >>>>>
> >>>>> Nobody is stopping anyone from cleaning up the AGESA code. But it's
> >>>>> been about 10 years since it came in, and such cleanup has yet to
> >>>>> happen.
> >>>>>
> >>>>> We should move forward with the resource allocator, and if a board
> >>>>> can't work with v4, and nobody is willing to do the work, that board
> >>>>> should be left out of new releases. Having v3 and v4 both in-tree is
> >>>>> not a viable long term strategy.
> >>>>>
> >>>>> On Wed, Dec 1, 2021 at 8:43 AM Nico Huber  wrote:
> >>>>>> On 01.12.21 15:57, Ivan Ivanov wrote:
> >>>>>>> Thank you, these seem to be good points. However, in 

[coreboot] Re: How to maintain AGESA-based ports long-term?

2021-12-03 Thread ron minnich
I think the reason the question comes up time and time again is
because the effort is non trivial. Were it reasonably easy, it would
have been done by now. It's easy to get it to compile, but getting it
to work solidly is not at all easy.

It's very hard to let old systems go. But there's always a tradeoff.

From my point of view, I'd be very grateful if we could get this
community strongly engaged in getting upstream coreboot builds working
on, e.g., chromebooks.

I.e., upstream coreboot working on systems that are designed for, and
ship with, coreboot. Even things that look easy are not always easy.

ron

On Fri, Dec 3, 2021 at 1:07 PM Matt B  wrote:
>>
>> It's just software, so it could certainly be done. How much work would
>> be involved is the right question. Alas, I have no idea. One needs to
>> study the AGESA sources to tell, I guess.
>
>
> This question has come up time and time again:
> What would actually be involved in {"cleaning up","doing a 'real' 
> port","whatever else makes sense'} to make these platforms based on AGESA as 
> maintainable as corresponding intel platforms?
>
> I'll happily buy a round of beer (or equivalent) for anyone who can provide a 
> clear picture of what the road forward looks like. Then we can at least talk 
> in grounded terms.
>
> -Matt
>
> On Wed, Dec 1, 2021 at 12:51 PM ron minnich  wrote:
>>
>> We've always deprecated platforms. And they're still in tree --  you
>> can build for DEC Alpha if you want. There's no shame in not being in
>> the latest release.
>>
>> Given unlimited time and money and people, we could fix all the
>> problems. We live in a world of limits, and must do what we can with
>> the resources we have.
>>
>> Nobody is stopping anyone from cleaning up the AGESA code. But it's
>> been about 10 years since it came in, and such cleanup has yet to
>> happen.
>>
>> We should move forward with the resource allocator, and if a board
>> can't work with v4, and nobody is willing to do the work, that board
>> should be left out of new releases. Having v3 and v4 both in-tree is
>> not a viable long term strategy.
>>
>> On Wed, Dec 1, 2021 at 8:43 AM Nico Huber  wrote:
>> >
>> > On 01.12.21 15:57, Ivan Ivanov wrote:
>> > > Thank you, these seem to be good points. However, in regards to:
>> > >
>> > >> If you have any hope of open-source coreboot for newer platforms, you 
>> > >> shouldn't make it harder for coreboot to advance.
>> > >
>> > > Where to advance? Are there any "newer platforms" that are as worthy
>> > > as the "older platforms":
>> >
>> > Not sure how to compare that, nobody has written native coreboot code
>> > for the platforms that you deem worthy either. Also, ...
>> >
>> > > 1) as secure: no Intel ME / AMD PSP "security" co-processors, which
>> > > are seen as harmful to real security by many ;
>> >
>> > ...open-source AGESA seems worse to me. In theory one could review it,
>> > but did anyone? AIUI, it even provides runtime code for the OS (ACPI
>> > DSDT), i.e. tells the OS what to do.
>> >
>> > So what you call "real security" seems more like wishful security to
>> > me. Presence of ME or PSP does not pose a security issue per se. It
>> > depends on your threat model and if they are your weakest spot. There
>> > are plenty of controllers even in older machines that run code from ROM
>> > masks. What's the difference? Can we trust vendors with code in ROM
>> > masks but not with code in flash? These are subtle considerations. So
>> > far, it doesn't make older hardware more attractive to me.
>> >
>> > Did I mention that at least one of the pre-PSP platforms already has
>> > a PSP, just hidden? Ok, I admit I didn't look at the silicon to check,
>> > but it's common that a silicon vendor puts new stuff early into chips
>> > to test it. So it seems very likely to be true. We generally don't
>> > know what exactly lives in these chips. I'd rather trust what I can see.
>> >
>> > > 2) as affordable: the older devices are possible to get used for like
>> > > $100-$200. Meanwhile - because of Boot Guard etc. - the "newer
>> > > platforms" are unlikely to have coreboot without vendor's involvement,
>> > > who will gladly charge a big extra for "coreboot support".
>> >
>> > Last time I checked BootGuard wasn't a big issue, i.e. not so many
>> > devices ship with it. 

[coreboot] Re: How to maintain AGESA-based ports long-term?

2021-12-01 Thread ron minnich
We've always deprecated platforms. And they're still in tree --  you
can build for DEC Alpha if you want. There's no shame in not being in
the latest release.

Given unlimited time and money and people, we could fix all the
problems. We live in a world of limits, and must do what we can with
the resources we have.

Nobody is stopping anyone from cleaning up the AGESA code. But it's
been about 10 years since it came in, and such cleanup has yet to
happen.

We should move forward with the resource allocator, and if a board
can't work with v4, and nobody is willing to do the work, that board
should be left out of new releases. Having v3 and v4 both in-tree is
not a viable long term strategy.

On Wed, Dec 1, 2021 at 8:43 AM Nico Huber  wrote:
>
> On 01.12.21 15:57, Ivan Ivanov wrote:
> > Thank you, these seem to be good points. However, in regards to:
> >
> >> If you have any hope of open-source coreboot for newer platforms, you 
> >> shouldn't make it harder for coreboot to advance.
> >
> > Where to advance? Are there any "newer platforms" that are as worthy
> > as the "older platforms":
>
> Not sure how to compare that, nobody has written native coreboot code
> for the platforms that you deem worthy either. Also, ...
>
> > 1) as secure: no Intel ME / AMD PSP "security" co-processors, which
> > are seen as harmful to real security by many ;
>
> ...open-source AGESA seems worse to me. In theory one could review it,
> but did anyone? AIUI, it even provides runtime code for the OS (ACPI
> DSDT), i.e. tells the OS what to do.
>
> So what you call "real security" seems more like wishful security to
> me. Presence of ME or PSP does not pose a security issue per se. It
> depends on your threat model and if they are your weakest spot. There
> are plenty of controllers even in older machines that run code from ROM
> masks. What's the difference? Can we trust vendors with code in ROM
> masks but not with code in flash? These are subtle considerations. So
> far, it doesn't make older hardware more attractive to me.
>
> Did I mention that at least one of the pre-PSP platforms already has
> a PSP, just hidden? Ok, I admit I didn't look at the silicon to check,
> but it's common that a silicon vendor puts new stuff early into chips
> to test it. So it seems very likely to be true. We generally don't
> know what exactly lives in these chips. I'd rather trust what I can see.
>
> > 2) as affordable: the older devices are possible to get used for like
> > $100-$200. Meanwhile - because of Boot Guard etc. - the "newer
> > platforms" are unlikely to have coreboot without vendor's involvement,
> > who will gladly charge a big extra for "coreboot support".
>
> Last time I checked BootGuard wasn't a big issue, i.e. not so many
> devices ship with it. Did that change?
>
> Devices sold today will be as affordable tomorrow (well, on a slightly
> larger timescale). What's your point?
>
> > 3) as available: these generic consumer electronics, which have been
> > shipped with a proprietary UEFI but got coreboot support later, have a
> > huge numbers all over the world - compared to the quite limited
> > availability of newer coreboot platforms.
>
> I don't understand this point either. This will change, earth keeps
> turning, right? Also, I'm quite sure that your numbers are wrong
> anyway. Please check how many Chromebooks are sold, for instance.
> These are sold by people who actually support the project, btw.
>
> >
> > Sorry, I don't see any "newer platforms" which would match the "older
> > platforms" on these critically-important points.
>
> You seem to be too used to looking back. Please look ahead from
> time to time. And regarding security, don't trust everything you read
> on the internet. It's far more subtle than "non-PSP is secure, PSP is
> insecure".
>
> Also, it's not about old vs. new hardware anyway. There's much older
> hardware than the AGESA ports that will stay maintained. It's about
> hardware that nobody took the time to write a proper, long-term
> maintainable coreboot for. And I can't blame anyone for it. Everything
> AMD Bulldozer based always seemed like the most unattractive
> hardware to me.
>
> > So it doesn't seem reasonable to drop the "crappy code" of "older
> > platforms" in favor of the "beautiful code" of "newer platforms", if
> > they could never become as worthy.
>
> You made it clear that they are worthy to *you* (even if your arguments
> seem extremely fragile, so maybe that has changed), so you are free to look
> after their code. Why not start with that instead of complaining that
> nobody else does it for you?
>
> >
> > Well, maybe some corporation sees their newer platform as "more
> > worthy" - despite it's losing on all 3 points above and there are
> > blobs-over-blobs. But they can't speak for the community of opensource
> > hobbyists all over the world, people like you and me. And pleasing the
> > corporations by easing their "burden" - while dropping the "older
> > platforms" which are more 

[coreboot] Re: Suggestion for deprecation: LEGACY_SMP_INIT & RESOURCE_ALLOCATOR_V3

2021-11-28 Thread ron minnich
having read this discussion, and with all respect for all the opinions
so clearly expressed, I still support Arthur's original proposal.


On Sun, Nov 28, 2021 at 2:20 PM David Hendricks
 wrote:
>
>
>
>> 1. These boards will be gone for the people who check the "mainboards
>> supported by coreboot" and see only the "new Intel stuff". This
>> hinders the coreboot community growth around the "gone boards", and
>> also of the coreboot community in general: the fewer boards are
>> supported by coreboot, the more difficult it is for a potential
>> user/contributor to find the supported board and join us.
>
>
> For the record, we have removed Intel boards from the master branch in the 
> past - See 4.11_branch. This was for boards that used FSP 1.0, including 
> popular Baytrail Atom and Broadwell-DE platforms which are still widely used 
> today. This ensures that those platforms continue to exist on an 
> easy-to-find stable branch where one can reasonably expect to check out the 
> code and have it work. Checking out the master branch only to find out that 
> it doesn't work and then bisecting years worth of commits is a poor user 
> experience.
>
> Perhaps we should follow the 4.11_branch example and do something similar 
> with old AGESA boards? Boards which are forward-ported and tested can stay 
> (or be re-introduced) in the master branch, of course.
>
> Many of the AGESA platforms in the list Arthur provided are ~10 years old. 
> Some are clearly obsolete, like the Gizmosphere boards that have not been in 
> production for years and whose manufacturer is defunct. Others like the 
> PCEngines APUs should be more readily available to test and have developers 
> able to spend some time forward-porting the necessary bits.
>
> Lastly, I'll mention that there is an active crowdfunding effort to 
> re-upstream KGPE-D16 support: https://github.com/osresearch/heads/issues/719. 
> There's clearly a lot of enthusiasm with that board, and 3mdeb is already 
> porting allocate v4 to it. Perhaps enthusiasts for other boards can piggyback 
> on this effort and leverage some of their work to bring other boards up to 
> date.
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Suggestion for deprecation: LEGACY_SMP_INIT & RESOURCE_ALLOCATOR_V3

2021-11-24 Thread ron minnich
The word 'drop' has ominous connotations, but it's not a deletion. A
board is never really gone. It's git. I can still find the Alpha
boards in there if I go back far enough. It's just that active
development ends, as no one is working to keep them up to date.

Would it be ok with you to drop the board, and bring it back when it
is working again?

There is a cost to keeping boards too long when there is no one
maintaining them. They may still build, but they can stop working.
That's happened and in my view it's best not to let it happen. People
should be able to count on a board working if they build an image.

Thanks

ron


On Wed, Nov 24, 2021 at 12:16 PM Mike Banon  wrote:
>
> With all due respect, dropping support for the majority of AMD boards
> - with a quite significant community around them! - doesn't seem like
> a wise decision, if we still care about the coreboot marketshare on
> the worldwide-available consumer PCs. Small improvement in the common
> source, but a huge loss of boards? (almost 50!). For the sake of the
> bright future of the coreboot project, this must be prevented at all
> costs...
>
> Some time ago I did https://review.coreboot.org/c/coreboot/+/41431, a
> change where I tried to get resource allocator V4 working for these
> AGESA boards, and despite its tiny size (less than 20 lines) it almost
> worked, judging by the fact that the fam15h A88XM-E booted fine (although
> there might have been some other problems lurking). I wonder if it could
> help and will be happy to test the new changes related to this.
>
>
> On Wed, Nov 24, 2021 at 8:52 PM Arthur Heymans  wrote:
> >
> > > We could announce this deprecation in the 4.16 release notes, then 
> > > deprecate after 4.18 (8.5 months from now).  At that point, we'd create a 
> > > branch and set up a verification builder so that any deprecated platforms 
> > > could be continued in the 4.18 branch.
> >
> > That timeline of 8.5 months does sound fair. I just found this updated 
> > release schedule in the meeting minutes.
> > If we are going to release every 3 months then I guess that's a good way to 
> > go.
> >
> > I started a CL: https://review.coreboot.org/c/coreboot/+/59618 . I'll 
> > update it to reflect that schedule if it can be agreed upon.
> >
> > On Wed, Nov 24, 2021 at 6:07 PM Martin Roth  wrote:
> >>
> >> Hey Arthur,
> >>
> >> Nov 24, 2021, 05:50 by art...@aheymans.xyz:
> >>
> >> > Hi
> >> > I would like to suggest to deprecate some legacy codepaths inside the 
> >> > coreboot tree and therefore make some newer ones mandatory.
> >> > ... snip ...
> >> > About the timeline of deprecations. Is deprecating non-conforming
> >> > platforms from the master branch after the 4.16 release in 6
> >> > months a reasonable proposal?
> >> >
> >> I have no strong opinion about the platform deprecations, although I 
> >> suspect that PC Engines might be unhappy if its platforms were removed 
> >> from the ToT codebase.
> >>
> >>  My preference would be to announce deprecations in the release notes.  We 
> >> just missed the 4.15 release, but we're switching to a 3 month release 
> >> cadence, so the next release will be in early February, 2.5 months from 
> >> now.
> >>
> >> We could announce this deprecation in the 4.16 release notes, then 
> >> deprecate after 4.18 (8.5 months from now).  At that point, we'd create a 
> >> branch and set up a verification builder so that any deprecated platforms 
> >> could be continued in the 4.18 branch.
> >>
> >> Would this schedule work?
> >>
> >> Martin
> >>
>
>
>
> --
> Best regards, Mike Banon
> Open Source Community Manager of 3mdeb - https://3mdeb.com/


[coreboot] Re: Suggestion for deprecation: LEGACY_SMP_INIT & RESOURCE_ALLOCATOR_V3

2021-11-24 Thread ron minnich
I think, given how good a job you've all done with the release tags
and so on, it's easy for people to get to a working build for a board;
therefore, deprecating non-conforming platforms makes sense, as does
your suggestion of six months.


On Wed, Nov 24, 2021 at 4:51 AM Arthur Heymans  wrote:
>
> Hi
>
> I would like to suggest deprecating some legacy codepaths inside the 
> coreboot tree and thereby making some newer ones mandatory.
>
> The first one I'd like to deprecate is LEGACY_SMP_INIT. This also includes 
> the codepath for SMM_ASEG. This code is used to start APs and do some feature 
> programming on each AP, but also set up SMM. This has largely been superseded 
> by PARALLEL_MP, which should be able to cover all use cases of 
> LEGACY_SMP_INIT, with little code changes. The reason for deprecation is that 
> having 2 codepaths to do the virtually the same increases maintenance burden 
> on the community a lot, while also being rather confusing.
>
> A few things are lacking in PARALLEL_MP init:
> - Support for !CONFIG_SMP on single core systems. It's likely easy to extend 
> PARALLEL_MP or write some code that just does CPU detection on the BSP CPU.
> - Support SMM in the legacy ASEG (0xa0000 - 0xbffff) region. A POC showed 
> that it's not that hard to do with PARALLEL_MP 
> https://review.coreboot.org/c/coreboot/+/58700
>
> No platforms in the tree have any hardware limitations that would block 
> migrating to PARALLEL_MP / a simple !CONFIG_SMP codebase.
>
> The second codepath that I'd like to propose for deprecation is 
> RESOURCE_ALLOCATOR_V3.
> V4 was introduced more than a year ago and with minor changes most platforms 
> were able to work just fine with it. A major difference is that V3 uses just 
> one continuous region below 4G to allocate all PCI memory BAR's. V4 uses all 
> available space below 4G and if asked to, also above 4G too. This makes it 
> important that SoC code properly reports all fixed resources.
>
> Currently only AGESA platforms have issues with it. On gerrit both attempts 
> to fix AMD AGESA codebases to use V4 and compatibility modes inside the V4 
> allocator have been proposed, but both efforts seem stalled. See the (not yet 
> merged) documentation https://review.coreboot.org/c/coreboot/+/43603 on its 
> details. It looks like properly reporting all fixed resources is the culprit.
>
> About the timeline of deprecations. Is deprecating non-conforming platforms 
> from the master branch after the 4.16 release in 6 months a reasonable 
> proposal?
>
> The affected boards currently are:
> AMD_INAGUA
> AMD_OLIVEHILL
> AMD_PARMER
> AMD_SOUTHSTATION
> AOPEN_DXPLPLUSU
> AMD_PERSIMMON
> AMD_THATCHER
> AMD_UNIONSTATION
> ASROCK_E350M1
> ASUS_A88XM_E
> ASROCK_IMB_A180
> ASUS_AM1I_A
> ASUS_F2A85_M
> ASUS_F2A85_M_PRO
> ASUS_F2A85_M_LE
> ASUS_P2B_RAMDEBUG
> ASUS_P2B_LS
> ASUS_P2B_F
> ASUS_P2B_D
> ASUS_P2B_DS
> ASUS_P3B_F
> ASUS_P2B
> ODE_E20XX
> BIOSTAR_AM1ML
> BIOSTAR_A68N5200
> ELMEX_PCM205400
> ELMEX_PCM205401
> GIZMOSPHERE_GIZMO2
> GIZMOSPHERE_GIZMO
> HP_ABM
> HP_PAVILION_M6_1035DX
> JETWAY_NF81_T56N_LF
> LENOVO_G505S
> LIPPERT_FRONTRUNNER_AF
> LIPPERT_TOUCAN_AF
> MSI_MS7721
> PCENGINES_APU1_
> PCENGINES_APU2_
> PCENGINES_APU3_
> PCENGINES_APU4_
> PCENGINES_APU5_
> PCENGINES_APU1
> PCENGINES_APU2
> PCENGINES_APU3
> PCENGINES_APU4
> PCENGINES_APU5
>
> sidenote: Qemu platforms support both LEGACY_SMP_INIT and PARALLEL_MP init so 
> I did not list them here.
>
> Let me know your thoughts.
>
> Arthur
>


[coreboot] Re: Another year, another blob?

2021-11-05 Thread ron minnich
I'd like to get back to a rating system. There's a really simple
measure that I've never seen improved upon, namely, for a given
firmware image, what fraction of the bits in that image come from open
source code?

e.g., the KGPE-D16 would get a 100%, and a typical laptop would get 0%.

This is a really easy number to compute, automatically, and might even
be an option to cbfstool or a ROM tool.

Marketing types are sensitive to numbers like this: we could
prominently display these numbers on coreboot.org

ron

On Fri, Nov 5, 2021 at 10:16 AM Martin Roth via coreboot
 wrote:
>
> Nov 4, 2021, 05:24 by pmen...@molgen.mpg.de:
>
> > On 20.10.21 14:24, Nico Huber wrote:
> >
> >> My proposal:
> >> How about we set up some guidelines how to proceed when adding support
> >> for a new platform that requires any blobs? My vague idea is as follows:
> >> Before the very first commit for such a new platform can be merged, a
> >> set of predefined, blob related questions (to be discussed) should be
> >> answered. This should also apply if a new platform just brings the same
> >> kind of blobs with it as its predecessor (e.g. next gen FSP). Situations
> >> may change and blobs do too. Speaking of FSP, it's actually a set of
> >> many blobs. I think questions should be answered for each part of it
> >> individually.
> >> ...>> What do you think?
> >>
> >
> > Thank you for bringing this up, and I totally agree. Reaching out to the 
> > coreboot community and including it in the planing phase is currently 
> > lacking quite a lot. The coreboot mailing list is the perfect forum for 
> > that, but unfortunately not used a lot.
> >
> > Kind regards,
> > Paul
> >
> The current reality is that binary blobs are needed for almost every platform 
> in coreboot.  I believe the coreboot leadership is united behind the 
> unfortunate reality that allowing these blobs is a requirement for the 
> platform.  I don't think we're going to refuse a platform right now simply 
> because it has blobs.  I'm not sure what coreboot would look like right now 
> if we'd started refusing blobs when the required blobs started appearing, but 
> it definitely wouldn't have many modern platforms.
>
> We all agree that we don't like adding more proprietary binaries, but there 
> are times when a binary needs to be closed for a time until the platform is 
> released such as with the PSE.  This should be acceptable, so long as the 
> promise is actually followed through upon.  If not, the company making that 
> promise loses credibility.  Unfortunately, that's not always a great 
> motivator.  Maybe the coreboot organization & SFC can enter into a contract 
> that specifies a rough timeframe in which the firmware would be open sourced.  
> Hopefully that would be enough of a guarantee.
>
> Every company is in business to make money in some way.  If there's no profit 
> to be made doing something, they're going to have a hard time keeping their 
> doors open.  So long as they don't see a financial benefit to being 
> open-source, they're simply not going to do it. To make this happen, we need 
> more companies requesting that the chip vendors open their proprietary blobs.
>
> Being more involved in the planning phase would be great, and obviously there 
> are companies contributing to coreboot who ARE involved in these discussions. 
> Expecting companies to discuss their plans for future chips in the open 
> probably isn't going to happen.
>
>
>
> Simply refusing to accept the binaries *only hurts us*; most companies will 
> probably be happy using Slimboot or TianoCore. Making it difficult to 
> work with coreboot only makes it easier to show why something shouldn't be 
> open and why the chip vendors shouldn't work with coreboot.  I can't tell you 
> how many times I've heard that the reason coreboot wasn't used or wasn't 
> upstreamed was that it takes too long to get changes into coreboot.
>
>
>
> These things said, I think we can come up with solutions to make things 
> easier. Ron suggested several years ago that we could enable Kconfig to only 
> show the platforms with the amount of binaries that people are comfortable 
> with.  Maybe we need to look into that more.  We can require that the 
> soc/cpu/chipset Kconfig screens display what blobs are required.  We can push 
> to get anything we can moved from the blobs into coreboot.  We can, and we 
> are, pushing the vendors to be more open-source friendly, and we're finally 
> starting to see some more and more people at these companies buying into this.
>
> Martin
>
>
>
>


[coreboot] Re: There is a python in our toolchain?!?

2021-10-16 Thread ron minnich
Still working on getting the setup, but 9elements reports: "We have it
running as CI System attached to a Github repo and do build and boot
testing on Qemu and real hardware with it."

On Sat, Oct 16, 2021 at 1:39 AM Patrick Georgi  wrote:
>
> Am Sa., 16. Okt. 2021 um 02:40 Uhr schrieb ron minnich :
>>
>> Contest is easy to set up, easy to run, it's
>> getting contributors. I understand it's a commitment of a day or so to
>> figure out, but it's worth it in our experience. It's just not that
>> hard.
>>
>> I believe starting down the python path is a bad start, and I'd rather
>> not make it.
>
> It is, but that's the code proposed on the repo.
> I asked you about using Contest in a setting that comes without client/server 
> architecture, SQL database and a dedicated system under test. Any news on 
> that end?
>
> I realize "it's only 160 lines" now, but that's how these things
>> always start. They don't end well.
>
> It's not a license for adding more of the same.
>
> I consider it:
> 1. A signal that we care for testing our tools (as in: we like contributions 
> that improve our test coverage).
> 2. A signal of encouragement towards Ricardo who haplessly ran into the trap 
> of coreboot discouraging (ahem) python (we should document that!) while 
> trying to improve the project's general posture.
>
> As soon as you bring up an alternative path that's acceptable to the project 
> at large, Ricardo offered to rewrite this specific test in whatever we'll use 
> officially (I suppose there's a caveat of it being "within reason": don't get 
> out your brainf*ck-based e2e testing framework!), and I hereby offer to be 
> the rewriter-of-last-resort in case Ricardo is gone by then.
>
> As is, Contest looks like a solution for a _different_ problem. As soon as we 
> can clear up that confusion, I'm all for using it, and the faster we get that 
> done, the faster this python code is removed again.
>
> Therefore I guess one could say that I also consider this change (and my 
> proposal of merging it):
>
> 3. A rather blunt tool to get you to resolve the open questions regarding 
> Contest ;-)
>
>
> Patrick


[coreboot] Re: There is a python in our toolchain?!?

2021-10-15 Thread ron minnich
I would rather we not start depending on pytest. Just my take. There's
a difference between a utility for one chipset and global
infrastructure, and pytest could become the latter.

There's a pretty big testing activity starting up in OCP, involving
many companies, and python is not something we will be using, for all
the usual reasons. Some of those companies had python test infra and
they are moving on. Contest is easy to set up, easy to run, it's
getting contributors. I understand it's a commitment of a day or so to
figure out, but it's worth it in our experience. It's just not that
hard.

I believe starting down the python path is a bad start, and I'd rather
not make it. I like to know programs have errors in the source code
before I run them. I spend too much time debugging python call traces,
having it work back to a missing module, to have confidence in it.
Just recently I ran a program which has worked for months, and got
this:
Traceback (most recent call last):
  File "/Users/rminnich/projects/python-dlipower/scripts/dlipower",
line 30, in 
from dlipower import PowerSwitch
ModuleNotFoundError: No module named 'dlipower'

What changed? Nothing. Why did it break? I don't know. This sucks, but
it super sucks when it happens on a batch test system and you're on a
deadline. no no no no no.

Again, speaking as the person who wrote the first three device tree
parsers in python, I was not as opposed to python for infra back then,
but I am now. Python is always a headache every time I have to deal
with it for things like test.

I realize "it's only 160 lines" now, but that's how these things
always start. They don't end well.

On Fri, Oct 15, 2021 at 11:06 AM Patrick Georgi  wrote:
>
> Am Fr., 15. Okt. 2021 um 19:50 Uhr schrieb Ricardo Quesada 
> :
>>
>> In the meantime, would it make sense, as Jack mentioned, to land my change 
>> [1] as it is? It is small/simple and it only has  ~160 LoC Python.
>> For comparison, other util are using Python: util/qualcomm has ~3500 LoC 
>> Python [2]
>> I'll happily migrate + integrate my test once a end-to-end test has been 
>> chosen.
>
> I'm fine getting it in for now, but it won't see testing every build.
>
> Will wait for other people to chime in, though. If I don't see pushback 
> against getting this in as-is (with the understanding that it might be out 
> quickly again once we decide on a plan of action) by next Thursday at noon in 
> Germany (UTC+0200), I'll see that I submit it.
>
>
> Patrick
> --
> Google Germany GmbH, ABC-Str. 19, 20354 Hamburg
> Registergericht und -nummer: Hamburg, HRB 86891, Sitz der Gesellschaft: 
> Hamburg
> Geschäftsführer: Paul Manicle, Halimah DeLaine Prado


[coreboot] Re: There is a python in our toolchain?!?

2021-10-12 Thread ron minnich
Contest is an emerging standard. The companies I work with have moved away
from python test frameworks. Familiarity did not breed respect for python
test frameworks.

On Tue, Oct 12, 2021 at 7:31 AM Jack Rosenthal  wrote:

> Both bats and expect seem problematic for Ricardo's use case ...
> generating the elog binary format in these tools seems difficult, bash
> wasn't really meant for generating C structs.
>
> One huge advantage of pytest is that it's fairly industry standard at this
> point. There's a good number of people who have used it, and there's a
> large community around it (i.e., finding Q on sites like stack overflow
> is possible). I wasn't able to find a single stack overflow question about
> ConTest, although I may be looking in the wrong places (looks like they do
> have a slack group chat).
>
> I believe Ricardo had an open question on the CL if it was OK to submit as
> an optional test suite for elogtool. I.e., tests don't need to run on CI
> tools or anything, if there's developers that want to run the tests they
> can if they have Python.
>
>
>
> On Tue, Oct 12, 2021 at 8:02 AM Patrick Georgi  wrote:
>
>> Hi Ricardo,
>>
>> sorry for the late response, and that your project fell a bit by the
>> wayside. I guess discussion configuration frameworks is more attractive to
>> this community than testing frameworks (which also explains why we have ~3
>> config frameworks and only ~1 testing frameworks ;-) )
>>
>> So yes, testing is something we really need to improve on. I'm not sure
>> if "contest" is the right solution to your particular problem though. The
>> first thing I see when opening up its page is something about mysql, and
>> scrolling down, something about submitting jobs to a server. That seems
>> more like a potential replacement to our Jenkins install (qa.coreboot.org)
>> rather than something to easily write end-to-end tests for our userland
>> tools.
>>
>> Looking for options, my first instinct was to go for expect(1), but
>> that's really best for interactive uses - might be interesting if we ever
>> grow interactive tools, but so far we manage to stick to clean and tidy
>> CLI. Then I ran into https://github.com/bats-core/bats-core. That seems
>> maintained, reasonably minimal in its dependencies, it emits TAP which is
>> as good as JUnit in terms of Jenkins-integration (so we can have
>> qa.coreboot.org parse the results and give meaningful feedback on
>> review.coreboot.org), and it seems to be fairly widely used for similar
>> things to what you're doing, see
>> https://github.com/bats-core/bats-core/wiki/Projects-Using-Bats - it
>> points to many, many code examples, e.g.
>> https://github.com/docker/machine/blob/master/test/integration/core/core-commands.bats
>> which should cover the basic "call some command, see what it did" scenario
>> quite nicely.
>>
>> Of course in the end it's all a matter of taste, and that's why I opened
>> the can of worms again that is Python-use in coreboot land. As python
>> hasn't seen a warmer reception than last time, I'd look for alternatives.
>> Maybe Bats could do? Of course, I haven't _actually_ used it yet and if
>> writing tests with that makes you want to scream and run away, we'd have to
>> look for other options (please, don't run away!)
>>
>>
>> Regards,
>> Patrick
>>
>> Am Mo., 4. Okt. 2021 um 18:59 Uhr schrieb Ricardo Quesada <
>> ricar...@google.com>:
>>
>>> Hi all,
>>>
>>> Regarding Patrick's 2nd point (end-to-end testing), what's the recommend
>>> way to go forward?
>>> I just need to run one of the Coreboot's utils (in this case
>>> "elogtool"), and make sure the output is the expected one.
>>>
>>> Should I use Contest, as suggested by Ron Minnich?
>>>
>>> Thanks!
>>>
>>>
>>>
>>> On Thu, Sep 30, 2021 at 10:18 AM ron minnich  wrote:
>>>
>>>> Speaking as the person who wrote the first few config tools in python,
>>>> and was happy to see the python dependency gone, I think bringing
>>>> python back in would be a mistake.
>>>>
>>>> Every single python test framework I've worked with works until there
>>>> is a problem, and I then find myself having to walk back a python call
>>>> trace, because what inevitably breaks is the test framework tooling.
>>>> It's why so many projects are removing python test frameworks.
>>>>
>>>> If you want a good test framework, get the linuxboot fork of
>> facebook's contest github.com/linuxboot/contest, written in Go, in use
>> at scale near you.

[coreboot] Re: There is a python in our toolchain?!?

2021-10-12 Thread ron minnich
I set up a raspberry pi as a device under test with contest, and ran
the contest controller in a mode that needed no mysql, and it took
about 14 minutes.

I'd still recommend looking at contest. I've found it very easy to use
and I am very new to it.

On Tue, Oct 12, 2021 at 7:02 AM Patrick Georgi  wrote:
>
> Hi Ricardo,
>
> sorry for the late response, and that your project fell a bit by the wayside. 
> I guess discussing configuration frameworks is more attractive to this 
> community than testing frameworks (which also explains why we have ~3 config 
> frameworks and only ~1 testing framework ;-) )
>
> So yes, testing is something we really need to improve on. I'm not sure if 
> "contest" is the right solution to your particular problem though. The first 
> thing I see when opening up its page is something about mysql, and scrolling 
> down, something about submitting jobs to a server. That seems more like a 
> potential replacement to our Jenkins install (qa.coreboot.org) rather than 
> something to easily write end-to-end tests for our userland tools.
>
> Looking for options, my first instinct was to go for expect(1), but that's 
> really best for interactive uses - might be interesting if we ever grow 
> interactive tools, but so far we manage to stick to clean and tidy CLI. Then 
> I ran into https://github.com/bats-core/bats-core. That seems maintained, 
> reasonably minimal in its dependencies, it emits TAP which is as good as 
> JUnit in terms of Jenkins-integration (so we can have qa.coreboot.org parse 
> the results and give meaningful feedback on review.coreboot.org), and it 
> seems to be fairly widely used for similar things to what you're doing, see 
> https://github.com/bats-core/bats-core/wiki/Projects-Using-Bats - it points 
> to many, many code examples, e.g. 
> https://github.com/docker/machine/blob/master/test/integration/core/core-commands.bats
>  which should cover the basic "call some command, see what it did" scenario 
> quite nicely.
>
> Of course in the end it's all a matter of taste, and that's why I opened the 
> can of worms again that is Python-use in coreboot land. As python hasn't seen 
> a warmer reception than last time, I'd look for alternatives.
> Maybe Bats could do? Of course, I haven't _actually_ used it yet and if 
> writing tests with that makes you want to scream and run away, we'd have to 
> look for other options (please, don't run away!)
>
>
> Regards,
> Patrick
>
> On Mon, Oct 4, 2021 at 18:59, Ricardo Quesada 
> wrote:
>>
>> Hi all,
>>
>> Regarding Patrick's 2nd point (end-to-end testing), what's the recommended way 
>> to go forward?
>> I just need to run one of the Coreboot's utils (in this case "elogtool"), 
>> and make sure the output is the expected one.
>>
>> Should I use Contest, as suggested by Ron Minnich?
>>
>> Thanks!
>>
>>
>>
>> On Thu, Sep 30, 2021 at 10:18 AM ron minnich  wrote:
>>>
>>> Speaking as the person who wrote the first few config tools in python,
>>> and was happy to see the python dependency gone, I think bringing
>>> python back in would be a mistake.
>>>
>>> Every single python test framework I've worked with works until there
>>> is a problem, and I then find myself having to walk back a python call
>>> trace, because what inevitably breaks is the test framework tooling.
>>> It's why so many projects are removing python test frameworks.
>>>
>>> If you want a good test framework, get the linuxboot fork of
>>> facebook's contest github.com/linuxboot/contest, written in Go, in use
>>> at scale near you.
>>>
>>> It's easy to let the joy of building a build system overwhelm the
>>> actual goals of a project. coreboot is not about being a build system.
>>> It's easy to fall into the trap of creating an ever more complex
>>> system that is more than is needed.
>>>
>>> On Thu, Sep 30, 2021 at 9:11 AM Patrick Georgi via coreboot
>>>  wrote:
>>> >
>>> > On Thu, Sep 30, 2021 at 17:29, Jack Rosenthal 
>>> > wrote:
>>> >>
>>> >> With respect to Kconfig, we (at Google) encountered a lovely build flake 
>>> >> after the Kconfig uprev last month in the coreboot tree that took a 
>>> >> couple of weeks to sort out and resolve. Some sort of automated 
>>> >> validation that the code is working could have possibly helped. Of 
>>> >> course, the C implementation of Kconfig has no tests at all. Some tests 
>>> >> are better than nothing.
>>> >

[coreboot] Re: Atomic Accesses to Local APIC

2021-10-06 Thread ron minnich
The specific case is code that is technical debt for *two* steppings
(B and C2) of *one* instance of a CPU (P54C). There's no need to keep
that around, especially since, as pointed out in the CL, it's causing
trouble.

I +2'ed the CL based on this discussion.

On Wed, Oct 6, 2021 at 5:27 AM Peter Stuge  wrote:
>
> ron minnich wrote:
> > same applies to new software applied to antiques.
>
> While you are correct for some software and some antiques I find this
> premise completely unacceptable. This attitude may be convenient for
> developers but it only further normalizes planned obsolescence. Not OK!
>
> Software can make it a high priority to be compatible. Windows is a
> great example of that, and I'm sure that backwards compatibility has
> contributed significantly to its success.
>
> Hardware is no different and can of course also make it a priority to
> be backwards compatible. If we consider the x86 instruction set in
> isolation then that's another great example.
>
> I don't see this problem as lack of compatibility but more as lack of
> transparency, openness and/or collaboration - those are the
> ingredients for a general hardware initialization software without all
> the ridiculous fights that coreboot must endure to this day.
>
>
> //Peter
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Atomic Accesses to Local APIC

2021-10-05 Thread ron minnich
"Doctor, it hurts when I do this"
"Then don't do that"

same applies to new software applied to antiques.

On Tue, Oct 5, 2021 at 1:48 PM Matt B  wrote:
>
> My concern is more about surprise brokenness when trying to use the newest 
> version, if any of those pentiums remain.
>
> On Tue, Oct 5, 2021 at 4:06 PM ron minnich  wrote:
>>
>> That's what versions are all about. It seems sensible to me to leave
>> the old bad stuff behind; if people need it, it's all still there if
>> they know the tag.
>>
>> On Tue, Oct 5, 2021 at 1:02 PM Matt B  wrote:
>> >
>> > I should note I'm not 100% sure what they're doing there.
>> >
>> > Are there any more of these buggy pentiums left in the coreboot tree? (If 
>> > he chooses to update) I can imagine RMS getting real snippy if we break 
>> > his thinkpad. :P
>> >
>> > On Tue, Oct 5, 2021 at 3:53 PM ron minnich  wrote:
>> >>
>> >> nice find. Might be worth adding the text of this comment (modified as
>> >> needed) to the CL so that in years to come people understand the
>> >> reasons.
>> >>
>> >> On Tue, Oct 5, 2021 at 12:51 PM Matt B  wrote:
>> >> >
>> >> > A quick google turned this up:
>> >> > https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-3.0/arch/x86/kernel/cpu/intel.c#253
>> >> >
>> >> > On Tue, Oct 5, 2021 at 4:06 AM Julian Stecklina 
>> >> >  wrote:
>> >> >>
>> >> >> On Tue, 2021-10-05 at 09:29 +0300, Kyösti Mälkki wrote:
>> >> >> > On Mon, Oct 4, 2021 at 8:12 PM Julian Stecklina
>> >> >> >  wrote:
>> >> >> > >
>> >> >> > > But it looks like the workaround was just carried forward with no 
>> >> >> > > discussion
>> >> >> > > of
>> >> >> > > whether it's still necessary or what it actually works around.
>> >> >> > >
>> >> >> >
>> >> >> > Hi
>> >> >> >
>> >> >> > Removal has been suggested with the X2APIC work:
>> >> >> >
>> >> >> > https://review.coreboot.org/c/coreboot/+/55199
>> >> >> >
>> >> >>
>> >> >> I've been looking at 4.13 instead of master. My bad. In master, indeed 
>> >> >> most
>> >> >> atomic accesses are gone and the ones writing to ICR are left. This 
>> >> >> mostly makes
>> >> >> sense and is much clearer now. :)
>> >> >>
>> >> >> Thanks,
>> >> >> Julian
>> >> >>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Atomic Accesses to Local APIC

2021-10-05 Thread ron minnich
That's what versions are all about. It seems sensible to me to leave
the old bad stuff behind; if people need it, it's all still there if
they know the tag.

On Tue, Oct 5, 2021 at 1:02 PM Matt B  wrote:
>
> I should note I'm not 100% sure what they're doing there.
>
> Are there any more of these buggy pentiums left in the coreboot tree? (If he 
> chooses to update) I can imagine RMS getting real snippy if we break his 
> thinkpad. :P
>
> On Tue, Oct 5, 2021 at 3:53 PM ron minnich  wrote:
>>
>> nice find. Might be worth adding the text of this comment (modified as
>> needed) to the CL so that in years to come people understand the
>> reasons.
>>
>> On Tue, Oct 5, 2021 at 12:51 PM Matt B  wrote:
>> >
>> > A quick google turned this up:
>> > https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-3.0/arch/x86/kernel/cpu/intel.c#253
>> >
>> > On Tue, Oct 5, 2021 at 4:06 AM Julian Stecklina 
>> >  wrote:
>> >>
>> >> On Tue, 2021-10-05 at 09:29 +0300, Kyösti Mälkki wrote:
>> >> > On Mon, Oct 4, 2021 at 8:12 PM Julian Stecklina
>> >> >  wrote:
>> >> > >
>> >> > > But it looks like the workaround was just carried forward with no 
>> >> > > discussion
>> >> > > of
>> >> > > whether it's still necessary or what it actually works around.
>> >> > >
>> >> >
>> >> > Hi
>> >> >
>> >> > Removal has been suggested with the X2APIC work:
>> >> >
>> >> > https://review.coreboot.org/c/coreboot/+/55199
>> >> >
>> >>
>> >> I've been looking at 4.13 instead of master. My bad. In master, indeed 
>> >> most
>> >> atomic accesses are gone and the ones writing to ICR are left. This 
>> >> mostly makes
>> >> sense and is much clearer now. :)
>> >>
>> >> Thanks,
>> >> Julian
>> >>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Atomic Accesses to Local APIC

2021-10-05 Thread ron minnich
nice find. Might be worth adding the text of this comment (modified as
needed) to the CL so that in years to come people understand the
reasons.

On Tue, Oct 5, 2021 at 12:51 PM Matt B  wrote:
>
> A quick google turned this up:
> https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-3.0/arch/x86/kernel/cpu/intel.c#253
>
> On Tue, Oct 5, 2021 at 4:06 AM Julian Stecklina 
>  wrote:
>>
>> On Tue, 2021-10-05 at 09:29 +0300, Kyösti Mälkki wrote:
>> > On Mon, Oct 4, 2021 at 8:12 PM Julian Stecklina
>> >  wrote:
>> > >
>> > > But it looks like the workaround was just carried forward with no 
>> > > discussion
>> > > of
>> > > whether it's still necessary or what it actually works around.
>> > >
>> >
>> > Hi
>> >
>> > Removal has been suggested with the X2APIC work:
>> >
>> > https://review.coreboot.org/c/coreboot/+/55199
>> >
>>
>> I've been looking at 4.13 instead of master. My bad. In master, indeed most
>> atomic accesses are gone and the ones writing to ICR are left. This mostly 
>> makes
>> sense and is much clearer now. :)
>>
>> Thanks,
>> Julian
>>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Atomic Accesses to Local APIC

2021-10-04 Thread ron minnich
I would be happy if all those old buggy systems were gone, good idea Peter!


On Mon, Oct 4, 2021 at 9:39 AM Peter Stuge  wrote:
>
> ron minnich wrote:
> > The problem, at this point, is that a change this broad must also be
> > tested across all platforms to make sure it's not breaking.
>
> While true it could be worthwhile to check how often CONFIG_X86_GOOD_APIC
> is unset...
>
>
> > This looks like it was done for a hardware problem. We had a lot of
> > x86 implementations in tree at that time, and they had lots of bugs.
>
> Maybe none of them are left.
>
>
> //Peter
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Atomic Accesses to Local APIC

2021-10-04 Thread ron minnich
The problem, at this point, is that a change this broad must also be
tested across all platforms to make sure it's not breaking.

This looks like it was done for a hardware problem. We had a lot of
x86 implementations in tree at that time, and they had lots of bugs.

On Mon, Oct 4, 2021 at 8:11 AM Julian Stecklina
 wrote:
>
> On Mon, 2021-10-04 at 07:32 -0700, ron minnich wrote:
> > that was pre-git but is there any useful comment in git anyway? I only
> > have the vaguest memory of why it went in.
>
> It was introduced in c84c1906b7 and fcd5ace00b3 without explanation. I
> particularly don't understand the lapic_write_around and lapic_read_around
> functions that pop up. From my perspective, none of this is needed and you can
> just use the usual lapic_read and lapic_write functions.
>
> Julian
>
> >
> > On Mon, Oct 4, 2021 at 7:14 AM Julian Stecklina
> >  wrote:
> > >
> > > Hello,
> > >
> > > I was looking at the Local APIC code in coreboot and was wondering about
> > > `lapic_write_atomic` in src/include/cpu/x86/lapic.h. This uses an atomic
> > > XCHG to
> > > write to Local APIC registers. I would like to understand why this would 
> > > be
> > > necessary, because none of the OSes I've seen or worked on do anything
> > > similar.
> > > Also, the Intel SDM discourages accesses that are not plain loads/stores.
> > >
> > > In the coreboot code, this function seems to have existed for a really long time.
> > > I've
> > > found this from 2004:
> > >
> > > #ifdef CONFIG_X86_GOOD_APIC
> > > # define FORCE_READ_AROUND_WRITE 0
> > > # define lapic_read_around(x) lapic_read(x)
> > > # define lapic_write_around(x,y) lapic_write((x),(y))
> > > #else
> > > # define FORCE_READ_AROUND_WRITE 1
> > > # define lapic_read_around(x) lapic_read(x)
> > > # define lapic_write_around(x,y) lapic_write_atomic((x),(y))
> > > #endif
> > >
> > > This seems to indicate that using atomic writes was a workaround of some
> > > kind.
> > > Does anyone know more?
> > >
> > > Thanks!
> > > Julian
>
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: PXE on Coreboot

2021-10-04 Thread ron minnich
use linuxboot, don't use UEFI payload. That's what we've done at
Google, among other companies (ByteDance too).

if you use linuxboot, then you get our pxeboot written in Go, and that
can use http to pull the image down as well as tftp; http is literally
(measured) 100x faster.

I would avoid using those buggy UEFI DXE at all cost. A Linuxboot
payload can be as small as a few MiB, which easily fits in today's SPI
parts.

On Mon, Oct 4, 2021 at 5:21 AM Rao G  wrote:
>
> Hi Coreboot Developers,
>
> Very warm greetings!
>
> Trying to enable PXE along with UEFI payload on Elkhart Lake CRB platform
>
> Enabling Network stack requires below DXE drivers to be enabled
>
>
>
> https://github.com/tianocore/edk2/blob/vUDK2018/MdeModulePkg/MdeModulePkg.dsc
>
>
>
> MdeModulePkg/Universal/Network/ArpDxe/ArpDxe.inf
>
> MdeModulePkg/Universal/Network/Dhcp4Dxe/Dhcp4Dxe.inf
>
> MdeModulePkg/Universal/Network/DpcDxe/DpcDxe.inf
>
> MdeModulePkg/Universal/Network/Ip4Dxe/Ip4Dxe.inf
>
> MdeModulePkg/Universal/Network/IScsiDxe/IScsiDxe.inf
>
> MdeModulePkg/Universal/Network/MnpDxe/MnpDxe.inf
>
> MdeModulePkg/Universal/Network/VlanConfigDxe/VlanConfigDxe.inf
>
> MdeModulePkg/Universal/Network/Mtftp4Dxe/Mtftp4Dxe.inf
>
> MdeModulePkg/Universal/Network/SnpDxe/SnpDxe.inf
>
> MdeModulePkg/Universal/Network/Tcp4Dxe/Tcp4Dxe.inf
>
> MdeModulePkg/Universal/Network/Udp4Dxe/Udp4Dxe.inf
>
>
>
>  [Components.IA32, Components.X64, Components.IPF, Components.AARCH64]
>
>   MdeModulePkg/Universal/Network/UefiPxeBcDxe/UefiPxeBcDxe.inf
>
>
> Please send your comments/views on how to enable PXE with the UEFI payload. It is 
> also okay if there is an alternate approach to enable PXE without the UEFI 
> payload.
>
>
> Thanks
>
> Ranga
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Atomic Accesses to Local APIC

2021-10-04 Thread ron minnich
that was pre-git but is there any useful comment in git anyway? I only
have the vaguest memory of why it went in.

On Mon, Oct 4, 2021 at 7:14 AM Julian Stecklina
 wrote:
>
> Hello,
>
> I was looking at the Local APIC code in coreboot and was wondering about
> `lapic_write_atomic` in src/include/cpu/x86/lapic.h. This uses an atomic XCHG 
> to
> write to Local APIC registers. I would like to understand why this would be
> necessary, because none of the OSes I've seen or worked on do anything 
> similar.
> Also, the Intel SDM discourages accesses that are not plain loads/stores.
>
> In the coreboot code, this function seems to have existed for a really long time. 
> I've
> found this from 2004:
>
> #ifdef CONFIG_X86_GOOD_APIC
> # define FORCE_READ_AROUND_WRITE 0
> # define lapic_read_around(x) lapic_read(x)
> # define lapic_write_around(x,y) lapic_write((x),(y))
> #else
> # define FORCE_READ_AROUND_WRITE 1
> # define lapic_read_around(x) lapic_read(x)
> # define lapic_write_around(x,y) lapic_write_atomic((x),(y))
> #endif
>
> This seems to indicate that using atomic writes was a workaround of some kind.
> Does anyone know more?
>
> Thanks!
> Julian
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: There is a python in our toolchain?!?

2021-09-30 Thread ron minnich
Speaking as the person who wrote the first few config tools in python,
and was happy to see the python dependency gone, I think bringing
python back in would be a mistake.

Every single python test framework I've worked with works until there
is a problem, and I then find myself having to walk back a python call
trace, because what inevitably breaks is the test framework tooling.
It's why so many projects are removing python test frameworks.

If you want a good test framework, get the linuxboot fork of
facebook's contest github.com/linuxboot/contest, written in Go, in use
at scale near you.

It's easy to let the joy of building a build system overwhelm the
actual goals of a project. coreboot is not about being a build system.
It's easy to fall into the trap of creating an ever more complex
system that is more than is needed.

On Thu, Sep 30, 2021 at 9:11 AM Patrick Georgi via coreboot
 wrote:
>
> On Thu, Sep 30, 2021 at 17:29, Jack Rosenthal 
> wrote:
>>
>> With respect to Kconfig, we (at Google) encountered a lovely build flake 
>> after the Kconfig uprev last month in the coreboot tree that took a couple 
>> of weeks to sort out and resolve. Some sort of automated validation that the 
>> code is working could have possibly helped. Of course, the C implementation 
>> of Kconfig has no tests at all. Some tests are better than nothing.
>
>
> Let me put the record straight:
>
> - The last kconfig "uprev" hasn't been a simple update the way 
> https://review.coreboot.org/c/coreboot/+/57880 is, but rebuilt the entire 
> build system integration to ease maintenance
> - That issue sprang up because before the kconfig update, we were shipping 
> prebuilt parser files (C code) while now we made bison and flex hard 
> dependencies for our build
> - Tests covering the C code wouldn't have helped, because the issue wasn't in 
> the C code
> - The issue we were facing has been an external dependency (namely: the 
> Chromium OS development environment shipping a broken version of bison(1))
> - Fixing bison in CrOS wouldn't have helped any because we have to assume 
> that other users come with the same kind of broken bison tool
> - The fix has been to ship pre-built files again to remove an external 
> dependency
>
> The alternative that we did actually consider was to add support for building 
> bison and flex to util/crossgcc/buildgcc. For three files that's sheer 
> madness, so we went back to precompiled files instead.
>
> In relation to your proposal to adopt kconfiglib: We can run into any kind of 
> external tool demonstrating weird behavior. That's true for bison (as seen 
> here) just as it can be true for arbitrary python versions (even when 
> specified to be "python 3.6+" or whatever): Linux distributions do strange 
> things to their packages, and we're not a Linux-only project, so even 
> official, unchanged, straight-from-the-server python might behave 
> unexpectedly on less well exercised platforms.
>
> The best way to reduce issues on that end is to avoid external dependencies - 
> like bison, like flex... like python.
> I'd _love_ to avoid the dependency on the host compiler as well, but that's 
> one of those "sheer madness" moments when you support a multitude of 
> operating systems on as many architectures.
>
> Introducing kconfiglib (and through it, a deep reliance on python) just 
> doesn't seem to provide comparable benefits.
>
>
> Patrick
> --
> Google Germany GmbH, ABC-Str. 19, 20354 Hamburg
> Registergericht und -nummer: Hamburg, HRB 86891, Sitz der Gesellschaft: 
> Hamburg
> Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: RFC: On testing and reporting how well coreboot does on real hardware

2021-07-17 Thread ron minnich
I think there are two issues here. First, we can't get it all right at
once. But, second, we have to think in terms of eventual scale. The
system will need to make the status of hundreds of motherboards
accessible.

The Google doc is wonderful for a deep dive, but a bit overwhelming as
the first thing one sees.

I'm not sure the testportal scales that well either. The only report
system I've seen that approaches our needs is the chromeos build
waterfall, but even that may not be enough.

I think all the efforts to date were very good, and we need to learn
what we can from them. I also feel they are not quite right.

On Sat, Jul 17, 2021 at 2:45 AM Daniel Kulesz via coreboot
 wrote:
>
> Hi Patrick,
>
> > Looking at the software you described it seems a wonderful tool for humans
> > to create, execute test steps and analyse test results entered manually by
> > a human.
>
> Actually, this was the primary goal - we wanted to support testing of systems 
> that are close to hardware (such as coreboot). Especially with coreboot, 
> personally, I found too often that I bought a board that was officially 
> "supported" just to find out that some things were actually broken while they 
> seemed to have been working well in past versions (happened to me lately with 
> a Thinkpad T410, see my previous postings to the list about this). The idea 
> in SystemTestPortal is to support testing for such regressions - but this of 
> course requires the availability of humans that execute these tests.
>
> > What we are looking for is only the "data store" and visual representation
> > of such, where automated tests are run by robots. Those (self-hosted or
> > propietary) robots need to publish their test results somewhere using a web
> > API.
>
> I see. Well, there is a different project named "ReportPortal.io" that (imho) 
> does exactly that. We had a joint stand at FOSDEM 2019 together with them and 
> KiwiTCMS. It might be worth looking into that.
>
> > Does SystemTestPortal support input from robots over for example REST API?
>
> Not at the moment but it is a feature request we received multiple times so 
> we will eventually add this in the future.
>
> > Does it support the idea of different products/configurations?
>
> Yes, it does. We have a two-dimensional concept of products and variants, 
> but I think coreboot would need three dimensions or even more, right? So for 
> example you could have:
>
> - coreboot 4.14 ("clean" without patches)
> - on a Thinkpad T410
> - built with config options X and Y enabled but Z disabled
>
> In addition, there could be also different configurations of the target 
> machine, different OSs on which you would test etc. - the key challenge here 
> is to decide what to put into the test cases themselves and what to put in 
> the products/variants/configuration metadata. Maybe you could try to describe 
> a data model that would be ideal from the coreboot perspective?
>
> Cheers, Daniel
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Deprecating spurious PCI bus master enabling (was Re: Planning the next coreboot release)

2020-11-10 Thread ron minnich
nice idea!

On Tue, Nov 10, 2020 at 7:13 AM Nico Huber  wrote:
>
> On 10.11.20 16:06, Nico Huber wrote:
> > If anybody knows or discovers more cases where it needs to be enabled
> > in advance by coreboot, please mention it here.
>
> We just discussed on IRC cases where unfixable OS drivers might need it.
> For such cases, it would probably be best to add individual Kconfig
> switches for each case. This would make it easier to get rid of the big
> switch.
>
> Nico
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Universal Payload - RFC and Invitation from Universal Payload community

2020-11-05 Thread ron minnich
Thanks for the note.

I think you're going at this somewhat backwards. While this may seem
to you to be a universal payload, to many it appears to be "make
everybody support UEFI model."

As you have seen, this will not be well received.

As Peter points out, the coreboot payload model is simple,
deliberately, and in that simplicity there has been a lot of thought.
As we know, it's far harder to write less code than to write more
code. The coreboot model is to pare down functionality to the minimum
possible. In the coreboot model, complexity belongs in the payload.

In fact, the coreboot payload went through four distinct evolutions
before settling on the current model. That was so long ago it is
little remembered; one indicator of our success is that the payload
model has been unchanged for almost the last 15 years.

But a common payload model might be desired. (could we call it common
and not universal? universal has a feel of hubris)

Were I to do this, I'd start with a survey. What do the different
firmware systems do? What is a payload in these systems? What do
payloads require of the firmware, and what do they provide? How is
data communicated to payloads? Is there value in having multiple
payloads, and can payloads return? How are payloads selected? And so
on.

From there, we could figure out what the limitations of these models are.

From there, we could start to talk about common models.

A good example for me is the UEFI handoff block. I think the design is
fatally flawed, as it is not self-describing data: you have to know
what struct was compiled, what pieces were not #ifdef'd out, by what
version of what compiler, with what padding and alignment rules, or
you can't use the struct. That's not great.

So, rather than accepting a HOB as a given, in a standard that may
well last 20 years, why not state the goal of having a way to
communicate information to a payload, and then work out a good way to
do that, and THEN define what a HOB could be.

Might you consider restarting this as a true community effort?
Unfortunately there is an appearance that Intel is dictating the
standard to the rest of us, take it or leave it, and that is going to
generate bad feeling.

Thanks

ron

On Mon, Nov 2, 2020 at 2:58 AM Banik, Subrata  wrote:
>
> HI All,
>
>
>
> coreboot is a modular design with hardware initialization stage followed by a 
> payload to boot OS https://doc.coreboot.org/payloads.html
>
>
>
> There is a new initiative to standardize the bootloader to payload interface. 
> The initiative is called Universal Payload project and details can be found @ 
> https://github.com/universalpayload/Introduction
>
>
>
> The goal for this initiative is interoperability between bootloaders and 
> payloads so that different bootloaders can work with different payloads.
>
>
>
> An early draft of the spec can be found @ 
> https://universalpayload.github.io/documentation/spec/spec.html.
>
>
>
> The Universal Payload community welcomes feedback and contributions.
>
>
>
> We are developing various POC codes to demonstrate the concept. One POC uses 
> coreboot as the bootloader and EDKII UEFI as the payload 
> https://github.com/universalpayload/coreboot/tree/universal_payload
>
>
>
> Thanks,
>
> Subrata
>
>
>
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Resource allocator: multiple boards regression

2020-05-17 Thread ron minnich
This is a welcome change, long overdue. It's not surprising there were
issues. It's far reaching enough that, if you had time, something in
Documentation detailing what you did and why could be very valuable
for future contributors?

ron

On Sat, May 16, 2020 at 12:30 PM Furquan Shaikh
 wrote:
>
> On Sat, May 16, 2020 at 9:07 AM Nico Huber  wrote:
> >
> >
> > I would have wished for an announcement and that more people would have
> > been invited for review. But apart from that, I don't see a reason to
> > quarrel.
>
> Thanks Nico! Feedback taken. I will keep that in mind the next time :).
>
> >
> > Nico
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


[coreboot] Re: Intel ME

2020-03-14 Thread ron minnich
This one is not bad:
https://www.howtogeek.com/334013/intel-management-engine-explained-the-tiny-computer-inside-your-cpu/

overall, the ME could have been a neat idea, had it come without the
attendant security through obscurity.

it "coulda been a contenda"

ron

On Sat, Mar 14, 2020 at 4:45 PM Gregg Levine  wrote:
>
> Hello!
> Would one of you good people take the time to explain what the
> Management Engine that Intel thinks our system needs and of course
> does?
>
> And did AMD make the mistake of thinking the same thing? Apologies in
> advance if we have people from both companies on this list.
> -
> Gregg C Levine gregg.drw...@gmail.com
> "This signature fought the Time Wars, time and again."


[coreboot] Re: OpenBMC on KGPE-D16, somebody has it working?

2019-10-13 Thread ron minnich
On Sun, Oct 13, 2019 at 6:42 AM Merlin Büge  wrote:

>
> ... as far as I can see, u-bmc doesn't support the AST2050 yet (the BMC
> chip used by the KGPE-D16).

This is the kind of challenge people on this list live for :-)

The cool thing about u-bmc is that it has only about 1000 lines of
assembly. It turns on dram, loads linux from flash, starts it. It's
not u-boot, or coreboot, or anything. In fact it's a lot like
linuxbios v0.

ron


[coreboot] Re: OpenBMC on KGPE-D16, somebody has it working?

2019-10-12 Thread ron minnich
If you like running systemd on your BMC, the minimum 60 seconds
openbmc takes to boot, the complex, fragile, and lengthy build from
source, and the openbmc stack's need for a giant memory footprint and
lots of NVMe, stop reading.

IF, OTOH, you like the idea of a very lightweight stack, which builds
in minutes not hours, and needs maybe 32M of memory to run, and boots
much faster, well, you might want to check out u-bmc.

https://github.com/u-root/u-bmc

On Sat, Oct 12, 2019 at 2:00 PM Kinky Nekoboi  wrote:
>
> Debian 10.
>
> That's perfect, I have the necessary flashing tools.
>
>
> Am 12.10.19 um 22:57 schrieb Merlin Büge:
> > Hi,
> >
> >> Due to lib dependency hell I am not able to build openbmc myself atm.
> >>
> > Are you building on Debian (9 or 10)?
> >
> >> Does somebody have openbmc working for their KGPE-D16 system and can
> >> maybe send me a ROM?
> > I'll send you a separate email.
> >
> >> Can I flash the module with a programmer and test clip, or only internally?
> > Yes, a SOIC16 testclip and an SPI programmer like the CH341a will work.
> >
> >
> > Merlin
> >
> >
>
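Merlin's answer above can be sketched as flashrom invocations (an untested sketch: file names are hypothetical, and you should verify clip wiring and keep the read-back backups before writing anything):

```sh
# Read the BMC flash twice through the SOIC16 clip and compare,
# to verify the clip is making good contact
flashrom -p ch341a_spi -r backup1.bin
flashrom -p ch341a_spi -r backup2.bin
cmp backup1.bin backup2.bin

# Only once the two reads match, write the new image
flashrom -p ch341a_spi -w openbmc.rom
```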


[coreboot] Re: Apollolake: SATA issues

2019-10-07 Thread ron minnich
One thought just occurred to me that might be useful.

We have a utility, called utk, that lets you script edits to UEFI
firmware volumes. I use it all the time to slice and dice and strip
these firmware volumes down. There are hundreds (500+) of DXEs in these
UEFI volumes that are not needed; in one case I've removed 214 out of
424.

what you might do: use utk to remove lots of the DXEs until you get to
the bare minimum. Then, try to work from there to find out which DXE
is setting the parameters in sata. Then, from there, you might RE what
it does so you can figure out what coreboot needs to do.

The goal is not to shrink UEFI down, but to isolate the thing in UEFI
that adjusts tuning registers and figure out what it does, and get
that into coreboot.
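A sketch of that workflow with utk (from the fiano project). The operation names here are from memory and the image names are hypothetical, so check `utk -h` and the fiano README before relying on them:

```sh
# Dump the image layout to see which DXEs are present
utk vendor.rom table

# Remove DXEs matching a name pattern and write the smaller image
utk vendor.rom remove 'UsbBot.*' save trimmed.rom

# Repeat, bisecting the removed set, until removing one more DXE
# breaks SATA -- that DXE is the one touching the tuning registers.
```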


On Mon, Oct 7, 2019 at 5:43 AM werner@siemens.com
 wrote:
>
> Hi Christian.
>
> In APL there are special tuning registers for the SATA lanes which may need
> adjustment.
> The fact that stock UEFI runs without issues may be explained by UEFI
> touching these registers.
>
> Do you have access to related intel docs? Otherwise have a look at
> https://review.coreboot.org/plugins/gitiles/coreboot/+/refs/heads/master/src/mainboard/siemens/mc_apl1/variants/mc_apl1/mainboard.c
>  line 71 and following for an example.
>
> You may need this, too, if your HW design needs some help to run stably.
>
> Werner
>
>
> -Ursprüngliche Nachricht-
> Von: Christian Gmeiner 
> Gesendet: Montag, 30. September 2019 16:10
> An: coreboot 
> Betreff: [coreboot] Apollolake: SATA issues
>
> Hi all
>
> I have ported coreboot to a custom design based on APL and have random SATA 
> problems with CFast cards. I am using the latest public APL FSP from github 
> with the latest coreboot master.
>
> From time to time the SATA link 'dies' during runtime, or it is not possible 
> to establish the SATA link at all (in the used u-boot payload). At the moment 
> I am running out of ideas and I hope someone can point me in the right 
> direction.
>
> Btw. with the vendor blob SATA works without any problems :/
>
> --
> greets
> --
> Christian Gmeiner, MSc


[coreboot] Re: AMD AGESA maintenance and/or deprecation

2019-09-12 Thread ron minnich
excellent points matt


"There is no working bad source so bad that a binary blob is better"

"working bad source should never be replaced by a blob, but only by
improved working bad source"

"we must never remove working bad source that is in use if the only
replacement is a blob"

On Thu, Sep 12, 2019 at 1:47 PM Matt B  wrote:
>
> Greetings all,
>
> Patrick Georgi said:
>>
>> Mostly chatter on IRC, to be honest. Part of the intent of this mail was to 
>> surface this more officially.
>
> It would be helpful to carry over more details when porting discussions from 
> IRC. It is always good to be specific about how something is broken, not just that 
> the garbage collector is coming.
>
> Kyösti said:
>>
>> Since coreboot seems to accept blobs with ease nowadays, the solution to 
>> keep these platforms can be such that we move AGESA vendorcode to submodule.
>
> This is one way of handling it, but I don't think it does anything to address 
> the quality of the code. Out of sight and out of mind, so to speak.
>
>> We can equally make such quality assurance arguments about FSP2.0, once 
>> commercial vendor gets something merged, they don't really care how much it 
>> gets in the way of overall architecture or subsystems evolution.
>
> As Ron noted:
>>
>> We kept it relatively the same when AMD was a player in coreboot, but 
>> they're long gone; time for major surgery?
>
> I assume that AMD probably doesn't have any plans to revisit 15h if/when they 
> follow through on porting 17h? :-)
>
> Patrick said:
>>
>> This was about surfacing issues like these earlier, to reduce the amount of 
>> surprise. Having your status update on the C bootblock and CAR migration 
>> implementation is useful for that, too!
>
> Is there a single point of reference for this kind of information, to avoid 
> surprises?
>
> Ron said:
>>
>> Is this statement true?
>> "There is no source so bad that a binary blob is better"
>
> Unless this is the trivial case where the blob boots and the code doesn't (or 
> is otherwise functionally inferior), I would argue that any code can be 
> compiled into a blob.
> Thus, the code is automatically as-good-or-better. :-)
>
>> If we take that to be true, then what about this:
>> "bad source should never be replaced by a blob, but only by better source"
>
> This is where things begin to come apart. The first part is true based on (1) 
> but unless someone writes better source, eventually the code breaks and you 
> enter the trivial case above.
>
>> "we must never remove source that is in use if the only replacement is a 
>> blob"
>
> If the code works (it is in use after all), then refer to (1). Otherwise, 
> this is the trivial case.
>
> Once you find yourself in the trivial case, you need to make the decision to 
> either fix the broken code, or embrace the blob.
>
> Sincerely,
> -Matt
>
> On Thu, Sep 12, 2019 at 1:36 PM ron minnich  wrote:
>>
>> Interesting discussion! It got me to wondering, having spent a lot of
>> time in the V1, V2, and V3 trees the last few months.
>>
>> Is this statement true?
>> "There is no source so bad that a binary blob is better"
>>
>> If we take that to be true, then what about this:
>> "bad source should never be replaced by a blob, but only by better source"
>>
>> From there:
>> "we must never remove source that is in use if the only replacement is a 
>> blob"
>>
>> Would that help guide the decision on AGESA? Or is it just more bikeshedding 
>> :-)
>>
>> I personally find the AgEsA cOdE quite UgLy, but ... it's code. I
>> wonder if it can't be fixed with a few
>> initial spatch passes. We kept it relatively the same when AMD was a
>> player in coreboot, but they're long
>> gone; time for major surgery?
>>
>> I don't think we want to say "ugly code, nuke it" unless there is
>> replacement that is code.
>>
>> ron
>>
>>
>>
>> On Thu, Sep 12, 2019 at 9:46 AM awokd via coreboot
>>  wrote:
>> >
>> > Patrick Georgi via coreboot:
>> > > Hi everybody,
>> > >
>> > > coreboot is shipping AMD's open sourced AGESA for a few generations
>> > > as part of its tree.
>> > >
>> > > Some people advocate dropping the code due to its quality and lack
>> > > of maintenance while others are happy with using the code.
>> > >
>> > > So: to help keep this code alive, we'd need maintainers - people
&

[coreboot] Re: AMD AGESA maintenance and/or deprecation

2019-09-12 Thread ron minnich
Interesting discussion! It got me to wondering, having spent a lot of
time in the V1, V2, and V3 trees the last few months.

Is this statement true?
"There is no source so bad that a binary blob is better"

If we take that to be true, then what about this:
"bad source should never be replaced by a blob, but only by better source"

From there:
"we must never remove source that is in use if the only replacement is a blob"

Would that help guide the decision on AGESA? Or is it just more bikeshedding :-)

I personally find the AgEsA cOdE quite UgLy, but ... it's code. I
wonder if it can't be fixed with a few
initial spatch passes. We kept it relatively the same when AMD was a
player in coreboot, but they're long
gone; time for major surgery?

I don't think we want to say "ugly code, nuke it" unless there is
replacement that is code.

ron



On Thu, Sep 12, 2019 at 9:46 AM awokd via coreboot
 wrote:
>
> Patrick Georgi via coreboot:
> > Hi everybody,
> >
> > coreboot is shipping AMD's open sourced AGESA for a few generations
> > as part of its tree.
> >
> > Some people advocate dropping the code due to its quality and lack
> > of maintenance while others are happy with using the code.
> >
> > So: to help keep this code alive, we'd need maintainers - people
> > willing to work through issues, improve the code quality and generally
> > act as a point of contact if any questions arise.
> >
> > One item to start with could be to work through Coverity
> > issues, where the largest proportion is now AGESA based
> > after Jacob cleaned up most of the rest of the tree. See
> > https://scan.coverity.com/projects/coreboot
>
> I would like to help out. Is there someone experienced who can mentor me
> on setting up a streamlined, open-source development environment for
> Coreboot? I've been using grep and gedit for my hacking needs, but
> trying to maintain a 5 level deep state table of AGESA code dependencies
> in my head was a problem. How did Jacob get started and what IDE did he
> use, for example?
>
> > Drivers needs support to not get in the way of later development,
> > and AGESA is sorely lacking in that department. If you see value
> > in that code, please step up now, not only when we're looking into
> > removing that code for good.
>
> Which drivers and what support? I see Kyösti Mälkki replied with better
> questions. Where is the biggest pain point today, i.e. not already being
> worked on and would return the most value to Coreboot by my work?


[coreboot] Re: Web site revamp

2019-09-02 Thread ron minnich
> What about following proposal:
> coreboot is an extended firmware platform that delivers a lightning
> fast and secure boot experience on modern computers and embedded
> systems. As an Open Source project it aims to provide auditability and
> maximum control over technology; on some platforms (especially
> non-open ISA platforms), some boot functionalities are provided by
> Silicon Vendor binary blobs.

This is too wordy and full of jargon, and confuses goals.

It was never about the speed. The speed was a nice side effect, but it
was really about the openness. Once you start talking about speed you
lose the thread -- we had this problem all the time in 2000: vendors
got focused on fast and missed the main point, that we wanted control.

Remember that many people come to coreboot thinking they're going to
load a usb stick up and install it somehow. Few people have any clue
what's going on here.

You need fewer adjectives, and simpler words.

ron


[coreboot] Re: Web site revamp

2019-09-01 Thread ron minnich
Right after sending my note I got a note from a friend:

https://www.phoronix.com/scan.php?page=news_item=AMD-Hiring-For-Coreboot

"We were tipped off today that AMD's Head of Platform Firmware, Edward
Benyukhis, publicly posted on LinkedIn that he is "looking to hire
someone with solid Coreboot and UEFI background." If you have Coreboot
experience or know someone who is, see LinkedIn for contacting
Benyukhis."

I am convinced that things are getting better.

On Sun, Sep 1, 2019 at 7:38 PM ron minnich  wrote:
>
> I should add that I support what Patrick is saying in this discussion.
> I think he's right.
>
> This is a good question to ask: "Can you point to even one instance in
> the past few years where this strategy has yielded less vendor
> proprietary firmware ..."
>
> I think I can: I can point to the increasing number of committers from
> intel.com to coreboot.
>
> I can also point out that every FSP/coreboot system that ships, ships
> without the many megabytes of UEFI DXEs that are not needed; FSP
> represents a pretty large blob-ectomy.
>
> More is happening. There will be some interesting announcements this
> month. I wish developments could be more public, for a number of
> chipset and board vendors. But I do know for a fact the world is
> moving in the right direction, although it is agonizingly slow. And
> such movement is thanks to unceasing efforts of folks like Patrick to
> make it so.
>
> ron
>
> On Sun, Sep 1, 2019 at 2:55 PM Timothy Pearson
>  wrote:
> >
> > - Original Message -
> > > From: "Patrick Georgi" 
> > > To: "Timothy Pearson" 
> > > Cc: "David Hendricks" , "coreboot" 
> > > 
> > > Sent: Sunday, September 1, 2019 10:30:30 AM
> > > Subject: Re: [coreboot] Re: Web site revamp
> >
> > > Am So., 1. Sept. 2019 um 02:23 Uhr schrieb Timothy Pearson <
> > > tpear...@raptorengineering.com>:
> > >
> > >> Another bad analogy: if I start a project for "maximum control" of an
> > >> airliner, but the reality of the situation is the best level of control I
> > >> can ever attain is how far back my seat reclines, then the wording is
> > >> purposefully grandiose, opaque and (IMO) rather weasel-worded to make it
> > >> sound like the project is doing something far more than it can ever
> > >> accomplish.
> > >>
> > > A project for maximum control of an airliner that stops at seat
> > > configurations shouldn't talk about maximum control of an airliner.
> > >
> > > But coreboot isn't content with all these blobs, so this analogy, no 
> > > matter
> > > how bad or good it is, doesn't apply.
> > > Blobs are just the situation we're in that enables us to continue to work
> > > on coreboot, push it in the market (and thereby create demand. Even IBVs
> > > are now doing coreboot development for hire after years of "if you want to
> > > do a project like that you MUST go UEFI") and create channels where we can
> > > discuss how to get rid of blobs.
> > > Most of this doesn't happen in the open, but that's due to how business
> > > works, with NDAs and all that other magic pixie dust that corporate 
> > > lawyers
> > > sprinkle over these companies' engineers' work as if that's good for
> > > anything.
> > >
> > > coreboot is _aiming_ for nothing less than 100% open. We're just not
> > > waiting for that to happen spontaneously (it won't).
> > >
> > > Since home-CMOS isn't a very likely prospect (and where they succeed they
> > > mostly move the goal post because all those lithography machines still 
> > > need
> > > to come from somewhere), we need to work with silicon vendors somehow to 
> > > be
> > > able to run software. Getting them to put their magic in the chips and
> > > documentation for those rather than in their legal agreements seems a more
> > > worthwhile cause to me.
> > >
> > > People don't want to get into that dirty work? Fine. libreboot can be a
> > > good home for them. But they won't change the shape of the industry in a
> > > way that renders the blobs question obsolete: Silicon vendors don't have 
> > > to
> > > care about us until it means business. Getting coreboot out there in
> > > products is the money-shaped carrot we're dangling in front of them.
> > >
> > > 10 years ago it would have been entirely impossible for Intel to
> > > acknowledge that any part of firmware could be &

[coreboot] Re: Web site revamp

2019-09-01 Thread ron minnich
I should add that I support what Patrick is saying in this discussion.
I think he's right.

This is a good question to ask: "Can you point to even one instance in
the past few years where this strategy has yielded less vendor
proprietary firmware ..."

I think I can: I can point to the increasing number of committers from
intel.com to coreboot.

I can also point out that every FSP/coreboot system that ships, ships
without the many megabytes of UEFI DXEs that are not needed; FSP
represents a pretty large blob-ectomy.

More is happening. There will be some interesting announcements this
month. I wish developments could be more public, for a number of
chipset and board vendors. But I do know for a fact the world is
moving in the right direction, although it is agonizingly slow. And
such movement is thanks to unceasing efforts of folks like Patrick to
make it so.

ron

On Sun, Sep 1, 2019 at 2:55 PM Timothy Pearson
 wrote:
>
> - Original Message -
> > From: "Patrick Georgi" 
> > To: "Timothy Pearson" 
> > Cc: "David Hendricks" , "coreboot" 
> > 
> > Sent: Sunday, September 1, 2019 10:30:30 AM
> > Subject: Re: [coreboot] Re: Web site revamp
>
> > Am So., 1. Sept. 2019 um 02:23 Uhr schrieb Timothy Pearson <
> > tpear...@raptorengineering.com>:
> >
> >> Another bad analogy: if I start a project for "maximum control" of an
> >> airliner, but the reality of the situation is the best level of control I
> >> can ever attain is how far back my seat reclines, then the wording is
> >> purposefully grandiose, opaque and (IMO) rather weasel-worded to make it
> >> sound like the project is doing something far more than it can ever
> >> accomplish.
> >>
> > A project for maximum control of an airliner that stops at seat
> > configurations shouldn't talk about maximum control of an airliner.
> >
> > But coreboot isn't content with all these blobs, so this analogy, no matter
> > how bad or good it is, doesn't apply.
> > Blobs are just the situation we're in that enables us to continue to work
> > on coreboot, push it in the market (and thereby create demand. Even IBVs
> > are now doing coreboot development for hire after years of "if you want to
> > do a project like that you MUST go UEFI") and create channels where we can
> > discuss how to get rid of blobs.
> > Most of this doesn't happen in the open, but that's due to how business
> > works, with NDAs and all that other magic pixie dust that corporate lawyers
> > sprinkle over these companies' engineers' work as if that's good for
> > anything.
> >
> > coreboot is _aiming_ for nothing less than 100% open. We're just not
> > waiting for that to happen spontaneously (it won't).
> >
> > Since home-CMOS isn't a very likely prospect (and where they succeed they
> > mostly move the goal post because all those lithography machines still need
> > to come from somewhere), we need to work with silicon vendors somehow to be
> > able to run software. Getting them to put their magic in the chips and
> > documentation for those rather than in their legal agreements seems a more
> > worthwhile cause to me.
> >
> > People don't want to get into that dirty work? Fine. libreboot can be a
> > good home for them. But they won't change the shape of the industry in a
> > way that renders the blobs question obsolete: Silicon vendors don't have to
> > care about us until it means business. Getting coreboot out there in
> > products is the money-shaped carrot we're dangling in front of them.
> >
> > 10 years ago it would have been entirely impossible for Intel to
> > acknowledge that any part of firmware could be "open" (yes, Tianocore was
> > open source but that was for the IBV consortium that calls itself UEFI
> > Forum, not for the general population to hack on). The fully open i945 port
> > (that is older than 10 years) happened despite Intel, not because of them,
> > and I'm pretty sure that back then the folks in charge there were expecting
> > that to be a one-off. And now they're a big sponsor of and send lots of
> > speakers to the Open Source Firmware Conference.
> >
> > That's quite a shift in perspective, and it wouldn't have happened without
> > coreboot remaining a constant talking point and thorn in Intel's side.
> > There's one easy way to unroll all of that and that's by stopping to work
> > with them. I don't think that would be a desirable result.
> >
> > What is to be gained by hiding the reality of the situation from
> >> non-technical users visiting the website?
> >>
> > I suppose we could create an online course on hardware init with some
> > chapters of vendor business models thrown in to provide a full picture, but
> > with anything less than that, no matter what we say, users will be confused.
> > Firmware is simply a rather opaque field (see: the magic capabilities that
> > random people on the internet tend to ascribe to "firmware").
> >
> >
> > Patrick
> > --
> > Google Germany GmbH, ABC-Str. 19, 20354 Hamburg
> > Registergericht und -nummer: Hamburg, HRB 

[coreboot] Re: Web site revamp

2019-09-01 Thread ron minnich
Just a note on oreboot: the name is taken, it means Rust, not C, and
you can see our talk about it next week. It works on SiFive HiFive-U.
We'd love to have help on real ARM chips.

We intend to be forceful about holding the line on "no development
from NDA data sheets", "no binary blobs", as well as holding the line
on "no C". This also means we don't expect to run on x86 now if ever.

ron


[coreboot] Re: Suggestion Computer-On-Module implementations

2019-07-10 Thread ron minnich
On Tue, Jul 9, 2019 at 10:59 PM Frans Hendriks  wrote:

> The COM board is 'subset' of mainboard.
> Suggestion is splitting COM board support (which is not a complete mainboard) 
> from carrier support.

we had this type of board in LinuxBIOS v1 with some PPC systems and
followed a similar approach.


[coreboot] Re: More coding style change proposals

2019-06-25 Thread ron minnich
On Tue, Jun 25, 2019 at 1:29 PM Julius Werner  wrote:

> No we don't. We had a long discussion about multi-line comment styles
> some years back and I made the same argument there. ;)

you're totally right, oops! I misread the coding style again just a
few days ago. Sheesh!

>. I'd
> bet 95+% of coreboot systems run Linux afterwards, and when adding
> support for new platforms it's just normal that the hardware vendor
> has the same people working on both kernel and firmware drivers for
> the same component.

I'm sure you are absolutely correct here. That said, I'm starting to
encounter folks from, e.g., the Rust community who find some aspects
or our coding style odd. I'm expecting to see more of this. From
personal experience, I can say that moving from Rust-based
hypervisors, kernels, and firmware to coreboot is starting to feel a
bit ... weird, and in this case, coreboot is the outlier.

But yeah, point taken that many people are working in the Linux kernel
style nowadays ... OTOH, we have had our encounters with UeFi StYlE
CoDe, and they could make a similar argument around code style ...

I'm still of the opinion that the addition of {} after if is a good
idea. After years of Go and now Rust, seeing those unprotected bits of
code after an if just scares me.

Thanks

ron


[coreboot] Re: More coding style change proposals

2019-06-25 Thread ron minnich
If we're going to talk about wasted lines consider this:
we mandate comments as follows:
/*
 * something
 */

I just did a test and we have something like 40K lines of white space
spent on that dangling */.

We've got something like 30K ifs not followed by {

So, were we to get away from the comments with "wings" (as some
kernel people once called them) we could save all those lines
with just a /* or */ on them. This would more than make up for
additional white lines added by a dangling {.
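A rough sketch in C of how such a count might be taken (this is an assumed metric for illustration, not the script actually used for the 40K figure): count lines whose only content, after leading whitespace, is a bare block-comment opener or closer.

```c
#include <assert.h>
#include <string.h>

/* Count "wing" lines: lines whose only content, after leading
 * whitespace, is a bare block-comment opener or closer.  Trailing
 * whitespace is deliberately not handled -- this is only a sketch. */
int wing_lines(const char *src)
{
	int count = 0;

	while (src && *src) {
		const char *end = strchr(src, '\n');
		size_t len = end ? (size_t)(end - src) : strlen(src);

		/* skip leading indentation */
		while (len && (*src == ' ' || *src == '\t')) {
			src++;
			len--;
		}
		if (len == 2 && (!strncmp(src, "/*", 2) || !strncmp(src, "*/", 2)))
			count++;
		src = end ? end + 1 : NULL;
	}
	return count;
}
```

Run over a tree, a count like this makes the ~40K figure easy to reproduce or refute.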

I note that changing our comment style would have zero impact on
code safety. The improvement of requiring a { on the ifs is known to
have positive impact; it's why Rust and Go both require it to my
understanding.
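The hazard of an unbraced if can be shown in a few lines of C (a constructed illustration, not coreboot code): the indentation suggests both calls are guarded, but only the first one is.

```c
#include <assert.h>

int guarded_calls, unguarded_calls;

void bar(void) { guarded_calls++; }
void baz(void) { unguarded_calls++; }

/* Without mandatory braces the indentation lies: baz() is NOT part of
 * the if body and runs unconditionally.  gcc's
 * -Wmisleading-indentation warns about exactly this pattern. */
void misleading(int foo)
{
	if (foo)
		bar();
		baz();	/* always executes, despite the indentation */
}
```

Calling `misleading(0)` leaves `guarded_calls` at 0 but still bumps `unguarded_calls` — exactly the silent bug a mandatory `{}` would make impossible to write.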

As for "the kernel" and its coding style, we are going to increasingly
see people coming from worlds where "the kernel" and its coding style
matter less and less. Maybe adherence to "the kernel" coding style
mattered ten years ago; I don't think it is as important now. In my
case, increasingly, "the kernel" coding style is looking a bit dated.


On Mon, Jun 24, 2019 at 8:14 PM Julius Werner  wrote:
>
> Doesn't -Wmisleading-indentation already catch all of this? That's
> enabled by default on the coreboot gcc. I don't think "it's just a
> heuristic" should be a concern unless anyone knows of a real example
> that is otherwise valid coreboot code style but not caught by this
> heuristic. (If we're worried about examples that are not valid code
> style, then changing the code style to make them even more forbidden
> doesn't help... so I think weird cases that mix tab and space
> indentation or the like should count in favor of this.)
>
> If we're concerned that clang-format might cement errors automatically
> then that's a reason for not using clang-format that way, but I don't
> see how changing the coding style would solve it. clang-format's whole
> job is to take whatever input and transform it into the coding style,
> so the input is likely not style-compliant yet.
>
> Forcing braces on single-line statements adds an extra line of
> whitespace where it would otherwise not necessarily belong, which
> hurts readability. How much of a function can fit on a single screen
> is a real readability concern, and I think this style change would
> harm it. That's the same reason we write
>
>  while {
>
> instead of
>
>  while
>  {
>
> like some other projects. (Of course blank lines can also help
> readability, but only in the *right* places and not randomly injected
> by style rules.) It would also move us yet again further away from
> kernel style, causing more issues for people coming from other
> projects.
>
> On Thu, Jun 20, 2019 at 2:54 PM ron minnich  wrote:
> >
> > On Thu, Jun 20, 2019 at 1:12 PM Stefan Reinauer
> >  wrote:
> > >
> > >
> > >
> > > On 20 Jun 2019 08:26, ron minnich  wrote:
> > >
> > > clang-format is not a textual preprocessor. It is basically the AST
> > > builder followed by output.
> > >
> > > So in your case, I just tried it
> > > main() {
> > >
> > >   if (foo)
> > > bar();
> > > baz();
> > > }
> > >
> > > and got the right thing, i.e. braces around bar but not baz.
> > >
> > >
> > > The right thing (e.g. the obviously intended thing) would be to have 
> > > braces around both.
> > >
> > > clang-format in this case masks the problem and makes it harder to 
> > > identify by expanding the syntax of the unwanted behavior to look 
> > > intentional.
> >
> > Nico and Stefan, you make a good point, but I would then argue for
> > even better tools, like clang-tidy:
> > /tmp/x.c:13:10: warning: statement should be inside braces
> > [readability-braces-around-statements]
> > if (foo)
> > ^
> >  {
> >
> > In this case, there is a warning thrown, and the author has to clean it up.
> >
> > I don't believe, based on a lot of the history of this sort of problem
> > in C, that we should depend on human reviewers to catch mistakes like
> > this. These tools exist because of a demonstrated need. I think
> > coreboot could benefit from their proper application.
> >
> > You've presented a good case, but what about something like this:
> >     if (foo)
> >             bar();
> >         baz();
> >
> > what's intended? There's an indent, but it's partial. I would not want
> > to guess. But I agree with you that an invisible fixup would be
> > inappropriate.


[coreboot] Re: Does NSA contribute to Coreboot?

2019-06-24 Thread ron minnich
Julius, good point. You're right.

I was about to talk to the author and ask if he would mind some help.
I'd like to see this code get in.


On Mon, Jun 24, 2019 at 5:28 PM Julius Werner  wrote:
>
> > We're reviewing the STM code, of course.
>
> While we're on the topic, can someone please ask the NSA to honor our
> coding style? ;) I don't want to get involved because it's really not
> my area, but it looks pretty terrible at the moment (full of camelCase
> and ALL_CAPS identifiers, C99 comments, typedefs to non-coreboot
> types, commented-out code, incorrect or missing license headers,
> #pragma pack() instead of __packed, etc.). If they just want to
> copy wholesale UEFI files to coreboot, they should dump them in
> vendorcode instead.


[coreboot] Re: Does NSA contribute to Coreboot?

2019-06-24 Thread ron minnich
Thanks for clearing that up.

On Mon, Jun 24, 2019, 11:16 AM Hubert Ruch  wrote:

>
>
> On 6/24/19 10:17 PM, ron minnich wrote:
> > On Mon, Jun 24, 2019 at 7:20 AM Hubert Ruch  wrote:
> >> Thanks for the info. Didn't know that. Now, one has to wonder how many
> skilled developers actually do read and understand their code. IIRC Leah
> Rowe paid someone $90,000 for adding some code to LibreBoot. I'm mentioning
> this because it leads to the assumption that boot coding must be a pretty
> difficult task.
> > Speculation preceded by IIRC is not helpful. Lots of people read this
> > list and you can now expect to see your IIRC bounce around the world
> > as fact, and we have no idea if it's true or not.
> Here's the source. Leah Rowe writes that she "paid 90,000 USD to Raptor
> Engineering to port the ASUS KGPE-D16 and KCMA-D8 to Libreboot".
> https://libreboot.org/news/leah-fundraiser.html


[coreboot] Re: Does NSA contribute to Coreboot?

2019-06-24 Thread ron minnich
> Well, I experience this very differently. Reviews aside, I spent most
> of my time with bug fixing. And most of the bugs I encounter are either
> due to unnecessary software complexity or because somebody ignored the
> little documentation that exists. Those aren't boot-coding problems.

re whether firmware is hard: I used to think it was not so bad. But
I've changed my mind.

What I learned, when I came back to coreboot in 2012 after a few years
off, is that it can be pretty hard, esp. around stuff like memory
training. I was shocked by how much I'd forgotten and how hard I found
it to get started up again. Another change is that every bus seems to
need training. Another observation: I've been told that it can take a
few years, on newer complex chipsets, to get the DRAM code working --
one reason it's nice we can update the romstage in chromebooks. Final
datapoint: a supercomputer vendor shipped a big machine, ca. 2007, and
it was over a year before they chased down some memory issues -- and
they *started* with a working port (I tried to help them with that and
failed). I think firmware is hard.

Nico, I accuse you of being extremely good at this kind of work; it
only looks easy to you :-)

ron


[coreboot] Re: Does NSA contribute to Coreboot?

2019-06-24 Thread ron minnich
On Mon, Jun 24, 2019 at 7:20 AM Hubert Ruch  wrote:
> Thanks for the info. Didn't know that. Now, one has to wonder how many 
> skilled developers actually do read and understand their code. IIRC Leah Rowe 
> paid someone $90,000 for adding some code to LibreBoot. I'm mentioning this 
> because it leads to the assumption that boot coding must be a pretty 
> difficult task.

Speculation preceded by IIRC is not helpful. Lots of people read this
list and you can now expect to see your IIRC bounce around the world
as fact, and we have no idea if it's true or not.

As Stefan points out, the project started at LANL in 1999 and ran
there for over five years, so USG involvement is hardly new. DOE Labs
spent well over $10M on systems running LinuxBIOS over a 6 year
period, and if we count the full cost of the DOE Lab FTEs contributing
to LinuxBIOS, the total commitment from 1999-2006 edges up to about
$20M. I know this because I oversaw the purchase of most of those
systems, and the funding of those FTEs (including me).

It's probably not well remembered at this point but the NSA also
contributed a lot to early Linux. Go back far enough, look at some of
Don Becker's ethernet drivers, and you will find National Security
Agency copyrights. This is because at the time Don wrote those drivers
he worked at the Supercomputing Research Center in Bowie, MD, USA. I
know this because I was there at that time too. Don was very active in
the creation of the early Linux networking stack, not just drivers.
NSA contributions to open source code go back almost 30 years.

We're reviewing the STM code, of course. If you're going to worry
about something, worry about FSP 2.0 still being closed source. FSP is
not optional and we have no idea of all the things it does/can do.

Finally, boot coding is a pretty difficult task. You don't see how
hard it is on x86 any more because x86 now depends on binary blobs to
work (I'm still very sad about that) and the really hard parts are in
the blobs. But it is intricate, difficult code, even on simple ARM
SOCs. That has not changed.

ron


[coreboot] Re: More coding style change proposals

2019-06-20 Thread ron minnich
On Thu, Jun 20, 2019 at 1:12 PM Stefan Reinauer
 wrote:
>
>
>
> On 20 Jun 2019 08:26, ron minnich  wrote:
>
> clang-format is not a textual preprocessor. It is basically the AST
> builder followed by output.
>
> So in your case, I just tried it
> main() {
>
>   if (foo)
> bar();
> baz();
> }
>
> and got the right thing, i.e. braces around bar but not baz.
>
>
> The right thing (i.e. the obviously intended thing) would be to have braces
> around both.
>
> clang-format in this case masks the problem and makes it harder to identify 
> by expanding the syntax of the unwanted behavior to look intentional.

Nico and Stefan, you make a good point, but I would then argue for
even better tools, like clang-tidy:
/tmp/x.c:13:10: warning: statement should be inside braces [readability-braces-around-statements]
        if (foo)
                ^
                 {

In this case, there is a warning thrown, and the author has to clean it up.

I don't believe, based on a lot of the history of this sort of problem
in C, that we should depend on human reviewers to catch mistakes like
this. These tools exist because of a demonstrated need. I think
coreboot could benefit from their proper application.

You've presented a good case, but what about something like this:
if (foo)
bar();
baz();

what's intended? There's an indent, but it's partial. I would not want
to guess. But I agree with you that an invisible fixup would be
inappropriate.


[coreboot] Re: More coding style change proposals

2019-06-20 Thread ron minnich
clang-format is not a textual preprocessor. It is basically the AST
builder followed by output.

So in your case, I just tried it
main() {

  if (foo)
bar();
baz();
}

and got the right thing, i.e. braces around bar but not baz.

The history of reviewers looking at code is they miss this kind of
error. Constantly. I'm in favor of as much automation as we can get.

ron

On Thu, Jun 20, 2019 at 5:25 AM Nico Huber  wrote:
>
> On 20.06.19 06:01, Jacob Garber wrote:
> > On Wed, Jun 19, 2019 at 08:38:14PM -0700, ron minnich wrote:
> >> Given the number of serious problems that lack of braces causes, I
> >> like this proposal. It's indicative that both Rust and Go require the
> >> {}, for reasons of safety.
> >
> > There was a famous vulnerability in Apple's SSL code several years ago
> > because of lack of braces. clang-format can also reformat old code to have
> > mandatory braces if I'm not mistaken.
>
> What will clang-format do if it encounters?
>
> if (foo)
> bar();
> baz();
>
> a)
> if (foo) {
> bar();
> }
> baz();
>
> or b)
> if (foo) {
> bar();
> baz();
> }
>
> Will it spit out a warning? If not, this shows how dangerous automatic
> formatting can be. Because after the formatter run, it's much less
> obvious for the reviewer that something is wrong.
>
> Nico


[coreboot] Re: More coding style change proposals

2019-06-19 Thread ron minnich
Given the number of serious problems that lack of braces causes, I
like this proposal. It's indicative that both Rust and Go require the
{}, for reasons of safety.

On Wed, Jun 19, 2019 at 11:27 AM Jonathan Neuschäfer
 wrote:
>
> On Wed, Jun 19, 2019 at 01:39:50PM -0400, Patrick Georgi via coreboot wrote:
> > Hey everybody,
> >
> > in today's leadership meeting, the question was brought up if we want
> > to normalize the coding style in coreboot to _always_ use braces in
> > if, else and for statements, even if it's just one statement they're
> > wrapping.
> >
> > The arguments made in favor were:
> >
> > 1. it's more consistent
> > 2. it's safer: you won't accidentally add a statement that is outside
> >the context it's supposed to run in (or even move a statement out
> >into the parent context by inserting another statement)
> >
> > So instead of:
> >
> > if (foo)
> >   bar();
> > else {
> >   baz();
> >   quux();
> > }
> >
> > we'd do:
> >
> > if (foo) {
> >   bar();
> > } else {
> >   baz();
> >   quux();
> > }
>
> Quick sidenote: Under the old rules[1], inherited from the Linux
> kernel coding style[2], the above example would look the same:
>
> | [...]
> | This does not apply if only one branch of a conditional statement is a
> | single statement; in the latter case use braces in both branches:
> |
> | if (condition) {
> | do_this();
> | do_that();
> | } else {
> | otherwise();
> | }
>
> ... so a better example would be one where technically no parentheses
> are required at all:
>
> if (foo)
> bar();
> else
> baz();
>
>   becomes:
>
> if (foo) {
> bar();
> } else {
> baz();
> }
>
>
> greetings,
> Jonathan Neuschäfer
>
>
> [1]: https://doc.coreboot.org/coding_style.html#placing-braces-and-spaces
> [2]: 
> https://www.kernel.org/doc/html/latest/process/coding-style.html?highlight=coding%20style#placing-braces-and-spaces


[coreboot] Re: Chainloading Windows from a Linux Payload

2019-06-10 Thread ron minnich
if you boot windows 12 would you need tianocore?

On Mon, Jun 10, 2019 at 1:44 PM Nico Huber  wrote:
>
> On 09.06.19 20:53, Matt B wrote:
> > Is it possible through u-root support for multiboot images [1] to chainload
> > grub?
>
> Yes, I would think so. But in case we are still on topic: It won't
> help you to boot Windows (unless you also implement UEFI services
> in your LinuxBoot and use a UEFI GRUB).
>
> To chainload something for Windows I would currently go either one of
> these ways:
>
> coreboot -> LinuxBoot -> SeaBIOS   -> Windows loader
> coreboot -> LinuxBoot -> tianocore -> Windows loader
>
> I think SeaBIOS already has an option to build a multiboot image. In
> either case you could also (in theory) pack either into a bzImage and
> feed that to kexec.
>
> Nico


[coreboot] Re: Chainloading Windows from a Linux Payload

2019-06-10 Thread ron minnich
yes. multiboot support went in a few months ago and we can, for
example, load vmware esxi.

I wonder why you would want to chainload grub, however, instead of
using u-root programs that read grub config files and do the boot
directly? There are reasons to use grub, of course, but I was curious
about your specific reason.

thanks

ron

On Sun, Jun 9, 2019 at 8:54 PM Matt B  wrote:
>
> Hi,
>
> Is it possible through u-root support for multiboot images [1] to chainload
> grub?
>
> -Matt
>
> [1] https://godoc.org/github.com/u-root/u-root/pkg/boot#MultibootImage
>
> On Sat, Apr 13, 2019 at 2:48 PM ron minnich  wrote:
>>
>> ESXi works today, FreeBSD is coming, and Windows is in long-term thinking
>>
>> On Fri, Apr 12, 2019, 11:46 AM Rafael Send  
>> wrote:
>>>
>>> Good question, I'd be interested in the answer to this as well if anyone 
>>> has some insight.
>>>
>>> Cheers,
>>> R
>>>
>>> On Fri, Apr 12, 2019 at 7:45 AM Matt B  wrote:
>>>>
>>>> Greetings,
>>>>
>>>> From what I can find, Linux can only chainload another linux kernel. (via 
>>>> kexec) Does this mean that a Linux payload like LinuxBoot cannot be used 
>>>> to boot Windows or another OS, either directly or by chainloading another 
>>>> payload from CBFS?
>>>>
>>>> It's nice that a Linux payload can provide superior flexibility and 
>>>> configurability than UEFI with the added benefit of a battle-hardened 
>>>> environment, but the ability to only boot a Linux OS seems like a pretty 
>>>> significant limitation (if this is indeed the case).
>>>>
>>>> Sincerely,
>>>> -Matt


[coreboot] Re: coreboot leadership meeting minutes for May 8 & May 22

2019-05-24 Thread ron minnich
On Thu, May 23, 2019 at 6:13 PM Julius Werner  wrote:
>
> > > * Can we adopt on principle removing all cpp guards on protos?
> > Why would this be a good idea?
> > What does "cpp guards" actually refer to?
>
> Upon re-reading this I *think* what they mean is stuff like
>
> #if CONFIG(SOME_OPTION)
> void function_prototype(void);
> #endif

yes. That's what the question meant: "should we remove cpp guards on
protos", as it said. I guess "cpp guards" is just a term of art I grew
up with, so did not realize it might confuse. OTOH it gets 2.5M hits.

I asked this question in the meeting because it came up in some
reviews, and just for one example,
https://review.coreboot.org/c/coreboot/+/32764/3/src/arch/x86/include/arch/acpi.h

and in some of those reviews a number of comments were made that some
of the guards on protos could be removed, even in cases where the
functions in question were not used or defined. A function prototype
essentially being a type declaration, this makes a sort of sense.

I also noticed this from
https://review.coreboot.org/c/coreboot/+/30466/29/src/mainboard/sifive/hifive-unleashed/flash.h
+++ b/src/mainboard/sifive/hifive-unleashed/flash.h
@@ -0,0 +1,21 @@
+/*
+ * This file is part of the coreboot project.
+ *
+ * Copyright (C) 2018 HardenedLinux
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; version 2 of the License.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __HIFIVE_UNLEASHED_FLASH_H__
+#define __HIFIVE_UNLEASHED_FLASH_H__
+
+void flash_init(void);
+
+#endif /* __HIFIVE_UNLEASHED_FLASH_H__ */

This one file contains one prototype, i.e. one line, with three lines
of cpp stuff to guard it. Is this sensible? I have no idea. But were
we to adopt a policy of not guarding prototype declarations, we could,
for a cpu or architecture, put all the protos in one file. That's what
the Plan 9 authors (kernel written in C, by the inventors of Unix and
C, for reference) elected to do. It was incredibly convenient -- all
the protos you need were in one place. As weird as it sounds, it's
very, very nice. I'm not recommending that for coreboot, just trying
to point out what other projects have done.

But the overall observation was that we have guards around protos in
some places, and in other places people have argued they can be
removed, even when the function in question is not defined or used,
and the question I got to wondering was: should we just remove all
those guards, like Plan 9, or what?

There are situations where what would have been a compile time error
is now a link time error. People might find this confusing.

The discussion evolved to a related question, around #pragma once. A
few years back, on the Akaros project (kernel written in C, FWIW), a
Linux kernel luminary convinced us to get rid of file guards and go to
#pragma once. I am not sure it was worth the trouble but we did it. It
*can* speed up compile time; cpp doesn't need to process a whole file
and then conclude it did not have to process it; it can realize it can
skip the open. A significant downside is that it's not in any standard
-- just all the compilers out there, it seems, save romcc.

I did a simple test: apply #pragma once to coreboot. A coreboot build
for watson opens 80K .h files today. #pragma once makes barely any
difference; this says we are doing a good job in how we use our .h
files.

From that point of view #pragma once hardly seems worth it.
Either way, the build takes 35 seconds. For compilation speed, I see
no advantage. And, again, #pragma once is not in any standard.

And that's it. I got kind of curious about usage of cpp guards, the
meeting was on the schedule, I asked the question, and we discussed
it. I value the opinions of the people on the call.

I'm sorry it elicited such a strong reaction.

As for the call: it's open. Anyone can call in. I'd be happy if more people did.

ron


[coreboot] Re: redleaf

2019-05-20 Thread ron minnich
well, I was not going to announce this until september, but events are
moving faster than planned, and the paper forced my hand. I started
looking into some new paths for firmware a few years back by messing
about with the Rust-based tock kernel. It taught me a lot.

The project I will be more publicly announcing at the OSFC is called
oreboot. oreboot is coreboot without 'C'. It's a downstream fork of
coreboot. I started with coreboot because there's a huge amount of
value in coreboot's directory layout, its many tools, and
documentation, which should not be discarded.

but, in oreboot:
- I remove all the C code
- I remove all Intel chipsets since the days of intel source code
support are long gone
- I remove anything that depends on a binary blob
- and anything that tries to support binary blobs

And that's just a start. But a lot of stuff is gone.

The first target is RISCV-SPIKE, as that lightweight CPU has made so
much possible in the last 5 years (including the idea of RAMPAYLOAD).

We could use some Rust expertise (I can barely spell C, much less
Rust); if this interests you, please let me know. I don't want to
clutter this list with oreboot discussions, so we may want to keep it
separate unless there is interest.

Possibly at the workshop in 2 weeks we can spend some time
establishing the proper way to code for oreboot.

Thanks

ron

On Mon, May 20, 2019 at 9:54 AM Jacob Garber  wrote:
>
> On Fri, May 17, 2019 at 03:43:43PM -0700, ron minnich wrote:
> > "RedLeaf is a new operating system being developed from scratch to
> > utilize formal verification for implementing provably secure firmware.
> > RedLeaf is developed in a safe language, Rust, and relies on automated
> > reasoning using satisfiability modulo theories (SMT) solvers for
> > formal verification. RedLeaf builds on two premises: (1) Rust's linear
> > type system enables practical language safety even for systems with
> > tightest performance and resource budgets (e.g., firmware), and (2) a
> > combination of SMT-based reasoning and pointer discipline enforced by
> > linear types provides a unique way to automate and simplify
> > verification effort scaling it to the size of a small OS kernel."
> >
> > https://dl.acm.org/citation.cfm?id=3321449
>
> "To make things worse, modern firmware is fundamentally insecure. The
> software engineering technology behind the platform firmware has remained
> unchanged for decades. Modern firmware is developed in a combination of
> low-level assembly and an unsafe programming language, namely C. (Several
> rare exceptions demonstrate applications of fuzzing and symbolic execution
> [24,40].) Typical firmware both adheres to multiple low-level hardware
> specifications and implements functionality of a minimal OS, i.e., implements
> multiple device drivers and sometimes even provides support for file systems
> and network protocols [27,28,43]. Due to such inherent complexity, bugs and
> vulnerabilities are routinely introduced in the omni-privileged firmware."
>
> Is there interest in using more formal methods in Coreboot? For example,
> I know gfxinit is written in Ada SPARK, and certainly some of the Coverity
> bugs I look at would benefit from using a stricter programming language.


[coreboot] lewisburg GPIO

2019-04-17 Thread ron minnich
I need some gpio experts.

On an (e.g.) lewisburg, how does one enumerate the GPIOs and make them
visible in sysfs as one does on ARM? Is the info in ACPI in some magic
spot?

thanks


[coreboot] Re: Chainloading Windows from a Linux Payload

2019-04-13 Thread ron minnich
ESXi works today, FreeBSD is coming, and Windows is in long-term thinking.

On Fri, Apr 12, 2019, 11:46 AM Rafael Send 
wrote:

> Good question, I'd be interested in the answer to this as well if anyone
> has some insight.
>
> Cheers,
> R
>
> On Fri, Apr 12, 2019 at 7:45 AM Matt B  wrote:
>
>> Greetings,
>>
>> From what I can find, Linux can only chainload another linux kernel. (via
>> kexec) Does this mean that a Linux payload like LinuxBoot cannot be used to
>> boot Windows or another OS, either directly or by chainloading another
>> payload from CBFS?
>>
>> It's nice that a Linux payload can provide superior flexibility and
>> configurability than UEFI with the added benefit of a battle-hardened
>> environment, but the ability to only boot a Linux OS seems like a pretty
>> significant limitation (if this is indeed the case).
>>
>> Sincerely,
>> -Matt


[coreboot] Re: Directly boot the image of Linux for RISC-V

2019-04-04 Thread ron minnich
yeah I agree. Thanks for responding Philipp, I dropped the ball.

We need to have another riscv hacking session in june.

On Thu, Apr 4, 2019 at 7:39 AM Philipp Hug  wrote:

> Hi Xiang,
>
> I think the best way to integrate opensbi for now would be to put it into
> the payloads/external directory and execute it as a rampayload. (in m-mode)
>
> Philipp
>
> On Tue, 2 Apr 2019 at 11:32, 王翔  wrote:
>
>> Hello Dear:
>>
>> I added some code to support booting a Linux binary directly as a payload
>> on RISC-V.
>>
>> Changes:
>> 1. opensbi is an open-source SBI implementation
>> (https://github.com/riscv/opensbi)
>> 2. The Linux binary is used as the payload of opensbi
>> 3. The ELF generated by the opensbi build is used as the coreboot
>> payload.
>>
>> Code reference:
>> https://github.com/hardenedlinux/coreboot-HiFiveUnleashed/tree/r1
>>
>> Whether to consider adding opensbi to 3rdparty?
>> Do you have any suggestions?
>>
>> XiangWang
>>
>> --
>> Xiang Wang (王翔)
>>
>> Security Researcher
>>
>> 广州市腾御安信息科技有限公司 (Guangzhou Tengyu'an Information Technology Co., Ltd.)
>>
>> 广州市天河区珠江新城华穗路406号保利克洛维二期中景A座1020-1024
>>
>>
>


[coreboot] Re: GSoC 2019

2019-03-29 Thread ron minnich
I think extending ghidra is a great idea, and it is more than enough
for a gsoc.

On Fri, Mar 29, 2019 at 6:03 AM Daniel Lim  wrote:
>
> Hi,
>
> I am Daniel, a computer engineering student at the National University
> of Singapore.
>
> I am interested in extending Ghidra to support the analysis of
> firmware images. However, I am unsure whether this would be a
> sufficiently big project for GSoC, and whether I should have an
> additional one. If that's the case, I would like to also work on
> porting the GRUB2 payload to the RISC-V architecture.
>
> Apart from that, to familiarize myself with the code review process of
> coreboot, I have made a simple change to cpu/ti/am335x, fixing
> warnings from checkpatch. May I know who can I add to review it? The
> patch can be found at https://review.coreboot.org/c/coreboot/+/32119 .
>
> Last but not least, may I ask for the scope of GSoC, should I ask
> questions more on the mailing list or IRC?
>
> I am looking forward to learning from you all. Thanks.
>
> Regards,
> Daniel Lim


[coreboot] Re: Coding style and automatic code formatting

2019-03-28 Thread Ron Minnich via coreboot
The general usage pattern in Go is format-on-save-from-editor, unless
it's something like vscode where magic happens.

On Thu, Mar 28, 2019 at 2:03 PM Nico Huber  wrote:
>
> On 16.03.19 18:15, Ron Minnich wrote:
> > On Sat, Mar 16, 2019 at 9:41 AM Patrick Georgi  wrote:
> > o Huber  schrieb am Sa., 16. März 2019, 16:32:
> >>>
> >>> Do we want to enforce a single editor / IDE + configuration for coreboot
> >>> contributions?
> >
> > we don't want to lock out, e.g., sublime, emacs, and vscode users, so no.
>
> yeah, that was just an example, because Vim works for me. And usually
> the argument for code formatters seems to be that they work for
> somebody.
>
> Why I brought this IDE point up at all: I played with the idea of a
> separate tool for formatting in my head and I couldn't come up with
> anything that would fit into my usual code+commit+push workflow. Maybe
> that's just because I'm not experienced with such tools. I wonder, at
> what point it would fit in:
>
>   o Between editor and `git add`?
>   o As a pre-commit hook?
>   o As a pre-push hook?
>   o As a hook on Gerrit's side?
>
> I couldn't come up with anything that wouldn't result in more manual
> work, e.g. to synchronize my local tree with the resulting commits.
> And well, I guess the point is to save us some work (I don't see where
> but probably somebody does).
>
> So before we make any final call for a formatting tool, I guess we need
> a volunteer who says they're going to take the week (or month?) off to
> integrate it flawlessly (unless I miss here how easy it really is).
>
> >
> >>> Do we want to enforce a single tool, e.g. clang-format, that does the
> >>> job for us after editing a source file?
> >>
> >> If we go for strict coding style adherence requirements that would avoid 
> >> having to manually review for coding style, which means less talking about 
> >> it, which in my book is a plus.
>
> Well, I hope it will be a very huge plus. In my book it's at an
> incredible minus atm: Over the past 7 years of coreboot development,
> I've spent a lot more time to analyze or work around and discuss our
> check-patch/check-style hooks than I spent on bikeshedding code format
> or debating about line lengths.
>
> >
> > This is the direction most new projects in modern languages are
> > taking. clang-fmt can do this for older languages like C and I think
> > it makes the most sense.
>
> Most modern languages are parsable, I guess? I haven't tried it yet,
> but I guess modern languages are much easier to format automatically.
>
> Nico

