[SeaBIOS] Re: [RESEND PATCH] fw/pciinit: don't misalign large BARs

2024-04-15 Thread Kevin O'Connor
On Thu, Apr 11, 2024 at 10:51:35PM +0300, Daniil Tatianin wrote:
> Previously we would unconditionally lower the alignment for large BARs
> in case their alignment was greater than "pci_mem64_top >> 11"; this
> would make it impossible for the kernel to use these devices:
> [   13.821108] pci :9c:00.0: can't claim BAR 1 [mem 0x660-0x67f 64bit pref]: no compatible bridge window
> [   13.823492] pci :9d:00.0: can't claim BAR 1 [mem 0x640-0x65f 64bit pref]: no compatible bridge window
> [   13.824218] pci :9e:00.0: can't claim BAR 1 [mem 0x620-0x63f 64bit pref]: no compatible bridge window
> [   13.828322] pci :8a:00.0: can't claim BAR 1 [mem 0x6e0-0x6ff 64bit pref]: no compatible bridge window
> [   13.830691] pci :8b:00.0: can't claim BAR 1 [mem 0x6c0-0x6df 64bit pref]: no compatible bridge window
> [   13.832218] pci :8c:00.0: can't claim BAR 1 [mem 0x6a0-0x6bf 64bit pref]: no compatible bridge window
> 
> Fix it by only overwriting the alignment in case it's actually greater
> than the alignment desired by the BAR window.

Thanks.  I committed this change.

-Kevin


[SeaBIOS] Re: [PATCH] stdvgaio: Only read/write one color palette entry at a time

2024-04-15 Thread Kevin O'Connor
On Tue, Apr 09, 2024 at 11:20:14AM -0400, Kevin O'Connor wrote:
> Introduce stdvga_dac_read_many() and stdvga_dac_write_many() for
> reading and writing multiple dac palette entries.  Convert the stdvga_dac_read()
> and stdvga_dac_write() low-level IO access functions in stdvgaio.c to
> access just one color palette entry.
> 
> Signed-off-by: Kevin O'Connor 
> --
> 
> There should not be any functional change resulting from this patch.
> However, during mode switches (and a few other rare calls) there will
> likely be a few more outb() calls due to additional writes to the dac
> index register.  In theory this could adversely impact performance,
> but in practice I think it is unlikely it will be noticeable.  The
> original code was also not optimized (it already makes a large number
> of redundant writes to the dac index register), and performance
> sensitive code already avoids using the vgabios.
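
As a rough sketch (the stdvgaio.c hunk itself is not shown in this
excerpt), the converted single-entry accessors might look like the
following; the VGAREG_* macro names are assumptions based on the standard
VGA DAC ports and should be checked against stdvga.h:

    // Read one palette entry via the DAC read index port (0x3c7/0x3c9)
    struct vbe_palette_entry
    stdvga_dac_read(u8 color)
    {
        outb(color, VGAREG_DAC_READ_ADDRESS);
        struct vbe_palette_entry rgb;
        rgb.red = inb(VGAREG_DAC_DATA);
        rgb.green = inb(VGAREG_DAC_DATA);
        rgb.blue = inb(VGAREG_DAC_DATA);
        rgb.align = 0;
        return rgb;
    }

    // Write one palette entry via the DAC write index port (0x3c8/0x3c9)
    void
    stdvga_dac_write(u8 color, struct vbe_palette_entry rgb)
    {
        outb(color, VGAREG_DAC_WRITE_ADDRESS);
        outb(rgb.red, VGAREG_DAC_DATA);
        outb(rgb.green, VGAREG_DAC_DATA);
        outb(rgb.blue, VGAREG_DAC_DATA);
    }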

I committed this change.

-Kevin


[SeaBIOS] Re: [PATCH v4] Add basic MXM 3.0 interrupt support

2024-04-14 Thread Kevin O'Connor
On Sun, Apr 14, 2024 at 05:56:31PM +, Riku Viitanen wrote:
> > I think it would be preferable if mxm_setup() was passed the pci
> > device for uniformity, and have mxm_setup() call romfile_xxx() itself.
> 
> Do you mean, run mxm30_setup unconditionally?
> And it checks if the file exists:
> 
> else if (mxm30_setup(pci))
> /* nothing left to do here */

That's fine - or something like:

int ret = mxm30_setup(pci);
if (!ret)
return;


-Kevin


[SeaBIOS] Re: [PATCH v4] Add basic MXM 3.0 interrupt support

2024-04-14 Thread Kevin O'Connor
On Sun, Apr 14, 2024 at 05:39:21PM +, Riku Viitanen wrote:
> > Ah, I missed that vgahook_setup() checks for NULL. Out of curiosity,
> > is it possible to limit this support to a particular hardware vendor
> > instead of by existence of file?
> 
> ATI, AMD, and Nvidia based MXM cards exist. I don't know of a way to
> probe it to find out if it's an MXM format card. And even if there
> was a way to tell with 100% reliability, the spec allows for the
> System Information Structure to be provided by an EEPROM on the
> mainboard, connected to the MXM GPU via I²C. In that case, we wouldn't
> have to provide these interrupts.
> 
> Despite the declining popularity, new MXM cards are still released.
> Maybe one day we'd have to add support for Intel Arc cards as well.
> 
> Or do you mean CBvendor/CBpart? I fear that list could grow quickly
> and would just make it more annoying to port new boards. Coreboot
> already has EliteBook 8770w and HP Compaq Elite 8300 USDT (that's a
> desktop btw). And Dell Precision ports are in progress.

Okay, thanks for the explanation.

Maybe unconditionally call mxm30_setup() and that function can exit
early if "mxm-30-sis" doesn't exist?
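
A minimal sketch of that, reusing the names from the patch under review
(untested; the early-exit romfile_find() check is the only change relative
to the posted code):

    static void
    mxm30_setup(struct pci_device *pci)
    {
        // Exit early when no System Information Structure is provided
        struct romfile_s *sis_file = romfile_find("mxm-30-sis");
        if (!sis_file)
            return;
        // Load the SIS into low memory as in the original patch
        MXM30SIS = malloc_low(sis_file->size);
        if (!MXM30SIS) {
            warn_noalloc();
            return;
        }
        int ret = sis_file->copy(sis_file, MXM30SIS, sis_file->size);
        if (ret < 0) {
            free(MXM30SIS);
            MXM30SIS = NULL;
            return;
        }
        VGAHookHandlerType = VH_MXM30;
    }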

-Kevin


[SeaBIOS] Re: [PATCH v4] Add basic MXM 3.0 interrupt support

2024-04-14 Thread Kevin O'Connor
On Sun, Apr 14, 2024 at 11:51:33AM -0400, Kevin O'Connor wrote:
> On Sun, Apr 14, 2024 at 10:30:55AM +, Riku Viitanen via SeaBIOS wrote:
> > VGAROMs on MXM graphics cards need certain int15h functions present
> > to function properly.
> > 
> > On HP EliteBook 8560w with coreboot and Quadro 2000M, this warning
> > is displayed for 30 seconds, making every boot extremely slow:
> > 
> > ERROR: Valid MXM Structure not found
> > POST halted for 30 seconds, P-state limited to P10...
> > 
> > This patch implements the minimum functionality to get rid of it: the
> > functions 0 and 1, version 3.0 of the specification [1]. Older version
> > 2.1 is not implemented.
> > 
> > Functions are enabled only if romfile "mxm-30-sis" exists. These functions
> > are specific to the MXM revision, but not the board or GPU. Some boards
> > might require more of the optional functions to be implemented, however.
> > 
> > - Function 0 returns information about supported functions and revision.
> > 
> > - Function 1 returns a pointer to an MXM configuration structure in low
> > memory. This is copied from romfile "mxm-30-sis". It can be extracted
> > from vendor BIOS, by using this same interrupt. I wrote a tool [2] to do
> > that from Linux.
> > 
> > TEST: HP EliteBook 8560w boots without error and delay, graphics work.
> > 
> > [1]: MXM Graphics Module Software Specification Version 3.0, Revision 1.1
> > [2]: https://codeberg.org/Riku_V/mxmdump/
> 
> Thanks.  In general it looks good to me.  I have a few minor comments.
> 
> > 
> > Signed-off-by: Riku Viitanen riku.viita...@protonmail.com
> > ---
> >  src/vgahooks.c | 79 ++
> >  1 file changed, 79 insertions(+)
> > 
> > diff --git a/src/vgahooks.c b/src/vgahooks.c
> > index 1f149532..6adf5b9b 100644
> > --- a/src/vgahooks.c
> > +++ b/src/vgahooks.c
> > @@ -11,13 +11,16 @@
> >  #include "hw/pcidevice.h" // pci_find_device
> >  #include "hw/pci_ids.h" // PCI_VENDOR_ID_VIA
> >  #include "hw/pci_regs.h" // PCI_VENDOR_ID
> > +#include "malloc.h" // malloc_low
> >  #include "output.h" // dprintf
> > +#include "romfile.h" // struct romfile_s, romfile_find
> >  #include "string.h" // strcmp
> >  #include "util.h" // handle_155f, handle_157f
> >  
> >  #define VH_VIA 1
> >  #define VH_INTEL 2
> >  #define VH_SMI 3
> > +#define VH_MXM30 4
> >  
> >  int VGAHookHandlerType VARFSEG;
> >  
> > @@ -296,6 +299,77 @@ winent_mb6047_setup(struct pci_device *pci)
> >  SmiBootDisplay = 0x02;
> >  }
> >  
> > +/****************************************************************
> > + * MXM 3.0 hooks
> > + ****************************************************************/
> > +
> > +void *MXM30SIS VARFSEG;
> > +
> > +/* Function 0: Return Specification Support Level */
> > +static void
> > +mxm30_F00(struct bregs *regs)
> > +{
> > +regs->ax = 0x005f; /* Success */
> > +regs->bl = 0x30; /* MXM version 3.0 */
> > +regs->cx = (1<<0) | (1<<1); /* Supported functions */
> > +set_success(regs);
> > +}
> > +
> > +/* Function 1: Return Pointer to MXM System Information Structure */
> > +static void
> > +mxm30_F01(struct bregs *regs)
> > +{
> > +switch (regs->cx) {
> > +case 0x0030:
> > +regs->ax = 0x005f; /* Success */
> > +regs->es = (u32)GET_GLOBAL(MXM30SIS)/16;
> > +regs->di = (u32)GET_GLOBAL(MXM30SIS)%16;
> 
> This should use SEG_LOW and LOWFLAT2LOW(MXM30SIS).
> 
> > +set_success(regs);
> > +break;
> > +default:
> > +handle_155fXX(regs);
> > +break;
> > +}
> > +}
> > +
> > +static void
> > +mxm30_155f80(struct bregs *regs)
> > +{
> > +switch (regs->bx) {
> > +case 0xff00: mxm30_F00(regs); break;
> > +case 0xff01: mxm30_F01(regs); break;
> > +default: handle_155fXX(regs); break;
> > +}
> > +}
> > +
> > +static void
> > +mxm30_155f(struct bregs *regs)
> > +{
> > +switch (regs->al) {
> > +case 0x80: mxm30_155f80(regs); break;
> > +default:   handle_155fXX(regs); break;
> > +}
> > +}
> > +
> > +static void
> > +mxm30_setup(struct romfile_s *sis_file)
> > +{
> > +/* Load MXM System Informa

[SeaBIOS] Re: [PATCH v4] Add basic MXM 3.0 interrupt support

2024-04-14 Thread Kevin O'Connor
On Sun, Apr 14, 2024 at 10:30:55AM +, Riku Viitanen via SeaBIOS wrote:
> VGAROMs on MXM graphics cards need certain int15h functions present
> to function properly.
> 
> On HP EliteBook 8560w with coreboot and Quadro 2000M, this warning
> is displayed for 30 seconds, making every boot extremely slow:
> 
> ERROR: Valid MXM Structure not found
> POST halted for 30 seconds, P-state limited to P10...
> 
> This patch implements the minimum functionality to get rid of it: the
> functions 0 and 1, version 3.0 of the specification [1]. Older version
> 2.1 is not implemented.
> 
> Functions are enabled only if romfile "mxm-30-sis" exists. These functions
> are specific to the MXM revision, but not the board or GPU. Some boards
> might require more of the optional functions to be implemented, however.
> 
> - Function 0 returns information about supported functions and revision.
> 
> - Function 1 returns a pointer to an MXM configuration structure in low
> memory. This is copied from romfile "mxm-30-sis". It can be extracted
> from vendor BIOS, by using this same interrupt. I wrote a tool [2] to do
> that from Linux.
> 
> TEST: HP EliteBook 8560w boots without error and delay, graphics work.
> 
> [1]: MXM Graphics Module Software Specification Version 3.0, Revision 1.1
> [2]: https://codeberg.org/Riku_V/mxmdump/

Thanks.  In general it looks good to me.  I have a few minor comments.

> 
> Signed-off-by: Riku Viitanen riku.viita...@protonmail.com
> ---
>  src/vgahooks.c | 79 ++
>  1 file changed, 79 insertions(+)
> 
> diff --git a/src/vgahooks.c b/src/vgahooks.c
> index 1f149532..6adf5b9b 100644
> --- a/src/vgahooks.c
> +++ b/src/vgahooks.c
> @@ -11,13 +11,16 @@
>  #include "hw/pcidevice.h" // pci_find_device
>  #include "hw/pci_ids.h" // PCI_VENDOR_ID_VIA
>  #include "hw/pci_regs.h" // PCI_VENDOR_ID
> +#include "malloc.h" // malloc_low
>  #include "output.h" // dprintf
> +#include "romfile.h" // struct romfile_s, romfile_find
>  #include "string.h" // strcmp
>  #include "util.h" // handle_155f, handle_157f
>  
>  #define VH_VIA 1
>  #define VH_INTEL 2
>  #define VH_SMI 3
> +#define VH_MXM30 4
>  
>  int VGAHookHandlerType VARFSEG;
>  
> @@ -296,6 +299,77 @@ winent_mb6047_setup(struct pci_device *pci)
>  SmiBootDisplay = 0x02;
>  }
>  
> +/****************************************************************
> + * MXM 3.0 hooks
> + ****************************************************************/
> +
> +void *MXM30SIS VARFSEG;
> +
> +/* Function 0: Return Specification Support Level */
> +static void
> +mxm30_F00(struct bregs *regs)
> +{
> +regs->ax = 0x005f; /* Success */
> +regs->bl = 0x30; /* MXM version 3.0 */
> +regs->cx = (1<<0) | (1<<1); /* Supported functions */
> +set_success(regs);
> +}
> +
> +/* Function 1: Return Pointer to MXM System Information Structure */
> +static void
> +mxm30_F01(struct bregs *regs)
> +{
> +switch (regs->cx) {
> +case 0x0030:
> +regs->ax = 0x005f; /* Success */
> +regs->es = (u32)GET_GLOBAL(MXM30SIS)/16;
> +regs->di = (u32)GET_GLOBAL(MXM30SIS)%16;

This should use SEG_LOW and LOWFLAT2LOW(MXM30SIS).
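
Roughly, that suggestion amounts to something like (a sketch only; the
exact conversion-macro usage should be double-checked against farptr.h):

    regs->es = SEG_LOW;
    regs->di = (u32)LOWFLAT2LOW(GET_GLOBAL(MXM30SIS));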

> +set_success(regs);
> +break;
> +default:
> +handle_155fXX(regs);
> +break;
> +}
> +}
> +
> +static void
> +mxm30_155f80(struct bregs *regs)
> +{
> +switch (regs->bx) {
> +case 0xff00: mxm30_F00(regs); break;
> +case 0xff01: mxm30_F01(regs); break;
> +default: handle_155fXX(regs); break;
> +}
> +}
> +
> +static void
> +mxm30_155f(struct bregs *regs)
> +{
> +switch (regs->al) {
> +case 0x80: mxm30_155f80(regs); break;
> +default:   handle_155fXX(regs); break;
> +}
> +}
> +
> +static void
> +mxm30_setup(struct romfile_s *sis_file)
> +{
> +/* Load MXM System Information Structure into low memory */
> +MXM30SIS = malloc_low(sis_file->size);

This code does not check if sis_file is NULL, which could lead to
memory corruption.

Probably best to use romfile_loadfile(), check for non-null, and then
malloc_low(), manual memcpy(), and free() of the temporary space.  As
it's preferable to avoid calls to free() on malloc_low() allocations -
as that can lead to memory fragmentation of the scarce "low" region.
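
Spelled out, that flow might look something like this (an illustrative
sketch only, with minimal error handling):

    static void
    mxm30_setup(struct pci_device *pci)
    {
        // Load "mxm-30-sis" into a temporary buffer first
        int size;
        void *data = romfile_loadfile("mxm-30-sis", &size);
        if (!data)
            return;
        // Copy into low memory, then free only the temporary buffer so
        // that free() is never called on a malloc_low() allocation.
        void *sis = malloc_low(size);
        if (!sis) {
            warn_noalloc();
            free(data);
            return;
        }
        memcpy(sis, data, size);
        free(data);
        MXM30SIS = sis;
        VGAHookHandlerType = VH_MXM30;
    }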

> +if (!MXM30SIS) {
> +warn_noalloc();
> +return;
> +}
> +int ret = sis_file->copy(sis_file, MXM30SIS, sis_file->size);
> +if (ret < 0) {
> +free(MXM30SIS);
> +MXM30SIS = NULL;
> +return;
> +}
> +
> +VGAHookHandlerType = VH_MXM30;
> +}
> +
>  /****************************************************************
>   * Entry and setup
>   ****************************************************************/
> @@ -313,6 +387,7 @@ handle_155f(struct bregs *regs)
>  switch (htype) {
>  case VH_VIA:   via_155f(regs); break;
>  case VH_INTEL: intel_155f(regs); break;
> +case VH_MXM30: mxm30_155f(regs); break;
>  default:   

[SeaBIOS] Re: [PATCH] fw/pciinit: don't misalign large BARs

2024-04-10 Thread Kevin O'Connor
On Wed, Apr 10, 2024 at 02:24:50PM +0200, Gerd Hoffmann wrote:
> On Wed, Apr 10, 2024 at 11:01:34AM +0300, Daniil Tatianin wrote:
> > Previously we would unconditionally lower the alignment for large BARs
> > in case their alignment was greater than "pci_mem64_top >> 11"; this
> > would make it impossible for the kernel to use these devices:
> > [   13.821108] pci :9c:00.0: can't claim BAR 1 [mem 
> > 0x660-0x67f 64bit pref]: no compatible bridge window
> 
> 128G bar.  Wow.  May I ask what device this is?
> 
> > +u64 top_align = pci_mem64_top >> 11;
> >  if (hotplug_support && pci_pad_mem64 && is64
> > -&& (type == PCI_REGION_TYPE_PREFMEM))
> > -align = pci_mem64_top >> 11;
> > +&& (type == PCI_REGION_TYPE_PREFMEM) && (top_align > align))
> > +align = top_align;
> 
> Makes sense.
> 
> Reviewed-by: Gerd Hoffmann 

The patch looks fine to me as well.  However, the patch didn't make it
through to the seabios mailing list.  For review purposes, it would be
best if the patch is publicly available on the list.

Cheers,
-Kevin


[SeaBIOS] [PATCH] stdvgaio: Only read/write one color palette entry at a time

2024-04-09 Thread Kevin O'Connor
Introduce stdvga_dac_read_many() and stdvga_dac_write_many() for
reading and writing multiple dac palette entries.  Convert the stdvga_dac_read()
and stdvga_dac_write() low-level IO access functions in stdvgaio.c to
access just one color palette entry.

Signed-off-by: Kevin O'Connor 
--

There should not be any functional change resulting from this patch.
However, during mode switches (and a few other rare calls) there will
likely be a few more outb() calls due to additional writes to the dac
index register.  In theory this could adversely impact performance,
but in practice I think it is unlikely it will be noticeable.  The
original code was also not optimized (it already makes a large number
of redundant writes to the dac index register), and performance
sensitive code already avoids using the vgabios.

---
 src/std/vbe.h|  7 +++
 vgasrc/stdvga.c  | 50 +---
 vgasrc/stdvga.h  | 26 +++
 vgasrc/stdvgaio.c| 37 
 vgasrc/stdvgamodes.c |  8 +++
 vgasrc/vbe.c | 16 +-
 vgasrc/vgabios.c | 22 +--
 vgasrc/vgautil.h | 23 
 8 files changed, 109 insertions(+), 80 deletions(-)

diff --git a/src/std/vbe.h b/src/std/vbe.h
index fe96f5e..fddaa4f 100644
--- a/src/std/vbe.h
+++ b/src/std/vbe.h
@@ -88,6 +88,13 @@ struct vbe_crtc_info {
 u8 reserved[40];
 } PACKED;
 
+struct vbe_palette_entry {
+u8 blue;
+u8 green;
+u8 red;
+u8 align;
+} PACKED;
+
 /* VBE Return Status Info */
 /* AL */
 #define VBE_RETURN_STATUS_SUPPORTED  0x4F
diff --git a/vgasrc/stdvga.c b/vgasrc/stdvga.c
index afe26db..aeab9d4 100644
--- a/vgasrc/stdvga.c
+++ b/vgasrc/stdvga.c
@@ -9,6 +9,7 @@
 #include "farptr.h" // SET_FARVAR
 #include "stdvga.h" // stdvga_setup
 #include "string.h" // memset_far
+#include "std/vbe.h" // vbe_palette_entry
 #include "vgabios.h" // struct vgamode_s
 #include "vgautil.h" // stdvga_attr_write
 #include "x86.h" // outb
@@ -128,6 +129,41 @@ stdvga_get_palette_page(u8 *pal_pagesize, u8 *pal_page)
 /****************************************************************
  * DAC control
  ****************************************************************/
 
+// Store dac colors into memory in 3-byte rgb format
+void
+stdvga_dac_read_many(u16 seg, u8 *data_far, u8 start, int count)
+{
+while (count) {
+struct vbe_palette_entry rgb = stdvga_dac_read(start);
+SET_FARVAR(seg, *data_far, rgb.red);
+data_far++;
+SET_FARVAR(seg, *data_far, rgb.green);
+data_far++;
+SET_FARVAR(seg, *data_far, rgb.blue);
+data_far++;
+start++;
+count--;
+}
+}
+
+// Load dac colors from memory in 3-byte rgb format
+void
+stdvga_dac_write_many(u16 seg, u8 *data_far, u8 start, int count)
+{
+while (count) {
+u8 r = GET_FARVAR(seg, *data_far);
+data_far++;
+u8 g = GET_FARVAR(seg, *data_far);
+data_far++;
+u8 b = GET_FARVAR(seg, *data_far);
+data_far++;
+struct vbe_palette_entry rgb = { .red=r, .green=g, .blue=b };
+stdvga_dac_write(start, rgb);
+start++;
+count--;
+}
+}
+
 // Convert all loaded colors to shades of gray
 void
 stdvga_perform_gray_scale_summing(u16 start, u16 count)
@@ -135,16 +171,16 @@ stdvga_perform_gray_scale_summing(u16 start, u16 count)
 stdvga_attrindex_write(0x00);
 int i;
 for (i = start; i < start+count; i++) {
-u8 rgb[3];
-stdvga_dac_read(GET_SEG(SS), rgb, i, 1);
+struct vbe_palette_entry rgb = stdvga_dac_read(i);
 
 // intensity = ( 0.3 * Red ) + ( 0.59 * Green ) + ( 0.11 * Blue )
-u16 intensity = ((77 * rgb[0] + 151 * rgb[1] + 28 * rgb[2]) + 0x80) >> 8;
+u16 intensity = ((77 * rgb.red + 151 * rgb.green
+  + 28 * rgb.blue) + 0x80) >> 8;
 if (intensity > 0x3f)
 intensity = 0x3f;
-rgb[0] = rgb[1] = rgb[2] = intensity;
+rgb.red = rgb.green = rgb.blue = intensity;
 
-stdvga_dac_write(GET_SEG(SS), rgb, i, 1);
+stdvga_dac_write(i, rgb);
 }
 stdvga_attrindex_write(0x20);
 }
@@ -474,7 +510,7 @@ stdvga_save_dac_state(u16 seg, struct saveDACcolors *info)
 SET_FARVAR(seg, info->rwmode, inb(VGAREG_DAC_STATE));
 SET_FARVAR(seg, info->peladdr, inb(VGAREG_DAC_WRITE_ADDRESS));
 SET_FARVAR(seg, info->pelmask, stdvga_pelmask_read());
-stdvga_dac_read(seg, info->dac, 0, 256);
+stdvga_dac_read_many(seg, info->dac, 0, 256);
 SET_FARVAR(seg, info->color_select, 0);
 }
 
@@ -482,7 +518,7 @@ static void
 stdvga_restore_dac_state(u16 seg, struct saveDACcolors *info)
 {
 stdvga_pelmask_write(GET_FARVAR(seg, info->pelmask));
-stdvga_dac_write(seg, info->dac, 0, 256);
+stdvga_dac_write_many(seg, info->dac, 0, 256);
 outb(GET_FARVAR(seg, info->peladdr), VGAREG_DAC_W

[SeaBIOS] Re: [PATCH] stdvga: Add stdvga_set_vertical_size() helper function

2024-04-09 Thread Kevin O'Connor
On Tue, Apr 02, 2024 at 02:35:05PM -0400, Kevin O'Connor wrote:
> Add helper function and update the bochsvga.c code to use it.  This
> emphasizes the relationship between stdvga_get_vertical_size() and
> stdvga_set_vertical_size() code.
> 
> Signed-off-by: Kevin O'Connor 

I committed this patch.

-Kevin


[SeaBIOS] Re: [PATCH 0/9] Improve comments in stdvga.c

2024-04-09 Thread Kevin O'Connor
On Mon, Apr 01, 2024 at 11:51:22AM -0400, Kevin O'Connor wrote:
> This series adds comments to vgasrc/stdvga.c along with some variable
> and function renaming.  The goal is to make it a little easier to
> understand this legacy code.
> 
> There should be no functionality changes with this patch series.
> 
> -Kevin

I committed this patch series.  (During merge I also fixed a minor
comment spelling error in patch 6.)

-Kevin


[SeaBIOS] Re: Floppy drive configuration

2024-04-04 Thread Kevin O'Connor
On Thu, Apr 04, 2024 at 04:00:20PM +, Eduardo Batalha via SeaBIOS wrote:
>  Hi,
> 
> I had some old files saved on diskettes that I really needed to see, so I
> bought a floppy drive and installed it in my coreboot-based computer, so, no,
> it's not a retro project. :)
> 
> Regarding the AHCI CDROM issues, I really don't know. I did not have problems 
> with it so I did not look at that code at all. 
> 
> Going back to the floppy configuration issue I think I need to explain my 
> logic a bit better. 
> So... Seabios does not provide us with a fancy interface to configure the 
> floppy drive type. It also does not assume anything. For real hardware, (not 
> qemu,) it is completely dependent on the etc/floppy "file" being present in 
> CBFS. The reason it cannot depend on the cmos value is because when you 
> remove the RTC battery, the CMOS memory gets filled up with random values. 
> coreboot, which runs before seabios has a mechanism to fix this, but ignores 
> the floppy type byte, leaving it with a random value.
> 
> My idea below, with my code, is not to use the CMOS value if there are no 
> etc/floppy files but instead to update the cmos value unconditionally 
> whenever etc/floppy files exist.
> romfile_loadint returns the second argument if the file does not exist, i.e. 
> 0, so the if condition is correct for the logic I explained above.
> 
> On second thought, I now think the condition "if (type0 || type1)" should 
> not even exist. The best solution is to have the CMOS_FLOPPY_DRIVE_TYPE 
> always updated on every boot. This way, if there are no etc/floppy files in 
> CBFS the value of the byte would be zero, indicating to the operating system 
> that there are no floppy drives installed.
> 

Reading and writing any of the "cmos" settings is problematic.  The
RTC registers (0x00-0x0f) are a recognized hardware standard, but
there isn't a consistent standard for the "nvram" content (nor even
how many bytes there are).

As a result, SeaBIOS never accesses any of the "nvram" parts of the
"cmos" when in coreboot mode.  (It does read some parts when compiled
for QEMU, but that was only to support a legacy configuration
interface with QEMU which has long been deprecated.)

Writing to the "nvram" is particularly troublesome, as some software
requires that any changes to "nvram" content also update a checksum
byte (also in "nvram").  There is no consistent standard for the
location of that checksum or how it is calculated.  So, if one were to
change any value in "nvram", it could break any software expecting a
particular layout.
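
Purely as an illustration of the problem: one historically common (but far
from universal) scheme keeps a 16-bit checksum of nvram bytes 0x10-0x2d in
registers 0x2e/0x2f, so a "correct" write would have to look something like
the sketch below, and those offsets are exactly what differs between
implementations:

    static void
    cmos_write_checksummed(u8 index, u8 value)
    {
        rtc_write(index, value);
        // Recompute the checksum over the (assumed) range 0x10-0x2d
        u16 sum = 0;
        int i;
        for (i = 0x10; i <= 0x2d; i++)
            sum += rtc_read(i);
        rtc_write(0x2e, sum >> 8);
        rtc_write(0x2f, sum & 0xff);
    }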

As a result, I don't think it would be a good idea for SeaBIOS to
start modifying the "nvram" parts of the "cmos".  If you have some
software that requires it, you might want to consider getting a new
battery and programming the nvram with your desired contents.
Alternatively, if this is for Linux, I'd guess you could manually
instruct Linux to use the floppy (eg,
https://docs.kernel.org/admin-guide/blockdev/floppy.html ).

Cheers,
-Kevin


[SeaBIOS] Re: an issue with win10 boot and different compiler versions

2024-02-20 Thread Kevin O'Connor
On Sat, Feb 10, 2024 at 11:17:54PM +0300, Michael Tokarev wrote:
> So.. the difference is vgabios only, not seabios (vgabios-stdvga in this 
> case).
> 
> And I can't get it to work with debugging vgabios, it always fails even with 
> DEBUG_LEVEL=2
> (and level-1 logging isn't useful).
> 
> I was able to capture logs just for the non-working version, so there's 
> nothing to
> compare it against.  So I tried a different machine type in qemu, the one 
> which
> works, which uses SMBIOS 3.0 (q35-8.2).

Thanks for testing.  So, if I understand the issue correctly:
1. If smbios v3 is used then the problem does not occur.
2. If gcc v13 is used to compile vgabios then the problem does not occur.
3. If smbios v2 is used and gcc v12 is used then win10 can not boot.
Is that correct?

A strange issue.  Issues like this tend to be very difficult to track
down.

As a random guess, one possibility is that it could be related to
vgabios stack size usage.  You could try always enabling the "extra
vga stack" with a change like:

--- a/vgasrc/vgabios.c
+++ b/vgasrc/vgabios.c
@@ -285,8 +285,7 @@ vga_set_mode(int mode, int flags)
 // Disable extra stack if it appears a modern OS is in use.
 // This works around bugs in some versions of Windows (Vista
 // and possibly later) when the stack is in the e-segment.
-MASK_BDA_EXT(flags, BF_EXTRA_STACK
- , (flags & MF_LEGACY) ? BF_EXTRA_STACK : 0);
+MASK_BDA_EXT(flags, BF_EXTRA_STACK, BF_EXTRA_STACK);
 if (memmodel == MM_TEXT) {
 SET_BDA(video_cols, width);
 SET_BDA(video_rows, height-1);

Separately, if you can provide the failing and succeeding builds, I
can try to take a look at it locally.  To do this, make sure you're on
commit 82faf1d5, run make, run "tar cfz fullbuild.tgz out/", and
provide the resulting tgz file.

-Kevin


[SeaBIOS] Re: an issue with win10 boot and different compiler versions

2024-02-10 Thread Kevin O'Connor
On Wed, Feb 07, 2024 at 03:21:15AM +0300, Michael Tokarev wrote:
> 07.02.2024 02:17, Michael Tokarev wrote:
[...]
> > The binary in question is vgabios-stdvga.bin.
[...]
>"SMBIOS 2.1 table length 66822 exceeds 65535"
> 
> Which wont help with 7.2 machine types (it changes defaults for 8.1+).
> 
> And yes, running current qemu with -M pc-q35-7.2 shows the same issue
> again.
> 
> So it might not be a gcc issue really, but just a too large bios and
> gcc-13 is able to produce more compact code which actually fits.

Ah, that makes sense.  In the future, if you enable the seabios logs
it should help track these things down.  (Take the log from the
working version and compare it to the log from the non-working
version.)  I suspect in the log would be messages from
seabios/seavgabios hinting to the issue.

Cheers,
-Kevin


[SeaBIOS] Re: Malloc Question

2023-09-11 Thread Kevin O'Connor
On Fri, Aug 25, 2023 at 10:36:04PM +0100, Kieran Kunhya wrote:
> Hello,
> 
> I'm trying to update this patch for 2017 as I have the same issue as the
> author in that I can't use the KVM virtual keyboard and a physical keyboard
> at the same time:
> https://patchew.org/Seabios/20171215100946.58298-1-stef.van...@prodrive-technologies.com/
> 
> I have implemented a linked list as recommended but this has not worked as
> expected. Assuming I have not made an obvious mistake, I think maybe this
> patch is not working because I'm using the wrong malloc. Which one should I
> be using?

There is some documentation in docs/Memory_Model.md (also at
https://www.seabios.org/Memory_Model ).  Since the usb keyboard code
is using GET_GLOBAL() to dereference pointers, the memory would likely
need to be allocated with malloc_fseg().
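
A minimal sketch of that approach for a linked list of keyboard pipes (the
struct, global, and helper below are hypothetical, not existing SeaBIOS
code):

    struct usbkbd_entry {
        struct usb_pipe *pipe;
        struct usbkbd_entry *next;
    };
    static struct usbkbd_entry *UsbKbdList VARFSEG;

    static void
    usbkbd_register(struct usb_pipe *pipe)
    {
        // Allocate in the f-segment so 16bit code can reach it via
        // GET_GLOBAL() later on.
        struct usbkbd_entry *e = malloc_fseg(sizeof(*e));
        if (!e) {
            warn_noalloc();
            return;
        }
        e->pipe = pipe;
        e->next = UsbKbdList;
        UsbKbdList = e;
    }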

-Kevin


[SeaBIOS] Re: [PATCH] ahci: handle TFES irq correctly

2023-06-21 Thread Kevin O'Connor
On Tue, May 30, 2023 at 03:44:05PM +0200, Niklas Cassel via SeaBIOS wrote:
> According to AHCI 1.3.1, 5.3.8.1 RegFIS:Entry, if ERR_STAT is set in the
> received FIS, the HBA shall jump to state ERR:FatalTaskfile, which will
> raise a TFES IRQ.
> 
> This means that if ERR_STAT is set in the received FIS, PxIS.TFES will
> be set, without either PxIS.DHRS or PxIS.PSS being set.
> 
> SeaBIOS function ahci_port_setup() will try to identify an AHCI device
> by sending an ATAPI identify device command. However, such a command
> will be aborted with ERR_STAT set for a regular (non-ATAPI) device.
> 
> ahci_command() already performs the correct error recovery steps when
> status is correctly set, so simply modify ahci_command() to read the
> correct status when PxIS.TFES is set.
> 
> It is safe to read PxTFD when PxIS.TFES is set, even for systems with a
> port multiplier, see AHCI 1.3.1, 9.3.7 PxTFD Register Information:
> "When a taskfile error occurs (PxIS.TFES is set to '1'), the host may
> refer to the values in PxTFD. The values in PxTFD at this time are
> guaranteed to correspond to the device that reported the taskfile error
> condition."
> 
> Without this, each boot will be delayed by 32 seconds, waiting for the
> AHCI command to timeout.
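
The gist of the fix, sketched against the AHCI register layout rather than
SeaBIOS's actual helpers (the readl()-based accessor, variable names and
offsets below are illustrative; per the spec PxIS sits at port offset 0x10
and PxTFD at 0x20):

    // Return the ATA status for a completed command, honoring taskfile
    // errors where no D2H register FIS was posted.
    static u8
    sketch_ahci_status(void *port_base, u8 fis_status)
    {
        u32 pxis = readl(port_base + 0x10);   // PxIS
        if (!(pxis & (1 << 30)))              // PxIS.TFES not set
            return fis_status;                // trust the received FIS
        u32 tfd = readl(port_base + 0x20);    // PxTFD
        return tfd & 0xff;                    // PxTFD.STS mirrors Status
    }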

Thanks.  I committed this change.

-Kevin


[SeaBIOS] Re: [SeaBIOS PATCH] xen: require Xen info structure at 0x1000 to detect Xen

2023-02-01 Thread Kevin O'Connor
On Fri, Jan 20, 2023 at 11:33:19AM +, David Woodhouse wrote:
> From: David Woodhouse 
> 
> When running under Xen, hvmloader places a table at 0x1000 with the e820
> information and BIOS tables. If this isn't present, SeaBIOS will 
> currently panic.
> 
> We now have support for running Xen guests natively in QEMU/KVM, which
> boots SeaBIOS directly instead of via hvmloader, and does not provide
> the same structure.
> 
> As it happens, this doesn't matter on first boot, because although we
> set PlatformRunningOn to PF_QEMU|PF_XEN, reading it back again still
> gives zero. Presumably because in true Xen, this is all already RAM. But
> in QEMU with a faithfully-emulated PAM config in the host bridge, it's
> still in ROM mode at this point so we don't see what we've just written.
> 
> On reboot, however, the region *is* set to RAM mode and we do see the
> updated value of PlatformRunningOn, do manage to remember that we've
> detected Xen in CPUID, and hit the panic.
> 
> It's not trivial to detect QEMU vs. real Xen at the time xen_preinit()
> runs, because it's so early. We can't even make a XENVER_extraversion
> hypercall to look for hints, because we haven't set up the hypercall
> page (and don't have an allocator to give us a page in which to do so).
> 
> So just make Xen detection contingent on the info structure being
> present. If it wasn't, we were going to panic anyway. That leaves us
> taking the standard QEMU init path for Xen guests in native QEMU,
> which is just fine.

Thanks.  I committed this change.

-Kevin


[SeaBIOS] Re: [SeaBIOS PATCH] xen: require Xen info structure at 0x1000 to detect Xen

2023-01-26 Thread Kevin O'Connor
On Fri, Jan 20, 2023 at 11:33:19AM +, David Woodhouse wrote:
> From: David Woodhouse 
> 
> When running under Xen, hvmloader places a table at 0x1000 with the e820
> information and BIOS tables. If this isn't present, SeaBIOS will 
> currently panic.
> 
> We now have support for running Xen guests natively in QEMU/KVM, which
> boots SeaBIOS directly instead of via hvmloader, and does not provide
> the same structure.
> 
> As it happens, this doesn't matter on first boot, because although we
> set PlatformRunningOn to PF_QEMU|PF_XEN, reading it back again still
> gives zero. Presumably because in true Xen, this is all already RAM. But
> in QEMU with a faithfully-emulated PAM config in the host bridge, it's
> still in ROM mode at this point so we don't see what we've just written.
> 
> On reboot, however, the region *is* set to RAM mode and we do see the
> updated value of PlatformRunningOn, do manage to remember that we've
> detected Xen in CPUID, and hit the panic.
> 
> It's not trivial to detect QEMU vs. real Xen at the time xen_preinit()
> runs, because it's so early. We can't even make a XENVER_extraversion
> hypercall to look for hints, because we haven't set up the hypercall
> page (and don't have an allocator to give us a page in which to do so).
> 
> So just make Xen detection contingent on the info structure being
> present. If it wasn't, we were going to panic anyway. That leaves us
> taking the standard QEMU init path for Xen guests in native QEMU,
> which is just fine.
> 
> Untested on actual Xen but ObviouslyCorrect™.

Thanks.  I don't have a way to test this, but it looks fine to me.
I'll give a few more days to see if others have comments, and
otherwise look to commit.

Cheers,
-Kevin


[SeaBIOS] Re: [PATCH] timer: Configure timer1 to a standard settings

2022-08-05 Thread Kevin O'Connor
On Wed, Jul 13, 2022 at 03:25:17AM +0200, Petr Cvek wrote:
> ASPI2DOS.SYS and KEYB.COM from Win 98 SE installation CD (and most likely
> other DOS versions too) depend on I/O port 0x61 bit 4 to be toggled. This
> requires timer 1 (I/O 0x41, legacy DRAM refresh) to be correctly set.
> Also Intel ICH7 Family Datasheet, chapter 5.8 states:
> 
> Programming the counter to anything other than Mode 2 will result in
> undefined behavior for the REF_TOGGLE bit.
> 
> Failing to have the timer 1 configured indeed causes affected OSes to
> freeze during the boot.

Thanks.  In general, I don't see an issue with initializing standard
hardware.  However, a change like this could have a subtle impact on
existing installations.  So, I'd like to have a better understanding
of this change.

Did the above compatibility issue occur on coreboot or on QEMU?  If on
coreboot, can you check if the problem exists on QEMU?  If it isn't an
issue on QEMU, do you know why?  Finally, have you found any documents
that describe how timer1 is intended to be configured on legacy
systems?

Thanks again,
-Kevin


> 
> Signed-off-by: Petr Cvek 
> ---
>  src/hw/timer.c | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/src/hw/timer.c b/src/hw/timer.c
> index b6f102e..b272f44 100644
> --- a/src/hw/timer.c
> +++ b/src/hw/timer.c
> @@ -280,4 +280,10 @@ pit_setup(void)
>  // maximum count of H = 18.2Hz
>  outb(0x0, PORT_PIT_COUNTER0);
>  outb(0x0, PORT_PIT_COUNTER0);
> +
> +// timer1: binary count, 16bit count, mode 2
> +outb(PM_SEL_TIMER1|PM_ACCESS_WORD|PM_MODE2|PM_CNT_BINARY, PORT_PIT_MODE);
> +// maximum count of 0012H = 66.3kHz
> +outb(0x12, PORT_PIT_COUNTER1);
> +outb(0x0, PORT_PIT_COUNTER1);
>  }
> -- 
> 2.37.0
> 


[SeaBIOS] Re: [PATCH] floppy: Support CMOS setting outside QEMU

2022-08-05 Thread Kevin O'Connor
On Wed, Jul 13, 2022 at 03:24:49AM +0200, Petr Cvek wrote:
> SeaBIOS can be used for booting legacy OSes, and Linux is still using
> CMOS address 0x10 to configure the floppy controller. Under these assumptions
> it makes sense to allow boot from CMOS defined floppy drives.

There was never really a standard for the layout of CMOS nor for the
encoding of floppy type.  Currently, SeaBIOS doesn't use CMOS for
anything when configured for coreboot mode and I think we should keep
it that way.  If you have a machine with a floppy drive that you'd
like to use with coreboot+SeaBIOS then you can set the "etc/floppy0"
or "etc/floppy1" cbfs files to activate support in SeaBIOS.

-Kevin


[SeaBIOS] Re: [PATCH] virtio-blk: use larger default request size

2022-07-07 Thread Kevin O'Connor
On Thu, Jul 07, 2022 at 12:06:16PM +0200, Gerd Hoffmann wrote:
> Bump default from 8 to 64 blocks.  Using 8 by default leads
> to requests being split on qemu, which slows down boot.
> 
> Some (temporary) debug logging added showed that almost all
> requests on a standard fedora install are less than 64 blocks,
> so that should bring us back to 1.15 performance levels.

Thanks.  Looks fine to me.

-Kevin


[SeaBIOS] Re: [PATCH 3/3] handle_13: Only terminate El Torito disk emulation when active

2022-06-02 Thread Kevin O'Connor
On Fri, May 20, 2022 at 03:19:03PM +, Lev Kujawski wrote:
> In accord with the El Torito specification[1], SeaBIOS now signals
> failure (AH=1 and CF) whenever attempts are made to terminate disk
> emulation (0x4B) for non-emulated drives. The prior behavior of always
> returning success caused the Solaris 9 Device Configuration
> Assistant (DCA) diskette (d1_image) to fail to boot as it was unable
> to access the non-existent underlying drive.
> 
> References:
> 
> [1]:  "CF - Clear if system released
>CF - Set if system not in emulation mode"
> 
>  C. E. Stevens and S. Merkin, '"El Torito" Bootable CD-ROM Format
>  Specification', Phoenix Technologies and IBM, 1994, p. 17.
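
Schematically, the described change makes the terminate-emulation path
(int 13h function 4Bh) fail up front when no emulation is active; the
emulation_active flag and surrounding plumbing below are placeholders, not
the literal patch:

    static void
    cdemu_134b_sketch(struct bregs *regs, int emulation_active)
    {
        if (!emulation_active) {
            // El Torito: report failure (sets AH and CF) for drives that
            // are not being emulated.
            disk_ret(regs, DISK_RET_EPARAM);
            return;
        }
        // ... fill the specification packet and release the emulation ...
        disk_ret(regs, DISK_RET_SUCCESS);
    }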

Thanks.

I noticed a bit of "code movement" in this patch.  Is the only
functional change to return DISK_RET_EPARAM when 134b is invoked on a
non-emulated drive?

Cheers,
-Kevin


[SeaBIOS] Re: [PATCH 1/3] handle_1591: [INT15/AH=91h] Always set AH=0 and clear CF

2022-06-02 Thread Kevin O'Connor
On Fri, May 20, 2022 at 03:19:01PM +, Lev Kujawski wrote:
> References:
>   R. Brown, "INT 15 - OS HOOK - DEVICE POST (AT,PS)", in
>   The x86 Interrupt List. https://www.cs.cmu.edu/~ralf/files.html
>   [Updated: 2018-02-28].
> 
>   F. v. Gilluwe, The Undocumented PC. Boston, MA: Addison-Wesley,
>   1997, p. 780.

Thanks.  Is there some program that now works because of this change?
We use "rbil" quite a bit as a reference, but we also know it is not a
complete reference.  So, it helps to know what real-world programs a
change is known to impact.  Same comment on patch 2.

Cheers,
-Kevin


[SeaBIOS] Re: [PATCH v4 0/2] use large ZoneHigh when there is enough memory

2022-04-27 Thread Kevin O'Connor
On Wed, Apr 27, 2022 at 09:22:02AM +0200, Gerd Hoffmann wrote:
> v4:
>  - make calculations more robust.
> 
> v3:
>  - check size instead of end address.

The series looks good to me.

-Kevin


[SeaBIOS] Re: [PATCH v3 2/2] malloc: use large ZoneHigh when there is enough memory

2022-04-27 Thread Kevin O'Connor
On Wed, Apr 27, 2022 at 08:26:35AM +0200, Gerd Hoffmann wrote:
> > >  if (!highram_start) {
> > > +if (e - s > BUILD_MAX_HIGHTABLE * 16)
> > > +highram_size = BUILD_MAX_HIGHTABLE;
> > >  u32 newe = ALIGN_DOWN(e - highram_size, MALLOC_MIN_ALIGN);
> > >  if (newe <= e && newe >= s) {
> > >  highram_start = newe;
> > 
> > Thanks, but I'm still seeing a corner case where this fails - if
> > ALIGN_DOWN() falls below s.
> 
> > Given that the code requires the memory block to be 16x the size of the
> zonehigh allocation (i.e. use 16M in case the block is 256M or larger)
> I fail to see how any reasonable alignment requirement can make that
> fail ...

My mistake.

I do think your new v4 version of the code is easier to understand
though.

Cheers,
-Kevin


[SeaBIOS] Re: [PATCH v3 2/2] malloc: use large ZoneHigh when there is enough memory

2022-04-26 Thread Kevin O'Connor
On Tue, Apr 26, 2022 at 10:56:42AM +0200, Gerd Hoffmann wrote:
> In case there is enough memory installed use a large ZoneHigh.
> 
> Signed-off-by: Gerd Hoffmann 
> ---
>  src/config.h | 3 ++-
>  src/malloc.c | 4 +++-
>  2 files changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/src/config.h b/src/config.h
> index 93c8dbc2d530..9abe355b6c47 100644
> --- a/src/config.h
> +++ b/src/config.h
> @@ -17,7 +17,8 @@
>  // Maximum number of map entries in the e820 map
>  #define BUILD_MAX_E820 32
>  // Space to reserve in high-memory for tables
> -#define BUILD_MAX_HIGHTABLE (256*1024)
> +#define BUILD_MIN_HIGHTABLE (256*1024)
> +#define BUILD_MAX_HIGHTABLE (16*1024*1024)
>  // Largest supported externaly facing drive id
>  #define BUILD_MAX_EXTDRIVE 16
>  // Number of bytes the smbios may be and still live in the f-segment
> diff --git a/src/malloc.c b/src/malloc.c
> index 7fa50a85595b..967e000f51ba 100644
> --- a/src/malloc.c
> +++ b/src/malloc.c
> @@ -423,7 +423,7 @@ malloc_preinit(void)
>  
>  // Populate temp high ram
>  u32 highram_start = 0;
> -u32 highram_size = BUILD_MAX_HIGHTABLE;
> +u32 highram_size = BUILD_MIN_HIGHTABLE;
>  int i;
>  for (i=e820_count-1; i>=0; i--) {
>  struct e820entry *en = &e820_list[i];
> @@ -434,6 +434,8 @@ malloc_preinit(void)
>  continue;
>  u32 s = en->start, e = end;
>  if (!highram_start) {
> +if (e - s > BUILD_MAX_HIGHTABLE * 16)
> +highram_size = BUILD_MAX_HIGHTABLE;
>  u32 newe = ALIGN_DOWN(e - highram_size, MALLOC_MIN_ALIGN);
>  if (newe <= e && newe >= s) {
>  highram_start = newe;

Thanks, but I'm still seeing a corner case where this fails - if
ALIGN_DOWN() falls below s.  In general, it seems odd to assign
highram_size with the possibility of not assigning highram_start.

Cheers,
-Kevin


[SeaBIOS] Re: [PATCH v2 2/2] malloc: use large ZoneHigh when there is enough memory

2022-04-25 Thread Kevin O'Connor
On Mon, Apr 25, 2022 at 09:41:45AM +0200, Gerd Hoffmann wrote:
> In case there is enough memory installed use a large ZoneHigh.
> 
> Signed-off-by: Gerd Hoffmann 
> ---
>  src/config.h | 3 ++-
>  src/malloc.c | 4 +++-
>  2 files changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/src/config.h b/src/config.h
> index 93c8dbc2d530..9abe355b6c47 100644
> --- a/src/config.h
> +++ b/src/config.h
> @@ -17,7 +17,8 @@
>  // Maximum number of map entries in the e820 map
>  #define BUILD_MAX_E820 32
>  // Space to reserve in high-memory for tables
> -#define BUILD_MAX_HIGHTABLE (256*1024)
> +#define BUILD_MIN_HIGHTABLE (256*1024)
> +#define BUILD_MAX_HIGHTABLE (16*1024*1024)
>  // Largest supported externaly facing drive id
>  #define BUILD_MAX_EXTDRIVE 16
>  // Number of bytes the smbios may be and still live in the f-segment
> diff --git a/src/malloc.c b/src/malloc.c
> index 7fa50a85595b..553164d53a38 100644
> --- a/src/malloc.c
> +++ b/src/malloc.c
> @@ -423,7 +423,7 @@ malloc_preinit(void)
>  
>  // Populate temp high ram
>  u32 highram_start = 0;
> -u32 highram_size = BUILD_MAX_HIGHTABLE;
> +u32 highram_size = BUILD_MIN_HIGHTABLE;
>  int i;
>  for (i=e820_count-1; i>=0; i--) {
>  struct e820entry *en = &e820_list[i];
> @@ -434,6 +434,8 @@ malloc_preinit(void)
>  continue;
>  u32 s = en->start, e = end;
>  if (!highram_start) {
> +if (e > BUILD_MAX_HIGHTABLE * 16)
> +highram_size = BUILD_MAX_HIGHTABLE;
>  u32 newe = ALIGN_DOWN(e - highram_size, MALLOC_MIN_ALIGN);
>  if (newe <= e && newe >= s) {
>  highram_start = newe;

Thanks.  At a high-level, it looks fine.  However, I think the above
might introduce a corner case where a fragmented e820 might fail to
find any ZoneHigh.

Maybe something like the following instead:

 u32 newe = ALIGN_DOWN(e - BUILD_MIN_HIGHTABLE, MALLOC_MIN_ALIGN);
 if (newe <= e && newe >= s) {
 if (newe - s >= 20*1024*1024) {
 highram_size = 16*1024*1024;
 newe -= highram_size - BUILD_MIN_HIGHTABLE;
 }
 highram_start = e = newe;
 }

-Kevin


[SeaBIOS] Re: Floppy setup

2022-04-25 Thread Kevin O'Connor
On Sun, Apr 24, 2022 at 07:29:56PM -0400, Keith Hui wrote:
> (Looping in the SeaBIOS mailing list again.)
> 
> See this: https://www.seabios.org/Runtime_config
> 
> It appears CMOS locations are a legacy artifact and files within the
> cbfs image (eg. etc/floppy0) are the preferred way of passing SeaBIOS
> settings. If you want a proper fix with the value stored in CMOS (and
> be configurable via nvramtool), you'll need to modify both SeaBIOS and
> the coreboot option table layout for your board. If you need help with
> coreboot and are not subscribed to that mailing list, I can reach out
> and loop you in.
> 
> @Kevin - What are we to do for SeaBIOS settings that we may want
> to change at runtime eg. via nvramtool? Are there any such settings?

The floppy drive type corresponds to the floppy drive hardware
attached to the machine, so it's not something that should change
frequently.  Storing it in flash (as etc/floppy0) I think should be
sufficient.

I, personally, wouldn't recommend using cmos for anything, as using
flash is notably more flexible.  That said, I suppose some protocol
could be worked out between coreboot and seabios.  It seems odd to go
through all that work for ancient floppy drives though.

-Kevin


[SeaBIOS] Re: [PATCH] memory: reserve more space for ZoneHigh

2022-04-22 Thread Kevin O'Connor
On Fri, Apr 22, 2022 at 01:55:56PM +0200, Gerd Hoffmann wrote:
> Bump BUILD_MAX_HIGHTABLE from 256k to 1M to avoid running
> out of memory with very large smbios tables.
> 
> Signed-off-by: Gerd Hoffmann 
> ---
>  src/config.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/config.h b/src/config.h
> index 93c8dbc2d530..f4b19898a1f0 100644
> --- a/src/config.h
> +++ b/src/config.h
> @@ -17,7 +17,7 @@
>  // Maximum number of map entries in the e820 map
>  #define BUILD_MAX_E820 32
>  // Space to reserve in high-memory for tables
> -#define BUILD_MAX_HIGHTABLE (256*1024)
> +#define BUILD_MAX_HIGHTABLE (1024*1024)

Alas, I think this would break things if QEMU were started with
exactly 2M of ram.

Maybe an alternative would be to increase it to 512K of ram.

Another possibility might be to check if there is at least 16M of ram
and set aside 4M in that case (and continue to use 256K for low-mem
machines).

Cheers,
-Kevin


[SeaBIOS] Re: [PATCH 0/2] improve ZoneHigh memory management

2022-04-21 Thread Kevin O'Connor
On Thu, Apr 21, 2022 at 11:33:24AM +0200, Gerd Hoffmann wrote:
> When running out of memory, get a chunk of memory from ZoneTmpHigh to
> expand ZoneHigh.  Drop similar logic from pmm code because it's not
> needed any more.
> 
> This fixes some scalability problems, for example with lots of vcpus,
> where seabios runs out of memory due to large smbios/acpi tables.

I'm not sure this is a good idea, because it could cause subtle issues
with reproducibility.

SeaBIOS does not have a deterministic ordering to memory allocations
because of its implementation of "threads" (coroutines).  Should
permanent allocations need to spill over to ZoneTmpHigh then it will
likely result in a fragmented e820 memory map.  In that case, there is
a good chance that different bootups will have different e820 maps,
which may result in different OS behavior.

The goal of ZoneHigh was to be the maximum amount of space needed.
Unused space gets returned to the e820 map before boot, so there is
generally not much harm in increasing it.  Order of allocations in the
ZoneHigh region is less important because we generally don't free
allocations in that zone.

IIRC, the pmm ZoneTmpHigh hack was primarily intended for ridiculously
large allocations (like framebuffers) where allocating from the e820
map was the only feasible solution.

What's using the ZoneHigh region that is so large that we need to
expand it?

-Kevin


[SeaBIOS] Re: [PATCH] MP: fix mptable interrupt source generation for pci devices

2022-04-19 Thread Kevin O'Connor
On Tue, Apr 19, 2022 at 11:46:59AM +0200, Gerd Hoffmann wrote:
> > > diff --git a/src/fw/mptable.c b/src/fw/mptable.c
> > > index 47385cc..3a7b02f 100644
> > > --- a/src/fw/mptable.c
> > > +++ b/src/fw/mptable.c
> > 
> > If you look at the top of that file you'll see the notice:
> > 
> > // DO NOT ADD NEW FEATURES HERE.  (See paravirt.c / biostables.c instead.)
> > 
> > The mptable.c file in seabios is no longer modified.  If a guest OS
> > needs a different mptable then it will be necessary to extend qemu to
> > pass the desired mptable to seabios (as is done for smbios and acpi).
> 
> Maybe it's time to think about removing this code?  Current state of
> affairs in qemu is:
> 
> # qemu-default -M help
> Supported machines are:
> [ ... ]
> pc-i440fx-2.1Standard PC (i440FX + PIIX, 1996)
> pc-i440fx-2.0Standard PC (i440FX + PIIX, 1996)
> pc-i440fx-1.7Standard PC (i440FX + PIIX, 1996) (deprecated)
> pc-i440fx-1.6Standard PC (i440FX + PIIX, 1996) (deprecated)
> pc-i440fx-1.5Standard PC (i440FX + PIIX, 1996) (deprecated)
> pc-i440fx-1.4Standard PC (i440FX + PIIX, 1996) (deprecated)
> [ ... ]
> 
> So qemu plans to remove support for machine types 1.7 and older; this
> will probably happen later this year in the 7.1 or 7.2 release.
> 
> qemu added support for passing acpi tables to the firmware in version
> 1.7, so the compatibility code is only used for machine types 1.6 and
> older.
> 
> So in a year or so the only purpose the compatibility code will serve
> is confusing people ...

I know qemu produces the acpi and smbios tables.  I wasn't aware it
would currently pass in the mptable to seabios.  (Nor the PIR table.)

I'm okay with removing support from seabios.  As a first step we can
change the Kconfig defaults to not produce the tables.

-Kevin


[SeaBIOS] Re: [PATCH] MP: fix mptable interrupt source generation for pci devices

2022-04-15 Thread Kevin O'Connor
On Fri, Apr 15, 2022 at 04:33:34PM +0530, Jay Khandkar wrote:
> Set the correct IOAPIC INTIN pin number for pci bus interrupt
> sources during MP spec table generation (on emulators). Currently,
> the pin number is set to the interrupt line field in the device's
> configuration space, which is set to either IRQ 10 or 11 as the
> boot rom driver expects. This works since qemu maps all ISA
> compatible PIC IRQs onto IOAPIC pins 0-15, including PIRQs. But it
> will break if, for some reason, the IRQ is routed to something
> other than the INTLINE value using, for eg. ACPI _CRS. This patch
> ensures the pin number is set to the correct value (16-23) that the
> INTx pin is routed to in APIC mode, in agreement with the ACPI _PRT
> provided routing.
> 
> Tested on a Linux 5.17.2 guest on qemu with the pci=nomsi and
> acpi=noirq boot parameters to force MP table parsing for interrupt
> sources.
> 
> Signed-off-by: Jay Khandkar 
> ---
>  src/fw/mptable.c | 30 --
>  1 file changed, 28 insertions(+), 2 deletions(-)
> 
> diff --git a/src/fw/mptable.c b/src/fw/mptable.c
> index 47385cc..3a7b02f 100644
> --- a/src/fw/mptable.c
> +++ b/src/fw/mptable.c

If you look at the top of that file you'll see the notice:

// DO NOT ADD NEW FEATURES HERE.  (See paravirt.c / biostables.c instead.)

The mptable.c file in seabios is no longer modified.  If a guest OS
needs a different mptable then it will be necessary to extend qemu to
pass the desired mptable to seabios (as is done for smbios and acpi).

Cheers,
-Kevin


[SeaBIOS] Re: [PATCH v3 0/2] Fix reset issue with QEMU -machine q35

2022-04-04 Thread Kevin O'Connor
On Sat, Apr 02, 2022 at 08:24:41PM +0200, Volker Rümelin wrote:
> This reset issue was reported on the QEMU issue tracker at
> https://gitlab.com/qemu-project/qemu/-/issues/766
> 
> A reset with QEMU -machine q35 -accel tcg leads to a reset loop
> and with -machine q35 -accel kvm the reset only works because
> KVM ignores the read-only semantics of the C-F segments.
> 
> Details about the issue are in "reset: force standard PCI
> configuration access".

Thanks.  I committed this series.

-Kevin


[SeaBIOS] Re: [PATCH v2 2/2] reset: force standard PCI configuration access

2022-03-30 Thread Kevin O'Connor
On Sat, Mar 26, 2022 at 10:33:19AM +0100, Volker Rümelin wrote:
> After a reset of a QEMU -machine q35 guest, the PCI Express
> Enhanced Configuration Mechanism is disabled and the variable
> mmconfig no longer matches the configuration register PCIEXBAR
> of the Q35 chipset. Until the variable mmconfig is reset to 0,
> all pci_config_*() functions no longer work.
> 
> The variable mmconfig is located in the read-only F-segment.
> To reset it the pci_config_*() functions are needed, but they
> do not work.
> 
> Replace all pci_config_*() calls with Standard PCI Configuration
> Mechanism pci_ioconfig_*() calls until mmconfig is reset to 0.
> 
> This fixes
> 
> In resume (status=0)
> In 32bit resume
> Attempting a hard reboot
> Unable to unlock ram - bridge not found
> 
> and a reset loop with QEMU -accel tcg.
> 
> Reviewed-by: Gerd Hoffmann 
> Signed-off-by: Volker Rümelin 
> ---
>  src/fw/shadow.c | 16 +---
>  src/hw/pci.c| 32 
>  src/hw/pci.h|  7 +++
>  3 files changed, 48 insertions(+), 7 deletions(-)
> 
> diff --git a/src/fw/shadow.c b/src/fw/shadow.c
> index 4c627a8..0722df2 100644
> --- a/src/fw/shadow.c
> +++ b/src/fw/shadow.c
> @@ -32,8 +32,8 @@ __make_bios_writable_intel(u16 bdf, u32 pam0)
>  {
>  // Read in current PAM settings from pci config space
>  union pamdata_u pamdata;
> -pamdata.data32[0] = pci_config_readl(bdf, ALIGN_DOWN(pam0, 4));
> -pamdata.data32[1] = pci_config_readl(bdf, ALIGN_DOWN(pam0, 4) + 4);
> +pamdata.data32[0] = pci_ioconfig_readl(bdf, ALIGN_DOWN(pam0, 4));
> +pamdata.data32[1] = pci_ioconfig_readl(bdf, ALIGN_DOWN(pam0, 4) + 4);
>  u8 *pam = &pamdata.data8[pam0 & 0x03];
>  
>  // Make ram from 0xc-0xf writable
> @@ -46,8 +46,8 @@ __make_bios_writable_intel(u16 bdf, u32 pam0)
>  pam[0] = 0x30;
>  
>  // Write PAM settings back to pci config space
> -pci_config_writel(bdf, ALIGN_DOWN(pam0, 4), pamdata.data32[0]);
> -pci_config_writel(bdf, ALIGN_DOWN(pam0, 4) + 4, pamdata.data32[1]);
> +pci_ioconfig_writel(bdf, ALIGN_DOWN(pam0, 4), pamdata.data32[0]);
> +pci_ioconfig_writel(bdf, ALIGN_DOWN(pam0, 4) + 4, pamdata.data32[1]);
>  
>  if (!ram_present)
>  // Copy bios.
> @@ -59,7 +59,7 @@ __make_bios_writable_intel(u16 bdf, u32 pam0)
>  static void
>  make_bios_writable_intel(u16 bdf, u32 pam0)
>  {
> -int reg = pci_config_readb(bdf, pam0);
> +int reg = pci_ioconfig_readb(bdf, pam0);
>  if (!(reg & 0x10)) {
>  // QEMU doesn't fully implement the piix shadow capabilities -
>  // if ram isn't backing the bios segment when shadowing is
> @@ -125,8 +125,8 @@ make_bios_writable(void)
>  // At this point, statically allocated variables can't be written,
>  // so do this search manually.
>  int bdf;
> -foreachbdf(bdf, 0) {
> -u32 vendev = pci_config_readl(bdf, PCI_VENDOR_ID);
> +pci_ioconfig_foreachbdf(bdf, 0) {
> +u32 vendev = pci_ioconfig_readl(bdf, PCI_VENDOR_ID);
>  u16 vendor = vendev & 0x, device = vendev >> 16;
>  if (vendor == PCI_VENDOR_ID_INTEL
>  && device == PCI_DEVICE_ID_INTEL_82441) {
> @@ -183,10 +183,12 @@ qemu_reboot(void)
>  apm_shutdown();
>  }
>  make_bios_writable();
> +pci_disable_mmconfig();
>  HaveRunPost = 3;
>  } else {
>  // Copy the BIOS making sure to only reset HaveRunPost at end
>  make_bios_writable();
> +pci_disable_mmconfig();

I don't understand these two calls to pci_disable_mmconfig() as the
f-segment is about to be overwritten and the machine is about to be
turned off.

Otherwise, looks fine to me.

Thanks,
-Kevin


[SeaBIOS] Re: [ANNOUNCE] SeaBIOS 1.16.0

2022-03-15 Thread Kevin O'Connor
On Tue, Mar 15, 2022 at 10:28:12AM +0300, Michael Tokarev wrote:
> 02.03.2022 05:19, Kevin O'Connor wrote:
> > The 1.16.0 version of SeaBIOS has now been released.  For more
> > information on the release, please see:
> 
> Hi Kevin!
> This release seems to be missing at https://www.seabios.org/downloads/ -
> there are binaries in there, but not the source. Is it intentional?

Thanks.  That was an error on my part.  The file should be there now.

-Kevin


[SeaBIOS] [ANNOUNCE] SeaBIOS 1.16.0

2022-03-01 Thread Kevin O'Connor
The 1.16.0 version of SeaBIOS has now been released.  For more
information on the release, please see:

https://seabios.org/Releases


New in this release:

* SMBIOS v3.0 support on QEMU
* Several bug fixes and code cleanups.


For information on obtaining SeaBIOS, please see:

https://seabios.org/Download


= git shortlog -n rel-1.15.0..rel-1.16.0 =

Eduardo Habkost (19):
  biostables: copy_fseg_table() function
  util.h: Delete unused get_smbios_entry_point() prototype
  smbios: Rename code specific for SMBIOS 2.1 entry points
  smbios: Generic smbios_next() function
  smbios: smbios_get_tables() function
  smbios: Use smbios_get_tables()/smbios_next() at display_uuid()
  smbios: smbios_major_version()/smbios_minor_version() helpers
  tpm: Use smbios_get_tables()
  csm: Don't check SMBios21Addr before calling copy_smbios_21()
  smbios: Make SMBios21Addr variable static
  smbios: Use smbios_next() at smbios_romfile_setup()
  smbios: Extract SMBIOS table building code to separate function
  smbios: Make smbios_build_tables() more generic
  smbios: smbios_21_setup_entry_point() function
  smbios: Make some smbios_build_tables() arguments optional
  smbios: Make smbios_build_tables() ready for 64-bit tables
  smbios: copy_smbios_30() function
  smbios: Support SMBIOS 3.0 entry point at copy_table()
  smbios: Support SMBIOS 3.0 entry point at smbios_romfile_setup()

Kevin O'Connor (13):
  vgasrc: Don't use VAR16 in header files to fix gcc warning
  memmap: Fix gcc out-of-bounds warning
  readserial: Improve Python3 compatibility
  scripts: Remove python23compat.py
  smm: Suppress gcc array-bounds warnings
  nvme: Rework nvme_io_readwrite() to return -1 on error
  nvme: Add nvme_bounce_xfer() helper function
  nvme: Convert nvme_build_prpl() to nvme_prpl_xfer()
  nvme: Pass prp1 and prp2 directly to nvme_io_xfer()
  nvme: Build the page list in the existing dma buffer
  nvme: Only allocate one dma bounce buffer for all nvme drives
  sercon: Fix missing GET_LOW() to access rx_bytes
  docs: Note v1.16.0 release

Andy Pei (3):
  virtio-blk: add feature VIRTIO_BLK_F_SIZE_MAX and VIRTIO_BLK_F_SEG_MAX
  virtio-blk: abstract a function named virtio_blk_op_one_segment to handle r/w request
  virtio-blk: split large IO according to size_max

Igor Mammedov (2):
  pci: reserve resources for pcie-pci-bridge to fix regressed hotplug on q35
  pci: let firmware reserve IO for pcie-pci-bridge

Florian Larysch (1):
  nvme: fix LBA format data structure

Gerd Hoffmann (1):
  svgamodes: add standard 4k modes

Jan Beulich via SeaBIOS (1):
  nvme: avoid use-after-free in nvme_controller_enable()
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Next release

2022-02-23 Thread Kevin O'Connor
I'm currently planning to make a release early next week.  This will
be a v1.16.0 release.

If anyone knows of any critical bugs or deficiencies with the current
code, please let me know.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH] nvme: fix LBA format data structure

2022-02-03 Thread Kevin O'Connor
On Sun, Jan 23, 2022 at 05:43:57PM +0100, Florian Larysch wrote:
> The LBA Format Data structure is dword-sized, but struct nvme_lba_format
> erroneously contains an additional member, misaligning all LBAF
> descriptors after the first and causing them to be misinterpreted.
> Remove it.

Thanks.  I committed this change.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH] nvme: fix LBA format data structure

2022-02-02 Thread Kevin O'Connor
On Wed, Feb 02, 2022 at 11:50:50AM +0100, Alexander Graf wrote:
> 
> On 02.02.22 02:52, Kevin O'Connor wrote:
> > On Tue, Feb 01, 2022 at 08:39:10PM +0100, Florian Larysch wrote:
> > > On Thu, Jan 27, 2022 at 11:37:52AM -0500, Kevin O'Connor wrote:
> > > > Thanks.  I don't know enough about NVMe to review this patch though.
> > > > Maybe Julian or Alex could comment?
> > > Happy to hear their comments on this. However, I can also try to explain
> > > the reasoning in a bit more detail.
> > > 
> > > This follows from the NVMe spec[1]: The Identify command returns a data
> > > structure that contains packed LBAF records (Figure 249, starting at
> > > offset 131). This is represented by struct nvme_identify_ns in the
> > > SeaBIOS code.
> > > 
> > > Figure 250 gives the structure of these records and this is where the
> > > aforementioned discrepancy lies.
> > Thanks.  I agree that this should be fixed in SeaBIOS.
> > 
> > However, your patch removes the "reserved" field, which doesn't look
> > correct.  It sounds like the "res" struct member should remain, but
> > the "lbads" and "rp" members should be turned into a new "lbads_rp"
> > member.  Also, the nvme.c code should be updated so that the read of
> > lbads performs the proper bit masking.
> 
> 
> Looking at the spec, lbads is a full well aligned 8 bits. The collision here
> is between rp and res, with rp taking only 2 bits and res the remaining 6.

Ah, I misread the spec.  So, the original patch removing the "res"
struct member should be fine.

> Is using bitfields acceptable in SeaBIOS? If so, the patch below should
> automatically give us masking as well.

I'd recommend against using C bitfields to access hardware registers.
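
For illustration only (a rough sketch, not from the patch under discussion),
explicit masking keeps the register layout visible at the call site.  The
field widths follow the Figure 250 layout mentioned above; u8/u16 are the
usual SeaBIOS typedefs:

    #include "types.h" // u8, u16

    struct nvme_lba_format {
        u16 ms;
        u8  lbads;
        u8  rp;    /* bits 1:0 = relative performance, bits 7:2 reserved */
    };

    static inline u8 nvme_lbaf_rp(const struct nvme_lba_format *f)
    {
        return f->rp & 0x3;    /* mask off the reserved bits */
    }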

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH] nvme: fix LBA format data structure

2022-02-01 Thread Kevin O'Connor
On Tue, Feb 01, 2022 at 08:39:10PM +0100, Florian Larysch wrote:
> On Thu, Jan 27, 2022 at 11:37:52AM -0500, Kevin O'Connor wrote:
> > Thanks.  I don't know enough about NVMe to review this patch though.
> > Maybe Julian or Alex could comment?
> 
> Happy to hear their comments on this. However, I can also try to explain
> the reasoning in a bit more detail.
> 
> This follows from the NVMe spec[1]: The Identify command returns a data
> structure that contains packed LBAF records (Figure 249, starting at
> offset 131). This is represented by struct nvme_identify_ns in the
> SeaBIOS code.
> 
> Figure 250 gives the structure of these records and this is where the
> aforementioned discrepancy lies.

Thanks.  I agree that this should be fixed in SeaBIOS.

However, your patch removes the "reserved" field, which doesn't look
correct.  It sounds like the "res" struct member should remain, but
the "lbads" and "rp" members should be turned into a new "lbads_rp"
member.  Also, the nvme.c code should be updated so that the read of
lbads performs the proper bit masking.

-Kevin


> 
> I think what happened was that somebody mistook the reserved 6 uppermost
> bits as a whole byte and put it into struct nvme_lba_format as such. To
> correctly parse the RP field, you'd still need to mask off these bits,
> but the code only uses the LBADS/MS fields anyway.
> 
> I encountered this on an NVMe device that had FLBAS[3:0] != 0, which
> caused nonsensical outputs of the various dprintf()s and also threw off
> everything that depends on knowing the block size. Empirically, the
> patch fixes these problems.
> 
> [1] 
> https://nvmexpress.org/wp-content/uploads/NVM-Express-1_4c-2021.06.28-Ratified.pdf
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: Cut 1.15.1 release?

2022-01-28 Thread Kevin O'Connor
On Fri, Jan 28, 2022 at 01:03:23PM +0100, Gerd Hoffmann wrote:
> On Thu, Jan 27, 2022 at 05:43:32PM +0100, Paul Menzel wrote:
> > Dear SeaBIOS folks,
> > 
> > 
> > with the latest NVMe fixes, would it make sense to tag a 1.15.1 release?
> > 
> > The 1.16.0 release is planned for end of February, as far as I understand
> > it. Maybe that could be moved up two weeks?
> 
> Going straight to 1.16.0 looks more useful to me.  I don't feel like
> rushing a 1.15.1 release with patches committed only yesterday, also
> there are lots of fixes so a 1.15.1 would have a rather big portion of
> the patches in master cherry-picked.

That makes sense to me.

To the original question - I don't see a need to move up the v1.16.0 release.
Indeed, if we continue to see bug reports then we may have to push it
back.  (Though, end of February still seems reasonable at this time.)

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH] nvme: fix LBA format data structure

2022-01-27 Thread Kevin O'Connor
On Sun, Jan 23, 2022 at 05:43:57PM +0100, Florian Larysch wrote:
> The LBA Format Data structure is dword-sized, but struct nvme_lba_format
> erroneously contains an additional member, misaligning all LBAF
> descriptors after the first and causing them to be misinterpreted.
> Remove it.

Thanks.  I don't know enough about NVMe to review this patch though.
Maybe Julian or Alex could comment?

-Kevin


> 
> Signed-off-by: Florian Larysch 
> ---
>  src/hw/nvme-int.h | 1 -
>  1 file changed, 1 deletion(-)
> 
> diff --git a/src/hw/nvme-int.h b/src/hw/nvme-int.h
> index a4c1555..5779203 100644
> --- a/src/hw/nvme-int.h
> +++ b/src/hw/nvme-int.h
> @@ -156,7 +156,6 @@ struct nvme_lba_format {
>  u16 ms;
>  u8  lbads;
>  u8  rp;
> -u8  res;
>  };
>  
>  struct nvme_identify_ns {
> -- 
> 2.34.1
> 
> ___
> SeaBIOS mailing list -- seabios@seabios.org
> To unsubscribe send an email to seabios-le...@seabios.org
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH] nvme: avoid use-after-free in nvme_controller_enable()

2022-01-27 Thread Kevin O'Connor
On Mon, Jan 24, 2022 at 10:20:53AM +0100, Jan Beulich via SeaBIOS wrote:
> Commit b68f313c9139 ("nvme: Record maximum allowed request size")
> introduced a use of "identify" past it being passed to free(). Latch the
> value of interest into a local variable.
> 
> Reported-by: Coverity (ID 1497613)
> Signed-off-by: Jan Beulich 

Thanks.  I committed this change.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH] sercon: Fix missing GET_LOW() to access rx_bytes

2022-01-27 Thread Kevin O'Connor
On Fri, Jan 21, 2022 at 11:12:35AM -0500, Kevin O'Connor wrote:
> The variable rx_bytes is marked VARLOW, but there was a missing
> GET_LOW() to access rx_bytes.  Fix by copying rx_bytes to a local
> variable and avoid the repetitive segment memory accesses.
> 
> Reported-by: Gabe Black 
> Signed-off-by: Volker Rümelin 
> Signed-off-by: Kevin O'Connor 

FYI, I committed this patch.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCHv2 0/6] Rework NVME page transfers and avoid fsegment allocation

2022-01-27 Thread Kevin O'Connor
On Fri, Jan 21, 2022 at 11:48:42AM -0500, Kevin O'Connor wrote:
> This is a resend of the previous series.  Changes since last time:
> 
> * Patch 1: Fix function comment on nvme_io_xfer()
> * Patch 3-5: Added "goto bounce" to nvme_io_xfer() as suggested by Alex
> * Patch 6: Added separate "single dma bounce buffer" patch to this series
> * Patch 6: Add checking for malloc failure
> 
> -Kevin

FYI, I committed this series.

Thanks.
-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH 5/5] nvme: Build the page list in the existing dma buffer

2022-01-22 Thread Kevin O'Connor
On Fri, Jan 21, 2022 at 09:44:46PM +0100, Alexander Graf wrote:
> On 21.01.22 17:54, Kevin O'Connor wrote:
> > On Fri, Jan 21, 2022 at 05:41:17PM +0100, Alexander Graf wrote:
> > > On 21.01.22 17:02, Kevin O'Connor wrote:
> > > > On Fri, Jan 21, 2022 at 03:28:33PM +0100, Alexander Graf wrote:
> > > > > On 19.01.22 19:45, Kevin O'Connor wrote:
> > > > > > if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
> > > > > > -/* We need to describe more than 2 pages, rely on PRP List 
> > > > > > */
> > > > > > -prp2 = ns->prpl;
> > > > > > +/* We need to describe more than 2 pages, build PRP List */
> > > > > > +u32 prpl_len = 0;
> > > > > > +u64 *prpl = (void*)ns->dma_buffer;
> > > > > At 15*8=120 bytes of data, do we even need to put the prpl into 
> > > > > dma_buffer
> > > > > or can we just use the stack? Will the array be guaranteed 64bit 
> > > > > aligned or
> > > > > would we need to add an attribute to it?
> > > > > 
> > > > >   u64 prpl[NVME_MAX_PRPL_ENTRIES];
> > > > > 
> > > > > Either way, I don't have strong feelings one way or another. It just 
> > > > > seems
> > > > > more natural to keep the bounce buffer purely as bounce buffer :).
> > > > In general it's possible to DMA from the stack (eg,
> > > > src/hw/ohci.c:ohci_send_pipe() ).  However, SeaBIOS doesn't guarantee
> > > > stack alignment so it would need to be done manually.  Also, I'm not
> > > > sure how tight the nvme request completion code is - if it returns
> > > > early for some reason it could cause havoc (device dma into random
> > > > memory).
> > > > 
> > > > Another option might be to make a single global prpl array, but that
> > > > would consume the memory even if nvme isn't in use.
> > > > 
> > > > FWIW though, I don't see a harm in the single ( u64 *prpl =
> > > > (void*)ns->dma_buffer ) line.
> > > 
> > > Fair, works for me. But then we probably want to also adjust
> > > MAX_PRPL_ENTRIES to match the full page we now have available, no?
> > > 
> > I don't think a BIOS disk request can be over 64KiB so I don't think
> > it matters.  I got the impression the current checks are just "sanity
> > checks".  I don't see a harm in keeping the sanity check and that size
> > is as good as any other as far as I know.
> 
> It's a bit of both: Sanity checks and code that potentially can be reused
> outside of the SeaBIOS context. So I would try as hard as possible to not
> make assumptions that we can only handle max 64kb I/O requests.

I agree.  I also think it would be good to address the two items I
raised at:
https://www.mail-archive.com/seabios@seabios.org/msg12833.html

I'd prefer if someone more familiar with nvme (and has a better
testing infrastructure) could look at the above items though.  I'm not
familiar with the nvme hardware specs and I'm just testing by
"checking if qemu starts".

I'd be inclined to go forward with the current patch series as the
above items seem orthogonal.  That is, if there is interest, I'd guess
they would be best merged on top of this series anyway.

Cheers,
-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCHv2 3/6] nvme: Convert nvme_build_prpl() to nvme_prpl_xfer()

2022-01-21 Thread Kevin O'Connor
On Fri, Jan 21, 2022 at 11:48:45AM -0500, Kevin O'Connor wrote:
> Rename nvme_build_prpl() to nvme_prpl_xfer() and directly invoke
> nvme_io_xfer() or nvme_bounce_xfer() from that function.
> 
> Signed-off-by: Kevin O'Connor 
> ---
>  src/hw/nvme-int.h |  1 -
>  src/hw/nvme.c | 46 --
>  2 files changed, 20 insertions(+), 27 deletions(-)
> 
> diff --git a/src/hw/nvme-int.h b/src/hw/nvme-int.h
> index a4c1555..9564c17 100644
> --- a/src/hw/nvme-int.h
> +++ b/src/hw/nvme-int.h
> @@ -125,7 +125,6 @@ struct nvme_namespace {
>  
>  /* Page List */
>  u32 prpl_len;
> -void *prp1;
>  u64 prpl[NVME_MAX_PRPL_ENTRIES];
>  };
>  
> diff --git a/src/hw/nvme.c b/src/hw/nvme.c
> index d656e9b..eee7d17 100644
> --- a/src/hw/nvme.c
> +++ b/src/hw/nvme.c
> @@ -498,10 +498,13 @@ static int nvme_add_prpl(struct nvme_namespace *ns, u64 
> base)
>  return 0;
>  }
>  
> -static int nvme_build_prpl(struct nvme_namespace *ns, void *op_buf, u16 
> count)
> +// Transfer data using page list (if applicable)
> +static int
> +nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
> +   int write)
>  {
>  int first_page = 1;
> -u32 base = (long)op_buf;
> +u32 base = (long)buf;
>  s32 size;
>  
>  if (count > ns->max_req_size)
> @@ -511,31 +514,32 @@ static int nvme_build_prpl(struct nvme_namespace *ns, 
> void *op_buf, u16 count)
>  
>  size = count * ns->block_size;
>  /* Special case for transfers that fit into PRP1, but are unaligned */
> -if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE)) {
> -ns->prp1 = op_buf;
> -return count;
> -}
> +if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE))
> +return nvme_io_xfer(ns, lba, buf, count, write);

Just as a "self review", I think this code path could result in
nvme_io_xfer() being called with a buffer that is not word aligned and
thus result in a DISK_RET_EBADTRACK.  It would seem using the bounce
buffer would be a better result.
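
A minimal sketch of that idea (not part of this series), reusing the
"bounce" label added below in nvme_prpl_xfer():

    size = count * ns->block_size;
    /* Special case for transfers that fit into PRP1, but are unaligned */
    if ((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE) {
        if (base & 0x3)
            /* nvme_io_xfer() insists on 4-byte alignment - bounce instead */
            goto bounce;
        return nvme_io_xfer(ns, lba, buf, count, write);
    }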


>  /* Every request has to be page aligned */
>  if (base & ~NVME_PAGE_MASK)
> -return 0;
> +goto bounce;
>  
>  /* Make sure a full block fits into the last chunk */
>  if (size & (ns->block_size - 1ULL))
> -return 0;
> +goto bounce;

Also, it's not clear to me how this branch could ever be taken as size
is assigned a few lines up as "size = count * ns->block_size".


However, in this patch series I tried to reproduce the existing
behavior.  Since I don't have a good way for testing nvme, I'm not
planning to make functional changes for the above.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH 5/5] nvme: Build the page list in the existing dma buffer

2022-01-21 Thread Kevin O'Connor
On Fri, Jan 21, 2022 at 05:41:17PM +0100, Alexander Graf wrote:
> On 21.01.22 17:02, Kevin O'Connor wrote:
> > On Fri, Jan 21, 2022 at 03:28:33PM +0100, Alexander Graf wrote:
> > > On 19.01.22 19:45, Kevin O'Connor wrote:
> > > >if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
> > > > -/* We need to describe more than 2 pages, rely on PRP List */
> > > > -prp2 = ns->prpl;
> > > > +/* We need to describe more than 2 pages, build PRP List */
> > > > +u32 prpl_len = 0;
> > > > +u64 *prpl = (void*)ns->dma_buffer;
> > > 
> > > At 15*8=120 bytes of data, do we even need to put the prpl into dma_buffer
> > > or can we just use the stack? Will the array be guaranteed 64bit aligned 
> > > or
> > > would we need to add an attribute to it?
> > > 
> > >  u64 prpl[NVME_MAX_PRPL_ENTRIES];
> > > 
> > > Either way, I don't have strong feelings one way or another. It just seems
> > > more natural to keep the bounce buffer purely as bounce buffer :).
> > In general it's possible to DMA from the stack (eg,
> > src/hw/ohci.c:ohci_send_pipe() ).  However, SeaBIOS doesn't guarantee
> > stack alignment so it would need to be done manually.  Also, I'm not
> > sure how tight the nvme request completion code is - if it returns
> > early for some reason it could cause havoc (device dma into random
> > memory).
> > 
> > Another option might be to make a single global prpl array, but that
> > would consume the memory even if nvme isn't in use.
> > 
> > FWIW though, I don't see a harm in the single ( u64 *prpl =
> > (void*)ns->dma_buffer ) line.
> 
> 
> Fair, works for me. But then we probably want to also adjust
> MAX_PRPL_ENTRIES to match the full page we now have available, no?
> 

I don't think a BIOS disk request can be over 64KiB so I don't think
it matters.  I got the impression the current checks are just "sanity
checks".  I don't see a harm in keeping the sanity check and that size
is as good as any other as far as I know.
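
(For what it's worth, the 64KiB figure matches the page-list arithmetic,
assuming the usual NVME_PAGE_SIZE of 4096: PRP1 covers the first page and
each of the NVME_MAX_PRPL_ENTRIES = 15 list entries covers one more, so a
single request spans at most 16 * 4096 = 65536 bytes, i.e. 64KiB.)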

Cheers,
-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCHv2 6/6] nvme: Only allocate one dma bounce buffer for all nvme drives

2022-01-21 Thread Kevin O'Connor
There is no need to create multiple dma bounce buffers as the BIOS
disk code isn't reentrant capable.

Also, verify that the allocation succeeds.

Signed-off-by: Kevin O'Connor 
Reviewed-by: Alexander Graf 
---
 src/hw/nvme-int.h |  3 ---
 src/hw/nvme.c | 21 +++--
 2 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/src/hw/nvme-int.h b/src/hw/nvme-int.h
index f0d717d..f9c807e 100644
--- a/src/hw/nvme-int.h
+++ b/src/hw/nvme-int.h
@@ -117,9 +117,6 @@ struct nvme_namespace {
 u32 block_size;
 u32 metadata_size;
 u32 max_req_size;
-
-/* Page aligned buffer of size NVME_PAGE_SIZE. */
-char *dma_buffer;
 };
 
 /* Data structures for NVMe admin identify commands */
diff --git a/src/hw/nvme.c b/src/hw/nvme.c
index e9c449d..3dfa0ce 100644
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -20,6 +20,9 @@
 #include "nvme.h"
 #include "nvme-int.h"
 
+// Page aligned "dma bounce buffer" of size NVME_PAGE_SIZE in high memory
+static void *nvme_dma_buffer;
+
 static void *
 zalloc_page_aligned(struct zone_s *zone, u32 size)
 {
@@ -257,6 +260,14 @@ nvme_probe_ns(struct nvme_ctrl *ctrl, u32 ns_idx, u8 mdts)
 goto free_buffer;
 }
 
+if (!nvme_dma_buffer) {
+nvme_dma_buffer = zalloc_page_aligned(, NVME_PAGE_SIZE);
+if (!nvme_dma_buffer) {
+warn_noalloc();
+goto free_buffer;
+}
+}
+
 struct nvme_namespace *ns = malloc_fseg(sizeof(*ns));
 if (!ns) {
 warn_noalloc();
@@ -294,8 +305,6 @@ nvme_probe_ns(struct nvme_ctrl *ctrl, u32 ns_idx, u8 mdts)
 ns->max_req_size = -1U;
 }
 
-ns->dma_buffer = zalloc_page_aligned(, NVME_PAGE_SIZE);
-
 char *desc = znprintf(MAXDESCSIZE, "NVMe NS %u: %llu MiB (%llu %u-byte "
   "blocks + %u-byte metadata)",
   ns_id, (ns->lba_count * ns->block_size) >> 20,
@@ -459,12 +468,12 @@ nvme_bounce_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 u16 blocks = count < max_blocks ? count : max_blocks;
 
 if (write)
-memcpy(ns->dma_buffer, buf, blocks * ns->block_size);
+memcpy(nvme_dma_buffer, buf, blocks * ns->block_size);
 
-int res = nvme_io_xfer(ns, lba, ns->dma_buffer, NULL, blocks, write);
+int res = nvme_io_xfer(ns, lba, nvme_dma_buffer, NULL, blocks, write);
 
 if (!write && res >= 0)
-memcpy(buf, ns->dma_buffer, res * ns->block_size);
+memcpy(buf, nvme_dma_buffer, res * ns->block_size);
 
 return res;
 }
@@ -498,7 +507,7 @@ nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 /* Build PRP list if we need to describe more than 2 pages */
 if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
 u32 prpl_len = 0;
-u64 *prpl = (void*)ns->dma_buffer;
+u64 *prpl = nvme_dma_buffer;
 int first_page = 1;
 for (; size > 0; base += NVME_PAGE_SIZE, size -= NVME_PAGE_SIZE) {
 if (first_page) {
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCHv2 5/6] nvme: Build the page list in the existing dma buffer

2022-01-21 Thread Kevin O'Connor
Commit 01f2736cc905d ("nvme: Pass large I/O requests as PRP lists")
introduced multi-page requests using the NVMe PRP mechanism. To store the
list and "first page to write to" hints, it added fields to the NVMe
namespace struct.

Unfortunately, that struct resides in fseg which is read-only at runtime.
While KVM ignores the read-only part and allows writes, real hardware and
TCG adhere to the semantics and ignore writes to the fseg region. The net
effect of that is that reads and writes were always happening on address 0,
unless they went through the bounce buffer logic.

This patch builds the PRP maintenance data in the existing "dma bounce
buffer" and only builds it when needed.

Fixes: 01f2736cc905d ("nvme: Pass large I/O requests as PRP lists")
Reported-by: Matt DeVillier 
Signed-off-by: Alexander Graf 
Signed-off-by: Kevin O'Connor 
---
 src/hw/nvme-int.h |  6 -
 src/hw/nvme.c | 61 +++
 2 files changed, 24 insertions(+), 43 deletions(-)

diff --git a/src/hw/nvme-int.h b/src/hw/nvme-int.h
index 9564c17..f0d717d 100644
--- a/src/hw/nvme-int.h
+++ b/src/hw/nvme-int.h
@@ -10,8 +10,6 @@
 #include "types.h" // u32
 #include "pcidevice.h" // struct pci_device
 
-#define NVME_MAX_PRPL_ENTRIES 15 /* Allows requests up to 64kb */
-
 /* Data structures */
 
 /* The register file of a NVMe host controller. This struct follows the naming
@@ -122,10 +120,6 @@ struct nvme_namespace {
 
 /* Page aligned buffer of size NVME_PAGE_SIZE. */
 char *dma_buffer;
-
-/* Page List */
-u32 prpl_len;
-u64 prpl[NVME_MAX_PRPL_ENTRIES];
 };
 
 /* Data structures for NVMe admin identify commands */
diff --git a/src/hw/nvme.c b/src/hw/nvme.c
index 20976fc..e9c449d 100644
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -469,39 +469,23 @@ nvme_bounce_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 return res;
 }
 
-static void nvme_reset_prpl(struct nvme_namespace *ns)
-{
-ns->prpl_len = 0;
-}
-
-static int nvme_add_prpl(struct nvme_namespace *ns, u64 base)
-{
-if (ns->prpl_len >= NVME_MAX_PRPL_ENTRIES)
-return -1;
-
-ns->prpl[ns->prpl_len++] = base;
-
-return 0;
-}
+#define NVME_MAX_PRPL_ENTRIES 15 /* Allows requests up to 64kb */
 
 // Transfer data using page list (if applicable)
 static int
 nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
int write)
 {
-int first_page = 1;
 u32 base = (long)buf;
 s32 size;
 
 if (count > ns->max_req_size)
 count = ns->max_req_size;
 
-nvme_reset_prpl(ns);
-
 size = count * ns->block_size;
 /* Special case for transfers that fit into PRP1, but are unaligned */
 if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE))
-return nvme_io_xfer(ns, lba, buf, NULL, count, write);
+goto single;
 
 /* Every request has to be page aligned */
 if (base & ~NVME_PAGE_MASK)
@@ -511,28 +495,31 @@ nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 if (size & (ns->block_size - 1ULL))
 goto bounce;
 
-for (; size > 0; base += NVME_PAGE_SIZE, size -= NVME_PAGE_SIZE) {
-if (first_page) {
-/* First page is special */
-first_page = 0;
-continue;
+/* Build PRP list if we need to describe more than 2 pages */
+if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
+u32 prpl_len = 0;
+u64 *prpl = (void*)ns->dma_buffer;
+int first_page = 1;
+for (; size > 0; base += NVME_PAGE_SIZE, size -= NVME_PAGE_SIZE) {
+if (first_page) {
+/* First page is special */
+first_page = 0;
+continue;
+}
+if (prpl_len >= NVME_MAX_PRPL_ENTRIES)
+goto bounce;
+prpl[prpl_len++] = base;
 }
-if (nvme_add_prpl(ns, base))
-goto bounce;
+return nvme_io_xfer(ns, lba, buf, prpl, count, write);
 }
 
-void *prp2;
-if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
-/* We need to describe more than 2 pages, rely on PRP List */
-prp2 = ns->prpl;
-} else if ((ns->block_size * count) > NVME_PAGE_SIZE) {
-/* Directly embed the 2nd page if we only need 2 pages */
-prp2 = (void *)(long)ns->prpl[0];
-} else {
-/* One page is enough, don't expose anything else */
-prp2 = NULL;
-}
-return nvme_io_xfer(ns, lba, buf, prp2, count, write);
+/* Directly embed the 2nd page if we only need 2 pages */
+if ((ns->block_size * count) > NVME_PAGE_SIZE)
+return nvme_io_xfer(ns, lba, buf, buf + NVME_PAGE_SIZE, count, write);
+
+single:
+/* One page is enough, don't expose anything else */
+return nvme_io_xfer(ns, lba, buf, NULL, count, write);
 

[SeaBIOS] [PATCHv2 4/6] nvme: Pass prp1 and prp2 directly to nvme_io_xfer()

2022-01-21 Thread Kevin O'Connor
When using a prp2 parameter, build it in nvme_prpl_xfer() and pass it
directly to nvme_io_xfer().

Signed-off-by: Kevin O'Connor 
Reviewed-by: Alexander Graf 
---
 src/hw/nvme.c | 39 ++-
 1 file changed, 18 insertions(+), 21 deletions(-)

diff --git a/src/hw/nvme.c b/src/hw/nvme.c
index eee7d17..20976fc 100644
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -416,33 +416,19 @@ err:
 
 /* Reads count sectors into buf. The buffer cannot cross page boundaries. */
 static int
-nvme_io_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
- int write)
+nvme_io_xfer(struct nvme_namespace *ns, u64 lba, void *prp1, void *prp2,
+ u16 count, int write)
 {
-u32 buf_addr = (u32)buf;
-void *prp2;
-
-if (buf_addr & 0x3) {
+if (((u32)prp1 & 0x3) || ((u32)prp2 & 0x3)) {
 /* Buffer is misaligned */
 warn_internalerror();
 return -1;
 }
 
-if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
-/* We need to describe more than 2 pages, rely on PRP List */
-prp2 = ns->prpl;
-} else if ((ns->block_size * count) > NVME_PAGE_SIZE) {
-/* Directly embed the 2nd page if we only need 2 pages */
-prp2 = (void *)(long)ns->prpl[0];
-} else {
-/* One page is enough, don't expose anything else */
-prp2 = NULL;
-}
-
 struct nvme_sqe *io_read = nvme_get_next_sqe(>ctrl->io_sq,
  write ? NVME_SQE_OPC_IO_WRITE
: NVME_SQE_OPC_IO_READ,
- NULL, buf, prp2);
+ NULL, prp1, prp2);
 io_read->nsid = ns->ns_id;
 io_read->dword[10] = (u32)lba;
 io_read->dword[11] = (u32)(lba >> 32);
@@ -475,7 +461,7 @@ nvme_bounce_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 if (write)
 memcpy(ns->dma_buffer, buf, blocks * ns->block_size);
 
-int res = nvme_io_xfer(ns, lba, ns->dma_buffer, blocks, write);
+int res = nvme_io_xfer(ns, lba, ns->dma_buffer, NULL, blocks, write);
 
 if (!write && res >= 0)
 memcpy(buf, ns->dma_buffer, res * ns->block_size);
@@ -515,7 +501,7 @@ nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 size = count * ns->block_size;
 /* Special case for transfers that fit into PRP1, but are unaligned */
 if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE))
-return nvme_io_xfer(ns, lba, buf, count, write);
+return nvme_io_xfer(ns, lba, buf, NULL, count, write);
 
 /* Every request has to be page aligned */
 if (base & ~NVME_PAGE_MASK)
@@ -535,7 +521,18 @@ nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 goto bounce;
 }
 
-return nvme_io_xfer(ns, lba, buf, count, write);
+void *prp2;
+if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
+/* We need to describe more than 2 pages, rely on PRP List */
+prp2 = ns->prpl;
+} else if ((ns->block_size * count) > NVME_PAGE_SIZE) {
+/* Directly embed the 2nd page if we only need 2 pages */
+prp2 = (void *)(long)ns->prpl[0];
+} else {
+/* One page is enough, don't expose anything else */
+prp2 = NULL;
+}
+return nvme_io_xfer(ns, lba, buf, prp2, count, write);
 
 bounce:
 /* Use bounce buffer to make transfer */
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCHv2 3/6] nvme: Convert nvme_build_prpl() to nvme_prpl_xfer()

2022-01-21 Thread Kevin O'Connor
Rename nvme_build_prpl() to nvme_prpl_xfer() and directly invoke
nvme_io_xfer() or nvme_bounce_xfer() from that function.

Signed-off-by: Kevin O'Connor 
---
 src/hw/nvme-int.h |  1 -
 src/hw/nvme.c | 46 --
 2 files changed, 20 insertions(+), 27 deletions(-)

diff --git a/src/hw/nvme-int.h b/src/hw/nvme-int.h
index a4c1555..9564c17 100644
--- a/src/hw/nvme-int.h
+++ b/src/hw/nvme-int.h
@@ -125,7 +125,6 @@ struct nvme_namespace {
 
 /* Page List */
 u32 prpl_len;
-void *prp1;
 u64 prpl[NVME_MAX_PRPL_ENTRIES];
 };
 
diff --git a/src/hw/nvme.c b/src/hw/nvme.c
index d656e9b..eee7d17 100644
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -498,10 +498,13 @@ static int nvme_add_prpl(struct nvme_namespace *ns, u64 
base)
 return 0;
 }
 
-static int nvme_build_prpl(struct nvme_namespace *ns, void *op_buf, u16 count)
+// Transfer data using page list (if applicable)
+static int
+nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
+   int write)
 {
 int first_page = 1;
-u32 base = (long)op_buf;
+u32 base = (long)buf;
 s32 size;
 
 if (count > ns->max_req_size)
@@ -511,31 +514,32 @@ static int nvme_build_prpl(struct nvme_namespace *ns, 
void *op_buf, u16 count)
 
 size = count * ns->block_size;
 /* Special case for transfers that fit into PRP1, but are unaligned */
-if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE)) {
-ns->prp1 = op_buf;
-return count;
-}
+if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE))
+return nvme_io_xfer(ns, lba, buf, count, write);
 
 /* Every request has to be page aligned */
 if (base & ~NVME_PAGE_MASK)
-return 0;
+goto bounce;
 
 /* Make sure a full block fits into the last chunk */
 if (size & (ns->block_size - 1ULL))
-return 0;
+goto bounce;
 
 for (; size > 0; base += NVME_PAGE_SIZE, size -= NVME_PAGE_SIZE) {
 if (first_page) {
 /* First page is special */
-ns->prp1 = (void*)base;
 first_page = 0;
 continue;
 }
 if (nvme_add_prpl(ns, base))
-return 0;
+goto bounce;
 }
 
-return count;
+return nvme_io_xfer(ns, lba, buf, count, write);
+
+bounce:
+/* Use bounce buffer to make transfer */
+return nvme_bounce_xfer(ns, lba, buf, count, write);
 }
 
 static int
@@ -736,24 +740,14 @@ nvme_scan(void)
 static int
 nvme_cmd_readwrite(struct nvme_namespace *ns, struct disk_op_s *op, int write)
 {
-u16 i, blocks;
-
+int i;
 for (i = 0; i < op->count;) {
 u16 blocks_remaining = op->count - i;
 char *op_buf = op->buf_fl + i * ns->block_size;
-
-blocks = nvme_build_prpl(ns, op_buf, blocks_remaining);
-if (blocks) {
-int res = nvme_io_xfer(ns, op->lba + i, ns->prp1, blocks, write);
-if (res < 0)
-return DISK_RET_EBADTRACK;
-} else {
-int res = nvme_bounce_xfer(ns, op->lba + i, op_buf, blocks, write);
-if (res < 0)
-return DISK_RET_EBADTRACK;
-blocks = res;
-}
-
+int blocks = nvme_prpl_xfer(ns, op->lba + i, op_buf,
+blocks_remaining, write);
+if (blocks < 0)
+return DISK_RET_EBADTRACK;
 i += blocks;
 }
 
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCHv2 2/6] nvme: Add nvme_bounce_xfer() helper function

2022-01-21 Thread Kevin O'Connor
Move bounce buffer processing to a new helper function.

Signed-off-by: Kevin O'Connor 
Reviewed-by: Alexander Graf 
---
 src/hw/nvme.c | 35 +--
 1 file changed, 21 insertions(+), 14 deletions(-)

diff --git a/src/hw/nvme.c b/src/hw/nvme.c
index 608651a..d656e9b 100644
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -464,6 +464,25 @@ nvme_io_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 return count;
 }
 
+// Transfer up to one page of data using the internal dma bounce buffer
+static int
+nvme_bounce_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
+ int write)
+{
+u16 const max_blocks = NVME_PAGE_SIZE / ns->block_size;
+u16 blocks = count < max_blocks ? count : max_blocks;
+
+if (write)
+memcpy(ns->dma_buffer, buf, blocks * ns->block_size);
+
+int res = nvme_io_xfer(ns, lba, ns->dma_buffer, blocks, write);
+
+if (!write && res >= 0)
+memcpy(buf, ns->dma_buffer, res * ns->block_size);
+
+return res;
+}
+
 static void nvme_reset_prpl(struct nvme_namespace *ns)
 {
 ns->prpl_len = 0;
@@ -717,7 +736,6 @@ nvme_scan(void)
 static int
 nvme_cmd_readwrite(struct nvme_namespace *ns, struct disk_op_s *op, int write)
 {
-u16 const max_blocks = NVME_PAGE_SIZE / ns->block_size;
 u16 i, blocks;
 
 for (i = 0; i < op->count;) {
@@ -730,21 +748,10 @@ nvme_cmd_readwrite(struct nvme_namespace *ns, struct 
disk_op_s *op, int write)
 if (res < 0)
 return DISK_RET_EBADTRACK;
 } else {
-blocks = blocks_remaining < max_blocks ? blocks_remaining
-   : max_blocks;
-
-if (write) {
-memcpy(ns->dma_buffer, op_buf, blocks * ns->block_size);
-}
-
-int res = nvme_io_xfer(ns, op->lba + i, ns->dma_buffer,
-   blocks, write);
+int res = nvme_bounce_xfer(ns, op->lba + i, op_buf, blocks, write);
 if (res < 0)
 return DISK_RET_EBADTRACK;
-
-if (!write) {
-memcpy(op_buf, ns->dma_buffer, blocks * ns->block_size);
-}
+blocks = res;
 }
 
 i += blocks;
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCHv2 1/6] nvme: Rework nvme_io_readwrite() to return -1 on error

2022-01-21 Thread Kevin O'Connor
Rename nvme_io_readwrite() to nvme_io_xfer() and change it so it
implements the debugging dprintf() and it returns -1 on an error.

Signed-off-by: Kevin O'Connor 
Reviewed-by: Alexander Graf 
---
 src/hw/nvme.c | 37 ++---
 1 file changed, 18 insertions(+), 19 deletions(-)

diff --git a/src/hw/nvme.c b/src/hw/nvme.c
index f035fa2..608651a 100644
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -414,11 +414,10 @@ err:
 return -1;
 }
 
-/* Reads count sectors into buf. Returns DISK_RET_*. The buffer cannot cross
-   page boundaries. */
+/* Reads count sectors into buf. The buffer cannot cross page boundaries. */
 static int
-nvme_io_readwrite(struct nvme_namespace *ns, u64 lba, char *buf, u16 count,
-  int write)
+nvme_io_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
+ int write)
 {
 u32 buf_addr = (u32)buf;
 void *prp2;
@@ -426,7 +425,7 @@ nvme_io_readwrite(struct nvme_namespace *ns, u64 lba, char 
*buf, u16 count,
 if (buf_addr & 0x3) {
 /* Buffer is misaligned */
 warn_internalerror();
-return DISK_RET_EBADTRACK;
+return -1;
 }
 
 if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
@@ -457,10 +456,12 @@ nvme_io_readwrite(struct nvme_namespace *ns, u64 lba, 
char *buf, u16 count,
 dprintf(2, "read io: %08x %08x %08x %08x\n",
 cqe.dword[0], cqe.dword[1], cqe.dword[2], cqe.dword[3]);
 
-return DISK_RET_EBADTRACK;
+return -1;
 }
 
-return DISK_RET_SUCCESS;
+dprintf(5, "ns %u %s lba %llu+%u\n", ns->ns_id, write ? "write" : "read",
+lba, count);
+return count;
 }
 
 static void nvme_reset_prpl(struct nvme_namespace *ns)
@@ -716,20 +717,18 @@ nvme_scan(void)
 static int
 nvme_cmd_readwrite(struct nvme_namespace *ns, struct disk_op_s *op, int write)
 {
-int res = DISK_RET_SUCCESS;
 u16 const max_blocks = NVME_PAGE_SIZE / ns->block_size;
 u16 i, blocks;
 
-for (i = 0; i < op->count && res == DISK_RET_SUCCESS;) {
+for (i = 0; i < op->count;) {
 u16 blocks_remaining = op->count - i;
 char *op_buf = op->buf_fl + i * ns->block_size;
 
 blocks = nvme_build_prpl(ns, op_buf, blocks_remaining);
 if (blocks) {
-res = nvme_io_readwrite(ns, op->lba + i, ns->prp1, blocks, write);
-dprintf(5, "ns %u %s lba %llu+%u: %d\n", ns->ns_id, write ? "write"
-  : "read",
-op->lba, blocks, res);
+int res = nvme_io_xfer(ns, op->lba + i, ns->prp1, blocks, write);
+if (res < 0)
+return DISK_RET_EBADTRACK;
 } else {
 blocks = blocks_remaining < max_blocks ? blocks_remaining
: max_blocks;
@@ -738,12 +737,12 @@ nvme_cmd_readwrite(struct nvme_namespace *ns, struct 
disk_op_s *op, int write)
 memcpy(ns->dma_buffer, op_buf, blocks * ns->block_size);
 }
 
-res = nvme_io_readwrite(ns, op->lba + i, ns->dma_buffer, blocks, 
write);
-dprintf(5, "ns %u %s lba %llu+%u: %d\n", ns->ns_id, write ? "write"
-  : "read",
-op->lba + i, blocks, res);
+int res = nvme_io_xfer(ns, op->lba + i, ns->dma_buffer,
+   blocks, write);
+if (res < 0)
+return DISK_RET_EBADTRACK;
 
-if (!write && res == DISK_RET_SUCCESS) {
+if (!write) {
 memcpy(op_buf, ns->dma_buffer, blocks * ns->block_size);
 }
 }
@@ -751,7 +750,7 @@ nvme_cmd_readwrite(struct nvme_namespace *ns, struct 
disk_op_s *op, int write)
 i += blocks;
 }
 
-return res;
+return DISK_RET_SUCCESS;
 }
 
 int
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCHv2 0/6] Rework NVME page transfers and avoid fsegment allocation

2022-01-21 Thread Kevin O'Connor
This is a resend of the previous series.  Changes since last time:

* Patch 1: Fix function comment on nvme_io_xfer()
* Patch 3-5: Added "goto bounce" to nvme_io_xfer() as suggested by Alex
* Patch 6: Added separate "single dma bounce buffer" patch to this series
* Patch 6: Add checking for malloc failure

-Kevin


Kevin O'Connor (6):
  nvme: Rework nvme_io_readwrite() to return -1 on error
  nvme: Add nvme_bounce_xfer() helper function
  nvme: Convert nvme_build_prpl() to nvme_prpl_xfer()
  nvme: Pass prp1 and prp2 directly to nvme_io_xfer()
  nvme: Build the page list in the existing dma buffer
  nvme: Only allocate one dma bounce buffer for all nvme drives

 src/hw/nvme-int.h |  10 ---
 src/hw/nvme.c | 163 ++
 2 files changed, 78 insertions(+), 95 deletions(-)

-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCH] sercon: Fix missing GET_LOW() to access rx_bytes

2022-01-21 Thread Kevin O'Connor
The variable rx_bytes is marked VARLOW, but there was a missing
GET_LOW() to access rx_bytes.  Fix by copying rx_bytes to a local
variable and avoid the repetitive segment memory accesses.

Reported-by: Gabe Black 
Signed-off-by: Volker Rümelin 
Signed-off-by: Kevin O'Connor 
---
 src/sercon.c | 20 +++-
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/src/sercon.c b/src/sercon.c
index 66a1f24..3019d9b 100644
--- a/src/sercon.c
+++ b/src/sercon.c
@@ -626,7 +626,7 @@ sercon_check_event(void)
 u16 addr = GET_LOW(sercon_port);
 u16 keycode;
 u8 byte, count = 0;
-int seq, chr;
+int seq;
 
 // check to see if there is a active serial port
 if (!addr)
@@ -640,20 +640,23 @@ sercon_check_event(void)
 // read all available data
 while (inb(addr + SEROFF_LSR) & 0x01) {
 byte = inb(addr + SEROFF_DATA);
-if (GET_LOW(rx_bytes) < sizeof(rx_buf)) {
-SET_LOW(rx_buf[rx_bytes], byte);
-SET_LOW(rx_bytes, GET_LOW(rx_bytes) + 1);
+u8 rb = GET_LOW(rx_bytes);
+if (rb < sizeof(rx_buf)) {
+SET_LOW(rx_buf[rb], byte);
+SET_LOW(rx_bytes, rb + 1);
 count++;
 }
 }
 
 for (;;) {
 // no (more) input data
-if (!GET_LOW(rx_bytes))
+u8 rb = GET_LOW(rx_bytes);
+if (!rb)
 return;
 
 // lookup escape sequences
-if (GET_LOW(rx_bytes) > 1 && GET_LOW(rx_buf[0]) == 0x1b) {
+u8 next_char = GET_LOW(rx_buf[0]);
+if (rb > 1 && next_char == 0x1b) {
 seq = findseq();
 if (seq >= 0) {
 enqueue_key(GET_GLOBAL(termseq[seq].keycode));
@@ -664,12 +667,11 @@ sercon_check_event(void)
 
 // Seems we got a escape sequence we didn't recognise.
 //  -> If we received data wait for more, maybe it is just incomplete.
-if (GET_LOW(rx_buf[0]) == 0x1b && count)
+if (next_char == 0x1b && count)
 return;
 
 // Handle input as individual char.
-chr = GET_LOW(rx_buf[0]);
-keycode = ascii_to_keycode(chr);
+keycode = ascii_to_keycode(next_char);
 if (keycode)
 enqueue_key(keycode);
 shiftbuf(1);
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH] smm: Suppress gcc array-bounds warnings

2022-01-21 Thread Kevin O'Connor
On Thu, Jan 13, 2022 at 05:27:11PM +0100, Paul Menzel wrote:
> Dear Kevin,
> 
> 
> Am 13.01.22 um 17:19 schrieb Kevin O'Connor:
> > Add a hack to suppress spurious gcc array-bounds warning.
> 
> Wow, thank you for fixing it. Maybe elaborate a little on what version it
> started with, and reference the GCC bug report [1].
> 
> > Signed-off-by: Kevin O'Connor 
> 
> Tested-by: Paul Menzel (gcc (Debian 11.2.0-14) 11.2.0)

Thanks.  I pushed this change.  I did add a comment to the commit
message about gcc version.  I didn't include the bug report - http
urls tend to age and it's not clear if that bug is directly related.

-Kevin


> 
> 
> Kind regards,
> 
> Paul
> 
> 
> [1]: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99578
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH 5/5] nvme: Build the page list in the existing dma buffer

2022-01-21 Thread Kevin O'Connor
On Fri, Jan 21, 2022 at 03:28:33PM +0100, Alexander Graf wrote:
> 
> On 19.01.22 19:45, Kevin O'Connor wrote:
> > Commit 01f2736cc905d ("nvme: Pass large I/O requests as PRP lists")
> > introduced multi-page requests using the NVMe PRP mechanism. To store the
> > list and "first page to write to" hints, it added fields to the NVMe
> > namespace struct.
> > 
> > Unfortunately, that struct resides in fseg which is read-only at runtime.
> > While KVM ignores the read-only part and allows writes, real hardware and
> > TCG adhere to the semantics and ignore writes to the fseg region. The net
> > effect of that is that reads and writes were always happening on address 0,
> > unless they went through the bounce buffer logic.
> > 
> > This patch builds the PRP maintenance data in the existing "dma bounce
> > buffer" and only builds it when needed.
> > 
> > Fixes: 01f2736cc905d ("nvme: Pass large I/O requests as PRP lists")
> > Reported-by: Matt DeVillier 
> > Signed-off-by: Alexander Graf 
> > Signed-off-by: Kevin O'Connor 
> > ---
> >   src/hw/nvme-int.h |  6 --
> >   src/hw/nvme.c | 51 +--
> >   2 files changed, 18 insertions(+), 39 deletions(-)
> > 
> > diff --git a/src/hw/nvme-int.h b/src/hw/nvme-int.h
> > index 9564c17..f0d717d 100644
> > --- a/src/hw/nvme-int.h
> > +++ b/src/hw/nvme-int.h
> > @@ -10,8 +10,6 @@
> >   #include "types.h" // u32
> >   #include "pcidevice.h" // struct pci_device
> > 
> > -#define NVME_MAX_PRPL_ENTRIES 15 /* Allows requests up to 64kb */
> > -
> >   /* Data structures */
> > 
> >   /* The register file of a NVMe host controller. This struct follows the 
> > naming
> > @@ -122,10 +120,6 @@ struct nvme_namespace {
> > 
> >   /* Page aligned buffer of size NVME_PAGE_SIZE. */
> >   char *dma_buffer;
> > -
> > -/* Page List */
> > -u32 prpl_len;
> > -u64 prpl[NVME_MAX_PRPL_ENTRIES];
> >   };
> > 
> >   /* Data structures for NVMe admin identify commands */
> > diff --git a/src/hw/nvme.c b/src/hw/nvme.c
> > index 3a73784..39b9138 100644
> > --- a/src/hw/nvme.c
> > +++ b/src/hw/nvme.c
> > @@ -470,35 +470,19 @@ nvme_bounce_xfer(struct nvme_namespace *ns, u64 lba, 
> > void *buf, u16 count,
> >   return res;
> >   }
> > 
> > -static void nvme_reset_prpl(struct nvme_namespace *ns)
> > -{
> > -ns->prpl_len = 0;
> > -}
> > -
> > -static int nvme_add_prpl(struct nvme_namespace *ns, u64 base)
> > -{
> > -if (ns->prpl_len >= NVME_MAX_PRPL_ENTRIES)
> > -return -1;
> > -
> > -ns->prpl[ns->prpl_len++] = base;
> > -
> > -return 0;
> > -}
> > +#define NVME_MAX_PRPL_ENTRIES 15 /* Allows requests up to 64kb */
> > 
> >   // Transfer data using page list (if applicable)
> >   static int
> >   nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
> >  int write)
> >   {
> > -int first_page = 1;
> >   u32 base = (long)buf;
> >   s32 size;
> > 
> >   if (count > ns->max_req_size)
> >   count = ns->max_req_size;
> > 
> > -nvme_reset_prpl(ns);
> > -
> >   size = count * ns->block_size;
> >   /* Special case for transfers that fit into PRP1, but are unaligned */
> >   if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE))
> > @@ -512,28 +496,29 @@ nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, 
> > void *buf, u16 count,
> >   if (size & (ns->block_size - 1ULL))
> >   return nvme_bounce_xfer(ns, lba, buf, count, write);
> > 
> > -for (; size > 0; base += NVME_PAGE_SIZE, size -= NVME_PAGE_SIZE) {
> > -if (first_page) {
> > -/* First page is special */
> > -first_page = 0;
> > -continue;
> > -}
> > -if (nvme_add_prpl(ns, base))
> > -return nvme_bounce_xfer(ns, lba, buf, count, write);
> > -}
> > -
> > -void *prp2;
> >   if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
> > -/* We need to describe more than 2 pages, rely on PRP List */
> > -prp2 = ns->prpl;
> > +/* We need to describe more than 2 pages, build PRP List */
> > +u32 prpl_len = 0;
> > +u64 *prpl = (void*)ns->dma_buffer;
> 
> 
> At

[SeaBIOS] Re: [PATCH] sercon: add missing GET_LOW() to access rx_bytes

2022-01-19 Thread Kevin O'Connor
On Wed, Jan 19, 2022 at 10:08:49PM +0100, Volker Rümelin wrote:
> The variable rx_bytes is marked VARLOW. Add a missing GET_LOW()
> to access rx_bytes.
> 
> Reported-by: Gabe Black 
> Signed-off-by: Volker Rümelin 

Thanks.  Good catch!

I think we should fix this by copying the contents into a local
variable to hopefully avoid this type of error in the future.  Patch
below.

-Kevin


--- a/src/sercon.c
+++ b/src/sercon.c
@@ -626,7 +626,7 @@ sercon_check_event(void)
 u16 addr = GET_LOW(sercon_port);
 u16 keycode;
 u8 byte, count = 0;
-int seq, chr;
+int seq;
 
 // check to see if there is a active serial port
 if (!addr)
@@ -640,20 +640,23 @@ sercon_check_event(void)
 // read all available data
 while (inb(addr + SEROFF_LSR) & 0x01) {
 byte = inb(addr + SEROFF_DATA);
-if (GET_LOW(rx_bytes) < sizeof(rx_buf)) {
-SET_LOW(rx_buf[rx_bytes], byte);
-SET_LOW(rx_bytes, GET_LOW(rx_bytes) + 1);
+u8 rb = GET_LOW(rx_bytes);
+if (rb < sizeof(rx_buf)) {
+SET_LOW(rx_buf[rb], byte);
+SET_LOW(rx_bytes, rb + 1);
 count++;
 }
 }
 
 for (;;) {
 // no (more) input data
-if (!GET_LOW(rx_bytes))
+u8 rb = GET_LOW(rx_bytes);
+if (!rb)
 return;
 
 // lookup escape sequences
-if (GET_LOW(rx_bytes) > 1 && GET_LOW(rx_buf[0]) == 0x1b) {
+u8 next_char = GET_LOW(rx_buf[0]);
+if (rb > 1 && next_char == 0x1b) {
 seq = findseq();
 if (seq >= 0) {
 enqueue_key(GET_GLOBAL(termseq[seq].keycode));
@@ -664,12 +667,11 @@ sercon_check_event(void)
 
 // Seems we got a escape sequence we didn't recognise.
 //  -> If we received data wait for more, maybe it is just incomplete.
-if (GET_LOW(rx_buf[0]) == 0x1b && count)
+if (next_char == 0x1b && count)
 return;
 
 // Handle input as individual char.
-chr = GET_LOW(rx_buf[0]);
-keycode = ascii_to_keycode(chr);
+keycode = ascii_to_keycode(next_char);
 if (keycode)
 enqueue_key(keycode);
 shiftbuf(1);
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCH] nvme: Only allocate one dma bounce buffer for all nvme drives

2022-01-19 Thread Kevin O'Connor
There is no need to create multiple dma bounce buffers as the BIOS
disk code isn't reentrant capable.

Signed-off-by: Kevin O'Connor 

---

This is a minor cleanup on top of the previous nvme code series.

-Kevin

--- 
 src/hw/nvme-int.h |  3 ---
 src/hw/nvme.c | 14 +-
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/src/hw/nvme-int.h b/src/hw/nvme-int.h
index f0d717d..f9c807e 100644
--- a/src/hw/nvme-int.h
+++ b/src/hw/nvme-int.h
@@ -117,9 +117,6 @@ struct nvme_namespace {
 u32 block_size;
 u32 metadata_size;
 u32 max_req_size;
-
-/* Page aligned buffer of size NVME_PAGE_SIZE. */
-char *dma_buffer;
 };
 
 /* Data structures for NVMe admin identify commands */
diff --git a/src/hw/nvme.c b/src/hw/nvme.c
index 39b9138..0ba981c 100644
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -20,6 +20,9 @@
 #include "nvme.h"
 #include "nvme-int.h"
 
+// Page aligned "dma bounce buffer" of size NVME_PAGE_SIZE in high memory
+static void *nvme_dma_buffer;
+
 static void *
 zalloc_page_aligned(struct zone_s *zone, u32 size)
 {
@@ -294,7 +297,8 @@ nvme_probe_ns(struct nvme_ctrl *ctrl, u32 ns_idx, u8 mdts)
 ns->max_req_size = -1U;
 }
 
-ns->dma_buffer = zalloc_page_aligned(, NVME_PAGE_SIZE);
+if (!nvme_dma_buffer)
+nvme_dma_buffer = zalloc_page_aligned(, NVME_PAGE_SIZE);
 
 char *desc = znprintf(MAXDESCSIZE, "NVMe NS %u: %llu MiB (%llu %u-byte "
   "blocks + %u-byte metadata)",
@@ -460,12 +464,12 @@ nvme_bounce_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 u16 blocks = count < max_blocks ? count : max_blocks;
 
 if (write)
-memcpy(ns->dma_buffer, buf, blocks * ns->block_size);
+memcpy(nvme_dma_buffer, buf, blocks * ns->block_size);
 
-int res = nvme_io_xfer(ns, lba, ns->dma_buffer, NULL, blocks, write);
+int res = nvme_io_xfer(ns, lba, nvme_dma_buffer, NULL, blocks, write);
 
 if (!write && res >= 0)
-memcpy(buf, ns->dma_buffer, res * ns->block_size);
+memcpy(buf, nvme_dma_buffer, res * ns->block_size);
 
 return res;
 }
@@ -499,7 +503,7 @@ nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
 /* We need to describe more than 2 pages, build PRP List */
 u32 prpl_len = 0;
-u64 *prpl = (void*)ns->dma_buffer;
+u64 *prpl = nvme_dma_buffer;
 int first_page = 1;
 for (; size > 0; base += NVME_PAGE_SIZE, size -= NVME_PAGE_SIZE) {
 if (first_page) {
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCH 5/5] nvme: Build the page list in the existing dma buffer

2022-01-19 Thread Kevin O'Connor
Commit 01f2736cc905d ("nvme: Pass large I/O requests as PRP lists")
introduced multi-page requests using the NVMe PRP mechanism. To store the
list and "first page to write to" hints, it added fields to the NVMe
namespace struct.

Unfortunately, that struct resides in fseg which is read-only at runtime.
While KVM ignores the read-only part and allows writes, real hardware and
TCG adhere to the semantics and ignore writes to the fseg region. The net
effect of that is that reads and writes were always happening on address 0,
unless they went through the bounce buffer logic.

This patch builds the PRP maintenance data in the existing "dma bounce
buffer" and only builds it when needed.

Fixes: 01f2736cc905d ("nvme: Pass large I/O requests as PRP lists")
Reported-by: Matt DeVillier 
Signed-off-by: Alexander Graf 
Signed-off-by: Kevin O'Connor 
---
 src/hw/nvme-int.h |  6 --
 src/hw/nvme.c | 51 +--
 2 files changed, 18 insertions(+), 39 deletions(-)

diff --git a/src/hw/nvme-int.h b/src/hw/nvme-int.h
index 9564c17..f0d717d 100644
--- a/src/hw/nvme-int.h
+++ b/src/hw/nvme-int.h
@@ -10,8 +10,6 @@
 #include "types.h" // u32
 #include "pcidevice.h" // struct pci_device
 
-#define NVME_MAX_PRPL_ENTRIES 15 /* Allows requests up to 64kb */
-
 /* Data structures */
 
 /* The register file of a NVMe host controller. This struct follows the naming
@@ -122,10 +120,6 @@ struct nvme_namespace {
 
 /* Page aligned buffer of size NVME_PAGE_SIZE. */
 char *dma_buffer;
-
-/* Page List */
-u32 prpl_len;
-u64 prpl[NVME_MAX_PRPL_ENTRIES];
 };
 
 /* Data structures for NVMe admin identify commands */
diff --git a/src/hw/nvme.c b/src/hw/nvme.c
index 3a73784..39b9138 100644
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -470,35 +470,19 @@ nvme_bounce_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 return res;
 }
 
-static void nvme_reset_prpl(struct nvme_namespace *ns)
-{
-ns->prpl_len = 0;
-}
-
-static int nvme_add_prpl(struct nvme_namespace *ns, u64 base)
-{
-if (ns->prpl_len >= NVME_MAX_PRPL_ENTRIES)
-return -1;
-
-ns->prpl[ns->prpl_len++] = base;
-
-return 0;
-}
+#define NVME_MAX_PRPL_ENTRIES 15 /* Allows requests up to 64kb */
 
 // Transfer data using page list (if applicable)
 static int
 nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
int write)
 {
-int first_page = 1;
 u32 base = (long)buf;
 s32 size;
 
 if (count > ns->max_req_size)
 count = ns->max_req_size;
 
-nvme_reset_prpl(ns);
-
 size = count * ns->block_size;
 /* Special case for transfers that fit into PRP1, but are unaligned */
 if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE))
@@ -512,28 +496,29 @@ nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 if (size & (ns->block_size - 1ULL))
 return nvme_bounce_xfer(ns, lba, buf, count, write);
 
-for (; size > 0; base += NVME_PAGE_SIZE, size -= NVME_PAGE_SIZE) {
-if (first_page) {
-/* First page is special */
-first_page = 0;
-continue;
-}
-if (nvme_add_prpl(ns, base))
-return nvme_bounce_xfer(ns, lba, buf, count, write);
-}
-
-void *prp2;
 if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
-/* We need to describe more than 2 pages, rely on PRP List */
-prp2 = ns->prpl;
+/* We need to describe more than 2 pages, build PRP List */
+u32 prpl_len = 0;
+u64 *prpl = (void*)ns->dma_buffer;
+int first_page = 1;
+for (; size > 0; base += NVME_PAGE_SIZE, size -= NVME_PAGE_SIZE) {
+if (first_page) {
+/* First page is special */
+first_page = 0;
+continue;
+}
+if (prpl_len >= NVME_MAX_PRPL_ENTRIES)
+return nvme_bounce_xfer(ns, lba, buf, count, write);
+prpl[prpl_len++] = base;
+}
+return nvme_io_xfer(ns, lba, buf, prpl, count, write);
 } else if ((ns->block_size * count) > NVME_PAGE_SIZE) {
 /* Directly embed the 2nd page if we only need 2 pages */
-prp2 = (void *)(long)ns->prpl[0];
+return nvme_io_xfer(ns, lba, buf, buf + NVME_PAGE_SIZE, count, write);
 } else {
 /* One page is enough, don't expose anything else */
-prp2 = NULL;
+return nvme_io_xfer(ns, lba, buf, NULL, count, write);
 }
-return nvme_io_xfer(ns, lba, buf, prp2, count, write);
 }
 
 static int
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCH 4/5] nvme: Pass prp1 and prp2 directly to nvme_io_xfer()

2022-01-19 Thread Kevin O'Connor
When using a prp2 parameter, build it in nvme_prpl_xfer() and pass it
directly to nvme_io_xfer().

Signed-off-by: Kevin O'Connor 
---
 src/hw/nvme.c | 39 ++-
 1 file changed, 18 insertions(+), 21 deletions(-)

diff --git a/src/hw/nvme.c b/src/hw/nvme.c
index bafe8bf..3a73784 100644
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -417,33 +417,19 @@ err:
 /* Reads count sectors into buf. Returns DISK_RET_*. The buffer cannot cross
page boundaries. */
 static int
-nvme_io_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
- int write)
+nvme_io_xfer(struct nvme_namespace *ns, u64 lba, void *prp1, void *prp2,
+ u16 count, int write)
 {
-u32 buf_addr = (u32)buf;
-void *prp2;
-
-if (buf_addr & 0x3) {
+if (((u32)prp1 & 0x3) || ((u32)prp2 & 0x3)) {
 /* Buffer is misaligned */
 warn_internalerror();
 return -1;
 }
 
-if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
-/* We need to describe more than 2 pages, rely on PRP List */
-prp2 = ns->prpl;
-} else if ((ns->block_size * count) > NVME_PAGE_SIZE) {
-/* Directly embed the 2nd page if we only need 2 pages */
-prp2 = (void *)(long)ns->prpl[0];
-} else {
-/* One page is enough, don't expose anything else */
-prp2 = NULL;
-}
-
 struct nvme_sqe *io_read = nvme_get_next_sqe(>ctrl->io_sq,
  write ? NVME_SQE_OPC_IO_WRITE
: NVME_SQE_OPC_IO_READ,
- NULL, buf, prp2);
+ NULL, prp1, prp2);
 io_read->nsid = ns->ns_id;
 io_read->dword[10] = (u32)lba;
 io_read->dword[11] = (u32)(lba >> 32);
@@ -476,7 +462,7 @@ nvme_bounce_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 if (write)
 memcpy(ns->dma_buffer, buf, blocks * ns->block_size);
 
-int res = nvme_io_xfer(ns, lba, ns->dma_buffer, blocks, write);
+int res = nvme_io_xfer(ns, lba, ns->dma_buffer, NULL, blocks, write);
 
 if (!write && res >= 0)
 memcpy(buf, ns->dma_buffer, res * ns->block_size);
@@ -516,7 +502,7 @@ nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 size = count * ns->block_size;
 /* Special case for transfers that fit into PRP1, but are unaligned */
 if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE))
-return nvme_io_xfer(ns, lba, buf, count, write);
+return nvme_io_xfer(ns, lba, buf, NULL, count, write);
 
 /* Every request has to be page aligned */
 if (base & ~NVME_PAGE_MASK)
@@ -536,7 +522,18 @@ nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 return nvme_bounce_xfer(ns, lba, buf, count, write);
 }
 
-return nvme_io_xfer(ns, lba, buf, count, write);
+void *prp2;
+if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
+/* We need to describe more than 2 pages, rely on PRP List */
+prp2 = ns->prpl;
+} else if ((ns->block_size * count) > NVME_PAGE_SIZE) {
+/* Directly embed the 2nd page if we only need 2 pages */
+prp2 = (void *)(long)ns->prpl[0];
+} else {
+/* One page is enough, don't expose anything else */
+prp2 = NULL;
+}
+return nvme_io_xfer(ns, lba, buf, prp2, count, write);
 }
 
 static int
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCH 3/5] nvme: Convert nvme_build_prpl() to nvme_prpl_xfer()

2022-01-19 Thread Kevin O'Connor
Rename nvme_build_prpl() to nvme_prpl_xfer() and directly invoke
nvme_io_xfer() or nvme_bounce_xfer() from that function.

Signed-off-by: Kevin O'Connor 
---
 src/hw/nvme-int.h |  1 -
 src/hw/nvme.c | 42 --
 2 files changed, 16 insertions(+), 27 deletions(-)

diff --git a/src/hw/nvme-int.h b/src/hw/nvme-int.h
index a4c1555..9564c17 100644
--- a/src/hw/nvme-int.h
+++ b/src/hw/nvme-int.h
@@ -125,7 +125,6 @@ struct nvme_namespace {
 
 /* Page List */
 u32 prpl_len;
-void *prp1;
 u64 prpl[NVME_MAX_PRPL_ENTRIES];
 };
 
diff --git a/src/hw/nvme.c b/src/hw/nvme.c
index fd7c1d0..bafe8bf 100644
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -499,10 +499,13 @@ static int nvme_add_prpl(struct nvme_namespace *ns, u64 
base)
 return 0;
 }
 
-static int nvme_build_prpl(struct nvme_namespace *ns, void *op_buf, u16 count)
+// Transfer data using page list (if applicable)
+static int
+nvme_prpl_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
+   int write)
 {
 int first_page = 1;
-u32 base = (long)op_buf;
+u32 base = (long)buf;
 s32 size;
 
 if (count > ns->max_req_size)
@@ -512,31 +515,28 @@ static int nvme_build_prpl(struct nvme_namespace *ns, 
void *op_buf, u16 count)
 
 size = count * ns->block_size;
 /* Special case for transfers that fit into PRP1, but are unaligned */
-if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE)) {
-ns->prp1 = op_buf;
-return count;
-}
+if (((size + (base & ~NVME_PAGE_MASK)) <= NVME_PAGE_SIZE))
+return nvme_io_xfer(ns, lba, buf, count, write);
 
 /* Every request has to be page aligned */
 if (base & ~NVME_PAGE_MASK)
-return 0;
+return nvme_bounce_xfer(ns, lba, buf, count, write);
 
 /* Make sure a full block fits into the last chunk */
 if (size & (ns->block_size - 1ULL))
-return 0;
+return nvme_bounce_xfer(ns, lba, buf, count, write);
 
 for (; size > 0; base += NVME_PAGE_SIZE, size -= NVME_PAGE_SIZE) {
 if (first_page) {
 /* First page is special */
-ns->prp1 = (void*)base;
 first_page = 0;
 continue;
 }
 if (nvme_add_prpl(ns, base))
-return 0;
+return nvme_bounce_xfer(ns, lba, buf, count, write);
 }
 
-return count;
+return nvme_io_xfer(ns, lba, buf, count, write);
 }
 
 static int
@@ -737,24 +737,14 @@ nvme_scan(void)
 static int
 nvme_cmd_readwrite(struct nvme_namespace *ns, struct disk_op_s *op, int write)
 {
-u16 i, blocks;
-
+int i;
 for (i = 0; i < op->count;) {
 u16 blocks_remaining = op->count - i;
 char *op_buf = op->buf_fl + i * ns->block_size;
-
-blocks = nvme_build_prpl(ns, op_buf, blocks_remaining);
-if (blocks) {
-int res = nvme_io_xfer(ns, op->lba + i, ns->prp1, blocks, write);
-if (res < 0)
-return DISK_RET_EBADTRACK;
-} else {
-int res = nvme_bounce_xfer(ns, op->lba + i, op_buf, blocks, write);
-if (res < 0)
-return DISK_RET_EBADTRACK;
-blocks = res;
-}
-
+int blocks = nvme_prpl_xfer(ns, op->lba + i, op_buf,
+blocks_remaining, write);
+if (blocks < 0)
+return DISK_RET_EBADTRACK;
 i += blocks;
 }
 
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCH 2/5] nvme: Add nvme_bounce_xfer() helper function

2022-01-19 Thread Kevin O'Connor
Move bounce buffer processing to a new helper function.

Signed-off-by: Kevin O'Connor 
---
 src/hw/nvme.c | 35 +--
 1 file changed, 21 insertions(+), 14 deletions(-)

diff --git a/src/hw/nvme.c b/src/hw/nvme.c
index a97501b..fd7c1d0 100644
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -465,6 +465,25 @@ nvme_io_xfer(struct nvme_namespace *ns, u64 lba, void 
*buf, u16 count,
 return count;
 }
 
+// Transfer up to one page of data using the internal dma bounce buffer
+static int
+nvme_bounce_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
+ int write)
+{
+u16 const max_blocks = NVME_PAGE_SIZE / ns->block_size;
+u16 blocks = count < max_blocks ? count : max_blocks;
+
+if (write)
+memcpy(ns->dma_buffer, buf, blocks * ns->block_size);
+
+int res = nvme_io_xfer(ns, lba, ns->dma_buffer, blocks, write);
+
+if (!write && res >= 0)
+memcpy(buf, ns->dma_buffer, res * ns->block_size);
+
+return res;
+}
+
 static void nvme_reset_prpl(struct nvme_namespace *ns)
 {
 ns->prpl_len = 0;
@@ -718,7 +737,6 @@ nvme_scan(void)
 static int
 nvme_cmd_readwrite(struct nvme_namespace *ns, struct disk_op_s *op, int write)
 {
-u16 const max_blocks = NVME_PAGE_SIZE / ns->block_size;
 u16 i, blocks;
 
 for (i = 0; i < op->count;) {
@@ -731,21 +749,10 @@ nvme_cmd_readwrite(struct nvme_namespace *ns, struct 
disk_op_s *op, int write)
 if (res < 0)
 return DISK_RET_EBADTRACK;
 } else {
-blocks = blocks_remaining < max_blocks ? blocks_remaining
-   : max_blocks;
-
-if (write) {
-memcpy(ns->dma_buffer, op_buf, blocks * ns->block_size);
-}
-
-int res = nvme_io_xfer(ns, op->lba + i, ns->dma_buffer,
-   blocks, write);
+int res = nvme_bounce_xfer(ns, op->lba + i, op_buf, blocks, write);
 if (res < 0)
 return DISK_RET_EBADTRACK;
-
-if (!write) {
-memcpy(op_buf, ns->dma_buffer, blocks * ns->block_size);
-}
+blocks = res;
 }
 
 i += blocks;
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCH 1/5] nvme: Rework nvme_io_readwrite() to return -1 on error

2022-01-19 Thread Kevin O'Connor
Rename nvme_io_readwrite() to nvme_io_xfer() and change it so it
implements the debugging dprintf() and it returns -1 on an error.

Signed-off-by: Kevin O'Connor 
---
 src/hw/nvme.c | 34 +-
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/src/hw/nvme.c b/src/hw/nvme.c
index f035fa2..a97501b 100644
--- a/src/hw/nvme.c
+++ b/src/hw/nvme.c
@@ -417,8 +417,8 @@ err:
 /* Reads count sectors into buf. Returns DISK_RET_*. The buffer cannot cross
page boundaries. */
 static int
-nvme_io_readwrite(struct nvme_namespace *ns, u64 lba, char *buf, u16 count,
-  int write)
+nvme_io_xfer(struct nvme_namespace *ns, u64 lba, void *buf, u16 count,
+ int write)
 {
 u32 buf_addr = (u32)buf;
 void *prp2;
@@ -426,7 +426,7 @@ nvme_io_readwrite(struct nvme_namespace *ns, u64 lba, char 
*buf, u16 count,
 if (buf_addr & 0x3) {
 /* Buffer is misaligned */
 warn_internalerror();
-return DISK_RET_EBADTRACK;
+return -1;
 }
 
 if ((ns->block_size * count) > (NVME_PAGE_SIZE * 2)) {
@@ -457,10 +457,12 @@ nvme_io_readwrite(struct nvme_namespace *ns, u64 lba, 
char *buf, u16 count,
 dprintf(2, "read io: %08x %08x %08x %08x\n",
 cqe.dword[0], cqe.dword[1], cqe.dword[2], cqe.dword[3]);
 
-return DISK_RET_EBADTRACK;
+return -1;
 }
 
-return DISK_RET_SUCCESS;
+dprintf(5, "ns %u %s lba %llu+%u\n", ns->ns_id, write ? "write" : "read",
+lba, count);
+return count;
 }
 
 static void nvme_reset_prpl(struct nvme_namespace *ns)
@@ -716,20 +718,18 @@ nvme_scan(void)
 static int
 nvme_cmd_readwrite(struct nvme_namespace *ns, struct disk_op_s *op, int write)
 {
-int res = DISK_RET_SUCCESS;
 u16 const max_blocks = NVME_PAGE_SIZE / ns->block_size;
 u16 i, blocks;
 
-for (i = 0; i < op->count && res == DISK_RET_SUCCESS;) {
+for (i = 0; i < op->count;) {
 u16 blocks_remaining = op->count - i;
 char *op_buf = op->buf_fl + i * ns->block_size;
 
 blocks = nvme_build_prpl(ns, op_buf, blocks_remaining);
 if (blocks) {
-res = nvme_io_readwrite(ns, op->lba + i, ns->prp1, blocks, write);
-dprintf(5, "ns %u %s lba %llu+%u: %d\n", ns->ns_id, write ? "write"
-  : "read",
-op->lba, blocks, res);
+int res = nvme_io_xfer(ns, op->lba + i, ns->prp1, blocks, write);
+if (res < 0)
+return DISK_RET_EBADTRACK;
 } else {
 blocks = blocks_remaining < max_blocks ? blocks_remaining
: max_blocks;
@@ -738,12 +738,12 @@ nvme_cmd_readwrite(struct nvme_namespace *ns, struct 
disk_op_s *op, int write)
 memcpy(ns->dma_buffer, op_buf, blocks * ns->block_size);
 }
 
-res = nvme_io_readwrite(ns, op->lba + i, ns->dma_buffer, blocks, 
write);
-dprintf(5, "ns %u %s lba %llu+%u: %d\n", ns->ns_id, write ? "write"
-  : "read",
-op->lba + i, blocks, res);
+int res = nvme_io_xfer(ns, op->lba + i, ns->dma_buffer,
+   blocks, write);
+if (res < 0)
+return DISK_RET_EBADTRACK;
 
-if (!write && res == DISK_RET_SUCCESS) {
+if (!write) {
 memcpy(op_buf, ns->dma_buffer, blocks * ns->block_size);
 }
 }
@@ -751,7 +751,7 @@ nvme_cmd_readwrite(struct nvme_namespace *ns, struct 
disk_op_s *op, int write)
 i += blocks;
 }
 
-return res;
+return DISK_RET_SUCCESS;
 }
 
 int
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCH 0/5] Rework NVME page transfers and avoid fsegment allocation

2022-01-19 Thread Kevin O'Connor
Hi Alex,

I was looking at your recent fix for SeaBIOS NVME.  I agree that it is
not valid to write to the f-segment during runtime.  However, I was
struggling to understand the PRP list management in the nvme.c code.

I came up with a different implementation that avoids allocating
another buffer in the "high zone".  Instead, the code in this patch
series builds the page list in the existing "dma bounce buffer".
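
For anyone less familiar with NVMe PRPs, the idea being built here is
small enough to sketch standalone (illustrative only -- the names and
layout below are not the SeaBIOS implementation): PRP1 points into the
first page of the transfer, and every following page gets one 64-bit,
page-aligned entry in the list.

#include <stdint.h>

#define PAGE_SIZE 4096u

/* Illustrative sketch: fill 'prpl' with one entry per page after the
 * first.  Returns the entry count, or -1 if the list would overflow
 * (a real driver would then fall back to a bounce buffer). */
static int build_prp_list(uint64_t buf, uint32_t len,
                          uint64_t *prpl, int max_entries)
{
    int n = 0;
    uint64_t addr = (buf & ~(uint64_t)(PAGE_SIZE - 1)) + PAGE_SIZE;
    for (; addr < buf + len; addr += PAGE_SIZE) {
        if (n >= max_entries)
            return -1;
        prpl[n++] = addr;
    }
    return n;
}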

I don't have a good way to test this on real hardware.  It passes
simple qemu tests (both with and without kvm).  For example:

qemu-system-x86_64 -k en-us -snapshot -L test -chardev stdio,id=seabios -device 
isa-debugcon,iobase=0x402,chardev=seabios -m 512 -drive 
file=dos-drivec,if=none,id=drive0 -device nvme,drive=drive0,serial=nvme-1 -boot 
menu=on

Thanks,
-Kevin


Kevin O'Connor (5):
  nvme: Rework nvme_io_readwrite() to return -1 on error
  nvme: Add nvme_bounce_xfer() helper function
  nvme: Convert nvme_build_prpl() to nvme_prpl_xfer()
  nvme: Pass prp1 and prp2 directly to nvme_io_xfer()
  nvme: Build the page list in the existing dma buffer

 src/hw/nvme-int.h |   7 ---
 src/hw/nvme.c | 143 --
 2 files changed, 61 insertions(+), 89 deletions(-)

-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: new november release?

2022-01-13 Thread Kevin O'Connor
On Thu, Jan 13, 2022 at 11:56:35AM +0100, Gerd Hoffmann wrote:
>   Hi,
> 
> > 2. Commit the smbios 3 changes (and pcie-pci-bridge changes) and
> >release v1.15.0 in early January.
> 
> I think all pending patches have been applied meanwhile.
> So time to enter 1.15 freeze and release in ~2 weeks?

It feels like we just made a release.  Is there a timeline you'd like to
reach?  Otherwise, how about we target end of February for next
release.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCH] smm: Suppress gcc array-bounds warnings

2022-01-13 Thread Kevin O'Connor
Add a hack to suppress spurious gcc array-bounds warning.

Signed-off-by: Kevin O'Connor 
---
 src/fw/smm.c | 21 +++--
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/src/fw/smm.c b/src/fw/smm.c
index d90e43a..a0b50b2 100644
--- a/src/fw/smm.c
+++ b/src/fw/smm.c
@@ -59,6 +59,14 @@ struct smm_layout {
 struct smm_state cpu;
 };
 
+// Hack to supress some gcc array-bounds warnings
+static void
+memcpy_nowarn(void *d, const void *s, size_t len)
+{
+asm("" : "+r"(d), "+r"(s));
+memcpy(d, s, len);
+}
+
 void VISIBLE32FLAT
 handle_smi(u16 cs)
 {
@@ -85,8 +93,8 @@ handle_smi(u16 cs)
 if (CONFIG_CALL32_SMM) {
 // Backup current cpu state for SMM trampolining
 struct smm_layout *newsmm = (void*)BUILD_SMM_ADDR;
-memcpy(&newsmm->backup1, &smm->cpu, sizeof(newsmm->backup1));
-memcpy(&newsmm->backup2, &smm->cpu, sizeof(newsmm->backup2));
+memcpy_nowarn(&newsmm->backup1, &smm->cpu, sizeof(newsmm->backup1));
+memcpy_nowarn(&newsmm->backup2, &smm->cpu, sizeof(newsmm->backup2));
 HaveSmmCall32 = 1;
 }
 
@@ -145,8 +153,8 @@ smm_save_and_copy(void)
 // save original memory content
 struct smm_layout *initsmm = (void*)BUILD_SMM_INIT_ADDR;
 struct smm_layout *smm = (void*)BUILD_SMM_ADDR;
-memcpy(&smm->cpu, &initsmm->cpu, sizeof(smm->cpu));
-memcpy(&smm->codeentry, &initsmm->codeentry, sizeof(smm->codeentry));
+memcpy_nowarn(&smm->cpu, &initsmm->cpu, sizeof(smm->cpu));
+memcpy_nowarn(&smm->codeentry, &initsmm->codeentry, sizeof(smm->codeentry));
 
 // Setup code entry point.
 initsmm->codeentry = SMI_INSN;
@@ -168,8 +176,9 @@ smm_relocate_and_restore(void)
 /* restore original memory content */
 struct smm_layout *initsmm = (void*)BUILD_SMM_INIT_ADDR;
 struct smm_layout *smm = (void*)BUILD_SMM_ADDR;
-memcpy(&initsmm->cpu, &smm->cpu, sizeof(initsmm->cpu));
-memcpy(&initsmm->codeentry, &smm->codeentry, sizeof(initsmm->codeentry));
+memcpy_nowarn(&initsmm->cpu, &smm->cpu, sizeof(initsmm->cpu));
+memcpy_nowarn(&initsmm->codeentry, &smm->codeentry
+  , sizeof(initsmm->codeentry));
 
 // Setup code entry point.
 smm->codeentry = SMI_INSN;
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH 1/2] readserial: Improve Python3 compatibility

2021-12-27 Thread Kevin O'Connor
On Sun, Dec 19, 2021 at 09:45:59AM -0500, Kevin O'Connor wrote:
> Signed-off-by: Kevin O'Connor 
> ---
>  scripts/readserial.py | 28 +++-
>  1 file changed, 11 insertions(+), 17 deletions(-)
> 

FYI, I committed this series (along with the two gcc warning patches
sent separately).

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCH 2/2] scripts: Remove python23compat.py

2021-12-19 Thread Kevin O'Connor
It's simpler to use b"" designations around binary strings than to use
the as_bytes() function.

Signed-off-by: Kevin O'Connor 
---
 scripts/buildrom.py   |  6 ++
 scripts/checkrom.py   |  4 +---
 scripts/python23compat.py | 14 --
 3 files changed, 3 insertions(+), 21 deletions(-)
 delete mode 100644 scripts/python23compat.py

diff --git a/scripts/buildrom.py b/scripts/buildrom.py
index 0499049..48bfc17 100755
--- a/scripts/buildrom.py
+++ b/scripts/buildrom.py
@@ -7,8 +7,6 @@
 
 import sys, struct
 
-from python23compat import as_bytes
-
 def alignpos(pos, alignbytes):
 mask = alignbytes - 1
 return (pos + mask) & ~mask
@@ -31,7 +29,7 @@ def main():
 count = len(data)
 
 # Pad to a 512 byte boundary
-data += as_bytes("\0") * (alignpos(count, 512) - count)
+data += b"\0" * (alignpos(count, 512) - count)
 count = len(data)
 
 # Check if a pci header is present
@@ -42,7 +40,7 @@ def main():
 
 # Fill in size field; clear checksum field
 blocks = struct.pack('
-#
-# This file may be distributed under the terms of the GNU GPLv3 license.
-
-import sys
-
-if (sys.version_info > (3, 0)):
-def as_bytes(str):
-return bytes(str, "ASCII")
-else:
-def as_bytes(str):
-return str
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCH 1/2] readserial: Improve Python3 compatibility

2021-12-19 Thread Kevin O'Connor
Signed-off-by: Kevin O'Connor 
---
 scripts/readserial.py | 28 +++-
 1 file changed, 11 insertions(+), 17 deletions(-)

diff --git a/scripts/readserial.py b/scripts/readserial.py
index a7383e8..e3f56e5 100755
--- a/scripts/readserial.py
+++ b/scripts/readserial.py
@@ -10,8 +10,6 @@
 
 import sys, os, time, select, optparse
 
-from python23compat import as_bytes
-
 # Reset time counter after this much idle time.
 RESTARTINTERVAL = 60
 # Number of bits in a transmitted byte - 8N1 is 1 start bit + 8 data
@@ -20,11 +18,11 @@ BITSPERBYTE = 10
 
 def calibrateserialwrite(outfile, byteadjust):
 # Build 4000 bytes of dummy data.
-data = "0123456789" * 4 + "012345678" + "\n"
+data = b"0123456789" * 4 + b"012345678" + b"\n"
 data = data * 80
 while 1:
 st = time.time()
-outfile.write(as_bytes(data))
+outfile.write(data)
 outfile.flush()
 et = time.time()
 sys.stdout.write(
@@ -84,38 +82,34 @@ def readserial(infile, logfile, byteadjust):
 msg = "\n\n=== %s (adjust=%.1fus)\n" % (
 time.asctime(time.localtime(datatime)), byteadjust * 100)
 sys.stdout.write(msg)
-logfile.write(as_bytes(msg))
+logfile.write(msg.encode())
 lasttime = datatime
 
 # Translate unprintable chars; add timestamps
-out = as_bytes("")
-for c in d:
+out = bytearray()
+for oc in bytearray(d):
 if isnewline:
 delta = datatime - starttime - (charcount * byteadjust)
-out += "%06.3f: " % delta
+out += b"%06.3f: " % delta
 isnewline = 0
-oc = ord(c)
 charcount += 1
 datatime += byteadjust
 if oc == 0x0d:
 continue
 if oc == 0x00:
-out += "<00>\n"
+out += b"<00>\n"
 isnewline = 1
 continue
 if oc == 0x0a:
-out += "\n"
+out += b"\n"
 isnewline = 1
 continue
 if oc < 0x20 or oc >= 0x7f and oc != 0x09:
-out += "<%02x>" % oc
+out += b"<%02x>" % oc
 continue
-out += c
+out.append(oc)
 
-if (sys.version_info > (3, 0)):
-sys.stdout.buffer.write(out)
-else:
-sys.stdout.write(out)
+sys.stdout.write(out.decode())
 sys.stdout.flush()
 logfile.write(out)
 logfile.flush()
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH] memmap: Fix gcc out-of-bounds warning

2021-12-18 Thread Kevin O'Connor
On Sat, Dec 18, 2021 at 09:13:57PM +0100, Paul Menzel wrote:
> Dear Kevin,
> 
> 
> On 18.12.21 at 19:10, Kevin O'Connor wrote:
> > Use a different definition for the linker script symbol to avoid a gcc
> > warning.
> 
> What GCC version did you use, and which warning is it exactly?

$ gcc --version
gcc (GCC) 11.2.1 20210728 (Red Hat 11.2.1-1)


src/post.c: In function 'reloc_preinit':
src/post.c:248:34: warning: array subscript 'u32 {aka unsigned int}[0]' is 
partly outside array bounds of 'char[1]' [-Warray-bounds]
  248 | *((u32*)(dest + *reloc)) += delta;
  |  ^~
In file included from src/biosvar.h:11,
 from src/post.c:8:
src/post.c:278:26: note: while referencing 'code32flat_start'
  278 | updateRelocs(VSYMBOL(code32flat_start), VSYMBOL(_reloc_init_start)
  |  ^~~~
src/memmap.h:18:36: note: in definition of macro 'SYMBOL'
   18 | #define SYMBOL(SYM) ({ extern char SYM; (u32)&SYM; })
  |    ^~~


> 
> > Signed-off-by: Kevin O'Connor 
> > ---
> >   src/memmap.h | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/src/memmap.h b/src/memmap.h
> > index 22bd4bc..32ca265 100644
> > --- a/src/memmap.h
> > +++ b/src/memmap.h
> > @@ -15,7 +15,7 @@ static inline void *memremap(u32 addr, u32 len) {
> >   }
> >   // Return the value of a linker script symbol (see scripts/layoutrom.py)
> > -#define SYMBOL(SYM) ({ extern char SYM; (u32)&SYM; })
> > +#define SYMBOL(SYM) ({ extern char SYM[]; (u32)SYM; })
> >   #define VSYMBOL(SYM) ((void*)SYMBOL(SYM))
> >   #endif // memmap.h
> 
> gcc (Debian 11.2.0-13) 11.2.0 still shows a lot of:
> 
>   Compile checking out/src/fw/smm.o
> In file included from src/fw/smm.c:18:
> src/fw/smm.c: In function 'smm_save_and_copy':
> src/string.h:23:16: warning: '__builtin_memcpy' offset [0, 511] is out
> of the bounds [0, 0] [-Warray-bounds]
>23 | #define memcpy __builtin_memcpy
> src/fw/smm.c:148:5: note: in expansion of macro 'memcpy'
>   148 | memcpy(&smm->cpu, &initsmm->cpu, sizeof(smm->cpu));
>   | ^~
> 

Yes - I see that as well in smm.c.  Alas, I don't have a fix for it.
It seems to me that gcc is producing bogus warnings here.  It looks
like anything of the form "memcpy((void*)0x1234, p, n)" where n is
greater than 8 produces this warning.  It's a requirement to memcpy to
a physical memory address.  Disabling the warning would require adding
both "-Wno-array-bounds -Wno-stringop-overflow" to the build.

Maybe someone else has an idea on how to suppress this warning.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCH] memmap: Fix gcc out-of-bounds warning

2021-12-18 Thread Kevin O'Connor
Use a different definition for the linker script symbol to avoid a gcc
warning.

Signed-off-by: Kevin O'Connor 
---
 src/memmap.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/memmap.h b/src/memmap.h
index 22bd4bc..32ca265 100644
--- a/src/memmap.h
+++ b/src/memmap.h
@@ -15,7 +15,7 @@ static inline void *memremap(u32 addr, u32 len) {
 }
 
 // Return the value of a linker script symbol (see scripts/layoutrom.py)
-#define SYMBOL(SYM) ({ extern char SYM; (u32)&SYM; })
+#define SYMBOL(SYM) ({ extern char SYM[]; (u32)SYM; })
 #define VSYMBOL(SYM) ((void*)SYMBOL(SYM))
 
 #endif // memmap.h
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [PATCH] vgasrc: Don't use VAR16 in header files to fix gcc warning

2021-12-18 Thread Kevin O'Connor
Some versions of gcc complain when VAR16 is used in both the header
and C files - use only in the C file to fix the warning.

Signed-off-by: Kevin O'Connor 
---
 vgasrc/svgamodes.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/vgasrc/svgamodes.h b/vgasrc/svgamodes.h
index 782d30b..6ac1d64 100644
--- a/vgasrc/svgamodes.h
+++ b/vgasrc/svgamodes.h
@@ -6,7 +6,7 @@ struct generic_svga_mode {
 struct vgamode_s info;
 };
 
-extern struct generic_svga_mode svga_modes[] VAR16;
-extern unsigned int svga_mcount VAR16;
+extern struct generic_svga_mode svga_modes[];
+extern unsigned int svga_mcount;
 
 #endif /* __SVGAMODES_H */
-- 
2.31.1

___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH] svgamodes: add standard 4k modes

2021-12-18 Thread Kevin O'Connor
On Thu, Dec 16, 2021 at 08:20:58AM +0100, Gerd Hoffmann wrote:
> Add all three 4k modes.  Computer monitors typically use
> the first one (3840x2160).
> 
> Add 16 and 32 bpp variants.  24bpp is dead these days, and
> software which is so old that still uses those modes most
> likely doesn't even know what 4k is.

Thanks.  I committed this change.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH v4 0/3] virtio-blk: add feature VIRTIO_BLK_F_SIZE_MAX

2021-12-18 Thread Kevin O'Connor
On Tue, Dec 07, 2021 at 09:31:05AM +0800, Andy Pei wrote:
> If the driver reads data larger than VIRTIO_BLK_F_SIZE_MAX, it will
> cause issues for the DMA engine.
> So add VIRTIO_BLK_F_SIZE_MAX feature support: when upper-layer
> software wants to read data larger than VIRTIO_BLK_F_SIZE_MAX, the
> virtio-blk driver splits one large request into multiple smaller ones.
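
As a rough illustration of the splitting described above (a sketch
only -- the names and types are made up, not the SeaBIOS virtio-blk
code), the request loop amounts to:

#include <stdint.h>

/* Issue one large transfer as chunks no bigger than 'max_count'
 * sectors, where max_count is derived from the negotiated byte limit
 * (in the spirit of VIRTIO_BLK_F_SIZE_MAX / size_max). */
static int split_io(uint64_t lba, uint32_t count, uint32_t max_count,
                    int (*xfer)(uint64_t lba, uint32_t count))
{
    while (count) {
        uint32_t chunk = count < max_count ? count : max_count;
        int ret = xfer(lba, chunk);
        if (ret)
            return ret;
        lba += chunk;
        count -= chunk;
    }
    return 0;
}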

Thanks.  I committed this change.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH v3 0/2] pci: reserve resources for pcie-pci-bridge to fix regressed hotplug on q35 with ACPI hotplug enabled

2021-12-18 Thread Kevin O'Connor
On Mon, Nov 29, 2021 at 06:48:10AM -0500, Igor Mammedov wrote:
> Since QEMU-6.1, Q35 machine is using ACPI PCI hotplug by default.
> However SeaBIOS did not take in account pcie-pci-bridge and treated it
> as any other PCIe device which is not correct in the case of the bridge.
> 
> Patch [1/2] fixes main issue by treating the bridge as a device with
> possible SHPC capability.
> Patch [2/2] fixes missed IO reservation, by making sure that in the case of
> the bridge firmware reserves IO resource.

Thanks.  I committed this series.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH 00/19] Support SMBIOS 3.0 entry points

2021-12-18 Thread Kevin O'Connor
On Thu, Dec 10, 2020 at 04:26:21PM -0500, Eduardo Habkost wrote:
> This series implements support for SMBIOS 3.0 entry points in
> 
> SeaBIOS.
> 
> The main advantage of SMBIOS 3.0 entry points is the higher limit
> for total table size.  The SMBIOS 2.1 64435 bytes limit can be
> easily hit in QEMU if creating virtual machines with more than
> 720 VCPUs.
> 
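
For context, the 64-bit ("_SM3_") entry point that enables this looks
roughly as follows -- the layout is recalled from the SMBIOS 3.0 spec,
so treat it as a sketch and check the DMTF document for the
authoritative definition.  The 32-bit maximum table size and 64-bit
table address are what remove the old 16-bit limit:

#include <stdint.h>

struct smbios_30_entry_point {
    char     anchor[5];        /* "_SM3_" */
    uint8_t  checksum;
    uint8_t  length;           /* 0x18 */
    uint8_t  major, minor, docrev;
    uint8_t  revision;         /* entry point revision, 0x01 */
    uint8_t  reserved;
    uint32_t table_max_size;   /* 32-bit size instead of 16-bit */
    uint64_t table_address;    /* table may live above 4G */
} __attribute__((packed));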

Thanks.  Somehow this slipped through the cracks.  However, I merged
this patch series as found at: https://gitlab.com/ehabkost/seabios on
the work/smbios-3.0 branch.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [coreboot] [ANNOUNCE] SeaBIOS 1.15.0

2021-12-03 Thread Kevin O'Connor
On Fri, Dec 03, 2021 at 09:35:24AM +0100, Paul Menzel wrote:
> Dear Kevin,
> 
> 
> On 03.12.21 at 03:40, Kevin O'Connor wrote:
> > The 1.15.0 version of SeaBIOS has now been released.  For more
> > information on the release, please see:
> 
> Thank you for cutting the release.
> 
> > http://seabios.org/Releases
> 
> It’d be great if you updated your announcement template to use HTTPS URLs.

Thanks.  Good catch.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: new november release?

2021-12-02 Thread Kevin O'Connor
On Wed, Dec 01, 2021 at 12:21:34PM +0100, Gerd Hoffmann wrote:
>   Hi,
> 
> > I'm not sure what you'd prefer to do with the SeaBIOS release though.
> > We have a couple of high-level options:
> > 
> > 1. Release v1.15.0 this week without these changes (tag 64f37cc5).
> > 
> > 2. Commit the smbios 3 changes (and pcie-pci-bridge changes) and
> >release v1.15.0 in early January.
> 
> You mean v1.16.0 I guess?

Actually, I meant delaying v1.15.0 until January.  But, releasing
v1.16.0 in early 2022 is fine too.

I went ahead and tagged v1.15.0 with the current code.  I'll look to
merge in the new changes next week.

Cheers,
-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] [ANNOUNCE] SeaBIOS 1.15.0

2021-12-02 Thread Kevin O'Connor
The 1.15.0 version of SeaBIOS has now been released.  For more
information on the release, please see:

http://seabios.org/Releases


New in this release:

* Improved support for USB devices with multiple interfaces.
* Support for USB XHCI devices using direct MMIO access (instead of PCI).
* NVMe improvements.
* Increased "f-segment" RAM allocations for BIOS tables.
* Several bug fixes and code cleanups.


For information on obtaining SeaBIOS, please see:

http://seabios.org/Download


= git shortlog -n rel-1.14.0..rel-1.15.0 =

Gerd Hoffmann (9):
  output: add support for uppercase hex numbers
  dsdt: add support for pnp ids as strings
  usb: add boot prio support for mmio host adapters
  usb/xhci: split xhci setup into generic and pci parts
  usb/xhci: add support for mmio host adapters (via acpi).
  usb boot: add xhci mmio example
  nvme: improve namespace allocation
  nvme: drive desc should not include the newline
  Increase BUILD_MIN_BIOSTABLE for large roms

Alexander Graf (4):
  nvme: Record maximum allowed request size
  nvme: Allow to set PRP2
  nvme: Pass large I/O requests as PRP lists
  nvme: Split requests by maximum allowed size

Stefan Berger (4):
  tcgbios: Fix details in log entries
  Add implementations for sha256, sha384, and sha512
  tcgbios: Use The proper sha function for each PCR bank
  tcgbios: Disable platform hierarchy in case of failure

Volker Rümelin (2):
  stacks: call check_irqs() in run_thread()
  stacks: call check_irqs() after switch_next()

Alex Martens via SeaBIOS (1):
  nvme: fix missing newline on sq full print

Daniel P. Berrangé (1):
  smbios: avoid integer overflow when adding SMBIOS type 0 table

David Woodhouse (1):
  nvme: Clean up nvme_cmd_readwrite()

Kevin O'Connor (1):
  docs: Note v1.15.0 release

Matt DeVillier (1):
  usb.c: Fix devices using non-primary interface descriptor

Mike Banon (1):
  Support booting USB drives with a write protect switch enabled

Sergei Trofimovich (1):
  vgasrc: ignore .node.gnu.property (binutils-2.36 support)

Stefan Ott via SeaBIOS (1):
  usb-hid: Increase MAX_KBD_EVENT

weitaowang...@zhaoxin.com (1):
  USB:Fix xHCI initail fail by using longer reset and CNR clear timeout 
value
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: new november release?

2021-11-30 Thread Kevin O'Connor
On Tue, Nov 23, 2021 at 09:50:44AM +0100, Gerd Hoffmann wrote:
>   Hi,
> 
> > > Damn.  Support for smbios 3.0 fell completely through the cracks:
> > > 
> > > https://www.mail-archive.com/seabios@seabios.org/msg12438.html
> > > 
> > > The qemu-side changes have long been merged (except for flipping the
> > > default to 3.0), but I didn't pay attention so the seabios patches
> > > didn't get merged :(
> > > 
> > > I'd love to have them included in the new release, I suspect that
> > > implies we have to extend the freeze and release in early december?
> > 
> > Yeah - if we're going to make a change now, I think we should wait a
> > few weeks before tagging a release.
> > 
> > It's been a while since I've looked at the smbios 3 patches - are
> > there any changes since then?
> 
> Latest version is at:
> https://gitlab.com/ehabkost/seabios/-/commits/work/smbios-3.0/
> 
> Rebased, but no functional changes in the smbios updates.

I took another look at those patches and they look fine to me.

I'm not sure what you'd prefer to do with the SeaBIOS release though.
We have a couple of high-level options:

1. Release v1.15.0 this week without these changes (tag 64f37cc5).

2. Commit the smbios 3 changes (and pcie-pci-bridge changes) and
   release v1.15.0 in early January.

I suppose we could commit the changes and try to make a release in
December, but that seems like it would dilute the meaning of a
"release".

What are your thoughts?

Cheers,
-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH v3 0/2] pci: reserve resources for pcie-pci-bridge to fix regressed hotplug on q35 with ACPI hotplug enabled

2021-11-30 Thread Kevin O'Connor
On Mon, Nov 29, 2021 at 06:48:10AM -0500, Igor Mammedov wrote:
> Since QEMU-6.1, Q35 machine is using ACPI PCI hotplug by default.
> However SeaBIOS did not take in account pcie-pci-bridge and treated it
> as any other PCIe device which is not correct in the case of the bridge.

Thanks.  It looks fine to me.  I'll give another day or so for others
to comment, but otherwise will plan to commit.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: new november release?

2021-11-22 Thread Kevin O'Connor
On Mon, Nov 22, 2021 at 10:15:49AM +0100, Gerd Hoffmann wrote:
> On Thu, Oct 21, 2021 at 09:43:39PM -0400, Kevin O'Connor wrote:
> > On Thu, Oct 21, 2021 at 03:01:23PM +0200, Gerd Hoffmann wrote:
> > >   Hi,
> > > 
> > > It's been a while since the last release, we have some nice but
> > > unreleased bits in the master branch (tpm and nvme improvements
> > > for example), time for a new release I think.
> > > 
> > > How about freeze in a week, release by roughly mid-november?
> > 
> > It's fine with me.  A release in late November might be a little
> > easier for me.
> 
> Damn.  Support for smbios 3.0 fell completely through the cracks:
> 
> https://www.mail-archive.com/seabios@seabios.org/msg12438.html
> 
> The qemu-side changes have long been merged (except for flipping the
> default to 3.0), but I didn't pay attention so the seabios patches
> didn't get merged :(
> 
> I'd love to have them included in the new release, I suspect that
> implies we have to extend the freeze and release in early december?

Yeah - if we're going to make a change now, I think we should wait a
few weeks before tagging a release.

It's been a while since I've looked at the smbios 3 patches - are
there any changes since then?

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: new november release?

2021-10-21 Thread Kevin O'Connor
On Thu, Oct 21, 2021 at 03:01:23PM +0200, Gerd Hoffmann wrote:
>   Hi,
> 
> It's been a while since the last release, we have some nice but
> unreleased bits in the master branch (tpm and nvme improvements
> for example), time for a new release I think.
> 
> How about freeze in a week, release by roughly mid-november?

It's fine with me.  A release in late November might be a little
easier for me.

Cheers,
-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH v3 0/5] Make seabios linkable with lld

2021-10-21 Thread Kevin O'Connor
On Tue, Oct 19, 2021 at 07:50:40PM -0400, Brad Smith via SeaBIOS wrote:
> On 8/14/2021 1:02 AM, Brad Smith wrote:
> 
> > As an OpenBSD developer I'm very interested in these patches. SeaBIOS is
> > the only package
> > left in our ports tree still using ld to link and one of very few ports
> > not building with Clang. I
> > just happened to come across your diffs looking to see if anything had
> > been done in this area
> > when searching via Google.
> 
> Hello? Anyone?

Hi,

It's been pretty quiet here on the SeaBIOS mailing list recently.
There hasn't been that much new code in a while - the main emphasis
has been on incremental changes.

If someone wants to send in a series of incremental patches that
convert the build to work with both clang and binutils then I'll take
a look at them.  I don't see an issue with a more robust build.
However, the main emphasis is going to be on not introducing a
regression.  That is, I don't think simplifying the build requirements
is a sufficient reason to risk introducing a regression.  In
particular, one of the main goals of SeaBIOS is to support old
software (30+ years) - it's really difficult to identify and track
down regressions when they do occur.

There was a series of patches from Fāng-ruì Sòng in April of 2020:

https://www.mail-archive.com/seabios@seabios.org/msg12169.html

There was a review of those patches and, if I recall correctly, it was
determined that more testing would be needed before we were confident
in merging.  Alas, it seems no one wanted to do that testing.

The same author of those patches also went on to submit patches to
binutils that knowingly broke SeaBIOS and did not inform either
project about it until after the patches were accepted and deployed.

https://www.mail-archive.com/seabios@seabios.org/msg12285.html

That caused build breakages in "linux beta distributions" and required
both projects to deploy workarounds to mitigate the issue.  So, we are
obviously concerned given our goal of not introducing regressions.

Cheers,
-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH] tcgbios: Disable platform hierarchy in case of failure

2021-09-23 Thread Kevin O'Connor
On Tue, Sep 07, 2021 at 05:05:52PM -0400, Stefan Berger wrote:
> In the rare case of a TPM 2 failure, disable the platform hierarchy after
> disabling the endorsement and owner hierarchies.

Thanks.  I committed this change.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH] nvme: fix missing newline on sq full print

2021-06-30 Thread Kevin O'Connor
On Sat, Jun 26, 2021 at 10:13:40PM +, Alex Martens via SeaBIOS wrote:
> Signed-off-by: Alex Martens 

Thanks.  I committed this change.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH] tcgbios: Fix details in log entries

2021-06-30 Thread Kevin O'Connor
On Wed, Jun 09, 2021 at 01:31:59PM -0400, Stefan Berger wrote:
> Fix two details of the logs:
> 
> - Set the field 'SpecErrata' to 2 as required by specs.
> - Write the separator into the log entry's event field.
> 
> Signed-off-by: Stefan Berger 

Thanks.  I committed this change.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH 0/2] tcgbios: Use the proper hashes for the TPM 2 PCR banks

2021-06-30 Thread Kevin O'Connor
On Mon, Jun 14, 2021 at 01:35:47PM -0400, Stefan Berger wrote:
> This PR adds the implementations for sha{256, 384, 512} and makes use
> of the hash implementation when extending the PCRs of the respective
> banks rather than always using the sha1 and zero-padding it for the PCR
> banks of a TPM 2.
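
The gist, for anyone skimming: each PCR bank now gets a digest of its
own algorithm instead of a zero-padded SHA-1.  A rough sketch of the
mapping (algorithm IDs from the TCG TPM 2.0 spec; this is not code
from the series, and u16 is assumed from SeaBIOS's types.h):

static int digest_size_for_bank(u16 hashalg)
{
    switch (hashalg) {
    case 0x0004: return 20;   /* TPM2_ALG_SHA1 */
    case 0x000b: return 32;   /* TPM2_ALG_SHA256 */
    case 0x000c: return 48;   /* TPM2_ALG_SHA384 */
    case 0x000d: return 64;   /* TPM2_ALG_SHA512 */
    default:     return -1;   /* unknown bank */
    }
}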

Thanks.  I committed this change.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH 0/2] tcgbios: Use the proper hashes for the TPM 2 PCR banks

2021-06-14 Thread Kevin O'Connor
On Mon, Jun 14, 2021 at 01:35:47PM -0400, Stefan Berger wrote:
> This PR adds the implementations for sha{256, 384, 512} and makes use
> of the hash implementation when extending the PCRs of the respective
> banks rather than always using the sha1 and zero-padding it for the PCR
> banks of a TPM 2.

Okay, thanks.  Looks fine to me.  If there are no further comments
I'll look to commit this (and the previous tcgbios fix) in a few days.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH v4 0/2] fix PS/2 keyboard initialization failures

2021-06-09 Thread Kevin O'Connor
On Fri, Jun 04, 2021 at 07:59:27PM +0200, Volker Rümelin wrote:
> Hi,
> 
> I wrote a few patches for qemu [1] and patch 10/11 exposes a bug
> in seabios.
> 
> Sometimes the PS/2 keyboard initialization fails and seabios
> doesn't detect the keyboard. This patch fixes the PS/2 keyboard
> initialization.
> 
> I wouldn't be surprised if my patch also fixes PS/2 keyboard
> initialization problems on bare metal. A few bug reports look
> very similar to what I see on my computer with qemu.

Thanks, I committed this series.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH v4 0/2] fix PS/2 keyboard initialization failures

2021-06-04 Thread Kevin O'Connor
On Fri, Jun 04, 2021 at 07:59:27PM +0200, Volker Rümelin wrote:
> Hi,
> 
> I wrote a few patches for qemu [1] and patch 10/11 exposes a bug
> in seabios.
> 
> Sometimes the PS/2 keyboard initialization fails and seabios
> doesn't detect the keyboard. This patch fixes the PS/2 keyboard
> initialization.
> 
> I wouldn't be surprised if my patch also fixes PS/2 keyboard
> initialization problems on bare metal. A few bug reports look
> very similar to what I see on my computer with qemu.

Thanks, the series looks good to me.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH v3 1/3] stacks: fix asm constraints in run_thread()

2021-06-02 Thread Kevin O'Connor
On Wed, Jun 02, 2021 at 09:25:54PM +0200, Volker Rümelin wrote:
> > On Tue, Jun 01, 2021 at 09:06:54PM +0200, Volker Rümelin wrote:
> > > The variables data, func, thread and cur are input operands for
> > > the asm statement in run_thread(). The assembly code clobbers
> > > these inputs.
> > > 
> > > >From the gcc documentation chapter Extended-Asm, Input Operands:
> > > "It is not possible to use clobbers to inform the compiler that
> > > the values in these inputs are changing. One common work-around
> > > is to tie the changing input variable to an output variable that
> > > never gets used."
> > Thanks.  However, this change doesn't look correct to me.  The
> > variables data, func, thread and cur were all *output* operands.  They
> > were output operands that use "+" to indicate that they also have
> > inputs associated with them.  That is, unless I'm missing something,
> > the asm already followed the gcc documentation (use an output operand
> > and don't use the results of it).
> 
> Hi Kevin,
> 
> the "+" output constraint indicates that the assembly code uses
> the variable as input and updates the same variable.
> 
> The gcc manual does not say the compiler will not use the output
> operand result. It actually uses the updated variable.
> 
> Your code works because you never use the output variables.
> 
> > Did you observe a problem during runtime with the original code?
> 
> My next patch does not work, because the compiler assumes edx is
> the updated variable cur at the end of the assembly code, but
> edx is actually clobbered. Just use a dprintf() to print cur
> before and after the asm statement to see I'm right.
> objdump -dS src/stacks.o will show the same.

Okay, I think I understand the issue.  The asm constrains aren't
wrong, but "cur" is clobbered by the asm.  I think it's probably
simpler to use this in the new code:

    if (getCurThread() == &MainThread)
        check_irqs();

And do something similar in yield().  That is, not save/restore
getCurThread() during stack switches.  If it helps, we can also rename
"cur" to "edx" to make it more clear that the C variable is clobbered.

> > > Add unused output variables and fix the asm constraints in
> > > run_thread().
> > > 
> > > Signed-off-by: Volker Rümelin
> > > ---
> > >   src/stacks.c | 10 ++
> > >   1 file changed, 6 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/src/stacks.c b/src/stacks.c
> > > index 2fe1bfb..ef0aba1 100644
> > > --- a/src/stacks.c
> > > +++ b/src/stacks.c
> > > @@ -565,6 +565,7 @@ run_thread(void (*func)(void*), void *data)
> > >   thread->stackpos = (void*)thread + THREADSTACKSIZE;
> > >   struct thread_info *cur = getCurThread();
> > >   hlist_add_after(&thread->node, &cur->node);
> > > +u32 unused_a, unused_b, unused_c, unused_d;
> > >   asm volatile(
> > >   // Start thread
> > >   "  pushl $1f\n" // store return pc
> > > @@ -576,14 +577,15 @@ run_thread(void (*func)(void*), void *data)
> > >   // End thread
> > >   "  movl %%ebx, %%eax\n" // %eax = thread
> > >   "  movl 4(%%ebx), %%ebx\n"  // %ebx = thread->node.next
> > > -"  movl (%5), %%esp\n"  // %esp = MainThread.stackpos
> > > -"  calll %4\n"  // call __end_thread(thread)
> > > +"  movl (%9), %%esp\n"  // %esp = MainThread.stackpos
> > > +"  calll %8\n"  // call __end_thread(thread)
> > >   "  movl -4(%%ebx), %%esp\n" // %esp = next->stackpos
> > >   "  popl %%ebp\n"// restore %ebp
> > >   "  retl\n"  // restore pc
> > >   "1:\n"
> > > -: "+a"(data), "+c"(func), "+b"(thread), "+d"(cur)
> > > -: "m"(*(u8*)__end_thread), "m"(MainThread)
> > > +: "=a"(unused_a), "=c"(unused_c), "=b"(unused_b), "=d"(unused_d)
> > > +: "0"(data), "1"(func), "2"(thread), "3"(cur),
> > > +  "m"(*(u8*)__end_thread), "m"(MainThread)
> > The original code made sure data was in eax, func in ecx, thread in
> > ebx, and cur in edx.  The altered code no longer enforces this
> > association and I think that could introduce a bug.
> 
> The digit input constraints tie the variables to the correct
> register just like the old code. data, func, thread and cur are
> still in eax, ecx, ebx and edx at the beginning of the assembly
> code.

Sorry, you are correct, I missed the association.  That said, I don't
think we need to tell gcc to save/restore the original values during
the stack switch as they can be easily recalculated.

Cheers,
-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH v3 1/3] stacks: fix asm constraints in run_thread()

2021-06-02 Thread Kevin O'Connor
On Tue, Jun 01, 2021 at 09:06:54PM +0200, Volker Rümelin wrote:
> The variables data, func, thread and cur are input operands for
> the asm statement in run_thread(). The assembly code clobbers
> these inputs.
> 
> From the gcc documentation chapter Extended-Asm, Input Operands:
> "It is not possible to use clobbers to inform the compiler that
> the values in these inputs are changing. One common work-around
> is to tie the changing input variable to an output variable that
> never gets used."

Thanks.  However, this change doesn't look correct to me.  The
variables data, func, thread and cur were all *output* operands.  They
were output operands that use "+" to indicate that they also have
inputs associated with them.  That is, unless I'm missing something,
the asm already followed the gcc documentation (use an output operand
and don't use the results of it).

Did you observe a problem during runtime with the original code?

> Add unused output variables and fix the asm constraints in
> run_thread().
> 
> Signed-off-by: Volker Rümelin 
> ---
>  src/stacks.c | 10 ++
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/src/stacks.c b/src/stacks.c
> index 2fe1bfb..ef0aba1 100644
> --- a/src/stacks.c
> +++ b/src/stacks.c
> @@ -565,6 +565,7 @@ run_thread(void (*func)(void*), void *data)
>  thread->stackpos = (void*)thread + THREADSTACKSIZE;
>  struct thread_info *cur = getCurThread();
>  hlist_add_after(&thread->node, &cur->node);
> +u32 unused_a, unused_b, unused_c, unused_d;
>  asm volatile(
>  // Start thread
>  "  pushl $1f\n" // store return pc
> @@ -576,14 +577,15 @@ run_thread(void (*func)(void*), void *data)
>  // End thread
>  "  movl %%ebx, %%eax\n" // %eax = thread
>  "  movl 4(%%ebx), %%ebx\n"  // %ebx = thread->node.next
> -"  movl (%5), %%esp\n"  // %esp = MainThread.stackpos
> -"  calll %4\n"  // call __end_thread(thread)
> +"  movl (%9), %%esp\n"  // %esp = MainThread.stackpos
> +"  calll %8\n"  // call __end_thread(thread)
>  "  movl -4(%%ebx), %%esp\n" // %esp = next->stackpos
>  "  popl %%ebp\n"// restore %ebp
>  "  retl\n"  // restore pc
>  "1:\n"
> -: "+a"(data), "+c"(func), "+b"(thread), "+d"(cur)
> -: "m"(*(u8*)__end_thread), "m"(MainThread)
> +: "=a"(unused_a), "=c"(unused_c), "=b"(unused_b), "=d"(unused_d)
> +: "0"(data), "1"(func), "2"(thread), "3"(cur),
> +  "m"(*(u8*)__end_thread), "m"(MainThread)

The original code made sure data was in eax, func in ecx, thread in
ebx, and cur in edx.  The altered code no longer enforces this
association and I think that could introduce a bug.

Separately, the rest of the patch series looks good to me.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH v3] Increase BUILD_MIN_BIOSTABLE for large roms

2021-06-02 Thread Kevin O'Connor
On Mon, May 31, 2021 at 07:55:28AM +0200, Gerd Hoffmann wrote:
> BUILD_MIN_BIOSTABLE reserves space in the f-segment.  Some data
> structures -- for example disk drives known to seabios -- must be
> stored there, so the space available here limits the number of
> devices seabios is able to manage.
> 
> This patch sets BUILD_MIN_BIOSTABLE to 8k for bios images being 256k or
> larger in size.  32bit code is moved off in that case, so we have more
> room in the f-segment then.

Thanks.  Looks good to me.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH 1/1] stacks: fix missing NMI_DISABLE_BIT on call32_post()

2021-05-27 Thread Kevin O'Connor
On Thu, May 27, 2021 at 03:43:51PM -0300, Heitor Alves de Siqueira wrote:
> On Thu, May 27, 2021 at 11:44 AM Kevin O'Connor  wrote:
> >
> > The purpose of this code is to restore the NMI_DISABLE_BIT to what it
> > was prior to call32_prep().  If something calls the bios without the
> > NMI_DISABLE_BIT set, it's this code that makes sure SeaBIOS returns to
> > that calling code with NMI_DISABLE_BIT also not set.
> >
> > If you've run into some bug, I think it would help if you could
> > further describe that bug.
> >
> > Cheers,
> > -Kevin
> 
> Hi Kevin,
> 
> Thank you for the explanation! Sorry, it seems I misunderstood this part of 
> the
> code as I thought every access to PORT_CMOS_DATA should be performed with NMIs
> disabled. Maybe giving some of the background on the issue will help me
> understand this a bit better, indeed!
> 
> This has been originally reported by some Ubuntu users running specific VMs on
> older versions of seabios, where they would occasionally see KVM emulation
> failures and VMs going into "PAUSED" state (and being unable to resume 
> without a
> full VM reboot afterwards). Inspecting the ASM dumps [0] on those VMs revealed
> that the last actions performed were accesses to PORT_CMOS_DATA, and those
> seemed to be caused by rtc_mask(). Since these were on old versions of 
> seabios,
> they looked like a result of our builds missing patch 3156b71a535e (rtc: 
> Disable
> NMI in rtc_mask()) [1], which we tried to address initially.
> 
> After providing new packages of seabios with the rtc_mask() patch, some users
> noticed that a few VMs still continued to present similar symptoms, but with a
> different ASM dump this time. This was also seen on "newer" versions of our
> seabios packages based on upstream 1.10.2, which should already include the
> rtc_mask() patches by default (git describe --contains reports this patch 
> being
> introduced with rel-1.9.0~47). These new failed instances lead us to believe
> that call32_post() was the culprit, since the trapping instruction was still 
> the
> same access to PORT_CMOS_DATA which was "unguarded" by an NMI_DISABLE_BIT.

It's not necessary to disable NMI (non-maskable interrupts) to access
the CMOS bits.  It's just that IO port 0x70 (PORT_CMOS_INDEX) is used
for both the CMOS index and for the NMI disable flag.  It's two totally
separate functions merged into the same IO register (it seems that was
considered great fun back in the '80s).

>We
> then provided another package for testing, implementing the patch I've 
> proposed
> originally in this thread, and our users reported no further KVM emulation
> failures.
> 
> Unfortunately, I'm not entirely sure what originally causes the KVM emulation
> failures, as I've been unable to reproduce these issues in test VMs. Our users
> reported that the commands below can trigger the emulation failures, but I 
> have
> no details on the exact platform those are running on or any details of the
> specific file system in use:
> 
> root@vsfo-2[]:/root> date; fsfreeze --freeze /flash
> root@vsfo-2[]:/root> date; dd if=/dev/zero of=/flash/test bs=1 count=0 seek=1G
> 
> In any case, I'm somewhat puzzled by CMOS port accesses causing KVM emulation
> failures. Could it be that an NMI comes in between outb/inb and we end up 
> trying
> to read from a nonsense CMOS index?

No, there are no invalid values in the CMOS index.  Likely what's
occurring is that an NMI is run in a state that it is not expecting and
it corrupts the cpu/stack.  KVM probably only observes the problem (or
reports the problem) when the NMI returns.

> If I understand it correctly, my proposed patch effectively turns off NMIs
> unconditionally which sounds like it should cause horrible breakage.

Linux, almost assuredly, restores NMIs to their appropriate state when
it regains control of the processor.  So, the temporary disabling of
NMIs likely goes unnoticed.

>Could you
> help me understand why that doesn't happen with the rtc_read/write/mask
> functions in src/hw/rtc.c as well?

SeaBIOS needs to disable NMIs when in 32bit mode.  It's not related to
the CMOS bits - it's just that we can't permit an NMI when in 32bit
mode.  So, any time the code writes to IO port 0x70 (PORT_CMOS_INDEX)
when in 32bit mode it needs to make sure it doesn't mistakenly enable
NMIs.  Thus the rtc_xxx() functions make sure the NMI bit is always
set.
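
Concretely, the access pattern is just (simplified sketch, not the
actual rtc.c code; it assumes SeaBIOS's outb()/inb() helpers and the
PORT_CMOS_* / NMI_DISABLE_BIT definitions from src/hw/rtc.h):

static u8 cmos_read_keep_nmi_disabled(u8 index)
{
    outb(index | NMI_DISABLE_BIT, PORT_CMOS_INDEX);
    return inb(PORT_CMOS_DATA);
}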

The reason that NMIs are disabled in 32bit mode is because old 3rd
party irq handlers are expecting to be invoked in 16bit mode and will
not function correctly if they are invoked when the cpu is in 32bit
mode.  Thus, we can't transition to 32bit mode without first disabling
all irqs (both regular irqs and NMIs).

In contrast, the call32_prep

[SeaBIOS] Re: [PATCH v2] make BUILD_MIN_BIOSTABLE configurable

2021-05-27 Thread Kevin O'Connor
On Thu, May 27, 2021 at 10:52:27AM +0200, Gerd Hoffmann wrote:
> BUILD_MIN_BIOSTABLE reserves space in the f-segment.  Some data
> structures -- for example disk drives known to seabios -- must be
> stored there, so the space available here limits the number of
> devices seabios is able to manage.
> 
> This patch adds a config option for BUILD_MIN_BIOSTABLE so the size of
> the reservation can be configured at build time.  Default is 8k for bios
> images being 256k or larger in size (32bit code is moved off then so we
> have more room in the f-segment), 2k (current value) otherwise.

I'm not sure it is a good idea to make this a user configurable value.
I'm concerned that it's too easy to set it to a value that breaks
things.  Is there any harm in setting it to 8K unconditionally?

> 
> Signed-off-by: Gerd Hoffmann 
> ---
>  Makefile | 2 +-
>  scripts/layoutrom.py | 3 ++-
>  src/Kconfig  | 7 +++
>  3 files changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/Makefile b/Makefile
> index 3d8943ef5f25..ddcd1dff399d 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -172,7 +172,7 @@ $(OUT)romlayout16.lds: $(OUT)ccode32flat.o 
> $(OUT)code32seg.o $(OUT)ccode16.o $(O
>   $(Q)$(OBJDUMP) -thr $(OUT)code32flat.o > $(OUT)code32flat.o.objdump
>   $(Q)$(OBJDUMP) -thr $(OUT)code32seg.o > $(OUT)code32seg.o.objdump
>   $(Q)$(OBJDUMP) -thr $(OUT)code16.o > $(OUT)code16.o.objdump
> - $(Q)$(PYTHON) ./scripts/layoutrom.py $(OUT)code16.o.objdump 
> $(OUT)code32seg.o.objdump $(OUT)code32flat.o.objdump 
> $(OUT)$(KCONFIG_AUTOHEADER) $(OUT)romlayout16.lds $(OUT)romlayout32seg.lds 
> $(OUT)romlayout32flat.lds
> + $(Q)$(PYTHON) ./scripts/layoutrom.py $(OUT)code16.o.objdump 
> $(OUT)code32seg.o.objdump $(OUT)code32flat.o.objdump 
> $(OUT)$(KCONFIG_AUTOHEADER) $(OUT)romlayout16.lds $(OUT)romlayout32seg.lds 
> $(OUT)romlayout32flat.lds $(CONFIG_MIN_BIOSTABLE)
>  
>  # These are actually built by scripts/layoutrom.py above, but by pulling them
>  # into an extra rule we prevent make -j from spawning layoutrom.py 4 times.
> diff --git a/scripts/layoutrom.py b/scripts/layoutrom.py
> index 6616721d1b58..94deca9fcc85 100755
> --- a/scripts/layoutrom.py
> +++ b/scripts/layoutrom.py
> @@ -636,7 +636,8 @@ def scanconfig(file):
>  
>  def main():
>  # Get output name
> -in16, in32seg, in32flat, cfgfile, out16, out32seg, out32flat = 
> sys.argv[1:]
> +in16, in32seg, in32flat, cfgfile, out16, out32seg, out32flat, biostable 
> = sys.argv[1:]
> +BUILD_MIN_BIOSTABLE = biostable * 1024

FYI, the layoutrom.py script reads in the Kconfig file explicitly a
few lines down.  If we do want to have a setting in Kconfig, I think
it should be taken directly from Kconfig instead of passed from the
command-line.

Cheers,
-Kevin


>  # Read in the objdump information
>  infile16 = open(in16, 'r')
> diff --git a/src/Kconfig b/src/Kconfig
> index 3a8ffa15fded..95519fc50fa3 100644
> --- a/src/Kconfig
> +++ b/src/Kconfig
> @@ -140,6 +140,13 @@ endchoice
>  it into 128 KB (which was big enouth for a long time) you'll
>  probably have to disable some featues such as xhci support.
>  
> +config MIN_BIOSTABLE
> +int "biostable size (in KB)"
> +default 8 if ROM_SIZE >= 256
> +default 2
> +help
> +Memory space for BIOS tables in f-segment.
> +
>  endmenu
>  
>  menu "Hardware support"
> -- 
> 2.31.1
> 
> ___
> SeaBIOS mailing list -- seabios@seabios.org
> To unsubscribe send an email to seabios-le...@seabios.org
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH 1/1] stacks: fix missing NMI_DISABLE_BIT on call32_post()

2021-05-27 Thread Kevin O'Connor
On Thu, May 27, 2021 at 09:25:01AM -0300, Heitor Alves de Siqueira wrote:
> We should disable NMIs when accessing CMOS ports during state restore,
> like it's done during state backup in call32_prep().
> 
> Signed-off-by: Heitor Alves de Siqueira 
> ---
>  src/stacks.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/src/stacks.c b/src/stacks.c
> index 2fe1bfbd64fd..e315066d52e8 100644
> --- a/src/stacks.c
> +++ b/src/stacks.c
> @@ -107,7 +107,7 @@ call32_post(void)
>  // Restore cmos index register
>  u8 cmosindex = GET_LOW(Call16Data.cmosindex);
>  if (!(cmosindex & NMI_DISABLE_BIT)) {
> -outb(cmosindex, PORT_CMOS_INDEX);
> +outb(cmosindex | NMI_DISABLE_BIT, PORT_CMOS_INDEX);
>  inb(PORT_CMOS_DATA);
>  }

The purpose of this code is to restore the NMI_DISABLE_BIT to what it
was prior to call32_prep().  If something calls the bios without the
NMI_DISABLE_BIT set, it's this code that makes sure SeaBIOS returns to
that calling code with NMI_DISABLE_BIT also not set.

If you've run into some bug, I think it would help if you could
further describe that bug.

Cheers,
-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH 0/2] nvme fixes

2021-05-26 Thread Kevin O'Connor
On Wed, May 26, 2021 at 09:49:55AM +0200, Gerd Hoffmann wrote:
> Gerd Hoffmann (2):
> 
>   nvme: improve namespace allocation
> 
>   nvme: drive desc should not include the newline
> 

Thanks.  It looks fine to me.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


[SeaBIOS] Re: [PATCH v2] vgasrc/vgalayout.lds.S: ignore .node.gnu.property (binutils-2.36 support)

2021-05-26 Thread Kevin O'Connor
On Thu, May 20, 2021 at 11:18:48PM +0100, Sergei Trofimovich wrote:
> From: Sergei Trofimovich 
> 
> Modern binutils unconditionally tracks x86_64 ISA levels in intermediate
> files in .note.gnu.property. The custom linker script does not handle the
> section and complains about it:
> 
> ld --gc-sections -T out/vgasrc/vgalayout.lds out/vgaccode16.o \
> out/vgaentry.o out/vgaversion.o -o out/vgarom.o
> ld: section .note.gnu.property LMA [,0027] \
> overlaps section .text LMA [,98af]
> 
> The change ignores .note* sections.

Thanks.  I committed this change.

-Kevin
___
SeaBIOS mailing list -- seabios@seabios.org
To unsubscribe send an email to seabios-le...@seabios.org


  1   2   3   4   5   6   7   8   9   10   >