Here's the NVMe init log; I was able to get it off a Chromebox with CCD :)

/7aa26000\ Start thread
|7aa26000| Found NVMe controller with version 1.3.0.
|7aa26000|   Capabilities 000000303c033fff
|7aa26000|  q 0x7aa77bce q_idx 1 dbl 0x91201004
|7aa26000|  q 0x7aa77bbc q_idx 0 dbl 0x91201000
|7aa26000| sq 0x7aa77bbc q_idx 0 sqe 0x7aa53000
|7aa26000|   admin submission queue: 0x7aa53000
|7aa26000|   admin completion queue: 0x7aa54000
|7aa26000| sq 0x7aa77bbc next_sqe 0
|7aa26000| sq 0x7aa77bbc commit_sqe 0
|7aa26000| cq 0x7aa77bce head 0 -> 1
|7aa26000| sq 0x7aa77bbc advanced to 1
|7aa26000| NVMe has 1 namespace.
|7aa26000|  q 0x7aa77bf1 q_idx 3 dbl 0x9120100c
|7aa26000| sq 0x7aa77bbc next_sqe 1
|7aa26000| sq 0x7aa77bbc commit_sqe 1
|7aa26000| cq 0x7aa77bce head 1 -> 2
|7aa26000| sq 0x7aa77bbc advanced to 2
|7aa26000|  q 0x7aa77bdf q_idx 2 dbl 0x91201008
|7aa26000| sq 0x7aa77bdf q_idx 2 sqe 0x7aa4e000
|7aa26000| sq 0x7aa77bbc next_sqe 2
|7aa26000| sq 0x7aa77bdf create dword10 00ff0001 dword11 00010001
|7aa26000| sq 0x7aa77bbc commit_sqe 2
|7aa26000| cq 0x7aa77bce head 2 -> 3
|7aa26000| sq 0x7aa77bbc advanced to 3
|7aa26000| sq 0x7aa77bbc next_sqe 3
|7aa26000| sq 0x7aa77bbc commit_sqe 3
|7aa26000| cq 0x7aa77bce head 3 -> 4
|7aa26000| sq 0x7aa77bbc advanced to 4
|7aa26000| NVME NS 1 max request size: 4096 sectors
|7aa26000| NVMe NS 1: 476940 MiB (976773168 512-byte blocks + 0-byte metadata)
|7aa26000| Searching bootorder for: /pci@i0cf8/pci-bridge@1c,4/*@0
|7aa26000| Registering bootable: NVMe NS 1: 476940 MiB (976773168 512-byte blocks + 0-byte metadata) (type:2 prio:103 data:f59e0)
|7aa26000| NVMe initialization complete!
\7aa26000/ End thread
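
For reference, the "max request size: 4096 sectors" line above comes from the
drive's MDTS (Maximum Data Transfer Size) field, which is reported in units of
2^MDTS times the minimum memory page size (4 KiB here, per the capabilities
value): presumably MDTS=9, i.e. 2^9 * 4 KiB = 2 MiB = 4096 512-byte sectors.
A minimal sketch of that arithmetic (not the actual nvme.c code; the names
below are made up):

    /* Rough sketch only: how a per-command sector limit can be derived
     * from MDTS.  Field and function names are illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t max_req_sectors(uint8_t mdts, uint32_t page_size,
                                    uint32_t block_size)
    {
        if (mdts == 0)              /* 0 means the drive reports no limit */
            return UINT32_MAX;
        return ((1u << mdts) * page_size) / block_size;
    }

    int main(void)
    {
        /* MDTS=9, 4 KiB pages, 512-byte blocks -> 2 MiB -> 4096 sectors,
         * matching the log line above. */
        printf("%u sectors\n", max_req_sectors(9, 4096, 512));
        return 0;
    }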

On Mon, Jan 17, 2022 at 7:24 PM Matt DeVillier <matt.devill...@gmail.com> wrote:
>
> Greetings! Was this patch set tested on bare metal hardware? I'm
> seeing "GRUB loading: Read Error" when attempting to boot from NVMe on
> various Purism Librem devices (Intel Skylake through Comet Lake
> hardware). Reverting this series resolves the issue. I'm not seeing
> anything in coreboot's cbmem console log (even with the debug level
> raised), which I suppose isn't unexpected given the error comes from
> GRUB rather than SeaBIOS itself. The NVMe drive is a run-of-the-mill
> Samsung EVO (960 or 970), and SeaBIOS reports a max request size of
> 4096 sectors.
>
> cheers,
> Matt
>
>
>
>
> On Fri, Oct 30, 2020 at 6:09 PM Kevin O'Connor <ke...@koconnor.net> wrote:
> >
> > On Thu, Oct 29, 2020 at 05:21:04PM +0000, David Woodhouse wrote:
> > > On Wed, 2020-10-07 at 12:13 -0400, Kevin O'Connor wrote:
> > > > > > > --- a/src/hw/nvme.c
> > > > > > > +++ b/src/hw/nvme.c
> > > > > > > @@ -727,6 +727,22 @@ nvme_cmd_readwrite(struct nvme_namespace *ns, struct disk_op_s *op, int write)
> > > > > > >      u16 const max_blocks = NVME_PAGE_SIZE / ns->block_size;
> > > > > > >      u16 i;
> > > > > > >
> > > > > > > +    /* Split up requests that are larger than the device can handle */
> > > > > > > +    if (op->count > ns->max_req_size) {
> > > > > > > +        u16 count = op->count;
> > > > > > > +
> > > > > > > +        /* Handle the first max_req_size elements */
> > > > > > > +        op->count = ns->max_req_size;
> > > > > > > +        if (nvme_cmd_readwrite(ns, op, write))
> > > > > > > +            return res;
> > > > > > > +
> > > > > > > +        /* Handle the remainder of the request */
> > > > > > > +        op->count = count - ns->max_req_size;
> > > > > > > +        op->lba += ns->max_req_size;
> > > > > > > +        op->buf_fl += (ns->max_req_size * ns->block_size);
> > > > > > > +        return nvme_cmd_readwrite(ns, op, write);
> > > > > > > +    }
> > > > > >
> > > > > > Depending on the disk access, this code could run with a small 
> > > > > > stack.
> > > > > > I would avoid recursion.
> > > > >
> > > > > This is tail recursion, which any reasonable compiler converts into
> > > > > a jmp :). Take a look at the .o file - for me it did become a plain
> > > > > jump.
> > > > >
> > > >
> > > > Okay, that's fine then.
> > >
> > > It's still kind of icky. This used to be a purely iterative function.
> > > Now for a large unaligned request (which nvme_build_prpl refuses to
> > > handle) you get a mixture of (mostly tail) recursion and iteration.
> > >
> > > It'll recurse for each max_req_size, then iterate over each page within
> > > that.
> > >
> > > I'd much rather see just a simple iterative loop. Something like this
> > > (incremental, completely untested):
> >
> > FWIW, I agree that a version without recursion (even tail recursion)
> > would be nice.  If someone wants to test and confirm that, then that
> > would be great.
> >
> > -Kevin
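
Re the iterative loop David suggests above (his incremental diff isn't
reproduced in the quote): purely as an illustration of that shape, and not
his actual patch or SeaBIOS's real nvme.c, a split loop might look roughly
like this, with nvme_xfer_one() standing in for whatever helper issues a
single bounded transfer:

    /* Illustrative sketch only: keep one loop that never issues more
     * than max_req_size sectors per command.  Types and the
     * nvme_xfer_one() helper are hypothetical, not SeaBIOS APIs. */
    #include <stdint.h>

    struct ns { uint32_t max_req_size; uint32_t block_size; };
    struct op { uint64_t lba; void *buf; uint16_t count; };

    /* Hypothetical helper: issue one read/write of at most
     * ns->max_req_size sectors; returns 0 on success. */
    int nvme_xfer_one(struct ns *ns, uint64_t lba, void *buf,
                      uint16_t count, int write);

    static int nvme_readwrite_split(struct ns *ns, struct op *op, int write)
    {
        uint16_t done = 0;
        while (done < op->count) {
            uint16_t chunk = op->count - done;
            if (chunk > ns->max_req_size)
                chunk = ns->max_req_size;
            int rc = nvme_xfer_one(ns, op->lba + done,
                                   (char *)op->buf + (uint32_t)done * ns->block_size,
                                   chunk, write);
            if (rc)
                return rc;
            done += chunk;
        }
        return 0;
    }

The splitting stays in one loop, so there's no recursion (tail or otherwise)
and no dependence on the compiler turning a recursive call into a jump.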
