Re: lang/gcc* package builds vs. release/11.0.1/ and the future release/11.1.0 because of vm_ooffset_t and vm_pindex_t changes and how the lang/gcc* work

2017-06-29 Thread Gerald Pfeifer


On 29 June 2017 18:55:59 GMT+08:00, Mark Millard wrote:
>I'm not currently set up to run more than head on
>any of amd64, powerpc64, powerpc, aarch64, or armv6/7
>(which are all I target). And I'm in the middle of
>attempting a fairly large jump to head -r320458 on
>those.

Oh, then I had misunderstood your previous mail. No worries, I'll gently 
proceed then.

I expect to update gcc5 in the next 24 hours.

>[In my normal/head environment I'm switching to lang/gcc7-devel
>for gcc (from lang/gcc6 ) but I'm odd that way.]

The compiler should be fine; it's a number of ports that are not (some of them 
even blocking the move from GCC 5 to 6 as the default).

Gerald


Re: 11.1-BETA3 uhub probes on warm boot triggers endless reboot cycle

2017-06-29 Thread Glen Barber
On Fri, Jun 30, 2017 at 10:32:23AM +1200, Jonathan Chen wrote:
> On 30 June 2017 at 10:27, Glen Barber  wrote:
> > On Fri, Jun 30, 2017 at 10:23:26AM +1200, Jonathan Chen wrote:
> >> Hi,
> >>
> >> I've got a dual boot system at home, booting into Windows 10 for
> >> games. I've noticed that since I updated to 11.1-BETA3, a reboot from
> >> Windows into FreeBSD results in an endless reboot cycle. In order to
> >> reboot FreeBSD, I have to cold-boot. The endless reboot cycle appears
> >> to be triggered when the kernel attempts to talk to "uhub", at which
> >> point it reboots, and the cycle repeats. No kernel panics, no crash
> >> cores.
> >>
> >> The issue is not a biggie, other than the time it takes for me to
> >> power down completely, wait for capacitors to discharge completely,
> >> and then cold boot the machine. However, I thought I'd bring it to the
> >> notice of the list.
> >>
> >
> > Thank you for the report.  Could you please open a PR about this?
> 
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=220370
> 

Thank you.

Glen





Re: 11.1-BETA3 uhub probes on warm boot triggers endless reboot cycle

2017-06-29 Thread Jonathan Chen
On 30 June 2017 at 10:27, Glen Barber  wrote:
> On Fri, Jun 30, 2017 at 10:23:26AM +1200, Jonathan Chen wrote:
>> Hi,
>>
>> I've got a dual boot system at home, booting into Windows 10 for
>> games. I've noticed that since I updated to 11.1-BETA3, a reboot from
>> Windows into FreeBSD results in an endless reboot cycle. In order to
>> reboot FreeBSD, I have to cold-boot. The endless reboot cycle appears
>> to be triggered when the kernel attempts to talk to "uhub", at which
>> point it reboots, and the cycle repeats. No kernel panics, no crash
>> cores.
>>
>> The issue is not a biggie, other than the time it takes for me to
>> power down completely, wait for capacitors to discharge completely,
>> and then cold boot the machine. However, I thought I'd bring it to the
>> notice of the list.
>>
>
> Thank you for the report.  Could you please open a PR about this?

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=220370

Cheers.
-- 
Jonathan Chen 


Re: 11.1-BETA3 uhub probes on warm boot triggers endless reboot cycle

2017-06-29 Thread Glen Barber
On Fri, Jun 30, 2017 at 10:23:26AM +1200, Jonathan Chen wrote:
> Hi,
> 
> I've got a dual boot system at home, booting into Windows 10 for
> games. I've noticed that since I updated to 11.1-BETA3, a reboot from
> Windows into FreeBSD results in an endless reboot cycle. In order to
> reboot FreeBSD, I have to cold-boot. The endless reboot cycle appears
> to be triggered when the kernel attempts to talk to "uhub", at which
> point it reboots, and the cycle repeats. No kernel panics, no crash
> cores.
> 
> The issue is not a biggie, other than the time it takes for me to
> power down completely, wait for capacitors to discharge completely,
> and then cold boot the machine. However, I thought I'd bring it to the
> notice of the list.
> 

Thank you for the report.  Could you please open a PR about this?

Glen





11.1-BETA3 uhub probes on warm boot triggers endless reboot cycle

2017-06-29 Thread Jonathan Chen
Hi,

I've got a dual boot system at home, booting into Windows 10 for
games. I've noticed that since I updated to 11.1-BETA3, a reboot from
Windows into FreeBSD results in an endless reboot cycle. In order to
reboot FreeBSD, I have to cold-boot. The endless reboot cycle appears
to be triggered when the kernel attempts to talk to "uhub", at which
point it reboots, and the cycle repeats. No kernel panics, no crash
cores.

The issue is not a biggie, other than the time it takes for me to
power down completely, wait for capacitors to discharge completely,
and then cold boot the machine. However, I thought I'd bring it to the
notice of the list.

Cheers.
-- 
Jonathan Chen 


[Bug 213903] Kernel crashes from turnstile_broadcast (/usr/src/sys/kern/subr_turnstile.c:837)

2017-06-29 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213903

--- Comment #38 from Chris Collins  ---
(In reply to Mateusz Guzik from comment #35)

Just to let you know, my pfSense unit affected by this issue does not have an
Atom CPU.

It has a Celeron N3150 CPU.



Re: What is /dev/zfs?

2017-06-29 Thread Alan Somers
On Thu, Jun 29, 2017 at 8:28 AM, Patrick M. Hausen  wrote:
> Hi, folks
>
> any pointer to an explanation would be nice;
> there seems to be no zfs(4) manpage ...
>
> Reason for asking: I have a piece of software
> that uses 14,000 ioctl() calls on that device during
> one execution, and I'm wondering what it is trying
> to do.
>
> Thanks!
> Patrick

The zpool and zfs commands do everything through ioctls, and /dev/zfs
is the device node those ioctls are bound to.  You can't read from it
or write to it; all you can do with /dev/zfs is issue ZFS's custom ioctls.
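
To make that concrete, here is a minimal sketch (mine, not taken from the ZFS
sources) of what such a consumer looks like. It only demonstrates opening the
control node; real requests use a zfs_cmd_t and the ZFS_IOC_* command codes
from the ZFS headers and are normally issued through libzfs, so that part is
left as an illustrative comment rather than a verified call:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
	/* /dev/zfs is a pure control node: no read(2)/write(2), only ioctl(2). */
	int fd = open("/dev/zfs", O_RDWR);

	if (fd < 0) {
		perror("open(/dev/zfs)");
		return (1);
	}

	/*
	 * A real request would fill in a zfs_cmd_t and pass it with one of
	 * the ZFS_IOC_* command codes, roughly (illustrative, untested):
	 *
	 *	zfs_cmd_t zc = { 0 };
	 *	strlcpy(zc.zc_name, "tank", sizeof(zc.zc_name));
	 *	ioctl(fd, ZFS_IOC_POOL_STATS, &zc);
	 *
	 * The zpool/zfs utilities (via libzfs) issue thousands of such
	 * calls, which is where the 14,000 ioctls in the report come from.
	 */
	close(fd);
	return (0);
}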

-Alan


What is /dev/zfs?

2017-06-29 Thread Patrick M. Hausen
Hi, folks

any pointer to an explanation would be nice;
there seems to be no zfs(4) manpage ...

Reason for asking: I have a piece of software
that uses 14,000 ioctl() calls on that device during
one execution, and I'm wondering what it is trying
to do.

Thanks!
Patrick




Problems attaching disks in Azure under 11.1-BETA2

2017-06-29 Thread Pete French
I am trying to attach a brand new disk to an Azure VM, and
what I see is the disk attaching and detaching immediately, like this:

da2 at storvsc3 bus 0 scbus5 target 0 lun 0
da2:  Fixed Direct Access SPC-2 SCSI device
da2: 300.000MB/s transfers
da2: Command Queueing enabled
da2: 32768MB (67108864 512 byte sectors)
da2 at storvsc3 bus 0 scbus5 target 0 lun 0
da2:  detached
g_access(918): provider da2 has error
g_access(918): provider da2 has error
g_access(918): provider da2 has error
g_access(918): provider da2 has error
(da2:storvsc3:0:0:0): Periph destroyed

I have seen this before with an existing disk if I detach and reattach
with a new LUN number, but never with a brand new disk. The command I
am using to create this is below:

az vm disk attach --vm-name az-london01 \
--resource-group ticketswitch-london \
--caching ReadWrite --new --size-gb 32 \
--sku Standard_LRS \
--disk az-london01-tank01

The only thing I could find in bug reports which is similar is this:

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=212914

But those patches should have gone in ages ago. I will try and find a
workaround for now, but it's a bit puzzling.

-pete.


[Bug 213903] Kernel crashes from turnstile_broadcast (/usr/src/sys/kern/subr_turnstile.c:837)

2017-06-29 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213903

--- Comment #37 from Cassiano Peixoto  ---
(In reply to Mateusz Guzik from comment #35)
Mateusz,

Yes, I realized many changes have been made on 11-STABLE related to this issue. I
think it could be fixed as well. Anyway, I have a server running 11.1-BETA3
to confirm.

Regarding FreeBSD 10, I can test it for you; just provide a patch and let me
know.

Thanks.



[Bug 213903] Kernel crashes from turnstile_broadcast (/usr/src/sys/kern/subr_turnstile.c:837)

2017-06-29 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213903

--- Comment #36 from Franco Fichtner  ---
You can use https://github.com/opnsense/src/commit/6b79b52c.patch on stable/10;
it was verified working on 11.0.


Cheers,
Franco



[Bug 213903] Kernel crashes from turnstile_broadcast (/usr/src/sys/kern/subr_turnstile.c:837)

2017-06-29 Thread bugzilla-noreply
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=213903

--- Comment #35 from Mateusz Guzik  ---
Hi there, sorry for the late reply. This somehow fell through the cracks.

First of all, there is no kernel bug per se that I can see; rather, a bug in the
Atom CPU has started manifesting itself. There were several changes to the
affected code path on stable/11, and in particular the condition which triggered
the bug is gone. I don't have any reports, but I suspect stable/11 is now in the
clear. stable/10 will require a hack, but I'll need someone to test it, as I don't
have the hardware to reproduce it.



Re: redundant zfs pool, system traps and tonns of corrupted files

2017-06-29 Thread Alan Somers
On Thu, Jun 29, 2017 at 6:04 AM, Eugene M. Zheganin  wrote:
> Hi,
>
> On 29.06.2017 16:37, Eugene M. Zheganin wrote:
>>
>> Hi.
>>
>>
>> Say I have a server that traps more and more often (different panics:
>> zfs panics, GPFs, fatal traps while in kernel mode, etc.), and then I realize
>> it has tons of permanent errors on all of its pools that scrub is unable
>> to heal. Does this situation mean it's a bad-memory case? Unfortunately I
>> switched the hardware to an identical server prior to noticing the zpools
>> have errors, so I'm not sure when they appeared. Right now I'm about to run
>> a memtest on the old hardware.
>>
>>
>> So, whadda you say - does it point at the memory as the root problem ?

Certainly a good guess.

>>
>
> I'm also not quite getting the situation where I have errors at the vdev level,
> but 0 errors at the lower device layer (could someone please explain this):

ZFS checksums whole records at a time.  On RAIDZ, each record is
spread over multiple disks, usually the entire RAID stripe.  So when
ZFS detects a checksum error on a record stored in RAIDZ, it doesn't
know which individual disk was actually responsible.  Instead, it
blames the RAIDZ vdev.  That's why you have thousands of checksum
errors on your raidz vdevs.  The few checksum errors you have on
individual disks might have come from the labels or uberblocks, which
are not raided.
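
As a purely illustrative sketch (mine, not ZFS source code; the record struct
and the toy checksum are invented for the example), the attribution logic
amounts to something like this:

#include <stddef.h>
#include <stdint.h>

/* Invented types for illustration only -- not the real ZFS structures. */
struct record {
	const uint8_t	*data;		/* record reassembled from all raidz columns */
	size_t		 len;
	uint64_t	 stored_cksum;	/* checksum recorded when the block was written */
};

/* Toy stand-in checksum; ZFS actually uses fletcher4/sha256 over the record. */
static uint64_t
toy_cksum(const uint8_t *buf, size_t len)
{
	uint64_t sum = 0;

	for (size_t i = 0; i < len; i++)
		sum = sum * 31 + buf[i];
	return (sum);
}

int
verify_record(const struct record *r, uint64_t *vdev_cksum_errors)
{
	if (toy_cksum(r->data, r->len) == r->stored_cksum)
		return (0);
	/*
	 * The checksum covers the whole record, which was rebuilt from
	 * several child disks.  The mismatch alone cannot identify the bad
	 * column, so the error counter is charged to the raidz vdev rather
	 * than to any individual disk.
	 */
	(*vdev_cksum_errors)++;
	return (-1);
}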

-Alan


Re: redundant zfs pool, system traps and tonns of corrupted files

2017-06-29 Thread Eugene M. Zheganin

Hi,

On 29.06.2017 16:37, Eugene M. Zheganin wrote:

Hi.


Say I have a server that traps more and more often (different 
panics: zfs panics, GPFs, fatal traps while in kernel mode, etc.), and 
then I realize it has tons of permanent errors on all of its pools 
that scrub is unable to heal. Does this situation mean it's a bad-memory 
case? Unfortunately I switched the hardware to an identical 
server prior to noticing the zpools have errors, so I'm not sure when 
they appeared. Right now I'm about to run a memtest on the old hardware.



So, whadda you say - does it point at the memory as the root problem ?



I'm also not quite getting the situation where I have errors at the vdev 
level, but 0 errors at the lower device layer (could someone please 
explain this):


  pool: esx
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: resilvered 3,74G in 0h5m with 0 errors on Tue Dec 27 05:14:32 2016
config:

NAMESTATE READ WRITE CKSUM
esx ONLINE   0 0 99,0K
  raidz1-0  ONLINE   0 0  113K
da0 ONLINE   0 0 0
da1 ONLINE   0 0 0
da2 ONLINE   0 0 2
da3 ONLINE   0 0 0
da5 ONLINE   0 0 0
  raidz1-1  ONLINE   0 0 84,7K
da12ONLINE   0 0 0
da13ONLINE   0 0 1
da14ONLINE   0 0 0
da15ONLINE   0 0 0
da16ONLINE   0 0 0

errors: 25 data errors, use '-v' for a list

  pool: gamestop
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub in progress since Thu Jun 29 12:30:21 2017
1,67T scanned out of 4,58T at 1002M/s, 0h50m to go
0 repaired, 36,44% done
config:

NAMESTATE READ WRITE CKSUM
gamestopONLINE   0 0 1
  raidz1-0  ONLINE   0 0 2
da6 ONLINE   0 0 0
da7 ONLINE   0 0 0
da8 ONLINE   0 0 0
da9 ONLINE   0 0 0
da11ONLINE   0 0 0

errors: 10 data errors, use '-v' for a list

P.S. This is a FreeBSD 11.1-BETA2 r320056M (M stands for CTL_MAX_PORTS = 
1024), with ECC memory.


Thanks.
Eugene.


redundant zfs pool, system traps and tonns of corrupted files

2017-06-29 Thread Eugene M. Zheganin

Hi.


Say I have a server that traps more and more often (different 
panics: zfs panics, GPFs, fatal traps while in kernel mode, etc.), and 
then I realize it has tons of permanent errors on all of its pools 
that scrub is unable to heal. Does this situation mean it's a bad-memory 
case? Unfortunately I switched the hardware to an identical server 
prior to noticing the zpools have errors, so I'm not sure when they 
appeared. Right now I'm about to run a memtest on the old hardware.



So, whadda you say - does it point at the memory as the root problem ?


Thanks.

Eugene.



Re: lang/gcc* package builds vs. release/11.0.1/ and the future release/11.1.0 because of vm_ooffset_t and vm_pindex_t changes and how the lang/gcc* work

2017-06-29 Thread Mark Millard

On 2017-Jun-29, at 3:10 AM, Gerald Pfeifer  wrote:

> On 28 June 2017 22:38:52 GMT+08:00, Mark Millard wrote:
>> A primary test is building lang/gcc5-devel under release/11.0.1
>> and then using it under stable/11 or some draft of release/11.1.0 .
> 
> Thank you, Mark. Let me know how it went. In the meantime I'll prepare the 
> change for gcc5 itself.

I'm not currently set up to run more than head on
any of amd64, powerpc64, powerpc, aarch64, or armv6/7
(which are all I target). And I'm in the middle of
attempting a fairly large jump to head -r320458 on
those. (powerpc 32-bit and 64-bit just failed while
compiling libc++'s time-related code now that 32-bit has a
64-bit time_t, including in world32/lib32 contexts
for powerpc64.)

It will likely be a while before I manage to have an
11.x context (without losing my head contexts), much
less examples from all five of "my" TARGET_ARCHs. (Given past
wchar_t type-handling problems for gcc targeting
powerpc family members, for example, I think it should be checked.)
I'll have to find and set up disks: I do not even have
any handy/ready at the moment.

[I got into this area by being asked questions, not by
my direct use of release/11.0.1 , stable/11 , or a
draft of release/11.1.0 .]

I'll let you know when I have some test results but
others may get some before I do.

> . . .
>> Eventually most of the lang/gcc* 's will need whatever
>> technique is used.
> 
> Yes, agreed. Version 5 is most important since it's the default; then 6; 4.x 
> is for retro computing fans ;-), so 7 will then be next.

[In my normal/head environment I'm switching to lang/gcc7-devel
for gcc (from lang/gcc6 ) but I'm odd that way.]

===
Mark Millard
markmi at dsl-only.net



Re: lang/gcc* package builds vs. release/11.0.1/ and the future release/11.1.0 because of vm_ooffset_t and vm_pindex_t changes and how the lang/gcc* work

2017-06-29 Thread Gerald Pfeifer


On 28 June 2017 22:38:52 GMT+08:00, Mark Millard wrote:
>A primary test is building lang/gcc5-devel under release/11.0.1
>and then using it under stable/11 or some draft of release/11.1.0 .

Thank you, Mark. Let me know how it went. In the meantime I'll prepare the 
change for gcc5 itself.

>It looks like the lang/gcc5-devel build still creates and
>uses the headers that go in include-fixed/ but that they are
>removed from ${STAGEDIR}${TARGLIB} 's tree before installation
>or packaging.
>
>So, if I understand right, lang/gcc5-devel itself still does use
>the adjusted headers to produce its own materials, but when
>lang/gcc5-devel is used later it does not. Definitely
>something to test, since it is a mix overall.

I am not worried about that since that should not cause any binary 
incompatibilities (ABI). The problem we encountered was about source code and 
API in a wide sense of that term.

>Is some form of exp-like run needed that tries to force use
>of a release/11.0.1 built lang/gcc5-devel (-r444563) to build
>other things under, say, stable/11  or some draft of
>release/11.1.0 ? Is this odd combination even possible
>currently?

I am not aware of one, and while I was originally thinking of requesting an -exp 
run (after the GCC version update, which is dragging due to broken ports), time 
is not on our side and the change should be low risk.

> [alternative approach] But I guess that did not work out.

Not with my current level of connectivity and my notebook being a dead brick on 
top of that. And my preference is still to build them, but stow them away (unless 
explicitly requested to keep them).

>Eventually most of the lang/gcc* 's will need whatever
>technique is used.

Yes, agreed. Version 5 is most important since it's the default; then 6; 4.x is 
for retro computing fans ;-), so 7 will then be next.

Gerald


Re: EFI loader doesn't handle md_preload (md_image) correct?

2017-06-29 Thread Toomas Soome

> On 29 June 2017, at 11:24, Harry Schmalzbauer wrote:
> 
> Regarding Harry Schmalzbauer's message of 16.05.2017 18:26 (localtime):
>> B
> …
> The issue is that the current UEFI implementation uses a 64MB staging
> area for loading the kernel, modules and files. When boot is
> called, the relocation code puts the bits from the staging area into their
> final places. The BIOS version does not need such a staging area, which
> explains the difference.
> 
> I actually have a different implementation to address the same problem,
> but that's for the illumos case and will need some work to make it usable
> for FreeBSD; the idea is simple - allocate a staging area per
> loaded file and relocate the bits into place per component, not as one
> continuous large chunk (this would also allow avoiding the mines
> planted by Hyper-V ;), but right now there is no very quick real solution
> other than just building the EFI loader with a larger staging size.
 I see, thanks for the explanation.
 While not being aware of the purpose of the staging area nor the
 consequences of enlarging it, do you think it's feasible to increase it
 to 768MiB?
 
 At least now I have an idea about the issue and an explanation why
 reducing md_image to 100MB hasn't helped – still more than 64...
 
 Any quick hint on where to define the staging area size is highly appreciated,
 if there are no hard objections against a 768MB size.
 
 -harry
>>> The problem is that until UEFI Boot Services are switched off, the 
>>> memory is managed (and owned) by the firmware,
>> Hmm, I've been expecting something like that (owned by firmware) ;-)
>> 
>> So I'll stay with CSM for now, and will happily be an early adopter if
>> you need someone to try anything (-stable mergeable).
> 
> Toomas, thanks for your help so far! I'm just curious if there's news on
> this.
> Was there a decision made on whether the kernel should be used to relocate
> the MD image modules, or whether the loader should be extended to handle
> (x-)large staging areas?
> 
> I'd like to switch back to UEFI booting for various reasons (consistency has
> the highest priority), but can't, since it breaks md-rootfs with that
> machine (the others still run ESXi).
> 
> If there's anything to test, please let me know.
> 
> Thanks,
> 
> -harry

There has not been too much activity on this topic, apart from some 
discussions. But it is quite clear that this change has to be handled by the 
loader in the first place - as we need to get the data into a safe location; of 
course there is a secondary part as well - the kernel may need some work too, 
depending on how the md image(s) are to be handled in relation to the 
memory maps.
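
For reference, the 64MB staging size discussed above is a compile-time constant
in the EFI loader. The sketch below only shows the general shape of such a knob;
the EFI_STAGING_SIZE name, the 64MB default and the sys/boot/efi path are from
memory, not checked against the tree, so please verify against the loader
sources before relying on them:

#include <stdint.h>

/*
 * Sketch only -- assumed knob name and default, not verified against the
 * FreeBSD sources.  The staging size is baked in at build time (in MB),
 * so a local experiment would override it when rebuilding the EFI loader,
 * e.g. something along the lines of:
 *
 *	make -C /usr/src/sys/boot/efi CFLAGS+=-DEFI_STAGING_SIZE=768
 */
#ifndef EFI_STAGING_SIZE
#define	EFI_STAGING_SIZE	64		/* MB */
#endif

#define	STAGING_BYTES	((uint64_t)EFI_STAGING_SIZE * 1024 * 1024)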

rgds,
toomas

Re: EFI loader doesn't handle md_preload (md_image) correct?

2017-06-29 Thread Harry Schmalzbauer
 Regarding Harry Schmalzbauer's message of 16.05.2017 18:26 (localtime):
> B
…
 The issue is that the current UEFI implementation uses a 64MB staging
 area for loading the kernel, modules and files. When boot is
 called, the relocation code puts the bits from the staging area into their
 final places. The BIOS version does not need such a staging area, which
 explains the difference.

 I actually have a different implementation to address the same problem,
 but that's for the illumos case and will need some work to make it usable
 for FreeBSD; the idea is simple - allocate a staging area per
 loaded file and relocate the bits into place per component, not as one
 continuous large chunk (this would also allow avoiding the mines
 planted by Hyper-V ;), but right now there is no very quick real solution
 other than just building the EFI loader with a larger staging size.
>>> I see, thanks for the explanation.
>>> While not being aware of the purpose of the staging area nor the
>>> consequences of enlarging it, do you think it's feasible to increase it
>>> to 768MiB?
>>>
>>> At least now I have an idea about the issue and an explanation why
>>> reducing md_image to 100MB hasn't helped – still more than 64...
>>>
>>> Any quick hint on where to define the staging area size is highly appreciated,
>>> if there are no hard objections against a 768MB size.
>>>
>>> -harry
>> The problem is that until UEFI Boot Services are switched off, the 
>> memory is managed (and owned) by the firmware,
> Hmm, I've been expecting something like that (owned by firmware) ;-)
>
> So I'll stay with CSM for now, and will happily be an early adopter if
> you need someone to try anything (-stable mergeable).

Toomas, thanks for your help so far! I'm just curious if there's news on
this.
Was there a decision made on whether the kernel should be used to relocate
the MD image modules, or whether the loader should be extended to handle
(x-)large staging areas?

I'd like to switch back to UEFI booting for various reasons (consistency has
the highest priority), but can't, since it breaks md-rootfs with that
machine (the others still run ESXi).

If there's anything to test, please let me know.

Thanks,

-harry
