Re: Raspberry Pi 4 Model B

2021-09-25 Thread Peter J. Philipp
On Fri, Sep 24, 2021 at 03:07:12PM -0600, Andrew Klaus wrote:
> Have you read the Raspberry Pi instructions from
> https://ftp.openbsd.org/pub/OpenBSD/6.9/arm64/INSTALL.arm64 ?
> 
> What does your UART output show after it boots?
> 
> Andrew

Apologies, I lost the OP's mail, but this is what helped me a great deal when
installing OpenBSD on my rpi4b:

https://marc.info/?l=openbsd-misc&m=160708819221625&w=2

It has to be read carefully, but if you do that it will work.  Some things are
newer now, like the eeprom; I'd say just improvise, or install and then upgrade.

I must confess it didn't work for me the first day, but on the second day I got
it to work.  Upgrading I've always done manually, making sure the firmware
wasn't overwritten; I keep several backups of it on my disk.  It did happen once
that I bricked booting OpenBSD, but that was fixed by repairing the USB3 disk
from my laptop (amd64); thankfully the byte order is the same.
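
For what it's worth, my backup step before an upgrade is roughly the following
(just a sketch; sd0i as the MSDOS/firmware partition is an assumption, check
your own disklabel first):

  # mount the RPi firmware (MSDOS) partition read-only and tar it up
  # (sd0i is only an example; verify with "disklabel sd0")
  mount -t msdos -o ro /dev/sd0i /mnt
  tar czf /root/rpi4-firmware-backup.tgz -C /mnt .
  umount /mnt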

Best of luck!
-peter

> On Fri, Sep 24, 2021 at 2:45 PM Joseph Olatt  wrote:
> 
> > Hi,
> >
> > I'm trying to load OpenBSD on a Raspberry Pi 4 Model B and I'm not
> > having much luck. I've tried OpenBSD 6.9's miniroot69.img and the
> > install process does not go past the U-Boot prompt.
> >
> > I tried miniroot70.img from the snapshots directory, as of today, and
> > after displaying the following line:
> >
> >   bwfm0: failed loadfirmware of file
> > brcmfmac43455-sdio.raspberrypi,4-model-b.bin
> >
> > it reboots.
> >
> > Anybody else have success installing OpenBSD on this particular
> > hardware? Any insights or suggestions?
> >
> > Thank you.
> > joseph
> >
> >



Re: CARP Cold Spare

2021-09-25 Thread jslee
Hi,

You haven’t said anything about your hardware platform, but could you run one 
of them on non-UPS power? Then you’d still have one online when (*not* if) the 
UPS fails, and they’ll both normally be online for maintenance, syspatch, 
config changes, etc.

I do recall installing a pair of identical servers at the same time and having 
them both fail a year later within an hour of each other, both with seized CPU 
fans, so I am somewhat sympathetic to your idea. But I think the practical cost 
of maintenance may be rather high.

John


On Sat, 25 Sep 2021, at 08:13, Don Tek wrote:
> Would there be any ‘problem’ with configuring a 2-machine CARP setup 
> and then just keeping one machine powered-off until needed?
>
> I realize this defeats live failover, but this is not a requirement for 
> my customer.
>
> I just want them to be able to, in the event of a primary machine 
> failure, power-on the secondary and have it take over.  Logic here is 
> to otherwise not have the secondary sucking power off the UPS’s in the 
> event of a power failure, or in general.
>
> Legit?



Re: Raspberry Pi 4 Model B

2021-09-25 Thread Nenhum_de_Nos
On Fri, September 24, 2021 16:51, Joseph Olatt wrote:
> Hi,
>
> I'm trying to load OpenBSD on a Raspberry Pi 4 Model B and I'm not
> having much luck. I've tried OpenBSD 6.9's miniroot69.img and the
> install process does not go past the U-Boot prompt.
>
> I tried miniroot70.img from the snapshots directory, as of today, and
> after displaying the following line:
>
>   bwfm0: failed loadfirmware of file
> brcmfmac43455-sdio.raspberrypi,4-model-b.bin
>
> it reboots.
>
> Anybody else have success installing OpenBSD on this particular
> hardware? Any insights or suggestions?
>
> Thank you.
> joseph

Hi Joseph,

I followed https://ftp.openbsd.org/pub/OpenBSD/6.9/arm64/INSTALL.arm64 and
have had two Raspberry Pis (3B and 4B) running fine since 6.8:

OpenBSD 6.9 (GENERIC.MP) #2: Tue Aug 10 08:28:07 MDT 2021

r...@syspatch-69-arm64.openbsd.org:/usr/src/sys/arch/arm64/compile/GENERIC.MP
real mem  = 479776 (3920MB)
avail mem = 3952427008 (3769MB)
random: good seed from bootblocks
mainbus0 at root: Raspberry Pi 4 Model B Rev 1.1
cpu0 at mainbus0 mpidr 0: ARM Cortex-A72 r0p3
cpu0: 48KB 64b/line 3-way L1 PIPT I-cache, 32KB 64b/line 2-way L1 D-cache
cpu0: 1024KB 64b/line 16-way L2 cache
cpu0: CRC32,ASID16
cpu1 at mainbus0 mpidr 1: ARM Cortex-A72 r0p3
cpu1: 48KB 64b/line 3-way L1 PIPT I-cache, 32KB 64b/line 2-way L1 D-cache
cpu1: 1024KB 64b/line 16-way L2 cache
cpu1: CRC32,ASID16
cpu2 at mainbus0 mpidr 2: ARM Cortex-A72 r0p3
cpu2: 48KB 64b/line 3-way L1 PIPT I-cache, 32KB 64b/line 2-way L1 D-cache
cpu2: 1024KB 64b/line 16-way L2 cache
cpu2: CRC32,ASID16
cpu3 at mainbus0 mpidr 3: ARM Cortex-A72 r0p3
cpu3: 48KB 64b/line 3-way L1 PIPT I-cache, 32KB 64b/line 2-way L1 D-cache
cpu3: 1024KB 64b/line 16-way L2 cache
cpu3: CRC32,ASID16
efi0 at mainbus0: UEFI 2.8
efi0: Das U-Boot rev 0x20210100
apm0 at mainbus0
simplefb0 at mainbus0: 640x480, 32bpp
wsdisplay0 at simplefb0 mux 1
wsdisplay0: screen 0-5 added (std, vt100 emulation)
"system" at mainbus0 not configured
"axi" at mainbus0 not configured
simplebus0 at mainbus0: "soc"
bcmclock0 at simplebus0
bcmmbox0 at simplebus0
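
(For anyone following along: the basic preparation step from INSTALL.arm64 is
just writing the miniroot to the SD card from another machine, along the lines
of the command below; sd1 is only an example device, check dmesg for yours.)

  # write the install image to the raw SD card device (this wipes the card)
  dd if=miniroot69.img of=/dev/rsd1c bs=1m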

matheus

-- 
"We will call you Cygnus,
the God of balance you shall be."



Re: CARP Cold Spare

2021-09-25 Thread leonard
What is the power draw? I use a 1500 VA APC Back-UPS with 6 outlets on UPS and 5
on surge protection. As long as your total draw is less than 1200 VA, for
< $200 Canadian you have a cheap, simple solution. Just put one on the UPS side
and the other on the surge-suppressor side. Or buy 2.

leonard@on the road

 Original message 
From: Don Tek 
Date: 2021-09-25 11:40 (GMT-05:00)
To: jslee 
Cc: misc@openbsd.org
Subject: Re: CARP Cold Spare

I'm not sure why the hardware matters, but the two machines are a couple of HP
1U Gen 8 Xeon servers.  Suffice it to say, they are identical and have hardware
configurations supported by OpenBSD.

Of course I _could_ run one off direct power, but it would be a terrible idea.
The location is notorious for power surges, blips that are enough to reboot
servers, and several-second brown-outs.  So leaving it off the UPS is just
asking for damage.

They experience multi-hour blackouts what seems like once a month; this is
where the desire to limit the draw on the UPSes comes from: to ensure we make
it through without having to shut down.

Remote access is of primary concern, both for me for support, since I'm
geographically far enough away that being on-site is not feasible, and for the
customer, who just wants to stay home and work on the systems in the office.

Configurations on the servers almost never change (simple firewall), so besides
having to run a quick syspatch and reboot at the time of failover, I don't see
maintenance being so bad.  I also keep config files backed up centrally for
quick restore to the running box.

My primary concern here is whether CARP / pfsync will have issues with the one
machine being down the majority of the time.  Based on the FAQ, I think not,
but I have no practical experience.

> On Sep 25, 2021, at 3:00 AM, jslee  wrote:
> 
> Hi,
> 
> You haven’t said anything about your hardware platform, but could you run one
> of them on non-UPS power? Then you’d still have one online when (*not* if)
> the UPS fails, and also they’ll both normally be online for maintenance,
> syspatch, config changes etc
> 
> I do recall installing a pair of identical servers at the same time and
> having them both fail a year later within an hour of each other, both with
> seized CPU fans, so I am somewhat sympathetic to your idea. But I think the
> practical cost of maintenance may be rather high
> 
> John
> 
> 
>> On Sat, 25 Sep 2021, at 08:13, Don Tek wrote:
>> Would there be any ‘problem’ with configuring a 2-machine CARP setup
>> and then just keeping one machine powered-off until needed?
>> 
>> I realize this defeats live failover, but this is not a requirement for
>> my customer.
>> 
>> I just want them to be able to, in the event of a primary machine
>> failure, power-on the secondary and have it take over.  Logic here is
>> to otherwise not have the secondary sucking power off the UPS’s in the
>> event of a power failure, or in general.
>> 
>> Legit?

Re: CARP Cold Spare

2021-09-25 Thread Don Tek
I'm not sure why the hardware matters, but the two machines are a couple of HP
1U Gen 8 Xeon servers.  Suffice it to say, they are identical and have hardware
configurations supported by OpenBSD.

Of course I _could_ run one off direct power, but it would be a terrible idea.
The location is notorious for power surges, blips that are enough to reboot
servers, and several-second brown-outs.  So leaving it off the UPS is just
asking for damage.

They experience multi-hour blackouts what seems like once a month; this is
where the desire to limit the draw on the UPSes comes from: to ensure we make
it through without having to shut down.

Remote access is of primary concern, both for me for support, since I'm
geographically far enough away that being on-site is not feasible, and for the
customer, who just wants to stay home and work on the systems in the office.

Configurations on the servers almost never change (simple firewall), so besides
having to run a quick syspatch and reboot at the time of failover, I don't see
maintenance being so bad.  I also keep config files backed up centrally for
quick restore to the running box.

My primary concern here is whether CARP / pfsync will have issues with the one
machine being down the majority of the time.  Based on the FAQ, I think not,
but I have no practical experience.
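
For reference, what I have in mind is just the standard hostname.if(5) CARP
setup on both boxes; the address, vhid, password, and interface name below are
placeholders for illustration, not my real config:

  # /etc/hostname.carp0 on the primary (the cold spare would carry the same
  # line, possibly with a higher advskew such as 100)
  inet 192.0.2.1 255.255.255.0 192.0.2.255 vhid 1 carpdev em0 pass examplepass advskew 0

  # With the spare normally powered off there is nothing to sync states to,
  # so pfsync (hostname.pfsync0 with "syncdev em1") seems optional here.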

> On Sep 25, 2021, at 3:00 AM, jslee  wrote:
> 
> Hi,
> 
> You haven’t said anything about your hardware platform, but could you run one 
> of them on non-UPS power? Then you’d still have one online when (*not* if) 
> the UPS fails, and also they’ll both normally be online for maintenance, 
> syspatch, config changes etc
> 
> I do recall installing a pair of identical servers at the same time and 
> having them both fail a year later within an hour of each other, both with 
> seized CPU fans, so I am somewhat sympathetic to your idea. But I think the 
> practical cost of maintenance may be rather high
> 
> John
> 
> 
>> On Sat, 25 Sep 2021, at 08:13, Don Tek wrote:
>> Would there be any ‘problem’ with configuring a 2-machine CARP setup 
>> and then just keeping one machine powered-off until needed?
>> 
>> I realize this defeats live failover, but this is not a requirement for 
>> my customer.
>> 
>> I just want them to be able to, in the event of a primary machine 
>> failure, power-on the secondary and have it take over.  Logic here is 
>> to otherwise not have the secondary sucking power off the UPS’s in the 
>> event of a power failure, or in general.
>> 
>> Legit?



Re: CARP Cold Spare

2021-09-25 Thread Don Tek
Unsure what the power draw is on these yet; they just got them.  They have
redundant 450W Platinum power supplies.

The "new" servers are completely overkill for the application, but this is a
work-with-what's-available situation.  They got these free from a friend and
don't want to spend on new hardware; otherwise I'd just get them something
"smaller" and supremely efficient.

We already have redundant UPSes.  With the previous 3 servers (the main VM
servers and one now-dead firewall) attached to battery-backed power, we were
getting roughly 3.5 hours of runtime before UPS drop.  Sometimes minutes can
matter with these blackouts.

I also like that with the one machine not running, it isn't incurring any wear
and tear just for the sake of hot failover.

I've got no talent on site, so in the event of a hardware or file-system
failure, it's an extremely tedious experience for me to walk someone there
through swapping hardware, rewiring, or reinstalling an operating system.  So
I'm looking for that turn-this-one-off-and-this-one-on recovery option, without
the need to fiddle with any hardware.  If this works the way I want, the only
single point of failure will be their cable modem, and Comcast can handle
replacing that.

Option B is to just have identical configurations and have them swap the
network wiring as part of the failover.  That would still prevent the days of
downtime we're incurring now due to this failure, but it's not as simple for
them as the CARP solution, assuming that turns out to be solid.

> On Sep 25, 2021, at 10:25 AM, leonard  wrote:
> 
> 
> What is the power draw? I use a 1500 VA apc backups with 6 outlets on ups and 
> 5 on surge protection. As long as your total draw is less than 1200 VA, for < 
> $200 canadian you have a cheap simple solution. Just put on on the ups side 
> and the other on the surge suppressor side. Or buy 2.
> 
> 
> 
> leonard@on the road
> 
> 
>  Original message 
> From: Don Tek 
> Date: 2021-09-25 11:40 (GMT-05:00)
> To: jslee 
> Cc: misc@openbsd.org
> Subject: Re: CARP Cold Spare
> 
> I'm not sure why the hardware matters, but the two machines are a couple HP 
> 1U Gen 8 Xeon servers.  Suffice to say, they are identical and have supported 
> hardware configurations for OpenBSD.
> 
> Of course I _could_ run one off direct power, but it would be a terrible 
> idea.  The location is notorious for power surges, blips that are enough to 
> reboot servers and several-second brown-outs.  So, not connected to the UPS 
> is just asking for damages.
> 
> They experience multi-hour blackouts what seems like once a month; this is 
> where the desire to limit the draw on the UPS's comes from.  To ensure we 
> make it through without having to shut down.
> 
> Remote access is of primary concern, both for me for support, since I'm 
> geographically far enough away that being on-site is not feasible, and to the 
> customer, who just wants to stay home and work on systems in the office.
> 
> Configurations on the servers almost never change (simple firwall), so 
> besides having to run a quick syspatch and reboot once at time of failover, I 
> don't see maintenance being so bad.  I keep config files backed-up otherwise 
> centrally for quick restore to the running box as well.
> 
> My primary concern here is if CARP / pfsync will have issues with the one 
> machine being down a majority of the time.  Based on the FAQ, I think not, 
> but have no practical experience.
> 
> > On Sep 25, 2021, at 3:00 AM, jslee  wrote:
> > 
> > Hi,
> > 
> > You haven’t said anything about your hardware platform, but could you run 
> > one of them on non-UPS power? Then you’d still have one online when (*not* 
> > if) the UPS fails, and also they’ll both normally be online for 
> > maintenance, syspatch, config changes etc
> > 
> > I do recall installing a pair of identical servers at the same time and 
> > having them both fail a year later within an hour of each other, both with 
> > seized CPU fans, so I am somewhat sympathetic to your idea. But I think the 
> > practical cost of maintenance may be rather high
> > 
> > John
> > 
> > 
> >> On Sat, 25 Sep 2021, at 08:13, Don Tek wrote:
> >> Would there be any ‘problem’ with configuring a 2-machine CARP setup 
> >> and then just keeping one machine powered-off until needed?
> >> 
> >> I realize this defeats live failover, but this is not a requirement for 
> >> my customer.
> >> 
> >> I just want them to be able to, in the event of a primary machine 
> >> failure, power-on the secondary and have it take over.  Logic here is 
> >> to otherwise not have the secondary sucking power off the UPS’s in the 
> >> event of a power failure, or in general.
> >> 
> >> Legit?
> 


Libsecret + gnome-keyring + qtkeychain broken (was nextcloudclient fails to work with gnome-keyring)

2021-09-25 Thread Rubén Llorente
Hello again.

I reproduced the issue once again:

I performed a fresh install of -current and installed Gnome.
Starting Gnome via GDM brings up all the services necessary
for storing secrets (dbus and gnome-keyring) out-of-the-box.

Nextcloud Client is still incapable of leveraging libsecret
or Gnome Keyring for storing secrets in a keyring in this
configuration.

My conclusion, unless somebody offers an alternative, is
that either Libsecret, Gnome Keyring, qtkeychain or
Nextcloud Client is broken.

If it is one of the first three, that port is broken,
since it cannot perform the primary function expected
of it.

Since interest in getting this working is low, I am
staying with my workaround of using kwalletd5 instead.
If anybody is willing to debug this problem further, I
am willing to provide more information.
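
If anyone wants to check libsecret independently of Nextcloud, something along
these lines should succeed on a working setup. This assumes the secret-tool
utility shipped with libsecret is installed and that a D-Bus session plus
gnome-keyring are running:

  # store a throwaway secret through libsecret, then read it back
  printf 'hunter2' | secret-tool store --label='libsecret test' test-attr test-value
  secret-tool lookup test-attr test-value

If the lookup prints the secret back, the libsecret/gnome-keyring side is fine
and the problem is further up the stack.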

Rubén Llorente  wrote:
> I have reproduced the issue in Fluxbox.
> 
> ~/.xsession:
> 
> /usr/local/bin/startfluxbox
> 
> 
> ~/.fluxbox/startup:
> 
> if [ -z "$DBUS_SESSION_BUS_ADDRESS" ]; then
>eval $(dbus-launch --sh-syntax --exit-with-session)
> fi
> 
> exec fluxbox
> 
> 
> Under a similar configuration on Linux, Nextcloud brings up
> a prompt for creating a password database on first launch
> and stores the credentials in a keyring. On OpenBSD this
> does not happen.
> 
> Some worrying logs from Nextcloud:
> 
> 2021-09-25 00:52:25:016 [ info nextcloud.sync.credentials.webflow ]:Get 
> QNAM
> 2021-09-25 00:52:25:667 [ info nextcloud.sync.credentials.webflow ]:Fetch 
> from keychain!
> 2021-09-25 00:52:25:985 [ info nextcloud.sync.credentials.keychainchunk ]:
>   Backend unavailable (yet?) Retrying in a few seconds. "Unknown error"
> 2021-09-25 00:52:36:312 [ warning nextcloud.sync.credentials.keychainchunk ]: 
>   Unable to read 
> "someuser_clientCertificatePEM:https://somedomain.invalid/:0" chunk "0" 
> "Unknown error"
> 2021-09-25 00:52:36:313 [ info nextcloud.sync.credentials.keychainchunk ]:
>   Backend unavailable (yet?) Retrying in a few seconds. "Unknown error"
> 2021-09-25 00:52:46:491 [ warning nextcloud.sync.credentials.keychainchunk ]: 
>   Unable to read "someuser_clientKeyPEM:https://somedomain.invalid/:0" chunk 
> "0" "Unknown error"
> 2021-09-25 00:52:46:491 [ warning nextcloud.sync.credentials.webflow ]: 
> Unable to read client key "Unknown error"
> 2021-09-25 00:52:46:492 [ info nextcloud.sync.credentials.keychainchunk ]:
>   Backend unavailable (yet?) Retrying in a few seconds. "Unknown error"
> 2021-09-25 00:52:56:186 [ warning nextcloud.sync.credentials.keychainchunk ]: 
>   Unable to read 
> "someuser_clientCaCertificatePEM0:https://somedomain.invalid/:0" chunk "0" 
> "Unknown error"
> 
> "Unknown Error" is returned by qtkeychain when unable to operate its
> backend.
> 
> Either there is a problem with libsecret or the PEBKAC level is
> astronomical at this point.
> 
-- 
OpenPGP Key Fingerprint:
543F EB89 7FDE 8E33 AFF7 E794 E4AB 4807 58F7 6C76