[ITP] python-license-expression and cygport PoC patch (was: calm: SPDX licence list data update please)

2024-05-30 Thread Brian Inglis via Cygwin-apps

On 2024-05-28 08:37, Brian Inglis via Cygwin-apps wrote:

On 2024-05-27 15:15, Jon Turney via Cygwin-apps wrote:

On 24/05/2024 17:08, Brian Inglis via Cygwin-apps wrote:
Can we please get the SPDX licence list data updated in calm to 3.24 sometime 
if possible as the licences complained about below have been in 


This is not quite straightforward, as the system python on sourceware is 
currently python3.6, and the last supported nexB/license-expression on that is 
30.0.0, and moving to a later one has some wrinkles, since various pieces of 
interconnected stuff aren't venv'd (yet?).



If not, perhaps I could be of some help if I knew requirements?


So, there aren't any requirements here except "validate the SPDX license 
expression to detect maintainer mistakes and typos".


It looks like using that python module might have been a mistake.

I'm not sure why it needs to contain its own version of the license data; 
ideally we'd have something that reads the official SPDX data (rate-limited to 
once per day or something). It looks like this might be possible by feeding 
our own license list into the module rather than using its built-in one, but 
one could hope for this to be built in already...
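Something along those lines could be sketched without the module at all -- a 
minimal stdlib-only illustration of validating identifiers against the official 
SPDX list (the licenses.json URL is the published SPDX data file; the tokenizer 
is deliberately naive and ignores the full expression grammar and exception 
clauses, so this is an assumption-laden sketch, not a replacement for the module):

```python
import json
import re
import urllib.request

# Official SPDX licence list data; in practice fetch at most once per day
# (cache the file locally and check its age before re-downloading).
SPDX_LICENSES_URL = "https://spdx.org/licenses/licenses.json"

def load_spdx_ids(path=None):
    """Return the set of known SPDX licence ids from a cached file or the web."""
    if path:
        with open(path) as f:
            data = json.load(f)
    else:
        with urllib.request.urlopen(SPDX_LICENSES_URL) as f:
            data = json.load(f)
    return {lic["licenseId"] for lic in data["licenses"]}

def unknown_ids(expression, known):
    """Naive check: strip operator keywords and report unknown ids."""
    tokens = re.split(r"[()\s]+", expression)
    ops = {"AND", "OR", "WITH", "and", "or", "with", ""}
    return [t for t in tokens
            if t not in ops and t not in known and t.rstrip("+") not in known]

# Stand-in for load_spdx_ids(), to keep the example offline:
known = {"MIT", "Apache-2.0", "GPL-2.0-only"}
print(unknown_ids("MIT AND (Apache-2.0 OR BSD-3-Clause)", known))
# -> ['BSD-3-Clause']
```

A real implementation would still want the module's parser for operator
precedence and normalization; this only shows that the data source itself is
easy to swap.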


Would we or should we also allow specifying LICENSE_URI (as I have been doing), 
like PEP 639 license-files, with defaults searched as suggested:


 "LICEN[CS]E*", "COPYING*", "NOTICE*", "AUTHORS*"?

where globs and source paths are allowed as usual in cygport files, and 
directories may match these paths, implicitly including file entries; no 
file *contents* are checked, unless we see a need in future to generate and 
validate licences.
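The default search described above could be sketched like this -- a hypothetical 
illustration only (find_license_files and its behaviour are made up for the 
example; nothing here is existing cygport code):

```python
import fnmatch
import os

# Default licence-file patterns, per the PEP 639-style suggestion above
DEFAULT_LICENSE_GLOBS = ["LICEN[CS]E*", "COPYING*", "NOTICE*", "AUTHORS*"]

def find_license_files(srcdir):
    """Return paths under srcdir whose basename matches a default pattern.

    Directories are matched too, per the suggestion that directories may
    match these paths and implicitly include their file entries.
    """
    matches = []
    for root, dirs, files in os.walk(srcdir):
        for name in files + dirs:
            if any(fnmatch.fnmatch(name, pat) for pat in DEFAULT_LICENSE_GLOBS):
                matches.append(os.path.join(root, name))
    return sorted(matches)
```

As in the proposal, only names are matched; file contents are never inspected.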


I found the github/nexB/license-expression Python package, which does SPDX 
licence checks with current data; it is developed by the same team that does 
the SPDX toolkit, working with Fedora folks et al.


Successful attempt to package Python license-expression (without tests):

https://cygwin.com/cgi-bin2/jobs.cgi?id=8210

cygport attached and at:

https://cygwin.com/cgit/cygwin-packages/playground/commit/?id=3626386b10c967f780547d1703ad23bd50f6331a

log at:

https://github.com/cygwin/scallywag/actions/runs/9293093201

The package installs and runs using the PoC spdx-license-expression.py script 
(attached), hooked into /usr/share/cygport/lib/pkg_pkg.cygpart by the attached 
license hint addition patch.


I also ran a test of the Python script and module against all package source 
cygport files declaring licences which I maintain or have ever looked at, 
including a git/cygwin-packages/*.cygport download from 2023-02; the results 
are in the attached log.
I also attempted to trap the exceptions in the script, but that does not seem 
to work in any documented, obvious manner, and I do not know enough Python to 
address this.


If someone else who knows Python cared to adopt and improve this in a more 
normal manner, and to incorporate it more smoothly into cygport, we would all 
appreciate that.

Alternatively, some candid comments and frank feedback might allow me to do so! 
;^>

--
Take care. Thanks, Brian Inglis  Calgary, Alberta, Canada

La perfection est atteinte   Perfection is achieved
non pas lorsqu'il n'y a plus rien à ajouter  not when there is no more to add
mais lorsqu'il n'y a plus rien à retirer but when there is no more to cut
-- Antoine de Saint-Exupéry

#!/usr/bin/cygport
# python-license-expression.cygport - Python license-expression Cygwin
# package build control script definitions

inherit python-wheel

NAME=python-license-expression
VERSION=30.3.0
RELEASE=1

BASE=${NAME#python-}

CATEGORY=Python
SUMMARY="Python license expression utility library"
DESCRIPTION="Python utility library to parse, compare, simplify and normalize
license expressions (such as SPDX license expressions)."

ARCH=noarch

LICENSE=Apache-2.0
LICENSE_SPDX="SPDX-License-Identifier: $LICENSE"
# SPDX-License-Identifier: Apache-2.0
LICENSE_URI="NOTICE apache-2.0.LICENSE"

DOCS="
license-expression.ABOUT
AUTHORS.rst CHANGELOG.rst CODE_OF_CONDUCT.rst README.rst
$LICENSE_URI
"

#!/usr/bin/python
"""spdx-license-expression.py - validate SPDX licence expression

Usage: spdx-license-expression.py <SPDX-licence-expression>

Author: Brian Inglis
"""

import sys

from license_expression import get_spdx_licensing


def main(args):
    if len(args) != 1:
        print("usage: " + sys.argv[0] + " <SPDX-licence-expression>",
              file=sys.stderr)
        return 1

    licensing = get_spdx_licensing()
    expression = args[0]
    errs = licensing.validate(expression).errors
    # ExpressionInfo(
    #     original_expression='... and MIT and GPL-2.0+',
    #     normalized_expression=None,
    #     errors=['Unknown license key(s): ...'],
    #     invalid_symbols=['...']
    # )
    for e in errs:
        print(e, file=sys.stderr)
    return len(errs)


if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))

[Bug 2067633] Re: `lxc` commands returning `Error: Failed to begin transaction: context deadline exceeded`

2024-05-30 Thread Brian Murray
** Tags added: cuqa-manual-testing

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2067633

Title:
  `lxc` commands returning `Error: Failed to begin transaction: context
  deadline exceeded`

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2067633/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Fwd: newer version of mingw64-*-win-iconv ?

2024-05-30 Thread Brian Inglis via Cygwin-apps

On 2024-05-29 02:53, Bruno Haible via Cygwin wrote:

Brian Inglis wrote:

Ran playground local and CI builds of these packages at v0.0.8 successfully:

https://cygwin.com/cgi-bin2/jobs.cgi?srcpkg=mingw64-x86_64-win-iconv
and   https://cygwin.com/cgi-bin2/jobs.cgi?srcpkg=mingw64-i686-win-iconv



Do we really need the fix at git HEAD to add UCS-2-INTERNAL encoding?


v0.0.8 is good enough. Hardly anyone needs UCS-2-INTERNAL. But many programs
need a working ASCII encoding, which is fixed in v0.0.8.


NMU updates of the above packages would be appreciated, based on those
mingw...-win-iconv playground branch updates, if someone could help, please?

--
Take care. Thanks, Brian Inglis  Calgary, Alberta, Canada



Re: newer version of mingw64-*-win-iconv ?

2024-05-30 Thread Brian Inglis via Cygwin

On 2024-05-29 02:53, Bruno Haible via Cygwin wrote:

Brian Inglis wrote:

Ran playground local and CI builds of these packages at v0.0.8 successfully:

https://cygwin.com/cgi-bin2/jobs.cgi?srcpkg=mingw64-x86_64-win-iconv
and   https://cygwin.com/cgi-bin2/jobs.cgi?srcpkg=mingw64-i686-win-iconv



Do we really need the fix at git HEAD to add UCS-2-INTERNAL encoding?


v0.0.8 is good enough. Hardly anyone needs UCS-2-INTERNAL. But many programs
need a working ASCII encoding, which is fixed in v0.0.8.


NMU updates of the above packages would be appreciated, based on those
mingw...-win-iconv playground branch updates, if someone could help, please?

--
Take care. Thanks, Brian Inglis  Calgary, Alberta, Canada


--
Problem reports:  https://cygwin.com/problems.html
FAQ:  https://cygwin.com/faq/
Documentation:https://cygwin.com/docs.html
Unsubscribe info: https://cygwin.com/ml/#unsubscribe-simple


[Bug 2009885] Update Released

2024-05-30 Thread Brian Murray
The verification of the Stable Release Update for timeshift has
completed successfully and the package is now being released to
-updates.  Subsequently, the Ubuntu Stable Release Updates Team is being
unsubscribed and will not receive messages about this bug report.  In
the event that you encounter a regression using the package from
-updates please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2009885

Title:
  Timeshift 21.09.1-1 broken after Rsync upgrade  to
  3.2.7-0ubuntu0.22.04.2

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/timeshift/+bug/2009885/+subscriptions



Re: [ansible-project] Multi-conditional when statements

2024-05-30 Thread Brian Coca
First of all, both are valid ways of writing conditionals.

From the execution standpoint, the main difference is that the list
version will be evaluated in order, one at a time, by Ansible passing
each item to Jinja, while the other will be passed as one item
into Jinja. This creates a minor difference in efficiency depending on the
number of conditions and the likelihood of failure, but for most cases
(fewer than 100 conditionals) I would consider it negligible.

From a practical standpoint, the 2nd form is easier to put into a
variable and to compose 'ANDed' conditions by adding to a list; you only
need to ensure each condition's correctness, not the aggregated whole.
The first form, on the other hand, also supports 'OR' conditions.

In the end I would consider it a preference issue; most Ansible
users are used to the 2nd form and might get confused by the first,
but that is only a consideration when/if sharing the content.
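For readers unfamiliar with the two forms being compared, they might look like 
this -- a hypothetical task pair (the module, message, and condition names are 
made up for illustration):

```yaml
# List form: each item is evaluated in order, one at a time; all must be
# true (implicit AND), and the list is easy to build up in a variable.
- ansible.builtin.debug:
    msg: "list form"
  when:
    - ansible_facts['os_family'] == "Debian"
    - my_feature_enabled | bool

# Single-expression form: passed to Jinja as one item; supports "or" too.
- ansible.builtin.debug:
    msg: "expression form"
  when: ansible_facts['os_family'] == "Debian" or my_feature_enabled | bool
```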

-- 
--
Brian Coca (he/him/yo)

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/CACVha7fDShTB-g-68uo%3DnMAqZXOVM2Oq3i%2BWUsGgM-nGXSNiRg%40mail.gmail.com.


[slurm-users] Re: slurmdbd not connecting to mysql (mariadb)

2024-05-30 Thread Brian Andrus via slurm-users

That SIGTERM message means something is telling slurmdbd to quit.

Check your cron jobs, maintenance scripts, etc. Slurmdbd is being told 
to shutdown. If you are running in the foreground, a ^C does that. If 
you run a kill or killall on it, you will get that same message.


Brian Andrus

On 5/30/2024 6:53 AM, Radhouane Aniba via slurm-users wrote:
Yes, I can connect to my database using mysql --user=slurm 
--password=slurmdbpass slurm_acct_db, and after checking the firewall 
question, there is no firewall blocking mysql.


Also, here is the output of slurmdbd -D -vvv (note I can only run this 
as sudo):


sudo slurmdbd -D -vvv
slurmdbd: debug: Log file re-opened
slurmdbd: debug: Munge authentication plugin loaded
slurmdbd: debug2: mysql_connect() called for db slurm_acct_db
slurmdbd: debug2: Attempting to connect to localhost:3306
slurmdbd: debug2: innodb_buffer_pool_size: 134217728
slurmdbd: debug2: innodb_log_file_size: 50331648
slurmdbd: debug2: innodb_lock_wait_timeout: 50
slurmdbd: error: Database settings not recommended values: 
innodb_buffer_pool_size innodb_lock_wait_timeout

slurmdbd: Accounting storage MYSQL plugin loaded
slurmdbd: debug2: ArchiveDir = /tmp
slurmdbd: debug2: ArchiveScript = (null)
slurmdbd: debug2: AuthAltTypes = (null)
slurmdbd: debug2: AuthInfo = (null)
slurmdbd: debug2: AuthType = auth/munge
slurmdbd: debug2: CommitDelay = 0
slurmdbd: debug2: DbdAddr = localhost
slurmdbd: debug2: DbdBackupHost = (null)
slurmdbd: debug2: DbdHost = hannibal-hn
slurmdbd: debug2: DbdPort = 7032
slurmdbd: debug2: DebugFlags = (null)
slurmdbd: debug2: DebugLevel = 6
slurmdbd: debug2: DebugLevelSyslog = 10
slurmdbd: debug2: DefaultQOS = (null)
slurmdbd: debug2: LogFile = /var/log/slurmdbd.log
slurmdbd: debug2: MessageTimeout = 100
slurmdbd: debug2: Parameters = (null)
slurmdbd: debug2: PidFile = /run/slurmdbd.pid
slurmdbd: debug2: PluginDir = /usr/lib/x86_64-linux-gnu/slurm-wlm
slurmdbd: debug2: PrivateData = none
slurmdbd: debug2: PurgeEventAfter = 1 months*
slurmdbd: debug2: PurgeJobAfter = 12 months*
slurmdbd: debug2: PurgeResvAfter = 1 months*
slurmdbd: debug2: PurgeStepAfter = 1 months
slurmdbd: debug2: PurgeSuspendAfter = 1 months
slurmdbd: debug2: PurgeTXNAfter = 12 months
slurmdbd: debug2: PurgeUsageAfter = 24 months
slurmdbd: debug2: SlurmUser = root(0)
slurmdbd: debug2: StorageBackupHost = (null)
slurmdbd: debug2: StorageHost = localhost
slurmdbd: debug2: StorageLoc = slurm_acct_db
slurmdbd: debug2: StoragePort = 3306
slurmdbd: debug2: StorageType = accounting_storage/mysql
slurmdbd: debug2: StorageUser = slurm
slurmdbd: debug2: TCPTimeout = 2
slurmdbd: debug2: TrackWCKey = 0
slurmdbd: debug2: TrackSlurmctldDown= 0
slurmdbd: debug2: acct_storage_p_get_connection: request new connection 1
slurmdbd: debug2: Attempting to connect to localhost:3306
slurmdbd: slurmdbd version 19.05.5 started
slurmdbd: debug2: running rollup at Thu May 30 13:50:08 2024
slurmdbd: debug2: Everything rolled up


It goes like this for some time and then it crashes with this message

slurmdbd: Terminate signal (SIGINT or SIGTERM) received
slurmdbd: debug: rpc_mgr shutting down
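(As an aside, the "Database settings not recommended values" warning in the log 
above usually refers to the MariaDB/MySQL tuning that the Slurm accounting 
documentation suggests -- something along these lines in the server's my.cnf. 
The exact values are site-dependent; these are commonly cited starting points, 
not values taken from this thread, and they are unrelated to the SIGTERM crash:)

```ini
[mysqld]
# well above the 128M default (134217728 bytes) seen in the log
innodb_buffer_pool_size = 4096M
innodb_log_file_size = 64M
innodb_lock_wait_timeout = 900
```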


On Thu, May 30, 2024 at 8:18 AM mercan  
wrote:


Did you try to connect to the database using the mysql command?

mysql --user=slurm --password=slurmdbpass slurm_acct_db

C. Ahmet Mercan

On 30.05.2024 14:48, Radhouane Aniba via slurm-users wrote:

Thank you Ahmet,
I dont have a firewall active.
And because slurmdbd cannot connect to the database, I am not able
to get it activated through systemctl. I will share the
output of slurmdbd -D -vvv shortly, but overall it always
says it is trying to connect to the db, then retries a couple of
times and crashes.

R.




On Thu, May 30, 2024 at 2:51 AM mercan
 wrote:

Hi;

Did you check whether you can connect to the db with your conf
parameters from the head-node:

mysql --user=slurm --password=slurmdbpass slurm_acct_db

Also, check and stop firewall and selinux, if they are running.

Last, you can stop slurmdbd, then run it in a terminal with:

slurmdbd -D -vvv

Regards;

C. Ahmet Mercan

On 30.05.2024 00:05, Radhouane Aniba via slurm-users wrote:

Hi everyone
I am trying to get slurmdbd to run on my local home server
but I am really struggling.
Note: I am a novice slurm user.
my slurmdbd always times out even though all the details in
the conf file are correct

My log looks like this

[2024-05-29T20:51:30.088] Accounting storage MYSQL plugin
loaded
[2024-05-29T20:51:30.088] debug2: ArchiveDir = /tmp
[2024-05-29T20:51:30.088] debug2: ArchiveScript = (null)
[2024-05-29T20:51:30.088] debug2: AuthAltTypes = (null)
[2024-05-29T20:51:30.088] debug2: AuthInfo = (null)
[2024-05-29T20:51:30.088] debug2: AuthType = auth/munge
[2024-05-29T20:51:30.088] debug2: CommitDelay = 0

[OAUTH-WG] Re: [Technical Errata Reported] RFC9470 (7951)

2024-05-30 Thread Brian Campbell
I suspect a variety of not-entirely-improbable rationales could be provided
to explain why it might make sense. But the reality is that it's just a
mistake in the document: somewhere along the way, updates were made to
the examples that didn't fully align with content already in those
examples. I try to be careful with details like that but apparently wasn't
careful enough in this case.

On Thu, May 23, 2024 at 5:45 AM Tomasz Kuczyński <
tomasz.kuczyn...@man.poznan.pl> wrote:

> The introspection response should rather reflect facts related to the
> access token sent for the introspection. So even in case, a new
> authentication event took place after the token issuance, it should not be
> included in the response as the authentication event is not related to the
> introspected access token.
> The inclusion of that information in the introspection response should be
> treated as a vulnerability.
>
> Regardless of the above, the "exp" in response is also earlier than the
> "auth_time", which means that the introspected token is beyond the time
> window of its validity and in fact, the introspection response should
> contain nothing more than {"active": false}.
>
> Best regards
> Tomasz Kuczyński
> W dniu 23.05.2024 o 01:06, Justin Richer pisze:
>
> This seems to be logical - the authentication event would always be before
> the token was issued in the usual case. However, assuming that the AS
> "upgrades" an existing token in-place during a step up, isn't it possible
> for the latest relevant authentication event to come after the token was
> initially issued?
>
>  - Justin
> --
> *From:* RFC Errata System 
> 
> *Sent:* Wednesday, May 22, 2024 2:30 PM
> *To:* vitto...@auth0.com  ;
> bcampb...@pingidentity.com 
> ; debcool...@gmail.com 
> ; paul.wout...@aiven.io 
> ; hannes.tschofe...@arm.com
>  ;
> rifaat.s.i...@gmail.com 
> 
> *Cc:* tomasz.kuczyn...@man.poznan.pl 
> ; oauth@ietf.org 
> ; rfc-edi...@rfc-editor.org 
> 
> *Subject:* [OAUTH-WG] [Technical Errata Reported] RFC9470 (7951)
>
> The following errata report has been submitted for RFC9470,
> "OAuth 2.0 Step Up Authentication Challenge Protocol".
>
> --
> You may review the report below and at:
> https://www.rfc-editor.org/errata/eid7951
>
> --
> Type: Technical
> Reported by: Tomasz Kuczyński 
> 
>
> Section: 6.2
>
> Original Text
> -
>  "exp": 1639528912,
>  "iat": 1618354090,
>  "auth_time": 1646340198,
>
> Corrected Text
> --
>  "exp": 1639528912,
>  "iat": 1618354090,
>  "auth_time": 1618354090,
>
> Notes
> -
> I noticed a small inconsistency in the example "Figure 7: Introspection
> Response". It seems that the time for the user-authentication event should
> be less than or equal to the time of token issuance to ensure logical
> coherence.
>
> Instructions:
> -
> This erratum is currently posted as "Reported". (If it is spam, it
> will be removed shortly by the RFC Production Center.) Please
> use "Reply All" to discuss whether it should be verified or
> rejected. When a decision is reached, the verifying party
> will log in to change the status and edit the report, if necessary.
>
> --
> RFC9470 (draft-ietf-oauth-step-up-authn-challenge-17)
> --
> Title   : OAuth 2.0 Step Up Authentication Challenge Protocol
> Publication Date: September 2023
> Author(s)   : V. Bertocci, B. Campbell
> Category: PROPOSED STANDARD
> Source  : Web Authorization Protocol
> Stream  : IETF
> Verifying Party : IESG
>
> ___
> OAuth mailing list -- oauth@ietf.org
> To unsubscribe send an email to oauth-le...@ietf.org
>
> --
> Tomasz Kuczynski
>
> Applications Division
> Large Scale Applications and Services Department Manager
> Poznan Supercomputing and Networking Center
> Polish Academy of Sciences
> Jana Pawla II 10, Room 1.28
> 61-139 Poznan, Poland
> Tel.: +48 693 918 148
>
>

___
OAuth mailing list -- oauth@ietf.org
To unsubscribe send an email to oauth-le...@ietf.org


Re: [Semihosting Tests PATCH v2 1/3] .editorconfig: add code conventions for tooling

2024-05-30 Thread Brian Cain



On 5/30/2024 6:23 AM, Alex Bennée wrote:

It's a pain when you come back to a code base you haven't touched in a
while and realise whatever indent settings you were using haven't
carried over. Add an editorconfig and be done with it.

Signed-off-by: Alex Bennée 



Adding an editorconfig seems like a great idea IMO.  But I wonder - will 
it result in unintentional additional changes when saving a file that 
contains baseline non-conformance?


Related: would a .clang-format file also be useful? git-clang-format can 
be used to apply formatting changes only on the code that's been changed.


Also: should we consider excluding any exceptional files that we don't 
expect to conform?
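If exclusions do turn out to be needed, EditorConfig can express them per-glob 
by unsetting properties for the exceptional paths -- a hypothetical fragment 
(the path is made up; "unset" is the EditorConfig spec's mechanism for removing 
an inherited property):

```ini
# Leave imported third-party or generated sources untouched
[third_party/**]
indent_style = unset
indent_size = unset
```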




---
v2
   - drop mention of custom major modes (not needed here)
   - include section for assembly
---
  .editorconfig | 29 +
  1 file changed, 29 insertions(+)
  create mode 100644 .editorconfig

diff --git a/.editorconfig b/.editorconfig
new file mode 100644
index 000..c72a55c
--- /dev/null
+++ b/.editorconfig
@@ -0,0 +1,29 @@
+# EditorConfig is a file format and collection of text editor plugins
+# for maintaining consistent coding styles between different editors
+# and IDEs. Most popular editors support this either natively or via
+# plugin.
+#
+# Check https://editorconfig.org for details.
+#
+
+root = true
+
+[*]
+end_of_line = lf
+insert_final_newline = true
+charset = utf-8
+
+[Makefile*]
+indent_style = tab
+indent_size = 8
+emacs_mode = makefile
+
+[*.{c,h}]
+indent_style = space
+indent_size = 4
+emacs_mode = c
+
+[*.{s,S}]
+indent_style = tab
+indent_size = 8
+emacs_mode = asm




Re: [Live-devel] building on raspberrypi Bookworm

2024-05-30 Thread Brian Koblenz
Thanks.  The 2024.05.30 tar file has config.raspberrypi, and it builds
and runs correctly.  Yippee.

fwiw, as of this morning, the latest tar file still does NOT reflect
the 2024.05.30 file.

On Wed, May 29, 2024 at 8:03 PM Ross Finlayson  wrote:
>
>
>
> > On May 29, 2024, at 6:35 PM, Brian Koblenz  
> > wrote:
> >
> > still missing??
>
> You must be getting an old, cached copy of the tar file.  To be sure that you 
> get the latest one, download
> http://www.live555.com/liveMedia/public/live.2024.05.30.tar.gz
> instead.
>
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/
>
>
> ___
> live-devel mailing list
> live-devel@lists.live555.com
> http://lists.live555.com/mailman/listinfo/live-devel

___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel


[Bug 2067618] [NEW] displaycal-py3 fails to launch due to python version mismatch

2024-05-30 Thread Brian Innes
Public bug reported:

No LSB modules are available.
Description:Ubuntu Studio 24.04 LTS
Release:24.04

displaycal:
  Installed: 3.9.11-2

upon launching displaycal I expected the application to start, however
an error message appeared

"Traceback (most recent call last):
  File "/usr/bin/displaycal", line 4, in 
from DisplayCAL.main import main
  File "/usr/lib/python3/dist-packages/DisplayCAL/main.py", line 27, in 
raise RuntimeError(
RuntimeError: Need Python version >= 3.8 <= 3.11, got 3.12.3"

ProblemType: Bug
DistroRelease: Ubuntu 24.04
Package: displaycal 3.9.11-2
ProcVersionSignature: Ubuntu 6.8.0-31.31.1-lowlatency 6.8.1
Uname: Linux 6.8.0-31-lowlatency x86_64
ApportVersion: 2.28.1-0ubuntu3
Architecture: amd64
CasperMD5CheckResult: pass
CurrentDesktop: KDE
Date: Thu May 30 15:26:08 2024
InstallationDate: Installed on 2024-05-30 (0 days ago)
InstallationMedia: Ubuntu-Studio 24.04 LTS "Noble Numbat" - Release amd64 
(20240424.1)
SourcePackage: displaycal-py3
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: displaycal-py3 (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug noble

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2067618

Title:
  displaycal-py3 fails to launch due to python version mismatch

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/displaycal-py3/+bug/2067618/+subscriptions



[RBW] Re: FS: Bedrock Mountain Clogs Leather Size 13

2024-05-30 Thread Brian Forsee
Tim/Michael,

Do you guys also have Bedrock sandals? Did you size up on the clog compared 
to the sandal? I wear the 12M sandal, but have a little bit of room on the 
footbed. If they made half sizes, an 11.5 would probably be spot on for me. I 
don't think my sandals are the new EVO; they are from 2017 (re-sole club!). 

I was really excited about the clogs but was bummed when I found out they 
were not US manufactured, and never ended up buying a pair.

Brian

On Thursday, May 30, 2024 at 7:07:55 AM UTC-5 Tim Bantham wrote:

> I hope you enjoy those new to you clogs Michael!
>
> On Wednesday, May 29, 2024 at 7:05:57 PM UTC-4 Michael Ullmer wrote:
>
>> And sold
>>
>> On Wednesday, May 29, 2024 at 2:37:54 PM UTC-5 Michael Ullmer wrote:
>>
>>> I picked up a size 12 off the list this month from Tim to replace my too 
>>> large for me size 13s.
>>>
>>> I bought these brand new from Bedrock during their last run last Fall. In 
>>> good shape, some crank wear on the side (I mainly rode in these). There's a 
>>> faint mark on the top of one, I tried to capture in the pics. Soles are in 
>>> great shape.
>>>
>>> $100 shipped in USA
>>>
>>> Pics here: https://photos.app.goo.gl/LFhFnM3bFmh1jMvKA
>>>
>>> Mike in Minneapolis
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups "RBW 
Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to rbw-owners-bunch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/rbw-owners-bunch/9372dc86-f4b4-4aa5-8c56-98bb5a4cc63an%40googlegroups.com.


Re: [RE-wrenches] Large Residence ESS and NFPA 855

2024-05-30 Thread Brian Mehalic via RE-wrenches
I'm not looking at the most recent editions right now, but the previous
versions of both 855 and the IRC have statements to the effect of "ESS
installations exceeding the permitted individual or aggregate ratings shall
be installed in accordance with Section xxx" (pointing to the IFC in the
case of the IRC, and to Chapters 4-9 in the case of 855); in other
words, basically follow the (more onerous) rules for ESS on other than one-
and two-family dwellings.

Brian Mehalic
NABCEP Certified Solar PV Installation Professional™ R031508-59
National Electrical Code® CMP-4 Member
(520) 204-6639

Solar Energy International
http://www.solarenergy.org



On Thu, May 30, 2024 at 3:17 AM Jason Szumlanski via RE-wrenches <
re-wrenches@lists.re-wrenches.org> wrote:

> Has anyone dealt with a large ESS installation that exceeds the aggregate
> storage limits in NFPA 855? The limits are quite low for very large single
> family homes. I have a situation where we need 120 kWh minimum for an
> off-grid home. There is a building dedicated for the inverters and
> batteries that is 400 feet from the main home. The intent was to house the
> batteries there, but we are in excess of the 80 kWh limit for an accessory
> structure. It would be impractical to house part of the batteries on an
> exterior wall.
>
> Are there any "loopholes" that might allow this larger ESS for this
> property? And if not, has anyone attempted to comply with Ch 4-9? That
> seems onerous.
>
> Jason Szumlanski
> Florida Solar Design Group
> ___
> List sponsored by Redwood Alliance
>
> Pay optional member dues here: http://re-wrenches.org
>
> List Address: RE-wrenches@lists.re-wrenches.org
>
> Change listserver email address & settings:
> http://lists.re-wrenches.org/options.cgi/re-wrenches-re-wrenches.org
>
> There are two list archives for searching. When one doesn't work, try the
> other:
> https://www.mail-archive.com/re-wrenches@lists.re-wrenches.org/
> http://lists.re-wrenches.org/pipermail/re-wrenches-re-wrenches.org
>
> List rules & etiquette:
> http://www.re-wrenches.org/etiquette.htm
>
> Check out or update participant bios:
> http://www.members.re-wrenches.org
>
>



Re: [Live-devel] building on raspberrypi Bookworm

2024-05-29 Thread Brian Koblenz
still missing??

pi@Master-198:~/live $ ls
BasicUsageEnvironment   config.cygwin-for-vlc
config.linux-no-openssl config.solaris-64bit groupsock
 UsageEnvironment
config.armeb-uclibc config.freebsd
config.linux-with-shared-libraries  config.uClinux   hlsProxy
 win32config
config.armlinux config.freebsd-no-openssl
config.macosx-bigsurconfigureliveMedia
 win32config.Borland
config.avr32-linux  config.iphoneos
config.macosx-catalina  COPYING
Makefile.head  WindowsAudioInputDevice
config.bfin-linux-uclibcconfig.iphone-simulator
config.macosx-no-opensslCOPYING.LESSER
Makefile.tail
config.bfin-uclinux config.linux   config.mingw
fix-makefile mediaServer
config.bsplinux config.linux-64bit config.openbsd
genMakefiles proxyServer
config.cris-axis-linux-gnu  config.linux-gdb   config.qnx4
genWindowsMakefiles  README
config.cygwin   config.linux-gdb-sanitize
config.solaris-32bitgenWindowsMakefiles.cmd  testProgs

On Wed, May 29, 2024 at 6:05 PM Ross Finlayson  wrote:
>
>
>
> > On May 29, 2024, at 5:56 PM, Brian Koblenz  
> > wrote:
> >
> > Here is what I downloaded:
> > wget http://www.live555.com/liveMedia/public/live555-latest.tar.gz
> >
> > After unpacking, I do not see any live/config.raspberrypi
>
> Oops, that was my mistake.  Sorry about that.  Try again now.
>
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/
>
>


CVS: cvs.openbsd.org: ports

2024-05-29 Thread Brian Callahan
CVSROOT:/cvs
Module name:ports
Changes by: bcal...@cvs.openbsd.org 2024/05/29 19:25:18

Modified files:
devel/dub  : Makefile distinfo 
devel/dub/patches: patch-source_dub_dub_d 

Log message:
Update to dub-1.37.0
Changelog: https://github.com/dlang/dub/compare/v1.31.1...v1.37.0



Re: [Live-devel] building on raspberrypi Bookworm

2024-05-29 Thread Brian Koblenz
Here is what I downloaded:
wget http://www.live555.com/liveMedia/public/live555-latest.tar.gz

After unpacking, I do not see any live/config.raspberrypi
Do I have the right tarball?

fwiw, I am (still) targeting pi-3b+ but I think the main issue is
the OS level (12/bookworm) rather than the hw variant.

thanks


On Wed, May 29, 2024 at 5:26 PM Ross Finlayson  wrote:
>
> Brian,
>
> I recently updated the code to add a new configuration file
> config.raspberrypi
> that is known to work for building on the Raspberry Pi 5 (and likely other 
> versions as well).  So just run
> genMakefiles raspberrypi
> before building.
>
> And this is the first time I’ve ever heard about C++ compilers complaining 
> about ‘indentation’ (which is a bit weird, considering that indentation has 
> no semantic significance in the language).  And the examples you gave are 
> certainly not ‘misleading’.  I suggest turning off such warnings if you can.
>
>
> Ross Finlayson
> Live Networks, Inc.
> http://www.live555.com/
>
>


[Live-devel] building on raspberrypi Bookworm

2024-05-29 Thread Brian Koblenz
I am trying to create an environment on the latest debian os (Lite).
I last did this a few years ago and it is not one of my favorite
things.

This time around, I have forgotten all I learned about the live555
stuff and am unsuccessfully trying to (naively) build
live555ProxyServer.

fwiw, I have added (to config.linux) -DALLOW_SERVER_PORT_REUSE=1 as a
COMPILER_OPT

I have a couple of issues:
a) there are a few places where the compiler complains about
unexpected indentation.  These certainly look suspicious and could be
eliminated by adding the relevant {}.  I am not (yet) too concerned
about these, but I include a couple of examples below

b) The BasicTaskScheduler.cpp is missing a member function.  See below.

In general, I know this software is carefully crafted and well
maintained so I suspect I am just out to lunch in some basic way.  Or
maybe my debian bookworm installation needs something important???

-brian


c++ -c -Iinclude -I../UsageEnvironment/include -I../groupsock/include
-I/usr/local/include -I. -O2 -DSOCKLEN_T=socklen_t
-D_LARGEFILE_SOURCE=1 -D_FILE_OFFSET_BITS=64
-DALLOW_SERVER_PORT_REUSE=1  -Wall -DBSD=1   BasicTaskScheduler.cpp
BasicTaskScheduler.cpp: In member function ‘virtual void
BasicTaskScheduler::SingleStep(unsigned int)’:
BasicTaskScheduler.cpp:191:40: error: ‘struct std::atomic_flag’ has no
member named ‘test’
  191 |   if (fTriggersAwaitingHandling[i].test()) {
  |^~~~
gmake[1]: *** [Makefile:41: BasicTaskScheduler.o] Error 1
gmake[1]: Leaving directory '/home/pi/live/BasicUsageEnvironment'

Examples of unexpected indentation exist in RTCP.cpp and VorbisAudioRTPSink.cpp

RTCP.cpp: In member function ‘void
RTCPInstance::processIncomingReport(unsigned int, const
sockaddr_storage&, int, unsigned char)’:
RTCP.cpp:568:7: warning: this ‘if’ clause does not guard...
[-Wmisleading-indentation]
  568 |   if (length < 4) break; length -= 4;
  |   ^~
RTCP.cpp:568:30: note: ...this statement, but the latter is
misleadingly indented as if it were guarded by the ‘if’
  568 |   if (length < 4) break; length -= 4;
  |  ^~
RTCP.cpp:586:11: warning: this ‘if’ clause does not guard...
[-Wmisleading-indentation]
  586 |   if (length < 20) break; length -= 20;
  |   ^~
RTCP.cpp:586:35: note: ...this statement, but the latter is
misleadingly indented as if it were guarded by the ‘if’
  586 |   if (length < 20) break; length -= 20;

___
live-devel mailing list
live-devel@lists.live555.com
http://lists.live555.com/mailman/listinfo/live-devel


[RBW] Re: 11 Speed on Silver Crankset?

2024-05-29 Thread brian tester
Can confirm it works with a KMC X11!

On Wednesday, May 29, 2024 at 4:42:36 PM UTC-7 sebastian...@gmail.com wrote:

> Hello,
>
> Has anyone successfully used an 11-spd chain on a Silver wide-low crankset?
>
> The plan is to use an 11-34 11spd cassette in the back paired with a 38/24 
> Silver crankset.
>
> The Riv website states the Silver cranks are compatible with up to 10 
> speed, but I am wondering if anyone can comment on 11 speed.
>
> Thanks!
>

-- 
You received this message because you are subscribed to the Google Groups "RBW 
Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to rbw-owners-bunch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/rbw-owners-bunch/d3518756-e719-4c1e-95f7-b8fe4599ec85n%40googlegroups.com.


[ITP] python-license-expression and cygport PoC patch (was: calm: SPDX licence list data update please)

2024-05-29 Thread Brian Inglis via Cygwin-apps

On 2024-05-28 08:37, Brian Inglis via Cygwin-apps wrote:

On 2024-05-27 15:15, Jon Turney via Cygwin-apps wrote:

On 24/05/2024 17:08, Brian Inglis via Cygwin-apps wrote:


Can we please get the SPDX licence list data updated in calm to 3.24 sometime 
if possible, as the licences complained about below have been in releases for 
nearly a year since 3.21:


I thought I wrote about this the last time you asked, but obviously not.


Thought not, but after recent reminder, not so sure now ;^>


This is not quite straightforward, as the system python on sourceware is 
currently python3.6, and the last supported nexB/license-expression on that is 
30.0.0, and moving to a later one has some wrinkles, since various pieces of 
interconnected stuff aren't venv'd (yet?).

If not, perhaps I could be of some help if I knew requirements?


So, there aren't any requirements here except "validate the SPDX license 
expression to detect maintainer mistakes and typos".


It looks like using that python module might have been a mistake.

I'm not sure why it needs to contain its own version of the license data; 
ideally we'd have something that read the official SPDX data (rate-limited to 
once per day or something). It looks like it might be possible to do this by 
feeding our own license list into the module rather than using its built-in 
one, but one could hope for this to be built in already...


There have been changes in how to specify exceptions using WITH.

It would also be useful if it could be taught to accept 'LicenseRef-.*' 
identifiers.


Ditto ExceptionRef-.* but that and LicenseRef-.* do not seem to be allowed by 
PEP 639, as they unrealistically expect projects to change existing licences, 
whereas we have to deal with historical reality like Fedora!


So, suggestions on a different module to use, or patches to make this work 
better, or cogent arguments why we should just remove this validation are all 
welcome.


How about we delegate licence validation to cygport, as someone recently 
offered, rather than doing it in calm as currently, using current Cygwin 
python: cygport adds a licence validation hint to the src hint; if not there, 
calm does it as now?


Would we or should we also allow specifying LICENSE_URI (as I have been doing) 
like PEP 639 license-files, with defaults searched as suggested:


 "LICEN[CS]E*", "COPYING*", "NOTICE*", "AUTHORS*"?

where globs and source paths are allowed as usual in cygport files, and 
directories may match these paths, implicitly including file entries, but no 
file *contents* checked, unless we see a need in future, to generate and 
validate licences.
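The suggested defaults can be matched with plain stdlib globbing. A sketch of the search semantics only (file names invented; real cygport globbing would happen in shell, not Python):

```python
# Stdlib-only sketch of the suggested default licence-file search.
from fnmatch import fnmatch

DEFAULT_GLOBS = ("LICEN[CS]E*", "COPYING*", "NOTICE*", "AUTHORS*")

def default_license_files(names):
    """Return the names matching any default licence-file glob."""
    return sorted(n for n in names
                  if any(fnmatch(n, g) for g in DEFAULT_GLOBS))

candidates = ["LICENSE", "LICENCE.txt", "COPYING.LIB", "README.md", "AUTHORS"]
# The [CS] character class covers both spellings of LICENSE/LICENCE.
matched = default_license_files(candidates)
```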



You can also now remove the exceptions in calm/fixes.py(licmap):


Thanks, will do so.


Cheers!


I found github/nexB/license-expression Python package to do SPDX licence checks 
with current data, developed by the same team doing SPDX-toolkit for SPDX, 
working with Fedora folks et al.


Successful attempt to package Python license-expression (without tests):

https://cygwin.com/cgi-bin2/jobs.cgi?id=8210

cygport attached and at:

https://cygwin.com/cgit/cygwin-packages/playground/commit/?id=3626386b10c967f780547d1703ad23bd50f6331a

log at:

https://github.com/cygwin/scallywag/actions/runs/9293093201

The package installs and runs using the PoC spdx-license-expression.py script 
(attached), hooked in via the attached patch adding a license hint to 
/usr/share/cygport/lib/pkg_pkg.cygpart.


I also ran a test of the Python script and module against all package source 
cygport files declaring licences which I maintain or have ever looked at, 
including a git/cygwin-packages/*.cygport download from 2023-02, showing the 
results in the attached log.
I also attempted to trap the exceptions in the script, but that does not seem 
to work in any obvious documented manner, and I do not know enough Python to 
address this.


If someone else who knows Python cared to adopt and improve this in a more 
normal manner, and incorporate it more smoothly into cygport, we would all 
appreciate that.

Alternatively, some candid comments and frank feedback might allow me to do so! 
;^>

The approach may also be adaptable to calm if you can get 
python-license-expression 30.3.0 installed on the server(s), and kept updated:


https://repology.org/project/python:license-expression/versions

and Cygwin should soon be added there hopefully! ;^>

--
Take care. Thanks, Brian Inglis  Calgary, Alberta, Canada

La perfection est atteinte   Perfection is achieved
non pas lorsqu'il n'y a plus rien à ajouter  not when there is no more to add
mais lorsqu'il n'y a plus rien à retirer but when there is no more to cut
-- Antoine de Saint-Exupéry

#!/usr/bin/cygport
# python-license-expression.cygport - Python license-expression Cygwin package
# build control script definitions

inherit python-wheel

NAME=python-license-expression
VERS

issue with forwarder zones

2024-05-29 Thread Cuttler, Brian R (HEALTH) via bind-users
My bad - I'd mailed this mistakenly to an individual and not the list.

---

I am currently running BIND 9.18.18-0ubuntu0.22.04.2-Ubuntu.

I am sometimes seeing that I don't have resolution for some FQDN in forwarder 
zones.

Usually it works; sometimes I don't get resolution. Interestingly, I failed 
resolution for an FQDN yesterday, and while relieved to see that I also failed 
to get resolution on the authoritative server, I was later able to resolve on 
the authoritative server but still failed on the local forwarding server.

Wondering what is going on there.

Conjecture - caching the failed response for some period of time?
If so, disable caching for the problematic forwarder zone?

Some other issue? If so what might it be, how can I test for it and how do I 
resolve/work-around it?

Thanks in advance,
Brian


Brian R Cuttler
System and Network Administrator
Wadsworth Center, New York State Department of Health
Empire State Plaza
Corning Tower, Sublevel D-280, Albany NY 12237
518 486-1697 (O)
518 redacted (C)
brian.cutt...@health.ny.gov
https://www.wadsworth.org


-- 
Visit https://lists.isc.org/mailman/listinfo/bind-users to unsubscribe from 
this list

ISC funds the development of this software with paid support subscriptions. 
Contact us at https://www.isc.org/contact/ for more information.


bind-users mailing list
bind-users@lists.isc.org
https://lists.isc.org/mailman/listinfo/bind-users


Hyperref clarification

2024-05-29 Thread Brian Kneller via lyx-users
Hi

I am using book standard class and am installing chapters from MS Word and PDF 
graphics into LyX. I have hyperlinks generated between the figures list, TOC 
and Bibliography in the standard class without hyperref being available. The 
options I have tried in Docs>Settings> PDF Props seem to work and hyperref is 
made available through the “Use hyperref Support” option. I have not installed 
any new document cross refs yet in LyX as they, and the Bibliography refs 
already exist for manual cross referencing. I would like to automate cross 
reference access using hyperref and the existing cross ref IDs. I understand 
this is automatic when creating them in LyX; however, I have been unable to 
find a feature to enable the document reader to return from an accessed cross 
reference back to the point of calling. Or have I missed the obvious?

 

Separate issue: after generating a PDF from source I sometimes get errors, and 
some of these quote a line number of large value which does not reference the 
source file. Or is it the error log, and in that case how can I enumerate its 
lines, please?

 

Many Thanks

 

Config: Mac Book Pro, Big Sur , LyX  V 2.3.7 -- 
lyx-users mailing list
lyx-users@lists.lyx.org
http://lists.lyx.org/mailman/listinfo/lyx-users


[Wikitech-l] Re: Can we do better than just redirect HTTP API requests to HTTPS?

2024-05-29 Thread Brian Wolff
I personally think the rather low risk is not worth the inconvenience,
especially since many uses of the API are unauthenticated.

If we did it, I think we should only do it for requests that actually have
credentials attached (cookie or OAuth)

Just my 2 cents.
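As an illustration only (not any existing Wikimedia config, all names invented), that credentials-only policy might look like this hypothetical WSGI middleware:

```python
# Hypothetical middleware sketching the "fail early" behaviour: reject
# credentialed plain-HTTP requests with 403 instead of redirecting to HTTPS.
CREDENTIAL_HEADERS = ("HTTP_AUTHORIZATION", "HTTP_COOKIE")

def require_https_for_credentials(app):
    def middleware(environ, start_response):
        # Trust a proxy-supplied scheme if present, else the raw one.
        scheme = environ.get("HTTP_X_FORWARDED_PROTO",
                             environ.get("wsgi.url_scheme", "http"))
        has_credentials = any(h in environ for h in CREDENTIAL_HEADERS)
        if scheme != "https" and has_credentials:
            body = b"Credentials sent over plain HTTP; use HTTPS.\n"
            start_response("403 Forbidden",
                           [("Content-Type", "text/plain"),
                            ("Content-Length", str(len(body)))])
            return [body]
        return app(environ, start_response)
    return middleware
```

Unauthenticated requests would still get the usual redirect, so casual browser use keeps working; only requests carrying a cookie or Authorization header fail hard. (Revoking the leaked credential, as the article suggests, would be a separate server-side step.)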

--
Brian

On Wednesday 29 May 2024, psnbaotg via Wikitech-l <
wikitech-l@lists.wikimedia.org> wrote:

> I noticed an interesting post on Hacker News:
> https://news.ycombinator.com/item?id=40504756 (https://jviide.iki.fi/http-
> redirects)
>
> Basically, this article argues that for reasons, API should "fail early",
> such as returning with 403 and revoking all credentials sent via plain
> text, rather than redirecting all HTTP requests to HTTPS.
>
> In my humble opinion, this article's point makes perfect sense, because we
> cannot expect an arbitrary client to follow HSTS, and a simple typo can
> cause a serious credential leak.
>
> I found that all our APIs (action API, Wikimedia REST, and even Wikimedia
> Enterprise) are doing redirects:
>
> ```
> $ curl -I "http://en.wikipedia.org/api/rest_v1/page/title/Earth;
> HTTP/1.1 301 Moved Permanently
> content-length: 0
> location: https://en.wikipedia.org/api/rest_v1/page/title/Earth
> server: HAProxy
> x-cache: cp5023 int
> x-cache-status: int-tls
> connection: close
>
> $ curl -I "http://en.wikipedia.org/w/api.php?action=query=
> info=Earth"
> HTTP/1.1 301 Moved Permanently
> content-length: 0
> location: https://en.wikipedia.org/w/api.php?action=query=
> info=Earth
> server: HAProxy
> x-cache: cp5023 int
> x-cache-status: int-tls
> connection: close
>
> $ curl -I http://api.enterprise.wikimedia.com/v2/snapshots
> HTTP/1.1 301 Moved Permanently
> Server: awselb/2.0
> Date: Wed, 29 May 2024 10:03:24 GMT
> Content-Type: text/html
> Content-Length: 134
> Connection: keep-alive
> Location: https://api.enterprise.wikimedia.com:443/v2/snapshots
>
> ```
>
> I'm asking security folks, should we consider making above changes, like
> those services listed in the article? Thanks you.
>
> Best regards,
> diskdance
> ___
> Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
> To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
> https://lists.wikimedia.org/postorius/lists/wikitech-l.
> lists.wikimedia.org/
>
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

Re: poll(): IN/OUT vs {RD,WR}NORM

2024-05-29 Thread Brian Buhrow
Hello.  I just did a quick scan of the telnetd sources from NetBSD-5, and 
there are interesting notes in there about all of this and how urgent data is 
used, or not used, in different cases.  A check of -current sources shows that 
they still have the same notes and code regarding all of this in telnetd.


-Brian


Re: [RBW] Re: Cockpit swap on my Sam

2024-05-29 Thread Brian Turner
I’m a big fan of the chunky ESI grips. They come in a variety of colors, but 
I’ve also seen folks wrap them in cotton tape and twine.

Brian
Lex Ky

On May 29, 2024, at 7:57 AM, Mathias Steiner  wrote:

> One or two layers of 'cork' tape, covered with a delightful shade of 
> Newbaum's.
>
> On Wednesday, May 29, 2024 at 7:45:15 AM UTC-4 Tim Bantham wrote:
>
>> Once again I am considering a cockpit change for my Sam Hillborne. This 
>> time I am going to upright Billie bars. For simplicity and cost savings I'm 
>> keeping the current bar end shifters. I'm currently pondering grips. I love 
>> the way cork grips look but find them to be too slippery. I've been happy 
>> with Oury grips. I like the way they feel, but I don't think they look as 
>> good when you cut off the end to accommodate bar end shifters. I've seen 
>> the felt and twine DIY grips that Riv promotes, but I don't think I have 
>> the artistic ability and patience to recreate it. What am I missing? What 
>> grips are out there that look good, feel good and work with bar ends?



-- 
You received this message because you are subscribed to the Google Groups "RBW Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to rbw-owners-bunch+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/rbw-owners-bunch/ab215896-ac66-45fd-bf28-7d8485382131n%40googlegroups.com.




-- 
You received this message because you are subscribed to the Google Groups "RBW Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to rbw-owners-bunch+unsubscr...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/rbw-owners-bunch/3D41CEFF-64FF-44CD-A77B-FEB3D22D0AD9%40gmail.com.


Re: newer version of mingw64-*-win-iconv ?

2024-05-29 Thread Brian Inglis via Cygwin-apps

On 2024-05-28 19:12, Bruno Haible via Cygwin wrote:

It would be useful if someone could rebuild the two packages
   https://cygwin.com/packages/summary/mingw64-i686-win-iconv.html
   https://cygwin.com/packages/summary/mingw64-x86_64-win-iconv.html
based off the current git HEAD [1].
Reason: The current git HEAD is a reasonable alternative to
GNU libiconv; all encodings that it supports, other than EUC-JP
and GB18030, have reasonably good conversion tables. Wherease the
current Cygwin packages are based off source code from 2013
and have a major problem already with the ASCII encoding.
[1] https://github.com/win-iconv/win-iconv


Ran playground local and CI builds of these packages at v0.0.8 successfully:

https://cygwin.com/cgi-bin2/jobs.cgi?srcpkg=mingw64-x86_64-win-iconv

https://cygwin.com/cgi-bin2/jobs.cgi?srcpkg=mingw64-x86_64-win-iconv

Do we really need the fix at git HEAD to add UCS-2-INTERNAL encoding?

Could someone please do any further tweaks for this source git if required, and 
do NMU builds and deploys of these?


[Are we really still building 32 bit mingw packages when we dropped support of 
32 bit Windows << 1%?

Steam estimated 32 bit games PCs ~ 0.25% in 2021, and dropped support in 
February.
Surveys don't even bother to report that share nowadays!]

--
Take care. Thanks, Brian Inglis  Calgary, Alberta, Canada

La perfection est atteinte   Perfection is achieved
non pas lorsqu'il n'y a plus rien à ajouter  not when there is no more to add
mais lorsqu'il n'y a plus rien à retirer but when there is no more to cut
-- Antoine de Saint-Exupéry


Re: newer version of mingw64-*-win-iconv ?

2024-05-29 Thread Brian Inglis via Cygwin

On 2024-05-28 19:12, Bruno Haible via Cygwin wrote:

It would be useful if someone could rebuild the two packages
   https://cygwin.com/packages/summary/mingw64-i686-win-iconv.html
   https://cygwin.com/packages/summary/mingw64-x86_64-win-iconv.html
based off the current git HEAD [1].
Reason: The current git HEAD is a reasonable alternative to
GNU libiconv; all encodings that it supports, other than EUC-JP
and GB18030, have reasonably good conversion tables. Wherease the
current Cygwin packages are based off source code from 2013
and have a major problem already with the ASCII encoding.
[1] https://github.com/win-iconv/win-iconv


Ran playground local and CI builds of these packages at v0.0.8 successfully:

https://cygwin.com/cgi-bin2/jobs.cgi?srcpkg=mingw64-x86_64-win-iconv

https://cygwin.com/cgi-bin2/jobs.cgi?srcpkg=mingw64-x86_64-win-iconv

Do we really need the fix at git HEAD to add UCS-2-INTERNAL encoding?

Could someone please do any further tweaks for this source git if required, and 
do NMU builds and deploys of these?


[Are we really still building 32 bit mingw packages when we dropped support of 
32 bit Windows << 1%?

Steam estimated 32 bit games PCs ~ 0.25% in 2021, and dropped support in 
February.
Surveys don't even bother to report that share nowadays!]

--
Take care. Thanks, Brian Inglis  Calgary, Alberta, Canada

La perfection est atteinte   Perfection is achieved
non pas lorsqu'il n'y a plus rien à ajouter  not when there is no more to add
mais lorsqu'il n'y a plus rien à retirer but when there is no more to cut
-- Antoine de Saint-Exupéry

--
Problem reports:  https://cygwin.com/problems.html
FAQ:  https://cygwin.com/faq/
Documentation:https://cygwin.com/docs.html
Unsubscribe info: https://cygwin.com/ml/#unsubscribe-simple


[webkit-changes] [WebKit/WebKit] 237b90: Add an early return if loading updated DNR rules f...

2024-05-28 Thread Brian Weinstein
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 237b90c1c756967c60ca644f02dcc13084364628
  
https://github.com/WebKit/WebKit/commit/237b90c1c756967c60ca644f02dcc13084364628
  Author: Brian Weinstein 
  Date:   2024-05-28 (Tue, 28 May 2024)

  Changed paths:
M 
Source/WebKit/UIProcess/Extensions/Cocoa/API/WebExtensionContextAPIDeclarativeNetRequestCocoa.mm

  Log Message:
  ---
  Add an early return if loading updated DNR rules fails
https://bugs.webkit.org/show_bug.cgi?id=274785
rdar://128749905

Reviewed by Timothy Hatcher.

Without this early return, we were both trying to roll back to a savepoint and 
commit the same savepoint,
and then we would call the completion handler more than once, leading to this 
crash.

These code paths should be mutually exclusive, so make sure to add an early 
return.

* 
Source/WebKit/UIProcess/Extensions/Cocoa/API/WebExtensionContextAPIDeclarativeNetRequestCocoa.mm:
(WebKit::WebExtensionContext::updateDeclarativeNetRequestRulesInStorage):

Canonical link: https://commits.webkit.org/279426@main



To unsubscribe from these emails, change your notification settings at 
https://github.com/WebKit/WebKit/settings/notifications
___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[Bug 2067120] Re: apport-retrace crashed with subprocess.CalledProcessError in check_call(): Command '['dpkg', '-x', '/srv/vms/apport-retrace/Ubuntu 24.04/apt/var/cache/apt/archives//base-files_13ubun

2024-05-28 Thread Brian Murray
** Changed in: apport (Ubuntu)
   Importance: Medium => High

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2067120

Title:
  apport-retrace crashed with subprocess.CalledProcessError in
  check_call(): Command '['dpkg', '-x', '/srv/vms/apport-retrace/Ubuntu
  24.04/apt/var/cache/apt/archives//base-files_13ubuntu9_amd64.deb',
  '/tmp/apport_sandbox_zj9wto2z']' returned non-zero exit status 2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apport/+bug/2067120/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 1994521] Update Released

2024-05-28 Thread Brian Murray
The verification of the Stable Release Update for cinder has completed
successfully and the package is now being released to -updates.
Subsequently, the Ubuntu Stable Release Updates Team is being
unsubscribed and will not receive messages about this bug report.  In
the event that you encounter a regression using the package from
-updates please report a new bug using ubuntu-bug and tag the bug report
regression-update so we can easily find any regressions.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1994521

Title:
  HPE3PAR: Failing to clone a volume having children

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1994521/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Re: Patrizia Nanz: I run a university – people like me should be backing students’ right to protest over Gaza Patrizia Nanz

2024-05-28 Thread Brian Holmes via nettime-l
Thank you Patrizia!
Thank you Patrizio!

This text is so true and so just. I  just wonder about these two sentences:

"It is also in the interest of academic institutions to have a
comprehensive picture of their “political economy” – the networks of power
and influence that they are part of.
This picture is often lacking, not because of deliberate opacity but
because of organisational complexity."

Maybe Italian universities are not deliberately opaque - but large American
universities have many government, military and corporate contracts to
hide. As soon as students throw off their complacency and seek the truth,
conflict is inevitable.

It's better to fight against hypocritical liberalism now - before it's a
bloodier fight against open fascism (coming soon to a democracy near you).

BH

On Mon, May 27, 2024, 11:24 Patrice Riemens via nettime-l <
nettime-l@lists.nettime.org> wrote:

>
> Original to:
>
> https://www.theguardian.com/commentisfree/article/2024/may/27/university-student-protests-gaza-right
>
>
> I run a university – people like me should be backing students’ right to
> protest over Gaza
> Patrizia Nanz
>
> The brutal repression of student protests from Amsterdam to Los Angeles is
> exposing failings at the heart of our universities
>
> Mon 27 May 2024
>
> Across the world, university students have set up encampments to protest
> against the humanitarian disaster unfolding in Gaza and put pressure on
> academic institutions and governments. Whatever one thinks of their message
> and of their requests, their moral indignation in the face of avoidable
> human suffering is one we should all be able to share.
>
> I find it inspiring that this student movement has been spearheaded by a
> generation that was too quickly labelled apolitical and self-absorbed.
> Think about it: these students grew up in the bleak post-9/11 world, with a
> future foreclosed by the 2008 financial crisis and the climate meltdown.
> They are still reeling from two years of pandemic that have taken a heavy
> educational and emotional toll. Still, this generation has succeeded in
> organising a global movement that is coordinated, smart and humane. It
> deserves much better than condescension.
>
> The encampments and most of the protests have been largely peaceful, yet
> they have at times been brutally repressed. As a university president in
> Italy, I have watched with dismay the scenes of violence that have unfolded
> at universities from Amsterdam to Los Angeles and Sydney. It was good to
> see some of my counterparts, such as the presidents of Brown and Wesleyan
> in the US, engage constructively with the students and even accept some of
> their requests. However, they remain exceptions that underscore how far
> most academic institutions have drifted away from their main constituency.
>
> Global movements are complex and dynamic phenomena. In the jet stream of
> slogans they churn out, some may be contentious or even unreasonable. Yet
> single slogans rarely capture the meaning of such large-scale protests.
> What matters are fundamental principles.
>
> It seems to me that the student protests are driven by an attachment to
> peace and human life. This is what makes the repressive reaction of many
> academic administrations so shocking. Anyone attached to the idea of
> university as a place of free intellectual inquiry and self-fulfilment can
> only feel sadness seeing the recent pictures of the empty Columbia
> University Morningside campus barricaded behind police lines. Ironically,
> western universities without students have become the mirror image of
> Palestinian students without universities. Something has gone terribly
> wrong. But what?
>
> When researchers and students from the European University Institute,
> where I work, set up camp in Florence alongside their peers from other
> universities across Tuscany, I saw this as an opportunity to take stock and
> clarify the fundamental principles that should inform academic debate. Such
> clarity is critical if we want to navigate these challenging times together.
>
> The EUI is a postgraduate university with students and researchers from
> around the world, including many who are Jewish and Muslim. I was heartened
> to see all of them side by side when I visited the encampment in Florence’s
> Piazza San Marco. I am committed to ensuring that their university remains
> a space where they can all feel included while exercising their freedom to
> the fullest. This includes asking difficult and contentious questions, with
> no restriction other than intellectual rigour and respect for the dignity
> of those involved.
>
> While it is essential to keep the focus on the humanitarian disaster
> unfolding in Gaza and on the Israeli hostages held by Hamas, I am also
> struck by what this movement says about the state of universities. It
> reveals a deep rift between students and administrations. The latter have
> grown hugely over the past decades and become massive 

[slurm-users] Re: slurmdbd archive format

2024-05-28 Thread Brian Andrus via slurm-users

Oh, to address the passed train:

Restore the archive data with "sacctmgr archive load", then you can do 
as you need.


From man sacctmgr:

*archive* {dump|load}

    Write database information to a flat file or load information that 
has previously been written to a file.


Brian Andrus


Set up your other MariaDB instance, dump the current slurmdbd database and 
restore/import it, then restore your archive


On 5/28/2024 11:38 AM, Brian Andrus wrote:


Instead of using the archive files, couldn't you query the db directly 
for the info you need?


I would recommend sacct/sreport if those can get the info you need.

Brian Andrus

On 5/28/2024 9:59 AM, O'Neal, Doug (NIH/NCI) [C] via slurm-users wrote:


My organization needs to access historic job information records for 
metric reporting and resource forecasting. slurmdbd is archiving only 
the job information, which should be sufficient for our numbers, but 
is using the default archive script. In retrospect, this data should 
have been migrated to a secondary MariaDB instance, but that train 
has passed.



The format of the archive files is not well documented. Does anyone 
have a program (python/C/whatever) that will read a job_table_archive 
file and decode it into a parsable structure?


Douglas O’Neal, Ph.D. (contractor)

Manager, HPC Systems Administration Group, ITOG

Frederick National Laboratory for Cancer Research

Leidos Biomedical Research, Inc.

Phone: 301-228-4656

Email: Douglas.O’n...@nih.gov 
<mailto:Doug%20O'Neal%20%3cDouglas.O'n...@nih.gov%3e>



-- 
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-le...@lists.schedmd.com


[slurm-users] Re: slurmdbd archive format

2024-05-28 Thread Brian Andrus via slurm-users
Instead of using the archive files, couldn't you query the db directly 
for the info you need?


I would recommend sacct/sreport if those can get the info you need.

Brian Andrus

On 5/28/2024 9:59 AM, O'Neal, Doug (NIH/NCI) [C] via slurm-users wrote:


My organization needs to access historic job information records for 
metric reporting and resource forecasting. slurmdbd is archiving only 
the job information, which should be sufficient for our numbers, but 
is using the default archive script. In retrospect, this data should 
have been migrated to a secondary MariaDB instance, but that train has 
passed.



The format of the archive files is not well documented. Does anyone 
have a program (python/C/whatever) that will read a job_table_archive 
file and decode it into a parsable structure?


Douglas O’Neal, Ph.D. (contractor)

Manager, HPC Systems Administration Group, ITOG

Frederick National Laboratory for Cancer Research

Leidos Biomedical Research, Inc.

Phone: 301-228-4656

Email: Douglas.O’n...@nih.gov 
<mailto:Doug%20O'Neal%20%3cDouglas.O'n...@nih.gov%3e>



-- 
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-le...@lists.schedmd.com


[Bug 2061304] Re: FileZilla: Uploading any file causes a core dump in 24.04 beta

2024-05-28 Thread Brian Kelly
*** This bug is a duplicate of bug 2061954 ***
https://bugs.launchpad.net/bugs/2061954

Confirmed: after the 'upgrade' (total disaster) it also affects Kubuntu 24.04.
Identical symptoms to other posters.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2061304

Title:
  FileZilla: Uploading any file causes a core dump in 24.04 beta

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/filezilla/+bug/2061304/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Re: [ansible-project] set module_defaults globally?

2024-05-28 Thread Brian Coca
At that point copy the module into a custom one and set the defaults you want.

-- 
--
Brian Coca (he/him/yo)

-- 
You received this message because you are subscribed to the Google Groups 
"Ansible Project" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to ansible-project+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/ansible-project/CACVha7ejfcpTdfKbv%3DXz3%3D5zqaUKPekLrvAKwmEgH7Gx_HNYfA%40mail.gmail.com.


[Bug 2067120] Re: apport-retrace crashed with subprocess.CalledProcessError in check_call(): Command '['dpkg', '-x', '/srv/vms/apport-retrace/Ubuntu 24.04/apt/var/cache/apt/archives//base-files_13ubun

2024-05-28 Thread Brian Murray
** Information type changed from Private to Public

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2067120

Title:
  apport-retrace crashed with subprocess.CalledProcessError in
  check_call(): Command '['dpkg', '-x', '/srv/vms/apport-retrace/Ubuntu
  24.04/apt/var/cache/apt/archives//base-files_13ubuntu9_amd64.deb',
  '/tmp/apport_sandbox_zj9wto2z']' returned non-zero exit status 2.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apport/+bug/2067120/+subscriptions



[RBW] Re: DIY build or order complete?

2024-05-28 Thread Brian Forsee
I agree with Ken regarding the most difficult part being collecting 
compatible parts. Maybe if you ask nicely riv will sell you a kit-in-a-box? 
That way you can spend time learning and practicing the actual assembly and 
adjustment instead of worrying about if the parts in front of you even can 
play nice together.

Brian

On Tuesday, May 28, 2024 at 11:24:11 AM UTC-5 Ken Yokanovich wrote:

> IMHO, your greatest challenge will be finding and collecting all of the 
> parts necessary to build the bike. I think the key issue being 
> compatibility, when to ignore and when to respect it. Rivendell World 
> Headquarters does a fabulous job when it comes to mechanical wisdom and 
> experience with what works/doesn't. Unless you have experience and a home 
> shop stocked with components and incidentals, I think you will probably 
> wind up spending more building a complete bike yourself. (Even excluding 
> the cost for specialized tools that may be required.)
>
> I strongly encourage you to explore bicycle maintenance on your own, 
> perhaps experiment on an existing used bike. I was VERY young when 
> beginning my bicycle (dis)/assembly and repair. I destroyed a lot of parts 
> in my ignorance and learning experience. Even after YEARS of experience, I 
> learned TONS more later when attending professional training and continued 
> to learn from co-workers and experience with almost every repair while 
> employed as a professional bike mechanic.  No longer working in the 
> industry, I am still always learning. 
>
> On Tuesday, May 28, 2024 at 10:32:14 AM UTC-5 Michael wrote:
>
>> Hi all, 
>> Ordered a Sam as my first Riv but unsure whether or not I should tackle 
>> building it up myself or just let Riv have at it. I have never built a bike 
>> before but I do have a workshop and am good with tools/mechanically 
>> inclined. 
>>
>> Are there any specific steps that you would absolutely not recommend a 
>> beginner attempt? By the time i purchase specialty tools, it may have been 
>> wiser to just order it complete? 
>>
>> Let me know what you guys think, I really don't want to do something 
>> stupid!
>>
>> Thanks,
>>
>

-- 
You received this message because you are subscribed to the Google Groups "RBW 
Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to rbw-owners-bunch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/rbw-owners-bunch/635c9319-6c56-449c-9f1e-c7c06201b462n%40googlegroups.com.


Re: About the conflict between XFS inode recycle and VFS rcu-walk

2024-05-28 Thread Brian Foster
On Mon, May 27, 2024 at 01:18:23AM +0100, Al Viro wrote:
> On Mon, May 27, 2024 at 07:51:39AM +0800, Ian Kent wrote:
> 
> > Indeed, that's what I found when I had a quick look.
> > 
> > 
> > Maybe a dentry (since that's part of the subject of the path walk and inode
> > is readily
> > 
> > accessible) flag could be used since there's opportunity to set it in vfs
> > callbacks that
> > 
> > are done as a matter of course.
> 
> You might recheck ->d_seq after fetching ->get_link there; with XFS
> ->get_link() unconditionally failing in RCU mode that would prevent
> this particular problem.  But it would obviously have to be done
> in pick_link() itself (and I refuse to touch that area in 5.4 -
> carrying those changes across the e.g. 5.6 changes in pathwalk
> machinery is too much).
> 

Ian sent a patch along those lines a couple years or so ago:

https://lore.kernel.org/linux-fsdevel/164180589176.86426.501271559065590169.st...@mickey.themaw.net/

I'm still not quite sure why we didn't merge this, at least as a bandaid
fix for the symlink variant of this particular problem..?

Brian

> And it's really just the tip of the iceberg - e.g. I'd expect a massive
> headache in ACL-related part of permission checks, etc.
> 




Re: new git update fails with fatal: fetch-pack: invalid index-pack output

2024-05-28 Thread Brian Inglis via Cygwin

On 2024-05-27 16:10, Dan Shelton via Cygwin wrote:

I can replicate the 'fatal: fetch-pack: invalid index-pack output'
error with https://github.com/gcc-mirror/gcc.git, but only every 11-20
attempts.

I think this is a race condition somewhere, maybe in the threading code?


Stack Overflow suggestions include: other git versions in the path; a bad 
downstream proxy cache; a slow or invasive network security appliance, which 
may be bypassed with ssh or VPN URIs; or low resource limits in containers, 
which can be relieved by raising the limits or reducing pack sizes:

git config pack.windowMemory 10m
git config pack.packSizeLimit 20m

Huge repositories can also be worked around with a shallow clone (--depth=1).
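To make the pack-limit workaround concrete, here is a minimal sketch that applies the two settings locally in a throwaway repository; the 10m/20m values are the thread's starting points, not tuned recommendations.

```shell
# Sketch: set the pack limits per-repository in a throwaway repo
# (values are illustrative starting points from this thread).
cd "$(mktemp -d)"
git init -q demo
cd demo
git config pack.windowMemory 10m
git config pack.packSizeLimit 20m

# Read the effective values back.
git config pack.windowMemory
git config pack.packSizeLimit
```

For very large repositories, a shallow clone (`git clone --depth=1 <url>`) is the complementary workaround, since it transfers only the latest snapshot instead of the full history.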

--
Take care. Thanks, Brian Inglis  Calgary, Alberta, Canada

La perfection est atteinte   Perfection is achieved
non pas lorsqu'il n'y a plus rien à ajouter  not when there is no more to add
mais lorsqu'il n'y a plus rien à retirer but when there is no more to cut
-- Antoine de Saint-Exupéry

--
Problem reports:  https://cygwin.com/problems.html
FAQ:  https://cygwin.com/faq/
Documentation:https://cygwin.com/docs.html
Unsubscribe info: https://cygwin.com/ml/#unsubscribe-simple


Re: calm: SPDX licence list data update please

2024-05-28 Thread Brian Inglis via Cygwin-apps

On 2024-05-27 15:15, Jon Turney via Cygwin-apps wrote:

On 24/05/2024 17:08, Brian Inglis via Cygwin-apps wrote:


Can we please get the SPDX licence list data updated in calm to 3.24 sometime 
if possible as the licences complained about below have been in 


I thought I wrote about this the last time you asked, but obviously not.


Thought not, but after recent reminder, not so sure now ;^>

This is not quite straightforward, as the system python on sourceware is 
currently python3.6, and the last supported nexB/license-expression on that is 
30.0.0, and moving to a later one has some wrinkles, since various pieces of 
interconnected stuff aren't venv'd (yet?).



releases for nearly a year since 3.21:



If not, perhaps I could be of some help if I knew requirements?


So, there aren't any requirements here except "validate the SPDX license 
expression to detect maintainer mistakes and typos".


It looks like using that python module might have been a mistake.

I'm not sure why it needs to contain its own version of the license data; 
ideally we'd have something that read the official SPDX data (rate-limited to 
once per day or something; it looks like it would be possible to do this by 
feeding our own license list into the module rather than using its built-in 
one, but one could hope for this to be built in already...)


There have been changes in how to specify exceptions using WITH.

It would also be useful if it could be taught to accept 'LicenseRef-.*' 
identifiers.


Ditto ExceptionRef-.* but that and LicenseRef-.* do not seem to be allowed by 
PEP 639, as they unrealistically expect projects to change existing licences, 
whereas we have to deal with historical reality like Fedora!


So, suggestions on a different module to use, or patches to make this work 
better, or cogent arguments why we should just remove this validation are all 
welcome.


How about if we delegate licence validation to cygport, as someone recently 
offered, rather than doing it in calm as currently: with the current Cygwin 
python, cygport adds a licence validation hint to the src hint, and if that 
hint is absent, calm validates as it does now?


Would we or should we also allow specifying LICENSE_URI (as I have been doing) 
like PEP 639 license-files, with defaults searched as suggested:


"LICEN[CS]E*", "COPYING*", "NOTICE*", "AUTHORS*"?

where globs and source paths are allowed as usual in cygport files, and 
directories may match these paths, implicitly including file entries, but no 
file *contents* checked, unless we see a need in future, to generate and 
validate licences.



You can also now remove the exceptions in calm/fixes.py(licmap):


Thanks, will do so.


Cheers!



Re: [prometheus-users] how to get count of no.of instance

2024-05-28 Thread 'Brian Candler' via Prometheus Users
Those mangled screenshots are no use. What I would need to see are the 
actual results of the two queries, from the Prometheus web interface (not 
Grafana), in plain text: e.g.

foo{bar="baz",qux="abc"} 42.0

...with the *complete* set of labels, not expurgated. That's what's needed 
to formulate the join query.

On Tuesday 28 May 2024 at 13:23:21 UTC+1 Sameer Modak wrote:

> Hello Brian,
>
> Actually tried as you suggested earlier but when i execute it says no data 
> . So below are the individual query ss , so if i ran individually they give 
> the output
>
> On Sunday, May 26, 2024 at 1:24:10 PM UTC+5:30 Brian Candler wrote:
>
>> The labels for the two sides of the division need to match exactly.
>>
>> If they match 1:1 except for additional labels, then you can use
>> xxx / on (foo,bar) yyy   # foo,bar are the matching labels
>> or
>> xxx / ignoring (baz,qux) zzz   # baz,qux are the labels to ignore
>>
>> If they match N:1 then you need to use group_left or group_right.
>>
>> If you show the results of the two halves of the query separately then we 
>> can be more specific. That is:
>>
>> sum(kafka_consumergroup_lag{cluster=~"$cluster",consumergroup=~"$consumergroup",topic=~"$topic"})
>>  
>> by (consumergroup, topic) 
>>
>> count(up{job="prometheus.scrape.kafka_exporter"})
>>
>> On Sunday 26 May 2024 at 08:28:10 UTC+1 Sameer Modak wrote:
>>
>>> I tried the same i m not getting any data post adding below 
>>>
>>> sum(kafka_consumergroup_lag{cluster=~"$cluster",consumergroup=~
>>> "$consumergroup",topic=~"$topic"}) by (consumergroup, topic) / count(up{
>>> job="prometheus.scrape.kafka_exporter"})
>>>
>>> On Saturday, May 25, 2024 at 11:53:44 AM UTC+5:30 Ben Kochie wrote:
>>>
>>>> You can use the `up` metric
>>>>
>>>> sum(...)
>>>> /
>>>> count(up{job="kafka"})
>>>>
>>>> On Fri, May 24, 2024 at 5:53 PM Sameer Modak  
>>>> wrote:
>>>>
>>>>> Hello Team,
>>>>>
>>>>> I want to know the no of instance data sending to prometheus. How do i 
>>>>> formulate the query .
>>>>>
>>>>>
>>>>> Basically i have below working query but issues is we have 6  
>>>>> instances hence its summing value of all instances. Instead we just need 
>>>>> value from one instance.
>>>>> sum(kafka_consumergroup_lag{cluster=~"$cluster",consumergroup=~
>>>>> "$consumergroup",topic=~"$topic"})by (consumergroup, topic)
>>>>> I was thinking to divide it / 6 but it has to be variabalise on runtime
>>>>> if 3 exporters are running then it value/3 to get exact value.
>>>>>
>>>>> -- 
>>>>> You received this message because you are subscribed to the Google 
>>>>> Groups "Prometheus Users" group.
>>>>> To unsubscribe from this group and stop receiving emails from it, send 
>>>>> an email to prometheus-use...@googlegroups.com.
>>>>> To view this discussion on the web visit 
>>>>> https://groups.google.com/d/msgid/prometheus-users/fa5f309f-779f-45f9-b5a0-430b75ff0884n%40googlegroups.com
>>>>>  
>>>>> <https://groups.google.com/d/msgid/prometheus-users/fa5f309f-779f-45f9-b5a0-430b75ff0884n%40googlegroups.com?utm_medium=email_source=footer>
>>>>> .
>>>>>
>>>>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/9633348b-5d27-409e-b28f-e1e32e8af6b0n%40googlegroups.com.


[Bug 2067290] Re: [needs-packaging] xlnx-ddr-qos package from Xilinx

2024-05-27 Thread Brian Murray
*** This is an automated message ***

This bug is tagged needs-packaging which identifies it as a request for
a new package in Ubuntu.  As a part of the managing needs-packaging bug
reports specification,
https://wiki.ubuntu.com/QATeam/Specs/NeedsPackagingBugs, all needs-
packaging bug reports have Wishlist importance.  Subsequently, I'm
setting this bug's status to Wishlist.

** Changed in: ubuntu
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2067290

Title:
  [needs-packaging] xlnx-ddr-qos package from Xilinx

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/2067290/+subscriptions



[Bug 2067254] Re: [needs-packaging] xlnx-axi-qos package from Xilinx

2024-05-27 Thread Brian Murray
*** This is an automated message ***

This bug is tagged needs-packaging which identifies it as a request for
a new package in Ubuntu.  As a part of the managing needs-packaging bug
reports specification,
https://wiki.ubuntu.com/QATeam/Specs/NeedsPackagingBugs, all needs-
packaging bug reports have Wishlist importance.  Subsequently, I'm
setting this bug's status to Wishlist.

** Changed in: ubuntu
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2067254

Title:
  [needs-packaging] xlnx-axi-qos package from Xilinx

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/2067254/+subscriptions



Re: new git update fails with fatal: fetch-pack: invalid index-pack output

2024-05-27 Thread Brian Inglis via Cygwin

On 2024-05-26 16:44, David Dyck via Cygwin wrote:

After updating I still get the same error.

$ git clone -v https://github.com/lxml/lxml.git
Cloning into 'lxml'...
POST git-upload-pack (175 bytes)
POST git-upload-pack (gzip 8652 to 4282 bytes)
remote: Enumerating objects: 33941, done.
remote: Counting objects: 100% (3786/3786), done.
remote: Compressing objects: 100% (1328/1328), done.
remote: Total 33941 (delta 2360), reused 3474 (delta 2243), pack-reused
30155
Receiving objects: 100% (33941/33941), 20.20 MiB | 17.42 MiB/s, done.
fatal: fetch-pack: invalid index-pack output


$ cygcheck -srv >cygcheck.out
cygcheck: dump_sysinfo: GetVolumeInformation() for drive S: failed: 53

$ git --version
git version 2.45.1

$ cygcheck -c git
Cygwin Package Information
Package              Version         Status
git                  2.45.1-1        OK

$  type git
git is hashed (/usr/bin/git)

attached cygcheck.out


Retry running git prefixed with PATH=/usr/bin:/bin/:/usr/sbin:/sbin
ISTR in the past having to lose MS dirs from my Cygwin PATH.
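A quick way to test that suggestion, assuming a standard Cygwin layout:

```shell
# Sketch: run git once with a minimal Cygwin-only PATH to rule out
# another git installation (e.g. Git for Windows) earlier in PATH.
PATH=/usr/bin:/bin:/usr/sbin:/sbin git --version
```

If the version reported here differs from a plain `git --version`, a foreign git earlier in PATH is the likely culprit.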





Re: [New] lang/nelua

2024-05-27 Thread Brian Callahan
Hello Nicolas --

On 5/27/2024 11:51 AM, Nicolas Sampaio wrote:
> Hi,
> 
> home: https://nelua.io/ <https://nelua.io/>
> 
> comment: programming language with a lua flavor.
> 
> descr: Nelua is a minimal, statically-typed and meta-programmable 
> systems programming language heavily inspired by Lua, which compiles to
> C and native code.
> 
> I'm open to suggestions.
> 
> Reis
As a step in the right direction, how about the attached version?

The tests results:
[] runner | 9 successes / 8 failures / 0.174995 seconds
385 successes / 0 skipped / 123 failures / 8.737528 seconds

And all the failures look the same; my guess is if you can fix this all
the tests will pass:
./spec/tools/expect.lua:121: expected success status in run:
...a-20240113/nelua-lang-20240113/lualib/nelua/utils/fs.lua:174: attempt
to index a nil value (local 'p')
stack traceback:
...a-20240113/nelua-lang-20240113/lualib/nelua/utils/fs.lua:174:
in function 'nelua.utils.fs.abspath'
...a-20240113/nelua-lang-20240113/lualib/nelua/utils/fs.lua:297:
in function 'nelua.utils.fs.makepath'
...a-20240113/nelua-lang-20240113/lualib/nelua/utils/fs.lua:305:
in function 'nelua.utils.fs.makepath'
...a-20240113/nelua-lang-20240113/lualib/nelua/utils/fs.lua:305:
in function 'nelua.utils.fs.makepath'
...a-20240113/nelua-lang-20240113/lualib/nelua/utils/fs.lua:305:
in function 'nelua.utils.fs.makepath'
...a-20240113/nelua-lang-20240113/lualib/nelua/utils/fs.lua:305:
in function 'nelua.utils.fs.makepath'
...a-20240113/nelua-lang-20240113/lualib/nelua/utils/fs.lua:321:
in function 'nelua.utils.fs.makefile'
...-20240113/nelua-lang-20240113/lualib/nelua/ccompiler.lua:292:
in function 'nelua.ccompiler.compile_code'
...lua-20240113/nelua-lang-20240113/lualib/nelua/runner.lua:229:
in upvalue 'run'
...lua-20240113/nelua-lang-20240113/lualib/nelua/runner.lua:262:
in function
<...lua-20240113/nelua-lang-20240113/lualib/nelua/runner.lua:261>
... (skipping 3 levels)
./spec/tools/expect.lua:100: in upvalue 'run'
./spec/tools/expect.lua:120: in function 'spec.tools.expect.run'
./spec/runner_spec.lua:186: in function <./spec/runner_spec.lua:185>
[C]: in function 'xpcall'
(...lester...)



stack traceback:
[C]: in function 'error'
...40113/nelua-lang-20240113/lualib/nelua/utils/errorer.lua:15:
in function 'nelua.utils.errorer.assertf'
./spec/tools/expect.lua:121: in function 'spec.tools.expect.run'
./spec/runner_spec.lua:186: in function <./spec/runner_spec.lua:185>
[C]: in function 'xpcall'
...3/nelua-lang-20240113/lualib/nelua/thirdparty/lester.lua:309:
in function 'nelua.thirdparty.lester.it'
./spec/runner_spec.lua:185: in local 'func'
...3/nelua-lang-20240113/lualib/nelua/thirdparty/lester.lua:214:
in function 'nelua.thirdparty.lester.describe'
./spec/runner_spec.lua:10: in main chunk
[C]: in function 'require'
    spec/init.lua:19: in main chunk
[C]: in ?

~Brian


nelua.tgz
Description: application/compressed


Re: [RBW] Silver Hub sound

2024-05-27 Thread Brian Turner
Glad to be of some help! It’s probably a bit louder sounding because it’s 
echoing inside my garage and off my garage door… but I don’t find it loud or 
distracting compared to others. Now, my Crust has a Hope rear hub, and that’s 
even louder than a White, possibly louder than Chris King too. I’ve had several 
Deore XT hubs and they are nice and quiet. I have one weird XT that’s about 15 
years old that makes absolutely NO sound whatsoever. Dead silent. My favorite 
hub sound? Phil Wood, hands down!

- Brian

-- 
You received this message because you are subscribed to the Google Groups "RBW 
Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to rbw-owners-bunch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/rbw-owners-bunch/77F38AF1-5779-4C73-AD48-89BAEF3DA0E0%40gmail.com.


Re: [RBW] Silver Hub sound

2024-05-27 Thread Brian Watts
Thanks for everyone's input. Brian, that video is super helpful. I would say 
that is pretty loud. On a 1-10 scale: Deore 2, Silver 6, White 8.5. I think 
I'm looking for a 4 on my scale.

On Monday, May 27, 2024 at 8:31:43 AM UTC-7 Brian Turner wrote:

> If it helps, here’s a video I made of the Silver hub sound on my Gus:
>
> https://photos.app.goo.gl/6ae4EXFGqjZRvAwA7
>

-- 
You received this message because you are subscribed to the Google Groups "RBW 
Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to rbw-owners-bunch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/rbw-owners-bunch/be1747e8-b6c8-4eb6-9076-3c6dd049477fn%40googlegroups.com.


Re: [RBW] Silver Hub sound

2024-05-27 Thread Brian Turner
If it helps, here’s a video I made of the Silver hub sound on my Gus:

https://photos.app.goo.gl/6ae4EXFGqjZRvAwA7

-- 
You received this message because you are subscribed to the Google Groups "RBW 
Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to rbw-owners-bunch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/rbw-owners-bunch/33C40286-5739-4665-A6B5-B6FE85FE51BA%40gmail.com.


Re: [prometheus-users] Pod with Pending phase is in endpoints scraping targets (Prometheus 2.46.0)

2024-05-27 Thread 'Brian Candler' via Prometheus Users
Have you looked in the changelog for Prometheus? I found:

## 2.51.0 / 2024-03-18

* [BUGFIX] Kubernetes SD: Pod status changes were not discovered by 
Endpoints service discovery #13337 (fixes #11305, which looks similar 
to your problem)

## 2.50.0 / 2024-02-22

* [ENHANCEMENT] Kubernetes SD: Check preconditions earlier and avoid 
unnecessary checks or iterations in kube_sd. #13408

I'd say it's worth trying the latest release, 2.51.2.

On Monday 27 May 2024 at 12:21:01 UTC+1 Vu Nguyen wrote:

> Hi,
>
> Do you have a response to this thread? Has anyone ever encountered the 
> issue?
>
> Regards,
> Vu
>
> On Mon, May 20, 2024 at 2:56 PM Vu Nguyen  wrote:
>
>> Hi,
>>
>> With endpoints scraping role, the job should scrape POD endpoint that is 
>> up and running. That is what we are expected. 
>>
>> I think by concept, K8S does not create an endpoint if Pod is in other 
>> phases like Pending, Failed, etc.
>>
>> In our environments, Prometheus 2.46.0 on K8S v1.28.2, we currently have 
>> issues: 
>> 1) POD is up and running from `kubectl get pod`, but from Prometheus 
>> discovery page, it shows:
>> __meta_kubernetes_pod_phase="Pending" 
>> __meta_kubernetes_pod_ready="false"  
>>
>> 2) The the endpoints job discover POD targets with pod phase=`Pending`.
>>
>> Those issues disappear after we restart Prometheus pod.  
>>
>> I am not sure if 1) that is K8S that does not trigger event after POD 
>> phase changes so Prometheus is not able to refresh its endpoints discovery 
>> or 2) it is a known problem of Prometheus? 
>>
>> And do you think it is worth to add the following relabeling rule to 
>> endpoints job role?
>>
>>   - source_labels: [ __meta_kubernetes_pod_phase ]
>> regex: Pending|Succeeded|Failed|Completed
>> action: drop
>>
>> Thanks, Vu 
>>
>> -- 
>> You received this message because you are subscribed to the Google Groups 
>> "Prometheus Users" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to prometheus-use...@googlegroups.com.
>> To view this discussion on the web visit 
>> https://groups.google.com/d/msgid/prometheus-users/c0f97ed7-1421-4c7c-a57d-2d301bb12418n%40googlegroups.com
>>  
>> 
>> .
>>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/0641d658-e295-418b-ae00-af6ce83e7ccbn%40googlegroups.com.


Re: [RBW] Silver Hub sound

2024-05-27 Thread Brian Turner
I can’t seem to find the link, but a few years ago, Will posted a video of the 
sound of a Silver hub.

I’d say the description of being slightly louder than a Deore XT hub is 
accurate. But, nowhere near as loud as a White hub, that’s for sure.

- Brian 
Lexington KY

-- 
You received this message because you are subscribed to the Google Groups "RBW 
Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to rbw-owners-bunch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/rbw-owners-bunch/82DACF27-2464-4295-8C76-4E9A3426CCD3%40gmail.com.


[RBW] Silver Hub sound

2024-05-27 Thread Brian Watts
Hi all! Can anyone shed light on the sound of Riv Silver Hubs? I have a 
White Ind. T11 that is quite loud, I want something more subtle on my Sam 
build. I'm familiar with the very quiet Deore option and wouldn't mind 
something a touch louder. I'm in contact with Rich about my wheelset and his 
reply was: Loud!
Thanks to anyone with input!
Brian

-- 
You received this message because you are subscribed to the Google Groups "RBW 
Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to rbw-owners-bunch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/rbw-owners-bunch/0a554455-7817-45ef-809d-0b47d5a2a754n%40googlegroups.com.


Re: [M100] Backgammon

2024-05-26 Thread Brian White
In an archived discussion in the M100SIG someone said they had it mostly
done, but no game itself.

https://github.com/search?q=repo%3ALivingM100SIG%2FLiving_M100SIG%20backgammon=code

-- 
bkw

On Sun, May 26, 2024, 8:08 PM David Plass  wrote:

> Yes, it looks like it was a Compuserve game.
>
>
>
> On Sun, May 26, 2024 at 12:38 PM John R. Hogerhuis 
> wrote:
>
>> Never seen it, but I never looked either.
>>
>> One possibility is that it was an online (BBS or information service)
>> game.
>>
>> -- John.
>>
>


Re: new git update fails with fatal: fetch-pack: invalid index-pack output

2024-05-26 Thread Brian Inglis via Cygwin

On 2024-05-26 06:03, Adam Dinwoodie via Cygwin wrote:

On Sun, 26 May 2024 at 05:10, David Dyck via Cygwin wrote:


I upgraded to the most recent git and I get the following error
(stable  2.45.1-1  x86_64  8597 KiB  2024-05-25 18:58)

$ git clone -v https://github.com/lxml/lxml.git
Cloning into 'lxml'...
POST git-upload-pack (175 bytes)
POST git-upload-pack (gzip 8652 to 4281 bytes)
remote: Enumerating objects: 33933, done.
remote: Counting objects: 100% (3778/3778), done.
remote: Compressing objects: 100% (1322/1322), done.
remote: Total 33933 (delta 2356), reused 3471 (delta 2241), pack-reused
30155
Receiving objects: 100% (33933/33933), 20.19 MiB | 15.38 MiB/s, done.
fatal: fetch-pack: invalid index-pack output

When I downgraded to 2.43.0-1 (x86_64, 8402 KiB, 2024-01-07 20:32),
I was able to download the repository:

$ git clone -v https://github.com/lxml/lxml.git
Cloning into 'lxml'...
POST git-upload-pack (175 bytes)
POST git-upload-pack (gzip 8652 to 4281 bytes)
remote: Enumerating objects: 33933, done.
remote: Counting objects: 100% (3778/3778), done.
remote: Compressing objects: 100% (1322/1322), done.
remote: Total 33933 (delta 2356), reused 3471 (delta 2241), pack-reused
30155
Receiving objects: 100% (33933/33933), 20.19 MiB | 13.13 MiB/s, done.
Resolving deltas: 100% (16518/16518), done.


Thanks for the report. This is working fine for me locally. Can you
please upgrade, check the problem is still recurring, and provide the
output from `cygcheck -srv >cygcheck.out`?


I got the same symptom yesterday from the previous git version on the recently 
updated curl repo, and just put it down to traffic, as `git pull --ff` worked 
immediately after, as did a later `git remote update`:


$ git remote update
remote: Enumerating objects: 6617, done.
remote: Counting objects: 100% (4385/4385), done.
remote: Compressing objects: 100% (280/280), done.
error: RPC failed; curl 92 HTTP/2 stream 5 was not closed cleanly: CANCEL (err 
8)
error: 4751 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output
$ git --version
git version 2.43.0

Of course, it could also be some issue with my latest curl build! ;^>




Re: [prometheus-users] how to get count of no.of instance

2024-05-26 Thread 'Brian Candler' via Prometheus Users
The labels for the two sides of the division need to match exactly.

If they match 1:1 except for additional labels, then you can use
xxx / on (foo,bar) yyy   # foo,bar are the matching labels
or
xxx / ignoring (baz,qux) zzz   # baz,qux are the labels to ignore

If they match N:1 then you need to use group_left or group_right.
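Applied to the kafka lag query from this thread, an N:1 join could look like the following sketch; the label names and the `$cluster`/`$consumergroup`/`$topic` Grafana variables are taken from the earlier messages, and `count(up{...})` produces a single series with no labels, so the match is on the empty label set:

```promql
# Per-(consumergroup, topic) lag divided by the number of live exporter
# instances; group_left handles the many-to-one match against the count.
sum by (consumergroup, topic) (
  kafka_consumergroup_lag{cluster=~"$cluster",consumergroup=~"$consumergroup",topic=~"$topic"}
)
/ on() group_left()
count(up{job="prometheus.scrape.kafka_exporter"})
```

Alternatively, wrapping the denominator in `scalar()` sidesteps label matching entirely when the count is guaranteed to be a single series.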

If you show the results of the two halves of the query separately then we 
can be more specific. That is:

sum(kafka_consumergroup_lag{cluster=~"$cluster",consumergroup=~"$consumergroup",topic=~"$topic"})
 
by (consumergroup, topic) 

count(up{job="prometheus.scrape.kafka_exporter"})

On Sunday 26 May 2024 at 08:28:10 UTC+1 Sameer Modak wrote:

> I tried the same i m not getting any data post adding below 
>
> sum(kafka_consumergroup_lag{cluster=~"$cluster",consumergroup=~
> "$consumergroup",topic=~"$topic"}) by (consumergroup, topic) / count(up{
> job="prometheus.scrape.kafka_exporter"})
>
> On Saturday, May 25, 2024 at 11:53:44 AM UTC+5:30 Ben Kochie wrote:
>
>> You can use the `up` metric
>>
>> sum(...)
>> /
>> count(up{job="kafka"})
>>
>> On Fri, May 24, 2024 at 5:53 PM Sameer Modak  
>> wrote:
>>
>>> Hello Team,
>>>
>>> I want to know the no of instance data sending to prometheus. How do i 
>>> formulate the query .
>>>
>>>
>>> Basically i have below working query but issues is we have 6  instances 
>>> hence its summing value of all instances. Instead we just need value from 
>>> one instance.
>>> sum(kafka_consumergroup_lag{cluster=~"$cluster",consumergroup=~
>>> "$consumergroup",topic=~"$topic"})by (consumergroup, topic)
>>> I was thinking to divide it / 6 but it has to be variabalise on runtime
>>> if 3 exporters are running then it value/3 to get exact value.
>>>
>>> -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "Prometheus Users" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to prometheus-use...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/prometheus-users/fa5f309f-779f-45f9-b5a0-430b75ff0884n%40googlegroups.com
>>>  
>>> 
>>> .
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/4487c977-aac5-478b-a81d-47c7edace010n%40googlegroups.com.


[R-SIG-Mac] FLIBS for binary R installations (was library 'quadmath' not found)

2024-05-26 Thread Prof Brian Ripley via R-SIG-Mac

This is only about use of CRAN binary installations of R, and
specifically the arm64 build of R 4.4.0.  It is rather technical ... but
with a simple take-home message at the end.

Because CRAN binary distributions contain copies of the Fortran
runtime libraries there are a number of other possibilities.
The distributed etc/Makeconf contains

FLIBS =  -L/opt/gfortran/lib/gcc/aarch64-apple-darwin20.0/12.2.0 
-L/opt/gfortran/lib -lgfortran -lemutls_w -lquadmath


(Only the arm64 version contains -lemutls_w: it is a static library so
only needed when compiling Fortran code, and probably only Fortran
code using OpenMP and that has to be linked by gfortran and so does not
use FLIBS.)

https://cran.r-project.org/doc/manuals/r-devel/R-admin.html#macOS-packages

describes how to override this, either personally (via ~/.R/Makevars)
or for all users (set R_MAKEVARS_SITE).

FLIBS is used when installing a package that contains Fortran source
code or links to Fortran libraries, most prominently BLAS and LAPACK
libraries.  The way linking works is rather different on macOS
from other platforms.


Scenarios
=

a) If you want to compile a package from sources containing Fortran
code, you will need the recommended Fortran compiler and installing
that will match the FLIBS in the distribution. It will also work to
use

FLIBS = -L/Library/Frameworks/R.framework/Resources/lib -lgfortran 
-lquadmath


NB: a large majority of the Fortran-using packages do not make any calls 
to the Fortran runtime libraries, so any setting of FLIBS including 
empty will work provided it does not refer to non-existent paths.


If you use a different compiler (such as those for gfortran 13 or 14 
mentioned in the manual), you will need to set FLIBS appropriately but 
the above will probably do.



b) Compiling a package from source which calls BLAS or LAPACK but not 
Fortran will need to find the Fortran runtime libraries.  As the 
distributed BLAS and LAPACK libraries are already built against those 
included with R and hardcode their paths, you could also use


FLIBS =


c) Installing a CRAN binary package that uses Fortran source code.
This will have a .so linked against the Fortran libraries from the
CRAN binary R, e.g.

auk2% otool -L Hmisc/libs/Hmisc.so
Hmisc/libs/Hmisc.so:
Hmisc.so (compatibility version 0.0.0, current version 0.0.0)

/Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libgfortran.5.dylib
 (compatibility version 6.0.0, current version 6.0.0)

/Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libquadmath.0.dylib
 (compatibility version 1.0.0, current version 1.0.0)

/Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/lib/libR.dylib 
(compatibility version 4.4.0, current version 4.4.0)

So FLIBS is not used.  (Actually Hmisc is one of the many packages
with no calls to the Fortran runtime libraries.)

Ditto for a binary package which calls BLAS or LAPACK.

This may not be the case for binary packages from other repositories.


d) A very few packages compile a Fortran executable (cepreader,
x13binary, INLA).  As these are linked by gfortran, FLIBS is not used,
but the CRAN binary package for cepreader is linked against the paths
in the gfortran distribution so that is required.  For x13binary the
R-provided libs are used whereas INLA ships its own copies (but has
missed one).


It seems that life for those using binary builds would be simpler if
their etc/Makeconf contained

FLIBS = -L/Library/Frameworks/R.framework/Resources/lib -lgfortran 
-lquadmath


Until that is done, I recommend creating a ~/.R/Makevars file
containing that line.
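Concretely, that per-user override can be put in place like so (the heredoc content is exactly the FLIBS line recommended above; appending preserves any existing entries):

```shell
#!/bin/sh
# Create (or extend) ~/.R/Makevars with the binary-build-friendly FLIBS
# setting for CRAN binary installations of R on macOS.
mkdir -p "$HOME/.R"
cat >> "$HOME/.R/Makevars" <<'EOF'
FLIBS = -L/Library/Frameworks/R.framework/Resources/lib -lgfortran -lquadmath
EOF
```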



On 23/05/2024 17:03, Prof Brian Ripley via R-SIG-Mac wrote:

On 23/05/2024 16:00, Petar Milin wrote:
I have recently updated my R (4.4.0) and all the packages running on 
Sonoma (14.5) with Intel. When I try to install from GitHub with:


install_github("zdk123/SpiecEasi")

I get the error message:

ld: warning: search path 
'/opt/gfortran/lib/gcc/x86_64-apple-darwin20.0/12.2.0' not found

ld: warning: search path '/opt/gfortran/lib' not found

     is the important message.


ld: library 'quadmath' not found
clang: error: linker command failed with exit code 1 (use -v to see 
invocation)

make: *** [SpiecEasi.so] Error 1
ERROR: compilation failed for package ‘SpiecEasi’

I did

xcode-select --install

but that was done already.

I also installed

gfortran-4.2.3.pkg

but that did not help either.


Please remove it via pkgutil: it is way too old.


Can anyone advise, please?


Please read the R-admin manual.  To install a package using Fortran from 
sources you need to install 
https://mac.r-project.org/tools/gfortran-12.2-universal.pkg, also 
indicated via the tools page linked from 
https://cran.r-project.org/bin/macosx/





--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Emeritus Professor 

CVS: cvs.openbsd.org: ports

2024-05-25 Thread Brian Callahan
CVSROOT: /cvs
Module name: ports
Changes by: bcal...@cvs.openbsd.org 2024/05/25 05:22:48

Modified files:
sysutils/coreutils: Makefile distinfo 
sysutils/coreutils/patches: patch-Makefile_in 
Removed files:
sysutils/coreutils/patches: patch-src_split_c 

Log message:
Update to coreutils-9.5
Changelog: https://git.savannah.gnu.org/cgit/coreutils.git/tree/NEWS
sparc64 and bulk testing by tb@
ok tb@



[Bug 2067062] Re: [needs-packaging] ubuntu-x13s

2024-05-24 Thread Brian Murray
*** This is an automated message ***

This bug is tagged needs-packaging which identifies it as a request for
a new package in Ubuntu.  As a part of the managing needs-packaging bug
reports specification,
https://wiki.ubuntu.com/QATeam/Specs/NeedsPackagingBugs, all needs-
packaging bug reports have Wishlist importance.  Subsequently, I'm
setting this bug's status to Wishlist.

** Changed in: ubuntu
   Importance: Undecided => Wishlist

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2067062

Title:
   [needs-packaging] ubuntu-x13s

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+bug/2067062/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 2043820] Re: Unable to contact snap store on Xubuntu from Jammy to Noble

2024-05-24 Thread Brian Murray
** Changed in: snapd (Ubuntu Noble)
Milestone: None => noble-updates

** Changed in: snapd (Ubuntu)
Milestone: None => noble-updates

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2043820

Title:
  Unable to contact snap store on Xubuntu from Jammy to Noble

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/firefox/+bug/2043820/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 2057490] Re: Crash during upgrade from Jammy/Mantic to Noble

2024-05-24 Thread Brian Murray
Is this still happening with our current upgrade testing?

** Changed in: network-manager (Ubuntu)
   Status: New => Incomplete

** Tags added: cuqa-automated-testing

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2057490

Title:
  Crash during upgrade from Jammy/Mantic to Noble

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/network-manager/+bug/2057490/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Re: [Hpr] Posting question to the community about syndication rules

2024-05-24 Thread Brian K Navarette via Hpr
I think cutting out part of your show and posting it to HPR fits perfectly
with the guidelines about syndication.
That part of your show, after editing, falls under the following: "For this
reason we are only releasing material created exclusively for HPR"
Let's hear it!
brian-in-ohio


On Mon, May 20, 2024 at 10:12 AM Carl D Hamann via Hpr <
hpr@lists.hackerpublicradio.com> wrote:

>
> On Mon, May 20, 2024, 8:44 AM Thaj A. Sara via Hpr <
> hpr@lists.hackerpublicradio.com> wrote:
>
>> During the recording for our next episode, we spent a good deal of time
>> talking about HPR, specifically in response to Episode 4109 - The future of
>> HPR by knightwise.
>>
>
> I think that's definitely in the spirit of the guidelines, as the content
> is particularly relevant to HPR in a way that syndicating a whole show
> would not be.
>
>> recording a new intro specifically for HPR.
>>
>
> And I believe this observes the "letter" by making it "material created
> exclusively for HPR".
>
> Looking forward to hearing it in the feed,
> - Laindir
>
>> ___
> Hpr mailing list
> Hpr@lists.hackerpublicradio.com
> https://lists.hackerpublicradio.com/mailman/listinfo/hpr
>
___
Hpr mailing list
Hpr@lists.hackerpublicradio.com
https://lists.hackerpublicradio.com/mailman/listinfo/hpr


Re: [RBW] Sam Hillbornes Go Live Tomorrow

2024-05-24 Thread Brian Watts
I’m trying to come up with my color palette for a Peri-Sam as well. 
Anodized silver hub (rear) w/ SON front; matte silver quill rims, polished 
touring cantis (currently w/ pink and purple nuts), white VBC cranks 
(polished arms, black rings; may pick a new crank cap color). 
 May wait to choose colors after seeing the color in person when I take my 
parts to Riv. 
  Brian
On Monday, May 20, 2024 at 1:15:50 PM UTC-7 Ted Durant wrote:

> For those who are getting a Periwinkle Sam ... what color combos are you 
> planning for your build?
>
> I've ordered a purple Funky Monkey front cable hanger from Paul. Not sure 
> it's going to be a good match. I have Phil Wood blue bottom bracket dust 
> caps ... they'll probably match okay, not sure. The one that Riv built up 
> for photos seems to be straight up gray-silver-black, and IMO it looks very 
> nice. 
>
> Ted Durant
> Milwaukee WI USA
>

-- 
You received this message because you are subscribed to the Google Groups "RBW 
Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to rbw-owners-bunch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/rbw-owners-bunch/92eecb9f-ff5a-4751-869f-76ddfa12d2ccn%40googlegroups.com.


CVS: cvs.openbsd.org: ports

2024-05-24 Thread Brian Callahan
CVSROOT: /cvs
Module name: ports
Changes by: bcal...@cvs.openbsd.org 2024/05/24 10:16:02

Modified files:
games/stockfish: Makefile distinfo 
games/stockfish/patches: patch-src_Makefile 

Log message:
Update to Stockfish-16.1
Remove BROKEN-i386; builds and runs without issue on i386
Changelog:
https://github.com/official-stockfish/Stockfish/releases/tag/sf_16.1



Re: calm: SPDX licence list data update please

2024-05-24 Thread Brian Inglis via Cygwin-apps

Hi folks,

Can we please get the SPDX licence list data updated in calm to 3.24 sometime if 
possible as the licences complained about below have been in releases for nearly 
a year since 3.21:


On 2024-05-24 02:18, cygwin-no-re...@cygwin.com wrote:

INFO: package 'man-pages-linux': errors in license expression: ['Unknown 
license key(s): Linux-man-pages-1-para, Linux-man-pages-copyleft-var, 
Linux-man-pages-copyleft-2-para']


https://github.com/spdx/license-list-XML/releases/tag/v3.21

https://github.com/spdx/license-list-data/releases

https://github.com/spdx/license-list-data/tree/main/json
https://github.com/spdx/license-list-data/tree/main/jsonld

https://spdx.github.io/license-list-data/

If these are handled by PEP 639/pip/nexB/SPDX license-expression, perhaps 
someone could package it together with license-list-data and add them to calm's 
prerequisites?


If not, perhaps I could be of some help if I knew requirements?
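For what it's worth, the core of the check calm needs can be sketched with the stdlib alone, feeding in whichever license-list-data snapshot is current. The tiny KNOWN set below is illustrative only; the real keys would come from the license-list-data JSON:

```python
# Minimal sketch: tokenize a cygport LICENSE expression and report any
# keys absent from a supplied SPDX license list. This does not validate
# grammar (AND/OR/WITH precedence), only the license keys themselves.
import re

KNOWN = {"MIT", "BSD-3-Clause", "GPL-2.0-or-later", "Linux-man-pages-copyleft"}
OPERATORS = {"AND", "OR", "WITH"}

def unknown_keys(expression: str) -> list[str]:
    # License keys are letters, digits, '.', '+', '-'
    tokens = re.findall(r"[A-Za-z0-9.+-]+", expression)
    return [t for t in tokens
            if t not in OPERATORS
            and not t.startswith("LicenseRef-")   # user-defined refs pass
            and t not in KNOWN]

print(unknown_keys("MIT AND Linux-man-pages-copyleft"))  # -> []
print(unknown_keys("GPL-2.0-or-later AND Bogus-1.0"))    # -> ['Bogus-1.0']
```

Swapping the KNOWN set for the official JSON would let calm track SPDX releases without waiting on the Python module's bundled data.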

You can also now remove the exceptions in calm/fixes.py(licmap):

https://cygwin.com/git/?p=cygwin-apps/calm.git;a=blob;f=calm/fixes.py;h=1f67d131d49d68c93f96548af1947dd405b4f743;hb=HEAD#l150

for my packages dash, cpuid, units, grep, gzip, readline, unifont, bison, wget, 
libgcrypt, mingw64-*-/libidn, mingw64-*-/libidn2, mingw64-*-/curl, 
man-pages-{linux,posix}, vttest, tz{code,data}:


BSD-3-Clause AND GPL-2.0-or-later
dash/dash.cygport:LICENSE="BSD-3-Clause AND GPL-2.0-or-later"

GPL-2.0-or-later
cpuid/cpuid.cygport:LICENSE=GPL-2.0-or-later

GPL-3.0-or-later
units/units.cygport:LICENSE="GPL-3.0-only AND GFDL-1.3-only"

GPL-2.0-or-later
grep/grep.cygport:LICENSE=GPL-2.0-or-later
gzip/gzip.cygport:LICENSE=GPL-2.0-or-later
readline/readline.cygport:LICENSE=GPL-3.0-or-later

GPL-2.0-or-later WITH Font-exception-2.0 OR OFL-1.1,
unifont/unifont.cygport:LICENSE="(GPL-2.0-with-font-exception OR OFL-1.1) AND 
GPL-2.0-or-later AND LicenseRef-Unifoundry-Unifont-Public-Domain"


**I will update Unifont as I see GPL...-exception is now deprecated**
and 16 is in beta preview.

Can we just split these long quoted strings or do we need \ line continuations?
And does anyone know if there is a convention for splitting licence expressions 
in comments across lines?


GPL-3.0-or-later
bison/bison.cygport:LICENSE=GPL-3.0-or-later
wget/wget.cygport:LICENSE=GPL-3.0-or-later

LGPL-2.1-or-later AND GPL-2.0-or-later
libgcrypt/libgcrypt.cygport:LICENSE="LGPL-2.1-or-later AND GPL-2.0-or-later AND 
(GPL-2.0-only OR BSD-3-Clause) AND BSD-3-Clause"


(LGPL-3.0-or-later OR GPL-2.0-or-later) AND GPL-3.0-or-later,
libidn/libidn.cygport:LICENSE="LGPL-3.0-or-later AND GPL-2.0-or-later AND 
GPL-3.0-or-later AND GFDL-1.3-or-later"

libidn/mingw64-i686-libidn/mingw64-i686-libidn.cygport:LICENSE=LGPLv3+/GPLv2+/GPLv3+/GFDLv1.3+
libidn/mingw64-x86_64-libidn/mingw64-x86_64-libidn.cygport:LICENSE="LGPL-3.0-or-later 
AND GPL-2.0-or-later AND GPL-3.0-or-later AND GFDL-1.3-or-later"


(LGPL-3.0-or-later OR GPL-2.0-or-later) AND GPL-3.0-or-later AND 
Unicode-DFS-2016,
libidn2/libidn2.cygport:LICENSE="LGPL-3.0-or-later AND GPL-2.0-or-later AND 
GPL-3.0-or-later AND Unicode-TOU AND Unicode-DFS-2016"
libidn2/mingw64-i686-libidn2/mingw64-i686-libidn2.cygport:LICENSE="LGPL-3.0-or-later 
AND GPL-2.0-or-later AND GPL-3.0-or-later AND Unicode-TOU AND Unicode-DFS-2016"
libidn2/mingw64-x86_64-libidn2/mingw64-x86_64-libidn2.cygport:LICENSE="LGPL-3.0-or-later 
AND GPL-2.0-or-later AND GPL-3.0-or-later AND Unicode-TOU AND Unicode-DFS-2016"


curl
curl/curl.cygport:LICENSE=curl
curl/mingw64-i686-curl/mingw64-i686-curl.cygport:LICENSE=curl
curl/mingw64-x86_64-curl/mingw64-x86_64-curl.cygport:LICENSE=curl

Linux-man-pages-copyleft
man-pages-linux/man-pages-linux.cygport:LICENSE="MIT \
man-pages-posix/man-pages-posix.cygport:LICENSE=Linux-man-pages-copyleft

BSD-Source-Code
vttest/vttest.cygport:LICENSE=BSD-Source-Code

BSD-3-Clause AND Public-Domain
tzdata/tzdata.cygport:LICENSE=LicenceRef-IANA-TZ-Public-Domain
tzcode/tzcode.cygport:LICENSE=LicenceRef-IANA-TZ-Public-Domain

--
Take care. Thanks, Brian Inglis  Calgary, Alberta, Canada

La perfection est atteinte   Perfection is achieved
non pas lorsqu'il n'y a plus rien à ajouter  not when there is no more to add
mais lorsqu'il n'y a plus rien à retirer but when there is no more to cut
-- Antoine de Saint-Exupéry


CVS: cvs.openbsd.org: ports

2024-05-24 Thread Brian Callahan
CVSROOT: /cvs
Module name: ports
Changes by: bcal...@cvs.openbsd.org 2024/05/24 10:02:25

Modified files:
devel/astyle   : Makefile distinfo 

Log message:
Update to astyle-3.4.16
Changelog: https://astyle.sourceforge.net/notes.html



CVS: cvs.openbsd.org: ports

2024-05-24 Thread Brian Callahan
CVSROOT: /cvs
Module name: ports
Changes by: bcal...@cvs.openbsd.org 2024/05/24 09:55:04

Modified files:
sysutils/diffoscope: Makefile distinfo 
sysutils/diffoscope/pkg: PLIST 

Log message:
Update to diffoscope-268



Re: [Alpine-info] Gmail Access

2024-05-24 Thread Brian S. Baker [VIA BBUS] via Alpine-info
Andrew:

Thank you for that information. I didn’t realize that you could use a dash and 
then the HUP after the dash. I know that signal nine is only advisable when you 
don’t have a choice. I’ve had a few times when you have to use a signal 9 
because something locks up and you have to end it.

I remember when I was working with Eduardo when he was helping me with Alpine. 
When we were setting up all of these tokens, there were times when I had to 
signal my Alpine because I could not exit and that is because whatever master 
password or authentication strategy was just locking the entire system up. 

I guess the reason why I was using signal nine all the time was because when I 
was working as an online volunteer, someone would always say that they have to 
signal nine in a process. I’m glad that you don’t have to do a signal nine 
every single time, and now I understand what the HUP means after the dash: it 
means hang up, but I didn’t realize that a process hangs up like you would hang 
up a telephone he he he he he 
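The escalation described here can be written down as a short sequence: HUP first, SIGKILL only as a last resort. The `alpine` process name is just an example; any stuck process works the same way:

```shell
#!/bin/sh
# Graceful first: SIGHUP lets alpine sync open folders and drop its
# locks. Only fall back to SIGKILL if the process is truly wedged.
pid=$(pgrep -x alpine | head -n 1)
[ -n "$pid" ] || exit 0            # nothing to do

kill -HUP "$pid"
sleep 5
if kill -0 "$pid" 2>/dev/null; then
    kill -9 "$pid"                 # last resort: may leave stale locks
fi
```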

Have a great weekend, and I believe this one is a long one!

Brian



Sent from my iPhone

> On May 23, 2024, at 6:36 PM, Andrew C Aitchison  
> wrote:
> 
> On Thu, 23 May 2024, Brian S. Baker [VIA BBUS] via Alpine-info wrote:
> 
>> The way you take care of that is you give it a signal 9 kill and
>> this will release the locks process because one of them will be
>> terminated.
> 
> Please only use kill -9 as a last resort.
> 
> If alpine is still running, kill -HUP will be sufficient and will
> allow it to syncronize all open folders and remove locks safely.
> 
> kill -9 can leave files in unknown, unsafe states and destroy data.
> This is as bad as what the locks are trying to avoid.
> 
> --
> Andrew C. Aitchison  Kendal, UK
>   and...@aitchison.me.uk
___
Alpine-info mailing list
Alpine-info@u.washington.edu
http://mailman12.u.washington.edu/mailman/listinfo/alpine-info


Re: [Origami] Yet Another Birthday for the O-list!

2024-05-24 Thread Brian K. Webb via Origami
> On May 24, 2024, at 8:34 AM, Anne LaVin via Origami 
>  wrote:
> 
> On Thu, May 23, 2024 at 4:31 PM Laura R via Origami 
> mailto:origami@lists.digitalorigami.com>> 
> wrote:
> Happy birthday dear O-List! 
> Anne, do you happen to keep a copy of the first (or some of the first) email 
> exchange?
> 
> The short answer is: yes, we have the data, possibly even all of it.
> 
> The longer answer is: it's not in easily shareable/archivable/viewable 
> format(s). It did take a while for things to get rolling, so the first 
> bundles of conversations are not all that interesting, really.


Hi all,
Perhaps we have some AI experts that can train a LLM (Large Language Model) on 
the entire O-List to provide an interactive chat bot to our queries?


Cheers,
Brian K. Webb


> On May 24, 2024, at 8:34 AM, Anne LaVin via Origami 
>  wrote:
> 
> On Thu, May 23, 2024 at 4:31 PM Laura R via Origami 
> mailto:origami@lists.digitalorigami.com>> 
> wrote:
> Happy birthday dear O-List! 
> Anne, do you happen to keep a copy of the first (or some of the first) email 
> exchange?
> 
> The short answer is: yes, we have the data, possibly even all of it.
> 
> The longer answer is: it's not in easily shareable/archivable/viewable 
> format(s). It did take a while for things to get rolling, so the first 
> bundles of conversations are not all that interesting, really.
> 
> That said, "getting the o-list archives somewhere usable, sometime" has long 
> been on the list of things that would be nice to do for the list... but it's 
> a Pretty Serious Project, at this point. The data is stored in multiple 
> chunks, in random formats, so putting them all into something that could act 
> like a single mail archive would be quite a job. A huge pile of the early 
> messages were not kept as actual email messages, so their unique message-IDs, 
> which systems use for creating threading, don't exist any more. They would 
> likely take human intervention [and there are tens of thousands of them!] to 
> clean up into something like a real mail archive. I've wondered, on and off, 
> if there's a way to somehow wiki-fy [not *actually* a wiki, just the concept] 
> the information, and get volunteers with the right mindset to attack it, and 
> gradually tidy it up. But wrangling that, and/or running/creating a system 
> that would make such a collaborative effort possible, is itself a pretty big 
> project.
> 
> Anne
> 
> 
> 
> 
> > On May 23, 2024, at 5:12 PM, Anne LaVin via Origami 
> >  > <mailto:origami@lists.digitalorigami.com>> wrote:
> > 
> > Yep, the List is another year older.
> > 
> > For this is the day when, back in 1988 (!) the first messages were 
> > exchanged in what would eventually migrate to this version of the List, run 
> > on a private server my husband and I maintain, using the open-source 
> > Mailman mailing list system. 
> > 
> > Pretty much everything has changed a lot since then, but the List is still 
> > getting used, so we're still here. Maybe this will be the year to migrate 
> > things to a forum-style backend (I hear good things about Discourse) but 
> > there will always be an email component for you diehards, never fear!
> > 
> > I hope everyone is having a grand day. Do go fold something, and come back 
> > and tell us about it!
> > 
> > Anne
> > 
> 



Memory leaks in c/pthread libraries

2024-05-23 Thread Brian Marcotte
Since upgrading to NetBSD-10, we've seen memory leaks in several
daemons which use libpthread:

gpg-agent
opendmarc
dkimpy_milter (python3)
syslog-ng (in some cases)
mysqld
mariadbd

In most cases, the daemons leak as they are used, but running this
will show the leak just sitting there:

  gpg-agent --daemon
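One low-tech way to confirm this kind of idle leak is to sample the daemon's resident set size over time; a steadily growing series while nothing touches the agent points at a leak. The PID default, interval, and sample count below are illustrative:

```shell
#!/bin/sh
# Print a process's RSS (KiB) several times; pass the PID as $1,
# defaulting to this shell's own PID for demonstration.
pid=${1:-$$}
i=0
while [ "$i" -lt 5 ]; do
    ps -o rss= -p "$pid" || exit 1   # stop if the process goes away
    sleep 1
    i=$((i + 1))
done
```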

I opened PR#57831 on this issue back in January.

Has anyone noticed this?

Thanks.

--
- Brian



UNBREAK/UPDATE: lang/ldc 1.35.0 => 1.38.0, and de-bootstrap ldc

2024-05-23 Thread Brian Callahan
Hi ports --

Attached is an update to the latest LDC. With this, I would also like to
make the build process a lot easier.

LDC does not need to be built with itself; it can be built with DMD or
GDC as well. I would like to make LDC have a BDEP on DMD. Now that DMD
has an LTS bootstrap it should be always have a package available. Also,
Phobos, the D standard library, is always built and shipped as a static
library, so DMD won't be an LDEP or RDEP, just a BDEP.

This has the instant benefit of enabling LDC on i386, which I have
tested alongside amd64 and does work. This also has the benefit of
making LDC much easier to update. Should be as easy as cranking version
number, updating patches, and updating plist.

Other platforms may need a bootstrap or use GDC to build LDC, but I will
tackle those on an as-needed basis.

I tried making an LTS bootstrap of LDC but there are linker errors for
multiple symbols of the same name. And I think this new strategy is good
enough.

OK?

~Brian

Index: Makefile
===
RCS file: /cvs/ports/lang/ldc/Makefile,v
retrieving revision 1.13
diff -u -p -r1.13 Makefile
--- Makefile9 Mar 2024 15:18:28 -   1.13
+++ Makefile24 May 2024 00:52:45 -
@@ -1,15 +1,14 @@
-BROKEN=needs new bootstrap
+# A bootstrap may be needed for !amd64, !i386 archs.
+# Can try GDC to bootstrap as well.
+ONLY_FOR_ARCHS =   amd64 i386
 
-# You must create a bootstrap for each supported arch.
-# Can use GDC to create said bootstrap compiler.
-ONLY_FOR_ARCHS =   amd64
+# No BT CFI yet.
+USE_NOBTCFI =  Yes
 
-V =1.35.0
+V =1.38.0
 COMMENT =  LLVM D Compiler
 DISTFILES =ldc-${V}-src.tar.gz
-DISTFILES.boot= ldc-${V}-bootstrap.tar.gz
 PKGNAME =  ldc-${V}
-REVISION = 0
 CATEGORIES =   lang
 
 HOMEPAGE = https://wiki.dlang.org/LDC
@@ -23,7 +22,6 @@ PERMIT_PACKAGE =  Yes
 WANTLIB += ${COMPILER_LIBCXX} c execinfo m z
 
 SITES =
https://github.com/ldc-developers/ldc/releases/download/v${V}/
-SITES.boot =   https://github.com/ibara/ldc/releases/download/bootstrap-${V}/
 
 # C++14
 COMPILER = base-clang ports-gcc
@@ -34,29 +32,18 @@ MODCLANG_COMPILER_LINKS =   No
 MODCLANG_RUNDEP =  No
 MODCLANG_VERSION = 16
 
+BUILD_DEPENDS =lang/dmd
+
 # COMPILE_D_MODULES_SEPARATELY=ON lets ldc compile with sane memory limits.
 CONFIGURE_ARGS =   -DCOMPILE_D_MODULES_SEPARATELY=ON \
-DLDC_DYNAMIC_COMPILE=OFF \
-DLDC_WITH_LLD=OFF \

-DLLVM_CONFIG="${LOCALBASE}/bin/llvm-config-${MODCLANG_VERSION}"
 
-# Use a bootstrap compiler, similar to DMD.
-CONFIGURE_ENV =
DMD="${WRKDIR}/ldc-${V}-bootstrap/${MACHINE_ARCH}/ldmd2" \
-   LD_LIBRARY_PATH="${WRKDIR}/ldc-${V}-bootstrap/${MACHINE_ARCH}"
-
-MAKE_ENV +=LD_LIBRARY_PATH="${WRKDIR}/ldc-${V}-bootstrap/${MACHINE_ARCH}"
+# Use DMD as the bootstrap compiler.
+CONFIGURE_ENV =DMD="${LOCALBASE}/bin/dmd"
 
 WRKDIST=   ${WRKDIR}/ldc-${V}-src
-
-# I put a vanilla ldc2.conf in the bootstrap tarball.
-# This fixes it up for the specifics for each arch.
-post-patch:
-   sed -i 's#/usr/local/include/d#${WRKDIR}/ldc-${V}-bootstrap/d#g' \
-   ${WRKDIR}/ldc-${V}-bootstrap/${MACHINE_ARCH}/ldc2.conf
-   sed -i 
's#"/usr/local/lib",#"/usr/local/lib","${WRKDIR}/ldc-${V}-bootstrap/${MACHINE_ARCH}",#g'
 \
-   ${WRKDIR}/ldc-${V}-bootstrap/${MACHINE_ARCH}/ldc2.conf
-   chmod 644 ${WRKDIR}/ldc-${V}-bootstrap/${MACHINE_ARCH}/lib*.so* # XXX
-   cp /usr/lib/libc.so.98.* ${WRKDIR}/ldc-${V}-bootstrap/${MACHINE_ARCH}/ 
# XXX
 
 # Same trick as with flang:
 #   Replace the shared LLVM.so library with the static libraries.
Index: distinfo
===
RCS file: /cvs/ports/lang/ldc/distinfo,v
retrieving revision 1.6
diff -u -p -r1.6 distinfo
--- distinfo14 Dec 2023 15:31:31 -  1.6
+++ distinfo24 May 2024 00:52:45 -
@@ -1,4 +1,2 @@
-SHA256 (ldc-1.35.0-bootstrap.tar.gz) = 
8IbnqIzaBxRP2vOWAl3c3sTvyTVv9NxMuJD24CVF1i8=
-SHA256 (ldc-1.35.0-src.tar.gz) = bilpk3BsdsCT5gkTmqCz+HBDVfoPN1b2dY141EIm36A=
-SIZE (ldc-1.35.0-bootstrap.tar.gz) = 49813924
-SIZE (ldc-1.35.0-src.tar.gz) = 8241960
+SHA256 (ldc-1.38.0-src.tar.gz) = ymI47+Ai40zTB2dB+KBwxqN3GWNRxhlJpIpIyZN5844=
+SIZE (ldc-1.38.0-src.tar.gz) = 8691096
Index: patches/patch-driver_cl_options_instrumentation_cpp
===
RCS file: patches/patch-driver_cl_options_instrumentation_cpp
diff -N patches/patch-driver_cl_options_instrumentation_cpp
--- patches/patch-driver_cl_options_instrumentation_cpp 14 Aug 2023 22:35:35 
-  1.1
+++ /dev/null   1 Jan 1970 00:00:00 -
@@ -1,15 +0,0 @@
-Default to -fcf-protection=branch
-May need to be tweaked if ldc grows !amd64 packages...
-
-Index: 

[Bug 2066995] Re: apport-gtk keeps prompting to report crashes in a loop

2024-05-23 Thread Brian Murray
** Tags added: rls-nn-incoming

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2066995

Title:
  apport-gtk keeps prompting to report crashes in a loop

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/apport/+bug/2066995/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[slurm-users] Re: Building Slurm debian package vs building from source

2024-05-23 Thread Brian Andrus via slurm-users
I would guess you would either need to install GPU drivers on the non-GPU nodes 
or build Slurm without GPU support for that to work, due to package 
dependencies.


Both viable options. I have done installs where we just don't compile 
GPU support in and that is left to the users to manage.


Brian Andrus

On 5/23/2024 6:16 AM, Christopher Samuel via slurm-users wrote:

On 5/22/24 3:33 pm, Brian Andrus via slurm-users wrote:


A simple example is when you have nodes with and without GPUs.
You can build slurmd packages without for those nodes and with for 
the ones that have them.


FWIW we have both GPU and non-GPU nodes but we use the same RPMs we 
build on both (they all boot the same SLES15 OS image though).




--
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-le...@lists.schedmd.com


Re: [Alpine-info] Gmail Access

2024-05-23 Thread Brian S. Baker [VIA BBUS] via Alpine-info
A folder lock would have to be initiated when we’re talking about mail. In 
Linux, with other processes that run, if you try to use the wrong account, 
it will tell you that a folder lock is in place and remind you “are you 
root?” when you try to use a program more than once. I believe this is so 
that the controlling process can only run once, because you wouldn’t want 
two instances of something like Synaptic running at the same time: one 
of them has to have permission to modify files or to make additions, changes, or 
deletions, so root takes control of the process and locks it, so that you 
don’t have two competing processes trying to do the same thing.

The way you take care of that is you give it a signal 9 kill and this will 
release the locks process because one of them will be terminated. If you don’t 
have a lock created, then you have two processes fighting against each other, 
and it doesn’t make sense, secondly, there should only be one account that 
makes the changes, so if root has control, then it will complete the process. 
That way you don’t have two or three instances of the same account being logged 
in and trying to access that one process.

Brian


Sent from my iPhone

> On May 23, 2024, at 2:41 PM, Chime Hart via Alpine-info 
>  wrote:
> 
> Thank you Eduardo: So as I understand that, whether we open her inbox or 
> not, after 7days we will not until we create a new tocan?
> Now, as to my unrelated question, I will ask it differently? Is their a way 
> of opening Alpine in 2 separate consoles, without 1 of them becoming "read 
> only"? I would like either session to act on messages. Thanks so much in 
> advance
> Chime
> 
> ___
> Alpine-info mailing list
> Alpine-info@u.washington.edu
> http://mailman12.u.washington.edu/mailman/listinfo/alpine-info
___
Alpine-info mailing list
Alpine-info@u.washington.edu
http://mailman12.u.washington.edu/mailman/listinfo/alpine-info


Re: [R-SIG-Mac] library 'quadmath' not found

2024-05-23 Thread Prof Brian Ripley via R-SIG-Mac

On 23/05/2024 16:00, Petar Milin wrote:

I have recently updated my R (4.4.0) and all the packages running on Sonoma 
(14.5) with Intel. When I try to install from GitHub with:

install_github("zdk123/SpiecEasi")

I get the error message:

ld: warning: search path '/opt/gfortran/lib/gcc/x86_64-apple-darwin20.0/12.2.0' 
not found
ld: warning: search path '/opt/gfortran/lib' not found

    is the important message.


ld: library 'quadmath' not found
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [SpiecEasi.so] Error 1
ERROR: compilation failed for package ‘SpiecEasi’

I did

xcode-select --install

but that was done already.

I also installed

gfortran-4.2.3.pkg

but that did not help either.


Please remove it via pkgutil: it is way too old.


Can anyone advise, please?


Please read the R-admin manual.  To install a package using Fortran from 
sources you need to install 
https://mac.r-project.org/tools/gfortran-12.2-universal.pkg, also 
indicated via the tools page linked from 
https://cran.r-project.org/bin/macosx/



--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Emeritus Professor of Applied Statistics, University of Oxford

___
R-SIG-Mac mailing list
R-SIG-Mac@r-project.org
https://stat.ethz.ch/mailman/listinfo/r-sig-mac


Re: [go-nuts] Re: Call the .net dlls into the golang

2024-05-23 Thread 'Brian Candler' via golang-nuts
"try building at the command line"

That's your clue. You'll most likely get a meaningful error message on 
stderr, which your IDE is hiding.
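For reference, here is a minimal cgo program that does compile, with the C side inlined so no external DLL is involved; `Connect` and its `int` return type are stand-ins for whatever `ealApiLib.h` actually declares. It also avoids two problems in the snippet below: the `/* */` preamble must sit immediately above `import "C"` with no blank line, and a C string argument must be made with `C.CString` (and freed), not passed as a Go string, nor can a C return value be concatenated onto a Go string:

```go
package main

/*
#include <stdlib.h>

// Stand-in for the real Connect() from ealApiLib.h; the actual
// signature in that library is an assumption here.
static int Connect(const char *host) { return 0; }
*/
import "C"

import (
	"fmt"
	"unsafe"
)

func main() {
	host := C.CString("192.168.1.1")   // C string must be allocated...
	defer C.free(unsafe.Pointer(host)) // ...and freed by the caller

	result := C.Connect(host) // result is a C.int, not a Go string
	fmt.Println("connect returned", int(result))
}
```

Building this at the command line (`go build`) will surface any remaining cgo errors that the IDE swallows.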

On Thursday 23 May 2024 at 13:47:10 UTC+1 Pavan Kumar A R wrote:

> Hello,
>
> I am trying to call a C program from Go, but I am getting an issue.
> Please see the error message:
> package main
>
> go list failed to return CompiledGoFiles. This may indicate failure to 
> perform cgo processing; try building at the command line. See 
> https://golang.org/issue/38990.go list
> View Problem (Alt+F8)
>
>
> //#cgo LDFLAGS: -L. -llibCWrapper-win64 -Wl,-rpath,.
> //#include "ealApiLib.h"
>
> import "C"
> import "fmt"
>
> func main() {
>
> result := C.Connect("192.168.1.1")
>
> fmt.Println("connection success" + result)
>
> }
>
>
>
> On Thursday 23 May 2024 at 02:53:12 UTC+5:30 peterGo wrote:
>
>> cgo command
>> https://pkg.go.dev/cmd/cgo
>> Cgo enables the creation of Go packages that call C code.
>>
>> On Wednesday, May 22, 2024 at 4:41:13 PM UTC-4 Carla Pfaff wrote:
>>
>>> Yes, you can call C functions from Go, it's called Cgo: 
>>> https://go.dev/blog/cgo
>>>
>>> On Wednesday 22 May 2024 at 18:04:19 UTC+2 Pavan Kumar A R wrote:
>>>

> Okay, so it's possible to call a C program from Go. I also have the C 
> platform DLLs, which we use in cross-platform Mono, and Mono internally 
> calls the .NET DLL functions. Can I use the C program DLLs in Go?
>
>


-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/21db8561-176e-4857-a904-8bda3c226ab3n%40googlegroups.com.


Re: [Alpine-info] Gmail Access

2024-05-23 Thread Brian S. Baker [VIA BBUS] via Alpine-info
Good morning everyone:

I just wanted to point out that Steve is right in one way, and wrong in another

What I mean by that is I have used versions of Pine starting at version 3.89, 
all the way up to 3.96, and then when Alpine came out, I used every version 
from version one all the way up to 2.26.

Not only that, but when I was a staff member of the Tallahassee Free-Net in 
Tallahassee, Florida, I answered a question from a user. The person wanted 
to know how to use Pine.

I had written a response to this, and the administrator of the system was so 
impressed with the way that I handled it that I was placed on the questions-
and-answers team. I believe I still have those particular documents in one of my 
old backups from 1994, and then it was updated all the way up until 2007.

To the issue at hand: Steve says that Gmail is notoriously difficult to set up 
using Alpine. I don’t doubt that, but from my experience, just because you own 
your own domain for $15 or $30, that doesn’t necessarily mean it’s easier to 
set up than Gmail, even though you’re right on that point.

What I mean by that is I own five domains of my own. GoDaddy recently gave all of 
their email services management to Microsoft. At that point you have to pay at 
least $38 every year to maintain your email. Not only that, but I had to ask 
Eduardo for assistance, because I had a hell of a time trying to set up 
everything, and I must’ve communicated with him at least 35 times during this 
entire situation, but he helped me get through it. His thoughtful and very 
informative posts, as well as his patience, are a testament to how awesome he is. 
He just said let’s start at the beginning, so I started emailing him off list, and 
each time we communicated I would let him know what was going on. Then he would 
respond back to me, and we kept doing this until I got OAuth working with my 
domain account. I also asked Eduardo about my Gmail accounts, and he helped me 
set those up as well. I can log into all of the accounts I want access to, 
including all of the IMAP account folders, and I have all of my 
authentication tokens saved.

Again, I thank Eduardo for his patience and his expertise in this situation. I 
was so happy after I got done with this that I was totally psyched. Alpine, as 
well as the program Pine, are very powerful applications. You just have to know 
how to open up the power so that you can wire it in right. Eduardo also helped me 
set things up so that I can respond as either the list user or my personal account.

So as I stated, it is not necessarily easier to set up domain account email if 
you have to go through authentication tokens so many times that your head wants 
to spin backwards. 

Most hosts I know of use IMAP as one of the primary ways to access email. I 
don’t know if POP3 is still as popular as it used to be, but IMAP is so 
awesome because it does not require you to bring the mail down from your server 
onto your primary machine. It allows it to stay there, and you go in and take 
care of it from there. 

Mail hosts want to be safe or safer; however, sometimes when they try to make 
things better, they always seem to make things worse, which means people like 
Eduardo who develop Alpine have to be about seven steps ahead of somebody else 
who is trying to make changes to the mail host while he is trying to update 
Alpine to the newest editions. He was able to tell me that some part of 
Alpine 2.24 was not operating properly because there was a piece of it that 
was causing issues, and he told me to update to the newest version, 2.26, and 
I’m back in business. 

Have a great day!

Brian




Sent from my iPhone

> On May 23, 2024, at 6:21 AM, Steve Litt via Alpine-info 
>  wrote:
> 
> Marc Lytle via Alpine-info said on Sat, 18 May 2024 16:59:04 -0700
> 
>> Hello all,
>> 
>> I have set up Alpine with Gmail and I'm able to successfully connect
>> to the Gmail account and access/send email. The issue is that this
>> just stops working after a couple of days.
> 
> Why gmail? It's notoriously difficult to deal with. Why not just buy a
> domain for $15/year and use the mailing address(s) from that domain?
> 
> SteveT
> 
> Steve Litt
> 
> Autumn 2023 featured book: Rapid Learning for the 21st Century
> http://www.troubleshooters.com/rl21
> ___
> Alpine-info mailing list
> Alpine-info@u.washington.edu
> http://mailman12.u.washington.edu/mailman/listinfo/alpine-info
___
Alpine-info mailing list
Alpine-info@u.washington.edu
http://mailman12.u.washington.edu/mailman/listinfo/alpine-info


Re: [BVARC] Emergency preparness

2024-05-22 Thread Brian Shircliffe via BVARC
Here is a link to the event we are holding tomorrow for BLUETTI.  Going to
have drinks and you can get to know Julio who can do mobile radio
installations as well.  We will be selling these select units, and *we will
even pay for the sales tax*.  We really want to get as many of our units
into the hands of Houstonians that need help with portable power.  Please
if you know anyone that could use one of these units these are great for
emergencies!

https://www.linkedin.com/events/after-stormsalesevent7198786779599011840/comments/

[image: image.png]



On Wed, May 22, 2024 at 11:49 AM Brian Shircliffe 
wrote:

> I got some BLUETTI portable units coming down from my warehouse in Dallas
> today.  Donating a lot of them to the Salvation Army, and trying to get
> them in the hands of anyone that wants to buy a unit at cost. It is mainly
> our smaller units 300-2000wh batteries and 200 watt panels.  If you know
> anyone looking for some portable power units let me know.  832-452-9868
>
> -Brian
>
> On Fri, May 17, 2024 at 1:17 PM K5BOU via BVARC  wrote:
>
>> Cathy, great story and thanks for sharing with us. Your story backs up what
>> I was thinking in my email. You can train all your life long, but if there is
>> no one to help, what's the point? I'm glad someone was able to help on GMRS.
>> About to buy my GMRS licence too.
>>
>>
>>
>>
>>
>>
>>
>> [image: image001.png]
>>
>> K5BOU-PhilippeBoucaumont
>>
>> Houston*|*Texas*|*USA*|*
>>
>> https://mccrarymeadowsweather.com/
>>
>> [image: image002.jpg]
>>
>> On May 17, 2024, at 12:51, Cathy Steinberg via BVARC 
>> wrote:
>>
>> Last night a limb fell on some electrical wires behind  a house across
>> the street from where my daughter lives in West Houston. There were flashes
>> and spark like explosions happening over and over. Two houses were in
>> jeopardy of catching fire.
>>
>> My daughter and several neighbors called 911,311, Centerpoint Energy for
>> over an hour and could not get anyone to answer.
>>
>> When my daughter called me to tell me about it, I got on my radio and
>> nobody was available to respond.
>>
>> I then tried a GMRS channel and there were 2 people monitoring the calls.
>> I gave them the address and within 15 minutes a constable and fire
>> department showed up. A couple of hours later the trees behind these houses
>> caught on fire. Since 911 was still not available I got back on the GMRS
>> channel and they got the fire department to come back. In my opinion, the
>> people monitoring this channel prevented a house fire and I was extremely
>> grateful.
>>
>> I just received my technician license in January and got it for emergency
>> reasons. It already has paid off well.
>>
>> I just wanted to share my experience.
>>
>> Cathy
>> N0JAB
>> WSAB 405
>>
>> Sent from my iPhone
>>
>> On May 17, 2024, at 12:49 PM, Scott Medbury via BVARC 
>> wrote:
>>
>> 
>> I think it's time for Harris, Fort Bend, Galveston,  Montgomery, and
>> Chambers counties to step up and do it again.  David, I thank you for
>> Brazoria county leadership. All too often we see severe loss of property
>> and needless loss of life from severe weather.
>>
>> 73 ... Scott KD5FBA
>>
>>
>>
>> On Fri, May 17, 2024, 12:18 PM David Lira  wrote:
>>
>>> NWS did a skywarn class in Brazoria county down in Angleton a few weeks
>>> ago. It was very informative. They're around but they aren't promoted very
>>> well.
>>>
>>> Regards,
>>> David Lira K5DBL
>>>
>>>
>>> On Fri, May 17, 2024, 12:10 Scott Medbury via BVARC 
>>> wrote:
>>>
>>>> It has been years since I have heard of a NWS Skywarn Class which is
>>>> several hours long and VERY informative.  I think that most of us could
>>>> benefit from a refresher or the initial class.
>>>>
>>>> 73 and safe at home..
>>>>  Scott KD5FBA
>>>>
>>>> On Fri, May 17, 2024, 11:51 AM K5BOU via BVARC  wrote:
>>>>
>>>>> I hope everyone is well. The following email contains my personal
>>>>> observations, which unfortunately lean towards a negative aspect.
>>>>>
>>>>> Yesterday, we encountered the effects of inclement weather. I was
>>>>> curious about the response of our dedicated emergency Ham radio Operators,
>>>>> who annually invest their time and effort in training for such situations.
>>>>>
>>>>> I 

[slurm-users] Re: Building Slurm debian package vs building from source

2024-05-22 Thread Brian Andrus via slurm-users
Not that I recommend it much, but you can build them for each 
environment and install the ones needed in each.


A simple example is when you have nodes with and without GPUs.
You can build slurmd packages without for those nodes and with for the 
ones that have them.


Generally, so long as versions are compatible, they can work together. 
You will need to be aware of differences for jobs and configs, but it is 
possible.


Brian Andrus

On 5/22/2024 12:45 AM, Arnuld via slurm-users wrote:
We have several nodes, most of which have different Linux 
distributions (distro for short). The controller has a different distro as 
well. The only common thing between the controller and all the nodes is 
that all of them are x86_64.


I can install Slurm using the package manager on all the machines, but this 
will not work because the controller will have a different version of 
Slurm compared to the nodes (21.08 vs 23.11).


If I build from source then I see two solutions:
 - build a deb package
 - build a custom package (./configure, make, make install)

Building a Debian package on the controller and then distributing the 
binaries to the nodes won't work either, because those binaries will look 
for the shared libraries they were built against, and those don't 
exist on the nodes.


So the only solution I have is to build a static binary using a custom 
package. Am I correct or is there another solution here?




--
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-le...@lists.schedmd.com


Re: [RBW] Re: Is an Atlantis Worth It?

2024-05-22 Thread Brian Turner
I have always wanted to own an Atlantis or an All Rounder. Since like, the 
early to mid 2000s. To me, they were the quintessential Rivendell bikes to own. 
I loved the timeless, classic look and have always loved touring bikes built 
for whatever. However, the current version of the Atlantis does not hold the 
same appeal to me as those classic Rivs did. That’s why I searched for an older 
model that I’m very happy to have found.

Brian
Lexington KY

-- 
You received this message because you are subscribed to the Google Groups "RBW 
Owners Bunch" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to rbw-owners-bunch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/rbw-owners-bunch/AE4F6441-4081-4EBA-9E06-4FE3689F6DB6%40gmail.com.


CVS: cvs.openbsd.org: ports

2024-05-22 Thread Brian Callahan
CVSROOT:/cvs
Module name:ports
Changes by: bcal...@cvs.openbsd.org 2024/05/22 12:55:56

Modified files:
devel/dtools   : Makefile distinfo 
Added files:
devel/dtools/patches: patch-Makefile 
Removed files:
devel/dtools/patches: patch-posix_mak 

Log message:
Chase dtools update now that dmd has been updated.
Changelog: https://github.com/dlang/tools/compare/v2.104.0...v2.108.1



Re: [BVARC] Emergency preparness

2024-05-22 Thread Brian Shircliffe via BVARC
I got some BLUETTI portable units coming down from my warehouse in Dallas
today.  Donating a lot of them to the Salvation Army, and trying to get
them in the hands of anyone that wants to buy a unit at cost. It is mainly
our smaller units 300-2000wh batteries and 200 watt panels.  If you know
anyone looking for some portable power units let me know.  832-452-9868

-Brian

On Fri, May 17, 2024 at 1:17 PM K5BOU via BVARC  wrote:

> Cathy, great story and thanks for sharing with us. Your story backs up what
> I was thinking in my email. You can train all your life long, but if there is
> no one to help, what's the point? I'm glad someone was able to help on GMRS.
> About to buy my GMRS licence too.
>
>
>
>
>
>
>
> [image: image001.png]
>
> K5BOU-PhilippeBoucaumont
>
> Houston*|*Texas*|*USA*|*
>
> https://mccrarymeadowsweather.com/
>
> [image: image002.jpg]
>
> On May 17, 2024, at 12:51, Cathy Steinberg via BVARC 
> wrote:
>
> Last night a limb fell on some electrical wires behind  a house across
> the street from where my daughter lives in West Houston. There were flashes
> and spark like explosions happening over and over. Two houses were in
> jeopardy of catching fire.
>
> My daughter and several neighbors called 911,311, Centerpoint Energy for
> over an hour and could not get anyone to answer.
>
> When my daughter called me to tell me about it, I got on my radio and
> nobody was available to respond.
>
> I then tried a GMRS channel and there were 2 people monitoring the calls.
> I gave them the address and within 15 minutes a constable and fire
> department showed up. A couple of hours later the trees behind these houses
> caught on fire. Since 911 was still not available I got back on the GMRS
> channel and they got the fire department to come back. In my opinion, the
> people monitoring this channel prevented a house fire and I was extremely
> grateful.
>
> I just received my technician license in January and got it for emergency
> reasons. It already has paid off well.
>
> I just wanted to share my experience.
>
> Cathy
> N0JAB
> WSAB 405
>
> Sent from my iPhone
>
> On May 17, 2024, at 12:49 PM, Scott Medbury via BVARC 
> wrote:
>
> 
> I think it's time for Harris, Fort Bend, Galveston,  Montgomery, and
> Chambers counties to step up and do it again.  David, I thank you for
> Brazoria county leadership. All too often we see severe loss of property
> and needless loss of life from severe weather.
>
> 73 ... Scott KD5FBA
>
>
>
> On Fri, May 17, 2024, 12:18 PM David Lira  wrote:
>
>> NWS did a skywarn class in Brazoria county down in Angleton a few weeks
>> ago. It was very informative. They're around but they aren't promoted very
>> well.
>>
>> Regards,
>> David Lira K5DBL
>>
>>
>> On Fri, May 17, 2024, 12:10 Scott Medbury via BVARC 
>> wrote:
>>
>>> It has been years since I have heard of a NWS Skywarn Class which is
>>> several hours long and VERY informative.  I think that most of us could
>>> benefit from a refresher or the initial class.
>>>
>>> 73 and safe at home..
>>>  Scott KD5FBA
>>>
>>> On Fri, May 17, 2024, 11:51 AM K5BOU via BVARC  wrote:
>>>
>>>> I hope everyone is well. The following email contains my personal
>>>> observations, which unfortunately lean towards a negative aspect.
>>>>
>>>> Yesterday, we encountered the effects of inclement weather. I was
>>>> curious about the response of our dedicated emergency Ham radio Operators,
>>>> who annually invest their time and effort in training for such situations.
>>>>
>>>> I understand during storms, it's crucial to disconnect antennas, either
>>>> before or after the event. I didn't notice any activity on the radio during
>>>> this time, either before or after. I tuned in to some nets in Alabama and
>>>> Florida, where they were actively discussing and preparing for the weather;
>>>> Dallas also seemed to have a few emergency nets in place yesterday.
>>>>
>>>> A week ago, during a "stir crazy net," someone mentioned that during
>>>> previous hurricane events, there was little to no activity from the Ham
>>>> radio emergency group/team.
>>>>
>>>> Here are some questions to consider:
>>>>
>>>> - Should our approach be reactive or proactive?
>>>> - When is it appropriate for the Ham radio emergency responders to be
>>>> activated?
>>>> - Are all members of the Emergency Ham radio group in Fort Bend or
>>>> Ha

Re: [go-nuts] Errors trying to use external pkg z3

2024-05-22 Thread 'Brian Candler' via golang-nuts
* Start in an empty directory
* Run "go mod init example"
* Create your main.go with that import statement in it
* Run:

go mod tidy
go run .

On Wednesday 22 May 2024 at 16:26:54 UTC+1 Kenneth Miller wrote:

> I tried that, same error
>
> On Wednesday, May 22, 2024 at 9:12:58 AM UTC-6 Brian Candler wrote:
>
>> It's because the name of the module is "github.com/aclements/go-z3/z3", 
>> not "z3"
>>
>> Only packages in the standard library have short names, like "fmt", 
>> "strings" etc.
>>
>> On Wednesday 22 May 2024 at 15:46:30 UTC+1 robert engels wrote:
>>
>>> If it is your own code, you have to use
>>>
>>> import “github.com/aclements/go-z3/z3”
>>>
>>> or you need a special go.mod file.
>>>
>>> On May 22, 2024, at 9:38 AM, robert engels  wrote:
>>>
>>> What are you trying to run? z3 is a library.
>>>
>>> On May 22, 2024, at 9:29 AM, Kenneth Miller  
>>> wrote:
>>>
>>> I did go get -u github.com/aclements/go-z3/z3
>>>
>>> but when I go run . I get
>>>
>>> main.go:5:2: package z3 is not in std
>>>
>>> the offending line is 
>>>
>>> import "z3"
>>>
>>> can someone help me please? I'm sure this has been asked before but I 
>>> couldn't find it
>>>
>>> -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "golang-nuts" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to golang-nuts...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/golang-nuts/d46b52fb-fa96-4544-914a-42bf83d75322n%40googlegroups.com
>>>  
>>> <https://groups.google.com/d/msgid/golang-nuts/d46b52fb-fa96-4544-914a-42bf83d75322n%40googlegroups.com?utm_medium=email_source=footer>
>>> .
>>>
>>>
>>>
>>> -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "golang-nuts" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to golang-nuts...@googlegroups.com.
>>>
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/golang-nuts/EE0FDF46-94C6-44D6-98C0-19D52C371C20%40ix.netcom.com
>>>  
>>> <https://groups.google.com/d/msgid/golang-nuts/EE0FDF46-94C6-44D6-98C0-19D52C371C20%40ix.netcom.com?utm_medium=email_source=footer>
>>> .
>>>
>>>
>>>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/75533bfc-b63b-41cc-9f66-0885d0c00c7cn%40googlegroups.com.


CVS: cvs.openbsd.org: ports

2024-05-22 Thread Brian Callahan
CVSROOT:/cvs
Module name:ports
Changes by: bcal...@cvs.openbsd.org 2024/05/22 10:36:08

Modified files:
lang/dmd   : Makefile distinfo 
lang/dmd/patches: patch-dmd_compiler_src_dmd_link_d 
lang/dmd/pkg   : PLIST 
Added files:
lang/dmd/patches: patch-dmd-bootstrap_openbsd_bin32_dmd_conf 
  patch-dmd_Makefile 
  patch-dmd_compiler_docs_Makefile 
  patch-dmd_druntime_Makefile 
  patch-phobos_Makefile 
Removed files:
lang/dmd/patches: patch-dmd_druntime_posix_mak 
  patch-dmd_posix_mak patch-phobos_posix_mak 

Log message:
Unbreak and update DMD to 2.108.1. Enable i386 backend alongside amd64.
Introduce an LTS bootstrap that should hopefully last quite a long time and
make future updates much easier.
ok tb@



Re: UNBREAK/UPDATE: lang/dmd 2.106.0 => 2.108.1, and LTS bootstrap

2024-05-22 Thread Brian Callahan
On 5/20/2024 9:20 PM, Brian Callahan wrote:
> Hi ports --
> 
> Attached is a diff to unbreak and update the reference D compiler.
> 
> In addition, this update includes what I am hoping can serve as an LTS
> bootstrap compiler. Very old versions of the D compiler, all the way
> back to 2.076.1, can bootstrap the current D compiler. The bootstrap
> compiler was built with -static using GDC 14.1 and therefore doesn't
> have any library dependencies. I would like to be able to use this as an
> LTS bootstrap compiler until either it no longer builds modern versions
> of the D compiler or no longer runs on OpenBSD (or until I feel like
> giving it an update). Should make updating as easy as cranking version
> number, updating patches, and updating plist.
> 
> OK?
> 
> ~Brian

Updated diff, now including i386 support using the same process to
create a bootstrap compiler.

OK?

~Brian
Index: Makefile
===
RCS file: /cvs/ports/lang/dmd/Makefile,v
retrieving revision 1.17
diff -u -p -r1.17 Makefile
--- Makefile9 Mar 2024 15:18:28 -   1.17
+++ Makefile22 May 2024 15:59:36 -
@@ -1,15 +1,14 @@
-BROKEN=needs new bootstrap
-
-# i386 forthcoming
-ONLY_FOR_ARCHS =   amd64
+# Backend only supports amd64 and i386
+ONLY_FOR_ARCHS =   amd64 i386
 
 # No BT CFI yet.
 USE_NOBTCFI =  Yes
 
-V =2.106.0
+V =2.108.1
+BOOTSTRAP =2.108.1
 COMMENT =  reference compiler for the D programming language
 DISTFILES =dmd-${V}{v${V}}.tar.gz
-DISTFILES.boot=dmd-${V}-bootstrap.tar.gz
+DISTFILES.boot=dmd-${BOOTSTRAP}-bootstrap.tar.gz
 DISTFILES.phobos = phobos-${V}{v${V}}.tar.gz
 PKGNAME =  dmd-${V}
 CATEGORIES =   lang
@@ -22,15 +21,13 @@ PERMIT_PACKAGE =Yes
 
 WANTLIB += c c++abi execinfo m pthread
 
-SITES.boot =   https://github.com/ibara/dmd/releases/download/bootstrap-${V}/
+SITES.boot =   
https://github.com/ibara/dmd/releases/download/bootstrap-${BOOTSTRAP}/
 SITES =https://github.com/dlang/dmd/archive/refs/tags/
 SITES.phobos = https://github.com/dlang/phobos/archive/refs/tags/
 
 USE_GMAKE =Yes
 MAKE_ENV = HOST_CXX="${CXX}" \
-   HOST_DMD="${WRKDIR}/dmd-bootstrap/openbsd/bin${MODEL}/dmd" \
-   LD_LIBRARY_PATH="${WRKSRC}/dmd-bootstrap/openbsd/bin${MODEL}"
-MAKE_FILE =posix.mak
+   HOST_DMD="${WRKDIR}/dmd-bootstrap/openbsd/bin${MODEL}/dmd"
 
 NO_TEST =  Yes
 
@@ -45,21 +42,21 @@ MODEL = 32
 post-extract:
mv ${WRKSRC}/dmd-${V} ${WRKSRC}/dmd
mv ${WRKSRC}/phobos-${V} ${WRKSRC}/phobos
-   chmod 644 ${WRKDIR}/dmd-bootstrap/openbsd/bin64/lib*.so* # XXX
-   cp /usr/lib/libc.so.98.* ${WRKDIR}/dmd-bootstrap/openbsd/bin64/ # XXX
 
 # We need to do this manually.
 # Yes, this is all really correct.
 do-build:
cd ${WRKDIR}/phobos && \
-   ${SETENV} ${MAKE_ENV} ${MAKE_PROGRAM} -f ${MAKE_FILE} && \
-   ${SETENV} ${MAKE_PROGRAM} -f ${MAKE_FILE} install
+   ${SETENV} ${MAKE_ENV} ${MAKE_PROGRAM} && \
+   ${SETENV} ${MAKE_PROGRAM} install
cd ${WRKDIR}/dmd/druntime && \
-   ${SETENV} ${MAKE_ENV} ${MAKE_PROGRAM} -f ${MAKE_FILE} && \
-   ${SETENV} ${MAKE_PROGRAM} -f ${MAKE_FILE} install
+   ${SETENV} ${MAKE_ENV} ${MAKE_PROGRAM} && \
+   ${SETENV} ${MAKE_PROGRAM} install
mkdir ${WRKDIR}/install/openbsd/bin${MODEL}
cp ${WRKDIR}/dmd/generated/openbsd/release/${MODEL}/dmd \
${WRKDIR}/install/openbsd/bin${MODEL}
+   cd ${WRKDIR}/dmd/compiler/docs && \
+   ${SETENV} ${MAKE_ENV} ${MAKE_PROGRAM} build
 
 # We need to do this manually too.
 do-install:
@@ -67,7 +64,8 @@ do-install:
${PREFIX}/bin
${INSTALL_DATA} ${WRKDIR}/install/openbsd/lib${MODEL}/libphobos2.a \
${PREFIX}/lib
-   ${INSTALL_MAN} ${WRKDIR}/dmd-bootstrap/dmd.1 ${PREFIX}/man/man1
+   ${INSTALL_MAN} ${WRKSRC}/dmd/generated/docs/man/man1/dmd.1 \
+   ${PREFIX}/man/man1
${INSTALL_MAN} ${WRKDIR}/dmd/compiler/docs/man/man5/dmd.conf.5 \
${PREFIX}/man/man5
${INSTALL_DATA_DIR} ${PREFIX}/include/dmd
Index: distinfo
===
RCS file: /cvs/ports/lang/dmd/distinfo,v
retrieving revision 1.11
diff -u -p -r1.11 distinfo
--- distinfo18 Dec 2023 22:20:58 -  1.11
+++ distinfo22 May 2024 15:59:36 -
@@ -1,6 +1,6 @@
-SHA256 (dmd-2.106.0-bootstrap.tar.gz) = 
lSsGN+CaT80zcTbEuUQHbFV0wL5DXikDGWFepL7/nbY=
-SHA256 (dmd-2.106.0.tar.gz) = EHlknEGpuODT6BxXPILYTNaHOzr8lbN9b1IGhCztx8k=
-SHA256 (phobos-2.106.0.tar.gz) = P5Ju4mkFwvb+RX560v5P6sjK2DxXGnCt6jC2zUpDZrY

Re: [go-nuts] Errors trying to use external pkg z3

2024-05-22 Thread 'Brian Candler' via golang-nuts
It's because the name of the module is "github.com/aclements/go-z3/z3", not 
"z3"

Only packages in the standard library have short names, like "fmt", 
"strings" etc.

On Wednesday 22 May 2024 at 15:46:30 UTC+1 robert engels wrote:

> If it is your own code, you have to use
>
> import “github.com/aclements/go-z3/z3”
>
> or you need a special go.mod file.
>
> On May 22, 2024, at 9:38 AM, robert engels  wrote:
>
> What are you trying to run? z3 is a library.
>
> On May 22, 2024, at 9:29 AM, Kenneth Miller  wrote:
>
> I did go get -u github.com/aclements/go-z3/z3
>
> but when I go run . I get
>
> main.go:5:2: package z3 is not in std
>
> the offending line is 
>
> import "z3"
>
> can someone help me please? I'm sure this has been asked before but I 
> couldn't find it
>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to golang-nuts...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/golang-nuts/d46b52fb-fa96-4544-914a-42bf83d75322n%40googlegroups.com
>  
> 
> .
>
>
>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "golang-nuts" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to golang-nuts...@googlegroups.com.
>
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/golang-nuts/EE0FDF46-94C6-44D6-98C0-19D52C371C20%40ix.netcom.com
>  
> 
> .
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to golang-nuts+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/golang-nuts/ebeb05d6-5649-4ad4-b0d7-28ef36ee6dd7n%40googlegroups.com.


Datavant question

2024-05-22 Thread Gryzlak, Brian M
WARNING: This message has originated from an External Source. This may be a 
phishing expedition that can result in unauthorized access to our IT System. 
Please use proper judgment and caution when opening attachments, clicking 
links, or responding to this email.
Hi Devs,

UIowa is undergoing a routine review of the Datavant tool by ITS.  Last review 
we described the tool as sharing/storing de-id data to the cloud and that no 
PHI was accessible by Datavant per se (though the software does leverage it).

Can someone confirm that this is still the case / correct any inaccuracies with 
this description?

Thanks,
Brian


Brian M. Gryzlak, MSW, MA
Research Specialist
S414 CPHB
319.335.8218
The University of Iowa
Iowa City, IA 52242
brian-gryz...@uiowa.edu<mailto:brian-gryz...@uiowa.edu>

Health Effectiveness Research Center 
(HERCe)<https://nam02.safelinks.protection.outlook.com/?url=http%3A%2F%2Fwww.public-health.uiowa.edu%2Fherce%2F=05%7C02%7CGPCDEV-L%40PO.MISSOURI.EDU%7Cbab85cfef2194bd9535208dc7a66e4da%7Ce3fefdbef7e9401ba51a355e01b05a89%7C0%7C0%7C638519829581183378%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C=b65ELnNqoaEuqzjsRr1OWRqHb1YWmsyY54Vm8m3qF5c%3D=0>




To unsubscribe from the GPCDEV-L list, click the following link:
https://po.missouri.edu/scripts/wa.exe?SUBED1=GPCDEV-L=1


[Sprinklerforum] Re: Interstitial Heads

2024-05-22 Thread Brian Harris
Fantastic. That's what I ordered and wanted to double check.

Thank you,

Brian Harris, CET
BVS Systems Inc.

From: Travis Mack 
Sent: Wednesday, May 22, 2024 8:39 AM
To: Discussion list on issues relating to automatic fire sprinklers 

Subject: [Sprinklerforum] Re: Interstitial Heads

It's been in the standards since the 2002 edition. Concealed combustible spaces 
with slope not exceeding 2:12 and <36" clear (paraphrasing) require specially 
listed sprinklers.

So yes, they have been required in those spaces for more than 20 years.


Travis Mack, SET

M.E.P.CAD |

181 N. Arroyo Grande Blvd. #105 I Henderson, NV 89074

www.mepcad.com<http://www.mepcad.com/> | m: 480.547.9348



AutoSPRINK  |  AutoSPRINK FAB  |  AutoSPRINK RVT  |  AlarmCAD



Book appointment time in my calendar

https://calendly.com/t_mack_mepcad

____
From: Brian Harris mailto:bhar...@bvssystemsinc.com>>
Sent: Wednesday, May 22, 2024 5:36:38 AM
To: 
sprinklerforum@lists.firesprinkler.org<mailto:sprinklerforum@lists.firesprinkler.org>
 
mailto:sprinklerforum@lists.firesprinkler.org>>
Subject: [Sprinklerforum] Interstitial Heads


Is there a benefit, and/or a requirement, to use interstitial heads instead of a 
plain-Jane 5.6K upright in those spaces?



Thank you,



Brian Harris, CET

BVS Systems Inc.

bvssystemsinc.com<http://bvssystemsinc.com/>

Phone: 704.896.9989

Fax: 704.896.1935



_
SprinklerForum mailing list:
https://lists.firesprinkler.org/list/sprinklerforum.lists.firesprinkler.org
To unsubscribe send an email to sprinklerforum-le...@lists.firesprinkler.org

Re: [PacketFence-users] PF RADIUS against Okta

2024-05-22 Thread Brian Blater via PacketFence-users
Just wondering if there is any help that can be given to this issue or am I
one of the first to do this and will have to blaze my own trail?

Is it possible to break this down into individual pieces that I can get
some help with and then maybe we can have success. I am documenting what
I'm trying so we can have a write-up on how to do this with Okta LDAP.

Thanks

On Wed, May 15, 2024 at 10:11 AM Brian Blater <
brian.blater+packetfe...@digitalturbine.com> wrote:

> New user to PacketFence. As our company is moving away from AD to Okta
> for our IdP, I need to replace our Windows NPS for authenticating our
> Wifi users. I've been posting on reddit in the r/PacketFence there,
> but I understand this is the better place to get assistance. So I'm
> going to try here.
>
> Here is what I have so far:
>
> I've created the realm for our domain. I have created a RADIUS
> authentication source and associated it with the created realm. No
> rules created at this time. I have also created an LDAP authentication
> source to our Okta LDAP interface and associated that to our realm.
> The test with the associated Bind DN is successful. I've tried
> creating a rule using LDAP selecting member is member of
> dn=Wireless_Users_Group,ou=groups,dc=domain,dc=okta,dc=com with action
> Role - default and Access Duration - 1 day.
>
> Using ldap search as follows:
> ldapsearch -D "uid=serv...@domain.com,ou=users, dc=domain, dc=okta,
> dc=com" -W -H ldaps://domain.ldap.okta.com -b dc=domain,dc=okta,dc=com
> uid=test...@domain.com \* +
> This will list the various attributes of the user, but does not list
> the groups the user is a member of.
>
> To list groups the user is a member of I can do the following ldap search:
>
> ldapsearch -x -H ldaps://domain.ldap.okta.com -D
> "uid=serv...@domain.com,ou=users,dc=domain,dc=okta,dc=com" -W -b
> dc=domain,dc=okta,dc=com uid=test...@domain.com memberOf
>
> This will show me a long list of groups the user is a member of in the
> following format:
> memberOf: cn=miro_users,ou=groups,dc=domain,dc=okta,dc=com
>
> This is different from the typical AD approach to getting memberOf.
>
> I get the following when doing a radtest:
>
> radtest u...@domain.com  localhost:18120 12 testing123
> Sent Access-Request Id 184 from 0.0.0.0:57241 to 127.0.0.1:18120 length
> 106
> User-Name = "u...@domain.com"
> User-Password = ""
> NAS-IP-Address = 127.0.1.1
> NAS-Port = 12
> Message-Authenticator = 0x00
> Cleartext-Password = ""
>
> The above is repeated 3 times and then I get:
> (0) No reply from server for ID 184 socket 3
>
> Obviously Okta is not the usual IdP for RADIUS from what I can see and
> their LDAP implementation may be a bit different. In my google
> searches I see that players like SecureW2 using FreeRADIUS on the
> backend are using SAML connectivity with Okta. I've configured a SAML
> authentication source in PF, but that is as far as I've got so far.
>
> I tried to start PF FreeRADIUS in debug mode, but didn't have any
> success. In System Configuration | Services I stopped radiusd and
> radiusd-auth and tried using the following: freeradius -X -d
> /usr/local/pf/raddb -n auth
> That fails binding to status address of 127.0.0.1 port 18121: Address
> already in use. So, not sure how to get debug mode working to see more
> info on what is happening in RADIUS.
>
> At this point I'm pretty lost. Not sure what steps I'm missing in all
> of this and have tried to follow documentation to set things up, but
> I'm obviously missing some stuff.
>
> The goal is to get users to authenticate against Okta for Wifi access,
> if they belong to a certain group. Then depending on that group assign
> them the correct VLAN. We are using Unifi APs with a Unifi Cloud Key
> and have that currently working in NPS. Just need to move it over to
> PacketFence.
>
> Any assistance you can provide to get me working would be greatly
> appreciated.
>
> Thanks,
> Brian
>
___
PacketFence-users mailing list
PacketFence-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/packetfence-users
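[Editorial sketch, not part of the thread: the memberOf lines that the second ldapsearch command above returns can be reduced to plain group names for a quick membership check. The sample values below are hypothetical, taken from the format shown in the message.]

```shell
# Extract group CNs from ldapsearch "memberOf:" output lines.
# Sample input mimics the format quoted in the thread (values hypothetical).
printf '%s\n' \
  'memberOf: cn=miro_users,ou=groups,dc=domain,dc=okta,dc=com' \
  'memberOf: cn=Wireless_Users_Group,ou=groups,dc=domain,dc=okta,dc=com' |
  sed -n 's/^memberOf: cn=\([^,]*\),.*/\1/p'
# prints:
#   miro_users
#   Wireless_Users_Group
```

Grepping the result for the target group (e.g. Wireless_Users_Group) is a quick sanity check that the LDAP source is returning the membership PacketFence needs to match on.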


[PacketFence-users] FreeRADIUS Debug

2024-05-22 Thread Brian Blater via PacketFence-users
From a google search I did the following to get FreeRADIUS into debug mode:

In System Configuration | Services I stopped radiusd and
radiusd-auth and tried using the following: freeradius -X -d
/usr/local/pf/raddb -n auth

That didn't work. What is the command to get FreeRADIUS into debug
mode so I can look at what is happening on PF?

thx


___
PacketFence-users mailing list
PacketFence-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/packetfence-users
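[Editorial sketch, not from the thread: the "Address already in use" failure reported earlier means something was still bound to the radiusd status port when `freeradius -X -d /usr/local/pf/raddb -n auth` was retried. The port number (18121) is taken from the error message in the other thread; the check below is a generic one, not a documented PacketFence procedure.]

```shell
# Before retrying debug mode, confirm nothing still holds the status port.
# If ss is unavailable or nothing is listening, report the port as free.
ss -ltn 2>/dev/null | grep -w 18121 || echo "port 18121 free"
```

If a listener shows up, stop the remaining radiusd processes (or the corresponding PacketFence services) first, then rerun the freeradius debug command.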


[Sprinklerforum] Interstitial Heads

2024-05-22 Thread Brian Harris
Is there a benefit, and/or a requirement, to use interstitial heads instead of a
plain Jane 5.6K upright in those spaces?

Thank you,

Brian Harris, CET
BVS Systems Inc.
bvssystemsinc.com
Phone: 704.896.9989
Fax: 704.896.1935


_
SprinklerForum mailing list:
https://lists.firesprinkler.org/list/sprinklerforum.lists.firesprinkler.org
To unsubscribe send an email to sprinklerforum-le...@lists.firesprinkler.org

[prometheus-users] Re: Regular Expression and Label Action Support to match two or more source labels

2024-05-22 Thread 'Brian Candler' via Prometheus Users
I would assume that the reason this feature was added was because there 
wasn't a feasible alternative way to do it.

I suggest you upgrade to v2.45.5 which is the current "Long Term Stable" 
release.  The previous LTS release (v2.37) went end-of-life 
<https://prometheus.io/docs/introduction/release-cycle/> in July 2023, so 
it seems you're very likely running something unsupported at the moment.

On Wednesday 22 May 2024 at 11:52:03 UTC+1 tejaswini vadlamudi wrote:

> Sure Brian, I was aware of this.
> This config comes with a software change, but is there any possibility or 
> workaround in the old (< 2.41) Prometheus releases on this topic?
>
> /Teja
>
> On Wednesday, May 22, 2024 at 12:01:31 PM UTC+2 Brian Candler wrote:
>
>> Yes, there are similar relabel actions "keepequal" and "dropequal":
>>
>> https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
>>
>> These were added in v2.41.0 
>> <https://github.com/prometheus/prometheus/releases/v2.41.0> / 2022-12-20
>> https://github.com/prometheus/prometheus/pull/11564
>>
>> They behave slightly differently from VM: in Prometheus, the 
>> concatenation of source_labels is compared with target_label.
>>
>> On Tuesday 21 May 2024 at 15:43:05 UTC+1 tejaswini vadlamudi wrote:
>>
>>> The below relabeling rule from Victoria Metrics is useful for matching 
>>> accurate ports and dropping unwanted targets:
>>>
>>> - action: keep_if_equal
>>>   source_labels: 
>>> [__meta_kubernetes_service_annotation_prometheus_io_port, 
>>> __meta_kubernetes_pod_container_port_number]
>>> Does anyone know how we can compare two labels using Prometheus 
>>> Relabeling rules?
>>>
>>> Based on my analysis, Prometheus doesn't support regex patterns on 
>>> 1. backreferences like \1 
>>> 2. lookaheads or lookbehinds
>>>
>>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/0a936729-7115-4f5c-ae98-04e99a6287a8n%40googlegroups.com.


[prometheus-users] Re: Regular Expression and Label Action Support to match two or more source labels

2024-05-22 Thread 'Brian Candler' via Prometheus Users
Yes, there are similar relabel actions "keepequal" and "dropequal":
https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config

These were added in v2.41.0 
<https://github.com/prometheus/prometheus/releases/v2.41.0> / 2022-12-20
https://github.com/prometheus/prometheus/pull/11564

They behave slightly differently from VM: in Prometheus, the concatenation 
of source_labels is compared with target_label.
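[Editorial note: a direct Prometheus equivalent of the VM rule quoted below, using the keepequal action described above, would look roughly like this. Label names are taken from that example; this fragment is an untested sketch.]

```yaml
relabel_configs:
  # Keep the target only if the annotation value equals the container port;
  # the concatenated source_labels are compared with target_label's value.
  - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_port]
    action: keepequal
    target_label: __meta_kubernetes_pod_container_port_number
```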

On Tuesday 21 May 2024 at 15:43:05 UTC+1 tejaswini vadlamudi wrote:

> The below relabeling rule from Victoria Metrics is useful for matching 
> accurate ports and dropping unwanted targets:
>
> - action: keep_if_equal
>   source_labels: 
> [__meta_kubernetes_service_annotation_prometheus_io_port, 
> __meta_kubernetes_pod_container_port_number]
> Does anyone know how we can compare two labels using Prometheus Relabeling 
> rules?
>
> Based on my analysis, Prometheus doesn't support regex patterns on 
> 1. backreferences like \1 
> 2. lookaheads or lookbehinds
>

-- 
You received this message because you are subscribed to the Google Groups 
"Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to prometheus-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/prometheus-users/ee16deed-f7c8-4388-ae2f-78a767bb1cc6n%40googlegroups.com.


[coreboot] [coreboot - Bug #538] [Soft Brick] x230 Dock Causes Internal Display to "Permanently" Malfunction

2024-05-21 Thread Brian L
Issue #538 has been updated by Brian L.

Priority changed from High to Normal

Libgfxinit issue has been resolved. It was caused by a damaged LVDS cable, 
which was most likely jostled loose by the violent eject button mechanism on 
the Lenovo dock. Very interesting symptoms it ended up producing!

The VGA BIOS blob issue was identified as a regression caused by commit 42f0396a1028.

This issue can be closed (I don't seem to have the option to mark as closed). 
Thanks for your help!


Bug #538: [Soft Brick] x230 Dock Causes Internal Display to "Permanently" 
Malfunction
https://ticket.coreboot.org/issues/538#change-1850

* Author: Brian L
* Status: New
* Priority: Normal
* Target version: none
* Start date: 2024-05-14
* Affected hardware: Lenovo x230

Environment:  
  - Lenovo x230
  - Stock screen replaced with Pixel Qi (not sure if relevant) (plug & play 
LVDS)
  - Coreboot using Heads (coreboot + linuxboot)
  - Official lenovo docking station connected to external monitor via 
DisplayPort

Bug Trigger:  
Using Heads/coreboot fine for years with my Pixel Qi screen modded x230. I then 
bought a Lenovo docking station. Booted up, everything worked fine.  
Disconnected from dock, booted up, and there was no bios screen. Screen did not 
turn on until taken over by Linux Kernel. Once in userspace, wayland could no 
longer identify the monitor as a Pixel Qi or its proper resolution. EDID is 
blank. 
Booting with docking station allows bios to show on external display.

Restarting did *not* fix the issue, reflashing heads did *not* fix the issue, 
flashing skulls (coreboot + seabios) did *not* fix the issue.

Flashing stock bios *did fix* the issue. I can now see BIOS screen and get 
proper EDID in userspace whether on the dock or not. 
*However* reflashing coreboot again, even coming from stock bios working state, 
and I immediately now no longer get a BIOS screen or EDID, even without ever 
introducing the dock again. Essentially now bricked with anything but stock 
bios.

---Files
cbmem.log (62.6 KB)


-- 
You have received this notification because you have either subscribed to it, 
or are involved in it.
To change your notification preferences, please click here: 
https://ticket.coreboot.org/my/account
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org


Fedora Workstation 40 aarch64 download -- How to run Live CD installer?

2024-05-21 Thread Brian Masney
Hi all,

I want to put Fedora 40 on my Lenovo Thinkpad x13s laptop, which is an
aarch64-based laptop with a Qualcomm SoC. I downloaded the Fedora raw
image from [1], and I can boot from USB using the directions at [2].
All of the other supported architectures have an ISO available,
however aarch64 only has a raw image available.

In the past, I would dd the Fedora image directly to my nvme drive,
however this time I'd like to go through the installer so that I can
easily setup LUKS encryption on my nvme drive through the installer.
The raw image doesn't have the installer, and I didn't have luck
installing the anaconda-livecd package.

Is there a Live ISO available for aarch64 anywhere with an installer?
I looked on the alternative downloads page [3] and the only ISO is for
KDE.

[1] https://fedoraproject.org/workstation/download
[2] https://fedoraproject.org/wiki/Thinkpad_X13s
[3] https://alt.fedoraproject.org/alt/

Brian
--
___
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
Fedora Code of Conduct: 
https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: 
https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org
Do not reply to spam, report it: 
https://pagure.io/fedora-infrastructure/new_issue


Countdown - June PUG

2024-05-21 Thread Brian Walters

G'day all

The theme for June: 'Abstract'.

Submit here: http://pug.komkon.org/submit/

Submission Guidelines here:
http://pug.komkon.org/general/autosubmit.html


Cheers
Brian
++
Brian Walters
Western Sydney Australia
https://lyons-ryan.org/Travelling/brians-pics/


--
This email has been checked for viruses by Avast antivirus software.
www.avast.com
--
Pentax-Discuss Mail List
To unsubscribe send an email to pdml-le...@pdml.net
to UNSUBSCRIBE from the PDML, please visit the link directly above and follow 
the directions.


Re: [PATCH v3 4/4] target/hexagon: idef-parser simplify predicate init

2024-05-21 Thread Brian Cain



On 5/21/2024 3:16 PM, Anton Johansson via wrote:

Only predicate instruction arguments need to be initialized by
idef-parser. This commit removes registers from the init_list and
simplifies gen_inst_init_args() slightly.

Signed-off-by: Anton Johansson 
Reviewed-by: Taylor Simpson 
---


Reviewed-by: Brian Cain 


  target/hexagon/idef-parser/idef-parser.y|  2 --
  target/hexagon/idef-parser/parser-helpers.c | 26 +++--
  2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/target/hexagon/idef-parser/idef-parser.y 
b/target/hexagon/idef-parser/idef-parser.y
index cd2612eb8c..9ffb9f9699 100644
--- a/target/hexagon/idef-parser/idef-parser.y
+++ b/target/hexagon/idef-parser/idef-parser.y
@@ -233,8 +233,6 @@ code : '{' statements '}'
  argument_decl : REG
  {
  emit_arg(c, &@1, &$1);
-/* Enqueue register into initialization list */
-g_array_append_val(c->inst.init_list, $1);
  }
| PRED
  {
diff --git a/target/hexagon/idef-parser/parser-helpers.c 
b/target/hexagon/idef-parser/parser-helpers.c
index c150c308be..a7dcd85fe4 100644
--- a/target/hexagon/idef-parser/parser-helpers.c
+++ b/target/hexagon/idef-parser/parser-helpers.c
@@ -1652,26 +1652,28 @@ void gen_inst(Context *c, GString *iname)
  
  
  /*

- * Initialize declared but uninitialized registers, but only for
- * non-conditional instructions
+ * Initialize declared but uninitialized instruction arguments. Only needed for
+ * predicate arguments, initialization of registers is handled by the Hexagon
+ * frontend.
   */
  void gen_inst_init_args(Context *c, YYLTYPE *locp)
  {
+HexValue *val = NULL;
+char suffix;
+
+/* If init_list is NULL arguments have already been initialized */
  if (!c->inst.init_list) {
  return;
  }
  
  for (unsigned i = 0; i < c->inst.init_list->len; i++) {

-HexValue *val = &g_array_index(c->inst.init_list, HexValue, i);
-if (val->type == REGISTER_ARG) {
-/* Nothing to do here */
-} else if (val->type == PREDICATE) {
-char suffix = val->is_dotnew ? 'N' : 'V';
-EMIT_HEAD(c, "tcg_gen_movi_i%u(P%c%c, 0);\n", val->bit_width,
-  val->pred.id, suffix);
-} else {
-yyassert(c, locp, false, "Invalid arg type!");
-}
+val = &g_array_index(c->inst.init_list, HexValue, i);
+suffix = val->is_dotnew ? 'N' : 'V';
+yyassert(c, locp, val->type == PREDICATE,
+ "Only predicates need to be initialized!");
+yyassert(c, locp, val->bit_width == 32,
+ "Predicates should always be 32 bits");
+EMIT_HEAD(c, "tcg_gen_movi_i32(P%c%c, 0);\n", val->pred.id, suffix);
  }
  
  /* Free argument init list once we have initialized everything */




Re: [PATCH v3 3/4] target/hexagon: idef-parser fix leak of init_list

2024-05-21 Thread Brian Cain



On 5/21/2024 3:16 PM, Anton Johansson via wrote:

gen_inst_init_args() is called for instructions using a predicate as an
rvalue. Upon first call, the list of arguments which might need
initialization init_list is freed to indicate that they have been
processed. For instructions without an rvalue predicate,
gen_inst_init_args() isn't called and init_list will never be freed.

Free init_list from free_instruction() if it hasn't already been freed.
A comment in free_instruction is also updated.

Signed-off-by: Anton Johansson 
Reviewed-by: Taylor Simpson 
---


Reviewed-by: Brian Cain 


  target/hexagon/idef-parser/parser-helpers.c | 9 -
  1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/target/hexagon/idef-parser/parser-helpers.c 
b/target/hexagon/idef-parser/parser-helpers.c
index 95f2b43076..c150c308be 100644
--- a/target/hexagon/idef-parser/parser-helpers.c
+++ b/target/hexagon/idef-parser/parser-helpers.c
@@ -2121,9 +2121,16 @@ void free_instruction(Context *c)
  g_string_free(g_array_index(c->inst.strings, GString*, i), TRUE);
  }
  g_array_free(c->inst.strings, TRUE);
+/*
+ * Free list of arguments that might need initialization, if they haven't
+ * already been freed.
+ */
+if (c->inst.init_list) {
+g_array_free(c->inst.init_list, TRUE);
+}
  /* Free INAME token value */
  g_string_free(c->inst.name, TRUE);
-/* Free variables and registers */
+/* Free declared TCGv variables */
  g_array_free(c->inst.allocated, TRUE);
  /* Initialize instruction-specific portion of the context */
  memset(&(c->inst), 0, sizeof(Inst));




Re: [PATCH v3 1/4] target/hexagon: idef-parser remove unused defines

2024-05-21 Thread Brian Cain



On 5/21/2024 3:16 PM, Anton Johansson via wrote:

Before switching to GArray/g_string_printf we used fixed size arrays for
output buffers and instructions arguments among other things.

Macros defining the sizes of these buffers were left behind, remove
them.

Signed-off-by: Anton Johansson 
Reviewed-by: Taylor Simpson 
---


Reviewed-by: Brian Cain 



  target/hexagon/idef-parser/idef-parser.h | 10 --
  1 file changed, 10 deletions(-)

diff --git a/target/hexagon/idef-parser/idef-parser.h 
b/target/hexagon/idef-parser/idef-parser.h
index 3faa1deecd..8594cbe3a2 100644
--- a/target/hexagon/idef-parser/idef-parser.h
+++ b/target/hexagon/idef-parser/idef-parser.h
@@ -23,16 +23,6 @@
  #include 
  #include 
  
-#define TCGV_NAME_SIZE 7

-#define MAX_WRITTEN_REGS 32
-#define OFFSET_STR_LEN 32
-#define ALLOC_LIST_LEN 32
-#define ALLOC_NAME_SIZE 32
-#define INIT_LIST_LEN 32
-#define OUT_BUF_LEN (1024 * 1024)
-#define SIGNATURE_BUF_LEN (128 * 1024)
-#define HEADER_BUF_LEN (128 * 1024)
-
  /* Variadic macros to wrap the buffer printing functions */
  #define EMIT(c, ...)  
 \
  do {  
 \




Re: [PATCH v3 2/4] target/hexagon: idef-parser remove undefined functions

2024-05-21 Thread Brian Cain



On 5/21/2024 3:16 PM, Anton Johansson via wrote:

Signed-off-by: Anton Johansson 
Reviewed-by: Taylor Simpson 
---


Reviewed-by: Brian Cain 



  target/hexagon/idef-parser/parser-helpers.h | 13 -
  1 file changed, 13 deletions(-)

diff --git a/target/hexagon/idef-parser/parser-helpers.h 
b/target/hexagon/idef-parser/parser-helpers.h
index 7c58087169..2087d534a9 100644
--- a/target/hexagon/idef-parser/parser-helpers.h
+++ b/target/hexagon/idef-parser/parser-helpers.h
@@ -143,8 +143,6 @@ void commit(Context *c);
  
  #define OUT(c, locp, ...) FOR_EACH((c), (locp), OUT_IMPL, __VA_ARGS__)
  
-const char *cmp_swap(Context *c, YYLTYPE *locp, const char *type);

-
  /**
   * Temporary values creation
   */
@@ -236,8 +234,6 @@ HexValue gen_extract_op(Context *c,
  HexValue *index,
  HexExtract *extract);
  
-HexValue gen_read_reg(Context *c, YYLTYPE *locp, HexValue *reg);

-
  void gen_write_reg(Context *c, YYLTYPE *locp, HexValue *reg, HexValue *value);
  
  void gen_assign(Context *c,

@@ -274,13 +270,6 @@ HexValue gen_ctpop_op(Context *c, YYLTYPE *locp, HexValue 
*src);
  
  HexValue gen_rotl(Context *c, YYLTYPE *locp, HexValue *src, HexValue *n);
  
-HexValue gen_deinterleave(Context *c, YYLTYPE *locp, HexValue *mixed);

-
-HexValue gen_interleave(Context *c,
-YYLTYPE *locp,
-HexValue *odd,
-HexValue *even);
-
  HexValue gen_carry_from_add(Context *c,
  YYLTYPE *locp,
  HexValue *op1,
@@ -349,8 +338,6 @@ HexValue gen_rvalue_ternary(Context *c, YYLTYPE *locp, 
HexValue *cond,
  
  const char *cond_to_str(TCGCond cond);
  
-void emit_header(Context *c);

-
  void emit_arg(Context *c, YYLTYPE *locp, HexValue *arg);
  
  void emit_footer(Context *c);




[Bug 2064090] Re: Automatically installed bit not transitioned to t64 libraries

2024-05-21 Thread Brian Murray
I think it would be worthwhile to send an e-mail to ubuntu-devel about
this so that any early upgraders to Noble can clean up any packages
which were marked as manually installed.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2064090

Title:
  Automatically installed bit not transitioned to t64 libraries

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/+bug/2064090/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Re: [ITA] dateutils

2024-05-21 Thread Brian Inglis via Cygwin-apps

On 2024-05-21 09:57, Brian Inglis via Cygwin-apps wrote:

On 2024-05-21 07:17, Jon Turney via Cygwin-apps wrote:

On 17/05/2024 06:43, Brian Inglis via Cygwin-apps wrote:

Date manipulation utilities
I would like to adopt the above orphaned package.

Thanks.
I added this to your packages.

https://cygwin.com/cgit/cygwin-packages/dateutils/tree/dateutils.cygport?h=playground

Please cleanup all the commented out detritus.


Sure!

What is the reasoning for changing SRC_URI to point to github?  The project 
homepage still points to bitbucket.org for downloads.


They provide the same release tarball on github, and README on both sites state 
that "Dateutils are hosted primarily on github:", so I see no reason to use what 
appears to be the legacy repo at another site, although I would treat them as 
project mirrors if possible.
Looking at latest release downloads they are 50/50 across both sites so far, 
although the signature downloads from github are much higher; see:


 https://bitbucket.org/hroptatyr/dateutils/downloads/

https://somsubhra.github.io/github-release-stats/?username=hroptatyr&repository=dateutils&page=1&per_page=100



"*** Warning: DEPEND is deprecated, use BUILD_REQUIRES instead."

Scallywag runs:
https://cygwin.com/cgi-bin2/jobs.cgi?srcpkg=dateutils
The single test failure is not reproducible standalone, and appears to
be a Windows, Cygwin, or shell environment space limitation, due to
running 888 tests on a single command line?

Can you share the reasoning that lets you reach that conclusion from:

FAIL: tzmap_check_02.ctst


The original failure log messages from xargs complained about lack of 
environment space.


The build directory should be available as an artifact which may contain more 
detailed information on the failure.


I wish - the test runner is very tidy - just the trs and log.


Have you established that this failure is not a regression?


Running standalone from test dir with:

 $ make --trace TESTS=tzmap_check_02.ctst V=1 check

passes with all the usual messages - attached.


Error message in attached log is:

xargs: environment is too large for exec

consistent across local and scallywag builds.

--
Take care. Thanks, Brian Inglis  Calgary, Alberta, Canada

La perfection est atteinte   Perfection is achieved
non pas lorsqu'il n'y a plus rien à ajouter  not when there is no more to add
mais lorsqu'il n'y a plus rien à retirer but when there is no more to cut
-- Antoine de Saint-Exupéry

$ find "${root}/lib" "${TZMAP_DIR}/../lib" -name '*.tzmcc' | xargs "${TZMAP}" check
xargs: environment is too large for exec
$? 1
FAIL tzmap_check_02.ctst (exit status: 1)
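[Editorial sketch, not from the thread: the xargs failure above is about the exec() size budget, which the environment and the argument list share. A quick way to see the relevant numbers on any POSIX system is shown below; exact kernel accounting differs per platform, and Cygwin's limit in particular is smaller than typical Linux values.]

```shell
# How much of the exec() budget does the current environment consume?
env_bytes=$(env | wc -c)
arg_max=$(getconf ARG_MAX)
echo "environment: ${env_bytes} bytes of an ARG_MAX budget of ${arg_max}"
# xargs reserves headroom from this budget; a bloated environment (e.g. on a
# CI runner) shrinks what remains for the find(1) results piped into it.
```

This matches the observed symptom: the same test passes standalone (small environment) but fails under the full build harness.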


[coreboot] [coreboot - Bug #538] [Soft Brick] x230 Dock Causes Internal Display to "Permanently" Malfunction

2024-05-21 Thread Brian L
Issue #538 has been updated by Brian L.


Nico Huber wrote in #note-17:
> Brian L wrote in #note-16:
> > > If you want to get libgfxinit working again, a log would really be the 
> > > next best step.
> > 
> > See attached full ADA debug log, thanks
> 
> Thanks. Not what I expected. Gave me some ideas, though, even if it feels a 
> bit desperate: There's a timeout waiting for the GMBUS (I2C) controller when 
> trying to read an EDID over the *DP* connector. Not directly a problem, but 
> this controller is known to sometimes act rather erratic, and maybe, maybe it 
> causes problems even for other connectors probed later.
> ```
> [0.053716] HW.GFX.GMA.Registers.Wait:  0x0800 <- 0x0800 & 
> 0x000c5108:PCH_GMBUS2
> [0.553718] HW.GFX.GMA.Registers.Wait: Timed Out!
> ```
> This happens after we request power to the panel and delays for incredibly 
> long 500ms. That makes a timing issue unlikely. Two things I'd try:
> 1. Place the LVDS first in 
> `src/mainboard/lenovo/x230/variants/x230/gma-mainboard.ads` (if that helps, 
> try the old order with HDMI1 removed).
> 2. Add another desperate delay to the Wait_On() procedure nevertheless:
> ```
> diff --git a/common/hw-gfx-gma-panel.adb b/common/hw-gfx-gma-panel.adb
> index 532bf674b41d..c546b81dc820 100644
> --- a/common/hw-gfx-gma-panel.adb
> +++ b/common/hw-gfx-gma-panel.adb
> @@ -384,6 +384,8 @@ is
>   TOut_MS  => 300);
>  
>Registers.Unset_Mask (PP (Panel).CONTROL, PCH_PP_CONTROL_VDD_OVERRIDE);
> +
> +  Time.M_Delay (500);
> end Wait_On;
>  
> procedure Off (Panel : Panel_Control)
> ```

While investigating a possible hardware issue, removing the bezel of my display 
was enough to take the screen completely offline. At this time I consider 
hardware failure the most likely root cause. Providing this update so no one 
wastes time on this until I have had a chance to rule that out.


Bug #538: [Soft Brick] x230 Dock Causes Internal Display to "Permanently" 
Malfunction
https://ticket.coreboot.org/issues/538#change-1849

* Author: Brian L
* Status: New
* Priority: High
* Target version: none
* Start date: 2024-05-14
* Affected hardware: Lenovo x230

Environment:  
  - Lenovo x230
  - Stock screen replaced with Pixel Qi (not sure if relevant) (plug & play 
LVDS)
  - Coreboot using Heads (coreboot + linuxboot)
  - Official lenovo docking station connected to external monitor via 
DisplayPort

Bug Trigger:  
Using Heads/coreboot fine for years with my Pixel Qi screen modded x230. I then 
bought a Lenovo docking station. Booted up, everything worked fine.  
Disconnected from dock, booted up, and there was no bios screen. Screen did not 
turn on until taken over by Linux Kernel. Once in userspace, wayland could no 
longer identify the monitor as a Pixel Qi or its proper resolution. EDID is 
blank. 
Booting with docking station allows bios to show on external display.

Restarting did *not* fix the issue, reflashing heads did *not* fix the issue, 
flashing skulls (coreboot + seabios) did *not* fix the issue.

Flashing stock bios *did fix* the issue. I can now see BIOS screen and get 
proper EDID in userspace whether on the dock or not. 
*However* reflashing coreboot again, even coming from stock bios working state, 
and I immediately now no longer get a BIOS screen or EDID, even without ever 
introducing the dock again. Essentially now bricked with anything but stock 
bios.

---Files
cbmem.log (62.6 KB)


-- 
You have received this notification because you have either subscribed to it, 
or are involved in it.
To change your notification preferences, please click here: 
https://ticket.coreboot.org/my/account
___
coreboot mailing list -- coreboot@coreboot.org
To unsubscribe send an email to coreboot-le...@coreboot.org

