Re: recovery modules

2024-05-20 Thread Nathan Bossart
rebased

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
From c23ddbe1dac8b9a79db31ad67df423848e475905 Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Wed, 15 Feb 2023 14:28:53 -0800
Subject: [PATCH v23 1/5] introduce routine for checking mutually exclusive
 string GUCs

---
 src/backend/postmaster/pgarch.c |  8 +++-----
 src/backend/utils/misc/guc.c    | 22 ++++++++++++++++++++++
 src/include/utils/guc.h         |  3 +++
 3 files changed, 28 insertions(+), 5 deletions(-)

diff --git a/src/backend/postmaster/pgarch.c b/src/backend/postmaster/pgarch.c
index 02f91431f5..5f1a6f190d 100644
--- a/src/backend/postmaster/pgarch.c
+++ b/src/backend/postmaster/pgarch.c
@@ -912,11 +912,9 @@ LoadArchiveLibrary(void)
 {
ArchiveModuleInit archive_init;
 
-	if (XLogArchiveLibrary[0] != '\0' && XLogArchiveCommand[0] != '\0')
-		ereport(ERROR,
-				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
-				 errmsg("both \"archive_command\" and \"archive_library\" set"),
-				 errdetail("Only one of \"archive_command\", \"archive_library\" may be set.")));
+	(void) CheckMutuallyExclusiveStringGUCs(XLogArchiveLibrary, "archive_library",
+											XLogArchiveCommand, "archive_command",
+											ERROR);
 
/*
 	 * If shell archiving is enabled, use our special initialization function.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 547cecde24..05dc5303bc 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -2659,6 +2659,28 @@ ReportGUCOption(struct config_generic *record)
pfree(val);
 }
 
+/*
+ * If both parameters are set, emits a log message at 'elevel' and returns
+ * false.  Otherwise, returns true.
+ */
+bool
+CheckMutuallyExclusiveStringGUCs(const char *p1val, const char *p1name,
+								 const char *p2val, const char *p2name,
+								 int elevel)
+{
+   if (p1val[0] != '\0' && p2val[0] != '\0')
+   {
+   ereport(elevel,
+   (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+				 errmsg("both \"%s\" and \"%s\" set", p1name, p2name),
+				 errdetail("Only one of \"%s\", \"%s\" may be set.",
+						   p1name, p2name)));
+   return false;
+   }
+
+   return true;
+}
+
 /*
  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
  * to the given base unit.  'value' and 'unit' are the input value and unit
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index e4a594b5e8..018bb7e55b 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -376,6 +376,9 @@ extern void RestrictSearchPath(void);
 extern void AtEOXact_GUC(bool isCommit, int nestLevel);
 extern void BeginReportingGUCOptions(void);
 extern void ReportChangedGUCOptions(void);
+extern bool CheckMutuallyExclusiveStringGUCs(const char *p1val, const char *p1name,
+											 const char *p2val, const char *p2name,
+											 int elevel);
 extern void ParseLongOption(const char *string, char **name, char **value);
 extern const char *get_config_unit_name(int flags);
 extern bool parse_int(const char *value, int *result, int flags,
-- 
2.39.3 (Apple Git-146)

From b35cd2dd0b360a50af7ab9175b2887646ba57f37 Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Wed, 15 Feb 2023 10:36:00 -0800
Subject: [PATCH v23 2/5] refactor code for restoring via shell

---
 src/backend/Makefile  |   2 +-
 src/backend/access/transam/timeline.c |  12 +-
 src/backend/access/transam/xlog.c |  50 -
 src/backend/access/transam/xlogarchive.c  | 167 ---
 src/backend/access/transam/xlogrecovery.c |   3 +-
 src/backend/meson.build   |   1 +
 src/backend/postmaster/startup.c  |  16 +-
 src/backend/restore/Makefile  |  18 ++
 src/backend/restore/meson.build   |   5 +
 src/backend/restore/shell_restore.c   | 245 ++
 src/include/Makefile  |   2 +-
 src/include/access/xlogarchive.h  |   9 +-
 src/include/meson.build   |   1 +
 src/include/postmaster/startup.h  |   1 +
 src/include/restore/shell_restore.h   |  26 +++
 src/tools/p

Stus-List Re: C&C 35 Mk 2 available

2024-05-20 Thread Nathan Post via CnC-List
David,

If you are looking to donate it, SailMaine (https://www.sailmaine.org/) in
Portland could be interested in a donation. They have a great kids sailing
program and use adult sailing memberships on keel boats to fund it. I race
on their J/22 fleet and volunteer at Sail Maine.
They just added a J/35 to their fleet this summer so they might be
interested in another boat in that size range - I am not sure. Reach out on
the website or you can contact Ben Lewis  who is their
adult program coordinator. What condition is your boat in and how is she
equipped? Is she in reasonable operating condition to be able to sail her
safely up to Maine? If so and if you want to donate her I can put a crew
together to sail her up in late June (earliest time that works in my
schedule) - you can let Ben know that I am offering to do that.

Nathan
S/V Wisper
1981 C&C KCB
Portland ME

~~~
Nathan Post
+1 (781)  605-8671
Please show your appreciation for this list and the Photo Album site and help 
me pay the associated bills.  Make a contribution at:
https://www.paypal.me/stumurray
Thanks for your help.
Stu

Re: [PATCH] Check the change argument for a double minus at the start.

2024-05-19 Thread Nathan Hartman
On Sun, May 19, 2024 at 4:09 PM Timofey Zhakov  wrote:
>
> Hi,
>
> I found a small bug in parsing a change revision: if the number given
> to the --change argument starts with a double minus or with `-r-`,
> the command aborts. This patch fixes it.
>
> Steps to reproduce:
>
> $ svn diff https://svn.apache.org/repos/asf -c --123
> svn: E235000: In file '..\..\..\subversion\libsvn_client\ra.c' line
> 692: assertion failed (SVN_IS_VALID_REVNUM(start_revnum))
>
> Or...
>
> $ svn diff https://svn.apache.org/repos/asf -c -r-123
> svn: E235000: In file '..\..\..\subversion\libsvn_client\ra.c' line
> 692: assertion failed (SVN_IS_VALID_REVNUM(start_revnum))
>
> The same would happen if the svn diff command is invoked from a
> working copy, without URL.
>
> [[[
> Fix bug: check the change argument for a double minus at the start.
>
> If changeno is negative and is_negative is TRUE, raise
> SVN_ERR_CL_ARG_PARSING_ERROR, because a string with a double minus is
> not a valid number.
>
> * subversion/svn/svn.c
>   (sub_main): If changeno is negative and is_negative is TRUE, raise
>   SVN_ERR_CL_ARG_PARSING_ERROR.
> * subversion/tests/cmdline/diff_tests.py
>   (diff_invalid_change_arg): New test.
>   (test_list): Run new test.
> ]]]
>
> Best regards,
> Timofei Zhakov


Good catch! I agree we should issue a proper error and not assert.

While studying this change, I noticed a second similar bug if we are
given a range and the second value is negative.

In other words, this works correctly:

$ svn diff -c r1917671-1917672 http://svn.apache.org/repos/asf/subversion/trunk

But this doesn't; notice the double minus signs:

$ svn diff -c r1917671--1917672 http://svn.apache.org/repos/asf/subversion/trunk
svn: E235000: In file 'subversion/libsvn_client/ra.c' line 682:
assertion failed (SVN_IS_VALID_REVNUM(start_revnum))
Abort trap: 6

The first minus sign is correctly interpreted to indicate a range, but
the second minus sign is read by strtol(). That's several lines above
your change in svn.c (line 2379 on trunk@1917826). We are doing:

changeno_end = strtol(s, , 10);

but we are not checking if 'changeno_end' is negative (which it isn't
allowed to be).

I'm okay to commit this patch now and handle the second bug in a
follow-up, or handle both bugs together. Let me know what you prefer.

Cheers,
Nathan


[nznog] NZNOG 2024 Post Conference Survey

2024-05-19 Thread Nathan Ward
Apologies if you have received this through other channels.

NZNOG 2024 was held in Nelson in April.

Each year we produce a survey. It is very useful; for example, it helps us
to:
- Give feedback to speakers
- Get feedback ourselves around the conference in general, and programme
- Plan future NZNOG conferences

Please help us out by completing our survey, at the following URL:
https://www.surveymonkey.com/r/B6GS7MT

Even if you did not attend, please have a look - you can of course skip the
sections about feedback for 2024, and provide feedback to help us plan
future conferences.

--
Nathan Ward for the NZNOG Trustees
___
NZNOG mailing list -- nznog@list.waikato.ac.nz
To unsubscribe send an email to nznog-le...@list.waikato.ac.nz


Re: problems with "Shared Memory and Semaphores" section of docs

2024-05-17 Thread Nathan Bossart
On Fri, May 17, 2024 at 06:30:08PM +, Imseih (AWS), Sami wrote:
>> The advantage of the GUC is that its value could be seen before trying to
>> actually start the server. 
> 
> Only if they have a sample in postgresql.conf file, right? 
> A GUC like shared_memory_size_in_huge_pages will not be.

shared_memory_size_in_huge_pages is computed at runtime and can be viewed
with "postgres -C" before actually trying to start the server [0].

[0] https://www.postgresql.org/docs/devel/kernel-resources.html#LINUX-HUGE-PAGES
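For example (the data directory path is a placeholder):

```shell
# "postgres -C" computes and prints a runtime parameter for the given
# data directory without actually starting the server.
postgres -D /path/to/datadir -C shared_memory_size_in_huge_pages
```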

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: problems with "Shared Memory and Semaphores" section of docs

2024-05-17 Thread Nathan Bossart
On Fri, May 17, 2024 at 12:48:37PM -0500, Nathan Bossart wrote:
> On Fri, May 17, 2024 at 01:09:55PM -0400, Tom Lane wrote:
>> Nathan Bossart  writes:
>>> At a bare minimum, we should probably fix the obvious problems, but I
>>> wonder if we could simplify this section a bit, too.
>> 
>> Yup.  "The definition of insanity is doing the same thing over and
>> over and expecting different results."  Time to give up on documenting
>> these things in such detail.  Anybody who really wants to know can
>> look at the source code.
> 
> Cool.  I'll at least fix the back-branches as-is, but I'll see about
> revamping this stuff for v18.

Attached is probably the absolute least we should do for the back-branches.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
From 853104fa59bb1c219f02f71ece0d5106cb6c0588 Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Fri, 17 May 2024 14:17:59 -0500
Subject: [PATCH v1 1/1] fix kernel resources docs on back-branches

---
 doc/src/sgml/runtime.sgml | 9 +
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
index 6047b8171d..883a849e6f 100644
--- a/doc/src/sgml/runtime.sgml
+++ b/doc/src/sgml/runtime.sgml
@@ -781,13 +781,13 @@ psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such

 SEMMNI
 Maximum number of semaphore identifiers (i.e., sets)
-at least ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 5) / 16) plus room for other applications
+at least ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 7) / 16) plus room for other applications

 

 SEMMNS
 Maximum number of semaphores system-wide
-ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 5) / 16) * 17 plus room for other applications
+ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 7) / 16) * 17 plus room for other applications

 

@@ -838,7 +838,8 @@ psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such
 When using System V semaphores,
 PostgreSQL uses one semaphore per allowed connection
 (), allowed autovacuum worker process
-() and allowed background
+(), allowed WAL sender process
+(), and allowed background
 process (), in sets of 16.
 Each such set will
 also contain a 17th semaphore which contains a magic
@@ -852,7 +853,7 @@ psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such
 linkend="sysvipc-parameters"/>).  The parameter SEMMNI
 determines the limit on the number of semaphore sets that can
 exist on the system at one time.  Hence this parameter must be at
-least ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 5) / 16).
+least ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 7) / 16).
 Lowering the number
 of allowed connections is a temporary workaround for failures,
 which are usually confusingly worded No space
-- 
2.25.1



Re: [PATCH] kunit: tool: Build compile_commands.json

2024-05-17 Thread Nathan Chancellor
Hi Brendan,

On Thu, May 16, 2024 at 07:40:53PM +, Brendan Jackman wrote:
> compile_commands.json is used by clangd[1] to provide code navigation
> and completion functionality to editors. See [2] for an example
> configuration that includes this functionality for VSCode.
> 
> It can currently be built manually when using kunit.py, by running:
> 
>   ./scripts/clang-tools/gen_compile_commands.py -d .kunit
> 
> With this change however, it's built automatically so you don't need to
> manually keep it up to date.
> 
> Unlike the manual approach, having make build the compile_commands.json
> means that it appears in the build output tree instead of at the root of
> the source tree, so you'll need to add --compile-commands-dir=.kunit to
> your clangd args for it to be found. This might turn out to be pretty
> annoying, I'm not sure yet. If so maybe we can later add some hackery to
> kunit.py to work around it.
> 
> [1] https://clangd.llvm.org/
> [2] https://github.com/FlorentRevest/linux-kernel-vscode
> 
> Signed-off-by: Brendan Jackman 

This makes sense to do automatically in my opinion, as Python will
already be available (which is the only dependency of
gen_compile_commands.py as far as I am aware) and it should not take
that long to generate.
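For anyone trying this today, the manual flow the patch automates (the commands are the ones quoted in the commit message; the clangd flag is the one it mentions):

```shell
# generate compile_commands.json for a .kunit build tree by hand
./scripts/clang-tools/gen_compile_commands.py -d .kunit
# then have clangd look in the build tree rather than the source root
clangd --compile-commands-dir=.kunit
```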

Reviewed-by: Nathan Chancellor 

> ---
>  tools/testing/kunit/kunit_kernel.py | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/testing/kunit/kunit_kernel.py 
> b/tools/testing/kunit/kunit_kernel.py
> index 7254c110ff23..61931c4926fd 100644
> --- a/tools/testing/kunit/kunit_kernel.py
> +++ b/tools/testing/kunit/kunit_kernel.py
> @@ -72,7 +72,8 @@ class LinuxSourceTreeOperations:
>   raise ConfigError(e.output.decode())
>  
>  	def make(self, jobs: int, build_dir: str, make_options: Optional[List[str]]) -> None:
> -		command = ['make', 'ARCH=' + self._linux_arch, 'O=' + build_dir, '--jobs=' + str(jobs)]
> +		command = ['make', 'all', 'compile_commands.json', 'ARCH=' + self._linux_arch,
> +			   'O=' + build_dir, '--jobs=' + str(jobs)]
>   if make_options:
>   command.extend(make_options)
>   if self._cross_compile:
> 
> ---
> base-commit: 3c999d1ae3c75991902a1a7dad0cb62c2a3008b4
> change-id: 20240516-kunit-compile-commands-d994074fc2be
> 
> Best regards,
> -- 
> Brendan Jackman 
> 
> 



Re: problems with "Shared Memory and Semaphores" section of docs

2024-05-17 Thread Nathan Bossart
On Fri, May 17, 2024 at 01:09:55PM -0400, Tom Lane wrote:
> Nathan Bossart  writes:
>> [ many, many problems in documented formulas ]
> 
>> At a bare minimum, we should probably fix the obvious problems, but I
>> wonder if we could simplify this section a bit, too.
> 
> Yup.  "The definition of insanity is doing the same thing over and
> over and expecting different results."  Time to give up on documenting
> these things in such detail.  Anybody who really wants to know can
> look at the source code.

Cool.  I'll at least fix the back-branches as-is, but I'll see about
revamping this stuff for v18.

>> If the exact values
>> are important, maybe we could introduce more GUCs like
>> shared_memory_size_in_huge_pages that can be consulted (instead of
>> requiring users to break out their calculators).
> 
> I don't especially like shared_memory_size_in_huge_pages, and I don't
> want to introduce more of those.  GUCs are not the right way to expose
> values that you can't actually set.  (Yeah, I'm guilty of some of the
> existing ones like that, but it's still not a good thing.)  Maybe it's
> time to introduce a system view for such things?  It could be really
> simple, with name and value, or we could try to steal some additional
> ideas such as units from pg_settings.

The advantage of the GUC is that its value could be seen before trying to
actually start the server.  I don't dispute that it's not the right way to
surface this information, though.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




problems with "Shared Memory and Semaphores" section of docs

2024-05-17 Thread Nathan Bossart
(moving to a new thread)

On Thu, May 16, 2024 at 09:16:46PM -0500, Nathan Bossart wrote:
> On Thu, May 16, 2024 at 04:37:10PM +, Imseih (AWS), Sami wrote:
>> Also, Not sure if I am mistaken here, but the "+ 5" in the existing docs
>> seems wrong.
>>  
>> If it refers to NUM_AUXILIARY_PROCS defined in 
>> include/storage/proc.h, it should be a "6"
>> 
>> #define NUM_AUXILIARY_PROCS 6
>> 
>> This is not a consequence of this patch, and can be dealt with
>> In a separate thread if my understanding is correct.
> 
> Ha, I think it should actually be "+ 7"!  The value is calculated as
> 
>   MaxConnections + autovacuum_max_workers + 1 + max_worker_processes + 
> max_wal_senders + 6
> 
> Looking at the history, this documentation tends to be wrong quite often.
> In v9.2, the checkpointer was introduced, and these formulas were not
> updated.  In v9.3, background worker processes were introduced, and the
> formulas were still not updated.  Finally, in v9.6, it was fixed in commit
> 597f7e3.  Then, in v14, the archiver process was made an auxiliary process
> (commit d75288f), making the formulas out-of-date again.  And in v17, the
> WAL summarizer was added.
> 
> On top of this, IIUC you actually need even more semaphores if your system
> doesn't support atomics, and from a quick skim this doesn't seem to be
> covered in this documentation.

A couple of other problems I noticed:

* max_wal_senders is missing from this sentence:

When using System V semaphores,
PostgreSQL uses one semaphore per allowed 
connection
(), allowed autovacuum worker process
() and allowed background
process (), in sets of 16.

* AFAICT the discussion about the formulas in the paragraphs following the
  table doesn't explain the reason for the constant.

* IMHO the following sentence is difficult to decipher, and I can't tell if
  it actually matches the formula in the table:

The maximum number of semaphores in the system
is set by SEMMNS, which consequently must be at least
as high as max_connections plus
autovacuum_max_workers plus 
max_wal_senders,
plus max_worker_processes, plus one extra for each 16
allowed connections plus workers (see the formula in ).

At a bare minimum, we should probably fix the obvious problems, but I
wonder if we could simplify this section a bit, too.  If the exact values
are important, maybe we could introduce more GUCs like
shared_memory_size_in_huge_pages that can be consulted (instead of
requiring users to break out their calculators).

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: [lustre-discuss] Unexpected result with overstriping

2024-05-17 Thread Nathan Dauchy via lustre-discuss
John,

I believe the lfs-setstripe man page is incorrect (or at least misleading) in 
this case. I recall seeing 2000 hardcoded as a maximum, so it appears to be 
picking that.

Using "-C -1" to put a single stripe on each OST wouldn't have any benefit over 
"-c -1".   IMHO, it would probably be more useful to have negative values 
represent number of stripes per OST. 

-Nathan

From: lustre-discuss  on behalf of 
John Bauer 
Sent: Friday, May 17, 2024 8:48 AM
To: lustre-discuss 
Subject: [lustre-discuss] Unexpected result with overstriping

External email: Use caution opening links or attachments


Good morning all,

I am playing around with overstriping a bit and I found a behavior that, to me, 
would seem unexpected.  The documentation for -C -1  indicates that the file 
should be striped over all available OSTs.  The pool, which happens to be the 
default, is ssd-pool which has 32 OSTs.  I got a stripeCount of 2000.  Is this 
as expected?

pfe20.jbauer2 213> rm -f /nobackup/jbauer2/ddd.dat
pfe20.jbauer2 214> lfs setstripe -C -1 /nobackup/jbauer2/ddd.dat
pfe20.jbauer2 215> lfs getstripe /nobackup/jbauer2/ddd.dat
/nobackup/jbauer2/ddd.dat
lmm_stripe_count:  2000
lmm_stripe_size:   1048576
lmm_pattern:   raid0,overstriped
lmm_layout_gen:0
lmm_stripe_offset: 119
lmm_pool:  ssd-pool
	obdidx		 objid		 objid		 group
	   119	      52386287	      0x31f59ef	             0
	   123	      52347947	      0x31ec42b	             0
	   127	      52734487	      0x324aa17	             0
	   121	      52839396	      0x32643e4	             0
	   131	      52742709	      0x324ca35	             0
	   116	      52242659	      0x31d28e3	             0
	   117	      51831125	      0x316e155	             0
	   124	      52425218	      0x31ff202	             0
	   125	      52402722	      0x31f9a22	             0
	   106	      52700581	      0x32425a5	             0

edited for brevity

___
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org


Re: improve performance of pg_dump --binary-upgrade

2024-05-17 Thread Nathan Bossart
rebased

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
From ded5e61ff631c2d02835fdba941068dcd86741ce Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Thu, 18 Apr 2024 15:15:19 -0500
Subject: [PATCH v5 1/2] Remove is_index parameter from
 binary_upgrade_set_pg_class_oids().

---
 src/bin/pg_dump/pg_dump.c | 20 ++--
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index e324070828..0fbb8e8831 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -324,7 +324,7 @@ static void binary_upgrade_set_type_oids_by_rel(Archive *fout,
 const TableInfo *tbinfo);
 static void binary_upgrade_set_pg_class_oids(Archive *fout,
 			 PQExpBuffer upgrade_buffer,
-			 Oid pg_class_oid, bool is_index);
+			 Oid pg_class_oid);
 static void binary_upgrade_extension_member(PQExpBuffer upgrade_buffer,
 			const DumpableObject *dobj,
 			const char *objtype,
@@ -5391,8 +5391,7 @@ binary_upgrade_set_type_oids_by_rel(Archive *fout,
 
 static void
 binary_upgrade_set_pg_class_oids(Archive *fout,
- PQExpBuffer upgrade_buffer, Oid pg_class_oid,
- bool is_index)
+ PQExpBuffer upgrade_buffer, Oid pg_class_oid)
 {
 	PQExpBuffer upgrade_query = createPQExpBuffer();
 	PGresult   *upgrade_res;
@@ -5441,7 +5440,8 @@ binary_upgrade_set_pg_class_oids(Archive *fout,
 	appendPQExpBufferStr(upgrade_buffer,
 		 "\n-- For binary upgrade, must preserve pg_class oids and relfilenodes\n");
 
-	if (!is_index)
+	if (relkind != RELKIND_INDEX &&
+		relkind != RELKIND_PARTITIONED_INDEX)
 	{
 		appendPQExpBuffer(upgrade_buffer,
 		  "SELECT pg_catalog.binary_upgrade_set_next_heap_pg_class_oid('%u'::pg_catalog.oid);\n",
@@ -11668,7 +11668,7 @@ dumpCompositeType(Archive *fout, const TypeInfo *tyinfo)
 		binary_upgrade_set_type_oids_by_type_oid(fout, q,
  tyinfo->dobj.catId.oid,
  false, false);
-		binary_upgrade_set_pg_class_oids(fout, q, tyinfo->typrelid, false);
+		binary_upgrade_set_pg_class_oids(fout, q, tyinfo->typrelid);
 	}
 
 	qtypname = pg_strdup(fmtId(tyinfo->dobj.name));
@@ -15802,7 +15802,7 @@ dumpTableSchema(Archive *fout, const TableInfo *tbinfo)
 
 		if (dopt->binary_upgrade)
 			binary_upgrade_set_pg_class_oids(fout, q,
-			 tbinfo->dobj.catId.oid, false);
+			 tbinfo->dobj.catId.oid);
 
 		appendPQExpBuffer(q, "CREATE VIEW %s", qualrelname);
 
@@ -15904,7 +15904,7 @@ dumpTableSchema(Archive *fout, const TableInfo *tbinfo)
 
 		if (dopt->binary_upgrade)
 			binary_upgrade_set_pg_class_oids(fout, q,
-			 tbinfo->dobj.catId.oid, false);
+			 tbinfo->dobj.catId.oid);
 
 		appendPQExpBuffer(q, "CREATE %s%s %s",
 		  tbinfo->relpersistence == RELPERSISTENCE_UNLOGGED ?
@@ -16755,7 +16755,7 @@ dumpIndex(Archive *fout, const IndxInfo *indxinfo)
 
 		if (dopt->binary_upgrade)
 			binary_upgrade_set_pg_class_oids(fout, q,
-			 indxinfo->dobj.catId.oid, true);
+			 indxinfo->dobj.catId.oid);
 
 		/* Plain secondary index */
 		appendPQExpBuffer(q, "%s;\n", indxinfo->indexdef);
@@ -17009,7 +17009,7 @@ dumpConstraint(Archive *fout, const ConstraintInfo *coninfo)
 
 		if (dopt->binary_upgrade)
 			binary_upgrade_set_pg_class_oids(fout, q,
-			 indxinfo->dobj.catId.oid, true);
+			 indxinfo->dobj.catId.oid);
 
 		appendPQExpBuffer(q, "ALTER %sTABLE ONLY %s\n", foreign,
 		  fmtQualifiedDumpable(tbinfo));
@@ -17403,7 +17403,7 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)
 	if (dopt->binary_upgrade)
 	{
 		binary_upgrade_set_pg_class_oids(fout, query,
-		 tbinfo->dobj.catId.oid, false);
+		 tbinfo->dobj.catId.oid);
 
 		/*
 		 * In older PG versions a sequence will have a pg_type entry, but v14
-- 
2.25.1

From 7ff5168f5984865bd405e5d53dc6a190f989e7cd Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Mon, 22 Apr 2024 13:21:18 -0500
Subject: [PATCH v5 2/2] Improve performance of pg_dump --binary-upgrade.

---
 src/bin/pg_dump/pg_dump.c| 141 +--
 src/tools/pgindent/typedefs.list |   1 +
 2 files changed, 96 insertions(+), 46 deletions(-)

diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 0fbb8e8831..7b8ddc6443 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -55,6 +55,7 @@
 #include "catalog/pg_trigger_d.h"
 #include "catalog/pg_type_d.h"
 #include "common/connect.h"
+#include "common/int.h"
 #include "common/relpath.h"
 #include "compress_io.h"
 #include "dumputils.h"
@@ -92,6 +93,17 @@ typedef struct
 	int			objsubid;		/* subobject (table column #) */
 } SecLabelItem;
 
+typedef struct
+{
+	Oid			oid;			/* object OID */
+	char		relkind;		/* o

Re: Does Nuttx support hard real time?

2024-05-17 Thread Nathan Hartman
On Fri, May 17, 2024 at 1:42 AM 吳岱儒  wrote:

> Hi community,
>
> Is Nuttx a hard real time RTOS or software RTOS?
> If Nuttx support hard real time, is there any documentation about this
> feature and design ?
>
> BRs,
> TaiJuWu
>

Hello,

It depends on the CPU architecture:

For example, on ARM Cortex M architecture, the hardware provides an
interrupt vector table which supports multiple interrupt vectors and
multiple interrupt priority levels, and there is an interrupt base
priority register which allows certain interrupts to remain enabled
while most interrupts are disabled. This hardware support makes it
possible to implement what we call a "Zero Latency Interrupt" (sometimes
called a Raw Interrupt or Unmanaged Interrupt by other RTOSes).

To get hard real time performance, you would identify the time-critical part
of your software that needs zero latency handling (usually a tiny part of
the whole software) and then design your software to run that part in a
Zero Latency Interrupt. Everything else goes in regular code.

There is a page in the NuttX documentation that explains Zero Latency
Interrupts. See [1].

It may be possible to implement Zero Latency Interrupts for other CPU
architectures besides ARM Cortex M, if the hardware provides mechanisms
that make it possible, such as a way to define a high priority interrupt
that is never disabled.

For the best possible performance, you need to choose your microcontroller
hardware carefully. Look for features like the ability to put your zero
latency interrupt handler and its variables in RAM for faster execution
(because code in FLASH may take longer to fetch). Some microcontrollers
even have a special section of RAM that is faster than the rest for this
exact purpose. For example the STmicro STM32G series has this feature and
calls it CCM SRAM. Other microcontroller vendors might have a similar thing
but call it a different name.

Fortunately NuttX supports many CPU architectures and many microcontroller
models from many hardware vendors, so you have lots of choices. Also if the
microcontroller you want to use isn't currently supported by NuttX, it is
usually not too difficult to add support, especially if there is already
support for something similar.

References:
[1]
https://nuttx.apache.org/docs/12.5.0/guides/zerolatencyinterrupts.html

Hope this helps,
Nathan


Re: PoshSvn – Subversion for PowerShell

2024-05-16 Thread Nathan Hartman
On Thu, May 16, 2024 at 1:50 PM Timofey Zhakov  wrote:
>
> Hello everyone!
>
> I like Subversion and use it for my projects.
>
> PoshSvn is a PowerShell module which provides tab completion and
> typed output for the Subversion cmdlets. I found it useful for
> scripting and everyday life.
>
> For example to get the status of a working copy, you could use the
> svn-status cmdlet:
>
> [[[
> PS C:\> svn-status
>
> Status  Path
> --  
> M   PoshSvn\CmdLets\SvnAdd.cs
> M   PoshSvn\CmdLets\SvnLog.cs
> M   PoshSvn\SvnCmdletBase.cs
> M   README.md
> ]]]
>
> This is useful for scripting because of typed output. For example:
>
> [[[
> PS C:\> $info = svn-info https://svn.apache.org/repos/asf
> PS C:\> $info.Revision
> 1917749
> PS C:\> $info.LastChangedAuthor
> projects_role
> ]]]
>
> Documentation is available at: https://www.poshsvn.com/
>
> The installation is very easy. Just type `Install-Module PoshSvn` in
> the PowerShell command prompt.
>
> This module is fully free and open source.
>
> Any kind of feedback would be much appreciated.
>
> Thanks!
>
> --
> Timofei Zhakov


Thanks for sharing!

I'm not a Windows user or a PowerShell user so I can't try it out for
myself, but I am always glad to hear about new additions to the
Subversion ecosystem.

If you ever feel like participating in Subversion development, there
are plenty of opportunities around here :-)

Cheers,
Nathan


Re: allow changing autovacuum_max_workers without restarting

2024-05-16 Thread Nathan Bossart
On Thu, May 16, 2024 at 04:37:10PM +, Imseih (AWS), Sami wrote:
> I thought 256 was a good enough limit. In practice, I doubt anyone will 
> benefit from more than a few dozen autovacuum workers. 
> I think 1024 is way too high to even allow.

WFM

> I don't think combining 1024 + 5 = 1029 is a good idea in docs.
> Breaking down the allotment and using the name of the constant 
> is much more clear.
> 
> I suggest 
> " max_connections + max_wal_senders + max_worker_processes + 
> AUTOVAC_MAX_WORKER_SLOTS + 5"
> 
> and in other places in the docs, we should mention the actual 
> value of AUTOVAC_MAX_WORKER_SLOTS. Maybe in the 
> below section?
> 
> Instead of:
> -() and allowed background
> +(1024) and allowed background
> 
> do something like:
> -() and allowed background
> +   AUTOVAC_MAX_WORKER_SLOTS  (1024) and allowed background
> 
> Also,  replace the 1024 here with AUTOVAC_MAX_WORKER_SLOTS.
> 
> +max_wal_senders,
> +plus max_worker_processes, plus 1024 for autovacuum
> +worker processes, plus one extra for each 16

Part of me wonders whether documenting the exact formula is worthwhile.
This portion of the docs is rather complicated, and I can't recall ever
having to do the arithmetic it describes.  Plus, see below...

> Also, Not sure if I am mistaken here, but the "+ 5" in the existing docs
> seems wrong.
>  
> If it refers to NUM_AUXILIARY_PROCS defined in 
> include/storage/proc.h, it should be a "6"
> 
> #define NUM_AUXILIARY_PROCS 6
> 
> This is not a consequence of this patch, and can be dealt with
> In a separate thread if my understanding is correct.

Ha, I think it should actually be "+ 7"!  The value is calculated as

MaxConnections + autovacuum_max_workers + 1 + max_worker_processes + 
max_wal_senders + 6

Looking at the history, this documentation tends to be wrong quite often.
In v9.2, the checkpointer was introduced, and these formulas were not
updated.  In v9.3, background worker processes were introduced, and the
formulas were still not updated.  Finally, in v9.6, it was fixed in commit
597f7e3.  Then, in v14, the archiver process was made an auxiliary process
(commit d75288f), making the formulas out-of-date again.  And in v17, the
WAL summarizer was added.

On top of this, IIUC you actually need even more semaphores if your system
doesn't support atomics, and from a quick skim this doesn't seem to be
covered in this documentation.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: pg_sequence_last_value() for unlogged sequences on standbys

2024-05-16 Thread Nathan Bossart
Here is a rebased version of 0002, which I intend to commit once v18
development begins.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
>From e9cba5e4303c7fa5ad2d7d5deb23fe0b1c740b09 Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Tue, 7 May 2024 14:35:34 -0500
Subject: [PATCH v5 1/1] Simplify pg_sequences a bit.

XXX: NEEDS CATVERSION BUMP
---
 src/backend/catalog/system_views.sql |  6 +-
 src/backend/commands/sequence.c  | 12 
 src/test/regress/expected/rules.out  |  5 +
 3 files changed, 6 insertions(+), 17 deletions(-)

diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 53047cab5f..b32e5c3170 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -176,11 +176,7 @@ CREATE VIEW pg_sequences AS
 S.seqincrement AS increment_by,
 S.seqcycle AS cycle,
 S.seqcache AS cache_size,
-CASE
-WHEN has_sequence_privilege(C.oid, 'SELECT,USAGE'::text)
-THEN pg_sequence_last_value(C.oid)
-ELSE NULL
-END AS last_value
+pg_sequence_last_value(C.oid) AS last_value
 FROM pg_sequence S JOIN pg_class C ON (C.oid = S.seqrelid)
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
 WHERE NOT pg_is_other_temp_schema(N.oid)
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 28f8522264..cd0e746577 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -1783,21 +1783,17 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 	/* open and lock sequence */
 	init_sequence(relid, &elm, &seqrel);
 
-	if (pg_class_aclcheck(relid, GetUserId(), ACL_SELECT | ACL_USAGE) != ACLCHECK_OK)
-		ereport(ERROR,
-(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
- errmsg("permission denied for sequence %s",
-		RelationGetRelationName(seqrel))));
-
 	/*
 	 * We return NULL for other sessions' temporary sequences.  The
 	 * pg_sequences system view already filters those out, but this offers a
 	 * defense against ERRORs in case someone invokes this function directly.
 	 *
 	 * Also, for the benefit of the pg_sequences view, we return NULL for
-	 * unlogged sequences on standbys instead of throwing an error.
+	 * unlogged sequences on standbys and for sequences for which we lack
+	 * USAGE or SELECT privileges instead of throwing an error.
 	 */
-	if (!RELATION_IS_OTHER_TEMP(seqrel) &&
+	if (pg_class_aclcheck(relid, GetUserId(), ACL_SELECT | ACL_USAGE) == ACLCHECK_OK &&
+		!RELATION_IS_OTHER_TEMP(seqrel) &&
 		(RelationIsPermanent(seqrel) || !RecoveryInProgress()))
 	{
 		Buffer		buf;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index ef658ad740..04b3790bdd 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -1699,10 +1699,7 @@ pg_sequences| SELECT n.nspname AS schemaname,
 s.seqincrement AS increment_by,
 s.seqcycle AS cycle,
 s.seqcache AS cache_size,
-CASE
-WHEN has_sequence_privilege(c.oid, 'SELECT,USAGE'::text) THEN pg_sequence_last_value((c.oid)::regclass)
-ELSE NULL::bigint
-END AS last_value
+pg_sequence_last_value((c.oid)::regclass) AS last_value
FROM ((pg_sequence s
  JOIN pg_class c ON ((c.oid = s.seqrelid)))
  LEFT JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
-- 
2.25.1



Re: optimizing pg_upgrade's once-in-each-database steps

2024-05-16 Thread Nathan Bossart
On Thu, May 16, 2024 at 05:09:55PM -0700, Jeff Davis wrote:
> How much complexity do you avoid by using async instead of multiple
> processes?

If we didn't want to use async, my guess is we'd want to use threads to
avoid complicated IPC.  And if we followed pgbench's example for using
threads, it might end up at a comparable level of complexity, although I'd
bet that threading would be the more complex of the two.  It's hard to say
definitively without coding it up both ways, which might be worth doing.

> Also, did you consider connecting once to each database and running
> many queries? Most of those seem like just checks.

This was the idea behind 347758b.  It may be possible to do more along
these lines.  IMO parallelizing will still be useful even if we do combine
more of the steps.
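
As a hedged back-of-envelope illustration of why combining per-database
queries helps (all costs below are invented, not measured): connection
setup is paid once per database instead of once per check.

```python
# Hedged cost model (made-up costs, in milliseconds).  "combined" runs
# all checks over one connection per database; "separate" reconnects for
# each check.
def total_ms(ndbs, nchecks, connect_ms, query_ms, combined):
    if combined:
        return ndbs * (connect_ms + nchecks * query_ms)
    return ndbs * nchecks * (connect_ms + query_ms)

# e.g. 10,000 dbs, 12 checks, 50 ms to connect, 5 ms per query:
separate = total_ms(10_000, 12, 50, 5, combined=False)  # 6_600_000 ms
combined = total_ms(10_000, 12, 50, 5, combined=True)   # 1_100_000 ms
```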

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




optimizing pg_upgrade's once-in-each-database steps

2024-05-16 Thread Nathan Bossart
A number of pg_upgrade steps require connecting to each database and
running a query.  When there are many databases, these steps are
particularly time-consuming, especially since this is done sequentially in
a single process.  At a quick glance, I see the following such steps:

* create_logical_replication_slots
* check_for_data_types_usage
* check_for_isn_and_int8_passing_mismatch
* check_for_user_defined_postfix_ops
* check_for_incompatible_polymorphics
* check_for_tables_with_oids
* check_for_user_defined_encoding_conversions
* check_old_cluster_subscription_state
* get_loadable_libraries
* get_db_rel_and_slot_infos
* old_9_6_invalidate_hash_indexes
* report_extension_updates

I set out to parallelize these kinds of steps via multiple threads or
processes, but I ended up realizing that we could likely achieve much of
the same gain with libpq's asynchronous APIs.  Specifically, both
establishing the connections and running the queries can be done without
blocking, so we can just loop over a handful of slots and advance a simple
state machine for each.  The attached is a proof-of-concept grade patch for
doing this for get_db_rel_and_slot_infos(), which yielded the following
results on my laptop for "pg_upgrade --link --sync-method=syncfs --jobs 8"
for a cluster with 10K empty databases.

total pg_upgrade_time:
* HEAD:  14m 8s
* patch: 10m 58s

get_db_rel_and_slot_infos() on old cluster:
* HEAD:  2m 45s
* patch: 36s

get_db_rel_and_slot_infos() on new cluster:
* HEAD:  1m 46s
* patch: 29s

I am posting this early to get thoughts on the general approach.  If we
proceeded with this strategy, I'd probably create some generic tooling to
which each relevant step would provide a set of callback functions.
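
Conceptually, the slot loop described above can be sketched as follows.
This is illustrative Python, not the C/libpq code in the attached patch;
the state names loosely mirror the patch's InfoState enum, and the libpq
calls named in the comments mark where the real nonblocking work would
happen.

```python
# Illustrative sketch of the nonblocking slot loop: loop over a handful
# of slots and advance a simple state machine for each.  The "work" here
# is simulated so only the control flow is visible.
from enum import Enum, auto

class SlotState(Enum):
    UNUSED = auto()
    CONNECTING = auto()
    RUNNING_QUERY = auto()

def run_slots(tasks, num_slots):
    """Advance up to num_slots tasks concurrently, one step per pass."""
    pending = list(tasks)
    slots = [{"state": SlotState.UNUSED, "task": None} for _ in range(num_slots)]
    finished = []
    while pending or any(s["state"] is not SlotState.UNUSED for s in slots):
        for slot in slots:
            if slot["state"] is SlotState.UNUSED and pending:
                slot["task"] = pending.pop(0)
                slot["state"] = SlotState.CONNECTING      # cf. PQconnectStart
            elif slot["state"] is SlotState.CONNECTING:
                slot["state"] = SlotState.RUNNING_QUERY   # cf. PQsendQuery
            elif slot["state"] is SlotState.RUNNING_QUERY:
                finished.append(slot["task"])             # cf. PQgetResult
                slot["task"] = None
                slot["state"] = SlotState.UNUSED
    return finished
```

For example, `run_slots(["db%d" % i for i in range(10)], 4)` processes ten
databases with at most four in flight at once.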

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
>From 05a9903295cb3b57ca9144217e89f0aac27277b5 Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Wed, 15 May 2024 12:07:10 -0500
Subject: [PATCH v1 1/1] parallel get relinfos

---
 src/bin/pg_upgrade/info.c| 266 +++
 src/tools/pgindent/typedefs.list |   1 +
 2 files changed, 202 insertions(+), 65 deletions(-)

diff --git a/src/bin/pg_upgrade/info.c b/src/bin/pg_upgrade/info.c
index 95c22a7200..bb28e262c7 100644
--- a/src/bin/pg_upgrade/info.c
+++ b/src/bin/pg_upgrade/info.c
@@ -11,6 +11,7 @@
 
 #include "access/transam.h"
 #include "catalog/pg_class_d.h"
+#include "fe_utils/string_utils.h"
 #include "pg_upgrade.h"
 
 static void create_rel_filename_map(const char *old_data, const char *new_data,
@@ -22,13 +23,16 @@ static void report_unmatched_relation(const RelInfo *rel, const DbInfo *db,
 static void free_db_and_rel_infos(DbInfoArr *db_arr);
 static void get_template0_info(ClusterInfo *cluster);
 static void get_db_infos(ClusterInfo *cluster);
-static void get_rel_infos(ClusterInfo *cluster, DbInfo *dbinfo);
+static void start_rel_infos_query(PGconn *conn);
+static void get_rel_infos_result(PGconn *conn, DbInfo *dbinfo);
 static void free_rel_infos(RelInfoArr *rel_arr);
 static void print_db_infos(DbInfoArr *db_arr);
 static void print_rel_infos(RelInfoArr *rel_arr);
 static void print_slot_infos(LogicalSlotInfoArr *slot_arr);
-static void get_old_cluster_logical_slot_infos(DbInfo *dbinfo, bool live_check);
-static void get_db_subscription_count(DbInfo *dbinfo);
+static void start_old_cluster_logical_slot_infos_query(PGconn *conn, bool live_check);
+static void get_old_cluster_logical_slot_infos_result(PGconn *conn, DbInfo *dbinfo);
+static void start_db_sub_count_query(PGconn *conn, DbInfo *dbinfo);
+static void get_db_sub_count_result(PGconn *conn, DbInfo *dbinfo);
 
 
 /*
@@ -268,6 +272,16 @@ report_unmatched_relation(const RelInfo *rel, const DbInfo *db, bool is_new_db)
 			   reloid, db->db_name, reldesc);
 }
 
+typedef enum
+{
+	UNUSED,
+	CONN_STARTED,
+	CONNECTING,
+	STARTED_RELINFO_QUERY,
+	STARTED_LOGICAL_QUERY,
+	STARTED_SUBSCRIPTION_QUERY,
+} InfoState;
+
 /*
  * get_db_rel_and_slot_infos()
  *
@@ -279,7 +293,12 @@ report_unmatched_relation(const RelInfo *rel, const DbInfo *db, bool is_new_db)
 void
 get_db_rel_and_slot_infos(ClusterInfo *cluster, bool live_check)
 {
-	int			dbnum;
+	int			dbnum = 0;
+	int			dbnum_proc = 0;
+	InfoState  *states;
+	int		   *dbs;
+	PGconn	  **conns;
+	int			jobs = (user_opts.jobs < 1) ? 1 : user_opts.jobs;
 
 	if (cluster->dbarr.dbs != NULL)
 		free_db_and_rel_infos(&cluster->dbarr);
@@ -287,20 +306,103 @@ get_db_rel_and_slot_infos(ClusterInfo *cluster, bool live_check)
 	get_template0_info(cluster);
 	get_db_infos(cluster);
 
-	for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
-	{
-		DbInfo	   *pDbInfo = >dbarr.dbs[dbnum];
+	states = (InfoState *) pg_malloc(sizeof(InfoState) * jobs);
+	dbs = (int *) pg_malloc(sizeof(int) * jobs);
+	

Re: race condition when writing pg_control

2024-05-16 Thread Nathan Bossart
On Thu, May 16, 2024 at 12:19:22PM -0400, Melanie Plageman wrote:
> Today, after committing a3e6c6f, I saw recovery/018_wal_optimize.pl
> fail and see this message in the replica log [2].
> 
> 2024-05-16 15:12:22.821 GMT [5440][not initialized] FATAL:  incorrect
> checksum in control file
> 
> I'm pretty sure it's not related to my commit. So, I was looking for
> existing reports of this error message.

Yeah, I don't see how it could be related.

> It's a long shot, since 0001 and 0002 were already pushed, but this is
> the only recent report I could find of "FATAL:  incorrect checksum in
> control file" in pgsql-hackers or bugs archives.
> 
> I do see this thread from 2016 [3] which might be relevant because the
> reported bug was also on Windows.

I suspect it will be difficult to investigate this one too much further
unless we can track down a copy of the control file with the bad checksum.
Other than searching for any new code that isn't doing the appropriate
locking, maybe we could search the buildfarm for any other occurrences.  I
also see some threads concerning whether the way we are reading/writing
the control file is atomic.
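
For context, the failing check is conceptually "recompute a CRC over the
file contents and compare it with the stored value."  The sketch below
uses zlib's plain CRC-32 and an invented payload-plus-trailer layout
rather than pg_control's actual CRC-32C and struct layout, purely to show
the shape of the check:

```python
# Hedged sketch of a "recompute and compare" file checksum check.
# Layout here is made up: payload bytes followed by a 4-byte stored CRC.
import struct
import zlib

def write_blob(payload: bytes) -> bytes:
    return payload + struct.pack("<I", zlib.crc32(payload) & 0xFFFFFFFF)

def check_blob(blob: bytes) -> bool:
    payload, (stored,) = blob[:-4], struct.unpack("<I", blob[-4:])
    return (zlib.crc32(payload) & 0xFFFFFFFF) == stored

good = write_blob(b"control data")
bad = bytearray(good)
bad[0] ^= 0xFF                     # simulate a torn or corrupt read
# check_blob(good) is True; check_blob(bytes(bad)) is False.
```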

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




[Bug 2065818] Re: No way to assume yes for do-release-upgrade

2024-05-16 Thread Nathan Teodosio
** Changed in: ubuntu-release-upgrader (Ubuntu)
   Status: Incomplete => New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2065818

Title:
  No way to assume yes for do-release-upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/+bug/2065818/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[Bug 2065789] Re: Ubuntu Pro logo is not aligned to the left and also dances around depending on the window dimensions

2024-05-16 Thread Nathan Teodosio
Merge request: https://salsa.debian.org/gnome-team/gnome-initial-
setup/-/merge_requests/25

** Description changed:

  In the Ubuntu Pro page the logo is not aligned to the left and also
  dances around depending on the window dimensions.
+ 
+ This is only a problem with the GTK4 port, thus only occurs from 24.04
+ on.

** Also affects: gnome-initial-setup (Ubuntu Noble)
   Importance: Undecided
   Status: New

** Also affects: gnome-initial-setup (Ubuntu Oracular)
   Importance: Low
 Assignee: Nathan Teodosio (nteodosio)
   Status: In Progress

** Changed in: gnome-initial-setup (Ubuntu Noble)
   Status: New => Triaged

** Changed in: gnome-initial-setup (Ubuntu Noble)
   Importance: Undecided => Low

** Changed in: gnome-initial-setup (Ubuntu Noble)
 Assignee: (unassigned) => Nathan Teodosio (nteodosio)

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2065789

Title:
  Ubuntu Pro logo is not aligned to the left and also dances around
  depending on the window dimensions

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-initial-setup/+bug/2065789/+subscriptions



[Bug 2065818] Re: No way to assume yes for do-release-upgrade

2024-05-16 Thread Nathan Teodosio
Seems to be a bug in ubuntu-release-upgrader, but I think apart from "it
just quits if I do that" all the other words in the report are not
yours, being either the program log or automatically collected
information. Can you please give more context and explain what you were
trying to do and how?

** Changed in: update-manager (Ubuntu)
   Status: New => Incomplete

** Also affects: ubuntu-release-upgrader (Ubuntu)
   Importance: Undecided
   Status: New

** Changed in: ubuntu-release-upgrader (Ubuntu)
   Status: New => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2065818

Title:
  No way to assume yes for do-release-upgrade

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/+bug/2065818/+subscriptions



Re: recovery modules

2024-05-15 Thread Nathan Bossart
rebased

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
>From 046d6e4b13d3a6b15df1245f3154969f7553594d Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Wed, 15 Feb 2023 14:28:53 -0800
Subject: [PATCH v22 1/5] introduce routine for checking mutually exclusive
 string GUCs

---
 src/backend/postmaster/pgarch.c |  8 +++-
 src/backend/utils/misc/guc.c| 22 ++
 src/include/utils/guc.h |  3 +++
 3 files changed, 28 insertions(+), 5 deletions(-)

diff --git a/src/backend/postmaster/pgarch.c b/src/backend/postmaster/pgarch.c
index d82bcc2cfd..98a5aa3661 100644
--- a/src/backend/postmaster/pgarch.c
+++ b/src/backend/postmaster/pgarch.c
@@ -912,11 +912,9 @@ LoadArchiveLibrary(void)
 {
 	ArchiveModuleInit archive_init;
 
-	if (XLogArchiveLibrary[0] != '\0' && XLogArchiveCommand[0] != '\0')
-		ereport(ERROR,
-(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
- errmsg("both archive_command and archive_library set"),
- errdetail("Only one of archive_command, archive_library may be set.")));
+	(void) CheckMutuallyExclusiveStringGUCs(XLogArchiveLibrary, "archive_library",
+			XLogArchiveCommand, "archive_command",
+			ERROR);
 
 	/*
 	 * If shell archiving is enabled, use our special initialization function.
diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c
index 3fb6803998..02bc4d66e7 100644
--- a/src/backend/utils/misc/guc.c
+++ b/src/backend/utils/misc/guc.c
@@ -2659,6 +2659,28 @@ ReportGUCOption(struct config_generic *record)
 	pfree(val);
 }
 
+/*
+ * If both parameters are set, emits a log message at 'elevel' and returns
+ * false.  Otherwise, returns true.
+ */
+bool
+CheckMutuallyExclusiveStringGUCs(const char *p1val, const char *p1name,
+ const char *p2val, const char *p2name,
+ int elevel)
+{
+	if (p1val[0] != '\0' && p2val[0] != '\0')
+	{
+		ereport(elevel,
+(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+ errmsg("cannot set both %s and %s", p1name, p2name),
+ errdetail("Only one of %s or %s may be set.",
+		   p1name, p2name)));
+		return false;
+	}
+
+	return true;
+}
+
 /*
  * Convert a value from one of the human-friendly units ("kB", "min" etc.)
  * to the given base unit.  'value' and 'unit' are the input value and unit
diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h
index e4a594b5e8..018bb7e55b 100644
--- a/src/include/utils/guc.h
+++ b/src/include/utils/guc.h
@@ -376,6 +376,9 @@ extern void RestrictSearchPath(void);
 extern void AtEOXact_GUC(bool isCommit, int nestLevel);
 extern void BeginReportingGUCOptions(void);
 extern void ReportChangedGUCOptions(void);
+extern bool CheckMutuallyExclusiveStringGUCs(const char *p1val, const char *p1name,
+			 const char *p2val, const char *p2name,
+			 int elevel);
 extern void ParseLongOption(const char *string, char **name, char **value);
 extern const char *get_config_unit_name(int flags);
 extern bool parse_int(const char *value, int *result, int flags,
-- 
2.25.1

>From 88e5e696792efdb62f7067400223c67357db6dba Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Wed, 15 Feb 2023 10:36:00 -0800
Subject: [PATCH v22 2/5] refactor code for restoring via shell

---
 src/backend/Makefile  |   2 +-
 src/backend/access/transam/timeline.c |  12 +-
 src/backend/access/transam/xlog.c |  50 -
 src/backend/access/transam/xlogarchive.c  | 167 ---
 src/backend/access/transam/xlogrecovery.c |   3 +-
 src/backend/meson.build   |   1 +
 src/backend/postmaster/startup.c  |  16 +-
 src/backend/restore/Makefile  |  18 ++
 src/backend/restore/meson.build   |   5 +
 src/backend/restore/shell_restore.c   | 245 ++
 src/include/Makefile  |   2 +-
 src/include/access/xlogarchive.h  |   9 +-
 src/include/meson.build   |   1 +
 src/include/postmaster/startup.h  |   1 +
 src/include/restore/shell_restore.h   |  26 +++
 src/tools/pgindent/typedefs.list  |   1 +
 16 files changed, 407 insertions(+), 152 deletions(-)
 create mode 100644 src/backend/restore/Makefile
 create mode 100644 src/backend/restore/meson.build
 create mode 100644 src/backend/restore/shell_restore.c
 create mode 100644 src/include/restore/shell_restore.h

diff --git a/src/backend/Makefile b/src/backend/Makefile
index 6700aec039..590b5002c0 100644
--- a/src/backend/Makefile
+++ b/src/backend/Makefile
@@ -19,7 +19,7 @@ include $(top_builddir)/src/Makefile.global
 SUBDIRS = access archive backup bootstrap catalog parser commands executor \
 	foreign lib libpq \
 	main nodes optimizer partitioning port postmaster \
-	regex replication rewrite \
+	regex replication restore rewrite \
 	statistics storage tcop tsearch utils $(top_builddir)/src/timezone \
 	jit
 
diff --git a/src/backend

Re: Why does pgindent's README say to download typedefs.list from the buildfarm?

2024-05-15 Thread Nathan Bossart
On Wed, May 15, 2024 at 04:52:19PM -0400, Robert Haas wrote:
> On Wed, May 15, 2024 at 4:50 PM Tom Lane  wrote:
>> At this point my OCD got the better of me and I did a little
>> additional wordsmithing.  How about the attached?
> 
> No objections here.

+1

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: Why does pgindent's README say to download typedefs.list from the buildfarm?

2024-05-15 Thread Nathan Bossart
On Wed, May 15, 2024 at 04:07:18PM -0400, Robert Haas wrote:
> On Wed, May 15, 2024 at 3:30 PM Nathan Bossart  
> wrote:
>> This is much cleaner, thanks.  The only thing that stands out to me is that
>> the "once per release cycle" section should probably say to do an indent
>> run after downloading the typedef file.
> 
> How's this?

I compared this with my v1, and the only bit of information there that I
see missing in v3 is that validation step 4 only applies in the
once-per-cycle run (or if you forget to pgindent before committing a
patch).  This might be why I was struggling to untangle the two types of
pgindent runs in my attempt.  Perhaps it's worth adding a note to that step
about when it is required.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: More performance improvements for pg_dump in binary upgrade mode

2024-05-15 Thread Nathan Bossart
On Wed, May 15, 2024 at 10:15:13PM +0200, Daniel Gustafsson wrote:
> With the typarray caching from the patch attached here added *and* Nathan's
> patch from [0] added:
> 
> $ time ./bin/pg_dump --schema-only --quote-all-identifiers --binary-upgrade \
>   --format=custom --file a postgres > /dev/null
> 
> real  0m1.566s
> user  0m0.309s
> sys   0m0.080s
> 
> The combination of these patches thus puts binary upgrade mode almost on par
> with a plain dump, which has the potential to make upgrades of large schemas
> faster.  Parallel-parking this patch with Nathan's in the July CF, just wanted
> to type it up while it was fresh in my mind.

Nice!  I'll plan on taking a closer look at this one.  I have a couple
other ideas in-flight (e.g., parallelizing the once-in-each-database
operations with libpq's asynchronous APIs) that I'm hoping to post soon,
too.  v18 should have a lot of good stuff for pg_upgrade...

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: An improved README experience for PostgreSQL

2024-05-15 Thread Nathan Bossart
On Wed, May 15, 2024 at 07:23:19AM +0200, Peter Eisentraut wrote:
> I think for CONTRIBUTING.md, a better link would be
> <https://www.postgresql.org/developer/>.

WFM

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
>From 74de0e89bea2802bf699397837ebf77252a0e06b Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Tue, 16 Apr 2024 21:23:52 -0500
Subject: [PATCH v3 1/1] Add code of conduct, contributing, and security files.

---
 .github/CODE_OF_CONDUCT.md | 2 ++
 .github/CONTRIBUTING.md| 2 ++
 .github/SECURITY.md| 2 ++
 3 files changed, 6 insertions(+)
 create mode 100644 .github/CODE_OF_CONDUCT.md
 create mode 100644 .github/CONTRIBUTING.md
 create mode 100644 .github/SECURITY.md

diff --git a/.github/CODE_OF_CONDUCT.md b/.github/CODE_OF_CONDUCT.md
new file mode 100644
index 00..99bb1905d6
--- /dev/null
+++ b/.github/CODE_OF_CONDUCT.md
@@ -0,0 +1,2 @@
+The PostgreSQL code of conduct can be found at
+<https://www.postgresql.org/about/policies/coc/>.
diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
new file mode 100644
index 00..0108e72956
--- /dev/null
+++ b/.github/CONTRIBUTING.md
@@ -0,0 +1,2 @@
+For information about contributing to PostgreSQL, see
+<https://www.postgresql.org/developer/>.
diff --git a/.github/SECURITY.md b/.github/SECURITY.md
new file mode 100644
index 00..ebdbe609db
--- /dev/null
+++ b/.github/SECURITY.md
@@ -0,0 +1,2 @@
+For information about reporting security issues, see
+<https://www.postgresql.org/support/security/>.
-- 
2.25.1



Re: Why does pgindent's README say to download typedefs.list from the buildfarm?

2024-05-15 Thread Nathan Bossart
On Wed, May 15, 2024 at 12:06:03PM -0400, Robert Haas wrote:
> What jumps out at me when I read this patch is that it says that an
> incremental run should do steps 1-3 of a complete run, and then
> immediately backtracks and says not to do step 2, which seems a little
> strange.
> 
> I played around with this a bit and came up with the attached, which
> takes a slightly different approach. Feel free to use, ignore, etc.

This is much cleaner, thanks.  The only thing that stands out to me is that
the "once per release cycle" section should probably say to do an indent
run after downloading the typedef file.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




[Bug 2055728] Re: update-notifier notifies on unactionable "Pro" updates

2024-05-15 Thread Nathan Teodosio
*** This bug is a duplicate of bug 2051115 ***
https://bugs.launchpad.net/bugs/2051115

** This bug has been marked a duplicate of bug 2051115
   ubuntu pro integration interferes with dist-upgrade prompting

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2055728

Title:
  update-notifier notifies on unactionable "Pro" updates

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/update-notifier/+bug/2055728/+subscriptions



[Bug 2062971] Re: Enable Ubuntu Pro page formatting is hard to follow

2024-05-15 Thread Nathan Teodosio
** Changed in: gnome-initial-setup (Ubuntu)
   Importance: Medium => Low

** Changed in: gnome-initial-setup (Ubuntu)
   Importance: Low => Undecided

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2062971

Title:
  Enable Ubuntu Pro page formatting is hard to follow

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-initial-setup/+bug/2062971/+subscriptions



[Bug 2065789] Re: Ubuntu Pro logo is not aligned to the left and also dances around depending on the window dimensions

2024-05-15 Thread Nathan Teodosio
I asked at https://discourse.gnome.org/t/gtkpicture-gtklabel-in-
horizontal-gtkbox-hexpand-and-halign-ignored/20967 and then opened a bug
report at https://gitlab.gnome.org/GNOME/gtk/-/issues/6705.

The solution I could find was to replace GtkBox by GtkGrid.

** Bug watch added: gitlab.gnome.org/GNOME/gtk/-/issues #6705
   https://gitlab.gnome.org/GNOME/gtk/-/issues/6705

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2065789

Title:
  Ubuntu Pro logo is not aligned to the left and also dances around
  depending on the window dimensions

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-initial-setup/+bug/2065789/+subscriptions



[Bug 2062971] Re: Enable Ubuntu Pro page formatting is hard to follow

2024-05-15 Thread Nathan Teodosio
> There are two columns at the top, and then only one. Also the bullet
indentation is inconsistent.

I'm not really sure what this means; the bullet points appear aligned
both here and in your picture. I'll open a separate bug for the moving
logo then.

** Changed in: gnome-initial-setup (Ubuntu)
   Status: Triaged => Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2062971

Title:
  Enable Ubuntu Pro page formatting is hard to follow

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-initial-setup/+bug/2062971/+subscriptions



[Bug 2065789] [NEW] Ubuntu Pro logo is not aligned to the left and also dances around depending on the window dimensions

2024-05-15 Thread Nathan Teodosio
Public bug reported:

In the Ubuntu Pro page the logo is not aligned to the left and also
dances around depending on the window dimensions.

** Affects: gnome-initial-setup (Ubuntu)
 Importance: Low
 Assignee: Nathan Teodosio (nteodosio)
 Status: In Progress

** Attachment added: "3pro-2024-03-20_10.06.45.mkv"
   
https://bugs.launchpad.net/bugs/2065789/+attachment/5778609/+files/3pro-2024-03-20_10.06.45.mkv

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2065789

Title:
  Ubuntu Pro logo is not aligned to the left and also dances around
  depending on the window dimensions

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-initial-setup/+bug/2065789/+subscriptions



Re: An improved README experience for PostgreSQL

2024-05-14 Thread Nathan Bossart
On Tue, May 14, 2024 at 06:12:26PM +0200, Alvaro Herrera wrote:
> On 2024-May-14, Tom Lane wrote:
> 
>> I don't have a position on whether we want
>> these additional files or not; but if we do, I think the best answer
>> is to stick 'em under .github/ where they are out of the way but yet
>> updatable by any committer.
> 
> +1 for .github/, that was my first reaction as well after reading the
> link Peter posted.

Here's an updated patch that uses .github/.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
>From d6e27e9acf65bf77c54e2292f6e02590d34adeb6 Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Tue, 16 Apr 2024 21:23:52 -0500
Subject: [PATCH v2 1/1] Add code of conduct, contributing, and security files.

---
 .github/CODE_OF_CONDUCT.md | 2 ++
 .github/CONTRIBUTING.md| 2 ++
 .github/SECURITY.md| 2 ++
 3 files changed, 6 insertions(+)
 create mode 100644 .github/CODE_OF_CONDUCT.md
 create mode 100644 .github/CONTRIBUTING.md
 create mode 100644 .github/SECURITY.md

diff --git a/.github/CODE_OF_CONDUCT.md b/.github/CODE_OF_CONDUCT.md
new file mode 100644
index 00..99bb1905d6
--- /dev/null
+++ b/.github/CODE_OF_CONDUCT.md
@@ -0,0 +1,2 @@
+The PostgreSQL code of conduct can be found at
+<https://www.postgresql.org/about/policies/coc/>.
diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
new file mode 100644
index 00..ff93024352
--- /dev/null
+++ b/.github/CONTRIBUTING.md
@@ -0,0 +1,2 @@
+For information about contributing to PostgreSQL, see
+<https://wiki.postgresql.org/wiki/Submitting_a_Patch>.
diff --git a/.github/SECURITY.md b/.github/SECURITY.md
new file mode 100644
index 00..ebdbe609db
--- /dev/null
+++ b/.github/SECURITY.md
@@ -0,0 +1,2 @@
+For information about reporting security issues, see
+<https://www.postgresql.org/support/security/>.
-- 
2.25.1



Re: An improved README experience for PostgreSQL

2024-05-14 Thread Nathan Bossart
On Tue, May 14, 2024 at 10:05:01AM +0200, Peter Eisentraut wrote:
> On 13.05.24 17:26, Nathan Bossart wrote:
>> On Sun, May 12, 2024 at 05:17:42PM +0200, Peter Eisentraut wrote:
>> > I don't know, I find these files kind of "yelling".  It's fine to have a
>> > couple, but now it's getting a bit much, and there are more that could be
>> > added.
>> 
>> I'm not sure what you mean by this.  Do you mean that the contents are too
>> blunt?  That there are too many files?  Something else?
> 
> I mean the all-caps file names, cluttering up the top-level directory.

It looks like we could also put these files in .github/ or docs/ to avoid
the clutter.

>> > If we want to enhance the GitHub experience, we can also add these files to
>> > the organization instead: 
>> > https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/creating-a-default-community-health-file
>> 
>> This was the intent of my patch.  There might be a few others that we could
>> use, but I figured we could start with the low-hanging fruit that would
>> have the most impact on the GitHub experience.
> 
> My point is, in order to get that enhanced GitHub experience, you don't
> actually have to commit these files into the individual source code
> repository.  You can add them to the organization and they will apply to all
> repositories under the organization.  This is explained at the above link.

Oh, I apologize, my brain skipped over the word "organization" in your
message.

> However, I don't think these files are actually that useful.  People can go
> to the web site to find out about things about the PostgreSQL community.  We
> don't need to add bunch of $X.md files that just say, essentially, got to
> postgresql.org/$X.

That's a reasonable stance.  I think the main argument in favor of these
extra files is to make things a tad more accessible to folks who are
accustomed to using GitHub when contributing to open-source projects, but
you're right that this information is already pretty easy to find.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




[clang] [clang][SPIR-V] Always add convergence intrinsics (PR #88918)

2024-05-14 Thread Nathan Gauër via cfe-commits

https://github.com/Keenuts closed 
https://github.com/llvm/llvm-project/pull/88918
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[clang] [clang][SPIR-V] Always add convergence intrinsics (PR #88918)

2024-05-14 Thread Nathan Gauër via cfe-commits

Keenuts wrote:

rebased on main, local tests are passing, waiting on CI to merge.

https://github.com/llvm/llvm-project/pull/88918
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[clang] [clang][SPIR-V] Always add convergence intrinsics (PR #88918)

2024-05-14 Thread Nathan Gauër via cfe-commits

https://github.com/Keenuts updated 
https://github.com/llvm/llvm-project/pull/88918

From 440cdfa4132a969702348c32f2810924012c5ea6 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Nathan=20Gau=C3=ABr?= 
Date: Mon, 15 Apr 2024 17:05:40 +0200
Subject: [PATCH 1/6] [clang][SPIR-V] Always add convergence intrinsics
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

PR #80680 added bits in the codegen to lazily add convergence intrinsics
when required. This logic relied on the LoopStack. The issue is
when parsing the condition, the loopstack doesn't yet reflect the
correct values, as expected since we are not yet in the loop.

However, convergence tokens should sometimes already be available.
The solution which seemed the simplest is to greedily generate the
tokens when we generate SPIR-V.

Fixes #88144

Signed-off-by: Nathan Gauër 
---
 clang/lib/CodeGen/CGBuiltin.cpp   |  88 +
 clang/lib/CodeGen/CGCall.cpp  |   3 +
 clang/lib/CodeGen/CGStmt.cpp  |  94 ++
 clang/lib/CodeGen/CodeGenFunction.cpp |   9 ++
 clang/lib/CodeGen/CodeGenFunction.h   |   9 +-
 .../builtins/RWBuffer-constructor.hlsl|   1 -
 .../CodeGenHLSL/convergence/do.while.hlsl |  90 +
 clang/test/CodeGenHLSL/convergence/for.hlsl   | 121 ++
 clang/test/CodeGenHLSL/convergence/while.hlsl | 119 +
 9 files changed, 445 insertions(+), 89 deletions(-)
 create mode 100644 clang/test/CodeGenHLSL/convergence/do.while.hlsl
 create mode 100644 clang/test/CodeGenHLSL/convergence/for.hlsl
 create mode 100644 clang/test/CodeGenHLSL/convergence/while.hlsl

diff --git a/clang/lib/CodeGen/CGBuiltin.cpp b/clang/lib/CodeGen/CGBuiltin.cpp
index f9ee93049b12d..e251091c6ce3e 100644
--- a/clang/lib/CodeGen/CGBuiltin.cpp
+++ b/clang/lib/CodeGen/CGBuiltin.cpp
@@ -1141,91 +1141,8 @@ struct BitTest {
   static BitTest decodeBitTestBuiltin(unsigned BuiltinID);
 };
 
-// Returns the first convergence entry/loop/anchor instruction found in |BB|.
-// std::nullptr otherwise.
-llvm::IntrinsicInst *getConvergenceToken(llvm::BasicBlock *BB) {
-  for (auto &I : *BB) {
-auto *II = dyn_cast<llvm::IntrinsicInst>(&I);
-if (II && isConvergenceControlIntrinsic(II->getIntrinsicID()))
-  return II;
-  }
-  return nullptr;
-}
-
 } // namespace
 
-llvm::CallBase *
-CodeGenFunction::addConvergenceControlToken(llvm::CallBase *Input,
-llvm::Value *ParentToken) {
-  llvm::Value *bundleArgs[] = {ParentToken};
-  llvm::OperandBundleDef OB("convergencectrl", bundleArgs);
-  auto Output = llvm::CallBase::addOperandBundle(
-  Input, llvm::LLVMContext::OB_convergencectrl, OB, Input);
-  Input->replaceAllUsesWith(Output);
-  Input->eraseFromParent();
-  return Output;
-}
-
-llvm::IntrinsicInst *
-CodeGenFunction::emitConvergenceLoopToken(llvm::BasicBlock *BB,
-  llvm::Value *ParentToken) {
-  CGBuilderTy::InsertPoint IP = Builder.saveIP();
-  Builder.SetInsertPoint(&BB->front());
-  auto CB = Builder.CreateIntrinsic(
-  llvm::Intrinsic::experimental_convergence_loop, {}, {});
-  Builder.restoreIP(IP);
-
-  auto I = addConvergenceControlToken(CB, ParentToken);
-  return cast<llvm::IntrinsicInst>(I);
-}
-
-llvm::IntrinsicInst *
-CodeGenFunction::getOrEmitConvergenceEntryToken(llvm::Function *F) {
-  auto *BB = &F->getEntryBlock();
-  auto *token = getConvergenceToken(BB);
-  if (token)
-return token;
-
-  // Adding a convergence token requires the function to be marked as
-  // convergent.
-  F->setConvergent();
-
-  CGBuilderTy::InsertPoint IP = Builder.saveIP();
-  Builder.SetInsertPoint(&BB->front());
-  auto I = Builder.CreateIntrinsic(
-  llvm::Intrinsic::experimental_convergence_entry, {}, {});
-  assert(isa<llvm::IntrinsicInst>(I));
-  Builder.restoreIP(IP);
-
-  return cast<llvm::IntrinsicInst>(I);
-}
-
-llvm::IntrinsicInst *
-CodeGenFunction::getOrEmitConvergenceLoopToken(const LoopInfo *LI) {
-  assert(LI != nullptr);
-
-  auto *token = getConvergenceToken(LI->getHeader());
-  if (token)
-return token;
-
-  llvm::IntrinsicInst *PII =
-  LI->getParent()
-  ? emitConvergenceLoopToken(
-LI->getHeader(), getOrEmitConvergenceLoopToken(LI->getParent()))
-  : getOrEmitConvergenceEntryToken(LI->getHeader()->getParent());
-
-  return emitConvergenceLoopToken(LI->getHeader(), PII);
-}
-
-llvm::CallBase *
-CodeGenFunction::addControlledConvergenceToken(llvm::CallBase *Input) {
-  llvm::Value *ParentToken =
-  LoopStack.hasInfo()
-  ? getOrEmitConvergenceLoopToken(&LoopStack.getInfo())
-  : getOrEmitConvergenceEntryToken(Input->getFunction());
-  return addConvergenceControlToken(Input, ParentToken);
-}
-
 BitTest BitTest::decodeBitTestBuiltin(unsigned BuiltinID) {
   switch (BuiltinID) {
 // Main portable variants.
@@ -18402,12 +18319,9 @@ Value *CodeGenFunction::EmitHLSLBuiltinExpr(unsigned BuiltinID,
   

pgsql: Fix pg_sequence_last_value() for unlogged sequences on standbys.

2024-05-14 Thread Nathan Bossart
Fix pg_sequence_last_value() for unlogged sequences on standbys.

Presently, when this function is called for an unlogged sequence on
a standby server, it will error out with a message like

ERROR:  could not open file "base/5/16388": No such file or directory

Since the pg_sequences system view uses pg_sequence_last_value(),
it can error similarly.  To fix, modify the function to return NULL
for unlogged sequences on standby servers.  Since this bug is
present on all versions since v15, this approach is preferable to
making the ERROR nicer because we need to repair the pg_sequences
view without modifying its definition on released versions.  For
consistency, this commit also modifies the function to return NULL
for other sessions' temporary sequences.  The pg_sequences view
already appropriately filters out such sequences, so there's no bug
there, but we might as well offer some defense in case someone
invokes this function directly.

Unlogged sequences were first introduced in v15, but temporary
sequences are much older, so while the fix for unlogged sequences
is only back-patched to v15, the temporary sequence portion is
back-patched to all supported versions.

We could also remove the privilege check in the pg_sequences view
definition in v18 if we modify this function to return NULL for
sequences for which the current user lacks privileges, but that is
left as a future exercise for when v18 development begins.

Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20240501005730.GA594666%40nathanxps13
Backpatch-through: 12

Branch
--
REL_13_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/09ec5d45548ae435717176c4ac954f43e34ac731

Modified Files
--
src/backend/commands/sequence.c | 27 ++-
1 file changed, 18 insertions(+), 9 deletions(-)



pgsql: Fix pg_sequence_last_value() for unlogged sequences on standbys.

2024-05-14 Thread Nathan Bossart
Fix pg_sequence_last_value() for unlogged sequences on standbys.

Presently, when this function is called for an unlogged sequence on
a standby server, it will error out with a message like

ERROR:  could not open file "base/5/16388": No such file or directory

Since the pg_sequences system view uses pg_sequence_last_value(),
it can error similarly.  To fix, modify the function to return NULL
for unlogged sequences on standby servers.  Since this bug is
present on all versions since v15, this approach is preferable to
making the ERROR nicer because we need to repair the pg_sequences
view without modifying its definition on released versions.  For
consistency, this commit also modifies the function to return NULL
for other sessions' temporary sequences.  The pg_sequences view
already appropriately filters out such sequences, so there's no bug
there, but we might as well offer some defense in case someone
invokes this function directly.

Unlogged sequences were first introduced in v15, but temporary
sequences are much older, so while the fix for unlogged sequences
is only back-patched to v15, the temporary sequence portion is
back-patched to all supported versions.

We could also remove the privilege check in the pg_sequences view
definition in v18 if we modify this function to return NULL for
sequences for which the current user lacks privileges, but that is
left as a future exercise for when v18 development begins.

Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20240501005730.GA594666%40nathanxps13
Backpatch-through: 12

Branch
--
REL_14_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/c8714230ad35cc4587555ebe7052ba0ec87722f0

Modified Files
--
src/backend/commands/sequence.c | 27 ++-
1 file changed, 18 insertions(+), 9 deletions(-)



pgsql: Fix pg_sequence_last_value() for unlogged sequences on standbys.

2024-05-14 Thread Nathan Bossart
Fix pg_sequence_last_value() for unlogged sequences on standbys.

Presently, when this function is called for an unlogged sequence on
a standby server, it will error out with a message like

ERROR:  could not open file "base/5/16388": No such file or directory

Since the pg_sequences system view uses pg_sequence_last_value(),
it can error similarly.  To fix, modify the function to return NULL
for unlogged sequences on standby servers.  Since this bug is
present on all versions since v15, this approach is preferable to
making the ERROR nicer because we need to repair the pg_sequences
view without modifying its definition on released versions.  For
consistency, this commit also modifies the function to return NULL
for other sessions' temporary sequences.  The pg_sequences view
already appropriately filters out such sequences, so there's no bug
there, but we might as well offer some defense in case someone
invokes this function directly.

Unlogged sequences were first introduced in v15, but temporary
sequences are much older, so while the fix for unlogged sequences
is only back-patched to v15, the temporary sequence portion is
back-patched to all supported versions.

We could also remove the privilege check in the pg_sequences view
definition in v18 if we modify this function to return NULL for
sequences for which the current user lacks privileges, but that is
left as a future exercise for when v18 development begins.

Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20240501005730.GA594666%40nathanxps13
Backpatch-through: 12

Branch
--
REL_12_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/2812059d3eeaa05ef98dc2e2cc71a716967aeeb8

Modified Files
--
src/backend/commands/sequence.c | 27 ++-
1 file changed, 18 insertions(+), 9 deletions(-)



pgsql: Fix pg_sequence_last_value() for unlogged sequences on standbys.

2024-05-14 Thread Nathan Bossart
Fix pg_sequence_last_value() for unlogged sequences on standbys.

Presently, when this function is called for an unlogged sequence on
a standby server, it will error out with a message like

ERROR:  could not open file "base/5/16388": No such file or directory

Since the pg_sequences system view uses pg_sequence_last_value(),
it can error similarly.  To fix, modify the function to return NULL
for unlogged sequences on standby servers.  Since this bug is
present on all versions since v15, this approach is preferable to
making the ERROR nicer because we need to repair the pg_sequences
view without modifying its definition on released versions.  For
consistency, this commit also modifies the function to return NULL
for other sessions' temporary sequences.  The pg_sequences view
already appropriately filters out such sequences, so there's no bug
there, but we might as well offer some defense in case someone
invokes this function directly.

Unlogged sequences were first introduced in v15, but temporary
sequences are much older, so while the fix for unlogged sequences
is only back-patched to v15, the temporary sequence portion is
back-patched to all supported versions.

We could also remove the privilege check in the pg_sequences view
definition in v18 if we modify this function to return NULL for
sequences for which the current user lacks privileges, but that is
left as a future exercise for when v18 development begins.

Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20240501005730.GA594666%40nathanxps13
Backpatch-through: 12

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/3cb2f13ac500983c9c6b1eef3b3c2091c26f3040

Modified Files
--
doc/src/sgml/system-views.sgml| 29 +
src/backend/commands/sequence.c   | 31 ++-
src/test/recovery/t/001_stream_rep.pl |  8 
3 files changed, 55 insertions(+), 13 deletions(-)



pgsql: Fix pg_sequence_last_value() for unlogged sequences on standbys.

2024-05-14 Thread Nathan Bossart
Fix pg_sequence_last_value() for unlogged sequences on standbys.

Presently, when this function is called for an unlogged sequence on
a standby server, it will error out with a message like

ERROR:  could not open file "base/5/16388": No such file or directory

Since the pg_sequences system view uses pg_sequence_last_value(),
it can error similarly.  To fix, modify the function to return NULL
for unlogged sequences on standby servers.  Since this bug is
present on all versions since v15, this approach is preferable to
making the ERROR nicer because we need to repair the pg_sequences
view without modifying its definition on released versions.  For
consistency, this commit also modifies the function to return NULL
for other sessions' temporary sequences.  The pg_sequences view
already appropriately filters out such sequences, so there's no bug
there, but we might as well offer some defense in case someone
invokes this function directly.

Unlogged sequences were first introduced in v15, but temporary
sequences are much older, so while the fix for unlogged sequences
is only back-patched to v15, the temporary sequence portion is
back-patched to all supported versions.

We could also remove the privilege check in the pg_sequences view
definition in v18 if we modify this function to return NULL for
sequences for which the current user lacks privileges, but that is
left as a future exercise for when v18 development begins.

Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20240501005730.GA594666%40nathanxps13
Backpatch-through: 12

Branch
--
REL_16_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/c1664c8eefad0a034d64adde1c2ff70d23be6a2c

Modified Files
--
doc/src/sgml/system-views.sgml| 29 +
src/backend/commands/sequence.c   | 31 ++-
src/test/recovery/t/001_stream_rep.pl |  8 
3 files changed, 55 insertions(+), 13 deletions(-)



pgsql: Fix pg_sequence_last_value() for unlogged sequences on standbys.

2024-05-14 Thread Nathan Bossart
Fix pg_sequence_last_value() for unlogged sequences on standbys.

Presently, when this function is called for an unlogged sequence on
a standby server, it will error out with a message like

ERROR:  could not open file "base/5/16388": No such file or directory

Since the pg_sequences system view uses pg_sequence_last_value(),
it can error similarly.  To fix, modify the function to return NULL
for unlogged sequences on standby servers.  Since this bug is
present on all versions since v15, this approach is preferable to
making the ERROR nicer because we need to repair the pg_sequences
view without modifying its definition on released versions.  For
consistency, this commit also modifies the function to return NULL
for other sessions' temporary sequences.  The pg_sequences view
already appropriately filters out such sequences, so there's no bug
there, but we might as well offer some defense in case someone
invokes this function directly.

Unlogged sequences were first introduced in v15, but temporary
sequences are much older, so while the fix for unlogged sequences
is only back-patched to v15, the temporary sequence portion is
back-patched to all supported versions.

We could also remove the privilege check in the pg_sequences view
definition in v18 if we modify this function to return NULL for
sequences for which the current user lacks privileges, but that is
left as a future exercise for when v18 development begins.

Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20240501005730.GA594666%40nathanxps13
Backpatch-through: 12

Branch
--
REL_15_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/857d280c652876e0bbf0e1b5c237574514b50ebc

Modified Files
--
doc/src/sgml/system-views.sgml| 29 +
src/backend/commands/sequence.c   | 31 ++-
src/test/recovery/t/001_stream_rep.pl |  9 +
3 files changed, 56 insertions(+), 13 deletions(-)



Re: pg_sequence_last_value() for unlogged sequences on standbys

2024-05-13 Thread Nathan Bossart
Committed.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




[clang] [clang][SPIR-V] Always add convergence intrinsics (PR #88918)

2024-05-13 Thread Nathan Gauër via cfe-commits


@@ -1586,6 +1586,12 @@ class CodeGenModule : public CodeGenTypeCache {
   void AddGlobalDtor(llvm::Function *Dtor, int Priority = 65535,
  bool IsDtorAttrFunc = false);
 
+  // Return whether structured convergence intrinsics should be generated for
+  // this target.
+  bool shouldEmitConvergenceTokens() const {
+return getTriple().isSPIRVLogical();

Keenuts wrote:

That makes sense. Added a TODO line!

https://github.com/llvm/llvm-project/pull/88918
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[clang] [clang][SPIR-V] Always add convergence intrinsics (PR #88918)

2024-05-13 Thread Nathan Gauër via cfe-commits

https://github.com/Keenuts updated 
https://github.com/llvm/llvm-project/pull/88918

From a8bf6fe83a1c145ef81ee30471dc51de1b5354ef Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Nathan=20Gau=C3=ABr?= 
Date: Mon, 15 Apr 2024 17:05:40 +0200
Subject: [PATCH 1/6] [clang][SPIR-V] Always add convergence intrinsics
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

PR #80680 added bits in the codegen to lazily add convergence intrinsics
when required. This logic relied on the LoopStack. The issue is
when parsing the condition, the loopstack doesn't yet reflect the
correct values, as expected since we are not yet in the loop.

However, convergence tokens should sometimes already be available.
The solution which seemed the simplest is to greedily generate the
tokens when we generate SPIR-V.

Fixes #88144

Signed-off-by: Nathan Gauër 
---
 clang/lib/CodeGen/CGBuiltin.cpp   |  88 +
 clang/lib/CodeGen/CGCall.cpp  |   3 +
 clang/lib/CodeGen/CGStmt.cpp  |  94 ++
 clang/lib/CodeGen/CodeGenFunction.cpp |   9 ++
 clang/lib/CodeGen/CodeGenFunction.h   |   9 +-
 .../builtins/RWBuffer-constructor.hlsl|   1 -
 .../CodeGenHLSL/convergence/do.while.hlsl |  90 +
 clang/test/CodeGenHLSL/convergence/for.hlsl   | 121 ++
 clang/test/CodeGenHLSL/convergence/while.hlsl | 119 +
 9 files changed, 445 insertions(+), 89 deletions(-)
 create mode 100644 clang/test/CodeGenHLSL/convergence/do.while.hlsl
 create mode 100644 clang/test/CodeGenHLSL/convergence/for.hlsl
 create mode 100644 clang/test/CodeGenHLSL/convergence/while.hlsl

diff --git a/clang/lib/CodeGen/CGBuiltin.cpp b/clang/lib/CodeGen/CGBuiltin.cpp
index 8e31652f4dabe..fb5904558bbae 100644
--- a/clang/lib/CodeGen/CGBuiltin.cpp
+++ b/clang/lib/CodeGen/CGBuiltin.cpp
@@ -1141,91 +1141,8 @@ struct BitTest {
   static BitTest decodeBitTestBuiltin(unsigned BuiltinID);
 };
 
-// Returns the first convergence entry/loop/anchor instruction found in |BB|.
-// std::nullptr otherwise.
-llvm::IntrinsicInst *getConvergenceToken(llvm::BasicBlock *BB) {
-  for (auto &I : *BB) {
-auto *II = dyn_cast<llvm::IntrinsicInst>(&I);
-if (II && isConvergenceControlIntrinsic(II->getIntrinsicID()))
-  return II;
-  }
-  return nullptr;
-}
-
 } // namespace
 
-llvm::CallBase *
-CodeGenFunction::addConvergenceControlToken(llvm::CallBase *Input,
-llvm::Value *ParentToken) {
-  llvm::Value *bundleArgs[] = {ParentToken};
-  llvm::OperandBundleDef OB("convergencectrl", bundleArgs);
-  auto Output = llvm::CallBase::addOperandBundle(
-  Input, llvm::LLVMContext::OB_convergencectrl, OB, Input);
-  Input->replaceAllUsesWith(Output);
-  Input->eraseFromParent();
-  return Output;
-}
-
-llvm::IntrinsicInst *
-CodeGenFunction::emitConvergenceLoopToken(llvm::BasicBlock *BB,
-  llvm::Value *ParentToken) {
-  CGBuilderTy::InsertPoint IP = Builder.saveIP();
-  Builder.SetInsertPoint(&BB->front());
-  auto CB = Builder.CreateIntrinsic(
-  llvm::Intrinsic::experimental_convergence_loop, {}, {});
-  Builder.restoreIP(IP);
-
-  auto I = addConvergenceControlToken(CB, ParentToken);
-  return cast<llvm::IntrinsicInst>(I);
-}
-
-llvm::IntrinsicInst *
-CodeGenFunction::getOrEmitConvergenceEntryToken(llvm::Function *F) {
-  auto *BB = &F->getEntryBlock();
-  auto *token = getConvergenceToken(BB);
-  if (token)
-return token;
-
-  // Adding a convergence token requires the function to be marked as
-  // convergent.
-  F->setConvergent();
-
-  CGBuilderTy::InsertPoint IP = Builder.saveIP();
-  Builder.SetInsertPoint(&BB->front());
-  auto I = Builder.CreateIntrinsic(
-  llvm::Intrinsic::experimental_convergence_entry, {}, {});
-  assert(isa<llvm::IntrinsicInst>(I));
-  Builder.restoreIP(IP);
-
-  return cast<llvm::IntrinsicInst>(I);
-}
-
-llvm::IntrinsicInst *
-CodeGenFunction::getOrEmitConvergenceLoopToken(const LoopInfo *LI) {
-  assert(LI != nullptr);
-
-  auto *token = getConvergenceToken(LI->getHeader());
-  if (token)
-return token;
-
-  llvm::IntrinsicInst *PII =
-  LI->getParent()
-  ? emitConvergenceLoopToken(
-LI->getHeader(), getOrEmitConvergenceLoopToken(LI->getParent()))
-  : getOrEmitConvergenceEntryToken(LI->getHeader()->getParent());
-
-  return emitConvergenceLoopToken(LI->getHeader(), PII);
-}
-
-llvm::CallBase *
-CodeGenFunction::addControlledConvergenceToken(llvm::CallBase *Input) {
-  llvm::Value *ParentToken =
-  LoopStack.hasInfo()
-  ? getOrEmitConvergenceLoopToken(&LoopStack.getInfo())
-  : getOrEmitConvergenceEntryToken(Input->getFunction());
-  return addConvergenceControlToken(Input, ParentToken);
-}
-
 BitTest BitTest::decodeBitTestBuiltin(unsigned BuiltinID) {
   switch (BuiltinID) {
 // Main portable variants.
@@ -18400,12 +18317,9 @@ Value *CodeGenFunction::EmitHLSLBuiltinExpr(unsigned BuiltinID,
   

Re: An improved README experience for PostgreSQL

2024-05-13 Thread Nathan Bossart
On Mon, May 13, 2024 at 05:43:45PM +0200, Alvaro Herrera wrote:
> Can't we add these two lines per topic to the README.md?

That would be fine with me, too.  The multiple-files approach is perhaps a
bit more tailored to GitHub, but there's something to be said for keeping
this information centralized.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




[f2fs-dev] [PATCH] f2fs: Add inline to f2fs_build_fault_attr() stub

2024-05-13 Thread Nathan Chancellor
When building without CONFIG_F2FS_FAULT_INJECTION, there is a warning
from each file that includes f2fs.h because the stub for
f2fs_build_fault_attr() is missing inline:

  In file included from fs/f2fs/segment.c:21:
  fs/f2fs/f2fs.h:4605:12: warning: 'f2fs_build_fault_attr' defined but not used [-Wunused-function]
   4605 | static int f2fs_build_fault_attr(struct f2fs_sb_info *sbi, unsigned long rate,
        |            ^

Add the missing inline to resolve all of the warnings for this
configuration.

Fixes: 4ed886b187f4 ("f2fs: check validation of fault attrs in 
f2fs_build_fault_attr()")
Signed-off-by: Nathan Chancellor 
---
 fs/f2fs/f2fs.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index ef7de97be647..1974b6aff397 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -4602,8 +4602,8 @@ static inline bool f2fs_need_verity(const struct inode 
*inode, pgoff_t idx)
 extern int f2fs_build_fault_attr(struct f2fs_sb_info *sbi, unsigned long rate,
unsigned long type);
 #else
-static int f2fs_build_fault_attr(struct f2fs_sb_info *sbi, unsigned long rate,
-   unsigned long type)
+static inline int f2fs_build_fault_attr(struct f2fs_sb_info *sbi,
+   unsigned long rate, unsigned long type)
 {
return 0;
 }

---
base-commit: 991b6bdf1b009832256f8bc3035d4bcba664657b
change-id: 20240513-f2fs-add-missing-inline-to-f2fs_build_fault_attr-207a50c97005

Best regards,
-- 
Nathan Chancellor 



___
Linux-f2fs-devel mailing list
Linux-f2fs-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/linux-f2fs-devel


Re: An improved README experience for PostgreSQL

2024-05-13 Thread Nathan Bossart
On Sun, May 12, 2024 at 05:17:42PM +0200, Peter Eisentraut wrote:
> I don't know, I find these files kind of "yelling".  It's fine to have a
> couple, but now it's getting a bit much, and there are more that could be
> added.

I'm not sure what you mean by this.  Do you mean that the contents are too
blunt?  That there are too many files?  Something else?

> If we want to enhance the GitHub experience, we can also add these files to
> the organization instead: 
> https://docs.github.com/en/communities/setting-up-your-project-for-healthy-contributions/creating-a-default-community-health-file

This was the intent of my patch.  There might be a few others that we could
use, but I figured we could start with the low-hanging fruit that would
have the most impact on the GitHub experience.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




[PATCH] Ability to specify :html-head as a function

2024-05-13 Thread Nathan Nichols
Hello org-mode users,

Here's a patch that adds the ability to specify :html-head as a function. I
think this is a logical change because:

1. It provides a wider range of options for how to use :html-head (before
:html-head could only be a string, now it can also be a function.)
2. It is consistent with the behavior of :html-preamble and
:html-postamble, which can both either be a string or a function.

I probably did this wrong but anyway here's my attempt at a patch
submission. Please let me know if you need any additional information or
have any questions.

Thanks,

-Nate
From a4c5d3e648898c91858643c27ff6d56cde7af3be Mon Sep 17 00:00:00 2001
From: Nate Nichols 
Date: Sun, 12 May 2024 19:53:09 -0400
Subject: [PATCH] Squashed commit of the following:

commit 8160b298a544642881fd10c651fd4e736517cf2f
Author: Nate Nichols 
Date:   Sun May 12 19:03:25 2024 -0400

Added ability to specify :html-head as a function
---
 lisp/org-element.el | 20 ---
 lisp/ox-html.el | 59 +++--
 2 files changed, 42 insertions(+), 37 deletions(-)

diff --git a/lisp/org-element.el b/lisp/org-element.el
index cf0982f18..63222c2cc 100644
--- a/lisp/org-element.el
+++ b/lisp/org-element.el
@@ -5578,9 +5578,8 @@ If there is no affiliated keyword, return the empty string."
 ;; global indentation from the contents of the current element.
 
 (defun org-element-normalize-string (s)
-  "Ensure string S ends with a single newline character.
-
-If S isn't a string return it unchanged.  If S is the empty
+  "Return S, or evaluate to a string ending with a single newline character.
+If S isn't a string or a function, return it unchanged.  If S is the empty
 string, return it.  Otherwise, return a new string with a single
 newline character at its end."
   (cond
@@ -5589,6 +5588,21 @@ newline character at its end."
(t (and (string-match "\\(\n[ \t]*\\)*\\'" s)
 	   (replace-match "\n" nil nil s)))))
 
+
+(defun org-element-normalize-str-or-fn (input &rest trailing)
+  "If INPUT is a string, it is passed to `org-element-normalize-string'.
+If INPUT is a function, it is applied to arguments TRAILING, and the result is
+passed to `org-element-normalize-string'."
+  (let ((s (if (functionp input) (format "%s" (apply input trailing)) input)))
+(org-element-normalize-string s)))
+
+
+;; Test cases for `org-element-normalize-str-or-fn'
+(cl-assert (string= (org-element-normalize-str-or-fn (lambda (_res) "abcdefg") nil) "abcdefg\n"))
+(cl-assert (string= (org-element-normalize-str-or-fn "abcdefg") "abcdefg\n") nil)
+(cl-assert (= (org-element-normalize-str-or-fn 123 nil) 123))
+
+
 (defun org-element-normalize-contents (element &optional ignore-first)
   "Normalize plain text in ELEMENT's contents.
 
diff --git a/lisp/ox-html.el b/lisp/ox-html.el
index ec0add65e..72a8590c4 100644
--- a/lisp/ox-html.el
+++ b/lisp/ox-html.el
@@ -131,7 +131,11 @@
 (:html-equation-reference-format "HTML_EQUATION_REFERENCE_FORMAT" nil org-html-equation-reference-format t)
 (:html-postamble nil "html-postamble" org-html-postamble)
 (:html-preamble nil "html-preamble" org-html-preamble)
-(:html-head "HTML_HEAD" nil org-html-head newline)
+;; You should be able to use multiple headline properties "#+EXPORT_HTML_HEAD" in a file.
+;; The results of each occurrence will be joined by a newline to form the final string
+;; included in the <head> section.
+;; TODO: Test/verify this works still. See: `org-export-options-alist'.
+(:html-head "HTML_HEAD" "html-head" org-html-head newline)
 (:html-head-extra "HTML_HEAD_EXTRA" nil org-html-head-extra newline)
 (:subtitle "SUBTITLE" nil nil parse)
 (:html-head-include-default-style
@@ -1402,6 +1406,24 @@ This option can also be set on with the CREATOR keyword."
   :package-version '(Org . "8.0")
   :type '(string :tag "Creator string"))
 
+
;;;; Template :: Head
+
+(defcustom org-html-head ""
+  "When set to a string, include that string in the HTML header.
+When set to a function, apply this function and insert the
+returned string.  The function takes the property list of export
+options as its only argument.
+
+Setting :html-head in publishing projects will take
+precedence over this variable."
+  :group 'org-export-html
+  :type '(choice (const :tag "Default (empty)" "")
+ (string :tag "Fixed string")
+		 (function :tag "Function (must return a string)")))
+
+
+
 ;;;; Template :: Preamble
 
 (defcustom org-html-preamble t
@@ -1525,38 +1547,7 @@ style information."
 ;;;###autoload
 (put 'org-html-head-include-default-style 'safe-local-variable 'booleanp)
 
-(defcustom org-html-head ""
-  "Org-wide head definitions for exported HTML files.
-
-This variable can contain the full HTML structure to provide a
-style, including the surrounding HTML tags.  You can consider
-including definitions for the following classes: title, todo,
-done, timestamp, timestamp-kwd, tag, target.
-
-For example, a valid value would be:
-
-   
-  

[Bug 2062971] Re: Enable Ubuntu Pro page formatting is hard to follow

2024-05-13 Thread Nathan Teodosio
I created a minimal reproducer and asked in the Gnome forum:
https://discourse.gnome.org/t/gtkpicture-gtklabel-in-horizontal-gtkbox-hexpand-and-halign-ignored/20967

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2062971

Title:
  Enable Ubuntu Pro page formatting is hard to follow

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-initial-setup/+bug/2062971/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

[clang] [clang][SPIR-V] Always add convergence intrinsics (PR #88918)

2024-05-13 Thread Nathan Gauër via cfe-commits

Keenuts wrote:

Thanks for the reviews. Waiting for 1 approval from MS and I'll merge

https://github.com/llvm/llvm-project/pull/88918
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[clang] [clang][SPIR-V] Always add convergence intrinsics (PR #88918)

2024-05-13 Thread Nathan Gauër via cfe-commits


@@ -1586,6 +1586,12 @@ class CodeGenModule : public CodeGenTypeCache {
   void AddGlobalDtor(llvm::Function *Dtor, int Priority = 65535,
  bool IsDtorAttrFunc = false);
 
+  // Return whether structured convergence intrinsics should be generated for
+  // this target.
+  bool shouldEmitConvergenceTokens() const {
+return getTriple().isSPIRVLogical();

Keenuts wrote:

Not necessarily. The SPIR-V compute (as OpenCL not Vulkan compute) doesn't 
require a structured CFG, so this could remain forever if they don't see any 
benefit in generating a SCFG.

https://github.com/llvm/llvm-project/pull/88918
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


Re: Flink Kubernetes Operator - How can I use a jar that is hosted on a private maven repo for a FlinkSessionJob?

2024-05-12 Thread Nathan T. A. Lewis
Hi Mate,


That option might be exactly what I need. Thanks!

Best regards,
Nathan T. A. Lewis




 On Sun, 12 May 2024 05:27:10 -0600 czmat...@gmail.com wrote 


Hi Nathan,


Job submissions for FlinkSessionJob resources will always be done by first 
uploading the JAR file itself from the Operator pod using the JobManager's REST 
API, then starting a new job using the uploaded JAR. This means that 
downloading the JAR file with an initContainer to the JobManager will not help 
in your case.


You could look into the Operator config option 
'kubernetes.operator.user.artifacts.http.header' to set the HTTP headers used 
to download the artifacts. Please check FLINK-27483 [1] for more information.


[1] https://issues.apache.org/jira/browse/FLINK-27483
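
For reference, a sketch of how that option might look (the option name is taken from the message above; the exact placement, e.g. the operator's flink-conf.yaml or Helm values, and the token value are assumptions/placeholders):

```yaml
# Sketch only: HTTP header the operator would use when downloading user
# artifacts for a FlinkSessionJob. The token below is a placeholder.
kubernetes.operator.user.artifacts.http.header: "Authorization: Bearer <artifact-registry-token>"
```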


Regards,
Mate Czagany



Nathan T. A. Lewis  wrote (at: Thu, May 9, 2024, 19:00):

Hello,

I am trying to run a Flink Session Job with a jar that is hosted on a maven 
repository in Google's Artifact Registry.

The first thing I tried was to just specify the `jarURI` directly:

apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
  name: myJobName
spec:
  deploymentName: flink-session
  job:
    jarURI: "https://mylocation-maven.pkg.dev/myGCPproject/myreponame/path/to/the.jar"
    entryClass: myentryclass
    parallelism: 1
    upgradeMode: savepoint

But, since it is a private repository, it unsurprisingly resulted in:

java.io.IOException: Server returned HTTP response code: 401 for URL: 
https://mylocation-maven.pkg.dev/myGCPproject/myreponame/path/to/the.jar

I didn't see anywhere in the FlinkSessionJob definition to put a bearer token 
and doubt it would be a good idea security-wise to store one there anyway, so I 
instead looked into using `initContainers` on the FlinkDeployment like in this 
example: 
https://github.com/apache/flink-kubernetes-operator/blob/main/examples/pod-template.yaml

apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-session
spec:
  flinkVersion: v1_18
  flinkConfiguration:
    taskmanager.numberOfTaskSlots: "2"
    state.checkpoints.dir: mycheckpointsdir
    state.savepoints.dir: mysavepointsdir
    state.backend: rocksdb
    state.backend.rocksdb.timer-service.factory: ROCKSDB
    state.backend.incremental: "true"
    execution.checkpointing.interval: "1m"
  serviceAccount: flink
  jobManager:
    resource:
      memory: "2048m"
      cpu: 0.5
  taskManager:
    resource:
      memory: "2048m"
      cpu: 1
  podTemplate:
      spec:
        initContainers:
          - name: gcloud
            image: google/cloud-sdk:latest
            volumeMounts:
              - mountPath: /opt/flink/downloads
                name: downloads
            command: ["sh", "-c", "gcloud artifacts files download --project=myGCPproject --repository=myreponame --location=mylocation --destination=/opt/flink/downloads path/to/the.jar"]
        containers:
          - name: flink-main-container
            volumeMounts:
              - mountPath: /opt/flink/downloads
                name: downloads
        volumes:
          - name: downloads
            emptyDir: { }

This worked well for getting the jar onto the jobManager pod, but it looks like 
the FlinkSessionJob actually looks for the jar on the pod of the Flink 
Kubernetes Operator itself. So in the end, the job still isn't being run.

As a workaround for now, I'm planning to move my jar from Maven to a Google 
Cloud Storage bucket and then add the gcs filesystem plugin to the operator 
image. What I'd love to know is if I've overlooked some already implemented way 
to connect to a private maven repository for a FlinkSessionJob. I suppose in a 
worst case, we could write a filesystem plugin that handles the 
`artifactrepository://` scheme and uses Google's java libraries to handle 
authentication and download of the artifact. Again, I'm kind of hoping 
something already exists though, rather than having to build something new.


Best regards,
Nathan T.A. Lewis


[clang] [clang][NFC] Remove class layout scissor (PR #89055)

2024-05-12 Thread Nathan Sidwell via cfe-commits

https://github.com/urnathan closed 
https://github.com/llvm/llvm-project/pull/89055
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[Numpy-discussion] Re: Fastpathing lexsort for integers

2024-05-11 Thread Nathan
Sorry for not responding without prompting. A PR is indeed a better venue
to ask questions about a proposed code change than a mailing list post.

Adding a keyword argument to trigger a fast path seems like a bad python
API to me, since most users won’t notice it. “kind” seems nicer in that
it’s more general, but it would be even better to have some kind of
heuristic to choose the fast path when appropriate, although it sounds like
that’s not possible? Are there cases where the int path is a pessimisation?

It seems to me like it would be more natural to alter the C code as you’re
implying, but I think there’s some confusion about which C function you
need. You probably shouldn’t touch the public C API (PyArray_LexSort is in
the public API). The function that is actually being called in that python
file is a wrapper for the C API function:

https://github.com/numpy/numpy/blob/1e5386334b6f9508964fcd2e1c30293a9d82f026/numpy/_core/src/multiarray/multiarraymodule.c#L3446

So, rather than putting your int fast path in Python, you'd implement it in C
in that file, adding the new "kind" keyword (or some sort of heuristic to
trigger the fast path) to array_lexsort in C.

If it’s possible to use a heuristic rather than requiring users to opt in,
then it could make sense to update PyArray_LexSort, but changing public C
APIs is much more disruptive in C than in python, so we generally don’t do
it and make python-level optimizations possible in C by adding new
non-public C functions that python can call using private APIs like
_multiarray_umath.
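
To make the idea concrete, here is a pure-Python illustration of one possible integer fast path (this is not the proposed C change, just a sketch): pack several small, non-negative integer keys into a single int64 key and sort once, instead of running one indirect sort per key.

```python
import numpy as np

def lexsort_int_fastpath(keys):
    """Illustrative sketch only. Assumes non-empty, non-negative integer
    keys whose packed values fit in int64; a real version would live in C
    inside array_lexsort and fall back to the generic path otherwise."""
    keys = [np.asarray(k, dtype=np.int64) for k in keys]
    packed = np.zeros_like(keys[-1])
    # np.lexsort treats the *last* key as primary, so fold that one in
    # first, making it the most significant part of the packed key.
    for k in reversed(keys):
        packed = packed * (int(k.max()) + 1) + k
    return np.argsort(packed, kind="stable")

a = np.array([1, 0, 1, 0])
b = np.array([3, 3, 1, 2])
order = lexsort_int_fastpath([a, b])  # same ordering as np.lexsort((a, b))
```

The single `argsort` over the packed key is where the speedup would come from; the packing itself is O(n) per key.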

Obviously writing CPython C API code is a lot less straightforward than
Python, but the numpy code reviewers have a lot of experience spotting C
API issues and we can point you to resources for learning.

Hope that helps,

Nathan

On Sat, May 11, 2024 at 4:35 PM  wrote:

> Any feedback, even just on where to locate the Python code I wrote?
>
> Otherwise I will try to just open a PR and see how it goes.
>
> Thanks,
>
> Pietro
> ___
> NumPy-Discussion mailing list -- numpy-discussion@python.org
> To unsubscribe send an email to numpy-discussion-le...@python.org
> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
> Member address: nathan12...@gmail.com
>
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


Re: pg_sequence_last_value() for unlogged sequences on standbys

2024-05-10 Thread Nathan Bossart
On Wed, May 08, 2024 at 11:01:01AM +0900, Michael Paquier wrote:
> On Tue, May 07, 2024 at 02:39:42PM -0500, Nathan Bossart wrote:
>> Okay, phew.  We can still do something like v3-0002 for v18.  I'll give
>> Michael a chance to comment on 0001 before committing/back-patching that
>> one.
> 
> What you are doing in 0001, and 0002 for v18 sounds fine to me.

Great.  Rather than commit this on a Friday afternoon, I'll just post what
I have staged for commit early next week.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
>From 19d9a1dd88385664e6991121e4751aba85a45639 Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Fri, 10 May 2024 15:55:24 -0500
Subject: [PATCH v4 1/1] Fix pg_sequence_last_value() for unlogged sequences on
 standbys.

Presently, when this function is called for an unlogged sequence on
a standby server, it will error out with a message like

ERROR:  could not open file "base/5/16388": No such file or directory

Since the pg_sequences system view uses pg_sequence_last_value(),
it can error similarly.  To fix, modify the function to return NULL
for unlogged sequences on standby servers.  Since this bug is
present on all versions since v15, this approach is preferable to
making the ERROR nicer because we need to repair the pg_sequences
view without modifying its definition on released versions.  For
consistency, this commit also modifies the function to return NULL
for other sessions' temporary sequences.  The pg_sequences view
already appropriately filters out such sequences, so there's no bug
there, but we might as well offer some defense in case someone
invokes this function directly.

Unlogged sequences were first introduced in v15, but temporary
sequences are much older, so while the fix for unlogged sequences
is only back-patched to v15, the temporary sequence portion is
back-patched to all supported versions.

We could also remove the privilege check in the pg_sequences view
definition in v18 if we modify this function to return NULL for
sequences for which the current user lacks privileges, but that is
left as a future exercise for when v18 development begins.

Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20240501005730.GA594666%40nathanxps13
Backpatch-through: 12
---
 doc/src/sgml/system-views.sgml| 34 +++
 src/backend/commands/sequence.c   | 31 +---
 src/test/recovery/t/001_stream_rep.pl |  8 +++
 3 files changed, 60 insertions(+), 13 deletions(-)

diff --git a/doc/src/sgml/system-views.sgml b/doc/src/sgml/system-views.sgml
index a54f4a4743..9842ee276e 100644
--- a/doc/src/sgml/system-views.sgml
+++ b/doc/src/sgml/system-views.sgml
@@ -3091,15 +3091,41 @@ SELECT * FROM pg_locks pl LEFT JOIN pg_prepared_xacts ppx
   
The last sequence value written to disk.  If caching is used,
this value can be greater than the last value handed out from the
-   sequence.  Null if the sequence has not been read from yet.  Also, if
-   the current user does not have USAGE
-   or SELECT privilege on the sequence, the value is
-   null.
+   sequence.
   
  
 

   
+
+  
+   The last_value column will read as null if any of
+   the following are true:
+   
+
+ 
+  The sequence has not been read from yet.
+ 
+
+
+ 
+  The current user does not have USAGE or
+  SELECT privilege on the sequence.
+ 
+
+
+ 
+  The sequence is a temporary sequence created by another session.
+ 
+
+
+ 
+  The sequence is unlogged and the server is a standby.
+ 
+
+   
+  
+
  
 
  
diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 46103561c3..28f8522264 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -1777,11 +1777,8 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
Oid relid = PG_GETARG_OID(0);
SeqTableelm;
Relationseqrel;
-   Buffer  buf;
-   HeapTupleData seqtuple;
-   Form_pg_sequence_data seq;
-   boolis_called;
-   int64   result;
+   boolis_called = false;
+   int64   result = 0;
 
/* open and lock sequence */
	init_sequence(relid, &elm, &seqrel);
@@ -1792,12 +1789,28 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 errmsg("permission denied for sequence %s",

RelationGetRelationName(seqrel))));
 
-   seq = read_seq_tuple(seqrel, , );
+   /*
+* We return NULL for other sessions' temporary sequences.  The
+* pg_sequences system view already filters those out, but this offers a
+* defense against ERRORs in case someone invokes this function 
directly.
+*
+* Also, for the benefit of the pg_sequ

[clang] [clang][NFC] Remove class layout scissor (PR #89055)

2024-05-10 Thread Nathan Sidwell via cfe-commits

https://github.com/urnathan updated 
https://github.com/llvm/llvm-project/pull/89055

>From 9ab483f3451bfcaa7968c5f1cf7115144522f58a Mon Sep 17 00:00:00 2001
From: Nathan Sidwell 
Date: Mon, 1 Apr 2024 16:15:12 -0400
Subject: [PATCH 1/3] [clang] Remove class layout scissor

---
 clang/lib/CodeGen/CGRecordLayoutBuilder.cpp | 22 ++---
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp 
b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
index 868b1ab98e048..cc51cc3476c43 100644
--- a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
+++ b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
@@ -75,7 +75,7 @@ struct CGRecordLowering {
   // sentinel member type that ensures correct rounding.
   struct MemberInfo {
 CharUnits Offset;
-enum InfoKind { VFPtr, VBPtr, Field, Base, VBase, Scissor } Kind;
+enum InfoKind { VFPtr, VBPtr, Field, Base, VBase } Kind;
 llvm::Type *Data;
 union {
   const FieldDecl *FD;
@@ -197,7 +197,7 @@ struct CGRecordLowering {
  const CXXRecordDecl *Query) const;
   void calculateZeroInit();
   CharUnits calculateTailClippingOffset(bool isNonVirtualBaseType) const;
-  void checkBitfieldClipping() const;
+  void checkBitfieldClipping(bool isNonVirtualBaseType) const;
   /// Determines if we need a packed llvm struct.
   void determinePacked(bool NVBaseType);
   /// Inserts padding everywhere it's needed.
@@ -299,8 +299,8 @@ void CGRecordLowering::lower(bool NVBaseType) {
   accumulateVBases();
   }
   llvm::stable_sort(Members);
+  checkBitfieldClipping(NVBaseType);
   Members.push_back(StorageInfo(Size, getIntNType(8)));
-  checkBitfieldClipping();
   determinePacked(NVBaseType);
   insertPadding();
   Members.pop_back();
@@ -894,8 +894,6 @@ CGRecordLowering::calculateTailClippingOffset(bool 
isNonVirtualBaseType) const {
 }
 
 void CGRecordLowering::accumulateVBases() {
-  Members.push_back(MemberInfo(calculateTailClippingOffset(false),
-   MemberInfo::Scissor, nullptr, RD));
  for (const auto &Base : RD->vbases()) {
 const CXXRecordDecl *BaseDecl = Base.getType()->getAsCXXRecordDecl();
 if (BaseDecl->isEmpty())
@@ -950,18 +948,20 @@ void CGRecordLowering::calculateZeroInit() {
 }
 
 // Verify accumulateBitfields computed the correct storage representations.
-void CGRecordLowering::checkBitfieldClipping() const {
+void CGRecordLowering::checkBitfieldClipping(
+bool isNonVirtualBaseType LLVM_ATTRIBUTE_UNUSED) const {
 #ifndef NDEBUG
+  auto ScissorOffset = calculateTailClippingOffset(isNonVirtualBaseType);
   auto Tail = CharUnits::Zero();
  for (const auto &M : Members) {
-// Only members with data and the scissor can cut into tail padding.
-if (!M.Data && M.Kind != MemberInfo::Scissor)
+// Only members with data could possibly overlap.
+if (!M.Data)
   continue;
 
 assert(M.Offset >= Tail && "Bitfield access unit is not clipped");
-Tail = M.Offset;
-if (M.Data)
-  Tail += getSize(M.Data);
+Tail = M.Offset + getSize(M.Data);
+assert((Tail <= ScissorOffset || M.Offset >= ScissorOffset) &&
+   "Bitfield straddles scissor offset");
   }
 #endif
 }

>From 050df411c74bdab8d9d6562c127abc92babbb527 Mon Sep 17 00:00:00 2001
From: Nathan Sidwell 
Date: Wed, 17 Apr 2024 17:15:57 -0400
Subject: [PATCH 2/3] Fix param spelling

---
 clang/lib/CodeGen/CGRecordLayoutBuilder.cpp | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp 
b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
index cc51cc3476c43..38167903cda50 100644
--- a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
+++ b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
@@ -949,9 +949,9 @@ void CGRecordLowering::calculateZeroInit() {
 
 // Verify accumulateBitfields computed the correct storage representations.
 void CGRecordLowering::checkBitfieldClipping(
-bool isNonVirtualBaseType LLVM_ATTRIBUTE_UNUSED) const {
+bool IsNonVirtualBaseType LLVM_ATTRIBUTE_UNUSED) const {
 #ifndef NDEBUG
-  auto ScissorOffset = calculateTailClippingOffset(isNonVirtualBaseType);
+  auto ScissorOffset = calculateTailClippingOffset(IsNonVirtualBaseType);
   auto Tail = CharUnits::Zero();
  for (const auto &M : Members) {
 // Only members with data could possibly overlap.

>From 4b93593a63850f4165979a38018d6c4be23dd681 Mon Sep 17 00:00:00 2001
From: Nathan Sidwell 
Date: Mon, 6 May 2024 10:28:06 -0400
Subject: [PATCH 3/3] lose attribute

---
 clang/lib/CodeGen/CGRecordLayoutBuilder.cpp | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp 
b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
index 38167903cda50..5169be204c14d 100644
--- a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
+++ b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
@@ -948,8 +948,7 @@ void CGRecordLowering::calculateZeroIn

Re: Missing bytes on serial port

2024-05-09 Thread Nathan Hartman
When writing to the port, write() should indicate number of bytes written.
Are you checking the return value and is it less than expected?

Note that write() enqueues the bytes but returns before write complete. If
port is closed before hardware finishes shifting all the bits out, message
will be truncated.

Note that most MCUs have errata related to UART completion: usually the
"busy" bit (or whatever a particular micro calls it) indicates transmission
done before it actually shifts out the last bits.
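
The return-value check can be sketched as follows (in Python for brevity; the NuttX code in question is C, but the POSIX semantics are the same):

```python
import os

def write_all(fd, data):
    """POSIX write() may accept fewer bytes than requested, so check the
    return value and loop until everything has been queued."""
    view = memoryview(data)
    total = 0
    while total < len(view):
        n = os.write(fd, view[total:])
        total += n
    return total

# Demonstration on a pipe; a real serial port would additionally need
# tcdrain() (or the platform equivalent) before close() so queued bits
# finish shifting out of the UART.
r, w = os.pipe()
sent = write_all(w, b"hello")
os.close(w)
received = os.read(r, 16)
os.close(r)
```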

On Thu, May 9, 2024 at 12:41 PM Mark Stevens
 wrote:

> This is a direct connection between the two chips on a PCB.
>
> Regards,
> Mark
> —
> Mark Stevens
> mark.stev...@wildernesslabs.co
>
>
>
>
>
>
> > On 9 May 2024, at 17:38, Bill Rees 
> wrote:
> >
> >
> > I've seen this problem before which revolved around flow control;
> essentially soft versus hard flow control (xmit off/ xmit on)
> >
> > Are you using a null modem cable? If not that may give you the accuracy
> you're looking for, else hardware flow control is the only other
> possibility if it is flow control.
> >
> > Bill
> >
> > On 5/9/2024 9:24 AM, Tomek CEDRO wrote:
> >> On Thu, May 9, 2024 at 6:15 PM Mark Stevens
> >>  wrote:
> >>> Yes, I am sure both side are configured correctly.
> >>> If I run the kernel code only then all works as expected.
> >>> If I run user space code alone all works as expected.
> >>> The problems happen when I transition from kernel use of the UART to
> user space use of the UART.
> >>> I have also connected a logic analyser to the system and all looks
> good.
> >>> Also, my current problem is NuttX reading data not sending it.
> Sending may also be a problem but I have not got that far at the moment.
> >> Which UART do you use? What happens when you use different UART? Are
> >> you sure it does not interfere with console?
> >>
> >> --
> >> CeDeROM, SQ7MHZ, http://www.tomek.cedro.info
> >
>
>


Re: add --no-sync to pg_upgrade's calls to pg_dump and pg_dumpall

2024-05-09 Thread Nathan Bossart
On Thu, May 09, 2024 at 09:03:56AM +0900, Michael Paquier wrote:
> +1.  Could there be an argument in favor of a backpatch?  This is a
> performance improvement, but one could also argue that the addition of
> sync support in pg_dump[all] has made that a regression that we'd
> better fix because the flushes don't matter in this context.  They
> also bring costs for no gain.

I don't see a strong need to back-patch this, if for no other reason than
it seems to have gone unnoticed for 7 major versions.  Plus, based on my
admittedly limited testing, this is unlikely to provide significant
improvements.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




bug#70848: ntpd updating the time hangs shepherd

2024-05-09 Thread Nathan Dehnel
When I turn off my computer and take it somewhere and turn it back on,
the time is wrong. I think this is because my RTC battery is
malfunctioning. After I boot, ntpd updates the time, and this breaks
SOMETHING, I don't know what.

May  9 14:04:23 localhost ntpd[1079]: Soliciting pool server 23.131.160.7
May  9 14:04:23 localhost ntpd[1079]: Soliciting pool server 23.150.41.123
May  9 14:04:24 localhost ntpd[1079]: Soliciting pool server 44.31.46.226
May  9 14:06:42 localhost ntpd[1079]: Soliciting pool server 74.208.117.38
May  9 14:06:42 localhost ntpd[1079]: Soliciting pool server 168.61.215.74
May  9 14:08:54 localhost ntpd[1079]: kernel reports TIME_ERROR: 0x41:
Clock Unsynchronized

Shepherd then hangs and becomes unresponsive to all commands, and the
fan spins up. I lose the ability to shut down and must kill the device
with REISUB.

guix 2bea3f2





[Numpy-discussion] Re: Add context management for np.seterr

2024-05-09 Thread Nathan
I think you're looking for the errstate context manager:

https://numpy.org/doc/stable/reference/generated/numpy.errstate.html
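
A minimal sketch matching the example in the question:

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([0.0, 2.0])

# Inside the context, divide-by-zero is silenced; 1.0/0.0 yields inf,
# which nan_to_num then replaces with a large finite number.
with np.errstate(divide="ignore"):
    safe = np.nan_to_num(a / b)

# On exit the previous error state is restored automatically, so a bare
# division by zero would warn again here.
```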

On Thu, May 9, 2024 at 1:11 PM  wrote:

> The current way (according to 1.26 doc) of setting and resetting error is
> ```
> old_settings = np.seterr(all='ignore')  #seterr to known value
> np.seterr(over='raise')
> {'divide': 'ignore', 'over': 'ignore', 'under': 'ignore', 'invalid':
> 'ignore'}
> np.seterr(**old_settings)  # reset to default
> ```
> This may be tedious and not elegant when we need to suppress the error for
> some certain lines, for example, `np.nan_to_num(a/b) ` as we need to
> suppress divide here.
>
> I think it would be way more elegant to use `with` statement here, which
> should be able to be implemented with some simple changes.
>
> An ideal result would be like:
> ```
> with np.seterr(divide='ignore'):
> np.nan_to_num(a/b) # no warning
>
> a/0 # still warn
> ```
> ___
> NumPy-Discussion mailing list -- numpy-discussion@python.org
> To unsubscribe send an email to numpy-discussion-le...@python.org
> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
> Member address: nathan12...@gmail.com
>
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


pgsql: Fix privilege checks in pg_stats_ext and pg_stats_ext_exprs.

2024-05-09 Thread Nathan Bossart
Fix privilege checks in pg_stats_ext and pg_stats_ext_exprs.

The catalog view pg_stats_ext fails to consider privileges for
expression statistics.  The catalog view pg_stats_ext_exprs fails
to consider privileges and row-level security policies.  To fix,
restrict the data in these views to table owners or roles that
inherit privileges of the table owner.  It may be possible to apply
less restrictive privilege checks in some cases, but that is left
as a future exercise.  Furthermore, for pg_stats_ext_exprs, do not
return data for tables with row-level security enabled, as is
already done for pg_stats_ext.

On the back-branches, a fix-CVE-2024-4317.sql script is provided
that will install into the "share" directory.  This file can be
used to apply the fix to existing clusters.

Bumps catversion on 'master' branch only.

Reported-by: Lukas Fittl
Reviewed-by: Noah Misch, Tomas Vondra, Tom Lane
Security: CVE-2024-4317
Backpatch-through: 14

Branch
--
REL_16_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/2485a85e96db137f7962a2e702b99869957f0990

Modified Files
--
doc/src/sgml/catalogs.sgml|   3 +-
doc/src/sgml/system-views.sgml|   4 +-
src/backend/catalog/Makefile  |   3 +-
src/backend/catalog/fix-CVE-2024-4317.sql | 117 ++
src/backend/catalog/meson.build   |   1 +
src/backend/catalog/system_views.sql  |  11 +--
src/test/regress/expected/rules.out   |   8 +-
src/test/regress/expected/stats_ext.out   |  43 +++
src/test/regress/sql/stats_ext.sql|  27 +++
9 files changed, 200 insertions(+), 17 deletions(-)



pgsql: Fix privilege checks in pg_stats_ext and pg_stats_ext_exprs.

2024-05-09 Thread Nathan Bossart
Fix privilege checks in pg_stats_ext and pg_stats_ext_exprs.

The catalog view pg_stats_ext fails to consider privileges for
expression statistics.  The catalog view pg_stats_ext_exprs fails
to consider privileges and row-level security policies.  To fix,
restrict the data in these views to table owners or roles that
inherit privileges of the table owner.  It may be possible to apply
less restrictive privilege checks in some cases, but that is left
as a future exercise.  Furthermore, for pg_stats_ext_exprs, do not
return data for tables with row-level security enabled, as is
already done for pg_stats_ext.

On the back-branches, a fix-CVE-2024-4317.sql script is provided
that will install into the "share" directory.  This file can be
used to apply the fix to existing clusters.

Bumps catversion on 'master' branch only.

Reported-by: Lukas Fittl
Reviewed-by: Noah Misch, Tomas Vondra, Tom Lane
Security: CVE-2024-4317
Backpatch-through: 14

Branch
--
master

Details
---
https://git.postgresql.org/pg/commitdiff/521a7156ab47623e299855dd04a2a4ea3ad71afe

Modified Files
--
doc/src/sgml/catalogs.sgml  |  3 +--
doc/src/sgml/system-views.sgml  |  4 +--
src/backend/catalog/system_views.sql| 11 +++--
src/include/catalog/catversion.h|  2 +-
src/test/regress/expected/rules.out |  8 +++---
src/test/regress/expected/stats_ext.out | 43 +
src/test/regress/sql/stats_ext.sql  | 27 +
7 files changed, 81 insertions(+), 17 deletions(-)



pgsql: Fix privilege checks in pg_stats_ext and pg_stats_ext_exprs.

2024-05-09 Thread Nathan Bossart
Fix privilege checks in pg_stats_ext and pg_stats_ext_exprs.

The catalog view pg_stats_ext fails to consider privileges for
expression statistics.  The catalog view pg_stats_ext_exprs fails
to consider privileges and row-level security policies.  To fix,
restrict the data in these views to table owners or roles that
inherit privileges of the table owner.  It may be possible to apply
less restrictive privilege checks in some cases, but that is left
as a future exercise.  Furthermore, for pg_stats_ext_exprs, do not
return data for tables with row-level security enabled, as is
already done for pg_stats_ext.

On the back-branches, a fix-CVE-2024-4317.sql script is provided
that will install into the "share" directory.  This file can be
used to apply the fix to existing clusters.

Bumps catversion on 'master' branch only.

Reported-by: Lukas Fittl
Reviewed-by: Noah Misch, Tomas Vondra, Tom Lane
Security: CVE-2024-4317
Backpatch-through: 14

Branch
--
REL_15_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/9cc2b62894de6a8b3d78d20bcd1a6647a7553a6c

Modified Files
--
doc/src/sgml/catalogs.sgml|   3 +-
doc/src/sgml/system-views.sgml|   4 +-
src/backend/catalog/Makefile  |   3 +-
src/backend/catalog/fix-CVE-2024-4317.sql | 117 ++
src/backend/catalog/system_views.sql  |  11 +--
src/test/regress/expected/rules.out   |   8 +-
src/test/regress/expected/stats_ext.out   |  43 +++
src/test/regress/sql/stats_ext.sql|  27 +++
8 files changed, 199 insertions(+), 17 deletions(-)



pgsql: Fix privilege checks in pg_stats_ext and pg_stats_ext_exprs.

2024-05-09 Thread Nathan Bossart
Fix privilege checks in pg_stats_ext and pg_stats_ext_exprs.

The catalog view pg_stats_ext fails to consider privileges for
expression statistics.  The catalog view pg_stats_ext_exprs fails
to consider privileges and row-level security policies.  To fix,
restrict the data in these views to table owners or roles that
inherit privileges of the table owner.  It may be possible to apply
less restrictive privilege checks in some cases, but that is left
as a future exercise.  Furthermore, for pg_stats_ext_exprs, do not
return data for tables with row-level security enabled, as is
already done for pg_stats_ext.

On the back-branches, a fix-CVE-2024-4317.sql script is provided
that will install into the "share" directory.  This file can be
used to apply the fix to existing clusters.

Bumps catversion on 'master' branch only.

Reported-by: Lukas Fittl
Reviewed-by: Noah Misch, Tomas Vondra, Tom Lane
Security: CVE-2024-4317
Backpatch-through: 14

Branch
--
REL_14_STABLE

Details
---
https://git.postgresql.org/pg/commitdiff/c3425383ba67ae6ecaddc8896025a91faadb430a

Modified Files
--
doc/src/sgml/catalogs.sgml|   7 +-
src/backend/catalog/Makefile  |   3 +-
src/backend/catalog/fix-CVE-2024-4317.sql | 115 ++
src/backend/catalog/system_views.sql  |  11 ++-
src/test/regress/expected/rules.out   |   8 +--
src/test/regress/expected/stats_ext.out   |  43 +++
src/test/regress/sql/stats_ext.sql|  27 +++
7 files changed, 197 insertions(+), 17 deletions(-)



Flink Kubernetes Operator - How can I use a jar that is hosted on a private maven repo for a FlinkSessionJob?

2024-05-09 Thread Nathan T. A. Lewis
Hello,

I am trying to run a Flink Session Job with a jar that is hosted on a maven 
repository in Google's Artifact Registry.

The first thing I tried was to just specify the `jarURI` directly:

apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
  name: myJobName
spec:
  deploymentName: flink-session
  job:
jarURI: "https://mylocation-maven.pkg.dev/myGCPproject/myreponame/path/to/the.jar"
entryClass: myentryclass
parallelism: 1
upgradeMode: savepoint

But, since it is a private repository, it unsurprisingly resulted in:

java.io.IOException: Server returned HTTP response code: 401 for URL: 
https://mylocation-maven.pkg.dev/myGCPproject/myreponame/path/to/the.jar

I didn't see anywhere in the FlinkSessionJob definition to put a bearer token 
and doubt it would be a good idea security-wise to store one there anyway, so I 
instead looked into using `initContainers` on the FlinkDeployment like in this 
example: 
https://github.com/apache/flink-kubernetes-operator/blob/main/examples/pod-template.yaml

apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-session
spec:
  flinkVersion: v1_18
  flinkConfiguration:
taskmanager.numberOfTaskSlots: "2"
state.checkpoints.dir: mycheckpointsdir
state.savepoints.dir: mysavepointsdir
state.backend: rocksdb
state.backend.rocksdb.timer-service.factory: ROCKSDB
state.backend.incremental: "true"
execution.checkpointing.interval: "1m"
  serviceAccount: flink
  jobManager:
resource:
  memory: "2048m"
  cpu: 0.5
  taskManager:
resource:
  memory: "2048m"
  cpu: 1
  podTemplate:
  spec:
initContainers:
  - name: gcloud
image: google/cloud-sdk:latest
volumeMounts:
  - mountPath: /opt/flink/downloads
name: downloads
command: ["sh", "-c", "gcloud artifacts files download --project=myGCPproject --repository=myreponame --location=mylocation --destination=/opt/flink/downloads path/to/the.jar"]
containers:
  - name: flink-main-container
volumeMounts:
  - mountPath: /opt/flink/downloads
name: downloads
volumes:
  - name: downloads
emptyDir: { }

This worked well for getting the jar onto the jobManager pod, but it looks like 
the FlinkSessionJob actually looks for the jar on the pod of the Flink 
Kubernetes Operator itself. So in the end, the job still isn't being run.

As a workaround for now, I'm planning to move my jar from Maven to a Google 
Cloud Storage bucket and then add the gcs filesystem plugin to the operator 
image. What I'd love to know is if I've overlooked some already implemented way 
to connect to a private maven repository for a FlinkSessionJob. I suppose in a 
worst case, we could write a filesystem plugin that handles the 
`artifactrepository://` scheme and uses Google's java libraries to handle 
authentication and download of the artifact. Again, I'm kind of hoping 
something already exists though, rather than having to build something new.


Best regards,
Nathan T.A. Lewis


Re: Missing bytes on serial port

2024-05-09 Thread Nathan Hartman
On Thu, May 9, 2024 at 3:31 AM Mark Stevens
 wrote:

> So we have a two chip board:
>
> * STM32 running NuttX (v7.5 I believe)
> * ESP32 acting as a coprocessor running custom firmware
>
> The STM32 runs the show and the ESP32 provides services to the STM32 code.
>
> In normal run mode, NuttX has a kernel thread that reads data from the
> ESP32 over UART (/dev/ttyS2) and then processes the data.  This is working
> fine as is.
>
> The UART is configured to use a 512 byte buffer.
>
> Every now and then we want to upload new firmware to the ESP32.  This is
> done by a user mode thread and it goes through the following steps:
>
> * Signals to the kernel thread that it should close the UART and exit.
> * Opens the serial port
> * Starts the programming sequence
>
> If we try to do this then the user mode thread misses bytes in the byte
> stream.
>
> Kernel mode thread only:
> When the system starts then this thread works fine, no bytes are lost.
>
> User mode thread only:
> If we do not start the kernel mode thread then the programming works fine,
> no bytes are lost.
>
> Both threads:
> Starting the kernel works fine, we do not miss any bytes.  The kernel
> thread can be stopped, the UART is closed correctly.
>
> The user thread can open the serial port correctly after the kernel thread
> has stopped but now it misses bytes.
>
> So we know the individual threads work as expected when used on their own
> but not together.
>
> Has anyone seen this, or does anyone have advice on how we can resolve the
> issue?
>
> Regards,
> Mark
> ______
> mark.stev...@wildernesslabs.co
>
>
>
>

Which bytes are missing? Are they the ones at the beginning of the message,
or the end?

Nathan


Re: ALTER EXTENSION SET SCHEMA versus dependent types

2024-05-08 Thread Nathan Bossart
On Wed, May 08, 2024 at 07:57:55PM -0400, Tom Lane wrote:
> Nathan Bossart  writes:
>> Agreed.  Another option could be to just annotate the arguments with the
>> parameter names.
> 
> At the call sites you mean?  Sure, I can do that.

Yes.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: ALTER EXTENSION SET SCHEMA versus dependent types

2024-05-08 Thread Nathan Bossart
On Wed, May 08, 2024 at 07:42:18PM -0400, Tom Lane wrote:
> Nathan Bossart  writes:
>> Looks reasonable to me.  The added test coverage seems particularly
>> valuable.  If I really wanted to nitpick, I might complain about the three
>> consecutive Boolean parameters for AlterTypeNamespaceInternal(), which
>> makes lines like
> 
>> +AlterTypeNamespaceInternal(arrayOid, nspOid, true, false, true,
>> +   objsMoved);
> 
>> difficult to interpret.  But that's not necessarily the fault of this patch
>> and probably needn't block it.
> 
> I considered merging ignoreDependent and errorOnTableType into a
> single 3-valued enum, but didn't think it was worth the trouble
> given the very small number of callers; also it wasn't quite clear
> how to map that to AlterTypeNamespace_oid's API.  Perhaps a little
> more thought is appropriate though.
> 
> One positive reason for increasing the number of parameters is that
> that will be a clear API break for any outside callers, if there
> are any.  If I just replace a bool with an enum, such callers might
> or might not get any indication that they need to take a fresh
> look.

Agreed.  Another option could be to just annotate the arguments with the
parameter names.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: ALTER EXTENSION SET SCHEMA versus dependent types

2024-05-08 Thread Nathan Bossart
On Wed, May 08, 2024 at 05:52:31PM -0400, Tom Lane wrote:
> The attached patch fixes up the code and adds a new test to
> the test_extensions module.  The fix basically is to skip the
> pg_depend entries for dependent types, assuming that they'll
> get dealt with when we process their parent objects.

Looks reasonable to me.  The added test coverage seems particularly
valuable.  If I really wanted to nitpick, I might complain about the three
consecutive Boolean parameters for AlterTypeNamespaceInternal(), which
makes lines like

+   AlterTypeNamespaceInternal(arrayOid, nspOid, true, false, true,
+  objsMoved);

difficult to interpret.  But that's not necessarily the fault of this patch
and probably needn't block it.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




[LincolnTalk] Vintage Guitar Magazines

2024-05-08 Thread NATHAN PARKE via Lincoln
Free. Collection of Vintage Guitar magazines. 2009-2016. 96 issues, no gaps.
Call or text Nat, 781-983-7017.

Nathan G Parke
111 S Great Rd
Lincoln, MA 01773
(781) 983-7017

-- 
The LincolnTalk mailing list.
To post, send mail to Lincoln@lincolntalk.org.
Browse the archives at https://pairlist9.pair.net/mailman/private/lincoln/.
Change your subscription settings at 
https://pairlist9.pair.net/mailman/listinfo/lincoln.



Re: add --no-sync to pg_upgrade's calls to pg_dump and pg_dumpall

2024-05-08 Thread Nathan Bossart
On Wed, May 08, 2024 at 10:09:46AM +0200, Peter Eisentraut wrote:
> On 03.05.24 19:13, Nathan Bossart wrote:
>> This is likely small potatoes compared to some of the other
>> pg_upgrade-related improvements I've proposed [0] [1] or plan to propose,
>> but this is easy enough, and I already wrote the patch, so here it is.
>> AFAICT there's no reason to bother syncing these dump files to disk.  If
>> someone pulls the plug during pg_upgrade, it's not like you can resume
>> pg_upgrade from where it left off.  Also, I think we skipped syncing before
>> v10, anyway, as the --no-sync flag was only added in commit 96a7128, which
>> added the code to sync dump files, too.
> 
> Looks good to me.

Thanks for looking.  I noticed that the version check is unnecessary since
we always use the new binary's pg_dump[all], so I removed that in v2.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
>From 265a999ed65bf56491f76ae013f705ab64491486 Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Fri, 3 May 2024 10:35:21 -0500
Subject: [PATCH v2 1/1] add --no-sync to pg_upgrade's calls to pg_dump[all]

---
 src/bin/pg_upgrade/dump.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/bin/pg_upgrade/dump.c b/src/bin/pg_upgrade/dump.c
index 29fb45b928..8345f55be8 100644
--- a/src/bin/pg_upgrade/dump.c
+++ b/src/bin/pg_upgrade/dump.c
@@ -22,7 +22,7 @@ generate_old_dump(void)
 	/* run new pg_dumpall binary for globals */
 	exec_prog(UTILITY_LOG_FILE, NULL, true, true,
 			  "\"%s/pg_dumpall\" %s --globals-only --quote-all-identifiers "
-			  "--binary-upgrade %s -f \"%s/%s\"",
+			  "--binary-upgrade %s --no-sync -f \"%s/%s\"",
			  new_cluster.bindir, cluster_conn_opts(&old_cluster),
 			  log_opts.verbose ? "--verbose" : "",
 			  log_opts.dumpdir,
@@ -53,7 +53,7 @@ generate_old_dump(void)
 
 		parallel_exec_prog(log_file_name, NULL,
 		   "\"%s/pg_dump\" %s --schema-only --quote-all-identifiers "
-		   "--binary-upgrade --format=custom %s --file=\"%s/%s\" %s",
+		   "--binary-upgrade --format=custom %s --no-sync --file=\"%s/%s\" %s",
		   new_cluster.bindir, cluster_conn_opts(&old_cluster),
 		   log_opts.verbose ? "--verbose" : "",
 		   log_opts.dumpdir,
-- 
2.25.1



[Bug 2060976] Debdiff v2

2024-05-08 Thread Nathan Teodosio
Hi Paride, thanks for the review.

I've addressed your comments, except for number 6. For some reason that one
failed when I tried it in the past, but if necessary I can try again and
report back with the resulting error, as I don't remember what it was.


** Patch added: "freerdp2-v2.diff"
   
https://bugs.launchpad.net/bugs/2060976/+attachment/5776303/+files/freerdp2-v2.diff

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2060976

Title:
  Create autopkgtest

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/freerdp2/+bug/2060976/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

Re: New GUC autovacuum_max_threshold ?

2024-05-07 Thread Nathan Bossart
On Tue, May 07, 2024 at 10:31:00AM -0400, Robert Haas wrote:
> On Wed, May 1, 2024 at 10:03 PM David Rowley  wrote:
>> I think we need at least 1a) before we can give autovacuum more work
>> to do, especially if we do something like multiply its workload by
>> 1024x, per your comment above.
> 
> I guess I view it differently. It seems to me that right now, we're
> not vacuuming large tables often enough. We should fix that,
> independently of anything else. If the result is that small and medium
> sized tables get vacuumed less often, then that just means there were
> never enough resources to go around in the first place. We haven't
> taken a system that was working fine and broken it: we've just moved
> the problem from one category of tables (the big ones) to a different
> category of tables. If the user wants to solve that problem, they need
> to bump up the cost limit or add hardware. I don't see that we have
> any particular reason to believe such users will be worse off on
> average than they are today. On the other hand, users who do have a
> sufficiently high cost limit and enough hardware will be better off,
> because we'll start doing all the vacuuming work that needs to be done
> instead of only some of it.
> 
> Now, if we start vacuuming any class of table whatsoever 1024x as
> often as we do today, we are going to lose. But that would still be
> true even if we did everything on your list. Large tables need to be
> vacuumed more frequently than we now do, but not THAT much more
> frequently. Any system that produces that result is just using a wrong
> algorithm, or wrong constants, or something. Even if all the necessary
> resources are available, nobody is going to thank us for vacuuming
> gigantic tables in a tight loop. The problem with such a large
> increase is not that we don't have prioritization, but that such a
> large increase is fundamentally the wrong thing to do. On the other
> hand, I think a more modest increase is the right thing to do, and I
> think it's the right thing to do whether we have prioritization or
> not.

This is about how I feel, too.  In any case, I +1'd a higher default
because I think we need to be pretty conservative with these changes, at
least until we have a better prioritization strategy.  While folks may opt
to set this value super low, I think that's more likely to lead to some
interesting secondary effects.  If the default is high, hopefully these
secondary effects will be minimized or avoided.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: pg_sequence_last_value() for unlogged sequences on standbys

2024-05-07 Thread Nathan Bossart
On Tue, May 07, 2024 at 03:02:01PM -0400, Tom Lane wrote:
> Nathan Bossart  writes:
>> On Tue, May 07, 2024 at 01:44:16PM -0400, Tom Lane wrote:
>>> +1 to include that, as it offers a defense if someone invokes this
>>> function directly.  In HEAD we could then rip out the test in the
>>> view.
> 
>> I apologize for belaboring this point, but I don't see how we would be
>> comfortable removing that check unless we are okay with other sessions'
>> temporary sequences appearing in the view, albeit with a NULL last_value.
> 
> Oh!  You're right, I'm wrong.  I was looking at the CASE filter, which
> we could get rid of -- but the "WHERE NOT pg_is_other_temp_schema(N.oid)"
> part has to stay.

Okay, phew.  We can still do something like v3-0002 for v18.  I'll give
Michael a chance to comment on 0001 before committing/back-patching that
one.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
>From 2a37834699587eef18b50bf8d58723790bbcdde7 Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Tue, 30 Apr 2024 20:54:51 -0500
Subject: [PATCH v3 1/2] Fix pg_sequence_last_value() for non-permanent
 sequences on standbys.

---
 src/backend/commands/sequence.c   | 22 --
 src/test/recovery/t/001_stream_rep.pl |  8 
 2 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 46103561c3..9d7468d7bb 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -1780,8 +1780,8 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 	Buffer		buf;
 	HeapTupleData seqtuple;
 	Form_pg_sequence_data seq;
-	bool		is_called;
-	int64		result;
+	bool		is_called = false;
+	int64		result = 0;
 
 	/* open and lock sequence */
 	init_sequence(relid, &elm, &seqrel);
@@ -1792,12 +1792,22 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
  errmsg("permission denied for sequence %s",
  						RelationGetRelationName(seqrel))));
 
-	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	/*
+	 * For the benefit of the pg_sequences system view, we return NULL for
+	 * temporary and unlogged sequences on standbys instead of throwing an
+	 * error.  We also always return NULL for other sessions' temporary
+	 * sequences.
+	 */
+	if ((RelationIsPermanent(seqrel) || !RecoveryInProgress()) &&
+		!RELATION_IS_OTHER_TEMP(seqrel))
+	{
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
 
-	is_called = seq->is_called;
-	result = seq->last_value;
+		is_called = seq->is_called;
+		result = seq->last_value;
 
-	UnlockReleaseBuffer(buf);
+		UnlockReleaseBuffer(buf);
+	}
 	sequence_close(seqrel, NoLock);
 
 	if (is_called)
diff --git a/src/test/recovery/t/001_stream_rep.pl b/src/test/recovery/t/001_stream_rep.pl
index 5311ade509..4c698b5ce1 100644
--- a/src/test/recovery/t/001_stream_rep.pl
+++ b/src/test/recovery/t/001_stream_rep.pl
@@ -95,6 +95,14 @@ $result = $node_standby_2->safe_psql('postgres', "SELECT * FROM seq1");
 print "standby 2: $result\n";
 is($result, qq(33|0|t), 'check streamed sequence content on standby 2');
 
+# Check pg_sequence_last_value() returns NULL for unlogged sequence on standby
+$node_primary->safe_psql('postgres',
+	"CREATE UNLOGGED SEQUENCE ulseq; SELECT nextval('ulseq')");
+$node_primary->wait_for_replay_catchup($node_standby_1);
+is($node_standby_1->safe_psql('postgres',
+	"SELECT pg_sequence_last_value('ulseq'::regclass) IS NULL"),
+	't', 'pg_sequence_last_value() on unlogged sequence on standby 1');
+
 # Check that only READ-only queries can run on standbys
 is($node_standby_1->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),
 	3, 'read-only queries on standby 1');
-- 
2.25.1

>From b96d1f21f6144640561360c84b361f569a2edc48 Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Tue, 7 May 2024 14:35:34 -0500
Subject: [PATCH v3 2/2] Simplify pg_sequences a bit.

XXX: NEEDS CATVERSION BUMP
---
 src/backend/catalog/system_views.sql |  6 +-
 src/backend/commands/sequence.c  | 15 +--
 src/test/regress/expected/rules.out  |  5 +
 3 files changed, 7 insertions(+), 19 deletions(-)

diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 53047cab5f..b32e5c3170 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -176,11 +176,7 @@ CREATE VIEW pg_sequences AS
 S.seqincrement AS increment_by,
 S.seqcycle AS cycle,
 S.seqcache AS cache_size,
-CASE
-WHEN has_sequence_privilege(C.oid, 'SELECT,USAGE'::text)
-THEN pg_sequence_last_value(C.oid)
-ELSE NULL
-END AS last_value
+pg_sequence_last_value(C.oid) AS last_value
 FROM pg_sequence S JOIN pg_class C ON (C.oid = S.seqrelid)
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
 WHERE NOT pg_is_other_temp_schema(N.oid)
diff --gi

[Numpy-discussion] Re: PR - can I get a new review?

2024-05-07 Thread Nathan
I think most of the build failures you’re seeing would be fixed by merging
with or rebasing on the latest main branch.

Note that there is currently an issue with some of the windows CI runners,
so you’ll see failures related to our spin configuration failing to handle
a gcov argument that was added in spin 0.9 released a couple days ago.

On Mon, May 6, 2024 at 8:48 PM Jake S.  wrote:

> Hi community,
>
> PR 26081  is about making
> numpy's ShapeType covariant and bound to a tuple of ints.  The community
> has requested this occasionally in issue 16544
> .  I'm reaching out via the
> listserv because it's been a few months, and I don't want it to get too
> stale.  I could really use some help pushing it over the finish line.
>
> Summary:
> Two numpy reviewers and one interested community member reviewed the PR
> and asked for a type alias akin to npt.NDArray that allowed shape.  I
> worked through the issues with TypeVarTuple and made npt.Array, and it was
> fragile, but passing CI.  After a few months passed, I returned to fix the
> fragility in the hopes of getting some more attention, but now it fails CI
> in some odd builds (passes the mypy bit).  I have no idea how to get these
> to pass, as they appear unrelated to anything I've worked on (OpenBLAS on
> windows, freeBSD...?).
>
> Thanks,
> Jake
> ___
> NumPy-Discussion mailing list -- numpy-discussion@python.org
> To unsubscribe send an email to numpy-discussion-le...@python.org
> https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
> Member address: nathan12...@gmail.com
>
___
NumPy-Discussion mailing list -- numpy-discussion@python.org
To unsubscribe send an email to numpy-discussion-le...@python.org
https://mail.python.org/mailman3/lists/numpy-discussion.python.org/
Member address: arch...@mail-archive.com


Re: pg_sequence_last_value() for unlogged sequences on standbys

2024-05-07 Thread Nathan Bossart
On Tue, May 07, 2024 at 01:44:16PM -0400, Tom Lane wrote:
> Nathan Bossart  writes:
>>  char relpersist = seqrel->rd_rel->relpersistence;
> 
>>  if (relpersist == RELPERSISTENCE_PERMANENT ||
>>  (relpersist == RELPERSISTENCE_UNLOGGED && !RecoveryInProgress()) ||
>>  !RELATION_IS_OTHER_TEMP(seqrel))
>>  {
>>  ...
>>  }
> 
> Should be AND'ing not OR'ing the !TEMP condition, no?  Also I liked
> your other formulation of the persistence check better.

Yes, that's a silly mistake on my part.  I changed it to

if ((RelationIsPermanent(seqrel) || !RecoveryInProgress()) &&
!RELATION_IS_OTHER_TEMP(seqrel))
{
...
}

in the attached v2.

>> I personally think that would be fine to back-patch since pg_sequences
>> already filters it out anyway.
> 
> +1 to include that, as it offers a defense if someone invokes this
> function directly.  In HEAD we could then rip out the test in the
> view.

I apologize for belaboring this point, but I don't see how we would be
comfortable removing that check unless we are okay with other sessions'
temporary sequences appearing in the view, albeit with a NULL last_value.
This check lives in the WHERE clause today, so if we remove it, we'd no
longer exclude those sequences.  Michael and you seem united on this, so I
have a sinking feeling that I'm missing something terribly obvious.

> BTW, I think you also need something like
> 
> - int64   result;
> + int64   result = 0;
> 
> Your compiler may not complain about result being possibly
> uninitialized, but IME others will.

Good call.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
>From 974f56896add92983b664c11fd25010ef29ac42c Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Tue, 30 Apr 2024 20:54:51 -0500
Subject: [PATCH v2 1/1] Fix pg_sequence_last_value() for non-permanent
 sequences on standbys.

---
 src/backend/commands/sequence.c   | 22 --
 src/test/recovery/t/001_stream_rep.pl |  8 
 2 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c
index 46103561c3..9d7468d7bb 100644
--- a/src/backend/commands/sequence.c
+++ b/src/backend/commands/sequence.c
@@ -1780,8 +1780,8 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
 	Buffer		buf;
 	HeapTupleData seqtuple;
 	Form_pg_sequence_data seq;
-	bool		is_called;
-	int64		result;
+	bool		is_called = false;
+	int64		result = 0;
 
 	/* open and lock sequence */
 	init_sequence(relid, &elm, &seqrel);
@@ -1792,12 +1792,22 @@ pg_sequence_last_value(PG_FUNCTION_ARGS)
  errmsg("permission denied for sequence %s",
  						RelationGetRelationName(seqrel))));
 
-	seq = read_seq_tuple(seqrel, &buf, &seqtuple);
+	/*
+	 * For the benefit of the pg_sequences system view, we return NULL for
+	 * temporary and unlogged sequences on standbys instead of throwing an
+	 * error.  We also always return NULL for other sessions' temporary
+	 * sequences.
+	 */
+	if ((RelationIsPermanent(seqrel) || !RecoveryInProgress()) &&
+		!RELATION_IS_OTHER_TEMP(seqrel))
+	{
+		seq = read_seq_tuple(seqrel, &buf, &seqtuple);
 
-	is_called = seq->is_called;
-	result = seq->last_value;
+		is_called = seq->is_called;
+		result = seq->last_value;
 
-	UnlockReleaseBuffer(buf);
+		UnlockReleaseBuffer(buf);
+	}
 	sequence_close(seqrel, NoLock);
 
 	if (is_called)
diff --git a/src/test/recovery/t/001_stream_rep.pl b/src/test/recovery/t/001_stream_rep.pl
index 5311ade509..4c698b5ce1 100644
--- a/src/test/recovery/t/001_stream_rep.pl
+++ b/src/test/recovery/t/001_stream_rep.pl
@@ -95,6 +95,14 @@ $result = $node_standby_2->safe_psql('postgres', "SELECT * FROM seq1");
 print "standby 2: $result\n";
 is($result, qq(33|0|t), 'check streamed sequence content on standby 2');
 
+# Check pg_sequence_last_value() returns NULL for unlogged sequence on standby
+$node_primary->safe_psql('postgres',
+	"CREATE UNLOGGED SEQUENCE ulseq; SELECT nextval('ulseq')");
+$node_primary->wait_for_replay_catchup($node_standby_1);
+is($node_standby_1->safe_psql('postgres',
+	"SELECT pg_sequence_last_value('ulseq'::regclass) IS NULL"),
+	't', 'pg_sequence_last_value() on unlogged sequence on standby 1');
+
 # Check that only READ-only queries can run on standbys
 is($node_standby_1->psql('postgres', 'INSERT INTO tab_int VALUES (1)'),
 	3, 'read-only queries on standby 1');
-- 
2.25.1



Re: pg_sequence_last_value() for unlogged sequences on standbys

2024-05-07 Thread Nathan Bossart
On Sat, May 04, 2024 at 06:45:32PM +0900, Michael Paquier wrote:
> On Fri, May 03, 2024 at 05:22:06PM -0400, Tom Lane wrote:
>> Nathan Bossart  writes:
>>> IIUC this would cause other sessions' temporary sequences to appear in the
>>> view.  Is that desirable?
>> 
>> I assume Michael meant to move the test into the C code, not drop
>> it entirely --- I agree we don't want that.
> 
> Yup.  I meant to remove it from the script and keep only something in
> the C code to avoid the duplication, but you're right that the temp
> sequences would create more noise than now.
> 
>> Moving it has some attraction, but pg_is_other_temp_schema() is also
>> used in a lot of information_schema views, so we couldn't get rid of
>> it without a lot of further hacking.  Not sure we want to relocate
>> that filter responsibility in just one view.
> 
> Okay.

Okay, so are we okay to back-patch something like v1?  Or should we also
return NULL for other sessions' temporary schemas on primaries?  That would
change the condition to something like

char relpersist = seqrel->rd_rel->relpersistence;

    if (relpersist == RELPERSISTENCE_PERMANENT ||
        (relpersist == RELPERSISTENCE_UNLOGGED && !RecoveryInProgress()) ||
        !RELATION_IS_OTHER_TEMP(seqrel))
{
...
}

I personally think that would be fine to back-patch since pg_sequences
already filters it out anyway.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com




Re: allow changing autovacuum_max_workers without restarting

2024-05-07 Thread Nathan Bossart
On Fri, May 03, 2024 at 12:57:18PM +, Imseih (AWS), Sami wrote:
>> That's true, but using a hard-coded limit means we no longer need to add a
>> new GUC. Always allocating, say, 256 slots might require a few additional
>> kilobytes of shared memory, most of which will go unused, but that seems
>> unlikely to be a problem for the systems that will run Postgres v18.
> 
> I agree with this.

Here's what this might look like.  I chose an upper limit of 1024, which
seems like it "ought to be enough for anybody," at least for now.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com
>From 72e0496294ef0390c77cef8031ae51c1a44ebde8 Mon Sep 17 00:00:00 2001
From: Nathan Bossart 
Date: Tue, 7 May 2024 10:59:24 -0500
Subject: [PATCH v3 1/1] allow changing autovacuum_max_workers without
 restarting

---
 doc/src/sgml/config.sgml  |  3 +-
 doc/src/sgml/runtime.sgml | 15 ---
 src/backend/access/transam/xlog.c |  2 +-
 src/backend/postmaster/autovacuum.c   | 44 ---
 src/backend/postmaster/postmaster.c   |  2 +-
 src/backend/storage/lmgr/proc.c   |  9 ++--
 src/backend/utils/init/postinit.c | 20 ++---
 src/backend/utils/misc/guc_tables.c   |  7 ++-
 src/backend/utils/misc/postgresql.conf.sample |  1 -
 src/include/postmaster/autovacuum.h   |  8 
 src/include/utils/guc_hooks.h |  2 -
 11 files changed, 58 insertions(+), 55 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index e93208b2e6..8e2a1d6902 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -8528,7 +8528,8 @@ COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;

 Specifies the maximum number of autovacuum processes (other than the
 autovacuum launcher) that may be running at any one time.  The default
-is three.  This parameter can only be set at server start.
+is three.  This parameter can only be set in the
+postgresql.conf file or on the server command line.

   
  
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
index 6047b8171d..8a672a8383 100644
--- a/doc/src/sgml/runtime.sgml
+++ b/doc/src/sgml/runtime.sgml
@@ -781,13 +781,13 @@ psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such

 SEMMNI
 Maximum number of semaphore identifiers (i.e., sets)
-at least ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 5) / 16) plus room for other applications
+at least ceil((max_connections + max_wal_senders + max_worker_processes + 1029) / 16) plus room for other applications

 

 SEMMNS
 Maximum number of semaphores system-wide
-ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 5) / 16) * 17 plus room for other applications
+ceil((max_connections + max_wal_senders + max_worker_processes + 1029) / 16) * 17 plus room for other applications

 

@@ -838,7 +838,7 @@ psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such
 When using System V semaphores,
 PostgreSQL uses one semaphore per allowed connection
 (), allowed autovacuum worker process
-() and allowed background
+(1024) and allowed background
 process (), in sets of 16.
 Each such set will
 also contain a 17th semaphore which contains a magic
@@ -846,13 +846,14 @@ psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such
 other applications. The maximum number of semaphores in the system
 is set by SEMMNS, which consequently must be at least
 as high as max_connections plus
-autovacuum_max_workers plus max_wal_senders,
-plus max_worker_processes, plus one extra for each 16
+max_wal_senders,
+plus max_worker_processes, plus 1024 for autovacuum
+worker processes, plus one extra for each 16
 allowed connections plus workers (see the formula in ).  The parameter SEMMNI
 determines the limit on the number of semaphore sets that can
 exist on the system at one time.  Hence this parameter must be at
-least ceil((max_connections + autovacuum_max_workers + max_wal_senders + max_worker_processes + 5) / 16).
+least ceil((max_connections + max_wal_senders + max_worker_processes + 1029) / 16).
 Lowering the number
 of allowed connections is a temporary workaround for failures,
 which are usually confusingly worded No space
@@ -883,7 +884,7 @@ psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such
 When using POSIX semaphores, the number of semaphores needed is the
 same as for System V, that is one semaphore per allowed connection
 (), allowed autovacu

Re: [PATCH] wifi: nl80211: Avoid address calculations via out of bounds array indexing

2024-05-07 Thread Nathan Chancellor
On Tue, May 07, 2024 at 12:46:46PM +0200, Johannes Berg wrote:
> On Thu, 2024-04-25 at 11:13 -0700, Nathan Chancellor wrote:
> > On Wed, Apr 24, 2024 at 03:01:01PM -0700, Kees Cook wrote:
> > > Before request->channels[] can be used, request->n_channels must be set.
> > > Additionally, address calculations for memory after the "channels" array
> > > need to be calculated from the allocation base ("request") rather than
> > > via the first "out of bounds" index of "channels", otherwise run-time
> > > bounds checking will throw a warning.
> > > 
> > > Reported-by: Nathan Chancellor 
> > > Fixes: e3eac9f32ec0 ("wifi: cfg80211: Annotate struct 
> > > cfg80211_scan_request with __counted_by")
> > > Signed-off-by: Kees Cook 
> > 
> > Tested-by: Nathan Chancellor 
> > 
> 
> How do you get this tested? We have the same, and more, bugs in
> cfg80211_scan_6ghz() which I'm fixing right now, but no idea how to
> actually get the checks done?

You'll need a toolchain with __counted_by support, which I believe is
only clang 18+ at this point (I have prebuilts available at [1]), and
CONFIG_UBSAN_BOUNDS enabled, then they should just pop up in dmesg.

[1]: https://mirrors.edge.kernel.org/pub/tools/llvm/

Cheers,
Nathan



[clang] [clang][SPIR-V] Always add convervence intrinsics (PR #88918)

2024-05-07 Thread Nathan Gauër via cfe-commits

Keenuts wrote:

 Hi all, rebased on main, and addressed the comments.
 This commit changes the register order on SPIR-V vs DXIL, which required me 
to fix the mad+lerp intrinsic tests. Should be NFC, just storing the register 
name in a CHECK variable.

https://github.com/llvm/llvm-project/pull/88918
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[clang] [clang][SPIR-V] Always add convervence intrinsics (PR #88918)

2024-05-07 Thread Nathan Gauër via cfe-commits

https://github.com/Keenuts updated 
https://github.com/llvm/llvm-project/pull/88918

From a8bf6fe83a1c145ef81ee30471dc51de1b5354ef Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Nathan=20Gau=C3=ABr?= 
Date: Mon, 15 Apr 2024 17:05:40 +0200
Subject: [PATCH 1/5] [clang][SPIR-V] Always add convervence intrinsics
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

PR #80680 added bits in the codegen to lazily add convergence intrinsics
when required. This logic relied on the LoopStack. The issue is
when parsing the condition, the loopstack doesn't yet reflect the
correct values, as expected since we are not yet in the loop.

However, convergence tokens should sometimes already be available.
The solution which seemed the simplest is to greedily generate the
tokens when we generate SPIR-V.

Fixes #88144

Signed-off-by: Nathan Gauër 
---
 clang/lib/CodeGen/CGBuiltin.cpp   |  88 +
 clang/lib/CodeGen/CGCall.cpp  |   3 +
 clang/lib/CodeGen/CGStmt.cpp  |  94 ++
 clang/lib/CodeGen/CodeGenFunction.cpp |   9 ++
 clang/lib/CodeGen/CodeGenFunction.h   |   9 +-
 .../builtins/RWBuffer-constructor.hlsl|   1 -
 .../CodeGenHLSL/convergence/do.while.hlsl |  90 +
 clang/test/CodeGenHLSL/convergence/for.hlsl   | 121 ++
 clang/test/CodeGenHLSL/convergence/while.hlsl | 119 +
 9 files changed, 445 insertions(+), 89 deletions(-)
 create mode 100644 clang/test/CodeGenHLSL/convergence/do.while.hlsl
 create mode 100644 clang/test/CodeGenHLSL/convergence/for.hlsl
 create mode 100644 clang/test/CodeGenHLSL/convergence/while.hlsl

diff --git a/clang/lib/CodeGen/CGBuiltin.cpp b/clang/lib/CodeGen/CGBuiltin.cpp
index 8e31652f4dabef..fb5904558bbae6 100644
--- a/clang/lib/CodeGen/CGBuiltin.cpp
+++ b/clang/lib/CodeGen/CGBuiltin.cpp
@@ -1141,91 +1141,8 @@ struct BitTest {
   static BitTest decodeBitTestBuiltin(unsigned BuiltinID);
 };
 
-// Returns the first convergence entry/loop/anchor instruction found in |BB|.
-// std::nullptr otherwise.
-llvm::IntrinsicInst *getConvergenceToken(llvm::BasicBlock *BB) {
-  for (auto  : *BB) {
-auto *II = dyn_cast();
-if (II && isConvergenceControlIntrinsic(II->getIntrinsicID()))
-  return II;
-  }
-  return nullptr;
-}
-
 } // namespace
 
-llvm::CallBase *
-CodeGenFunction::addConvergenceControlToken(llvm::CallBase *Input,
-llvm::Value *ParentToken) {
-  llvm::Value *bundleArgs[] = {ParentToken};
-  llvm::OperandBundleDef OB("convergencectrl", bundleArgs);
-  auto Output = llvm::CallBase::addOperandBundle(
-  Input, llvm::LLVMContext::OB_convergencectrl, OB, Input);
-  Input->replaceAllUsesWith(Output);
-  Input->eraseFromParent();
-  return Output;
-}
-
-llvm::IntrinsicInst *
-CodeGenFunction::emitConvergenceLoopToken(llvm::BasicBlock *BB,
-                                          llvm::Value *ParentToken) {
-  CGBuilderTy::InsertPoint IP = Builder.saveIP();
-  Builder.SetInsertPoint(&BB->front());
-  auto CB = Builder.CreateIntrinsic(
-      llvm::Intrinsic::experimental_convergence_loop, {}, {});
-  Builder.restoreIP(IP);
-
-  auto I = addConvergenceControlToken(CB, ParentToken);
-  return cast<llvm::IntrinsicInst>(I);
-}
-
-llvm::IntrinsicInst *
-CodeGenFunction::getOrEmitConvergenceEntryToken(llvm::Function *F) {
-  auto *BB = &F->getEntryBlock();
-  auto *token = getConvergenceToken(BB);
-  if (token)
-    return token;
-
-  // Adding a convergence token requires the function to be marked as
-  // convergent.
-  F->setConvergent();
-
-  CGBuilderTy::InsertPoint IP = Builder.saveIP();
-  Builder.SetInsertPoint(&BB->front());
-  auto I = Builder.CreateIntrinsic(
-      llvm::Intrinsic::experimental_convergence_entry, {}, {});
-  assert(isa<llvm::IntrinsicInst>(I));
-  Builder.restoreIP(IP);
-
-  return cast<llvm::IntrinsicInst>(I);
-}
-
-llvm::IntrinsicInst *
-CodeGenFunction::getOrEmitConvergenceLoopToken(const LoopInfo *LI) {
-  assert(LI != nullptr);
-
-  auto *token = getConvergenceToken(LI->getHeader());
-  if (token)
-    return token;
-
-  llvm::IntrinsicInst *PII =
-      LI->getParent()
-          ? emitConvergenceLoopToken(
-                LI->getHeader(), getOrEmitConvergenceLoopToken(LI->getParent()))
-          : getOrEmitConvergenceEntryToken(LI->getHeader()->getParent());
-
-  return emitConvergenceLoopToken(LI->getHeader(), PII);
-}
-
-llvm::CallBase *
-CodeGenFunction::addControlledConvergenceToken(llvm::CallBase *Input) {
-  llvm::Value *ParentToken =
-      LoopStack.hasInfo()
-          ? getOrEmitConvergenceLoopToken(&LoopStack.getInfo())
-          : getOrEmitConvergenceEntryToken(Input->getFunction());
-  return addConvergenceControlToken(Input, ParentToken);
-}
-
 BitTest BitTest::decodeBitTestBuiltin(unsigned BuiltinID) {
   switch (BuiltinID) {
 // Main portable variants.
@@ -18400,12 +18317,9 @@ Value *CodeGenFunction::EmitHLSLBuiltinExpr(unsigned BuiltinID,
   

[Tails-dev] Security advisor about Claws Mail no longer applies

2024-05-07 Thread Nathan Teodósio
Hi,

I stumbled upon the "Claws Mail leaks plaintext of encrypted
emails to IMAP server" article[1].

The corresponding bug task[2] is marked RESOLVED FIXED, so I
assume the article no longer applies, so I suggest updating the
article to make this immediately clear (maybe an
information box at the beginning?).

Cheers,
Nathan

[1] https://tails.net/security/claws_mail_leaks_plaintext_to_imap/index.en.html
[2] https://www.thewildbeast.co.uk/claws-mail/bugzilla/show_bug.cgi?id=2965
___
Tails-dev mailing list
Tails-dev@boum.org
https://www.autistici.org/mailman/listinfo/tails-dev
To unsubscribe from this list, send an empty email to 
tails-dev-unsubscr...@boum.org.


[clang] [clang][SPIR-V] Always add convergence intrinsics (PR #88918)

2024-05-07 Thread Nathan Gauër via cfe-commits


@@ -3101,3 +3130,68 @@ CodeGenFunction::GenerateCapturedStmtFunction(const CapturedStmt &S) {
 
   return F;
 }
+
+namespace {
+// Returns the first convergence entry/loop/anchor instruction found in |BB|.
+// std::nullptr otherwise.
+llvm::IntrinsicInst *getConvergenceToken(llvm::BasicBlock *BB) {
+  for (auto &I : *BB) {
+    auto *II = dyn_cast<llvm::IntrinsicInst>(&I);
+    if (II && llvm::isConvergenceControlIntrinsic(II->getIntrinsicID()))
+      return II;
+  }
+  return nullptr;
+}
+
+} // namespace
+
+llvm::CallBase *
+CodeGenFunction::addConvergenceControlToken(llvm::CallBase *Input,
+llvm::Value *ParentToken) {
+  llvm::Value *bundleArgs[] = {ParentToken};
+  llvm::OperandBundleDef OB("convergencectrl", bundleArgs);
+  auto Output = llvm::CallBase::addOperandBundle(
+  Input, llvm::LLVMContext::OB_convergencectrl, OB, Input);
+  Input->replaceAllUsesWith(Output);
+  Input->eraseFromParent();
+  return Output;
+}
+
+llvm::IntrinsicInst *
+CodeGenFunction::emitConvergenceLoopToken(llvm::BasicBlock *BB,
+                                          llvm::Value *ParentToken) {
+  CGBuilderTy::InsertPoint IP = Builder.saveIP();
+
+  if (BB->empty())
+    Builder.SetInsertPoint(BB);
+  else
+    Builder.SetInsertPoint(&BB->front());
+
+  auto CB = Builder.CreateIntrinsic(
+      llvm::Intrinsic::experimental_convergence_loop, {}, {});
+  Builder.restoreIP(IP);
+
+  auto I = addConvergenceControlToken(CB, ParentToken);

Keenuts wrote:

Right, replaced the auto usage

https://github.com/llvm/llvm-project/pull/88918
___
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits


[clang] [clang][SPIR-V] Always add convergence intrinsics (PR #88918)

2024-05-07 Thread Nathan Gauër via cfe-commits

https://github.com/Keenuts updated 
https://github.com/llvm/llvm-project/pull/88918

From 94d76dcdfac88d1d50fe705406c0280c33766e15 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Nathan=20Gau=C3=ABr?= 
Date: Mon, 15 Apr 2024 17:05:40 +0200
Subject: [PATCH 1/4] [clang][SPIR-V] Always add convergence intrinsics
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

PR #80680 added bits in the codegen to lazily add convergence intrinsics
when required. This logic relied on the LoopStack. The issue is that
when parsing the condition, the LoopStack doesn't yet reflect the
correct values, which is expected since we are not yet in the loop.

However, convergence tokens should sometimes already be available.
The solution which seemed the simplest is to greedily generate the
tokens when we generate SPIR-V.

Fixes #88144
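The greedy get-or-create behavior described above can be sketched with a toy model. This is an illustration only: the real code works on llvm::BasicBlock and llvm::IntrinsicInst, and the Block class and ("entry", parent) token tuples below are invented for this sketch, not clang's types.

```python
# Toy model of token lookup/creation: reuse the first convergence token
# found in a block, otherwise insert a fresh one at the block's head.
# (Invented types; the real helpers are getConvergenceToken and
# getOrEmitConvergenceEntryToken in CodeGenFunction.)

class Block:
    def __init__(self):
        self.instructions = []  # tokens are ("entry"|"loop"|"anchor", parent)

def get_convergence_token(block):
    """Return the first convergence token in the block, else None."""
    for inst in block.instructions:
        if inst[0] in ("entry", "loop", "anchor"):
            return inst
    return None

def get_or_emit_entry_token(block):
    """Greedy get-or-create: emitting twice must yield the same token."""
    token = get_convergence_token(block)
    if token is None:
        token = ("entry", None)
        block.instructions.insert(0, token)
    return token

b = Block()
t1 = get_or_emit_entry_token(b)  # creates the token at the block head
t2 = get_or_emit_entry_token(b)  # finds and reuses it
assert t1 is t2 and len(b.instructions) == 1
```

Emitting the token greedily at function or loop entry means it is already available while the loop condition is being emitted, which is exactly the situation the lazy scheme got wrong.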

Signed-off-by: Nathan Gauër 
---
 clang/lib/CodeGen/CGBuiltin.cpp   |  88 +
 clang/lib/CodeGen/CGCall.cpp  |   3 +
 clang/lib/CodeGen/CGStmt.cpp  |  94 ++
 clang/lib/CodeGen/CodeGenFunction.cpp |   9 ++
 clang/lib/CodeGen/CodeGenFunction.h   |   9 +-
 .../builtins/RWBuffer-constructor.hlsl|   1 -
 .../CodeGenHLSL/convergence/do.while.hlsl |  90 +
 clang/test/CodeGenHLSL/convergence/for.hlsl   | 121 ++
 clang/test/CodeGenHLSL/convergence/while.hlsl | 119 +
 9 files changed, 445 insertions(+), 89 deletions(-)
 create mode 100644 clang/test/CodeGenHLSL/convergence/do.while.hlsl
 create mode 100644 clang/test/CodeGenHLSL/convergence/for.hlsl
 create mode 100644 clang/test/CodeGenHLSL/convergence/while.hlsl

diff --git a/clang/lib/CodeGen/CGBuiltin.cpp b/clang/lib/CodeGen/CGBuiltin.cpp
index df7502b8def5314..f5d40a1555fcb57 100644
--- a/clang/lib/CodeGen/CGBuiltin.cpp
+++ b/clang/lib/CodeGen/CGBuiltin.cpp
@@ -1133,91 +1133,8 @@ struct BitTest {
   static BitTest decodeBitTestBuiltin(unsigned BuiltinID);
 };
 
-// Returns the first convergence entry/loop/anchor instruction found in |BB|.
-// std::nullptr otherwise.
-llvm::IntrinsicInst *getConvergenceToken(llvm::BasicBlock *BB) {
-  for (auto &I : *BB) {
-    auto *II = dyn_cast<llvm::IntrinsicInst>(&I);
-    if (II && isConvergenceControlIntrinsic(II->getIntrinsicID()))
-      return II;
-  }
-  return nullptr;
-}
-
 } // namespace
 
-llvm::CallBase *
-CodeGenFunction::addConvergenceControlToken(llvm::CallBase *Input,
-llvm::Value *ParentToken) {
-  llvm::Value *bundleArgs[] = {ParentToken};
-  llvm::OperandBundleDef OB("convergencectrl", bundleArgs);
-  auto Output = llvm::CallBase::addOperandBundle(
-  Input, llvm::LLVMContext::OB_convergencectrl, OB, Input);
-  Input->replaceAllUsesWith(Output);
-  Input->eraseFromParent();
-  return Output;
-}
-
-llvm::IntrinsicInst *
-CodeGenFunction::emitConvergenceLoopToken(llvm::BasicBlock *BB,
-                                          llvm::Value *ParentToken) {
-  CGBuilderTy::InsertPoint IP = Builder.saveIP();
-  Builder.SetInsertPoint(&BB->front());
-  auto CB = Builder.CreateIntrinsic(
-      llvm::Intrinsic::experimental_convergence_loop, {}, {});
-  Builder.restoreIP(IP);
-
-  auto I = addConvergenceControlToken(CB, ParentToken);
-  return cast<llvm::IntrinsicInst>(I);
-}
-
-llvm::IntrinsicInst *
-CodeGenFunction::getOrEmitConvergenceEntryToken(llvm::Function *F) {
-  auto *BB = &F->getEntryBlock();
-  auto *token = getConvergenceToken(BB);
-  if (token)
-    return token;
-
-  // Adding a convergence token requires the function to be marked as
-  // convergent.
-  F->setConvergent();
-
-  CGBuilderTy::InsertPoint IP = Builder.saveIP();
-  Builder.SetInsertPoint(&BB->front());
-  auto I = Builder.CreateIntrinsic(
-      llvm::Intrinsic::experimental_convergence_entry, {}, {});
-  assert(isa<llvm::IntrinsicInst>(I));
-  Builder.restoreIP(IP);
-
-  return cast<llvm::IntrinsicInst>(I);
-}
-
-llvm::IntrinsicInst *
-CodeGenFunction::getOrEmitConvergenceLoopToken(const LoopInfo *LI) {
-  assert(LI != nullptr);
-
-  auto *token = getConvergenceToken(LI->getHeader());
-  if (token)
-    return token;
-
-  llvm::IntrinsicInst *PII =
-      LI->getParent()
-          ? emitConvergenceLoopToken(
-                LI->getHeader(), getOrEmitConvergenceLoopToken(LI->getParent()))
-          : getOrEmitConvergenceEntryToken(LI->getHeader()->getParent());
-
-  return emitConvergenceLoopToken(LI->getHeader(), PII);
-}
-
-llvm::CallBase *
-CodeGenFunction::addControlledConvergenceToken(llvm::CallBase *Input) {
-  llvm::Value *ParentToken =
-      LoopStack.hasInfo()
-          ? getOrEmitConvergenceLoopToken(&LoopStack.getInfo())
-          : getOrEmitConvergenceEntryToken(Input->getFunction());
-  return addConvergenceControlToken(Input, ParentToken);
-}
-
 BitTest BitTest::decodeBitTestBuiltin(unsigned BuiltinID) {
   switch (BuiltinID) {
 // Main portable variants.
@@ -18306,12 +18223,9 @@ Value *CodeGenFunction::EmitHLSLBuiltinExpr(unsigned BuiltinID,
   

[clang-tools-extra] [clangd] Support callHierarchy/outgoingCalls (PR #91191)

2024-05-06 Thread Nathan Ridge via cfe-commits

HighCommander4 wrote:

If I'm understanding correctly, the implementation approach in this PR only 
finds callees in the current translation unit.

The approach in #77556 uses the project's index to find callees across 
translation unit boundaries.

Regarding reviews: yes, it seems quite unfortunate that the original developers 
seem to have largely moved on to other things. I will do my best to make some 
progress on the project's review backlog (including in particular 
https://github.com/llvm/llvm-project/pull/86629 and 
https://github.com/llvm/llvm-project/pull/67802) as time permits.

https://github.com/llvm/llvm-project/pull/91191


[Bug 2063985] Re: Does not detect hotplugged storage device

2024-05-06 Thread Nathan Sheffield
I'm also experiencing the same thing on a fresh install of ubuntu 24.04,
trying to plug in an NTFS-formatted USB external drive that works fine
in ubuntu 22. I'm using kernel 6.8.0-31

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2063985

Title:
  Does not detect hotplugged storage device

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gvfs/+bug/2063985/+subscriptions


-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[clang] [clang][NFC] Remove class layout scissor (PR #89055)

2024-05-06 Thread Nathan Sidwell via cfe-commits

https://github.com/urnathan updated 
https://github.com/llvm/llvm-project/pull/89055

>From db5e6456f26ea9b859d3ff24161d7494d58bb7e1 Mon Sep 17 00:00:00 2001
From: Nathan Sidwell 
Date: Mon, 1 Apr 2024 16:15:12 -0400
Subject: [PATCH 1/3] [clang] Remove class layout scissor

---
 clang/lib/CodeGen/CGRecordLayoutBuilder.cpp | 22 ++---
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp 
b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
index 868b1ab98e048a..cc51cc3476c438 100644
--- a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
+++ b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
@@ -75,7 +75,7 @@ struct CGRecordLowering {
   // sentinel member type that ensures correct rounding.
   struct MemberInfo {
 CharUnits Offset;
-enum InfoKind { VFPtr, VBPtr, Field, Base, VBase, Scissor } Kind;
+enum InfoKind { VFPtr, VBPtr, Field, Base, VBase } Kind;
 llvm::Type *Data;
 union {
   const FieldDecl *FD;
@@ -197,7 +197,7 @@ struct CGRecordLowering {
  const CXXRecordDecl *Query) const;
   void calculateZeroInit();
   CharUnits calculateTailClippingOffset(bool isNonVirtualBaseType) const;
-  void checkBitfieldClipping() const;
+  void checkBitfieldClipping(bool isNonVirtualBaseType) const;
   /// Determines if we need a packed llvm struct.
   void determinePacked(bool NVBaseType);
   /// Inserts padding everywhere it's needed.
@@ -299,8 +299,8 @@ void CGRecordLowering::lower(bool NVBaseType) {
   accumulateVBases();
   }
   llvm::stable_sort(Members);
+  checkBitfieldClipping(NVBaseType);
   Members.push_back(StorageInfo(Size, getIntNType(8)));
-  checkBitfieldClipping();
   determinePacked(NVBaseType);
   insertPadding();
   Members.pop_back();
@@ -894,8 +894,6 @@ CGRecordLowering::calculateTailClippingOffset(bool 
isNonVirtualBaseType) const {
 }
 
 void CGRecordLowering::accumulateVBases() {
-  Members.push_back(MemberInfo(calculateTailClippingOffset(false),
-                               MemberInfo::Scissor, nullptr, RD));
   for (const auto &Base : RD->vbases()) {
 const CXXRecordDecl *BaseDecl = Base.getType()->getAsCXXRecordDecl();
 if (BaseDecl->isEmpty())
@@ -950,18 +948,20 @@ void CGRecordLowering::calculateZeroInit() {
 }
 
 // Verify accumulateBitfields computed the correct storage representations.
-void CGRecordLowering::checkBitfieldClipping() const {
+void CGRecordLowering::checkBitfieldClipping(
+bool isNonVirtualBaseType LLVM_ATTRIBUTE_UNUSED) const {
 #ifndef NDEBUG
+  auto ScissorOffset = calculateTailClippingOffset(isNonVirtualBaseType);
   auto Tail = CharUnits::Zero();
   for (const auto &M : Members) {
-    // Only members with data and the scissor can cut into tail padding.
-    if (!M.Data && M.Kind != MemberInfo::Scissor)
+    // Only members with data could possibly overlap.
+    if (!M.Data)
       continue;
 
 assert(M.Offset >= Tail && "Bitfield access unit is not clipped");
-    Tail = M.Offset;
-    if (M.Data)
-      Tail += getSize(M.Data);
+    Tail = M.Offset + getSize(M.Data);
+    assert((Tail <= ScissorOffset || M.Offset >= ScissorOffset) &&
+           "Bitfield straddles scissor offset");
   }
 #endif
 }
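The reworked assertion above keeps the old invariant without a sentinel Scissor member. A sketch of the check, with invented (offset, size) integer pairs standing in for clang's MemberInfo and CharUnits (the scissor is where virtual-base storage begins):

```python
# Sketch of the clipping invariant: members are sorted by offset, each
# storage unit [offset, offset + size) must start at or after the
# previous unit's end, and no unit may straddle the scissor offset.

def check_clipping(members, scissor):
    """members: list of (offset, size) pairs, sorted by offset."""
    tail = 0
    for offset, size in members:
        assert offset >= tail, "access unit is not clipped"
        tail = offset + size
        assert tail <= scissor or offset >= scissor, \
            "member straddles scissor offset"

# Two non-virtual members before the scissor, one virtual base after it:
check_clipping([(0, 4), (4, 2), (8, 8)], scissor=8)
```

A unit such as [6, 10) with the scissor at 8 would trip the second assertion, which is the case the scissor member used to guard against.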

>From 36705e5bcdcda6983ed5a163ae8ea3e9911ad275 Mon Sep 17 00:00:00 2001
From: Nathan Sidwell 
Date: Wed, 17 Apr 2024 17:15:57 -0400
Subject: [PATCH 2/3] Fix param spelling

---
 clang/lib/CodeGen/CGRecordLayoutBuilder.cpp | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp 
b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
index cc51cc3476c438..38167903cda508 100644
--- a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
+++ b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
@@ -949,9 +949,9 @@ void CGRecordLowering::calculateZeroInit() {
 
 // Verify accumulateBitfields computed the correct storage representations.
 void CGRecordLowering::checkBitfieldClipping(
-bool isNonVirtualBaseType LLVM_ATTRIBUTE_UNUSED) const {
+bool IsNonVirtualBaseType LLVM_ATTRIBUTE_UNUSED) const {
 #ifndef NDEBUG
-  auto ScissorOffset = calculateTailClippingOffset(isNonVirtualBaseType);
+  auto ScissorOffset = calculateTailClippingOffset(IsNonVirtualBaseType);
   auto Tail = CharUnits::Zero();
   for (const auto &M : Members) {
     // Only members with data could possibly overlap.

>From 2d2dcdecb0328b8d397bb14072e5750ddf20a39a Mon Sep 17 00:00:00 2001
From: Nathan Sidwell 
Date: Mon, 6 May 2024 10:28:06 -0400
Subject: [PATCH 3/3] lose attribute

---
 clang/lib/CodeGen/CGRecordLayoutBuilder.cpp | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp 
b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
index 38167903cda508..5169be204c14d0 100644
--- a/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
+++ b/clang/lib/CodeGen/CGRecordLayoutBuilder.cpp
@@ -948,8 +948,7 @@ void CGRecordLowering::calculateZeroIn

Re: [Starlink] It’s the Latency, FCC

2024-05-06 Thread Nathan Owens via Starlink
You really don’t need 25Mbps for decent 4K quality - depends on the
content. Netflix has some encodes that go down to 1.8Mbps with a very high
VMAF:
https://netflixtechblog.com/optimized-shot-based-encodes-for-4k-now-streaming-47b516b10bbb

Apple TV has the highest bitrate encodes of any mainstream streaming
service, and those do top out at ~25Mbps. Could they be more efficient?
Probably…
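Back-of-envelope arithmetic for the bitrates in this thread, under assumptions of mine (3840×2160 frames; 30 bits/pixel for uncompressed 10-bit RGB; 60 fps for the raw case and 24 fps for the 1.8 Mbit/s encode):

```python
# Rough scale of compressed vs. uncompressed 4K video bandwidth.
width, height = 3840, 2160

# Uncompressed 4K60 at 30 bits/pixel (10-bit RGB), in Gbit/s:
raw_gbps = width * height * 60 * 30 / 1e9
print(f"uncompressed 4K60: {raw_gbps:.1f} Gbit/s")  # ~14.9, near HDMI's 18

# Implied compression ratio of a 25 Mbit/s stream:
ratio = raw_gbps * 1e9 / 25e6
print(f"25 Mbit/s is roughly {ratio:.0f}:1 compression")  # ~600:1

# A 1.8 Mbit/s 4K encode, in bits per pixel at 24 fps:
bpp = 1.8e6 / (width * height * 24)
print(f"1.8 Mbit/s @ 4K24: {bpp:.4f} bits/pixel")  # ~0.009
```

So the 18 Gbit/s HDMI figure and the ~25 Mbit/s streaming figure are not directly comparable: the former is uncompressed signal bandwidth, the latter sits behind roughly 600:1 compression.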

On Mon, May 6, 2024 at 7:19 AM Alexandre Petrescu via Starlink <
starlink@lists.bufferbloat.net> wrote:

>
> On 02/05/2024 at 21:50, Frantisek Borsik wrote:
>
> Thanks, Colin. This was just another great read on video (and audio - in
> the past emails from you) bullet-proofing for the near future.
>
> To be honest, the consensus on the bandwidth overall in the bufferbloat
> related circles was in the 25/3 - 100/20 ballpark
>
>
> To continue on this discussion of 25mbit/s (mbyte/s ?) of 4k, and 8k, here
> are some more thoughts:
>
> - about 25mbit/s bw needs for 4K:  hdmi cables for 4K HDR10 (high dynamic
> range) are specified at 18gbit/s and not 25mbit/s (mbyte?).  These HDMI
> cables dont run IP.  But, supposedly, the displayed 4K image is of a higher
> quality if played over hdmi (presumably from a player) than from a server
> remote on the Internet.   To achieve parity, maybe one wants to run that
> hdmi flow from the server with IP, and at that point the bandwidth
> requirement is higher than 25mbit/s.  This goes hand in hand with the disc
> evolutions (triple-layer bluray discs of 120Gbyte capacity is the most
> recent; I dont see signs of that to slow).
>
> - in some regions, the terrestrial DVB (TV on radio frequencies, with
> antenna receivers, not  IP) run at 4K HDR10 starting this year.  I dont
> know what MPEG codec is it, at what mbit/s speed.  But it is not over the
> Internet.  This means that probably  ISPs are inclined to do more than that
> 4K over the Internet, maybe 8K, to distinguish their service from DVB.  The
> audience of these DVB streams is very wide, with cheap one-time buy
> receivers (no subscription, like with ISP) already widely available in
> electronics stores.
>
> - a reduced audience, yet important,  is that of 8K TV via satellites.
> There is one japanese 8K TV satcom provider, and the audience (number of
> watchers) is probably smaller than that of DVB 4K HDR.  Still, it
> constitutes competition for IPTV from ISPs.
>
> To me, that reflects a direction of growth of the 4K to 8K capability
> requirement from the Internet.
>
> Still, that growth in bandwidth requirement does not say anything about
> the latency requirement.  That can be found elsewhere, and probably it is
> very little related to TV.
>
> Alex
>
> , but all what many of us were trying to achieve while talking to FCC (et
> al) was to point out, that in order to really make it bulletproof and
> usable for not only near future, but for today, a reasonable Quality of
> Experience requirement is necessary to be added to the definition of
> broadband. Here is the link to the FCC NOI and related discussion:
> https://circleid.com/posts/20231211-its-the-latency-fcc
>
> Hopefully, we have managed to get that message over to the other side. At
> least 2 of 5 FCC Commissioners seems to be getting it - Nathan Simington
> and Brendan Carr - and Nathan event arranged for his staffers to talk with
> Dave and others. Hope that this line of of cooperation will continue and we
> will manage to help the rest of the FCC to understand the issues at hand
> correctly.
>
> All the best,
>
> Frank
>
> Frantisek (Frank) Borsik
>
>
>
> https://www.linkedin.com/in/frantisekborsik
>
> Signal, Telegram, WhatsApp: +421919416714
>
> iMessage, mobile: +420775230885
>
> Skype: casioa5302ca
>
> frantisek.bor...@gmail.com
>
>
> On Thu, May 2, 2024 at 4:47 PM Colin_Higbie via Starlink <
> starlink@lists.bufferbloat.net> wrote:
>
>> Alex, fortunately, we are not bound to use personal experiences and
>> observations on this. We have real market data that can provide an
>> objective, data-supported conclusion. No need for a
>> chocolate-or-vanilla-ice-cream-tastes-better discussion on this.
>>
>> Yes, cameras can film at 8K (and higher in some cases). However, at those
>> resolutions (with exceptions for ultra-high end cameras, such as those used
>> by multi-million dollar telescopes), except under very specific conditions,
>> the actual picture quality doesn't vary past about 5.5K. The loss of detail
>> simply moves from a consequence of too few pixels to optical and focus
>> limits of the lenses. Neighboring pixels simply hold a blurry image,
>> meaning they don't actually carry any usable information. A still shot with

[clang] [llvm] [SPIRV] Add tan intrinsic part 3 (PR #90278)

2024-05-06 Thread Nathan Gauër via cfe-commits

https://github.com/Keenuts approved this pull request.


https://github.com/llvm/llvm-project/pull/90278


[Bug 2062971] Re: Enable Ubuntu Pro page formatting is hard to follow

2024-05-06 Thread Nathan Teodosio
The behavior that I'm talking about:
https://people.canonical.com/~npt/3pro-2024-03-20_10.06.45.mkv

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2062971

Title:
  Enable Ubuntu Pro page formatting is hard to follow

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-initial-setup/+bug/2062971/+subscriptions



[Bug 2062971] Re: Enable Ubuntu Pro page formatting is hard to follow

2024-05-06 Thread Nathan Teodosio
The padding around the logo is certainly botched after transitioning
from GTK3 to GTK4. I tried everything and left a comment in the source
code[1] pointing to the halign not being respected there, maybe someone
has an idea of how to fix that.

> and then only one

To what exactly does 'then' refer here?

[1] https://salsa.debian.org/gnome-team/gnome-initial-setup/-/blob/ubuntu/latest/debian/patches/0001-Add-Ubuntu-mode-with-special-pages.patch?ref_type=heads#L1366

** Changed in: gnome-initial-setup (Ubuntu)
   Status: New => Triaged

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/2062971

Title:
  Enable Ubuntu Pro page formatting is hard to follow

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/gnome-initial-setup/+bug/2062971/+subscriptions



Re: Remove the IRC channels

2024-05-05 Thread Nathan Hartman
On Sun, May 5, 2024 at 12:35 PM Daniel Sahlberg 
wrote:

> Den sön 5 maj 2024 kl 11:42 skrev Stefan Sperling :
>
>> On Sun, May 05, 2024 at 11:20:28AM +0200, Daniel Sahlberg wrote:
>> > What about this new topic for #svn?
>> >
>> > [[[
>> > The Apache® Subversion® version control system (
>> > https://subversion.apache.org/) | Read the book:
>> http://www.svnbook.org/ |
>> > FAQ: https://subversion.apache.org/faq.html | This channel has limited
>> use,
>> > if you have questions please ask on the mailing list
>> > https://subversion.apache.org/mailing-lists#users-ml
>> > ]]]
>> >
>> > And for #svn-dev:
>> >
>> > [[[
>> > Development of Apache® Subversion® (https://subversion.apache.org/) |
>> This
>> > channel has limited use, if you have questions please ask on the mailing
>> > list https://subversion.apache.org/mailing-lists#dev-ml
>> > ]]]
>>
>> Reads well. Fine with me!
>>
>
> Thank you, and also thanks to Nathan for reviewing the changes to the
> website.
>
> I have updated the topics as above and I've also merged the website
> changes in r1917520.
>
> I believe that is everything, but if you find something else, just let me
> know.
>
> Kind regards,
> Daniel
>


Thanks Daniel for taking care of it and to Stefan for your thoughts and
review.

Another advantage of this change is that it removes some other
administrative headaches like restarting the CommitBot when it freezes, etc.

Cheers,
Nathan


Re: svn commit: r1917512 - in /subversion/site/staging: ./ docs/community-guide/

2024-05-05 Thread Nathan Hartman
freenode.net until May 2021. It may 
> still exist
> -  but it is no longer recognized as an official channel.)
> -  
>  
>
>  
> @@ -742,6 +735,23 @@ again.
>
>  
>
> +
> +Where are the IRC channels?
> +   +title="Link to this section">
> +
> +
> +Previously there were official IRC channels #svn and #svn-dev on
> +freenode.net (until May 2021) and on irc.libera.chat (from May 2021). Due to
> +the low number of participants, we no longer recommend using these channels
> +for support and/or development questions.
> +
> +Archives are available
> + https://colabti.org/irclogger/irclogger_logs/svn;>here.
> +
> +
> +
> +
>  
>  How is Subversion affected by changes in Daylight Savings Time (DST)?
>
> Modified: subversion/site/staging/faq.ja.html
> URL: 
> http://svn.apache.org/viewvc/subversion/site/staging/faq.ja.html?rev=1917512&r1=1917511&r2=1917512&view=diff
> ==
> --- subversion/site/staging/faq.ja.html [utf-8] (original)
> +++ subversion/site/staging/faq.ja.html [utf-8] Sun May  5 08:27:07 2024
> @@ -463,7 +463,6 @@ Win32システムには、シンボリ�
>Subversion ユーザーズメイリングリスト ( href="mailto:us...@subversion.apache.org;>us...@subversion.apache.org)
>  注意: このメイリングリストは href="#moderation">モデレータ制だから、あなたの投稿が配送されるまでには、少し遅延があるかも。
>https://svn.haxx.se/users/;>Subversion ユーザーズリストのアーカイブ
> -  IRC。irc.libera.chat の #svn チャンネルにて。
>  
>
>  
>
> Modified: subversion/site/staging/faq.zh.html
> URL: 
> http://svn.apache.org/viewvc/subversion/site/staging/faq.zh.html?rev=1917512&r1=1917511&r2=1917512&view=diff
> ==
> --- subversion/site/staging/faq.zh.html [utf-8] (original)
> +++ subversion/site/staging/faq.zh.html [utf-8] Sun May  5 08:27:07 2024
> @@ -443,7 +443,6 @@ href="http://svn.collab.net/repos/svn/tr
> >us...@subversion.apache.org)
>  注意这个列表需要经过审核,所以在显示之前有一些延迟。
>https://svn.haxx.se/users/;>Subversion用户信息列表。
> -  在线聊天系统(IRC)在irc.libera.chat的#svn频道。
>  
>
>  
>
>


Looks good to me! +1 to merge to publish whenever you're ready.

Thanks for taking care of this!

Cheers,
Nathan


[clang-tools-extra] [clangd] Add 'apply all clangd fixes' and 'apply all '_' fixes' QuickFixes (PR #79867)

2024-05-04 Thread Nathan Ridge via cfe-commits

HighCommander4 wrote:

> Bump @HighCommander4 - did you get a chance to review this?

Hi @torshepherd! Sorry for not being more responsive on this. I haven't had as 
much time as I'd like to spend on clangd reviews recently, and certainly not 
enough to keep up with the volume of review requests that I get.

That said, I haven't forgotten about this patch, and it's definitely on a 
shortlist that I'm planning to make time to look at over the next couple of 
weeks.

https://github.com/llvm/llvm-project/pull/79867


Re: Remove the IRC channels

2024-05-04 Thread Nathan Hartman
On Sat, May 4, 2024 at 4:15 PM Stefan Sperling  wrote:

> On Sat, May 04, 2024 at 09:24:58PM +0200, Daniel Sahlberg wrote:
> > Hi,
> >
> > I’m personally not an IRC user but I try to keep an eye on the IRC logs.
> > For personal reasons I haven’t had time to do so since the start of the year
> > but I spent some time tonight to browse the archives.
> >
> > Since January there were somewhere between 5 and 10 persons asking
> > questions and NO ONE got a timely reply. On the other hand we have had a
> > slow but steady stream of questions on users@ and they have received
> > replies from different members of the community.
> >
> > I think the lack of replies on IRC reflects badly on the community. I’m
> not
> > able to put any energy into following the IRC channels and I cannot ask
> > anyone else either. But I think the mailing lists could absorb these
> > questions and give timely answers.
> >
> > For this reason, I suggest that we discontinue the IRC channels (remove
> > them from libera.chat) or at least change the topic to indicate that they
> > are no longer “official” and to refer all questions to the mailing lists.
> > The website should of course be updated accordingly.
> >
> > Kind regards
> >
> > Daniel Sahlberg
>
> I am not sure whether outright removing (aka "dropping") our Libera
> channels from ChanServ is a good idea. Anyone could re-register a
> dropped channel and squat on the name and/or impersonate the project.
>
> Given this, I suppose the #svn users channel should be marked as
> unmaintained in the topic. Redirecting questions to the mailing
> list via the topic line should work well enough.
>
> The #svn-dev channel might still be useful for quick communication
> during bursts of increased project activity. I would keep it around.
>
> Cheers,
> Stefan



I'd like to keep the IRC channels, at the very least so that someone won't
squat on the name as already mentioned, but I agree that we should update
the channel topics to direct user questions to the mailing list where they
are much more likely to receive a timely response.

I'd also recommend to update our text on the website to say that the IRC
channels exist but the mailing lists are the more recommended way to
communicate.

Cheers,
Nathan


bug#70762: --goproxy flag in guix go importer apparently nonfunctional

2024-05-03 Thread Nathan Dehnel
$ guix import go --goproxy="https://pkg.go.dev" gopkg.in/yaml.v2
guix import: warning: Failed to import package "gopkg.in/yaml.v2".
reason: "https://pkg.go.dev/gopkg.in/yaml.v2/@v/list" could not be
fetched: HTTP error 400 ("Bad Request").
This package and its dependencies won't be imported.
guix import: error: failed to download meta-data for module 'gopkg.in/yaml.v2'.

The package exists and the URL looks right:
https://pkg.go.dev/gopkg.in/yaml.v2
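For context, the failing "@v/list" URL is a GOPROXY protocol endpoint. pkg.go.dev is the documentation site, not a module proxy, which is consistent with the 400 response; a proxy such as proxy.golang.org serves that endpoint (assumption: that is the proxy the importer expects). The protocol also case-escapes module paths: each uppercase letter becomes '!' followed by its lowercase form. A sketch of the URL construction (the helper names are mine):

```python
# Build the GOPROXY "@v/list" URL for a Go module, including the
# case-escaping required by the module proxy protocol.

def escape_module_path(path):
    """Escape a Go module path for use in GOPROXY URLs."""
    return "".join("!" + c.lower() if c.isupper() else c for c in path)

def list_url(proxy, module):
    return f"{proxy}/{escape_module_path(module)}/@v/list"

print(list_url("https://proxy.golang.org", "gopkg.in/yaml.v2"))
# https://proxy.golang.org/gopkg.in/yaml.v2/@v/list
print(list_url("https://proxy.golang.org", "github.com/Azure/azure-sdk-for-go"))
# https://proxy.golang.org/github.com/!azure/azure-sdk-for-go/@v/list
```

Pointing --goproxy at a real module proxy rather than the documentation site should make the "@v/list" request succeed.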





Re: pg_sequence_last_value() for unlogged sequences on standbys

2024-05-03 Thread Nathan Bossart
On Wed, May 01, 2024 at 12:39:53PM +0900, Michael Paquier wrote:
> However, it seems to me that you should also drop the
> pg_is_other_temp_schema() in system_views.sql for the definition of
> pg_sequences.  Doing that on HEAD now would be OK, but there's nothing
> urgent to it so it may be better done once v18 opens up.  Note that
> pg_is_other_temp_schema() is only used for this sequence view, which
> is a nice cleanup.

IIUC this would cause other sessions' temporary sequences to appear in the
view.  Is that desirable?

> By the way, shouldn't we also change the function to return NULL for a
> failed permission check?  It would be possible to remove the
> has_sequence_privilege() as well, thanks to that, and a duplication
> between the code and the function view.  I've been looking around a
> bit, noticing one use of this function in check_pgactivity (nagios
> agent), and its query also has a has_sequence_privilege() so returning
> NULL would simplify its definition in the long-run.  I'd suspect other
> monitoring queries to do something similar to bypass permission
> errors.

I'm okay with that, but it would be v18 material that I'd track separately
from the back-patchable fix proposed in this thread.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com



