Fwiw, I think the real problem with automatic retries is that the SQL interface doesn't lend itself to them: the server never knows whether the current command will be followed by a COMMIT or by more commands.
I actually think that if that problem were tackled, it would very likely be a highly appreciated option, because there's a big overlap between the set of users interested in higher isolation levels and the set of users writing stored procedures to define their business logic. Both are kind of "traditional" SQL-engine approaches, and both lend themselves to environments where you have a lot of programmers working on one system and you're not able to do things like define strict locking and update orderings. So a lot of users are probably looking at something like "BEGIN; SELECT create_customer_order(....); COMMIT" and wondering why the server can't automatically retry the query when they get a serialization failure.

There are actually other reasons why providing the whole logic for the transaction up front, with a promise that it will be the whole transaction, is attractive. E.g. vacuum could ignore a transaction if it knows the transaction will never look at the table it's processing. Or automatic deadlock-testing tools could extract the list of tables being accessed and suggest "LOCK TABLE" commands, sorted in a canonical order, to put at the head of the transaction. These things may not be easy, but they're currently impossible for the same reason automatic retrying is: the executor doesn't know what commands will follow the current one and doesn't know whether it has the whole transaction.
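To make the pain concrete: because the server can't retry on its own, every client that wants serializable isolation ends up hand-writing a retry loop around the whole transaction. A minimal sketch of that boilerplate is below; `SerializationFailure` and `create_customer_order` are hypothetical stand-ins (in a real driver the equivalent would be the error mapped from SQLSTATE 40001, and the body would re-issue the BEGIN/SELECT/COMMIT against the server).

```python
class SerializationFailure(Exception):
    """Stand-in for the driver error raised on SQLSTATE 40001."""

def run_transaction(txn_fn, max_attempts=3):
    """Run txn_fn, retrying the *whole* transaction on serialization failure.

    This is the loop every client must write today; it's exactly what the
    server could do itself if it knew up front that txn_fn was the entire
    transaction.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_fn()
        except SerializationFailure:
            if attempt == max_attempts:
                raise  # give up after the last attempt
            # a real client would back off and/or log before retrying

# Hypothetical transaction body: fails once with a transient
# serialization conflict, then succeeds on the retry.
attempts = []
def create_customer_order():
    attempts.append(1)
    if len(attempts) < 2:
        raise SerializationFailure("could not serialize access")
    return "order created"

print(run_transaction(create_customer_order))  # prints "order created"
```

The key point is that the unit of retry is the whole `txn_fn`, not a single statement, which is precisely the information the server lacks mid-transaction.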