Re: [HACKERS] [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-06-16 Thread Alvaro Herrera
Alvaro Herrera wrote:

 I see another hole in this area.  See do_start_worker() -- there we only
 consider the offsets limit to determine a database to be in
 almost-wrapped-around state (causing emergency attention).  If the
 database in members trouble has no pgstat entry, it might get completely
 ignored.

For the record -- it was pointed out to me that this was actually fixed
by 53bb309d2.  \o/

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[HACKERS] Re: [GENERAL] 9.4.1 -> 9.4.2 problem: could not access status of transaction 1

2015-06-16 Thread Thomas Munro
On Wed, Jun 17, 2015 at 6:58 AM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Thomas Munro wrote:

 Thanks.   As mentioned elsewhere in the thread, I discovered that the
 same problem exists for page boundaries, with a different error
 message.  I've tried the attached repro scripts on 9.3.0, 9.3.5, 9.4.1
 and master with the same results:

 FATAL:  could not access status of transaction 2048
 DETAIL:  Could not read from file pg_multixact/offsets/ at
 offset 8192: Undefined error: 0.

 FATAL:  could not access status of transaction 131072
 DETAIL:  Could not open file pg_multixact/offsets/0002: No such file
 or directory.

 So I checked this bug against current master, because it's claimed to be
 closed.  The first script doesn't emit a message at all; the second
 script does emit a message:

 LOG:  could not truncate directory pg_multixact/offsets: apparent wraparound

 If you start and stop again, there's no more noise in the logs.  That's
 pretty innocuous -- great.

Right, I included a fix for this in
https://commitfest.postgresql.org/5/265/ which handles both
pg_subtrans and pg_multixact, since it was lost in the noise in this
thread...  Hopefully someone can review that.

 But then I modified your script to do two segments instead of one.  Then
 after the second cycle is done, start the server and stop it again.  The
 end result is a bit surprising: you end up with no files in
 pg_multixact/offsets at all!

Ouch.  I see why: latest_page_number gets initialised to a different
value when you restart (computed from the oldest multixact ID, whereas
during normal running it remembers the last created page number), so
in this case (next == oldest, next % 2048 == 0), restarting the server
moves latest_page_number forwards by one, so SimpleLruTruncate no
longer bails out with the above error message and happily deletes all
files.  That is conceptually OK (there are no multixacts, so having no
files should be OK), but see below...  Applying the patch linked above
prevents this problem (it always keeps at least one multixact, and
therefore at least one page and at least one segment, because it steps
back one multixact to avoid boundary problems when oldest == next).

As for whether it's actually OK to have no files in
pg_multixact/offsets, it seems that if you restart *twice* after
running checkpoint-segment-boundary.sh, you finish up with earliest =
4294965248 in TruncateMultiXact, because this code assumes that there
was at least one file found and then proceeds to assign (-1 * 2048) to
earliest (which is unsigned).

    trunc.earliestExistingPage = -1;
    SlruScanDirectory(MultiXactOffsetCtl,
                      SlruScanDirCbFindEarliest, &trunc);
    earliest = trunc.earliestExistingPage * MULTIXACT_OFFSETS_PER_PAGE;
    if (earliest < FirstMultiXactId)
        earliest = FirstMultiXactId;

I think this should bail out if earliestExistingPage is still -1 after
the call to SlruScanDirectory.

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] 9.5 release notes

2015-06-16 Thread Alvaro Herrera
Bruce Momjian wrote:
 On Sun, Jun 14, 2015 at 11:21:35AM -0400, Tom Lane wrote:

  For pretty much the same reason, I'm not in favor of small caps either.
  Even assuming we can do that consistently (which I bet we can't; we
  do not have all that much control over how web browsers render HTML),
  it would be calling attention to itself, which is exactly not the result
  I think we should be after.
 
 I am sure almost every browser can render smallcaps, even if it doesn't
 have a smallcaps-specific font installed --- same for PDF.

For HTML, smallcaps is a CSS property (font-variant: small-caps).  The
way it works is that you write in lowercase, and then the browser
displays small caps.  Browsers that don't support it correctly simply
fall back to displaying lowercase, which works fine.

One trouble is how to write the SGML so that the smallcaps bits reach
the generated HTML.

I don't know about PDF.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] 9.5 release scheduling (was Re: logical column ordering)

2015-06-16 Thread Noah Misch
On Thu, Dec 11, 2014 at 10:24:20AM -0800, Jeff Janes wrote:
 On Thu, Dec 11, 2014 at 8:03 AM, Tom Lane t...@sss.pgh.pa.us wrote:
  2. The amount of pre-release testing we get from people outside the
  hard-core development crowd seems to be continuing to decrease.
  We were fortunate that somebody found the JSONB issue before it was
  too late to do anything about it.

 We are not particularly inviting of feedback for whatever testing has been
 done.
 
 The definitive guide seems to be
 https://wiki.postgresql.org/wiki/HowToBetaTest, and says:
 
 You can report tests by email. You can subscribe to any PostgreSQL mailing
 list from the subscription form http://www.postgresql.org/community/lists/.
 
- pgsql-bugs: this is the preferred mailing list if you think you have
found a bug in the beta. You can also use the Bug Reporting Form
http://www.postgresql.org/support/submitbug/.
- pgsql-hackers: bugs, questions, and successful test reports are
welcome here if you are already subscribed to pgsql-hackers. Note that
pgsql-hackers is a high-traffic mailing list with a lot of development
discussion.
 
 
 =
 
 So if you find a bug, you can report it on the bug reporting form (which
 doesn't have a drop-down entry for 9.4RC1).

Let's get 9.5 alpha/beta/rc releases into that drop-down as we release them.

 If you have positive results rather than negative ones (or even complaints
 that are not actually bugs), you can subscribe to a mailing list which
 generates a lot of traffic which is probably over your head and not
 interesting to you.

Feel welcome to revise that part.  Don't messages from non-subscribed people
make it to the list after manual moderation?  Testers might want to create a
no-delivery subscription to avoid moderation delay, but the decision to
receive all -hackers mail is separate.

 Does the core team keep a mental list of items they want to see tested by
 the public, and they will spend their own time testing those things
 themselves if they don't hear back on some positive tests for them?

Not sure about the core team.  I myself would test essentially the same things
during beta regardless of what end users report having tested, because end
users will pick different test scenarios for the same features.

 If we find reports of public testing that yields good results (or at least
 no bugs) to be useful, we should be more clear on how to go about doing
 it.  But are positive reports useful?  If I report a bug, I can write down
 the steps to reproduce it, and then follow my own instructions to make sure
 it does actually reproduce it.  If I find no bugs, it is just "I did a
 bunch of random stuff and nothing bad happened, that I noticed."

Positive reports have potential to be useful.  In particular, mention the new
features you took action to try.  Areas like BRIN, pg_rewind, foreign tables,
event triggers, CREATE POLICY, INSERT ... ON CONFLICT, and GROUPING SETS are
either completely new or have new sub-features.  If nothing else, we can CC
reporters when considering changes to features they reported upon.  Other
analysis would become attractive given a larger corpus of positive reports.

Thanks,
nm




Re: [HACKERS] Auto-vacuum is not running in 9.1.12

2015-06-16 Thread Prakash Itnal
Hi,

Currently the issue is easily reproducible. Steps to reproduce:
* Set some aggressive values for auto-vacuuming.
* Run heavy database update/delete/insert queries. This leads to
auto-vacuuming being invoked in quick succession.
* Change the system time to an older date, e.g. 1995-01-01.

Suddenly auto-vacuuming stops working. Even after changing the system
time back to the current time, auto-vacuuming does not resume.

So the question is: does Postgres support system time changes?


On Tue, Jun 16, 2015 at 10:12 AM, Prakash Itnal prakash...@gmail.com
wrote:

 Hi,

 @Alvaro Herrera, thanks for the quick reply. I was on leave and hence
 not able to reply sooner.

 This issue was observed on a customer site. However, after a long discussion
 and digging into what happened around 2nd May 2015, we got to know that the
 NTP server suddenly went back in time to 1995. It remained there for some
 time until it was noticed and corrected. After correcting the NTP server
 time, the whole cluster was synced to the current date. After this change in
 time, auto-vacuum stopped. Since auto-vacuuming is triggered periodically, I
 suspect this time change has affected a timer!

 So I suspect the time change is the root cause! It would be great if
 someone could confirm whether this is the root cause of auto-vacuum
 stopping.


 On Wed, Jun 10, 2015 at 8:19 PM, Alvaro Herrera alvhe...@2ndquadrant.com
 wrote:

 Prakash Itnal wrote:
  Hello,
 
  Recently we encountered an issue where the disk space was continuously
  increasing towards 100%. A manual vacuum then freed the disk space, but
  it is increasing again. On digging further, we found that auto-vacuuming
  was not running, or was stuck/hung.

 Hm, we have seen this on Windows, I think.

 Is the stats collector process running?  Is it stuck?

 If you attach to process 6504 (autovac launcher), what's the backtrace?

  4) Last run auto-vacuum:
  SELECT now(), schemaname, relname, last_vacuum, last_autovacuum,
 vacuum_count, autovacuum_count FROM pg_stat_user_tables;
 
               now              | schemaname | relname |     last_vacuum     |        last_autovacuum        | vacuum_count | autovacuum_count
  ------------------------------+------------+---------+---------------------+-------------------------------+--------------+------------------
   2015-06-10 01:03:03.574212+02 | public    | abcd    |                     | 2015-04-18 00:52:35.008874+02 |            0 |                2
   2015-06-10 01:03:03.574212+02 | public    | xyz     |                     | 2015-05-02 06:01:35.220651+02 |            0 |               20
 
  NOTE: I changed the relname for the above two tables due to confidentiality.

 Are there dead tuples in tables?  Maybe vacuums are getting executed and
 these values are not updated, for instance?
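
 For instance, something like this (untested) would show whether dead
 tuples are accumulating despite autovacuum:

   SELECT relname, n_dead_tup, last_autovacuum
   FROM pg_stat_user_tables
   ORDER BY n_dead_tup DESC
   LIMIT 10;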

 --
 Álvaro Herrera                http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




 --
 Cheers,
 Prakash




-- 
Cheers,
Prakash


Re: [HACKERS] does tuple store subtransaction id in it?

2015-06-16 Thread Haribabu Kommi
On Tue, Jun 16, 2015 at 1:08 PM, Xiaoyulei xiaoyu...@huawei.com wrote:
 In XidInMVCCSnapshot, it will check whether the xid from the tuple is in
 snapshot->subxip. Does that mean the tuple stores a subtransaction id?

A tuple stores only the transaction id related to the operation. This
can be either the main transaction id or a subtransaction id.
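
For example, you can see this from SQL with a savepoint (untested
sketch; the table name is made up):

    CREATE TABLE t (x int);
    BEGIN;
    SELECT txid_current();    -- assigns and returns the top-level xid
    SAVEPOINT s1;
    INSERT INTO t VALUES (1); -- this tuple is stamped by the subxact
    SELECT xmin FROM t;       -- the subtransaction's xid, typically
                              -- top-level xid + 1 on an idle system
    COMMIT;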

 But in PushTransaction, I see TransactionState.subTransactionId is assigned
 from currentSubTransactionId, and currentSubTransactionId is reinitialized
 in StartTransaction. So I think the tuple should not store a subtransaction
 id.

 I am confused about this. If the subtransaction id is reinitialized at
 every start, how can one judge from the xid in a tuple that it is a
 subtransaction?

StartTransaction is called only once per transaction. Further on, for
subtransactions only StartSubTransaction is called.
Hope this answers your question.

Regards,
Hari Babu
Fujitsu Australia




Re: [HACKERS] pg_rewind failure by file deletion in source server

2015-06-16 Thread Michael Paquier
On Fri, Jun 12, 2015 at 9:02 PM, Fujii Masao wrote:
 You want to draft a patch? Should I?

 Please feel free to try that! :)

OK, so attached are a patch and a test case able to easily trigger the
error. Apply the patch and run the test case to reproduce the following
failure:
ERROR:  could not open file "base/16384/16385_fsm" for reading: No
such file or directory
STATEMENT:  SELECT path, begin,
              pg_read_binary_file(path, begin, len) AS chunk
            FROM fetchchunks
The patch adds a call to pg_usleep after the list of files from the
source server has been fetched with libpq in pg_rewind.c, to leave time
to run some DROP actions, like DROP DATABASE, DROP TABLE, etc., in
order to trigger the error easily.

In order to reduce the risk of failure to a minimum and to preserve
the performance of the tool when using --source-server, I think that
we should add a check using pg_stat_file to see if a file is still
present or not; if it is missing we can safely skip it thanks to
minRecoveryPoint.  Now the problem is that pg_stat_file fails outright
if the targeted file is missing.  Hence, to avoid a bunch of round
trips to the server with one pg_stat_file call per file, I think that
we should add an if_not_exists option to pg_stat_file, defaulting to
false, to skip the file-is-missing error and have it return NULL in
that case.  Then we could use this filter on the file path in
libpq_executeFileMap() to fetch only the file chunks that actually
exist on the server.  Note that we could as well use some plpgsql-ing
to do the same, but extending pg_stat_file looks more useful to me.
Thoughts?
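
For instance, the fetch query could then become something like this
(purely hypothetical, since the if_not_exists parameter does not exist
yet):

    SELECT path, begin,
           pg_read_binary_file(path, begin, len) AS chunk
    FROM fetchchunks
    WHERE pg_stat_file(path, true) IS NOT NULL;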
-- 
Michael


rewind_test.bash
Description: Binary data


20150616_pgrewind_sleep.patch
Description: binary/octet-stream



Re: [HACKERS] Need Multixact Freezing Docs

2015-06-16 Thread Jim Nasby

On 6/14/15 9:50 AM, Alvaro Herrera wrote:

+   values[0] = MultiXactState->oldestMultiXactId;


What about oldestOffset and offsetStopLimit? Seems those would be useful 
too. Looks good other than that.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] On columnar storage

2015-06-16 Thread Jim Nasby

On 6/14/15 10:22 AM, Alvaro Herrera wrote:

To me, it feels like there are two different features here that would
be better separated.  First, there's the idea of having a table that
gets auto-joined to other tables whenever you access it, so that the
user sees one really wide table but really the data is segregated by
column groups under the hood.  That's a neat idea.

Thanks.  (It also seems pretty tricky to implement.)


I look at it as a form of vertical partitioning; the big difference 
being whether you normalize the columns out or not (or to use data 
warehouse parlance, slow vs fast changing dimensions).


Perhaps it would be useful to vet this out as a userspace extension 
first since that would presumably be much easier. I believe we actually 
have all the backend infrastructure that would be needed for this now 
that views are smart enough to exclude tables that aren't referenced at 
all. I suspect that even a 'dumb userspace' approach would still expose 
a lot of the planner problems we'll run into (join explosion and 
filtering through the join come to mind).
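
Just to illustrate the shape of that 'dumb userspace' approach (all
names made up; only a sketch, not a tested design):

    CREATE TABLE t_hot  (id int PRIMARY KEY, a int, b int);
    CREATE TABLE t_cold (id int PRIMARY KEY REFERENCES t_hot,
                         c text, d text);
    CREATE VIEW t AS
        SELECT h.id, h.a, h.b, c.c, c.d
        FROM t_hot h
        LEFT JOIN t_cold c USING (id);

A query against t that touches only id, a and b should then be able to
skip t_cold entirely via join removal, since t_cold.id is unique.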


Related to the idea of an 'auto join', I do wish we had the ability to 
access columns in a referenced FK table from a referring key; something 
like SELECT customer_id.first_name FROM invoice (which would be 
translated to SELECT first_name FROM invoice JOIN customer USING 
(customer_id)).

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] pg_stat_*_columns?

2015-06-16 Thread Jim Nasby

On 6/8/15 3:26 PM, Joel Jacobson wrote:

So I've heard from Magnus Hagander today IRL at our Stockholm PostgreSQL
User Group meeting where we discussed this idea. He told me the overhead
in the statistics collector is mainly when reading from it, not that
much when writing to it.


I've heard enough stories of people moving the stats files to faster 
storage that I'm not sure how true that really is...



Magnus' idea was to first optimize the collector to make it less of a
problem to collect more data. Sounds like a good thing to do, but maybe
more data in it wouldn't be a problem as long as you don't read from it
too often?


The stats collector is a known problem under certain circumstances, so 
improving it would probably be a good thing. The first thing that comes 
to mind is splitting it into more files.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] [Proposal] More Vacuum Statistics

2015-06-16 Thread Syed, Rahila
Hello,

Maybe, for DBAs, it might be better to show vacuum progress in
pg_stat_activity (if we did, we could add a free-style column like
progress?). This column might also be usable for other long-running
commands like ANALYZE, CREATE INDEX/REINDEX and COPY. To realize this
feature, we certainly need to properly change pgstat_report_activity,
use it more, and add a new track-activity parameter.

A very similar idea was proposed in the following:
http://www.postgresql.org/message-id/1284756643.25048.42.ca...@vanquo.pezone.net

IIUC, the problem with showing progress in pg_stat_activity is that it
introduces compulsory progress-calculation overhead in core for every
command. As the work units of each command vary, a common infrastructure
might not be able to represent every command's progress effectively.
An architecture which displays progress only on the user's demand, for
each command separately, would be more efficient.
So the suggestion was rather to have a detailed progress report,
including the remaining time, for a command on the user's demand.

FWIW, I am working on designing an approach to report VACUUM progress
stats, for which I will be posting a detailed proposal.
The use case is reporting progress for long-running VACUUMs. The
approach involves using hooks to extract VACUUM progress statistics.
The progress could be displayed using a psql view (e.g.
pg_stat_maintenance).
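
Something like this, purely as an illustration (the view name and all
column names are hypothetical at this point):

    SELECT pid, relid::regclass, phase, percent_done, remaining_time
    FROM pg_stat_maintenance
    WHERE command = 'VACUUM';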

Thank you,
Rahila Syed


-Original Message-
From: pgsql-hackers-ow...@postgresql.org 
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Naoya Anzai
Sent: Tuesday, June 16, 2015 8:41 AM
To: Tomas Vondra
Cc: pgsql-hackers@postgresql.org; Akio Iwaasa; bench.cof...@gmail.com; Tom 
Lane; Jeff Janes; Jim Nasby; Andres Freund; Alvaro Herrera
Subject: Re: [HACKERS] [Proposal] More Vacuum Statistics

Hi,

Thank you for the comments, and sorry for my late response.

 
 pg_stat_vacuum view
 

 I understand it is not good to simply add more counters in
 pg_stat_*_tables. For now, I'd like to suggest an extension which can
 confirm vacuum statistics, like pg_stat_statements.

 A similar feature is already provided by the pg_statsinfo package.
 But it is a full-stack package for PG stats, and introducing it requires
 redesigning pg_log and designing a repository database.
 And it is not a core extension for PostgreSQL.
 (I don't intend to hate pg_statsinfo;
   I think this package is a very convenient tool)

 Everyone will be able to tune VACUUM more easily.
 That's all I want.

I'm still wondering whether these stats will really make the tuning any 
easier. What I do right now is look at pg_stat_all_tables.n_dead_tup, 
and if it exceeds some threshold, it's a sign that vacuum may need a 
bit of tuning. Sometimes it really requires tuning vacuum itself, but 
more often than not it's due to something else (a large bulk delete, 
autovacuum getting stuck on another table, ...). I don't see how the 
new stats would make this any easier.

Can you give some examples on how the new stats might be used (and 
where the current stats are insufficient)? What use cases do you 
imagine for those stats?

pg_stat_vacuum can keep histories of vacuum statistics for each
table/index in shared memory. (It keeps more than just the last vacuum,
which is all that pg_stat_all_tables can already confirm.) It makes
analysis of vacuum histories easier because this view can sort,
aggregate, or filter; for example:
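
(The view and column names below are hypothetical, since pg_stat_vacuum
does not exist yet.)

    SELECT relname, count(*) AS runs,
           avg(duration) AS avg_duration,
           max(dead_tuples_remaining) AS worst_leftover
    FROM pg_stat_vacuum
    GROUP BY relname
    ORDER BY avg_duration DESC;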

My use cases for those stats are the following:

- examine the TRANSITION of vacuum execution time on any table (you can
predict the future vacuum execution time)
- examine the EXECUTION INTERVAL of vacuum for each table (if too
frequent, the vacuum threshold should be tuned up)
- examine the REST of dead tuples just after vacuum (if dead tuples
remain, it may be due to idle-in-transaction sessions)


It might help differentiate the autovacuum activity from the rest of 
the system (e.g. there's a lot of I/O going on - how much of that is 
coming from autovacuum workers?). This would however require more 
fine-grained reporting, because often the vacuums run for a very long 
time, especially on very large tables (which is exactly the case when 
this might be handy) - I just had a VACUUM that ran for 12 hours. These 
jobs should report the stats incrementally, not just once at the very 
end, because that makes it rather useless IMNSHO.

+1

Certainly, VACUUM often has a long execution time; I just had one too.
At present, we cannot predict when a vacuum will finish, what it is
doing now, or whether it has any problem.

Maybe, for DBAs, it might be better to show vacuum progress in
pg_stat_activity (if we did, we could add a free-style column like
progress?). This column might also be usable for other long-running
commands like ANALYZE, CREATE INDEX/REINDEX and COPY. To realize this
feature, we certainly need to properly change pgstat_report_activity,
use it more, and add a new track-activity parameter.

Regards,

Anzai Naoya
---

Re: [HACKERS] could not adopt C locale failure at startup on Windows

2015-06-16 Thread Noah Misch
On Mon, Jun 15, 2015 at 08:47:12AM -0400, Tom Lane wrote:
 Noah Misch n...@leadboat.com writes:
  While Windows was the bellwether, harm potential is greater on non-Windows
  systems.  pg_perm_setlocale() sets the LC_CTYPE environment variable to help
  PL/Perl avoid clobbering the process locale; see plperl_init_interp()
  comments.  However, that function has bespoke code for Windows, on which
  setting the environment variable doesn't help.  I don't know which other
  platforms invalidate previous setlocale() return values on
  setlocale(LC_CTYPE, NULL).  Therefore, I propose committing the attached
  diagnostic patch and reverting it after about one buildfarm cycle.  It will
  make affected configurations fail hard, and then I'll have a notion about
  the prevalence of damage to expect in the field.
 
 I doubt this will teach us anything; if any buildfarm systems were
 exhibiting the issue, they'd have been failing all along, no?

No; most systems let environment variables carry arbitrary strings of non-nul
bytes, so they don't see $SUBJECT.  I want to probe for all systems that are
currently issuing putenv(LC_CTYPE=garbage), not just the ones where a
picky putenv() illuminates it.




Re: [HACKERS] 9.5 release notes

2015-06-16 Thread Bruce Momjian
On Sun, Jun 14, 2015 at 11:21:35AM -0400, Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  OK, new idea.  What about, instead of having the last name be all-caps,
  we have the last name start with an uppercase letter, then use smallcaps
  for the rest of the last name:
  https://en.wikipedia.org/wiki/Small_caps
  That way, the last name will not appear too large, but will be clear as
  something different from other names.  Peter, I assume small-caps is
  possible.
 
 FWIW, I vote strongly against having any contributor names in caps in the
 release notes.  It would be visually distracting, and it would make the
 name look like the most important thing in the entry, while in point of
 fact it's the *least* important.  (Maybe not to the contributor, but
 certainly to anybody else.)

Yes, we are already trying to deemphasize contributor names, so
capitalizing them is certainly moving in the wrong direction.

 For pretty much the same reason, I'm not in favor of small caps either.
 Even assuming we can do that consistently (which I bet we can't; we
 do not have all that much control over how web browsers render HTML),
 it would be calling attention to itself, which is exactly not the result
 I think we should be after.

I am sure almost every browser can render smallcaps, even if it doesn't
have a smallcaps-specific font installed --- same for PDF.  We couldn't
do that for a text file, but I don't think we ship a text HISTORY file
anymore.

I didn't think smallcaps would be any more visible than standard
lower-case text.  In fact, smallcaps is designed to match the font size
of lowercase letters.  I would run a test, but it doesn't seem we have
any references to smallcaps in our SGML files.  This could get tricky
because, as I remember, the rendering control is in the web style
sheets, and you would need to have all renderings do the same thing.

Any new ideas on how we can signify family names first?  It seems
culturally-insensitive to always put the family name last if people
don't want that, but it also seems odd to have a mix of ordering in the
same document.  Certainly others must have had the same issue.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] 9.5 feature count

2015-06-16 Thread Bruce Momjian
On Sun, Jun 14, 2015 at 02:12:16PM -0400, Bruce Momjian wrote:
 I have run a script to count the number of listitem items in the
 major release notes of each major version of Postgres back to 7.4:
 
   7.4  280
   8.0  238
   8.1  187
   8.2  230
   8.3  237
   8.4  330
   9.0  252
   9.1  213
   9.2  250
   9.3  187
   9.4  217
   9.5  176
 
 The 9.5 number will only change a little by 9.5 final.

FYI, all final releases have 5-10 listed major items which are repeats
of other items, so the final 9.5 count will be slightly higher.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +

