Re: [ADMIN] Re: best practice for moving millions of rows to child table when setting up partitioning?
On Wed, Apr 27, 2011 at 04:17:16PM -0400, Mark Stosberg wrote:
> On 04/27/2011 10:48 AM, Mark Stosberg wrote:
> >
> > Hello,
> >
> > I'm working on moving a table with over 30 million rows to be
> > partitioned. The table sees several inserts per second. It's
> > essentially an activity log that only sees insert activity and is
> > lightly used for reporting, such that queries against it can safely be
> > disabled during a transition.
> >
> > I'm looking for recommendations for a way to do this that will be least
> > disruptive to the flow of insert statements that will continue to
> > stream in.
> >
> > Here's the plan which seems best to me at the moment. Is it
> > reasonable?
>
> I revised my plan based on feedback and the resources mentioned here, and
> also due to my own concerns about the resource and performance impact of
> using the INSERT .. SELECT pattern on millions of rows at a time.
>
> Here's my revised plan for the transition, which avoids using large
> INSERT .. SELECT statements, and only requires a minimal amount of the
> transition work to happen once the application has started to insert
> data into the child tables. Seem reasonable?
>
> New Plan
> --------
>
> 1. Create the child tables targeted to contain data from the parent as
>    standalone tables (including archive tables and the current month).
>
> 2. Take a full database backup from this morning and extract the COPY
>    statement for the parent table. Manually split it up by date to create
>    multiple COPY statements, one for each partition.
>
> 3. Run the COPY statements to load the data into each child table. So
>    far, everything has happened outside of application access.
>
> 4. Find the max ID that has been inserted in the current child table.
>
> 5. INSERT .. SELECT the missing rows from the last backup from the parent
>    table to the current child table-to-be. Again, note the max ID.
>
> 6. Now, during a maintenance window:
>    - Alter the child tables to inherit the parent.
>    - Set up the trigger which starts redirecting inserts
>      from the parent table to the child tables.
>    - INSERT .. SELECT the final few missing rows from
>      the parent to the current child.
>    - TRUNCATE then CLUSTER the parent table.

Hi Mark,

I used a similar process to migrate to a partitioned table from a
non-partitioned table. However, I created the future partitions first and
put them into place. Then I updated the trigger to push to the child
tables. Then, once the in-use "daily" or "weekly" table rolled over, I
back-filled the existing tables from the big table.

Anyway, my two cents.

Regards,
Ken

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
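For reference, the "alter to inherit, then redirect" steps in the plan above can be sketched like this (the table and column names are made up for illustration; a real trigger needs one branch per partition, or dynamic routing):

```sql
-- A sketch: attach a standalone child, then redirect inserts.
-- activity_log / activity_log_2011_04 are hypothetical names.
ALTER TABLE activity_log_2011_04 INHERIT activity_log;

CREATE OR REPLACE FUNCTION activity_log_insert() RETURNS trigger AS $$
BEGIN
    -- route new rows to the active child table
    INSERT INTO activity_log_2011_04 VALUES (NEW.*);
    RETURN NULL;  -- suppress the insert into the parent itself
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER activity_log_partition
    BEFORE INSERT ON activity_log
    FOR EACH ROW EXECUTE PROCEDURE activity_log_insert();
```

This is the trigger-based partitioning idiom from the PostgreSQL manual of that era; once the trigger is in place, inserts against the parent land in the child, which is what makes the final INSERT .. SELECT catch-up small.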
Re: [ADMIN] number of rows in analyze
On Mon, Aug 01, 2011 at 04:42:23PM +0200, Willy-Bas Loos wrote:
> Hi,
>
> I've been wondering how to set the number of rows that are scanned by
> analyze. (I want to increase it.) I couldn't find it at first, but when
> I read closer I found that it does:
> "The largest statistics target among the columns being analyzed
> determines the number of table rows sampled to prepare the
> statistics."
> (http://www.postgresql.org/docs/8.4/interactive/sql-analyze.html)
>
> My question is *HOW* does it "determine the number of table rows
> sampled" with the statistics target? What is the formula behind it?
>
> I am not familiar enough with the source code to figure that out
> myself, sorry about that. It would be good to include in the docs if we
> find out.
>
> Cheers,
> WBL

Peruse the above reference a bit more carefully. It is actually explained
pretty well.

Regards,
Ken
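The knobs that feed that calculation are the per-column statistics target and its global default; ANALYZE samples roughly 300 times the largest target in rows. A sketch (the table and column names are assumptions):

```sql
-- Raise the target for one column; ANALYZE then samples roughly
-- 300 * target rows for the whole table. The global default is
-- default_statistics_target in postgresql.conf.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
ANALYZE VERBOSE orders;   -- VERBOSE reports how many rows were sampled
```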
Re: [ADMIN] User password encryption using a stronger hashing function?
On Wed, Dec 28, 2011 at 11:13:22AM +0100, lst_ho...@kwsoft.de wrote:
> Zitat von Craig Ringer:
>
> >On 22/12/2011 5:41 PM, lst_ho...@kwsoft.de wrote:
> >>Zitat von "Liu, Jianli (Jianli)":
> >>
> >>>No one has responded to my questions. I am wondering if anyone
> >>>can point me to where in the postgresql source code I can further
> >>>look into the issue and explore the possibility of this
> >>>change? Thanks.
> >>
> >>I was already looking for the same topic but could not find
> >>anything useful. Maybe no one is using the built-in password
> >>encryption, but does it at the application level with transparent
> >>data access?
> >>
> >>Maybe one of the developers can explain why the choices for
> >>built-in password encryption are that limited.
> >>
> >I'm not a Pg dev, but I won't be surprised if the v3 protocol
> >doesn't support other hash functions. I haven't looked into it. If
> >you'd like to know more I suggest you search for things like
> >"md5", "sha1", "sha256", "stronger hash" etc. in the pgsql-hackers
> >mailing list archives.
> >
> >The main downside of md5 as a password hash function is that the
> >result is small and quick to compute by the standards of today's
> >hashes, so rainbow tables are easier to build and can have better
> >coverage. Brute-force cracking is also faster, but there's not as
> >much difference there.
> >
> >If you have a need for stronger hashing functions you might want
> >to contact one of the consultants who does contract work on
> >PostgreSQL development and find out what'd be involved in funding
> >the development of the feature. Think about why you need it first,
> >though; what threat(s) are you trying to protect against?
>
> The reasoning is that if your database contents get lost, md5-hashed
> passwords are in danger of being cracked these days, and most of the
> time passwords are reused in other places. With stronger hashes at
> least the password itself would be somewhat safe. But as said, in many
> environments the application does not use database users anyway, but
> does its own user management with hopefully stronger encryption of the
> passwords.
>
> Thanks
> Andreas

Exactly. You need to use GSSAPI or something else to secure it. Then the
passwords are not available to be decrypted in the database and you can
use much more extensive encryption for them.

Regards,
Ken
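As a sketch of Ken's suggestion in pg_hba.conf terms (the addresses and map name are assumptions; the server must be built with GSSAPI support, and Kerberos with a keytab must be configured separately):

```text
# pg_hba.conf -- authenticate TCP logins via Kerberos/GSSAPI, so no
# password hash is stored or verified by PostgreSQL at all
host    all    all    192.0.2.0/24    gss    include_realm=0 map=krb
```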
Re: [ADMIN] Getting "ident" to work in Windows
On Fri, May 04, 2012 at 01:57:47PM +0200, Thomas Kellerer wrote:
> Hi,
>
> I'm trying to enable "ident" authentication on my Windows machine using
> PostgreSQL 9.1.
>
> However, as soon as I add the following line to pg_hba.conf
>
>    host    all    all    ::1/128    ident sameuser
>
> I get an error message when trying to login (after a server reload of
> course):
>
>    psql: FATAL: could not load pg_hba.conf
>
> It is precisely that line above. As soon as I remove it I can log on
> again (with a username/password of course).
>
> I did verify that my connections are done through IPv6.
>
> Here is the current content of my pg_hba.conf:
>
> ---- snip ----
> # TYPE    DATABASE    USER    ADDRESS         METHOD
>
> # IPv4 local connections:
> host      all         all     127.0.0.1/32    md5
>
> # IPv6 local connections:
> host      all         all     ::1/128         md5
> ---- snip ----
>
> Ideally I would like to make "ident" work with a custom map, using
> "ident tkmap". But that throws the same error, which isn't really
> surprising as "ident tkmap" is not working either.
>
> Is ident not supported on Windows? I could not find anything related to
> that in the manual.
>
> Regards
> Thomas

I believe that is the case. There is no Windows ident support.

Regards,
Ken
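For what it's worth, the load-time FATAL may simply be a syntax issue: the two-word "ident sameuser" form was removed back in 8.4, which by itself makes 9.1 reject the whole file. A sketch of the 9.1 spelling (the map contents are hypothetical; this does not change whether ident lookups actually work on the platform):

```text
# pg_hba.conf (9.1 syntax) -- "sameuser" is the default when no map is
# given, so plain "ident" replaces the old "ident sameuser"
host    all    all    ::1/128    ident map=tkmap

# pg_ident.conf -- hypothetical entries for the "tkmap" map
# MAPNAME    SYSTEM-USERNAME    PG-USERNAME
tkmap        thomas             postgres
```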
Re: [ADMIN] Very long "idle in transaction" query
On Fri, May 04, 2012 at 03:22:47PM +0530, Gnanakumar wrote:
> > SELECT application_name, client_addr, client_hostname, client_port
> > FROM pg_stat_activity
> > WHERE procpid = 14740;
> >
> > (Replace 14740 with the process ID of the "idle in transaction"
> > backend.)
> >
> > Look on the client machine and find the process that holds TCP port
> > "client_port" open (on Linux you can use "lsof" for that).
>
> Ours is a web-based application and all calls are made "only" from where
> the web server is running. No external calls are allowed in my case.
> Hence, the "only" client machine in my case is the web server.

You may also want to consider setting a statement_timeout to prevent this
until you can find the problem with the application.

Regards,
Ken
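A sketch of setting it (the role name is an assumption; as the follow-up in this thread points out, statement_timeout only cancels long-running statements, it does not close a transaction that is sitting idle):

```sql
-- Cancel any statement that runs longer than 30 seconds for this role;
-- "webapp" is a hypothetical role name.
ALTER ROLE webapp SET statement_timeout = '30s';

-- Or just for the current session:
SET statement_timeout = '30s';
```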
Re: [ADMIN] Very long "idle in transaction" query
On Fri, May 04, 2012 at 08:57:08AM -0500, Kevin Grittner wrote:
> "k...@rice.edu" wrote:
>
> > You may also want to consider setting a statement_timeout to
> > prevent this until you can find the problem with the application.
>
> How would a statement timeout help close a transaction on an idle
> connection? It's idle because no statements are being run.
>
> -Kevin

Yes, you are correct. I misunderstood. Too bad we do not have a
transaction_timeout.

Regards,
Ken
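In the meantime, the offenders can at least be spotted (and, cautiously, terminated by hand). A sketch against the pre-9.2 pg_stat_activity layout, where the query text lives in current_query and the backend PID in procpid:

```sql
-- Backends whose transaction has been open for over 5 minutes while
-- the connection sits idle (pre-9.2 column names: procpid, current_query)
SELECT procpid, usename, client_addr,
       now() - xact_start AS xact_age
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
  AND now() - xact_start > interval '5 minutes';
```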
Re: [ADMIN] CPU Load question / PgBouncer config
On Mon, May 14, 2012 at 11:02:30AM -0600, Benjamin Krajmalnik wrote:
> I am experiencing a little higher CPU load than I would like to see, and
> was wondering if it has to do with the number of connections (although
> many are idle).
>
> I am running PG 9.0.4/amd64 on FreeBSD 8.1, dual boxes running
> streaming replication.
>
> Hardware is a 16 core box with 96GB RAM, using 6GB of shared buffers.
> RAID 1 for OS, 12 drive RAID 10 for data, RAID 1 for logs.
>
> I am using pgbouncer as a connection pooler, with 200 max client
> connections, a default pool size of 100, and a reserve of 80.
>
> The number of backends running is between 40 and 70. I monitor the
> number of non-idle connections. Average is about 20, with infrequent
> spikes going to about 40.
>
> My gut feeling is that I can probably reduce the pool size somewhat.
>
> Would reducing the number of backends relieve some of the load on the
> server (I am seeing a 5 min load of 12 more often than I would like
> to)?
>
> Also, for this size setup, what would be the recommended values for the
> pool sizes? I am certain the values are way above what they should be,
> but not certain.
>
> I posted here instead of the pgbouncer group hoping someone has
> experienced similar things.
>
> Presently, I have the following load:
>
> last pid: 96231;  load averages: 12.18, 11.07, 10.97  up 112+19:23:29  11:02:05
>
> and this is the result from checking the state of pgbouncer:
>
> pgbouncer=# SHOW POOLS;
>  database  | user      | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait
> -----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------
>  ishield   | xxx       |        48 |          0 |        48 |       4 |       2 |         0 |        0 |       0
>  pgbouncer | pgbouncer |         1 |          0 |         0 |       0 |       0 |         0 |        0 |       0
> (2 rows)
>
> pgbouncer=# SHOW POOLS;
>  database  | user      | cl_active | cl_waiting | sv_active | sv_idle | sv_used | sv_tested | sv_login | maxwait
> -----------+-----------+-----------+------------+-----------+---------+---------+-----------+----------+---------
>  ishield   | xxx       |        55 |          0 |        55 |       0 |       2 |         0 |        0 |       0
>  pgbouncer | pgbouncer |         1 |          0 |         0 |       0 |       0 |         0 |        0 |       0
> (2 rows)
>
> Thanks in advance,
> Benjamin

Hi Benjamin,

For a 16-core box a load of 12 is only at 75% capacity, so it sounds like
you are doing just fine. It does seem like your connection pool is a bit
large for your hardware. A common recommendation is:

  (number_of_cores * 2) + effective_spindle_count

or for your box (12 drives in RAID 10 = 6 effective spindles):

  (16 * 2) + 6 = 38

It looks like you are about 2.5X that on your system.

Regards,
Ken
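That formula translates into something like the following pgbouncer.ini fragment (a sketch; the database entry and exact values are assumptions, rounding 38 up to 40):

```text
; pgbouncer.ini -- pool sized per (cores * 2) + spindles = 38
[databases]
ishield = host=127.0.0.1 port=5432 dbname=ishield

[pgbouncer]
max_client_conn   = 200  ; clients beyond the pool queue rather than fail
default_pool_size = 40   ; down from 100
reserve_pool_size = 5    ; down from 80
```

The point of the smaller pool is that clients above the limit wait briefly in pgbouncer's queue instead of piling up as mostly-idle server backends.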
Re: [ADMIN] What does a .1 suffix on a PG data file signify?
On Wed, May 16, 2012 at 02:52:44PM -0400, David Schnur wrote:
> I'm investigating a 'could not stat file' error that points to a file
> "base/16384/52212305.1". All the data files I've ever seen have names
> that are whole numbers; I've never seen a decimal suffix. It occurs to
> me that perhaps this is some kind of temp file, or a system for avoiding
> duplicate file names, both of which might be relevant in the context of
> this error.
>
> Can anyone confirm what the presence of the .1 suffix means?
>
> I'm using Postgres 8.3.15 on Windows.
>
> Thanks,
> David

Hi David,

Once a relation exceeds its first 1GB of data, a new file is created with
a .1 appended for the 2nd GB, and so on. For small databases, you may
never see a .1, .2, ...

Regards,
Ken
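You can map the numeric filename back to a relation with a catalog query like this (run it while connected to the database whose directory, 16384 here, contains the file):

```sql
-- Which relation do base/16384/52212305 and its .1 segment belong to?
SELECT relname, relkind
FROM pg_class
WHERE relfilenode = 52212305;
```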
Re: [ADMIN] "Data import from Mysql to MS Sql Server"
On Thu, Jul 26, 2012 at 01:16:42PM +0530, Rajat Sangal wrote:
> Hello Everyone,
>
> I want to import data from MySQL to MS SQL Server 2008 using vb.net.
> So please help me. Please reply as soon as possible.

You should probably post this to a MySQL or MSSQL mailing list or forum.
This venue is for PostgreSQL related issues.

Cheers,
Ken
Re: [ADMIN] logging full queries separately
On Sat, Feb 23, 2013 at 12:55:07AM -0500, Charles Sprickman wrote:
> Howdy,
>
> I've been finding that keeping full query logs is quite helpful; I
> started doing this to be able to run pgbadger each day to get a nice
> overview of what's going on with the db servers. The one huge downside
> is that the query logging is so noisy that it's hard to track down
> errors and stats in real time.
>
> Looking at the manual, I don't see any creative solutions to split the
> query logs off from the rest of the logs. Is there anything I've
> overlooked to take care of this?
>
> Right now I'm considering just switching to syslog-ng or rsyslog and
> seeing if there's a way I can parse the non-query messages out and dump
> them in another logfile... Any pointers on that? Without breaking
> compatibility with pgbadger, what log_line_prefix might make it easier
> to pick out statements (even when they span multiple lines)?
>
> Thanks,
> Charles

Hi Charles,

Using syslog will work well for this. I would definitely recommend
rsyslog over syslog-ng. We have a system running both, and rsyslog is
more efficient, has less overhead, and is less costly than syslog-ng for
some functionality.

Regards,
Ken
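Under rsyslog, splitting statement lines into their own file can be sketched like this (the facility and file paths are assumptions; very old rsyslog versions use "& ~" where newer ones use "& stop"):

```text
# postgresql.conf side:
#   log_destination = 'syslog'
#   syslog_facility = 'LOCAL0'

# /etc/rsyslog.d/30-postgres.conf -- statement lines go to one file,
# everything else from PostgreSQL to another
:msg, contains, "statement: "    /var/log/postgresql/queries.log
& stop
local0.*                         /var/log/postgresql/main.log
```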
Re: [ADMIN] [GENERAL] Use, Set Catalog and JDBC questions
On Mon, Feb 25, 2013 at 01:57:11PM -0500, Frank Cavaliero wrote:
> Hi Adrian,
>
> Thanks for the response. The situation is more like the following:
>
> Using the JDBC driver, I connect to database TEST1 and immediately,
> without having to pass username credentials again, I want to use
> database TEST2. In MySQL, you can simply run: USE TEST2. Wondering if
> PostgreSQL has something similar.
>
> Thanks,
> Frank

Hi Frank,

The "USE xxx;" command is a non-standard MySQL extension to the SQL
language. This functionality is not available at the SQL layer and must
be supported by the client application itself, I think. For example, you
can use "\c xxx" in psql to perform that function. If JDBC does not
support it natively, you would need to open a new connection to the new
database.

Regards,
Ken
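In psql it looks like this (the database and user names are just for illustration); note that \c quietly opens a brand-new connection under the hood, it does not switch databases within the existing one:

```text
test1=# \c test2
You are now connected to database "test2" as user "frank".
test2=#
```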
Re: [ADMIN] [SQL] Encrypting PGBouncer to Postgres DB connections
On Wed, Apr 10, 2013 at 11:06:32AM -0700, Bhanu Murthy wrote:
> Hi all,
>
> Can someone please point me to detailed documentation on how to
> secure/encrypt connections between PgBouncer and a PostgreSQL database
> (version 8.4.3)?
>
> Thanks in advance!
>
> Bhanu M. Gandikota
> Cell: (415) 420-7740

Hi Bhanu,

You will need to use your link-encryption tool of choice to tunnel the
connections from pgbouncer to the backend. SSH and stunnel are two that
we have used successfully in the past.

Regards,
Ken
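An stunnel-based setup can be sketched as follows (hosts, ports, and the cert path are assumptions): pgbouncer connects to a local stunnel client, which carries the traffic over TLS to an stunnel server in front of PostgreSQL:

```text
; stunnel.conf on the pgbouncer host (client side)
client = yes

[pg-tunnel]
accept  = 127.0.0.1:6433   ; pgbouncer's [databases] entry points here
connect = db.example.com:6433

; stunnel.conf on the database host (server side)
; cert = /etc/stunnel/server.pem
; [pg-tunnel]
; accept  = 6433
; connect = 127.0.0.1:5432
```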
Re: [ADMIN] Installing multiple instances of Postgres on one FreeBSD server
On Wed, May 01, 2013 at 01:58:21PM -0600, Benjamin Krajmalnik wrote:
> I need to migrate a production database from Postgres 9.0 to Postgres
> 9.2.
>
> Running FreeBSD 8.1/amd64, and presently running Postgres 9.0.4.
>
> Due to the nature of the application, I cannot have any considerable
> downtime, so the pg_dump and install-new-version route is not an option.
>
> I would like to install PG 9.2 on the same server and use Slony to
> replicate the databases, and once everything is up take the 9.0 cluster
> down.
>
> Ideally, I would like to do this from the ports.
>
> Any suggestions will be deeply appreciated.

Hi Benjamin,

Using Slony will definitely work. We have used it many times for this
type of upgrade. Since you are on the same box, have you considered using
pg_upgrade?

Regards,
Ken
Re: [ADMIN] exceeded MAX_ALLOCATED_DESCS while trying to open file
On Tue, May 28, 2013 at 11:28:02AM -0400, steve.tout...@inspq.qc.ca wrote:
> Hi,
>
> Using postgres 9.2, I created a FOREIGN TABLE on a csv file. It works.
> But I need to change the structure of the data, so I created several sql
> selects using UNION, like:
>
>   select ... from <foreign table> where ...
>   union
>   select ... from <foreign table> where ...
>   union
>   ...
>
> It works for the first 10 selects but then I get
>
>   ERROR: exceeded MAX_ALLOCATED_DESCS while trying to open file
>   "/srv/www/data/antidote/registre.csv"
>
> What is MAX_ALLOCATED_DESCS? And how can I modify it without changing
> the code and recompiling? I tried postgresql.conf but this parameter is
> not recognized.
>
> thanks
> steve

Hi Steve,

Each SELECT opens your FOREIGN TABLE, so you are opening the file more
than 10 times. The limit is hardcoded by a #define in
src/backend/storage/file/fd.c, so you would need to recompile the
software yourself to change it. Can you rewrite your query using a single
SELECT, or possibly read the data into a temporary table for processing?

Regards,
Ken
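The temp-table approach might look like this (the foreign table name and columns are made up); the point is that the CSV is opened exactly once, no matter how many SELECT branches follow:

```sql
-- Load the foreign table (i.e. the CSV file) exactly once
CREATE TEMP TABLE registre_tmp AS
    SELECT * FROM registre;

-- Then run the many-branch restructuring against the temp table
SELECT id, valeur, 'faible' AS categorie FROM registre_tmp WHERE valeur < 10
UNION ALL
SELECT id, valeur, 'eleve'  AS categorie FROM registre_tmp WHERE valeur >= 10;
```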
Re: [ADMIN] Upgrading from 9.1 to 9.2 in place, same machine
On Wed, Jul 10, 2013 at 02:08:04PM -0400, Jonathan Nalley wrote:
> On Wed, Jul 10, 2013 at 1:53 PM, Wells Oliver wrote:
> > I have 9.1 running on 5432, and 9.2 running on 5433. The 9.1 database
> > size is 153GB. 9.1 is actively used by systems, 9.2 is just sitting
> > there empty.
> >
> > I'd like to move the 9.1 database to 9.2 without any down time, and
> > ensuring that no data is lost.
> >
> > My original idea was to make 9.2 a slave of 9.1, then switch it over.
> >
> > Is there a better idea? Thanks for any tips.
>
> Have you looked at simply using pg_upgrade?
>
> http://www.postgresql.org/docs/9.2/static/pgupgrade.html

Hi Wells,

I would definitely consider pg_upgrade. You could test it first to get
some timings: make a 9.1 slave on the same box and run pg_upgrade against
it. Then if you have a problem you can restart using the original. You
can also use something like Slony or Bucardo to replicate from 9.1 to 9.2
and then fail over to it that way.

Regards,
Ken
Re: [ADMIN] Upgrading from 9.1 to 9.2 in place, same machine
On Wed, Jul 10, 2013 at 11:52:54AM -0700, Wells Oliver wrote:
> Can anyone speak to the speed of pg_upgrade? Our database is 153GB. From
> 9.1 to 9.2, I'm nervous it might take hours. If it were less than an
> hour, downtime would be more acceptable.

You can use the --link option, which makes hard links instead of
requiring a copy of the data files. It should be pretty fast. Without
--link, a copy of 153GB could take a while. :)

http://www.postgresql.org/docs/9.2/interactive/pgupgrade.html

Ken
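A sketch of the invocation (the paths are assumptions; run as the postgres user with both clusters stopped — and note that once the new cluster has been started after a --link upgrade, the old cluster must not be started again):

```shell
pg_upgrade \
  --old-bindir=/usr/pgsql-9.1/bin \
  --new-bindir=/usr/pgsql-9.2/bin \
  --old-datadir=/var/lib/pgsql/9.1/data \
  --new-datadir=/var/lib/pgsql/9.2/data \
  --link
```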
Re: [ADMIN] Upgrading from 9.1 to 9.2 in place, same machine
On Wed, Jul 10, 2013 at 12:02:06PM -0700, Wells Oliver wrote:
> Hard linking means that you must maintain 9.1's data directory though,
> even after the upgrade, correct? Since it's a link and not a copied
> file.

I think it only hard links the files into the new 9.2 data directory, so
you can delete the old data directory and the files will still be in the
new directory, since the link count will still be > 0.

Ken
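Hard-link semantics in a nutshell (a sketch using GNU coreutils' `stat -c`; the filenames are arbitrary): after `ln`, both names point at the same inode, and deleting one name only decrements the link count, so the data stays reachable through the other.

```shell
echo "data" > old_file
ln old_file new_file        # same inode, link count becomes 2
rm old_file                 # link count drops back to 1
stat -c %h new_file         # the remaining link count
cat new_file                # the data is still there
```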
Re: [ADMIN] Upgrading from 9.1 to 9.2 in place, same machine
On Wed, Jul 10, 2013 at 03:46:28PM -0500, lxnf9...@gmail.com wrote:
> On Wed, 10 Jul 2013, Vincent Lau wrote:
> > How would pg_upgrade be used to upgrade to a new machine? Use NFS?
> > Would I run pg_upgrade on the new server?

I would replicate to the new server from the old using WAL to get an
up-to-date copy of the DB. Shut down the new copy and run pg_upgrade.
Then you can fall back to the old system if there is a problem.

Regards,
Ken