As long as the SQL statement that the view is based on is
still valid, why does it care if the table is dropped and recreated?
--
Tim Uckun
Mobile Intelligence Unit.
--
"There are some who
which is to say
that it's nothing more than a SQL statement. As long as that SQL statement
is valid, parseable, and returns a recordset, it really ought not to care
about oids.
--
Tim Uckun
Mobile Intelligence Unit
engines.
WAL is a backup system.
TOAST is a system for working with rows that exceed the 8K page size
limit.
AFAIK!
What happened to outer joins? Don't you need outer joins to compete with
the big boys?
--
Tim Uckun
Mobile
At 01:37 AM 10/12/2000 -0400, Tom Lane wrote:
Tim Uckun [EMAIL PROTECTED] writes:
What happened to outer joins? Don't you need outer joins to compete with
the big boys?
They're done too ;-)
Wooo Hooo time to break out the champagne
At 04:58 PM 10/12/00 -0400, Louis Bertrand wrote:
Thanks. It helped cheer me up: I'm fighting with MS-Access at the
moment (and losing badly).
Error Number 3135 There is no message for this error.
:wq
Tim Uckun
Due Diligence Inc. http://www.diligence.com/ Americas Background
Investigation
the database in any way. I am surprised
nobody has done this yet. Is there a document which describes how to create
locales?
:wq
Tim Uckun
Due Diligence Inc. http://www.diligence.com/ Americas Background
Investigation Expert.
If your company isn't doing background checks, maybe you haven't
that are dribbled here and there...
You are right of course, but what happens once you have learned it? For me, I
never seem to be able to do the right thing, that being "now that I
have solved the problem I should write it down and submit it to the
maintainers of the document".
:wq
Tim
Is the currency datatype working with Access and ODBC yet?
:wq
Tim Uckun
Due Diligence Inc. http://www.diligence.com/ Americas Background
Investigation Expert.
If your company isn't doing background checks, maybe you haven't considered
the risks of a bad hire.
I just downloaded postgres 8.3 for Windows and noticed that citext is
not an option for columns.
The web site has the source code but no binaries for Windows.
I downloaded the EnterpriseDB installer and noticed that it has the citext.dll
in the lib directory so I copied the dll over to the postgres lib
Where can I get uuid-ossp for Windows? Also, where can I get citext for Windows?
These two are missing from the Windows installer.
Thanks.
citext is not part of core PostgreSQL, nor something we have any
intention to include in the Windows distribution at this time.
Is there an alternative for people wanting a case insensitive collation?
Does anybody know of an open source application which leverages
postgres to build a scheduling/calendaring application? Especially if
it uses some of the datetime types and functions that are unique to
postgres.
I am specifically interested in methods to deal with recurring events
and dealing
If I have a primary key constraint defined in the database do I also
need to create an index on that field for fast lookup?
The documentation on the web seems to imply that the constraint is not
an index. Is that right?
What's the difference between creating a unique, not null index and
setting a
Hey all.
I am using postgres 8.3 with a rails application. I have a column
defined like this.
ALTER TABLE provisions ADD COLUMN provider_id integer;
ALTER TABLE provisions ALTER COLUMN provider_id SET STORAGE PLAIN;
ALTER TABLE provisions ALTER COLUMN provider_id SET NOT NULL;
ALTER TABLE
Not sure how you do this in rails but if you use DEFAULT in the query it
works:
create table a(a int default 5);
CREATE TABLE
insert into a(a) values (DEFAULT);
INSERT 0 1
Unfortunately the SQL is being generated by the ORM. I really don't
want to bypass the ORM; that would be way too
Are you using the ruby-pg interface? I was under the impression it
handled this properly.
I am using postgres-pr
On Fri, Oct 17, 2008 at 4:40 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
On Thu, Oct 16, 2008 at 9:09 PM, Tim Uckun [EMAIL PROTECTED] wrote:
Hey all.
I am using postgres 8.3 with a rails application. I have a column
defined like this.
ALTER TABLE provisions ADD COLUMN provider_id integer
Does anybody know if there is a sample database or text files I can import
to do some performance testing?
I would like to have tables with tens of millions of records if possible.
I would like to have tables with tens of millions of records if possible.
It is easy to create such a table:
test=# create table huge_data_table as select s, md5(s::text) from
generate_series(1,10) s;
Thanks, I'll try something like that.
I guess I can create some random dates or something.
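For what it's worth, a sketch along those lines, generating ten million rows
with a made-up random date column (the table and column names here are just
placeholders):
CREATE TABLE huge_data_table AS
SELECT s,
       md5(s::text) AS payload,
       timestamp '2009-01-01'
         + (random() * 365 * 86400) * interval '1 second' AS created_at
FROM generate_series(1, 10000000) AS s;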
2009/5/28 Eddy Ernesto Baños Fernández eeba...@estudiantes.uci.cu:
Try Cybercluster
I looked into that. There is one piece of documentation that is less
than ten pages long. There is no users group, no listserve, no
community that I can discern.
Do you have experience with it and if so
Does anybody have any experience with Tungsten or Sequoia they would
like to share?
How about pgcluster or cybercluster?
There are a lot of options but not a lot of insight or documentation really.
Also note that the postgres-r web site says it's not production ready
and it will take months if
I don't see any ubuntu packages for 8.4 in the default repositories.
Does anybody know if they will be upgrading the postgresql package to
8.4 or creating a new package for it?
I'd rather use the packages than compile it myself. If anybody has
an argument as to why I should compile it, I am
On Fri, Jul 10, 2009 at 9:22 PM, Stuart Bishop stu...@stuartbishop.net wrote:
On Fri, Jul 10, 2009 at 8:28 AM, Tim Uckun timuc...@gmail.com wrote:
I don't see any ubuntu packages for 8.4 in the default repositories.
Does anybody know if they will be upgrading the postgresql package to
8.4 or
I am using pg_standby to set up a warm standby.
Everything seems to be working OK so far but I do have one question.
I wanted to check to make sure that the replication was happening, so I
created the trigger file, which put the database into live mode. I ran
a query on the standby to make sure the
I am trying to monitor replication lag using zabbix. I have written a
simple script in ruby to get the lag; it goes like this:
require 'date'
require 'yaml'
y = YAML.load `/usr/lib/postgresql/8.3/bin/pg_controldata
/var/lib/postgresql/8.3/main`
last_checkpoint = DateTime.parse( y['Time of latest
On lots of systems, giving group permissions is nearly as bad as giving
world permissions (eg, all the users might be in a users group).
So we don't do it by default. If you want to poke holes in the security
of your own installation, go right ahead.
I decided to see if I could do it without
I am trying to monitor my replication lag with pg_controldata and it's
driving me nuts.
If I run pg_controldata from the command line as user postgres or root
I get the following line
Time of latest checkpoint:              Thu 30 Jul 2009 00:36:12 NZST
If I run it from the crontab I get this
O
cron runs programs in a very limited environment. Things like TZ etc. are
usually not set. To see what your cron sets, just run a shell script with
something like
#!/bin/sh
env
and look at the email you get with the output.
Read the cron/crontab manpage. It tells you how to set
I have a pretty simple query on a pretty simple table with about 60
million records in it.
This is the query.
SELECT * FROM changes WHERE (id > 1935759 and company_id = 4 and
source_model_name = 'CommissionedVisit') ORDER BY id ASC LIMIT 1
The id field is the primary key. The other fields are
It probably thinks the id check is going to be better to limit the result
set.
How many records are there for id > 1935759?
About 40 million or so.
vs
How many records for company_id = 4 and source_model_name =
'CommissionedVisit' ?
If this is a common query you could probably do a
If you try the multi-column index (which is a good idea), be sure that id
is the last of the three columns, since that's the column on which you have
an inequality test rather than an equality test; eg,
(company_id,source_model_name,id).
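For reference, a sketch of the suggested index (the table and column names are
taken from the query quoted above; the index name is invented):
CREATE INDEX changes_company_model_id_idx
    ON changes (company_id, source_model_name, id);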
Interesting. I would have thought the order of the
I just did an upgrade on two of my servers (the main and the
failover). The main went OK but the postgres on the failover won't
start.
Unfortunately there is nothing anywhere telling me what the problem
is. The log file is empty, there is nothing in the /var/log/messages
or /var/log/syslog
I know from IRC that the problem has been solved: there was no space on the
disk.
Unfortunately, I don't have the logs.
Yes. Thanks to everybody on the IRC who helped me out.
The suggestion that was most helpful was to call the postgres binary
directly: /usr/lib/postgresql/8.3/bin/postgres. Calling
I am trying to back up one database and restore it into a new schema in
another database. Database1 has its tables in the public schema;
database2 has some tables in the public schema, but their names will
clash, so the database needs to be restored into a different schema.
I back up like this.
There is no support for that built into pg_dump. You could try:
That's too bad.
* dumping to a text script and doing search-and-replace for the schema
name on the script file.
I did a dump without privileges or owners so I was thinking I could
just replace the
SET search_path = public,
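A rough sketch of that search-and-replace approach (the schema name
"newschema" and the file name are made up; this assumes the dump only
references the schema through its SET search_path lines):
pg_dump --no-owner --no-privileges database1 > database1.sql
sed -i 's/^SET search_path = public/SET search_path = newschema/' database1.sql
psql database2 -c 'CREATE SCHEMA newschema'
psql database2 -f database1.sql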
I have a couple of tables (people and addresses) which are using
serials as primary keys and contain a lot of potentially duplicate data.
The problem is that the data has not been input in a careful way,
so for example you have first_name, middle_name, and last_name fields
but you could have
I have three tables: traffic, sales and dates. Both the traffic table
and the sales table have multiple entries per date, with each row
representing the date, some subdivision, and the total. For example,
every day five divisions could be reporting their sales, so there would
be five entries in the
On Tue, Jul 12, 2011 at 3:01 PM, David Johnston pol...@yahoo.com wrote:
If traffic has 5 records on a date and sales has 4 on the same date you would
output 20 records for that date.
What would I have to do in order to get 9 records instead of 20? Like
a union but with dissimilar schemas.
--
I have two tables, traffic and sales. Each one has a date field and
lists the traffic and sales broken down by various parameters
(multiple rows for each date).
If I run select (select count(*) from traffic) as traffic, (select
count(*) from sales) as sales; I get the following: 49383;167807
if
Only the first SELECT is used to define column types and names (in the case
of NULL AS source2_* I am not positive if you need to cast the NULL or if it
will use the type found in the second SELECT) and I generally put a source
field into the output with a textual representation of which
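A sketch of that padded-UNION approach (all table and column names here are
invented; the point is just that each SELECT lists every output column,
filling the other table's columns with typed NULLs):
SELECT 'traffic' AS source, date, division, visits, NULL::numeric AS amount
FROM traffic
UNION ALL
SELECT 'sales' AS source, date, division, NULL::integer AS visits, amount
FROM sales;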
I am using a library which is emitting SQL like this: SELECT COUNT(*)
FROM batches LIMIT 15 OFFSET 15. The library fails because on postgres
this query returns nothing (not even zero as a result). Presumably it
returns some valid value on mysql and other databases.
Other than hacking the library
Yeah, I figured it would need a subquery. I filed a ticket with the
library. Hopefully they will fix it.
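For reference, the subquery form that works on postgres would look something
like this (keeping the table name from the original query):
SELECT count(*) FROM (
    SELECT * FROM batches LIMIT 15 OFFSET 15
) AS page;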
I am occasionally getting this kind of error when attempting a SELECT statement.
PGError: ERROR: missing chunk number 0 for toast value 27143 in pg_toast_2619
What does this mean? Is some sort of corruption creeping into the database?
Postgres 9.0 on Linux.
Hard to tell. We've seen enough reports like that to make it seem like
there may be some bug buried there, but no one has provided anything to
do any debugging work with. Can you create a reproducible test case?
Not really. I have a nightly process which downloads data and sticks
it into a text field. Afterwards another process reads that text data
and processes it creating rows in another table. The problem occurs in
the last step and at seemingly random intervals. For example one time
it might
A self-contained test case (code and data) that triggers the error.
If it only does so probabilistically, once in every-so-many runs,
that's fine.
I'll see what I can do.
Give me a few days.
Cheers.
Hey Guys.
It's been a while since I looked into this and it seems like new
options have cropped up for postgres HA and scalability. Is there a
consensus on the best way to achieve HA? My primary concern is HA,
but of course any scalability gains would be more than welcome. All
the servers will
On Tue, Jan 17, 2012 at 10:47 AM, David Morton davidmor...@xtra.co.nz wrote:
Is shared storage an option for you ? We've had a fairly pleasant experience
with shared storage partnered up with SLES and its HAE (high availability
extension) suite using a Pacemaker cluster for resource control. On
Virtual servers tend to have lousy storage performance, for what that's
worth. The actual physical resources are being shared by who knows what
other workloads, and they tend to have higher latency than direct-attached
storage or a proper SAN.
I realize that. Eventually we might have to go to
I wonder. If it's a write-heavy database, I totally agree with you. But if
it's mostly read-only, and mostly fits in RAM, then a pgpool of servers
should be faster.
It would be nice to know the usage patterns of this database (and its size).
In this case the databases are small to medium and the usage
On Tue, Jan 17, 2012 at 12:31 PM, David Morton davidmor...@xtra.co.nz wrote:
Have you looked at a 'shared storage' solution based on DRBD ? I configured
a test environment using SLES HAE and DRBD with relative ease and it behaved
very well (can probably supply a build script if you like), there
Only a single-master. If you want a multi-master solution, see Postgres-XC.
Is postgres XC production ready? Can I trust my most valuable data to it?
Cheers.
I have a few clusters running on EC2 using DRBD to replicate between
availability zones. It's not fast, but it works. If your write load is under
30MB/sec it's definitely an option. I run DRBD over SSH tunnels to get around
the random IP address issue. I use heartbeat on top for resource
because you quickly get trapped into OS specific quicksand with these
features.
Isn't that an issue with just about every feature? Besides, the issues
have already been mostly solved. Pgpool already exists. Tatsuo Ishii
says porting it to Windows is just a resource issue, as he doesn't have the
I have a situation where I am pulling CSV data from various sources
and putting it into a database after it is cleaned up and such.
Currently I am doing the bulk of the work outside the database using code,
but I think the work would go much faster if I were to import the
data into temp tables
1. COPY from a text field in a table, like this: COPY from (select
text_field from table where id = 2) as text_data ...
The syntax is a bit different:
CREATE TABLE text_data AS select text_field from table where id=2
Really? Wow, I would have never guessed that. That's awesome.
Thanks.
--
I am trying to understand this bit of documentation about GIST and GIN searches
Also, * can be attached to a lexeme to specify prefix matching:
SELECT to_tsquery('supern:*A & star:A*B');
        to_tsquery
--------------------------
 'supern':*A & 'star':*AB
I tried various experiments but can't
I want to be able to search a lot of fields using queries that use
ILIKE and unfortunately many of the queries will be using the
'%SOMETHING%' or '%SOMETHING' type clauses. Since indexes are useless
on those I was thinking I could use tsvectors but I can't figure out
how to accomplish this.
One
If you're using 9.1, you might look into contrib/pg_trgm instead.
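A minimal sketch of the pg_trgm approach for unanchored ILIKE searches on 9.1
(the table and column names here are invented):
CREATE EXTENSION pg_trgm;
CREATE INDEX people_last_name_trgm_idx
    ON people USING gin (last_name gin_trgm_ops);
-- queries like this can then use the index
SELECT * FROM people WHERE last_name ILIKE '%SOMETHING%';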
If I were to use trgm, would it be better to create a trigram index on
each text field? In the past I have created a text field which
contains the rest of the fields concatenated. That works great as long
as you are looking for
We made most of our text and varchar columns citext data types so that we
could do case-insensitive searches. Is this going to negate most of the
index searches? It appeared to our DBA that it would be easier to use the
citext data type than to need to use ILIKE everywhere.
In the same vein...
Does
However, given the size of this table, I have no idea how long something
like this might take. In general I've had a tough time getting feedback
from postgres on the progress of a query, how long something might take,
etc.
You can always do this which would result in minimum hassles.
It is my understanding that since the citext extension is available, it
gives you what you're asking for, and at least at this point this isn't
going to be part of core.
For me it's more of a workaround than a solution but yes probably good
enough. Collation is more subtle than case
Is there a way to back up a database or a cluster through a database
connection? I mean I want to write some code that connects to the
database remotely and then issues a backup command like it would issue
any other SQL command. I realize the backups would need to reside on
the database server.
--
On Tue, Mar 27, 2012 at 1:00 PM, David Boreham david_l...@boreham.org wrote:
fwiw we run pg_dump locally, compress the resulting file and scp or rsync it
to the remote server.
I wanted to see if I can do that without running pg_dump on the remote
server. That would involve connecting to the
We're also using libpq to trigger backups using NOTIFY from a client
app.
Do you have an example of how this is done?
I am wondering if either of these features are on the plate for
postgres anytime soon? I see conversations going back to 2007 on
updateable views and some conversations about synonyms but obviously
they have never been added to the database for some reason or another.
With regards to synonyms. It
what sorts of operations do synonyms need to support?
truncate?
alter table?
reindex?
I am no expert or anything but I would think they would be like
symlinks so yes to all of the above.
I am following the instructions on the wiki
https://wiki.postgresql.org/wiki/Binary_Replication_Tutorial#PITR.2C_Warm_Standby.2C_Hot_Standby.2C_and_Streaming_Replication
using the 10 minute version of the setup.
On the master I have
postgresql.conf
wal_level = hot_standby
max_wal_senders = 3
I am using 9.1.
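For reference, on 9.1 the standby side would typically also need something
along these lines (the host and user names are placeholders, not taken from
the thread):
# postgresql.conf on the standby
hot_standby = on
# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replicator'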
Apparently it's working now; it took a couple of restarts but it seems
to be going.
Thanks.
On Mon, Jun 25, 2012 at 12:57 PM, Michael Nolan htf...@gmail.com wrote:
On Sun, Jun 24, 2012 at 1:57 AM, Tim Uckun timuc...@gmail.com wrote:
I am following the instructions
I am sure this is intended behavior but it seems odd (and inconvenient) to me.
create database tim_test_copy template tim_test
ERROR: source database tim_test is being accessed by other users
DETAIL: There are 1 other session(s) using the database.
I would presume only reads are required from
Thanks for the explanation.
On Mon, May 6, 2013 at 8:43 PM, Jasen Betts ja...@xnet.co.nz wrote:
On 2013-05-06, Tim Uckun timuc...@gmail.com wrote:
Say I have a select like this.
SELECT * FROM table where field
I have the following query.
with parsed_data as (
SELECT
devicereportedtime ,
DATE_TRUNC('minute', devicereportedtime - (EXTRACT(minute FROM
devicereportedtime)::integer % 5 || 'minutes')::interval) as interval_start
FROM systemevents
WHERE devicereportedtime =
I can always convert it. I am just
not getting the right offset when I convert. That's what's puzzling.
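As an aside, a self-contained sketch of the five-minute bucketing expression
used in that query (equivalent arithmetic, just written without the string
concatenation):
SELECT devicereportedtime,
       date_trunc('minute', devicereportedtime)
         - (extract(minute FROM devicereportedtime)::integer % 5) * interval '1 minute'
         AS interval_start
FROM systemevents;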
On Wed, Oct 2, 2013 at 9:15 PM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
Tim Uckun wrote:
I have the following query.
[...]
SELECT
interval_start,
(interval_start AT TIME ZONE
Theodore tony.theod...@gmail.com wrote:
On 02/10/2013, at 6:49 PM, Tim Uckun timuc...@gmail.com wrote:
The reason for that is that in PostgreSQL there is no time zone
information stored along with a timestamp with time zone,
it is stored in UTC.
That seems unintuitive. What is the difference
I do think it would help to add it to the docs. The name TIMESTAMP WITH
TIME ZONE clearly implies the time zone is stored in the field. One day
there will really be a timestamp type with the time zone embedded in it, and I
wonder what they will call that.
I have a table foo. It has a serial column called id. I execute the
following statements:
ALTER TABLE table_name RENAME TO archived_table_name;
CREATE TABLE table_name (LIKE archived_table_name INCLUDING
DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES);
Archive the table
I cannot readily speak to why you are not seeing sequence ownership as a
dependent when looking at the now-archived table definition.
pgadmin knows it's a dependency, because when you try to drop it you get a
message saying so, but I can't see it in the definition of the table.
BTW is there a
As I've marked here, both default expressions are depending on the
sequence, but there's only one ownership dependency of the sequence
on a column. To complete the switchover you'd need to use ALTER SEQUENCE
... OWNED BY ... to move that ownership dependency to the new table.
Then the old
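A sketch of that last step, assuming the serial's sequence is called
foo_id_seq (the real name depends on how the original table was created):
ALTER SEQUENCE foo_id_seq OWNED BY table_name.id;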
Hi all.
I have the following scenario I want to accomplish.
In order to test a new branch of code I want to create a snapshot of the
live database into a testing database. The code will be deployed after that
and it may run some migrations which will change the schema of the
database. The code
I want to accomplish what I would think would be a simple thing. I
want the root user to be able to connect to the postgres database as
user postgres from the local machine without passwords. Since I am
doing this from a program I don't want to use the su facility.
I have tried a lot of
I suspect you are expecting that the map will cause root to be
logged in as postgres without asking for that. It won't.
What it will do is allow psql -U postgres and similar to work.
That's exactly what I am looking to do. In my case I have a script
that runs as root. I want to log in as
then say you're postgres in the script with the -U (if you're using psql)
AS ROOT:
psql -U postgres -h remote_db dbname
Note that ident doesn't work so well between machines, so you might
want to look at .pgpass
That's what I am trying to get working. In actuality I am using ruby
and
either create a postgres user named 'root' and give it superuser privileges,
In order to do that I need to connect to the database with my script
which is running under the root account.
or switch to a different method of authentication for LOCAL users
I am confused. I presumed the proper
But afterwards, inside the script, you could use su to temporarily switch to
a less privileged user:
... commands running as root
su postgres -c 'psql ' # running as postgres
... running as root again
OK I will try this.
I am very confused about something though. Not one person here
authentication type is controlled via the pg_hba.conf file.
Frankly, I've never used the pg_ident file; it just seems like it would add
more confusion to things. But it appears that to use it you need a
map=mapname option in your pg_hba.conf
That's what I attempted to do. I read the
I just tried with 8.4.1. Started with the default configuration, created
data/pg_ident.conf with:
pg_map root postgres
pg_map postgres postgres
Replaced in pg_hba.conf:
local all all trust
by
local all all
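(The end of that line did not survive the archive; on 8.4 a mapped ident line
would presumably look something like this:)
local   all   all   ident map=pg_map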
Is there a way I can get a list of the top 10 longest running queries
for the day/week/month or the top 10 queries that took the most CPU?
select * from pg_stat_activity only shows the current status.
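One option worth mentioning is the contrib module pg_stat_statements
(available from 8.4 on), which accumulates per-statement statistics; a rough
sketch, noting that it must be preloaded via shared_preload_libraries and
that the column names vary between versions:
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;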
Is there a way to get pg_standby to put timestamps in the output it
generates? I am currently piping the output to a log file, but since it
contains no timestamps it's of limited use to me.
printf "%s %s\n", strftime("%Y-%m-%d %H:%M:%S", localtime(time)), $item;
(untested, and I am not a regular Perl programmer, but File::Tail is the
best library I know of to do this sort of thing)
Ah, if I am going to do that, I suppose something like this would work:
#!/bin/sh
while read
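(The rest of that snippet was cut off in the archive; a complete, untested
version might look like this, with pg_standby's output piped into it:)
#!/bin/sh
# prepend a timestamp to every line read from stdin
while read line; do
    echo "$(date '+%Y-%m-%d %H:%M:%S') $line"
done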
If that works, great. I'm not sure if you'll run afoul of output buffering
in this situation. Clearly you've got the right idea, just need to make
sure it behaves as you expect and doesn't clump the line reads into larger
chunks.
Actually I could not get it to send the output to the pipe
You might use the copy command instead of insert, which is far faster.
If you want the fastest possible inserts, then probably copy is the way
to go instead of insert.
Here is the copy command via the API:
http://www.postgresql.org/docs/current/static/libpq-copy.html
Here is the copy command via SQL:
Technically you *can* disable triggers, including RI checks, but it's VERY
unwise and almost completely defeats the purpose of having the checks. In
most such situations you're much better off dropping the constraints, then
adding them again at the end of the load.
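A sketch of that drop-and-re-add pattern (the table, constraint, and file
names here are invented for illustration):
ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;
COPY orders FROM '/tmp/orders.csv' WITH CSV;
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_id_fkey
    FOREIGN KEY (customer_id) REFERENCES customers (id);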
I know that the SQL server
I, for one, would loudly and firmly resist the addition of such a
feature. Almost-as-fast options such as intelligent re-checking of
Even if it was not the default behavior?
If you really want to do that, look at the manual for how to disable
triggers, but understand that you are throwing
On Thu, Jan 7, 2010 at 3:13 AM, Dimitri Fontaine dfonta...@hi-media.com wrote:
Tim Uckun timuc...@gmail.com writes:
Is there a command like COPY which will insert the data but skip all
triggers and optionally integrity checks.
pg_bulkload does that AFAIK.
That's a great utility
Hello.
I have lots of ruby daemons running connected to postgres. Some of
them start getting connection errors after about a day or two of
running. The odd thing is that they don't all get the same error.
Some get this error: PGError: lost synchronization with server: got
message type T
Others
Most of the cases we've seen like that have been because multiple
threads in the client application were trying to use the same PGconn
connection object concurrently. There's no cross-thread synchronization
built into libpq, so you have to provide the interlocks yourself if
there's any
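In Ruby terms, the simplest fix is to give each thread its own connection
rather than sharing one PGconn; a sketch, assuming the pg gem and an invented
database name:
require 'pg'

# one connection per thread avoids concurrent use of a single PGconn
threads = 4.times.map do
  Thread.new do
    conn = PG.connect(dbname: 'mydb')
    conn.exec('SELECT 1')
    conn.close
  end
end
threads.each(&:join)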
I have this query; it runs reasonably quickly.
SELECT consolidated_urls.* FROM consolidated_urls INNER JOIN topical_urls
ON consolidated_urls.id = topical_urls.consolidated_url_id
WHERE ((topical_urls.domain_id = 157) AND ((topical_urls.hidden = 'f')))
ORDER BY index_delta DESC LIMIT 10
The
2011/1/15 pasman pasmański pasma...@gmail.com:
Try :
order by index_delta+1 desc
I have attached the explain analyze for that below. Why does this
return instantly?
Limit  (cost=29910.05..29910.07 rows=10 width=1880) (actual
time=42.563..42.563 rows=0 loops=1)
  ->  Sort
2011/1/16 pasman pasmański pasma...@gmail.com:
I think this is a planner's bug. Can you send these explains to pgsql-bugs ?
Sure. BTW I thought I would change the query a little by putting an AND
index_value .100 instead of index_delta, and it didn't help at all. I
thought maybe using another
I want to set up a central database and several satellite databases
which use some of the data from the central database.
For example
Say my central database contains people records, with a many to many
relationship with clients records.
Each client has their own database but needs read, write