On Sat, 3 Sep 2016, Adrian Klaver wrote:
I am guessing this from before you managed to get the dump file to load and
populate the appropriate tables with user information from the old Postgres
instance.
Could well be the case. Tomorrow I will try removing all user-generated
databases and
On Sat, 3 Sep 2016, Adrian Klaver wrote:
Or if you are fine running the 9.5 instance at port 5432, what happens if you
do?:
psql -d postgres -U some_user -p 5432
$ psql -d postgres -U rshepard -p 5432
Password for user rshepard:
psql: FATAL: password authentication failed for user
On Sat, 3 Sep 2016, Adrian Klaver wrote:
Once you verify that the new instance is running and you can connect to it
then:
Shut down 9.3.4 using pg_ctl stop as user postgres. Started 9.5.4 as user
postgres using 'pg_ctl start /var/lib/pgsql/9.5/data &'.
psql -U some_user -d postgres -p
On Sat, 3 Sep 2016, Adrian Klaver wrote:
A question: are you really using 9.4.5 or 9.5.4 (the latest version of 9.5)?
And if you want to use 9.4 I would say use the latest (9.4.9).
Adrian,
It is 9.4.5, but I have the source tarball for 9.5.4 in the build
directory.
If the above is a yes, any
On Sat, 3 Sep 2016, Adrian Klaver wrote:
No, it says you are using the 9.4.5 version of psql to connect to a 9.3.4
server. psql is available independent of the server running. You will need to
see if both servers are running by doing something like:
Adrian,
Yeah, that's what it said. :-(
Postgres-9.3.4 was built directly from the web site download source. It's
installed in /usr/lib/postgresql/9.3.4/. Postgresql-9.4.5 was built from the
SlackBuilds.org script and installed in /usr/lib/postgresql/9.4/. The bin/
subdirectory of the 9.4 version has all the executable files, but that
On Fri, 12 Aug 2016, Adrian Klaver wrote:
Or are the various levels tightly coupled and you are looking to start
from scratch?
Adrian,
I want to start from scratch. Yet, I think that I found a solution that
will work better than that.
There's a tool called XRMS that appears to have
On Fri, 12 Aug 2016, Adrian Klaver wrote:
FYI regular(classic) wxPython does not support Python 3. If you are looking
forward you want:
https://www.wxpython.org/Phoenix/docs/html/index.html
Adrian,
Yes, I'm aware of this and decided that this new application should be
written using
I need to expand my client relation management/sales tracking application
from its existing python3/Qt5/SQLite3 version to a more capable one built on
a postgres backend. (For the record, I've been using postgres and sqlite on
linux for almost 20 years now so I'm comfortable with both.)
While designing the schema for a new application I have become high-centered
and stuck relating some many-to-many tables. Fresh eyes and suggestions are
needed on how to create many-to-many association tables among these five.
Table 'Permits': Contains information about each permit, PK is
On Thu, 23 Jul 2015, Melvin Davidson wrote:
Does this help?
Melvin,
Yep.
After pondering David's response I recognized my error: I had the permit
table as the center of the relationships when it should be the conditions
table. As you wrote, permits, parameters, and locations all feed
On Thu, 23 Jul 2015, David G. Johnston wrote:
Conditions - (Permit + Location + Parameter + Frequency + Temporal (from,
until))
Monitoring - (Condition + Event)
While you may require additional modelling tables to support your user
interface (i.e., picking valid combinations of PeLoPa when
On Fri, 5 Sep 2014, John McKown wrote:
They are excellent. They are _not_ for beginners. The For Smarties
portion is not just a play against the For Dummies series. Joe does some
high powered SQL.
I read Joe Celko's columns in Database Advisor and a couple of other
magazines in the '80s,
I've read some on table partitioning and using nested select statements
with group by, but have not found the syntax to produce the needed results.
From a table I extract row counts grouped by three columns:
select stream, sampdate, func_feed_grp, count(*) from benthos group
by stream,
On Fri, 29 Aug 2014, David G Johnston wrote:
You want to use window clause/function.
David,
I read about this, but did not absorb everything.
Add the following to the first query, in the select-list:
Sum(count(*)) over (partition by stream, sampdate) as stream_date_total
You function
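David's window-function suggestion can be sketched end-to-end. This is a minimal demo using SQLite through Python's stdlib so it is self-contained (requires SQLite 3.25+ for window functions); the `benthos` column names come from the thread, but the data values are invented. Postgres accepts `SUM(COUNT(*)) OVER (...)` directly alongside `GROUP BY`; the equivalent subquery form below also runs on SQLite.

```python
# Minimal sketch of the per-group counts plus a per-(stream, sampdate)
# window total; data is invented, column names are from the thread.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE benthos (stream TEXT, sampdate TEXT, func_feed_grp TEXT);
INSERT INTO benthos VALUES
  ('StarvationCrk', '2014-08-01', 'shredder'),
  ('StarvationCrk', '2014-08-01', 'shredder'),
  ('StarvationCrk', '2014-08-01', 'grazer'),
  ('OtherCrk',      '2014-08-01', 'grazer');
""")

# Group counts in a subquery, then a window SUM over each (stream, sampdate).
rows = conn.execute("""
SELECT stream, sampdate, func_feed_grp, cnt,
       SUM(cnt) OVER (PARTITION BY stream, sampdate) AS stream_date_total
FROM (SELECT stream, sampdate, func_feed_grp, COUNT(*) AS cnt
      FROM benthos
      GROUP BY stream, sampdate, func_feed_grp) AS g
ORDER BY stream, func_feed_grp
""").fetchall()
for r in rows:
    print(r)
```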
On Fri, 29 Aug 2014, Adrian Klaver wrote:
I am going to assume you mean Postgres did not like the syntax.
Adrian,
Oops! Mea culpa. Yes, postgres.
What was the error message you got back?
I don't recall. It was yesterday afternoon and I flushed it from memory
when it did not work.
I have some rows in a table where a column attribute has a newline (\n)
appended to the string. How do I represent that newline character in a SQL
statement using psql?
I've tried adding E'\n' to the end of the string but that doesn't work.
Here's what I see when I select distinct for
On Wed, 27 Aug 2014, Jeff Ross wrote:
You want the E in front of the entire string, not just before the \n.
select
'Use Discover: ' || E'\t' || 'Yes' || E'\n'
Jeff,
That did the trick. Turns out that 202 of 204 rows had the newline! The
syntax that worked:
update benthos set stream =
One column in a table has values for the attribute 'stream'. Some queries
return some rows where a stream name (only identified one so far) has an
appended '+'. I cannot update the table to remove that appended character,
and I've not seen this before.
Example:
2220 | STV |
On Sat, 23 Aug 2014, Ian Barwick wrote:
You have a newline character. Try:
select count(*) from benthos where stream = E'StarvationCrk\n';
Ian,
Interesting; that query returned 202 of 204 rows.
Thanks,
Rich
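The exact UPDATE Rich ran is truncated earlier in the thread, but a cleanup along these lines would strip the stray trailing newline from the stream names (a sketch only; the table and column names are from the thread):

```sql
-- One possible cleanup: remove a trailing newline from the stream name.
UPDATE benthos
   SET stream = trim(trailing E'\n' from stream)
 WHERE stream LIKE E'%\n';
```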
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make
On Fri, 22 Aug 2014, Adrian Klaver wrote:
Is this only in psql?
Adrian,
Yes.
Thanks,
Rich
My highest priority has just changed to developing a multiuser database
application. The backend is postgres, of course, and everything else will be
written in Python, SQLAlchemy, and wxPython. This application is at least
one order of magnitude more complicated/sophisticated than any I have
On Wed, 2 Jul 2014, Tom Lane wrote:
I might be misunderstanding, but I think you're looking for GRANT/REVOKE:
GRANT rolename TO newuser;
Tom,
You correctly understand my question. That's the syntax I need and did not
see.
Thank you,
Rich
On Wed, 2 Jul 2014, David G Johnston wrote:
or if you want to do it as part of creating a new user:
CREATE ROLE new_management_user
[other stuff here]
IN ROLE management;
http://www.postgresql.org/docs/9.2/interactive/sql-createrole.html
David,
I'll have to think deeply about what this
On Wed, 2 Jul 2014, Adrian Klaver wrote:
To follow up on my previous post. If you have your own user table, having a
user in that table with the same name as one of the Postgres role does not
mean they pick up the Postgres role permissions.
Adrian,
OK. Now I see the issue.
What I have
On Wed, 2 Jul 2014, David G Johnston wrote:
It's the exact same outcome Tom provided, just via a different mechanism...
You want to assign all permissions to the standard group-roles and then
have new users inherit the appropriate permissions via their membership in
the appropriate group-role.
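David's group-role pattern might be sketched like this (the role and user names here are hypothetical, except `management`, which appears later in the thread):

```sql
-- Grant privileges to a NOLOGIN group role once...
CREATE ROLE management NOLOGIN;
GRANT SELECT, INSERT, UPDATE ON ALL TABLES IN SCHEMA public TO management;

-- ...then login users inherit those privileges through membership.
CREATE ROLE new_management_user LOGIN PASSWORD 'changeme' IN ROLE management;

-- Or, for an existing user:
GRANT management TO existing_user;
```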
On Wed, 2 Jul 2014, Gregory Haase wrote:
If it made you feel better, remember that CREATE USER is an alias for
CREATE ROLE that includes LOGIN by default.
Greg,
Yes, I read that in the docs.
So if you simply swap one word in your command, the context becomes a
little more clear for what
I'm developing a new application and want to take advantage of postgres
features such as triggers and stored procedures and put as much of the
middleware 'business logic' into the database engine as is practical.
Is it possible, or practical, to validate a potential user login within
the
On Tue, 1 Jul 2014, hubert depesz lubaczewski wrote:
That depends. For example - for system that will have 5 users, and
requires strict security policies - it would make sense. On the other
hand, for website, with thousands of users, putting them all as actual
roles in Pg doesn't make much
I have a table with some NULL values in a date column. I need to make a
copy of that table containing only those rows where the date column is not
null. Reading the CREATE TABLE man page I've tried to create a copy of the
original table from which I could drop rows. While the table creating is
On Mon, 16 Jun 2014, Rich Shepard wrote:
While I suspect there's a way to write a SELECT statement for those rows
that are not null and save the results to a different table name, I've not
found the syntax in my postgres and SQL references.
Got it:
# create table benthos as select * from
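The `CREATE TABLE ... AS SELECT` approach can be demonstrated end-to-end; this sketch uses SQLite through Python's stdlib so it is runnable as-is, with invented table and column names:

```python
# Copy only the rows whose date column is not null into a new table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE old_data (site TEXT, sampdate TEXT);
INSERT INTO old_data VALUES ('A', '2014-06-01'), ('B', NULL), ('C', '2014-06-02');

-- The new table gets only rows where sampdate is not null.
CREATE TABLE benthos AS
  SELECT * FROM old_data WHERE sampdate IS NOT NULL;
""")
rows = conn.execute("SELECT site FROM benthos ORDER BY site").fetchall()
print(rows)   # row 'B' (null date) is excluded
```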
On Sat, 14 Jun 2014, Damir Dezeljin wrote:
This is more a theoretical, or better said conceptual, question; still, I
hope to get some feedback from you folks. Additionally this is going to
be a very long post :) Off-topic: I asked a similar question on the MySQL
forum as I'm still undecided
On Sat, 14 Jun 2014, Damir Dezeljin wrote:
I am designing a database for storing various biological and ecological
data. Although there is no clear dividing line, it is possible to group
the data into two groups, namely the measured (physical) and quantitative
(mostly biological) data;
I
Postgresql-9.0.5 is installed in /usr/local/pgsql/ and -9.4's data/
subdirectory is installed in /var/lib/pgsql/9.4/ with the executable in
/usr/bin/.
Running pg_upgrade fails at the end because of a locale difference:
lc_ctype cluster values do not match: old C, new en_US.UTF-8
Failure,
On Fri, 21 Mar 2014, Sergey Konoplev wrote:
Drop your newly created 9.4 cluster dir and re-init it with C locale like this:
Sergey,
How do I drop the cluster? I've not had need to do this before.
initdb --locale=C -D ...
Do I want the former locale or the current one; does it make
On Fri, 21 Mar 2014, Joshua D. Drake wrote:
If you want to use pg_upgrade, then you are going to have to dump your old
database, clean it to UTF-8 (with say iconv), then reimport it.
At that point you might as well just pull it into 9.3.
Joshua,
I did a pg_dumpall before starting this
On Fri, 21 Mar 2014, Rich Shepard wrote:
I did a pg_dumpall before starting this process. If I can use pg_restore
on the running 9.4 will that convert locales and get the job done?
Let me be more specific. Can I get databases working again by doing these
steps?
1.) Start the new -9.4
On Fri, 21 Mar 2014, Rich Shepard wrote:
1.) Start the new -9.4 version running.
It's now running.
2.) Run pg_restore on each database that's in the .sql file created by
pg_dumpall.
Will this work?
psql -e -d template1 -f
/home/rshepard/data/database-backups/pg-9.0.5-2014-03.17
On Fri, 21 Mar 2014, Rich Shepard wrote:
psql -e -d template1 -f
/home/rshepard/data/database-backups/pg-9.0.5-2014-03-17.sql
Yep.
Now all databases are running on 9.3.4.
Thanks, all!
Rich
On Mon, 17 Mar 2014, Rich Shepard wrote:
Now, -9.0.5 is installed in /usr/local/pgsql/ and -9.3.3 is installed in
/opt/pgsql-9.3.3. I want to use pg_upgrade and have read the Web page with
the instructions.
I am having problems initializing the new version in /opt/pgsql-9.3.3. I
kill
On Tue, 18 Mar 2014, Tom Lane wrote:
Your PATH seems to be finding initdb in /bin (or is that /usr/bin), not
the one you want under /opt/pgsql-9.3.3.
Tom,
Thanks for catching what I did not see. There was an initdb from 2011 in
/bin/ and the new one in /usr/bin/ is a softlink to
Now, -9.0.5 is installed in /usr/local/pgsql/ and -9.3.3 is installed in
/opt/pgsql-9.3.3. I want to use pg_upgrade and have read the Web page with
the instructions.
Instruction #4 tells me to install pg_upgrade and pg_upgrade_support. I
have the 9.0.5 version of pg_upgrade in
On Mon, 17 Mar 2014, Tom Lane wrote:
I'm guessing from those path names that you are using self-compiled
executables, not somebody's packaging? If the latter, whose? I'm
confused as to how you got pg_upgrade installed without knowing where it
came from.
Tom,
Both were built with scripts
On Sat, 1 Mar 2014, Vincent Veyron wrote:
I've had success with mdbtools, but on older version of Jet database files
and small files. I've haven't tested it on Access 2007+.
Vincent,
I suspect this file is in the current Access version.
I understand you run linux, but if you can get hold
On Fri, 28 Feb 2014, John McKown wrote:
Is this data generally available? If so, then can you tell us how to get a
copy of the data? If we can get a copy, it might be able to figure out how
to read it.
John,
Here's the URL:
http://www.streamnet.org/datastore_search.cfm?id=410keywords=
On Sat, 1 Mar 2014, Adrian Klaver wrote:
FYI in the Summary / Abstract at the above link there is this:
If you would like these data in a different format or would like help in
using this file, please contact.. contact info
I missed seeing that. Thanks.
Rich
I just downloaded two scientific data files from a federal agency's Web
site. Both are in M$ JetDB format. I run only linux and keep all my
scientific data in postgres.
My Web search did not turn up anything useful; the closest was a thread
from this mail list in 2000 on how to send a
On Fri, 28 Feb 2014, Adrian Klaver wrote:
The only software I could find to directly read MDB files on Linux is :
http://mdbtools.sourceforge.net/
Not sure what version of MDB you have and whether the above is current enough
to read them. Otherwise you need to crank up an Access instance and
On Fri, 28 Feb 2014, Rich Shepard wrote:
And, I discovered 'jet-tool' on code.google.com. That'll do the job, I
believe.
Well, it won't work: it's a M$ .exe file and the source is apparently in
some M$-type of Pascal.
Rich
On Sat, 1 Mar 2014, Thomas Kellerer wrote:
It's not clear to me what exactly you are trying to do.
Or are you just trying to copy the data into the Postgres database?
If the latter you could try one of the JDBC based query tools to export
or copy the data using the UCanAccess JDBC driver for
On Fri, 28 Feb 2014, Adrian Klaver wrote:
Well JET is the database engine for Access and mdb is one of the file
format extensions for Access database files. JET has been used in other MS
products and Access has used different file formats(extensions) over the
years. So you may be in for a chore
On Sat, 1 Mar 2014, Thomas Kellerer wrote:
No, not within psql, but from within a Java/JDBC based query tool (such as my
SQL Workbench)
Oh. OK.
Thanks,
Rich
On Sat, 1 Mar 2014, Ian Lawrence Barwick wrote:
Not sure if this will be of use, but there are a couple of links here:
https://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#Microsoft_Access
I'll post it anyway in case someone comes across this thread in the future.
While I was out of the office yesterday the server shut down unexpectedly.
Now I'm having difficulties getting postgres restarted because it's been so
long since I've done this.
Based on error messages when trying to start postgres I killed the .pid
file and /tmp/.s.PGSQL.5432. Obviously the
On Tue, 7 Jan 2014, Rich Shepard wrote:
What is the process I follow to get postgres running again?
Found the problem and fixed it.
I had touched /tmp/.s.PGSQL.5432 as root and it was a regular file, not a
socket. Stopping postgres, deleting that file, and restarting the server
fixed
I'd like to understand what I'm missing when I COPY a table to a .csv file
but the text is not quoted.
Here's an example of a command:
copy chemistry to '/home/postgres/emapchem.csv' with (format CSV, header True, quote
'', delimiter '|');
and here're part of the header and following row
On Thu, 31 Jan 2013, Adrian Klaver wrote:
http://www.postgresql.org/docs/9.2/interactive/sql-copy.html
Adrian,
I've read this.
CSV Format
...
The values in each record are separated by the DELIMITER character. If
the value contains the delimiter character, the QUOTE character, the NULL
On Thu, 31 Jan 2013, Steve Crawford wrote:
Definition of the CSV Format:
While there are various specifications and implementations for the CSV
format...there is no formal specification in existence, which allows for a
wide variety of interpretations of CSV files.
Steve,
It's been this
On Thu, 31 Jan 2013, Adrian Klaver wrote:
FORCE_QUOTE Forces quoting to be used for all non-NULL values in each
specified column. NULL output is never quoted. If * is specified,
non-NULL values will be quoted in all columns. This option is allowed only
in COPY TO, and only when using CSV
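Putting Adrian's pointer together with the original command, a COPY that quotes every non-NULL value might look like this (a sketch; the file path is the one from the thread):

```sql
-- FORCE_QUOTE * quotes all non-NULL values in every column.
COPY chemistry TO '/home/postgres/emapchem.csv'
  WITH (FORMAT csv, HEADER true, DELIMITER '|', FORCE_QUOTE *);
```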
On Sun, 27 Jan 2013, Jasen Betts wrote:
yeah, emacs is slow on large files.
Jasen,
I've noticed this over the years.
for a one-off I'd use less(1), to extract the desired table data.
If I had to repeat it i'd use sed or awk
I used 'joe'. It handled the job with aplomb.
Thanks,
Rich
On Thu, 24 Jan 2013, Bruno Wolff III wrote:
For a client who needs to learn how to query the db:
I found the postgresql documentation very useful for learning SQL.
Two more suggestions: Rick van der Lans' 'Introduction to SQL, 4th
Edition' for its comprehensiveness and extensive
I neglected to dump a single table before adding additional rows to it via
psql. Naturally, I messed up the table. I have a full pg_dumpall of all
three databases and all their tables in a single .sql file from 2 days ago.
The file is 386M in size and emacs is taking a very long time to move
On Tue, 22 Jan 2013, Rich Shepard wrote:
Is there a way I can extract a single table's schema and data from the
full backup? If so, I can then drop the fubar'd table and do it correctly
this time.
My solution: view the file in the pager I use (less), then copy relevant
lines to another
On Tue, 22 Jan 2013, Joshua D. Drake wrote:
Rich, the main problem is using pg_dumpall. Unfortunately pg_dumpall has
not kept up with all the other advances Postgres has had in the last
decade. To set up dump based backups properly I suggest reviewing:
What is the behavior if a column data type is timestamptz but there is
only the date portion available? There must be a default time; can that be
defined?
Rich
On Mon, 21 Jan 2013, Adrian Klaver wrote:
Easy enough to test:
Thanks again, Adrian.
Rich
On Mon, 21 Jan 2013, Tom Lane wrote:
Note that that default is local midnight according to your current
timezone setting (from which we may guess that Adrian lives on the US west
coast, or somewhere in that general longitude).
Yep. About 3 hours north of me.
Not sure you can change the
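Tom's local-midnight point can be made concrete with a small sketch (the timezone setting here is an assumption, fixed explicitly so the output is reproducible):

```sql
-- A bare date cast to timestamptz picks up local midnight.
SET timezone = 'US/Pacific';
SELECT '2013-01-21'::timestamptz;
-- 2013-01-21 00:00:00-08
```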
On Fri, 18 Jan 2013, Adrian Klaver wrote:
test=> SELECT ('2012-10-29 '||'10:19')::timestamp;
      timestamp
---------------------
 2012-10-29 10:19:00
Thanks, Adrian. I suspected it was simple but I could not find a reference
to the syntax.
Much appreciated,
Rich
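As an aside, adding a time to a date also yields a timestamp, which is an alternative to the string-concatenation cast shown above:

```sql
-- date + time => timestamp
SELECT DATE '2012-10-29' + TIME '10:19';
-- 2012-10-29 10:19:00
```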
My Web searching foo fails me, and I don't see the answer in the postgres
docs so I hope someone here can point me in the proper direction.
There is a table for bacteriological data that contains two columns for
the date and time the water was collected and another two columns for the
date
On Fri, 18 Jan 2013, Adrian Klaver wrote:
How are they stored, as date and time type, strings, other?
Adrian,
ISO date and time.
A sample of the data would help also.
Example: 2012-10-29 | 10:19 | 2012-10-30 | 09:40
Rich
On Fri, 11 Jan 2013, Ken Tanzer wrote:
I'm wondering if anyone can point me towards a good method for moving
mysql data into Postgres?
I had to do this last year with the ITIS (Integrated Taxonomic Information
System) maintained by the US Geological Survey.
Some MySQL key words were
I have the need to develop an application that will use postgres as the
back end, and most of the design has been worked out, but I've one issue
left to resolve and want help in this. If this is not the appropriate forum
for this type of question, please point me in the right direction.
For
On Thu, 3 Jan 2013, Fabrízio de Royes Mello wrote:
And keep in mind that kind of table tend to grow quickly, so you must use
some strategy to purge old historical data or make your audit table
partitioned...
Fabrizio,
There should not be many changes in these tables. And historical data
On Thu, 3 Jan 2013, Adrian Klaver wrote:
As a matter of course I include fields to record the timestamp and user for
insert of records and last update of record on my tables.
Adrian,
This is a useful addition to the application.
For a relatively simple solution see this blog post I put
On Thu, 3 Jan 2013, Bèrto ëd Sèra wrote:
if it's a strict legal requirement you may want to enforce it with a
trigger system, so that each time a record is inserted/updated/deleted
you create an exact copy of it in a historical table, that has the
original record plus data about who performed
What I thought would be a simple, single table select keeps eluding me.
I've looked in Rick van der Lans' book and the Joe Celko books here and
have not learned how to write the query.
The table has a Boolean indicator column with values of 0 or 1 for each
row in the table and another
On Sat, 21 Jul 2012, Chris Angelico wrote:
Try this:
SELECT DISTINCT param FROM table WHERE indicator=0
EXCEPT
SELECT DISTINCT param FROM table WHERE indicator=1
Chris,
Thank you. I knew it was simple, and I've not before used the EXCEPT
condition.
Very much appreciate,
Rich
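Chris's EXCEPT query can be demonstrated end-to-end; this sketch uses SQLite through Python's stdlib so it is runnable as-is, with invented table, column, and parameter names:

```python
# Parameters seen with indicator = 0 but never with indicator = 1.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE readings (param TEXT, indicator INTEGER);
INSERT INTO readings VALUES
  ('pH', 0), ('pH', 1),        -- appears with both indicator values
  ('DO', 0), ('DO', 0),        -- appears only with indicator = 0
  ('Cu', 1);                   -- appears only with indicator = 1
""")
rows = conn.execute("""
SELECT DISTINCT param FROM readings WHERE indicator = 0
EXCEPT
SELECT DISTINCT param FROM readings WHERE indicator = 1
""").fetchall()
print(rows)   # only 'DO' qualifies
```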
Source data has duplicates. I have a file that creates the table then
INSERTS INTO the table all the rows. When I see errors flash by during the
'psql -d database -f file.sql' I try to scroll back in the terminal to
see where the duplicate rows are located. Too often they are too far back to
One table, waterchem, with primary key 'site' has columns easting and
northing with no values in them. A second table, sites, with primary key
'name' has values for easting and northing for each row. I want to update
waterchem with the easting and northing values from sites. My proposed
On Wed, 13 Jun 2012, Thomas Kellerer wrote:
No it's not quite correct:
UPDATE waterchem
SET waterchem.easting = s.easting
waterchem.northing = s.northing
FROM sites AS s
WHERE waterchem.site = s.name;
Thomas,
I wondered about this syntax, and doing each column separately.
Thanks
On Wed, 13 Jun 2012, Thomas Kellerer wrote:
UPDATE waterchem
SET waterchem.easting = s.easting
waterchem.northing = s.northing
FROM sites AS s
WHERE waterchem.site = s.name;
Thomas,
For the record, running this pushed me to the correct syntax:
UPDATE waterchem
SET easting =
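The working statement is truncated above, but given David's note about commas and Postgres's rule that SET targets are written unqualified, it presumably looked like this:

```sql
-- SET targets are unqualified; the two assignments need a comma.
UPDATE waterchem
   SET easting  = s.easting,
       northing = s.northing
  FROM sites AS s
 WHERE waterchem.site = s.name;
```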
On Tue, 12 Jun 2012, David Johnston wrote:
And, to be thorough, you need to put commas between each field you want to
update.
Did that. Didn't write it that way in the message.
Regards,
Rich
I want to combine columns in two tables and use the exported resulting
table for statistical analyses. The SQL script is:
INSERT INTO waterchem (site, sampdate, param, quant, ceneq1, low, high,
stream, basin)
SELECT c.site, c.sampdate, c.param, c.quant, c.ceneq1, c.low, c.high,
On Tue, 5 Jun 2012, Greg Williamson wrote:
Try a single equals sign, e.g. WHERE c.site = s.siteid
Greg,
I'm surprised: that worked! I thought the WHERE clause was looking for
equivalency, not an assignment.
There's another problem now that will be more difficult to fix. Postgres
tells
On Tue, 5 Jun 2012, David Johnston wrote:
As for the duplicate key I would first check to see if you are inserting
into a non-empty table and that one or more of your newly inserted records
conflicts with existing records on the waterchem table. The easiest way
would be to insert into a
The table has a column 'coll_time' of type time without time zone. New
rows for the table are in a .sql file and the time values throw an error at
the colon between hours:minutes. Do time values need to be quoted?
TIA,
Rich
On Wed, 25 Apr 2012, Ben Chobot wrote:
Yes, (date)time values need to be quoted as if they were strings.
Thanks, Ben. I thought that was the case but wanted to confirm it.
Much appreciated,
Rich
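Concretely, such an INSERT would quote the time value like a string (a sketch; `coll_time` is from the thread, the table and other column names are hypothetical):

```sql
-- Time (and date) literals are quoted like string literals.
INSERT INTO samples (site, coll_time) VALUES ('STV', '10:19');
```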
I'm storing vector map attribute data in postgres tables and somehow
managed to create two databases (of similar names) rather than one. I want
to combine the two.
For tables that exist in the one database I want to eliminate, I thought
to use pg_dump to create .sql files, then use
On Thu, 29 Mar 2012, Gabriele Bartolini wrote:
I suggest that you look at the -l and -L options in pg_restore, which
allow you to select which dump entries to restore (selective restore).
Gabriele,
After sending the message I realized the proper syntax is 'psql -d
database -f table.sql'.
On Thu, 29 Mar 2012, Andy Colson wrote:
How many tables are we talking about. If its a few tables, I'd rename them:
alter table lake rename to lake_old;
... etc
then dump it out and restore into the proper db.
Andy,
This will work just fine. Thanks for the insight.
Rich
I'm trying to insert rows into a table, but some date and time columns are
missing values. In the INSERT INTO ... statements of the .sql file I've
tried various formats: ,, and ,'', and ,' ', but they all generate the error
of invalid syntax for type date.
When I have missing date of the
On Wed, 29 Feb 2012, Andrew Gould wrote:
If the column is null'able, I think you can use the keyword: DEFAULT
insert into(id, xdate) values (1, DEFAULT);
Or... use NULL:
insert into(id, xdate) values (1, NULL);
NULL works, but one advantage of using DEFAULT is that you won't have to
worry
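Andrew's DEFAULT-vs-NULL point can be demonstrated with SQLite through Python's stdlib (SQLite lacks the DEFAULT keyword in a VALUES list, so the default is applied by omitting the column instead; table, column, and default values are invented):

```python
# A column default applies when the column is omitted; an explicit NULL
# overrides it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, xdate TEXT DEFAULT '1900-01-01')")
conn.execute("INSERT INTO t (id) VALUES (1)")               # default applies
conn.execute("INSERT INTO t (id, xdate) VALUES (2, NULL)")  # explicit NULL
rows = conn.execute("SELECT id, xdate FROM t ORDER BY id").fetchall()
print(rows)
```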
On Tue, 28 Feb 2012, mgo...@isstrucksoftware.net wrote:
If we move to Linux, what is the preferred Linux for running Postgres on.
This machine would be dedicated to the database only.
Michael,
There is no 'preferred' linux distribution; the flame wars on this topic
died out a decade or so
On Tue, 28 Feb 2012, haman...@t-online.de wrote:
one thing you might want to consider is system lifetime: some distro may
be set up so that you more or less have to reinstall within 2 years, if
you plan to use update service - others may be longer. Now, fast
development is great AND allows you
I have a lot of data currently in .pdf files. I can extract the relevant
data to plain text and format it to create a large text file of INSERT INTO
... rows. I need a unique ID for each row and there are no columns that
would make a natural key so the serial data type would be appropriate.
On Thu, 9 Feb 2012, Merlin Moncure wrote:
The record should be logically unique as well as physically unique (of if
it isn't, why bother making a unique constraint at all?). Sometimes you
*have* to force a surrogate, for example if certain (broken) client tools
need a primary key to work, but
On Thu, 9 Feb 2012, Andy Colson wrote:
If you create a serial column, don't put the column name or a value into your
insert statement.
create table junk (id serial, stuff text);
insert into junk(stuff) values ('my stuff');
Andy,
That's what I assumed would work but did not know for sure.
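Andy's pattern can be tried out with SQLite through Python's stdlib, where `INTEGER PRIMARY KEY` plays the role of Postgres's serial (an analogy, not identical machinery): omit the id column and it is assigned automatically.

```python
# Omit the auto-numbered column in the INSERT; it populates itself.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE junk (id INTEGER PRIMARY KEY, stuff TEXT)")
conn.execute("INSERT INTO junk (stuff) VALUES ('my stuff')")
conn.execute("INSERT INTO junk (stuff) VALUES ('more stuff')")
rows = conn.execute("SELECT id, stuff FROM junk ORDER BY id").fetchall()
print(rows)
```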
On Thu, 9 Feb 2012, David Salisbury wrote:
Interesting. I used to think natural keys were okay, but have since decided
that surrogates are the way to go. That second layer of abstraction allows
for much easier data modifications when needed. What would be an example
of a natural key that