Our major concern is related to the write-back issues we can face in this
environment.
I agree this is the OS's responsibility. Greg Smith's Postgres: High
Performance book has a lot to say about this, but there's also a lot
you could read online, since it's really a requirement for any
The UTF-8 encoding for a pound sign is 0xc2a3, not just 0xa3. You
might want to make sure your PHP file is correct. If you're on Linux,
you can use a command like `od --format=ax1 foo.php` to see the actual
byte values. If that shows the wrong value, then the problem is your
text editor is saving
I'm trying to create a unique index where the unique constraint is
`deferrable initially immediate`. But I don't see any way to do this
in the syntax of the `create index` command. It looks like the only
way to do it is via `alter table foo add unique`. Is that right, or
can I do it as part of
Deferrability is a property of a constraint, not an index
Yes, but creating a unique constraint implicitly creates an index, and
creating a unique index implicitly creates a constraint. So I'm
wondering whether I can create a pair where the index is partial and
the constraint is deferrable. It
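For reference, a sketch of both halves (table and column names are made up): a deferrable unique constraint can be added with ALTER TABLE, but it can't be partial; an exclusion constraint, on the other hand, can be both partial and deferrable:

```sql
-- Deferrable unique constraint (creates a non-partial unique index):
ALTER TABLE foo
  ADD CONSTRAINT foo_x_uniq UNIQUE (x)
  DEFERRABLE INITIALLY IMMEDIATE;

-- Partial *and* deferrable uniqueness, expressed as an exclusion
-- constraint over a plain btree equality:
ALTER TABLE foo
  ADD CONSTRAINT foo_x_uniq_active
  EXCLUDE USING btree (x WITH =) WHERE (active)
  DEFERRABLE INITIALLY IMMEDIATE;
```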
Hi Alban,
I think Postgres works great for OLAP work, and Amazon's Redshift is
even based on Postgres. 100 million sales should be no problem at
all. My understanding is Greenplum also builds on top of Postgres, so
if you ever do outgrow your Postgres installation, that would be an
easy
Because the training budget is used up for this year, and I don't want
to wait until January, I'm trying to find things that are inexpensive
enough to get started on right away.
I'm looking forward to others' suggestions which will surely be
better, but I originally learned the basics of data
Yesterday I found that one of the databases in my database cluster suddenly
lost all its tables.
A \dt in psql showed nothing.
Is there any chance this could be a search_path issue? Do you have a ~/.psqlrc?
Or are you sure you're in the right database? If you are connecting as
the postgres
I use this script on an Ubuntu system:
#!/bin/bash
set -eu
postgresql_autodoc -d example_dev -u example_dev -h localhost --password=
dot -Tpng -o example-schema.png example_dev.dot
dot -Tpdf -o example-schema.pdf example_dev.dot
That gives you a schema diagram in pdf, png, dia, and dot
Hello,
I created some indexes with WHERE clauses, so that only part of the
table would be indexed. Now I'd like to get a list of indexes, and
include the WHERE clause if an index has one. This is what I'm trying
right now:
SELECT indc.relname, ind.indpred
FROM pg_index ind, pg_class indc
It'd be great to get just the WHERE clause if possible, although I can
work around it if not. I couldn't find much documentation re
pg_get_expr.
To answer my own question, this works:
select pg_get_expr(indpred, indrelid) from pg_index where
indexrelid = 223630;
pg_get_expr
pg_get_indexdef() should help. If you really want just the WHERE
clause, possibly pg_get_expr() would work, but I've not tried it on
index clauses.
Thank you for such a quick response!
pg_get_indexdef is very helpful:
select pg_get_indexdef(223630);
pg_get_indexdef
I think the OID
argument will need to be the table not the index, but not 100% sure.
Yep, that's true. :-)
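Putting the pieces of this thread together, a query along these lines (an untested sketch) lists each partial index with its WHERE clause:

```sql
-- indpred is non-NULL only for partial indexes;
-- pg_get_expr() needs the *table's* OID (indrelid) to resolve column names.
SELECT c.relname AS index_name,
       pg_get_expr(i.indpred, i.indrelid) AS where_clause
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE i.indpred IS NOT NULL;
```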
Paul
--
_
Pulchritudo splendor veritatis.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
Is there a trick to using window functions to SET columns in an UPDATE?
Here is the query I'd like to run:
UPDATE profiles
SET score_tier = percent_rank()
OVER (PARTITION BY site_id ORDER BY score ASC)
WHERE score IS NOT NULL
But that gives me
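For what it's worth, window functions aren't allowed directly in an UPDATE's SET list; the usual workaround (a sketch, assuming profiles has an id primary key) is to compute the rank in a subquery and join it back:

```sql
-- Compute percent_rank() in a subquery, then join on the primary key:
UPDATE profiles p
SET score_tier = ranked.tier
FROM (SELECT id,
             percent_rank() OVER (PARTITION BY site_id
                                  ORDER BY score ASC) AS tier
      FROM profiles
      WHERE score IS NOT NULL) ranked
WHERE p.id = ranked.id;
```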
Hello,
I'd like to write a histogram-like query that shows these columns:
- x-value from 0 to k.
- number of rows with that x-value.
- number of rows seen so far (i.e. with the current x-value or less).
- % of total rows seen so far.
The following query works for the first three columns,
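A sketch of all four columns, using window functions over the grouped counts (hypothetical table t with integer column x):

```sql
-- sum(count(*)) OVER (ORDER BY x) is a running total of the per-x counts;
-- sum(count(*)) OVER () is the grand total.
SELECT x,
       count(*) AS n,
       sum(count(*)) OVER (ORDER BY x) AS running_n,
       round(100.0 * sum(count(*)) OVER (ORDER BY x)
                   / sum(count(*)) OVER (), 1) AS running_pct
FROM t
GROUP BY x
ORDER BY x;
```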
Seems like this should work (untested though):
UPDATE table1 a
SET field1 = a.field1 || (SELECT regexp_replace(b.field1, '', '...')
FROM table2 b WHERE a.id = b.id AND b.field1 LIKE '')
WHERE a.field1 NOT LIKE ''
Paul
On Tue, Dec 4, 2012 at 11:52 AM, Sebastian P. Luque
I don't think your Java code does what you think it does. You should read
some more about how Java handles string encodings. Here is a method I wrote
some years ago that might also help you. It converts streams, not strings,
but what you need should be pretty close (and simpler):
/**
*
The JDBC drivers will handle all the conversion.
Do NOT manually convert the data.
Yeah, I agree this is the right answer here, since you're using JDBC. By
the time you get a String from the MySQL driver, it's already in Java's
2-bytes-per-char format. And the Postgres driver will deal with the
Is there any way to determine, by querying pg_index and other pg_* tables,
whether an index was created as `USING something`? I've already got a big
query joining pg_class, pg_index, etc. to pull out various attributes about
the indexes in my database, and I'd like to include whether it's a GIST
pg_indexes (not pg_index) seems to have the data you're looking for,
unless I misunderstood the question.
That is a lovely table, but I want to get each attribute individually,
without having to parse the CREATE INDEX statement.
It looks like I was almost there with pg_opclass. This will
pg_opclass seems the hard way --- just use pg_class.relam, which is
the OID of the index's AM.
Ah, that works like a charm. Thanks!
Paul
Hello,
I'm running a specialized search engine that indexes a few tens of millions
of web pages, keeping everything in Postgres, and one problem I'm starting
to see is poor cache hit rates. My database has two or three tables just
for the text of the scraped pages, with one row every time a page
I have a long-running multi-row UPDATE that is deadlocking with a
single-row UPDATE:
2013-03-09 11:07:51 CST ERROR: deadlock detected
2013-03-09 11:07:51 CST DETAIL: Process 18851 waits for ShareLock on
transaction 10307138; blocked by process 24203.
Process 24203 waits for ShareLock on
2) All transactions modify table rows in the same order, e.g. ascending
id. With the big update you can do that by putting an ORDER BY tg2.id
into the subquery, and with the little transactions you'll have to make
sure that rows are updated in ascending id order.
I agree this would
Out of curiosity: any reason the ORDER BY should be in the subquery? It
seems like it ought to be in the UPDATE (if that's allowed).
Hmm, it's not allowed. :-) It's still surprising that you can guarantee the
order of a multi-row UPDATE by ordering a subquery.
Paul
--
into separate servers.
So it sounds like I'm on the right track. But a separate
cluster/server seems like a drastic solution.
Thanks,
Paul
On Mon, Mar 11, 2013 at 12:17 PM, Ben Chobot be...@silentmedia.com wrote:
On Mar 9, 2013, at 11:54 AM, Paul Jungwirth wrote:
Hello,
I'm running
Wow, this is a fun puzzle. I'd love to be the first to solve it with
just SQL, but I don't have a solution yet. Here are some elements that
might be useful:
SELECT market_segment_dimension, array_agg(value)
FROM market_segment_dimension_values
GROUP BY market_segment_dimension;
the UNNEST
On Wed, Mar 20, 2013 at 8:40 PM, Paul Jungwirth
p...@illuminatedcomputing.com wrote:
Wow, this is a fun puzzle. I'd love to be the first to solve it with
just SQL, but I don't have a solution yet. Here are some elements that
might be useful:
SELECT market_segment_dimension, array_agg(value)
FROM
Try this:
SELECT REGEXP_REPLACE(LOWER('300 North 126th Street'),
'(\d)(st|nd|rd|th)', '\1', 'g');
Note that matching a number is \d not /D: backslash, not forward
slash, and lowercase d not uppercase. \d means a digit, \D means
anything except a digit.
Also, I don't think Postgres supports
We are load balancing 2 data centers.
Chapter 8 of Scalable Internet Architectures has a good discussion of
running master-master setups in separate data centers. I'd read that
whole chapter for some of the challenges you'll face.
If DC1 goes down our LB is failing over to DC2.
This sounds
Do these queries update more than one row? I ran into a similar issue
a year ago, where two multi-row updates would deadlock because they
processed rows in a different order. I'd love to see UPDATE support
ORDER BY to fix this, but it doesn't yet. (If I ever try contributing
to Postgres, this is a
Is this the most efficient way to perform this kind of query?
I don't think there is one answer that's always correct, but you could
compare it with a LEFT OUTER JOIN.
There are lots of articles and blog posts about EXISTS vs OUTER JOIN
vs IN, for all the major RDBMSes. Note that not all these
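For example, the same semi-join question spelled two ways (hypothetical customers/orders tables):

```sql
-- EXISTS (semi-join) form:
SELECT c.*
FROM customers c
WHERE EXISTS (SELECT 1 FROM orders o WHERE o.customer_id = c.id);

-- LEFT OUTER JOIN form of the same question; DISTINCT is needed
-- because a customer with several orders would otherwise repeat:
SELECT DISTINCT c.*
FROM customers c
LEFT OUTER JOIN orders o ON o.customer_id = c.id
WHERE o.id IS NOT NULL;
```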
Hello,
In light of the Heartbleed OpenSSL bug[0,1], I'm wondering if I need
to regenerate the SSL certs on my postgres installations[2] (at least
the ones listening on more than localhost)? On Ubuntu it looks like
there are symlinks at /var/lib/postgresql/9.1/main/server.{crt,key}
pointing to
Have you read the Debian README?
/usr/share/doc/postgresql-*/README.Debian.gz
Thank you for pointing me to that file. From
/usr/share/doc/ssl-cert/README it sounds like the old snakeoil cert is
already self-signed, so that's promising. So I take it that psql and
the postgres client library
Are you sure that there is no pure sql solution for this ?
There is no pure SQL solution because a SQL query always gives a fixed
number of columns. You could compose the SQL in your client app and
vary the columns by the current warehouses. Or you could say GROUP BY
produkit, tblwarehouse.id
A very quick search shows that rds supports dblink
Then I'd need to open our servers to external visits.
This is sort of getting away from Postgres, but if the RDS instance is
in a VPC, you could put a VPN on the VPC so dblink wouldn't have to go
over the open Internet.
Paul
On Tue, May 6,
Hello,
I'm trying to migrate an existing hstore column to json in Postgres
9.3, and I'd like to be able to run the script in reverse. I know not
all JSON can turn back into hstore, but since this is coming from an
old hstore column, I know the structure is flat (no nesting), and that
all values
the hstore column winds up with just one key/value pair.
Any suggestions for making this one big UPDATE?
Thanks,
Paul
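A sketch of one approach (table and column names here, docs.attrs, are made up). Since the hstore is known to be flat, hstore_to_json covers the forward direction, and json_each_text plus the array aggregates can rebuild the hstore; skipping the aggregation is what leaves you with a single key/value pair:

```sql
-- Forward: hstore -> json (9.3+):
ALTER TABLE docs
  ALTER COLUMN attrs TYPE json
  USING hstore_to_json(attrs);

-- Reverse: flat json -> hstore. ALTER ... USING can't contain a
-- subquery, so stage it through an UPDATE into a new column:
ALTER TABLE docs ADD COLUMN attrs_h hstore;
UPDATE docs
SET attrs_h = (SELECT hstore(array_agg(key), array_agg(value))
               FROM json_each_text(docs.attrs));
```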
On Tue, Jul 1, 2014 at 3:26 PM, Paul Jungwirth
p...@illuminatedcomputing.com wrote:
Hello,
I'm trying to migrate an existing hstore column to json in Postgres
9.3, and I'd
listen_addresses='*'
I'm pretty sure that listen_addresses belongs in postgresql.conf, not
pg_hba.conf.
Paul
On Thu, Jul 10, 2014 at 1:40 PM, Aram Fingal fin...@multifactorial.com wrote:
I just tried to set up a PostgreSQL server on an existing installation of
Ubuntu 13.10 server but I
It is non-specific since it is assumed at this point in the documentation
that you realize ALL configuration parameters are defined in
postgresql.conf or its includes.
I think the comments in pg_hba.conf are a lot more misleading than the
online documentation, and are more likely to be read.
Or another idea, add a column that is the path of the parent:
I don't think this will work. The problem is you need the full path to
keep the children with their parents, but you also need the score. If
you make the path an array of (-votes, id) tuples (perhaps flattened
for simplicity), then
Can you confirm that your software is SHA-256 Compliant?
Postgres's SSL certificate and key live at the values of ssl_cert_file
and ssl_key_file in your postgresql.conf. Why not point them at a
SHA-256 certificate, restart, and try it out?
Paul
Hi All,
I have Postgres 9.3 on Ubuntu 14.04 set up in a master/slave
configuration with streaming replication. On the master I ran `sudo
service postgresql stop` and then on the slave I ran `sudo touch
$trigger_file`. Now the slave seems to be running fine, but I'm trying
to figure out the
A bit more info:
What if there were changes on the master that didn't get replicated
before I originally shut it down?
It looks like Ubuntu's init.d script does a fast shutdown, i.e.
SIGINT on this page:
http://www.postgresql.org/docs/9.3/static/server-shutdown.html
I can't tell from the doc
How do you handle DDL changes in general? I would treat stored
procedures the same way. For instance Ruby on Rails has database
migrations where you write one method to apply the DDL change and
another to revert it, like this:
def up
add_column :employees, :manager_id, :integer
Hello,
I'm working on a package of functions that compute statistics on
arrays of numbers. For example this one computes a histogram from a
bunch of values plus some bucket characteristics:
CREATE OR REPLACE FUNCTION
array_to_hist(double precision[], double precision, double precision, int)
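As a point of comparison (not the function above, which is truncated here), the built-in width_bucket gives a similar histogram shape without a custom function:

```sql
-- Bucket values from 0 to 100 into 10 bins and count each bin;
-- bins with no values simply don't appear in the output:
SELECT width_bucket(x, 0.0, 100.0, 10) AS bucket, count(*)
FROM unnest(ARRAY[1.5, 7.2, 42.0, 42.5, 97.3]::float8[]) AS t(x)
GROUP BY bucket
ORDER BY bucket;
```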
Hello,
The table of which C types represent which SQL types seems to be missing bigint:
http://www.postgresql.org/docs/9.3/static/xfunc-c.html#XFUNC-C-TYPE-TABLE
It looks like bigint should be listed and should correspond to an
int64 C type. Also I see there is an INT8OID, PG_GETARG_INT64,
I imagine your original would be at risk of LIMITing out the very row you
seek to get at the top, since you don't have an ORDER BY to tell it which
ones to keep during the outer LIMIT.
Here is an old thread about combining ORDER BY with UNION:
Or maybe instead of a view you could write a
set-returning function, e.g. as described here:
I thought I'd see if I could make this work just for fun. Here is a
simple proof of concept (on 9.3):
-- DROP TABLE IF EXISTS topics;
CREATE TABLE topics (
id INTEGER PRIMARY KEY,
bumped_at INTEGER
:15 AM, Paul Jungwirth
p...@illuminatedcomputing.com wrote:
Hi Pierre,
It looks like you're saying that each row has an id plus three numeric
columns, and you want the stddev calculated from the three numeric
columns? In that case you could do this:
create table foo (id integer, a float, b
Oh sorry, you should leave off the grouping:
select stddev(a), stddev(b), stddev(c) from foo;
Paul
On Wed, Jan 21, 2015 at 10:24 AM, Paul Jungwirth
p...@illuminatedcomputing.com wrote:
Hi Pierre,
Looking at your Excel document I think I misinterpreted, and you are
trying to take the stddev
Hi Pierre,
It looks like you're saying that each row has an id plus three numeric
columns, and you want the stddev calculated from the three numeric
columns? In that case you could do this:
create table foo (id integer, a float, b float, c float);
insert into foo values (1, 2,3,4);
insert into
In a number of places on the web I've seen it claimed that ordering can be
set via prepared statements.
...
sandbox=# PREPARE testplan(text) AS
SELECT * FROM test ORDER BY $1;
But the output is not what one would expect:
sandbox=# EXECUTE testplan('gender');
...
As opposed to:
Hi Han,
Here is an example:
create table foo (v integer);
insert into foo values (23), (45), (65), (22);
create table bar (v integer);
insert into bar select v from foo, generate_series(1,5);
But note that in any relational database there is no defined order for
the rows. A table is more like a
Hi Pierre,
How do you know in which group each row belongs? If you don't care how
the rows are grouped, you can say this:
create table foo (v float);
insert into foo select random() from generate_series(1, 100) s(a);
select n % 50 g, stddev(v) from (select row_number() over () n, v from
foo)
I am currently planning to set up a PostgreSQL + PostGIS instance for my
lab. Turns out I believe this would be useful for the whole center, so
that I'm now considering setting up the server for everyone—if interest
is shared of course. At the moment, I am however struggling with what
would be
i want to select based on input, but if input is not provided or if
input is empty, then i want to select all rows.
I think you can just use OR:
SELECT *
FROM table
WHERE (input = '' OR input = ?)
This is assuming that `input` is a column in your table and ? is the
user input, based
With the hstore you can do hstore ? 'key' to check if the object contains
the key-- is there a similar function for json objects?
Is this good enough?:
= select ('{"a":1,"b":null}'::json) -> 'a';
 ?column?
----------
 1
= select ('{"a":1,"b":null}'::json) -> 'b';
 ?column?
----------
 null
= select
Hi Michael,
I can't comment on the domain-specific stuff, but I recently used numeric
arrays for a project and it worked well. In my case we had one million
simulation results (floats) per scenario, so rather than reading one
million separate rows to compute a histogram, we stored everything in
I've noticed that when representing lists as JSON, Postgres 9.4 sometimes
outputs spaces after commas, and other times does not.
Here is a similar test on 9.3:
# select '[1,2,3]'::json::text, '[1, 2, 3]'::json::text;
  text   |   text
---------+-----------
 [1,2,3] | [1, 2, 3]
It looks like
Hi Michael,
hey, john, i did as you said like:
update db.user
set deleted= 1,
updateterminal = UpdateTerminal,
updateuser = UpdateUser,
updatedate = UpdateDate
returning
credittypeid,
I have a table with two timestamp columns for the start time and end
time of each record (call them start and end). I'm trying to figure out
if there is a way to group these records by hour of day,
I think you can do this by selecting `FROM generate_series(0, 23) s(h)`
and then joining to your
Some weird edge cases to be careful about: activities that cross midnight.
Activities that last more than one full day,
e.g. start 3/15 and end 3/17.
Right. And I will run into some of those (at least the crossing midnight),
so I'll keep an eye out.
If you are running the report on more
So next question: how do I get the active time per hour from this?
I think you just SUM() over the intersection between each hourly window
and each event, right? This might be easiest using tsrange, something
like this:
SUM(extract(minutes from (tsrange(start_time, end_time)
* tsrange(h,
test= select h, count(*) from start_end, generate_series(0, 23) as s(h) where
h between extract(hour from start_time) and extract(hour from end_time) group by h
order by h;
 h  | count
----+-------
  8 |     2
  9 |     3
 10 |     2
 11 |     2
Note if you always want all 24 rows with a count
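A sketch of the variant that always returns all 24 rows, with 0 for empty hours, using the thread's start_end table:

```sql
-- LEFT JOIN from the series, and count a column from the joined table
-- (not *) so unmatched hours count as 0:
SELECT s.h, count(se.start_time)
FROM generate_series(0, 23) AS s(h)
LEFT JOIN start_end se
  ON s.h BETWEEN extract(hour FROM se.start_time)
             AND extract(hour FROM se.end_time)
GROUP BY s.h
ORDER BY s.h;
```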
1. I have a table with mailmessages, which has an FK to a table of hub users.
2. I'd like to retain the content of the message repository (with its domain key
not cleared), when I drop a particular username from service to release
that username to others.
3. I try to do that with FK
Does someone have a success story of using Postgres full-text search
with significant data, let's say 50-100 GB?
This is a recent and very complete article on using Postgres for
full-text search:
http://blog.lostpropertyhq.com/postgres-full-text-search-is-good-enough/
see also the
This underfits:
postgres=# select hstore_to_array('a=>1,b=>2,c=>3'::hstore)
           && hstore_to_array('a=>2,d=>4,b=>2'::hstore);
...because the array overlaps op takes every element (even 'a' or 2 alone)
and doesn't test for key and value together like in 'b=>2'!
How about hstore_to_matrix? Then you have a
I'm not sure how to create a result where I get the average number of
new users per day of the week. My issues are that days that did not
have any new users will not be factored into the average
This is a pretty common problem with time-series queries when there is
sparse data. My go-to
Thanks Paul, I guess I'm not sure how a generate_series between 0 to 6
would solve this problem. Wouldn't I have to generate a series based on
the date range (by day) and then group by DOW _after_ that? Can you give
me an example of how I'd do it with a series based on 0 to 6?
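One way to sketch it (hypothetical users table with a created_at column; the date range is an assumed example): generate every day in the range first, so zero-signup days contribute 0 to the average, then group by day of week:

```sql
-- extract(dow ...) gives 0 = Sunday .. 6 = Saturday.
SELECT dow, round(avg(n), 2) AS avg_new_users
FROM (
  SELECT extract(dow FROM d.day)::int AS dow, count(u.id) AS n
  FROM generate_series(date '2016-01-01',
                       date '2016-03-31',
                       interval '1 day') AS d(day)
  LEFT JOIN users u ON u.created_at::date = d.day
  GROUP BY d.day
) per_day
GROUP BY dow
ORDER BY dow;
```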
Looks like David
what you've said above is incorrect.
All WITH TIME ZONE does is tell PostgreSQL to apply timezone
conversions during various operations. The stored data is represented
as an epoch without any concept of the source data's timezone
representation.
Oh, very interesting! Thank you for pointing
Hi Steve,
Thanks for such a thorough response! I agree that time is a lot trickier
and trappier than one might expect, so it's good to learn how others
grapple with it.
Your original question had to do with month/year.
Just to clarify, that was Daniel's original question, but you're
You really shouldn't use WITHOUT TIME ZONE.
I'd like to know more about this. Can you say why? Are there any
articles you'd recommend? I'm fond of normalizing all times to UTC and
only presenting them in a time zone when I know the current
perspective. I've written about that approach in a
Is it possible to have date type data that only contain month and year?,
how can I obtain that from a timestamp (without time zone) column?
I think you want date_trunc, which will cut everything down to the first
of the month, e.g. 2015-01-01, 2015-02-01, etc. The results will still be
dates,
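For example:

```sql
SELECT date_trunc('month', timestamp '2015-02-17 10:30:00')::date;
-- → 2015-02-01
```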
On 11/12/2015 11:49 AM, db042190 wrote:
I see "unexpected eof...could not receive data..target
machine actively refused it.".
That sounds like the same error message as discussed here:
http://www.postgresql.org/message-id/4d75289d.9020...@techbaza.pl
Could it be a problem of too many open
Hello,
I'm running Postgres 9.3 in a warm standby configuration, and the slave
has this setting in recovery.conf:
archive_cleanup_command = '/usr/lib/postgresql/9.3/bin/pg_archivecleanup /secure/pgsql/archive/ %r'
But I noticed that the archive directory had files going back to
February
Is there anything else beside *.backup files in the directory?
There were a few *.history files, and a few files with no extension,
like this: 000600BE0040.
Paul
On 11/02/2015 09:11 AM, Adrian Klaver wrote:
The *.backup files should not be 16MB and from your original post they
looked to be 300 bytes. Now if you have 30K of 16MB files then something
else is going on.
Ah, you are right! Sorry for the misunderstanding.
Paul
So something is doing a base backup roughly every two hours.
Is that what you would expect?
No. :-)
Sounds like I need to do some archeology. This is a system I inherited,
so I haven't yet explored all the dark corners.
Paul
So, as Albe posted pg_archivecleanup is only cleaning up the WAL files,
not the auxiliary files. The WAL files would be the ones with no
extension and a size of 16 MB(unless someone changed the compile settings).
Okay, thank you both for the explanation! I'm glad to hear that it's not
a
Running the queries individually and using a limit on the GeoLite IP db,
results are back immediately (1-2 ms), but when using the first query it
takes 2-3 seconds.
Is there a way to use a limit in the join?
This sounds like the real issue is a missing/incorrect index, but if
you're on 9.4+ you can
On 06/17/2016 03:03 AM, Alex John wrote:
RDS is a prime candidate except for the fact that they have explicitly
stated that the Postgres engine is *not* HIPAA compliant.
More precisely, it is not covered by the BAA Amazon will sign.
I've helped several companies run HIPAA-compliant Postgres
Hi Adam,
On 06/17/2016 03:01 PM, Adam Brusselback wrote:
Just wondering what others have done for using enum or uuid columns in
exclusion constraints?
[snip]
And as a closing note on this, I really can't wait until these are
supported types for gist indexes.
Here is some work I did to add
On 02/24/2016 09:44 AM, Paul Jungwirth wrote:
Also, Rails wants to use Postgres "ident" authentication, which does not
require a password because it trusts that the OS has already
authenticated you.
Sorry, I misspoke: this is "peer" authentication, not "ident". S
On 02/24/2016 12:34 AM, Marco Lobbia wrote:
I am on a Ubuntu 14.04 LTS machine.
I thought I'd chime in since I work with Rails and Postgres on Ubuntu
all day long. :-) 14.04 LTS is fine for both production and development.
(Sounds like you're using Heroku for production in any case.)
On 04/21/2016 01:36 PM, Guyren Howe wrote:
Anyone familiar with the issue would have to say that the tech world
would be a significantly better place if IBM had developed a real
relational database with an elegant query language
I'm surprised no one yet has mentioned Tutorial D by C. J.
On 05/03/2016 09:11 PM, Guyren Howe wrote:
I think I'm going to write a book called Love Your Database, aimed at web
developers
What might I cover that I haven't mentioned? What are the usual objections to
server-side code and how can they be met? When *are* they justified and what
should
On 05/04/2016 08:39 AM, Paul Jungwirth wrote:
On 05/03/2016 09:11 PM, Guyren Howe wrote:
I think I'm going to write a book called Love Your Database, aimed at
web developers
I gave a talk here about doing "interesting" Postgres things in Rails:
Oh also: one part of my talk I did
On 07/27/2016 07:44 AM, Vick Khera wrote:
On Wed, Jul 27, 2016 at 3:28 AM, Anton Ananich wrote:
In my situation this order is invalid. Obviously, year 2016 should go after
2014, like that:
I think you expect JSONB to sort differently than it does. I cannot
imagine
Hello,
I'm trying to create a custom inetrange type. This works:
CREATE TYPE inetrange AS RANGE (
subtype = inet,
subtype_diff = inet_diff
);
But since inet is discrete, not continuous, I'd like to define a
canonical function too:
CREATE TYPE inetrange AS RANGE (
On 02/28/2017 08:21 AM, Chris Withers wrote:
How can I aggregate the results of a query that equates to "show me the
number of matching rows and the set of tag1 values that have
a tag2 value of t2val1"?
...but I really want:
count | tag1
On 03/01/2017 08:39 AM, jonathan vanasco wrote:
I have to store/search some IP data in Postgres 9.6 and am second-guessing my
storage options.
Would anyone mind giving this a quick look for me?
Right now I have two tables, and am just using cidr for both:
Hi Jonathan,
CIDR seems like a
Hi Melvin:
On 09/29/2016 12:06 PM, Melvin Davidson wrote:
I list the creation time for a WAL file and it shows:
/home/mdavidson/dba$ ls -l --time=ctime
/d-log/pg_xlog/0001000D00C9
-rw------- 1 postgres postgres 16777216 Sep 29 07:14
/d-log/pg_xlog/0001000D00C9
ctime
On 09/19/2016 08:48 AM, Willy-Bas Loos wrote:
Since records can
be changed afterwards, it has been argued that we should have
"journaling", meaning that every change to the data is saved in a
separate schema that holds a "journaling" copy of each table
I don't think this is especially unusual.
> On 09/19/2016 10:56 AM, Willy-Bas Loos wrote:
> > On Mon, Sep 19, 2016 at 6:26 PM, Paul Jungwirth
> > <p...@illuminatedcomputing.com <mailto:p...@illuminatedcomputing.com>>
wrote:
> > I've worked on similar
> > projects that maintain history fo
On Fri, Sep 30, 2016 at 6:06 AM, Rakesh Kumar
wrote:
A typical fear mongering Q from
them "what if due to a bug in your s/w, our competitors end up looking at our
data" or
something like that. That's why schema level vs db level discussion.
I've been reading this
Hi All,
I've noticed in the past that doing aggregates while joining to more
than one table can sometimes give you unintended results. For example,
suppose I have three tables: products, sales, and resupplies. In sales I
track what I sell, and in resupplies I track my own purchases to
Hi All,
I wrote a blog post about the Postgres permissions system, and I thought
I'd share:
http://illuminatedcomputing.com/posts/2017/03/postgres-permissions/
The main point I wanted to convey, which I somehow never grasped
confidently from reading the docs or other articles, is how
On 03/29/2017 06:36 AM, Tom Lane wrote:
Karsten Hilbert writes:
Being able to create foreign keys may allow to indirectly
discover whether certain values exists in a table which I
don't otherwise have access to (by means of failure or
success to create a judiciously
Also I don't understand why you wrote “You need the permission on both
tables”: Only the owner of a table can add constraints to it
Ah, this piece was really helpful for me in making it click. Thanks so
much! I added a couple new paragraphs to my post with a link back to
this thread. I feel
On 03/21/2017 12:21 PM, David G. Johnston wrote:
> words=> COPY words_reviews (uid, author, nice, review, updated) FROM
> stdin FORMAT csv;
What did you read that led you to think the above should work?
I don't know about COPY FROM, but COPY TO works without parens (or
FORMAT), like