On Fri, Jul 10, 2015 at 9:40 AM, John McKown
wrote:
> On Fri, Jul 10, 2015 at 11:15 AM, Robert DiFalco wrote:
>
>> I have a table something like this:
>>
>> CREATE TABLE devices (
owner_id BIGINT NOT NULL,
>> utc_offset_secs INT,
>>
I have a table something like this:
CREATE TABLE devices (
owner_id BIGINT NOT NULL,
utc_offset_secs INT,
PRIMARY KEY (uid, platform),
FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE
);
I want to do a query from an application that returns all devices whose
time is b
>
> I am fairly certain this does not give you the correct results.
> Specifically, the minimum value for each cDate is going to be 1, since
> count(*) counts rows even when every joined column is NULL. count(u) should probably work.
>
>
> Yes you are right, I forgot to change COUNT(*) to COUNT(id), as you
mention COUNT(u.*) will a
Paul, I'm sure I'm missing something but it seems like your approach will
not work. It's because the LEFT OUTER JOIN is on the numeric day of the
week. So if you had this query going over weeks or months of data wouldn't
you have the same issue with the days that had no new users not being
factored
Thanks Paul, I guess I'm not sure how a generate_series between 0 to 6
would solve this problem. Wouldn't I have to generate a series based on the
date range (by day) and then group by DOW _after_ that? Can you give me an
example of how I'd do it with a series based on 0 to 6?
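A sketch combining the two ideas, assuming a users(id, created) table and an arbitrary June 2015 range: generate one row per calendar day so empty days count as zero, then average the per-day counts within each day of week.

```sql
WITH days AS (
    SELECT g.d::date AS day
    FROM generate_series('2015-06-01'::timestamp,
                         '2015-06-30'::timestamp,
                         '1 day') AS g(d)
),
per_day AS (
    -- COUNT(u.id) (not COUNT(*)) keeps days with no new users at 0
    SELECT days.day, COUNT(u.id) AS new_users
    FROM days
    LEFT JOIN users u ON u.created::date = days.day
    GROUP BY days.day
)
SELECT EXTRACT(DOW FROM day) AS dow,
       AVG(new_users)        AS avg_new_users
FROM per_day
GROUP BY 1
ORDER BY 1;
```

The series runs over the date range, not 0 to 6; grouping by DOW happens only after every day has a row.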
On Mon, Jul 6, 2015
I'm not sure how to create a result where I get the average number of new
users per day of the week. My issue is that days that did not have any
new users will not be factored into the average, giving an overinflated
result.
This is what I started with:
WITH userdays AS
(SELECT u.created::DAT
I want to make sure I understand the repercussions of this before making it
a global setting.
As far as I can tell this will not put data/referential integrity at risk. It
only means that there is a period of time (maybe 600 msecs) between when a
commit occurs and when that data is safe in the case of
I don't think an advisory lock would remove the deadlock.
On Sun, Jan 18, 2015 at 10:33 PM, Roxanne Reid-Bennett
wrote:
> On 1/16/2015 2:41 AM, Jim Nasby wrote:
>
>> On 1/15/15 10:57 PM, Roxanne Reid-Bennett wrote:
>>
>>>
>>> try this: (if you still get deadlocks, uncomment the advisory lock
>>
Hometowns get selected and possibly inserted in unpredictable ways even
from multiple concurrent sessions. The only way I could figure out how to
solve it was to force each INSERT hometowns to be in its own transaction.
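One way to serialize the concurrent hometown inserts without a separate transaction per INSERT is a transaction-scoped advisory lock keyed on a hash of the name, a sketch:

```sql
BEGIN;
-- Two sessions hashing the same name queue on the same lock,
-- so the select-or-insert below runs one session at a time.
SELECT pg_advisory_xact_lock(hashtext('Portland, OR'));
-- ... select-or-insert the hometown here ...
COMMIT;  -- the advisory lock is released automatically at commit/rollback
```

Different names hash to different lock keys, so unrelated inserts stay concurrent.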
On Mon, Jan 19, 2015 at 1:56 PM, Robert DiFalco
wrote:
> I don't
be my
understanding is off.
On Mon, Jan 19, 2015 at 11:10 AM, Andreas Kretschmer <
akretsch...@spamfence.net> wrote:
> Robert DiFalco wrote:
>
> > I have several tables that I use for logging and real-time stats. These
> are not
> > critical and since they are a bottleneck I
I have several tables that I use for logging and real-time stats. These are
not critical and since they are a bottleneck I want transactions against
them to always be asynchronous. Is there a way to specify this at a table
level or do I have to make sure to call set synchronous_commit='off' every
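For reference, there is no table-level setting; synchronous_commit is a GUC that can be scoped per transaction, per session, per role, or per database. A sketch (the table, columns, and role name are made up):

```sql
BEGIN;
SET LOCAL synchronous_commit = off;        -- this transaction only
INSERT INTO call_log (user_id, called_at)  -- hypothetical logging table
VALUES (1, now());
COMMIT;

-- Or make async commit the default for a dedicated logging role:
ALTER ROLE stats_writer SET synchronous_commit = off;
```

With a dedicated role for the logging/stats connections, the critical tables keep fully synchronous commits.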
ld be picked
up. And there should only be a quick recoverable deadlock.
On Fri, Jan 16, 2015 at 7:49 AM, Daniel Verite
wrote:
> Robert DiFalco wrote:
>
> > I must be doing something wrong because both of these approaches are
> giving
> > me deadlock exceptions.
>
>
The code shown in the Doc (I think) will still give you deadlock in the
case where you have two sessions concurrently trying to insert the same
'hometown'. For example:
INSERT INTO users VALUES('Tom', select_hometown_id('Portland, OR'));
INSERT INTO users VALUES('Waits', select_hometown_id('P
rt into users (name, hometown_id)
> values ('Robert', v_id);
> END;
>
> On Tue, Jan 13, 2015 at 5:06 PM, Robert DiFalco
> wrote:
> > This seems to get rid of the INSERT race condition.
> >
> > CREATE OR REPLACE FUNCTION select_hometown_id(hometown_name VAR
; you're doing this inside of a postgres function, your changes are
> >> already atomic, so I don't believe by switching you are buying
> >> yourself much (if anything) by using a CTE query instead of something
> >> more traditional here.
> >>
>
Good points. I guess my feeling is that if there can be a race condition on
INSERT then the CTE version is not truly atomic, hence the LOOP.
On Tue, Jan 13, 2015 at 3:11 PM, Brian Dunavant wrote:
> A very good point, but it does not apply as here (and in my article)
> we are not using updates, o
switching you are buying
> yourself much (if anything) by using a CTE query instead of something
> more traditional here.
>
> The advantages of switching to a CTE would be if this code was all
> being done inside of the app code with multiple queries.
>
> On Tue, Jan 13, 2015
Well, traditionally I would create a LOOP where I tried the SELECT, if
there was nothing I did the INSERT, if that raised an exception I would
repeat the LOOP.
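That loop, written out as a plpgsql function against the hometowns table from this thread (modeled on the merge example in the PostgreSQL docs):

```sql
CREATE OR REPLACE FUNCTION select_hometown_id(p_name VARCHAR)
RETURNS INTEGER AS $$
DECLARE
    v_id INTEGER;
BEGIN
    LOOP
        -- fast path: the row already exists
        SELECT id INTO v_id FROM hometowns WHERE name = p_name;
        IF FOUND THEN
            RETURN v_id;
        END IF;
        BEGIN
            INSERT INTO hometowns (name) VALUES (p_name)
            RETURNING id INTO v_id;
            RETURN v_id;
        EXCEPTION WHEN unique_violation THEN
            NULL;  -- another session inserted it first; loop and re-select
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

The inner BEGIN/EXCEPTION block is what makes the race harmless: a losing session swallows the unique violation and picks up the winner's row on the next pass.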
What's the best way to do it with the CTE? Currently I have the following
which gives me Duplicate Key Exceptions when two sessions try to
This CTE approach doesn't appear to play well with multiple concurrent
transactions/connections.
On Tue, Jan 13, 2015 at 10:05 AM, John McKown
wrote:
> On Tue, Jan 13, 2015 at 11:45 AM, Robert DiFalco wrote:
>
>> Thanks John. I've been seeing a lot of examples like
WHERE NOT EXISTS (SELECT 1 FROM sel)
RETURNING id
)
INSERT INTO users(name, hometown_id)
VALUES ('Robert', (SELECT id FROM ins UNION ALL SELECT id FROM sel));
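For context, the select-or-insert CTE this truncated fragment comes from typically has this shape (the hometown value is assumed from elsewhere in the thread):

```sql
WITH sel AS (
    SELECT id FROM hometowns WHERE name = 'Portland, OR'
), ins AS (
    -- insert only if the SELECT above found nothing
    INSERT INTO hometowns (name)
    SELECT 'Portland, OR'
    WHERE NOT EXISTS (SELECT 1 FROM sel)
    RETURNING id
)
INSERT INTO users (name, hometown_id)
VALUES ('Robert', (SELECT id FROM ins UNION ALL SELECT id FROM sel));
```

Note this is exactly the pattern under discussion: two concurrent sessions can both see an empty sel, and the loser hits a duplicate-key error on the unique name, which is why the thread falls back to a retry loop.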
On Tue, Jan 13, 2015 at 8:50 AM, John McKown
wrote:
> On Tue, Jan 13, 2015 at 10:42 AM, Robert DiFalco wro
Let's say I have two tables like this (I'm leaving stuff out for
simplicity):
CREATE SEQUENCE HOMETOWN_SEQ_GEN START 1 INCREMENT 1;
CREATE TABLE hometowns (
id INTEGER DEFAULT nextval('HOMETOWN_SEQ_GEN'),
name VARCHAR,
PRIMARY KEY (id),
UNIQUE(name)
);
CREATE SEQUENCE USER_SEQ_GEN START 1
cost approach than what I was already doing.
On Thu, Dec 18, 2014 at 2:07 PM, David Johnston
wrote:
>
> On Thu, Dec 18, 2014 at 3:02 PM, Robert DiFalco
> wrote:
>
>> Is the intersect any better than what I originally showed? On the ROW
>> approach, I'm not sure
David G Johnston <
david.g.johns...@gmail.com> wrote:
>
> Robert DiFalco wrote
> > For 2 arbitrary ids, I need a query to get two pieces of data:
> >* Are the two users friends?
>
> This seems easy...ROW(u_id, f_id) = ROW(n1, n2)
>
>
> >* How many f
I have a table called friends with a user_id and a friend_id (both of these
relate to an id in a users table).
For each friend relationship there are two rows. There are currently ONLY
reciprocal relationships. So if user ids 1 and 2 are friends there will be
two rows (1,2) and (2,1).
For 2 arbit
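With the reciprocal-rows model, both questions reduce to single probes; a sketch with placeholder ids 1 and 2:

```sql
-- Are the two users friends? One row suffices, since (1,2) and (2,1)
-- always exist together.
SELECT EXISTS (
    SELECT 1 FROM friends WHERE user_id = 1 AND friend_id = 2
) AS are_friends;

-- How many friends does user 1 have?
SELECT COUNT(*) AS friend_count
FROM friends
WHERE user_id = 1;
```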
OUTER JOINs vs EXISTS queries and
if there was a better alternative I had not considered.
On Tue, Dec 9, 2014 at 11:44 AM, Arthur Silva wrote:
> On Tue, Dec 9, 2014 at 4:18 PM, Robert DiFalco
> wrote:
>
>> I'm sorry, I missed a JOIN on the second variation. It is:
>>
>
I have users, friends, and friend_requests. I need a query that essentially
returns a summary containing:
* user (name, imageURL, bio, ...)
* Friend status (relative to an active user)
* Is the user a friend of the active user?
* Has the user sent a friend request to the acti
.id
LEFT OUTER JOIN friend_requests s ON s.to_id = 33 AND s.from_id = u.id
LEFT OUTER JOIN friend_requests r ON r.to_id = u.id AND r.from_id = 33
WHERE u.id != 33 AND u.name LIKE '%John%' ORDER BY u.name;
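An EXISTS variant of the same summary, for comparison with the OUTER JOIN version above (the friends table and the exact SELECT list are assumptions; 33 stands in for the active user's id):

```sql
SELECT u.id, u.name,
       EXISTS (SELECT 1 FROM friends f
               WHERE f.user_id = 33 AND f.friend_id = u.id) AS is_friend,
       EXISTS (SELECT 1 FROM friend_requests s
               WHERE s.to_id = 33 AND s.from_id = u.id)     AS requested_me,
       EXISTS (SELECT 1 FROM friend_requests r
               WHERE r.to_id = u.id AND r.from_id = 33)     AS i_requested
FROM users u
WHERE u.id != 33 AND u.name LIKE '%John%'
ORDER BY u.name;
```

Each EXISTS stops at the first matching row, and unlike the LEFT JOINs it can never multiply rows in the result.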
On Tue, Dec 9, 2014 at 10:15 AM, Robert DiFalco
wr
saying that you would not remove both rows?
Thanks!
On Thu, Nov 13, 2014 at 8:10 AM, Jonathan Vanasco wrote:
>
> On Nov 11, 2014, at 5:38 PM, Robert DiFalco wrote:
>
> > Thoughts? Do I just choose one or is there a clear winner? TIA!
>
>
> I prefer this model
>
>
Thanks everyone.
Either I'm not that smart or I am working on too many things at once (or
both) but making Full Text work seems super tedious. I just have a single
VARCHAR field for name, so the full name "William S. Burroughs" is a single
row and column. I want to as simply as possible have the a
I notice there are several modules to create specialized indices in
PostgreSQL for searching VARCHAR data. For example, fuzzy, trigram, full
text, etc.
I've been googling around but I can't find the optimal method (reasonable
speed and size, simplicity) for my use case.
My text searches will alw
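For substring and fuzzy matches on a single name column, pg_trgm with a GIN index is often the simplest of those options; a sketch:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- A trigram GIN index lets ILIKE '%...%' and similarity searches use an index:
CREATE INDEX users_name_trgm_idx ON users USING gin (name gin_trgm_ops);

-- substring match
SELECT name FROM users WHERE name ILIKE '%burrou%';

-- fuzzy match that tolerates misspellings
SELECT name
FROM users
WHERE name % 'Wiliam Buroughs'                    -- trigram similarity operator
ORDER BY similarity(name, 'Wiliam Buroughs') DESC;
```

Full text search, by contrast, matches on word stems rather than substrings, which is why it feels like a poor fit for a single short name column.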
I have a question about modeling a mutual relationship. It seems basic but
I can't decide; maybe it is six of one, half a dozen of the other.
In my system any user might be friends with another user, that means they
have a reciprocal friend relationship.
It seems I have two choices for modeling it.
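The two choices, sketched as DDL (column names are placeholders):

```sql
-- Option A: two reciprocal rows per friendship; symmetric lookups are a
-- single probe, at the cost of doubling the rows.
CREATE TABLE friends (
    user_id   BIGINT NOT NULL REFERENCES users(id),
    friend_id BIGINT NOT NULL REFERENCES users(id),
    PRIMARY KEY (user_id, friend_id)
);

-- Option B: one row per friendship in a canonical order; half the rows,
-- but every lookup must check both columns.
CREATE TABLE friendships (
    user_a BIGINT NOT NULL REFERENCES users(id),
    user_b BIGINT NOT NULL REFERENCES users(id),
    PRIMARY KEY (user_a, user_b),
    CHECK (user_a < user_b)
);
```

Option B's CHECK constraint guarantees each pair is stored exactly once, while option A pushes that burden onto the application (or a trigger) to keep both rows in sync.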
> Except for the fact that I get the new id returned from the first insert,
> which means that the insert probably did happen.
>
> Susan
>
>
>> On Wed, Apr 16, 2014 at 11:55 PM, Alban Hertroys wrote:
>> On 17 Apr 2014, at 2:49, David G Johnston wrote:
>>
Two common cases I can think of:
1. The Perl framework is only caching the insert and does not actually
perform it until commit is issued.
2. You really are not on the same transaction even though it appears you
are and the transaction isolation is such that you cannot see the insert
until it is f
iod).
On Wed, Apr 16, 2014 at 8:42 AM, Roxanne Reid-Bennett wrote:
> On 4/15/2014 9:10 PM, Robert DiFalco wrote:
>
>> 1. >500K rows per day into the calls table.
>> 2. Very rarely. The only common query is gathering users that have not
>> been called "today
I mean whether I de-normalize as people are
asking or not, there would still be the question of a summary table for MAX
and COUNT queries or to not have a summary table for those. I probably made
the original question too open ended.
On Tue, Apr 15, 2014 at 3:26 PM, Roxanne Reid-Bennett wrote:
&g
ch and the completely normalized set of tables approach. Thanks for
your input!
On Tue, Apr 15, 2014 at 8:12 AM, Chris Curvey wrote:
>
>
>
> On Tue, Apr 15, 2014 at 10:56 AM, Chris Curvey wrote:
>
>> On Mon, Apr 14, 2014 at 12:27 PM, Robert DiFalco <
>> robert.difa...
answered_date would be null if the call was
never answered.
On Tue, Apr 15, 2014 at 5:37 AM, Vincent Veyron wrote:
> On Mon, 14 Apr 2014 15:22:13 -0700
> Robert DiFalco wrote:
>
> Hi Robert,
>
> > But then I lose a bunch of data like the TIMESTAMPTZ of the call, an
, Rob Sargent wrote:
> On 04/14/2014 04:22 PM, Robert DiFalco wrote:
>
> But then I lose a bunch of data like the TIMESTAMPTZ of the call, answer,
> connection, etc. Btw, currently these tables never need to be UPDATEd. They
> are immutable in the current design. And in the end
wrote:
>
>
> On Mon, 14 Apr 2014 09:27:29 -0700
> Robert DiFalco wrote:
>
> > I have several related tables that represent a call state.
> >
> > And so on for calls_connected, calls_completed, call_errors, etc.
> >
> > So for my question -- is the choice betwee
I have several related tables that represent a call state. Let's think of
these as phone calls to simplify things. Sometimes I need to determine the
last time a user was called, the last time a user answered a call, or the
last time a user completed a call.
The basic schema is something like this:
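The schema itself is cut off above; a hypothetical reconstruction from the table names mentioned in the thread (all columns are guesses):

```sql
CREATE TABLE calls (
    id        BIGSERIAL PRIMARY KEY,
    user_id   BIGINT NOT NULL REFERENCES users(id),
    called_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE TABLE calls_answered (
    call_id     BIGINT PRIMARY KEY REFERENCES calls(id),
    answered_at TIMESTAMPTZ NOT NULL
);

-- "Last time a user answered a call" is then a MAX over a join:
SELECT MAX(a.answered_at)
FROM calls c
JOIN calls_answered a ON a.call_id = c.id
WHERE c.user_id = 1;
```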
You're right. This was my error.
Sent from my iPhone
> On Apr 7, 2014, at 8:53 PM, Tom Lane wrote:
>
> Robert DiFalco writes:
>> Is there any way to set the timezone of the postgres server differently
>> than the OS? I *have* tried setting timezone = 'UTC
Is there any way to set the timezone of the postgres server differently
than the OS? I *have* tried setting timezone = 'UTC' in my postgresql.conf
file but that seems to be a VIEW of time zone. It doesn't seem to behave
the same as running a postgresql server on a machine that is in UTC. The
machin
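On that question: timestamptz values are always stored as UTC internally; the TimeZone setting only controls how they are rendered, so the server can be pinned to UTC without touching the OS. A sketch (database name is a placeholder):

```sql
-- default for all new sessions on this database
ALTER DATABASE mydb SET timezone = 'UTC';

-- or per session
SET timezone = 'UTC';
SHOW timezone;
SELECT now();  -- rendered in the session's timezone
```

The "view of time zone" impression is accurate: changing TimeZone never rewrites stored data, it only changes the display and the interpretation of zone-less input.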
Heh, scratch that, the EXISTS query DOES work. Is this the most efficient
way to perform this kind of query? Thanks!
On Tue, Apr 1, 2014 at 1:21 PM, Robert DiFalco wrote:
> I have two queries I would like to combine into one.
>
> I have a table that represents a user's contacts.
I have two queries I would like to combine into one.
I have a table that represents a user's contacts. It has fields like "id,
owner_id, user_id". Owner ID cannot be null but user_id can be null. They
are numeric fields; the ID is just generated.
I want a query to retrieve all of a user's contacts