Re: [GENERAL] Postgresql and github

2017-11-09 Thread Nicklas Avén
On Thu, 2017-11-09 at 12:27 +0100, Poul Kristensen wrote:
> Hi!
> 
> How come PostgreSQL is not present on GitHub with the latest
> release?
> 
> It would be very convenient to deploy PG using Ansible.
> 
> Oracle (latest release) is available through GitHub.
> 
> TIA
> 
> Poul
> 

How can you say Oracle is present on GitHub?
In the link in your last post it clearly states that "You'll need to
manually download the Oracle software and make it available to the
control-machine (either locally or on a web-server, or through a nfs-
share) before running the playbook."


As I understand it, Ansible is just some deployment automation?


PostgreSQL is available as binaries and sources from a lot of places,
but Oracle is not, or am I missing something here?



Regards


Nicklas Avén

Re: [GENERAL] Logical replication in large scale

2017-09-20 Thread Nicklas Avén
On Wed, 2017-09-20 at 09:44 +0200, Magnus Hagander wrote:
> On Wed, Sep 20, 2017 at 8:53 AM, Nicklas Avén <nicklas.aven@jordogskog.no> wrote:
> > Hallo all
> > 
> > I am thrilled about logical replication in PostgreSQL 10. My head has
> > started spinning about use cases.
> > 
> > Would it be possible to use logical replication as a distribution
> > method of data?
> 
> As an answer to the generic question: yes :)
> 
>  
> > I think about map data from national mapping authorities. The problem
> > is to get the updates of their data sets. Especially the open data sets
> > are now distributed as files (shape files) in Sweden and as pg_dump in
> > Norway.
> > 
> > I guess it is beyond what logical replication is designed for, so I ask
> > what problems might arise for a scenario like this:
> > 
> > The distributor has a publication database with logical replication
> > publications of the tables. All users, probably thousands or more,
> > would subscribe to that publication to get an updated copy of the data
> > set.
> > 
> > How would the publication server react? I guess the WAL files will only
> > be written once anyway?
> 
> Yes. But they will need to be kept around until *all* subscribers
> have pulled down their changes. So even one subscriber falling behind
> will mean your WAL will never get cleaned up.
> 
> Of course, you can keep some sort of watcher process that kills old
> replication slots.
> 
> I am also not sure how well PostgreSQL will react to having thousands
> of replication slots. It's not what the system was designed for I
> believe :)
> 
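
(For reference, a minimal sketch of such a watcher query, assuming
PostgreSQL 10 catalog names; the staleness threshold and slot name are
made up:)

-- how far behind each logical slot is, in bytes:
SELECT slot_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) AS lag_bytes
FROM pg_replication_slots
WHERE slot_type = 'logical';

-- then, for each slot judged too far behind:
SELECT pg_drop_replication_slot('some_stale_slot');
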
Ok, I have to read up on how this works. I thought about it as a
bucket of WAL files that the subscribers just "grab". But of course
there is some bookkeeping to make things work.
I guess there are also mechanisms so a new subscriber can get a complete
table after the publication has been active for a long time? I mean, the
"cleaning up" of WAL files will not leave new subscribers missing what
was written to the table long ago?

> You might be better off using logical decoding (which goes back to
> 9.4) to stream the data out, but not having each subscriber be a
> PostgreSQL subscriber. Either using it to generate some sort of "diff
> files" that can then be consumed downstream, or by distributing it
> via some kind of dedicated queuing system designed to handle that
> many downstreams.
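
(A sketch of that logical-decoding route, using the built-in test_decoding
plugin as a stand-in for a real output plugin; the slot name is illustrative:)

SELECT pg_create_logical_replication_slot('map_diffs', 'test_decoding');

-- periodically consume the changes and turn them into diff files:
SELECT lsn, xid, data
FROM pg_logical_slot_get_changes('map_diffs', NULL, NULL);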

The thing is that I am not involved in any of those organisations, just
a user that wants to give them some good reasons to consider
distributing through PostgreSQL :-)
>  
> > My guess is that it will be a lower payload than today anyway, when
> > the whole data set has to be fetched to get updates.
> 
> As always that's going to depend on the number of updates. If the
> same thing is updated 1000 times, then if you use logical replication
> it gets transferred 1000 times. So there are definitely cases when
> there will be *more* traffic with logical replication, but in cases
> like yours I would guess it will be less.
> 
> //Magnus
> 
> 
> 

[GENERAL] Logical replication in large scale

2017-09-20 Thread Nicklas Avén
Hallo all

I am thrilled about logical replication in PostgreSQL 10. My head has
started spinning about use cases.

Would it be possible to use logical replication as a distribution
method of data?

I think about map data from national mapping authorities. The problem
is to get the updates of their data sets. Especially the open data sets
are now distributed as files (shape files) in Sweden and as pg_dump in
Norway.

I guess it is beyond what logical replication is designed for, so I ask
what problems might arise for a scenario like this:

The distributor has a publication database with logical replication
publications of the tables. All users, probably thousands or more,
would subscribe to that publication to get an updated copy of the data
set.
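
(For concreteness, a minimal sketch of that setup in PostgreSQL 10 syntax;
the table, database and host names here are invented:)

-- on the distributor:
CREATE PUBLICATION map_data FOR TABLE parcels, roads;

-- on each subscriber:
CREATE SUBSCRIPTION map_data_sub
    CONNECTION 'host=distributor.example.org dbname=maps'
    PUBLICATION map_data;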

How would the publication server react? I guess the WAL files will only
be written once anyway?

My guess is that it will be a lower payload than today anyway, when the
whole data set has to be fetched to get updates.

In the future maybe it would be possible to make some sort of
distributed arrangement so the replication can be against any of the
client servers instead, to reduce the payload on the original server. But
that would of course require some way to check data integrity against
the original server, with an md5 hash or something, so the result is
identical to the original.
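
(A sketch of one such check; run on both servers and compare the results.
The table name is invented, and the ORDER BY makes the hash deterministic:)

SELECT md5(string_agg(p::text, ',' ORDER BY p.id))
FROM parcels p;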

ATB

Nicklas Avén




[GENERAL] ERROR: cannot convert relation containing dropped columns to view

2016-02-22 Thread Nicklas Avén
Hello

I get this error message:
ERROR:  cannot convert relation containing dropped columns to view

I have googled, but find only very old posts that don't seem to describe
the same situation.

What I have done is add a column for deletion time to a quite big
table.

Then I dropped that column and added it again with the right type.

After that I cannot create a rule that returns data.

The reason I have to return data is irrelevant here, but PostgREST
expects that.

To reproduce:
create table foo
(
id serial,
deleted int
);

alter table foo drop column deleted;
alter table foo add column deleted timestamp;

CREATE or replace RULE del_post AS ON DELETE TO foo
DO INSTEAD
update foo set deleted = now()
WHERE id = OLD.id
returning *;

returns:
ERROR:  cannot convert relation containing dropped columns to view


If I don't drop any column (adding the right type at once) it works as
expected.

Two questions:
1) Is this a bug?
2) Is there a way to "clean" the table from the dropped columns without
recreating it?
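
(On question 2: as far as I know the only way is to rebuild the table. A
minimal sketch, assuming nothing else depends on foo; indexes, constraints
and the serial default would have to be recreated by hand:)

BEGIN;
CREATE TABLE foo_clean AS SELECT * FROM foo;  -- copies the data, not the dropped-column baggage
DROP TABLE foo;
ALTER TABLE foo_clean RENAME TO foo;
COMMIT;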

Best Regards

Nicklas Avén






[GENERAL] Different result depending on order of joins

2015-05-22 Thread Nicklas Avén

Hallo


I was a little surprised by this behavior.
Is this what is supposed to happen?


This query returns what I want:



with
a as (select generate_series(1,3) a_val)
,b as (select generate_series(1,2) b_val)
,c as (select generate_series(1,1) c_val)
select * from a 
inner join c on a.a_val=c.c_val
full join b on a.a_val=b.b_val
;


I get all values from b since it only has a full join and nothing else.


But if I change the order of the joins, like this:



with 
a as (select generate_series(1,3) a_val)
,b as (select generate_series(1,2) b_val)
, c as (select generate_series(1,1) c_val)
select * from a 
full join b on a.a_val=b.b_val
inner join c on a.a_val=c.c_val
;


b is also limited to returning only value 1.


I thought that the join was defined by on a.a_val=c.c_val,
and that the relation between b and the rest wasn't affected by that
last inner join.
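
(For reference: joins in a FROM list nest left to right, so the second
query is effectively parsed as below, with the same CTEs assumed:)

select * from (a full join b on a.a_val = b.b_val)
inner join c on a.a_val = c.c_val;
-- the b-only rows come out of the full join with a.a_val NULL, so they
-- cannot satisfy a.a_val = c.c_val and are discarded by the inner join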


I use PostgreSQL 9.3.6


Is this the expected behavior?


Thanks


Nicklas Avén





Re: [GENERAL] I want the stupidest possible binary export

2014-09-18 Thread Nicklas Avén
I use PHP and pg_unescape_bytea

http://php.net/manual/en/function.pg-unescape-bytea.php

You also need to set the bytea output format to escaped before running your query.
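
(That setting is the bytea_output parameter, which defaults to hex since
9.0; a one-line sketch:)

SET bytea_output = 'escape';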

If PHP can be an option.

/Nicklas Avén

Sent from my Samsung mobile.

Original message
From: Jov am...@amutu.com
Date: 2014-09-18 16:55 (GMT+01:00)
To: David Rysdam drys...@ll.mit.edu, pgsql-general pgsql-general@postgresql.org
Subject: Re: [GENERAL] I want the stupidest possible binary export
psql can only input/output text strings, which cannot be binary
content. With 9.2, you can encode bytea to base64, save it to a file, then
use a shell command to decode the file. Googling "amutu.com pg bytea" can
get a blog post.

With the upcoming 9.4, you can change bytea to a large object, then use
the lo_* psql commands to save it to a file.
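
(A sketch of that base64 route from the shell, reusing the column and table
names from the question below; assumes a Unix base64 tool:)

psql -t -A -c "select encode(mybinaryfield, 'base64') from mytable where key = 1" > field.b64
base64 -d field.b64 > field.bin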

On 2014-09-18 10:09 PM, David Rysdam drys...@ll.mit.edu wrote:
I've got some tables with bytea fields, and I want to export only the
binary data to files. (Each field has a gzipped data file.)

I really want to avoid adding overhead to my project by writing a
special program to do this, so I'm trying to do it from psql. Omitting
the obvious switches for username, etc, here's what I'm doing:

psql -t -c "\copy (select mybinaryfield from mytable where key = 1) to 'file'"

That works, but I get escaped bytes. I want actual binary directly out
of the DB. Another option might be:

psql -t -c "\copy (select mybinaryfield from mytable where key = 1) to 'file' with format binary"

However, there are two problems. First, I get a syntax error at or
near "format". (Running 9.2 client and server.) And second, I suspect
that'll be some proprietary PG format, not the actual bytes from just
my field.

What option am I missing?




[GENERAL] How to ignore blank lines with file_fdw

2014-04-22 Thread Nicklas Avén
Hallo


I am struggling to find the best solution to ignore blank lines in a csv
file when using file_fdw.


A blank line makes the table unreadable.


I would like to avoid manipulating the file directly and avoid the need to make 
a new corrected copy of the file.


I am on Linux so I have found a solution when using COPY:
COPY test_table from program 'sed ''/^ *$/d'' /opt/builds/inotify_test/test.csv' with (format 'csv', header 'true');


but since the program option does not seem to be implemented in file_fdw
I am still searching for a solution.
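
(For reference, the file_fdw setup in question looks roughly like this;
the column list is hypothetical since the real csv layout isn't shown:)

CREATE EXTENSION file_fdw;
CREATE SERVER csv_srv FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE test_table (
    id  integer,
    val text
) SERVER csv_srv
  OPTIONS (filename '/opt/builds/inotify_test/test.csv',
           format 'csv', header 'true');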


I have also found in an email from 2011
http://www.postgresql.org/message-id/4e699de6.8010...@gmail.com

that when force_not_null was implemented in file_fdw, the patch also
included some cosmetic changes such as removing useless blank lines.
But I do not find that blank lines are removed in general, since I cannot
read csv files with blank lines, and I do not understand how the option
force_not_null could do the trick, since that is on the column level and
not on the line/row level.


Any good ideas out there?


Thanks
Nicklas Avén

Re: [GENERAL] can I pass arguments directly to final function in aggregates

2013-05-20 Thread Nicklas Avén

Thank you Tom

On 05/19/2013 01:26 AM, Tom Lane wrote:

Nicklas Avén nicklas.a...@jordogskog.no writes:

Perhaps you could construct your usage like this:

post_process_function(aggregate_function(...), fixed_argument)

where the aggregate_function just collects the varying values
and then the post_process_function does what you were thinking
of as the final function.




Maybe that is the way I have to go. But I would like to avoid it because 
I think the interface gets a bit less clean for the users.


I also suspect that it causes some more memcopying to get out of the
aggregate function and into a new function. (Am I right about that?)


As I understand it I have two options:

1) Do as you suggest and divide the process into one aggregate function
and one post-processing function.
2) Construct a structure for the state value that can hold those
values. In this case those arguments are just one smallint and one char(3).
I will just have to handle them for the first row to store them in my
structure; then I can just ignore them. Am I right that it will be a
very small overhead even if those values are sent to the function for
each row?


My question is if I can get further advice about what bottlenecks and 
traps I should consider.


What I am aggregating is geometries (PostGIS). It can be 1 to millions 
of rows, and the geometries can be points of a few bytes to complex 
geometry-collections of many megabytes.


Regards

/Nicklas




[GENERAL] can I pass arguments directly to final function in aggregates

2013-05-18 Thread Nicklas Avén

Hallo

I am trying to create an aggregate function.

One of the arguments is static and is not needed until the final function.
From the docs I see that the final function must take a single
argument of type state_data_type.

Is it not possible to just pass this argument directly to the final
function?


I was looking at string_agg but that was doing all the processing in the 
SFUNC if I understand it right.


But to use it as an example: if it instead had collected all strings
in an array and added the delimiter in the final function, would that be
possible?


I would expect the definition of the aggregate to look something like:

CREATE AGGREGATE string_agg(text, text) (
  SFUNC=string_agg_transfn($1),
  STYPE=internal,
  FINALFUNC=string_agg_finalfn($2)
);

Am I missing something, or am I just thinking in irregular circles? (That
happens now and then.)
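
(For what it's worth, a runnable sketch of the collect-then-finalize shape
under the current rules; not the real string_agg, and the table and column
names are invented:)

CREATE AGGREGATE collect_text(text) (
    SFUNC    = array_append,
    STYPE    = text[],
    INITCOND = '{}'
);

-- the "final function" with its extra argument becomes an outer call:
SELECT array_to_string(collect_text(some_column), ', ') FROM some_table;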



Best Regards

Nicklas Avén



Re: [GENERAL] FREE hosting platforms with PostgreSQL, Java SDK, Tomcat, ecc.?

2011-08-06 Thread Nicklas Avén
Open Source is about sharing ideas and source code, which cost nothing
to copy once invented. The link you are providing is
the same thing. All information about the chip is open. But that doesn't
mean that anyone will send you a chip for free.

Hosting a project on the web needs resources like electric power,
computer hardware, cooling hardware, server rooms,
people willing to get out of bed in the middle of the night to fix a
problem to keep the uptime up, and so on. Those resources are not
possible to just copy for free. Those resources are bound to the service
you get and not just to the idea behind the service.

There is something else to note. Open source software also costs a
lot to develop. The idea is just that if a lot of people share what they
develop for their customers or for their own sake, that will benefit
development for everybody. But it is still a cost someone has to take,
in terms of time or paying a developer.

So, I think seeking "free" in the sense of no cost is quite provoking
on a list like this. Open Source is about sharing ideas, not giving
things away for free. It is just a great side effect that what can be
copied without cost is shared without cost.

But so far there is no technology to copy the hardware, server rooms,
internet bandwidth and so on at no cost.


Regards

Nicklas Avén



On Sat, 2011-08-06 at 15:49 +0200, Fernando Pianegiani wrote:
 Thank you Antonio.
 
 After open source for software, we will wait for open resources for
 the hardware (http://www.arduino.cc/ is just a first example,
 even if of a different nature).
 
 I need to eat too; for this reason I cannot pay for a hosting
 platform now that my funded research project has ended.
 
 Fernando
 
 On Sat, Aug 6, 2011 at 3:28 PM, Antonio Goméz Soto
 antonio.gomez.s...@gmail.com wrote:
 Well,
 
 I am from the hosting business, and can assure you, what you
 are looking for does not exist.
 This configuration requires specialists on the provider side,
 who are expensive. They need to eat too.
 
 And history teaches that even if it did exist, you should
 not put anything meaningful on it,
 because they would surely go out of business soon.
 
 Antonio.
 
 
 On 06-08-11 10:02, Fernando Pianegiani wrote:
 
 
 Hello,
 
 do you know any FREE hosting platforms where
 PostgreSQL, Java SDK, Tomcat (or other web servers)
 can already be found installed, or where they can be
 installed from scratch? If possible, it would be
 better if PostgreSQL were directly accessible by my
 servlet, without any web service/PHP script in the
 middle.
 
 Thank you very much in advance.
 
 Kind regards.
 
 Fernando Pianegiani
 
 





Re: [GENERAL] (not equal to) in where clause can use index

2011-02-08 Thread Nicklas Avén
One way to get around the problem is often to do a left or right join,
use = instead of <>, and catch the rows with null in the id field,
something like:

SELECT a.* from
tablea a LEFT JOIN tableb b on a.testfld = b.testfld
WHERE b.id is null;

This one will use indexes on a.testfld and b.testfld.

HTH

/Nicklas

On Tue, 2011-02-08 at 15:09 +0600, AI Rumman wrote:
 Is it possible to add some operator class in PostgreSQL 9 so that
 <> (not equal to) in a where clause can use an index?
 
 If yes, how?
 
 Any ideas please.





Re: [GENERAL] how to avoid repeating expensive computation in select

2011-02-03 Thread Nicklas Avén
That's interesting Tom.


This is a discussion that comes up sometimes on the PostGIS lists, since
PostGIS often handles expensive calculations.

Regina wrote a blog post about it:
http://postgresonline.com/journal/archives/113-How-to-force-PostgreSQL-to-use-a-pre-calculated-value.html

I thought the offset 0 trick was just a dirty hack, but coming from
you, Tom, I assume it is a robust way of doing it.

I also tried some of the queries we tried then, almost 2 years ago, and
it seems like PostgreSQL handles this much better in 9.0. Is that
possible?
What was strange then was that PostGIS functions marked immutable were
also recalculated between the SELECT and WHERE clauses, and also if used
multiple times in the SELECT part.

But I think (from very few tests) that the result of the function is
reused in a better way now.

Can that be the case?

Thanks

Nicklas Avén 


On Thu, 2011-02-03 at 13:16 -0500, Tom Lane wrote: 
 Bob Price rjp_em...@yahoo.com writes:
  I would like to know if there is a way in PostgreSQL to avoid repeating an 
  expensive computation in a SELECT where the result is needed both as a 
  returned value and as an expression in the WHERE clause.
 
 Use a subselect.  You might need OFFSET 0 to prevent the planner from
 flattening the subselect, eg
 
 SELECT whatever FROM
   (SELECT *, expensivefunc(value) AS score FROM mytable OFFSET 0) ss
 WHERE id LIKE '%z%' AND score > 0.5;
 
 Keep in mind that in the above formulation, expensivefunc will be
 evaluated at rows that don't pass the LIKE test.  So you probably want
 to push down as much as you can into the sub-select's WHERE clause.
 The planner will not help you with that if you put in the OFFSET 0
 optimization-fence.  It's a good idea to use EXPLAIN (or even better
 EXPLAIN VERBOSE, if you're using >= 8.4) to confirm that you're getting
 the plan you want.
 
   regards, tom lane
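
(A sketch of the push-down Tom describes, with the same invented names:)

SELECT whatever FROM
  (SELECT *, expensivefunc(value) AS score
     FROM mytable
    WHERE id LIKE '%z%'   -- pushed down by hand, so expensivefunc
    OFFSET 0) ss          -- runs only on rows that pass the LIKE
WHERE score > 0.5;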
 





Re: [GENERAL] Performance on multiple OR conditions inside ()

2011-01-13 Thread Nicklas Avén
No, the order is not supposed to make any difference. The planner plans
the query from costs and statistics and shouldn't care about the order
(I think)

What you should do is to divide the big geometries and build a new index.

That can be done by building a grid with generate_series and cutting the
geometry against that grid, using ST_Intersection between the grid and
your geometry. I have not tried it myself so I don't have any query
written for it.
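
(A rough, untested sketch of that grid idea; the cell size, extent and
srid are placeholders to replace with real values:)

CREATE TABLE south_vietnam72_sliced AS
SELECT ST_Intersection(v.geom,
         ST_MakeEnvelope(x.x, y.y, x.x + 1000, y.y + 1000, 4326)) AS geom
FROM south_vietnam72 v,
     generate_series(0, 100000, 1000) AS x(x),
     generate_series(0, 100000, 1000) AS y(y)
WHERE v.geom && ST_MakeEnvelope(x.x, y.y, x.x + 1000, y.y + 1000, 4326);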

Another way that might be simpler is to use some other map than that
south Vietnam map. If you have some map with smaller regions instead, the
index will be more efficient.

Also use ST_Dump to tear apart all geometries as much as possible.

Then build a new index and ANALYZE, so the planner knows what you have
done.
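
(A sketch of that ST_Dump-then-reindex step, with the table name taken
from the query below:)

CREATE TABLE south_vietnam72_parts AS
SELECT (ST_Dump(geom)).geom AS geom FROM south_vietnam72;

CREATE INDEX south_vietnam72_parts_gix
    ON south_vietnam72_parts USING gist (geom);
ANALYZE south_vietnam72_parts;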

One thing to remember is that in many cases you will get the same
geometry many times, because it is within the given distance of many of
your regions; e.g. ST_Intersects(Mjøsa, fylker) will give many
fylke-Mjøsa combinations.

Regards

Nicklas


 

On Thu, 2011-01-13 at 21:51 +0100, Håvard Wahl Kongsgård wrote:
 Hi, so one solution is to use the most common feature first (), in
 this case the houses ?
 
  > What version of PostGIS are you using?
 
 1.5.1
 
 
 2011/1/13 Nicklas Avén nicklas.a...@jordogskog.no
 Hallo Håvard
 
 The planner is supposed to take care of that. It estimates what is the
 cheapest part of the OR statements and checks that. If true, then
 nothing is done with the others.
 
 Do you have spatial indexes on the geometry columns?
 
 Do you know if they are used by the planner?
 
 Even if the indexes are in place and are used, this query will probably
 be slow because of how the spatial index works.
 
 What the spatial index does for ST_Within and ST_DWithin is to tell if
 the geometries have overlapping bounding boxes (or expanded bounding
 boxes in the ST_DWithin case). If they do, the index is of no more help
 and the rest of the calculation has to be done vertex by vertex, which
 is costly.
 
 So, the problem is when the bounding boxes cover many geometries; then
 the part of the work that the index can help with is small.
 
 There are techniques to slice the big geometry into smaller pieces and
 build a new index, and things will go faster.
 
 Long roads often have this problem. If you want to find all houses along
 a road, the bounding box test will find many more houses than those
 close to the road (if the road is not going just north/south or
 east/west).
 
 I don't think it should make any difference for the planner, but I
 would test building the query with joins instead.
 
 What version of PostGIS are you using?
 
 
 
 
  
 
 
 Regards
 
 Nicklas Avén
 
 
 
 On Thu, 2011-01-13 at 17:47 +0100, Håvard Wahl Kongsgård wrote:
  Hi, I have a spatial query with multiple OR statements; for large
  tables it's very slow. Is it possible to skip the spatial lookup on
  the other conditions if the first (previous) condition is true, and
  thereby increase the performance?
 
  SELECT vciia_main.sitrp, vciia_main.date_time
  from vciia_main, south_vietnam72, roads, rails, houses, city
  where st_within(vciia_main.geom, south_vietnam72.geom)
    and date_time is not null
    and (st_dwithin(vciia_main.geom, roads.geom, 500) OR
         st_dwithin(vciia_main.geom, rails.geom, 500) or
         st_dwithin(vciia_main.geom, city.geom, 800) or
         st_dwithin(vciia_main.geom, houses.the_geom, 500))
 
  --
  Håvard Wahl Kongsgård
  Peace Research Institute Oslo (PRIO)
 
  http://havard.security-review.net/
 
 
 
 
 
 
 
 -- 
 Håvard Wahl Kongsgård
 Peace Research Institute Oslo (PRIO)
 
 http://havard.security-review.net/
 





Re: [GENERAL] HNY-2011

2011-01-04 Thread Nicklas Avén
I just want to say I apologize for overreacting yesterday.

As someone guessed I am not even religious so I shouldn't bother. The
reason for my reaction though is that I really like those open source
communities. People are working together, collaborating to make things
better for everyone. Then, to me it is very important that everyone
feels welcome. It is a little like an oasis in a world where people put
a lot of effort into protecting, hiding, defending and, amongst other
things, fighting about religions.

But to make this work, the most important thing is to treat each other
with respect, and that is why I have to apologize because my answer
yesterday was not respectful. It was more like an instinctive reaction.

So, to Adarsh Sharma: I am sorry and I apologize.

Regards
Nicklas Avén


On Mon, 2011-01-03 at 13:47 +0100, Nicklas Avén wrote:
 Happy new year
 but spare me from any blessings, please
 
 /Nicklas
 
 2011-01-03, Adarsh Sharma wrote:
 
 Dear all,
 
 
 A very-very Happy New Year 2011 to all. May God Bless all of us to
 solve 
 future problems.
 
 
 Thanks and Regards
 
 Adarsh Sharma
 
 
 
 
 





Re: [GENERAL] HNY-2011

2011-01-03 Thread Nicklas Avén
Happy new year
but spare me from any blessings, please

/Nicklas

2011-01-03, Adarsh Sharma wrote:

Dear all,


A very-very Happy New Year 2011 to all. May God Bless all of us to solve 
future problems.


Thanks and Regards

Adarsh Sharma






Re: [GENERAL] Need Help in query

2010-12-23 Thread Nicklas Avén
Hallo

This I think should work.

To get the usernames by hour as you describe:

SELECT h.hour, usrlog.userid
FROM (select generate_series(1,24) as hour) as h
inner join usrlog
on h.hour >= usrlog.loginhr and h.hour <= usrlog.logouthr
order by h.hour, usrlog.userid;

To get the number of users per hour:

Select h.hour, count(*) as NumberOfUsers
FROM (select generate_series(1,24) as hour) h
inner join usrlog
on h.hour >= usrlog.loginhr and h.hour <= usrlog.logouthr
group by h.hour;

HTH

Nicklas



2010-12-23, Satish Burnwal (sburnwal) wrote:

I need help in writing a query. I have data as follows:

mydb=# select * from usrlog ;
 logid |  userid  | loginhr | logouthr
-------+----------+---------+----------
     0 | sburnwal |       0 |        1
     1 | rickyrs  |       1 |        5
     2 | satishbn |       1 |        6
     3 | taohujin |       2 |        4
     4 | jospehm  |       4 |        5


The table captures the login and logout time (taking the hour here to
simplify) of users, and my aim is to find the number of logged-in users
(online users) at every hr (1st hr i.e. 0-1, 2nd hr i.e. 1-2, 3rd hr i.e.
2-3, and so on). As the data indicates, a user is not logging out in the
same hr as the hr of login. A user can be logged-in for more than one hr.
For example, here user rickyrs is logged-in for the 1st, 2nd, 3rd, 4th
and 5th hr. My query needs to find out, for the last 24 hrs, how many
users were logged-in at each hr. I want the result this way:

Nth-hr   user
---
1  sburnwal
2  rickyrs
2  satishbn
3  rickyrs
3  satishbn
3  taohujin
4  rickyrs
4  satishbn
4  taohujin
4  josephm

Appreciate your response in advance. For me, even the count of users on
an hourly basis is fine.

Thanks
-Satish




Re: [GENERAL] Counting boolean values (how many true, how many false)

2010-11-16 Thread Nicklas Avén
If you want to use the boolean approach I would just (as suggested earlier)
cast to integer and sum, like:

SELECT
sum(good::int) as good,
count(good) - sum(good::int) as "not good"

and so on

I think the boolean approach seems reasonable if good, nice and fair are
three separate judgements, as I understand they are.

Regards
Nicklas



- Original message -
 Thank you all, I've ended up with the following.
 
 But I really wonder if using boolean in my case
 (trying to offer players a web form with 3 x 2 radio buttons
 to rate each other) is really the best choice -
 since it feels a bit awkward (and maybe slow?)
 
 # create table pref_rep (
     id varchar(32) references pref_users(id) check (id != author),
     author varchar(32) references pref_users(id),
     good boolean,
     fair boolean,
     nice boolean,
     about varchar(256),
     last_rated timestamp default current_timestamp
 );
 
 # select * from pref_rep ;
    id   | author | good | fair | nice |   about   |         last_rated
 --------+--------+------+------+------+-----------+----------------------------
  DE7085 | DE7317 | t    | t    | t    | comment 1 | 2010-11-16 20:26:04.780827
  DE7085 | DE7336 | t    |      | t    | comment 1 | 2010-11-16 20:26:14.510118
  DE7085 | DE7641 | t    | f    | t    | comment 2 | 2010-11-16 20:26:29.574055
  DE7085 | DE7527 | f    | f    | t    | comment 3 | 2010-11-16 20:26:45.211207
  DE7085 | DE7184 | f    | f    | f    | comment 3 | 2010-11-16 20:26:56.30616
 (5 rows)
 
 # select
 sum(case when good then 1 else 0 end) as good,
 sum(case when not good then 1 else 0 end) as "not good",
 sum(case when fair then 1 else 0 end) as fair,
 sum(case when not fair then 1 else 0 end) as "not fair",
 sum(case when nice then 1 else 0 end) as nice,
 sum(case when not nice then 1 else 0 end) as "not nice"
 from public.pref_rep;
 
  good | not good | fair | not fair | nice | not nice
 ------+----------+------+----------+------+----------
     3 |        2 |    1 |        3 |    4 |        1
 (1 row)
 