Vinayak wrote:
We have converted Oracle SYSDATE to PostgreSQL statement_timestamp() but
there is a difference in timezone.
SYSDATE returns the time on the server where the database instance is
running (i.e. operating system time), so the time depends on the OS
timezone setting.
while the
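One way to make the converted code behave predictably is to pin the session time zone, since statement_timestamp() is reported in the session's TimeZone setting rather than the server OS zone. A minimal sketch, assuming (as an example only) that the DB server's OS zone is 'Asia/Tokyo':

```sql
-- statement_timestamp() follows the session TimeZone, not the OS clock,
-- so pin the session to the server's zone ('Asia/Tokyo' is an example):
SET timezone = 'Asia/Tokyo';
SELECT statement_timestamp();

-- or convert a single value explicitly:
SELECT statement_timestamp() AT TIME ZONE 'Asia/Tokyo';
```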
Arze, Cesar ca...@som.umaryland.edu writes:
creating template1 database in /mnt/pg_data/base/1 ... FATAL: could
not open file pg_xlog/000000010000000000000001 (log file 0, segment
1): No such file or directory
We've seen something slightly similar when running PostgreSQL in a Linux
Yogesh. Sharma wrote
Dear David,
Are you currently using PostgreSQL?
Currently we are using PostgreSQL 8.1.18 version on RHEL 5.8.
We now plan to upgrade to PostgreSQL 9.0 on RHEL 6.5, since version 9.0
appears to have the fewest compatibility issues.
So, please guide me.
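Worth noting: the pg_upgrade shipped with 9.0 only supports source clusters of 8.3 or later, so an 8.1 cluster normally goes through dump and restore. A sketch of that path (binary paths and ports are assumptions for a typical RHEL packaging, not tested values):

```sql
-- Run from the shell, using the NEW version's pg_dumpall against the old
-- 8.1 server, then restore into a freshly initdb'ed 9.0 cluster:
--   /usr/pgsql-9.0/bin/pg_dumpall -h oldhost -p 5432 > /tmp/cluster.sql
--   /usr/pgsql-9.0/bin/psql -p 5433 -d postgres -f /tmp/cluster.sql
```

Using the newer pg_dumpall is the usual advice because it knows how to emit a dump the 9.0 server will accept.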
Regards,
I've read some on table partitioning and using nested select statements
with group by, but have not found the syntax to produce the needed results.
From a table I extract row counts grouped by three columns:
select stream, sampdate, func_feed_grp, count(*) from benthos group
by stream,
Rich Shepard wrote
I've read some on table partitioning and using nested select statements
with group by, but have not found the syntax to produce the needed
results.
From a table I extract row counts grouped by three columns:
select stream, sampdate, func_feed_grp, count(*) from
Greetings!
I'm looking for tools/resources/ideas for making pg_dump's output
compatible with SQLite v. 3.1.3.
Ideally, I'd love to be able to do something like this (Unix):
% rm -f mydatabase.db
% pg_dump --no-owner --inserts mydatabase | pg_dump2sqlite3 | sqlite3
mydatabase.db
...where
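As far as I know there is no stock pg_dump2sqlite3 tool, but a first cut can be a small sed filter that drops the PostgreSQL-only statements from a --inserts dump. A minimal sketch; the function name and the patterns are assumptions and far from exhaustive:

```shell
# pg_dump2sqlite3: strip statements SQLite 3 will reject from a
# "pg_dump --no-owner --inserts" stream. Patterns are illustrative only.
pg_dump2sqlite3() {
  sed -e '/^SET /d'                \
      -e '/^SELECT pg_catalog\./d' \
      -e '/^COMMENT ON /d'         \
      -e '/OWNER TO/d'
}
```

With something like that defined, the pipeline from the original message becomes: pg_dump --no-owner --inserts mydatabase | pg_dump2sqlite3 | sqlite3 mydatabase.db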
On Fri, Aug 29, 2014 at 9:06 AM, Kynn Jones kyn...@gmail.com wrote:
Greetings!
I'm looking for tools/resources/ideas for making pg_dump's output compatible
with SQLite v. 3.1.3.
Ideally, I'd love to be able to do something like this (Unix):
% rm -f mydatabase.db
% pg_dump --no-owner
On 08/29/2014 07:40 AM, John McKown wrote:
On Fri, Aug 29, 2014 at 9:06 AM, Kynn Jones kyn...@gmail.com wrote:
Greetings!
I'm looking for tools/resources/ideas for making pg_dump's output compatible
with SQLite v. 3.1.3.
Ideally, I'd love to be able to do something like this (Unix):
% rm
You're correct. It is Friday, leading to a 3-day weekend here, and it
is a short work day too. So my brain has definitely already left the
building. Thanks for pointing that out. I use SQLite some, but just
for very basic stuff and am not really familiar with it. Perhaps Kynn
could show what, in
On 08/28/2014 09:14 PM, Yogesh. Sharma wrote:
Dear David,
Are you currently using PostgreSQL?
Currently we are using PostgreSQL 8.1.18 version on RHEL 5.8.
We now plan to upgrade to PostgreSQL 9.0 on RHEL 6.5, since version 9.0
appears to have the fewest compatibility issues.
So what are the
On 08/28/2014 10:06 PM, Vinayak wrote:
Hello,
We have converted Oracle SYSDATE to PostgreSQL statement_timestamp() but
there is a difference in timezone.
SYSDATE returns the time on the server where the database instance is
running (i.e. operating system time), so the time depends on the OS
Hello,
I am using Postgres 9.3.5 and have spotted a performance issue
with postgres_fdw.
I have a table object_003_xyz with 275,000 rows, which is
exported to the master node as master_object_003_xyz.
( The following query is only a part of an automatically
generated complex query. )
On Fri, 29 Aug 2014, David G Johnston wrote:
You want to use window clause/function.
David,
I read about this, but did not absorb everything.
Add the following to the first query, in the select-list:
Sum(count(*)) over (partition by stream, sampdate) as stream_date_total
You function
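Putting David's suggestion together with the original query, the whole statement might look like this (table and column names are taken from Rich's post; an untested sketch):

```sql
-- Window functions run after GROUP BY, so summing the per-group counts
-- over each (stream, sampdate) partition is legal in one pass:
SELECT stream, sampdate, func_feed_grp,
       count(*) AS grp_count,
       sum(count(*)) OVER (PARTITION BY stream, sampdate) AS stream_date_total
FROM benthos
GROUP BY stream, sampdate, func_feed_grp;
```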
On 08/29/2014 09:50 AM, Rich Shepard wrote:
On Fri, 29 Aug 2014, David G Johnston wrote:
You want to use window clause/function.
David,
I read about this, but did not absorb everything.
Add the following to the first query, in the select-list:
Sum(count(*)) over (partition by stream,
On Fri, 29 Aug 2014, Adrian Klaver wrote:
I am going to assume you mean Postgres did not like the syntax.
Adrian,
Oops! Mea culpa. Yes, postgres.
What was the error message you got back?
I don't recall. It was yesterday afternoon and I flushed it from memory
when it did not work.
Hello Postgresql users,
Is there a function to save schema history internally?
By keeping the schema history inside the DB, we can keep track of what
changed in the schema and when.
While searching Google, it seems this is a limitation of the audit trigger:
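Ordinary audit triggers cannot fire on DDL, but PostgreSQL 9.3 and later have event triggers that can. A minimal sketch of logging DDL commands internally; the table and function names are illustrative:

```sql
-- Record every DDL command tag with who ran it and when.
CREATE TABLE ddl_history (
    changed_at timestamptz NOT NULL DEFAULT now(),
    username   text        NOT NULL DEFAULT current_user,
    command    text        NOT NULL      -- e.g. 'ALTER TABLE'
);

CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger AS $$
BEGIN
    INSERT INTO ddl_history (command) VALUES (tg_tag);
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER track_ddl
    ON ddl_command_end EXECUTE PROCEDURE log_ddl();
```

This only captures the command tag, not the full statement text, but it gives a timestamped in-database trail of schema changes.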
Hello list,
Is there a way to "alter column type to varchar"
(previously varchar(***)) without dropping and re-creating views?
Basically, I am looking for a way to change the column without having to
drop/re-create dependent views.
varchar(***) to varchar
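A plain ALTER TABLE ... ALTER COLUMN ... TYPE varchar will refuse to run while views depend on the column. A catalog hack that sometimes comes up on this list is to clear the length modifier directly; it is unsupported, bypasses normal checks, and should only be tried against a backup first. The table and column names below are placeholders:

```sql
-- Widen varchar(n) -> varchar by hand (UNSUPPORTED catalog surgery;
-- take a backup first). atttypmod = -1 means "no length limit".
UPDATE pg_attribute
   SET atttypmod = -1
 WHERE attrelid = 'my_table'::regclass
   AND attname  = 'my_column';
```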
Hello list,
Is there a way to "alter column type to varchar"
(previously varchar(***)) without dropping and re-creating views?
Basically, I am looking for a way to change the column without having to
drop/re-create dependent views.
On 08/29/2014 12:09 PM, Emi Lu wrote:
Hello list,
Is there a way to alter column type to varchar (previously
varchar(***)) without dropping and re-creating views?
Basically, I am looking for a way to change the column without having to
drop/re-create dependent views.
varchar(***) to varchar and no
On 08/29/2014 10:15 AM, Rich Shepard wrote:
On Fri, 29 Aug 2014, Adrian Klaver wrote:
I am going to assume you mean Postgres did not like the syntax.
Adrian,
Oops! Mea culpa. Yes, postgres.
What was the error message you got back?
I don't recall. It was yesterday afternoon and I
Hi Craig -- Sorry for the late response, I've been tied up with some other
things for the last day. Just to give some context, this is a machine that
sits in our office and replicates from another read slave in production via
a tunnel set up with spiped. The spiped tunnel is working and postgres
On 2014-08-29 13:04:43 -0700, Patrick Krecker wrote:
Hi Craig -- Sorry for the late response, I've been tied up with some other
things for the last day. Just to give some context, this is a machine that
sits in our office and replicates from another read slave in production via
a tunnel set up
On 08/29/2014 11:23 AM, Patrick Dung wrote:
Hello Postgresql users,
Is there a function to save schema history internally?
By keeping the schema history inside the DB, we can keep track of what
changed in the schema and when.
While searching Google, it seems this is a limitation with the
Hello,
On 08/29/2014 03:16 PM, Adrian Klaver wrote:
Is there a way to "alter column type to varchar"
(previously varchar(***)) without view drop/re-creation?
Basically,
On Fri, Aug 29, 2014 at 2:11 PM, Andres Freund and...@2ndquadrant.com
wrote:
On 2014-08-29 13:04:43 -0700, Patrick Krecker wrote:
Hi Craig -- Sorry for the late response, I've been tied up with some
other
things for the last day. Just to give some context, this is a machine
that
sits in
[FWIW: proper quoting makes answering easier and thus more likely]
On 2014-08-29 15:37:51 -0700, Patrick Krecker wrote:
I ran the following on the local endpoint of spiped:
while [ true ]; do psql -h localhost -p 5445 judicata -U marbury -c select
modtime, pg_last_xlog_receive_location(),
On Fri, Aug 29, 2014 at 3:46 PM, Andres Freund and...@2ndquadrant.com
wrote:
[FWIW: proper quoting makes answering easier and thus more likely]
On 2014-08-29 15:37:51 -0700, Patrick Krecker wrote:
I ran the following on the local endpoint of spiped:
while [ true ]; do psql -h localhost
Hi Adrian,
Thanks for the info.
Thanks and regards,
Patrick
On Saturday, August 30, 2014 5:28 AM, Adrian Klaver adrian.kla...@aklaver.com
wrote:
On 08/29/2014 11:23 AM, Patrick Dung wrote:
Hello Postgresql users,
Is there a function to save schema history internally?
By keeping the
Hello Postgresql users,
Suppose the table 'attendance' is very large:
id bigint
student_name varchar
late boolean
record_timestamp timestamp
The table is already partitioned by year (attendance_2012p, attendance_2013p,
...).
I would like to count the number of lates by year.
Instead of
On 8/29/2014 9:38 PM, Patrick Dung wrote:
Suppose the table 'attendance' is very large:
id bigint
student_name varchar
late boolean
record_timestamp timestamp
The table is already partitioned by year (attendance_2012p,
attendance_2013p, ...).
I would like to count the number of lates by year.
Thanks for the reply.
The constraint is like:
ADD CONSTRAINT attandence_2014p_record_timestamp_check CHECK
(record_timestamp >= '2014-01-01 00:00:00'::timestamp without time zone AND
record_timestamp < '2015-01-01 00:00:00'::timestamp without time zone);
Let us assume it is a complete year
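With a CHECK constraint like that on each child and constraint exclusion enabled, a per-year count can either hit one child table directly or let the planner prune the parent scan. A sketch (partition names as given in the thread):

```sql
-- Direct hit on one partition:
SELECT count(*) FROM attendance_2014p WHERE late;

-- Or query the parent and let constraint exclusion skip the other children:
SET constraint_exclusion = partition;
SELECT count(*)
FROM attendance
WHERE late
  AND record_timestamp >= '2014-01-01'
  AND record_timestamp <  '2015-01-01';
```

The WHERE clause must match the shape of the CHECK constraints (half-open year ranges) for the planner to exclude the non-matching partitions.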