On 01/15/2015 04:30 AM, M Tarkeshwar Rao wrote:
Hi all,
We are getting the following error message when doing any action on the table
(e.g. a SELECT, or opening it from pgAdmin).
Please suggest.
ERROR: missing chunk number 0 for toast value 54787 in pg_toast_2619
** Error **
ERROR: missing
Hi List,
I need to import data from a large Excel spreadsheet into a PostgreSQL
table. I have a program that uses ODBC to connect to Excel and extract data
using SQL queries. The program then inserts the data into a PostgreSQL
table.
The challenge with this particular spreadsheet is that
Hi,
I could not understand, from the documentation, how a transaction-level
advisory lock behaves:
Transaction-level lock requests ... are automatically released at the end
of the transaction ...
So, in what order do the transaction and the lock release happen?
The transaction would be applied
Saimon Lim aimon.s...@gmail.com writes:
Transaction-level lock requests ... are automatically released at the end
of the transaction ...
So, in what order do the transaction and the lock release happen?
Is the transaction applied first and the lock released
after it, or
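A minimal sketch of the behaviour in question (the key 42 is arbitrary):

```sql
BEGIN;
SELECT pg_advisory_xact_lock(42);  -- take a transaction-level advisory lock
-- ... do the work that needs the lock ...
COMMIT;  -- the lock is released here automatically; there is no
         -- pg_advisory_xact_unlock(), so transaction end (commit or
         -- rollback) is the only way the lock goes away
```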
M Tarkeshwar Rao wrote on 15.01.2015 13:30:
Hi all,
We are getting the following error message when doing any action on the table
(e.g. a SELECT, or opening it from pgAdmin).
Please suggest.
ERROR: missing chunk number 0 for toast value 54787 in pg_toast_2619
** Error **
ERROR: missing
On 01/15/2015 08:07 AM, sambsa wrote:
I have created a job using pgagent as follows:
Did you install the pgAgent daemon/service?:
http://www.pgadmin.org/docs/dev/pgagent-install.html#daemon-installation-on-unix
http://postgresql.nabble.com/file/n5834094/pic1.png
As Remi notes, going with a pointcloud approach might be wiser, particularly if
you aren’t storing much more about the points than coordinates and other lidar
return information. Since you’re only working with points, depending on your
spatial distribution (over poles? dateline?) you might just
M Tarkeshwar Rao m.tarkeshwar@ericsson.com writes:
ERROR: missing chunk number 0 for toast value 54787 in pg_toast_2619
What PG version is that exactly?
If you're not running the latest minor release in your branch, updating
would be advisable; there have been fixes in the past for bugs
I have created a job using pgagent as follows:
http://postgresql.nabble.com/file/n5834094/pic1.png
http://postgresql.nabble.com/file/n5834094/pic2.png
http://postgresql.nabble.com/file/n5834094/pic3.png
http://postgresql.nabble.com/file/n5834094/pic4.png
But when I am running the job I am
On 1/15/2015 6:44 AM, Daniel Begin wrote:
Hi, I'm trying to create an index on coordinates (geography type) over a
large table (4.5 billion records) using GiST...
CREATE INDEX nodes_geom_idx ON nodes USING gist (geom);
The command ran for 5 days until my computer stopped because of a power outage!
On 01/15/2015 05:44 AM, Daniel Begin wrote:
Hi, I'm trying to create an index on coordinates (geography type) over a
large table (4.5 billion records) using GiST...
CREATE INDEX nodes_geom_idx ON nodes USING gist (geom);
The command ran for 5 days until my computer stopped because of a power
Hey,
You may want to post this on the postGIS list.
I take it that so many rows means either raster or point cloud data.
If it is a point cloud, simply consider using pg_pointcloud.
A 6 billion point cloud is about 600 k lines for one of my data sets.
If it is raster, you may consider using the postgis raster type.
Thanks, there are a lot of potential ways to resolve this problem!
For Rob, here is a bit of context concerning my IT environment…
Windows 7 64-bit desktop, running with an Intel i7 core and 16 GB RAM. The
PostgreSQL 9.3 database is stored on a 3TB external drive (USB-3 connection
with write
On January 15, 2015 at 12:36:29 PM, Daniel Begin
(jfd...@hotmail.com(mailto:jfd...@hotmail.com)) wrote:
Paul, the nodes distribution is all over the world but mainly over inhabited
areas. However, if I had to define a limit of some sort, I would use the
dateline. Concerning spatial
FWIW I was using the select_hometown_id FUNCTION like this:
INSERT INTO users(...) values(..., select_hometown_id('Portland, OR'));
On Tue, Jan 13, 2015 at 2:21 PM, Brian Dunavant br...@omniti.com wrote:
The loop to run it twice handles that yes. I don't think that buys
you anything over a
I have this situation:
Machine:
VPS with CentOS 6.6 x86_64
64GB of RAM
2GB of swap (unused)
Ulimit settings:
postgres soft nproc 2047
postgres hard nproc 16384
postgres soft nofile 1024
postgres hard nofile 65536
postgres hard
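Assuming those lines come from /etc/security/limits.conf, a quick way to
verify what the postgres account actually gets is to check from a shell
started as that user (a sketch):

```shell
# print the effective soft/hard limits for this shell
ulimit -Sn   # soft open-file limit (nofile)
ulimit -Hn   # hard open-file limit
ulimit -Su   # soft process limit (nproc)
ulimit -Hu   # hard process limit
```

If the printed values don't match limits.conf, the service may be started
through an init path that bypasses PAM, which is a common cause of surprises.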
On 1/15/15 6:22 AM, M Tarkeshwar Rao wrote:
We are getting the following error message when doing any action on the table
(e.g. a SELECT, or opening it from pgAdmin).
Error reports should go to pgsql-general. I'm moving the discussion there (and
BCC'ing -hackers).
Please suggest.
ERROR: missing chunk
I must be doing something wrong because both of these approaches are giving
me deadlock exceptions.
On Tue, Jan 13, 2015 at 2:21 PM, Brian Dunavant br...@omniti.com wrote:
The loop to run it twice handles that yes. I don't think that buys
you anything over a more traditional non-cte method
On 1/15/2015 12:36 PM, Daniel Begin wrote:
Thanks, there are a lot of potential ways to resolve this problem!
For Rob, here is a bit of context concerning my IT environment…
Windows 7 64-bit desktop, running with an Intel i7 core and 16 GB RAM. The
PostgreSQL 9.3 database is stored on a 3TB
On 1/15/2015 3:17 PM, Enrico Bianchi wrote:
When I launch a query (the principal field is jsonb), the database
returns this:
ERROR: out of memory
DETAIL: Failed on request of size 110558.
it looks like your query is trying to return 7 million rows, although
you didn't do EXPLAIN ANALYZE,
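If the full multi-million-row result really is needed, one way to avoid
materializing it all at once is a server-side cursor; this is only a sketch,
and the table and column names are invented:

```sql
BEGIN;
DECLARE big_cur CURSOR FOR
    SELECT payload FROM events;   -- the jsonb column from the report
FETCH 10000 FROM big_cur;         -- repeat until it returns no rows
CLOSE big_cur;
COMMIT;
```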
On 1/15/15 9:43 AM, George Weaver wrote:
Hi List,
I need to import data from a large Excel spreadsheet into a PostgreSQL table.
I have a program that uses ODBC to connect to Excel and extract data using SQL
queries. The program then inserts the data into a PostgreSQL table.
The challenge
Enrico Bianchi enrico.bian...@ymail.com writes:
When I launch a query (the principal field is jsonb), the database
returns this:
ERROR: out of memory
DETAIL: Failed on request of size 110558.
That error should be associated with a memory usage map getting dumped to
postmaster stderr, where
On 1/15/15 7:42 AM, Vincent Veyron wrote:
On Wed, 14 Jan 2015 11:42:45 +1100
Tobias Fielitz tob...@streethawk.com wrote:
OPTION 1 - PARTITIONING:
For each query only a few columns are interesting and I could partition
the table (as it was suggested on SO)
by *created* and by *code*.
There is
On 1/14/15 8:28 AM, Daniel Verite wrote:
Roxanne Reid-Bennett wrote:
When you have a sequence of steps that need to be serialized across
processes, choose or even create a table to use for locking
This can also be done with an advisory lock, presumably faster:
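The session-level variant being alluded to might look like this sketch (the
key 12345 is arbitrary):

```sql
SELECT pg_advisory_lock(12345);    -- blocks until the lock is free
-- ... run the steps that must be serialized across processes ...
SELECT pg_advisory_unlock(12345);  -- session-level locks need explicit release
```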
Greetings
Our company is writing a small ad-hoc implementation of a load balancer
for Postgres (`version()` = PostgreSQL 9.2.9 on
x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat
4.4.7-4), 64-bit).
We're using both streaming and WAL shipping based replication.
Most
On 16/01/15 14:37, Jim Nasby wrote:
On 1/15/15 7:12 PM, Fabio Ugo Venchiarutti wrote:
Greetings
Our company is writing a small ad-hoc implementation of a load
balancer for Postgres (`version()` = PostgreSQL 9.2.9 on
x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red
Hat
On Thu, Jan 15, 2015 at 6:19 PM, Kyotaro HORIGUCHI
horiguchi.kyot...@lab.ntt.co.jp wrote:
On Wed, Jan 14, 2015 at 2:11 AM, Heikki Linnakangas
hlinnakan...@vmware.com
wrote:
On 01/13/2015 12:11 PM, Vladimir Borodin wrote:
On Jan 5, 2015, at 18:15, Vladimir Borodin
On 1/15/2015 6:12 PM, Robert DiFalco wrote:
FWIW I was using the select_hometown_id FUNCTION like this:
INSERT INTO users(...) values(..., select_hometown_id('Portland, OR'));
try this: (if you still get deadlocks, uncomment the advisory lock
[thanks Daniel] and try again)
Logically I
On 01/15/2015 07:59 PM, Pawel Veselov wrote:
Hi.
I'm trying to cast hstore to json, but I don't seem to be getting a json
object from hstore key/value pairs.
9.3 documentation says:
*Note:* The hstore
http://www.postgresql.org/docs/9.3/static/hstore.html extension has a
cast from hstore to
Hi,
I have a few doubts regarding table locks. Assume I am executing the
following queries:
Query 1 -- INSERT INTO FOREIGN_TABLE SELECT * FROM POSTGRES_TABLE
Query 2 -- INSERT INTO FOREIGN_TABLE SELECT * FROM FOREIGN_TABLE
If I get a table lock for the foreign table, will the second table
On 1/15/2015 9:14 PM, sri harsha wrote:
What kind of lock will the second table be in?
I suspect that depends on what FOREIGN_TABLE is connected to.
--
john r pierce 37N 122W
somewhere on the middle of the left coast
Hi John.
What if my query was INSERT INTO Postgres_Table_A SELECT * FROM
Postgres_Table_B? What locks are taken on the two tables?
--Harsha
On Fri, Jan 16, 2015 at 10:50 AM, John R Pierce pie...@hogranch.com wrote:
On 1/15/2015 9:14 PM, sri harsha wrote:
What kind of lock will
Thanks Tom for your quick reply.
We are using 9.1.3.
We found some information on the internet regarding this and also found bug IDs.
Do you have any idea whether these bugs are fixed in any release, or in the
latest 9.3.5?
We are at a blocking stage as many users depend on this. Can you please help us
in
On Jan 16, 2015 at 06:27, sri harsha sriharsha9...@gmail.com wrote:
Hi John.
What if my query was INSERT INTO Postgres_Table_A SELECT * FROM
Postgres_Table_B? What locks are taken on the two tables?
If they are both regular tables, both inserts can run at the same time. But you
can't
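For a plain INSERT ... SELECT between two regular tables, the target gets
ROW EXCLUSIVE and the source gets ACCESS SHARE, which can be observed from a
second session while the statement runs (a sketch):

```sql
SELECT relation::regclass AS table_name, mode, granted
FROM pg_locks
WHERE relation IS NOT NULL;
-- expect RowExclusiveLock on Postgres_Table_A and
-- AccessShareLock on Postgres_Table_B among the rows
```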
On Fri, Jan 16, 2015 at 2:38 PM, M Tarkeshwar Rao
m.tarkeshwar@ericsson.com wrote:
Thanks Tom for your quick reply.
We are using 9.1.3.
First thing: update to 9.1.18; even if you do not upgrade to 9.3.5, you are
otherwise missing more than 2 years' worth of bug fixes.
Do you have any idea whether these bugs are
On 1/15/15 8:41 PM, Fabio Ugo Venchiarutti wrote:
Does it mean that pg_last_xlog_receive_location() returns the last WAL record that has
been successfully staged for replay by the stream replication whereas
pg_last_xlog_replay_location() returns the last successful WAL replay regardless of it
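A quick way to see the two positions side by side on a standby (sketch):

```sql
SELECT pg_last_xlog_receive_location() AS last_received,
       pg_last_xlog_replay_location()  AS last_replayed,
       pg_last_xact_replay_timestamp() AS last_replay_time;
```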
On 1/15/15 10:57 PM, Roxanne Reid-Bennett wrote:
try this: (if you still get deadlocks, uncomment the advisory lock [thanks
Daniel] and try again)
Logically I suppose it might run faster to do the select, then insert if needed. I
almost always write these as insert first - because it's the more
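A sketch of the insert-first shape being discussed, for pre-9.5 servers
without ON CONFLICT; the table layout and function body are invented to match
the names in the thread:

```sql
CREATE OR REPLACE FUNCTION select_hometown_id(p_name text) RETURNS int AS $$
DECLARE
    v_id int;
BEGIN
    -- serialize concurrent callers on the same name to avoid deadlocks
    PERFORM pg_advisory_xact_lock(hashtext(p_name));
    INSERT INTO hometowns (name)
    SELECT p_name
    WHERE NOT EXISTS (SELECT 1 FROM hometowns WHERE name = p_name)
    RETURNING id INTO v_id;
    IF v_id IS NULL THEN   -- row already existed, fetch it
        SELECT id INTO v_id FROM hometowns WHERE name = p_name;
    END IF;
    RETURN v_id;
END;
$$ LANGUAGE plpgsql;
```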
I need to support the following queries:
1. give me all documents where attrib X='value'
2. give me all documents where attrib X='value' and attrib Y='value2'
The number of distinct attribute values is about 10,000,000,000 over about 10
different types (X, Y, etc.), so on average 1000 M for each.
each
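With cardinalities like that, one commonly suggested model (a sketch only;
jsonb requires 9.4, and all names here are invented) is a single indexed
jsonb column, which serves both query shapes with one GIN index:

```sql
CREATE TABLE documents (id bigserial PRIMARY KEY, attrs jsonb);
CREATE INDEX documents_attrs_idx ON documents USING gin (attrs);

-- query 1: all documents where attribute X = 'value'
SELECT id FROM documents WHERE attrs @> '{"X": "value"}';
-- query 2: X = 'value' AND Y = 'value2'
SELECT id FROM documents WHERE attrs @> '{"X": "value", "Y": "value2"}';
```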
Hi,
On Wed, Jan 14, 2015 at 2:11 AM, Heikki Linnakangas hlinnakan...@vmware.com
wrote:
On 01/13/2015 12:11 PM, Vladimir Borodin wrote:
On Jan 5, 2015, at 18:15, Vladimir Borodin r...@simply.name wrote:
Hi all.
I have a simple script for planned switchover of PostgreSQL
-- Forwarded message -
From: amihay gonen agone...@gmail.com
Date: Thu, 15 Jan 2015 13:37
Subject: What is the best way to model attributes relations ?
To: pgsql-general pgsql-general@postgresql.org
I need to support the following queries :
1. give me all documents where attrib
On Wed, Jan 14, 2015 at 2:11 AM, Heikki Linnakangas hlinnakan...@vmware.com
wrote:
On 01/13/2015 12:11 PM, Vladimir Borodin wrote:
On Jan 5, 2015, at 18:15, Vladimir Borodin r...@simply.name wrote:
Hi all.
I have a simple script for planned switchover of PostgreSQL (9.3 and
9.4)
Hi all,
We are getting the following error message when doing any action on the table
(e.g. a SELECT, or opening it from pgAdmin).
Please suggest.
ERROR: missing chunk number 0 for toast value 54787 in pg_toast_2619
** Error **
ERROR: missing chunk number 0 for toast value 54787 in
Hi, I'm trying to create an index on coordinates (geography type) over a
large table (4.5 billion records) using GiST...
CREATE INDEX nodes_geom_idx ON nodes USING gist (geom);
The command ran for 5 days until my computer stopped because of a power outage!
Before restarting the index creation, I am
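Before restarting such a build, it may be worth raising the sort memory for
that session; this is a sketch, and the 2GB figure is a guess for a 16 GB
machine, not a recommendation:

```sql
SET maintenance_work_mem = '2GB';  -- session-local; speeds up index builds
CREATE INDEX nodes_geom_idx ON nodes USING gist (geom);
```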
On Wed, 14 Jan 2015 11:42:45 +1100
Tobias Fielitz tob...@streethawk.com wrote:
OPTION 1 - PARTITIONING:
For each query only a few columns are interesting and I could partition
the table (as it was suggested on SO)
by *created* and by *code*.
There is roughly 10 different codes and I would
I'd restructure the table to be split into perhaps 100 or so inherited
tables (or more). That many rows in one table is usually not efficient with
postgres in my experience. My target is to keep the tables under about 100
million rows. I slice them up based on the common query patterns, usually
by
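On a pre-declarative-partitioning server, the slicing described above is done
with inheritance plus CHECK constraints; the table and column names in this
sketch are invented:

```sql
CREATE TABLE events (created timestamptz NOT NULL, code int NOT NULL);
CREATE TABLE events_code_1 (CHECK (code = 1)) INHERITS (events);
CREATE TABLE events_code_2 (CHECK (code = 2)) INHERITS (events);
-- with constraint_exclusion = partition (the default), a query
-- filtering on code = 1 scans only events_code_1
```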
Hi.
I'm trying to cast hstore to json, but I don't seem to be getting a json
object from hstore key/value pairs.
9.3 documentation says:
*Note:* The hstore
http://www.postgresql.org/docs/9.3/static/hstore.html extension
has a cast from hstore to json, so that converted hstore values are
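For comparison, the two 9.3 spellings of that conversion (note that the plain
cast renders every hstore value as a json string):

```sql
SELECT ('a=>1, b=>2'::hstore)::json;         -- explicit cast
SELECT hstore_to_json('a=>1, b=>2'::hstore); -- equivalent function form
-- hstore_to_json_loose() instead tries to keep numbers and booleans unquoted
```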
On 01/15/2015 04:56 PM, Jim Nasby wrote:
On 1/15/15 9:43 AM, George Weaver wrote:
Hi List,
I need to import data from a large Excel spreadsheet into a PostgreSQL
table. I have a program that uses ODBC to connect to Excel and
extract data using SQL queries. The program then inserts the data
On 1/15/15 7:12 PM, Fabio Ugo Venchiarutti wrote:
Greetings
Our company is writing a small ad-hoc implementation of a load balancer for
Postgres (`version()` = PostgreSQL 9.2.9 on x86_64-unknown-linux-gnu, compiled
by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit).
We're using both