Hi,
So I'd call this an oracle_fdw bug. It needs to postpone what it's
doing here to the first normal FDW function call in a session.
Thanks a lot for looking so quickly into this!
I've opened an issue with oracle_fdw:
https://github.com/laurenz/oracle_fdw/issues/215
Thanks,
Chris.
--
Hi!
I've found a problem in either PostgreSQL 9.6.6 or oracle_fdw 2.0.0.
Background:
I was testing a setup on a current CentOS 7 system with PostgreSQL 9.6.6
installed from the PGDG repositories and oracle_fdw 2.0.0 installed via pgxn
install. Everything went absolutely fine until I enabled
[>] 1/1 rows (100.0%) Table MYTAB (0 recs/sec)
[>] 1/2 total rows (50.0%) - (4 sec., avg: 0 recs/sec).
Out of memory!
[>] 1/2 rows (50.0%) on total estimated data (4 sec., avg: 0 recs/sec)
Issuing rollback() due to DESTROY without explicit disconnect().
C:\ora2pg>ora2pg -c ora2pg.conf
[>] 2/2 tables (100.0%) end of scanning.
[>] 0/2 tables (0.0%) end of scanning.
[>] 2/2 tables (100.0%) end of table export.
Looks good so far.
This means you can connect to the Oracle DB now.
I am facing the issue below while running this command:
C:\ora2pg>ora2pg -c ora2pg.conf
DBI connect('host=;sid=;port=',',...) failed:
ORA-01017: invalid username/password; logon denied (DBD ERROR: OCISessionBegin) at
C:/Strawberry/perl/site/lib/Ora2Pg.pm line 1376.
FATAL: 1017 ... ORA-01017:
Does anyone have a list of bottlenecks and workarounds for migrating data
from Oracle to PostgreSQL? That is, what can we migrate from an Oracle
database to PostgreSQL, and what can't we?
Which is the best tool for migration from Oracle to PostgreSQL?
Hi,
I like this tool and
you should increase shared_memory to 40GB. General philosophy is to allocate 80%
of system memory to shared_memory
Uhm...
80% is too much, likely:
https://www.postgresql.org/docs/9.5/static/runtime-config-resource.html
Bye,
Chris.
--
Sent via pgsql-general mailing list
Sorry,
my mistake (I'm a bit nervous...)
that's not work_mem, but shared_buffers
Hi.
The resident set size of the worker processes includes all shared memory blocks
they touched.
So it's not that each of those workers allocated their own 3GB...
(in Linux at least)
Bye,
Chris.
--
Postgres version?
9.6.1
Have you considered upgrading to 9.6.2?
There were some fixes, including WAL related:
https://www.postgresql.org/docs/9.6/static/release-9-6-2.html
Not exactly regarding what you see, though...
Bye,
Chris.
--
ORDER BY can only be processed after all rows have been fetched, this
includes the expensive result column.
You can easily avoid that by applying the LIMIT first:
SELECT r, expensive()
FROM (SELECT r
      FROM big
      ORDER BY r
      LIMIT 10) sub;
I don't know how
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9118d03a8
Hi,
thanks!
I've just tested with 9.6 and the test runs fast with or without expensive().
So the above patch does indeed improve this case a lot!
Bye,
Chris.
--
Hi,
I've found a (simple) situation where the planner does something I don't
understand.
Below is a complete test case followed by output.
From the timings it appears that in the second explain analyze query a function
call in the select list (expensive()) is evaluated in the sequential scan
On 17/01/17 23:21, Tomas Vondra wrote:
Hi,
after looking at the explain plans again, I very much doubt those come from the
same query. The reason is the 9.5 plan contains this part:
-> HashAggregate (cost=67.54..68.12 rows=192 width=4)
Group Key: d.vip_patient_id
->
Hi.
explain analyze
SELECT count(*)
[...]
QUERY PLAN
-
Aggregate
I would also like to add this:
The explain analyze output shows that the index on that numeric field *is not*
being used.
I also tried setting enable_seqscan to off, but that index is still not used.
Maybe the problem is this?
Thank you again!
/F
Hi,
maybe I missed something, but it appears to me you
with AWS, your system is sharing the vendors virtual machine environment with
other customers, and performance is pretty much out of your control.
I found no strange processes or queries while load average was at peak. IO also
didn't change. Some more slow queries were logged, but not
Dear folks,
I have a table with thousands of rows (currently 15 thousand, but it will grow
very fast).
I need to return from the query rows which are random and non-repeating.
I know there is random() function, but would like to know from postgresql
practitioners before embarking that path.
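For a table of this size, one simple approach (table and column names are assumptions here) is to shuffle with ORDER BY random():

```sql
-- Each physical row appears at most once in the result, so the
-- sample is non-repeating by construction; random() just shuffles
-- the order before LIMIT takes the first 100.
SELECT id
FROM items
ORDER BY random()
LIMIT 100;
```

This does a full scan and sort, which is fine at 15k rows; for much larger tables you would want to look at TABLESAMPLE or an indexed-key trick instead.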
With a heavy query, when the result rises over 600k rows, the query dies with
an out-of-memory error.
Here is the explain analyze:
[...]
work_mem is 512MB, shared_buffers is 3084MB and system RAM is 10GB. The Postgres
version is 8.4.8 and for some months I cannot upgrade.
Is there a way to solve the problem?
On 19/08/16 10:57, Thomas Güttler wrote:
What do you think?
I store most of my logs in flat syslog-style text files, and use grep for ad-hoc
querying.
200K rows/day, that's 1.4 million/week, 6 million/month; pretty soon you're
talking big tables.
In fact that's several rows/second on a 24/7
I downloaded the installer
The strings "9.4-1208" and "8.4-703" look suspiciously like the PostgreSQL JDBC
drivers
(drivers for the Java Programming Language). Perhaps that's not what you wanted?
To install PostgreSQL itself start here:
https://www.postgresql.org/download/
Bye,
Chris.
Hi,
maybe this is a late reply, but also note that 4.725 alone already cannot be
represented in floating point exactly (and this has nothing to do with
Postgres).
Just sum it up 100 times to "see" the round off error becoming visible:
chris=# select sum(4.725::double precision) from
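A complete version of that demo might look like this (a sketch; the numeric column is added for comparison):

```sql
-- Summing the double precision value 100 times accumulates the
-- binary representation error of 4.725; the numeric sum stays
-- exact at 472.500 while the double precision sum drifts off it.
SELECT sum(4.725::double precision) AS approx,
       sum(4.725::numeric)          AS exact
FROM generate_series(1, 100);
```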
Can someone point me in the right direction per how I would remove the
first 25 bytes and the last 2 bytes from a bytea column?
http://www.postgresql.org/docs/9.3/static/functions-binarystring.html
Substring might do it for you.
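A sketch, assuming the column is `col` in table `t`: substring() on bytea takes 1-based byte offsets, so skipping 25 leading and 2 trailing bytes means taking octet_length(col) - 27 bytes starting at position 26:

```sql
-- bytes 26 .. octet_length(col) - 2, i.e. drop 25 from the front
-- and 2 from the back
SELECT substring(col FROM 26 FOR octet_length(col) - 27) AS trimmed
FROM t;
```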
won't doing it in SQL still result in a BYTEA result which will
I was told that "The amazon linux is compatible with Centos 6.x". Does that
correspond to RHEL 6? Is there a command I could use to find out?
Not quite.
Amazon Linux is RHEL/CentOS/Fedora derived, but it's not based on exactly
RHEL/CentOS 6 or exactly RHEL/CentOS 7.
This is its current
Error: Package: gdal-libs-1.9.2-6.rhel6.x86_64 (pgdg93)
Requires: libpoppler.so.5()(64bit)
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Hi,
what happens if you try to install libpoppler (it is in the standard
I have a program that inserts 50M records of about 30 bytes each [..]
Now I suspect the limit is OSX throttling per-process CPU.
Does this sound right?
Mmm... I don't think so.
How do you perform the inserts?
- Single inserts per transaction?
- Bundled inserts in transactions (with or
I then might want to extract a list from, say, (1, 3) to (3, 2), giving:
x | y
---+---
1 | 3
1 | 4
2 | 1
2 | 2
2 | 3
2 | 4
3 | 1
3 | 2
For the life of me, I can't figure out how to do this.
Hi,
starting from this:
chris=# select * from t order by x,y;
x | y
---+---
1 | 1
1 | 2
1 | 3
1 |
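One way to express the (1, 3) to (3, 2) slice directly is PostgreSQL's row-value comparison, which orders (x, y) pairs lexicographically (table name assumed):

```sql
-- (x, y) >= (1, 3) keeps rows from (1,3) onwards, and
-- (x, y) <= (3, 2) cuts the list off at (3,2), matching the
-- requested output exactly.
SELECT x, y
FROM t
WHERE (x, y) >= (1, 3)
  AND (x, y) <= (3, 2)
ORDER BY x, y;
```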
>> Sure:
>>
>> (depesz@[local]:5960) 12:15:46 [depesz]
>> $ select geoloc::numeric(8,4)[] from alex;
>> geoloc
>> ---
>> {5.3443,100.2946}
>
> Nice!
Indeed :)
Bye,
Chris.
--
On 21/09/15 11:55, Alex Magnum wrote:
> Hello,
>
> I have a float array holding geo location information.
>
> geoloc
> ---
> {5.3443133704554,100.29457569122}
> {5.3885574294704,100.29601335526}
> {3.1654978750403,101.60915851593}
>
I see there are many different ways to build a PG cluster. What would be
the best choice in my case?
Hi,
a few keywords in your mail hint at the fact you're using AWS?
If that's the case, you might want to look into their managed
PostgreSQL hosting: it's called Amazon RDS for PostgreSQL and
On 19/08/15 13:37, Tom Smith wrote:
Hi:
I have a jsonb column with a json object like below
{a: 1, b:2, c:3}
I'd like to get a subset of the object with key list [a,c]
so it returns a json object of
{a: 1, c:3}
something like
select '{a: 1, b:2, c:3}'::jsonb - '[a,c]'
what would be
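One way to do this (a sketch; jsonb_object_agg needs PostgreSQL 9.5 or later):

```sql
-- Explode the object into key/value pairs, keep only the wanted
-- keys, and fold the pairs back into a single jsonb object,
-- yielding {"a": 1, "c": 3}.
SELECT jsonb_object_agg(key, value)
FROM jsonb_each('{"a": 1, "b": 2, "c": 3}'::jsonb)
WHERE key IN ('a', 'c');
```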
I've edited the /etc/security/limits.conf and added
* soft stack 12288
* hard stack 12288
on a CentOS 6.7 box I can confirm that adding those lines to
/etc/security/limits.conf DOES increase the limit to 12288
(after a reboot).
I don't see the PAM
I don't know if that works, but it is in line with the
systemd broke things idea...
Rereading this, I realize this might come over as too harsh.
What I meant was the introduction of systemd broke things.
So this wasn't meant as anti-systemd or anything. No flames
intended ;)
Bye,
Chris.
Hi All,
I'm tuning up my database and need to increase the max_stack_depth
parameter in postgresql.conf.
I've edited the /etc/security/limits.conf and added
* soft stack 12288
* hard stack 12288
but I noticed the comments at the top of the file -
Chris/Joshua
I would like to know more details.
As per this:
http://www.postgresql.org/docs/current/static/warm-standby.html#SYNCHRONOUS-REPLICATION
When requesting synchronous replication, each commit of a write
transaction will wait until confirmation is received that the commit
Does sync replication guarantee that any inserted data on primary is
immediately visible for read on standbys with no lag.
Basically yes. Of course there is *some* latency, at the very least
from the network.
If I run a process on a standby machine that displays a value every
0.1 sec and
Hello,
I have a csv string in a text field that is unsorted and contains
duplicates.
Is there a simple way to remove these and sort the string?
E.g.
2,18,20,23,1,27,1,2,8,16,17,18,20,22,23,27
I tried string_to_array and unique but that did not work...
Any suggestions on how to do
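A sketch of one way (assuming the values are integers): split the string, de-duplicate, sort numerically, and glue it back together:

```sql
-- unnest the csv into rows, cast to int so the sort is numeric,
-- DISTINCT drops duplicates, and string_agg rebuilds the string.
SELECT string_agg(v::text, ',' ORDER BY v) AS cleaned
FROM (SELECT DISTINCT x::int AS v
      FROM unnest(string_to_array(
           '2,18,20,23,1,27,1,2,8,16,17,18,20,22,23,27', ',')) AS x
     ) s;
-- cleaned: 1,2,8,16,17,18,20,22,23,27
```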
This is interesting, I tried nc a few times. Inconsistent results:
- most of the time the first line doesn't get through, even after a long wait
- on sending a second line from the client both lines appear
instantly, and it's 100% from there on
- or I can send a line from the server. The
SOLVED!
Glad to see my macbook is not haunted.
Calling a priest would have been my next suggestion... ;)
Bye,
Chris.
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Use netstat to look what IP postgres actually binds to.
Nothing unusual:
~ netstat -an | grep 5432
tcp4       0      0  *.5432                 *.*                    LISTEN
tcp6       0      0  *.5432                 *.*                    LISTEN
9767b1c9fd5d8ab1 stream 0 0
So that's all good. If I use -h it doesn't work:
psql -Umyuser -d mydb -h localhost
listen_addresses = 'localhost' # also '*', '127.0.0.1', '::1'
Use netstat to look what IP postgres actually binds to.
OS X uses the BSD syntax:
netstat -an
For example on my Mac (not homebrew):
tcp4
Hi,
just to be 100% sure everything works up to the TCP layer...
0. Stop postgres.
1. Open a terminal, enter:
nc -l 5432
and leave that running.
2. Open another terminal and enter:
nc 127.0.0.1 5432
follow up with some text such as hello and then hit CTRL-D
So... did hello
Hi,
is xmltable available in Postgres? If not, please give me
advice on how to replace xmltable in Postgres.
Hi,
PostgreSQL has a native XML type and related functions:
http://www.postgresql.org/docs/9.4/static/datatype-xml.html
Hi,
in Oracle: regexp_like(entered_date,'[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}','i')
For Postgres I have regexp_matches, but how do I match [:digit:] in
Postgres when passing a date?
Any help
[:digit:] is Posix syntax, supported by Postgres.
Looks good to me:
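The truncated example was presumably along these lines (a sketch; the sample string is an assumption). Note that ~ is PostgreSQL's case-sensitive regex match and ~* the case-insensitive one, which covers Oracle's 'i' flag:

```sql
-- POSIX character classes like [[:digit:]] work inside bracket
-- expressions in PostgreSQL regexes, just as in Oracle.
SELECT '2017-01-17' ~ '^[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}$' AS is_date_like;
-- is_date_like: t
```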
I have a very simple query that is giving me some issues due to the size of
the database and the number of requests I make to it in order to compile the
report I need:
A dumbed down version of the table and query:
CREATE TABLE a_to_b (
id_a INT NOT NULL REFERENCES
Hello,
I'd like to count the number linebreaks within a string,
but I get a memory allocation error when using regexp_matches or
regexp_split_to_table.
Any idea for an alternative approach to this problem?
select count(*)-1 from
( select regexp_split_to_table(full_message,'(\n)', 'g')
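One alternative that sidesteps the regexp functions (and their memory use) entirely is to count by length difference (a sketch; the table name is an assumption, the column name is taken from the query above):

```sql
-- The number of newlines equals the number of characters removed
-- when stripping them out with replace().
SELECT length(full_message) - length(replace(full_message, E'\n', '')) AS linebreaks
FROM messages;
```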
select *;
--
ERROR: SELECT * with no tables specified is not valid
select count(*);
1
Is this a must? and why 1?
Hi,
regarding the why 1 part:
I think that if we accept that
chris=# select 'foo';
 ?column?
----------
 foo
(1 row)
returns 1 row, then
Hi,
quick question regarding
https://wiki.postgresql.org/wiki/May_2015_Fsync_Permissions_Bug
Will 9.4.3 be exactly like 9.4.2 except for the permission
bug, or will there be other fixes too?
Bye,
Chris.
PS: yes, I've read the section Should I not apply the updates?.
--
Will 9.4.3 be exactly like 9.4.2 except for the permission
bug, or will there be other fixes too?
There are a few more fixes available in the queue, including another
multixact fix.
Ok,
good to know, thanks!
Bye,
Chris.
--
psql (8.4.7)
Uhm the last update to 8.4 was 8.4.22: besides using an unsupported
version, you're missing three and a half years of patches in 8.4.x :|
Bye,
Chris
--
I use Postgis and PGrouting extension.
The error come when I use a pgrouting function pgr_createtopology()
It appears pgrouting violates the 1GB per chunk limit in the postgres backend
when processing large datasets:
https://github.com/pgRouting/pgrouting/issues/291
Bye,
Chris.
--
Hello, I'm processing a 100-million-row table.
I get an error message about memory and I'd like to know what can cause this
issue.
...
psql:/home/ubuntu/create_topo.sql:12: NOTICE: 104855000 edges processed
psql:/home/ubuntu/create_topo.sql:12: NOTICE: 104856000 edges processed
Dear Postgresql mailing list,
we use Postgresql 8.4.x on our Linux firewall distribution.
Actually, we are moving from standard SATA disks to mSATA SSD solid state
drives, and we noticed that the DB, which uses lots of indexes, is writing a lot.
Within some months, two test machines got broken SSDs, and
select * from T_SORT order by NAME ;
rollback;
 id |       name
----+-------------------
1 | FINISH_110_150_1
2 | FINISH_110_200_1
3 | FINISH_1.10_20.0_3
4 | FINISH_1.10_20.0_4
5 | FINISH_1.10_30.0_3
6 | FINISH_1.10_30.0_4
7 | FINISH_120_150_1
8 | FINISH_120_200_1
Hi,
does no one have an idea?
It may be a rare case, doing the same UPDATE a thousand times. But I'm really
interested in why this is not happening when doing DIFFERENT updates. And, of
course, whether something could be done on the database side to prevent this
behavior in case some
The function is to execute updates on each ID, assigning the value, but if
one of these operations fails (does not meet certain criteria)
inside the function, I would like to roll back and leave everything
untouched, in case other IDs were already updated previously,
and come back to the caller
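In PL/pgSQL this comes for free: a function call is atomic, so raising an unhandled exception anywhere in the loop rolls back every update the call already made. A minimal sketch (table and column names are assumptions):

```sql
CREATE OR REPLACE FUNCTION update_all(ids int[], new_val int)
RETURNS void LANGUAGE plpgsql AS $$
DECLARE
    i int;
BEGIN
    FOREACH i IN ARRAY ids LOOP
        UPDATE items SET val = new_val WHERE id = i;
        -- any unhandled exception aborts the whole call and undoes
        -- all updates made so far, then propagates to the caller
        IF NOT FOUND THEN
            RAISE EXCEPTION 'id % does not exist; rolling back all updates', i;
        END IF;
    END LOOP;
END;
$$;
```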
https://drive.google.com/file/d/0ByfjZX4TabhocUg2MFJ6a21qS2M/view?usp=sharing
Note: due to an error in the dump script, if you are in a Linux/Unix
environment, use this command for uncompressing the file:
bzip2 -d -c comment_test.dump.bz2 | sed -e '12d' > comment_test.dump
Hi,
I've played a bit
Hi,
I've been asked whether it's possible to use PostgreSQL as a backend for
Wizcon 9.2. I don't know anything about Wizcon, but I'd like to give an
answer (for PostgreSQL advocacy reasons).
So I'm forwarding the question to this list.
Any clues?
Bye :)
Chris.
I've been asked whether it's possible to use PostgreSQL as a backend for
Wizcon 9.2. I don't know anything about Wizcon, but I'd like to give an
answer (for PostgreSQL advocacy reasons).
So I'm forwarding the question to this list.
Not one. I'd ask the folks who write the software if they
I have OS X tiger with all the updates:
uname -r
8.8.0
Here is what I get when I try to initdb on a freshly compiled 8.2:
selecting default max_connections ... 10
selecting default shared_buffers/max_fsm_pages ... 400kB/2
creating configuration files ... ok
creating template1
That looks like the solution to my problem, thanks! I tried running it
on my 8.0.8 server, but it wasn't found; I assume that's an 8.1-only
function?
Note that the upcoming 8.2 release has a handy returning clause
for insert:
I'd like to know if there's any reasoning for not allowing creating an index
inside the same schema where the table is. For example, if I have a
multi-company database where each company has its own schema and its employees
table, shouldn't I have a different index for each of those? What if
create index testing123_index on testing.testing123 (otherthing);
and you'll obtain exactly what you want (see below).
Bye, Chris.
I know I can workaround such debilitation. What I wanted to know is if
there's some reason (such as performance gain, for example) for that
decision.
I have a process that is hitting deadlocks. The message I get talks about
relation and database numbers, not names. How do I map the numbers back
into names?
Hi,
you need to query the catalog:
-- relations like tables:
select oid, relname from pg_class;
-- databases:
select oid, datname from pg_database;
it's just a vacuumdb --all. We already learned that full vacuums are
evil because the database was corrupted after some time.
Wait a sec...
vacuum full may be evil in the 'locks stuff and takes long to run' sense,
but it should definitely NOT corrupt your database.
Are you sure there's no issues
that same function.
Also there's not just PL/PGSQL: you might want
to define a function in C or Perl and then have a trigger call it.
Bye, Chris.
--
Chris Mair
http://www.1006.org
---(end of broadcast)---
TIP 4: Have you searched our list archives
-ERROR-TRAPPING
Trapping division by zero is given as an example there...
Bye, Chris.
Do a simple test to see my point:
1. create table test (id int4, aaa int4, primary key (id));
2. insert into test values (0,1);
3. Execute update test set aaa=1 where id=0; in an endless loop
I just did the test on PostgreSQL 7.4.12 and MySQL 5.0.22 (MyISAM,
sorry had no configured
I've been studying the whole evening and don't seem to find an answer:
I want to "store" transactions on the server - like views, or (sorry) as in
M$ SQL Server: CREATE OR REPLACE TRANSACTION xyz()
Of course, it's possible.
What you need is
CREATE OR REPLACE FUNCTION xyz() RETURNS
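A minimal sketch of such a function (the table and statements inside are assumptions); the body executes atomically inside the caller's transaction:

```sql
CREATE OR REPLACE FUNCTION xyz() RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    -- both statements succeed or fail together
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;
END;
$$;

-- invoke it like this:
SELECT xyz();
```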
Hi,
a few random question...
i have a Tomcat application with PostgreSQL 8.1.4 running which
performs about 1 inserts/deletes every 2-4 minutes and updates on
a database and after some hours of loadtesting the top output says
0.0% idle, 6-7% system load, load average 32, 31, 28
Who
A accepting
connections on host B?
Maybe you can use an SSH tunnel?
Bye, Chris.
:
http://protopg.projects.postgresql.org/nightlies/
Bye, Chris.
errors.
Use numeric for exact, decimal math.
Bye, Chris.
for you).
Bye, Chris.
listings at
http://www.postgresql.org/support/professional_support
Bye,
Chris.
i have a table with around 57 million tuples, with the following columns:
pid(varchar), crit(varchar), val1(varchar), val2(varchar). Example:
pid  crit  val1  val2
p1   c1    x     y
p1   c2    x     z
p1   c3    y     x
...
What I am doing is querying all
http://www.postgresql.org/docs/8.1/interactive/triggers.html
it says something like this:
It is not currently possible to write a trigger function in the
plain SQL function language.
The whole paragraph says.
It is also possible to write a trigger function in C, although most
people
SELECT md5('secret_salt' || nextval('my_seq')::text)
* When somebody knows md5('secret_salt' || '5') he will be able to
easily compute
md5('secret_salt' || '50')
md5('secret_salt' || '51')
md5('secret_salt' || '52')
...
md5('secret_salt' || '59')
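If the goal is an unguessable token, one common alternative on modern PostgreSQL (assumes the pgcrypto extension is available) is to draw random bytes instead of hashing a predictable counter:

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;
-- 16 random bytes, hex-encoded: a 32-character token that cannot
-- be enumerated the way a salted sequence value can.
SELECT encode(gen_random_bytes(16), 'hex') AS token;
```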
On Fri, 2005-01-07 at 17:04, Marc G. Fournier wrote:
[...]
A current list of *known* supported platforms can be found at:
http://developer.postgresql.org/supported-platforms.html
We're always looking to improve that list, so we encourage anyone that is
running a platform not
select
earth_distance(ll_to_earth('122.55688','45.513746'),ll_to_earth('122.396357','47.648845'));
The result I get is this:
128862.563227506
The distance from Portland to Seattle is not 128862
miles.
It is 128000m = 128km.
Welcome to the metric system :)
Bye, Chris.
Druid? http://druid.sourceforge.net/
from reading the description, it looks like they might have something,
but there is *no* documentation and i can't figure out how they want
me to run the install jar file.
java -jar druid-3.5-install.jar
Bye, Chris.