Re: [GENERAL] Postgres 9.3 tuning advice

2014-08-13 Thread Albe Laurenz
dushy wrote:
 I am running a PostgreSQL 9.0.13 master/slave instance under a write-heavy
 workload.
 
 The hardware is a Dell 720, E5530 - 8 cores, 128GB RAM. The database (around
 250GB with indexes/bloat etc.) sits on a flashcache device with 2 Fusion-io
 PCIe MLC SSDs as frontend and an MD3200-based RAID10 (14*1TB SATA disks)
 as backend. OS is CentOS 6.2 with kernel 2.6.32-431.20.3.el6.x86_64.
 
 Currently, the performance-related configuration is mostly at the defaults,
 i.e. shared_buffers and effective_cache_size. The only directive that
 differs is checkpoint_segments = 96.
 
 I am moving to PostgreSQL 9.3 shortly and am planning to tune the above
 directives as below.
 
 effective_cache_size = 100GB # free+buffers is pretty consistently around
 110 to 120GB, and pg_oscache_total is consistently around 80GB
 checkpoint_segments = 32 # 96 seems too high, and all flushes seem to be
 due only to checkpoint_timeout
 
 Additionally, I am turning off THP defrag as suggested by some posts on the
 lists, though my initial pgbench testing doesn't seem to indicate any
 issues with THP defrag turned on or off.
 
 I am not sure about shared_buffers and wal_buffers - I am inclined to leave
 them at their defaults. But based on this article
 (http://rhaas.blogspot.in/2012/03/tuning-sharedbuffers-and-walbuffers.html)
 it looks like there would be some advantage in tuning them.
 
 What would be good values for shared_buffers and wal_buffers?
 
 Please let me know if additional information will help.

The frequently heard advice for setting shared_buffers is 25% of RAM, but
with as much memory as you have, that may be too much (it can lead to
checkpoint I/O spikes and greater overhead in managing shared buffers).
Try something like 8 or 16 GB.
Ideally you should test: use pg_buffercache to inspect shared buffers
and see which setting works best for you.
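
For example, a quick look at how the buffer cache is being used (a sketch,
assuming the pg_buffercache extension is installed in your database):

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    -- Distribution of buffer usage counts: many buffers at high usage
    -- counts suggest that a larger shared_buffers could help.
    SELECT usagecount, count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS size
    FROM pg_buffercache
    GROUP BY usagecount
    ORDER BY usagecount;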

Set wal_buffers to 16MB so that a whole WAL segment will fit.
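
In postgresql.conf that could look like this (starting values to test
against your own workload, not definitive settings):

    shared_buffers = 16GB         # also try 8GB and compare
    wal_buffers = 16MB            # one whole WAL segment
    effective_cache_size = 100GB  # as you planned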

The best tuning strategy would be to stuff another 128 GB RAM into
the machine and have your DB in RAM.

Yours,
Laurenz Albe



Re: [GENERAL] Inserting large binary data into lo type table

2014-08-13 Thread Albe Laurenz
Jose Moreira wrote:

 I guess this is an easy question for the gurus, but I'm afraid I cannot get
 the answer!
 
 I have this table:
 
 aif_test=# \d sbox;
              Table "public.sbox"
  Column |          Type          | Modifiers
 --------+------------------------+-----------
  id     | character varying(255) | not null
  data   | lo                     |
 Indexes:
     "sbox_pkey" PRIMARY KEY, btree (id)
 
 
 And I am trying to insert a binary file, of which I show only the first lines:
 
 Emitidas/10001.doc|\\x3C3F786D6C2076657273696F6E3D22312E302220656E636F64696E673D225554462D38223F3E0A3C
 41434B43464520786D6C6E733D22687474703A2F2F6366652E646769
 2E6775622E75792220786D6C6E733A6E73323D22687474703A2F2F772E77332E6F72672F323030302F30392F786D6C6473
 6967232220786D6C6E733A7873693D22687474703A2F2F772E77332E6F726
 72F323030312F584D4C536368656D612D696E7374616E6365222076657273696F6E3D22312E3
 
 When running a simple insert I get:
 
 [postgres@dbfactura01dmz  /tmp]$ psql -d aif_test -U aif < 
 subida_postgres_copys_sbox.sql
 Password for user aif:
 ERROR:  invalid input syntax for type oid:
 "\x3C3F786D6C2076657273696F6E3D22312E302220656E636F64696E673D225554462D38223F3E0A3C41434B43464520786D6C
 6E733D22687474703A2F2F6366652E6..."
 
 sql script:
 [postgres@dbfactura01dmz  /tmp]$ more subida_postgres_copys_sbox.sql
 COPY sbox FROM '/tmp/TXT/SBOX.txt' WITH DELIMITER '|' NULL '';
 
 I am not able to insert simple binary data into a lo type column! I read
 the docs and it should work, though. Any help please?

You must be using the lo extension.

Reading the doc I see that lo is just an alias for oid, so that is not the
way to insert a large object.

You need to either import a file with lo_import() or use lo_create() and 
lowrite() to
create a large object. See 
http://www.postgresql.org/docs/current/static/largeobjects.html .

The extension essentially contains a trigger that you can use to automatically
delete the large object if the row gets deleted in the table.
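
For example, a minimal sketch (assuming the lo extension is installed and
a hypothetical file path readable by the database server):

    -- server-side import: creates a large object from a file on the
    -- database server and stores its OID in the lo column
    INSERT INTO sbox (id, data)
    VALUES ('Emitidas/10001.doc', lo_import('/tmp/TXT/10001.doc'));

    -- the trigger shipped with the lo extension: it unlinks the large
    -- object when the referencing row is updated or deleted
    CREATE TRIGGER t_sbox
        BEFORE UPDATE OR DELETE ON sbox
        FOR EACH ROW EXECUTE PROCEDURE lo_manage(data);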

Maybe you should go for the bytea data type, which would work as you expect.
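
With bytea your COPY should work as-is, since the \x... strings in your
file are valid hex input for bytea - a sketch:

    CREATE TABLE sbox (
        id   character varying(255) PRIMARY KEY,
        data bytea
    );

    COPY sbox FROM '/tmp/TXT/SBOX.txt' WITH DELIMITER '|' NULL '';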

Yours,
Laurenz Albe



Re: [GENERAL] PostgreSQL as a triple store

2014-08-13 Thread Seref Arikan
Hi Jimmy,
I think you're going to need to perform experiments and decide if the
flexibility you get is worth the performance you're going to lose.
As you can see from John R Pierce's response, it is only a matter of time
before someone (correctly) warns you about the performance issues with EAV
derivatives, but then the conversation becomes too abstract, at least in my
opinion.
Why? Because we don't know how much data you're going to process with this
design, or on what kind of hardware.

I've developed solutions where I've used EAV along with a more relational
design; they were delivered on time and performed at the expected/acceptable
level, so no problems. Then again, I've tried to use EAV bluntly in a
healthcare-related project and it died on me quite quickly when the data
grew.

I always use on-demand Amazon instances to test the performance of the db
design with automatically generated data. It is a very good way of testing
your design as the data grows, and I suggest you consider the same
approach.

Also, your requirements sound related to some projects that use PostGIS -
have you taken a look at that?

Best regards
Seref





On Tue, Aug 12, 2014 at 2:19 PM, Jimmy Thrasibule
<thrasibule.ji...@gmail.com> wrote:

 Hello,

 I'm looking into gathering information about geographical locations:

 - Coordinates
 - Place name
 - Pictures
 - etc.

 Since a place can be anything, a specific place may need any kind of data
 type to be described.

 I'm therefore looking into using the same method as the semantic Web and
 trying to describe a place with triples.

 1. The triples table will reference a subject, attribute and value.
 2. The attributes table will have an attribute name, a type and maybe a
 table name.
 3. For each attribute type, a new table is created to store the values.
 4. A resources table lists all the available resources (location,
 picture, ...).

 Using this, I can easily add a new resource and describe it by adding new
 triples. Every item is identified using UUIDs as primary key.

 This seems too simple to be true, so I'm falling back on you in case you
 see any pitfalls here.

 The triples table will grow very fast in this setup and may become a
 bottleneck. However, I don't see any other way to store information about
 something as wide as places around the world.
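
 A minimal sketch of the schema described above (hypothetical names, UUID
 keys as mentioned):

     CREATE TABLE resources (
         id   uuid PRIMARY KEY,
         kind text NOT NULL                -- 'location', 'picture', ...
     );

     CREATE TABLE attributes (
         id          uuid PRIMARY KEY,
         name        text NOT NULL,
         value_type  text NOT NULL,        -- e.g. 'text', 'numeric'
         value_table text                  -- table storing this type's values
     );

     CREATE TABLE triples (
         subject   uuid NOT NULL REFERENCES resources(id),
         attribute uuid NOT NULL REFERENCES attributes(id),
         value     uuid NOT NULL           -- row in the per-type value table
     );

     -- one table per attribute type, e.g. for text values:
     CREATE TABLE values_text (
         id    uuid PRIMARY KEY,
         value text NOT NULL
     );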

 Regards,
 Jimmy



Re: [GENERAL] Database block lifecycle

2014-08-13 Thread pinker
Jeff Janes wrote
 On Tuesday, August 12, 2014, pinker <pinker@...> wrote:
 
 Do you ever plan on restarting this server?  Doing maintenance?  Applying
 security patches?
 
 Sure, I assumed the db is up and running, of course after the first read
 from disk, when the whole data set should be in RAM.
 
 
 More like b), but you are missing all the states that involve clean in
 shared_buffers, dirty in FS cache and such.
 
 Ok, so a modified block is taken from shared_buffers or from RAM when
 needed, and is always read from shared buffers?
 
 

 btw. 512MB if we assume up to 600 connections is a reasonable value?

 
Reasonable value for what?
 
 For normal server load.
 
 Cheers,
 
 Jeff







Re: [GENERAL] PostgreSQL as a triple store

2014-08-13 Thread Chris Travers
On Wed, Aug 13, 2014 at 12:43 AM, Seref Arikan <serefari...@gmail.com>
wrote:

 Hi Jimmy,
 I think you're going to need to perform experiments and decide if the
 flexibility you get is worth the performance you're going to lose.
 As you can see from John R Pierce's response, it is only a matter of time
 before someone (correctly) warns you about the performance issues with EAV
 derivatives, but then the conversation becomes too abstract, at least in my
 opinion.
 Why? Because we don't know how much data you're going to process with this
 design, or on what kind of hardware.


We also don't know how the data will be processed.

Where EAV dies a screaming horrible death is when you try to search on
multiple attributes.  If this data is just never going to be searched on
attributes, EAV isn't that bad (I use it for some things where that is
clearly the case).
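
For illustration, a two-attribute search in a triple/EAV layout needs one
self-join on the triples table per additional predicate - a sketch against
hypothetical triples/values_text/values_numeric tables, with the attribute
UUIDs supplied as psql variables:

    SELECT t1.subject
    FROM triples t1
    JOIN values_text    v1 ON v1.id = t1.value
    JOIN triples        t2 ON t2.subject = t1.subject
    JOIN values_numeric v2 ON v2.id = t2.value
    WHERE t1.attribute = :'name_attr'  AND v1.value = 'Paris'
      AND t2.attribute = :'alt_attr'   AND v2.value > 100;

Each extra attribute filter adds another pair of joins, which is where the
plans get ugly.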

-- 
Best Wishes,
Chris Travers


[GENERAL] Can I see the detailed log of query fired by particular Postgres client on Postgres server?

2014-08-13 Thread M Tarkeshwar Rao
Hi all,

Can I see the detailed log of queries fired by a particular Postgres client
on the Postgres server?
This client might be any Java application or any console Postgres client.
I want this for debugging purposes.

Regards
Tarkeshwar



Re: [GENERAL] Can I see the detailed log of query fired by particular Postgres client on Postgres server?

2014-08-13 Thread Soni M
On each session created by the client, run SET log_statement TO 'all'
before firing your query.
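
A sketch of such a session (log_statement can only be changed by a
superuser; the table name here is made up):

    SET log_statement = 'all';
    SELECT * FROM my_table WHERE id = 42;  -- now written to the server log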


On Wed, Aug 13, 2014 at 4:21 PM, M Tarkeshwar Rao
<m.tarkeshwar@ericsson.com> wrote:

  Hi all,



 Can I see the detailed log of query fired by particular Postgres client
  on Postgres server?

 This client might be any java application or  any console Postgres client.

 I want this for our debugging purpose.



 Regards

 Tarkeshwar






-- 
Regards,

Soni Maula Harriz


Re: [GENERAL] Can I see the detailed log of query fired by particular Postgres client on Postgres server?

2014-08-13 Thread M Tarkeshwar Rao
Thanks for the reply. We tried it, but it is creating a blank log file in pg_log.

From: Soni M [mailto:diptat...@gmail.com]
Sent: 13 August 2014 15:02
To: M Tarkeshwar Rao
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Can I see the detailed log of query fired by particular 
Postgres client on Postgres server?

On each session created by the client, run set log_statement to 'all' before 
firing your query

On Wed, Aug 13, 2014 at 4:21 PM, M Tarkeshwar Rao
<m.tarkeshwar@ericsson.com> wrote:
Hi all,

Can I see the detailed log of query fired by particular Postgres client  on 
Postgres server?
This client might be any java application or  any console Postgres client.
I want this for our debugging purpose.

Regards
Tarkeshwar




--
Regards,

Soni Maula Harriz


Re: [GENERAL] Can I see the detailed log of query fired by particular Postgres client on Postgres server?

2014-08-13 Thread Soni M
Check your postgresql.conf to see where you are sending your log:
log_destination, etc.
http://www.postgresql.org/docs/9.1/static/runtime-config-logging.html
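
A sketch of the relevant settings (adjust to your installation; changing
logging_collector requires a server restart):

    log_destination = 'stderr'
    logging_collector = on
    log_directory = 'pg_log'
    log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
    log_statement = 'all'               # log every statement
    log_line_prefix = '%t [%p] %u@%d '  # time, pid, user, database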




On Wed, Aug 13, 2014 at 4:37 PM, M Tarkeshwar Rao
<m.tarkeshwar@ericsson.com> wrote:

  Thanks for reply. We tried it but it is creating blank log file on
 pg_log.



 *From:* Soni M [mailto:diptat...@gmail.com]
 *Sent:* 13 August 2014 15:02
 *To:* M Tarkeshwar Rao
 *Cc:* pgsql-general@postgresql.org
 *Subject:* Re: [GENERAL] Can I see the detailed log of query fired by
 particular Postgres client on Postgres server?



 On each session created by the client, run set log_statement to 'all'
 before firing your query



 On Wed, Aug 13, 2014 at 4:21 PM, M Tarkeshwar Rao
 <m.tarkeshwar@ericsson.com> wrote:

 Hi all,



 Can I see the detailed log of query fired by particular Postgres client
  on Postgres server?

 This client might be any java application or  any console Postgres client.

 I want this for our debugging purpose.



 Regards

 Tarkeshwar







 --

 Regards,



 Soni Maula Harriz




-- 
Regards,

Soni Maula Harriz


Re: [GENERAL] Postgres 9.3 tuning advice

2014-08-13 Thread Marti Raudsepp
On Wed, Aug 13, 2014 at 9:44 AM, Albe Laurenz <laurenz.a...@wien.gv.at> wrote:
 Set wal_buffers to 16MB so that a whole WAL segment will fit.

No need, wal_buffers is automatically tuned now. If your
shared_buffers is 512MB or larger, wal_buffers will be 16MB.

Regards,
Marti




Re: [GENERAL] PostgreSQL as a triple store

2014-08-13 Thread Jimmy Thrasibule
 I think you're going to need to perform experiments and decide if the
flexibility you get is worth the performance you're going to lose.
 As you can see from John R Pierce's response, it is only a matter of time
before someone (correctly) warns you about the performance issues with EAV
derivatives, but then the conversation becomes too abstract, at least in my
opinion.
 Why? Because we don't know how much data you're going to process with
this design, or on what kind of hardware.

My data model is not completely ready yet and is still being worked on. But
yes, I think I first have to think more about how I will be using the data
before looking at a storage model.

I will indeed run some experiments first, but using this model I was
planning to put some of the complexity in the code.

 Also, your requirements sound related to some projects that use PostGIS -
have you taken a look at that?

Never heard of it, I'll give it a look.

---
Jimmy


Re: [GENERAL] Database block lifecycle

2014-08-13 Thread Jeff Janes
On Wed, Aug 13, 2014 at 1:07 AM, pinker <pin...@onet.eu> wrote:

 
  btw. 512MB if we assume up to 600 connections is a reasonable value?
 
 
 Reasonable value for what?
 
  For normal server load.


512MB is being questioned as a reasonable value for what?  shared_buffers?
 work_mem?  maintenance_work_mem?

Cheers,

Jeff


Re: [GENERAL] Database block lifecycle

2014-08-13 Thread Robin


On 13/08/2014 17:23, Jeff Janes wrote:


On Wed, Aug 13, 2014 at 1:07 AM, pinker <pin...@onet.eu> wrote:



 btw. 512MB if we assume up to 600 connections is a reasonable value?


Reasonable value for what?

 For normal server load.


512MB is being questioned as a reasonable value for what? 
 shared_buffers?  work_mem?  maintenance_work_mem?


Cheers,

Jeff


Generally speaking, folk imagine that DBMS performance is all about disk
access - in reality, chucking as much memory as possible at the server(s)
is an optimal investment. Analyse your queries and store time-critical
stuff in memory.


R+C



[GENERAL] Trigger function cannot reference field name with capital letter

2014-08-13 Thread Patrick Dung
Hello PGSQL users,


I have a field called postTimestamp.
The trigger function could not reference it.
When I change my field to post_timestamp, I can reference it from the trigger
function.
Version is 9.3.5. Any comment?

 2014-08-14 00:23:32.717 HKT ERROR:  record "new" has no field "posttimestamp"
 2014-08-14 00:23:32.717 HKT CONTEXT:  SQL statement "SELECT * from tbl1
    where NEW.posttimestamp > "2014-01-01 00:00:00" )"
    PL/pgSQL function test_trigger() line 9 at assignment
 2014-08-14 00:23:32.717 HKT STATEMENT:  INSERT INTO public.tbl1(vendor, 
url, "postTimestamp", product, type, "itemID") VALUES ('vendor1'::text, 
'http://example.org'::text, '2014-01-01 01:01:01'::timestamp without time zone, 
'product1'::text, 'food'::text, '1'::bigint)
 2014-08-14 00:32:39.708 HKT ERROR:  syntax error at or near "SELECT" at 
character 314


Thanks and regards,
Patrick


Re: [GENERAL] Trigger function cannot reference field name with capital letter

2014-08-13 Thread Adrian Klaver

On 08/13/2014 08:52 PM, Patrick Dung wrote:

Hello PGSQL users,

I have a field called postTimestamp.
The trigger function could not reference it.
When I change my field to post_timestamp. I can reference it from the
tigger function.
Version is 9.3.5. Any comment?

 2014-08-14 00:23:32.717 HKT ERROR:  record "new" has no field
"posttimestamp"


The clue is above. Postgres folds unquoted mixed-case identifiers to lower
case by default, so it is looking for posttimestamp. If you want to preserve
the mixed case, double-quote the field name: "postTimestamp".
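
For example, inside the trigger function (a sketch; the variable name and
the comparison are made up from your log):

    -- unquoted NEW.postTimestamp is folded to NEW.posttimestamp -> error;
    -- quoting preserves the mixed-case field name
    cnt := (SELECT count(*) FROM tbl1
            WHERE NEW."postTimestamp" > timestamp '2014-01-01 00:00:00');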



 2014-08-14 00:23:32.717 HKT CONTEXT:  SQL statement "SELECT * from tbl1
 where NEW.posttimestamp > "2014-01-01 00:00:00" )"
 PL/pgSQL function test_trigger() line 9 at assignment
 2014-08-14 00:23:32.717 HKT STATEMENT:  INSERT INTO
public.tbl1(vendor, url, "postTimestamp", product, type, "itemID")
VALUES ('vendor1'::text, 'http://example.org'::text, '2014-01-01
01:01:01'::timestamp without time zone, 'product1'::text, 'food'::text,
'1'::bigint)
 2014-08-14 00:32:39.708 HKT ERROR:  syntax error at or near "SELECT"
at character 314

Thanks and regards,
Patrick



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Trigger function cannot reference field name with capital letter

2014-08-13 Thread John R Pierce

On 8/13/2014 8:52 PM, Patrick Dung wrote:


I have a field called postTimestamp.
The trigger function could not reference it.
When I change my field to post_timestamp, I can reference it from the
trigger function.

Version is 9.3.5. Any comment?

 2014-08-14 00:23:32.717 HKT ERROR: record "new" has no field
"posttimestamp"

 2014-08-14 00:23:32.717 HKT CONTEXT:  SQL statement "SELECT * from tbl1
where NEW.posttimestamp > "2014-01-01 00:00:00" )"
PL/pgSQL function test_trigger() line 9 at assignment



field and table names with mixed case have to be "Quoted". String
constants, on the other hand, are in single 'quotes'.


try...

SELECT * from tbl1 where new."postTimestamp' > timestamp '2014-01-01 
00:00:00'



 2014-08-14 00:23:32.717 HKT STATEMENT:  INSERT INTO
public.tbl1(vendor, url, "postTimestamp", product, type, "itemID")
VALUES ('vendor1'::text, 'http://example.org'::text, '2014-01-01
01:01:01'::timestamp without time zone, 'product1'::text,
'food'::text, '1'::bigint)
 2014-08-14 00:32:39.708 HKT ERROR:  syntax error at or near
"SELECT" at character 314


those two errors have different timestamps; I don't believe the second
error comes from that statement.


there's a whole lot of unnecessary typecasting in that insert, however.

INSERT INTO public.tbl1(vendor, url, "postTimestamp", product, type, 
"itemID")
VALUES ('vendor1', 'http://example.org', timestamp '2014-01-01 
01:01:01', 'product1', 'food', 1)


would suffice nicely.




--
john r pierce  37N 122W
somewhere on the middle of the left coast




Re: [GENERAL] Trigger function cannot reference field name with capital letter

2014-08-13 Thread John R Pierce

On 8/13/2014 9:13 PM, John R Pierce wrote:


SELECT * from tbl1 where new."postTimestamp' > timestamp '2014-01-01 
00:00:00'


oops.

SELECT * from tbl1 where new."postTimestamp" > timestamp '2014-01-01 
00:00:00'


I meant.  I should proof what I write, hah!


--
john r pierce  37N 122W
somewhere on the middle of the left coast





[GENERAL] Pass where clause to a function

2014-08-13 Thread Andrew Bartley
Hi all,

Is it possible to create a view or foreign table that will do something
like this

select * from table_x
where x_id = 10;

passing the where x_id = 10 clause to a function

sorta like this

select * from api_function('x = 10')

or

 select * from api_function(10)

passing the result set back to the original select

Thanks


Re: [GENERAL] Pass where clause to a function

2014-08-13 Thread John R Pierce

On 8/13/2014 10:08 PM, Andrew Bartley wrote:


Is it possible to create a view or foreign table that will do 
something like this


select * from table_x
where x_id = 10;

passing the where x_id = 10 to a function

sorta like this

select * from api_function('x = 10')

or

 select * from api_function(10)

passing the result set back to the original select



your first version of api_function would have to use that x = 10 to
construct the query and EXECUTE it, then return the result set.


your second version of the function would just pass 10 in as an argument,
which could be used in the query select * from table_x where x_id = $1.
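
Rough sketches of both (assuming table_x from your example; note the
dynamic version executes whatever text it is given, so it is only safe
with trusted input):

    -- version 1: caller passes a WHERE fragment, query built with EXECUTE
    CREATE OR REPLACE FUNCTION api_function(p_where text)
    RETURNS SETOF table_x AS $$
    BEGIN
        RETURN QUERY EXECUTE 'SELECT * FROM table_x WHERE ' || p_where;
    END;
    $$ LANGUAGE plpgsql;

    -- version 2: caller passes the value, no dynamic SQL needed
    CREATE OR REPLACE FUNCTION api_function(p_id integer)
    RETURNS SETOF table_x AS $$
        SELECT * FROM table_x WHERE x_id = $1;
    $$ LANGUAGE sql STABLE;

    -- usage:
    --   SELECT * FROM api_function('x_id = 10');
    --   SELECT * FROM api_function(10);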


either way, your view would be select * from api_function(whichever).

but I think you're rather confused here, as I don't see much utility in 
either of these constructs in the form you specified.




--
john r pierce  37N 122W
somewhere on the middle of the left coast


