[GENERAL] best practice in archiving CDR data

2010-03-29 Thread Juan Backson
Hi,

I am using Postgres to store CDR data for VoIP switches.  The data size
quickly grows to a few TBs.

What I would like to do is to be able to regularly archive the oldest data
so only the most recent 6 months of data is available.

All the old data should be stored in a format that can be retrieved back
either into a DB table or into flat files.

Does anyone know how I should go about doing that?  Is there an existing
tool that already does this?

thanks,
jb


Re: [GENERAL] best practice in archiving CDR data

2010-03-29 Thread Juan Backson
Hi

Instead of dropping the table, I would like to archive the old table into a
format that can be read and retrieved.

Can I pg_dump each child table?

What is the best way to do it?  pg_dump and convert the data to CSV and then
tar.gz it, or back it up into a pg_dump archive format?

thanks,
jb
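
Both options look roughly like this (a sketch; cdr_2010_01 stands in for a
monthly child table and cdrdb for the database, both illustrative):

# custom-format dump of one child table; restorable later with pg_restore:
pg_dump -Fc -t cdr_2010_01 cdrdb > cdr_2010_01.dump

# or a CSV export, then compress; reloadable later with COPY ... FROM:
psql cdrdb -c "\copy cdr_2010_01 to 'cdr_2010_01.csv' csv header"
gzip cdr_2010_01.csv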

On Mon, Mar 29, 2010 at 9:33 PM, A. Kretschmer 
andreas.kretsch...@schollglas.com wrote:

 In response to Juan Backson :
  Hi,
 
  I am using Postgres to store CDR data for VoIP switches.  The data size
  quickly grows to a few TBs.
 
  What I would like to do is to be able to regularly archive the oldest
  data so only the most recent 6 months of data is available.
 
  All the old data should be stored in a format that can be retrieved back
  either into a DB table or into flat files.
 
  Does anyone know how I should go about doing that?  Is there an existing
  tool that already does this?

 Sounds like table partitioning: create, for instance, a table for each
 month and DROP old tables after 6 months or so.

 http://www.postgresql.org/docs/current/static/ddl-partitioning.html


 Regards, Andreas
 --
 Andreas Kretschmer
 Contact:  Heynitz: 035242/47150,   D1: 0160/7141639 (more: see headers)
 GnuPG: 0x31720C99, 1006 CCB4 A326 1D42 6431  2EB0 389D 1DC2 3172 0C99

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general
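
The inheritance-based partitioning described on the linked page looks roughly
like this (a sketch; all table and column names are illustrative):

CREATE TABLE cdr (
    call_id    bigint,
    caller     text,
    callee     text,
    started_at timestamptz NOT NULL
);

-- one child table per month, constrained to that month's rows
CREATE TABLE cdr_2009_09 (
    CHECK (started_at >= DATE '2009-09-01' AND started_at < DATE '2009-10-01')
) INHERITS (cdr);

CREATE TABLE cdr_2010_03 (
    CHECK (started_at >= DATE '2010-03-01' AND started_at < DATE '2010-04-01')
) INHERITS (cdr);

-- once a month ages out of the 6-month window, archive it (see the
-- pg_dump/COPY sketch above) and drop it, which is nearly instant:
DROP TABLE cdr_2009_09;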



[GENERAL] pgmemcache vs pgmemcached

2009-11-12 Thread Juan Backson
Hi,

Has anyone tried pgmemcache and pgmemcached?

What is the difference between the two?  They are both non-persistent, so I
am having a difficult time deciding which one to use.

Any suggestion?

Thanks,
jb


[GENERAL] question about using pgmemcache

2009-11-08 Thread Juan Backson
Hi,

I am planning to use pgmemcache as a fast cache between my app and
Postgres.

I have got pgmemcache set up.  The question I have is: if memcached crashes
and I need to restart it, since memcached is not persistent, do I need to run
some scripts to re-create all the key-value data from Postgres?

What is the easiest way to do it?  Is there any way I can write some sync-up
function to have Postgres re-create the data in memcached?

Thanks,
jb
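
One approach is a warm-up pass that re-registers the server and pushes the
working set back through pgmemcache after a restart.  A sketch, assuming an
illustrative key_value(key, value) table and that the installed pgmemcache
version exposes memcache_server_add() and memcache_set() as used below:

-- point pgmemcache at the restarted memcached instance
SELECT memcache_server_add('127.0.0.1:11211');

-- push every row back into memcached
SELECT memcache_set(kv.key, kv.value) FROM key_value kv;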


[GENERAL] strange error occurs when adding index

2009-09-04 Thread Juan Backson
Hi,

When I tried to add the following index, I got some strange errors.  Does
anyone know what these errors mean and how to fix them?

Here is the index query:
create index idx_product_items_digits on product_items using gist (digits
gist_prefix_range_ops, product_id);


Here is the error:
NOTICE:  __pr_penalty(sa[], a1928901[]) orig-first=0 orig-last=0
NOTICE:  __pr_penalty(sa[], 1206323[]) orig-first=0 orig-last=0
NOTICE:  __pr_penalty(a1206329[], 1206370[]) orig-first=0 orig-last=0
NOTICE:  __pr_penalty(a1206329[], a1206328[]) orig-first=0 orig-last=0
NOTICE:  __pr_penalty(a120632[8-9], 1206369[]) orig-first=56 orig-last=57
NOTICE:  __pr_penalty(a120632[8-9], a1206327[]) orig-first=56 orig-last=57
NOTICE:  __pr_penalty(a120632[7-9], 1206368[]) orig-first=55 orig-last=57
NOTICE:  __pr_penalty(a120632[7-9], a1206326[]) orig-first=55 orig-last=57
NOTICE:  __pr_penalty(a120632[6-9], 1206367[]) orig-first=54 orig-last=57
NOTICE:  __pr_penalty(a120632[6-9], a1206325[]) orig-first=54 orig-last=57
NOTICE:  __pr_penalty(a120632[5-9], 1206366[]) orig-first=53 orig-last=57
NOTICE:  __pr_penalty(a120632[5-9], a1206324[]) orig-first=53 orig-last=57
NOTICE:  __pr_penalty(a120632[4-9], 1206365[]) orig-first=52 orig-last=57
NOTICE:  __pr_penalty(a120632[4-9], a1206323[]) orig-first=52 orig-last=57
NOTICE:  __pr_penalty(a120632[3-9], 1206364[]) orig-first=51 orig-last=57
NOTICE:  __pr_penalty(a120632[3-9], a1206322[]) orig-first=51 orig-last=57
NOTICE:  __pr_penalty(a120632[2-9], 1206363[]) orig-first=50 orig-last=57
NOTICE:  __pr_penalty(a120632[2-9], a1206321[]) orig-first=50 orig-last=57
NOTICE:  __pr_penalty(a120632[1-9], 1206362[]) orig-first=49 orig-last=57
NOTICE:  __pr_penalty(a120632[1-9], a1206320[]) orig-first=49 orig-last=57
NOTICE:  __pr_penalty(a120632[0-9], 1206361[]) orig-first=48 orig-last=57
NOTICE:  __pr_penalty(a120632[0-9], a1206319[]) orig-first=48 orig-last=57
NOTICE:  __pr_penalty(a12063[1-2], 1206359[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[1-2], a1206318[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[1-2], 1206358[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[1-2], a1206315[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[1-2], 1206357[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[1-2], a1206314[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[1-2], 1206356[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[1-2], a1206313[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[1-2], 1206355[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[1-2], a1206312[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[1-2], 1206354[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[1-2], a1206310[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[1-2], 1206353[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[1-2], a1206309[]) orig-first=49 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], 1206352[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], a1206307[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], 1206351[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], a1206306[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], 1206350[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], a1206304[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], 1206349[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], a1206303[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], 1206347[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], a1206302[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], a1952240[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], a1206301[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], a1952240[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], a1206300[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], a1952240[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a12063[0-2], a1206299[]) orig-first=48 orig-last=50
NOTICE:  __pr_penalty(a1206[2-3], a1952240[]) orig-first=50 orig-last=51
NOTICE:  __pr_penalty(a1206[2-3], a1206298[]) orig-first=50 orig-last=51
NOTICE:  __pr_penalty(a1206[2-3], a1952240[]) orig-first=50 orig-last=51
NOTICE:  __pr_penalty(a1206[2-3], a1206297[]) orig-first=50 orig-last=51
NOTICE:  __pr_penalty(a1206[2-3], a1952240[]) orig-first=50 orig-last=51
NOTICE:  __pr_penalty(a1206[2-3], a1206296[]) orig-first=50 orig-last=51
NOTICE:  __pr_penalty(a1206[2-3], a1952240[]) orig-first=50 orig-last=51
NOTICE:  __pr_penalty(a1206[2-3], a1206295[]) orig-firs


Thanks for your help.

jb


[GENERAL] N + 1 replication

2009-09-03 Thread Juan Backson
Hi,

Does anyone know if there is any N + 1 replication for Postgres?

Could someone please point me in the right direction?

Thanks,
JB


Re: [GENERAL] libpq performance

2009-09-03 Thread Juan Backson
Hi,

For this problem, what if I use a prepared statement?  If I use prepared
statements directly from libpq, would that help?

Thanks,
JB
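
For reference, PQprepare()/PQexecPrepared() make the server parse and plan the
statement once instead of on every call, which targets exactly the overhead
Craig discusses below.  A minimal sketch (connection string, table, and column
names are illustrative):

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* parse and plan once */
    PGresult *res = PQprepare(conn, "get_res",
                              "SELECT * FROM resource WHERE id = $1",
                              1, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "prepare failed: %s", PQerrorMessage(conn));
    PQclear(res);

    /* then execute as often as needed, paying only bind/execute cost */
    const char *params[1] = { "42" };
    res = PQexecPrepared(conn, "get_res", 1, params, NULL, NULL, 0);
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("%d rows\n", PQntuples(res));
    PQclear(res);

    PQfinish(conn);
    return 0;
}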

On Mon, Aug 24, 2009 at 9:59 AM, Craig Ringer
cr...@postnewspapers.com.au wrote:

 On Mon, 2009-08-24 at 00:08 +0800, Juan Backson wrote:

  I have a SQL query that takes only 0.3 ms to run when using psql with
  EXPLAIN ANALYZE.

 explain analyze reports server-side execution time.

  However, when I execute it using PQexec, it takes 12 ms.
  Does anyone know why it is that slow?

 ... and if you're timing PQexec you're probably taking the client-side
 time, ie the time from start of PQexec call to the time the PQexec call
 returns.

 That means that network latency *is* a factor, albeit a small one. I
 don't think EXPLAIN ANALYZE will report any delays due to lock
 acquisition or anything like that either. However, most of the
 difference probably comes from the time taken to parse and plan the
 statement. It'd help if you actually provided the EXPLAIN ANALYZE output
 and the statement in question so there was less guesswork involved.

  My db server is in the internal network, so there should not be any
  latency issue.

 Rather than assuming that, I'd recommend measuring it:

 - Run the test program on the DB server with a connection over the
 loopback interface (127.0.0.1); and
 - if the DB server is UNIX based, run the test program on the DB server
 with a connection over a UNIX socket; and
 - Use Wireshark to examine the actual network traffic to see how big a
 gap there is between request and response

 However, as I said above I personally expect the difference is mostly in
 parsing and planning time.

 There are ways to reduce planning time (at the cost of potentially
 inferior query plans) - but if you're really that worried about query
 execution time, might you perhaps be executing a huge number of tiny
 queries in a situation where one or two bigger queries can get the job
 done more quickly?

 --
 Craig Ringer





[GENERAL] libpq performance

2009-08-23 Thread Juan Backson
Hi,

I have a SQL query that takes only 0.3 ms to run when using psql with
EXPLAIN ANALYZE.

However, when I execute it using PQexec, it takes 12 ms.  Does anyone know
why it is that slow?

My db server is in the internal network, so there should not be any latency
issue.

Any suggestion will be greatly appreciated.

Thanks,
JB


[GENERAL] how to return field based on field= NULL or not

2009-08-23 Thread Juan Backson
Hi,

In my table, I have fieldA and fieldB.  At any point in time, only one of
these fields can have data in it.  The other is NULL.

Instead of select fieldA, fieldB from table, I want it to return either
fieldA or fieldB depending on which one is non-NULL.

The reason is that I want to use select
array_to_string(array_accum(fieldA or fieldB), ',') from table.

Is it possible to do it that way?

Thanks,
JB
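
COALESCE() returns its first non-NULL argument, which does exactly this.  For
example (mytable is illustrative; array_agg is the built-in equivalent of the
array_accum example aggregate as of 8.4):

SELECT COALESCE(fieldA, fieldB) FROM mytable;

SELECT array_to_string(array_agg(COALESCE(fieldA, fieldB)), ',') FROM mytable;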


[GENERAL] Need help with libpq and PQexec

2009-08-11 Thread Juan Backson
Hi,
In my source code, I have:

res = PQexec(conn, "select * from resource");
if (PQresultStatus(res) != PGRES_COMMAND_OK) {
    switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR,
                      "PQexec failed: %s\n", PQerrorMessage(conn));
    PQclear(res);
    return NULL;
}
PQclear(res);

When it is executed, I am getting the "PQexec failed" error, but the return
value of PQerrorMessage(conn) is empty.

Does anyone know why?

Thanks,
JB


Re: [GENERAL] Need help with libpq and PQexec [RESOLVED]

2009-08-11 Thread Juan Backson
I used if (PQresultStatus(res) != PGRES_TUPLES_OK) and it is fine now.  (A
SELECT that returns rows reports PGRES_TUPLES_OK; PGRES_COMMAND_OK is for
commands that return no data.)

Thanks,
JB

On Tue, Aug 11, 2009 at 3:35 PM, Juan Backson juanback...@gmail.com wrote:

 Hi,
 In my source code, I have:

 res = PQexec(conn, "select * from resource");
 if (PQresultStatus(res) != PGRES_COMMAND_OK) {
     switch_log_printf(SWITCH_CHANNEL_LOG, SWITCH_LOG_ERROR,
                       "PQexec failed: %s\n", PQerrorMessage(conn));
     PQclear(res);
     return NULL;
 }
 PQclear(res);

 When it is executed, I am getting the "PQexec failed" error, but the return
 value of PQerrorMessage(conn) is empty.

 Does anyone know why?

 Thanks,
 JB



[GENERAL] keepalive problem with libpg connection

2009-08-11 Thread Juan Backson
Hi,

I am writing my own connection pool for libpq connections.

My goal is to reduce the amount of time spent opening and closing sockets.
However, I see that Postgres closes my connection every so often.

What is the best solution for that?  Should I regularly send a "SELECT 1"
to keep the connection alive, or is there a better way to write a connection
pool for libpq?


Any suggestion will be greatly appreciated.

JB
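
Note that the server does not normally time out idle connections, so a
firewall or NAT device dropping idle TCP sessions is a likely culprit.  One
option besides a periodic SELECT 1 is to enable TCP keepalives on the socket
libpq exposes.  A sketch (the TCP_KEEP* options are Linux-specific; the timing
values are illustrative):

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <libpq-fe.h>

/* keep idle pooled connections from being dropped by middleboxes */
static void enable_keepalive(PGconn *conn)
{
    int fd = PQsocket(conn);
    int on = 1, idle = 60, interval = 10, count = 3;

    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count));
}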


[GENERAL] PQstatus does not seem to work

2009-08-10 Thread Juan Backson
Hi

I used the PQstatus(conn) function to check connection status, but I found
that it still returns CONNECTION_OK even after Postgres is restarted.  Does
anyone know if there is another command that I can use to check connection
status?

What other solution is available to check whether a connection is still
alive?

Thanks,
JB
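
PQstatus() only reports the state recorded by the last libpq operation on the
connection; it does not probe the server.  A common workaround is to issue a
trivial query and reset on failure.  A minimal sketch:

#include <libpq-fe.h>

/* returns 1 if the connection answers; reconnects once if it is dead */
static int ensure_alive(PGconn *conn)
{
    PGresult *res = PQexec(conn, "SELECT 1");
    int ok = (PQresultStatus(res) == PGRES_TUPLES_OK);

    if (res)
        PQclear(res);

    if (!ok && PQstatus(conn) == CONNECTION_BAD) {
        PQreset(conn);                  /* reconnect with the same parameters */
        ok = (PQstatus(conn) == CONNECTION_OK);
    }
    return ok;
}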


Re: [GENERAL] PQstatus does not seem to work

2009-08-10 Thread Juan Backson
Hi Tim,

Thank you for your suggestion.

My application is multi-threaded, and each thread needs to run 5 select
statements.


Right now, I have my own pool of 500 PGconn objects inside the code.  For
each connection I obtain from the pool, I query the database over a direct
socket, without ODBC.  That way, I can get the data much faster.

Does pgpool-II have a C API that I can use inside my code?

Also, can I use a direct socket connection to query the db with pgpool-II?
The way I am executing queries is:

res = PQexec(conn, "BEGIN");
res = PQexec(conn, "DECLARE ... CURSOR FOR SELECT * ...");
res = PQexec(conn, "END");


Could someone help me out?  1) What is the best way to use connection pooling
in my situation, and 2) is it the right way to do BEGIN; DECLARE CURSOR ...;
END; for each select query?
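
Spelled out, the cursor pattern needs an explicit FETCH to get rows back; a
sketch (cursor and table names are illustrative):

#include <stdio.h>
#include <libpq-fe.h>

static void run_cursor_query(PGconn *conn)
{
    PGresult *res;

    res = PQexec(conn, "BEGIN");
    PQclear(res);

    res = PQexec(conn, "DECLARE mycur CURSOR FOR SELECT * FROM resource");
    PQclear(res);

    /* the DECLARE returns no rows; FETCH is what brings them back */
    res = PQexec(conn, "FETCH ALL FROM mycur");
    if (PQresultStatus(res) == PGRES_TUPLES_OK)
        printf("%d rows\n", PQntuples(res));
    PQclear(res);

    res = PQexec(conn, "END");   /* END is an alias for COMMIT */
    PQclear(res);
}

Unless the point is to FETCH in batches to bound client memory, a plain
PQexec of the SELECT does the same job with fewer round trips.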


Thanks for all your help.

JB

On Tue, Aug 11, 2009 at 2:02 AM, Tim Hart tjh...@mac.com wrote:


 On 8/10/09 12:08 PM, Sam Mason s...@samason.me.uk wrote:

  On Tue, Aug 11, 2009 at 12:41:47AM +0800, Juan Backson wrote:
   I used PQstatus(conn) function to check connection status, but I found
   that it still returns CONNECTION_OK even after postgres is restarted.
   Does anyone know if there is another command that I can use to check
   connection status?
 
  Yes, PQstatus just gives back the last status.  It doesn't go off and
  check anything.
 
   What other solution is available to check whether a connection is still
   alive?
 
  As a connection can go down at any time, this doesn't seem useful.  Just
  send off your request as normal and if it fails because the connection
  was closed then you can open a new one and try again.

 Depending on your situation, connection pooling might be a reasonable
 option. Instead of managing the connections yourself, you leave that to
 another process entirely.

 http://www.revsys.com/writings/postgresql-performance.html

 Look at the section on Stateless Applications

 I spend a lot of time writing stateless apps that serve many 'users'
 concurrently. For me, the pooling idea is much simpler because I only
 interact with the 'pool', and the pool manages opening and closing
 connections on my behalf.

 Of course, this is not a good option if you're writing a stateful app.
 Your original email didn't say either way, so this is a take on the other
 side of the problem.