[GENERAL] New Slave - timeline ERROR

2016-01-08 Thread drum.lu...@gmail.com
I've started setting up a new SLAVE PostgreSQL server.

*NOTE: I run the pg_basebackup from another STANDBY SERVER, not from the
MASTER.*

1 - screen -t basebackup

2 - su - postgres

3 - cd ~/9.2/data/

4 - ssh postgres@slave01 'pg_basebackup --pgdata=- --format=tar
--label=bb_master --progress --host=localhost --port=5432
--username=replicator --xlog | pv --quiet --rate-limit 100M' | tar -x
--no-same-owner

5 - I've commented out the "primary_conninfo =" and "standby_mode =" lines so the
slave can get the files from the WAL_ARCHIVE

6 - After that I got the logs:

postgres(iostreams)[10037]:   2016-01-09 00:07:26.604
UTC|10085|LOG:  database system is ready to accept read only
connections

7 - After the server finished replaying the WAL_ARCHIVE, I turned on replication from
the MASTER in recovery.conf:

*recovery.conf on the New Slave:*

restore_command = 'exec nice -n 19 ionice -c 2 -n 7
../../bin/restore_wal_segment.bash "../wal_archive/%f" "%p"'
archive_cleanup_command = 'exec nice -n 19 ionice -c 2 -n 7
../../bin/pg_archivecleaup_mv.bash -d "../wal_archive" "%r"'
recovery_target_timeline = 'latest'
standby_mode = on
primary_conninfo = 'host=192.168.100.XX port=5432 user=replicator
application_name=replication_slave02'

But once I restarted PostgreSQL I got this error:

WAL segment `../wal_archive/0005.history` not found
2016-01-09 01:13:39.183 UTC|774|FATAL:  timeline 2 of the primary does not match
recovery target timeline 4
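
(For reference, a quick way to check which timeline each cluster is currently on is pg_controldata; a minimal sketch assuming the standard 9.2 paths used elsewhere in this thread, with an illustrative output value:)

/usr/pgsql-9.2/bin/pg_controldata /var/lib/pgsql/9.2/data | grep TimeLineID
# e.g.  Latest checkpoint's TimeLineID:       4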

What can I do to solve the problem?

It's really important, as it's a new production slave. Thank you!


Re: [GENERAL] New Slave - timeline ERROR

2016-01-09 Thread drum.lu...@gmail.com
Hi @bricklen. Thanks for your reply. I've been working on it for 20h =( So:

The master is currently shipping the WALs to the slave.
When pg_basebackup was done, I got a successful log entry:

postgres(iostreams)[10037]:   2016-01-09 00:07:26.604
UTC|10085|LOG:  database system is ready to accept read only
connections

The problem happens when, after pg_basebackup, I change the recovery.conf
to replicate from live,  changing these 2 lines:

standby_mode = on

primary_conninfo = 'host=IP_TO_THE_OTHER_SLAVE port=5432
user=replicator application_name=replication_slave02'


I read the link you have sent: http://dba.stackexchange.com/a/53546/24393
I'm doing everything exactly the same; only one parameter is different:

time pg_basebackup --pgdata=$PGDATA --host=IP_OF_MASTER --port=5432
--username=replication --password --xlog-method=stream --format=plain
--progress --verbose

I'm not using --xlog-method=stream.
I'm using:
ssh postgres@slave1 'pg_basebackup --pgdata=- --format=tar
--label=bb_master --progress --host=localhost --port=5432
--username=replicator *--xlog* | pv --quiet --rate-limit 100M' | tar -x
--no-same-owner

On http://www.postgresql.org/docs/9.2/static/app-pgbasebackup.html it says:

--xlog

Using this option is equivalent of using -X with method fetch.
So... I have 2 TB of data. I wouldn't like to run pg_basebackup again
with --xlog-method=stream without being sure that it is going to
work.


Do you have any other idea? Do you know if --xlog is the problem and I
should re-run pg_basebackup with the *--xlog-method=stream*
option?

Thank you



Lucas Possamai

kinghost.co.nz
<http://forum.kinghost.co.nz/memberlist.php?mode=viewprofile&u=2&sid=e999f8370385657a65d41d5ff60b0b38>

On 10 January 2016 at 06:06, bricklen  wrote:

> On Fri, Jan 8, 2016 at 8:44 PM, drum.lu...@gmail.com  > wrote:
>
> Hi, I'm a bit too lazy to try suss out the exact reasons for your failure,
> but here is a reasonably thorough guide to set up replication:
> http://dba.stackexchange.com/a/53546/24393
>
> A few tips:
> - Having the master ship WALs to the slaves is handy if you can pull it
> off. If you are doing it over the wire and using rsync, "-z" for
> compression is recommended. If you are doing the tar format of the
> pg_basebackup, you *must* have the master ship the WALs to the slave
> otherwise it won't be able to synchronize (the "stream" method ships WALs
> over the wire so the end result is a synchronized system).
>
> - I always run pg_basebackup from the slave I am building, for simplicity.
> - I create new slaves almost every day (we have thousands of databases)
> using a bash script and it almost never fails. In essence it is a big
> wrapper around the pg_basebackup command (though we are using pg93 mostly).
>
> The base backup command that I run from the slave I am building:
> pg_basebackup --pgdata=$PGDATA --host=$MASTER_IP --port=$PGPORT
> --username=replication --no-password --xlog-method=stream --format=plain
> --progress --verbose
>
> The recovery.conf:
> standby_mode = 'on'
> primary_conninfo = 'user=replication host=$IP_OF_UPSTREAM_SLAVE_OR_MASTER
> port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres'
> recovery_target_timeline = 'latest'
> archive_cleanup_command = '/usr/pgsql-9.3/bin/pg_archivecleanup
> /path/to/WALs %r'
> restore_command = 'cp /path/to/WALs/%f "%p" 2>>
> /your/PGDATA/path/pg_log/standby.log'
>


Re: [GENERAL] New Slave - timeline ERROR

2016-01-09 Thread drum.lu...@gmail.com
Hi,

If the master is successfully shipping WALs to the slave you are setting up
> you do not need the "stream" option.


yes.. the master is successfully shipping the WALs

Is there anything else? Help, please hehehehe





Lucas Possamai

kinghost.co.nz
<http://forum.kinghost.co.nz/memberlist.php?mode=viewprofile&u=2&sid=e999f8370385657a65d41d5ff60b0b38>

On 10 January 2016 at 10:34, bricklen  wrote:

>
> On Sat, Jan 9, 2016 at 12:36 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> Do you have any other ideia? Do you know if --xlog it's the problem and I
>> should re-run the pg_basebackup again with the *--xlog-method=stream*
>> option?
>>
>
>
> If the master is successfully ships WALs to the slave you are setting up
> you do not need the "stream" option.
>


Re: [GENERAL] New Slave - timeline ERROR

2016-01-09 Thread drum.lu...@gmail.com
Hi,

If you are able to stop Postgres on the slave you are taking the base
> backup from, you could do this:


I'm not... the database is 2 TB.
So an RSYNC would take DAYS, and I'm not able to stop the SLAVE for
that long.

Lucas



Lucas Possamai

kinghost.co.nz
<http://forum.kinghost.co.nz/memberlist.php?mode=viewprofile&u=2&sid=e999f8370385657a65d41d5ff60b0b38>

On 10 January 2016 at 10:53, bricklen  wrote:

> On Sat, Jan 9, 2016 at 1:49 PM, drum.lu...@gmail.com  > wrote:
>
>> Hi,
>>
>> If the master is successfully ships WALs to the slave you are setting up
>>> you do not need the "stream" option.
>>
>>
>> yes.. the master is successfully shipping the WALs
>>
>> Is there anything else? Help, please hehehehe
>>
>
> If you are able to stop Postgres on the slave you are taking the base
> backup from, you could do this:
>
> 1). Stop postgres on slave1
> 2). Rsync slave1 to slave2 to copy only the deltas.
> 3). When you start up slave2 the WALs that the master has shipped to
> slave2 should apply and bring your system up to consistency.
>


Re: [GENERAL] New Slave - timeline ERROR

2016-01-09 Thread drum.lu...@gmail.com
Hmm... I see...
>
>
>> Depending on when you took the base backup and how many changes have
>> occurred at your source (slave1) database cluster, the rsync execution time
>> may or may not take as long as a new base backup if it is only shipping
>> deltas (changed files).
>
>
I could stop the slave then, but I'm afraid of getting it back online and
hitting some other errors hehehehe =\







Lucas Possamai

kinghost.co.nz
<http://forum.kinghost.co.nz/memberlist.php?mode=viewprofile&u=2&sid=e999f8370385657a65d41d5ff60b0b38>

On 10 January 2016 at 10:59, bricklen  wrote:

> On Sat, Jan 9, 2016 at 1:54 PM, drum.lu...@gmail.com  > wrote:
>
>> Hi,
>>
>> If you are able to stop Postgres on the slave you are taking the base
>>> backup from, you could do this:
>>
>>
>> I'm not... the data base is 2 TB.
>> So, a RSYNC would take DAYS.  And I'm not able to stop the SLAVE for
>> that long time
>>
>
> Depending on when you took the base backup and how many changes have
> occurred at your source (slave1) database cluster, the rsync execution time
> may or may not take as long as a new base backup if is only only shipping
> deltas (changed files).
>


Re: [GENERAL] New Slave - timeline ERROR

2016-01-09 Thread drum.lu...@gmail.com
*rsync would be something like:*

from slave1:
rsync -av data/pg_xlog slave2:/var/lib/postgresql/data/

Is that correct?

At this point I think your options are slim. If you are feeling
> adventurous, you can try doing the rsync with the slave running, then do a
> second rsync with the slave stopped or do it from the master after putting
> the master in backup mode (eg. executing "pg_start_backup('slave_backup')")


I didn't understand why I would do the RSYNC twice... sorry

Lucas



Lucas Possamai

kinghost.co.nz
<http://forum.kinghost.co.nz/memberlist.php?mode=viewprofile&u=2&sid=e999f8370385657a65d41d5ff60b0b38>

On 10 January 2016 at 11:19, bricklen  wrote:

> On Sat, Jan 9, 2016 at 2:10 PM, drum.lu...@gmail.com  > wrote:
>
>> I could stop the slave then But I'm afraid getting it back online and
>> get some other errors
>>
>
> At this point I think your options are slim. If you are feeling
> adventurous, you can try doing the rsync with the slave running, then do a
> second rsync with the slave stopped or do it from the master after putting
> the master in backup mode (eg. executing "pg_start_backup('slave_backup')")
>
>


Re: [GENERAL] New Slave - timeline ERROR

2016-01-09 Thread drum.lu...@gmail.com
>
> rsync -azr --progress --partial postgres@$MASTER_IP:/var/lib/postgresql/data
> /var/lib/postgresql/data/ --exclude
> postmaster.pid


Ah ok! So this will do an incremental copy, right? It's not supposed to copy ALL of
base/ again?





Lucas Possamai

kinghost.co.nz
<http://forum.kinghost.co.nz/memberlist.php?mode=viewprofile&u=2&sid=e999f8370385657a65d41d5ff60b0b38>

On 10 January 2016 at 11:31, bricklen  wrote:

>
>
> On Sat, Jan 9, 2016 at 2:22 PM, drum.lu...@gmail.com  > wrote:
>
>> *rsync would be something like:*
>>
>> from slave1:
>> rsync -av data/pg_xlog slave2:/var/lib/postgresql/data/
>>
>
> Normally I run something like the following from the slave I am setting up.
>
> rsync -azr --progress --partial postgres@$MASTER_IP:/var/lib/postgresql/data
> /var/lib/postgresql/data/ --exclude
> postmaster.pid
>
>
>> I didn't unterstand why doing RSYNC twice... sorry
>>
>
> Unless the source db cluster you are rsync'ing from is stopped, there will
> be changes to data files replicated from the master. The second rsync might
> not be necessary given the WALs are shipping from the master to slave2.
>
>


Re: [GENERAL] New Slave - timeline ERROR

2016-01-09 Thread drum.lu...@gmail.com
Should I point the new slave's replication to the same DB?

Lucas

On Sunday, 10 January 2016, bricklen  wrote:

> On Sat, Jan 9, 2016 at 2:35 PM, drum.lu...@gmail.com
>  <
> drum.lu...@gmail.com
> > wrote:
>
>> rsync -azr --progress --partial postgres@$MASTER_IP:/var/lib/postgresql/data
>>> /var/lib/postgresql/data/ --exclude
>>> postmaster.pid
>>
>>
>> Ah ok! So this will do an incrementa, right? not supposed to copy ALL the
>> base/ again?
>>
>
> Yes, this is for incremental copying from the upstream source.
> Actually, you don't need the -r with -a (it is implied), and you can run
> it first with --dry-run to see what it _would_ do.
> If you are not shipping over the WAN, then omit the -z flag as you do not
> need compression.
>


-- 


Lucas Possamai

kinghost.co.nz
<http://forum.kinghost.co.nz/memberlist.php?mode=viewprofile&u=2&sid=e999f8370385657a65d41d5ff60b0b38>


Re: [GENERAL] New Slave - timeline ERROR

2016-01-09 Thread drum.lu...@gmail.com
What is the --pgdata=- in your original command? Are you perhaps in the
> wrong directory and not getting all the required files?


I run the pg_basebackup from the Slave on /var/lib/pgsql/9.2/data.
So I'm not in the wrong directory...

I'm out of fresh ideas. The rsync command is what I would go with, given
> that I can't think of any other commands to try.


I chose the pg_basebackup command precisely so I wouldn't have to stop any database.
Stopping even the slave is out of the question... sorry...

I really don't know what else to do. Have tried everything!

Lucas

On 10 January 2016 at 13:31, bricklen  wrote:

> Bottom-posting is the convention in the postgresql lists, and makes it
> easier to follow a long thread.
>
> On Sat, Jan 9, 2016 at 3:16 PM, drum.lu...@gmail.com  > wrote:
>
>> My servers are not in the same network. A new pg_basebackup would take 30
>> hours to complete as I use --rate-limit 100MB.
>
>
> If you had enough bandwidth, you could do some shell magic to parallelize
> the rsync commands, or use something like
> http://moo.nac.uci.edu/~hjm/parsync/ to do that. If you are limited by
> bandwidth, then a single rsync run is probably what you're stuck with.
>
>
>> I really need to put this server up! =\
>>
>
> If you were running zfs you could also take a snapshot of the fs and use
> that for your base backup, but I assume you would have mentioned that if it
> was an option.
>
>
>
>> I don't think that running a pg_basebackup one more time will solve the
>> problem, because I've already done that!
>> I could run actually, but the problem is that it takes 30h! hahahahah
>>
>
> What is the --pgdata=- in your original command? Are you perhaps in the
> wrong directory and not getting all the required files?
>
>
> I'm out of fresh ideas. The rsync command is what I would go with, given
> that I can't think of any other commands to try.
>
>
>
>>
>> *Have a look:*
>> http://www.postgresql.org/docs/9.2/static/app-pgbasebackup.html
>>
>> Note that there are some limitations in an online backup from the standby:
>>>
>>
>>
>> The backup history file is not created in the database cluster backed up.
>>> There is no guarantee that all WAL files required for the backup are
>>> archived at the end of backup. If you are planning to use the backup for an
>>> archive recovery and want to ensure that all required files are available
>>> at that moment, you need to include them into the backup by using -x
>>>  option.
>>>
>>
> You had that in your original command I believe.
>


Re: [GENERAL] New Slave - timeline ERROR

2016-01-13 Thread drum.lu...@gmail.com
Hi guys,
I started a new pg_basebackup and it's working now.

The problem was the standby_mode = on line in recovery.conf on the
STANDBY server.

After the previous basebackup I commented that line out and then started PostgreSQL.
But you shouldn't do that.

This last time I didn't comment it out, and it worked.
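
For reference, a sketch of the difference in recovery.conf on the new standby (same settings as in my first message; this is illustrative, not a full file):

# before (wrong): commented out while the server was replaying from wal_archive
#standby_mode = on
#primary_conninfo = 'host=192.168.100.XX port=5432 user=replicator application_name=replication_slave02'

# this time (worked): left in place from the start
standby_mode = on
primary_conninfo = 'host=192.168.100.XX port=5432 user=replicator application_name=replication_slave02'
recovery_target_timeline = 'latest'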

Thank you!



Lucas Possamai

kinghost.co.nz
<http://forum.kinghost.co.nz/memberlist.php?mode=viewprofile&u=2&sid=e999f8370385657a65d41d5ff60b0b38>

On 10 January 2016 at 19:22, drum.lu...@gmail.com 
wrote:

> What is the --pgdata=- in your original command? Are you perhaps in the
>> wrong directory and not getting all the required files?
>
>
> I run the pg_basebackup from the Slave on /var/lib/pgsql/9.2/data.
> So I'm not in the wrong directory...
>
> I'm out of fresh ideas. The rsync command is what I would go with, given
>> that I can't think of any other commands to try.
>
>
> I chose the pg_basebackup command just to not stop any database. It's out
> of circumstances to stop even the slave one... sorry...
>
> I really don't know what else to do. Have tried everything!
>
> Lucas
>
> On 10 January 2016 at 13:31, bricklen  wrote:
>
>> Bottom-posting is the convention in the postgresql lists, and makes it
>> easier to follow a long thread.
>>
>> On Sat, Jan 9, 2016 at 3:16 PM, drum.lu...@gmail.com <
>> drum.lu...@gmail.com> wrote:
>>
>>> My servers are not in the same network. A new pg_backup would take 30
>>> hours to complete as I use --rate-limit 100MB.
>>
>>
>> If you had enough bandwidth, you could do some shell magic to parallelize
>> the rsync commands, or use something like
>> http://moo.nac.uci.edu/~hjm/parsync/ to do that. If you are limited by
>> bandwidth, then a single rsync run is probably what you're stuck with.
>>
>>
>>> I really need to put his server up! =\
>>>
>>
>> If you were running zfs you could also take a snapshot of the fs and use
>> that for your base backup, but I assume you would have mentioned that if it
>> was an option.
>>
>>
>>
>>> I don't think that running a pg_basebackup one more time will solve the
>>> problem, because I've already done that!
>>> I could run actually, but the problem is that it takes 30h! hahahahah
>>>
>>
>> What is the --pgdata=- in your original command? Are you perhaps in the
>> wrong directory and not getting all the required files?
>>
>>
>> I'm out of fresh ideas. The rsync command is what I would go with, given
>> that I can't think of any other commands to try.
>>
>>
>>
>>>
>>> *Have a look:*
>>> http://www.postgresql.org/docs/9.2/static/app-pgbasebackup.html
>>>
>>> Note that there are some limitations in an online backup from the
>>>> standby:
>>>>
>>>
>>>
>>> The backup history file is not created in the database cluster backed up.
>>>> There is no guarantee that all WAL files required for the backup are
>>>> archived at the end of backup. If you are planning to use the backup for an
>>>> archive recovery and want to ensure that all required files are available
>>>> at that moment, you need to include them into the backup by using -x
>>>>  option.
>>>>
>>>
>> You had that in your original command I believe.
>>
>
>


[GENERAL] “Loop” in plpgsql Function - PostgreSQL 9.2

2016-01-18 Thread drum.lu...@gmail.com
I've created a function that allows me to do a huge update.

But I need to limit this function. I need to do 50k rows (for example) and then
stop it. After that I need to continue from the row where I stopped...
I'll have to modify the calling code, *select batch_number()*, as well.

How can I do that? Using FOR?

The function below is already working, but the table has 40m rows and
it's taking forever.

FUNCTION:

CREATE or REPLACE FUNCTION batch_number()
RETURNS INTEGER AS $$
declare
   batch_num  integer;
   offset_num integer;
begin
offset_num := 0;
batch_num  := 1;

-- keep looping while at least one row still has no batch number
while (select true from gorfs.nfs_data where batch_number is null
limit 1) loop
    -- take the next window of 1000 rows (ordered by file_id) ...
    with ids(id) as
    (
        select
            file_id
        from
            gorfs.nfs_data
        order by
            file_id
        offset offset_num
        limit 1000
    )
    -- ... and stamp them with the current batch number
    update
        gorfs.nfs_data
    set
        batch_number = batch_num
    from ids
    where file_id = ids.id;

    offset_num := offset_num + 1000;
    batch_num  := batch_num + 1;
end loop;

return batch_num;
end
$$ language plpgsql;


[GENERAL] Re: [GENERAL] “Loop” in plpgsql Function - PostgreSQL 9.2

2016-01-18 Thread drum.lu...@gmail.com
>
> It is customary to restrict emails to a single list or at least make
> others aware when you do have a legitimate need to cross-post.
>
> Specifically your post on -admin <“Break” in plpgsql Function - PostgreSQL
> 9.2>
>
> Yes, it didn't belong on -admin in the first place but posting it here
> with a different title and not indicating on either thread that the other
> exists and/or is the valid one just makes it harder for others to follow
> along.
>
>
Sorry about that. It wasn't on purpose... It won't happen again.


>
> As for your general question I try to approach this problem in the
> following manner:
>
> SELECT however many of something that you need (FOR UPDATE)
> UPDATE those selected to indicate that they have been seen
> PROCESS them as needed
> ​repeat step 1 until it returns no records
>
> ​It doesn't always work - and given a sufficiently large number of records
> it may be unadvisable - but it is set-oriented which is generally a plus in
> SQL.
>
> The other way to assign batches is to use the integer modulus operator
> (e.g., 10 % 3 = 1 : read 3 *remainder of 1*) or integer division (10 / 3
> = 3) to derive the batch number based upon an attribute of the data itself
> as opposed to its order of appearance in a result set.
>
> David J.
>
>
> ​
>
>
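
(A minimal sketch of the second idea above, deriving the batch from the data itself with integer division rather than from result-set order; it assumes file_id is an integer and a batch size of 50,000:)

-- every 50,000 consecutive file_ids share one batch number; no loop needed
-- (this stamps the whole table in one statement; the later per-batch work
-- can then be done batch by batch)
UPDATE gorfs.nfs_data
SET batch_number = (file_id / 50000) + 1
WHERE batch_number IS NULL;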


[GENERAL] Re: [GENERAL] “Loop” in plpgsql Function - PostgreSQL 9.2

2016-01-18 Thread drum.lu...@gmail.com
On 19 January 2016 at 11:44, dinesh kumar  wrote:

> Hello,
>
> On Mon, Jan 18, 2016 at 1:37 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> I've created a function that allows me to do an huge update.
>>
>> But I need to limit this function. I need to do 50k rows (example) and
>> then stop it. After that I need to continue from the rows that I've
>> stopped... I'll have to modify the call function *select batch_number()* as
>> well.
>>
>> How can I do that? Using for?
>>
>> The function below is already working, but, the table has 40m rows. And
>> it's taking years.
>>
>>
> Do you need to run the function on any Staging(Not Production).  I mean,
> do you want to run this batch processes on a single transaction.
>

I don't want to run this batch in a single transaction. I'm already doing
that and, as it has 40 million rows, it's taking years.

So, the new plan is:

1 - Select 50,000 rows and give them a batch number.
2 - Select 50,000 *MORE* rows and give them a *NEW* batch number.
3 - Select 50,000 *MORE* rows and give them a *NEW* batch number.
4 - etc etc etc (a sketch of one pass is below)
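
(A sketch of one pass of that plan: tag the next 50,000 rows that don't have a batch number yet, then re-run it with the next batch number. The literal 1 below is the batch number supplied by the caller.)

WITH next_chunk AS (
    SELECT file_id
    FROM gorfs.nfs_data
    WHERE batch_number IS NULL
    ORDER BY file_id
    LIMIT 50000
)
UPDATE gorfs.nfs_data AS d
SET batch_number = 1   -- use 2, 3, ... on the following runs
FROM next_chunk
WHERE d.file_id = next_chunk.file_id;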


>
> If not, I had the similar problem, where I needed to implement a function,
> which we can run in multiple sessions. I ran this function in one of the BI
> servers, where we have around 5 Million records.
>
> Find this
> <http://manojadinesh.blogspot.com/2015/07/parallel-operations-with-plpgsql_9.html>link
> about the implementation details.
>
> If your question was about "Using Loops", then please ignore my comments.
>


[GENERAL] ERROR: check constraint - PostgreSQL 9.2

2016-01-24 Thread drum.lu...@gmail.com
Hi all, need some help to add a constraint to an existing table (with data).


I'm running the command:

*Query:*

ALTER TABLE integrations.accounts
    DROP CONSTRAINT IF EXISTS cc_at_least_one_setting_needed,
    ADD CONSTRAINT cc_at_least_one_setting_needed CHECK (
        ("qb_settings" IS NOT NULL) or
        ("xero_settings" IS NOT NULL) or
        ("freshbooks_settings" IS NOT NULL) or
        ("myob_settings" IS NOT NULL) or
        ("ppy_settings" IS NOT NULL)
    );

But, I got the following error:

ERROR:  check constraint "cc_at_least_one_setting_needed" is violated by
> some row


So, I made a SELECT to get some data, to see where the issue is:

SELECT * FROM integrations.accounts WHERE qb_settings IS NULL OR
xero_settings IS NULL OR freshbooks_settings IS NULL OR myob_settings
IS NULL OR ppy_settings IS NULL;


And I got 59 rows. So, it seems that's my problem - I have NULLS that
violate the CHECK CONSTRAINT.
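
(For comparison, a sketch of a SELECT that matches the constraint's failure condition exactly, i.e. the rows where *all* of the settings columns are NULL:)

SELECT * FROM integrations.accounts
WHERE qb_settings IS NULL
  AND xero_settings IS NULL
  AND freshbooks_settings IS NULL
  AND myob_settings IS NULL
  AND ppy_settings IS NULL;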

*Question:*
How can I solve the problem? How can I get the command to complete successfully?

Cheers;
Lucas Possamai

- kinghost.co.nz

- DigitalOcean 


[GENERAL] pg_dump streaming fails - PostgreSQL 9.2

2016-01-28 Thread drum.lu...@gmail.com
Hi there!

I'm running this command: *(Trying to copy a full DB (2TB) from a
hot-standby server to a master (test) server)*

ssh postgres@myslaveserver "/usr/pgsql-9.2/bin/pg_dump
--exclude-table-data='junk.*' --format=custom master_db" |
/usr/pgsql-9.2/bin/pg_restore --dbname=master_db_temp --exit-on-error
--verbose

Then, after 3 GB I got this error:

pg_dump: Dumping the contents of table "invoices" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR:  canceling statement due to
conflict with recovery
DETAIL:  User was holding a relation lock for too long.
pg_dump: The command was: COPY dm.invoices (invoice_id,
format_version, ts_issue, ts_ack, customer_id, code, tag,
account_data, customer_data, invoice_data, invoice_items_data) TO
stdout;

I've tried the pg_dump command even with: "--no-unlogged-table-data"
option, but it was unsuccessful (
http://www.postgresql.org/message-id/e1tutdo-0001hb...@wrigleys.postgresql.org
)

DB size: 2 TB

How can I solve the problem? What's going on? Thanks


[GENERAL] pg_dump - ERROR - PostgreSQL 9.2

2016-02-01 Thread drum.lu...@gmail.com
Hi all,

*masterdb01 -> slave01 -> slave02 -> slave03*

*testmaster01 (Full copy from masterdb01)*

I'm trying to refresh my main DB, by running this command on my test server:

ssh postgres@slave02 "/usr/pgsql-9.2/bin/pg_dump
> --exclude-table-data='junk.*' --format=custom master_db_name" |
> /usr/pgsql-9.2/bin/pg_restore --dbname=master_db_name_temp --exit-on-error


But, I got the following error after few seconds:

pg_dump: Dumping the contents of table "invoices" failed: PQgetResult()
> failed.
> pg_dump: Error message from server: ERROR:  canceling statement due to
> conflict with recovery
> DETAIL:  User was holding a relation lock for too long.
> pg_dump: The command was: COPY dm.invoices (invoice_id, format_version,
> ts_issue, ts_ack, customer_id, code, tag, account_data, customer_data,
> invoice_data, invoice_items_data) TO stdout;


If I run the pg_dump from the MASTER DB, it runs ok. But as the DB is 2 TB,
I just can't let it copy from Production. It has to be from a slave
server (I've also tried to copy from another slave. I get the same error).

What am I doing wrong? How can I solve the problem?


* I've sent the same e-mail to pgsql-admin

Thanks;
Lucas


Re: [GENERAL] pg_dump - ERROR - PostgreSQL 9.2

2016-02-01 Thread drum.lu...@gmail.com
[SOLVED]

Just to let you guys know: I've solved this issue by changing
max_standby_streaming_delay on my slave to "-1".
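
For reference, a minimal sketch of the change in postgresql.conf on the slave (a reload is enough for this setting to take effect):

# never cancel standby queries because of conflicts with WAL replay
max_standby_streaming_delay = -1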

Cheers



Lucas Possamai

- kinghost.co.nz
<http://forum.kinghost.co.nz/memberlist.php?mode=viewprofile&u=2&sid=e999f8370385657a65d41d5ff60b0b38>
- DigitalOcean <https://m.do.co/c/a0d31c597a03>

On 2 February 2016 at 13:21, drum.lu...@gmail.com 
wrote:

> Hi all,
>
> *masterdb01 -> slave01 -> slave02 -> slave03*
>
> *testmaster01 (Full copy from masterdb01)*
>
> I'm trying to refresh my main DB, by running this command on my test
> server:
>
> ssh postgres@slave02 "/usr/pgsql-9.2/bin/pg_dump
>> --exclude-table-data='junk.*' --format=custom master_db_name" |
>> /usr/pgsql-9.2/bin/pg_restore --dbname=master_db_name_temp --exit-on-error
>
>
> But, I got the following error after few seconds:
>
> pg_dump: Dumping the contents of table "invoices" failed: PQgetResult()
>> failed.
>> pg_dump: Error message from server: ERROR:  canceling statement due to
>> conflict with recovery
>> DETAIL:  User was holding a relation lock for too long.
>> pg_dump: The command was: COPY dm.invoices (invoice_id, format_version,
>> ts_issue, ts_ack, customer_id, code, tag, account_data, customer_data,
>> invoice_data, invoice_items_data) TO stdout;
>
>
> If I run the pg_dump from the MASTER DB, it runs ok. But as the DB is 2
> TB, I just can't let it coping from Production. It has to be from a slave
> server (I've also tried to copy from another slave. I get the same error).
>
> What am I doing wrong? How can I solve the problem?
>
>
> * I've sent the same e-mail to pgsql-admin
>
> Thanks;
> Lucas
>


[GENERAL] Fatal error when not numeric value - PostgreSQL 9.2

2016-02-03 Thread drum.lu...@gmail.com
Hi all,

Below is an example of the auto-generated update query, with
client-supplied keys (_iid). There's a fatal error when _iid is not
numeric. However, it should accept any value.

*Question:* How could I do something that would allow _iid to be more than
just an INT?

WITH
in_rows AS (
  SELECT
CAST(customer_id AS BIGINT),
csv_data,
freshbooks_id,
myob_id,
ppy_id,
qb_id,
xero_id,
_iid
  FROM
(
  VALUES('3905', E'\x1A', E'\x1A', 'c59894c-142b6', E'\x1A',
E'\x1A', E'\x1A', '44'),('39107', E'\x1A', E'\x1A', '6260-2ba1',
E'\x1A', E'\x1A', E'\x1A', '65e-0f0d-49b4-9ac1-a8752ba1'),

Thank you
Lucas


Re: [GENERAL] Fatal error when not numeric value - PostgreSQL 9.2

2016-02-03 Thread drum.lu...@gmail.com
On 4 February 2016 at 12:03, David G. Johnston 
wrote:

> On Wed, Feb 3, 2016 at 3:48 PM, drum.lu...@gmail.com  > wrote:
>
>> Hi all,
>>
>> Below is an example of the auto-generated update query, with
>> client-supplied keys (_iid). There's a fatal error when _iid is not
>> numeric. However; this should accept any value.
>>
>> *Question:* How could I do something that would allow _iid to be more
>> than just an INT?
>>
>> WITH
>> in_rows AS (
>>   SELECT
>> CAST(customer_id AS BIGINT),
>> csv_data,
>> freshbooks_id,
>> myob_id,
>> ppy_id,
>> qb_id,
>> xero_id,
>> _iid
>>   FROM
>> (
>>   VALUES('3905', E'\x1A', E'\x1A', 'c59894c-142b6', E'\x1A', 
>> E'\x1A', E'\x1A', '44'),('39107', E'\x1A', E'\x1A', '6260-2ba1', E'\x1A', 
>> E'\x1A', E'\x1A', '65e-0f0d-49b4-9ac1-a8752ba1'),
>>
>> Thank you
>> Lucas
>>
>
> ​You have a fatal error because the query you provided is malformed.  Send
> something that works, and provokes the relevant error, and we might be able
> to help.
>
> David J.
> ​
>
>
QUERY:

WITH
in_rows AS (
  SELECT
CAST(customer_id AS BIGINT),
csv_data,
freshbooks_id,
myob_id,
ppy_id,
qb_id,
xero_id,
_iid
  FROM
(
  VALUES
('3915105', E'\x1A', E'\x1A',
'c59894cb-0ffe-4ad6-823d-73c1392142b6', E'\x1A', E'\x1A', E'\x1A',
'44'),
('3915135', E'\x1A', E'\x1A',
'fe88ff8f-6b4d-4e3d-8020-3475a101d25e', E'\x1A', E'\x1A', E'\x1A',
'fe88ff8f-6b4d-4e3d-8020-3475a101d25e'),
) AS id(customer_id, csv_data, freshbooks_id, myob_id, ppy_id,
qb_id, xero_id, _iid)
  ),
id_overlays AS (
  SELECT
ir.customer_id,
(tt.customer_id IS NOT NULL) AS tt_matched,
(CASE WHEN (ir.csv_data :: TEXT = E'\x1A')
  THEN tt.csv_data :: TEXT
 ELSE NULLIF(ir.csv_data :: TEXT, E'\x18') END)  AS csv_data,
(CASE WHEN (ir.freshbooks_id :: TEXT = E'\x1A')
  THEN tt.freshbooks_id :: TEXT
 ELSE NULLIF(ir.freshbooks_id :: TEXT, E'\x18') END) AS freshbooks_id,
(CASE WHEN (ir.myob_id :: TEXT = E'\x1A')
  THEN tt.myob_id :: TEXT
 ELSE NULLIF(ir.myob_id :: TEXT, E'\x18') END)   AS myob_id,
(CASE WHEN (ir.ppy_id :: TEXT = E'\x1A')
  THEN tt.ppy_id :: TEXT
 ELSE NULLIF(ir.ppy_id :: TEXT, E'\x18') END)AS ppy_id,
(CASE WHEN (ir.qb_id :: TEXT = E'\x1A')
  THEN tt.qb_id :: TEXT
 ELSE NULLIF(ir.qb_id :: TEXT, E'\x18') END) AS qb_id,
(CASE WHEN (ir.xero_id :: TEXT = E'\x1A')
  THEN tt.xero_id :: TEXT
 ELSE NULLIF(ir.xero_id :: TEXT, E'\x18') END)   AS xero_id,
ir._iid :: TEXT  AS _iid
  FROM
in_rows AS ir LEFT JOIN integrations.customers AS tt USING (customer_id)
  )SELECT
  io.customer_id,
  io._iid,
  io.tt_matched,
  ((io.csv_data IS NOT NULL) OR (io.freshbooks_id IS NOT NULL) OR
(io.myob_id IS NOT NULL) OR (io.ppy_id IS NOT NULL) OR
   (io.qb_id IS NOT NULL) OR (io.xero_id IS NOT NULL)) AS tt_staysFROM
  id_overlays AS io;

ERROR:

ERROR:  invalid input syntax for integer:
"fe88ff8f-6b4d-4e3d-8020-3475a101d25e" at character 419


Sorry about the missing data.
If you need something else let me know.


Re: [GENERAL] Fatal error when not numeric value - PostgreSQL 9.2

2016-02-04 Thread drum.lu...@gmail.com
>
>
>> On 4 February 2016 at 12:03, David G. Johnston
>> mailto:david.g.johns...@gmail.com>> wrote:
>>
>> On Wed, Feb 3, 2016 at 3:48 PM, drum.lu...@gmail.com
>> <mailto:drum.lu...@gmail.com> > <mailto:drum.lu...@gmail.com>>wrote:
>>
>> Hi all,
>>
>> Below is an example of the auto-generated update query, with
>> client-supplied keys (_iid). There's a fatal error when _iid is
>> not numeric. However; this should accept any value.
>>
>> *Question:* How could I do something that would allow _iid to be
>> more than just an INT?
>>
>> [query snippet quoted from the original message, trimmed]
>>
>> Thank you
>> Lucas
>>
>>
>> ​You have a fatal error because the query you provided is
>> malformed.  Send something that works, and provokes the relevant
>> error, and we might be able to help.
>>
>> David J.
>> ​
>>
>>
>> QUERY:
>>
>> [the full query from the previous message, quoted verbatim and trimmed here]
>>
>
> Well, above you are turning _iid into ::TEXT which is fine, though (as David
> pointed out) the receiving field should be that type also. So the problem
> is not your casting of _iid, it is where you are sending that value.
>
> LEFT JOIN integrations.customers AS tt
>>
>> USING (customer_id)) SELECT io.customer_id, io._iid, io.tt_matched, ((io.csv_data
>> IS NOT NULL) OR (io.freshbooks_id IS NOT NULL) OR (io.myob_id
>> IS NOT NULL) OR (io.ppy_id IS NOT NULL) OR (io.qb_id IS NOT NULL) OR (io.xero_id
>> IS NOT NULL)) AS tt_stays FROM id_overlays AS io;
>>
>> ERROR:
>>
>> ERROR:  invalid input syntax for integer:
>> "fe88ff8f-6b4d-4e3d-8020-3475a101d25e" at character 419
>> 
>>
>

I believe that's what's culling the record out from being inserted/updated;
it's the pre-pass before safedatainjector. There is no update per se. It
selects records that it thinks it needs to either insert/update/delete,
defined by tt_matched, with some PHP logic that then splits the collections.


[GENERAL] Input data for column is not compatible with data type

2016-02-10 Thread drum.lu...@gmail.com
Hi all

I'm having this error message:

Input data for column `%1$s` is not compatible with data type


But I was unable to find the table that is having this issue.

Would it be possible for someone to help me with that?
Thanks

Complete error link:
https://bitbucket.org/snippets/lucaspossamai/7r9yX

Query link:
https://bitbucket.org/snippets/lucaspossamai/rbpk4/edit/


Re: [GENERAL] Input data for column is not compatible with data type

2016-02-10 Thread drum.lu...@gmail.com
Actually, I do know which table it is...

It seems to be ja_notes.


But I can't see any difference.



Lucas Possamai

- kinghost.co.nz
<http://forum.kinghost.co.nz/memberlist.php?mode=viewprofile&u=2&sid=e999f8370385657a65d41d5ff60b0b38>
- DigitalOcean <https://m.do.co/c/a0d31c597a03>

On 11 February 2016 at 09:11, David G. Johnston 
wrote:

> On Wed, Feb 10, 2016 at 12:56 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> Hi all
>>
>> I'm having this error message:
>>
>> Input data for column `%1$s` is not compatible with data type
>>
>>
>> But I was unable to find the table which is having this issue
>>
>> Would be possible someone help me with that?
>> Thanks
>>
>> Complete error link:
>> https://bitbucket.org/snippets/lucaspossamai/7r9yX
>>
>> Query link:
>> https://bitbucket.org/snippets/lucaspossamai/rbpk4/edit/
>>
>
> ​For those who come later please, if at all possible, write self-contained
> emails.
>
> David J.​
>
>
>


Re: [GENERAL] Input data for column is not compatible with data type

2016-02-10 Thread drum.lu...@gmail.com
Yep...

Thank you for your reply.

The problem is a code problem. Not in the DB.

Thank you.



Lucas Possamai

- kinghost.co.nz
<http://forum.kinghost.co.nz/memberlist.php?mode=viewprofile&u=2&sid=e999f8370385657a65d41d5ff60b0b38>
- DigitalOcean <https://m.do.co/c/a0d31c597a03>

On 11 February 2016 at 10:27, Adrian Klaver 
wrote:

> On 02/10/2016 12:11 PM, David G. Johnston wrote:
>
>> On Wed, Feb 10, 2016 at 12:56 PM, drum.lu...@gmail.com
>> <mailto:drum.lu...@gmail.com> > <mailto:drum.lu...@gmail.com>>wrote:
>>
>> Hi all
>>
>> I'm having this error message:
>>
>> Input data for column `%1$s` is not compatible with data type
>>
>>
>> But I was unable to find the table which is having this issue
>>
>> Would be possible someone help me with that?
>> Thanks
>>
>> Complete error link:
>> https://bitbucket.org/snippets/lucaspossamai/7r9yX
>>
>> Query link:
>> https://bitbucket.org/snippets/lucaspossamai/rbpk4/edit/
>>
>>
>> ​For those who come later please, if at all possible, write
>> self-contained emails.
>>
>
> In that vein the second link is non-functional:(
>
>
>> David J.​
>>
>>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


[GENERAL] Optimize Query

2016-02-10 Thread drum.lu...@gmail.com
Hi all,

I've got a slow query and I'm trying to make it faster.

*New Query:*

SELECT concat(client.company, ' ', client.name_first, ' ',
> client.name_last) AS customer,
>sum(COALESCE(bill_item.unit_price, billable.unit_price, 0) *
> bill_item.quantity) AS revenue,
>sum(bill_item.quantity) AS quantity,
>sum(COALESCE(bill_item.unit_cost, billable.unit_cost, 0) *
> bill_item.quantity) AS cost
>FROM ja_clients AS account
>JOIN ja_customers AS client ON client.clientid = account.id
>JOIN ja_jobs AS job ON client.id=job.customerid
>JOIN ja_notes AS note ON note.jobid = job.id
>JOIN dm.bill_items AS bill_item ON
> bill_item.bill_item_id=note.bill_item_id
>LEFT JOIN dm.bills AS bill ON bill.bill_id=bill_item.bill_id
>LEFT JOIN dm.invoices AS invoice ON invoice.invoice_id=bill.invoice_id
>  OR invoice.invoice_id=bill_item.invoice_id
>LEFT JOIN dm.billables AS billable ON
> billable.billable_id=note.billable_id
>LEFT JOIN dm.labors AS labs ON labs.billable_id = billable.billable_id
>JOIN ja_mobiusers AS "user" ON "user".id = note.mobiuserid
>JOIN ja_status AS status ON status.id = job.status_label_id
>JOIN ja_role AS ROLE ON ROLE.id="user".user_type
>WHERE note.note_type::text = ANY (ARRAY[ ('time'::CHARACTER
> VARYING)::text,
> ('part'::CHARACTER
> VARYING)::text ])
>  AND NOT job.templated
>  AND NOT job.deleted
>  AND job.clientid = 6239
>  AND job.time_job >= 1438351200
>  AND job.time_job <= 144340
>  AND bill_item.for_invoicing = TRUE
> GROUP BY customer
> ORDER BY revenue DESC;


*The original query has:*


SELECT $cols $ec , sum(revenue) as revenue, $cost_cols
> FROM (".note_detail_report_view(). ") AS i
> LEFT JOIN (
> SELECT $join_col , SUM(cost) AS cost, SUM(quantity) AS quantity
> FROM (".note_detail_report_view(). ") AS note_detail_report_view
> $whereClause AND *n_quote_status = 0*
> GROUP BY $join_col
> ) AS a
> ON $joiner
> $whereClause AND invoice = true $limit_inv
> GROUP BY $group_by $ec, a.cost , a.quantity
> ORDER BY $order_by


I just need the "a" case. "i" and "a" look very similar, except "a" has an
additional filter: *n_quote_status = 0*.


How can I rewrite the query using just the "a" case?

Thanks


Re: [GENERAL] Optimize Query

2016-02-10 Thread drum.lu...@gmail.com
>
>
> FYI, it is always helpful (and polite) to state the PostgreSQL VERSION and
> O/S for archive documentation purposes!
>

That's right. My mistake... I'm using PostgreSQL 9.2.


>
> Note that various postgresql.conf options, system memory & hardware also
> play a factor here, in addition to current table statistics.
>
> That being said, try the following:
>
> WITH jobs AS
> (
>  SELECT id,
> customerid,
> status_label_id
>   FROM ja_jobs
>  WHERE NOT templated
>AND NOT deleted
>AND clientid = 6239
>AND time_job >= 1438351200
>AND time_job <= 144340
> )
> SELECT concat(client.company,
>' ',
>client.name_first,
>' ', client.name_last) AS customer,
>sum(COALESCE(bill_item.unit_price, billable.unit_price, 0) *
> bill_item.quantity) AS revenue,
>sum(bill_item.quantity) AS quantity,
>sum(COALESCE(bill_item.unit_cost, billable.unit_cost, 0) *
> bill_item.quantity) AS cost
>   FROM ja_clientsAS account
>   JOIN ja_customers  AS client ON client.clientid = account.id
>   JOIN jobs  AS jobON job.customerid = client.id
>   JOIN ja_notes  AS note   ON note.jobid = job.id
>   JOIN dm.bill_items AS bill_item ON
> bill_item.bill_item_id=note.bill_item_id
>   LEFT JOIN dm.bills AS bill ON bill.bill_id=bill_item.bill_id
>   LEFT JOIN dm.invoices AS invoice ON invoice.invoice_id=bill.invoice_id
>OR
> invoice.invoice_id=bill_item.invoice_id
>   LEFT JOIN dm.billables AS billable ON
> billable.billable_id=note.billable_id
>   LEFT JOIN dm.labors AS labs ON labs.billable_id = billable.billable_id
>   JOIN ja_mobiusers AS "user" ON "user".id = note.mobiuserid
>   JOIN ja_status AS status ON status.id = job.status_label_id
>   JOIN ja_role AS ROLE ON ROLE.id="user".user_type
>  WHERE note.note_type::text = ANY (ARRAY[ ('time'::CHARACTER
> VARYING)::text, ('part'::CHARACTER VARYING)::text ])
>AND bill_item.for_invoicing = TRUE
>  GROUP BY customer
>  ORDER BY revenue DESC;
>

Thank you Melvin.
Sorry, but I was unable to see the *n_quote_status = 0* filter in your version.

Did you use it?


Re: [GENERAL] Optimize Query

2016-02-13 Thread drum.lu...@gmail.com
Can anyone help with this, please?

Cheers

On Thursday, 11 February 2016, drum.lu...@gmail.com 
wrote:

> oh ok!
>
> thanks
>
>
>
> Lucas Possamai
>
> - kinghost.co.nz
> <http://forum.kinghost.co.nz/memberlist.php?mode=viewprofile&u=2&sid=e999f8370385657a65d41d5ff60b0b38>
> - DigitalOcean <https://m.do.co/c/a0d31c597a03>
>
> On 11 February 2016 at 15:41, Melvin Davidson  > wrote:
>
>>
>>
>>
>>> Thank you Melvin.
>>> Sorry but I was unable to see the *n_quote_status = 0*
>>>
>>> Did you use it?
>>>
>>>
>> No, I just revised your "new" query to be more efficient.
>>
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>

-- 


Lucas Possamai

- kinghost.co.nz
<http://forum.kinghost.co.nz/memberlist.php?mode=viewprofile&u=2&sid=e999f8370385657a65d41d5ff60b0b38>
- DigitalOcean <https://m.do.co/c/a0d31c597a03>


Re: [GENERAL] Optimize Query

2016-02-14 Thread drum.lu...@gmail.com
Hi Alban! Sorry.. that was my mistake


Original Query:

SELECT concat(company, ' ', customer_name_first, ' ',
customer_name_last) AS customer,
   sum(revenue) AS revenue,
   sum(i.quantity) AS quantity,
   sum(i.cost) AS costFROM
  ( SELECT account.id,
   job.customerid,
   job.title,
   job.gps_lat,
   job.gps_long,
   status.label AS status,
   status.status_type_id,
   job.status_label_id,
   client."position",
   bill_item.quantity,
   client.businesstype,
   account.id AS clientid,
   client.name_first AS customer_name_first,
   client.name_last AS customer_name_last,
   job.id AS jobid,
   note.mobiuserid,
   bill_item.for_invoicing AS invoice,
   COALESCE(bill_item.unit_price, billable.unit_price, 0) AS unit_price,
   note.n_quote_status,
   COALESCE(bill_item.unit_cost, billable.unit_cost, 0) AS unit_cost,
   job.time_job,
   "user".name_first,
   "user".name_last,
   role.id AS roleid,
   role.name AS role_name,
   billable.billable_id AS taskid,
   COALESCE(labs.tag, billable.code) AS task_name,
   note.time_start,
   client.company,
   job.refnum,
   (COALESCE(bill_item.unit_cost, billable.unit_cost, 0) *
bill_item.quantity) AS cost,
   (COALESCE(bill_item.unit_price, billable.unit_price, 0) *
bill_item.quantity) AS revenue,
   bill_item.for_invoicing AS invoiceable,
   COALESCE(extract('epoch'
FROM bill.ts_creation AT TIME ZONE 'UTC'),
bill_item.invoice_id, NULL) IS NOT NULL AS invoiced
   FROM ja_clients AS account
   JOIN ja_customers AS client ON client.clientid = account.id
   JOIN ja_jobs AS job ON client.id=job.customerid
   JOIN ja_notes AS note ON note.jobid = job.id
   JOIN dm.bill_items AS bill_item ON bill_item.bill_item_id=note.bill_item_id
   LEFT JOIN dm.bills AS bill ON bill.bill_id=bill_item.bill_id
   LEFT JOIN dm.invoices AS invoice ON invoice.invoice_id=bill.invoice_id
   OR invoice.invoice_id=bill_item.invoice_id
   LEFT JOIN dm.billables AS billable ON billable.billable_id=note.billable_id
   LEFT JOIN dm.labors AS labs ON labs.billable_id = billable.billable_id
   JOIN ja_mobiusers AS "user" ON "user".id = note.mobiuserid
   JOIN ja_status AS status ON status.id = job.status_label_id
   JOIN ja_role AS ROLE ON ROLE.id="user".user_type
   WHERE note.note_type::text = ANY (ARRAY[ ('time'::CHARACTER VARYING)::text,
('part'::CHARACTER VARYING)::text ])
 AND NOT job.templated
 AND NOT job.deleted
 AND job.clientid = 6239
 AND time_job >= 1438351200
 AND time_job <= 144340
 AND bill_item.for_invoicing = TRUE) AS iLEFT JOIN
  (SELECT customerid,
  SUM(cost) AS cost,
  SUM(quantity) AS quantity
   FROM
 (SELECT account.id,
 job.customerid,
 job.title,
 job.gps_lat,
 job.gps_long,
 status.label AS status,
 status.status_type_id,
 job.status_label_id,
 client."position",
 bill_item.quantity,
 client.businesstype,
 account.id AS clientid,
 client.name_first AS customer_name_first,
 client.name_last AS customer_name_last,
 job.id AS jobid,
 note.mobiuserid,
 bill_item.for_invoicing AS invoice,
 COALESCE(bill_item.unit_price, billable.unit_price, 0) AS
unit_price,
 note.n_quote_status,
 COALESCE(bill_item.unit_cost, billable.unit_cost, 0) AS unit_cost,
 job.time_job,
 "user".name_first,
 "user".name_last,
 ROLE.id AS roleid,
  ROLE.name AS role_name,
   billable.billable_id AS taskid,
   COALESCE(labs.tag, billable.code) AS task_name,
   note.time_start,
   client.company,
   job.refnum,
   (COALESCE(bill_item.unit_cost,
billable.unit_cost, 0) * bill_item.quantity) AS cost,
   (COALESCE(bill_item.unit_price,
billable.unit_price, 0) * bill_item.quantity) AS revenue,
   bill_item.for_invoicing AS invoiceable,
   COALESCE(extract('epoch'
FROM bill.ts_creation AT TIME
ZONE 'UTC'), bill_item.invoice_id, NULL) IS NOT NULL AS invoiced
  FROM ja_clients AS account
  JOIN ja_customers AS client ON client.clientid = account.id
  JOIN ja_jobs AS job ON client.id=job.customerid
  JOIN ja_notes AS note ON note.jobid = job.id
  JOIN dm.bill_items AS bill_item ON
bill_item.bill_item_id=note.bill_item_id
  LEFT JOIN dm.bills AS bill ON bill.bill_id=bill_item.bill_id
  LEFT JOIN dm.i

[GENERAL] pg_restore real file size

2016-02-25 Thread drum.lu...@gmail.com
Hi all,


I'm doing the pg_restore now on a 1.5 TB file:

*# ls -la*

postgres postgres 1575324616939 Feb 20 13:55 devdb_0.sql

But, the restore has gone over 1.6 TB

*# \l+*

1639 GB


How is that even possible?


*pg_restore command:*

/usr/pgsql-9.2/bin/pg_restore -d dbname --exit-on-error --jobs=4 --verbose
devdb_0.sql

Cheers


Re: [GENERAL] pg_restore real file size

2016-02-28 Thread drum.lu...@gmail.com
This has been solved...

The difference between the dump file size and the restored database size must be the indexes...
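
(A quick sanity check, run in the restored database: total on-disk size of the user indexes, which the dump file itself does not contain as data. A sketch:)

SELECT pg_size_pretty(sum(pg_relation_size(indexrelid))::bigint) AS total_index_size
FROM pg_stat_user_indexes;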

All good now.
Thank you

On 26 February 2016 at 17:26, drum.lu...@gmail.com 
wrote:

> Hi all,
>
>
> I'm doing the pg_restore now in a 1.5TB file:
>
> *# ls -la*
>
> postgres postgres 1575324616939 Feb 20 13:55 devdb_0.sql
>
> But, the restore has gone over 1.6 TB
>
> *# \l+*
>
> 1639 GB
>
>
> How is that even possible?
>
>
> *pg_restore command:*
>
> /usr/pgsql-9.2/bin/pg_restore -d dbname --exit-on-error --jobs=4 --verbose
> devdb_0.sql
>
> Cheers
>


[GENERAL] Export binary data - PostgreSQL 9.2

2016-02-28 Thread drum.lu...@gmail.com
Hi all,


Which command would I use to export the binary data of a table?

I was unable to find it...

Thanks


[GENERAL] Function fixing - PostgreSQL 9.2

2016-02-29 Thread drum.lu...@gmail.com
There is a number beside each color flag, and a number in the "sub status"
drop down. Instead of "the number of jobs that have billable items that can
be invoiced", it shows "the number of jobs that have *something*
billable but not yet invoiced".

And since those payments cannot be invoiced alone, they won't show up in
the list.

Let me know if I can provide any further information.

*Question:*

Payments in a Pending state cannot be invoiced and are excluded from the
Invoice Runs section, but they are showing in the count mechanic.

How can I solve this?

Thank you



*Function:*

CREATE OR REPLACE FUNCTION "public"."g_status_types_jobs" ("client_id"
bigint DEFAULT NULL::bigint, "output_order" character varying DEFAULT
'-START_TIME'::character varying, "start_time" timestamp without time
zone DEFAULT NULL::timestamp without time zone, "end_time" timestamp
without time zone DEFAULT NULL::timestamp without time zone,
"statuses" "text" DEFAULT NULL::"text", "status_types" "text" DEFAULT
NULL::"text", "customer_id" bigint DEFAULT NULL::bigint, "user_id"
bigint DEFAULT NULL::bigint, "recurrence_id" bigint DEFAULT
NULL::bigint, "search_str" "text" DEFAULT NULL::"text",
"unscheduled_is_desired" boolean DEFAULT false,
"unassigned_is_desired" boolean DEFAULT false, "templated_status"
boolean DEFAULT false, "by_job_ref" boolean DEFAULT false,
"by_job_description" boolean DEFAULT false, "by_job_address" boolean
DEFAULT false, "by_title" boolean DEFAULT false, "by_status" boolean
DEFAULT false, "by_order_number" boolean DEFAULT false, "by_client"
boolean DEFAULT false, "by_client_notes" boolean DEFAULT false,
"by_billing_client" boolean DEFAULT false, "by_staff" boolean DEFAULT
false, "by_notes_description" boolean DEFAULT false,
"invoiceable_notes_only" boolean DEFAULT false)  RETURNS
TABLE("status_type_id" bigint, "jobs_count" bigint, "job_ids" "text",
"status_type_data" "text")
  STABLEAS $dbvis$SELECT
COALESCE(s.status_type_id, -1) AS status_type_id,
CAST(ROUND(SUM(s.jobs_count)) AS BIGINT) AS jobs_count,
-- we concatenate the lists from all the status labels. some
nullif/substring trickery is required here
CONCAT('{', STRING_AGG(NULLIF(SUBSTRING(s.job_ids FROM 2 FOR
(CHAR_LENGTH(s.job_ids) - 2)), ''), (CASE WHEN (s.job_ids != '{}')
THEN ',' ELSE '' END)), '}') AS job_ids,
(CASE
WHEN (COALESCE(s.status_type_id, -1) != -1) THEN
STRING_AGG(CONCAT(
CAST(s.status_id AS TEXT),
E'\t', REPLACE(REPLACE(s.status_label, E'\t',
''), E'\n', ''),
E'\t', CAST(s.status_is_default AS TEXT),
E'\t', CAST(s.jobs_count AS TEXT),
E'\t', CAST(s.job_ids AS TEXT)
), E'\n')
ELSE
null
END) AS status_type_data
FROM
public.g_statuses_jobs($1, $2, $3, $4, $5, $6, $7, $8, $9,
$10, $11, $12, $13, $14, $15, $16, $17, $18, $19, $20, $21, $22, $23,
$24, $25) AS s
GROUP BY
s.status_type_id
;$dbvis$ LANGUAGE sql


Re: [GENERAL] Function fixing - PostgreSQL 9.2

2016-02-29 Thread drum.lu...@gmail.com
On 1 March 2016 at 11:35, David G. Johnston 
wrote:

> On Mon, Feb 29, 2016 at 2:56 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> *Question:*
>>
>> Payments in a Pending state cannot be invoiced and are excluded from the
>> Invoice Runs section, but they are showing in the count mechanic.
>>
>> How can I solve this?
>>
>
> ​In 9.2 you probably need to convert the count into a conditional sum:
>
> SELECT sum(CASE WHEN  THEN 1 ELSE 0 END) FROM data;
>
> You can probably do the same with count since it excludes nulls.
>
> SELECT count(NULLIF(invoice_status, 'Pending')) FROM invoices;
>
> ​9.4 introduced a FILTER clause for Aggregate Expressions that can do this
> much more cleanly and efficiently.​
>
> ​http://www.postgresql.org/docs/9.4/static/sql-expressions.html​
>
> David J.
>
>
Thank you David...

Can you please show me how it would be with the new changes?

Thanks
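
(For illustration, a minimal sketch of the two approaches David describes, using the same hypothetical invoices/invoice_status names from his example; the first form works on 9.2, the second needs 9.4 or later.)

-- 9.2: conditional aggregate, counting only non-Pending rows
SELECT sum(CASE WHEN invoice_status <> 'Pending' THEN 1 ELSE 0 END) AS invoiceable_count
FROM invoices;

-- 9.4+: the same count expressed with a FILTER clause
SELECT count(*) FILTER (WHERE invoice_status <> 'Pending') AS invoiceable_count
FROM invoices;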


[GENERAL] Schema Size

2016-03-01 Thread drum.lu...@gmail.com
Hi there

I want to see how big a schema is in my PostgreSQL 9.2.

I've got two queries, but they return different values... can you please check them?

cheers;

Query 1:
SELECT schema_name,
pg_size_pretty(sum(table_size)::bigint) as "disk space",
(sum(table_size) / pg_database_size(current_database())) * 100
as "percent"
FROM (
 SELECT pg_catalog.pg_namespace.nspname as schema_name,
 pg_relation_size(pg_catalog.pg_class.oid) as table_size
 FROM   pg_catalog.pg_class
 JOIN pg_catalog.pg_namespace
 ON relnamespace = pg_catalog.pg_namespace.oid
) t
GROUP BY schema_name
ORDER BY schema_name


Query 2:
select schemaname,
pg_size_pretty(sum(pg_table_size(schemaname||'.'||relname))::bigint) as s
from pg_stat_user_tables
group by schemaname


Re: [GENERAL] Function fixing - PostgreSQL 9.2

2016-03-01 Thread drum.lu...@gmail.com
>
>
>>> *Question:*

 Payments in a Pending state cannot be invoiced and are excluded from
 the Invoice Runs section, but they are showing in the count mechanic.

 How can I solve this?

>>>
>>> ​In 9.2 you probably need to convert the count into a conditional sum:
>>>
>>> SELECT sum(CASE WHEN  THEN 1 ELSE 0 END) FROM data;
>>>
>>> You can probably do the same with count since it excludes nulls.
>>>
>>> SELECT count(NULLIF(invoice_status, 'Pending') FROM invoices;
>>>
>>> ​9.4 introduced a FILTER clause for Aggregate Expressions that can do
>>> this much more cleanly and efficiently.​
>>>
>>> ​http://www.postgresql.org/docs/9.4/static/sql-expressions.html​
>>>
>>> David J.
>>>
>>>
>>
> ​I barely scanned your original query - just read the description.  I
> don't have the inclination - especially without a self-contained example -
> to make changes to it.
>
> David J.
> ​
>
>
>
Just an update - hope this helps you get more data:


There is a number beside each color flag, and a number in the "sub status"
drop down. Instead of "the number of jobs that have billable items that can
be invoiced", it shows "the number of jobs that have *something*
billable but not yet invoiced".

And since those payments cannot be invoiced alone, they won't show up in
the list.

This is the Query:

> public function search($type=self::SEARCH_TYPE_STATUS_TYPE, &$rawData =
> []){
> if($type === self::SEARCH_TYPE_STATUS_TYPE) {
> $fields = "
> stj.status_type_id,
> stj.jobs_count,
> stj.job_ids,
> (
> SELECT
> array_to_json(array_agg(srcs))
> FROM
> (
> -- property names in the json match column names in g_statuses_jobs()
> SELECT
> (srs.sr[1]::BIGINT) AS status_id,
> (srs.sr[2]::TEXT) AS status_label,
> (srs.sr[3]::BOOLEAN) AS status_is_default,
> (srs.sr[4]::BIGINT) AS jobs_count,
> (srs.sr[5]::JSON) AS job_ids
> FROM
> (SELECT regexp_split_to_array(regexp_split_to_table(stj.status_type_data,
> E'\n'), E'\t')) AS srs(sr)
> ) AS srcs
> ) AS status_type_json
> ";
> $searchFunction =
>  
> 'g_status_types_jobs($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25)
> AS stj';
> $factory = new JobSearchStatusSummaryFactory();
> }else{
> $fields = '*';
> $searchFunction =
>  
> "g_statuses_jobs($1,$2,$3,$4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25)";
> $factory = new JobSearchResultFactory();
> }
> $query = "SELECT
> $fields
> FROM $searchFunction";


Re: [GENERAL] Schema Size

2016-03-01 Thread drum.lu...@gmail.com
On 2 March 2016 at 12:23, Scott Mead  wrote:

>
> On Tue, Mar 1, 2016 at 6:07 PM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> You should read the definitions for the functions you are using to
>> retrieve the sizes.
>>
>> ​http://www.postgresql.org/docs/current/static/functions-admin.html​
>>
>> +1, you've gotta be careful with each of these, they all tend to hide
> different, yet critical components of size that you may be having trouble
> resolving.
>
>  The other thing to consider is that this isn't including any on-disk
> space required for your change traffic in the WAL.  Your $PGDATA will
> always be larger than the sum of all your databases sizes...
>
>
>> On Tue, Mar 1, 2016 at 3:48 PM, drum.lu...@gmail.com <
>> drum.lu...@gmail.com> wrote:
>>
>>> Hi there
>>>
>>> Wanna see how size a schema is in my PostgreSQL 9.2
>>>
>>> Got two queries - they return different values... can u please check?
>>>
>>> cheers;
>>>
>>> Query 1:
>>> SELECT schema_name,
>>> pg_size_pretty(sum(table_size)::bigint) as "disk space",
>>> (sum(table_size) / pg_database_size(current_database())) * 100
>>> as "percent"
>>> FROM (
>>>  SELECT pg_catalog.pg_namespace.nspname as schema_name,
>>>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
>>>  FROM   pg_catalog.pg_class
>>>  JOIN pg_catalog.pg_namespace
>>>  ON relnamespace = pg_catalog.pg_namespace.oid
>>> ) t
>>> GROUP BY schema_name
>>> ORDER BY schema_name
>>>
>>>
>> ​​pg_relation_size: "Disk space used by the specified fork ('main',
>> 'fsm', 'vm', or 'init') of the specified table or index"
>>
>> The 'init' fork is (I think) non-zero but extremely small.
>> TOAST for a given relation is considered its own table
>>
>>
>>> Query 2:
>>> select schemaname,
>>> pg_size_pretty(sum(pg_table_size(schemaname||'.'||relname))::bigint) as s
>>> from pg_stat_user_tables
>>> group by schemaname
>>>
>>
>> pg_table_size: "Disk space used by the specified table, excluding indexes
>> (but including TOAST, free space map, and visibility map)"
>>
>
> Personally, I'm a huge fan of 'pg_total_relation_size' which is all of
> pg_table_size + indexes.  It really depends on specifically what you're
> trying to count.  If you're looking for the total disk space required by
> your tables in a schema, I always [personally] want to include indexes in
> this count to make sure I understand the total impact on disk of accessing
> my relations.
>
>
>>
>> David J.​
>>
>>
>

So.. I'm doing this way:

CREATE OR REPLACE FUNCTION pg_schema_size(text) RETURNS BIGINT AS
$$
  SELECT SUM(pg_total_relation_size(quote_ident(schemaname) || '.' || quote_ident(tablename)))::BIGINT
  FROM pg_tables
  WHERE schemaname = $1
$$ LANGUAGE SQL;

Link: https://wiki.postgresql.org/wiki/Schema_Size


That's working - but I'd like to test it and compare the results against
another, trustworthy query - do you have one?
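
A cross-check that should roughly agree with the function above (a sketch that goes through pg_class directly; pg_total_relation_size already counts TOAST and indexes, and 'public' is just an example schema name):

SELECT n.nspname AS schema_name,
       pg_size_pretty(SUM(pg_total_relation_size(c.oid))::bigint) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'          -- ordinary tables only
  AND n.nspname = 'public'     -- example schema
GROUP BY n.nspname;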


Re: [GENERAL] Export binary data - PostgreSQL 9.2

2016-03-02 Thread drum.lu...@gmail.com
On 29 February 2016 at 06:31, Steve Crawford  wrote:

> What exactly are you trying to do? Dump/backup your data (e.g. pg_dump)?
> Read binary data from a table? If so, what field type (bytea, blob, ...)?
> Export to where?
>
> Cheers,
> Steve
>
>
> On Sun, Feb 28, 2016 at 9:12 AM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> Hi all,
>>
>>
>> Which command would be to export the binary data for a table?
>>
>> I was unable to find it...
>>
>> Thanks
>>
>
>

I've just used COPY ... WITH BINARY.
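
For reference, a minimal sketch of that approach (the table and file names here are made up):

-- export a table's rows in PostgreSQL's binary COPY format
COPY my_table TO '/tmp/my_table.bin' WITH BINARY;

-- and load them back elsewhere
COPY my_table FROM '/tmp/my_table.bin' WITH BINARY;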

Thank you.


[GENERAL] Slow Query - Postgres 9.2

2016-03-02 Thread drum.lu...@gmail.com
Hi all...

I'm working on a Slow Query. It's faster now (It was 20sec before) but
still not good.

Can you have a look and see if you can find something?
Cheers

Query:

WITH jobs AS (
SELECT
job.id,
job.clientid,
CONCAT(customer.company, ' ', customer.name_first, ' ',
customer.name_last) AS "identity",
job.gps_lat,
job.gps_long
FROM public.ja_jobs AS job
JOIN public.ja_customers AS customer ON customer.id = job.customerid
WHERE
job.clientid = 22
AND job.time_job >= 1422702000
AND job.time_job <= 1456743540

AND NOT job.deleted
AND NOT job.templated
), items AS (
SELECT
job.identity,
COALESCE(bill_item.unit_cost, billable.unit_cost, 0) *
bill_item.quantity AS cost,
COALESCE(bill_item.unit_price, billable.unit_price, 0) *
bill_item.quantity AS revenue,
bill_item.quantity AS quantity,
note.n_quote_status,
bill_item.for_invoicing
FROM
jobs AS job
JOIN
public.ja_notes AS note
ON
note.jobid = job.id
AND note.note_type IN ('time', 'part')
JOIN
dm.bill_items AS bill_item
ON
bill_item.bill_item_id = note.bill_item_id
AND bill_item.for_invoicing
LEFT JOIN
dm.billables AS billable
ON
billable.billable_id = note.billable_id
JOIN
public.ja_mobiusers AS user_creator
ON
user_creator.id = note.mobiuserid
AND (
user_creator.name_first ilike 'Alan'
OR user_creator.name_last ilike 'Alan'
))
SELECT
item.identity,
SUM(CASE WHEN item.for_invoicing THEN item.revenue ELSE 0 END) AS revenue,
SUM(CASE WHEN item.for_invoicing THEN item.quantity ELSE 0 END) AS quantity,
SUM(CASE WHEN item.for_invoicing THEN item.cost ELSE 0 END) AS cost
FROM items AS item
GROUP BY item.identity
ORDER BY
revenue DESC,
item.identity ASC

Explain analyze link: http://explain.depesz.com/s/IIDj


Re: [GENERAL] Slow Query - Postgres 9.2

2016-03-02 Thread drum.lu...@gmail.com
On 3 March 2016 at 10:33, Vitaly Burovoy  wrote:

> On 3/2/16, drum.lu...@gmail.com  wrote:
> > Hi all...
> >
> > I'm working on a Slow Query. It's faster now (It was 20sec before) but
> > still not good.
> >
> > Can you have a look and see if you can find something?
> > Cheers
> >
> > Query:
> >
> > WITH jobs AS (
> > ...
> > FROM
> > jobs AS job
> > JOIN
> > public.ja_notes AS note
> > ON
> > note.jobid = job.id
> > AND note.note_type IN ('time', 'part')
> > ...
>
> It is the most long part. All query is 8.8sec.
> SeqScan by CTE is 2.8sec! and index scan in ix_notes_jobid_per_type
> 500rows(loops) * 9.878ms!!! = 4.939sec.
>
> Why does it take so long time?
> For example, index scan in ja_customers_pkey is only 0.781 per row...
> 10 times faster!
>
> What definition of the ix_notes_jobid_per_type? Is it bloated?
>
>
> Hi there!

CREATE INDEX
ix_notes_jobid_per_type
ON
ja_notes
(
"jobid",
"note_type"
);


\di+ ix_notes_jobid_per_type

                                List of relations

 Schema |          Name           | Type  |  Owner   |  Table   |  Size  | Description
--------+-------------------------+-------+----------+----------+--------+-------------
 public | ix_notes_jobid_per_type | index | postgres | ja_notes | 484 MB |


it does not seem to be bloated... since the table is 2805 MB
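
If it had turned out to be bloated, a common way to rebuild it on 9.2 without blocking writes (a sketch; REINDEX ... CONCURRENTLY does not exist in that version) would be:

CREATE INDEX CONCURRENTLY ix_notes_jobid_per_type_new
    ON ja_notes (jobid, note_type);

DROP INDEX ix_notes_jobid_per_type;
ALTER INDEX ix_notes_jobid_per_type_new RENAME TO ix_notes_jobid_per_type;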


[GENERAL] Unique UUID value - PostgreSQL 9.2

2016-03-14 Thread drum.lu...@gmail.com
Hi all,




I've got 2 tables:

Temp-Table
Table-A

Need to copy all the data from *Temp-Table* to *Table-A*. But there is a
Constraint that does not allow me to have duplicated items.

So I need to create a Unique value.

*Example:*

Column Code      |   Column Info   |
code_67493675936 |   info_2016     |

*Question:*

How can I do that using PostgreSQL 9.2?


Re: [GENERAL] Unique UUID value - PostgreSQL 9.2

2016-03-14 Thread drum.lu...@gmail.com
On 15 March 2016 at 10:29, David G. Johnston 
wrote:

> On Mon, Mar 14, 2016 at 2:13 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> Hi all,
>>
>>
>>
>> I've got 2 tables:
>>
>> Temp-Table
>> Table-A
>>
>> Need to copy all the data from *Temp-Table* to *Table-A*. But there is a
>> Constraint that does not allow me to have duplicated items.
>>
>> So I need to create a Unique value.
>>
>> *Example:*
>>
>> Column Code|   Column Info   |
>> code_67493675936info_2016
>>
>> *Question:*
>>
>> How can I do that using PostgreSQL 9.2?
>>
>
> You might want to try to restate the problem and question.  I'm having a
> hard time trying to figure out what you want.
>
> Reading your subject line I'll point you to:
>
> http://www.postgresql.org/docs/9.2/interactive/datatype-uuid.html
>
> specifically the extension that is mentioned.
>
> ​Usually people figure out ways to accomplish their goals without using
> UUID though.
>
> David J.
> ​
>
>
I want to import data from table A to table B, but when doing it the column
"code" on table B has to have some unique random data.

I could use UUID like:
insert into "TB" ("Id", "Title") values (uuid_generate_v4(), '111');

but I'm doing:
INSERT INTO tableb (SELECT * FROM TABLEA)

So, how to use UUID using the SELECT above?


Re: [GENERAL] Unique UUID value - PostgreSQL 9.2

2016-03-14 Thread drum.lu...@gmail.com
On 15 March 2016 at 10:46, James Keener  wrote:

> Is a uuid a valid value in the application making use of the data? Why
> can't you add the column to table b and then import, or use create the uuid
> in the import select clause? I'm also having trouble understanding the
> problem and why you've discounted the options you've not even told us
> you've considered.
>
>
>>>
>> I want to import data from table A to table B, but when doing it the
>> column "code" on table B has to have some unique random data.
>>
>> I could use UUID like:
>> insert into "TB" ("Id", "Title") values (uuid_generate_v4(), '111');
>>
>> but I'm doing:
>> INSERT INTO tableb (SELECT * FROM TABLEA)
>>
>> So, how to use UUID using the SELECT above?
>>
>>
>>

On the target table, I've got a CONSTRAINT:

> ALTER TABLE dm.billables
>   ADD CONSTRAINT uc_billable_code_unique_per_account UNIQUE("account_id",
> "code");


So I'm importing a CSV file with repeated values on the field "code"
Example:

> 'Interpreting Normal/AH'
> 'Interpreting Normal/AH'
> 'Interpreting Normal/AH'
> 'Interpreting Normal/AH'
> 'Interpreting Normal/AH4'
> 'Interpreting Normal/AH'
> 'Interpreting Normal/AH6'
> 'Interpreting Normal/AH'


So when importing it to the target table I got the error:

> ERROR:  duplicate key value violates unique constraint
> "uc_billable_code_unique_per_account"
> DETAIL:  Key ("account_id", "code")=(32152, 'Interpreting Normal/AH')
> already exists.

Command used to import the values:

> INSERT INTO dm.billables (SELECT billable_id, code, info FROM temptable)

OR directly through the CSV file:

COPY dm.billables (code, info, unit_cost, unit_price, account_id) FROM
> '/var/lib/pgsql/sql/lucas/charge_test.csv' WITH DELIMITER ',' QUOTE '"' CSV
> HEADER;


So. I determined that to do that without dropping the CONSTRAINT, I'll have
to generate a unique but random value to the "code" column.

*NOW:*
*COLUMN CODE               |   COLUMN INFO*
'Interpreting Normal/AH'       Travel1
'Interpreting Normal/AH1'      trip2
'Interpreting Normal/AH2'      test897
'Interpreting Normal/AH3'      trip11
'Interpreting Normal/AH4'      trave1

*NEW:*
*COLUMN CODE               |   COLUMN INFO*
code_32152563bdc6453645        Travel1
code_32152563bdc4566hhh        trip2
code_32152563b654645uuu        test897
code_32152563bdc4546uui        trip11
code_32152563bdc4db11aa        trave1

How can I do that?


Re: [GENERAL] Unique UUID value - PostgreSQL 9.2

2016-03-14 Thread drum.lu...@gmail.com
>
>
>
> On the target table, I've got a CONSTRAINT:
>
>> ALTER TABLE dm.billables
>>   ADD CONSTRAINT uc_billable_code_unique_per_account UNIQUE("account_id",
>> "code");
>
>
> So I'm importing a CSV file with repeated values on the field "code"
> Example:
>
>> 'Interpreting Normal/AH'
>> 'Interpreting Normal/AH'
>> 'Interpreting Normal/AH'
>> 'Interpreting Normal/AH'
>> 'Interpreting Normal/AH4'
>> 'Interpreting Normal/AH'
>> 'Interpreting Normal/AH6'
>> 'Interpreting Normal/AH'
>
>
> So when importing it to the target table I got the error:
>
>> ERROR:  duplicate key value violates unique constraint
>> "uc_billable_code_unique_per_account"
>> DETAIL:  Key ("account_id", "code")=(32152, 'Interpreting Normal/AH')
>> already exists.
>
> Command used to import the values:
>
>> INSERT INTO dm.billables (SELECT billable_id, code, info FROM temptable)
>
> OR directly through the CSV file:
>
> COPY dm.billables (code, info, unit_cost, unit_price, account_id) FROM
>> '/var/lib/pgsql/sql/lucas/charge_test.csv' WITH DELIMITER ',' QUOTE '"' CSV
>> HEADER;
>
>
> So. I determined that to do that without dropping the CONSTRAINT, I'll
> have to generate a unique but random value to the "code" column.
>
> *NOW:*
> *COLUMN CODE|   COLUMN INFO*
> 'Interpreting Normal/AH'Travel1
> 'Interpreting Normal/AH1'trip2
> 'Interpreting Normal/AH2'test897
> 'Interpreting Normal/AH3'trip11
> 'Interpreting Normal/AH4'trave1
>
> *NEW:*
> *COLUMN CODE|   COLUMN INFO*
> code_32152563bdc6453645Travel1
> code_32152563bdc4566hhhtrip2
> code_32152563b654645uuu   test897
> code_32152563bdc4546uuitrip11
> code_32152563bdc4db11aatrave1
>
> How can I do that?
>


I could also drop the CONSTRAINT, import all the data and then change the
"code" column to use UUID - *But how?*


Re: [GENERAL] Unique UUID value - PostgreSQL 9.2

2016-03-14 Thread drum.lu...@gmail.com
On 15 March 2016 at 11:44, Brent Wood  wrote:

> Not best practice but perhaps viable...
>
>
> In the target table add a serial datatype column as part of the unique
> constraint.
>
>
> Do not populate this column explicitly on insert, but have the db do it
> for you. It will allocate an incremental (unique) value automatically on
> insert.
>
>
> But I think your problem is more fundamental - if you genuinely have
> duplicate values in a column - there should not be a unique constraint on
> it. If it should be unique, then you should modify your insert data.
>
>
>
I can't modify my insert data, because there's a PHP random-code generator
that does exactly what I want to do with the SQL - it generates a random but
unique value for the column "code" - so the customer will be able to have
duplicate values on that column

Today the PHP is already generating for example:

code_321525694417ad6b5f

So that is linked to another table (I can do that manually no problem)

I just need to know how I can do all of this


Re: [GENERAL] Unique UUID value - PostgreSQL 9.2

2016-03-14 Thread drum.lu...@gmail.com
On 15 March 2016 at 12:05, David G. Johnston 
wrote:

> On Mon, Mar 14, 2016 at 4:05 PM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Mon, Mar 14, 2016 at 3:51 PM, drum.lu...@gmail.com <
>> drum.lu...@gmail.com> wrote:
>>
>>> I just need to know how can I do all of this
>>>
>>
>> ​You may have missed my prior email.
>>
>> You cannot COPY directly into the target table.  You must copy to a
>> staging table.  You then insert from the staging table to the target table,
>> listing every single column, and replacing those columns you want to change
>> with some kind of expression.
>>
>> Basically:
>>
>> INSERT INTO targettable (col1, col2, col3)
>> SELECT col1, col2 || '_' || nextval('sequence_name')::text, col3
>> FROM stagingtable;
>>
>>
> ​In theory an INSERT trigger might work too - but this is likely to be
> simpler and faster.
>
> David J.
> ​
>
>

Hi David... Thanks for you reply. I haven't seen it before.

So I'm doing:

CREATE EXTENSION "uuid-ossp";


INSERT INTO junk.wm_260_billables2 (account_id, code, info) SELECT
> account_id, code || '_' || nextval('uuid_generate_v4()')::text, info FROM
> junk.wm_260_billables1;

Getting the error:

ERROR:  relation "uuid_generate_v4()" does not exist


But the extension is working:

> select uuid_generate_v4() as one;
>  one
> --
>  59ad418e-53fa-4725-aadb-8f779c1a12b2
> (1 row)


select * from pg_available_extensions;
> uuid-ossp  | 1.0 | 1.0   | generate
> universally unique identifiers (UUIDs)


Do you know what I might be doing wrong?


Re: [GENERAL] Unique UUID value - PostgreSQL 9.2

2016-03-15 Thread drum.lu...@gmail.com
>
>
>>
> Not reading the documentation for functions you've never heard of makes
> the list.
>
> David J.
>

INSERT INTO junk.test1 (account_id, code, info)
SELECT account_id, uuid_generate_v4(), info
FROM junk.test2;

It works but I get data like:

abc77f31-0ee6-44fd-b954-08a3a3aa7b28
f307fb42-23e5-4742-ab8f-8ce5c0a8e852

*Is it possible to do the same, but with some text at the beginning?*

Example:

test_32152563bdc4db11aa

test_321525694417ad6b5f


Re: [GENERAL] Unique UUID value - PostgreSQL 9.2

2016-03-15 Thread drum.lu...@gmail.com
On 16 March 2016 at 10:30, David G. Johnston 
wrote:

> On Tue, Mar 15, 2016 at 2:01 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>>
>>>>
>>> Not reading the documentation for functions you've never heard of makes
>>> the list.
>>>
>>> David J.
>>>
>>
>> INSERT INTO junk.test1 (account_id, code, info)SELECT account_id, 
>> uuid_generate_v4(), infoFROM junk.test2;
>>
>> It works but I get data like:
>>
>> abc77f31-0ee6-44fd-b954-08a3a3aa7b28
>> f307fb42-23e5-4742-ab8f-8ce5c0a8e852
>>
>> *Is it possible to do the same, but with TEXT on the beginning?*
>>
>> Example:
>>
>> test_32152563bdc4db11aa
>>
>> test_321525694417ad6b5f
>>
>>
> ​Yes, it is possible.  Did you even try?
>
> "test" in that example is called a string.  There are bunch of functions
> and operators that work with strings.  They are documented here:
>
> http://www.postgresql.org/docs/9.4/static/functions-string.html
>
> You'll find the ones that "concatenate" - which basically is a fancy way
> to say: "to combine" or "to join together" - to be quite useful when faced
> with problems of this sort.
>
> David J.
>
>
I wouldn't ask if I hadn't tested it!

Will have a look.

Lucas


Re: [GENERAL] Unique UUID value - PostgreSQL 9.2

2016-03-15 Thread drum.lu...@gmail.com
>
>>>
>> I wouldn't ask if I wouldn't have tested it!
>>
>> Will have a look.
>>
>> ​
> I didn't asked if you tested what you did post.  I asked if you tried
> anything else before asking to be fed the answer.  If you did it would be
> nice to include those other attempts.
>
> David J.
> ​
>
>


It's already working:

> INSERT INTO junk.wm_260_billables2 (account_id, code, info)
> SELECT account_id, 'test_' || uuid_generate_v4(), info
> FROM junk.wm_260_billables1;


But I'm getting a very long UUID - I'd like something shorter:

> test_ea8bacbe-fa3c-4072-b511-643a56feb40e
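
If a full UUID is longer than needed, one possible shortcut (my own assumption, not something from the thread: it trades length for a higher collision risk, so the unique constraint still has to be the final arbiter) is to slice an md5 hash instead:

INSERT INTO junk.wm_260_billables2 (account_id, code, info)
SELECT account_id,
       'test_' || substr(md5(random()::text || clock_timestamp()::text), 1, 12),
       info
FROM junk.wm_260_billables1;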


[GENERAL] Schema Size - PostgreSQL 9.2

2016-03-18 Thread drum.lu...@gmail.com
Hi all,

Can you please provide me a query that tells me how much space a schema takes
in my DB?

I'm using one but don't think it's right

>
> SELECT schema_name,
>pg_size_pretty(sum(table_size)::bigint),
>(sum(table_size) / pg_database_size(current_database())) * 100 as a
> FROM (
>   SELECT pg_catalog.pg_namespace.nspname as schema_name,
>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
>   FROM   pg_catalog.pg_class
>  JOIN pg_catalog.pg_namespace ON relnamespace =
> pg_catalog.pg_namespace.oid
> ) t
> GROUP BY schema_name
> ORDER BY schema_name


[GENERAL] Insert data in two columns same table

2016-03-18 Thread drum.lu...@gmail.com
I'm trying to insert data from TABLE A to TABLE B.

1 - Select billable_id from dm.billable
2 - Select mobiuser_id from ja_mobiusers
3 - Insert the billable_id and the mobiuser_id to the dm.billables_links
table.


*FYI -* It has to be in the same transaction because the mobiuser_id must
go to the selected billable_id on the first select.

Well... Would be something like:

> INSERT INTO dm.billables_links (billable_id) VALUES ((SELECT billable_id
> FROM junk.wm_260_billables2 WHERE info ilike '%Alisha%')),
> INSERT INTO dm.billables_links (mobiuser_id) VALUES ((SELECT id FROM
> public.ja_mobiusers WHERE name_first LIKE 'Alisha%' AND name_last LIKE
> 'Dson%'))



The problem is that I need to do that at the same time, because of a
constraint:

ALTER TABLE dm.billables_links
  ADD CONSTRAINT cc_one_and_only_one_target CHECK ((("customer_id" IS
NOT NULL)::integer + ("role_id" IS NOT NULL)::integer + ("mobiuser_id"
IS NOT NULL)::integer) = 1);

I'm having trouble creating that SQL... can anyone help please?


[GENERAL] Deleting schema - saving up space - PostgreSQL 9.2

2016-03-18 Thread drum.lu...@gmail.com
Hi all,

I've got four servers:

1 - Master
2 - Slave Hot Standby (Same hardware)
3 - Slave Hot Standby (Same hardware)
4 - Slave Hot Standby (VM - Very slow machine)

On the master server, I've got a schema named "GORFS" with 80 GB, according
to this SQL:

SELECT schema_name,
>pg_size_pretty(sum(table_size)::bigint),
>(sum(table_size) / pg_database_size(current_database())) * 100
> FROM (
>   SELECT pg_catalog.pg_namespace.nspname as schema_name,
>  pg_relation_size(pg_catalog.pg_class.oid) as table_size
>   FROM   pg_catalog.pg_class
>  JOIN pg_catalog.pg_namespace ON relnamespace =
> pg_catalog.pg_namespace.oid
> ) t
> GROUP BY schema_name
> ORDER BY schema_name


On that schema, we have all the user's files, like Photos, notes, docs,
etc...

We're migrating it to an NFS server, taking it out of the DB to save space
and also for performance reasons.

*QUESTION:*

Once the migration is completed, how can I remove the schema from the DB and
reclaim the space?

1 - The problem here is that a VACUUM FULL will lock all the DB for writes,
am I right? My DB is 1.7 TB, so it will take a while and the system can't
be offline

   1. Migrate the files to the NFS server
   2. Delete the schema from the MASTER DB
   3. Put the slaves into read-only servers
   4. Run Vacuum FULL into the MASTER DB
   5. Once the vacuum is done, do a DUMP from the MASTER to the slaves
   (excluding the GORFS schema of course)

2 - I think this is the most recommended option - but I have some questions
about doing this.

   1. Put a slave as a new Master
   2. Do a dump excluding the GORFS schema in the OLD master
   3. DELETE the old DB from the old master
   4. IMPORT the new dump file to the old master
   5. Turn the old master into the NEW master (*what has changed on the slave
   that became a master - how can those changes get into the new master?*)
   6. Import the dump into the others slaves and make them re-sync from the
   new master


Thank you.
Lucas


Re: [GENERAL] Unique UUID value - PostgreSQL 9.2

2016-03-18 Thread drum.lu...@gmail.com
>
>
>>
> SELECT 'test_' || substring(uuid_generate_v4()::text, 1, 1);
>
> Or use a sequence, or just compute "ROW_NUMBER() OVER ()" and tack that on.
>
> David J.
> ​
>
>
Well.. I was able to do it by using:

> INSERT INTO dm.billables_links (billable_id) VALUES ((SELECT billable_id
> FROM junk.wm_260_billables2 WHERE info ilike '%Alisha%')),
> INSERT INTO dm.billables_links (mobiuser_id) VALUES ((SELECT id FROM
> public.ja_mobiusers WHERE name_first LIKE 'Alisha%' AND name_last LIKE
> 'Dson%'))



The problem is that I need to do that at the same time, because of a
constraint:

ALTER TABLE dm.billables_links
  ADD CONSTRAINT cc_one_and_only_one_target CHECK ((("customer_id" IS
NOT NULL)::integer + ("role_id" IS NOT NULL)::integer + ("mobiuser_id"
IS NOT NULL)::integer) = 1);

I'm having trouble creating that SQL... can anyone help please?

FYI - It has to be in the same transaction because the mobiuser_id must go
to the selected billable_id on the first select.


Re: [GENERAL] Insert data in two columns same table

2016-03-19 Thread drum.lu...@gmail.com
On 18 March 2016 at 03:23, Adrian Klaver  wrote:

> On 03/16/2016 07:07 PM, drum.lu...@gmail.com wrote:
>
>>
>>
>>
>>
>> I see a lot of other problems: you have 3 independet tables. Your 2
>> queries
>> (selects) returns 2 independet results, you can't use that for
>> insert into the
>> 3rd table. And i think, you are looking for an update, not insert.
>> So you have
>> to define how your tables are linked together (join).
>>
>> Can you explain how these tables are linked together?
>>
>>
>>
>>
>> Hi Andreas!
>>
>> Well...
>>
>> There are two tables that I need to get data from(dm.billables /
>> public.ja_mobiusers), and a third table (dm.billables_links) that I need
>> to insert data from those two tables.
>>
>> The table dm.billables has four (important) columns:
>>
>> *billable_id / customer_id / role_id / mobiuser_id*
>>
>> I wanna add data there. The data is not there yet, so it's not an UPDATE.
>>
>> *1 -* select the billable_id: (SELECT1)
>> SELECT billable_id FROM junk.wm_260_billables2 WHERE info ilike '%Alisha%'
>>
>> *2 -* select the mobiuser_id: (SELECT2)
>> SELECT id FROM public.ja_mobiusers WHERE name_first LIKE 'Alisha%' AND
>> name_last LIKE 'Dadryl%'
>>
>> *3 -* Insert those two data into the dm.billables_links table (EXAMPLE):
>> INSERT INTO dm.billables_links (billable_id, mobiuser_id) VALUES
>> (SELECT1, SELECT2);
>>
>>
>> CREATE TABLE
>> *billables*
>>  (
>>  billable_id BIGINT DEFAULT
>> "nextval"('"dm"."billables_billable_id_seq"'::"regclass") NOT
>>  NULL,
>>  account_id BIGINT NOT NULL,
>>  code CHARACTER VARYING(64) NOT NULL,
>>  info "TEXT",
>>  CONSTRAINT pk_billables PRIMARY KEY (billable_id),
>>  CONSTRAINT uc_billable_code_unique_per_account UNIQUE
>> ("account_id", "code"),
>>  );
>> CREATE TABLE
>> *billables_links*
>>  (
>>  billable_link_id BIGINT DEFAULT
>> "nextval"('"dm"."billables_links_billable_link_id_seq"'::
>>  "regclass") NOT NULL,
>>  billable_id BIGINT NOT NULL,
>>  customer_id BIGINT,
>>  role_id BIGINT,
>>  mobiuser_id BIGINT,
>>  CONSTRAINT pk_billables_links PRIMARY KEY
>> (billable_link_id),
>>  CONSTRAINT fk_billable_must_exist FOREIGN KEY
>> (billable_id) REFERENCES billables
>>  (billable_id),
>>  CONSTRAINT cc_one_and_only_one_target CHECK
>> ("customer_id" IS NOT NULL))::INTEGER + (
>>  ("role_id" IS NOT NULL))::INTEGER) + (("mobiuser_id" IS
>> NOT NULL))::INTEGER) = 1)
>>
>
> Would it not be easier if instead of customer_id, role_id, mobiuser_id you
> had id_type('customer', 'role', 'mobi') and user_id(id). Then you could
> eliminate the CHECK, which as far as I can see is just restricting entry to
> one user id anyway.
>
>  );
>> CREATE TABLE
>> *ja_mobiusers*
>>  (
>>  id BIGINT DEFAULT
>> "nextval"('"ja_mobiusers_id_seq"'::"regclass") NOT NULL,
>>  clientid BIGINT DEFAULT 0,
>> [...]
>>  PRIMARY KEY (id),
>>  CONSTRAINT fk_account_must_exist FOREIGN KEY
>> (clientid) REFERENCES ja_clients (id),
>>  );
>>
>>
>>
>
>
>

I just did it using:

> INSERT INTO dm.billables_links (billable_id, mobiuser_id)
> SELECT billable_id
> , (SELECT id
>FROM   public.ja_mobiusers
>WHERE  name_first LIKE 'Anthon%'
>ANDname_last LIKE 'Swile%') AS foo  -- alias irrelevant
> FROM   dm.billables
> WHERE  info ILIKE '%Anthon%' AND info ILIKE '%Swile%' AND account_id =
> 32152 ;


Re: [GENERAL] Deleting schema - saving up space - PostgreSQL 9.2

2016-03-19 Thread drum.lu...@gmail.com
On 17 March 2016 at 10:21, David G. Johnston 
wrote:

> On Wed, Mar 16, 2016 at 1:59 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>>
>> 1 - The problem here is that a VACUUM FULL will lock all the DB to wirte,
>> am I right? My DB is 1.7 TB, so it will take a while and the System can't
>> be offline
>>
>>1. Migrate the files to the NFS server
>>2. Delete the schema from the MASTER DB
>>3. Put the slaves into read-only servers
>>4. Run Vacuum FULL into the MASTER DB
>>5. Once the vacuum is done, do a DUMP from the MASTER to the slaves
>>(excluding the GORFS schema of course)
>>
>>
> ​If you are removing the entire object there should be no cause to VACUUM
> FULL.  A vacuum full reclaims unused space ​*within a given relation.*
>
> ​Both DROP TABLE and TRUNCATE have the effect of (near) immediately
> ​freeing up the disk spaced used by the named table and returning it to the
> operating system.
>
> ​You want to use VACUUM FULL tablename; if you remove a significant chuck
> of a table using DELETE or UPDATE and want to reclaim the spaced that was
> occupied by the older version of the ​row within "tablename".
>
> VACUUM FULL; simply does this for all tables - I'm not sure when locks are
> taken and removed.  likely only the actively worked on tables are locked -
> but the I/O hit is global so targeted locking only buys you so much.
>
> David J.
>
>
>

I see..

so in your opinion a DROP SCHEMA and maybe a VACUUM (not full) would be
enough?


Re: [GENERAL] Insert data in two columns same table

2016-03-19 Thread drum.lu...@gmail.com
>
>
>
> I see a lot of other problems: you have 3 independet tables. Your 2 queries
> (selects) returns 2 independet results, you can't use that for insert into
> the
> 3rd table. And i think, you are looking for an update, not insert. So you
> have
> to define how your tables are linked together (join).
>
> Can you explain how these tables are linked together?
>
>
>
>
Hi Andreas!

Well...

There are two tables that I need to get data from(dm.billables /
public.ja_mobiusers), and a third table (dm.billables_links) that I need to
insert data from those two tables.

The table dm.billables has four (important) columns:

*billable_id / customer_id / role_id / mobiuser_id*

I wanna add data there. The data is not there yet, so it's not an UPDATE.

*1 -* select the billable_id: (SELECT1)
SELECT billable_id FROM junk.wm_260_billables2 WHERE info ilike '%Alisha%'

*2 -* select the mobiuser_id: (SELECT2)
SELECT id FROM public.ja_mobiusers WHERE name_first LIKE 'Alisha%' AND
name_last LIKE 'Dadryl%'

*3 -* Insert those two data into the dm.billables_links table (EXAMPLE):
INSERT INTO dm.billables_links (billable_id, mobiuser_id) VALUES (SELECT1,
SELECT2);


CREATE TABLE
> *billables*
> (
> billable_id BIGINT DEFAULT
> "nextval"('"dm"."billables_billable_id_seq"'::"regclass") NOT
> NULL,
> account_id BIGINT NOT NULL,
> code CHARACTER VARYING(64) NOT NULL,
> info "TEXT",
> CONSTRAINT pk_billables PRIMARY KEY (billable_id),
> CONSTRAINT uc_billable_code_unique_per_account UNIQUE
> ("account_id", "code"),
> );
> CREATE TABLE
> *billables_links*
> (
> billable_link_id BIGINT DEFAULT
> "nextval"('"dm"."billables_links_billable_link_id_seq"'::
> "regclass") NOT NULL,
> billable_id BIGINT NOT NULL,
> customer_id BIGINT,
> role_id BIGINT,
> mobiuser_id BIGINT,
> CONSTRAINT pk_billables_links PRIMARY KEY (billable_link_id),
> CONSTRAINT fk_billable_must_exist FOREIGN KEY (billable_id)
> REFERENCES billables
> (billable_id),
> CONSTRAINT cc_one_and_only_one_target CHECK (((("customer_id"
> IS NOT NULL))::INTEGER + (("role_id" IS NOT NULL))::INTEGER +
> (("mobiuser_id" IS NOT NULL))::INTEGER) = 1)
> );
> CREATE TABLE
> *ja_mobiusers*
> (
> id BIGINT DEFAULT
> "nextval"('"ja_mobiusers_id_seq"'::"regclass") NOT NULL,
> clientid BIGINT DEFAULT 0,
> [...]
> PRIMARY KEY (id),
> CONSTRAINT fk_account_must_exist FOREIGN KEY (clientid)
> REFERENCES ja_clients (id),
> );


Re: [GENERAL] Deleting schema - saving up space - PostgreSQL 9.2

2016-03-19 Thread drum.lu...@gmail.com
>
>>
>>
> ​I don't deal with Hot Standby's in my day-to-day but if you DROP SCHEMA
> all of the spaced consumed by indexes and tables in that schema will be
> freed.  The vacuum might make a small difference in performance on the
> system catalogs (pg_class, stats, etc)  that were updated but with respect
> to the dropped schema there won't be anything present there for vacuum to
> touch.
>
> Create and populate a dummy table in a test setup, measure the HD space
> taken in PGDATA, then drop it and measure again to see it in action.
>
> I've only done this using "TRUNCATE" - I've got a system with space
> constraints a the same kind of "file data" table and freed up around 20GB
> with a single fast truncate (though ensuring FKs wouldn't be a problem was
> fun...).
>
> David J.
>
>
Have made some tests just by doing:

1 - Creating a master server
2 - Creating a slave server
3 - Creating a table on public schema and creating a gorfs schema with
another table
4 - inserting some random data into gorfs.test (*insert into gorfs.test
(descr) values (unnest(array(select md5(random()::text) from
generate_series(1, 3000))));*)
5 - The data has been replicated into the slave - *PASS*
6 - Dropping the schema on the master server - *PASS* (Had to use the
CASCADE option)
7 - The schema has gone on the slave server as well
8 - Checked the free space - *PASS* (*I had more free space after deleting
the schema*)

So it seems that just doing a DROP SCHEMA will be enough :)
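
A quick way to confirm that on a test box (a sketch; the schema name follows the thread's example, and the space is returned to the OS as soon as the DROP commits):

SELECT pg_size_pretty(pg_database_size(current_database()));   -- before

DROP SCHEMA gorfs CASCADE;

SELECT pg_size_pretty(pg_database_size(current_database()));   -- after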


Re: [GENERAL] Unique UUID value - PostgreSQL 9.2

2016-03-20 Thread drum.lu...@gmail.com
Sorry guys... *WRONG *email

On 17 March 2016 at 14:29, drum.lu...@gmail.com 
wrote:

>
>>>
>> SELECT ​'test_'​ || substring(uuid_generate_v4()::text, 1, 1);
>>
>> Or use a sequence, or just compute "ROW_NUMBER() OVER ()" and tack that
>> on.
>>
>> David J.
>> ​
>>
>>
> Well.. I was able to do it by using:
>
>> INSERT INTO dm.billables_links (billable_id) VALUES ((SELECT billable_id
>> FROM junk.wm_260_billables2 WHERE info ilike '%Alisha%')),
>> INSERT INTO dm.billables_links (mobiuser_id) VALUES ((SELECT id FROM
>> public.ja_mobiusers WHERE name_first LIKE 'Alisha%' AND name_last LIKE
>> 'Dson%'))
>
>
>
> The problem is that I need to do that at the same time, because of a
> constraint:
>
> ALTER TABLE dm.billables_links
>   ADD CONSTRAINT cc_one_and_only_one_target CHECK ((("customer_id" IS NOT 
> NULL)::integer + ("role_id" IS NOT NULL)::integer + ("mobiuser_id" IS NOT 
> NULL)::integer) = 1);
>
> I'm having trouble by creating that SQL... can anyone help please?
>
> FYI - It has to be in the same transaction because the mobiuser_id must go
> to the selected billable_id on the first select.
>
>


[GENERAL] template1 being accessed

2016-04-04 Thread drum.lu...@gmail.com
Hi there.

I'm currently using postgres 9.2.
As you can see below, my "template1" database was being accessed:

[screenshot omitted: monitoring view showing template1 being accessed]


That server is a 4-day-old backup DB - does a gzip of pg_dump, excluding
some tables; also 4-day old replication using WAL-archive with 345600s
delay; also file-level copies (excl. PG DB)


Why was the template1 database being accessed and now it's not anymore?

Cheers
Lucas
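
One way to see who (if anyone) is sitting on template1 while it happens, against 9.2's pg_stat_activity (a sketch):

SELECT datname, pid, usename, application_name, client_addr, state, query_start
FROM pg_stat_activity
WHERE datname = 'template1';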


[GENERAL] I/O - Increase RAM

2016-04-13 Thread drum.lu...@gmail.com
Hi all,

At the moment I'm having 100% I/O during the day. My server has SATA HDs,
and it can't be changed now.
So, to solve the problem (or at least try to), I was thinking about doubling
the RAM and, by doing that, increasing the cache.

[screenshot omitted: I/O utilization graph]

The server has 128GB today:

shared_buffers = 51605MB (I'll change it to 32GB)
> effective_cache_size = 96760MB



Question:

I know that might not be the best option, but increasing the RAM and the
cache would help, right?

Thanks
Lucas
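
For what it's worth, a sketch of how those two settings are often scaled if the RAM really were doubled to 256GB (rule-of-thumb values only, not advice from this thread):

# postgresql.conf - assuming 256GB of RAM
shared_buffers = 32GB            # the value already planned above
effective_cache_size = 192GB     # roughly 75% of RAM, a common starting point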


Re: [GENERAL] I/O - Increase RAM

2016-04-13 Thread drum.lu...@gmail.com
On 14 April 2016 at 08:52, Joshua D. Drake  wrote:

> On 04/13/2016 01:43 PM, drum.lu...@gmail.com wrote:
>
> Question:
>>
>> I know that might not be the best option, but by increasing the RAM and
>> the CACHE would help, right?
>>
>
> might, not necessarily would.
>
>
It would be nice if you could explain why / why not.


Re: [GENERAL] I/O - Increase RAM

2016-04-13 Thread drum.lu...@gmail.com
I'm using a MASTER server and a SLAVE as read-only as well.
The results I'm posting here is related to the *master* server.



> We're gonna need better stats. iostat, iotop, vmstat etc will all break
> down your io between reads and writes, random vs sequential etc.
>

I'll try to get more data during a spike

*SPIKE:*

Device:  rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz  await  svctm  %util
dm-2       0.00    0.00  129.00  585.10  5932.00  4680.80    14.86    26.82  37.58   1.40  99.80


>
> If you're at 100% IO Util, and iostat says you're writing is taking up 20
> or 30% of the time, then no, adding cache probably won't help.
>

Well.. I'm getting spikes. So, I'm not getting 100% of I/O all the time. It
does happen several times during the day.



>
> Start looking into adding SSDs. They are literally 20 to 1000 times faster
> at a lot of io stuff than spinning drives. And they're relatively cheap for
> what they do.
>

I know.. but unfortunately the bosses don't want to spend money :(


>
> Note that a software RAID-5 array of SSDs can stomp a hardware controller
> running RAID-10 with spinning disks easily, and RAID-5 is pretty much as
> slow as RAID gets.
>
> Here's a few minutes of "iostat -xd 10 /dev/sdb" on one of my big servers
> at work. These machines have a RAID-5 of 10x750GB SSDs under LSI MegaRAIDs
> with caching turned off. (much faster that way). The array created thus is
> 6.5TB and it's 83% full. Note that archiving and pg_xlog are on separate
> volumes as well.
>
> Device: rrqm/s   wrqm/s r/s w/srkB/swkB/s avgrq-sz
> avgqu-sz   await r_await w_await  svctm  %util
> sdb   0.00   236.30 1769.10 5907.30 20366.80 69360.80
> 23.3836.384.740.346.06   0.09  71.00
>

*NORMAL SERVER:* (as it usually works during the day)

Device:  rrqm/s  wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz  await  svctm  %util
dm-2       0.00    0.00   42.60  523.60  1644.80  4188.80    10.30     7.85  13.88   1.04  59.15

- Those results are changing all the time


>
> So we're seeing 1769 reads/s, 5907 writes/s and we're reading ~20MB/s and
> writing ~70MB/s. In the past this kind of performance from spinning disks
> required massive caching and cabinets full of hard drives. When first
> testing these boxes we got literally a fraction of this performance with 20
> spinning disks in RAID-10, and they had 512GB of RAM. Management at first
> wanted to throw more memory at it, these machines go to 1TB RAM, but we
> tested with 1TB RAM and the difference was literally a few % points going
> from 512GB to 1TB RAM.
>
> If your iostat output looks anything like mine, with lots of wkB/s and w/s
> then adding memory isn't going to do much.
>

Thanks a lot for your reply!
Lucas


[GENERAL] SAN - Same array Master-Slave

2016-04-17 Thread drum.lu...@gmail.com
Hi all,

Currently, I have four servers:


   - 1 Master server
   - 1 Slave server (read-only)
   - 2 Slaves servers (hot_standby)


We're having I/O and size issues, and we're currently running SATA disks.
So we'll need to move our storage to a SAN.


*Question:*

Can I use the same SAN array to a Master and a Slave server?

Short example:
[diagram omitted: master and slave attached to the same SAN array]

Is that possible?

Cheers
Lucas


[GENERAL] Function PostgreSQL 9.2

2016-04-19 Thread drum.lu...@gmail.com
Hi all,

I've got two tables:

- users
- companies

I'm trying to create a function that:


   - if users.code is empty, it gives a default value
   - And the increment_client_code in company should auto increment for the
   next client code

What I've done so far:

DROP FUNCTION IF EXISTS client_code_increment_count();
> CREATE OR REPLACE FUNCTION "public"."client_code_increment_count" ()
> RETURNS TABLE("code" INT) AS
> $BODY$
> SELECT MAX(CAST(users.code AS INT)) FROM users WHERE users.code ~ '^\d+$'
> AND company_id = 2
> $BODY$
> LANGUAGE sql;
> SELECT * FROM "client_code_increment_count"();





CREATE OR REPLACE FUNCTION "public"."auto_generate_client_code_if_empty" ()
> RETURNS "trigger"
>  VOLATILE
> AS $dbvis$
> BEGIN
> END;
> $dbvis$ LANGUAGE plpgsql;






> CREATE TRIGGER "increment_client_code"
> BEFORE INSERT OR UPDATE ON users
> FOR EACH ROW
> EXECUTE PROCEDURE "auto_generate_client_code_if_empty"();



But I still can't make it work... What am I missing?

Cheers


Re: [GENERAL] Function PostgreSQL 9.2

2016-04-19 Thread drum.lu...@gmail.com
On 20 April 2016 at 10:38, David G. Johnston 
wrote:

> On Tue, Apr 19, 2016 at 3:23 PM, drum.lu...@gmail.com <
> drum.lu...@gmail.com> wrote:
>
>> Hi all,
>>
>> I've got two tables:
>>
>> - users
>> - companies
>>
>> I'm trying to create a function that:
>>
>>
>>- if users.code is empty, it gives a default value
>>- And the increment_client_code in company should auto increment for
>>the next client code
>>
>> What I've done so far:
>>
>> DROP FUNCTION IF EXISTS client_code_increment_count();
>>> CREATE OR REPLACE FUNCTION "public"."client_code_increment_count" ()
>>> RETURNS TABLE("code" INT) AS
>>> $BODY$
>>> SELECT MAX(CAST(users.code AS INT)) FROM users WHERE users.code ~
>>> '^\d+$' AND company_id = 2
>>> $BODY$
>>> LANGUAGE sql;
>>> SELECT * FROM "client_code_increment_count"();
>>
>>
>>
> The need to do "WHERE users.code ~ '^\d+$' means your model is poorly
> specified.
> ​
>
>
>>
>>
>>
>> CREATE OR REPLACE FUNCTION "public"."auto_generate_client_code_if_empty"
>>> () RETURNS "trigger"
>>>  VOLATILE
>>> AS $dbvis$
>>> BEGIN
>>> END;
>>> $dbvis$ LANGUAGE plpgsql;
>>
>>
>>
> It would be nice if you actually showed some work here...​
>
>
>>
>>
>>> CREATE TRIGGER "increment_client_code"
>>> BEFORE INSERT OR UPDATE ON users
>>> FOR EACH ROW
>>> EXECUTE PROCEDURE "auto_generate_client_code_if_empty"();
>>
>>
>>
> ​
> ​I'd question the need to execute this trigger on UPDATE...​
>
> ​
>
>
>>
>> But still can't do that works.. What Am I missing?
>>
>>
>
> ​The stuff that goes between "BEGIN" and "END" in
> auto_generate_client_code_if_empty...?
>
>
That's all I got David.. working on it and would like some help if
possible...
Lucas


Re: [GENERAL] Function PostgreSQL 9.2

2016-04-19 Thread drum.lu...@gmail.com
>
> Information. eg.:
>

> The schema for the tables.
>
> Why is not just adding a DEFAULT value to the users.code not an option?
>
>

The customer can add their own value to the users.code column.
That's why I can't have a default value.



> What the default code should be or how it is to be calculated?
>

the default value is 1000.

So the customer can set their own code value. But if they don't do that, I
have to provide the next available value: 1001, 1002, 1003, etc.



>
> What is increment_client_code?
>

It's a column:

ALTER TABLE public.companies ADD COLUMN client_code_increment integer;
ALTER TABLE public.companies ALTER COLUMN client_code_increment SET DEFAULT
1000;



>
> Does increment_client_code relate to users or some other table, say
> clients?
>
>
nope.. there is no link between them



table users:

> CREATE TABLE
> users
> (
> id INTEGER DEFAULT nextval('users_id_seq'::regclass) NOT NULL,
> email CHARACTER VARYING DEFAULT ''::CHARACTER VARYING NOT NULL,
> encrypted_password CHARACTER VARYING DEFAULT ''::CHARACTER VARYING
> NOT NULL,
> reset_password_token CHARACTER VARYING,
> reset_password_sent_at TIMESTAMP(6) WITHOUT TIME ZONE,
> remember_created_at TIMESTAMP(6) WITHOUT TIME ZONE,
> sign_in_count INTEGER DEFAULT 0 NOT NULL,
> current_sign_in_at TIMESTAMP(6) WITHOUT TIME ZONE,
> last_sign_in_at TIMESTAMP(6) WITHOUT TIME ZONE,
> current_sign_in_ip INET,
> last_sign_in_ip INET,
> created_at TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL,
> updated_at TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL,
> name CHARACTER VARYING,
> confirmation_token CHARACTER VARYING,
> confirmed_at TIMESTAMP(6) WITHOUT TIME ZONE,
> confirmation_sent_at TIMESTAMP(6) WITHOUT TIME ZONE,
> company_name CHARACTER VARYING,
> country CHARACTER VARYING,
> mobile_number CHARACTER VARYING,
> landline_number CHARACTER VARYING,
> staff_colour CHARACTER VARYING,
> company_id INTEGER,
> role_id INTEGER,
> active BOOLEAN DEFAULT false,
> deleted BOOLEAN DEFAULT false,
> avatar_file_name CHARACTER VARYING,
> avatar_content_type CHARACTER VARYING,
> avatar_file_size INTEGER,
> avatar_updated_at TIMESTAMP(6) WITHOUT TIME ZONE,
> fax CHARACTER VARYING,
> website CHARACTER VARYING,
> business_type CHARACTER VARYING,
> lead_source CHARACTER VARYING,
> code CHARACTER VARYING,
> notes TEXT,
> status CHARACTER VARYING,
> tsv TSVECTOR,
> origin CHARACTER VARYING,
> origin_id CHARACTER VARYING,
> first_name CHARACTER VARYING,
> last_name CHARACTER VARYING,
> billed_client_id INTEGER,
> username CHARACTER VARYING,
> is_client BOOLEAN DEFAULT false,
> job_share BOOLEAN DEFAULT true
> );


Table companies:

> CREATE TABLE
> companies
> (
> id INTEGER DEFAULT nextval('companies_id_seq'::regclass) NOT NULL,
> name CHARACTER VARYING,
> country CHARACTER VARYING,
> timezone CHARACTER VARYING,
> mobile_number CHARACTER VARYING,
> email CHARACTER VARYING,
> website CHARACTER VARYING,
> phone CHARACTER VARYING,
> created_at TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL,
> updated_at TIMESTAMP(6) WITHOUT TIME ZONE NOT NULL,
> reference_increment INTEGER DEFAULT 1000,
> activated BOOLEAN DEFAULT true,
> enable_quotes BOOLEAN DEFAULT false,
> allow_billing_client BOOLEAN DEFAULT true,
> allow_templates_recurrence BOOLEAN DEFAULT true,
> recurrence_limit INTEGER DEFAULT 30,
> job_title_dropdown BOOLEAN DEFAULT false,
> default_reference_prefix CHARACTER VARYING,
> default_reference_increment INTEGER,
> default_visit_start_day INTEGER,
> default_visit_start_hour INTEGER,
> default_visit_start_min INTEGER,
> job_date_entry_duration BOOLEAN DEFAULT true,
> default_visit_duration_hour INTEGER DEFAULT 0,
> default_visit_duration_min INTEGER DEFAULT 30,
> date_entry_short BOOLEAN DEFAULT true,
> time_entry_24 BOOLEAN DEFAULT true,
> time_field_increment INTEGER DEFAULT 10,
> enable_job_share BOOLEAN DEFAULT true,
> token CHARACTER VARYING
> );


Re: [GENERAL] Function PostgreSQL 9.2

2016-04-19 Thread drum.lu...@gmail.com
Just forgot to say:


[...]


>
>
>>
>> Does increment_client_code relate to users or some other table, say
>> clients?
>>
>>
> nope.. there is no link between them
>
>

If the users.code is empty/null, then the trigger has to get the last
number from client_code_increment and put on the users.code column



>
>
[...]


Re: [GENERAL] Function PostgreSQL 9.2

2016-04-20 Thread drum.lu...@gmail.com
On 21 April 2016 at 09:44, Adrian Klaver  wrote:

> On 04/19/2016 07:34 PM, drum.lu...@gmail.com wrote:
>
>> Information. eg.:
>>
>>
>> The schema for the tables.
>>
>> Why is not just adding a DEFAULT value to the users.code not an
>> option?
>>
>>
>>
>> The customer can add their own value to the users.code column.
>> That's why I can't have a default value.
>>
>
> That is contradicted by your next statement below.
>
>
>> What the default code should be or how it is to be calculated?
>>
>>
>> the default value is 1000.
>>
>
> See above.
>
>
>> So the customer can set their own code value. But if they don't do that,
>> I've to provide the next available value. 1001, 1002, 1003, etc
>>
>
> Then why is users.code a varchar field?
>
>
>>
>> What is increment_client_code?
>>
>>
>> It's a column:
>> ALTER TABLE public.companies ADD COLUMN client_code_increment integer;
>> ALTER TABLE public.companies ALTER COLUMN client_code_increment SET
>> DEFAULT 1000;
>>
>>
>> Does increment_client_code relate to users or some other table, say
>> clients?
>>
>>
>> nope.. there is no link between them
>>
>
> Then what is its purpose?
>
> I am with the other responses in this thread, this is a set up that is not
> going to end well. More to the point, I still have no idea what you are
> trying to achieve with your triggers and functions.
>
>>
>>

Well.. will try ONE more time then.


1 - The customer can add any value into the users.code column
2 - The customer can choose between adding or *not* adding a value to the
users.code column
3 - If users.code is null (because the customer has chosen not to add any
value in there), a trigger/function has to do the job.
4 - The function/trigger adds the next available value; the default is
1000 and it's stored in companies.client_code_increment

Lucas


Re: [GENERAL] Function PostgreSQL 9.2

2016-04-20 Thread drum.lu...@gmail.com
>
>
>
> If I am following, this duplicates the information in
> companies.client_code_increment, in that they both return the last non-user
> code. Of course this assumes, as David mentioned, that the client is not
> using a numeric code system. Then you are left trying to figure whether a
> number is 'your' number or 'their' number?
>
>>

The customer can add any value into users.code:

code CHARACTER VARYING,


But he can also leave it blank/null if he wants to.
That's when the trigger does its job: it puts a value (starting at 1000) in
that column.

Of course that has to be unique, as nobody can use the same value as someone else.


- I was hoping you guys could help me get started on the function...


[GENERAL] Update field to a column from another table

2016-04-20 Thread drum.lu...@gmail.com
I've got two tables:

- ja_jobs
- junk.ja_jobs_23856

I need to update the null column ja_jobs.time_job with the data from the
table  junk.ja_jobs_23856

So I'm doing:

>
> UPDATE public.ja_jobs AS b
> SET   time_job = a.time_job
> FROM junk.ja_jobs_23856 AS a
> WHERE a.id = b.id
> AND a.clientid = b.clientid;


But it's not working... I'm using PostgreSQL 9.2

Do you guys have an idea why?

cheers;
Lucas


Re: [GENERAL] Update field to a column from another table

2016-04-21 Thread drum.lu...@gmail.com
So when I run:

UPDATE ja_jobs t2
> SET time_job = t1.time_job
> FROM junk.ja_test t1
> WHERE t2.id = t1.id
> AND t2.time_job IS DISTINCT FROM t1.time_job;


I get:

UPDATE 2202

So I check the data by doing:

select * FROM public.ja_jobs WHERE id = 14574527


And the "time_job" field is null


Re: [GENERAL] Update field to a column from another table

2016-04-21 Thread drum.lu...@gmail.com
The problem was a trigger in my DB, when I disabled it the data started to
be updated.


Lucas
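
For the record, the usual way to handle that for a one-off bulk update (a sketch; the trigger name here is made up):

ALTER TABLE public.ja_jobs DISABLE TRIGGER some_update_trigger;   -- hypothetical name

UPDATE public.ja_jobs AS b
SET    time_job = a.time_job
FROM   junk.ja_jobs_23856 AS a
WHERE  a.id = b.id
AND    a.clientid = b.clientid;

ALTER TABLE public.ja_jobs ENABLE TRIGGER some_update_trigger;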


[GENERAL] index question

2016-05-01 Thread drum.lu...@gmail.com
Hi all,

I've got the following index on the gorfs.inode_segments table:

>
> CREATE INDEX ix_clientids
>   ON gorfs.inode_segments
>   USING btree
>   (("split_part"("full_path"::"text", '/'::"text", 4)::integer))
>   WHERE "gorfs"."is_kaminski_note_path"("full_path"::"text");


And I'm running the following Query:

> SELECT
> * FROM ( SELECT split_part(full_path, '/', 4)::INT AS account_id,
>split_part(full_path, '/', 6)::INT AS note_id,
>split_part(full_path, '/', 9)::TEXT AS variation,
>st_size,
>segment_index,
>reverse(split_part(reverse(full_path), '/', 1)) as file_name,
>i.st_ino,
>full_path
> FROM gorfs.inodes i
> JOIN gorfs.inode_segments s
>   ON i.st_ino = s.st_ino_target
> WHERE i.checksum_md5 IS NOT NULL
>   AND s.full_path ~ '/userfiles/account/[0-9]+/[a-z]+/[0-9]+'
>   AND i.st_size > 0) as test WHERE account_id = 12225


*- But the query does not use the index... Why?*

Explain analyze:

> "Seq Scan on "inode_segments"  (cost=0.00..3047212.44 rows=524846
> width=63) (actual time=14212.466..51428.439 rows=31 loops=1)"
> "  Filter: ("split_part"(("full_path")::"text", '/'::"text", 4) =
> '12225'::"text")"
> "  Rows Removed by Filter: 104361402"
> "Total runtime: 51428.482 ms"


Cheers
Lucas


Re: [GENERAL] index question

2016-05-01 Thread drum.lu...@gmail.com
>
>
>
> Well, a little more information would be useful like:
>

Oops... yes, sure... sorry about that.


> 1. What is the PostgreSQL version?
>

PostgreSQL 9.2


> 2. What is the O/S?
>

Linux Centos 6.7 64 bits


> 3. What is the structure of gorfs.inode_segments?
>

Table inode_segments: (I'll leave the comments to help)

> CREATE TABLE gorfs.inode_segments
> (
>   st_ino "gorfs"."ino_t" NOT NULL, -- Inode number the segment belongs to.
> alongside segment_index, it forms the table's primary key to ensure
> uniqueness per relevant scope
>   segment_index "gorfs"."pathname_component" NOT NULL, -- See st_no's
> column description for further details. The meaning of this column varies
> based on the host inode type:...
>   st_ino_target "gorfs"."ino_t", -- Target inode number. Meaningful for
> directory inode segments (objects in the directory)
>   full_path "gorfs"."absolute_pathname", -- Exploded absolute canonical
> path for quick lookups. Meaningful only for directory inode segments
> (objects in the directory)
>   segment_data "bytea", -- Actual data segment. Meaningful only for
> S_IFLNK and S_IFREG
>   CONSTRAINT pk_inode_segments PRIMARY KEY ("st_ino", "segment_index"),
>   CONSTRAINT fk_host_inode_must_exist FOREIGN KEY (st_ino)
>   REFERENCES gorfs.inodes (st_ino) MATCH SIMPLE
>   ON UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT fk_target_inode_must_exist FOREIGN KEY (st_ino_target)
>   REFERENCES gorfs.inodes (st_ino) MATCH SIMPLE
>   ON UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT uc_no_duplicate_full_paths UNIQUE ("full_path"),
>   CONSTRAINT cc_only_root_can_be_its_own_parent CHECK ("st_ino_target" IS
> NULL OR "st_ino"::bigint <> "st_ino_target"::bigint OR "st_ino"::bigint = 2)
> )


Table gorfs.inodes:

> CREATE TABLE gorfs.inodes
> (
>   st_dev "gorfs"."dev_t" DEFAULT NULL::bigint, -- ID of device containing
> file. Meaningless in this implementation
>   st_ino "gorfs"."ino_t" NOT NULL DEFAULT
> "nextval"('"gorfs"."inodes_st_ino_idseq"'::"regclass"), -- Inode number
>   st_mode "gorfs"."mode_t" NOT NULL, -- File type/mode bits
>   st_nlink "gorfs"."nlink_t" NOT NULL, -- Number of hard links (directory
> segments) pointing to this inode. See stat(2) manual page for details (man
> 2 stat)
>   st_uid "gorfs"."uid_t" NOT NULL, -- User ID that owns the file. See
> stat(2) manual page for details (man 2 stat)
>   st_gid "gorfs"."gid_t" NOT NULL, -- Group ID that owns the file.See
> stat(2) manual page for details (man 2 stat)
>   st_rdev "gorfs"."dev_t", -- Device number (currently we don't support
> device files).  See stat(2) manual page for details (man 2 stat)
>   st_size "gorfs"."off_t", -- File size, if applicable. See stat(2) manual
> page for details (man 2 stat)
>   st_blksize "gorfs"."blksize_t", -- Block size for I/O. Meaningless here,
> hard coded to 512. See stat(2) manual page for details (man 2 stat)
>   st_blocks "gorfs"."blkcnt_t", -- Number of allocated blocks. Meaningless
> here, but calculated from block size. See stat(2) manual page for details
> (man 2 stat)
>   st_atime "gorfs"."time_t" NOT NULL, -- Timestamp of last access. Stored
> as a timestamp as opposed to unix TS. See stat(2) manual page for details
> (man 2 stat)
>   st_mtime "gorfs"."time_t" NOT NULL, -- Timestamp of last modification.
> Stored as a timestamp as opposed to unix TS. See stat(2) manual page for
> details (man 2 stat)
>   st_ctime "gorfs"."time_t" NOT NULL, -- Timestamp of last change. Stored
> as a timestamp as opposed to unix TS. See stat(2) manual page for details
> (man 2 stat)
>   checksum_md5 "md5_hash", -- MD5 checksum of the file. Supplied by the
> application as the DB might not even see the payload
>   media_subtype_id integer, -- Reference to MIME type (see FK constraint).
> We can't support all media types but unknow types can be stored as
> application/octet-stream
>   external_size "gorfs"."off_t", -- For symlinks only. Meaningful for fat
> links only: total size of the fat link target. Null for normal symlinks
>   CONSTRAINT pk_inodes PRIMARY KEY ("st_ino"),
>   CONSTRAINT fk_media_subtype_must_exist FOREIGN KEY (media_subtype_id)
>   REFERENCES public.media_subtypes (media_subtype_id) MATCH SIMPLE
>   ON UPDATE NO ACTION ON DELETE NO ACTION,
>   CONSTRAINT cc_mount_devices_not_supported CHECK ("st_dev" IS NULL)
> )



4. Did you do an ANALYZE table gorfs.inode_segments   after you created the
> index?
>

Yes.. actually the index was already created.


Re: [GENERAL] index question

2016-05-01 Thread drum.lu...@gmail.com
>
> To clarify, the index is based on a function called "split_part()
> The WHERE clause is only referencing the full_part column, so the planner
> cannot associate the index with the full_part column.
>

Thanks for the explanation, Melvin.

Would it be as simple as:

CREATE INDEX CONCURRENTLY ON gorfs.inode_segments USING btree ("full_path");

?

Thanks again.
Lucas
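
Either index can help, but they are matched by different predicates. A small sketch of the distinction (the account number is just the example from the thread):

-- this shape can use ix_clientids: the filter repeats the exact indexed
-- expression and the partial-index predicate
SELECT s.full_path
FROM   gorfs.inode_segments s
WHERE  split_part(s.full_path::text, '/', 4)::integer = 12225
AND    gorfs.is_kaminski_note_path(s.full_path::text);

-- a plain btree on full_path, by contrast, only helps filters written
-- against full_path itself, not against the split_part() expression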


Re: [GENERAL] index question

2016-05-01 Thread drum.lu...@gmail.com
Sorry @Melvin, sent the previous email just to you..


That's a great one, too! Cheers!


Well.. the index creation did not help...

if possible please have a look at the explain analyze results:

http://explain.depesz.com/s/rHOU

What else can I do?

*The indexes I created is:*
- CREATE INDEX CONCURRENTLY ix_inode_segments_notes_clientids2 ON
gorfs.inode_segments USING btree ("full_path");

- CREATE INDEX CONCURRENTLY ix_inodes_checksum_st_size ON gorfs.inodes
USING btree ("checksum_md5","st_size");


Re: [GENERAL] index question

2016-05-02 Thread drum.lu...@gmail.com
The index that I've created and is working is:

Index without typecasting:

> CREATE INDEX CONCURRENTLY ix_clientids2 ON gorfs.inode_segments USING
> btree (full_path);


Thanks for the help, guys!


Melvin, that Query you sent is very interesting..

SELECT n.nspname as schema,
>i.relname as table,
>i.indexrelname as index,
>i.idx_scan,
>i.idx_tup_read,
>i.idx_tup_fetch,
>CASE WHEN idx.indisprimary
> THEN 'pkey'
> WHEN idx.indisunique
> THEN 'uidx'
> ELSE 'idx'
> END AS type,
>pg_get_indexdef(idx.indexrelid),
>CASE WHEN idx.indisvalid
> THEN 'valid'
> ELSE 'INVALID'
> END as statusi,
>pg_relation_size(quote_ident(n.nspname)|| '.' ||
> quote_ident(i.relname)) as size_in_bytes,
>pg_size_pretty(pg_relation_size(quote_ident(n.nspname)|| '.' ||
> quote_ident(i.relname))) as size
>   FROM pg_stat_all_indexes i
>   JOIN pg_class c ON (c.oid = i.relid)
>   JOIN pg_namespace n ON (n.oid = c.relnamespace)
>   JOIN pg_index idx ON (idx.indexrelid =  i.indexrelid )
>  WHERE n.nspname NOT LIKE 'pg_%'
> ORDER BY 1, 2, 3;



I've found more than 100 indexes where the columns:

"idx_scan", "idx_tup_read" and "idx_tup_fetch" are 0.
So, it's safe to say that they are not being used, is that right?

But some indexes show almost 100GB in the size column. This means they are
not being used now, but could they have been used in the past?

- Is it safe to remove them?

Cheers
Lucas


Re: [GENERAL] index question

2016-05-02 Thread drum.lu...@gmail.com
>
>
>>
> ​Index size and index usage are unrelated.  Modifications to the index to
> keep it in sync with the table do not count as "usage" - only reading it
> for where clause use counts.​
>
> ​David J.
> ​
>
>

So only those with *0 size* should be deleted? Is that what you're saying?
Can you be clearer please?

Lucas


Re: [GENERAL] index question

2016-05-02 Thread drum.lu...@gmail.com
>
>
> Generically speaking,  if the total of dx_scan + idx_tup_read +
> idx_tup_fetch  are 0, then it is an _indication_ that those indexes should
> be dropped.
> You should also consider how long those indexes have existed and how often
> queries are executed.
>
> A good practice would be to save the SQL to recreate the indexes before
> you drop any. In that way, if you notice a degradation in performance, you
> can just rebuild
> You can use the following query to do that, but you might want to edit and
> add the CONCURRENT option.
>
> SELECT pg_get_indexdef(idx.indexrelid) || ';'
>   FROM pg_stat_all_indexes i
>   JOIN pg_class c ON (c.oid = i.relid)
>   JOIN pg_namespace n ON (n.oid = c.relnamespace)
>   JOIN pg_index idx ON (idx.indexrelid =  i.indexrelid )
>  WHERE NOT idx.indisprimary
>AND NOT idx.indisunique
>AND i.relname NOT LIKE 'pg_%'
>AND i.idx_scan + idx_tup_read + idx_tup_fetch = 0
>ORDER BY n.nspname,
>   i.relname;
>
> The following query generates the drop statements.
>
> SELECT 'DROP INDEX CONCURRENTLY IF EXISTS "' || quote_ident(n.nspname) ||
> '"' || '.' || '"' || quote_ident(i.indexrelname) || '"' ||';'
>   FROM pg_stat_all_indexes i
>   JOIN pg_class c ON (c.oid = i.relid)
>   JOIN pg_namespace n ON (n.oid = c.relnamespace)
>   JOIN pg_index idx ON (idx.indexrelid =  i.indexrelid )
>  WHERE NOT idx.indisprimary
>AND i.relname NOT LIKE 'pg_%'
>AND i.idx_scan + idx_tup_read + idx_tup_fetch = 0
>ORDER BY i.indexrelname;
>
>
> I would not place any concern on the size of the index. That is just what
> is needed to keep track of all associated rows.
> Once you drop the indexes you determine are not needed, you will gain back
> the space that they use up.
>
> Please stay in touch and let me know how it goes.
>


I will. Thanks for the help/tips!



Cheers
Lucas


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-02 Thread drum.lu...@gmail.com
This is what I've done:


-- 1 - Creating the Sequence:

CREATE SEQUENCE users_code_seq
> INCREMENT 1
> MINVALUE 1
> MAXVALUE 9223372036854775807
> START 1000
> CACHE 1;


-- 2 - Setting the DEFAULT

ALTER TABLE public.users ALTER COLUMN code SET DEFAULT
> NEXTVAL('users_code_seq');


-- 3 - Setting the column as NOT NULL;

>
> ALTER TABLE public.users ALTER COLUMN code SET NOT NULL;


-- 4 - Setting the trigger

CREATE TRIGGER public.update_code_column
>   BEFORE UPDATE OR INSERT
>   ON public.users
>   FOR EACH ROW
>   EXECUTE PROCEDURE public.users_code_seq;


-- 5 - Creating a CONSTRAINT UNIQUE

> ALTER TABLE public.users
>   ADD CONSTRAINT uc_users_code UNIQUE("code");



Is that right?
Am I missing something?

Cheers
Lucas


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-02 Thread drum.lu...@gmail.com
On 3 May 2016 at 12:44, drum.lu...@gmail.com  wrote:

> This is what I've done:
>
>
> -- 1 - Creating the Sequence:
>
> CREATE SEQUENCE users_code_seq
>> INCREMENT 1
>> MINVALUE 1
>> MAXVALUE 9223372036854775807
>> START 1000
>> CACHE 1;
>
>
> -- 2 - Setting the DEFAULT
>
> ALTER TABLE public.users ALTER COLUMN code SET DEFAULT
>> NEXTVAL('users_code_seq');
>
>
> -- 3 - Setting the column as NOT NULL;
>
>>
>> ALTER TABLE public.users ALTER COLUMN code SET NOT NULL;
>
>
> -- 4 - Setting the trigger
>
> CREATE TRIGGER public.update_code_column
>>   BEFORE UPDATE OR INSERT
>>   ON public.users
>>   FOR EACH ROW
>>   EXECUTE PROCEDURE public.users_code_seq;
>
>
> -- 5 - Creating a CONSTRAINT UNIQUE
>
>> ALTER TABLE public.users
>>   ADD CONSTRAINT uc_users_code UNIQUE("code");
>
>
>
> Is that right?
> Am I missing something?
>
> Cheers
> Lucas
>


Well.. I don't need to add a constraint if I already have a default value,
that's right...

Anyway...


hmm.. actually.. what I've done is a little bit different from what I need
=(


1 - each user in the public.users table is part of a company. Each company
has a unique company_id

2 - Remember the default 1000 value? That value is per company.

*Example:*

Company Test1 - Company_id = 1
- user john01 = users.code: 1000
- user john02 = users.code: Nz
- user john03 = users.code: 1001
- user john04 = users.code: Nz

Company Test2 - Company_id = 2
- user matt01 = users.code: Text1
- user matt02 = users.code: 1000
- user matt03 = users.code: 1001
- user matt04 = users.code: 1002

Company Test3 - Company_id = 3
- user luke01 = users.code: 1000
- user luke02 = users.code: 1001
- user luke03 = users.code: Text2
- user luke04 = users.code: 1002


So, the default value is 1000 for EACH company. And the users must get the
nextval value from there.


How can I do that?
Or at least if you guys can give me a direction...

Cheers


[GENERAL] (VERY) Slow Query - PostgreSQL 9.2

2016-05-03 Thread drum.lu...@gmail.com
Hi all,


I'm trying to get better performance out of the query below.. but I just
don't know what else I can do...

Please have a look and let me know if you can help somehow.. also, if you
need some extra data, just ask me please.

* Note that the gorfs.inode_segments table is 1.7TB in size

I have the following Query:

explain analyze
SELECT split_part(full_path, '/', 4)::INT AS account_id,
       split_part(full_path, '/', 6)::INT AS note_id,
       split_part(full_path, '/', 9)::TEXT AS variation,
       st_size,
       segment_index,
       reverse(split_part(reverse(full_path), '/', 1)) AS file_name,
       i.st_ino,
       full_path,
       (i.st_size / 100::FLOAT)::NUMERIC(5,2) || 'MB' AS size_mb
FROM gorfs.inodes i
JOIN gorfs.inode_segments s
  ON i.st_ino = s.st_ino_target
WHERE i.checksum_md5 IS NOT NULL
  AND s.full_path ~ '^/userfiles/account/[0-9]+/[a-z]+/[0-9]+'
  AND i.st_size > 0
  AND split_part(s.full_path, '/', 4)::INT IN (
        SELECT account.id
        FROM public.ja_clients AS account
        WHERE NOT (
              ((account.last_sub_pay > EXTRACT('epoch' FROM
                (transaction_timestamp() - CAST('4 Months' AS INTERVAL))))
               AND (account.price_model > 0))
           OR (account.regdate > EXTRACT('epoch' FROM
                (transaction_timestamp() - CAST('3 Month' AS INTERVAL))))
           OR (((account.price_model = 0) AND (account.jobcredits > 0))
               AND (account.last_login > EXTRACT('epoch' FROM
                (transaction_timestamp() - CAST('4 Month' AS INTERVAL)))))
        )
  )
LIMIT 100;


   - Explain analyze link: http://explain.depesz.com/s/Oc6

The query is taking ages, and I can't get the problem solved.

These are the indexes I've already created on the inode_segments table:

Indexes:
"ix_account_id_from_full_path" "btree"
(("split_part"("full_path"::"text", '/'::"text", 4)::integer)) WHERE
"full_path"::"text" ~
'^/userfiles/account/[0-9]+/[a-z]+/[0-9]+'::"text"
"ix_inode_segments_ja_files_lookup" "btree" ((CASE
WHEN "full_path"::"text" ~ '/[^/]*\.[^/]*$'::"text" THEN
"upper"("regexp_replace"("full_path"::"text", '.*\.'::"text",
''::"text", 'g'::"text"))
ELSE NULL::"text"END)) WHERE
"gorfs"."is_kaminski_note_path"("full_path"::"text")
"ix_inode_segments_notes_clientids" "btree"
(("split_part"("full_path"::"text", '/'::"text", 4)::integer)) WHERE
"gorfs"."is_kaminski_note_path"("full_path"::"text")
"ix_inode_segments_notes_clientids2" "btree" ("full_path")
"ix_inode_segments_notes_fileids" "btree"
(("split_part"("full_path"::"text", '/'::"text", 8)::integer)) WHERE
"gorfs"."is_kaminski_note_path"("full_path"::"text")
"ix_inode_segments_notes_noteids" "btree"
((NULLIF("split_part"("full_path"::"text", '/'::"text", 6),
'unassigned'::"text")::integer)) WHERE
"gorfs"."is_kaminski_note_path"("full_path"::"text")

These are the indexes I've already created on the inodes table:

 Indexes:
"ix_inodes_checksum_st_size" "btree" ("checksum_md5", "st_size")
WHERE "checksum_md5" IS NOT NULL

*Question:*

What else can I do to improve the Performance of the Query?


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-03 Thread drum.lu...@gmail.com
On 4 May 2016 at 01:18, Melvin Davidson  wrote:

>
>
> On Tue, May 3, 2016 at 1:21 AM, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>>
>>> Well.. I don't need to add a constraint if I already have a default
>>> value, that's right...
>>>
>>
>> Wrong
>>
>> David J.
>>
>
> What you need is a TRIGGER function & TRIGGER that will select  and assign
> the next users_code based on company_id.
> I'm not going to write the whole thing for you, but here is part of the
> trigger function logic.
>
> eg: IF NEW.company_id = 1 THEN
>   NEW.users_code = NEXTVAL('c1_users_code_seq');
>   ELSEIF  NEW.company_id = 2 THEN
>   NEW.users_code =  NEXTVAL('c2_users_code_seq');
>   ELSEIF  NEW.company_id = 3 THEN
>   NEW.users_code =  NEXTVAL('c3_users_code_seq');
>   ...
>   ...
>   ELSE
>    ?
>   END IF;
>
>
>

Do I have to have one sequence per company_id? There will be thousands..
isn't there a better way to do that?



> Seriously, get yourself the books I have recommended and study them
> BEFORE you continue attempting to design your database.
> You need a firm understanding of logical design & flow, otherwise you will
> be wasting your time.
>

That's what I'm doing.. studying.. and asking for some help to get a better
understanding. Isn't that the purpose of this mailing list?


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-03 Thread drum.lu...@gmail.com
>
>
>
> I agree that having thousands of sequences can be hard to manage,
> especially in a function, but you did not state that fact before,
> only that you wanted separate sequences for each company. That
> being said, here is an alternate solution.
>

Yep.. that was my mistake.


>
> 1. CREATE TABLE company_seqs
>(company_id bigint NOT NULL,
> last_seq   bigint NOT NULL,
> CONSTRAINT company_seqs_pk PRIMARY KEY (company_id)
>);
>
> 2. Every time you create a new company, you must insert the
>corresponding company_id  and last_seq [which will be 1}
> into the company_seqs table.
>

ok that's right.. just a comment here...
the value inside the users.code column must start with 1000 and not 1.
So, it would be 1001, 1002, 1003, etc.

The field "last_seq + 1" is ok, but how can I determine that the start
point would be 1000?


>
> 3. Change the trigger function logic to something like below:
>
> DECLARE
>v_seq_num INTEGER;
>
>  BEGIN
>SELECT last_seq
>FROM company_seqs
>WHERE company_id = NEW.company_id INTO v_seq_num;
>UPDATE company_seqs
>   SET last_seq  = last_seq + 1
> WHERE company_id = NEW.company_id;
>
>new.users_code = v_seq_num;
>

not sure what v_seq_num is...


>
>
> Now, just a quick comment. As has been said before, wanting a sequence
> with no gaps for
> each user in each company is a bit unrealistic and serves no purpose. For
> example,
> company_id 3 has 10 users, 1 > 10. What do you do when user 3 leaves and
> is deleted?
> As long as you have a unique user_code for each user, it does not matter.
>
> >... Studying.. asking for some help to get a better understand isn't
> this the purpose of this mail list?
>
> Yes, but at the same time, it is evident that you are trying to design the
> database before you have
> a valid understanding of database design. To wit, you are putting the cart
> before the horse.
> While this list is here to help you, it is not meant as a DATABASE 101
> course.
>

Yep.. got it


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-03 Thread drum.lu...@gmail.com
   - This is what I did...

-- Creating the table
CREATE TABLE public.company_seqs
(company_id BIGINT NOT NULL,
 last_seq BIGINT NOT NULL DEFAULT 1000,
 CONSTRAINT company_seqs_pk PRIMARY KEY (company_id));

-- Creating the function
CREATE OR REPLACE FUNCTION users_code_seq()
   RETURNS "trigger" AS'
BEGIN
UPDATE public.company_seqs
SET last_seq = (last_seq + 1)
WHERE company_id = NEW.company_id;
SELECT INTO NEW.code last_seq
FROM public.company_seqs WHERE company_id = NEW.company_id;
END IF;
RETURN new;
END
'
LANGUAGE 'plpgsql' VOLATILE;

-- Creating the trigger
CREATE TRIGGER tf_users_code_seq
   BEFORE INSERT
   ON public.users
   FOR EACH ROW
   EXECUTE PROCEDURE users_code_seq();

When inserting data:

INSERT INTO public.users
(id,email,encrypted_password,sign_in_count,created_at,updated_at,code,company_id)
VALUES (672,'te...@test.com','bucefalo','0','2016-05-03
00:01:01','2016-05-03 00:01:01',default,'1');
INSERT INTO public.users
(id,email,encrypted_password,sign_in_count,created_at,updated_at,code,company_id)
VALUES (672,'te...@test.com','bucefalo','0','2016-05-03
00:01:01','2016-05-03 00:01:01','inserting my own data code
column','1');


   - On the first query, nothing happens to the users.code column. The
     column is null.

   - On the second query, I can see "inserting my own data code column"
     inserted into the code column. This means my trigger function is not
     working.. I don't know why.
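
(A quick sanity check at this point would be to confirm that the trigger and
the function were actually created at all: with the default
check_function_bodies setting, a stray END IF like the one above makes the
CREATE FUNCTION itself fail with a syntax error. A minimal catalog query,
assuming the table is public.users:)

SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'public.users'::regclass
  AND NOT tgisinternal;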


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-04 Thread drum.lu...@gmail.com
>
>
>
> 1) I may just be over-sensitive to this, but after Adrian Klaver referred
> you to a ten-year-old post that the above looks an awful lot similar to,
> it sure would be nice to see some attribution
> rather than claiming it as your own with "...what *I* did..."
>


I would expect a minimum of respect from the members of this list, but it
seems there is none. If someone needed my help, I'd never insult him/her
like you guys are doing.

If my questions are too "childish" for you, please do not answer them. Ignore
the emails... Isn't that simple?

Talking to me the way you guys are talking is what I call "a ten-year-old
post".


3) Not sure you need a separate company_seq table. Since there is a
> one-to-one relation between company and company_seqs, put the last_seq
> column in the company table.


There is no company table, my friend.


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-04 Thread drum.lu...@gmail.com
>
> CREATE OR REPLACE FUNCTION users_code_seq()
>RETURNS "trigger" AS $$
> BEGIN
>
> IF (TG_OP = 'INSERT') THEN
> UPDATE public.company_seqs SET last_seq = (last_seq + 1) WHERE
> company_id = NEW.company_id;
>
> ELSEIF NEW.code IS NULL THEN
> SELECT last_seq INTO code FROM public.company_seqs WHERE
> company_id = NEW.company_id ORDER BY last_seq DESC;
>
> END IF;
>
> RETURN NEW;
>
> END;
> $$ LANGUAGE plpgsql;


- The command above does not insert the last_seq into the users.code column,
and I don't know why.


If I comment the line: *ELSEIF NEW.code IS NULL THEN*, the data is inserted
into the users.code column.

But as the customer can add data into that column, I only want to insert the
last_seq if he/she didn't provide anything.

So I need that *ELSEIF NEW.code IS NULL THEN*.

- What am I missing?

Thanks
Lucas


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-04 Thread drum.lu...@gmail.com
On 5 May 2016 at 16:56, drum.lu...@gmail.com  wrote:

> CREATE OR REPLACE FUNCTION users_code_seq()
>>RETURNS "trigger" AS $$
>> BEGIN
>>
>> IF (TG_OP = 'INSERT') THEN
>> UPDATE public.company_seqs SET last_seq = (last_seq + 1) WHERE
>> company_id = NEW.company_id;
>>
>> ELSEIF NEW.code IS NULL THEN
>> SELECT last_seq INTO code FROM public.company_seqs WHERE
>> company_id = NEW.company_id ORDER BY last_seq DESC;
>>
>> END IF;
>>
>> RETURN NEW;
>>
>> END;
>> $$ LANGUAGE plpgsql;
>
>
> - The command above, does not insert the last_seq into users.code column.
> And I don't know why.
>
>
> If I comment the line: *ELSEIF NEW.code IS NULL THEN*, the data is
> inserted into the users.code column.
>
> But as the customer can add data into that column, I only insert the
> last_seq if he/she didn't insert anything.
>
> So I need that *ELSEIF NEW.code IS NULL THEN*.
>
> - What am I missing?
>
> Thanks
> Lucas
>



I got it.. it's working now.
Thanks


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-04 Thread drum.lu...@gmail.com
I'm just having a problem when doing:

INSERT INTO public.users
> (id,email,encrypted_password,sign_in_count,created_at,updated_at,company_id)
> VALUES (66,'tes...@test.com','password','0','2016-05-03
> 00:01:01','2016-05-03 00:01:01','15');


- see that I'm not providing the "code" column value? If I run the query
above, I get the following error:

> ERROR:  query returned no rows
> CONTEXT:  PL/pgSQL function users_code_seq() line 7 at SQL statement


- If I include the code column with a default value:

> INSERT INTO public.users
> (id,email,encrypted_password,sign_in_count,created_at,updated_at,code,company_id)
> VALUES (4,'te...@test.com','password','0','2016-05-03
> 00:01:01','2016-05-03 00:01:01',default,'2');

I get the same error

- Please, if anyone can help with that.. I'd appreciate it.

*The final function code is:*

CREATE OR REPLACE FUNCTION users_code_seq()
>RETURNS "trigger" AS $$
> DECLARE code character varying;
> BEGIN
> IF NEW.code IS NULL THEN
> SELECT client_code_increment INTO STRICT NEW.code FROM
> public.companies WHERE id = NEW.id ORDER BY client_code_increment DESC;
> END IF;
> IF (TG_OP = 'INSERT') THEN
> UPDATE public.companies SET client_code_increment =
> (client_code_increment + 1) WHERE id = NEW.id;
> END IF;
> RETURN NEW;
> END;
> $$ LANGUAGE plpgsql;



*companies.client_code_increment:*

ALTER TABLE public.companies ADD COLUMN client_code_increment integer;
> ALTER TABLE public.companies ALTER COLUMN client_code_increment SET NOT
> NULL;
> ALTER TABLE public.companies ALTER COLUMN client_code_increment SET
> DEFAULT 1000;


*Trigger:*

> CREATE TRIGGER tf_users_code_seq
>BEFORE INSERT
>ON public.users
>FOR EACH ROW
>EXECUTE PROCEDURE users_code_seq();


*Tests I'm doing:*

*1 - Insert data into companies table:*

> INSERT INTO
> public.companies(id,name,created_at,updated_at,client_code_increment)
> VALUES (1,'Company 1','2016-05-03 00:01:01','2016-05-03 00:01:01',default);
> - *PASS*
>
> INSERT INTO
> public.companies(id,name,created_at,updated_at,client_code_increment)
> VALUES (2,'Company 2','2016-05-03 00:01:01','2016-05-03 00:01:01',default);
> - *PASS*


*2 - insert data into users table:*

> INSERT INTO public.users
> (id,email,encrypted_password,sign_in_count,created_at,updated_at,code,company_id)
> VALUES (1,'te...@test.com','password','0','2016-05-03
> 00:01:01','2016-05-03 00:01:01','default','2'); - *PASS*
>
> INSERT INTO public.users
> (id,email,encrypted_password,sign_in_count,created_at,updated_at,code,company_id)
> VALUES (2,'te...@test.com','password','0','2016-05-03
> 00:01:01','2016-05-03 00:01:01',default,'1'); - *NO PASS*
>
> INSERT INTO public.users
> (id,email,encrypted_password,sign_in_count,created_at,updated_at,code,company_id)
> VALUES (3,'te...@test.com','password','0','2016-05-03
> 00:01:01','2016-05-03 00:01:01',default,'2'); - *NO PASS*


Cheers


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-05 Thread drum.lu...@gmail.com
>
>
>>  SELECT client_code_increment INTO STRICT NEW.code FROM
>> public.companies WHERE id =
>> NEW.id ORDER BY client_code_increment DESC;
>>
>
>
>
> I am pretty sure the above line is wrong. NEW.id refers to users.id, not
> the companies.id. Also, the implementation presents a potential race
> condition, e.g., if two different sessions attempt an insert almost
> simultaneously.


I don't think so..
That's because if I change that to company_id, I get the error:

ERROR:  column "company_id" does not exist



>
>
>
>
>  END IF;
>>  IF (TG_OP = 'INSERT') THEN
>>  UPDATE public.companies SET client_code_increment =
>> (client_code_increment + 1) WHERE
>> id = NEW.id;
>>
>
>
> Ditto w.r.t. NEW.id.
>
>>
>>

Same as above


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-05 Thread drum.lu...@gmail.com
On 6 May 2016 at 02:29, David G. Johnston 
wrote:

> On Thu, May 5, 2016 at 3:54 AM, Alban Hertroys  wrote:
>
>>
>> > On 05 May 2016, at 8:42, drum.lu...@gmail.com wrote:
>>
>> > The final function code is:
>> >
>> > CREATE OR REPLACE FUNCTION users_code_seq()
>> >RETURNS "trigger" AS $$
>> > DECLARE code character varying;
>> > BEGIN
>> > IF NEW.code IS NULL THEN
>> > SELECT client_code_increment INTO STRICT NEW.code FROM
>> public.companies WHERE id = NEW.id ORDER BY client_code_increment DESC;
>>
>>
>> ^^^
>> There's your problem. I'm pretty sure the keyword STRICT isn't valid
>> there. It probably gets interpreted as a column name.
>>
>>
> No, it's a sanity check/assertion.  If that trips, it's because there is no
> company having a value of NEW.id in the public.companies table.  If that is
> OK then remove the STRICT, but if you are indeed expecting a record to be
> present and it is not, it is correctly telling you that there is a problem
> in the data.  Namely, that said company needs to be added to the table.
>
> David J.​
>
>
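
(For reference, STRICT on SELECT ... INTO makes PL/pgSQL raise an error when
the query returns no row, or more than one row, instead of silently leaving
the target NULL. A minimal, self-contained illustration of the "query
returned no rows" error:)

DO $$
DECLARE v int;
BEGIN
    SELECT 1 INTO STRICT v WHERE false;   -- no row, so this raises an error
EXCEPTION
    WHEN no_data_found THEN
        RAISE NOTICE 'SELECT INTO STRICT found no row';
END $$;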


After taking off the "STRICT", the errors were gone, but it's still not
working. Please have a look below.


If I use the other table:

CREATE TABLE public.company_seqs
> (company_id BIGINT NOT NULL,
> last_seq BIGINT NOT NULL DEFAULT 1000,
> CONSTRAINT company_seqs_pk PRIMARY KEY (company_id)
> );


It works fine.. the problem is when I try to use the companies table..
which already exists and to which I just added another column
named client_code_increment.

I haven't found the problem yet...


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-05 Thread drum.lu...@gmail.com
>
>
> 1) You attached users_code_seq() to a trigger on the users table.
>

yes


> 2) You have a where clause:  company_id = NEW.id
> 3) NEW refers to users
> 4) NEW.id is obstensibly a USER ID
>


No...

CREATE OR REPLACE FUNCTION users_code_seq()
>RETURNS "trigger" AS $$
> DECLARE code character varying;
> BEGIN
> IF (TG_OP = 'INSERT') THEN
> UPDATE public.companies SET client_code_increment =
> (client_code_increment + 1) WHERE id = NEW.id;
> END IF;
> IF NEW.code IS NULL THEN
> SELECT client_code_increment INTO NEW.code FROM public.companies
> as c WHERE c.id = NEW.id ORDER BY client_code_increment DESC;
> END IF;
> RETURN NEW;
> END;
> $$ LANGUAGE plpgsql;


I'm updating the companies table... The companies table has a column called
id.
If I change that to company_id, I get the error: column "company_id" does
not exist, because that column is in USERS and not in COMPANIES.

So as far as I can see here, that command is right.

- If I'm wrong, please, explain to me.

5) So you are basically saying: WHERE company_id = user_id
> 6) If you were to get match it would be entirely by accident - say because
> you used the same integer for both id values
>

Just to be clear here:

1 -

> ALTER TABLE public.companies ADD COLUMN client_code_increment integer;
> ALTER TABLE public.companies ALTER COLUMN client_code_increment SET NOT
> NULL;
> ALTER TABLE public.companies ALTER COLUMN client_code_increment SET
> DEFAULT 1000;



2 -

> CREATE OR REPLACE FUNCTION users_code_seq()
>RETURNS "trigger" AS $$
> DECLARE code character varying;
> BEGIN
> IF (TG_OP = 'INSERT') THEN
> UPDATE public.companies SET client_code_increment =
> (client_code_increment + 1) WHERE company_id = NEW.id;
> END IF;
> IF NEW.code IS NULL THEN
> SELECT client_code_increment INTO NEW.code FROM public.companies
> as c WHERE c.id = NEW.id ORDER BY client_code_increment DESC;
> END IF;
> RETURN NEW;
> END;
> $$ LANGUAGE plpgsql;


3 -

> CREATE TRIGGER tf_users_code_seq
>BEFORE INSERT
>ON public.users
>FOR EACH ROW
>EXECUTE PROCEDURE users_code_seq();


Re: [GENERAL] Function PostgreSQL 9.2

2016-05-05 Thread drum.lu...@gmail.com
It's working now...

Final code:

ALTER TABLE public.companies ADD COLUMN client_code_increment integer;
> ALTER TABLE public.companies ALTER COLUMN client_code_increment SET NOT
> NULL;
> ALTER TABLE public.companies ALTER COLUMN client_code_increment SET
> DEFAULT 1000;
> COMMIT TRANSACTION;
>
> BEGIN;
> -- Creating the function
> CREATE OR REPLACE FUNCTION users_code_seq()
>RETURNS "trigger" AS $$
> DECLARE code character varying;
> BEGIN
> -- if it's an insert, then we update the client_code_increment column
> value to +1
> IF (TG_OP = 'INSERT') THEN
> UPDATE public.companies SET client_code_increment =
> (client_code_increment + 1) WHERE id = NEW.company_id;
> END IF;
> -- IF the customer didn't provide a code value, we insert the next
> available from companies.client_code_increment
> IF NEW.code IS NULL THEN
> SELECT client_code_increment INTO NEW.code FROM public.companies
> as c WHERE c.id = NEW.company_id ORDER BY client_code_increment DESC;
> END IF;
> RETURN NEW;
> END;
> $$ LANGUAGE plpgsql;
> -- Creating the trigger
> CREATE TRIGGER tf_users_code_seq
>BEFORE INSERT
>ON public.users
>FOR EACH ROW
>EXECUTE PROCEDURE users_code_seq();
>
> COMMIT TRANSACTION;
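
As a possible refinement (just a sketch, not what was deployed above): the
race condition mentioned earlier, where two sessions could read the same
counter value between the UPDATE and the SELECT, can be avoided by doing
both in a single statement with RETURNING, and only bumping the counter when
a code is actually being assigned:

CREATE OR REPLACE FUNCTION users_code_seq()
   RETURNS trigger AS $$
BEGIN
    IF NEW.code IS NULL THEN
        -- increment the per-company counter and read the new value in one statement
        UPDATE public.companies
           SET client_code_increment = client_code_increment + 1
         WHERE id = NEW.company_id
        RETURNING client_code_increment INTO NEW.code;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;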


Thanks.
Lucas