Re: [GENERAL] Postgres - performance problem

2015-03-25 Thread lucas.g...@gmail.com
I'm not an expert either, but your data model sounds very broken as
well...  I guess it's possible that each query would need all 417 columns
but it seems unlikely...

If that were normalized into 'n' tables then each query would be returning
a whole lot less data...

I've never heard of a database being stuffed into one table before...

What is your use case / requirement for one table?
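For illustration only, a hypothetical split might look like the following. All column names here are invented, since the real schema wasn't posted; the idea is that a bulk update then touches one narrow table (and its few indexes) instead of a 417-column row:

```sql
-- Hypothetical: a narrow core table plus one-to-one side tables
-- for groups of columns that are rarely queried together.
CREATE TABLE master_items_core (
    item_id bigint PRIMARY KEY,
    name    text,
    temp1   text DEFAULT ''
);

CREATE TABLE master_items_feedback (
    item_id        bigint PRIMARY KEY
                   REFERENCES master_items_core (item_id),
    feedback_to_de text
);

-- This update now rewrites only the small table's rows and indexes:
UPDATE master_items_feedback SET feedback_to_de = 'Yes';
```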

Gary

On Wed, Mar 25, 2015 at 4:50 AM, John McKown 
wrote:

> On Wed, Mar 25, 2015 at 4:19 AM, ginkgo36  wrote:
> > Hi all,
> > I have 1 table with:
> > - 417 columns
> > - 600,000 rows of data
> > - 34 indexes
> >
> > When I run queries on this table, they take very long. For example:
> >
> > update master_items set
> > temp1 = '' where temp1 <> '' --Query returned successfully: 435214 rows
> > affected, 1016137 ms execution time.
> >
> > alter table master_items add "TYPE-DE" varchar default ''
> > -- Query returned successfully with no result in 1211019 ms.
> >
> > update master_items set "feedback_to_de" = 'Yes'
> > --Query returned successfully: 591268 rows affected, 1589335 ms execution
> > time.
> >
> > Can you help me find any way to increase performance?
> >
> > Thanks all
>
> I am not any kind of a performance expert. But the first thing that I
> would try is an EXPLAIN. If you're using the psql line command, I'd do
> something like:
>
> BEGIN; -- BEGIN TRANSACTION
> EXPLAIN (ANALYZE, TIMING) UPDATE master_items SET temp1 = '' WHERE temp1 <> '';
> ROLLBACK;
>
> I'd put the EXPLAIN in a transaction that I roll back so that I
> wouldn't actually update anything permanently. Also, doing a simple
> ANALYZE on the table might help some. I'm not sure.
>
> ANALYZE master_items;
>
> --
> If you sent twitter messages while exploring, are you on a textpedition?
>
> He's about as useful as a wax frying pan.
>
> 10 to the 12th power microphones = 1 Megaphone
>
> Maranatha! <><
> John McKown
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] :Posgres - performance problem

2015-03-25 Thread lucas.g...@gmail.com
"I need to gather data into one table for consistency and ease of export and
import. It would be OK to split the data into smaller tables, but then for
export/import/update I must execute queries on a lot of tables. And that way
leads to inconsistent data if I forget to update/export/import one or more
tables. It is terrible."

That statement is wrong on many levels:

   - "Easy for import and export..."
      - Multiple tables and one table are identical for import/export
        purposes.
   - "export/import/update, I must execute queries on a lot of tables."
      - That's what SQL is for...
   - "leads to inconsistent data if I forget to update/export/import on 1
     or more tables. It is terrible."
      - You build your process once and test it... Additional runs of the
        process are 'free'...
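As a sketch of such a process: several tables can be exported consistently in one transaction, since every statement then sees the same snapshot. Table names and file paths here are hypothetical, and COPY ... TO a server-side file needs appropriate privileges (from psql you could use \copy instead):

```sql
BEGIN ISOLATION LEVEL REPEATABLE READ;
-- All COPY statements below share one snapshot, so the exported
-- files are mutually consistent even if other sessions write.
COPY items_core     TO '/tmp/items_core.csv'     WITH (FORMAT csv);
COPY items_feedback TO '/tmp/items_feedback.csv' WITH (FORMAT csv);
COMMIT;
```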

And as someone else mentioned, the 34 indexes are additional tables anyway.
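Since index maintenance is often the dominant cost of a bulk UPDATE, one way to see whether any of those 34 indexes are dead weight is the statistics views. A sketch (counters are cumulative since the last stats reset, so interpret with care):

```sql
-- Indexes that have never been scanned are candidates for dropping
-- before a bulk load and recreating afterwards.
SELECT relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY relname, indexrelname;
```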


There is probably a way to optimize your current system...  There often is
no matter how horrible the implementation...

But I would start by normalizing that as much as possible and then running
performance tests against the normalized schema.  There are lots of tools to
do that... But they probably aren't much help with your current schema.
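pgbench is one such tool: it can time an arbitrary statement file against a database. A sketch (database name and script are hypothetical; this runs the real UPDATE, so point it at a test copy):

```shell
# Time a custom statement: 4 clients, 60 seconds, no initial vacuum.
echo "UPDATE master_items SET temp1 = '' WHERE temp1 <> '';" > bench.sql
pgbench -n -f bench.sql -c 4 -T 60 mydb
```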

Gary

On Wed, Mar 25, 2015 at 7:43 AM, Adrian Klaver 
wrote:

> On 03/25/2015 07:30 AM, ginkgo36 wrote:
>
>> @Gary
>> I'm working with a large data set; because of the demands of the job I
>> export/import/update data in this table every day.
>>
>>   I guess it's possible that each query would need all 417 columns but it
>> seems unlikely... --> Yes, not at all but 2/3 of 417 columns :)
>>
>
> There is also the matter of the 34 indexes, which can be thought of as 34
> auxiliary tables that need to be kept in sync with the main table.
>
>
>> I need to gather data into one table for consistency and ease of export
>> and import. It would be OK to split the data into smaller tables, but
>> then for export/import/update I must execute queries on a lot of tables.
>> And that way leads to inconsistent data if I forget to update/export/
>> import one or more tables. It is terrible.
>>
>>
>> @John
>>
>> I will learn more about it. Thanks a lot.
>>
>> @All: If you have any suggestions, please let me know. Thanks for your help
>>
>>
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>


[GENERAL] Question regarding log files in postgresql

2014-07-21 Thread lucas.g...@gmail.com
I've rolled postgres to 'production', so box is in prod, but the
applications aren't active yet.

When I rolled the new box I'm seeing 2 log files:

postgresql-YYYY-MM-dd_hhmmss.log
AND
postgresql-9.3-main.log

The 'main' log doesn't appear to be used, however the time stamped file is.

Does this make sense?  Is it normal?

My logrotate entry makes some assumptions that may be incorrect now.

Gary


Re: [GENERAL] Question regarding log files in postgresql

2014-07-22 Thread lucas.g...@gmail.com
Hi again, it's deployed to an Ubuntu image, but we're using Chef to
configure the box.  I know for a fact that Chef starts and stops PostgreSQL
multiple times during the install, so I'm pretty sure the MAIN log is from
the initial start; thank you too, David G, for pointing that out.

My next question was how Postgres picks the filename, but after doing a
little investigation that mystery is cleared up too.
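For anyone else hitting this, the live logging settings can be checked directly from psql (a sketch; values shown in comments are typical defaults, not necessarily what your box reports):

```sql
SHOW logging_collector;  -- 'on' means Postgres itself writes the files
SHOW log_directory;      -- where the collector puts them
SHOW log_filename;       -- e.g. 'postgresql-%Y-%m-%d_%H%M%S.log'
SHOW log_destination;    -- 'stderr' by default
```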

Thank you both for responding.  Now I just have to modify our logrotate
script + blow away the accumulated cruft.
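For reference, a logrotate stanza for the timestamped files might look like the following; paths and retention are assumptions for my layout, and if the logging collector's own rotation (log_rotation_age/log_rotation_size) is active, logrotate may be redundant:

```
/var/log/postgresql/postgresql-*.log {
    weekly
    rotate 8
    missingok
    notifempty
    compress
}
```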

Gary


On Mon, Jul 21, 2014 at 5:34 PM, Adrian Klaver 
wrote:

> On 07/21/2014 02:21 PM, lucas.g...@gmail.com wrote:
>
>>
>>
>> I've rolled postgres to 'production', so box is in prod, but the
>> applications aren't active yet.
>>
>> When I rolled the new box I'm seeing 2 log files:
>>
>> postgresql-YYYY-MM-dd_hhmmss.log
>> AND
>> postgresql-9.3-main.log
>>
>
> Are they both in the same location?
>
> I'm going to say you used a Debian/Ubuntu package correct?
>
>
>
>> The 'main' log doesn't appear to be used, however the time stamped file
>> is.
>>
>
> That is the log file set up by the packaging system.
>
>
>
>> Does this make sense?  Is it normal?
>>
>
> My guess is either:
>
> 1) You are running a Debian style package and went into postgresql.conf
> and changed the log settings so you are not using the package logging.
>
> or
>
> 2) You have/had a Debian package installed and you did a source(or other
> install) and you have two logs as a result.
>
>
> I think it would be a good idea to investigate further, for no other
> reason than to ensure that you really do not have competing systems at
> work. It might save you some headaches down the road.
>
>
>
>> My logrotate entry makes some assumptions that may be incorrect now.
>>
>
> Another reason to verify what is going on.
>
>
>> Gary
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>