On Thu, Sep 29, 2016 at 1:45 AM, Haribabu Kommi <kommi.harib...@gmail.com> wrote:
> Currently, the SQL stats is a fixed-size array of counters that tracks all the
> ALTER cases as a single counter. So when sending the stats from the backend to
> the stats collector at the end of the transaction, the cost is the same because
> of its fixed size, and the overhead of sending and reading the stats is minimal.
>
> With the following approach, I feel it is possible to support counters at the
> command tag level.
>
> Add a global and a local hash to keep track of the counters, using the
> command tag as the key; this hash table grows dynamically whenever a new
> type of SQL command gets executed. The local hash data is passed to the
> stats collector whenever the transaction gets committed.
>
> The problem I am thinking of is that sending data from the hash, and
> populating the hash from the stats file for all the command tags, adds
> some overhead.
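The proposal above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not PostgreSQL's actual stats code: the names (`SqlStatEntry`, `sqlstat_local_*`) and the fixed bucket count are invented, and the commit-time send to the stats collector is only hinted at in a comment, since that send path is exactly the overhead under discussion.

```c
#include <stdlib.h>
#include <string.h>

#define SQLSTAT_NBUCKETS 64

/* One counter per command tag, chained per hash bucket. */
typedef struct SqlStatEntry
{
	char		tag[64];		/* command tag, e.g. "ALTER TABLE" */
	long		count;			/* executions seen locally */
	struct SqlStatEntry *next;	/* collision chain */
} SqlStatEntry;

/* Backend-local hash; grows an entry the first time a tag is seen. */
static SqlStatEntry *local_hash[SQLSTAT_NBUCKETS];

static unsigned
sqlstat_hash(const char *tag)
{
	unsigned	h = 5381;

	while (*tag)
		h = h * 33 + (unsigned char) *tag++;
	return h % SQLSTAT_NBUCKETS;
}

/* Bump the local counter for a command tag, creating the entry on demand. */
void
sqlstat_local_increment(const char *tag)
{
	unsigned	b = sqlstat_hash(tag);
	SqlStatEntry *e;

	for (e = local_hash[b]; e != NULL; e = e->next)
	{
		if (strcmp(e->tag, tag) == 0)
		{
			e->count++;
			return;
		}
	}
	e = calloc(1, sizeof(SqlStatEntry));
	strncpy(e->tag, tag, sizeof(e->tag) - 1);
	e->count = 1;
	e->next = local_hash[b];
	local_hash[b] = e;
}

/* Look up the current local count for a tag (0 if never executed). */
long
sqlstat_local_get(const char *tag)
{
	SqlStatEntry *e;

	for (e = local_hash[sqlstat_hash(tag)]; e != NULL; e = e->next)
		if (strcmp(e->tag, tag) == 0)
			return e->count;
	return 0;
}

/* At commit, each entry would be serialized into a message to the stats
 * collector and the local hash reset; that per-entry send and the matching
 * read from the stats file are the costs the thread is weighing. */
```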
Yeah, I'm not very excited about that overhead. This seems useful as far as it goes, but I don't really want to incur measurable overhead when it's in use. Having a hash table rather than a fixed array of slots means that you have to pass this through the stats collector rather than updating shared memory directly, which is fairly heavyweight. If each backend could have its own copy of the slot array and just update that, and readers added up the values across the whole array, this could be done without any locking at all, and it would generally be much lighter-weight than this approach.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
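The lock-free scheme suggested here can be sketched like this. It is an illustrative toy, not actual PostgreSQL code: `MAX_BACKENDS` stands in for `max_connections`, `NUM_TAGS` for the command-tag list, and in the real design the array would live in shared memory rather than a static variable. Because each slot has exactly one writer (its owning backend), relaxed atomics suffice and no lock is taken on either side; readers just tolerate a slightly stale total.

```c
#include <stdatomic.h>

#define MAX_BACKENDS 8			/* stand-in for max_connections */
#define NUM_TAGS     4			/* stand-in for the number of command tags */

/* One row of counters per backend; in PostgreSQL this would be
 * allocated in shared memory at startup. Static storage zero-fills it. */
static _Atomic long slots[MAX_BACKENDS][NUM_TAGS];

/* Writer side: a backend bumps only its own row, so no locking is
 * needed; relaxed ordering is enough for a single-writer counter. */
void
slot_increment(int backend_id, int tag)
{
	atomic_fetch_add_explicit(&slots[backend_id][tag], 1,
							  memory_order_relaxed);
}

/* Reader side: a snapshot of the total for one tag is the sum of that
 * column across all backends' rows. */
long
slot_total(int tag)
{
	long		total = 0;

	for (int i = 0; i < MAX_BACKENDS; i++)
		total += atomic_load_explicit(&slots[i][tag],
									  memory_order_relaxed);
	return total;
}
```

The trade-off versus the hash approach: the slot array is fixed-size (every backend pays for every tag, used or not), but updates are a single atomic add with no message to the stats collector at all.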