[HACKERS] Query regarding permission on table_column%type access

2017-10-31 Thread Neha Sharma
Hi,

I have observed that even if a user does not have permission on a
table (created by some other user), that user can still create a function
whose parameter is declared as that table_column%type.

Scenario:
postgres=# create user u1 with login ;
CREATE ROLE
postgres=# create user u2 with login ;
CREATE ROLE
postgres=# \c - u1
You are now connected to database "postgres" as user "u1".
postgres=> create table t1(a int);
CREATE TABLE
postgres=> revoke ALL on t1 from u2;
REVOKE
postgres=> \c  - u2
You are now connected to database "postgres" as user "u2".
postgres=> create table t2(a int);
CREATE TABLE

postgres=> create or replace function foo(x t1.a%type) returns int as $$
BEGIN
return x + 1;
END;
$$ LANGUAGE plpgsql;
NOTICE:  type reference t1.a%TYPE converted to integer
CREATE FUNCTION
postgres=> select foo(1);
 foo
-----
   2
(1 row)
postgres=> select * from t1;
ERROR:  permission denied for relation t1


Is this expected behaviour? What if the owner does not want the column's
type to be referenced by other users?
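For comparison, here is a sketch of one possible workaround (untested here; the schema name private_s is made up): since the %TYPE reference still has to resolve the table by name, keeping the table in a schema the other role cannot use should block the lookup as well:

```sql
-- As u1: keep the table in a private schema.
CREATE SCHEMA private_s;
CREATE TABLE private_s.t1 (a int);
-- Explicit for clarity; a newly created schema grants no USAGE to PUBLIC anyway.
REVOKE ALL ON SCHEMA private_s FROM PUBLIC;

-- As u2: this should now fail at CREATE FUNCTION time with a permission
-- error on the schema, because u2 cannot resolve the name private_s.t1.
CREATE OR REPLACE FUNCTION foo(x private_s.t1.a%TYPE) RETURNS int AS $$
BEGIN
  RETURN x + 1;
END;
$$ LANGUAGE plpgsql;
```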

Thanks.
--
Regards,
Neha Sharma


Re: [HACKERS] [TRAP: FailedAssertion] causing server to crash

2017-07-21 Thread Neha Sharma
Here is the back trace from the core dump attached.

(gdb) bt
#0  0x7f4a71424495 in raise () from /lib64/libc.so.6
#1  0x7f4a71425c75 in abort () from /lib64/libc.so.6
#2  0x009dc18a in ExceptionalCondition (conditionName=0xa905d0
"!(TransactionIdPrecedesOrEquals(oldestXact,
ShmemVariableCache->oldestXid))",
errorType=0xa9044f "FailedAssertion", fileName=0xa90448 "clog.c",
lineNumber=683) at assert.c:54
#3  0x00524215 in TruncateCLOG (oldestXact=150036635,
oldestxid_datoid=13164) at clog.c:682
#4  0x006a6be8 in vac_truncate_clog (frozenXID=150036635,
minMulti=1, lastSaneFrozenXid=200562449, lastSaneMinMulti=1) at
vacuum.c:1197
#5  0x006a6948 in vac_update_datfrozenxid () at vacuum.c:1063
#6  0x007ce0a2 in do_autovacuum () at autovacuum.c:2625
#7  0x007cc987 in AutoVacWorkerMain (argc=0, argv=0x0) at
autovacuum.c:1715
#8  0x007cc562 in StartAutoVacWorker () at autovacuum.c:1512
#9  0x007e2acd in StartAutovacuumWorker () at postmaster.c:5414
#10 0x007e257e in sigusr1_handler (postgres_signal_arg=10) at
postmaster.c:5111
#11 
#12 0x7f4a714d3603 in __select_nocancel () from /lib64/libc.so.6
#13 0x007dde88 in ServerLoop () at postmaster.c:1717
#14 0x007dd67d in PostmasterMain (argc=3, argv=0x2eb8b00) at
postmaster.c:1361
#15 0x0071a218 in main (argc=3, argv=0x2eb8b00) at main.c:228
(gdb) print ShmemVariableCache->oldestXid
$3 = 548


Regards,
Neha Sharma

On Fri, Jul 21, 2017 at 11:01 AM, Thomas Munro <
thomas.mu...@enterprisedb.com> wrote:

> On Fri, Jul 21, 2017 at 4:16 PM, Neha Sharma
> <neha.sha...@enterprisedb.com> wrote:
> >
> > Attached is the core dump file received on PG 10beta2 version.
>
> Thanks Neha.  It'd be best to post the back trace and if possible
> print oldestXact and ShmemVariableCache->oldestXid from the stack
> frame for TruncateCLOG.
>
> The failing assertion in TruncateCLOG() has a comment that says
> "vac_truncate_clog already advanced oldestXid", but vac_truncate_clog
> calls SetTransactionIdLimit() to write ShmemVariableCache->oldestXid
> *after* it calls TruncateCLOG().  What am I missing here?
>
> What actually prevents ShmemVariableCache->oldestXid from going
> backwards anyway?  Suppose there are two or more autovacuum processes
> that reach vac_truncate_clog() concurrently.  They do a scan of
> pg_database whose tuples they access without locking through a
> pointer-to-volatile because they expect concurrent in-place writers,
> come up with a value for frozenXID, and then arrive at
> SetTransactionIdLimit() in whatever order and clobber
> ShmemVariableCache->oldestXid.  What am I missing here?
>
> --
> Thomas Munro
> http://www.enterprisedb.com
>


Re: [HACKERS] [TRAP: FailedAssertion] causing server to crash

2017-07-20 Thread Neha Sharma
Hi,

Attached is the core dump file received on PG 10beta2 version.

 myfile.tgz
<https://drive.google.com/a/enterprisedb.com/file/d/0B4_zX36siXF0MHJTTjZHZjNtN1E/view?usp=drive_web>

Regards,
Neha Sharma

On Thu, Jul 20, 2017 at 2:45 PM, Neha Sharma <neha.sha...@enterprisedb.com>
wrote:

>
>
> Regards,
> Neha Sharma
>
> On Thu, Jul 20, 2017 at 1:28 PM, Craig Ringer <cr...@2ndquadrant.com>
> wrote:
>
>> On 20 July 2017 at 15:00, Neha Sharma <neha.sha...@enterprisedb.com>
>> wrote:
>>
>>> Hi Craig,
>>>
>>> I had done a fresh initdb; the default parameter configuration was used.
>>> I was setting a few parameters at startup with the command below.
>>>
>>> ./postgres -d postgres -c shared_buffers=$shared_bufs -N 200 -c
>>> min_wal_size=15GB -c max_wal_size=20GB -c checkpoint_timeout=900 -c
>>> maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 &
>>>
>>> Now I have modified the script a bit with Robert's suggestion as below.
>>> Instead of starting the server with the postgres binary, I have set the
>>> parameters in the conf file and am starting it with pg_ctl. I am waiting
>>> for the results; once the core dump is generated I will share the details.
>>>
>>
>> Thanks.
>>
>> To verify that you do get a coredump, you might want to consider sending
>> a kill -SEGV to a backend and make sure that it actually dumps core and you
>> can find the core.
>>
>> Ideally you'd actually set the coredumps to include shmem (see
>> coredump_filter in http://man7.org/linux/man-pages/man5/core.5.html),
>> but with 8GB shared_buffers that may not be practical. It'd be very useful
>> if possible.
>>
>> If this is wraparound-related, as it appears to be, you might get faster
>> results by using a custom pgbench script for one or more workers that just
>> runs txid_current() a whole lot. Or jump the server's xid space forward.
>>
> Thanks. I will put the suggestions together and share the results.
>
>>
>> I've got a few other things on right now but I'll keep an eye out and
>> hope for a core dump.
>>
>> --
>>  Craig Ringer   http://www.2ndQuadrant.com/
>>  PostgreSQL Development, 24x7 Support, Training & Services
>>
>
>


Re: [HACKERS] [TRAP: FailedAssertion] causing server to crash

2017-07-20 Thread Neha Sharma
Regards,
Neha Sharma

On Thu, Jul 20, 2017 at 1:28 PM, Craig Ringer <cr...@2ndquadrant.com> wrote:

> On 20 July 2017 at 15:00, Neha Sharma <neha.sha...@enterprisedb.com>
> wrote:
>
>> Hi Craig,
>>
>> I had done a fresh initdb; the default parameter configuration was used.
>> I was setting a few parameters at startup with the command below.
>>
>> ./postgres -d postgres -c shared_buffers=$shared_bufs -N 200 -c
>> min_wal_size=15GB -c max_wal_size=20GB -c checkpoint_timeout=900 -c
>> maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 &
>>
>> Now I have modified the script a bit with Robert's suggestion as below.
>> Instead of starting the server with the postgres binary, I have set the
>> parameters in the conf file and am starting it with pg_ctl. I am waiting
>> for the results; once the core dump is generated I will share the details.
>>
>
> Thanks.
>
> To verify that you do get a coredump, you might want to consider sending a
> kill -SEGV to a backend and make sure that it actually dumps core and you
> can find the core.
>
> Ideally you'd actually set the coredumps to include shmem (see
> coredump_filter in http://man7.org/linux/man-pages/man5/core.5.html), but
> with 8GB shared_buffers that may not be practical. It'd be very useful if
> possible.
>
> If this is wraparound-related, as it appears to be, you might get faster
> results by using a custom pgbench script for one or more workers that just
> runs txid_current() a whole lot. Or jump the server's xid space forward.
>
Thanks. I will put the suggestions together and share the results.
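Craig's txid_current() idea can be sketched as a one-statement custom pgbench script (the file name xids.sql is made up here); each transaction then consumes one xid:

```sql
-- xids.sql: txid_current() forces assignment of a transaction id,
-- so every pgbench transaction advances the xid counter by one.
SELECT txid_current();
```

Running it as, say, `pgbench -n -f xids.sql -c 8 -j 8 -T 600 postgres` should chew through the xid space much faster than the read/write workload does.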

>
> I've got a few other things on right now but I'll keep an eye out and hope
> for a core dump.
>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


Re: [HACKERS] [TRAP: FailedAssertion] causing server to crash

2017-07-20 Thread Neha Sharma
Hi Craig,

I had done a fresh initdb; the default parameter configuration was used.
I was setting a few parameters at startup with the command below.

./postgres -d postgres -c shared_buffers=$shared_bufs -N 200 -c
min_wal_size=15GB -c max_wal_size=20GB -c checkpoint_timeout=900 -c
maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 &

Now I have modified the script a bit with Robert's suggestion as below.
Instead of starting the server with the postgres binary, I have set the
parameters in the conf file and am starting it with pg_ctl. I am waiting for
the results; once the core dump is generated I will share the details.

Test code snippet:

#Pre condition:
#Create and initialize the database and set the export PGDATA
export PGDATA=/home/centos/PG_sources/postgresql/inst/bin/data
export PGPORT=5432
export LD_LIBRARY_PATH=/home/centos/PG_sources/postgresql/inst/lib:$LD_LIBRARY_PATH
#for i in "scale_factor shared_buffers time_for_readings no_of_readings orig_or_patch"
for i in "300 8GB 1800 3"
do
    scale_factor=`echo $i | cut -d" " -f1`
    shared_bufs=`echo $i | cut -d" " -f2`
    time_for_reading=`echo $i | cut -d" " -f3`
    no_of_readings=`echo $i | cut -d" " -f4`

    # ---

    echo "Start of script for $scale_factor $shared_bufs " >> /home/centos/test_results.txt

    echo "== $run_bin =" >> /home/centos/test_results.txt

    for threads in 1 8 16 24 32 40 48 56 64 72 80 88 96 104 112 120 128
    #for threads in 8 16
    do
        #Start taking reading
        for ((readcnt = 1 ; readcnt <= $no_of_readings ; readcnt++))
        do
            echo "" >> /home/centos/test_results.txt
            echo $scale_factor, $shared_bufs, $threads, $threads, $time_for_reading Reading - ${readcnt} >> /home/centos/test_results.txt
            #start server
            ./pg_ctl -D data -c -l logfile start
            #./postgres -d postgres -c shared_buffers=$shared_bufs -N 200 -c min_wal_size=15GB -c max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 &
            sleep 5
            #drop and recreate database
            ./dropdb test
            ./createdb test
            #initialize database
            ./pgbench -i -s $scale_factor test
            sleep 5
            # Run pgbench
            ./pgbench -c $threads -j $threads -T $time_for_reading -M prepared test >> /home/centos/test_results.txt
            sleep 10
            ./psql -d test -c "checkpoint" >> /home/centos/test_results.txt
            ./pg_ctl stop
        done
    done

    sleep 1

    mv /home/centos/test_results.txt /home/centos/test_results_list_${scale_factor}_${shared_bufs}_rw.txt
done



Regards,
Neha Sharma

On Thu, Jul 20, 2017 at 11:23 AM, Craig Ringer <cr...@2ndquadrant.com>
wrote:

> On 19 July 2017 at 20:26, Neha Sharma <neha.sha...@enterprisedb.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I am getting a FailedAssertion while executing the attached script.
>>> However, I am not able to produce a core dump for it; the script runs in
>>> the background and takes around a day to produce the mentioned error.
>>>
>>> "TRAP: FailedAssertion("!(TransactionIdPrecedesOrEquals(oldestXact,
>>> ShmemVariableCache->oldestXid))", File: "clog.c", Line: 683)
>>> 2017-07-19 01:16:51.973 GMT [27873] LOG:  server process (PID 28084) was
>>> terminated by signal 6: Aborted
>>> 2017-07-19 01:16:51.973 GMT [27873] DETAIL:  Failed process was running:
>>> autovacuum: VACUUM pg_toast.pg_toast_13029 (to prevent wraparound)"
>>>
>>
>>
> What are the starting conditions of your postgres instance? Does your
> script assume a newly initdb'd instance with no custom configuration? If
> not, what setup steps/configuration precede your script run?
>
>
>
>
>
>> well short of the 2-million mark.
>>
>
> Er, billion.
>
> --
>  Craig Ringer   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


[HACKERS] pg_restore failed for foreign key constraint

2017-07-17 Thread Neha Sharma
Hi,

I am getting the below error while restoring data that has a foreign key
constraint.

./psql postgres
psql (10beta2)
Type "help" for help.

postgres=# create user test;
CREATE ROLE
postgres=# create schema test;
CREATE SCHEMA
postgres=# grant all on SCHEMA test to test;
GRANT
postgres=# set search_path to 'test';
SET
postgres=# CREATE TABLE test_tbl_fk (c1 INTEGER PRIMARY KEY);
CREATE TABLE
postgres=# CREATE TABLE test_tbl (c1 INTEGER PRIMARY KEY, c2 INTEGER, c3
VARCHAR,
postgres(#   CONSTRAINT test_tbl_fk_c1 FOREIGN KEY (c1) REFERENCES
test_tbl_fk(c1));
CREATE TABLE
postgres=# insert into test_tbl_fk values (1),(2),(3);
INSERT 0 3
postgres=# INSERT INTO test_tbl VALUES (1,1,'p11');
INSERT 0 1
postgres=# INSERT INTO test_tbl VALUES (2,2,'p11');
INSERT 0 1
postgres=# INSERT INTO test_tbl VALUES (3,3,'p11');
INSERT 0 1
postgres=# show  search_path ;
 search_path
-------------
 test
(1 row)

postgres=# \q
[edb@localhost bin]$ ./pg_dump -f a.dump -Fc postgres
[edb@localhost bin]$ ./createdb test1
[edb@localhost bin]$ ./pg_restore -d test1 -c -e a.dump
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2903; 2606 16399 FK
CONSTRAINT test_tbl test_tbl_fk_c1 edb
pg_restore: [archiver (db)] could not execute query: ERROR:  schema "test"
does not exist
Command was: ALTER TABLE ONLY test.test_tbl DROP CONSTRAINT
test_tbl_fk_c1;

Is this expected behaviour?
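For what it's worth, the failing statement comes from the -c (clean) phase: pg_restore emits DROP statements for objects recorded under schema test, but the freshly created test1 database has no such schema, and -e aborts on the first error. One possible workaround (a sketch; worth verifying against this exact dump) is pg_restore's --if-exists option, which requires -c and makes the emitted drops conditional:

```sql
-- What pg_restore -c emits today (fails: schema "test" does not exist):
ALTER TABLE ONLY test.test_tbl DROP CONSTRAINT test_tbl_fk_c1;

-- What pg_restore -c --if-exists -d test1 -e a.dump should emit instead,
-- so a missing object is skipped rather than raising an error:
ALTER TABLE IF EXISTS test.test_tbl DROP CONSTRAINT IF EXISTS test_tbl_fk_c1;
```

(Whether IF EXISTS also tolerates the missing schema in this case is worth double-checking.)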

-- 
Regards,
Neha Sharma


Re: [HACKERS] Proposal : For Auto-Prewarm.

2017-06-06 Thread Neha Sharma
 * until they finish their job to launch next one.
>
> IMHO, there's still a good bit of work needed here to make this sound
> like American English.  For example:
>
> - *It is a bgworker which automatically records information about
> blocks
> - *which were present in buffer pool before server shutdown and
> then
> - *prewarm the buffer pool upon server restart with those blocks.
> + *It is a bgworker process that automatically records information
> about
> + *blocks which were present in buffer pool before server
> shutdown and then
> + *prewarms the buffer pool upon server restart with those blocks.
>
> This construction "It is a..." without a clear referent seems to be
> standard in Indian English, but it looks wrong to English speakers
> from other parts of the world, or at least to me.
>
> + * Since there could be at max one worker who could do a prewarm,
> hence,
> + * acquiring locks is not required before setting
> skip_prewarm_on_restart.
>
> To me, adding a comma before hence looks like a significant
> improvement, but the word hence itself seems out-of-place.  Also, I'd
> change "at max" to "at most" and maybe reword the sentence a little.
> There's a lot of little things like this which I have tended be quite
> strict about changing before commit; I occasionally wonder whether
> it's really worth the effort.  It's not really wrong, it just sounds
> weird to me as an American.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>



-- 

Regards,

Neha Sharma


Re: [HACKERS] Gather Merge

2017-02-03 Thread Neha Sharma
Hi,

I have done some testing with the latest patch:

1) ./pgbench postgres -i -F 100 -s 20
2) update pgbench_accounts set filler = 'foo' where aid%10 = 0;
3) vacuum analyze pgbench_accounts;
4) set max_parallel_workers_per_gather = 4;
5) set max_parallel_workers = 4;

*Machine Configuration :-*
RAM:- 16GB
VCPU  :- 8
Disk :- 640 GB

Test case script with out-file attached.

*LCOV Report:*

                                           Line Coverage           Function Coverage
File Name                                  w/o tests   w/ tests    w/o tests   w/ tests
src/backend/executor/nodeGatherMerge.c     0.0 %       92.3 %      0.0 %       92.3 %
src/backend/commands/explain.c             65.5 %      68.4 %      81.7 %      85.0 %
src/backend/executor/execProcnode.c        92.5 %      95.1 %      100 %       100.0 %
src/backend/nodes/copyfuncs.c              77.2 %      77.6 %      73.0 %      73.4 %
src/backend/nodes/outfuncs.c               32.5 %      35.9 %      31.9 %      36.2 %
src/backend/nodes/readfuncs.c              62.7 %      68.2 %      53.3 %      61.7 %
src/backend/optimizer/path/allpaths.c      93.0 %      93.4 %      100 %       100 %
src/backend/optimizer/path/costsize.c      96.7 %      96.8 %      100 %       100 %
src/backend/optimizer/plan/createplan.c    89.9 %      91.2 %      95.0 %      96.0 %
src/backend/optimizer/plan/planner.c       95.1 %      95.2 %      97.3 %      97.3 %
src/backend/optimizer/plan/setrefs.c       94.7 %      94.7 %      97.1 %      97.1 %
src/backend/optimizer/plan/subselect.c     94.1 %      94.1 %      100 %       100 %
src/backend/optimizer/util/pathnode.c      95.6 %      96.1 %      100 %       100 %
src/backend/utils/misc/guc.c               67.4 %      67.4 %      91.9 %      91.9 %

On Wed, Feb 1, 2017 at 7:02 PM, Rushabh Lathia <rushabh.lat...@gmail.com>
wrote:

> Due to the recent commit below, the patch does not apply cleanly on the
> master branch.
>
> commit d002f16c6ec38f76d1ee97367ba6af3000d441d0
> Author: Tom Lane <t...@sss.pgh.pa.us>
> Date:   Mon Jan 30 17:15:42 2017 -0500
>
> Add a regression test script dedicated to exercising system views.
>
> Please find attached latest patch.
>
>
>
> On Wed, Feb 1, 2017 at 5:55 PM, Rushabh Lathia <rushabh.lat...@gmail.com>
> wrote:
>
>> I am sorry for the delay; here is the latest rebased patch.
>>
>> My colleague Neha Sharma reported one regression with the patch, where
>> the explain output for the Sort node under Gather Merge always showed the
>> cost as zero:
>>
>> explain analyze select '' AS "xxx" from pgbench_accounts  where filler
>> like '%foo%' order by aid;
>>                                                       QUERY PLAN
>> ------------------------------------------------------------------------------------------------------------------------
>>  Gather Merge  (cost=47169.81..70839.91 rows=197688 width=36) (actual
>> time=406.297..653.572 rows=20 loops=1)
>>Workers Planned: 4
>>Workers Launched: 4
>>->  Sort  (*cost=0.00..0.00 rows=0 width=0*) (actual
>> time=368.945..391.124 rows=4 loops=5)
>>  Sort Key: aid
>>  Sort Method: quicksort  Memory: 3423kB
>>  ->  Parallel Seq Scan on pgbench_accounts  (cost=0.00..42316.60
>> rows=49422 width=36) (actual time=296.612..338.873 rows=4 loops=5)
>>Filter: (filler ~~ '%foo%'::text)
>>Rows Removed by Filter: 36
>>  Planning time: 0.184 ms
>>  Execution time: 734.963 ms
>>
>> This patch also fixes that issue.
>>
>>
>>
>>
>> On Wed, Feb 1, 2017 at 11:27 AM, Michael Paquier <
>> michael.paqu...@gmail.com> wrote:
>>
>>> On Mon, Jan 23, 2017 at 6:51 PM, Kuntal Ghosh
>>> <kuntalghosh.2...@gmail.com> wrote:
>>> > On Wed, Jan 18, 2017 at 11:31 AM, Rushabh Lathia
>>> > <rushabh.lat...@gmail.com> wrote:
>>> >>
>>> > The patch needs a rebase after the commit 69f4b9c85f168ae006929eec4.
>>>
>>> Is an update going to be provided? I have moved this patch to next CF
>>> with "waiting on author" as status.
>>> --
>>> Michael
>>>
>>
>>
>>
>> --
>> Rushabh Lathia
>>
>
>
>
> --
> Rushabh Lathia
>
>
>
>


-- 

Regards,

Neha Sharma
\! pgbench -F 100 -i -s 20 regression

\set VERBOSITY verbose

update pgbench_accounts set filler = 'foo' where aid%10 = 0;

vacuum analyze pgbench_accounts;

set max_parallel_workers_per_gather = 4;

set max_parallel_workers = 4;

explain analyse select count(b.aid) from pgbench_accounts CROSS JOIN pgbench_accounts as a CROSS JOIN pgbench_accounts as b where a.filler like '%foo%' group by a.aid,b.abalance;

explain analyze select sum(aid) from pgbench_accounts where aid % 25= 0 grou