You stated you wanted to dump just one table, but pg_dumpall dumps every
database in the cluster!

So if you truly want to dump just a single table, then change your command
to:

pg_dump -t RECORDER --format=t -w postgres > /tmp/dump
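
Note that -w (--no-password) never prompts for a password, so unless you have
a ~/.pgpass entry for that database, pg_dump will fail with the "fe_sendauth:
no password supplied" error you saw. Also, a tar archive member cannot exceed
8 GB, which is what "archive member too large for tar format" means. If you
hit that again, the custom format has no such size limit and is compressed by
default; something along these lines should work (restore it with pg_restore
rather than psql):

pg_dump -t RECORDER --format=c -w postgres > /tmp/recorder.dump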

Also, please explain why you cannot provide the other requested information.
Are you not the DBA? If that is the case, then I can only encourage you to
consult with him/her.
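
Since you also asked further down the thread about automating this overnight:
a cron entry is the usual approach. A minimal sketch, assuming the postgres
OS user, a ~/.pgpass entry so -w works non-interactively, and whatever
directory your pg_dump actually lives in (the binary path below is just a
placeholder), would be:

0 2 * * * /usr/pgsql-9.4/bin/pg_dump -t RECORDER --format=c -w postgres > /tmp/recorder-$(date +\%Y\%m\%d).dump

The custom format is already compressed, so there is no need to pipe it
through gzip, and the % signs must be backslash-escaped inside a crontab, as
in the date command you used with pg_dumpall.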

On Thu, Oct 15, 2015 at 2:40 PM, anj patnaik <patn...@gmail.com> wrote:

> It's a Linux machine with 8 CPUs. I don't have the other details.
>
> I get archive member too large for tar format.
>
> Is there a recommended command/options when dealing with very large
> tables, i.e. 150K rows where roughly half the rows contain about 22 MB of
> data each?
>
> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w  > /tmp/dump
> pg_dump: [archiver (db)] connection to database "postgres" failed:
> fe_sendauth: no password supplied
> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t   > /tmp/dump
> Password:
> pg_dump: [tar archiver] archive member too large for tar format
> -bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
> -bash: pg_dumpall: command not found
> -bash: tmpdb.out-2015101510.gz: Permission denied
> -bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>
>
> Thank you so much for replying and accepting my post to this NG.
>
> On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson <melvin6...@gmail.com>
> wrote:
>
>> In addition to clarifying exactly what you mean by "a long time" to pg_dump
>> the 77K rows in your table, please answer the following:
>>
>> What is your O/S and how much memory is on your system?
>> How many CPUs are in your system?
>> Also, what is your hard disk configuration?
>> What other applications are running simultaneously with pg_dump?
>> What are the values of shared_buffers & maintenance_work_mem in
>> postgresql.conf?
>>
>> On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <
>> adrian.kla...@aklaver.com> wrote:
>>
>>> On 10/14/2015 06:39 PM, anj patnaik wrote:
>>>
>>>> Hello,
>>>>
>>>> I recently downloaded postgres 9.4 and I have a client application that
>>>> runs in Tcl that inserts to the db and fetches records.
>>>>
>>>> For the majority of the time, the app will connect to the server to do
>>>> insert/fetch.
>>>>
>>>> For occasional use, we want to remove the requirement to have a server
>>>> db and just have the application retrieve data from a local file.
>>>>
>>>> I know I can use pg_dump to export the tables. The questions are:
>>>>
>>>> 1) Is there an in-memory or file-based db instance I can create that is
>>>> loaded with the dump file? This way the app code doesn't have to change.
>>>>
>>>
>>> No.
>>>
>>>
>>>> 2) Does pg support an embedded db?
>>>>
>>>
>>> No.
>>>
>>>> 3) Or is my best option to convert the dump to sqlite, then import the
>>>> sqlite file and have the app read that embedded db?
>>>>
>>>
>>> Sqlite tends to follow Postgres conventions, so you might be able to use
>>> the pg_dump output directly if you use --inserts or --column-inserts:
>>>
>>> http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
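>>>
>>> As a rough example (assuming the table really is named RECORDER and that
>>> sqlite3 is installed), something like this might get you most of the way,
>>> though you will likely have to strip the SET lines and any Postgres-specific
>>> types from the top of the dump before SQLite will accept it:
>>>
>>> pg_dump -t RECORDER --column-inserts postgres > recorder.sql
>>> sqlite3 recorder.db < recorder.sql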
>>>
>>>
>>>> Finally, I am noticing pg_dump takes a lot of time to create a dump of
>>>> my table. Right now, the table has 77K rows. Are there any ways to
>>>> create automated batch files to create dumps overnight and do so
>>>> quickly?
>>>>
>>>
>>> Define "a long time".
>>>
>>> What is the pg_dump command you are using?
>>>
>>> Sure, use a cron job.
>>>
>>>
>>>> Thanks for your inputs!
>>>>
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>>
>>>
>>
>>
>>
>> --
>> *Melvin Davidson*
>> I reserve the right to fantasize.  Whether or not you
>> wish to share my fantasy is entirely up to you.
>>
>
>


-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.
