On 08/20/2018 08:56 AM, Nicolas Paris wrote:
Can I split a large file into multiple files and then run copy using
each file.
AFAIK, the COPY command locks the table[1], while there is no mention of this
in the documentation[2].
[1] is from Postgres 7.1 (17 years ago). I suspect the conditions have
Greetings,
* bricklen (brick...@gmail.com) wrote:
> On Tue, Aug 14, 2018 at 10:45 AM Edmundo Robles wrote:
> > Is it safe to upgrade from pg 9.3 to pg 10 directly using pg_upgrade, or
> > is it better to upgrade, with pg_upgrade, from 9.3 -> 9.4 -> 9.5 -> 9.6 -> 10?
>
> Using pg_upgrade, it is definitely possible to upgrade 9.3 to 10 in one jump.
On 18.08.18 11:36, kpi6...@gmail.com wrote:
[snip]
What can I do to improve the performance of the regular query without using a
CTE?
Sorry for jumping into this discussion late – I'm facing similar problems with
Postgres choosing strange and inefficient query plans for no (for me) apparent
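As a generic illustration of the mechanism this thread keeps running into: in
these releases a CTE is an optimization fence, planned in isolation from the
outer query (inlining only arrived later, in v12). That can cut either way:
it blocks predicate pushdown into the CTE, but it also keeps the planner from
making a bad choice inside the fenced part. All names in this sketch are made
up.

WITH t AS (
    SELECT * FROM big_table WHERE payload LIKE 'abc%'
)
SELECT * FROM t WHERE id = 42;
-- "id = 42" is not pushed down into the CTE, so big_table is scanned for
-- the LIKE condition first; written as a plain subquery, the planner could
-- use an index on id instead.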
Hi Edmundo,
On Tue, Aug 14, 2018 at 10:45 AM Edmundo Robles wrote:
> Is it safe to upgrade from pg 9.3 to pg 10 directly using pg_upgrade, or
> is it better to upgrade, with pg_upgrade, from 9.3 -> 9.4 -> 9.5 -> 9.6 -> 10?
>
Using pg_upgrade, it is definitely possible to upgrade 9.3 to 10 in one
jump.
1. The tables have no indexes at the time of load.
2. The CREATE TABLE and COPY are in the same transaction.
So I guess that's pretty much it. I understand the long time it takes, as some
of the tables have 400+ million rows. Also the env is a container, and since
this is currently a POC system, n
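A minimal sketch of that pattern, for reference (table and file names are
hypothetical): with wal_level = minimal, a COPY into a table created in the
same transaction can skip WAL entirely, which is part of why this layout
loads fast.

BEGIN;
CREATE TABLE big_table (id bigint, payload text);
COPY big_table FROM '/data/part_01.csv' WITH (FORMAT csv);
COMMIT;
-- indexes are built afterwards, once the data is in:
CREATE INDEX ON big_table (id);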
Maybe he just has a large file that needs to be loaded into a table...
On 08/20/2018 11:47 AM, Vijaykumar Jain wrote:
Hey Ravi,
What is the goal you are trying to achieve here?
To make pg_dump/restore faster?
To make replication faster?
To make backups faster?
Also no matter how small you split
I guess this should help you, Ravi.
https://www.postgresql.org/docs/10/static/populate.html
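Two of the tips from that page, sketched as a single session (setting values
and names are illustrative only):

SET maintenance_work_mem = '1GB';  -- speeds up the CREATE INDEX step
COPY big_table FROM '/data/file_01.csv' WITH (FORMAT csv);
CREATE INDEX ON big_table (id);
ANALYZE big_table;  -- the page recommends running ANALYZE after a bulk load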
On 8/20/18, 10:30 PM, "Christopher Browne" wrote:
On Mon, 20 Aug 2018 at 12:53, Ravi Krishna wrote:
> > What is the goal you are trying to achieve here?
> > To make pg_dump/restore
On Mon, 20 Aug 2018 at 12:53, Ravi Krishna wrote:
> > What is the goal you are trying to achieve here?
> > To make pg_dump/restore faster?
> > To make replication faster?
> > To make backups faster?
>
> None of the above.
>
> We got CSV files from an external vendor which are 880GB in total size, in
> What is the goal you are trying to achieve here?
> To make pg_dump/restore faster?
> To make replication faster?
> To make backups faster?
None of the above.
We got CSV files from an external vendor which are 880GB in total size, in 44
files. Some of the large tables had COPY running for several
On Mon, 20 Aug 2018 at 14:46, Nick Dro wrote:
> My specific issue is already solved.
> For the greater good I sent the email requesting to allow reg exp in the
> position functions.
> Not sure if you will implement it... Just wanted to let you know that the
> limited capabilities of this function
Hey Ravi,
What is the goal you are trying to achieve here?
To make pg_dump/restore faster?
To make replication faster?
To make backups faster?
Also, no matter how small you split the files, if the network is your
bottleneck then I am not sure you can attain n times the benefit by simply
sending
> Gives 5. It's wrong.
> True. Though your SO example didn't have the https in it.
> For some reason, substring() returns the parenthesised subexpression
> rather than the top level.
>
> The comment in testregexsubstr does say that it does this, but it's not
> clear from the documentat
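A sketch of the workaround that behaviour implies, using the URL from
upthread: wrap the whole pattern in an extra set of parentheses, so the
"first parenthesized subexpression" is the full match rather than the inner
alternation:

SELECT position(substring('https://www.webexample.com/s/help?'
                          FROM '(/(s|b|t)/)')
                IN 'https://www.webexample.com/s/help?');
-- returns 27 (the start of '/s/'), not 5 (the first bare 's' in 'https')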
> Can I split a large file into multiple files and then run copy using
> each file.
AFAIK, the COPY command locks the table[1], while there is no mention of this
in the documentation[2].
> Will the performance improve by close to 4x?
You might be interested in the pbBulkInsert tool[3] that allows
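On the locking point: COPY FROM takes the same RowExclusiveLock as a plain
INSERT, so multiple sessions can load one table concurrently. A quick way to
confirm this while a load is running (table name hypothetical):

SELECT locktype, mode, relation::regclass AS rel
FROM pg_locks
WHERE relation = 'big_table'::regclass;
-- each loading session shows up with mode = RowExclusiveLock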
Luca Ferrari writes:
> Fixing the type creation into:
> CREATE TYPE hfsize (
>     internallength = 16,
>     input = hfsize_input_function,
>     output = hfsize_output_function
> );
> solved the problem, so it was a length mismatch.
You really need to specify double alignment too; I
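For completeness, the definition with that advice applied (function names as
posted upthread; the alignment line is the addition):

CREATE TYPE hfsize (
    internallength = 16,
    input = hfsize_input_function,
    output = hfsize_output_function,
    alignment = double
);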
On Mon, Aug 20, 2018 at 4:51 PM Tom Lane wrote:
>
> Luca Ferrari writes:
> > I'm trying to define a custom data type that would represent a number
> > of bytes in a lossy human way.
>
> You did not show us the SQL definition of the type. I don't see anything
> obviously wrong in what you showed
Can I split a large file into multiple files and then run COPY using each file?
The table does not contain any serial or sequence column which may need
serialization. Let us say I split a large file into 4 files. Will the
performance improve by close to 4x?
ps: Pls ignore my previous post, which wa
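A sketch of what that parallel load could look like, assuming the file has
already been split externally (e.g. with the Unix split utility) and each
chunk gets its own database session:

-- session 1:
COPY big_table FROM '/data/part_aa.csv' WITH (FORMAT csv);
-- session 2:
COPY big_table FROM '/data/part_ab.csv' WITH (FORMAT csv);
-- ...one session per chunk. Expect the speedup to be sub-linear: parsing
-- CPU, disk I/O and WAL writes become shared bottlenecks before 4x.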
Luca Ferrari writes:
> I'm trying to define a custom data type that would represent a number
> of bytes in a lossy human way.
You did not show us the SQL definition of the type. I don't see anything
obviously wrong in what you showed (other than hfsize_add not setting the
result's scaling), so t
Hi all,
I'm trying to define a custom data type that would represent a number
of bytes in a lossy human way.
The type is defined as:
typedef struct HFSize
{
    double size;
    int scaling;
} HFSize;
and the operator function is defined as:
Datum
hfsize_add(PG_FUNCTION_AR
My CXXFLAGS has the -fPIC flag in it. As I understood from Tom Lane's previous
post, I have to recompile libseal.a with the -fPIC flag, or turn the
libseal.a into a libseal.so.
Best regards,
Tal
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
TalGloz wrote:
> Do I have to replace my -shared in the link command with -fPIC?
No, but -fPIC should go into your CXXFLAGS.
The pgxs makefile handles CFLAGS, but as it doesn't do C++,
you're on your own for CXXFLAGS.
Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.
Hi,
My specific issue is already solved.
For the greater good I sent the email requesting to allow reg exp in the position functions.
Not sure if you will implement it... Just wanted to let you know that the limited capabilities of this function create overhead. On Aug 20, 2018 14:35, Geoff Winkless
On Mon, 20 Aug 2018 at 09:22, Nick Dro wrote:
>
> This is incorrect.
> SELECT position(substring('https://www.webexample.com/s/help?' FROM
> '/(s|b|t)/') IN 'https://www.webexample.com/s/help?');
>
> Gives 5. It's wrong.
>
On Mon, 20 Aug 2018 at 09:22, Nick Dro wrote:
>
> This is incorrect.
> SELE
This is incorrect.
SELECT position(substring('https://www.webexample.com/s/help?' FROM '/(s|b|t)/') IN 'https://www.webexample.com/s/help?');
Gives 5. It's wrong.
The answer to my question is available in the Stackoverflow link, but note that it's very complex.
It would be very easy if Reg Exp would
If I correctly understood what you wanted based on the SO description
("Something like: select position ('/s/' or '/b/' or '/t/' in URL)"),
you could do that by combining SUBSTRING with position. Something like:
SELECT position(substring(URL FROM '/(s|b|t)/') IN URL);
Cheers,
Ken
On Sun, Aug
Hi,
Thanks for the reply, but this question doesn't describe my problem. I've
asked the question again with much more detailed information about my
problem and got a solution; please see the link below.
http://www.postgresql-archive.org/Linker-errors-while-creating-a-PostgreSQL-C-extension-function-td
TalGloz wrote:
> If I have an external library that I install using make install into the
> /usr/local/include/libraryname path by default and I want to include it in a
> C extension, the Makefile for the .so file includes the headers in
> /usr/local/include/libraryname path.
>
> Can I use #inclu