We are trying to implement PostgreSQL code to load a large object into
a PostgreSQL bytea in chunks, to avoid loading the whole file into
memory in the client.

The first attempt was to do

update build_attachment set chunk = chunk || newdata ;

but this did not scale, and got significantly slower after 4,000-5,000 appends.
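The slowdown is expected: each `||` update rewrites the entire, ever-growing TOASTed value, so total work grows quadratically. One way around it is to store the chunks as separate rows and only concatenate on read. A minimal sketch, assuming a hypothetical `build_attachment_chunk` table:

```sql
-- Store each chunk as its own row; an append then costs O(chunk size),
-- not O(total attachment size).
create table build_attachment_chunk (
    attachment_id integer not null,
    seq           integer not null,
    chunk         bytea   not null,
    primary key (attachment_id, seq)
);

-- The client inserts one row per chunk as it streams the file.
insert into build_attachment_chunk (attachment_id, seq, chunk)
values ($1, $2, $3);

-- Reassemble in order only when the full value is actually needed.
select string_agg(chunk, ''::bytea order by seq) as data
from build_attachment_chunk
where attachment_id = $1;
```

`string_agg` over bytea is a core aggregate, so the reassembly needs no extension; insert time stays flat however large the attachment gets.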
Can you use a materialized view to do the bytea_agg() and then refresh
concurrently whenever you need updated data?
The refresh concurrently might take a few hours or days to run to keep the
matview up to date, but your queries would be pretty fast.
A possible problem is that you are running out
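The materialized-view idea above could look roughly like the sketch below. Two caveats: core PostgreSQL spells the aggregate `string_agg` rather than `bytea_agg`, and `REFRESH ... CONCURRENTLY` requires a unique index on the view. The table and column names are illustrative, assuming chunks live in rows keyed by `(attachment_id, seq)`:

```sql
-- Precompute the assembled bytea per attachment.
create materialized view build_attachment_data as
select attachment_id,
       string_agg(chunk, ''::bytea order by seq) as data
from build_attachment_chunk
group by attachment_id;

-- CONCURRENTLY needs a unique index so the refresh can diff rows in place.
create unique index on build_attachment_data (attachment_id);

-- Rebuild without blocking readers of the view.
refresh materialized view concurrently build_attachment_data;
```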
2018-03-21 13:03 GMT+01:00 Gary Cowell :
> [...]
2018-03-21 13:56 GMT+01:00 Pavel Stehule :
> [...]
Thank you Pavel for those ideas.

I should probably have mentioned that we don't have access to the file
system on the PostgreSQL server, as it's provided by the Amazon AWS RDS
service.

These functions look good when you can push the file to be loaded onto
the database server's file system.
I'll see if it's possible.
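For what it's worth, PostgreSQL 9.4 and later also expose the large-object write functions at the SQL level, so a client can append chunks over an ordinary connection without any access to the server's file system (whether RDS permits large objects in a given setup is worth verifying). A hedged sketch, with `$n` standing for client-supplied parameters:

```sql
-- Seed a new large object with the first chunk; passing loid = 0
-- lets the server assign the OID, which is returned to the client.
select lo_from_bytea(0, $1);

-- Append each later chunk at the running end offset,
-- which the client tracks as a byte count.   -- ($1 = loid, $2 = offset, $3 = chunk)
select lo_put($1, $2, $3);

-- Read it back whole, or page through it with lo_get(loid, offset, length).
select lo_get($1);
```

Writing with `lo_put` only touches the bytes being added, so it avoids the rewrite-the-whole-value cost of repeated bytea concatenation.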
2018-03-21 14:04 GMT+01:00 Gary Cowell :
> [...]