Some time ago I had to set up a replicated file system between multiple Linux 
servers. I tried everything I could based on Postgres, including large objects, 
but everything was significantly slower than a regular filesystem.

My conclusion: Postgres is not suitable for storing large files efficiently.

Do you need that for replication, or just for file storage?

Alvaro Aguayo
Jefe de Operaciones
Open Comb Systems E.I.R.L.

Oficina: (+51-1) 3377813 | RPM: #034252 / (+51) 995540103  | RPC: (+51) 
954183248
Website: www.ocs.pe


---- Sridhar N Bamandlapally wrote ----

All media files are stored in the database, with sizes varying from 1MB to 5GB.

Based on media file type and user group we store them in different tables,
but PostgreSQL stores every OID/large object in a single table
(pg_largeobject), so 90% of the database size is in pg_largeobject.

Due to its size limitation, BYTEA was not considered.
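For context, the size limits behind that decision can be sketched as a simple routing rule. The limits come from the PostgreSQL documentation (bytea values top out at 1 GB per field; large objects at 4 TB on 9.3 and later); the helper function itself is hypothetical, not anything from this thread:

```python
# Hypothetical helper: pick a storage strategy by file size.
# bytea values are capped at 1 GB; large objects at 4 TB (PostgreSQL 9.3+).
# Media files of 1MB - 5GB therefore rule out bytea.

BYTEA_MAX = 1 * 1024**3         # 1 GB per-field limit for bytea
LARGE_OBJECT_MAX = 4 * 1024**4  # 4 TB limit for a large object

def choose_storage(size_bytes: int) -> str:
    """Return which backend can hold a blob of the given size."""
    if size_bytes <= BYTEA_MAX:
        return "bytea"
    if size_bytes <= LARGE_OBJECT_MAX:
        return "large_object"
    return "external_file"

# A 5 GB video cannot go into bytea, so it lands in pg_largeobject:
print(choose_storage(5 * 1024**3))  # large_object
```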

Thanks
Sridhar



On Tue, Mar 29, 2016 at 3:05 PM, John R Pierce <pie...@hogranch.com> wrote:

> On 3/29/2016 2:13 AM, Sridhar N Bamandlapally wrote:
>
>> Hi
>>
>> pg_largeobject is creating performance issues as it grows, due to its
>> single point of storage (for all tables)
>>
>> is there any alternative apart from bytea ?
>>
>> for example, configuring the large-object table at the table/column
>> level, with the OID primary key stored in pg_largeobject
>>
>>
> I would just as soon use an NFS file store for larger files like images,
> audio, video, or whatever.   use SQL for the relational metadata.
>
> just sayin'....
>
>
>
> --
> john r pierce, recycling bits in santa cruz
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
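John's suggestion above (files on a shared filesystem, relational metadata in SQL) can be sketched roughly like this. The directory layout and table schema are illustrative assumptions, and sqlite3 stands in for PostgreSQL only to keep the sketch self-contained:

```python
# Rough sketch of "files on an NFS mount, metadata in SQL".
# The store path, schema, and content-addressed layout are assumptions
# for illustration; sqlite3 is a stand-in for PostgreSQL here.
import hashlib
import sqlite3
import tempfile
from pathlib import Path

store_root = Path(tempfile.mkdtemp())  # in practice: the NFS mount point
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE media (
    id         INTEGER PRIMARY KEY,
    sha256     TEXT NOT NULL,
    path       TEXT NOT NULL,
    size_bytes INTEGER NOT NULL)""")

def store_file(data: bytes) -> str:
    """Write the blob to the file store; keep only metadata in SQL."""
    digest = hashlib.sha256(data).hexdigest()
    dest = store_root / digest[:2] / digest  # fan out by hash prefix
    dest.parent.mkdir(parents=True, exist_ok=True)
    dest.write_bytes(data)
    db.execute(
        "INSERT INTO media (sha256, path, size_bytes) VALUES (?, ?, ?)",
        (digest, str(dest), len(data)))
    return digest

digest = store_file(b"fake video bytes")
row = db.execute("SELECT size_bytes FROM media WHERE sha256 = ?",
                 (digest,)).fetchone()
print(row[0])  # 16
```

The database then only ever holds short rows, so pg_largeobject never becomes a single hot spot; the trade-off is that backups and replication must cover the file store separately.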
