We are attempting to run a PostgreSQL cluster composed of two nodes, each mirroring the other's data over GlusterFS. The Gluster config is identical on each node:

volume posix
 type storage/posix
 option directory /mnt/sdb1
end-volume

volume locks
 type features/locks
 subvolumes posix
end-volume

volume brick
 type performance/io-threads
 subvolumes locks
end-volume

volume server
 type protocol/server
 option transport-type tcp
 option auth.addr.brick.allow *
 subvolumes brick
end-volume

volume gfs01-hq
 type protocol/client
 option transport-type tcp
 option remote-host gfs01-hq
 option remote-subvolume brick
end-volume

volume gfs02-hq
 type protocol/client
 option transport-type tcp
 option remote-host gfs02-hq
 option remote-subvolume brick
end-volume

volume replicate
 type cluster/replicate
 option favorite-child gfs01-hq
 subvolumes gfs01-hq gfs02-hq
end-volume

volume writebehind
 type performance/write-behind
 option page-size 128KB
 option cache-size 1MB
 subvolumes replicate
end-volume

volume cache
 type performance/io-cache
 option cache-size 512MB
 subvolumes writebehind
end-volume
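For context, each Postgres node mounts the volume through the client half of this volfile. With GlusterFS 2.x that looks roughly like the following (the volfile path and mount point here are examples, not our exact ones):

```shell
# Mount the volume using the client volfile (paths are illustrative)
glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/pgdata
```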

The basic problem is that whenever I try to import a database created on a different cluster, I run into these errors:

-bash-3.2$ pg_restore -U entitystore -d entitystore --no-owner -n public entitystore
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 1829; 0 147089 TABLE DATA entity_medio-canon-all-0 entitystore
pg_restore: [archiver (db)] COPY failed: ERROR: unexpected data beyond EOF in block 77309 of relation "entity_medio-canon-all-0"
HINT: This has been seen to occur with buggy kernels; consider updating your system.
CONTEXT: COPY entity_medio-canon-all-0, line 1022934: "medio-canon-all-0 1.mut_2572632518437988628 \\340\\000\\000\\001\\0008\\317\\002ns2.http://schemas.m..."

The issue seems to be related to Gluster: when I attempt the same restore to a local (non-replicated) disk, it works fine. Is there something amiss in our Gluster config? Should we be doing something different?
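For anyone unfamiliar with the error: as I understand it, Postgres determines a relation's size by seeking to the end of the file, then extends the relation by writing the next block. If the filesystem reports a stale end-of-file (e.g. because a cache layer hasn't seen the other node's writes), the block Postgres believes is brand-new already contains data, and it bails out with this error. A rough sketch of that pattern (my paraphrase of the mechanism, not actual Postgres code):

```python
import os
import tempfile

BLCKSZ = 8192  # Postgres's default block size

def nblocks(fd):
    # Postgres-style size check: seek to end, divide by block size
    return os.lseek(fd, 0, os.SEEK_END) // BLCKSZ

# Create a "relation" that really has 2 blocks of data on disk
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.write(b"x" * BLCKSZ * 2)

fd = os.open(path, os.O_RDWR)

# Pretend a stale cache told us the file had only 1 block, so we try
# to "extend" the relation by creating block 1
stale_nblocks = 1
target = stale_nblocks

os.lseek(fd, target * BLCKSZ, os.SEEK_SET)
existing = os.read(fd, BLCKSZ)

# The block we expected to be new is not empty: the stale size lied
if any(existing):
    print(f"unexpected data beyond EOF in block {target}")

os.close(fd)
os.unlink(path)
```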

Thanks for taking the time to read.

-Jeff

_______________________________________________
Gluster-users mailing list
[email protected]
http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
