On Sat, 13 Aug 2011, Bob Friesenhahn wrote:

On Sat, 13 Aug 2011, andy thomas wrote:
However, one of our users recently put a 35 GB tar.gz file on this server and uncompressed it to a 215 GB tar file. But when he tried to untar it, after about 43 GB had been extracted we noticed that the disk usage reported by df for that ZFS pool wasn't changing much. Running du -sm on the extracted archive directory showed the size increasing for 30 seconds or so, then suddenly dropping back about 50 MB and starting to increase again. In other words, it seemed to be stuck in some sort of loop; all we could do was kill tar and try again, and exactly the same thing happened after 43 GB had been extracted.
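The repeated du -sm check described above can be sketched as a small shell loop (a hedged sketch: `watch_du` is a made-up name, and it assumes a POSIX sh with du and awk available; the sampling interval and directory are placeholders):

```shell
# watch_du DIR INTERVAL COUNT
# Print the size of DIR in MB every INTERVAL seconds, COUNT times,
# together with the change since the previous sample. A sample that
# repeatedly goes negative matches the "drops back ~50 MB" symptom.
watch_du() {
    dir=$1; interval=$2; count=$3
    prev=$(du -sm "$dir" | awk '{print $1}')
    i=0
    while [ "$i" -lt "$count" ]; do
        sleep "$interval"
        cur=$(du -sm "$dir" | awk '{print $1}')
        printf '%d MB (%+d MB)\n' "$cur" "$((cur - prev))"
        prev=$cur
        i=$((i + 1))
    done
}

# e.g. watch_du /pool/extracted 30 120   # sample every 30 s for an hour
```

Note that du reports logical directory usage, while df reports pool-level space; on ZFS the two can legitimately disagree for a while because of delayed/batched transaction group writes, so a short-lived discrepancy alone isn't proof of a problem.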

What 'tar' program were you using? Make sure to also try using the Solaris-provided tar rather than something like GNU tar.

I was actually using GNU tar, as the original archive was created on a Linux machine. I will try it again using Solaris tar.
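Retrying with the platform's own tar can be sketched like this (a minimal, self-contained sketch: the TAR variable, file names, and paths are illustrative assumptions; on Solaris the native tar is typically /usr/bin/tar, while GNU tar is often installed separately as gtar):

```shell
# Choose which tar to use: defaults to whatever `tar` is on PATH.
# On Solaris, set TAR=/usr/bin/tar for the native tar, or TAR=gtar
# for GNU tar, and compare behaviour on the same archive.
TAR=${TAR:-tar}

# Self-contained round trip with a tiny sample archive so the
# invocation pattern is clear; substitute the real 215 GB tar file.
workdir=$(mktemp -d)
mkdir "$workdir/tree"
echo "sample data" > "$workdir/tree/file.txt"
"$TAR" cf "$workdir/sample.tar" -C "$workdir" tree   # create archive
mkdir "$workdir/out"
"$TAR" xf "$workdir/sample.tar" -C "$workdir/out"    # extract with chosen tar
cat "$workdir/out/tree/file.txt"
rm -rf "$workdir"
```

One reason this comparison is worth doing: GNU tar and Solaris tar differ in how they handle extended headers for large files and long paths, so an archive that trips one implementation may extract cleanly with the other.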

1 GB of memory is not very much for Solaris. A minimum of 2 GB is recommended for ZFS.

We are going to upgrade the system to 4 GB as soon as possible.

Thanks for the quick response,

zfs-discuss mailing list
