I seem to have read somewhere that packing just wastes time on zipping, since the same amount of data gets transferred either way, so I haven't really looked much into it. And I don't think it is even possible in our use case, as it would mean storing the same data twice at the source site (unzipped and zipped).

Best Regards
Andi Christiansen
On February 26, 2020 2:04 PM Andrew Beattie <[email protected]> wrote:


Why don’t you look at packaging your small files into larger files, which will be handled more effectively?

There is no simple way to replicate or move billions of small files.

But surely you can build your workflow to package the files up into a zip or tar format, which will not only reduce the number of IO transactions but also make the whole process more palatable to the NFS protocol.
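Something along these lines would do the bundling. This is purely a sketch, assuming Python is available on the source side; the paths and batch size are illustrative placeholders, not anything from your environment:

import tarfile
from pathlib import Path

SRC = Path("/gpfs/source/smallfiles")   # hypothetical source fileset
OUT = Path("/gpfs/source/bundles")      # hypothetical staging area for archives
BATCH = 10_000                          # files per archive; tune for your workload

OUT.mkdir(parents=True, exist_ok=True)
files = sorted(p for p in SRC.rglob("*") if p.is_file())

for i in range(0, len(files), BATCH):
    # One tar per batch: thousands of tiny transfers become one large sequential write.
    with tarfile.open(OUT / f"bundle_{i // BATCH:06d}.tar", "w") as tar:
        for f in files[i:i + BATCH]:
            tar.add(f, arcname=str(f.relative_to(SRC)))

Unpack on the remote side and you have traded billions of per-file metadata operations for a comparatively small number of large transfers.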

Sent from my iPhone

> On 26 Feb 2020, at 22:58, Andi Christiansen <[email protected]> wrote:
>
>
> Hi all,
>
> Does anyone know of an alternative to AFM?
>
> We have been working on tuning AFM for a few weeks now and have seen little to no improvement, so now we are searching for an alternative. If anyone knows of a product that can integrate with Spectrum Scale, I am open to any suggestions :)
>
> We have a good mix of files, but primarily billions of very small files, which AFM does not handle well over long distances.
>
>
> Best Regards
> A. Christiansen

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
