The use case I'm considering is network file systems. So perhaps the default could remain a single-threaded copy for local filesystems, with an option added to cp for the -r case that would let network file systems copy files in parallel.
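To make the idea concrete, here is a rough userspace sketch of what such an option might do internally, using xargs -P to fan out cp processes (the function name, the jobs parameter, and the default of 8 are all illustrative; real cp would presumably use threads rather than child processes):

```shell
# Sketch of a parallel recursive copy: pcp_tree SRC DEST [JOBS]
# (hypothetical helper, not an existing cp feature)
pcp_tree() {
    src=$1 dest=$2 jobs=${3:-8}

    # Recreate the directory structure first; this is cheap and done serially.
    (cd "$src" && find . -type d -print0) |
        (cd "$dest" && xargs -0 mkdir -p)

    # Copy regular files with up to $jobs concurrent cp processes.
    # On a network filesystem, each copy can overlap its round trips
    # with the others instead of waiting on them one at a time.
    (cd "$src" && find . -type f -print0) |
        xargs -0 -P "$jobs" -I{} cp "$src/{}" "$dest/{}"
}
```

On a local single disk this mostly just splits the same iops budget across processes, which is the objection below; the win, if any, comes when per-file latency (not disk bandwidth) dominates, as over NFS.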
On Thu, Jun 6, 2019 at 12:25 PM Marc Roos <[email protected]> wrote:
>
> Hmmm, without being a maintainer, I would say cp -r is most used on a
> single disk, so one thread is using the maximum disk iops, taking y time
> to copy. What would be solved by using multiple threads, each taking their
> share of the maximum disk iops, and because of the scheduling and other
> overhead finishing later than y time?
>
>
> -----Original Message-----
> From: Olga Kornievskaia [mailto:[email protected]]
> Sent: donderdag 6 juni 2019 17:39
> To: [email protected]
> Subject: question about parallelism in cp command
>
> Hi folks,
>
> Is there something philosophically incorrect in making “cp”
> multi-threaded and allowing for parallel copies when “cp -r” is done? If
> it’s something that’s possible, are there any plans to make a
> multi-threaded cp?
>
> I’m not a member of the list, so I kindly request you cc me on the
> reply.
>
> Thank you.
