Mike Gerdts wrote:
[I agree with the comments in this thread, but... I think we're still being old fashioned...]

>> Imagine if university students were allowed to use as much space as
>> they wanted but had to pay a per-megabyte charge every two weeks or
>> their account is terminated? This would surely result in a huge
>> reduction in disk space consumption.
>
> If you can offer the perception of more storage because efficiencies
> of the storage devices make it the same cost as less storage, then
> perhaps allocating more per student is feasible. Or maybe tuition
> could drop by a few bucks.

hmm... well, having spent the past two years at the University, I can offer these observations:

0. Tuition never drops.
1. Everybody (yes, everybody) had a laptop. I would say the average
   hard disk size per laptop was > 100 GBytes.
2. Everybody (yes, everybody) had USB flash drives. In part because the
   school uses them as recruitment tools (give-aways), but they are
   inexpensive, too.
3. Everybody (yes, everybody) had an MP3 player of some magnitude. Many
   were disk-based, but there were many iPod Nanos, too.
4. > 50% had smart phones -- crackberries, iPhones, etc.
5. The school actually provides some storage space, but I don't know
   anyone who took advantage of the service. E-mail and document
   sharing were outsourced to Google -- no perceptible shortage of
   space there.

Even Microsoft charges only $3/user/month for Exchange and SharePoint services. I think many businesses would be hard-pressed to match that sort of efficiency.

Unlike my undergraduate days, when we had to make trade-offs between beer and floppy disks, there does not seem to be a shortage of storage space among university students today -- in spite of the recent rise in beer prices (hops shortage, they claim ;-)

Is the era of centralized home directories for students over?
I think that the normal enterprise backup scenarios are more likely to gain from de-dup, in part because they tend to make full backups of systems and end up with zillions of copies of (static) OS files. Actual work files tend to be smaller, for many businesses. De-dup on my desktop seems to be a non-issue.

Has anyone done a full value-chain or data-path analysis for de-dup? Will de-dup grow beyond the backup function? Will the performance penalty of SHA-256 and bit comparison kill all interactive performance? Should I set aside a few acres at the ranch to grow hops?

So many good questions, so little time...
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
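[Archive note: the SHA-256-plus-bit-comparison scheme mentioned above can be illustrated with a toy sketch. This is not ZFS code -- just a minimal content-addressed block store, assuming a fixed 4 KB block size, where duplicate blocks are detected by hash and then, as in a "verify" mode, confirmed with a full byte comparison before being shared.]

```python
import hashlib

BLOCK_SIZE = 4096  # assumed fixed block size, for illustration only

def dedup_store(data: bytes):
    """Split data into fixed-size blocks and keep only one copy of
    each unique block, keyed by its SHA-256 digest.

    Returns (store, recipe): store maps digest -> block bytes, and
    recipe is the ordered list of digests needed to rebuild data."""
    store = {}
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).digest()
        if digest in store:
            # The "bit comparison" step: a full byte compare guards
            # against the astronomically unlikely hash collision.
            assert store[digest] == block
        else:
            store[digest] = block
        recipe.append(digest)
    return store, recipe

def rebuild(store, recipe):
    """Reassemble the original data from unique blocks."""
    return b"".join(store[d] for d in recipe)
```

Ten identical full backups of the same "OS image" would then cost one block's worth of storage plus ten digests -- which is exactly why backup streams full of static OS files are the easy win, while a desktop's mostly-unique working set sees little benefit.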