Re: [gentoo-user] Record sizes of directories of a directory tree (huge) most efficiently
On Wed, Jan 27, 2016 at 04:28:43PM -0400, David M. Fellows wrote:
> On Wed, 27 Jan 2016 17:25:37 +0100 meino.cra...@gmx.de wrote:
> > Hi,
> >
> > I want to determine the size of the contents of all directories of a
> > tree of directories on a hexacore AMD64 machine with 4GB RAM and one
> > harddisk (containing that tree) -- most efficiently (least time
> > consuming).
> >
> > I tried this (cwd = root of that tree):
> >
> > find . -depth -type d -print0 | xargs -0 -P 6 du -bsx {} \;
> >
> > Is there any way to do this faster?
> >
> > Thank you very much in advance for any help!
> > Best regards,
> > Meino
>
> man du
>
> Dave F

Here are a couple of nice ones:

du -sh /* | sort -rh

du -axk / | awk '$1 > 2^20 {print}' | sort -rn | head -20

You could also check out the application ncdu for a curses-based disk usage analyzer.
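[Editor's note: the second one-liner filters du's per-entry sizes by a threshold -- with `du -k` the units are KiB, so `$1 > 2^20` keeps entries over 1 GiB. A hedged sketch of the same pattern on a scratch tree, with /tmp/duf and an 8-byte cutoff as made-up demo values:]

```shell
# Demo of the du | awk size-filter pattern with a tiny threshold.
# /tmp/duf and the 8-byte cutoff are made-up demo values; the one-liner
# above uses / and a 1 GiB cutoff (2^20 KiB with du -k).
mkdir -p /tmp/duf
printf '1234567890123456' > /tmp/duf/big    # 16 bytes
printf '123'              > /tmp/duf/small  # 3 bytes
# -a: report files too, -b: apparent size in bytes
du -ab /tmp/duf | awk '$1 > 8 {print}' | sort -rn
# 'small' (3 bytes) falls under the threshold and is filtered out
```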
[gentoo-user] What happened to portage?
Syncing resulted in thousands of lines of:

app-admin/python-updater/.~tmp~/
app-admin/qpage/.~tmp~/
app-admin/qtpass/.~tmp~/
app-admin/quickswitch/.~tmp~/
app-admin/r10k/.~tmp~/
app-admin/radmind/.~tmp~/
app-admin/ranpwd/.~tmp~/
app-admin/recursos/.~tmp~/
app-admin/reportmagic/.~tmp~/
app-admin/restart_services/.~tmp~/
app-admin/rex/.~tmp~/

rsync: opendir "/app-arch/libarchive/.~tmp~" (in gentoo-portage) failed: Permission denied (13)
rsync: opendir "/app-arch/libarchive/files/.~tmp~" (in gentoo-portage) failed: Permission denied (13)
rsync: opendir "/app-arch/libpar2/.~tmp~" (in gentoo-portage) failed: Permission denied (13)
rsync: opendir "/app-arch/libzpaq/.~tmp~" (in gentoo-portage) failed: Permission denied (13)

And the portage tree is full of ".~tmp~" directories. What's going on?
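[Editor's note: ".~tmp~" directories are rsync's temporary staging directories; seeing them usually means you synced against a mirror that was itself mid-update or broken. Re-syncing against a healthy mirror normally clears them; failing that, they can be removed by hand. A hedged sketch -- /tmp/fake_portage is a stand-in for your real tree (e.g. /usr/portage), set up here only so the demo is self-contained:]

```shell
# Sketch: find and delete leftover rsync ".~tmp~" directories.
# /tmp/fake_portage is a stand-in for the real portage tree.
mkdir -p '/tmp/fake_portage/app-arch/libarchive/.~tmp~'   # demo tree

# -prune stops find descending into each matched dir before deleting it
find /tmp/fake_portage -type d -name '.~tmp~' -prune -print -exec rm -rf {} +
```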
[gentoo-user] Record sizes of directories of a directory tree (huge) most efficiently
Hi,

I want to determine the size of the contents of all directories of a
tree of directories on a hexacore AMD64 machine with 4GB RAM and one
harddisk (containing that tree) -- most efficiently (least time
consuming).

I tried this (cwd = root of that tree):

find . -depth -type d -print0 | xargs -0 -P 6 du -bsx {} \;

Is there any way to do this faster?

Thank you very much in advance for any help!
Best regards,
Meino
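[Editor's note: the trailing `{} \;` in the command above is find's -exec syntax, not xargs's -- xargs appends the pathnames itself, so du also gets asked about literal arguments named `{}` and `;`. A hedged sketch of a corrected pipeline, summarizing just the top-level directories; /tmp/du_demo is a made-up demo tree:]

```shell
# Corrected sketch: drop the stray `{} \;` (xargs supplies the paths)
# and summarize only the top-level directories.
# /tmp/du_demo is a made-up demo tree.
mkdir -p /tmp/du_demo/a /tmp/du_demo/b
printf '12345'      > /tmp/du_demo/a/f1  # 5 bytes
printf '1234567890' > /tmp/du_demo/b/f2  # 10 bytes
cd /tmp/du_demo
# -P 6: up to six parallel du processes; -bsx: bytes, summarize, one fs
find . -mindepth 1 -maxdepth 1 -type d -print0 | xargs -0 -P 6 du -bsx
```

Note that with a single spindle the disk, not the CPU, is almost certainly the bottleneck, so the parallelism may not buy much.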
[gentoo-user] Re: eix showing me weird results
My issue resolved itself with today's sync, just a few minutes ago. The eix-diff that runs at the end of eix-sync showed:

[>] == dev-python/numpy (1.10.4@01/25/2016; 1.9.2 -> 1.10.4): Fast array and numerical python library
[>] == dev-qt/qtchooser (0_p20151008@01/25/2016; 0_p20150102 -> 0_p20151008): Qt4/Qt5 version chooser

And eix is no longer giving me any weirdness about those two packages.

On Wed, 27 Jan 2016 14:54:23 + (UTC) Martin Vaeth wrote:
> »Q« wrote:
> > eix-sync
>
> Which method do you use for syncing (rsync, git, ...)?
>
> > I've run 'emerge --metadata' and 'eix-update'
>
> The requirement to run emerge --metadata seems to suggest that
> you use git? If this is true, better use egencache to generate
> the metadata in the repositories' directories instead of
> /var/cache/edb/dep/

I use rsync. I don't usually run emerge --metadata; I just ran it this time on the long-shot hope that it might help. Since I no longer have the problem, I'm not trying your troubleshooting advice (which I've snipped), but thanks very much for posting it -- I've saved it in case this ever happens to me again.
Re: [gentoo-user] Record sizes of directories of a directory tree (huge) most efficiently
On Wed, 27 Jan 2016 17:25:37 +0100 meino.cra...@gmx.de wrote:
> Hi,
>
> I want to determine the size of the contents of all directories of a
> tree of directories on a hexacore AMD64 machine with 4GB RAM and one
> harddisk (containing that tree) -- most efficiently (least time
> consuming).
>
> I tried this (cwd = root of that tree):
>
> find . -depth -type d -print0 | xargs -0 -P 6 du -bsx {} \;
>
> Is there any way to do this faster?
>
> Thank you very much in advance for any help!
> Best regards,
> Meino

man du

Dave F
Re: [gentoo-user] Record sizes of directories of a directory tree (huge) most efficiently
On 01/27/2016 08:25 AM, meino.cra...@gmx.de wrote:
> Hi,
>
> I want to determine the size of the contents of all directories of a
> tree of directories on a hexacore AMD64 machine with 4GB RAM and one
> harddisk (containing that tree) -- most efficiently (least time
> consuming).
>
> I tried this (cwd = root of that tree):
>
> find . -depth -type d -print0 | xargs -0 -P 6 du -bsx {} \;
>
> Is there any way to do this faster?
>
> Thank you very much in advance for any help!
> Best regards,
> Meino

Did you try `du -cxb .`?

Dan
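[Editor's note: in that invocation `-c` appends a grand-total line, `-x` stays on one filesystem, and `-b` counts apparent size in bytes. A small hedged demo on a made-up tree:]

```shell
# Demo of `du -cxb`: per-directory sizes plus a final "total" line.
# /tmp/du_total is a made-up demo tree.
mkdir -p /tmp/du_total/sub
printf 'abcde' > /tmp/du_total/sub/f   # 5 bytes
cd /tmp/du_total
du -cxb .
# last line of the output is the grand total, labelled "total"
```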
[gentoo-user] Re: Filesystem choice for NVMe SSD
On Tue, 26 Jan 2016 20:02:47 +0300, Andrew Savchenko wrote:
> I have thoughts about caching NFS using cachefilesd, but limited
> durability of the drive (400 TBW warranty for 512 GB size) restrains
> me here. Probably I'll use it for caching only in exceptional cases
> (e.g. slow remote mounts like AFS), but with 64 GB RAM I doubt I'll
> need additional NVMe-based caching, at least for now; with time
> this may change of course.

Take note that cachefilesd may not support every filesystem for cache storage -- e.g. btrfs cannot be used for it currently. I haven't looked at other options yet.

--
Regards,
Kai

Replies to list-only preferred.
Re: [gentoo-user] Re: Filesystem choice for NVMe SSD
Hi,

On Tue, 26 Jan 2016 17:29:36 + (UTC) James wrote:
> > I have thoughts about caching NFS using cachefilesd, but limited
> > durability of the drive (400 TBW warranty for 512 GB size) restrains
> > me here. Probably I'll use it for caching only in exceptional cases
> > (e.g. slow remote mounts like AFS), but with 64 GB RAM I doubt I'll
> > need additional NVMe-based caching, at least for now; with time
> > this may change of course.
>
> Well, to be truthful, I was hoping your application was a speed boost
> for clusters. Particularly a gentoo based cluster with some NVMe boards
> on workstations that lend their excess power to a local cluster. Lots of
> folks are building in-house clusters where technical users have monster
> workstations and use those excess workstation resources to boost the local
> cluster. That's kinda my twist (lxqt on the desktop) for single (big
> problem/data) on gentoo with mesos clusters. Do drop me a line, should that
> type of usage permeate your thought_cycles.

We have clusters at work, but they have quite different hardware. For HA clusters we indeed use bcache on quite durable Intel SSDs (400 GB size, 8 PBW endurance). For the HPC cluster we planned an SSD cache for storage, but due to a funding cut-off we have only small SSDs on each node.

Best regards,
Andrew Savchenko
Re: [gentoo-user] Re: Filesystem choice for NVMe SSD
On Wed, 27 Jan 2016 09:31:26 +0100 Kai Krakow wrote:
> On Tue, 26 Jan 2016 20:02:47 +0300, Andrew Savchenko wrote:
> > I have thoughts about caching NFS using cachefilesd, but limited
> > durability of the drive (400 TBW warranty for 512 GB size) restrains
> > me here. Probably I'll use it for caching only in exceptional cases
> > (e.g. slow remote mounts like AFS), but with 64 GB RAM I doubt I'll
> > need additional NVMe-based caching, at least for now; with time
> > this may change of course.
>
> Take note that cachefilesd may not support every filesystem for cache
> storage -- e.g. btrfs cannot be used for it currently. I haven't looked
> at other options yet.

cachefilesd needs the user_xattr feature; both ext4 and f2fs have it. The btrfs people claim that fs lacks this mount option because the functionality is enabled by default: http://www.spinics.net/lists/linux-btrfs/msg06814.html But I haven't tested this myself.

Best regards,
Andrew Savchenko
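[Editor's note: a quick way to check a candidate cache filesystem is simply to try setting a user xattr on a scratch file there. A hedged probe -- it assumes the attr tools (setfattr) are installed, and the target directory here is a placeholder; point it at the intended cache location, e.g. /var/cache/fscache:]

```shell
# Probe: does the filesystem backing "$dir" support user xattrs,
# which cachefilesd requires? $dir is a placeholder location.
dir=/tmp
t=$(mktemp -p "$dir")
if setfattr -n user.probe -v 1 "$t" 2>/dev/null; then
    echo "user xattrs supported on $dir"
else
    echo "user xattrs NOT supported on $dir"
fi
rm -f "$t"
```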
[gentoo-user] Re: eix showing me weird results
»Q« wrote:
> eix-sync

Which method do you use for syncing (rsync, git, ...)?

> I've run 'emerge --metadata' and 'eix-update'

The requirement to run emerge --metadata seems to suggest that you use git? If this is true, better use egencache to generate the metadata in the repositories' directories instead of /var/cache/edb/dep/. Afterwards (or if you use rsync anyway), remove possibly obsolete /var/cache/edb/dep/* and do *not* call emerge --metadata manually.

What is the output of eix-update? Also check whether perhaps the information of eix is correct. E.g. for

> $ eix qtchooser
> [?] dev-qt/qtchooser
> Available versions: 0_p20150102 ~0_p20151008

check whether in ${PORTDIR}/metadata/md5-cache/dev-qt the file qtchooser-0_p20151008 perhaps contains a line with KEYWORDS= ... ~amd64 ...
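[Editor's note: the check Martin describes boils down to grepping the md5-cache entry. A hedged sketch using a mocked-up cache file so it is self-contained -- the real path would be ${PORTDIR}/metadata/md5-cache/dev-qt/qtchooser-0_p20151008, with PORTDIR typically /usr/portage, and the KEYWORDS value here is invented for the demo:]

```shell
# Mocked-up md5-cache entry; the real one lives under
# ${PORTDIR}/metadata/md5-cache/ (PORTDIR is typically /usr/portage),
# and its KEYWORDS line reflects the actual ebuild.
mkdir -p /tmp/md5-cache/dev-qt
printf 'KEYWORDS=~amd64 ~x86\n' > /tmp/md5-cache/dev-qt/qtchooser-0_p20151008
grep '^KEYWORDS=' /tmp/md5-cache/dev-qt/qtchooser-0_p20151008
# prints: KEYWORDS=~amd64 ~x86
```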