Re: DragonFly pkgsrc policy for packages without freely or generally available sources
On Sat, May 19, 2012 at 8:29 AM, John Marino dragonfly...@marino.st wrote:

Pkgsrc will occasionally maintain packages in the repository that no longer have retrievable source tarballs and that also legally restrict others from hosting copies of them. The justification is that some older users may still be in possession of the source tarballs, and the package is maintained for these very few people. Personally, I disagree with this philosophy. Being buildable by anyone should be a minimum requirement for a pkgsrc package, and if this capability is lost, I believe the package should be removed from pkgsrc once it is clear the capability will never be regained. In the same vein, there are some packages that depend on sources that one has to purchase. I wouldn't be shocked if all of these worked only on NetBSD.

Since the unavailable packages aren't getting removed upstream, I'm going to mark them all NOT-FOR-DRAGONFLY. Currently this is fewer than 10 packages. The ones depending on commercially purchased source tarballs will also be marked NOT-FOR-DRAGONFLY. For the vast majority of users, this will not affect you in the least (unless you run the bulk build script, in which case your life will improve). If you find yourself with sources to build one of these packages, you can simply comment out the NOT-FOR-PLATFORM+= DragonFly-*-* line in the Makefile before trying to build it.

Just for reference, can you point out some/all of these packages? Thanks! -- vs; http://ops101.org/4k/
Re: Failure to allocate contiguous memory
On Fri, Mar 23, 2012 at 9:41 PM, Kyuupi kyuupic...@gmail.com wrote:

I have jerky window dragging, where repainting is very slow. I believe it is because DRI is not working with my graphics card, in turn because of a failure to allocate contiguous memory. Relevant snippets of dmesg are below. Is there something I can do to fix this? I'm using kernel source from about 48 hours ago. Neil.

DragonFly v2.13.0.381.gca541-DEVELOPMENT #0: Sun Nov 27 12:27:02 JST 2011
r...@athlon2.akihabara.co.uk:/usr/obj/usr/src/sys/X86_64_GENERIC
CPU: AMD Athlon(tm) Dual Core Processor 5050e (2600.16-MHz K8-class CPU)
Origin = AuthenticAMD Id = 0x60fb2 Stepping = 2
Features=0x178bfbff<FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CLFLUSH,MMX,FXSR,SSE,SSE2,HTT>
Features2=0x2001<SSE3,CX16>
AMD Features=0xea500800<SYSCALL,NX,MMX+,FFXSR,RDTSCP,LM,3DNow!+,3DNow!>
AMD Features2=0x11f<LAHF,CMP,SVM,ExtAPIC,CR8,Prefetch>
real memory = 4025809920 (3839 MB)
avail memory = 3718107136 (3545 MB)
DMA space used: 2540k, remaining available: 16384k
Mounting devfs
drm0: <ATI Radeon HD 3200 Graphics> on vgapci0
vgapci0: child drm0 requested pci_enable_busmaster
info: [drm] Initialized radeon 1.31.0 20080613
contigmalloc_map: failed size 16777216 low=0 high= align=4096 boundary=0 flags=0102
contigmalloc_map: failed size 16777216 low=0 high= align=4096 boundary=0 flags=0102
pid 27585 (conftest), uid 0: exited on signal 11
Warning: busy page 0xffe0036c39a8 found in cache

Yep! Some AMD (ATI) graphics cards seem to require a 32MB physical-memory region for DRI to work. In /boot/loader.conf, set vm.dma_reserved to 32MB. That should work well! Good luck, -- vs; http://ops101.org/4k/
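In /boot/loader.conf, the fix described above would look roughly like the fragment below. The tunable name comes from the reply; the exact value syntax is an assumption, so check loader(8) and the tunables on your release:

```
# /boot/loader.conf -- reserve a 32MB region for contiguous DMA allocations
# (value syntax is an assumption; verify against your kernel version)
vm.dma_reserved="32m"
```

After a reboot, the "DMA space used / remaining available" line in dmesg should reflect the larger reserve.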
Re: Anyone know how to get bitcoin to compile?
Hi, bitcoind works fine on DragonFly; you just need to modify makefile.unix appropriately. I'll post an updated makefile in a day or so. Basically, you need to set the pkgsrc paths correctly and modify a few small things with respect to library names. I've never tried the Qt version on DFly. -- vs;
Re: cross-Compiling DFBSD on an Ubuntu machine.
NetBSD invested a lot of time in build.sh, so that they can build on a fair number of platforms. It would be pretty cool if someone wanted to do the same here :) -- vs;
Re: What's _slaballoc?
On Mon, Mar 5, 2012 at 11:26 PM, Pierre Abbat p...@phma.optus.nu wrote:

I'm profiling a program so that I can optimize it and hopefully get it to run in real time. Here's the profile of the program running under Linux:

Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative  self              self    total
 time   seconds  seconds    calls  s/call  s/call  name
65.94      7.59    7.59    613648    0.00    0.00  tonegenerator::fwaves(int)
11.29      8.89    1.30    613648    0.00    0.00  std::__fill_n_a<float*, unsigned int, float>(float*, unsigned int, float const&)
 5.39      9.51    0.62 168385808    0.00    0.00  std::vector<float>::operator[](unsigned int)
 2.78      9.83    0.32    991456    0.00    0.00  std::_Rb_tree<double, std::pair<double const, errec>, ...>::_M_lower_bound(...) const
 2.17     10.08    0.25  15606822    0.00    0.00  std::less<double>::operator()(double const&, double const&) const
 1.26     10.22    0.14    613648    0.00    0.00  mastertimer::skip()
 1.04     10.35    0.12  14558791    0.00    0.00  std::_Rb_tree<double, std::pair<double const, errec>, ...>::_S_key(...)
 0.87     10.45    0.10                            main
 0.52     10.51    0.06    613648    0.00    0.00  tonegenerator::unfwave()
 0.48     10.56    0.06   8000250    0.00    0.00  std::_Rb_tree_iterator<std::pair<double const, errec> >::operator->() const
 0.43     10.61    0.05    999513    0.00    0.00  std::map<double, errec>::end()
 0.43     10.66    0.05    991456    0.00    0.00  std::_Rb_tree<double, std::pair<double const, errec>, ...>::find(double const&) const
 0.43     10.71    0.05    205449    0.00    0.00  downspike()
 0.35     10.75    0.04   1227298    0.00    0.00  std::vector<float>::~vector()
 0.35     10.79    0.04    613648    0.00    0.00  std::vector<float>::vector(std::vector<float> const&)
 0.30     10.82    0.04   1840944    0.00    0.00  parity(unsigned int)
 [remaining entries, each under 0.3%, are mostly libstdc++ _Rb_tree and
  vector internals plus compdiv(double); the quoted profile was cut off here]
Re: dfly kvm
2012/3/6 Andrey N. Oktyabrski a...@bestmx.ru wrote:

Good day. Today I tried to update 2.10 to 3.0 on my VPS. 3.0 does not boot. Has anybody tested dfly on kvm? Is it possible to update, or must I use 2.10 there?

Hi, We have some known issues with 3.0 on recent versions of kvm and qemu. They are related to the new interrupt code, I think. There are workarounds, though! You can start qemu/kvm with -no-acpi, or set hw.ioapic_enable=0 in /boot/loader.conf; this will let you boot. You may need to enable polling mode for the virtual NIC as well; ifconfig em0 polling 1 will do that. The problem, as I understand it, is that an interrupt is getting missed during boot.
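Concretely, the workarounds above look roughly like this (command names, memory size, and the em0 device are assumptions for illustration -- adjust for your setup):

```
# Option 1: start the guest without ACPI
kvm -no-acpi -m 512 -hda dragonfly.img

# Option 2: leave ACPI on, but disable the IOAPIC via the guest's
# /boot/loader.conf instead:
hw.ioapic_enable="0"

# After boot, if the emulated NIC misses interrupts, enable polling:
ifconfig em0 polling 1
```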
Re: Install DragonFlyBSD on 48 MB RAM
On Thu, Mar 1, 2012 at 7:22 PM, Thomas Nikolajsen thomas.nikolaj...@mail.dk wrote:

So my question is: can I install DragonFlyBSD on my PC with 48 MB RAM? And if it is possible, what is the right way to do it?

Yes, that should be possible; you will have to set up a swap partition (e.g. 256MB) on your HD and enable it (swapon) before running the installer. 48MB is a rather low-memory system, so you might not be able to run the installer, but you can do a manual install of DragonFly; that way you will also learn the steps. See /README on the install media (http://gitweb.dragonflybsd.org/dragonfly.git/blob/HEAD:/nrelease/root/README); please use UFS, not HAMMER, which needs more memory. 48MB will not get you running big programs, only doing small things, but it should work; see http://www.shiningsilence.com/dbsdlog/2012/02/28/9296.html. As some developers have expressed in this thread, problems triggered by such a small amount of memory might not get first priority from them, but do file a bug if you see any. -thomas

One of our developers tested with snapshots; it looks like the DMA reserve commit is the one that made DragonFly no longer run with 48MB. That makes sense, as 16MB of physical memory is locked up by that commit. You should be able to boot with a loader variable set to reserve less physical memory. We someday need a better physical-memory allocator; the 16MB reserve is a good step, but a low-fragmentation allocator would be better. -- vs;
Re: Install DragonFlyBSD on 48 MB RAM
You might be able to make progress by reducing the amount of memory reserved during boot for DMA allocations. You'll also be able to enable swap from the installer CD by logging in as root and using swapon on the swap partition. -- vs;
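A rough sketch of the swap steps, assuming the swap partition ended up as ad0s1b (device names here are hypothetical -- substitute your own disk layout):

```
# From the install CD, log in as root, then:
swapon /dev/ad0s1b   # enable swap before launching the installer
swapinfo             # confirm the swap device is active
```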
Re: Is anyone still using gcc 4.1 on master?
kaffe (from pkgsrc) seems to need gcc 4.1 to build correctly... -- vs;
Re: Marvell 88e8057, and SATA disk cache flushing?
On Sat, Oct 22, 2011 at 11:42 AM, james ja...@mansionfamily.plus.com wrote:

I need to replace my old NAS and I have some hardware to use, but plan A failed dismally because Illumos does not have a working driver for the Marvell 88e8057 GigE chip on my Sapphire mobo - an AMD E350 device which I have in a small case (otherwise I'd just add another NIC). So: 1) does dragonfly support this chip?

I believe the msk driver does.

Also - while I'm attracted to swapcache (and I have an SSD that I would have used as boot, L2ARC and ZIL), I'm a little concerned that the situation with flushing caches on hard disks isn't as clear as it is with ZFS, which I trust to use write-back caches and flush explicitly. So: 2) Does Dragonfly have proper support for flushing write-back caches on SATA drives? I have read the notes on 'fsync flush modes' in hammer(8), but this does not discuss whether fsync writes to the drive assuming write-through, or forces a drive cache flush. I'm also interested in whether such flushes will work if I configure the 4-off 2TB drives as a software RAID5 array.

HAMMER explicitly flushes drive caches. Our device-mapper linear target does forward FLUSH commands to backing devices. I don't know about vinum, unfortunately; at first skim of vinumstart(), it doesn't, but I only quickly scanned it. UFS never flushes drive caches. Boo. --vs;
Re: Real World DragonFlyBSD Hammer DeDup figures from HiFX - Reclaiming more than 1/4th ( 30% ) Disk Space from an Almost Full Drive
The memory use can be bounded with some additional work on the software, if someone wants to have a go at it. Basically, the way you limit memory use is by dynamically limiting the CRC range that you observe in a pass. As you reach a self-imposed memory limit, you reduce the CRC range and throw away out-of-range records. Once the pass is done, you start a new pass with the remaining range. Rinse, repeat until the whole thing is done. That would make it possible to run de-dup with bounded memory. However, the extra I/Os required to verify duplicate data cannot be avoided.

Currently the dedup code (in sbin/hammer/cmd_dedup.c) kicks off in hammer_cmd_dedup(); scan_pfs() calls process_btree_elm() for every data record in the B-Tree. There is an RB tree constructed of data records, keyed on their CRCs. process_btree_elm() has an easy job -- for every new record, it checks for a matching CRC in the tree; if it finds one, it attempts a dedup ioctl (the kernel performs a full block comparison, don't worry).

There is a really straightforward way to dramatically reduce memory use by dedup at a time cost -- run a fixed number of passes, each pass only storing records in the RB tree where (CRC % numpasses == current_pass). After each pass, clear out the CRC RB tree. This will not run dedup in bounded space, but it is really straightforward to do and can result in dramatic memory-use reductions (16 passes should reduce memory use by a factor of 16, for example). I've done a crude patch to try it, changing only ~20 lines of code in cmd_dedup.c. Something like this might work (very rough at the moment):

in hammer_cmd_dedup():

-	scan_pfs(av[0], process_btree_elm);
+	for (passnum = 0; passnum < numpasses; passnum++) {
+		scan_pfs(av[0], process_btree_elm);
		...
		assert(RB_EMPTY(dedup_tree));
		...
+	} /* for numpasses */

in process_btree_elm():

	de = RB_LOOKUP(dedup_entry_rb_tree, dedup_tree, scan_leaf->data_crc);
	if (de == NULL) {
+		/* CRC out of range for this pass; skip the record */
+		if (scan_leaf->data_crc % numpasses != passnum)
+			goto end;

To run in bounded space is also possible, but it would require a variable number of passes. Imagine having a fixed number of CRC RB records you are willing to create each pass, MAXCRC. In each pass, you keep accepting blocks with new CRCs into the RB tree until you've accepted MAXCRC of them. Then you record the highest accepted CRC for that pass and continue walking the disk, deduping blocks with matching CRCs but not accepting new ones into the tree. On the next pass, you accept records with a CRC higher than the highest one you accepted on the last pass, again up to MAXCRC. Between passes, you clear the CRC RB tree.

Example: let's say I can have two CRCs in my tree (I have an old computer) and my FS has records with CRCs: [A B C A A B B D C C D D E]. On pass one, I'd store A and B in my RB tree as I see records; I'd record B as the highest CRC I accepted on pass one; then I'd finish my disk walk and dedup all As and Bs. On pass two, I'd see a C and then later a D; I'd dedup C and D blocks on this pass and record D. And so on, until I've deduped the highest CRC on disk. This would be a pretty neat way to do dedup! But it would involve more work than the fixed-numpasses approach.

Either would be a pretty good project for someone who wanted to get started ~~breaking~~ working on DragonFly. There is very little that can go wrong with dedup strategies -- the kernel validates all data records before deduping. In fact, a correct (if stupid) approach that would involve nearly no memory commitment would be to run dedup ioctls for every data record against every other data record... -- vs
Re: Can DFly mount ext4?
Any plans of making DFly able to mount ext4?

ext4 is a much more complex filesystem than ext2/3, unfortunately: there is a B-tree for directory structures and a new extent-mapping scheme. As far as I know, no one in DFly is working on ext4 support... do any of the BSD systems have ext4 support we could look at? -- vs
Re: Hammer deduplication needs for RAM size
I deduped a dataset from ~600GB down to 396GB on a system with 256MB of physical RAM and a 32GB swapcache. Peak virtual size of 'hammer dedup' was in the 700MB range. double_buffer was on. Performance was pretty reasonable and the system was plenty usable the whole time. I don't remember how long it took, though. -- vs
Re: 2.10 Release schedule - Release will be April 23rd 2011
On Wed, Apr 6, 2011 at 10:09 PM, Matthew Dillon dil...@apollo.backplane.com wrote: There will be a ton of features in this release, including major compiler toolchain updates, better ACPI, better swapcache, a PF upgrade, HAMMER live dedup, and many many other goodies.

Specifically:

* gcc 4.4.5 is now the default compiler (replacing gcc 4.1)
* binutils 2.21 is now the default binutils (replacing binutils 2.17)
* PF is synced with the version in OpenBSD 4.4
* HAMMER now supports offline dedup and live dedup
* The MP lock is now a token
* __The global tokens no longer take the MP lock! :D__
* Improved HAMMER write performance
* Caching for more kernel slabs and thread structures/thread stacks (faster fork() / thread creation)
* Support for up to 63 CPUs on x86_64 systems
* New FIFO LWKT token contention algorithm (fair waiting)
* MONITOR/MWAIT support during LWKT token contention (saves _a lot_ of power)
* New ACPI work and PCI interrupt routing
* The MP lock has been removed from tmpfs
* Google Code-In submissions: many zalloc users have been converted to objcaches
* __Lots__ of bugfixes

I'd love to see benchmarks comparing 2.10 to 2.8 and 2.6; we've come very far in the ~year since 2.6. -- vs
Re: RegressionTest Results
Why is aio disabled in the default kernel? -- vs
Re: vlc assertion: z->z_Magic == ZALLOC_SLAB_MAGIC in _slabfree
On Thu, Nov 11, 2010 at 3:59 AM, Siju George sgeorge...@gmail.com wrote:

Hi, playing an flv file using vlc gave distorted sound -- kind of echoing. After I quit the player, it crashed with:

[0x282e00b0] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
[flv @ 0x28509000] skipping flv packet: type 18, size 5307, flags 0
Compiler did not align stack variables. Libavcodec has been miscompiled and may be very slow or crash. This is not a bug in libavcodec, but in the compiler. You may try recompiling using gcc >= 4.2. Do not report crashes to FFmpeg developers.
mdb:71, lastbuf:0 skipping granule 0
QPainter::begin: Paint device returned engine == 0, type: 1
[0x28443920] pulse audio output: No. of Audio Channels: 2
mdb:255, lastbuf:0 skipping granule 0
mdb:255, lastbuf:0 skipping granule 0
mdb:255, lastbuf:110 skipping granule 0
mdb:255, lastbuf:110 skipping granule 0
mdb:255, lastbuf:219 skipping granule 0
assertion: z->z_Magic == ZALLOC_SLAB_MAGIC in _slabfree
Abort trap: 6 (core dumped)

The sound still continued, at greater speed, for some more time and then stopped. vlc-1.0.6nb4 on 2.9-DEVELOPMENT: DragonFly v2.9.1.10.g4d623-DEVELOPMENT #1: Wed Nov 10 12:14:34 IST 2010

Cool. Do you happen to have the core dump? It should be a file like vlc.core or something, in the directory where you ran VLC. -- vs
Re: Hammer filesystem
I've regularly run hammer on a 20GB disk, and currently on a pair of 40s; my snapshot retention time is set to 3600 days; no explosions yet. As long as you keep an eye on ~~the rearview mirror~~ df -h and reblock regularly, you'll be fine. Even if your hammer fills up, you can reclaim space by pruning snapshots on less important PFSes, such as /usr. -- vs
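For reference, the routine maintenance above maps to hammer(8) directives along these lines (a sketch -- check hammer(8) on your release for exact syntax and which PFS paths apply to your layout):

```
df -h                          # watch free space
hammer cleanup                 # run per-PFS snapshot/prune/reblock configs
hammer prune-everything /usr   # reclaim space on a less important PFS
```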
Re: sound no longer works for some programs
On Wed, Oct 20, 2010 at 4:08 AM, Tomas Bodzar tomas.bod...@gmail.com wrote (replying to Chris Turner c.tur...@199technologies.org and Pierre Abbat):

It's not only about that. There is a LOT of improvement in audio on OpenBSD: http://undeadly.org/cgi?action=article&sid=20091012150452 . These are worth reading: http://www.openbsd.org/papers/asiabsdcon2010_sndio_slides.pdf and http://www.openbsd.org/papers/asiabsdcon2010_sndio.pdf , and aucat (http://www.openbsd.org/cgi-bin/man.cgi?query=aucat&apropos=0&sektion=0&manpath=OpenBSD+Current&arch=i386&format=html) is working wonderfully.

Very nice! sndio looks tasteful and fairly clever. -- vs
Re: Suggestion for hammer cleanup
On Mon, Oct 18, 2010 at 9:31 PM, elekktrett...@exemail.com.au wrote:

What about hammerd?

* Starts when the system starts
* Wakes up every 10 seconds or so and checks current system load/memory usage, to estimate whether it's appropriate to run a cleanup operation at this time
* Can have a text-file configuration where you can specify a maximum size (GB) of history per PFS, or a % of disk space
* Remembers the last time it was run (stores the information on disk in some text file)
* Maybe it can detect that it's a laptop running on battery, and defer cleanup until power is plugged in?

The daemon itself would be fairly inexpensive to run. Petr

I rather like the idea of hammer(e)d; I've been joking for a while about a HammerTimeDaemon, which would watch for specific udev events from a certain USB key/disk and hammer mirror-stream onto it... hammerd would be a nice place to put hammer things that run on conditions rather than on a schedule. -- vs
Re: Firefox still crashes
What's a magazine? Pierre

In libc, nmalloc (lib/libc/stdlib/nmalloc.c) provides malloc() (from malus locus, 'bad place') and free() for single-threaded and multithreaded applications. In the DragonFly 2.4 release cycle, the original allocator (phkmalloc, inherited from FreeBSD) was replaced with a port of the kernel slab allocator; in the 2.8 release cycle I committed some work to change the multithreaded strategy.

The DragonFly 2.4 and 2.6 libc allocator had two strategies, one for small requests and one for large requests; large requests are served directly via mmap. Small requests are served from 64KB, 64KB-aligned regions of memory called 'slabs'. Each slab only services requests of a given size, minimizing fragmentation. (The DragonFly libc slab allocator was fairly different from the original Sun design -- the Sun design has variable-size slabs and a hash table for block-to-slab mappings.) For multithreaded applications, the allocator kept track of four sets of slab structures; threads would attempt to use the set they had used most recently. If they failed to lock that set, they would move on to the next one.

The 2.7/2.8 allocator has a new structure -- a magazine. A magazine is a fixed-size array of blocks of the same size. Each thread carries a pair of magazines; when a thread tries to allocate something, it first checks its magazines for a buffer _without any locking_. If the magazines cannot support the allocation, a central collection of magazines, called the 'depot', is locked and a magazine is retrieved. If the depot is empty, we fall back to the slab allocator. This design is also from Sun -- see the paper 'Magazines and Vmem: Extending the Slab Allocator to Many CPUs and Arbitrary Resources'. When I last measured, in the 2.6 release cycle, the magazine layer sped up sysbench OLTP / MySQL by approximately 20%.
The 2.7/2.8 allocator also contains work to reduce the number of mmap/munmap system calls relative to the earlier allocator; rather than immediately unmapping a slab when it has no outstanding allocations, we keep up to 64 old slabs around, and we attempt to allocate slabs in bursts from the system. When I last measured, the reduction in mmap/munmap calls was fairly dramatic.

The latest bug was a fairly unfortunate one -- most of the locks in nmalloc use libc's spinlocks. The depot locks, however, used pthread_spinlocks; when nmalloc was linked against libc, it used the stub pthread_spinlocks in libc rather than the versions in libthread. This meant that accesses to the depot magazine lists were not synchronized at all, and the magazine lists were getting corrupted. Oops... -- vs
Importing history into hammer pfs?
Hi, I have a certain dataset in a hammer PFS, which takes care of nightly snapshots for me. But I have about a year's worth of snapshots from before I used DragonFly, taken with other tools. Is there any way I can import these snapshots into my PFS? Would my best bet be to create a new PFS, set the clock to the date of the first snapshot, copy each day over, take a hammer snapshot, delete the data, change the clock to the date of the next snapshot, and repeat? Thanks, -- vs
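The clock-rewind procedure described above might be scripted roughly like this. All paths are hypothetical, the date format is the BSD ccyymmddHHMM form, and whether rewinding the clock interacts sanely with HAMMER's transaction ids is exactly the open question here:

```
# For each old snapshot, oldest first:
date 201001010000                                    # 1. set clock to the snapshot's date
rsync -a --delete /old-snaps/2010-01-01/ /pfs/data/  # 2. sync that day's data in
hammer snapshot /pfs/data /pfs/data/snapshots        # 3. take a hammer snapshot
# 4. repeat with the next snapshot's date
```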