CPU soft lockup in usb-audio
I've spent months hunting this random crash bug until today, when I finally found a way to reproduce it reliably (running a piece of software that I develop under valgrind). Log received over netconsole: http://pastie.org/5848168 Is this a known issue, or do you need more information to debug it? -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: urandom is too slow
On 30.10.2012 23:38, Alan Cox wrote:
> If you want to wipe a disk issue a security erase command via hdparm. There is no guarantee that simply writing crap all over it will re-use the same sectors of physical media, and for a flash drive it causes massive wear and takes forever while a security erase is normally near immediate.
> Alan

Thank you for your answers; they should be very helpful for someone who is actually blanking or shredding their disks. However, I am just genuinely interested in why no better CSPRNG algorithm is used in the kernel (is it simply because no one has sent a patch, or am I missing something?).
urandom is too slow
Apparently there has been little or no development on urandom, even though the device is in widespread use for disk shredding and similar purposes. The device emits data at a rather slow rate of 19 MB/s even on modern hardware, where other software-based PRNGs could do far better. An even better option seems to be using AES to encrypt zeroes under a random key, allowing rates of up to 500 MB/s on hardware with AES-NI instructions. Why is urandom so slow, and why isn't AES hardware acceleration utilized?
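Since the 19 MB/s figure above is hardware-dependent, here is a minimal sketch of how one might measure it, plus an illustration of the "encrypt a counter under a random key" idea. Note the assumptions: the chunk size and volume are arbitrary, and SHA-256 stands in for AES-CTR only because Python's standard library has no AES; the construction shown (random key, incrementing counter) is the same shape, not the kernel's actual generator.

```python
import os
import time
import hashlib

CHUNK = 1 << 20  # 1 MiB per read; arbitrary choice


def throughput_mb_s(read_chunk, total_bytes):
    """Time how fast read_chunk() produces data, in MB/s."""
    start = time.perf_counter()
    produced = 0
    while produced < total_bytes:
        produced += len(read_chunk())
    elapsed = time.perf_counter() - start
    return produced / elapsed / 1e6


def urandom_chunk():
    # os.urandom reads from the kernel CSPRNG (same pool as /dev/urandom)
    return os.urandom(CHUNK)


# Counter-mode keystream sketch: hash (key || counter), incrementing the
# counter for each block. SHA-256 is a stand-in for AES here.
_key = os.urandom(32)
_counter = 0


def keystream_chunk():
    global _counter
    out = bytearray()
    while len(out) < CHUNK:
        out += hashlib.sha256(_key + _counter.to_bytes(8, "big")).digest()
        _counter += 1
    return bytes(out[:CHUNK])


if __name__ == "__main__":
    print("urandom:   %6.1f MB/s" % throughput_mb_s(urandom_chunk, 16 * CHUNK))
    print("keystream: %6.1f MB/s" % throughput_mb_s(keystream_chunk, 16 * CHUNK))
```

A Python keystream will of course be far slower than an in-kernel AES-NI implementation; the point is only the shape of the comparison, not the absolute numbers.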
Re: 2.6.23-rc3 USB segfaults + urb status -32
Sorry about the late reply. I have been extremely busy with other things.

> Does 2.6.22 work fine?

Not perfect, but it does not crash spontaneously. I have been running 2.6.22-ck1 since my last email with only one crash, and today I have tortured 2.6.22.5 as badly as I can (switching the bConfigurationValue of the sound card, unplugging USB devices on the fly, doing heavy I/O, etc.) without crashes. However, with 2.6.23-rc5 I still got segfaults on boot¹ (log1) and a few dozen urb status -32 messages by powering off the sound card while it was in use (log2), but no crashes there. It did finally crash (log3), though, after about three hours of coding + listening to music on a text console (no Nvidia drivers nor any other external modules). Apparently it reset all USB buses, including the one that my system disk is connected to (there is nothing else on the same port), effectively killing the system, even though technically it didn't crash. It also seems that .23-rc5 no longer exposes bConfigurationValue and a few other settings under /sys/bus/usb/devices/*/, so I couldn't test whether switching the bConfigurationValue of the sound card still causes crashes² there (it has been a reliable way to do that in the past). I could not reproduce the excessive log flood of urb status -32 mentioned in my original message. The logs are here: http://delenn.homelinux.net/kernel-usb/

¹) I gather that usb_id is something related to udev. It does not crash with .22 kernels, only with the .23 series.
²) OOPSes, to be specific, IIRC.
2.6.23-rc3 USB segfaults + urb status -32
My system is unusably unstable using this kernel. On the last boot it started flooding urb status -32 to the kernel log at a rate of several megabytes per second. Now it printed segfaults before the system had finished booting, and then some other errors... The full log is here: I couldn't find information on these bugs. If you need more debug info, please contact me. I can also reproduce the errors without the Nvidia kernel module, if that is really necessary (note, however, that the first segfaults in this log happen before the module loads). I think that part of the USB problems may be related to the M-Audio FastTrack Pro USB sound card, because I have managed to crash the kernel USB system before (with a 32-bit kernel, and also on another computer) by switching the bConfigurationValue of that card. Running on x86-64 with a Core2 Q6600 B3.

Linux version 2.6.23-rc3 ([EMAIL PROTECTED]) (gcc version 4.1.1 (Gentoo 4.1.1-r3)) #1 SMP PREEMPT Sat Aug 25 10:01:23 EEST 2007
Command line: root=/dev/md3 usbhid.mousepoll=2 usb-storage.delay_use=0
BIOS-provided physical RAM map:
 BIOS-e820: - 0009f800 (usable)
 BIOS-e820: 0009f800 - 000a (reserved)
 BIOS-e820: 000f - 0010 (reserved)
 BIOS-e820: 0010 - 7fee (usable)
 BIOS-e820: 7fee - 7fee3000 (ACPI NVS)
 BIOS-e820: 7fee3000 - 7fef (ACPI data)
 BIOS-e820: 7fef - 7ff0 (reserved)
 BIOS-e820: f000 - f400 (reserved)
 BIOS-e820: fec0 - 0001 (reserved)
Entering add_active_range(0, 0, 159) 0 entries of 256 used
Entering add_active_range(0, 256, 524000) 1 entries of 256 used
end_pfn_map = 1048576
DMI 2.4 present.
ACPI: RSDP 000F6CF0, 0014 (r0 GBT )
ACPI: RSDT 7FEE3040, 0034 (r1 GBTGBTUACPI 42302E31 GBTU 1010101)
ACPI: FACP 7FEE30C0, 0074 (r1 GBTGBTUACPI 42302E31 GBTU 1010101)
ACPI: DSDT 7FEE3180, 49F4 (r1 GBTGBTUACPI 1000 MSFT 10C)
ACPI: FACS 7FEE, 0040
ACPI: HPET 7FEE7CC0, 0038 (r1 GBTGBTUACPI 42302E31 GBTU 98)
ACPI: MCFG 7FEE7D40, 003C (r1 GBTGBTUACPI 42302E31 GBTU 1010101)
ACPI: APIC 7FEE7BC0, 0084 (r1 GBTGBTUACPI 42302E31 GBTU 1010101)
Entering add_active_range(0, 0, 159) 0 entries of 256 used
Entering add_active_range(0, 256, 524000) 1 entries of 256 used
Zone PFN ranges:
  DMA 0 -> 4096
  DMA32 4096 -> 1048576
  Normal 1048576 -> 1048576
Movable zone start PFN for each node
early_node_map[2] active PFN ranges
  0: 0 -> 159
  0: 256 -> 524000
On node 0 totalpages: 523903
  DMA zone: 56 pages used for memmap
  DMA zone: 10 pages reserved
  DMA zone: 3933 pages, LIFO batch:0
  DMA32 zone: 7108 pages used for memmap
  DMA32 zone: 512796 pages, LIFO batch:31
  Normal zone: 0 pages used for memmap
  Movable zone: 0 pages used for memmap
ACPI: PM-Timer IO Port: 0x408
ACPI: Local APIC address 0xfee0
ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled) Processor #0 (Bootup-CPU)
ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled) Processor #1
ACPI: LAPIC (acpi_id[0x02] lapic_id[0x03] enabled) Processor #3
ACPI: LAPIC (acpi_id[0x03] lapic_id[0x02] enabled) Processor #2
ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
ACPI: IOAPIC (id[0x02] address[0xfec0] gsi_base[0])
IOAPIC[0]: apic_id 2, address 0xfec0, GSI 0-23
ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
ACPI: IRQ0 used by override.
ACPI: IRQ2 used by override.
ACPI: IRQ9 used by override.
Setting APIC routing to flat
ACPI: HPET id: 0x8086a201 base: 0xfed0
Using ACPI (MADT) for SMP configuration information
Allocating PCI resources starting at 8000 (gap: 7ff0:7010)
SMP: Allowing 4 CPUs, 0 hotplug CPUs
PERCPU: Allocating 31200 bytes of per cpu data
Built 1 zonelists in Zone order. Total pages: 516729
Kernel command line: root=/dev/md3 usbhid.mousepoll=2 usb-storage.delay_use=0
Initializing CPU#0
PID hash table entries: 4096 (order: 12, 32768 bytes)
time.c: Detected 2400.000 MHz processor.
Console: colour VGA+ 80x25
console [tty0] enabled
Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes)
Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes)
Checking aperture...
Memory: 2055772k/2096000k available (4847k kernel code, 39476k reserved, 1890k data, 268k init)
Calibrating delay using timer specific routine.. 4802.57 BogoMIPS (lpj=2401287)
Mount-cache hash table entries: 256
CPU: L1 I cache: 32K, L1 D cache: 32K
CPU: L2 cache: 4096K
using mwait in idle threads.
CPU: Physical Processor ID: 0
CPU: Processor Core ID: 0
SMP alternatives: switching to UP code
ACPI: Core revision 20070126
Using local APIC timer interrupts. result 1654
Detected 16.666 MHz APIC timer.
SMP
Re: CIFS slowness & crashes
>> nor umount -f
> What are the errors? What is the version of cifs.ko module?

umount2: Device or resource busy
umount: /tmpmnt: device is busy
umount2: Device or resource busy
umount: /tmpmnt: device is busy

Without -f it doesn't print those umount2 errors, just the other two. The version is whatever comes with 2.6.12 or 2.6.13-rc1.

> My tests of reconnection after server crash or network reconnection with smbfs works (for the past year or two) and also of course for cifs. cifs also reconnects state (open files) not just the connection to \\server\share

Reconnection usually (or perhaps always, with new versions) works, but nothing can be done if the server goes permanently offline (or until it comes back online). While the server is down, the programs that were using the CIFS mount, or that try to access it, will halt.

> My informal tests (cifs->samba) showed a maximum of about 20% utilization of gigabit doing large file writes and about double that for large file reads with a single cifs client to Samba over gigabit. Should be somewhat similar to Windows server.

This seems quite bad. Is it waiting for packet confirmations, or what? However, I have never got anything better than about 40 Mo/s with a gigabit network so far. Jumbo frames and using a direct cross-over cable between the clients make no difference. Still, all protocols that I have tried, except for SMB and CIFS, can reach that easily. I'll try to get two Windows machines that I can test with. Perhaps the problem is with the Linux network drivers. Anyway, 20 Mo/s or something like that would not be a big problem. The real problem occurs when the speed drops under 3 Mo/s.
> The most common causes of widely varying speeds are the following:
> 1) memory fragmentation - some versions of the kernel badly fragment slab allocations greater than 4 pages (cifs by default allocates 16.5K buffers - which results in larger size allocations when more than 5 processes are accessing the mount and the cifs buffer pool is exceeded)

Wouldn't this show up as increased system loads?

> 2) write behind due to oplock - smbfs does not do oplock, cifs does - which means that cifs client caching can cause a large amount of writebehind data to accumulate (with great performance for a while) - then when memory gets tight due to the inode caching in linux's mm layer, the cifs client is asked to write out a large amount of file data at one time (which it does synchronously).
>
> Both of these are being improved. You can bypass the inode caching on the client by using the cifs mount option "forcedirectio"

This option has little or no effect. It still runs slowly (2.4 Mo/s right now). My benchmark reads data in 50 Mio chunks, as quickly as it can, calculating the average read speed. The files being read are bigger than 4 Gio. I haven't tried writing anything (the shares that I play with are read-only). For the record, I am using O_RDONLY | O_LARGEFILE on open and then pread for all reads. However, I am getting similar results with all programs, so I don't think that the reading method really matters that much.

- Tronic
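The read loop described above (open with O_RDONLY | O_LARGEFILE, then pread in large chunks, averaging the rate) can be sketched roughly as follows. This is a reconstruction, not the actual benchmark program; the path argument is a placeholder, and the 50 MiB default merely mirrors the chunk size mentioned.

```python
import os
import time


def read_benchmark(path, chunk_size=50 * 1024 * 1024):
    """Read a file sequentially with pread in fixed-size chunks.

    Returns (total_bytes_read, average MB/s).
    """
    # O_LARGEFILE matters on 32-bit systems for files over 4 GiB;
    # it may be absent (or a no-op) on some platforms, hence getattr.
    flags = os.O_RDONLY | getattr(os, "O_LARGEFILE", 0)
    fd = os.open(path, flags)
    try:
        total = 0
        offset = 0
        start = time.perf_counter()
        while True:
            data = os.pread(fd, chunk_size, offset)
            if not data:  # EOF
                break
            total += len(data)
            offset += len(data)
        elapsed = time.perf_counter() - start
        return total, total / elapsed / 1e6
    finally:
        os.close(fd)
```

Pointed at a file on the CIFS mount (e.g. `read_benchmark("/tmpmnt/bigfile")`, path hypothetical), this reproduces the access pattern, so results should be comparable across cifs, smbfs, and local reads.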
Re: Supermount
> To mount on demand use autofs. Unmounting and dealing with media removal is the problem.

Granted, that can get pretty close. However, having to use /auto/* instead of mounting directly where required often limits its usefulness quite a bit. Thus, I don't see it as a real alternative.

- Tronic
Re: Supermount
> Supermount is obsolete there are other tools in userspace that do the job perfectly.
> e.g ivman which uses hal and dbus.

They cannot mount on demand, and thus cannot do the same job. The boot partition, for example, is something that should only be mounted when required. The same obviously also goes for network filesystems in many cases (i.e. to avoid having a zillion idling connections to the server).

> Also there are other fs like supermount e.g submount etc...

I wouldn't care about the implementation (original supermount, supermount-ng, submount or something else). Getting the job done is what counts.

- Tronic
CIFS slowness & crashes
I mailed [EMAIL PROTECTED] (the guy who wrote the driver) about this a month ago, but didn't get any reply. Is anyone working on that driver anymore? The problems that I wrote him about were:

1. CIFS VFS hangs entirely if the server crashes or otherwise goes offline. Every process touching the mount halts too and cannot be killed (but they are not zombies). System loads start climbing, and eventually the entire system will die (after the load reaches about 500). It is not possible to umount with either smbumount (hangs) or umount -f (prints errors but doesn't umount anything). It won't recover without a reboot, even if the server comes back online. This problem has been around as long as I have used SMBFS or CIFS; there has only been slight variation from one version to another. Sometimes it is possible to umount them (after some pretty long timeout), sometimes it is not. It seems as if the problem was being fixed, but none of the fixes really worked.

2. Occasionally the transmission speeds go extremely low for no apparent reason. While writing this, I am getting 0.39 Mo/s over a gigabit network. Using FTP to read the same file gives 40 Mo/s, which is also the speed at which the file can be read locally on the server. Remounting the CIFS share does not help, nor does restarting Samba. However, using SMBFS I can get 20 Mo/s, which is a bit better but still far from what it should be. It is important to mention that sometimes CIFS does work faster (about as quickly as SMBFS) and that this misbehavior occurs randomly. During a CIFS transfer, both computers seem to be idling; the CPU usage (including I/O wait) is almost none. During an SMBFS transfer the server's smbd process uses about 15% CPU and the client is almost idle. The client is a P4 3.4 GHz and the server is an Athlon64 3000+. I also tested with a Windows XP client machine and found that this slowness issue does not happen with it, using the very same Samba server that the Linux CIFS mount is using.
- Tronic
Supermount
Is there a reason why this magnificent piece of software is not already in the mainline? It seems to work very well and provides functionality that simply isn't available otherwise. For those who are not familiar with it: this system mounts a filesystem on demand when the mount point is accessed and automatically umounts it afterwards. Unlike autofs, this does not require a special automount filesystem to be mounted; the actual filesystems can be mounted directly wherever desired. Also, it "just works": the CD drive will eject when the button is pressed, without having to wait for the umount timeout to pass. I haven't looked inside to find out HOW it actually does it, because I simply don't care, as long as it just works.

- Tronic