Emerald themes
Does anyone know why the Emerald themes have been dropped from the Debian repositories? Or, were they never there and I was just getting them from Shame's repositories and not paying attention to where they came from? Some of the Emerald themes have a look that is far superior, imo, to anything available outside of Emerald. -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Re: bug #350639
Thorny wrote:

On Sun, 17 May 2009 12:31:05 -0700, Freddy Freeloader posted:

[...] I have to ask why. Why is this left up to every user of testing to fix this problem themselves when the fix is so simple? [...]

One possible answer to this question would be that users of "testing" are supposed to be able to troubleshoot and fix problems in "testing". The maintainer may rely on that and not think there is a pressing need to do it.

What you say about testing is even more true of Sid, yet the Sid version is fixed. Following your logic there was no need to fix the bug in Sid at all, as Sid users are supposed to be even more skilled at fixing the problems they run across. Sid is supposed to be more buggy than testing by its very nature.

Why can't this fix be uploaded to the Debian repositories? It's not like automounting of CDs/DVDs and portable USB devices is something hardly anyone does on a daily basis.

All I could suggest is to ask the maintainer directly; he may not read this list. The email address is available in the bug report or with the package itself.

[...] In my mind there is no good reason for this fix to go into Sid and then sit there until the dependencies are satisfied for that version number.

Well, that is the standard flow. I hear you that it isn't convenient for you, but it is standard.

It is also standard flow to fix bugs that are found in testing, not to always wait until a new version comes down from Sid. That's the purpose for which testing exists. Those bugs not found while a package is in Sid are fixed in testing when fixing them does not require a new dependency or break some other package. So why should this bug be any different? It exists in testing, so it should be fixed in testing too, not allowed to just sit for months when it's a very simple fix. I could see the lag if this were a bug that's difficult to fix, in some package that hardly anyone uses, but that is not the case with g-v-m or this bug. It exists in every GNOME installation by default.

[...] Testing is what the biggest portion of Debian users have on their desktops.

Do you have any documentation to support this? I do understand how you might come to think that from reading lists and forums, but I've never seen any documentation of version percentages for desktops.

[...] Is the Debian development process in that much trouble, i.e. short of help, or does it have such unreasonable versioning rules that something this simple can't be fixed promptly?

Following the standards is not necessarily an indication of "being in trouble"; it's an indication of following standards and flow. "Stable" is the version that is stable, probably in the sense that that stability is a result of the Debian "flow". This isn't important for your question, but I personally don't like to automount; I prefer to mount as I choose and as needed. YMMV.

I think I do understand your frustration, and you've got a good example to work with, but standards are standards, and the flow has given us a very good, stable Debian for many years. I want it to remain the same.
Re: bug #350639
Matteo Riva wrote:

On Sun, May 17, 2009 at 9:58 PM, Javier Barroso wrote:

It seems resolved in unstable:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=525183

I have rebuilt the package from source as suggested by a user in that bug report, but now the update manager flags gnome-volume-manager as needing an update. How can I avoid this?

You might try using apt-pinning. I'm not sure it will work, though, as they are both the same version. I just use the auto-update wizard and uncheck that box before allowing it to proceed.

Thanks.
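A sketch of the apt-pinning idea suggested above; the file path is the standard apt location, but the pin details are an assumption, and since the rebuilt package carries the same version number as the archive's build, a pin may not distinguish them, making a dpkg hold the surer option:

```
# /etc/apt/preferences -- apt_preferences(5) pin that refuses the
# archive build of the package (a priority below 0 means "never
# install this version from this source"):
Package: gnome-volume-manager
Pin: release a=testing
Pin-Priority: -1

# Alternatively, place the installed (locally rebuilt) package on hold
# so update tools stop offering the archive build:
#   echo "gnome-volume-manager hold" | dpkg --set-selections
```

The hold survives upgrades until it is cleared with `echo "gnome-volume-manager install" | dpkg --set-selections`.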
Re: bug #350639
Javier Barroso wrote:

Hi,

On Sun, May 17, 2009 at 9:31 PM, Freddy Freeloader wrote:

I'd like to point this bug out as it has been around now for 3 1/2 months with no official resolution. What's worse is that this bug was caused by the Debian gnome-volume-manager maintainer. Nobody else is responsible for it. He disabled automount when he compiled the gnome-volume-manager package.

It seems resolved in unstable:
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=525183

I think the maintainer forgot to link these bugs as duplicates. They are working on the transition to GNOME 2.26, so we should be patient.

Ummm. It took me approximately 5 minutes to download the source, make the changes in debian/rules, and compile a new package. I can count on one hand the number of Debian packages I have created, so it can't take an experienced developer any longer to do the same thing than it took me. I also think that waiting 3 1/2 months for a 5-minute fix before complaining is being patient.

Regards,
bug #350639
I'd like to point this bug out as it has been around now for 3 1/2 months with no official resolution. What's worse is that this bug was caused by the Debian gnome-volume-manager maintainer. Nobody else is responsible for it. He disabled automount when he compiled the gnome-volume-manager package. That's right. This bug was caused by disabling automount in the debian/rules file. It's now 3 1/2 months later, and the person, whoever it is, that maintains this package hasn't gotten around to just changing about 10 letters in one line of text and then recompiling and uploading the package again.

I have to ask why. Why is this left up to every user of testing to fix themselves when the fix is so simple? Why can't this fix be uploaded to the Debian repositories? It's not like automounting of CDs/DVDs and portable USB devices is something hardly anyone does on a daily basis. This functionality is used almost daily by everyone who has a computer. However, Debian testing users must either fix this themselves or manually mount USB and optical disks, as the Debian dev just ignores testing.

In my mind there is no good reason for this fix to go into Sid and then sit there until the dependencies are satisfied for that version number. It's just a matter of enabling a flag that was accidentally disabled. It's not like there are any dependency changes.

I've been a Debian user for 5 or 6 years now and very much dislike the idea of using another distro, but I just can't see why Debian is leaving bugs like this unfixed for what is probably a majority of its desktop users. Testing is what the biggest portion of Debian users have on their desktops. To flat out ignore them seems pretty strange. Is the Debian development process in that much trouble, i.e. short of help, or does it have such unreasonable versioning rules that something this simple can't be fixed promptly?

I'd gladly volunteer to help, but I'm no developer. I'm willing to learn anything, but I've read that Debian devs have no desire to do any teaching to get help, so I've never offered. However, if things this simple go unfixed for this long, then maybe it's time for some change.
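For anyone wanting to apply the same 5-minute local fix, the rebuild described in this thread goes roughly like this. The privileged and network-dependent steps are shown as comments; the exact flag name in debian/rules is an assumption for illustration (the thread only says automount was disabled there):

```shell
# Fetch and rebuild gnome-volume-manager with automount re-enabled.
# apt-get source gnome-volume-manager          # grab the source package
# sudo apt-get build-dep gnome-volume-manager  # install build deps
# cd gnome-volume-manager-*/

# The one-line change in debian/rules, simulated here on a sample copy
# (the variable and flag names are illustrative, not taken from the
# actual package):
printf 'DEB_CONFIGURE_EXTRA_FLAGS += --disable-automount\n' > rules.sample
sed -i 's/--disable-automount/--enable-automount/' rules.sample
cat rules.sample

# dpkg-buildpackage -rfakeroot -us -uc         # build the fixed .deb
# sudo dpkg -i ../gnome-volume-manager_*.deb   # install it
```

The rebuilt package keeps the archive's version number, which is why the update manager then offers to "upgrade" it back, as discussed elsewhere in this thread.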
lenny to squeeze dist-upgrade
I did a dist-upgrade on a machine running Linux RAID today and ran across an interesting situation. After the upgrade I rebooted, and the 2.6.26-2-686-bigmem kernel failed to boot. It couldn't find /dev/md1 (RAID5), which is the root filesystem. There are two other Linux RAID devices on the machine: /dev/md0 is a mirror mounted as /boot, and /dev/md2 is RAID0 and is swap.

From busybox I ran dmesg and found that only 2 out of 4 drives for the RAID5 device had been found. Uninstalling dmraid solved the problem. BTW, the 2.6.26-1-686-bigmem kernel booted fine, even with dmraid installed.

This seems to be a bug, but I have no idea what software package to report it against. I don't know if this is a problem with the kernel in question, dist-upgrade, dmraid, or some combination thereof.
Re: Debian's glacial movement--a rant
Harry Rickards wrote:

On 12 May 2009, at 07:42, JoeHill wrote:

Freddy Freeloader wrote:

Just an update on this. I have asked on #gnucash if the patch for this bug can be backported to 2.2.6 upstream, and they say this is a Debian problem as it is fixed in Gnucash. They will not do anything to help. Here's the short conversation from #gnucash:

* Now talking on #gnucash
* Topic for #gnucash is: Free GPL Personal and Small Business Accounting || Please don't ask to ask, just ask and wait! || publically-logged channel || latest stable: 2.2.9
* Topic for #gnucash set by jsled at Wed May 6 07:46:39 2009

Is it possible to have the patch for bug #564928 backported to 2.2.6? I have all my business records in gnucash and this bug crashes gnucash every time I start it. I run Debian and who knows how long it will take for Debian to catch up. Right now they are almost a year behind in version numbers.

* twunder (~twun...@pool-96-234-154-75.bltmmd.fios.verizon.net) has joined #gnucash
* warlord-afk is now known as warlord

garyk: you need to ask that of Debian. The GnuCash releases dont have the bug.

* warlord is now known as warlord-afk

The Debian devs say it needs to be done upstream as they won't/can't backport the patch. That's why I asked here.

This looks to be the classic answer given by my stepkids when they were growing up and didn't want to take responsibility for anything: Who's responsible? Not me. Not me was responsible for everything done wrong in our house for a couple of years. I didn't know one person could get into that much mischief all on their own. ;)

Gnucash version 2.2.6 was released July 31, 2008. The fixed version of Gnucash, 2.2.9, was released February 4, 2009. It's now May 11, 2009 and counting. The patch for this critical bug was released 3 months ago and nothing has been done. It's now 5 weeks since I was told I was being a jerk for saying Debian was slow in moving on this, as they were 3 versions, and 9 months, behind. Well, Debian is now 3 versions and 10 months behind, the patch has been available for 120+ days, and the bug has been archived in Debian since the day after it was reported. It couldn't be that this critical bug just isn't on anyone's list of priorities, could it?

There were a couple of suggestions made a while back. Just curious, did you try any of those? One was to build a patched version of Gnucash yourself, with a patch supplied by Florian Kulzer along with detailed instructions on how to build. Another was to contact Debian backports and ask if anyone there would build a deb for you.

-- J

Sorry if I've got something wrong; I haven't been reading this thread in detail. But does the OP just require someone to build a gnucash deb package for stable, with the version in unstable? Again, sorry if I'm completely wrong.

Thanks,
Harry Rickards

I don't think so, but I couldn't swear to that. I had one more message from "warlord" on #gnucash last night, after I had sent the post to this list. He said that this bug is purely a Debian bug, as it was the result of a patch the Debian devs created.
Re: Debian's glacial movement--a rant
Freddy Freeloader wrote:

Douglas A. Tutty wrote:

On Mon, Apr 06, 2009 at 01:06:00PM -0700, Freddy Freeloader wrote:

Michael Biebl wrote:

Freddy Freeloader wrote:

I'm experiencing a bug in Gnucash that appeared a couple of days ago on my system that makes Gnucash completely unusable for me. I turned in a bug report on Friday, checked on it yesterday, and by today the bug had been blocked from being displayed. It could be found by searching Debian's bug tracker, but only if you know the bug id number. If you just search for bugs in Gnucash the bug does not appear to exist. The bug was closed, and blocked, because it's been fixed upstream in version 2.2.9, which was released by Gnucash in February of this year.

Could you discuss how you're experiencing a new bug ("appeared a couple of days ago") if you're not using gnucash in a new way? I'm sure there are bugs in every piece of software I use. There are a couple of which I am aware (if I care to think on it), but I have work-arounds or I wouldn't have accepted the software for use and would have chosen a different solution. I've never run into the "appeared a couple of days ago" situation where the problem was a new bug; the problem has always been in a different system (or is upstream of the keyboard :))

Doug.

Sorry I didn't answer this sooner, but I just now saw your message. I didn't change anything other than running an apt-get update && apt-get upgrade. I was using Gnucash in exactly the same way I had for the last year. It just started crashing after that when I would close a tab that was created automatically while creating and posting vendor invoices or customer bills. New bug or old bug, I don't know. All I know is it was new to me, as I'd been using Gnucash for more than a year to keep track of my business and suddenly it just didn't work correctly any more.

But that's all right. Just keep on blaming me. I'm surely at fault. Hell, I had only been using Gnucash for a year. An imbecile like me couldn't possibly have developed a stable work flow in that amount of time. I probably did things differently inside of Gnucash every day for that entire year. Starting Gnucash and closing tabs inside it are such technically challenging tasks that users have a very difficult time doing them correctly, and that most likely explains why Gnucash suddenly began to fail to open account files at startup too. It's gotta be the stupid user's fault. Funny ain't it, though, how the upstream version, 2.2.9, has a bug fix for this. I guess they call it the "stupid-user patch", and that's why they won't backport the patch to 2.2.6.

Just an update on this. I have asked on #gnucash if the patch for this bug can be backported to 2.2.6 upstream, and they say this is a Debian problem as it is fixed in Gnucash. They will not do anything to help. Here's the short conversation from #gnucash:

* Now talking on #gnucash
* Topic for #gnucash is: Free GPL Personal and Small Business Accounting || Please don't ask to ask, just ask and wait! || publically-logged channel || latest stable: 2.2.9
* Topic for #gnucash set by jsled at Wed May 6 07:46:39 2009

Is it possible to have the patch for bug #564928 backported to 2.2.6? I have all my business records in gnucash and this bug crashes gnucash every time I start it. I run Debian and who knows how long it will take for Debian to catch up. Right now they are almost a year behind in version numbers.

* twunder (~twun...@pool-96-234-154-75.bltmmd.fios.verizon.net) has joined #gnucash
* warlord-afk is now known as warlord

garyk: you need to ask that of Debian. The GnuCash releases dont have the bug.

* warlord is now known as warlord-afk

The Debian devs say it needs to be done upstream as they won't/can't backport the patch. That's why I asked here.

This looks to be the classic answer given by my stepkids when they were growing up and didn't want to take responsibility for anything: Who's responsible? Not me. Not me was responsible for everything done wrong in our house for a couple of years. I didn't know one person could get into that much mischief all on their own. ;)

Gnucash version 2.2.6 was released July 31, 2008. The fixed version of Gnucash, 2.2.9, was released February 4, 2009. It's now May 11, 2009 and counting. The patch for this critical bug was released 3 months ago and nothing has been done. It's now 5 weeks since I was told I was being a jerk for saying Debian was slow in moving on this, as they were 3 versions, and 9 months, behind. Well, Debian is now 3 versions and 10 months behind, the patch has been available for 120+ days, and the bug has been archived in Debian since the day after it was reported. It couldn't be that this critical bug just isn't on anyone's list of priorities, could it?
Re: Connect, ping, traceroute work, but not surf on the net
Marcelo Laia wrote:

I connect to the net from my notebook like this:

ISP --(ADSL)--> computer --(cable)--> notebook (friend) --(wireless ad-hoc)--> my notebook

My notebook connects; I am able to ping any IP and traceroute resolves, but Firefox, aMSN, Skype, and Empathy don't connect/surf the web. This layout was working very well; since the night of May 7 it didn't work any more. Here is some information: Debian testing, kernel 2.6.29-1-686. At work, from eth0, I surf the net very well.

:~$ cat /etc/resolv.conf
nameserver 200.221.11.100
nameserver 208.67.222.222
nameserver 192.168.0.1

:~$ traceroute www.google.com
traceroute to www.google.com (74.125.113.147), 30 hops max, 60 byte packets
 1  192.168.0.1 (192.168.0.1)  0.474 ms  0.965 ms  1.393 ms
 2  10.1.1.1 (10.1.1.1)  12.775 ms  13.050 ms  13.282 ms
 3  192.168.2.2 (192.168.2.2)  13.694 ms  14.743 ms  15.106 ms
 4  BrT-L10-bnut3703.dsl.brasiltelecom.net.br (201.24.99.254)  50.339 ms  54.201 ms  57.317 ms
 5  BrT-10G4-0-0-bsacoborder.brasiltelecom.net.br (201.10.206.206)  107.511 ms  110.020 ms  115.461 ms
 6  200.163.207.202 (200.163.207.202)  248.044 ms  211.421 ms  236.112 ms
 7  72.14.236.173 (72.14.236.173)  236.264 ms  239.429 ms  242.608 ms
 8  209.85.254.252 (209.85.254.252)  271.682 ms  *  271.750 ms
 9  72.14.239.136 (72.14.239.136)  279.964 ms  280.079 ms  264.343 ms
10  209.85.249.238 (209.85.249.238)  272.560 ms  275.016 ms  280.158 ms
11  64.233.174.117 (64.233.174.117)  280.371 ms  282.002 ms  282.864 ms
12  72.14.236.193 (72.14.236.193)  225.116 ms  216.239.47.250 (216.239.47.250)  228.493 ms  *
13  vw-in-f147.google.com (74.125.113.147)  240.297 ms  246.500 ms  252.203 ms

:~$ ping 200.221.11.100
PING 200.221.11.100 (200.221.11.100) 56(84) bytes of data.
64 bytes from 200.221.11.100: icmp_seq=1 ttl=56 time=59.9 ms
64 bytes from 200.221.11.100: icmp_seq=2 ttl=56 time=58.9 ms
64 bytes from 200.221.11.100: icmp_seq=3 ttl=56 time=57.8 ms
64 bytes from 200.221.11.100: icmp_seq=4 ttl=56 time=52.3 ms
64 bytes from 200.221.11.100: icmp_seq=5 ttl=56 time=58.1 ms
64 bytes from 200.221.11.100: icmp_seq=6 ttl=56 time=56.2 ms
64 bytes from 200.221.11.100: icmp_seq=7 ttl=56 time=59.0 ms
64 bytes from 200.221.11.100: icmp_seq=8 ttl=56 time=56.7 ms
^C
--- 200.221.11.100 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7010ms
rtt min/avg/max/mdev = 52.398/57.416/59.914/2.221 ms

:~$ sudo route
Tabela de Roteamento IP do Kernel
Destino      Roteador         MáscaraGen.    Opções  Métrica  Ref  Uso  Iface
192.168.0.0  *                255.255.255.0  U       0        0    0    wlan0
default      xxx-5e7b4676c1d  0.0.0.0        UG      0        0    0    wlan0

The IP 192.168.0.1 is from host xxx-5e7b4676c1d (my friend's notebook).

:~$ cat /etc/host.conf
multi on

Any clue here? Thank you very much!

I have had very similar problems, and every time it has been a problem with mDNS being used to resolve sites external to my LAN. The easiest fix for me is to just get rid of the avahi-daemon package, as it takes mDNS with it and I don't use any packages that rely on mDNS. The symptoms are the same for me as they are for you: I can ping, traceroute, and use host to resolve URLs from the bash prompt, but when I try to surf the net nothing gets resolved.

To check whether this is the problem, install wireshark, if you don't already have it installed, and do a packet capture while trying to surf to an external website. If you have DNS queries going to 224.xxx.xxx.xxx, avahi-daemon/mdns is the culprit.

There's another fix that doesn't require getting rid of avahi-daemon if you make use of it, but I can't remember what it is off the top of my head. IIRC it has to do with /etc/hosts and localdomain, but I wouldn't swear to that. It's been too long since I wanted to keep avahi-daemon and repaired the problem without removing it, and I've forgotten the exact details.
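The half-remembered non-removal fix above is commonly the "hosts:" line in /etc/nsswitch.conf rather than /etc/hosts; that mapping is an assumption about what was meant, but trimming the mdns plugins from that line stops the libc resolver from sending lookups to multicast DNS (UDP port 5353, group 224.0.0.251) while leaving avahi-daemon installed. A sketch, simulated on a sample copy of the file:

```shell
# A typical Debian hosts line with the mdns plugins present:
printf 'hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4\n' > nsswitch.sample

# Drop the mdns entries so ordinary lookups go straight to DNS
# (on a real system, edit /etc/nsswitch.conf itself):
sed -i 's/^hosts:.*/hosts: files dns/' nsswitch.sample
cat nsswitch.sample   # hosts: files dns

# To confirm the symptom first, tcpdump works as well as wireshark:
#   sudo tcpdump -n udp port 5353   # mDNS queries go to 224.0.0.251
```

If any installed package does depend on mDNS resolution of .local names, keeping `mdns4_minimal [NOTFOUND=return]` and dropping only the trailing `mdns4` is the gentler variant.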
Re: Debian's glacial movement--a rant
Douglas A. Tutty wrote:

On Mon, Apr 06, 2009 at 01:06:00PM -0700, Freddy Freeloader wrote:

Michael Biebl wrote:

Freddy Freeloader wrote:

I'm experiencing a bug in Gnucash that appeared a couple of days ago on my system that makes Gnucash completely unusable for me. I turned in a bug report on Friday, checked on it yesterday, and by today the bug had been blocked from being displayed. It could be found by searching Debian's bug tracker, but only if you know the bug id number. If you just search for bugs in Gnucash the bug does not appear to exist. The bug was closed, and blocked, because it's been fixed upstream in version 2.2.9, which was released by Gnucash in February of this year.

Could you discuss how you're experiencing a new bug ("appeared a couple of days ago") if you're not using gnucash in a new way? I'm sure there are bugs in every piece of software I use. There are a couple of which I am aware (if I care to think on it), but I have work-arounds or I wouldn't have accepted the software for use and would have chosen a different solution. I've never run into the "appeared a couple of days ago" situation where the problem was a new bug; the problem has always been in a different system (or is upstream of the keyboard :))

Doug.

Sorry I didn't answer this sooner, but I just now saw your message. I didn't change anything other than running an apt-get update && apt-get upgrade. I was using Gnucash in exactly the same way I had for the last year. It just started crashing after that when I would close a tab that was created automatically while creating and posting vendor invoices or customer bills. New bug or old bug, I don't know. All I know is it was new to me, as I'd been using Gnucash for more than a year to keep track of my business and suddenly it just didn't work correctly any more.

But that's all right. Just keep on blaming me. I'm surely at fault. Hell, I had only been using Gnucash for a year. An imbecile like me couldn't possibly have developed a stable work flow in that amount of time. I probably did things differently inside of Gnucash every day for that entire year. Starting Gnucash and closing tabs inside it are such technically challenging tasks that users have a very difficult time doing them correctly, and that most likely explains why Gnucash suddenly began to fail to open account files at startup too. It's gotta be the stupid user's fault. Funny ain't it, though, how the upstream version, 2.2.9, has a bug fix for this. I guess they call it the "stupid-user patch", and that's why they won't backport the patch to 2.2.6.
Re: Debian's glacial movement--a rant
Michael Biebl wrote:

Freddy Freeloader wrote:

Hi. I've never been pissed off at Debian before, but I guess there is always a first.

That's usually not a good way to start a discussion (admitted, you said it's a rant, but I'll try to answer anyway).

I'm experiencing a bug in Gnucash that appeared a couple of days ago on my system that makes Gnucash completely unusable for me. I turned in a bug report on Friday, checked on it yesterday, and by today the bug had been blocked from being displayed. It could be found by searching Debian's bug tracker, but only if you know the bug id number. If you just search for bugs in Gnucash the bug does not appear to exist. The bug was closed, and blocked, because it's been fixed upstream in version 2.2.9, which was released by Gnucash in February of this year.

Older bugs that have been fixed are automatically archived, so they no longer show up by default. The BTS allows you, though, to show both archived and unarchived bug reports. If you are using the web frontend, scroll down to the bottom.

Great. The bug has been fixed. Why it needed to be hidden from being displayed is a puzzler for me, but that's the way it is. Now the bad news. Since Gnucash in both Sid and Squeeze is now at version 2.2.6, I only have to wait until Debian works through versions 2.2.7 and 2.2.8 before Gnucash in Debian finally becomes usable for me again in version 2.2.9. As Sid is "only" 9 months behind Gnucash's release schedule at this point, I guess the fact that all my business records for the last couple of years are in Gnucash means I'll be able to start doing my business accounting again sometime after the first of next year, at a minimum, if I wait for Debian.

So what is your point? Do you think the bug report was not correctly handled by the maintainer?

Not at all. I'm saying that because of how far Debian is behind in versions of Gnucash, it's going to be unusable by me at least until next year if Debian stays at its current time lag behind Gnucash releases. As I have all my business records stored in Gnucash, this is a major problem for me. This isn't aimed at any one developer. It's just a commentary on how Debian moves forward. And that's not always a bad thing. In most cases it's fine, as it means stable is exactly that in all meanings of the word, but in this instance it really bites me in a bad way. About my only choices are to spend a couple of days rebuilding and restoring my system with a Lenny install, or moving to a distro that has the current version of Gnucash.

Part of this is also Gnucash's responsibility, because 2.2.9 is built against glib >= 2.6. Not many distros are using that version of glib, so it doesn't seem to me to make a whole lot of sense if they want the latest versions of their software to be used. That decision practically guarantees that a lot of their bug fixes won't be available to most Linux users for the better part of a year. You can't even compile from source because of it, unless you want to start making what are risky changes for most users. I certainly couldn't predict what upgrading glib to version 2.6 would do to my system.

I've been using Debian now for almost 6 years, with a lot of that time spent running testing or unstable on my desktop, and this is the first time I've run across a bug that makes a package I depend on for my business unusable for approximately a year. I find that to be a big problem. If you don't think that would be a problem worthy of a rant for you, well, what can I say? You must be the world's most patient man.

Could you please point us to the relevant bug number?

Michael
Debian's glacial movement--a rant
I've never been pissed off at Debian before, but I guess there is always a first.

I'm experiencing a bug in Gnucash that appeared a couple of days ago on my system that makes Gnucash completely unusable for me. I turned in a bug report on Friday, checked on it yesterday, and by today the bug had been blocked from being displayed. It could be found by searching Debian's bug tracker, but only if you know the bug id number. If you just search for bugs in Gnucash the bug does not appear to exist. The bug was closed, and blocked, because it's been fixed upstream in version 2.2.9, which was released by Gnucash in February of this year.

Great. The bug has been fixed. Why it needed to be hidden from being displayed is a puzzler for me, but that's the way it is. Now the bad news. Since Gnucash in both Sid and Squeeze is now at version 2.2.6, I only have to wait until Debian works through versions 2.2.7 and 2.2.8 before Gnucash in Debian finally becomes usable for me again in version 2.2.9. As Sid is "only" 9 months behind Gnucash's release schedule at this point, I guess the fact that all my business records for the last couple of years are in Gnucash means I'll be able to start doing my business accounting again sometime after the first of next year, at a minimum, if I wait for Debian.
Re: xen virtual network(solved)
Freddy Freeloader wrote:

I'm trying to figure out how to create both frontend and backend networks in xen. By that I mean a publicly available network for internet access, and a virtual network for communication between guests only that has no internet or other network access. Here's what I've done in attempting to add the second virtual network:

1. Created a dummy0 interface in /etc/network/interfaces in Dom0.
2. In /etc/xen/xend-config.sxp, pointed (network-bridge) to the script below.
3. Created a script in /etc/xen/scripts to start up both xenbr0 and xenbr1. xenbr0 is the default bridge and xenbr1 is created on dummy0. The script is from an example in Running Xen on how to create multiple bridges.
4. Added the MAC and xenbr info for xenbr1 to the "vif = (blah, blah, blah)" line in the domain.cfg file in /etc/xen.

This results in losing all network connectivity to and from the guest OS. It also leads me to believe I should probably be creating this second interface in /etc/xen-tools/xen-tools.conf so that the second interface would be created in the guest by xen-create-image, but I can find no documentation on how to do this. The guest only shows eth0 and lo in /etc/network/interfaces. "brctl show" lists two bridges:

bridge name  bridge id          STP enabled  interfaces
eth1         8000.00e04da05951  no           peth1
xenbr1       8000.feff          no           vif2.0

Can anyone either give me an example to look at or point me to a how-to on this?

Just in case anyone else runs into this issue: you cannot use dynamic MAC address creation, at least not when creating both a purely virtual network and a network that has a public IP address. It seems to work OK in a DomU that has a single interface, but when there is more than one interface in a DomU it breaks networking.

You can do this by modifying the vif = line in the /etc/xen/domain.name.cfg file. Use only the MAC address and bridge data for each interface, and make sure you delineate each interface with single quotes. (The examples I had seen did not do that, and it results in only one vif being created in the DomU.) Or, you can use "xm network-attach mac=xx:xx:xx:xx:xx:xx bridge=bridge-name". You cannot use the ip=xxx.xxx.xxx.xxx option in "xm network-attach", as that functionality is broken. However, for xm network-attach to reattach the network to the DomU automatically after rebooting the DomU or Dom0, you must script it.

It's also much easier to create the second bridge manually in Dom0 in /etc/network/interfaces using pre-up, post-down, and brctl. And it's easier to just manually assign the IP addresses for each network device in /etc/network/interfaces in each DomU. Here's an example of the bridge creation in Dom0 in /etc/network/interfaces:

auto brtest
iface brtest inet static
    address 10.0.0.1
    netmask 255.255.255.0
    pre-up brctl addbr brtest
    post-down ifconfig brtest down
    post-down brctl delbr brtest

The above example allows the DomUs to communicate with Dom0 using this bridge. You can stop the DomUs from communicating with Dom0 over this interface like this:

auto brtest
iface brtest inet manual
    pre-up brctl addbr brtest
    post-down ifconfig brtest down
    post-down brctl delbr brtest

Anyway, I hope this helps someone else. I beat my head against a wall for a long time over this. Between xen bugs, xen-tools bugs, and a lack of good documentation, getting xen up and running in anything other than a default configuration on Debian isn't for the faint of heart.
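The single-quote delineation described above would look like this in the /etc/xen/domain.name.cfg file; the MAC addresses and bridge names are illustrative placeholders (00:16:3e is the OUI Xen reserves for guest NICs), not values from the thread:

```
# One quoted string per interface; only mac= and bridge= are given,
# per the workaround above. Addresses here are examples only.
vif = [ 'mac=00:16:3e:00:00:01,bridge=xenbr0',
        'mac=00:16:3e:00:00:02,bridge=brtest' ]
```

With both entries quoted separately, xm creates two vifs in the DomU, one attached to the public bridge and one to the guests-only bridge.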
Re: serving wrong index.html ?
Zach Uram wrote: Hi, I added name based vhosts to my Apache2 install on Debian lenny, but now when I go to my site: http://www.jesujuva.org or http://jesujuva.org instead of serving up /var/www/index.html it serves /var/www/bach/index.html ! Here are my files:

debian:/etc/apache2# ls sites-available/
bach  darcs  netrek  wiki  wp
debian:/etc/apache2# ls sites-enabled/
bach  darcs  netrek  wiki  wp
debian:/etc/apache2# cat sites-enabled/bach
ServerName bach.jesujuva.org
DocumentRoot /var/www/bach

Zach

Apache is doing exactly what you are asking it to do. You are pointing to /var/www/bach as the DocumentRoot for jesujuva.org. So, of course it is going to be serving /var/www/bach/index.html. If you want to serve a different set of files you have to point Apache to that set of files. For example, if the index.html file for jesujuva.org is inside the /var/www/jesujuva directory you would use that directory as your DocumentRoot. If the DocumentRoot for jesujuva.org is in /var/www then that is where you would point Apache. I agree with one of the other people who replied here: it's much easier to put the configuration for all your sites in a single file. -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
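As a sketch of the fix being suggested, a dedicated vhost for the main site could look like this. The filename, paths, and ServerAlias are illustrative assumptions, not taken from the post:

```
# Hypothetical sites-enabled entry for the main site (e.g. a file that
# sorts before "bach"), so requests for jesujuva.org get /var/www:
<VirtualHost *:80>
    ServerName jesujuva.org
    ServerAlias www.jesujuva.org
    DocumentRoot /var/www
</VirtualHost>
```

With name-based vhosts, a request that matches no ServerName falls through to the first vhost defined, which is why the "bach" site was answering here.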
xen virtual network
I'm trying to figure out how to create both frontend and backend networks in xen. By that I mean a publicly available network for internet access and a virtual network for communication between guests only that has no internet or other network access. Here's what I've done in attempting to add the second virtual network. 1. Created dummy0 interface in /etc/network/interfaces in Dom0. 2. In /etc/xen/xend-config.sxp pointed (network-bridge) to the script below. 3. Created a script in /etc/xen/scripts to start up both xenbr0 and xenbr1. xenbr0 is the default bridge and xenbr1 is created on dummy0. The script is from an example in Running Xen on how to create multiple bridges. 4. Added the mac and xenbr info for xenbr1 to the "vif = (blah, blah, blah)" line in the domain.cfg file in /etc/xen. This results in losing all network connectivity to and from the guest OS. It also leads me to believe I should probably be creating this second interface in /etc/xen-tools/xen-tools.conf so that the second interface would be created in the guest by xen-create-image, but I can find no documentation on how to do this. The guest only shows eth0 and lo in /etc/network/interfaces. "brctl show" lists two bridges.

bridge name  bridge id          STP enabled  interfaces
eth1         8000.00e04da05951  no           peth1
xenbr1       8000.feff          no           vif2.0

Can anyone either give me an example to look at or point me to a how-to on this? -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Re: xen domU read only filesystem
Steve Kemp wrote: On Wed Feb 04, 2009 at 11:44:47 -0800, Freddy Freeloader wrote: So, you're telling me that if someone uses ext3 they will get a default file system that's read/write, but if they choose any other available file system it will be read only by default even though xen-tools.conf doesn't list ro only as an option for mounting them? No, I was specifically refuting your suggestion that by default installations will fail. The expectation is : a. default filesystem ext3 - it will work. b. reiser filesystem - it will work. That it hasn't worked for you in this case is either a bug, or error. I cannot tell which, though I'd like to lean towards a bug that has not yet been reported by any users. Steve OK. Thanks for your help. You want me to file a bug report on it? -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Re: xen domU read only filesystem
Steve Kemp wrote: On Wed Feb 04, 2009 at 11:10:21 -0800, Freddy Freeloader wrote: OK. That was stupid of me not to look at /etc/fstab. Ignoring the error message was an oversight, but not a stupid one. But, why are xen-tools creating a read only domU file system by default in the first place? The default xen-tools installation creates *ext3* filesystems by default. Since your post mentioned reiserfs you were no longer using the default. There are options to handle the different filesystem mount options. You can see those in /etc/xen-tools/xen-tools.conf: $ grep _options /etc/xen-tools/xen-tools.conf So, you're telling me that if someone uses ext3 they will get a default file system that's read/write, but if they choose any other available file system it will be read only by default even though xen-tools.conf doesn't list read-only as a mount option for them? xen-tools.conf tells me if I use reiserfs xen will use reiserfs defaults to mount it. Since when is reiserfs a read only file system by default? I'm not meaning to pick at you, just trying to understand what's going on. I do appreciate your hard work in creating xen-tools. It's just hard to wrap my head around everything about Xen in one go, and I'm someone who has to understand what's going on before I get comfortable with things. Why create an install that is, for all practical purposes, useless by default? I just don't understand. I don't believe the configuration is broken by default. I don't know enough to even begin to suggest that it is. I also don't believe that any "default" can be reasonable for all users when it comes to virtualization - if only because people have different ideas about networks, filesystems, and packages. (ObDisclaimer: I wrote xen-tools.) I wondered about this when I saw the xen-tools author's name. Thanks for your hard work. Steve -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
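As a sketch of what that grep can turn up: xen-tools keeps one mount-options setting per filesystem type, named with an `_options` suffix. The values below are illustrative and may differ between xen-tools versions; they are not quoted from this thread:

```
# Hypothetical /etc/xen-tools/xen-tools.conf fragment. The ext3-only
# "errors=remount-ro" option here is exactly what breaks a reiserfs
# guest if it leaks into the generated /etc/fstab.
ext3_options   = noatime,nodiratime,errors=remount-ro
reiser_options = defaults
```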
Re: xen domU read only filesystem
Steve Kemp wrote: On Wed Feb 04, 2009 at 09:10:27 -0800, Freddy Freeloader wrote: [7.149690] ReiserFS: xvda2: warning: bad value "remount-ro" for option "errors" There's your problem. Remove "remount-ro" from /etc/fstab, after remounting it read/write via: mount -o remount,rw / Or mounting it outside xen once you've stopped it running. Steve OK. That was stupid of me not to look at /etc/fstab. But, why are xen-tools creating a read only domU file system by default in the first place? That's what really threw me: it was just so odd that it was being done by default that I just couldn't get past that. I'm really green at virtualization so I thought I was missing some major piece of configuration. Why create an install that is, for all practical purposes, useless by default? I just don't understand. -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
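The two-step fix Steve describes can be sketched as follows. The fstab line is illustrative (not copied from the guest); the sed is applied to a string here so the sketch is safe to run anywhere, but on the real domU you would edit /etc/fstab after the remount:

```shell
# Step 1 (on the affected domU): make / writable again.
#   mount -o remount,rw /
# Step 2: strip the ext3-only "errors=remount-ro" option from the
# reiserfs root entry. Shown here against a sample fstab line.
fstab='/dev/xvda2 / reiserfs defaults,errors=remount-ro 0 1'
fixed=$(printf '%s\n' "$fstab" | sed -e 's/,errors=remount-ro//' -e 's/errors=remount-ro,//')
printf '%s\n' "$fixed"
```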
xen domU read only filesystem
I have another question on xen. I cannot get xen to create a guest that has read/write access to its own file system. I've been Googling this and reading documentation but so far haven't been able to find anything on this. I have two different systems running xen now. One is 32-bit, the other 64-bit. They both exhibit the same behavior. Once the domU system boots I have tried looking around at /var/log/syslog and it doesn't exist as the system can't write to its filesystem. I can't use touch as root to create a file either. Here's the error:

test:~# touch test
touch: cannot touch `test': Read-only file system

Here's typical output from the domU console while the guest is booting up:

[7.149690] ReiserFS: xvda2: warning: bad value "remount-ro" for option "errors"
[7.150981] ReiserFS: xvda2: warning: bad value "remount-ro" for option "errors"
mount: / not mounted already, or bad option
Setting the system clock.
Unable to set System Clock to: Wed Feb 4 16:55:16 UTC 2009 (warning).
Cleaning up ifupdown
Loading kernel modules...done.
Checking file systems...fsck 1.41.3 (12-Oct-2008)
done.
Setting kernel variables (/etc/sysctl.conf)...done.
Mounting local filesystems...done.
Activating swapfile swap...done.
rm: cannot remove `.X*-lock': Read-only file system
rm: cannot remove `/tmp/.clean': Read-only file system
bootclean: Failure deleting '/tmp/.clean'. failed!
rm: cannot remove `./motd': Read-only file system
bootclean: Failure cleaning /var/run. failed!
rm: cannot remove `/var/lock/.clean': Read-only file system
bootclean: Failure deleting '/var/lock/.clean'. failed!
Setting up networking
Configuring network interfaces...Internet Systems Consortium DHCP Client V3.1.1
Copyright 2004-2008 Internet Systems Consortium. All rights reserved.
For info, please visit http://www.isc.org/sw/dhcp/

can't create /var/lib/dhcp3/dhclient.eth0.leases: Read-only file system
Listening on LPF/eth0/00:16:3e:96:02:9d
Sending on LPF/eth0/00:16:3e:96:02:9d
Sending on Socket/fallback
DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 3
DHCPOFFER from 192.168.1.28
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPACK from 192.168.1.28
rm: cannot remove `/etc/resolv.conf.dhclient-new': Read-only file system
/sbin/dhclient-script: line 20: /etc/resolv.conf.dhclient-new: Read-only file system
/sbin/dhclient-script: line 37: /etc/resolv.conf.dhclient-new: Read-only file system
/sbin/dhclient-script: line 41: /etc/resolv.conf.dhclient-new: Read-only file system
chown: cannot access `/etc/resolv.conf.dhclient-new': No such file or directory
chmod: cannot access `/etc/resolv.conf.dhclient-new': No such file or directory
mv: cannot stat `/etc/resolv.conf.dhclient-new': No such file or directory
can't create /var/lib/dhcp3/dhclient.eth0.leases: Read-only file system
bound to 192.168.1.2 -- renewal in 1567 seconds.
if-up.d/mountnfs[eth0]: lock /var/run/network/mountnfs exist, not mounting
failed!
done.
rm: cannot remove `.X*-lock': Read-only file system
rm: cannot remove `/tmp/.clean': Read-only file system
bootclean: Failure deleting '/tmp/.clean'. failed!
rm: cannot remove `./motd': Read-only file system
bootclean: Failure cleaning /var/run. failed!
rm: cannot remove `/var/lock/.clean': Read-only file system
bootclean: Failure deleting '/var/lock/.clean'. failed!
/etc/rcS.d/S55bootmisc.sh: line 29: /var/run/utmp: Read-only file system
rm: cannot remove `/var/lib/urandom/random-seed': Read-only file system
mkdir: cannot create directory `/tmp/.X11-unix': Read-only file system

Here's the output from xm list:

Name               ID   Mem  VCPUs  State  Time(s)
Domain-0            0  7603      2  r-      3202.2
test.gawako.local  15   512      1  -b         3.3

-- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of "unsubscribe". Trouble?
Contact listmas...@lists.debian.org
Re: Xen problems in Lenny
Steve Kemp wrote: On Tue Feb 03, 2009 at 09:57:17 -0800, Freddy Freeloader wrote: I know this isn't strictly a Debian issue but this has really got me stumped. I've installed the following Xen packages on a Lenny machine and cannot successfully create a working guest using xen-create-image. Exec Format Error suggests you're creating a 64-bit guest despite you being a 32-bit host. Add --arch=i386 to the creation call. Steve Thanks. That worked. -- To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
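Steve's diagnosis (a 64-bit guest on a 32-bit host) can be guarded against with a quick check before creating the image. A sketch, with an assumed hostname; the command is echoed rather than executed so the snippet is safe to run anywhere:

```shell
# Check the dom0 userland architecture; a 32-bit dom0 needs --arch=i386
# on the xen-create-image call to avoid "Exec format error" in the guest.
host_arch=$(uname -m)
case "$host_arch" in
    i?86)   guest_arch=i386 ;;
    x86_64) guest_arch=amd64 ;;
    *)      guest_arch="$host_arch" ;;
esac
echo "xen-create-image --hostname=test.example.org --arch=$guest_arch"
```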
Xen problems in Lenny
I know this isn't strictly a Debian issue but this has really got me stumped. I've installed the following Xen packages on a Lenny machine and cannot successfully create a working guest using xen-create-image.

dpkg -l | grep xen
ii  libc6-xen                          2.7-18     GNU C Library: Shared libraries [Xen version
ii  libxenstore3.0                     3.2.1-2    Xenstore communications library for Xen
ii  linux-image-2.6.26-1-xen-686       2.6.26-13  Linux 2.6.26 image on i686, oldstyle Xen sup
ii  linux-modules-2.6.26-1-xen-686     2.6.26-13  Linux 2.6.26 modules on i686
ii  xen-docs-3.2                       3.2.1-2    Documentation for Xen
ii  xen-hypervisor-3.2-1-i386          3.2.1-2    The Xen Hypervisor on i386
ii  xen-linux-system-2.6.26-1-xen-686  2.6.26-13  XEN system with Linux 2.6.26 image on i686
ii  xen-shell                          1.8-3      Console based Xen administration utility
ii  xen-tools                          3.9-4      Tools to manage Debian XEN virtual servers
ii  xen-utils-3.2-1                    3.2.1-2    XEN administrative tools
ii  xen-utils-common                   3.2.0-2    XEN administrative tools - common files
ii  xenstore-utils                     3.2.1-2    Xenstore utilities for Xen

Here is the log file produced when running xen-create-image. The failures all seem to be related to the inability of the installation script to use chroot and I can't figure out why as I can use chroot on the host OS. Is this a bug or am I missing something in the config files?
General Information
Hostname     : test2.gawako.local
Distribution : lenny
Partitions   : swap  256Mb (swap)
               /     10Gb  (reiserfs)
Image type   : sparse
Memory size  : 512Mb
Kernel path  : /boot/vmlinuz-2.6.26-1-xen-686
Initrd path  : /boot/initrd.img-2.6.26-1-xen-686

Networking Information
IP Address 1 : 192.168.1.43 [MAC: 00:16:3E:FE:30:04]
Netmask      : 255.255.255.0
Broadcast    : 192.168.1.255
Gateway      : 192.168.1.1

Creating partition image: /xen/domains/test2.gawako.local/swap.img
0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.6915e-05 s, 0.0 kB/s
Done

Creating swap on /xen/domains/test2.gawako.local/swap.img
Setting up swapspace version 1, size = 134213 kB
no label, UUID=61334cb4-df9f-4b7d-9526-7692bd143955
Done

Creating partition image: /xen/domains/test2.gawako.local/disk.img
0+0 records in
0+0 records out
0 bytes (0 B) copied, 2.9659e-05 s, 0.0 kB/s
Done

Creating reiserfs filesystem on /xen/domains/test2.gawako.local/disk.img
mkfs.reiserfs 3.6.19 (2003 www.namesys.com)

A pair of credits:
Alexander Zarochentcev (zam) wrote the high low priority locking code, online resizer for V3 and V4, online repacker for V4, block allocation code, and major parts of the flush code, and maintains the transaction manager code. We give him the stuff that we know will be hard to debug, or needs to be very cleanly structured.
Hans Reiser was the project initiator, source of all funding for the first 5.5 years. He is the architect and official maintainer.

Done
Installation method: debootstrap
Falling back to default debootstrap command
Copying files from host to image.
Copying files from /var/cache/apt/archives -> /tmp/wjDNA2ChFT/var/cache/apt/archives
Done
Done
I: Retrieving Release
I: Retrieving Packages
I: Validating Packages
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Checking component main on http://ftp.us.debian.org/debian...
I: Validating adduser
I: Validating apt
I: Validating apt-utils
I: Validating aptitude
I: Validating base-files
I: Validating base-passwd
I: Validating bash
I: Validating bsdmainutils
I: Validating bsdutils
I: Validating coreutils
I: Validating cpio
I: Validating cron
I: Validating debconf
I: Validating debconf-i18n
I: Validating debian-archive-keyring
I: Validating debianutils
I: Validating dhcp3-client
I: Validating dhcp3-common
I: Validating diff
I: Validating dmidecode
I: Validating dpkg
I: Validating e2fslibs
I: Validating e2fsprogs
I: Validating ed
I: Validating findutils
I: Validating gcc-4.2-base
I: Validating gcc-4.3-base
I: Validating gnupg
I: Validating gpgv
I: Validating grep
I: Validating groff-base
I: Validating gzip
I: Validating hostname
I: Validating ifupdown
I: Validating info
I: Validating initscripts
I: Validating iproute
I: Validating iptables
I: Validating iputils-ping
I: Validating libacl1
I: Validating libattr1
I: Validating libblkid1
I: Validating libbz2-1.0
I: Validating libc6
I: Validating libcomerr2
I: Validating libconsole
I: Validating libcwidget3
I: Validating libdb4.6
I: Va
Re: 2.6.26-686 kernel problem in Lenny installer
Freddy Freeloader wrote: Hi all, Just wondering if anyone has run across this problem before. I'm rebuilding a server based on an Asus M2N-LR AM2 motherboard with a 2.8 ghz dual core Opteron. At boot the boot process is hanging at "pci :00:00:0 Enabling HT MSI Mapping". This same machine worked fine with both the 2.6.18 and the etchnhalf kernel. BTW, this is with the RC1 installer. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
2.6.26-686 kernel problem in Lenny installer
Hi all, Just wondering if anyone has run across this problem before. I'm rebuilding a server based on an Asus M2N-LR AM2 motherboard with a 2.8 ghz dual core Opteron. At boot the boot process is hanging at "pci :00:00:0 Enabling HT MSI Mapping". This same machine worked fine with both the 2.6.18 and the etchnhalf kernel. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: change loading order of modules in apache
lee wrote: On Wed, 12 Nov 2008 13:51:50 -0800 Freddy Freeloader <[EMAIL PROTECTED]> wrote: No. I just created symlinks in mods-enabled to mods-available as I've always done. That has always worked without a snag before. Maybe it has to do with the order in which files are found in the directories? Well, I tried changing that too, without any luck, by using the LoadModule directive. What's also very strange is that a2enmod tells me that no Apache module on the system exists when I try to use it. (I figured I had nothing to lose by trying it, but all it does is write to mods-enabled to configure Apache to load modules.) I tried it on about a dozen modules and it told me none of them existed. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: change loading order of modules in apache
Jeff D wrote: On Wed, 12 Nov 2008, Freddy Freeloader wrote: I am experiencing something with Apache that hasn't been a problem on three previous Etch builds. We use Ajaxterm as a proxy to reach another server as part of our web application. On all previous builds Apache has loaded all related proxy modules in the correct order by default. In this latest install--a fresh server build--Apache will not load the proxy module first and I can find no documentation on how to change the load order in Debian. Can someone point me towards a link on how to do this? Apache loads everything in the order it's presented. For modules listed in /etc/apache2/mods-enabled/ that would be in alphabetical order. Though I have never run into an issue with modules loading in the incorrect order, I'm wondering how you went about setting up the modules to be loaded. Did you use the a2enmod tool to load up the modules? Jeff No. I just created symlinks in mods-enabled to mods-available as I've always done. That has always worked without a snag before. I've never run into this issue before either. I'm beginning to wonder if this isn't a corrupted installation as I have other odd problems too. sshd won't accept ssh tunnelled connections from remote navicat 8 clients. I'm getting channel errors from sshd in auth.log. There was also a typo in the vsftpd.conf file that made vsftpd fail silently on startup. I've never run across any of these errors before in an Etch install. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
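The symlink layout being discussed can be sketched as follows, in a scratch directory so it runs anywhere (the real path is /etc/apache2, and a2enmod also links any matching .conf file alongside the .load file):

```shell
# Recreate the mods-enabled symlinks the way Apache on Debian expects
# them: relative links into mods-available, loaded alphabetically.
tmp=$(mktemp -d)
mkdir -p "$tmp/mods-available" "$tmp/mods-enabled"
touch "$tmp/mods-available/proxy.load" "$tmp/mods-available/proxy_http.load"
for m in proxy proxy_http; do
    ln -s "../mods-available/$m.load" "$tmp/mods-enabled/$m.load"
done
ls "$tmp/mods-enabled"
```

Since mods-enabled is read alphabetically, "proxy" happens to sort before "proxy_http" here, which is the ordering the thread is concerned with.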
change loading order of modules in apache
I am experiencing something with Apache that hasn't been a problem on three previous Etch builds. We use Ajaxterm as a proxy to reach another server as part of our web application. On all previous builds Apache has loaded all related proxy modules in the correct order by default. In this latest install--a fresh server build--Apache will not load the proxy module first and I can find no documentation on how to change the load order in Debian. Can someone point me towards a link on how to do this? -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: Debian installer
Mark Allums wrote: elijah rutschman wrote: On Mon, Oct 27, 2008 at 12:37 PM, Freddy Freeloader <[EMAIL PROTECTED]> wrote: Just out of curiosity, will the ability to create RAID10 arrays ever be integrated into the installer? Oh, and while I'm at it, how about raid1 arrays with more than 2 drives? I did a Lenny install today and could not create a 1.2 terabyte mirror with 4 640 gig drives. I was limited to 2 640 gig arrays of two drives each at best. If I told the installer I wanted all 4 drives in a mirror it gave me 1 640 gig mirror and reported the array used all 4 drives. Hello, AFAIK, RAID1 doesn't do any striping, so no matter how many disks you include in your array, you have a maximum usable device size equal to the smallest disk in the array. In your case, your maximum disk size would be 640gigs. It sounds like you want to do something like RAID5; you'd have some of the benefits of striping and some of the benefits of mirroring. I've only ever setup a RAID1 array with Debian, and I don't remember what options were presented during installation. You might be able to replicate RAID5 using LVM + RAID1 if RAID5 is not an explicit installation option. I have yet to experiment with LVM, so I don't know what this would entail, but I'm sure other members of this list could be more helpful in this area. Regards, Elijah No, he wants to do a 10, or 1+0 RAID. Mirror two pairs of drives, then stripe across the pairs. This is a compromise when a RAID 5 isn't quite practical for some reason. RAID 5 can be slow on a software setup when the partition is large and it fills up significantly. Mark Allums Hmm. I just figured that it would be possible for mdadm to be smart enough to realize that if you gave it 4 drives in a mirror it would at least take two of them for each portion of the mirror without having to stripe them.
I just assumed that if Linux raid is smart enough to automatically integrate hot spares upon drive failure then it could figure out how to use 4 drives to create a larger mirror. Guess that's what I get for assuming. What I really wanted was to create a RAID10 array during the install, and I was playing around trying to see if I could work out another solution during the install even if it would be a little slower. Anyway, I created the array after the install was finished. I just wasn't aware of the 2 drive RAID1 limit in my experimentation. Nothing I had read on Linux raid mentioned this limit. Yeah, the man page examples for RAID1 use only 2 drives, but examples in man pages are usually very simple and don't cover all possibilities; they're just there to give you an example of the syntax and switch usage. Anyone know if/when RAID10 capability will be included in the installer? -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
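Creating the array after the install, as described above, can be sketched with mdadm's native RAID10 level. The device names are hypothetical examples; the command is only echoed here, since actually running it would write to real disks:

```shell
# Sketch: a 4-disk RAID10 with mdadm (device names are examples).
# Echoed rather than executed -- on a real system you would run $cmd.
devices="/dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1"
cmd="mdadm --create /dev/md0 --level=10 --raid-devices=4 $devices"
echo "$cmd"
```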
Debian installer
Just out of curiosity, will the ability to create RAID10 arrays ever be integrated into the installer? Oh, and while I'm at it, how about raid1 arrays with more than 2 drives? I did a Lenny install today and could not create a 1.2 terabyte mirror with 4 640 gig drives. I was limited to 2 640 gig arrays of two drives each at best. If I told the installer I wanted all 4 drives in a mirror it gave me 1 640 gig mirror and reported the array used all 4 drives. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: Marvell 88E80856 switching from static to dhcp configuration all on its own
Thomas Preud'homme wrote: On Friday 18 July 2008 00:24:04, Andrew Sackville-West wrote: On Thu, Jul 17, 2008 at 02:30:42PM -0700, Freddy Freeloader wrote: Andrew Sackville-West wrote: On Thu, Jul 17, 2008 at 01:36:43PM -0700, Freddy Freeloader wrote: Andrew Sackville-West wrote: On Thu, Jul 17, 2008 at 11:14:25AM -0700, Freddy Freeloader wrote: [...] I have two NICs. The onboard Marvell and a 3Com 3c905b. The 3Com handles dhcp and dns requests. Both are configured for statically configured IP addresses in /etc/network/interfaces. However, the Marvell will, after some unknown amount of time--less than 12 hours--drop its static IP address and request a dhcp address from the 3Com adapter. [...] probably a killall dhclient will sort it out. [...] [I] will give that a try and see if the behavior changes. [...] perhaps some other package is starting dhclient? Basically, if you have both interfaces using static ip, dhclient shouldn't even be started. care to post /etc/network/interfaces? [...] For some reason dhclient WAS running, but I don't know why. if it reappears, try a `ps aux`, maybe there will be a clue there as to where it's coming from. And after a reboot, run watch grep dhclient /var/log/syslog or some equivalent and watch for it to show up. You wouldn't happen to have any guesses on the second problem I listed would you? :) nope, sorry. A You could also look with a pstree or recompile dhclient with a getppid at the beginning in order to see which process launches dhclient. I'll try pstree. I hadn't thought of that. These troubleshooting suggestions really don't help explain to me why dhclient would override the settings in /etc/network/interfaces for one NIC and not the other though. And why would it override manual settings?
Isn't there some process watching the settings in /etc/network/interfaces to stop just such a thing from happening, or doesn't the driver and device itself record its state so that dhclient wouldn't even attempt this unless there is some type of user override, i.e. ifdown/ifup, /etc/init.d/networking restart, etc...? There's something going on that I really don't understand here, and just troubleshooting it at the level shown here doesn't seem to me that it will answer my base questions. Can anyone point me to documentation on how this works, because I must be missing something. Why didn't restarting networking kill dhclient after /etc/network/interfaces was read, the NICs were configured, and all NICs had static IP addresses? Also, I still think there is a driver issue with this, as eth1 sent out dhcp requests probably a dozen times (cycles of 6 dhcprequests), received no answers, and then was answered with multiple offers before it finally accepted one of those offers and bound to that address. It seems to me that if there were no driver issues involved, eth1 would have gotten its dhcp address on the first attempt if this was all related to JUST dhclient. One of the problems with this module in the 2.6.24 kernel was the interface would come up, accept an IP address from /etc/network/interfaces, but it couldn't send or receive anything. This seems to be an extension of that problem. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
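The pstree/getppid suggestion in this thread amounts to looking up a process's parent. A sketch of that technique, applied here to the current shell ($$) so it runs anywhere; on the affected box you would get the pid from `pgrep dhclient` instead:

```shell
# Look up a process's parent to see what spawned it -- the same check
# you would run against a stray dhclient.
pid=$$
ppid=$(ps -o ppid= -p "$pid" | tr -d ' ')
ps -o pid,ppid,args -p "$pid" -p "$ppid"
```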
Re: Marvell 88E80856 switching from static to dhcp configuration all on its own
Andrew Sackville-West wrote: On Thu, Jul 17, 2008 at 01:36:43PM -0700, Freddy Freeloader wrote: Andrew Sackville-West wrote: On Thu, Jul 17, 2008 at 11:14:25AM -0700, Freddy Freeloader wrote: Hi All, I'm having a strange problem with a Marvell 88E8056 - 10/100/1000 Controller on a Biostar TA 770 A2+ motherboard. This is an Etch AMD64 install, but I have added the 2.6.25-amd64 kernel as I could not get the Marvell controller to work at all with the 2.6.18 kernel. ... I have two NICs. The onboard Marvell and a 3Com 3c905b. The 3Com handles dhcp and dns requests. Both are configured for statically configured IP addresses in /etc/network/interfaces. However, the Marvell will, after some unknown amount of time--less than 12 hours--drop its static IP address and request a dhcp address from the 3Com adapter. ... I'm assuming this is a bug in the sky2 module, but don't know enough about things in this area to do more than assume. I bet it's not a driver problem but simply that you have inadvertently started a dhclient. It picks up a lease from somewhere, but then you restart networking which reverts the interface to a static address. Then when the dhclient thinks the lease has expired, it goes and gets another one. I've seen this happen on my laptop when I've been monkeying around with getting a connection at a new location. I'll forget that I manually started dhclient and then some time later... maybe days, I'll connect somewhere where I get static ip (like home) and then all of the sudden the dhclient will wake up and go looking for a new address... probably a killall dhclient will sort it out. I wondered about it, but it didn't make sense in that the networking system is completely ignoring its own configuration. Plus, this behavior has survived several reboots of the system. However, I will give that a try and see if the behavior changes. do you have network-mangler^Wmanager installed? perhaps some other package is starting dhclient? 
Basically, if you have both interfaces using static ip, dhclient shouldn't even be started. care to post /etc/network/interfaces? A For some reason dhclient WAS running, but I don't know why. Especially on that interface, as it was not chosen as the default interface during install, and that's the only interface I've had configured to use dhcp at any time. As I said, both interfaces specify "static" in /etc/network/interfaces. It's working now, as it's had long enough to repeat its behavior based on how long things took to happen last time, so I'm assuming it's fixed. And, if dhclient was getting a new IP address for that interface, why not for the other NIC too? What makes the Marvell interface the interface that keeps getting reconfigured while the 3Com interface stays stable? I've never seen this kind of thing happen before, and I've switched interfaces between static and dhcp many times. I've never had to manually kill dhclient before to keep it from reconfiguring a working, manually configured interface. To me that says there is possibly something wrong with the driver for the NIC itself and it's crashing and losing its state or something. Nothing shows up in syslog or kern.log, but I still wonder about it. And, no, network-mangler isn't running or installed. It's one of the first things I get rid of in a desktop install. IIRC, n-m only comes with a gui, and I have no gui installed. Plus, it doesn't show up using dpkg -l | grep network, so I know it's not there. This is just a base install with the server packages I need to run for use in my lab. Nothing else. You wouldn't happen to have any guesses on the second problem I listed would you? :) -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: Marvell 88E80856 switching from static to dhcp configuration all on its own
Andrew Sackville-West wrote: On Thu, Jul 17, 2008 at 11:14:25AM -0700, Freddy Freeloader wrote: Hi All, I'm having a strange problem with a Marvell 88E8056 - 10/100/1000 Controller on a Biostar TA 770 A2+ motherboard. This is an Etch AMD64 install, but I have added the 2.6.25-amd64 kernel as I could not get the Marvell controller to work at all with the 2.6.18 kernel. ... I have two NICs. The onboard Marvell and a 3Com 3c905b. The 3Com handles dhcp and dns requests. Both are configured for statically configured IP addresses in /etc/network/interfaces. However, the Marvell will, after some unknown amount of time--less than 12 hours--drop its static IP address and request a dhcp address from the 3Com adapter. ... I'm assuming this is a bug in the sky2 module, but don't know enough about things in this area to do more than assume. I bet it's not a driver problem but simply that you have inadvertently started a dhclient. It picks up a lease from somewhere, but then you restart networking which reverts the interface to a static address. Then when the dhclient thinks the lease has expired, it goes and gets another one. I've seen this happen on my laptop when I've been monkeying around with getting a connection at a new location. I'll forget that I manually started dhclient and then some time later... maybe days, I'll connect somewhere where I get static ip (like home) and then all of the sudden the dhclient will wake up and go looking for a new address... probably a killall dhclient will sort it out. A I wondered about it, but it didn't make sense in that the networking system is completely ignoring its own configuration. Plus, this behavior has survived several reboots of the system. However, I will give that a try and see if the behavior changes. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Marvell 88E8056 switching from static to dhcp configuration all on its own
Hi All, I'm having a strange problem with a Marvell 88E8056 - 10/100/1000 Controller on a Biostar TA 770 A2+ motherboard. This is an Etch AMD64 install, but I have added the 2.6.25-amd64 kernel as I could not get the Marvell controller to work at all with the 2.6.18 kernel. Other than that this is pretty much a minimal install. No gui, just a basic system running dns, dhcp3, mysql 5.0, postgresql 8.1 and Apache2 in my test lab. This system has no contact with the internet other than as a dns forwarder. I have two NICs. The onboard Marvell and a 3Com 3c905b. The 3Com handles dhcp and dns requests. Both are configured with static IP addresses in /etc/network/interfaces. However, the Marvell will, after some unknown amount of time--less than 12 hours--drop its static IP address and request a dhcp address from the 3Com adapter. If I do an ifconfig -a it shows the dhcp address, and if I run nmap against the network--how I found this originally as there was an unknown IP address showing up and a known good one missing--the dhcp address shows up. However, if I look at /etc/network/interfaces the Marvell interface is still set to static, and if I restart networking ( /etc/init.d/networking restart ) the Marvell NIC picks up its static IP address again. Below is the output from lspci -vv concerning the NIC in question. 03:00.0 Ethernet controller: Marvell Technology Group Ltd. 
Unknown device 4364 (rev 13) Subsystem: Biostar Microtech Int'l Corp Unknown device 2700 Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR- Latency: 0, Cache Line Size: 32 bytes Interrupt: pin A routed to IRQ 1276 Region 0: Memory at fddfc000 (64-bit, non-prefetchable) [size=16K] Region 2: I/O ports at de00 [size=256] [virtual] Expansion ROM at fdc0 [disabled] [size=128K] Capabilities: [48] Power Management version 3 Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0 PME-Enable- DSel=0 DScale=1 PME- Capabilities: [50] Vital Product Data Capabilities: [5c] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable+ Address: fee0300c Data: 4191 Capabilities: [e0] Express Legacy Endpoint IRQ 0 Device: Supported: MaxPayload 128 bytes, PhantFunc 0, ExtTag- Device: Latency L0s unlimited, L1 unlimited Device: AtnBtn- AtnInd- PwrInd- Device: Errors: Correctable- Non-Fatal- Fatal- Unsupported- Device: RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- Device: MaxPayload 128 bytes, MaxReadReq 512 bytes Link: Supported Speed 2.5Gb/s, Width x1, ASPM L0s L1, Port 3 Link: Latency L0s <256ns, L1 unlimited Link: ASPM Disabled RCB 128 bytes CommClk+ ExtSynch- Link: Speed 2.5Gb/s, Width x1 Capabilities: [100] Advanced Error Reporting Hmmm It's been 45 minutes since I restarted networking and the Marvell has already dropped its static IP address and picked up a dhcp address. Below is the relevant information from syslog after restarting networking. 
Jul 17 09:45:56 lab kernel: [81245.173294] eth0: no IPv6 routers present
Jul 17 09:46:01 lab kernel: [81249.915550] eth1: no IPv6 routers present
Jul 17 09:46:26 lab dhcpd: receive_packet failed on eth0: Network is down
Jul 17 09:46:26 lab kernel: [81275.403919] sky2 eth1: disabling interface
Jul 17 09:46:26 lab dhclient: receive_packet failed on eth1: Network is down
Jul 17 09:46:26 lab kernel: [81275.414464] eth0: setting full-duplex.
Jul 17 09:46:26 lab kernel: [81275.449493] sky2 eth1: enabling interface
Jul 17 09:46:26 lab kernel: [81275.451945] ADDRCONF(NETDEV_UP): eth1: link is not ready
Jul 17 09:46:29 lab kernel: [81278.425075] sky2 eth1: Link is up at 1000 Mbps, full duplex, flow control both
Jul 17 09:46:29 lab kernel: [81278.427530] ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Jul 17 09:46:36 lab kernel: [81285.424608] eth0: no IPv6 routers present
Jul 17 09:46:39 lab kernel: [81288.428601] eth1: no IPv6 routers present
Jul 17 09:55:19 lab dhclient: DHCPREQUEST on eth1 to 192.168.1.28 port 67
Jul 17 09:55:33 lab last message repeated 3 times
Jul 17 09:55:41 lab dhclient: DHCPREQUEST on eth1 to 192.168.1.28 port 67
Jul 17 09:56:18 lab last message repeated 3 times
Jul 17 09:57:38 lab last message repeated 4 times
Jul 17 09:58:39 lab last message repeated 4 times
Jul 17 09:59:29 lab last message repeated 3 times
Jul 17 10:00:28 lab last message repeated 4 times
This repeats a few more times and then:
Jul 17 10:04:39 lab dhcpd: Wrote 0 deleted host decls to leases file.
Jul 17 10:04:39 lab dhcpd: Wrote 0 new dynamic host decls to leases file.
Jul 17 10:04:39 lab dhcpd: Wrote 7 leases to lease
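One way to see how often the stray client is renewing is to count the DHCPREQUEST lines per interface. The filter below is run against a small inline sample instead of the live /var/log/syslog, just to show the pattern; on the real system you would point grep at the log file itself.

```shell
# Count dhclient DHCPREQUESTs for eth1. The here-string stands in for
# 'grep -c ... /var/log/syslog' on the affected machine.
log='Jul 17 09:55:19 lab dhclient: DHCPREQUEST on eth1 to 192.168.1.28 port 67
Jul 17 09:55:41 lab dhclient: DHCPREQUEST on eth1 to 192.168.1.28 port 67
Jul 17 09:46:26 lab kernel: sky2 eth1: disabling interface'
printf '%s\n' "$log" | grep -c 'dhclient: DHCPREQUEST on eth1'    # prints 2
```

A steady stream of requests every few seconds, as in the log above, means the client never got a DHCPACK and keeps retrying.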
Re: Giving up on Iceweasel 3.0
Paul Scott wrote: I accidentally sent this first from an unsubscribed address. Excuse me if it shows up twice. Freddy Freeloader wrote: Ron Johnson wrote: On 07/12/08 00:18, Freddy Freeloader wrote: [snip] If both IW3 & Epiphany are segfaulting, it seems that you have a problem with gecko-1.9. I don't know if it's libxul0d or xulrunner. The iceweasel maintainer thinks it's xulrunner after reading all the debugging info I sent him. Is this possibly the font problem I have with IW3, FF3, Konqueror and IceApe but *not* with FF2 on this sid machine? Fonts on several of the web pages I am creating are being rendered too large. They are fine in FF3 on my Mac laptop and other machines. Paul Scott I don't know, but I doubt it. I've never seen Iceweasel 3 render a font. It seg faults at startup so I've never opened a single web page with it. It's completely useless to me.
Re: prob. w. wordpress + php5
Freddy Freeloader wrote: Hugo Vanwoerkom wrote: Hi, Sid has dropped apache in favor of apache2. Apache2 does not have php4 support, only php5. But WordPress, in particular the Textile 2cb plugin, has a bug in it using php5 that makes enumerated lists come out wrong. I have the version of sid with apache + php4 on a mirror. I want to install my system from scratch using the most recent sid but stay with apache + php4 from my mirror. How would I do that? It's either that or change the plugin because it is no longer supported by its author. Hugo. The wordpress package in Sid will work with php5. Here's the output from "apt-cache show wordpress": Package: wordpress Priority: optional Section: web Installed-Size: 5032 Maintainer: Andrea De Iacovo <[EMAIL PROTECTED]> Architecture: all Version: 2.5.1-4 Depends: apache2 | httpd, virtual-mysql-client, libapache2-mod-php5 | php5 | php5-cgi | libapache2-mod-php4 | php4 | php4-cgi, php5-mysql | php4-mysql, libphp-phpmailer (>= 1.73-4), php5-gd | php4-gd, libjs-prototype, libjs-scriptaculous, tinymce (>= 3.0.7) Suggests: virtual-mysql-server Conflicts: mysql-server (<< 4.0.20-8) Filename: pool/main/w/wordpress/wordpress_2.5.1-4_all.deb Size: 1038376 MD5sum: 145d5509ea4b4067dded3ebd7736ffe9 SHA1: 08ce30c3d5e015ef7af2b5b2e02adee93659e322 SHA256: 43e390691e0725f1aacd6f3af6fe057d02c7d6afd776cb35f0253bcec8754ea7 Description: weblog manager WordPress is a full featured web blogging tool: * Instant publishing (no rebuilding) * Comment pingback support with spam protection * Non-crufty URLs * Themable * Plugin support . This package includes French language support. Homepage: http://wordpress.org Tag: implemented-in::php, interface::web, web::blog Looks to me like wordpress will run on php4 or php5. Never mind. Brain fart on my part.
Re: prob. w. wordpress + php5
Hugo Vanwoerkom wrote: Hi, Sid has dropped apache in favor of apache2. Apache2 does not have php4 support, only php5. But WordPress, in particular the Textile 2cb plugin, has a bug in it using php5 that makes enumerated lists come out wrong. I have the version of sid with apache + php4 on a mirror. I want to install my system from scratch using the most recent sid but stay with apache + php4 from my mirror. How would I do that? It's either that or change the plugin because it is no longer supported by its author. Hugo. The wordpress package in Sid will work with php5. Here's the output from "apt-cache show wordpress": Package: wordpress Priority: optional Section: web Installed-Size: 5032 Maintainer: Andrea De Iacovo <[EMAIL PROTECTED]> Architecture: all Version: 2.5.1-4 Depends: apache2 | httpd, virtual-mysql-client, libapache2-mod-php5 | php5 | php5-cgi | libapache2-mod-php4 | php4 | php4-cgi, php5-mysql | php4-mysql, libphp-phpmailer (>= 1.73-4), php5-gd | php4-gd, libjs-prototype, libjs-scriptaculous, tinymce (>= 3.0.7) Suggests: virtual-mysql-server Conflicts: mysql-server (<< 4.0.20-8) Filename: pool/main/w/wordpress/wordpress_2.5.1-4_all.deb Size: 1038376 MD5sum: 145d5509ea4b4067dded3ebd7736ffe9 SHA1: 08ce30c3d5e015ef7af2b5b2e02adee93659e322 SHA256: 43e390691e0725f1aacd6f3af6fe057d02c7d6afd776cb35f0253bcec8754ea7 Description: weblog manager WordPress is a full featured web blogging tool: * Instant publishing (no rebuilding) * Comment pingback support with spam protection * Non-crufty URLs * Themable * Plugin support . This package includes French language support. Homepage: http://wordpress.org Tag: implemented-in::php, interface::web, web::blog Looks to me like wordpress will run on php4 or php5.
Re: Giving up on Iceweasel 3.0
Ron Johnson wrote: On 07/12/08 00:18, Freddy Freeloader wrote: [snip] I've pretty much given up on Iceweasel 3.0 too. It has seg faulted on my main workstation ever since it entered Sid. I have yet to get it to open a single web page. I've turned in bug reports, as have a few other people with the same problem, but nothing has been fixed yet. I just gave up and installed Firefox 3.0 from a tar package and Opera as Epiphany has the same failing and I don't like Konqueror. If both IW3 & Epiphany are segfaulting, it seems that you have a problem with gecko-1.9. -- Ron Johnson, Jr. Jefferson LA USA "Kittens give Morbo gas. In lighter news, the city of New New York is doomed." I don't know if it's libxul0d or xulrunner. The iceweasel maintainer thinks it's xulrunner after reading all the debugging info I sent him.
Re: Giving up on Iceweasel 3.0
Ron Johnson wrote: On 07/03/08 06:21, Anthony Campbell wrote: Well, I struggled with iceweasel 3.0 from Sid for a week but have now given up. First, printing no longer worked (see earlier posts). I got it to work, sort of, by using inotifywait and printing the mozilla.ps file but it wasn't a good solution. Then I found I could no longer listen to the BBC: attempts to do so caused a crash. This makes it largely useless so far as I am concerned. While I have no doubt that you are having problems, neither of those issues are happening to me. My printer (blandly named "lp") shows up in the Print dialog box, and I have RealPlayer 10.0.9-0.1 installed from debian-multimedia.org, so that's the format I choose when I click on a BBC Radio station. Here are the pages I tested: http://www.bbc.co.uk/radio/ http://www.asimovs.com/_issue_0806/ref.shtml So I've installed the version from Testing and propose to stay with that unless and until these bugs are fixed. I know there are said to be security issues with older versions of the browser but I shall have to hope for the best. -- Ron Johnson, Jr. Jefferson LA USA "Kittens give Morbo gas. In lighter news, the city of New New York is doomed." I've pretty much given up on Iceweasel 3.0 too. It has seg faulted on my main workstation ever since it entered Sid. I have yet to get it to open a single web page. I've turned in bug reports, as have a few other people with the same problem, but nothing has been fixed yet. I just gave up and installed Firefox 3.0 from a tar package and Opera as Epiphany has the same failing and I don't like Konqueror.
webdav davfs2 and file editing
I am trying to get a working setup with webdav and davfs2 where I can edit files from a remote webserver locally. I can successfully access the webdav directory and files and mount them on my workstation. The problem I'm running into is that most of the time I open a file to edit it I get the following error message. The file "/media/dav2/administrator/components/com_reviews/toolbar.reviews.html.php" could not be opened properly and has been truncated. This can occur if the file contains a NULL byte. Be aware that saving it can cause data loss. This happens with about 60-70% of the php files on that website. Is this something to do with how the files were created? I can edit the ones I have problems with on the server if I ssh into it and use vi.
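For anyone reproducing this setup, a typical davfs2 mount looks something like the fragment below (the server URL is a placeholder; the mount point matches the path in the error message). This won't explain the truncation by itself, but confirming the mount options and clearing davfs2's local cache under /var/cache/davfs2 is a reasonable first step before suspecting the files themselves.

```
# /etc/fstab entry -- hypothetical server URL, mount point from the post
https://example.org/webdav  /media/dav2  davfs  rw,user,noauto  0  0
```

A user in the "davfs2" group can then mount it with "mount /media/dav2".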
linux-patch-exec-shield
Hi all, I was wanting to use the linux-patch-exec-shield on the 2.6.24-5 kernel but only see patches in sid up to 2.6.21. Does anyone know if this patch has been integrated into the later kernels and that's why it's not available for them?
Re: Plone, zc.buildout, and Etch
Freddy Freeloader wrote: Hi all, Is anyone else out there using buildout to create Plone 3 installs on Etch? I've been running into really strange errors when using buildout on Etch which no one in the Plone community is able to help me with. I cannot even complete a default buildout on Etch using Debian Python packages. I can, however, do this successfully on Sid. I haven't tried Lenny. If I use the ez_setup.py script on Etch to download and install the Python setup tools package I can build a default Plone 3 install; however, I then run into problems adding third party tools such as Ploneboard, SimpleAttachment, and Clouseau. As I need the third party tools for the functions they provide this isn't something I can live with. I can use the Plone Unified Installer, but would much prefer using buildout as it is so much more flexible. This all seems to be tied to changes made in the Debian Python packages in Etch that have been made since January 30 of this year. Why that date? Because I successfully used buildout on a server running Etch on that day. I also added the above third party tools successfully. Sometime since then something has changed in the Debian Python packages, or so it would appear anyway. Update: Well, if anyone out there is wanting to use the buildout method of installing Zope and Plone on Etch, at least for a while anyway, you're going to either have to compile Python, PIL, and any other Python dependencies from source code, or use the UnifiedInstallerBuildout packages that combine the Unified Installer, buildout, and Python 2.4.4 in one package. The Python interpreter now in Etch will not work. What exactly changed I don't know, but it's changed sometime after 1/28/2008. You can't just use the Unified Installer package either as it will install successfully but any time you install 3rd party packages it hoses the Plone site, sometimes so badly that even deleting the product directories and restarting the instance or the cluster doesn't help. 
You have to start over from scratch.
Plone, zc.buildout, and Etch
Hi all, Is anyone else out there using buildout to create Plone 3 installs on Etch? I've been running into really strange errors when using buildout on Etch which no one in the Plone community is able to help me with. I cannot even complete a default buildout on Etch using Debian Python packages. I can, however, do this successfully on Sid. I haven't tried Lenny. If I use the ez_setup.py script on Etch to download and install the Python setup tools package I can build a default Plone 3 install; however, I then run into problems adding third party tools such as Ploneboard, SimpleAttachment, and Clouseau. As I need the third party tools for the functions they provide this isn't something I can live with. I can use the Plone Unified Installer, but would much prefer using buildout as it is so much more flexible. This all seems to be tied to changes made in the Debian Python packages in Etch that have been made since January 30 of this year. Why that date? Because I successfully used buildout on a server running Etch on that day. I also added the above third party tools successfully. Sometime since then something has changed in the Debian Python packages, or so it would appear anyway.
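For comparison, a minimal Plone 3 buildout.cfg of the sort being discussed looks roughly like the sketch below. The recipe name follows the Plone 3-era convention; the egg list and credentials are placeholders, and a real buildout would also pin Zope and Plone versions.

```
[buildout]
parts = instance

[instance]
recipe = plone.recipe.zope2instance
user = admin:admin
eggs =
    Plone
```

Running ./bin/buildout against a file like this is the step that was failing with the Etch Python packages.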
Re: problems booting
Andrew Reid wrote: On Sunday 23 March 2008 01:57, Freddy Freeloader wrote: I assume I'm running into problems with udev not naming the devices consistently but am not quite sure of my diagnosis or how to fix it if that is the problem. I don't think udev rules will fix it, since the udev rules are on the root filesystem that it's not finding. I recommend using volume management, so that the root filesystem ends up on /dev/mapper/vg0-root or something similar -- this all gets set up in the initramfs, and elegantly avoids device-naming issues. Alternatively, you can mount the filesystems by label or by UUID. -- A. Thanks, Andrew. That got me started in the right direction. I ended up not using LVM as I didn't want that extra layer of complexity creating another point of failure, but I did get it working.
Re: problems booting
Douglas A. Tutty wrote: On Sat, Mar 22, 2008 at 10:57:30PM -0700, Freddy Freeloader wrote: I'm building a server based on an Asus M2N-LR mobo, a 3Ware 9550SXU RAID card, 8 gigs of ram and a dual core Opteron. The hard drives are all sata Raptors with 4 of the 5 drives in a RAID 10 array. I'm running Etch. The problem I'm having is an intermittent one with booting. I will quite regularly get a message that the root file system cannot be found, as well as files such as /sbin/init and /etc/fstab. Here is the most specific error: mount: mounting /root/dev on /dev/.static/dev failed. Just before those messages it says kinit is looking at sda5 for a resume image but cannot find it. I'm then dumped into a shell after the boot process fails. Use either LABEL or UUID. This gets covered on this list at least every couple of weeks. Doug. Well, that was a huge help. You cut out the part of the message where I say I don't really understand the rules syntax and just give me variables. It doesn't do a whole lot for me. If I understood the syntax I'd have already written my own rules and wouldn't have asked for help here. And, btw, I have 81,000+ messages from this list on my machine going back more than a year. If my reading of them had helped me understand this well enough to write my own rules I'd have done it. Guess I'll just look for help elsewhere.
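To make the LABEL/UUID suggestion concrete: each filesystem carries a UUID that blkid can report, and /etc/fstab (and the kernel's root= parameter) can reference it instead of a /dev/sdX name, so the root filesystem is found no matter how udev orders the devices on a given boot. The device name and UUID below are invented for illustration:

```shell
# Discover each filesystem's UUID (run as root):
blkid
#   e.g.  /dev/sdb5: UUID="1c9e...-example" TYPE="ext3"

# /etc/fstab then names the filesystem by UUID instead of device node:
#   UUID=1c9e...-example  /  ext3  defaults,errors=remount-ro  0  1
```

The boot loader's root= line gets the same treatment (root=UUID=...), which is what removes the device-naming race entirely.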
problems booting
Hi All, I'm building a server based on an Asus M2N-LR mobo, a 3Ware 9550SXU RAID card, 8 gigs of ram and a dual core Opteron. The hard drives are all sata Raptors with 4 of the 5 drives in a RAID 10 array. I'm running Etch. The problem I'm having is an intermittent one with booting. I will quite regularly get a message that the root file system cannot be found, as well as files such as /sbin/init and /etc/fstab. Here is the most specific error: mount: mounting /root/dev on /dev/.static/dev failed. Just before those messages it says kinit is looking at sda5 for a resume image but cannot find it. I'm then dumped into a shell after the boot process fails. This happens about 40% of the time on reboots and is not acceptable as this will be a remote server which will be physically located more than 100 miles away. I assume I'm running into problems with udev not naming the devices consistently but am not quite sure of my diagnosis or how to fix it if that is the problem. I've been reading the man pages associated with udev but man pages are so terse I'm not sure if I understand how to write rules for it, and looking at rules in /etc/udev/rules.d leaves me a little more confused as to exactly how to write a rule for this. The drives are: sda: a single sata drive partitioned into sda1 and sda5. sda1 has the boot flag set. sda5 is the swap partition. sdb: RAID 10 array partitioned as sdb5, sdb6, sdb7, sdb8. Any help would be appreciated. Any links to clearly written tutorials or anything like that would help.
Re: No more AMD-K7 kernels?
Curt Howland wrote: I've looked through the debian-kernel and debian-user archives, and I don't find a discussion of this. Does anyone know why the -k7 precompiled kernel was dropped? Two of my machines are Athlons. Curt- -- November 5th: $4.3Million Dollars In One Day December 16th: $6 Million Dollars In One Day http://www.youtube.com/watch?v=WxldrCsVByA I can't tell you why the move away from K7 and K8 kernels was made, only that the amd64 kernel in the i386 release replaces both of them.
gnome-volume-manager
Is anyone else having problems with gnome-volume-manager? I'm running Sid with apt-get upgrade last run yesterday. gvm is not running as a daemon, and when I run "gnome-volume-manager" at a bash prompt either as a regular user or as root I get the error that the command is not found. It is installed. So is gnome-mount, and hald is running. I'm a member of the plugdev and cdrom groups. It's really odd as I can burn cds and dvds with k3b, but then the drive won't read them. It tells me that the checksum failed after checking the "verify" option in k3b. However, the drive will read the same disks and they work fine in another computer. The other thing that's really odd is that the drive will recognize blank disks and display an icon for them on the desktop, but as soon as I write data to that cd it will no longer recognize it unless I manually mount either /dev/hdb or /media/cdrom0 as root. I have a machine that runs Etch and /usr/bin/gnome-volume-manager exists on it. It doesn't exist on my machine that runs Sid. Anyone know what's going on with this? 
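The manual workaround mentioned above, spelled out for reference; the device and mount point are the ones named in the post, and these are commands run as root on the affected box rather than a tested recipe:

```shell
# Mount the burned disc by hand when gnome-volume-manager fails to;
# mount normally autodetects the iso9660 filesystem type.
mount /dev/hdb /media/cdrom0    # or just: mount /media/cdrom0 if fstab has an entry

# ... read the disc ...

umount /media/cdrom0
```

If the disc mounts fine by hand but never automatically, that points at the hal/gvm plumbing rather than the drive or the media.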
Re: amd64 downloads not successful
Andrei Popescu wrote: On Tue, Feb 19, 2008 at 12:00:40PM -0800, Freddy Freeloader wrote: Can you give the exact link to the image you are trying to download? Go here. http://www.debian.org/CD/netinst/ Click on the netinstall link for the amd64 release for "stable". Or, go here. http://cdimage.debian.org/debian-cd/4.0_r3/amd64/iso-cd/ r3? That was released 2 days ago. The images were probably not completely uploaded or something because now I downloaded 50 MB in Iceweasel without problems. Regards, Andrei OK. I just figured if the links from debian.org were pointing there that the files would be there. Guess I just caught the system in transition.
Re: amd64 downloads not successful
Sergio Cuéllar Valdés wrote: 2008/2/19, Freddy Freeloader <[EMAIL PROTECTED]>: I don't know who exactly to report this to so I will ask here and maybe someone can point me in the correct direction or maybe the right people will read this. I have been trying to download an AMD64 netinstall iso image this morning and have not been able to. The download is terminated at anywhere from a few KB of data to 16 MB into the download and Iceweasel then reports the download as complete. The same thing happened when I tried a full AMD64 install iso image (the 650 meg plus cd1 download) I thought maybe the problem was at my end but I successfully downloaded an i386 netinstall iso image so that seems to eliminate the problem being at my end. Hello, hmmm, try with wget. If the download is terminated again, try it with the option -c of wget. It will continue getting the partially downloaded file. Best regards, Sergio Cuellar Hmmm, using wget it worked the second try. The first time I connected to 204.152.191.7 and received a 404 error. The second try I connected to 204.152.191.39 and it worked successfully. Looks like there is a problem with the mirrors.
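The wget resume trick in a nutshell; the URL is the iso-cd directory mentioned earlier in the thread, and the exact image filename is assumed here (check the directory listing for the real one):

```shell
# -c resumes a partial download instead of starting over:
wget -c http://cdimage.debian.org/debian-cd/4.0_r3/amd64/iso-cd/debian-40r3-amd64-netinst.iso

# Then verify against the MD5SUMS file published alongside the image:
md5sum debian-40r3-amd64-netinst.iso
```

A checksum mismatch after a "complete" browser download is exactly the symptom a truncated transfer produces, which is why verifying is worth the extra step.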
Re: amd64 downloads not successful
Andrei Popescu wrote: On Tue, Feb 19, 2008 at 09:04:53AM -0800, Freddy Freeloader wrote: I don't know who exactly to report this to so I will ask here and maybe someone can point me in the correct direction or maybe the right people will read this. I have been trying to download an AMD64 netinstall iso image this morning and have not been able to. The download is terminated at anywhere from a few KB of data to 16 MB into the download and Iceweasel then reports the download as complete. The same thing happened when I tried a full AMD64 install iso image (the 650 meg plus cd1 download) Can you give the exact link to the image you are trying to download? Regards, Andrei Go here. http://www.debian.org/CD/netinst/ Click on the netinstall link for the amd64 release for "stable". Or, go here. http://cdimage.debian.org/debian-cd/4.0_r3/amd64/iso-cd/ Try the ...CD-1.iso link or the netinstall.iso link. Both downloads are terminated without completing the download. I can successfully download the debian-testing .iso from below. http://cdimage.debian.org/cdimage/daily-builds/daily/arch-latest/amd64/iso-cd/
amd64 downloads not successful
I don't know who exactly to report this to so I will ask here and maybe someone can point me in the correct direction or maybe the right people will read this. I have been trying to download an AMD64 netinstall iso image this morning and have not been able to. The download is terminated at anywhere from a few KB of data to 16 MB into the download and Iceweasel then reports the download as complete. The same thing happened when I tried a full AMD64 install iso image (the 650 meg plus cd1 download) I thought maybe the problem was at my end but I successfully downloaded an i386 netinstall iso image so that seems to eliminate the problem being at my end.
Re: modsecurity
Jeff D wrote: Freddy Freeloader wrote: Schiz0 wrote: On Feb 13, 2008 2:13 PM, Freddy Freeloader <[EMAIL PROTECTED]> wrote: Hi All, Can anyone tell me why the modsecurity2_module is not in the Debian repositories? I understand that parts of it might not be GPL'ed, but why can't it be carried in the non-free repositories if that's the problem? Debian carries things such as fully proprietary drivers in non-free, so why the problem with modsecurity? IMHO this is a very useful package. If you're using etch, you can put this line in your /etc/apt/sources.list deb http://etc.inittab.org/~agi/debian/libapache-mod-security2/ etch/ It actually has a link to that site on the official mod_security website. Thanks. I was unaware of that link. I will make use of it. Do you know why modsecurity isn't hosted directly? here's a bug report on it: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=352344 OK. It's as I thought. It's a problem with the licensing. But, why can't it be moved to non-free rather than just being dropped altogether? All kinds of proprietary software is in non-free, and the Apache license is at least open even though it's not the GPL. I mean, I can "contaminate" the kernel right from the Debian repositories, so why am I not able to at least install open source software from there? I don't understand this at all.
Re: modsecurity
Schiz0 wrote: On Feb 13, 2008 2:13 PM, Freddy Freeloader <[EMAIL PROTECTED]> wrote: Hi All, Can anyone tell me why the modsecurity2_module is not in the Debian repositories? I understand that parts of it might not be GPL'ed, but why can't it be carried in the non-free repositories if that's the problem? Debian carries things such as fully proprietary drivers in non-free, so why the problem with modsecurity? IMHO this is a very useful package. If you're using etch, you can put this line in your /etc/apt/sources.list deb http://etc.inittab.org/~agi/debian/libapache-mod-security2/ etch/ It actually has a link to that site on the official mod_security website. Thanks. I was unaware of that link. I will make use of it. Do you know why modsecurity isn't hosted directly?
modsecurity
Hi All, Can anyone tell me why the modsecurity2_module is not in the Debian repositories? I understand that parts of it might not be GPL'ed, but why can't it be carried in the non-free repositories if that's the problem? Debian carries things such as fully proprietary drivers in non-free, so why the problem with modsecurity? IMHO this is a very useful package.
evolution not getting access to password
About two weeks ago, well, that's "a" guess as to how long ago it was as I've been pretty busy, I started having to enter my admin password every time Evolution checks for mail. Not the password for the email server but the same admin password that's asked for if you're starting up a program that requires root privileges. I've looked for settings to change this behavior but can't find any. It's not really a big thing, but it is getting annoying. Anyone else run into this and have some idea as to how to change this behavior? I am running an up-to-date Sid install.
Re: CGI Scripts, Apache 2.2.3, and Debian 4.0 R1
Jon D. Irish wrote: Hi Freddy, I was finally able to get it working, but I'm not exactly sure what I did to fix it. I think that it might have been enabling the cgid module. Does this make sense? Otherwise, I set most of the settings back to default, restarted Apache, and the scripts started working. Jon - Original Message From: Freddy Freeloader <[EMAIL PROTECTED]> To: Debian User Sent: Sunday, December 16, 2007 10:22:09 PM Subject: Re: CGI Scripts, Apache 2.2.3, and Debian 4.0 R1 Jon D. Irish wrote: I have a clean install of Debian 4.0 R1 with Apache 2.2.3-4 installed. I can not get cgi scripts to execute from the website. I have researched the Apache site and tried both of the following: 1) Under apache2.conf, I added Options +ExecCGI AddHandler cgi-script .cgi .pl 2) Under httpd.conf, I added # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the realname directory are treated as applications and # run by the server when requested rather than as documents sent to the client. # The same rules about trailing "/" apply to ScriptAlias directives as to # Alias. # ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" # # "/var/www/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # AllowOverride None Options Indexes Includes FollowSymLinks Order deny,allow Deny from all Allow from 192.168.1 Neither of these has allowed my cgi scripts to run. Can someone please tell me what I am doing wrong? Sincerely, Jon This may be a stupid question, but did you enable the cgi module by creating a symbolic link from /etc/apache2/mods-enabled to /etc/apache2/mods-available and install the appropriate php4/php5-cgi package or the libapache-mod-fastcgi depending on your usage? 
Without doing those two things Apache won't/can't load the cgi module. There are also some libcgi packages for perl and a couple of other languages too, but I'm not familiar with them as I've never had occasion to use them. Not knowing what all you did, or exactly what type of scripts you're using, it's impossible to tell. I just know that if you don't enable the correct modules for what you're doing, then no matter what's in your httpd.conf or your virtual server settings it just won't work. The correct combination of enabled modules and configuration settings is what works. You can tell which modules have been loaded, both dynamically and statically, by running, as root, "apache2 -D DUMP_MODULES". -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
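On an Etch-era apache2 the module symlinks don't have to be made by hand; the a2enmod helper manages them. A sketch of the admin steps being described (paths are the Debian defaults; "cgid" applies when running a threaded MPM such as worker):

```shell
# Enable the CGI module; this creates the mods-enabled symlink for you.
a2enmod cgi

# Reload Apache so the newly enabled module is picked up.
/etc/init.d/apache2 force-reload

# Confirm the module actually loaded (run as root).
apache2 -D DUMP_MODULES | grep -i cgi
```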
Re: CGI Scripts, Apache 2.2.3, and Debian 4.0 R1
Jon D. Irish wrote: I have a clean install of Debian 4.0 R1 with Apache 2.2.3-4 installed. I can not get cgi scripts to execute from the website. I have researched the Apache site and tried both of the following: 1) Under apache2.conf, I added Options +ExecCGI AddHandler cgi-script .cgi .pl 2) Under httpd.conf, I added # # ScriptAlias: This controls which directories contain server scripts. # ScriptAliases are essentially the same as Aliases, except that # documents in the realname directory are treated as applications and # run by the server when requested rather than as documents sent to the client. # The same rules about trailing "/" apply to ScriptAlias directives as to # Alias. # ScriptAlias /cgi-bin/ "/var/www/cgi-bin/" # # "/var/www/cgi-bin" should be changed to whatever your ScriptAliased # CGI directory exists, if you have that configured. # AllowOverride None Options Indexes Includes FollowSymLinks Order deny,allow Deny from all Allow from 192.168.1 Neither of these has allowed my cgi scripts to run. Can someone please tell me what I am doing wrong? Sincerely, Jon This may be a stupid question, but did you enable the cgi module by creating a symbolic link from /etc/apache2/mods-enabled to /etc/apache2/mods-available and install the appropriate php4/php5-cgi package or the libapache-mod-fastcgi depending on your usage? Without doing those two things Apache won't/can't load the cgi module. There are also some libcgi packages for perl and a couple of other languages too, but I'm not familiar with them as I've never had occasion to use them. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
blacklist packages in apt
Hi All, I'm unsure if "blacklist" is even the correct terminology for what I want to do, but at least it's a starting point. I administer servers for a small company. If it were up to me I wouldn't even have a gui on them, but the boss and developers come from the Windows world and require a gui. So, I'm stuck with it. What I would like to permanently get rid of are network-manager and avahi-daemon. Is there a way of marking these packages with either apt or dpkg so that they will not be reinstalled, ever? I've had problems with these two packages being auto-reinstalled when doing apt-get upgrades, and I just don't want to have to mess with it anymore. I know it is at least theoretically possible with apt-pinning by setting the priority very low, but I've been looking at the man pages for apt-get and dpkg and can see no way there of doing this without apt-pinning. Is there another way that I've just missed? I would just as soon not use apt-pinning if I don't have to, but if that's what I must do, I'll use it. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
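For what it's worth, two standard approaches sketched below: putting the packages on hold keeps dpkg/apt from upgrading or reinstalling them behind your back, and a negative pin priority in apt's preferences file forbids installation outright (the package names are the two from the question):

```shell
# Option 1: put the packages on hold via dpkg's selection database.
echo "network-manager hold" | dpkg --set-selections
echo "avahi-daemon hold" | dpkg --set-selections

# Option 2: pin them to a negative priority in /etc/apt/preferences;
# a priority below zero prevents the version from ever being installed:
#
#   Package: network-manager
#   Pin: release o=Debian
#   Pin-Priority: -1
#
#   Package: avahi-daemon
#   Pin: release o=Debian
#   Pin-Priority: -1
```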
all video players dropping frames when playing dvd's
Hi all, I hadn't played a movie on any of my computers for 2 or 3 months, and tried to play one today as I had enough time to watch one. What I found is that movies that used to play very well on my laptop and my workstations will not play at all anymore. They drop frames so badly that the players, vlc, totem, and mplayer, all crash. The audio plays at varying speeds and drops frames/skips too. The other thing I found was that dvd's that I had copied to disk no longer play. Totem used to read .iso files and play them just as if it was reading off a dvd disk. Now Totem says it is missing a plugin and can't read them. VLC and Mplayer used to read those same .iso files and play back the movies very smoothly. Now they crash trying to play them. I used to play movies with 2-3% cpu usage at full screen, and the picture was crystal clear and the sound great. Now cpu usage is at 30-40%, and as I said, nothing plays worth a damn. What has happened to the ability to play movies in the last couple of months? If it was only one of my computers I'd be thinking it had a problem, but it's not. It's all of my computers. I haven't done anything to them other than just keep on updating them as I run Sid on all of them. Is anyone else experiencing this same thing? It looks to me as if something has undergone a major change, but what? -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: Joomla
Jose Luis Rivas Contreras wrote: Freddy Freeloader wrote: Well, that is one reason I guess. Do you happen to know why security updates to Joomla would take any longer than security updates to any other Debian package in stable? I'm just curious as Debian has Zope, Plone, Drupal, several wiki's, egroupware, and phpgroupware packages in stable. What is so different about Joomla? I don't know anything about it other than what I read on their website today, but it seems to me that most GPL'ed software is in Debian so my curiosity has been piqued. Joomla and webmin are two rather stark exceptions to the inclusiveness that I find in the Debian repositories, and I have read the reasoning behind why Debian dropped webmin. Well, drupal in stable is old, normally you want the newest and stable, you can find in etch the stable but not the new. That's maybe, and maybe there've not been anyone taking care good enough of the package. Regards, Jose Luis. Thanks for your reply. I was just really curious as to why there aren't any Joomla packages. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: Joomla
Jose Luis Rivas Contreras wrote: Freddy Freeloader wrote: Does anyone here have any insight why Joomla has never made it as a Debian package? I was looking at the Joomla site and did some research on the relationship between Debian and Joomla. I see that someone in late 2006 was packaging Joomla for Sid, but it doesn't appear in the Debian-maintained repositories anymore. Anyone know why this was discontinued? Are there political reasons it was dropped, or is it just lack of enough interest to make the project viable? Well, I guess one of the main reasons is that stable is the suite used by servers and the updates take a long time to get to stable at least that this updates are security ones. And at least me does prefer to install my owns CMS's... Regards, Jose Luis. Well, that is one reason I guess. Do you happen to know why security updates to Joomla would take any longer than security updates to any other Debian package in stable? I'm just curious as Debian has Zope, Plone, Drupal, several wiki's, egroupware, and phpgroupware packages in stable. What is so different about Joomla? I don't know anything about it other than what I read on their website today, but it seems to me that most GPL'ed software is in Debian so my curiosity has been piqued. Joomla and webmin are two rather stark exceptions to the inclusiveness that I find in the Debian repositories, and I have read the reasoning behind why Debian dropped webmin. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Joomla
Does anyone here have any insight why Joomla has never made it as a Debian package? I was looking at the Joomla site and did some research on the relationship between Debian and Joomla. I see that someone in late 2006 was packaging Joomla for Sid, but it doesn't appear in the Debian-maintained repositories anymore. Anyone know why this was discontinued? Are there political reasons it was dropped, or is it just lack of enough interest to make the project viable? -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Plone3 with Zope3
Hi all, Does anyone know when, or if, Plone3 will be available to install with Zope3? -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: Zwiki
Freddy Freeloader wrote: Hi all, I have installed Zope/Plone and have a Plone site up and running. I want to add a zwiki site to it and have installed the zope-zwiki package. The problem is that it never shows up under Products in the ZMI. Is there something else I have to configure to get it to show up? The rest of the products I have installed have shown up there and I have been able to then import them into Plone. Not with zwiki though. I've Googled this to death, and looked on the zwiki wiki, but haven't found anything there either. Any help you can give would be appreciated. Well, never mind I finally found out how to do this. It was stuck in a comment under an article in the zwiki site. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Zwiki
Hi all, I have installed Zope/Plone and have a Plone site up and running. I want to add a zwiki site to it and have installed the zope-zwiki package. The problem is that it never shows up under Products in the ZMI. Is there something else I have to configure to get it to show up? The rest of the products I have installed have shown up there and I have been able to then import them into Plone. Not with zwiki though. I've Googled this to death, and looked on the zwiki wiki, but haven't found anything there either. Any help you can give would be appreciated. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: How to pipe from python script to system process that script starts
Douglas A. Tutty wrote: On Mon, Sep 03, 2007 at 11:48:48AM -0700, Freddy Freeloader wrote: Now on to the next step, to figure out a way to poll for stdin, pass that to exim, and process the message (thaw or remove it) based on what I read in the header. Before we get too far down the garden path with this, what is the big issue? I'm on dialup. I've never had to deal with removing messages from the queue since I have exim run with -qqff which automatically thaws the list every time it runs. After enough retries, exim returns the message to sender. Then again, I'm going to a smarthost. Under what circumstances would you need to manually remove a message from the queue? As for polling stdin, why? What will it get from stdin? Doug. Mainly the issue is learning to use python. What I want the script to read from stdin will be my choice to either thaw the message or remove it from the queue. And, yes, I know Exim will delete anything in the queue after so many tries to deliver it. This is more a learning project than anything else at this point. I'm a newbie exim admin and a newbie to python, and I'm just wanting to make sure all frozen messages have nothing to do with my acl's, and learn something about python at the same time. So, if you don't like my dinky little Python project all I can say is, stop replying. It's my time, and my energy being put into it. If you don't want to help any further, don't. I'll get my answers elsewhere, and there will be no hard feelings on my part. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: How to pipe from python script to system process that script starts
Kumar Appaiah wrote: On Mon, Sep 03, 2007 at 08:05:16AM -0700, Freddy Freeloader wrote: So far I've coded everything as process oriented rather than object oriented as that is what I am familiar with, but I'm beginning to believe that using classes is probably the way to go as it would be much easier to abstract concepts out that way. If someone has an example or two they could share with me on how to do interprocess piping in either oo or process oriented, or both, manner I would appreciate the help. Read the documentation for os.popen. It opens the command and its stdin and stdout as pipes. The commands module might also be of interest. Kumar Thanks. I got os.popen4 to sort of work, I think, as it will capture stdout from the child process, but I haven't figured out yet how to capture that and print it to the screen so I can read the headers. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
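A minimal sketch of the capture-and-print pattern being discussed: os.popen in 'r' mode returns a file-like object wrapping the child's stdout, and reading it gives you the text to print. The exim4 path and the -Mvh flag usage follow the thread; the path is an assumption, so adjust it for your system.

```python
import os

# Assumed location of the exim binary on a Debian system.
EXIMCMD = '/usr/sbin/exim4'

def view_headers(message_id):
    # Run "exim4 -Mvh <id>" and return whatever it wrote to stdout.
    pipe = os.popen('%s -Mvh %s' % (EXIMCMD, message_id), 'r')
    headers = pipe.read()
    pipe.close()
    return headers

# The same capture-and-print pattern shown with a harmless command:
print(os.popen('echo frozen-message-id', 'r').read().strip())
```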
Re: How to pipe from python script to system process that script starts
Douglas A. Tutty wrote: On Mon, Sep 03, 2007 at 08:05:16AM -0700, Freddy Freeloader wrote: I'm wanting to learn python so I'm starting with projects that I want to automate at work. What I want to do in this specific instance is use a python script to call exipick to find all frozen messages in the Exim queue, then feed the message id's to something such as "exim -Mvh" so I can look at the message headers. So far I've been able to get everything working except for how to get my python script to be able to pass the message id's to the exim command as the needed single parameter. I'm assuming that a pipe is the logical way to do this, but just haven't found any kind of example for what I am wanting to do. My Python reference book is just a little too cryptic for me yet and "Learning Python" barely touches on piping. All the examples there are on how to pipe from stdin with sys.stdin and that won't work for this task. Since exim -Mvh, as you say, only takes a single message id, you'll be starting a new process for each message. What you didn't say was where you want the output to go. If you just want to run the command and use exim's way of displaying the info as if you had typed the command from the shell, then you would use os.system(). If you want to get its output back into python you would use one of the os.popen() functions, depending on what pipes you want. Probably just os.popen() with its mode defaulting to 'r'. Now you just have to make up the command line which is simple string processing. Probably define EXIMCMD as '/usr/bin/exim4 ' somewhere near the top of the script as a pseudo constant. Then you would make up the command line with something like (NOTE: haven't tried this, just going from memory and cursory look at my Python bible): exim_cmd_line = EXIMCMD + '-Mvh ' + message_id message_header = os.popen(exim_cmd_line) You now have a file-like object message_header that you can use in the script. I hope this gets you on the right track. Doug. 
Ah, thank you. That put me on the right track. I did it a little differently, but you got me thinking in the right direction. I used os.system() as I need the output printed to screen so I can read email headers. Now on to the next step, to figure out a way to poll for stdin, pass that to exim, and process the message(thaw or remove it) based on what I read in the header. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
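The thaw-or-remove step described above can be sketched as a small mapping from the operator's choice to an exim command line; exim's -Mt (thaw) and -Mrm (remove from queue) options are real, but the binary path and the helper name here are assumptions for illustration.

```python
import os

EXIMCMD = '/usr/sbin/exim4'  # assumed path; adjust for your installation

def queue_action(message_id, choice):
    """Map a one-letter choice read from stdin to an exim queue command."""
    flags = {'t': '-Mt', 'r': '-Mrm'}  # thaw / remove
    if choice not in flags:
        return None
    return '%s %s %s' % (EXIMCMD, flags[choice], message_id)

# os.system(queue_action(...)) would then run it, with output going
# straight to the terminal just like the header display step.
print(queue_action('1ABCDE-000001-XY', 't'))
```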
Re: mail (un)delivery
Douglas A. Tutty wrote: On Mon, Sep 03, 2007 at 02:49:56PM +0100, michael wrote: On Mon, 2007-09-03 at 09:27 -0400, Douglas A. Tutty wrote: On Mon, Sep 03, 2007 at 02:12:42PM +0100, michael wrote: I'm feeling a bit dense today so any help welcome! Essentially, I've just noticed that local mail hasn't been delivered for a couple of weeks. I can email off my box but not to my username on the box. I can't see what the problem is. They are probably both a red herring [1] but (a) I did have some DNS problems just prior to the last received email and (b) switched off the box and physically moved it to a new location (and the new IP number) just after the last received email. I'm unsure how to go about debugging this so all pointers welcome! Assuming that you're using exim4, check your /etc/exim4/update-exim4.conf.conf file for the wrong IP addresses. If you find any, follow the instructions at the top of the file. From what I can tell it seems fine: agreed Assuming that you have written yourself an email on the same box, what error messages do you get? What does mailq say? Are you having exim do a reverse DNS lookup for every mail? Yes, the box is ratty.phy.umist.ac.uk and if I email myself (mail localusername) I get no error msgs. mailq gives me a permission error unless I use 'sudo mailq localusername' which then gives me michael-H *** spool read error: No such file or directory *** (not sure what that means...) What it means is that you used mailq wrong. You don't need any parameters but if you provide any, they are a list of message IDs. Since no message ID will be your localusername it will fail. Try mailq all by itself. I said 'no' to keeping num of DNS lookups minimal. 
NB: nslookup on the machine gives multiple entries: $ nslookup ratty.phy.umist.ac.uk Server: 130.88.13.7 Address: 130.88.13.7#53 Name: ratty.phy.umist.ac.uk Address: 130.88.15.179 Name: ratty.phy.umist.ac.uk Address: 130.88.128.163 I don't have nslookup installed but it seems weird to me that one hostname would have more than one IP address. See what mailq says and see what is in exim's logs. Doug. If you traceroute ratty.phy.umist.ac.uk it will go to one IP address and then to the other the next time you traceroute the hostname, so I'm assuming this is some form of load balancing using DNS. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
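A name carrying several A records is ordinary round-robin DNS, and it is easy to see programmatically. A sketch in python (resolving localhost here only so the example runs anywhere; a load-balanced hostname would simply show several addresses in the list):

```python
import socket

# gethostbyname_ex returns (canonical name, alias list, address list);
# a round-robin DNS name carries more than one entry in the address list.
name, aliases, addresses = socket.gethostbyname_ex('localhost')
print(addresses)
```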
How to pipe from python script to system process that script starts
Hi all, This will be sort of involved. I'm wanting to learn python so I'm starting with projects that I want to automate at work. What I want to do in this specific instance is use a python script to call exipick to find all frozen messages in the Exim queue, then feed the message id's to something such as "exim -Mvh" so I can look at the message headers. So far I've been able to get everything working except for how to get my python script to be able to pass the message id's to the exim command as the needed single parameter. I'm assuming that a pipe is the logical way to do this, but just haven't found any kind of example for what I am wanting to do. My Python reference book is just a little too cryptic for me yet and "Learning Python" barely touches on piping. All the examples there are on how to pipe from stdin with sys.stdin and that won't work for this task. So far I've coded everything as process oriented rather than object oriented as that is what I am familiar with, but I'm beginning to believe that using classes is probably the way to go as it would be much easier to abstract concepts out that way. If someone has an example or two they could share with me on how to do interprocess piping in either oo or process oriented, or both, manner I would appreciate the help. And, yes, I know, I could have done this very simply with a bash script, but that is not the point. I'm doing this as much to learn python as I am to accomplish the task. So, please, no tips on how to do this in a bash script. I already know how to do that. I'm just very much a Python noob at the moment. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: replacement for formmail
Rick Pasotto wrote: On Wed, Aug 29, 2007 at 01:36:12AM +, Douglas A. Tutty wrote: On Tue, Aug 28, 2007 at 02:58:00PM -0700, Freddy Freeloader wrote: Douglas A. Tutty wrote: On Tue, Aug 28, 2007 at 12:51:00PM -0700, Freddy Freeloader wrote: Anyone have any good recommendations for replacing formmail? I just started working for someone who is using it and we are trying to lock his sites down more than they have been in the past and are looking for a replacement. What is formmail? A buggy, insecure perl script used for sending mail from a website. It's been around for a long time and a lot of people use it, but we would like to move away from it. You can still see one replacement for it using apt-cache if your sources.list file is pointed to sarge. You can find it by searching packages at debian.org and choosing "any" for the release. I've never done CGI stuff. Why not just rewrite it so that its not buggy? Use your language-of-choice. I would use Python but of course it would require that I could read Perl, which I can't. There is a php version that is much better than the perl version. http://www.dtheatre.com/scripts/formmail.php Thank you. That is just what I was looking for. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: replacement for formmail
Douglas A. Tutty wrote: On Tue, Aug 28, 2007 at 12:51:00PM -0700, Freddy Freeloader wrote: Anyone have any good recommendations for replacing formmail? I just started working for someone who is using it and we are trying to lock his sites down more than they have been in the past and are looking for a replacement. What is formmail? Doug. A buggy, insecure perl script used for sending mail from a website. It's been around for a long time and a lot of people use it, but we would like to move away from it. You can still see one replacement for it using apt-cache if your sources.list file is pointed to sarge. You can find it by searching packages at debian.org and choosing "any" for the release. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
replacement for formmail
Anyone have any good recommendations for replacing formmail? I just started working for someone who is using it and we are trying to lock his sites down more than they have been in the past and are looking for a replacement. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: phpmyadmin
Jeff D wrote: On Tue, 21 Aug 2007, ArcticFox wrote: On Aug 20, 2007, at 9:58 PM, Freddy Freeloader wrote: Sorry to be so long getting back to you, but I had already tried this and you're not going to believe the results. There is a phpinfo.php file in /usr/share/phpmyadmin. It doesn't work. However, I took the php script I used earlier to connect to mysql, added the phpinfo() function to the bottom of the page, copied it to /usr/share/phpmyadmin and Apache served the page. It both connected to mysql and served up the php configuration. It has the same permissions that all other files have in that directory. Now, I'm not highly experienced with Apache, but I know enough to set it up and get it working in most situations, but this just plain old has me buffaloed. That Apache will not serve up any of the phpmyadmin php pages but will serve up other php pages located in the same directory just seems plain old bizarre to me. I'm sorry if this seems redundant but you're /sure/ all the files have the same user and permissions? If that's the case the only thing I can think of is to try downloading a fresh copy and installing that. That seems strange to me too. I just set it up here and apparently even though I have a config.inc.php file, it was ignoring it anyway. So I edited libraries/config.default.php, added in my host info and wow it works! -+- 8 out of 10 Owners who Expressed a Preference said Their Cats Preferred Techno. I tried your solution this morning and it didn't change anything either. :( -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: phpmyadmin
ArcticFox wrote: On Aug 20, 2007, at 9:58 PM, Freddy Freeloader wrote: Sorry to be so long getting back to you, but I had already tried this and you're not going to believe the results. There is a phpinfo.php file in /usr/share/phpmyadmin. It doesn't work. However, I took the php script I used earlier to connect to mysql, added the phpinfo() function to the bottom of the page, copied it to /usr/share/phpmyadmin and Apache served the page. It both connected to mysql and served up the php configuration. It has the same permissions that all other files have in that directory. Now, I'm not highly experienced with Apache, but I know enough to set it up and get it working in most situations, but this just plain old has me buffaloed. That Apache will not serve up any of the phpmyadmin php pages but will serve up other php pages located in the same directory just seems plain old bizarre to me. I'm sorry if this seems redundant but you're /sure/ all the files have the same user and permissions? If that's the case the only thing I can think of is to try downloading a fresh copy and installing that. That seems strange to me too. Absolutely positive. That was one of the first things I checked, and then checked again to make sure that the files I have placed in /usr/share/phpmyadmin had the same permissions. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: phpmyadmin
ArcticFox wrote: On Aug 20, 2007, at 8:01 AM, Freddy Freeloader wrote: ArcticFox wrote: On Aug 20, 2007, at 12:10 AM, Freddy Freeloader wrote: ArcticFox wrote: Have you taken a look at the phpmyadmin site? I had some trouble getting it to work on my system too and there was a rather nice troubleshooting page there that helped me out. Yeah, I have. I've also read the documentation installed in /usr/share/phpmyadmin/Documentation.html, and spent quite a while with Google too. I just don't really understand what is going on as the same server will serve up other php pages, but phpmyadmin seems only able to serve html pages. You didn't by any chance disable FollowSymlinks did you? In your apache config file. The new install of Debian I have phpmyadmin worked out of the box. The other thing to try would be to delete the existing install and try installing a fresh copy downloaded from their website. Nope, didn't disable FollowSymlinks. This was a fresh Apache and phpmyadmin install and it has done this since I installed it. Looking at all the config files it "should" work, but it doesn't. Hum. Try running the index.php file from the command line to make sure your PHP processor is working right. If that works create a file (call it something like test.php) and put this in it: <?php phpinfo(); ?> then try to access that from your web browser. You should see a huge list of everything php knows. Sorry to be so long getting back to you, but I had already tried this and you're not going to believe the results. There is a phpinfo.php file in /usr/share/phpmyadmin. It doesn't work. However, I took the php script I used earlier to connect to mysql, added the phpinfo() function to the bottom of the page, copied it to /usr/share/phpmyadmin and Apache served the page. It both connected to mysql and served up the php configuration. 
It has the same permissions that all other files have in that directory. Now, I'm not highly experienced with Apache, but I know enough to set it up and get it working in most situations, but this just plain old has me buffaloed. That Apache will not serve up any of the phpmyadmin php pages but will serve up other php pages located in the same directory just seems plain old bizarre to me. On Aug 19, 2007, at 10:58 PM, Freddy Freeloader wrote: Hi All, I am having a problem with phpmyadmin that is just driving me nuts. (I've spent hours troubleshooting and Googling this and just cannot come up with a solution. I don't know if this problem is misconfiguration or a bug so I didn't want to turn in a bug report.) All phpmyadmin will serve is blank pages. (looking at page source in a browser shows there is nothing being sent from Apache. The page source is blank.) There are no related errors in the apache2 logs and I have logging set to "debug" in /etc/apache2/apache2.conf. In fact, calling http://server_name/phpmyadmin results in a 200 message in the access log. 
Here is the output from apache2 -M: Loaded Modules: core_module (static) log_config_module (static) logio_module (static) mpm_prefork_module (static) http_module (static) so_module (static) alias_module (shared) auth_basic_module (shared) authn_file_module (shared) authz_default_module (shared) authz_groupfile_module (shared) authz_host_module (shared) authz_user_module (shared) autoindex_module (shared) cgi_module (shared) dir_module (shared) env_module (shared) mime_module (shared) python_module (shared) negotiation_module (shared) php5_module (shared) setenvif_module (shared) status_module (shared) Syntax OK Here is what /var/log/apache2/access.log shows to a call to the url http://server_name/phpmyadmin/: 127.0.0.1- - [18/Aug/2007:09:23:53 -0700] "GET /phpmyadmin/ HTTP/1.1" 200 - "-" "Mozilla/5.0 (X$en-US;$ Well, it line-wrapped and the end of the line is cut off from copying and pasting out of nano, but you can see what I mean. All calls to phpmyadmin have the same entry in the access log. I can call /phpmyadmin/scripts/setup.php and I get exactly the same entry in the logs and a blank page. In fact, I've called pretty much all the php files in /usr/share/phpmyadmin and I get exactly the same response. I have also enabled all levels of php error logging in php.ini and can find no errors there either. However, if I manually call for http://server_name/phpmyadmin/Documentation.html (which is found in /usr/share/phpmyadmin/ along with the rest of the phpmyadmin files) it will serve it up. Starting up wireshark and sniffing packets on the client I see the client send the GET command for the appropriate page, I see the server acknowledge it with a 200 message, but nothing is ever sent from the server in response to calling any of the php files in phpmyadmin. Permissions in /usr/share/phpmyadmin are world readable and all subdirectories are world browsable.
Re: phpmyadmin
ArcticFox wrote: On Aug 20, 2007, at 12:10 AM, Freddy Freeloader wrote: ArcticFox wrote: Have you taken a look at the phpmyadmin site? I had some trouble getting it to work on my system too and their was a rather nice troubleshooting page there that helped me out. Yeah, I have. I've also read the documentation installed in /usr/share/phpmyadmin/Documentation.html, and spent quite a while with Google too. I just don't really understand what is going on as the same server will serve up other php pages, but phpmyadmin seems only able to serve html pages. You didn't by any chance disable FollowSymlinks did you? In your apache config file. The new install of Debian I have phpmyadmin worked out of the box. The other thing to try would be to delete the existing install and try installing a fresh copy downloaded from their website. Nope, didn't disable FollowSymlinks. This was a fresh Apache and phpmyadmin install and it has done this since I installed it. Looking at all the config files it "should" work, but it doesn't. On Aug 19, 2007, at 10:58 PM, Freddy Freeloader wrote: Hi All, I am having a problem with phpmyadmin that is just driving me nuts. (I've spent hours troubleshooting and Googling this and just cannot come up with a solution. I don't know if this problem is misconfiguration or a bug so I didn't want to turn in a bug report.) All phpmyadmin will serve is blank pages. (looking at page source in a browser shows there is nothing being sent from Apache. The page source is blank.) There are no related errors in the apache2 logs and I have logging set to "debug" in /etc/apache2/apache2.conf. In fact, calling http://server_name/phpmyadmin results in a 200 message in the access log. 
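Since FollowSymlinks came up in the exchange above: the Debian phpmyadmin package of that era hooked into Apache with an alias plus a directory stanza along the lines of the sketch below. The directives are assumptions based on a typical Apache 2.2 setup, not copied from the poster's machine; a missing DirectoryIndex or a disabled php5 module in such a stanza would produce exactly the "only HTML is served" symptom described.

```
# Sketch of a typical Apache 2.2 stanza for phpmyadmin (directives assumed,
# not taken from the poster's config; the path is the one from the thread):
Alias /phpmyadmin /usr/share/phpmyadmin
<Directory /usr/share/phpmyadmin>
    Options FollowSymLinks
    DirectoryIndex index.php
</Directory>
```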
Re: phpmyadmin
ArcticFox wrote: Have you taken a look at the phpmyadmin site? I had some trouble getting it to work on my system too and there was a rather nice troubleshooting page there that helped me out. Yeah, I have. I've also read the documentation installed in /usr/share/phpmyadmin/Documentation.html, and spent quite a while with Google too. I just don't really understand what is going on, as the same server will serve up other php pages, but phpmyadmin seems only able to serve html pages. On Aug 19, 2007, at 10:58 PM, Freddy Freeloader wrote: Hi All, I am having a problem with phpmyadmin that is just driving me nuts. (I've spent hours troubleshooting and Googling this and just cannot come up with a solution. I don't know if this problem is misconfiguration or a bug, so I didn't want to turn in a bug report.) All phpmyadmin will serve is blank pages. (Looking at page source in a browser shows there is nothing being sent from Apache. The page source is blank.) There are no related errors in the apache2 logs and I have logging set to "debug" in /etc/apache2/apache2.conf. In fact, calling http://server_name/phpmyadmin results in a 200 message in the access log.
phpmyadmin
Hi All, I am having a problem with phpmyadmin that is just driving me nuts. (I've spent hours troubleshooting and Googling this and just cannot come up with a solution. I don't know if this problem is misconfiguration or a bug, so I didn't want to turn in a bug report.) All phpmyadmin will serve is blank pages. (Looking at page source in a browser shows there is nothing being sent from Apache. The page source is blank.) There are no related errors in the apache2 logs and I have logging set to "debug" in /etc/apache2/apache2.conf. In fact, calling http://server_name/phpmyadmin results in a 200 message in the access log.

Here is the output from apache2 -M:

Loaded Modules:
 core_module (static)
 log_config_module (static)
 logio_module (static)
 mpm_prefork_module (static)
 http_module (static)
 so_module (static)
 alias_module (shared)
 auth_basic_module (shared)
 authn_file_module (shared)
 authz_default_module (shared)
 authz_groupfile_module (shared)
 authz_host_module (shared)
 authz_user_module (shared)
 autoindex_module (shared)
 cgi_module (shared)
 dir_module (shared)
 env_module (shared)
 mime_module (shared)
 python_module (shared)
 negotiation_module (shared)
 php5_module (shared)
 setenvif_module (shared)
 status_module (shared)
Syntax OK

Here is what /var/log/apache2/access.log shows for a call to the URL http://server_name/phpmyadmin/:

127.0.0.1 - - [18/Aug/2007:09:23:53 -0700] "GET /phpmyadmin/ HTTP/1.1" 200 - "-" "Mozilla/5.0 (X$en-US;$

Well, it line-wrapped and the end of the line is cut off from copying and pasting out of nano, but you can see what I mean. All calls to phpmyadmin have the same entry in the access log. I can call /phpmyadmin/scripts/setup.php and I get exactly the same entry in the logs and a blank page. In fact, I've called pretty much all the php files in /usr/share/phpmyadmin and I get exactly the same response. I have also enabled all levels of php error logging in php.ini and can find no errors there either.
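The access.log entry quoted above already contains the key clue. In Apache's combined log format the 9th whitespace-separated field is the HTTP status and the 10th is the number of response body bytes, where "-" means no body was sent at all, which matches the blank pages. A quick way to pull those two fields out of the quoted line (with the cut-off user-agent string abbreviated here):

```shell
# Field 9 of Apache's combined log format is the status, field 10 the number
# of body bytes sent ("-" when nothing was sent). The log line is the one
# quoted in the post above, with the truncated user-agent shortened.
printf '127.0.0.1 - - [18/Aug/2007:09:23:53 -0700] "GET /phpmyadmin/ HTTP/1.1" 200 - "-" "Mozilla/5.0"\n' |
  awk '{print $9, $10}'
```

The `200 -` this prints is exactly the symptom described: the request is accepted, but zero bytes of content come back.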
However, if I manually call http://server_name/phpmyadmin/Documentation.html (which is found in /usr/share/phpmyadmin/ along with the rest of the phpmyadmin files) it will serve it up. Starting up wireshark and sniffing packets on the client, I see the client send the GET command for the appropriate page, and I see the server acknowledge it with a 200 message, but nothing is ever sent from the server in response to calling any of the php files in phpmyadmin. Permissions in /usr/share/phpmyadmin are world readable and all subdirectories are world browsable. User and group are both root. When connecting via a browser I am asked for a user name and password and they are accepted. I also found a php script on the net that would connect directly to mysql-server from a browser page and it worked fine. I was able to successfully connect to mysql. I am running mysql 5 and have a sample database installed and can successfully query it from a bash prompt. I am running sid and it is up-to-date as of last night. So I am current with all packages.
Here is the output of dpkg -l | grep apache2, php5, and mysql:

ii  apache2                    2.2.4-3     Next generation, scalable, ext$
ii  apache2-mpm-prefork        2.2.4-3     Traditional model for Apache H$
ii  apache2-utils              2.2.4-3     utility programs for webservers
ii  apache2.2-common           2.2.4-3     Next generation, scalable, ext$
ii  libapache2-mod-auth-mysql  4.3.9-4     Apache 2 module for MySQL auth$
ii  libapache2-mod-auth-pgsql  2.0.3-4+b1  Module for Apache2 which provi$
ii  libapache2-mod-perl2       2.0.2-2.4   Integration of perl with the A$
ii  libapache2-mod-php5        5.2.3-1+b1  server-side, HTML-embedded scr$
ii  libapache2-mod-python      3.3.1-2     Apache 2 module that embeds Py$
ii  libapache2-mod-python-doc  3.3.1-1     Apache 2 module that embeds Py$
ii  libapache2-mod-php5        5.2.3-1+b1  server-side, HTML-embedded scr$
ii  php5                       5.2.3-1     server-side, HTML-embedded scr$
ii  php5-cgi                   5.2.3-1+b1  server-side, HTML-embedded scr$
ii  php5-common                5.2.3-1+b1  Common files for packages buil$
ii  php5-gd                    5.2.3-1+b1  GD module for php5
ii  php5-mcrypt                5.2.3-1+b1  MCrypt module for php5
ii  php5-mysql                 5.2.3-1+b1  MySQL module for php5
ii  php5-pgsql                 5.2.3-1+b1  PostgreSQL module for php5
ii  libapache2-mod-auth-mysql  4.3.9-4     Apache 2 module for MySQL auth$
ii  libdbd-mysql-perl          4.005-1     A Perl5 database interface to $
ii  libmysqlclient15off        5.0.45-1    MySQL database client library
ii  mysql-client-5.0           5.0.45-1    MySQL database client binaries
ii  mysql-common
Re: Am I missing something in my understanding here?
Bob Proulx wrote: Freddy Freeloader wrote: The first time I run "/etc/init.d/networking restart" after a reboot I get the same message I would if I ran "ifdown eth0" about eth0 releasing its dhcp address, and I no longer have network connectivity, i.e. the /etc/init.d/networking script does not seem to call ifup -a to restart all network connections. (I have only one nic on this computer.) To get network connectivity back I must manually run "ifup eth0". The problem is rooted in Bug#300937. It is about how to handle hotplug network devices. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=300937 The default Etch installation configures networking such as you are seeing now. But let's back into it from the side. The configuration is: allow-hotplug eth0 You noted "ifup -a", and /etc/init.d/networking does call ifup -a. But looking at the man page for ifup shows: -a, --all If given to ifup, affect all interfaces marked auto. So 'ifup -a' only brings up interfaces marked auto, but a default Etch installation sets all interfaces to allow-hotplug. Now you can see why restarting networking does not bring up networking. And in the man page for interfaces it says: Lines beginning with the word "auto" are used to identify the physical interfaces to be brought up when ifup is run with the -a option. (This option is used by the system boot scripts.) ... Lines beginning with "allow-" are used to identify interfaces that should be brought up automatically by various subsystems. This may be done using a command such as "ifup --allow=hotplug eth0 eth1", which will only bring up eth0 or eth1 if it is listed in an "allow-hotplug" line. My question is, is this behavior by design, or have I stumbled across a bug? I don't know the full history behind this but looking at Bug#300937 I assume the design is that network events have been moved from boot-time action to network-event-time action to support hotplug network interfaces.
Looking in /usr/share/doc/udev/README.Debian.gz I see: After receiving events about network interfaces, net.agent will call ifupdown using the --allow=hotplug option. This makes the program act only on interfaces marked with the "allow-hotplug" statement. E.g: "allow-hotplug eth0" instead of the usual "auto eth0". I have not researched further what it would take to trigger udev to restart networking but I presume there is a path through it that would do so. To return to the previous behavior you may change the allow-hotplug stanzas into auto stanzas and then restarting networking will restart the network interface. I would welcome further information on this area. Bob Thanks for your answer, Bob. Changing allow-hotplug to auto did change the behavior, and I now have a better understanding of how all this works.
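The fix Bob describes amounts to a one-word change in /etc/network/interfaces. The stanza below is a sketch assuming a single DHCP-configured eth0 as in the thread; the "iface ... inet dhcp" line is an assumption based on a typical default install, not quoted from the poster's file:

```
# /etc/network/interfaces -- sketch of the change discussed above.
# Before (default Etch install; skipped by "ifup -a", and so by the
# /etc/init.d/networking script):
#   allow-hotplug eth0
#   iface eth0 inet dhcp
# After ("/etc/init.d/networking restart" brings the interface back up):
auto eth0
iface eth0 inet dhcp
```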
Am I missing something in my understanding here?
The first time I run "/etc/init.d/networking restart" after a reboot I get the same message I would if I ran "ifdown eth0" about eth0 releasing its dhcp address, and I no longer have network connectivity, i.e. the /etc/init.d/networking script does not seem to call ifup -a to restart all network connections. (I have only one nic on this computer.) To get network connectivity back I must manually run "ifup eth0". However, after that first time, running "/etc/init.d/networking restart" gives me the message "reconfiguring network interfaces" but no ifdown or ifup messages showing the releasing and acquiring of an ip address through dhcp. If I run "ifdown eth0" at this point it tells me that eth0 is not configured, but I have network connectivity and ifconfig -a confirms that eth0 is configured. If I run "/etc/init.d/networking stop" I get the correct message that eth0 is releasing its dhcp address. However, then running "/etc/init.d/networking start" does not run ifup on eth0, as eth0 remains unconfigured, although I receive the message that the network interfaces have been successfully reconfigured. I must manually run ifup eth0 to configure it again and regain network connectivity. My question is, is this behavior by design, or have I stumbled across a bug? I'm running an up-to-date Sid install.
Re: rampant offtopic and offensive posts to debian-user
On Mon, 2007-05-21 at 22:49 -0400, Celejar wrote: > On Mon, 21 May 2007 10:59:01 -0400 > Hal Vaughan <[EMAIL PROTECTED]> wrote: > > > On Monday 21 May 2007, Celejar wrote: > > > On Sat, 19 May 2007 17:02:01 +0200 > > > "M. Fioretti" <[EMAIL PROTECTED]> wrote: > > > > > > [snip] > > > > > > > This is why I'm posting also this reply to the moderators. I really > > > > hope they put a stop to this, this time. > > > > > > As people have pointed out; this is exactly the issue. There *are no* > > > moderators! d-u is (currently) unmoderated (listmasters aren't > > > [necessarily] moderators). I have no problem with people requesting a > > > change, but as it exists now, it's an unmoderated list. > > > > I don't know what the term is, but within the past year or so there was > > an issue that came up with some quite heated discussions that involved > > criticism of Debian or some part of it and someone stepped in and > > delayed any posts on that thread for 24 hours. > > > > How was that done without moderation (or moderation by any other name)? > > I have heard of that, although I don't know anything about it. > Obviously someone was doing some moderating at that time, but I still > don't think that that makes the list, in general, a moderated list. It > goes without saying that any list *can* be moderated; after all, > *someone* has root access on the list servers! The question is whether > the list rules and code of conduct specify moderation, and whether > there is, in practice, any actual, sustained moderation. > > > Hal > > Celejar > -- > mailmin.sourceforge.net - remote access via secure (OpenPGP) email > ssuds.sourceforge.net - A Simple Sudoku Solver and Generator > > This is not a direct reply to anyone. It is just something that I thought should be pointed out. There are many people here who assume that they should be able to control what everyone has to talk about. 
Yeah, you guys may not like off-topic posts, but the heat I hear about this just makes no sense to me. Calling the off-topic posts on this mailing list offensive is just plain nuts. You want to see offensive? Take a look at the following link to a thread from a Windows forum. It is truly offensive, unlike threads that some people decide to call offensive merely because they don't like some of the points of view expressed, because of some off-color joke, or just because they are tired of dealing with the volume of posts. http://www.sharkyforums.com/showthread.php?p=2455305#post2455305 Could OT posting be reduced? Yes, but get some sense of perspective, people.
Re: getting a new Debian box
Greg Folkert wrote: On Fri, 2007-05-18 at 21:51 -0700, Alexandru Cardaniuc wrote: Hi All! I've been using Debian with my desktop replacement laptop HP Pavilion zv5260 for about 2 years now. This was the only computer I had for about 2.5 years. I got tired of using the laptop constantly. I want something more convenient with a bigger screen and that I can use more comfortably. So I decided to get a desktop. I need some advice. [snippage] Dell Dimension E521N: AMD Athlon 64 X2 Dual-Core 4000+ [snippage] TOTAL: $689.00 Are there any other desktop computers I should consider buying instead of this one? Is there any configuration part I should consider changing? Should I upgrade to 2 GB of RAM? 1 GB seems to be enough for me, since I am now using etch on my laptop with 512 MB and I don't see any problems. In general the e521n seems to be enough for etch and everything I use, since I am running a 2.5-year-old laptop and I don't see any real performance issues. But again, I am open to advice. I googled e521n and debian and found out that there was a problem with a usb malfunction that froze the mouse, but that it was solved with the new bios update released by Dell in January, 2007. I didn't find any other issues. Everything else seems to work in Debian. So if somebody here is using this model with Debian, please provide feedback. Do you have any issues? Something to be aware of before buying it? Any compatibility issues with Debian? I would prefer to install Debian Stable (Etch) on it, which comes with linux kernel 2.6.18. Will I be able to do that? Or will I need the new kernel 2.6.20 from unstable? If you wait until May 24th, Dell will be shipping Desktops and Laptops with Ubuntu Linux pre-installed on them. Everything will work on them. I believe the same model you just quoted. Similar prices too. I believe the correct model number is the XPS 421, not the 521. At least that is what was posted on Groklaw.
Re: ntfs read-write
somethin2cool wrote: Joe Hart wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 somethin2cool wrote: Answer to this one: ntfs-3g. It works. But, never trust anything to work with an undocumented file system. Frequent backups are a good idea. Reformatting the ntfs drive is a better idea, but your Windows might complain ;) Joe Good advice, but a good tool to have around nonetheless. You wouldn't advise captive since it controls the official driver? Thanks Well, I don't use ntfs, but I used to, and I know that ntfs-3g works, and Captive is not recommended for writing by people much more familiar with it than myself. It works fine for reading. Back when I still had windows, I created a fat32 partition and used that to share, but since then I have reclaimed the space that Windows was using and changed it so that Windows runs in a VM so I can still access windows programs if I have to, but no longer have to reboot to do it. Now I can just run Windows in a Window. I use Virtual Box for that. Joe - -- Registerd Linux user #443289 at http://counter.li.org/ -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGM0WPiXBCVWpc5J4RArM3AJ44q7P2Q12PTpk91xg3JtG8hQTBnwCgnw3L es3rUA29U6Xc03+Bjc80Xj8= =3AuE -END PGP SIGNATURE- That sounds like where I'm heading. I have a FAT32 partition for the interim, but my target is not just for my own data storage, but to be able to use other hdd's that are connected as slaves in the future. For that I need a reliable NTFS write driver. My understanding of attitudes toward Captive is that it follows the lines of "we hate it because it just wraps a microsoft driver and is thus evil. it should have been written from scratch". Something which I am not concerned about. And logically speaking, this idea ought to be the most reliable.
I will look into both and post my findings (assuming I get them working). I have a lot of files left over from my Windows days that are stored on ntfs partitions and I have found ntfs-3g to be really reliable. I never could get captive to work nearly as reliably as I have ntfs-3g. It's basically just install and go.
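For the "use other hdd's connected as slaves" goal mentioned above, ntfs-3g is normally wired up once in /etc/fstab so the partitions mount at boot. The line below is a sketch; the device name and mount point are assumptions for illustration, not taken from the thread:

```
# Sketch of an /etc/fstab entry for a writable NTFS mount via ntfs-3g
# (the device /dev/sdb1 and mount point /mnt/windows are hypothetical):
/dev/sdb1  /mnt/windows  ntfs-3g  defaults  0  0
```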
Re: oddity in apt-cache
On Mon, 2007-04-23 at 23:11 +0200, Florian Kulzer wrote: > On Mon, Apr 23, 2007 at 13:47:50 -0700, Freddy Freeloader wrote: > > Florian Kulzer wrote: > > [...] > > >> I think many people would not like it if apt-cache no longer found the > >> local packages, custom kernels, etc. If a package is still installed > >> then its information is included in apt's package cache, and "apt-cache" > >> bases all its results on this cache. It does not query the repositories > >> at all but it gets this information indirectly whenever you run "apt-get > >> update" (or aptitude, etc.). > >> > >> If you want to run queries on what is available in the repositories you > >> will probably have to use "apt-file" or "rmadison" (from package > >> "devscripts"). > >> > >> > > Thanks Florian, > > > > I guess, then, I just use "apt-cache search" differently than most people > > do, or even the way it was intended to be used. I use it to find available > > packages and it has always worked for me until today as I usually run > > apt-get update on a daily basis. I have never even thought of using it to > > find something installed locally because to me it makes no sense to use it > > that way as it gives you no way to know if a package is installed. If I > > want to find out if something is locally installed I use "dpkg -l | grep > > relevant_string". That tells me the package's installation status. > > [...] > > I think I did not make myself very clear: I also use apt-cache like you > do (mostly since it seems to be faster than "aptitude search" for simple > queries). I don't think that there is anything wrong with that; you just > have to be aware that all installed packages are also included in the > search, even if some of them have meanwhile been removed from the > repositories. > > -- > Regards,| http://users.icfo.es/Florian.Kulzer > Florian | > > OK. What threw me was when you brought up people using apt-cache to track packages such as custom built kernels. 
I would have never thought of using apt-cache for such a task. dpkg is the only tool I would have even considered using.
Re: oddity in apt-cache
Douglas Allan Tutty wrote: On Mon, Apr 23, 2007 at 11:34:21AM -0700, Freddy Freeloader wrote: Florian Kulzer wrote: On Mon, Apr 23, 2007 at 08:08:22 -0700, Freddy Freeloader wrote: Does this mean that apt-cache reads the local database + the server repositories rather than just the server repositories? I tend to see that as a bug, not a feature, as it leads people, such as me, to believe a package which was installed at some time in the past on the local machine still exists in the repositories when it has, in fact, been removed. Sorry to interrupt, but I'm wondering what aptitude show shows? Doug. Good question.
Re: oddity in apt-cache
Florian Kulzer wrote: On Mon, Apr 23, 2007 at 11:34:21 -0700, Freddy Freeloader wrote: Florian Kulzer wrote: On Mon, Apr 23, 2007 at 08:08:22 -0700, Freddy Freeloader wrote: I have three separate machines that have identical entries in /etc/apt/sources.list. All were updated and upgraded this morning as a result of troubleshooting this issue. On machine #1 I can apt-cache show nhfsstone and it returns the expected data on nhfsstone. On machines 2 and 3 it tells me that the nhfsstone package cannot be found. Running apt-cache search nfs on all machines yields similar results. Machine #1 has nhfsstone included in the result set. Machines 2 and 3 do not. All machines are pointed to: deb http://mirrors.kernel.org/debian sid main contrib non-free. This just makes absolutely no sense to me. I'm pointing all three to the same set of repositories and yet two machines cannot find a software package the other machine finds. I realize that all machines may not see exactly the same server every time, but to have a package being found on one machine and missing on two others seems very strange. Can anyone explain this anomaly to me? It seems to me that nhfsstone is no longer in Sid. My guess is that machine #1 has it only as a local package. What is the output of "apt-cache policy nhfsstone" on this machine? Thanks Florian, It is reported as installed, which it is. However, between the time it was installed and the present I have run "apt-get clean", so the package no longer exists in the local apt archives. But it is still listed by apt-cache as "100 /var/lib/dpkg/status" for the currently installed version, right? Does this mean that apt-cache reads the local database + the server repositories rather than just the server repositories? I tend to see that as a bug, not a feature, as it leads people, such as me, to believe a package which was installed at some time in the past on the local machine still exists in the repositories when it has, in fact, been removed.
I think many people would not like it if apt-cache no longer found the local packages, custom kernels, etc. If a package is still installed then its information is included in apt's package cache, and "apt-cache" bases all its results on this cache. It does not query the repositories at all but it gets this information indirectly whenever you run "apt-get update" (or aptitude, etc.). If you want to run queries on what is available in the repositories you will probably have to use "apt-file" or "rmadison" (from package "devscripts"). Thanks Florian, I guess, then, I just use "apt-cache search" differently than most people do, or even the way it was intended to be used. I use it to find available packages and it has always worked for me until today as I usually run apt-get update on a daily basis. I have never even thought of using it to find something installed locally because to me it makes no sense to use it that way as it gives you no way to know if a package is installed. If I want to find out if something is locally installed I use "dpkg -l | grep relevant_string". That tells me the package's installation status. I guess I should have realized that apt-cache does search locally because of the results returned using "apt-cache policy", which I used to use a fair amount, but not regularly anymore because I no longer use apt-pinning. For some reason it just didn't occur to me though. I just assumed that the search and policy options worked differently. My bad for not firing up a sniffer to see exactly how they worked.
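The "dpkg -l | grep" habit described above is easy to illustrate. The snippet below simulates a few rows of dpkg -l output (the package rows are invented for the demonstration) and filters for installed packages, the "ii" status, whose line mentions php:

```shell
# Simulated "dpkg -l" rows (invented for this demo); the first column is the
# package state: "ii" = installed, "rc" = removed but config files remain.
printf 'ii  apache2  2.2.4-3  web server\nii  php5  5.2.3-1  scripting language\nrc  oldpkg  1.0  removed package\n' |
  grep '^ii.*php'
```

Only the php5 row survives the filter, which is how the poster checks installation status without touching apt's cache at all; a removed-but-not-purged package shows up as "rc" and is excluded by anchoring on "^ii".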
Re: oddity in apt-cache
Florian Kulzer wrote: On Mon, Apr 23, 2007 at 08:08:22 -0700, Freddy Freeloader wrote: I have three separate machines that have identical entries in /etc/apt/sources.list. All were updated and upgraded this morning as a result of troubleshooting this issue. On machine #1 I can apt-cache show nhfsstone and it returns the expected data on nhfsstone. On machines 2 and 3 it tells me that the nhfsstone package cannot be found. Running apt-cache search nfs on all machines yields similar results. Machine #1 has nhfsstone included in the result set. Machines 2 and 3 do not. All machines are pointed to: deb http://mirrors.kernel.org/debian sid main contrib non-free. This just makes absolutely no sense to me. I'm pointing all three to the same set of repositories and yet two machines cannot find a software package the other machine finds. I realize that all machines may not see exactly the same server every time, but to have a package being found on one machine and missing on two others seems very strange. Can anyone explain this anomaly to me? It seems to me that nhfsstone is no longer in Sid. My guess is that machine #1 has it only as a local package. What is the output of "apt-cache policy nhfsstone" on this machine? Thanks Florian, It is reported as installed, which it is. However, between the time it was installed and the present I have run "apt-get clean", so the package no longer exists in the local apt archives. Does this mean that apt-cache reads the local database + the server repositories rather than just the server repositories? I tend to see that as a bug, not a feature, as it leads people, such as me, to believe a package which was installed at some time in the past on the local machine still exists in the repositories when it has, in fact, been removed.
Re: oddity in apt-cache
Joe Hart wrote: -BEGIN PGP SIGNED MESSAGE- Hash: SHA1 Freddy Freeloader wrote: I have three separate machines that have identical entries in /etc/apt/sources.list. All were updated and upgraded this morning as a result of troubleshooting this issue. On machine #1 I can apt-cache show nhfsstone and it returns the expected data on nhfsstone. On machines 2 and 3 it tells me that the nhfsstone package cannot be found. Running apt-cache search nfs on all machines yields similar results. Machine #1 has nhfsstone included in the result set. Machines 2 and 3 do not. All machines are pointed to: deb http://mirrors.kernel.org/debian sid main contrib non-free. This just makes absolutely no sense to me. I'm pointing all three to the same set of repositories and yet two machines cannot find a software package the other machine finds. I realize that all machines may not see exactly the same server every time, but to have a package being found on one machine and missing on two others seems very strange. Can anyone explain this anomaly to me? You are correct, it makes no sense. Do all three machines have the same access point to the Internet? Is perhaps a firewall in between blocking something? That's the only reason I can think of that one machine will work and the others will not. Joe - -- Registerd Linux user #443289 at http://counter.li.org/ -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.6 (GNU/Linux) Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org iD8DBQFGLOHAiXBCVWpc5J4RAkd+AJ9t8L/UUafI0ERDgrzEV1o8cjng2wCgu4cY Gtq1Rr+Qsa5jyFQtUagyfx0= =bFl7 -END PGP SIGNATURE- Hi Joe, All three machines are on the same subnet, use the same router, the same gateway, and the same firewall. I couldn't believe this when I first found it so I ran apt-get update && apt-get upgrade on all three within a matter of 10 minutes total time and the symptom still persisted. Two of the machines started life as Sarge and have been up and running a minimum of 2 years.
The other is my laptop that had a fresh install of Etch and moved to Sid a month or so ago. The machine that can see nhfsstone is the oldest install of all the machines. Its install dates to some time in 2004.
oddity in apt-cache
I have three separate machines that have identical entries in /etc/apt/sources.list. All were updated and upgraded this morning as a result of troubleshooting this issue. On machine #1 I can apt-cache show nhfsstone and it returns the expected data on nhfsstone. On machines 2 and 3 it tells me that the nhfsstone package cannot be found. Running apt-cache search nfs on all machines yields similar results. Machine #1 has nhfsstone included in the result set. Machines 2 and 3 do not. All machines are pointed to: deb http://mirrors.kernel.org/debian sid main contrib non-free. This just makes absolutely no sense to me. I'm pointing all three to the same set of repositories and yet two machines cannot find a software package the other machine finds. I realize that all machines may not see exactly the same server every time, but to have a package found on one machine and missing on two others seems very strange. Can anyone explain this anomaly to me?
linux-image-2.6.20-1-686 and sky2 module
I have been working on a Toshiba Tecra A4-S211 and have run across a problem in relation to the linux-image package in the title of this post and the Marvell 88E8036 NIC. The sky2 module is loaded according to lsmod, and the system thinks the ethernet port is active as the lights actually blink on the physical port itself, but there isn't actually any traffic going in or out of the ethernet port. If I run ifup the dhcp discover packets are sent according to the messages printed in the bash shell, but nothing is actually sent over the network. The dhcp server sees no packets (I am sniffing traffic on it using Wireshark during the test), and after the default number of timeouts is reached the dhcp client says there was no response. Running ifup at this point I receive a message saying that eth1 is already configured. Running ifdown I get errors saying the network is not reachable. If I boot back into 2.6.18-4-686 the card works normally and acquires an IP address from the dhcp server. The entire installation is Etch except for the 2.6.20-1-686 kernel, which I installed to try to solve the ACPI problems and ATA hardware recognition problems I was having with the 2.6.18-4-686 kernel. It does solve the timeouts in recognizing one of the ATA devices and the shutdown problems I was having, but no networking capability at all is a rather severe price to pay for fixing those two irritating but relatively minor annoyances. Has anyone else seen this? Oh, one possible wild card in the issue is I have vmware-server installed on this machine and use bridged networking with it. I've Googled to see if there are any known reasons why this would create a networking problem for the host if I hadn't reconfigured vmware networking for this kernel yet, but couldn't find anything on it.
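One way to separate "the driver thinks it sent packets" from "packets actually left the box" is to watch the kernel's per-interface counters under /sys/class/net. This is only a diagnostic sketch; it uses the loopback interface so it runs anywhere, whereas on the Tecra the interface in question was eth1:

```shell
# Read the TX packet counter, generate one packet, read it again.
# A dead sky2 link would show this counter incrementing (the driver
# queued the frame) while a sniffer on the far end sees nothing -
# which matches the symptom described above.
iface=lo                                   # substitute eth1 on the laptop
stats=/sys/class/net/$iface/statistics
tx_before=$(cat "$stats/tx_packets")
ping -c 1 127.0.0.1 > /dev/null 2>&1 || true
tx_after=$(cat "$stats/tx_packets")
echo "tx_packets went from $tx_before to $tx_after"
```

Comparing these counters on both kernels, alongside the Wireshark capture, would show whether 2.6.20's sky2 is dropping frames before or after the driver's own accounting.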
Re: Noob question - best way to install software
Michael Pobega wrote: On Sun, Apr 15, 2007 at 07:13:07AM -0500, Dennis G. Wicks wrote: Michael Pobega wrote: On Sat, Apr 14, 2007 at 05:50:26PM -0400, Douglas Allan Tutty wrote: On Sat, Apr 14, 2007 at 02:36:40PM -0700, Adam Frank wrote: For beginners I'd definitely recommend apt-get, or even one of its GUI frontends like Synaptic. The only problem for a beginner using Synaptic is that if it is all they know, and X crashes, they have no experience to fall back on. I completely agree. Everyone should have some command line experience in case anything ever breaks X.org; it could save lots of data and time. I recommend aptitude for the new user, since apt-get doesn't track dependencies as well as aptitude does, and you don't have to remember separate commands (apt-* as opposed to aptitude). I have one recurring problem with aptitude. It keeps trying to remove gnome and everything related to it and a bunch of other stuff. Fortunately it takes up enough real estate on the screen that it is hard to miss and I just reply N and use apt for that task. Run "aptitude keep-all". That will keep aptitude from trying to get rid of what it sees as your "unused" packages, plus all their dependencies. Just so you know, you responded to me personally, and not to the mailing list. I'll CC it, hopefully it works (I'm not sure if it will break the thread) -- /\\ http://digital-haze.net/~pobega/ - My Debian site and blog _\_V Window Maker user, Debian enthusiast, Mutt lover
Kernel bug?
I installed Etch a couple of weeks ago on a Toshiba Tecra A4-S211 for a friend of mine. Today I had to install some software and get his new printer working. While I was at it I ran apt-get update && apt-get upgrade. The kernel was patched during this upgrade and now the laptop will no longer shut down on its own. What follows are the last lines in the shutdown messages, but even though the last line says "System halted" the monitor stays on and the laptop fails to power down. Deactivating swap...done. Will now halt. Synchronizing SCSI cache for disk sda: System halted. This laptop has an ATA hard drive in it, not SCSI or SATA, at least according to Toshiba's official documentation for this laptop. However, lspci identifies the hard drive controller as "00:1f.2 IDE interface: Intel Corporation 82801FBM (ICH6-M) SATA Controller (rev 04)". There is also a bug during system startup which causes timeouts/hangs. The system has a problem identifying drives. This has been a problem since the initial Etch installation, although the machine has always shut down correctly up to now.
Here is the relevant dmesg printout for what happens during boot: PCI: Discovered primary peer bus 04 [IRQ] PCI: Discovered primary peer bus 05 [IRQ] PCI: Using IRQ router PIIX/ICH [8086/2641] at 0000:00:1f.0 PCI: setting IRQ 11 as level-triggered PCI: Found IRQ 11 for device 0000:00:1f.2 PCI: Sharing IRQ 11 with 0000:00:1d.1 PCI: Found IRQ 11 for device 0000:06:06.0 PCI: Sharing IRQ 11 with 0000:00:1d.2 PCI: Sharing IRQ 11 with 0000:06:04.0 PCI: Sharing IRQ 11 with 0000:06:06.4 PCI: Cannot allocate resource region 0 of device 0000:06:06.0 PCI: Bridge: 0000:00:01.0 IO window: c000-dfff MEM window: c000-cfff PREFETCH window: 9000-9fff PCI: Bridge: 0000:00:1c.0 IO window: a000-bfff MEM window: bc00-bfff PREFETCH window: 8c00-8fff PCI: Bridge: 0000:00:1c.1 IO window: 8000-9fff MEM window: b800-bbff PREFETCH window: 8800-8bff PCI: Bus 7, cardbus bridge: 0000:06:06.0 IO window: 6000-60ff IO window: 6400-64ff PREFETCH window: 8000-81ff MEM window: b200-b3ff PCI: Bridge: 0000:00:1e.0 IO window: 6000-7fff MEM window: b000-b7ff PREFETCH window: 8000-87ff PCI: Found IRQ 11 for device 0000:00:01.0 PCI: Sharing IRQ 11 with 0000:00:1c.1 PCI: Sharing IRQ 11 with 0000:00:1d.3 PCI: Sharing IRQ 11 with 0000:01:00.0 PCI: Sharing IRQ 11 with 0000:02:00.0 PCI: Setting latency timer of device 0000:00:01.0 to 64 PCI: setting IRQ 10 as level-triggered PCI: Found IRQ 10 for device 0000:00:1c.0 PCI: Sharing IRQ 10 with 0000:00:1e.2 PCI: Setting latency timer of device 0000:00:1c.0 to 64 PCI: Found IRQ 11 for device 0000:00:1c.1 PCI: Sharing IRQ 11 with 0000:00:01.0 PCI: Sharing IRQ 11 with 0000:00:1d.3 PCI: Sharing IRQ 11 with 0000:01:00.0 PCI: Sharing IRQ 11 with 0000:02:00.0 PCI: Setting latency timer of device 0000:00:1c.1 to 64 PCI: Setting latency timer of device 0000:00:1e.0 to 64 PCI: Found IRQ 11 for device 0000:06:06.0 PCI: Sharing IRQ 11 with 0000:00:1d.2 PCI: Sharing IRQ 11 with 0000:06:04.0 PCI: Sharing IRQ 11 with 0000:06:06.4 NET: Registered protocol family 2 IP route cache hash table entries: 4096 (order: 2, 16384 bytes) TCP established hash table entries: 16384 (order: 5, 131072 bytes) TCP bind hash table entries: 8192 (order: 4, 65536 bytes) TCP: Hash tables configured (established 16384 bind 8192) TCP reno registered audit: initializing netlink socket (disabled) audit(1176121167.776:1): initialized VFS: Disk quotas dquot_6.5.1 Dquot-cache hash table entries: 1024 (order 0, 4096 bytes) Initializing Cryptographic API io scheduler noop registered io scheduler anticipatory registered io scheduler deadline registered io scheduler cfq registered (default) PCI: Setting latency timer of device 0000:00:01.0 to 64 assign_interrupt_mode Found MSI capability Allocate Port Service[0000:00:01.0:pcie00] PCI: Setting latency timer of device 0000:00:1c.0 to 64 assign_interrupt_mode Found MSI capability Allocate Port Service[0000:00:1c.0:pcie00] Allocate Port Service[0000:00:1c.0:pcie02] PCI: Setting latency timer of device 0000:00:1c.1 to 64 assign_interrupt_mode Found MSI capability Allocate Port Service[0000:00:1c.1:pcie00] Allocate Port Service[0000:00:1c.1:pcie02] isapnp: Scanning for PnP cards... isapnp: No Plug & Play device found Serial: 8250/16550 driver $Revision: 1.90 $ 4 ports, IRQ sharing enabled serial8250: ttyS0 at I/O 0x3f8 (irq = 4) is a 16550A PCI: setting IRQ 5 as level-triggered PCI: Found IRQ 5 for device 0000:00:1e.3 PCI: Sharing IRQ 5 with 0000:06:06.3 RAMDISK driver initialized: 16 RAM disks of 8192K size 1024 blocksize PNP: No PS/2 controller found. Probing ports directly. i8042.c: Detected active multiplexing controller, rev 1.1. serio: i8042 AUX0 port at 0x60,0x64 irq 12 serio: i8042 AUX1 port at 0x60,0x64 irq 12 serio: i8042 AUX2 port at 0x60,0x64 irq 12 serio: i8042 AUX3 port at 0x60,0x64 irq
Re: Etch, nfs, and AIX v3.2
Greg Folkert wrote: On Thu, 2007-04-05 at 13:28 -0700, Freddy Freeloader wrote: I'm having some problems getting 4 nfs shares to mount reliably on an Etch box that is importing the shares from an RS/6000 AIX v3.2 server. It takes about a minute to mount each share at boot, and about half that time to manually mount one of the shares. I get an "RPC: timed out" error on the client if a manual mount fails. Do I need to specify the version, udp, etc. on the client, as it seems that nfs v3 wasn't released until long after this server went into production? The same client will mount nfs shares from a Debian box on the same subnet as the AIX machine almost instantly. AIX v3.2 uses nfsv2, if I am not mistaken. An example line for /etc/fstab: remo:/export/aix /mnt/aix/ nfs defaults,vers=2 0 0 BTW, with version 2 of NFS, both machines must be able to resolve each other. Also, since nfsv2 doesn't support rsize and wsize other than the default, don't change them. Thanks Greg. I figured I'd have to do something about the version number. I just wasn't sure.
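Spelling out Greg's suggestion as both a one-off mount and an fstab entry (the server name "remo" and paths are his example values, and note the option group is spelled "defaults" with an s):

```
# one-off manual mount, forcing NFSv2 over UDP
mount -t nfs -o vers=2,udp remo:/export/aix /mnt/aix

# equivalent /etc/fstab entry
remo:/export/aix  /mnt/aix  nfs  defaults,vers=2,udp  0  0
```

The udp option is an assumption on my part: NFSv2 implementations of that vintage commonly spoke only UDP, so forcing it may avoid a TCP negotiation timeout against the AIX 3.2 server.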
Etch, nfs, and AIX v3.2
I'm having some problems getting 4 nfs shares to mount reliably on an Etch box that is importing the shares from an RS/6000 AIX v3.2 server. It takes about a minute to mount each share at boot, and about half that time to manually mount one of the shares. I get an "RPC: timed out" error on the client if a manual mount fails. Do I need to specify the version, udp, etc. on the client, as it seems that nfs v3 wasn't released until long after this server went into production? The same client will mount nfs shares from a Debian box on the same subnet as the AIX machine almost instantly.
Re: "I do consider Ubuntu to be Debian" , Ian Murdock
anoop aryal wrote: On Thursday 29 March 2007 14:55, Steve Lamb wrote: anoop aryal wrote: i'll take etch when it's good and ready and not a day before. i'd rather have a working OS, free of bugs, late than a half baked, bug-ridden POS, on time. Then you'll be waiting forever because even Debian does not ship stable releases "free" of bugs. touche. but my main argument is that i value debian prioritizing bug-squashing over meeting release dates. and that if debian is guilty of being cavalier about missing release dates, others are guilty of being cavalier about knowingly releasing bug-riddled software. i switched from windows to redhat (around '96), then from redhat to fedora, then from fedora to debian. now that i'm here, i don't see myself switching to anything else. and that's *because* debian hasn't given in to the temptation of always having the latest and the greatest software and *because* when the new stable is out, i know there'll be minimum surprises. and that's because debian hasn't given in to meeting artificial release dates. as a software dev who does sysadmin only to set up an environment to run the software we develop, i really value the slow/steady releases because otherwise, we'd be on a hamster wheel of upgrades and have to constantly fix our apps to work with the newer system the whole time and not have the time to develop anything new. debian gives us a nice stable target to hit. i do use testing/unstable on my personal machines where i don't mind the occasional breakage - i see that as a chance to send in the occasional bug report to help out. and as a way to prepare for what may be coming down the pipe in the next release. but debian - the way it is - is exactly why i'm using it on servers. so, yeah, given everything else stays the same, i'd take a firmer release date. but not at the expense of getting software that has critical bugs that could be fixed if the release date was moved.
after all, if i was really itching for the newer software, all i'd have to do is 'sed -i "s/sarge/etch/g" /etc/apt/sources.list'. not like debian is stopping me from getting the software before its magical "release" date. just wanted to make sure that the powers that be also hear from people who appreciate the way things are done in debian. I agree. I would much rather have Debian released when it is ready to be released than released when an arbitrary time line has passed. I'll take stability and quality over "latest and greatest" every time in a server OS. I run Sid on my laptop and desktop, but not on any servers I install.
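anoop's one-liner, exercised here on a scratch file rather than the real /etc/apt/sources.list (the mirror and components are illustrative):

```shell
# Make a throwaway sources.list, run the quoted sed, and confirm every
# "sarge" became "etch".
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
deb http://mirrors.kernel.org/debian sarge main contrib non-free
deb-src http://mirrors.kernel.org/debian sarge main
EOF
sed -i 's/sarge/etch/g' "$tmp"
cat "$tmp"
```

On a real system this would of course be followed by apt-get update and a dist-upgrade; the sed alone only changes which suite apt looks at.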
Re: Woohooo! Dell + Linux
Max Hyre wrote: Dear Debianistas: John Hasler wrote: The manufacturer may be paying Microsoft a fixed fee for every machine he ships rather than for every copy of Microsoft Windows he ships. This makes sense when nearly every machine has Microsoft Windows installed. Precisely. But the sense is inverted. Nearly every machine has a copy of MS Windows installed because the manufacturer pays a fixed fee for every machine shipped. When this whole thing started to snowball (as in when MS had gotten a solid foothold by selling MS-DOS for lots less than the P-system or CP/M-86) MS made an offer no one in her right mind could refuse. Their per-hardware-unit-sold license was so much cheaper than the per-OS-copy-sold license that it made no sense to do anything else. Thus, any system sent out already had the cost of MS-DOS (later MS Windows) built into its price. Hence, remarks about the ``Microsoft Tax''. Once this happens, adding any other OS, no matter what (>= 0) its price, means more effort for the manufacturer. It raises the cost of the sale, and Linux is frozen out by economics. Q.E.D. I see you didn't pay much attention to the Comes vs Microsoft trial. The plaintiffs had a former CEO from one of the largest OEMs in Europe testify as to how MS operated. They required the OEM to pay MS whether or not they installed a MS OS on a system they sold. It's called the per-processor license and that's what MS used to run DR-DOS out of business. If an OEM wouldn't agree to a per-processor license then MS doubled or tripled the price of their OS even if the OEM sold the same number of PCs with a MS OS installed. Thus it wasn't a licensing scheme that was advantageous to the OEMs, it was a licensing scheme they either had to accept or get out of the business. It was a gun to the head of the OEMs, and MS's agreements with the OEMs still are.
(Solved) Re: sendmail hostname configured as an empty string
Freddy Freeloader wrote: Jeff D wrote: On Fri, 23 Mar 2007, Freddy Freeloader wrote: Roberto C. Sánchez wrote: On Fri, Mar 23, 2007 at 09:53:57AM -0700, Freddy Freeloader wrote: Not my choice. I'm setting this up for someone else who has hard-coded sendmail into his apps and is afraid that using exim4 instead of sendmail will break them. I tried to get him to use exim4 but he wasn't about to change his mind. Are you aware that because of this (people hard coding to sendmail) both Exim and Postfix provide a /usr/sbin/sendmail (or was it /usr/lib/sendmail?) binary that is perfectly compatible (at the command-line option level) with sendmail? I have used apps that were hard coded to use sendmail with postfix over the past few years and never once encountered a problem. Regards, -Roberto How many times do I have to tell you this is not my decision to make? If it were up to me I'd use Exim4. I've used it successfully in the past with applications that specifically called for sendmail, but the person I'm doing this for has required me to use sendmail. Think I'd be knocking my head against the wall when there is something much easier to use if I had a choice? Anyway, thanks for nothing. it goes in sendmail.mc --- Human beings were created by water to transport it uphill. Thank you. That's what I figured, but was unsure. Well, the define confDOMAIN_NAME addition to sendmail.mc didn't fix the problem. What I finally chased this down to was a problem with HOSTSTATUSDIRECTORY=path. The host_status file was missing from the /var/lib/sendmail directory. I "touched" the file, changed the permissions so both root and the smmsp group had rw permissions, ran sendmailconfig, and the problem with having an undefined host name was gone.
Re: sendmail hostname configured as an empty string
Jeff D wrote: On Fri, 23 Mar 2007, Freddy Freeloader wrote: Roberto C. Sánchez wrote: On Fri, Mar 23, 2007 at 09:53:57AM -0700, Freddy Freeloader wrote: Not my choice. I'm setting this up for someone else who has hard-coded sendmail into his apps and is afraid that using exim4 instead of sendmail will break them. I tried to get him to use exim4 but he wasn't about to change his mind. Are you aware that because of this (people hard coding to sendmail) both Exim and Postfix provide a /usr/sbin/sendmail (or was it /usr/lib/sendmail?) binary that is perfectly compatible (at the command-line option level) with sendmail? I have used apps that were hard coded to use sendmail with postfix over the past few years and never once encountered a problem. Regards, -Roberto How many times do I have to tell you this is not my decision to make? If it were up to me I'd use Exim4. I've used it successfully in the past with applications that specifically called for sendmail, but the person I'm doing this for has required me to use sendmail. Think I'd be knocking my head against the wall when there is something much easier to use if I had a choice? Anyway, thanks for nothing. it goes in sendmail.mc --- Human beings were created by water to transport it uphill. Thank you. That's what I figured, but was unsure.
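For the archives, the usual m4 spelling of the define Jeff is pointing at is below. The macro name is from the stock sendmail cf documentation, the hostname value is a placeholder, and as the "(Solved)" message in this thread notes, this define alone did not cure the empty-hostname symptom:

```
dnl in /etc/mail/sendmail.mc; regenerate the config with sendmailconfig
define(`confDOMAIN_NAME', `host.example.com')dnl
```

confDOMAIN_NAME hard-codes sendmail's idea of its canonical hostname ($j) instead of letting it be derived from DNS, which is why it is the natural first thing to try when the hostname comes up empty.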
Re: sendmail hostname configured as an empty string
Roberto C. Sánchez wrote: On Fri, Mar 23, 2007 at 09:53:57AM -0700, Freddy Freeloader wrote: Not my choice. I'm setting this up for someone else who has hard-coded sendmail into his apps and is afraid that using exim4 instead of sendmail will break them. I tried to get him to use exim4 but he wasn't about to change his mind. Are you aware that because of this (people hard coding to sendmail) both Exim and Postfix provide a /usr/sbin/sendmail (or was it /usr/lib/sendmail?) binary that is perfectly compatible (at the command-line option level) with sendmail? I have used apps that were hard coded to use sendmail with postfix over the past few years and never once encountered a problem. Regards, -Roberto How many times do I have to tell you this is not my decision to make? If it were up to me I'd use Exim4. I've used it successfully in the past with applications that specifically called for sendmail, but the person I'm doing this for has required me to use sendmail. Think I'd be knocking my head against the wall when there is something much easier to use if I had a choice? Anyway, thanks for nothing.
Re: sendmail hostname configured as an empty string
Roberto C. Sánchez wrote: On Fri, Mar 23, 2007 at 08:48:19AM -0700, Freddy Freeloader wrote: I see this has been asked before, but being the total sendmail newbie that I am, and that Debian uses sendmailconfig to configure sendmail, I am not quite sure how to proceed. Out of curiosity, if you are a total sendmail newbie, why not just try something like Exim or Postfix? Regards, -Roberto Not my choice. I'm setting this up for someone else who has hard-coded sendmail into his apps and is afraid that using exim4 instead of sendmail will break them. I tried to get him to use exim4 but he wasn't about to change his mind.