Re: Malformed link from Planet GNU?
Dora Scilipoti wrote:
> Bob writes:
> > I think this must be on the planet.gnu.org side of things.

It is. Found it here.

https://git.savannah.gnu.org/cgit/gnues/planet-infra.git/tree/cron/17gnuprojects

The earliest version of it in version control already had it producing the atom feed rather than the trackback link.

I forgot to say in my last note that there is a trackback link at the very bottom of each article: it's the date link. Hover over the date and you will find that it links back.

I am still tracking down the current caretaker of this aggregator. Now that we are into the weekend it will probably wait until next week. However this has been in place since 2018, it seems.

> And FWIW, I noticed that Savannah feeds are not listed in config.ini.tmpl.
> https://git.savannah.gnu.org/cgit/www/planet-config.git/tree/config/config.ini.tmpl

The Savannah feeds are dynamically generated. There is a groups.tsv resource link that returns the list. This is then appended to the end of the template.

Bob
Re: Malformed link from Planet GNU?
Hi Dora!

Dora Scilipoti wrote:
> submitting a news item from Savannah [1] goes to Planet GNU [2]. But
> there is no link back to [1] from Planet GNU. Instead, it's a prompt to
> download a www.atom file.

I think this must be on the planet.gnu.org side of things. I don't have any access to it. But I scanned through the Internet Archive's copy of things and I see that this was the last time it was "working". Note the "October 07, 2017" "health @ Savannah" entry.

https://web.archive.org/web/20171014141946/https://planet.gnu.org/

And the next snapshot shows the atom feed. Look for the "March 11, 2018" "automake @ Savannah" entry and it is an atom feed then.

https://web.archive.org/web/20180313234855/http://planet.gnu.org/

What Savannah provides is a file with a list of projects. The aggregator on planet then polls the projects' news feeds, scrapes out the new news articles, and posts them to planet.

> Is this intentional? I would expect a link back to the original online
> page instead of the file.

This does feel like an unintentional change. I can't guarantee it without looking at the planet side aggregator, but I think this must have been a change on the aggregator side of things. The news article rendering on the Savannah web UI side looks the same to my eye, but I don't know exactly what the aggregator will be scraping from it. It's possible something changed in there and the aggregator became confused and started grabbing a different link.

I don't have access to the planet0p VM. I don't know who does. Pretty sure Sylvain brought the feed back online again after an absence in 2016, judging by a note I see. I will ask sysadmin about it tomorrow.

Bob
Re: cvs login
Hi Peter,

Peter Frazier wrote:
> according to:
>
> https://savannah.gnu.org/cvs/?group=www
>
> ...there is an anonymous login for gnu.org cvs, but logging-in with the
> anonymous handle fails.

Thank you for reporting this problem. I am able to reproduce it. But it isn't related to anonymous logins, as those have actually never existed. It's about the command line usage documented on that page, which is not correct. The documentation on that page is broken: there is a space now added to the line that should not be there.

> "cvs -t -d :pserver:anonym...@cvs.savannah.gnu.org:sources/www co ccvs login"

That's not quite what the page says to do though. The page says this.

cvs -z3 -d:pserver:anonym...@cvs.savannah.gnu.org: /sources/www co www

Which has a space added in a breaking place. Remove that space. The following line works.

cvs -z3 -d:pserver:anonym...@cvs.savannah.gnu.org:/sources/www co www

Try that and it should be working for you. Thank you again for reporting this problem! :-)

Bob
Re: urgent: request for removal from appearing in my mail-list.
Takeda Shingen wrote:
> guys guys... sorry, i didn't expected you were to add my email to a
> search engine-crawled email list, i would like request that every
> email you might have related with this email-address i'm writing into
> might be deleted, including this one as well if possible
> ASAP. specially those included in the savannah-list if possible, from
> now on. thanks!

I was rather hoping that someone who sets policy might reply. I am only a person who helps and must follow policy. There is a policy of not modifying web archives. See the "Mailing list archive policy" here.

https://lists.gnu.org/

Also, as to your specific request, I can't even tell from your message what postings you are talking about, other than these here asking to delete them. These are the only ones I am seeing. I think if you had not asked to have your email address deleted, then your email address would not have appeared anywhere.

As a practical matter, trying to suppress information always calls attention to it, creating the opposite effect from the one you want. This is known as the Streisand Effect. You can read about the phenomenon here.

https://en.wikipedia.org/wiki/Streisand_effect

This was very well described by John Oliver in his satirical news show Last Week Tonight from May 2014.

Right To Be Forgotten: Last Week Tonight with John Oliver (HBO)
https://redirect.invidious.io/watch?v=r-ERajkMXw0=4m04s

One of the problems people face trying to delete information which has been distributed to the public is that the information is *distributed*. It's not all in one place. It is in your mailbox. It's in my mailbox. It's in everyone's mailboxes! And some of those mailboxes are publicly available on the web. It's everywhere! It is simply not possible to delete public information from the public, because the public is in very many different places all over the world. It is impossible, as a matter of practicality, to reach out silently to all of them and ask thousands of places to delete history. Trying to do so would create a lot of new history that would itself need to be deleted.

My best advice is to let sleeping dogs lie and do nothing.

Bob
Re: Spam message when using CVS for webpages
Ineiev wrote:
> Savane is the free software hosting system savannah.gnu.org runs.
>
> sv_membersh is the restricted shell used as the login shell for Savane users
> when they connect via SSH.
>
> Savane released under the AGPL; offering the corresponding source code
> is a requirement of the AGPL.

I spent some time looking at this issue and my assessment is that sv_membersh is only a peripheral part of Savannah at best. It isn't needed for Savannah to operate. It's a security gate that we use to protect the host from potentially malicious activity or potentially accidental harm. It does not need to be Savane software and might be any suitable component program.

Even though Savannah as a whole is distributed under the AGPL, Savannah makes use of many programs which are licensed under other licenses, such as the various other GPL versions and other permissive licenses. That the whole of Savannah is available under the AGPL does not create a requirement that every component used in Savannah be forced into the AGPL. For example, Savannah uses cron. If such a requirement existed, then it would be required to re-license cron from GPLv2+ to the AGPL. Savannah uses git, and git is licensed under the GPLv2. Savannah uses Subversion, which is licensed under the Apache-2.0 license. And so on and so forth.

Simply using these components does not require that the license always be advertised. For example, GNU ls does not emit its license upon every invocation. That would interfere with its primary function. But ls will emit its license information when asked for it with ls --version.

I join our fellow colleagues in asking to remove this license advertisement, as it is harmful to the primary function of the site.

Thanks!
Bob
Re: Moving an existing project from SourceForge to Savannah.
Ineiev wrote:
> Alan Mackenzie wrote:
> > I would thus like to move the project from SourceForge to Savannah. May
> > I take it this would be acceptable and welcomed?
>
> Yes; it's nice to see software migrating to more user-respecting
> forges,
>
> https://www.gnu.org/software/repo-criteria-evaluation.html

+1! It will be good to see you using Savannah. We are happy to help with the process.

> > Looking at the Savannah site, there are a couple of things which confuse
> > me. I couldn't find a definition of what is meant by "group". It seems
> > to mean the name of a project (in my case, "CC Mode") and/or the Linux
> > file-system group name under which project files will be stored
> > ("cc-mode").
>
> The "project" is a type of group; other group types hosted on Savannah
> include GNU User Groups, www.gnu.org portions and www.gnu.org translation
> teams.

In deep detail there are project types defined in the SQL database, and there are Unix groups defined in the database that are real Unix groups to the system for file access permissions. Therefore "group" in that case means both things. But you can think of it as the classic Unix group and that will be accurate. Files are stored on disk accessible by file group permission.

> > I would also like to preserve the project's mailing list, if possible.

+1! Preserving history is important.

> > I have a copy of posts going back to 2001 on my own machine, I don't
> > know if it will be possible to extract a more complete copy from
> > SourceForge. Do you see any problems, here? Currently, the main
> > mailing address for this list is bug-cc-m...@gnu.org, and the gnu server
> > forwards the mail to the SourceForge address. I foresee this address
> > remaining the main address for the list, relocated back to Savannah.
>
> I think you can use your old mailing list or migrate to lists.gnu.org.

I presume without looking that the old mailing list was hosted on SourceForge. And from past experience there I know they don't make accessing the files easy. In which case it would be easier to use your own archive of the mailing list. Off the top of my head I don't know the recipe for the mailing list archive import, but I generally know the shape of it and would work it out when the time comes.

> > What about old releases? How much point is there, trying to preserve
> > these? SourceForge still has releases going back around 20 years, to
> > release 5.26. Current (three years old) is 5.35. They do not take up
> > much space (around 700 kByte each). The older releases must be presumed
> > lost.
>
> You'll be able to upload them to Savannah download area.

+1

Bob
cgit syntax highlight request
Thanks to everyone who added comments to this topic. It's been a week for people to think about things and make comments. (And a week for me to be completely consumed by my own tasks.)

The unscientific survey does lean toward adding color syntax highlighting. Therefore I have enabled it on the cgit interface on at least an experimental basis. It is active now. This uses the "highlight" utility, which identifies file types by file suffix.

http://git.savannah.gnu.org/cgit/coreutils.git/tree/src/cat.c

Let me say that I would prefer not to colorize, just like the people who commented that they preferred no color. Adding this is definitely not the most natural thing for me to be doing. But I think highlight does a pretty good job of enhancing the syntax without being overpowering. And, very importantly, in both dark and light themes.

For all of us in the no-color camp I can point out that gitweb is still no-color and still available for us to use. Also my take on the people who voted this way is that we might all be people more comfortable working in our own sandbox where we have our own environments set up. The web interface is perhaps the shared commons for people not using their own sandbox.

http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f=src/cat.c

In any case let's take the highlighting out for a shakedown cruise and see how it works for all of us. If you love it then please make a comment to that effect. If you hate it then also make a comment to that effect.

This might be a good opportunity for someone to get involved with cgit and develop a nice way to allow users to choose color highlighting or not. I know I would use that control knob.

Bob
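For anyone curious how this is wired up: cgit does highlighting through a source filter configured in cgitrc. A sketch of the relevant line, assuming the filter path from the Debian cgit package (the exact production path may differ):

```
# cgitrc -- illustrative sketch; the filter path below is the Debian
# packaging default and may not match the exact production configuration.
source-filter=/usr/lib/cgit/filters/syntax-highlighting.sh
```

The shipped filter script pipes the blob content through the standalone "highlight" utility, which chooses the language by file suffix, exactly as described above.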
cgit syntax highlight request
Savannah Users,

A user on IRC (daviid) has requested that cgit on Savannah be modified to perform syntax highlighting by default on the various source display pages.

I did some research into this topic of cgit syntax highlighting. It seems there are two popular ways to enable syntax highlighting in cgit. One uses the Python "Pygments" library and one uses the standalone "highlight" utility. On IRC there were various comments about Pygments and the previous security vulnerabilities it has been through. The other option, "highlight", I note is packaged for Debian, and therefore if any security vulnerabilities were found the security channel would normally provide a patch, which would be quickly installed on our systems. Therefore in my opinion using "highlight" would be the best option. I tried it with both dark and light themes and it seems acceptable in either. Which is important to me personally, as I almost always use a dark theme when possible.

Personally I rather prefer the non-colorized display. Colors are one of those bike-shed items that everyone wants to be different. Therefore the common ground is often the no-color option. I much prefer if people clone to their own sandbox; then they can use their own preferences for all bike-shed things like colors and fonts. But this is a shared resource and everyone is using it as a commons area. I will bring the topic up for discussion.

What is the opinion of the group at this time? Should we enable syntax color highlighting in cgit by default? Should we leave it as it is now, without? Should we try it for a time period and see how it is received? What would the users like to see here?

Bob
Re: DNS issue affecting gnu.org (and subdomains)
Eli Zaretskii wrote:
> > Ar Rakin wrote:
> > $ host gnu.org
> > ;; connection timed out; no servers could be reached
>
> You will find the information here:
>
> https://hostux.social/@fsfstatus
>
> That place is always good to look at when such issues occur.

+1 for the https://hostux.social/@fsfstatus status page. The FSF sysadmins post information there (sometimes terse) when there are problems seen that affect systems. It's something everyone should bookmark where they can find it easily.

> $ host gnu.org 8.8.8.8
> [...]
> Host gnu.org not found: 2(SERVFAIL)
>
> Nope, Google's resolver can't resolve gnu.org either.

The authoritative nameservers (a fancy title for the upstream ones) are getting DDoS'd off the net. Which means that all resolution by downstream nameservers, even Google's, is timing out. This is compounded by the very short 300 second TTL on the gnu.org records, which means that even if a lookup is successful it can only be cached for five minutes and is then discarded. Upon which it needs to be looked up again, and the query has to fight its way through the DDoS in a mixed martial arts cage fight arena to get the data again.

> How about, making the same queries on a VPS in the US:
>
> $ host gnu.org
> gnu.org has address 209.51.188.116
> gnu.org has IPv6 address 2001:470:142:5::116
> Host gnu.org not found: 2(SERVFAIL)
>
> Hmm, that worked, just, but it was very slow (~ 8 secs).

The nameservers are overwhelmed, making them slow to respond. And then additionally I am seeing very high packet loss across the network into the Boston machines. That high packet loss means retries at the network protocol level, making things slow. I have seen 30-45 seconds on average here looking up DNS for a while.

> $ host gnu.org 8.8.8.8
> [...]
> Host gnu.org not found: 2(SERVFAIL)
>
> Google's resolver fails again.

There is really nothing special about the Google resolver. If the upstream ns*.gnu.org nameservers can't receive and can't send data then gnu.org names cannot be resolved.

> I fetch from git.sv.gnu.org every 30 minutes and the fetch began to
> fail two days ago (on 23rd March) at around 10pm GMT. It has been
> failing much more often than not since then.

Yes. That's about when the attack started. I assume it is an attack; that's what sysadmin said about it. I have no special ability to observe this particular attack and am suffering through the packet loss of it along with the rest of you.

Bob
Re: How to upgrade automake to 1.16.5?
Andreas Schwab wrote:
> Bob Proulx wrote:
> > Perhaps you could try the gpg key download again? It should work. It
> > worked for me. If not then I would try one of the other key servers.
> >
> > https://gnupg.org/faq/gnupg-faq.html#new_user_default_keyserver
>
> Unfortunately, the recommendation on that page is no longer appropriate,
> since pool.sks-keyservers.net is defunct now.

Hmm... Bummer! Thanks for that update.

Bob
Re: How to upgrade automake to 1.16.5?
Hello afernandez,

You have reached the Savannah Users mailing list. This is a community of people who use the Savannah Free Software Forge. The Savannah site hosts hundreds of free software projects. Both autoconf and automake are among those hosted there. But this list is more about how to use the software forge site itself.

afernandez wrote:
> I was finally able to upgrade automake by using the gzipped files available
> at https://ftp.gnu.org/gnu/automake/.

For help about autoconf and automake the best places to write are the mailing lists associated with autoconf and automake.

https://www.gnu.org/software/autoconf/
https://www.gnu.org/software/automake/

afernandez wrote:
> In a Ubuntu 20.04 system, I had to upgrade autoconf to 2.71, which then
> required upgrading automake to 1.16.5. I'm following the instruction on the
> website

Which web site?

> but getting the following errors:
> wget https://ftp.gnu.org/gnu/automake/automake-1.16.5.tar.xz
> wget https://ftp.gnu.org/gnu/automake/automake-1.16.5.tar.xz.sig
> gpg --verify automake-1.16.5.tar.xz.sig
> gpg: directory '/home/ubuntu/.gnupg' created
> gpg: keybox '/home/ubuntu/.gnupg/pubring.kbx' created
> gpg: assuming signed data in 'automake-1.16.5.tar.xz'
> gpg: Signature made Mon Oct 4 03:23:30 2021 UTC
> gpg: using RSA key 155D3FC500C834486D1EEA677FD9FCCB000B
> gpg: Can't check signature: No public key
> gpg --keyserver keys.gnupg.net --recv-keys 7FD9FCCB000B
> gpg: keyserver receive failed: Server indicated a failure
> Thanks.

I was able to import both of those keys without any problem.

rwp@turmoil:~/tmp/junk/autotools$ gpg --keyserver keys.gnupg.net --recv-keys 91FCC32B6769AA64
gpg: key 0x91FCC32B6769AA64: public key "Zack Weinberg " imported
gpg: Total number processed: 1
gpg: imported: 1

rwp@turmoil:~/tmp/junk/autotools$ gpg --keyserver keys.gnupg.net --recv-keys 7FD9FCCB000B
gpg: key 0x7FD9FCCB000B: "Jim Meyering " 1 new user ID
gpg: key 0x7FD9FCCB000B: "Jim Meyering " 3 new signatures
gpg: Total number processed: 1
gpg: new user IDs: 1
gpg: new signatures: 3

And with that I was able to get a "Good signature" for both of those compressed sources. The Good signature verifies that the sources are the ones that were signed.

Note that unless you have a web of trust between you and the signer, gpg will warn that the key is not trusted. That's an unfortunate effect of the trust system. But if the signature verified and you obtained the key from a trusted source, then without a web of trust that is the best that can be done, as far as I know.

Perhaps you could try the gpg key download again? It should work. It worked for me. If not then I would try one of the other key servers.

https://gnupg.org/faq/gnupg-faq.html#new_user_default_keyserver

Hope this helps! :-)

Bob
Savannah Upgrades
Savannah Users,

This is an FYI about two system upgrades today.

The SQL database was upgraded today from an older MySQL database server to a newer MariaDB database server. That went very well, such that we decided not to stop there. We also upgraded the Savannah web UI frontend server to the latest release of Trisquel 10.

And then came in a report that git push failed due to permission problems! Just when I had thought things had gone so well. I didn't notice because my test for it was flawed. The problem was that the back end storage server had been reconfigured for the new SQL DB address but the running rpc.mountd was still hanging onto the previous address. Restarting it solved the problem.

The goal was to make these upgrades as invisible as possible to the users of the services. I apologize for any inconvenience. If you experience any continuing problems please let us know!

Bob
Re: "Register New Project" always goes to the login page
> >> >> Hi, currently on both sv.gnu.org and sv.nongnu.org, after logging in,
> >> >> if I click on "Register New Project" it always goes to the login page
> >> >> and essentially asks the user to re-login. So "registering new
> >> >> project" does not work.

If it is asking for a login then something has invalidated the web browser session cookie.

> >> What happens if you try it in a Firefox Incognito window (File > New
> >> Private Window)?
>
> Then registering new project works

That makes me suspect a plugin that is causing problems, because most plugins are not enabled in the incognito private browsing window. Since that worked, I think it indicates that one of the browser plugins is causing the problem.

Not that this means anything, but I mostly use Firefox and I don't have any trouble browsing the Savannah web UI site.

Bob
Re: Email notifications from bug tracker
Eli Zaretskii wrote:
> There were massive problems on the info-gnu-emacs mailing list
> yesterday.

Mailman for GNU and FSF is mostly completely separate from anything Savannah. Savannah's web UI has been scripted to interact with the mailman server lists.gnu.org to create mailing lists and to reset mailing list passwords. But that's about it. This Savannah frontend upgrade this weekend is completely independent of lists.gnu.org, the main mail relay eggs.gnu.org, and the other mailing lists.

Since I am speaking I will add more background. The listhelper team moderates for spam discards and can help with subscription and unsubscription, but it does not have administrative permission on lists.gnu.org nor eggs.gnu.org, as that is reserved for FSF sysadmin. It's a little hard for those not in the day to day operations to know what listhelpers can and cannot do. Mostly we peek and see if we have permissions or not. If we do then we can do something. If not then we pass it along to FSF sysadmin.

> I wrote to mail...@gnu.org about that, but didn't yet receive any
> replies.

FSF sysadmin usually does not work on the weekend. Especially if they have been in a long-hours push for other tasks; there have been several of those recently. I imagine right now they are trying to recharge themselves on the weekends. But they do work normal working hours Monday through Friday, and so once they "return to the office", virtually in this case, they will start working through RT tickets and responding to problem report emails.

Bob
Re: Email notifications from bug tracker
> I will get Postfix configured for user address mapping, DKIM
> signatures, transport, and delivery. That will fix the future sending
> of messages from the web UI which includes bug tickets, password
> recovery, and other administrative uses. I don't think past failed
> messages can be re-sent in any meaningful way however.

I believe I have the system configured for email now. This means that messages generated from the web UI from now forward should be okay. They will have the correct From address and will be supported with DKIM and such.

Previous messages that have already failed will be unlikely to be possible to regenerate correctly. Maybe. I will look at it tomorrow. But I think it unlikely.

Bob
Re: Email notifications from bug tracker
> It looks like savannah does send email notifications currently.
> See also this report with some more information:
> https://savannah.nongnu.org/support/index.php?110658
>
> I'm also sending this message in case the report doesn't get noticed
> because of the missing notifications. Sorry for the noise in case
> you already saw the other report.

Thanks for the report. I appreciate the help in keeping an eye on the system!

I see that on Friday Ineiev upgraded the Savannah web UI system from frontend1 to frontend2, bringing things forward to the next stable Trisquel OS release. This is great! Definitely needed. However in that process I see that the email system was not fully configured. That's the cause of the current email not being processed correctly.

I will get Postfix configured for user address mapping, DKIM signatures, transport, and delivery. That will fix the future sending of messages from the web UI, which includes bug tickets, password recovery, and other administrative uses. I don't think past failed messages can be re-sent in any meaningful way, however.

Thanks again for the problem report!

Bob
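For the record, the pieces named above map onto ordinary Postfix settings. A sketch of the general shape only; the relay host and map file here are hypothetical, and this is not Savannah's actual configuration:

```
# /etc/postfix/main.cf -- illustrative sketch, not the real Savannah config.

# User address mapping: rewrite local web-UI sender addresses to their
# public form on outgoing mail.
smtp_generic_maps = hash:/etc/postfix/generic

# Transport/delivery: route outbound mail through the site relay
# (hostname is hypothetical).
relayhost = [mail.example.org]

# DKIM signatures: hand messages to an OpenDKIM milter for signing.
smtpd_milters = inet:localhost:8891
non_smtpd_milters = inet:localhost:8891
```

These parameter names are standard Postfix ones; the actual values on the frontend are of course whatever sysadmin configured.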
Re: Authenticating git.savannah.gnu.org
Yuan Fu wrote:
> Thank you for following up with this. Embarrassingly I still
> couldn’t get it to work.

It's very hard to debug things when you only get to see one end of the problem. Because ssh, and other security systems, just can't give you too much debug information. Obviously that would give attackers too much information too! :-(

> In the previous message I posted the wrong output so it looks like I
> used the wrong key. I confirmed afterwards that the key I uploaded
> to savannah is in fact tried by ssh. I also used the key on a
> machine on my local network and it seems to work fine. I uploaded
> another key and still no luck. I’m not really sure how to debug
> this.

It will be much easier to have one of us on both ends for the debugging.

> I’ve read the debug page, in fact me trying to login with ssh is by
> following the debug instructions. But I don’t know how to proceed from
> here.

Let's arrange a time when both of us can be online at the same time and can debug it. Let's take that conversation off-list.

However this doesn't actually need to be me. Others can do this too. I might be the person who does this type of debugging most often, though. Unfortunately I am going to be busy most of my daylight hours today away from the keyboard. But we can work something out.

Bob
Re: Authenticating git.savannah.gnu.org
Hello Yuan,

Ineiev wrote:
> Yuan Fu wrote:
> > debug1: Authentications that can continue: publickey,password
> > debug1: Offering public key: yuan@Brown ED25519
> > SHA256:xDlZxIRWzZBaA+Xg/J/Y4O96EtMj7ezWrbtLIN0Bgm4 agent
> > debug3: send packet: type 50
> > debug2: we sent a publickey packet, wait for reply
> > debug3: receive packet: type 51
> >
> > Seems my key is rejected?
>
> Yes. The fingerprint of the key registered in your account is
>
> SHA256:jCGSDL+P+BqJ+v0NdXDABsY1I3Y7cjMXhb/5qG+haTc yuan@Brown (ED25519)
>
> Probably ssh offers a wrong key.

Did this eventually get resolved? I'll note that there are debugging tips on this reference page.

https://savannah.gnu.org/maintenance/SshAccess/

Bob
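One quick check in this situation: print the SHA256 fingerprint of the local public key and compare it by eye with the fingerprint Savannah reports for the registered key. (The key path below is just an example; use whichever key ssh is actually offering.)

```shell
# Print the SHA256 fingerprint of a local public key so it can be
# compared against the fingerprint registered in the Savannah account.
ssh-keygen -lf ~/.ssh/id_ed25519.pub
```

If the two fingerprints differ, ssh is offering a different key than the one uploaded to the account.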
ssh member access updated
The server hosting the svn, hg, and bzr services has been updated and now has a newer OpenSSH. This should resolve the OpenSSH obsolescence of SHA1. Hopefully it will all work transparently for everyone.

If any of you had previously added the initial workaround below, it should be removed, as it should no longer be needed.

HostkeyAlgorithms +ssh-rsa
PubkeyAcceptedAlgorithms +ssh-rsa

It shouldn't be needed because since OpenSSH 8.5 a new feature exists that defaults to "UpdateHostKeys yes", which will automatically add newer host keys to the known_hosts file. The theory goes that, moving forward, host keys can and will be automatically upgraded without the need for the above type of workaround.

Please report any problems.

Bob

Reference: https://savannah.nongnu.org/maintenance/SshAccess/
Re: Help required for setting up my Savannah
निरंजन wrote:
> Hello hackers,

Hello! :-)

> I am निरंजन , a new user on Savannah. My first
> project got approved today. Now I want to do a few
> things. Can you guide me please?

I also saw your ticket, which I link together here.

https://savannah.nongnu.org/support/index.php?110579

I answered a little bit in the ticket and link it here so that if someone else in the future is searching and finds it, they can read through what was responded to there.

> 1. I want to import my existing git-repository with the
>    commit-history to be visible on Savannah. I have checked
>    the feature Git and Git web browsing in the admin area of
>    my project which shows the following URLs:
>
>    * http://savannah.nongnu.org/git/?group=datestamp
>    * //git.savannah.nongnu.org/cgit/datestamp.git
>
>    None of these addresses are functional at the moment.

The above shows two subtle details which create snags for people to get caught upon.

1. The http:// protocol cannot be authenticated. It cannot be pushed to. It can only be used for anonymous clones. The same holds for the https:// and git:// protocols. Those anonymous clones are valid clones, but those remote protocols cannot be used to push back to the remote repository. The https://savannah.nongnu.org/maintenance/UsingGit/ page documents a way to convert a checked out sandbox from one remote to another without needing to clone it again.

2. The "//" protocol. I am not sure where you copied that link from, but that is an HTML link intended for the user to select in the web browser, and it is not valid in other contexts. The // prefix does not specify either http:// or https:// but defaults to the currently connected protocol. It's an HTML technique to allow pages to be viewed in a web browser over either http or https, with links continuing to use the same protocol as before. This avoids the problem of a user who cannot use https being forced onto it while browsing over http, and it avoids the problem of an https user being downgraded to an http link. Instead it continues the current connection protocol. But it is not valid in other contexts.

The clone section of the cgit project page shows the recommended URLs for cloning.

https://git.savannah.nongnu.org/cgit/datestamp.git

For your project it is this list.

git://git.savannah.gnu.org/datestamp.git
https://git.savannah.gnu.org/git/datestamp.git
ssh://git.savannah.gnu.org/srv/git/datestamp.git

Two notes. As described above, git:// and https:// are anonymous read-only sources only. The only writable source is the ssh:// protocol, which is authenticated. At this time Savannah only supports commits over the ssh:// protocol. Secondly, although it is not advertised, the repository is also available over the http protocol the same as the https protocol. This helps our free software friends who access the Internet from behind restrictive firewalls. Note also that git itself uses secure hashing to verify content. But generally https avoids problems with troublesome web proxies and many other problems. I recommend always using the ssh:// protocol for developer access, as it is fully authenticated and most robust.

>    I have read the instructions on the following page:
>    https://savannah.nongnu.org/maintenance/UsingGit, but
>    unfortunately I couldn't understand how to properly
>    import a ready git project.

I know that you have subsequently been successful. Very often the best readers and editors of documentation are those who are reading it for the first time. If you have any suggestions for the documentation, they would be appreciated.

>    PS: I have setup the SSH as per the instructions.
>
>    Can you please help?

This is the users mailing list. It's a community of users just like yourself who are all using Savannah. It is a great resource of people who can share their knowledge and help other free software hackers.

> 2. I want to have a homepage for the project. I have ticked
>    that box too in the admin area, but the home-page-URL
>    also isn't functional.
>
>    http://www.nongnu.org/datestamp/

At this time all of the web pages are managed using CVS. Which to new people today sounds very scary. But let me assure you that if you have succeeded with git, then using cvs will be much less problematic! Your project page starts with some documentation on setting up web pages.

https://savannah.nongnu.org/cvs/?group=datestamp

The Savannah documentation for CVS is here.

https://savannah.nongnu.org/maintenance/CvsGettingStarted/

I want to send this mail off so will avoid making it too much longer. Please ask questions. People will help. But for the web there is a team of GNU webmasters who both work on the site and can help. Let me link to the webmastering documentation. There is a lot of good information there.
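To illustrate the remote conversion mentioned under point 1 above (switching an existing anonymous clone to the writable ssh:// remote without re-cloning), here is a sketch using the datestamp URLs; the general recipe is on the UsingGit page:

```shell
# Start from an existing anonymous clone, then point its "origin" remote
# at the authenticated ssh:// URL so future pushes are possible.
# No re-clone is needed; the local history is kept as-is.
cd datestamp
git remote set-url origin ssh://git.savannah.gnu.org/srv/git/datestamp.git
git remote -v   # verify fetch and push now both use the ssh:// URL
```

The same command works in reverse, of course, for pointing a developer clone back at an anonymous URL.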
Re: authentication failures
Bob Proulx wrote:
> record. Take a look at the last paragraph at that SshAccess page.
>
> file:///home/bob/src/savannah-stuff/administration/sviki/SshAccess.html

Oh, that's what I get for not proofreading my pastes! That's local on my system, of course. This would be the Internet URL for it.

https://savannah.gnu.org/maintenance/SshAccess/

Sorry for the confusion.

Bob
Re: authentication failures
Thien-Thi Nguyen wrote:
> I confirm that it is working on the client side now, as well.
> My guess is that i was too hasty in retrying, and didn't give
> savannah enough time to process the new key.

Hmm... I hate intermittent failures! :-(

At this time there should be no delay between updating an ssh key in the Savannah web UI and being able to use it across the vcs and download servers with ssh. The ssh key is stored in an SQL database, and upon storing it is immediately available to the SQL query which ssh uses in our configuration.

In the development of Savannah (and the SourceForge code it is based upon) a lot of actions were done through cron jobs running every hour or every half hour or so. One thing happens in one place and then after a bit the cron job runs and a reaction happens elsewhere. That paradigm of action and later cron-task reaction is so pervasive that it is easy to think that everything just needs time to settle. (I would like to get rid of all of those cases...) And ssh keys used to be that way until about ten years ago, as far as I can tell from the historical record. Take a look at the last paragraph at that SshAccess page.

file:///home/bob/src/savannah-stuff/administration/sviki/SshAccess.html

(Savannah admin info: specifically, in /etc/ssh/authorized_keys/USERNAME on the subhosts; a cron job sv_authorized_keys runs on each vm.)

Well... That isn't true anymore. But it says that it used to be true. I saw that while looking into your incident and have queued up a change to update that bit of the doc, but haven't had a chance to do it yet.

Some time ago, I am not sure when or by whom, things were changed to use the MySQL database directly to hold ssh keys. That has many advantages, not least that the web UI can store changes to the database and the new keys are then immediately available to the other systems that need them for ssh access. Both ssh keys and also libnss for user account data.
They are separate processes, but both use the SQL database over the network. The libnss SQL module for user account data has worked pretty well. It does have a dependency on the network between the front-facing system running ssh and the SQL database system that is shared among the collection. Usually that is okay. But if the network has a problem then reports will be logged.

libnss-mysql: mysql_query failed: Lost connection to MySQL server during query, trying again (2)

I'll say that I probably never saw that happen for several years, and then in the last few months I have suddenly started to see it quite frequently in the logs. (I use logcheck to scan the logs continuously and send alerts upon unusual events.) Most often this happens in the middle of the US nighttime (say 3am or so, but really at any time) when no one, admins or volunteers, is doing anything. Therefore I have this idea that there must be network maintenance happening in the datacenter in those hours which causes transient network glitches. I am sure this affects our overseas free software friends the worst since it happens during their daylight hours. I feel bad about that. But whenever we look at the problem later, no trouble is ever found.

I expect this to be causing transient glitches in using the vcs services. Upon retrying I expect things to work fine. It isn't a problem of something changing on the systems because, as you have heard, our problem is that we need to upgrade them and are working on doing that, on other systems, so nothing has changed on the vcs systems. Also this happens on both vcs0 and vcs1, and they each have different OS versions. Therefore I am pretty sure this particular problem is due to external influences.

> In any case, many thanks again for your help resolving things.
> Happy hacking!

I am happy things have been resolved for you! Happy Hacking!

Bob
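One common way to wire sshd directly to an SQL-backed key store, so that a key saved by a web UI is usable on the very next login with no cron delay, is sshd's AuthorizedKeysCommand. This is a hypothetical sketch only; the messages above say Savannah's keys live in MySQL but not how sshd reads them, and every file, database, table, and column name below is an assumption:

```
# /etc/ssh/sshd_config (fragment) -- hypothetical sketch
AuthorizedKeysCommand /usr/local/sbin/sql-authorized-keys %u
AuthorizedKeysCommandUser nobody

# /usr/local/sbin/sql-authorized-keys -- prints one public key per line.
# sshd passes the already-validated login name as $1.
#!/bin/sh
exec mysql --batch --skip-column-names savannah_db -e \
    "SELECT ssh_key FROM user_keys WHERE username = '$1'"
```

With something like this in place there is no per-user authorized_keys file to regenerate, which is exactly the cron-delay problem described above.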
OpenSSH 8.8 deprecates SHA-1
OpenSSH 8.8 was released on September 26, 2021 and deprecated all use of the SHA-1 hash algorithm. This affects users of the git, svn, and hg repositories who authenticate with ssh-rsa keys and who have upgraded to OpenSSH 8.8. (The cvs repositories are not affected.)

For user workarounds please see this documentation page.

https://savannah.gnu.org/maintenance/SshAccess/

Upgrading the services is a high priority, but there are various entanglements which make doing this quickly rather hard. It will take some time. Please be patient.

___ Message sent via Savannah https://savannah.nongnu.org/
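The documentation page above describes the workarounds. The usual client-side stopgap for this class of problem, sketched here as an assumption rather than a statement of what that page says, is to re-enable SHA-1 RSA signatures for the affected hosts only, in ~/.ssh/config:

```
# ~/.ssh/config (fragment) -- re-enable ssh-rsa for Savannah hosts only.
# Option names are for OpenSSH 8.5+; older releases spell the first one
# PubkeyAcceptedKeyTypes.
Host *.savannah.gnu.org *.savannah.nongnu.org *.sv.gnu.org *.sv.nongnu.org
    PubkeyAcceptedAlgorithms +ssh-rsa
    HostKeyAlgorithms +ssh-rsa
```

The better long-term fix on the client side is to register a newer key type, e.g. one generated with `ssh-keygen -t ed25519`, in the Savannah web UI.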
Re: Daily News Aggregation
Hello Stephen,

Stephen H. Dawson wrote:
> We used to have an email daily for the news feeds of all GNU projects.
> I remember a server migration occurred, and the email was put on the
> back burner to get it done.
> What is the status of activating the daily email of news feeds for GNU
> projects, please?

https://planet.gnu.org/ carries the news feed for all Savannah projects, in addition to other postings. Is that to which you are referring? That was dropped for a while after a previous migration in July 2016, but was then brought back (Thanks Sylvain!) in March 2018. It provides RSS and other feeds from that location.

I am unaware of any previous news feed directly from Savannah itself. There are individual projects which have their own news feeds. But I don't know a lot about a lot of things, so if you have an example please point me to it. :-)

I'll use this message to ask users about something else I have been thinking about on this topic. In the time I have been working with Savannah the front page news feed has only ever been about Savannah itself. Which means when nothing is happening the front page is rather boring. And when things are happening it is almost always bad news! But it is a feature of the software forge to approve project news to the front page. And I think that might also send the news to this mailing list, since I think all news posted to the front page is also sent here too.

Is this something that Savannah users would like to see? To send appropriate and curated news from individual projects to the front page and also to this mailing list?

Bob
Certificate Expiration Event September 2021
On September 30, 2021, the DST Root CA X3 cross-sign for the Let's Encrypt trust chain expired as planned. That was a normal and scheduled event. However, coupled with a verification error in the code of libraries authenticating certificates, it caused some clients that have not been updated to fixed versions to have problems validating certificates.

If you are experiencing invalid certificate chain problems with Let's Encrypt certificates (not a Savannah problem) then please upgrade your client to the latest security patches for your system.

Please reference these resources for upstream information and discussion about the issue.

* https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/
* https://community.letsencrypt.org/t/production-chain-changes/150739/4
* https://letsencrypt.org/docs/certificate-compatibility/
* https://letsencrypt.org/certificates/
* https://www.openssl.org/blog/blog/2021/09/13/LetsEncryptRootCertExpire/
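For diagnosing this class of problem, one way to see what a server presents versus what the local client trusts is openssl's s_client. This is an illustrative sketch (it needs network access, and the hostname is just an example); if the chain verifies here but fails in another client, that client's trust store or TLS library is the likely culprit:

```
# Show the leaf certificate's issuer and validity dates as presented
# by the server; add -showcerts to dump the full chain.
openssl s_client -connect savannah.gnu.org:443 -servername savannah.gnu.org \
    </dev/null 2>/dev/null | openssl x509 -noout -issuer -dates
```

On Debian-family systems, updating the ca-certificates and openssl/gnutls packages is typically what picks up the fixed validation behavior.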
Re: Can no longer login to savannah.
Hi Carlo,

Ineiev wrote:
> Carlo Wood wrote:
> > This worked, because now I can do ssh ca...@cvs.savannah.gnu.org
> > and it doesn't ask for my password anymore (I get an error that
> > I am not allowed to execute that command; obviously because it
> > is a restricted shell for cvs only).

Good! That means everything *should* be working. Is it?

> I would check if these things work:
>
> * "member" cvs checkout

Just to be clear... Are we talking site web pages or project source code? Did that work for you? Try a pristine checkout in an empty directory.

cvs -z3 -d:ext:ca...@cvs.savannah.gnu.org:/sources/which co which

Success?

> * anonymous cvs checkout and cvs diff with it

Similar, but with the anonymous cvs pserver protocol. (I always recommend the ssh protocol as in the above example.)

cvs -z3 -d:pserver:anonym...@cvs.savannah.gnu.org:/sources/which co which

Success?

Bob
Re: Storage Array Problems
Protesilaos Stavrou wrote:
> "Basil L. Contovounesios" wrote:
> > Bob Proulx writes:
> >> Are all of those messages yours? They all have the same unique string
> >> pattern.
> >
> > This pattern is generated by an Emacs MUA. The @tcd.ie ones are mine,
> > and the @protesilaos.com ones are Prot's (CCed). I think I received the
> > messages locally, but they're clearly missing from
> > https://bugs.gnu.org/45068 and possibly other places too. Should I just
> > resend the missing messages?
>
> Hello! I noticed that they were missing, but assumed that the sync
> takes some time.
>
> Please re-send them or tell me how I can do it from here.

When I was provided with a message-id by Lars for one of his missing messages I was able to grep around and find that message and the others in the logs. The logs said those message-ids had been discarded. That's all I know. Sorry. The group of those all together just stood out as looking unusual to my eye and therefore I mentioned it. I don't know if there is a systematic failure that needs to be fixed, or if it was simply human error due to the systems problems and the large spam backlog.

One of the contributing factors may have been related to the storage array problems yesterday. When a system can't read or write files, the process trying to do so is "blocked waiting for I/O" and pauses in an uninterruptible wait state. (In the Linux kernel a ps listing shows this uninterruptible state as the "D" state.) Since most OS functions get cached in the file system buffer cache in RAM, the OS on most systems was still able to function at some level. As far as I know none of the systems outright crashed. But these processes blocked waiting for I/O from the networked storage server did pile up. I saw that fencepost had a system load of more than 1100!

The FSF admins worked almost all day Sunday, morning through late afternoon, to restore the storage array. As you can imagine it was a high stress situation for them.
Meanwhile, after the initial couple of hours the rest of the systems were mostly restored to normal operation and were able to drain down their high cpu load averages. The uninterruptible processes completed the I/O reads and writes upon which they were blocked and were able to exit. However, after being blocked for a long time, some processes that have timeouts will time out and be killed for taking too long to complete.

The large mail backlog that occurred yesterday meant that humans looking at the mailman web page hold queue were looking at dozens and dozens of messages, most of which were spam, because the anti-spam "cancel bot" was also backlogged. That's almost the worst case for a human trying to pick out the non-spam messages from the sea of spam. But I really have no idea about any particular message and am just guessing.

I also don't know the deep details of the storage array problems. Perhaps the FSF admins will write up a blog note about it. That would be interesting to me. From what I could tell there was a coupled failure of multiple controller nodes causing the array to lose redundancy. At least one of the arrays went offline completely. They had to carefully reset and restore the redundancy quorum of the disk storage and the controller nodes. Other than the initial hour when things were completely offline, the subsequent restoration was all done online while the system was functioning in a degraded raid mode. Which is pretty cool when you think of it!

> [ I am using Emacs+Gnus and this setup has been stable for a fairly long time ]

Emacs+Gnus worked great. No problems there at all. The only reason that Emacs+Gnus got mentioned was that it created a message-id format that I did not recognize, and therefore I asked if those were all from Lars. Basil told me those were from Emacs. Which is great. No problems there at all.

Bob
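The "D" state described above can be spotted from a shell. A minimal sketch (the column selection is just one convenient choice; on a loaded system during an outage this list is what piles up into those huge load averages):

```shell
# List processes stuck in uninterruptible I/O wait ("D" state).
# The second column of "ps -eo pid,stat,comm" is the process state;
# a leading "D" marks uninterruptible sleep.
blocked=$(ps -eo pid,stat,comm | awk 'NR > 1 && $2 ~ /^D/')
if [ -n "$blocked" ]; then
    printf 'blocked on I/O:\n%s\n' "$blocked"
else
    echo "no processes currently in D state"
fi
```

On a healthy system this prints nothing interesting; during a storage outage the list grows with every process touching the frozen filesystem.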
Re: Storage Array Problems
Basil L. Contovounesios wrote:
> Bob Proulx writes:
> > Are all of those messages yours? They all have the same unique string
> > pattern.
>
> This pattern is generated by an Emacs MUA.

Oh! Thank you for that tidbit of information. I am unfamiliar with that signature and thought it might have been applied custom. (One might notice that the message-ids on my messages are custom, for example.)

> The @tcd.ie ones are mine, and the @protesilaos.com ones are Prot's
> (CCed). I think I received the messages locally, but they're
> clearly missing from https://bugs.gnu.org/45068 and possibly other
> places too. Should I just resend the missing messages?

Since they were logged as being discarded they are never going to be delivered. I would go slow and send one or two initially, and note the message-ids of those messages. If they do not show up through the list in a reasonable time, please send a note to the debbugs team mailing list with the message-ids so we can look for them.

help-debb...@gnu.org
help-debbugs AT gnu DOT org

(The obfuscators often get in the way of actually sending email addresses to people who read the email on web pages.) The main debbugs team members do not monitor the Savannah mailing lists.

As a standard operating procedure we normally hold all unknown senders for human review upon initial contact. (Unknown in this case means unknown to the Mailman mailing list management software; you might be Margaret Hamilton in real life and still be an unknown sender to Mailman here.) After review the message is approved and the sender is added to the known list of senders for that mailing list. This way only the initial contact needs specific human checking. Subsequent mail is completely automated after that point with no human delays added.

I found it an unusual pattern to see that large group of message-ids, all with the same syntax form, all discarded that morning between 8am and 11am.
Which is why I asked about them. Something might have gone wrong somewhere, potentially between a chair and a keyboard for that matter.

Bob
Re: Storage Array Problems
Lars Ingebrigtsen wrote:
> Thanks for checking. I'll resend the discarded messages, then.

I would send just one or two initially. Please keep note of their message-ids. See if they show up. If not, please notify us of the message-ids so that we can look to see their disposition. For debbugs please send to the debbugs team mailing list.

help-debb...@gnu.org
help-debbugs AT gnu DOT org

The main debbugs folks don't monitor the Savannah mailing lists.

One of the contributing factors may have been the large mail backlog that occurred yesterday, which meant that humans looking at the mailman web page hold queue were looking at dozens and dozens of messages, most of which were spam, because the anti-spam "cancel bot" was also backlogged. That's almost the worst case for a human trying to pick out the non-spam messages. But I really have no idea about this particular message and am just guessing.

Bob
Re: Storage Array Problems
Lars Ingebrigtsen wrote:
> <8735xgnw6y@gnus.org>
>
> And here's the logs from my MTA for this message:
>
> 2021-02-28 14:44:41 1lGMNC-0008OQ-HF <= la...@gnus.org
>     H=cm-84.212.220.105.getinternet.no (xo) [84.212.220.105] P=esmtpsa
>     X=TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256 CV=no A=plain_server:larsi S=3501
>     id=8735xgnw6y@gnus.org
> 2021-02-28 14:45:16 1lGMNC-0008OQ-HF => 46...@debbugs.gnu.org R=dnslookup
>     T=remote_smtp H=debbugs.gnu.org [209.51.188.43] C="250 OK id=1lGMNS-00025F-TJ"
>
> (Times are in +0100 (CET).)
>
> This message has not shown up on the debbugs bug tracker here:
>
> https://debbugs.gnu.org/cgi/bugreport.cgi?bug=46781

I find that the message was deleted at the Mailman part of the pipeline. (Times below are in US/Eastern, -0500.) I do not find it in the listhelper anti-spam automation.

Feb 28 08:45:45 2021 (9480) Message discarded, msgid: <8735xgnw6y@gnus.org>

I see that the message-id pattern is unusual. And there are many in that group.

Feb 28 08:44:11 2021 (9480) Message discarded, msgid: <877dmsnwa7@gnus.org>
Feb 28 08:45:45 2021 (9480) Message discarded, msgid: <8735xgnw6y@gnus.org>
Feb 28 08:49:27 2021 (9480) Message discarded, msgid: <87lfb8l2wr@tcd.ie>
Feb 28 08:50:36 2021 (9480) Message discarded, msgid: <87y2f8mheo@gnus.org>
Feb 28 08:51:12 2021 (9480) Message discarded, msgid: <87wnusmhe1@gnus.org>
Feb 28 08:54:46 2021 (9480) Message discarded, msgid: <87sg5gmh7j@gnus.org>
Feb 28 08:55:07 2021 (9480) Message discarded, msgid: <87r1l0mh74@gnus.org>
Feb 28 08:57:54 2021 (9480) Message discarded, msgid: <87eeh0xpm6@protesilaos.com>
Feb 28 09:12:38 2021 (9480) Message discarded, msgid: <87lfb8mgdu@gnus.org>
Feb 28 09:14:34 2021 (9480) Message discarded, msgid: <87a6roxou1@protesilaos.com>
Feb 28 09:14:55 2021 (9480) Message discarded, msgid: <87h7lwmgaj@gnus.org>
Feb 28 09:15:32 2021 (9480) Message discarded, msgid: <87mtvomgea@gnus.org>
Feb 28 09:24:10 2021 (9480) Message discarded, msgid: <87czwkmfuk@gnus.org>
Feb 28 09:24:31 2021 (9480) Message discarded, msgid: <87blc4mfub@gnus.org>
Feb 28 10:20:43 2021 (9480) Message discarded, msgid: <87im6cfcex@tcd.ie>
Feb 28 10:58:31 2021 (9480) Message discarded, msgid: <8735xgxk0z@protesilaos.com>

Are all of those messages yours? They all have the same unique string pattern.

Bob
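The grep described above can be sketched as follows. The log format is taken from the excerpt in this message; on a real list server you would grep the actual Mailman log file (whose path varies by site), so the sample data is embedded here just to make the sketch self-contained:

```shell
# Pull the discard events and their message-ids out of a Mailman-style log.
# Sample log lines modeled on the excerpt above.
cat > sample.log <<'EOF'
Feb 28 08:45:45 2021 (9480) Message discarded, msgid: <8735xgnw6y@gnus.org>
Feb 28 08:49:27 2021 (9480) Message discarded, msgid: <87lfb8l2wr@tcd.ie>
Feb 28 09:00:00 2021 (9480) Message accepted, msgid: <other@example.org>
EOF
# Keep only discard events, then extract the <...> message-id tokens.
grep -F 'Message discarded' sample.log | grep -oE '<[^>]+>'
```

This prints one message-id per discard event, which is how a reported message-id can be traced to its disposition.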
Re: Storage Array Problems
Lars Ingebrigtsen wrote:
> It seems like all mail to debbugs during the outage have been lost? Or
> are they stuck in a queue somewhere?

All mail should have flowed through the queues by now. Although the storage array was frozen from the perspective of the various systems running on it, everything has thawed now. Everything appears to be working normally.

If you have a Message-Id then we can potentially trace down what happened to it.

Bob
Storage Array Problems
The storage array hosting much of the GNU and FSF infrastructure suffered problems this morning. The FSF admins are on the task and most services have been restored, though issues linger.

Please look to https://hostux.social/@fsfstatus for current status updates.
Re: ViewVC Exception
Thomas De Contes wrote:
> Bob Proulx a écrit :
> > Please try the https protocol instead. Does that work better?
>
> I suppose the 2nd possibility is the right one, and the problem did
> not recur, so I can't say what it would have been like.

It's not the typical problem being reported. If https works then keep using it.

> If you think that we should always avoid http because it can be corrupted,
> then:
>
> 1
> I fixed the link given here:
> http://svn.savannah.gnu.org/viewvc/rapid/branches/gtkada-2.24/README?view=markup
> (to https://savannah.nongnu.org/projects/rapid/ )
> Am I right ?

Yes. I think it is *safer* to always document https now. However I am opposed to blocking http. Let me explain.

There are those who wish to actively block http and only allow https. But just because something can be broken in some cases does not mean that it is always broken in all cases. And it does not mean that we should actively break things for those who need http access. This is just me typing extemporaneously and I might get something wrong, but...

* If your clock has failed and you boot without a good time source, then it is likely that you will be unable to contact any https sites, because the time will be different enough that certificates will not validate.

* Likewise for mandatory DNSSEC (DNS Security Extensions). If the time is too far wrong then the DNS entries can't be validated.

* If you are behind a blocking firewall then https may be impossible to use, in which case http may be the only available protocol. There are many who must exist behind these firewalls, and it would be a tragedy to block them from access to Free Software out of a misplaced sense of trying to protect others. First do no harm.

* Not every peek at documentation or every software download requires a theoretical level of life-or-death security!

So I think it is okay and good to document https as the primary protocol and to encourage it. But I don't want to block http.
Because if someone needs http then they should be able to recognize their need and use it. I don't think we need to go to the extreme of documenting every use of https as also having a potential fallback to http either. That would be too much noise and clutter everywhere. It would be like documenting the use and purpose of the shell's IFS variable *for every command*. It is definitely in use every time we type in any command, but if someone attempted to document every possible thing at every possible place then that documentation would quickly become impossible to use in practice.

> 2
> Why does the server redirect to https only when we are logged in?
> And when we click on "Browse Sources Repository" (and links that
> point out of the sub-domain), it goes to http even from https.

Probably because it wasn't noticed before. Developers are always logged in! For example when I visit I remain in https. But when I test this now I do see that it defaults to http for the standard address. This can be changed in the "Select features" section of the admin page. But it would be somewhat tedious to update all of the links manually.

I myself have done very little Savannah web UI development. Maintenance has fallen to the very few who work on it. This would be an excellent area for contributed patches!

> > Also the past day and a half has had some problems with memory
> > exhaustion due to external influence starting a git clone then either
> > dropping the connection or getting dropped due to networking issues.
>
> > However it's Emacs and the repository is large and the resulting git
> > pack-objects process consumes 800 MB of active RAM before deciding to
> > write anything to the closed file descriptor and then exiting. That's
> > been a problem.
>
> Yes, it seems to be a problem.
> I was thinking about migrating from subversion to git, but maybe it's
> too soon? What do you think about that?

Such a question!
When I meet parents with several children I usually avoid asking them, "Who is your favorite child?" :-)

I don't think it is a matter of "soon" or "too soon". Git is very stable and mature. It is used by thousands. It's fine. I use git myself and I like git. But I also know that many people do not like git. They much prefer svn or hg instead. And in some workflows there are advantages. Like any benchmark, everything has a sweet spot and also a bad worst case. It is not a matter of time. It is a matter of features and work flow and what you want to use. The choice is yours.

> > We have mitigation in effect now to detect those as
> > quickly as practical and kill them as they are occurring.
>
> Thank you for having fixed it. :-)

Unfortunately the Internet is a hostile place. 8 billion people in the world and all "doing stuff".
Re: ViewVC Exception
Thomas De Contes wrote:
> >>> Is there a problem on the server today ?
> >>> http://svn.savannah.gnu.org/viewvc/rapid/trunk/gtk_peer/?pathrev=3
> >>> http://svn.savannah.gnu.org/viewvc/rapid/tags/rapid-3.01/jvm_peer/?limit_changes=0=2

Those are working okay for me at the moment as well.

> probably, since i've got 2 different messages for the same url
>
> but that's strange that you had not any problem at all ...

There are two very likely possibilities. One on your end and one on our end.

For one, I am seeing http instead of https in the location. Which means that corporate and ISP proxies may be getting in the middle and causing trouble. That happens more often than seems reasonable. Please try the https protocol instead. Does that work better?

Also, the past day and a half has had some problems with memory exhaustion due to external influence: something starts a git clone and then either drops the connection or gets dropped due to networking issues. However it's Emacs and the repository is large, and the resulting git pack-objects process consumes 800 MB of active RAM before deciding to write anything to the closed file descriptor and then exiting. That's been a problem. We have mitigation in effect now to detect those as quickly as practical and kill them as they are occurring.

The memory exhaustion usually manifests as a 503 Service Unavailable error. I am sure users on this list will have seen it the past couple of days while this has been happening. At those times the system has been running a high cpu load average of 70 or so before the errant processes have been killed to free that memory.

> > An Exception Has Occurred
> >
> > Python Traceback
> >
> > Traceback (most recent call last):

This Python traceback seems very odd because I don't know how memory exhaustion on the server side would trigger this. But stranger things have happened.

> And when I reloaded it, I got :
>
> An error occurred while reading CGI reply (no response received)

This seems much more likely.
As that is basically the same 503-class failure, due to the backend not being able to respond in time under memory stress. It eventually loads, but not before the timeout, which is already quite long.

I worry that the web browser is caching some result. Please, the next time you have this problem, try using a command line tool such as wget or curl, which avoids web browser cache issues. For example:

wget -O- -q -S 'http://svn.savannah.gnu.org/viewvc/rapid/trunk/gtk_peer/?pathrev=3'

Does that work when at the same time the browser does not?

Bob
Re: Email notifications from bug tracker
Markus Mutzel wrote:
> Am 07. Oktober 2020 um 21:34 Uhr schrieb "Ian Kelling":
> > > Could you please check if there is something wrong with the email
> > > notifications?
> >
> > It looks like frontend1.savannah.gnu.org is not sending new bug messages
> > to eggs, probably something going wrong there.

Yes. The munin graphs show a steady increase in the size of the mailq, indicating a problem there. For some reason systemd was running in degraded mode.

root@frontend1:~# systemctl status
● frontend1
    State: degraded
     Jobs: 0 queued
   Failed: 1 units

root@frontend1:~# systemctl list-units --state=failed
  UNIT             LOAD   ACTIVE SUB    DESCRIPTION
● opendkim.service loaded failed failed DomainKeys Identified Mail (DKIM) Milter

I kicked systemd to restart the service and things appear happy now.

root@frontend1:~# systemctl status opendkim.service | head
● opendkim.service - DomainKeys Identified Mail (DKIM) Milter
   Loaded: loaded (/etc/systemd/system/opendkim.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-10-07 17:17:30 EDT; 53s ago

Then I asked Postfix to run its queues and deliver the queued messages.

root@frontend1:~# postfix flush
root@frontend1:~# tail -F /var/log/mail.log
...lots of activity...

This reduced the mailq down to 16 messages, which is a more normal level of account-registration abuse noise. I'll double check them all more deeply later, but a quick scan looked like it is all routine now.

I rebooted the system because I wanted to verify that everything at least starts up happy after a reboot. It rebooted okay and came up cleanly with everything running and systemd in the "running" state.

root@frontend1:~# systemctl status
● frontend1
    State: running
     Jobs: 0 queued
   Failed: 0 units
    Since: Wed 2020-10-07 17:23:55 EDT; 5min ago

root@frontend1:~# systemctl list-units --state=failed
0 loaded units listed. ...

Thank you for the problem report, and sorry for the delay in getting the problem resolved.

Bob
Re: Is it possible to authenticate with savannah, e.g. with ldap?
Friedrich Beckmann wrote:
> >> I also think of the underlying server. It would be nice if I could give
> >> login access to the project members via authentication at savannah.
> >
> > I am not sure how to do this. Which literally means that I myself
> > don't know how to do this. But if other people had ideas then I am
> > sure something could be worked out.
>
> At my university we use an ldap server for this purpose. I have not looked
> into the github way of doing this.

The Devil is in the details.

Bob
Re: Is it possible to authenticate with savannah, e.g. with ldap?
Friedrich Beckmann wrote:
> I have a buildbot prototype for pspp here:
>
> http://caeis.etech.fh-augsburg.de:8010
>
> The webinterface is currently open without any authentication. So you
> can for example trigger a build. My idea is that project members of the
> savannah project can authenticate at this server such that I
> can restrict the access.
...
> Anyway it is a small project so this can be done manually also.

If it were me, I would subscribe to the commit email mailing list (assuming there is one; we can set one up if there is not) and then have the server automatically fetch and build whenever a new commit is pushed. Then things would always be automatically built. And then additionally have a periodic build regardless, to catch up with any OS upgrades. This would reduce the need for manual button pushing somewhat.

> I also think of the underlying server. It would be nice if I could give
> login access to the project members via authentication at savannah.

I am not sure how to do this. Which literally means that I myself don't know how to do this. But if other people had ideas then I am sure something could be worked out.

Bob
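The build-on-change idea can be sketched as a small poll script. This is a hedged sketch, not Buildbot's or PSPP's actual setup: the build step is a stub comment, and change detection uses a timestamp file against the local source tree instead of a real commit-mail trigger. A commit-mail hook would run it immediately; a cron job calling it periodically gives the "catch OS upgrades" rebuilds:

```shell
#!/bin/sh
# Rebuild only when something under $SRC changed since the last build.
SRC=${SRC:-.}
STAMP=${STAMP:-.last-build}
# Build if we have never built, or if any file is newer than the stamp.
if [ ! -e "$STAMP" ] || [ -n "$(find "$SRC" -type f -newer "$STAMP" | head -n 1)" ]; then
    echo "changes detected; building"
    # a real setup would run something like: ./configure && make && make check
    touch "$STAMP"
else
    echo "no changes; skipping build"
fi
```

Running it twice in a row demonstrates the idea: the first run builds and refreshes the stamp, the second finds nothing newer and skips.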
Re: Is it possible to authenticate with savannah, e.g. with ldap?
Friedrich Beckmann wrote:
> I am member of the https://savannah.gnu.org/projects/pspp project. I
> am currently investigating how to run a continuous integration
> server for building the pspp software. One part of that is to allow
> the pspp project members to modify and control the build server,
> e.g. to trigger a new build.

Awesome!

> Is it possible to authenticate users via some savannah
> infrastructure? Is there for example an ldap server which I could
> contact from the build server?

This sounds like an SSO (single-sign-on) service to me. There is currently no support for SSO with Savannah. That does not mean that one could not be created. The question has not been asked before, and no one has created the infrastructure for it.

But how is this related to setting up a continuous integration server?

> Currently I am looking at buildbot. They have something to
> authenticate against github or gitlab or plain user/password. I
> would prefer to use the user/password from savannah.

What would this authentication be used for?

Bob
Re: Links to further help in ViewVC help are all dead
Thomas De Contes wrote: > fredomatic wrote: > > I guess that should be fixed somehow. > > on my side, > http://svn.savannah.gnu.org/viewvc/*docroot*/help_rootview.html > ant the 3 others are working, All appears okay to me too. > but > http://svn.savannah.gnu.org/viewvc/*docroot*/help_query.html > i didn't find "the form at the top of the page" I think that refers to the "Directory Query View" which is not enabled by default. * Directory Query View - Shows information about changes made to all subdirectories and files under a parent directory, sorted and filtered by criteria you specify. /This view is disabled in the default ViewVC configuration./ And that's all I know about it. Bob
Re: IdleAccounts
Ineiev wrote: > > > Thomas De Contes wrote: > > >> I'm a member of this project : > > >> https://savannah.nongnu.org/project/memberlist.php?group=rapid > > >> but i don't have a lot of time to code You are a member of that group. Therefore your account will never be an "unused account" by: https://savannah.gnu.org/maintenance/IdleAccounts/ > If your account is a member of any group, it isn't subject to > automatically deleting (in fact, we don't check VCS commits at all). Except for "bragging rights", if someone is not a member of any group and has not posted any bug tickets or made any comments then there is little use for having a registered account, since it is unused. If a reason for such an account arises then one can always submit a bug ticket to Savannah Administration (the Savannah Hackers support ticket project) and by that ticket submission it will also be "used" forever afterward and won't be a candidate for removal by this process. The purpose of removing unused accounts is to fight spammers and other abuse. Prior to Ineiev doing the work to implement this we had thousands of spammer accounts that were created only to post link spam. This policy of deleting unused accounts was intended to remove those spammer accounts. Bob
Re: How to deal with mailing list hit by spammers
Karl Berry wrote: > 4. To avoid seeing the flood of notifications, I suggest filtering in > your mail reader. E.g., my .procmailrc recipes (except I don't actually > send them to /dev/null; any filename is fine, relative to ~): Unfortunately Mailman itself has no way to tell it not to send the moderator notifications only to the moderator address and not also to the owner address. I really wish it had a way to do that. Filtering is a good suggestion. I do that for myself too. If filtering is not easy for you for whatever reason then I suggest changing the "owner" address to listhelper-moder...@gnu.org (listhelper-moderate AT gnu DOT org) which is the team address. That address already has filtering of all of the things Karl mentioned. Mail for the owner will go to the team address and be read by a human and responded to in those very rare cases that a person sends a message there. The owner address is there for people to write to for help in subscribing or unsubscribing from the mailing list. We are always happy when people write there as that is the right place to get help for unsubscribing. Note that changing the owner address simply avoids the mailman moderation notifications. You can still log into the mailing list web admin interface as often as you like. And still tend to any pending moderation requests. And all of that. It just redirects the endless stream of noise away from your address and over to the default where we already have filters in place to deal with it. Bob
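For anyone who wants the filtering approach, recipes along the lines Karl describes look roughly like this. A sketch: the folder name is arbitrary, and the Subject patterns match the standard Mailman 2 moderator notifications ("... post from ... requires approval" per held message, and the daily "N LIST moderator request(s) waiting" reminder).

```
# ~/.procmailrc -- file Mailman moderation noise into its own folder
# instead of the inbox (or /dev/null, though any filename is safer).
:0:
* ^Subject:.*post from .* requires approval
listadmin/

:0:
* ^Subject:.*moderator request\(s\) waiting
listadmin/
```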
Savannah Storage Backend Reboots and Maintenance
Savannah Users, Maintenance is happening immediately to the storage backend server for Savannah systems. Sorry for the short notice. The backend server is being rebooted, probably several times, while debugging a kernel problem. Mon, 11 May 2020 20:34:00 +0000 Mon, 11 May 2020 14:34:00 -0600 Bob
Re: mailman keeps holding for non-subscribers
Eric Wong wrote: > Bob Proulx wrote: > > That spam is okay. But it is the very first initial contact email > > message delay that is the showstopper? It's beyond the pale? > > The delay is one of the factors I think we will simply have to agree to disagree. However to be clear the configuration of your mailing list is up to you. > and far-off Date: headers being flagged as spam by subscribers' MTAs > is another. This is the first I have heard of any problem of messages so far delayed that anti-spam filters flag them due to the delay. Has this actually happened with any of the GNU mailing lists? Or is that a problem that is being remembered from other mailing lists? If it is a problem with the GNU mailing lists then please do make reports of any such problems. > The main factor of all the admins being potentially unavailable > for long periods of time (or permanently) is a worry of mine. That is why teams are good. That way it is more than just one person. And if someone does get abducted by aliens then another in the team can make noise that more help is needed to replace them. > Maybe the admin team here is big enough here for that not to be > a big problem. I've definitely participated in some groups > where admins disappeared for days or even weeks while mail piled > up. We all get behind at times... > > How about SMTP time greylisting? I would gather from this discussion > > so far that SMTP greylisting, which is exactly the same and creates a > > delay upon the initial contact, would also be a showstopper too then? > > Greylisting at SMTP time would also be beyond the pale? > > Fwiw, I see greylisting as a "less bad" option because messages > still eventually go through when no admins are available. I > prefer not to have greylisting at all, but it's better than > needing humans to be constantly in the loop. ... > So with that, I really don't think involving human labor at > initial contact (and potential for human error) is good at all. 
I'll just say that I disagree but thank you for making your position clear. The things you are worried about happening are not a problem with the GNU mailing lists as we have them set up now. I understand that you disagree. > > I am sorry but IMNHO it is the daily day to day operations that are > > much more important to optimize and make efficient. Because those are > > things that happen repeatedly, day after day. One time startup costs > > should not be too onerous, but may have some cost in order to have > > benefit. Like greylisting. But it is the repeated operations that I > > think should be targeted for optimization. And that is the normal day > > to day use of the mailing lists without having them filled with spam. > Interesting that you say that, especially when you rightly > admit humans also make mistakes in letting spam through, below. I don't see any conflict in the information I wrote. > Fwiw, I'm not advocating mlmmj, either; but more wondering if > Mailman and mlmmj are similar enough that it's easy to make the > existing replay script work with Mailman, as well. I did not see a script called "replay" listed in the diagram at the URL you posted. The name hints at an action, but it is not clear to me how it applies to a mailing list. Therefore I have no opinion about it since I have no idea what it does. Bob
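Since greylisting keeps coming up in this thread, it is worth noting how small the core technique is: temporarily reject the first delivery attempt for each new (IP, sender, recipient) triplet, and accept retries after a delay. A minimal sketch of that decision logic follows; the state directory and window are assumptions, and real implementations such as postgrey live inside the MTA rather than in shell.

```shell
# greylist IP FROM TO STATEDIR WINDOW
# Return 0 (accept) when this triplet was first seen at least WINDOW
# seconds ago; return 1 (tempfail, "try again later") otherwise.
greylist() {
    # Flatten the triplet into a safe filename.
    key=$(printf '%s|%s|%s' "$1" "$2" "$3" | tr -c 'A-Za-z0-9' '_')
    now=$(date +%s)
    if [ -f "$4/$key" ]; then
        first=$(cat "$4/$key")
        [ $((now - first)) -ge "$5" ] && return 0   # retry after the delay: accept
    else
        echo "$now" > "$4/$key"                     # first contact: record it
    fi
    return 1
}
```

The point of the sketch is that, exactly like moderation of first posts, the cost falls only on initial contact; a legitimate MTA retries and every later message passes straight through.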
Re: mailman keeps holding for non-subscribers
Eric Wong wrote: > Bob Proulx wrote: > > Eric Wong wrote: > > > OK, so I'm following half the recommendations > > > > > > The ones I'm going against are: > > > > > > generic_nonmember_action=hold (I want Accept) > > > default_member_moderation=yes (I want no) > > > > May I try to convince you otherwise? Because there are good reasons > > for the recommended settings. > > Not unless the maximum delay can be minutes. In other words, > similar to what greylisting gets without any human interaction. The initial contact delay is the hill being defended? On a mailing list that may have many interactions over time. You and I might be discussing some topic. Say the topic of mailing list operations. :-) We may send many messages back and forth on the mailing list. This might go on for years and years over many topics. Each of those happen fast and efficiently. And it is not the continuing problem of spam to the mailing list that is a problem. That spam is okay. But it is the very first initial contact email message delay that is the showstopper? It's beyond the pale? How about SMTP time greylisting? I would gather from this discussion so far that SMTP greylisting, which is exactly the same and creates a delay upon the initial contact, would also be a showstopper too then? Greylisting at SMTP time would also be beyond the pale? I am sorry but IMNHO it is the daily day to day operations that are much more important to optimize and make efficient. Because those are things that happen repeatedly, day after day. One time startup costs should not be too onerous, but may have some cost in order to have benefit. Like greylisting. But it is the repeated operations that I think should be targeted for optimization. And that is the normal day to day use of the mailing lists without having them filled with spam. > > > So, should I remove listhel...@gnu.org from moderators? > > > I still want automated spam filters such as SpamAssassin, though. 
> > > > The listhelper anti-spam SpamAssassin et al cancel-bot depends upon > > the hold actions. If messages do not get held then it has no ability > > to filter spam. That's fundamental to how it works with Mailman. > > That's unfortunate. I'm not familiar with Mailman, but can't > the MTA feed the message through spam filters before Mailman > ever sees it? It's interesting that you mention that. Because for years and years the frontend anti-spam was poor. Very poor. And this is not a reflection upon the current FSF staff who have inherited the present situation. But that is the traditional situation. For a very long time the frontend anti-spam has been very poor. And therefore we have been implementing the anti-spam portion mostly in the Mailman interface where it is possible for volunteers to interact with the system. There has been discussion of how to improve the frontend anti-spam. At this time the systems are getting OS upgrades. Those are dearly needed. And obviously a first step in the improvement of the system. And there have been discussion about what needs to be done to improve the frontend anti-spam. This is starting to happen. But is still going to take a while from now to be improved. As with many things life and time is what keeps everything from happening all at once. However given the flow of mail and spam there needs to be a way to train the learning engines. As we just mentioned in the previous emails in our thread. Right now Mailman provides a reasonably convenient hook location to provide that training. One that is not as easy to do without the mailing list manager. Improving the feedback location in the flow of email is something to look at doing. But there is a lot of associated work that needs to happen first before working on that aspect of the problem. > I use mlmmj for legacy mailing list subscribers, that just runs > off cron with no synchronous relationship with the MTA at all. 
> I have replay script which makes it incrementally read mail from > public-inbox (git). If we are going to start listing out mailing list management software that is better than Mailman then we had better get comfortable. It's a long list! I am not a fan of Mailman. Mailman presents a pretty low threshold. I would start with Smartlist which is very capable and scales well. Also I have long been a fan of the way ezmlm works, if only it didn't require qmail. And at one time I would have said that Enemies of Carlotta had interesting features for a mailing list. For that matter I actually like the venerable old Majordomo. One of the very active mailing lists I interact with still to this day uses Majordomo for it! But Mailman is an official GNU Project. There is a benefit to "eating your own dogfood" as the saying goes. That and due to other reasons t
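Filtering at the MTA before Mailman ever sees a message, as Eric asks about, is certainly possible; the classic Postfix arrangement pipes mail through spamc with a content_filter. A rough sketch only: the service names, user, and paths are assumptions, and this is not how the GNU relays are actually configured.

```
# /etc/postfix/master.cf (sketch): run inbound mail through
# SpamAssassin's spamc before reinjecting it for delivery.
smtp      inet  n  -  y  -  -  smtpd
    -o content_filter=spamfilter:dummy

spamfilter unix -  n  n  -  -  pipe
    flags=Rq user=spamd
    argv=/usr/bin/spamc -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}
```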
Re: mailman keeps holding for non-subscribers
Eric Wong wrote: > OK, so I'm following half the recommendations > > The ones I'm going against are: > > generic_nonmember_action=hold (I want Accept) > default_member_moderation=yes (I want no) May I try to convince you otherwise? Because there are good reasons for the recommended settings. > So, should I remove listhel...@gnu.org from moderators? > I still want automated spam filters such as SpamAssassin, though. The listhelper anti-spam SpamAssassin et al cancel-bot depends upon the hold actions. If messages do not get held then it has no ability to filter spam. That's fundamental to how it works with Mailman. > > Additionally any non-spam messages are also approved by the human > > team, and their senders either unmoderated or whitelisted. This > > results in the avoidance of spam to the mailing lists while at the > > same time avoiding delays in posting as only the initial contact is > > held for moderation. This has been necessary because spammers > > routinely subscribe and then post spam. Therefore we moderate new > > addresses as they appear. > I've found automated spam filters good enough on their own and > would like to just have those without human moderation. My experience is that even with highly tuned automation it still needs continuous training feedback in order to keep accuracy at acceptable levels. Therefore instead of avoiding giving it feedback we are giving it continuous feedback. Another task of the listhelpers is to periodically review and train-on-error the learning engines. (The learning engines include SpamAssassin's Bayes engine and the Bogofilter engine. But also specifically the CRM114 engine which does the best classification for us and has been weighted more heavily due to it being most successful.) When we run the queues we are also providing training for those engines. That way as spam is continuously changing in character the filters are also continuously being updated. 
However Mailman doesn't have a lot of built in anti-spam ability. The listhelper system is bolted onto the moderation system. Therefore it can only anti-spam the moderated messages. If the moderation is bypassed then so is the anti-spam. To improve this Mailman itself would need to be modified. > I don't want to have to whitelist anybody, it doesn't scale. Perhaps in the large it does not but we are only handling 1500+ mailing lists and all of the associated subscribers at this time. I don't know how many subscribers in total. There are 1521 mailing lists using listhelper anti-spam right now. But for small numbers such as these it works okay. But the real reason is that we are working within the limitations of Mailman. It's the only system Mailman supports. Therefore it is the system we are using. Improvements would require changes to Mailman. > > The resulting process means that as a general statement project > > mailing lists need no explicit maintenance. If you as a project > > maintainer and also a maintainer of the mailing list do nothing then > > everything happens as needed anyway. You are however free to be as > > involved in the mailing lists as you want. > > So if I'm away and unable to administer dtas-...@nongnu.org, and > generic_nonmember_action is "Hold"; does the "human team" at GNU > will eventually accept postings in my absence? Yes. Eventually usually means a few hours. If you had done nothing you would have experienced what any other poster to the mailing lists would experience. There would have been a short delay until a moderator from the listhelper team saw the message and approved it, whitelisted or unmoderated your address, and then subsequent messages would have been passed through by Mailman without delay. Sometimes there are longer or shorter delays for someone to see a message. Personally my own schedule allows me to look at the message queues several times a day on most days. But sometimes I am busy away from the keyboard for a day. 
However I am but one of the team and there are several of us who look at the queues and it is the overlapping schedules of everyone that fills in time periods. It's not organized and a bit of chaos and randomness but a new address rarely sits in the hold queue for more than half a day. And worst case one of us would get to the queues at least once a day. But most days it would be a few hours. I really should do some statistical work to plot the delay times out. It's on my list for one of these days to do. And that moderation hold is only for the initial contact. Subsequent messages are passed through without delay. Therefore the usual posters to a mailing list will see no delays. They will all converse as normal. Plus there is no need to be subscribed to post a message. There may be mechanical delays due to all of the normal reasons of email however but that is outside of the anti-spam processes. > > > The list in question is dtas-...@nongnu.org > > > > I
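For reference, the two settings under discussion can be applied non-interactively with Mailman 2's config_list tool rather than through the web admin pages. A sketch; the numeric codes are Mailman 2's action values, and the recommended (hold) settings are shown:

```
# Feed to:  bin/config_list -i settings.py LISTNAME
# Action codes: 0 = Accept, 1 = Hold, 2 = Reject, 3 = Discard
generic_nonmember_action = 1      # hold first posts from unknown senders
default_member_moderation = True  # moderate new members until cleared
```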
Savannah Outage Event Today 2019-12-09
Savannah git, svn, hg, bzr, download, Outage Event Today 2019-12-09 We had a problem today that caused all of those services to fail for a while this afternoon. Things are back online again. Sorry for the inconvenience. Everything is back working normally at this time. Please make reports if you are seeing problems. Here is the story of the events today: The trigger was me upgrading nfs1 to the latest security upgrades and rebooting it. There were three packages with upgrades pending that needed to be installed: systemd systemd-sysv libsystemd0 from Trisquel 8 versions 229-4ubuntu21.22 to 229-4ubuntu21.23. The system had been running continuously for 90 days. Which is a bit of time. Not a problem for a system. But long enough that some forgotten change might have been made to affect things on a reboot. If a system has been up for longer than even a week then as standard operating practice I always reboot it before doing any new maintenance, just out of paranoia. Then if there is a problem I know it was something previously existing and not something I was just doing. I rebooted nfs1. All normal. Then applied the upgrades listed above. Then rebooted again. On nfs1 all seemed perfectly normal. But that is when vcs0 and download0 started reporting stale nfs handle on the mount point. Nagios detected this and sent out alerts. Gack! Jump and try to understand the problem and fix it. I poked and probed the patients. Trying to mount the partition manually failed with a "mount.nfs4: Connection timed out" error. Yet running tcpdump on the clients showed them communicating. Regardless of the timeout it did not appear to be a problem with them communicating. And in the tcpdump trace nfs1 kept returning "getattr ERROR: Stale NFS file handle" errors. Strange. On the mount? Without a good explanation I think nfs1 was behaving badly. Even though it had freshly been rebooted I decided to reboot it again.
And on this nfs1 reboot then on the clients I could mount the partition. Therefore this seems to indicate that nfs1 had for some unknown reason latched into a bad state. Rebooted vcs0 so that it would be a clean start from boot. Ran through the regression suite and all of the service tests passed. During the problem event time download0 which has two mount points had one stale and one okay. Yet both are basically identical other than different names. After the reboot of nfs1 when things started working then download0 could mount it on the stale partition and everything was okay again there too. And the mount point on frontend1 was okay throughout too. Odd that some client systems had problems and others did not. That leaves us with an unresolved and perhaps not possible to resolve question of why nfs1 behaved badly. Will need continued investigation. Sometimes things like this only become understood after a longer research time. At the start of things I asked Andrew to post a status update: https://hostux.social/@fsfstatus/ That's always a good URL to bookmark so as to be able to get an out-of-band non-gnu-system update of infrastructure system status. It's on a non-gnu system in case the gnu sites are offline, so that there is a way to communicate in that event. Bob
Savannah DDoS Attack
Savannah Community, Savannah systems are getting hit by a DDoS attack. A botnet is browning out the web UIs on three of the systems. This has been going on all weekend. The botnet is hitting the web interface randomly selecting every possible URL. If you can imagine every version of every project file in every project you will know what is happening. The attack started late Friday. It is at least 10k IP addresses strong and probably a lot bigger. It's somewhat hard to tell the exact size. I know that vcs0 was hit by 45k addresses in 24 hours on Saturday but I do not know how many of those were the botnet and how many were just nice people like you and me clicking in a web browser. But that seems a likely upper end. Unfortunately we weren't previously collecting trend data on that particular statistic for vcs0 and so I don't know what is a normal daily rate. It is certainly not nearly that high, however. But at least for the future moving forward we will have this data. Things are running about 30 requests per second on just vcs0 at this moment. 5/s on vcs1 and 10/s on frontend0. And sometimes it spikes significantly higher. We are working as best we can to try to block the attack and keep the system limping along. But you know how these DDoS attacks go. If someone wants you offline then there is really no way to stop them. In the meantime I suggest using ssh:// protocol member access for all of the version control backends. Because that is not http/https it is faring better. Checkouts and commits should still be working. It's really just the web UI that is problematic. The 502 Bad Gateway for the interfaces that use it is somewhat transient in that if one retries then it will eventually succeed despite the botnet.
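The 45k-addresses figure comes from tallying distinct client addresses in the web server access log, which is a one-liner worth keeping around for trend data. A sketch; the log path is an assumption about the server layout.

```shell
# count_ips LOGFILE -- count distinct client addresses in a
# combined-format access log (the address is the first field).
count_ips() { awk '{print $1}' "$1" | sort -u | wc -l; }

# e.g.  count_ips /var/log/apache2/access.log
```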
DDoS Attack!!
The web interfaces on several of the Savannah systems are getting hit with a high volume DDoS attack. This is browning out the web interfaces. When possible members are encouraged to use SSH protocols to avoid the web interfaces which are suffering severe performance degradation and timing out. ___ Message sent via Savannah https://savannah.nongnu.org/
Re: [Savannah-users] cannot stat /var/lock/cvs/web/www: No such file or directory
Dora Scilipoti wrote: > cvs [update aborted]: cannot stat /var/lock/cvs/web/www-es: No such file or > directory Unfixed damage from this: https://lists.gnu.org/archive/html/savannah-hackers-public/2019-09/msg00011.html Sorry this wasn't noticed earlier. Will fix it. Thank you for reporting it! Bob
Re: [Savannah-users] Fwd: qemu-devel mailing list dropping certain emails consistently
Hello Bin, > This is really annoying. Please investigate what's wrong with > the mailing list service. Thanks! You are emailing the Savannah Users mailing list. I don't know if anyone here is going to be able to help you. Savannah and Savannah Users don't have much if anything to do with the mailing lists. Please send your request for help to mail...@gnu.org (since this is a Mailman thing) or to sysad...@gnu.org (since it is likely a rejection on the incoming mail relay system eggs) and ask there. Those are the places where the FSF admins who need to see this hang out. Switching roles to the Listhelper anti-spam role I looked and I do not see any discarding happening due to the anti-spam that I have control over. I don't see those missing messages appearing at all. Therefore I suspect they are getting rejected by a content filter on eggs and I have no access nor any visibility there. Bob Bin Meng wrote: > Hi, > > It was observed that the qemu-devel mailing list consistently drops > certain emails for unknown reasons. > > This was seen for below patch series emails sent to on qemu-devel and > qemu-riscv mailing lists, both of which are hosted on > savannah.nongnu.org. > > [Qemu-devel] [PATCH v2 00/28] riscv: sifive_u: Improve the emulation fid > https://lists.gnu.org/archive/html/qemu-devel/2019-08/msg00987.html > > [Qemu-devel] [PATCH v3 00/28] riscv: sifive_u: Improve the emulation fid > https://lists.gnu.org/archive/html/qemu-devel/2019-08/msg01869.html > > You can see from above archive that patch email [20/28] and [26/28] are > missing. > > These patch emails were sent using 'git send-email' via gmail's smtp > service. To troubleshoot the problem, I resent these 2 emails to > qemu-devel/qemu-riscv mailing list twice, but mailing list did not > receive them. 
To verify whether it is related to gmail itself, I > included a 'bcc' to my another private email address at the same time > when I sent to these 2 mailing lists, and my another private email > address did receive these 2 patch emails. > > Note: patchew also reported that only 26 patches were received. See below: > > v2: > https://patchew.org/QEMU/1565163924-18621-1-git-send-email-bmeng...@gmail.com/ > v3: > https://patchew.org/QEMU/1565510821-3927-1-git-send-email-bmeng...@gmail.com/ > > So the problem seems to be with the qemu-devel/qemu-riscv mailing > lists. This is really annoying. Please investigate what's wrong with > the mailing list service. Thanks! > > Regards, > Bin >
Re: [Savannah-users] Multiple GPG keys on Savannah
Asher Gordon wrote: > Ineiev writes: > > The respective part of Savannah runs Trisquel 7, and it comes with > > GnuPG 2.0 series which doesn't support ECC anyway; however, we should > > update it before 2020, and then... > > I see. It's too bad Savannah doesn't host the GnuPG git repository, > because then I could point out how ironic it is that Savannah hosts > GnuPG but still uses an old version! :-) I'll own that one. I really push for having an alive security patch process and using a software distribution package management system makes that much easier than building everything from scratch. Our Savannah systems get patches installed usually within a day of their being available from the distro security team. That covers literally millions of lines of code. That is much more than we could review and manage ourselves. We rely upon the community to help. For critical services such as gpg the visibility and importance of a security problem would be high. Every time we decide that we are going to own a bit of code for the systems then *we* must be on-the-bounce ready to react to any security issues. If someone finds a vulnerability in a project that we are owning then we need to jump and react to it. But with an entire system it is really easy not to notice an individual project needing a custom update. The terrible irony would be that a security vulnerability would get found, reported, known by the malicious, fixed upstream, and we might still be running a stale old copy that we had not realized needed to be updated if we are not paying attention and get compromised. On the other hand the daily distro package upgrade keeps things simple. It is possible to use containers to jump some bits of software forward using a different distro and that associated security stream. We do that for a few services. (Notably for git.) We might do that for GPG too. Life and time is what keeps everything from happening all at once. 
Every time we do that it also increases the complexity of the interactions. But GPG features have been causing noise so we will probably get there eventually. Bob
Re: [Savannah-users] Messed up CVS repository
Asher Gordon wrote: > Bob Proulx writes: > > Please give your project a clean checkout and see if things are as you > > would like them to be. If not let us know! :-) > > The dead files are indeed gone, but the empty directories are still > there (in ViewVC and when you checkout without -P). Oh, drat. I forgot to remove the empty directories. I have done that now. When I check out now I do not get those empty directories. Sorry. I know this might no longer be important to you now but I hate to leave things like this half done and so wanted to make it right regardless. > But I think I'll probably switch to git or something similar instead as > someone else suggested. So I guess I could just disable CVS and enable > git or similar and we can forget about the CVS repository? Yes. Exactly so. Git is the more popular revision control system these days. There is a lot of support for it. Bob
Re: [Savannah-users] Messed up CVS repository
Hello Asher, Asher Gordon wrote: > I recently created a new project using CVS > (https://savannah.nongnu.org/projects/magic-square) and I accidentally > imported my entire directory tree including files which should not be > imported (i.e. compiled and automatically generated files). I removed > these files with "cvs remove", but if I understand correctly, > directories cannot be removed with CVS. That is correct. Normally CVS is designed to record history, not to remove it. > These directories (as well as all the unnecessary dead files) are > bothering me. Would it be possible to reset the CVS repository to its > initial state so I can start over and only import what I need? Since this is a new project that has only just recently been uploaded I see no reason not to correct things manually. Normally I would suggest either writing to savannah-hackers-public and asking for assistance from there or filing a support ticket. But I am reading your message here and can do it. > Sorry for the inconvenience! I am still pretty new to CVS and version > control systems in general. No worries! We are happy to help. :-) > P.S. I know you can use the -P option to prune empty directories, but > they are still in the repository and everyone who wanted to check out > the repository would have to use -P. The directories are also visible in > ViewVC. It appeared that if I took all of the Attic directories which CVS uses to store removed files and removed them from the repository that it would accomplish what you wanted without needing to purge and re-upload everything. For safety's sake I did the remove by moving those Attic directories to a trashcan area. Please give your project a clean checkout and see if things are as you would like them to be. If not let us know! :-) Bob
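The cleanup described above amounts to relocating every Attic directory out of the repository while keeping a recoverable copy. A sketch of the idea only; the paths are hypothetical, and on Savannah this was done by hand.

```shell
# move_attics REPO TRASH -- move every CVS Attic directory into a
# trashcan area, preserving relative paths, instead of deleting.
move_attics() {
    repo=$1; trash=$2
    find "$repo" -depth -type d -name Attic | while read -r d; do
        dest=$trash/${d#"$repo"/}
        mkdir -p "$(dirname "$dest")" && mv "$d" "$dest"
    done
}

# e.g.  move_attics /srv/cvs/sources/magic-square /srv/cvs/trashcan
```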
Re: [Savannah-users] Webpage updates are lagging.
Bob Proulx wrote: > Kaz Kylheku wrote: > > In the project CVSROOT-ed at :ext:kkylh...@cvs.sv.gnu.org:/web/txr > > I committed the "txr-manpage.html" document hours ago. > > > > When I access https://www.nongnu.org/txr/txr-manpage.html > > it's still showing Version 217 from July 10 instead of > > Version 218 from July 19. > > Thanks. Several other users reported the same problem. I will look > into it. > > (JFTR but this is passing in the test matrix. So I am starting with a > head scratch...) I think things are fixed now. Things were stuck on the web server. Looking at the page you referenced: https://www.nongnu.org/txr/txr-manpage.html When I look there now it shows the date of July 19, 2019 now. The web pages are all updating and have updated now. I wrote up more details about the problem to the hackers list. Here is a reference to that message in the archive for anyone who wants all of the details of the problem. https://lists.gnu.org/archive/html/savannah-hackers-public/2019-06/msg00031.html Bob
[Savannah-users] OpenSSH DSA keys Deprecated
OpenSSH has deprecated DSA ssh keys. And therefore so has Savannah. Note that DSA keys have always been recommended against for Savannah use but were not actively blocked. If you are using a DSA ssh key it will no longer be possible to access the repositories using it. Please update your account to use an RSA or ECDSA key.
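Generating a replacement key is quick. A sketch: the helper function and file names are illustrative, not an official Savannah tool; the printed public line is what gets pasted into the SSH keys page of your Savannah account.

```shell
# make_savannah_key DIR TYPE BITS -- generate a new key pair with no
# passphrase prompt surprises, then print the public line to upload.
make_savannah_key() {
    mkdir -p "$1"
    ssh-keygen -q -t "$2" -b "$3" -N '' -f "$1/id_${2}_savannah" &&
    cat "$1/id_${2}_savannah.pub"
}

# e.g.  make_savannah_key ~/.ssh rsa 4096
#       make_savannah_key ~/.ssh ecdsa 521
```

(Using -N '' creates the key without a passphrase for brevity here; in practice a passphrase plus ssh-agent is the better habit.)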
Re: [Savannah-users] Webpage updates are lagging.
Kaz Kylheku wrote: > In the project CVSROOT-ed at :ext:kkylh...@cvs.sv.gnu.org:/web/txr > I committed the "txr-manpage.html" document hours ago. > > When I access https://www.nongnu.org/txr/txr-manpage.html > it's still showing Version 217 from July 10 instead of > Version 218 from July 19. Thanks. Several other users reported the same problem. I will look into it. (JFTR but this is passing in the test matrix. So I am starting with a head scratch...) Bob
Re: [Savannah-users] SSH not working: upgraded?
Kaz Kylheku wrote: > I have been able to confirm that DSA2 keys are now rejected. > > My RSA and ECDSA keys work. Okay good. That is the way things should be working. > This is an issue with newer OpenSSH; DSA keys have been deprecated. > > https://security.stackexchange.com/questions/112802/why-openssh-deprecated-dsa-keys I know that DSA keys have always been recommended against. https://savannah.gnu.org/maintenance/SshAccess/ And DSA hasn't been in my test matrix. Sorry but I had rather forgotten that they might even be in use. This will warrant a notice about this change. Thanks! Bob
[Savannah-users] New VM vcs1 to hold CVS data and services
Savannah Users, Today, a few minutes ago, we migrated the CVS data and services off of the vcs0+oldvcs combination and over to vcs1 as a standalone server. All appears to be working. Please report any problems that you might experience with the CVS services. For some details... Previously we had tried to migrate off of the old vcs system and onto the new vcs0+nfs1 combination where vcs0 handled all of the frontend and nfs1 handled all of the storage. This mostly worked but the "mostly" part that failed was file ACLs from the new storage server. Strangely and unfathomably they are not functioning from the new storage server. Since those were not functioning things were rolled back to the old server. But that old server still needed to go! We had to move. And soon. In order to get that done on the schedule needed, a new system vcs1 was set up to handle everything standalone, avoiding the troublesome nfs1 server that isn't behaving. Eventually the problem with nfs1 will be understood and fixed but this keeps things moving so that we can decommission the old hardware on the schedule needed. And "scaling-out" with a separate system is a good direction for things regardless. Bob
Re: [Savannah-users] Unable to add skill to job
Asher Gordon wrote: > Actually, it was respecting that header, but I did not realize what was > happening, so I thought it was just some strange feature and I added the > CC manually. Sorry about that! :P Ah! I try not to gripe about the problem. But now I am glad I mentioned it! :-) Thanks, Bob signature.asc Description: PGP signature
Re: [Savannah-users] Unable to add skill to job
Asher Gordon wrote: > Yes, your message seemed to work. Thanks for fixing it! There is also > one more strange thing: two of the messages in the archive show up as > "Message not available". Perhaps that was part of the filter problem? Yes. In the threaded view those message ids are referred to by another message, yours in this case, and therefore the archive knows they exist but they are missing from the archive. Those missing messages are my replies to you where my message was signed and was filtered out. You have them in your mailbox because you had sent a direct reply to me and therefore I returned the action with a direct reply to you. And because your message was signed my Mutt client replied with a signed message. But they are missing from the archive due to the misconfiguration which filtered out content type parts that were not in the list of text type parts. I am surprised that gnus is not respecting the Mail-Followup-To header. Since I am subscribed to the list I prefer to get replies there and set the header to direct replies there. But that is a different problem. Bob signature.asc Description: PGP signature
Re: [Savannah-users] Unable to add skill to job
rhkra...@gmail.com wrote: > Speaking from the peanut gallery, some mail lists exclude things like HTML > and > sometimes other things (inline pictures, (large?) files, ...) > > Maybe that filter was an inaccurate attempt to block HTML? You are probably right since that is in there too. However that part seems to work pretty well. Or possibly an attempt to block virus attachments which sometimes also hit the lists hard and that is also in the area of content type too. And it very likely was a slip of mine that caused the misconfiguration. This does mean that anyone who was using signed email had their message bodies filtered out and their messages lost. Sigh. Sorry. On the good side this is configured list by list. So it was only affecting this list. Not something that had global effect across all of the lists. Bob
Re: [Savannah-users] Unable to add skill to job
Asher Gordon wrote: > I tried sending another message signed with PGP/MIME and I think you got > it, but it didn't seem to go to the mailing list. There does seem to be a configuration problem with the savannah-users mailing list. Because I do not see my signed messages in the archive either. However they do go to other lists okay. I think maybe I found the problem. There was a filter enabled that deleted message parts that were not text/plain. That's certainly not good. I deleted that filter and will sign this message as a test. If it goes through okay then that was the problem. I have no idea why that filter was enabled. Bob signature.asc Description: PGP signature
[Savannah-users] Version Control System Storage Back-End Maintenance Complete
The move of the version control system back-end to the new storage array is now complete. All services are working normally.
[Savannah-users] Version Control System Downtime
Today the version control back-end data is moving from the old storage array onto the new storage array. Expect connection errors today during the transition. The data is sync'ing live now. This will be followed by blocking commit access and a final sync of the read-only data and then switching the vcs frontend over to the new storage. Blocking commit access means blocking ssh access as all commit access is through ssh access. Expect ssh access both for commit and for read access to fail for a while today. Read-only access through https and other protocols will still be active. There will be brief periods of time when connection errors to the server will be seen.
Re: [Savannah-users] Unable to upload release tarball to Download Area
Asher Gordon wrote: > Bob Proulx wrote: > > That's just normal. Many projects enable all of the checkboxes which > > includes a download area but then never do anything with them. > > I see. That makes sense. Also, I figured out how to copy symlinks and > remove files with rsync, but it's a bit inconvenient, and sshfs would > still be nice. I have one more task to complete before I can look at sshfs access. Sorry. Today I am moving the storage back-end for the vcs system from the old storage to the new. I must concentrate on that first. Then I will come back and look at user remote access to the download areas. Bob
Re: [Savannah-users] Unable to upload release tarball to Download Area
Asher Gordon wrote: > Bob Proulx wrote: > > I have gained an understanding now. I know what needs to be done to > > solve the problem. At root cause I had broken one of the cron scripts > > in moving it from system to system. It wasn't updating a timestamp > > which caused a monitoring script in the redirector to think the mirror > > was out of sync when it was not. It needs to look not at the absolute > > age but the relative age however. The reference should be the > > upstream source, which is known to the redirector, not the time now. > > And there needs to be a fallback when there are no mirrors. > > That's pretty complicated! I also noticed that many of the directories > in https://download.savannah.nongnu.org/releases/ are empty. Is that > part of the migration? That's just normal. Many projects enable all of the checkboxes which includes a download area but then never do anything with them. Bob
Re: [Savannah-users] Download mirror redirection is broken
Savannah Users, > > Unfortunately the mirror redirection for project downloads is broken > > today. > > This is back online now. Unfortunately without the mirror > redirection. This is okay for a while. Most importantly files from > the download areas are available again. And the redirector is online too. Everything should be running normally again now. > > This must be the result of the back-end storage change I made > > yesterday however I promise I thought I had run the regression tests > > after all was done and everything was working at that time. But it is > > definitely failing now. I am debugging the problem now and will get > > it back online as soon as possible. > > Actually FTR it doesn't seem to be related to the back-end change and > the regression suite had actually passed! This simply appears to be > coincidental breakage. Of which I am not quite at the root cause yet > but getting closer. It turned out to both be related and not related. I broke one of the cronjobs when migrating it off the old system to the new. It was a very simple one that did nothing but update a timestamp in the download area. Broken due to new permission requirements. Mirrors mirror everything including that timestamp file. Everything worked initially because the previous copy of that file had a new enough timestamp in it. Then as time passed the redirector kept looking at that static timestamp being freshly mirrored and decided that the mirrors were too "old" by comparing against the time now, deciding they must be out of sync, and therefore eventually dropping all of the mirrors from the mirror list. Without any mirrors in the list the redirector failed to produce a valid URL. I fixed this so that even without mirrors it will now produce a valid URL. I added the local mirror fallback so there will always be a valid mirror regardless. I fixed the timestamp updater so that it now has the permissions it needs and is updating. And with that things seem to be completely back online again. Bob
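The fix Bob describes, judging a mirror by its lag behind the upstream source rather than by wall-clock age, can be sketched roughly like this. All names and numbers are illustrative, not the actual redirector code:

```shell
# Illustrative sketch, not the real redirector: compare the mirror's
# timestamp against the upstream master's timestamp (relative age),
# not against the time "now" (absolute age).
upstream_ts=1000000   # timestamp file as written on the master
mirror_ts=999800      # the same file as seen on a mirror
max_lag=3600          # allowed lag in seconds
lag=$((upstream_ts - mirror_ts))
if [ "$lag" -le "$max_lag" ]; then
    echo "mirror in sync (lag ${lag}s)"
else
    echo "mirror stale (lag ${lag}s)"
fi
```

The absolute-age version would instead compute `$(date +%s) - mirror_ts`, which keeps growing when the master's timestamp file stops updating, eventually marking every mirror stale. That is the failure mode described above.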
Re: [Savannah-users] Unable to upload release tarball to Download Area
Asher Gordon wrote: > > I am getting pretty close to having the mirror problem understood. At > > the moment my brains are leaking out my ears from trying to get my > > mind wrapped around it. > > Sounds painful! ;-) Pain is simply weakness leaving the body. Sometimes the soul tags along with it. :-) I have gained an understanding now. I know what needs to be done to solve the problem. At root cause I had broken one of the cron scripts in moving it from system to system. It wasn't updating a timestamp which caused a monitoring script in the redirector to think the mirror was out of sync when it was not. It needs to look not at the absolute age but the relative age however. The reference should be the upstream source, which is known to the redirector, not the time now. And there needs to be a fallback when there are no mirrors. I am a huge fan of continuous improvement. :-) > I am now having problems with sftp and sshfs. See below: > $ mkdir mnt > $ sshfs asd...@dl.sv.nongnu.org:/releases/c2py mnt > Enter passphrase for key '/home/asher/.ssh/id_rsa': > remote host has disconnected > $ sftp asd...@dl.sv.nongnu.org:/releases/c2py > Enter passphrase for key '/home/asher/.ssh/id_rsa': > Connection closed > (scp still works fine) I'll need to investigate. > Are these also related to the same bug? It would be nice to use > sshfs because I could easily manage the directory as if it were > local. This may or may not be related to a previous security vulnerability that Sylvain reported and fixed a couple of years ago which forced a loss of support for some file copy protocols. As such the documentation may not have caught up to the new reality. I remember there being new restrictions on file deletion that didn't exist before. Therefore sshfs might just not be allowed through the command filter at this time. As you might imagine it isn't allowed to run arbitrary commands. 
Therefore all of the ssh commands are filtered through an sv_membersh filter before being allowed through. > Also, rsync works if I do not use `-a'. However, if I do use `-a', > it gets stuck at "sending incremental file list" (you need `-v' to > see the message). Hopefully this information will help you fix the > bug(s). That one is definitely known due to the sv_membersh filter. I will add that as a note to the rsync method in the documentation. The example shown was without options. scp release.tar.gz y...@dl.sv.nongnu.org:/releases/project/ rsync release.tar.gz y...@dl.sv.nongnu.org:/releases/project/ sftp y...@dl.sv.nongnu.org:/releases/project/ # interactive mode However sshfs is documented as working. I'll give it a try and see what methods it is trying to use for file access. > Thank you very much for your great work on Savannah! Happy to help! Bob
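The command filtering Bob mentions can be pictured with a rough sketch. This is not the actual sv_membersh source, just an illustration of an allowlist-style wrapper over SSH_ORIGINAL_COMMAND:

```shell
# Rough illustration only, not the real sv_membersh: a small allowlist
# of commands is permitted and everything else is rejected.
SSH_ORIGINAL_COMMAND="hostname"
cmd=${SSH_ORIGINAL_COMMAND%% *}   # first word of the requested command
case $cmd in
    scp|rsync|sftp-server)
        echo "would run: $SSH_ORIGINAL_COMMAND"
        ;;
    *)
        echo "You tried to execute: $SSH_ORIGINAL_COMMAND"
        echo "Sorry, you are not allowed to execute that command."
        ;;
esac
```

An allowlist like this explains both behaviors seen in this thread: unknown subcommands (such as whatever sshfs requests) are refused outright, and allowed commands can still fail if they are invoked with options the filter does not pass through.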
Re: [Savannah-users] Download mirror redirection is broken
Savannah Users, > Unfortunately the mirror redirection for project downloads is broken > today. This is back online now. Unfortunately without the mirror redirection. This is okay for a while. Most importantly files from the download areas are available again. Bob P.S. > This must be the result of the back-end storage change I made > yesterday however I promise I thought I had run the regression tests > after all was done and everything was working at that time. But it is > definitely failing now. I am debugging the problem now and will get > it back online as soon as possible. Actually FTR it doesn't seem to be related to the back-end change and the regression suite had actually passed! This simply appears to be coincidental breakage. Of which I am not quite at the root cause yet but getting closer.
Re: [Savannah-users] Unable to upload release tarball to Download Area
Asher Gordon wrote: > Bob Proulx wrote: > > In the meantime I have created that directory for you. Asher you > > should be able to upload to it at this time. Please try it again and > > let us know if it is working for you or not. > > It is working now. Thank you, Jan and Bob, for your help. Awesome! :-) Now I just have three urgent bugs related to this that I need to fix. I am getting pretty close to having the mirror problem understood. At the moment my brains are leaking out my ears from trying to get my mind wrapped around it. Almost there. Soon. Bob
Re: [Savannah-users] Unable to upload release tarball to Download Area
Hello Asher! And also welcome to Savannah. Jan Owoc wrote: > Asher Gordon wrote: > > I'm very new to Savannah, and I recently added a new project, c2py > https://savannah.nongnu.org/projects/c2py. I'm trying to upload a > release tarball to the Download Area, but it is not working. ... > If you go up a directory, you can see a list of projects that have > folders, and c2py isn't on the list. It looks like you need to make > that folder first, or ask a Savannah admin to do it for you. Thanks Jan for the help! You are correct that the directory was missing and causing the upload problem. This is due to a storage back-end change made yesterday. The short answer is that the Savannah web page UI checkbox is broken at the moment due to it. And another problem too. I am working on it. In the meantime I have created that directory for you. Asher you should be able to upload to it at this time. Please try it again and let us know if it is working for you or not. > > scp c2py-0.0.1rc5.tar.gz{,.sig} asd...@dl.sv.nongnu.org:/releases/c2py/ > > Enter passphrase for key '/home/asher/.ssh/id_rsa': > > scp: /releases/c2py/: No such file or directory > > The fact that you are getting the latter message means that everything > is correctly set up with your SSH keys. Otherwise, the error would be > something like "authentication failed". Yes! :-) > The "Download Area" link on the project page links to > https://savannah.nongnu.org/files/?group=c2py which redirects to > https://download.savannah.nongnu.org/releases/c2py/ but that page is > a 404. And that is bug number two for me to debug this morning. The mirror redirection is dropping the "/releases" part of the URL from the redirection and therefore the resulting URL is 404. This is affecting everyone. I am trying to figure it out now. In the meantime files that have been uploaded and mirrored out can be reached by selecting a mirror manually. 
But of course for new files they won't have propagated until the next mirror sync is able to copy them out to any individual mirror site. https://download.savannah.gnu.org/mirmon/savannah/ Bob
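The broken behavior Bob describes can be reconstructed as a small sketch (the mirror hostname here is made up): building the mirror URL without the "/releases" component produces the path that 404s.

```shell
# Illustrative reconstruction of the bug; mirror.example.org is made up.
mirror="https://mirror.example.org/savannah"
path="/releases/c2py/c2py-0.0.1rc5.tar.gz"
echo "broken: ${mirror}${path#/releases}"   # "/releases" stripped -> 404
echo "fixed:  ${mirror}${path}"             # full path as mirrored
```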
[Savannah-users] Download mirror redirection is broken
Savannah Users, Unfortunately the mirror redirection for project downloads is broken today. It is missing the "/releases" part from the URL at the redirection causing the resulting URL to be a 404 Not Found. The files are there but it is impossible to get to them at the moment. This must be the result of the back-end storage change I made yesterday however I promise I thought I had run the regression tests after all was done and everything was working at that time. But it is definitely failing now. I am debugging the problem now and will get it back online as soon as possible. In the meantime it is still possible to download files by selecting a mirror manually. Visit the mirmon page and select a mirror manually and one should be able to download any of the files. https://download.savannah.gnu.org/mirmon/savannah/ Bob
[Savannah-users] Storage Array Downtime Today
Savannah Users, Unfortunately there was a problem with the storage array hosting the Savannah version control repositories and the project download area. This necessitated taking the systems offline for a couple of hours today while Andrew traveled to the datacenter to repair the system. Since lists.gnu.org is also on that host it made it impossible to send out an email about the problem. All that could be done was to work through the problem, get the system online, and send out an email like this one after all was back online again. Thanks go to Andrew for doing the work and getting the systems back online. Reminder that when systems are down https://quitter.im/fsfstatus is always a good resource for communication about the current status. Bob
[Savannah-users] Version Control and Project Download Areas Offline for Urgent Maintenance
The version control system back end and the project download area back end systems are currently offline for urgent file system storage maintenance. This is an urgent issue and is getting our full attention. We will have them back online as soon as possible. Please monitor https://quitter.im/fsfstatus for FSF system status updates. Note this also affects the mailing list server, therefore the mailing lists will be unavailable for the duration of the outage.
Re: [Savannah-users] IP Address Changes
Andreas Schwab wrote: > Bob Proulx wrote: > > Please try it now and let us know if it is working now. The DNS for > > the IPv6 addresses should be propagated. > > I see IPv6 addresses for the various subdomains of > (sv|savannah).(non|)gnu.org, but not for the main domain itself. Thank you for your continued testing and reports! I had forgotten that the basename address was in another file. I updated it shortly after receiving your latest report above but then didn't reply immediately because I wanted to make sure it propagated first. The Savannah web site UI DNS should be delivering both IPv6 and IPv4 addresses now. It is in my testing. Please try it again now. Bob P.S. Unfortunately my ISP still does not yet support IPv6! Which adds a little complication to my process when trying to test proper IPv6 operation.
Re: [Savannah-users] IP Address Changes
Bob Proulx wrote: > Andreas Schwab wrote: > > Bob Proulx wrote: > > > Additionally Savannah servers now have IPv6 addresses fully supported > > > on all of the servers now. This will make Savannah accessible on IPv6 > > > only networks whereas previously it required IPv4. > > > > I'm only seeing IPv4 addresses on any Savannah server. > > Oh! Confirmed. Let me dig into the problem. Thank you for reporting it. Please try it now and let us know if it is working now. The DNS for the IPv6 addresses should be propagated. Thanks for reporting that! Bob
Re: [Savannah-users] IP Address Changes
Bob Proulx wrote: > Tomorrow, Friday, will be the day for vcs services git, svn, bzr, hg, > cvs and all of the vcs related services. Followed immediately by the > backend network file storage. To be honest the vcs side is the very > high profile side that makes me nervous. I will be happy when it is done. Whew! It's done! It has been a long two days but Savannah is now fully operating on the new subnet with new IPs all around. Late on Friday there was a problem that went unnoticed for too long: version control commit access was broken due to a user id mapping problem on the backend storage that made it read-only. That was fixed late after midnight. Sorry. Additionally Savannah servers now have IPv6 addresses fully supported on all of the servers. This will make Savannah accessible on IPv6 only networks whereas previously it required IPv4. There is still one minor item that is failing the regression suite. However things have stabilized otherwise. > Followed by whatever was forgotten up to that point! :-) If you are having problems from now moving forward please make a report. Everything should be fully functional for you on the new network. Bob
Re: [Savannah-users] IP Address Changes
Bob Proulx wrote: > Tomorrow morning US-time (Thursday) and onward through the day will be > the main part of the IP migration for the Savannah systems. This will > cause at least some disruption of Savannah services. There are > several systems with dependencies among them. They need to be > migrated together. Today was a tumultuous day! Lots of progress made. Much of the task list was worked through. Two big snags that each broke things for a while. Mostly on 'download' but it backlogged the database server and affected vcs and the web frontend too. But we pushed through them and made good forward progress. Tomorrow, Friday, will be the day for vcs services git, svn, bzr, hg, cvs and all of the vcs related services. Followed immediately by the backend network file storage. To be honest the vcs side is the very high profile side that makes me nervous. I will be happy when it is done. That will be followed by the web UI frontend. It needs an IPv6 address, to be configured to use it, and other modifications. Followed by whatever was forgotten up to that point! :-) Bob
Re: [Savannah-users] IP Address Changes
Bob Proulx wrote: > The Savannah systems along with GNU and FSF systems are changing IP addresses. > We expect to make this change with a minimum of Savannah services downtime. > However in the event of the unexpected your patience as we work through > problems is greatly appreciated! Tomorrow morning US-time (Thursday) and onward through the day will be the main part of the IP migration for the Savannah systems. This will cause at least some disruption of Savannah services. There are several systems with dependencies among them. They need to be migrated together. If you happen to be unlucky enough to notice the downtime please be patient and try again after a pause. We will be moving things as quickly as we can and will have things running on the new network configuration as soon as possible. For those using IRC there is the #savannah channel on Freenode where I will be posting real-time status updates. If by late afternoon US-time something is still not working for you then please report it to us. We also appreciate any feedback and comments. < savannah-hackers-pub...@gnu.org > Bob
[Savannah-users] IP Address Changes
The Savannah systems along with GNU and FSF systems are changing IP addresses. We expect to make this change with a minimum of Savannah services downtime. However in the event of the unexpected your patience as we work through problems is greatly appreciated! Our network provider TowardEX generously donated our bandwidth and IP addresses for many years and now we have a new donor: Hurricane Electric. We thank our donors for their support!
[Savannah-users] Savannah cgit service status
Savannah Users, Starting around 20:00 UTC the version control cgit interface came under a networked attack. Unfortunately this caused some instability problems with https service. We are doing what we can to try to mitigate the problem. ssh access to version control is not affected. If you are having version control access problems using https then please be patient, pause a moment, and try again. Bob
Re: [Savannah-users] Replacing master branch for Texinfo git repository
Gavin Smith wrote: > Is it possible for an administrator to enable this? Please send an email with the request to: savannah-hackers-pub...@gnu.org Or submit a support request ticket: https://savannah.gnu.org/support/?func=additem&group=administration Bob
Re: [Savannah-users] Your GNU packages, slib,scm, jacal and wb
John Darrington wrote: > I don't think that's what Aubrey is looking for. > > He has a redirect from his project web page pointing to an external website. > The redirect is currently wrong. I think he wants help replacing it with > the correct redirect. What I see is that the "Homepage" link URL registered for the scm project is to: http://swissnet.ai.mit.edu/~jaffer/SCM.html Which is returning a 301 Moved Permanently to: http://groups.csail.mit.edu/mac~jaffer/SCM.html Which is returning a 403 Forbidden error. That looks wrong because of the "mac~jaffer" part, which appears to be a typo in the redirect. The obvious typo-fix to remove the "mac" part: http://groups.csail.mit.edu/~jaffer/SCM.html That works. Therefore I presume that that is the desired link for the Homepage link. Does that make sense? The http://swissnet.ai.mit.edu/~jaffer/SCM.html Homepage link can be changed on Savannah to http://groups.csail.mit.edu/~jaffer/SCM.html using the process I described in the previous mail if that is desired. I think that is the simplest answer. No redirects needed at this point. If not then the 301 Moved Permanently redirect is something happening at swissnet.ai.mit.edu and there isn't anything we can do about that here. One would need to contact the swissnet.ai.mit.edu admins and ask them to fix the redirect on their site. Bob
Re: [Savannah-users] Your GNU packages, slib,scm, jacal and wb
Aubrey Jaffer wrote: > I have logged into https://savannah.gnu.org/project/admin/?group=scm and > its website link is broken, but I can't find where to change the website > link. How do I do that? Go to your project page: https://savannah.gnu.org/projects/scm/ As admin you should see a navigation link in the Administration area for "Select features". https://savannah.gnu.org/project/admin/editgroupfeatures.php?group=scm The first link in the form is for the home page link. It is enabled. Edit the rightmost column "Alternative Address" field. Then Submit. Hope that helps! Bob
[Savannah-users] DDoS attack against vcs
Hello Savannah Users, It was pointed out that I should have made some type of message previously. Apologies for not thinking of savannah-users. For the past five days starting on the 14th there has been a botnet of roughly 10,000 hosts attacking the vcs system, browsing every web accessible URL. This has caused a big stress on the system. Lots of cpu use, lots of data bandwidth consumed, but mostly lots of processes queued. Users have made reports of timeouts and 502 Bad Gateway proxy failures and other brown-outs while browsing the web or checking out project source. No hard failures. Retrying the action would usually work. But also the failure would occur again too. Unfortunately mitigating against a large botnet attacker with a large collection of systems to do the attack is quite difficult. Even very large and well funded sites have been brought down by these attacks. For us on the community funded FSF hosted Savannah these can be overpowering and there isn't much we can do about it. We took what mitigations we could. In any case I should have sent a notification here to savannah-users about the current status of the problem. Sorry for that absence of thought. Sorry this is a notice after the fact that it was a problem for the past week. The current attack seems to have subsided earlier today. For the moment everything seems to be back to normal. Let's hope that continues. Bob
Re: [Savannah-users] SSH key change seems ineffectual
Steve White wrote: > I found it. Yay! \o/ > Something went wrong when I pasted the RSA key into the field -- the > beginning and end looked the same as my key, > but in between something got pasted twice. Ah... Yes that would do it. > Sorry for the trouble. No trouble at all. Glad you were able to get things going. Bob
Re: [Savannah-users] SSH key change seems ineffectual
Hi Steve, > I am trying to do commits from a new machine. I added its SSH public > RSA key on the Savannah web page (for GNU FreeFont project), but for > hours, neither SVN nor CVS works for me. Both are asking for a > password. Looking in the logs doesn't show anything obvious. Seems like it should be working. I looked quickly at your ssh keys in the database and they looked okay. Since you say you are working from a new machine then I suspect something there. Perhaps permissions. For example, permissions must not be group writable anywhere up the directory tree, so be sure to check permissions. > Have I forgotten something? Or do key changes have to be authorized > by somebody? Things should be working. Try this as a simple test. $ ssh stevan_wh...@cvs.savannah.gnu.org hostname You should see: You tried to execute: hostname Sorry, you are not allowed to execute that command. That means things are working. But you might be seeing something else. If so and the reason isn't obvious then try: ssh -v stevan_wh...@cvs.savannah.gnu.org hostname You should see something like (this is from me): debug1: Next authentication method: publickey debug1: Offering public key: RSA SHA256:ny9SZmnnLpUeTcmzt+pVpoxdH39AntZ+8Cb33tmCLzQ /home/rwp/.ssh/id_rsa debug1: Server accepts key: pkalg ssh-rsa blen 279 If you don't then something is wrong with the public key offer. Either permissions or something. Bob
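The permissions advice above can be checked quickly. This is a hedged illustration using a throwaway directory; sshd also rejects keys when any parent directory up the tree is group-writable:

```shell
# Illustration with a throwaway directory: ~/.ssh should be mode 700
# and the private key mode 600 for sshd to accept public key auth.
d=$(mktemp -d)
mkdir -p "$d/.ssh"
chmod 700 "$d/.ssh"
touch "$d/.ssh/id_rsa"
chmod 600 "$d/.ssh/id_rsa"
stat -c '%a %n' "$d/.ssh" "$d/.ssh/id_rsa"
```

On a real account substitute `$HOME` for `$d` and also check `$HOME` itself is not group-writable.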
Re: [Savannah-users] About projects web pages
Hello Danilo,

Danilo G. Baio wrote:
> I've noticed that in FreeBSD, some ports have their web pages broken.
>
> This happens with all projects pointing to http://www.nongnu.org/%%PROJECT%% ...
> I think at least 100 ports on FreeBSD are affected with this.

First let me say that I am not one of the FSF admins. I volunteer with Savannah. So all of what I am going to say is second-hand information. I can't log in to the FSF systems and look. Read the details with caution because they might be wrong! :-) But hopefully they are useful anyway.

My first thought is that I don't know why this is being brought up here on savannah-users, since it isn't related to Savannah. What can we do about it? I might think that gnu-misc-discuss would be better. (Shrug.) I don't know. But there isn't anything the Savannah Hackers can do about it, since it is happening on systems that are not Savannah systems.

Here is what I know. Sometime last week it was reported that there was a certificate problem with the www.nongnu.org system (hosted on wildebeest.gnu.org). This was a systematic problem and affected *ALL* of the projects, not just the ones you mentioned. You weren't getting picked on in particular. It was just a fault, and everything using the system was broken.

The FSF admins started working on the problem but were unable to complete it last week. And unfortunately the debugging of the https certificate problem apparently broke the http side of things too. That appears to have been an oops. On Saturday more reports came in to Savannah about this new problem of www.nongnu.org returning 404 Not Found for everything. But since www.nongnu.org on wildebeest is not a Savannah system, there wasn't anything we could do other than redirect things over to sysadmin. Which I did, and on Saturday Ian jumped in and got the http side of things back online. Thanks, Ian! This wasn't a system he was familiar with, he's new, and the https certificate problem predated the http breakage, so things had to be left at that point for others to fix later.

Today, Sunday, I checked and saw that the redirect from http to https has been removed. I am sure this is temporary, just to avoid the broken https certificate. It's about the only workaround that can get things back working without an error, even though it drops https support for the moment. I expect https will be restored once the certificate is fixed. Things are online in a workaround state until then.

> The addresses http://www.nongnu.org/%%PROJECT%% will work again?

When I woke up this morning and looked, I saw that the above are all working again now. The only caveat is that they are (temporarily) no longer redirecting to https as they had been doing.

> Or should we just update them to http://%%PROJECT%%.nongnu.org?

When things were in the broken state, the %%PROJECT%%.nongnu.org URLs were broken too. (At least they were for me.) So that wouldn't have helped. :-(

In the future, if something like this happens, the best official answer is to please file a bug ticket with the FSF admins. Do this by sending an email to < sysadmin AT gnu DOT org >, where it will create a ticket in their RT queue. If something is particularly grave and urgent, then after filing that ticket, contacting them interactively, such as over IRC or phone or another channel, may also be appropriate based upon your judgment of the situation at the time. For something like this I would nudge them so they are aware of how widespread the problem is and that it will affect a lot of people.

Another good thing to know is that the FSF admins are governed by union and state employment rules, and therefore response time outside of weekday business hours may be substantially delayed. If something breaks over the weekend, there may not be any resources to work on it until Monday morning. Of course the weekend is when many of the rest of us are volunteering on projects, and so this creates a perfect convergence of problems existing and delays before they are addressed. This means that if you find something broken on Saturday, patience is needed until Monday. Sometimes all that can be done is to file tickets and wait. I don't see a way around this other than to recognize that it exists and has been that way for as long as I can remember.

Please do file a ticket to FSF sysadmin when you are affected by problems. If something is broken and no one says anything, then there is no reason for it to get fixed; they might not even know about it. If something affects someone and a ticket is filed, then it will eventually get addressed. On the other hand, if something big breaks, affects a lot of people, and generates a lot of tickets, then that volume tells them it is an important problem that must be prioritized to be fixed sooner. An old saying applies here: "The squeaky wheel gets the grease."
Re: [Savannah-users] SVN commit problems
Hi Steve,

> This time the commit worked.

Then I am sure this is a problem with the NFS lockd. Rotating the file out of the way causes the problem to follow the original file, and the new file works fine. I have chased this problem before elsewhere and am sure that if we were to test this by rotating the saved-off file back, the lock problem would return. This is a bug that has been around for a while.

Interestingly enough, the system was rebooted very recently for the latest security kernel upgrade. I mention that only because it isn't a case of the system having been up for ages; it was rebooted just two days ago.

> Shall I still experiment with the test project?

Not necessary now, since it is working for your project. But if it had failed there, then the test-project would have been a good comparison. I think things will be fine from here forward. But if you encounter the problem again, please let us know and we will work the problem again.

Bob
Re: [Savannah-users] SVN commit problems
Hi Steve,

Steve White wrote:
> OK I tried it again. Still not working -- it has hung:
>
> $ svn commit
> Sending...
> Transmitting file data ...done
> Committing transaction...

And that also shows a process stuck on it on the server side.

    root@vcs0:~# lsof | grep txn-current-lock
    svnserve  20468  Stevan_White  4u  REG  0,24  0  7003445  /net/vcs/srv/svn/freefont/db/txn-current-lock (vcs:/)

> And nothing for several minutes. Previously I killed the process and
> tried again later -- this is when I saw the issue with the lock on the
> server.
> I have been committing the same way pretty regularly recently and this
> thing just started happening yesterday.
> And I saw it happen on a different system, when I committed something
> from there.

Sometimes the root cause is not immediately clear. I tested against our test-project and I could commit okay. I have a suspicion and tried something. Please try it again now.

(I hate to say more before I know, but I have seen the NFS lockd get into an odd state on individual files before. The lock is tied to the inode and so tracks the physical file, not the name. Therefore I rotated that file out of the way and copied it back into place. This doesn't eliminate the problem but side-steps it. If it is one of those times again, then the problem will follow the old file and all will appear okay now. And if not, then it is something completely different. Just a diagnostic step.)

> I'll leave it like this for a while so you can look at it.

I killed that process on the server side.

> What else can I look at?

I added you to the test-project. Please check it out, make some random change to test-file1.txt or create a new file or whatever, and then commit that back in. Did that work? Do that twice or three times. Does that work? It's a test project for testing svn functionality on Savannah, so please keep it clean, but the changes aren't important otherwise. Any diddle-and-commit should test the flow.

    svn co svn+ssh://stevan_wh...@svn.savannah.nongnu.org/test-project

Bob
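The rotate-and-copy step described above can be illustrated locally. This sketch uses /tmp/demo as a stand-in path (an assumption; the real file was under /srv/svn/freefont/db/) and shows why the trick works: mv keeps the inode the stuck lock is attached to, while cp creates a brand-new inode:

```shell
# Demonstrate that mv preserves the inode while cp allocates a new one.
# /tmp/demo is a stand-in for the real repository db directory.
mkdir -p /tmp/demo
: > /tmp/demo/txn-current-lock                                   # stand-in lock file
before=$(stat -c %i /tmp/demo/txn-current-lock)
mv /tmp/demo/txn-current-lock /tmp/demo/txn-current-lock.stuck   # same inode, new name
cp /tmp/demo/txn-current-lock.stuck /tmp/demo/txn-current-lock   # fresh inode
after=$(stat -c %i /tmp/demo/txn-current-lock)
[ "$before" != "$after" ] && echo "inode changed: yes"
```

Any lock state bound to the old inode now follows the `.stuck` file, and new commits lock the fresh file instead.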
Re: [Savannah-users] SVN commit problems
Hello Steve,

Steve White wrote:
> Hi, since yesterday I've been getting errors on svn commit:
>
> Sending ...
> Transmitting file data ...done
> Committing transaction...
> svn: E37: Commit failed (details follow):
> svn: E37: Can't get exclusive lock on file
> '/srv/svn/freefont/db/txn-current-lock': No locks available
> ...

There appears to be a process of yours still running attached to that file.

> It doesn't happen every time, but for some hours now I've been unable
> to commit anything.

It seems that there are two processes hanging around attached to that file. I can only guess that some networking glitch between your client and the server has left some processes unclosed. They should time out eventually, but the timeout is long to accommodate people working on slow connections.

> This appears to be a server issue. Please have a look at it.

I have killed off the two processes I saw of yours that were attached to that file. That should release the locks, and you should be able to get a new lock now. But there is nothing that would prevent this from happening again. If it happened once, then it could happen again. Please let us know if it does and we can deal with it again then.

Bob
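The server-side cleanup described above boils down to finding which processes still hold the lock file open and ending them. A hedged local sketch, with /tmp/lockdemo standing in for the real /srv/svn/&lt;project&gt;/db/txn-current-lock path:

```shell
# Simulate a stuck holder of a file, then release the file by killing it.
# /tmp/lockdemo is a stand-in path (assumption).
LOCKFILE=/tmp/lockdemo
: > "$LOCKFILE"
tail -f "$LOCKFILE" >/dev/null 2>&1 &    # background "stuck" process holding the fd
holder=$!
# On a real server you would identify holders with:  lsof "$LOCKFILE"
kill "$holder"                           # end the holder; the kernel drops its locks
wait "$holder" 2>/dev/null || true       # reap it so no zombie lingers
echo "holder $holder gone"
```

Killing the holding process is what releases the advisory lock; the file itself does not need to be removed.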
Re: [Savannah-users] DMIdecode
Hello Tai,

Tai Vo wrote:
> Hi Savanah,

You have reached Savannah Users, but I think you meant to write to the dmidecode-devel mailing list instead.

> I sending email regarding the issue with DMI-decode type 17.
>
> Memory Manufacture: Undefine.
>
> Can you fix it to include Unigen JEDEC ID code is 01 CE.

We are the users of Savannah, the software forge for free software. But I think you want to contact the DMIdecode project developers. Please write to them directly. Here is a link to their mailing list:

    https://lists.nongnu.org/mailman/listinfo/dmidecode-devel

That is all that I know about it. More general information can be found on their web page:

    http://www.nongnu.org/dmidecode/

Good luck! :-)

Bob
Re: [Savannah-users] Unable to cvs checkout team's homepage
Rafael Fontenelle wrote:
> Assaf Gordon wrote:
> > I just tried this command on two different computers, different networks
> > and it worked fine.

I just tried it from Colorado and it worked okay for me too. The files are small at only 1.6M, so this would not be too big. When I ran it here the checkout was very fast, taking only 2.5 seconds! Wow!

    cvs -z3 -d:ext:@cvs.savannah.gnu.org:/webcvs/www-pt-br co www-pt-br

> > I'm guessing this was a temporary error on the savannah server (it
> > happens sometimes, especially if it gets overloaded with many people
> > trying to checkout repositories at the same time).

Or something in between. This doesn't feel like a Savannah problem to me yet. It feels like something in between.

> > May I ask you to try again ?
>
> Just tried again, same result as yesterday, in two different
> residential networks (therefore no special proxy/firewall
> implementations).
>
> I don't have another computer to test at the moment, but I am able to
> establish an SSH connection to cvs.savannah.gnu.org (action
> denied, but connection was OK) and also able to cvs-checkout
> anonymously. Could there be any configuration missing?

Since cvs is a high-use server at Savannah, I think if it weren't working generally then we would be hearing many problem reports. Since we aren't, and it appears to be working, I think we need to focus on your side of the connection.

I have a couple of different experiments to try. One is to try the checkout using the anonymous pserver side of cvs, to prove that the path between is okay and to eliminate ssh from the problem. It can't be used to check in, but it will get a full copy of the code, the same as using ssh.

    cvs -z3 -d:pserver:anonym...@cvs.savannah.gnu.org:/webcvs/www-pt-br co www-pt-br

I expect that to work. It should work. It works for me and others. But will it work for you on your network? If it does, then I would focus on the ssh side of things.

There are two configuration files typically in use when cvs calls ssh and it connects. One is the per-user file ~/.ssh/config and the other is the system /etc/ssh/ssh_config file. At this point I am most suspicious of something in one of those files causing problems.

Do you have CVS_RSH or any other CVS_* variable set in the environment? cvs traditionally defaults to using rsh, but on most systems today rsh is a symlinked alternative to ssh. This depends upon your system, but if it is using the real rsh then that can't work, as only ssh connections are allowed. If you have CVS_RSH set to something that isn't happy, then that might be the problem. Or on your system you might need to set 'export CVS_RSH=ssh' in order to force cvs to use ssh. Please try this:

    env CVS_RSH=ssh cvs -z3 -d:ext:rafael...@cvs.savannah.gnu.org:/webcvs/www-pt-br co www-pt-br

Bob
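The CVS_RSH mechanism described above is easy to sanity-check in isolation. A minimal sketch (the checkout line is kept as a comment, with the username elided just as in the thread):

```shell
# Force cvs to tunnel :ext: access over ssh instead of legacy rsh.
# CVS_RSH is only consulted for :ext: access; :pserver: never touches ssh.
export CVS_RSH=ssh
# A real checkout would then look like (username elided, as in the thread):
#   cvs -z3 -d:ext:USER@cvs.savannah.gnu.org:/webcvs/www-pt-br co www-pt-br
echo "CVS_RSH=$CVS_RSH"
```

Setting it in the shell's startup file (e.g. ~/.profile) makes the choice permanent, which avoids surprises when a system's rsh is not actually a symlink to ssh.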
Re: [Savannah-users] Howto setup multiple git repos per project?
savannah-us...@trodman.com wrote:
> I plan to convert my project from svn to git.
>
> What are the initial steps required to enable multiple
> git repos for my single project? Each will be fairly small.

It is easy, but it requires an admin to create the additional directories. Send an email to savannah-hackers-pub...@gnu.org or file a support ticket at

    https://savannah.gnu.org/support/?func=additem&group=administration

asking for the additional repository to be created.

Bob
Re: [Savannah-users] SSL cert for git0.savannah.gnu.org: wrong host
Marcus Müller wrote:
> https://git0.savannah.gnu.org is unusable at the moment, since the SSL
> certificate is for bzr.savannah.gnu.org; noticed that when trying to
> clone the autoconf repo.

You have a typo in your URL. You are using git0.savannah.gnu.org, but that is the underlying node hostname. You should be using the virtual name git.savannah.gnu.org, without the "0".

    https://savannah.gnu.org/git/?group=autoconf

Where did you see git0.savannah.gnu.org documented, so that this may be corrected?

> See openssl output below: ...
> Could someone please fix that by getting a Let's Encrypt cert for the
> actual git0 subdomain?

Regardless of the typo, we appreciate the reports. :-) BTW, we are already using Let's Encrypt certificates for all of the site certificates.

Thanks,
Bob
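A quick way to see which hostname a certificate actually covers is to read its subject with openssl. The sketch below generates a throwaway self-signed certificate so it runs offline; against the live server you would instead pipe `openssl s_client -connect git.savannah.gnu.org:443 </dev/null` into `openssl x509 -noout -subject`. The /tmp paths are arbitrary stand-ins:

```shell
# Offline sketch: create a throwaway cert for the corrected hostname and
# read back which CN it covers. /tmp paths are arbitrary stand-ins.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=git.savannah.gnu.org" \
    -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
openssl x509 -in /tmp/demo.crt -noout -subject
```

Running the live-server variant against git0.savannah.gnu.org versus git.savannah.gnu.org would have shown immediately why only the latter name matches the certificate.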
Re: [Savannah-users] Mailing list for Translation Project team
Hello Joonas,

Joonas Kylmälä wrote:
> would savannah non-gnu be an appropriate place to host mailing list for
> a Translation Project [1] translation team that does free software
> translations? It would be used for reviewing the translations,
> discussing translation topics, etc.

The Savannah group doesn't manage the mailing lists. Those live at lists.{non,}gnu.org rather than savannah.{non,}gnu.org. There is overlap among the people doing administration, but the control infrastructure is quite separate. Please make your request to the mail...@gnu.org mailing list.

Thanks,
Bob
Re: [Savannah-users] Password cannot be reseted before confirmation
Joonas Kylmälä wrote:
> I just tried to create a savannah.nongnu.org user account. I got to the
> point where I opened the link for account confirmation but it said
> "Invalid password". So I thought maybe I mistyped it and tried to use
> the "Lost your password?" functionality, but it turns out my account
> does not yet exist in the system (the message I got was "Invalid User: This
> account does not exist or has not been activated"). How can I get the
> account activated now?

What is the account name?

There is a "hang state" that should be fixed in the web logic. If an account is in the middle between being created and being confirmed, then there is no way for the user to do anything with it. The original confirmation email link is the only way to activate it, and if that link is lost or mangled then there currently isn't a way to trigger another confirmation. Filing a help request is the way to move past the problem.

Bob
Re: [Savannah-users] Savannah VMs rebooting for kernel upgrades
Bob Proulx wrote:
> Question for the mailing list users: Is this the type of information
> that you find useful to be posted to the savannah-users mailing list?

Rather than many individual replies, I am going to make one combined reply here to the several responses.

Gordon Haverland wrote:
> I am not yet doing much in the way of VMs myself, but I have friends
> that ask me about VMs (since I know a lot more about computers in
> general). Reading things like this (or like this with more detail)
> give me more information as to where to look for answers for my
> friends.
>
> It is likely this is atypical.

Probably there are many people who would like to learn more about VMs (virtual machines). But we would probably run off other savannah-users who would like to keep the list focused only on those issues. Please write me directly off-list about VMs in general and we can discuss further.

Eric Wong wrote:
> Yes. Any updates like this one about downtime and/or upgrades are
> greatly appreciated. In my experience, even "transparent" upgrades
> can cause user-visible behavior changes, so I appreciate being
> informed. Thank you.

I will say "Good!" to this. Reboots kill any actively running process at the moment the reboot occurs. This means that some people cloning a git repository might see an error at that moment. Mostly they might just try again, but a continuous integration system cloning might log failures in that cycle. Also, someone looking at the web page right at those critical moments might see a mangled display. Hopefully they will wait a moment and reload the page. That type of thing.

"André Z. D. A." wrote:
> I have no opinion for that question because I do not use some things that
> happens around or that Savannah has. What these machines do?

This could actually be a very long and involved answer, which is why I saved it for the end of the email, so that I could leave the message with a list of things that are affected by a reboot. But first let me give a short, extremely abbreviated summary. Really!

There is one system that is a dedicated database server, currently running MySQL. It holds all of the account information for all of the subscribers. Upgrading MySQL or rebooting this system makes account data unavailable during the upgrade and reboot, perhaps 1-2 minutes of downtime depending upon the upgrade. If account data is not available, then no one can authenticate with the version control system, and the web site is not fully functional.

There is the main web server frontend, one of three web servers in the collection. It hosts what most people probably visualize when they think of Savannah. But really the web interface is only a control panel to add and remove people and features from projects; all of the real work is done elsewhere. If the web server is rebooting and unavailable, then no one can add or remove features or people from projects, no one can post new project news items, and no one can look at the documentation there. It depends upon the database server for both web data and account data. Perhaps 1-2 minutes of downtime while the system reboots.

There is a version control server which hosts all of the various version control systems. Git, svn, cvs, bzr, and hg all run on this system. If it is rebooted, then all active processes are killed. Someone checking out a repository, or a continuous integration system doing so, will see a failure when their running process is killed. While the system is rebooting, none of the version control systems are available; connection attempts will fail with connection errors until it comes back online. Additionally, the version control server depends upon the database server for account information. Perhaps 1-2 minutes of downtime while the system reboots.

There is a project file download server. Project files are stored there and made available for download. The database server must be online for account information to be available to authenticate project members uploading files there. Perhaps 1-2 minutes of downtime while the system reboots.

Additionally, there are two legacy data file servers. These are network file systems mounted on the systems described above. They are operated on a different upgrade cycle and are not rebooted as often, and they are firewalled off from the hostile Internet. As currently provisioned, rebooting them may require file system checks. They hold a large amount of data, so the checks may run for an hour before the file servers are back online. This is an area of ongoing improvement.

That is the abbreviated summary of the system. Real
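Since a reboot kills in-flight clones and checkouts, a continuous integration job can wrap its fetch in a small retry loop to ride out the 1-2 minute windows described above. A hedged sketch; here `true`/`false` stand in for the real command (for example `git clone URL`):

```shell
# retry N CMD ARGS...: run CMD up to N times, pausing between attempts.
# Intended for transient failures such as a server mid-reboot.
retry() {
    tries=$1; shift
    n=1
    while ! "$@"; do
        [ "$n" -ge "$tries" ] && return 1   # exhausted the attempts
        n=$((n + 1))
        sleep 1                             # brief pause before retrying
    done
    return 0
}

retry 3 true  && echo "succeeded"
retry 2 false || echo "gave up after retries"
```

In a real job the sleep would be longer (or back off exponentially) so that a reboot window has time to pass before the next attempt.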
Re: [Savannah-users] Savannah VMs rebooted for latest security kernel
Bob Proulx wrote:
> Currently the two supported access protocols for 'download' are scp
> and rsync. I recommend using rsync.

I am embarrassed to admit that I didn't test this latest one before sending it. :-(

> Try this for a listing:
>
>     rsync jkraehem...@dl.sv.nongnu.org:/releases/gsequencer/stable/

That works okay.

> Try this for an upload:
>
>     rsync -av FILE.TO.UPLOAD jkraehem...@dl.sv.nongnu.org:/releases/gsequencer/stable/

This is having a failure. There is a permissions problem. I wish I had tested it before sending it out. :-( I am looking at this problem and hopefully we will get it figured out soon. Please stand by.

Bob