Re: Best file system to use?
Hi Dennis, On Wed, Feb 12, 2020 at 05:55:52PM -0600, Dennis Wicks wrote: > I have 4TB running on an AMD Ryzen under Buster. What is the current > consensus of the best file system to use for general data usage? If your 4TB isn't composed of at least one more drive for redundancy then for me all questions of which filesystem to use would be moot. Storage is fairly cheap compared to the hassle of having to eat the downtime and restore from backup, when a non-redundant drive dies. With redundancy sorted out, ZFS is probably technically the best filesystem but is perhaps complicated, slightly inflexible and with other disadvantages related to being developed and shipped externally to the kernel. If that puts you off of ZFS, (ext4 or XFS)-on-LVM-on-mdraid are fine choices, just accept that bitrot can happen. I do not recommend btrfs and anyone who does should have a look at the linux-btrfs mailing list to see how many cases of data loss and loss of availability people have reported this month. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: apache2 virtual host
Hi Russell, On Mon, Feb 03, 2020 at 07:11:21AM +, Russell L. Harris wrote: > On Mon, Feb 03, 2020 at 07:05:11AM +0000, Andy Smith wrote: > >(do these as root, since that seems to be how you are working) > > > ># which a2ensite > ># ls -la /usr/sbin/a2ensite > > root@penelope:/etc/apache2/sites-available# which a2ensite So a2ensite is either not present or not in your path… > root@penelope:/etc/apache2/sites-available# ls -la /usr/sbin/a2ensite > lrwxrwxrwx 1 root root 7 Oct 15 19:53 /usr/sbin/a2ensite -> a2enmod So a2ensite exists as a symlink to a2enmod. I'm guessing that however you became root ("su" instead of "su -" perhaps?) didn't leave you with /usr/sbin in your path. Does calling it as:

# /usr/sbin/a2ensite

… work? If not, what is the output of:

# ls -la /usr/sbin/a2enmod

Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: apache2 virtual host
Hi Russell, On Mon, Feb 03, 2020 at 06:59:55AM +, Russell L. Harris wrote: > I receive the following error message when attempting to enable a > virtual host (apache2 in Debian 10): > >root@penelope:/etc/apache2/sites-available# a2ensite domain1.com.conf >bash: a2ensite: command not found Your shell is failing to find the "a2ensite" command. Normally it is found at /usr/sbin/a2ensite and is part of the apache2 package. What do the following commands say? (do these as root, since that seems to be how you are working)

# which a2ensite
# ls -la /usr/sbin/a2ensite

Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
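[Editor's note: the "is /usr/sbin in root's PATH?" question that this thread turns on can be checked mechanically. A minimal sketch in shell — `path_has` is a hypothetical helper, and the two PATH strings are just typical examples of what a plain "su" versus a login shell ("su -") might leave you with, not what any given system will have:]

```shell
# path_has PATH DIR — hypothetical helper: succeeds if DIR is a
# component of the colon-separated PATH string in the first argument.
path_has() {
    case ":$1:" in
        *":$2:"*) return 0 ;;
        *)        return 1 ;;
    esac
}

# A PATH like the one a plain "su" often leaves you with:
path_has "/usr/local/bin:/usr/bin:/bin" /usr/sbin \
    || echo "a2ensite will not be found"

# A PATH like the one "su -" (a login shell) typically sets up for root:
path_has "/usr/sbin:/usr/local/bin:/usr/bin:/bin" /usr/sbin \
    && echo "a2ensite will be found"
```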
Re: Debian on server
Hi, On Tue, Jan 21, 2020 at 03:41:13PM +0100, Alessandro Baggi wrote: > So, I know this is a debian list and could be obtain biased opinion but what > are better point to use debian on a server instead of CentOS? Really I think you are best off using whichever one you have the most experience with - unless you're looking for a learning experience anyway. Debian has far more packages in its repositories than CentOS does, even if you add the unofficial ELRepo. CentOS has a longer support lifetime than Debian, though in practice I find that some other combination of required software (or desired new features of required software) will force an upgrade before the strict end of life of a given CentOS release. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: apple mini
Hello, On Thu, Jan 09, 2020 at 12:11:54PM +1300, Ben Caradoc-Davies wrote: > If you need to protect against an attacker willing to examine your HDD with > magnetic force microscopy, there is no substitute for physical destruction > of the media. Even then it's unnecessary! No one has ever recovered usable data from a modern (less than 15 years old) used HDD after a single pass of writes. A study was done with 2006-era drives and magnetic force microscopy (MFM) between 2006 and 2008: https://www.vidarholen.net/~vidar/overwriting_hard_drive_data.pdf "4 Conclusion The purpose of this paper was a categorical settlement to the controversy surrounding the misconceptions involving the belief that data can be recovered following a wipe procedure. This study has demonstrated that correctly wiped data cannot reasonably be retrieved even if it is of a small size or found only over small parts of the hard drive. Not even with the use of a MFM or other known methods. The belief that a tool can be developed to retrieve gigabytes or terabytes of information from a wiped drive is in error. Although there is a good chance of recovery for any individual bit from a drive, the chances of recovery of any amount of data from a drive using an electron microscope are negligible. Even speculating on the possible recovery of an old drive, there is no likelihood that any data would be recoverable from the drive. The forensic recovery of data using electron microscopy is infeasible. This was true both on old drives and has become more difficult over time. Further, there is a need for the data to have been written and then wiped on a raw unused drive for there to be any hope of any level of recovery even at the bit level, which does not reflect real situations. It is unlikely that a recovered drive will have not been used for a period of time and the interaction of defragmentation, file copies and general use that overwrites data areas negates any chance of data recovery. 
The fallacy that data can be forensically recovered using an electron microscope or related means needs to be put to rest." So, for the main data areas of the HDD, one pass of writes is always enough and anything more is just a meaningless ritual. Some will argue that a better-funded attacker may somehow have better microscopes even to the point that they have technological breakthroughs not known to the wider world. However, the paper also makes clear that the limit is not the sensitivity of the microscope, but the fact that any drive that has been in use for a while has too much noise for the data immediately prior to the wipe to be distinguishable from that. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
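[Editor's note: the single pass of writes discussed above is trivially cheap to do. The sketch below demonstrates it safely on a scratch file rather than a real disk; for an actual drive you would point `of=` at the device node (e.g. /dev/sdX — a placeholder here) after triple-checking the name, since the operation is irreversible:]

```shell
# Safe illustration of a single-pass overwrite, using a scratch file
# instead of a real block device.
f=$(mktemp)
head -c 1048576 /dev/urandom > "$f"     # simulate 1 MiB of old data
# One pass of zeros over the whole file (conv=notrunc keeps the size):
dd if=/dev/zero of="$f" bs=1048576 count=1 conv=notrunc status=none
# Count any surviving non-zero bytes:
nonzero=$(tr -d '\0' < "$f" | wc -c)
echo "non-zero bytes left: $nonzero"    # prints: non-zero bytes left: 0
rm -f "$f"
```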
Re: Is there any tool in debian which helps us find what cdn does a website use ?
Hello, On Sat, Jan 04, 2020 at 10:50:54AM +, shirish शिरीष wrote: > I am interested to know if there is any other tool besides dig and > delv, something like perhaps cdnfinder [6] which will make it more > interesting to find things ? Can't you just traceroute or mtr to the site in question to see whose network it traverses last? Or do you need to programmatically discover the CDN company? If so then looking up the Autonomous System Number (ASN) of the IP address might be enlightening, e.g.:

$ host bbc.com
bbc.com has address 151.101.0.81
$ dig +short 81.0.101.151.origin.asn.cymru.com TXT   # note reversed IP
"54113 | 151.101.0.0/22 | US | arin | 2016-02-01"
$ whois as54113
ASNumber:       54113
ASName:         FASTLY
ASHandle:       AS54113

(I omitted some uninteresting lines of output) Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
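[Editor's note: if this lookup needed scripting, the octet reversal for the Team Cymru origin.asn.cymru.com zone could be done with a tiny helper like the sketch below — `reverse_ip` is a made-up name, and it handles IPv4 dotted quads only (IPv6 reversal for Cymru's zone works differently):]

```shell
# Hypothetical helper: reverse the octets of an IPv4 address, producing
# the form expected by the origin.asn.cymru.com TXT lookup zone.
reverse_ip() {
    echo "$1" | awk -F. '{ print $4 "." $3 "." $2 "." $1 }'
}

reverse_ip 151.101.0.81   # prints 81.0.101.151
```

The result would then be queried as, for example, `dig +short "$(reverse_ip 151.101.0.81)".origin.asn.cymru.com TXT`.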
Re: realtime kernel on ARM hardware
Hello, On Mon, Dec 30, 2019 at 05:05:07PM -0500, Gene Heskett wrote: > > Quoting Gene Heskett (2019-12-30 21:00:55) > > > > > If debian was serious about supporting the "arm's" that would have > > > been fixed several years ago by moving that list and its contents to > > > "debian-arm-devel", and instituting a new "debian-arm-users" list. […] > I detect a smidgeon of tongue in cheek, ;-) but I think it would also > help by drawing in those that do have experience in that hdwe. Have you tried requesting such a list? https://www.debian.org/MailingLists/HOWTO_start_list You seem convinced it will help, so why not give it a go? Debian is entirely run by volunteers. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: End-user support in 'real world' -- was [Re: A cache problem of some sort?]
On Sat, Nov 30, 2019 at 12:52:45PM -0600, Richard Owlett wrote: > I *sympathize* with my contact. > > I *HAVE BEEN* customer service for ~half-century. > > *PUT UP* or . I also sympathise with the customer service person, which is why I urged you not to hand them links to things that are very likely to be confusing and beyond their remit. None of this is useful information for the first line, and any tier of support after that should know their own system enough that it's redundant. I did debate whether your history of arguing with people who attempt to help you made it worth chipping in, and I see I made the wrong choice. Apparently since I am not quite 50 years old so lack your half century in customer service you would prefer that I shut up; fair enough. OK Richard, as usual you know best, have fun passing links to debian-user to customer service people. Regards, Andy
Re: A cache problem of some sort?
Hello, On Sat, Nov 30, 2019 at 08:36:19AM -0600, Richard Owlett wrote: > Thank you. Monday I'll make a followup report referencing this thread. I don't think that the person you are corresponding with will be technical as such; they'll have less knowledge of HTTP, caches, last-modified, Debian and Linux than you do and mentioning any of this and/or directing them here I think will likely only confuse them. I would just say that I checked again more than a week after they advised it had been updated and it still reads the same, as confirmed by several other people around the country and indeed the world. Perhaps ask that they look at the page themselves to confirm. That should be enough for any person to understand that the change hasn't actually happened (if not why). If they do have to pass it on to someone more technical, that person is already going to know more about the actual architecture of their systems and likely isn't going to need a pointer to a debian-users thread! Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Little problem with rights on a script
On Sun, Nov 17, 2019 at 09:57:39AM -0500, Gene Heskett wrote: > You may have to resort to similar measures. Hopefully though, most people asking questions here are more willing to read documentation and accept advice, and so will end up with more sensible solutions. Regards, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting Please consider the environment before reading this e-mail. — John Levine
Re: Thought regarding NGINX and Debian
On Sat, Nov 09, 2019 at 01:20:40PM -0500, Gene Heskett wrote: > On Saturday 09 November 2019 10:07:43 Andy Smith wrote: > > On Fri, Nov 08, 2019 at 10:55:33PM -0500, Gene Heskett wrote: > > > unforch, reinstalling apache2 is not a workable situation because it > > > was built for the repos w/o libwrappers support. Dumb and forces me > > > to run iptables to block the bots that are DDOSing my site. > > > > This is a really odd conclusion. Apache has a very rich syntax for > > authentication and authorization that makes protecting it with > > tcpwrappers rather pointless. > > > Then, if thats the case, why has no one attempted to teach me how to do > all this iptables stuffs within apache2? Millennials these days, so entitled!
Re: fail2ban for apache2
Hello, On Sat, Nov 09, 2019 at 01:34:11PM -0500, Gene Heskett wrote: > On Saturday 09 November 2019 10:10:53 Andy Smith wrote: > > You've repeatedly been advised to block these bots in Apache by > > their UserAgent. Have you tried that yet? It would be a lot simpler > > than fail2ban or trying to keep up with their IP addresses. > > > Maybe, but semrush has a variation in the user agent spelling that makes > a block of xx.xx.xx.xx/24 more effective. Really?

$ cat /var/log/apache2/access.log{,.1} | awk -F '[()]' 'tolower($0) ~ /semrush/ { print $2 }' | sort | uniq -c | sort -rn
     95 compatible; SemrushBot/6~bl; +http://www.semrush.com/bot.html
     80 compatible; SemrushBot/3~bl; +http://www.semrush.com/bot.html
     29 compatible; SemrushBot-BA; +http://www.semrush.com/bot.html

I'll suggest once more just blocking UserAgents that match "SemrushBot" but I realise I am just howling into the void. Regards, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
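[Editor's note: for reference, the kind of UserAgent block being suggested might look like the sketch below on Apache 2.4. This is untested and the placement (a vhost or a global .conf snippet) is left to the reader; the case-insensitive "SemrushBot" match covers all of the variants shown in the log extract above:]

```
# Apache 2.4 sketch: deny requests whose User-Agent contains "SemrushBot".
SetEnvIfNoCase User-Agent "SemrushBot" bad_bot

<Location "/">
    <RequireAll>
        Require all granted
        Require not env bad_bot
    </RequireAll>
</Location>
```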
Re: fail2ban for apache2
Hello, On Sat, Nov 09, 2019 at 08:43:25AM -0500, Gene Heskett wrote: > I've done that with the help of a previous responder and now have 99% of > the pigs that ignore my robots.txt blocked. semrush is extremely > determined and has switched to a 4th address I've not seen before, but > is no longer DDOSing my site. You've repeatedly been advised to block these bots in Apache by their UserAgent. Have you tried that yet? It would be a lot simpler than fail2ban or trying to keep up with their IP addresses. Regards, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Thought regarding NGINX and Debian
Hello, On Fri, Nov 08, 2019 at 10:55:33PM -0500, Gene Heskett wrote: > unforch, reinstalling apache2 is not a workable situation because it was > built for the repos w/o libwrappers support. Dumb and forces me to run > iptables to block the bots that are DDOSing my site. This is a really odd conclusion. Apache has a very rich syntax for authentication and authorization that makes protecting it with tcpwrappers rather pointless. Just because you only have a hammer, doesn't mean that every piece of software is a nail. Regards, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Stopping webcrawlers.
Hi Gene, On Sun, Nov 03, 2019 at 11:40:23AM -0500, Gene Heskett wrote: > I just installed fail2ban but setting it up looks daunting. Looking for a > tut. Yes, that could be quite involved. Fail2Ban parses logs, so you'd first have to decide what constitutes logging of an unwanted condition (or make sure that such a condition is logged). Your difficulty there is probably that any given log line by itself is innocuous, it is the repeated number of requests for large content that is problematic. So, one way to go could be to use Fail2Ban with a really high incidence count like say, 100 requests (access log lines) in a day per IP. Still that only counts requests, not bytes. > Ideally, I'd like to steer such stuff thru a module that would limit them > to 10% of the available bandwidth. 35 kilobaud I could tolerate, 350kb > is a DDOS to be dealt with when it never ends. I've never used it but this looks simple and is bundled with Apache: https://httpd.apache.org/docs/2.4/mod/mod_ratelimit.html Idea being you'd use a Location match for your big files and set an appropriate limit for those directories. Take heed of the warning that it's applied to each request not to each IP. So, presumably, a given IP could request the same thing 5 times simultaneously and each request would get the limit. Again I've never used it, but this is packaged as libapache2-mod-qos and looks like it would work on a per-IP basis for number of requests and bandwidth: http://mod-qos.sourceforge.net/ I've only ever used mod_cband but it looks like that is abandonware now and was never updated for Apache 2.4.x. Before any of these though I would be blocking by robots.txt and UserAgent. Maybe that is enough and you don't need to do anything else. If you are serving large static files you may also want to put a CDN in front of your site. 
Here's some free options: https://geekflare.com/free-cdn-list/ Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
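[Editor's note: a sketch of what the mod_ratelimit approach mentioned earlier in this message could look like. This is untested; "/bigfiles" and the 400 KiB/s figure are placeholders, and as the message warns, the limit applies per request, not per client IP:]

```
# Apache 2.4 sketch using the bundled mod_ratelimit module.
<IfModule mod_ratelimit.c>
    <Location "/bigfiles">
        SetOutputFilter RATE_LIMIT
        SetEnv rate-limit 400
    </Location>
</IfModule>
```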
Re: Stopping webcrawlers.
Hello, On Sun, Nov 03, 2019 at 10:04:46AM -0500, Gene Heskett wrote: > I am developing a list of broken webcrawlers who are repeatedly > downloading my entire web site including the hidden stuff. […] > How do I get their attention to stop the DDOS? Or is this a war you > cannot win? Hosting a public web site on a domestic broadband connection with low data transfer allowance isn't the best way to go in 2019, but you can have some success with an escalation of: 1. robots.txt 2. UserAgent banning 3. Fail2Ban and/or apache modules for per-IP quota or requests and bytes. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
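[Editor's note: step 1 of the escalation above could start as simply as a robots.txt along these lines — the Disallow paths are placeholders, and of course only well-behaved crawlers honour the file at all, which is why steps 2 and 3 exist:]

```
# robots.txt sketch: shut out one named bot entirely, and keep
# everything else away from non-public areas.
User-agent: SemrushBot
Disallow: /

User-agent: *
Disallow: /hidden/
```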
Is it a bug that the iptable_filter module isn't loaded automatically with Debian 10 iptables-nft?
Hi, I noticed a few hours ago that a particular piece of firewall management software wasn't working correctly with my Debian 10 hosts. After quite a lot of investigation I worked out that the software in question was looking at the content of /proc/net/ip_tables_names to determine the names of the tables that are currently active ('filter', 'mangle', etc). On my Debian 10 hosts, this file is empty even though they have active rules loaded by iptables. I then noticed that on my Debian 9 hosts, the modules iptable_filter and ip6table_filter are loaded as soon as a rule is added to any of the chains in the filter table ('INPUT', 'OUTPUT', 'FORWARD'). By manually loading the module iptable_filter on my Debian 10 hosts I was able to populate the file /proc/net/ip_tables_names with the active tables ('filter') and the management software works again. I have for the moment made this change permanent by adding those modules to a file in /etc/modules-load.d/. I will take a guess that the switching of the iptables commands to use the nftables framework has somehow caused this iptable_filter module to not be loaded even though the firewall still works. Is it a bug that loading rules into the filter table using the iptables-nft command (actually called as "iptables" due to alternatives) no longer causes the iptable_filter module to be loaded and thus "filter" to appear in /proc/net/ip_tables_names? Is there a different proc file that will list the active netfilter tables? Is it safe for me to continue forcing the load of the iptable_filter and ip6table_filter modules, or should I stop doing that and seek to get the management software fixed instead? Though doing that will need some other way to obtain the same information. If it is bad to force load those modules, perhaps I should be using update-alternatives to go back to using iptables-legacy? I am aware that we should be switching to nftables now, but I have quite a few servers all managed by config management. 
As I will need to switch the method by which I manage the firewalling in the config management, and don't want to run two different things simultaneously, I was planning to wait until my oldest hosts have been upgraded enough and then do them all at once. I don't really want to start rewriting the firewalls on older Debian 8 servers when they should go away within a year anyway. Cheers, Andy
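[Editor's note: the modules-load.d workaround described above amounts to a two-line file like the sketch below. The filename is arbitrary (only the .conf suffix matters); systemd-modules-load reads files in /etc/modules-load.d/ at boot and loads each listed module:]

```
# /etc/modules-load.d/iptables-filter.conf
iptable_filter
ip6table_filter
```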
Re: Email based attack on University
Hello, On Thu, Oct 03, 2019 at 08:05:27AM -0400, rhkra...@gmail.com wrote: > On Thursday, October 03, 2019 06:23:20 AM Andrew McGlashan wrote: > > There have been numerous bugs with LookOut (otherwise known as > > Outlook), running scripts and having other vulnerabilities due to > > preview pane being open. […] > I suppose then, that the same vulnerabilities that you allude to > are present in (at least older versions of) kmail? I think it's important to realise that large organisations tend to enforce a monoculture of office productivity and email applications. These tend to be large and complex software packages which harbour many bugs and opportunities for security compromise. There have been many incidents of the large office suites having flaws that execute content, even sometimes without any user action beyond some sort of email preview. This will continue to happen. Web-based email may even have a better security story, as the browser security model has at least had a lot of thought applied to it over time as opposed to standalone large executables. Realistically therefore, if there was an enterprise mandating a Linux desktop and mail package to all its users, we probably would still continue to find security bugs in that email application that did not rely on the user following a link or maybe not even explicitly opening a media attachment. You would have to assume such bugs are present, though possibly as yet undiscovered. Every large monoculture installation is waiting for their own specific 0-day exploit. As previously noted in this thread, blocking execution from the /home filesystem tree would not help here as in this case it would either be the email application or a media handler it launches doing the executing. There are other security features available in Linux, such as SELinux and AppArmor, which seek to limit the privileges of binaries. 
Conceivably a rigorous use of these could really lock down a desktop and productivity suite to be much harder to break into. For example, a media viewer/player could be restricted from writing to the filesystem or making network calls. My experience is that very few organisations are willing to spend the time to define such policies, and in truth I rarely do either, much beyond what comes as default with the OS. Some even disable them entirely. But at least the feature is there, and that's the sort of thing that would be worth exploring if someone is seriously wanting to lock down this sort of big desktop deployment. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Authentication for telnet.
Hello, On Sun, Sep 29, 2019 at 07:28:45PM -0700, pe...@easthope.ca wrote: > From: pe...@easthope.ca > Date: Sat, 28 Sep 2019 08:15:07 -0700 > > Opening a terminal emulator in default configuration on localhost, ... > > Localhost; not hosts. It's easy to get confused because your posting style is incredibly difficult to follow. You break threads and give very little detail. Help us to help you. > > ... telnet opens in about 1 s. ... ssh requires about 15 s. If your SSH takes 15 seconds to connect to localhost then you have a configuration issue. As a first guess, check you do not have it using DNS. "ssh -v localhost" might give you some hint as to where in the connection/login process the time is being spent. But because of your reluctance to tell us exactly what you're trying to do, we don't even know if ssh is the best tool for the job. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Authentication for telnet.
On Sun, Sep 29, 2019 at 10:51:22PM +, Andy Smith wrote: > On Sun, Sep 29, 2019 at 02:36:02PM -0700, pe...@easthope.ca wrote: > > An interactive shell session with minimal overhead. (Or maximal > > efficiency.) > I am old enough to remember how we used to remotely manage machines > before SSH was invented: rlogin. Oh, I see now that you were interested in a passwordless equivalent of "telnet localhost". It is confusing why you would need to do this to localhost as you could just type "bash" (or dash or zsh or whatever) to get a new shell. So it would help our understanding if you were to explain what your use case is for this new interactive shell session. If you are in some sort of graphical desktop then as you already say, the usual method is just to open a new terminal emulator. On the console you could switch to a new virtual console with ctrl+alt+F1, F2, F3 etc. That would have a login prompt though. Would that solution be good enough if it was automatically logged in as your user? If you are just trying to execute things as another user then su or sudo may be more appropriate. "sudo -u anotheruser -s" gets you an interactive shell session as anotheruser, and can be configured to be passwordless if you like. I mentioned rlogin. With rlogin you can still use it over localhost to switch between users in a passwordless manner. So too could SSH, of course. If it's only to the same host though it seems overkill compared to su or sudo. So I think we really do still need to know more about your use case. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Authentication for telnet.
Hello, On Sun, Sep 29, 2019 at 02:36:02PM -0700, pe...@easthope.ca wrote: > From: Reco > > I have to ask - what are you trying to achieve? > > An interactive shell session with minimal overhead. (Or maximal > efficiency.) The telnet client in the Oberon subsystem is noticeably > faster than competitors. Because such a thing is hideously insecure, it has fallen into disuse and SSH is the name of the game these days. Even if you do not require the security of SSH, the mere fact that SSH is ubiquitous means that you may have an easier time using SSH for this. Have you tried SSH and found it lacking somehow? Is it a case that the hosts you are dealing with are too underpowered CPU-wise to cope with SSH's encryption? I am old enough to remember how we used to remotely manage machines before SSH was invented: rlogin. You can still install rlogin on Debian, and by crafting a suitable $HOME/.rhosts file you can provide passwordless plain text login capability. "man rlogin" and "man 5 rhosts" should get you going. I still think it is a really bad idea unless SSH is totally out of the question. Finally, it is possible to spawn a shell on a particular port with socat and then use socat at the other end to connect to it, to provide an interactive shell session again with no authentication or encryption. See: https://blog.ropnop.com/upgrading-simple-shells-to-fully-interactive-ttys/#method2usingsocat > > ... your request seems to be awfully close to (in)famous A/B > > problem, ... > I might have read about the A/B Problem years ago but don't recall or > understand well enough. It's when someone has a problem, and they think a particular method will solve it, so they ask about that method rather than the problem itself. They risk missing a much better solution because they focussed on the particular method they knew of. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: When/how/why to use "sudo", "su" or "su -" -- was [Re: rocks n diamonds]
Hello, On Fri, Sep 13, 2019 at 06:57:11AM -0500, Richard Owlett wrote: > I think what is needed is an essay comparing/contrasting the proper usage of > "sudo" versus "su" versus "su -". It should also include a discussion of the > change from "su" to "su -". A lot has already been written about the change of "su" program in Debian and it's difficult to see how writing one more page will help anyone. The information is there for anyone to find, if they know they need to look. I think that's the problem here: those stumbling over issues with changed "su" behaviour are already used to the old behaviour of "su", so when they type something like:

$ su
# some-admin-command

and get back a message that "some-admin-command" can't be found, they do not immediately think, "what can be wrong with my usage of su?" Instead they think, "what can be wrong with my install of some-admin-command?" hence threads like these. They feel they are comfortable with their use of "su" because it's worked for them so many times before. It's the new "some-admin-command" that must be messed up. So in fact the problem is harder than education because it is actually re-education. Over time, the "new" behaviour of "su" (which is now consistent with the behaviour of "su" on most other Linux distributions) will implant itself as the only known behaviour for "su" users, so these problems should reduce. As for "su" vs "sudo", it is a debate that has raged amongst small factions for years and I don't see it as possible to objectively make recommendations as to which is best and when, as it is all personal preference. Whatever "recommendation" one would make, there are going to be plenty of people who will pop up to say that is an anti-recommendation. You could try to just describe their functionality in contrast to each other, but it's been done so many times already. Type "difference between su and sudo" in your favourite search engine and there are pages and pages of results. 
It is probably some sort of failure that a GUI application needs the user to do anything at all with "su" or "sudo" or anything at a shell prompt. Although I would never want to give up use of the shell prompt, it is a steep learning curve for the new user, who just wants to install and play a game. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: crc16
Hello, On Wed, Sep 11, 2019 at 11:37:11AM +0200, Grzesiek Sójka wrote: > Is there any utility to calculate crc16 (not the crc32) in Debian? It should be trivial in almost any scripting language available in Debian. Here is a Perl example.

$ sudo apt install libdigest-crc-perl
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  libdigest-crc-perl
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 14.2 kB of archives.
After this operation, 51.2 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian buster/main amd64 libdigest-crc-perl amd64 0.22.2-1+b1 [14.2 kB]
Fetched 14.2 kB in 0s (93.0 kB/s)
Selecting previously unselected package libdigest-crc-perl.
(Reading database ... 127754 files and directories currently installed.)
Preparing to unpack .../libdigest-crc-perl_0.22.2-1+b1_amd64.deb ...
Unpacking libdigest-crc-perl (0.22.2-1+b1) ...
Setting up libdigest-crc-perl (0.22.2-1+b1) ...
Processing triggers for man-db (2.8.5-2) ...

$ perl -MDigest::CRC -e '$ctx = Digest::CRC->new(type=>"crc16"); $ctx->addfile(*STDIN); print $ctx->b64digest' < /etc/os-release
lzA=

Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Length of a video file (.avi, .mpg) from the command line
Hi, On Wed, Aug 14, 2019 at 05:43:42PM +0200, Jean-Baptiste Thomas wrote: > Can anyone recommend a command line program that could tell me > the length of the video stream in a .avi or .mpg file ? ffprobe from the ffmpeg package can extract the duration, and offers programmatically-friendly output formats such as CSV, JSON and so on. http://trac.ffmpeg.org/wiki/FFprobeTips#Duration Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: August 10, 2019
On Sat, Aug 10, 2019 at 10:53:29PM +, Randy Demerchant wrote: > I like to know can I install and use Debian on eithers of these two system > with out ant problems I've never had ant problems with Debian, but maybe stop eating over the keyboard unless you want ants, because that's how you get ants. Cheers, Andy
Re: installing Debian 10 to 3 hdds as one big system
Hello, On Sat, Aug 10, 2019 at 11:18:49AM -0700, David Christensen wrote: > But, as others have said, two HDD's and one SDD in JBOD (or RAID) > is not optimal. On the other hand it's probably not going to be worse than 3 HDDs. All the ways of trying to exploit the speed of the SSD are quite complicated so if 1x SSD and 2x HDD is what you have to work with, just putting them all together in a RAID isn't necessarily a bad idea. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: installing Debian 10 to 3 hdds as one big system
Hi Pavel, On Sat, Aug 10, 2019 at 01:03:10PM +0200, Pavel Vlček wrote: > I have computer with 3 hdds. One is ssd, 2 others are hdd. I want to install > Debian 10 to all 3 disks as one big system. What to use, raid or lvm? Personally I would use the three devices as a RAID-10 which would result in half the capacity of the total (768G) and you could withstand the loss of any one device. You could instead do RAID-5 but I do not like parity RAIDs. In this case that would give you 2 devices worth of capacity and again any one device could fail. It won't perform as well as RAID-10. Other options that include redundancy would be btrfs or zfs. I would not do the redundancy in LVM. If not using btrfs or zfs, I would use LVM afterwards on the RAID device for management purposes. You can do all of this (except zfs) in the Debian installer. There is an MD feature called "write-mostly" which you can set on devices to tell the kernel that no reads should go to these devices unless absolutely necessary. The usual use of this is in mixed rotational and flash setups to try to encourage reads to come from the much faster flash devices. This could be of real benefit to you but sadly it doesn't work with anything except RAID-1. Other interesting approaches could be:

- RAID-1 of the rotational devices then use the SSD in lvmcache. In writethrough mode it is safe to lose the (non-redundant) cache device: https://manpages.debian.org/buster/lvm2/lvmcache.7.en.html

- RAID-1 of the rotational devices then bcache on the SSD: https://bcache.evilpiepirate.org/

Personally I don't find bcache mature enough and while lvmcache I did find to be safe, I didn't find that it improved performance that much, probably not enough to dedicate a third of my total capacity to it. If performance was my overriding concern I might actually do a 3-way RAID-1 with the two HDDs set to write-mostly. Only 512G capacity, can lose any two devices, good read performance. 
> I know, how to create the lvm with textual installer,
> but I have problem expanding it to next two hdds, /dev/sda and sdb

I would not do this, but…

# Mark the new device as an LVM PV
# pvcreate /dev/sda

# Extend your current volume group to use the new PV
# vgextend /dev/your_vg_name /dev/sda

At this point you have added the capacity of /dev/sda to the existing volume group, but all your existing logical volumes still reside on the original PV alone. You can now convert them to have their extents mirrored:

# lvconvert -m1 /dev/your_vg_name/some_lv

The mirrored extents will be on /dev/sda because that is the only PV with free extents. If you already added /dev/sdb then the extents could be mirrored there instead. If you want to specify where the mirrored extents should go, you can do so by appending the PV device path to the above lvconvert command.

You can instead/also stripe, using the --stripes option to lvconvert (or lvcreate, for new LVs). In that setup there would be no redundancy though, which is too bad for me to consider.

Not doing anything special would leave your LVs being allocated sequentially from whichever PV has capacity, in this setup resulting in no redundancy and the maximum performance of one device, so that would be the worst setup.

Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
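[Editor's note] The last two options could look like this, reusing the placeholder names from the post; these are illustrative fragments, not commands to run verbatim:

```shell
# Mirror an existing LV and place the new copy explicitly on /dev/sdb:
lvconvert -m1 /dev/your_vg_name/some_lv /dev/sdb

# Or create a new LV striped across two PVs -- fast, but no redundancy:
lvcreate --stripes 2 --size 100G --name striped_lv your_vg_name
```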
Re: Possible Bug issues in kernel.
Hi,

On Thu, Jul 25, 2019 at 02:55:13AM +0000, J.W. Foster wrote:
> E: initramfs-tools: installed initramfs-tools package post-installation script subprocess returned error exit status 1

There can be many reasons why this package's post-installation script may have failed. I don't think you have shown the full output. Running "apt upgrade" again will probably either repeat the action or tell you what to type to repeat the action, so please do that and show us the full output.

Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: HTTP shimmed to HTTPS
Hello, On Mon, Jul 22, 2019 at 11:00:20AM -0400, rhkra...@gmail.com wrote: > Anybody know what these no content messagess are about? Just spam? I haven't seen one without content, but I haven't read them as the quoting is so broken it's too hard to read. Maybe check with the archives and see if your mailer is showing differently? Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: sending mail via a script
Hi Mick,

On Wed, Jul 17, 2019 at 09:50:47PM +0100, mick crane wrote:
> I have wondered about this, the actual infrastructure. I've noticed that the
> fiber optic cable is in places strung along with the electricity pylons.
> Presumably if you could somehow attach to that then you could be anybody ?

Leaving aside the technicalities of splicing into an optical fiber link, in the context of email sending and "being anybody" I interpret your question as being alternatively phrased as: "if I gained access to some sort of backbone connection then could I pretend to be anyone, in email?"

The answer is probably, "not really."

Most of this email reputation stuff operates on the source IP of the connection. With access to someone's network, you could possibly send packets from their IP address(es), and this is basically what happens when someone's device gets compromised and used for a spam run. The resulting fallout then affects their IP reputation.

But you do not get to send packets with an *arbitrary* source IP just because you managed to tap into a fatter pipe¹. You get to use the IPs that you are assigned by your provider, or the provider of whatever network it is that you're connected to.

Your Internet service provider may assign you IP addresses if you ask, though they may not offer this service or may charge a lot of money for it. You can always become your own service provider and go directly to a Regional Internet Registry for the IPs. For example, membership of RIPE, which covers Europe and some of the Middle East and Africa, costs €1,400+VAT per year with a setup fee of an extra €2,000 in the first year. For this you currently get a /22 of IPv4 (1,024 addresses) and a /32 of IPv6 (or up to /29 if you need it, or even more if you can justify it). A /32 of IPv6 is 65,536 /48s, each of which you would generally assign to a site or a business, and each /48 is 65,536 /64s, each of which would be an individual network within that. 
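[Editor's note] The /48 and /64 counts above are just powers of two; a quick shell sanity check:

```shell
# A /32 contains 2^(48-32) /48s, and each /48 contains 2^(64-48) /64s.
# (1 << n is POSIX shell for 2^n.)
echo "$((1 << (48 - 32))) /48s in a /32"   # prints: 65536 /48s in a /32
echo "$((1 << (64 - 48))) /64s in a /48"   # prints: 65536 /64s in a /48
```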
As you can see that's a pretty big outlay, yet on a per address basis it's probably cheaper than getting your existing provider to assign you IPs, or rent servers or whatever. Going back to "being anybody", email of course doesn't have any security and you can put any From: address you like. That's why so much of email reputation is still focused on the source IP address and not the content. Parsing the content is expensive and comes later. Cheers, Andy ¹ A lot of networks don't have protections against spoofing, in that they allow packets to go out into the Internet with source IP addresses that do not correspond to what has been assigned to that network. This will not work for email however because email (SMTP) is a TCP service which requires a three way handshake to set up a connection. If you tried to initiate an SMTP connection with a forged source address, the communication from the server would route back to the real IP address and the IP stack of that device should reject it because it would know it was not something that it initiated. Forged source addresses are more commonly used for UDP-based denial of service. For example, I send a small request to a UDP server and forge your IP address as the source. The server sends a massive reply back to you, not me. You are crushed by the traffic. Some poorly-designed UDP services can enable 1,000x or more amplification of traffic. This has been done with NTP, DNS, portmapper, and lots of others. -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: sending mail via a script
Hi Mick, On Tue, Jul 16, 2019 at 10:39:57PM +0100, mick crane wrote: > well when I became aware of all this stuff, I thought this is great, > everybody can connect and do what they like, if of course following > protocols. > But you can't do that can you ? you have to connect through a service > provider. In theory any host in the Internet can talk to any other host on the Internet because that is what an internetwork is. In practice some hosts on the Internet do not want to be talked to by just anyone for any reason. So, firewalls, application firewalls, blocklists and other restrictions in the name of security. An unfortunate reality of the centralisation of email services into just a handful of very large providers is that those providers in practice dictate stricter rules for who can talk to them. IP netblocks that are known to be assigned to end users (as opposed to hosting providers) are generally outright blocked or distrusted to a degree which makes it difficult for them to be used to send email to everyone that one might want to correspond with. On the other hand, hosting services have got a lot cheaper over the years to the point where one can rent a virtual server at a decent provider for not a lot of money, and as long as one complies with modern email practices one should not generally have much of a problem. Very few people wish to go to this extent, but if you are someone who wanted to do it at home then doing it on a rented server instead is not much more effort. Running your own mail service is still within reach, just not from your own home in most cases. If intending to do this I would however caution against using the very cheapest of providers, some of which come in at just a few Euro per month. These providers do not have functioning abuse departments and as a result are widely blocked for the misdeeds of their customers. 
As someone who operates in this space I will not name any providers, but if it seems too cheap to be true then it probably is. Cheers, Andy
Re: Where'd lsb-compat go?
Hello,

On Mon, Jul 15, 2019 at 03:48:50PM -0400, Stefan Monnier wrote:
> > 3) It spurs me to ask: So, if not via LSB, what is the canonical way to
> > programatically determine the version of an installed Debian setup?
>
> Why would a program want to know?

Being able to programmatically query the status of a system is the cornerstone of configuration management. Some products handle it better than others. I've never heard of OP's product, but well-known solutions in this space include Puppet and Ansible, which have the advantage of being open source.

Ansible does not suffer from this confusion over the version number of "10":

$ ansible backup4*,limoncello* -m setup -a 'filter=ansible_distribution*'
backup4.bitfolk.com | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "Debian",
        "ansible_distribution_file_parsed": true,
        "ansible_distribution_file_path": "/etc/os-release",
        "ansible_distribution_file_variety": "Debian",
        "ansible_distribution_major_version": "9",
        "ansible_distribution_release": "stretch",
        "ansible_distribution_version": "9.9"
    },
    "changed": false
}
limoncello.bitfolk.com | SUCCESS => {
    "ansible_facts": {
        "ansible_distribution": "Debian",
        "ansible_distribution_file_parsed": true,
        "ansible_distribution_file_path": "/etc/os-release",
        "ansible_distribution_file_variety": "Debian",
        "ansible_distribution_major_version": "10",
        "ansible_distribution_release": "buster",
        "ansible_distribution_version": "10.0"
    },
    "changed": false
}

Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
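[Editor's note] Ansible derives those facts from /etc/os-release (as its own output shows), and that file is easy to query directly. A minimal sketch, using a sample file so it does not depend on the host actually running Debian:

```shell
# Sample /etc/os-release content from a Debian 10 host (abridged):
cat > /tmp/os-release.sample <<'EOF'
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
VERSION_ID="10"
VERSION_CODENAME=buster
EOF

# os-release is shell-parseable by design: source it in a subshell so
# its variables don't pollute the current environment.
version=$( . /tmp/os-release.sample; echo "$VERSION_ID" )
codename=$( . /tmp/os-release.sample; echo "$VERSION_CODENAME" )
echo "Debian $version ($codename)"    # prints: Debian 10 (buster)
```

On a real host you would source /etc/os-release itself rather than a sample file.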
Re: sending mail via a script
Hi Pierre,

On Sun, Jul 14, 2019 at 06:17:50PM +0200, Pierre Frenkiel wrote:
> I tried with mail.mailutils, and I get the following error:
>
> <<< 550-5.7.1 [2a01:e35:8a7f:9c50:2e4d:54ff:fed0:5806] Our system has detected that
> <<< 550-5.7.1 this message does not meet IPv6 sending guidelines regarding PTR
> <<< 550-5.7.1 records and authentication. Please review
> <<< 550-5.7.1 https://support.google.com/mail/?p=IPv6AuthError for more information

If sending email to Gmail over IPv6 you absolutely require matching forward and reverse DNS plus some email authentication mechanism such as SPF and/or DKIM. If you can't do this, disable IPv6 in your mail server, either in general or when sending to Gmail (which could be tricky because this affects Google Apps For Your Domain too, so you don't necessarily know all the domains involved).

You will have an easier time over IPv4, as Gmail relaxes the SPF/DKIM requirement there, though you can still avoid unwanted trashing of your email by implementing SPF and/or DKIM.

Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
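[Editor's note] The "matching forward and reverse DNS" requirement is forward-confirmed reverse DNS (FCrDNS): the sending IP's PTR name must resolve back to that same IP. A sketch of the check, with the lookup results hard-coded as hypothetical examples so it runs without network access:

```shell
# In real use these two values would come from DNS, e.g.:
#   ptr_name=$(dig +short -x "$ip")
#   forward_ip=$(dig +short AAAA "$ptr_name")
ip="2001:db8::25"               # sending address (documentation prefix)
ptr_name="mail.example.org."    # hypothetical PTR record for $ip
forward_ip="2001:db8::25"       # hypothetical AAAA record for $ptr_name

if [ -n "$ptr_name" ] && [ "$forward_ip" = "$ip" ]; then
    echo "FCrDNS OK: $ip <-> $ptr_name"
else
    echo "FCrDNS broken: fix the PTR and matching AAAA before mailing Gmail"
fi
```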
Re: Ansible : User to use
Hi Thierry, On Thu, Jul 11, 2019 at 11:13:13PM +0200, Thierry Leurent wrote: > I'm beginning to work with Ansible to configure my hosts. > What is the best practice to run playbooks on all of my Linux host ? > I must define a specific user ? Usually you have a specific user that can SSH in and use sudo when it needs to. But if you want to you can make it SSH as root. Best practice would be an unprivileged user, ssh by public key, use sudo where necessary. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
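[Editor's note] A sketch of that best practice; the user name, key handling and sudoers path are illustrative conventions (and passwordless sudo is a convenience choice, not a requirement):

```shell
# Create an unprivileged user for Ansible and prepare key-only SSH:
adduser --disabled-password --gecos 'Ansible' ansible
install -d -m 700 -o ansible -g ansible /home/ansible/.ssh
# (copy the controller's public key into /home/ansible/.ssh/authorized_keys)

# Allow that user to escalate with sudo; validate the file before relying on it:
echo 'ansible ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/ansible
visudo -cf /etc/sudoers.d/ansible
```

In the inventory or ansible.cfg you would then set the remote user to "ansible" and enable privilege escalation (become) for tasks that need root.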
Re: fix for no ssh
On Thu, Jul 11, 2019 at 05:12:03PM +0300, Reco wrote: > On Thu, Jul 11, 2019 at 12:03:53PM +0000, Andy Smith wrote: > > I think the wiki article at > > https://wiki.debian.org/BoottimeEntropyStarvation really shows that > > currently there is no such consensus available, as every solution > > listed (except buying extra entropy hardware) > > That one is bad too. > Hardware random generator is not used by kernel directly, it requires > userspace program (such as hwrngd). > So, even if you put it into initrd alongside with the needed kernel > modules, there's still a noticeable delay between 'kernel rng is needed' > and 'sufficient entropy is available'. With no modifications and RDRAND instruction disabled, a Debian buster VM I just created gets to crng: done in 49 seconds. By adding the userspace daemon for EntropyKey, it gets there in 10 seconds. Allowing RDRAND it gets there in 2 seconds. I haven't tested it with my OneRNG devices yet. I suspect I could also make the EntropyKey daemon start sooner if I tried. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: fix for no ssh
Hi Greg, On Wed, Jul 10, 2019 at 09:03:16AM -0400, Greg Wooledge wrote: > The primary thing that's lacking is someone who actually knows all of > this stuff and can explain it properly. Everyone on this mailing list is > grasping at straws that are lying around in various places, of different > types and quality and age, and trying to assemble a house out of them. I think the wiki article at https://wiki.debian.org/BoottimeEntropyStarvation really shows that currently there is no such consensus available, as every solution listed (except buying extra entropy hardware) has at least one expert who thinks it is bad. So assuming the option of "find an expert who everyone agrees with and get them to write some documentation" isn't available, what next? Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: fix for no ssh
Hi Curt, On Wed, Jul 10, 2019 at 09:26:31AM -, Curt wrote: > On 2019-07-10, Andy Smith wrote: > > But, let's say this use of RDRAND to supply boot-time entropy is as > > serious as you argue. What would be your suggested configuration > > I would like Debian to make it clear in the release-notes that there are > security implications to the application of the patch, as expressed by > its author in the patch itself. This seems like a strict minimum for the > "universal operating system" devoted to free software, open standards, > and user choice. So firstly, I feel that you are taking Ted Ts'o's remarks in the patch too strongly. I interpret them as letting us know that there is some concern over whether RDRAND could have been compromised, but not telling us not to use it for this purpose, which seems to me to be how you take it. I don't think that Ted would have contributed the patch if he felt like it was very bad to use it, and I do take his earlier objection in 2013 to be evidence that he wouldn't make a change that he felt was a bad idea even if some people wanted it. However we are then back to debating over our guesses as to what Ted Ts'o thinks rather than anything objective, so I think we'll just have to agree to disagree on that. Secondly, the reason I asked you what you would like done is that in the message I replied to you said that the release notes were something that users don't read. But your proposed solution is to put more things in the release notes. > Further, I would like to know whether the patch will be "baked into the > kernel" or whether it can be toggled on and/or off at the *user's* > discretion. I don't remember being clear on this point after reading the > notes (maybe it's there and I missed it). > > It wasn't clear to me, either, in the release-notes, the recommended way > forward for those with amd64 cpus lacking the RDRAND instruction (and who > therefore cannot "benefit" from the patch). 
In the release notes in the relevant section (5.1.4) the last paragraph is: See the wiki (https://wiki.debian.org/BoottimeEntropyStarvation) and DLange's overview of the issue (https://daniel-lange.com/archives/152-hello-buster.html) for other options. Both those pages have a list of different solutions and both mention that this RDRAND thing is enabled. Neither of them say how to disable it but they do say: for amd64: use a recent kernel with CONFIG_RANDOM_TRUST_CPU set (less recent kernels may need random.trust_cpu=on added to the commandline) which kind of makes it obvious to me that to disable it you would do "random.trust_cpu=off". If you don't find this obvious maybe the wiki page could do with editing to improve that. So what's lacking? Is it just that the release notes and linked pages mention that the user trusts the CPU but do not mention why some feel that is a bad idea? Maybe you could draft an extra paragraph for the release notes that contains a suitable warning? Although I do think if you are going to use the "even the author of the patch thinks it's bad!" argument then you should probably check with Ted Ts'o that that's an accurate representation of his views on using RDRAND for boot-time entropy. As for the recommended way forward, I'm not sure that there is an easy answer if RDRAND isn't an option. There are complex trade-offs and I think it's probably right that users in this position read the wiki page and work out what's best for them. I do note that for a person in your situation (real hardware [not a virtual machine] with no RDRAND and no TPM), every listed solution has at least one expert that says it is a very bad idea! I don't think there is consensus here yet. In your position I think I'd probably hold my nose (as it says) and use haveged. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
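[Editor's note] For completeness, a sketch of how that opt-out is typically applied on a GRUB-based Debian system; paths are the Debian defaults, and this is a config fragment rather than a tested recipe:

```shell
# In /etc/default/grub, append the parameter to the default command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet random.trust_cpu=off"
# then regenerate the GRUB config and reboot:
update-grub

# After reboot, confirm the running kernel saw the parameter:
grep -o 'random\.trust_cpu=[a-z]*' /proc/cmdline
```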
Re: fix for no ssh
Hi Curt, On Tue, Jul 09, 2019 at 07:59:53AM +, Curt wrote: > I think these reserves are relevant and pertinent to the patch > itself and should be revealed to the user, whom we cannot assume > or expect to follow the technical discussions of the development > team, in the release-notes for Buster. Clearly we disagree about how serious that matter is, and I have a problem with your continued reference to dire warnings from 2013 about something completely different, because it easily confuses. But, let's say this use of RDRAND to supply boot-time entropy is as serious as you argue. What would be your suggested configuration "out of the box" and how would you communicate the issue to the user? Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Fwd: fix for no ssh
Hi Nicholas,

On Mon, Jul 08, 2019 at 04:49:00PM -0500, Nicholas Geovanis wrote:
> On Mon, Jul 8, 2019 at 3:45 PM Andy Smith wrote:
> > Flash forward to 2017 and Ts'o himself wrote a patch to add a
> > configure option to allow RDRAND to be used early on to bootstrap
> > entropy. Thereafter it would not be the exclusive source of entropy.
> > That is what has been enabled in buster's kernel and is what is at
> > the heart of this discussion.
> > These are two different scenarios.
>
> Granted, they are different scenarios.
> But I don't ever recall the mainstream kernel in a leading
> distribution blocking on lack of entropy by default at any time in
> Ts'o's career. Until now. Sure, it looks like the source permitted
> blocking all along (maybe). But I never heard of this being used
> in a non-custom-built mainstream distro before.
> Has Ts'o had any comments on that? Does he find it appropriate or wise?

This whole getrandom()-can-block thing has been going on for multiple years; it has been a huge debate within the Linux development community. I'd have to read it all again, I think, to try to summarise Theodore Ts'o's position on all of it, and I don't really want to do that. There was good coverage in LWN:

    https://lwn.net/Articles/760584/

> > This sub-thread appears to have people concerned about the Debian
> > kernel's willingness by default to use RDRAND at early boot (a patch
> > which Ts'o wrote), but using a statement made by Ts'o in 2013 about
> > something else to oppose it.
>
> OK. You imply that their concerns were misguided (I wasn't one of them).

For the avoidance of doubt, I think that if we don't trust our CPUs then we start to go down the rabbit hole where we can't trust anything.

HOWEVER… Theodore Ts'o himself has made the argument that most places you might compromise a CPU involve big teams of people, and keeping the secret that it had happened would likely be infeasible. Not so RDRAND, which is thought to be the work of one or a small team of developers, and which would be very easy to compromise.

So, as far as I understand it, that is why Ts'o opposed the use of RDRAND as the *only* entropy source for Linux, but Ts'o in fact *suggests* the use of RDRAND in tandem with other entropy sources as a means to avoid boot-time entropy starvation. Because it's one of the easiest ways to get around that problem, and even if compromised it isn't going to do any harm when mixed in. That sounds sensible to me, so I would agree with him that RDRAND is suspicious but usable in that context. And I do go so far as to provide entropy from external hardware sources for me and my customers, and have since 2013.

Actually I just tested this earlier in a new buster VM, and by default:

    [    1.170404] random: crng done (trusting CPU's manufacturer)

With 'nordrand' on the kernel command line and no use of external entropy:

    [    1.231884] random: fast init done
    [   48.655256] random: crng init done

With 'nordrand' and external entropy:

    [   10.879583] random: crng init done

The daemon for the external entropy starts quite late, so I may be able to improve matters by forcing it to start earlier, but I'm okay with leaving RDRAND enabled so am not going to spend too much effort on this.

> The evidence they chose may be untenable. But you have not
> addressed their actual concern.

I don't know whether you mean their concern about RDRAND possibly being unsafe, or their concern about Debian choosing to make use of it even though some believe it may be unsafe. Your next bit makes me think you mean the latter, so that's what I'll go with.

> I'm sure that your answer will include "but the standard
> debian process to include this new feature in the kernel was
> followed..". So everything is OK. Right?

How to solve the "lack of entropy at boot time" problem was discussed at length on debian-devel multiple times over multiple years, and that's how the wiki page at:

    https://wiki.debian.org/BoottimeEntropyStarvation

came to exist. That page links to two of the debian-devel threads, and in those threads Theodore Ts'o did give some recommendations. Debian developers made a decision based on those discussions, which were held in the open. I'm not sure who it fell to, to make the ultimate decision. Probably the kernel team, as they decided whether to enable the RDRAND option?

It's a tricky problem without a perfect solution. Not everyone will be happy with the outcome, though there does seem to be enough configurability there for them to have things work differently, with different trade-offs.

If this has come as a surprise to some Debian users, that is only because they didn't choose to get involved in such a deeply technical discussion. It's not that unusual that some users get upset by developer decisions and isn't usually worth commenting upon, but it struck me that in this sub-thread…
Re: fix for no ssh
Hello,

On Mon, Jul 08, 2019 at 02:50:18PM -0400, Gene Heskett wrote:
> On Monday 08 July 2019 14:14:10 Andy Smith wrote:
> > On Mon, Jul 08, 2019 at 05:48:24PM -0000, Curt wrote:
> > > it "amounts to trusting that CPU manufacturer (perhaps with the
> > > insistence or mandate of a Nation State's intelligence or law
> > > enforcement agencies) has not installed a hidden back door to
> > > compromise the CPU's random number generation facilities."
> >
> > Again, everyone using a popular CPU is already in that position.
>
> Absolutely Andy. But from the argument thread my original post generated,
> there are IMO, quite a few who have become addicted to the koolaid.

The koolaid of using a mainstream CPU? I think if we stopped using Intel and AMD then there would be some other near-monopoly manufacturer that would arise and embed unauditable blobs, so we'd be in exactly the same position. To arrive at a situation where the entirety of the CPU was open to inspection would probably require a complete reworking of the modern economy, i.e. make it less purely capitalist for a start. When the problem is as big as that, I'm not sure that being captured by it can be referred to as "koolaid", to be honest.

(Before an ARM fanboy pulls me up for not mentioning them, I'm talking about the mainstream. If Intel and AMD didn't exist then I suspect ARM would be just as bad for this, if not worse.)

> Theodore T. has been right wayy more than he's been wrong.

I think you may have been confused by the posted statement of Ts'o's which began:

    "I am so glad I resisted pressure from Intel engineers to let
    /dev/random rely only on the RDRAND instruction."

This was from 2013 and was Ts'o opposing the trusting of the RDRAND instruction for *all* of the Linux kernel's entropy needs.

Flash forward to 2017 and Ts'o himself wrote a patch to add a configure option to allow RDRAND to be used early on to bootstrap entropy. Thereafter it would not be the exclusive source of entropy. That is what has been enabled in buster's kernel and is what is at the heart of this discussion.

These are two different scenarios.

This sub-thread appears to have people concerned about the Debian kernel's willingness by default to use RDRAND at early boot (a patch which Ts'o wrote), but using a statement made by Ts'o in 2013 about something else to oppose it.

> Raspian is usually not more than a day or so behind debian,

Discussion of Raspbian is off-topic here, and I don't see what it has to do with the topic of this sub-thread (entropy starvation at boot). I think you would be better off discussing Raspbian on Raspbian mailing lists.

Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: fix for no ssh
Hi Curt, On Mon, Jul 08, 2019 at 05:48:24PM -, Curt wrote: > it "amounts to trusting that CPU manufacturer (perhaps with the > insistence or mandate of a Nation State's intelligence or law > enforcement agencies) has not installed a hidden back door to > compromise the CPU's random number generation facilities." Again, everyone using a popular CPU is already in that position. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: fix for no ssh
Hello, On Mon, Jul 08, 2019 at 04:18:28PM -, Curt wrote: > Well, looking at Ted Ts'o short patch, where he mentions the security > implications of the thing at some length, *twice* I think that some of Ted's stance might not be because Ted thinks it is dangerous but because there has been in the past very vocal opposition to any use of RDRAND, given that it is part of the unauditable innards of the CPU. > and reading the following from Ts'o circa 2013: > > https://daniel-lange.com/documents/130905_Ted_Tso_on_RDRAND.pdf > > I am so glad I resisted pressure from Intel engineers to let /dev/random > rely only on the RDRAND instruction. Note that relying *only* on RDRAND and using RDRAND as *one* of the entropy sources are different situations. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: fix for no ssh
Hello, On Mon, Jul 08, 2019 at 02:40:16PM -, Curt wrote: > But as an innate altruist (just kidding), I'm wondering whether the > regular user is aware of the implications of all this. What about people > in Nation States ... Well, you get the idea. Thing is, if you can't trust that your CPU's implementation of RDRAND hasn't been compromised then how can you trust that any other aspect of your CPU hasn't been compromised? Every Intel CPU contains a whole other operating system (Minix) and no one outside of Intel knows exactly what it does. The situation will not be markedly better at AMD. Personally I use RDRAND and also hardware entropy sources (EntropyKey and OneRNG). Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting Please consider the environment before reading this e-mail. — John Levine
Re: Hosting in Spain - targeting Indonesian audience
Hello, On Sat, Jul 06, 2019 at 10:36:02AM +0700, Bagas Sanjaya wrote: > Since the datacenter is far from the audiences, I expected that my > website speed will be somewhat slower than if I choose hosting > provider which offer datacenters in Indonesia or Singapore. […] > Any suggestions? Host your stuff on any reasonable provider that you like and put a CDN in front of it. The CDN will take care of serving locally to clients worldwide. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Choice of VMs under i386 Stretch?
Hello Curt and Matthew, On Tue, Jul 02, 2019 at 12:04:36PM -, Curt wrote: > On 2019-07-02, Richard Owlett wrote: > > > > A restatement of my question might be: […] > What do we win if we provide the correct answer? A year's supply of > invective? I do feel sorry for you Matthew. You have been enticed into spending considerable time giving a thorough answer in an Owlett thread. Unfortunately Owlett threads are either an ongoing Internet performance art project or a result of severe mental illness (why not both!?), not sincere requests for help. Now, which one of you is going to tell him that running virtual machines is a bit of a stretch on a 32-bit host? Better luck next time! :) Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: IPv4 v IPv6
On Sat, Jun 22, 2019 at 07:34:52PM +, Andy Smith wrote: > That is why the stance that, "I have IPv4 so I don't need to do > anything" is not completely correct: it's not urgent for much of the > world at present, but we will get into a situation where either one > or both sides of a given IP conversation are behind multiple layers > of NAT that they don't control, and that's bad. I recently came across a couple of articles that indicate that due to the issues I mentioned in this email, IPv6 is already faster than IPv4: https://www.retevia.net/fast/ …and that this has some interesting effects on the economics for carriers (ISPs) and content providers (e.g. web hosts and large retailers) of v6 vs v4: https://www.retevia.net/prisoner/ Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: How can we check that a compressed file is rsyncable ?
On Mon, Jun 24, 2019 at 12:42:37PM -, Curt wrote: > On 2019-06-22, Andy Smith wrote: > > I am not aware of any other compression tool that offers to do what > > gzip's --rsyncable option does, but I owuld be interested if there > > are some that I overlooked. > > zstd introduced an '--rsyncable' flag with version 1.3.8 (available from > stretch-backports for those using stable). Interesting, thank you! Andy
Re: Giving remaja (teens) group full administrator privileges through sudo - dangerous?
Hello, On Mon, Jun 24, 2019 at 12:34:36PM +1200, Richard Hector wrote: > On 23/06/19 12:07 PM, Andy Smith wrote: > > andy@debtest1:~$ su - bob > > Password: > > bob@debtest1:~$ whoami > > bob > > bob@debtest1:~$ sudo -i > > [sudo] password for bob: > > Sorry, user bob is not allowed to execute '/bin/bash' as root on > > debtest1.vps.bitfolk.com. > > bob@debtest1:~$ echo > > 'bob:$6$K6b1uzg.$pTNKJG/9hIgnhBL53Y2mr0rrsBBZE1xDWE0bO8E94dBlM.itel4/meJTZYL12IIOZ9ck/ > > 3P2/j5XGbyKcKxFK/:18070:0:9:7:::' > myshadow > > bob@debtest1:~$ sudo mount --bind ./myshadow /etc/shadow > > bob@debtest1:~$ su - > > Password: > > root@debtest1:~# whoami > > root […] > Haven't you just set your own (bob) password there? Not saying you > couldn't set root's instead, but ... it looks like in this case you > already knew it. Yes, it was a mispaste from an earlier line in my screen history. Sorry about that. Point is you can take a hash that you already know, e.g. your own, write it into a new shadow file but make it be for the root user, not your own user, e.g.: bob@debtest1:~$ echo 'root:$6$K6b1uzg.$pTNKJG/9hIgnhBL53Y2mr0rrsBBZE1xDWE0bO8E94dBlM.itel4/meJTZYL12IIOZ9ck/3P2/j5XGbyKcKxFK/:18070:0:9:7:::' > myshadow and then since you are able to use mount as root you can bind mount your new shadow file over the system's real shadow file, hence effectively resetting root's password: bob@debtest1:~$ sudo mount --bind ./myshadow /etc/shadow bob@debtest1:~$ su - Password: root@debtest1:~# whoami root Since you can bind mount files and directories, root access to "mount" means root access to every part of the existing filesystem so there's many many ways of getting a root shell from that. Try it. :) But maybe on a test host as bind-mounting over something important may completely break your system. Cheers, Andy
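[Editor's note] A SHA-512 crypt hash like the one in the transcript above can be generated with standard tools. A sketch assuming OpenSSL 1.1.1 or later (which added the -6 option); the password and salt here are the examples from the thread:

```shell
# Produce a SHA-512 crypt ($6$) hash for a password you know, with a
# fixed salt so the result is reproducible:
pwhash=$(openssl passwd -6 -salt K6b1uzg. letmein1)
echo "$pwhash"    # a string of the form $6$K6b1uzg.$...
```

The mkpasswd utility from the "whois" package can do the same job. The resulting line, prefixed with a user name and suffixed with the shadow fields, is what gets written into the substitute shadow file.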
Re: Giving remaja (teens) group full administrator privileges through sudo - dangerous?
Hello, On Sat, Jun 22, 2019 at 04:44:40PM -0700, Jimmy Johnson wrote: > Some one mentioned mounting drives, all that and what they need can be > configured. Also note that anyone who can use "mount" as root can trivially become root. If countenancing allowing users to run "mount" as root I would make scripts that only mounted the exact things to the exact places, and then let them run those scripts as root. andy@debtest1:~$ su - bob Password: bob@debtest1:~$ whoami bob bob@debtest1:~$ sudo -i [sudo] password for bob: Sorry, user bob is not allowed to execute '/bin/bash' as root on debtest1.vps.bitfolk.com. bob@debtest1:~$ echo 'bob:$6$K6b1uzg.$pTNKJG/9hIgnhBL53Y2mr0rrsBBZE1xDWE0bO8E94dBlM.itel4/meJTZYL12IIOZ9ck/ 3P2/j5XGbyKcKxFK/:18070:0:9:7:::' > myshadow bob@debtest1:~$ sudo mount --bind ./myshadow /etc/shadow bob@debtest1:~$ su - Password: root@debtest1:~# whoami root The password of that hash is "letmein1". So don't give anyone sudo access to /bin/mount unless you are okay with them being able to become root proper if they really want to. Cheers, Andy
Re: How can we check that a compressed file is rsyncable ?
Hello,

On Sat, Jun 22, 2019 at 07:59:18PM +0400, Jerome BENOIT wrote:
> I was refering to the long option --rsyncable of gzip(1).

I am not aware of any other compression tool that offers to do what
gzip's --rsyncable option does, but I would be interested if there are
some that I overlooked.

Cheers,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting
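What --rsyncable buys you can be demonstrated quickly (a sketch; assumes a gzip new enough to have the option, as Debian's has had for years):

```shell
# Compress the same input twice, with a one-character change near the
# start. With --rsyncable the compressor periodically resynchronises,
# so most of the two compressed streams stay byte-identical and
# rsync's rolling checksum can skip the unchanged parts.
seq 1 50000 > data
gzip -c --rsyncable data > a.gz

sed '1s/1/X/' data | gzip -c --rsyncable > b.gz

# Count differing byte positions: with --rsyncable this is a small
# fraction of the file; without it, nearly the whole file differs,
# because one changed byte reshapes the entire rest of the stream.
cmp -l a.gz b.gz | wc -l
```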
Re: System on a chip - performance relative size and setup (how can the (Debian) setup make a difference?)
Hi Erik,

On Fri, Jun 21, 2019 at 03:02:46PM +0200, Erik Josefsson wrote:
> Maybe flashbench cannot tell me anything about that anyway?
>
> Are there other tools?

I'm not familiar with flashbench. I like fio. It's available in Debian.

I like to do the following tests. Example fio command line follows for
each.

- sequential read speed (MB/sec)

  $ fio --name="seqread" \
        --filename="/mnt/fioscratch" \
        --ioengine=libaio \
        --readwrite=read \
        --direct=1 \
        --numjobs=2 \
        --bs=4k \
        --iodepth=4 \
        --size=1g \
        --runtime=300s \
        --gtod_reduce=1 \
        --group_reporting | tee -a /home/$USER/fio.txt

- sequential write speed (MB/sec)

  $ fio --name="seqwrite" \
        --filename="/mnt/fioscratch" \
        --ioengine=libaio \
        --readwrite=write \
        --direct=1 \
        --numjobs=2 \
        --bs=4k \
        --iodepth=4 \
        --size=1g \
        --runtime=300s \
        --gtod_reduce=1 \
        --group_reporting | tee -a /home/$USER/fio.txt

- random 4KiB reads (IOPS)

  $ fio --name="randread" \
        --filename="/mnt/fioscratch" \
        --ioengine=libaio \
        --readwrite=randread \
        --direct=1 \
        --numjobs=2 \
        --bs=4k \
        --iodepth=4 \
        --size=1g \
        --runtime=300s \
        --gtod_reduce=1 \
        --group_reporting | tee -a /home/$USER/fio.txt

- random 4KiB writes (IOPS)

  $ fio --name="randwrite" \
        --filename="/mnt/fioscratch" \
        --ioengine=libaio \
        --readwrite=randwrite \
        --direct=1 \
        --numjobs=2 \
        --bs=4k \
        --iodepth=4 \
        --size=1g \
        --runtime=300s \
        --gtod_reduce=1 \
        --group_reporting | tee -a /home/$USER/fio.txt

Explanation:

name: Identifies the block of test output in the results output file.

filename: This file will be written out and then read from or written
to. So your test device needs to be mounted on /mnt first. As long as
your user has write access there, fio does not need to be run as root.

readwrite: Sets the mix of reads and writes and whether they are
sequential or random.

direct: Use direct IO, bypassing Linux's page cache. If you don't use
this, you'll only be testing Linux's cache, which would distort results
since you're only testing 1GiB of data which could well fit entirely
within your RAM. Note that many storage devices have their own cache,
but this probably isn't relevant for your case.

numjobs: Spawn two processes each of which will be doing the same thing
at once.

bs: Use 4KiB sized IOs. If you can benchmark your real application you
may find it uses different-sized IOs, but if you don't know then 4KiB
is a reasonable start.

iodepth: Each process will issue 4 IOs at once, rather than issuing one
and then waiting for it to complete.

size/runtime: The tests will read or write 1GiB of data but there is
also a time limit of 5 minutes and if that runs out first then the test
will stop. I don't think you need to do many hours of testing here.
After 5 minutes I should think the card will be showing its reasonable
performance.

gtod_reduce: Don't do some tests that require the gettimeofday system
call. Without this, fio can spend a lot of its CPU time calling that
system call instead of benchmarking, and you rarely require the info it
gives back anyway. Run without this option once to see if you really
require it.

group_reporting: Aggregate results from all jobs (processes) within the
test.

| tee -a …: Output the results both to the screen and append to a file
in your home directory.

Of those figures, I consider the random ones more important in most
configurations. i.e. if I had to choose between a device that supported
a bit higher sequential read/write but much lower random read/write,
I'd rather have the random read/write, because that tends to have more
impact on interactive usage than sequential.

SD cards tend to have poor random IO speed so I would never use one for
general purpose computing if I could use an HDD or SSD instead.

To give you some idea of what decent SSDs manage:

http://strugglers.net/~andy/blog/2019/05/29/linux-raid-10-may-not-always-be-the-best-performer-but-i-dont-know-why/

Cheers,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting
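Since the four invocations above differ only in their --name and --readwrite values, they can also be run from a single loop (same assumptions as above: scratch file on /mnt, results appended to ~/fio.txt):

```shell
# Run the sequential then random read/write tests, one fio job each.
# Each test gets its own --name so the result blocks stay distinct.
for rw in read write randread randwrite; do
    fio --name="$rw" \
        --filename="/mnt/fioscratch" \
        --ioengine=libaio \
        --readwrite="$rw" \
        --direct=1 \
        --numjobs=2 \
        --bs=4k \
        --iodepth=4 \
        --size=1g \
        --runtime=300s \
        --gtod_reduce=1 \
        --group_reporting | tee -a "/home/$USER/fio.txt"
done
```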
Re: IPv4 v IPv6
On Fri, Jun 21, 2019 at 10:01:47PM -0500, David Wright wrote:
> On Wed 19 Jun 2019 at 04:23:15 (+1200), Richard Hector wrote:
> > On 19/06/19 4:12 AM, David Wright wrote:
> > > On Mon 17 Jun 2019 at 10:38:27 (-0400), Gene Heskett wrote:
> > >> But that opens yet another container of worms. If I arbitrarily
> > >> assign ipv6 local addresses, and later, ipv6 shows up at my side
> > >> of the router, what if I have an address clash with someone on a
> > >> satellite circuit in Ulan Bator. How is that resolved, by
> > >> unroutable address blocks such as 192.168.xx.xx is now?
> > >
> > > Seems a good reason not to bother setting up ipv6 local addresses
> > > until we (you and I) understand it and ever see ipv6 on this side
> > > of the modem. I'm not holding my breath.
> >
> > If you never try setting it up, when do you expect to understand it?
> > And I see IPv6 on my side of the modem; I suspect many others do
> > too. I expect you'll get it sooner or later.
>
> What's more relevant to me is not when IPv6 is made availble to me, but
> when IPv4 is withdrawn. Until then, I have IPv6 disabled in the router.

This is not quite the case. Here is why:

IPv4 is almost entirely exhausted. In some regions it is already
exhausted. New businesses entering the marketplace who want to advertise
services on the Internet will need to either buy IPv4 on the auction
market or else live behind something called "Carrier Grade NAT" (CGNAT).

CGNAT can be in a couple of different configurations but the most common
are as follows:

- NAT444

  Three networks of IPv4:

  a) Customer's own private (RFC1918) IPv4 network.
  b) Provider's own public IPv4 network, but a much smaller number than
     the sum of customer networks.
  c) The public IPv4 Internet.

- DS-Lite

  Two networks of IPv4 with an IPv6 core:

  a) Customer's own private (RFC1918) IPv4 network.
  b) Provider's IPv6 core.
  c) The public IPv4 Internet.

Now probably if you aren't already behind a NAT444 you're not going to
be put behind one, but it could happen to anyone at this point if they
switch ISPs.

So let's say you are an IPv4 hold-out who visits the site of a small
business that can't afford to buy highly valuable IPv4 addresses of its
own¹. They are very possibly going to be behind a NAT444. If you also
are behind a NAT444 then that's 6 layers of NAT that every packet
traverses!

CGNAT devices are really expensive and not a great solution. They have
to hold a lot of state and any protocol that uses lots of ports can run
them out of their per-IP state limits. As the end users either side
don't have administrative control of the NAT in the middle, it is not
possible without provider assistance to set up permanent mappings, i.e.
to set up servers that permanently hold an IP:port pair. NAT hampers the
ability of end-to-end communication on the Internet.

The good news is that there is a very easy fix. Just start using IPv6.
There is no shortage of IPv6, so no reason why the newcomer sites can't
serve on v6 immediately, and if you view on v6 then you side-step this
entire CGNAT apparatus.

Now, in the North American and European market, outside of cellular
networks, it is still rare to end up behind a CGNAT. In the Asian
markets a lot of people are behind CGNAT because they ran out of v4 a
long time ago. It's coming to us in Europe and North America too.

That is why the stance that, "I have IPv4 so I don't need to do
anything" is not completely correct: it's not urgent for much of the
world at present, but we will get into a situation where either one or
both sides of a given IP conversation are behind multiple layers of NAT
that they don't control, and that's bad.

It is essential though that ISPs turn on v6 and end users use it without
even knowing. That's the only way this gets done.

So I would say that most of the onus is on your ISP, but if they're
doing their bit and providing IPv6 and your side isn't just working with
it without you doing anything then that is a problem that should be
looked into. If they aren't doing their bit and not providing v6 then I
personally would be asking why and looking around for another provider,
but it is the case that a lot of people are in a near-monopoly without
real choice of ISP. Eventually the cost of CGNAT will force even those
tardy ISPs to push out v6 to their subscribers, because there comes a
point where that's cheaper than scaling the CGNAT.

Cheers,
Andy

¹ To give you some idea of how valuable, I looked up what IPv4
addresses are selling for today, and it's about $40k per /21. That
means that my business's most valuable asset as of today is its IPv4
addresses. How will new businesses cope? I didn't have $40k when I
started my business.

--
https://bitfolk.com/ -- No-nonsense VPS hosting
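A rough way to check whether you are already behind CGNAT is to compare the address your host sources traffic from with the address the outside world sees. RFC 6598 reserves 100.64.0.0/10 specifically for CGNAT, so an address in that range is a strong hint. A sketch (the ifconfig.me lookup service is an assumption; any "what is my IP" service would do):

```shell
# Is an IPv4 address inside 100.64.0.0/10 (100.64.x.x - 100.127.x.x),
# the RFC 6598 carrier-grade NAT range?
in_cgnat_range() {
    case "$1" in
        100.6[4-9].*|100.[7-9][0-9].*|100.1[01][0-9].*|100.12[0-7].*)
            return 0 ;;
    esac
    return 1
}

# Source address the kernel would use towards the Internet
# (192.0.2.1 is a TEST-NET address, nothing is actually sent).
wan=$(ip -4 route get 192.0.2.1 2>/dev/null |
      sed -n 's/.*src \([0-9.]*\).*/\1/p')
pub=$(curl -s https://ifconfig.me)   # assumption: example service

echo "local source address:   $wan"
echo "address the world sees: $pub"

if in_cgnat_range "$wan" || in_cgnat_range "$pub"; then
    echo "RFC 6598 address seen: very likely behind CGNAT"
elif [ "$wan" != "$pub" ]; then
    echo "addresses differ: at least one layer of NAT in the path"
fi
```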
Re: Exim latest update reports to world as 4.89, which the world thinks is vulnerable.
Hello,

On Thu, Jun 20, 2019 at 08:45:13PM +0100, Brian wrote:
> At least 2000,000, hosts on the internet. You reckon you will be in
> the first tranche of targets?

I don't know about "amongst the first" but there are multiple services
scanning every port of the entire IPv4 space now and selling access to
the results, e.g. Shodan which has already been mentioned. So the idea
that you don't need to think about hostile actors connecting to your
service because you are 1 in 2bn or whatever, is no longer sound.

For example, for over 10 years I have been putting ssh on a port other
than 22 where I was able to do so, just to cut down on noise in my logs
since every hostile knew to check port 22. This year for the first time
I am finding that mass scanners have found my alternate port and are now
doing dictionary attacks against it. This is because the aforementioned
scanning services have scanned every port of my hosts and are selling
the information that my host has what looks like an sshd on so and so
port. The operators of botnets are buying this information and setting
their botnets to try SSH on those alternate ports too.

So any new bad actor who wants to scan for this vulnerability is just
going to buy access to a list of every host on the Internet that has an
open port 25, maybe an open port 25 running the vulnerable versions of
Exim if that is offered. That will be a very manageable list of IPs.
They won't have to do the scanning themselves. This is only going to get
worse.

I don't think it's security through obscurity to try to hide yourself
from the hostiles if you have already taken steps to protect yourself
and it's just to reduce the amount of noise. I think it's only security
through obscurity if you don't fix it, try to hide and would get
exploited if you were found.

Having said that, I am in full agreement that the correct thing to do if
concerned about the SMTP banner is to change the SMTP banner, not change
the version of the software.

I might even go further and try to find a way to identify and log people
trying this exploit, so that they can be dealt with the same way
persistent SSH dictionary attackers are.

Cheers,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting
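Changing the banner rather than lying about the version might look something like this. smtp_banner is Exim's real main-section option for the 220 greeting; the file path below assumes Debian's split exim4 configuration, so treat it as a sketch and adapt to your local layout:

```shell
# Greet without advertising the Exim version number. The quoted
# heredoc keeps $smtp_active_hostname unexpanded so Exim itself
# substitutes it at runtime.
cat > /etc/exim4/conf.d/main/00_local_banner <<'EOF'
smtp_banner = $smtp_active_hostname ESMTP
EOF

# Regenerate the runtime config and pick it up (Debian-specific).
update-exim4.conf && systemctl reload exim4
```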
Re: System on a chip - performance relative size and setup (how can the (Debian) setup make a difference?)
Hi Erik, On Tue, Jun 18, 2019 at 02:26:57PM +0200, Erik Josefsson wrote: > As far as I understand, it is quite recent that SD cards are fast and large > enough to be able to carry and run an entire Debian instance. Not really recent. I've run Debian sarge on a 128MiB CompactFlash card and I'm sure people have done more extreme things than that. > If this is the case, maybe there is only theory available regarding whether > you can make a computer "run faster" on a 64GB SD card than on a 32GB SD > card when cards are otherwise identical. So firstly, SD cards in the general case aren't that performant or reliable. You can spend more money to get faster and more durable ones. The unique selling point of SD cards is the form factor – they're small and have no moving parts. They're meant to go in devices like cameras, dashcams, cell phones, etc. Given two SD cards that differ only in capacity, I would not expect their performance to differ. The bigger one may last longer (survive more writes) due to you using less of its capacity. > I don't really know how swap works on a standard computer, even less how it > works when the whole computer runs from/on a SD card. It doesn't work any differently, except that swapping onto SD generally isn't great because they aren't that fast and they often have fairly low write endurance. SD cards aren't like SSDs, even though they are both made from a form of flash memory. Modern SSDs and flash drives have much better write endurance than modern SD cards. > Swap is supposed to be make your computer pretend that you have more RAM > than it actually has, but if the whole computer is running from/on RAM (or > is it?), then what does swap mean? I don't know why you have introduced the concept of a computer running from memory, as that is a completely different topic. A computer running from SD card isn't much different to a computer running from an HDD or an SSD. It's just a block device. 
Now, due to the low write endurance of your typical SD card, some people
— especially those making small single-purpose devices — do configure
things to load off of the SD card into memory and then run largely from
memory. This prevents writes into the SD card, thus prolonging its life.
But that tactic is not in any way required when using SD cards and can
be done with any block device.

> On Teres-I with redpill RC2 (now there is a RC3 that I have not yet
> installed) an unfortunate website with pop up commercials (like dn.se)
> can eat all performance there is and freeze the mouse for hours. I
> would guess that could have been fixed on a normal computer with "more
> RAM", i.e., "more swap"? But is the same true for e.g. Teres-I?

Sorry, I am unfamiliar with Teres and redpill.

> Second question is if it is meaningful to buy a "super duper blazing
> fast" SD card for the task to run a whole Debian system?

If you wish to run a general purpose operating system off of an SD card
then yes I would suggest that the fastest and most durable one you can
afford would be a good idea. But also consider a regular SSD as some of
the low capacity ones may compare favourably in price with a specialist
SD card.

> There is a very expensive 64GB SD card from SanDisk that is called
> Extreme Pro that costs twice as much as same size Extreme Plus. Specs
> say it is "super duper blazing fast" for video in "Ultra HD 4K", but
> would Pro also be faster than Plus for the task of running Thunderbird
> and Firefox at the same time?

Running big apps like that will benefit more from having enough memory.
After that is satisfied, fast storage will certainly help. You'll have
to look at the exact specifications of Plus vs Pro.

What are you trying to achieve?

Cheers,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting
Re: An Ounce of Prevention
Hi Bob,

On Tue, Jun 18, 2019 at 12:07:16AM -0400, Bob Bernstein wrote:
> On Tue, 18 Jun 2019, Andy Smith wrote:
> > What happens if you try to ping something? Like:
[…]
> PING linode.com(2600:3c00::22 (2600:3c00::22)) 56 data bytes
> 64 bytes from 2600:3c00::22 (2600:3c00::22): icmp_seq=1 ttl=51 time=63.4 ms

So it seems you have IPv6 connectivity but not IPv4, although you do
have your v4 address configured on the interface. I'd be interested in
seeing your routing table (the "ip route show" command I mentioned
before).

Cheers,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting
Re: An Ounce of Prevention
Hi Bob,

On Mon, Jun 17, 2019 at 09:21:57PM -0400, Bob Bernstein wrote:
> iface eth0 inet static
>     address 192.168.1.40
>     netmask 255.255.255.0
>     gateway 192.168.1.1
>     dns-nameserver 8.8.8.8
[…]
> 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
>     link/ether 00:24:21:87:09:c2 brd ff:ff:ff:ff:ff:ff

So you configure eth0 and you do actually have an eth0, so that's
alright.

> 2: eth0: mtu 1500 qdisc mq state UP group default qlen 1000
>     link/ether 00:24:21:87:09:c2 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.1.40/24 brd 192.168.1.255 scope global eth0
>        valid_lft forever preferred_lft forever

…and your eth0 is currently up and has the IP address you configured, so
that's okay. So I do wonder why it's not working.

What does this say?

$ ip route show

Just wondering what your default route is.

And how are you determining that networking doesn't work? i.e. what are
the symptoms? What happens if you try to ping something? Like:

$ ping 8.8.8.8

I am ignoring the tun0 stuff right now but that could possibly be
related.

Thanks,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting
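For reference, with the static configuration quoted above a healthy routing table would look roughly like this. This is a sketch of the expected state, not output from the machine in question:

```shell
$ ip route show
default via 192.168.1.1 dev eth0
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.40

# If the default route is missing, it can be added by hand while
# debugging (ifupdown normally installs it from the "gateway" line):
$ sudo ip route add default via 192.168.1.1 dev eth0
```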
Re: An Ounce of Prevention
Hello,

On Mon, Jun 17, 2019 at 08:39:32PM -0400, Bob Bernstein wrote:
> So, any hints about networking?

Possibly your network interface has changed name due to persistent
naming?

In any case, please can we see the contents of your
/etc/network/interfaces file, and the output of:

$ ip link
$ ip address show

Or do you generally have no networking until X starts and gives you
NetworkManager, etc?

Cheers,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting
Re: IPv4 v IPv6
Hello,

On Mon, Jun 17, 2019 at 04:11:32PM -0400, Robin Hammond wrote:
> The size of such a routing table gives me nightmares ! Thank goodness
> you have to advertise networks of a reasonably sized prefix length!

I wouldn't worry too much about the number of v6 routes. In terms of
addressing and routing policy, this second go with v6 has afforded some
chances to correct mistaken assumptions made with v4 that later became
impossible to undo.

I would worry more about the number of v4 routes. As v4 runs out
globally (already has in some regions), there is increased pressure to
carve up allocations so that they can be traded.

For example, if you look at an arbitrary v4 auction site:

    https://auctions.ipv4.global/

(I picked this from a web search and have no information about it, so
it's not an endorsement)

You see that an ARIN /24 (256 addresses) currently goes for around
$5.6k.

Let's say you have a /21 but you're only using the bottom half. At the
moment your route in the global routing table is a single /21 route. But
hey, people want to buy IPs, and you have 8 /24s in your /21. You're
only using 4 of them (the bottom half as I say). So you auction the top
4 off to 4 different buyers. Now the global routing table needs one /22,
for your bottom half, and then four /24s: five routes where there was
one, so it grew fivefold.

It is not yet quite that bad because a /24 is really still a bit too
small to route. Some providers may not accept the announcement. But as
the availability goes down and the prices go up, people are going to
want to route /24s routinely.

That is on top of the number of orgs who got an allocation that proved
to be too small so they went back for an extra one, thus doubling the
number of routes.

Meanwhile in IPv6 land, Regional Internet Registries tried really hard
to give out allocations so big that very few applicants should ever need
to come back for a second one (and thereby introduce another global
route).

Cheers,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Output from apt-get update.
Hi Peter,

On Tue, Jun 11, 2019 at 07:28:12PM -0700, pe...@easthope.ca wrote:
> E: Failed to fetch file:/full/path/to/build/packaging/deb/./Packages
>    File not found - /full/path/to/build/packaging/deb/./Packages
>    (2: No such file or directory)
> E: Some index files failed to download. They have been ignored, or old
>    ones used instead.
> root@imager:/home/peter#
>
> The last two lines, "E: ...", appear to be error messages. The two
> systems have the same /etc/apt/sources.list. Any ideas about the
> difficulty in imager?

The second machine seems to have additional sources. They may be in a
file under /etc/apt/sources.list.d/ rather than in
/etc/apt/sources.list itself.

Cheers,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting
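A quick way to see every source apt is actually configured with, across both locations, is a single grep (a sketch; -s just silences the error if sources.list.d happens to be empty):

```shell
# Print all "deb" lines from the main sources.list and from every
# fragment file under sources.list.d/, so the extra file: source
# stands out when comparing the two machines.
grep -sh '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
```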
Re: Ping as normal user (Was: Why /usr/sbin is not in my root $PATH ?)
Hello, On Fri, May 31, 2019 at 08:48:36AM -0500, Jason wrote: > On Wed, May 29, 2019 at 11:46:50PM +0000, Andy Smith wrote: > > How did you install this system? […] > > One other person in this thread said they used (a script which > > ultimately uses) debootstrap. > > This system was installed on an SBC (similar to RPi) from a zipped > filesystem image, dd'd to the onboard eMMC chip. It sounds likely that something in that process failed to copy across file capabilities. As previously mentioned, some care has to be used with tar for example, if you want to (re)store these. So that's something to be aware of I guess… Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
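The "care with tar" mentioned above comes down to the security.capability extended attribute, which plain tar silently drops. A sketch of preserving it with GNU tar, plus the by-hand repair (the /target and /newroot paths are example placeholders):

```shell
# Copy a filesystem while keeping file capabilities. Without --xattrs
# the security.capability attribute is lost, which is exactly how a
# retarred image ends up with a /bin/ping that only works as root.
tar --create --xattrs --xattrs-include='security.capability' \
    -f rootfs.tar -C /target .
tar --extract --xattrs --xattrs-include='security.capability' \
    -f rootfs.tar -C /newroot

# Or repair an already-broken binary directly (needs libcap2-bin):
setcap cap_net_raw+ep /bin/ping
getcap /bin/ping
```

Extraction needs to run as root for the security.capability attribute to actually be restored.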
Re: Ping as normal user (Was: Why /usr/sbin is not in my root $PATH ?)
Hello, On Thu, May 30, 2019 at 09:08:38AM +0300, Reco wrote: > Easy. You run debootstrap, set some --include options (which pull > libcap2-bin by dependency), and then you tar the whole resulting > filesystem. > tar never understood file capabilities, so they are lost in the process. Sure, tar is one of the example ways I mentioned before of how I've seen this go wrong. > debootstrap (no --variant) does install iputils-ping, but does not > install libcap2-bin. Hence iputils-ping postinst script simply sets > suid bit on /bin/ping as postinst cannot locate setcap. Oh, that's interesting. I didn't think of the case where there is no libcap2-bin. Still, these reporters aren't getting a suid bit either, so I guess there must be something else going wrong. Not debootstrap. Cheers, Andy
Re: Ping as normal user (Was: Why /usr/sbin is not in my root $PATH ?)
Hi Cindy,

On Wed, May 29, 2019 at 09:48:44PM -0400, Cindy Sue Causey wrote:
> So, yeah, at least for Debootstrap. "iputils-ping" is in there at the
> absolute very first start where the Developers have picked the very
> first packages that get the party started before the User then picks
> everything else...

That's not the issue at hand. The issue is whether the file /bin/ping
retains the file capabilities. People who have a /bin/ping that only
works as root are missing these:

$ getcap /bin/ping
/bin/ping = cap_net_raw+ep

If they didn't have the package installed at all then it would be a very
different and more obvious error that was presented.

So my question is, are installs done by debootstrap somehow losing the
file capabilities? I ask because in this thread, one of the other people
reporting a /bin/ping without the correct capabilities did their install
through debootstrap.

If you've just done a debootstrap, what does getcap return for the
/bin/ping that got installed?

Cheers,
Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Ping as normal user (Was: Why /usr/sbin is not in my root $PATH ?)
Hi Jason, On Wed, May 29, 2019 at 04:18:51PM -0500, Jason wrote: > On Mon, May 27, 2019 at 08:12:32AM +0300, Andrei POPESCU wrote: > > While I didn't mention it in this thread, ping had indeed somehow lost > > its capabilities on my system. 'dpkg-reconfigure iputils-ping' fixed it. > > That worked for me (I'm not the OP) with Stretch on an ARM board. Before > running the above command, I could only ping as root or using sudo, now > I can ping as a normal user. Thanks! How did you install this system? Because /bin/ping is supposed to come with file capabilities such that the user can allow it to do what it needs to do (this is part of what 'dpkg-reconfigure iputils-ping' restores). So it would be interesting to know how the system was installed in case there is a general theme for those who never got those capabilities. One other person in this thread said they used (a script which ultimately uses) debootstrap. Cheers, Andy
Re: Insidious systemd
Hello, On Tue, May 28, 2019 at 10:16:27PM +0100, Liam O'Toole wrote: > On 2019-05-27, Patrick Bartek wrote: > > Needing to convert this box from wired ethernet to wireless, I searched > > for a suitable network manager and wicd looked good: No desktop > > environment dependencies (I use a window manager Openbox and single > > lxpanel), compatibility with Openbox, etc. Imagine my surprise when > > during the simulated install (I always check), I discovered systemd > > init was set to replace sysvinit. I had converted Stretch to the > > latter during its install last year, but left the systemd libraries. […] > Are you trolling? You need to talk to the maintainers of wicd and ask > them why there is a systemd dependency. Before bothering developers/maintainers, it would be best to do what 'bw' did elsewhere in this thread and demonstrate that it's most likely to be because OP is allowing apt to install recommends, and wicd recommends gksu, which depends on systemd. OP does claim that their apt already is configured to not install recommends, but another poster demonstrated that when they forcibly disable recommends their apt does not try to install gksu, whereas OP's does. So clarification is pending from OP on this matter. The phrasing of the question has I think led to some unfortunate diversions. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: That time IPv6 farted in Gene's church (Was Re: forcedeth?)
Hi Jimmy, On Tue, May 28, 2019 at 10:15:28AM -0700, Jimmy Johnson wrote: > >On Mon, May 27, 2019 at 10:13:12AM -0700, Jimmy Johnson wrote: > >>On 05/26/2019 11:03 PM, Andy Smith wrote: > >>>There doesn't seem to be any point in interacting further. > >> > >>Andy that's the most helpful thing you've said, > > > >I guess you missed the response where the very first thing I did was > >to actually show Gene how to disable IPv6, even though I predicted > >it wasn't his issue, and then he later agreed (in a response to > >another poster) that it wasn't his issue. > > You predicted it wasn't his issue, you knew that for sure, like you know > what the issue is but your not saying. That's not a question, I'm being a > sounding board. I've no idea what Gene's issue is because he never gives enough information to work it out, and you can try to extract that from him but all you get is rambling. However, his direct question (how to disable IPv6) was very simple to answer (so I did), and seeing him state in multiple places that IPv6 was the root cause of his problems I thought it was worth trying to get to the bottom of that. I was wrong about that second bit: in general it is worth it, but Gene doesn't want to, and you can't debug without the user's consent. Once again I think it is unfair of you to say that me giving up is the most helpful thing I've said, when I did actually answer his direct question. > IPv6 just like systemd, kernel modules, programs with added code, etc. and I > really can go on, but the point is the mere mention of somethings start > arguments and trolling since day one. So am I to understand that you think it is fine that someone posts misinformation for reasons known only to themselves and should never be asked to justify it? I'm not interested in some rambling war about X vs Y, I'm interested in seeing a concrete issue and fixing it. 
If someone wants to say "XYZ software sucks" or even, "XYZ was made by the Communists/NSA/aliens to destroy Linux" then I'm not going to bite, because that's not something you can ever disprove to someone. However, when someone makes direct claims like "IPv6 being enabled in my kernel caused me to be unable to configure IPv4, and also broke my XYZ software's configure, and, and, and, …" then that is something that can and should be investigated. As you've seen, once it's been shown that someone only wants to post emotional rambling I give up on them, so it's never going to go any further. However, you are prolonging it by popping up to throw around accusations of being a troll… > >>so please don't troll. > > > >Is your definition of trolling "asking someone to back up their > >statements"? > > Yes, often it is trolling, it's a deliberate act of discrediting a poster > who may often be expressing his personal experience working with technology. Wow. I didn't realise that Gene was being awarded a participation medal and being encouraged to just fill the list with creative writing. It really doesn't seem like that would scale if encouraged for every poster. The problem isn't that Gene has difficulties. We all have difficulties. The problem is that Gene invents a root cause, declares the problem solved, won't accept any attempt to prove/disprove his theory, and then just restates his opinion as fact over again in other threads. I don't think it helps Gene to encourage that. It certainly won't help anyone else searching for bugs with the same software. > So please don't troll. I don't accept this as a definition of trolling, but you can rest easy that you won't see any more of it from me in response to Gene. Anyone else who is actively avoiding answering direct questions or carrying out simple diagnosis when being helped though, is going to get asked why. 
The first few times could be miscommunication or I could always have misunderstood, but with some people a pattern does quickly emerge. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: That time IPv6 farted in Gene's church (Was Re: forcedeth?)
Hello, On Mon, May 27, 2019 at 10:13:12AM -0700, Jimmy Johnson wrote: > On 05/26/2019 11:03 PM, Andy Smith wrote: > >There doesn't seem to be any point in interacting further. > > Andy that's the most helpful thing you've said, I guess you missed the response where the very first thing I did was to actually show Gene how to disable IPv6, even though I predicted it wasn't his issue, and then he later agreed (in a response to another poster) that it wasn't his issue. > you are trolling Gene a longtime Debian Linux User who is having > problems adjusting to not having Debian Linux any longer, like so > many others who are longtime Debian Users, I'm having trouble making sense of what you're saying there, but what I am doing is taking issue with Gene's insistence that IPv6 is responsible for every problem he encounters. > so please don't troll. Is your definition of trolling "asking someone to back up their statements"? If not then I'd be interested to know what it is that I'm doing that you think is trolling. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
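For reference, the disabling itself is just a couple of sysctls (a sketch; the runtime knobs are the standard kernel ones, and /etc/sysctl.d is the usual Debian location for making it persistent):

```shell
# Turn IPv6 off at runtime on all interfaces (run as root):
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1

# Make it survive a reboot:
printf '%s\n' \
    'net.ipv6.conf.all.disable_ipv6 = 1' \
    'net.ipv6.conf.default.disable_ipv6 = 1' \
    > /etc/sysctl.d/90-disable-ipv6.conf
sysctl --system
```

As the thread shows, though, disabling IPv6 is rarely the fix for an IPv4 problem.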
Re: That time IPv6 farted in Gene's church (Was Re: forcedeth?)
Hello,

On Mon, May 27, 2019 at 12:41:36AM -0400, Gene Heskett wrote:
> On Sunday 26 May 2019 10:09:49 pm Andy Smith wrote:
> > On Sat, May 25, 2019 at 11:25:26AM -0400, Gene Heskett wrote:
> > > No Andy, it didn't drink my last beer (Murphy does that), or kill
> > > any kittens but it did totally disable ipv4. How? Simply by
> > > refusing to apply a route/gateway to the ipv4 settings we do
> > > manually.
> >
> > Can you show the archive link to the email where it was established
> > that having IPv6 enabled in the kernel prevented your IPv4
> > configuration from being applied?

Somehow you have failed to respond to this very simple request, opting
instead to just ramble on restating yourself. There doesn't seem to be
any point in interacting further.

It's a shame that you waste everyone's time with these delusions. Not
just people trying to help you but also those future searchers who are
having problems with the same software as you and are led on a wild
goose chase when you report that IPv6 is the root cause, amidst pages
and pages of distraction, yet somehow never get around to explaining how
or why.

Andy

--
https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Why /usr/sbin is not in my root $PATH ?
Hello, On Mon, May 27, 2019 at 08:12:32AM +0300, Andrei POPESCU wrote: > On Lu, 27 mai 19, 02:15:49, Andy Smith wrote: > > > > Glenn, and Andrei, do you do anything out of the ordinary to > > install? > > https://salsa.debian.org/amp-guest/pine64/blob/master/pine64_buildimage Seems it does a debootstrap, so it would be interesting to know if a plain old debootstrap results in a /bin/ping with the correct capabilities… Cheers, Andy
Re: Why /usr/sbin is not in my root $PATH ?
Hello,

On Sun, May 26, 2019 at 07:41:41AM -0600, ghe wrote:
> On 2/21/19 11:12 AM, ghe wrote:
>
> > Another Busterism, BTW: ping now requires root privileges. It does on my
> > computer, anyway. Maybe I made a mistake when I installed -- somebody
> > sure did.
>
> Fix: 'alias ping="sudo ping"' in .bashrc. I'm on Buster too :-)

After a normal install, /bin/ping should end up with the capabilities such that it can do what it needs to do. These are:

    $ getcap /bin/ping
    /bin/ping = cap_net_raw+ep

If yours has not ended up with those capabilities, I think that is a bug in whatever method of install you have used.

Glenn, and Andrei, do you do anything out of the ordinary to install?

Myself I have seen this happen when untarring the operating system, as by default tar does not store or re-apply such capabilities.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting
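[Editor's sketch, not part of the original mail: the check above can be scripted. The helper below only parses a getcap-style output line, so it can be tried anywhere; the real getcap/setcap commands need the libcap tools, and setcap needs root.]

```shell
# has_net_raw inspects a getcap-style output line for the capability
# that ping needs; pure text processing, no privileges required.
has_net_raw() {
    case "$1" in
        *cap_net_raw*) return 0 ;;
        *) return 1 ;;
    esac
}

# On a real system (paths as in the mail; setcap needs root):
#   caps=$(getcap /bin/ping)     # e.g. "/bin/ping = cap_net_raw+ep"
#   has_net_raw "$caps" || setcap cap_net_raw+ep /bin/ping
has_net_raw "/bin/ping = cap_net_raw+ep" && echo "capability present"
```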
That time IPv6 farted in Gene's church (Was Re: forcedeth?)
Hello, On Sat, May 25, 2019 at 11:25:26AM -0400, Gene Heskett wrote: > On Saturday 25 May 2019 07:33:01 am Andy Smith wrote: > > My recollection was that none of that was ever established in any of > > the threads you posted here, so that is a really weird thing to keep > > stating. Did IPv6 use all your toilet paper and kick your dog or > > something? > > You just pulled my trigger. > > No Andy, it didn't drink my last beer (Murphy does that), or kill any > kittens but it did totally disable ipv4. How? Simply by refusing to > apply a route/gateway to the ipv4 settings we do manually. Can you show the archive link to the email where it was established that having IPv6 enabled in the kernel prevented your IPv4 configuration from being applied? Otherwise that is a very strange thing to keep asserting. > And depending on the phase of the moon, those of us on host file > networks are forced to edit the /e/n/i/config files and > immediately chattr +i them in order to protect them from N-M's > incessant meddling, ditto for resolv.conf, which we have to make > into a real file, and chattr +i it for the same reason. For a > while we could remove N-M on armhf-jessie but now its somehow > linked to our choice of desktops so the only way is to rm it by > hand, or chattr +i everything it touches. N-M at least has the > common decency to not complain or go crazy when it finds itself > locked out of its playpen. Unforch I can't say the same for hpfax, > in the hplip package you get with cups. Its crashed this machine > 6 or 7 times by killing hid-common, leaving the only working > button the reset button on the machines front panel. Somebody put > a call to hpfax in the root crontab, and when it gets called with > nothing to do it goes postal killing all input devices on the usb > bus by killing hid-common. A separate problem of course, one that > hp needs to fix before buster goes live. Unclear how any of the above is or could ever be related to IPv6. 
> You folks with ipv6 all think we should all just switch and be done with > it, The vast majority of Linux users have already switched to having IPv6 enabled, because it has been enabled by default for years, and IPv6 addresses appear on every interface. If you are finding bugs then it would be good to report them instead of howling at the moon. > You all claim that N-M won't bother an interface defined as static. Thats > an outright blatant lie, I'm sure I may have said that somewhere although I don't think I've said it to you. But also, it doesn't seem to have any relationship to IPv6. Again, I suggest if you find bugs in NetworkManager that you report them, not invoke the IPv6 bogey man until and unless you're certain that it is that dread creature which plagues you. > Put a kill switch in that puppy. defaulted to off. And take a survey to > see how many have turned it on a year from now. I'll be apologetic if > its more than the 5% carrying their lappy to dunkin donuts. Could you apologise right now then since IPv6 has already been enabled for decades and the vast majority of users experience no problem? > /ipv6 rant. Rants aren't so bad; it's when they are utterly clueless and devoid of factual content that one tends to come off looking like an absolute lunatic. Hopefully though you aren't a lunatic and can point me to this exact situation where the enablement of IPv6 in your kernel caused something to break, cos then we can get some bugs fixed instead of just spilling more performance art onto the interwebs. Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: forcedeth?
On Sat, May 25, 2019 at 06:52:00AM -0400, Gene Heskett wrote: > the installer locked me to ipv6, and the nearest ipv6 connectivity > is probably in Pittsburgh PA, 140 some miles north of me. The > installer hasn't brains enough to try ipv4 when it can't find > anything working in ipv6. My recollection was that none of that was ever established in any of the threads you posted here, so that is a really weird thing to keep stating. Did IPv6 use all your toilet paper and kick your dog or something? Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: I need to totally stop ANY indication of ipv6 connectivity, how.
Hi Gene,

[You asked how to do this so I am answering, but for the record I don't believe it is a good idea to disable the current version of the Internet protocol and rely on the legacy Internet protocol. If there are problems with IPv6 then I think they should be fixed, not disabled.]

On Sat, May 25, 2019 at 05:38:22AM -0400, Gene Heskett wrote:
> I have the following in /etc/sysctl.conf:
>
> net.ipv6.conf.all.disable_ipv6 = 1
> net.ipv6.conf.default.disable_ipv6 = 1
> net.ipv6.conf.lo.disable_ipv6 = 1
> net.ipv6.conf.eth0.disable_ipv6 = 1
> net.ipv6.conf.eth1.disable_ipv6 = 1
> net.ipv6.conf.ppp0.disable_ipv6 = 1
> net.ipv6.conf.tun0.disable_ipv6 = 1

Note that there can be race conditions because some of these sysctls may not exist until the network interfaces themselves exist and/or are brought up. You may therefore prefer to disable IPv6 at the kernel command line level, or by using post-up commands in each interface stanza in /etc/network/interfaces.

Disable at kernel command line:

    ipv6.disable=1

Example of disabling in /etc/network/interfaces:

    auto eth0
    iface eth0 …
        address …
        netmask …
        gateway …
        post-up echo 1 > /proc/sys/net/ipv6/conf/$IFACE/disable_ipv6

(assuming you leave the entries for "all" and "default" in /etc/sysctl.conf)

> but its not enough to stop this from ip a:
> : eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
> link/ether 00:1f:c6:62:fc:bb brd ff:ff:ff:ff:ff:ff <--ipv6 crap.

This isn't "ipv6 crap". That is the MAC address of your Ethernet interface at the link level. IPv6 addresses in the output of "ip a" start with "inet6", not "link/ether".
For example:

    2: eth0: mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether aa:00:00:4b:a0:c1 brd ff:ff:ff:ff:ff:ff
        inet6 2001:ba8:1f1:f019::2/128 scope global
           valid_lft forever preferred_lft forever

Possibly there isn't even any IPv6 addressing present and you are experiencing some other problem so, as usual, I recommend focusing on the details of the actual problem you have rather than trying to guess what the cause is and only asking about that.

> Any ipv6 connectivity is at least 100 miles away.

Incorrect; IPv6 can be used on a local network, and on the loopback interface of every host. As it is the current version of the Internet protocol, it can and should be used any time any IP networking wants to be used, even within the same host. If you want to disable that and revert to legacy protocols that should be recognised as a deviation from the norm.

> So I need a way to shut it off so thouroughly that nothing in a
> ./configure script believes its a working connection.

Almost certainly not your issue. Apps are supposed to support IPv6 but only use it if there is a global scope IPv6 address on the interface that the app is trying to use. We don't even know if that is what is happening because you haven't shown us the actual problem.

> Its creating all sorts of hate, discontent and general havoc
> trying to build a tarball because it makes things disable ipv4,
> the only WORKING connectivity my little 7 or 9 machine network,
> all behind dd-wrt anyway, has.

If it really is causing problems then show us those problems and I'm sure they can be fixed, but you haven't demonstrated any, so it's highly likely that this is a result of a misunderstanding.

> Please advise, we are still in jurrasic park here in WV.

Fortunately the Linux kernel and network stack is not.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting
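[Editor's sketch, not part of the original mail: the "is a global-scope IPv6 address actually present?" test can be scripted. The helper is pure text processing; the sample line is illustrative.]

```shell
# has_global_inet6 greps "ip -6 addr"-style output for a global-scope
# inet6 line; link-local (fe80::) addresses do not count.
has_global_inet6() {
    printf '%s\n' "$1" | grep -q 'inet6 .* scope global'
}

# On a live system: has_global_inet6 "$(ip -6 addr show dev eth0)"
sample='inet6 2001:ba8:1f1:f019::2/128 scope global'
has_global_inet6 "$sample" && echo "global IPv6 address present"
```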
Re: Testing netinstall, but use stable release?
Hi Rory, On Wed, May 08, 2019 at 06:32:43PM +0100, Rory Campbell-Lange wrote: > Is there a clever way of downgrading the installation to stable while > keeping the testing kernel and associated ixgbe module? If you run the daily netinst in expert mode doesn't it let you pick the release you want to install? https://wiki.debian.org/DebianInstaller/FAQ#Q:_How_can_I_install_sid_.28unstable.29_with_DebianInstaller.3F Start the installation in expert mode. After selecting a mirror you will be asked which distribution to install: stable, testing or unstable. We recommend using a daily build of the installer to install testing or unstable. However, I think while this might run the netinst with a kernel and ixgbe module that works for you, it may install a kernel and module from stable, which apparently does not. So you may need to find a way to get the right kernel+module on there before you do the final reboot into your new install. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting signature.asc Description: Digital signature
Re: How do I trace changes in configuration files?
Hi Erik, On Wed, May 01, 2019 at 11:35:48AM +0200, Erik Josefsson wrote: > I have tried to document my personal preferences before, but I have always > ended up with unreadable handwritten notes. > > This time I thought I should do it in a more systematic way by somehow > capture the difference between the default install and the result of my > (often irrational) efforts to make my machines look and feel like I want it > to. > > So, is there a way to trace/record/capture changes in all configuration > files? I like to invert the problem by not making any changes to configurations except through a config management system like Ansible, Chef, Puppet, etc. That enforces treating the configuration of the system like a software project, so changes are recorded in a version control system like git; what you did is automatically documented just by the act of doing it, so forgetting to document is less of a problem. Running through the config management again should bring the new machine into the same state. (Of course, documenting WHY you made a change or any more detail than what the change was, is still on you!) I don't find it overkill even for just one or two machines, though it does of course come into its own with much larger numbers of machines. Just from the documentation and reproduction angles I find it worth it. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting signature.asc Description: Digital signature
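[Editor's sketch, not part of the original mail: if a full config management system feels like too much to start with, the same record-keeping idea can be had by keeping /etc in git; the etckeeper package automates exactly that. Below is a bare-git version against a scratch directory so it is safe to try; the file name and contents are illustrative.]

```shell
# Track a config directory in git so every change becomes a recorded diff.
etc=$(mktemp -d)
echo 'PermitRootLogin no' > "$etc/sshd_config"
git -C "$etc" init -q
git -C "$etc" add -A
git -C "$etc" -c user.name=me -c user.email=me@example.com commit -qm baseline

# ...later, after a deliberate change...
echo 'PermitRootLogin prohibit-password' > "$etc/sshd_config"
git -C "$etc" add -A
git -C "$etc" -c user.name=me -c user.email=me@example.com commit -qm 'restrict root login'

git -C "$etc" log --oneline    # the change history, documented by doing it
```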
Re: Stretch with MATE DE - odd new file association problem
Hello, On Wed, Apr 17, 2019 at 10:53:34AM +0500, Alexander V. Makartsev wrote: > It looks like Synaptic starts with user privileges as a wrapper and > PolicyKit asks for root privileges to spawn actual synaptic process. Ah okay, my mistake. Synaptic is another piece of software I've never really used and had only got the impression from another thread that it was having some issues with running under Wayland as it is a GUI app that wants to do things as root. Good to hear that it uses some privilege separation and isn't in fact running the whole app as root. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Stretch with MATE DE - odd new file association problem
Hi, On Tue, Apr 16, 2019 at 01:15:07PM +0500, Alexander V. Makartsev wrote: > On 16.04.2019 12:08, Andy Smith wrote: > > I don't have seamonkey installed so haven't tried myself, but does > > it even run as root? > > > I don't use SeaMonkey either, but I'm pretty sure it does not need root > privileges to run. :) My point is that the OP is asking why, when they click a link from within a GUI app that is running as root (synaptic), the wrong browser is launched. So what I am asking is, once the correct browser is set up via alternatives, isn't it going to launch seamonkey as root? Further implication being, it is good to learn about the alternatives system, but maybe not a great idea to solve this problem this way. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: A call to drop gnome
Hello, On Tue, Apr 16, 2019 at 04:54:02PM +1000, Keith Bainbridge wrote: > I say this is NOT freedom. > > Of course new users accept the defaults on a fresh install - I guess that > like me 20 years ago, they presume the defaults will work best. > > > So, I am asking that gnome be dropped as an installation option (not just as > the default desktop) until they encourage freedom. Just so I understand correctly, is this a serious request to remove the freedom of the current GNOME user base to install it, in order to encourage the GNOME project to be more free? As in, it is not enough that an ignorant GNOME user might stumble across something they can't configure and decide to explore other desktop environments: they must be prevented from initially using GNOME, instead being forcibly introduced to some other DE and only able to use GNOME by installing it later with the package manager? Perhaps I have misunderstood, but if I haven't, which DE do you think should be the new default? Would it be Mate as you use now, or some other? Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Stretch with MATE DE - odd new file association problem
Hello, On Tue, Apr 16, 2019 at 04:39:06AM +0500, Alexander V. Makartsev wrote: > You can add SeaMonkey config manually if you want, but you will have to > track the changes after that. > $ sudo update-alternatives --install /usr/bin/x-www-browser > x-www-browser /your/path/to/seamonkey 100 I don't have seamonkey installed so haven't tried myself, but does it even run as root? If it does, I would consider running a web browser as root and then actually using it to view a web site (that I didn't 100% control, etc) to be really really unwise, for security reasons. But maybe I am not giving it enough credit; perhaps it drops privileges before continuing, or something. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
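[Editor's sketch, not part of the original mail: for completeness, the alternatives commands around the one quoted above. The seamonkey path is the thread's own placeholder; changing alternatives needs root.]

```shell
# Inspect what is currently registered for the generic browser name:
update-alternatives --display x-www-browser

# Register a browser (as in the quoted mail) and then select it explicitly:
sudo update-alternatives --install /usr/bin/x-www-browser \
    x-www-browser /your/path/to/seamonkey 100
sudo update-alternatives --set x-www-browser /your/path/to/seamonkey
```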
Re: Measuring (or calculating) how many bytes are actually written to disk when I repeatedly save a file
Hello, On Mon, Apr 08, 2019 at 01:32:48PM -, Curt wrote: > How about: > > Subject: SSD for frequent edits of large text files? It's really hard for me to imagine any form of human editing of a text file that could wear out a modern SSD. Natural language text files just aren't that big, and human fingers and brains just don't operate that fast. To wear these things out you need multiple users, vast numbers of small writes (because the erase size of an SSD is typically 1MiB or more, so at minimum it writes that every time), things of that nature. One person revising their memoirs for example is not going to hit it, even if they are as loquacious and capricious as Richard Owlett avoiding giving a direct answer to a reasonable question. Honestly my advice to the OP as suggested what seems like many days ago remains: just take a measure, do a day or two of work, take another measure, check the difference in byte count and extrapolate from there. I'd be amazed if you didn't end up with multiple decades of write headroom. Too much has been written here on this subject without actual testing of the realities. We can debate forever how many angels can dance on the head of a pin, but in this case both the pin and the angels are extremely easy to quantify as they come with spec sheets and SMART attributes. Perhaps it is an attempt to exhaust our brains' collective write endurance. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Measuring (or calculating) how many bytes are actually written to disk when I repeatedly save a file
Hi, On Sat, Apr 06, 2019 at 01:39:27PM -0400, rhkra...@gmail.com wrote: > Background: I am considering buying a new disk (and will write an email later > with some other questions or observations about the process), but I know > that, > at least often for SSD drives, they now specify what I will call the > longevity > in terms of TB TBW (iiuc, that is terabytes total bytes written). "TBW" in the endurance specs for SSDs is normally "Terabytes Written". Also that may be 10¹² (10^12) bytes or 2⁴⁰ (2^40) bytes. Another common metric is Drive Writes Per Day (DWPD). Like Alexander I use SMART attributes to monitor this. As Alexander says, the usual attribute is 241. You will have to check what 241 corresponds to though. For example, on some of my machines 241 is described as "Total_LBAs_Written" and measures 512 byte sectors. On others I've found it uses units of 1MiB (2²⁰ bytes), 25MiB or 1GiB! You can test by writing a known quantity of data to the device (say, with dd) and then checking out with smartctl how much the counters altered. Here's a blog post where I did this with some flash devices to determine the 241 unit: http://strugglers.net/~andy/blog/2016/11/26/supermicro-sata-dom-flash-devices-dont-report-lifetime-writes-correctly/ Given that you can easily measure how much is written to the device, do you still need to measure how much is written when editing specific files? Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
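[Editor's sketch, not part of the original mail: the unit determination described above boils down to one division. The device name, dd target and counter values below are illustrative, and smartctl needs root.]

```shell
# unit_bytes: given bytes actually written and the observed change in
# SMART attribute 241, return how many bytes one counter tick represents.
unit_bytes() {
    echo $(( $1 / $2 ))
}

# Real-world flow:
#   before=$(smartctl -A /dev/sda | awk '$1 == 241 { print $10 }')
#   dd if=/dev/zero of=/tmp/testfile bs=1M count=1024 oflag=direct conv=fsync
#   after=$(smartctl -A /dev/sda | awk '$1 == 241 { print $10 }')
#   unit_bytes $((1024 * 1024 * 1024)) $((after - before))
unit_bytes $((1024 * 1024 * 1024)) 2097152   # 512, i.e. 512-byte sectors
```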
Re: About Debian Workshop
Hi Abhishek, The Debian Indian Community may also be able to provide more local knowledge; there may even be some of them at your institution. https://wiki.debian.org/DebianIndia Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: 'synaptic' removed from buster
Hello, On Fri, Apr 05, 2019 at 07:44:32AM -0500, Richard Owlett wrote: > I installed Buster *only* because an application I'm investigating requires > the version of Python in Buster. Another option may be to use "virtualenv" or a similar trick to run just the upstream Python and required modules atop a normal Debian stable. https://www.pythonforbeginners.com/basics/how-to-use-python-virtualenv Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
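[Editor's sketch, not part of the original mail: the venv module shipped with Python 3 is the modern stand-in for the virtualenv tool mentioned above. The --without-pip flag avoids needing Debian's python3-venv/ensurepip bits; drop it if you want pip inside the environment. The package name in the comment is a placeholder.]

```shell
# Create an isolated Python environment and use its interpreter.
env=$(mktemp -d)/app-env
python3 -m venv --without-pip "$env"
"$env/bin/python" -V           # this interpreter sees only the venv's packages
# "$env/bin/pip" install <the-application>   # whatever modules the app needs
```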
Re: Solved, maybe (was: Re: Help updating a Jessie installation to Jessie LTS)
Hello, On Sun, Mar 31, 2019 at 04:35:44PM -0400, rhkra...@gmail.com wrote: >* Aside: what the heck does errorackage mean / stand for and what is it > telling me? Who came up with that "word"? Ah, I thought you were aware of what it was printing. It was printing "Bus error" over the top of its normal output, which was presumably something about "Package". Hence "Bus errorackage". If everything works now then it seems quite likely that apt was getting a bus error because of your filesystem(s) being full. That's a pleasing outcome since now you don't have to test your memory, power supply, etc etc! If you can reproduce that I think it would be worth a bug report, because it's really not nice to see "bus error". Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: debian packes broken
Hi Manikanta, On Sun, Mar 31, 2019 at 09:51:32PM +0530, manikanta bandaru wrote: > Err http://deb.debian.org jessie-updates/main amd64 Packages jessie-updates doesn't exist any more; remove any lines referring to "jessie-updates" (NOT "jessie/updates") from your /etc/apt/sources.list and do "apt update", and think about when you are going to upgrade. :) This message hints that there will be no further uploads to jessie-updates: https://lists.debian.org/debian-announce/2018/msg2.html though it does not come out and say it explicitly. Unfortunately the only explicit official announcement of the removal I saw was at: https://lists.debian.org/debian-devel-announce/2019/03/msg6.html which is probably not a mailing list that too many users subscribe to. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
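[Editor's sketch, not part of the original mail: the edit described above, shown against a copy of the file. The list contents are illustrative; note the hyphen vs slash distinction, since the "jessie/updates" security suite must stay.]

```shell
f=$(mktemp)
cat > "$f" <<'EOF'
deb http://deb.debian.org/debian jessie main
deb http://deb.debian.org/debian jessie-updates main
deb http://security.debian.org/ jessie/updates main
EOF

# Comment out only the hyphenated jessie-updates suite:
sed -i '/jessie-updates/ s/^deb/# deb/' "$f"
grep '^deb' "$f"     # the jessie and jessie/updates lines remain active
```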
Re: Help updating a Jessie installation to Jessie LTS
Hi, On Sat, Mar 30, 2019 at 04:12:05PM -0400, rh kramer wrote: > I found a post that told me what the content of > /etc/apt/sources.list should be for Jessie LTS. LTS updates (for a limited set of architectures) make their way into the /updates suite that is found on security.debian.org, just like the normal security updates used to. Therefore you do not need to change your /etc/apt/sources.list just to use LTS. You *may* need to modify sources.list to remove references to -updates (note the '-', not a '/') as that suite was used to contain packages for point releases, and is removed when the distribution is no longer being updated by Debian. When that suite is removed, all of its contents have already been put into the regular suite at the last point release of that distribution. There is some talk of in future not entirely deleting the -updates directory from the mirrors, but instead leaving it present and empty. This would be to prevent error messages in apt clients doing an update and finding the directory no longer present. It would have no functional effect as all the packages from that suite would already be in the suite. Eventually when the distribution is long out of support, it is entirely deleted from the main Debian mirrors and only becomes available from archive.debian.org. > I think I need to remove the references to jessie-updates from the > sources.list file. But, I'm fairly sure that won't solve what > looks to me like the bigger problem: > > root@s31:~# apt-get upgrade > Bus errorackage lists... 1% That looks very bad and is indicative of a serious software error or hardware problem. I doubt that has anything to do with your use of "jessie-updates" in your sources.list, even though the "jessie-updates" lines shouldn't be there any more. When did this start happening? Can you think of anything you may have done that alters the "apt-get" binary? Does it still happen when you use just "apt" or "aptitude"? 
> Oh, an extra credit question: are there mirrors I can use for Jessie LTS or > must I use debian.org? If there are mirrors I can use, where would I find > those? Since LTS packages end up in /updates, you can use the normal Debian mirror network for security. So that's security.debian.org or the equivalent as listed by http://deb.debian.org/ > Here is what I put in the sources.list file: Looks correct except remove the "jessie-updates" lines. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Ways to verify tools/applications? Fire support for new users
Hello, On Fri, Mar 22, 2019 at 11:44:15AM -0400, deb wrote: > Are there list-suggested ways to help verify non-free / out-of-stable-distro > or even seldomly updated in-distro tools, PRE-INSTALL? If in your /etc/apt/sources.list you stick to one distribution and don't include "contrib" and "non-free" suites then you aren't going to get any non-free software and the packages you install will at least have been considered acceptable by Debian for release. > Are there suggested sites to look up Linux tools to verify them a bit; > rather than just one-off searches? > > Are there suggested sites where KNOWN BAD tools are listed? That all sounds highly subjective and I don't see how you could have any such definitive thing. There is no substitute for proper research but as a blunt tool, once you have identified multiple different packages that do what you want you could look at Debian's popcon to compare how many reported installations there are of each of them: https://popcon.debian.org/ Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Other lists? Fire support for new users
Hello, On Fri, Mar 22, 2019 at 11:31:07AM -0400, deb wrote: > For someone trying to pull Windows (and Mac) users into Linux > does anyone have: > > Preferred other email lists, for new users? Honestly I don't recommend mailing lists for technical support. Especially this one where moderation is nearly non-existent and nothing prevents prolific posters from deluging with inappropriate or massively-offtopic responses. I think the Stack Overflow-like web communities of superuser.com and similar are much better for that use case. Answers supplied need to be effective and on-topic otherwise they get voted down and buried. Debian used to have a similar thing running Shapado. It was at http://shapado.debian.net/. But it was never very well-used and ceased to work some years ago. I think this was a shame. Previous discussion of Debian's now defunct Shapado instance was here: https://lists.debian.org/debian-user/2010/10/msg00096.html Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: How to get a non random IPv6 address on buster ?
Hi Erwan, On Wed, Mar 20, 2019 at 10:28:55PM +0100, Erwan David wrote: > > How to get back the EUI64 Mac based address ? […] > So maybe it is network-manager which does this ? (privacy extensions set > to Default) Before we go on too much of a wild goose chase, can you confirm that you do actually use Network Manager to set up your networking? I do not use N-M anywhere where I have felt a need to modify its behaviour, but I understand that you can use "nmcli" to disable the privacy addresses: https://librehacker.com/2018/04/04/network-manager-and-ipv6-privacy/ Personally I find privacy addresses a useful feature, so in your position I would continue using them but also add static IPv6 addresses that are used for well-known services. So I'd have the best of both worlds: outbound connections using unpredictable source address, but known address for inbound connections. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
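[Editor's sketch, not part of the original mail: the nmcli form of disabling privacy addresses for one connection, per the linked article. The connection name is illustrative; ipv6.ip6-privacy 0 disables temporary addresses, leaving EUI-64 addressing.]

```shell
nmcli connection show                         # find the connection's name
nmcli connection modify "Wired connection 1" ipv6.ip6-privacy 0
nmcli connection up "Wired connection 1"      # re-activate to apply
```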
Re: Temperatures rrd
Hi Greg, On Sun, Mar 03, 2019 at 10:01:58AM +0100, Grzesiek Sójka wrote: > Since sensord was removed I would like to ask what do you use to log sensors > readings to rrd database. As I want to monitor multiple hosts I use an external monitoring system, but if you want to store metrics locally, rrd has bindings for most scripting languages so just use it in the one you're most comfortable with. If that's shell, you can just do everything with the "rrdtool" binary. For Perl I've used https://metacpan.org/pod/RRD::Simple with no problems; this is packaged in Debian as librrd-simple-perl. I am sure there are options for whatever language you like. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
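[Editor's sketch, not part of the original mail: the shell-only route with the rrdtool binary. The database path, data source name and hwmon path are all illustrative and vary by machine.]

```shell
# One temperature gauge sampled every 60 s; keep a day of raw samples.
rrd=/var/lib/temps/cpu.rrd
rrdtool create "$rrd" --step 60 \
    'DS:temp:GAUGE:120:U:U' \
    'RRA:AVERAGE:0.5:1:1440'

# From cron or a loop: read a sensor and feed it in (hwmon values are
# millidegrees Celsius, and the exact path varies by machine).
t=$(cat /sys/class/hwmon/hwmon0/temp1_input)
rrdtool update "$rrd" "N:$(( t / 1000 ))"
```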
Re: Debian specific tools/apps
Josef, On Sat, Mar 02, 2019 at 02:42:19PM -0800, Josef Bailey wrote: > I know how to use Linux, why would I man a tool I know nothing about . I need > the tool/app first before I man it. I’m asking for Debian specific tools I > need to know. The documentation I pointed you to is specific to Debian and tells you Debian-specific things that you need to know. But apparently you know best so I'm sorry for wasting your time. Andy > > On Mar 2, 2019, at 9:17 AM, Andy Smith wrote: > > > >> On Sat, Mar 02, 2019 at 05:14:25PM +, Andy Smith wrote: > >> Having a read through the release notes at: > >> > >>https://www.debian.org/releases/stable/amd64/release-notes/ > >> > >> is often worthwile even for those more familiar with Debian. > > > > Oh, and The Debian Administrator's Handbook is a great free resource. > > Although it was last updated for jessie it is still very relevant, > > especially if new to Debian. > > > >https://debian-handbook.info/browse/stable/ > > > > Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Debian specific tools/apps
On Sat, Mar 02, 2019 at 05:14:25PM +, Andy Smith wrote: > Having a read through the release notes at: > > https://www.debian.org/releases/stable/amd64/release-notes/ > > is often worthwile even for those more familiar with Debian. Oh, and The Debian Administrator's Handbook is a great free resource. Although it was last updated for jessie it is still very relevant, especially if new to Debian. https://debian-handbook.info/browse/stable/ Andy
Re: Debian specific tools/apps
Hi Josef, On Fri, Mar 01, 2019 at 10:40:19PM -0800, Josef Bailey wrote: > Is there anything else I should know that can help me find answers quick and > help me maintain my system Having a read through the release notes at: https://www.debian.org/releases/stable/amd64/release-notes/ is often worthwhile even for those more familiar with Debian. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Label multimeanings : doc bug?
Hello, On Wed, Feb 27, 2019 at 09:37:58AM +0300, Reco wrote: > Running parted on a logical volume is definitely not the best of ideas. As an aside, I do this all the time and it works fine. The logical volumes are exported to virtual machines as their main disk, so they may appear as /dev/vgname/lvname on the host which when partitioned and presented to a guest will show as /dev/xvda{1,2,…}. If I want to later manipulate the partition table from outside the guest I may run parted on the /dev/vgname/lvname. Similar is expected to work on disk image files. Direct access in the host to the partitions on such a thing as block devices can be enabled by using kpartx (or fiddling with offsets in losetup, but life is too short!). Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
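[Editor's sketch, not part of the original mail: the kpartx flow mentioned, roughly. Volume and mount point names are illustrative; all of this needs root.]

```shell
kpartx -av /dev/vgname/lvname      # maps e.g. /dev/mapper/vgname-lvname1
mount /dev/mapper/vgname-lvname1 /mnt
# ... inspect or repair the guest's filesystem from the host ...
umount /mnt
kpartx -dv /dev/vgname/lvname      # tear the partition mappings down again
```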
Re: Debian package missing
Hi Damon, On Thu, Feb 21, 2019 at 03:55:16PM +, Damon Bakker wrote: > http://security-cdn.debian.org/dists/jessie/updates/main/binary-amd64/Packages > is now missing. It is used in the pipelines of bitbucket and now it seems they're broken. Can it not use the .gz or .bz2 versions? http://security-cdn.debian.org/dists/jessie/updates/main/binary-amd64/Packages.bz2 Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Your Password Reset Link from CorrLinks
Hello, On Thu, Feb 21, 2019 at 10:55:44AM +0300, Reco wrote: > On Thu, Feb 21, 2019 at 02:30:48AM -0500, Gene Heskett wrote: > > On Thursday 21 February 2019 01:14:03 w...@corrlinks.com wrote: > > > > > > > Whats worse is that my isp is rightfully rejecting some of this bs as > > spam, but I get the threatening msgs from the debian server, threatening > > to unsuscribe me for the bounces. > > You do not bounce to the list - [1]. Yes, even if it's spam. > They will excommunicate you if you insist on bouncing to the list. Before we get carried away here, I highly suspect that Gene is talking about their mail server rejecting the mail during the SMTP conversation, which is a correct thing to do if the mail is unwanted. This doesn't generate what we would typically refer to as a bounce. However, since the SMTP conversation is with Debian's list server, Debian's list server then warns Gene that mail it has tried to deliver to Gene has been rejected. Multiple such rejections would lead to the list server deciding that Gene's account has become invalid, and the automatic unsubscription of Gene from this list. Again, no actual bounces here, and everyone involved is behaving reasonably. Rejecting unwanted email during the SMTP connection is the correct thing to do, but it is pointless rejecting mail from known list servers as they are not in a position to do anything about that and repeated rejection of email will lead to them unsubscribing you. So Gene might like to find a way to establish when the peer is a list server and just discard the mail in that case. Or deliver it to some sort of quarantine area. Ideally of course, the Debian list server would have spotted that this particular email is not useful for the debian-user list and not accepted it in the first place. This is really hard though, because this email is not spam. Unless "spam" is something you define as "email I don't want".
This email appears to be from a legitimate and real service, and although we as humans can see that it would never be likely to be of interest to debian-user, it's hard for a computer program to do that. What seems to have happened here is that some other human has decided to prank the list by putting the list's address into that web site. So, since it is the case that there is often going to be email that Debian's list server fails to notice is inappropriate but the user's own mail server does (because the user knows more about what they are willing to accept), it would be better for the user not to reject the list server's mail if possible, or else accept that a spate of such rejections may result in unwanted unsubscription from the list. I don't think it's fair to suggest that Debian's list server is at fault for accepting this email, because as mentioned it's not spam, it's just off-topic, and furthermore is likely the result of deliberate pranking by a human. Expecting Debian's list server to detect that with no prior information seems a bit much. Certainly as you suggest people could contact listmaster@ with a pointer to this thread and ask for future mail from corrlinks.com to be rejected though, as it would seem such mail would never be relevant. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
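For what it's worth, the "discard rather than reject when the peer is a known list server" idea above can be sketched for Postfix like this (a sketch only, not a recommendation: the map path is an example and the domain should match whatever envelope sender your lists actually use):

```
# /etc/postfix/main.cf (fragment): consult a sender-access map so that
# mail from known list servers is never rejected during SMTP.
smtpd_sender_restrictions = check_sender_access hash:/etc/postfix/list_servers

# /etc/postfix/list_servers: DISCARD accepts the message in the SMTP
# conversation but then silently drops it, so the list server never
# sees a rejection and never counts it against your subscription.
lists.debian.org    DISCARD

# Rebuild the lookup table after editing:
#   postmap /etc/postfix/list_servers
```

Postfix's HOLD action would instead park such mail in the hold queue, which is closer to the "quarantine area" idea.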
Re: Please Recommend Affordable and Reliable Cloud Storage for 50 TB of Data
Hello, On Sat, Feb 16, 2019 at 01:08:01PM +0800, Turritopsis Dohrnii Teo En Ming wrote: > On Fri, Feb 15, 2019 at 10:10 PM Dan Ritter wrote: > > 50TB is a lot of data. Do you actually have it right now, or are > > you projecting into the future? How are you storing it? > > Using external portable USB hard drives. What then are your goals for cloud storage? The main issue you are facing is that 50TB is a pretty huge amount of personal user data by today's standards, so any commercial offering is going to be expensive. If all you require is access to your data when you are out and about, and you do currently have always-on Internet at home, you could build a cheap server, attach your existing USB storage to it, and serve it with owncloud or something. Downsides: - USB storage kind of sucks - Now you are a sysadmin, congrats - Maybe your domestic Internet service provider isn't up to the task of serving a lot of data Upsides: - Access to your data from where you have Internet for a relatively modest one-off purchase and some sysadmin work. - Your personal data can stay physically where it is, so no months-long upload session. Depending on what your actual use case is there are many other reasons why this may not be suitable. If I had 50TB of valuable personal data I'd also be worrying about how it's backed up. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: Bug with soft raid?
Hi Steve, On Fri, Feb 15, 2019 at 09:35:27AM +0100, steve wrote: > >for i in /dev/sd{b..f}; do echo "DISK: ${i}"; smartctl -l scterc "${i}"; > >sleep 3; done > > I get this for sdb and sdc > > SCT Error Recovery Control: > Read: Disabled > Write: Disabled > > and this for sdf > > SCT Error Recovery Control: > Read: 70 (7.0 seconds) > Write: 70 (7.0 seconds) > > What does it tell me ? It means that sd[bc] may support SCTERC but it's disabled (promising), and sdf does support it and it's set to 7 seconds (good). For disks in Linux software RAID, SCTERC with a low timeout is essential. If it's not possible then the block layer timeout for the device should be increased. You should try to set SCTERC for sd[bc] like so: # for dev in /dev/sd[bc]; do smartctl -l scterc,70,70 "$dev"; done If that works then great - all your drives support SCTERC and have low timeouts. If setting it to 70 (centiseconds, so 7 seconds) doesn't work then you will need to increase the block layer timeout like this: # for dev in sd[bc]; do echo 180 > /sys/block/"$dev"/device/timeout; done The reason to do this is that should any of your drives encounter a problem reading or writing, without SCTERC set the drive will try very hard to do whatever it was meant to be doing for a very long period of time, and while it's doing that it will be unresponsive to anything else. The default block layer timeout is 30 seconds and a drive having problems reading or writing just 1 sector can easily spend longer than this trying to do so. Linux then drops the entire device from the array. If you're lucky this only happens on the one device and you're able to add it back in again, but a very common cause of arrays not assembling or always having a device kicked out is these sorts of timeouts. When using RAID it is much better to have the drive give up sooner as the RAID should take care of what was unable to be read or written.
So, being able to set SCTERC is best, but failing that you really must set the block layer timeout high enough. The smartctl and /sys/block settings above don't survive a power cycle so would need to be set at every boot. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
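One common way to re-apply those settings at every boot is a udev rule along these lines (a sketch: the rule filename and smartctl path are assumptions, and you may want to narrow the KERNEL match to just the drives in your array):

```
# /etc/udev/rules.d/60-scterc.rules (example name)
# When a SATA disk appears, try to set a 7-second SCT ERC timeout;
# if the drive refuses, raise the block-layer timeout to 180s instead.
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", \
  RUN+="/bin/sh -c '/usr/sbin/smartctl -l scterc,70,70 /dev/%k || echo 180 > /sys/block/%k/device/timeout'"
```

Running the same commands from a boot-time script (rc.local or a systemd unit) works too; the udev approach just also covers drives hotplugged later.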
Re: Swap space choice on a SSD <- Current best practice on?
Hello, On Wed, Feb 13, 2019 at 04:23:56PM -0500, Dan Ritter wrote: > "Over-provisioning often takes away from user capacity, either > temporarily or permanently, but it gives back reduced write > amplification, increased endurance, and increased performance." > > Increased endurance is increased longevity. That is also my understanding and matches many articles advising how to choose the best enterprise SSD for a particular workload. However, I know that SSDs are a lot more "black box" than your typical HDD so I think especially with consumer devices it could be hard to generalise and reason about. At that level the device specs often do not specify numbers for "terabytes written" or "drive writes per day". It can also be surprising sometimes how little is written. For example, I have some servers with flash memory for their operating system install, with data on other storage: https://www.supermicro.com/products/nfo/SATADOM.cfm At the 16GB capacity these offer only 17TB of writes over 5 years and I was a bit worried, so I was thinking of spending some effort making sure that things which are regularly doing writes do so to a RAM disk instead. Luckily there's a SMART attribute (241) you can use to tell how much has been written to the drive to date and when I checked that I found the servers were typically writing only ~14GiB per month. So that would take about 100 years to reach 17TB! Of course, the 5 year warranty covers other factors too. It all depends on use case, as clearly there are uses that are write-intensive which would burn through 17TB in a matter of hours. I do not put swap on these devices. Measuring is still essential in my view, but things are indeed a lot easier than they were a decade ago. Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
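The lifetime arithmetic above can be sanity-checked in a few lines of Python (the 17 TB figure is the vendor's rated endurance quoted above, and 14 GiB/month is the rate measured via SMART attribute 241):

```python
# Rough SSD lifetime estimate from a measured write rate.
ENDURANCE_BYTES = 17 * 10**12       # rated: 17 TB written over 5 years
MONTHLY_WRITES = 14 * 2**30         # measured: ~14 GiB written per month

months_to_wear_out = ENDURANCE_BYTES / MONTHLY_WRITES
years_to_wear_out = months_to_wear_out / 12
print(f"~{years_to_wear_out:.0f} years to reach rated endurance")  # prints: ~94 years
```

Depending on whether you read the vendor rating and the measured figure as decimal or binary units, the answer moves between roughly 94 and 101 years; either way, nowhere near the 5-year warranty window.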
Re: Swap space choice on a SSD <- Current best practice on?
Hello, On Wed, Feb 13, 2019 at 01:14:36PM -0800, David Christensen wrote: > A swap partition is faster than a swap file. Has something changed in this regard since kernel version 2.6 then? http://lkml.iu.edu/hypermail/linux/kernel/0507.0/1690.html Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
Re: what are you using instead of bind9?
Hi, On Tue, Feb 12, 2019 at 06:40:01PM -0500, Lee wrote: > What are people using these days to > 1. have dnssec enabled lookups > 2. filter external dns answers I use Unbound for resolvers. I understand that Unbound can do some RPZ-like things with its local-data and local-zone directives, but I've never played with RPZ so don't know if it can cover your use case. PowerDNS Recursor is another popular recursor. I have never used it, only the Auth server version, but I've found that to be high quality software so I'd certainly be willing to look at their Recursor product if I wasn't happy with Unbound. It seems to have RPZ support: https://doc.powerdns.com/recursor/lua-config/rpz.html Cheers, Andy -- https://bitfolk.com/ -- No-nonsense VPS hosting
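As a sketch of the local-zone/local-data approach mentioned above (the blocked names are made-up examples; see the unbound.conf(5) man page for the full list of local-zone types):

```
server:
    # Answer NXDOMAIN for this name and everything under it
    local-zone: "ads.example.com." always_nxdomain

    # Redirect another name (and its subdomains) to a sinkhole address
    local-zone: "tracker.example.net." redirect
    local-data: "tracker.example.net. A 127.0.0.1"
```

This is per-name filtering maintained by hand; proper RPZ support lets you subscribe to externally maintained policy zones instead.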