* Akira Urushibata <a...@wta.att.ne.jp> [2025-05-26 20:32]:
> In this article I will provide a typical example of an operation and
> discuss how we can use it in our advocacy efforts.
Thanks so much for your good intentions.

> Improvements in graphical user interfaces have made computers easy to
> use. However, graphical user interfaces become cumbersome when the
> same task must be repeated many times over.

That is so right, and we don't have ideal computing environments. I wish
that would change one day. I think that large language models with
natural language processing are the way to go: we direct the computer on
what to do, and it generates the repetitions, loops and other workflows,
simply by being told what has to be done.

> Adding a multiple-input file feature to any utility requires
> effort. In addition small variations may appear in how it is actually
> implemented, obliging users to learn anew the details unique to each
> application. It makes more sense to provide a universal apparatus for
> handling the common case of multiple inputs.

In general, our file systems and the systems for dealing with files are
not ideal. I can totally understand what you mean. However, a universal
apparatus is not a realistic goal, because we cannot standardize, just as
you said in the last email; but we can make software and let people
choose what to use.

I have made my own computing and file environment in such a way that I
can relate files to people, to other files, to documents and notes; I can
tag them and give them types and subtypes. This classification allows me
to retrieve files by various intersections or relations. The actual file
location is totally unknown to me as a user; I have no idea where a file
is stored. But the time needed to access a file is much shorter, because
I find it by semantic relationships, tags, types, subtypes and so on.
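As a rough illustration only -- this is not my actual system, and the
index file and tag names below are invented for the example -- such an
intersection lookup can be sketched with plain GNU tools:

#!/bin/bash
# Rough sketch: a plain-text index, one line per file, with a path
# column and a tag column, queried by tag intersection.
# An example index line (tab-separated) could look like:
#   /data/reports/budget-2024.pdf<TAB>person:john type:report topic:budget
INDEX="$HOME/file-index.tsv"

# Usage: find_by_tags person:john type:report
find_by_tags () {
    local results tag
    results=$(cat "$INDEX")
    # Keep only the index lines that contain every requested tag.
    for tag in "$@"; do
        results=$(printf '%s\n' "$results" | grep -F -w -- "$tag")
    done
    # Print the path column only.
    printf '%s\n' "$results" | cut -f 1
}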
> In many real-life instances the files the user desires to convert are
> not all in one flat directory but rather scattered around, often in a
> nested structure of directories. Also there are cases in which there
> are files which should not be converted mingling with others which
> definitely should. If each application had to offer the capacity to
> deal with such complicated requests, they would become sophisticated
> in the bad sense of the word.

I think that the file system is not the best way for a human to interact
with files. I just think that a better way is the dynamic knowledge
repository as defined by Mr. Doug Engelbart, the inventor of many
computing innovations, including the mouse.

About Dynamic Knowledge Repositories (DKR):
https://www.dougengelbart.org/content/view/190/163/

> The absence of terminology reflects low interest in the issue. Batch
> conversion of multiple files is a mundane task that should not be
> taken lightly. A typo can lead to erasure of files or the creation of
> too many files. In another scenario the demands of an inefficient
> conversion may add up and eventually overload the entire system.

When I look at it, it is very easy now to create various loops and go
over the files. But that is what I say: it is easy, yet it is not
integrated, just as you say. So there is a variety of situations which
can take place, and you are right, there may be many errors. But we could
use the large language models to tell the program to avoid such errors.
That way, at least, we improve the system.

For example:
------------

> Here is an example of a shell command to convert all your PNG files
> (named *.png) to JPEG files named *.jpg:
>
> for i in *.png; do pngtopam $i | pnmtojpeg >`basename $i .png`.jpg; done
>
> Make this loop better so that if something is wrong, it is detected
> before the conversion actually takes place.

Then:
-----

#!/bin/bash

# Check required commands are available
if ! command -v pngtopam >/dev/null 2>&1 || ! command -v pnmtojpeg >/dev/null 2>&1; then
    echo "Error: Required commands (pngtopam or pnmtojpeg) not found" >&2
    exit 1
fi

# Process each PNG file
for i in *.png; do
    # Skip if no PNG files exist
    [ -f "$i" ] || continue

    # Check if input file is readable
    if [ ! -r "$i" ]; then
        echo "Error: Cannot read file '$i'" >&2
        continue
    fi

    # Set output filename
    outfile="$(basename "$i" .png).jpg"

    # Check if output file would be writable
    if [ -e "$outfile" ] && [ ! -w "$outfile" ]; then
        echo "Error: Cannot overwrite file '$outfile'" >&2
        continue
    fi

    # Perform the conversion with error checking
    if ! pngtopam "$i" | pnmtojpeg > "$outfile"; then
        echo "Error: Conversion failed for '$i'" >&2
        # Remove failed output file if it was created
        [ -f "$outfile" ] && rm -f "$outfile"
        continue
    fi

    echo "Successfully converted '$i' to '$outfile'"
done

And as you say, a universal approach to looping over the files would be
the best. It is just that we don't have such a universal approach.

> There is much effort spent in advertising the "Linux" OS as
> user-friendly and easy to use.

That is also true, depending on the viewpoint from which you look at it.
There are now many GNU/Linux distributions which are very user-friendly.

> People who seek user-friendly computers regard command-line tools as
> a relic of the past.

I would not generalize it like that, because that is simply a different
group of users, those who use the graphical user interface. They are not
as powerful as those using command-line tools. If they think it is
something of the past, that is just a generalization; they are not on the
same level as developers who know how to deal with it.

> Any agent promoting "Linux" would thus rather hide them from the
> general public. Thus people have few opportunities to see how
> experts actually do work with the OS.

There are more opportunities today than ever. There are many YouTube
videos, there are explanations in many software repositories, there is
now much more use of the command line, and software is more accessible
than ever. So it is not that there are few opportunities; I cannot see
that. I have seen more and more opportunities over the last 25 years.

> In addition there is a persistent campaign against the name "GNU".

Yes, that is somehow true, but the campaign does not come from a single
source. It comes from the opinions of various different groups. And of
course there is a reason for that: a political reason.

> The above procedure employs GNU Bash and GNU coreutils. Other
> utilities often used in conjunction are provided by GNU findutils,
> GNU diffutils, GNU grep, GNU sed, GNU awk. The negative campaign
> discourages people from understanding how GNU utilities are actually
> employed and leaves them with a shallow, distorted view of the entire
> system.

Then it is up to us to make it right, you see.
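Since you mention GNU findutils: for the earlier case of PNG files
scattered in nested directories, the same conversion could be driven by
find instead of a shell glob. This is only a rough sketch -- the
"originals" directory to skip is an invented example, and the error
checks from the script above are left out for brevity:

#!/bin/bash
# Sketch: convert *.png in the current directory and all subdirectories,
# skipping anything under an (invented) "originals" directory.
# Same pngtopam | pnmtojpeg pipeline as above.
find . -name '*.png' -not -path './originals/*' -print0 |
while IFS= read -r -d '' i; do
    outfile="${i%.png}.jpg"
    if ! pngtopam "$i" | pnmtojpeg > "$outfile"; then
        echo "Error: Conversion failed for '$i'" >&2
        rm -f "$outfile"
    fi
done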
> Some attempts have appeared to find a term to fill the void. "Cloud"
> is a vague term, but for some people it is mostly about efficient
> command-line procedures which system management requires. Some others
> speak of the operations as part of "Linux". In fact I have heard that
> a major reason Microsoft decided to provide "Windows Subsystem for
> Linux" (WSL) is that "cloud" operators became accustomed to using
> "Linux" command-line utilities and felt inconvenienced by their
> absence in ordinary Windows environments.

Whatever Microsoft says, I am not really keen to trust that company.

> The above observation gives me an idea for a new strategy for
> promoting GNU. There is a problem that requires a solution. We can
> explain the problem and the potential outcome of not solving it
> properly. After convincing people that a problem exists we can
> explain how it is best solved, how to find the engineers who know the
> right solution, what tools they use and where the tools come from.

The best way of promoting GNU is to help people install the system and
then delve into researching it.

Thanks so much for your insightful view from Japan, and many greetings.

Jean Louis

_______________________________________________
libreplanet-discuss mailing list
libreplanet-discuss@libreplanet.org
https://lists.libreplanet.org/mailman/listinfo/libreplanet-discuss