This series is written by a representative of the latter group, which is composed mostly of what might be called "productivity users" (perhaps "tinkerly productivity users"?). Though my lack of training precludes me from writing code or improving anyone else's, I can nonetheless try to figure out creative ways of utilizing open source programs. And again, because of my lack of expertise, though I may be capable of deploying open source programs in creative ways, my modest technical acumen keeps me from utilizing those programs in the most optimal ways. The open-source character of this series, then, consists in my presenting to the community of open source users and programmers my own crude and halting attempts at accomplishing computing tasks, in the hope that those more knowledgeable than I am can offer advice, alternatives, and corrections. The desired end result is the discovery, through a communal process, of optimal and/or alternate ways of accomplishing the sorts of tasks that I and other open source productivity users need to perform.

Thursday, August 7, 2025

The AI revolution?

That's kind of a dramatic title, but an appropriate one for this blog and this post. I've recently discovered (yes, I've been living under a bit of a rock) AI and its possibilities for the kinds of tech projects I undertake. It turns out that, by using AI, my laborious process of trawling the internet for snippets of information related to the things I try to do with my computer may have come to an end. With AI at my disposal, I have an always-available interlocutor to consult on these topics, one who is far more knowledgeable than I am about computer coding.

The impact on this blog is that, instead of documenting my halting attempts and offering links to articles I've found, I will likely henceforth be posting a lot of articles about scripts AI has helped me to create and related knowledge it has conveyed. I've already developed several scripts over the last few weeks, and I keep thinking up more. In fact, this post is being published partly by a script AI helped me to develop. True, its initial attempts at using the Blogger API turned out to be a huge and fruitless time sink. I finally told it I was calling it quits after many hours of meeting one frustration after another--not necessarily entirely the fault of AI, but due in large part to Google's inhumanly complex way of expecting users to interface with it, which seems calculated to deviously thwart the standard end user at every turn.

But I managed to get AI to help me develop a semi-automated way of posting using the method discussed in the last post I made here, the one involving the xclip utility. In other words, I sometimes need to come to AI's rescue, paltry as my abilities are in this realm. And I've several times had to make suggestions to AI about how to do something or about some utility to use. In fact, I just had a frustrating session with AI trying to develop a working .asoundrc (yes, no pulse or pipewire here, just good old-fashioned alsa) that would allow me to send sound through my laptop's HDMI port. To offer a bit of gory detail, AI couldn't figure out (nor could I) why aplay could send sound via plughw, while trying to configure .asoundrc with plughw devices would report that the relevant library couldn't be found.
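
For anyone wanting to poke at the same problem, commands along these lines are a reasonable starting point (the card and device numbers below are placeholders, aplay -l shows the real ones, and the file name is just an example):

# list ALSA playback devices and note the card/device pair for the HDMI output
aplay -l

# send a short test tone through a hypothetical HDMI device (card 0, device 3 here)
speaker-test -D plughw:0,3 -c 2 -t sine -l 1

# tell mpv to use that device directly instead of relying on the default
mpv --audio-device=alsa/plughw:0,3 some-video.mkv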

But AI did provide me with a nice workaround for sending my laptop display out the HDMI port using xrandr, leaving it to the video player (mpv in this case) to select the proper audio device for the sound. In fact, here's a nice bash script I asked it to develop for when I want to send the display through the HDMI port:


#!/bin/bash

# Find the names of your displays with 'xrandr'
# For a typical laptop, this is a good guess:
LAPTOP_DISPLAY="LVDS1"
EXTERNAL_DISPLAY="HDMI1"

# Present the menu
echo "Choose a display configuration (for mpv audio hit g-d keys & select alsa/hdmi once mpv has been started):"
echo "1) Extend desktop (external right of laptop)"
echo "2) Mirror displays"
echo "3) External display only"
echo "4) Revert to laptop display only"
echo -n "Enter your choice (1-4): "
read choice

# Execute the command based on user choice
case $choice in
    1)
        echo "Extending desktop..."
        xrandr --output "$EXTERNAL_DISPLAY" --auto --right-of "$LAPTOP_DISPLAY"
        ;;
    2)
        echo "Mirroring displays..."
        xrandr --output "$EXTERNAL_DISPLAY" --auto --same-as "$LAPTOP_DISPLAY"
        ;;
    3)
        echo "Using external display only..."
        xrandr --output "$EXTERNAL_DISPLAY" --auto --output "$LAPTOP_DISPLAY" --off
        ;;
    4)
        echo "Reverting to laptop display only..."
        xrandr --output "$EXTERNAL_DISPLAY" --off --output "$LAPTOP_DISPLAY" --auto
        ;;
    *)
        echo "Invalid choice. Please enter a number between 1 and 4."
        ;;
esac

By the way, I'm mostly using Google's Gemini so far. I've tried duck.ai as well, which I found to be a bit deficient, although, on the positive side, they don't seem to do geo-blocking like other services do. I also decided to try a paid service, GitHub's Copilot. Honestly, I've so far not seen its advantages compared to Gemini. But I'm still quite new at this, so I may decide in the future to use some better alternative. Time will tell.

I've learned a bit about strategizing to get help from AI as well. I've discovered that I need to develop projects gradually, first coming up with a simplified mock-up of a key part of what I'm aiming to accomplish, then asking it to build gradually on that. Describing a complex project and its aims to AI at the outset is likely to result in AI proposing a complex script, but experience has thus far shown that such a script will be so unwieldy, and its parts so error-prone, that it will turn out to be a nightmare to troubleshoot. Better to start off simply and build on working parts that can later be tied together.

I would say I stand to learn a lot about coding from AI. It's light years ahead of me when it comes to error-checking routines. I'm usually so focused on getting my tasks accomplished that I don't even think about those sorts of things. But I also suspect that AI doesn't implement sound coding practices very well. Some of the scripts it has put together for me have, so far as I can tell, turned out to be horrendously complex monstrosities which, as I consider what they do and how they're designed, seem to string together too many routines and files into one. If I understand coding conventions at all correctly, these scripts should be divided up into a couple of smaller scripts, and the data they produce should be written to separate files. This is all pretty abstract without concrete examples, but in future posts I should be posting some of that code here, which may give a clearer indication of what I'm getting at.

So stay tuned for those posts.

Friday, October 18, 2024

A motd project

Long time no post. Well, life sometimes precludes tech experimentation and hoped-for successes.

But I do have a couple of projects to describe. One involves a back-up routine I developed a few years ago. I doubt that one will be of much interest to most, in that it was developed specifically for the non-uefi machines I tend to use and targets the paltry-sized hard drives (120 GB and less) and comparatively meager installations I favor. But it's fairly straightforward and, to my somewhat unrefined tech sensibilities, clever. The other is simpler yet and is the one I'll be describing in this post.

I refer to it as a motd (message of the day) project/routine because I want the system to provide me with certain information when I log in. My implementation differs a bit in that it also provides this information whenever I start a new bash session.

The information I want the system to provide me is twofold: 1) a reminder of when I last backed up the system, and 2) on some of my machines, a reminder of when I last updated the operating system. In most cases, what I'll need reminding about is only number 1), since I'm regularly using these rolling-release (Arch and Void) systems and do operating system updates on them every 1-2 weeks.

On other systems, however, such as my laptop, I'll need reminding about number 2) as well. That machine, and the VPS I use, are systems I am only occasionally logged into. So knowing when the last operating system update was done is quite important on those.

So my solution involves creating the files motd-backup.txt and motd-update.txt in my home directory and writing target information to them. Since I haven't scripted the back-up routine, that has to be done manually by running something like echo "++last system back up done on $(date)++" >/home/user/motd-backup.txt after I've finished running my back-up routine. That's something I'll be adding to the notes I keep as a reminder about how to use that routine.

As for the operating system updates, I've semi-automated those. I run a script at certain intervals that opens a bash session and queries whether I'd like to do a system update: a response in the positive triggers the update. I've added to that a command that runs after a successful update and writes information to ~/motd-update.txt. It looks as follows: echo "++last update $(cat /var/log/pacman.log | tail -1 | cut -c2-11)++" >/home/user/motd-update.txt

Arch'ers will understand what's happening: the pacman log file is being queried, and information from its final entry is being excerpted and cat'ed (is this a misuse of cat?) into the motd-update.txt file. It has worked well in my testing and should suffice for the purpose.
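
As it turns out, the cat isn't strictly necessary there, since tail can read the file directly. An equivalent line would be:

echo "++last update $(tail -1 /var/log/pacman.log | cut -c2-11)++" >/home/user/motd-update.txt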

The final piece of the puzzle is getting the information to display. I want it to be available during my day-to-day computer usage and not in a file stored somewhere on the system. Experience has indicated that I am likely to forget about such a file and so become remiss in my system administration responsibilities.

Since I'm regularly using a terminal, I decided a good place to have that information auto-displayed with regularity would be to make it part of the process of starting terminal sessions. So I put the following line at the end of my .bashrc: cat ~/motd-update.txt && cat ~/motd-backup.txt

And that addressed my issue. Every time I log into the system or start a new terminal, that information appears.
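
One small refinement worth noting: with that && in place, if motd-update.txt happens to be missing on a given machine, the backup reminder never shows either. A slightly more defensive pair of .bashrc lines might be:

[ -f ~/motd-update.txt ] && cat ~/motd-update.txt
[ -f ~/motd-backup.txt ] && cat ~/motd-backup.txt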

As I write this, I begin to think of alternatives. For example, I could use something like Xmessage and cause a pop-up to appear at regular intervals--say once weekly--that would contain such information. I'm sure other options like zenity could be fairly easily configured to do the same. Or perhaps I could make a .png out of the information, as I wrote about a few years ago (https://audaciousamateur.blogspot.com/2020/04/another-imagemagick-trick-txt-to-png.html), and use cron to pop up feh or some other image-displaying utility to show the information in bright text on a dark background. That could be more disruptive but might be more effective.
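
As a rough sketch of the Xmessage idea (the DISPLAY value, schedule, and file paths are assumptions that would need adjusting), a weekly cron entry might look like:

# crontab entry: Mondays at 9:00, pop up the reminders on the local display
0 9 * * 1  DISPLAY=:0 xmessage -center "$(cat "$HOME"/motd-*.txt)"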

In any case, what are your ideas for displaying important information like that? Auto-generated e-mails, for example? There are lots of possibilities. The ways to skin this cat (pun intended) are numerous.

Saturday, June 18, 2016

Another addendum to the seventh installment: imagemagick as a resource for the budget-constrained researcher continued

Continuing on the theme of the last couple of entries, the budget-constrained researcher may own or wish to acquire a piece of hardware that can aid him in obtaining needed materials from research libraries. For example, he may need but a single article or perhaps a chapter from a book. Or maybe a bibliography. Obtaining limited segments of larger works such as those mentioned may be more difficult through inter-library loan channels, especially in the case of works that may contain more than one item of potential interest. It can happen that the researcher will need to go in person to inspect the work to decide which part is actually required.

Suppose the researcher is already on the premises of the local academic library and has located target material. Should he not wish to check out a book, he is left with the option of scanning the material himself. Of course these libraries often have scanners that they make available to patrons, so that is one possible option. Yet another option is for the researcher to use his own scanner, and this is where highly portable hardware such as the Magic Wand portable scanner comes in.

I invested in one of these a few years ago and it has proved quite useful. One of the problems with using it, though, is that, for the bulk of books and journals (i.e., those of more standard size), it seems to work best to scan pages sideways--horizontally, rather than vertically. In other words, it works best to start from the spine and scan toward the page edges. This, obviously, entails that roughly every other page will have been scanned in a different orientation from the page preceding it.

Once all pages are scanned, they can easily be rotated in bulk to the desired orientation--by 90 or 270 degrees, as the case may be--using imagemagick's mogrify tool, like so: mogrify -rotate 90 *.JPG (a command like convert -rotate 270 PTDC0001.JPG PTDC0001-270rotate.jpg would perform much the same function while preserving the original file). In my case, it seemed best to first copy all odd, then all even, image files to separate directories prior to rotating them.
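
Assuming the scanner numbers its output sequentially and uses the .JPG extension as in the example above, a quick way to do that odd/even split might be something like:

#!/bin/bash
# copy alternating scans into separate directories for rotating
mkdir -p odd even
i=1
for f in *.JPG; do
    if [ $((i % 2)) -eq 1 ]; then
        cp -- "$f" odd/
    else
        cp -- "$f" even/
    fi
    i=$((i + 1))
done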

At this point, I needed to name all files with either odd or even numbers. My bash scripting skills being modest at best, I began scouring the internet for a solution that would aid me in doing this sort of bulk renaming. I found such a script at http://www.linuxquestions.org/questions/programming-9/bash-script-for-renaming-files-all-odd-617011/ and a bit of testing proved it to be a solution that would work for me.

I modified the script into two variants, naming them rename-all_odd.sh and rename-all_even.sh. The scripts look as follows:


and


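A rough sketch of what the odd-numbered variant boils down to (not the actual script from that thread, and with an assumed file-name pattern) would be:

#!/bin/bash
# rename-all_odd.sh (sketch): renumber the .JPG files in the current directory
# as page-001.JPG, page-003.JPG, page-005.JPG, ...
n=1
for f in *.JPG; do
    mv -- "$f" "$(printf 'page-%03d.JPG' "$n")"
    n=$((n + 2))
done
# the even-numbered variant would simply start the counter at 2
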
It was then a simple matter of copying all the renamed files into a separate directory and concatenating them into a pdf, as was covered in a previous installment.
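
As a reminder of that last step, the concatenation can be as simple as a single imagemagick command, assuming file names like those in the sketch above:

convert page-*.JPG combined-output.pdf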

Wednesday, January 23, 2013

11th installment: lynx; your own personal google scraper

Ok, I'll admit it: there's certainly hyperbole in this entry's title. What I'm doing with the text-mode browser lynx isn't really scraping--it's just something that bears, in my view, some conceptual similarities. It might appear similar because what I've done is to come up with a way of invoking lynx (or any other text-mode browser, for that matter), with search terms already entered, from the command line. The end product is just the text results google finds relative to your query--sans all the bells and whistles google's search portal has been foisting on us in recent years. Why is this a significant accomplishment? Well, consider the following.

Background

Have you found google's search portal to be increasingly cluttered and bothersome? I certainly have. Things like pop-out previews do nothing for me but create distraction, and auto-completion is far more often an irritation to me than a help: as a liberal estimate, perhaps 25% of my searches have benefited from the auto-completion feature. For what it's worth, if google wished to provide better service to users like me, they would create two separate search portals: one would be a fuzzy-feely search portal for those who might be uncertain as to what they're seeking and who could benefit from auto-completion and/or pop-out previews; the other would be google's old, streamlined search page and would involve little more than short text summaries and relevant links.

Once upon a time there was a google scraper site at www.scroogle.org--billing itself more as a search anonymizer than as an interface unclutterer--that provided a results page pretty much like the old google one. I used to use scroogle in the days before google introduced some of the more irritating "enhancements" that now plague their site, and came to appreciate above all its spartan appearance. But, alas, scroogle closed its doors in mid-2012 and so is no longer an option. I've been stuck since, resentfully, using google.

In a recent fit of frustration, I decided to see whether there might be any other such scrapers around. As I searched, I wondered as well whether one might not be able to set up one's own personal scraper on one's own computer: I had certainly heard and read about the possibilities for conducting web searches from the command line, and this seemed a promising avenue for my query. I ended up finding some results that, while providing but a primitive approximation, look like they may nonetheless have given me a workable way to do the sort of pseudo-scraping I need. Thus, the following entry.

More about the task

Conducting web searches from the command line is another way of describing the task I aimed to accomplish. Granted, doing this sort of thing is nothing especially new. surfraw, for example, created by the infamous Julian Assange, has been around for a number of years and more properly fits into the category of web-search-from-the-command-line utilities than does the solution I propose--which just invokes a text-mode browser. There are actually several means of doing something that could be classified as "searching the web from the command line" (google that and you'll see), including the interesting "google shell" project, called "goosh."

Still, the solution I've cobbled together using bits found in web searches, and which involves a bash function that calls the text-mode browser lynx, seemed on-target enough and something worth writing an entry about. Details below.

The meat of the matter: bash function

To begin with, some credits. The template I cannibalized for my solution is found here: I only did some minor modifications to that code so that it would work more to my liking. There's another interesting proposition in that same thread, by the way, that uses lynx--though it pipes output through less. I tried that one and it got me thinking in the direction of using lynx for this. But I liked the way the output looked in lynx much more than when piped through less, so I decided to try further adapting the bash function for my uses and came up with the following.

The bash function outlined at that site actually uses google search and calls a graphical browser to display the output. The graphical browser part was the one I was trying to obviate, so that would be the first change to make. I mostly use elinks these days for text-mode browsing but, having revisited lynx while experimenting with the other solution posed there, I decided I would try it out. And I must say that it does have an advantage over elinks in that URLs can be more easily copied from within lynx (no need to hold down the shift key).

I could not get the google URL given in that example to work in my initial trials, however. This is likely owing to changes google has made to its addressing scheme in the interval since that post was made. So I first used a different URL, from the search engine startpage.

After some additional web searching and tweaking, I was finally able to find the correct URL to return google search results. Though that URL is likely to change in the future, I include it in the example below.

What I have working on this system results from the code below, which I have entered into my .bashrc file:
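
A minimal sketch of the sort of function involved, with a generic Google query URL standing in for the exact address mentioned above, would be:

search() {
    # join all arguments into one string, then replace spaces with '+'
    local query="$*"
    query="${query// /+}"
    lynx "https://www.google.com/search?q=${query}"
}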



Once that has been entered, simply issue . .bashrc so that your system will re-source your .bashrc file, and you're ready for command-line web searching/pseudo-scraping. To begin searching, simply enter the new terminal command you just created, search, followed by the word or phrase you wish to search for on google: search word, search my word, search "my own word", search my+very+own+word, and seemingly just about any other search term or phrase you might otherwise enter into google's graphical search portal all seem to work fine.

lynx will then open in the current terminal to the google search results page for your query. You can have a quick read of summaries or follow results links. Should any of the entries merit graphical inspection, you can copy and paste the URL into your graphical browser of choice.

You'll probably want to tell lynx (by modifying the relevant option in lynx.cfg) either to accept or reject all cookies, so as to save yourself some keystrokes. If you do not, it will, on receiving a cookie, await your input before displaying results. Of course you could use any other text-mode browser as well--such as w3m, the old links or xlinks, retawq, or netrik.
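
For reference, the lynx.cfg setting that accepts cookies without prompting (assuming a reasonably recent lynx) is:

ACCEPT_ALL_COOKIES:TRUE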

Suggestions for improvements to my solution or offerings of alternative approaches will be appreciated. Happy pseudo-scraping/command-line searching!

AFTERTHOUGHT: I happened upon some other interesting-looking bash functions at another site that are supposed to allow other types of operations from the command line; e.g., defining words, checking the weather, translating words. These are rather dated, though (2007), and I couldn't get them to work. Interpreting their workings and determining where the problem(s) lie is a bit above my pay grade: anyone have ideas for making any of these functions once again operable?