This series is written by a representative of the latter group, which is composed mostly of what might be called "productivity users" (perhaps "tinkerly productivity users"?). Though my lack of training precludes me from writing code or improving anyone else's, I can nonetheless try to figure out creative ways of utilizing open source programs. At the same time, because of that lack of expertise, my modest technical acumen keeps me from using those programs in what may be the most optimal ways. The open-source character of this series, then, consists in my presenting to the community of open source users and programmers my own crude and halting attempts at accomplishing computing tasks, in the hope that those more knowledgeable than I am can offer advice, alternatives, and corrections. The desired end result is the discovery, through a communal process, of optimal and/or alternative ways of accomplishing the sorts of tasks that I and other open source productivity users need to perform.

Friday, April 24, 2020

Another imagemagick trick: txt to png

I ran across an article some time ago that discussed exporting text to an image format using imagemagick's convert command. Although it looked interesting, I wasn't sure at the time whether it would actually be useful to me in any real-world way. Since I'm now considering various schemes for making paper back-ups of some sensitive data, I've begun investigating this means of making an image from text again. So I thought I'd provide a bit of documentation here and make a comment or two on some recent tests.

The current iteration of the command I've been testing is

convert -size 800x -pointsize 6 caption:@my-data.txt my-data.png

The -size switch performs the obvious function of setting the image size--or, since only the first dimension is given here, just its width. The -pointsize switch sets the size of the font used for the text that appears in the image--in this example, a very small font. I'm sure a font face can be specified as well, though I am not experimenting with that at this time.
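
From what I can tell, a -font switch exists for just that purpose. Here is a rough, untested sketch--the font name below is an assumption and would need to match one of the names reported by convert -list font on the system in question:

# list the font names imagemagick knows about, then use one of them
convert -list font | grep Font:
convert -size 800x -font DejaVu-Sans -pointsize 6 caption:@my-data.txt my-data.png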

In the example given here, the name of a text file is specified (the @ prefix tells convert to read the text from that file). Long lines are wrapped by the program to fit the specified width; if no width is specified, the image's width will correspond to the longest line. The output of a command can also be directed into an image, though slightly different syntax from what is seen in this example would be needed in that case.
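
For reference, my understanding is that caption:@- tells convert to read the text from standard input, so a command's output can be piped straight into an image--something like the sketch below (the df command is just an arbitrary example):

# pipe a command's output into convert and render it as an image
df -h | convert -size 800x -pointsize 10 caption:@- disk-usage.png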

Another convert option that works similarly to caption is label. The main difference, as I understand it, is that caption wraps long text to fit the given width, while label does not. The caption option seemed more relevant to the task I was experimenting with, since a larger amount of text could be involved.

The experiments I've been conducting are for the possible purpose of making a paper back-up of the sensitive data. An image file of the data could be created, printed, then the image file erased.

Finally, I recently discovered that there is a fork of imagemagick called graphicsmagick. I have not looked into that very deeply or used it so far. But I will be investigating further.

For reference I got my original introduction to this neat feature from an article at https://www.ostechnix.com/save-linux-command-output-image-file/

More can be found at http://www.imagemagick.org/Usage/text/

Thursday, November 14, 2019

view youtube videos while browsing with elinks

I recently wanted to view a youtube video on a computer that had no graphical browser installed. Long story short, the computer runs Gentoo and is used almost exclusively for recording and playback of television content, so for almost all use cases a graphical browser simply isn't needed--and the excessive compile time one would entail (over 24 hours on this somewhat low-resource machine) is hard to justify.

I decided there must be some way, using the non-graphical browser I did have on this machine--elinks--to view youtube videos. A bit of online research revealed how I could accomplish this task. Though there is undoubtedly more than one way to skin this cat, I used the one that seemed most straightforward to me, as described below.

Since I already had the mpv utility installed, all I had to do was make some minor tweaks to elinks. First, I went into Setup > Option manager > Document > URI passing and added a new entry, which I simply named youtube-handle_mpv. The final task for this step, of course, is to save that option.

I then edited that entry, using information found at the Arch wiki entry for elinks, and added the line mpv %c (this allows elinks to feed the designated URI to mpv). Having done that, I next needed to assign a key which, when pressed, would trigger the URI passing.

I went to Setup > Keybinding manager > Main mapping > Pass URI of current frame to external command and there designated the grave or backtick key as the one that would trigger the URI passing. Again I selected "save" and exited the Setup menu.
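
For those who would rather edit the configuration file directly, my understanding is that those two menu operations amount to a couple of lines in ~/.elinks/elinks.conf, roughly as sketched below. I have not verified these exact option and action names--they are assumptions drawn from my reading of the Arch wiki page--so treat them as a starting point only:

# entry added under Document > URI passing: feed the chosen URI to mpv
set document.uri_passing.youtube-handle_mpv = "mpv %c"
# bind the grave/backtick key to "Pass URI of current frame to external command"
bind "main" "`" = "frame-external-command"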

After having done that, I navigated elinks to youtube's site, searched for the video I wanted to view and, having highlighted the desired link using the arrow keys, pressed the grave/backtick key. After a brief pause (for downloading and caching some of the data, I presume), mpv opened and the video began to play.

NOTE: the pause between pressing the designated key and the actual playback of the video under mpv could vary based, I believe, on the length/quality of the video.
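
If that pause ever became bothersome, one thing worth trying might be to have mpv (which hands the URL off to youtube-dl behind the scenes) request a lower-quality stream. The format string below is just an illustrative assumption; it would replace the plain mpv %c in the URI-passing entry:

# ask for a stream no taller than 480 pixels to cut down on buffering time
mpv --ytdl-format="best[height<=480]" %c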

Friday, February 15, 2019

Stress-testing hard drives

This entry will consist mostly of someone else's content. The back story is that, about 3 years ago on the mythtv-users list-serv, one of the list members offered such a concise, straightforward, and apparently sound description of how she tests out new hard drives that it remained in my memory.

Well, the time has come for me to replace an aging hard drive in my MythTV machine, so it's time to dig out those directives and actually use them. And while I'm doing that, I may as well post them on this blog for future reference. Credit for the material goes to faginbagin/Helen, the user who posted it to the list-serv.

Without further ado, here is the content:
I look for drives with the longest warranty. This Christmas I bought 4 3TB HGST drives with 3 year warranties. Got to look close, because some only have 2 year warranties. Before I put any new drive into service, I use the following procedure to increase the chances it's a good drive that will last.

Capture SMART data via:
smartctl -a /dev/sdx > smart0.out

Write semi-random data on the whole drive with:
nohup shred -n1 -v /dev/sdx > shred.out &

Go away for a couple of hours. Check shred.out and figure out how long it will take to finish. Come back when it should be done.

Read the whole drive and compute a checksum:
nohup md5sum /dev/sdx > md5sum.out &

Go away for roughly the same time it took shred to write to the drive.

Read the whole drive again and make sure the checksum matches:
nohup md5sum -c md5sum.out > md5sum.chk &

Go away for roughly the same time it took the first md5sum to read the drive.

Write zeros to the drive:
nohup dd if=/dev/zero of=/dev/sdx bs=1M > dd.out &

Capture SMART data via:
smartctl -a /dev/sdx > smart1.out

Compare the two smart runs:
diff smart0.out smart1.out

Make sure there are no complaints about sectors.

Make sure the kernel didn't report any errors:
dmesg | tail

If no SMART or kernel reported errors, create partition table, create partitions, mount, etc...

If any errors, return immediately.
Original post located at http://lists.mythtv.org/pipermail/mythtv-users/2016-April/386438.html
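
Just as a convenience for my own future use, the steps above could be collected into a single script and run sequentially rather than backgrounded with nohup. This is an untested sketch of my own, not part of Helen's post; the device node is a placeholder that would obviously need to be changed, and the whole procedure is destructive to whatever is on the drive:

#!/bin/sh
# burn-in-drive.sh -- hypothetical wrapper around the procedure quoted above
DRIVE=/dev/sdX                          # placeholder: set to the drive under test

smartctl -a "$DRIVE" > smart0.out       # baseline SMART data
shred -n1 -v "$DRIVE"                   # one pass of semi-random data over the whole drive
md5sum "$DRIVE" > md5sum.out            # read the drive back and record a checksum
md5sum -c md5sum.out                    # read it again and verify the checksum matches
dd if=/dev/zero of="$DRIVE" bs=1M       # write zeros over the whole drive
smartctl -a "$DRIVE" > smart1.out       # SMART data after the workout
diff smart0.out smart1.out              # look for new complaints about sectors
dmesg | tail                            # check for kernel-reported errors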

Thursday, January 31, 2019

dhcpcd hooks: what are they and why should you care?

I know I didn't know what dhcpcd hooks were or why I should care about them--that is, until I set up a file server that runs headless and that I want to make sounds when certain events, such as the network coming up, occur. This, as may be evident, helps me ascertain whether, in the absence of a monitor displaying graphical indicators, the machine booted and came online successfully--a pretty important status for a machine like a file server.

dhcpcd (the final "d" stands for "daemon") is, of course, the utility that runs on a lot of Linux computers in order to get them connected to the network. It requests an IP address from whatever dhcp (dynamic host configuration protocol) server is running on the network, and the address received gets assigned to the computer's designated network interface. The "hooks" under discussion essentially latch onto that process and cause some other process(es) to be triggered once dhcpcd accomplishes its task. This, then, could allow me to implement the desired behavior on my headless file server.

I had part of the solution in place already, namely the beep program, a utility that allows for the playing of various tones through the pc speaker. Yes, I'm aware that most computer users seem to want only to disable the pc speaker: I, on the other hand, have found it quite useful on my systems.

Having done some on-line research on the matter, I was able to get the pc speaker to play a tone once the computer had booted and gotten an IP, using the following steps (geared toward the Void Linux distribution installed on the target system).

I first created a file in /usr/share/dhcpcd/hooks/, owned by root and in the root group--call it, say, 10-beeponIP--with the following content:

if [ "$reason" = "BOUND" ] || [ "$reason" = "RENEW" ]; then
    # your script commands here (see below for the command I used)
    # play a series of short beeps through the pc speaker
    /usr/bin/beep -f 1000 -r 5 -n -r 5 -l 10
fi


I can't go into many specifics of the shell syntax seen in this file, since I understand it rather poorly myself (it was simply lifted from the askubuntu link listed below). But some testing revealed that it does, in fact, produce the behavior I was aiming for. As best I can tell, the $reason variable is set by dhcpcd each time it runs its hooks, and "BOUND" and "RENEW" are the values it takes when a lease is obtained or renewed.

I had to set proper permissions on that file, then symlink it. Doing the former was straightforward (the permissions needed are, like those on the rest of the files in that directory, 444). I then ran the following command to create the needed symlink: sudo ln -s /usr/share/dhcpcd/hooks/10-beeponIP /usr/libexec/dhcpcd-hooks.
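
Spelled out as commands, those two steps look like this (just a recap of what was described above, using the paths from my Void system):

# make the new hook read-only, matching the other files in that directory
sudo chmod 444 /usr/share/dhcpcd/hooks/10-beeponIP
# link it into the directory of hooks that dhcpcd actually runs
sudo ln -s /usr/share/dhcpcd/hooks/10-beeponIP /usr/libexec/dhcpcd-hooks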

Having done that, on rebooting the computer, the designated tone plays through the pc speaker, letting me know that the system booted normally and is now on-line. Mission accomplished!

Some links that helped me to better understand and accomplish this task are given below:

https://askubuntu.com/questions/1005653/how-do-i-execute-a-script-after-dhcp-assigns-an-ip-address-on-start-up
https://wiki.voidlinux.eu/Network_Configuration#Starting_wpa_supplicant_through_dhcpcd_hooks
https://man.voidlinux.eu/dhcpcd-run-hooks.8

Saturday, November 17, 2018

Twitter Alerts: A Trick for the Twitter-averse

I'm not a registered Twitter user and have never managed to think of a compelling reason to be one. In fact, the only time I ever really have or want anything to do with Twitter is when some Twitter feed comes up in an internet search. And all I do in those cases is read any relevant text and move on. I suppose I'm not much of a socialite and accordingly have little interest in social media phenomena such as this.

Recently, however, I became interested in joining a service that sends out invitations periodically on Twitter. Not having an account and not being interested in much of anything else Twitter represents or offers, I'm at a distinct disadvantage in this case: what am I supposed to do, start checking it every day for possibly months on end in hopes of stumbling upon the desired invitation? Not for me, obviously.

But I soon began to realize, based on other web-scraping and scheduling jobs I'd set up recently, that I would likely be able to automate the task of checking this Twitter feed for invitations. I had tools like text-mode browsers that seemed to render Twitter pages pretty well, as well as commands like grep for finding target text. And of course cron could play a key role in automating things as well. Accomplishing the task actually turned out to be quite simple.

I had already set up a way to check a Twitter feed from a keystroke, rendering the text in a terminal on my desktop: elinks -dump -no-numbering -no-references https://twitter.com/TargetTwitt | tail -n +21 | head -n -8 | less seemed to do the job just fine.* The problem with that approach, with regard to the task at hand, is that I would need to remember to use the key combination daily to check for invitations.

The next step, then, could be to recruit grep to search for target text--a keyword like "invit"--which, if found in the text essentially scraped from the Twitter feed, would trigger my machine to send me an e-mail. Since I already regularly use mailx to auto-send myself various sorts of e-mails, most of that aspect of this task was already in place as well.

The command I tested, and that seems to bring these various elements together nicely, is as follows: body="$(elinks -dump -no-numbering -no-references https://twitter.com/TargetTwitt | grep -A 1 -B 1 the)" && echo "$body" | mailx -s Twit-invite me@my-em.ail.** That line, of course, uses a common word (the article "the") as the searched-for string, simply for testing purposes, to prove that the whole thing works together as expected.

The command first dumps text from the Twitter feed to stdout, then pipes it to grep, which looks for the target text-string. If the string is found, it is included--along with a couple of adjacent lines--in the body of an e-mail that mailx will send to me (the scheme assumes that a valid smtp transport mechanism has been set up for mailx--a topic beyond the scope of this brief post). If the term is not found--something I also tested by changing the search term to one I was sure would not be included in the twitter feed--nothing further happens: the scraped text simply gets discarded and no e-mail is sent.*** The test passed with flying colors, so the only remaining thing was to set up a daily cron job.
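
For completeness, the daily cron job might look something like the entry below; the script name and path are hypothetical, since I simply saved the long command above into a small script and pointed cron at it:

# check the Twitter feed for invitations every morning at 8:00
0 8 * * * /home/myuser/bin/twitter-invite-check.sh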

Though this configuration seems to work well and looks as though it will serve my purposes just fine, it likely could be improved upon. Should any readers of this blog have suggestions for improvements, feel free to post them in the comments section below.


* lynx, cURL, wget, and other tools could likely replace elinks here, and might even be more effective or efficient. Since I know elinks fairly well and use it for other, similar tasks, I did not investigate any of those.

** Command found and copied in large part from https://unix.stackexchange.com/questions/259538/grep-to-search-error-log-and-email-only-when-results-found.

*** More precisely, I think what happens is that when a string searched for with grep is not found, grep returns exit code 1, which, in the case of this series of commands, means the process does not proceed past the && operator (which means something like "run the next command only if the previous command completed successfully").

Thursday, October 20, 2016

Discussion topic 1: vim or emacs for personal wiki, etc?

Instead of my more typical how-to, this posting will aim to solicit input from readers. I realize I may be inviting some sort of flame war, but rest assured that my intentions are sincere: I really am largely ignorant of the respective virtues and flaws of the two utilities on which I want to solicit input, having barely dabbled in either. My hope is that I might get input here on which will be the more worthwhile one to put further effort into learning.

First, a bit about my aims. Some time ago I set up a personal wiki for myself--a vehicle for keeping track of inspiring ideas and of tasks I am working on now or will need to work on in the future, and a receptacle for various tech tips I have used, may need again, but have difficulty remembering. I wanted the wiki to be accessible to me not just at home but from the internet as well. Much as I wanted to keep the wiki's set-up and maintenance simple, at the time I deemed that deploying it under a web-serving scenario would be required. To that end, I implemented the MoinMoin wiki on a machine I administered.

That scenario has worked out acceptably well over the last few years. But it is now time to take that machine out of service, so I will need to reconstitute my wiki and am revisiting the matter of how to set it up and administer it.

Having a preference for simple, resource-frugal utilities, I am hoping I might migrate my wiki to some command-line interface. The overhead and complexity of the web server most wikis involve is not really justified for my use case: in fact, I might be engaging in a bit of hyperbole in claiming that I use what I have as a real wiki--it's used more like just an organizer.

Under my best-case envisioned scenario, I could either ssh into my machine to consult and/or modify my wiki, or perhaps even host it at a shell account to which I have access. It's an appealing thought and one I hope I will soon be able to implement.

So far as I can tell, the two candidate command-line tools I might use for this are vimwiki and emacs in org-mode. And I must admit that my experience with both has been very slight. In fact, I've tried to avoid using either vim or emacs, typically gravitating to nano for the sorts of needs either of those utilities might otherwise fulfill. Perhaps emacs will be slightly preferable, since development on the vimwiki plugin seems to have ceased a little over 4 years ago, while emacs org-mode seems to have a quite active and extensive user and development base.

Both utilities, with their arcane interfaces and keystroke options, have left me baffled and even trapped on more than one occasion. Having a few years of command-line interaction under my belt, I did recently manage a bit of experimentation with emacs org-mode--at least enough to convince me that it could be a suitable new vehicle for my wiki.

I had pretty much written off vim as a possible vehicle since, in past attempts to utilize it, I found it even more impenetrable and intractable than emacs. But that situation recently changed somewhat when I realized that one of the best tools for doing some routine maintenance on one of my Arch systems employs vimdiff. Having used that a few times, I can now say that I've recently managed, in the form of vimdiff, to use vim successfully for some system maintenance tasks.

And just today I learned that emacs has its own diff implementation--ediff--as well. So emacs might also be serviceable in the system-maintenance capacity, should I decide that it will be more worthwhile to try to learn emacs org-mode better.

The bottom line here is that it looks as though I am going to be using one or the other of these utilities routinely, so it is time I started learning it better. And I can, at the same time, use whichever one I end up learning as the new vehicle for my wiki.

So I am looking for guidance and recommendations on which is likely to better suit my needs and disposition--or on whether I might even have overlooked some other command-line utility for creating and maintaining a personal wiki. I should state that I am unlikely ever to do any sort of programming, so whatever the relative advantages of either may be with respect to coding, they will be largely irrelevant for me. Rather, I would be using them for some simple editing functions, for routine maintenance tasks (comparing updated config files with files already on my system), and for managing my wiki.

Let the discussion begin.

Afterthought: perhaps creating a markdown file containing my wiki's material, then converting it to html for viewing with elinks/lynx, could even work? In other words, a sort of homebrew solution?
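
A rough sketch of how that homebrew approach might look, assuming pandoc were used for the conversion (the file names are just placeholders):

# convert the wiki's markdown source into a standalone html file, then browse it
pandoc -s wiki.md -o wiki.html
elinks wiki.html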

Saturday, June 18, 2016

Another addendum to the seventh installment: imagemagick as a resource for the budget-constrained researcher continued

Continuing on the theme of the last couple of entries, the budget-constrained researcher may own or wish to acquire a piece of hardware that can aid him in obtaining needed materials from research libraries. For example, he may need but a single article, or perhaps a chapter from a book, or maybe a bibliography. Obtaining limited segments of larger works such as these may be more difficult through inter-library loan channels, especially in the case of works that may contain more than one item of potential interest. It can happen that the researcher will need to go in person to inspect the work in order to decide which part is actually required.

Suppose the researcher is already on the premises of the local academic library and has located the target material. Should he not wish to check out a book, he is left with the option of scanning the material himself. These libraries often make scanners available to patrons, so that is one possibility. Another is for the researcher to use his own scanner, and this is where highly portable hardware such as the Magic Wand portable scanner comes in.

I invested in one of these a few years ago and it has proved quite useful. One of the problems with using it, though, is that for the bulk of books and journals (i.e., those of more standard size), it seems to work best to scan pages sideways--horizontally rather than vertically. In other words, it works best to start from the spine and scan toward the page edges. This, obviously, entails that roughly every other page will have been scanned in a different orientation from the page preceding it.

Once all pages are scanned, they can easily be rotated in bulk to the desired orientation--by 90 or 270 degrees, as the case may be--using imagemagick's mogrify command, like so: mogrify -rotate 90 *.JPG (a command like convert -rotate 270 PTDC0001.JPG PTDC0001-270rotate.jpg would perform much the same function while preserving the original file). In my case, it seemed best to first copy all odd, then all even, image files to separate directories prior to rotating them.

At this point, I needed to name all files with either odd or even numbers. My bash scripting skills being modest at best, I began scouring the internet for a solution that would aid me in doing this sort of bulk renaming. I found such a script at http://www.linuxquestions.org/questions/programming-9/bash-script-for-renaming-files-all-odd-617011/ and a bit of testing proved it to be a solution that would work for me.

I modified the script into two variants--one that renumbers the files with odd numbers only, the other with even numbers only--and named them rename-all_odd.sh and rename-all_even.sh.
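
To give a rough idea of what the odd-numbering variant does--this is my own sketch, not the exact script from the linuxquestions thread--it walks through the scanned files in order and renames each one to the next odd number (the even variant would start at 2 and likewise step by 2):

#!/bin/bash
# rename-all_odd.sh (sketch): rename scanned pages to 0001.jpg, 0003.jpg, 0005.jpg, ...
n=1
for f in *.JPG; do
    mv "$f" "$(printf '%04d' "$n").jpg"
    n=$((n + 2))
done
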
It was then a simple matter of copying all the renamed files into a separate directory and concatenating them into a pdf, as was covered in a previous installment.
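
One way to handle that final step--the output file name here is just an example--is to let convert do the concatenation as well:

# the zero-padded names sort correctly, so the pages end up in the right order
convert *.jpg chapter-scan.pdf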