This series is written by a representative of the latter group, which consists mostly of what might be called "productivity users" (perhaps "tinkerly productivity users"?). Though my lack of training precludes me from writing code or improving anyone else's, I can nonetheless try to figure out creative ways of utilizing open source programs. At the same time, that same lack of expertise means that, even when I manage to deploy open source programs in creative ways, my modest technical acumen keeps me from using them in the most effective ways. The open-source character of this series, then, consists in my presenting to the community of open source users and programmers my own crude and halting attempts at accomplishing computing tasks, in the hope that those more knowledgeable than I am can offer advice, alternatives, and corrections. The desired end result is the discovery, through a communal process, of better or alternate ways of accomplishing the sorts of tasks that I and other open source productivity users need to perform.

Thursday, August 7, 2025

The AI revolution?

That's kind of a dramatic title, but an appropriate one for this blog and this post. I've recently discovered AI (yes, I've been living under a bit of a rock) and its possibilities for the kinds of tech projects I undertake. It turns out that, by using AI, my laborious process of trawling the internet for snippets of information related to the things I try to do with my computer may have come to an end. Yes, with AI at my disposal, I have an always-available interlocutor to consult on these topics, and one who is far more knowledgeable than I am about computer coding.

The impact on this blog is that, instead of documenting my halting attempts and offering links to articles I've found, I will likely henceforth be posting a lot of articles about scripts AI has helped me create and related knowledge it has conveyed. I've already developed several scripts over the last few weeks, and I keep thinking up more. In fact, this post is being published partly by a script AI helped me to develop. True, its initial attempts at using the blogger API turned out to be a huge and fruitless time sink. I finally told it I was calling it quits after many hours of meeting one frustration after another--not necessarily entirely the fault of AI, but due in large part to google's inhumanly complex expectations for how users should interface with its services, which seem calculated to deviously thwart the standard end user at every turn.

But I managed to get AI to help me develop a semi-automated way of posting using the method discussed in the last post I made here, which involves the xclip utility. In other words, I sometimes need to come to AI's rescue, paltry as my abilities are in this realm. And I've had to make suggestions to AI several times about how to do something or about some utility to use. In fact, I just had a frustrating session with AI trying to develop a working .asoundrc (yes, no pulse or pipewire here, just good old-fashioned alsa) that would allow me to send sound through my laptop's HDMI port. To offer a bit of gory detail, AI couldn't figure out (nor could I) why aplay could send sound via plughw, while trying to configure .asoundrc with plughw devices would report that the relevant library couldn't be found.
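For the curious, the general shape of the .asoundrc we were attempting looked roughly like the following--offered purely as a sketch, since this is the part that never worked for me, and the card and device numbers are guesses that would need to be checked against the output of aplay -l:

# route default playback to the HDMI device (card 0, device 3 assumed here)
pcm.!default {
    type plug
    slave.pcm "hw:0,3"
}
ctl.!default {
    type hw
    card 0
}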

But AI did provide me with a nice workaround for sending my laptop display out the HDMI port using xrandr, a solution that leaves it to the video player (mpv in this case) to select the proper audio device for the sound. In fact, here's a nice bash script I asked it to develop for when I want to send the display through the HDMI port:


#!/bin/bash

# Find the names of your displays with 'xrandr'
# For a typical laptop, this is a good guess:
LAPTOP_DISPLAY="LVDS1"
EXTERNAL_DISPLAY="HDMI1"

# Present the menu
echo "Choose a display configuration (for mpv audio hit g-d keys & select alsa/hdmi once mpv has been started):"
echo "1) Extend desktop (external right of laptop)"
echo "2) Mirror displays"
echo "3) External display only"
echo "4) Revert to laptop display only"
echo -n "Enter your choice (1-4): "
read -r choice

# Execute the command based on user choice
case $choice in
    1)
        echo "Extending desktop..."
        xrandr --output "$EXTERNAL_DISPLAY" --auto --right-of "$LAPTOP_DISPLAY"
        ;;
    2)
        echo "Mirroring displays..."
        xrandr --output "$EXTERNAL_DISPLAY" --auto --same-as "$LAPTOP_DISPLAY"
        ;;
    3)
        echo "Using external display only..."
        xrandr --output "$EXTERNAL_DISPLAY" --auto --output "$LAPTOP_DISPLAY" --off
        ;;
    4)
        echo "Reverting to laptop display only..."
        xrandr --output "$EXTERNAL_DISPLAY" --off --output "$LAPTOP_DISPLAY" --auto
        ;;
    *)
        echo "Invalid choice. Please enter a number between 1 and 4."
        ;;
esac
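To use it, I just save it to a file (say, hdmi-switch.sh), make it executable with chmod +x hdmi-switch.sh, and run it from a terminal after plugging in the HDMI cable.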

By the way, I'm mostly using Google's Gemini so far. I've tried duck.ai as well, which I found to be a bit deficient, although, on the positive side, they don't seem to do geo-blocking like other services do. I also decided to try a paid service, GitHub's Copilot. Honestly, I've so far not seen its advantages as compared to Gemini. But I'm still quite new at this, so I may decide in the future to use some better alternative. Time will tell.

I've learned a bit about strategizing to get help from AI as well. I've discovered that I need to develop projects gradually: first coming up with a simplified mock-up of a key part of what I'm aiming to accomplish, then asking AI to build gradually on that. Describing a complex project and its aims to AI at the outset is likely to result in it proposing a complex script, but experience has thus far shown that the result will be so unwieldy, and its parts so error-prone, that it will turn out to be a nightmare to troubleshoot. Better to start off simply and build on working parts that can later be tied together.

I would say I stand to learn a lot about coding from AI. It's light years ahead of me when it comes to error-checking routines; I'm usually so focused on getting my tasks accomplished that I don't even think about those sorts of things. But I also suspect that AI doesn't implement sound coding practices very well. Some of the scripts it has put together for me have turned out, so far as I can tell, to be horrendously complex monstrosities which, as I think about what they do and how they're designed, seem to string together too many routines and files into one. If I understand coding protocols at all correctly, these scripts should be divided up into a couple of smaller scripts, and the data they produce should be written to separate files. This is all pretty abstract without concrete examples, but in future posts I should be posting some of that code here, which may give a clearer indication of what I'm getting at.

So stay tuned for those posts.

Monday, October 21, 2024

xclip to the rescue? How I wish . . .

I find the blogging platform I've been using for this blog to be horrendous. First, I don't want to have to log into a website to write articles: I'd rather type them on my computer, usually using my favored command-line text editor nano.

I was initially enthused to discover that blogspot/blogger (the host for this blog) offered the possibility of e-mailing posts. Essentially, you add a custom word to an e-mail address they provide, and you can use that address to send posts directly to your blog (the subject line becomes the blog posting's title). Great, I thought, I can type posts on my computer and send them to my blog.

But my enthusiasm was short-lived. Initial tests indicated that the blogging platform was inserting a bunch of extraneous html tags into postings I'd send that way. No problem, I thought, I'll just write my posts with html tags pre-inserted. Tests using that method indicated that the platform continued to insert those extraneous tags. See the graphic below for an indication of how the file looks in html editing view on blogger's site (all the &gt &lt &#39 and <br> were added by their system, not by me):


So I was left with a dilemma: how can I more conveniently, according to my preferred practices, continue blogging here? Researching the matter revealed the xclip utility, which appeared to be a good workaround for helping deal with the constraints involved in continuing to blog on this platform.

I discovered that, after I save the posting I've written under nano, I can simply cat the file into the system's clipboard by running a command like cat myfile.txt | xclip (see the LATER EDIT note at the very end of this post for the corrected invocation for the blogger site) and from there paste it into the platform's editor interface by placing my cursor there and clicking the middle mouse button. Granted, it's pretty much the same as using a graphical text editor like leafpad and hitting ctrl-a to highlight the whole of the file, ctrl-c to copy the highlighted text to the clipboard, then ctrl-v after placing the cursor in the platform's editor interface. Not ideal, since I would much prefer that the e-mailing method worked as I had hoped, simply importing the text as written in the file, but it's an acceptable (at least for now) workaround.

Perhaps more to come later, if I manage to make improvements or to figure out some heretofore overlooked factor.

Afterthought: I tried to paste this post into the blogger editing interface using xclip and it didn't work. Nothing would paste. So I had to paste the text into leafpad, then do the ctrl-a, ctrl-c, ctrl-v trick to be able to paste it here. I therefore changed the exclamation point I had initially placed at the end of the title to a question mark, since xclip may not provide the solution after all. Sigh.

LATER EDIT: IMPORTANT! I've discovered that the way I can paste text into blogger using xclip is by running it with a couple of switches, as follows: cat myfile.txt | xclip -selection clipboard. Having done that, I can then place the cursor in blogger's html editor, hit ctrl-v, and the text from the file will paste into the editor as intended.
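To save retyping those switches, a tiny wrapper script--just a sketch, with a file name of my own invention--could do the same thing:

#!/bin/bash
# to-clipboard: put the contents of a finished post file onto the clipboard
# usage: to-clipboard myfile.txt
xclip -selection clipboard < "$1" && echo "Copied $1 to the clipboard."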

Saturday, October 19, 2024

TeX/LaTeX: tex to txt

Since a .tex file is already essentially compatible with all text editors, you may be wondering why anyone would need to do any sort of conversion to .txt format. I would have wondered the same thing--that is, until I ran into the need myself.

But I did run into such a need. The issue was that I had a nicely formatted .tex document--an article, actually, that I had translated into a foreign language with the aid of my computer. I had one fairly competent translator check the machine translation over and make some corrections, and I thought I was ready to go. Then another translator had a look and offered to make further improvements, to which I gladly assented. This is where I ran into a problem with the nicely-formatted file.

This translator, although quite well-qualified and fairly capable in technical matters, was nonetheless not at all familiar with TeX/LaTeX formatting. So I couldn't really give him the document for correcting in the format most convenient for me (.tex). At the same time, getting the corrected text back from him in a format like .pdf or .doc would further complicate my task of getting it back into its nicely-formatted .tex state. Thus, I decided that .txt would be the most neutral format in which to provide the translator with the computer-translated text for further revision. But how to produce that?

Well, I had already created a pdf of the document, so I had that to work from. Using sed or awk to strip out all the TeX formatting codes would be an option for someone far better versed in those utilities than I am. But even that might prove a fairly involved and time-consuming task.

Some on-line searching revealed another possible solution: it involved using the utility pdftotext. It seemed worth a try.

Sure enough, running pdftotext file.pdf file.txt actually gave quite good results. There were a few anomalies I needed to clean up, but not many. I'd say the whole process took about 15 minutes total, after which I had a .txt version of this 5k-word file that I could submit to the translator.

So, in the unlikely event that you need to convert a .tex file to .txt, I can recommend first converting it to .pdf, then converting the resulting file to .txt. I should mention that this file didn't contain graphics, tables, or charts of any sort, so I can't vouch for how the method would work on files containing comparatively more complex elements like those; presumably, the less complex the document, the more successfully it will convert. So, there you have it: a method for converting .tex to .pdf to .txt.
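For the record, the whole round trip can be sketched in two commands (the file names are placeholders, I'm assuming pdflatex as the compiler, and the -layout switch simply asks pdftotext to roughly preserve the page layout):

pdflatex article.tex
pdftotext -layout article.pdf article.txt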

Friday, October 18, 2024

A motd project

Long time no post. Well, life sometimes gets in the way of tech experimentation and hoped-for successes.

But I do have a couple of projects to describe. One involves a back-up routine I developed a few years ago. I doubt that one will be of much interest to most readers, in that it was developed specifically for the non-uefi machines I tend to use and targets the paltry-sized hard drives (120 gb and less) and comparatively meager installations I favor. But it's fairly straightforward and, to my somewhat unrefined tech sensibilities, clever. The other is simpler yet and is the one I'll be describing in this post.

I refer to it as a motd (message of the day) project/routine because I want the system to provide me with certain information when I log in. My implementation differs a bit from a standard motd in that it also provides this information whenever I start a new bash session.

The information I want the system to provide is twofold: 1) a reminder of when I last backed up the system, and 2) on some of my machines, a reminder of when I last updated the operating system. In most cases, all I need reminding about is number 1), since I'm regularly using these systems and do operating system updates on these rolling-release (Arch and Void) systems every 1-2 weeks.

On other systems, however, such as my laptop, I'll need reminding about number 2) as well. That machine, and the VPS I use, are systems I am only occasionally logged into, so knowing when the last operating system update happened is quite important on those.

So my solution involves creating the files motd-backup.txt and motd-update.txt in my home directory and writing target information to them. Since I haven't scripted the back-up routine, that has to be done manually by running something like echo "++last system back up done on $(date)++" >/home/user/motd-backup.txt after I've finished running my back-up routine. That's something I'll be adding to the notes I keep as a reminder about how to use that routine.

As for the operating system updates, I've semi-automated those. I run a script at certain intervals that opens a bash session and queries whether I'd like to do a system update: a response in the positive triggers the update. I've added to that a command that runs after a successful update and writes information to ~/motd-update.txt. It looks as follows: echo "++last update $(cat /var/log/pacman.log | tail -1 | cut -c2-11)++" >/home/user/motd-update.txt

Arch'ers will understand that what's happening is that the pacman log file is being queried and information from the final entry is being excerpted and cat'ed (is this a misuse of cat?) into the motd-update.txt file. It has worked well in my testing and should suffice for the purpose.

The final piece of the puzzle is getting the information to display. I want it to be available during my day-to-day computer usage and not tucked away in a file stored somewhere on the system. Experience has indicated that I am likely to forget about such a file and so become remiss in my system administration responsibilities.

Since I'm regularly using a terminal, I decided a good place to have that information auto-displayed with regularity would be to make it part of the process of starting terminal sessions. So I put the following line at the end of my .bashrc: cat ~/motd-update.txt && cat ~/motd-backup.txt
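A slightly more defensive variant--just a sketch, for machines where one of the files might not exist yet--would test for each file first:

# at the end of ~/.bashrc: show the reminders only if the files are present
[ -f ~/motd-update.txt ] && cat ~/motd-update.txt
[ -f ~/motd-backup.txt ] && cat ~/motd-backup.txt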

And that addressed my issue. Every time I log into the system or start a new terminal, that information appears.

As I write this, I begin to think of alternatives. For example, I could use something like Xmessage and cause a pop-up containing such information to appear at regular intervals--say, once weekly. I'm sure other options like zenity could fairly easily be configured to do the same. Or perhaps I could make a .png out of the information, as I wrote about a few years ago (https://audaciousamateur.blogspot.com/2020/04/another-imagemagick-trick-txt-to-png.html), and use cron to pop up feh or some other image-displaying utility showing the information in bright text on a dark background. That could be more disruptive, but might be more effective.
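As a sketch of the Xmessage idea (the schedule and the DISPLAY value are assumptions for a typical single-user X setup), the crontab entry might look like this:

# pop up the back-up reminder every Monday at 09:00
0 9 * * 1  DISPLAY=:0 xmessage -center -file "$HOME/motd-backup.txt"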

In any case, what are your ideas for displaying important information like that? Auto-generated e-mails, for example? There are lots of possibilities. The ways to skin this cat (pun intended) are numerous.

Saturday, April 25, 2020

Another imagemagick trick: txt to png

I ran across an article some time ago that discussed exporting text to an image format using imagemagick's convert option. Although it looked interesting, I wasn't sure at the time whether it would actually be useful to me in any real-world way. Since I'm now considering various schemes for making paper back-ups of some sensitive data, I've begun investigating again this means of making an image from text. So I thought I'd provide a bit of documentation here and just make a comment or two on some recent tests.

So the current iteration of the command I've been testing is

convert -size 800x -pointsize 6 caption:@my-data.txt my-data.png

The -size switch performs the obvious function of setting the image size, or at least its width. The -pointsize switch defines the size of the font used for the text that will appear in the image--in this example, a very small font. I'm sure a font face can be specified as well, though I am not experimenting with that at this time.

In the example given here, the name of a text file is specified. Long lines are split by the program according to the width specified; if no width is specified, the width of the image will correspond to the longest line. The output of a command can also be directed into an image, though slightly different syntax from what is seen in this example would be needed in that case.
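To illustrate that variant (the df command and point size are just for illustration, and note that with ImageMagick 7 the binary is called magick rather than convert), a command's output can be piped in as the caption via the special file name "-":

# turn the output of df into an image by reading the caption text from stdin
df -h | convert -size 800x -pointsize 10 caption:@- disk-usage.png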

Another convert option that works similarly to caption is label. It seemed to me the caption option was more relevant to the task I was experimenting with since a larger amount of text could be involved.

The experiments I've been conducting are for the possible purpose of making a paper back-up of the sensitive data. An image file of the data could be created, printed, then the image file erased.

Finally, I recently discovered that there is a fork of imagemagick called graphicsmagick. I have not looked into that very deeply or used it so far. But I will be investigating further.

For reference I got my original introduction to this neat feature from an article at https://www.ostechnix.com/save-linux-command-output-image-file/

More can be found at http://www.imagemagick.org/Usage/text/

Thursday, November 14, 2019

view youtube videos while browsing with elinks

I recently wanted to view a youtube video on a computer that had no graphical browser installed. Long story short, the computer runs Gentoo and is used almost exclusively for recording and playback of television content, so for almost all use case scenarios a graphical browser isn't needed, and the excessive compile times one would entail (over 24 hours on this somewhat low-resource machine) are unjustifiable.

I decided there must be some way, using the non-graphical browser I did have on this machine--elinks--to view youtube videos. A bit of online research revealed how I could accomplish this task. Though there is undoubtedly more than one way to skin this cat, I used the one that seemed most straightforward to me, as described below.

Since I already had the mpv utility installed, all I had to do was make some minor tweaks to elinks. First, I went into Setup > Option manager > Document > URI passing and added a new entry, which I simply named youtube-handle_mpv. The final task for this step, of course, is to save that option.

I then edited that entry, using information found at the Arch wiki entry for elinks, and added the line mpv %c (this allows elinks to feed the designated URI to mpv). Having done that, I next needed to assign a key which, when pressed, would trigger the URI passing.

I went to Setup > Keybinding manager > Main mapping > Pass URI of current frame to external command and there designated the grave or backtick key as the one that would trigger the URI passing. Again I selected "save" and exited the Setup menu.
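For reference, the corresponding lines that end up in elinks.conf should look roughly like the following--the option and action names here are my inference from the menu labels, so check the config file elinks actually writes rather than trusting this sketch:

# pass the current frame's URI to mpv when the backtick key is pressed
set document.uri_passing.youtube-handle_mpv = "mpv %c"
bind "main" "`" = "frame-external-command"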

After having done that, I navigated elinks to youtube's site, searched for the video I wanted to view and, having highlighted the desired link using the arrow keys, pressed the grave/backtick key. After a brief pause (for downloading and caching some of the data, I presume), mpv opened and the video began to play.

NOTE: the pause between pressing the designated key and the actual playback of the video under mpv could vary based, I believe, on the length/quality of the video.

Friday, February 15, 2019

Stress-testing hard drives

This entry will consist mostly of someone else's content. The back story is that, about 3 years ago on the mythtv-users list-serv, one of the list members offered such a concise, straightforward, and apparently sound description of how she tests out new hard drives that it remained in my memory.

Well, now that it has come time for me to replace an aging hard drive in my MythTV machine, it's time to dig out those directives and actually use them. And while I'm doing that, I may as well post them on this blog for future reference. Credit for the material goes to faginbagin/Helen, the user who posted it in the list-serv.

Without further ado, here is the content:
I look for drives with the longest warranty. This Christmas I bought 4 3TB HGST drives with 3 year warranties. Got to look close, because some only have 2 year warranties. Before I put any new drive into service, I use the following procedure to increase the chances it's a good drive that will last.

Capture SMART data via:
smartctl -a /dev/sdx > smart0.out

Write semi-random data on the whole drive with:
nohup shred -n1 -v /dev/sdx > shred.out &

Go away for a couple of hours. Check shred.out and figure out how long it will take to finish. Come back when it should be done.

Read the whole drive and compute a checksum:
nohup md5sum /dev/sdx > md5sum.out &

Go away for roughly the same time it took shred to write to the drive.

Read the whole drive again and make sure the checksum matches:
nohup md5sum -c md5sum.out > md5sum.chk &

Go away for roughly the same time it took the first md5sum to read the drive.

Write zeros to the drive:
nohup dd if=/dev/zero of=/dev/sdx bs=1M > dd.out &

Capture SMART data via:
smartctl -a /dev/sdx > smart1.out

Compare the two SMART runs:
diff smart0.out smart1.out

Make sure there are no complaints about sectors.

Make sure the kernel didn't report any errors:
dmesg | tail

If no SMART or kernel reported errors, create partition table, create partitions, mount, etc...

If any errors, return immediately.

Original post located at http://lists.mythtv.org/pipermail/mythtv-users/2016-April/386438.html

Thursday, January 31, 2019

dhcpcd hooks: what are they and why should you care?

I know I didn't know what dhcpcd hooks were or why I should care about them. That is, until I set up a file server that runs headless and that I want to make sounds when certain events, such as the network coming up, occur. This, as may be evident, helps me ascertain whether, in the absence of a monitor displaying graphical indicators, the machine booted and came online successfully--a pretty important status for a machine like a file server.

dhcpcd (the final "d" stands for "daemon") is, of course, the utility that runs on a lot of Linux computers in order to get them connected to the network. It polls for an IP address from whatever dhcp (dynamic host configuration protocol) server is running on the network, and the IP received gets assigned to the computer's designated network interface. The "hooks" under discussion essentially latch onto that process and cause some other process(es) to be triggered once dhcpcd accomplishes its task. This, then, could allow me to implement the desired behavior on my headless file server.

I had part of the solution in place already, namely the beep program, a utility that allows for the playing of various tones through the pc speaker. Yes, I'm aware that most computer users seem to want only to disable the pc speaker: I, on the other hand, have found it quite useful on my systems.

Having done some on-line research on this matter, I was able to succeed at the task of getting the pc speaker to play a tone once the computer had booted and gotten an IP by using the following steps (geared toward the Void Linux distribution installed on that target system).

I first created a file owned by root and in the root group in /usr/share/dhcpcd/hooks/--call it, say, 10-beeponIP--with the following content:

if [ "$reason" = "BOUND" ] || [ "$reason" = "RENEW" ]; then
    # your script commands here (see below for the command I used)
    /usr/bin/beep -f 1000 -r 5 -n -r 5 -l 10
fi


I can't go into many specifics of the shell syntax seen in this file since I understand it rather poorly myself (it was essentially lifted from the askubuntu link listed below). But some testing of its claimed efficacy revealed that it would, in fact, result in the behavior I was aiming to enable.

I had to set proper permissions on that file, then symlink it. Doing the former was straightforward (permissions needed are, like the rest of the files in that directory, 444). I ran the following command to create the needed symlink: sudo ln -s /usr/share/dhcpcd/hooks/10-beeponIP /usr/libexec/dhcpcd-hooks.

Having done that, on rebooting the computer the designated tone plays through the pc speaker, letting me know that the system booted normally and is now on-line. Mission accomplished!

Some links that helped me to better understand and accomplish this task are given below:

https://askubuntu.com/questions/1005653/how-do-i-execute-a-script-after-dhcp-assigns-an-ip-address-on-start-up
https://wiki.voidlinux.eu/Network_Configuration#Starting_wpa_supplicant_through_dhcpcd_hooks
https://man.voidlinux.eu/dhcpcd-run-hooks.8

Saturday, November 17, 2018

Twitter Alerts: A Trick for the Twitter-averse

I'm not a registered Twitter user and have never managed to think of a compelling reason to be one. In fact, the only time I ever really have or want anything to do with Twitter is when some Twitter feed comes up in an internet search. And all I do in those cases is read any relevant text and move on. I suppose I'm not much of a socialite and accordingly have little interest in social media phenomena such as this.

Recently, however, I became interested in joining a service that sends out invitations periodically on Twitter. Not having an account and not being interested in much of anything else Twitter represents or offers, I'm at a distinct disadvantage in this case: what am I supposed to do, start checking it every day for possibly months on end in hopes of stumbling upon the desired invitation? Not for me, obviously.

But I soon began to realize, based on other web-scraping and scheduling jobs I'd set up recently, that I would likely be able to automate the task of checking this Twitter feed for invitations. I had tools like text-mode browsers that seemed to render Twitter pages pretty well, as well as commands like grep for finding target text. And of course cron could play a key role in automating things as well. Accomplishing the task actually turned out to be quite simple.

I had already set up a way to check a Twitter feed using keystrokes, rendering the text in a terminal on my desktop: elinks -dump -no-numbering -no-references https://twitter.com/TargetTwitt | tail -n +21 | head -n -8 | less seemed to do the job just fine.* The problem with that approach, for the task at hand, is that I would need to remember to use the key combination to check for invitations daily.

The next step, then, could be to recruit grep to search for target text--a keyword like "invit"--which, if found in the text essentially scraped from the Twitter feed, would trigger my machine to send me an e-mail. Since I already regularly use mailx to auto-send myself various sorts of e-mails, most of that aspect of this task was already in place as well.

The command I tested and that seemed to bring together well most of these various elements is as follows: body="$(elinks -dump -no-numbering -no-references https://twitter.com/TargetTwitt | grep -A 1 -B 1 the)" && echo "$body" | mailx -s Twit-invite me@my-em.ail.** That line, of course, uses, for testing purposes, a common word (the article "the") as the searched string to prove that the whole thing will work together as expected.

The command first dumps text from the Twitter feed to stdout, then pipes it to grep, which looks for the target text string. If the string is found, it is included--along with a couple of adjacent lines--in the body of an e-mail that mailx sends to me (the scheme assumes that a valid smtp transport mechanism has been set up for mailx--a topic beyond the scope of this brief post). If the term is not found--something I also tested by changing the search term to one I was sure would not be included in the twitter feed--nothing further is done: the scraped text simply gets discarded and no e-mail is sent.*** The test passed with flying colors, so the only remaining thing to implement was a daily cron job.
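A daily cron entry along these lines should finish the job (the hour, the "invit" keyword, and the addresses are placeholders to adapt):

# check the feed each morning and send mail only if the keyword turns up
0 8 * * * body="$(elinks -dump -no-numbering -no-references https://twitter.com/TargetTwitt | grep -A 1 -B 1 invit)" && echo "$body" | mailx -s Twit-invite me@my-em.ail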

Though this configuration seems to work well and looks as though it will serve my purposes just fine, it likely could be improved upon. Should any readers of this blog have suggestions for improvements, feel free to post them in the comments section below.


* lynx, cURL, wget, and other tools could likely replace elinks here, and might even be more effective or efficient. Since I know elinks fairly well and use it for other similar tasks, I did not investigate those.

** Command found and copied in large part from https://unix.stackexchange.com/questions/259538/grep-to-search-error-log-and-email-only-when-results-found.

*** More precisely, I think what happens is that when grep searches for a string and does not find it, it returns exit code 1, which, in the case of this series of commands, means execution never proceeds past the && (which means something like "proceed to the next command only on successful completion of the previous command").

Thursday, October 20, 2016

Discussion topic 1: vim or emacs for personal wiki, etc?

Instead of my more typical how-to, this posting aims to solicit input from readers. I realize I may be inviting some sort of flame war, but rest assured that my intentions are sincere: I really am largely ignorant of the respective virtues and flaws of the two utilities on which I want to solicit input, having barely dabbled in either. My hope is that I might get input here on which will be the more worthwhile one to put further effort into learning.

First, a bit about my aims. Some time ago I set up a personal wiki for myself--a vehicle for keeping track of inspiring ideas and of tasks I am working on now or will need to work on at some point in the future, and a receptacle for various tech tips I have employed, may need again, but have difficulty remembering. I wanted the wiki to be accessible to me not just at home but from the internet as well. Much as I wanted to keep the wiki's set-up and maintenance simple, at the time I deemed that deploying it under a web-serving scenario would be required. To that end, I implemented the MoinMoin wiki on a machine I administered.

That scenario has worked out acceptably well over the last few years. But it is now time to take that machine out of service. So I will be needing to reconstitute my wiki and so am revisiting the matter of how I will set up and administer it.

Having a preference for simple, resource-frugal utilities, I am hoping I might migrate my wiki to some command-line interface. The overhead and complexity of the web server most wikis involve is not really justified for my use case: in fact, I might be engaging in a bit of hyperbole in claiming that I use what I have as a real wiki--it's used more like just an organizer.

Under my best-case envisioned scenario, I could either ssh into my machine to consult and/or modify my wiki, or perhaps even host it at a shell account to which I have access. It's an appealing thought and one I hope I will soon be able to implement.

So far as I can tell, the two candidate command-line tools I might use for this are vimwiki and emacs in org-mode. And I must admit that my experience with both has been very slight. In fact, I've tried to avoid using either vim or emacs, typically gravitating to nano for the sorts of needs either of those utilities might otherwise fulfill. Perhaps emacs will be slightly preferable, since development on the vimwiki plugin seems to have ceased a little over 4 years ago, while emacs org-mode seems to have a quite active and extensive user and development base.

Both utilities, with their arcane interfaces and keystroke options, have left me baffled and even trapped on more than one occasion. Having a few years of command-line interaction under my belt, I did recently manage a bit of experimentation with emacs org-mode--at least enough to convince me that it could be a suitable new vehicle for my wiki.

I had pretty much written off vim as a possible vehicle since, in past attempts to utilize it, I have found it even more obtuse and intractable than emacs. But that situation recently changed somewhat when I realized that one of the best tools for doing some routine maintenance on one of my Arch systems employs vimdiff. Having used that a few times, I can now say that I've recently managed, under the guise of vimdiff, to use vim successfully for some system maintenance tasks.

And just today I learned that emacs has its own diff implementation--ediff--as well. So emacs might also be serviceable in the system-maintenance capacity, should I decide that it will be more worthwhile to try to learn emacs org-mode better.

The bottom line here is that it looks as though I am going to be using one or the other of these utilities routinely, so it is time I started learning it better. And I can, at the same time, use whichever one I end up learning as the new vehicle for my wiki.

So I am looking for guidance and recommendations on which is more likely to suit my needs and disposition--or on whether I might even have overlooked some other command-line utility for creating and maintaining a personal wiki. I should state that I am unlikely ever to do any sort of programming, so whatever the relative advantages of either may be with respect to coding, they will be largely irrelevant for me. Rather, I would be using them for some simple editing functions, and mostly for routine maintenance tasks (comparing updated config files with files already on my system) and for managing my wiki.

Let the discussion begin.

Afterthought: perhaps even creating a markdown file containing my wiki's material, then converting it to html for viewing with elinks/lynx, could work? In other words, a sort of homebrew solution?
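Just as a sketch of that homebrew idea, assuming pandoc (or any other markdown-to-html converter) is installed:

# convert the wiki notes to html and read them in a text-mode browser
pandoc -s wiki.md -o wiki.html && elinks wiki.html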

Saturday, June 18, 2016

Another addendum to the seventh installment: imagemagick as a resource for the budget-constrained researcher continued

Continuing the theme of the last couple of entries, the budget-constrained researcher may own, or wish to acquire, a piece of hardware that can aid him in obtaining needed materials from research libraries. For example, he may need but a single article, or perhaps a chapter from a book, or maybe a bibliography. Obtaining limited segments of larger works such as these may be more difficult through inter-library loan channels, especially in the case of works that may contain more than one item of potential interest: it can happen that the researcher needs to inspect the work in person to decide which part is actually required.

Suppose the researcher is already on the premises of the local academic library and has located target material. Should he not wish to check out a book, he is left with the option of himself scanning the material. Of course these libraries often have scanners that they make available to patrons, so that is one possible option. Yet another option is for the researcher to use his own scanner, and this is where highly portable hardware such as the Magic Wand portable scanner comes in.

I invested in one of these a few years ago and it has proved quite useful. One of the problems with using it, though, is that for the bulk of books and journals (i.e., those of more standard size) it seems to work best to scan pages sideways--horizontally rather than vertically. In other words, it works best to start from the spine and scan toward the page edges. This, obviously, entails that roughly every other page will have been scanned in a different orientation from the page preceding it.

Once all pages are scanned, they can easily be rotated in bulk to the desired orientation--by 90 or 270 degrees, as the case may be--using imagemagick's mogrify command, like so: mogrify -rotate 90 *.JPG (a command like convert -rotate 270 PTDC0001.JPG PTDC0001-270rotate.jpg would perform much the same function while preserving the original file). In my case, it seemed best to first copy all odd, then all even, image files to separate directories prior to rotating them.

At this point, I needed to rename all the files with either odd or even numbers. My bash scripting skills being modest at best, I began scouring the internet for a solution that would aid me in doing this sort of bulk renaming. I found such a script at http://www.linuxquestions.org/questions/programming-9/bash-script-for-renaming-files-all-odd-617011/ and a bit of testing proved it to be a solution that would work for me.

I modified the script into two variants, which I named rename-all_odd.sh and rename-all_even.sh: the odd variant renumbers the files in one directory as 1, 3, 5, and so on, while the even variant renumbers the files in the other directory as 2, 4, 6, and so on.
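The gist of the odd-numbered variant, offered as a rough sketch rather than the script I actually used (the even variant would simply start the counter at 2), is something like:

#!/bin/bash
# rename the scanned images in the current directory to 1.JPG, 3.JPG, 5.JPG, ...
# assumes the originals don't already have bare numeric names; for more than a
# handful of pages, zero-pad the numbers (001, 003, ...) so they sort properly
n=1
for f in *.JPG; do
    mv -- "$f" "${n}.JPG"
    n=$((n + 2))
done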


It was then a simple matter of copying all the renamed files into a separate directory and concatenating them into a pdf, as was covered in a previous installment.

Sunday, March 27, 2016

Miscellaneous Monday quickies: manipulating and excising pages from pdfs

So you've gotten, via inter-library loan, the pdf's you requested to aid you in researching the article you're writing. For purposes of putting them on your e-reading device, the form they're in is probably perfectly suitable. But what if you'd like to do other things with one or more of them, such as printing them out? There is quite a range of pdf manipulation tools that can help you put them in a form more congenial to such aims.

One that I recently discovered, for example, allows you to "n-up" your pdf document, i.e., to put more than one page per sheet of paper. Should you wish to print the document, this can help you lessen paper waste. The utility is called pdfnup, and the relevant command for accomplishing this end is pdfnup --nup 2x2 myfile1.pdf myfile2.pdf, which puts four pages (two columns by two rows) on each sheet; using 2x1 instead would give two pages per sheet.

This utility gives results similar to psnup, a utility I have used in the past (and previously written about in this blog) for making booklets comprised of multiple pages per sheet of paper, though pdfnup likely lacks the advanced collating options of psnup. But psnup involves greater complexity in that it operates on postscript files, which usually need to be converted to or from some other format.

Getting back to the task at hand: should you wish to print out any of your pdf's with the aim of minimizing paper waste, you may well want to eliminate extraneous pages from the document. In my experience, for example, inter-library loan pdf documents routinely include one or more copyright notice pages. Before printing such documents, I almost always try to exclude those pages--simple enough if you send them directly to the printer from a pdf reader. But what if you're taking the additional step of n-upping multiple pages per sheet?

As it turns out, pdfnup is actually part of a larger pdf-manipulation suite called pdfjam. And that suite enables you not only to n-up your pdf document, but also to eliminate extraneous pages as part of the same process. To give an example, if you have a fifteen-page document wherein the first and last pages are copyright notices that you wish to exclude from your 2-upped version, you'd use the command

pdfjam MyDoc.pdf '2-14' --nup 2x1 --landscape --outfile MyDoc_2-up.pdf.

The meaning of the various command switches will, I think, be obvious.

This is just a thin slice of the capabilities offered by just one suite of pdf manipulation tools available under GNU/Linux. I have used other tools, such as pdfedit, pdftk, and flpsed, to good effect as well.

LATER EDIT: I just discovered a new and interesting functionality of pdfjam; it can convert pdf's from a4 to letter format (and vice versa). The relevant command is pdfjam --paper letter --outfile out.pdf in.pdf

Monday, March 21, 2016

Addendum to the seventh installment: imagemagick as a resource for the budget-constrained researcher

In this installment, I'll cover concatenating multiple image files into a multi-page pdf--a very handy trick the imagemagick utility convert makes possible. But first, a bit of grousing on the subject of academia, budget-constrained researching, and academic publishing.

Pricing for on-line academic resources tends, not surprisingly, to be linked to budgetary allowances of large academic institutions: what institutions can afford to pay for electronic access to some journal or other, for example, will influence the fee that will be charged to anyone wishing to gain such access. If one is affiliated with such an institution--whether in an ongoing way such as by being a student, staff, or faculty member, or in a more ephemeral way, such as by physically paying a visit to one's local academic library--one typically need pay nothing at all for such access: the institution pays some annual fee that enables these users to utilize the electronic resource.

But for someone who has no long-term affiliation with such an institution and who may find it difficult to be physically present in its library, some sort of payment may be required. To give an example, while doing some research recently for an article I'm writing, I found several related articles I need to read. I should mention that, since I am still in the early stages of writing my article, there are bound to be several additional articles to which I will need ongoing access. I will address two journal articles in particular, both of which were published between twenty and thirty years ago.

I discovered that both articles were available through an on-line digital library. I was offered the option of downloading the articles at between twenty and forty dollars apiece. At that rate--since one of my articles' main topics has received fairly scant attention in modern times and I might need to review only another twenty or so articles--it could cost me well over six hundred dollars just to research and provide references for this topic. The time I spend actually writing and revising the article--the less tangible cost, which will exceed by a substantial amount the time spent researching--is an additional "expense" for producing the material.

There are, of course, ways to reduce the more tangible costs. Inter-library loan often proves a valuable resource in this regard, since even non-academic libraries that lack subscriptions to academic publishers or digital libraries can nonetheless request journals or books containing relevant articles or, better yet, obtain such articles for their patrons in electronic format--typically pdf files, often created by scanning paper-and-ink journals or books.

Some digital libraries even offer free--though quite limited--access to their materials. In researching my project I found three articles available from such a site. On registration at their site, they offered free on-line viewing, as a low-resolution scan, of just a couple of articles, each made available for viewing for only a few days. Once the limited number of articles was reached, another article could be viewed only after those few days had elapsed. For the budget-constrained researcher, this is a promising development, but not an entirely practicable one.

Being able to view an article on a computer screen is a lot better than having no electronic access to it at all. But it is of no help in circumstances where one is without an internet connection. Having the ability to save the article to an e-reader would be preferable and far more flexible than reading it, one page at a time, in a browser window. But the service seems calculated to preclude that option without payment of the twenty- to forty-dollar per-article fee. It turns out, however, that ways around such restrictions can sometimes be discovered. And that, finally, is where the tools mentioned in the first paragraph of this article enter in. Thus, without further ado, on to the technical details.

Some digital libraries actually display, on each page of the article as it appears while you read it in a web browser window, a low-resolution image of the scanned page. As I discovered, one can right-click on that image and choose to save it to the local drive. The file name may be, instead of something humanly comprehensible, just a long series of characters and/or numbers, and it may lack any file extension. But I discovered that the page images could, in my case, be saved as png files. Those png files, appropriately named so as to retain their proper order, could then be concatenated into a multi-page pdf using imagemagick tools. That multi-page pdf can then be transferred to the reading device of choice. I found that, although the image quality is quite poor, it is nonetheless sufficient to allow deciphering of even the smaller fonts one typically finds in footnotes. Although it involves a bit of additional time and labor, this tactic can yet further defray the budget-constrained researcher's more tangible costs.

For reasons that will become obvious, the image files should be saved to a directory empty of other png files. How the images should be named is essentially a numerical question and depends on the total number of pages in the article. If the total number of pages is in the single digits, it is a simple matter of naming them, for example, 1.png, 2.png, 3.png, and so forth. If the number of pages reaches double digits--from ten through ninety-nine--leading zeros must be introduced so that all file names begin with pairs of numbers: for example, 00.png, 01.png, 02.png, and so forth. The same formula would hold--God forbid, since the task would become quite tedious--for articles whose total pages reach triple digits.

Provided imagemagick is already installed, once the saving is done, the very simple command convert *.png full-article.pdf can be used to produce the pdf of concatenated image files. Since the files have numerical prefixes, the program will automatically concatenate them in the proper order.

In the next installment I will be covering manipulation of pdf's provided through inter-library loan services--focusing on removal of extraneous pages (e.g., copyright-notice pages) routinely included by those services.

Thursday, February 4, 2016

Miscellaneous Thursday quickies: what's your bi-directional syncing utility?

So I've been pursuing a research project for the last year or so and have been locating and saving material related to it, as well as doing some of my own writing in the area. I keep that material in a particular folder. That's all fine and good. The problem is that I want the ability to work on the project while I'm at any of 3 different computers--computers that are often located in 3 different locales, some of which are even remote from my LAN. So, how to host the same folder on all three machines, and keep it current with the most recent changes made on any of the 3 computers?

I intend for this to be a manual process, i.e., one that will involve me manually running some program or script on each of the three machines in order to update the folder. I should also mention that I have access to a shell account where I can run a number of utilities that can facilitate this--so a 4th computer, technically speaking, is involved as well. I envision the shell account functioning as a sort of central hub for keeping said folders in sync: a sort of master copy of the folder can be stored there, and each of the three machines can synchronize with that folder as the need arises.

I'm still trying to puzzle out how to pull all this together and am looking at the sorts of software/utilities that can accomplish the task. I've only tested out one option thus far--bsync. I opted for that in an initial foray for its simplicity: it's just a python script that enhances the functionality of rsync (a great sync utility, but one that does not do bi-directional synchronization). So all I needed to do was download the script and make it executable.

Using the utility, I was able to put the most current copy of the folder at my shell account by just running bsync MyFolder me@my.shellacct.com:MyFolder (the MyFolder directory must already exist at the remote address). So I've at least made a beginning.

That said, I'm still in the early stages of investigating approaches to do the sort of bi-directional synchronization I'm after. Tests with bsync have gone well so far but, if I'm understanding correctly, this utility does not deal well with sub-folders--which could be an issue in my use scenario; it seems bsync will work best on a folder or directory that contains only files, while my directory has a few sub-directories under it.

Other possible options I've found are csync (which uses smb or sftp), osync, bitpocket, and FreeFileSync. The first 3 of these are most attractive to me since they are command-line utilities. FreeFileSync is a graphical utility, though it does appear that it can be run from the command line as well. I should also mention unison, which I've looked at but not pursued--the reason being that it apparently requires that the same version be installed on all concerned machines, which is something that will be unrealistic in my case (Arch runs on 2 machines, an older Ubuntu on another, and BSD on the fourth).

So, what is your bi-directional synchronization software preference? Any further tips or pointers to add on accomplishing this task?

Wednesday, January 13, 2016

Addendum to 11th installment: Lynx; scraping credentialed web pages

Sort of a dramatized headline for what I've accomplished using the command-line Lynx browser, but not too far from the mark. I've described in previous entries how I've used lynx to accomplish similar goals of extracting target information from web pages, so this entry is a continuation along those same lines.

I recently signed up for a prepaid cellular plan touted as being free, though it is limited to a certain (unreasonably low, for most) number of minutes per month. The plan has thus far worked well for me. The only real issue I have come across is that I had not discovered any easy way to check how many minutes I've used and how many are left. The company providing the service is, of course, not very forthcoming with that sort of information: they have a vested interest in getting you to use up your free minutes, hoping you'll thereby realize you should buy a paid plan from them, one that includes more minutes. The only way I'd found for checking current usage status was to log in to their web site and click around till I reached a page showing that data.

Of course I am generally aware of the phenomenon of web-page scraping and have also heard of python and/or perl scripts that can perform more or less automated interactions with web pages (youtube-dl being one example). So I initially thought my task would require something along those lines--quite the tall order for someone such as myself, knowing next to nothing about programming in either python or perl. But then I ran across promising information that led me to believe I might well be able to accomplish this task using the tried and true lynx browser, and some experimentation proved that this would, indeed, allow me to realize my goal.

The information I discovered came from this page. There I found a description of how it is possible to record, to a log file, all keystrokes entered into a particular lynx browsing session--something reminiscent of the way I used to create macros under Microsoft Word when I was using that software years ago. The generated log file can then, in turn, be fed to a subsequent lynx session, effectively automating certain browsing tasks, such as logging into a site, navigating to a page, then printing it (to a file, in my case). Add a few other utilities like cron, sed, and mail, and I have a good recipe for getting the cellular information I need into an e-mail delivered to my inbox on a regular basis.

The initial step was to create the log file. An example of the command issued is as follows:

lynx -cmd_log=/tmp/mysite.txt http://www.mysite.com.

That, of course, opens the specified URL in lynx. The next step is to enter such keystrokes as are necessary to get to the target page. In my case, I needed to press the down arrow key a few times to reach the login and password entry blanks. I then typed in the credentials, hit the down arrow again, then the "enter" key to submit the credentials. On the next page I needed to hit the "end" key, which took me all the way to the bottom of that page, then the up arrow key a couple of times to get to the link leading to the target page. Once I got to the target page, I pressed the "p" key (for print), then the "enter" key (for print to file), at which point I was prompted for a file name. Once I'd entered the desired file name and pressed the "enter" key again, I hit the "q" key to exit lynx. In this way, I produced the log file I could then use for a future automated session at that same site. Subsequent testing using the command

lynx -cmd_script=mysite.txt http://www.mysite.com

confirmed that I had, in fact, a working log file that could be used for retrieving the desired content from the target page.

The additional steps for my scenario were to turn this into a cron job (no systemd silliness here!), use sed to strip out extraneous content from the beginning and end of the page I'd printed/retrieved, and to get the resulting material into the body of an e-mail that I would have sent to myself at given intervals. The sed/mail part of this goes something like

sed -n 24,32p filename | mail -s prepaid-status me@mymail.com*

* I can't go into particulars of the mail program here, but suffice to say at least that you need a properly edited configuration file for your mail sending utility (I use msmtp) for this to work.
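Pulling the pieces together, the cron-driven wrapper ends up looking roughly like the following sketch (the paths, the printed file's name, and the line range are the same placeholders used above):

#!/bin/bash
# replay the recorded lynx session, which logs in and prints the target page to a file,
# then excerpt the relevant lines and mail them to myself
cd /tmp || exit 1
lynx -cmd_script=/home/user/mysite.txt http://www.mysite.com
sed -n 24,32p filename | mail -s prepaid-status me@mymail.com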

Friday, April 17, 2015

Addendum to 12th installment: watermarks with copyright notice using LaTeX

So, it's been awhile. And there's been plenty I could have blogged about on the tech front. Like when I copied my Arch install to another hard drive, making it bootable. But I didn't. And now I've forgotten important details of how I did it. Oh well.

I can blog about this next item, though, which is still fresh in memory. I've got to write up some articles and am sending them out for proofreading. So I wanted to mark them as drafts, something I already know how to do and have blogged about previously.

I decided to modify things a bit for the current task, though. This time I'm using only one utility to do the watermarking--LaTeX--and I'm tweaking things a bit further.

The challenge this time is making a watermark with a line break, one whose two lines use differing font sizes. I want a really large font for the first line, which marks the document as a draft--as in my previous installment--but a really small font for the second line this time. That second line is where a copyright notice will be located.

Without further ado, here's the MWE (TeX-speak for minimal working example) I've come up with for accomplishing this:
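
(A sketch of the idea, at any rate--this version assumes the background package, uses \the\year to pull in the date, and the font sizes and copyright wording are placeholders to adjust:)

\documentclass{article}
\usepackage{background}
\usepackage{lipsum}% dummy text, just for the example
\backgroundsetup{
  angle=45,
  scale=1,
  color=black!15,
  contents={\begin{tabular}{c}
    {\fontsize{80}{96}\selectfont DRAFT}\\[1em]
    {\footnotesize Copyright \textcopyright\ \the\year\ by the document's author. All rights reserved.}
  \end{tabular}}
}
\begin{document}
\lipsum[1-8]
\end{document}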



This will place, diagonally across each page of the document, a notice in very large, light gray letters reading DRAFT. Underneath that, there will be text in a much smaller font noting that the material is under the copyright claim of the document's author. It's also got a nice little feature that auto-inserts the year, so it's something that can be reused in various documents over a period of time, relieving the composer of having to fiddle with minor details like dates.

So, that's about it for this installment!

LATE ADDITION: Just today I ran across a new means of watermarking that can be done on already-existing .pdf files. It involves using the program pdftk and is quite simple. You simply create a .pdf that is empty except for your desired watermark, then use the program to add it as a background to the already-existing .pdf. Something like the following:

pdftk in.pdf background back.pdf output out.pdf

(I ran across that here). I used LibreOffice Draw to create such a background watermark and easily added that to an existing .pdf. It worked great, though it should be noted that the watermark won't cover graphics; I assume there must be a way to make it do so, however.
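
A possible lead on that last point (untested by me): pdftk also has a stamp operation, which lays the watermark page on top of the existing content rather than underneath it, so it should cover graphics too:

pdftk in.pdf stamp stamp.pdf output out.pdf

For that to look right, the stamp .pdf would presumably need to be mostly empty apart from the watermark itself.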

Tuesday, November 18, 2014

Miscellaneous Tuesday quickies: creating and using ssh tunnels/proxies

This entry will concern tunneling so as to get around port-blocking restrictions. It's something I learned about some years ago, but had a difficult time wrapping my head around. While I can't say I understand it a whole lot better now, I can at least say that I've been able to get it working.

In my case it was needed because I've been working in a library whose wifi network is set up to block a variety of non-standard ports. That's a problem for me since I run my (command-line) e-mail client on a home computer, and I connect to that computer via ssh--which, in turn, runs on a non-standard port. So, when I work in this library, I am unable to connect to my home machine to do e-mailing. There are also occasional problems with sites being blocked on this network (and, no, I'm not trying to view porn).

For this to work, one must have access to some machine outside the wifi network that runs ssh on port 443. I happen to have a shell account with a service that has just such a set-up.

In my case, then, I can set up the tunnel as follows:

ssh -L localhost:1234:my.dyndns.url:12345 -p 443 my-user@my.shellacct.net

I am asked for my password, then logged into my shell account, and the tunnel is thus opened.

Then, to connect to ssh as running on my home machine, I simply point ssh at the local end of the tunnel:

ssh -p 1234 localhost

To get around the occasional page blocking I've run into, I first downloaded a browser I will dedicate to this task--namely, qupzilla. Then, I need to set up a SOCKS proxy, which is done via ssh, like so:

ssh -D 8080 my-user@my.shellacct.net -p 443

After that, it's a matter of configuring qupzilla (or your preferred browser) to route web traffic over the SOCKS proxy you've just created. That's done by going to Edit > Preferences > Proxy Configuration, ticking the Manual configuration radio button, selecting SOCKS5 from the drop-down menu, then entering localhost into the first field next to that and 8080 in the Port field. Click Apply and OK, and qupzilla will be set to route its traffic over your proxy, thus avoiding the blocks instituted by the wifi network.
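
For what it's worth, both forwards can also be set down once in ~/.ssh/config rather than typed out each time. A sketch using the same placeholder names as above:

Host shellacct
    HostName my.shellacct.net
    User my-user
    Port 443
    LocalForward localhost:1234 my.dyndns.url:12345
    DynamicForward 8080

With that in place, a plain ssh shellacct should open both the tunnel and the SOCKS proxy in one go.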

With this basic information, it should be clear how other sorts of ssh tunnels and/or proxies could be set up.

Friday, July 18, 2014

Miscellaneous Friday quickies: The Plop boot manager; what is it and why would you need it?

Prefatory remark: I am uncertain of the licensing status of the project discussed in the posting below, but I suspect it may not--unlike most of the other utilities I discuss in this blog--be open-source.

Unless you, like me, are stubbornly trying to repurpose aging hardware, this tool might not be of much interest to you. But it allowed me to get an older machine booting from USB when BIOS limitations were interfering, so I decided to document here the fairly simple procedures I followed to accomplish this in case they might be of benefit to others.

How old was said machine? Well, old enough not only to have problems booting from USB flash drives (BIOS USB boot options were limited to USB floppies or ZIP disks), but to have a floppy drive in it as well! A single-core machine, as you might guess, although the motherboard did at least have SATA headers--which made it a good candidate for the project I had in mind.

I learned, through some on-line research, about the Plop boot manager--touted for enabling systems to boot from USB even where BIOS settings limited it--and that floppy disk images of the boot manager are included in the download. So I dug up and dusted off a floppy, downloaded the image, and wrote it to the floppy the GNU/Linux way--using dd:

dd if=/path/to/plpbt.img of=/dev/fd0

And that disk did, in fact, allow me to boot sanely from a USB thumb drive I'd plugged into the system. On boot, a starfield simulation reminiscent of the old Star Trek intro (ok, I'm dating myself here) appeared on the screen, in the foreground of which was a boot menu from which I could select the medium I wished to boot. And, sure enough, USB was one of the items.

That wasn't quite all I needed for my own application, however; you see, my hope was to have this machine run headless. So, how to make the boot manager default to booting from the USB drive after a certain number of seconds?

For that, it turns out, I needed another program included in the download called plpcfgbt. That program is what allows one to modify the binary file plpbt.bin. And plpbt.bin needs to be accessed somehow as well in order to modify it--accomplished in my case by mounting plpbt.img as a loop device.

So I ran mount -o loop /path/to/plpbt.img /mnt/loop. Once the image had been thus mounted, I cd'd to where I'd downloaded plpcfgbt and ran plpcfgbt cnt=on cntval=4 dbt=usb /mnt/loop/plpbt.bin: that gave the boot menu a four-second countdown, after which the computer would automatically boot from USB. I then unmounted the image and rewrote it, using dd again, to the floppy. So, mission accomplished.
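
For reference, the whole sequence boils down to something like the following sketch (the paths, of course, being placeholders):

mount -o loop /path/to/plpbt.img /mnt/loop
./plpcfgbt cnt=on cntval=4 dbt=usb /mnt/loop/plpbt.bin
umount /mnt/loop
dd if=/path/to/plpbt.img of=/dev/fd0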

Except some other aspects of that machine's operation proved not very suitable to the application I was hoping to deploy it for, so I'm not sure it will finally be put into service. But that's another story . . .

Saturday, April 12, 2014

Miscellaneous Friday quickies: crop pdf margins with psnup

As sometimes happens, I recently needed to print off a portion of a public-domain work that's been scanned to portable document format and made available for download via Google Books. As it happened, the original book had pages with fairly wide margins; when that sort of scanned book gets printed on letter-sized paper, you end up with a smallish text box in the middle of a page that has something like two-inch margins. That makes the text harder to read, because the font ends up being relatively small, and it also wastes a lot of paper.

What to do, then, to make the text larger and make it occupy more of the page? I used three utilities for this job: xpdf to excise the target pages into a postscript file, psnup to enlarge the text and crop margins, and ps2pdf to convert the modified pages back to a pdf file. psnup is part of the psutils package, while ps2pdf relies on Ghostscript. A description of the steps I took follows.

With the pdf-viewer xpdf, the print dialog offers two options: print the document to a physical printer, or print to a postscript file. Both options allow for page-range stipulation. That's how I created a postscript file from the target pages.

Next, psnup--a really handy tool I've used previously for creating booklets (more about that in a future entry), but which I'd never considered might perform a job like this--was used to reduce margins, which had the added effect of increasing the text size. The command I used, which I appropriated from here, looks like this:

psnup -1 -b-200 infile.ps file_zoomed.ps

The crucial switch here seems to be -b (which is short for borders) followed by a negative numeral. Of course the numeral 200 as seen in this example will need to be modified to suit your circumstances.

The final step of converting file_zoomed.ps, using ps2pdf, to a pdf was simplest of all--in fact so simple that I won't even describe it here. I hope this brief description may be an aid to others wishing to execute a task like this.
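
Still, for completeness, here is the whole pipeline in one place (file names and the border value are placeholders; infile.ps is what xpdf's print-to-file step produced):

psnup -1 -b-200 infile.ps file_zoomed.ps
ps2pdf file_zoomed.ps file_zoomed.pdf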

Finally, I ran across some other related information of interest while researching how to crop pdf margins. Here, for example, is a page that describes a few other, more technically involved ways to reduce margins: http://ma.juii.net/blog/scale-page-content-of-pdf-files. This posting on the Ubuntu forums has a really interesting script someone put together for making pdf booklets: http://ubuntuforums.org/showthread.php?t=1148764. And this one offers some clever tips on using another psutils utility--psbook--for making booklets: http://www.peppertop.com/blog/?p=35. Then, there's what might be the most straightforward utility of all for cropping pdf margins--pdfcrop. Since I could not get it to work for me in this instance, I looked for and found the alternative described above.

MUCH LATER EDIT: Having done some further experimentation with pdfcrop, I've discovered an important element: using negative values to specify the width of margins. Doing this, I was able, using this tool, to trim margins to my satisfaction. A command such as pdfcrop --margins '-5 -25 -5 -40' MyDoc.pdf MyDoc-cropped.pdf ended up doing the trick for me quite nicely.

Monday, March 17, 2014

Miscellaneous Monday quickies: volume adjustment via keyboard

Being an enthusiast of minimalist GUI systems, I'd heard some time ago of the i3 window manager and liked what I'd read. Recently, I switched over a couple of my computers to it and have been quite happy with it.

I ran across a news item the other day that was touting the virtues of i3 and which therefore caught my interest. Especially intriguing was the author's description of how, using that WM, certain keyboard keys or key combinations could be mapped so as to govern the computer's sound output--intriguing, that is, even apart from the fact that it was a description of a system configured to use pulse audio for sound output (my preference for ALSA over pulse is material for another entry). Still, I felt it should not be too hard to modify those directions to suit my systems.

As in the article referenced, it was a simple matter of modifying ~/.i3/config, adding some lines. In my case, the lines were as follows:
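
Something along these lines, at any rate--a sketch in which the XF86 multimedia keysyms, the Master control, and the 5% step are particulars that will vary (raw key codes from xev can be used with bindcode instead of bindsym):

bindsym XF86AudioRaiseVolume exec amixer -q set Master 5%+ unmute
bindsym XF86AudioLowerVolume exec amixer -q set Master 5%- unmute
bindsym XF86AudioMute exec amixer -q set Master toggle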



The keyboard on that particular machine is what I believe is called a "multimedia keyboard," meaning that it has a few keys dedicated to multimedia functions such as volume control rather than to alphanumeric characters. Finding which key codes to place in that file was a simple matter of using the xev utility. The ALSA--as opposed to pulse audio--commands for raising, lowering, and muting volume were readily found in an internet search.

After a few trial runs and further tweaks, a quick restart of i3 revealed that things were working as expected. Flush with success from that project, I decided I might get the same thing working on another computer in my apartment--though that computer, since it needs to be usable for my wife, runs a different WM: JWM, with a home-brewed Gnome 2 mock-up interface (I plan to do a write-up someday describing the Gnome 2 mock-up I created).

The basic idea of getting keyboard keys to control volume is the same, though it involves editing a different configuration file--named ~/.jwmrc--that uses alternate syntax. Since the keyboard attached to this machine is not a multimedia keyboard, I ended up repurposing some seldom-used keys, in combination with the Alt key, for volume-control functions. The entries in that file are as follows:
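
Something like the following sketch, at any rate--the particular keys (here F10 through F12, with mask "A" for Alt) standing in for whichever seldom-used keys get repurposed:

<Key mask="A" key="F10">exec:amixer -q set Master 5%- unmute</Key>
<Key mask="A" key="F11">exec:amixer -q set Master 5%+ unmute</Key>
<Key mask="A" key="F12">exec:amixer -q set Master toggle</Key>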



That pretty much sums up this quickie entry.