I'm not a registered Twitter user and have never managed to think of a compelling reason to be one. In fact, the only time I ever really have or want anything to do with Twitter is when some Twitter feed comes up in an internet search. And all I do in those cases is read any relevant text and move on. I suppose I'm not much of a socialite and accordingly have little interest in social media phenomena such as this.
Recently, however, I became interested in joining a service that sends out invitations periodically on Twitter. Not having an account and not being interested in much of anything else Twitter represents or offers, I'm at a distinct disadvantage in this case: what am I supposed to do, start checking it every day for possibly months on end in hopes of stumbling upon the desired invitation? Not for me, obviously.
But I soon began to realize, based on other web-scraping and scheduling jobs I'd set up recently, that I would likely be able to automate the task of checking this Twitter feed for invitations. I had tools like text-mode browsers that seemed to render Twitter pages reasonably well, commands like grep for finding target text, and of course cron to handle the scheduling. Accomplishing the task actually turned out to be quite simple.
I had already set up a way to check a Twitter feed from my desktop with a key combination that renders the text in a terminal: elinks -dump -no-numbering -no-references https://twitter.com/TargetTwitt | tail -n +21 | head -n -8 | less seemed to do the job just fine.* The problem with that approach for the task at hand is that I would need to remember to use the key combination to check for invitations daily.
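For completeness, the way such a key combination can be wired up is simply to have the keystroke launch a terminal running that pipeline. A minimal sketch, assuming xterm as the terminal emulator (any other would do), would be to bind your desktop environment's or window manager's shortcut to a line like:

    xterm -e sh -c 'elinks -dump -no-numbering -no-references https://twitter.com/TargetTwitt | tail -n +21 | head -n -8 | less'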
The next step, then, could be to recruit grep to search for target text--a keyword like "invit"--which, if found in the text essentially scraped from the Twitter feed, would trigger my machine to send me an e-mail. Since I already regularly use mailx to auto-send myself various sorts of e-mails, most of that aspect of this task was already in place as well.
The command I tested, and that seemed to bring most of these elements together, is as follows: body="$(elinks -dump -no-numbering -no-references https://twitter.com/TargetTwitt | grep -A 1 -B 1 the)" && echo "$body" | mailx -s Twit-invite me@my-em.ail.** For testing purposes, that line of course uses a common word (the article "the") as the search string, to prove that the whole thing works together as expected.
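Swapping in the actual keyword mentioned above, the working version would presumably look something like the following (the case-insensitive -i flag is my own addition here, in case the feed capitalizes "Invitation"):

    body="$(elinks -dump -no-numbering -no-references https://twitter.com/TargetTwitt | grep -i -A 1 -B 1 invit)" && echo "$body" | mailx -s Twit-invite me@my-em.ail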
The command first dumps text from the Twitter feed to stdout, then pipes it to grep, which looks for the target text string. If the string is found, it is included--along with a couple of adjacent lines--in the body of an e-mail that mailx sends to me (the scheme assumes that a valid SMTP transport mechanism has been set up for mailx--a topic beyond the scope of this brief post). If the term is not found--something I also tested by changing the search term to one I was sure would not appear in the Twitter feed--nothing further happens: the scraped text simply gets discarded and no e-mail is sent.*** The test passed with flying colors, so the only thing remaining was to set up a daily cron job.
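A minimal sketch of that cron setup, assuming the one-liner above (with the real keyword) is saved as an executable #!/bin/sh script at $HOME/bin/check-invite.sh--a path and run time I'm choosing purely for illustration--would be a crontab entry (added via crontab -e) like:

    # run the invitation check every morning at 8:00
    0 8 * * * $HOME/bin/check-invite.sh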
Though this configuration seems to work well and looks as though it will serve my purposes just fine, it could likely be improved upon. Should any readers of this blog have suggestions for improvements, feel free to post them in the comments section below.
* lynx, cURL, wget, and other tools could likely replace elinks, and might even be more effective or efficient. Since I know elinks fairly well and use it for other, similar tasks, I did not investigate any of those.
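For instance, the rough lynx equivalent of the elinks dump step would presumably be something like the following (untested on my end, since I stuck with elinks):

    lynx -dump -nolist https://twitter.com/TargetTwitt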
** Command found and copied in large part from https://unix.stackexchange.com/questions/259538/grep-to-search-error-log-and-email-only-when-results-found.
*** More precisely, I think what happens is that when a string searched for with grep is not found, grep returns exit code 1, which, in this series of commands, means execution does not proceed past && (which means something like "proceed to the next command only on successful completion of the previous command").
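A quick way to see that behavior in a shell (the sample strings here are just illustrations):

    $ echo "nothing relevant today" | grep invit; echo "grep exit code: $?"
    grep exit code: 1
    $ echo "nothing relevant today" | grep invit && echo "mailx would run here"
    $ echo "you are invited" | grep invit && echo "mailx would run here"
    you are invited
    mailx would run here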
The Field notes of an Audacious Amateur series is offered in the spirit of the open source movement. While the concept of open source is typically associated with computer programmers, there is a growing body of those who don't know jack about programming, but who nevertheless use the creations of open source programmers. . . .
This series is written by a representative of the latter group, which consists mostly of what might be called "productivity users" (perhaps "tinkerly productivity users"?). Though my lack of training precludes me from writing code or improving anyone else's, I can nonetheless try to figure out creative ways of utilizing open source programs. At the same time, that same lack of expertise means that, even where I manage to deploy open source programs creatively, my modest technical acumen keeps me from using them in the most effective ways. The open-source character of this series, then, consists in my presenting to the community of open source users and programmers my own crude and halting attempts at accomplishing computing tasks, in the hope that those more knowledgeable than I am can offer advice, alternatives, and corrections. The desired end result is the discovery, through a communal process, of optimal and/or alternate ways of accomplishing the sorts of tasks that I and other open source productivity users need to perform.