Announcing my utility scripts repository

Like many Linux and Unix users, I’ve been writing and collecting little utility scripts to automate repetitive tasks and otherwise simplify my digital life. In the last couple of months, I’ve put my collection under version control and worked to make the scripts I’ve written more generally applicable and useful. So today I wanted to announce the existence of this repository, and explain some of the scripts that I think others might find most useful.

(I call these “the Lovelace utilities” because that’s my name, and because every non-“vanity” name I could think of was already taken by one of the umpteen existing “utility scripts” repositories.)

The scripts in the third_party directory are programs written by others that I haven’t gotten around to packaging in my overlay yet. The scripts I wrote myself (though some may be minimal adaptations, made long ago, of things I found somewhere) are in the general directory.

Most of my scripts are implemented as shell functions, with a heuristic to call the function automatically when the script is executed rather than sourced. As the last few steps before this announcement, I also added code to read configuration from files stored in a standard location (XDG-compliant, or the equivalent on Macs), in addition to the calling environment and the script itself. This config-sourcing code is in
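To illustrate those two mechanisms (this is a sketch, not the repository’s actual code; the function name and config-file name are made up):

```shell
#!/bin/sh
# Hypothetical script "frobnicate" illustrating the pattern; the
# function name and config-file name are invented for this sketch.
frobnicate() {
    printf 'frobnicating %s\n' "$1"
}

# Read configuration from an XDG-compliant location, if a config
# file exists there, in addition to the calling environment.
config_file="${XDG_CONFIG_HOME:-$HOME/.config}/frobnicate.conf"
if [ -r "$config_file" ]; then
    . "$config_file"
fi

# Heuristic: run the function only when the script is executed,
# not when it is sourced, by checking whether $0 looks like this
# script's name. (In bash, comparing $0 to ${BASH_SOURCE[0]} is
# a more robust version of the same idea.)
case "$0" in
    *frobnicate*) frobnicate "$@" ;;
esac
```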

Many of the scripts fall into related groups. I’ll cover each group in turn, then the “miscellaneous” scripts.

Compression scripts
While I now have enough disk space to spare that I don’t worry about it much, there was a while when I didn’t have quite enough disk space for both all my personal files and for building packages (since Gentoo builds everything from source). So I went through my collected files, compressing everything I could as compactly as I could, and developed these scripts to automate the process.

  • tries to compress a file using gzip, bzip2, rzip, xz, and lrzip, recording the compressed size each time, and then uses whichever one produced the smallest compressed file to finally compress it.
  • travels through a directory tree, using to compress every not-already-compressed file.
  • and are the same, except they tell every compression program to be verbose about its progress.
  • is like, except that it only uses gzip and bzip2. rzip and lrzip can be very slow on even medium-large files.
  • and decompress compressed files (.gz, .bz2, .rz, .lrz, or .xz), then use to compress them using the most space-efficient method. operates on individual specified files; operates on a directory tree recursively.
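The selection idea is simple enough to sketch in a few lines of shell. This is an illustration, not the actual script: it knows only gzip, bzip2, and xz (the tools to try are passed as arguments, so the slower compressors can be skipped), whereas the real one also tries rzip and lrzip:

```shell
#!/bin/sh
# Sketch of the "keep whichever compressor wins" idea: compress the
# file with each requested tool, record the sizes, keep only the
# smallest result, and remove the original.
best_compress() {
    file=$1
    shift                        # remaining arguments: compressors
    best_size='' best_ext=''
    for tool in "$@"; do
        case $tool in
            gzip)  ext=gz  ;;
            bzip2) ext=bz2 ;;
            xz)    ext=xz  ;;
            *)     continue ;;   # unknown tool: skip it
        esac
        "$tool" -9 -c "$file" > "$file.$ext.tmp" || continue
        size=$(wc -c < "$file.$ext.tmp" | tr -d ' ')
        if [ -z "$best_size" ] || [ "$size" -lt "$best_size" ]; then
            best_size=$size
            best_ext=$ext
        fi
    done
    [ -n "$best_ext" ] || return 1
    # Keep the winner; discard the other candidates and the original.
    for ext in gz bz2 xz; do
        if [ "$ext" = "$best_ext" ]; then
            mv "$file.$ext.tmp" "$file.$ext"
        else
            rm -f "$file.$ext.tmp"
        fi
    done
    rm "$file"
}
```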

Music collection scripts
These next scripts are related to the maintenance of my music collection. I keep music files organized in a directory hierarchy, and “favorites” hard-linked into a parallel directory hierarchy (actually, one for “favorites,” one for Christmas music, and one for Easter music).
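The layout might look like this (the paths and file are entirely made up for illustration); because the favorites entry is a hard link, each favorite is stored only once:

```shell
#!/bin/sh
# Illustrative sketch of the parallel-tree layout, in a scratch
# directory; the paths and file name are invented for this example.
demo=$(mktemp -d)
mkdir -p "$demo/music/main/Artist" "$demo/music/favorites/Artist"
printf 'fake audio data\n' > "$demo/music/main/Artist/song.ogg"
# Hard-link the song into the parallel "favorites" tree: both names
# now refer to the same inode, so the audio is stored only once.
ln "$demo/music/main/Artist/song.ogg" \
   "$demo/music/favorites/Artist/song.ogg"
```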

  • tests whether any music file in my “favorites” collection is missing from my MP3 player, and whether any file on my MP3 player (put there because it was in my “favorites” collection) is no longer in that collection.
  • adds a file to my MP3 player, converting it to MP3 in the process (leaving the original alone) if it wasn’t in that format already.
  • was originally written to replace my existing “favorites” directory, which I’d created by hand and hadn’t updated as I added more files to my collection, with a fresh look by taking a new pass through my full collection. I now use it to update my “favorites” after I add new files to my full collection.
  • goes through the “favorites” directories to see if any file differs from the same file in the main collection.
  • goes through the “favorites” directory and, if any file differs from the same file in the main collection, replaces it with a hard-link to the file in the main collection.
  • calculates and prints the total duration of all music files in the “favorites” collection.
  • helps in the maintenance of parallel directory trees. Given source and destination filenames, each possibly prefixed with the common parent directory (in my case music/) of the main and favorites trees, it moves the file from the source location to the destination location in all the directory trees. It also handles the case where the destination is a directory rather than a filename, though not yet the case of several files and one destination directory.
  • is a collection of shell functions for adding tags to, and editing the tags for, the whole music collection one directory at a time. The setup function pushes every directory in the music collection onto the stack of directories, then pops directories off until the current directory contains a hidden file named .bookmark. The advance function removes that hidden file, pops the current directory off the stack, and creates a new .bookmark file in the new current directory. The edit_tags function uses the vorbiscomment command to export the tags from every Ogg format file in the current directory to text files, then opens the newly-created files in Vim; the apply_tags function applies the changes made to those files by writing the tags back into the audio files.
  • uses a media player (mplayer by default) to play all files in a particular directory tree (the “favorites” tree unless it’s Christmastide or --xmas or --easter is specified) in a random order. Unless --noremove is specified, it asks whether to keep each file after playing it.
  • is similar, but only plays a specified number of files (five by default).
  • plays a file or files using a media player (again mplayer by default), then asks (after each one if multiple files were specified) whether to remove it.
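The .bookmark mechanism behind the tagging functions can be sketched roughly as follows. This is an illustration under assumptions, not the actual code: the variable and function names are made up, and a sorted directory list stands in for the directory stack the real functions use:

```shell
#!/bin/sh
# Sketch of the .bookmark mechanism: find the directory currently
# holding the bookmark, then move the bookmark to the next directory
# in sorted order. MUSIC_ROOT and the function names are assumptions
# for this example, not the names used in the actual scripts.
MUSIC_ROOT=${MUSIC_ROOT:-"$HOME/music"}

current_bookmark_dir() {
    find "$MUSIC_ROOT" -name .bookmark -exec dirname {} \; | head -n 1
}

advance_bookmark() {
    current=$(current_bookmark_dir)
    [ -n "$current" ] || return 1
    # The next directory is the one sorting immediately after the
    # current one in the full list of collection directories.
    next=$(find "$MUSIC_ROOT" -type d | sort | awk -v cur="$current" \
        'found { print; exit } $0 == cur { found = 1 }')
    [ -n "$next" ] || return 1
    rm "$current/.bookmark"
    touch "$next/.bookmark"
}
```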

Photo collection scripts
I also have a fairly large photo collection, and at least three times that I can remember I’ve tried to curate a “core” collection of favorites (without getting rid of the originals). I’ve tried two approaches: making a parallel directory tree, as I do with my music, and making a file containing a list of “favorites.” Which tools I used depended on whether I was working in a desktop environment or at a more minimal virtual terminal.

  • is used to help triage a collection of image files by displaying a given file and asking “is it a favorite?” At the moment it only works in Linux virtual terminals via the fbi image viewer, which prints the filename to standard output if you press Enter and doesn’t if you press Space; that’s the mechanism the script relies on. To operate on a collection of images, pass it each image in turn using find; it will silently skip any image it has recorded as already dealt with.
  • is a helper script for that filters out extraneous lines that fbi tends to print from time to time that would otherwise corrupt the “favorite images” file.
  • has much the same purpose as, but handles looping through the images itself, presents a more user-friendly interface, and works in a graphical environment as well as on a virtual terminal.
  • presents an image to the user and asks if he or she wants to keep it, and if the answer is “no” removes it. does the same thing, but differently and less robustly.
  • tells the process that shows a desktop background in XFCE to show a new image. XFCE lets you set your background to “something at random from this list of images,” but doesn’t (or didn’t at the time; I haven’t investigated this in years) let the background switch automatically every so often (i.e. a “slideshow”), as KDE did. So this script resets the “desktop image” property to get that behavior despite the built-in limitation.
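The triage bookkeeping (skip anything already seen, record the favorites) can be sketched like this. In the real script fbi displays the image and the keypress decides; in this illustration the answer is read from standard input instead, and all of the names are made up:

```shell
#!/bin/sh
# Sketch of the terminal triage bookkeeping. The function name and
# default file names are invented; the viewer step is omitted.
triage_image() {
    image=$1
    favorites=${2:-favorites.txt}
    seen=${3:-seen.txt}
    # Silently skip anything already dealt with.
    if grep -Fqx -- "$image" "$seen" 2>/dev/null; then
        return 0
    fi
    # In the real script, fbi prints the filename on Enter
    # (favorite) and nothing on Space (skip); here we just read
    # a y/n answer from standard input.
    read -r answer
    if [ "$answer" = y ]; then
        printf '%s\n' "$image" >> "$favorites"
    fi
    printf '%s\n' "$image" >> "$seen"
}
```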

Miscellaneous scripts
  •, one of the scripts that is probably an adaptation of a third-party one, prints the sum of a column of numbers given to it on standard input.
  • tries each password from a file of passwords on a Zip or RAR archive until one works. If you’re pretty sure that the password is one of these dozen or so strings, but you’ve forgotten which one, you could test each by hand, or you could use something like this script.
  • is what I use, when I’m using my laptop in command-line-only mode to save the battery, to see how much battery life I have left.
  •, another script that may have come from somewhere online (I know I wouldn’t have written something using getopts, for example), though I’ve extended it somewhat to fit my needs, does all the setup and tear-down needed to enter a chroot, including bind-mounting the Portage tree, any overlays, and the source of the current Linux kernel if the appropriate directories exist both inside and outside the chroot.
  • is a solution to a problem I hope will soon go away if it hasn’t already: that of Firefox (or Thunderbird, though I stopped using that years ago) slowing down so much that I feel I have to stop it with kill -STOP to let the system catch its breath, then start it again while forcing disk flushes every few seconds. The script runs kill -CONT on all Mozilla processes (though not the plugin containers) and a sync, sleeps for two seconds, then repeats for as long as any such process is running. You can specify additional process names, but it always operates on Firefox and Thunderbird processes. Note that it will print errors (but shouldn’t crash, though it will keep running unnecessarily) if run on a multi-user system where some other user is running a matching process.
  • extracts the audio track from a video file, then plays first the audio and then the original and asks if you want to keep or remove each. (If you remove the audio-only file it’ll leave the original alone without asking.)
  • counts the files in a specified directory or directories.
  • empty empties a file by writing nothing to it.
  • epubgrep is a wrapper around zipgrep that tries to find a pattern in Zip or EPUB files, printing the names of the files that contain the pattern.
  • searches a directory tree or trees for files other than text files (with the right line endings) and CSS, RTF, and early Microsoft Office format files.
  • sorts the files in a directory tree by size, largest to smallest, and prints their size in “human-readable units.”
  • does much the same thing for installed packages on a Gentoo system.
  • tries to determine the size of packages including the size of their dependencies and show those with the largest total size. It ignores packages installed because of sets.
  • is the one possibly-generally-useful script (so far) from the build apparatus of my poetry collection (released a little over a year ago). For a time I tried to use tex4ht to build the Kindle version of the collection, but it produced broken img tags. This script produces a sed script to fix them.
  • calculates the duration of a music file.
  • gvim-wrapper, gvimdiff-hg-wrapper, and gvimdiff-svn-wrapper are wrappers around gvim with various options, for use when a single command without options is called for but the editor won’t work properly without options. gvimdiff-svn-wrapper discards its first five options, because Subversion passed unwanted information to graphical diff tools.
  • takes an array of programs and uses the ionice command to tell the kernel to give all instances of those programs high priority for I/O. By default, it covers X and the base software of the desktop environments I use, terminal emulators I commonly use, the software that maintains a network connection, and the music/video player mplayer.
  • contains a shell function to print the system load average. This is almost trivial, but I keep it so that when the system gets really bogged down I don’t have to spawn a new process to check it.
  • and are from the days when the compression program lrzip broke compatibility with old archives, so I kept an old version around. lrzip is also unusual in that it leaves the original file in place unless the user specifies otherwise with a command-line option, it leaves a partially-converted (compressed or uncompressed) file if it fails midway through (most commonly empty), and it (if I remember correctly) refuses to operate if the file it would create already exists. So tries to uncompress a file with the system version of lrunzip first, then a local version, and tests whether trying to decompress a file would collide with an existing file.
  • uses the music/video player mplayer to play all files in a directory tree, in a random order.
  • is the script that I use daily to keep my Gentoo Portage tree, and overlays, up to date. It uses eix-sync, which at the end provides a summary of changes to the tree, and saves all output into a configurable directory for later perusal.
  • is the other daily update script. It builds any packages for which updates are available, or which otherwise need rebuilding, then removes any packages that are no longer needed, then checks the system’s linking for consistency.
  • sets the Linux CPU scaling “governor” for each CPU to whatever the user specifies, or powersave by default.
  • renumbers a window in the ratpoison window manager. It takes the new and old numbers, in that order.
  • rcsless is one of the oldest scripts in the repository. It checks out a file from RCS to standard output (using RCS standard functionality) and pipes it to less.
  • is a simple script to rebuild a Gentoo system “from scratch,” by doing an “emptytree” build, then “depcleaning” and running revdep-rebuild to remove any unnecessary packages.
  • tells the computer to hibernate in one minute.
  • size_file prints the human-readable size of a file, if it exists.
  • concatenates a number of MP3 files into one Speex-encoded file. (If I ever have cause to use it again, I’ll want to change it to use the newer Opus format.)
  • removes the byte-order mark from Unicode text files.
  • contains shell functions for adding stories to Pivotal Tracker. The submit_to_tracker function takes the project ID (or name, if there’s a function project_name_to_id to convert the name to an ID), the type of story, the story’s points estimate (use “” for no estimate), a comma-separated list of tags to apply to the story, the name of the story, and optionally the story’s state (unscheduled, unstarted, started, finished, delivered, or accepted) and a longer description. If there’s an array PROJECTS_WITHOUT_CHORE_PTS and you try to create a chore with an estimate, it will check whether the project is in that array and object, rather than trying and failing to submit the story. The submit_tracker_release function is similar, taking the project ID (or name, if you provide a conversion function), tags to apply to the release in Tracker, the name to give the release, and the release’s due date (in a format Tracker’s API expects). Both of these functions require TRACKER_TOKEN to be set in your environment to your Tracker API token.
  • is another script only minimally adapted from instructions found I-forget-where: a little wrapper that allows the caller to run a shell function as another user. Despite its name, it doesn’t actually use sudo; the fact that sudo can’t do this is why the script exists.
  • waits until a process or processes are no longer running, then suspends the computer.
  • synchronizes a DVCS clone (only Mercurial for now, but I know how I’ll work with Git when I get around to implementing that in this script) with a clone on another computer, transferring any differences (changes one copy has and the other doesn’t) in either direction. does much the same thing, but with “upstream” instead of another probably-local computer.
  • is probably the script that I use most often. It uses unison to synchronize corresponding directories (listed in a shell array) between multiple computers. For each directory, if that directory contains or, the script executes them before and after (respectively) running unison—for example, I keep ~/.config and ~/.local synchronized between my desktop and my laptop, and those directories change frequently if Chromium (Chrome) is running, so I forcibly pause all chrome processes in and resume them in
  • lists, using emerge --pretend, packages that have a test USE flag but were installed without it, which is to say most packages that were installed despite test failures.
  • tests whether the computer has a network connection. For each site in an array, by default containing the IP addresses of the router immediately upstream of me and of one of Google’s public DNS servers and the hostname, the script tries to ping the site once.
  • converts ebooks from EPUB or other formats using a command-line utility provided by Calibre to Kindle format and tries to put them on an MTP device.
  • converts possibly-compressed video files to h264-encoded MP4, plays the new video to test it, then asks the user whether to remove it or the original.
  • and convert filenames and file extensions, respectively, to lower case.
  • is like zdiff, but for any of the compression formats I’ve used.
  • is a script to maintain /etc/hosts using the list provided by hpHosts. Anything at the beginning of /etc/hosts (e.g. names of hosts in the local network, in addition to the explanatory header) is preserved, and any lines commented out in the main list in /etc/hosts are commented out in the updated version.
  • finds SQLite databases under the current directory and applies the SQLite commands VACUUM and REINDEX to them. But before doing this, it checks whether any Mozilla software is running (since I use it primarily to clean up databases in my Firefox, or formerly Thunderbird, profile) and aborts if it is.
  • converts an HTML file to text and counts the number of words in it.
  • is a wrapper around the ps command to get a particular format in which the “kernel function” field, or “WCHAN” field, is visible in its entirety instead of being truncated.
  • contains (and executes if the script is executed) a number of shell functions to help in backing up data from Web services I use: GoodReads, LibraryThing, Facebook, WordPress (all the blogs I’m the author of), Delicious, Diigo, Gmail (using OfflineIMAP), Pivotal Tracker, and SimpleNote. For most of these, all the script does is open the page from which I can download data in a Web browser, even though I would much prefer an automated backup. However, the LibraryThing backup is automated, as is Gmail (provided OfflineIMAP is set up to get the necessary credentials from a “keyring” daemon), and the Pivotal Tracker backup function opens the pages for each project I own.
  • prints the days of the year that are the weekday specified in the script, in YYYY/MM/DD format. When this blog still ran on a fixed schedule (posts related to the Shine Cycle on Mondays, posts related to Strategic Primer on Wednesdays, poems on Fridays, etc.), I used this to open all (for instance) Mondays in my browser to help me create a year-end summary without having to scroll through the entirety of every month.
  • reads the contents of a gzip-compressed file and counts the number of words in it.
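As a small example from this group, the column-summing idea is essentially a one-line awk program wrapped in a shell function (the function name here is made up; the actual script is the adapted third-party one mentioned above):

```shell
#!/bin/sh
# Sketch of the column-summing idea: sum the first field of every
# line on standard input and print the total. Adding 0 in END
# ensures "0" is printed even for empty input.
sum_column() {
    awk '{ total += $1 } END { print total + 0 }'
}
```

For example, `printf '1\n2\n3\n' | sum_column` prints 6.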

These all worked well enough when I last used them (and those I use daily or monthly work quite well for me), and I’ve tried to remove any truly obsolete scripts, but I recommend reading any script carefully before you use it. And, of course, I welcome bug reports.

