Today I Learned


Resize Tmux Pane 🖥

Sometimes, after a long day of coding, I resize a Tmux pane using my mouse instead of my keyboard. It’s a habit from my GUI-informed past.

Here’s how to accomplish this without a mouse.

To resize the focused pane left one cell (from the Tmux prompt):

:resize-pane -L

Resize pane number 3 right 10 cells:

:resize-pane -t 3 -R 10

Etc.
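
If you resize often, these commands can also be bound to repeatable keys; a sketch for ~/.tmux.conf (the key choices here are assumptions, not from the post):

# ~/.tmux.conf
bind -r H resize-pane -L 5
bind -r L resize-pane -R 5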

Change Prompt in Z Shell

When I live code, or share terminal commands in a demonstration, I don’t want my customized terminal prompt included in that information. It’s noisy.

Right now I'm changing this in Z Shell via the PROMPT variable.

# Complex prompt
jake@computer-name: echo $PROMPT
%{%}%n@%m%{%}:

# Simple prompt
jake@computer-name: PROMPT="$ "
$ echo 'ready to live code'
ready to live code
$
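
When the demo is over, re-sourcing your config (or just opening a new shell) brings the custom prompt back, assuming it's set in ~/.zshrc:

$ source ~/.zshrc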

Pipe all output from a command (stderr & stdout)

When you write a bash/zsh script that relies on pipes, a normal pipe will not carry along text written to stderr.

For example, curl -v prints information about the request, including its headers and status, to stderr.

If we simply try to pipe the output of curl -v into less, we will not see the verbose header and request info:

curl -v https://hashrocket.com | less

Output:

<html lang='en-US'>
<meta charset='UTF-8'>
<title>Ruby on Rails, Elixir, React, mobile design and development | Hashrocket</title>
...

But if we want the stderr output as well, we can use the |& syntax:

curl -v https://hashrocket.com |& less

Output:

* Rebuilt URL to: https://hashrocket.com/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
...
* Connected to hashrocket.com (52.55.191.55) port 443 (#0)
...
<html lang='en-US'>
<meta charset='UTF-8'>
...

🍒 Bonus:

We can also pipe through ONLY the stderr, by duplicating stderr onto the pipe and then discarding stdout (the order of the redirections matters):

curl -v https://hashrocket.com 2>&1 >/dev/null | less

Output (will not contain the html response):

* Rebuilt URL to: https://hashrocket.com/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
...
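
Note: |& is bash/zsh shorthand for 2>&1 |; the spelled-out form works in plain POSIX shells too:

curl -v https://hashrocket.com 2>&1 | less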

h/t Thomas Allen

Get ONLY PIDs for processes listening on a port

The lsof utility on Linux is useful among other things for checking which process is listening on a specific port.

If you need to kill all processes listening on a particular port, normally you would reach for something like awk '{ print $2 }', but that would fail to remove the PID column header, so you would also need to pipe through something like tail -n +2. It gets pretty verbose for something that should be pretty simple.

Fortunately, lsof provides a way to list only the PIDs, without the header, so you can pipe the output straight to the kill command.

The -t flag strips everything from the output except the PIDs of the processes matching your query.

In this example, the query returns the PIDs of all processes listening on port 3000:

lsof -ti tcp:3000

The output of which will look something like:

6540
6543
21715

This is perfect for piping into kill using xargs:

lsof -ti tcp:3000 | xargs kill

No awks or tails necessary! 🐕
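
If a process ignores the default TERM signal, the same pipeline works with a stronger one:

lsof -ti tcp:3000 | xargs kill -9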

Find The Process Using A Specific Port On Mac

The netstat utility is often recommended for finding the PID (process ID) bound to a specific port. Unfortunately, Mac’s version of netstat does not support the -p (process) flag. Instead, you’ll want to use the lsof utility.

$ sudo lsof -i tcp:4567

Running this will produce a nicely formatted response that tells you several pieces of information about the process bound to :4567 including the PID.


Grep For Files With Multiple Matches

The grep utility is a great way to find files that contain a certain pattern:

$ grep -r ".class-name" src/css/

This will recursively look through all the files in your css directory to find matches of .class-name.

Oftentimes these kinds of searches turn up too many results, and you'll want to pare them back by providing some additional context.

For instance, we may only want results where @media only screen also appears, but on a different line. To do this, we need to chain a series of greps together.

$ grep -rl "@media only screen" src/css |
    xargs grep -l ".class-name"

This will produce a list of filenames (hence the -l flag) that contain both a line with @media only screen and a line with .class-name.

If you need to, chain more grep commands on to narrow things down even further.
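
For instance, a hypothetical third filter (the font-size pattern is made up for illustration):

$ grep -rl "@media only screen" src/css |
    xargs grep -l ".class-name" |
    xargs grep -l "font-size"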

See man grep for more details.

Zsh file name without the extension

Zsh provides a weird way to get at the different parts of a file name.

If you want the full path without the extension:

> myfile=/path/to/story.txt
> echo ${myfile:r}
/path/to/story
> myfile=story.txt
> echo ${myfile:r}
story

If you want just the file name minus the path:

> myfile=/path/to/story.txt
> echo ${myfile:t}
story.txt

Check this out: you can combine those two modifiers!

> myfile=/path/to/story.txt
> echo ${myfile:t:r}
story
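
Two related modifiers from the same family are :h, which keeps the path head, and :e, which keeps just the extension:

> myfile=/path/to/story.txt
> echo ${myfile:h}
/path/to
> echo ${myfile:e}
txt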

Copy files with progress in terminal w/rsync

When you need to transfer a lot of files from one location to another, it's sometimes useful to have some progress indication, and maybe even a speed measure or time remaining.

I recently had to transfer a few gigabytes of data from one computer to another. For this task I chose to use Rsync, since it is a command line utility that can preserve file metadata (permissions) and easily resume in case of an error.

Rsync ships with macOS by default, but if you want to get a more recent version, you can install it from homebrew.

There are two options for showing progress:

If you are transferring a few really big files you can use the --progress flag.

rsync -ah --progress source destination

This will list each file as it is being transferred and show the progress and the speed at which it is moving.

In my case I had a lot of small files so I chose to use --info=progress2.

rsync -ah --info=progress2 source destination

This will output something like this:

2.26G  16%    6.13MB/s    0:05:51 (xfr#375313, to-chk=0/1165396)

Which represents the progress, speed and estimated time remaining for the entire transfer.
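
Before committing to a big transfer, a dry run (-n / --dry-run) combined with -v lists what would be copied without touching anything:

rsync -ahnv source destination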

List Stats For A File

The ls command is good for listing files. Tacking on the -la flags gives you a bunch of info about each of the listed files. To get even more info, we can use the stat command.

$ stat README.md
16777220 143994676 -rw-r--r-- 1 jbranchaud staff 0 53557 "Jul 14 14:53:44 2018" "Jul 10 14:54:39 2018" "Jul 10 14:54:39 2018" "Jul 10 14:54:39 2018" 4096 112 0 README.md

That’s definitely more info, but it is unlabeled and a lot to parse. We can improve the output with the -x flag.

$ stat -x README.md
  File: "README.md"
  Size: 53557        FileType: Regular File
  Mode: (0644/-rw-r--r--)         Uid: (  501/jbranchaud)  Gid: (   20/   staff)
Device: 1,4   Inode: 143994676    Links: 1
Access: Sat Jul 14 14:53:44 2018
Modify: Tue Jul 10 14:54:39 2018
Change: Tue Jul 10 14:54:39 2018
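
For scripting, BSD stat can also print a custom format, which avoids parsing the full listing entirely (the format string here is just an example; %N is the file name and %z the size in bytes):

$ stat -f "%N is %z bytes" README.md
README.md is 53557 bytes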

See man stat for more details.


Show escaped bash color codes in less #linux

My ls command colors directories and files according to their type and permissions:

[screenshot: ls output with colored directories and files]

But when the window is too small to fit the content I pipe the result into less:

[screenshot: the same listing piped into less, showing raw escape codes instead of color]

less cannot correctly parse the escape codes from ls and turn them into color. To fix that, add -r (raw control characters) to the less command:

[screenshot: ls piped into less -r, with colors rendered correctly]

Notes:

My l alias is gls -F -G --color --group-directories-first -lah (gls is GNU ls)

You can alias less='less -r' if you want this to be the default behavior for less.

Delete all node_modules dirs recursively with find

If you have hundreds of past JavaScript projects sitting in your workspace folder, you probably also have hundreds of node_modules folders nested inside of them, and hundreds of thousands of actual npm packages resting peacefully in those.

Often enough, all you care about is the code that uses the modules and not the modules themselves, so to save yourself some precious laptop disk space you can just delete all those folders! When you need them again, cd into the project directory and run yarn install or npm install.

First let’s do a dry run:

find . -name "node_modules" -type d -prune

and now that you've checked the output of the above command, you can delete all the nested node_modules folders.
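
You can also total up how much space those folders occupy before removing them; a quick sketch with du (the -c flag adds a grand total):

find . -name "node_modules" -type d -prune -exec du -shc '{}' +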

If you are still feeling paranoid (and you’re on macOS) you can simply move those to the Trash:

find . -name "node_modules" -type d -prune -exec trash '{}' +

If you feel a little braver, just unlatch the airlock and toss them into a black hole 🕳 using rm -rf:

find . -name "node_modules" -type d -prune -exec rm -rf '{}' +

I saved a whopping 80GB with this technique 🤑. Hope you find it helpful.

Generate Zeropadded Ranges

Need to generate 99 directories, named 01/ to 99/? Today I learned that command line brace expansion supports zero-padded ranges (numbers starting with one or more zeroes). The following command will create 99 zero-padded, numbered directories:

$ mkdir {01..99}

Hit tab to see the expanded command.

The zero-padding on the second number can be omitted; the following creates the range 01-05, even though there's no zero in front of the 5:

$ mkdir {01..5}


Which expands to:

$ mkdir 01 02 03 04 05
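
If your shell doesn't support zero-padded brace ranges, seq -w (GNU and BSD) pads numbers to equal width and gets you the same directories:

$ mkdir $(seq -w 1 99)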

Use a proxy on curl/wget commands

Using a proxy can be a good way to debug HTTP issues. Unfortunately, setting the proxy globally on macOS does not apply to all command line utilities.

With curl, for example, you can set the proxy using the --proxy flag:

curl http://example.com --proxy 127.0.0.1:8080

Or by adding the following to your ~/.curlrc configuration file for a more persistent setting:

proxy = 127.0.0.1:8080

A similar thing can be done with the wget utility by editing ~/.wgetrc and adding:

http_proxy = http://127.0.0.1:8080
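
Many tools, curl and wget included, also honor the conventional proxy environment variables, so exporting them covers the whole shell session:

export http_proxy=http://127.0.0.1:8080
export https_proxy=http://127.0.0.1:8080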

xargs substitution

Output piped to xargs will be placed at the end of the command passed to xargs. This can be problematic when we want the output to go in the middle of the command.

> echo "Bravo" | xargs echo "Alpha Charlie"
Alpha Charlie Bravo

xargs has a facility for substitution, however. Indicate the symbol or string you would like to replace with the -I flag.

> echo "Bravo" | xargs -I SUB echo "Alpha SUB Charlie"
Alpha Bravo Charlie

You can use the symbol or phrase twice:

> echo "Bravo" | xargs -I SUB echo "Alpha SUB Charlie, SUB"
Alpha Bravo Charlie, Bravo

If xargs is passed two lines, it will call the command with the substitution twice.

> echo "Bravo\nDelta" | xargs -I SUB echo "Alpha SUB Charlie SUB"
Alpha Bravo Charlie Bravo
Alpha Delta Charlie Delta
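
Where -I really earns its keep is with commands whose piped-in value doesn't belong at the end, like moving files into a directory (the *.log files and archive/ directory here are hypothetical):

> ls *.log | xargs -I F mv F archive/F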

Multiple SSH Port Forwardings

This post by Dillon Hafer is one of my favorite remote-work tricks. Today I learned more about this part of the SSH configuration.

Multiple forwardings can be set; in a setup where port 3000 and port 9000 are important, we can forward both:

# ~/.ssh/config

Host example
  Hostname     example.com
  LocalForward 3000 localhost:3000
  LocalForward 9000 localhost:9000

Ports 3000 and 9000 will then be forwarded from the remote host. For a one-time solution, we can also add more via the command line:

$ ssh example -L 4000:localhost:4000

Combining this command and the configuration file above, ports 3000, 9000, and 4000 will be forwarded.

Relative Dates with GNU date

GNU date ships with the ability to add and subtract dates. Using the -d flag one can add or subtract years, months, days, weeks, and seconds. As an example, here's a future-date helper:

in() {
  if [ "$(uname)" == "Darwin" ]; then
    gdate -d"$(gdate) +$1 $2" "+%Y-%m-%d"
  else
    date -d"$(date) +$1 $2" "+%Y-%m-%d"
  fi
}
~❯ in 7 days
2018-03-24
~❯ in 2 months
2018-05-17

This can be handy for some CLI utilities like Todo.txt.

For example:

~❯ t add Post first TIL due:$(in 3 days)
10 Post first TIL due:2018-03-20
TODO: 10 added.
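
Subtraction works the same way via GNU date's relative items, like ago (run as gdate on a Mac):

~❯ date -d "2 days ago" "+%Y-%m-%d"
2018-03-15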

Note: If you’re on a Mac you’ll need to install the GNU coreutils. By default all commands will be prepended with g, hence gdate above.

brew install coreutils

Check the man pages for more info.

Difference between output of two commands #linux

Recently I've been playing around with ripgrep (rg), a tool similar to Ack and Ag but faster than both (and written in Rust, FWIW).

I noticed that when I ran a command in Ag to list all file names in a directory, and counted the number of files shown, I was getting a different number than the comparable command in ripgrep.

ag -l -g "" | wc -l
# =>      29
rg -l "" | wc -l
# =>      33

This led me to wonder if there is an easy way to view the diff between the output of the two commands.

I know I could save the output of each into a file and then compare the two files with the standard diff utility, but I don't see a reason to write to disk for such a comparison.

This is how you would do that without writing to disk:

diff <(ag -l -g "") <(rg -l "")

The diff printed by this command can be misleading, because the two tools list files in different orders, so you will need to add a sort to each command:

diff <(ag -l -g "" | sort) <(rg -l "" | sort)
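
The <(...) syntax is process substitution: the shell runs each command and substitutes a file name (typically under /dev/fd) whose contents are that command's output, which is why a two-file tool like diff accepts them. You can see the substituted name directly (the exact descriptor number varies by system):

> echo <(true)
/dev/fd/63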

Execute Remote Commands with SSH

Today I switched workstations. As a Vim user this means copying some local dotfiles around, appending a few here, deleting a few there. It was a good reminder that executing remote commands is part of SSH.

Here's how I appended the custom configuration from my old workstation to the existing (shared) configuration on the new machine. The quoted command is executed on the remote machine:

$ ssh jake@oldworkstation "cat ~/.vimrc.local" >> ~/.vimrc.local
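
The same trick works in the other direction; here local output is piped into a command running on the remote machine, appending a local file to a remote one (the hostname is hypothetical):

$ cat ~/.vimrc.local | ssh jake@newworkstation "cat >> ~/.vimrc.local"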

List The Available JDKs

Want to know what JDK versions are installed and available on your machine? There is a command for that.

$ /usr/libexec/java_home -V
Matching Java Virtual Machines (3):
    9.0.4, x86_64:      "Java SE 9.0.4" /Library/Java/JavaVirtualMachines/jdk-9.0.4.jdk/Contents/Home
    1.8.0_162, x86_64:  "Java SE 8"     /Library/Java/JavaVirtualMachines/jdk1.8.0_162.jdk/Contents/Home
    1.8.0_161, x86_64:  "Java SE 8"     /Library/Java/JavaVirtualMachines/jdk1.8.0_161.jdk/Contents/Home

/Library/Java/JavaVirtualMachines/jdk-9.0.4.jdk/Contents/Home

The listed VMs show what JDK versions you have and the final line shows which is currently the default version.
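
A natural follow-up is pointing JAVA_HOME at one of the listed versions; the -v filter selects it:

$ export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)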

Forward Multiple Ports Over SSH

I sometimes find myself doing web app development on another machine via an SSH connection. If I have the server running on port 3000, then I like to use SSH’s port forwarding feature so that I can access localhost:3000 on my physical machine.

$ ssh dev@server.com -L 3000:localhost:3000

What if I have two different servers running? I’d like to port forward both of them — that way I can access both.

SSH allows you to forward as many ports as you need. The trick is to specify a -L for each.

$ ssh dev@server.com -L 3000:localhost:3000 -L 9009:localhost:9009

Get the mime-type of a file

The file command on both Linux and macOS can provide MIME information; the flag is -I on macOS's BSD file (on GNU/Linux it's lowercase -i).

> file -I cool_song.aif
cool_song.aif: audio/x-aiff; charset=binary

There are longer flags to limit this information, --mime-type and --mime-encoding.

> file --mime-type cool_song.aif
cool_song.aif: audio/x-aiff
> file --mime-encoding cool_song.aif
cool_song.aif: binary

And if you are using this information in a script you can remove the prepended file name with the -b flag.

> file --mime-type -b cool_song.aif
audio/x-aiff
> file --mime-encoding -b cool_song.aif
binary

Combine -b and -I for a quick, terse MIME information command:

> file -bI cool_song.aif
audio/x-aiff; charset=binary

Use jq to filter objects list with regex

My use case is filtering an array of objects on an attribute that matches a particular regex, then extracting another attribute from each matching object.

Here is my dataset, saved in data.json:

[
  {"name": "Chris", "id": "aabbcc"},
  {"name": "Ryan", "id": "ddeeff"},
  {"name": "Ifu", "id": "aaddgg"}
]

First, get each element of the array (the -c flag keeps each result on a single line):

> cat data.json | jq -c '.[]'
{"name":"Chris","id":"aabbcc"}
{"name":"Ryan","id":"ddeeff"}
{"name":"Ifu","id":"aaddgg"}

Pipe the elements to select and pass an expression that will evaluate to truthy for each object.

> cat data.json | jq -c '.[] | select(true) | select(.name)'
{"name":"Chris","id":"aabbcc"}
{"name":"Ryan","id":"ddeeff"}
{"name":"Ifu","id":"aaddgg"}

Then use the test function to compare an attribute to a regex.

> cat data.json | jq -c '.[] | select(.id|test("a."))'
{"name":"Chris","id":"aabbcc"}
{"name":"Ifu","id":"aaddgg"}

Extract the name attribute for a more concise list (-r emits raw strings rather than JSON-quoted ones).

> cat data.json | jq -r '.[] | select(.id|test("a.")) | .name'
Chris
Ifu
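
To collect the matches back into a JSON array rather than a raw stream, wrap the pipeline in brackets:

> cat data.json | jq -c '[.[] | select(.id|test("a.")) | .name]'
["Chris","Ifu"]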

Read more about jq

Setting timezones from command line on MacOS

We wrote some tests that recently failed in a timezone other than our own (☹️). In order to reproduce the failure we had to set our local machine to the failing timezone.

sudo systemsetup -settimezone America/New_York

Easy enough. You can get all the timezones available to you with:

sudo systemsetup -listtimezones

And also check your current timezone:

sudo systemsetup -gettimezone

systemsetup has 47 different time/system oriented commands, but it needs sudo even to run -help.

> systemsetup -help
You need administrator access to run this tool... exiting!

Don't fret; you can get this important secret information via the man page:

man systemsetup

zsh change dir after hook w/`chpwd_functions`

How does rvm work? You change to a directory, and rvm determines which version of ruby you're using.

In bash, the cd command is overridden with a cd function. This isn't necessary in zsh, because zsh has hooks that execute after you change a directory.

If you have rvm installed in zsh you’ll see this:

> echo $chpwd_functions
__rvm_cd_functions_set
> echo ${(t)chpwd_functions}
array

chpwd_functions is an array of functions that will be executed after you run cd.

Within the function you can use the $OLDPWD and $PWD env vars to programmatically determine where you are, like this:

> huh() { echo 'one sec...'; echo $OLDPWD; echo $PWD }
> chpwd_functions+=huh
> cd newdir
one sec...
olddir
newdir
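
To unregister the hook later, zsh's ${name:#pattern} expansion filters it back out of the array:

> chpwd_functions=(${chpwd_functions:#huh})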

You can read more about hook functions in the zsh docs.

Remove duplicates in zsh $PATH

How to do it?

typeset -aU path

There ya go. All entries in $PATH are now unique

How does it work? Well, that's stickier. path and PATH are not the same thing. You can see that by examining the type of each with the t parameter expansion flag.

> echo ${(t)path}
array-unique-special
> echo ${(t)PATH}
scalar-export-special

They are linked variables, however: if you change one you change the other, but they have different properties. I have to admit at this point that scalar-export-special is something of which I'm ignorant.

The typeset declaration is simpler: it changes the type of path from array-special to array-unique-special. The -a flag is for array and the -U flag is for unique.
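
To eyeball the result, print the array one entry per line with zsh's print builtin:

> print -l $path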

What files are being sourced by zsh at startup?

Tracking down a bug in the jagged mountains of shell files that are my zsh configuration can be tough. Enter the SOURCE_TRACE option, settable with the -o flag when opening a new shell with the zsh command.

zsh -o SOURCE_TRACE

Which outputs:

+/etc/zsh/zshenv:1> <sourcetrace>
+/etc/zsh/zshrc:1> <sourcetrace>
+/home/chris/.zshrc:1> <sourcetrace>
+/home/chris/.sharedrc:1> <sourcetrace>
+/home/chris/.zshrc.local:1> <sourcetrace>

The above output is an abbreviated representation of the actual files loaded on my system. I have language manager asdf installed which adds a couple of entries. I am an rvm user which adds 50 (!!!) entries. Lots of shell code to source for rvm. Additionally, I use Hashrocket’s dotmatrix which adds even more entries. Lots of sourcing to sort through.

This is handy in combination with print line (or echo) debugging. It gives your print lines added context when things get noisy.
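
The option can also be set permanently near the top of the first startup file you control, so every shell traces from the start (a sketch, assuming ~/.zshenv is yours to edit; files sourced before it won't appear):

# ~/.zshenv
setopt SOURCE_TRACE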

Posting data from a file with curl

Iterating on an API post request with curl can be frustrating if it involves a lot of command line editing. curl, however, can read the post body from a file. Generally the --data option is used like this:

curl -XPOST --data '{"data": 123}' api.example.com/data

But with an @ prefix you can reference a file instead:

curl -XPOST --data @data.json api.example.com/data

Now you can run the same command for each iteration and edit the data.json file containing the data to be posted with your favorite text editor (which is vim, right?).
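
The @ syntax also accepts - for stdin, which lets another command generate the body on the fly:

cat data.json | curl -XPOST --data @- api.example.com/data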

whatis command and apropos in #linux #bash

Ever wonder what a command you are using is? Turns out Linux has the answers for you!

Simply type whatis followed by the name of the command.

Examples:

$ whatis bc
bc(1)         - An arbitrary precision calculator language
$ whatis brew
brew(1)       - The missing package manager for macOS
brew-cask(1)  - a friendly binary installer for macOS

whatis uses the man pages to search your entered query.

There is also a reverse search, which searches the descriptions of commands. For example say you are looking for a calculator:

$ apropos calculator
bc(1)         - An arbitrary precision calculator language
dc(1)         - an arbitrary precision calculator

h/t this tweet by Peter Cooper

Stop #bash script on error #linux #zsh

If you are writing a procedural bash script, you may want to stop execution if one of the steps errors out.

You can write error handling for each step, but that can get quite verbose and make your script hard to read, or you might even miss something.

Fortunately bash provides another option:

set -e

Simply place the above line at the top of your script and bash will halt the script if any command returns a non-zero exit code.

Caveats: this will not work in all cases; for example, it does not work for short-circuited commands using &&/||.

If you want it to work when one of the commands in a pipe fails, you will need to add the pipefail option (not supported on some systems; run set -o | grep pipefail to check yours):

set -e -o pipefail

If you have a command that always returns a non-true exit code and that's fine, you can override set -e for that command with:

set +e
your_command_goes_here
set -e

At this point I consider it a best practice to include this statement in every script I write.
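
A minimal demonstration, using the false builtin (which always returns a non-zero exit code):

#!/bin/bash
set -e
echo "before"
false
echo "after" # never reached; the script halts at false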

List filenames of multiple filetypes in project

Ag (aka The Silver Searcher) is an amazing piece of software. It allows you to define file types (like Ack) and comes prepackaged with some file types.

Using this feature you can list all files of a specific type in your project. For example say we want to list all Ruby and JavaScript files:

ag --ruby --js -l

Ag has the added benefit over Ack that it ignores gitignored files, so you only get the files that matter (and not stuff from node_modules etc).

To see what filetypes Ag supports:

ag --list-file-types

The list is pretty extensive! Unlike Ack however, there is currently no way to add new file types or extend the list.

Reloading shell history in zsh

When you start a shell, your history list is populated from your .zsh_history file. Depending on the options you have set, when you close a shell you write your history list to that same file. Until you close one shell or open new ones, that history is self-contained and not accessible from other shells.

There is a built-in zsh command to both write and read history from the .zsh_history file. fc -W will write to the history file. fc -R will read from the history file.
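
So to pull commands typed in one running shell into another, write in the first and read in the second:

# in the shell with the fresh history
fc -W
# in the other shell
fc -R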

Clean Up Autojump

autojump is a command-line tool for faster directory changing. It’s a must-have on my workstation, because it lets me jump to deeply nested directories like this:

:~/$ j cli
/Users/jwworth/projects/2017/twitter-killer/client
:~/jwworth/projects/2017/twitter-killer/client$

There are two ways we can clean up autojump once it’s been running for a while. First, purge non-existant directories from the jump database:

$ j --purge
Purged 8 entries.

Second, edit the file ~/Library/autojump/autojump.txt (OSX), which stores the jump database. Here you can remove directories that should never be jumped to, or are weighted too highly because of frequent use in the past.

Happy jumping!

Call a program one time for each argument w/ xargs

Generally, I've used xargs in combination with programs like kill or echo, both of which accept a variable number of arguments. Some programs only accept one argument.

For lack of a better example, lets try adding 1 to 10 numbers. In shell environments you can add with the expr command.

> expr 1 + 1
2

I can combine this with seq and pass the piped values from seq to expr with xargs.

> seq 10 | xargs expr 1 + 
expr: syntax error

In the above, instead of adding 1 to 1 and then 1 to 2, it tries to run:

expr 1 + 1 2 3 4 5 6 7 8 9 10

Syntax Error!

We can use the -n flag to ensure that only one argument is applied at a time, so the command runs 10 times.

> seq 10 | xargs -n1 expr 1 +
2
3
4
5
6
7
8
9
10
11

For more insight into what’s being called, use the -t flag to see the commands.
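
Here's -t on a shortened version of the example; the echoed commands go to stderr, interleaved with the results:

> seq 2 | xargs -t -n1 expr 1 +
expr 1 + 1
2
expr 1 + 2
3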

Use `source /dev/stdin` to execute commands

Let's say there's a command in a file, like a README file, and you don't have any copy or paste tools handy. You can get the command out of the README file with:

> cat README.md | grep "^sed"
sed -ie "s/\(.*\)/Plug '\1'/" .vimbundle.local

Great! Now how do we run it? The source command is generally used to read and execute commands from files, and /dev/stdin really does behave like a file.

You can use the pipe operator to place the command onto stdin, and then source will read from stdin.

> cat README.md | grep "^sed" | source /dev/stdin

A simpler example can be constructed with echo:

> echo "echo 'hi there'"
echo 'hi there'

And

> echo "echo 'hi there'" | source /dev/stdin
hi there
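
Process substitution achieves the same effect without the pipe (bash and zsh):

> source <(echo "echo 'hi there'")
hi there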

Split large file into multiple smaller files

The split utility is a handy way to break a file into multiple pieces. This can be useful as a precursor to some parallelized processing of a large file. Let's say you have gigabytes of log files you need to search through; splitting the files into smaller chunks is one way to approach the problem.

> seq 10 > large_file.txt
> split -l2 large_file.txt smaller_file_
> ls -1
large_file.txt
smaller_file_aa
smaller_file_ab
smaller_file_ac
smaller_file_ad
smaller_file_ae

First, I created a “large” file with ten lines. Then, I split that file into files with the prefix smaller_file_. The -l2 option tells split to make every 2 lines a new file. For 10 lines, we'll get 5 files. The suffix it adds (“aa”, “ab”, …) sorts lexicographically, so we can reconstruct the file with cat and a glob.

> cat smaller_file*
1
2
3
4
5
6
7
8
9
10
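
To verify that the pieces reassemble into the original, diff the concatenated stream against the source file, again without a temp file:

> cat smaller_file_* | diff - large_file.txt && echo "intact"
intact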

More useful Homebrew searches #macOS #homebrew

Homebrew, the third-party package manager for macOS, allows searching for packages by name, but the resulting list contains only package names. That's not always very useful, particularly when you are not sure what you are looking for.

To get the package description along with the package name simply add --desc to your brew search command.

For example, let’s look for a library for performing file diffs with color highlighting:

$ brew search --desc diff
apgdiff: Another PostgreSQL diff tool
cdiff: View colored diff with side by side and auto pager support
cern-ndiff: Numerical diff tool
colordiff: Color-highlighted diff(1) output
cppad: Differentiation of C++ Algorithms
dhex: Ncurses based advanced hex editor featuring diff mode and more
diff-so-fancy: Good-lookin' diffs with diff-highlight and more
...

You can also search using a regex against both the description and the name of the package, as long as you supply the --desc option:

$ brew search --desc /[cC]olor.*[dD]iff/
cdiff: View colored diff with side by side and auto pager support
colordiff: Color-highlighted diff(1) output
icdiff: Improved colored diff

`cd` in subshell

With many of our projects sequestering the front-end JavaScript code into an assets directory, I find myself moving between the root project directory and the assets directory to perform all the npm or yarn related tasks in that assets dir. Inevitably I'll start doing something like this:

cd assets; npm install; cd ..

or this

pushd assets; npm install; popd

In both cases using ; instead of && puts me back in the original directory regardless of the result of the npm command.

I just learned that using cd in a subshell does not change the directory of the current shell, so I can also do this:

(cd assets; npm install)
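
Inside the subshell, && comes back into play, since a failed cd should abort the npm run; the parent shell's directory is untouched either way:

(cd assets && npm install)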

Xargs from a file

I've struggled with xargs conceptually for a long time, but it's actually pretty easy: for commands that don't read from stdin but do take arguments, like echo or kill, it turns newline-separated values from stdin into arguments.

Piping to echo does not work.

> echo 123 | echo
# nothing

Using xargs it does.

> echo 123 | xargs echo
123

xargs can also read a file with the -a flag, turning each line of the file into an argument.

> echo "123\nabc" > test.txt
> cat test.txt
123
abc
> xargs -a test.txt echo
123 abc
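
macOS's BSD xargs lacks -a, but plain input redirection accomplishes the same thing:

> xargs echo < test.txt
123 abc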

H/T Brian Dunn

Kill rogue shell processes

There is a particular type of attack where an inserted USB stick acts like a keyboard, opens a terminal, and starts something like this:

while (true); do something_malicious; sleep 3600; done & disown

This process endlessly loops and wakes every hour to do something malicious. The & puts it in the background and the disown ends its attachment to the current terminal. When the terminal is closed, the process gets a parent PID of 1.

This process is still detectable and killable at the command line by finding all shell programs with a parent PID of 1 and killing them with -9.

ps ax -o pid,command,ppid | grep '.*zsh.*\s1$' | awk '{print $1}' | xargs kill -9

This will kill all running rogue zsh processes. There may be reasons why you'd want a process to be detached from its parent terminal, but you could easily decide that this isn't something you ever want and place the above command into a cron job that runs every minute (cron's finest granularity).

Download all of your Humble Bundle books in parallel

Humble Bundle is a great site which offers technical book bundles. The problem is that it presents the user with a huge list of links for all the different formats, and it is a tedious task to right-click each link and save it to your hard drive.

In order to solve this you can open the Developer Tools while on the download page and paste the following:

var pattern = /(MOBI|EPUB|PDF( ?\(H.\))?|CBZ|Download)$/i;
var nodes = document.getElementsByTagName('a');
var downloadCmd = '';
for (i in nodes) {
    var a = nodes[i];
    if (a && a.text && pattern.test(a.text.trim()) && a.attributes['data-web']) {
        downloadCmd += 'wget --content-disposition "' + a.attributes['data-web'].value + "\"\n";
    }
}
var output = document.createElement("pre");
output.textContent = downloadCmd;
document.getElementById("papers-content").prepend(output);

credit: https://gist.github.com/graymouser/a33fbb75f94f08af7e36

This will add a pre tag to the page with a bunch of wget commands. Go ahead and copy those to your clipboard.

Now use GNU Parallel (brew install parallel): first save the contents of your clipboard into a file, for example download_jobs, then run the following command:

parallel -j 4 < download_jobs

Replace 4 with the number of cores you have on your machine.

Then sit back and watch your directory get populated with files.

Surround every line in a file using sed

To transform every line in a file you can use the standard sed utility.

For example, given a file like this:

dkarter/backpack
junegunn/fzf
junegunn/fzf.vim
junegunn/vim-peekaboo
junegunn/gv.vim
terryma/vim-multiple-cursors
scrooloose/nerdtree
dyng/ctrlsf.vim
haya14busa/incsearch.vim
killphi/vim-legend
neomake/neomake

If we want to surround each line with Plug '$content_of_line' we can run the following command:

sed -e "s/\(.*\)/Plug '\1'/" .vimbundle.local

Output:

Plug 'dkarter/backpack'
Plug 'junegunn/fzf'
Plug 'junegunn/fzf.vim'
Plug 'junegunn/vim-peekaboo'
Plug 'junegunn/gv.vim'
Plug 'terryma/vim-multiple-cursors'
Plug 'scrooloose/nerdtree'
Plug 'dyng/ctrlsf.vim'
Plug 'haya14busa/incsearch.vim'
Plug 'killphi/vim-legend'
Plug 'neomake/neomake'

If the result is what we expected, we can add the -i flag to edit the file in place, updating it with our changes:

sed -i -e "s/\(.*\)/Plug '\1'/" .vimbundle.local
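
Note for macOS users: BSD sed requires an explicit (possibly empty) backup suffix after -i:

sed -i '' -e "s/\(.*\)/Plug '\1'/" .vimbundle.local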

Keep Your Brews Bubbly

Life is too short to have Homebrew problems. Run these commands to keep your brews bubbly.

Checks your system to make sure that future installs go smoothly:

brew doctor

Upgrades Homebrew to the latest version:

brew update

Gets a list of what packages are outdated:

brew outdated

Looks through your installed packages and deletes any old versions that may still be hanging around:

brew cleanup

You could alternatively add the --dry-run flag to cleanup to see all the outdated packages that would be removed.

Deletes old symlinks:

brew prune

Updates packages to the latest version:

brew upgrade

You can add the --cleanup flag to delete older versions of the packages you are updating.
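
Chained with &&, the core routine runs as one line and stops if any step fails:

brew update && brew upgrade && brew cleanup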