Today I Learned

63 posts about #devops

Understanding the 'WWW-Authenticate' Header

If you've ever made an HTTP request and gotten back a 401 response status, the response headers most likely included one or more WWW-Authenticate entries. This flow is called "challenge and response" and it's part of the framework for HTTP Authentication.

This header is used to tell the client how it can authenticate in order to gain access to the requested resource.

For example, a response might include the following headers:

WWW-Authenticate: Basic
WWW-Authenticate: NTLM

This means the requested resource supports both the Basic and NTLM authentication schemes.

It's also possible for these headers to come back with other metadata about their authentication schemes like token68 and realm.
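
A Basic challenge carrying a realm might look something like this (the values here are made up):

WWW-Authenticate: Basic realm="staging", charset="UTF-8"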

Next time you get a 401, check out the response headers to see what Auth schemes are supported!

https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/WWW-Authenticate

Delete A Rule In UFW

Say you want to delete a rule in your uncomplicated firewall (ufw). You run ufw help and see that the entry for delete is

delete RULE|NUM                 delete RULE

But when you check the status with sudo ufw status, all you see is this

Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
443                        ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
443 (v6)                   ALLOW       Anywhere (v6)

Where's the rule number?

What you need is sudo ufw status numbered, which has the rule number in the left hand column.

Status: active

     To                         Action      From
     --                         ------      ----
[ 1] OpenSSH                    ALLOW IN    Anywhere
[ 2] 443                        ALLOW IN    Anywhere
[ 3] OpenSSH (v6)               ALLOW IN    Anywhere (v6)
[ 4] 443 (v6)                   ALLOW IN    Anywhere (v6)

So if you want to delete your OpenSSH rule for IPv6, you know to run sudo ufw delete 3.
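
If you'd rather not use numbers at all, the ufw help entry above also says delete accepts the original rule itself, so something like this should work too:

sudo ufw delete allow 443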

Reverse proxy tcp/udp with nginx

Nginx has the ability to reverse proxy tcp and udp with the stream directive, similar to the http directive:

# Reverse proxy postgres server
stream {
  server {
    listen 5432;
    proxy_pass 172.18.65.97:5432;
  }
}

http {
  server { 
    ...
  }
}

This can be useful to load balance tcp streams like a database connection.
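
For actual load balancing you'd list several backends in an upstream block; here's a minimal sketch (the second address is made up):

stream {
  upstream postgres_backends {
    server 172.18.65.97:5432;
    server 172.18.65.98:5432;
  }

  server {
    listen 5432;
    proxy_pass postgres_backends;
  }
}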

If you built nginx with --with-stream=dynamic (you can check with nginx -V) you will need to manually load a shared object:

# nginx.conf
load_module /usr/lib/nginx/modules/ngx_stream_module.so;

Choosing your `--cloud` provider on Gigalixir

One cool thing about gigalixir is that you can choose both cloud platform and the datacenter/region where you'd like to install your app.

By default gigalixir create <app name> will put an app in Google Cloud Platform.

But what if you already have infrastructure on AWS in the us-east-1 region that you want to take advantage of (this is where Heroku puts its servers by default)?

Well, you can change providers with the --cloud flag and the region with the --region flag.

gigalixir create -n gigatilex --cloud aws --region us-east-1

AWS it is!

Transfer env vars from Heroku to Gigalixir

We're in the process of moving tilex, this website, to gigalixir so that we can use http/2. One part of that is moving over all the configuration. Turns out that's real easy with the -s flag.

heroku config -s -a tilex | xargs gigalixir config:set

With the -s flag, you get this:

> heroku config -s | grep DATABASE_URL
DATABASE_URL=postgres://user:pass@location/dbname

Rather than this

> heroku config | grep DATABASE_URL
DATABASE_URL: postgres://user:pass@location/dbname

Which means you can just pipe all those configurations over to gigalixir using xargs.

Without an explicit app flag, the heroku command assumes there is a git remote named heroku, and the gigalixir command assumes there is a git remote named gigalixir.

Parcel hot module reloading over ssh

A really great feature of parcel is hot module reloading: when you make a change to a file, that change shows up in the browser where you have your app open.

My parcel project was set up on a remote server that I had ssh'd into, forwarding the default parcel port (1234), but in the console of Firefox I got this error:

Firefox can’t establish a connection to the server at ws://localhost:41393/. hmr-runtime.js:29:11

Turns out I needed to lock the hmr port to a fixed value and then forward that port over ssh as well.

Lock the hmr port when you start the parcel server like this:

parcel index.html --hmr-port=55555

And make sure you're port forwarding that port over ssh:

ssh chris@myserver -L 1234:localhost:1234 -L 55555:localhost:55555

Now your app will establish a WebSocket connection to the parcel server!

Scan local network for hosts with nmap

If you want to connect to a computer on your network but don't know its IP address, you can use nmap to help you.

nmap -sn 192.168.1.0/24

The above command will conduct a "Ping Scan" on all addresses that start with 192.168.1.

192.168.1.0/24 is CIDR notation for the subnet: the /24 means the first 24 bits (the first three octets of an ip4 address) are fixed, so the scan covers 192.168.1.0 through 192.168.1.255.

Tmux Clear History

I've been doing an experiment lately that requires reading and searching lots of logs in Tmux. Something that makes this harder is that we collect a deep history there. Sometimes I'll 'find' a query I've been looking for 😀, only to realize it is hours or days old ️😐.

Tmux clear-history has been our solution. Running this command on an already cleared Tmux pane clears your history for that pane; you can't scroll up or search the previous output.

We're doing this enough that I put this alias in our terminal dotfiles.

alias tc='clear; tmux clear-history; clear'

tmux clear-history is the CLI version of the Tmux command-mode command :clear-history.

How to manually edit ufw rules

Usually you don't want to manually edit ufw rules, but in this case I just needed to update an IP address across multiple ip4 and ip6 rules. Turns out there is a config file that can be very carefully edited:

$ sudo vim /lib/ufw/user.rules
$ sudo vim /lib/ufw/user6.rules

Very carefully update the ip addresses and then reload ufw:

$ sudo ufw reload

Show All Docker Containers (Running & Stopped)

docker ps was a confusing command for me, because I thought it would show all containers by default. But trying to run a container with the same name as one that has been stopped will give you an error that a container with that name is already in use. Running docker ps will be fruitless because that stopped container will not be listed by default. You can easily view all your containers, running & stopped, with:

docker ps -a
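
If you only care about the stopped ones, a status filter narrows it down:

docker ps -a --filter "status=exited"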

H/T to the docs lol - https://docs.docker.com/engine/reference/commandline/ps/

Creating a Bind Mount with `docker volume`

Creating a bind mount (a volume that has an explicitly declared directory underpinning it) is easy when using docker run:

docker run -v /var/app/data:/data:rw my-container

Now, anytime you write to the container's data directory, you will be writing to /var/app/data as well.

You can do the same thing with the --mount flag.

docker run --mount type=bind,source=/var/app/data,target=/data my-container

Sometimes though you might want to create a bind mount that is independent of a container. This is less than clear but Cody Craven figured it out.

docker volume create \
--driver local \
-o o=bind \
-o type=none \
-o device=/var/app/data \
example-volume

The key value pairs passed with -o are not well documented. The man page for docker-create-volume says:

The built-in local driver on Linux accepts options similar to the linux mount command

The man page for mount will have options similar to the above, but structured differently.
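
Once it exists, the volume can be attached by name like any other named volume; a quick sanity check might look like:

docker run --rm -v example-volume:/data alpine ls /data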

Sharing Volumes Between Docker Containers

In docker, it's easy to share data between containers with the --volumes-from flag.

First let's create a Dockerfile that declares a volume.

from alpine:latest

volume ["/foo"]

Then let's:

  1. Build it into an image foo-image
  2. Create & Run it as a container with the name foo-container
  3. Put some text into a file in the volume
docker build . -t foo-image
docker run -it --name foo-container foo-image sh -c 'echo abc > /foo/test.txt'

When you run docker volume ls you can see a volume is listed. By running a container from an image with a volume we've created a volume.

When you run docker container ls -a you can see that we've also created a container. It's stopped currently, but the volume is still available.

Now let's run another container passing the name of our previously created container to the --volumes-from flag.

docker run -it --volumes-from foo-container alpine cat /foo/test.txt

# outputs: abc

We've accessed the volume of the container and output the results.
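
The shared volume is writable too, so a third container could append to the same file (a quick sketch):

docker run -it --volumes-from foo-container alpine sh -c 'echo def >> /foo/test.txt'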

Blocking ip6 addresses with /etc/hosts

Like many developers, I need to eliminate distractions to be able to focus. To do that, I block non-development sites using /etc/hosts entries, like this:

127.0.0.1 twitter.com

Today I learned that this doesn't block sites that use ip6. I have cnn.com in my /etc/hosts file but it is not blocked in the browser.

To prove this is an ip6 issue I can use ping and ping6

> ping cnn.com
PING cnn.com (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.024 ms

> ping6 cnn.com
PING6(56=40+8+8 bytes) 2601:240:c503:87e3:fdee:8b0b:dadf:278e --> 2a04:4e42:200::323
16 bytes from 2a04:4e42:200::323, icmp_seq=0 hlim=57 time=9.288 ms

So for ip4 requests cnn.com is pinging localhost and not getting a response, which is what I want. For ip6 addresses cnn.com is hitting an address that is definitely not my machine.

Let's add another entry to /etc/hosts:

::1 cnn.com

::1 is the simplification of the ip6 loopback address 0:0:0:0:0:0:0:1.

Now, does pinging cnn.com with ip6 hit my machine?

> ping6 cnn.com
PING6(56=40+8+8 bytes) ::1 --> ::1
16 bytes from ::1, icmp_seq=0 hlim=64 time=0.044 ms

Distractions eliminated.

The interaction of CMD and ENTRYPOINT

The CMD and ENTRYPOINT instructions in a Dockerfile interact with each other in an interesting way.

Consider this simple dockerfile:

from alpine:latest

cmd echo A

When I run docker run -it $(docker build -q .) the output I get is A, like you'd expect.

With an additional entrypoint instruction:

from alpine:latest

entrypoint echo B
cmd echo A

I get just B, not A.

Each of these commands is using the shell form of the instruction. What if I use the exec form?

from alpine:latest

entrypoint ["echo", "B"]
cmd ["echo", "A"]

Then! Surprisingly, I get B echo A.

When using the exec form, cmd provides default arguments to entrypoint.

You can override those default arguments by providing an argument to docker run:

docker run -it $(docker build -q .) C
B C

Clean stopped containers & dangling images #docker

Today I learned how to clean up stopped containers in Docker. Starting in Docker 1.13, a new prune command was introduced.

docker container prune


No one likes dangling images...

To list the numeric IDs of dangling images, use the dangling filter and -q to suppress anything but IDs:

dangling_images=$(docker images -qf dangling=true)


Then delete the images

docker rmi $dangling_images


You can add those to a script and run it from time to time to reclaim hard drive space and some sanity.
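
A minimal version of such a script might look like this (the -f flag just skips the confirmation prompt):

#!/bin/sh
# prune stopped containers, then remove any dangling images
docker container prune -f
dangling_images=$(docker images -qf dangling=true)
if [ -n "$dangling_images" ]; then
  docker rmi $dangling_images
fi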

Connect To An RDS PostgreSQL Database

You can connect to an RDS PostgreSQL database remotely via psql. First, you need to tell AWS that connections from your IP address are allowed. This is done by configuring the VPC (Virtual Private Cloud) that is associated with the RDS instance. You'll need to add an inbound rule to the Security Group associated with that VPC. You'll add an inbound rule that allows a Postgres connection on port 5432 from your IP address -- which is identified by a CIDR address.

(Screenshot: adding an inbound rule for port 5432 in the AWS console.)

Once this rule has been added to the security groups associated with the VPC that is associated with your RDS instance, you'll be able to connect from your machine with psql.

$ psql --host my-rds-endpoint.cbazqrhkmyo.us-east-1.rds.amazonaws.com \
    --port 5432 \
    --user rdsusername \
    postgres

Assuming the database username is rdsusername and the specific database is named postgres, you'll be prompted for that user's password and then connected.
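
If your psql is reasonably recent, the same connection can also be written as a single connection URI:

$ psql postgres://rdsusername@my-rds-endpoint.cbazqrhkmyo.us-east-1.rds.amazonaws.com:5432/postgres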

Allow HTTPS Through Your UFW Firewall

UFW -- Uncomplicated Firewall -- is just what it sounds like. I have it running on a DigitalOcean box and it is only letting through traffic on ports 80 (HTTP) and 22 (SSH). I am setting up SSL for a domain hosted on this box which means I need to also let through traffic on 443 (HTTPS).

The allowed ports can be checked with the status command:

$ sudo ufw status

Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx HTTP                 ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx HTTP (v6)            ALLOW       Anywhere (v6)

As we can see, HTTPS has not yet been allowed by ufw. We can allow HTTPS traffic with the allow command.

$ sudo ufw allow https

Check the status again and see that HTTPS is now included in the list.
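
Since this box is serving with Nginx, another option (assuming the Nginx application profiles are registered with ufw) is the 'Nginx Full' profile, which covers both 80 and 443:

$ sudo ufw allow 'Nginx Full'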


h/t Dillon Hafer

Each line in a Dockerfile is a layer in an image

What has helped me grok docker a bit better is knowing that every line in a Dockerfile has a corresponding hash identifier after the image has been built. Here is a sample Dockerfile:


FROM alpine

RUN echo "Hello World" > helloworld.txt

CMD ["cat", "helloworld.txt"]

I create the image with:

docker build -t helloworld .

I can now examine each layer in the Dockerfile with docker history helloworld

IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
1e5d27ca20a8        13 hours ago        /bin/sh -c #(nop)  CMD ["env"]                  0B
84f489011989        13 hours ago        /bin/sh -c echo "Hello World" > helloworld.t…   12B
3fd9065eaf02        2 months ago        /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B
<missing>           2 months ago        /bin/sh -c #(nop) ADD file:093f0723fa46f6cdb…   4.15MB

Three commands and four layers. The FROM alpine command is actually 2 layers that have been squashed together. You can see the <missing> hash for the initial command because it has been squashed into 3fd9065.

The command that creates the helloworld.txt file has a size of 12 bytes because that's the size of the file that was created.

CircleCI Build Forked Pull Requests

Today I learned that CircleCI does not build against forked pull requests by default. You have to enable it under 'Advanced Settings'.

This is important if your .circleci/config.yml contains build steps like running an automated test suite, linter, or autoformatter. With this setting enabled, every PR goes through the same motions before human review, whether coming from inside or outside the project organization.

Killing heroku dynos

Yesterday we encountered an odd situation: a rake task running on Heroku that did not finish, through no fault of our own.

We killed the local processes kicked off by the heroku command line tool and restarted the rake task but got an error message about only allowing one free dyno. Apparently, the dyno supporting our rake task was still in use.

First we had to examine whether that was true with:

heroku ps -a myapp

Next step was to kill the dyno with the identifier provided by the above command.

heroku ps:kill dyno.1 -a myapp

We ran the rake task again, everything was fine, it worked great.

sub_filter + proxy_pass requires no gzip encoding

The nginx sub module is useful for injecting html into web requests at the nginx level. It looks like this:

location / {
  sub_filter '</head>' '<script>doSomething();</script></head>';
}

This replaces the closing head tag with a script tag followed by the closing head tag.

This does not work, however, when the response coming from the proxy_pass url is gzip encoded.

In this case, you want to specify to the destination server that you will accept no encoding whatsoever with the proxy_set_header directive, like so:

  proxy_set_header Accept-Encoding "";

Altogether it looks like this:

location / {
  proxy_set_header Accept-Encoding "";
  proxy_pass http://destination.example.com;
  sub_filter '</head>' '<script>doSomething();</script></head>';
}

Examine owner and permissions of paths

The ownership of an entire path hierarchy is important when nginx needs to read static content. For instance, the path /var/www/site.com/one/two/three/numbers.html may be problematic when directory three is owned by root rather than www-data. Nginx will respond with status code 403 (forbidden) when http://site.com/one/two/three/numbers.html is accessed.

To debug this scenario we need to examine the owners and permissions of each of the directories that lead to numbers.html. This can be accomplished on Linux with the handy namei command in combination with realpath.

realpath will return the full path for the file.

> realpath numbers.html
/var/www/site.com/one/two/three/numbers.html

And namei -l <full_path> will examine each step in the file's directory structure.

> namei -l $(realpath numbers.html)
drwxr-xr-x root     root     /
drwxr-xr-x root     root     var
drwxr-xr-x www-data www-data www
drwxr-xr-x www-data www-data site.com
drwxr-xr-x www-data www-data one
drwxr-xr-x root     root     two
drwxr-xr-x www-data www-data three
-rw-r--r-- www-data www-data numbers.html

Ah, there it is. The directory two is owned by root rather than www-data. chown will help you from here.
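
From there, the fix is a chown on the offending directory, something like:

sudo chown www-data:www-data /var/www/site.com/one/two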

Disk usage for just the top level dirs

Every so often your hard drive runs out of space and you need to do a survey of what could possibly be taking up so much space. The du disk-usage command is great for that.

It can be noisy though as it traverses through all the directories and gives you a report on the size of each file. What we really want is just the size of each directory under the root dir.

du has a depth flag (-d) to help control the depth of directories that the command reports on.

du -h -d1 /

The above command gives you a report on all the directories and files at the top level.
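
Piping through sort makes the biggest offenders jump out (assuming your sort supports -h for human-readable sizes):

du -h -d1 / 2>/dev/null | sort -h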

Logrotation for a Rails App

This week I did some Linux server log rotation for a Ruby on Rails application. log/your_log.log can get large on a server with traffic, and sometimes we must control how long to retain data.

Here's the basic configuration file I wrote, with comments:

# Logrotater:
# - Daily
# - Timestamped
# - Doesn't error on missing log files
# - Keeps seven copies
# - Truncates log files (lets Rails keep writing
#   to the same log file without a restart)

/var/app/current/log/your_log.log {
    daily
    dateext
    missingok
    rotate 7
    copytruncate
}

There are many other options. Check out man logrotate on a Linux machine for more info.
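
Before deploying a config like this, logrotate's debug flag will dry-run it and print what it would do (the path is wherever you keep the config, e.g. under /etc/logrotate.d/):

sudo logrotate -d /etc/logrotate.d/your_app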

Raid 0 offers performance with worse reliability

RAID stands for:

Redundant Array of Independent Disks

Whenever I think of RAID I think of data drives that each contain the same information, for fault tolerance. Different RAID configurations are given different numeric identifiers. Recently, when shopping for motherboards I saw RAID 0 listed as a selling point for many of the boards and I assumed this was a fault tolerance feature on these motherboards meant for gamers.

RAID 0 has nothing to do with fault tolerance, nor with redundancy. In a RAID 0 configuration, data is spread across multiple drives with the intention of increasing the bandwidth of data between the motherboard/CPU and the drives. It is a technique for increasing performance that actually reduces fault tolerance.

Read more about RAID here -> https://en.wikipedia.org/wiki/Standard_RAID_levels

Push Variables to Heroku

If you use Heroku to deploy your apps, I have a great command to try.

Included in the Heroku toolbelt command-line interface is heroku config:push.

This command pushes your local environment variables, defaulting to a file called .env in your root directory, to any Heroku remote. It favors existing remote configs in the event of a collision.

No more copying and pasting to the command line, or pointless typos while setting remote configs.
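
A minimal invocation, relying on the defaults described above (the local .env file and the heroku git remote):

# pushes the contents of ./.env up to the app's config vars
heroku config:push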

Bonus: also check out heroku config:pull, which is equally useful.

Copy that ssh pub key to the remote server

When setting up a server, it's customary to copy your ssh public key to the authorized keys file so that you don't rely on password auth to sign in to the remote server.

It's generally something like a 4 or 5 step process, but Linux has a utility to get it down to one step: ssh-copy-id.

$ ssh-copy-id dev@myserver.com

After entering your password you're all set! Now turn off that password authentication in your ssh config and start deploying your app!
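
Turning off password auth is a one-line change in /etc/ssh/sshd_config on the server (a sketch):

# /etc/ssh/sshd_config
PasswordAuthentication no

Then restart the ssh daemon (on Ubuntu that's sudo service ssh restart).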

Restore a Heroku PG Backup From Another App

If your staging and production instances are both hosted on Heroku, you can restore your production Postgres database to your staging instance from the Heroku CLI. No posting the dump on AWS, no backing up to a local dump, etc.

$ heroku pg:backups capture -rproduction

This will print the name of your new backup; let's call it b234, and the production app name my-blog.

$ heroku pg:backups restore my-blog::b234 DATABASE_URL -rstaging

The databases are now in sync.

.cert vs .pem

Fellow Rocketeer Dillon Hafer had the explanation for my SSL issue mentioned at https://til.hashrocket.com/posts/c38666d448-react-native-heroku-and-ssl:

"DNSimple gives you a cert and a pem file. The cert file is just the certificate, while the pem file is actually 4 certificates mushed into one file. The 3 extra certificates are the intermediate certificates required by some operating systems that only include Root Certificates. Hope that helps explain why the pem from DNSimple works while the cert doesn’t." - Dillon Hafer

Chaining TLS Certificates 🐱

Apache allows you to declare an intermediate TLS certificate along with your regular certificate in your configuration, but many web servers -- like Nginx, Go, or Heroku -- only allow you to provide one certificate option.

In those cases, you will need to concatenate the entire certificate chain into one certificate file. This may sound daunting, but the process is very simple. Let me introduce cat -- concatenate and print files.

cat is normally used for printing files, but in this case we actually want to concatenate files. Below is a simple example on how we can do this:

cp example_com.crt example_com.chained.crt
cat AddTrustExternalCARoot.crt >> example_com.chained.crt
cat COMODORSAAddTrustCA.crt >> example_com.chained.crt
cat COMODORSADomainValidationSecureServerCA.crt >> example_com.chained.crt

To shorten it, we only need to use cat once:

cp example_com{,.chained}.crt &&
cat AddTrustExternalCARoot.crt COMODORSAAddTrustCA.crt COMODORSADomainValidationSecureServerCA.crt >> example_com.chained.crt
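
The chained file is then what you hand to the server; with Nginx, for instance, it would be the ssl_certificate (paths hypothetical):

ssl_certificate     /etc/nginx/ssl/example_com.chained.crt;
ssl_certificate_key /etc/nginx/ssl/example_com.key;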

Verify TLS cert with private key

Hopefully you're never in a situation where you don't know what private key you used to generate your TLS certificate, but if you do... here's how you can check.

Note: this is better than uploading the certs to production to check on them 😉

Assuming we have generated a private key named example.com.key and a certificate named example.com.crt we can use openssl to check that the MD5 hashes are the same:

openssl x509 -noout -modulus -in example.com.crt | openssl md5
openssl rsa -noout -modulus -in example.com.key | openssl md5

To make things better, you can write a script:

#!/bin/bash
CERT_MD5=$(openssl x509 -noout -modulus -in example.com.crt | openssl md5)
KEY_MD5=$(openssl rsa -noout -modulus -in example.com.key | openssl md5)

if [ "$CERT_MD5" == "$KEY_MD5" ]; then
  echo "Private key matches certificate"
else
  echo "Private key does not match certificate"
fi

Intel Speedstep and Ubuntu 14.04 Performance

Intel Speedstep works with the OS to adjust the clock speed of the CPU in real-time to save power. Older Linux kernels had a poor interaction with Speedstep that could cause the CPU to be downclocked even when running something demanding like a test suite. This can be fixed by disabling Speedstep in BIOS or upgrading the kernel. I was on kernel 3.13, upgraded to 4.2, and saw a 15-40% speed increase running the test suite.

Ubuntu 14.04.4 ships with a newer kernel, but older installs of 14.04 will not be automatically upgraded. Run uname -a to see what kernel you are running. If it is not at least 4.2, then you may want to upgrade your kernel. Using aptitude, this is as simple as:

sudo aptitude install linux-generic-lts-wily


Test Your Nginx Configuration

Nginx misconfiguration can produce vague messages like these:

$ sudo service nginx reload
 * Reloading nginx configuration nginx
   [fail]

Turn up the verbosity with these flags:

$ nginx -c /etc/nginx/nginx.conf -t

The -c flag indicates a certain configuration file will follow; the -t flag tells Nginx to test our configuration. This produces much more useful errors like this:

nginx: [emerg] unknown directive "erver_name" in /etc/nginx/sites-enabled/default:3
nginx: configuration file /etc/nginx/nginx.conf test failed

erver_name should be server_name; with these flags we now have an actionable error message.

See the Nginx docs on command-line switches for more options.

List The Statuses Of All Upstart Jobs

To see a list of all known upstart jobs and their statuses, use the following command:

$ initctl list
...
console stop/waiting
mounted-run stop/waiting
acpid start/running, process 2927
checkfs.sh start/running
checkroot-bootclean.sh start/running
kmod stop/waiting
mountnfs.sh start/running
nginx stop/waiting
plymouth-stop stop/waiting
rcS stop/waiting
ufw start/running
...

It will tell you for each job if it is stopped or started.
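
To check a single job, status takes the job name:

$ initctl status nginx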

See man initctl for more details.

h/t Josh Davey

Reuse a Mac OS Installer

Once you've downloaded a Mac OS Installer, you know it takes a long time. Multiply that time and the half-hour installation by many workstations, and you have potentially days of work to upgrade them all.

We can slash that time by reusing the installer. ssh into a machine with the package, and ls /Applications/. We should see our directory, named Install\ OS\ X\ El\ Capitan.app/ (insert your OS version name). scp -r that directory to your local /Applications/, which should take a few minutes.
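
Concretely, the copy might look something like this (hostname made up):

scp -r 'you@workstation.local:/Applications/Install\ OS\ X\ El\ Capitan.app' /Applications/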

Back on your machine, find 'Install OS X El Capitan' with the Finder, and follow the installation steps.

h/t Dillon Hafer

-y apt-get?

When installing packages on Ubuntu, you may find it really tiring to constantly confirm 'yes' all the time. I know I did. And when it comes to scripting your installs... that really becomes a nuisance. Today I learned that apt-get has a -y flag:

Automatic yes to prompts;
assume "yes" as answer to all prompts and run non-interactively.

Aliasing An Ansible Host

When specifying the hosts that Ansible can interact with in the /etc/ansible/hosts file, you can put just the IP address of the host server, like so:

192.168.1.50

IP addresses are not particularly meaningful for a person to look at though. Giving it a name serves as better documentation and makes it easier to see what host servers are in play during a task.

Ansible makes it easy to alias host servers. For example, we can name our host staging like so:

staging ansible_host=192.168.1.50
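
The alias then works anywhere Ansible expects a host pattern, for example in an ad-hoc ping:

ansible staging -m ping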


Keep mina fast

One thing I've loved about mina is the speed of deployments. One way mina achieves fast deploys is by avoiding unnecessary tasks.

I recently started using webpack with rails and soon found I needed that same performance boost. Because I was using mina/rails I had a nice little macro already available for me 😁

Example of #check_for_changes_script

desc "Install npm dependencies"
task :install do
  queue check_for_changes_script \
    check: 'package.json',
    at: ['package.json'],
    skip: %[echo "-----> Skipping npm installation"],
    changed: %[
      echo "-----> #{message}"
      #{echo_cmd %[NODE_ENV=#{ENV['to']} npm install]}
    ],
    default: %[
      echo "-----> Installing npm modules"
      #{echo_cmd %[NODE_ENV=#{ENV['to']} npm install]}
    ]
end

Encrypt a zip archive

When you're using the zip CLI on your machine or a remote server and you need some extra security you can use the -e flag to encrypt a zip archive. Be sure to use a super long random password for this.

zip -e secure-files.zip ~/Documents/*.pdf

-e       
--encrypt
       
  Encrypt the contents of the zip archive using a password which is entered on
  the  terminal  in response to a prompt (this will not be echoed; if standard
  error is not a tty, zip will exit with an error).  The  password  prompt  is
  repeated to save the user from typing errors.
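
On the way back out, unzip will prompt for the same password:

unzip secure-files.zip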

Debian alternatives

Debian/Ubuntu has a number of chores for which you can configure which application gets used. Editor and pager are two of the most common, and generally I've set the environment variables for those in the past. Setting EDITOR and PAGER determines what happens when an application puts you into a pager or requires that you edit something.

Debian/Ubuntu, though, has its own system for determining which application should be used in those cases: the alternatives system.

Running ls -l /usr/bin | grep 'alternative' will show you all the programs that are linked to the /etc/alternatives directory. The /etc/alternatives dir in turn has symlinks that point to the application choices a user configures.

update-alternatives is the program which manipulates the symlinks in the /etc/alternatives directory. Running sudo update-alternatives --config editor will give you a menu from which you can choose your favorite editor.
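
To inspect an alternative without changing it, --display shows the current target and the registered candidates:

update-alternatives --display editor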

Running Out Of inode Space

Unix systems have two types of storage limitations. The first, and more common, is a limitation on physical storage used for storing the contents of files. The second is a limitation on inode space which represents file location and other data.

Though it is uncommon, it is possible to run out of inode space before running out of disk space (run df and df -i to see the levels of each). When this happens, the system will complain that there is No space left on device. Both inode space and disk space are needed to create a new file.

How can this happen? If lots of directories with lots of empty, small, or duplicate files are being created, then the inode space can be used up disproportionately to the amount of respective disk space. You'll need to clean up some of those files before you can continue.
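
One way to hunt for the directories hoarding inodes (assuming GNU find) is to count files per directory:

sudo find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head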
