Today I Learned

A Hashrocket project

111 posts by doriankarter @dorian_escplan

Treat words with dash as a word in Vim

By default, Vim treats words connected with an underscore, e.g. search_tag, as a single word text object.

This is very useful because you can then use motions and commands on them such as daw to delete a word, or ciw to change inside the word.

That is not the case for dash-separated words, e.g. search-tag. These types of words are very common in CSS class names and HTML IDs.

To make Vim treat dash-separated words as a word text object, simply add the following to your .vimrc:

set iskeyword+=-

And source it again.
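If you would rather not change word motions globally, you can limit the tweak to the filetypes where dashed identifiers are common; a small sketch (the autocmd group name is illustrative):

```vim
" Only treat dashes as keyword characters in CSS/SCSS/HTML buffers
augroup dash_keyword
  autocmd!
  autocmd FileType css,scss,html setlocal iskeyword+=-
augroup END
```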

Jump between #git hunks in Vim with vim-gitgutter

One of my favorite Vim plugins is vim-gitgutter - it shows gutter markings for changes in the current buffer when that file is tracked by git.

It looks like this:

+ = new line
- = deleted line
~ = changed line

It also comes with a few very useful commands for working with hunks, sections of changed code in your file.

To jump between hunks you can use ]c and [c. Since I don’t really use the return and backspace keys in normal mode I have mapped those instead:

nnoremap <silent> <cr> :GitGutterNextHunk<cr>
nnoremap <silent> <backspace> :GitGutterPrevHunk<cr>

This is especially useful on big files (more than a bufferful) with scattered changes.

Simple text file #encryption with Vim

Vim provides a simple text file encryption feature. To make use of it add the following to your .vimrc:

set cryptmethod=blowfish2

This will set the encryption to the strongest algorithm vim supports.

Now to use it simply start editing a file with the -x flag:

$ vim -x mysecret.txt

You will be prompted for a password, and password confirmation. After that you should be able to edit the file and save it normally.

When you open the file again with vim (even without the -x flag) you will be asked to type your password to decrypt the file. If you enter the wrong password all you’ll see is gibberish.

This is not the strongest encryption out there but it works and should suffice for most personal use cases.

NOTE: this will not work with NeoVim.

List all available extensions in #Postgres

Postgres comes packed with extensions just waiting to be enabled!

To see a list of those extensions:

select * from pg_available_extensions;

This will list each extension’s name, default_version, installed_version, and a comment, which is a one-line description of what the extension does.

Here’s an interesting one for example:

name              | earthdistance
default_version   | 1.1
installed_version | ø
comment           | calculate great-circle distances on the surface of the Earth

To enable an extension, simply call create extension on the name:

create extension if not exists earthdistance;
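To check which extensions are already enabled in the current database, you can filter the same catalog view on installed_version; a small sketch:

```sql
-- Extensions that are currently installed in this database
select name, installed_version
from pg_available_extensions
where installed_version is not null;
```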

whatis command and apropos in #linux #bash

Ever wonder what a command you are using is? Turns out linux has the answers for you!

Simply type whatis followed by the name of the command.

Examples:

$ whatis bc
bc(1)         - An arbitrary precision calculator language
$ whatis brew
brew(1)       - The missing package manager for macOS
brew-cask(1)  - a friendly binary installer for macOS

whatis searches the man page descriptions for your query.

There is also a reverse search, which searches the descriptions of commands. For example say you are looking for a calculator:

$ apropos calculator
bc(1)         - An arbitrary precision calculator language
dc(1)         - an arbitrary precision calculator

h/t this tweet by Peter Cooper

Stop #bash script on error #linux #zsh

If you are writing a procedural bash script, you may want to stop execution if one of the steps errored out.

You can write error handling for each step, but that can get quite verbose and make your script hard to read, or you might even miss something.

Fortunately bash provides another option:

set -e

Simply place the above line at the top of your script and bash will halt the script as soon as any command returns a non-zero exit code.

Caveats: this will not work in all cases, for example it does not work for short circuited commands using &&/||.

If you also want the script to halt when a command in the middle of a pipeline fails, add the pipefail option (not supported by every shell; run set -o | grep pipefail to check yours):

set -e -o pipefail

If one of your commands always returns a non-zero exit code and that’s expected, you can temporarily suspend set -e around it:

set +e
your_command_goes_here
set -e

At this point I consider it a best practice to include this statement in every script I write.
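To see the halt-on-error behavior in action, here is a minimal self-contained sketch (the script path is illustrative):

```shell
# Write a throwaway script that demonstrates the halt
cat > /tmp/demo_set_e.sh <<'EOF'
#!/usr/bin/env bash
set -e
echo "step 1"
false           # returns a non-zero exit code: execution stops here
echo "step 2"   # never reached
EOF

bash /tmp/demo_set_e.sh > /tmp/demo_out.txt || status=$?
cat /tmp/demo_out.txt              # prints only "step 1"
echo "exit status: ${status:-0}"   # exit status: 1
```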

List filenames of multiple filetypes in project

Ag (aka The Silver Searcher) is an amazing piece of software. It supports searching by file type (like Ack) and comes prepackaged with many predefined file types.

Using this feature you can list all files of a specific type in your project. For example say we want to list all Ruby and JavaScript files:

ag --ruby --js -l

Ag has an added benefit over Ack: it ignores gitignored files, so you only get the files that matter (and not stuff from node_modules, etc.).

To see what filetypes Ag supports:

ag --list-file-types

The list is pretty extensive! Unlike Ack however, there is currently no way to add new file types or extend the list.

Make console.log stand out with custom css style

I know your browser console is full of messages because you are debugging something, and that creates a lot of noise. Now you are adding a new console.log, and you need it to stand out above the rest.

Maybe you are like Facebook and just want to warn your users against pasting code into the console as part of social engineering attacks.


To style a console.log message use the %c interpolation and pass it a css style. e.g.

console.log('%c%s', 'color:red;font-size:5em', alert)

In the example above %s means interpolate the given object into the output string.


Compatibility: tested to work on Firefox, Chrome, and Safari.
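For a standalone warning banner, several %c directives can be used, each styling the text that follows it; a sketch with illustrative wording and styles (note that the CSS only renders in browser consoles):

```javascript
// Red banner followed by normally-styled detail text
const message = '%cStop!%c Pasting code here can give attackers access to your account.';
const bannerStyle = 'color:red;font-size:2em;font-weight:bold';
const normalStyle = 'color:inherit;font-size:1em';

console.log(message, bannerStyle, normalStyle);
```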

h/t Dillon Hafer

Storing recurring schedules in #Rails + #Postgres

If you have a scheduling component to your Rails application you may need to store the day of week and time of day in the database.

One way to store the day of week is to use an integer column with a check constraint that will check that the value is between 0 and 6.

create table schedules (
  id serial primary key,
  day_of_week integer not null check(day_of_week in (0,1,2,3,4,5,6)),
  beg_time time not null,
  end_time time not null
);

Then when you read it back from the database and need to convert it back to day name you can use Date::DAYNAMES. e.g.:

[2] pry(main)> require 'date'
=> true
[3] pry(main)> Date::DAYNAMES
=> ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
[4] pry(main)> Date::DAYNAMES[0]
=> "Sunday"
[5] pry(main)>

If you need to store time of day as entered (in a time without timezone column, as specified above), check out the wonderful Tod gem by Jack Christensen.
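Date::DAYNAMES can also map the other way when converting user input back to the stored integer; a quick stdlib-only sketch:

```ruby
require 'date'

# integer (as stored in the day_of_week column) -> day name
day_name = Date::DAYNAMES[1]                 # => "Monday"

# day name (as submitted by a form, say) -> integer for the column
day_number = Date::DAYNAMES.index('Monday')  # => 1
```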

More useful Homebrew searches #macOS #homebrew

Homebrew, the third-party package manager on macOS, allows searching for packages by name, but the resulting list only contains package names. That’s not always very useful, particularly when you are not sure what you are looking for.

To get the package description along with the package name simply add --desc to your brew search command.

For example, let’s look for a library for performing file diffs with color highlighting:

$ brew search --desc diff
apgdiff: Another PostgreSQL diff tool
cdiff: View colored diff with side by side and auto pager support
cern-ndiff: Numerical diff tool
colordiff: Color-highlighted diff(1) output
cppad: Differentiation of C++ Algorithms
dhex: Ncurses based advanced hex editor featuring diff mode and more
diff-so-fancy: Good-lookin' diffs with diff-highlight and more
...

You can also search using regex in both the description and name of the package as long as you supply the --desc option:

$ brew search --desc /[cC]olor.*[dD]iff/
cdiff: View colored diff with side by side and auto pager support
colordiff: Color-highlighted diff(1) output
icdiff: Improved colored diff

Joining URI parts in Elixir

Elixir 1.3 introduced a standard way to join URIs.

For example, say we have a base URI for an API: https://api.hashrocket.com and different endpoints on that URI: events, developers, applications.

To join the URI into one properly formatted string:

def endpoint_uri(endpoint) do
  "https://api.hashrocket.com"
  |> URI.merge(endpoint)
  |> URI.to_string()
end

# then call it

endpoint_uri("events") # => "https://api.hashrocket.com/events"

URI.merge accepts both strings and URI structs as the first argument, so you can easily continue adding URI parts to the pipeline, including query params:

"https://test.com"
|> URI.merge("events") 
|> URI.merge("?date=today") 
|> URI.to_string()

# => "https://test.com/events?date=today"

Magically insert `iex -S` in front of a command

Oftentimes you need to execute an Elixir command with iex to enable pry breakpoints.

I found that I was doing a lot of fumbling in zsh to go back to the previous command, jump to the beginning of it and type out iex -S.

Since I like to automate repetitive processes, I came up with this:

bindkey -s "^Xi" "^[Iiex -S ^[A"

Dump this line in your .zshrc or .bashrc and then all you have to do is press Ctrl+x followed by i to insert iex -S in front of the previously run command.


Magic. 🎩

Download all of humble bundle books in parallel

Humble Bundle is a great site which offers technical book bundles. The problem is that they present the user with a huge list of links for all the different formats and it is a tedious task to right click each link and save it to your hard drive.

In order to solve this you can open the Developer Tools while on the download page and paste the following:

var pattern = /(MOBI|EPUB|PDF( ?\(H.\))?|CBZ|Download)$/i;
var nodes = document.getElementsByTagName('a');
var downloadCmd = '';
for (i in nodes) {
    var a = nodes[i];
    if (a && a.text && pattern.test(a.text.trim()) && a.attributes['data-web']) {
        downloadCmd += 'wget --content-disposition "' + a.attributes['data-web'].value + "\"\n";
    }
}
var output = document.createElement("pre");
output.textContent = downloadCmd;
document.getElementById("papers-content").prepend(output);

credit: https://gist.github.com/graymouser/a33fbb75f94f08af7e36

This will add a pre tag to the page with a bunch of wget commands. Go ahead and copy those to your clipboard.

Next, install GNU Parallel (brew install parallel), save the contents of your clipboard into a file, for example download_jobs, and run the following command:

parallel -j 4 < download_jobs

Replace 4 with the number of cores you have on your machine.

Then sit back and watch your directory get populated with files.

Surround every line in a file using sed

To transform every line in a file you can use the built-in sed utility.

For example given a file like this:

dkarter/backpack
junegunn/fzf
junegunn/fzf.vim
junegunn/vim-peekaboo
junegunn/gv.vim
terryma/vim-multiple-cursors
scrooloose/nerdtree
dyng/ctrlsf.vim
haya14busa/incsearch.vim
killphi/vim-legend
neomake/neomake

If we want to surround each line with Plug '$content_of_line' we can run the following command:

sed -e "s/\(.*\)/Plug '\1'/" .vimbundle.local

Output:

Plug 'dkarter/backpack'
Plug 'junegunn/fzf'
Plug 'junegunn/fzf.vim'
Plug 'junegunn/vim-peekaboo'
Plug 'junegunn/gv.vim'
Plug 'terryma/vim-multiple-cursors'
Plug 'scrooloose/nerdtree'
Plug 'dyng/ctrlsf.vim'
Plug 'haya14busa/incsearch.vim'
Plug 'killphi/vim-legend'
Plug 'neomake/neomake'

If the result is what we expected, we can add the -i flag to edit the file in place, updating it with our changes:

sed -i -e "s/\(.*\)/Plug '\1'/" .vimbundle.local
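If you want the safety net of a backup while editing in place, a suffix can be attached to -i; a self-contained sketch with a scratch file (the filename is illustrative):

```shell
# Build a sample plugin list, then surround each line in place, keeping a .bak backup
cd /tmp
printf "dkarter/backpack\njunegunn/fzf\n" > plugins.txt
sed -i.bak -e "s/\(.*\)/Plug '\1'/" plugins.txt
cat plugins.txt
# Plug 'dkarter/backpack'
# Plug 'junegunn/fzf'
```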

Pass name of function as a block

In Ruby it is possible to pass a method in place of a block to a function expecting a block, by passing &method(:method_name) as the last argument.

For example:

def blocky_fun(foo)
  bar = "hello #{foo}"
  yield(bar)
end

blocky_fun('blocky', &method(:puts))

The result of the above would be that the puts function will be called with “hello blocky” as the first argument.
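The same trick works with your own methods and with enumerators expecting blocks; a small sketch (exclaim is a made-up helper):

```ruby
def exclaim(word)
  "#{word}!"
end

# method(:exclaim) builds a Method object; & converts it to a block
results = %w[foo bar].map(&method(:exclaim))
# => ["foo!", "bar!"]
```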

Prettier ignore! 💁 #javascript

Prettier is super helpful but sometimes you just want to format things your way if the output of prettier is not very readable.

To solve this, prettier provides a comment that you can put above any “node” in the resulting javascript AST.

For example:

BEFORE (w/ prettier)

const street_number = this.findAddressComponent(
  resultObj,
  'street_number'
).long_name;
const route = this.findAddressComponent(resultObj, 'route').long_name;
const zip_code = this.findAddressComponent(resultObj, 'postal_code')
  .long_name;
const city = this.findAddressComponent(resultObj, 'locality').long_name;
const state = this.findAddressComponent(
  resultObj,
  'administrative_area_level_1'
).short_name.toUpperCase();

The above is a result of prettier formatting and is not very readable or pretty - so I would need to turn it into a single AST node and put the prettier-ignore comment over it:

AFTER (w/ prettier-ignore)

// prettier-ignore
const address = {
  street_number: this.findAddressComponent(resultObj, 'street_number').long_name,
  route: this.findAddressComponent(resultObj, 'route').long_name,
  zip_code: this.findAddressComponent(resultObj, 'postal_code').long_name,
  city: this.findAddressComponent(resultObj, 'locality').long_name,
  state: this.findAddressComponent(resultObj, 'administrative_area_level_1').short_name.toUpperCase(),
}

Now the address components will be accessible from the address object (e.g. address.route) and while still not the prettiest, it is a lot more readable IMO.

Enable history in IEX through #erlang (OTP 20) ⏳

If you are using the latest version of Erlang, OTP 20 now ships with shell history, so you can use Ctrl-p / Ctrl-n or the up/down arrow keys.

The shell history is turned off by default though, so you will have to turn it on by adding the following to your .zshrc/.bashrc etc.

export ERL_AFLAGS="-kernel shell_history enabled"

Once you do that make sure to source your bash config file or open a new window.

Every subsequent iex session will now have shell history. 🚀

"experience uses an unsupported version of Expo"


If this happens when developing a React Native app with Expo and trying to test it in the iOS Simulator, it means the version of Expo on the iOS Simulator is out of date.

To fix that try the following:

  1. Quit the Expo app on the simulator
  2. Uninstall the Expo app on the simulator
  3. Launch the app again in the simulator from the Expo XDE

This will cause the Expo app on the simulator to reinstall with the latest version.

Move window (tab) in tmux

To move the current window one position to the left:

tmux-prefix, :swap-window -t -1

To move it one position to the right:

tmux-prefix, :swap-window -t +1

You can also bind a key to that command:

bind-key S-Left swap-window -t -1
bind-key S-Right swap-window -t +1

Now it will be tmux-prefix, Shift + Left and tmux-prefix, Shift + Right

Require local package in mix.exs

As you write your Elixir application, it is recommended to split parts of it into smaller applications (you can call them micro-services if you want to be buzzword compliant).

You don’t, however, need to publish those dependencies to the Hex package manager in order to load them; instead you can use the path option when defining a dependency.

In this example we have our main application Foo and in the directory above it we have an application called Bar.

To make the Bar module available in Foo we can do it like so:

mix.exs

defp deps do
  [
    {:bar, path: "../bar"},
  ]
end

Module attribute constants nil in Elixir

Module attributes in Elixir (@something) can be used as constants and assigned a value; however, one must make sure that the value assigned to the constant is available at compile time.

For example, when using a module attribute that loads an environment variable, if the value is not available at compile time it will resolve to nil.

@salt System.get_env("AUTH_SALT")

So either export the environment variables before compiling, or don’t use module attributes, instead you can use a function:

defp salt do
  System.get_env("AUTH_SALT")
end

Consider:

  • The downside to using a function is that it will be re-evaluated each time.
  • The downside of exporting env-vars before compile is that you might forget, and your app will crash in production. You can circumvent that by writing a script for compilation.
  • You can also call System.get_env from your config.exs but make sure to run mix clean after doing so since the compiler seems to cache the config compilation.

Convert nested JSON object to nested OpenStructs

If you are parsing a nested JSON string such as this:

{
  "vendor": {
    "company_name": "Basket Clowns Inc",
    "website": "www.basketthecloon.com"
  }
}

And want to access it with dot notation, simply doing:

OpenStruct.new(JSON.parse(json_str))

will not do!

Turns out there is a cool option on JSON.parse called object_class:

JSON.parse(json_str, object_class: OpenStruct)

Now you can access the resulting object with dot notation all the way down:

obj.vendor.website #=> "www.basketthecloon.com"
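Putting the whole thing together as a runnable sketch with the stdlib json and ostruct libraries:

```ruby
require 'json'
require 'ostruct'

json_str = '{"vendor":{"company_name":"Basket Clowns Inc","website":"www.basketthecloon.com"}}'

# object_class: OpenStruct makes every parsed JSON object an OpenStruct
obj = JSON.parse(json_str, object_class: OpenStruct)
obj.vendor.website # => "www.basketthecloon.com"
```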

Set filetype/settings for a specific file in Vim

Given a file with a weird extension but a known underlying filetype, e.g. a YAML file with a .config extension, it is possible to force Vim to set the filetype to yaml for that file.

At the top/bottom of the file add a comment (using the filetype's comment syntax; for YAML it is #):

# vi: ft=yaml
baz:
  foo: 'bar'

Re-open the file and vim will automatically set the filetype to yaml for that file.

This can also be used to set other settings such as shiftwidth, tabstop, etc.

I’ve been using this trick for a while but keep forgetting the exact syntax, usually typing # vim: instead of # vi:. Hopefully my wording will make it more easily duckduckgoable.

Set foreign key to null on delete in #Postgres

Say you have a table with a foreign key:

posts
------
id serial primary key
...
primary_image_id references images (id)

images
------
id serial primary key
post_id references posts (id)

If you attempted to delete an image from the images table while that image’s ID is referenced in one of the posts you will receive an error preventing you from deleting the image. Postgres is trying to protect your data integrity by preventing foreign keys that point to records which don’t exist.

To prevent errors and allow deleting records from the images table freely you must define the on delete strategy on the posts table.

One of the options available to you is set null so if you are creating a new table it will look like this:

create table posts (
  id serial primary key,
  -- ...
  primary_image_id int references images (id) on delete set null
);

Now if the primary image is deleted it will set the primary_image_id to null.

This is an alternative to on delete cascade, which in this case would delete the entire post from the posts table and is not what we want.

Read the full documentation under ‘5.3.5. Foreign Keys’

Serve static files/directories in Phoenix

Phoenix will by default serve some files and directories from priv/static: specifically css/, fonts/, images/, js/, favicon.ico and robots.txt. However, simply creating a new directory in priv/static will not make Phoenix serve it. You need to add it explicitly.

Say your project is called ChicagoElixir; you will need to go to your endpoint configuration, typically in lib/chicago_elixir/web/endpoint.ex. There you will find the following configuration:

  plug Plug.Static,
    at: "/", from: :chicago_elixir, gzip: false,
    only: ~w(css fonts images js favicon.ico robots.txt)

Simply add the new folder or file name to the list in only and restart your Phoenix server.

Run Prettier on all #JavaScript files in a dir

If you are like me you must like formatters such as Prettier, which probably prompted you to set your editor to auto-format the file on save.

That’s great for new projects but when working on an existing project, every file you touch will have a huge diff in git that can obscure the real changes made to the file.

To solve that you must run prettier on all your javascript files as an independent commit. You can do it with the following command:

find ./src/**/*.js | xargs prettier --write --print-width 80 --single-quote --trailing-comma es5

The flags after the prettier are all my personal preferences except for --write which tells prettier to write the file in place.

Note 1: Make sure you have all the files you are about to change committed to source control so that you can check them out if this did not go well.

Note 2: When committing this change it would be a good idea to use git add -p and go through the changes one by one (which is always a good idea…)

Note 3: To dry run and see which files will be changed run the find ./src/**/*.js by itself.

Fuzzy awesome copy to sys clipboard w/yank & fzf

Say you want to copy the pid of a process to the system clipboard: you could run ps ax, maybe grep the result, grab your trusty mouse, try to select the value, and hit ⌘ + c.

Or you can use the amazing fuzzy finder FZF (brew install fzf) in combination with Yank (brew install yank).

ps ax | fzf | yank

Now simply start typing the name of the process. When you press return you will get the columns broken down into a selectable prompt - choose one and press return. It is now in your system clipboard.


This will work with any column spaced or even multiline response. Try running ps ax | yank.

Code splitting with Webpack 2 and Babel

Webpack 2 provides the ability to split your code into smaller files that are lazy loaded during runtime as they are needed.

When I first learned about this feature I thought it would be very intelligent in detecting which parts of the code are using a certain module and split all my modules into separate files automatically. That’s not really the case.

If you want to have Webpack split your code and lazy load it you need to explicitly call import in your code. For example:

import Foo from './foo';

class Bar {
  baz() {
    Foo.someMethod(); // this will not be split and lazy loaded
    
    import('./lazy_module').then(function(lazyModule) {
      console.log(lazyModule.sayHello());
    }).catch(function(err) {
      console.log('Failed to load lazy module', err);
    });
  }
}

To have this work you need to install the Syntax Dynamic Import library

Then edit your .babelrc:

{
  "presets": [["es2015", { "modules": false }]],
  "plugins": ["syntax-dynamic-import"]
}

The “modules”: false part is really important. It instructs Babel not to parse the imports itself, letting Webpack 2’s native import parsing do the work. This was the tricky part.

There’s more to that and it keeps changing so I recommend visiting this documentation https://webpack.js.org/guides/code-splitting-import/

Git Garbage Collection - optimize local repository

As you work with git a lot of garbage gets accumulated in your .git folder such as file revisions and unreachable objects.

On large repositories and long running projects this negatively affects both operating performance and disk space utilization.

To clean up the garbage git provides a command:

git gc

This will not impact the remote repository and will only optimize the local copy so it is safe to run on any git repo you might have. (In fact this operation is already run for you automatically after some git commands that leave behind too many loose objects)

If you want a deeper cleaning (which you should run every few hundred changesets or so), run:

git gc --aggressive

This will take longer but will provide better results.

To learn more:

git help gc
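To see what gc actually buys you, git can report loose-object counts and pack sizes; a self-contained sketch in a scratch repository (paths and identities are throwaway):

```shell
# Watch gc pack loose objects in a scratch repository
repo=$(mktemp -d)
cd "$repo"
git init --quiet
echo hello > file.txt
git add file.txt
git -c user.name=demo -c user.email=demo@example.com commit --quiet -m "init"

git count-objects -v   # a few loose objects, nothing packed yet
git gc --quiet
git count-objects -v   # count: 0 -- the objects now live in a pack
```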

Yarn global

Just like npm install -g, Yarn provides yarn global add. I found, however, that it did not register executable binaries/CLIs right out of the box.

To fix this add the following to your .zshrc/.bashrc:

# set yarn binaries on path
export PATH="$HOME/.config/yarn/global/node_modules/.bin:$PATH"

Now all binaries installed from yarn should be on your system PATH.

Save disk space with Yarn

Yarn is a fast, reliable and secure replacement for npm. Those are all important attributes in my book for a tool I use daily for development, but that’s not all Yarn offers.

The node_modules directory is often a resource consuming hog both in space and number of files. It is full of junk such as test files, build scripts and example directories.

To clean some of those files Yarn offers the clean command. To run it:

yarn clean

Once run, the command will create a .yarnclean file with patterns of the types of files to be deleted. By default yarn clean will delete the following:

# test directories
__tests__
test
tests
powered-test

# asset directories
docs
doc
website
images
assets

# examples
example
examples

# code coverage directories
coverage
.nyc_output

# build scripts
Makefile
Gulpfile.js
Gruntfile.js

# configs
.tern-project
.gitattributes
.editorconfig
.*ignore
.eslintrc
.jshintrc
.flowconfig
.documentup.json
.yarn-metadata.json
.*.yml
*.yml

# misc
*.gz
*.md

With this file in your project root directory, Yarn will run the cleaning task after every install and report how much space it saved you.

Clean untracked files in Git

Given I am a developer
And I am working on a new branch in an existing project
And during one of my commits I introduced a few files/folders
And those files/folders are ignored (listed in .gitignore)
And those files/folders are automatically generated (e.g. node_modules/ webpack_bundle.js)
When I switch back to the main branch
Then I see those files
And I don’t want to…

If you find yourself in the above situation, you may want to clean your untracked files. Git provides a command for that: git clean.

This command comes with a way to see which files/folders are going to be deleted (DRY RUN):

git clean -n

You may notice that the above command does not show any untracked directories. To add directories to that list use the -d switch:

git clean -dn

Alternatively you may choose to only remove files/dirs that are in .gitignore with the -X option:

git clean -X -dn

If you are ready to take action use the -f switch and remove the -n switch:

git clean -fd
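Here is a self-contained sketch of the whole dry-run-then-delete flow in a scratch repository (paths and identities are throwaway):

```shell
# Scratch repo with one tracked file and one untracked generated directory
repo=$(mktemp -d)
cd "$repo"
git init --quiet
echo code > app.js
git add app.js
git -c user.name=demo -c user.email=demo@example.com commit --quiet -m "init"
mkdir node_modules
echo junk > node_modules/junk.js

git clean -dn   # dry run: reports "Would remove node_modules/"
git clean -fd   # actually removes it
ls              # only app.js remains
```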

Set persistent font in MacVim

Like many, I have MacVim installed through Homebrew Cask. When I first started using MacVim, I had to change the font to a powerline supported font so that my Airline looks spiffy.

To do that I went to Edit -> Font -> Show Fonts and selected a font.

Unfortunately this setting gets wiped out with each update of MacVim; since it updates often (which is great), having to set the font over and over is not.

To have your font setting persisted in MacVim add this to your .gvimrc:

set guifont=Source\ Code\ Pro\ for\ Powerline:h24

You can set whatever font you like; just make sure to escape the spaces with a backslash.

To get powerline (and Airline) patched fonts go here: https://github.com/powerline/fonts

Map Caps Lock to Escape in macOS Sierra #seil

macOS Sierra was made available to the public yesterday and many of us early adopters rushed to install and test it out.

One of the things that broke and really affected my workflow was that Seil, the program I use to remap Caps Lock to ESC, no longer works. Its sister application Karabiner also stopped working.

Fortunately there’s a solution available from the developer of Karabiner and Seil. It’s a little more complicated than usual:

  1. Download and install Karabiner-Elements:

    https://github.com/tekezo/Karabiner-Elements

  2. Karabiner Elements will install a virtual keyboard driver; you will probably want to disable the default capslock behavior for the new virtual driver.

  3. Use your favorite editor and edit the following file (create it if it does not exist):

    vim ~/.karabiner.d/configuration/karabiner.json

    And add the following to it:

    {
        "profiles": [
            {
                "name": "Default profile",
                "selected": true,
                "simple_modifications": {
                    "caps_lock": "escape"
                }
            }
        ]
    }

That’s it. Just make sure you have Karabiner Elements running.

Remove both scrollbars from MacVim

If you use MacVim you may encounter the gray Mac OS scrollbar on the right side.

When you split the window you may encounter two scrollbars, one on each side.

I find that to ruin the look of MacVim, especially with a dark colorscheme (I use Dracula).


To remove only the left one use

set guioptions=r

This will tell vim to always show the right scrollbar only. To remove only the right one use

set guioptions=l

To remove all scrollbars, remove everything after the equal sign

set guioptions=


Add this to your vimrc for a consistent experience.

Expecting change with RSpec #rails #testing #rspec

Usually when I try to test if a value has changed after a method has been called I will assert the initial value as one expectation followed by the action that changes it, and finally assert the value has changed.

For example this test will check if a user’s bad login attempts are incremented when the user.record_bad_login! method is called:

describe '#record_bad_login!' do
  let(:user) { FactoryGirl.create(:user) }

  it 'increments the bad login attempts count' do
    expect(user.failed_login_attempts).to eq(0)
    user.record_bad_login!
    expect(user.failed_login_attempts).to eq(1)
  end
end

RSpec provides a more straightforward way to one-line this type of test while making it more declarative:

describe '#record_bad_login!' do
  let(:user) { FactoryGirl.create(:user) }

  it 'increments the bad login attempts count' do
    expect { user.record_bad_login! }.to change { user.failed_login_attempts }.from(0).to(1)
  end
end

Read more here: https://www.relishapp.com/rspec/rspec-expectations/v/2-0/docs/matchers/expect-change

Toggle CursorLine, CursorColumn w/Vim Unimpaired

Vim Unimpaired plugin by Tim Pope ships with a shortcut for quickly toggling CursorLine and CursorColumn. This is particularly useful on large files with plenty of syntax highlighting.

Turning CursorLine/CursorColumn off can speed up buffer navigation by reducing the blocks being re-rendered on the screen, making Vim snappy again.

To use this shortcut type cox from NORMAL mode and Vim will toggle CursorLine and CursorColumn on and off.

h/t Chris Erin


Postgres age function #postgresql

If you want to select records according to a specific interval, like Rails’ ActiveSupport 1.year.ago, PostgreSQL has you covered.

The age function returns an interval type and can be used in queries like so:

select * from sometbl where age(created_at) > '1 year';

By default the age function calculates the age relative to the current date. If you want to calculate the age relative to a different time you can simply pass in a second argument:

psql> select age(timestamp '2016-08-28', timestamp '1957-06-13');
           age
-------------------------
 59 years 2 mons 15 days
(1 row)

You can also use the make_interval function to create intervals using numeric parameters:

psql> select make_interval(1,2,3,4,5);
         make_interval
--------------------------------
 1 year 2 mons 25 days 05:00:00
 (1 row)

In a query it can be used like so:

select * from sometbl where age(created_at) > make_interval(1);

to select rows with created_at older than one year from now.
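make_interval also accepts named arguments, which read better than counting positional parameters; a sketch of the same one-year filter (sometbl is the hypothetical table from above):

```sql
-- Named notation: spell out only the parameters you care about
select * from sometbl where age(created_at) > make_interval(years => 1);
```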

To read more about the age function and other cool date/time manipulations, see the official documentation.

h/t Jack Christensen

Verify downloaded files from the web #security

If you download a file from the web on public WiFi and want to run it on your machine, you might want to check that the file has not been tampered with by a man-in-the-middle attack, or that the file host has been breached.

The easiest way to do this is to check the published md5 or sha-1 hash for that file (you can do that via your phone if you want to be extra secure). Not every package publishes that, but if they do it will be on their website, usually next to the download link.

To verify the file you will need to hash the file you downloaded using openssl. For example:

 $ openssl sha1 Kali-Linux-2016.1-vm-amd64.7z
 SHA1(Kali-Linux-2016.1-vm-amd64.7z)= 2b49bf1e77c11ecb5618249ca69a46f23a6f5d2d

Which matches the sha-1 hash published on the site.

If you want to check md5, simply replace sha1 in the command with md5.

Treat null as if it is a known value #postgresql

When you query a table sometimes you want to check if a nullable field is not equal to a value. For example:

select * from sometable where afield != 'avalue';

However the query above will exclude rows where afield is null, so you would typically add that as an additional condition:

select * from sometable where afield is null or afield != 'avalue';

When you are doing it once it may be ok but as queries get bigger this makes the query messy and harder to read. Fortunately Postgres offers a more idiomatic way to check if a value does not equal something, including null values: is distinct from and is not distinct from.

select * from sometable where afield is distinct from 'avalue';

This query will return all the rows where afield is null or anything but avalue. Conversely:

select * from sometable where afield is NOT distinct from (select x from y limit 1);

will return all the values that are equal to the result of the subquery above and is useful when the result of the subquery could be null.

h/t Jack Christensen

Original docs: https://wiki.postgresql.org/wiki/Is_distinct_from

Grep through compressed (gzipped) log files

The logrotate linux utility automatically compresses your ever-growing production log files.

If you encountered an error and wanted to search the history, including all compressed logs, you may have considered unzipping all of them into a directory and running grep there.

Fortunately linux offers a more idiomatic way for grepping gzipped files: zgrep.

From the manual:

zgrep, zegrep, and zfgrep act like grep, egrep, and fgrep, respectively, but accept input files compressed with the compress(1) or gzip(1) compression utilities.

h/t Jack Christensen

See how long a process has been running on #linux

If you started a long-running process and want to know how long it has been “on the run” so to speak, you can use ps with the -e switch (all processes) and the -o switch to choose output columns, including the elapsed time, like so:

ps -eo pid,cmd,etime

This will yield something like:

112 [aws/0]                  2-10:27:00
114 [aws/1]                  2-10:27:00
115 [aws/2]                  2-10:27:00
123 [aws/3]                  2-10:27:00
  ...

Which means that process aws has been running for 2 days, 10 hours and 27 minutes.

You can of course pipe the result to grep:

ps -eo pid,cmd,etime | grep aws

Bundle in parallel using full CPU powa!!! #rails

Don’t you wish there was a faster way to install your bundled gems for a project? Especially when cloning an existing Rails application from GitHub?

![more powa](https://i.imgur.com/HFgXC3H.png)

It turns out that since Bundler v1.5, Bundler supports Parallel Install.

To run bundler with parallel install use the --jobs option or -j for short.

Assuming your computer has 4 cores you can try

$ bundle install --jobs 4
$ # or
$ bundle install -j4

Finally if you want to set bundler to always use parallel install you can run this command:

bundle config --global jobs 4

The bundler team has seen speedups of 40-60%. That’s amazing!

h/t Micah Cooper && bundler documentation