Today I Learned

A Hashrocket project

251 posts by chriserin @mcnormalmode

Checking that an association is loaded

Ecto will NOT load associations automatically; that is something you must do explicitly. Sometimes you might expect an association to be loaded, but because of the code path travelled, it is not.

You can check to see if an association is loaded with Ecto.assoc_loaded?

case Ecto.assoc_loaded?(post.channel) do
  true -> IO.puts("yep, it's loaded")
  false -> IO.puts("you do not have the data")
end

Legacy Refs vs Ref Callbacks

React provides a way to get references to DOM elements that it renders through JSX. Previously, this was done through what are now called legacy refs:

componentWillUpdate() {
  this.refs.thing.tagName == "DIV";
}

render() {
  return (
    <div ref="thing"/>
  );
}

You assign the element a string identifier, and React keeps a refs hash up to date with a reference to the DOM node for that element.

The React docs say this about the previous system:

We advise against it because string refs have some issues, are considered legacy, and are likely to be removed in one of the future releases.

The new system uses callbacks:

render() {
  return (
    <div ref={(div) => { if (div) { console.log('tag name:', div.tagName); } }} />
  );
}

This callback is called when the component mounts with a reference to the dom element as an argument. Importantly, when the component unmounts the callback is called again but this time with null as an argument.
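
A common follow-up pattern (my own sketch, not from the docs quoted above) is to stash the reference on the instance so other methods can use it; the null call on unmount then conveniently clears it:

render() {
  return (
    <div ref={(div) => { this.thing = div; }} />
  );
}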

Stacking HEREDOCS

I ran across this in a co-worker’s code yesterday. It made my head hurt for a second. Let’s say you need a function that takes 4 lines as arguments and outputs each of those lines to stdout.

def print_stanza(line1, line2, line3, line4)
  puts line1, line2, line3, line4
end

print_stanza(<<-LINEA, <<-LINEB, <<-LINEC, <<-LINED)
  You can get all hung up
LINEA
  in a prickle-ly perch.
LINEB
  And your gang will fly on.
LINEC
  You'll be left in a Lurch.
LINED

The second HEREDOC starts where the first one ends, the third HEREDOC starts where the second one ends, etc. It’s all valid Ruby and starts to make sense if you look at it long enough.

Most editors I’ve seen haven’t been able to highlight the syntax correctly though. Sorta leaves you in a Lurch.

H/T Brian Dunn

.babelrc ignores babel settings in package.json

There are three ways to provide babel configuration listed on the usage page for babelrc. You can provide a .babelrc file, you can place a babel section in your package.json or you can declare configuration with an env option.

If you have both a .babelrc in your root dir AND a babel section in your package.json file then the settings in the package.json will be completely ignored.

This happened to me when I needed to declare an extra plugin for babel that create-react-app did not provide. I ejected from create-react-app, added a .babelrc that declared the extra plugin, and this broke the build for my app. The babel configuration for an ejected create-react-app is in the package.json file.
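
For a concrete sketch (the preset and plugin names here are placeholders, not the exact config from my app), with both of these present only the .babelrc is read:

.babelrc (this file wins)

{
  "presets": ["react-app"],
  "plugins": ["some-extra-plugin"]
}

package.json (the babel section is silently ignored)

{
  "babel": {
    "presets": ["react-app"]
  }
}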

Convert to BigDecimal with `to_d` w/ActiveSupport

Ruby provides the BigDecimal() method to convert values to BigDecimal.

> require 'bigdecimal'
> BigDecimal("123.45")
#<BigDecimal:56236cc3cab8,'0.12345E3',18(18)>

But you can’t convert a float without specifying a precision:

> BigDecimal(123.12)
ArgumentError: can't omit precision for a Float.
> BigDecimal(123.12, 5).to_s
"0.12312E3"

When using Rails, or more specifically with ActiveSupport required, you can use the to_d method to convert to BigDecimal.

> require 'active_support'
> 123.to_d
#<BigDecimal:55ebd7800ea8,'0.123E3',9(27)>
> "123".to_d
#<BigDecimal:55ebd7800ea8,'0.123E3',9(27)>

And for floats, to_d provides a default precision of Float::DIG + 1, which for me is 16. DIG is described as:

The number of decimal digits in a double-precision floating point.

> 123.45.to_d
#<BigDecimal:55ebd2d6cfb8,'0.12345E3',18(36)>
> 123.45.to_d.to_s
"123.45"

Note that to_s in ActiveSupport outputs a more human-readable number. Also note that nil is not convertible with to_d:

> require 'active_support'
> nil.to_d
NoMethodError: undefined method `to_d' for nil:NilClass
> BigDecimal(nil)
TypeError: no implicit conversion of nil into String

`requestAnimationFrame` should call itself

This style of animation is useful when you’re making small changes via JavaScript. When you pass requestAnimationFrame a callback, the callback is called before the next browser repaint, or about 60 times a second. To make sure that you’re getting 60 callbacks a second, you must call requestAnimationFrame from within your callback.

function animate() {
  makeSomeSmallChangeToHtmlOrCss();
  requestAnimationFrame(animate);
}

This is a recursive function, so without an exit condition, it will recurse infinitely.
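
A minimal sketch of an exit condition (the two-second cutoff and the start bookkeeping are my own additions, and makeSomeSmallChangeToHtmlOrCss is the placeholder from above):

let start = null;

function animate(timestamp) {
  if (start === null) start = timestamp;
  makeSomeSmallChangeToHtmlOrCss();
  // keep scheduling frames until roughly two seconds have passed
  if (timestamp - start < 2000) {
    requestAnimationFrame(animate);
  }
}

requestAnimationFrame(animate);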

H/T Brian Dunn

Upgrading npm when using `asdf` with `reshim`

The version of npm that comes with nodejs when installed with asdf may not be the latest. In my case I had npm 5.3.0 installed and the newest version is 5.4.2. I upgraded npm with npm install -g npm and saw output that made me think everything installed successfully, but when I ran npm -v I still got 5.3.0.

The answer is to use asdf’s reshim command.

> asdf help reshim
asdf reshim <name> <version>    Recreate shims for version of a package

I ran the following commands:

> npm -v
5.3.0
> asdf reshim nodejs
> npm -v
5.4.2

And now I have the latest version and everything is great!

What you had before you saved w/`previous_changes`

When you set values on an ActiveRecord object, the pending changes are available via changes. Once you save, however you save, those changes are wiped out, but you can still see what changed in that save with previous_changes.

> thing = Thing.create({color: 'blue', status: 'active'})
> thing.color = 'red'
> puts thing.changes
{"color" => ['blue', 'red']}
> puts thing.previous_changes
{}
> thing.save
> puts thing.changes
{}
> puts thing.previous_changes
{"color" => ['blue', 'red']}

Call a program one time for each argument w/ xargs

Generally, I’ve used xargs in combination with programs like kill or echo, both of which accept a variable number of arguments. Some programs only accept one argument.

For lack of a better example, let’s try adding 1 to each of 10 numbers. In shell environments you can add with the expr command.

> expr 1 + 1
2

I can combine this with seq and pass the piped values from seq to expr with xargs.

> seq 10 | xargs expr 1 + 
expr: syntax error

In the above, instead of adding 1 to 1 and then 1 to 2, it tries to run:

expr 1 + 1 2 3 4 5 6 7 8 9 10

Syntax Error!

We can use the -n flag to ensure that only one argument is applied at a time and the command runs 10 times.

> seq 10 | xargs -n1 expr 1 +
2
3
4
5
6
7
8
9
10
11

For more insight into what’s being called, use the -t flag to see the commands.

Mix tasks accessing the db with `Mix.Ecto`

When running reports or one-off data operations it might be necessary to create a mix task that can access the database. Ecto provides convenience functions in the Mix.Ecto module to help facilitate setting up and starting the Ecto repos.

The function parse_repo(args) will process the arguments used when calling the mix task. It looks specifically for -r MyApp.Repo but if you don’t pass anything it will return all the repos from the configuration.

The function ensure_started(repo) takes the repo as an argument and ensures that the Repo application has been started. Without calling this function the Repo will throw an error when used.

Putting it all together:

defmodule Mix.Tasks.MyApp.SayHi do
  use Mix.Task
  import Mix.Ecto

  def run(args) do
    repo = parse_repo(args) |> hd

    ensure_started(repo)

    result = Ecto.Adapters.SQL.query!(repo, "select 'hi!';")

    result.rows
    |> hd
    |> hd
    |> IO.puts
  end
end

Use `source /dev/stdin` to execute commands

Let’s say there’s a command in a file, like a README file, and you don’t have any copy or paste tools handy. You can get the command out of the README file with:

> cat README.md | grep "^sed"
sed -ie "s/\(.*\)/Plug '\1'/" .vimbundle.local

Great! Now how do we run it? The source command is generally used to read and execute commands from a file, and /dev/stdin behaves like a file.

You can use the pipe operator to place the command into stdin and then source will read from stdin.

> cat README.md | grep "^sed" | source /dev/stdin

A simpler example can be constructed with echo:

> echo "echo 'hi there'"
echo 'hi there'

And

> echo "echo 'hi there'" | source /dev/stdin
hi there

Access record from ActiveRecord::RecordInvalid

You can pass an array of hashes to Thing.create! in ActiveRecord. If one of those records is invalid, then an ActiveRecord::RecordInvalid error is raised. You might need to know which record raised the error, in which case you can get the record from the exception with its record method.

bad_record = nil

begin
  Thing.create!([{value: 'bad'}, {value: 'good'}])
rescue ActiveRecord::RecordInvalid => record_invalid_error
  bad_record = record_invalid_error.record
end

if bad_record
  puts "got a bad record with value: #{bad_record.value}"
end

Array of hashes `create` many ActiveRecord objects

Generally, you use the create method of ActiveRecord objects to create an object by passing a hash of attributes as the argument.

Thing.create(color: 'green', status: 'active')

You can also pass an array of hashes to create:

things = [
  {
    color: 'blue',
    status: 'pending'
  },
  {
    color: 'green',
    status: 'active'
  }
]

created_things = Thing.create(things)

One disappointing thing is that this does not batch the insert statements. It is still just one insert statement per object, but it might make your code simpler in some cases.

`github` as source block in Gemfile

source blocks in a Ruby Gemfile help group together gems that come from the same source. In addition, the Gemfile supports a github block for multiple gems that come from the same github repository. In my specific case, there are two gemspecs in Brian Dunn’s flatware repo.

github 'briandunn/flatware', branch: 'master' do
  gem 'flatware-rspec'
  gem 'flatware-cucumber'
end

With this example, only one change is needed to change the branch that both of those gems will come from.

H/T Brian Dunn

Split large file into multiple smaller files

The command line has a handy tool, split, for breaking files into multiple pieces. This can be useful as a precursor to some parallelized processing of a large file. Let’s say you have gigabytes of log files you need to search through; splitting the files into smaller chunks is one way to approach the problem.

> seq 10 > large_file.txt
> split -l2 large_file.txt smaller_file_
> ls -1
smaller_file_aa
smaller_file_ab
smaller_file_ac
smaller_file_ad
smaller_file_ae
large_file.txt

First, I created a “large” file with ten lines. Then, I split that file into files with the prefix smaller_file_. The -l2 option tells split to make a new file every 2 lines. For 10 lines, we’ll get 5 files. The suffix it adds (“aa”, “ab”, …) sorts lexicographically so that we can reconstruct the file with cat and globs.

> cat smaller_file*
1
2
3
4
5
6
7
8
9
10

`with` statement has an `else` clause

with statements are used to match on the results of a function or series of functions, using those results to take actions within the with block. If a function does not return the expected result (think :error instead of :ok), you can either define specific else clauses for the things you expected to go wrong, or just let with return the result that did not match.

This is the general form:

with {:ok, a} <- {:ok, 123} do
  IO.puts "Everythings OK"
end

A with with an else block:

with {:ok, a} <- {:error, 123} do
  IO.puts "Everythings OK"
else 
  result -> IO.puts("Not OK")
end

A with else clause with pattern matching:

with {:ok, a} <- {:error, "something went wrong"} do
  IO.puts "Everythings OK"
else 
  {:error, message} -> IO.puts(message)
  error -> IO.puts("I'm not sure what went wrong")
end

A with without an else clause where the error is returned from the with block:

result = with {:ok, a} <- {:error, "something went wrong"} do
  IO.puts "Everythings OK"
end

{:error, message} = result
IO.puts "This went wrong #{message}"

Implied applications and `extra_applications`

In your mix file (mix.exs) the application function returns a keyword list. Two options in that list determine what applications are started at runtime.

The applications option is by default inferred from your app’s dependencies. That default list is thrown away if this option is set explicitly in your mix.exs file.

If you want to add an extra application without disrupting the default, implied list, you can use the extra_applications option instead. This leaves the default, implied list of applications untouched.

H/T Jose Valim PR

def application do
  [
    mod: {Tilex, []},
    extra_applications: [:logger]
  ]
end

`cd` in subshell

With many of our projects sequestering the front-end JavaScript code into an assets directory, I find myself moving between the project root and the assets directory to perform all the npm or yarn related tasks. Inevitably I’ll start doing something like this:

cd assets; npm install; cd ..

or this

pushd assets; npm install; popd

In both cases using ; instead of && puts me back in the original directory regardless of the result of the npm command.

I just learned that using cd in a subshell does not change the directory of the current shell, so I can also do this:

(cd assets; npm install)

Ruby srand returns the previous seed

srand is a method on Kernel that seeds the pseudo random number generator. It takes a new seed as an argument or calls Random.new_seed if you don’t pass an argument. What’s interesting about it is that it returns the old seed. This has the effect of returning a new large random number every time you call srand.

2.4.1 :007 > srand
 => 94673047677259675797540834050294260538
2.4.1 :008 > srand
 => 314698890309676898144014783014808654061
2.4.1 :009 > srand
 => 102609070680693453063563677087702518073
2.4.1 :010 > srand
 => 81598494819438432908893265364593292061

Which can come in handy if you’re playing some Ruby golf and need to generate a huge random number in as few characters as possible.

H/T Dillon Hafer

Converting strings to atoms safely

If your Elixir system accepts any outside input and calls String.to_atom with any part of that input as an argument, then your system is subject to a denial of service attack.

Malicious actors can submit input designed to dynamically create a large number of atoms until the atom limit is reached, knocking out your Elixir applications.

Consider using String.to_existing_atom instead. If the argument to this function cannot be converted to an existing atom then an exception will be raised.

> String.to_existing_atom("I don't exist")
** (ArgumentError) argument error
    :erlang.binary_to_existing_atom("I don't exist", :utf8)
> String.to_atom("I don't exist")
:"I don't exist"
> String.to_existing_atom("I don't exist")
:"I don't exist"

Current number of atoms in the atoms table

In Elixir and Erlang there is a hard limit on the number of atoms you can create. Atoms are not garbage collected, so it’s important to ensure you don’t exceed the limit. You can check what the limit is with:

> :erlang.system_info(:atom_limit)
1048576

Likewise, you can check the current number of atoms in the atoms table with:

> :erlang.system_info(:atom_count)
9654

On my system using Elixir 1.5.1 I use 9654 atoms just to start iex.

Xargs from a file

I’ve struggled with xargs conceptually for a long time, but it’s actually pretty easy. For commands that don’t read from stdin but do take arguments, like echo or kill, xargs turns newline-separated values from stdin into arguments.

Piping to echo does not work.

> echo 123 | echo
# nothing

Using xargs it does.

> echo 123 | xargs echo
123

xargs can also read a file with the -a flag, turning each line of the file into an argument.

> echo "123\nabc" > test.txt
> cat test.txt
123
abc
> xargs -a test.txt echo
123 abc

H/T Brian Dunn

Kill rogue shell processes

There is a particular type of attack where an inserted usb stick can act like a keyboard, open a terminal, and start something like this:

while (true); do something_malicious; sleep 3600; done & disown

This process endlessly loops and wakes every hour to do something malicious. The & puts it in the background and the disown will end its attachment to the current terminal. When the terminal is closed the process will get a parent of 1.

This process is still detectable and killable at the command line by finding all shell programs with a parent pid of 1 and killing them with -9.

ps ax -o pid,command,ppid | grep '.*zsh.*\s1$' | awk '{print $1}' | xargs kill -9

This will kill all running rogue zsh processes. There may be reasons why you’d want a process to be detached from its parent terminal, but you could easily decide that this isn’t something you ever want and place the above command into a cron job that runs every minute.

Run vim command from .... the command line!

Generally vim is started at the command line with the vim command and it comes equipped with the -c flag for running commands.

-c will open vim, run a command, and stay open, so if you wanted to open vim with a smile you could run:

vim -c ':smile'

If you wanted to run a substitution on a file with vim’s substitute command, then save, then quit, that would look like:

vim -c '%s/frown/upside-down/g | write | quit' frowns.txt

Capturing stderr when shelling out in ruby

All of the usual methods of shelling out in Ruby (backticks, system, etc.) don’t capture stderr, but there’s an oddly named library, open3, that can help with that.

require 'open3'

result_or_err, process_status = Open3.capture2e('cat abc')
puts result_or_err
# 'cat: abc: No such file or directory'
puts process_status
# <Process::Status: pid 6729 exit 1>

In this case cat abc errors out because there is no abc file, so the command writes to stderr. If there was an abc file, result_or_err would contain the contents of the abc file.


Examine or Diff a file on another #git branch

At times, I’m curious about the changes that have been made for a particular file on a branch that is not the branch I’m on.

The vim-fugitive plugin provides an easy way to open a buffer for a file on another branch with the :Gedit command.

For instance:

:Gedit readme_updates:README.md

This will open a buffer with the contents of the README.md file from the readme_updates branch.

You may want to view that file in comparison to the current file. The :Gdiff command provides us with that functionality:

:Gdiff readme_updates:README.md

Non-ActiveRecord objects in FactoryGirl

Creating non-ActiveRecord objects with FactoryGirl is possible. Classically, a constructor is used to set all the data attributes of an object.

class ParsedString
  attr_reader :abc, :def
  def initialize(string)
    @abc, @def = string.split(?|)
  end
end

In the above class the attributes are set in the constructor by passing in a string that gets split into two parts on |. Let’s use initialize_with to tell FactoryGirl how the object gets instantiated.

factory :parsed_string do
  initial_value "123|456"

  initialize_with { new(initial_value) }
end

And then in the test we can use FactoryGirl.build, like:

ps = FactoryGirl.build(:parsed_string)
expect(ps.abc).to eq '123'
expect(ps.def).to eq '456'

Find an npm library from the command line

When looking for a new package to use with npm you can use npm search <string>. This will query npm and return 20 - 25 results that match your search.

> npm search react-router
NAME                      | DESCRIPTION          | AUTHOR          | DATE       | VERSION  | KEYWORDS
react-router              | Declarative routing… | =ryanflorence   | 2017-08-24 |          | react router 
react-router-dom          | DOM bindings for…    | =mjackson…      | 2017-08-24 |          | react router 
react-router-redux        | Ruthlessly simple…   | =jlongster…     | 2017-02-10 |          | react redux r
react-router-native       | React Native…        | =jmurzy…        | 2017-08-24 |          |
react-router-config       | Static route config… | =mjackson…      | 2017-08-24 |          | react router 
react-router-bootstrap    | Integration between… | =monastic.panic… | 2017-04-19 |          | react react-
react-router-scroll       | React Router scroll… | =taion          | 2017-04-10 |          | react react r
react-native-router-flux  | React Native Router… | =aksonov        | 2017-08-23 |          |
...

There doesn’t seem to be a way to search by date, version or popularity, but if you pass the --long flag you’ll be able to see the entire description. Without the --long flag the results will be truncated to fit onto one line.

Check that an executable exists on the $PATH

When writing a vim plugin, interacting with different programs on the server might be necessary. For instance, you might want to use ruby to make a network request or perform some file management. But first, the plugin should check that the ruby program exists, which can be accomplished with the executable() vimscript function.

:echo executable('ruby')
1
:echo executable('doesnotexist')
0

This can be used in a condition before running the ruby command.

if executable('ruby')
  call system('ruby -e "puts YAML"')
endif

CamelCase to underscore and back again w/Elixir

If you have a mix project name with multiple words, then those multiple words are generally separated with underscores in the project directory name, like honey_bears. The module name for the project however is HoneyBears.

Converting a string from underscore to CamelCase is built into Elixir. It’s in the Macro module:

> Macro.camelize("honey_bear")
"HoneyBear"

The reverse case also can be solved with a Macro function.

> Macro.underscore("HoneyBear")
"honey_bear"

The Macro module has a number of convenience functions for working with macros.

Specifying the location of your vim plugin

If you are working on a vim plugin and you want to keep this plugin amongst your projects rather than with your other vim plugins you can add a new entry to the runtimepath or rtp.

Place the following in your vimrc:

set runtimepath+=~/projects/vim-my-plugin

Or

set rtp+=~/projects/vim-my-plugin

You can confirm that the path was added correctly by checking that setting with:

set rtp

Resize vim window to the size of its content

If you’re writing a vim plugin and in that plugin you are opening a new window to display some content, you may want to resize that window so that it is only the size of the content within. For instance, if you have 10 lines of content but the window is half the screen, then you want to resize the window to 10 lines.

Resizing a window is accomplished with the resize command like:

:resize 10

Getting the number of content lines is accomplished by getting the line number of the last line:

:echo line('$')

Putting the two together is tricky because the argument to the resize command is interpreted literally. :resize line('$') does not work. You have to combine the two into a string and then pass it to the execute command.

:execute('resize ' . line('$'))

Configure max http header size in Elixir Phoenix

If you store lots of data in cookies, you may eventually run up against the maximum header value length in Cowboy which by default is 4096.

While storing that much data in cookies is maybe not the best idea, there is a configuration setting that can help you work around this issue:

config :myapp, MyAppWeb.Endpoint,
  http: [protocol_options: [max_request_line_length: 8192, max_header_value_length: 8192]]

There ya go! Now you can accept http header values up to 8192 bytes in length!

Check to see if a command exists at #shell

If you want to check to see if a command exists on a user’s machine you can use command -v. command without flags will run the command passed as an argument.

> command echo "A"
"A"

With the -v flag, command will return the path of the given command and, most importantly, a non-zero status code if that command does not exist.

> command -v brew
/usr/local/bin/brew

Or on Linux

> command -v brew
# returns nothing and also returns a status code of 1

Importantly, you can check for a command before using it.

command -v brew && brew install something

Phoenix will watch your JS for you, just watch!

With ES6 JavaScript ecosystems, most compilation/transpilation is done with watchers: programs that watch for changes in your code and then trigger compilation based on the changes you’ve made.

Phoenix has a method of integrating those watchers into the server itself.

config :myapp, MyApp.Endpoint,
  watchers: [yarn: ["run", "watch", cd: Path.expand("../assets", __DIR__)]]

The above configuration goes in config/dev.exs and runs the watch command whenever you start the server/endpoint. The keyword option cd is as yet undocumented, but does what you think it does: it changes the directory to the path you configure, which by convention in Phoenix 1.3 would be the assets directory. cd is the only keyword option.

The watchers start when you run your server. Just make a change to your javascript and refresh the page!

String concatenation in the Bourne Again Shell

String concatenation seems to be different in every language, and Bash is no exception! You can concatenate strings just by putting two strings next to each other.

> echo "a""b"
ab 

It’s ok if one of these strings is a variable.

> a="x"
> echo $a"b"
xb

It’s ok if you put a double quoted string next to a single quoted string.

> echo 'a'"b"
ab

It’s sort of ok to put spaces in between the strings, but then echo receives them as separate arguments and joins them with a single space in its output.

> echo "a" "b"
a b

Multiple spaces in between the strings will likewise be squashed down to a single space.

> echo "a"      "b"
a b

Surrounding the strings with parens when setting a variable gives you an array rather than one concatenated string; printing its elements also joins them with single spaces.

> c=('a'      'b')
> echo "${c[@]}"
a b

When the strings are right next to each other you don’t need the parens at all; plain assignment works.

> c="a""b"
> echo $c
ab

Edit bash command at certain point in history

Jack introduced our team to the fc command a couple weeks ago. We had a command in our history that we wanted to edit, but it was not the last command. In fact, it was several hundred commands back in our history.

It turns out that the fc command takes an id argument, and each item in history has an id. To find the id of the particular command we wanted we ran:

> history | grep curl
10256 curl www.google.com
10500 curl www.amazon.com

To edit the first curl command listed, we just use:

fc 10256

H/T Ifu Aniemeka

Execute raw parameterized sql with Ecto in Elixir

Using SQL directly is a good way to use some of the more interesting parts of SQL that Ecto does not provide a ready abstraction for in its DSL. Parameterization is necessary to avoid SQL injection and to deal with single quotes correctly. It’s fairly straightforward once you find the right module (Ecto.Adapters.SQL) and function (query or query!). Parameters are indicated by $1, $2, and so on.

sql = """
  select * from users where name = $1;
"""

results = Ecto.Adapters.SQL.query!(MyApp.Repo, sql, ["chris"])

There is also a stream function on the Ecto.Adapters.SQL module with the same signature as query, but the documentation doesn’t necessarily state the advantages or situations where it may be useful. More to learn.
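
One situation where it seems useful (this sketch is my own guess, not from the docs): processing a large result set in chunks instead of loading every row at once. As far as I can tell, the stream has to be enumerated inside a transaction:

MyApp.Repo.transaction(fn ->
  Ecto.Adapters.SQL.stream(MyApp.Repo, "select * from users;", [])
  |> Stream.flat_map(fn result -> result.rows end) # each emitted result holds a chunk of rows
  |> Enum.each(&IO.inspect/1)
end)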

Load all records for an association in Ecto

Let’s say you get a collection of Posts with:

posts = Repo.all(Post)

If you try to access the associated developer for one of these posts, you get an error:

hd(posts).developer
#Ecto.Association.NotLoaded<association :developer is not loaded>

The same is true for all posts in the collection; none of them has a loaded developer. You can load them one by one with:

developer = Tilex.Repo.one(Ecto.assoc(post, :developer))

But if you need to do this for each developer you’ll get into an N+1 situation, making an additional trip to the database for each post.

Instead you can load all the developers at one time with:

posts = Repo.preload(posts, :developer)

Clearing Phoenix npm cache on Heroku

I recently encountered an issue where the solution, as found on the internet, was to remove node_modules, re-install the packages, and build again. My issue, however, was on Heroku rather than local.

If you’re using Elixir and Phoenix on Heroku then you’re likely using heroku-buildpack-phoenix-static. This buildpack compiles the static assets for Phoenix.

This buildpack caches node_modules by default. To clear the cache you must provide a configuration file at the root level of your project named phoenix_static_buildpack.config that looks like:

clean_cache=true

The buildpack will now re-install node_modules on each deployment.

Suppress errors when substituting in vim

The substitute (or s) command in vim throws an error when the search pattern is not found.

:substitute/banana/apple
E486: Pattern not found: banana

This error can be suppressed with the e flag at the end of the statement.

:substitute/banana/apple/e

When operating in just one file or on just one line, the error is a minor annoyance, but when combining the substitute command with :argdo, you receive an error for every file in the args list that does not contain the pattern in question.

A typical argdo command would look like this.

:argdo s/banana/apple/gec

The flags are:

g - global

e - suppress errors

c - confirm

tmux copy-mode -u, useless or useful?

Tmux comes with a key binding for both copy-mode and copy-mode -u. The first is straightforward: you can navigate through the window history, highlight something, and place it into your tmux buffer. I was curious about what the -u flag was for, and the man page says:

The -u option scrolls one page up.

So the -u option just places you into copy-mode one page up. Useless! Or wait, maybe that’s something I want…
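
For example (this binding is my own guess at a use, not something from the man page), you could make Page Up drop you straight into scrollback, already scrolled up a page:

# in ~/.tmux.conf; PPage is tmux's name for the Page Up key
bind-key -n PPage copy-mode -u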

Difference between `union` and `union all` in psql

union is an operator that operates on two sets of results, combining them:

chris=# select 1 union select 2;
 ?column?
----------
        1
        2
(2 rows)

It filters duplicate results though:

chris=# select 1 union select 1;
 ?column?
----------
        1
(1 row)

If you don’t want to worry about duplicate results and just want to stack the result sets on top of each other, use union all:

chris=# select 1 union all select 1;
 ?column?
----------
        1
        1
(2 rows)

Postgres Natural joins squash unneeded columns

Dealing mostly with Rails-style databases, we generally write inner joins on ids and foreign keys, but there are other ways to create joins, the most fun being the natural join.

The query below works like a natural join but uses inner join syntax: when two tables have columns with the same name, join on that column. You can see that both columns have been returned even though they have the same value.

chris=# select * from (select 1 x) foo inner join (select 1 x) bar on bar.x = foo.x ;
 x | x
---+---
 1 | 1
(1 row)

When using the natural join syntax, the query is less verbose and only returns one column. Postgres knows that the two columns have the same value, that’s the whole point, so it only presents one column.

chris=# select * from (select 1 x) foo natural join (select 1 x) bar;
 x
---
 1
(1 row)

Skip all changes in this file

git add --patch gives you the opportunity to make a decision on every code change in all files individually, providing you a menu that looks like this:

Stage this hunk [y,n,q,a,d,/,j,J,g,e,?]?

Option d is

d - do not stage this hunk or any of the later hunks in the file

This comes in handy when there is a host of whitespace changes in a file that you don’t want to commit. Just hit d to skip all those whitespace changes and go to the next file.

Postgres `coalesce` errors with multiple types

The Postgres function coalesce takes a variable number of arguments and returns the first non-null argument.

psql> select coalesce(null, null, null, null, 'hi!');
hi!

But if you include a number and a string, then Postgres throws an error.

psql> select coalesce(1, 'hi!');
ERROR:  invalid input syntax for integer: "hi!"

Coalesce will try to coerce values to the type of the first argument if possible.

psql> select coalesce(1, '2');
1

MySQL behaves a bit differently; it allows values of different types.

mysql> select coalesce(1, 'hi!');
1

The Postgres way makes a bit more sense to me. What is the type of the result column if there are multiple possible types? Is it always a string? What if I wanted a number? I feel like the additional type safety will help catch errors a bit earlier.