Today I Learned

A Hashrocket project

Ruby 2.7 Enumerable#tally

Have you ever wanted to get back a count of how often objects showed up within an array? If so, you’ve likely written (or copy-pasta’d) some variation of the code below:

list = %w(red red red blue blue fish blue blue)

list.each_with_object( { |v, h| h[v] += 1 }              # => {"red"=>3, "blue"=>4, "fish"=>1}
list.each_with_object({}) { |v, h| h[v] ? h[v] += 1 : h[v] = 1 } # => {"red"=>3, "blue"=>4, "fish"=>1}
list.group_by { |v| v }.map { |k, v| [k, v.size] }.to_h          # => {"red"=>3, "blue"=>4, "fish"=>1}

Lucky for us, Ruby 2.7 introduces a new convenience method in Enumerable#tally, so our above code reduces to:

list = %w(red red red blue blue fish blue blue)
list.tally # => {"red"=>3, "blue"=>4, "fish"=>1}
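
tally works on any Enumerable, not just arrays of words; for instance:

```ruby
# Count character occurrences in a string (Ruby 2.7+)
"mississippi".chars.tally
# => {"m"=>1, "i"=>4, "s"=>4, "p"=>2}
```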


You can read the feature discussion here!

Linking to a PayPal Transaction Page ⛓

This is a follow-up to PayPal Transaction Pages aren't Linkable. We figured out a way to build this feature!

When you process a PayPal transaction with the SDK (i.e. as a seller), you get a token that could be considered a transaction ID. It is something like a primary key on an object that is the parent to several other transactions: the seller’s transaction, any currency conversions, etc.

With this ID in hand, you can link to the seller’s dashboard and see a summary of this parent transaction. It’s not a documented feature, so the link could be broken at any time.

Thanks for sticking with me, PayPal support.

No More Destructuring Objs in Func Args in Ruby 2.7

Ruby has supported destructuring of objects into keyword arguments for a while in the Ruby 2.* series of releases, but now you’ll be getting a warning if you try this:

The 2.6 version

> def foo(a:, b:); puts [a, b]; end
> foo({a: 2, b: 2})

The 2.7 version

> def foo(a:, b:); puts [a, b]; end
> foo({a: 2, b: 2})
warning: Using the last argument as keyword parameters is deprecated; maybe ** should be added to the call

This warning will turn into an error in Ruby 3.0.

And if you add ** to the call like it tells you to:

> def foo(a:, b:); puts [a, b]; end
> foo(**{a: 2, b: 2})

OK, everything is cool. You can't add ** when your keyword arguments are consumed by a lambda being passed to map(&myLambda), though.

In this case, you’ll have to rewrite your code, so do that or you’ll be version locked!

H/T Brian Dunn

Read more here.

Protect Yourself Against Firebase Database Refs!

My worst nightmare is to lose production data; thank God for backups. But my nightmare came to life on a project's Firebase database. This is a NoSQL, JSON-like document database offered as a service from Google Cloud Platform. You use it like this (hypothetical path):

firebase.database().ref('customers/some-customer-id')

This is how you get a reference to the data at that path. Like a file system.

And when you want to make it dynamic:

firebase.database().ref(`customers/${customerID}`)

But, what if customerID is blank?

cue Nightmare Before Christmas soundtrack

function handleDelete(customerID) {
  firebase.database().ref(`customers/${customerID}`).remove()
}

If I ran that function in production with a blank customer ID, well then I just deleted all the customers.

Fear Not!

import protectedRef from './protectedRef.js'

const customerID = ''
protectedRef('customers', customerID) 
// => [ERROR] Bad ref! 1: ""

Use protection kids.

// protectedRef.js

import { sanitize } from './sanitize.js'

function protectedRefReducer(pathSoFar, pathPart, index) {
  const sanitizedPathPart = sanitize(pathPart)

  if (!sanitizedPathPart) {
    throw new Error(`Bad ref! ${index}: "${pathPart}"`)
  }

  return pathSoFar + '/' + sanitizedPathPart
}

export default function protectedRef(...parts) {
  if (!parts.length) {
    throw new Error('noop')
  }

  return firebase.database().ref(parts.reduce(protectedRefReducer, ''))
}

Ecto can only execute one SQL statement at a time

Ecto can only ever execute one SQL statement at a time, by design. For performance reasons, every statement is wrapped in a prepared statement. (Some "security" is a fringe benefit of prepared statements.)

Regarding the performance of prepared statements: PostgreSQL will force re-analysis of a statement when the objects in it have undergone definitional changes (DDL), making prepared statements useless in a migration.

An example of a migration if you need to perform multiple statements:

def up do
  execute("create extension if not exists \"uuid-ossp\";")
  execute("alter table schedules add column user_id uuid;")
  execute("create unique index on schedules (id, user_id);")
end

def down do
  execute("alter table schedules drop column user_id;")
  execute("drop extension if exists \"uuid-ossp\";")
end

So you heard about Ruby 2.7 Pattern Matching?

Ruby has an experimental feature “Pattern Matching” introduced in this latest release, 2.7.

When I think pattern matching I think function arguments but for Ruby this is not the case. Instead, pattern matching is all about the case statement.

When I open up a 2.7 irb repl I can do this:

irb(main):001:0> case [1, 2]; in [1, x]; puts "x: #{x}"; end
(irb):5: warning: Pattern matching is experimental, and the behavior may change in future versions of Ruby!
x: 2
=> nil

Yep, it's experimental. Seems like you shouldn't use it in production code 😡.
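
For a slightly bigger taste, case/in can also destructure hashes and bind variables along the way. A small sketch with hypothetical data:

```ruby
# Match a hash pattern, checking types and binding `name` in one step
response = { status: :ok, body: { id: 42, name: "til" } }

message =
  case response
  in { status: :ok, body: { name: String => name } }
    "Found #{name}"
  in { status: :error }
    "Something went wrong"
  end

message # => "Found til"
```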

There it is! Check out a slide deck with more info here

What happens on Ecto.Repo.rollback?

The rollback function interrupts the execution of the function that you pass to transaction, but what is the mechanism for interrupting execution?

When you call rollback you are essentially calling this code:

throw({DBConnection, conn_ref, reason})

throw is no good without a catch and you can find the try ... catch in the DBConnection module as well. It can be boiled down to this:

try do
  # call the function passed to Ecto.Repo.transaction
catch
  :throw, {__MODULE__, ^conn_ref, reason} ->
    # send a rollback request to the database and return {:error, reason}
    {:error, reason}
  kind, reason ->
    # send a rollback request to the database and re-raise
    :erlang.raise(kind, reason, stack)
end

This code exploits the ability of catch to both handle a throw and handle a raise.

PayPal Transaction Pages aren't Linkable ⛓

I spent some time this week trying to create a link directly to a PayPal Merchant transaction show page. This is the page you’d see as a merchant when somebody has paid you via PayPal.

These pages use the transaction's ID as a lookup, so I'd need to know that ID in order to dynamically create the link. After some Googling, reading GitHub issues, and finally the source code of my PayPal SDK of choice, I'd like to report: PayPal does not expose this ID in the normal course of business.

To quote my PayPal caseworker:

“While there are ways of retrieving the Transaction ID… you will not be able to link directly to the PayPal transaction details page, for security. The only way that your buyers can view the transaction details page is by accessing PayPal themselves, logging in with their credentials, and navigating to the Activity.”

I hope this spares another developer from the same rabbit-hole I just emerged from. Sometimes things with technology don’t work. 🤷‍♀️

Edit: we figured it out! with Superclass

Today I encountered the following syntax (observed in Ruby 2.7):

SpecialError =

What’s going on here?

According to the Class docs, takes a superclass as an argument, creating a new anonymous class. So this could instead be written as:

class SpecialError < StandardError

We'd lose the anonymity of the SpecialError class, but in the first example, we are assigning it to a Pascal-cased constant anyway.

I think the first syntax would make sense when the code looks just like this example: a simple class that inherits some behavior, where the name is needed only in the scope where the class is declared.

Sort Git Branches by Latest Commit

Some of my repos have a lot of branches, and a command I discovered this week sorts them by the most recent committer date:

$ git branch --sort=-committerdate
* new-branch

The - reverses the order, putting the most-recently updated branch at the top, rather than the bottom, of my output. That’s more intuitive to me, but it certainly works in reverse.

Ruby Array#sort_by

Let’s say we have the following list of very interesting items.

items = [
  { name: "dogfood", price: 4.99 },
  { name: "catfood", price: 5.99 },
  { name: "batfood", price: 4.99 }
]

Our client wants the list sorted by price ascending, and we should break ties in price by then sorting by item name ascending. So with our example list, the result should be:

items = [
  { name: "batfood", price: 4.99 },
  { name: "dogfood", price: 4.99 },
  { name: "catfood", price: 5.99 }
]

Here’s a method that does the job for our client:

def sort_by_price_and_name(items)
    .group_by { |i| i[:price] }
    .sort
    .flat_map { |_k, v| v.sort { |a, b| a[:name] <=> b[:name] } }
end

It’s a bit unwieldy and inelegant. It turns out there is a perfect method for this in Ruby:

def sort_by_price_and_name(items)
  items.sort_by { |i| [i[:price], i[:name]] }
end

Ruby’s Enumerable#sort_by works in this case where we sort by all the fields in ascending order. If we wanted to sort by price ascending, then name descending, sort_by would no longer be the tool we’d want to use.

Hat tip to @joshcheek for the knowledge.

Useful Git Diff Filenames

If you type git config --global diff.noprefix true into your terminal, it will set the value like so in your ~/.gitconfig file:

[diff]
    noprefix = true

The upshot is that git diffs will no longer prefix files with a/ and b/, such that you can copy a useful path when you double-click to select it in the terminal.

--- a/file/that/changed.awesome
+++ b/file/that/changed.awesome

becomes:

--- file/that/changed.awesome
+++ file/that/changed.awesome

Credit to @brandur for the tip. Here’s the source tweet

Capitalize Hashtags for #a11y

At RubyConf 2019, I learned that social media hashtags are more easily read by screen readers if they are capitalized. So #rubyconf is more accessible when written as #RubyConf, because the screen reader can more easily make sense of the word breaks. Keep this in mind when posting and designing marketing campaigns.

I saw #rubyfriends and #rubyconf migrate to #RubyFriends and #RubyConf at the conference in realtime as this idea spread.

more info

h/t Coraline Ada Ehmke

Use Typescript to help migrate/upgrade code

I am tasked with migrating a large JavaScript codebase from the beta version of firebase-functions to the latest. Like most major upgrades, there are many API changes to deal with. Here's an example cloud function:

Beta version with node 6:

exports.dbCreate = functions.database.ref('/path').onCreate((event) => {
  const createdData = event.data.val(); // data that was created
});

Latest version with node 10:

exports.dbCreate = functions.database.ref('/path').onCreate((snap, context) => {
  const createdData = snap.val(); // data that was created
});

The parameters changed for onCreate.

In the real codebase there are hundreds of cloud functions like this, and they all have varying API changes to be made. With no codemod in sight, I'm on my own to figure out an efficient way to upgrade. Enter TypeScript.

After upgrading the dependencies to the latest versions I added a tsconfig:

{
  "compilerOptions": {
    "module": "commonjs",
    "outDir": "lib",
    "target": "es2017",
    "allowJs": true,
    "checkJs": true
  },
  "include": ["src"]
}

The key is to enable checkJs so that the TypeScript compiler reports errors in JavaScript files.

And running tsc --noEmit against the codebase provided me with a list of 200+ compiler errors pointing me to every change required.

`random()` in subquery is only executed once

I discovered this morning that random() when used in a subquery doesn’t really do what you think it does.

Random generally looks like this:

> select random() from generate_series(1, 3);
(3 rows, each with a different random value)

But when you use random() in a subquery the function is only evaluated one time.

> select (select random()), random() from generate_series(1, 3);
      random       |      random
-------------------+-------------------
 0.611774671822786 | 0.212534857913852
 0.611774671822786 | 0.834582580719143
 0.611774671822786 | 0.415058249142021
(3 rows)

So, a query like this:

insert into things (widget_id)
select (select id from widgets order by random() limit 1)
from generate_series(1, 1000);

Results in 1000 entries into things all with the same widget_id.

Find that change you know you made!

Sometimes you do awesome work…

Sometimes, at a later point, that awesome work is nowhere to be found, yet everything in your being knows you already wrote it.

Hopefully this will help:

git log --all --source -- path/to/my/file

This super useful command will show you changes on a particular file (that you totally know you changed) across branches; because sometimes our brains work so hard that we forget to get something merged and those important changes just sit there waiting to be useful again on another branch.

Migrating to form_with

Whilst upgrading a Rails application from 4.1 to 5.2, my feature specs started throwing a strange error:

Unable to find field "X" that is disabled (Capybara::ElementNotFound)

I reverted my changes… the tests passed. Step 1 in debugging: read the error message. Capybara is saying the element I’m trying to find is disabled. If I inspect the html generated by form_for:

= form_for book do |f|
  = f.label :title, 'Title'
  = f.text_field :title
  = f.button 'Save Changes', type: :submit
which renders:

<form action="/books/14" accept-charset="UTF-8" method="post">
  <label for="book_title">Title</label>
  <input type="text" name="book[title]" id="book_title">
  <button name="button" type="submit">Save Changes</button>

and compare it to the html generated by form_with:

= form_with model: book, local: true do |f|
  = f.label :title, 'Title'
  = f.text_field :title
  = f.button 'Save Changes', type: :submit
which renders:

<form action="/books/14" accept-charset="UTF-8" method="post">
  <label for="book_title">Title</label>
  <input type="text" name="book[title]">
  <button name="button" type="submit">Save Changes</button>

Notice what's missing. form_with does not automatically generate ids for form elements, and the id is necessary to link the label's for attribute to the input's id; without it, fill_in('Title', with: "Pride and Prejudice") doesn't work in Capybara. Either add the ids manually, or in Rails 5.2 use this setting:

Rails.application.config.action_view.form_with_generates_ids = true

Multiline HAML, One Way or Another 🥞

HAML uses meaningful whitespace indentation, so in general, multiline code doesn't fly. The prevailing wisdom is that 'it's better to put that Ruby code into a helper' rather than support multiline blocks. But what if I really want to?

Here’s an example of a multiline HAML block using pipes.

= link_to(           |
  'someplace',       |
  some_path,         |
  class: 'someclass')

The pipe character preceded by whitespace signifies a multiline block. Make sure you don’t end your final line with a pipe; this can break your templating.

Ruby yield as keyword args default

Today I learned that you can use yield as a default value for a keyword arg.

def foo(bar: yield)
  "Received => #{bar}"
end

Then you can call this method using the keyword syntax:

foo(bar: "Hello world!")
#=> "Received => Hello world!"

or by using the block syntax:

foo do
  "Hello world!"
end
#=> "Received => Hello world!"

I am not sure why I'd use such flexible syntax for a single method, but we have to know what's possible in Ruby. Anyway, just a sanity check here: is the block evaluated if we also pass the arg?

foo(bar: "Hello") do
  puts "Block was evaluated!!!"
end
#=> "Received => Hello"

Cool, so Ruby does not evaluate the block when the keyword argument is passed in. We're good.

Testing Shell Conditions

When you're shell scripting you really want to get your head wrapped around conditions. GNU coreutils provides the test command for checking conditions.

test 1 -gt 0
# exits with exit code 0
echo $?
# prints 0
test 0 -gt 1
# exits with exit code 1
echo $?
# prints 1

Checking the $? variable is a bit awkward; you can chain the command with echo, though.

test 1 -gt 0 && echo true
# outputs true

Just be aware that it doesn’t output false when false.
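
If you do want a false printed when the condition fails, one sketch chains both branches:

```shell
test 0 -gt 1 && echo true || echo false
# outputs false
```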

But if you’re chaining with && you might as well use the [[ compound command.

[[ 1 -gt 0 ]] && echo true
# outputs true

Now you’re using shell syntax directly.

Linux ZSH ls colors

ls does not colorize the output on Linux.

ls --color does colorize the output. It's smart to set an alias:

alias ls='ls --color=auto'

Ok, now you've got colors every time, but how do you change those colors?

The color settings have defaults, but they can be overridden by the value of the LS_COLORS environment variable.

The language for setting these colors is really obtuse, but you can generate the settings with the dircolors command. dircolors outputs an environment variable definition you can include in your zshrc file. This variable will give you the same colors as when LS_COLORS is not set.
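
For example, a zshrc line like this (a sketch, assuming GNU coreutils' dircolors is on your PATH):

```shell
# dircolors -b emits shell code that defines and exports LS_COLORS;
# eval-ing it loads the default color database
eval "$(dircolors -b)"
```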

You can figure out what values to set colors to with this resource.

Pass args to a custom vim command

Custom commands are easy in vim:

:command HelloWorld echo "hello world"
" outputs hello world

But what if I want to pass an arg to the command?

First you have to specify that you want args with the -nargs flag. Then you need to declare where the args go with <q-args>.

:command! -nargs=1 Say :echo "hello" <q-args>
:Say world
" outputs hello world

Creating a Bind Mount with `docker volume`

Creating a bind mount (a volume that has an explicitly declared directory underpinning it) is easy when using docker run:

docker run -v /var/app/data:/data:rw my-container

Now, anytime you write to the container’s data directory, you will be writing to /var/app/data as well.

You can do the same thing with the --mount flag.

docker run --mount type=bind,source=/var/app/data,target=/data my-container

Sometimes though you might want to create a bind mount that is independent of a container. This is less than clear but Cody Craven figured it out.

# "my-volume" is an arbitrary, illustrative name
docker volume create \
  --driver local \
  -o o=bind \
  -o type=none \
  -o device=/var/app/data \
  my-volume

The key-value pairs passed with -o are not well documented. The man page for docker-volume-create says:

The built-in local driver on Linux accepts options similar to the linux mount command

The man page for mount will have options similar to the above, but structured differently.

Set Git Tracking Branch on `push`

You hate this error, right?

$ git push
There is no tracking information for the current branch.

I especially hate git’s recommendation at this stage:

$ git branch --set-upstream-to=origin/<branch> my-branch

You can check for tracking information in your config file with:

$ git config -l | grep my-branch
# returns exit code 1 (nothing)

Yep, no tracking info. The first time you push you should use the -u flag.

# assuming you are on my-branch
$ git push -u origin HEAD

Now do you have tracking info?

# returns the tracking information stored in config!
$ git config -l | grep my-branch

Did you forget to set up tracking on the first push? Don’t worry, this actually works anytime you push.

$ git push
There is no tracking information for the current branch.

$ git push -u origin HEAD
Branch 'my-branch' set up to track remote branch 'my-branch' from 'origin' by rebasing.

This is so much more ergonomic than git's recommendation.
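
An aside not from the original post: if your Git is 2.37 or newer, you can skip the -u dance entirely with the push.autoSetupRemote setting:

```shell
# With this set, a plain `git push` sets up the upstream tracking
# branch automatically on the first push (Git 2.37+)
git config --global push.autoSetupRemote true
```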

Tmux Attach Sessions with Working Directory

Your Tmux working directory is the root directory of the session. You can figure out what it is by opening a new window or pane in your session. The directory you start in is your working directory.

Sometimes it’s not the best directory for the type of project you’re developing. For instance, it could be set to the root directory of an umbrella app, when you’re working exclusively in one of the subdirectories.

Reset it with the -c flag:

$ tmux attach-session -t my_session -c ~/my_project

Get Back To Those Merge Conflicts

You’ve probably experienced this:

Decision A
<<<<<<< HEAD
Decision H
Decision I
=======
Decision F
Decision G
>>>>>>> branch a
Decision E

And you wind up making some iffy decisions:

Decision A
Decision I
Decision G
Decision E

The tests don’t pass, you’re not confident in the choices you’ve made, but this is the third commit in a rebase and you don’t want to start over.

It’s easy to get back to a place where all your merge conflicts exist with:

git checkout --merge file_name
# or
git checkout -m file_name

Now you can re-evaluate your choices and make better decisions:

Decision A
<<<<<<< HEAD
Decision H
Decision I
=======
Decision F
Decision G
>>>>>>> branch a
Decision E

H/T Brian Dunn

Sharing Volumes Between Docker Containers

In docker, it’s easy to share data between containers with the --volumes-from flag.

First let’s create a Dockerfile that declares a volume.

from alpine:latest

volume ["/foo"]

Then let’s:

  1. Build it into an image foo-image
  2. Create & Run it as a container with the name foo-container
  3. Put some text into a file in the volume

docker build . -t foo-image
docker run -it --name foo-container foo-image sh -c 'echo abc > /foo/test.txt'

When you run docker volume ls you can see a volume is listed. By running a container from an image with a volume we’ve created a volume.

When you run docker container ls -a you can see that we’ve also created a container. It’s stopped currently, but the volume is still available.

Now let’s run another container passing the name of our previously created container to the --volumes-from flag.

docker run -it --volumes-from foo-container alpine cat /foo/test.txt

# outputs: abc

We’ve accessed the volume of the container and output the results.

Blocking ip6 addresses with /etc/hosts

Like many developers, I need to eliminate distractions to be able to focus. To do that, I block non-development sites using /etc/hosts entries, like this:

Today I learned that this doesn’t block sites that use ip6. I have in my /etc/hosts file but it is not blocked in the browser.

To prove this is an ip6 issue I can use ping and ping6

> ping
PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=64 time=0.024 ms

> ping6
PING6(56=40+8+8 bytes) 2601:240:c503:87e3:fdee:8b0b:dadf:278e --> 2a04:4e42:200::323
16 bytes from 2a04:4e42:200::323, icmp_seq=0 hlim=57 time=9.288 ms

So for ip4 requests is pinging localhost and not getting a response, which is what I want. For ip6 addresses is hitting an address that is definitely not my machine.

Let’s add another entry to /etc/hosts:


::1 is the simplification of the ip6 loopback address 0:0:0:0:0:0:0:1.

Now, does pinging with ip6 hit my machine?

> ping6
PING6(56=40+8+8 bytes) ::1 --> ::1
16 bytes from ::1, icmp_seq=0 hlim=64 time=0.044 ms

Distractions eliminated.

View the `motd` after login in Ubuntu

When you ssh into an Ubuntu machine, you may see a welcome message that starts with something like this:

Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-65-generic x86_64)

This is the motd (message of the day).

What if you clear your terminal after login but want to see that message again?

There are two ways to do this.

$ cat /run/motd.dynamic

This will show you the same message that was created for you when you logged in.

If there is dynamic information in that message and you want to see the latest version run:

$ sudo run-parts /etc/update-motd.d/

This will run all the scripts that make the motd message.

Highlight json with `bat`

Sometimes I run a utility that outputs a whole bunch of json, like:

docker inspect hello-world 
# outputs a couple pages of json

I want to send it through bat because bat is a great output viewer, and I also want syntax highlighting. If bat is viewing a file with the extension json then it will get syntax highlighting, but in this case there is no file and no extension.

You can turn on json syntax highlighting with the --language flag.

docker inspect hello-world | bat --language json
# or just use -l
docker inspect hello-world | bat -l json

Combine this with --theme and you’re looking good!

docker inspect hello-world | bat -l json --theme TwoDark

The interaction of CMD and ENTRYPOINT

The CMD and ENTRYPOINT instructions in a Dockerfile interact with each other in an interesting way.

Consider this simple dockerfile:

from alpine:latest

cmd echo A

When I run docker run -it $(docker build -q .), the output I get is A, like you'd expect.

With an additional entrypoint instruction:

from alpine:latest

entrypoint echo B
cmd echo A

I get just B, no A.

Each of these commands are using the shell form of the instruction. What if I use the exec form?

from alpine:latest

entrypoint ["echo", "B"]
cmd ["echo", "A"]

Then! Surprisingly, I get B echo A.

When using the exec form, cmd provides default arguments to entrypoint.

You can override those default arguments by providing an argument to docker run:

docker run -it $(docker build -q .) C

`ets` table gets deleted when owning process dies

You can create a new ets table with:, [:named_table])

And you can confirm it was created with:
# => [id: #Reference<0.197283434.4219076611.147360>, ...]

Now check this:

spawn(fn ->, [:named_table]) end)

Let's see if it was created:
# returns :undefined

What gives? The erlang ets docs say this:

Each table is created by a process. When the process terminates, the table is automatically destroyed.

So, spawn created the process, which then terminated, so :spawn_table got deleted when the process died.

Debugging: Elm evaluates uncalled `let` variables

If you write a function that has a let expression with variables like so:

view : Model -> Html Msg
view model =
        logModel = Debug.log "model:" model
        div []
            [ button [ onClick Increment ] [ text "+1" ]
            , div [] [ text <| String.fromInt model.count ]
            , button [ onClick Decrement ] [ text "-1" ]

When the view function is called you will see the console log message that logModel writes, even though it was never called from the function’s body.

This can be useful for debugging function arguments coming in, or other variables without messing with the function’s body.

To avoid the [elm-analyse 97] [W] Unused variable "logModel" warning you can use an underscore instead of naming the variable:

example =
        _ = Debug.log "foo" "bar"
        "function body"

It is worth mentioning that variables that are used in the function's body will only be evaluated once.

example =
        foo = Debug.log "foo" "I'm called once"
        bar = Debug.log "bar" "I'm called once"
        foo ++ bar ++ foo ++ bar -- each variable is used twice

will result in only two console log messages, one for foo, and one for bar.

h/t Jeremy Fairbank

Force Code Block Length with Non-Breaking Space 📏

Services that render fenced code blocks tend to remove leading and trailing whitespace. They'll display these two code blocks the same way, trimming the blank lines from the second example:

let something;

and (with blank lines around it):


let something;


GitHub does this, as does Deckset. Deckset also adjusts your code’s font size based on how many lines are in the block, which means these two code blocks would have different font sizes on their slides. Sometimes I don’t want that to happen, like when building a series of slides that fill in different pieces of an example and should look consistent.

I cheat this feature by putting a non-breaking space on the last line. On a Mac, I can do this with OPTION + SPACE. I can see the character in Vim ('+'), but it's invisible on the slide, and it prevents Deckset from collapsing the line it's on, forcing the code block to the length I choose.

SO: Use non-breaking spaces

fill your quickfix window with lint

File names I can’t jump to frustrate me. Today I ran $ npx eslint and my computer said “I looked at a file, and found a problem on this line, in this column. Do you want to see it? Good for you. Go type out the file name in your editor then.”

But I wanted a jump list of all the eslint errors in my project. Eslint is a kind of compiler, right? Vim knows compilers.

:set makeprg=npx\ eslint\ -f\ unix

Now I can

:make

and behold!


I can now see all of the errors and warnings for the project, and nimbly jump betwixt.

Use assigned variables in the same assignment statement

JavaScript allows you to use a variable that's being assigned within the same assignment statement. This was unexpected behavior for me, as the first languages I learned were compiled languages, and that would fail compilation, at least in the ones I've worked with.

Here’s an example:

Javascript ES2015:

const [hello, world] = [
  () => `Hello ${world()}`,
  () => "World",
]

hello()
//=> "Hello World"

In this example I'm assigning the second function to the world variable using destructuring assignment, and using it by calling world() in the first function.


To my surprise ruby allows the same behavior:

hello, world = [
  lambda { "Hello #{}" },
  lambda { "World" },
]
#=> "Hello World"


Well, Elixir acts as I previously expected: assigned variables may only be used after the statement that creates them.

[hello, world] = [
  fn -> "Hello #{world.()}" end,
  fn -> "World" end
]
#=> ERROR ** (CompileError) iex:2: undefined function world/0

I won't start using this approach, but it's important to know that it's possible.

Vim Tags in Visual Mode 🏷

This is my 400th TIL! 🎉

I’ll file this under ‘Vim is endlessly composable’. Today I learned that Vim tags can be used to define a range in visual mode. Here’s how you’d fold your code between two Vim tags.

Go to the first tag. If you marked it 1, here's how you'd do that:

'1

Enter visual mode and extend to your second tag 2:

v'2

Enter command mode and fold the range:

:fold

Which automatically extends to:

:'<,'>fold

I use this in big markdown files to hide all but the thing I’m currently working on. Enjoy.

Vim Reverse Sort

I use Vim’s :sort all the time. If I can’t have a more meaningful algorithm than alphabetical, at least my lists are in some kind of order.

This function comes in surprisingly useful while writing GitHub-style checkbox lists.

- [x] Check mail
- [ ] Play guitar
- [x] Write TIL

Sorting this list alphabetically puts the undone items at the top.

- [ ] Play guitar
- [x] Check mail
- [x] Write TIL

Reverse the order (in classic Vim style) with a bang:

:sort!

GitHub PR Team Reviews

Pull request openers on GitHub can request a review from a GitHub user, or a team that has at least read-access to the repository. In the screenshot below, deploying the ‘Ninjas’ to review this PR is effectively the same as requesting a review from each ninja on the team. If they have notifications for such things, they’ll be notified, and they’ll see a banner on the PR’s show page requesting their input on the change.


This is a handy way to request a lot of reviews at once.

About Pull Request Reviews