Today I Learned

A Hashrocket project

Keeping an SSH connection alive

Do you get disconnected from your SSH session often? I do… but I’ve found a solution that helps.

This is an SSH configuration that can be made on either the server or the client side, but in my instance it makes more sense for the change to be on the client side.

In your ~/.ssh/config we’ll utilize the ServerAliveInterval declaration.

ServerAliveInterval
             Sets a timeout interval in seconds after which if no data has
             been received from the server, ssh(1) will send a message through
             the encrypted channel to request a response from the server.  The
             default is 0, indicating that these messages will not be sent to
             the server.  This option applies to protocol version 2 only.

This declaration tells the client to send a keep-alive packet to the server at the given interval, ensuring the connection stays open.

Host example
  Hostname example.com
  ServerAliveInterval 30

That declaration will send a packet every 30 seconds whenever the connection goes idle. This is the functionality I'd like for all of my SSH connections, so we can add it to the host wildcard like so:

Host *
  ServerAliveInterval 120

GitHub Insert a Suggestion ☝

Today I learned that GitHub allows you to insert a one-line suggestion while conducting a pull request review. Click on the button circled in red, and you'll get a suggestion fenced code block, as shown.

image

Inside the code block is the current code; replace it with your proposed change. GitHub will present your suggestion in a nice diff format.
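
For reference, the review comment itself is just a fenced code block tagged suggestion, containing your proposed replacement. A minimal sketch, where the inner line is only a placeholder:

```suggestion
the line as you think it should read
```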

When the pull requester views your suggestion, they can accept the change with one click. Efficient!

Multiple lines are not yet supported; according to this GitHub blog post, it's a frequently requested feature.

h/t Jed and Raelyn

Pivotal Tracker Advanced Search by Owner

On a big project, Pivotal Tracker contains a wealth of data. Let’s use it!

Today I filtered all the stories assigned to my teammate with the owner keyword:

owner, owned_by
owner: “Obi Wan Kenobi”
Search using the full name, initials, the username, user id or part of the user’s name.

This filtered out hundreds of stories, leaving the four in progress that I care about.
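
The owner keyword also combines with others; here's a sketch using the state keyword from the same advanced search guide to narrow to one person's started stories:

owner:"Obi Wan Kenobi" state:started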

Here’s the official advanced search guide. Try a new keyword today!

https://www.pivotaltracker.com/help/articles/advanced_search/

Change PostgreSQL psql prompt colors

Today I learned how to change the psql prompt to add some color and control which info to show:

$ psql postgres
postgres=# \set PROMPT1 '%[%033[1;32m%]@%/ => %[%033[0m%]%'
@postgres => \l
                                           List of databases
            Name            |   Owner    | Encoding |   Collate   |    Ctype    |   Access privileges
----------------------------+------------+----------+-------------+-------------+-----------------------
 postgres                   | postgres   | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0                  | postgres   | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                            |            |          |             |             | postgres=CTc/postgres
 template1                  | postgres   | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                            |            |          |             |             | postgres=CTc/postgres
(3 rows)

image
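
To keep this prompt across sessions, the same \set line can go in your ~/.psqlrc file, which psql reads at startup:

\set PROMPT1 '%[%033[1;32m%]@%/ => %[%033[0m%]%'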

Check this documentation if you want to know more.

Tuples are never inferred in TypeScript

If I have a function that takes a number tuple:

type Options = {
  aspect: [number, number],
  ...
}

const pickImage = (imageOptions: Options) => (
  ...
)

This will give a type error:

const myOptions = {
  aspect: [4, 3],
};

// ❌ Expected [number, number], got number[]
pickImage(myOptions);

I must use a type assertion when passing tuples:

const myOptions = {
  aspect: [4, 3] as [number, number],
};

pickImage(myOptions);
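
Another way to avoid the assertion is to annotate the variable itself, so the literal is checked against the tuple type directly. A sketch, assuming aspect is the only required field of Options:

const myOptions: Options = {
  aspect: [4, 3], // contextually typed as [number, number]
};

pickImage(myOptions);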

Research Software Alternatives

Today I discovered AlternativeTo, which provides ‘crowdsourced software recommendations’. It’s available here:

https://alternativeto.net/

With this resource you can find alternatives to websites, tools, operating systems, and more, and learn about each alternative you find.

For reference, I checked out the results for open source alternatives to Ruby. The list includes no-brainers like Python, JavaScript, and Java, but also Go, C, and Lua. If I were a CTO who didn’t know Ruby and needed to quickly understand its position in the ecosystem, this would be a pretty useful starting point.

image

Tmux Kill All Other Sessions

Ok, you can kill a session in Tmux from the command line with tmux kill-session -t <session-name>. Did you know you can add a flag to kill every session other than your target? Here’s that command.

$ tmux kill-session -a -t <session-name>
$ tmux kill-session -at <session-name> # short version

All sessions but the target will die. This is my ‘clear the desk’ operation when I come back to my computer after some time away and want to shut down a bunch of random servers and processes.

`sleep` is just `receive after`

While using sleep in your production application might not be the best idea, it's useful in situations where you're simulating process behaviour in a test or when you're trying to diagnose race conditions.

The Elixir docs say this:

Use this function with extreme care. For almost all situations where you would use sleep/1 in Elixir, there is likely a more correct, faster and precise way of achieving the same with message passing.

There’s nothing special about sleep though. The implementation for both :timer.sleep and Process.sleep is equivalent. In Elixir syntax, it’s:

receive after: (timeout -> :ok)

So, it’s waiting for a message, but doesn’t provide any patterns that would successfully be matched. It relies on the after of receive to initiate the timeout and then it returns :ok.

If you're building out some production code that requires waiting for a certain amount of time, it might be useful to use the receive/after code directly, so that if you later decide the waiting can be interrupted, you can quickly provide a pattern that matches the interrupt, like:

timeout = 1000

receive do
  :interrupt ->
    :ok_move_on_now
after
  timeout -> :ok_we_waited
end

Assert one process gets message from another

Erlang has a very useful-for-testing function :erlang.trace/3, that can serve as a window into all sorts of behaviour.

In this case I want to test that one process sent a message to another. While it's always best to test outputs rather than implementation, when everything is asynchronous and parallelized you might need some extra techniques to verify your code works right.

pid_a =
  spawn(fn ->
    receive do
      :red -> IO.puts("got red")
      :blue -> IO.puts("got blue")
    end
  end)

:erlang.trace(pid_a, true, [:receive])

spawn(fn ->
  send(pid_a, :blue)
end)

assert_receive({:trace, captured_pid, :receive, captured_message})

assert captured_pid == pid_a
assert :blue == captured_message

In the above example we set up a trace on receive for the first process:

:erlang.trace(pid_a, true, [:receive])

Now, the process that called trace will receive a message whenever the traced process receives a message. That message will look like this:

{
  :trace,
  pid_that_received_the_message,
  :receive, # the action being traced
  message_that_was_received
}

This in combination with assert_receive allows you to test that the test process receives the trace message.

Make dry run

make has a great option to validate the commands you want to run without executing them. For that, just use --dry-run.

#!make
.PHONY: test
test:
    echo "running something"

Then:

$ make test --dry-run
echo "running something"
$ make test
echo "running something"
running something

Thanks @DillonHafer for that tip!

Git Log Relative Committer Date 📝

Git log format strings are endlessly interesting to me. There are so many different ways to display the data.

Today I learned a new one, %cr. This formats your commit from the committer date, relative to now. After pulling from the remote, use it like this to see how long ago the latest commit was committed:

$ git log -1 --format=%cr
8 hours ago

I use this command when surveying a GitHub organization, or an old computer with lots of dusty projects lying around.
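
For the dusty-projects case, a small shell loop gives a quick survey. A sketch, assuming every project is a direct subdirectory of the current directory:

$ for repo in */; do echo "$repo $(git -C "$repo" log -1 --format=%cr)"; done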

How to setup VS Code for Ruby development

After some trial and error with the various extensions available for Ruby, I’ve found the following combination to work well:

Ruby for debugging, syntax highlighting and linting. I use these VS Code User Settings:

"ruby.useLanguageServer": true,
"ruby.lint": {
  "ruby": true
}

Solargraph for intellisense and autocomplete.

endwise for wisely adding end to code structures. (Inspired by tpope’s endwise.vim)

Prettier and plugin-ruby for formatting.

Prettier plugin support is on the way, but for now we have to do this.

Bonus for Rails: PostgreSQL for writing queries within the editor.

Apollo-React hooks can easily refetch queries

React-apollo hooks allow you to easily refetch queries after a mutation, which is useful for updating a list when you have create/update/delete mutations.

useMutation accepts an option called refetchQueries that will re-run the queries named in the array.

import React from "react";
import { useMutation } from "@apollo/react-hooks";
import gql from "graphql-tag";
import { Button } from "react-native";
import { Item } from "./types";

const ITEM_DELETE = gql`
  mutation ItemDelete($id: ID!) {
    itemDelete(id: $id) {
      id
    }
  }
`;

interface Props {
  item: Item;
}

const DeleteItemButton = ({item}: Props) => {
  const [deleteItem] = useMutation(ITEM_DELETE, {
    variables: { id: item.id },
    refetchQueries: ["GetItemList"],
  });

  return (
    <Button title="Delete" onPress={() => deleteItem()} />
  )
}

Assert Linked Process Raised Error

Linked processes bubble up errors, but not in a way that you can catch with rescue:

test "catch child process error?" do
  spawn_link(fn -> 
    raise "3RA1N1AC"
  end)
rescue 
  e in RuntimeError ->
    IO.puts e
end

This test fails because the error wasn’t caught. The error bubbles up outside of normal execution so you can’t rely on procedural methods of catching the error.

But, because the error causes an exit in the parent process (the test process), you can trap the exit with Process.flag(:trap_exit, true). This flag changes exit behavior: instead of exiting, the parent process will now receive an :EXIT message.

test "catch child process error?" do
  Process.flag(:trap_exit, true)

  child_pid = spawn_link(fn -> 
    raise "3RA1N1AC"
  end)

  assert_receive {
    :EXIT,
    ^child_pid,
    {%RuntimeError{message: "3RA1N1AC"}, _stack}
  }
end

The error struct is returned in the message tuple, so you can pattern match on it and make assertions about it.

This method is still subject to race conditions. The child process must raise the error before assert_receive times out.

There is a different example in the Elixir docs for catch_exit.

Assert Test Process Did or Will Receive A Message

The ExUnit.Assertions module contains a function, assert_receive, whose docs state:

Asserts that a message matching pattern was or is going to be received within the timeout period, specified in milliseconds.

It should perhaps also say “received by the test process”. Let's see if we can send a message from a different process and assert that the test process receives it:

test_process = self()

spawn(fn ->
  :timer.sleep(99)
  send(test_process, :the_message)
end)

assert_receive(:the_message, 100)
# It Passes!!!

In the above code, :the_message is sent 1 millisecond before the timeout, and the assertion passes.

Now let’s reverse the assertion to refute_receive, and change the sleep to the same time as the timeout.

test_process = self()

spawn(fn ->
  :timer.sleep(100)
  send(test_process, :the_message)
end)

refute_receive(:the_message, 100)
# It Passes!!!

Yep, it passes.

Firefox Built-in JSON Tools ⛏

Recent versions of Firefox (I’m on 67.0.1) have built-in tools to explore JSON.

Here’s a screenshot of a JSON file from the React Native docs, viewed in Firefox:

image

We get syntax highlighting, the ability to save, copy, expand, or collapse the JSON, right in the browser, with no extra plugins or tools required. Another great Firefox feature!

Override Markdown Listing 📝

Markdown parsers have a neat feature: a list of any numbers tends to be turned into an ordered list.

I write:

99. First thing (?)
1. Second thing (??)
50. Third thing (??)
1. Fourth thing (????)

And TIL’s markdown parser (Earmark) produces this as HTML:

  1. First thing (?)
  2. Second thing (??)
  3. Third thing (??)
  4. Fourth thing (????)

Pretty cool! Sometimes I want to do my own thing, though. This Stack Overflow question elicits a bunch of techniques; the one I learned today and used with success is:

99\. First thing (?)
1\. Second thing (??)
50\. Third thing (??)
1\. Fourth thing (????)

Escape the pattern; break the system! This preserved my original numbering. Now, I may have other problems, like losing automatic ol and li tags, but the hack works.

Are All Values True in Postgres

If you have values like this:

chriserin=# select * from (values (true), (false), (true)) x(x);
 x
---
 t
 f
 t

You might want to see if all of them are true. You can do that with bool_and:

chriserin=# select bool_and(x.x) from (values (true), (false), (true)) x(x);
bool_and 
---
 f

And when they are all true:

chriserin=# select bool_and(x.x) from (values (true), (true), (true)) x(x);
bool_and 
---
 t
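
Postgres also provides every(), the SQL standard's spelling of the same aggregate, so this is equivalent:

chriserin=# select every(x.x) from (values (true), (true), (true)) x(x);
 every
---
 t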

Push git branch to another machine

Whenever I'm working in the same git repo on multiple machines and need a branch on both, I usually push the branch to a shared remote for the sole purpose of pulling it down on the other machine. This third machine can be avoided by pushing directly between machines:

$ myproject(feature-branch): git remote add machine2 ssh://machine2:/Users/dillon/dev/myproject
$ myproject(feature-branch): git push machine2 feature-branch

Over on machine2:

$ myproject(master): git branch
* master
  feature-branch       <= 😯

Hiding and Revealing Struct Info with `inspect`

There are two ways to hide information when printing structs in Elixir.

The first is implementing the Inspect protocol.

defmodule Thing do
  defstruct color: "blue", tentacles: 7
end

defimpl Inspect, for: Thing do
  def inspect(thing, _opts) do
    "A #{thing.color} thing!"
  end
end

So now in iex I can’t tell how many tentacles the thing has:

> monster = %Thing{color: "green", tentacles: 17}
> IO.inspect(monster, label: "MONSTER")
MONSTER: A green thing!
A green thing!

Note that iex uses inspect to output data, which can get confusing.

NEW IN ELIXIR 1.8: You can also hide data with @derive

defmodule Thing do
  @derive {Inspect, only: [:color]}
  defstruct color: "blue", tentacles: 7
end

And now you won't see tentacles on inspection:

> monster = %Thing{color: "green", tentacles: 17}
> IO.inspect(monster)
#Thing<color: "green", ...>

In both cases, you can reveal the hidden information with the structs: false option:

> monster = %Thing{color: "green", tentacles: 17}
> IO.inspect(monster)
#Thing<color: "green", ...>
> IO.inspect(monster, structs: false)
%{__struct__: Thing, color: "green", tentacles: 17}

Custom Validation in Ecto

Sometimes the standard validation options aren’t enough.

You can create a custom validator in Ecto using the validate_change function.

In this particular case, I want to validate that thing has an odd number of limbs. I can pass the changeset to validate_odd_number.

def changeset(thing, attrs) do
  thing
  |> cast(attrs, [
    :number_of_limbs
  ])
  |> validate_odd_number(
    :number_of_limbs
  )
end

And then define validate_odd_number like this:

def validate_odd_number(changeset, field) when is_atom(field) do
  validate_change(changeset, field, fn (f, value) ->
    if rem(value, 2) == 0 do
      [{f, "This field must be an odd number"}]
    else 
      []
    end
  end)
end

We pass a function to validate_change that takes the field atom and the value for that field. In that function we test for oddness and return a keyword list that contains the field name and an error message.

Remember that when f is :number_of_limbs:

    [{f, "hi"}] == [number_of_limbs: "hi"]
    # true, these data structures are equal
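
Wired together, the error shows up on the changeset like any built-in validation. A sketch, assuming a Thing schema with an integer number_of_limbs field:

changeset = Thing.changeset(%Thing{}, %{number_of_limbs: 4})
changeset.valid?
# false
changeset.errors
# [number_of_limbs: {"This field must be an odd number", []}]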

Parallel xargs fails if any of its children do

We like to write about xargs. In addition to all that, it turns out xargs is a great tool for easily parallelizing tests, linters, or anything where some runs may pass and some may fail. If any of the processes that xargs spawns fail, the xargs call will also fail.

All child processes exit zero:


% echo "0\n0\n0" | xargs -Icode -P4 sh -c 'exit code'; echo exit code: $?
exit code: 0

And so does xargs! If any child exits non-zero:

echo "0\n1\n127" | xargs -Icode  -P4 sh -c 'exit code'; echo exit code: $?
exit code: 1

xargs follows suit.

Ecto's `distinct` adds an order by clause

When using distinct on you may encounter this error:

select distinct on (color) id, color from fruits order by id;
-- ERROR:  SELECT DISTINCT ON expressions must match initial ORDER BY expressions

This is because distinct on needs to encounter the rows in a specific order so that it can determine which row to take. The expressions must match the initial ORDER BY expressions!

So the above query works when it looks like this:

select distinct on (color) id, color from fruits order by color, id;

OK, now it works.

Ecto’s distinct function helps us avoid this common error by prepending an order by clause ahead of the order by clause you add explicitly.

This Elixir statement:

Fruit |> distinct([f], f.color) |> order_by([f], f.id) |> Repo.all

Produces this SQL (cleaned up for legibility):

select distinct on (color) id, color from fruits order by color, id;

Ecto added color to the order by!

Without any order by at all, distinct does not prepend the order by.

Read the docs!

Find File Case-Insensitively 🔎

When using the find command, consider the -iname flag. It works like -name:

True if the last component of the pathname being examined matches pattern.

But with case insensitivity:

-iname pattern
  Like -name, but the match is case insensitive.

Here’s a sample command, which will match utils.js and Utils.js.

$ find . -iname Utils.js

See man find for more information. h/t Raelyn.

Transactions can time out in Elixir

In Ecto, transactions can time out. So this type of code:

  Repo.transaction fn -> 
    # Many thousands of expensive queries
    # And inserts
  end

This code might fail after 15 seconds, which is the default timeout.

In Ecto you can specify what the timeout should be for each operation. All functions that make a request to the database share the same set of options, of which :timeout is one.

Repo.all(massive_query, timeout: 20_000)

The above query now times out after 20 seconds.

These shared options apply to a transaction as well. If you don’t care that a transaction is taking a long time you can set the timeout to :infinity.

  Repo.transaction(fn -> 
    # Many thousands of expensive queries
    # And inserts
  end, timeout: :infinity)

Now this operation will be allowed to finish, despite the time it takes.

Timing A Function In Elixir

Erlang provides the :timer module for all things timing. The oddly named function tc will let you know how long a function takes:

{uSecs, :ok} = :timer.tc(IO, :puts, ["Hello World"])

Note that uSecs is microseconds, not milliseconds, so divide by 1_000_000 to get seconds.

Microseconds are helpful, though, because sometimes functions are just that quick.

:timer.tc(IO, :puts, ["Hello World"])
# {22, :ok}

You can also call :timer.tc with a function and args:

adding = fn (x, y) ->  x + y end

:timer.tc(adding, [1,3])
# {5, 4}

Or just a function:

:timer.tc(fn ->
  # something really expensive
  :ok
end)
# {1_302_342, :ok}

Ack ignores node_modules by default

When searching through your JavaScript project it doesn't make sense to search through your node_modules. But if you are on a spelunking journey into the depths of your dependencies, you may want to search through all your node_modules!

ack ignores node_modules by default, and ack being ack you can ack through ack to check it out:

> cat `which ack` | ack node_modules
--ignore-directory=is:node_modules

This is different behaviour from ag and rg which also ignore node_modules but not explicitly. They both ignore node_modules by ignoring all entries in the .gitignore file.

rg claims to implement full support for the .gitignore file while also claiming other search tools do not. The open issues list for ag bears that out.

With each of these tools, explicitly stating the directory to search through overrides the ignore.

> ack autoprefix node_modules
> rg autoprefix node_modules
> ag autoprefix node_modules

`user-select:none` needs prefixes for each browser

The user-select CSS property governs whether text is selectable for a given element. user-select: none means that it is not selectable.

What’s interesting about this property is that while each browser supports it, they each require their own prefix, except Chrome, which does not need a prefix.

In Create React App, what starts out as:

user-select: none;

Gets expanded to:

-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;

But the default browserslist configuration for development in your package.json file is:

"development": [
    "last 1 chrome version",
    "last 1 firefox version",
    "last 1 safari version"
]

And so in development, it gets expanded to:

-webkit-user-select: none;
-moz-user-select: none;
user-select: none;

Sorry, Microsoft.
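
If you do want the -ms- prefix while developing, one sketch of a fix is adding Edge, which at the time still wanted the prefix, to the development list:

"development": [
    "last 1 chrome version",
    "last 1 firefox version",
    "last 1 safari version",
    "last 1 edge version"
]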

How does Create React App know which prefixes to render? The caniuse-lite npm package has up-to-date support data for each property, and user-select: none is defined here.

That data is compressed. Here’s how to uncompress it using the node cli to see what it represents:

const compressedData = require('caniuse-lite/data/features/user-select-none');
const caniuse = require('caniuse-lite');
const unpackFunction = caniuse.feature;
unpackFunction(compressedData);

This is accomplished in Create React App by the npm package autoprefixer.

Reduce Depth of an Existing Git Repo ⚓️

git pull --depth works on an existing repo to set the depth of references retrieved. To quote the --depth docs:

Limit fetching to the specified number of commits from the tip of each remote branch history.

So, what happens when you run this? Here’s me experimenting with the Elixir repo:

$ git clone https://github.com/elixir-lang/elixir.git
$ cd elixir
$ git log --oneline | wc -l
   15670 # 15670 entries!
$ git pull --depth=1
# ...pulling the shallow repo
$ git log --oneline | wc -l
   1 # 15670 to 1... woah!
$ git log
# ...output from just one commit

Is .git/ a lot smaller now? Not yet, because there are many dangling references. This Stack Overflow answer shows you how to cleanup that directory. After setting a depth of one and following its instructions, I reduced the Elixir repo’s .git/ size by 90%.

Check out man git-pull for more information.

Zoom Hotkeys to Toggle Audio and Video 🎛

Zoom users, two new hotkeys for you on Mac: CMD-SHIFT-A to toggle audio and CMD-SHIFT-V to toggle video. The ability to control my A/V presentation in long video calls makes these hotkeys worth the muscle memory investment.

Zoom has another handy audio feature in preferences: temporarily unmute by pressing the spacebar. This is perfect for standups, where I mostly want to be muted, but need to be able to unmute quickly for short periods of time.

Dash Docset Keywords 🗝

Dash has a customizable keyword for each Docset. Here are the defaults for a random Hashrocket collection*:

image

These keywords scope search results to one Docset; use one in the Dash search bar like so:

elixir:String.split

And from Alfred:

dash elixir:String.split

This is useful when you have many Docsets, and are sure where you want to look, which should be true almost all the time. It’s nice when using two complementary technologies like Elixir and Elm that share a function namespace (such as String.split).

* Also note: this list order can be changed, as noted in the table header. Move libraries you use often above those you use rarely.

Dash Find in Page 🔎

To build upon my earlier post, Integrate Alfred and Dash: Dash supports find-in-page search via a space in the query. To search on a docs page, add a space after the function or keyword name, and then enter your search query.

Here's a command you could run to find JavaScript's Array.prototype.unshift() and search on that docs page for the 'specification' header, where the associated ECMAScript standards for unshift are found.

unshift specification

This command from Alfred could look like:

dash unshift specification

Integrate Alfred and Dash 🤝

Today I'm testing an integration between Alfred, the powerful alternative to Finder, and Dash, the local-documentation tool for Mac. My goal is a single command, run from anywhere on a Mac, that searches one or many sets of docs for a named function.

This tutorial showed me how to do it. TL;DR, it's a one-click setup in Dash:

https://github.com/Kapeli/Dash-Alfred-Workflow

Once complete, I can trigger Alfred (with CMD + SPACE, or any key combo that’s not assigned elsewhere), type dash List.reverse, hit enter, and Alfred loads all the List.reverse docs I’ve downloaded in Dash.

Custom React Hook Must Use `use`

You can build your own hooks by composing existing hooks.

Here, I create a custom hook useBoolean by wrapping useState:

const useBoolean = () => useState(true);

Which I can then use in my component:

function Value() {
  const [value, setValue] = useBoolean();

  return <div onClick={() => setValue(!value)}>Click me {String(value)}</div>;
}

The React documentation very politely asks that you start the name of your hook with use. This isn't strictly necessary, and it will still work if you call it:

const doBoolean = () => useState(true);

But that violates the Rules of Hooks.

You can include an eslint plugin that will prevent you from breaking the rules. This plugin is installed by default in create-react-app version 3.
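
Outside of create-react-app, the plugin is eslint-plugin-react-hooks; enabling it looks something like this in your ESLint config (a sketch):

{
  "plugins": ["react-hooks"],
  "rules": {
    "react-hooks/rules-of-hooks": "error",
    "react-hooks/exhaustive-deps": "warn"
  }
}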

Aggregate Arrays In Postgres

array_agg is a great aggregate function in Postgres, but it gets weird when the values being aggregated are themselves arrays.

First let’s look at what array_agg does on rows with integer columns:

select array_agg(x) from (values (1), (2), (3), (4)) x (x);
-- {1,2,3,4}

It puts each value into an array. What if our values are arrays?

select array_agg(x)
from (values (Array[1, 2]), (Array[3, 4])) x (x);
-- {{1,2},{3,4}}

But this doesn't work when the arrays have different numbers of elements:

select array_agg(x)
from (values (Array[1, 2]), (Array[3, 4, 5])) x (x);
-- ERROR:  cannot accumulate arrays of different dimensionality

If you are trying to accumulate elements to process in your code, you can use jsonb_agg.

select jsonb_agg(x.x)
from (values (Array[1, 2]), (Array[3, 4, 5])) x (x);
-- [[1, 2], [3, 4, 5]]

The advantage of using Postgres arrays however is being able to unnest those arrays downstream:

select unnest(array_agg(x))
from (values (Array[1, 2]), (Array[3, 4])) x (x);
--      1
--      2
--      3
--      4

Send a Command's Output to Vim (Part II) ↪️

In a Hashrocket blog post, 10 Vim Commands for a Better Workflow, I wrote about :.! <command>, which replaces your current line with the output of <command>.

Today I learned what this technique is called: filtering. From the docs:

A filter is a program that accepts text at standard input, changes it in some way, and sends it to standard output. You can use the commands below to send some text through a filter, so that it is replaced by the filter output.

An even shorter version is just !!<command> in normal mode. A use case for this would be writing docs, where command-line output (ls, curl, etc.) can help demonstrate an idea.
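
For example, with the cursor on an empty line in normal mode, typing this:

!!date

replaces the line with the output of the date command, exactly as :.!date would.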

Check out :help !! to see all the permutations of this interesting command.

ALL CAPS SQL

A while ago I read The Mac Is Not a Typewriter by Robin Williams. In it, the author claims:

Many studies have shown that all caps are much harder to read. We recognize words not only by their letter groups, but also by their shapes, sometimes called the “coastline.” —pg. 31, The Mac Is Not a Typewriter

I’ve found this to be true. When we teach SQL, students are often surprised that we don’t capitalize PostgreSQL keywords, preferring this:

select * from posts limit 5;

To this, which you might see in an SQL textbook:

SELECT * FROM posts LIMIT 5;

My arguments against the latter syntax: it’s practically redundant in PostgreSQL, it’s harder to type, and it’s unnecessary because any good text editor highlights the keywords. Now I have another: such writing has been, in typesetting, shown to be harder to read. WHERE and LIMIT look similar from a distance in all-caps, but they mean and do different things.

It’s a style opinion each developer gets to refine for themselves. To quote Williams: “Be able to justify the choice.”