Today I Learned

A Hashrocket project

56 posts by gabrielreis @greis

Convert array to object with Lodash

I was trying to find a lodash method that works the same way as Ruby's index_by, and I found keyBy:

> const memberships = [{groupId: '1', status: "active"}, {groupId: '2', status: "inactive"}]
> keyBy(memberships, "groupId")

{ '1': { groupId: '1', status: 'active' },
  '2': { groupId: '2', status: 'inactive' } }

Funny enough, in a previous version of lodash this method was called indexBy, same as the Ruby version.
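If you are curious what keyBy does conceptually, here is a minimal sketch in plain JavaScript (illustrative, not lodash's actual implementation), assuming every item has the key you index by:

```javascript
// Minimal keyBy sketch: index an array of objects by one of their keys.
// Later items with the same key overwrite earlier ones, like lodash's keyBy.
function keyBy(items, key) {
  return items.reduce((acc, item) => {
    acc[item[key]] = item
    return acc
  }, {})
}

const memberships = [
  {groupId: '1', status: 'active'},
  {groupId: '2', status: 'inactive'},
]

const byGroupId = keyBy(memberships, 'groupId')
// byGroupId['1'] is {groupId: '1', status: 'active'}
```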

How to force reload associations in Ecto

Ecto by default doesn't load associations and you have to explicitly call preload:

id = 1
user = Repo.get(User, id) |> Repo.preload(:posts)

If you call preload again on the same user, Ecto won't make the SQL query again because the association is already loaded:

user = user |> Repo.preload(:posts)

But in some cases you do want to reload a preloaded association; for that you can use force: true:

user = user |> Repo.preload(:posts, force: true)

Preview ffmpeg video filters without re-encoding

I was playing around with some ffmpeg filters, like cropping, scaling, and overlays, and I was tired of waiting for the video to be fully re-encoded just to see the changes:

ffmpeg -i video.mp4 -vf "crop=in_w:in_h/2:0:0" -c:a copy output.mp4

I'm glad this is not actually a problem, because you can use ffplay to preview the changes instantly without having to wait:

ffplay -i video.mp4 -vf "crop=in_w:in_h/2:0:0"

Preloading data for Phoenix LiveView components

One of the most common problems in web development is the N+1 query. GraphQL taught me the DataLoader pattern, and since then I've never had to worry about N+1 queries as long as I kept using that pattern.

With Phoenix LiveView you don't need to think about APIs anymore, and I'm glad it supports preloading data in much the same way a DataLoader would. You can define a preload function that receives a list of all the assigns, so you can load the data needed by multiple components in a single batch before they are mounted. Here is the sequence of a component's lifecycle:

preload(list_of_assigns) -> mount(socket) -> update(assigns, socket) -> render(assigns)

In this example only one SQL query is made to load all products by id:

@impl true
def preload(list_of_assigns) do
  product_ids =, & &1.product_id)

  products =
    from(p in Product, where: in ^product_ids, select: {, p})
    |> Repo.all()
    |>, fn assigns ->
    Map.put(assigns, :product, products[assigns.product_id])
  end)
end

Phoenix LiveView with some JSX flavor

Today I learned from Vinny about this Elixir library called Surface that allows you to write LiveView components that look a lot like JSX from React.

You first create a module that uses Surface.Component and define some props:

defmodule MyApp.WelcomeMessage do
  use Surface.Component

  prop name, :string, required: true

  def render(assigns) do
    ~F"""
    <div class="title">Hello, <strong>{@name}!</strong></div>
    """
  end
end

Then your LiveView module needs to use Surface.LiveView and you are ready to render that new "JSX" component passing some props:

defmodule MyApp.ExampleLive do
  use Surface.LiveView
  alias MyApp.WelcomeMessage

  def render(assigns) do
    ~F"""
    <div class="container">
      <WelcomeMessage name="Vinny" />
    </div>
    """
  end
end

Surface also handles state and has a way to handle portals, which it calls slots. I might play with this library in one of my side projects.

Nested attributes in Phoenix (Live) Views

To use nested attributes in a form you have to use the inputs_for helper:

<%= f = form_for @changeset, Routes.user_path(@socket, :create), phx_submit: :submit, as: :user %>
  <%= input f, :name %>
  <%= inputs_for f, :address, fn a -> %>
    <%= input a, :city %>
    <%= input a, :zipcode %>
  <% end %>
  <%= submit "Create" %>
</form>

And then you can use cast_assoc and specify the changeset for the nested schema you want to validate:

defmodule MyApp.User do
  def registration_changeset(user, attrs, opts \\ []) do
    user
    |> cast(attrs, [:name])
    |> cast_assoc(:address, with: &MyApp.Address.changeset/2)
    |> validate_required([:name, :address])
  end
end

defmodule MyApp.Address do
  def changeset(address, attrs, opts \\ []) do
    address
    |> cast(attrs, [:city, :zipcode])
    |> validate_required([:city, :zipcode])
  end
end

Session cookie with Phoenix LiveView

You can use phx-trigger-action to make sure a form submit goes over HTTP instead of through the socket.

In the example below, when @trigger_submit is true the form will be submitted to the path Routes.user_session_path(@socket, :create), which will hit the controller and set the cookie:

<%= f = form_for @changeset, Routes.user_session_path(@socket, :create), phx_trigger_action: @trigger_submit, phx_change: :validate, phx_submit: :submit, as: :user %>
  <%= input f, :email %>
  <%= input f, :password %>
  <%= submit "Sign In", disabled: !@changeset.valid? %>
</form>

Reversed git log

I use git log -p a lot.

git log shows all the commits and messages in descending order and the -p flag includes the code that changed.

It helps me understand why some changes happened in the codebase, search for when things were introduced, and more.

Recently I wanted to see the history from the beginning and, guess what, there's a flag for that:

git log -p --reverse

How to convert JSON to CSV with jq

I had a JSON file that was an array of objects, and some of those objects had different keys. I wanted to visualize that data in a spreadsheet (data science, AI, machine learning stuff), so I thought about generating a CSV file where each JSON key becomes a CSV column.

// file.json
[
  {
    "type": "Event1",
    "time": 20
  },
  {
    "type": "Event2",
    "distance": 100
  }
]

jq to the rescue:

cat file.json | jq -r '(map(keys) | add | unique) as $cols | map(. as $row | $cols | map($row[.])) as $rows | $cols, $rows[] | @csv' > file.csv

Here is the output:

"distance","time","type"
,20,"Event1"
100,,"Event2"

Rails 6 Blocked Hosts

Rails 6 has a new feature where only whitelisted hosts are allowed to access the app. By default only localhost is permitted.

When doing mobile development, you always need to test the app on a real device that connects to a backend. To automatically add the dev machine's host to the allowed list, just change your development.rb:

# config/environments/development.rb 

config.hosts << "#{`hostname -s`.strip}.local"

Combine Records from Different Tables

Let's say you have 2 users and 3 categories and you want a query to return the combination of all the records, resulting in 6 rows.

You can use cross join to do that:

select as user_id, as category_id
from users cross join categories

 user_id | category_id
---------+-------------
       1 |           1
       1 |           2
       1 |           3
       2 |           1
       2 |           2
       2 |           3

Git Interactive Rebase The First Commit

Every time I want to do an interactive rebase, I pass the number of commits I want to go back using head~number:

> git rebase -i head~3

Recently I created a repo that had only 2 commits and I got an error when I tried to do a rebase the same way:

> git rebase -i head~2
fatal: Needed a single revision
invalid upstream 'head~2'

To avoid that error, you can use the --root option to rebase from the first commit:

> git rebase -i --root

Elixir Pattern Matching with Variables

Let's say you have a variable whose value you want to use in a pattern match.

By default Elixir won't use the variable's value in the pattern match; it will do a regular assignment, overriding the variable's original value:

iex(1)> year = 2020
2020

iex(2)> car = %{year: 2019}
%{year: 2019}

iex(3)> %{year: year} = car
%{year: 2019}

iex(4)> year
2019

Elixir has the pin operator ^ that does exactly what we need. If we use the pin operator in our example and the value doesn't match, we get an error:

iex(1)> year = 2020
2020

iex(2)> car = %{year: 2019}
%{year: 2019}

iex(3)> %{year: ^year} = car
** (MatchError) no match of right hand side value: %{year: 2019}

Archiving React Native iOS projects on Xcode 10+

I was getting the following error on CI when it tried to archive the project, and I couldn't figure out what was going on:

xcodebuild failed with return code: 65

So I decided to archive the iOS project locally and the error was different:

:-1: Multiple commands produce 
1) Target 'React' has a command with output '~/Library/Developer/Xcode/DerivedData/.../IntermediateBuildFilesPath/UninstalledProducts/iphoneos/libReact.a'
2) Target 'React' has a command with output '~/Library/Developer/Xcode/DerivedData/.../IntermediateBuildFilesPath/UninstalledProducts/iphoneos/libReact.a'

:-1: Multiple commands produce 
1) Target 'yoga' has a command with output '~/Library/Developer/Xcode/DerivedData/.../IntermediateBuildFilesPath/UninstalledProducts/iphoneos/libyoga.a'
2) Target 'yoga' has a command with output '~/Library/Developer/Xcode/DerivedData/.../IntermediateBuildFilesPath/UninstalledProducts/iphoneos/libyoga.a'

Then I googled it and found a comment on GitHub suggesting a workaround, so I added this post_install block at the end of the Podfile:

post_install do |installer|
  installer.pods_project.targets.each do |target|
    # remove the duplicated targets that CocoaPods generates
    if == "React"
      target.remove_from_project
    end

    if == "yoga"
      target.remove_from_project
    end
  end
end

It worked locally and on the CI.

Finally I gave a 👍 on that comment.

Get the return type of a function in TypeScript

Sometimes you only want to rely on the return type of a function instead of defining a new type for it.

To do that in TypeScript use ReturnType and typeof:

function extractStatusData(buffer: Buffer) {
  return {
    active: buffer[0] !== 1,
    totalErrors: buffer[1],
  }
}

function onStatusChange(callback: (data: ReturnType<typeof extractStatusData>) => void) {
  listenToSomeBinaryStuff(buffer => {
    const statusData = extractStatusData(buffer)
    callback(statusData)
  })
}

onStatusChange(({active, totalErrors}) => {
  if (active) {
    console.log(`Task is still running`)
  } else {
    console.log(`Task is completed with ${totalErrors} errors`)
  }
})
Compare Dates in Neo4j.rb

Neo4j.rb stores dates as timestamps so you will have to convert your date object into a timestamp.

To convert a date object into a timestamp, first convert it to UTC time and then to an integer:

Date.current.to_time(:utc).to_i
And in your Cypher query you are safe to use the comparison operators:

where('post.published_at <= ?', Date.current.to_time(:utc).to_i)

Pretty Print JSON responses from `curl` - Part 3

If you thought the output from the last TIL was pretty enough, you were wrong.

Dennis Carlsson tweeted me about a tool called bat that has automatic syntax highlighting for a bunch of different languages and can also display line numbers.

Just pipe bat after jq and you are good to go:

> curl '' | jq | bat

       β”‚ STDIN
   1   β”‚ {
   2   β”‚   "data": {
   3   β”‚     "posts": [
   4   β”‚       {
   5   β”‚         "title": "Pretty Print JSON responses from `curl` - Part 2",
   6   β”‚         "slug": "utpch45mba"
   7   β”‚       },
   8   β”‚       {
   9   β”‚         "title": "Pretty Print JSON responses from `curl`",
  10   β”‚         "slug": "pgyjvtuwba"
  11   β”‚       },
  12   β”‚       {
  13   β”‚         "title": "Display line break content in React with just CSS",
  14   β”‚         "slug": "mmzlajavna"
  15   β”‚       }
  16   β”‚     ]
  17   β”‚   }
  18   β”‚ }

If you know any other tricks on making stdout prettier I would love to learn them.

Pretty Print JSON responses from `curl` - Part 2

After posting my last TIL, Vinicius showed me another tool that goes beyond just pretty printing: jq

If you don't pass any args to jq, it will just pretty print, the same as json_pp:

> curl '' | jq

{
  "data": {
    "posts": [
      {
        "title": "Pretty Print JSON responses from `curl`",
        "slug": "pgyjvtuwba"
      },
      {
        "title": "Display line break content in React with just CSS",
        "slug": "mmzlajavna"
      },
      {
        "title": "Mutations with the graphql-client Ruby gem",
        "slug": "xej7xtsnit"
      }
    ]
  }
}

What if you only want to display the first post in the response? Just pass an argument to filter the keys you want. It's like XPath for JSON: jq '.data.posts[0]'

> curl '' | jq '.data.posts[0]'

{
  "title": "Pretty Print JSON responses from `curl`",
  "slug": "pgyjvtuwba"
}

See Part 3

Pretty Print JSON responses from `curl`

When you use curl to manually make API calls, sometimes the response is not formatted:

> curl ''

{"data":{"posts":[{"title":"Display line break content in React with just CSS","slug":"mmzlajavna"},{"title":"Mutations with the graphql-client Ruby gem","slug":"xej7xtsnit"},{"title":"The rest of keyword arguments 🍕","slug":"o2wiclcyjf"}]}}%

You can pipe json_pp at the end so you get a prettier JSON response:

> curl '' | json_pp

{
   "data" : {
      "posts" : [
         {
            "slug" : "mmzlajavna",
            "title" : "Display line break content in React with just CSS"
         },
         {
            "title" : "Mutations with the graphql-client Ruby gem",
            "slug" : "xej7xtsnit"
         },
         {
            "title" : "The rest of keyword arguments 🍕",
            "slug" : "o2wiclcyjf"
         }
      ]
   }
}

See Part 2

Display line break content in React with just CSS

Let's say you have an array of strings and you want to display its content inside a paragraph:

const locations = ['Location 1', 'Location 2']

<p>{locations.join('\n')}</p>

That will display all the items inline, even though the line break is present:

Location 1 Location 2

Instead of adding unnecessary <br /> tags to the markup you can simply specify a css property to render line breaks:

<p style="white-space: pre-line">{locations.join('\n')}</p>

This will look like:

Location 1
Location 2

Mutations with the graphql-client Ruby gem

The graphql-client gem is a really good library for consuming GraphQL APIs in Ruby.

You can execute a mutation the same way as if it was a regular query passing the variables you want to use:

require 'graphql/client'
require 'graphql/client/http'

module Api
  # Adjust the endpoint URL for your API
  HTTP ="http://localhost:3000/graphql")
  Schema = GraphQL::Client.load_schema(HTTP)
  Client = Schema, execute: HTTP)
end

CreateCityMutation = Api::Client.parse(<<~'GRAPHQL')
  mutation($name: String) {
    createCity(name: $name) {
      # fields selected on the result (illustrative)
      id
      name
    }
  }
GRAPHQL

variables = {name: 'Jacksonville'}
result = Api::Client.query(CreateCityMutation, variables: variables)

The rest of keyword arguments 🍕

Sometimes you pass extra keyword arguments to a method that doesn't handle them, and you get the error ArgumentError: unknown keywords: ....

You can just use ** as the last argument to ignore the rest of the keyword arguments:

def make_pizza(cheese:, sauce:, **)
  puts "Making pizza with #{cheese} cheese and #{sauce} sauce"
end

make_pizza(cheese: 'mozzarella', sauce: 'tomato', chocolate: 'white', syrup: 'maple')

=> Making pizza with mozzarella cheese and tomato sauce

You can also give it a name to group them:

def make_pizza(cheese:, sauce:, **rest)
  puts "Making pizza with #{cheese} cheese and #{sauce} sauce"
  rest.each { |k, v| puts "#{v.capitalize} #{k} is not good for you" }
end

make_pizza(cheese: 'mozzarella', sauce: 'tomato', chocolate: 'white', syrup: 'maple')

=> Making pizza with mozzarella cheese and tomato sauce
=> White chocolate is not good for you
=> Maple syrup is not good for you

Export a synchronous method in React Native module

If you are working with an existing brownfield iOS app, you may have code that is really complicated to change, and sometimes you want a bridge module to export native methods to JS that return synchronous values right away. On iOS, you can use the macro RCT_EXPORT_BLOCKING_SYNCHRONOUS_METHOD:

#define API_URL @"http://localhost:3000"

@implementation ConfigManager

RCT_EXPORT_MODULE();

RCT_EXPORT_BLOCKING_SYNCHRONOUS_METHOD(getApiUrl)
{
  return API_URL;
}

@end

And in your JS file you can call:

import { NativeModules } from 'react-native';

const apiUrl = NativeModules.ConfigManager.getApiUrl();

There is a problem using this when your app is in debug mode: it raises the error global.nativeCallSyncHook is not a function. Also, this macro blocks the current thread, so only use it if you REALLY need it.

Handle JS Errors in React 16

React 16 introduced a new method called componentDidCatch to catch JS errors. It can catch errors during rendering, in lifecycle methods, and in constructors of the whole tree below it.

class App extends React.Component {
  componentDidCatch(error, info) {
    ErrorService.log(error, info);
  }

  render() {
    // code that raises an error; it could come from this component or its children.
  }
}

Message order constraint in RSpec

Sometimes you need to make sure your code executes in a specific order. In the example below we have a Payment double that needs to call processing! first and then approved!. So you write a test like this:

it "approves the payment" do
  payment = double("Payment")

  expect(payment).to receive(:processing!)
  expect(payment).to receive(:approved!)

  payment.processing!
  payment.approved!
end
If you change the order of the method calls your test will still pass:


Finished in 0.01601 seconds (files took 0.20832 seconds to load)
1 example, 0 failures

To guarantee the order, RSpec has an option to specify an order constraint. We want to make sure processing! is called before approved!:

 expect(payment).to receive(:processing!).ordered
 expect(payment).to receive(:approved!).ordered

Now if you run the test calling the methods in the wrong order, you will see the error:

it "approves the payment" do
  payment = double("Payment")

  expect(payment).to receive(:processing!).ordered
  expect(payment).to receive(:approved!).ordered

  payment.approved!
  payment.processing!
end

Failure/Error: payment.approved!
   #<Double "Payment"> received :approved! out of order

Invisible components in React Native

Sometimes you want a component to be rendered but not visible to the user. In React Native 0.42 and below, you could accomplish that by setting the component's height and width to 0.

The good news is that React Native 0.43 will make it easier and introduce the CSS property you are already familiar with: display: 'none'.

You can already try it by installing the pre-release version 0.43.0-rc.4.

Making your JS files prettier on Vim

We all use Vim at Hashrocket, and we can run prettier every time we save a JS file:

yarn global add prettier

" in your .vimrc:
autocmd FileType javascript set formatprg=prettier\ --single-quote\ --trailing-comma\ es5\ --stdin
" format the whole buffer with formatprg on save:
autocmd BufWritePre *.js :normal gggqG
" or the same, but restoring the cursor position afterwards:
autocmd BufWritePre *.js exe "normal! gggqG\<C-o>\<C-o>"

And yes. I like trailing comma,

Update: There is an open issue on GitHub for when prettier fails to parse the JS file, but there is a workaround for it.

Pipe | after | grep

The grep command may not work properly if you want to pipe another command after it, especially when you are tailing a file. Just call grep with --line-buffered:

heroku logs -t | grep --line-buffered "heroku\[router\]" | awk '{print $11" "$5}'

This command outputs an easy-to-read performance log for each HTTP request:

service=266ms path="/api/posts"
service=142ms path="/api/users"