Today I Learned

A Hashrocket project

280 posts by chriserin @mcnormalmode

Preventing access to `Window.opener`

When you open a link in a new tab with target="_blank" like so:

<a target="_blank" href="http://some.link">Somewhere</a>

Then the page that you open has access to the original page through Window.opener. Depending on your case you may not want this. Actually I can’t think of a case where you do want the new tab to refer back to the opening tab.

To prevent Window.opener access use rel="noopener":

<a target="_blank" href="http://some.link" rel="noopener">Somelink</a>

At this moment, this is not available in IE or Edge; check out the caniuse site for support details.

H/T Dillon Hafer

Use jq to filter objects list with regex

My use case is filtering an array of objects that have an attribute matching a particular regex and extracting another attribute from each matching object.

Here is my dataset, in a file called data.json:

[
  {"name": "Chris", "id": "aabbcc"},
  {"name": "Ryan", "id": "ddeeff"},
  {"name": "Ifu", "id": "aaddgg"}
]

First get each element of an array.

> jq -c '.[]' data.json
{"name":"Chris","id":"aabbcc"}
{"name":"Ryan","id":"ddeeff"}
{"name":"Ifu","id":"aaddgg"}

Pipe the elements to select and pass an expression that will evaluate to truthy for each object.

> jq -c '.[] | select(true) | select(.name)' data.json
{"name":"Chris","id":"aabbcc"}
{"name":"Ryan","id":"ddeeff"}
{"name":"Ifu","id":"aaddgg"}

Then use the test function to compare an attribute to a regex.

> jq -c '.[] | select(.id|test("a."))' data.json
{"name":"Chris","id":"aabbcc"}
{"name":"Ifu","id":"aaddgg"}

Extract the name attribute for a more concise list.

> jq -r '.[] | select(.id|test("a.")) | .name' data.json
Chris
Ifu

Read more about jq

Setting Struct properties with an index

It’s rare that I get a chance to use structs, but yesterday while parsing some xml (!!!) I wrote an algorithm where it would be handy to set values of a struct with an incrementing number rather than a symbol.

Lo and behold! Ruby Structs allow you to set attributes with either the name of the property or the ordinal with which it was declared.

2.5.0 :001 > Apple = Struct.new(:color, :size)
 => Apple
2.5.0 :002 > apple = Apple.new
 => #<struct Apple color=nil, size=nil>
2.5.0 :003 > apple[0] = 'red'
 => "red"
2.5.0 :004 > apple[1] = 'huge'
 => "huge"
2.5.0 :005 > apple
 => #<struct Apple color="red", size="huge">

Structs are a great data structure that I’m going to try to use more in my daily programming.

Setting timezones from command line on MacOS

We wrote some tests that failed recently in a timezone other than our own (☹️). In order to reproduce this failure we had to set the timezone of our local machine to the failure timezone.

sudo systemsetup -settimezone America/New_York

Easy enough. You can get all the timezones available to you with:

sudo systemsetup -listtimezones

And also check your current timezone

sudo systemsetup -gettimezone

systemsetup has 47 different time/system oriented commands. But it needs a sudo even to run -help.

> systemsetup -help
You need administrator access to run this tool... exiting!

Don’t fret, you can get this important secret information via the man page:

man systemsetup

Log rotation in ruby

Jake wrote a great til about linux log rotation last year. Ruby also has a log rotation alternative built into the stdlib.

Generally, ruby logging looks like this:

require 'logger'
logger = Logger.new('app.log')
logger.info('starting log')

You can also pass ‘daily’, ‘weekly’ or ‘monthly’ to the logger:

logger = Logger.new('week.log', 'weekly')

If I use this log today, Tue Jan 9, and next week on Wed Jan 17, then ls will show two log files:

week.log
week.log.20180113

20180113 is Saturday, the last day of the previous logging period; a new log is started every Sunday. NOTE: no files are deleted with this style of logging, so you may eventually run out of space.

You can also choose to log with the number of files to keep and the maximum size of each file.

logger = Logger.new('size.log', 4, 1024000)

In this case, after producing more than 4 MB of logging information, ls will show you:

size.log
size.log.1
size.log.2
size.log.3

Compared to linux’s logrotate this is missing features (compression, for example), but for adding log rotation to your logs quickly and simply it works great.

Documentation

flex-flow for shorter flex declarations

flex-flow is a css property that allows you to specify both flex-direction and flex-wrap in one property. In css parlance this is a shorthand property and like all shorthand properties the values are order agnostic when possible.

It will provide minimal efficiency gains but you can just avoid writing flex-direction altogether. When you need to specify a direction just type:

ul { 
  display: flex;
  flex-flow: row;
}

And then it’s easy to tack on a wrap when you determine it is necessary:

ul {
  display: flex;
  flex-flow: row wrap;
}

Native CSS Variables

CSS variables are normally associated with pre-processor languages like Sass, but css has its own native variables as well. Declare them like so:

element { 
  --black-grey: #111111;
}

And then use them with the var function like this:

element { 
  background-color: var(--black-grey);
}

In the above example if --black-grey is not set then the background-color will fallback to the inherited value. You can provide a default value for the variable if you anticipate it may not be set:

element {
  background-color: var(--black-grey, black);
}

Variables behave just like any other inherited or cascading properties and if you’d like them available for entire stylesheets then you would need to declare them in :root.

:root {
  --one-fathom: 16rem;
}

This css feature is safe to use everywhere but IE, though it is only recently safe. Firefox enabled this feature by default in version 56, released in Sept 2017.

You can read about them in the Mozilla Docs or the original W3C editor’s draft.

zsh change dir after hook w/`chpwd_functions`

How does rvm work? You change to a directory and rvm determines which version of ruby you’re using.

In bash, the cd command is overridden with a cd function. This isn’t necessary in zsh because zsh has hooks that execute after you change a directory.

If you have rvm installed in zsh you’ll see this:

> echo $chpwd_functions
__rvm_cd_functions_set
> echo ${(t)chpwd_functions}
array

chpwd_functions is an array of functions that will be executed after you run cd.

Within the function you can use the $OLDPWD and $PWD env vars to programmatically determine where you are, like this:

> huh() { echo 'one sec...'; echo $OLDPWD && echo $PWD }
> chpwd_functions+=huh
> cd newdir
one sec...
olddir
newdir

You can read more about hook functions in the zsh docs.

Comply with the erlang IO protocol

Below is a basic IO server/device implementation; it receives an :io_request message and responds with an :io_reply message.

defmodule MYIODevice do
  def listen() do
    receive do
      {:io_request, from, reply_as, {:put_chars, :unicode, message}} ->
        send(from, {:io_reply, reply_as, :ok})
        IO.puts(message)
        IO.puts("I see you")
        listen()
    end
  end
end

pid = spawn_link(MYIODevice, :listen, [])

IO.puts(pid, "Hey there")

The above code outputs the following to stdout:

Hey there
I see you

The first argument of IO.puts/2 is a pid representing the device that is being written to; it generally defaults to :stdio.

The documentation is dense, enjoy! Erlang IO Protocol

H/T Brian Dunn

Remove duplicates in zsh $PATH

How to do it?

typeset -aU path

There ya go. All entries in $PATH are now unique.

How does it work? Well that’s stickier. path and PATH are not the same thing. You can determine that by examining the type of each with the (t) variable expansion flag.

> echo ${(t)path}
array-unique-special
> echo ${(t)PATH}
scalar-export-special

They are linked variables, however: if you change one you change the other, but they have different properties. I have to admit at this point that scalar-export-special is something of which I’m ignorant.

The typeset declaration is simpler, it changes the type of path from array-special to array-unique-special. The -a flag is for array and the -U flag is for unique.

What files are being sourced by zsh at startup?

Tracking down a bug in the jagged mountains of shell files that are my zsh configuration can be tough. Enter the SOURCE_TRACE option, settable with the -o flag when opening a new shell with the zsh command.

zsh -o SOURCE_TRACE

Which outputs:

+/etc/zsh/zshenv:1> <sourcetrace>
+/etc/zsh/zshrc:1> <sourcetrace>
+/home/chris/.zshrc:1> <sourcetrace>
+/home/chris/.sharedrc:1> <sourcetrace>
+/home/chris/.zshrc.local:1> <sourcetrace>

The above output is an abbreviated representation of the actual files loaded on my system. I have language manager asdf installed which adds a couple of entries. I am an rvm user which adds 50 (!!!) entries. Lots of shell code to source for rvm. Additionally, I use Hashrocket’s dotmatrix which adds even more entries. Lots of sourcing to sort through.

This is handy in combination with print line (or echo) debugging. It gives your print lines added context when things get noisy.

Killing heroku dynos

Yesterday we encountered an odd situation: a rake task running on heroku that did not finish, through no fault of our own.

We killed the local processes kicked off by the heroku command line tool and restarted the rake task but got an error message about only allowing one free dyno. Apparently, the dyno supporting our rake task was still in use.

First we had to examine whether that was true with:

heroku ps -a myapp

Next step was to kill the dyno with the identifier provided by the above command.

heroku ps:kill dyno.1 -a myapp

We ran the rake task again, everything was fine, it worked great.

sub_filter + proxy_pass requires no gzip encoding

The nginx sub module is useful for injecting html into web requests at the nginx level. It looks like this:

location / {
  sub_filter '</head>' '<script>doSomething();</script></head>';
}

This replaces the closing head tag with a script tag followed by the closing head tag.

This does not work, however, when the response coming from the proxy_pass url is gzip encoded.

In this case, you want to specify to the destination server that you will accept no encoding whatsoever with the proxy_set_header directive, like so:

  proxy_set_header Accept-Encoding "";

Altogether it looks like this:

location / {
  proxy_set_header Accept-Encoding "";
  proxy_pass http://destination.example.com;
  sub_filter '</head>' '<script>doSomething();</script></head>';
}

Examine owner and permissions of paths

The ownership of an entire path hierarchy is important when nginx needs to read static content. For instance, the path /var/www/site.com/one/two/three/numbers.html may be problematic when directory three is owned by root rather than www-data. Nginx will respond with status code 403 (forbidden) when http://site.com/one/two/three/numbers.html is accessed.

To debug this scenario we need to examine the owners and permissions of each of the directories that lead to numbers.html. This can be accomplished in linux with the handy namei command in combination with realpath.

realpath will return the full path for the file.

> realpath numbers.html
/var/www/site.com/one/two/three/numbers.html

And namei -l <full_path> will examine each step in the file’s directory structure.

> namei -l $(realpath numbers.html)
drwxr-xr-x root     root     /
drwxr-xr-x root     root     var
drwxr-xr-x www-data www-data www
drwxr-xr-x www-data www-data site.com
drwxr-xr-x www-data www-data one
drwxr-xr-x root     root     two
drwxr-xr-x www-data www-data three
-rw-r--r-- www-data www-data numbers.html

Ah there it is. The directory two is owned by root rather than www-data. chown will help you from here.

Cursor Pagination with graphql

When a graphql object has a plural type, each object of the collection from that plural type will have a cursor associated with it. This cursor is generally a base64 encoded unique identifier referencing the search, the object and the place of that object amongst all objects in the collection.

Ask for the cursor as a property when iterating over a collection.

  query {
    User(login: "jbranchaud") {
      repositories(first: 100) {
        edges {
          node {
            cursor
            name
          }
        }
      }
    }
  }

The above query will return a cursor for each repo.

Get the last cursor of that collection with pageInfo.endCursor:

  query {
    User(login: "jbranchaud") {
      repositories(first: 100) {
        pageInfo {
          endCursor
          hasNextPage
        }
        edges {
          node {
            name
          }
        }
      }
    }
  }

The endCursor will look like:

Y3Vyc29yOnYyOpHOBK0NoA==

Use this cursor to obtain the next 100 repos with the after property.

  query {
    User(login: "jbranchaud") {
      repositories(first: 100, after: "Y3Vyc29yOnYyOpHOBK0NoA==") {
        pageInfo {
          endCursor
          hasNextPage
        }
        edges {
          node {
            name
          }
        }
      }
    }
  }

Posting data from a file with curl

Iterating on an api post request with curl can be frustrating if that involves a lot of command line editing. curl however can read a file for post body contents. Generally the --data option is used like this:

curl -XPOST --data '{"data": 123}' api.example.com/data

But when using an @ symbol you can reference a file:

curl -XPOST --data @data.json api.example.com/data

Now, you can run the same command for each iteration and edit the data.json file containing the data to be posted with your favorite text editor (which is vim right?).

`disable_with` to prevent double clicks

If a link or button initiates a request that takes a long time to respond then the user (being a user) might click the link or button again. Depending on the code implemented on the server this could put the application into a bad state.

Rails has a handy feature that will disable a link or button after the user has clicked it, disable_with:

<%= link_to('something',
      something_path(something),
      data: {
        disable_with: "Please wait..."
      }
    )
%>

In Rails 5.0 and up submit buttons have disable_with set by default. To disable disable_with use:

data: { disable_with: false }

Partial post body matching with webmock

When using webmock to stub out a post, you can specify that the post has a specific body. When dealing with web apis, that body is json and you want to compare the body to a ruby hash. From the webmock README, that looks like this:

stub_request(:post, "www.example.com").
  with(body: {a: '1', b: 'five'})

In the above example, the whole json body has to be represented in the hash. Webmock provides a way to examine just a portion of the json/hash with the hash_including method. This is not the rspec hash_including; this is a webmock-specific function using the WebMock::Matchers::HashIncludingMatcher class.

stub_request(:post, "www.example.com").
  with(body: hash_including({a: '1'}))

Using ngrok to evaluate webhooks in a dev env

A webhook, an http request to a server initiated by an outside service, is difficult to test. The outside service must have a valid url to make the request with, but your development server generally runs on the unreachable localhost.

The Ngrok product page describes ngrok like this:

It connects to the ngrok cloud service which accepts traffic on a public address and relays that traffic through to the ngrok process running on your machine and then on to the local address you specified.

If I have a local dev server on port 3000 I run ngrok like so:

> ngrok http 3000
Forwarding   http://92832de0.ngrok.io -> localhost:3000

Then, configure the outside service to point the webhook at the ngrok.io address provided by the ngrok tool. Now, your local server will receive http requests from the outside service and you can evaluate those requests effectively.

Remove namespaces for easier Xpath queries

Xpath can get strange when namespaces are involved

> doc = Nokogiri::XML(<<-XML)
  <a xmlns='http://www.example.com/xhtml'>
    <b>
      <c></c>
    </b>
  </a>
  XML
> doc.xpath("/a")
[] # returns empty array
> doc.xpath("/*[name()='a']")
# returns the first node

The [name()='a'] isn’t really clear and will become less clear as the query looks for elements deeper in the document. When the namespace doesn’t provide any value, for instance by helping to avoid collisions, or helping to validate the given xml document, then removing the namespace is entirely acceptable.

In Nokogiri you can remove namespaces from the document with remove_namespaces!.

> doc.remove_namespaces!
> doc.xpath('/a')
[<Node a>]

Now traversing the document with xpath will be significantly more straightforward.

Iterating over objects in Lodash

Many of the Lodash collection functions iterate over either an object or an array. Somewhat unintuitively for me, when iterating over objects the first argument of the iteratee function is a value and the second argument is the key.

The documentation for Lodash lists the arguments for the iteratee in the description for each function. For collection functions that generally looks like this

The iteratee is invoked with three arguments:(value, index|key, collection).

By searching through the documentation for index|key you can find all the functions for which this is true.

Using a Lodash function to iterate over an object looks like this:

const result = _.map({a: 1, b: 2}, function(value, key) {
  return value + key;
});

// result is now ["1a", "2b"]

H/T Ryan Messner

Reloading shell history in zsh

When you start a shell your history list is populated from your .zsh_history file. Depending on options that you have set, when you close a shell you write your history list to that same history file. Until then, that history is self-contained and not accessible from another shell.

There is a built-in zsh command to both write and read history from the .zsh_history file. fc -W will write to the history file. fc -R will read from the history file.
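This is easiest to see as an interactive sketch; the session below assumes two zsh shells sharing the default ~/.zsh_history:

```shell
# shell A: run a command, then write this session's history to the file
> echo 'hello from shell A'
> fc -W

# shell B (already open): read the history file back in
> fc -R
# shell B's history now ends with: echo 'hello from shell A'
```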

Ch-ch-ch-ch-changes

For every file in vim, vim remembers the position of the changes that have occurred. These change positions are stored in the changes list for each file and can be used for intrafile navigation with the two commands g, and g;.

This is really handy if you switch to a large file and need to quickly flip through the most recently changed sections of that file.

View the entire list of changes for a file with the command :changes.

`Object.entries` helps you iterate over objects

You may find yourself with an object with lots of entries that you need to transform into an array of extracted information. For example’s sake, let’s say:

  const things = {
    a: 1,
    b: 2
  }

And we need to turn that into an array [2, 3].

We can use Object.entries to turn the object into an array of key/value tuples.

const arr = Object.entries(things)
// arr === [["a", 1], ["b", 2]]

Then we can iterate over that array with map:

const mappedArr = arr.map((tuple) => {
  const [key, value] = tuple;
  return value + 1;
});
// mappedArr === [2, 3]

Object.entries is an ES2017 addition implemented in Chrome/Firefox/Edge and polyfilled with CoreJS, the polyfill library used by Babel.

glamorous composition for inline styling

glamorous is a library to help style components from the component itself rather than from a css stylesheet. You create a component with style in the form of a javascript object.

import glam from 'glamorous';

const InfoSpan = glam.span({
  backgroundColor: 'none',
  width: '20%',
  margin: '0 1rem',
  padding: '0 1rem',
  display: 'inline-block',
  boxSizing: 'borderBox'
});

Later when rendering, you can use this as if it were a component.

render() {
  return (
    <div>
      <InfoSpan>
        Some Important Information
      </InfoSpan>
    </div>
  );
}

You can also compose additional glamorous components by re-using previously declared glamorous components.

const LeftSpan = glamorous(InfoSpan, {
  textAlign: 'right'
});

const RedArticle = glamorous(Article, {
  textAlign: 'left'
});

Casting graphql types with inline fragments

Graphql works with a type system. You ask for fruit and it returns fruit, but that fruit could be a pear or an orange.

(contrived example apologies)

interface Fruit {
  seedless: Boolean
}

type Pear implements Fruit {
  seedless: Boolean
  color: String
}

type Apple implements Fruit {
  seedless: Boolean
  size: Int
}

So when you ask for fruit and you want the size of the apple, you have to let graphql know that you want an Apple and are expecting an Apple. While I think about this as casting, the graphql docs call it inline fragments.

  query {
    fruit {
      seedless
      ... on Pear {
        color
      }
      ... on Apple {
        size
      }
    }
  }

The ... is a syntactic element, not a writing convention.

Animating polygon must have same number of points

You can show an SVG polygon changing shape with SVG’s animate tag.

<polygon points="100,100 0,100 0,0">
  <animate
    to="50,50 0,50 0,0"
    from="100,100 0,100 0,0"
    dur="10s"
    attributeName="points"
  />
</polygon>

However, you cannot animate a shape change to a different number of points. For instance, if you have three points in the to attribute and four points in the from attribute, then nothing will happen.

Recently, I wanted to animate a change from a 4 pointed polygon to a 5 pointed polygon. In this case, I included an extra point in the 4 pointed polygon that was a duplicate of an existing point and right next to that same duplicated point. When changing shape to a 5 pointed polygon, the previously duplicated point moved out from that spot to its new spot and the polygon adjusted accordingly and pleasingly.

Checking that an association is loaded

Ecto will NOT load associations automatically; that is something you must do explicitly. Sometimes you might expect an association to be loaded, but because of the code path travelled, it is not.

You can check to see if an association is loaded with Ecto.assoc_loaded?

case Ecto.assoc_loaded?(post.channel) do
  true -> IO.puts('yep its loaded')
  false -> IO.puts('you do not have the data')
end

Legacy Refs vs Ref Callbacks

Reactjs provides a way to get references to dom elements that react is rendering through jsx. Previously, it was through what are now legacy refs:

componentWillUpdate() {
  this.refs.thing.tagName == "DIV";
}

render() {
  return (
    <div ref="thing"/>
  );
}

Where you can assign an element an identifier and react would keep a refs hash up to date with references to the dom for that element.

The react docs say this about the previous system:

We advise against it because string refs have some issues, are considered legacy, and are likely to be removed in one of the future releases.

The new system uses callbacks:

render() {
  return (
    <div ref={(div) => { console.log('tag name:', div.tagName); }} />
  );
}

This callback is called when the component mounts with a reference to the dom element as an argument. Importantly, when the component unmounts the callback is called again but this time with null as an argument.

Stacking HEREDOCS

I ran across this in a co-worker’s code yesterday. It made my head hurt for a second. Let’s say you need a function that takes 4 lines as an argument and outputs each of those lines to stdout.

def print_stanza(line1, line2, line3, line4)
  puts line1, line2, line3, line4
end

print_stanza(<<-LINEA, <<-LINEB, <<-LINEC, <<-LINED)
  You can get all hung up
LINEA
  in a prickle-ly perch.
LINEB
  And your gang will fly on.
LINEC
  You'll be left in a Lurch.
LINED

The second HEREDOC starts where the first one ends, the third HEREDOC starts where the second one ends, etc. It’s all valid Ruby and starts to make sense if you look at it long enough.

Most editors I’ve seen haven’t been able to highlight the syntax correctly though. Sorta leaves you in a Lurch.

H/T Brian Dunn

.babelrc ignores babel settings in package.json

There are three ways to provide babel configuration listed on the usage page for babelrc. You can provide a .babelrc file, you can place a babel section in your package.json or you can declare configuration with an env option.

If you have both a .babelrc in your root dir AND a babel section in your package.json file then the settings in the package.json will be completely ignored.

This happened to me when needing to declare an extra plugin for babel that create-react-app did not provide. I ejected from create-react-app, I added a .babelrc that declared the extra plugin, and this broke the build for my app. The babel configuration for an ejected create-react-app is in the package.json file.

Convert to BigDecimal with `to_d` w/ActiveSupport

Ruby provides the BigDecimal method to convert to BigDecimal.

> require 'bigdecimal'
> BigDecimal("123.45")
#<BigDecimal:56236cc3cab8,'0.12345E3',18(18)>

But you can’t convert a float without a precision:

> BigDecimal(123.12)
ArgumentError: can't omit precision for a Float.
> BigDecimal(123.12, 5).to_s
"0.12312E3"

When using Rails, and specifically with ActiveSupport required, you can use the to_d method to convert to BigDecimal.

> require 'active_support'
> 123.to_d
#<BigDecimal:55ebd7800ea8,'0.123E3',9(27)>
> "123".to_d
#<BigDecimal:55ebd7800ea8,'0.123E3',9(27)>

And for floats it provides a default precision of Float::DIG + 1, which for me is 16. DIG is described as:

The number of decimal digits in a double-precision floating point.

> 123.45.to_d
#<BigDecimal:55ebd2d6cfb8,'0.12345E3',18(36)>
> 123.45.to_d.to_s
"123.45"

Note, to_s in ActiveSupport outputs a more human readable number. Also note, nil is not convertible with to_d.

> require 'active_support'
> nil.to_d
NoMethodError: undefined method `to_d' for nil:NilClass
> BigDecimal(nil)
TypeError: no implicit conversion of nil into String

`requestAnimationFrame` should call itself

This style of animation is useful when you’re making small changes via javascript. When you pass requestAnimationFrame a callback, the callback is called before a browser repaint, or about 60 times a second. To make sure that you’re getting 60 callbacks a second, you must call requestAnimationFrame from within your callback.

function animate() {
  makeSomeSmallChangeToHtmlOrCss();
  requestAnimationFrame(animate);
}

This is a recursive function, so without an exit condition, it will recurse infinitely.

H/T Brian Dunn

Upgrading npm when using `asdf` with `reshim`

The version of npm that comes with nodejs when installed with asdf may not be the latest. In my case I had npm 5.3.0 installed and the newest version is 5.4.2. I upgraded npm with npm install -g npm and saw output that made me think everything installed successfully, but when I ran npm -v I still got 5.3.0.

The answer is to use asdf’s reshim command.

> asdf help reshim
asdf reshim <name> <version>    Recreate shims for version of a package

I ran the following commands:

> npm -v
5.3.0
> asdf reshim nodejs
> npm -v
5.4.2

And now I have the latest version and everything is great!

What you had before you saved w/`previous_changes`

When you set values into the ActiveRecord object the previous values are still available with changes, but when you save, however you save, those changes are wiped out. You can access what those values were before saving with previous_changes.

> thing = Thing.create({color: 'blue', status: 'active'})
> thing.color = 'red'
> puts thing.changes
{"color" => ['blue', 'red']}
> puts thing.previous_changes
{}
> thing.save
> puts thing.changes
{}
> puts thing.previous_changes
{"color" => ['blue', 'red']}

Call a program one time for each argument w/ xargs

Generally, I’ve used xargs in combination with programs like kill or echo both of which accept a variable number of arguments. Some programs only accept one argument.

For lack of a better example, let’s try adding 1 to each of the numbers 1 through 10. In shell environments you can add with the expr command.

> expr 1 + 1
2

I can combine this with seq and pass the piped values from seq to expr with xargs.

> seq 10 | xargs expr 1 + 
expr: syntax error

In the above, instead of adding 1 to 1 and then 1 to 2, it tries to run:

expr 1 + 1 2 3 4 5 6 7 8 9 10

Syntax Error!

We can use the -n flag to ensure that only one argument is applied at a time and the command runs 10 times.

> seq 10 | xargs -n1 expr 1 +
2
3
4
5
6
7
8
9
10
11

For more insight into what’s being called, use the -t flag to see the commands.
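As a quick sketch of -t on a smaller run (the trace lines go to stderr while the sums go to stdout):

```shell
# -n1 hands expr a single number per invocation; -t prints each
# constructed command (e.g. `expr 1 + 1`) to stderr before running it
seq 3 | xargs -t -n1 expr 1 +
```

The sums 2, 3 and 4 appear on stdout, each preceded on stderr by the exact expr command that xargs built.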

Mix tasks accessing the db with `Mix.Ecto`

When running reports or one-off data operations it might be necessary to create a mix task that can access the database. Ecto provides convenience functions in the Mix.Ecto module to help facilitate setting up and starting the Ecto repos.

The function parse_repo(args) will process the arguments used when calling the mix task. It looks specifically for -r MyApp.Repo but if you don’t pass anything it will return all the repos from the configuration.

The function ensure_started(repo) takes the repo as an argument and ensures that the Repo application has been started. Without calling this function the Repo will throw an error when used.

Put it all together:

defmodule Mix.Tasks.MyApp.SayHi do
  use Mix.Task
  import Mix.Ecto

  def run(args) do
    repo = parse_repo(args) |> hd

    ensure_started(repo)

    {:ok, result} = repo.query("select 'hi!';")

    result.rows
    |> hd
    |> hd
    |> IO.puts
  end
end

Use `source /dev/stdin` to execute commands

Let’s say there’s a command in a file, like a README file, and you don’t have any copy or paste tools handy. You can get the command out of the README file with:

> cat README.md | grep "^sed"
sed -ie "s/\(.*\)/Plug '\1'/" .vimbundle.local

Great! Now how do we run it? The source command is generally used to read and execute commands in files, and /dev/stdin really behaves like a file.

You can use the pipe operator to place the command into stdin and then source will read from stdin.

> cat README.md | grep "^sed" | source /dev/stdin

A simpler example can be constructed with echoing:

> echo "echo 'hi there'"
echo 'hi there'

And

> echo "echo 'hi there'" | source /dev/stdin
hi there

Access record from ActiveRecord::RecordInvalid

You can pass an array of hashes to Thing.create! in ActiveRecord. If one of those records is invalid, then an ActiveRecord::RecordInvalid error is thrown. You might need to know which record threw the error, in which case you can get the record from the error with record_invalid_error.record.

bad_record = nil

begin
  Thing.create!([{value: 'bad'}, {value: 'good'}])
rescue ActiveRecord::RecordInvalid => record_invalid_error
  bad_record = record_invalid_error.record
end

if bad_record
  puts "got a bad record with value: #{bad_record.value}"
end

Array of hashes `create` many ActiveRecord objects

Generally, you use the create method of ActiveRecord objects to create an object by passing a hash of attributes as the argument.

Thing.create(color: 'green', status: 'active')

You can also pass an array of hashes to create:

things = [
  {
    color: 'blue',
    status: 'pending'
  },
  {
    color: 'green',
    status: 'active'
  }
]

created_things = Thing.create(things)

One disappointing thing is that this does not batch the insert statements. It is still just one insert statement per object, but it might make your code simpler in some cases.

`github` as source block in Gemfile

source blocks in a Ruby Gemfile help group gems together that come from the same source. In addition, the Gemfile supports a github block for multiple gems that are coming from the same github repository. In my specific case, there are two gemspecs in Brian Dunn’s flatware repo.

github 'briandunn/flatware', branch: 'master' do
  gem 'flatware-rspec'
  gem 'flatware-cucumber'
end

With this example, only one change is needed to change the branch that both of those gems will come from.

H/T Brian Dunn

Split large file into multiple smaller files

The split utility breaks a file into multiple pieces. This can be useful as a precursor to some parallelized processing of a large file. Let’s say you have gigabytes of log files you need to search through; splitting the files into smaller chunks is one way to approach the problem.

> seq 10 > large_file.txt
> split -l2 large_file.txt smaller_file_
> ls -1
large_file.txt
smaller_file_aa
smaller_file_ab
smaller_file_ac
smaller_file_ad
smaller_file_ae

First, I created a “large” file with ten lines. Then, I split that file into files with the prefix smaller_file_. The -l2 option tells split to start a new file every 2 lines, so 10 lines yield 5 files. The suffixes it adds (“aa”, “ab”, …) sort lexicographically so that we can reconstruct the file with cat and a glob.

> cat smaller_file*
1
2
3
4
5
6
7
8
9
10
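
Because the suffixes sort in order, the split is lossless. A quick sketch verifying the round trip with cmp:

```shell
seq 10 > large_file.txt
split -l2 large_file.txt smaller_file_

# Reassemble the chunks; the glob expands in lexicographic order
cat smaller_file_* > rejoined.txt

# cmp exits 0 when the files are byte-identical
cmp -s large_file.txt rejoined.txt && echo "round trip is lossless"
```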

`with` statement has an `else` clause

with statements chain a series of functions, matching each result against a pattern and using the matched values to take actions within the with block. If a function does not return the expected result (think :error instead of :ok), you can either handle the failures you anticipated in specific else clauses, or simply let the with return the result that did not match.

This is the general form:

with {:ok, a} <- {:ok, 123} do
  IO.puts "Everything's OK"
end

A with with an else block:

with {:ok, a} <- {:error, 123} do
  IO.puts "Everything's OK"
else
  result -> IO.puts("Not OK")
end

A with else clause with pattern matching:

with {:ok, a} <- {:error, "something went wrong"} do
  IO.puts "Everything's OK"
else
  {:error, message} -> IO.puts(message)
  error -> IO.puts("I'm not sure what went wrong")
end

A with without an else clause where the error is returned from the with block:

result = with {:ok, a} <- {:error, "something went wrong"} do
  IO.puts "Everything's OK"
end

{:error, message} = result
IO.puts "This went wrong #{message}"

Implied applications and `extra_applications`

In your mix file (mix.exs) the application function returns a keyword list. Two options in that list determine what applications are started at runtime.

The applications list is, by default, inferred from your app’s dependencies. That inferred list is thrown away if you set the applications option in your mix.exs file.

If you want to add an extra application without disrupting the inferred list, use the extra_applications option instead. This leaves the default, inferred list of applications untouched.

H/T Jose Valim PR

def application do
  [
    mod: {Tilex, []},
    extra_applications: [:logger]
  ]
end

`cd` in subshell

With many of our projects sequestering the front end javascript code into an assets directory, I find myself moving between the root project directory and the assets directory to perform all the npm- or yarn-related tasks. Inevitably I’ll start doing something like this:

cd assets; npm install; cd ..

or this

pushd assets; npm install; popd

In both cases using ; instead of && puts me back in the original directory regardless of the result of the npm command.

I just learned that using cd in a subshell does not change the directory of the current shell, so I can also do this:

(cd assets; npm install)
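
A quick way to convince yourself: compare pwd inside and outside the parentheses. The parent shell’s directory is untouched:

```shell
mkdir -p assets

echo "parent shell: $PWD"
(cd assets && echo "subshell:     $PWD")   # cd only affects the subshell
echo "parent shell: $PWD"                  # unchanged
```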

Ruby srand returns the previous seed

srand is a method on Kernel that seeds the pseudo-random number generator. It takes a new seed as an argument, or calls Random.new_seed if you don’t pass one. What’s interesting about it is that it returns the previous seed, which has the effect of returning a new large random number every time you call srand.

2.4.1 :007 > srand
 => 94673047677259675797540834050294260538
2.4.1 :008 > srand
 => 314698890309676898144014783014808654061
2.4.1 :009 > srand
 => 102609070680693453063563677087702518073
2.4.1 :010 > srand
 => 81598494819438432908893265364593292061

Which can come in handy if you’re playing some Ruby golf and need to generate a huge random number in as few characters as possible.
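
The flip side of re-seeding is reproducibility: srand with an explicit seed also returns the previous seed, and re-seeding with the same value replays the same sequence. A minimal sketch:

```ruby
previous = srand(1234)    # returns whatever seed was in effect before
first    = Array.new(3) { rand(100) }

srand(1234)               # same seed again...
second   = Array.new(3) { rand(100) }

puts first == second      # ...replays the same sequence, so this prints true
```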

H/T Dillon Hafer