Today I Learned

A Hashrocket project

311 posts by chriserin @mcnormalmode

`nil.&to_s` evaluates to false

Ruby 2.3 introduced the Safe Navigation Operator &., intended to make nil checks less verbose, simulating the functionality of the try method introduced by Rails.

This helps to prevent unwanted NoMethodErrors

> nil.do_something
NoMethodError (undefined method `do_something' for nil:NilClass)
> nil&.do_something
nil

What happens when you type the & on the right side of the .?

> nil.&do_something
NameError (undefined local variable or method `do_something' for main:Object)

Normally you’d get an error, because do_something isn’t defined, but what if you are calling a method available on everything like to_s?

> nil.&to_s
false

This breaks expectations. This statement is equivalent to nil & to_s.

> nil & to_s
false

Ruby defines & as a method on nil.

> nil.method(:&)
#<Method: NilClass#&>

This method in the documentation is defined as:

And—Returns false. obj is always evaluated as it is the argument to a method call—there is no short-circuit evaluation in this case.

So with nil.&to_s you’re calling the & operator as a method and passing the result of to_s called on the main object, but it doesn’t matter because nil.& always evaluates to false.
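A quick irb-style check confirms that NilClass#& returns false no matter what its argument evaluates to:

```ruby
# NilClass#& always returns false, whatever the argument's truthiness.
puts(nil & true)       # false
puts(nil & "anything") # false

# nil.&to_s parses as a call to that & method: to_s is evaluated
# (on the surrounding self) and its result passed along, but the
# return value is still false.
puts(nil.&(to_s))      # false
```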

Array concatenation with the spread operator

The spread operator in ES6 JavaScript is a Swiss Army knife of wonders. Previously in JavaScript, if you wanted to concatenate two arrays you could use the concat method of an array.

let a = [1, 2, 3]
let b = [4, 5, 6]
a.concat(b) //concat though is not mutative, you have to assign the result
let c = a.concat(b) // we have concatenated

Whether or not Array.prototype.concat is mutative is something I always get wrong. Some Array methods are mutative, some are not! By using the spread operator, I never get this wrong (it’s not mutative).

let c = [...[1, 2, 3], ...[4, 5, 6]]
console.log(c) // [1, 2, 3, 4, 5, 6]
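A quick check that spread concatenation leaves the source arrays untouched:

```javascript
const a = [1, 2, 3];
const b = [4, 5, 6];

// Spread copies the elements into a brand new array...
const c = [...a, ...b];
console.log(c); // [1, 2, 3, 4, 5, 6]

// ...and the originals are unchanged.
console.log(a); // [1, 2, 3]
console.log(b); // [4, 5, 6]
```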

Each line in a Dockerfile is a layer in an image

What has helped me grok docker a bit better is knowing that every line in a Dockerfile has a corresponding hash identifier after the image has been built. Here is a sample Dockerfile:


FROM alpine

RUN echo "Hello World" > helloworld.txt

CMD ["cat", "helloworld.txt"]

I create the image with:

docker build -t helloworld .

I can now examine each layer in the Dockerfile with docker history helloworld

IMAGE               CREATED             CREATED BY                                      SIZE                COMMENT
1e5d27ca20a8        13 hours ago        /bin/sh -c #(nop)  CMD ["cat" "helloworld.t…   0B
84f489011989        13 hours ago        /bin/sh -c echo "Hello World" > helloworld.t…   12B
3fd9065eaf02        2 months ago        /bin/sh -c #(nop)  CMD ["/bin/sh"]              0B
<missing>           2 months ago        /bin/sh -c #(nop) ADD file:093f0723fa46f6cdb…   4.15MB

Three commands and four layers. The FROM alpine command is actually 2 layers that have been squashed together. You can see the <missing> hash for the initial command because it has been squashed into 3fd9065.

The command that creates the helloworld.txt file has a size of 12 bytes because that’s the size of the file that was created.

Async/Await UnhandledPromiseRejectionWarning

When an awaited promise is rejected while using async/await, you get an UnhandledPromiseRejectionWarning.

const promise13 = () => {
  return new Promise((resolve, reject) => { reject(13) })
};

(async () => {
  let num = await promise13(); // UnhandledPromiseRejectionWarning
  console.log('num', num);
})();

There are two ways to properly handle the promise reject. The first is with a try catch block.

(async () => {
  try {
    let num = await promise13();
    console.log('num', num);
  } catch(e) {
    console.log('Error caught');
  }
})();

The second is by handling the error on the promise itself.

(async () => {
  let num = await promise13().catch((err) => console.log('caught it'));
  console.log('num', num);
})();

Both ways will allow you to avoid the UnhandledPromiseRejectionWarning and also properly handle your rejections. Beware the deprecation message that comes with this warning.

Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

xargs substitution

Output piped to xargs will be placed at the end of the command passed to xargs. This can be problematic when we want the output to go in the middle of the command.

> echo "Bravo" | xargs echo "Alpha Charlie"
Alpha Charlie Bravo

xargs has a facility for substitution, however. Indicate the symbol or string you would like to replace with the -I flag.

> echo "Bravo" | xargs -I SUB echo "Alpha SUB Charlie"
Alpha Bravo Charlie

You can use the symbol or phrase twice:

> echo "Bravo" | xargs -I SUB echo "Alpha SUB Charlie, SUB"
Alpha Bravo Charlie, Bravo

If xargs is passed two lines, it will call the command with the substitution twice.

> echo "Bravo\nDelta" | xargs -I SUB echo "Alpha SUB Charlie SUB"
Alpha Bravo Charlie Bravo
Alpha Delta Charlie Delta

Captured for var in closure can change

Variables can be captured into anonymous functions in Go with closures, and it’s easy to use the variables we declare in a for loop in those anonymous functions, but that doesn’t produce the results I would expect.

var fnArray []func()

for _, chr := range []string{"a", "b", "c"} {
    fmt.Println("Hello,", chr)
    fn := func() {
      fmt.Println("Goodbye,", chr)
    }
    fnArray = append(fnArray, fn)
}

for _, fn := range fnArray {
  fn()
}

The above code outputs:

Hello, a
Hello, b
Hello, c
Goodbye, c
Goodbye, c
Goodbye, c

Go optimizes for loops by using the same memory location for each value that it iterates over. To avoid this hard edge you can reassign the value:

capturedChr := chr

Inserting that assignment into our for loop looks like this:

var fnArray []func()

for _, chr := range []string{"a", "b", "c"} {
    capturedChr := chr
    fmt.Println("Hello,", chr)
    fn := func() {
      fmt.Println("Goodbye,", capturedChr)
    }
    fnArray = append(fnArray, fn)
}

for _, fn := range fnArray {
  fn()
}

And produces the expected output:

Hello, a
Hello, b
Hello, c
Goodbye, a
Goodbye, b
Goodbye, c

Check out the Go playground here

Firing the blur event when a component unmounts

If an input is focused and a React component containing that input unmounts, then the blur event for that input will not fire. If you use the blur event to save the contents of the input then you are in trouble.

You can know which input/select on the page currently has focus with document.activeElement. Combine that with the component lifecycle method componentWillUnmount() to fire blur for the currently focused element on unmount.

componentWillUnmount() {
  const input = document.activeElement
  input.blur()
}

The documentation for activeElement says it may return <body> or null when there is no active element, so you need to protect against that.

componentWillUnmount() {
  const input = document.activeElement
  if (input) {
    input.blur()
  }
}

Get the response json with `fetch` and `Promises`

The fetch api is intertwined with the Promise api. The fetch function itself returns a Promise. The success function for that promise will have an argument that is a Response object.

fetch('myResource.json').then(function(response) {
  console.log('This is a response object', response);
})

The Response object implements Body which has several properties and several methods that help you to access the contents of the body of the response.

Body.json() is a function that reads the response stream to completion and parses the response as json. This operation may take time, so instead of just returning the json, it returns another Promise. The success function of this promise will have the resulting json as an argument.

fetch('myResource.json').then(function(response) {
  response.json().then(function(parsedJson) {
    console.log('This is the parsed json', parsedJson);
  })
})

Of course, if you return a promise from a success function, then that promise will be resolved in the next chained then function.

fetch('myResource.json').then(function(response) {
  return response.json();
}).then(function(parsedJson) {
  console.log('This is the parsed json', parsedJson);
})

Pin item to bottom of container

In laying out the items of a container, I want an item to be pinned to the bottom no matter the size of the other content. I can do that with a mix of flex and margin: auto.

.container {
  display: flex;
  flex-flow: column;
}

.footer {
  margin-top: auto;
}

I had previously thought that flex box was more contained and wouldn’t change the behaviour of properties like margin: auto but it does!

Check out my codepen here.

H/T Vidal Ekechuckwu

Specify different bundler groups with RAILS_GROUPS

Is there a gem that you want to include sometimes and not others? For instance, do you have multiple versions of staging with slightly different gemsets? You can manage that with custom groups.

group :apple do
  gem 'some_gem'
end

group :orange do
  gem 'some_other_gem'
end
Then set the RAILS_GROUPS environment variable when starting Rails:

RAILS_GROUPS=apple,orange rails console
> Rails.groups
[:default, "development", "apple", "orange"]

In the config/application.rb file, Rails.groups is passed to Bundler.require.

Bundler.require(*Rails.groups)

Shell into a stopped docker container

If you are experiencing an error trying to run your docker container then maybe you need to do some debugging inside of the docker container. But how do you shell into a container if you can’t even start it?

docker run -it --rm --name newname myimage:latest bash

This takes your image and starts a new container, running bash and allowing you to examine your container.

-it allows you to attach a terminal.

--rm will clean this container up when you are done with it.

--name <something> is the name you give your new container.

Using an array literal to specify length

Go forces you to specify the length of an array when creating or declaring an array. If you don’t declare the length, then it’s not an array, it’s a slice. The syntax for declaring the array looks like this:

// array of length 3
var numbers = [3]int{1, 2, 3}

When we use an array literal with the {} to declare the values of an array, the length declaration becomes redundant.

//slice
var numbers = []int{1, 2, 3}

The above example, however, shows the declaration of a slice, not an array. To let the literal define the length of an array we can place the ... operator inside the [] square brackets.

var numbers = [...]int{1, 2, 3}

Now, we can add a fourth element to the array without also having to change the specified length.

Clip images in html with `clip-path`

You may have some images that have some annoying artifacts around the border, or maybe you just want an image to be a funky shape for design purposes. If so, then clip-path is the css property for you.

Check out this diamond shaped image from the mdn docs:

clip-path: polygon(50% 0, 100% 50%, 50% 100%, 0 50%);

There are four shape functions (circle, ellipse, polygon, and inset), amongst a number of interesting options; some work in your browser currently, some may not.

Creating a vector for your specific shape might sound rough, but Firefox has a built-in clip-path tool that can help you set it correctly in the browser.

Rearrange items with `grid-template-areas`

You can arrange items with the css-grid property grid-template-areas without changing the structure of your html.

Here I have a basic strategy for a 2x2 grid, with item one in the upper left corner and item four in the lower right.

.container {
  display: grid;
  grid-template-columns: repeat(2, 1fr);
  grid-template-rows: repeat(2, 1fr);
  grid-template-areas:
    "one two"
    "three four";
}

.one { grid-area: one }
.two { grid-area: two }
.three { grid-area: three }
.four { grid-area: four }

I can rearrange my items just by changing the grid-template-areas property:

.container {
  grid-template-areas:
    "four two"
    "three one";
}

Now, item four is in the upper left corner, and item one is in the lower right.

A complete example is up at CodePen

Use Go Printf argument twice with adverbs

Usually, when you use one of the format functions from the fmt library you pass in one argument or operand for each verb you use.

fmt.Printf("Hello %s, nice to meet you %s", "Lauren", "Harry")
// Prints --> Hello Lauren, nice to meet you Harry

But what if you want to use one of those operands twice? You can use what this post calls an adverb, which the fmt documentation calls an explicit argument index. Place square brackets after the percent symbol with an index that refers to the argument you’d like to use.

fmt.Printf("Hello %s, nice to meet you %[1]s", "Lauren")
// Prints --> Hello Lauren, nice to meet you Lauren

Azure key discovery provides modulus and exponent

To verify a JWT coming from Azure, you must provide the ruby JWT package with a public key. That public key must be an OpenSSL::PKey::RSA value.

The Azure key discovery url provides this public key definition:

{
  "keys": [
    {
      "kid": "X5eXk4xyojNFum1kl2Ytv8dlNP4-c57dO6QGTVBwaNk",
      "nbf": 1493763266,
      "use": "sig",
      "kty": "RSA",
      "e": "AQAB",
      "n": "tVKUtcx_n9rt5afY_2WFNvU6PlFMggCatsZ3l4RjKxH0jgdLq6CScb0P3ZGXYbPzXvmmLiWZizpb-h0qup5jznOvOr-Dhw9908584BSgC83YacjWNqEK3urxhyE2jWjwRm2N95WGgb5mzE5XmZIvkvyXnn7X8dvgFPF5QwIngGsDG8LyHuJWlaDhr_EPLMW4wHvH0zZCuRMARIJmmqiMy3VD4ftq4nS5s8vJL0pVSrkuNojtokp84AtkADCDU_BUhrc2sIgfnvZ03koCQRoZmWiHu86SuJZYkDFstVTVSR0hiXudFlfQ2rOhPlpObmku68lXw-7V-P7jwrQRFfQVXw"
    }
  ]
}

First, I tried using the n value to create a ruby public key:

> OpenSSL::PKey::RSA.new(key['n'])
OpenSSL::PKey::RSAError: Neither PUB key nor PRIV key: nested asn1 error

So n is not the public key? WTF is n? Via the Wikipedia page for RSA:

Alice transmits her public key (n, e) to Bob via a reliable, but not necessarily secret, route

n is the modulus. e is the exponent (in this case AQAB), but the ruby library requires a pem. We must convert the modulus and exponent to a pem. Luckily, there is a gem for that:

gem install rsa-pem-from-mod-exp

Convert to a pem then to a pubkey:

require 'rsa_pem'

pem = RsaPem.from(key['n'], key['e'])

public_key = OpenSSL::PKey::RSA.new(pem)

And verify your token

decoded_token = JWT.decode token, public_key, true, { :algorithm => 'RS256' }

In Go, NaN does not equal NaN

It’s false, it returns false.

fmt.Println(math.NaN() == math.NaN())

Don’t fret though, there is a function that can tell you whether or not a value is NaN.

fmt.Println(math.IsNaN(math.Log(-1.0)))

The above prints true and all is well with the world. Don’t use math.NaN() as a sentinel value, it does not work; use the math.IsNaN function.

React Router Route can call a render function

In general, the React Router Route component looks like this:

<Route path="/triangles" component={Triangles} />

If the current url matches /triangles then the Triangles component will render.

If you want to render something simple however, you can pass down a function with the render prop.

<Route path="/helloworld" render={this.renderHelloWorld} />

With that function defined in the same parent component and returning some jsx:

renderHelloWorld = () => {
  return (
    <div>Hello World!</div>
  )
}

Parse JSON into an OpenStruct

When you parse json in ruby it is placed into a hash, and you have to access the values with hash syntax:

parsed_data = JSON.parse('{"color": "blue"}')
puts parsed_data["color"] # prints 'blue'

But instead of a hash you can choose to parse it into an OpenStruct by using the object_class option. Then you can use ruby’s method syntax to access the data.

parsed_data = JSON.parse('{"color": "blue"}', object_class: OpenStruct)
puts parsed_data.color # prints 'blue'
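Since object_class is applied at every level of the document, nested objects come back as OpenStructs too, so method access chains; a quick sketch (the JSON here is made up):

```ruby
require 'json'
require 'ostruct'

json = '{"person": {"name": "Ada", "languages": ["ruby", "go"]}}'
parsed = JSON.parse(json, object_class: OpenStruct)

# Nested objects become OpenStructs as well; arrays stay arrays.
puts parsed.person.name            # prints 'Ada'
puts parsed.person.languages.first # prints 'ruby'
```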

H/T Dillon Hafer

Promises and then function return values

The return value of a then function dictates how future chained then functions behave. If you return a Promise, then the next chained then function will execute when the Promise that you returned is resolved.

Promise.resolve('foo').
  then(() => {return Promise.resolve('bar')}).
  then((v) => console.log(v))

// prints bar

If you return something different from the then function, then the argument of the next chained then function will be the previous returned value.

Promise.resolve('foo').
  then(() => {return 'baz'}).
  then((v) => console.log(v))

// prints baz

Check out the MDN docs for Promise.prototype.then

H/T Brian Dunn

H/T Josh Branchaud

Three syntactical elements that return 2 values

Go supports multiple return values from functions, but there are also three different syntactical elements or accessors that also return two values.

The most straightforward is accessing values from a map.

colors := map[string]string{
  "ocean":  "blue",
  "forest": "green",
  "jet":    "black",
}

x, ok := colors["ocean"]

Type assertions also want to let you know that they’re ok, like when we try to assert an error as a json.SyntaxError. Yeah, it’s a SyntaxError, ok?

synerr, ok := err.(*json.SyntaxError)

Channels can also let the program know that things are going ok when using the receive syntax <-.

receivedValue, ok := <- ch

`new` returns a pointer, not a value

Go has pointers, which is great; I’ve never worked in a language with legit pointers before. The built-in function new returns a pointer.

p := new(int)

p is now a pointer to an int. You can access the value the pointer points at with the * dereference syntax.

print(*p)

The above prints 0, the zero value for int.

Read the documentation about new with:

go doc builtin.new

React Anti-Pattern: defaultValue

The use of defaultValue isn’t strictly an anti-pattern; as with everything, it’s context dependent. In the context of a small one-form react component doing something simple, I’d say defaultValue is fine. This is React in-the-small.

React in-the-large or in-the-massive is different. In large applications things change in unforeseen ways. In large applications there are explicit or implicit frameworks that are upstream from your component and will affect the operation of your component. React-in-the-large will create multiple entry points to your component. Sometimes it’s history.push and sometimes it’s good ol’ Page Refresh. The state of your application will be different in both those cases.

In software that can change in many ways, defaultValue is brittle. Once rendered, an input with defaultValue is completely out of React’s hands. The value of the input cannot be changed when new information becomes available. You can change the value of the input with DOM functions, but then you’re not using React.

Outside of using a React form abstraction value is much more adaptable than defaultValue. It’s more work to setup and maybe harder to get right initially, but far more likely to suit your needs down the road.

Go docs at the command line

Go has plentiful documentation online, but sometimes using a search engine to find the right thing in go can be tough. Go has docs at the command line though.

You can go doc <name-of-package>

> go doc io/ioutil
package ioutil // import "io/ioutil"

Package ioutil implements some I/O utility functions.

var Discard io.Writer = devNull(0)
func NopCloser(r io.Reader) io.ReadCloser
func ReadAll(r io.Reader) ([]byte, error)
func ReadDir(dirname string) ([]os.FileInfo, error)
func ReadFile(filename string) ([]byte, error)
func TempDir(dir, prefix string) (name string, err error)
func TempFile(dir, prefix string) (f *os.File, err error)
func WriteFile(filename string, data []byte, perm os.FileMode) error

Or, you can go doc <function> if the function is fully qualified with the package name.

> go doc io/ioutil.WriteFile
func WriteFile(filename string, data []byte, perm os.FileMode) error
    WriteFile writes data to a file named by filename. If the file does not
    exist, WriteFile creates it with permissions perm; otherwise WriteFile
    truncates it before writing.

go doc ioutil.WriteFile will also work.

Open a zsh terminal in a split window in Neovim

Opening a terminal in Neovim is one of the more eye opening features of the vim fork. To open a terminal in vim use the :terminal command.

:te
:term
:terminal

Those commands though will use your entire current window for the terminal, and by default the terminal opens in bash even though my shell is configured to be zsh.

To open a terminal in zsh use:

:te zsh

This still takes up the entire window. Let’s now use the split command and the terminal protocol syntax to open a split window with zsh.

:sp term://zsh
:split term://zsh

Alpha channel in hex colors #RRGGBBAA

In the past if I’ve needed to give a hex color an opacity I would use the opacity property, like this:

div {
  background-color: #cecece;
  opacity: 0.5;
}

But as of Chrome 62/Firefox 49/CSS Color Module Level 4 you can now include the alpha channel information into the hex code:

div {
  background-color: #cecece88;
}

And if you are using the shortened version of the hex color it looks like this:

div { 
  background-color: #ccc8;
}

Check out the MDN Docs

Span a grid item across the entire css grid

When using css grid you might want an item to span across the entire grid even when that grid has many columns.

A grid with many columns looks like this:

.grid {
  display: grid;
  grid-template-columns: repeat(5, 1fr);
}

You could certainly just use grid-column: span 5, but if the number of columns changes then you aren’t spanning the entire grid. To span a grid item across the entire css grid regardless of the number of columns, use the / operator and -1, which refers to the last grid line:

.item {
  grid-column: 1 / -1;
}

Netrw header toggling with vim-vinegar

vim-vinegar is a vim plugin that “cleans up” Netrw as:

an attempt to mitigate the need for more disruptive “project drawer” style plugins

but it turns off the Netrw header, which, while unnecessary, is the second most charming feature of vim.

To reclaim your Netrw header, use I while in Netrw.

As a vim-vinegar bonus, use - while in a buffer to open Netrw in the directory of the current buffer.

Having rapid directory access available changes everything

All quotes are from the vim-vinegar README. Thank you to Vidal for the hat tip about I.

Get the mime-type of a file

The file command for both Linux and Mac can provide mime information with the -I flag (on Linux the equivalent flag is lowercase -i).

> file -I cool_song.aif
cool_song.aif: audio/x-aiff; charset=binary

There are longer flags to limit this information, --mime-type and --mime-encoding.

> file --mime-type cool_song.aif
cool_song.aif: audio/x-aiff
> file --mime-encoding cool_song.aif
cool_song.aif: binary

And if you are using this information in a script you can remove the prepended file name with the -b flag.

> file --mime-type -b cool_song.aif
audio/x-aiff
> file --mime-encoding -b cool_song.aif
binary

Combine -b and -I for a quick, terse mime information command:

> file -bI cool_song.aif
audio/x-aiff; charset=binary

Pure Data's netreceive uses special protocol

Pure Data is a graphical programming language specialising in audio applications. It has an object called netreceive that opens a socket (either TCP or UDP), accepts a connection, and outputs messages sent to it.

There are two gotchas that I experienced in working with it.

The first: it requires a special protocol, called FUDI. Each message must be terminated by a semicolon ;. The message can contain any number of ‘atoms’ (strings) separated by whitespace, but the semicolon is essential.

The second: the documentation of Pure Data contains running examples, so if you open the documentation of netreceive before running the object in your patch, you will get a bind: Address already in use (48) error. Close the documentation and try again!
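As a sketch of the framing only (fudi_message is a made-up helper here, not part of Pure Data), a FUDI message is just whitespace-separated atoms terminated by a semicolon:

```python
def fudi_message(*atoms):
    """Frame atoms into a FUDI message: atoms separated by
    whitespace, terminated by the essential semicolon."""
    return " ".join(str(a) for a in atoms) + ";\n"

# Each message sent to netreceive must end with the semicolon.
print(fudi_message("hello", 440))  # hello 440;
```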

Preventing access to `Window.opener`

When you open a link in a new tab with target="_blank" like so:

<a target="_blank" href="http://some.link">Somewhere</a>

Then the page that you open has access to the original page through Window.opener. Depending on your case you may not want this. Actually I can’t think of a case where you do want the new tab to refer back to the opening tab.

To prevent Window.opener access use rel="noopener":

<a target="_blank" href="http://some.link" rel="noopener">Somelink</a>

At this moment, this is not available in IE or Edge, check out the caniuse site.

H/T Dillon Hafer

Use jq to filter objects list with regex

My use case is filtering an array of objects that have an attribute that matches a particular regex and extract another attribute from that object.

Here is my dataset:

[
  {"name": "Chris", "id": "aabbcc"},
  {"name": "Ryan", "id": "ddeeff"},
  {"name": "Ifu", "id": "aaddgg"}
]

First get each element of an array.

> cat data.json | jq -c '.[]'
{"name":"Chris","id":"aabbcc"}
{"name":"Ryan","id":"ddeeff"}
{"name":"Ifu","id":"aaddgg"}

Pipe the elements to select and pass an expression that will evaluate to truthy for each object.

> cat data.json | jq -c '.[] | select(true) | select(.name)'
{"name":"Chris","id":"aabbcc"}
{"name":"Ryan","id":"ddeeff"}
{"name":"Ifu","id":"aaddgg"}

Then use the test function to compare an attribute to a regex.

> cat data.json | jq -c '.[] | select(.id|test("a."))'
{"name":"Chris","id":"aabbcc"}
{"name":"Ifu","id":"aaddgg"}

Extract the name attribute for a more concise list.

> cat data.json | jq -r '.[] | select(.id|test("a.")) | .name'
Chris
Ifu

Read more about jq

Setting Struct properties with an index

It’s rare that I get a chance to use structs, but yesterday while parsing some xml (!!!) I wrote an algorithm where it would be handy to set values of a struct with an incrementing number rather than a symbol.

Lo and behold! Ruby Structs allow you to set attributes with either the name of the property or the ordinal with which it was declared.

2.5.0 :001 > Apple = Struct.new(:color, :size)
 => Apple
2.5.0 :002 > apple = Apple.new
 => #<struct Apple color=nil, size=nil>
2.5.0 :003 > apple[0] = 'red'
 => "red"
2.5.0 :004 > apple[1] = 'huge'
 => "huge"
2.5.0 :005 > apple
 => #<struct Apple color="red", size="huge">

Structs are a great data structure that I’m going to try to use more in my daily programming.

Setting timezones from command line on MacOS

We wrote some tests that failed recently in a timezone other than our own (☹️). In order to reproduce this failure we had to set the timezone of our local machine to the failure timezone.

sudo systemsetup -settimezone America/New_York

Easy enough. You can get all the timezones available to you with:

sudo systemsetup -listtimezones

And also check your current timezone

sudo systemsetup -gettimezone

systemsetup has 47 different time/system oriented commands. But it needs sudo even to run -help.

> systemsetup -help
You need administrator access to run this tool... exiting!

Don’t fret, you can get this important secret information via the man page:

man systemsetup

Log rotation in ruby

Jake wrote a great til about linux log rotation last year. Ruby also has a log rotation alternative built into the stdlib.

Generally, ruby logging looks like this:

require 'logger'
logger = Logger.new('app.log')
logger.info('starting log')

You can also pass ‘daily’, ‘weekly’, or ‘monthly’ to the logger:

logger = Logger.new('week.log', 'weekly')

If I use this log today, Tue Jan 9, and next week on Wed Jan 17, then ls will show two log files:

week.log
week.log.20180113

20180113 is Saturday, the last day of the previous logging period; a new log is started every Sunday. NOTE: no files are deleted with this style of logging, so you may potentially run out of space.

You can also choose to log with the number of files to keep and the maximum size of each file.

logger = Logger.new('size.log', 4, 1024000)

In this case after producing more than 4 MB of logging information ls will show you.

size.log
size.log.1
size.log.2
size.log.3

Compared to linux’s logrotate this is missing features, compression for example, but for adding log rotation to your logs quickly and simply it works great.

Documentation

flex-flow for shorter flex declarations

flex-flow is a css property that allows you to specify both flex-direction and flex-wrap in one property. In css parlance this is a shorthand property and like all shorthand properties the values are order agnostic when possible.

It will provide minimal efficiency gains but you can just avoid writing flex-direction altogether. When you need to specify a direction just type:

ul { 
  display: flex;
  flex-flow: row;
}

And then it’s easy to tack on a wrap when you determine it is necessary:

ul {
  display: flex;
  flex-flow: row wrap;
}

Native CSS Variables

CSS variables are normally associated with the pre-processor languages like SASS. But css has its own native variables as well. Declare them like so:

element { 
  --black-grey: #111111;
}

And then use them with the var function like this:

element { 
  background-color: var(--black-grey);
}

In the above example if --black-grey is not set then the background-color will fallback to the inherited value. You can provide a default value for the variable if you anticipate it may not be set:

element {
  background-color: var(--black-grey, black);
}

Variables behave just like any other inherited or cascading properties and if you’d like them available for entire stylesheets then you would need to declare them in :root.

:root {
  --one-fathom: 16rem;
}

This css feature is safe to use everywhere but IE, though only recently: Firefox enabled it by default in version 56, released in Sept 2017.

You can read about them in the Mozilla Docs or the original W3C editor’s draft.

zsh change dir after hook w/`chpwd_functions`

How does rvm work? You change to a directory and rvm has determined which version of ruby you’re using.

In bash, the cd command is overridden with a cd function. This isn’t necessary in zsh because zsh has hooks that execute after you change a directory.

If you have rvm installed in zsh you’ll see this:

> echo $chpwd_functions
__rvm_cd_functions_set
> echo ${(t)chpwd_functions}
array

chpwd_functions is an array of functions that will be executed after you run cd.

Within the function you can use the $OLDPWD and $PWD env vars to programmatically determine where you are, like this:

> huh() { echo 'one sec...'; echo $OLDPWD && echo $PWD }
> chpwd_functions+=huh
> cd newdir
one sec...
olddir
newdir

You can read more about hook functions in the zsh docs.

Comply with the erlang IO protocol

Below is a basic IO server/device implementation; it receives an :io_request message and responds with an :io_reply message.

defmodule MYIODevice do
  def listen() do
    receive do
      {:io_request, from, reply_as, {:put_chars, :unicode, message}} ->
        send(from, {:io_reply, reply_as, :ok})
        IO.puts(message)
        IO.puts("I see you")
        listen()
    end
  end
end

pid = spawn_link(MYIODevice, :listen, [])

IO.puts(pid, "Hey there")

The above code outputs the following to stdout:

Hey there
I see you

The first argument of IO.puts/2 is a pid representing the device that is being written to (it generally defaults to :stdio).

The documentation is dense, enjoy! Erlang IO Protocol

H/T Brian Dunn

Remove duplicates in zsh $PATH

How to do it?

typeset -aU path

There ya go. All entries in $PATH are now unique.

How does it work? Well, that’s stickier. path and PATH are not the same thing. You can determine that by examining the type of each with the t variable expansion flag.

> echo ${(t)path}
array-unique-special
> echo ${(t)PATH}
scalar-export-special

They are linked variables, however; if you change one you change the other, but they have different properties. I have to admit at this point that scalar-export-special is something of which I’m ignorant.

The typeset declaration is simpler, it changes the type of path from array-special to array-unique-special. The -a flag is for array and the -U flag is for unique.

What files are being sourced by zsh at startup?

Tracking down a bug in the jagged mountains of shell files that are my zsh configuration can be tough. Enter the SOURCE_TRACE option, settable with the -o flag when opening a new shell with the zsh command.

zsh -o SOURCE_TRACE

Which outputs:

+/etc/zsh/zshenv:1> <sourcetrace>
+/etc/zsh/zshrc:1> <sourcetrace>
+/home/chris/.zshrc:1> <sourcetrace>
+/home/chris/.sharedrc:1> <sourcetrace>
+/home/chris/.zshrc.local:1> <sourcetrace>

The above output is an abbreviated representation of the actual files loaded on my system. I have language manager asdf installed which adds a couple of entries. I am an rvm user which adds 50 (!!!) entries. Lots of shell code to source for rvm. Additionally, I use Hashrocket’s dotmatrix which adds even more entries. Lots of sourcing to sort through.

This is handy in combination with print line (or echo) debugging. It gives your print lines added context when things get noisy.

Killing heroku dynos

Yesterday we encountered an odd situation: a rake task running on heroku that, through no fault of our own, did not finish.

We killed the local processes kicked off by the heroku command line tool and restarted the rake task but got an error message about only allowing one free dyno. Apparently, the dyno supporting our rake task was still in use.

First we had to examine whether that was true with:

heroku ps -a myapp

Next step was to kill the dyno with the identifier provided by the above command.

heroku ps:kill dyno.1 -a myapp

We ran the rake task again, everything was fine, it worked great.

sub_filter + proxy_pass requires no gzip encoding

The nginx sub module is useful for injecting html into web requests at the nginx level. It looks like this:

location / {
  sub_filter '</head>' '<script>doSomething();</script></head>';
}

This replaces the closing </head> tag with a script tag followed by the closing </head> tag.

This does not work, however, with proxy_pass when the response coming from the proxy_pass url is gzip encoded.

In this case, you want to specify to the destination server that you will accept no encoding whatsoever with the proxy_set_header directive, like so:

  proxy_set_header Accept-Encoding "";

Altogether it looks like this:

location / {
  proxy_set_header Accept-Encoding "";
  proxy_pass http://destination.example.com;
  sub_filter '</head>' '<script>doSomething();</script></head>';
}
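Why does gzip break sub_filter? Because after compression the literal bytes </head> no longer appear in the response body, so there is nothing for the filter to match. A quick demonstration of the idea (my own, using grep as a stand-in for sub_filter’s string search):

```shell
cd "$(mktemp -d)"

# Build a body big enough that gzip genuinely compresses it
for i in $(seq 1 200); do printf '<p>hello</p>\n'; done > body.html
printf '</head>\n' >> body.html
gzip -c body.html > body.html.gz

grep -q '</head>' body.html    && echo "plain body: match found"
grep -q '</head>' body.html.gz || echo "gzipped body: no match"
```

With Accept-Encoding cleared, the upstream responds uncompressed and sub_filter can find its target; nginx can still gzip the rewritten response on the way out if you enable its gzip directive.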

Examine owner and permissions of paths

The ownership of an entire path hierarchy is important when nginx needs to read static content. For instance, the path /var/www/site.com/one/two/three/numbers.html may be problematic when directory three is owned by root rather than www-data. Nginx will respond with status code 403 (forbidden) when http://site.com/one/two/three/numbers.html is accessed.

To debug this scenario we need to examine the owners and permissions of each of the directories that lead to numbers.html. This can be accomplished in Linux with the handy namei command in combination with realpath.

realpath will return the full path for the file.

> realpath numbers.html
/var/www/site.com/one/two/three/numbers.html

And namei -l <full_path> will examine each step in the file’s directory structure.

> namei -l $(realpath numbers.html)
drwxr-xr-x root  root  /
drwxr-xr-x root  root  var
drwxr-xr-x www-data www-data www
drwxr-xr-x www-data www-data site.com
drwxr-xr-x www-data www-data one
drwxr-xr-x root root two
drwxr-xr-x www-data www-data three
-rw-r--r-- www-data www-data numbers.html

Ah, there it is. The directory two is owned by root rather than www-data. chown will help you from here.
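If namei isn’t installed, the same walk can be approximated with stat and dirname. This is a sketch of my own (it assumes GNU coreutils, and prints leaf-to-root rather than namei’s root-to-leaf):

```shell
# Print permissions, owner, and group for a file and each parent directory
list_path_owners() {
  p=$(realpath "$1")
  while :; do
    stat -c '%A %U %G %n' "$p"
    [ "$p" = "/" ] && break
    p=$(dirname "$p")
  done
}
```

Run it as list_path_owners numbers.html from the file’s directory, then chown whichever directory shows the wrong owner.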

Cursor Pagination with graphql

When a graphql object has a plural type, each edge in the collection returned for that plural type has a cursor associated with it. This cursor is generally a base64 encoded unique identifier referencing the search, the object, and the place of that object amongst all objects in the collection.

Ask for the cursor as a property when iterating over a collection.

  query {
    user(login: "jbranchaud") {
      repositories(first: 100) {
        edges {
          cursor
          node {
            name
          }
        }
      }
    }
  }

The above query will return a cursor for each repo.

Get the last cursor of that collection with pageInfo.endCursor:

  query {
    user(login: "jbranchaud") {
      repositories(first: 100) {
        pageInfo {
          endCursor
          hasNextPage
        }
        edges {
          node {
            name
          }
        }
      }
    }
  }

The endCursor will look like:

Y3Vyc29yOnYyOpHOBK0NoA==
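These cursors are opaque identifiers and should be treated that way, but they aren’t magic. GitHub’s v2 cursors are plain base64; decoding the one above reveals a cursor:v2: prefix followed by a few bytes of binary (msgpack-encoded) record data:

```shell
# base64 -d is the GNU coreutils flag; decode and keep the printable prefix
printf 'Y3Vyc29yOnYyOpHOBK0NoA==' | base64 -d | head -c 10
# prints: cursor:v2:
```

The format is an implementation detail and can change, so don’t parse cursors in application code; just feed them back through the after argument as-is.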

Use this cursor to obtain the next 100 repos with the after property.

  query {
    user(login: "jbranchaud") {
      repositories(first: 100, after: "Y3Vyc29yOnYyOpHOBK0NoA==") {
        pageInfo {
          endCursor
          hasNextPage
        }
        edges {
          node {
            name
          }
        }
      }
    }
  }

Posting data from a file with curl

Iterating on an API POST request with curl can be frustrating if it involves a lot of command line editing. curl, however, can read a file for the post body contents. Generally the --data option is used like this:

curl -XPOST --data '{"data": 123}' api.example.com/data

But when using an @ prefix you can reference a file:

curl -XPOST --data @data.json api.example.com/data

Now, you can run the same command for each iteration and edit the data.json file containing the data to be posted with your favorite text editor (which is vim right?).

`disable_with` to prevent double clicks

If a link or button initiates a request that takes a long time to respond then the user (being a user) might click the link or button again. Depending on the code implemented on the server this could put the application into a bad state.

Rails has a handy feature that will disable a link or button after the user has clicked it, disable_with:

<%= link_to('something',
      something_path(something),
      data: {
        disable_with: "Please wait..."
      }
    )
%>

In Rails 5.0 and up, submit buttons have disable_with set by default. To disable disable_with, use:

data: { disable_with: false }

Partial post body matching with webmock

When using webmock to stub out a post, you can specify that the post has a specific body. When dealing with web apis, that body is json and you want to compare it to a ruby hash. From the webmock README, that looks like this:

stub_request(:post, "www.example.com").
  with(body: {a: '1', b: 'five'})

In the above example, the whole json body has to be represented in the hash. Webmock provides a way to examine just a portion of the json/hash with the hash_including method. This is not the rspec hash_including; it is a webmock-specific function using the WebMock::Matchers::HashIncludingMatcher class.

stub_request(:post, "www.example.com").
  with(body: hash_including({a: '1'}))

Using ngrok to evaluate webhooks in a dev env

A webhook, an http request to a server initiated by an outside service, is difficult to test. The outside service must have a valid url to make the request with, but your development server generally runs on the unreachable localhost.

The Ngrok product page describes ngrok like this:

It connects to the ngrok cloud service which accepts traffic on a public address and relays that traffic through to the ngrok process running on your machine and then on to the local address you specified.

If I have a local dev server on port 3000 I run ngrok like so:

> ngrok http 3000
Forwarding   http://92832de0.ngrok.io -> localhost:3000

Then, configure the outside service to point the webhook at the ngrok.io address provided by the ngrok tool. Now, your local server will receive http requests from the outside service and you can evaluate those requests effectively.