Phoenix chunk huge files

So Phoenix, or rather Plug, has a way to send a chunked response back to the client. This allowed us to create a stream out of an Ecto query, pipe that stream into a CSV encoder, and finally into Plug.Conn.chunk/2, which solved the huge memory spikes we were seeing when downloading a CSV. Here's some pseudo-code:

# Assumes `alias Plug.Conn`, that the conn was already put into chunked
# mode with Conn.send_chunked/2, and that this runs inside
# Repo.transaction/1, which Repo.stream/1 requires.
my_ecto_query
|> Repo.stream()
|> CSV.encode() # external lib that encodes a stream of rows into CSV lines
|> Enum.reduce_while(conn, fn chunk, conn ->
  case Conn.chunk(conn, chunk) do
    {:ok, conn} ->
      {:cont, conn}

    {:error, _reason} ->
      # the client hung up; stop consuming the stream
      {:halt, conn}
  end
end)
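
For a fuller picture, here's a minimal sketch of how this could sit inside a Phoenix controller action. Everything outside the pipeline above is an assumption: the module, action, schema, and query names are hypothetical, the row-shaping Stream.map is made up for illustration, and CSV.encode/1 here is the csv hex package.

defmodule MyAppWeb.ReportController do
  use MyAppWeb, :controller

  alias MyApp.Repo
  alias Plug.Conn

  # Streams a large CSV export without building it all in memory.
  def export(conn, _params) do
    conn =
      conn
      |> Conn.put_resp_content_type("text/csv")
      |> Conn.send_chunked(200)

    # Repo.stream/1 only works inside a transaction. A very long export
    # may also need a longer transaction timeout than the default.
    {:ok, conn} =
      Repo.transaction(fn ->
        my_ecto_query()
        |> Repo.stream()
        |> Stream.map(&[&1.id, &1.name]) # shape each row for the encoder (columns are made up)
        |> CSV.encode()
        |> Enum.reduce_while(conn, fn chunk, conn ->
          case Conn.chunk(conn, chunk) do
            {:ok, conn} -> {:cont, conn}
            {:error, _reason} -> {:halt, conn}
          end
        end)
      end)

    conn
  end

  # Hypothetical query; any Ecto queryable works here.
  defp my_ecto_query, do: MyApp.Report
end

Since each chunk is written to the socket as soon as it's encoded, only a small window of rows is ever in memory at once, which is what makes the memory spikes go away.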