Phoenix chunk huge files
So Phoenix, or rather Plug, has a way to send a chunked response back to the client. This allowed us to create a stream out of an Ecto query, pipe that stream into a CSV encoder, and finally feed it to Plug.Conn.chunk/2, which ended up solving the huge memory spikes we had when downloading a CSV. Here's some pseudo-code:
# Assumes the conn has already gone through Plug.Conn.send_chunked/2
# and that Repo.stream/2 is running inside Repo.transaction/2.
my_ecto_query
|> Repo.stream()
|> CSV.encode() # external lib that encodes a stream into CSV lines
|> Enum.reduce_while(conn, fn chunk, conn ->
  case Plug.Conn.chunk(conn, chunk) do
    {:ok, conn} -> {:cont, conn}
    {:error, _reason} -> {:halt, conn}
  end
end)
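For context, a fuller version of this pipeline inside a Phoenix controller could look like the sketch below. Two details the pseudo-code glosses over: Repo.stream/2 only works inside Repo.transaction/2, and the conn has to go through Plug.Conn.send_chunked/2 before any chunk is written. The module name, the list_users_query/0 helper, the column mapping, and the use of CSV.encode/2 from the csv hex package are assumptions for illustration, not part of the original snippet.

defmodule MyAppWeb.ExportController do
  use MyAppWeb, :controller

  alias MyApp.Repo

  def download(conn, _params) do
    conn =
      conn
      |> put_resp_content_type("text/csv")
      |> put_resp_header("content-disposition", ~s(attachment; filename="export.csv"))
      |> send_chunked(200)

    # Repo.stream/2 must run inside a transaction, so the whole chunked
    # response is written from within Repo.transaction/2.
    {:ok, conn} =
      Repo.transaction(
        fn ->
          list_users_query()
          |> Repo.stream(max_rows: 500)
          |> Stream.map(&[&1.id, &1.email]) # hypothetical row shape
          |> CSV.encode()
          |> Enum.reduce_while(conn, fn chunk, conn ->
            case chunk(conn, chunk) do
              {:ok, conn} -> {:cont, conn}
              {:error, _reason} -> {:halt, conn}
            end
          end)
        end,
        timeout: :infinity
      )

    conn
  end

  # Hypothetical query helper; any Ecto queryable works here.
  defp list_users_query, do: MyApp.User
end

Because each encoded chunk is written to the socket as soon as it is produced, the full CSV never has to be built in memory, which is what removes the memory spike.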