Support for simultaneous zipline backtests

Hi Brian,

My backtests currently take several hours and sometimes days to complete. I'm considering buying a CPU with significantly more cores (a Threadripper) so that I can run several backtests (e.g. 5-6) simultaneously to optimize model parameters.

Currently it's possible to initiate multiple backtests within a single zipline container, and each is executed in its own thread. However, the container's processing limit appears to be shared by all the backtests, which defeats the purpose of running them simultaneously. That is, each thread is not maxing out its own core.

Is there a way to assign backtests to multiple zipline containers (e.g. quantrocket_zipline_1, quantrocket_zipline_2, etc.)? I can easily create these containers with Docker Compose's --scale flag, but I'm not sure how to direct additional backtests to each of them with quantrocket.zipline.backtest().

You could try adding more zipline services in the Docker Compose file (e.g. a zipline2 service that duplicates zipline) and then routing requests to the respective containers via the REST API:

curl -X POST http://houston/zipline2/backtests/…

Haven’t tested this and it’s not officially supported, though.
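A rough Python version of that routed request might look like the sketch below. The zipline2 service name comes from this thread, but the endpoint path is only inferred from the curl example above, and the helper names are hypothetical; this is untested and not a documented API:

```python
import requests

def backtest_url(service, strategy):
    """Build the houston URL that routes a backtest request to a
    specific zipline service (e.g. 'zipline' or 'zipline2').

    NOTE: the path is an assumption based on the curl example
    above, not documented API.
    """
    return f"http://houston/{service}/backtests/{strategy}"

def start_backtest(service, strategy, **params):
    """POST the backtest request; blocks until the backtest completes."""
    resp = requests.post(backtest_url(service, strategy), params=params)
    resp.raise_for_status()
    return resp

# Routing two backtests to two different containers:
print(backtest_url("zipline", "mystrategy"))   # http://houston/zipline/backtests/mystrategy
print(backtest_url("zipline2", "mystrategy"))  # http://houston/zipline2/backtests/mystrategy
```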

Depending on your bottleneck, you could also consider precalculating expensive computations, storing them in a custom database, and allowing your backtest to query the custom database.
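As a hedged sketch of that precalculation idea (the table schema, factor, and key columns here are all hypothetical, and an in-memory database stands in for a real one; QuantRocket's own custom-database tooling may be a better fit in practice):

```python
import sqlite3

# Use a file path in practice; ':memory:' keeps this sketch self-contained.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS factors (sid TEXT, date TEXT, value REAL, "
    "PRIMARY KEY (sid, date))"
)

def expensive_computation(sid, date):
    # Hypothetical stand-in for the real, slow calculation.
    return len(sid) * len(date) * 0.1

# One-time precalculation pass, done outside the backtest loop.
for sid in ("AAPL", "MSFT"):
    for date in ("2023-01-03", "2023-01-04"):
        conn.execute(
            "INSERT OR REPLACE INTO factors VALUES (?, ?, ?)",
            (sid, date, expensive_computation(sid, date)),
        )
conn.commit()

# Inside the backtest, a cheap lookup replaces the slow computation.
def lookup(sid, date):
    row = conn.execute(
        "SELECT value FROM factors WHERE sid = ? AND date = ?", (sid, date)
    ).fetchone()
    return row[0] if row else None
```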

Thanks for this tip. I was able to run two backtests simultaneously by creating a zipline2 service and sending it a backtest request via houston.

Is there a way to make these backtest requests async? Currently each backtest initiation is blocking, which means I have to start a different notebook/terminal for each backtest.
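Not an official answer, but one way to keep everything in a single notebook is to submit the blocking calls to a thread pool, so each blocking backtest call runs in its own thread. The run_backtest helper and its arguments below are hypothetical stand-ins for quantrocket.zipline.backtest() or a houston-routed request:

```python
from concurrent.futures import ThreadPoolExecutor

def run_backtest(service, strategy, params):
    # Hypothetical stand-in: in practice this would call
    # quantrocket.zipline.backtest(...) or POST to the given
    # service via houston, blocking until the backtest finishes.
    return f"{service}:{strategy}:{params['window']}"

jobs = [
    ("zipline", "mystrategy", {"window": 10}),
    ("zipline2", "mystrategy", {"window": 20}),
]

# Each blocking call runs in its own thread; the notebook cell
# itself only blocks once, when gathering the results.
with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
    futures = [pool.submit(run_backtest, *job) for job in jobs]
    results = [f.result() for f in futures]

print(results)
```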