2.11 release question

Hi Brian,

Great updates on the new release. I do have a couple of questions regarding this release:

1 - The zipline backtest runs 15-20% slower than the previous version. Any specific reason for that?

2 - I like the graph output from the backtest, but I’m facing a few issues. The specific log messages that were previously printed to the terminal are now wiped out between graph updates. Additionally, I’m unable to see the exact performance metrics linked to specific dates as the backtest progresses; I used those date-based checkpoints to monitor longer backtests effectively. The text-based graph makes it difficult to track performance against specific dates. Is there a flag or option to revert to the previous output format?

Thanks for your help.

I will see if I can replicate the slower performance.

I'm sorry the new progress meter seems to be a downgrade for your use case. It sounds like you might be using it in a way I didn't envision: it's mainly intended to provide basic feedback while the backtest is running, and to become obsolete once the backtest is finished. Here are a few points to consider; they may not all be relevant because I don't know exactly what you're doing.

  • The current progress meter takes up more screen real estate than the previous one, but anything logged to the terminal is still accessible by scrolling up or by querying the log file (or by using quantrocket flightlog wait).
  • Adjusting the frequency of progress logging may help reduce the noise in the terminal to a more desirable level.
  • The progress meter shows the backtest performance up to the current simulation date, which is identical to what the last line of the old progress meter showed. It's true that the old progress meter printed a date, whereas now you have to approximate the date by looking at the axis.
  • If you want to know exactly how a strategy performed during a sub-period of the backtest, you can use the start_date and/or end_date parameters with pyfolio.from_zipline_csv to get full performance metrics for that period (see the sketch after this list).
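
For illustration, a minimal sketch of that last point, assuming a completed backtest whose results CSV has been downloaded (the path and dates are placeholders):

```python
import pyfolio as pf

# Full pyfolio performance metrics restricted to a sub-period of the
# backtest, via the start_date/end_date parameters mentioned above.
pf.from_zipline_csv(
    "my_strategy_results.csv",  # placeholder path to the results CSV
    start_date="2020-01-01",    # placeholder sub-period start
    end_date="2020-06-30",      # placeholder sub-period end
)
```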

My overall recommendation is to use the progress meter for basic, temporary feedback and to use pyfolio for more precise, in-depth analysis. If there's something I'm missing about why that doesn't work well for your use case, please let me know in more detail.

Thanks, Brian. Yes, I certainly use the pyfolio functions as a key part of the analysis. My question was mostly about the real-time nature of watching the backtest as it runs, particularly on longer ones that can take a while to complete and produce an output file.

The new progress meter definitely makes sense for quick feedback, but the text-based, date-stamped output still has its own unique advantages. If there's any way to preserve or reintroduce that more granular, "check-as-we-go" display, it would address most of the issues I'm encountering. If I'm missing any tricks for a better real-time date readout, please let me know; my goal is just to keep the same level of clarity and detail that made the old logging style so useful.

Of course, I can work on a workaround and write new log functions, etc. But I was trying to avoid making code changes of our own while upgrading to the new version.
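
Something along these lines is what I have in mind as a workaround (a rough sketch only; the monthly cadence and logger name are arbitrary choices of mine, not part of the API):

```python
import logging

from zipline.api import get_datetime

log = logging.getLogger("checkpoints")

def initialize(context):
    # track the last month logged so we emit one checkpoint per month
    context.last_logged_month = None

def handle_data(context, data):
    dt = get_datetime()  # current simulation datetime
    if dt.month != context.last_logged_month:
        context.last_logged_month = dt.month
        # date-stamped checkpoint, mimicking the old text-based output
        log.info("%s portfolio value: %.2f",
                 dt.date(), context.portfolio.portfolio_value)
```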

Thanks again

I'm not able to replicate a slower backtest in 2.11 vs 2.10, and I'm not aware of any code changes that would be likely to cause that. If you're able to produce a small example strategy that runs slower on 2.11 than 2.10, I can take another look at it.
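
For reference, even a strategy as minimal as this sketch would work for a timing comparison (the symbol is a placeholder; any liquid asset in your bundle will do):

```python
# Minimal buy-and-hold strategy for comparing wall times on 2.10 vs 2.11.
from zipline.api import order_target_percent, symbol

def initialize(context):
    context.asset = symbol("AAPL")  # placeholder asset

def handle_data(context, data):
    # hold a 100% position for the life of the backtest
    order_target_percent(context.asset, 1.0)
```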

Regarding the progress logging, this is an area where I don't think it makes sense to support two different output styles (a more complicated API for end users, and more maintenance), and I believe the newer style will be favored by most users. I'm still not clear on what you're able to do with the old style that you can't do with the new style. I'm open to changing my mind, but that's where I am now.

Brian, thank you for looking into it.

I don't have any specific code to send you regarding the performance. I time the backtest with the %%time cell magic, which reports a longer wall time than before. I'm not sure what could be causing it, as we have made no changes on our side. The only thing I can imagine is that the ibg1 container was previously paused, since I run this on macOS, and maybe that is creating the delta.
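
For clarity, this is how I'm measuring it (a sketch; the strategy code, dates, and output path are placeholders, and I'm assuming the usual backtest entry point):

```python
%%time
# compare the reported wall time on 2.10 vs 2.11 with everything else
# held constant
from quantrocket.zipline import backtest

backtest(
    "my-strategy",                     # placeholder strategy code
    start_date="2020-01-01",           # placeholder dates
    end_date="2021-01-01",
    filepath_or_buffer="results.csv",  # placeholder output path
)
```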