Here are some thoughts after spending time with the Option Risk Calculator. First of all, big thanks to Nathan for creating this tool and making it available to the public.

My idea for using the Option Risk Calculator was this: create some basic trade, let's say an ATM put butterfly, and test it in different environments to see how it performs at various DTEs and various wing lengths (in symmetrical, broken-wing and asymmetrical combinations). Then go to OTM butterflies, then to more complicated trades like double and triple butterflies, then jump to split-top butterflies, etc.

Before starting, let's look at the tool itself. Is it the right tool to get the job done and, if so, how should it be used? In my opinion it cannot be used for any serious evaluation of real trades. Why? Because there are not enough parameters to tweak to get close to real market conditions. In Nathan's presentation, Jim asked about using different historical distributions. Yes, it would be nice to have, but since, per the author's explanation, it would significantly increase calculation time, I'd prefer it stay the way it is. For my part, I'd like direct control of the volatility skew, which would probably have the same practical cost: more extensive and longer calculations. On the other hand, I can use the "Projected Stock Price Growth Rate (annual %)" field to emulate a bullish or bearish market (volatility skew changes aside).

For the above reasons I also think that feeding ORC data from real option chains is pretty useless; to get the best results I have to leave the "Premium" field empty and let ORC calculate its own prices. Similarly, the expiration date is meaningless; it can be any date. The only thing that matters is the day span between the "Trade Start Date:" and "Strike Date" fields.

With this in mind I started my tests with a common 50/50 BFly. I was interested in only two output numbers: "Mean P/L" and "Probability of breakeven". I set up a spreadsheet to capture results and started testing.
I expected it to be a tedious job, but after just a few tests I realized it would be plain impossible to finish this task, not to mention testing other strategies. Why? There are too many clicks to get a single pair of results, and the process is too prone to mistakes. A single test (say, a 50/50 BFly, 70 DTE, in a bullish market) should be run a couple of times to get more reliable output (i.e., to make sure the pair it spits out doesn't come from a tail), further increasing the number of clicks and the risk of mistakes. 100K samples is the maximum for ORC; you cannot increase this number. At times I got widely varying results on the same inputs (once a positive P/L, another time a negative one), so multiple runs are needed for reliable results.

Nevertheless, I managed to finish the 50/50 70 DTE butterfly. I also asked one of the CD members to run the same test on market data. As expected, the results were way, way off. True, the live data set was very small compared to my (also small) test data, but we can assume that: I made some mistakes in my tests, or the other member generated wrong data, or we had bad luck and got a portion of data that was unrepresentative (meaning the rest of the live results are closer to the test data). In truth, we cannot say anything about the relationship between ORC and live data without extensive research. But it doesn't really matter here, as my goal was to explore the direction of changes of trades in different configurations in various markets rather than to obtain concrete numbers. For those we have to go to market data.

If the above exercise had worked, it would have been a nice (fast) screen for finding promising configurations for future investigation without having to back-test all of them. But it didn't work. For ORC to work for me, I would need the ability to feed the program an input file containing all desired parameters (easy to generate) and get the results written to an output file (easy to analyze). Without this (and other functions) the capabilities of ORC are limited to single strategies (for traders more patient than I am).
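That run-to-run scatter is ordinary Monte Carlo sampling error, and a short sketch shows its size. Everything below (strikes, debit, volatility, DTE) is invented for illustration; it is not ORC's actual model:

```python
import numpy as np

def butterfly_pl(s_t, k_lo=95.0, k_mid=100.0, k_hi=105.0, debit=1.5):
    """Expiration P/L of a hypothetical long 95/100/105 put butterfly."""
    payoff = (np.maximum(k_lo - s_t, 0.0)
              - 2.0 * np.maximum(k_mid - s_t, 0.0)
              + np.maximum(k_hi - s_t, 0.0))
    return payoff - debit

def mc_mean_pl(n_paths, s0=100.0, vol=0.20, t=70 / 365, rng=None):
    """Mean P/L and its standard error from one Monte Carlo batch."""
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(n_paths)
    s_t = s0 * np.exp(-0.5 * vol ** 2 * t + vol * np.sqrt(t) * z)  # zero-drift GBM
    pl = butterfly_pl(s_t)
    return pl.mean(), pl.std(ddof=1) / np.sqrt(n_paths)

# Five independent 100k-sample runs on identical inputs:
rng = np.random.default_rng(7)
runs = [mc_mean_pl(100_000, rng=rng) for _ in range(5)]
means = [m for m, _ in runs]
```

The five means all differ, and their spread is on the order of the reported standard error; averaging several runs (or raising the sample count) is the only way to shrink it, which is exactly why single ORC runs can flip sign.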
I didn't mean to complain about ORC, and I am not. I still appreciate Nathan's work and the time he spent explaining it, and I still admire the speed of this software. I had a vision of how I could use this tool in my research and it didn't work out; I have to take a different path. That's it. Attached is a raw spreadsheet with my results. In each cell the first number is mean P/L, the second is win rate. You can see that after doing 15 DTE I'd had enough and skipped right to 70 DTE (I had partial live data to compare against that), with the understanding that this is not the way to go. Either I did something wrong here or I misunderstood something...

Wow, Marcas... the Option Risk Calculator, with its simple probability model and strictly interactive interface, is a pretty rough force-fit to your trade-testing requirements. I'm glad you found it interesting enough to try, though.

I know, I know... Now I have a dilemma: should I start building a Monte Carlo for my needs or work on a back-tester? Both options have unique properties and I want all of them! It was fun though : ) Thanks

IMHO: There are two sticky wickets here: 1) What cumulative distribution function best fits the projected time frame. 2) What the volatility surface will be at each point on that CDF.

Garyw, Ice101781, I did find that my knowledge of statistics is close to none. I will have to make up for it in the near future, but for now I'll take a shot at putting a program together anyway. ad1: do you mean the distribution? I use the standard normal distribution (I think : ) with the possibility of using different types in the future. The cumulative distribution is just the calculation of the area to the left. How can the calculation method have a significant impact here? ad2: that is a guess; to get a grasp of a strategy we can project the VolSkew any way we desire. BTW, I think that if I have a known distribution curve I don't need Monte Carlo at all to calculate the expected return for different market conditions.
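To illustrate that last point: with a known terminal distribution, the expected return is a plain integral of payoff times density, so no sampling is needed. A minimal sketch with a lognormal density and a hypothetical 95/100/105 put butterfly (all numbers invented):

```python
import numpy as np

def lognormal_pdf(s, s0=100.0, vol=0.20, t=70 / 365):
    """Terminal-price density of zero-drift geometric Brownian motion."""
    m, sd = np.log(s0) - 0.5 * vol ** 2 * t, vol * np.sqrt(t)
    return np.exp(-(np.log(s) - m) ** 2 / (2 * sd ** 2)) / (s * sd * np.sqrt(2 * np.pi))

def fly_pl(s_t):
    """P/L of a hypothetical 95/100/105 put butterfly bought for a 1.50 debit."""
    return (np.maximum(95 - s_t, 0) - 2 * np.maximum(100 - s_t, 0)
            + np.maximum(105 - s_t, 0)) - 1.5

s = np.linspace(40.0, 250.0, 8401)             # fine price grid
ds = s[1] - s[0]
pdf = lognormal_pdf(s)

mean_pl = (fly_pl(s) * pdf * ds).sum()         # E[P/L]: integral of payoff * density
p_breakeven = (pdf[fly_pl(s) > 0] * ds).sum()  # probability of finishing past breakeven
```

This gives the same two numbers ORC reports ("Mean P/L" and "Probability of breakeven") directly from the assumed distribution, with no run-to-run noise; swapping in a different density function is a one-line change.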

Marcas: My ramblings... ad1: I currently use a Normal Distribution for my CDF. This is easy and simple, BUT likely NOT the best choice for each time period! -- Nathan also uses a Normal Distribution (per my limited understanding). ad2: The largest variable here is likely the choice of sticky strike vs. sticky moneyness (or more obscure morphing of target-point volatility), more so than the volatility surface per se. [I currently use sticky strike, the simple TOS-like method, but will likely change once I have a deeper understanding of the issues.] I do NOT use Monte Carlo either; I use the CDF to solve directly (with ad2 supplying the IV input). -------------------------------- An attempt to "test trades" seems unsolvable without an extremely large and good sample of "realistic trades". The whole notion of ad1 (that all samples fit the distribution) implies you will need a very large sample size to gain any confidence of being in the ballpark. This is made worse by ignoring direction (trend), and by the understanding that Normal may not be the optimal distribution. BTW: This is still weighing heavily on my mind. -- I would like to improve my handling of this area. (My knowledge of statistics is very poor.)
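For ad2, the two rules can be contrasted with a toy linear skew (the slope and strikes below are invented for illustration, not anyone's real surface):

```python
def skew_vol(strike, atm_strike, atm_vol=0.20, slope=-0.002):
    """Toy linear skew: IV is higher below the ATM strike, lower above it."""
    return atm_vol + slope * (strike - atm_strike)

def iv_sticky_strike(strike, spot_entry, spot_now):
    # Sticky strike (the simple TOS-like rule): each strike keeps the IV
    # it had when the surface was built; the spot move is ignored.
    return skew_vol(strike, atm_strike=spot_entry)

def iv_sticky_moneyness(strike, spot_entry, spot_now):
    # Sticky moneyness: the skew slides along with spot, so IV is read off
    # at the strike's position relative to the *current* underlying price.
    return skew_vol(strike, atm_strike=spot_now)

# After a rally from 100 to 105, the 100-strike option's IV:
iv_ss = iv_sticky_strike(100.0, spot_entry=100.0, spot_now=105.0)     # stays 0.20
iv_sm = iv_sticky_moneyness(100.0, spot_entry=100.0, spot_now=105.0)  # rises to 0.21
```

Even with this crude skew, the same strike gets a materially different IV under the two rules after a 5% move, which is why the choice matters more to a T+n estimate than the exact shape of the surface.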

I'd say it is not the best choice. Period. But I don't mind for now, for the reasons below; I'm not there yet, at least not in my work progress. Gary, the way I'm setting it up, I don't care much about real data. I create my own VolSkew the way I want and see the impact of changes on the trade. I don't need to mimic real data; all I'm interested in is capturing trends. I know that changing the distribution type will give me different numbers, but the trend is supposed to stay the same (in direction, not magnitude). I'm saving market data for later. That will be more manual (?)

Just for bookkeeping purposes: I'm giving up on my efforts to make ORC a usable tool. All that time it was sitting somewhere in my mind that I could make some improvements here and there and create useful software for traders' toolboxes. I liked this idea, as I like easy solutions when I can get them. I did some work on the accuracy of the t+0 lines and was of the opinion that it could be improved, and I was waiting to hear from Gary about his progress with distributions. In one of the last Market Muses sessions Jim showed his graphs of distributions in SPX; I looked at the distribution for 55 days and it was a nail. So I'm giving up on ORC-based development. Not that it's an impossible task (I still see how it could be achieved), but it is definitely not the easy way. Numbers produced by ORC cannot be taken into any serious work, except maybe when comparing numbers from one trade to numbers from another, similar one (but do not take the numbers literally!!). Another way to say it: we are dealing with (weak, IMO) signals. I'm not blaming the ORC's creator; I still think Nathan did a good job. The models are what they are, and it is very hard to jump over this problem. So, I'm off to backtesters, delta hedging, etc. An old path, just new to me. Nobody said that trading is easy.

I agree with your pessimistic view on dominating the markets with any single tool. But I still believe that having one (or some) well-argued and well-constructed tool(s) at hand is better than having none, at least in the sense of being able to develop some proper benchmarking or, at the very least, a feeling for the difference between theory and practice. Trades based on statistics tend to neglect many facets of human expectations and behavior, but they provide a minimal and sound basis for developing one's own strategies. In my case, I am using the ORC and the RTT entry tool of CD to situate myself in the environment of mathematically favorable trades. Once I have processed this information in the context of my open positions and market expectations, I may tweak these "mathematical" guidelines and adapt them to my portfolio. Anyway, I have found them extremely helpful for getting a grasp on a reasonable high-probability profit setup for my trades.

@ Marcas: Interesting. I have "bogged down" in my "distribution" pursuit, since substituting a historical distribution loses value if it is not correlated to the representative volatility (IMO). While I expect a better solution exists, my limited grasp of the math and statistics keeps me from reaching reliable conclusions so far. -- I'd prefer to err on the side of caution for a bit longer! -- I continue thinking about this, but am not yet convinced an "ideal solution" will be substantially different from the simple approach of using a normal distribution and a simplistic volatility estimation! I think my current estimates, used in the "static assessment evaluation" of the G-Tool, are slightly pessimistic, and I continue to gain a better understanding in the hope of improving the algorithms. -- This has not been a trivial pursuit!

PK, you read correctly what I said. I gave up on a 'general' tool that would allow one to quickly and accurately estimate trades' performance in various market conditions. This work _has to_ be done; it just won't be a simple model-based tool. I'm not against aid software at all. I use such tools all the time and they are a great help in the sense of saving time. They are probably also useful in increasing the profitability of our trades, but here I would hesitate to make a final statement without some solid studies.

Garyw, I don't intend to push you. This project is not a first priority for me; I've put it aside, not sure for how long. If you want to follow it, I think there are possible rewards waiting somewhere there, and disappointments too. Here are some of my loose thoughts you may find interesting, even if only to define which way not to go. I wouldn't tinker with the formula inside the BS model, but rather apply a correction to the BS output. If you want to change the distribution(s) inside the model, do it by creating custom probability tables (for testing) rather than trying to come up with formulas for the new distributions. I also definitely like the idea of extracting the implied distribution from option chains, but it involves a lot of work (for example, the IVs will be different (fixed?), so we will have a different skew), especially since I would have to start from the basics; not for my near future. Of course it seems natural to let the quants in; the danger is that we may end up with so many layers of math that nobody would really know what is going on. I'm always interested in hearing about your findings if you stay on this quest.
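On extracting the implied distribution: the standard Breeden-Litzenberger result says the risk-neutral density is the discounted second derivative of the call price with respect to strike, so a dense chain can be differenced numerically. A sketch that uses Black-Scholes prices as a stand-in for real quotes (flat vol, r = 0, all parameters hypothetical):

```python
import numpy as np
from math import erf, sqrt, log, exp

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(s0, k, vol, t, r=0.0):
    """Black-Scholes call price (stand-in for mid quotes off a real chain)."""
    d1 = (log(s0 / k) + (r + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return s0 * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

k = np.linspace(60.0, 160.0, 201)     # strike grid, dK = 0.5
c = np.array([bs_call(100.0, ki, 0.20, 70 / 365) for ki in k])
dk = k[1] - k[0]

# Breeden-Litzenberger with r = 0: pdf(K) is the second difference of C in K
pdf = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dk ** 2
```

With flat vol this just recovers the lognormal density, but the same second difference applied to real quotes yields the skew-implied distribution; the practical work Marcas mentions is in cleaning noisy quotes so the second difference stays positive.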

I should clarify the context of my statements above. An evaluation of "expected returns" (or the ORC, or the "Static assessment" section of the G-Tool) combines two separate functions: 1) The PnL expectation of a position at a specific time in the future (a T+n graph). 2) A statistics-based distribution, a directional component (if present), and an estimate of FRV for the underlying price at a specific time in the future. All comments I have made on this subject relate to the (2nd) case. -- I see no value in "tinkering with the BSM", which is integral to (1). I would like (and plan, if I live long enough) to address the volatility within (1) at some point, which I think may be a more worthwhile use of time, but (2) should be the simpler task and, IMHO, should be completed first for my interests.

Dan Harvey--you have done a presentation or two with the ORC that I found quite illuminating. Do any of these comments change your opinion about the ORC's value or the accuracy of the results you found?

Hi Mark17. No, I continue to believe that the ORC is a useful tool, as long as the user understands its limitations. Its greatest value is its ability to compare similar trade setups with regard to probability parameters only. The G Tool automates the ORC for RTT setups.