Exploration Risks and Decisions – Part 5:
A statistically based Monte Carlo model for exploration
This is part 5 of 6 in the series of articles that I will be publishing on the subject of exploration risk. Each part references subjects from previous posts and so should be read sequentially. The topics that will be covered are:
- Part 1: The sources of risk and odds of success in exploration
- Part 2: The optionality of exploration
- Part 3: Quantifying reserve volatility
- Part 4: Valuing exploration optionality with risk neutral probabilities
- Part 5: A statistically based Monte Carlo Model for exploration
- Part 6: Conclusions and implications for exploration participants
My previous post showed how we can construct trees which map the various paths an exploration project may take and then price back recursively using Risk Neutral Pricing (RNP). I ended by touching on some of the problems of this method:
- Intuitively it is problematic because it seems to imply that the resource itself (rather than our expectation of it) changes, which is clearly incorrect.
- It does not allow large jumps in the value of R, which may occur at moments such as the completion of a discovery drill programme; instead it follows a rather unrealistic model of constant, gradual change in R.
- Dσ is as likely to create downward movements as upward ones, whereas in real life upward movements in R appear to be more common than downward ones.
On this basis an improved model should first decide what is already in the ground and then work backwards through the exploration cycle to arrive at a value, the value at each point being based on our expectations of what is there at that moment. It is obviously impossible to predict R without any data, i.e. at the beginning of an exploration project. However, as described in Part 3, we can measure the distribution of R in a given terrane, which gives sufficient information to run multiple simulations that can be aggregated using Monte Carlo methods to arrive at reasoned valuations. Such models are commonplace in the oil and gas industry and perhaps also deserve a place in gold exploration.
R and G
Using the same Abitibi Gold Belt data as previously, we know that resources in such a terrane are approximately lognormally distributed. Excel can be used to randomly generate numbers according to a distribution with a given mean and standard deviation (I used the RiskAmp add-in, but there are a number of others, including free macros), so we can simulate different scenarios of R subject to the conditionality d: that the prospect is successfully drilled. Again this relies on the assumption that a project contains only one prospect; if multiple significant prospects occur within a larger prospective licence they should be treated separately.
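As a minimal sketch of this step in Python rather than Excel/RiskAmp, the following draws R from a lognormal distribution conditional on d. The lognormal parameters (a median of 500 koz and a log standard deviation of 1.2) are illustrative assumptions, not the fitted Abitibi values; the 45% scout-drilling success rate is the figure used later in the article.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative lognormal parameters for resource size in oz;
# these are assumptions, not the article's fitted Abitibi values.
MEDIAN_R_OZ = 500_000
SIGMA_LOG_R = 1.2

def simulate_R(n, p_success=0.45):
    """Simulate R for n prospects subject to conditionality d:
    R is drawn only when scout drilling succeeds, otherwise R = 0."""
    drilled_successfully = rng.random(n) < p_success
    draws = rng.lognormal(np.log(MEDIAN_R_OZ), SIGMA_LOG_R, n)
    return np.where(drilled_successfully, draws, 0.0)

R = simulate_R(100_000)
```

Roughly 45% of the simulated prospects carry a positive R; the rest represent failed scout drilling and contribute zeros to the expected resource.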
In a further advance on the previous model I also modelled grade, G, in a similar way to R. However, instead of a single distribution, grade in the Abitibi appears to fall into two distinct populations: a lower-grade population, presumably based on mining of larger low-grade halos, and a higher-grade population, presumably based on high-grade veins, with 3 g/t the boundary between the two. Somewhat surprisingly there was little correlation between R and G, so they could be treated independently. This implies there is no large difference in the contained ounces of high- and low-grade deposits, a surprising outcome which deserves more study.
To model the grade of a project, a simple probability is used to determine whether the project is low or high grade; empirical study indicated approximately 29% of resources are low grade. When the two populations are studied independently, the low-grade population follows a normal distribution and the high-grade population a lognormal distribution, so both can be modelled in a similar way to R. To prevent skewing from spurious results, a cap of 20 Moz was also placed on high-grade resources.
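The two-population grade draw and the high-grade cap can be sketched as follows. The 29% low-grade probability and the 20 Moz cap come from the article; the normal and lognormal parameters for the two populations are illustrative assumptions, not fitted values, and the 3 g/t boundary is treated approximately.

```python
import numpy as np

rng = np.random.default_rng(7)

P_LOW_GRADE = 0.29          # share of low-grade resources (from the article)
R_CAP_HIGH_OZ = 20_000_000  # 20 Moz cap on high-grade resources (from the article)
# Distribution parameters below are illustrative assumptions, not fitted values.
LOW_MEAN_GPT, LOW_SD_GPT = 1.8, 0.6             # low grade: normal, below ~3 g/t
HIGH_MU_LOG, HIGH_SIGMA_LOG = np.log(6.0), 0.5  # high grade: lognormal

def simulate_grade(R):
    """Draw G from a two-population mixture and apply the high-grade cap on R."""
    R = np.asarray(R, dtype=float)
    is_low = rng.random(R.shape) < P_LOW_GRADE
    low_draws = np.clip(rng.normal(LOW_MEAN_GPT, LOW_SD_GPT, R.shape), 0.1, 3.0)
    high_draws = rng.lognormal(HIGH_MU_LOG, HIGH_SIGMA_LOG, R.shape)
    G = np.where(is_low, low_draws, high_draws)
    R_capped = np.where(is_low, R, np.minimum(R, R_CAP_HIGH_OZ))
    return G, R_capped

G, Rc = simulate_grade(np.full(10_000, 30_000_000.0))
```

Because R and G showed little correlation, the grade draw here is deliberately independent of the resource draw.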
These values for R and G can be converted to a dollar value using either a simple NPV model or a $/oz valuation. I prefer constructing simple NPV models, which more easily reflect the skinnier margins associated with big low-grade projects; $/oz valuations have the advantage of allowing comparison with other recent transactions.
The distributions of R and G that we have simulated are all based on the assumption that the prospects were initially scout drilled with sufficient success to declare a maiden resource of some description; this is what was previously described as the conditionality d. Unrepresented in the data are all the times that d was not fulfilled, i.e. scout drilling was unsuccessful. The conditionality is actually compound: not only must scout drilling be successful, but the initial reconnaissance work must also have warranted the prospect being drilled at all.
At Plutus we were able to do a simple study of TSX-listed companies operating in the Abitibi to arrive at a rough scout-drilling success rate of 45%. We did this using the simple definition of 'success' as any scout drilling which resulted in further drilling, where at least one of these phases delivered some recognisably ore-grade intersections.
The conditionality that a prospect is drilled in the first place is much harder to define, as companies are under no obligation to disclose such information. This is obviously another area worthy of further study, but in order to progress with the model a rate of 10% of prospects receiving scout drilling was used, based on our own subjective experience of exploration. Combined with the 45% scout-drilling success rate, this indicates that roughly 1 in 20 prospects (10% × 45% ≈ 4.5%) will produce a resource of some description.
In our previous RNP model Dσ behaved in a constant manner through time, which does not fit real-life experience, where Dσ is typically much higher at earlier stages. This model considers six periods of exploration, from initial prospect recognition to mine development. Because the simulation first decides what is really in the ground, R can be flexed backwards through time using the following period's R value as the mean and a Dσ based on qualitative observations of the exploration process:
The majority of deposits that come into production pass through a pathway of Reconnaissance (often encompassing trenching) > Scout Drilling/Maiden Resource > PEA/Scoping Study > PFS > FFS. These various studies are completed to international reporting codes which are supposed (there is some debate to be had here!) to conform to certain margins of error in their inputs, such that the overall margin of error can be estimated. There is more statistical work to be done in this area to establish exactly how accurate such studies are, and whether there is some bias towards under- or over-reporting. As a full study is out of the scope of these articles, let's use the following assumed margins of error: FFS = 10%, PFS = 30%, Scoping Study = 50%, Maiden Resource = 75%. We can use these margins of error as proxies for the standard deviation Dσ and model our value for R back through time.
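One way to sketch this backward flexing: start from the simulated "true" R and perturb it stage by stage, using each study's margin of error as a relative standard deviation, so earlier estimates carry wider errors. The margins of error are the article's assumed figures; treating them as relative normal perturbations around the later value is my own simplifying assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Margins of error from the article, used as proxies for D-sigma at each stage.
STAGE_SIGMA = [("FFS", 0.10), ("PFS", 0.30),
               ("Scoping Study", 0.50), ("Maiden Resource", 0.75)]

def flex_R_backwards(R_true):
    """From the simulated 'true' R, perturb backwards through the study stages
    so that earlier-stage estimates carry wider errors around the true value."""
    path = {"True R": float(R_true)}
    R = float(R_true)
    for stage, sigma in STAGE_SIGMA:
        # Each earlier estimate uses the later value as its mean, with the
        # stage's margin of error as the relative standard deviation.
        R = max(R * (1.0 + rng.normal(0.0, sigma)), 0.0)
        path[stage] = R
    return path

p = flex_R_backwards(1_000_000.0)
```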
Building the Model
In addition to R, the commodity price (in this instance the price of gold) can be modelled using Monte Carlo analysis. Gold is assumed to follow a 'random walk' and can be simulated using its known historical price behaviour. In contrast to R, the gold price is modelled forward from time 0, using the spot price as the starting point. With the addition of gold price at each stage we are able to simulate a gold price, resource R and grade G.
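A minimal forward random-walk sketch for the gold price follows. The spot price and annualised volatility below are placeholder assumptions (the article says to use historical price behaviour); the driftless geometric form is one common choice, not necessarily the author's exact specification.

```python
import numpy as np

rng = np.random.default_rng(3)

SPOT = 1900.0      # illustrative starting spot price, $/oz (an assumption)
ANNUAL_VOL = 0.15  # assumed annualised gold price volatility

def simulate_gold_path(years=6):
    """Driftless geometric random walk for the gold price, year 0 = spot."""
    log_returns = rng.normal(-0.5 * ANNUAL_VOL**2, ANNUAL_VOL, years)
    return SPOT * np.exp(np.concatenate([[0.0], np.cumsum(log_returns)]))

path = simulate_gold_path()
```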
At each stage the project is valued to ascertain whether the next stage of exploration is worthwhile. As previously mentioned, this is achieved via NPV in year 6, but in years 2-5 I used $/oz metrics flexed according to the gold price, which I felt was truer to real-life project transactions. No valuation is attempted in year 1; there is simply a decision to explore or not based on the aforementioned first-year success rate of 10%. At time zero exploration always proceeds, and therefore some exploration expenditure is always incurred.
Using the model it is possible for a single simulation to contain an economic resource that is abandoned before fruition because of incorrect expectations of R caused by Dσ and/or a poor price environment, which we consider realistic.
If the expected value at the end of exploration exceeds the cost of the next stage then exploration proceeds; otherwise it is abandoned. The exploration cost is estimated as a fixed component and a floating component based on the size of R, both of which escalate as the project develops. In this way the model only incurs exploration expenditure when it is deemed worthwhile: the majority of prospects will only incur the small expenditure associated with initial reconnaissance and trenching, but prospects with large R values will incur significant exploration costs.
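The gating rule and cost model above can be sketched as below. The fixed and per-ounce cost figures are purely illustrative assumptions chosen to escalate with stage, as the text describes; the article does not publish its calibration.

```python
# Fixed and floating (per-ounce) exploration costs by stage; the numbers are
# illustrative assumptions that escalate as the project develops, per the text.
STAGE_FIXED_COST = {1: 50_000, 2: 500_000, 3: 1_500_000,
                    4: 4_000_000, 5: 10_000_000}
STAGE_COST_PER_OZ = {1: 0.0, 2: 0.5, 3: 1.0, 4: 2.0, 5: 3.0}

def next_stage_cost(stage, expected_R_oz):
    """Fixed component plus a floating component scaling with expected R."""
    return STAGE_FIXED_COST[stage] + STAGE_COST_PER_OZ[stage] * expected_R_oz

def should_proceed(expected_value, stage, expected_R_oz):
    """Continue exploring only when expected value exceeds the next stage's cost."""
    return expected_value > next_stage_cost(stage, expected_R_oz)
```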
The actual (rather than expected) value of a prospect is represented by the valuation in year 6 discounted back to year 0, minus discounted exploration costs; the discount rate can be adjusted to reflect political risk. In the majority of cases the value in year 6 will be zero, as exploration will have been abandoned at an earlier stage, and the actual value will be negative from the exploration costs incurred. Other negative actual values occur when the discounted total exploration costs outweigh the discounted value at year 6, and positive values arise when the discounted value at year 6 exceeds the discounted total exploration costs. Each simulation therefore produces an actual value representing the prospect's value at time 0 for that individual simulation.
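The discounting arithmetic is straightforward; a sketch with a hypothetical 10% discount rate (the article leaves the rate as an adjustable input for political risk):

```python
def actual_value(value_year6, exploration_costs_by_year, discount_rate=0.10):
    """Discounted year-6 value minus discounted exploration costs.
    exploration_costs_by_year[t] is the spend incurred in year t (t = 0..6);
    the 10% discount rate is an illustrative assumption and can be raised
    to reflect political risk."""
    discounted_value = value_year6 / (1.0 + discount_rate) ** 6
    discounted_costs = sum(c / (1.0 + discount_rate) ** t
                           for t, c in enumerate(exploration_costs_by_year))
    return discounted_value - discounted_costs
```

An abandoned prospect has `value_year6 = 0`, so its actual value is simply the negative of its discounted sunk exploration costs.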
Simplified diagram of single simulation (does not include gold price simulation)
Once the simulation is set up it can be run a number of times to produce a meaningful average actual value at time 0; this is our expected value of any given prospect without information. By omitting individual simulations that were abandoned at a previous stage, we can also calculate meaningful valuations for projects at different stages of exploration. For individual projects with known resources, the expected resource figure can be used as R and modelled forwards through time using Dσ; however, this requires a degree of confidence in the nature of Dσ that would require more study than we presently have.
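The aggregation step can be sketched as a simple conditional-mean calculation over a batch of simulations, where `last_stage_reached` (a hypothetical bookkeeping array, not named in the article) records how far each simulated prospect progressed before abandonment:

```python
import numpy as np

def summarise(actual_values, last_stage_reached):
    """Aggregate a batch of simulations: the unconditional mean is the value of
    an undrilled prospect at time 0; conditioning on survival to stage s gives
    the value of a project that has already reached that stage."""
    actual_values = np.asarray(actual_values, dtype=float)
    last_stage_reached = np.asarray(last_stage_reached)
    summary = {"time_0_value": actual_values.mean()}
    for s in range(1, int(last_stage_reached.max()) + 1):
        survivors = last_stage_reached >= s
        summary[f"value_if_reached_stage_{s}"] = actual_values[survivors].mean()
    return summary

s = summarise([-1.0, -2.0, 10.0], [1, 1, 3])
```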
Results of Model
I ran the model over 100,000 simulations and arrived at an average value of $1.1M, which is the average value for any undrilled prospect at time 0. At first appearance this figure seems high given present market valuations.
Including simulations which fail the condition d and so have a resource of zero, the average resource size was 26,500 oz; this is the expected resource of a prospect at time zero. This demonstrates the value of exploration optionality: if we valued a prospect on this expected resource alone, exploration would never be worthwhile.
99.35% of simulations did not produce a mineable resource, meaning 6.5 out of 1,000 prospects become a mine, 6.5 times the one-in-a-thousand statistic that prompted these articles.
The scope of this article is just to explain how the model works, state some initial findings and think about some of its shortcomings. In the next post there will be some interpretation of these results the model has created, comparison with market values, and the implications therein.
So we have arrived at a more statistically valid way of thinking about and valuing exploration, but models such as these are not without their faults. It cannot be stressed enough that using methods like this alone is completely unsuitable, and our geological and technical knowledge should always take precedence. For example, we may be in the third year of an exploration programme, having recently drilled a modest initial resource, but know that multiple speculative holes have been drilled along strike and hit sizeable ore-grade intersections. In this circumstance the chances of R increasing significantly are larger than a model such as this may indicate. Conversely, a deposit that has been very tightly drilled by year 3 and appears to be closed off in all directions has little chance of increasing.
The approach largely ignores known prospectivity indicators other than drilling/resources, such as geochemical and geophysical signatures, the presence of artisanal miners, etc. Not all prospects are equal, and with some geological knowledge we can normally rank them immediately in a reasoned order of prospectivity, with corresponding relative values. This model would value all prospects the same at time 0, but it is obvious to an explorationist that some prospects are worth more at time 0, and that our calculated value is actually an average value. With time and careful thinking, prospectivity indicators may be incorporated into such a model statistically; alternatively, operators can make qualitative judgements on how these affect the value of R and adjust accordingly.
The model relies on a single prospect remaining one prospect. Sometimes a single prospect can morph into multiple prospects, or multiple prospects can coalesce into one; neither case is supported by this model.
The model is only suitable for greenstone gold (though it is possible to compare Abitibi data to other greenstone belts with favourable results). Districts with multiple mineralisation types, such as the Andean Cordillera, may have more complicated resource distributions which must incorporate multiple commodities. The model is also based on historic data, including some very old mines. Cut-off grades have declined significantly over the years, so resources from earlier discoveries may be smaller, and of higher grade, than if they had been discovered more recently. This is mitigated by the assumption that old prospects have been worked repeatedly through time and updated accordingly.
Many practitioners place great importance on the management team in valuation/investment decisions, and this technique ignores that. Exploration managers who become associated with success are able to attract more capital and have extra project opportunities open to them, so there is some sense in following them, though I would be wary of doing so religiously because, as this statistical approach shows, their success probably involves a large degree of luck. Nevertheless, success tends to breed success, perhaps because of individual skill but perhaps more importantly due to increased access to capital and first pick of the best projects. The model also assumes rational management and resource-allocation behaviour. This isn't always the case: sometimes poor exploration projects are funded and progress, whilst other more prospective projects remain unfunded and are abandoned. Hopefully over time the industry can improve in this respect!
It probably has not escaped the attention of some readers that the model fails to make any real distinction between resources and reserves. This is a major area for concern, as large portions of a resource, or even an entire resource, can fail to make it to the reserve category. This has been exacerbated by some loose interpretation of the term resources in the past. According to CIM (43-101) a resource should have "reasonable prospects for eventual economic extraction", and being a loosely defined term it is wide open to abuse. Nevertheless, over the lifecycle of a mine, resources that didn't make it into the reserves in the feasibility analysis are often eventually mined, so this perhaps offsets the problem.
In the next post I will look at some of the results this model throws up and whether from this we can discern if exploration companies are fairly priced. The ultimate aim would be to find mis-priced companies to use as the basis for new investments and/or to use such a model to optimise capital allocation within large companies.