Stochastic Seismic Reservoir Characterisation at the Copthorne tonight
The Copthorne Hotel, 6pm tonight
The Complete Works of Shakespeare contains 884,647 words (apparently). I’ll guess that equates to about six million characters. If each of those can take, including punctuation, thirty values, there are 10^9,000,000 ways to fill a book the size of the Complete Works. But, famously, it would still be a breeze for an infinite number of monkeys tapping at typewriters to come up with a flawless copy (in fact an infinite number of flawless copies).
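A quick back-of-the-envelope check of that exponent (thirty possible characters in each of six million slots):

```python
import math

# Sanity-check the exponent: six million characters, thirty choices each.
# The number of possible texts is 30**n_chars = 10**exponent.
n_chars = 6_000_000
exponent = n_chars * math.log10(30)

print(f"about 10^{exponent:,.0f} possible texts")  # just under 10^9,000,000
```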
A typical reservoir model might contain six million cells (400 x 300 x 50), which means there are 10^9,000,000 ways to populate the cells in the model with integer porosity values between 1 and 30 porosity units. Again, no trouble for an infinite number of geologists at workstations to construct a perfect model, but as they’d have no way to know which one is correct, uncertainties would be large. And of course in today’s cost-constrained environment an infinite number of geologists is not available (thank goodness!), but fortunately a (finite) number of geophysicists are on hand to provide help. This talk will be about a new way to do just that.
Let’s continue with the Shakespearean analogy. We’ll pretend that one of the monkeys has a friend (perhaps a geophysicist) who has found an old copy of the Complete Works. Unfortunately it’s been left out in the rain, chewed up by the dog and so on, such that much of it is illegible. Nevertheless this noisy, low-resolution version of the text (you can see where I’m going with this) is the only data available.
The monkey has successfully got to the beginning of Richard III, with its famous first line “Now is the winter of our discontent”, but all they can decipher from the chewed-up version is “N w i t e w ter ou di co eqt”. The monkey could try typing random letters in the gaps (line 3) and eventually would obtain the correct answer, along with an awful lot of incorrect ones. A better strategy would be to impose a few rules. Firstly, they could limit their guesses to complete words (line 4), allowing some degree of mismatch as we know the text contains errors. Better, use complete words with a few grammatical rules (line 5) or, better still, restrict the vocabulary to words in use in Elizabethan England (line 6). This still won’t produce a unique solution; there are still several phrases consistent with both the data and the rules (line 7), but we’re pretty close and we have an estimate of the uncertainty in our answer.
What I’ve described is a stochastic inversion scheme in a Bayesian framework. Our prior is the set of grammatical rules and the Elizabethan vocabulary; the chewed-up text is our data, which enters through the ‘likelihood’ in Bayesian terminology; and the posterior is the set of sentences (realisations) consistent with both prior and likelihood. The process is stochastic because we’re attempting to account for all the uncertainties.
There are a number of different approaches to stochastic inversion; here I’m generating random sentences from the prior data and then either accepting or rejecting them by matching against the likelihood. Another way would be to choose one sentence and then modify it until it fits (stochastic optimisation) and another would be to come up with a more complete set of rules based on both prior and likelihood (an analytic posterior) and use those rules to select consistent phrases.
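The first approach, rejection sampling, can be sketched in a few lines. Everything below is invented for illustration: a tiny seven-word vocabulary stands in for Elizabethan English, the legible characters of the chewed-up line are the data, and a candidate phrase is accepted if it honours all but a couple of the observed characters (remember the data contain errors, like the stray ‘q’).

```python
import random

# Legible characters of the chewed-up line, in order (spaces removed).
OBSERVED = "nwitewteroudicoeqt"   # from "N w i t e w ter ou di co eqt"

# Toy 'prior': a tiny vocabulary standing in for Elizabethan English.
VOCABULARY = ["now", "is", "the", "winter", "of", "our", "discontent"]

def explained(candidate: str) -> int:
    """Count how many observed characters appear, in order, in the candidate."""
    letters = iter(candidate.replace(" ", "").lower())
    return sum(1 for ch in OBSERVED if ch in letters)

def accept(candidate: str, tolerance: int = 2) -> bool:
    """Accept if all but a few observed characters are honoured."""
    return explained(candidate) >= len(OBSERVED) - tolerance

# Rejection sampling: draw random phrases from the prior and keep only
# those consistent with the noisy data.
rng = random.Random(0)
posterior = {
    phrase
    for phrase in (
        " ".join(rng.choice(VOCABULARY) for _ in range(7))
        for _ in range(50_000)
    )
    if accept(phrase)
}
```

Even with only seven words the acceptance rate is tiny, which is the point of the grammatical rules in the analogy: a richer prior wastes far fewer draws. The other two approaches would replace the blind draws with, respectively, iterative modification of one candidate (stochastic optimisation) or direct sampling from rules that already combine prior and likelihood (an analytic posterior).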
How does this apply to the geological problem? Our ‘low-resolution noisy data’ is of course the seismic, and the prior is all the other information we have about the reservoir: we’ll know something about the depositional environment, allowing us to estimate the lithofacies we might expect to encounter and their likely proportions. Well data will be available to calibrate increasingly sophisticated rock-physics models for each of the lithofacies, and the vertical statistics of the bed-thickness distributions can also be measured.
Given these rules we can construct the equivalent of the Shakespearean phrases: short vertical geological profiles. We start with a stratigraphic framework based on a conventional interpretation and, within that, build up possible random lithofacies columns. We assign properties to each of the lithofacies — porosity then dry-frame moduli for the sands, velocities for the shales and so on — building up a complete suite of petrophysical curves. We refer to this as a pseudo-well. From these curves a synthetic can be constructed and matched against the seismic trace, accepting the pseudo-well if it matches and rejecting it otherwise. We do this thousands of times for each seismic trace and repeat across the entire reservoir. By averaging the reservoir properties associated with the accepted pseudo-wells, the mean and variance of any desired reservoir property can be estimated at every location.
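The loop above can be sketched numerically for a single trace. Everything here is a toy stand-in, not the method from the talk: the facies statistics, the linear porosity-to-impedance relation, the short Ricker wavelet and the correlation threshold are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

N_LAYERS = 12           # layers per pseudo-well (hypothetical framework)
FACIES = {              # toy prior: facies -> (mean porosity, std dev), invented
    "sand":  (0.25, 0.04),
    "shale": (0.08, 0.02),
}

def ricker(f=25.0, dt=0.004, n=9):
    """Short Ricker wavelet for the synthetic seismogram."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

def pseudo_well():
    """Draw one random lithofacies column and its petrophysical curves."""
    facies = rng.choice(list(FACIES), size=N_LAYERS)
    phi = np.array([rng.normal(*FACIES[f]) for f in facies]).clip(0.01, 0.35)
    # toy rock-physics relation: impedance falls with porosity, shales stiffer
    imp = 9000.0 - 12000.0 * phi + np.where(facies == "sand", 0.0, 800.0)
    return phi, imp

def synthetic(imp, wavelet):
    """Reflectivity from impedance contrasts, convolved with the wavelet."""
    rc = np.diff(imp) / (imp[1:] + imp[:-1])
    return np.convolve(rc, wavelet, mode="same")

def invert_trace(trace, wavelet, n_draws=2000, threshold=0.5):
    """Accept/reject pseudo-wells against one trace; return porosity statistics."""
    accepted = []
    for _ in range(n_draws):
        phi, imp = pseudo_well()
        syn = synthetic(imp, wavelet)
        # normalised cross-correlation as the goodness-of-fit measure
        num = float(np.dot(syn, trace))
        den = float(np.linalg.norm(syn) * np.linalg.norm(trace)) or 1.0
        if num / den > threshold:
            accepted.append(phi)
    accepted = np.array(accepted)
    return accepted.mean(axis=0), accepted.var(axis=0), len(accepted)

# Make a "true" well and treat its noisy synthetic as the observed trace.
wavelet = ricker()
true_phi, true_imp = pseudo_well()
trace = synthetic(true_imp, wavelet) + rng.normal(0, 0.002, N_LAYERS - 1)

mean_phi, var_phi, n_acc = invert_trace(trace, wavelet)
print(f"accepted {n_acc} of 2000 pseudo-wells")
```

In practice the loop runs over every trace in the survey, and the per-layer means and variances of the accepted pseudo-wells become the property cubes and their uncertainty estimates.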