Finsurance: Theory, Computation and Applications

Research

1. GMWBs

With the attrition of standard defined-benefit pension plans, a number of firms are offering retirement savings products that combine the upside of an equity portfolio with the downside or longevity protection of an annuity. Among the most popular of these are Guaranteed Minimum Withdrawal Benefits (GMWBs). In the United States these are riders to variable annuity contracts, and variants also exist in the UK and Japan. Over a trillion dollars of contract face value in the US alone is backed by these riders. They have also arrived in Canada. The first, Manulife's "Income Plus", generated over $3.4B in sales in the year after its introduction in October 2006. Other companies subsequently entering this market are Sun Life, Desjardins Financial, Industrial Alliance, and Fidelity, with more waiting in the wings. Innovative products are in rapid development, and their design and hedging are fertile ground for academic research.

The fundamental problem is that life insurance and financial economics view these products from quite different perspectives and use quite distinct approaches. The perfect hedging paradigm does not apply in actuarial science, where risk is retained and managed without dynamic hedging. Traditional life insurance products provided little exposure to equity returns. What has changed is that the last several years have seen rapid innovation in equity-linked insurance and annuity products. These products are very attractive because they not only provide the potential of positive gains in the equity markets, but also protection from market downturns -- the recent propagation of the effects of sub-prime mortgages in the US is a prime example of such events. The complex embedded options in these equity-linked contracts, coupled with the inherent mortality risk, make them notoriously difficult to model, value, hedge and risk-assess. However, it is exactly this interplay between upside potential and downside protection that makes them interesting and useful for retirement planning.

While some pricing questions can be answered using traditional financial economics, a variety of other problems fundamentally involve client behaviour. Incorporating client preferences and optionality into the hedging of novel finsurance products is among the outstanding challenges of the field. For example, the valuation problem is one of determining the annual management fee charged by the issuing firm. Any attempt to set this fee rationally, using for example indifference pricing, involves selecting a client's utility function. This is a subtle issue, since these products exist precisely because clients have complex and multidimensional utilities that require balancing many factors: upside potential versus secure income for life, consumption versus bequest or liquidity.

Another dimension is that products such as GMWBs typically allow clients to vary the underlying mutual fund on which the guarantees are written (something like a passport option). The asset allocation problem within the guarantee has interesting aspects: in effect, the guarantee functions somewhat like a bond holding, so optimal portfolios become riskier. Recent empirical work on variable annuity allocations shows that purchasers do indeed assume more equity risk inside these vehicles, as models suggest. Much work remains to be done on the asset allocation problem within GMWBs. All these problems fundamentally involve the techniques of stochastic calculus, actuarial modeling, and stochastic control theory.
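The bond-like effect of the guarantee can be illustrated with a back-of-the-envelope calculation. This sketch (not from the text; all parameter values are hypothetical) treats the guarantee as an implicit bond of value G and applies the classical Merton rule for CRRA utility to total wealth, showing that the implied equity share of the liquid account rises.

```python
# Illustrative sketch: if a GMWB guarantee acts like an implicit bond of
# value G, the Merton optimal-fraction rule applied to total wealth W + G
# implies a larger equity share of the liquid account W alone.

def merton_fraction(mu, r, sigma, gamma):
    """Merton optimal fraction of wealth in the risky asset (CRRA utility)."""
    return (mu - r) / (gamma * sigma ** 2)

mu, r, sigma, gamma = 0.08, 0.03, 0.20, 3.0  # hypothetical market/preference parameters
W, G = 100.0, 40.0                           # liquid account; implicit "bond" value of guarantee

pi = merton_fraction(mu, r, sigma, gamma)    # fraction of *total* wealth in equity
equity_no_guarantee = pi                     # equity share of W with no guarantee
equity_with_guarantee = pi * (W + G) / W     # equity share of W when the guarantee counts as a bond

print(round(equity_no_guarantee, 3), round(equity_with_guarantee, 3))
```

With these numbers the optimal equity share of the liquid account rises from about 42% to about 58%, consistent with the empirical finding that purchasers take more equity risk inside these vehicles.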

As indicated above, the mathematical approach to optimizing client behaviour, and in turn developing hedges for the guarantees, involves stochastic control theory. In particular, the probabilistic techniques lead to variants of the Hamilton-Jacobi-Bellman (HJB) equation, a highly nonlinear partial differential equation first developed by Bellman in 1952. In most circumstances, the HJB equation cannot be solved analytically and numerical methods have to be employed. In the past few years, researchers have developed efficient hybrid methods to solve these equations, combining analytical-numerical methods with similarity reduction techniques and applying them successfully to optimal consumption and portfolio selection problems. However, there are many associated areas which have not been explored.
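The dynamic-programming principle behind the HJB equation can be made concrete in a discrete-time analogue. The following sketch (hypothetical parameters throughout; it is a toy illustration, not any of the hybrid methods mentioned above) solves an optimal-consumption problem by backward induction on a wealth grid, with CRRA utility and a binomial risky return -- the discrete counterpart of solving the HJB equation numerically backward in time.

```python
# Discrete-time dynamic programming: backward induction for optimal
# consumption with CRRA utility and a binomial return.  The continuous-time
# limit of this recursion is an HJB equation.
import numpy as np

gamma, beta, T = 2.0, 0.96, 5          # risk aversion, discount factor, horizon
R_up, R_dn, p = 1.15, 0.95, 0.5        # binomial gross returns and up-probability
w_grid = np.linspace(0.1, 10.0, 200)   # wealth grid
c_frac = np.linspace(0.01, 0.99, 99)   # consumption as a fraction of wealth

def u(c):
    return c ** (1 - gamma) / (1 - gamma)

V = u(w_grid)                          # terminal value: consume-everything utility
policy = None
for t in range(T):                     # backward induction V_t <- V_{t+1}
    # continuation values by linear interpolation (clamped at grid ends)
    cont_up = np.interp(np.outer(w_grid, 1 - c_frac) * R_up, w_grid, V)
    cont_dn = np.interp(np.outer(w_grid, 1 - c_frac) * R_dn, w_grid, V)
    cand = u(np.outer(w_grid, c_frac)) + beta * (p * cont_up + (1 - p) * cont_dn)
    policy = c_frac[np.argmax(cand, axis=1)]   # optimal consumption fraction per wealth level
    V = cand.max(axis=1)

print(policy[100])  # optimal consumption fraction at mid-grid wealth
```

The value function stays increasing in wealth at every backward step, and the optimal policy is read off from the maximizing control -- exactly the structure exploited by HJB solvers in continuous time.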

2. Mortality modeling

So far, much of the academic work has been carried out under the assumption that the underlying hazard rate is deterministic (and Gompertz). In practice, though, there is growing evidence that hazard rates themselves are unknown, and therefore survival and death might be best described by a doubly stochastic process. A number of theoretical and empirical papers have taken this approach, and this is now a thriving area of research.
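For reference, the deterministic Gompertz baseline mentioned above is straightforward to compute. The sketch below (with illustrative, not fitted, modal-age and dispersion parameters) evaluates the Gompertz hazard and the survival probability it implies via the closed-form integrated hazard.

```python
# Gompertz hazard and survival probability (illustrative parameters).
import math

def gompertz_hazard(x, m=86.3, b=9.5):
    """Gompertz hazard at age x, with modal age m and dispersion b."""
    return math.exp((x - m) / b) / b

def survival(x, t, m=86.3, b=9.5):
    """P(alive at x+t | alive at x) = exp(-integral of the hazard),
    using the closed form: integral_x^{x+t} h(s) ds = b*(h(x+t) - h(x))."""
    return math.exp(-b * (gompertz_hazard(x + t, m, b) - gompertz_hazard(x, m, b)))

print(survival(65, 10))  # 10-year survival probability for a 65-year-old
```

In the doubly stochastic setting, the hazard itself becomes a random process and these quantities are replaced by expectations over its paths.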

This then raises the question of how some of the "known" results in the existing literature change when one includes this additional (state variable) randomness. If the law of large numbers cannot be used to completely eliminate mortality risk, there is a risk premium for mortality. Furthermore, given the empirical evidence for the dependence between wealth and health in the general population, one can make a strong argument that any "portfolio choice" model of asset prices and hazard rates should assume this dependence structure as well. It then becomes an open question how this randomness would impact the pricing, hedging and valuation of mortality-contingent claims. For example, is the retirement ruin probability under a given spending strategy higher or lower? Is the optimal age to annuitize earlier or later? Is the value of a Ruin-contingent Life Annuity -- and, for that matter, any life annuity -- higher or lower?
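The retirement ruin probability mentioned above is easy to estimate by simulation under the deterministic-hazard baseline, which then serves as the benchmark against which stochastic-hazard answers would be compared. This sketch uses hypothetical parameters throughout: a fixed real spending rule, Gompertz lifetimes drawn by inverse transform, and i.i.d. lognormal returns.

```python
# Monte Carlo estimate of retirement ruin probability: running out of
# wealth while still alive, under a fixed spending rule.
import numpy as np

rng = np.random.default_rng(0)
n, W0, spend = 10_000, 1.0, 0.05       # paths, initial wealth, annual spending
mu, sigma = 0.05, 0.16                 # lognormal return parameters
m, b, age = 86.3, 9.5, 65.0            # Gompertz modal age / dispersion; retirement age

# Gompertz lifetimes by inverse transform: S(t) = exp(-e^{(age-m)/b}(e^{t/b}-1))
u_ = rng.random(n)
T = b * np.log(1.0 - np.log(u_) * np.exp((m - age) / b))  # years lived past retirement

ruined = 0
for t_death in T:
    w = W0
    for _ in range(int(t_death)):      # withdraw, then earn a random return
        w = (w - spend) * np.exp(rng.normal(mu - 0.5 * sigma ** 2, sigma))
        if w <= 0:
            ruined += 1
            break
p_ruin = ruined / n
print(p_ruin)
```

Replacing the deterministic Gompertz hazard with a stochastic one (and correlating it with returns) is precisely the open question posed above.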

As described above, mortality risk has traditionally been modeled via a random time of death generated from a distribution, such as the Gompertz or its generalizations. The stochastic and dynamic nature of this risk is only now receiving attention. One of the first stochastic mortality models is the Lee-Carter model, in which central death rates are assumed to follow an AR(1) process. Although the model has been quite influential, it suffers from several ailments: (i) it has no fundamental connection with the aging process of an individual; (ii) it is widely believed to underrepresent the uncertainty in future mortality rates; and (iii) it does not lead to analytically tractable valuations. A different possibility is to use a structural modeling approach to mortality risk. In particular, an individual is assumed to be born with an uncertain number of "health units", and given these, the individual's health evolves according to a Brownian motion with negative drift. At the first hitting time of zero the individual dies. This approach to mortality modeling has several advantages over other approaches: (i) it allows for arbitrarily accurate calibration to life tables; (ii) risk-neutral valuation of various mortality-linked products is analytically tractable; and (iii) indifference valuation of mortality risk may also lead to tractable, or at least numerically computable, results.
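The structural model just described can be sketched in a few lines. Here the initial health level is fixed rather than random (a simplification; in the model above it is uncertain), and all parameters are illustrative. For a fixed start h0 and drift mu < 0, the hitting time is inverse Gaussian with mean h0/|mu|, which the simulation recovers.

```python
# First-passage structural mortality model: health follows a Brownian
# motion with negative drift; death occurs at the first hitting time of 0.
import numpy as np

rng = np.random.default_rng(1)
n, h0, mu, sigma, dt = 5000, 10.0, -0.25, 1.0, 0.05

death_times = np.full(n, np.nan)       # NaN = still alive at the horizon
h = np.full(n, h0)
alive = np.ones(n, dtype=bool)
for step in range(1, int(120 / dt) + 1):           # simulate up to 120 "years"
    h[alive] += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
    died = alive & (h <= 0.0)
    death_times[died] = step * dt
    alive &= ~died

print(np.nanmean(death_times), h0 / abs(mu))       # simulated vs theoretical mean lifetime
```

Randomizing h0 over a distribution is what allows the model to be calibrated arbitrarily closely to a life table, as claimed in point (i) above.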

3. Insurance derivatives

Jaimungal's work on derivatives which interlink insurance losses and equity value comes in several related but distinct flavors. Two such derivatives are catastrophe equity-linked put options and double-trigger stop-loss contracts. Typical modeling frameworks incorporate both a Brownian motion and a strictly increasing pure jump process modeling losses. Given the long time horizons (5-10 years) of these contracts, regime changes are likely important. This is modeled by modulating the world states using a continuous-time Markov chain. Conditional on the world state, the log-stock price and losses accumulate according to a 2-dimensional Levy process. The different world states can represent, for example, epochs of low, moderate and large losses and/or low, moderate and high volatility, or any combination of these. Jaimungal has studied the European version of these options. He has also developed a novel and efficient technique for valuing options under general Markov-modulated Levy models using Fourier space techniques, and is applying those techniques, together with filtering methods to estimate model parameters, to the valuation of the American and Bermudan variants.
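A minimal simulation conveys the flavor of this framework. In the sketch below (a small-time-step approximation with purely illustrative parameters, not a calibrated model), a two-state continuous-time Markov chain modulates both the equity volatility and the loss-arrival intensity; conditional on the state, losses accumulate as a compound Poisson process while the log-price diffuses.

```python
# One path of a Markov-modulated loss/equity model: calm vs stressed
# regimes drive both loss intensity and equity volatility.
import numpy as np

rng = np.random.default_rng(2)
T, dt = 5.0, 1.0 / 252                  # 5-year horizon, daily steps
q01, q10 = 0.5, 1.0                     # chain transition intensities (calm <-> stressed)
lam = np.array([1.0, 6.0])              # loss arrival intensity per state
sigma = np.array([0.15, 0.35])          # equity volatility per state
mu_jump = 0.5                           # mean loss size (exponential jumps)

state, logS, losses = 0, 0.0, 0.0
for _ in range(int(T / dt)):
    # regime switch (first-order approximation of the chain over dt)
    if rng.random() < (q01 if state == 0 else q10) * dt:
        state = 1 - state
    # diffusive equity increment with state-dependent volatility
    logS += -0.5 * sigma[state] ** 2 * dt + sigma[state] * np.sqrt(dt) * rng.standard_normal()
    # compound Poisson losses with state-dependent intensity
    for _ in range(rng.poisson(lam[state] * dt)):
        losses += rng.exponential(mu_jump)

print(round(np.exp(logS), 3), round(losses, 3))
```

The Fourier-space valuation techniques mentioned above operate on the characteristic function of exactly this kind of joint process, avoiding path simulation altogether.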

The Waterloo group plans to develop a framework for pricing mortality derivatives based on the theory of equilibrium pricing. The market is stylized as two players: one suffers from the uncertainty of mortality and issues mortality derivatives to hedge the mortality risk; the other invests in mortality derivatives. The equilibrium price is obtained such that the expected terminal utilities of both players are maximized and the market clears. This framework will provide not only the equilibrium price of mortality derivatives, but also the optimal amount of derivatives to be traded.
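A toy version of this mechanism can be computed directly. The sketch below is an illustrative simplification of the planned framework, not the framework itself: the mortality claim pays 1 on a Bernoulli event, both agents have exponential (CARA) utility so their optimal positions are available in closed form, and the price is found by bisection on the market-clearing condition. All parameters are hypothetical.

```python
# Equilibrium price of a Bernoulli mortality claim between a hedger
# (carrying a liability L on the event) and an investor, both CARA.
import math

theta = 0.05            # physical probability of the mortality event
L = 10.0                # hedger's liability if the event occurs
a_h, a_i = 0.5, 0.2     # CARA risk aversions: hedger, investor

def demand(pi, a, liability):
    """CARA-optimal position in a claim paying 1{event} at price pi, for an
    agent whose endowment loses `liability` on the event (first-order condition
    of expected exponential utility, solved in closed form)."""
    r = math.log(theta * (1 - pi) / ((1 - theta) * pi))
    return (r + a * liability) / a

def excess_demand(pi):   # zero net supply: the two positions must sum to zero
    return demand(pi, a_h, L) + demand(pi, a_i, 0.0)

lo_, hi_ = theta, 0.999  # excess demand is decreasing in price: bisect
for _ in range(80):
    mid = 0.5 * (lo_ + hi_)
    if excess_demand(mid) > 0:
        lo_ = mid
    else:
        hi_ = mid
price = 0.5 * (lo_ + hi_)
print(round(price, 4))
```

The clearing price exceeds the actuarial value theta: the investor demands a mortality risk premium to take on the risk, and the hedger's equilibrium position is a partial rather than a full hedge -- both features the equilibrium framework is designed to quantify.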

Another topic of particular interest to the Waterloo group is the incorporation of multivariate stochastic volatility models into the management of pension savings products. Only recently have academics and practitioners started working more intensively on such models, and there are still many outstanding issues related to the ways volatilities and correlations are jointly modelled and estimated. These models are essential for effective applications of portfolio selection methods and pricing of options contingent on several assets. Classical assumptions of constant volatility are particularly problematic when portfolios are to be managed for extended periods of time, as is typical when dealing with the variable annuity products sold by insurers.
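One way to see the joint-modeling issue is to simulate it. The sketch below is one simple possibility among many, with illustrative dynamics and parameters: each asset's variance follows a square-root (CIR-type) diffusion, and the instantaneous correlation is a tanh-transformed Ornstein-Uhlenbeck factor, which keeps it in (-1, 1).

```python
# Two assets with stochastic volatilities and a stochastic correlation
# (Euler discretization; all dynamics and parameters are illustrative).
import numpy as np

rng = np.random.default_rng(3)
n, dt = 252 * 5, 1.0 / 252              # 5 years of daily steps
kappa, vbar, xi = 3.0, 0.04, 0.3        # CIR parameters (shared for brevity)
v = np.full(2, vbar)                    # the two variance processes
z = 0.5                                 # latent OU factor driving the correlation
r1 = np.empty(n); r2 = np.empty(n); rho_path = np.empty(n)

for t in range(n):
    rho = np.tanh(z)
    rho_path[t] = rho
    e = rng.standard_normal(4)
    # correlated return shocks with the current stochastic correlation
    r1[t] = np.sqrt(v[0] * dt) * e[0]
    r2[t] = np.sqrt(v[1] * dt) * (rho * e[0] + np.sqrt(1 - rho ** 2) * e[1])
    # CIR variance updates (full truncation keeps v nonnegative)
    v = np.maximum(v + kappa * (vbar - np.maximum(v, 0)) * dt
                   + xi * np.sqrt(np.maximum(v, 0) * dt) * e[2:], 0)
    # OU update for the correlation factor
    z += 2.0 * (0.5 - z) * dt + 0.8 * np.sqrt(dt) * rng.standard_normal()

print(round(rho_path.min(), 3), round(rho_path.max(), 3),
      round(float(np.corrcoef(r1, r2)[0, 1]), 3))
```

Even this toy makes the estimation problem visible: the realized correlation of the returns is a blurred average of a latent path, which is why joint modeling and estimation of volatilities and correlations remains an open issue.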

4. Robust Reinsurance Design

The study of optimal reinsurance is a classical problem in actuarial science. From a practical point of view, an appropriate use of reinsurance can be an effective risk management tool for managing and mitigating an insurer's risk exposure. From a theoretical point of view, the quest for optimal reinsurance is typically formulated as an optimization problem, which in turn stimulates numerous ingenious models and interesting optimization theories. Both arguments have sparked a tremendous surge of interest among practicing actuaries and researchers in constantly seeking better and more effective reinsurance strategies. For example, some of the recently proposed risk measures, such as value at risk (VaR) and conditional tail expectation (CTE), have been exploited in determining an optimal design of reinsurance.
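The two risk measures named above are easy to compute empirically. In this sketch (losses drawn from a lognormal distribution purely for illustration), the insurer cedes everything above a stop-loss retention d, and the VaR and CTE of the retained loss are read off the simulated sample.

```python
# Empirical VaR and CTE of an insurer's retained loss under a stop-loss
# treaty with retention d (illustrative lognormal losses).
import numpy as np

rng = np.random.default_rng(4)
losses = rng.lognormal(mean=1.0, sigma=1.0, size=100_000)   # hypothetical gross losses
d = 10.0                                                    # stop-loss retention
retained = np.minimum(losses, d)                            # insurer keeps min(X, d)

def var(x, alpha):
    return np.quantile(x, alpha)

def cte(x, alpha):
    q = var(x, alpha)
    return x[x >= q].mean()       # average loss at or beyond the VaR level

alpha = 0.95
print(round(var(retained, alpha), 3), round(cte(retained, alpha), 3),
      round(cte(losses, alpha), 3))
```

The stop-loss treaty caps the retained tail at d, so the CTE of the retained loss never exceeds the retention, while the CTE of the gross loss remains much larger -- the basic effect that VaR- and CTE-based reinsurance design exploits.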

The project investigators hope to bring further insights to the problem of optimal reinsurance. In particular, they will consider an empirical approach to optimal reinsurance whereby observed data is exploited explicitly. They will also address the issue of robust reinsurance. By formulating the optimization problem as a second-order cone program, they will study the impact of parameter uncertainty and objective uncertainty on the optimality of reinsurance.
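The empirical approach can be illustrated in miniature. The sketch below chooses a stop-loss retention directly from a loss sample (simulated here as a stand-in for observed data), trading off the reinsurance premium against the CTE of the retained loss; the expected-value premium principle with a 30% loading is an illustrative assumption, and grid search stands in for the cone-programming formulation described above.

```python
# Empirical choice of a stop-loss retention d: minimize reinsurance premium
# plus CTE of the retained loss over a grid of retentions.
import numpy as np

rng = np.random.default_rng(5)
losses = rng.lognormal(mean=1.0, sigma=1.0, size=50_000)    # stand-in for observed data
loading, alpha = 0.3, 0.95

def total_risk(d):
    retained = np.minimum(losses, d)
    premium = (1 + loading) * np.maximum(losses - d, 0).mean()  # expected-value principle
    q = np.quantile(retained, alpha)
    return premium + retained[retained >= q].mean()             # premium + empirical CTE

grid = np.linspace(1.0, 40.0, 200)
best = min(grid, key=total_risk)
print(round(best, 2), round(total_risk(best), 3))
```

Parameter uncertainty enters exactly here: the chosen retention depends on the empirical loss distribution, and robust formulations ask how stable that choice is when the sample or the objective is perturbed.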