TIME-RANDOMIZED STOPPING PROBLEMS FOR A FAMILY OF UTILITY FUNCTIONS

IKER PEREZ† AND HUILING LE†

Abstract. This paper studies stopping problems of the form V = inf_{0≤τ≤T} E[U(max_{0≤s≤T} Z_s / Z_τ)] for strictly concave or convex utility functions U in a family of increasing functions satisfying certain conditions, where Z is a geometric Brownian motion and T is the time of the nth jump of a Poisson process independent of Z. We obtain some properties of V and offer solutions for the optimal strategies to follow. This provides us with a technique to build numerical approximations of stopping boundaries for the fixed terminal time optimal stopping problem presented in [J. Du Toit and G. Peskir, Ann. Appl. Probab., 19 (2009), pp. 983–1014].


1. Introduction.
The type of optimal stopping problem studied in this paper has been considered by many authors. Pioneering solutions were first presented by Graversen, Peskir, and Shiryaev in [11] and Du Toit and Peskir in [6] for optimal stopping problems of the form inf_{0≤τ≤1} E[(B^λ_τ − max_{0≤s≤1} B^λ_s)²], where B stands for a Brownian motion and B^λ denotes a Brownian motion with drift λ. In particular, the stopping rules obtained were defined as the first entry time of an underlying stochastic process, accounting for the distance between the Brownian motion and its running maximum, into some stopping region. Within a financial context, considering a geometric Brownian motion Z, Shiryaev, Xu, and Zhou in [18], Du Toit and Peskir in [7], and Dai et al. in [4] derived results on the stopping problems V₁ = inf_{0≤τ≤T} E[M_T / Z_τ] and V₂ = sup_{0≤τ≤T} E[Z_τ / M_T], where M_T stands for the maximum of Z over the entire time interval [0, T]. The use of probabilistic techniques in [7] enabled the authors to extend the work in [18] and derive so-called bang-bang strategies for problem V₂; these defined a goodness index through the parameters describing the dynamics of Z and categorized processes as either good (never stop) or bad (stop immediately). An analysis of problem V₁, surprisingly, led to a different optimal stopping rule for a given subset of parameters; in this case, the solution to V₁ was found to be given by a time-dependent optimal stopping boundary for an underlying stochastic process to cross. Elie and Espinosa in [9] and Espinosa and Touzi in [10] addressed optimal stopping problems for a more general family of mean-reverting diffusions with similar financial motivations. In their case the terminal time bounding the time horizon is random, given by the hitting time of the diffusion at zero. In [10], the optimal stopping rule for the problem inf_{τ∈[0,θ]} E[U(X_τ − max_{0≤s≤θ} X_s)] is given by the first crossing time of a time-dependent boundary by some underlying stochastic process, where X stands for a mean-reverting diffusion, U is some increasing and convex
loss function, and θ is the first hitting time of X at zero. On the other hand, [9] provides a solution to an analogous problem. The result is consistent with those in [7], [18], and [4]: a restrictive time-dependent stopping boundary is defined, implying that immediate stopping is close to optimal.
In this paper, we address questions similar to those in [7], [9], [18], and [4] in an extended time-randomized context, where the terminal stopping deadline is random and independent of the state of the diffusion of interest. The aim is twofold: to discuss the robustness of the developed strategies with respect to different utility criteria under the influence of this new uncertainty, and to provide approximations to these strategies under a fixed terminal time set-up. The addition of such uncertainty, modelled through a Poisson process, was introduced in [3] in the context of option pricing, in order to offer approximations for American option values. Randomizing, in that context, was treated as the first step of a more general procedure that involves working out the expected value of the dependent variable in the random parameter setting and, finally, letting the variance of the distribution of the randomized variable approach zero while holding its mean at a fixed parameter.
We derive a family of time-independent stopping problems with an underlying two-dimensional diffusion. We discuss the existence of optimal stopping boundaries and obtain complete solutions through a reduction to a family of boundary value problems. We also analyze the detection of "bang-bang" strategies and links to previous work. Our results allow us to computationally build numerical approximations to fixed terminal time optimal stopping problems, and suggest the possibility of extending the optimal stopping rules defined in [7] to a more general family of power utility measures.
The structure of the paper is as follows. In section 2 we introduce the time-randomized problem, define the family of utility functions of interest, and set up a time-independent two-dimensional optimal stopping problem fitting into the general theory presented in [17]. Section 3 discusses the existence of bang-bang strategies and stopping boundaries and introduces the main result of this paper. In section 4, we provide the proof of the main result. Finally, section 5 discusses the results and suggests future research directions.

2. A randomized terminal time stopping problem.
Let (Ω, F, P) be a probability space equipped with a P-augmented natural filtration {F_t}_{t≥0}. For a fixed n > 0, T denotes the waiting time to the nth jump (T_n) of an F_t-adapted Poisson process N = (N_t)_{t≥0} with rate ν, so that T = T_n = inf{t ≥ 0 : N_t = n}. Let B = (B_t)_{t≥0} denote a one-dimensional standard Brownian motion adapted to {F_t}_{t≥0}, with B_0 = 0 and independent of N, and, for fixed constants μ and σ > 0, let Z = (Z_t)_{t≥0} denote a geometric Brownian motion given by Z_t = Z_0 exp(σB_t + (μ − σ²/2)t). Define the running maximum processes M = (M_t)_{t≥0} and S^λ = (S^λ_t)_{t≥0} by

(2.1)  M_t = max_{0≤s≤t} Z_s,   S^λ_t = max_{0≤s≤t} B^λ_s,

where λ is a fixed constant and B^λ_t = B_t + λt. Recall (cf. [14]) that the distribution of S^λ_t is given by

(2.2)  F_{S^λ_t}(s) = Φ((s − λt)/√t) − e^{2λs} Φ((−s − λt)/√t),  s ≥ 0,

where Φ denotes the cumulative distribution function of a standard normal random variable.
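As a sanity check on (2.2), the closed-form distribution of S^λ_t can be compared against a crude Monte Carlo simulation of the running maximum of a drifted Brownian motion. The sketch below is illustrative only (function names are our own, not from the paper) and uses only the Python standard library:

```python
import math
import random

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cdf_running_max(s, lam, t):
    # Formula (2.2): P(S^lam_t <= s) for the running maximum of B + lam*t.
    rt = math.sqrt(t)
    return norm_cdf((s - lam * t) / rt) - math.exp(2.0 * lam * s) * norm_cdf((-s - lam * t) / rt)

def mc_running_max_cdf(s, lam, t, n_paths=4000, n_steps=400, seed=1):
    # Monte Carlo estimate of P(max_{0<=u<=t} B^lam_u <= s) on a fine Euler grid.
    # Slightly biased upward, since the discrete maximum undershoots the true one.
    rng = random.Random(seed)
    dt = t / n_steps
    sd = math.sqrt(dt)
    count = 0
    for _ in range(n_paths):
        b, m = 0.0, 0.0
        for _ in range(n_steps):
            b += lam * dt + sd * rng.gauss(0.0, 1.0)
            if b > m:
                m = b
        if m <= s:
            count += 1
    return count / n_paths
```

For λ = −0.25 and t = 1, the two estimates agree up to discretization and sampling error.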
Definition 2.1. The family U consists of all C²-functions U(x) defined on [1, ∞) that are increasing, strictly concave or strictly convex, and meet the following criteria for all constants α, β, t₀ ∈ R₊, where U′(x) and U″(x) are the first and second order derivatives of U(x).
For a given function U ∈ U, we consider the optimal stopping problem

(2.6)  V = inf_{τ∈T} E[U(M_T / Z_τ)],

where T stands for the set of all stopping times taking values in [0, T].

An alternative expression for V .
Lemma 2.2. For any given utility function U ∈ U, let the function ψ be defined as in (2.7), where T_{n−k} stands for the waiting time until the (n − k)th jump of a Poisson process with rate ν, λ = (μ − σ²/2)/σ, and F_{S^λ_t}(s) is as in (2.2). Then, (2.6) can be expressed as the time-independent optimal stopping problem (2.8), where the process X = (X_t)_{0≤t≤T} is given by X_t = S^λ_t − B^λ_t.

Proof. The proof is similar to that of [7, Lemma 1], and so we only summarize the main steps. We can rewrite V in terms of a Brownian motion with drift λ and its running maximum. Using deterministic times and the law of total expectation, the term involving the expected value, restricted to the event {t ≤ T}, can be rewritten accordingly. The independent and stationary increments of B^λ imply that max_{0≤s≤T−t} (B^λ_{t+s} − B^λ_t) and S^λ_{T−t} are equal in law. The memoryless property of the exponential distribution implies that, conditioned on F_t, T − t is equal in law to T_{n−N_t}, where T_{n−N_t} stands for the waiting time until the (n − N_t)th jump of a Poisson process with rate ν. Recalling that the processes N and B are independent, the above yields an expression in terms of f_{S^λ_T}(z), the density function of S^λ_T. Using property (2.3) and integrating by parts the inner integral in the last term of the right-hand side, we obtain the form of ψ. As pointed out in [8] and [7], arguments based on each stopping time being the limit of a decreasing sequence of discrete stopping times allow us to extend this result from deterministic times to all stopping times. Consequently, we may rewrite V as claimed, completing the proof.

Extension of V .
Let D denote the set of states of (N_t, X_t) at which instantaneous stopping is optimal in problem (2.8); we refer to it as the stopping set. Then, (2.8) can be expressed as (2.9), where τ_D denotes the first entry time of the process into D. We note that {n} × R₊ ⊆ D, since the state n in N indicates forced stopping; this implies that τ_D ≤ T < ∞ almost surely. It is shown in [12] that X, with initial state x ≥ 0, has the law of a Brownian motion with drift −λ reflected at 0 (cf. also [5]). On the other hand, the law of N started at k is equal to that of (N^k_t)_{t≥0}, with N^k_t = k + N_t. In order to make use of Markovian techniques and provide a solution to our problem, we extend (2.9), allowing it to start at any point in the state space, so that (2.10) holds, where E_{k,x} denotes the expectation under any Markovian probability measure for which P(N_t = k, X_t = x | t < T) = 1, and τ_D(k, x) stands for the first entry time into D of the two-dimensional Markovian process Y_t = (N_t, X_t). Then, the general theory in [17] indicates that the solution to the stopping problem is provided by the largest subharmonic function dominating ψ on the state space. In addition, the optimal stopping time comes whenever the current state of the Markovian process falls within the subset of the state space where the values of the gain and dominating functions coincide, so that D is given by (2.11) and is complemented by the continuation set C. If a bang-bang strategy were optimal, then {1, . . ., n} × R₊ would be included in either D or C.
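The two-dimensional process Y_t = (N_t, X_t) and its forced stopping at the nth Poisson jump can be illustrated with a simple Euler-type simulation. This is an informal sketch with hypothetical names, using a naive step-by-step thinning of the Poisson jumps; it is not part of the paper's construction:

```python
import math
import random

def simulate_Y(n=5, nu=1.0, lam=-0.25, dt=1e-3, seed=7):
    """Simulate Y_t = (N_t, X_t) until the n-th Poisson jump (the deadline T),
    where X_t = S^lam_t - B^lam_t is built from a drifted Brownian motion and
    its running maximum, so that X is nonnegative (reflection at 0)."""
    rng = random.Random(seed)
    sd = math.sqrt(dt)
    b = 0.0   # B^lam, the drifted Brownian motion
    s = 0.0   # S^lam, its running maximum
    k = 0     # N_t, the Poisson count
    t = 0.0
    path = []
    while k < n:
        b += lam * dt + sd * rng.gauss(0.0, 1.0)
        s = max(s, b)
        if rng.random() < nu * dt:   # approximately one jump in (t, t+dt]
            k += 1
        t += dt
        path.append((t, k, s - b))
    return path  # the last entry is at the (approximate) deadline T = T_n
```

Every simulated state satisfies X_t ≥ 0, and the simulation always terminates at the nth jump, illustrating that τ_D ≤ T < ∞ almost surely.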

The infinitesimal generator.
The infinitesimal generator of the process X = S^λ − B^λ is known (cf. [7]) to act on twice differentiable functions f (satisfying f′(0) = 0) as

A_X f(x) = (1/2) f″(x) − λ f′(x),

while the generator of a Poisson counting process acts as

A_N f(k) = ν (f(k + 1) − f(k)).

Therefore, the infinitesimal generator of the two-dimensional Markovian process Y_t = (N_t, X_t) acts on suitable functions f : R² → R as

(2.12)  A_Y f(k, x) = (1/2) f_xx(k, x) − λ f_x(k, x) + ν (f(k + 1, x) − f(k, x)).

Applying the Itô formula to ψ in (2.10) and compensating the jump terms with a subordinator, we obtain (2.13). Note that algebraic calculations give an explicit expression (2.14) for A_Y ψ(k, x).

3. Solution to the optimal stopping problem. Noting expression (2.13), the two sets Θ and Υ in (3.1)–(3.2) play a fundamental role in the descriptions of C and D, and the functions R₁ and R₂ are defined as in (3.3). Arguments in [17], considering exit times from small balls and making use of the optional sampling theorem, can be extrapolated to this case and suggest that, when Υ = {0, 1, . . ., n − 1} × R₊, stopping is never optimal until the deadline for any utility measure. In this case, the explicit expression for V is given as the unique solution to the boundary value problem (3.4)–(3.6). While (3.6) is rather obvious, the derivations of (3.4) and (3.5) are similar to those of Theorem 3.3, and we omit them here. Making use of (3.5) and (3.6) as boundary conditions, the explicit solution for V can be obtained by solving the ordinary differential equation (3.4). We refer the reader to [1] for a collection of standard techniques for solving linear second order differential equations.
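The action of the generator A_Y in (2.12) can be sketched numerically. The following illustrative helper (our own, not from the paper) approximates the x-derivatives by central finite differences:

```python
def generator_Y(f, k, x, lam, nu, h=1e-5):
    """Apply A_Y f = (1/2) f_xx - lam * f_x + nu * (f(k+1, x) - f(k, x)),
    the generator of Y = (N, X) from (2.12), to a smooth function f(k, x).
    Derivatives in x are taken by central finite differences with step h."""
    fx = (f(k, x + h) - f(k, x - h)) / (2.0 * h)
    fxx = (f(k, x + h) - 2.0 * f(k, x) + f(k, x - h)) / (h * h)
    return 0.5 * fxx - lam * fx + nu * (f(k + 1, x) - f(k, x))
```

For example, applied to the test function f(k, x) = x² + k, the formula gives 1 − 2λx + ν, which the finite differences reproduce to high accuracy.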
If Θ = {0, 1, . . ., n − 1} × R₊, it is not directly implied that (k, x) ∈ D, since Θ and D are not necessarily the same set. However, looking into (2.13) we see that τ_D(k, x) = 0 must hold, since otherwise V(k, x) > ψ(k, x), which is contradictory. This implies that instantaneous stopping is optimal. It follows from Lemma 3.1 that, for a given U ∈ U, V(k, x) = ψ(k, x). Should the conditions of Lemma 3.1 not be met, the memoryless property of the exponential distribution poses an independent optimal stopping problem for each subsequent step in N (cf. [2, 3, 13]) and may give rise to arrays of critical points in R₊ dividing the state space {0, 1, . . ., n} × R₊ into the sets D and C. These are referred to as optimal stopping boundaries, and the optimal stopping rule for a problem V started at an arbitrary (k, x) ∈ C is given by the first crossing time of the process X over such a boundary. Formally defined as time functions (constant between jumps in N), stopping boundaries are linked to the number of steps left to the deadline in N at any given point in time, and we denote them by ζ*(n − N_t) for t ≥ 0 (see the example in Figure 1). If Θ in (3.1) is nonempty, the theory in [17] implies the existence of "bounding" functions for the set, in our case functions on {0, 1, . . ., n − 1} defining the frontier(s) between Θ and Υ. In what follows we make the following assumption.
We note that determining the veracity of Assumption 3.2 analytically can be a daunting challenge due to the complexity of (2.14); in the examples discussed in the closing section this is therefore done numerically (see Figure 2). Under Assumption 3.2, the existence of an optimal stopping boundary ζ* follows from the existence of the bounding values b(k). Heuristic arguments, implying that the optimal stopping rule will be linked to the time of a large departure of the process Z from its running maximum, suggest that D will lie above the boundary ζ*. Thus, the continuation set C defined by (2.11) is composed of all points (k, x) where x is smaller than the value of ζ* at n − k steps left to the deadline, as given in (3.7). Finally, for any starting point (k, x), the optimal stopping rule τ_D linked to an optimal boundary ζ* is the first entry time of X to the region on or above the boundary. Therefore, the solution of the optimal stopping problem V will follow from the correct detection of the values that ζ* takes at each step in {0, 1, . . ., n}. With the aim of presenting the main result of this paper, we define the following functionals. Let C₁ and C₂ be given in terms of the boundary ζ*, the set of parameters (λ, σ, ν), and the functions R₁ and R₂ of (3.3).
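Given a boundary ζ* and a simulated path of Y, the stopping rule just described (stop the first time X reaches the level of ζ* at n − N_t steps to the deadline) can be sketched as follows. The path format and the boundary here are hypothetical placeholders for the objects produced by Theorem 3.3:

```python
def first_crossing_time(path, zeta, n):
    """Given a path [(t, k, x), ...] of Y_t = (N_t, X_t) and a boundary zeta
    mapping 'steps left to deadline' to a threshold, return the first time at
    which x >= zeta(n - k), i.e. the first-crossing stopping rule tau_D.
    If the boundary is never crossed, stopping is forced at the deadline."""
    for (t, k, x) in path:
        if x >= zeta(n - k):
            return t
    return path[-1][0]  # forced stop at the deadline T
```

With a constant boundary the rule reduces to a plain first-passage time of X, matching the bang-bang discussion above when the boundary is 0 or +∞.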
Theorem 3.3. Under Assumption 3.2, for a given U ∈ U, the underlying extended optimal stopping problem V(k, x) in (2.10) can be recursively decomposed as in (3.10). The value of the optimal stopping boundary ζ* at n − k steps (or jumps) to the deadline in the process N can be identified as the only positive solution to the integral equation (3.11).

4. Proof of Theorem 3.3. For any x ∈ R₊, V is known at the deadline and is given by V(n, x) = ψ(n, x) = U(e^{σx}). Equation (3.10) provides an iterative method to work out the numerical value of V at any point in the state space.
It is known (cf. [17, Chapter 3]) that the optimal stopping problem V(k, x) in (2.10) solves the boundary value problem (4.1)–(4.2), where A_Y is the infinitesimal generator of the process Y defined in (2.12). In terms of ζ*, this is equivalent to the system (4.3)–(4.4). In the following, we show that the mapping x → V(k, x) is continuous for any fixed value of k in N. Its differentiability within C follows from the theory in [17].
Moreover, we show that, for any k < n, the system of equations (4.3)–(4.4) may be complemented by the boundary conditions (4.5)–(4.7). The use of the boundary conditions (4.5)–(4.7) then allows (3.10) and (3.11) to be derived by solving the ordinary differential equation (4.3), so that the proof of Theorem 3.3 follows from the application of standard techniques for solving linear second order differential equations with constant coefficients (cf. [1]). In order to show (4.5)–(4.7), we make use of variations of the methods of solution presented in [17, Chapter 4] and applied in [7, 15, 16], among others.

Monotonicity and continuity of V .
We recall that V(k, x) ≤ ψ(k, x) for any x < ζ*(n − k), so that (4.5) will follow from the continuity of the mapping x → V(k, x). We start by introducing the following lemma for later use.
Lemma 4.1. Let U ∈ U be a strictly convex function, and fix t ≥ 0 and x ∈ R₊. Then, the random variable e^{σX^x_t} U′(e^{σX^x_t}) is integrable.

Proof. Since U is a nondecreasing and convex function, the expectation can be bounded in terms of w(z) = x + |λ|t + 2z. Integrating by parts, recalling (cf. [7]) that P(max_{0≤s≤t} |B_s| ≥ z) ≤ 2 P(|B_t| ≥ z), and noting conditions (2.4) and (2.5), it follows that the resulting integral is finite.

Lemma 4.2. For any fixed k ≤ n, the mapping x → V(k, x) is nondecreasing and continuous on R₊.

Proof. The proof is split into two parts. First, we show that the gain function ψ in (2.7) is nondecreasing in x. If k = n, this follows directly. Since the subset {n} × R₊ is included in D, E[ψ(N^k_t, X_t)] reaches a global minimum at some point on or before the deadline; such a global minimum always exists and corresponds to the value of V(k, ·). Moreover, τ_x, τ_y ≤ T. Recalling that ψ(k, x) is nondecreasing in x, and noting that X^y_{τ_y} ≥ X^x_{τ_y}, it follows that V(k, y) ≥ V(k, x), settling the monotonicity of V. Next, we show that the mapping x → V(k, x) is continuous for any fixed k ≤ n. If k = n, the value function reduces to U(e^{σx}), which is continuous in x. If k < n, following the previous arguments, we note that for any fixed value of k the mean value theorem yields an upper bound for the increment of V. In order to further simplify this upper bound, we recall (4.8) and note that 0 ≤ X^y_{τ_x}. If U is concave, then U′ is a nonincreasing function, and we obtain (4.9) for some constant value c > 0. If U is convex, then U′ is a nondecreasing function, and so (4.10) holds. Note that the integrability of e^{σX^y_{τ_x}} U′(e^{σX^y_{τ_x}}) when U is convex follows from Lemma 4.1; we refer to [7] for a probabilistic proof of the integrability of the term e^{σX^y_{τ_x}}. Now, take the limit as |y − x| → 0 in (4.9) and (4.10) to conclude that x → V(k, x) is continuous on R₊, completing the proof.

The condition of smooth fit.
Lemma 4.3 (principle of smooth fit). The optimal stopping boundary ζ* is characterized by the fact that, for any fixed k < n, V_x(k, x) exists and is continuous in x, with (4.11) and (4.12) holding at the boundary.

Making use of the mean value theorem, it can be derived from (4.12) that, for fixed k, a corresponding bound holds in terms of the process X. Recall that, for any U ∈ U, ψ_x ≥ 0. Recalling that V is twice differentiable in C, dividing the terms in (4.11) and (4.13) by ε and taking the limit as ε → 0 leads to the one-sided derivative at the boundary. Therefore, by (4.14), the smooth-fit condition follows from the right continuity of Poisson processes.
Next, we show that, for any fixed k < n, V_x(k, x) is continuous at the corresponding values of ζ*(n − k). To this end, take δ > 0. In a similar fashion as before, for any ε ∈ (0, δ), a comparison of value functions leads to an inequality in terms of the process X. Dividing the expressions in this inequality by ε and taking the limit as ε → 0, we obtain one direction of the desired identity. To show that the reverse inequality holds, taking ε > 0, we note a corresponding bound for some intermediate value. Dividing the previous expression by ε and taking the limit as ε → 0, the left-hand side tends to V_x(k, ζ*(n − k) − δ) and the right-hand side tends to the corresponding limit, implying the reverse inequality. Hence, the right continuity of Poisson processes concludes the proof.

The condition of normal reflection.
Lemma 4.4 (normal reflection). For any fixed k < n, lim_{x→0} V_x(k, x) = 0.

Proof. If ζ*(n − k) = 0, then from (4.8) we observe that lim_{x→0} V_x(k, x) = 0. If ζ*(n − k) > 0, we apply Itô's formula for noncontinuous semimartingales to V(N^k_t, X^0_t) while (N^k_t, X^0_t) is in the continuation set C. The function V is twice differentiable in C, and therefore the limit exists; this yields (4.15). Recall from [7] that dX^0_s = dS^λ_s − λ ds − dB_s = −λ ds + dB_s + dl^0_s(X) is a generalized Itô process, so that [X^0]_s = (∫_0^s dB_r)² = ∫_0^s dr = s, where l^0(X) denotes the local time of the process X at 0. Plugging these expressions into the previous equation, we obtain (4.16), where the operator A_X is the infinitesimal generator of the process X introduced above. The jumps of the Poisson process N^k_t are of size 1 almost surely; therefore, the last term on the right-hand side of (4.15) can be rewritten accordingly, using (4.17). Taking limits on both sides of (4.16) as t → 0 gives (4.18), where A_N is the infinitesimal generator of the process N_t. Recalling that A_Y = A_X + A_N, (4.18) reduces further, and dividing by t and letting t → 0 yields the result.

5. Discussion. The result in Theorem 3.3 allows us to iteratively compute, for a choice of utility function U ∈ U satisfying the conditions in Assumption 3.2, the values of the optimal stopping boundary ζ* associated with the problem of optimally halting a stochastic process Z driven by a geometric Brownian motion with drift μ and variance σ². This boundary defines an optimal stopping rule, so that the expected value in (2.6) is optimized according to the choice of U. The signal to halt is given by the first crossing time of the underlying process X_t over the boundary ζ*.
It is key to understand that different processes Z will be related to different optimal boundaries. For a given number of steps to the deadline n and a common expected jump rate ν, a different pair of parameters (μ′, σ′), defining another process, will lead to a different optimal stopping boundary. Such a boundary may be either more permissive, allowing a broader range of values of X_t to remain outside the stopping set D, or more restrictive, reducing its value and therefore forcing the sale as soon as the process departs only slightly from zero.

5.1. Existence. An example of a family of functions meeting the conditions in Assumption 3.2 is given by the family of combined power utility functions, i.e., functions U(x) of the form

U(x) = ∑_{i=1}^m α_i x^{δ_i},

with m ≥ 1, 0 ≤ δ_i < 1 for all i ∈ {1, 2, . . ., m} (strictly concave) or δ_i > 1 for all i ∈ {1, 2, . . ., m} (strictly convex), 0 ≤ α_i ≤ 1, and ∑_{i=1}^m α_i = 1. Direct numerical analysis of the functions A_Y ψ(k, x) in (2.13) reveals common properties for measures of this kind: for all (k, x) ∈ {1, 2, . . ., n} × R₊ there exist u₁, u₂ ∈ R₊ with u₁ < u₂ such that, for any μ ∈ (u₁, u₂), the conditions in Assumption 3.2 are met. It follows that for this family of functions the optimal stopping set D can be partially characterized, and expressions for the value functions and optimal stopping boundaries can therefore be obtained using the results in Theorem 3.3.
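The concavity and convexity conditions for this family can be checked numerically. The sketch below assumes the reconstructed form U(x) = ∑_i α_i x^{δ_i} (the displayed definition is partially elided in the source) and uses a finite-difference second derivative:

```python
def U_power(x, alphas, deltas):
    # Combined power utility U(x) = sum_i alpha_i * x**delta_i
    # (reconstructed form; weights alpha_i are assumed to sum to one).
    return sum(a * x ** d for a, d in zip(alphas, deltas))

def second_derivative(f, x, h=1e-4):
    # Central finite-difference approximation of f''(x).
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
```

With all δ_i in (0, 1) the second derivative is negative (strict concavity), and with all δ_i > 1 it is positive (strict convexity), matching the two cases above.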
As mentioned in the introduction, the structure of the optimal stopping rules presented in [7] led to the categorization of processes into three different groupings. The solution to our randomized problem for the family of combined power utility functions suggests that we can still make use of a categorization similar to that described in [7] and presented in (5.1)–(5.3).

5.2. Approaching the original fixed terminal time problem.
There is an obvious reciprocity between the fixed terminal time and the randomized set-up problems with power functions U(x). The random variable T, defined as the terminal stopping time, is modelled as the nth jump of a Poisson process with rate ν and is therefore Gamma distributed. Thus, it is possible to modify the values of both n and ν to create strong estimates of true deadlines. For some fixed T̄ > 0, setting ν = n/T̄ gives E[T] = n/ν = T̄, while Var(T) = n/ν² = T̄²/n becomes infinitesimally small as n → ∞. Such an approach results in the same optimal selling boundaries introduced in [7] under a fixed terminal time T̄ set-up, as shown in Figure 3. Figure 4 shows the existence of two bounding points for the set Θ for any fixed value of k when μ > 0. This is consistent with results in [17] and [6]. Such an observation implies the existence of two stopping boundaries, therefore leading to a boundary value problem different from that offered by (4.1) and (4.2).
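The first two moments of T = T_n under the choice ν = n/T̄ follow directly from the Gamma(n, ν) distribution; a minimal sketch (the helper name is our own):

```python
def erlang_moments(n, Tbar):
    """Moments of T = T_n ~ Gamma(n, rate nu) with nu = n / Tbar:
    E[T] = n / nu = Tbar and Var(T) = n / nu**2 = Tbar**2 / n,
    so the mean is held fixed while the variance vanishes as n grows."""
    nu = n / Tbar
    return n / nu, n / nu ** 2
```

For instance, with T̄ = 10 and n = 40 (the configuration of Figure 3), the deadline has mean 10 and variance 2.5; increasing n shrinks the variance further.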

Fig. 1. Example realization with n = 10 and U(x) = x. The straight horizontal lines correspond to the optimal stopping boundary ζ*. The dynamics of X^x are plotted as the jagged line. Here, τ is the optimal stopping time.

Assumption 3.2. The sets Θ and Υ in (3.1)–(3.2) are nonempty, and there exists an n-dimensional array b of bounding values for Θ. Figure 2 offers examples of choices of the function U meeting this criterion. It is possible to face the existence of two or more bounding vectors for Θ, leading to a boundary value problem different from the one studied here (see examples for this case in the final section).
which is a nondecreasing function of x. If k < n, take values x, y ∈ R₊ with x ≤ y, and set τ_x = τ_D(k, x) and τ_y = τ_D(k, y), where τ_D(k, ·) is given by (3.7).

Fig. 3. Estimate of the continuous optimal selling boundary for fixed terminal time parameter T = 10, λ = −0.25, σ = 1. The number of breaks n used to build this estimate is 40, with ν = 4. The time τ stands for the optimal stopping time.

5.3. Different utility functions. It is also possible to extend the work to functions U in U satisfying conditions different from those in Assumption 3.2. The nature of the sets Θ and Υ in (3.1) and (3.2) is linked to each choice of U and determines the nature of the stopping set D to be defined. For instance, consider the choice of the squared logarithmic utility function U(x) = (log(x))², leading to the randomized terminal time optimal stopping problem

(5.4)  inf_{τ∈[0,T]} E[(B^λ_τ − max_{0≤s≤T} B^λ_s)²].

Fig. 4. Value of the function A_Y ψ(k, x) for different fixed values of k, with respect to x, in the case μ = 0.5 with the choice of measure U(x) = log(x).
The random variable N^k_t − k follows a Poisson distribution with mean νt, so that (4.17) holds.