UCF Math Department's Probability and Statistics Seminar Fall 2008

Fridays in MAP 213 from 2:30 PM - 3:30 PM, unless otherwise noted.

Organized by Jason Swanson

Schedule and Abstracts

Friday, September 12
Marianna Pensky, University of Central Florida
Functional wavelet deconvolution in a periodic setting

We extend deconvolution in a periodic setting to deal with functional data. The resulting functional deconvolution model can be viewed as a generalization of a multitude of inverse problems in mathematical physics where one needs to recover initial or boundary conditions on the basis of observations from a noisy solution of a partial differential equation. When the functional data are observed at only a finite number of distinct points, the proposed functional deconvolution model can also be viewed as a multichannel deconvolution model.
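As an illustrative sketch (the notation here is ours, not necessarily the speaker's), a periodic functional deconvolution model of this type can be written as

```latex
y(u,t) = \int_0^1 g(u, t - x)\, f(x)\, dx + \varepsilon\, z(u,t),
\qquad t \in [0,1],\ u \in U,
```

where the convolution is periodic in $t$, $g$ is the blurring function, $z$ is a Gaussian white noise, and $\varepsilon$ is the noise level. Observing $y(u_l, \cdot)$ at finitely many points $u_1, \dots, u_n$ recovers the multichannel deconvolution setting mentioned above.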

We derive minimax lower bounds for the $L^2$-risk of an estimator of the unknown response function $f(\cdot)$ in the proposed functional deconvolution model when $f(\cdot)$ is assumed to belong to a Besov ball and the blurring function is assumed to possess some smoothness properties. Furthermore, we propose an adaptive block thresholding wavelet estimator of $f(\cdot)$ that is asymptotically optimal (in the minimax sense), or near-optimal within a logarithmic factor, in a wide range of Besov balls.

In addition, we consider a discretization of the proposed functional deconvolution model and investigate when the availability of continuous data gives an advantage over observations at an asymptotically large number of points. As an illustration, we discuss particular examples for both the continuous and discrete settings.

Friday, September 19
Marianna Pensky, University of Central Florida
Functional wavelet deconvolution in a periodic setting, Part 2

A continuation of last week's talk.

Friday, September 26
Jiongmin Yong, University of Central Florida
Optimal Stopping Problems

In this talk, some results on optimal stopping problems will be presented.

We will start with a deterministic problem. When the payoff function is merely known to be a continuous/differentiable function, the problem reduces to a simple maximization problem in calculus. When the payoff functional depends on a state process driven by an ordinary differential equation, then by Bellman's dynamic programming method, we will end up with a first-order partial differential variational inequality (PDVI, for short) to be satisfied by the value function of the problem. The notion of viscosity solution will be introduced, and it turns out that the value function is the unique viscosity solution of the PDVI. Once the value function is determined, an optimal stopping time can be constructed.
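As a minimal sketch of the deterministic case on a time grid (the function names and grid here are our own illustration, not the speaker's): with no running state, stopping optimally just means maximizing the payoff $g$, and the backward recursion below is the discrete analogue of the variational inequality $\min(-V'(t),\, V(t) - g(t)) = 0$.

```python
# Deterministic optimal stopping on a time grid (illustrative sketch).
# With no running state, V(t) = sup_{s >= t} g(s), so stopping reduces
# to maximizing the payoff; the backward recursion
#   V(t_k) = max(g(t_k), V(t_{k+1}))
# is the discrete analogue of the variational inequality.

def value_and_stopping(g, times):
    """Backward recursion for the value function; returns V(t_0) and
    the first time at which stopping is optimal (where V = g)."""
    v = [0.0] * len(times)
    v[-1] = g(times[-1])
    for k in range(len(times) - 2, -1, -1):
        v[k] = max(g(times[k]), v[k + 1])
    # an optimal stopping time: first grid point where V equals the payoff
    tau = next(t for t, vk in zip(times, v) if vk == g(t))
    return v[0], tau
```

For the payoff g(t) = t(1 - t) on [0, 1], the recursion recovers the calculus answer: stop at t = 1/2 with value 1/4.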

In the stochastic case, if the payoff is just an adapted continuous process, then the optimal stopping time can be characterized by means of the so-called Snell envelope and supermartingale theory. If the payoff instead depends on a state process which is the solution to a stochastic differential equation (SDE, for short) with deterministic coefficients, then by Bellman's dynamic programming method, we will end up with a second-order (possibly degenerate) PDVI to be satisfied by the value function. Viscosity solutions can again be used to characterize the value function, and the optimal stopping time can then be determined.
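To make the Snell envelope concrete, here is a toy discrete-time illustration of our own (not part of the talk): on a finite binomial tree, the Snell envelope is the smallest supermartingale dominating the payoff, computed by the backward recursion $U_n = \max(Z_n,\, E[U_{n+1} \mid \mathcal{F}_n])$; the first time $U_n = Z_n$ is an optimal stopping time.

```python
# Snell envelope on a recombining binomial tree (illustrative sketch).
# Payoff Z_n = max(K - S_n, 0) (an American put); U_n is computed by
#   U_n = max(Z_n, p * U_{n+1}^{up} + (1 - p) * U_{n+1}^{down}).

def snell_envelope(s0, up, down, p, strike, steps):
    """Value U_0 of the Snell envelope of the put payoff at the root."""
    # terminal payoffs Z_N, indexed by the number j of up moves
    prices = [s0 * up**j * down**(steps - j) for j in range(steps + 1)]
    u = [max(strike - s, 0.0) for s in prices]
    # backward induction: take the max of stopping now vs. continuing
    for n in range(steps - 1, -1, -1):
        prices = [s0 * up**j * down**(n - j) for j in range(n + 1)]
        u = [max(max(strike - prices[j], 0.0),
                 p * u[j + 1] + (1 - p) * u[j])
             for j in range(n + 1)]
    return u[0]
```

By construction U_0 dominates the immediate payoff Z_0, reflecting the supermartingale property of the Snell envelope.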

Finally, if the payoff depends on a state process which is the solution to some SDE with random coefficients, the problem becomes more challenging. By applying Bellman's principle of optimality, we will end up with a backward stochastic partial differential variational inequality (BSPDVI, for short) to be satisfied by the value function. We will establish the well-posedness of the BSPDVI and identify its solution with the value function of the problem. An optimal stopping time can then be determined.

At the end of the talk, some open problems will be posed.

Friday, October 3
Jiongmin Yong, University of Central Florida
Optimal Stopping Problems, Part 2

A continuation of last week's talk.

Friday, October 10: No seminar

Friday, October 17: No seminar

Friday, October 24: No seminar

Friday, October 31
Gary Richardson, University of Central Florida
A Spatial Model: Long Memory

Partial sums (properly normalized) of the error structure are embedded in the function space $D([0,1] \times [0,1])$. Sufficient conditions are given in order for the sequence of partial sums to converge in distribution to a fractional Brownian motion process on the unit square. This asymptotic result can be used to develop a test for unit roots in the spatial model. All the results are preliminary.
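For intuition about the long-memory limit, here is a sketch in the one-parameter case (the talk's limit process is indexed by the unit square): fractional Brownian motion $B_H$ with Hurst index $H > 1/2$ has positively correlated increments over disjoint intervals, which is the hallmark of long memory. Its covariance is $E[B_H(t) B_H(s)] = \tfrac{1}{2}(t^{2H} + s^{2H} - |t - s|^{2H})$.

```python
# Covariance structure of one-parameter fractional Brownian motion
# (illustrative sketch; the limit in the talk lives on [0,1] x [0,1]).

def fbm_cov(t, s, H):
    """E[B_H(t) B_H(s)] for fractional Brownian motion with Hurst index H."""
    return 0.5 * (t**(2 * H) + s**(2 * H) - abs(t - s)**(2 * H))

def increment_cov(H, a, b, c, d):
    """Cov(B_H(b) - B_H(a), B_H(d) - B_H(c)) for a <= b <= c <= d.

    Positive for H > 1/2 (long memory), zero for H = 1/2 (Brownian motion,
    independent increments), negative for H < 1/2.
    """
    return (fbm_cov(b, d, H) - fbm_cov(b, c, H)
            - fbm_cov(a, d, H) + fbm_cov(a, c, H))
```

For example, with H = 0.75 the increments over [0, 1] and [1, 2] are positively correlated, while with H = 0.5 the same covariance vanishes.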