A heuristic enhancement to an optimisation algorithm

Directly or indirectly, we use optimization heavily in our lives. Resource or utility optimization is something we hear about every day. Here I will discuss a standard optimization problem in statistics: maximum likelihood estimation. This blog post is a small episode from my recent work. I used the R language to implement the optimization exercise.

There are packages available for optimization in R. The most commonly used functions are optim() and optimize() in the stats package. A dedicated function, mle(), is available in the stats4 package; it is very useful for most mean-part modelling (e.g. glm). What we need for this optimization is to prepare a function for the (log-)likelihood and, optionally, the gradient (or Hessian). We can also specify the algorithm (method) as any of the following: "Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN", and "Brent". The optimization output gives the information needed for model estimation.
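As a minimal sketch of this workflow, here is optim() maximising a normal log-likelihood; the simulated data and starting values are purely illustrative assumptions on my part:

```r
# Sketch: maximum likelihood estimation of a normal mean and sd via optim().
set.seed(1)
x <- rnorm(200, mean = 2, sd = 1.5)   # illustrative simulated data

negloglik <- function(par) {
  mu <- par[1]; sigma <- par[2]
  if (sigma <= 0) return(Inf)          # keep sigma positive
  -sum(dnorm(x, mean = mu, sd = sigma, log = TRUE))
}

# optim() minimises, so we hand it the negative log-likelihood
fit <- optim(par = c(0, 1), fn = negloglik, method = "BFGS")
fit$par                                # estimates near the true (2, 1.5)
```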

I was working on an extension of the bivariate GARCH model, which includes more than 20 parameters. Let me focus on the pure GARCH model, GARCH(1,1), as a prototype to explain the algorithm. Here the model equation is in the variance part, and the parameters are constrained to ensure a positive variance. The model equations are given below.

$x_t=z_t\sqrt{h_t};z_t\sim\mathcal{N}(0,1)$
$h_t = \omega + \alpha x_{t-1}^2 + \beta h_{t-1}; \omega >0 , \alpha,\beta \geq 0$
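The likelihood for these equations can be coded directly from the variance recursion. Below is a sketch of the negative log-likelihood under Gaussian innovations; the parameter names and the choice of sample variance for the initial $h_1$ are my own assumptions:

```r
# Sketch: negative log-likelihood of a GARCH(1,1) model,
# h_t = omega + alpha * x_{t-1}^2 + beta * h_{t-1}, with z_t ~ N(0, 1).
garch11_negloglik <- function(par, x) {
  omega <- par[1]; alpha <- par[2]; beta <- par[3]
  if (omega <= 0 || alpha < 0 || beta < 0) return(Inf)  # positivity constraints
  n <- length(x)
  h <- numeric(n)
  h[1] <- var(x)                        # initialise with the sample variance
  for (t in 2:n)
    h[t] <- omega + alpha * x[t - 1]^2 + beta * h[t - 1]
  -sum(dnorm(x, mean = 0, sd = sqrt(h), log = TRUE))
}

# Usage, e.g.:  optim(c(0.1, 0.05, 0.9), garch11_negloglik, x = x)
```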

The optimization functions above may work for GARCH(1,1) estimation. But as the number of parameters increases, the built-in optimizers in R are of little use. The main reason the algorithms fail is the nonconvex likelihood surface with sensitive boundaries. So the solution is to write our own code for the Newton-Raphson algorithm, BHHH, or other modified algorithms.

I wrote the BHHH algorithm for my model, but I was not able to solve the problem fully; it did not reach the optimum. The reason is that the step direction at each iteration, once it reaches a sensitive area, is not properly scaled: it either overshoots in some dimensions or makes no significant improvement in others. So finding an appropriate multiplier (step size) along the search direction fails here.
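For context, a single BHHH iteration can be sketched as below. Here score_i() is a hypothetical helper I am assuming exists for the model at hand, returning the per-observation score (gradient of each log-likelihood contribution) at the current parameter value:

```r
# Sketch of one BHHH update: approximate the Hessian by the
# outer product of per-observation scores, then take an ascent step.
bhhh_step <- function(theta, score_i, step = 1) {
  S <- score_i(theta)                  # n x p matrix of per-observation scores
  g <- colSums(S)                      # total gradient
  B <- crossprod(S)                    # BHHH Hessian approximation, t(S) %*% S
  theta + step * solve(B, g)           # ascent step along the BHHH direction
}
```

It is exactly the choice of `step` that breaks down near the sensitive boundaries described above.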

Now let me discuss the heuristic part of the modification to the algorithm. In this heuristic, a certain number of dimensions are randomly selected and perturbed; the remaining dimensions are left unchanged. This way we can overcome the scaling issue. It gave me the optimal solution, but it is not optimized from an execution perspective: it takes slightly more time (I compared a few simple models estimated with my heuristic against the existing garch estimation).
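The random-subset perturbation can be sketched as follows; the number of perturbed dimensions k, the Gaussian perturbation, and its scale are illustrative assumptions, not the exact values used in my implementation:

```r
# Sketch of the heuristic: perturb k randomly chosen coordinates of the
# current estimate and keep the candidate only if the objective improves.
perturb_step <- function(theta, negloglik, k = 2, scale = 0.01) {
  idx <- sample(seq_along(theta), k)        # pick k dimensions at random
  cand <- theta
  cand[idx] <- cand[idx] + rnorm(k, sd = scale)
  if (negloglik(cand) < negloglik(theta)) cand else theta
}
```

Repeating this step lets the search make progress in a few coordinates at a time, sidestepping the step-size scaling problem of the full BHHH direction.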

This approach may look very silly for regression-type problems, but for GARCH-like processes such tweaking is very much required. I am still looking for a better way. Anyway, blogging helped me to revise my approach again. Please let me know if you have any suggestions.