A heuristic enhancement to an optimisation algorithm

Directly or indirectly, we rely heavily on optimisation in our lives; resource or utility optimisation is something we hear about every day. Here I will discuss a standard optimisation problem in statistics: maximum likelihood estimation. This blog post is a small episode from my recent work, and I used the R language to implement the optimisation exercise.

There are several functions available for optimisation in R. The most commonly used are optim() and optimize() in the stats package, and there is a dedicated function mle() in the stats4 package, which is very useful for most mean-part modelling (e.g. glm). What we need for this optimisation is to prepare a function for the (log-)likelihood and, optionally, its gradient (or Hessian). We can also specify the algorithm via the method argument as any of the following: "Nelder-Mead", "BFGS", "CG", "L-BFGS-B", "SANN", and "Brent". The optimisation output gives the necessary information for the model estimation.
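As a minimal sketch of this workflow (my own toy example, not from the original post), here is maximum likelihood estimation of a normal distribution's mean and standard deviation with optim(); the data x and the function name negloglik are illustrative:

```r
set.seed(1)
x <- rnorm(200, mean = 2, sd = 1.5)   # simulated data for illustration

# Negative log-likelihood of a normal sample; optim() minimises by default
negloglik <- function(par) {
  mu <- par[1]; sigma <- par[2]
  if (sigma <= 0) return(Inf)         # keep the sd positive
  -sum(dnorm(x, mean = mu, sd = sigma, log = TRUE))
}

fit <- optim(par = c(0, 1), fn = negloglik, method = "BFGS", hessian = TRUE)
fit$par                               # estimates of mu and sigma
```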

I was working on an extension of a bivariate GARCH model, which involves more than 20 parameters. Let me focus on the pure GARCH model, GARCH(1,1), as a prototype to explain the algorithm. Here the model equation is in the variance part, and the parameters are constrained to ensure a positive variance. The model equation is given below.
ε_t = σ_t z_t,  z_t ~ N(0, 1)
σ²_t = ω + α ε²_{t−1} + β σ²_{t−1},  with ω > 0, α ≥ 0, β ≥ 0 so the conditional variance stays positive.
An off-the-shelf optimisation function may work for GARCH(1,1) estimation. But as the number of parameters increases, the optim functions in R are of little use. The main reason the algorithms fail is the nonconvex likelihood surface with sensitive boundaries. So the solution is to write one's own code for the Newton-Raphson algorithm, BHHH, or other modified algorithms.
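For concreteness, here is a sketch of a Gaussian GARCH(1,1) negative log-likelihood that optim() could minimise. The variance recursion follows the equation above; the function name, the initialisation by the sample variance, and the penalty value for infeasible parameters are my assumptions, not the post's code:

```r
garch11_negloglik <- function(par, eps) {
  omega <- par[1]; alpha <- par[2]; beta <- par[3]
  if (omega <= 0 || alpha < 0 || beta < 0) return(1e10)  # positivity constraints
  n <- length(eps)
  sigma2 <- numeric(n)
  sigma2[1] <- var(eps)                  # initialise with the sample variance
  for (t in 2:n) {
    sigma2[t] <- omega + alpha * eps[t - 1]^2 + beta * sigma2[t - 1]
  }
  0.5 * sum(log(2 * pi) + log(sigma2) + eps^2 / sigma2)
}

# fit <- optim(c(0.1, 0.05, 0.9), garch11_negloglik, eps = eps,
#              method = "L-BFGS-B", lower = c(1e-6, 0, 0))
```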

I wrote a BHHH algorithm for my model, but I was not able to solve it fully; it was not reaching the optimum. The reason is that the direction at each iteration, once it reaches a sensitive area, is not properly scaled: it either overshoots in some dimensions or makes no significant improvement in others. So finding an appropriate multiplier along the optimal direction fails here.
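For readers unfamiliar with BHHH: it approximates the Hessian by the outer product of the per-observation gradients (scores) of the log-likelihood. A generic step might look like the sketch below; scores is an n × p matrix of per-observation gradients at theta, and the function name and step argument are my own illustration:

```r
bhhh_step <- function(theta, scores, step = 1) {
  g <- colSums(scores)                 # total gradient of the log-likelihood
  H <- crossprod(scores)               # outer-product-of-gradients Hessian approximation
  direction <- solve(H, g)             # ascent direction
  theta + step * direction
}
```

The scaling problem described above is exactly the choice of step here: one multiplier applied to all coordinates of the direction at once.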

Now I will discuss the heuristic part of the modification to the algorithm. In this heuristic, a certain number of dimensions are randomly selected and perturbed, while the remaining dimensions are left unchanged. This way we can overcome the scaling issue. It gave me the optimal solution, but it is not optimal from an execution perspective: it takes slightly more time (I compared a few simple models using my heuristic against the existing GARCH estimation).
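A sketch of the random-subset idea, as I understand it from the description above: at each iteration, update only a random subset of coordinates along the current search direction and hold the rest fixed. The subset size k, the use of the BHHH direction for the perturbation, and the step choice are all my assumptions:

```r
perturb_step <- function(theta, direction, k, step = 1) {
  p <- length(theta)
  idx <- sample(p, k)                  # randomly pick k dimensions to move
  theta_new <- theta
  theta_new[idx] <- theta[idx] + step * direction[idx]
  theta_new                            # remaining coordinates unchanged
}
```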

This approach may look very silly for regression-type problems, but for GARCH-like processes such tweaking is very much required. I am still looking for a better way. In any case, blogging has helped me revise my approach once again. Please let me know if you have any suggestions.
