This post will cover the following points:

- Background on linear programming
- What is inverse optimization?
- Applications of inverse optimization with some code

Inverse optimization is a framework for estimating the parameters of a mathematical optimization model. This might seem very abstract to folks who are not familiar with mathematical models but, in fact, inverse optimization is oriented towards solving real-life problems. In the context of this blog post, a mathematical model describes a certain area of reality using simple equations. This simplified view of reality is then used for several purposes: understanding what happened in the past, forecasting the future, and supporting decision-making.

Let us step back and recall the basic concepts of linear programming, the class of problems we focus on in this tutorial. In *traditional* optimization, the main goal is to find the optimal value of the decision variable, often denoted by “*x*”. *x* can answer questions like: “how should I organize the transportation of my goods so that I save money and fuel?”, “how do I select my investment portfolio optimally?”, or even “how do I go from A to B following the fastest route?”.

Linear problems are composed of two pieces. The first piece is the **objective function** that we are interested in maximizing, for example profit, expressed as an equation that depends on *x*. The second piece is the **constraints**, namely, a set of equations that describe our reality. Linear problems are often written as follows:

$$
\begin{aligned}
\max_{x} \quad & c^\top x \\
\text{s.t.} \quad & A x \leq b \\
& x \geq 0
\end{aligned}
$$

where **A**, **b**, and **c** are our known parameters and **x** is our set of unknown decision variables.

There are many methods to find the optimal value, the simplex algorithm and the interior-point method being the two most famous ones. Further reading on linear programming can be found in this book, or on page 23 of this PhD thesis.
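To make this concrete, here is a minimal sketch of solving a small linear problem with SciPy's `linprog` (the numbers are made up for illustration; the `highs` method dispatches to the HiGHS simplex/interior-point solvers):

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: maximize 3*x1 + 5*x2
# subject to  x1 <= 4,  2*x2 <= 12,  3*x1 + 2*x2 <= 18,  x >= 0
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# linprog minimizes by convention, so we negate c to maximize
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
print(res.x)     # optimal decision: [2. 6.]
print(-res.fun)  # optimal objective: 36.0
```

Here **A**, **b**, and **c** are all given data, and the solver returns the optimal **x**: this is the *forward* direction that inverse optimization flips around.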

In an inverse-optimization framework, the solution to the problem (the so-called **x**) is **known**. What is **unknown** are the model parameters. If you think this is too strange to ever happen in reality, in the applications section below I give a few examples of real cases where this applies.

Roughly speaking, an inverse optimization problem looks very similar to the one before:

$$
\begin{aligned}
\max \quad & c^\top x \\
\text{s.t.} \quad & A x \leq b \\
& x \geq 0
\end{aligned}
$$

with the significant difference that now **A**, **b**, and **c** are decision variables and **x** is a known parameter.

Before solving an inverse optimization problem, it needs to be reformulated, since the notation above is not mathematically well defined and cannot be understood by any optimization software. Here, we make use of a basic property of linear problems and reformulate it via its dual problem:

$$
\begin{aligned}
& A x \leq b \\
& c^\top x = b^\top y \\
& A^\top y \geq c \\
& y \geq 0
\end{aligned}
$$

The first equation is the primal feasibility of the original linear problem presented above, the second equation corresponds to its relaxed strong duality condition, and the third and fourth equations are its dual feasibility constraints. This formulation is unfortunately non-linear due to the bilinear terms $b^\top y$ and $A^\top y$, which are products of decision variables. Computationally attractive methods to solve this type of problem are given in this paper. We will not go into details here, but feel free to contact me or leave a comment.

One of the topics of my PhD thesis is forecasting the electricity consumption of a pool of price-responsive houses. The work, published in this paper, explains the theoretical approach and some of the results. The code can be found in this repository.

A market bid consists of a utility function, ramp limits, and energy bounds for every traded period. Using an inverse-optimization approach, one can treat the historical consumption/production as the “known” **x**, and the market parameters as the quantities to be estimated by the inverse model. Regressors can be used to make such estimations adaptive: the so-called *Dynamic Inverse Optimization*, as in page 30 of my PhD thesis. Further details are also explained in this paper or in this presentation given at INFORMS in Philadelphia:
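To hint at how regressors enter the picture, here is my own heavily simplified toy construction (not the formulation from the thesis or the paper): with **A** and **b** assumed known, write the per-period objective as $c_t = \Theta r_t$ for a regressor vector $r_t$, and estimate $\Theta$ by minimizing the total duality gap over all observed periods. A gap of zero means every observed decision can be rationalized. The normalization and the restriction $\Theta \geq 0$ are assumptions added to avoid the trivial zero fit:

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: known constraints, T=2 observed decisions, one regressor vector per period.
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
X = np.array([[2.0, 6.0],    # observed x in period 1
              [4.0, 3.0]])   # observed x in period 2
R = np.array([[1.0, 0.0],    # regressors (e.g. intercept + price) per period
              [1.0, 1.0]])
T, n, m, k = 2, 2, 3, 2

# Decision vector: Theta (n*k entries, row-major), then one y_t (m entries) per period.
nv = n * k + T * m
obj = np.zeros(nv)
A_ub, b_ub, A_eq, b_eq = [], [], [], []
for t in range(T):
    y0 = n * k + t * m
    # Objective: duality gap  b^T y_t - x_t^T (Theta r_t), summed over t.
    obj[y0:y0 + m] += b
    for i in range(n):
        for j in range(k):
            obj[i * k + j] -= X[t, i] * R[t, j]
    # Dual feasibility: (Theta r_t)_i - (A^T y_t)_i <= 0.
    for i in range(n):
        row = np.zeros(nv)
        row[i * k:(i + 1) * k] = R[t]
        row[y0:y0 + m] = -A[:, i]
        A_ub.append(row); b_ub.append(0.0)
    # Normalization: sum_i (Theta r_t)_i = 1 (rules out the trivial zero fit).
    row = np.zeros(nv)
    for i in range(n):
        row[i * k:(i + 1) * k] = R[t]
    A_eq.append(row); b_eq.append(1.0)

res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * nv, method="highs")
print(res.fun)  # total duality gap; ~0 means every x_t is rationalized
```

Because weak duality makes each period's gap non-negative, the objective cannot go below zero, and a zero optimum certifies that a single $\Theta$ explains all observations at once.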