Rowboat Advisors optimization infrastructure, part 3: approximating non-linear functions

Rowboat Advisors has built infrastructure to make it tremendously easier to invest a portfolio well. Part 3 describes how to approximate non-linear functions.

Audience requirements: high school math, plus familiarity with linear programming from the previous blog post. Programming knowledge is not needed, but it helps in appreciating the code examples. If it gets to be too much, skip to the summary section at the end.

Example

Assume¹ you want to minimize x² + 4y² subject to x + y = 1 and 0 ≤ x, y ≤ 1. The optimal solution² for (x, y) is (0.8, 0.2). The following code uses our infrastructure and comes up with the answer (0.792, 0.208), which is quite close.

  @Test
  public void blogPostExample_testSum_twoVariables_unequalWeights_lowerBoundForVarsIsZero() {
    HighLevelLPBuilder lpBuilder = makeRealHighLevelLPBuilder()
        .withHumanReadableLabel(label("minimize x^2 + 4*y^2 subject to x + y = 1"));
    // Approximate v -> v^2 for each variable on [0, 1]. The last three arguments are the
    // initial segment length (0.01), the geometric growth multiplier (1.2; each segment
    // is 20% longer than the previous one), and the function being approximated.
    LinearApproximationVars x = generateVars(lpBuilder, "x", Range.closed(0.0, 1.0), 0.01, 1.2, v -> v * v);
    LinearApproximationVars y = generateVars(lpBuilder, "y", Range.closed(0.0, 1.0), 0.01, 1.2, v -> v * v);
    LinearOptimizationProgram lp = lpBuilder
        .setObjectiveFunction(simpleLinearObjectiveFunction(
            highLevelVarExpression(1, x.getApproximatedNonLinearPart(), 4, y.getApproximatedNonLinearPart())))
        .addConstraint(
            "x + y = 1", sumOfDisjointHighLevelVars(x.getLinearPart(), y.getLinearPart()), EQUAL_TO_SCALAR, 1.0)
        .build();
    AllVariablesAndDoubles variablesAndOptimalValues =
        makeRealLpSolveLinearOptimizer().minimize(lp).get().getVariablesAndOptimalValues();
    System.out.println(lp);
    // 'evaluator' is a field of the test class (not shown in this snippet).
    ToDoubleFunction<HighLevelVar> evaluate = var -> evaluator.evaluateHighLevelVar(var, variablesAndOptimalValues);
    double finalX = evaluate.applyAsDouble(x.getLinearPart());
    double finalY = evaluate.applyAsDouble(y.getLinearPart());
    double COARSE_EPSILON = 0.025;
    assertEquals(0.8, finalX, COARSE_EPSILON);
    assertEquals(0.2, finalY, COARSE_EPSILON);
    // Even though x and y will be approximate, their sum will not be, since we have the constraint x + y = 1
    assertEquals(1, finalX + finalY, 1.01 * 1e-8);
    assertEquals(doubleExplained(0.64, 0.8 * 0.8), evaluate.applyAsDouble(x.getApproximatedNonLinearPart()), COARSE_EPSILON);
    assertEquals(doubleExplained(0.04, 0.2 * 0.2), evaluate.applyAsDouble(y.getApproximatedNonLinearPart()), COARSE_EPSILON);
  }

Our infrastructure translates this into the raw linear program below. You don’t have to understand the details, but you may be able to see that it uses 17 line segments to approximate x² (plus another 17 for y²) on the interval from 0 to 1.

[LOPI minimize x^2 + 4*y^2 subject to x + y = 1 supervars: [[x0:x_0.0000_to_0.0100], [x1:x_0.0100_to_0.0220], [x2:x_0.0220_to_0.0364], [x3:x_0.0364_to_0.0537], [x4:x_0.0537_to_0.0744], [x5:x_0.0744_to_0.0993], [x6:x_0.0993_to_0.1292], [x7:x_0.1292_to_0.1650], [x8:x_0.1650_to_0.2080], [x9:x_0.2080_to_0.2596], [x10:x_0.2596_to_0.3215], [x11:x_0.3215_to_0.3958], [x12:x_0.3958_to_0.4850], [x13:x_0.4850_to_0.5920], [x14:x_0.5920_to_0.7204], [x15:x_0.7204_to_0.8744], [x16:x_0.8744_to_1.0000], [x17:y_0.0000_to_0.0100], [x18:y_0.0100_to_0.0220], [x19:y_0.0220_to_0.0364], [x20:y_0.0364_to_0.0537], [x21:y_0.0537_to_0.0744], [x22:y_0.0744_to_0.0993], [x23:y_0.0993_to_0.1292], [x24:y_0.1292_to_0.1650], [x25:y_0.1650_to_0.2080], [x26:y_0.2080_to_0.2596], [x27:y_0.2596_to_0.3215], [x28:y_0.3215_to_0.3958], [x29:y_0.3958_to_0.4850], [x30:y_0.4850_to_0.5920], [x31:y_0.5920_to_0.7204], [x32:y_0.7204_to_0.8744], [x33:y_0.8744_to_1.0000]]
  minimize:
0.01 [x0:x_0.0000_to_0.0100] + 0.032 [x1:x_0.0100_to_0.0220] + 0.058399999999999994 [x2:x_0.0220_to_0.0364] + 0.09008000000000001 [x3:x_0.0364_to_0.0537] + 0.12809600000000002 [x4:x_0.0537_to_0.0744] + 0.17371520000000004 [x5:x_0.0744_to_0.0993] + 0.22845824 [x6:x_0.0993_to_0.1292] + 0.294149888 [x7:x_0.1292_to_0.1650] + 0.37297986559999996 [x8:x_0.1650_to_0.2080] + 0.46757583871999986 [x9:x_0.2080_to_0.2596] + 0.581091006464 [x10:x_0.2596_to_0.3215] + 0.7173092077567997 [x11:x_0.3215_to_0.3958] + 0.8807710493081599 [x12:x_0.3958_to_0.4850] + 1.0769252591697915 [x13:x_0.4850_to_0.5920] + 1.3123103110037504 [x14:x_0.5920_to_0.7204] + 1.5947723732044994 [x15:x_0.7204_to_0.8744] + 1.8744212944751821 [x16:x_0.8744_to_1.0000] + const(-5.06116518) + 4.0 0.01 [x17:y_0.0000_to_0.0100] + 0.032 [x18:y_0.0100_to_0.0220] + 0.058399999999999994 [x19:y_0.0220_to_0.0364] + 0.09008000000000001 [x20:y_0.0364_to_0.0537] + 0.12809600000000002 [x21:y_0.0537_to_0.0744] + 0.17371520000000004 [x22:y_0.0744_to_0.0993] + 0.22845824 [x23:y_0.0993_to_0.1292] + 0.294149888 [x24:y_0.1292_to_0.1650] + 0.37297986559999996 [x25:y_0.1650_to_0.2080] + 0.46757583871999986 [x26:y_0.2080_to_0.2596] + 0.581091006464 [x27:y_0.2596_to_0.3215] + 0.7173092077567997 [x28:y_0.3215_to_0.3958] + 0.8807710493081599 [x29:y_0.3958_to_0.4850] + 1.0769252591697915 [x30:y_0.4850_to_0.5920] + 1.3123103110037504 [x31:y_0.5920_to_0.7204] + 1.5947723732044994 [x32:y_0.7204_to_0.8744] + 1.8744212944751821 [x33:y_0.8744_to_1.0000] + const(-5.06116518)
  subject to constraints:
x + y = 1 :    9.8930555337 ==  + [x0:x_0.0000_to_0.0100] + [x1:x_0.0100_to_0.0220] + [x2:x_0.0220_to_0.0364] + [x3:x_0.0364_to_0.0537] + [x4:x_0.0537_to_0.0744] + [x5:x_0.0744_to_0.0993] + [x6:x_0.0993_to_0.1292] + [x7:x_0.1292_to_0.1650] + [x8:x_0.1650_to_0.2080] + [x9:x_0.2080_to_0.2596] + [x10:x_0.2596_to_0.3215] + [x11:x_0.3215_to_0.3958] + [x12:x_0.3958_to_0.4850] + [x13:x_0.4850_to_0.5920] + [x14:x_0.5920_to_0.7204] + [x15:x_0.7204_to_0.8744] + [x16:x_0.8744_to_1.0000] + [x17:y_0.0000_to_0.0100] + [x18:y_0.0100_to_0.0220] + [x19:y_0.0220_to_0.0364] + [x20:y_0.0364_to_0.0537] + [x21:y_0.0537_to_0.0744] + [x22:y_0.0744_to_0.0993] + [x23:y_0.0993_to_0.1292] + [x24:y_0.1292_to_0.1650] + [x25:y_0.1650_to_0.2080] + [x26:y_0.2080_to_0.2596] + [x27:y_0.2596_to_0.3215] + [x28:y_0.3215_to_0.3958] + [x29:y_0.3958_to_0.4850] + [x30:y_0.4850_to_0.5920] + [x31:y_0.5920_to_0.7204] + [x32:y_0.7204_to_0.8744] + [x33:y_0.8744_to_1.0000]
  variable ranges:
x_0.0000_to_0.0100 [0.0‥0.01)
x_0.0100_to_0.0220 [0.01‥0.022)
x_0.0220_to_0.0364 [0.022‥0.0364)
x_0.0364_to_0.0537 [0.0364‥0.053680000000000005)
x_0.0537_to_0.0744 [0.053680000000000005‥0.07441600000000001)
x_0.0744_to_0.0993 [0.07441600000000001‥0.0992992)
x_0.0993_to_0.1292 [0.0992992‥0.12915904)
x_0.1292_to_0.1650 [0.12915904‥0.164990848)
x_0.1650_to_0.2080 [0.164990848‥0.2079890176)
x_0.2080_to_0.2596 [0.2079890176‥0.25958682112)
x_0.2596_to_0.3215 [0.25958682112‥0.32150418534399994)
x_0.3215_to_0.3958 [0.32150418534399994‥0.3958050224127999)
x_0.3958_to_0.4850 [0.3958050224127999‥0.4849660268953599)
x_0.4850_to_0.5920 [0.4849660268953599‥0.5919592322744318)
x_0.5920_to_0.7204 [0.5919592322744318‥0.7203510787293181)
x_0.7204_to_0.8744 [0.7203510787293181‥0.8744212944751818)
x_0.8744_to_1.0000 [0.8744212944751818‥1.0]
y_0.0000_to_0.0100 [0.0‥0.01)
y_0.0100_to_0.0220 [0.01‥0.022)
y_0.0220_to_0.0364 [0.022‥0.0364)
y_0.0364_to_0.0537 [0.0364‥0.053680000000000005)
y_0.0537_to_0.0744 [0.053680000000000005‥0.07441600000000001)
y_0.0744_to_0.0993 [0.07441600000000001‥0.0992992)
y_0.0993_to_0.1292 [0.0992992‥0.12915904)
y_0.1292_to_0.1650 [0.12915904‥0.164990848)
y_0.1650_to_0.2080 [0.164990848‥0.2079890176)
y_0.2080_to_0.2596 [0.2079890176‥0.25958682112)
y_0.2596_to_0.3215 [0.25958682112‥0.32150418534399994)
y_0.3215_to_0.3958 [0.32150418534399994‥0.3958050224127999)
y_0.3958_to_0.4850 [0.3958050224127999‥0.4849660268953599)
y_0.4850_to_0.5920 [0.4849660268953599‥0.5919592322744318)
y_0.5920_to_0.7204 [0.5919592322744318‥0.7203510787293181)
y_0.7204_to_0.8744 [0.7203510787293181‥0.8744212944751818)
y_0.8744_to_1.0000 [0.8744212944751818‥1.0]

Initial point:
Optional.empty
LOPI]

As accuracy increases (i.e. as we use more approximation variables), we will get closer and closer to (0.8, 0.2). The downside is that the problem will take longer to solve.
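For instance – reusing the generateVars call from the test above, whose last three arguments appear to be the initial segment length, the geometric growth multiplier, and the function being approximated – a finer-grained setup might look like the following sketch (not a tested configuration; the exact accuracy/speed tradeoff depends on the solver):

  // Sketch only: a smaller initial segment (0.001) and gentler growth (1.1)
  // produce many more segments, so the optimum should land closer to (0.8, 0.2),
  // at the cost of a larger LP and a longer solve.
  LinearApproximationVars x = generateVars(lpBuilder, "x", Range.closed(0.0, 1.0), 0.001, 1.1, v -> v * v);
  LinearApproximationVars y = generateVars(lpBuilder, "y", Range.closed(0.0, 1.0), 0.001, 1.1, v -> v * v);
  // ... then build the objective and the x + y = 1 constraint, and minimize, exactly as before.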

Discussion

Here is a visual way of looking at this:

[Figure: linear approximation of x² by 17 line segments]

The chart above looks like a parabola – i.e. the graph of the function x² – but it is not an exact one. If you look very closely, you will notice it is not a smooth curve; it consists of 17 line segments, each one mapping to an intermediate variable in the raw LP.
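To see concretely what “each line segment maps to a variable” means, here is a small self-contained sketch (plain Java, not part of our library; the helper name is hypothetical) that evaluates a chord-based approximation of x² on a given list of segment boundaries:

  // Sketch only: piecewise-linear approximation of f(x) = x^2, where each
  // segment [lo, hi) is replaced by the chord from (lo, lo^2) to (hi, hi^2).
  static double approximateSquare(double x, double[] boundaries) {
    for (int i = 0; i + 1 < boundaries.length; i++) {
      double lo = boundaries[i];
      double hi = boundaries[i + 1];
      if (x <= hi) {
        double chordSlope = (hi * hi - lo * lo) / (hi - lo); // simplifies to lo + hi
        return lo * lo + chordSlope * (x - lo);
      }
    }
    throw new IllegalArgumentException("x is outside the approximated interval");
  }

Because x² is convex, each chord lies on or above the curve, so the approximation slightly overestimates the true value in the interior of every segment; shorter segments shrink that overestimate.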

Segment lengths

Our infrastructure allows for any way of segmenting the interval we are trying to approximate. We could have divided the interval [0, 1] into 17 equal parts (or 13, or 10), and we’d have a different approximation. We just chose a method where the line segment lengths increase geometrically; here, they become 20% longer at each step. We did that because in our investing engine we care more about being precise when x is close to 0 – but again, this is not a requirement³.
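Here is a sketch that reproduces the segment boundaries visible in the raw LP printout above, assuming (as the printout suggests) a first segment of length 0.01 that then grows by 20% per step:

  // Sketch only: generate geometrically growing segments over [0, 1].
  double upper = 1.0;
  double start = 0.0;
  double length = 0.01;    // initial segment length
  double multiplier = 1.2; // each segment is 20% longer than the previous one
  int numSegments = 0;
  while (start < upper) {
    double end = Math.min(start + length, upper); // clip the last segment at 1.0
    System.out.printf("segment %d: [%.4f, %.4f)%n", numSegments, start, end);
    start = end;
    length *= multiplier;
    numSegments++;
  }
  System.out.println(numSegments + " segments"); // prints "17 segments", matching the LP above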

Caveats

  1. This linear approximation infrastructure only works for bounded regions (e.g. from 0 to 1 in this example). This makes sense; intuitively speaking, no line segment can track a curve of infinite length. As x increases, the approximating line’s slope stays fixed (lines have a constant slope), but the curve’s slope keeps increasing (in this example, at least), so at some point the curve’s slope gets huge, and the curve ends up way above the approximating line.
  2. This was not created with the purpose of approximating curves closely. In practice, there are cases where we want to gradually increase the penalty as we move away from an ideal point. Any function that grows faster than linearly could work (although with a different “gradualness”).
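To make caveat 2 concrete, both functions below grow faster than linearly as we move away from an ideal point, but with a different “gradualness” (an illustration only, not library code):

  // Sketch only: two possible penalty shapes around an ideal point.
  static double squaredPenalty(double actual, double ideal) {
    double d = Math.abs(actual - ideal);
    return d * d;         // mild near the ideal point, steeper farther away
  }
  static double fourthPowerPenalty(double actual, double ideal) {
    double d = Math.abs(actual - ideal);
    return d * d * d * d; // flatter near the ideal point, much steeper farther away
  }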

Summary

In the Rowboat Advisors investing engine, we often need to approximate non-linear functions of a single variable, such as the square, the square root, eˣ, etc. However, linear programming – the mathematical model we use to describe our investing optimization – does not support that.

We have created infrastructure that automates these approximations, while hiding the details from the developer.

The end result is that this enables us to invest better. Our engine jointly considers the tradeoffs between different improvements in a portfolio (e.g. lower taxes, lower fees, matching a target portfolio, etc.). This feature allows us to assign a more accurate penalty (or preference) to each such improvement.


 Notes

  1. For the special case of x²: quadratic programming is an extension of linear programming that can handle it directly, but there are fewer good solvers for such problems, and they take longer than LP solvers. At any rate, our infrastructure is even more general, as it allows for e.g. square roots. We just picked x² here because it makes for a better example. However, even though our infrastructure works for the scenarios we care about, not every function will work in general; there are certain properties a function must satisfy. This is beyond the scope of this post.
  2. Proof:
    • If you remember calculus: rewrite x² + 4y² as (1−y)² + 4y² = 5y² − 2y + 1. Set its derivative to 0: 10y − 2 = 0 ⇔ y = 0.2. Therefore x = 1 − y = 0.8.
    • If not, here’s an informal proof: using (x, y) = (0.8, 0.2) in the objective function gives 0.8² + 4*0.2² = 0.80. Let’s make x slightly bigger, to 0.81; y then has to get slightly smaller, to 0.19, since x + y = 1. However, 0.81² + 4*0.19² = 0.8005 > 0.8, so this is a worse solution. Similarly, 0.79² + 4*0.21² = 0.8005 > 0.8. (A small code sketch double-checking this arithmetic appears right after these notes.)
  3. That decision should also be affected by the curvature of the function. Intuitively speaking, the curvier a function is in a region, the more variables we should use there for the approximation to get the same precision as in a region where the function looks nearly linear. In the limit, a straight line (i.e. no curvature) can be approximated exactly by a single variable.
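For completeness, here is a minimal sketch that double-checks the arithmetic in note 2:

  // Sketch only: substitute x = 1 - y (from the constraint x + y = 1) into the objective.
  static double objective(double y) {
    double x = 1 - y;
    return x * x + 4 * y * y;
  }
  // objective(0.20) == 0.8000  (the claimed optimum)
  // objective(0.19) == 0.8005  (slightly worse)
  // objective(0.21) == 0.8005  (slightly worse)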

Author: Rowboat Advisors

Rowboat Advisors builds software for sophisticated and fully automated portfolio management for the financial advisor industry.
