Audience requirements: high school math; the previous blog post. Programming knowledge is not needed, but it helps in appreciating the code examples. If it gets to be too much, skip to the summary section at the end.

# Problem

In the Rowboat Advisors investing engine, we often use absolute values. For example, we are indifferent between holding 1% more vs. 1% less of an asset than its target; both are equally undesirable. Expressing an absolute value in a linear program requires a trick (see below), but requiring the rest of the code to be aware of that trick would make it too complicated.

# Absolute values: the standard trick

General linear programs (LPs) only let you specify linear expressions (such as 2*x + 3*y); an absolute value is not a linear expression. However, there is a well-known trick for minimizing^{1} absolute values in a linear program.

Say you want to minimize |x| subject to -5 <= x <= -3.

Instead, rewrite as follows:

Minimize:

- x_{pos} + x_{neg}

subject to:

- -5 <= x <= -3
- x = x_{pos} – x_{neg}
- x_{pos} >= 0
- x_{neg} >= 0

x_{pos} represents the positive part (or 0) of x; x_{neg} the negative. At most one of them can be non-0.

- If x = 3, then x_{pos} = 3, x_{neg} = 0 (though this violates constraint #1, so it can’t happen)
- If x = 0, then x_{pos} = 0, x_{neg} = 0
- If x = -4, then x_{pos} = 0, x_{neg} = 4

This trick works because the solver will push both x_{pos} and x_{neg} to be as small as possible. In this example, minimizing |x| subject to -5 <= x <= -3 should intuitively give us x = -3. There are infinitely many combinations of x_{pos} – x_{neg} that give -3 (e.g. 7 – 10, 97 – 100, etc.), but the combination with the smallest objective function value (x_{pos} + x_{neg}) is x_{pos} = 0, x_{neg} = 3.
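To make this concrete, here is a small standalone sketch (plain Java, not the Rowboat infrastructure; the class and method names are made up for illustration). It mimics what the solver does: for each candidate x it takes the cheapest split into positive and negative parts, then keeps the x in [-5, -3] with the smallest x_{pos} + x_{neg}:

```java
public class AbsTrickDemo {
    // Split x into its positive and negative parts:
    // x = pos - neg and |x| = pos + neg, with at most one of the two non-zero.
    static double[] decompose(double x) {
        return new double[] { Math.max(x, 0.0), Math.max(-x, 0.0) };
    }

    public static void main(String[] args) {
        // Scan a grid over [-5, -3] -- a stand-in for what the LP solver does
        // analytically -- and keep the x that minimizes the objective pos + neg.
        double bestX = Double.NaN;
        double bestObj = Double.POSITIVE_INFINITY;
        for (int i = 0; i <= 2000; i++) {
            double x = -5.0 + i * 0.001;
            double[] pn = decompose(x);
            double obj = pn[0] + pn[1]; // equals |x|
            if (obj < bestObj) {
                bestObj = obj;
                bestX = x;
            }
        }
        System.out.printf("x = %.3f, |x| = %.3f%n", bestX, bestObj);
    }
}
```

The scan lands on x = -3 with objective 3, matching the intuition above.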

# Solution #1: uses 3 variables

This is the Java code that solves^{2} “minimize |x| subject to -5 <= x <= -3”; the solution is x = -3, implying |x| = 3.

```java
@Test
public void blogPostExample1() {
  HighLevelLPBuilder builder = makeBuilder();
  Variable x = builder.addConstrainedRawVariable("x", Range.closed(-5.0, -3.0));
  AbsoluteValueSuperVars absX = absoluteValueSuperVarsGenerator.generateDumbAbsoluteValueSuperVars(
      "abs(x)", singleVarExpression(x), builder);
  LinearOptimizationProgram lp = builder
      .withHumanReadableLabel(label("minimize |x| subject to -5 <= x <= -3"))
      .setObjectiveFunction(simpleLinearObjectiveFunction(singleVarExpression(absX.getAbsoluteValue())))
      .build();
  AllVariablesAndDoubles solution = optimizer.minimize(lp).get().getVariablesAndOptimalValues();
  BiConsumer asserter = (expected, var) ->
      assertEquals(expected, evaluator.evaluateHighLevelVar(var, solution), 1e-8);
  asserter.accept(-3.0, x);
  asserter.accept(3.0, absX.getAbsoluteValue());
}
```

This is the raw LP that ends up getting passed to the solver after the translation step. It uses 3 variables: x, x_{pos}, x_{neg}.

```
[LOPI minimize |x| subject to -5 <= x <= -3
supervars: [[x0:x], [x1:abs(x)+], [x2:abs(x)-]]
minimize:
  [x1:abs(x)+] + [x2:abs(x)-]
subject to constraints:
  abs(x) = abs(x)+ - abs(x)- : -0.0000000000 == + [x0:x] - [x1:abs(x)+] + [x2:abs(x)-]
variable ranges:
  x [-5.0‥-3.0]
  abs(x)+ [0.0‥+∞)
  abs(x)- [0.0‥+∞)
Initial point: Optional.empty
LOPI]
```

This is great – the code can use absolute values without needing to know about this positive / negative trick. However, we can do better.

# Solution #2: uses 2 variables

If we know the range of the expression inside the absolute value, we can give the infrastructure a hint. For example, -5 <= x <= -3 means we’ll never need x_{pos}, since x is always negative.

```java
@Test
public void blogPostExample2() {
  HighLevelLPBuilder builder = makeBuilder();
  Variable x = builder.addRawVariable("x");
  AbsoluteValueSuperVars absX = absoluteValueSuperVarsGenerator.generateSmartAbsoluteValueSuperVars(
      "abs(x)", singleVarExpression(x), builder, Range.closed(-5.0, -3.0));
  LinearOptimizationProgram lp = builder
      .withHumanReadableLabel(label("minimize |x| subject to -5 <= x <= -3"))
      .setObjectiveFunction(simpleLinearObjectiveFunction(singleVarExpression(absX.getAbsoluteValue())))
      .build();
  AllVariablesAndDoubles solution = optimizer.minimize(lp).get().getVariablesAndOptimalValues();
  BiConsumer asserter = (expected, var) ->
      assertEquals(expected, evaluator.evaluateHighLevelVar(var, solution), 1e-8);
  asserter.accept(-3.0, x);
  asserter.accept(3.0, absX.getAbsoluteValue());
}
```

The raw LP now uses 2 variables instead of 3: x and x_{neg}. x_{pos} is not needed.

```
[LOPI minimize |x| subject to -5 <= x <= -3
supervars: [[x0:x], [x1:abs(x)-]]
minimize:
  [x1:abs(x)-]
subject to constraints:
  abs(x) = - abs(x)- (no pos) : -0.0000000000 == + [x0:x] + [x1:abs(x)-]
variable ranges:
  x (-∞‥+∞)
  abs(x)- [3.0‥5.0]
Initial point: Optional.empty
LOPI]
```
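The range hint boils down to a simple sign analysis. Here is a hypothetical sketch of that decision (the method name is made up; the real infrastructure’s API differs):

```java
public class RangeHintDemo {
    // Given bounds on the expression inside |...|, decide which helper
    // variables the translation actually needs to create.
    static String neededHelperVars(double lower, double upper) {
        if (lower >= 0.0) return "pos only";  // expression can never be negative
        if (upper <= 0.0) return "neg only";  // expression can never be positive
        return "pos and neg";                 // sign unknown; need both
    }

    public static void main(String[] args) {
        // For -5 <= x <= -3, x is always negative: x_pos can be dropped, and
        // x_neg = -x inherits the mirrored range [3, 5].
        System.out.println(neededHelperVars(-5, -3)); // neg only
        System.out.println(neededHelperVars(2, 7));   // pos only
        System.out.println(neededHelperVars(-1, 1));  // pos and neg
    }
}
```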

That's better; fewer variables generally make things run faster. However, we can do even better.

# Solution #3: uses only 1 variable

Usually, we are taking the absolute value of an existing expression (e.g. 2*y + 3*z + 4*w), with y, z and w appearing elsewhere in the LP. However, in this case, we don’t even need to create a separate variable for x itself. When we create an absolute value using our infrastructure, we also get the positive part, the negative part, and the signed value (positive minus negative) for free. So we can just refer to the signed value of the abs(x) variable, i.e. x_{pos} – x_{neg}, instead of x itself. Here is an example.

```java
@Test
public void blogPostExample3() {
  HighLevelLPBuilder builder = makeBuilder();
  AbsoluteValueSuperVars absX = absoluteValueSuperVarsGenerator.generateAbsoluteValueSuperVarsOfSingleVariable(
      "abs(x)", builder, Range.closed(-5.0, -3.0));
  LinearOptimizationProgram lp = builder
      .withHumanReadableLabel(label("minimize |x| subject to -5 <= x <= -3"))
      .setObjectiveFunction(simpleLinearObjectiveFunction(singleVarExpression(absX.getAbsoluteValue())))
      .build();
  System.out.println(lp);
  AllVariablesAndDoubles solution = optimizer.minimize(lp).get().getVariablesAndOptimalValues();
  BiConsumer asserter = (expected, var) ->
      assertEquals(expected, evaluator.evaluateHighLevelVar(var, solution), 1e-8);
  asserter.accept(-3.0, absX.getSigned());
  asserter.accept(3.0, absX.getAbsoluteValue());
}
```

The raw LP now uses only x_{neg}. x and x_{pos} are not needed. That’s even more efficient.

```
[LOPI minimize |x| subject to -5 <= x <= -3
supervars: [[x0:abs(x)-]]
minimize:
  [x0:abs(x)-]
subject to constraints:
variable ranges:
  abs(x)- [3.0‥5.0]
Initial point: Optional.empty
LOPI]
```
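At this point almost nothing is left for the solver to do. Here is a plain-Java sketch (not the actual infrastructure) of what remains, under the assumptions above:

```java
public class SignedValueDemo {
    public static void main(String[] args) {
        // Under the hint -5 <= x <= -3, the only variable left is x_neg in [3, 5].
        // Minimizing drives it to its lower bound.
        double xNeg = 3.0; // lower end of [3.0, 5.0]
        // Both views of x are then derived for free: the positive part is fixed
        // at 0, so signed = pos - neg and abs = pos + neg.
        double pos = 0.0;
        double signed = pos - xNeg;
        double abs = pos + xNeg;
        System.out.println("signed = " + signed + ", abs = " + abs);
    }
}
```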

# Minor bonus: absolute value can appear in constraint

Besides minimizing an absolute value, we can also use it in a constraint.

Here is a variation with the added constraint |x| > 3.5. The solution is now x = -3.5, which gives |x| = 3.5.

```java
@Test
public void blogPostExample4() {
  HighLevelLPBuilder builder = makeBuilder();
  AbsoluteValueSuperVars absX = absoluteValueSuperVarsGenerator.generateAbsoluteValueSuperVarsOfSingleVariable(
      "abs(x)", builder, Range.closed(-5.0, -3.0));
  LinearOptimizationProgram lp = builder
      .withHumanReadableLabel(label("minimize |x| subject to -5 <= x <= -3 and |x| > 3.5"))
      .addConstraint("|x| > 3.5", absX.getAbsoluteValue().getHighLevelVarExpression(), GREATER_THAN_SCALAR, 3.5)
      .setObjectiveFunction(simpleLinearObjectiveFunction(singleVarExpression(absX.getAbsoluteValue())))
      .build();
  System.out.println(lp);
  AllVariablesAndDoubles solution = optimizer.minimize(lp).get().getVariablesAndOptimalValues();
  BiConsumer asserter = (expected, var) ->
      assertEquals(expected, evaluator.evaluateHighLevelVar(var, solution), 1e-8);
  asserter.accept(-3.5, absX.getSigned());
  asserter.accept(3.5, absX.getAbsoluteValue());
}
```

In the raw LP, note that the |x| > 3.5 constraint gets automatically translated to be in terms of x_{neg} (which shows up below as **[x0:abs(x)-]**).

```
[LOPI minimize |x| subject to -5 <= x <= -3 and |x| > 3.5
supervars: [[x0:abs(x)-]]
minimize:
  [x0:abs(x)-]
subject to constraints:
  |x| > 3.5 : 3.5000000000 < + [x0:abs(x)-]
variable ranges:
  abs(x)- [3.0‥5.0]
Initial point: Optional.empty
LOPI]
```
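Conceptually, the translated constraint just raises the lower bound of x_{neg}. A minimal sketch of that bound arithmetic (hypothetical; not the actual translation code):

```java
public class TightenedBoundDemo {
    public static void main(String[] args) {
        // x_neg starts with the range [3, 5] implied by -5 <= x <= -3.
        double lower = 3.0;
        double upper = 5.0;
        // The high-level constraint |x| > 3.5, rewritten in terms of x_neg,
        // effectively raises x_neg's lower bound.
        lower = Math.max(lower, 3.5);
        if (lower > upper) {
            throw new IllegalStateException("infeasible");
        }
        // Minimizing x_neg over the tightened range [3.5, 5] gives the answer.
        double xNeg = lower;
        System.out.println("x = " + (-xNeg) + ", |x| = " + xNeg);
    }
}
```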

# Major bonus: simplicity

The main benefit of all the examples above (even the less efficient ones) is that the code does not need to be aware of all this trickery, so it can be simpler. It doesn’t need to know that the underlying infrastructure has created x_{pos} and x_{neg}. It also does not need to make a distinction between the efficiency improvements described in steps 2 and 3; that would require the code to know whether none, one, or both of x_{pos} and x_{neg} have been created.

This was an actual problem for us in practice; we built this infrastructure to address it.

# Summary

Minimizing absolute values is not directly supported by the conceptual model of an LP. The Rowboat Advisors infrastructure enables it, while hiding from the developer how it actually happens. Given light guidance in certain special cases, it is also intelligent enough to generate fewer variables, which makes the optimization run faster.

The end result is that, by reducing complexity, this lets us invest better. Without it, jointly considering the tradeoffs between different improvements to a portfolio (e.g. lower taxes, lower fees, matching a target portfolio) becomes unmanageable in practice.

# Notes

1. The trick does not work when maximizing an absolute value, or (equivalently) when an absolute value appears with a negative coefficient in the objective function: with x fixed, the solver could grow x_{pos} and x_{neg} together without bound.
2. The examples in this post all take the absolute value of a single variable (|x|), but our infrastructure lets us use the absolute value of any expression.
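A quick demonstration of why the first note holds (plain Java, made-up class name): with x fixed, x_{pos} + x_{neg} has no upper bound, so a solver asked to maximize it would diverge.

```java
public class MaximizeBreaksTrickDemo {
    public static void main(String[] args) {
        // With x fixed at -3, every pair (pos, neg) = (t, t + 3) with t >= 0
        // satisfies x = pos - neg. Minimizing pos + neg drives t to 0, which
        // correctly recovers |x| = 3; maximizing pos + neg is unbounded.
        for (double t : new double[] { 0.0, 10.0, 1000.0 }) {
            double pos = t;
            double neg = t + 3.0;
            System.out.println("pos - neg = " + (pos - neg) + ", pos + neg = " + (pos + neg));
        }
    }
}
```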
