Replacing ESG portfolio tilts with optimization

How our proprietary portfolio optimizer improves upon the industry standard approach


With Direct Indexing (DI), instead of buying a fund that tracks a stock index (such as an index ETF), we buy the individual stocks. DI can increase after-tax returns through tax loss harvesting. Another benefit, which is not limited to taxable accounts, is the ability to customize a portfolio for Environmental, Social, and Governance (ESG) concerns1. For example, a client may want to decrease holdings in high-carbon-footprint stocks relative to their weights in the index being tracked2.


The industry standard approach to improving the ESG score of a portfolio is to “tilt” the weights of stocks relative to their weight in the index. For example, if Chevron is 4% of the index, we reduce the target percentage, but not so much as to cause too much tracking error – so perhaps we settle on 3%. In general, index constituents are tilted up (for “good” stocks) or down (for “bad” stocks) as needed, and the total is normalized to add up to 100%, which effectively creates a new custom index. The tilt can also vary, so that “somewhat bad” stocks are affected less than “very bad” ones3.

Tilts have a simple qualitative explanation: “we bought less of this undesirable stock”. They also work with simple (non-optimization-based) software: you can do a regular portfolio rebalance on this new custom tilted index. Finally, tilts work well enough if we can actually get to a portfolio that resembles the tilted one.

However, that is not always possible for several reasons. Perhaps we cannot sell the overweight stocks (or buy the underweight ones) because we want to avoid wash sales or capital gains. This can be a problem when we combine DI with tax-loss harvesting, which results in more trading, and hence more situations where wash sales and capital gains must be avoided.

Sidebar – optimization

Optimization is, at its core, a rigorous way to evaluate tradeoffs between goals that may conflict. Our software currently (Jan. 2021) supports 5 such goals:

  • Tracking: the portfolio should be similar4 to a target.
  • Tax: avoid realizing capital gains; attempt to realize capital losses.
  • Transaction cost: reduce trading, and prefer trading securities that are cheaper to trade.
  • Holding cost: e.g. prefer holding ETFs with lower expense ratios.
  • ESG: prefer higher portfolio-wide (i.e. weighted by holdings) ESG scores.
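Conceptually, the optimizer combines these goals into a single objective, such as a weighted sum of per-goal penalties. The sketch below is illustrative only: the class and goal names are made up, and our actual objective is more sophisticated than a flat weighted sum.

```java
import java.util.Map;

public class ObjectiveSketch {
  // Each goal contributes a non-negative penalty; the optimizer minimizes the
  // weighted sum. A higher weight means the client cares more about that goal.
  // Goals without an explicit weight default to 1.0.
  public static double totalPenalty(Map<String, Double> goalPenalties,
                                    Map<String, Double> goalWeights) {
    double total = 0.0;
    for (Map.Entry<String, Double> e : goalPenalties.entrySet()) {
      total += goalWeights.getOrDefault(e.getKey(), 1.0) * e.getValue();
    }
    return total;
  }
}
```

For example, with tracking penalty 2.0 (weight 1.5) and tax penalty 1.0 (default weight), the total is 2.0 × 1.5 + 1.0 = 4.0.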

ESG “goals”: an improvement over tilts

Clients care about the portfolio-level ESG scores; their specific stock-level holdings are only a means to that end. Of course, one coarse way to reduce the carbon footprint of the entire portfolio is to tell the system to reduce each high-carbon-footprint stock. However, the two are not equivalent.

To use a food analogy: if adding an ESG goal to portfolio optimization is like making a meal with the best combination of cost and healthiness, tilts are like instructions to use half as much ham and ranch dressing. Tilts sound right, and usually push outcomes in the right direction, but they do not directly address the high-level goal. This can result in suboptimal outcomes that are not intuitive to foresee.


Say Chevron (CVX) and Exxon Mobil (XOM) both have a target of 4%, but have been tilted down to a target of 3% because they both have a high carbon footprint. Assume there are only two choices5:

  1. 2.6% CVX, 3.4% XOM
  2. 3.0% CVX, 3.0% XOM

A simple approach that uses tilts would always prefer #2: in #1, CVX is underweight its tilted target of 3%, and XOM is overweight it. Of course, in both portfolios, CVX and XOM are underweight their original (untilted) target of 4% by the same combined amount, so intuitively #1 and #2 should be equally desirable6.

One may say “OK, the system really prefers one of the two portfolios, but they are equivalent, so I don’t care in the end.” However, the problem becomes more evident in the presence of other – possibly conflicting – portfolio goals. Take a situation where it is actually preferable to move from #2 to #1, such as realizing a loss by selling CVX from 3% to 2.6%, and buying XOM instead, to 3.4% of the total portfolio. Compared to their respective targets of 4%, CVX would become more underweight, but XOM would become less underweight, so the effect on tracking would roughly balance out.

The optimization approach would say “tracking doesn’t really change, and the tax outcome is better, so let’s trade”. With tilts, even if a sophisticated optimization system were to be used (vs. a simple rules-based “rebalancer”), the system would think that improving the tax situation would also increase the expected tracking error, and (erroneously) avoid trading.
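To make the CVX/XOM example concrete, here is a toy calculation using a linear misallocation penalty (a simplification; per footnote 6, our actual penalties need not be linear, and the class name is made up):

```java
public class TiltVsGoal {
  // Linear misallocation penalty (illustrative): the sum of absolute
  // deviations of portfolio weights from their per-stock targets.
  public static double misallocation(double[] weights, double[] targets) {
    double sum = 0.0;
    for (int i = 0; i < weights.length; i++) {
      sum += Math.abs(weights[i] - targets[i]);
    }
    return sum;
  }
}
```

Against the tilted targets {3, 3}, portfolio #1 = {2.6, 3.4} scores 0.4 + 0.4 = 0.8 while portfolio #2 = {3, 3} scores 0, so a tilt-based system strictly prefers #2. Against the original targets {4, 4}, both portfolios score 2.0: equally desirable, matching the intuition in the text.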


Our proprietary portfolio optimizer, developed over the last 5 years, allows ESG scores to be considered alongside other portfolio goals. This is a more natural way to specify how much a client cares about maximizing a specific ESG score. It also results in better investing outcomes, especially in situations where the best achievable portfolio cannot get too close to the ideal.


  1. Socially Responsible Investing (SRI) typically reflects a client’s personal values, whereas ESG focuses on objective aspects, such as having a cybersecurity policy. This distinction is not relevant in this post.
  2. This post focuses on portfolios that track a stock index, but the approach discussed can apply to any portfolio target.
  3. Exclusion filters are also common: instead of holding less of a “bad” stock, the client may want to hold none of it. However, they are effectively an extreme tilt (all the way down to zero), so we will not discuss them here. 
  4. We actually further break down tracking to cover two interpretations of ‘similar’: ‘looks similar’ (adheres to the nominal target percentages) and ‘has similar behavior’ (has similar factor exposures as per a factor-based risk model).
  5. We assume this to simplify the example; normally, there will be infinite possible portfolios. In this case, if both 3%/3% and 2.6%/3.4% are possible, then e.g. 2.9%/3.1% will also be possible, as will 2.8%/3.2%, etc.
  6. In practice, the misallocation penalties may not be linear (proportional): in our system, they increase faster the further we move from the target. The idea is that we don’t care as much about 4% becoming 3.5% (e.g. due to price movement) as we do about 3.5% becoming 3% (which means we’re really starting to drift). The shape of the misallocation penalty function is highly customizable in our system. This detail does not invalidate our main argument, though.

Direct Indexing problems during big market drops, part 1

Our proprietary backtest infrastructure allows us to discover potential problems early on, and to fine-tune investing behavior.


Tax loss harvesting (TLH) is a technique that can increase after-tax returns in taxable accounts. It does so by selling (“harvesting”) positions that are at a loss, while replacing them in a way that attempts to maintain similar market exposure.

A widely implemented approach uses pairs of similar ETFs: buy X, and when it is at a loss, sell X and buy Y. Direct Indexing (DI) is more sophisticated1. Instead of buying “prepackaged” exposure to a stock index via an ETF or mutual fund, it buys the individual index constituent stocks, at the proportions defined by the stock index. When some holdings are at a loss, they are sold (“harvested”). By selling individual stocks instead of the entire basket, there are harvesting opportunities even when the index as a whole is up. The cash generated gets used to buy other index constituents, such that the resulting portfolio behaves similarly to the index. Typically this is done via a risk model, which tells you how similar different stocks are. As a simple example, if Exxon was just sold at a loss and is underweight, the risk model could tell us to make Chevron overweight. A more general example is when we sell 4 stocks from a portfolio that was already imbalanced with respect to the index. In that case, we may buy 7 other stocks, so as to correct:

  • any previous imbalance
  • the newly introduced imbalance from selling those 4 stocks
  • possibly another imbalance due to a cash deposit, cash dividends paid, a withdrawal, etc.

Over-harvesting problem

If the market drops by a lot, as it did on many days of March 2020, we could naively try to harvest (sell) too much compared to how much we would be able to buy. Let us break down this statement.


For a tax lot to be sold today2:

  • it must be at a big enough loss today.
  • it was not at a big enough loss yesterday3, otherwise we would have sold it then.

Having a tax lot cross that threshold is relatively uncommon. Therefore, the total harvesting amounts on any given day for an entire DI portfolio are not too large.
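The two bullets above amount to a threshold-crossing condition. A toy sketch (the names and the percentage-based threshold are illustrative, not our actual harvesting logic):

```java
public class HarvestTrigger {
  // A tax lot is harvested today only if its loss crosses the threshold today,
  // having not crossed it yesterday (otherwise it would have been sold then).
  public static boolean harvestToday(
      double lossPctYesterday, double lossPctToday, double thresholdPct) {
    return lossPctToday >= thresholdPct && lossPctYesterday < thresholdPct;
  }
}
```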


At any point in time, some stocks can be unbuyable for several reasons, such as:

  1. due to wash sale restrictions, if we sold them at a loss within the last 30 days. This includes today; we can’t sell at a loss and buy the same stock4.
  2. because they are already at the top of the acceptable range. For example, if AAPL is 4% of the index, DI will typically allow it to be within a range such as 2% to 6%, so as to keep it from dominating the portfolio.

Even when a stock is not completely unbuyable, the amount we can buy is still limited by the range restrictions, as per #2.
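One way to quantify this: the harvested cash we can reinvest is bounded by the sum, over non-wash-sale-restricted stocks, of the room left under each stock’s maximum allowed weight. The following is a hypothetical sketch (names are illustrative):

```java
public class BuyableCapacity {
  // Upper bound (in dollars) on how much harvested cash can be reinvested:
  // wash-sale-restricted stocks contribute nothing, and every other stock can
  // only absorb cash up to the top of its allowed weight range.
  public static double buyableDollars(double[] currentWeight, double[] maxWeight,
                                      boolean[] washSaleRestricted, double portfolioValue) {
    double capacity = 0.0;
    for (int i = 0; i < currentWeight.length; i++) {
      if (washSaleRestricted[i]) {
        continue; // cannot buy at all within 30 days of a loss sale
      }
      capacity += Math.max(0.0, maxWeight[i] - currentWeight[i]) * portfolioValue;
    }
    return capacity;
  }
}
```

For a $100,000 portfolio where stock 0 is at 4% with a 6% cap (buyable) and stock 1 is wash-sale-restricted, the capacity is 2% × $100,000 = $2,000; harvesting more than that leaves uninvested cash.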

Why is this bad?

If we over-harvest, we will end up with a lot of uninvested cash, and therefore with tracking error (overweight cash, underweight stocks). Moreover, this is a particularly bad kind of tracking error. Being overweight some stocks and underweight others at least maintains some market exposure; cash, however, does not move with the stock market. The result is bad not just for a client’s reality (tracking error), but also for their perception. A client will be unhappy if an error creates extra uninvested cash, and they therefore miss out on a market bounce. However, this does not work symmetrically: if the market drops further, they will not call to thank you for selling early due to a mistake.

We have implemented a thoughtful and robust solution to this problem, but that will be the focus of a follow-up post.

How did we discover this?

Most software is written by foreseeing scenarios ahead of time, and writing code to address them. For example:

  • If someone deposits cash into the account, then we must create buy orders for the correct amounts.
  • If a stock has tax lots at a loss greater than 2%, then we must sell them.

… and so on. We do that as well; we have to. However, this approach is not sufficient by itself, because not all scenarios are easy to foresee. The over-harvesting problem sounds obvious in retrospect, but you should not assume that all DI implementations handle it correctly. In fact, this very scenario affected a large firm in a very public and painful way.

We have built a sophisticated backtest infrastructure, which supports a complementary approach that can uncover such problems early on. It allows us to use the same exact code that would normally run in a production environment, and to simulate its behavior over a multi-year period in the past. This causes certain complex situations to arise organically, which may otherwise be hard to foresee. Over-harvesting is one such example.

Of course, it is not enough to generate such situations; we need to be able to know whether a behavior is wrong, so that we can fix the code to handle it. Here is how we do that:

  1. At the end of a backtest, the code generates a few hundred different metrics, such as money-weighted returns, ex-ante and ex-post tracking error, etc.
  2. Afterwards, a different piece of code performs sanity checks on those metrics. As a specific example, we were alerted to the over-harvesting problem because the metric for “maximum ex-ante tracking error for any day in the backtest” exceeded an otherwise loose threshold.
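A sanity check of this kind can be as simple as a threshold test over a per-day metric. The sketch below is illustrative only; our real checks and metric names differ:

```java
public class BacktestSanityChecks {
  // Hypothetical sketch: flag a backtest whose worst single-day ex-ante
  // tracking error exceeds an otherwise loose threshold, as happened in the
  // over-harvesting case.
  public static void checkMaxTrackingError(double[] dailyExAnteTrackingError,
                                           double maxAllowed) {
    for (double trackingError : dailyExAnteTrackingError) {
      if (trackingError > maxAllowed) {
        throw new IllegalStateException(
            "daily ex-ante tracking error " + trackingError
                + " exceeds loose threshold " + maxAllowed);
      }
    }
  }
}
```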

One might ask: “if you can automatically detect wrong behavior in another piece of code that you yourselves wrote, why is it any harder to have that code be correct in the first place?” Detecting side effects of medicines is a good analogy. It is fairly easy to define what “normal” is under several health indicators (sleep, temperature, pressure, heart rate, etc.). There may be a few rare false positives or false negatives, but those health indicators are a useful guide in most cases.

Mapping our analogy to the original point, detection is particularly valuable when:

  • It is simple compared to the solution. Measuring the end result (heart rate; maximum cash %) is much simpler than making sure the end result will always cover every possible scenario, including rare ones (using the drug while simultaneously taking another rare drug; second-biggest market drop in the last ~90 years).
  • There are not too many false positives (higher but still normal heart rate; cash just a bit more overweight than normal) or false negatives (patient experiencing a hard-to-detect health problem; portfolio being moderately imbalanced on every day instead of extremely imbalanced on a single day only).


Our proprietary backtest infrastructure helps us validate investing behavior extensively during development time. This is an improvement over performing some one-off investment research before rolling out an investing product. Detecting problems ahead of time is obviously better than getting a call from an unsatisfied client who discovers a problem months after rolling out DI.


  1. Direct Indexing also allows values-based investing, where certain stocks may be excluded, or held at different weights than their normal target in the stock index, so as to conform to an investor’s values (e.g. a vegetarian avoiding meat processor stocks). Unlike “DI with TLH”, this is also applicable to tax-deferred accounts. We support that as well – not just exclusions, but also tilts – but it is not as interesting from an investment sophistication angle, and we are ignoring it for purposes of this post.
  2. This assumes a daily schedule for evaluating accounts to decide whether to harvest losses (though not necessarily trading daily, because it will not always be worth the trouble). It can be done less frequently, especially if it requires human intervention, or more frequently, especially on days with big intraday moves. In practice, daily is frequent enough.
  3. This decision may also consider the entire position, not just a single tax lot. We may not want to sell too small a tax lot at a loss today: buying that stock back later would cause a wash sale, which ‘locks out’ that stock for the next 30 days, so we may want to limit selling to large enough amounts.
  4. Although there are a few cases involving a combination of taxable and tax-deferred accounts where the tax loss is forfeited forever, in most cases a wash sale results in the tax loss being disallowed, but it can be used to reduce liability in the future. However, in practice, it is customary to disallow wash sales altogether, except for scenarios such as withdrawals, where a client may not care about wash sales.

Tax-smart withdrawals and partially ordered sets

Audience requirements: Some Java, but only for the last half.

Problem description

When a client makes a large enough withdrawal, we may need to suspend some of our normal restrictions, such as avoiding wash sales1, or avoiding the sale of (highly taxed) short-term capital gains2. If the client wants to withdraw money, we cannot just tell her “sorry, that would cause a wash sale”.


An architectural principle of ours is that, for almost everything that could be a hard-coded constant or a configuration file setting, we instead allow it to be dynamically passed into the system on a per-API-call basis3. That is, investing customer X’s portfolio could use one set of settings, and then one second later the system can be called with customer Y and his (possibly different) settings.

In keeping with that spirit, it is possible to specify a “withdrawal relaxation path” dynamically. The Java code reads like English:

    relaxTo(onlySellShortTermLosses(),        DISALLOW_WASH_SALES),
    relaxTo(doNotSellAnyGains(),              DISALLOW_WASH_SALES),
    relaxTo(doNotSellShortTermCapitalGains(), DISALLOW_WASH_SALES),
    relaxTo(canSellAnything(),                DISALLOW_WASH_SALES),
    relaxTo(canSellAnything(),                ALLOW_WASH_SALES));

The idea is that, if it is not possible to fulfill the withdrawal with the restrictions implied by the settings in step N, we try the settings in step N+1, and so on, until it is possible for the withdrawal to go through (i.e. the underlying linear optimization is feasible).
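The relaxation loop itself can be sketched as follows. The names are hypothetical; in our system, “feasible” means the underlying linear optimization has a solution:

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

public class RelaxationLoop {
  // Try each relaxation step in order; return the first step whose
  // restrictions still allow the withdrawal to go through, if any.
  public static <T> Optional<T> firstFeasible(List<T> steps, Predicate<T> isFeasible) {
    for (T step : steps) {
      if (isFeasible.test(step)) {
        return Optional.of(step);
      }
    }
    return Optional.empty();
  }
}
```

Here, `isFeasible` stands in for the real feasibility check of the underlying optimization; an empty result would mean that even the most relaxed step cannot fulfill the withdrawal.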

Our mechanism is very flexible. It is possible to specify separate restrictions for short-term and long-term tax lots. The following example can be passed as the first argument to relaxTo above, and will not sell any:

  • short-term capital gain above 20%
  • long-term capital gain above 50%
  • short-term capital gain that will go long-term (and be eligible for lower tax) in 15 or fewer days

What is partial comparison?

Partial comparison is (roughly speaking) when not all items in a set are comparable. As an example, let us use a pair of portfolios (N, R): N can be liquidated now, whereas R can only be liquidated in 20 years (at retirement).

It is hard to disagree that it is always better if one or both of N and R have more money in them:

  • (100, 301) > (100, 300)
  • (101, 300) > (100, 300)
  • (101, 301) > (100, 300)

However, how do you compare (101, 300) and (100, 302)?

  • (101, 300) has a bigger amount that can be used right now
  • (100, 302) has more total money in it

Any preference for one or another pair of portfolios is not objective, and will depend on liquidity needs.
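Componentwise comparison of such (N, R) pairs can be expressed as a partial comparison, where an empty result means “not comparable”. A minimal sketch (names are illustrative):

```java
import java.util.Optional;

public class PairComparison {
  // Compare (n1, r1) vs (n2, r2) componentwise. Returns 1/0/-1 when one pair
  // weakly dominates the other; empty when neither does (not comparable).
  public static Optional<Integer> partiallyCompare(
      double n1, double r1, double n2, double r2) {
    int cn = Double.compare(n1, n2);
    int cr = Double.compare(r1, r2);
    if (cn == 0 && cr == 0) {
      return Optional.of(0);
    }
    if (cn >= 0 && cr >= 0) {
      return Optional.of(1);
    }
    if (cn <= 0 && cr <= 0) {
      return Optional.of(-1);
    }
    return Optional.empty(); // e.g. (101, 300) vs (100, 302)
  }
}
```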

Why partial comparison?

We want to confirm that the sequence of relaxation steps is indeed in increasing “relaxation order”.

There is nothing inherently wrong if this is not respected. We can tell the system to try trading first under some loose restrictions (step 1), and – after that fails – try trading with stricter restrictions (step 2). The stricter step will always fail if the looser step failed. However, the 2nd step is a waste of resources. Even more importantly, it may mean that the person who constructed this sequence of steps made an error, so it is good to get an alert.

Partial comparison is relevant here, because “relaxation steps” are partially comparable. Take the following steps:

  • Step 1: do not sell short-term capital gains of over 10%, or long-term gains over 20%
  • Step 2: do not sell short-term capital gains of over 17%, or long-term gains over 27%
  • Step 3: do not sell short-term capital gains of over 14%, or long-term gains over 15%

Step 2 is more relaxed than step 1, as it is less restrictive in selling both short- and long-term gains.

However, steps 1 and 3 are not comparable: step 3 is more restrictive with long-term gains, but less restrictive with short-term gains.

Our sanity checks will only complain if step K is strictly less relaxed than step M (K < M), but not when they are not comparable. Still, this has the potential to catch bugs, so it increases system safety.

Our Java support for partial comparison

In order to build our sanity checks, we formalized the concept of partial comparison in our code. Java has Comparable<T> and Comparator<T>, but there is no support for partial comparison, even in commonly used libraries such as Google Guava or Apache Commons. Therefore, we wrote our own.

Without too much code, here is a summary of what we added:

public interface PartialComparator<T> {
  PartialComparisonResult partiallyCompare(T o1, T o2);
}

public interface PartiallyComparable<T> {
  PartialComparisonResult partiallyCompareTo(T o);
}

PartialComparisonResult (code not shown) can be less-than, equal, greater-than, or undefined (4 cases total). This also abstracts away the less intuitive convention of Comparable#compareTo, which uses {negative, 0, positive} to denote {<, ==, >}.
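Although our actual PartialComparisonResult is not shown in this post, one plausible way to represent those 4 cases is to wrap an Optional comparison value, with empty meaning “not comparable”:

```java
import java.util.Optional;
import java.util.function.IntConsumer;

// Illustrative sketch only; the real class has more functionality.
public class PartialComparisonResult {
  private final Optional<Integer> rawResult; // empty = not comparable

  private PartialComparisonResult(Optional<Integer> rawResult) {
    this.rawResult = rawResult;
  }

  public static PartialComparisonResult lessThan()    { return new PartialComparisonResult(Optional.of(-1)); }
  public static PartialComparisonResult equalTo()     { return new PartialComparisonResult(Optional.of(0)); }
  public static PartialComparisonResult greaterThan() { return new PartialComparisonResult(Optional.of(1)); }
  public static PartialComparisonResult undefined()   { return new PartialComparisonResult(Optional.empty()); }

  public boolean isDefined() {
    return rawResult.isPresent();
  }

  // Runs the consumer only when the two items were comparable.
  public void ifPresent(IntConsumer consumer) {
    rawResult.ifPresent(r -> consumer.accept(r));
  }
}
```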

The following method will look at the comparisons of all fields of two objects, and determine whether there is a well-defined ordering or not. This generalizes the idea that, for class A, if all fields of object A1 are >= those of object A2, and at least one is >, then A1 > A2. If any field is not comparable, then A1 and A2 are not comparable.

public static PartialComparisonResult partiallyCompareMultiple(
    PartialComparisonResult first,
    PartialComparisonResult second,
    ...)

Finally, the following throws an exception if the items in a list do not respect a certain partial comparison, as per the PartialComparator<T> passed in. One actually has to compare every pair of items, not just consecutive ones.

public static <T> void checkDecreasingPerPartialComparison(
    PartialComparator<T> partialComparator,
    List<T> list,
    String format,
    Object...args) {
  forEachUnequalPairInList(list, (item1, item2) ->
      partialComparator.partiallyCompare(item1, item2)
          .ifPresent(comparisonResult -> Preconditions.checkArgument(
              comparisonResult > 0,
              format, args)));
}

All this infrastructure will now be reusable for other cases in the future, beyond this specific scenario of withdrawal relaxation instructions.


It is possible to specify dynamically – and in a very understandable, English-language-like format – how we want the system to behave when there is a withdrawal. We are very meticulous about system safety, so we added checks that the sequence of constraint relaxation steps does not get stricter. We could have done this with a one-off, localized check, but we instead added support in the code for generalized partial comparison. This makes for cleaner code; also, we can reuse that infrastructure later, if needed.


  1. Very roughly speaking, a wash sale is when we buy and sell the same stock within a month; the IRS disallows claiming the capital loss in certain such cases. The rule is there to prevent a largely riskless realization of capital losses just for tax purposes. Wash sales are undesirable for several reasons (beyond the scope of this post).
  2. Our optimization logic already penalizes selling of capital gains. However, it takes in an additional parameter that results in adding hard constraints to avoid bad tax outcomes, such as selling short-term capital gains, selling any long-term gains above 50%, etc. The idea is that, even if in some cases selling a short-term capital gain may improve the portfolio in some way (e.g. better tracking to the target allocation), in practice it looks bad to a client if we sell short-term capital gains. Note that this is highly configurable, so “no extra constraints” is still an option.
  3. One obvious advantage is that it makes the system very flexible. However, another big advantage is that we can have test code that compares what happens with one setting vs. another. For example, we can assert that, over the duration of an 8-year backtest, we will realize fewer tax gains if we prevent the selling of gains than if we do not. This, in turn, helps us ascertain that our investing logic is correct.

Easy, single-statement unit tests

Our testing framework lets us test our code more extensively, and with less effort. This makes the code simpler, easier to refactor, and results in fewer bugs.

Audience requirements: Java; unit testing.

Some background: our architecture

We divide all our Java classes into 2 categories:

  • data/noun classes store data only, and have almost no logic, except for construction preconditions.
  • verb classes are stateless and implement the business logic.

Opinions may differ on this style, but it works well with dependency injection (e.g. Google Guice), and we have seen it used successfully elsewhere, so we like it.

We make heavy use of data classes instead of ‘more raw’ types. For example, we have a Partition class for representing “a collection of items with weights that sum to 1, i.e. 100%”. It is better than a simple Map<T, Double> because:

  1. Single check: We only need to check once that sum(values) ~= 1, at the Partition constructor. Otherwise, if 7 methods took a raw Map<T, Double> argument with implicit partition semantics, we would need to have 7 preconditions in the code. That would clutter the production code, and also require 7 more tests. Secondarily, checking the preconditions only once is better for performance.
  2. Clearer semantics than relying on the name of a parameter, which may not be descriptive – or even correct.
  3. Type safety avoids bugs. Example: say OptimizationResult stores the solution from the optimizer in a Map<AssetClass, Double>, but there’s no restriction that the values sum to 1 (e.g. perhaps it is in dollars, not %)1. Passing a Partition instead of an OptimizationResult will not compile, which is good. However, with a raw Map<AssetClass, Double>, it is possible to make such a mistake.
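As a sketch of point #1, a minimal Partition-like class might check the sum-to-1 precondition exactly once, in its static constructor. This is illustrative only; our real Partition is richer (e.g. it uses UnitFraction rather than raw doubles, as footnote 1 notes):

```java
import java.util.Map;

public class Partition<T> {
  private static final double EPSILON = 1e-8;
  private final Map<T, Double> weights;

  private Partition(Map<T, Double> weights) {
    this.weights = weights;
  }

  // The single precondition check: weights must sum to ~1 (i.e. 100%).
  public static <T> Partition<T> partition(Map<T, Double> weights) {
    double sum = weights.values().stream().mapToDouble(Double::doubleValue).sum();
    if (Math.abs(sum - 1.0) > EPSILON) {
      throw new IllegalArgumentException(
          "Partition weights sum to " + sum + ", not 1");
    }
    return new Partition<>(weights);
  }

  public double weightOf(T key) {
    return weights.getOrDefault(key, 0.0);
  }
}
```

Every method that takes a Partition can now rely on the invariant without re-checking it.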

With 600+ data classes, and 185,000+ lines of test code (as of this writing), it pays to have systematic ways to test data classes, and to simplify the testing of verb classes.


We make extensive use of the Hamcrest matcher framework. You can think of a matcher as a test-only, parametrizable equals().

We only implement hashCode & equals in our data classes in a few cases, such as when their instances will be used as keys in a map. Otherwise, we prefer using a matcher:

  • Since matcher code lives in test files, we avoid cluttering up prod code with test-oriented logic.
  • Matchers act as a parametrizable equals(): for instance, we can compare two objects subject to an epsilon passed in, and use a different epsilon in different cases.
  • Matchers make it easier to read and write unit tests of ‘verb classes’.
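The “parametrizable equals()” idea, minus the Hamcrest machinery, boils down to something like this (illustrative only):

```java
import java.util.function.BiPredicate;

public class MatcherSketch {
  // Unlike equals(), the epsilon is chosen per call site: tests that need a
  // looser or tighter comparison just pass a different value.
  public static BiPredicate<Double, Double> doubleMatcher(double epsilon) {
    return (expected, actual) -> Math.abs(expected - actual) <= epsilon;
  }
}
```

A real Hamcrest matcher additionally produces a readable mismatch description, which is a large part of its value in test failures.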

Our data classes each have corresponding test classes that extend RBTestMatcher, a class that we built (simplified below):

public abstract class RBTestMatcher<T> {
  public abstract T makeTrivialObject();
  public abstract T makeNontrivialObject();
  public abstract T makeMatchingNontrivialObject();
  protected abstract boolean willMatch(T expected, T actual);
}

Here is a concrete example for a TradeAmounts class, which stores the $ amount bought and sold, broken down by stock.

public class TradeAmountsTest extends RBTestMatcher<TradeAmounts> {

  public static TradeAmounts emptyTradeAmounts() {
    return tradeAmounts(emptyBoughtTradeAmounts(), emptySoldTradeAmounts());
  }

  @Override
  public TradeAmounts makeTrivialObject() {
    return emptyTradeAmounts();
  }

  @Override
  public TradeAmounts makeNontrivialObject() {
    return tradeAmounts(
        new BoughtTradeAmountsTest().makeNontrivialObject(),
        new SoldTradeAmountsTest().makeNontrivialObject());
  }

  @Override
  public TradeAmounts makeMatchingNontrivialObject() {
    return tradeAmounts(
        new BoughtTradeAmountsTest().makeMatchingNontrivialObject(),
        new SoldTradeAmountsTest().makeMatchingNontrivialObject());
  }

  @Override
  protected boolean willMatch(TradeAmounts expected, TradeAmounts actual) {
    return tradeAmountsMatcher(expected).matches(actual);
  }

  public static TypeSafeMatcher<TradeAmounts> tradeAmountsMatcher(TradeAmounts expected) {
    return makeMatcher(expected,
        match(v -> v.getBoughtTradeAmounts(), f -> boughtTradeAmountsMatcher(f)),
        match(v -> v.getSoldTradeAmounts(),   f -> soldTradeAmountsMatcher(f)));
  }
}


Subclasses of RBTestMatcher just need to implement:

  • makeTrivialObject to return the most trivial object possible (in this case, nothing traded).
  • makeNontrivialObject to return the most general object possible (e.g. we bought and sold different amounts, and in multiple stocks – not just one).
  • makeMatchingNontrivialObject that should ‘match’ makeNontrivialObject, but not be equal in the traditional sense of equality: e.g. the two could be off by some epsilon.
  • willMatch: a simple predicate that returns true if two objects are similar enough to be considered matching.

By convention, we also add a static method that returns a matcher (here, tradeAmountsMatcher). Hidden behind the syntactic sugar of makeMatcher() and match() is the fact that it relies in turn on boughtTradeAmountsMatcher and soldTradeAmountsMatcher. Since all of our data test classes adhere to the RBTestMatcher convention, this is not a restriction.

Of course, additional tests may be necessary, most commonly for preconditions. For example, TradeAmountsTest also tests (not shown here) that a stock cannot both be bought and sold.

There are several advantages to having every data test class extend RBTestMatcher:

  • We get a free unit test for the matcher logic. TradeAmountsTest inherits a unit test called RBTestMatcher#matcherMetaTest() (not shown), which checks that:
    • makeTrivialObject does not match the two non-trivial objects
    • all 3 objects match themselves
    • makeNontrivialObject matches makeMatchingNontrivialObject
  • We get a free unit test for toString(), also inherited from RBTestMatcher. It’s a simple test that just calls toString() and ignores the return value, but at least it ensures no exceptions are thrown.
  • We get some free rudimentary testing of the constructors, insofar as no exceptions are thrown from within the 3 make*Object methods.
  • It standardizes the format of the data class test code, making it easier to read.
  • It lets other tests construct a realistic test object without needing to know how to construct one (e.g. knowing what would constitute valid constructor parameters). For example, TradeAmountsTest#makeNontrivialObject builds an object by utilizing in turn BoughtTradeAmounts#makeNontrivialObject.
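For a flavor of what the inherited matcherMetaTest() checks, here is an illustrative stand-alone version, expressed over the three objects and a willMatch predicate (our real version is a JUnit test on an RBTestMatcher subclass, not a boolean):

```java
import java.util.function.BiPredicate;

public class MatcherMetaTestSketch {
  // The three checks described in the bullets above: all objects match
  // themselves, the two non-trivial objects match each other, and the
  // trivial object does not match either non-trivial object.
  public static <T> boolean passesMetaTest(
      T trivial, T nontrivial, T matchingNontrivial,
      BiPredicate<T, T> willMatch) {
    return willMatch.test(trivial, trivial)
        && willMatch.test(nontrivial, nontrivial)
        && willMatch.test(matchingNontrivial, matchingNontrivial)
        && willMatch.test(nontrivial, matchingNontrivial)
        && !willMatch.test(trivial, nontrivial)
        && !willMatch.test(trivial, matchingNontrivial);
  }
}
```

A matcher that is too loose (e.g. always returns true) would fail this meta-test, because the trivial object would incorrectly match the non-trivial ones.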

Single-statement unit tests

Another big benefit of all data classes having matchers is that verb class unit tests can be much more concise – in some cases, a single Java statement.

Take a look at this extensive test2 of summarize() in TradesToTradeAmountsSummarizer (abridged below). The logic is fairly straightforward: it adds up the corresponding amounts bought and sold in the Trades object in the input, and summarizes to a TradeAmounts object, which only records amounts bought and sold.

                STOCK_B1, boughtTradesWithOrder(
                    buyOrder(STOCK_B1, buyQuantity(11), price(100.10), DUMMY_TIME),
                        boughtTrade(STOCK_B1, buyQuantity(2), price(100.02), DUMMY_TIME),
                        boughtTrade(STOCK_B1, buyQuantity(3), price(100.03), DUMMY_TIME))),
                STOCK_B2, boughtTradesWithOrder(
                    buyOrder(STOCK_B2, buyQuantity(44), price(100.04), DUMMY_TIME),
                        boughtTrade(STOCK_B2, buyQuantity(5), price(100.05), DUMMY_TIME),
                        boughtTrade(STOCK_B2, buyQuantity(6), price(100.06), DUMMY_TIME))))),
                STOCK_S1, soldTradesWithOrder(
                    sellUnspecifiedLots(STOCK_S1, sellQuantity(11), price(200.10), DUMMY_TIME),
                        soldTrade(STOCK_S1, sellQuantity(2), price(200.02), DUMMY_TIME),
                        soldTrade(STOCK_S1, sellQuantity(3), price(200.03), DUMMY_TIME))),
                STOCK_S2, soldTradesWithOrder(
                    sellUnspecifiedLots(STOCK_S2, sellQuantity(44), price(200.04), DUMMY_TIME),
                        soldTrade(STOCK_S2, sellQuantity(5), price(200.05), DUMMY_TIME),
                        soldTrade(STOCK_S2, sellQuantity(6), price(200.06), DUMMY_TIME))))))),
                STOCK_B1, money(doubleExplained(  500.13, 2 * 100.02 + 3 * 100.03)),
                STOCK_B2, money(doubleExplained(1_100.61, 5 * 100.05 + 6 * 100.06)))),
                STOCK_S1, money(doubleExplained(1_000.13, 2 * 200.02 + 3 * 200.03)),
                STOCK_S2, money(doubleExplained(2_200.61, 5 * 200.05 + 6 * 200.06)))))));

The entire test above is a clean implementation as a single Java statement.

Let’s imagine we didn’t have tradeAmountsMatcher, and instead implemented hashCode/equals in TradeAmounts, BoughtTradeAmounts, and SoldTradeAmounts.

A simple assertEquals might not work, because double arithmetic may result in tiny deviations. Of course, we could relax the various equals() methods to check equality subject to some epsilon. However, that gets complicated quickly: the epsilon may not always be a fixed amount, yet equals() cannot take epsilon as a parameter.

Therefore, the test logic would have to be something like this (shown in pseudocode):

  • Get output of summarize()
  • For BoughtAmounts:
    • Check that the map has entries for B1 and B2 (no more or fewer stocks)
    • For B1, confirm the $ amount
    • For B2, confirm the $ amount
  • Same for SoldAmounts

The deeper the structure of an object (e.g. a tree of collections of trees), the more unwieldy this gets. Instead, our test matcher allows us to write such a test as a single Java statement.
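To make this concrete, here is a minimal sketch (not our actual matcher; all names are hypothetical) of an epsilon-aware matcher for a flat map of per-stock dollar amounts. Our real matchers apply the same idea recursively to nested objects:

```java
import java.util.Map;

public class EpsilonMapMatcher {

  // Returns true if both maps have exactly the same keys (no more or fewer
  // stocks), and each per-stock amount matches within epsilon.
  static boolean matches(Map<String, Double> expected, Map<String, Double> actual, double epsilon) {
    if (!expected.keySet().equals(actual.keySet())) {
      return false;
    }
    return expected.entrySet().stream()
        .allMatch(e -> Math.abs(e.getValue() - actual.get(e.getKey())) <= epsilon);
  }

  public static void main(String[] args) {
    // The 'expected' amounts, as a human would write them:
    Map<String, Double> expected = Map.of("B1", 500.13, "B2", 1_100.61);
    // The 'actual' amounts, as double arithmetic computes them (2 * 100.02 + 3 * 100.03, etc.):
    Map<String, Double> actual = Map.of("B1", 2 * 100.02 + 3 * 100.03, "B2", 5 * 100.05 + 6 * 100.06);
    if (!matches(expected, actual, 1e-8)) {
      throw new AssertionError("amounts should match within epsilon");
    }
    System.out.println("match");
  }
}
```

Unlike relaxing equals(), the epsilon here is an explicit parameter, so different call sites can use different tolerances.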


We are big proponents of automated testing, instead of manual QA. To that end, we have built infrastructure to make it much easier to write and read automated tests. This results in test code of higher quality and wider coverage, making it easier to improve the code via refactors, and to catch bugs.


1. This is not a good example in the context of our own code base, because we rarely use plain doubles anyway. Even if we used raw maps in our code, the Partition would be a Map<AssetClass, UnitFraction>, where UnitFraction is forced to be within 0 and 1, and the OptimizationResult (if it were in $ terms, which it is not) would be a Map<AssetClass, Money>. Those two are not interchangeable, even by accident. A better example would be a TargetPartition and HeldPartition, where each can instead be represented as a raw Partition, with the former representing the target/goal, and the latter representing the current holdings before any trading. The two are conceptually different, so we should never want to use one instead of the other, yet both contain a Partition inside them.

2. Inside this test code, doubleExplained() is a handy in-line way to show an actual value that gets used in a test alongside the calculation that gave rise to it. Otherwise, the actual value used may look arbitrary. Also, DUMMY_TIME is a predefined constant which we use to imply that time does not matter in a calculation; here, that makes sense, because we are just summing dollar amounts.

Fast time series classes in Java

We describe our space- and time-efficient daily time series implementation.

Our system is expressly built to handle investing concepts well, just like the programming language R is particularly good for statistics vs., say, creating video games or word processors.

A common such concept is a time series, which is a collection of data points in time order. Particularly useful to us are daily time series (henceforth DTS), where we have values for consecutive market (NOT calendar) days1. DTS objects are central to our backtest subsystem, and we run thousands of such backtests in our automated test suite. Therefore, we created our own space- and time-efficient optimized implementation.

Sharing a mapping

A DTS can store any object type; however, the following example refers to the DTS of an account’s value.

A common implementation of a map with O(1) lookup time, such as java.util.HashMap,  is (conceptually) an unordered set of {key, value} entries, where the keys implement hashCode and equals. Example:

Fri 2010-01-08 → $1,008
Mon 2010-01-11 → $1,011
Tue 2010-01-12 → $1,012

Barring a few exceptions, data classes in our codebase are immutable2. A corollary of this is that map objects cannot have entries added to or removed from them. This allows us to represent the above in two pieces, without any efficiency losses from having to enlarge the array, or moving objects in it, since the object is immutable:

  • a mapping of market day to array index:
    Fri 2010-01-08 → 0
    Mon 2010-01-11 → 1
    Tue 2010-01-12 → 2
  • an array with the data points appearing consecutively:
    { $1008 , $1011, $1012 }

This allows us to improve on a simple java.util.HashMap in multiple ways, in both running time and space (not in the big-O sense, but still).

  1. Less space
    1. No hash table gaps: data items can be stored consecutively in memory, so we will not have any gaps – whereas a hash map’s underlying array will need to be larger (typically at least 2x) to reduce collisions.
    2. Mapping can be shared: if there are many time series for the same market date range, the date -> array index mapping can be shared, and only the actual data array needs to be different. [Note: further down, we describe a trick to achieve savings even across DTS objects that cover different date ranges.]
  2. Fast ordered iteration: if we want to iterate over all data points in sequential day order, we just sequentially access the array. With a hash map3 we would have to enumerate all market days in order, and then look up the corresponding values, which would result in a hash function calculation per item.

Here is the (condensed) code. We call this ArrayIndexMapping; it is actually a more general concept, and not specific to market days.

public interface ArrayIndexMapping<T> {
  T getKey(int index); // throws if index not in mapping
  int getIndex(T key); // throws if key not in mapping
  int size();
}

DailyTimeSeries objects have an ArrayIndexMapping member; more details below.
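As an illustration of the two-piece representation, here is a minimal sketch (class names hypothetical; our real implementation uses an ArrayIndexMapping rather than a plain map) of a time series whose date-to-index mapping is shared across several series:

```java
import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;

public class SimpleDailyTimeSeries<V> {

  // In the real system this is an ArrayIndexMapping; a plain map suffices here
  // to show the structure. It can be shared by many time series.
  private final Map<LocalDate, Integer> sharedMapping;
  private final V[] data; // data points stored consecutively, in day order

  @SafeVarargs
  SimpleDailyTimeSeries(Map<LocalDate, Integer> sharedMapping, V... data) {
    this.sharedMapping = sharedMapping;
    this.data = data;
  }

  V get(LocalDate marketDay) {
    Integer index = sharedMapping.get(marketDay);
    if (index == null) {
      throw new IllegalArgumentException("not a market day in range: " + marketDay);
    }
    return data[index];
  }

  public static void main(String[] args) {
    Map<LocalDate, Integer> mapping = new HashMap<>();
    mapping.put(LocalDate.of(2010, 1, 8), 0);  // Fri
    mapping.put(LocalDate.of(2010, 1, 11), 1); // Mon
    mapping.put(LocalDate.of(2010, 1, 12), 2); // Tue

    // Two series over the same dates share one mapping; only the arrays differ.
    SimpleDailyTimeSeries<Integer> accountValue = new SimpleDailyTimeSeries<>(mapping, 1008, 1011, 1012);
    SimpleDailyTimeSeries<Integer> numPositions = new SimpleDailyTimeSeries<>(mapping, 21, 22, 22);

    System.out.println(accountValue.get(LocalDate.of(2010, 1, 11))); // 1011
    System.out.println(numPositions.get(LocalDate.of(2010, 1, 12))); // 22
  }
}
```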

Hash function speedup

A naive implementation of ArrayIndexMapping would use a simple hash map under the hood, which would give no improvement over just using a plain hash map.

However, with market days, there’s a trick we do.

We use a hash function:

f(date) = day of month + 32 * (month of year + 13 * year)

This is guaranteed to give us increasing values for later days (i.e. f(x) is monotonically increasing)4.
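Here is a sketch of f (the function itself is from above; the date arithmetic uses java.time), along with a quick check that it is strictly increasing across month and year boundaries:

```java
import java.time.LocalDate;

public class MarketDayHash {

  // f(date) = day of month + 32 * (month of year + 13 * year)
  static int f(LocalDate d) {
    return d.getDayOfMonth() + 32 * (d.getMonthValue() + 13 * d.getYear());
  }

  public static void main(String[] args) {
    // Confirm strictly increasing over a multi-year range:
    LocalDate d = LocalDate.of(2000, 1, 3);
    int prev = f(d);
    for (int i = 0; i < 5000; i++) {
      d = d.plusDays(1);
      int next = f(d);
      if (next <= prev) {
        throw new AssertionError("not increasing at " + d);
      }
      prev = next;
    }
    // The example from further down: f(2000-01-10) - f(2000-01-03) = 7.
    System.out.println(f(LocalDate.of(2000, 1, 10)) - f(LocalDate.of(2000, 1, 3))); // 7
  }
}
```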

Then, we pick a date range that we want to support. It has to be early enough to accommodate backtests that go far into the past, but late enough so that we know the market holiday schedule. We chose the first market day5 in 2000, which is Monday 2000-01-03.

We create a ‘master array’ whose size is f(today) – f(2000-01-03) + 1; that is slightly more than the number of calendar days since 2000-01-03, because f skips a few values at month boundaries. Then, for each calendar day D that is a market day, we store the market-day index of D (0 for 2000-01-03, 1 for the next market day, and so on) at position f(D) – f(2000-01-03). Positions for non-market days will have a value of 0, since Java initializes int arrays this way, but that doesn’t matter, because those values never get looked at.

The master array will look like this – using empty to denote ‘value which never gets read’:

Array index    n, for n-th market day    Calendar date
(not stored)   since 2000-01-03          (not stored)
0              0                         2000-01-03 Monday
1              1                         2000-01-04 Tuesday
2              2                         2000-01-05 Wednesday
3              3                         2000-01-06 Thursday
4              4                         2000-01-07 Friday
5              (empty)                   2000-01-08 Saturday
6              (empty)                   2000-01-09 Sunday
7              5                         2000-01-10 Monday
8              6                         2000-01-11 Tuesday

The first 5 days are for the weekdays (Mon – Fri). Then, there are two empty values for the weekend, and then there are more weekdays, and so on. [Note: the 2000-01-10 row (array index 7) is used in the example further down.]

The memory requirements of the master array are trivial: e.g. when we run our code in 2018, this is approximately 18 years * 365 days * 4 bytes per int ≈ 26 KB.

Here’s an example. For simplicity, let us assume that the account value on the n-th market day after the first one is $1000 + n.

The DTS class will store this array:

Array index    $ account value    Market date
(not stored)                      (not stored)
0              $1,000             2000-01-03 Monday
1              $1,001             2000-01-04 Tuesday
2              $1,002             2000-01-05 Wednesday
3              $1,003             2000-01-06 Thursday
4              $1,004             2000-01-07 Friday
5              $1,005             2000-01-10 Monday
6              $1,006             2000-01-11 Tuesday

Say we want to look up that DTS’s value for Monday 2000-01-10. Here are the steps:

  1. We compute f(2000-01-10) – f(2000-01-03), which is 7; i.e. there are 7 calendar days since the ‘beginning of time’.
  2. We index the master array on 7, which gives us 5.
  3. We index the DTS’s internal array on 5, which gives us $1,005.

The lookup time consists of 2 array lookups, and the calculation of f(2000-01-10) = 10 + 32 * (1 + 13 * 2000). [Note: although trivial, we precompute once and save the value of f(2000-01-03)].

In short, a DTS lookup by date will take almost zero time.

An additional advantage is improved spatial locality, which should improve cache performance, since we typically go through a DTS in time order.
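The steps above can be sketched as follows (hypothetical names; for simplicity, every weekday is treated as a market day, ignoring holidays). This reproduces the 2000-01-10 lookup:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;

public class MasterArrayDemo {

  static int f(LocalDate d) {
    return d.getDayOfMonth() + 32 * (d.getMonthValue() + 13 * d.getYear());
  }

  static final LocalDate EPOCH = LocalDate.of(2000, 1, 3); // first market day in 2000
  static final int F_EPOCH = f(EPOCH); // precomputed once

  // masterArray[f(D) - f(EPOCH)] = market-day index of D, for every market day D.
  static int[] buildMasterArray(LocalDate lastDay) {
    int[] master = new int[f(lastDay) - F_EPOCH + 1];
    int marketDayIndex = 0;
    for (LocalDate d = EPOCH; !d.isAfter(lastDay); d = d.plusDays(1)) {
      if (d.getDayOfWeek() != DayOfWeek.SATURDAY && d.getDayOfWeek() != DayOfWeek.SUNDAY) {
        master[f(d) - F_EPOCH] = marketDayIndex++;
      }
      // weekend slots keep Java's default 0 and are never read
    }
    return master;
  }

  public static void main(String[] args) {
    int[] master = buildMasterArray(LocalDate.of(2000, 1, 11));
    int[] accountValues = {1000, 1001, 1002, 1003, 1004, 1005, 1006};

    // Look up the value for Monday 2000-01-10:
    int calendarOffset = f(LocalDate.of(2000, 1, 10)) - F_EPOCH; // 7
    int marketDayIndex = master[calendarOffset];                 // 5
    System.out.println(accountValues[marketDayIndex]);           // 1005
  }
}
```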

Reusing the master array

The previous solution works if all time series start on 2000-01-03. However, different backtests use different starting dates; some may start in 2008 and some in 2009. We use a simple trick so that all DTS lookups can be fast, regardless of the date range they cover.

Each DTS includes a specialized ArrayIndexMapping implementation that stores:

A. a reference to the ‘master array’

B. the calendar day index of the starting day in the master array. In other words, f(starting day) – f(2000-01-03).

The space requirements per DTS are trivial: 8 bytes for the reference (on a 64-bit machine) and 4 for the int that stores the starting index.

Let’s take the example of a DTS that starts on Wednesday 2008-01-02, the first market day in 2008. Value (B) is:

f(2008-01-02) – f(2000-01-03)

= (2 + 32 * (1 + 13 * 2008)) – (3 + 32 * (1 + 13 * 2000))

= 32 * 13 * 8 – 1

= 3327

Let us assume that account values for the n-th market day in 2008 are $8000 + n. The DTS will store this array:

Array index    $ account value    Market date
(not stored)                      (not stored)
0              $8,000             2008-01-02 Wednesday
1              $8,001             2008-01-03 Thursday
2              $8,002             2008-01-04 Friday

Say we want to look up this DTS’s value for 2008-01-04. The steps are simple:

  • Compute f(2008-01-04) – f(2000-01-03) = 3329
  • Index the master array at 3329, and also at 3327 (the stored value B); the difference of the two entries is 2, the position of 2008-01-04 among this DTS’s market days
  • Index the internal DTS array at position 2 to get the account value of $8,002

Note that we difference the two master-array entries, rather than simply subtracting 3327 from 3329; the latter only happens to give the same answer here because no weekend falls between the two dates.

This shows that the space and time benefits extend to all DTS objects in the system, regardless of their market date range.
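A self-contained sketch of the 2008 example (hypothetical names; weekdays treated as market days, ignoring holidays). It differences two master-array entries, so intervening weekends between the epoch and the DTS’s start are handled correctly:

```java
import java.time.DayOfWeek;
import java.time.LocalDate;

public class OffsetDtsDemo {

  static int f(LocalDate d) {
    return d.getDayOfMonth() + 32 * (d.getMonthValue() + 13 * d.getYear());
  }

  static final LocalDate EPOCH = LocalDate.of(2000, 1, 3);
  static final int F_EPOCH = f(EPOCH);

  // masterArray[f(D) - f(EPOCH)] = market-day index of D, for every market day D.
  static int[] buildMasterArray(LocalDate lastDay) {
    int[] master = new int[f(lastDay) - F_EPOCH + 1];
    int marketDayIndex = 0;
    for (LocalDate d = EPOCH; !d.isAfter(lastDay); d = d.plusDays(1)) {
      if (d.getDayOfWeek() != DayOfWeek.SATURDAY && d.getDayOfWeek() != DayOfWeek.SUNDAY) {
        master[f(d) - F_EPOCH] = marketDayIndex++;
      }
    }
    return master;
  }

  public static void main(String[] args) {
    int[] master = buildMasterArray(LocalDate.of(2008, 1, 4));

    // DTS starting Wednesday 2008-01-02; stored value (B) = f(start) - f(epoch).
    int startOffset = f(LocalDate.of(2008, 1, 2)) - F_EPOCH;
    System.out.println(startOffset); // 3327
    int[] dtsValues = {8000, 8001, 8002};

    // Look up 2008-01-04: difference the master-array entries for the date
    // and for the start day, then index the DTS's internal array.
    int idx = master[f(LocalDate.of(2008, 1, 4)) - F_EPOCH] - master[startOffset];
    System.out.println(dtsValues[idx]); // 8002
  }
}
```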


We have described a way to store daily time series objects in a space-efficient way, and where lookups based on market day are lightning fast. We do thousands of backtests in our test suite, so even small savings matter.


1. We use “market day” to mean “day when stock markets were open.” This excludes weekends and equity market holidays (Martin Luther King Day, Labor Day, etc.) All equity exchanges in the US have shared the same holiday calendar in the recent past. Some ‘half market days’ exist, e.g. the Friday after Thanksgiving, but we treat those as regular days.

2. There are several benefits beyond the scope of this post. Quickly: immutability of data objects simplifies the code and reduces bugs, because it enforces value semantics for non-primitives, instead of the default reference semantics Java gives us. In other words, it guarantees that a callee cannot change the contents of an object that is passed in.

3. There are ways around this (e.g. java.util.TreeMap) but they come with their own baggage over a plain hash map (e.g. extra internal state).

4. We could also have used:

(day of month - 1) + 31 * ((month of year - 1) + 12 * (year - 1))

However, we wanted to be stingy even about the minus-one subtraction.

5. In the last few years, if a market holiday (such as New Year’s Day) falls on a weekend, it is usually observed on the next weekday. That was not the case back in 2000, so Monday 2000-01-03 was the first market day in 2000, even though New Year’s fell on a Saturday.

Practical builder design patterns in Java

Although nothing in this blog post is Turing Award material, we will describe a practical way to use the builder pattern in Java that reduces bugs and makes the code easier to follow.

Audience requirements: Java

According to Wikipedia, “The intent of the Builder design pattern is to separate the construction of a complex object from its representation.”


We like the following benefits of builders, in particular:

No arbitrary constructor parameter ordering

This is not a problem if there is a well-accepted ordering convention, e.g.

Vector3d(double x, double y, double z)

However, in the code below, one can accidentally pass in arguments in the wrong order and cause bugs:

new RealizedTax(Money shortTerm, Money longTerm)

Clearer names

Even in those cases where the compiler will prevent you from reordering the constructor parameters, it helps to have a clear name for each parameter, especially when the type of the parameter is not sufficient. For example, it may not be obvious below that 252 means “number of market days”:

new TotalReturn(onesBasedReturn(1.02), 252)

Our Java interface for builders

Every builder class in our codebase implements RBBuilder. 

We admittedly do not use the Java interface mechanism for its intended purpose here. The RBBuilder interface mostly exists to enforce a convention and to simplify some code. In general, interfaces are useful when polymorphism is involved. However, it almost never makes sense for a method to take as an argument a builder of some supertype; a builder is almost always used in-line as a more legible replacement for a traditional constructor.

The code below had comments removed; we will add explanations to the blog post instead.

Note: Our codebase is in Java 8, which allows default interface methods.

public interface RBBuilder<T> {

  void sanityCheckContents();

  default T build() {
    sanityCheckContents();
    return buildWithoutPreconditions();
  }

  // The field being set has its own type F, independent of the built type T.
  // checkArgument is Guava's Preconditions.checkArgument, statically imported.
  default <F> F checkNotAlreadySet(F currentFieldValue, F newFieldValue) {
    checkArgument(
        currentFieldValue == null,
        "You are trying to set a value twice in a builder, which is probably a bug: from %s to %s",
        currentFieldValue, newFieldValue);
    return newFieldValue;
  }

  default <F> F checkAlreadySet(F currentFieldValue, F newFieldValue) {
    checkArgument(
        currentFieldValue != null,
        "You are trying to reset a value to %s in a builder, but the value has not already been set",
        newFieldValue);
    return newFieldValue;
  }

  T buildWithoutPreconditions();

}


We will explain the methods one at a time.


sanityCheckContents()

This should throw an exception if an object cannot be built for some reason, such as:

  • we forgot to specify a value for one or more of the fields.
  • some data inconsistency exists between 2 or more fields – e.g. if the long-term tax rate is larger than the short-term one, but the code disallows that. If the inconsistency applies only to a single field, we prefer to perform the check directly in the builder’s setter for that field.

We rarely call #sanityCheckContents directly; it is only used in #build.


build()

Builds the object, after confirming (via sanityCheckContents) that it is valid. We never need to override this.


checkNotAlreadySet()

This convenience method throws an exception if a field is given a value more than once. This is almost always an error. As a convention, the setter methods in a builder start with ‘set’, and the assumption is that they do not get called more than once.


public CapitalGainsTaxRatesBuilder setMarginalLongTermFederal(TaxRate marginalLongTermFederal) {
  this.marginalLongTermFederal = checkNotAlreadySet(this.marginalLongTermFederal, marginalLongTermFederal);
  return this;
}


checkAlreadySet()

There are a few cases where we do want to be able to set a value in a builder more than once; for instance, to overwrite a default value. In those cases, we use checkAlreadySet to confirm that we are indeed overwriting a field’s value, and specifically not giving it its first value. For code clarity, we have a convention that the names of such setter methods must start with ‘reset’ instead of ‘set’.

public AllComparableBacktestSettingsBuilder resetTaxSituation(TaxSituation taxSituation) {
  this.taxSituation = checkAlreadySet(this.taxSituation, taxSituation);
  return this;
}

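To see these conventions in action end-to-end, here is a minimal, runnable sketch (all class and field names are hypothetical, and it uses a plain IllegalStateException instead of our preconditions helpers) of a builder that rejects setting the same field twice:

```java
public class MiniBuilderDemo {

  static class PointBuilder {
    private Double x;
    private Double y;

    PointBuilder setX(double newX) {
      this.x = checkNotAlreadySet(this.x, newX);
      return this;
    }

    PointBuilder setY(double newY) {
      this.y = checkNotAlreadySet(this.y, newY);
      return this;
    }

    // Throws if the field already has a value; otherwise returns the new value.
    static <F> F checkNotAlreadySet(F currentFieldValue, F newFieldValue) {
      if (currentFieldValue != null) {
        throw new IllegalStateException(String.format(
            "You are trying to set a value twice in a builder, which is probably a bug: from %s to %s",
            currentFieldValue, newFieldValue));
      }
      return newFieldValue;
    }

    void sanityCheckContents() {
      if (x == null || y == null) {
        throw new IllegalStateException("not all fields were set");
      }
    }

    double[] build() {
      sanityCheckContents();
      return new double[] { x, y };
    }
  }

  public static void main(String[] args) {
    double[] p = new PointBuilder().setX(1.5).setY(2.5).build();
    System.out.println(p[0] + " " + p[1]); // 1.5 2.5

    try {
      new PointBuilder().setX(1.0).setX(2.0); // second set must throw
      throw new AssertionError("expected an exception");
    } catch (IllegalStateException expected) {
      System.out.println("double-set rejected");
    }
  }
}
```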

buildWithoutPreconditions()

This builds an object, but without calling #sanityCheckContents. Here’s why it exists:

  • Allows partial objects: Our TaxSituation class stores several tax rates (short- vs long-term, federal vs. state, qualified dividend). If some non-test code only looks at the dividend rate, its corresponding unit test would be simpler if its TaxSituation input object only specifies the dividend rate. Needing to specify a full object in tests is an even bigger problem in practice, because most objects contain other objects, and things can get out of hand quickly.
  • Software engineering: Sanity checking is a different concept than construction; separating them makes for cleaner code. It also allows you to check whether it’s possible to construct an object without actually attempting to construct it (although we do not do that in practice).

Note: The @VisibleForTesting annotation (from Guava) can be used to enforce that buildWithoutPreconditions only gets called from test code. This makes sense; in production, we should never be skipping the preconditions.

Sample usage:

public class CapitalGainsTaxRates {

  private final TaxRate marginalLongTermFederal;
  private final TaxRate marginalOrdinaryIncome;
  private final TaxRate marginalLongTermState;
  private final TaxRate marginalShortTermState;

  private CapitalGainsTaxRates(TaxRate marginalLongTermFederal, TaxRate marginalOrdinaryIncome,
                               TaxRate marginalLongTermState, TaxRate marginalShortTermState) {
    this.marginalLongTermFederal = marginalLongTermFederal;
    this.marginalOrdinaryIncome = marginalOrdinaryIncome;
    this.marginalLongTermState = marginalLongTermState;
    this.marginalShortTermState = marginalShortTermState;
  }

  public TaxRate getMarginalLongTermFederal() {
    return marginalLongTermFederal;
  }

  public TaxRate getMarginalOrdinaryIncome() {
    return marginalOrdinaryIncome;
  }

  public TaxRate getMarginalLongTermState() {
    return marginalLongTermState;
  }

  public TaxRate getMarginalShortTermState() {
    return marginalShortTermState;
  }

  @Override
  public String toString() {
    return Strings.format("[CPR Fed (LT, ST)= %s %s ; ST = %s %s CPR]",
        marginalLongTermFederal, marginalOrdinaryIncome, marginalShortTermState, marginalLongTermState);
  }

  public static class CapitalGainsTaxRatesBuilder implements RBBuilder<CapitalGainsTaxRates> {

    private TaxRate marginalLongTermFederal;
    private TaxRate marginalOrdinaryIncome;
    private TaxRate marginalLongTermState;
    private TaxRate marginalShortTermState;

    private CapitalGainsTaxRatesBuilder() {}

    public static CapitalGainsTaxRatesBuilder capitalGainsTaxRatesBuilder() {
      return new CapitalGainsTaxRatesBuilder();
    }

    public CapitalGainsTaxRatesBuilder setMarginalLongTermFederal(TaxRate marginalLongTermFederal) {
      this.marginalLongTermFederal = checkNotAlreadySet(this.marginalLongTermFederal, marginalLongTermFederal);
      return this;
    }

    public CapitalGainsTaxRatesBuilder setMarginalOrdinaryIncome(TaxRate marginalOrdinaryIncome) {
      this.marginalOrdinaryIncome = checkNotAlreadySet(this.marginalOrdinaryIncome, marginalOrdinaryIncome);
      return this;
    }

    public CapitalGainsTaxRatesBuilder setMarginalLongTermState(TaxRate marginalLongTermState) {
      this.marginalLongTermState = checkNotAlreadySet(this.marginalLongTermState, marginalLongTermState);
      return this;
    }

    public CapitalGainsTaxRatesBuilder setMarginalShortTermState(TaxRate marginalShortTermState) {
      this.marginalShortTermState = checkNotAlreadySet(this.marginalShortTermState, marginalShortTermState);
      return this;
    }

    @Override
    public void sanityCheckContents() {
      // confirms that every field was set (body elided in this excerpt)
    }

    @Override
    public CapitalGainsTaxRates buildWithoutPreconditions() {
      return new CapitalGainsTaxRates(
          marginalLongTermFederal, marginalOrdinaryIncome,
          marginalLongTermState, marginalShortTermState);
    }
  }
}



A few notes:

  1. The data class (the one built by the builder) has a private constructor. This prevents any instantiation other than through the builder.
  2. The builder also has a private constructor, but this is only because we have a convention of always using static constructor methods for object construction. This is for reasons unrelated to this post, so you can ignore it.

The builder does make the data class code longer, but object construction everywhere else will now be clear and error-proof. Example:

public static final CapitalGainsTaxRates CAPITAL_GAINS_TAX_RATES_2016_HIGHEST = capitalGainsTaxRatesBuilder()
    .setMarginalLongTermFederal(taxRateInPercent(20.0 + 3.8))
    // ... the remaining setters ...
    .build();

Although wordier, this is much clearer than relying on the order of constructor arguments (which doesn’t clarify which rate means what), as in:

new CapitalGainsTaxRates(
    taxRateInPercent(20.0 + 3.8),
    // ... the remaining rates ...


The builder design pattern is very useful. We use builders a lot in our codebase; it enhances code legibility and reduces bugs. We created a common Java interface for our builders, which streamlines the construction of partial objects for unit tests, and (secondarily) keeps the code cleaner by enforcing some conventions at compile time.

Rowboat Advisors optimization infrastructure, part 3: approximating non-linear functions

Rowboat Advisors has built infrastructure to make it tremendously easier to invest a portfolio well. Part 3 describes how to approximate non-linear functions.

Audience requirements: high school math; accumulated familiarity with linear programming via the previous blog post. Programming knowledge not needed, but helps appreciate the code examples. If it gets too much, skip to the summary section at the end.


Assume1 you want to minimize x^2 + 4y^2 subject to x + y = 1 and 0 <= x, y <= 1. The optimal solution2 for (x, y) is (0.8, 0.2). The following code uses our infrastructure and comes up with the answer (0.792, 0.208), which is quite close.

  public void blogPostExample_testSum_twoVariables_unequalWeights_lowerBoundForVarsIsZero() {
    HighLevelLPBuilder lpBuilder = makeRealHighLevelLPBuilder()
        .withHumanReadableLabel(label("minimize x^2 + 4*y^2 subject to x + y = 1"));
    LinearApproximationVars x = generateVars(lpBuilder, "x", Range.closed(0.0, 1.0), 0.01, 1.2, v -> v * v);
    LinearApproximationVars y = generateVars(lpBuilder, "y", Range.closed(0.0, 1.0), 0.01, 1.2, v -> v * v);
    LinearOptimizationProgram lp = lpBuilder
            highLevelVarExpression(1, x.getApproximatedNonLinearPart(), 4, y.getApproximatedNonLinearPart())))
            "x + y = 1", sumOfDisjointHighLevelVars(x.getLinearPart(), y.getLinearPart()), EQUAL_TO_SCALAR, 1.0)
    AllVariablesAndDoubles variablesAndOptimalValues =
    ToDoubleFunction<HighLevelVar> evaluate = var -> evaluator.evaluateHighLevelVar(var, variablesAndOptimalValues);
    double finalX = evaluate.applyAsDouble(x.getLinearPart());
    double finalY = evaluate.applyAsDouble(y.getLinearPart());
    double COARSE_EPSILON = 0.025;
    assertEquals(0.8, finalX, COARSE_EPSILON);
    assertEquals(0.2, finalY, COARSE_EPSILON);
    // Even though the X and Y will be approximate, their sum will not, since we have a constraint of x + y = 1
    assertEquals(1, finalX + finalY, 1.01 * 1e-8);
    assertEquals(doubleExplained(0.64, 0.8 * 0.8), evaluate.applyAsDouble(x.getApproximatedNonLinearPart()), COARSE_EPSILON);
    assertEquals(doubleExplained(0.04, 0.2 * 0.2), evaluate.applyAsDouble(y.getApproximatedNonLinearPart()), COARSE_EPSILON);

Our infrastructure translates this into the following raw linear program. You don’t have to understand the details below, but you may be able to see that this uses 17 line segments to approximate x^2 (plus another 17 for y^2) in the interval from 0 to 1.

[LOPI minimize x^2 + 4*y^2 subject to x + y = 1 supervars: [[x0:x_0.0000_to_0.0100], [x1:x_0.0100_to_0.0220], [x2:x_0.0220_to_0.0364], [x3:x_0.0364_to_0.0537], [x4:x_0.0537_to_0.0744], [x5:x_0.0744_to_0.0993], [x6:x_0.0993_to_0.1292], [x7:x_0.1292_to_0.1650], [x8:x_0.1650_to_0.2080], [x9:x_0.2080_to_0.2596], [x10:x_0.2596_to_0.3215], [x11:x_0.3215_to_0.3958], [x12:x_0.3958_to_0.4850], [x13:x_0.4850_to_0.5920], [x14:x_0.5920_to_0.7204], [x15:x_0.7204_to_0.8744], [x16:x_0.8744_to_1.0000], [x17:y_0.0000_to_0.0100], [x18:y_0.0100_to_0.0220], [x19:y_0.0220_to_0.0364], [x20:y_0.0364_to_0.0537], [x21:y_0.0537_to_0.0744], [x22:y_0.0744_to_0.0993], [x23:y_0.0993_to_0.1292], [x24:y_0.1292_to_0.1650], [x25:y_0.1650_to_0.2080], [x26:y_0.2080_to_0.2596], [x27:y_0.2596_to_0.3215], [x28:y_0.3215_to_0.3958], [x29:y_0.3958_to_0.4850], [x30:y_0.4850_to_0.5920], [x31:y_0.5920_to_0.7204], [x32:y_0.7204_to_0.8744], [x33:y_0.8744_to_1.0000]]
0.01 [x0:x_0.0000_to_0.0100] + 0.032 [x1:x_0.0100_to_0.0220] + 0.058399999999999994 [x2:x_0.0220_to_0.0364] + 0.09008000000000001 [x3:x_0.0364_to_0.0537] + 0.12809600000000002 [x4:x_0.0537_to_0.0744] + 0.17371520000000004 [x5:x_0.0744_to_0.0993] + 0.22845824 [x6:x_0.0993_to_0.1292] + 0.294149888 [x7:x_0.1292_to_0.1650] + 0.37297986559999996 [x8:x_0.1650_to_0.2080] + 0.46757583871999986 [x9:x_0.2080_to_0.2596] + 0.581091006464 [x10:x_0.2596_to_0.3215] + 0.7173092077567997 [x11:x_0.3215_to_0.3958] + 0.8807710493081599 [x12:x_0.3958_to_0.4850] + 1.0769252591697915 [x13:x_0.4850_to_0.5920] + 1.3123103110037504 [x14:x_0.5920_to_0.7204] + 1.5947723732044994 [x15:x_0.7204_to_0.8744] + 1.8744212944751821 [x16:x_0.8744_to_1.0000] + const(-5.06116518) + 4.0 0.01 [x17:y_0.0000_to_0.0100] + 0.032 [x18:y_0.0100_to_0.0220] + 0.058399999999999994 [x19:y_0.0220_to_0.0364] + 0.09008000000000001 [x20:y_0.0364_to_0.0537] + 0.12809600000000002 [x21:y_0.0537_to_0.0744] + 0.17371520000000004 [x22:y_0.0744_to_0.0993] + 0.22845824 [x23:y_0.0993_to_0.1292] + 0.294149888 [x24:y_0.1292_to_0.1650] + 0.37297986559999996 [x25:y_0.1650_to_0.2080] + 0.46757583871999986 [x26:y_0.2080_to_0.2596] + 0.581091006464 [x27:y_0.2596_to_0.3215] + 0.7173092077567997 [x28:y_0.3215_to_0.3958] + 0.8807710493081599 [x29:y_0.3958_to_0.4850] + 1.0769252591697915 [x30:y_0.4850_to_0.5920] + 1.3123103110037504 [x31:y_0.5920_to_0.7204] + 1.5947723732044994 [x32:y_0.7204_to_0.8744] + 1.8744212944751821 [x33:y_0.8744_to_1.0000] + const(-5.06116518)
  subject to constraints:
x + y = 1 :    9.8930555337 ==  + [x0:x_0.0000_to_0.0100] + [x1:x_0.0100_to_0.0220] + [x2:x_0.0220_to_0.0364] + [x3:x_0.0364_to_0.0537] + [x4:x_0.0537_to_0.0744] + [x5:x_0.0744_to_0.0993] + [x6:x_0.0993_to_0.1292] + [x7:x_0.1292_to_0.1650] + [x8:x_0.1650_to_0.2080] + [x9:x_0.2080_to_0.2596] + [x10:x_0.2596_to_0.3215] + [x11:x_0.3215_to_0.3958] + [x12:x_0.3958_to_0.4850] + [x13:x_0.4850_to_0.5920] + [x14:x_0.5920_to_0.7204] + [x15:x_0.7204_to_0.8744] + [x16:x_0.8744_to_1.0000] + [x17:y_0.0000_to_0.0100] + [x18:y_0.0100_to_0.0220] + [x19:y_0.0220_to_0.0364] + [x20:y_0.0364_to_0.0537] + [x21:y_0.0537_to_0.0744] + [x22:y_0.0744_to_0.0993] + [x23:y_0.0993_to_0.1292] + [x24:y_0.1292_to_0.1650] + [x25:y_0.1650_to_0.2080] + [x26:y_0.2080_to_0.2596] + [x27:y_0.2596_to_0.3215] + [x28:y_0.3215_to_0.3958] + [x29:y_0.3958_to_0.4850] + [x30:y_0.4850_to_0.5920] + [x31:y_0.5920_to_0.7204] + [x32:y_0.7204_to_0.8744] + [x33:y_0.8744_to_1.0000]
  variable ranges:
x_0.0000_to_0.0100 [0.0‥0.01)
x_0.0100_to_0.0220 [0.01‥0.022)
x_0.0220_to_0.0364 [0.022‥0.0364)
x_0.0364_to_0.0537 [0.0364‥0.053680000000000005)
x_0.0537_to_0.0744 [0.053680000000000005‥0.07441600000000001)
x_0.0744_to_0.0993 [0.07441600000000001‥0.0992992)
x_0.0993_to_0.1292 [0.0992992‥0.12915904)
x_0.1292_to_0.1650 [0.12915904‥0.164990848)
x_0.1650_to_0.2080 [0.164990848‥0.2079890176)
x_0.2080_to_0.2596 [0.2079890176‥0.25958682112)
x_0.2596_to_0.3215 [0.25958682112‥0.32150418534399994)
x_0.3215_to_0.3958 [0.32150418534399994‥0.3958050224127999)
x_0.3958_to_0.4850 [0.3958050224127999‥0.4849660268953599)
x_0.4850_to_0.5920 [0.4849660268953599‥0.5919592322744318)
x_0.5920_to_0.7204 [0.5919592322744318‥0.7203510787293181)
x_0.7204_to_0.8744 [0.7203510787293181‥0.8744212944751818)
x_0.8744_to_1.0000 [0.8744212944751818‥1.0]
y_0.0000_to_0.0100 [0.0‥0.01)
y_0.0100_to_0.0220 [0.01‥0.022)
y_0.0220_to_0.0364 [0.022‥0.0364)
y_0.0364_to_0.0537 [0.0364‥0.053680000000000005)
y_0.0537_to_0.0744 [0.053680000000000005‥0.07441600000000001)
y_0.0744_to_0.0993 [0.07441600000000001‥0.0992992)
y_0.0993_to_0.1292 [0.0992992‥0.12915904)
y_0.1292_to_0.1650 [0.12915904‥0.164990848)
y_0.1650_to_0.2080 [0.164990848‥0.2079890176)
y_0.2080_to_0.2596 [0.2079890176‥0.25958682112)
y_0.2596_to_0.3215 [0.25958682112‥0.32150418534399994)
y_0.3215_to_0.3958 [0.32150418534399994‥0.3958050224127999)
y_0.3958_to_0.4850 [0.3958050224127999‥0.4849660268953599)
y_0.4850_to_0.5920 [0.4849660268953599‥0.5919592322744318)
y_0.5920_to_0.7204 [0.5919592322744318‥0.7203510787293181)
y_0.7204_to_0.8744 [0.7203510787293181‥0.8744212944751818)
y_0.8744_to_1.0000 [0.8744212944751818‥1.0]

Initial point:

As accuracy increases (i.e. we use more approximation variables), we will get closer and closer to (0.8, 0.2). The downside is that it will take longer to solve the problem.


Here is a visual way of looking at this:

[Chart: piecewise-linear approximation of x^2]

The chart above looks like a parabola – i.e. the graph of the function x^2 – but it’s not an exact one. If you look very closely, you will notice it is not a smooth curve; there are 17 line segments, each one mapping to an intermediate variable in the raw LP.

Segment lengths

Our infrastructure allows for any way of segmenting the interval we are trying to approximate. We could have divided the interval [0, 1] into 17 equal parts (or 13, or 10), and we’d have a different approximation. We just chose a method where the line segment lengths increase geometrically; here, they become 20% longer at each step. We did that because in our investing engine we care more about being precise the closer x is to 0 – but again, this is not a requirement3.
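As a sanity check of the arithmetic, here is a small sketch (method names hypothetical) that regenerates the geometric breakpoints shown in the raw LP printout above: starting with a segment of length 0.01 and growing each segment by 20%, we indeed get 17 segments over [0, 1]:

```java
import java.util.ArrayList;
import java.util.List;

public class GeometricBreakpoints {

  // Generates breakpoints from lo to hi; the first segment has length firstStep,
  // and each subsequent segment is 'ratio' times longer, with the last segment
  // truncated at hi.
  static double[] breakpoints(double lo, double hi, double firstStep, double ratio) {
    List<Double> points = new ArrayList<>();
    double x = lo;
    double step = firstStep;
    while (x < hi) {
      points.add(x);
      x += step;
      step *= ratio;
    }
    points.add(hi);
    return points.stream().mapToDouble(Double::doubleValue).toArray();
  }

  public static void main(String[] args) {
    double[] b = breakpoints(0.0, 1.0, 0.01, 1.2);
    System.out.println(b.length - 1); // number of segments: 17
    // The first few breakpoints match the variable ranges in the LP printout:
    System.out.printf("%.4f %.4f %.4f %.4f%n", b[1], b[2], b[3], b[4]); // 0.0100 0.0220 0.0364 0.0537
  }
}
```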


  1. This linear approximation infrastructure only works for bounded regions (e.g. from 0 to 1 in this example). This makes sense; intuitively speaking, no line segment can approximate a curve of infinite length. As x goes up, the approximating line’s slope stays the same (lines have a fixed slope), but the curve’s slope will keep increasing (in this example, at least), so at some point its slope will get huge, and the curve will end up way above the approximating line.
  2. This was not created with the purpose of approximating curves closely. In practice, there are cases where we want to gradually increase the penalty from an ideal point. Any function that grows faster than linear could work (although with a different “gradualness”).


In the Rowboat Advisors investing engine, we often need to approximate non-linear functions of a single variable, such as square, square root, e^x, etc. However, linear programming – the mathematical model we use to describe our investing optimization – does not support that.

We have created infrastructure that automates these approximations, while hiding the details from the developer.

The end-user result is that this enables us to invest better. Our engine jointly considers the tradeoffs between different improvements in a portfolio (e.g. lower taxes, lower fees, matching a target portfolio, etc.). This feature allows us to assign a more accurate penalty (or preference) to each such improvement.


  1. For the special case of x^2: Quadratic programming is an extension of linear programming, but there are fewer good solvers for such problems, and they take longer than LP. At any rate, our infrastructure is even more general, as it allows for e.g. square roots. We just picked x^2 here because it makes for a better example. However, even though our infrastructure works for the scenarios we care about, not every single function will work in general; there are certain properties a function must satisfy. This is beyond the scope of this post.
  2. Proof:
    • If you remember calculus: rewrite x^2 + 4y^2 as (1 - y)^2 + 4y^2 = 5y^2 - 2y + 1. Set its derivative to 0, so 10y – 2 = 0 ⇔ y = 0.2. Therefore x = 1 – y = 0.8.
    • If not, here’s an informal proof: using (x, y) = (0.8, 0.2) in the objective function gives 0.8^2 + 4 * 0.2^2 = 0.80. Let’s make x slightly bigger, to 0.81; y then has to get slightly smaller since x + y = 1, so 0.19. However, 0.81^2 + 4 * 0.19^2 = 0.8005 > 0.8, so this is a worse solution. Similarly, for x = 0.79: 0.79^2 + 4 * 0.21^2 = 0.8005 > 0.8.
  3. That decision should also be affected by the curvature of the function. Intuitively speaking, the curvier a function is in a region, the more variables we should use for the approximation to get the same precision as in a region where the function looks linear. At the limit, a straight line (i.e. no curvature) can be approximated by a single variable.

Rowboat Advisors optimization infrastructure, part 2: absolute values

Rowboat Advisors has built infrastructure to make it tremendously easier to invest a portfolio well. We describe how to use absolute values.

Audience requirements: high school math; previous blog post. Programming knowledge not needed, but helps appreciate the code examples. If it gets too much, skip to the summary section at the end.


In the Rowboat Advisors investing engine, we often use absolute values. For example, we are indifferent between holding 1% more vs. 1% less of an asset (vs. its target); both are equally undesirable. However, expressing that in an LP requires a modeling trick (see below), and it would be too complicated for every piece of code to be aware of that trick.

Absolute values: the standard trick

General linear programs (LP) only let you specify linear expressions (such as 2*x + 3*y); the absolute value is not linear. However, there is a well-known trick to minimize1 absolute values in a linear program.

Say you want to minimize |x| subject to -5 <= x <= -3.

Instead, rewrite as follows:


Minimize:

  • xpos + xneg

subject to:

  1. -5 <= x <= -3
  2. x = xpos – xneg
  3. xpos >= 0
  4. xneg >= 0

xpos represents the positive part (or 0) of x; xneg the negative. At most one of them can be non-0.

  • If x = 3, then xpos = 3, xneg = 0 (though this violates constraint #1 so can’t happen)
  • If x = 0, then xpos = 0, xneg = 0
  • If x = -4, then xpos = 0, xneg = 4.

This trick works because the solver will push both xpos and xneg to be as small as possible. In this example, minimizing |x| subject to -5 <= x <= -3 should intuitively give us x=-3. There are infinite combinations for xpos – xneg that give -3 (e.g. 7 – 10, 97 – 100, etc.), but the combination with the smallest objective function value (xpos + xneg) is xpos = 0, xneg = 3.
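If you want to experiment with this trick outside the Rowboat engine, here is a minimal sketch using scipy's `linprog` (an off-the-shelf LP solver; the variable names mirror the ones above):

```python
from scipy.optimize import linprog

# Variables, in this order: x, xpos, xneg.
# Objective: minimize xpos + xneg (which equals |x| at the optimum).
c = [0.0, 1.0, 1.0]

# Constraint #2: x = xpos - xneg, rewritten as x - xpos + xneg = 0.
A_eq = [[1.0, -1.0, 1.0]]
b_eq = [0.0]

# Constraint #1: -5 <= x <= -3; constraints #3, #4: xpos, xneg >= 0.
bounds = [(-5.0, -3.0), (0.0, None), (0.0, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, xpos, xneg = res.x
print(x, xpos, xneg)  # x = -3, xpos = 0, xneg = 3, so |x| = 3
```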

Solution #1: uses 3 variables

This is the Java code that solves2 “minimize |x| subject to -5 <= x <= -3”; solution is x = -3, implying |x| = 3.

  public void blogPostExample1() {
    HighLevelLPBuilder builder = makeBuilder();
    Variable x = builder.addConstrainedRawVariable("x", Range.closed(-5.0, -3.0));
    AbsoluteValueSuperVars absX = absoluteValueSuperVarsGenerator.generateDumbAbsoluteValueSuperVars(
        "abs(x)", singleVarExpression(x), builder);
    LinearOptimizationProgram lp = builder
        .withHumanReadableLabel(label("minimize |x| subject to -5 <= x <= -3"))
        .build();
    AllVariablesAndDoubles solution = optimizer.minimize(lp).get().getVariablesAndOptimalValues();
    BiConsumer asserter = (expected, var) ->
        assertEquals(expected, evaluator.evaluateHighLevelVar(var, solution), 1e-8);
    asserter.accept(-3.0, x);
    asserter.accept(3.0, absX.getAbsoluteValue());
  }

This is the raw LP that ends up getting passed to the solver after the translation step. It uses 3 variables: x, xpos, xneg.

[LOPI minimize |x| subject to -5 <= x <= -3 supervars: [[x0:x], [x1:abs(x)+], [x2:abs(x)-]]
[x1:abs(x)+] + [x2:abs(x)-]
 subject to constraints:
abs(x) = abs(x)+ - abs(x)- : -0.0000000000 == + [x0:x] - [x1:abs(x)+] + [x2:abs(x)-]
 variable ranges:
x [-5.0‥-3.0]
abs(x)+ [0.0‥+∞)
abs(x)- [0.0‥+∞)

Initial point:

This is great – the code can use absolute values without needing to know about this positive / negative trick. However, we can do better.

Solution #2: uses 2 variables

If we know the range of the expression inside the absolute value, we can give a hint. For example, -5 <= x <= -3 means we’ll never need xpos, as x is always negative.

  public void blogPostExample2() {
    HighLevelLPBuilder builder = makeBuilder();
    Variable x = builder.addRawVariable("x");
    AbsoluteValueSuperVars absX = absoluteValueSuperVarsGenerator.generateSmartAbsoluteValueSuperVars(
        "abs(x)", singleVarExpression(x), builder, Range.closed(-5.0, -3.0));
    LinearOptimizationProgram lp = builder
        .withHumanReadableLabel(label("minimize |x| subject to -5 <= x <= -3"))
        .build();
    AllVariablesAndDoubles solution = optimizer.minimize(lp).get().getVariablesAndOptimalValues();
    BiConsumer asserter = (expected, var) ->
        assertEquals(expected, evaluator.evaluateHighLevelVar(var, solution), 1e-8);
    asserter.accept(-3.0, x);
    asserter.accept(3.0, absX.getAbsoluteValue());
  }

The raw LP now uses 2 variables instead of 3: x and xneg. xpos is not needed.

[LOPI minimize |x| subject to -5 <= x <= -3 supervars: [[x0:x], [x1:abs(x)-]]
 subject to constraints:
abs(x) = - abs(x)- (no pos) : -0.0000000000 == + [x0:x] + [x1:abs(x)-]
 variable ranges:
x (-∞‥+∞)
abs(x)- [3.0‥5.0]

Initial point:

That's better; fewer variables generally make things run faster. However, we can do even better.
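To see why the hint helps, note that once x is known to be negative, |x| is simply -x, and xneg can stand in for it directly. A standalone sketch of the reduced problem, using scipy's `linprog` (not the Rowboat engine):

```python
from scipy.optimize import linprog

# Since -5 <= x <= -3 guarantees x < 0, we have |x| = -x. Substituting
# xneg = -x turns "minimize |x| subject to -5 <= x <= -3" into
# "minimize xneg subject to 3 <= xneg <= 5" -- one variable, no constraints,
# matching the translated LP shown above (abs(x)- ranges over [3, 5]).
res = linprog([1.0], bounds=[(3.0, 5.0)])
xneg = res.x[0]
print(xneg)  # 3.0, i.e. |x| = 3 at x = -3
```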

Solution #3: uses only 1 variable

Usually, we are taking the absolute value of an existing expression (e.g. 2*y + 3*z + 4*w), with y, z and w appearing elsewhere in the LP. However, in this case, we don’t even need to create a separate variable for x itself. When we create an absolute value using our infrastructure, we also get the positive part, negative part, and signed value (positive – negative) for free. So we can just refer to the signed value of the abs(x) variable, i.e. xpos – xneg, instead of x itself. Here is an example.

  public void blogPostExample3() {
    HighLevelLPBuilder builder = makeBuilder();
    AbsoluteValueSuperVars absX = absoluteValueSuperVarsGenerator.generateAbsoluteValueSuperVarsOfSingleVariable(
        "abs(x)", builder, Range.closed(-5.0, -3.0));
    LinearOptimizationProgram lp = builder
        .withHumanReadableLabel(label("minimize |x| subject to -5 <= x <= -3"))
        .build();
    AllVariablesAndDoubles solution = optimizer.minimize(lp).get().getVariablesAndOptimalValues();
    BiConsumer asserter = (expected, var) ->
        assertEquals(expected, evaluator.evaluateHighLevelVar(var, solution), 1e-8);
    asserter.accept(-3.0, absX.getSigned());
    asserter.accept(3.0, absX.getAbsoluteValue());
  }

The raw LP now uses only xneg. x and xpos are not needed. That’s even more efficient.

[LOPI minimize |x| subject to -5 <= x <= -3 supervars: [[x0:abs(x)-]]
 subject to constraints:
 variable ranges:
abs(x)- [3.0‥5.0]

Initial point:

Minor bonus: absolute value can appear in constraint

When we minimize an absolute value, we are also able to use it in a constraint.

Here is a variation where we add the additional constraint that |x| > 3.5. The solution is now x = -3.5, which gives |x| = 3.5.

  public void blogPostExample4() {
    HighLevelLPBuilder builder = makeBuilder();
    AbsoluteValueSuperVars absX = absoluteValueSuperVarsGenerator.generateAbsoluteValueSuperVarsOfSingleVariable(
        "abs(x)", builder, Range.closed(-5.0, -3.0));
    LinearOptimizationProgram lp = builder
        .withHumanReadableLabel(label("minimize |x| subject to -5 <= x <= -3 and |x| > 3.5"))
        .addConstraint("|x| > 3.5", absX.getAbsoluteValue().getHighLevelVarExpression(), GREATER_THAN_SCALAR, 3.5)
        .build();
    AllVariablesAndDoubles solution = optimizer.minimize(lp).get().getVariablesAndOptimalValues();
    BiConsumer asserter = (expected, var) ->
        assertEquals(expected, evaluator.evaluateHighLevelVar(var, solution), 1e-8);
    asserter.accept(-3.5, absX.getSigned());
    asserter.accept(3.5, absX.getAbsoluteValue());
  }

In the raw LP, note that the |x| > 3.5 constraint gets automatically translated to be in terms of xneg (which shows up below as [x0:abs(x)-]).

[LOPI minimize |x| subject to -5 <= x <= -3 and |x| > 3.5 supervars: [[x0:abs(x)-]]
 subject to constraints:
|x| > 3.5 : 3.5000000000 < + [x0:abs(x)-]
 variable ranges:
abs(x)- [3.0‥5.0]

Initial point:
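Outside our engine, the same constrained variant can be sketched with scipy's `linprog`, using the same one-variable encoding as solution #3 (since x is known negative, |x| = xneg):

```python
from scipy.optimize import linprog

# As in solution #3, x is known to lie in [-5, -3], so |x| = -x = xneg,
# with xneg ranging over [3, 5]. The extra constraint |x| > 3.5 becomes a
# lower bound on xneg, expressed as -xneg <= -3.5.
res = linprog([1.0], A_ub=[[-1.0]], b_ub=[-3.5], bounds=[(3.0, 5.0)])
xneg = res.x[0]
print(xneg, -xneg)  # |x| = 3.5, so x = -3.5
```

One reason the range hint matters here: with the general 3-variable encoding, a lower bound on xpos + xneg would no longer force one of the two parts to be zero, so xpos + xneg could overstate |x|; pinning down the sign avoids that.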

Major bonus: simplicity

The main benefit of all examples above (even the less efficient ones) is that the code does not need to be aware of all this trickery, so it can be simpler. It doesn’t need to know that the underlying infrastructure has created xpos and xneg. It also does not need to make a distinction between the efficiency improvements described in steps 2 and 3; that would require the code to know whether 0, 1, or both of xpos and xneg have been created.

This was an actual problem for us in practice; we built this infrastructure to address it.


Minimizing absolute values is not directly supported by the conceptual model of LP. The Rowboat Advisors infrastructure enables this, while hiding from the developer how it actually happens. Given light guidance in certain special cases, it is also smart enough to make the resulting program smaller and faster.

The end-user result is that, by reducing complexity, this enables us to invest better. Without it, the complexity of jointly considering the tradeoffs between different improvements in a portfolio (e.g. lower taxes, lower fees, matching a target portfolio, etc.) ends up becoming unmanageable in practice.


  1. The trick actually does not work when maximizing an absolute value, or (equivalently) when an absolute value appears with a negative coefficient in front of it in the objective function.
  2. The examples in this post are all simple (|x|) but our infrastructure lets us use the absolute value of any expression, not just a single variable.

Rowboat Advisors optimization infrastructure, part 1: high level variables

Rowboat Advisors has built infrastructure to make it tremendously easier to invest a portfolio well. We describe the concept of high-level variables, and show how they can be useful.

Audience requirements: high school math; previous blog post. Programming knowledge is not needed; it will just help you appreciate the code example.

Review of previous example

This is the simplistic example from the previous post1.

To get as close as possible to a target (a.k.a. ideal) portfolio of 10% x, 50% y, 40% z,


minimize:

  • |10 – x| + |50 – y| + |40 – z| (distance from ideal portfolio)

subject to:

  1. 0 <= x <= 100 (an asset class can be between 0 and 100% of the portfolio)
  2. 0 <= y <= 100
  3. 0 <= z <= 100
  4. x + y + z = 100 (all holdings sum up to the entire portfolio)

Similar constraints are grouped, for clarity. If the mathy notation puts you off, you can just read the text descriptions in parentheses, which are only mentioned once per group.

New requirements

We will now augment the original example to illustrate a new point.

Assume that we also want to prevent too much trading (buying or selling)3 of y or z (but not x)  – e.g. no more than 30% of the total portfolio value.

This requires rewriting the linear program, because the variables are currently too coarse; there’s nothing that represents buying or selling yet.

Case 1: without using our infrastructure

Rewrite the final amount for x (and similarly for y and z) as

xfinal = xinitial + xbuy_amount – xsell_amount

At most one of the buy and sell amounts will be positive, but not both; we can’t both buy and sell the same thing. They can both be 0, however, if we don’t trade.

So we’ll now have 9 variables instead of 3.

  • xf = xi + xb – xs
  • yf = yi + yb – ys
  • zf = zi + zb – zs

In practice, the initial amounts are known ahead of time, so they can be constant numbers. Using these 6 variables (xb, xs, yb, ys, zb, zs) instead of the original 3 (i.e. without using xf, yf, or zf), we can rewrite our LP as


minimize:

|10 – (7 + xb – xs)| + |50 – (52 + yb – ys)| + |40 – (41 + zb – zs)| (distance from ideal portfolio)

subject to:

  1. 0 <= 7 + xb – xs <= 100 (an asset class can be between 0 and 100% of the portfolio)
  2. 0 <= 52 + yb – ys <= 100
  3. 0 <= 41 + zb – zs <= 100
  4. (7 + xb – xs) + (52 + yb – ys) + (41 + zb – zs) = 100 (all holdings sum up to 100%)
  5. yb + ys + zb + zs <= 30 (prevent too much trading, i.e. more than 30% of portfolio)
  6. xb >= 0 (buy and sell amounts are non-negative)
  7. xs >= 0
  8. yb >= 0
  9. ys >= 0
  10. zb >= 0
  11. zs >= 0

This looks quite complicated.

It is possible to simplify these a bit. E.g.

0 <= 7 + xb – xs <= 100

can be rewritten as

-7 <= xb – xs <= 93

However, the point here isn’t to make this simpler for the external solver software that will solve it. It doesn’t care – computers are good at this stuff. It will still come up with the same solution.

The real concern is that the system is now more complicated, so such simplifications won’t help. Any code that refers to the final amount – such as the code that builds the objective function – will now need to be aware of (and updated to use) the new “initial + buy – sell” way of representing the final amount. Since code is written by humans, the extra complication makes it harder to understand, fix, and improve on that code.

Case 2: without our infrastructure, but slightly better

We could make things simpler by including and actually using xf, yf, and zf (instead of their expanded forms), but we need additional constraints. 


minimize:

|10 – xf| + |50 – yf| + |40 – zf| (distance from ideal portfolio)

subject to:

  1. 0 <= xf <= 100 (an asset class can be between 0 and 100% of the portfolio)
  2. 0 <= yf <= 100
  3. 0 <= zf <= 100
  4. xf + yf + zf = 100 (all holdings sum up to 100%)
  5. yb + ys + zb + zs <= 30 (prevent too much trading, i.e. more than 30% of portfolio)
  6. xb >= 0 (buy and sell amounts are non-negative)
  7. xs >= 0
  8. yb >= 0
  9. ys >= 0
  10. zb >= 0
  11. zs >= 0
  12. xf = xi + xb – xs (final amount = initial + buy – sell)
  13. yf = yi + yb – ys
  14. zf = zi + zb – zs

Constraints #12 – #14 are new, so we now have 14 constraints instead of 11. This introduces 2 new problems, which are immaterial in this small example, but matter once the problem gets bigger.

Problem 1: performance / speed

The underlying software we use to find the best solution may be smart enough to understand that the problem is no more complicated now than before, but we can’t count on it. In most cases, those extra variables will slow things down.

Problem 2: simplicity

The programmer needs to be aware of all these variables; nothing is hidden from her. For example, she does not care in this example about the buy or sell amounts per se; she only cares about gross trade amounts (buy + sell). But she can’t refer to the trade amount without knowing that it consists of the sum of 2 variables, buy + sell.

A fundamental engineering principle is for components to expose only the necessary details. For instance, an HVAC contractor can install an a/c system and the right ducts without knowing the details of how the a/c system works internally. Similarly, hiding details from individual software components makes building the whole system easier.

Case 3: Using our infrastructure

A simpler way is to use xf, yf, and zf when describing the problem, but not make the underlying solver aware of this4.


minimize:

|10 – xf| + |50 – yf| + |40 – zf| (distance from ideal portfolio)

subject to:

  1. 0 <= xf <= 100 (an asset class can be between 0 and 100% of the portfolio)
  2. 0 <= yf <= 100
  3. 0 <= zf <= 100
  4. xf + yf + zf = 100 (all holdings sum up to 100%)
  5. yb + ys + zb + zs <= 30 (prevent too much trading, i.e. more than 30% of portfolio)
  6. xb >= 0 (buy and sell amounts are non-negative)
  7. xs >= 0
  8. yb >= 0
  9. ys >= 0
  10. zb >= 0
  11. zs >= 0

The improvement is very subtle, but important. Case #3 combines the benefits of case #1 (only 11 constraints, vs 14) and case #2 (easier to understand). The point is that the necessary translations of “high-level variables” (e.g. xf) are done automatically and hidden from the programmer.

For non-programmers: the least bad analogy we can think of is asking someone to cook the best meal possible, subject to what’s in the fridge and pantry. This is a high-level optimization; the cook has to solve the low-level optimization based on using a maximum of 1 lb of chicken, 2 lbs of potatoes, etc.

For programmers: think of it as a compiler; the underlying 3rd party optimization solver sees something closer to case #1 – but with 11 constraints instead of 14, which is more efficient. If you read this far and are wondering how, you may be a great fit for our company, so shoot us an email!

Why this all matters

A central tenet of the Rowboat Advisors approach is to evaluate intelligently the tradeoffs between the different improvements that can be made to a portfolio. As more and more of these “sub-objectives” come into play (reduced taxes, reduced fees, tracking a target portfolio, etc.), the extra complexity becomes a real problem – and, in practice, far more so than in this simple example.

For programmers: think of writing a parser in Java vs. bash. For a simple parser (e.g. for CSV), bash will be worse than Java, but will at least suffice. However, for a more complicated parser (e.g. for XML), bash is not just worse, but effectively unusable.

Actual code example

This is the Java code that will tell us the cheapest way to make an acceptable 200 ml rum-and-coke drink.

  public void rumAndCoke() {
    HighLevelLPBuilder builder = makeBuilder().withHumanReadableLabel(label("minimize cost of drink"));
    HighLevelVariablesBuilder varBuilder = builder.getHighLevelVariablesBuilder();

    // Amounts of rum and coke that we will use in the cocktail (in liters).
    // Customers will complain if there's not enough rum in the cocktail; use at least 50 ml
    // We can use any amount of coke, but of course it has to be positive.
    Variable rum  = varBuilder.addConstrainedRawVariable("rum",  Range.atLeast(0.05));
    Variable coke = varBuilder.addConstrainedRawVariable("coke", Range.atLeast(0.0));
    SuperVar cost = varBuilder.addSuperVar(generalSuperVarWithoutAddedConstraints(
        label("cost of drink (rum costs $4.5 / liter, and coke $1.5)"),
        disjointHighLevelVarExpression(4.5, rum, 1.5, coke)));
    SuperVar sugar = varBuilder.addSuperVar(generalSuperVarWithoutAddedConstraints(
        label("total sugar in grams (rum is 25 / liter; coke about 40)"),
        disjointHighLevelVarExpression(25, rum, 40, coke)));
    LinearOptimizationProgram lp = builder
        .addConstraint(highLevelVarConstraint(label("drink must be exactly 200 ml"),
            sumOfDisjointHighLevelVars(rum, coke), EQUAL_TO_SCALAR, constantTerm(0.2)))
        .addConstraint(highLevelVarConstraint(label("drink can't be too sweet, so no more than 7 grams of sugar"),
            sugar.getHighLevelVarExpression(), LESS_THAN_SCALAR, constantTerm(7)))
        .addConstraint(highLevelVarConstraint(label("drink can't be too cheap, so no less than $0.60"),
            cost.getHighLevelVarExpression(), GREATER_THAN_SCALAR, constantTerm(0.60)))
        .build();
    AllVariablesAndDoubles solution = optimizer.minimize(lp).get().getVariablesAndOptimalValues();
    BiConsumer asserter = (expected, var) ->
        assertEquals(expected, evaluator.evaluateHighLevelVar(var, solution), 1e-8);
    asserter.accept(0.1, coke); // 0.1 = 100 ml
    asserter.accept(0.1, rum);
    asserter.accept(doubleExplained(6.5, 25 * 0.1 + 40 * 0.1), sugar);

    // This solution makes sense. The % of coke
    // * can't go above 50%: a 50-50 mix already gives the minimum allowed cost of $0.60, so more coke (=> less rum) would make the drink too cheap
    // * shouldn't go below 50%: we want to minimize cost, and less coke (=> more rum) increases cost
    asserter.accept(doubleExplained(0.60, 4.5 * 0.1 + 1.5 * 0.1), cost);
  }

The code also prints out the LP after the infrastructure does the translation. Notice how there are only 2 variables (x0 for rum, x1 for coke), even though the source code mentions cost and sugar in the objective and constraints.

[LOPI minimize cost of drink supervars: [[x0:rum], [x1:coke]]
 4.5 [x0:rum] + 1.5 [x1:coke]
 subject to constraints:
 drink must be exactly 200 ml : 0.2000000000 == + [x0:rum] + [x1:coke]
 drink can't be too sweet, so no more than 7 grams of sugar : 7.0000000000 > + 25.0000000000*[x0:rum] + 40.0000000000*[x1:coke]
 drink can't be too cheap, so no less than $0.60 : 0.6000000000 < + 4.5000000000*[x0:rum] + 1.5000000000*[x1:coke]
 variable ranges:
 rum [0.05‥+∞)
 coke [0.0‥+∞)

Initial point:

This is great! Our description of the problem used cost and sugar, which are easy to understand, while the actual LP that will get solved only mentions 2 variables (rum, coke) instead of 4 (rum, coke, cost, sugar), which will be faster to solve5.
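As a cross-check outside our infrastructure, the translated 2-variable LP above (rum and coke only) can be written directly for scipy's `linprog`:

```python
from scipy.optimize import linprog

# Variables: liters of rum, liters of coke. Minimize cost = 4.5*rum + 1.5*coke.
c = [4.5, 1.5]

# Inequalities (A_ub @ vars <= b_ub):
#   sugar: 25*rum + 40*coke <= 7 grams
#   cost:  4.5*rum + 1.5*coke >= 0.60, i.e. -4.5*rum - 1.5*coke <= -0.60
A_ub = [[25.0, 40.0], [-4.5, -1.5]]
b_ub = [7.0, -0.60]

# Drink is exactly 200 ml: rum + coke = 0.2 liters.
A_eq = [[1.0, 1.0]]
b_eq = [0.2]

# At least 50 ml of rum; any non-negative amount of coke.
bounds = [(0.05, None), (0.0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
rum, coke = res.x
print(rum, coke, res.fun)  # 0.1, 0.1, 0.60 -- the 50-50, $0.60 drink from above
```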


Our infrastructure allows us to build sophisticated investment logic in a simple fashion. We described one of the many features: high-level variables. This allows for investing logic that’s easier to understand and faster to execute.


  1. We will not use c, s, b for cash / stocks / bonds, because b and s may be confused with ‘buy’ and ‘sell’.
  2. Absolute values are not directly supported by linear optimization. A future post will explain how we do this.
  3. In practice, we can’t always do that; we need to allow full investment from an all-cash account (a lot of buying), and withdrawals (a lot of selling).
  4. AMPL is a very well established modeling language, designed at AT&T Bell Labs in the 1980s by (among others) the famous Brian Kernighan. It supports the concept of a high-level variable as described in this post – plus a ton more, though not relevant to us. However, it does not have some features we need (described in future posts). There are also several other reasons why it’s not a good fit for us, which are beyond the scope of this post. Our infrastructure was actually built before finding out that AMPL exists. The downside is that we missed out on a potential opportunity to use something existing (although, in the end, we can’t); the upside is the validation that the infrastructure we built was in the right direction.
  5. The difference in this example will be insignificant, but with hundreds or thousands of variables, it matters.

Investing well through math

We use linear programming, a well-known, math-oriented methodology, to choose the best possible portfolio. A simple example illustrates the basic principles.

Audience requirements: high school math. No need to be a programmer.

What is Linear Programming?

Mathematical optimization (of which LP is a special case1) is a class of mathematical methods for choosing the best option from a set of alternatives. It has many uses in real-life problems. It requires converting a problem into a mathematical representation: we seek the best value of an objective function (i.e. what you want to achieve, such as maximizing profit in a factory) subject to some constraints (e.g. availability of raw materials). LP is a very well understood field, and many good solvers exist.

Simple LP-based portfolio optimization example

Assume an ideal portfolio2 of 3 asset classes:

  • 10% cash
  • 50% stocks
  • 40% bonds

Define c, s, b to be the respective percentage of each asset class in the final portfolio. In order to get closest to the ideal (a.k.a. target) portfolio, we must minimize:

  • |10 – c| + |50 – s| + |40 – b| (“distance from ideal portfolio”)

… subject to constraints3, 4:

  1. 0 <= c <= 100
  2. 0 <= s <= 100
  3. 0 <= b <= 100
  4. c + s + b = 100

For brevity, [7 52 41] will mean “a portfolio that’s 7% cash, 52% stocks, 41% bonds”.


  • The vertical bar in the objective function means “absolute value”. Linear programs do not support this concept, but we’ll get to that in a future post.
  • 1 through 3 mean “any single asset must be between 0% and 100% of the portfolio”. Makes sense. E.g. if I have $100, I can’t ever hold $110 in stocks, or minus $10.
  • 4 means “if you add up all the holdings, they should add up to the value of the entire portfolio”.


  • The direction in which a single asset class is off target (above or below) does not matter. Take [10 49 41] and [10 51 39]. Stocks are considered equally off target at 49% vs 51%.
  • Being off target by e.g. 1% has the same penalty regardless of asset class5. E.g. [9 51 40] is just as good as [10 51 39].

Comparing solutions

The best scenario is when we hold a portfolio that matches the target exactly: [10 50 40] gives us |10 – 10| + |50 – 50| + |40 – 40| = 0.

Being slightly off, which is intuitively worse, also works out to be worse: e.g. [7 52 41] gives us |10 – 7| + |50 – 52| + |40 – 41| = 6. We are trying to minimize this value, so 6 > 0 means this portfolio is worse, as expected.
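These hand-computed comparisons can be reproduced with any LP solver. Here is a sketch using scipy's `linprog` (not our engine), with the absolute values expanded into positive/negative parts – the standard trick, which a future post covers:

```python
from scipy.optimize import linprog

# Variables: c, s, b (cash, stocks, bonds), then the positive/negative parts
# of each deviation from the target: dcp, dcn, dsp, dsn, dbp, dbn.
obj = [0, 0, 0, 1, 1, 1, 1, 1, 1]  # minimize total deviation

A_eq = [
    [1, 0, 0, 1, -1, 0, 0, 0, 0],  # 10 - c = dcp - dcn  =>  c + dcp - dcn = 10
    [0, 1, 0, 0, 0, 1, -1, 0, 0],  # 50 - s = dsp - dsn
    [0, 0, 1, 0, 0, 0, 0, 1, -1],  # 40 - b = dbp - dbn
    [1, 1, 1, 0, 0, 0, 0, 0, 0],   # c + s + b = 100
]
b_eq = [10, 50, 40, 100]

bounds = [(0, 100)] * 3 + [(0, None)] * 6

res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
cash, stocks, bonds = res.x[:3]
print(cash, stocks, bonds, res.fun)  # [10 50 40] with distance 0
```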

Why do all this?

This all sounds complicated. Why not just buy 50% stocks, 40% bonds, and leave 10% cash aside?

Say that we also hold some stock6 that cannot be sold (e.g. an unvested executive stock grant). If total stock (as a % of all assets) is already e.g. 80%, then what do we do? We likely don’t want to buy even more stocks, because we are already overweight at 80% (vs the target of 50%). However, how should the remaining 20% be invested? Clearly, bonds can’t be 40% of the total (80% + 40% > 100%). In practice, this is way more complicated, because there are multiple asset classes, and also multiple objectives.

The real value shows when we cannot simply hold the ideal portfolio, and must instead pick the best from a set of feasible portfolios. LP is good at that.
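To make that concrete with the unsellable-stock idea: pin stocks at 80% of the portfolio and re-solve. The ideal [10 50 40] is now infeasible, so the solver picks the best feasible portfolio. This is a sketch with scipy's `linprog`, using the same positive/negative-part encoding of the absolute values; note that the optimum here is not unique (any cash between 0% and 10% works), so we only check the distance:

```python
from scipy.optimize import linprog

# Same LP as the basic example, but stocks are locked at 80% of the portfolio.
obj = [0, 0, 0, 1, 1, 1, 1, 1, 1]  # c, s, b, then deviation parts
A_eq = [
    [1, 0, 0, 1, -1, 0, 0, 0, 0],  # 10 - c = dcp - dcn
    [0, 1, 0, 0, 0, 1, -1, 0, 0],  # 50 - s = dsp - dsn
    [0, 0, 1, 0, 0, 0, 0, 1, -1],  # 40 - b = dbp - dbn
    [1, 1, 1, 0, 0, 0, 0, 0, 0],   # c + s + b = 100
]
b_eq = [10, 50, 40, 100]
bounds = [(0, 100), (80, 80), (0, 100)] + [(0, None)] * 6  # stocks fixed at 80

res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
cash, stocks, bonds = res.x[:3]
# Stocks alone contribute |50 - 80| = 30; splitting the remaining 20% between
# cash and bonds contributes at least 30 more, so the best possible distance is 60.
print(stocks, res.fun)  # 80.0, 60.0
```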


Investing well can be done with well-established mathematical optimization techniques. It is possible to define rigorously what makes a good portfolio, which lets us

  • exclude invalid portfolios
  • pick the best valid portfolio

This technique is even more important in the presence of multiple – and many times conflicting – goals. This is not illustrated here, and will be addressed in a future blog post.


  1. Linear programming is not related to computer programming, although one can write software to solve linear programs.
  2. Typically, a financial advisor (automated or not) ascertains a client’s needs (how risk-averse they are, their near-term needs for cash, etc.) and suggests an ideal portfolio. Doing this in terms of asset classes is not the only (or even best) way to do this, but it’s the industry standard, and will suffice for this example.
  3. 0 <= c <= 100 is actually two constraints, but we show it as one for brevity.
  4. If you are a finance person: This example is intentionally simple. We are
    • disallowing leverage
    • disallowing short sales
    • ignoring any (minuscule, in practice) transaction costs that will reduce the value of the portfolio if we trade
    • assuming our only objective is to match the target portfolio
  5. In practice, a proportional penalty is better. 8% less in stocks isn’t horrible – 42% is close enough to 50%, proportionally. However, 2% in cash (vs a target of 10%) is very far off; it is 1/5th of the desired 10%.
  6. For simplicity in this example, we assume that your external stock behaves the same as whatever stock fund you’d normally buy. We do not make this assumption in our actual system.