Operations Research Forum
Ciamac C. Moallemi and Mehmet Saglam’s paper “The Cost of Latency in High-Frequency Trading” appears in the September-October 2013 issue of Operations Research. In this paper, Moallemi and Saglam quantify the cost of delays in processing a sell order for a stock. Technological advances in data networks and computing power have transformed the way securities are traded. These advances have created the opportunity to process market information and to profit from momentary informational advantages, and they have led to the rise of electronic trading platforms. The widespread use of computerized trading algorithms in the financial markets and the importance of speedy decision making and trade execution make this a fertile area for Operations Research methods. While it may be self-evident that reacting quickly to market information is better than reacting slowly, reducing reaction time requires significant investment. As the authors point out, high-frequency traders must invest in algorithm development, computing and communications hardware, and even facilities co-located with the exchanges, all to reduce trade latency. This paper helps give a theoretical foundation to these investments.
We solicited comments on this work from experts on financial market microstructure, high-frequency trading, and trading algorithms.
Terrence Hendershott is the Cheryl and Christian Valentine Chair and Associate Professor at the Haas School of Business at the University of California, Berkeley. He is a member of both the Finance group and the Operations and Information Technology group. He has also been a visiting economist at the New York Stock Exchange. His areas of expertise and interest revolve around the role of information technology in financial markets. He has a PhD in Operations and Information Technology from Stanford.
Robert Almgren is the Co-founder and Head of Research at Quantitative Brokers, a venture-funded algorithmic agency brokerage concentrating on fixed-income products and futures. He is also a Visiting Scholar and Adjunct Professor in Financial Mathematics at the Courant Institute of Mathematical Sciences, New York University. He has written on transaction cost measurement, high-frequency trading, and trading strategies. He has a Ph.D. in Applied and Computational Mathematics from Princeton University.
The authors take the perspective of a single seller over a short time horizon who is trying to use a limit order to sell a single unit of stock, using information on the current bids, the stochastic process governing the bids, and the frequency of arrival of impatient buyers. The authors develop a benchmark model without latency in which the seller knows the current bid at the time they set their limit order price. They then develop a discrete-time model to capture the effect of latency by introducing a fixed time lag when setting the limit order price: a price level is decided at time t_i but only takes effect at time t_{i+1} = t_i + Δt, when the bid process will be at a different level. The Δt represents the trading latency. They then derive a closed-form asymptotic approximation of the cost increase over the perfect-information benchmark created by low latencies. Using empirical high-frequency data to estimate the parameters of their dynamic programming model, they then generate estimates for the cost of latency at different points in time over a ten-year period. They find that latency costs, defined relative to the no-latency benchmark, have been increasing and that the absolute latency cost is of comparable scale to other trading costs.
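The mechanism by which a repricing lag erodes the value of a limit order can be illustrated with a toy simulation. This is a deliberately simplified sketch, not the paper's model: the bid here is a binomial random walk, impatient buyers arrive with a fixed per-step probability, and the spread, volatility, and arrival-rate values are made up for illustration.

```python
import random

def simulate(latency_steps, n_paths=20000, horizon=50, sigma=0.005,
             spread=0.02, arrival_prob=0.3, seed=0):
    """Average premium over the prevailing bid captured at fill, for a
    sell limit order pegged to a bid observed `latency_steps` ago
    (toy model, not the paper's)."""
    rng = random.Random(seed)
    total, fills = 0.0, 0
    for _ in range(n_paths):
        # Bid follows a simple +/- sigma random walk.
        bids = [0.0]
        for _ in range(horizon):
            bids.append(bids[-1] + rng.choice((-sigma, sigma)))
        for t in range(latency_steps, horizon):
            # Limit price is pegged to a stale bid seen `latency_steps` ago.
            limit = bids[t - latency_steps] + spread
            # An impatient buyer lifts the order only if it is at or below
            # their reservation price (current bid + spread).
            if rng.random() < arrival_prob and limit <= bids[t] + spread:
                total += limit - bids[t]  # premium over the current bid
                fills += 1
                break
    return total / fills if fills else float("nan")

no_lag = simulate(0)
lagged = simulate(3)
print(f"avg premium captured, no lag: {no_lag:.4f}")
print(f"avg premium captured, lag=3:  {lagged:.4f}")
```

With no lag the seller always captures the full (toy) spread; with a lag the order is stale when it fills, so the average captured premium drops, which is the qualitative effect the paper's model makes precise.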
Moallemi and Saglam examine a significant aspect of optimally implementing an investor’s trading decision: the order type choice problem for how each individual piece of a larger order should be executed. Prior work typically assumes an investor must pay a transitory price impact generated by their trading, which includes the bid-ask spread. An ad hoc transitory price impact function is assumed and optimization proceeds. One way to think of Moallemi and Saglam’s contribution is as studying the details of how to minimize that transitory impact by using limit orders. The paper examines the optimal control problem for an investor wanting to capture the bid-ask spread by placing a limit order rather than paying the spread by placing a market order. The main challenge in using limit orders is that as the underlying stock price varies, the original limit order price becomes stale (no longer optimal). The selling investor would like to keep the order at the best ask price. As the stock price moves up, the limit order should be revised upwards to capture as much of the spread as possible. As the stock price moves down, the limit order should be revised downwards to remain at the best price and allow execution. Using their model, the authors quantify the benefits of being able to revise/reprice limit orders more quickly.
Hendershott questions whether the results indicate an increase in absolute latency costs or only in relative latency costs.
The paper’s relative definition of latency cost is clear in Definition 1, where the latency cost is defined as the percentage difference between the latency-free value of the optimal policy and the value of the optimal policy with latency. Figure 8 shows that this percentage difference increases over time. What is not clear from Figure 8 is whether the increase is coming from an increase in the numerator or a decrease in the denominator. If the rise in Figure 8 is from an increase in the numerator, then the cost of latency shown is increasing in both absolute and relative terms. If the trend in Figure 8 is from a fall in the denominator, then absolute latency costs (the numerator) could be falling even though relative latency costs are rising.
The latency-free value of the optimal policy is given in Theorem 3 and is proportional to the bid-ask spread. Therefore, I calculate the bid-ask spread for Goldman Sachs from 1999 to 2005. The figure below shows the bid-ask spread in both dollar terms and as a percentage of the stock price. Goldman’s share price remains close to $100 throughout the period, so the two measures track each other closely. The figure shows the spread measures falling roughly sevenfold. This decline is substantially greater than the percentage increase shown in Figure 8. Combining my figure and Figure 8 in the paper suggests that latency costs at the beginning of the sample (pre-2001) represent roughly two cents per share: a 10% cost of latency times a 20 cent bid-ask spread. At the end of the sample, spreads have fallen to roughly three cents per share. Multiplying this by a 20% cost of latency from Figure 8 gives an absolute cost of latency of approximately 0.6 cents per share. This is a decline of roughly two-thirds. Calculations done in basis points are similar. Hence, while I agree that latency costs as a portion of the costs of immediacy have increased, the absolute importance of latency appears to have declined. Thus, in the paper’s context, while latency is more important in the trading process, latency may be less important for the investing process.
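The back-of-envelope arithmetic above can be made explicit. The spread and relative-cost values below are the rough figures quoted in the commentary, not data from the paper itself:

```python
# Rough figures quoted in the commentary above (not data from the paper).
spread_pre2001 = 0.20     # ~20 cent bid-ask spread, dollars per share
spread_end = 0.03         # ~3 cent spread at the end of the sample
rel_cost_pre2001 = 0.10   # ~10% relative cost of latency (Figure 8)
rel_cost_end = 0.20       # ~20% relative cost of latency (Figure 8)

# Absolute latency cost = relative cost x bid-ask spread.
abs_cost_pre2001 = rel_cost_pre2001 * spread_pre2001
abs_cost_end = rel_cost_end * spread_end

print(f"absolute latency cost, pre-2001: {abs_cost_pre2001 * 100:.1f} cents/share")
print(f"absolute latency cost, end:      {abs_cost_end * 100:.1f} cents/share")
print(f"decline: {1 - abs_cost_end / abs_cost_pre2001:.0%}")
```

The relative cost doubles while the spread falls roughly sevenfold, so the absolute cost falls from about 2 cents to about 0.6 cents per share, the decline of roughly two-thirds noted above.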
The authors respond:
With regard to the characterization of latency trends, you are absolutely right that we are empirically arguing that the relative latency cost has increased over time. We agree with your characterization that latency relates to "trading" rather than "investing". Hence, we feel that the relative metric is appropriate; it makes sense to understand the impact of latency in the context of a well-understood trading cost, the bid-offer spread.
Moreover, the trading problem we analyze considers the impact of latency on the difference in value between a limit order and a market order. This is bounded by the bid-offer spread, and if the bid-offer spread dramatically decreases, as in your example, the absolute latency cost must necessarily also become small. That doesn't mean, however, that latency is less important now than before in a practical sense, just as minimizing spread costs remains practically important to many investors in spite of the fact that spreads have decreased dramatically over time.
Almgren questions whether the “cost of latency” observed in the model is perhaps just a by-product of discretization:
The effect observed in this paper is due to time discretization, not to latency. The authors have ignored the price change within the interval t_i to t_{i+1}, which is of the same asymptotic size, O(√Δt), as the price change from t_{i-1} to t_i. Correctly calculated, the cost for the discretization-only model in Section 4.3 is of the same order as the overall cost for the latency model.
Dr. Almgren brings up some good and subtle points relating to our model. As Dr. Almgren states, our main conclusion is that a latency of Δt asymptotically creates a cost of order O(√(Δt log 1/Δt)). We believe this conclusion is robust to the particular assumptions made in the model of Section 4.2 (e.g., Bernoulli rather than Poisson arrivals, including or excluding changes in market price during the limit order lifetime, etc.). We do wish to clarify the role of the discrete-time model in Section 4.3, however. As Dr. Almgren observes, including changes in market price during the limit order lifetime would alter the asymptotic cost in this setting. However, the main point of Section 4.3 was to understand whether the cost difference between the model of Section 4.2 and the continuous-time model arises from:
(A) Discreteness of time, i.e., that decisions are only made at the beginning of n intervals of length Δt, as opposed to being made continuously
(B) Latency, i.e., lack of access to timely information: when a limit order is placed, the reservation price of an impatient buyer who might trade against it is not known, as opposed to the continuous-time case, where it is known
The "more realistic" model that Dr. Almgren suggests, including changes in market price during the limit order lifetime, has both discreteness of time and latency. The reservation price of the next impatient buyer is unknown when the limit order price is set and becomes known only after some random delay; this is a latency. Hence, it is not surprising that it achieves the same asymptotic cost as the model of Section 4.2, but it is also not an interesting point of comparison for disentangling the effects of (A) and (B).
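The practical meaning of the asymptotic order discussed above can be illustrated numerically. The snippet below takes the stated order as √(Δt log(1/Δt)) with all constants omitted; this is one reading of the quoted expression, a sketch rather than the paper's exact formula:

```python
import math

def latency_cost_scale(dt):
    """Leading-order latency cost scale, sqrt(dt * log(1/dt)).
    One reading of the asymptotic order quoted above; constants omitted."""
    return math.sqrt(dt * math.log(1.0 / dt))

# Because of the log factor, halving an already-small latency shrinks the
# cost by a bit less than sqrt(2) ~ 1.414 (pure sqrt(dt) would give exactly
# sqrt(2)), so successive latency reductions yield diminishing returns.
ratios = []
for dt in (1e-2, 1e-4, 1e-6):
    r = latency_cost_scale(dt) / latency_cost_scale(dt / 2)
    ratios.append(r)
    print(f"dt = {dt:.0e}: cost ratio vs dt/2 = {r:.3f}")
```

This is consistent with the authors' framing: the cost vanishes as latency goes to zero, but slowly, so incremental latency reductions buy progressively less.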