\(L_1\) Trend Filtering

Introduction

Kim et al. (2009) propose the \(l_1\) trend filtering method for trend estimation. The method solves an optimization problem of the form

\[ \begin{array}{ll} \underset{\beta}{\mbox{minimize}} & \frac{1}{2}\sum_{i=1}^m (y_i - \beta_i)^2 + \lambda ||D\beta||_1 \end{array} \] where the variable to be estimated is \(\beta\) and we are given the problem data \(y\) and \(\lambda\). The matrix \(D\) is the second-order difference matrix,

\[ D = \left[ \begin{matrix} 1 & -2 & 1 & & & & \\ & 1 & -2 & 1 & & & \\ & & \ddots & \ddots & \ddots & & \\ & & & 1 & -2 & 1 & \\ & & & & 1 & -2 & 1\\ \end{matrix} \right]. \]
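
As a quick illustration (not part of the original formulation), the matrix \(D\) can be constructed explicitly in base R by applying diff() to an identity matrix; for a series of length \(m\) this yields the \((m-2)\times m\) second-order difference matrix shown above. The names m and D below are used only for this sketch.

## Sketch: second-order difference matrix D for a toy series of length m = 6
m <- 6
D <- diff(diag(m), differences = 2)  # an (m - 2) x m matrix
D                                    # each row is (1, -2, 1), shifted right by one column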

The authors provide implementations in both C and Matlab, and Hadley Wickham provides an R interface to the C code in the l1tf package. It is available on GitHub and can be installed via:

library(devtools)
install_github("hadley/l1tf")

Example

We will use the S&P 500 data included in l1tf to illustrate the method. The package provides the function l1tf, which computes the trend estimate for a specified \(\lambda\).

library(l1tf)
sp_data <- data.frame(x = sp500$date,
                      y = sp500$log,
                      l1_50 = l1tf(sp500$log, lambda = 50),
                      l1_100 = l1tf(sp500$log, lambda = 100))

The CVXR version

CVXR provides all the atoms and functions necessary to formulate the problem in a few lines. For example, the \(D\) matrix above is provided by the function diff(..., differences = 2). Notice how the formulation below tracks the mathematical construct above.

## lambda = 50
y <- sp500$log
lambda_1 <- 50 
beta <- Variable(length(y))
objective_1 <- Minimize(0.5 * p_norm(y - beta) +
                        lambda_1 * p_norm(diff(x = beta, differences = 2), 1))
p1 <- Problem(objective_1)
betaHat_50 <- solve(p1)$getValue(beta)

## lambda = 100
lambda_2 <- 100
objective_2 <- Minimize(0.5 * p_norm(y - beta) +
                        lambda_2 * p_norm(diff(x = beta, differences = 2), 1))
p2 <- Problem(objective_2)
betaHat_100 <- solve(p2)$getValue(beta)
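
Beyond extracting the fitted values, the list returned by solve() can be inspected for the solver status and the optimal objective value. The sketch below does this for the \(\lambda = 50\) problem; the names result_1 and fitted_50 are just illustrative.

## Sketch: inspect the solver output for the lambda = 50 problem
result_1 <- solve(p1)
result_1$status                        # solver status, e.g. "optimal"
result_1$value                         # optimal objective value
fitted_50 <- result_1$getValue(beta)   # same fitted trend as betaHat_50 above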

NOTE: CVXR is, of course, much slower than l1tf, since it is a general-purpose modeling framework rather than code optimized for this one problem.
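
A rough way to see this on your own machine is to time both approaches with base R's system.time(); the sketch below is for illustration only, and the actual timings will vary by machine and solver.

## Rough timing sketch (numbers vary by machine and solver)
system.time(l1tf(sp500$log, lambda = 50))   # specialized C implementation
system.time(solve(p1)$getValue(beta))       # general-purpose CVXR formulation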

Comparison Plots

Plots of the estimates for the two values of \(\lambda\) are shown below, one for each approach. First, the l1tf plot.

ggplot(data = sp_data) +
    geom_line(mapping = aes(x = x, y = y), color = 'grey50') +
    labs(x = "Date", y = "SP500 log-price") +
    geom_line(mapping = aes(x = x, y = l1_50), color = 'red', size = 1) +
    geom_line(mapping = aes(x = x, y = l1_100), color = 'blue', size = 1)

Figure 1: \(L_1\) trends for \(\lambda = 50\) (red) and \(\lambda = 100\) (blue).

Next, the corresponding CVXR plot.

cvxr_data <- data.frame(x = sp500$date,
                        y = sp500$log,
                        l1_50 = betaHat_50,
                        l1_100 = betaHat_100)
ggplot(data = cvxr_data) +
    geom_line(mapping = aes(x = x, y = y), color = 'grey50') +
    labs(x = "Date", y = "SP500 log-price") +
    geom_line(mapping = aes(x = x, y = l1_50), color = 'red', size = 1) +
    geom_line(mapping = aes(x = x, y = l1_100), color = 'blue', size = 1)

Figure 2: CVXR estimated \(L_1\) trends for \(\lambda = 50\) (red) and \(\lambda = 100\) (blue).

Notes

The CVXR solution is not exactly that of l1tf: on the left, the estimates for the two \(\lambda\) values differ from each other more than in the l1tf fit; in the middle, the CVXR estimate is less flat; and on the right, it has fewer knots than the l1tf estimate.
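
One quick way to quantify the discrepancy is to compare the two sets of fitted values directly; the sketch below reports the largest absolute difference for each \(\lambda\) (the exact numbers will depend on solver tolerances).

## Sketch: maximum absolute difference between the CVXR and l1tf estimates
max(abs(betaHat_50  - sp_data$l1_50))
max(abs(betaHat_100 - sp_data$l1_100))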

Session Info

sessionInfo()
## R version 3.5.1 (2018-07-02)
## Platform: x86_64-apple-darwin17.7.0 (64-bit)
## Running under: macOS  10.14.1
## 
## Matrix products: default
## BLAS/LAPACK: /usr/local/Cellar/openblas/0.3.3/lib/libopenblasp-r0.3.3.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices datasets  utils     methods   base     
## 
## other attached packages:
## [1] l1tf_0.0.0.9000 CVXR_0.99-2     ggplot2_3.1.0  
## 
## loaded via a namespace (and not attached):
##  [1] gmp_0.5-13.2      Rcpp_0.12.19      highr_0.7        
##  [4] pillar_1.3.0      compiler_3.5.1    plyr_1.8.4       
##  [7] bindr_0.1.1       R.methodsS3_1.7.1 R.utils_2.7.0    
## [10] tools_3.5.1       bit_1.1-14        digest_0.6.18    
## [13] evaluate_0.12     tibble_1.4.2      gtable_0.2.0     
## [16] lattice_0.20-35   pkgconfig_2.0.2   rlang_0.3.0.1    
## [19] Matrix_1.2-15     yaml_2.2.0        blogdown_0.9.2   
## [22] xfun_0.4          bindrcpp_0.2.2    Rmpfr_0.7-1      
## [25] ECOSolveR_0.4     withr_2.1.2       stringr_1.3.1    
## [28] dplyr_0.7.7       knitr_1.20        bit64_0.9-7      
## [31] rprojroot_1.3-2   grid_3.5.1        tidyselect_0.2.5 
## [34] glue_1.3.0        R6_2.3.0          rmarkdown_1.10   
## [37] bookdown_0.7      purrr_0.2.5       magrittr_1.5     
## [40] scs_1.1-1         backports_1.1.2   scales_1.0.0     
## [43] htmltools_0.3.6   assertthat_0.2.0  colorspace_1.3-2 
## [46] labeling_0.3      stringi_1.2.4     lazyeval_0.2.1   
## [49] munsell_0.5.0     crayon_1.3.4      R.oo_1.22.0

Source

R Markdown

References

Kim, Seung-Jean, Kwangmoo Koh, Stephen Boyd, and Dimitry Gorinevsky. 2009. “\(l_1\) Trend Filtering.” SIAM Review 51 (2): 339–60. doi:10.1137/070690274.