Log-Concave Distribution Estimation

Introduction

Suppose \(x_1,\ldots,x_m\) are i.i.d. samples from a log-concave discrete distribution on \(\{0,\ldots,K\}\) for some \(K \in {\mathbf Z}_+\). Define \(p_k := {\mathbf {Prob}}(X = k)\), \(k = 0,\ldots,K\), to be the probability mass function. One method for estimating \(\{p_0,\ldots,p_K\}\) is to maximize the log-likelihood function subject to a log-concavity constraint, i.e.,

\[ \begin{array}{ll} \underset{p}{\mbox{maximize}} & \sum_{k=0}^K M_k\log p_k \\ \mbox{subject to} & p \geq 0, \quad \sum_{k=0}^K p_k = 1, \\ & p_k \geq \sqrt{p_{k-1}p_{k+1}}, \quad k = 1,\ldots,K-1, \end{array} \]

where \(p \in {\mathbf R}^{K+1}\) is our variable of interest and \(M_k\) is the number of observations equal to \(k\), so that \(\sum_{k=0}^K M_k = m\). The problem as posed above is not convex. However, we can transform it into a convex optimization problem by defining new variables \(u_k = \log p_k\) and relaxing the equality constraint to \(\sum_{k=0}^K p_k \leq 1\), since the relaxed constraint is always tight at an optimal solution. The result is

\[ \begin{array}{ll} \underset{u}{\mbox{maximize}} & \sum_{k=0}^K M_k u_k \\ \mbox{subject to} & \sum_{k=0}^K e^{u_k} \leq 1, \\ & u_k - u_{k-1} \geq u_{k+1} - u_k, \quad k = 1,\ldots,K-1. \end{array} \]
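
To see why the two sets of constraints correspond, note that for \(p_k > 0\) taking logarithms preserves the inequality, since \(\log\) is increasing:

\[ p_k \geq \sqrt{p_{k-1}p_{k+1}} \quad\Longleftrightarrow\quad 2\log p_k \geq \log p_{k-1} + \log p_{k+1} \quad\Longleftrightarrow\quad u_k - u_{k-1} \geq u_{k+1} - u_k. \]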

Example

We draw \(m = 25\) observations from a log-concave distribution on \(\{0,\ldots,100\}\). We then estimate the probability mass function using the above method and compare it with the empirical distribution.

library(CVXR)
library(ggplot2)
library(tidyr)

set.seed(1)
## Piecewise linear interpolation through the given knots,
## with linear extrapolation beyond the first and last knot
pwl_fun <- function(x, knots) {
    n <- nrow(knots)
    x0 <- sort(knots$x, decreasing = FALSE)
    y0 <- knots$y[order(knots$x, decreasing = FALSE)]
    slope <- diff(y0)/diff(x0)
    
    sapply(x, function(xs) {
        if(xs <= x0[1])
            y0[1] + slope[1]*(xs - x0[1])
        else if(xs >= x0[n])
            y0[n] + slope[n-1]*(xs - x0[n])
        else {
            idx <- which(xs <= x0)[1]
            y0[idx-1] + slope[idx-1]*(xs - x0[idx-1])
        }
    })
}
## Problem data
m <- 25
xrange <- 0:100
knots <- data.frame(x = c(0, 25, 65, 100), y = c(10, 30, 40, 15))
xprobs <- pwl_fun(xrange, knots)/15
xprobs <- exp(xprobs)/sum(exp(xprobs))
x <- sample(xrange, size = m, replace = TRUE, prob = xprobs)
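
As a quick sanity check (not part of the original analysis), the piecewise linear interpolant should reproduce the knot y-values exactly at the knot x-values:

all.equal(pwl_fun(knots$x, knots), knots$y)  # expected TRUE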

K <- max(xrange)
counts <- hist(x, breaks = -1:K, right = TRUE, include.lowest = FALSE,
               plot = FALSE)$counts
ggplot() +
    geom_histogram(mapping = aes(x = x), breaks = -1:K, color = "blue", fill = "orange")

We now solve the estimation problem with the log-concavity constraint.

u <- Variable(K+1)
obj <- t(counts) %*% u
constraints <- list(sum(exp(u)) <= 1, diff(u[1:K]) >= diff(u[2:(K+1)]))
prob <- Problem(Maximize(obj), constraints)
result <- solve(prob)
pmf <- result$getValue(exp(u))

The last line above transforms the variables \(u_k\) to \(e^{u_k}\) before retrieving their values. This is possible because exp is a member of the CVXR library of atoms, so it can be applied directly to a Variable object such as u.
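
As a minimal check (assuming the result object from the solve above), we can also retrieve the optimal \(u\) directly and exponentiate it ourselves; the recovered mass function should match pmf and sum to one, since the relaxed constraint is tight at the optimum:

result$status                   # solver status, e.g. "optimal"
u_hat <- result$getValue(u)     # optimal log-probabilities
pmf_manual <- exp(u_hat)        # same values as result$getValue(exp(u))
max(abs(pmf - pmf_manual))      # expected to be ~0
sum(pmf)                        # expected to be ~1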

Below are comparison plots of the probability mass functions (with a kernel density estimate standing in for the empirical distribution) and the corresponding cumulative distribution functions.

dens <- density(x, bw = "sj")
d <- data.frame(x = xrange, True = xprobs, Optimal = pmf,
                Empirical = approx(x = dens$x, y = dens$y, xout = xrange)$y)
plot.data <- gather(data = d, key = "Type", value = "Estimate", True, Empirical, Optimal,
                    factor_key = TRUE)
ggplot(plot.data) +
    geom_line(mapping = aes(x = x, y = Estimate, color = Type)) +
    theme(legend.position = "top")

d <- data.frame(x = xrange, True = cumsum(xprobs),
                Empirical = cumsum(counts) / sum(counts),
                Optimal = cumsum(pmf))
plot.data <- gather(data = d, key = "Type", value = "Estimate", True, Empirical, Optimal,
                    factor_key = TRUE)
ggplot(plot.data) +
    geom_line(mapping = aes(x = x, y = Estimate, color = Type)) +
    theme(legend.position = "top")

From the figures, we see that the estimated curve is much closer to the true distribution than the empirical one, exhibiting a similar shape and number of peaks. In contrast, the empirical probability mass function oscillates and fails to be log-concave on parts of its domain. These differences are also reflected in the cumulative distribution functions.
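
As a quick numerical check of this claim, we can test the defining inequality \(p_k^2 \geq p_{k-1}p_{k+1}\) directly. The helper below is a sketch, not part of the original code, and uses the pmf and counts objects computed above:

## Hypothetical helper: TRUE if p_k^2 >= p_{k-1} p_{k+1} for all interior k,
## allowing a small tolerance for solver inaccuracy.
is_log_concave <- function(p, tol = 1e-8) {
    K <- length(p)
    all(p[2:(K - 1)]^2 >= p[1:(K - 2)] * p[3:K] - tol)
}
is_log_concave(as.numeric(pmf))        # expected TRUE, up to solver tolerance
is_log_concave(counts / sum(counts))   # typically FALSE for a sparse empirical pmf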

Session Info

sessionInfo()
## R version 4.2.1 (2022-06-23)
## Platform: x86_64-apple-darwin21.6.0 (64-bit)
## Running under: macOS Ventura 13.0
## 
## Matrix products: default
## BLAS:   /usr/local/Cellar/openblas/0.3.21/lib/libopenblasp-r0.3.21.dylib
## LAPACK: /usr/local/Cellar/r/4.2.1_4/lib/R/lib/libRlapack.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices datasets  utils     methods   base     
## 
## other attached packages:
## [1] tidyr_1.2.1   ggplot2_3.3.6 CVXR_1.0-11  
## 
## loaded via a namespace (and not attached):
##  [1] tidyselect_1.2.0 xfun_0.34        bslib_0.4.0      slam_0.1-50     
##  [5] purrr_0.3.5      lattice_0.20-45  Rmosek_10.0.25   colorspace_2.0-3
##  [9] vctrs_0.5.0      generics_0.1.3   htmltools_0.5.3  yaml_2.3.6      
## [13] gmp_0.6-6        utf8_1.2.2       rlang_1.0.6      jquerylib_0.1.4 
## [17] pillar_1.8.1     glue_1.6.2       Rmpfr_0.8-9      withr_2.5.0     
## [21] DBI_1.1.3        Rcplex_0.3-5     bit64_4.0.5      lifecycle_1.0.3 
## [25] stringr_1.4.1    munsell_0.5.0    blogdown_1.13    gtable_0.3.1    
## [29] gurobi_9.5-2     codetools_0.2-18 evaluate_0.17    labeling_0.4.2  
## [33] knitr_1.40       fastmap_1.1.0    cccp_0.2-9       fansi_1.0.3     
## [37] highr_0.9        Rcpp_1.0.9       scales_1.2.1     cachem_1.0.6    
## [41] jsonlite_1.8.3   farver_2.1.1     bit_4.0.4        digest_0.6.30   
## [45] stringi_1.7.8    bookdown_0.29    dplyr_1.0.10     Rglpk_0.6-4     
## [49] grid_4.2.1       ECOSolveR_0.5.4  cli_3.4.1        tools_4.2.1     
## [53] magrittr_2.0.3   sass_0.4.2       tibble_3.1.8     pkgconfig_2.0.3 
## [57] ellipsis_0.3.2   rcbc_0.1.0.9001  Matrix_1.5-1     assertthat_0.2.1
## [61] rmarkdown_2.17   R6_2.5.1         compiler_4.2.1

Source

R Markdown
