Abstract: Constrained maximization of non-decreasing utility functions with submodularity-like properties is at the core of several AI and ML applications, including viral marketing, pool-based active learning, adaptive feature selection, and sensor deployment. In this work, we develop adaptive policies for maximizing such functions when both the utility function and the item costs may be stochastic.

First, we study maximization of an adaptive weak submodular function, which is submodular in an approximate and probabilistic sense, subject to a stochastic fractional knapsack constraint, which requires the total expected item cost to be within a given capacity. We present the $\beta$-\textsc{Greedy} policy for this problem; our approximation guarantee for it recovers many known greedy maximization guarantees as special cases. Next, we study maximization of an adaptive submodular function, which is submodular in a probabilistic sense, subject to a stochastic knapsack constraint, which requires the total item cost to be within a given capacity with probability one. We present the \textsc{Mix} policy for this problem; our approximation guarantee for it is the first known approximation guarantee for maximizing a non-linear utility function subject to a stochastic knapsack constraint. Using alternative parameterizations of \textsc{Mix}, we also derive the first known approximation guarantees for maximizing an adaptive submodular function subject to a deterministic knapsack constraint.

Our guarantees are powered by an innovative differential analysis technique, which models the $\beta$-\textsc{Greedy} policy using a continuous-time stochastic reward process of a particle whose reward is related to the optimal utility through a differential inequality. The solution to this inequality yields our $\beta$-\textsc{Greedy} guarantee. We combine differential analysis with a variety of other ideas to derive our \textsc{Mix} guarantees.
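
To illustrate the flavor of such a differential analysis, the display below is a stylized sketch of the classical continuous-time greedy argument, not the paper's actual inequality (which accounts for adaptive weak submodularity and stochastic costs). Here $f(t)$ denotes the reward accumulated by the greedy process after spending budget $t$, $B$ is the knapsack capacity, and $\mathrm{OPT}$ is the optimal utility; all three symbols are introduced only for this sketch:
\[
\frac{\mathrm{d}f(t)}{\mathrm{d}t} \;\ge\; \frac{\mathrm{OPT} - f(t)}{B}, \qquad f(0) = 0
\quad\Longrightarrow\quad
f(t) \;\ge\; \mathrm{OPT}\left(1 - e^{-t/B}\right),
\]
so that when the budget is exhausted, $f(B) \ge (1 - e^{-1})\,\mathrm{OPT}$; guarantees of this shape are what solving such differential inequalities yields.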
