## Recent Presentations

• IEEE Conference on Decision and Control (CDC), Nice, 2019 (slides)
• IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys), Chicago, 2019 (poster)
• Systems Information Learning Optimization (SILO), Madison, 2019 (slides)
• International Symposium on Mathematical Programming (ISMP), Bordeaux, 2018 (slides)

## Research Interests

• robust control
• convex optimization
• multi-agent systems

## Education and Experience

Postdoc at the Wisconsin Institute for Discovery

Supervisor: Laurent Lessard

PhD in Electrical Engineering and Computer Science

Northwestern University, 2012–2017

MS in Applied Mathematics

University of Akron, 2007–2012

BS in Applied Mathematics

University of Akron, 2007–2012

BS in Electrical Engineering

University of Akron, 2007–2012

## Forthcoming Publications

• B. Van Scoy and L. Lessard, “Systematic analysis of distributed optimization algorithms over jointly-connected networks,” arXiv:2003.10500, 2020.
[PDF] [BibTeX] [URL]

@misc{vanscoy2020systematic,
author       = {Van Scoy, Bryan and Lessard, Laurent},
title        = {Systematic analysis of distributed optimization algorithms over jointly-connected networks},
year         = {2020},
howpublished = {arXiv:2003.10500},
url          = {https://arxiv.org/abs/2003.10500}
}
• A. Sundararajan, B. Van Scoy, and L. Lessard, “Analysis and design of first-order distributed optimization algorithms over time-varying graphs,” arXiv:1907.05448, 2020.
[PDF] [BibTeX] [URL]

@misc{sundararajan2020analysis,
author       = {Sundararajan, Akhil and Van Scoy, Bryan and Lessard, Laurent},
title        = {Analysis and design of first-order distributed optimization algorithms over time-varying graphs},
year         = {2020},
howpublished = {arXiv:1907.05448},
url          = {https://arxiv.org/abs/1907.05448}
}

## Journal Publications

• S. S. Kia, B. Van Scoy, J. Cortés, R. A. Freeman, K. M. Lynch, and S. Martínez, “Tutorial on dynamic average consensus: The problem, its applications, and the algorithms,” IEEE Control Systems Magazine, vol. 39, no. 3, pp. 40–72, 2019.
[PDF] [BibTeX] [URL]

@article{kia2019tutorial,
author    = {Kia, Solmaz S. and Van Scoy, Bryan and Cort\'{e}s, Jorge and Freeman, Randy A. and Lynch, Kevin M. and Mart\'{i}nez, Sonia},
title     = {Tutorial on dynamic average consensus: {T}he problem, its applications, and the algorithms},
journal   = {IEEE Control Systems Magazine},
year      = {2019},
volume    = {39},
number    = {3},
pages     = {40--72},
url       = {https://ieeexplore.ieee.org/document/8716798},
doi       = {10.1109/MCS.2019.2900783}
}
• B. Van Scoy, R. A. Freeman, and K. M. Lynch, “The fastest known globally convergent first-order method for minimizing strongly convex functions,” IEEE Control Systems Letters, vol. 2, no. 1, pp. 49–54, 2018.
[PDF] [BibTeX] [URL] [Slides]

@article{vanscoy2017fastest,
author  = {Van Scoy, Bryan and Freeman, Randy A. and Lynch, Kevin M.},
title   = {The fastest known globally convergent first-order method for minimizing strongly convex functions},
journal = {IEEE Control Systems Letters},
year    = {2018},
volume  = {2},
number  = {1},
pages   = {49--54},
url     = {http://ieeexplore.ieee.org/document/7967721/},
doi     = {10.1109/LCSYS.2017.2722406}
}

## Peer-Reviewed Conference Proceedings

• B. Van Scoy and L. Lessard, “Integral quadratic constraints: Exact convergence rates and worst-case trajectories,” IEEE Conference on Decision and Control, 2019.
[PDF] [BibTeX] [URL] [Slides]

@inproceedings{vanscoy2019iqc,
author    = {Van Scoy, Bryan and Lessard, Laurent},
title     = {Integral quadratic constraints: {E}xact convergence rates and worst-case trajectories},
booktitle = {IEEE Conference on Decision and Control},
year      = {2019},
url       = {https://arxiv.org/abs/1903.07668}
}
• B. Van Scoy and L. Lessard, “A distributed optimization algorithm over time-varying graphs with efficient gradient evaluations,” IFAC Workshop on Distributed Estimation and Control in Networked Systems, 2019.
[PDF] [BibTeX] [URL] [Poster]

@inproceedings{vanscoy2019distributed,
author    = {Van Scoy, Bryan and Lessard, Laurent},
title     = {A distributed optimization algorithm over time-varying graphs with efficient gradient evaluations},
booktitle = {IFAC Workshop on Distributed Estimation and Control in Networked Systems},
year      = {2019},
url       = {https://doi.org/10.1016/j.ifacol.2019.12.181}
}
• A. Sundararajan, B. Van Scoy, and L. Lessard, “A canonical form for first-order distributed optimization algorithms,” American Control Conference, 2019.
[PDF] [BibTeX] [URL] [Slides]

@inproceedings{sundararajan2019canonical,
author    = {Sundararajan, Akhil and Van Scoy, Bryan and Lessard, Laurent},
title     = {A canonical form for first-order distributed optimization algorithms},
booktitle = {American Control Conference},
year      = {2019},
url       = {https://doi.org/10.23919/ACC.2019.8814838}
}
• A. Taylor*, B. Van Scoy*, and L. Lessard*, “Lyapunov functions for first-order methods: Tight automated convergence guarantees,” International Conference on Machine Learning, 2018 (* denotes equal contribution).
[PDF] [BibTeX] [URL] [Slides] [Code]

@inproceedings{taylor2018lyapunov,
author    = {Taylor, Adrien and Van Scoy, Bryan and Lessard, Laurent},
title     = {Lyapunov functions for first-order methods: {T}ight automated convergence guarantees},
booktitle = {International Conference on Machine Learning},
pages     = {4897--4906},
year      = {2018},
volume    = {80},
series    = {Proceedings of Machine Learning Research},
month     = {Jul},
publisher = {PMLR},
pdf       = {http://proceedings.mlr.press/v80/taylor18a/taylor18a.pdf},
url       = {http://proceedings.mlr.press/v80/taylor18a.html},
}
• S. Cyrus, B. Hu, B. Van Scoy, and L. Lessard, “A robust accelerated optimization algorithm for strongly convex functions,” American Control Conference, 2018.
[PDF] [BibTeX] [URL] [Slides]

@inproceedings{cyrus2018robust,
author    = {Cyrus, Saman and Hu, Bin and Van Scoy, Bryan and Lessard, Laurent},
title     = {A robust accelerated optimization algorithm for strongly convex functions},
booktitle = {American Control Conference},
year      = {2018},
url       = {https://ieeexplore.ieee.org/document/8430824/}
}
• B. Van Scoy, R. A. Freeman, and K. M. Lynch, “Feedforward estimators for the distributed average tracking of bandlimited signals in discrete time with switching graph topology,” IEEE Conference on Decision and Control, 2016.
[PDF] [BibTeX] [URL] [Slides]

@inproceedings{vanscoy2016feedforward,
author    = {Van Scoy, Bryan and Freeman, Randy A. and Lynch, Kevin M.},
title     = {Feedforward estimators for the distributed average tracking of bandlimited signals in discrete time with switching graph topology},
booktitle = {IEEE Conference on Decision and Control},
year      = {2016},
url       = {http://ieeexplore.ieee.org/document/7798918/}
}
• B. Van Scoy, R. A. Freeman, and K. M. Lynch, “Design of robust dynamic average consensus estimators,” IEEE Conference on Decision and Control, 2015.
[PDF] [BibTeX] [URL]

@inproceedings{vanscoy2015design,
author    = {Van Scoy, Bryan and Freeman, Randy A. and Lynch, Kevin M.},
title     = {Design of robust dynamic average consensus estimators},
booktitle = {IEEE Conference on Decision and Control},
year      = {2015},
url       = {http://ieeexplore.ieee.org/document/7403206/}
}
• B. Van Scoy, R. A. Freeman, and K. M. Lynch, “Exploiting memory in average consensus,” Allerton Conference on Communication, Control, and Computing, 2015.
[PDF] [BibTeX] [URL]

@inproceedings{vanscoy2015exploiting,
author    = {Van Scoy, Bryan and Freeman, Randy A. and Lynch, Kevin M.},
title     = {Exploiting memory in average consensus},
booktitle = {Allerton Conference on Communication, Control, and Computing},
year      = {2015},
url       = {http://ieeexplore.ieee.org/document/7447013/}
}
• B. Van Scoy, R. A. Freeman, and K. M. Lynch, “A fast robust nonlinear dynamic average consensus estimator in discrete time,” IFAC Workshop on Distributed Estimation and Control in Networked Systems, 2015.
[PDF] [BibTeX] [URL] [Poster]

@inproceedings{vanscoy2015fast,
author    = {Van Scoy, Bryan and Freeman, Randy A. and Lynch, Kevin M.},
title     = {A fast robust nonlinear dynamic average consensus estimator in discrete time},
booktitle = {IFAC Workshop on Distributed Estimation and Control in Networked Systems},
year      = {2015},
url       = {https://doi.org/10.1016/j.ifacol.2015.10.329}
}
• B. Van Scoy, R. A. Freeman, and K. M. Lynch, “Optimal worst-case dynamic average consensus,” American Control Conference, 2015.
[PDF] [BibTeX] [URL] [Slides]

@inproceedings{vanscoy2015optimal,
author    = {Van Scoy, Bryan and Freeman, Randy A. and Lynch, Kevin M.},
title     = {Optimal worst-case dynamic average consensus},
booktitle = {American Control Conference},
year      = {2015},
url       = {http://ieeexplore.ieee.org/document/7172171/}
}
• B. Van Scoy, R. A. Freeman, and K. M. Lynch, “Asymptotic mean ergodicity of average consensus estimators,” American Control Conference, 2014.
[PDF] [BibTeX] [URL] [Slides]

@inproceedings{vanscoy2014asymptotic,
author    = {Van Scoy, Bryan and Freeman, Randy A. and Lynch, Kevin M.},
title     = {Asymptotic mean ergodicity of average consensus estimators},
booktitle = {American Control Conference},
year      = {2014},
url       = {http://ieeexplore.ieee.org/document/6859059/}
}

## Doctoral Dissertation

• B. Van Scoy, Analysis and Design of Algorithms for Dynamic Average Consensus and Convex Optimization. PhD thesis, Northwestern University, 2017.
[PDF] [BibTeX] [URL] [Slides]

@phdthesis{vanscoy2017dissertation,
author  = {Van Scoy, Bryan},
title   = {Analysis and Design of Algorithms for Dynamic Average Consensus and Convex Optimization},
note    = {ProQuest Dissertations and Theses},
school  = {Northwestern University},
year    = {2017},
pages   = {227},
}

## Master's Thesis

• B. Van Scoy, “A Mathematical Model for Hydrogen Production from a Proton Exchange Membrane Photoelectrochemical Cell,” MS thesis, University of Akron, 2012.
[PDF] [BibTeX] [URL] [Slides]

@mastersthesis{vanscoy2012thesis,
author = {Van Scoy, Bryan},
title  = {A Mathematical Model for Hydrogen Production from a Proton Exchange Membrane Photoelectrochemical Cell},
school = {University of Akron},
year   = {2012},
}

# Research Overview

My research uses optimization and control to study large-scale systems consisting of many complex interconnected components. I develop systematic tools for characterizing system properties and design new algorithms that optimize overall performance while remaining robust to disturbances and uncertainties. [more]

Large-scale cyber-physical systems are increasingly prevalent. The smart grid, for example, incorporates numerous dynamic renewable energy sources such as wind and solar, where controllers regulate the frequency while minimizing consumer costs. Transportation is moving toward a future in which autonomous vehicles coordinate with each other to optimize travel times and improve safety. While building the sensors and actuators for such systems is difficult, their ultimate success rests not on our ability to build them, but on our ability to control them.

The fields of optimization and control complement each other in the analysis of the algorithms used to control large-scale interconnected systems. While optimization algorithms are well-suited for finding the best solution to complex problems that remain fixed in time, control theory uses feedback to naturally adapt to dynamic uncertainties and disturbances while maintaining stability. Both fields are crucial to the analysis and design of complex interconnected systems that make efficient use of the resources available and are robust to dynamic environments and unknown operating conditions.

The world is becoming more interconnected, and the algorithms we develop must be capable of controlling these highly complex systems. Interconnected systems of the future must adapt to dynamic operating conditions, withstand cyberattacks, be robust to disturbances, guarantee consumer safety, learn from data, and optimize efficiency. My research aims to make such systems a reality.

## First-order algorithms for convex optimization

Optimization is a cornerstone of machine learning, finance, control, and engineering. When problems in these areas depend on high-dimensional data, first-order algorithms are typically used due to their low memory and computation requirements. Using tools from robust control, we designed the fastest known algorithm for minimizing smooth, strongly convex objectives. [more]

First-order optimization algorithms have a long history dating back to 1847 when Cauchy introduced gradient descent. It was not until 1983, however, that Nesterov developed an accelerated method for smooth convex optimization. Since then, researchers have been working to better understand and take advantage of acceleration.

One important class of convex optimization problems consists of unconstrained problems whose objective functions are smooth and strongly convex. In [L-CSS’18], we designed a first-order algorithm that is currently the fastest known method for solving this fundamental problem class.

### Triple Momentum Method

Consider the unconstrained optimization problem

$$\min_{x\in\mathbb{R}^d} \ f(x),$$

where $$f:\mathbb{R}^d\to\mathbb{R}$$ is $$L$$-smooth and $$\mu$$-strongly convex with $$0<\mu\leq L$$. Let $$x_\star\in\mathbb{R}^d$$ denote the unique optimizer, and denote the condition ratio as $$\kappa = L/\mu$$.

Given initial conditions $$x_{-1},x_0\in\mathbb{R}^d$$, the Triple Momentum Method is given by the iterations

$$\begin{aligned} x_{k+1} &= x_k + \beta\,(x_k-x_{k-1}) - \alpha\,\nabla\!f(y_k) \\ y_k &= x_k + \gamma\,(x_k-x_{k-1}) \end{aligned}$$

for $$k\ge 0$$, where the stepsizes are

$$\begin{aligned} \alpha &= \frac{1+\rho}{L} & \beta &= \frac{\rho^2}{2-\rho} & \gamma &= \frac{\rho^2}{(1+\rho)(2-\rho)} \end{aligned}$$

with $$\rho = 1-1/\sqrt{\kappa}$$. Then there exists a constant $$c>0$$ such that

$$\|x_k-x_\star\| \leq c\,\rho^k \qquad \text{for all }k\ge 0.$$

In other words, the iterate sequence $$\{x_k\}$$ converges linearly with rate $$\rho = 1-1/\sqrt{\kappa}$$ to the optimizer $$x_\star$$.
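As an illustration, the iterations and stepsizes above fit in a few lines of plain Python. The sketch below (function and variable names are my own, not from the paper) runs the method on a simple two-dimensional quadratic whose curvatures are $$\mu$$ and $$L$$, so the iterates contract toward the origin at the stated rate.

```python
import math

def triple_momentum(grad, x0, L, mu, iters=300):
    """Run the Triple Momentum Method on an L-smooth, mu-strongly convex f.

    grad: callable mapping a list of floats to the gradient (list of floats).
    x0:   initial iterate; we take x_{-1} = x_0.
    """
    kappa = L / mu
    rho = 1.0 - 1.0 / math.sqrt(kappa)
    alpha = (1.0 + rho) / L
    beta = rho**2 / (2.0 - rho)
    gamma = rho**2 / ((1.0 + rho) * (2.0 - rho))

    x_prev, x = list(x0), list(x0)
    for _ in range(iters):
        # y_k = x_k + gamma * (x_k - x_{k-1})
        y = [xi + gamma * (xi - xpi) for xi, xpi in zip(x, x_prev)]
        g = grad(y)
        # x_{k+1} = x_k + beta * (x_k - x_{k-1}) - alpha * grad f(y_k)
        x_next = [xi + beta * (xi - xpi) - alpha * gi
                  for xi, xpi, gi in zip(x, x_prev, g)]
        x_prev, x = x, x_next
    return x

# Illustrative objective: f(x) = (mu*x1^2 + L*x2^2)/2, minimized at the origin.
mu, L = 1.0, 100.0
grad = lambda x: [mu * x[0], L * x[1]]
x_final = triple_momentum(grad, [1.0, 1.0], L, mu, iters=300)
```

With $$\kappa = 100$$ the rate is $$\rho = 0.9$$, so after 300 iterations the error is on the order of $$0.9^{300}$$, i.e., negligible.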

## Multi-agent systems

Coordinating agents can interact with each other and their environment in order to solve complex problems that are difficult or impossible to solve individually. Swarms of flying robots, for example, can explore and map unknown environments, and smart cars can communicate with each other to strategically plan routes and minimize travel times. Using semidefinite programming, we have developed a systematic approach to the analysis of algorithms for distributed optimization, and then used this to design an algorithm with the fastest worst-case convergence rate that is also robust to changes in the communication network. [more]

One approach to designing algorithms for a group of agents is to combine centralized algorithms, such as a Kalman filter or model predictive controller, with dynamic average consensus. Here, agents diffuse information over the network in order to track the average of time-varying reference signals. I have worked on multiple aspects of this problem, such as taking into account the effects of noisy and unreliable communication channels, accelerating algorithms so that they converge quickly, and designing algorithms to track the average of signals that vary quickly in time. As a culmination of this work, I wrote a tutorial paper with collaborators describing the benefits and trade-offs of dynamic average consensus algorithms in an effort to make the abundant literature on this topic accessible to a wide audience in controls [CSM’19].
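The input-differencing estimator below is one minimal sketch of the dynamic average consensus idea, not any of the specific estimators from the cited papers (all names are my own): each agent mixes its estimate with its neighbors' through a doubly stochastic weight matrix and adds the change in its own signal, which preserves the network-wide average of the estimates at every step.

```python
def dynamic_consensus(signals, W, iters):
    """Track the network average of time-varying signals phi_i(k).

    signals: callable k -> list of agent signal values phi_i(k).
    W:       doubly stochastic mixing matrix (list of lists).
    Update:  x_i <- sum_j W[i][j] * x_j + (phi_i(k) - phi_i(k-1)).
    """
    n = len(W)
    phi_prev = signals(0)
    x = list(phi_prev)  # initialize each estimate at the agent's own signal
    for k in range(1, iters + 1):
        phi = signals(k)
        mixed = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [mixed[i] + phi[i] - phi_prev[i] for i in range(n)]
        phi_prev = phi
    return x

# Three fully connected agents with uniform self-biased weights.
W = [[0.5, 0.25, 0.25],
     [0.25, 0.5, 0.25],
     [0.25, 0.25, 0.5]]
signals = lambda k: [1.0, 2.0, 6.0]  # constant signals with average 3
estimates = dynamic_consensus(signals, W, iters=100)
```

For constant signals the update reduces to repeated averaging, so every estimate converges to the true average (here, 3); the same recursion tracks slowly varying signals with bounded error.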

Instead of combining separate optimization and consensus algorithms, a more flexible and robust approach is to solve the more general distributed optimization problem directly. This problem has seen a surge of attention recently, resulting in a plethora of algorithms with ad hoc designs and analyses. To better understand the taxonomy of distributed algorithms, we first parametrized a large class of them in a canonical form, similar to the controllable canonical form of a linear time-invariant system. We then formulated a linear matrix inequality that, when feasible, yields a quadratic Lyapunov function certifying convergence of the algorithm. Beyond providing a systematic analysis, this allowed us to design a new algorithm that has the fastest known worst-case convergence rate while remaining robust to changes in the communication network [ARX’19].

## Robust stability of interconnected systems

Practical systems often contain components that are uncertain, noisy, and/or nonlinear. Examples include saturation and friction in physical systems, fluctuations in wind and solar renewable energy sources in the smart grid, and activation functions in a neural network. Using integral quadratic constraints from robust control, we have developed a systematic characterization of the rate of convergence of such systems, and we used semidefinite programming duality to construct the most destabilizing problem instances. [more]

My work aims to construct a tight and systematic analysis of uncertain interconnected systems. One method for analyzing interconnected systems is to replace the troublesome component by constraints that its inputs and outputs are known to satisfy, which is the core idea behind integral quadratic constraints from robust control. Since the 1960s, researchers have been refining this approach to provide tighter guarantees on the closed-loop system. Using convex interpolation, we showed in [ICML’18] that feasibility of a small semidefinite program is equivalent to quadratic stability (that is, existence of a quadratic Lyapunov function), and we used duality theory in semidefinite programming to construct the most unstable scenarios in [CDC’19].
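As a toy illustration of the worst-case idea, far simpler than the IQC and duality machinery in the papers above: for gradient descent on one-dimensional quadratics with curvature in $$[\mu, L]$$, the worst-case per-step contraction is attained at an endpoint curvature. The sketch below (names are hypothetical) checks this numerically against a sweep of interior curvatures.

```python
def gd_rate(alpha, mu, L):
    """Worst-case per-step contraction of x+ = x - alpha * f'(x) over
    quadratics f(x) = m*x^2/2 with mu <= m <= L.  The map is
    x+ = (1 - alpha*m) * x, so the worst curvature is an endpoint of [mu, L]."""
    return max(abs(1 - alpha * mu), abs(1 - alpha * L))

mu, L = 1.0, 10.0
alpha = 2.0 / (mu + L)  # stepsize balancing the two endpoint rates

# Sweep interior curvatures to confirm no interior m does worse.
sweep = max(abs(1 - alpha * m)
            for m in [mu + i * (L - mu) / 100 for i in range(101)])
rate = gd_rate(alpha, mu, L)
```

With this stepsize the rate is $$(\kappa-1)/(\kappa+1) = 9/11$$, attained by the extreme quadratics with curvature $$\mu$$ or $$L$$; these are exactly the worst-case trajectories for this simple algorithm class.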