Aaron Sidford CV

I received a B.S. There will be a talk every day from 16:00-18:00 CEST from July 26 to August 13. Research Interests: My research interests lie broadly in optimization, the theory of computation, and the design and analysis of algorithms. Emphasis will be on providing mathematical tools for combinatorial optimization. With Bill Fefferman, Soumik Ghosh, Umesh Vazirani, and Zixin Zhou (2022). You interact with data structures even more often than with algorithms (think Google, your mail server, and even your network routers). If you see any typos or issues, feel free to email me.

Winter 2020: teaching assistant for EE364a: Convex Optimization I, taught by John Duchi. Fall 2018. Teaching assistant for CS265/CME309: Randomized Algorithms and Probabilistic Analysis, Fall 2019, taught by Greg Valiant. Here are some lecture notes that I have written over the years. sidford@stanford.edu. I am a fifth-year Ph.D. student in Computer Science at Stanford University, co-advised by Gregory Valiant and John Duchi. "Improved upper and lower bounds on first-order queries for solving \(\min_{x}\max_{i\in[n]}\ell_i(x)\)." Stanford University. (arXiv)

A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization, In Symposium on Foundations of Computer Science (FOCS 2015), Machtey Award for Best Student Paper (arXiv)
Efficient Inverse Maintenance and Faster Algorithms for Linear Programming, In Symposium on Foundations of Computer Science (FOCS 2015) (arXiv)
Competing with the Empirical Risk Minimizer in a Single Pass, With Roy Frostig, Rong Ge, and Sham Kakade, In Conference on Learning Theory (COLT 2015) (arXiv)
Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization, In International Conference on Machine Learning (ICML 2015) (arXiv)
Uniform Sampling for Matrix Approximation, With Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, and Richard Peng, In Innovations in Theoretical Computer Science (ITCS 2015) (arXiv)
Path-Finding Methods for Linear Programming: Solving Linear Programs in \(\tilde{O}(\sqrt{\mathrm{rank}})\) Iterations and Faster Algorithms for Maximum Flow, In Symposium on Foundations of Computer Science (FOCS 2014), Best Paper Award and Machtey Award for Best Student Paper (arXiv)
Single Pass Spectral Sparsification in Dynamic Streams, With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco
An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations, With Jonathan A. Kelner, Yin Tat Lee, and Lorenzo Orecchia, In Symposium on Discrete Algorithms (SODA 2014)
Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems, In Symposium on Foundations of Computer Science (FOCS 2013) (arXiv)
A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time, With Jonathan A. Kelner, Lorenzo Orecchia, and Zeyuan Allen Zhu, In Symposium on the Theory of Computing (STOC 2013) (arXiv), SIAM Journal on Computing (arXiv before merge)
Derandomization beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space, With Jack Murtagh, Omer Reingold, and Salil Vadhan, Book chapter in Building Bridges II: Mathematics of László Lovász, 2020 (arXiv)
Lower Bounds for Finding Stationary Points II: First-Order Methods

Selected recent papers. Improved Lower Bounds for Submodular Function Minimization.
the Operations Research group. Title. Prior to that, I received an MPhil in Scientific Computing at the University of Cambridge on a Churchill Scholarship, where I was advised by Sergio Bacallado. with Yair Carmon, Kevin Tian and Aaron Sidford. [pdf] [slides] July 2015. pdf, Szemerédi Regularity Lemma and Arithmetic Progressions, Annie Marsden. Computer Science. with Kevin Tian and Aaron Sidford. Congratulations to Prof. Aaron Sidford for receiving the Best Paper Award at the 2022 Conference on Learning Theory (COLT 2022)! Nearly Optimal Communication and Query Complexity of Bipartite Matching. With Jack Murtagh, Omer Reingold, and Salil P. Vadhan. "How many \(\epsilon\)-length segments do you need to look at for finding an \(\epsilon\)-optimal minimizer of a convex function on a line?" I am generally interested in algorithms and learning theory, particularly developing algorithms for machine learning with provable guarantees. The following articles are merged in Scholar. With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli. with Yair Carmon, Arun Jambulapati and Aaron Sidford. with Yang P. Liu and Aaron Sidford. Michael B. Cohen, Yin Tat Lee, Gary L. Miller, Jakub Pachocki, and Aaron Sidford. Yin Tat Lee and Aaron Sidford. with Yair Carmon, Aaron Sidford and Kevin Tian [pdf].

2022 - current: Assistant Professor, Georgia Institute of Technology (Georgia Tech). 2022: Visiting researcher, Max Planck Institute for Informatics. University of Cambridge MPhil. It was released on November 10, 2017. Email: [name]@stanford.edu. I hope you enjoy the content as much as I enjoyed teaching the class, and if you have questions or feedback on the notes, feel free to email me. Aaron Sidford. Aaron Sidford is an Assistant Professor in the departments of Management Science and Engineering and Computer Science at Stanford University. University, where resume/cv; publications. Research interests: data streams, machine learning, numerical linear algebra, sketching, and sparse recovery.

Our algorithm combines the derandomized square graph operation (Rozenman and Vadhan, 2005), which we recently used for solving Laplacian systems in nearly logarithmic space (Murtagh, Reingold, Sidford, and Vadhan, 2017), with ideas from (Cheng, Cheng, Liu, Peng, and Teng, 2015), which gave an algorithm that is time-efficient (while ours is space-efficient). NeurIPS Smooth Games Optimization and Machine Learning Workshop, 2019, Variance Reduction for Matrix Games. Google Scholar Digital Library; Russell Lyons and Yuval Peres. 2023. Allen Liu. to be advised by Prof. Dongdong Ge. Discrete Mathematics and Algorithms: An Introduction to Combinatorial Optimization: I used these notes to accompany the course Discrete Mathematics and Algorithms. Slides from my talk at ITCS. My interests are in the intersection of algorithms, statistics, optimization, and machine learning. With Cameron Musco and Christopher Musco. Roy Frostig, Sida Wang, Percy Liang, Chris Manning. Many of these algorithms are iterative and solve a sequence of smaller subproblems, whose solution can be maintained via the aforementioned dynamic algorithms. The site facilitates research and collaboration in academic endeavors. ICML, 2016. 2021. SHUFE, Oct. 2022 - Algorithm Seminar, Google Research, Oct. 2022 - Young Researcher Workshop, Cornell ORIE, Apr. Before Stanford, I worked with John Lafferty at the University of Chicago. [pdf] Semantic parsing on Freebase from question-answer pairs.
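The question quoted earlier in this section, about how many \(\epsilon\)-length segments one must inspect to find an \(\epsilon\)-optimal minimizer of a convex function on a line, can be contrasted with the classical baseline: repeatedly shrinking the search interval by a constant factor, which needs roughly \(\log(1/\epsilon)\) rounds of queries. The sketch below is only that textbook routine (the function name and test function are illustrative assumptions), not the improved upper or lower bounds the blurb refers to.

```python
def ternary_search_min(f, lo, hi, eps):
    """Minimize a convex (hence unimodal) function on [lo, hi] to within eps.

    Classical baseline only: each round of two function queries shrinks the
    interval by a factor of 2/3, so about log((hi - lo) / eps) rounds suffice.
    The segment-query model in the research question above is different and
    is not captured by this routine.
    """
    queries = 0
    while hi - lo > eps:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        queries += 2
        if f(m1) <= f(m2):
            hi = m2  # for convex f, no minimizer lies strictly right of m2
        else:
            lo = m1  # otherwise no minimizer lies strictly left of m1
    return (lo + hi) / 2.0, queries


if __name__ == "__main__":
    x_star, q = ternary_search_min(lambda x: (x - 0.3) ** 2, 0.0, 1.0, 1e-6)
    print(x_star, q)  # roughly 0.3, with q growing like log(1/eps)
```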
ACM-SIAM Symposium on Discrete Algorithms (SODA), 2022, Stochastic Bias-Reduced Gradient Methods. I am an assistant professor in the department of Management Science and Engineering and the department of Computer Science at Stanford University. I regularly advise Stanford students from a variety of departments. with Aaron Sidford. Enrichment of Network Diagrams for Potential Surfaces. I am broadly interested in optimization problems, sometimes in the intersection with machine learning theory and graph applications. If you have been admitted to Stanford, please reach out to discuss the possibility of rotating or working together.

Optimization and Algorithmic Paradigms (CS 261): Winter '23
Optimization Algorithms (CS 369O / CME 334 / MS&E 312): Fall '22
Discrete Mathematics and Algorithms (CME 305 / MS&E 315): Winter '22, '21, '20, '19, '18
Introduction to Optimization Theory (CS 269O / MS&E 213): Fall '20, '19, Spring '19, '18, '17
Almost Linear Time Graph Algorithms (CS 269G / MS&E 313): Fall '18, Winter '17

Optimization Algorithms: I used variants of these notes to accompany the courses Introduction to Optimization Theory and Optimization. Nima Anari, Yang P. Liu, Thuy-Duong Vuong, Maximum Flow and Minimum-Cost Flow in Almost Linear Time, FOCS 2022, Best Paper. Assistant Professor of Management Science and Engineering and of Computer Science. Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Efficient Convex Optimization Requires Superlinear Memory. Aleksander Mądry; Generalized preconditioning and network flow problems. Yujia Jin. We are excited to have Professor Sidford join the Management Science & Engineering faculty starting Fall 2016. Congratulations to Prof. Aaron Sidford for receiving the Best Paper Award at the 2022 Conference on Learning Theory (COLT 2022)! Neural Information Processing Systems (NeurIPS, Oral), 2020, Coordinate Methods for Matrix Games. Data structures that maintain properties of dynamically changing graphs and matrices, such as distances in a graph or the solution of a linear system. My research was supported by the National Defense Science and Engineering Graduate (NDSEG) Fellowship from 2018-2021, and by a Google PhD Fellowship from 2022-2023. Done under the mentorship of M. Malliaris. Summer 2022: I am currently a research scientist intern at DeepMind in London. Office: 380-T. Selected for oral presentation. Roy Frostig, Rong Ge, Sham M. Kakade, Aaron Sidford. Stanford, CA 94305. July 8, 2022. In September 2018, I started a PhD at Stanford University in mathematics, and am advised by Aaron Sidford. I am an Assistant Professor in the School of Computer Science at Georgia Tech. Here is a slightly more formal third-person biography, and here is a recent-ish CV. CS265/CME309: Randomized Algorithms and Probabilistic Analysis, Fall 2019. Google Scholar; Probability on trees and networks. Etude for the Park City Math Institute Undergraduate Summer School. with Arun Jambulapati, Aaron Sidford and Kevin Tian. Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. "A general continuous optimization framework for better dynamic (decremental) matching algorithms." We will start with a primer week to learn the very basics of continuous optimization (July 26 - July 30), followed by two weeks of talks by the speakers on more advanced topics.
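One concrete instance of the dynamic data structures mentioned above, maintaining the solution of a linear system as the underlying matrix changes, is classical inverse maintenance via rank-one updates (the idea behind the "Efficient Inverse Maintenance and Faster Algorithms for Linear Programming" entry listed earlier). The sketch below shows only that textbook primitive, the Sherman-Morrison formula, in a dense toy setting; class and method names are illustrative assumptions, and the data structures in the cited work are far more sophisticated.

```python
import numpy as np


class InverseMaintainer:
    """Maintain the inverse of a square matrix under rank-one updates.

    Rather than refactorizing after every change, apply Sherman-Morrison:
        (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u),
    which costs O(n^2) per update instead of O(n^3) per solve-from-scratch.
    Toy illustration only; not the actual LP data structure.
    """

    def __init__(self, A):
        self.inv = np.linalg.inv(A)

    def rank_one_update(self, u, v):
        Au = self.inv @ u
        vA = v @ self.inv
        denom = 1.0 + v @ Au
        if abs(denom) < 1e-12:
            raise ValueError("update makes the matrix numerically singular")
        self.inv -= np.outer(Au, vA) / denom

    def solve(self, b):
        return self.inv @ b


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5)) + 5 * np.eye(5)
    m = InverseMaintainer(A)
    u, v, b = (rng.standard_normal(5) for _ in range(3))
    m.rank_one_update(u, v)
    # agrees with solving the updated system from scratch
    print(np.allclose(m.solve(b), np.linalg.solve(A + np.outer(u, v), b)))
```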
In submission.

2014 IEEE 55th Annual Symposium on Foundations of Computer Science, 424-433, SIAM Journal on Optimization 28 (2), 1751-1772, Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete, 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, 1049-1065, 2013 IEEE 54th Annual Symposium on Foundations of Computer Science, 147-156, Proceedings of the forty-fifth annual ACM symposium on Theory of computing, MB Cohen, YT Lee, C Musco, C Musco, R Peng, A Sidford, Proceedings of the 2015 Conference on Innovations in Theoretical Computer, Advances in Neural Information Processing Systems 31, M Kapralov, YT Lee, CN Musco, CP Musco, A Sidford, SIAM Journal on Computing 46 (1), 456-477, P Jain, S Kakade, R Kidambi, P Netrapalli, A Sidford, MB Cohen, YT Lee, G Miller, J Pachocki, A Sidford, Proceedings of the forty-eighth annual ACM symposium on Theory of Computing, International Conference on Machine Learning, 2540-2548, P Jain, SM Kakade, R Kidambi, P Netrapalli, A Sidford, 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, 230-249, Mathematical Programming 184 (1-2), 71-120, P Jain, C Jin, SM Kakade, P Netrapalli, A Sidford, International Conference on Machine Learning, 654-663, Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete, D Garber, E Hazan, C Jin, SM Kakade, C Musco, P Netrapalli, A Sidford.

Path finding methods for linear programming: Solving linear programs in \(\tilde{O}(\sqrt{\mathrm{rank}})\) iterations and faster algorithms for maximum flow
Accelerated methods for nonconvex optimization
An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations
A faster cutting plane method and its implications for combinatorial and convex optimization
Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems
A simple, combinatorial algorithm for solving SDD systems in nearly-linear time
Uniform sampling for matrix approximation
Near-optimal time and sample complexities for solving Markov decision processes with a generative model
Single pass spectral sparsification in dynamic streams
Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization
Accelerating stochastic gradient descent for least squares regression
Efficient inverse maintenance and faster algorithms for linear programming
Lower bounds for finding stationary points I
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm
Convex Until Proven Guilty: Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions
Competing with the empirical risk minimizer in a single pass
Variance reduced value iteration and faster algorithms for solving Markov decision processes
Robust shift-and-invert preconditioning: Faster and more sample efficient algorithms for eigenvector computation

In this talk, I will present a new algorithm for solving linear programs. Faster energy maximization for faster maximum flow. Management Science & Engineering. Towards this goal, some fundamental questions need to be solved, such as how can machines learn models of their environments that are useful for performing tasks.
Another research focus is optimization algorithms. SODA 2023: 4667-4767. with Hilal Asi, Yair Carmon, Arun Jambulapati and Aaron Sidford. "About how and why coordinate (variance-reduced) methods are a good idea for exploiting (numerical) sparsity of data." with Yair Carmon, Arun Jambulapati and Aaron Sidford. Improves the stochastic convex optimization problem in the parallel and DP settings.

Aaron Sidford (sidford@stanford.edu). Welcome! This page has information and lecture notes from the course "Introduction to Optimization Theory" (MS&E 213 / CS 269O), which I taught in Fall 2019. Anup B. Rao. Fall '22: 8803 - Dynamic Algebraic Algorithms; a small tool to obtain upper bounds for such algebraic algorithms. with Vidya Muthukumar and Aaron Sidford. with Aaron Sidford. In Foundations of Computer Science (FOCS 2013), IEEE 54th Annual Symposium. Aaron Sidford's 143 research works with 2,861 citations and 1,915 reads, including: Singular Value Approximation and Reducing Directed to Undirected Graph Sparsification. Sequential Matrix Completion.

Conference Publications. 2023: The Complexity of Infinite-Horizon General-Sum Stochastic Games, With Yujia Jin, Vidya Muthukumar, Aaron Sidford, To appear in Innovations in Theoretical Computer Science (ITCS 2023) (arXiv). 2022: Optimal and Adaptive Monteiro-Svaiter Acceleration, With Yair Carmon. Our method improves upon the convergence rate of previous state-of-the-art linear programming methods. Gary L. Miller, Carnegie Mellon University. Aaron Sidford. ICML Workshop on Reinforcement Learning Theory, 2021, Variance Reduction for Matrix Games. Yu Gao, Yang P. Liu, Richard Peng, Faster Divergence Maximization for Faster Maximum Flow, FOCS 2020. "A special case where variance reduction can be used for nonconvex optimization (monotone operators)." Before joining Stanford in Fall 2016, I was an NSF post-doctoral fellow at Carnegie Mellon University; I received a Ph.D. in mathematics from the University of Michigan in 2014, and a B.A. Neural Information Processing Systems (NeurIPS, Spotlight), 2019, Variance Reduction for Matrix Games. Neural Information Processing Systems (NeurIPS, Oral), 2019, A Near-Optimal Method for Minimizing the Maximum of N Convex Loss Functions. "Collection of variance-reduced / coordinate methods for solving matrix games, with simplex or Euclidean ball domains." I am particularly interested in work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures. Aaron Sidford, Stanford University. arXiv | conference pdf (alphabetical authorship), Jonathan Kelner, Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Honglin Yuan, Big-Step-Little-Step: Gradient Methods for Objectives with Multiple Scales. 2021 - 2022: Postdoc, Simons Institute & UC.
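To illustrate the blurb above about coordinate (variance-reduced) methods exploiting numerical sparsity: in a coordinate step for least squares, both the gradient coordinate and the residual update touch only the nonzeros of a single column of the data matrix. The sketch below is a plain randomized coordinate descent baseline (function name and the least-squares objective are assumptions for illustration), not the accelerated or variance-reduced methods from the papers listed here.

```python
import numpy as np
import scipy.sparse as sp


def coordinate_descent_lsq(A, b, num_steps, seed=0):
    """Randomized coordinate descent on f(x) = 0.5 * ||Ax - b||^2.

    Each step picks a column j, takes the exact minimizing step in coordinate
    j, and updates the residual r = Ax - b by touching only the nonzeros of
    that column, which is why coordinate methods pair well with sparse data.
    """
    A = sp.csc_matrix(A)
    n = A.shape[1]
    col_sq_norms = np.asarray(A.multiply(A).sum(axis=0)).ravel()
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    r = -np.asarray(b, dtype=float)  # residual Ax - b at x = 0
    for _ in range(num_steps):
        j = rng.integers(n)
        if col_sq_norms[j] == 0.0:
            continue
        start, end = A.indptr[j], A.indptr[j + 1]
        rows, vals = A.indices[start:end], A.data[start:end]
        grad_j = vals @ r[rows]        # A_j^T (Ax - b), only nnz(A_j) work
        delta = -grad_j / col_sq_norms[j]
        x[j] += delta
        r[rows] += delta * vals        # maintain the residual sparsely
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = sp.random(200, 50, density=0.05, format="csc", random_state=1)
    b = A @ rng.standard_normal(50)
    x_hat = coordinate_descent_lsq(A, b, num_steps=20000)
    print(np.linalg.norm(A @ x_hat - b))  # residual shrinks toward zero
```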
Honorable Mention for the 2015 ACM Doctoral Dissertation Award went to Aaron Sidford of the Massachusetts Institute of Technology, and Siavash Mirarab of the University of Texas at Austin. 2017. "An attempt to make Monteiro-Svaiter acceleration practical: no binary search and no need to know the smoothness parameter!"

International Conference on Machine Learning (ICML), 2020, Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG
In Symposium on Foundations of Computer Science (FOCS 2017) (arXiv)
"Convex Until Proven Guilty": Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions, With Yair Carmon, John C. Duchi, and Oliver Hinder, In International Conference on Machine Learning (ICML 2017) (arXiv)
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs, With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Anup B. Rao, and Adrian Vladu, In Symposium on Theory of Computing (STOC 2017)
Subquadratic Submodular Function Minimization, With Deeparnab Chakrabarty, Yin Tat Lee, and Sam Chiu-wai Wong, In Symposium on Theory of Computing (STOC 2017) (arXiv)
Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More, With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, and Adrian Vladu, In Symposium on Foundations of Computer Science (FOCS 2016) (arXiv)
With Michael B. Cohen, Yin Tat Lee, Gary L. Miller, and Jakub Pachocki, In Symposium on Theory of Computing (STOC 2016) (arXiv)
With Alina Ene, Gary L. Miller, and Jakub Pachocki
Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm, With Prateek Jain, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli, In Conference on Learning Theory (COLT 2016) (arXiv)
Principal Component Projection Without Principal Component Analysis, With Roy Frostig, Cameron Musco, and Christopher Musco, In International Conference on Machine Learning (ICML 2016) (arXiv)
Faster Eigenvector Computation via Shift-and-Invert Preconditioning, With Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, and Praneeth Netrapalli
Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis

My research is on the design and theoretical analysis of efficient algorithms and data structures. Given a linear program with n variables, m > n constraints, and bit complexity L, our algorithm runs in \(\tilde{O}(\sqrt{n}\,L)\) iterations, each consisting of solving \(\tilde{O}(1)\) linear systems and additional nearly linear time computation. Eigenvalues of the Laplacian and their relationship to the connectedness of a graph. Yair Carmon. [5] Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, Aaron Sidford, Kevin Tian. with Hilal Asi, Yair Carmon, Arun Jambulapati and Aaron Sidford. SODA 2023: 5068-5089. Journal of Machine Learning Research, 2017 (arXiv). Neural Information Processing Systems (NeurIPS), 2021, Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss. I develop new iterative methods and dynamic algorithms that complement each other, resulting in improved optimization algorithms. Yujia Jin. [pdf] [talk]
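For intuition about iterations that "consist of solving a small number of linear systems" in the linear-programming result quoted above, here is a minimal textbook log-barrier sketch: each Newton step reduces to one linear system solve, and the barrier parameter is increased between centering phases. This is only the classical method on a toy problem with an assumed strictly feasible starting point (function names are illustrative); it is not the \(\tilde{O}(\sqrt{n}\,L)\)-iteration path-following algorithm itself.

```python
import numpy as np


def barrier_lp_step(A, b, c, x, t):
    """One damped Newton step for min t*c^T x - sum(log x) s.t. Ax = b, x > 0."""
    n, m = len(x), A.shape[0]
    g = t * c - 1.0 / x                    # gradient of the barrier objective
    H = np.diag(1.0 / x ** 2)              # Hessian of the barrier
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, np.zeros(m)])
    dx = np.linalg.solve(K, rhs)[:n]       # the single linear system per step
    step = 1.0
    while np.any(x + step * dx <= 0):      # damp to stay strictly positive
        step *= 0.5
    return x + 0.9 * step * dx


def barrier_lp(A, b, c, x0, t=1.0, mu=5.0, outer=10, inner=20):
    """Crude barrier method: Newton-center, then increase t, and repeat."""
    x = x0.copy()
    for _ in range(outer):
        for _ in range(inner):
            x = barrier_lp_step(A, b, c, x, t)
        t *= mu
    return x


if __name__ == "__main__":
    # min x0 + 2*x1 subject to x0 + x1 = 1, x >= 0 (optimum near x = (1, 0))
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])
    c = np.array([1.0, 2.0])
    print(barrier_lp(A, b, c, x0=np.array([0.5, 0.5])))
```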
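The note above about eigenvalues of the Laplacian and connectedness refers to a standard fact: the graph Laplacian \(L = D - A\) of an undirected graph is positive semidefinite, the multiplicity of its zero eigenvalue equals the number of connected components, and the graph is connected exactly when its second-smallest eigenvalue (the algebraic connectivity) is positive. A small dense-numpy sketch checking this numerically (function name and tolerance are illustrative choices):

```python
import numpy as np


def laplacian_connected_components(adj, tol=1e-9):
    """Count connected components from the spectrum of the graph Laplacian."""
    adj = np.asarray(adj, dtype=float)
    degree = np.diag(adj.sum(axis=1))
    laplacian = degree - adj                  # L = D - A, symmetric PSD
    eigvals = np.linalg.eigvalsh(laplacian)   # sorted ascending
    return int(np.sum(eigvals < tol))         # multiplicity of eigenvalue 0


if __name__ == "__main__":
    # a path on 3 vertices plus an isolated vertex: two components
    adj = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 0],
                    [0, 0, 0, 0]])
    print(laplacian_connected_components(adj))  # 2
```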
arXiv | conference pdf (alphabetical authorship), Jonathan Kelner, Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Honglin Yuan, Big-Step-Little-Step: Gradient Methods for Objectives with Multiple Scales. BayLearn, 2021, On the Sample Complexity of Average-reward MDPs. "I am excited to push the theory of optimization and algorithm design to new heights!" Assistant Professor Aaron Sidford speaks at ICME's Xpo event. 9-21. In Symposium on Foundations of Computer Science (FOCS 2020), Invited to the special issue (arXiv). Faculty Spotlight: Aaron Sidford. Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Efficient Convex Optimization Requires Superlinear Memory. with Yair Carmon, Aaron Sidford and Kevin Tian. They will share a $10,000 prize, with financial sponsorship provided by Google Inc. [pdf] [poster] They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission. [pdf] [poster] 2022 - Learning and Games Program, Simons Institute, Sept. 2021 - Young Researcher Workshop, Cornell ORIE, Sept. 2021 - ACO Student Seminar, Georgia Tech, Dec. 2019 - NeurIPS Spotlight presentation. 2013. pdf, Fourier Transformation at a Representation, Annie Marsden. I am broadly interested in optimization problems, sometimes in the intersection with machine learning theory and graph applications. Contact. Research Institute for Interdisciplinary Sciences (RIIS). My long term goal is to bring robots into human-centered domains such as homes and hospitals. arXiv | conference pdf, Annie Marsden, Sergio Bacallado. Aaron Sidford, Gregory Valiant, Honglin Yuan, COLT, 2022. arXiv | pdf. [pdf] [poster] [c7] Sivakanth Gopi, Yin Tat Lee, Daogao Liu, Ruoqi Shen, Kevin Tian: Private Convex Optimization in General Norms. We organize regular talks and if you are interested and are Stanford affiliated, feel free to reach out (from a Stanford email). 2016. This work presents an accelerated gradient method for nonconvex optimization problems with Lipschitz continuous first and second derivatives that is Hessian-free, i.e., it only requires gradient computations, and is therefore suitable for large-scale applications. He received his PhD from the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, where he was advised by Jonathan Kelner. 4026. [last name]@stanford.edu where [last name]=sidford. In Symposium on Theory of Computing (STOC 2020) (arXiv), Constant Girth Approximation for Directed Graphs in Subquadratic Time, With Shiri Chechik, Yang P. Liu, and Omer Rotem.
Leverage Score Sampling for Faster Accelerated Regression and ERM, With Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin Tat Lee, and Praneeth Netrapalli, In International Conference on Algorithmic Learning Theory (ALT 2020) (arXiv)
Near-optimal Approximate Discrete and Continuous Submodular Function Minimization, In Symposium on Discrete Algorithms (SODA 2020) (arXiv)
Fast and Space Efficient Spectral Sparsification in Dynamic Streams, With Michael Kapralov, Aida Mousavifar, Cameron Musco, Christopher Musco, Navid Nouri, and Jakab Tardos, In Conference on Neural Information Processing Systems (NeurIPS 2019)
Complexity of Highly Parallel Non-Smooth Convex Optimization, With Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, and Yuanzhi Li
Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG
A Direct \(\tilde{O}(1/\epsilon)\) Iteration Parallel Algorithm for Optimal Transport, In Conference on Neural Information Processing Systems (NeurIPS 2019) (arXiv)
A General Framework for Efficient Symmetric Property Estimation, With Moses Charikar and Kirankumar Shiragur
Parallel Reachability in Almost Linear Work and Square Root Depth, In Symposium on Foundations of Computer Science (FOCS 2019) (arXiv)
With Deeparnab Chakrabarty, Yin Tat Lee, Sahil Singla, and Sam Chiu-wai Wong
Deterministic Approximation of Random Walks in Small Space, With Jack Murtagh, Omer Reingold, and Salil P. Vadhan, In International Workshop on Randomization and Computation (RANDOM 2019)
A Rank-1 Sketch for Matrix Multiplicative Weights, With Yair Carmon, John C. Duchi, and Kevin Tian, In Conference on Learning Theory (COLT 2019) (arXiv)
Near-optimal method for highly smooth convex optimization
Efficient profile maximum likelihood for universal symmetric property estimation, In Symposium on Theory of Computing (STOC 2019) (arXiv)
Memory-sample tradeoffs for linear regression with small error
Perron-Frobenius Theory in Nearly Linear Time: Positive Eigenvectors, M-matrices, Graph Kernels, and Other Applications, With AmirMahdi Ahmadinejad, Arun Jambulapati, and Amin Saberi, In Symposium on Discrete Algorithms (SODA 2019) (arXiv)
Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression, In Conference on Neural Information Processing Systems (NeurIPS 2018) (arXiv)
Near-Optimal Time and Sample Complexities for Solving Discounted Markov Decision Process with a Generative Model, With Mengdi Wang, Xian Wu, Lin F. Yang, and Yinyu Ye
Coordinate Methods for Accelerating Regression and Faster Approximate Maximum Flow, In Symposium on Foundations of Computer Science (FOCS 2018)
Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations, With Michael B. Cohen, Jonathan A. Kelner, Rasmus Kyng, John Peebles, Richard Peng, and Anup B. Rao, In Symposium on Foundations of Computer Science (FOCS 2018) (arXiv)
Efficient Convex Optimization with Membership Oracles, In Conference on Learning Theory (COLT 2018) (arXiv)
Accelerating Stochastic Gradient Descent for Least Squares Regression, With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli
Approximating Cycles in Directed Graphs: Fast Algorithms for Girth and Roundtrip Spanners.

