Kimon Antonakopoulos
LIONS-EPFL
Verified email at epfl.ch
Title · Cited by · Year
Adaptive learning in continuous games: Optimal regret bounds and convergence to nash equilibrium
YG Hsieh, K Antonakopoulos, P Mertikopoulos
Conference on Learning Theory, 2388-2422, 2021
Cited by 61 · 2021
An adaptive mirror-prox method for variational inequalities with singular operators
K Antonakopoulos, V Belmega, P Mertikopoulos
Advances in Neural Information Processing Systems 32, 2019
Cited by 49 · 2019
Adaptive extra-gradient methods for min-max optimization and games
K Antonakopoulos, EV Belmega, P Mertikopoulos
arXiv preprint arXiv:2010.12100, 2020
Cited by 47 · 2020
Online and stochastic optimization beyond Lipschitz continuity: A Riemannian approach
K Antonakopoulos, EV Belmega, P Mertikopoulos
ICLR 2020-International Conference on Learning Representations, 1-20, 2020
Cited by 20 · 2020
On the generalization of stochastic gradient descent with momentum
A Ramezani-Kebrya, A Khisti, B Liang
Feb, 2021
Cited by 19 · 2021
AdaGrad avoids saddle points
K Antonakopoulos, P Mertikopoulos, G Piliouras, X Wang
International Conference on Machine Learning, 731-771, 2022
Cited by 18 · 2022
No-regret learning in games with noisy feedback: Faster rates and adaptivity via learning rate separation
YG Hsieh, K Antonakopoulos, V Cevher, P Mertikopoulos
Advances in Neural Information Processing Systems 35, 6544-6556, 2022
Cited by 17 · 2022
Fast routing under uncertainty: Adaptive learning in congestion games via exponential weights
DQ Vu, K Antonakopoulos, P Mertikopoulos
Advances in Neural Information Processing Systems 34, 14708-14720, 2021
Cited by 17 · 2021
Adaptive first-order methods revisited: Convex minimization without Lipschitz requirements
K Antonakopoulos, P Mertikopoulos
Advances in Neural Information Processing Systems 34, 19056-19068, 2021
Cited by 15 · 2021
Sifting through the noise: Universal first-order methods for stochastic variational inequalities
K Antonakopoulos, T Pethick, A Kavis, P Mertikopoulos, V Cevher
Advances in Neural Information Processing Systems 34, 13099-13111, 2021
Cited by 12 · 2021
Adaptive stochastic variance reduction for non-convex finite-sum minimization
A Kavis, S Skoulakis, K Antonakopoulos, LT Dadi, V Cevher
Advances in Neural Information Processing Systems 35, 23524-23538, 2022
Cited by 11 · 2022
Extra-Newton: A first approach to noise-adaptive accelerated second-order methods
K Antonakopoulos, A Kavis, V Cevher
Advances in Neural Information Processing Systems 35, 29859-29872, 2022
Cited by 7 · 2022
UnderGrad: A universal black-box optimization method with almost dimension-free convergence rate guarantees
K Antonakopoulos, DQ Vu, V Cevher, K Levy, P Mertikopoulos
International Conference on Machine Learning, 772-795, 2022
Cited by 5 · 2022
Advancing the lower bounds: An accelerated, stochastic, second-order method with optimal adaptation to inexactness
A Agafonov, D Kamzolov, A Gasnikov, K Antonakopoulos, V Cevher, ...
arXiv preprint arXiv:2309.01570, 2023
Cited by 1 · 2023
Distributed extra-gradient with optimal complexity and communication guarantees
A Ramezani-Kebrya, K Antonakopoulos, I Krawczuk, J Deschenaux, ...
arXiv preprint arXiv:2308.09187, 2023
Cited by 1 · 2023
Universal Gradient Methods for Stochastic Convex Optimization
A Rodomanov, A Kavis, Y Wu, K Antonakopoulos, V Cevher
arXiv preprint arXiv:2402.03210, 2024
2024
Adaptive Bilevel Optimization
K Antonakopoulos, S Sabach, L Viano, M Hong, V Cevher
2023
Adaptive Algorithms for Optimization Beyond Lipschitz Requirements
K Antonakopoulos
Université Grenoble Alpes, 2022
2022
Routing in an Uncertain World: Adaptivity, Efficiency, and Equilibrium
DQ Vu, K Antonakopoulos, P Mertikopoulos
arXiv preprint arXiv:2201.02985, 2022
2022