Martin Takáč
Verified email at lehigh.edu - Homepage
Title · Cited by · Year
Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
P Richtárik, M Takáč
Mathematical Programming 144 (1), 1-38, 2014
680 · 2014
Parallel coordinate descent methods for big data optimization
P Richtárik, M Takáč
Mathematical Programming, Series A, 1-52, 2015
459 · 2015
Communication-efficient distributed dual coordinate ascent
M Jaggi, V Smith, M Takáč, J Terhorst, S Krishnan, T Hofmann, MI Jordan
arXiv preprint arXiv:1409.1458, 2014
319 · 2014
SARAH: A novel method for machine learning problems using stochastic recursive gradient
L Nguyen, J Liu, K Scheinberg, M Takáč
In 34th International Conference on Machine Learning, ICML 2017, 2017
273 · 2017
Reinforcement learning for solving the vehicle routing problem
M Nazari, A Oroojlooy, LV Snyder, M Takáč
arXiv preprint arXiv:1802.04240, 2018
239 · 2018
Mini-batch semi-stochastic gradient descent in the proximal setting
J Konečný, J Liu, P Richtárik, M Takáč
IEEE Journal of Selected Topics in Signal Processing 10 (2), 242-255, 2015
236 · 2015
Distributed coordinate descent method for learning with big data
P Richtárik, M Takáč
Journal of Machine Learning Research 17, 1-25, 2016
199 · 2016
Mini-batch primal and dual methods for SVMs
M Takáč, A Bijral, P Richtárik, N Srebro
In 30th International Conference on Machine Learning, ICML 2013, 2013
192* · 2013
Adding vs. averaging in distributed primal-dual optimization
C Ma, V Smith, M Jaggi, MI Jordan, P Richtárik, M Takáč
In 32nd International Conference on Machine Learning, ICML 2015, 2015
168 · 2015
CoCoA: A general framework for communication-efficient distributed optimization
V Smith, S Forte, M Chenxin, M Takáč, MI Jordan, M Jaggi
Journal of Machine Learning Research 18, 230, 2018
148 · 2018
On optimal probabilities in stochastic coordinate descent methods
P Richtárik, M Takáč
Optimization Letters, 2015, 1-11, 2015
103 · 2015
Distributed optimization with arbitrary local solvers
C Ma, J Konečný, M Jaggi, V Smith, MI Jordan, P Richtárik, M Takáč
Optimization Methods and Software 32 (4), 813-848, 2017
100 · 2017
SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
LM Nguyen, PH Nguyen, M van Dijk, P Richtárik, K Scheinberg, M Takáč
In 34th International Conference on Machine Learning, ICML 2018, 2018
99 · 2018
SDNA: stochastic dual newton ascent for empirical risk minimization
Z Qu, P Richtárik, M Takáč, O Fercoq
In 33rd International Conference on Machine Learning, ICML 2016, 2016
84 · 2016
A Multi-Batch L-BFGS Method for Machine Learning
AS Berahas, J Nocedal, M Takáč
The Thirtieth Annual Conference on Neural Information Processing Systems (NIPS), 2016
72 · 2016
Stochastic recursive gradient algorithm for nonconvex optimization
LM Nguyen, J Liu, K Scheinberg, M Takáč
arXiv preprint arXiv:1705.07261, 2017
60 · 2017
Fast distributed coordinate descent for non-strongly convex losses
O Fercoq, Z Qu, P Richtárik, M Takáč
IEEE Workshop on Machine Learning for Signal Processing, 2014, 2014
60 · 2014
Primal-Dual Rates and Certificates
C Dünner, S Forte, M Takáč, M Jaggi
In 33rd International Conference on Machine Learning, ICML 2016, 2016
58 · 2016
Efficient serial and parallel coordinate descent methods for huge-scale truss topology design
P Richtárik, M Takáč
Operations Research Proceedings 2011, 27-32, 2012
55 · 2012
Stochastic reformulations of linear systems: algorithms and convergence theory
P Richtárik, M Takáč
SIAM Journal on Matrix Analysis and Applications 41 (2), 487-524, 2020
54 · 2020