SIAM Journal on Mathematics of Data Science

Papers
(The TQCC of SIAM Journal on Mathematics of Data Science is 5. The table below lists the papers above that threshold, based on CrossRef citation counts [max. 250 papers]. Only publications from the past four years, i.e., 2021-12-01 to 2025-12-01, are included.)
Article | Citations
Spectral Barron Space for Deep Neural Network Approximation | 40
A Simple and Optimal Algorithm for Strict Circular Seriation | 31
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization | 29
Taming Neural Networks with TUSLA: Nonconvex Learning via Adaptive Stochastic Gradient Langevin Algorithms | 28
A Note on the Regularity of Images Generated by Convolutional Neural Networks | 23
Randomized Nyström Approximation of Non-negative Self-Adjoint Operators | 23
On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions | 23
New Equivalences between Interpolation and SVMs: Kernels and Structured Features | 22
Block Majorization Minimization with Extrapolation and Application to $\beta$-NMF | 17
Poisson Reweighted Laplacian Uncertainty Sampling for Graph-Based Active Learning | 17
Resolving the Mixing Time of the Langevin Algorithm to Its Stationary Distribution for Log-Concave Sampling | 16
Learning Functions Varying along a Central Subspace | 14
Efficient Algorithms for Regularized Nonnegative Scale-Invariant Low-Rank Approximation Models | 13
Online Machine Teaching under Learner Uncertainty: Gradient Descent Learners of a Quadratic Loss | 13
Quantitative Approximation Results for Complex-Valued Neural Networks | 13
Deep Block Proximal Linearized Minimization Algorithm for Nonconvex Inverse Problems | 13
Nonlinear Tomographic Reconstruction via Nonsmooth Optimization | 12
Nonbacktracking Spectral Clustering of Nonuniform Hypergraphs | 11
Wassmap: Wasserstein Isometric Mapping for Image Manifold Learning | 11
Safe Rules for the Identification of Zeros in the Solutions of the SLOPE Problem | 11
Persistent Laplacians: Properties, Algorithms and Implications | 11
CA-PCA: Manifold Dimension Estimation, Adapted for Curvature | 10
Nonlinear Meta-learning Can Guarantee Faster Rates | 10
Convergence of a Piggyback-Style Method for the Differentiation of Solutions of Standard Saddle-Point Problems | 9
Asymptotics of the Sketched Pseudoinverse | 9
Covariance Alignment: From Maximum Likelihood Estimation to Gromov–Wasserstein | 9
Stochastic Variance-Reduced Majorization-Minimization Algorithms | 9
The GenCol Algorithm for High-Dimensional Optimal Transport: General Formulation and Application to Barycenters and Wasserstein Splines | 9
A Variational Formulation of Accelerated Optimization on Riemannian Manifolds | 9
Scalable Tensor Methods for Nonuniform Hypergraphs | 9
Function-Space Optimality of Neural Architectures with Multivariate Nonlinearities | 9
Inverse Evolution Layers: Physics-Informed Regularizers for Image Segmentation | 9
Bi-Invariant Dissimilarity Measures for Sample Distributions in Lie Groups | 8
The Sample Complexity of Sparse Multireference Alignment and Single-Particle Cryo-Electron Microscopy | 8
On Neural Network Approximation of Ideal Adversarial Attack and Convergence of Adversarial Training | 8
Random Multitype Spanning Forests for Synchronization on Sparse Graphs | 8
Convergence of Gradient Descent for Recurrent Neural Networks: A Nonasymptotic Analysis | 8
Group-Invariant Tensor Train Networks for Supervised Learning | 8
Optimal Dorfman Group Testing for Symmetric Distributions | 7
Numerical Considerations and a New Implementation for Invariant Coordinate Selection | 7
Efficient Identification of Butterfly Sparse Matrix Factorizations | 7
Finite-Time Analysis of Natural Actor-Critic for POMDPs | 7
A Nonlinear Matrix Decomposition for Mining the Zeros of Sparse Data | 7
The Geometric Median and Applications to Robust Mean Estimation | 7
Supervised Gromov–Wasserstein Optimal Transport with Metric-Preserving Constraints | 7
Benefit of Interpolation in Nearest Neighbor Algorithms | 7
Computing Wasserstein Barycenters via Operator Splitting: The Method of Averaged Marginals | 7
Post-training Quantization for Neural Networks with Provable Guarantees | 6
Operator Shifting for General Noisy Matrix Systems | 6
Randomized Wasserstein Barycenter Computation: Resampling with Statistical Guarantees | 6
LASSO Reloaded: A Variational Analysis Perspective with Applications to Compressed Sensing | 6
Adaptive Joint Distribution Learning | 6
Sequential Construction and Dimension Reduction of Gaussian Processes Under Inequality Constraints | 6
ABBA Neural Networks: Coping with Positivity, Expressivity, and Robustness | 6
Robust Classification Under $\ell_0$ Attack for the Gaussian Mixture Model | 6
Complete and Continuous Invariants of 1-Periodic Sequences in Polynomial Time | 6
Memory Capacity of Two Layer Neural Networks with Smooth Activations | 5
Feel-Good Thompson Sampling for Contextual Bandits and Reinforcement Learning | 5
HADES: Fast Singularity Detection with Local Measure Comparison | 5
Phase Retrieval with Semialgebraic and ReLU Neural Network Priors | 5
A Unifying Generative Model for Graph Learning Algorithms: Label Propagation, Graph Convolutions, and Combinations | 5
Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks | 5
Fast Kernel Summation in High Dimensions via Slicing and Fourier Transforms | 5
Sensitivity-Informed Provable Pruning of Neural Networks | 5
KL Convergence Guarantees for Score Diffusion Models under Minimal Data Assumptions | 5
Accelerated and Instance-Optimal Policy Evaluation with Linear Function Approximation | 5