SIAM Journal on Mathematics of Data Science

Papers
(The TQCC of SIAM Journal on Mathematics of Data Science is 4. The table below lists papers at or above that threshold based on CrossRef citation counts [max. 250 papers], covering publications from the past four years, i.e., from 2021-05-01 to 2025-05-01.)
Article | Citations
A Simple and Optimal Algorithm for Strict Circular Seriation | 31
Taming Neural Networks with TUSLA: Nonconvex Learning via Adaptive Stochastic Gradient Langevin Algorithms | 24
Quantitative Approximation Results for Complex-Valued Neural Networks | 18
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization | 18
New Equivalences between Interpolation and SVMs: Kernels and Structured Features | 18
On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions | 17
A Note on the Regularity of Images Generated by Convolutional Neural Networks | 16
Learning Functions Varying along a Central Subspace | 15
Efficient Algorithms for Regularized Nonnegative Scale-Invariant Low-Rank Approximation Models | 15
Persistent Laplacians: Properties, Algorithms and Implications | 11
Poisson Reweighted Laplacian Uncertainty Sampling for Graph-Based Active Learning | 11
Wassmap: Wasserstein Isometric Mapping for Image Manifold Learning | 11
Safe Rules for the Identification of Zeros in the Solutions of the SLOPE Problem | 11
Nonbacktracking Spectral Clustering of Nonuniform Hypergraphs | 11
CA-PCA: Manifold Dimension Estimation, Adapted for Curvature | 10
Scalable Tensor Methods for Nonuniform Hypergraphs | 10
Convergence of a Piggyback-Style Method for the Differentiation of Solutions of Standard Saddle-Point Problems | 10
A Variational Formulation of Accelerated Optimization on Riemannian Manifolds | 9
Asymptotics of the Sketched Pseudoinverse | 9
The GenCol Algorithm for High-Dimensional Optimal Transport: General Formulation and Application to Barycenters and Wasserstein Splines | 9
Inverse Evolution Layers: Physics-Informed Regularizers for Image Segmentation | 8
Stochastic Variance-Reduced Majorization-Minimization Algorithms | 8
The Sample Complexity of Sparse Multireference Alignment and Single-Particle Cryo-Electron Microscopy | 7
Benefit of Interpolation in Nearest Neighbor Algorithms | 7
Function-Space Optimality of Neural Architectures with Multivariate Nonlinearities | 7
Group-Invariant Tensor Train Networks for Supervised Learning | 7
Supervised Gromov–Wasserstein Optimal Transport with Metric-Preserving Constraints | 6
Bi-Invariant Dissimilarity Measures for Sample Distributions in Lie Groups | 6
Finite-Time Analysis of Natural Actor-Critic for POMDPs | 6
Numerical Considerations and a New Implementation for Invariant Coordinate Selection | 6
The Geometric Median and Applications to Robust Mean Estimation | 6
Optimal Dorfman Group Testing for Symmetric Distributions | 6
ABBA Neural Networks: Coping with Positivity, Expressivity, and Robustness | 5
Post-training Quantization for Neural Networks with Provable Guarantees | 5
Efficient Identification of Butterfly Sparse Matrix Factorizations | 5
A Nonlinear Matrix Decomposition for Mining the Zeros of Sparse Data | 5
Computing Wasserstein Barycenters via Operator Splitting: The Method of Averaged Marginals | 5
LASSO Reloaded: A Variational Analysis Perspective with Applications to Compressed Sensing | 5
Optimality Conditions for Nonsmooth Nonconvex-Nonconcave Min-Max Problems and Generative Adversarial Networks | 4
Randomized Wasserstein Barycenter Computation: Resampling with Statistical Guarantees | 4
Sequential Construction and Dimension Reduction of Gaussian Processes Under Inequality Constraints | 4
KL Convergence Guarantees for Score Diffusion Models under Minimal Data Assumptions | 4
Adaptive Joint Distribution Learning | 4
Operator Shifting for General Noisy Matrix Systems | 4
Sensitivity-Informed Provable Pruning of Neural Networks | 4
Feel-Good Thompson Sampling for Contextual Bandits and Reinforcement Learning | 4
Memory Capacity of Two Layer Neural Networks with Smooth Activations | 4
Robust Classification Under $\ell_0$ Attack for the Gaussian Mixture Model | 4
Accelerated and Instance-Optimal Policy Evaluation with Linear Function Approximation | 4
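
As a rough illustration of how a listing like the one above is assembled, the sketch below filters (title, citation count) pairs by a threshold and sorts the survivors by citation count. It is a minimal Python sketch under the assumption that citation counts have already been retrieved (e.g., from CrossRef); the sample entries and the helper name papers_at_or_above_threshold are illustrative placeholders, not part of the source data.

```python
from typing import List, Tuple

def papers_at_or_above_threshold(
    papers: List[Tuple[str, int]], threshold: int
) -> List[Tuple[str, int]]:
    """Keep papers whose citation count meets the threshold, most cited first."""
    kept = [p for p in papers if p[1] >= threshold]
    return sorted(kept, key=lambda p: p[1], reverse=True)

if __name__ == "__main__":
    # Placeholder data, not the actual CrossRef counts behind the table above.
    example = [
        ("Paper A", 31),
        ("Paper B", 4),
        ("Paper C", 2),
    ]
    # With the journal's TQCC of 4 as the threshold, "Paper C" is dropped.
    for title, citations in papers_at_or_above_threshold(example, threshold=4):
        print(f"{title} | {citations}")
```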