SIAM Journal on Mathematics of Data Science

Papers
(The median citation count of SIAM Journal on Mathematics of Data Science is 1. The table below lists the papers at or above that threshold, based on CrossRef citation counts [max. 250 papers], restricted to publications from the past four years, i.e., 2021-02-01 to 2025-02-01.)
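The selection described above amounts to a simple filter-and-rank pass over citation records: keep papers at or above the median citation count, keep only those published inside the date window, sort by citations, and cap the list at 250 entries. The sketch below is a minimal, hypothetical Python illustration of that rule; the `papers` list, its entries, and the record layout (title, CrossRef citation count, publication date) are assumptions for illustration, not the actual data pipeline behind this listing.

```python
from datetime import date

# Hypothetical records of the form (title, crossref_citations, publication_date).
papers = [
    ("Paper A", 18, date(2022, 6, 15)),
    ("Paper B", 1, date(2023, 4, 1)),
    ("Paper C", 0, date(2022, 1, 10)),   # below the median, will be dropped
    ("Paper D", 5, date(2019, 5, 20)),   # outside the window, will be dropped
]

MEDIAN_CITATIONS = 1                                  # journal-wide median reported above
WINDOW = (date(2021, 2, 1), date(2025, 2, 1))         # past-four-years publication window
MAX_PAPERS = 250                                      # cap on the length of the listing

# Keep papers at or above the median that fall inside the publication window.
selected = [
    (title, cites)
    for title, cites, published in papers
    if cites >= MEDIAN_CITATIONS and WINDOW[0] <= published <= WINDOW[1]
]

# Rank by citation count, descending, and truncate to the cap.
selected.sort(key=lambda row: row[1], reverse=True)
selected = selected[:MAX_PAPERS]

for title, cites in selected:
    print(f"{title} | {cites}")
```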
Article | Citations
Moment Estimation for Nonparametric Mixture Models through Implicit Tensor Decomposition | 28
Entropic Optimal Transport on Random Graphs | 18
Finite-Time Analysis of Natural Actor-Critic for POMDPs | 15
Bi-Invariant Dissimilarity Measures for Sample Distributions in Lie Groups | 13
Benefit of Interpolation in Nearest Neighbor Algorithms | 12
Private Sampling: A Noiseless Approach for Generating Differentially Private Synthetic Data | 12
Reversible Gromov–Monge Sampler for Simulation-Based Inference | 11
Max-Affine Regression via First-Order Methods | 11
A Simple and Optimal Algorithm for Strict Circular Seriation | 10
The Geometric Median and Applications to Robust Mean Estimation | 10
On Design of Polyhedral Estimates in Linear Inverse Problems | 9
A Nonlinear Matrix Decomposition for Mining the Zeros of Sparse Data | 9
Numerical Considerations and a New Implementation for Invariant Coordinate Selection | 9
Block Bregman Majorization Minimization with Extrapolation | 8
Nonlinear Weighted Directed Acyclic Graph and A Priori Estimates for Neural Networks | 8
Causal Structural Learning via Local Graphs | 7
Robust Inference of Manifold Density and Geometry by Doubly Stochastic Scaling | 7
Manifold Oblique Random Forests: Towards Closing the Gap on Convolutional Deep Networks | 7
Equivariant Neural Networks for Indirect Measurements | 7
Energy-Based Sequential Sampling for Low-Rank PSD-Matrix Approximation | 7
Adversarial Robustness of Sparse Local Lipschitz Predictors | 7
Taming Neural Networks with TUSLA: Nonconvex Learning via Adaptive Stochastic Gradient Langevin Algorithms | 6
Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization | 6
Optimal Dorfman Group Testing for Symmetric Distributions | 6
Balancing Geometry and Density: Path Distances on High-Dimensional Data | 6
Approximation of Lipschitz Functions Using Deep Spline Neural Networks | 5
Spectral Triadic Decompositions of Real-World Networks | 5
Fredholm Integral Equations for Function Approximation and the Training of Neural Networks | 5
Network Online Change Point Localization | 5
Fast Cluster Detection in Networks by First Order Optimization | 4
Approximation Bounds for Sparse Programs | 4
Structural Balance and Random Walks on Complex Networks with Complex Weights | 4
Lipschitz-Regularized Gradient Flows and Generative Particle Algorithms for High-Dimensional Scarce Data | 4
Local Versions of Sum-of-Norms Clustering | 4
Quantitative Approximation Results for Complex-Valued Neural Networks | 4
Randomly Initialized Alternating Least Squares: Fast Convergence for Matrix Sensing | 4
Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation | 4
Poisson Reweighted Laplacian Uncertainty Sampling for Graph-Based Active Learning | 4
Optimization on Manifolds via Graph Gaussian Processes | 3
Learning Functions Varying along a Central Subspace | 3
New Equivalences between Interpolation and SVMs: Kernels and Structured Features | 3
Markov Kernels Local Aggregation for Noise Vanishing Distribution Sampling | 3
Stability of Deep Neural Networks via Discrete Rough Paths | 3
Efficient Identification of Butterfly Sparse Matrix Factorizations | 3
Intrinsic Dimension Adaptive Partitioning for Kernel Methods | 3
A Diffusion Process Perspective on Posterior Contraction Rates for Parameters | 3
ABBA Neural Networks: Coping with Positivity, Expressivity, and Robustness | 2
Computing Wasserstein Barycenters via Operator Splitting: The Method of Averaged Marginals | 2
Two Steps at a Time---Taking GAN Training in Stride with Tseng's Method | 2
Memory Capacity of Two Layer Neural Networks with Smooth Activations | 2
Approximate Q Learning for Controlled Diffusion Processes and Its Near Optimality | 2
On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions | 2
Rigorous Dynamical Mean-Field Theory for Stochastic Gradient Descent Methods | 2
Algorithmic Regularization in Model-Free Overparametrized Asymmetric Matrix Factorization | 2
A Note on the Regularity of Images Generated by Convolutional Neural Networks | 2
Safe Rules for the Identification of Zeros in the Solutions of the SLOPE Problem | 2
Wassmap: Wasserstein Isometric Mapping for Image Manifold Learning | 2
GNMR: A Provable One-Line Algorithm for Low Rank Matrix Recovery | 2
Post-training Quantization for Neural Networks with Provable Guarantees | 2
Approximate Message Passing with Rigorous Guarantees for Pooled Data and Quantitative Group Testing | 2
LASSO Reloaded: A Variational Analysis Perspective with Applications to Compressed Sensing | 2
Nonparametric Finite Mixture Models with Possible Shape Constraints: A Cubic Newton Approach | 2
Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting, and Regularization | 2
Federated Primal Dual Fixed Point Algorithm | 2
Target Network and Truncation Overcome the Deadly Triad in $Q$-Learning | 2
Nonbacktracking Spectral Clustering of Nonuniform Hypergraphs | 1
Positive Semi-definite Embedding for Dimensionality Reduction and Out-of-Sample Extensions | 1
$k$-Variance: A Clustered Notion of Variance | 1
Core-Periphery Detection in Hypergraphs | 1
Time-Inhomogeneous Diffusion Geometry and Topology | 1
An Improved Central Limit Theorem and Fast Convergence Rates for Entropic Transportation Costs | 1
Robust Classification Under $\ell_0$ Attack for the Gaussian Mixture Model | 1
Sequential Construction and Dimension Reduction of Gaussian Processes Under Inequality Constraints | 1
Nonasymptotic Bounds for Adversarial Excess Risk under Misspecified Models | 1
High-Dimensional Analysis of Double Descent for Linear Regression with Random Projections | 1
Operator Shifting for General Noisy Matrix Systems | 1
Three-Operator Splitting for Learning to Predict Equilibria in Convex Games | 1
Autodifferentiable Ensemble Kalman Filters | 1
Wasserstein Barycenters Are NP-Hard to Compute | 1
Randomized Wasserstein Barycenter Computation: Resampling with Statistical Guarantees | 1
A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors | 1
Subgradient Langevin Methods for Sampling from Nonsmooth Potentials | 1
Efficiency of ETA Prediction | 1
Spectral Discovery of Jointly Smooth Features for Multimodal Data | 1
Efficient Global Optimization of Two-Layer ReLU Networks: Quadratic-Time Algorithms and Adversarial Training | 1
Corrigendum: Post-training Quantization for Neural Networks with Provable Guarantees | 1
What Kinds of Functions Do Deep Neural Networks Learn? Insights from Variational Spline Theory | 1
Joint Community Detection and Rotational Synchronization via Semidefinite Programming | 1