Title Venue Year Citations
Risk Measures and Upper Probabilities: Coherence and Stratification. JMLR 2024 0
Information Processing Equalities and the Information-Risk Bridge. JMLR 2024 0
Random Classification Noise does not defeat All Convex Potential Boosters Irrespective of Model Choice. ICML 2023 0
The Geometry and Calculus of Losses. JMLR 2023 0
PAC-Bayesian Bound for the Conditional Value at Risk. NIPS/NeurIPS 2020 12
Fairness risk measures. ICML 2019 81
A Primal-Dual link between GANs and Autoencoders. NIPS/NeurIPS 2019 13
Costs and Benefits of Fair Representation Learning. AIES 2019 40
Lossless or Quantized Boosting with Integer Arithmetic. ICML 2019 6
Constant Regret, Generalized Mixability, and Mirror Descent. NIPS/NeurIPS 2018 5
A Theory of Learning with Corrupted Labels. JMLR 2017 58
f-GANs in an Information Geometric Nutshell. NIPS/NeurIPS 2017 23
Bipartite Ranking: a Risk-Theoretic Perspective. JMLR 2016 25
Composite Multiclass Losses. JMLR 2016 0
Exp-Concavity of Proper Composite Losses. COLT 2015 7
Fast rates in statistical and online learning. JMLR 2015 84
Learning with Symmetric Label Noise: The Importance of Being Unhinged. NIPS/NeurIPS 2015 229
Generalized Mixability via Entropic Duality. COLT 2015 0
Bayes-Optimal Scorers for Bipartite Ranking. COLT 2014 6
On the Consistency of Output Code Based Learning Algorithms for Multiclass Learning Problems. COLT 2014 13
The Geometry of Losses. COLT 2014 14
From Stochastic Mixability to Fast Rates. NIPS/NeurIPS 2014 16
Elicitation and Identification of Properties. COLT 2014 76
Mixability in Statistical Learning. NIPS/NeurIPS 2012 16
Divergences and Risks for Multiclass Experiments. COLT 2012 23
The Convexity and Design of Composite Multiclass Losses. ICML 2012 4
Mixability is Bayes Risk Curvature Relative to Log Loss. JMLR 2012 0
Mixability is Bayes Risk Curvature Relative to Log Loss. COLT 2011 20
Composite Multiclass Losses. NIPS/NeurIPS 2011 73
Information, Divergence and Risk for Binary Experiments. JMLR 2011 0
Convexity of Proper Composite Binary Losses. AISTATS 2010 0
Composite Binary Losses. JMLR 2010 0
Surrogate regret bounds for proper losses. ICML 2009 50
Generalised Pinsker Inequalities. COLT 2009 33
The Need for Open Source Software in Machine Learning. JMLR 2007 214
Learning the Kernel with Hyperkernels. JMLR 2005 367
Online Bayes Point Machines. PAKDD 2003 20
Hyperkernels. NIPS/NeurIPS 2002 64
Agnostic Learning Nonconvex Function Classes. COLT 2002 8
Algorithmic Luckiness. JMLR 2002 0
Online Learning with Kernels. NIPS/NeurIPS 2001 1086
Algorithmic Luckiness. NIPS/NeurIPS 2001 61
Kernel Machines and Boolean Functions. NIPS/NeurIPS 2001 18
Prior Knowledge and Preferential Structures in Gradient Descent Learning Algorithms. JMLR 2001 26
Regularized Principal Manifolds. JMLR 2001 0
Regularization with Dot-Product Kernels. NIPS/NeurIPS 2000 114
From Margin to Sparsity. NIPS/NeurIPS 2000 59
Entropy Numbers of Linear Function Classes. COLT 2000 20
The Entropy Regularization Information Criterion. NIPS/NeurIPS 1999 3
Support Vector Method for Novelty Detection. NIPS/NeurIPS 1999 1801
Covering Numbers for Support Vector Machines. COLT 1999 74
Shrinking the Tube: A New Support Vector Regression Algorithm. NIPS/NeurIPS 1998 217
A PAC Analysis of a Bayesian Estimator. COLT 1997 173
A Framework for Structural Risk Minimisation. COLT 1996 103
The Importance of Convexity in Learning with Squared Loss. COLT 1996 0
Existence and uniqueness results for neural network approximations. IEEE Trans. Neural Networks 1995 54
On Efficient Agnostic Learning of Linear Combinations of Basis Functions. COLT 1995 20
Online Learning via Congregational Gradient Descent. COLT 1995 11
Examples of learning curves from a modified VC-formalism. NIPS/NeurIPS 1995 2
Fat-Shattering and the Learnability of Real-Valued Functions. COLT 1994 158
Lower Bounds on the VC-Dimension of Smoothly Parametrized Function Classes. COLT 1994 9
Rational Parametrizations of Neural Networks. NIPS/NeurIPS 1992 1
Investigating the Distribution Assumptions in the PAC Learning Model. COLT 1991 12
Splines, Rational Functions and Neural Networks. NIPS/NeurIPS 1991 7
ε-Entropy and the Complexity of Feedforward Neural Networks. NIPS/NeurIPS 1990 0