Authors: Antos, András; Lugosi, Gábor
Affiliation: Universitat Pompeu Fabra. Departament d'Economia i Empresa
Date deposited: 2017-07-26
Date issued: 1997-01-01
Citation: Machine Learning, 30 (1998), pp. 31-56
URI: http://hdl.handle.net/10230/1223

Abstract: Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$ there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the concept class. However, these bounds say nothing about the rate of decrease of the error for a {\sl fixed} distribution--concept pair.

In this paper we investigate minimax lower bounds in this stronger sense. We show that for several natural $k$-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any {\sl sequence} of learning rules $\{g_n\}$ there exists a fixed distribution of $X$ and a fixed concept $C$ such that the expected error is larger than a constant times $k/n$ for {\sl infinitely many} $n$. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.

Format: application/pdf
Language: eng
Rights: Access to the contents of this document is subject to acceptance of the terms of use established by the following Creative Commons license.
Title: Strong minimax lower bounds for learning
Type: info:eu-repo/semantics/workingPaper
Subjects: estimation; hypothesis testing; statistical decision theory; operations research; Statistics, Econometrics and Quantitative Methods
Access rights: info:eu-repo/semantics/openAccess
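
Below is a minimal LaTeX sketch contrasting the two kinds of bounds the abstract describes. The error notation $L(g_n)$ and the constants $c$, $c'$ are illustrative assumptions introduced here for exposition; they are not taken from the record itself.

% Sketch only: L(g_n) denotes the probability of error of the rule g_n,
% and c, c' are unspecified positive constants (assumed notation).
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

\textbf{Classical minimax lower bound.} For each sample size $n$ and each
learning rule $g_n$ there is a distribution of $X$ and a concept $C$
(possibly depending on $n$) such that
\[
  \mathbb{E}\,L(g_n) \ge c\,\frac{V}{n},
\]
where $V$ is the VC dimension of the concept class. The bad
distribution--concept pair is allowed to change with $n$.

\textbf{Strong minimax lower bound.} For several natural $k$-parameter
classes, for every \emph{sequence} of rules $\{g_n\}$ there is a single
fixed distribution of $X$ and a single fixed concept $C$ such that
\[
  \mathbb{E}\,L(g_n) \ge c'\,\frac{k}{n} \quad \text{for infinitely many } n.
\]

\end{document}

The contrast is the order of quantifiers: in the classical bound the hard distribution--concept pair may depend on $n$, whereas in the strong bound one fixed pair defeats the whole sequence of rules infinitely often.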