Learning Monotone Functions from Random Examples in Polynomial Time

Rocco Servedio

Columbia University

Friday, December 2 at 11:30 in AKW ???

ABSTRACT

One of the most tantalizing open questions in computational learning
theory is the following:  can monotone DNF formulas be learned in
polynomial time using uniform random examples only?
 
In this work we give an algorithm that learns monotone decision trees (a
weaker representation scheme than DNF formulas) to any constant accuracy
in polynomial time.  Equivalently, our algorithm learns any n-variable
monotone Boolean function f to any constant accuracy, using only uniformly
distributed random examples, in time polynomial in n and in the decision
tree size of f. This is the first algorithm that can learn any monotone
Boolean function to high accuracy, using random examples only, in time
which is polynomial in a reasonable measure of the complexity of f.  A key
ingredient of the result is a new upper bound showing that the average
sensitivity of any monotone function computed by a decision tree of size s
is at most \sqrt{\log s}.
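
The average sensitivity of a Boolean function f is the expected number of
coordinates i such that flipping the i-th bit of a uniformly random input
changes the value of f. The following is a small illustrative sketch (not
the paper's algorithm) that computes this quantity by brute force for small
n; the function `maj3`, three-variable majority, is just an example of a
monotone function computed by a small decision tree.

```python
from itertools import product

def average_sensitivity(f, n):
    """Brute-force average sensitivity of f : {0,1}^n -> {0,1}:
    the sum over coordinates i of Pr_x[f(x) != f(x with bit i flipped)],
    with x uniform over {0,1}^n."""
    total = 0
    for x in product([0, 1], repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip coordinate i
            if f(tuple(y)) != fx:
                total += 1
    return total / 2 ** n

# Three-variable majority: monotone, computed by a small decision tree.
maj3 = lambda x: int(sum(x) >= 2)
print(average_sensitivity(maj3, 3))  # -> 1.5
```

Each of the three variables of majority is pivotal exactly when the other
two disagree (probability 1/2), giving average sensitivity 3/2.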

This is joint work with Ryan O'Donnell.