
Learning coordinate gradients with multi-task kernels

  • Yiming Ying
  • Colin Campbell

Research output: Contribution to conference › Paper › peer-review

7 Scopus citations

Abstract

Coordinate gradient learning is motivated by the problem of variable selection and determining variable covariation. In this paper we propose a novel unifying framework for coordinate gradient learning (MGL) from the perspective of multi-task learning. Our approach relies on multi-task kernels to simulate the structure of gradient learning. This has several appealing properties. Firstly, it allows us to introduce a novel algorithm which appropriately captures the inherent structure of coordinate gradient learning. Secondly, this approach gives rise to a clear algorithmic process: a computational optimization algorithm which is memory and time efficient. Finally, a statistical error analysis ensures convergence of the estimated function and its gradient to the true function and true gradient. We report some preliminary experiments to validate MGL for variable selection as well as determining variable covariation.
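The variable-selection use of learned gradients described above can be illustrated with a simple sketch (an illustrative assumption, not the paper's MGL algorithm or its multi-task kernel construction): fit a kernel ridge regressor, estimate each coordinate partial derivative by finite differences at the sample points, and rank variables by the size of the corresponding gradient coordinate. All function names and parameter choices below are hypothetical.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_kernel_ridge(X, y, lam=1e-2, sigma=1.0):
    # Standard kernel ridge regression: alpha = (K + lam*n*I)^{-1} y.
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return lambda Z: gaussian_kernel(Z, X, sigma) @ alpha

def gradient_variable_scores(f, X, eps=1e-4):
    # Finite-difference estimate of each partial derivative at the sample
    # points; score variable j by the RMS of df/dx_j over the sample.
    n, d = X.shape
    scores = np.zeros(d)
    for j in range(d):
        E = np.zeros_like(X)
        E[:, j] = eps
        g = (f(X + E) - f(X - E)) / (2 * eps)
        scores[j] = np.sqrt(np.mean(g ** 2))
    return scores

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 5))
y = np.sin(2 * X[:, 0])  # only the first coordinate is relevant
f = fit_kernel_ridge(X, y, lam=1e-3, sigma=0.5)
scores = gradient_variable_scores(f, X)
print(scores.argmax())  # the informative coordinate should rank first
```

In this toy setting the gradient score for the informative coordinate dominates the others, which is the intuition behind using estimated gradients for variable selection; the paper's MGL framework instead learns the function and its gradient jointly through a multi-task kernel rather than differentiating a fitted regressor after the fact.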

Original language: English
Pages: 217-228
Number of pages: 12
State: Published - 2008
Event: 21st Annual Conference on Learning Theory, COLT 2008 - Helsinki, Finland
Duration: Jul 9, 2008 to Jul 12, 2008

Conference

Conference: 21st Annual Conference on Learning Theory, COLT 2008
Country/Territory: Finland
City: Helsinki
Period: 07/9/08 to 07/12/08

