A Tour of Sparsity-Aware Learning -- Calling at: Online, Distributed, Robust,
and Dictionary Learning

COVL 216
Mon, 05/23/2016 - 4:00pm

Sergios Theodoridis
Professor, Department of Informatics and Telecommunications, National and
Kapodistrian University of Athens

Abstract:
Learning sparse models has been at the forefront of research for the last ten
years or so. Considerable effort has been invested in developing efficient
schemes for the recovery of sparse signal/parameter vectors. Moreover,
concepts originally developed around the regression task have been extended
to more general and difficult problems, such as low-rank matrix factorization
for dimensionality reduction, robust learning in the presence of outliers,
and dictionary learning for “data-dependent” signal representation.
Furthermore, online techniques for sparse model estimation are attracting
increasing interest, especially in the context of big data applications.
Another area gaining in importance is distributed learning over graphs;
mainly inspired by and born within the sensor network discipline, it now
lends itself nicely to big data processing.

In this talk, I touch upon all of the problems mentioned above. Sparse
modeling of regression tasks is viewed in its online estimation facet, via
convex analytic arguments based on the set-theoretic framework; the emphasis
is on very recent extensions of the theory to non-convex constraints, which
impose sparsity on the model in a much more aggressive manner than the
standard convex l_1-norm arguments. In spite of the non-convexity involved,
the complexity per time iteration still exhibits a linear dependence on the
number of unknown parameters; furthermore, strong theoretical convergence
results have been established. In the sequel, distributed learning techniques
are reviewed, with an emphasis on greedy-type batch as well as online
versions. The task of robust learning in the presence of outliers is then
reviewed, and new methods, based on explicit modeling of the outliers in the
context of sparsity-aware learning, are presented. The new method, based on
greedy-type arguments, enjoys a number of merits compared to more classical
techniques; moreover, strong theoretical results have been established, for
the first time, for this type of treatment of the robust estimation task.
Finally, dictionary learning, in its very recent online and distributed
processing framework, is discussed, and new experimental as well as
theoretical results are presented.
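
The contrast between convex l_1-norm arguments and the more aggressive
non-convex constraints mentioned above can be illustrated by the two
elementary operators that typically underlie them: soft thresholding (the
proximal operator of the l_1 norm) and hard thresholding (projection onto the
set of k-sparse vectors). The sketch below is a generic illustration of this
contrast only, not the speaker's set-theoretic algorithm; the vector and
threshold values are made up for the example.

```python
def soft_threshold(x, lam):
    """l_1 proximal operator: zeroes small entries but also shrinks
    (biases) every surviving entry toward zero by lam."""
    return [0.0 if abs(v) <= lam else (abs(v) - lam) * (1 if v > 0 else -1)
            for v in x]

def hard_threshold(x, k):
    """Projection onto k-sparse vectors: keeps the k largest-magnitude
    entries unchanged and zeroes the rest -- no shrinkage of survivors."""
    idx = sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:k]
    keep = set(idx)
    return [v if i in keep else 0.0 for i, v in enumerate(x)]

x = [3.0, -0.4, 0.1, -2.5, 0.3]   # roughly 2-sparse vector plus noise

print(soft_threshold(x, 0.5))  # [2.5, 0.0, 0.0, -2.0, 0.0]
print(hard_threshold(x, 2))    # [3.0, 0.0, 0.0, -2.5, 0.0]
```

Both operators enforce sparsity, but the hard-thresholding (non-convex) route
leaves the retained coefficients unbiased, which is one sense in which
non-convex constraints act more aggressively than the l_1 relaxation.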


Read more:
http://eecs.oregonstate.edu/colloquium/tour-sparsity-aware-learning-calling-online-distributed-robust-and-dictionary-learning
_______________________________________________
Colloquium mailing list
[email protected]
https://secure.engr.oregonstate.edu/mailman/listinfo/colloquium
