BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//HumTech - UCLA - ECPv6.1.2.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:HumTech - UCLA
X-ORIGINAL-URL:https://humtech.ucla.edu
X-WR-CALDESC:Events for HumTech - UCLA
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20160101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20160419T140000
DTEND;TZID=UTC:20160419T150000
DTSTAMP:20230927T152906Z
CREATED:20160416T013407Z
LAST-MODIFIED:20160416T013407Z
UID:3463-1461074400-1461078000@humtech.ucla.edu
SUMMARY:Finding Hidden Structure in Data with Tensor Decompositions
DESCRIPTION:In many applications\, we face the challenge of modeling the interactions between multiple observations. A popular and successful approach in machine learning and AI is to hypothesize the existence of certain latent (or hidden) causes which help to explain the correlations in the observed data. The (unsupervised) learning problem is to accurately estimate a model with only samples of the observed variables. For example\, in document modeling\, we may wish to characterize the correlational structure of the “bag of words” in documents\, or in community detection\, we wish to discover the communities of individuals in social networks. Here\, a standard model is to posit that documents are about a few topics (the hidden variables) and that each active topic determines the occurrence of words in the document. The learning problem is\, using only the observed words in the documents (and not the hidden topics)\, to estimate the topic probability vectors (i.e. discover the strength by which words tend to appear under different topics). In practice\, a broad class of latent variable models is most often fit with either local search heuristics (such as the EM algorithm) or sampling-based approaches. \nThis talk will discuss a general and (computationally and statistically) efficient parameter estimation method for a wide class of latent variable models—including Gaussian mixture models (for clustering)\, hidden Markov models (for time series)\, and latent Dirichlet allocation (for topic modeling and community detection)—by exploiting a certain tensor structure in their low-order observable moments. Specifically\, parameter estimation is reduced to the problem of extracting a certain decomposition of a tensor derived from the (typically second- and third-order) moments; this particular decomposition can be viewed as a natural generalization of the (widely used) principal component analysis method.
URL:https://humtech.ucla.edu/event/finding-hidden-structure-in-data-with-tensor-decompositions/
END:VEVENT
END:VCALENDAR