Dr. Matthew D. Luciw
Matthew D. Luciw, Ph.D., is a Post-Doctoral Researcher at IDSIA.
His research interests include models of cortical self-organization and information processing, multilayer neural networks with top-down (recurrent) connections, biological and computational visual attention and recognition, concept development, and autonomously developing robots.
Matt coauthored "Topographic Class Grouping with Applications to 3D Object Recognition," "Optimal In-Place Self-Organization for Cortical Development: Limited Cells, Sparse Coding and Cortical Topography," "Motor Initiated Expectation through Top-Down Connections as Abstract Context in a Physical World," "Developmental Learning for Avoiding Dynamic Obstacles Using Attention," "Laterally Connected Lobe Component Analysis: Precision and Topography," "A System for Epigenetic Concept Development through Autonomous Associative Learning," and "A Biologically-Motivated Developmental System Towards Perceptual Awareness in Vehicle-Based Robots."
His research has included:
Recurrent Hebbian Neural Networks
A great mystery is how more abstract representations develop in later cortical areas. It is also unclear how motor actions might alter lower-level cortical representations. To investigate these questions, and in order to learn to control agents and solve engineering-grade problems, Matt placed multiple layers of LCA neurons in sequence and added an output, or "motor," layer (inspired by the motor and premotor cortices) that directly controls actions. Each internal layer uses bottom-up and top-down connections simultaneously during learning and information processing. Such networks can be called Hebbian, since their connections strengthen with correlated firing.
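A minimal sketch of the idea follows (illustrative only; the layer sizes, the tanh nonlinearity, and the normalization scheme are assumptions, not the published architecture): an internal layer combines a bottom-up input with a top-down motor signal, and every connection is updated with a simple Hebbian rule.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_motor = 16, 8, 4
    W_bu = rng.normal(scale=0.1, size=(n_hidden, n_in))      # bottom-up weights
    W_td = rng.normal(scale=0.1, size=(n_hidden, n_motor))   # top-down (recurrent) weights
    W_out = rng.normal(scale=0.1, size=(n_motor, n_hidden))  # hidden -> "motor" layer

    def step(W_bu, W_td, W_out, x, z, lr=0.01):
        """One step: the hidden response uses both pathways; all updates are Hebbian."""
        y = np.tanh(W_bu @ x + W_td @ z)   # internal layer sees input and motor context
        m = np.tanh(W_out @ y)             # motor layer that would directly control actions
        W_bu += lr * np.outer(y, x)        # Hebbian: correlated firing strengthens links
        W_td += lr * np.outer(y, z)
        W_out += lr * np.outer(m, y)
        for W in (W_bu, W_td, W_out):      # normalize rows so the weights stay bounded
            W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-12
        return y, m

    x = rng.normal(size=n_in)              # a sensory input
    z = np.zeros(n_motor); z[1] = 1.0      # an imposed motor/context signal
    y, m = step(W_bu, W_td, W_out, x, z)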
Lobe Component Analysis (LCA)
Inspired by Principal Component Analysis and Independent Component Analysis, he developed Lobe Component Analysis (LCA), a method for incrementally setting the weights of a neuronal layer through the biologically plausible mechanisms of Hebbian learning and lateral inhibition. LCA's first strength is its simplicity and generality, which follow from those two biological mechanisms; it is intended to run in real time on a developmental robot, developing an internal representation that depends on the robot's environment and input. Another major strength is its mathematical optimality.
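A minimal sketch of LCA's core loop, assuming a single winner per input (a winner-take-all approximation of lateral inhibition) and an amnesic-mean learning rate; the schedule constants below are illustrative rather than the published values.

    import numpy as np

    rng = np.random.default_rng(0)

    def amnesic(n, t1=20.0, t2=200.0, c=2.0, r=2000.0):
        """Amnesic-mean parameter mu(n): zero early on, then growing, so that
        later samples are weighted more than a plain running average would."""
        if n < t1:
            return 0.0
        if n < t2:
            return c * (n - t1) / (t2 - t1)
        return c + (n - t2) / r

    def lca_update(V, counts, x):
        """One incremental LCA step (sketch): the winning neuron's weight vector
        is updated with a candid covariance-free, amnesic-mean rule."""
        responses = V @ x / (np.linalg.norm(V, axis=1) + 1e-12)  # pre-responses
        i = int(np.argmax(responses))       # winner after lateral inhibition
        counts[i] += 1
        n = counts[i]
        mu = amnesic(n)
        w1 = (n - 1 - mu) / n               # retention weight
        w2 = (1 + mu) / n                   # learning weight
        V[i] = w1 * V[i] + w2 * responses[i] * x
        return i

    # Develop 5 lobe components from a toy 8-D input stream.
    V = rng.normal(size=(5, 8))
    counts = np.zeros(5, dtype=int)
    for _ in range(1000):
        lca_update(V, counts, rng.normal(size=8))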
Where-What Networks
In biological visual processing, the pathways dealing with what (identity) and where (spatiomotor) information diverge before rejoining at premotor areas of cortex. This separation of identity information from location information motivated the design of the Where-What Networks (WWNs) for attention and recognition. A WWN is a biologically inspired, integrated attention-and-recognition learning system: it learns via Hebbian learning and uses both bottom-up and top-down connections during both learning and performance. Matt showed how WWNs develop the capability of selective attention to foregrounds over complex backgrounds.
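A sketch of the where-what idea under simplifying assumptions (symmetric top-down weights, rectified responses, a toy 8x8 input; this is not the published WWN): a shared feature layer feeds a "what" motor area and a "where" motor area, and each motor area sends top-down signals back, so imposing an answer in one area biases the features toward the other.

    import numpy as np

    rng = np.random.default_rng(1)
    n_px, n_feat, n_what, n_where = 64, 32, 3, 4
    W_f = rng.normal(scale=0.1, size=(n_feat, n_px))        # bottom-up: image -> features
    W_what = rng.normal(scale=0.1, size=(n_what, n_feat))   # features -> type ("what") motor
    W_where = rng.normal(scale=0.1, size=(n_where, n_feat)) # features -> location ("where") motor

    def respond(x, what=None, where=None):
        """Feature response with optional top-down context from either motor area."""
        top_down = np.zeros(n_feat)
        if what is not None:
            top_down += W_what.T @ what    # transposed weights as top-down (a simplification)
        if where is not None:
            top_down += W_where.T @ where
        y = np.maximum(W_f @ x + top_down, 0.0)
        return y, W_what @ y, W_where @ y

    def hebbian_train(x, what, where, lr=0.01):
        """Supervised episode: both motor answers are imposed; all links learn Hebbian-style."""
        y, _, _ = respond(x, what, where)
        W_f[...] += lr * np.outer(y, x)    # item assignment keeps the module arrays updated
        W_what[...] += lr * np.outer(what, y)
        W_where[...] += lr * np.outer(where, y)

    x = rng.normal(size=n_px)              # a toy image with one foreground object
    hebbian_train(x, np.eye(n_what)[0], np.eye(n_where)[2])
    _, what_guess, where_guess = respond(x)  # later: recognition with no top-down help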
Perceptual Awareness in Vehicles from Radars and a Camera
Matt and Dr. Zhengping Ji built an object learning system that incorporates sensory information from an automotive radar system and a video camera. The radar system provides rudimentary attention, focusing visual analysis on relatively small areas within the image plane. For each image, the attended visual area is coded by LCA-developed orientation-selective features, yielding a sparse representation. This representation is then input to a recurrent Hebbian learning network, which learns an internal representation able to differentiate objects in the environment.
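A pipeline sketch under stated assumptions (the projection, window sizing, filter bank, and sparsity level k are all invented for illustration; only the structure, radar-driven attention followed by sparse coding over orientation features, follows the description above):

    import numpy as np

    rng = np.random.default_rng(2)

    def radar_to_roi(range_m, azimuth_rad, img_w=640, img_h=480):
        """Crude projection of a radar return to a fixed-size image window."""
        cx = int(img_w / 2 + np.tan(azimuth_rad) * 500)          # toy pinhole projection
        size = int(np.clip(3000 / max(range_m, 1.0), 16, 128))   # nearer => bigger window
        return cx - size // 2, img_h // 2 - size // 2, size

    def sparse_code(patch, filters, k=8):
        """Keep only the top-k filter responses; the rest are zeroed,
        giving the sparse representation the text describes."""
        r = filters @ patch.ravel()
        code = np.zeros_like(r)
        top = np.argsort(np.abs(r))[-k:]
        code[top] = r[top]
        return code

    filters = rng.normal(size=(64, 32 * 32))  # stand-in for LCA-developed orientation features
    patch = rng.normal(size=(32, 32))         # stand-in for the attended ROI, resized to 32x32
    x0, y0, s = radar_to_roi(range_m=25.0, azimuth_rad=0.05)
    code = sparse_code(patch, filters)        # would feed the recurrent Hebbian recognizer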
Concept Development
How does our semantic understanding emerge from a stream of low-level physical data? Matt investigated this question and created a system in which a "semi-concrete" concept of distance traveled emerged from experience. Distance traveled involves both sensory information (e.g., movement can be perceived visually) and motor information (the actions taken to move a certain amount). First, the system must learn to fill in each piece of information when only the other is present. Second, the internal representation must be calibrated against existing semantic structure in the external environment.
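A tiny hetero-associative sketch of the first requirement (all sizes and data here are illustrative): a Hebbian matrix links sensory and motor vectors, so that presenting either side alone can fill in the other.

    import numpy as np

    rng = np.random.default_rng(3)
    n_sense, n_motor = 12, 6
    A = np.zeros((n_sense, n_motor))

    # Train: associate each sensory pattern (perceived movement) with the
    # motor pattern (the action taken) via a Hebbian outer-product update.
    pairs = [(rng.normal(size=n_sense), rng.normal(size=n_motor)) for _ in range(5)]
    for s, m in pairs:
        A += np.outer(s / np.linalg.norm(s), m / np.linalg.norm(m))

    # Recall: either modality alone retrieves an estimate of the other.
    s, m = pairs[0]
    m_recalled = A.T @ (s / np.linalg.norm(s))  # sensory-only input fills in the motor side
    s_recalled = A @ (m / np.linalg.norm(m))    # motor-only input fills in the sensory side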
Real-Time Learning of Dynamic Obstacle Avoidance
He built a system that autonomously learned to avoid moving obstacles, using the Hierarchical Discriminant Regression (HDR) learning engine. Obstacle avoidance experiments in this project were carried out on the Dav robot.
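HDR itself grows a tree of discriminant subspaces; the sketch below swaps in a plain nearest-neighbor regressor as a crude stand-in, just to show a sensed-state-to-steering mapping being learned from stored experiences (the features and the toy "steer away" rule are invented for illustration).

    import numpy as np

    rng = np.random.default_rng(4)

    # Stored experiences: obstacle features (e.g., bearing, range, velocities)
    # paired with the steering command that avoided a collision.
    X_train = rng.uniform(-1, 1, size=(200, 4))
    y_train = -np.sign(X_train[:, 0]) * np.abs(X_train[:, 1])  # toy "steer away" rule

    def predict_action(x):
        """Return the action of the closest stored experience (HDR stand-in)."""
        i = int(np.argmin(np.linalg.norm(X_train - x, axis=1)))
        return y_train[i]

    steer = predict_action(rng.uniform(-1, 1, size=4))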
Matt earned his B.S. in Computer Science in 2003, his M.S. in Computer
Science in 2006, and his Ph.D. in Computer Science in 2010, all at
Michigan State University.