Detailed Programme

Computational Neuroscience and Neurodynamics (CNN)
CHAIR: KEVIN WARWICK
Tuesday, March 22nd, 11h00-12h40
CNN-1

Title: Cortical Modulation of Synaptic Efficacies through Norepinephrine
Author(s): O. Hoshino
Abstract: I propose a norepinephrine (NE) neuromodulatory system, which I call the “enhanced-excitatory and enhanced-inhibitory (E-E/E-I) system”. The E-E/E-I system enhanced excitatory and inhibitory synaptic connections between cortical cells, modified their ongoing background activity, and influenced subsequent cognitive neuronal processing. When the network was stimulated with sensory features, the cognitive performance of its neurons, measured as signal-to-noise (S/N) ratio, was greatly enhanced; one of three possible S/N enhancement schemes operated under the E-E/E-I system, namely: i) signal enhancement greater than noise increase, ii) signal enhancement together with noise reduction, and iii) noise reduction greater than signal decrease. When a weaker (or subthreshold) stimulus was presented, scheme (ii) effectively enhanced the S/N ratio, whereas scheme (iii) was effective for stronger stimuli. I suggest that a release of NE into cortical areas may modify their background neuronal activity, whereby cortical neurons can respond effectively to a variety of external sensory stimuli.
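The three enhancement schemes share a common logic: the S/N ratio rises whenever the relative change in signal exceeds the relative change in noise. This can be checked with arbitrary illustrative numbers (not taken from the paper):

```python
def snr(signal, noise):
    """Signal-to-noise ratio as a simple quotient."""
    return signal / noise

base = snr(2.0, 1.0)           # baseline S/N = 2.0 (hypothetical values)

# i) signal enhanced more than noise increases
assert snr(4.0, 1.5) > base    # 2.67 > 2.0
# ii) signal enhanced while noise is reduced
assert snr(3.0, 0.5) > base    # 6.0 > 2.0
# iii) noise reduced more than signal decreases
assert snr(1.5, 0.5) > base    # 3.0 > 2.0
```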
|
CNN-2

Title: Associative Memories with Small World Connectivity
Author(s): Neil Davey, Lee Calcraft, Rod Adams
Abstract: In this paper we report experiments designed to find the relationship between the different parameters of sparsely connected networks of perceptrons with small-world connectivity patterns, acting as associative memories.
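As an illustration of the kind of connectivity the abstract refers to (not the authors' code; all names and parameters here are assumptions), a Watts–Strogatz-style rewiring of a ring lattice yields a small-world mask that restricts Hebbian weights to a sparse connection pattern:

```python
import numpy as np

def small_world_mask(n, k, p, rng=None):
    """Watts-Strogatz-style connectivity mask for n units.

    Start from a ring lattice where each unit connects to its k nearest
    neighbours, then rewire each connection with probability p.
    Returns an n x n symmetric boolean matrix with no self-connections.
    """
    rng = np.random.default_rng(rng)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for offset in range(1, k // 2 + 1):
            j = (i + offset) % n
            if rng.random() < p:   # rewire this edge to a random target
                choices = [c for c in range(n)
                           if c != i and not mask[i, c]]
                j = rng.choice(choices)
            mask[i, j] = mask[j, i] = True
    return mask

def train(patterns, mask):
    """Hebbian (outer-product) weights, kept only where the mask allows."""
    w = sum(np.outer(x, x) for x in patterns).astype(float)
    np.fill_diagonal(w, 0.0)
    return w * mask
```

Varying the rewiring probability p sweeps the connectivity from a regular lattice (p = 0) to a random graph (p = 1), with the small-world regime in between — the kind of parameter relationship the paper's experiments explore.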
|
CNN-3

Title: A Memory-Based Reinforcement Learning Model Utilizing Macro-Actions
Author(s): Makoto Murata, Seiichi Ozawa
Abstract: One of the difficulties in reinforcement learning is that an optimal policy is acquired only through an enormous number of trials. As a way to reduce wasteful exploration during learning, the exploitation of macro-actions has recently attracted attention. In this paper, we propose a memory-based reinforcement learning model in which macro-actions are generated and exploited effectively. Through experiments on two standard tasks, we confirmed that the proposed method can reduce wasteful exploration, especially in the early training stage. This property contributes to enhancing training efficiency in RL tasks.
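The abstract does not specify the authors' model; as a generic illustration of how macro-actions can be exploited (a fixed primitive-action sequence treated as a single choice in tabular Q-learning, updated SMDP-style — action names and parameters are hypothetical), one might sketch:

```python
import random
from collections import defaultdict

PRIMITIVES = ["left", "right"]
MACROS = {"right3": ["right", "right", "right"]}  # hypothetical macro

def run_macro(env, state, seq, gamma):
    """Execute a primitive-action sequence open-loop; return the final
    state and the discounted return collected along the way."""
    total, discount, s = 0.0, 1.0, state
    for a in seq:
        s, r = env.step(s, a)
        total += discount * r
        discount *= gamma
    return s, total

def q_learning_step(q, env, state, epsilon=0.1, alpha=0.5, gamma=0.9):
    """One epsilon-greedy step over primitives and macros combined."""
    actions = PRIMITIVES + list(MACROS)
    if random.random() < epsilon:
        choice = random.choice(actions)
    else:
        choice = max(actions, key=lambda a: q[(state, a)])
    seq = MACROS.get(choice, [choice])
    s2, ret = run_macro(env, state, seq, gamma)
    # SMDP-style target: discount the bootstrap by the macro's duration
    target = ret + (gamma ** len(seq)) * max(q[(s2, a)] for a in actions)
    q[(state, choice)] += alpha * (target - q[(state, choice)])
    return s2
```

A useful macro lets a single update credit a multi-step jump toward the goal, which is one way to see why macro-actions reduce exploration early in training.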
|
CNN-4

Title: A Biologically Motivated Classifier that Preserves Implicit Relationship Information in Layered Networks
Author(s): Charles C. Peck, James Kozloski, Guillermo A. Cecchi, A. Ravishankar Rao
Abstract: A fundamental problem with layered neural networks is the loss of information about the relationships among features in the input space and the relationships inferred by higher-order classifiers. Information about these relationships is required to solve problems such as discrimination of simultaneously presented objects and discrimination of feature components. We propose a biologically motivated model for a classifier that preserves this information, and show that, when composed into classification networks, the classifier propagates and aggregates information about feature relationships. We discuss how the model should be capable of segregating this information for the purpose of object discrimination, and of aggregating multiple feature components for the purpose of feature component discrimination.
|
CNN-5

Title: Large Scale Hetero-Associative Networks with Very High Classification Ability and Attractor Discrimination Consisting of Cumulative-Learned 3-Layer Neural Networks
Author(s): Yohtaro Yatsuzuka, Yo Ho
Abstract: We propose a hetero-associative network consisting of a cumulative-learned forward 3-layer neural network and a backward 3-layer neural network, together with a hetero-tandem associative network. The hetero-tandem associative network places a spindle-type single cyclic-associative network with cumulative learning in tandem after the hetero-associative network. These hetero-associative networks achieve high classification and recognition performance, as well as rapid attractor absorption. Consecutive codification of outputs in the forward network was found to produce no spurious attractors, and with coarse codification the converged attractors can be easily identified as training or spurious attractors. Cumulative learning with prototypes and additional training data adjacent to them can also drastically improve the associative performance of both the spindle single cyclic-associative and hetero-tandem associative networks.
|
|
|