Detailed Programme

CHAIR: VERA KURKOVA
Monday, March 21st, 16h30-18h10

LT-1
Title: Evolution versus Learning in Temporal Neural Networks
Author(s): Hedi Soula, Guillaume Beslon, Joel Favrel
Abstract: In this paper, we study the difference between two ways of setting synaptic weights in a "temporal" neural network. Used as a controller of a simulated mobile robot, the neural network is alternately evolved through an evolutionary algorithm or trained via a Hebbian reinforcement learning rule. We compare both approaches and argue that, in the end, only the learning paradigm is able to meaningfully exploit the temporal features of the neural network.
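
The abstract does not spell out the learning rule; as a minimal sketch of what a reward-modulated Hebbian update generally looks like (the function name, learning rate, and exact form of the rule are assumptions, not the authors' method):

    import numpy as np

    def hebbian_reinforcement_update(w, pre, post, reward, lr=0.01):
        # Reward-modulated Hebbian step: the correlation of pre- and
        # postsynaptic activity is gated by the scalar reinforcement
        # signal, so weights grow only when coincident activity paid off.
        # w: (n_post, n_pre) weights; pre, post: activity vectors.
        return w + lr * reward * np.outer(post, pre)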

LT-2
Title: Minimization of Empirical Error over Perceptron Networks
Author(s): Vera Kurkova
Abstract: Supervised learning by perceptron networks is investigated as an approximate minimization of the empirical error functional. Rates of convergence to optimal solutions, which may require as many hidden units as the size of the training set, are derived for suboptimal solutions obtainable by networks with n hidden units, in terms of a variational norm. It is shown that fast rates can be guaranteed when the data defining the empirical error can be interpolated by a function whose Sobolev-type norm does not grow exponentially with the input dimension d.
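
As a schematic of the setting (standard definitions and a Maurey-Jones-Barron-type rate, not the paper's exact statement): for a training sample $z = \{(x_i, y_i)\}_{i=1}^{m}$, the empirical error functional is

    \mathcal{E}_z(f) = \frac{1}{m} \sum_{i=1}^{m} \bigl( f(x_i) - y_i \bigr)^2,

and bounds of the flavour alluded to above take the form

    \mathcal{E}_z(f_n) - \inf_{f} \mathcal{E}_z(f) = O\!\left( \frac{\|h\|_{G}^{2}}{n} \right),

where $f_n$ ranges over networks with $n$ hidden perceptron units, $h$ interpolates the data, and $\|\cdot\|_G$ is the variational norm with respect to the dictionary $G$ of perceptrons.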

LT-3
Title: Interval Basis Neural Networks
Author(s): A. Horzyk
Abstract: The paper introduces a new type of ontogenic neural network called the Interval Basis Neural Network (IBNN). IBNNs configure their whole topology and compute their weights from a priori knowledge collected from the training data. After a statistical analysis, the training data of each class are grouped into intervals, separately for every input feature. This feature of IBNNs makes it possible to compute all network parameters without training. Moreover, IBNNs take into account the distances between patterns of the same class and build a well-approximating model, especially on the borders between classes. Furthermore, IBNNs are insensitive to differences in the number of patterns representing each class. IBNNs always classify the training data correctly and generalize well to other data.
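
The abstract only outlines the construction, so the following is a guess at the core idea (per-class, per-feature intervals, with classification by interval membership); the simple min/max grouping and all names are invented for illustration, not Horzyk's actual statistical procedure:

    import numpy as np

    def fit_intervals(X, y):
        # One [min, max] interval per feature for each class.
        return {c: (X[y == c].min(axis=0), X[y == c].max(axis=0))
                for c in np.unique(y)}

    def classify(x, intervals):
        # Score each class by how many of its feature intervals
        # contain x, then return the best-covered class.
        scores = {c: np.sum((lo <= x) & (x <= hi))
                  for c, (lo, hi) in intervals.items()}
        return max(scores, key=scores.get)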

LT-4
Title: Learning from Randomly-Distributed Inaccurate Measurements
Author(s): John Eidson, Bruce Hamilton, Valery Kanevsky
Abstract: Traditional measurement systems are designed with tight control over the time and place of measurement of the device or environment under test. This is true whether the measurement system uses a centralized or a distributed architecture. Currently there is considerable interest in using mobile consumer devices as measurement platforms for testing large dispersed systems. There is also growing activity in developing concepts of ubiquitous measurement, such as “smart dust”. Under these conditions the times and places of measurement are random, which raises the question of the validity and interpretation of the acquired data. This paper presents a mathematical analysis showing that, under certain conditions, it is possible to establish a dependence between error bounds and confidence probability for models built using data acquired in this manner.
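
The abstract does not reproduce the analysis; as a generic illustration of the error-bound/confidence trade-off involved (a textbook Hoeffding inequality, not the paper's result, which additionally copes with random times and places of measurement), for the mean of $m$ independent measurements bounded in $[a, b]$:

    P\bigl( |\bar{X}_m - \mu| \ge \varepsilon \bigr) \le 2 \exp\!\left( \frac{-2 m \varepsilon^2}{(b-a)^2} \right),

so an error bound $\varepsilon$ holds with confidence at least $1 - \delta$ once $m \ge \frac{(b-a)^2}{2\varepsilon^2} \ln\frac{2}{\delta}$.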

LT-5
Title: Combining topological and cardinal directional relation information in QSR
Author(s): Haibin Sun
Abstract: Combining different knowledge representation languages is one of the main topics in Qualitative Spatial Reasoning (QSR). In this paper, we combine the well-known Region Connection Calculus (RCC8) and the cardinal direction calculus (CDC) based on regions, and give the interaction tables for the two calculi. The interaction tables can be used as a tool for solving constraint satisfaction problems (CSPs) and in consistency-checking procedures of QSR for combined spatial knowledge.
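
As a toy illustration of how such an interaction table can prune a constraint network (the table entry below is invented for the sketch; the paper derives the actual tables):

    def refine(rcc8, cdc, interaction):
        # Keep only the RCC8 base relations the interaction table allows
        # under the known cardinal direction relation; an empty result
        # signals that the combined constraints are inconsistent.
        return rcc8 & interaction.get(cdc, rcc8)

    # Hypothetical entry: a region strictly north of another can at
    # most touch it, so only DC and EC survive.
    interaction = {"N": {"DC", "EC"}}
    print(refine({"DC", "EC", "PO", "EQ"}, "N", interaction))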

LT-6
Title: An Evidence Theoretic Ensemble Design Technique
Author(s): Hakan Altincay
Abstract: Ensemble design techniques based on resampling the training set are successfully used to improve the classification accuracies of the base classifiers. In boosting, each training set is obtained by drawing samples with replacement from the available training set according to a weighted distribution, which is iteratively updated to generate new classifiers for the ensemble. The resultant classifiers are accurate in different parts of the input space, mainly specified by the sample weights. In this study, a dynamic integration of boosting-based ensembles is proposed so as to take into account the heterogeneity of the input sets. In this approach, a Dempster-Shafer theory based framework is developed to consider the training sample distribution in the restricted input space of each test sample. The effectiveness of the proposed technique is compared to the AdaBoost algorithm using a nearest-mean type base classifier.