Neural Networks Theory I (NNTI)
CHAIR: JIRI SIMA
Time: Monday 21st March, 11h00-12h40
NNTI-1
Title: ADFUNN: An Adaptive Function Neural Network
Author(s): Dominic Palmer-Brown, Miao Kang
Abstract:
An adaptive function neural network (ADFUNN) is introduced. It is based on a piecewise linear artificial neuron activation function that is modified by a novel gradient descent supervised learning algorithm. This adaptation is carried out in parallel with the traditional weight (w) learning process. Linearly inseparable problems can be learned with ADFUNN, rapidly and without hidden neurons. The Iris dataset classification problem is learned as an example. An additional benefit of ADFUNN is that the learned functions can support intelligent data analysis.
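To make the adaptation scheme concrete, below is a minimal Python sketch of an ADFUNN-style neuron: the activation function is stored as a table of sample points on a fixed grid and interpolated piecewise linearly, and both the function points and the weights are nudged on each example. The grid range, learning rates, and the exact update rule are illustrative assumptions, not the authors' published algorithm.

import numpy as np

class AdfunnNeuron:
    # Piecewise linear activation sampled at fixed grid points; both the
    # function values and the weights are adapted (illustrative scheme,
    # not the authors' exact update rule).
    def __init__(self, n_inputs, n_points=21, lr_w=0.01, lr_f=0.1):
        rng = np.random.default_rng(0)
        self.w = rng.uniform(-0.1, 0.1, n_inputs)
        self.grid = np.linspace(-2.0, 2.0, n_points)   # net-input sample points
        self.f = np.zeros(n_points)                    # adaptable function values
        self.lr_w, self.lr_f = lr_w, lr_f

    def forward(self, x):
        a = float(np.clip(np.dot(self.w, x), self.grid[0], self.grid[-1]))
        return a, float(np.interp(a, self.grid, self.f))

    def train_step(self, x, target):
        a, y = self.forward(x)
        err = target - y
        i = int(np.clip(np.searchsorted(self.grid, a), 1, len(self.grid) - 1))
        self.f[i - 1] += self.lr_f * err               # adapt the two points
        self.f[i] += self.lr_f * err                   # bounding the segment
        slope = (self.f[i] - self.f[i - 1]) / (self.grid[i] - self.grid[i - 1])
        self.w += self.lr_w * err * slope * np.asarray(x, float)  # parallel weight update
        return err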
NNTI-2
Title: Certain comments on data preparation for neural networks based modelling
Author(s): Bartlomiej Beliczynski
Abstract:
The process of data preparation for neural-network-based modelling is examined. We discuss sampling, preprocessing, and decimation, finally arguing for orthonormal input preprocessing.
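The closing recommendation can be illustrated with a short Python sketch: one common way to obtain orthonormal input components is PCA-style whitening, so that the preprocessed inputs have zero mean and identity covariance. This is a generic stand-in; the paper's specific orthonormal preprocessing may be constructed differently.

import numpy as np

def orthonormalize_inputs(X):
    # Centre the (n_samples, n_features) matrix and whiten it via SVD so
    # the transformed components are uncorrelated with unit variance.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt.T * (np.sqrt(len(X) - 1) / S)

# usage: cov(Z) is (numerically) the identity matrix
X = np.random.randn(200, 5) @ np.random.randn(5, 5)
Z = orthonormalize_inputs(X)
print(np.allclose(np.cov(Z, rowvar=False), np.eye(5)))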
NNTI-3
Title: A simple method for selection of inputs and structure of feedforward neural networks
Author(s): H. Saxen, F. Pettersson
Abstract:
In using feedforward neural networks of multi-layer perceptron (MLP) type as black-box models of complex processes, a common problem is how to select relevant inputs from a large set of potential variables that affect the outputs to be modeled. If, furthermore, the observations of the input-output tuples are scarce, the degrees of freedom may not allow for the use of a fully connected layer between the inputs and the hidden nodes. This paper presents a systematic method for selection of both the input variables and a constrained connectivity of the lower-layer weights in MLPs. The method, which can also be used as a means to provide initial guesses for the weights before the final training phase of the MLPs, is illustrated on a class of test problems.
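As a rough illustration of the kind of procedure the abstract describes, the Python sketch below greedily grows the input set one variable at a time, keeping whichever candidate most reduces the validation error of a small MLP. This is a hypothetical stand-in for the authors' method: their algorithm also derives a constrained lower-layer connectivity and initial weight guesses, which are omitted here.

import numpy as np
from sklearn.neural_network import MLPRegressor

def greedy_input_selection(X, y, X_val, y_val, max_inputs=3):
    # Hypothetical greedy variant: add the input that most reduces the
    # validation MSE of a small MLP, until max_inputs are selected.
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_inputs:
        scores = {}
        for j in remaining:
            cols = selected + [j]
            net = MLPRegressor(hidden_layer_sizes=(3,),
                               max_iter=2000, random_state=0)
            net.fit(X[:, cols], y)
            pred = net.predict(X_val[:, cols])
            scores[j] = float(np.mean((pred - y_val) ** 2))
        best = min(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected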
NNTI-4
Title: The Concept and Properties of Sigma-if Neural Network
Author(s): M. Huk, H. Kwasnicka
Abstract:
Research to date in the area of applied technical artificial intelligence systems suggests that it is necessary to focus further on the characteristic requirements of this research field. One of those requirements is the need for effective analysis of multidimensional heterogeneous data sets, which poses particular difficulties for AI-based solutions. Recent works point to the possibility of extending the activation function of a perceptron to the time domain, thus significantly enhancing the capabilities of neural networks. This change makes it possible to dynamically tune the size of the decision space under consideration, through continuous adaptation of the interneuronal connection architecture to the data being classified. Such adaptation reflects the importance of individual decision attributes for the patterns being classified, as determined by the Sigma-if network during its training phase. These characteristics enable the effective employment of such networks in solving classification problems that emerge in the technical sciences. The described approach is also a novel and interesting area of neural network research. This article discusses selected aspects of the construction and training of Sigma-if networks, based on well-known sample classification problems.
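Sigma-if neurons are usually described in terms of conditional aggregation: weighted inputs are read group by group, in decreasing order of importance, and aggregation stops early once the partial sum is decisive. The Python sketch below shows that idea; the grouping, threshold, and stopping test here are assumptions standing in for what the Sigma-if network determines during its training phase.

import numpy as np

def sigma_if_neuron(x, w, groups, theta):
    # Conditional aggregation: accumulate one importance group at a time
    # and stop reading further inputs once |partial sum| reaches theta.
    s = 0.0
    for g in groups:                   # groups: index arrays, most important first
        s += float(np.dot(w[g], x[g]))
        if abs(s) >= theta:            # decisive: remaining inputs are skipped,
            break                      # shrinking the decision space considered
    return 1.0 / (1.0 + np.exp(-s))   # ordinary sigmoid on the (partial) sum

# usage: the second group is consulted only when the first is inconclusive
x = np.array([0.9, 0.1, 0.4, -0.2])
w = np.array([1.5, 1.0, 0.5, 0.5])
y = sigma_if_neuron(x, w, [np.array([0, 1]), np.array([2, 3])], theta=1.0)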
NNTI-5
Title: Beta wavelet networks for function approximation
Author(s): Wajdi Bellil, Chokri Amar, Adel Alimi
Abstract:
Wavelet neural networks (WNN) have recently attracted great interest because of their advantages over radial basis function networks (RBFN), as they are universal approximators. In this paper we present a novel wavelet neural network, based on Beta wavelets, for 1-D and 2-D function approximation. Our purpose is to approximate an unknown function f: R^n → R from scattered samples (x_i, y_i = f(x_i)), i = 1, ..., n, where: (i) we have little a priori knowledge of the unknown function f, which lives in some infinite-dimensional smooth function space; (ii) the function approximation process is performed iteratively, so that each new measurement of the function (x_i, f(x_i)) is used to compute a new estimate as an approximation of the function f. Simulation results are presented to validate the generalization ability and efficiency of the proposed Beta wavelet network.
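The iterative scheme sketched in the abstract fits the standard wavelet-network form y(x) = sum_i w_i * psi((x - b_i) / a_i). Below is a minimal 1-D Python sketch in that form; since the abstract does not give the Beta wavelet in closed form, a Mexican-hat mother wavelet stands in for it, and the dilations a_i and translations b_i are fixed here rather than trained.

import numpy as np

def mexican_hat(t):
    # Stand-in mother wavelet (the paper uses Beta wavelets instead).
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def wnn_fit(x, y, n_wavelets=10):
    # Fixed translations b and dilations a; output weights w by least squares.
    b = np.linspace(x.min(), x.max(), n_wavelets)
    a = np.full(n_wavelets, (x.max() - x.min()) / n_wavelets)
    Phi = mexican_hat((x[:, None] - b) / a)          # design matrix, (n, n_wavelets)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w, a, b

def wnn_predict(x, w, a, b):
    return mexican_hat((x[:, None] - b) / a) @ w     # y(x) = sum_i w_i psi((x-b_i)/a_i)

# usage: approximate f(x) = sin(3x) on [-2, 2] from scattered samples
x = np.sort(np.random.uniform(-2.0, 2.0, 200))
w, a, b = wnn_fit(x, np.sin(3.0 * x))
y_hat = wnn_predict(x, w, a, b)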