Machine Learning and Heuristics II (MLHII)


Time: Monday 21st March, 14h20-16h00

Paper ID   Title
MLHII-1 Visualization of Meta-Reasoning in Multi-Agent Systems
MLHII-2 Intelligent Agent inspired Genetic Algorithm
MLHII-3 Combining Lazy Learning, Racing and Subsampling for Effective Feature Selection
MLHII-4 Personalized News Access
MLHII-5 A More Accurate Text Classifier for Positive and Unlabeled Data

Title: Visualization of Meta-Reasoning in Multi-Agent Systems
Author(s): D. Rehor, J. Tozicka, P. Slavik
Abstract: This paper describes the advances of our research on the visualization of multi-agent systems (MAS) for analysis, monitoring and debugging. As MAS become more complex and more widely used, such analysis tools are highly beneficial for achieving a better understanding of agent behaviour. Our solution is based on our originally offline visualization tool suite, which now uses a new real-time data acquisition framework. Here we focus on agent meta-reasoning in a MAS for planning humanitarian relief operations. Previous tools were unable to deal with the complex characteristics of these simulations, so we have extended them to present as much of the important information as possible for real-time monitoring. This paper describes this progress, states the requirements, and proposes visualization methods that fulfil them.

Title: Intelligent Agent inspired Genetic Algorithm
Author(s): C. Wu, Y. Lian, H. Lee, C. Lu
Abstract: An intelligent agent-inspired genetic algorithm (IAGA) is proposed. Analogous to an intelligent agent, each individual in IAGA has its own properties, including its crossover probability, mutation probability, etc. Numerical simulations demonstrate that, compared with the classical GA, in which all individuals in a population share the same crossover and mutation probabilities, the proposed algorithm is more efficient and effective.
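To make the core idea concrete, the following is a minimal sketch (not the authors' algorithm): a GA on a toy OneMax problem in which each individual carries its own crossover and mutation probabilities, inherited and self-adapted along with the genome. All names and parameter values here are illustrative assumptions.

```python
import random

GENOME_LEN = 20

def fitness(genome):
    # OneMax benchmark: count of 1-bits (a stand-in problem, not from the paper)
    return sum(genome)

def make_individual(rng):
    # Each individual carries its own strategy parameters, analogous
    # to an agent's private properties in IAGA.
    return {"genome": [rng.randint(0, 1) for _ in range(GENOME_LEN)],
            "pc": rng.uniform(0.5, 1.0),   # personal crossover probability
            "pm": rng.uniform(0.01, 0.2)}  # personal mutation probability

def tournament(pop, rng, k=3):
    return max(rng.sample(pop, k), key=lambda ind: fitness(ind["genome"]))

def reproduce(p1, p2, rng):
    child = {"genome": p1["genome"][:],
             "pc": (p1["pc"] + p2["pc"]) / 2,
             "pm": (p1["pm"] + p2["pm"]) / 2}
    if rng.random() < child["pc"]:          # crossover at the child's own rate
        cut = rng.randrange(1, GENOME_LEN)
        child["genome"] = p1["genome"][:cut] + p2["genome"][cut:]
    child["genome"] = [1 - g if rng.random() < child["pm"] else g
                       for g in child["genome"]]  # per-gene mutation, own rate
    # Self-adapt the personal probabilities with small Gaussian noise
    child["pc"] = min(1.0, max(0.1, child["pc"] + rng.gauss(0, 0.05)))
    child["pm"] = min(0.5, max(0.001, child["pm"] + rng.gauss(0, 0.01)))
    return child

def iaga(pop_size=30, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [make_individual(rng) for _ in range(pop_size)]
    best = max(fitness(ind["genome"]) for ind in pop)
    for _ in range(generations):
        pop = [reproduce(tournament(pop, rng), tournament(pop, rng), rng)
               for _ in range(pop_size)]
        best = max(best, max(fitness(ind["genome"]) for ind in pop))
    return best
```

The point of the per-individual probabilities is that good parameter settings propagate through selection along with good genomes, rather than being fixed globally as in the classical GA.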

Title: Combining Lazy Learning, Racing and Subsampling for Effective Feature Selection
Author(s): Gianluca Bontempi, Mauro Birattari, Patrick E. Meyer
Abstract: This paper presents a wrapper method for feature selection that combines Lazy Learning, racing and sub-sampling techniques. Lazy Learning (LL) is a local learning technique that, once a query is received, extracts a prediction by locally interpolating the neighbouring examples of the query that are considered relevant according to a distance measure. Local learning techniques are often criticized for their limitations in dealing with problems with a high number of features and large samples. Similarly, wrapper methods are considered prohibitive for large numbers of features, due to the high cost of the evaluation step. The paper aims to show that a wrapper feature selection method based on LL can take advantage of two effective strategies: racing and sub-sampling. While the idea of racing was already proposed by Maron and Moore, this paper goes a step further by (i) proposing a multiple-testing technique for less conservative racing and (ii) combining racing with sub-sampling techniques.
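As a rough illustration of the two ingredients described above (not the paper's method), the sketch below shows a degree-0 Lazy Learning predictor (average of the query's nearest neighbours in a candidate feature subspace) and a sub-sampled wrapper evaluation that scores a feature subset on a random subset of leave-one-out queries instead of the full sample; racing would then discard subsets whose running error is already statistically worse. All function names and parameters are illustrative assumptions.

```python
import math
import random

def knn_predict(X, y, query, features, k=3):
    # Lazy Learning with a constant (degree-0) local model: once the
    # query arrives, average the outputs of its k nearest neighbours,
    # measuring distance only over the selected feature subset.
    def dist(row):
        return math.sqrt(sum((row[f] - query[f]) ** 2 for f in features))
    nearest = sorted(range(len(X)), key=lambda i: dist(X[i]))[:k]
    return sum(y[i] for i in nearest) / k

def subsampled_score(X, y, features, rng, n_queries=20, k=3):
    # Sub-sampling: estimate the leave-one-out error of a candidate
    # feature subset from a random sample of queries, which is what
    # makes the wrapper's evaluation step affordable.
    idx = rng.sample(range(len(X)), min(n_queries, len(X)))
    err = 0.0
    for q in idx:
        rest = [i for i in range(len(X)) if i != q]
        pred = knn_predict([X[i] for i in rest], [y[i] for i in rest],
                           X[q], features, k)
        err += (pred - y[q]) ** 2
    return err / len(idx)

# Toy usage: the target depends only on feature 0; feature 1 is noise,
# so the subset [0] should score better than [0, 1].
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(100)]
y = [3 * row[0] for row in X]
good = subsampled_score(X, y, [0], random.Random(7))
noisy = subsampled_score(X, y, [0, 1], random.Random(7))
```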

Title: Personalized News Access
Author(s): D. G. Kaklamanos, K. G. Margaritis
Abstract: PENA (Personalized News Access) is an adaptive system for personalized access to news. The system aims to collect news from predefined news sites, to select the sections and news items on the server that are most relevant to each user, and to present the selected news. This paper describes the news collection process, the techniques adopted for structuring the news archive, the creation, maintenance and update of the user model, and the generation of the personalized web pages. This work is based on the system described in [1].

Title: A More Accurate Text Classifier for Positive and Unlabeled Data
Author(s): Rui-Ming Xin, Wan-Li Zuo
Abstract: Purifying the unlabeled set and expanding the positive set are central to all LPU (learning from positive and unlabeled data) approaches. To these two ends, this paper proposes CoTrain-Active, a typical two-step approach. The first step, CoTrain, inspired by the traditional co-training method, iterates to purify the unlabeled set with two individual SVM classifiers. The second step, an active-learning step, further expands the positive set by requesting the true labels of the “most doubtful positive” instances. Comprehensive experiments show that our approach is superior to Biased-SVM, previously reported as the best-performing method. Moreover, CoTrain-Active remains suitable for situations in which positive instances are extremely scarce while the unlabeled set contains many positive instances.
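To illustrate the two-step shape of such an approach (this is a toy sketch, not the paper's method: a nearest-centroid score stands in for the SVM margin, and two single-feature "views" stand in for the two classifiers), the following purifies the unlabeled set where the two views agree and spends oracle queries only on the instances they disagree on. All names, thresholds and the feature split are illustrative assumptions.

```python
import math

def centroid(rows):
    return [sum(r[i] for r in rows) / len(rows) for i in range(len(rows[0]))]

def margin(x, pos_c, neg_c):
    # Nearest-centroid stand-in for an SVM margin: > 0 means the point
    # lies closer to the positive centroid than to the negative one.
    return math.dist(x, neg_c) - math.dist(x, pos_c)

def cotrain_active(P, U, oracle, rounds=2):
    """Two-step PU-learning sketch.
    Step 1 (CoTrain-like): two per-view classifiers, trained with U as
    provisional negatives, purify U by removing instances both agree
    are positive. Step 2 (active learning): the oracle is asked for
    the true label of the 'doubtful' instances the views disagree on."""
    P, U = list(P), list(U)
    views = (lambda x: (x[0],), lambda x: (x[1],))  # toy feature split
    for _ in range(rounds):
        if not U:
            break
        margins = []
        for u in U:
            ms = []
            for v in views:
                pc = centroid([v(p) for p in P])
                nc = centroid([v(q) for q in U])  # U as provisional negatives
                ms.append(margin(v(u), pc, nc))
            margins.append(ms)
        agreed = [u for u, (m1, m2) in zip(U, margins) if m1 > 0 and m2 > 0]
        doubtful = [u for u, (m1, m2) in zip(U, margins) if (m1 > 0) != (m2 > 0)]
        P += agreed                       # step 1: purify U, expand P
        for u in doubtful:                # step 2: pay for true labels
            if oracle(u):
                P.append(u)
        U = [u for u in U if u not in P]  # confirmed negatives remain in U
    return P, U

# Toy data: positives cluster near (1, 1), negatives near the origin;
# U hides two positives among three negatives.
P0 = [(1.0, 1.0), (0.9, 1.1)]
U0 = [(1.1, 0.9), (0.95, 1.05), (0.0, 0.1), (0.1, 0.0), (0.05, 0.05)]
oracle = lambda x: x[0] + x[1] > 1  # ground truth, queried only when doubtful
P, U = cotrain_active(P0, U0, oracle)
```

After the run, the two hidden positives have moved from U into P, leaving U a purified set of reliable negatives, which mirrors the division of labour the abstract describes.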