Dr Irena Spasić, Teaching Assistant

She completed her secondary education at the Fourth Belgrade Gymnasium. She received her undergraduate and master's degrees from the Faculty of Mathematics, University of Belgrade. She is currently employed at the Faculty of Science of the University of Manchester, UK.

From October 1994 she taught tutorial classes at the Faculty of Economics for the course "Mathematics for Economists" and, under the new curriculum, for the first-year course "Mathematics". During the academic year 1996/1997 she taught tutorial classes at the Faculty of Mathematics for the course "Theory of Algorithms, Languages and Automata". From 1995 she participated in project 04M01, subproject "Computing and Applications". From February 1999 she collaborated occasionally with the Petnica Science Center, where she lectures on computer science to gifted students from the country and abroad. At the Faculty of Economics she was also engaged in its School of Computing. Her research interests lie within artificial intelligence, ranging from computational linguistics to case-based reasoning.

References:

Journal publications:

M. Brown, W. Dunn, D.I. Ellis, R. Goodacre, J. Handl, J. Knowles, S. O'Hagan, I. Spasic and D.B. Kell. - "A Metabolome Pipeline: from Concept to Data to Knowledge," in Metabolomics, Vol. 1, No. 1, pp. 35-46, 2005 (in press)

I. Spasic and S. Ananiadou. - "Using Automatically Learnt Verb Selectional Preferences for Classification of Biomedical Terms," in Journal of Biomedical Informatics, Special Issue on Named Entity Recognition in Biomedicine, Vol. 37, No. 6, pp. 483-497, 2004 [full paper] [PMID: 15542021]

In this paper, we present an approach to term classification based on verb selectional patterns (VSPs), where such a pattern is defined as a set of semantic classes that could be used in combination with a given domain-specific verb. VSPs have been automatically learnt based on the information found in a corpus and an ontology in the biomedical domain. Prior to the learning phase, the corpus is terminologically processed: term recognition is performed by both looking up the dictionary of terms listed in the ontology and applying the C/NC-value method for on-the-fly term extraction. Subsequently, domain-specific verbs are automatically identified in the corpus based on the frequency of occurrence and the frequency of their co-occurrence with terms. VSPs are then learnt automatically for these verbs. Two machine learning approaches are presented. The first approach has been implemented as an iterative generalisation procedure based on a partial order relation induced by the domain-specific ontology. The second approach exploits the idea of genetic algorithms. Once the VSPs are acquired, they can be used to classify newly recognised terms co-occurring with domain-specific verbs. Given a term, the most frequently co-occurring domain-specific verb is selected. Its VSP is used to constrain the search space by focusing on potential classes of the given term. A nearest-neighbour approach is then applied to select a class from the constrained space of candidate classes. The most similar candidate class is predicted for the given term. The similarity measure used for this purpose combines contextual, lexical, and syntactic properties of terms.
G. Nenadic, I. Spasic, and S. Ananiadou. - "Mining Term Similarities from Corpora," in Terminology, Special Issue on Recent Trends in Computational Terminology, Vol. 10, No. 1, pp. 55-80, 2004
In this article we present an approach to the automatic discovery of term similarities, which may serve as a basis for a number of term-oriented knowledge mining tasks. The method for term comparison combines internal (lexical similarity) and two types of external criteria (syntactic and contextual similarities). Lexical similarity is based on sharing lexical constituents (i.e. term heads and modifiers). Syntactic similarity relies on a set of specific lexico-syntactic co-occurrence patterns indicating the parallel usage of terms (e.g. within an enumeration or within a term coordination/conjunction structure), while contextual similarity is based on the usage of terms in similar contexts. Such contexts are automatically identified by a pattern mining approach, and a procedure is proposed to assess their domain-specific and terminological relevance. Although automatically collected, these patterns are domain dependent and identify contexts in which terms are used. Different types of similarities are combined into a hybrid similarity measure, which can be tuned for a specific domain by learning optimal weights for individual similarities. The suggested similarity measure has been tested in the domain of biomedicine, and some experiments are presented.
G. Nenadic, I. Spasic, and S. Ananiadou. - "Terminology-Driven Mining of Biomedical Literature," in Bioinformatics, Vol. 19, No. 8, pp. 938-943, 2003 [full paper] [PMID: 12761055]
In this paper we present an overview of an integrated framework for terminology-driven mining from biomedical literature. The framework integrates the following components: automatic term recognition, term variation handling, acronym acquisition, automatic discovery of term similarities and term clustering. Term variant recognition is incorporated into the terminology recognition process by taking into account orthographical, morphological, syntactic, lexico-semantic and pragmatic term variations. In particular, we address acronyms as a common way of introducing term variants in biomedical papers. Term clustering is based on the automatic discovery of term similarities. We use a hybrid similarity measure, where terms are compared by using both internal and external evidence. The measure combines lexical, syntactic and contextual similarity. Experiments on terminology recognition and structuring performed on a corpus of biomedical abstracts are presented.
G. Nenadic, H. Mima, I. Spasic, S. Ananiadou, and J. Tsujii. - "Terminology-based Literature Mining and Knowledge Acquisition in Biomedicine," in International Journal of Medical Informatics, Vol. 67, No. 1-3, pp. 33-48, 2002 [full paper] [PMID: 12460630]
In this paper we describe TIMS, an integrated knowledge management system for the domain of molecular biology and biomedicine, in which terminology-driven literature mining, knowledge acquisition, knowledge integration, and XML-based knowledge retrieval are combined using tag information management and ontology inference. The system integrates automatic terminology acquisition, term variation management, hierarchical term clustering, tag-based information extraction, and ontology-based query expansion. TIMS supports introducing and combining different types of tags (linguistic and domain-specific, manual and automatic). Tag-based interval operations and a query language are introduced in order to facilitate knowledge acquisition and retrieval from XML documents. Through knowledge acquisition examples, we illustrate the way in which literature mining techniques can be utilised for knowledge discovery from documents.


Book chapters:

G. Nenadic, I. Spasic, and S. Ananiadou. - "Mining Biomedical Abstracts: What is in a Term?," in 1st International Joint Conference on Natural Language Processing - IJCNLP 2004, LNAI 3248, Springer Verlag, 2004
In this paper we present a study of the usage of terminology in biomedical literature, with the main aim to indicate phenomena that can be helpful for automatic term recognition in the domain. Our comparative analysis is based on the terminology used in the Genia corpus. We analyse the usage of ordinary biomedical terms as well as their variants (namely inflectional and orthographic alternatives, terms with prepositions, coordinated terms, etc.), showing the variability and dynamic nature of terms used in biomedical abstracts. Term coordination and terms containing prepositions are analysed in detail. We show that there is a discrepancy between terms used in literature and terms listed in controlled dictionaries. We also evaluate the effectiveness of incorporating different types of term variation into an automatic term recognition system.
I. Spasic, G. Nenadic, and S. Ananiadou. - "Learning to Classify Biomedical Terms through Literature Mining and Genetic Algorithms," in Z.R. Yang et al. (Eds.): Intelligent Data Engineering and Automated Learning - IDEAL 2004. LNCS 3177, Springer Verlag, pp. 345-351, 2004
We present an approach to classification of biomedical terms based on the information acquired automatically from the corpus of relevant literature. The learning phase consists of two stages: acquisition of terminologically relevant contextual patterns (CPs) and selection of classes that apply to terms used with these patterns. CPs represent a generalisation of similar term contexts in the form of regular expressions containing lexical, syntactic and terminological information. The most probable classes for the training terms co-occurring with the statistically relevant CP are learned by a genetic algorithm. Term classification is based on the learnt results. First, each term is associated with the most frequently co-occurring CP. Classes attached to such CP are initially suggested as the term's potential classes. Then, the term is finally mapped to the most similar suggested class.
G. Nenadic, I. Spasic, and S. Ananiadou. - "Reducing Lexical Ambiguity in Serbo-Croatian by Using Genetic Algorithms," in P. Kosta et al. (Eds.): Investigations into Formal Slavic Linguistics. Linguistik International, Peter Lang, Frankfurt, pp. 287-298, 2003
This paper presents an approach to the acquisition of some lexical and grammatical constraints from large corpora using genetic algorithms. The main aim is to use these constraints to automatically define local grammars that can be used to reduce the lexical ambiguity usually found in an initially tagged text. A genetic algorithm for computation of the minimal representation of grammatical features of textual constituents is suggested. The algorithm incorporates two types of genes, dominant and recessive, which are specific to the features that are analysed. The resulting genetic structure describes the constraints that have to be fulfilled in order to form a correct utterance. As a case study, the suggested algorithm is applied to contexts of prepositional phrases, and features of the corresponding noun phrases are obtained. The results obtained coincide with (theoretical) grammars that define the constraints for such noun phrases.
G. Nenadic, I. Spasic, and S. Ananiadou. - "Term Clustering using a Corpus-Based Similarity Measure," in P. Sojka et al. (Eds.): Text, Speech and Dialogue - TSD 2002. LNAI 2448, Springer Verlag, pp. 151-154, 2002 [full paper]
In this paper we present a method for automatic term clustering. The method uses a hybrid similarity measure to cluster terms automatically extracted from a corpus by applying the C/NC-value method. The measure comprises contextual, functional and lexical similarity, and it is used to instantiate the cell values in a similarity matrix. The clustering algorithm uses either the nearest-neighbour or Ward's method to calculate the distance between clusters. The approach has been tested and evaluated in the domain of molecular biology, and the results are presented.
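The clustering step described in this abstract can be illustrated by a minimal agglomerative procedure using the nearest-neighbour (single-link) criterion over a similarity matrix. The term pairs, similarity scores and merging threshold below are invented for illustration and are not taken from the paper.

```python
def single_linkage(labels, sim, threshold):
    """Agglomerative clustering with the nearest-neighbour (single-link)
    criterion: repeatedly merge the two most similar clusters until no
    inter-cluster similarity reaches the threshold."""
    clusters = [{label} for label in labels]

    def cluster_sim(c1, c2):
        # Single-link: similarity of the closest pair of members.
        return max(sim[a][b] for a in c1 for b in c2)

    while len(clusters) > 1:
        i, j = max(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda p: cluster_sim(clusters[p[0]], clusters[p[1]]))
        if cluster_sim(clusters[i], clusters[j]) < threshold:
            break
        clusters[i] |= clusters[j]
        del clusters[j]
    return clusters

# Invented similarity scores between three biomedical terms:
sim = {
    "protein kinase":  {"protein kinase": 1.0, "tyrosine kinase": 0.8, "cell cycle": 0.1},
    "tyrosine kinase": {"protein kinase": 0.8, "tyrosine kinase": 1.0, "cell cycle": 0.2},
    "cell cycle":      {"protein kinase": 0.1, "tyrosine kinase": 0.2, "cell cycle": 1.0},
}
clusters = single_linkage(list(sim), sim, threshold=0.5)
# The two kinase terms are merged; "cell cycle" remains a singleton cluster.
```

The paper's measure would supply the matrix entries; Ward's method differs only in how `cluster_sim` scores a candidate merge.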
I. Spasic, G. Nenadic, K. Manios, and S. Ananiadou. - "Supervised Learning of Term Similarities," in Hujun Yin et al. (Eds.): Intelligent Data Engineering and Automated Learning - IDEAL 2002. LNCS 2412, Springer Verlag, pp. 429-434, 2002 [full paper]
In this paper we present a method for the automatic discovery and tuning of term similarities. The method is based on the automatic extraction of significant patterns in which terms tend to appear. In addition, we use lexical and functional similarities between terms to define a hybrid similarity measure as a linear combination of the three similarities. We then present a genetic algorithm approach to supervised learning of the parameters used in this linear combination. We used a domain-specific ontology to evaluate the generated similarity measures and set the direction of their convergence. The approach has been tested and evaluated in the domain of molecular biology.
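The hybrid measure described above is a weighted sum of three component similarities. A minimal sketch follows; the component scores and weights are illustrative assumptions, and the genetic-algorithm tuning of the weights is omitted.

```python
def hybrid_similarity(components, weights):
    """Hybrid term similarity as a linear combination of contextual,
    lexical and functional similarity scores, each assumed to lie in
    [0, 1]. The weights would be learnt, e.g. by a genetic algorithm
    evaluated against a domain-specific ontology."""
    return sum(w * s for w, s in zip(weights, components))

# Illustrative scores for one term pair: (contextual, lexical, functional)
scores = (0.8, 0.4, 0.6)
weights = (0.5, 0.2, 0.3)   # hypothetical learnt weights, summing to 1
print(round(hybrid_similarity(scores, weights), 2))  # 0.66
```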
G. Nenadic, and I. Spasic. - "The Recognition and Acquisition of Compound Names from Corpora," in D. Christodoulakis (Ed.): Natural Language Processing - NLP 2000. LNAI 1835, pp. 38-48, Springer Verlag, 2000 [full paper]
In this paper we present an approach to the acquisition of some classes of compound words from large corpora, as well as a method for semi-automatic generation of appropriate linguistic models that can be further used for compound word recognition and for the completion of compound word dictionaries. The approach is intended for a highly inflective language such as Serbo-Croatian. The generated linguistic models are represented by local grammars.
I. Spasic, and G. Pavlovic-Lazetic. - "Syntactic Structures in a Sublanguage of Serbian for Querying Relational Databases," in G. Zybatow et al. (Eds.): Current Issues in Formal Slavic Linguistics. Peter Lang, Frankfurt/Main, pp. 478-488, 1999
This paper deals with syntactic structures identified in a sublanguage of Serbian for querying relational databases. Three levels of syntactic description of the sublanguage are defined: the word, syntagmatic, and sentence levels. An algorithm for complete syntactic analysis of a Serbian-language query over a relational database and its translation into a formal SQL query is presented. An example of partial parsing and translation is discussed.
G. Nenadic, and I. Spasic. - "The Acquisition of Some Lexical Constraints from Corpora," in V. Matousek et al. (Eds.): Text, Speech and Dialogue - TSD 1999. LNAI 1692, Springer Verlag, pp. 115-120, 1999 [full paper]
This paper presents an approach to the acquisition of some lexical and grammatical constraints from large corpora. The constraints discussed are related to the grammatical features of a preposition and the corresponding noun phrase that together constitute a prepositional phrase. The approach is based on the extraction of the textual environment of a preposition from a corpus, which is then tagged using the system of electronic dictionaries. An algorithm for the computation of a minimal representation of the grammatical features associated with the corresponding noun phrases is suggested. The resulting set of features describes the constraints that a noun phrase has to fulfil in order to form a correct prepositional phrase with a given preposition. This set can be checked against other corpora.


Conference papers:

I. Spasic and S. Ananiadou. - "A Flexible Measure of Contextual Similarity for Biomedical Terms," in Proceedings of Pacific Symposium on Biocomputing (PSB 2005), Hawaii, USA, 2005 [full paper]
We present a measure of contextual similarity for biomedical terms. The contextual features need to be explored, because newly coined terms are not explicitly described and efficiently stored in biomedical ontologies and their inner features (e.g. morphologic or orthographic) do not always provide sufficient information about the properties of the underlying concepts. The context of each term can be represented as a sequence of syntactic elements annotated with biomedical information retrieved from an ontology. The sequences of contextual elements may be matched approximately by edit distance defined as the minimal cost incurred by the changes (including insertion, deletion and replacement) needed to transform one sequence into the other. Our approach augments the traditional concept of edit distance by elements of linguistic and biomedical knowledge, which together provide flexible selection of contextual features and their comparison.
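The edit-distance comparison described in this abstract can be sketched as standard dynamic programming over annotated context sequences. The feature representation and the unit costs below are simplifying assumptions, not the paper's actual cost model, which augments the costs with linguistic and biomedical knowledge.

```python
def edit_distance(a, b, subst_cost=lambda x, y: 0 if x == y else 1):
    """Minimal total cost of insertions, deletions and replacements
    needed to transform sequence a into sequence b (dynamic programming)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i                      # delete all of a[:i]
    for j in range(1, n + 1):
        d[0][j] = j                      # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + 1,                                   # deletion
                d[i][j - 1] + 1,                                   # insertion
                d[i - 1][j - 1] + subst_cost(a[i - 1], b[j - 1]),  # replacement
            )
    return d[m][n]

# Contexts as sequences of (syntactic role, semantic class) annotations
# (a made-up representation for the sake of the example):
left = [("V", "activate"), ("N", "protein")]
right = [("V", "activate"), ("N", "gene")]
print(edit_distance(left, right))  # 1: one replacement
```

Passing a graded `subst_cost` (e.g. cheaper substitutions between ontologically related classes) is where the paper's flexibility would enter.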
I. Spasic, G. Nenadic, and S. Ananiadou. - "Using Domain-Specific Verbs for Term Classification," in Proceedings of ACL Workshop on Natural Language Processing in Biomedicine, Sapporo, Japan, pp. 17-24, 2003 [full paper]
In this paper we present an approach to term classification based on verb complementation patterns. The complementation patterns have been automatically learnt by combining information found in a corpus and an ontology, both belonging to the biomedical domain. The learning process is unsupervised and has been implemented as an iterative reasoning procedure based on a partial order relation induced by the domain-specific ontology. First, term recognition was performed by both looking up the dictionary of terms listed in the ontology and applying the C/NC-value method. Subsequently, domain-specific verbs were automatically identified in the corpus. Finally, the classes of terms typically selected as arguments for the considered verbs were induced from the corpus and the ontology. This information was used to classify newly recognised terms. The precision of the classification method reached 64%.
G. Nenadic, S. Rice, I. Spasic, S. Ananiadou, and B. Stapley. - "Selecting Text Features for Gene Name Classification: from Documents to Terms," in Proceedings of ACL Workshop on Natural Language Processing in Biomedicine, Sapporo, Japan, pp. 121-128, 2003 [full paper]
In this paper we discuss the performance of a text-based classification approach by comparing different types of features. We consider the automatic classification of gene names from the molecular biology literature, by using a support-vector machine method. Classification features range from words, lemmas and stems, to automatically extracted terms. Also, simple co-occurrences of genes within documents are considered. The preliminary experiments performed on a set of 3,000 S. cerevisiae gene names and 53,000 Medline abstracts have shown that using domain-specific terms can improve the performance compared to the standard bag-of-words approach, in particular for genes classified with higher confidence, and for under-represented classes.
G. Nenadic, I. Spasic, and S. Ananiadou. - "Morpho-Syntactic Clues for Terminological Processing in Serbian," in Proceedings of EACL Workshop on Morphological Processing of Slavic Languages, Budapest, Hungary, pp. 79-86, 2003
In this paper we discuss morpho-syntactic clues that can be used to facilitate terminological processing in Serbian. A method (called srCe) for automatic extraction of multiword terms is presented. The approach incorporates a set of generic morpho-syntactic filters for recognition of term candidates, a method for conflation of morphological variants and a module for foreign word recognition. Morpho-syntactic filters describe general term formation patterns, and are implemented as generic regular expressions. The inner structure together with the agreements within term candidates are used as clues to discover the boundaries of nested terms. The results of the terminological processing of a textbook corpus in the domains of mathematics and computer science are presented.
I. Spasic, G. Nenadic, K. Manios, and S. Ananiadou. - "An Integrated Term-Based Corpus Query System," in Proceedings of 10th Conference of the European Chapter of the Association for Computational Linguistics, Budapest, Hungary, pp. 243-250, 2003 [full paper]
In this paper we describe the X-TRACT workbench, which enables efficient term-based querying against a domain-specific literature corpus. Its main aim is to aid domain specialists in locating and extracting new knowledge from scientific literature corpora. Before querying, a corpus is automatically terminologically analysed by the ATRACT system, which performs terminology recognition based on the C/NC-value method enhanced by incorporation of term variation handling. The results of terminology processing are annotated in XML, and the produced XML documents are stored in an XML-native database. All corpus retrieval operations are performed against this database using an XML query language. We illustrate the way in which the X-TRACT workbench can be utilised for knowledge discovery, literature mining and conceptual information extraction.
G. Nenadic, I. Spasic, and S. Ananiadou. - "Terminology-Driven Mining of Biomedical Literature," in Proceedings of 18th Annual ACM Symposium on Applied Computing, Melbourne, Florida, USA, 2003
Motivation: With an overwhelming amount of textual information in molecular biology and biomedicine, there is a need for effective literature mining techniques that can help biologists to gather and make use of the knowledge encoded in text documents. Although the knowledge is organised around sets of domain-specific terms, few literature mining systems incorporate deep and dynamic terminology processing.

Results: In this paper, we present an overview of an integrated framework for terminology-driven mining from biomedical literature. The framework integrates the following components: automatic term recognition, term variation handling, acronym acquisition, automatic discovery of term similarities and term clustering. Term variant recognition is incorporated into the terminology recognition process by taking into account orthographical, morphological, syntactic, lexico-semantic and pragmatic term variations. In particular, we address acronyms as a common way of introducing term variants in biomedical papers. Term clustering is based on the automatic discovery of term similarities. We use a hybrid similarity measure, where terms are compared by using both internal and external evidence. The measure combines lexical, syntactic and contextual similarity. Experiments on terminology recognition and structuring performed on a corpus of biomedical abstracts recorded a precision of 98% and 71%, respectively.
G. Nenadic, I. Spasic, and S. Ananiadou. - "Automatic Discovery of Term Similarities Using Pattern Mining," in Proceedings of Second International Workshop on Computational Terminology - CompuTerm 2002, Taipei, Taiwan, pp. 43-49, 2002 [full paper]
Term recognition and clustering are key topics in automatic knowledge acquisition and text mining. In this paper we present a novel approach to the automatic discovery of term similarities, which serves as a basis for both classification and clustering of domain-specific concepts represented by terms. The method is based on the automatic extraction of significant patterns in which terms tend to appear. The approach is domain independent: it needs no manual description of domain-specific features and it is based on knowledge-poor processing of specific term features. However, the automatically collected patterns are domain specific and identify significant contexts in which terms are used. Besides features that represent contextual patterns, we use lexical and functional similarities between terms to define a combined similarity measure. The approach has been tested and evaluated in the domain of molecular biology, and preliminary results are presented.
S. Ananiadou, G. Nenadic, D. Schuhmann, and I. Spasic. - "Term-based Literature Mining from Biomedical Texts," ISMB Text Data Mining SIG, Edmonton, Canada, 2002

I. Spasic, G. Nenadic, and S. Ananiadou. - "Tuning Context Features with Genetic Algorithms," in Proceedings of 3rd International Conference on Language, Resources and Evaluation, Las Palmas, Spain, pp. 2048-2054, 2002
In this paper we present an approach to tuning of context features acquired from corpora. The approach is based on the idea of a genetic algorithm (GA). We analyse a whole population of contexts surrounding related linguistic entities in order to find a generic property characteristic of such contexts. Our goal is to tune the context properties so as not to lose any correct feature values, but also to minimise the presence of ambiguous values. The GA implements a crossover operator based on dominant and recessive genes, where a gene corresponds to a context feature. A dominant gene is the one that, when combined with another gene of the same type, is inevitably reflected in the offspring. Dominant genes denote the more suitable context features. In each iteration of the GA, the number of individuals in the population is halved, finally resulting in a single individual that contains context features tuned with respect to the information contained in the training corpus. We illustrate the general method by using a case study concerned with the identification of relationships between verbs and terms complementing them. More precisely, we tune the classes of terms that are typically selected as arguments for the considered verbs in order to acquire their semantic features.
G. Nenadic, I. Spasic, and S. Ananiadou. - "Automatic Acronym Acquisition and Management within Domain-Specific Texts," in Proceedings of 3rd International Conference on Language, Resources and Evaluation, Las Palmas, Spain, pp. 2155-2162, 2002
In this paper we present a framework for the effective management of terms and their variants that are automatically acquired from domain-specific texts. In our approach, the term variant recognition is incorporated in the automatic term retrieval process by taking into account orthographical, morphological, syntactic, lexico-semantic and pragmatic term variations. In particular, we address acronyms as a common way of introducing term variants in scientific papers. We describe a method for the automatic acquisition of newly introduced acronyms and the mapping to their 'meanings', i.e. the corresponding terms. The proposed three-step procedure is based on morpho-syntactic constraints that are commonly used in acronym definitions. First, acronym definitions containing an acronym and the corresponding term are retrieved. These two elements are matched in the second step by performing morphological analysis of words and combining forms constituting the term. The problems of acronym variation and acronym ambiguity are addressed in the third step by establishing classes of term variants that correspond to specific concepts. We present the results of the acronym acquisition in the domain of molecular biology: the precision of the method ranged from 94% to 99% depending on the size of the corpus used for evaluation, whilst the recall was 73%.
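The core matching of an acronym against a candidate expansion can be approximated by a greedy initial-letter check. This one-step sketch stands in for the paper's three-step morpho-syntactic procedure and deliberately ignores the morphological analysis of words and combining forms; the example strings are invented.

```python
import re

def match_acronym(acronym, term):
    """Greedily check that each acronym letter begins some word of the
    candidate term, scanning the words left to right."""
    words = re.split(r"[\s-]+", term.lower())
    i = 0
    for letter in acronym.lower():
        # Advance to the next word beginning with this letter.
        while i < len(words) and not words[i].startswith(letter):
            i += 1
        if i == len(words):
            return False
        i += 1
    return True

print(match_acronym("TIMS", "tag information management system"))  # True
print(match_acronym("NLP", "natural language processing"))         # True
print(match_acronym("TIMS", "terminology mining"))                 # False
```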
I. Spasic, G. Nenadic, and S. Ananiadou. - "A Genetic Algorithm Approach to Unsupervised Learning of Context Features," in Proceedings of 5th National Colloquium for Computational Linguistics in the UK, University of Leeds, UK, January 08-09, pp. 12-19, 2002
We present an approach to unsupervised learning of some context features from corpora. The approach uses the idea of genetic algorithms. The algorithm operates on a collection of related linguistic entities as opposed to an isolated linguistic entity. Each of the entities encodes the values for a predefined set of context features obtained by automatic tagging. Our goal is to refine these features in order to find an interpretation that is optimal in the sense that it does not lose any correct feature values, but which, on the other hand, minimises the presence of feature values that are not applicable in a specific context. Our genetic algorithm implements a novel crossover operator based on two types of genes, dominant and recessive, where a gene corresponds to a context feature.
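One possible reading of the dominant/recessive crossover is sketched below: each gene holds a set of candidate feature values, dominant values survive whenever either parent carries them, and recessive values survive only when both parents agree. The set semantics, feature values and population are illustrative assumptions, not the paper's exact operator.

```python
import random

def crossover(parent_a, parent_b, dominant):
    """Gene-by-gene combination of two individuals. Each gene is a set of
    candidate feature values; dominant values are inherited from either
    parent, recessive values only when both parents share them."""
    child = []
    for g_a, g_b in zip(parent_a, parent_b):
        dom = (g_a | g_b) & dominant   # dominant: union over the parents
        rec = (g_a & g_b) - dominant   # recessive: intersection only
        child.append(dom | rec)
    return child

def reduce_population(population, dominant):
    """Halve the population in each generation (size assumed a power of
    two) until a single tuned individual remains."""
    while len(population) > 1:
        random.shuffle(population)
        population = [crossover(population[i], population[i + 1], dominant)
                      for i in range(0, len(population) - 1, 2)]
    return population[0]

# Toy example: one gene holding candidate grammatical-case values,
# with "acc" marked as the dominant value.
population = [[{"nom", "acc"}], [{"acc", "gen"}], [{"acc"}], [{"acc", "dat"}]]
print(reduce_population(population, dominant={"acc"}))  # [{'acc'}]
```

Here every pairing preserves the shared dominant value and discards values attested in only one context, which matches the stated goal of minimising inapplicable feature values.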
D. Pavlicic, and I. Spasic. - "The Effects of Irrelevant Alternatives on the Results of the TOPSIS Method," in Proceedings of XXVIII Yugoslav Symposium on Operational Research SYM-OP-IS 2001, Belgrade, Yugoslavia, November, 2001

I. Spasic, and G. Pavlovic-Lazetic. - "Object-Oriented Modelling in Natural Language Communication with a Relational Database," in Selected Papers from 10th Congress of Yugoslav Mathematicians, Belgrade, Yugoslavia, January 21-24, pp. 343-347, 2001
This paper describes the problems of developing a natural language interface towards a relational database (RDB). These problems depend on a particular database, or, more precisely, on a specific semantic domain that is modeled by the RDB. The most obvious dependency is the one reflected in the structure of the RDB, that is - the actual tables, attributes and their relationships. This information is recorded in the RDB catalogue, which can be used for the automatic generation of an OO model of the RDB. The classes of that model may serve the purpose of supporting the information extracted from a natural language query (NLQ). Possible ambiguities are gradually reduced by using the IsA relationships between the classes. If this still leaves the ambiguity unresolved, then it is possible to automatically generate a menu corresponding to the class that is the source of the ambiguity. The structure of the menu is in accordance with the OO model of the RDB.
O. Boskovic, and I. Spasic. - "Graph Theory and Log-Linear Models," in Proceedings of XXVI Yugoslav Symposium on Operational Research SYM-OP-IS '99, Belgrade, Yugoslavia, November 4-6, 1999

I. Spasic. - "Automatic Foreign Words Recognition in a Serbian Scientific or Technical Text," in Proceedings of Conference on Standardization of Terminology, Serbian Academy of Arts and Sciences, Belgrade, Yugoslavia, 1996



Presentations:

G. Nenadic, I. Spasic, and S. Ananiadou. - "What Can Be Learnt and Acquired from Non-disambiguated Corpora: A Case Study in Serbian," 7th TELRI Seminar, Dubrovnik, Croatia, 2002

D. Pavlicic, and I. Spasic. - "The Effects of Irrelevant Alternatives on Decision Making Results," The European Operational Research Conference EURO 2001, Rotterdam, The Netherlands, July 9-11, 2001
The paper deals with the effects of an irrelevant alternative on the results of Multiple Attribute Decision Making (MADM) methods. By an irrelevant alternative (IA) we denote an alternative which, although not dominated by any other alternative from the observed set, is worse than each of them in the binary comparisons made by a MADM method. We observe the problem of sequential choices from a fixed group of objects, made by using the same criteria with constant weights during the observed period. The effects of changes in the attribute values of an IA on the final choices are examined. Several conditions of consistent choice of MADM methods concerning an IA are defined: Independence of Worsening of an IA, Independence of Completely Negligible Improvement of an IA, and Independence of Partially Negligible Improvement of an IA. The ELECTRE method is chosen as an illustration and it is shown that (when based on vector-normalised ratings, and not on utilities) the method violates all three conditions. Finally, we conclude that the main cause of inconsistent choices is the vector normalisation of empirical data, conducted in the first step of the method.


Technical reports:

I. Spasic. - "Automatic Term Extraction in Biomedicine," in Technical Reports in Computer Science, ISSN 1476-3060, Report No. 03/01, School of Sciences, University of Salford, p. 46, 2003

I. Spasic. - "An Overview of Case-Based Reasoning," in Technical Reports in Computer Science, ISSN 1476-3060, Report No. 02/01, School of Sciences, University of Salford, p. 75, 2002



Books:

I. Spasic, and P. Janicic. - "Theory of Algorithms, Languages and Automata." Faculty of Mathematics, Belgrade, 2000

M. Ivovic, B. Boricic, D. Azdejkovic, and I. Spasic. - "Practice Book in Mathematics." Faculty of Economics, Belgrade, 1998

M. Ivovic, B. Boricic, V. Pavlovic, D. Azdejkovic, and I. Spasic. - "Mathematics through Examples and Exercises with Elements of Theory." Faculty of Economics, Belgrade, 1996