December 2006: Multi-Criteria Decision Making

Introduction

A multi-criteria decision problem usually involves selecting among a number of alternatives on the basis of their suitability against a set of criteria. The criteria will normally be weighted in terms of their importance to the decision maker, since criteria are rarely of equal importance. When a suitable process is applied to the problem, the alternatives can be rated and ranked according to those preferences.

MCDM

In general terms, decision making in the presence of multiple, potentially conflicting criteria can be broadly classified as one of two types:

  • The selection of an alternative or alternatives from a menu or catalogue based on prioritised attributes of the alternatives (i.e. Multiple Attribute Decision Making or MADM), and
  • Synthesis of an alternative or alternatives on the basis of prioritised objectives (Multiple Objective Decision Making or MODM).

The following figure shows the relationships between these concepts.

Fig 1. Multiple Criteria Decision Making

Selecting by criteria

A criterion can be thought of as any measure of performance for a particular design choice. When there is a set of alternative designs to choose from, the choice is most conveniently made by using some form of static or moving weights to represent the contribution of the various attributes of the alternatives.

Selection problems can thus be thought of as multiple attribute decision making (MADM) problems. A common scheme compares alternatives on the basis of a weighted sum of normalised attributes. Normalising the attributes ensures their comparability, as otherwise attributes with large numerical values would make disproportionate contributions to the overall score. Utility functions may be visualised as moving weights, so that the relative contributions made by different attributes to the ranking of alternatives change with the attribute values themselves.
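As a concrete illustration, the sketch below (in Python, with purely illustrative attribute values, weights and names rather than data from any EDC study) scores a handful of alternatives by min-max normalising each attribute and taking the weighted sum.

import numpy as np

# Illustrative decision matrix: rows are alternatives, columns are attributes.
# The values, weights and attribute names here are assumptions for the sketch.
scores_matrix = np.array([
    [250.0, 12.0, 7.5],   # alternative A: cost, weight, performance
    [300.0, 10.0, 9.0],   # alternative B
    [220.0, 15.0, 6.0],   # alternative C
])
weights = np.array([0.5, 0.3, 0.2])        # relative importance of each attribute
benefit = np.array([False, False, True])   # True where a higher value is better

# Min-max normalise each column to [0, 1] so attributes on different scales
# contribute comparably to the overall score.
col_min = scores_matrix.min(axis=0)
col_max = scores_matrix.max(axis=0)
norm = (scores_matrix - col_min) / (col_max - col_min)
norm[:, ~benefit] = 1.0 - norm[:, ~benefit]   # invert cost-type attributes

overall = norm @ weights                   # weighted sum for each alternative
ranking = overall.argsort()[::-1]          # indices of alternatives, best first
print(overall, ranking)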

Objectives

... problems of synthesis are largely about meeting objectives

When there is no list of solutions to choose from but only a list of requirements to meet, it is appropriate to think in terms of objectives. An attribute with a direction is an objective: cost and weight are attributes, while the aims of minimising cost and minimising weight are objectives. Since problems of synthesis are largely about meeting objectives, prioritised according to the relative importance assigned to them by the Decision Maker (DM), design or synthesis problems can be thought of as Multiple Objective Decision Making (MODM) problems.

Fig 2. Comparison of scores given to six options

Pursuing the terminology a little further, suppose the thresholds of the objectives are flexible, in the sense that the requirements represent aspirations (e.g. a non-statutory requirement relating to some desirable but non-crucial aspect of performance) rather than hard bounds. The decision problem then reduces to a format that is most conveniently handled by techniques such as goal programming, in which multiple objectives are addressed by minimising the weighted sum of deviations from stated goals or threshold values of performance.
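As a hedged sketch of that idea (the design variables, goals and weights below are invented for illustration and the example uses SciPy's linear programming routine), a simple goal programme can be set up by introducing a deviation variable for each goal and minimising their weighted sum.

import numpy as np
from scipy.optimize import linprog

# Two hypothetical design variables x1, x2 with two linear performance measures:
#   cost   = 4*x1 + 3*x2, with a goal of at most 20
#   output = 2*x1 + 5*x2, with a goal of at least 40
# d_cost absorbs any cost overshoot, d_out absorbs any output shortfall, and the
# weighted sum of these deviations is minimised.
# Decision vector: [x1, x2, d_cost, d_out]
c = np.array([0.0, 0.0, 0.6, 0.4])        # only the deviations are penalised

A_ub = np.array([
    [ 4.0,  3.0, -1.0,  0.0],             #  cost - d_cost <= 20
    [-2.0, -5.0,  0.0, -1.0],             # -(output) - d_out <= -40
])
b_ub = np.array([20.0, -40.0])
bounds = [(0, None)] * 4                   # all variables non-negative

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)       # chosen design and the deviations from each goal
print(res.fun)     # total weighted deviation at the optimum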

MCDM approach

The MCDM approach, by addressing the DM’s priorities, makes the underlying trade-offs between criteria transparent and easy to manipulate, and this can often lead to better decisions overall.

Techniques for weight assignment

Different techniques for weight assignment can be found in the literature. Here we briefly introduce a sample of weight assignment techniques which have been adopted in the decision support software developed by the EDC: the Minimal Information Trade-off Assessment (MITA) method, the Minimal PAirwise Comparison (MIPAC) method and the Entropy method.

Minimal pairwise comparison

In engineering design, the DM may only be capable of providing a combination of exact and vague pairwise comparisons of attributes. For instance, they may assert that one attribute is at least twice as important as another.

The Minimal Information Trade-off Assessment (MITA) method can accommodate both exact and vague pairwise comparisons and therefore it provides a more flexible way of acquiring and representing preference information. To assign weights, the MITA method can use as much preference information as the DM can comfortably provide.

Unfortunately, this approach neither defines the minimal information requirement formally nor provides a systematic way to guide the DM in preparing preference information.

Another method, referred to as the Minimal PAirwise Comparison (MIPAC) method, uses a systematic procedure to acquire and represent preference information. In this method, relative weights of attributes can be initially assigned on the basis of a minimal number of exact and/or vague pairwise comparisons. The minimum comparison set may then be revised or extended if the DM is not satisfied with the initial weight assignment and can provide more useful information.

The minimum set of complete comparisons is defined as follows:

Suppose there are n attributes. The minimum set of complete pairwise comparisons for the n attributes is composed of (n-1) pairwise comparisons, in which each of the n attributes must be compared with at least one of the other attributes and no single comparison or a subset of the comparisons is isolated from the other comparisons.
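For instance (with hypothetical attribute names and ratios, and handling only exact comparisons, whereas MIPAC also accepts vague ones), three chained comparisons are enough to determine relative weights for four attributes:

# Hypothetical example: four attributes and the minimum of three pairwise
# comparisons, forming a connected chain with no isolated subset.
comparisons = [
    ("cost", "weight", 2.0),         # cost is twice as important as weight
    ("weight", "reliability", 1.5),  # weight is 1.5 times as important as reliability
    ("reliability", "noise", 3.0),   # reliability is 3 times as important as noise
]

# Propagate ratios from an arbitrary reference attribute; this single pass
# assumes each comparison involves an attribute whose weight is already known.
weights = {"cost": 1.0}
for a, b, ratio in comparisons:
    if a in weights:
        weights[b] = weights[a] / ratio
    elif b in weights:
        weights[a] = weights[b] * ratio

# Normalise so the weights sum to one.
total = sum(weights.values())
weights = {name: w / total for name, w in weights.items()}
print(weights)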

Entropy Method

The idea of entropy can be particularly useful in investigating contrasts in discrimination power between sets of data. Since each outcome carries a certain amount of information, the information content of the normalised outcomes of an attribute can be measured by an entropy value; attributes whose outcomes differ strongly between alternatives have low entropy and are therefore given correspondingly larger weights.
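A sketch of the usual entropy weighting calculation is given below; the decision matrix is invented for illustration and is not the EDC case-study data.

import numpy as np

# Illustrative decision matrix: rows are alternatives, columns are attributes.
matrix = np.array([
    [250.0, 12.0, 7.5],
    [300.0, 10.0, 9.0],
    [220.0, 15.0, 6.0],
    [270.0, 11.0, 8.0],
])
m, n = matrix.shape

# Normalise each attribute column so its outcomes sum to one.
p = matrix / matrix.sum(axis=0)

# Entropy of each attribute's normalised outcomes, scaled to lie in [0, 1].
entropy = -(p * np.log(p)).sum(axis=0) / np.log(m)

# Attributes with lower entropy discriminate more between the alternatives
# and therefore receive larger weights.
divergence = 1.0 - entropy
weights = divergence / divergence.sum()
print(weights)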

Comparison

In the following, comparisons of the scores and ranks of six alternatives are presented for a case study carried out by the EDC. These comparisons demonstrate that the different methods of assigning weights, MIPAC, Entropy and their combination, may have an impact on the final outcome.

Figure 2 shows the comparison of the scores given to the options according to the above weighting methods.

Fig 3. Ranking the options

Figure 3 presents the ranking of the options calculated by the three methods. It shows clearly that the rankings differ from one method to another.

Author: Farhad Kenevissi

Contact: farhad.kenevissi@ncl.ac.uk