Which of the following terms means that decision makers choose the first solution alternative that meets minimal decision criteria?

Decision Making: Nonrational Theories

G. Gigerenzer, in International Encyclopedia of the Social & Behavioral Sciences, 2001

4.1 Aspiration Level Theories

Aspiration level theories assume that an agent has an aspiration level, which is either a value on a goal variable (e.g., profit or market share) or, in the case of multiple goals, a vector of goal values that is satisfactory to the agent. When choosing among a large (possibly even infinite) set of alternatives, agents search until they find the first alternative that meets or surpasses their aspiration level, at which point search stops and that alternative is chosen. For instance, agents might set a lower limit on the price at which they would be willing to sell their shares in a company (the aspiration level). In this satisficing model, the agent makes no attempt to calculate an optimal stopping point, in this case, the best day on which to sell. The aspiration level need not be fixed, but can be dynamically adjusted to feedback. For instance, if the investor observes that the share price is monotonically increasing rather than fluctuating over time, they might conclude that there is some stable trend and adjust the limit upward. Thus, aspiration level theories model decision making as a dynamic process in which alternatives are encountered sequentially and aspiration levels stop search. The challenge is to understand where aspiration levels come from in the first place (Simon 1982, Selten 1998).
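In code, this satisficing rule is just a sequential scan with an early stop. The sketch below is illustrative only and not drawn from the source: the function names, the window-based trend test, and the 5% step are assumptions. `satisfice` accepts the first offer meeting a fixed aspiration level; `satisfice_adaptive` raises the level when recent offers increase monotonically, as in the investor example.

```python
def satisfice(offers, aspiration):
    """Return the first offer that meets or surpasses the aspiration level."""
    for offer in offers:
        if offer >= aspiration:
            return offer          # search stops at the first satisfactory alternative
    return None                   # no alternative met the aspiration level

def satisfice_adaptive(offers, aspiration, window=3, step=0.05):
    """Satisficing with a dynamically adjusted aspiration level (assumed rule).

    If the last `window` offers are monotonically increasing, the agent
    infers a stable upward trend and raises the limit by `step` (relative).
    """
    recent = []
    for offer in offers:
        if offer >= aspiration:
            return offer, aspiration
        recent.append(offer)
        if len(recent) >= window and all(
            a < b for a, b in zip(recent[-window:], recent[-window + 1:])
        ):
            aspiration *= 1 + step   # adjust the aspiration level upward
    return None, aspiration

# Example: share prices observed day by day; sell at the first price >= 105.
prices = [100, 101, 103, 104, 106, 102]
print(satisfice(prices, 105))            # -> 106
print(satisfice_adaptive(prices, 105))   # -> (None, ~121.55): the rising trend
                                         #    raised the limit above every offer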


URL: https://www.sciencedirect.com/science/article/pii/B0080430767016120

Decision-Making Approaches

Theodor J. Stewart, in Encyclopedia of Information Systems, 2003

IV.C. Goal and Reference Point Approaches

Goal Programming is a separate article in this encyclopedia. It is, however, useful to summarize some key concepts within the broader multicriteria decision-making framework discussed here.

Goal and reference point approaches are used primarily when the criteria are associated with quantifiable attributes $z_i(a)$ and are thus possibly most appropriate to technical phases of analysis (i.e., in order to shortlist alternatives for more detailed evaluation according to qualitative, intangible, and subjective criteria). The principle is quite simple. Instead of evaluating tradeoffs and weights (as in Section IV.B), the user simply specifies some desirable goals or aspirations, one for each criterion. These aspirations define, in a sense, a prima facie assessment by the user of what would constitute a realistically desirable outcome.

Let $g_i$ be a goal or aspiration level specified for criterion i. The interpretation of $g_i$ will depend on the manner in which the corresponding attribute is defined:

Maximizing sense: If the attribute is defined such that larger values of $z_i(a)$ are preferred to smaller values, all other things being equal (typically some form of “benefit” measure), then the implied aim is to achieve $z_i(a) \ge g_i$. Once this value is achieved, further gains in $z_i(a)$ are of much less importance.

Minimizing sense: If the attribute is defined such that smaller values of $z_i(a)$ are preferred to larger values, all other things being equal (typically some form of “cost” measure), then the implied aim is to achieve $z_i(a) \le g_i$. Once this value is achieved, further reductions in $z_i(a)$ are of much less importance.

Sometimes planners like to target some form of intermediate desirable value, such as a water temperature that should be neither too hot nor too cold. In this case, values of $z_i(a)$ in the vicinity of the target value $g_i$ are desirable, with greater deviations on either side to be avoided. Since the reasons for avoiding deviations in each direction will generally be different, it is usually convenient to define two separate criteria (“not too hot” and “not too cold”), each using the same attribute, but with different aspiration levels. For example, if the desired temperature range is 15–18°C, then the goal for the not-too-cold criterion will be temperature ≥ 15°C, while that for the not-too-hot criterion will be temperature ≤ 18°C. Thus, for the purposes of further explanation, we shall assume that all attributes are defined in one of the two senses given in the bulleted items above.

The general thrust of the so-called goal programming or reference point approaches to MCDA is based firstly on defining deviational variables $\delta_i(a)$ corresponding to the performance of each alternative in terms of each criterion, measuring the extent to which the goal is not met by alternative a, that is,

$$\delta_i(a) = \max\{0,\; g_i - z_i(a)\}$$

for attributes defined in a maximizing sense and

$$\delta_i(a) = \max\{0,\; z_i(a) - g_i\}$$

for attributes defined in a minimizing sense.

Algebraically (for purposes of inclusion in mathematical programming code), the deviational variables may be defined implicitly via constraints of the form:

$$z_i(a) + \delta_i(a) \ge g_i$$

for attributes defined in a maximizing sense and

$$z_i(a) - \delta_i(a) \le g_i$$

for attributes defined in a minimizing sense, linked to some process which minimizes all deviations as far as is possible.

The key question at this stage relates to what is meant by minimizing all deviations. Very often, a simple and effective approach is to choose the alternative for which the sum of (possibly weighted) deviations is minimized. This is the basis of conventional goal programming. Without going into any detailed review at this stage, it is this author's view that a more robust approach is to use the so-called Tchebycheff norm popularized in the approaches termed reference point techniques. In essence, we then identify the alternative a which minimizes a function of the form

$$\max_{i=1,\dots,m}\bigl[w_i\,\delta_i(a)\bigr] + \varepsilon \sum_{i=1}^{m} w_i\,\delta_i(a) \tag{2}$$

where $\varepsilon$ is a suitably small positive number (typically something like 0.01) and the $w_i$ are weights reflecting the relative importance of deviations on each goal. It is important to emphasize that these weights relate to tradeoffs between attributes in the vicinity of the aspiration levels and depend on the specific scale of measurement used. The best way to assess these weights is to evaluate the allowable tradeoffs directly.

The above process can be applied in either the discrete choice or the mathematical programming context. For discrete choice, the calculations for each alternative are easily set up in a spreadsheet. For example, suppose that we are evaluating six policy alternatives in a regional water planning context and that four critical criteria have been identified, associated with the four quantitative attributes: investment cost ($m), water quality (ppm of contaminant), minimum flow levels in the river (m³/sec), and recreational access (thousands of person days per annum). Suppose that the values of these criteria for the six alternatives are as follows:

[Table: values of the four attributes for each of the six alternatives]

Note that the first two attributes require minimization and the latter two require maximization. Suppose that goals are specified as follows: $120m for cost, 280 ppm for quality, 2.5 m³/sec for minimum flow, and 225 thousand person days for recreational access. The unweighted deviations $\delta_i(a)$ can then be computed as follows:

[Table: unweighted deviations $\delta_i(a)$ for each alternative]

Suppose that the following tradeoffs have been assessed: a reduction of 0.1 m³/sec in the minimum flow would be equivalent in importance to changes of $4m in costs, 10 ppm in contaminants, and 10,000 person days of recreational access. Arbitrarily setting $w_3 = 1$ (the weight for the minimum flow criterion), these tradeoffs translate into the following weights for the other criteria: $w_1 = 0.025$ (costs), $w_2 = 0.01$ (quality), and $w_4 = 0.01$ (recreational access). Using these weights and $\varepsilon = 0.01$, we obtain the following values of the function given by Eq. (2) for each of the alternatives:

[Table: values of Eq. (2) for each alternative]

Scenario F is then indicated as the best compromise, followed closely by scenario D. The remainder are shown to be considerably worse in the sense of having large deviations for one or more criteria.
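For readers who prefer to check the arithmetic outside a spreadsheet, the sketch below implements the deviation definitions and the augmented Tchebycheff function of Eq. (2) in Python. The goals, weights, and ε match those stated above, but the attribute values for the six alternatives are illustrative stand-ins (the original data table did not survive extraction), so the computed ranking need not reproduce the one reported in the text.

```python
# Augmented Tchebycheff (reference point) evaluation of discrete alternatives.

def deviation(z, goal, sense):
    """Unweighted deviation delta_i(a): how far value z falls short of its goal."""
    return max(0.0, goal - z) if sense == "max" else max(0.0, z - goal)

def tchebycheff(values, goals, senses, weights, eps=0.01):
    """Eq. (2): max_i [w_i * delta_i(a)] + eps * sum_i w_i * delta_i(a)."""
    wd = [w * deviation(z, g, s)
          for z, g, s, w in zip(values, goals, senses, weights)]
    return max(wd) + eps * sum(wd)

# Attributes: cost ($m, min), quality (ppm, min), flow (m^3/sec, max),
# access (thousand person-days, max).  Goals and weights as stated in the text.
goals   = [120.0, 280.0, 2.5, 225.0]
senses  = ["min", "min", "max", "max"]
weights = [0.025, 0.01, 1.0, 0.01]

# Hypothetical attribute values for alternatives A-F (illustrative only).
alternatives = {
    "A": [100.0, 350.0, 1.8, 180.0],
    "B": [160.0, 300.0, 2.2, 240.0],
    "C": [130.0, 260.0, 1.5, 200.0],
    "D": [125.0, 290.0, 2.4, 215.0],
    "E": [110.0, 320.0, 2.0, 230.0],
    "F": [122.0, 285.0, 2.45, 220.0],
}

scores = {name: tchebycheff(vals, goals, senses, weights)
          for name, vals in alternatives.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: {score:.4f}")   # smallest value = best compromise
```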

For a small number of alternatives, as in the above example, the goal programming or reference point approach does not generate much insight. The methods come much more into their own, however, when there is a large number of alternatives to be screened, and especially when the problem has a mathematical programming structure. In the linear programming case, the trick is to minimize a new variable D, subject to the constraints $D \ge w_i \delta_i(a)$, to the constraints described above for implicitly defining the deviational variables, and to the natural constraints of the problem. The proper setting up of the problem for solution would generally require the assistance of a specialist skilled in (multiobjective) linear programming, and we shall not attempt to provide all the details here.
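As a concrete illustration of that linear programming trick, the sketch below sets up a small two-objective problem with scipy.optimize.linprog. The toy objectives, goals, and resource constraint are invented for the example; the structure — minimizing D plus an ε-weighted sum of deviations subject to $D \ge w_i \delta_i$ — is the point.

```python
# Reference point / goal programming formulation as a single LP (toy example).
# Decision variables: x1, x2 >= 0 with x1 + x2 <= 10.
# Objectives (both maximizing): z1 = 3*x1 + x2 (goal 24), z2 = x1 + 2*x2 (goal 14).
# LP variables: [x1, x2, d1, d2, D]; minimize D + eps*(w1*d1 + w2*d2).
from scipy.optimize import linprog

eps, w1, w2 = 0.01, 1.0, 1.0
c = [0, 0, eps * w1, eps * w2, 1.0]            # objective coefficients

A_ub = [
    [ 1,  1,  0,  0,  0],                      # x1 + x2 <= 10 (natural constraint)
    [-3, -1, -1,  0,  0],                      # z1 + d1 >= 24  ->  -z1 - d1 <= -24
    [-1, -2,  0, -1,  0],                      # z2 + d2 >= 14  ->  -z2 - d2 <= -14
    [ 0,  0, w1,  0, -1],                      # w1*d1 <= D
    [ 0,  0,  0, w2, -1],                      # w2*d2 <= D
]
b_ub = [10, -24, -14, 0, 0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5, method="highs")
x1, x2, d1, d2, D = res.x
print(f"x = ({x1:.3f}, {x2:.3f}), deviations = ({d1:.3f}, {d2:.3f}), D = {D:.3f}")
```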


URL: https://www.sciencedirect.com/science/article/pii/B012227240400037X

Preference Incorporation in Evolutionary Multiobjective Optimization

Slim Bechikh, ... Khaled Ghédira, in Advances in Computers, 2015

2.4.8 Molina et al. [64] Work: The g-dominance

The authors suggested a new kind of dominance, called g-dominance, where solutions satisfying all aspiration levels and solutions fulfilling none of the aspiration levels are preferred over solutions satisfying only some of the aspiration levels. The authors noted that g-dominance has three merits: (1) it can be incorporated in several metaheuristics, (2) although the preferences are modeled as a reference point, the search process works without using any achievement scalarizing function (ASF), and (3) it can be used in an interactive way. Formally, assuming $g = (g_1, \dots, g_M)$ to be a reference point, each solution x is assigned a flag as follows:

$$\mathrm{Flag}_g(x) = \begin{cases} 1 & \text{if } f_m(x) \le g_m \;\; \forall\, m = 1,\dots,M \\ 1 & \text{if } f_m(x) \ge g_m \;\; \forall\, m = 1,\dots,M \\ 0 & \text{otherwise} \end{cases} \tag{36}$$

Given two distinct solutions x and y, x is said to g-dominate y if and only if:

$$\mathrm{Flag}_g(x) > \mathrm{Flag}_g(y) \quad \text{or} \quad \bigl( \mathrm{Flag}_g(x) = \mathrm{Flag}_g(y) \;\text{and}\; f(x) \preceq f(y) \bigr) \tag{37}$$

The main disadvantage of this approach is that it does not preserve the order induced by the Pareto dominance relation. Hence, a dominated solution that satisfies none of the goals may be preferred to a solution that dominates it and fulfills some of the goals. Figure 11 illustrates this case: solution A Pareto-dominates solution B, yet B g-dominates A. This fact discourages convergence toward the Pareto optimal front.
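The flag assignment and comparison in Eqs. (36) and (37) translate directly into code. The sketch below is an illustrative Python rendering (assuming all objectives are minimized and reading ⪯ as weak Pareto dominance); the final lines reproduce the Figure 11 pathology with invented points.

```python
# g-dominance (after Molina et al.), assuming all M objectives are minimized.

def flag(f, g):
    """Eq. (36): 1 if x meets all aspiration levels, 1 if it meets none, else 0."""
    if all(fm <= gm for fm, gm in zip(f, g)):
        return 1
    if all(fm >= gm for fm, gm in zip(f, g)):
        return 1
    return 0

def pareto_dominates(fx, fy):
    """Weak Pareto dominance: fx <= fy componentwise, strictly better somewhere."""
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def g_dominates(fx, fy, g):
    """Eq. (37): flag comparison first, Pareto dominance as tie-breaker."""
    if flag(fx, g) != flag(fy, g):
        return flag(fx, g) > flag(fy, g)
    return pareto_dominates(fx, fy)

# Figure 11 pathology with invented points: A Pareto-dominates B,
# but B (satisfying no goal) g-dominates A (satisfying only some goals).
g = (3.0, 3.0)
A = (2.0, 4.0)      # f1 meets its goal, f2 does not  -> flag 0
B = (3.5, 4.5)      # meets neither goal              -> flag 1
print(pareto_dominates(A, B))   # True
print(g_dominates(B, A, g))     # True
print(g_dominates(A, B, g))     # False
```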


Figure 11. The g-dominance: nonpreservation of the Pareto dominance order.


URL: https://www.sciencedirect.com/science/article/pii/S0065245815000273

Bounded and Costly Rationality

R. Radner, in International Encyclopedia of the Social & Behavioral Sciences, 2001

6 Satisficing, Heuristics, and Non-Bayesian Learning

In view of the difficulties posed by the various manifestations of ‘truly bounded rationality,’ a number of authors have proposed and studied behavior that departs more or less radically from the Savage Paradigm. These will be discussed under three headings: satisficing, heuristics, and non-Bayesian learning.

The term ‘satisficing’ refers to behavior in which the DM searches for an act that yields a ‘satisfactory,’ as distinct from an optimal, level of expected utility. The target, or ‘satisfactory,’ level of expected utility is usually called the DM's ‘aspiration level.’ In the simplest model, the aspiration level is exogenous, i.e., a given parameter of the model. More ambitious models describe some process whereby the aspiration level is determined within the model, and may change with experience (Simon 1972, Radner 1975). Such aspiration levels are called ‘endogenous.’ In some problems even optimal behavior bears a resemblance to satisficing. One category is the ‘secretary problem’ (Radner 2000).
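The resemblance is easy to see in the classical secretary problem, where the optimal stopping rule itself looks like satisficing: observe roughly the first n/e candidates without committing, set the best of them as an aspiration level, then accept the first subsequent candidate who surpasses it. The simulation below is an illustrative sketch of that standard rule, not code from the source.

```python
import math
import random

def secretary(candidates):
    """Classical 1/e stopping rule: calibrate an aspiration level, then satisfice."""
    n = len(candidates)
    k = int(n / math.e)                      # observation phase
    aspiration = max(candidates[:k]) if k else float("-inf")
    for value in candidates[k:]:
        if value > aspiration:               # first candidate surpassing the aspiration
            return value
    return candidates[-1]                    # forced to take the last one

# Estimate the probability of hiring the single best of 100 candidates.
random.seed(0)
trials, hits, n = 10_000, 0, 100
for _ in range(trials):
    candidates = random.sample(range(n), n)  # distinct ranks in random order
    if secretary(candidates) == n - 1:       # n - 1 is the best rank
        hits += 1
print(hits / trials)                         # close to 1/e ~ 0.37
```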

The term ‘heuristics’ refers generally to behavior that follows certain rules that appear to produce ‘good’ or ‘satisfactory’ results most of the time in some class of problems (Simon 1972, see Heuristics for Decision and Choice). For example, the calculation of an optimal schedule for assigning jobs to machines is typically intractable if the numbers of jobs and machines are even moderately large. Nevertheless, human schedulers routinely construct ‘satisfactory’ schedules with such numbers, using various rules of thumb that have been developed with experience. Heuristics are central to many artificial intelligence applications. Satisficing plays an important role in many heuristic methods, and also in the processes of their modification.

The discussion of heuristics leads naturally to the consideration of non-Bayesian learning (NBL). Bayesian learning (i.e., the application of the calculus of conditional probability) is of course part of the Savage Paradigm in any decision problem in which the DM conditions his/her action on information about the state of the world. Many standard statistical methods use NBL. For example, the use of the sample mean to estimate a population mean is typically inconsistent with the Savage Paradigm (although in some cases the latter can be shown to be a limit of Bayesian estimates, as some parameter of the problem goes to infinity). Most psychological theories of learning postulate some form of NBL. A central question in the theory of NBL is: under what conditions, if any, does a particular NBL procedure converge asymptotically to a procedure that is Savage-Paradigm optimal as the DM's experience increases? (Rustichini 1999).

Again, one must ask: is there any satisfactory meaning to the term ‘rationality’ when used in the phrase ‘bounded rationality’? The convergence of NBL to optimal actions could provide one (weak) meaning. Nevertheless, the problems raised by the various phenomena grouped under ‘truly bounded rationality’ may eventually lead students of decision making to answer this last question in the negative.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767006276

23rd European Symposium on Computer Aided Process Engineering

Karthik Sindhya, ... Kaisa Miettinen, in Computer Aided Chemical Engineering, 2013

5 Numerical results

The concentration control problem was modeled in APROS and connected to IND-NIMBUS. The values of objective and constraint functions were evaluated in APROS and sent to IND-NIMBUS. The calculation of objective and constraint function values was computationally expensive, as a typical APROS simulation run took 80 seconds. Two new PO solutions were generated at each NIMBUS iteration (this number can be chosen between 1 and 4), and a hybrid algorithm of differential evolution (DE) (Storn and Price, 1997) and the derivative-free constrained algorithm COBYLA (Powell, 1994) was employed for this. For every optimization run, up to 450 function evaluations (DE = 400 and COBYLA = 50) were allowed. (The DE parameters F and CR were set to 0.5 and 0.8, respectively, and the population size was 20.) A computer with an Intel® Core™ i7-2600 CPU (3.40 GHz, 4 cores) was used.

The entire solution process involved two basic steps: pre-processing (DM not involved) and the actual decision making process. In the pre-processing step, the model was utilized for finding estimates of the ranges of the objective function values in the set of PO solutions and for determining a suitable starting solution for the decision making process. The minimum (ideal) values were calculated by optimizing each objective function separately, and the approximate maximum (nadir) values were found by using a so-called pay-off table (Miettinen, 1999). Then, a neutral compromise solution was calculated as a starting solution without preferences (according to the NIMBUS algorithm) and shown to the DM alongside the ideal and nadir values to give an understanding of attainable objective function values. In the decision making process, the DM investigated the solutions shown as values of the objective functions.
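The ideal and nadir estimates from the pre-processing step can be illustrated with a small sketch (not from the source): in a pay-off table, row j holds the objective vector obtained when objective j alone is minimized, the ideal value of each objective is the corresponding diagonal entry, and the nadir value is approximated by the worst entry in each column. The numbers below are invented for illustration.

```python
# Estimating ideal and nadir objective vectors from a pay-off table.
# payoff[j][i] = value of objective i at the optimum of objective j
# (all objectives minimized).  Values are invented for illustration.
payoff = [
    [0.20, 41000, 1.30],   # minimize f1 alone
    [0.90, 18000, 1.10],   # minimize f2 alone
    [0.68, 42000, 0.085],  # minimize f3 alone
]

m = len(payoff)
ideal = [payoff[i][i] for i in range(m)]                  # diagonal entries
nadir = [max(row[i] for row in payoff) for i in range(m)] # column maxima

print("ideal:", ideal)   # best attainable value of each objective
print("nadir:", nadir)   # rough estimate of each objective's worst PO value
```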

In the first iteration, the DM investigated the starting solution z1 = (0.88, 28475, 0.125)^T and wanted to improve f1 and f2 till desired aspiration levels 0.2 and 2000, respectively, and to let f3 increase till a reservation level 1.0. Based on the DM's preferences, two PO solutions z2 = (0.497, 38258, 0.9609)^T and z3 = (0.455, 33546, 1.24)^T were computed demonstrating different trade-offs, and they were shown to the DM. The DM preferred z2 but still wanted to improve the value of f1 till a desired aspiration level 0.2 while relaxing f2 and f3 up till reservation levels 45,000 and 1.0, respectively. Then, PO solutions z4 = (0.361, 66335, 0.848)^T and z5 = (0.315, 67143, 1.199)^T were obtained. For the third iteration, the DM preferred z4 and decided to improve f2 till an aspiration level 50000 and allow f1 and f3 to increase up till reservation levels 0.4 and 0.9, respectively. This resulted in PO solutions z6 = (0.4, 50826, 0.8)^T and z7 = (0.42, 48671, 0.9)^T.

In the fourth iteration, the DM did not prefer any of the solutions provided to him and, instead, he indicated his desirable objective function values as (0.3, 55000, 0.7)^T. NIMBUS provided him with solutions z8 = (0.366, 57846, 0.749)^T and z9 = (0.398, 50495, 0.796)^T, both of which he found unsatisfactory. For the fifth iteration, the DM decided to explore a new area of the objective space by reconsidering the first solution z1. Next, the DM preferred to improve f1 and f3 till desired aspiration levels 0.5 and 0.1, respectively, and allowed f2 to increase up till a reservation level 60000. For these preferences, one PO solution z10 = (0.45, 60000, 0.099)^T was obtained. For the next iteration, the DM continued with the starting solution z1, but this time he wanted to improve f3 as much as possible, while allowing f1 and f2 to increase till reservation levels 0.6 and 65000, respectively. These preferences produced solutions z11 = (0.488, 57688, 0.088)^T and z12 = (0.45, 59804, 0.099)^T, which were not satisfactory for the DM.

As the DM was clearly interested in improving the value of f3, he decided to compare the three generated PO solutions in which the objective f3 obtained the lowest values. These were z13 = (0.675, 42013, 0.0852)^T, calculated in the pre-processing stage, and z14 = (0.521, 57202, 0.0852)^T and z15 = (0.517, 57030, 0.0852)^T, calculated during the iterations. Of these, the DM considered z13 as his preferred solution for the seventh iteration, but decided to explore more solution(s) by improving f1 and f2 till aspiration levels 0.5 and 20000, respectively, and increasing f3 till a reservation level 0.09. No solutions were obtained which could satisfy these preferences. Hence, the DM decided to choose z13 as his most preferred solution, to be called the final solution.

During the interactive solution process the DM could learn about the trade-offs among the conflicting objectives and how attainable his desires were. Finally, the DM visually inspected the output concentration c4 and the concentration setpoint for the starting and the preferred solutions (the variations are shown in Figure 2) to justify his decision.


Figure 2. Concentration time series for starting and preferred solutions.

The DM found the final preferred solution to have lower initial oscillations, faster response to setpoint and inlet concentration step changes, and no overshoots, and to be less expensive (compared to the other PO solutions generated). The DM attributed his shift in the preferred region of the objective space during the fifth iteration to the lack of convergence to the setpoint observed when f3 was higher. Overall, the DM was mainly interested in solutions with low values of f3 and f1 and was ready to pay a higher cost. Thanks to the interactive process, he became convinced of the quality of the final solution.


URL: https://www.sciencedirect.com/science/article/pii/B9780444632340501469

Social Science, the Idea of

F. D'Agostino, in International Encyclopedia of the Social & Behavioral Sciences, 2001

5 Conclusion

Should human studies plausibly aspire to scientific status? This remains questionable, as it has long been. Can they realistically aspire to such a status? This too is questionable. Perhaps Rorty (see Sect. 4.1) is right to suggest that human studies have aspired to too much, that their two main goals are disjoint and hence cannot both be achieved under the same methodological or institutional auspices. In this case, practitioners would perhaps be wise to disentangle their various objectives, and adjust their ‘aspiration levels’ (Simon 1996, p. 30) to more realistic goals. Or perhaps an even more radical adjustment of aspirations and disciplinary cultures is called for. Certainly Toulmin thinks so. Toulmin (1992, p. 193) says: ‘The task is not to build new, more comprehensive systems of theory with universal and timeless relevance, but to limit the scope of even the best-framed theories, and fight the intellectual reductionism that became entrenched during the ascendancy of rationalism … [thus reinstating] respect for the pragmatic methods appropriate in dealing with concrete human problems.’ On this account, human studies are to be understood more on the model of ad hoc tools for a variety of perhaps incommensurable human purposes, and not, as in their ascendancy in the mid-twentieth century, as sciences with all the cognitive and institutional entitlements which are associated with that sometimes honorific term.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767010202

Personality and Risk Taking

E.U. Weber, in International Encyclopedia of the Social & Behavioral Sciences, 2001

5 Summary, Caveats, and Future Directions

The current research consensus suggests an interactional model of risk taking (e.g., Sitkin and Weingart 1995) in which situational characteristics as well as person-centered characteristics jointly influence risk taking. Situational constraints include the content domain of the risky decision as well as contextual variables such as outcome framing and aspiration levels (Lopes 1987). Person-centered characteristics include age, gender, culture, and personality. These variables influence risk taking mostly by changing people's perception of the riskiness of decision alternatives, rather than by affecting their willingness to take on more or less risk.

Because of the domain specificity of risk taking, measures of risk attitude that employ choice situations across a range of content domains (e.g., the Choice Dilemmas Questionnaire of Kogan and Wallach 1964) have little predictive validity. Domain-specific scales of risk taking, which help to decompose apparent differences in risk taking into differences in risk perception and/or perceived-risk attitude (PRA), have recently been developed (Weber et al. 2000). Future research will provide additional insights into the complex interactions between personality and situation that have been explored for a range of other traits (Mischel 1999) with respect to risk taking. A combination of task analysis and theory about the reasons for risk taking and its cognitive and emotional constraints should lead to the development of gender-, culture-, and domain-specific risk-taking profiles that predict the level of risk taking in a situation- and person-contingent fashion.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767017824

Rational Choice Theory: Cultural Concerns

J. Elster, in International Encyclopedia of the Social & Behavioral Sciences, 2001

4.1 Bounded Rationality and Behavioral Economics

Some alternatives try to identify the cognitive mechanisms that produce deviations from what rational choice theory predicts.

The research program on ‘bounded rationality’ stems from Herbert Simon (1954). Nelson and Winter (1982) is an outstanding application. The core assumption is that agents ‘satisfice,’ by choosing an option that is ‘good enough,’ rather than maximize. Agents do not look for better alternatives if the status quo is satisfactory. If it falls below the agent's aspiration level, a search for new options is triggered until a new, satisfactory outcome is found. The theory has a good match to common sense (‘Never change a winning team’) and captures the heavy dependence of many choices on search procedures. By eschewing the maximizing assumption, it avoids problems of indeterminacy.

The research program on ‘behavioral economics’ or ‘quasi-rational economics’ (Thaler 1991) stems from the work of Kahneman and Tversky (Kahneman et al. 1982, Kahneman and Tversky 2000). In addition to identifying many of the ‘cold’ anomalies discussed above, these authors tried to account for them by means of ‘prospect theory.’ It deviates from rational choice theory by assuming that agents evaluate gains and losses by how much they deviate from a reference point rather than in absolute terms, and that they are more sensitive to losses than to equivalent gains. These two assumptions explain, for instance, that people who refuse to use credit cards when they are said to carry a surcharge accept them when the charge is redescribed as a cash discount.
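These two assumptions are often summarized in a piecewise value function. The sketch below uses the parameterization estimated by Tversky and Kahneman (1992) (α ≈ 0.88, λ ≈ 2.25); it is an illustration of reference dependence and loss aversion, not code or parameters from Elster's article.

```python
# Prospect theory value function: outcomes are coded as gains or losses
# relative to a reference point, and losses loom larger than gains.

ALPHA = 0.88   # diminishing sensitivity (Tversky & Kahneman 1992 estimate)
LAMBDA = 2.25  # loss aversion coefficient

def value(outcome, reference=0.0):
    x = outcome - reference                  # deviation from the reference point
    if x >= 0:
        return x ** ALPHA                    # concave over gains
    return -LAMBDA * (-x) ** ALPHA           # steeper over losses

# Credit-card framing: the same $1 difference is coded either as a
# surcharge (a loss) or as a forgone cash discount (an unrealized gain).
print(value(-1.0))   # -2.25: value of a $1 loss (surcharge frame)
print(value(+1.0))   #  1.00: value of a $1 gain (discount frame)
# The dollar hurts about 2.25x more as a loss than it pleases as a gain,
# so the surcharge frame deters card use while the discount frame does not.
```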


URL: https://www.sciencedirect.com/science/article/pii/B0080430767046301

Preference Models with Latent Variables

U. Böckenholt, in International Encyclopedia of the Social & Behavioral Sciences, 2001

3 Conclusion

The formulation of choice models in terms of latent utilities has a long tradition in biometric, econometric, psychometric and sociometric disciplines. The numerous applications and further developments in each of these areas have led to many improvements in the statistical apparatus necessary for estimating and validating choice models. However, the unifying principle remains valid that observed individual choice behavior is an imperfect reflection of the underlying preferences of a person. This viewpoint leads to a probabilistic representation of the relation between the manifest and latent levels of choice analysis taking into account both the process by which an individual arrives at a choice and the factors that influence this process. Some of these factors are unobservable (such as the processed information about the choice alternatives), while other factors are known and measurable. By separating these two sources of influence due to measured and unobserved attributes at the individual level, a powerful modeling approach is obtained that facilitates the identification and isolation of the net effects of choice option characteristics, individual characteristics, and interactions of option and individual characteristics as separate choice determinants.

Modifications of this choice modeling approach are necessary when the choice set is large or when a simultaneous presentation of the choice options is not feasible. In these cases, one may argue that decision makers tend to rely on satisficing rules, and the idea of an aspiration level may be more appropriate than the principle of utility maximization (Simon 1957). In contrast to the large number of possible satisficing rules, however, only a few mathematical representations have been proposed. Perhaps the most popular model is due to Coombs' (1964) analysis of the ‘pick any/m’ task and Thurstone's (1959) work on the method of successive categories. In the ‘pick any/m’ task it is assumed that choice options are selected when their utilities exceed some specified minimum value. Thus, the underlying choice process is scalar valued and based on a binary comparison between the option's utility and a threshold value. More specifically, according to Coombs' (1964) unidimensional unfolding approach, the utility of item j for person n is given by the distance between two parameters representing j and n on a unidimensional continuum. This idea has been incorporated in several psychometric models by making the simplifying assumption that the perception of the item-specific parameters does not vary from person to person (Andrich 1997). In general, results of the maximization and ‘aspiration level’ hypotheses are not directly comparable, because in the latter case we model the probability of belonging to some category (i.e., ‘above the threshold’) and in the former case we model order relations among the choice options.
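The contrast between the two hypotheses can be made concrete with a small sketch (illustrative only; the option names and utility values are invented): a threshold rule returns the set of options whose latent utilities clear an aspiration level, whereas maximization yields only the order relations among the options.

```python
# Aspiration-level (threshold) choice vs. utility maximization.

utilities = {"a": 0.4, "b": 1.7, "c": 1.2, "d": -0.3}  # latent utilities (invented)
threshold = 1.0                                        # aspiration level

# 'Pick any/m' rule: select every option whose utility exceeds the threshold.
picked = [opt for opt, u in utilities.items() if u > threshold]
print(picked)                                          # ['b', 'c']

# Maximization: only the induced ordering (and its top element) matters.
ranking = sorted(utilities, key=utilities.get, reverse=True)
print(ranking)                                         # ['b', 'c', 'a', 'd']
print(ranking[0])                                      # 'b'
```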

In recent years, stochastic choice models have come under criticism because, frequently, they cannot capture context-sensitive behavior (Tversky and Kahneman 1991). For example, the models' underlying assumption that the assessment of an option's utility does not depend on comparisons drawn between it and other available alternatives has received little support in laboratory studies. Moreover, the deliberation process from the onset of the choice task to the final selection has been shown to play an important role that should be taken into account by the next generation of choice models (Busemeyer and Townsend 1993). Clearly, the latent utility approach has proved to be most useful in providing parsimonious descriptions of how judges differ in their preferences at a particular point in time or over time. However, by emphasizing interindividual differences, this approach can render only limited insights about cognitive processes influencing a choice.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767006586

Computer-assisted Instruction

J. Voogt, J. van den Akker, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2 Views on the Use of ICT in Education

It is generally accepted that the increasing impact of ICT on our society is also influencing teaching and learning. Some expect that ICT can reduce several problems in present-day education, such as the need for individualized instruction and the poor transfer of school knowledge to real life (Sandholtz et al. 1997). Also, new demands from society require education to focus on information management and communication skills. ICT can be a valuable tool in learning these skills. Further, in constructivist approaches to teaching and learning, students are held responsible for their own learning. Information technology is considered to offer promising environments and tools to support this process.

The high aspiration level of ICT in education is well reflected in Dede's (2000) list of ICT capabilities for enhancing learning: centering the curriculum around authentic problems parallel to those in real world settings; involving students in virtual communities of practice, using advanced tools similar to those in today's high-tech workplaces; facilitating guided, reflective inquiry through extended projects that inculcate sophisticated concepts and skills and generate complex products; utilizing modeling and visualization as powerful means of bridging between experience and abstraction; enhancing students' collaborative construction of meaning via different perspectives on shared experiences; including pupils as partners in developing learning experiences and generating knowledge; and fostering success for all students through special measures to aid the disabled and the disenfranchised.

Itzkan (1994) made an effort to interpret developments in ICT in terms of educational innovations. He postulated that the potential of technology directs the way education will develop. In his view, three stages can be distinguished. The ‘substitution’ stage marks the ICT applications that are able to automate parts of the instructional process. In this stage, ICT is solely being used as a replacement for tasks of the teacher, with the specific aim to make education more effective, or to provide more possibilities for individualizing education. In the stage of ‘transition,’ ICT is not only used for replacing existing tasks, but it also requires that instructional practices and content are changed. In this stage, ICT applications not only structure the learning process, but students themselves need to increasingly structure their own learning process. In the ‘transformation’ stage, not only must instructional practices and content be altered, but also the underlying rationale of education should change. An example of the latter is the development in network technology that may result in virtual learning communities where students, teachers, and scientists are involved. Such a situation affords greater opportunities for learning to take place in interaction with practicing professionals than might otherwise be possible with a few field trips per year.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767024141

Which of the following means that decision makers choose the first solution alternative that satisfies minimal decision criteria?

Satisficing means that decision makers choose the first solution alternative that satisfies minimal decision criteria.

What are the 3 types of decision making conditions?

Managers make problem‐solving decisions under three different conditions: certainty, risk, and uncertainty. All managers make decisions under each condition, but risk and uncertainty are common to the more complex and unstructured problems faced by top managers.

What are the 4 levels of decision making?

Combining them creates four categories of decisions:

1. Making routine choices and judgments
2. Influencing outcomes
3. Placing competitive bets
4. Making strategic decisions

Is choosing the first alternative that appears to give an acceptable resolution to a problem?

Satisficing is choosing the first alternative that appears to give an acceptable or satisfactory resolution of the problem.