Water resources systems have always been at the core of societal development. The management of water supply and drainage systems is central to urban development and often requires strategic decisions in the presence of uncertainty. For instance, the Ancient Egyptians knew very well how crucial the spring floods of the Nile River were to the summer harvest. They used the spring water level in the Nile River, measured with the Nilometer (Figure 1), to determine the amount of tax to charge farmers that year. The Nilometer is thus an ancient example of a decision support system.
Figure 1. Measuring shaft of the Nilometer on Rhoda Island, Cairo. By Baldiri - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4302703.
Decision making has always been at the core of human development. Everyday life involves decisions that may be affected by different levels and forms of uncertainty, and taking decisions in the presence of uncertainty is a perennial human challenge. In the past, heuristic approaches relying on routine thinking were mainly used. Heuristic methods are generally preferred for being quick and flexible, but they are more likely to involve fallacies or inaccuracies. Therefore, attempts were made early on to set the basis for a rigorous and objective approach.
Decision theory has its roots in probability theory. Already in the 17th century, scientists made use of the idea of expected value: when faced with a number of actions, each of which could give rise to more than one possible outcome, the rational procedure is to identify all possible outcomes of each action, determine their benefit in assigned units (for instance, economic gain) and the related probability, multiply the two to obtain the expected value of each action, and then choose the action leading to the highest expected value. Decision theory evolved significantly during the 20th century, when theoretical bases were laid out to support complex strategies.
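As a toy illustration of this rule, the following Python sketch computes expected values for two actions; the action names, gains, and probabilities are invented for the example.

```python
# Expected-value rule: for each action, sum gain x probability over its
# possible outcomes, then choose the action with the highest expected value.
# The actions, gains and probabilities below are hypothetical.

actions = {
    "build reservoir": [(100.0, 0.6), (-40.0, 0.4)],   # (gain, probability)
    "do nothing":      [(20.0, 0.9), (-10.0, 0.1)],
}

def expected_value(outcomes):
    return sum(gain * prob for gain, prob in outcomes)

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best)   # "build reservoir": expected value 44 versus 17
```

Note that the rule compares only expected values; it ignores, for instance, how risky each action is, which is one motivation for the robustness-based methods reviewed later.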
In the context of sustainable water resources management, taking a proper decision is a challenging mission: ensuring sustainability requires investing resources to pursue a long-term vision, which usually implies that short-term goals are given lower priority. Moreover, the possible presence of deep uncertainty makes decision making particularly critical with respect to a wide range of ecological and human challenges related to the management of water resources. This is the reason why decision theory has been widely used in water resources management since the second half of the 20th century, when the availability of automatic computation allowed the rigorous evaluation of the outcomes resulting from multiple choices.
The first decision support systems in water resources management made use of linear programming, which was introduced in 1939 by the Soviet economist Leonid Kantorovich. During the 1980s and 1990s, scientists working in water resources made significant contributions to mathematical optimization. During the last 30 years of the 20th century robust optimization was developed, and it remains a very active field of research today. Indeed, during the last few decades decision support and analysis tools have emerged as useful techniques for reducing uncertainty, increasing transparency, and improving the effectiveness of complex water management decisions.
When possible, it is advisable that the criteria and constraints be agreed upon by the stakeholders before the alternative decisions are known. In fact, preliminary knowledge of the possible final outcomes may introduce a subjective bias into the definition of the criteria. However, in water resources management the alternative decisions are often known beforehand, and therefore the above strategy cannot always be adopted.
Given the availability of several different methods, it is interesting to compare them. Such comparisons are not straightforward, as different methods generally use different measures to quantify robustness, different descriptions of uncertainty (probabilistic or not), and provide different information to decision makers at different stages of the decision process (Hall et al., 2012). A brief review of selected robust decision methods, inspired by the work of Roach (2016), is offered below.
Information-Gap (Info-gap) decision theory was proposed by Ben-Haim (2001) to assist decision making when there are severe knowledge gaps and when models of uncertainty are unreliable, inappropriate, or unavailable. An info-gap is a mismatch between what is known and what needs to be known to make a good decision. The info-gap model assists a user who needs to take a decision in assessing how wrong the available information can be without compromising the quality of the outcome. It evaluates the robustness of an intervention strategy as the maximum range of information uncertainty (in input data or model parameters) that can be tolerated while maintaining specified performance requirements. In several applications a policy that is not sensitive to information errors or gaps may be preferred over a vulnerable policy. A possible way to reach this target is to optimise robustness to failure under uncertainty (Ben-Haim, 2001). In water resources management, Info-gap is typically applied by identifying the strategy that satisfies minimum performance requirements (performing adequately rather than optimally) over a wide range of potential scenarios, even under future conditions that deviate from the best estimate. Info-gap also allows one to evaluate under which uncertain scenario an unexpected windfall may occur.
Info-gap quantifies uncertainty as a sequence of expanding nested ranges defined on the space of an assigned decision-relevant vector. The latter can be, for instance, the parameter vector of a prediction model (which reduces to a single scalar value if the model has one parameter only). A larger range of possible parameter values indicates increased uncertainty. Robustness is defined as the maximum uncertainty, represented by a given value of the parameter range, under which a strategy still achieves a certain level of performance. The method evaluates alternative strategies with a reward function that measures the desirability of each strategy to the decision maker.
At a given range value, there will be a set of possible rewards given by the minimum and maximum levels of performance. These levels are used to define two criteria:
- Robustness function, the maximum range that can be tolerated while ensuring a given minimum reward for each decision strategy.
- Opportuneness function, the minimum range required to enable an assigned maximum reward ("windfall") for each decision strategy.
Quoting from Hall et al. (2012):
The robustness function expresses immunity against failure so “bigger is better.” Conversely, when considering the opportunity function, “big is bad.” The different behaviors of these functions illustrate the potential pernicious and propitious consequences of uncertainty.
Info-gap presents to users (decision makers) robustness and opportuneness curves for each strategy using the same uncertainty model. The independent variables on the graphs are the minimum and maximum reward. The robustness curve describes the maximum level of uncertainty that can be tolerated for a given “critical” (minimum) outcome. The opportuneness curve describes the minimum level of uncertainty that is necessary to enable the possibility of a given “windfall” (maximum) outcome. Users may then decide to safeguard against the worst-case outcome or to pursue the best-case windfall, or they may seek a strategy that provides some desirable tradeoff between robustness and opportuneness. Info-gap does not identify a unique best strategy but gives users the opportunity to assess extreme outcomes and their interaction.
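The robustness function can be sketched numerically. In the hypothetical Python example below, the uncertain quantity is a future inflow with best estimate u_hat, the reward comes from a toy revenue model, and robustness is the largest horizon of uncertainty h for which the worst-case reward still meets a critical level r_crit; all names and numbers are assumptions made for illustration.

```python
# Info-gap sketch (hypothetical model and numbers): the uncertain inflow u
# lies in nested intervals [u_hat - h, u_hat + h]; h is the horizon of
# uncertainty. Robustness of a strategy = largest h such that even the
# worst-case reward still meets a critical level r_crit.

u_hat = 100.0   # best estimate of the future inflow

def reward(u, capacity):
    # toy revenue model: sell what is both available (u) and storable
    # (capacity), minus a cost proportional to the built capacity
    return min(u, capacity) - 0.2 * capacity

def robustness(capacity, r_crit, h_step=0.5, h_max=100.0):
    # grid search for the largest h with worst-case reward >= r_crit;
    # the worst case is the low-inflow edge u_hat - h of the interval
    h = 0.0
    while h + h_step <= h_max and reward(u_hat - (h + h_step), capacity) >= r_crit:
        h += h_step
    return h

# The smaller capacity earns less at the best estimate but tolerates a wider
# horizon of uncertainty for the same critical reward:
print(robustness(60.0, r_crit=40.0), robustness(90.0, r_crit=40.0))
```

The opportuneness function could be sketched symmetrically, by searching for the smallest h at which the best-case reward (the high-inflow edge of the interval) reaches a windfall level.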
Robust Optimisation involves the identification of an optimal solution to a problem such that the underlying assumptions and constraints remain satisfied in the presence of uncertainty. Namely, the identified solution optimizes the functioning of the system under the worst of the several scenarios that can materialize under uncertainty. It is mostly employed to identify an optimal solution to a single-objective problem. However, it can be adapted to solve multi-objective problems, namely, problems where two or more performance measures (objective functions) need to be optimized without the possibility of combining them.
The most classical example of robust optimization is a problem where a single objective function is to be optimized under uncertain constraints. For instance, future water resources availability may not be known with certainty but may vary within a given range with assigned probabilities. The optimum is therefore to be sought under different scenarios of water availability.
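A minimal sketch of this scenario-based setting, with invented numbers: choose the irrigated area that maximises the worst-case net benefit over a few water-availability scenarios, rather than the benefit under the best estimate.

```python
# Scenario-based robust optimisation sketch (hypothetical numbers): choose
# the irrigated area that maximises the WORST-case net benefit over the
# water-availability scenarios, rather than the benefit at the best estimate.

scenarios = [70.0, 100.0, 130.0]   # possible seasonal water availability

def net_benefit(area, water):
    # toy model: each unit of irrigated area needs one unit of water;
    # planted but unirrigated area produces a loss
    irrigated = min(area, water)
    return 3.0 * irrigated - 1.0 * (area - irrigated)

candidates = [10.0 * a for a in range(4, 14)]   # candidate areas 40..130

robust = max(candidates,
             key=lambda a: min(net_benefit(a, w) for w in scenarios))
print(robust)   # 70.0: sized for the driest scenario, not the best estimate
```

Optimizing for the best estimate (100 units of water) would suggest a larger area, which performs much worse if the dry scenario materializes; the max-min criterion trades peak performance for protection against the worst case.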
Robust Decision Making (RDM) characterizes uncertainty with multiple views of the future, given the available knowledge of uncertainty. These multiple views are created by considering their possibility, without necessarily taking into account their probability. Robustness with respect to those multiple views, rather than optimality, is then adopted as the criterion to assess alternative policies. Several different approaches can be followed to seek robustness. These may include, for instance, trading a small amount of optimum performance for less sensitivity to broken assumptions, or comparing performance over a wide range of plausible scenarios. Finally, a vulnerability-and-response-option analysis framework is used to identify robust strategies that minimize the regret that may occur over the different future scenarios. This structuring of the decision problem is a key feature of RDM, which has been used in several climate adaptation studies (see, for instance, Bhave et al., 2016, and Daron, 2015).
The main steps of RDM are briefly summarized here below.
- Step 1: identification of future scenarios, system models and metrics to evaluate success. The first step of RDM is carried out jointly by stakeholders, planners, and decision makers. They sit together to identify possible future scenarios, without considering their probability at this stage. Uniform sampling may therefore be used, rather than relying on a prior distribution, in order to make sure that all possibilities are explored. Metrics describing how well future goals would be met are also agreed upon. Metrics can be, for instance, water demand, water supplied, or unmet demand. Metrics can also include indexes such as reliability (e.g. the percentage of years in which the system does not fail). Environmental and/or financial metrics can also be considered, such as minimum in-stream flows and costs of service provision. Furthermore, in this step candidate strategies for reaching the goals are identified, such as investments or programs. The participants also agree on the models that will be used to determine the future performance of the system.
- Step 2: evaluation of system performance. In this step, termed "experimental design", the performance of the alternative strategies is evaluated with respect to the possible future scenarios, by estimating the related metrics of success. This step is typically analytical.
- Step 3: vulnerability assessment. Stakeholders and decision makers work together to analyse the results from Step 2 and identify the vulnerabilities associated with each strategy. The simulation results are first evaluated to determine in which futures the management strategy or strategies do not meet the management targets. Next, a scenario discovery exercise leads stakeholders and decision makers to jointly define a small number of scenarios to which the strategy is vulnerable. The information about vulnerability can help define new management options and test strategies that are more robust to those vulnerabilities. Furthermore, tradeoffs among different strategies are identified. The vulnerability analysis helps decision makers recognize those combinations of uncertainties that require their attention and those that can instead be ignored. Visual inspection or more sophisticated statistical analyses can be used, depending on the problem and the audience.
- Step 4: adaptation options to address vulnerabilities. The information on the system's vulnerabilities can then be used to identify the most robust adaptation option. Moreover, suggestions to improve the considered options can also be gained from Step 3. For instance, adaptive strategies, which evolve over time depending on the observed conditions, can be considered. Interactive visualizations may be used to help decision makers and stakeholders understand the tradeoffs in terms of how alternative strategies perform in reducing vulnerabilities. This information is often paired with additional information about costs and other implications of the strategies.
- Step 5: risk management. At this stage decision makers and stakeholders can bring in their assumptions regarding the likelihoods of the future scenarios and the related vulnerable conditions. For example, if the vulnerable conditions are deemed very unlikely, then the reduction in the corresponding vulnerabilities may not be worth the cost or effort. Conversely, the vulnerable conditions identified may be viewed as plausible or very likely, providing support to a strategy designed to reduce these vulnerabilities. Based on this tradeoff analysis, decision makers may finally decide on a robust strategy.
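The backbone of these steps can be caricatured in a few lines of Python: sample futures uniformly (Step 1), evaluate each strategy's performance metric in every future (Step 2), and compare strategies by their maximum regret over the sampled futures (Steps 3 and 5). The demand model, strategies, and cost term below are all hypothetical.

```python
import random

# RDM-style sketch (toy model): sample many futures uniformly, evaluate each
# strategy in every future, then compare strategies by maximum regret, i.e.
# the largest gap between the best achievable performance in a future and
# the performance the strategy actually delivers there.

random.seed(0)
futures = [random.uniform(50.0, 150.0) for _ in range(1000)]  # future demand

strategies = [80.0, 110.0, 140.0]   # hypothetical capacity options

def performance(capacity, demand):
    # fraction of demand met, minus a small cost penalty for large capacity
    return min(capacity, demand) / demand - 0.001 * capacity

def max_regret(capacity):
    regrets = []
    for d in futures:
        best = max(performance(c, d) for c in strategies)
        regrets.append(best - performance(capacity, d))
    return max(regrets)

most_robust = min(strategies, key=max_regret)
print(most_robust)
```

In a real RDM exercise the scenario-discovery step would then characterise *which* futures drive each strategy's regret, rather than only reporting the minimax choice.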
RDM characterizes uncertainty in the context of a particular decision. That is, the method identifies those combinations of uncertainties most important to the choice among alternative options and describes the set of beliefs about the uncertain state of the world that are consistent with choosing one option over another. This ordering provides cognitive benefits in decision support applications, allowing stakeholders to understand the key assumptions underlying alternative options before committing themselves to believing those assumptions.
RDM reverses the order of traditional decision analysis by conducting an iterative process based on a vulnerability-and-response-option rather than a predict-then-act decision framework, which is adaptation based on a single projected future. This is known as a bottom-up analysis and differs from the top-down method that is also widely utilised in decision making (Blöschl et al., 2013).
RDM and Info-Gap Decision Theory (IGDT) are decision making frameworks that seek robustness. Both use simulation models to consider a wide spectrum of plausible futures each with different input parameters to represent uncertainty. Both approaches have been applied to water management. For instance, Groves and Lempert (2007) use RDM to identify vulnerabilities of the California Department of Water Resources’ California Water Plan (CWP). Hipel and Ben-Haim (1999) use IGDT to represent different sources of hydrological uncertainty. IGDT was also used by McCarthy and Lindenmayer (2007) within a water resources – timber production management problem in Australia. Also, the sensitivity of UK flood management decisions to uncertainties in flood inundation models was investigated with IGDT (Hine and Hall, 2010).
In a recent comparison of the two approaches, Hall et al. (2012) found that both tools come to similar conclusions on a climate change problem but provide different information about the performance and vulnerabilities of the analysed decisions. IGDT is described as a tool comparing the performance of different decisions under a wide range of plausible futures (robustness) and their potential for rewards (windfall) under favourable future conditions. On the other hand, RDM identifies under which combination of future conditions a particular strategy becomes vulnerable to failure through ‘scenario discovery’. Identifying different failure conditions provides scenarios with which to test plans and devise new strategies. IGDT, by contrast, provides the facility to simultaneously compare the robustness and opportuneness of multiple strategies but does not quantify their vulnerabilities.
Info-gap and RDM share many similarities. Both represent uncertainty as sets of multiple plausible futures, and both seek to identify robust strategies whose performance is insensitive to uncertainties. Yet they also exhibit important differences, as they arrange their analyses in different orders, treat losses and gains in different ways, and take different approaches to imprecise probabilistic information.
Decision-scaling (DS) is another bottom-up analysis approach to decision making. It has been introduced in the context of climate change adaptation (Brown et al., 2012). The term "decision scaling" refers to the use of a decision analytic framework to investigate the appropriate downscaling of climate information that is needed to best inform the decision at hand. Here downscaling refers to the identification of the relevant climatic information from the large ensemble of simulations provided by Global Circulation Models (GCMs). DS differs from current methodologies by utilizing the climate information in the latter stages of the process within a decision space to guide preferences among choices.
The analytical heart of DS is a kind of “stress test” to identify the factors, or combinations of factors, that cause the considered system to fail. Thus, in the first step of the analysis vulnerabilities are identified. These vulnerabilities can be defined in terms of those external factors and the thresholds at which they become problematic. The purpose is to identify the scenarios that are relevant to the considered decision, which serve as the basis for any necessary scientific investigation.
In the second step of the decision making process, future projections of climate are then used to characterise the relative likelihood or plausibility of those conditions occurring. By using climate projections only in the second step of the analysis, the initial findings are not diluted by the uncertainties inherent in the projections. In the third step of the analysis strategies can be planned to minimize the risk of the system.
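A toy sketch of this two-step logic in Python: first stress-test the system over a grid of climate perturbations to map the vulnerable region, then overlay projections to judge how plausible that region is. The response surface, failure threshold, and ensemble values are all invented for illustration.

```python
# Decision-scaling sketch (toy model): stress-test the system over a grid of
# climate perturbations FIRST, and use climate projections only afterwards
# to judge how plausible the failing region is.

def reliability(dP, dT):
    # hypothetical response surface: reliability falls when the climate gets
    # drier (dP < 0, change in precipitation, %) and warmer (dT > 0, deg C)
    return 0.95 + 0.01 * dP - 0.05 * dT

THRESHOLD = 0.78   # the system "fails" below this reliability

# Step 1: stress test over a grid of perturbations -> vulnerability domain
grid = [(dP, dT) for dP in range(-30, 31, 10) for dT in (0, 1, 2, 3)]
vulnerable = [(dP, dT) for dP, dT in grid if reliability(dP, dT) < THRESHOLD]

# Step 2: overlay projections (hypothetical GCM ensemble members) and count
# how many of them fall inside the vulnerable region
projections = [(-10, 2), (0, 1), (-20, 3), (5, 2)]
hits = sum(reliability(dP, dT) < THRESHOLD for dP, dT in projections)
print(len(vulnerable), hits)
```

Because the failure region is mapped before the projections are introduced, the stress-test result is unaffected by the uncertainties of the climate ensemble; the projections only inform the plausibility of the detected vulnerability domain.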
The result is a detected ‘vulnerability domain’ of key concerns that the planner or decision maker can use to isolate the key climate change projections against which the system should be strengthened; this differs from the bottom-up analysis featured in RDM (see Figure 2). This setup marks DS primarily as a risk assessment tool, with limited features developed for overall risk management.
The workflow of DS is compared with the one of RDM in Figure 2, where the workflow of the traditional top-down approach is also depicted.
Figure 2. Top-down decision approach versus DS and RDM bottom-up approaches – adapted from Roach (2016), Brown et al. (2012), Hall et al. (2012) and Lempert and Groves (2010). Images are taken from the following sources: NOAA Geophysical Fluid Dynamics Laboratory (GFDL) [Public domain], Mike Toews - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=15592558, James Mason [Public domain], Dan Perry [CC BY 3.0 (https://creativecommons.org/licenses/by/3.0)], Tommaso.sansone91 - Own work, CC0, https://commons.wikimedia.org/w/index.php?curid=77398870, Svjo - Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=20327097.
Multi-Criteria Decision Analysis (MCDA) is a mathematical optimization procedure involving more than one objective function to be optimized simultaneously. It is useful when decisions need to be taken in the presence of trade-offs between two or more conflicting objectives. MCDA solutions are evaluated against the given criteria and assigned scores according to their performance on each criterion. The target may be to produce an overall aggregated score, by weighing the criteria into one criterion or utility function. An alternative is to identify non-dominated solutions (or Pareto-efficient solutions).
As this usually implies a deterministic approach, robustness is sought by accounting for multiple objectives rather than by accounting for uncertainty (Ranger et al., 2010). As such, MCDA is often performed as a preliminary step to isolate candidate individual resource options or to pre-select superior strategies to be further tested with decision making models more suited to “deep” uncertainty. If uncertainty is accounted for, it is usually done by performing a sensitivity analysis of each criterion to uncertainty (Hyde and Maier, 2006; Hyde et al., 2005) or by placing joint probability distributions over all decision criteria (Dorini et al., 2011).
While it is presented here as an approach in its own right, MCDA can also be used within the previously reviewed methods as a means to assess the outcome of a policy with respect to alternatives and assigned scenarios. When combining the several criteria into one final score, weights need to be assigned to each criterion.
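Both MCDA targets can be sketched in a few lines of Python. The alternatives, scores, and weights below are hypothetical; the code first extracts the non-dominated (Pareto) set and then ranks the alternatives by a weighted aggregate score.

```python
# MCDA sketch: three hypothetical alternatives scored against two criteria
# (both to be maximised). First extract the non-dominated (Pareto) set,
# then rank the alternatives with a weighted aggregate score.

alternatives = {
    "dam":     (0.9, 0.2),   # (net benefit score, environmental score)
    "wells":   (0.6, 0.7),
    "nothing": (0.1, 0.6),   # dominated by "wells" on both criteria
}

def dominates(a, b):
    # a dominates b if it is at least as good everywhere and better somewhere
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

pareto = [name for name, sc in alternatives.items()
          if not any(dominates(other, sc)
                     for other in alternatives.values() if other != sc)]

weights = (0.6, 0.4)   # hypothetical stakeholder weights summing to 1
best = max(alternatives,
           key=lambda n: sum(w * s for w, s in zip(weights, alternatives[n])))
print(pareto, best)
```

Note how the two outputs answer different questions: the Pareto set only rules out alternatives that are worse on every criterion, while the weighted score forces a single ranking and therefore depends on the (subjective) weights.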
Analytic hierarchy process (AHP) is a structured technique for handling complex decisions. It was developed by Thomas L. Saaty in the 1970s and has been extensively studied and refined since then.
The AHP supports decision makers by first decomposing their decision problem into a hierarchy of more easily comprehended sub-problems, each of which can be analyzed independently. Once the hierarchy is structured, the decision makers evaluate its various elements by comparing them to each other two at a time, using pairwise comparison. The AHP converts preferences to numerical values that can be processed and compared over the entire range of the problem. A numerical weight or priority is derived for each alternative and element of the hierarchy, allowing diverse and often incommensurable elements to be compared to one another in a rational and consistent way. This capability distinguishes the AHP from other decision making techniques. In the final step of the process, numerical priorities are calculated for each of the decision alternatives. These numbers represent the alternatives' relative ability to achieve the decision goal, so they allow a straightforward consideration of the various courses of action.
AHP can account for uncertainty, for instance by evaluating alternatives with respect to several future scenarios. Therefore, if suitably applied, it may be considered a robust approach.
The first step in the analytic hierarchy process is to model the problem as a hierarchy. A hierarchy is a stratified system of ranking and organizing people, things, ideas, and so forth, where each element of the system, except for the top one, is subordinate to one or more other elements. Diagrams of hierarchies are often shaped roughly like pyramids, but other than having a single element at the top, there is nothing necessarily pyramid-shaped about a hierarchy.
An AHP hierarchy is a structured means of modeling the decision at hand. It consists of an overall goal, a group of options or alternatives for reaching the goal, and a group of factors or criteria that relate the alternatives to the goal. The criteria can be further broken down into subcriteria, and so on, in as many levels as the problem requires. The design of any AHP hierarchy depends not only on the nature of the problem at hand, but also on the knowledge, judgments, values, opinions, needs, wants, and so forth, of the participants in the decision-making process. Constructing a hierarchy typically involves significant discussion, research, and discovery by those involved.
Once the hierarchy has been constructed, the participants analyze it through a series of pairwise comparisons that derive numerical scales of measurement for the nodes. The criteria are pairwise compared against the goal for importance. The alternatives are pairwise compared against each of the criteria for preference. The comparisons are processed mathematically, and priorities are derived for each node.
Figure 3 reports an example of the application of AHP to a water resources management problem. In this case the decision is taken according to four criteria:
- Net benefit N;
- Environmental impact E;
- Impact on river flow regime R;
- CO2 emissions C.
Net benefit is evaluated over the lifetime of the alternative, by assessing the cost of the intervention and the benefit gained through, for instance, increased crop productivity, hydropower production and so forth. Environmental impact needs to be evaluated through a proper index, as does the impact on the flow regime. CO2 emissions can be quantitatively evaluated as those due to construction works, use of electricity and so forth. Measures of each criterion need to be rescaled to the same range of variability, to allow values to be combined.
It is interesting to observe that the above criteria are not rigorously independent as the environmental impact is related to the impact on the river flow regime. Introducing dependent indicators implies that a larger weight will be implicitly assigned in the decision process to the common driver of those indicators (degradation of the environment in this case). Such a situation may lead to reducing the transparency of the decision.
Figure 3. Example of a decision articulated according to the analytic hierarchy process – Adapted from Lou Sander - Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=12123183. Images are taken from the following sources: Public Domain, https://commons.wikimedia.org/w/index.php?curid=687414; Nigel Cox / Grand Union Canal (Wendover Arm) / CC BY-SA 2.0, https://commons.wikimedia.org/wiki/File:Grand_Union_Canal_(Wendover_Arm)_-_geograph.org.uk_-_148356.jpg#file.
Assessment and rescaling of criteria can be carried out through utility functions, which assign a real number in the range [0, 1] to each alternative in such a way that alternative a is assigned a utility greater than alternative b if, and only if, the individual prefers alternative a to alternative b. When assigning utilities, the following rules need to be followed:
- Utility 0 is assigned to the minimum of each criterion. For instance, for the net benefit one may assign utility 0 to the alternative that leads to the minimum benefit, or utility 0 can be assigned to a null benefit, depending on the outcome of the stakeholder discussion;
- Utility 1 is assigned to the maximum of each criterion;
- Increasing utility is assigned to criterion values corresponding to increasing convenience, as quantified by the related indicator.
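These rules are easy to implement for a linear utility function. In the sketch below, utility 0 goes to the alternative with the minimum net benefit and utility 1 to the maximum; the alternatives and their net benefits are hypothetical.

```python
# Utility rescaling sketch: linear utility mapping each criterion value onto
# [0, 1]; 0 goes to the least convenient value and 1 to the most convenient.

net_benefit = {"A": 120.0, "B": 80.0, "C": 200.0}   # hypothetical values

def linear_utility(value, worst, best):
    # for a "larger is better" indicator; for "smaller is better" criteria
    # (e.g. CO2 emissions), swap worst and best to make utility decreasing
    return (value - worst) / (best - worst)

lo, hi = min(net_benefit.values()), max(net_benefit.values())
utility = {k: linear_utility(v, lo, hi) for k, v in net_benefit.items()}
print(utility)   # B gets 0.0, C gets 1.0, A lands in between
```

Non-linear utility functions (e.g. concave ones expressing risk aversion) can be substituted for the linear mapping without changing the rest of the procedure.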
Figure 4 reports an example of utility function.
Figure 4. Example of a utility function. By Jüri Eintalu / CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0).
Finally, the overall score (utility) of each alternative is computed by averaging the scores corresponding to each criterion, using the weights W(1), W(2), W(3) and W(4). These can be computed through pairwise comparison.
When there are N criteria (four in the above case), the decision makers need to make N(N − 1)/2 pairwise comparisons among the criteria (six in the above case). In the above case we need to compare: (1) net benefit versus environmental impact, net benefit versus impact on the river flow regime, and net benefit versus CO2 emissions. Then, we need to compare (2) environmental impact versus impact on the river flow regime and environmental impact versus CO2 emissions. Finally, we have to compare (3) impact on the river flow regime versus CO2 emissions. For each comparison, one needs to judge the preference of one criterion over the other. The scale given in Figure 5 can be used to quantify preference.
Figure 5. Preference scale used in pairwise comparison. By Lou Sander - Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=12016729
The next step is to transfer the measures of preference to a matrix. For each pairwise comparison, the number representing the preference is positioned into the matrix in the corresponding position; the reciprocal of that number is put into the matrix in its symmetric position. For instance, for the above example the matrix resulting from pairwise comparison of the four criteria may be:
Table 1. Pairwise comparison matrix for the criteria net benefit (N), environmental impact (E), impact on the river flow regime (R) and CO2 emissions (C).
By processing the above matrix mathematically, weights for the alternatives with respect to the considered criteria can be derived. Mathematically speaking, weights are the values in the matrix's principal right eigenvector rescaled to give a sum of 1. They can be easily computed by using R, for instance.
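The same computation can be sketched in pure Python with power iteration; the 3×3 comparison matrix below is a hypothetical example, not the matrix of Table 1.

```python
# AHP weights sketch: the priority weights are the principal right eigenvector
# of the pairwise comparison matrix, rescaled to sum to 1. For these small
# matrices, plain power iteration is sufficient.

M = [
    [1.0,     3.0,     5.0],   # hypothetical judgments on the preference scale
    [1 / 3.0, 1.0,     3.0],   # lower triangle holds the reciprocals
    [1 / 5.0, 1 / 3.0, 1.0],
]

def principal_eigenvector(A, iters=100):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]          # rescale so the weights sum to 1
    return w

weights = principal_eigenvector(M)
print([round(w, 3) for w in weights])   # roughly [0.637, 0.258, 0.105]
```

In practice the same result is obtained with any linear algebra package (e.g. an eigendecomposition in R or NumPy); the hand-rolled iteration is shown only to make the computation explicit.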
It is important to check that the decision is consistent, which implies that the preferences expressed in each pairwise comparison are not contradicted by subsequent comparisons. For example, consistency implies that:
- if the decision maker says alternative 1 is equally important as alternative 2 (so the comparison matrix contains the unit value in the related pairwise comparisons), and
- alternative 2 is absolutely more important than alternative 3, then
- alternative 1 should also be absolutely more important than alternative 3.
Unfortunately, the decision maker is often not able to express consistent preferences in case of several alternatives. Then, a formal test of consistency is required.
In the ideal case of a fully consistent matrix, its maximum eigenvalue λmax is equal to the dimension N of the matrix itself (3 in the above three-alternative example). If the matrix is not fully consistent, a consistency index CI can be computed as:
CI = (λmax - N)/(N − 1)
Then, a consistency ratio CR can be computed as the ratio between the CI of the considered matrix and a random consistency index RI, which corresponds to the consistency of a randomly generated pairwise comparison matrix:
CR = CI/RI
Suggested values for RI are given in Table 2.
Table 2. RI values for different sizes N of the matrix.
If CR ≤ 0.1, the pairwise comparison matrix is considered consistent enough. If CR > 0.1, the comparison matrix should be improved.
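The consistency check can be sketched as follows; the 3×3 comparison matrix is a hypothetical example and the RI values are the commonly tabulated ones for small matrices.

```python
# Consistency-check sketch: estimate the maximum eigenvalue of a pairwise
# comparison matrix by power iteration, then compute CI = (lmax - N)/(N - 1)
# and CR = CI/RI.

M = [
    [1.0,     3.0,     5.0],   # hypothetical judgments
    [1 / 3.0, 1.0,     3.0],
    [1 / 5.0, 1 / 3.0, 1.0],
]
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # tabulated random indexes

def lambda_max(A, iters=100):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    v = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    return sum(v[i] / w[i] for i in range(n)) / n   # Rayleigh-type estimate

N = len(M)
lam = lambda_max(M)
CI = (lam - N) / (N - 1)
CR = CI / RI[N]
print(CR <= 0.1)   # True: this hypothetical matrix is consistent enough
```

If the test fails (CR > 0.1), the decision maker should revisit the most contradictory judgments and repeat the comparison before the weights are used.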
The same procedure needs to be repeated for the other criteria. Finally, a pairwise comparison needs to be carried out in order to assign the weights W to the criteria. In this case the matrix will have dimension N=4.
An example of application, to a different but conceptually similar problem, is given here. Another example of application is given by this paper (in Italian).
Ben-Haim, Y. (2001). Info-gap value of information in model updating. Mechanical Systems and Signal Processing, 15(3), 457-474.
Blöschl, G., Viglione, A., & Montanari, A. (2013). Emerging approaches to hydrological risk management in a changing world. In: Climate Vulnerability, 3-10, https://doi.org/10.1016/b978-0-12-384703-4.00505-0.
Bhave, A. G., Conway, D., Dessai, S., & Stainforth, D. A. (2016). Barriers and opportunities for robust decision making approaches to support climate change adaptation in the developing world. Climate Risk Management, 14, 1-10.
Brown, C., Ghile, Y., Laverty, M., & Li, K. (2012). Decision scaling: Linking bottom-up vulnerability analysis with climate projections in the water sector. Water Resources Research, 48(9).
Daron, J. (2015). Challenges in using a Robust Decision Making approach to guide climate change adaptation in South Africa. Climatic Change, 132(3), 459-473.
Dorini, G., Kapelan, Z., & Azapagic, A. (2011). Managing uncertainty in multiple-criteria decision making related to sustainability assessment. Clean Techn. Environ. Policy, 13(1), 133–139.
Groves, D. G., & Lempert, R. J. (2007). A new analytic method for finding policy-relevant scenarios. Global Environmental Change, 17(1), 73-85.
Hall, J. W., Lempert, R. J., Keller, K., Hackbarth, A., Mijere, C., & McInerney, D. J. (2012). Robust climate policies under uncertainty: A comparison of robust decision making and info-gap methods. Risk Anal., 32(10), 1657–1672.
Hine, D., & Hall, J. W. (2010). Information gap analysis of flood model uncertainties and regional frequency analysis. Water Resources Research, 46(1).
Hipel, K. W., & Ben-Haim, Y. (1999). Decision making in an uncertain world: Information-gap modeling in water resources management. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 29(4), 506-517.
Hyde, K. M., & Maier, H. R. (2006). Distance-based and stochastic uncertainty analysis for multi-criteria decision analysis in Excel using Visual Basic for Applications. Environmental Modelling & Software, 21(12), 1695-1710.
Hyde, K. M., Maier, H. R., & Colby, C. B. (2005). A distance-based uncertainty analysis approach to multi-criteria decision analysis for water resource decision making. Journal of Environmental Management, 77(4), 278–290.
Lempert, R. J. (2003). Shaping the next one hundred years: new methods for quantitative, long-term policy analysis. Rand Corporation.
Lempert, R. J., & Groves, D. G. (2010). Identifying and evaluating robust adaptive policy responses to climate change for water management agencies in the American west. Technol. Forecast. Soc., 77(6), 960–974.
McCarthy, M. A., & Lindenmayer, D. B. (2007). Info-gap decision theory for assessing the management of catchments for timber production and urban water supply. Environmental management, 39(4), 553-562.
Roach, T. P. (2016). Decision Making Methods for Water Resources Management Under Deep Uncertainty. Available on-line at https://ore.exeter.ac.uk/repository/bitstream/handle/10871/25756/RoachT.pdf?sequence=1 (last visited on May 21, 2019).
Ranger, N., Millner, A., Dietz, S., Fankhauser, S., Lopez, A., & Ruta, G. (2010). Adaptation in the UK: A decision-making process. Grantham Research Institute/CCCEP Policy Brief, London School of Economics and Political Science, London, UK.
Rosenhead, J., Elton, M., & Gupta, S. K. (1972). Robustness and optimality as criteria for strategic decisions. Journal of the Operational Research Society, 23(4), 413-431.
Last modified on April 27, 2020