Public Administration and Information Technology
Volume 10
Series Editor: Christopher G. Reddick, San Antonio, Texas, USA
More information about this series at http://www.springer.com/series/10796
Marijn Janssen • Maria A. Wimmer • Ameneh Deljoo, Editors
Policy Practice and Digital Science
Integrating Complex Systems, Social Simulation and Public Administration in Policy Research
Editors
Marijn Janssen, Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, The Netherlands
Ameneh Deljoo, Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, The Netherlands
Maria A. Wimmer Institute for Information Systems Research University of Koblenz-Landau Koblenz Germany
ISBN 978-3-319-12783-5    ISBN 978-3-319-12784-2 (eBook)
DOI 10.1007/978-3-319-12784-2
Public Administration and Information Technology
Library of Congress Control Number: 2014956771
Springer Cham Heidelberg New York London © Springer International Publishing Switzerland 2015 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface
The recent economic and financial crisis has heavily threatened European and other economies around the globe. The Eurozone crisis, the energy and climate change crises, the challenges of demographic change combined with high unemployment, the most recent conflicts in Ukraine and the Near East, and the Ebola virus disease in Africa threaten the wealth of our societies in different ways. The inability to predict or rapidly deal with dramatic changes and negative trends in our economies and societies can seriously hamper the wealth and prosperity of the European Union and its Member States, as well as global networks. These societal and economic challenges demonstrate an urgent need for more effective and efficient processes of governance and policymaking, specifically addressing crisis management and the reduction of economic and welfare impacts.
Therefore, investing in the exploitation of innovative information and communication technology (ICT) in support of good governance and policy modeling has become a major effort of the European Union to position itself and its Member States well in the global digital economy. In this realm, the European Union has laid out clear strategic policy objectives for 2020 in the Europe 2020 strategy1: in a changing world, the EU should become a smart, sustainable, and inclusive economy. These three mutually reinforcing priorities should help the EU and the Member States deliver high levels of employment, productivity, and social cohesion. Concretely, the Union has set five ambitious objectives—on employment, innovation, education, social inclusion, and climate/energy—to be reached by 2020. Along with this, Europe 2020 established four priority areas—smart growth, sustainable growth, inclusive growth, and, added later, a strong and effective system of economic governance—designed to help Europe emerge from the crisis stronger and to coordinate policy actions between the EU and national levels.
To specifically support European research in strengthening capacities, in overcoming fragmented research in the field of policymaking, and in advancing solutions for
1 Europe 2020 http://ec.europa.eu/europe2020/index_en.htm
ICT supported governance and policy modeling, the European Commission has co-funded an international support action called eGovPoliNet2. The overall objective of eGovPoliNet was to create an international, cross-disciplinary community of researchers working on ICT solutions for governance and policy modeling. In turn, the aim of this community was to advance and sustain research and to share the insights gleaned from experiences in Europe and globally. To achieve this, eGovPoliNet established a dialogue, brought together experts from distinct disciplines, and collected and analyzed knowledge assets (i.e., theories, concepts, solutions, findings, and lessons on ICT solutions in the field) from different research disciplines. It built on case material accumulated by leading actors coming from distinct disciplinary backgrounds and brought together the innovative knowledge in the field. Tools, methods, and cases were drawn from the academic community, the ICT sector, specialized policy consulting firms, as well as from policymakers and governance experts. These results were assembled in a knowledge base and analyzed in order to produce comparative analyses and descriptions of cases, tools, and scientific approaches, enriching a common knowledge base accessible via www.policy-community.eu.
This book, entitled "Policy Practice and Digital Science—Integrating Complex Systems, Social Simulation, and Public Administration in Policy Research," is one of the exciting results of the activities of eGovPoliNet, fusing community building activities with activities of knowledge analysis. It documents the findings of comparative analyses and brings in the experiences of experts from academia as well as case descriptions from all over the globe. Specifically, it demonstrates how the explosive growth in data, computational power, and social media creates new opportunities for policymaking and research. The book provides a first comprehensive look at how to take advantage of developments in the digital world with new approaches, concepts, instruments, and methods to deal with societal and computational complexity. This requires the knowledge traditionally found in different disciplines, including public administration, policy analyses, information systems, complex systems, and computer science, to work together in a multidisciplinary fashion and to share approaches. This book provides the foundation for strongly multidisciplinary research, in which the various developments and disciplines work together from a comprehensive and holistic policymaking perspective. A wide range of aspects of social and professional networking and multidisciplinary constituency building along the axes of technology, participative processes, governance, policy modeling, social simulation, and visualization are tackled in the 19 papers.
With this book, the project makes an effective contribution to the overall objectives of the Europe 2020 strategy by providing a better understanding of different approaches to ICT enabled governance and policy modeling, and by overcoming the fragmented research of the past. This book provides impressive insights into various theories, concepts, and solutions of ICT supported policy modeling and how stakeholders can be more actively engaged in public policymaking. It draws conclusions
2 eGovPoliNet is cofunded under FP 7, Call identifier FP7-ICT-2011-7, URL: www.policy-community.eu
on how joint multidisciplinary research can deliver more effective and resilient findings for better predicting dramatic changes and negative trends in our economies and societies.
It is my great pleasure to provide the preface to the book resulting from the eGovPoliNet project. This book presents stimulating research by researchers from all over Europe and beyond. Congratulations to the project partners and to the authors. Enjoy reading!
Thanassis Chrissafis
Project Officer of eGovPoliNet
European Commission, DG CNECT, Excellence in Science, Digital Science
Contents
1 Introduction to Policy-Making in the Digital Age . . . . . . . . . . . . . . . . . 1 Marijn Janssen and Maria A. Wimmer
2 Educating Public Managers and Policy Analysts in an Era of Informatics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 Christopher Koliba and Asim Zia
3 The Quality of Social Simulation: An Example from Research Policy Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 Petra Ahrweiler and Nigel Gilbert
4 Policy Making and Modelling in a Complex World . . . . . . . . . . . . . . . . 57 Wander Jager and Bruce Edmonds
5 From Building a Model to Adaptive Robust Decision Making Using Systems Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 Erik Pruyt
6 Features and Added Value of Simulation Models Using Different Modelling Approaches Supporting Policy-Making: A Comparative Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 Dragana Majstorovic, Maria A. Wimmer, Roy Lay-Yee, Peter Davis and Petra Ahrweiler
7 A Comparative Analysis of Tools and Technologies for Policy Making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 Eleni Kamateri, Eleni Panopoulou, Efthimios Tambouris, Konstantinos Tarabanis, Adegboyega Ojo, Deirdre Lee and David Price
8 Value Sensitive Design of Complex Product Systems . . . . . . . . . . . . . . . 157 Andreas Ligtvoet, Geerten van de Kaa, Theo Fens, Cees van Beers, Paulien Herder and Jeroen van den Hoven
9 Stakeholder Engagement in Policy Development: Observations and Lessons from International Experience . . . . . . . . . . . . . . . . . . . . . . 177 Natalie Helbig, Sharon Dawes, Zamira Dzhusupova, Bram Klievink and Catherine Gerald Mkude
10 Values in Computational Models Revalued . . . . . . . . . . . . . . . . . . . . . . . 205 Rebecca Moody and Lasse Gerrits
11 The Psychological Drivers of Bureaucracy: Protecting the Societal Goals of an Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221 Tjeerd C. Andringa
12 Active and Passive Crowdsourcing in Government . . . . . . . . . . . . . . . . 261 Euripidis Loukis and Yannis Charalabidis
13 Management of Complex Systems: Toward Agent-Based Gaming for Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291 Wander Jager and Gerben van der Vegt
14 The Role of Microsimulation in the Development of Public Policy . . . 305 Roy Lay-Yee and Gerry Cotterell
15 Visual Decision Support for Policy Making: Advancing Policy Analysis with Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321 Tobias Ruppert, Jens Dambruch, Michel Krämer, Tina Balke, Marco Gavanelli, Stefano Bragaglia, Federico Chesani, Michela Milano and Jörn Kohlhammer
16 Analysis of Five Policy Cases in the Field of Energy Policy . . . . . . . . . 355 Dominik Bär, Maria A. Wimmer, Jozef Glova, Anastasia Papazafeiropoulou and Laurence Brooks
17 Challenges to Policy-Making in Developing Countries and the Roles of Emerging Tools, Methods and Instruments: Experiences from Saint Petersburg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379 Dmitrii Trutnev, Lyudmila Vidyasova and Andrei Chugunov
18 Sustainable Urban Development, Governance and Policy: A Comparative Overview of EU Policies and Projects . . . . . . . . . . . . . 393 Diego Navarra and Simona Milio
19 eParticipation, Simulation Exercise and Leadership Training in Nigeria: Bridging the Digital Divide . . . . . . . . . . . . . . . . . . . . . . . . . . . 417 Tanko Ahmed
Contributors
Tanko Ahmed National Institute for Policy and Strategic Studies (NIPSS), Jos, Nigeria
Petra Ahrweiler EA European Academy of Technology and Innovation Assessment GmbH, Bad Neuenahr-Ahrweiler, Germany
Tjeerd C. Andringa University College Groningen, Institute of Artificial Intelligence and Cognitive Engineering (ALICE), University of Groningen, Groningen, The Netherlands
Tina Balke University of Surrey, Surrey, UK
Dominik Bär University of Koblenz-Landau, Koblenz, Germany
Cees van Beers Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, The Netherlands
Stefano Bragaglia University of Bologna, Bologna, Italy
Laurence Brooks Brunel University, Uxbridge, UK
Yannis Charalabidis University of the Aegean, Samos, Greece
Federico Chesani University of Bologna, Bologna, Italy
Andrei Chugunov ITMO University, St. Petersburg, Russia
Gerry Cotterell Centre of Methods and Policy Application in the Social Sciences (COMPASS Research Centre), University of Auckland, Auckland, New Zealand
Jens Dambruch Fraunhofer Institute for Computer Graphics Research, Darmstadt, Germany
Peter Davis Centre of Methods and Policy Application in the Social Sciences (COMPASS Research Centre), University of Auckland, Auckland, New Zealand
Sharon Dawes Center for Technology in Government, University at Albany, Albany, New York, USA
Zamira Dzhusupova Department of Public Administration and Development Management, United Nations Department of Economic and Social Affairs (UNDESA), New York, USA
Bruce Edmonds Manchester Metropolitan University, Manchester, UK
Theo Fens Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, The Netherlands
Marco Gavanelli University of Ferrara, Ferrara, Italy
Lasse Gerrits Department of Public Administration, Erasmus University Rotterdam, Rotterdam, The Netherlands
Nigel Gilbert University of Surrey, Guildford, UK
Jozef Glova Technical University Kosice, Kosice, Slovakia
Natalie Helbig Center for Technology in Government, University at Albany, Albany, New York, USA
Paulien Herder Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, The Netherlands
Jeroen van den Hoven Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, The Netherlands
Wander Jager Groningen Center of Social Complexity Studies, University of Groningen, Groningen, The Netherlands
Marijn Janssen Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, The Netherlands
Geerten van de Kaa Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, The Netherlands
Eleni Kamateri Information Technologies Institute, Centre for Research & Technology—Hellas, Thessaloniki, Greece
Bram Klievink Faculty of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands
Jörn Kohlhammer GRIS, TU Darmstadt & Fraunhofer IGD, Darmstadt, Germany
Christopher Koliba University of Vermont, Burlington, VT, USA
Michel Krämer Fraunhofer Institute for Computer Graphics Research, Darmstadt, Germany
Roy Lay-Yee Centre of Methods and Policy Application in the Social Sciences (COMPASS Research Centre), University of Auckland, Auckland, New Zealand
Deirdre Lee INSIGHT Centre for Data Analytics, NUIG, Galway, Ireland
Andreas Ligtvoet Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, The Netherlands
Euripidis Loukis University of the Aegean, Samos, Greece
Dragana Majstorovic University of Koblenz-Landau, Koblenz, Germany
Michela Milano University of Bologna, Bologna, Italy
Simona Milio London School of Economics, Houghton Street, London, UK
Catherine Gerald Mkude Institute for IS Research, University of Koblenz-Landau, Koblenz, Germany
Rebecca Moody Department of Public Administration, Erasmus University Rotterdam, Rotterdam, The Netherlands
Diego Navarra Studio Navarra, London, UK
Adegboyega Ojo INSIGHT Centre for Data Analytics, NUIG, Galway, Ireland
Eleni Panopoulou Information Technologies Institute, Centre for Research & Technology—Hellas, Thessaloniki, Greece
Anastasia Papazafeiropoulou Brunel University, Uxbridge, UK
David Price Thoughtgraph Ltd, Somerset, UK
Erik Pruyt Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, The Netherlands; Netherlands Institute for Advanced Study, Wassenaar, The Netherlands
Tobias Ruppert Fraunhofer Institute for Computer Graphics Research, Darmstadt, Germany
Efthimios Tambouris Information Technologies Institute, Centre for Research & Technology—Hellas, Thessaloniki, Greece; University of Macedonia, Thessaloniki, Greece
Konstantinos Tarabanis Information Technologies Institute, Centre for Research & Technology—Hellas, Thessaloniki, Greece; University of Macedonia, Thessaloniki, Greece
Dmitrii Trutnev ITMO University, St. Petersburg, Russia
Gerben van der Vegt Faculty of Economics and Business, University of Groningen, Groningen, The Netherlands
Lyudmila Vidyasova ITMO University, St. Petersburg, Russia
Maria A. Wimmer University of Koblenz-Landau, Koblenz, Germany
Asim Zia University of Vermont, Burlington, VT, USA
Chapter 1 Introduction to Policy-Making in the Digital Age
Marijn Janssen and Maria A. Wimmer
We are running the 21st century using 20th century systems on top of 19th century political structures. (John Pollock, contributing editor, MIT Technology Review)
Abstract The explosive growth in data, computational power, and social media creates new opportunities for innovating governance and policy-making. These information and communications technology (ICT) developments affect all parts of the policy-making cycle and result in drastic changes in the way policies are developed. To take advantage of these developments in the digital world, new approaches, concepts, instruments, and methods are needed that are able to deal with societal complexity and uncertainty. This field of research is sometimes depicted as e-government policy, e-policy, policy informatics, or data science. Advancing our knowledge demands that different scientific communities collaborate to create practice-driven knowledge. Policy-making in the digital age requires disciplines such as complex systems, social simulation, and public administration to be combined.
1.1 Introduction
Policy-making and its subsequent implementation is necessary to deal with societal problems. Policy interventions can be costly, have long-term implications, affect groups of citizens or even the whole country, and cannot be easily undone or are even irreversible. New information and communications technology (ICT) and models can help to improve the quality of policy-making. In particular, the explosive growth in data, computational power, and social media creates new opportunities for innovating the processes and solutions of ICT-based policy-making and research. To
M. Janssen (✉) Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, The Netherlands e-mail: m.f.w.h.a.janssen@tudelft.nl
M. A. Wimmer University of Koblenz-Landau, Koblenz, Germany
© Springer International Publishing Switzerland 2015 M. Janssen et al. (eds.), Policy Practice and Digital Science, Public Administration and Information Technology 10, DOI 10.1007/978-3-319-12784-2_1
take advantage of these developments in the digital world, new approaches, concepts, instruments, and methods are needed which are able to deal with societal and computational complexity. This requires the use of knowledge which is traditionally found in different disciplines, including (but not limited to) public administration, policy analyses, information systems, complex systems, and computer science. All these knowledge areas are needed for policy-making in the digital age. The aim of this book is to provide a foundation for this new interdisciplinary field in which various traditional disciplines are blended.
Both policy-makers and those in charge of policy implementations acknowledge that ICT is becoming more and more important and is changing the policy-making process, resulting in a next generation of policy-making based on ICT support. The field of policy-making is changing, driven by developments such as open data, computational methods for processing data, opinion mining, simulation, and visualization of rich data sets, all combined with public engagement, social media, and participatory tools. In this respect, Web 2.0 and even Web 3.0 point to the specific applications of social networks and semantically enriched and linked data which are important for policy-making. In policy-making, vast amounts of data are used for making predictions and forecasts. This should result in improved outcomes of policy-making.
Policy-making is confronted with increasing complexity and uncertainty of outcomes, which results in a need for policy models that are able to deal with this. To improve the validity of their models, policy-makers harvest data to generate evidence. Furthermore, they are improving their models to capture complex phenomena and to deal with uncertainty and limited or incomplete information. Despite all these efforts, uncertainty often remains concerning the outcomes of policy interventions. Given this uncertainty, multiple scenarios are often developed to show alternative outcomes and impacts. A condition for this is the visualization of policy alternatives and their impact. Visualization can ensure the involvement of nonexperts and help communicate alternatives. Furthermore, games can be used to let people gain insight into what can happen, given a certain scenario. Games allow persons to interact and to experience what happens in the future based on their interventions.
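The idea of developing multiple scenarios to show alternative outcomes can be sketched in a few lines of code. The indicator, starting value, horizon, and scenario parameters below are all invented for illustration; they do not come from any real policy model.

```python
# Hypothetical sketch: projecting one policy indicator under three scenarios.
# All parameter values are illustrative, not drawn from any real policy model.

def project_unemployment(rate, years, annual_change):
    """Apply a constant annual change to an unemployment rate over several years."""
    trajectory = [rate]
    for _ in range(years):
        rate = max(0.0, rate + annual_change)  # a rate cannot drop below zero
        trajectory.append(round(rate, 2))
    return trajectory

# Each scenario encodes one assumption about the intervention's annual effect.
scenarios = {
    "optimistic": -0.5,   # intervention works well: rate falls 0.5 points/year
    "baseline": -0.2,
    "pessimistic": 0.3,   # intervention fails: rate keeps rising
}

for name, change in scenarios.items():
    print(name, project_unemployment(8.0, 5, change))
```

Plotting the three trajectories side by side is exactly the kind of visualization of policy alternatives the paragraph above refers to: nonexperts can compare the curves without understanding the underlying model.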
Policy-makers are often faced with conflicting solutions to complex problems, making it necessary for them to test their assumptions, interventions, and resolutions. For this reason, policy-making organizations introduce platforms that facilitate policy-making and citizen engagement and enable the processing of large volumes of data. Various participative platforms have been developed by government agencies (e.g., De Reuver et al. 2013; Slaviero et al. 2010; Welch 2012). Platforms can be viewed as a kind of regulated environment that enables developers, users, and others to interact with each other and to share data, services, and applications; enables governments to more easily monitor what is happening; and facilitates the development of innovative solutions (Janssen and Estevez 2013). Platforms should not only provide support for complex policy deliberations with citizens but should also bring together policy-modelers, developers, policy-makers, and other stakeholders involved in policy-making. In this way platforms provide an information-rich, interactive
environment that brings together relevant stakeholders and in which complex phe- nomena can be modeled, simulated, visualized, discussed, and even the playing of games can be facilitated.
1.2 Complexity and Uncertainty in Policy-Making
Policy-making is driven by the need to solve societal problems and should result in interventions to solve these problems. Examples of societal problems are unemployment, pollution, water quality, safety, criminality, well-being, health, and immigration. Policy-making is an ongoing process in which issues are recognized as a problem, alternative courses of action are formulated, and policies are adopted, implemented, executed, and evaluated (Stewart et al. 2007). Figure 1.1 shows the typical stages of policy formulation, implementation, execution, and enforcement and assessment. This process should not be viewed as linear, as many interactions are necessary, including interactions with all kinds of stakeholders. A vast number of stakeholders are always involved in policy-making processes, which makes policy-making complex.
Once a societal need is identified, a policy has to be formulated. Politicians, members of parliament, executive branches, courts, and interest groups may be involved in these formulations. Often contradictory proposals are made, and the impact of a proposal is difficult to determine as data is missing, models cannot
[Fig. 1.1 Overview of the policy cycle and stakeholders: the stages of policy formulation, policy implementation, policy execution, and policy enforcement and assessment, surrounded by stakeholders such as citizens, politicians, policy-makers, administrative organizations, businesses, inspection and enforcement agencies, and experts.]
capture the complexity, and the results of policy models are difficult to interpret and may even be interpreted in opposing ways. This is further complicated because some proposals might be good but cannot be implemented or are too costly to implement. There is large uncertainty concerning the outcomes.
Policy implementation is done by organizations other than those that formulated the policy. They often have to interpret the policy and make implementation decisions. Sometimes IT can block quick implementation, as systems have to be changed. Although policy-making is the domain of the government, private organizations can be involved to some extent, in particular in the execution of policies.
Once all things are ready and decisions are made, policies need to be executed. During execution, small changes are typically made to fine-tune the policy formulation, implementation decisions might turn out to be more difficult to realize, policies might bring benefits other than those intended, execution costs might be higher, and so on. Typically, execution is continually changing. Assessment is part of the policy-making process, as it is necessary to ensure that the policy execution solves the initial societal problem. Policies might become obsolete, might not work, might have unintended effects (like creating bureaucracy), might lose their support among elected officials, or other, better alternatives might pop up.
Policy-making is a complex process in which many stakeholders play a role. In the various phases of policy-making, different actors are dominant. Figure 1.1 shows only some of the actors that might be involved; many others are not included in the figure. The involvement of so many actors results in fragmentation, and actors are often not even aware of the decisions made by other actors. This makes it difficult to manage a policy-making process, as each actor has its own goals and might be self-interested.
Public values (PVs) are a way to try to manage complexity and give some guidance. Most policies are made to adhere to certain values. Public value management (PVM) represents the paradigm of achieving PVs as the primary objective (Stoker 2006). PVM refers to the continuous assessment of the actions performed by public officials to ensure that these actions result in the creation of PV (Moore 1995). Public servants are not only responsible for following the right procedure; they also have to ensure that PVs are realized. For example, civil servants should ensure that garbage is collected. The procedure that garbage is collected once a week is secondary. If it is necessary to collect garbage more (or less) frequently to ensure a healthy environment, then this should be done. The role of managers is not only to ensure that procedures are followed; they should also be custodians of public assets and maximize PV.
There exists a wide variety of PVs (Jørgensen and Bozeman 2007). PVs can be long-lasting or might be driven by contemporary politics. For example, equal access is a typical long-lasting value, whereas providing support for students at universities is contemporary, as politicians might give more, less, or no support to students. PVs differ over time, but the emphasis on particular values also differs across the policy-making cycle, as shown in Fig. 1.2. In this figure some of the values presented by Jørgensen and Bozeman (2007) are mapped onto the four policy-making stages. Depending on the problem at hand, other values not included in this figure might play a role.
1 Introduction to Policy-Making in the Digital Age 5
[Fig. 1.2 Public values in the policy cycle: values such as will of the people, listening, citizen involvement, evidence-based, protection of individual rights, accountability, transparency, efficiency, responsiveness, public interest, equal access, balancing of interests, robustness, honesty, fairness, timeliness, reliability, and flexibility, mapped onto the four stages of policy formulation, implementation, execution, and enforcement and assessment.]
Policy is often formulated by politicians in consultation with experts. In the PVM paradigm, public administrations aim at creating PVs for society and citizens. This suggests a shift from merely talking about what citizens expect toward actually creating PV. In this view, public officials should focus on collaborating and creating a dialogue with citizens in order to determine what constitutes PV.
1.3 Developments
There is an infusion of technology that changes policy processes at both the individual and the group level. A number of developments influence the traditional way of policy-making, including social media as a means to interact with the public (Bertot et al. 2012), blogs (Coleman and Moss 2008), open data (Janssen et al. 2012; Zuiderwijk and Janssen 2013), freedom of information (Burt 2011), the wisdom of the crowds (Surowiecki 2004), open collaboration and transparency in policy simulation (Wimmer et al. 2012a, b), and agent-based simulation and hybrid modeling techniques (Koliba and Zia 2012), all of which open new ways of innovative policy-making. Whereas traditional policy-making is executed by experts, the public is now involved to fulfill the requirements of good governance according to open government principles.
Also, the skills and capabilities of crowds can be explored and can lead to better and more transparent democratic policy decisions. All these developments can be used to enhance citizens' engagement and to better involve citizens in the policy-making process. We want to emphasize three important developments.
1.3.1 The Availability of Big and Open Linked Data (BOLD)
Policy-making heavily depends on data about existing policies and situations to make decisions. Both public and private organizations are opening their data for use by others. Although information could be requested in the past, governments have changed their strategy toward actively publishing open data in formats that are readily and easily accessible (for example, European_Commission 2003; Obama 2009). Multiple perspectives are needed to make use of and stimulate new practices based on open data (Zuiderwijk et al. 2014). New applications and innovations can be based solely on open data, but often open data are enriched with data from other sources. As data can be generated and provided in huge amounts, specific needs for processing, curation, linking, visualization, and maintenance appear. The latter is often denoted as big data, in which value is generated by combining different datasets (Janssen et al. 2014). Current advances in processing power and memory allow for the processing of huge amounts of data. BOLD allows for analyzing policies and using these data in models to better predict the effects of new policies.
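The core operation behind "value is generated by combining different datasets" is linking records on a shared key. The sketch below does this with plain Python dictionaries; the datasets, region codes, and field names are invented for illustration, whereas real open data would be fetched from a government portal, typically as CSV or JSON.

```python
# Minimal sketch of linking two open datasets on a shared key.
# The datasets and field names are invented; real open data would come
# from a portal such as a national statistics office.

unemployment = [
    {"region": "NL-ZH", "unemployment_pct": 6.1},
    {"region": "NL-NH", "unemployment_pct": 5.4},
]
education = [
    {"region": "NL-ZH", "tertiary_pct": 39.0},
    {"region": "NL-NH", "tertiary_pct": 43.5},
]

# Index one dataset by the linking key, then enrich the other with it.
edu_by_region = {row["region"]: row for row in education}
linked = [
    {**row, "tertiary_pct": edu_by_region[row["region"]]["tertiary_pct"]}
    for row in unemployment
    if row["region"] in edu_by_region
]

for row in linked:
    print(row)
```

At scale the same join is done with a data frame library or a triple store (the "linked" in BOLD), but the principle is unchanged: a shared identifier makes separately published datasets usable together.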
1.3.2 Rise of Hybrid Simulation Approaches
In policy implementation and execution, many actors are involved and a huge number of factors influence the outcomes; this complicates the prediction of policy outcomes. Simulation models are capable of capturing the interdependencies between the many factors and can include stochastic elements to deal with variations and uncertainties. Simulation is often used in policy-making as an instrument to gain insight into the impact of possible policies, which often results in new ideas for policies. Simulation allows decision-makers to understand the essence of a policy, to identify opportunities for change, and to evaluate the effect of proposed changes on key performance indicators (Banks 1998; Law and Kelton 1991). Simulation heavily depends on data and as such can benefit from big and open data.
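A minimal, hypothetical illustration of such stochastic elements is a Monte Carlo experiment: sample many runs of a model whose intervention effect is uncertain, then summarize the distribution of outcomes rather than a single prediction. Every number below (baseline, effect size, noise, horizon) is invented for illustration.

```python
# Hypothetical Monte Carlo sketch: the effect of a policy intervention is
# uncertain, so we sample many runs and inspect the outcome distribution.
import random
import statistics

def simulate_once(rng):
    """One run: a baseline indicator evolves for ten years under an
    uncertain intervention effect plus year-to-year noise."""
    outcome = 100.0                # illustrative baseline indicator value
    effect = rng.gauss(-2.0, 1.0)  # uncertain annual effect of the policy
    for _ in range(10):            # ten simulated years
        outcome += effect + rng.gauss(0.0, 0.5)  # effect plus yearly noise
    return outcome

rng = random.Random(42)            # fixed seed so the experiment is reproducible
runs = [simulate_once(rng) for _ in range(1000)]
runs_sorted = sorted(runs)

print("mean outcome:", round(statistics.mean(runs), 1))
print("90% interval:", round(runs_sorted[50], 1), "to", round(runs_sorted[949], 1))
```

Reporting an interval instead of a point estimate is precisely how simulation communicates the uncertainty of policy outcomes to decision-makers.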
Simulation models should capture the essential aspects of reality. Simulation models do not rely heavily on mathematical abstraction and are therefore suitable for modeling complex systems (Pidd 1992). The development of a model can itself raise discussions about what to include and which factors have influence, in this way contributing to a better understanding of the situation at hand. Furthermore, experimentation using models allows one to investigate different settings and the influence of different scenarios over time on the policy outcomes.
1 Introduction to Policy-Making in the Digital Age 7
The effects of policies are hard to predict, and dealing with uncertainty is a key aspect of policy modeling. Statistical representation of real-world uncertainties is an integral part of simulation models (Law and Kelton 1991). The dynamics associated with the many factors affecting policy-making, the complexity associated with the interdependencies between individual parts, and the stochastic elements associated with the randomness and unpredictable behavior of transactions complicate the simulations. Computer simulations for examining, explaining, and predicting social processes and relationships, as well as for measuring the possible impact of policies, have become an important part of policy-making. Traditional models are not able to address all aspects of complex policy interactions, which indicates the need for hybrid simulation models consisting of a combinatory set of models built on different modeling theories (Koliba and Zia 2012). In policy-making, multiple models may be developed, but it is also possible to combine various types of simulation in a single model. For this purpose, agent-based modeling and simulation approaches can be used, as these allow for combining different types of models in a single simulation.
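As an illustration of how stochastic, agent-based elements enter such simulations, the following minimal sketch models agents who adopt a policy under a mix of incentive and peer influence. The subsidy level, peer weight, and starting compliance rate are hypothetical values chosen for illustration, not figures drawn from the works cited above.

```python
import random

# Minimal agent-based sketch (illustrative only): each step, non-compliant
# agents may adopt a policy with a probability that mixes a direct incentive
# (subsidy) and peer pressure (share of already-compliant agents).

def simulate(n_agents=1000, steps=20, subsidy=0.3, peer_weight=0.5, seed=None):
    rng = random.Random(seed)
    # Hypothetical starting point: 10 % of agents comply from the outset.
    compliant = [rng.random() < 0.1 for _ in range(n_agents)]
    for _ in range(steps):
        share = sum(compliant) / n_agents  # current compliance rate
        for i in range(n_agents):
            p = subsidy + peer_weight * share  # adoption probability this step
            if not compliant[i] and rng.random() < p:
                compliant[i] = True
    return sum(compliant) / n_agents  # final compliance rate

# Because the model is stochastic, policy effects are compared across
# replications rather than read off a single run.
runs = [simulate(seed=s) for s in range(30)]
print(min(runs), max(runs))
```

Running the model with different `subsidy` values and comparing the distributions of outcomes is a toy version of the kind of experimentation with scenarios described above.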
1.3.3 Ubiquitous User Engagement
Efforts to design public policies are confronted with considerable complexity, in which (1) a large number of potentially relevant factors needs to be considered, (2) a vast amount of data needs to be processed, (3) a large degree of uncertainty may exist, and (4) rapidly changing circumstances need to be dealt with. Utilizing computational methods and various types of simulation and modeling methods is often key to solving these kinds of problems (Koliba and Zia 2012). The open data and social media movements are making large quantities of new data available. At the same time, enhancements in computational power have expanded the repertoire of instruments and tools available for studying dynamic systems and their interdependencies. In addition, sophisticated techniques for data gathering, visualization, and analysis have expanded our ability to understand, display, and disseminate complex, temporal, and spatial information to diverse audiences. These problems can only be addressed from a complexity science perspective and with a multitude of views and contributions from different disciplines. Insights and methods of complexity science should be applied to help policy-makers as they tackle societal problems in policy areas such as environmental protection, economics, energy, security, or public safety and health. This demands user involvement, which can be supported by visualization techniques and encouraged by employing (serious) games. These methods can show what hypothetically will happen when certain policies are implemented.
8 M. Janssen and M. A. Wimmer
1.4 Combining Disciplines in E-government Policy-Making
This new field has been shaped under various names, including e-policy-making, digital policy science, computational intelligence, digital sciences, data sciences, and policy informatics (Dawes and Janssen 2013). The essence of this field is that it
1. Is practice-driven
2. Employs modeling techniques
3. Needs knowledge coming from various disciplines
4. Is focused on governance and policy-making
This field is practice-driven in taking as a starting point the public policy problem and defining what information is relevant for addressing the problem under study. This requires an understanding of public administration and policy-making processes. Next, it is key to determine how to obtain, store, retrieve, process, model, and interpret the results. This is the field of e-participation, policy-modeling, social simulation, and complex systems. Finally, it should be agreed upon how to present and disseminate the results so that other researchers, decision-makers, and practitioners can use them. This requires in-depth knowledge of practice, of the structures of public administration, of constitutions and political cultures, and of policy-making processes.
Based on these ideas, the FP7 project eGovPoliNet has created an international community in ICT solutions for governance and policy-modeling. The "policy-making 2.0" LinkedIn community has a large number of members from different disciplines and backgrounds, representing practice and academia. This book is the product of this project, in which a large number of persons from various disciplines and representing a variety of communities were involved. The book shows experiences and advances in various areas of policy-making. Furthermore, it contains comparative analyses and descriptions of cases, tools, and scientific approaches from the knowledge base created in this project. Through this book, practices and knowledge in this field are shared among researchers. Furthermore, this book provides the foundations in this area. The covered expertise includes a wide range of aspects of social and professional networking and multidisciplinary constituency building along the axes of technology, participative processes, governance, policy-modeling, social simulation, and visualization. In this way, eGovPoliNet has advanced the way research, development, and practice are performed worldwide in using ICT solutions for governance and policy-modeling.
In Europe the term "e-government policy," or "e-policy" for short, is often used to refer to these types of phenomena, whereas in the USA the term "policy informatics" is more common. This parallels the use of the term "digital government" in the USA, where Europe prefers "e-government." Policy informatics is defined as "the study of how information is leveraged and efforts are coordinated towards solving complex public policy problems" (Krishnamurthy et al. 2013, p. 367). These authors view policy informatics as an emerging research space to navigate through the challenges of complex layers of uncertainty within
governance processes. The policy informatics community has created a Listserv called the Policy Informatics Network (PIN-L).
E-government policy-making is closely connected to "data science." Data science is the ability to find answers in large volumes of (un)structured data (Davenport and Patil 2012). Data scientists find and interpret rich data sources, manage large amounts of data, create visualizations to aid in understanding data, build mathematical models using the data, and present and communicate the data insights and findings to specialists and scientists in their team and, if required, to a nonexpert audience. These activities are at the heart of policy-making.
1.5 Overview of Chapters
In total, 54 different authors were involved in the creation of this book. Some chapters have a single author, but most chapters have multiple authors. The authors represent a wide range of disciplines, as shown in Fig. 1.2. The focus has been on targeting five communities that make up the core field for ICT-enabled policy-making. These communities include e-government/e-participation, information systems, complex systems, public administration and policy research, and social simulation. The combination of these disciplines and communities is necessary to tackle policy problems in new ways. A sixth category was added for authors not belonging to any of these communities, such as philosophy and economics. Figure 1.3 shows that the authors are fairly evenly distributed among the communities, although this is less so at the level of individual chapters. Most of the authors can be classified as belonging to the e-government/e-participation community, which is by nature interdisciplinary.
Foundation The first part deals with the foundations of the book. In their Chap. 2, Chris Koliba and Asim Zia start with best practices to be incorporated in public administration educational programs to embrace the new developments sketched in
Fig. 1.3 Overview of the disciplinary background of the authors. Legend: EGOV; IS; complex systems; public administration and policy research; social simulation; other (philosophy, energy, economics)
this chapter. They identify two types of public servants that need to be educated: the policy informatics-savvy public manager and the policy informatics analyst. This chapter can be used as a basis for adopting interdisciplinary approaches and including policy informatics in the public administration curriculum.
Petra Ahrweiler and Nigel Gilbert discuss the need for quality in simulation modeling in their Chap. 3. Developing a simulation is always based on certain assumptions, and a model is only as good as the developer makes it. They propose that the user community assess the quality of a policy-modeling exercise. Communicative skills, patience, willingness to compromise on both sides, and motivation to bridge the formal world of modelers and the narrative world of policy-makers are suggested as key competences. The authors argue that user involvement is necessary in all stages of model development.
Wander Jager and Bruce Edmonds argue in their Chap. 4 that, due to their complexity, many social systems are unpredictable by nature. They discuss how some insights and tools from complexity science can be used in policy-making. In particular, they discuss the strengths and weaknesses of agent-based modeling as a way to gain insight into the complexity and uncertainty of policy-making.
In Chap. 5, Erik Pruyt sketches a future in which different systems modeling schools and modeling methods are integrated. He shows that elements from policy analysis, data science, machine learning, and computer science need to be combined to deal with the uncertainty in policy-making. He demonstrates the integration of various modeling and simulation approaches and related disciplines using three cases.
Modeling approaches are compared in Chap. 6, authored by Dragana Majstorovic, Maria A. Wimmer, Roy Lay-Yee, Peter Davis, and Petra Ahrweiler. As in the previous chapter, they argue that no theory on its own is able to address all aspects of complex policy interactions, and the need for hybrid simulation models is advocated.
The next chapter is complementary to the previous one and includes a comparison of ICT tools and technologies. Chap. 7 is authored by Eleni Kamateri, Eleni Panopoulou, Efthimios Tambouris, Konstantinos Tarabanis, Adegboyega Ojo, Deirdre Lee, and David Price. This chapter can be used as a basis for tool selection and covers visualization, argumentation, e-participation, opinion mining, simulation, persuasive technologies, social network analysis, big data analytics, semantics, linked data tools, and serious games.
Social Aspects, Stakeholders and Values Although much emphasis is put on modeling efforts, the social aspects are key to effective policy-making. The role of values is discussed in Chap. 8, authored by Andreas Ligtvoet, Geerten van de Kaa, Theo Fens, Cees van Beers, Paulien Herder, and Jeroen van den Hoven. Using the case of the design of smart meters in energy networks, they argue that policy-makers would do well to address not only functional requirements but also to take individual stakeholder values and public values into consideration.
In policy-making a wide range of stakeholders are involved in various stages of the policy-making process. Natalie Helbig, Sharon Dawes, Zamira Dzhusupova, Bram Klievink, and Catherine Gerald Mkude analyze five case studies of stakeholder
engagement in policy-making in their Chap. 9. Various engagement tools are discussed, and factors are identified that support the effective use of particular tools and technologies.
Chap. 10 investigates the role of values and trust in computational models in the policy process. This chapter is authored by Rebecca Moody and Lasse Gerrits. The authors found that a large diversity in values exists within the cases. They identified important explanatory factors, including (1) the role of the designer of the model, (2) the number of different actors, (3) the level of trust already present, and (4) the limited control of decision-makers over the models.
Bureaucratic organizations are often considered to be inefficient and not customer friendly. Tjeerd Andringa presents and discusses a multidisciplinary framework containing the drivers and causes of bureaucracy in Chap. 11. He concludes that reducing the number of rules and regulations is important, but that motivating workers to understand their professional roles and to learn to oversee the impact of their activities is even more important.
Crowdsourcing has become an important policy instrument for gaining access to expertise ("wisdom") outside an organization's own boundaries. In Chap. 12, Euripidis Loukis and Yannis Charalabidis discuss Web 2.0 social media for crowdsourcing. Passive crowdsourcing exploits the content generated by users, whereas active crowdsourcing stimulates content postings and idea generation by users. Synergy can be created by combining both approaches: the results of passive crowdsourcing can be used to guide active crowdsourcing and to avoid asking users for similar types of input.
Policy, Collaboration and Games Agent-based gaming (ABG) is used as a tool to explore the possibilities of managing complex systems in Chap. 13 by Wander Jager and Gerben van der Vegt. ABG allows for modeling a virtual and autonomous population in a computer game setting in order to explore various management and leadership styles. In this way, ABG contributes to the development of the required knowledge on how to manage complex social systems.
Micro simulation focuses on modeling individual units and the micro-level processes that affect their development. The concepts of micro simulation are explained by Roy Lay-Yee and Gerry Cotterell in Chap. 14. Micro simulation for policy development is useful for combining multiple sources of information in a single contextualized model to answer "what if" questions about complex social phenomena.
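The "what if" logic of micro simulation can be sketched in a few lines. In the sketch below, the population size, income dynamics, and the income top-up rule are entirely hypothetical; they serve only to illustrate the pattern of comparing a baseline run against a policy scenario applied to the same simulated individuals.

```python
import random

# Illustrative micro simulation sketch: each unit is an individual whose income
# evolves stochastically year by year; a "what if" question compares a baseline
# against a policy scenario that tops up low incomes. All numbers are invented.

def run(pop_size=500, years=10, top_up=False, seed=42):
    rng = random.Random(seed)
    incomes = [rng.uniform(10_000, 50_000) for _ in range(pop_size)]
    for _ in range(years):
        for i in range(pop_size):
            incomes[i] *= 1 + rng.gauss(0.02, 0.05)  # stochastic income growth
            if top_up and incomes[i] < 15_000:       # policy rule under test
                incomes[i] = 15_000
    # Outcome of interest: share of individuals below the (hypothetical) line.
    return sum(inc < 15_000 for inc in incomes) / pop_size

baseline = run(top_up=False)
scenario = run(top_up=True)
print(baseline, scenario)
```

Using the same seed for both runs keeps the simulated individuals comparable, so the difference between `baseline` and `scenario` isolates the effect of the policy rule.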
Visualization is essential to communicate the model and the results to a variety of stakeholders. These aspects are discussed in Chap. 15 by Tobias Ruppert, Jens Dambruch, Michel Krämer, Tina Balke, Marco Gavanelli, Stefano Bragaglia, Federico Chesani, Michela Milano, and Jörn Kohlhammer. They argue that despite the importance of using evidence in policy-making, this is seldom realized. Three case studies that have been conducted in two European research projects for policy-modeling are presented. In all the cases, information visualization technologies gave nonexperts access to the computational models.
Applications and Practices Different projects have been initiated to study the most suitable transition process towards renewable energy. In Chap. 16, by Dominik Bär, Maria A. Wimmer, Jozef Glova, Anastasia Papazafeiropoulou, and Laurence Brooks, five of these projects are analyzed and compared. They plead for transferring models from one country to other countries to facilitate learning.
Lyudmila Vidyasova, Andrei Chugunov, and Dmitrii Trutnev present experiences from Russia in their Chap. 17. They argue that informational, analytical, and forecasting activities for the processes of socioeconomic development are an important element in policy-making. The authors provide a brief overview of the history, the current state of the implementation of information processing techniques, and practices for the purpose of public administration in the Russian Federation. Finally, they provide a range of recommendations on how to proceed.
Urban policy for sustainability is another important area, directly linked to the first chapter in this section. In Chap. 18, Diego Navarra and Simona Milio demonstrate a system dynamics model showing how urban policy and governance can in the future support ICT projects in order to reduce energy usage, rehabilitate the housing stock, and promote sustainability in the urban environment. This chapter contains examples of sustainable urban development policies as well as case studies.
In Chap. 19, Tanko Ahmed discusses the digital divide, which blocks online participation in policy-making processes. Structuration, institutional, and actor-network theories are used to analyze a case study of political zoning. The author recommends stronger institutionalization of ICT support and legislation for enhancing participation in policy-making and bridging the digital divide.
1.6 Conclusions
This book is the first comprehensive volume in which the various developments and disciplines are covered from a policy-making perspective driven by ICT developments. A wide range of aspects of social and professional networking and multidisciplinary constituency building along the axes of technology, participative processes, governance, policy-modeling, social simulation, and visualization are investigated. Policy-making is a complex process in which many stakeholders are involved. Public values can be used to guide policy-making efforts and to ensure that the many stakeholders share an understanding of the societal value that needs to be created. There is an infusion of technology that is changing policy processes and stakeholder involvement. Technologies like social media provide a means to interact with the public; blogs can be used to express opinions; big and open data provide input for evidence-based policy-making; the integration of various types of modeling and simulation techniques (hybrid models) can provide much more insight and more reliable outcomes; and gaming in which all kinds of stakeholders are involved opens new ways of innovative policy-making. In addition, trends like freedom of information, the wisdom of the crowds, and open collaboration change the landscape further. The policy-making landscape is clearly changing, and this creates a strong need for interdisciplinary research.
References
Banks J (1998) Handbook of simulation: principles, methodology, advances, applications, and practice. Wiley, New York
Bertot JC, Jaeger PT, Hansen D (2012) The impact of policies on government social media usage: issues, challenges, and recommendations. Gov Inform Q 29:30–40
Burt E (2011) Introduction to the freedom of information special edition: emerging perspectives, critical reflections, and the need for further research. Inform Polit 16(2):91–92
Coleman S, Moss G (2008) Governing at a distance—politicians in the blogosphere. Inform Polit 12(1–2):7–20
Davenport TH, Patil DJ (2012) Data scientist: the sexiest job of the 21st century. Harv Bus Rev 90(10):70–76
Dawes SS, Janssen M (2013) Policy informatics: addressing complex problems with rich data, com- putational tools, and stakeholder engagement. Paper presented at the 14th annual international conference on digital government research, Quebec City, Canada
De Reuver M, Stein S, Hampe F (2013) From eparticipation to mobile participation: designing a service platform and business model for mobile participation. Inform Polit 18(1):57–73
European Commission (2003) Directive 2003/98/EC of the European Parliament and of the council of 17 November 2003 on the re-use of public sector information. http://ec.europa.eu/information_society/policy/psi/rules/eu/index_en.htm. Accessed 12 Dec 2012
Janssen M, Estevez E (2013) Lean government and platform-based governance—doing more with less. Gov Inform Q 30(suppl 1):S1–S8
Janssen M, Charalabidis Y, Zuiderwijk A (2012) Benefits, adoption barriers and myths of open data and open government. Inform Syst Manage 29(4):258–268
Janssen M, Estevez E, Janowski T (2014) Interoperability in big, open, and linked data— organizational maturity, capabilities, and data portfolios. Computer 47(10):26–31
Jørgensen TB, Bozeman B (2007) Public values: an inventory. Adm Soc 39(3):354–381
Koliba C, Zia A (2012) Governance informatics: using computer simulation models to deepen situational awareness and governance design considerations. Policy informatics. MIT Press, Cambridge
Krishnamurthy R, Bhagwatwar A, Johnston EW, Desouza KC (2013) A glimpse into policy in- formatics: the case of participatory platforms that generate synthetic empathy. Commun Assoc Inform Syst 33(Article 21):365–380.
Law AM, Kelton WD (1991) Simulation modeling and analysis, 2nd edn. McGraw-Hill, New York
Moore MH (1995) Creating public value: strategic management in government. Harvard University Press, Cambridge
Obama B (2009) Memorandum for the heads of executive departments and agencies: transparency and open government. http://www.whitehouse.gov/the_press_office/Transparency_and_Open_Government. Accessed 21 Feb 2013
Pidd M (1992) Computer simulation in management science, 3rd edn. Wiley, Chichester
Slaviero C, Maciel C, Alencar F, Santana E, Souza P (2010) Designing a platform to facilitate the development of virtual e-participation environments. Paper presented at ICEGOV '10, the 4th international conference on theory and practice of electronic governance, Beijing
Stewart JJ, Hedge DM, Lester JP (2007) Public policy: an evolutionary approach, 3rd edn. Cengage Learning, Wadsworth
Stoker G (2006) Public value management: a new narrative for networked governance? Am Rev Public Adm 3(1):41–57
Surowiecki J (2004) The wisdom of crowds: why the many are smarter than the few and how collective wisdom shapes business, economies, societies and nations. Doubleday
Welch EW (2012) The rise of participative technologies in government. In: Shareef MA, Archer N, Dwivedi YK, Mishra A, Pandey SK (eds) Transformational government through eGov practice: socioeconomic, cultural, and technological issues. Emerald Group Publishing Limited
Wimmer MA, Furdik K, Bicking M, Mach M, Sabol T, Butka P (2012a) Open collaboration in policy development: concept and architecture to integrate scenario development and formal policy modelling. In: Charalabidis Y, Koussouris S (eds) Empowering open and collaborative governance. Technologies and methods for online citizen engagement in public policy making. Springer, Berlin, pp 199–219
Wimmer MA, Scherer S, Moss S, Bicking M (2012b) Method and tools to support stakeholder engagement in policy development the OCOPOMO project. Int J Electron Gov Res (IJEGR) 8(3):98–119
Zuiderwijk A, Janssen M (2013) A coordination theory perspective to improve the use of open data in policy-making. Paper presented at the 12th conference on Electronic Government (EGOV), Koblenz
Zuiderwijk A, Helbig N, Gil-García JR, Janssen M (2014) Innovation through open data—a review of the state-of-the-art and an emerging research agenda. J Theor Appl Electron Commer Res 9(2):I–XIII.
Chapter 2 Educating Public Managers and Policy Analysts in an Era of Informatics
Christopher Koliba and Asim Zia
Abstract In this chapter, two ideal types of practitioners who may use or create policy informatics projects, programs, or platforms are introduced: the policy informatics-savvy public manager and the policy informatics analyst. Drawing from our experiences in teaching an informatics-friendly graduate curriculum, we discuss the range of learning competencies needed for traditional public managers and policy informatics-oriented analysts to thrive in an era of informatics. The chapter begins by describing the two different types of students who are, or can be, touched by policy informatics-friendly competencies, skills, and attitudes. Competencies ranging from those needed by users and sponsors of policy informatics projects and programs to those needed by analysts designing and executing such projects and programs will be addressed. The chapter concludes with an illustration of how one Master of Public Administration (MPA) program with a policy informatics-friendly mission, a core curriculum that touches on policy informatics applications, and a series of program electives that allows students to develop analysis and modeling skills designates its informatics-oriented competencies.
2.1 Introduction
The range of policy informatics opportunities highlighted in this volume will require future generations of public managers and policy analysts to adapt to the opportunities and challenges posed by big data and the increasing computational modeling capacities afforded by the rapid growth in information technologies. It will be up to the field's Master of Public Administration (MPA) and Master of Public Policy (MPP) programs to provide this next generation with the tools needed to harness the wealth of data, information, and knowledge increasingly at the disposal of public
C. Koliba (�) University of Vermont, 103 Morrill Hall, 05405 Burlington, VT, USA e-mail: ckoliba@uvm.edu
A. Zia University of Vermont, 205 Morrill Hall, 05405 Burlington, VT, USA e-mail: azia@uvm.edu
© Springer International Publishing Switzerland 2015 15 M. Janssen et al. (eds.), Policy Practice and Digital Science, Public Administration and Information Technology 10, DOI 10.1007/978-3-319-12784-2_2
16 C. Koliba and A. Zia
administrators and policy analysts. In this chapter, we discuss the role of policy informatics in the development of present and future public managers and policy analysts. Drawing from our experiences in teaching an informatics-friendly graduate curriculum, we discuss the range of learning competencies needed for traditional public managers and policy informatics-oriented analysts to thrive in an era of informatics. The chapter begins by describing the two different types of students who are, or can be, touched by policy informatics-friendly competencies, skills, and attitudes. Competencies ranging from those needed by users and sponsors of policy informatics projects and programs to those needed by analysts designing and executing such projects and programs will be addressed. The chapter concludes with an illustration of how one MPA program with a policy informatics-friendly mission, a core curriculum that touches on policy informatics applications, and a series of program electives that allows students to develop analysis and modeling skills designates its informatics-oriented competencies.
2.2 Two Types of Practitioner Orientations to Policy Informatics
Drawing from our experience, we find that there are two "ideal types" of policy informatics practitioner, each requiring progressively greater levels of technical mastery of analytics techniques and approaches. These ideal types are the policy informatics-savvy public manager and the policy informatics analyst.
A policy informatics-savvy public manager may take on one of two possible roles relative to policy informatics projects, programs, or platforms. They may play instrumental roles in catalyzing and implementing informatics initiatives on behalf of their organizations, agencies, or institutions. In this manner, they may work with technical experts (analysts) to envision possible uses for data, visualizations, simulations, and the like. Public managers may also be in the role of using policy informatics projects, programs, or platforms. They may be in positions to use these initiatives to ground decision-making, allocate resources, and otherwise guide the performance of their organizations.
A policy informatics analyst is a person who is positioned to actually execute a policy informatics initiative. They may be referred to as analysts, researchers, modelers, or programmers, and they provide the technical assistance needed to analyze databases, build and run models and simulations, and otherwise construct useful and effective policy informatics projects, programs, or platforms.
To succeed in either and both roles, managers and analysts will require a certain set of skills, knowledge, or competencies. Drawing on some of the prevailing literature and our own experiences, we lay out an initial list of potential competencies for consideration.
2 Educating Public Managers and Policy Analysts in an Era of Informatics 17
2.2.1 Policy Informatics-Savvy Public Managers
To successfully harness policy informatics, public managers will likely not need to know how to build models or manipulate big data themselves. Instead, they will need to know what kinds of questions policy informatics projects or programs can and cannot answer. They will need to know how to contract with and/or manage data managers, policy analysts, and modelers. They will need to be savvy consumers of data analysis and computational models, but will not necessarily need to know how to technically execute them. Policy informatics projects, programs, and platforms are, in many ways, designed and executed like any large-scale, complex project.
In writing about the stages of informatics project development using "big data," DeSouza (2014) lays out project development along three stages: planning, execution, and postimplementation. Throughout the project life cycle, he emphasizes the role of understanding the prevailing policy and legal environment, the need to venture into coalition building, the importance of communicating the broader opportunities afforded by the project, the need to develop performance indicators, and the importance of lining up adequate financial and human resources.
Framing what traditional public managers need to know and do to effectively interface with policy informatics projects and programs requires the ability to be a "systems thinker" and an effective evaluator, the capacity to integrate informatics into performance and financial management systems, effective communication skills, and the capacity to draw on social media, information technology, and e-governance approaches to achieve common objectives. We briefly review each of these capacities below.
Systems Thinking Knowing the right kinds of questions that may be asked through policy informatics projects and programs requires public managers to possess a "systems" view. Much has been written about the importance of "systems thinking" for public managers (Katz and Kahn 1978; Stacey 2001; Senge 1990; Korton 2001). Taking a systems perspective allows public managers to understand the relationship between the "whole" and the "parts." Systems-oriented public managers will possess a level of situational awareness (Endsley 1995) that allows them to see and understand patterns of interaction and to anticipate future events and orientations. Situational awareness allows public managers to understand and evaluate where data are coming from, how the data are best interpreted, and the kinds of assumptions being used in specific interpretations (Koliba et al. 2011). The concept of systems thinking laid out here can be associated with the notion of transition management (Loorbach 2007).
Process Orientations to Public Policy The capacity to view policy making and implementation as a process involving certain levels of coordination and conflict between policy actors is of critical importance for policy informatics-savvy public managers and analysts. Understanding how data are used to frame problems and policy solutions, how complex governance arrangements impact policy implementation (Koliba et al. 2010), and how data visualization can be used to
18 C. Koliba and A. Zia
facilitate the setting of policy agendas and open policy windows (Kingdon 1984) is of critical importance for public managers and policy analysts alike.
Research Methodologies Another basic competency needed by any public manager using policy informatics is a foundational understanding of research methods, particularly quantitative reasoning and methodologies. A foundational understanding of data validity, analytical rigor and relevance, statistical significance, and the like is needed to be an effective consumer of informatics. That said, traditional public managers should also be exposed to qualitative methods, refining their powers of observation and their understanding of how symbols, stories, and numbers are used to govern, and of how data, data visualization, and computer simulations play into these mental models.
Performance Management A key feature of systems thinking as applied to policy informatics is the importance of understanding how data and analysis are to be used and who the intended users of the data are (Patton 2008). The integration of policy informatics into strategic planning (Bryson 2011) and performance management systems (Moynihan 2008), and ultimately into an organization’s capacity to learn, adapt, and evolve (Argyris and Schön 1996), is critically important in this vein. As policy informatics trends evolve, public managers will likely need to be exposed to uses of decision support tools, dashboards, and other computationally driven models and visualizations to support organizational performance.
Financial Management Since the first systematic budgeting systems were put in place, public managers have been urged to use the budgeting process as a planning and evaluation tool (Willoughby 1918). This approach was formally codified in the 1960s with the planning–programming–budgeting (PPB) system, with its focus on planning, managerial, and operational control (Schick 1966), and was later adopted into more contemporary approaches to budgeting (Caiden 1981). Using informatics projects, programs, or platforms to make strategic resource allocation decisions is a capacity that effective public managers must master. Likewise, the policy analyst will likely need to integrate financial resource flows and costs into their projects.
Collaborative and Cooperative Capacity Building The development and use of policy informatics projects, programs, or platforms is rarely, if ever, undertaken as an individual, isolated endeavor. It is more likely that such initiatives will require interagency, interorganizational, or intergroup coordination. It is also likely that content experts will need to be partnered with analysts and programmers to complete tasks and execute designs. The public manager and policy analyst must both possess the capacity to facilitate collaborative management functions (O’Leary and Bingham 2009).
Basic Communication Skills This perhaps goes without saying, but the heart of any informatics project lies in the ability to effectively communicate findings and ideas through the analysis of data.
2 Educating Public Managers and Policy Analysts in an Era of Informatics 19
Social Media, Information Technology, and e-Governance Awareness A final competency concerns public managers’ capacity to deepen their understanding of how social media, Web-based tools, and related information technologies are being employed to foster various e-government, e-governance, and related initiatives (Mergel 2013). Public managers must be able to place policy informatics projects and programs within the context of these larger trends and uses.
Within our MPA program, we have operationalized these capacities within a four-point rubric that outlines what a student needs to do to demonstrate meeting these standards. The rubric below highlights 8 of our program’s 18 capacities. All 18 of these capacities are situated under one of the five core competencies tied to the accreditation standards of the Network of Schools of Public Affairs and Administration (NASPAA), the professional accrediting association for MPA and MPP programs in the USA and, increasingly, in other countries as well. A complete list of these core competencies and the 18 capacities nested under them is provided in the Appendix to this chapter.
The eight capacities that we have singled out as most salient to the role of policy informatics in public administration are provided in Table 2.1. The rubric follows a four-point scale: “does not meet standard,” “approaches standard,” “meets standard,” and “exceeds standard.”
2.2.2 Policy Informatics Analysts
A second type of practitioner to be considered is what we refer to as a “policy informatics analyst.” When considering the kinds of competencies that policy informatics analysts need to be successful, we first assume that the basic competencies outlined in the prior section apply here as well. In other words, effective policy informatics analysts must be systems thinkers in order to place data and their analysis into context, be cognizant of current uses of decision support systems (and related platforms) to enable organizational learning, performance, and strategic planning, and possess an awareness of e-governance and e-government initiatives and how they are transforming contemporary public management and policy planning practices. In addition, policy analysts must possess a capacity to understand policy systems: how policies are made and implemented. This baseline understanding can then be used to consider the placement, purpose, and design of policy informatics projects or programs. We lay out more specific analyst capacities below.
Advanced Research Methods and Information Technology Applications In many instances, policy informatics analysts will need to move beyond meeting the standard. This is particularly true in exceeding the public manager standards for research methods and the utilization of information technology. It is assumed that effective policy informatics analysts will have a strong foundation in quantitative methodologies and applications. To obtain these skills, policy analysts will need to move beyond basic surveys of research methods into a more advanced research methods curriculum.
Table 2.1 Public manager policy informatics capacities

Capacity to apply knowledge of system dynamics and network structures in public administration practices
- Does not meet standard: Does not understand the basic operations of systems and networks; cannot explain why understanding cases and contexts in terms of systems and networks is important.
- Approaches standard: Can provide a basic overview of what system dynamics and network structures are and illustrate how they are evident in particular cases and contexts.
- Meets standard: Is able to undertake an analysis of a complex public administration issue, problem, or context using basic system dynamics and network frameworks.
- Exceeds standard: Can apply system dynamics and network frameworks to existing cases and contexts to derive working solutions or feasible alternatives to pressing administrative and policy problems.

Capacity to apply policy streams, cycles, and systems foci upon past, present, and future policy issues, and to understand how problem identification impacts public administration
- Does not meet standard: Possesses limited capacity to utilize the policy streams and policy stage heuristics models to describe observed phenomena. Can isolate simple problems from solutions, but has difficulty separating ill-structured problems from solutions.
- Approaches standard: Possesses some capacity to utilize the policy streams and policy stage heuristics models to describe observed phenomena. Possesses some capacity to define how problems are framed by different policy actors.
- Meets standard: Employs a policy streams or policy stage heuristics model approach to the study of observed phenomena. Can demonstrate how problems are defined within specific policy contexts and deconstruct the relationship between problem definitions and solutions.
- Exceeds standard: Employs a policy streams or policy stage heuristics model approach to the diagnosis of a problem raised in real-life policy dilemmas. Can articulate how conflicts over problem definition contribute to wicked policy problems.

Capacity to employ quantitative and qualitative research methods for program evaluation and action research
- Does not meet standard: Possesses a limited capacity to employ survey, interview, or other social research methods in a focus area. Can explain why it is important to undertake program or project evaluation, but possesses limited capacity to actually carry it out.
- Approaches standard: Demonstrates a capacity to employ survey, interview, or other social research methods in a focus area and an understanding of how such data and analysis are useful in administrative practice. Can provide a rationale for undertaking program/project evaluation and explain what the possible goals and outcomes of such an evaluation might be.
- Meets standard: Can provide a piece of original analysis of an observed phenomenon employing one qualitative or quantitative methodology effectively. Possesses the capacity to commission a piece of original research. Can provide a detailed account of how a program or project evaluation should be structured within the context of a specific program or project.
- Exceeds standard: Demonstrates the capacity to undertake an independent research agenda by employing one or more social research methods around a topic of study of importance to public administration. Can demonstrate the successful execution of a program or project evaluation, or the successful utilization of a program or project evaluation to improve administrative practice.

Capacity to apply sound performance measurement and management practices
- Does not meet standard: Can provide an explanation of why performance goals and measures are important in public administration, but cannot apply this reasoning to specific contexts.
- Approaches standard: Can identify the performance management considerations for a particular situation or context, but has limited capacity to evaluate the effectiveness of performance management systems.
- Meets standard: Can identify and analyze performance management systems, needs, and emerging opportunities within a specific organization or network.
- Exceeds standard: Can provide new insights into the performance management challenges facing an organization or network, and suggest alternative design and measurement scenarios.

Capacity to apply sound financial planning and fiscal responsibility
- Does not meet standard: Can identify why budgeting and sound fiscal management practices are important, but cannot analyze how and/or if such practices are being used within specific contexts.
- Approaches standard: Can identify fiscal planning and budgeting practices for a particular situation or context, but has limited capacity to evaluate the effectiveness of a financial management system.
- Meets standard: Can identify and analyze financial management systems, needs, and emerging opportunities within a specific organization or network.
- Exceeds standard: Can provide new insights into the financial management challenges facing an organization or network, and suggest alternative design and budgeting scenarios.

Capacity to achieve cooperation through participatory practices
- Does not meet standard: Can explain why it is important for public administrators to be open and responsive practitioners in a vague or abstract way, but cannot provide specific explanations or justifications applied to particular contexts.
- Approaches standard: Can identify instances in specific cases or contexts where a public administrator demonstrated or failed to demonstrate inclusive practices.
- Meets standard: Can demonstrate how inclusive practices and conflict management lead to cooperation for forming coalitions and collaborative practices.
- Exceeds standard: Can orchestrate any of the following: coalition building across units, organizations, or institutions; effective teamwork; and/or conflict management.

Capacity to undertake high-quality oral and written communication
- Does not meet standard: Demonstrates some ability to express ideas verbally and in writing. Lacks consistent capacity to present and write.
- Approaches standard: Possesses the capacity to write documents that are free of grammatical errors and are organized in a clear and efficient manner. Possesses the capacity to present ideas in a professional manner. Suffers from a lack of consistency in the presentation of material and the expression of original ideas and concepts.
- Meets standard: Is capable of consistently expressing ideas verbally and in writing in a professional manner that communicates messages to intended audiences.
- Exceeds standard: Can demonstrate some instances in which verbal and written communication has persuaded others to take action.

Capacity to undertake high-quality electronically mediated communication and utilize information systems and media to advance objectives
- Does not meet standard: Can explain why information technology is important to contemporary workplaces and public administration environments. Possesses direct experience with information technology, but little understanding of how IT informs professional practice.
- Approaches standard: Can identify instances in specific cases or contexts where a public administrator successfully or unsuccessfully demonstrated a capacity to use IT to foster innovation, improve services, or deepen accountability. Analysis at this level is relegated to descriptions and thin analysis.
- Meets standard: Can identify how IT impacts workplaces and public policy. Can diagnose problems associated with IT tools, procedures, and uses.
- Exceeds standard: Demonstrates a capacity to view IT in terms of systems design. Is capable of working with IT professionals in identifying areas of need for IT upgrades, IT procedures, and IT uses in real settings.

IT information technology
Competencies in advanced quantitative methods, in which students learn to clean and manage large databases, perform advanced statistical tests, develop linear regression models to describe causal relationships, and the like, are needed. The capacity to work across software platforms such as Excel, the Statistical Package for the Social Sciences (SPSS), Analytica, and the like is important. Increasingly, the capacity to triangulate different methods, including qualitative approaches such as interviews, focus groups, and participant observation, is needed.
Data Visualization and Design Not only must analysts be aware of how these methods and decision support platforms may be used by practitioners; they must also know how to design and implement them. Therefore, we suggest that policy informatics analysts be exposed to design principles and how they may be applied to decision support systems, big data projects, and the like. Policy informatics analysts will need to understand and appreciate how data visualization techniques are being employed to “tell a story” through data.
Figure 2.1 provides an illustration of one student’s effort to visualize campaign donations to state legislatures from the gas-extraction (fracking) industry, undertaken by a master’s student, Jeffery Castle, for a system analysis and strategic management class taught by Koliba.
Castle’s project demonstrates the power of data visualization to convey a central message drawn from existing databases. With a solid research methods background and exposure to visualization and design principles in class, he was able to develop an insightful policy informatics project.
Basic to Advanced Programming Language Skills Arguably, policy informatics analysts will possess a capacity to visualize and present data in a manner that is accessible. Increasingly, Web-based tools are being used to design user interfaces; knowledge of Java and HTML is likely most helpful in these regards. In some instances, original programs and models will need to be written in programming languages such as Python, R, C++, etc. The extent to which existing software programs, be they open source or proprietary, provide enough utility to execute policy informatics projects, programs, or platforms is a continuing subject of debate within the policy informatics community. Exactly how much and to what extent specific programming languages and software programs need to be mastered is a standing question. For the purposes of this chapter, we rely on our current baseline observations and encourage more discussion and debate about the range of competencies needed by successful policy analysts.
Basic to More Advanced Modeling Skills More advanced policy informatics analysts will employ computational modeling approaches that allow for the incorporation of more complex interactions between variables. These models may be used to capture systems as dynamic, emergent, and path dependent. The outputs of these models may allow for scenario testing through simulation (Koliba et al. 2011). With the advancement of modeling software, it is becoming easier for analysts to develop system dynamics models, agent-based models, and dynamic networks designed to simulate the features of complex adaptive systems. In addition, the ability to manage and store data and link or wrap databases is often necessary.
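Stripped of its software packaging, the computational core of a simple system dynamics model is a stock updated by inflows and outflows over discrete time steps. The hypothetical sketch below illustrates this with one stock and Euler integration; every name and number is invented for illustration and is not taken from any model described in this chapter.

```python
# Hypothetical sketch of the computational core of a system dynamics model:
# a single stock integrated forward in time with Euler's method.
def simulate_stock(stock, inflow, outflow_rate, years, dt=0.25):
    """Step d(stock)/dt = inflow - outflow_rate * stock forward in time."""
    for _ in range(int(years / dt)):
        stock += (inflow - outflow_rate * stock) * dt
    return stock

# With a constant inflow and a proportional outflow, the stock drifts
# toward its equilibrium value, inflow / outflow_rate.
population = simulate_stock(stock=100_000, inflow=20_000,
                            outflow_rate=0.18, years=10)
print(round(population))
```

Real system dynamics packages add the bookkeeping for many coupled stocks, table functions, and delays, but each simulation step is this same accumulate-the-flows arithmetic.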
Fig. 2.1 Campaign contributions to the Pennsylvania State Senate and party membership. The goal of this analysis is to develop a visualization tool that translates publicly available campaign contribution information into an easily accessible, visually appealing, and interactive format. While campaign contribution data are filed and available to the public through the Pennsylvania Department of State, they are not easily synthesized. This analysis uses a publicly available database that has been published on marcellusmoney.org. To visualize the data, a tool was used that allows for the creation of a Sankey diagram that can be manipulated and interacted with in an Internet browser. A Sankey diagram visualizes the magnitude of flow between the nodes of a network (Castle 2014)
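The data-preparation step behind a Sankey diagram of this kind can be sketched in a few lines: individual contribution records are collapsed into source-to-target flow magnitudes, which are then handed to a charting tool. The records and field names below are invented placeholders, not drawn from the marcellusmoney.org database.

```python
# Hypothetical sketch: aggregating individual contribution records into
# donor -> recipient flow magnitudes, the input format a Sankey chart needs.
from collections import defaultdict

contributions = [
    {"donor": "Gas PAC A", "recipient": "Senator X", "amount": 5_000},
    {"donor": "Gas PAC A", "recipient": "Senator Y", "amount": 2_500},
    {"donor": "Gas PAC B", "recipient": "Senator X", "amount": 1_000},
    {"donor": "Gas PAC A", "recipient": "Senator X", "amount": 3_000},
]

# Sum amounts over each (donor, recipient) pair
flows = defaultdict(int)
for record in contributions:
    flows[(record["donor"], record["recipient"])] += record["amount"]

for (donor, recipient), total in sorted(flows.items()):
    print(f"{donor} -> {recipient}: ${total:,}")
```

Each aggregated pair becomes one link in the diagram, with the summed amount setting the link's width.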
The ability of analysts to draw on a diverse array of methods and theoretical frameworks to envision and create models is of critical importance. Any potential policy informatics project, program, or platform will be enabled or constrained by the modeling logic in place. With a plurality of tools at one’s disposal, policy informatics analysts will be better positioned to design relevant and legitimate models.
Fig. 2.2 End-stage renal disease (ESRD) system dynamics population model. To provide clinicians and health care administrators with a greater understanding of the combined costs associated with the many critical care pathways associated with ESRD, a system dynamics model was designed to simulate the total expenses of ESRD treatment for the USA, as well as the incidence and mortality rates associated with different critical care pathways: kidney transplant, hemodialysis, peritoneal dialysis, and conservative care. Calibrated to the US Renal Data System (USRDS) 2013 Annual and Historical Data Report and the US Census Bureau for the years 2005–2010, encompassing all ESRD patients under treatment in the USA from 2005 to 2010, the ESRD population model predicts the growth and costs of ESRD treatment type populations using historical patterns. The model has been calibrated against the output of the USRDS’s own prediction for the year 2020 and also tested by running historic scenarios and comparing the output to existing data. Using a web interface designed to allow users to alter certain combinations of parameters, several scenarios are run to project future spending, incidence, and mortalities if certain combinations of critical care pathways are pursued. These scenarios include: a doubling of kidney donations and transplant rates, a marked increase in the offering of peritoneal dialysis, and an increase in conservative care routes for patients over 65. The results of these scenario runs are shared, demonstrating sizable cost savings and increased survival rates. Implications for clinical practice, public policy, and further research are drawn (Fernandez 2013)
Figure 2.2 provides an illustration of Luca Fernandez’s system dynamics model of critical care pathways for end-stage renal disease (ESRD). Fernandez took Koliba’s system analysis and strategic management course and Zia’s decision-making modeling course. The model, constructed using the proprietary software AnyLogic, began as a project in Zia’s course.
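The scenario-testing logic such a model supports can be caricatured in a few lines: run the same cost model under a baseline and an alternative parameter set, then compare outputs. The toy model and every number below are invented stand-ins; the actual ESRD model was built in AnyLogic and calibrated to USRDS data.

```python
# Invented toy model for scenario comparison; not the actual ESRD model.
def total_cost(patients, transplant_rate, years,
               dialysis_cost=90_000, post_transplant_cost=25_000):
    """Accumulate annual treatment costs as dialysis patients
    move to the (cheaper, in this toy) transplant pathway."""
    on_dialysis = float(patients)
    transplanted = 0.0
    cost = 0.0
    for _ in range(years):
        moved = on_dialysis * transplant_rate
        on_dialysis -= moved
        transplanted += moved
        cost += on_dialysis * dialysis_cost + transplanted * post_transplant_cost
    return cost

# Scenario comparison: baseline vs. a doubled transplant rate
baseline = total_cost(400_000, transplant_rate=0.03, years=10)
doubled = total_cost(400_000, transplant_rate=0.06, years=10)
print(f"Projected savings from doubling the transplant rate: "
      f"${baseline - doubled:,.0f}")
```

The pedagogical point is the workflow, not the numbers: a calibrated model becomes a laboratory in which policy alternatives are expressed as parameter changes and compared on simulated outcomes.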
Castle and Fernandez’s projects illustrate how master’s-level students with an eye toward becoming policy informatics analysts can build skills and capacities to develop useful informatics projects that can guide policy and public management. They were guided to this point by taking advanced courses designed explicitly with policy informatics outcomes in mind.
Policy informatics analyst: advanced research methods; data visualization and design techniques; basic to advanced modeling software skills; basic to advanced programming language(s). Informatics-savvy public manager: systems thinking; basic understanding of research methods; knowledge of how to integrate informatics within performance management; knowledge of how to integrate informatics within financial systems; effective written communication; effective use of social media/e-governance approaches.
Fig. 2.3 The nested capacities of informatics-savvy public managers and policy informatics analysts
Figure 2.3 illustrates how the competencies of the two different ideal types of policy informatics practitioners are nested inside one another. A more complete list of the competencies needed for the more advanced forms of policy analysis will need to emerge through robust exchanges between the computer sciences, organizational sciences, and policy sciences. These views will likely hinge on assumptions about the sophistication of the models to be developed. A key question here concerning the types of models to be built is: can adequate models be built using existing software, or is original programming needed or desired? Ideally, advanced policy analysts undertaking policy informatics projects are “programmers with a public service motivation.”
2.3 Applications to Professional Masters Programs
Professional graduate degree programs have steadily moved toward emphasizing the importance of a particular program’s mission in determining the optimal curriculum to suit the learning needs of its students. As a result, programs now clearly define their learning outcomes and the learning needs of particular student communities. Some programs may seek to serve the regional or local needs of the government and nonprofit sector, while others may have a broader reach, preparing students to work within federal or international-level governments and nonprofits.
In addition to geographic scope, accredited MPA and MPP programs may have specific areas of concentration. Some programs may focus on preparing public managers who are charged with managing resources, making operational, tactical, and
strategic decisions and, overall, administering to the day-to-day needs of a government or nonprofit organization. Programs may also focus on training policy analysts who are responsible for analyzing policies, policy alternatives, problem definition, and the like. Historically, the differences between public management and policy analysis have distinguished the MPA degree from the MPP degree. However, recent studies of NASPAA-accredited programs have found that the lines between MPA and MPP programs are increasingly blurred (Hur and Hackbart 2009). The relationship between public management and policy analysis matters to those interested in policy informatics because these distinctions drive what policy informatics competencies and capacities are covered within a core curriculum, and what competencies and capacities are covered within a suite of electives or concentrations.
Competency-based assessments are increasingly being used to evaluate and design curriculum. Drawing on the core tenets of adult learning theory and practice, competency-based assessment involves the derivation of specific skills, knowledge, or attitudes that an adult learner must obtain in order to successfully complete a course of study or degree requirement. Effective competency-based graduate programs call on students to demonstrate a mastery of competencies through a variety of means; portfolio development, test taking, and project completion are common applications. Best practices in competency-based education assert that curriculum be aligned with specific competencies as much as possible.
By way of example, the University of Vermont’s MPA Program has had a “systems thinking” focus since it was first conceived in the mid-1980s. Within the last 10 years, the two chapter coauthors, along with several core faculty who have been associated with the program since its inception, have undertaken an effort to refine its mission based on its original systems-focused orientation.
As of 2010, the program mission was refined to read:
Our MPA program is a professional interdisciplinary degree that prepares pre- and in-service leaders, managers and policy analysts by combining the theoretical and practical foundations of public administration, focusing on the complexity of governance systems and the democratic, collaborative traditions that are a hallmark of Vermont communities.
The mission was revised to include leaders and managers, as well as policy analysts. A theory–practice link was made explicit. The phrase “complexity of governance systems” was selected to align with a commonly shared view of contemporary governance as a multisectoral and multijurisdictional context. Concepts such as bounded rationality, social complexity, the importance of systems feedback, and path dependency are stressed throughout the curriculum. The sense of place found within the State of Vermont was also recognized and used to highlight the high levels of engagement found within the program.
The capacities laid out in Table 2.1 have been mapped to the program’s core curriculum. The program’s current core is a set of five courses: PA 301: Foundations of Public Administration; PA 302: Organizational Behavior and Change; PA 303: Research Methods; PA 305: Public and Nonprofit Budgeting and Finance; and PA 306: Policy Systems. In addition, all students are required to undertake a three-credit internship and a three-credit Capstone experience in which they construct a
final learning portfolio. It is within this final portfolio that students are expected to provide evidence of meeting or exceeding the standard. An expanded rubric of all 18 capacities is used by the students to undertake their own self-assessment. These self-assessments are judged against the Capstone instructor’s assessment.
In 2009, the MPA faculty revised the core curriculum to align with the core competencies. Several course titles and course content were revised to align with these competencies and the overall systems focus of our mission. The two core courses taught by the two coauthors, PA 301 and PA 306, are highlighted here.
2.4 PA 301: Foundations of Public Administration
Designed as a survey of the prevailing public administration literature of the past 200-plus years, Foundations of Public Administration is arranged across a continuum of interconnected themes and topics that are addressed in more depth in other courses, and is described in the syllabus in the following way:
This class is designed to provide you with an overview of the field of public administration. You will explore the historical foundations, the major theoretical, organizational, and political breakthroughs, and the dynamic tensions inherent to public and nonprofit sector administration. Special attention will be given to problems arising from political imperatives generated within a democratic society.
Each week a series of classic and contemporary texts are read and reviewed by the students. In part to fill a noticeable void in the literature, the authors co-wrote, along with Jack Meek, a book on governance networks, Governance Networks in Public Administration and Public Policy (Koliba et al. 2010). This book is required reading. Students are also asked to purchase Shafritz and Hyde’s edited volume, Classics of Public Administration.
Students also complete current events assignments through blog posts. Weekly themes include: the science and art of administration; citizens and the administrative state; nonprofit, private, and public sector differences; governance networks; accountability; and performance management.
During the 2009 reforms of the core curriculum, discrete units on governance networks and performance management were added to this course. Throughout the entire course, a complex systems lens is employed to describe and analyze governance networks and the particular role that performance management systems play in providing feedback to governance actors. Students are exposed to social network and system dynamics theory and asked to apply these lenses to several written cases taken from the Electronic Hallway. A unit on performance management systems and their role in fostering organizational learning is provided, along with readings and examples of decision support tools and dashboard platforms currently in use by government agencies.
2 Educating Public Managers and Policy Analysts in an Era of Informatics

Across many units, including units on trends and reforms, ethical and reflective leadership, citizens and the administrative state, and accountability, the increasing use of social media and other forms of information technology is discussed. Trends shaping the "e-governance" and "e-government" movements serve as a major focus of these discussions. In addition, students are exposed to current examples of data visualizations and open data platforms and asked to consider their uses.
2.5 PA 306: Policy Systems
Policy Systems is an entry-level graduate policy course designed to give the MPA student an overview of the policy process. In 2009, the course was revised to reflect a more integrated systems focus. The following text provides an overview of the course:
In particular, the emphasis is placed upon meso- and macro-scale policy system frameworks and theories, such as the Institutional Analysis and Development Framework; the Multiple Streams Framework; Social Construction and Policy Design; the Network Approach; Punctuated Equilibrium Theory; the Advocacy Coalition Framework; Innovation and Diffusion Models; and Large-N Comparative Models. Further, students will apply these micro-, meso-, and macro-scale theories to a substantive policy problem that is of interest to a community partner, which could be a government agency or a nonprofit organization. These policy problems may span, or even cut across, a broad range of policy domains, including but not limited to economic policy, food policy, environmental policy, defense and foreign policy, space policy, homeland security, disaster and emergency management, social policy, transportation policy, land-use policy, and health policy.
The core texts for this class are Elinor Ostrom's Understanding Institutional Diversity, Paul Sabatier's edited volume Theories of the Policy Process, and Deborah Stone's Policy Paradox: The Art of Political Decision Making. The course itself is staged along a micro-, meso-, and macro-level scale of policy systems frameworks. A service-learning element is incorporated. Students are taught to view the policy process through a systems lens. Zia employs examples of policy systems models using system dynamics (SD), agent-based modeling (ABM), social network analysis (SNA), and hybrid approaches throughout the class. By drawing on Ostrom, Sabatier, and other meso-level policy process theories as a basis, students are exposed to a number of "complexity-friendly" theoretical policy frameworks (Koliba and Zia 2013). Appreciating the value of these policy frameworks, students are provided with heuristics for understanding the flow of information across a system. In addition, students are shown examples of simulation models of different policy processes, streams, and systems.
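To give a flavour of the kind of system dynamics model students encounter, the following sketch steps a single policy "stock" forward in time with Euler integration. It is a generic illustration written for this chapter, not an actual course assignment; the program, rates, and numbers are invented.

```python
# A minimal system-dynamics sketch (hypothetical, not a course assignment):
# a single policy "stock" -- say, a program's enrolled population -- with
# inflow and outflow rates, stepped forward with Euler integration.

def simulate_stock(initial=1000.0, inflow_rate=0.08, outflow_rate=0.05,
                   dt=0.25, years=10):
    """Return the stock level at each time step."""
    stock = initial
    trajectory = [stock]
    steps = int(years / dt)
    for _ in range(steps):
        inflow = inflow_rate * stock      # e.g., referrals proportional to size
        outflow = outflow_rate * stock    # e.g., program completions
        stock += (inflow - outflow) * dt  # Euler step: net flow over dt
        trajectory.append(stock)
    return trajectory

levels = simulate_stock()
print(f"final stock after 10 years: {levels[-1]:.0f}")
```

Even a toy model of this kind lets students see how a small difference between inflow and outflow rates compounds into substantial growth or decline over a decade.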
In addition to PA 301 and PA 306, students are also provided an in-depth exploration of organization theory in PA 302 Organizational Behavior and Change, which is taught through an organizational psychology lens that emphasizes the role of organizational culture and learning. "Soft systems" approaches are applied. PA 303 Research Methods for Policy Analysis and Program Evaluation exposes students to a variety of research and program evaluation methodologies with a particular focus on quantitative analysis techniques. Within PA 305 Public and Nonprofit Budgeting and Finance, students are taught about evidence-based decision-making and data management.
30 C. Koliba and A. Zia
By completing the core curriculum, students are exposed to some of the foundational competencies needed to use and shape policy informatics projects. However, it is not until students enroll in one of several electives that more explicit policy informatics concepts and applications are taught. Two of these elective courses are highlighted here. A third, PA 311 Policy Analysis, also exposes students to policy analyst capacities, but is not highlighted here.
2.6 PA 308: Decision-Making Models
A course designed during the original founding of the University of Vermont (UVM) MPA program, PA 308: Decision-Making Models offers students a more advanced look at decision-making theory and modeling. The course is described by Zia in the following manner:
In this advanced graduate-level seminar, we will explore and analyze a wide range of normative, descriptive, and prescriptive decision-making models. This course focuses on systems-level thinking to impart problem-solving skills in complex decision-making contexts. Decision-making problems in real-world public policy, business, and management arenas will be analyzed and modeled with different tools developed in the fields of Decision Analysis, Behavioral Sciences, Policy Sciences, and Complex Systems. The emphasis will be placed on imparting cutting-edge skills to enable students to design and implement multiple-criteria decision analysis models, decision-making models under risk and uncertainty, and computer simulation models such as Monte Carlo simulation, system dynamics models, agent-based models, Bayesian decision-making models, participatory and deliberative decision-making models, and interactive scenario planning approaches. AnyLogic version 6.6 will be made available to the students for working with some of these computer simulation models.
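The flavour of the Monte Carlo exercises the syllabus mentions can be sketched in a few lines. The two policy options and all numbers below are invented for illustration; the course itself uses AnyLogic rather than general-purpose code.

```python
# A hedged sketch of decision making under uncertainty via Monte Carlo
# simulation. Costs and benefits of two hypothetical policy options are
# drawn from assumed distributions; we compare expected net benefit and
# downside risk. All figures are invented for illustration.
import random

random.seed(42)

def net_benefit(cost_low, cost_high, benefit_mean, benefit_sd):
    """One Monte Carlo draw of a policy option's net benefit."""
    cost = random.uniform(cost_low, cost_high)        # uncertain cost
    benefit = random.gauss(benefit_mean, benefit_sd)  # uncertain benefit
    return benefit - cost

N = 10_000
option_a = [net_benefit(80, 120, 130, 25) for _ in range(N)]
option_b = [net_benefit(60, 140, 125, 40) for _ in range(N)]

mean_a = sum(option_a) / N
mean_b = sum(option_b) / N
# Downside risk: probability that the option loses money
risk_a = sum(x < 0 for x in option_a) / N
risk_b = sum(x < 0 for x in option_b) / N

print(f"A: mean {mean_a:.1f}, P(loss) {risk_a:.2%}")
print(f"B: mean {mean_b:.1f}, P(loss) {risk_b:.2%}")
```

The point of such an exercise is that two options with similar expected values can carry very different risk profiles, which only the simulated distribution reveals.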
2.7 PA 317: Systems Analysis and Strategic Management
Another course designed during the early inception of the program, Systems Analysis and Strategic Management is described by Koliba in the course syllabus as follows:
This course combines systems and network analysis with organizational learning theory and practices to provide students with a heightened capacity to analyze and effectively operate in complex organizations and networks. The architecture for the course is grounded in many of the fundamental conceptual frameworks found in network, systems and complexity analysis, as well as some of the fundamental frameworks employed within the public administration and policy studies fields. In this course, strategic management and systems analysis are linked together through the concept of situational awareness and design principles. Several units focusing on teaching network analysis tools using UCINet have been incorporated.
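The course teaches network analysis with UCINet; as a tool-neutral illustration of the most basic measure involved, the sketch below computes normalized degree centrality by hand on a small, invented inter-organizational network.

```python
# Degree centrality on a hypothetical governance network (the organizations
# and ties are invented for illustration; in the course itself this kind of
# analysis is done in UCINet).

edges = [
    ("StateAgency", "Nonprofit1"), ("StateAgency", "Nonprofit2"),
    ("StateAgency", "TownOffice"), ("Nonprofit1", "Nonprofit2"),
    ("TownOffice", "Contractor"),
]

# Build an undirected adjacency list
neighbors = {}
for a, b in edges:
    neighbors.setdefault(a, set()).add(b)
    neighbors.setdefault(b, set()).add(a)

n = len(neighbors)
# Normalized degree centrality: ties divided by the maximum possible (n - 1)
centrality = {node: len(adj) / (n - 1) for node, adj in neighbors.items()}

for node, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node:12s} {c:.2f}")
```

Even this toy example shows the kind of situational awareness the course aims at: the state agency's high centrality marks it as the broker on which the rest of the network depends.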
One of the key challenges to offering these informatics-oriented electives lies in the capacities that traditional MPA students possess to thrive within them. Increasingly, these elective courses are populated by doctoral and master of science students looking to apply what they are learning to their dissertations or theses. Our MPA program offers a thesis option, and we have had some success with these more
professionally oriented students undertaking high-quality, informatics-focused theses. Our experience raises a larger question: to what degree do the baseline capacities of the informatics-savvy public manager lead into the more complex policy analyst competencies associated with the actual design and construction of policy informatics projects, programs, and platforms?
Table 2.2 provides an overview of where within the curriculum certain policy informatics capacities are covered. Where a capacity is associated with a class, students are either exposed to the uses of informatics projects, programs, and platforms or provided with opportunities for concrete skill development.
The University of Vermont context is one that can be replicated in other programs. The capacity of the MPA program to offer these courses hinges on the expertise of the two faculty members who teach in the core and these two electives. With additional resources, a more advanced curriculum may be pursued, one that pursues closer ties with the computer science department (where Zia has a secondary appointment) around curricular alignment. Examples of more advanced curricula to support the development of policy informatics analysts may be found at institutions such as Carnegie Mellon University, Arizona State University, George Mason University, University at Albany, Delft University of Technology, and the Massachusetts Institute of Technology, among many others. The University of Vermont case suggests, however, that policy informatics education can be integrated into the mainstream with relatively low resource investments, leveraged by strategic relationships with other disciplines and core faculty with the right skills, training, and vision.
2.8 Conclusion
It is difficult to argue against the conclusion that, with the advancement of high-speed computing, the digitization of data, and the increasing collaboration occurring around the development of informatics projects, programs, and platforms, the educational establishment, particularly at the professional master's degree level, will need to evolve. This chapter lays out a preliminary look at some of the core competencies and capacities that public managers and policy analysts will need to lead the next generation of policy informatics integration.
Table 2.2 Policy informatics capacities covered within the UVM-MPA program curriculum

PA 301: Foundations of public administration
  Policy informatics-savvy public management capacities covered: systems thinking; policy as process; performance management; financial management; basic communication; social media/IT/e-governance; collaborative–cooperative capacity building
  Policy informatics analysis capacities covered: data visualization and design

PA 306: Policy systems
  Policy informatics-savvy public management capacities covered: systems thinking; policy as process; basic communication
  Policy informatics analysis capacities covered: basic modeling skills

PA 302: Organizational behavior and change
  Policy informatics-savvy public management capacities covered: systems thinking; basic communication; collaborative–cooperative capacity building

PA 303: Research methods for policy analysis and program evaluation
  Policy informatics-savvy public management capacities covered: research methods; basic communication
  Policy informatics analysis capacities covered: data visualization and design

PA 305: Public and nonprofit budgeting and finance
  Policy informatics-savvy public management capacities covered: financial management; performance management; basic communication

PA 308: Decision-making models
  Policy informatics-savvy public management capacities covered: systems thinking; policy as process; research methods; performance management; social media/IT/e-governance
  Policy informatics analysis capacities covered: advanced research methods; data visualization and design techniques; basic modeling skills

PA 311: Policy analysis
  Policy informatics-savvy public management capacities covered: systems thinking; policy as process; research methods; performance management; basic communication
  Policy informatics analysis capacities covered: advanced research methods; data visualization and design; basic modeling skills

PA 317: Systems analysis and strategic management
  Policy informatics-savvy public management capacities covered: systems thinking; policy as process; research methods; performance management; collaborative–cooperative capacity building; basic communication; social media/IT/e-governance
  Policy informatics analysis capacities covered: data visualization and design; basic modeling skills
2.9 Appendix A: University of Vermont’s MPA Program Learning Competencies and Capacities
Each NASPAA core standard is listed below with the UVM-MPA learning capacities mapped to it.

To lead and manage in public governance
• Capacity to understand accountability and democratic theory
• Capacity to manage the lines of authority for public, private, and nonprofit collaboration, and to address sectoral differences to overcome obstacles
• Capacity to apply knowledge of system dynamics and network structures in PA practice
• Capacity to carry out effective policy implementation

To participate in and contribute to the policy process
• Capacity to apply policy streams, cycles, and systems foci to past, present, and future policy issues, and to understand how problem identification impacts public administration
• Capacity to conduct policy analysis/evaluation
• Capacity to employ quantitative and qualitative research methods for program evaluation and action research

To analyze, synthesize, think critically, solve problems, and make decisions
• Capacity to initiate strategic planning, and apply organizational learning and development principles
• Capacity to apply sound performance measurement and management practices
• Capacity to apply sound financial planning and fiscal responsibility
• Capacity to employ quantitative and qualitative research methods for program evaluation and action research

To articulate and apply a public service perspective
• Capacity to understand the value of authentic citizen participation in PA practice
• Capacity to understand the value of social and economic equity in PA practices
• Capacity to lead in an ethical and reflective manner
• Capacity to achieve cooperation through participatory practices

To communicate and interact productively with a diverse and changing workforce and citizenry
• Capacity to undertake high-quality oral, written, and electronically mediated communication and utilize information systems and media to advance objectives
• Capacity to appreciate the value of pluralism, multiculturalism, and cultural diversity
• Capacity to carry out effective human resource management

NASPAA Network of Schools of Public Affairs and Administration; UVM University of Vermont; MPA Master of Public Administration; PA public administration
References
Argyris C, Schön DA (1996) Organizational learning II: theory, method, and practice. Addison-Wesley, Reading
Bryson J (2011) Strategic planning for public and nonprofit organizations: a guide to strengthening and sustaining organizational achievement. Jossey-Bass, San Francisco
Caiden N (1981) Public budgeting and finance. Blackwell, New York
Castle J (2014) Visualizing natural gas industry contributions in Pennsylvania government. PA 317 final class project
Desouza KC (2014) Realizing the promise of big data: implementing big data projects. IBM Center for the Business of Government, Washington, DC
Endsley MR (1995) Toward a theory of situation awareness in dynamic systems. Hum Factors 37(1):32–64
Fernandez L (2013) An ESRD system dynamics population model for the United States. Final project for PA 308
Hur Y, Hackbart M (2009) MPA vs. MPP: a distinction without a difference? J Public Aff Educ 15(4):397–424
Katz D, Kahn R (1978) The social psychology of organizations. Wiley, New York
Kingdon J (1984) Agendas, alternatives, and public policies. Harper Collins, New York
Koliba C, Zia A (2013) Complex systems modeling in public administration and policy studies: challenges and opportunities for a meta-theoretical research program. In: Gerrits L, Marks PK (eds) COMPACT I: public administration in complexity. Emergent, Litchfield Park
Koliba C, Meek J, Zia A (2010) Governance networks in public administration and public policy. CRC, Boca Raton
Koliba C, Zia A, Lee B (2011) Governance informatics: utilizing computer simulation models to manage complex governance networks. Innov J Innov Publ Sect 16(1):1–26 (Article 3). https://www.innovation.cc/scholarly-style/koliba_governance_informaticsv16i1a3.pdf
Korten DC (2001) The management of social transformation. In: Stivers C (ed) Democracy, bureaucracy, and the study of administration. Westview, Boulder, pp 476–497
Loorbach D (2007) Transition management: new modes of governance for sustainable development. International Books, Utrecht
Mergel I (2013) Social media adoption and resulting tactics in the U.S. federal government. Gov Inf Quart 30(2):123–130
Moynihan DP (2008) The dynamics of performance management: constructing information and reform. Georgetown University Press, Washington, DC
O'Leary R, Bingham L (eds) (2009) The collaborative public manager: new ideas for the twenty-first century. Georgetown University Press, Washington, DC
Patton M (2008) Utilization-focused evaluation. Sage, New York
Schick A (1966) The road to PPB: the stages of budget reform. Public Admin Rev 26(4):243–259
Senge PM (1990) The fifth discipline: the art and practice of the learning organization. Doubleday Currency, New York
Stacey RD (2001) Complex responsive processes in organizations: learning and knowledge creation. Routledge, London
Willoughby WF (1918) The movement of budgetary reform in the states. D. Appleton, New York
Chapter 3 The Quality of Social Simulation: An Example from Research Policy Modelling
Petra Ahrweiler and Nigel Gilbert
Abstract This chapter deals with the assessment of the quality of a simulation. The first section points out the problems of the standard view and the constructivist view in evaluating social simulations. A simulation is good when we get from it what we originally would have liked to get from the target; in this, the assessment of the simulation is guided by the expectations, anticipations, and experience of the community that uses it. This makes the user community view the most promising mechanism to assess the quality of a policy-modelling exercise. The second section looks at a concrete policy-modelling example to test this idea. It shows that the very first negotiation and discussion with the user community to identify their questions is highly user-driven, interactive, and iterative. It requires communicative skills, patience, willingness to compromise on both sides, and motivation to make the formal world of modellers and the narrative world of practical policy making meet. Often, the user community is involved in providing data for calibrating the model. It is not an easy issue to confirm the existence, quality, and availability of data and check for formats and database requirements. As the quality of the simulation in the eyes of the user will very much depend on the quality of the informing data and the quality of the model calibration, much time and effort need to be spent in coordinating this issue with the user community. Last but not least, the user community has to check the validity of simulation results and has to believe in their quality. Users have to be enabled to understand the model, to agree with its processes and ways to produce results, to judge similarity between empirical and simulated data, etc. Although the user community view might be the most promising, it is the most work-intensive mechanism to assess the quality of a simulation. Summarising, to trust the quality of a simulation means to trust the process that produced its results.
This process includes not only the design and construction of the simulation model itself but also the whole interaction between stakeholders, study team, model, and findings.
P. Ahrweiler (✉) EA European Academy of Technology and Innovation Assessment GmbH, Bad Neuenahr-Ahrweiler, Germany e-mail: Petra.Ahrweiler@ea-aw.de
N. Gilbert University of Surrey, Guildford, UK
© Springer International Publishing Switzerland 2015. M. Janssen et al. (eds.), Policy Practice and Digital Science, Public Administration and Information Technology 10, DOI 10.1007/978-3-319-12784-2_3
Table 3.1 Comparing simulations

Target
  Caffè Nero simulation: a Venetian café
  Science simulation: the "real system"
Goal
  Caffè Nero simulation: getting "the feeling" (customers) and profit (owners) from it
  Science simulation: getting understanding and/or predictions from it
Model
  Caffè Nero simulation: built by reducing the many features of a Venetian café to a few parameters
  Science simulation: built by reducing the many features of the target to a few parameters
Question
  In both cases: is it a good simulation, i.e. do we get from it what we want?
This chapter deals with the assessment of the quality of a simulation. After discussing this issue on a general level, we apply and test the assessment mechanisms using an example from policy modelling.
3.1 Quality in Social Simulation
The construction of a scientific social simulation implies the following process: “We wish to acquire something from a target entity T. We cannot get what we want from T directly. So, we proceed indirectly. Instead of T we construct another entity M, the ‘model’, which is sufficiently similar to T that we are confident that M will deliver (or reveal) the acquired something which we want to get from T. [. . .] At a moment in time, the model has structure. With the passage of time the structure changes and that is behaviour. [. . .] Clearly we wish to know the behaviour of the model. How? We may set the model running (possibly in special sets of circumstances of our choice) and watch what it does. It is this that we refer to as ‘simulation’ of the target” (quoted with slight modifications from Doran and Gilbert 1994).
We also habitually refer to “simulations” in everyday life, mostly in the sense that a simulation is “an illusory appearance that manages a reality effect” (cf. Norris 1992), or as Baudrillard put it, “to simulate is to feign to have what one hasn’t” while “substituting signs for the real” (Baudrillard 1988). In a previous publication (Ahrweiler and Gilbert 2005), we used the example of the Caffè Nero in Guildford, 50 km southwest of London, as a simulation of a Venetian café—which will serve as the “real” to illustrate this view. The purpose of the café is to “serve the best coffee north of Milan”. It tries to give the impression that you are in a real Italian café—although, most of the time, the weather outside can make the illusion difficult to maintain.
The construction of everyday simulations like Caffè Nero has some resemblance to the construction of scientific social simulations (see Table 3.1):
In both cases, we build models from a target by reducing the characteristics of the latter sufficiently for the purpose at hand; in each case, we want something from the model we cannot achieve easily from the target. In the case of Caffè Nero, we cannot simply go to Venice, drink our coffee, be happy, and return. It is too expensive and
time-consuming. We have to use the simulation. In the case of a science simulation, we cannot get data from the real system to learn about its behaviour. We have to use the simulation.
The question, whether one or the other is a good simulation, can therefore be reformulated as: Do we get from the simulation what we constructed it for?
Heeding these similarities, we shall now try to apply evaluation methods typically used for everyday simulations to scientific simulations, and vice versa. Before doing so, we shall briefly discuss the "ordinary" method of evaluating simulations, called the "standard view", and its adversary, a constructivist approach asserting that "anything goes".
3.1.1 The Standard View
The standard view refers to the well-known questions and methods of verification (whether the code does what it is supposed to do and whether there are any bugs) and validation (whether the outputs, for given inputs and parameters, resemble observations of the target), although, because the processes being modelled are stochastic and because of unmeasured factors, identical outputs are not to be expected; both are discussed in detail in Gilbert and Troitzsch (1997). This standard view relies on a realist perspective because it refers to the observability of reality in order to compare the "real" with artificial data produced by the simulation.
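In code, the validation half of this standard view amounts to checking that observed data are statistically consistent with the distribution of model outputs, not identical to any single run. A minimal sketch, with an invented toy model and an invented "observed" value:

```python
# A minimal illustration of validation under the standard view: run a
# stochastic model several times and check that the observed value is
# statistically consistent with the simulated distribution, rather than
# expecting run-for-run identity. All numbers here are invented.
import random
import statistics

random.seed(1)

def model_run(n_agents=200, p_adopt=0.3):
    """Toy stochastic model: how many agents adopt a policy innovation."""
    return sum(random.random() < p_adopt for _ in range(n_agents))

observed = 63                      # e.g., adopters counted in the target system
runs = [model_run() for _ in range(500)]
mean, sd = statistics.mean(runs), statistics.stdev(runs)

# Validation check: is the observation within roughly 2 standard deviations
# of the simulated distribution? Identical outputs are not expected.
consistent = abs(observed - mean) <= 2 * sd
print(f"simulated mean {mean:.1f} (sd {sd:.1f}); observed {observed}; "
      f"consistent: {consistent}")
```

The tolerance band makes explicit the point in the text: stochastic processes and unmeasured factors mean that validity is judged distributionally, not by exact match.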
Applying the standard view to the Caffè Nero example, we can find quantitative and sometimes qualitative measures for evaluating the simulation. Using quantitative measures of similarity between it and a "real" Venetian café, we can ask, for example,
• Whether the coffee tastes the same (by measuring, for example, a quality score at blind tasting),
• Whether the Caffè is a cool place (e.g. measuring the relative temperatures inside and outside),
• Whether the noise level is the same (using a dB meter for measuring purposes),
• Whether the lighting level is the same (using a light meter), and
• Whether there are the same number of tables and chairs per square metre for the customers (counting them), and so on.
In applying qualitative measures of similarity, we can again ask:
• Whether the coffee tastes the same (while documenting what comes to mind when customers drink the coffee),
• Whether the Caffè is a “cool” place (this time meaning whether it is a fashionable place to hang out),
• Whether it is a vivid, buzzing place, full of life (observing the liveliness of groups of customers),
• Whether there is the same pattern of social relationships (difficult to operationalise: perhaps by observing whether the waiters spend their time talking to the customers or to the other staff), and
• Whether there is a ritual for serving coffee and whether it is felt to be the same as in a Venetian café.
The assumption lying behind these measures is that there is a “real” café and a “simulation” café and that in both of these, we can make observations. Similarly, we generally assume that the theories and models that lie at the base of science simulations are well grounded and can be validated by observation of empirical facts. However, the philosophy of science forces us to be more modest.
3.1.1.1 The Problem of Under-determination
Some philosophers of science argue that theories are under-determined by observational data or experience, that is, the same empirical data may be in accord with many alternative theories. An adherent of the standard view would respond that one important role of simulations (and of any form of model building) is to derive from theories as many testable implications as possible, so that eventually validity can be assessed in a cumulative process.1 Simulation is indeed a powerful tool for testing theories in that way, if we are followers of the standard view.
However, the problem that theories are under-determined by empirical data cannot be solved by cumulative data gathering: it is more general and therefore more serious. The under-determination problem is not about a missing quantity of data but about the relation between data and theory. As Quine (1977) presents it: if it is possible to construct two or more incompatible theories by relying on the same set of experimental data, the choice between these theories cannot depend on "empirical facts". Quine showed that there is no procedure to establish a relation of uniqueness between theory and data in a logically exclusive way. This leaves us with an annoying freedom: "sometimes, the same datum is interpreted by such different assumptions and theoretical orientations using different terminologies that one wonders whether the theorists are really thinking of the same datum" (Harbodt 1974, p. 258 f., own translation).
The proposal mentioned above to solve the under-determination problem by simulation does not touch the underlying reference problem at all. It just extends the theory, adding to it its "implications", in the hope that these are more easily testable than the theory's core theorems. The general reference between theoretical statement, be it implication or core theorem, and observed data is not changed by this extension: the point here is that we cannot establish a relation of uniqueness between the observed data and the theoretical statement. This applies to any segment of theorising, at the centre or at the periphery of the theory, on any level: a matter that cannot be improved by a cumulative strategy.
1 We owe the suggestion that simulation could be a tool to make theories more determined by data to one of the referees of Ahrweiler and Gilbert (2005).
3.1.1.2 The Theory-Ladenness of Observations
Observations are supposed to validate theories, but in fact theories guide our observations, decide on our set of observables, and prepare our interpretation of the data. Take, for example, the different concepts of two authors concerning Venetian cafés: For one, a Venetian café is a quiet place to read newspapers and relax with a good cup of coffee; for the other, a Venetian café is a lively place to meet and talk to people with a good cup of coffee. The first attribute of these different conceptions of a Venetian café is supported by one and the same observable, namely the noise level, although one author expects a low level, the other a high one. The second attribute is completely different: the first conception is supported by a high number of newspaper readers, the second by a high number of people talking. Accordingly, a "good" simulation would mean a different thing for each of the authors. A good simulation for one would be a poor simulation for the other and vice versa. Here, you can easily see the influence of theory on the observables. This example could just lead to an extensive discussion about the "nature" of a Venetian café between two authors, but the theory-ladenness of observations again leads to more serious difficulties. Our access to data is compromised by involving theory, with the consequence that observations are not the "bed rock elements" (Balzer et al. 1987) our theories can safely rely on. At the very base of theory is again theory. The attempt to validate our theories by "pure" theory-neutral observational concepts is mistaken from the beginning.
Balzer et al. summarise the long debate about the standard view on this issue as follows: "First, all criteria of observability proposed up to now are vulnerable to serious objections. Second, these criteria would not contribute to our task because in all advanced theories there will be no observational concepts at all—at least if we take 'observational' in the more philosophical sense of not involving any theory. Third, it can be shown that none of the concepts of an advanced theory can be defined in terms of observational concepts" (Balzer et al. 1987, p. 48). Not only can you not verify a theory by empirical observation, but you cannot even be certain about falsifying a theory. A theory is not validated by "observations" but by other theories (observational theories). Because of this reference to other theories, which in fact forms a nested structure, the theory-ladenness of each observation has negative consequences for the completeness and self-sufficiency of scientific theories (cf. Carrier 1994, pp. 1–19). These problems apply equally to simulations, which are just theories in process.
We can give examples of these difficulties in the area of social simulation. To compare Axelrod's The Evolution of Cooperation (Axelrod 1984) and all the subsequent work on iterated prisoners' dilemmas with the "real world", we would need to observe "real" IPDs, but this cannot be done in a theory-neutral way. The same problems arise with the growing body of work on opinion dynamics (e.g. Deffuant et al. 2000; Ben-Naim et al. 2003; Weisbuch 2004). The latter starts with some simple assumptions about how agents' opinions affect the opinions of other agents, and shows under which circumstances the result is a consensus, polarisation, or fragmentation. However, how could these results be validated against observations without involving again a considerable amount of theory?
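As an illustration of the opinion dynamics work referred to here, the following sketch implements a bounded-confidence model in the style of Deffuant et al. (2000): randomly chosen pairs of agents move their opinions toward each other only when they already lie within a confidence bound eps. The parameter values are illustrative, not taken from any of the cited studies.

```python
# Bounded-confidence opinion dynamics in the style of Deffuant et al. (2000).
# Random pairs interact; if their opinions differ by less than eps, each
# moves a fraction mu of the gap toward the other. Parameters are illustrative.
import random

random.seed(7)

def deffuant(n=100, eps=0.2, mu=0.5, steps=20_000):
    opinions = [random.random() for _ in range(n)]   # opinions in [0, 1]
    for _ in range(steps):
        i, j = random.sample(range(n), 2)            # pick a distinct pair
        diff = opinions[j] - opinions[i]
        if abs(diff) < eps:                          # close enough to interact
            opinions[i] += mu * diff                 # converge pairwise
            opinions[j] -= mu * diff
    return sorted(opinions)

final = deffuant()
# Count opinion clusters: a gap wider than eps separates stable clusters
clusters = 1 + sum(b - a > 0.2 for a, b in zip(final, final[1:]))
print(f"opinion clusters after convergence: {clusters}")
```

Depending on eps, such runs end in consensus (one cluster), polarisation (two), or fragmentation (several); the validation question raised in the text is how any of these simulated outcomes could be compared with "observed" opinion distributions without theory entering the comparison.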
Important features of the target might not be observable at all. We cannot, for example, observe learning. We can just use some indicators to measure the consequences of learning and assume that learning has taken place. In science simulations, the lack of observability of significant features is one of the prime motivations for carrying out a simulation in the first place.
There are also more technical problems. Validity tests should be “exercised over a full range of inputs and the outputs are observed for correctness” (Cole 2000, p. 23). However, the possibility of such testing is rejected: “real life systems have too many inputs, resulting in a combinatorial explosion of test cases”. Therefore, simulations have “too many inputs/outputs to be able to test strictly” (Cole 2000, p. 23).
While this point does not refute the standard view in principle but only emphasises difficulties in execution, the former arguments reveal problems arising from the logic of validity assessment. We can try to marginalise, neglect, or even deny these problems, but this will disclose our position as mere “believers” of the standard view.
3.1.2 The Constructivist View
Validating a simulation against empirical data is not about comparing "the real world" and the simulation output; it is comparing what you observe as the real world with what you observe as the output. Both are constructions of an observer and his/her views concerning relevant agents and their attributes. Constructing reality and simulation are just two ways of an observer seeing the world. The issue of object formation is not normally considered by computer scientists relying on the standard view: data is "organized by a human programmer who appropriately fits them into the chosen representational structure. Usually, researchers use their prior knowledge of the nature of the problem to hand-code a representation of the data into a near-optimal form. Only after all this hand-coding is completed is the representation allowed to be manipulated by the machine. The problem of representation-formation [. . .] is ignored" (Chalmers et al. 1995, p. 173).
However, what happens if we question the possibility of validating a simulation by comparing it with empirical data from the "real world"? We need to refer to the modellers/observers in order to get at their different constructions. The constructivists reject the possibility of evaluation because there is no common "reality" we might refer to. This observer-oriented opponent of the realist view is a nightmare to most scientists: "Where anything goes, freedom of thought begins. And this freedom of thought consists of all people blabbering around and everybody is right as long as he does not refer to truth. Because truth is divisible like the coat of Saint Martin; everybody gets a piece of it and everybody has a nice feeling" (Droste 1994, p. 50).
Clearly, we can put some central thoughts from this view much more carefully: “In dealing with experience, in trying to explain and control it, we accept as legitimate and appropriate to experiment with different conceptual settings, to combine the flow of experience to different ‘objects”’ (Gellner 1990, p. 75).
3 The Quality of Social Simulation: An Example from Research Policy Modelling 41
However, this still leads to highly questionable consequences: There seems to be no way to distinguish between different constructions/simulations in terms of "truth", "objectivity", "validity", etc. Science is going coffeehouse: Everything is just construction, rhetoric, and arbitrary talk. Can we so easily dismiss the possibility of evaluation?
3.1.3 The User Community View
We take refuge at the place we started from: What happens if we go back to the Venetian café simulation and ask for an evaluation of its performance? It is probably the case that most customers in the Guildford Caffè Nero have never been to an Italian café. Nevertheless, they manage to "evaluate" its performance—against their concept of an Italian café that is not inspired by any "real" data. However, there is something "real" in this evaluation, namely the customers, their constructions, and a "something" out there, which everybody refers to, relying on some sort of shared meaning and having a "real" discussion about it. The philosopher Searle shows in his work on the Construction of Social Reality (Searle 1997) how conventions are "real": the fact that they are constructed does not make them deficient or reduce them to mere support for a relativistic approach.
Consensus about the "reality observed by us" is generated by an interaction process that must itself be considered real. At the base of the constructivist view is a strong reference to reality, that is, conventions and expectations that are socially created and enforced. When evaluating the Caffè Nero simulation, we can refer to the expert community (customers, owners) who use the simulation to get from it what they would expect to get from the target. A good simulation for them would satisfy the customers who want to have the "Venetian feeling" and would satisfy the owners who want to get the "Venetian profit".
For science, equally, the foundation of every validity discussion is the ordinary everyday interaction that creates an area of shared meanings and expectations. This area takes the place left open by the under-determination of theories and the theoreticity problem of the standard view.2 Our view comes close to that of empirical epistemology, which points out that the criteria for quality assessment "do not come from some a priori standard but rest on the description of the way research is actually conducted" (Kértesz 1993, p. 32).
2 Thomas Nickles claims new work opportunities for sociology at this point: "the job of philosophy is simply to lay out the necessary logico-methodological connections against which the under-determination of scientific claims may be seen; in other words, to reveal the necessity of sociological analysis. Philosophy reveals the depths of the under-determination problem, which has always been the central problem of methodology, but is powerless to do anything about it. Under-determination now becomes the province of sociologists, who see the limits of under-determination as the bounds of sociology. Sociology will furnish the contingent connections, the relations, which a priori philosophy cannot" (Nickles 1989, p. 234 f.).
If the target for a social science simulation is itself a construction, then the simulation is a second-order construction. In order to evaluate the simulation, we can rely on the ordinary (but sophisticated) institutions of (social) science and their practice. The actual evaluation of science comes from answers to questions such as: Do others accept the results as being coherent with existing knowledge? Do other scientists use it to support their work? Do other scientists use it to inspire their own investigations?
An example of such validity discourse in the area of social simulation is the history of the tipping model first proposed by Schelling, and now rather well known in the social simulation community. The Schelling model purports to demonstrate the reasons for the persistence of urban residential segregation in the USA and elsewhere. It consists of a grid of square cells, on which are placed agents, each either black or white. The agents have a “tolerance” for the number of agents of the other colour in the surrounding eight cells that they are content to have around them. If there are “too many” agents of the other colour, the unhappy agents move to other cells until they find a context in which there are a tolerable number of other-coloured agents. Starting with a random distribution, even with high levels of tolerance, the agents will still congregate into clusters of agents of the same colour. The point Schelling and others have taken from this model is that residential segregation will form and persist even when agents are rather tolerant.
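The mechanism just described can be reproduced in a short script. The following is a minimal Schelling-style sketch; the grid size, agent numbers and the segregation index are our own illustrative choices, not Schelling's original parameterisation:

```python
import random

def schelling(width=15, height=15, n_agents=160, tolerance=0.6,
              max_sweeps=60, seed=1):
    """Minimal Schelling tipping model. `tolerance` is the largest
    fraction of other-colour neighbours an agent will accept before
    moving to a random empty cell. Returns the mean fraction of
    same-colour neighbours (about 0.5 = mixed, 1.0 = fully segregated)."""
    random.seed(seed)
    cells = [(x, y) for x in range(width) for y in range(height)]
    occupied = dict(zip(random.sample(cells, n_agents),
                        [i % 2 for i in range(n_agents)]))  # two colours

    def neighbour_colours(pos):
        x, y = pos
        return [occupied[(x + dx, y + dy)]
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0) and (x + dx, y + dy) in occupied]

    def unhappy(pos):
        neigh = neighbour_colours(pos)
        if not neigh:
            return False
        other = sum(1 for c in neigh if c != occupied[pos])
        return other / len(neigh) > tolerance

    for _ in range(max_sweeps):
        movers = [p for p in occupied if unhappy(p)]
        if not movers:
            break  # everyone is content
        for agent in movers:
            if agent in occupied and unhappy(agent):
                empty = random.choice([c for c in cells if c not in occupied])
                occupied[empty] = occupied.pop(agent)

    same = [sum(1 for c in neighbour_colours(p) if c == occupied[p])
            / len(neighbour_colours(p))
            for p in occupied if neighbour_colours(p)]
    return sum(same) / len(same)
```

Running this with even quite tolerant agents produces a segregation index well above the randomly mixed baseline, which is the tipping effect the text describes.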
The obvious place to undertake a realist validation of this model is a US city. One could collect data about residential mobility and, perhaps, on “tolerance”. However, the exercise is harder than it looks. Even US city blocks are not all regular and square, so the real city does not look anything like the usual model grid. Residents move into the city from outside, migrate to other cities, are born and die, so the tidy picture of mobility in the model is far from the messy reality. Asking residents how many people of the other colour they would be tolerant of is also an exercise fraught with difficulty: the question is hypothetical and abstract, and answers are likely to be biased by social desirability considerations. Notwithstanding these practical methodological difficulties, some attempts have been made to verify the model. The results have not provided much support. For instance, Benenson (2005) analysed residential distribution for nine Israeli cities using census data and demonstrated that whatever the variable tested—family income, number of children, education level— there was a great deal of ethnic and economic heterogeneity within neighbourhoods, contrary to the model’s predictions.
This apparent lack of empirical support has not, however, dimmed the fame of the model. The difficulty of obtaining reliable data provides a ready answer to doubts about whether the model is "really" a good representation of urban segregation dynamics. Another response has been to elaborate the model at the theoretical level. For instance, Bruch (2005) demonstrates that clustering only emerges in Schelling's model for discontinuous functional forms of residents' opinions, while data from surveys suggest that people's actual decision functions for race are continuous. She shows that using income instead of race as the sorting factor also does not lead to clustering, but if it is assumed that both race and income are significant, segregation appears. Thus, the model continues to be influential, although it has little or no empirical support, because it remains a fruitful source for theorising and for developing new models. In short, it satisfies the criterion that it is "valid" because it generates further scientific work.
Summarising the first part of this chapter, we have argued that a simulation is good when we get from it what we originally would have liked to get from the target. It is good if it works. As Glasersfeld (1987, p. 429) puts it: "Anything goes if it works". The evaluation of the simulation is guided by the expectations, anticipations and experience of the community that uses it—for practical purposes (Caffè Nero), or for intellectual understanding and for building new knowledge (science simulation).
3.2 An Example of Assessing Quality
In this part, we will apply and test the assessment mechanisms outlined above, using as an example our work with the Simulating Knowledge Dynamics in Innovation Networks (SKIN) model in its application to research policy modelling.
There are now a number of policy-modelling studies using SKIN (Gilbert et al. 2014). We will here refer to just one recent example, on the impact assessment and ex-ante evaluation of European funding policies in the Information and Communication Technologies (ICT) research domain (Ahrweiler et al. 2014b).
3.2.1 A Policy-Modelling Application of SKIN
The basic SKIN model has been described and discussed in detail elsewhere (e.g. Pyka et al. 2007; Gilbert et al. 2007; Ahrweiler et al. 2011). At its most general level, SKIN is an agent-based model whose agents are knowledge-intensive organisations, which try to generate new knowledge through research, whether basic or applied, or to create new products and processes through innovation. Agents are located in a changing and complex social environment that evaluates their performance, e.g. the market if the agents target innovation, or the scientific community if the agents target publications through their research activities. Agents have various options to act: each agent has an individual knowledge base called its "kene" (cf. Gilbert 1997), which it takes as the source and basis for its research and innovation activities. The kene is not static: the agent can learn, either alone, by doing incremental or radical research, or from others, by exchanging and improving knowledge in partnerships and networks. The latter feature is important because research and innovation happen in networks, both in science and in knowledge-intensive industries. This is why SKIN agents have a variety of strategies and mechanisms for collaborative arrangements, i.e. for choosing partners, forming partnerships, starting knowledge collaborations, creating collaborative outputs, and distributing rewards. In summary, a SKIN application usually has agents interacting on the knowledge level and on the social level, with the two levels interconnected. It is all about knowledge and networks.
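As a rough illustration of these mechanisms, the sketch below shows one way the kene and the learning options might look in code. The data layout (capability/ability/expertise triples) follows the published SKIN descriptions, but the update rules here are simplified stand-ins, not the actual SKIN implementation:

```python
import random
from dataclasses import dataclass

@dataclass
class Triple:
    capability: int    # broad field of knowledge
    ability: float     # specialisation within that field
    expertise: int     # level of mastery

class Agent:
    def __init__(self, triples):
        self.kene = list(triples)  # the agent's knowledge base

    def incremental_research(self):
        """Learning alone, by doing: vary one ability slightly."""
        t = random.choice(self.kene)
        t.ability = max(0.0, t.ability + random.uniform(-1.0, 1.0))

    def radical_research(self):
        """Learning alone, radically: jump into a new capability field."""
        t = random.choice(self.kene)
        t.capability = random.randrange(1000)
        t.expertise = 1  # expertise starts low in a new field

    def learn_from(self, partner):
        """Learning from others in a partnership: adopt a knowledge
        unit the agent does not yet hold."""
        candidate = random.choice(partner.kene)
        if candidate.capability not in {t.capability for t in self.kene}:
            self.kene.append(Triple(candidate.capability,
                                    candidate.ability, 1))
```

In the full model, such partnership learning is what links the knowledge level to the social level: who an agent collaborates with determines what it can come to know.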
This general architecture is quite flexible, which is why the SKIN model has been called a "platform" (cf. Ahrweiler et al. 2014a). It has been used for a variety of applications, ranging from small ones such as simulating the Vienna biotech cluster (Korber and Paier 2014), through intermediate ones such as simulating the Norwegian defence industry (Castelacci et al. 2014), to large-scale applications such as the EU-funded ICT research landscape in Europe (Ahrweiler et al. 2014b). We will use the latter study as an example after explaining why the SKIN model is appropriate for realistic policy modelling in particular.
The birth of the SKIN model was inspired by the idea of bringing a theory on innovation networks, stemming mainly from innovation economics and economic sociology, onto the computer—a computer theory, which can be instantiated, calibrated, tested, and validated by empirical data. In 1998, the first EU project developing the model, "Simulating Self-Organizing Innovation Networks" (SEIN), consisted of a three-step procedure: theory formation, empirical research collecting data both on the quantitative and on the case study level, and agent-based modelling implementing the theory and using the data to inform the model (Pyka et al. 2003).
This is why the SKIN model applications use empirical data and claim to be "realistic simulations" insofar as the aim is to derive conclusions by "inductive theorising". The quality of the SKIN simulation derives from an interaction between the theory underlying the simulation and the empirical data used for calibration and validation.
In what way does the SKIN model handle empirical data? We will now turn to our policy-modelling example to explain the data-to-model workflow, which is introduced in greater detail in Schilperoord and Ahrweiler (2014).
3.2.1.1 Policy Modelling for Ex-ante Assessment of EU Funding Programmes
The INFSO-SKIN application, developed for the Directorate General Information Society and Media of the European Commission (DG INFSO), was intended to help to understand and manage the relationship between research funding and the goals of EU policy. The agents of the INFSO-SKIN application are research institutions such as universities, large diversified firms, or small and medium-sized enterprises (SMEs). The model (see Fig. 3.1) simulated real-world activity in which the calls of the Commission specify the composition of consortia, the minimum number of partners, and the length of the project; the deadline for submission; a range of capabilities, a sufficient number of which must appear in an eligible proposal; and the number of projects that will be funded. The rules of interaction and decision implemented in the model corresponded to Framework Programme (FP) rules; to increase the usefulness for policy designers, the names of the rules corresponded closely to FP terminology. For the Calls 1–6 that had occurred in FP7, the model used empirical information on the number of participants and the number of funded projects, together with data on project size (as measured by participant numbers), duration and average funding. Analysis of this information produced data on the functioning of, and relationships within, actual collaborative networks within the context of the FP. Using this data in the model provided a good match with the empirical data from EU-funded ICT networks in FP7: the model accurately reflected what actually happened and could be used as a test bed for potential policy choices (cf. Ahrweiler et al. 2014b).

Fig. 3.1 Flowchart of INFSO-SKIN
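The call rules described above translate naturally into a small data structure plus an eligibility check. The sketch below is our own simplification; the field names are illustrative and do not reproduce the actual INFSO-SKIN code:

```python
from dataclasses import dataclass

@dataclass
class Call:
    min_partners: int     # minimum consortium size
    project_length: int   # project duration in months
    deadline: int         # submission deadline (simulation tick)
    capabilities: set     # range of capabilities named in the call
    min_matching: int     # how many must appear in an eligible proposal
    n_funded: int         # number of projects funded under the call

def eligible(call, partners, proposal_capabilities, tick):
    """A proposal is eligible if it is submitted in time, by a large
    enough consortium, and covers sufficiently many of the call's
    capabilities."""
    return (tick <= call.deadline
            and len(partners) >= call.min_partners
            and len(proposal_capabilities & call.capabilities)
                >= call.min_matching)
```

Encoding the rules this explicitly is what allows the rule names in the model to mirror FP terminology, as described above.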
Altering elements of the model that equate to policy interventions, such as the amount of funding, the size of consortia, or encouraging specific sections of the research community, enabled the use of INFSO-SKIN as a tool for modelling and evaluating the results of specific interactions between policies, funding strategies and agents. Because changing parameters within the model is analogous to applying different policy options in the real world, the model could be used to examine the likely real-world effects of different policy options before they were implemented.
3.2.1.2 The Data-to-Model Workflow
The first contact with “the real world” occurred in the definition phase of the project. What do the stakeholders want to know in terms of policies for a certain research or innovation network? Identifying relevant issues, discussing interesting aspects about them, forming questions and suggesting hypotheses for potential answers formed a first important step. This step was intended to conclude with a set of questions and a corresponding set of designs for experiments using the model that could answer those questions. This was an interactive and participative process between the study team,
which knew about the possibilities and limitations of the model, and the stakeholders, who could be assumed to know the relevant issues in their day-to-day practice of policy making.
After discussing the evaluative questions for the ex-ante evaluation part of this study with the stakeholders from DG INFSO, the following questions were singled out for experiments:
• What if there are no changes, and the funding policies of DG INFSO continue in Horizon 2020 as they were in FP7?
• What if there are changes to the eight thematic areas currently funded in the ICT domain, prioritising certain areas in Horizon 2020?
• What if there are changes to the instruments of funding, funding larger or smaller consortia in Horizon 2020 than in FP7?
• What if there are interventions concerning the scope or outreach of funding, providing much more or much less resource to more or fewer actors?
• What if there are interventions concerning the participation of certain actors in the network (e.g. SMEs)?
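In the model, each of these what-if questions becomes a scenario: the baseline parameterisation with a policy intervention overlaid. A minimal sketch of this pattern follows; all parameter names and values are invented for illustration and differ from the real INFSO-SKIN settings:

```python
# Hypothetical baseline settings; the real model's parameters differ.
BASELINE = {
    "n_thematic_areas": 8,      # ICT themes funded in FP7
    "funding_per_call": 1000,   # arbitrary units
    "min_consortium_size": 3,
    "sme_participation_bonus": 0.0,
}

# Each scenario is expressed as a set of overrides of the baseline.
SCENARIOS = {
    "baseline": {},
    "thematic_change": {"n_thematic_areas": 5},
    "instruments_change": {"min_consortium_size": 6},
    "funding_level_change": {"funding_per_call": 1500},
    "participants_change": {"sme_participation_bonus": 0.2},
}

def scenario_parameters(name):
    """Merge a scenario's overrides into the baseline settings."""
    return {**BASELINE, **SCENARIOS[name]}
```

Expressing interventions as overrides keeps the baseline untouched, so every experiment differs from the status quo in exactly one labelled respect.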
The next step (see Fig. 3.2) was to collect relevant data to address these questions and hypotheses. The issues were not different from the ones every empirical researcher is confronted with. To identify relevant variables for operationalising hypotheses, to be as simple as possible but as detailed as necessary for description and explanation, is in line with the requirements of all empirical social research. For SKIN, the most important data are about knowledge dynamics (e.g. knowledge flows, amount of knowledge, and diversity of knowledge) and their indicators (e.g. publications, patents, and innovative ideas), and about dynamics concerning actors, networks, their measures, and their performance (e.g. descriptive statistics about actors, network analysis measures, and aggregate performance data).
These data were used to calibrate the initial knowledge bases of the agents, the social configurations of agents (“starting networks”), and the configuration of an environment at a given point in time. DG INFSO provided the data needed to calibrate the knowledge bases of the agents (in this case the research organisations in the European research area), the descriptive statistics on agents and networks and their interactions (in this case data on funded organisations and projects in ICT under FP7).
The time series data were used to validate the simulations by comparing the empirical data with the simulation outputs. Once we were satisfied with the model performance in that respect, experiments were conducted and the artificially produced data analysed and interpreted. The stakeholders were again invited to provide their feedback and suggestions about how to fine-tune and adapt the study to their changing user requirements as the study proceeded.
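The comparison of empirical and simulated time series can be operationalised in many ways. One simple and transparent choice is a mean relative error per indicator, as sketched below; the indicator names and the tolerance are illustrative assumptions, not the criteria actually used in the study:

```python
def mean_relative_error(empirical, simulated):
    """Average of |simulated - empirical| / |empirical| over the series,
    skipping points where the empirical value is zero."""
    errs = [abs(s - e) / abs(e) for e, s in zip(empirical, simulated) if e]
    return sum(errs) / len(errs)

def validate(empirical_series, simulated_series, tolerance=0.15):
    """Accept the model when every indicator's mean relative error
    stays within the tolerance; return per-indicator verdicts."""
    report = {name: mean_relative_error(emp, simulated_series[name])
              for name, emp in empirical_series.items()}
    return {name: err <= tolerance for name, err in report.items()}, report
```

Such a per-indicator report also gives stakeholders something concrete to discuss when deciding whether the match with "their" data is close enough.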
The last step was again stakeholder-centred as it involved visualisation and communication of data and results. Here we had to demonstrate the credibility of the work and secure the commitment of the stakeholders to the policy-modelling activity.
We worked from an already existing application of the SKIN model adapted to the European research area (Scholz et al. 2010), implemented the scenarios according to the evaluative questions, and produced artificial data as output of the simulations. The results are reported in the final report presented to the European Cabinet, and were communicated to the stakeholders at DG INFSO.

[Fig. 3.2 is a flowchart with the following components: evaluative questions for Horizon 2020, broken into five scenarios (baseline, thematic change, instruments change, funding level change, participants change); the INFSO FP7 database (calls, themes, participants, projects); a scenario development tool (Java); the SKIN model (NetLogo and Java) within a computational policy lab (MySQL); a simulation database (CSV) holding calls, themes, participants, proposals, projects and knowledge flows; and network visualisation and statistics (Gephi), yielding expected impacts on participants, proposals, projects, knowledge and networks.]

Fig. 3.2 Horizon 2020 study workflow (Schilperoord and Ahrweiler 2014). First (on the left), a set of issues was isolated, in discussion with stakeholders. Data describing the network of FP7 projects and participants, by theme and Call, obtained from DG INFSO were entered into a database. These data were used to calibrate the INFSO-SKIN model. This model was then used to generate simulated data under various policy options. The simulated data were fed into a second database and visualised using additional network visualisation and statistical software in order to assess the expected impacts of those policy options
3.2.2 The INFSO-SKIN Example as Seen by the Standard View
The standard view refers to verification, namely whether the code does what it is supposed to do, and validation, namely whether the outputs (for given inputs/parameters) sufficiently resemble observations of the target. To aid in verifying the model, it was completely recoded in another programming language and the two implementations cross-checked to ensure that they generated the same outputs given the same inputs.
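Such cross-checking of two independent implementations can itself be automated. A generic sketch follows; the toy statistic here stands in for the full model outputs that the actual study compared:

```python
def cross_check(impl_a, impl_b, test_inputs):
    """Run two independent implementations of the same specification
    on identical inputs and report where their outputs disagree."""
    return [(inp, a, b)
            for inp in test_inputs
            for a, b in [(impl_a(inp), impl_b(inp))]
            if a != b]

# Two deliberately different implementations of the same specification:
# the sum of squares of the first n natural numbers.
def sum_squares_loop(n):
    total = 0
    for k in range(1, n + 1):
        total += k * k
    return total

def sum_squares_formula(n):
    return n * (n + 1) * (2 * n + 1) // 6
```

An empty mismatch list does not prove both implementations correct, but agreement between independently written versions makes a shared coding error much less likely, which is the rationale behind the recoding exercise.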
To enable validation of the model, we needed to create a simulation resembling the stakeholders' own world as they perceived it. The simulation needed to create the effect of similar complexity, similar structures and processes, and similar objects and options for interventions. Falling below this similarity threshold would have led to rejection of the model as an unrealistic "toy model", under-determined by empirical data.
In the eyes of these stakeholders, the more features of the model that can be validated against empirical data points, the better. Of course, there will always be an empirical “under-determination” of the model due to the necessary selection and abstraction process of model construction, empirical unobservables, missing data for observables, random features of the model, and so on. However, to find the “right” trade-off between empirical under-determination and model credibility was a crucial issue in the discussions between the study team and the stakeholders.
3.2.3 The INFSO-SKIN Example as Seen by the Constructivist View
The strength of a modelling methodology lies in the opportunity to ask what-if questions (ex-ante evaluation), an option that is normally not easily available in the policy-making world. INFSO-SKIN uses scenario modelling as a worksite for "reality constructions", in line with Gellner's statement quoted above about the constructivist approach: "In dealing with experience, in trying to explain and control it, we accept as legitimate and appropriate to experiment with different conceptual settings, to combine the flow of experience to different 'objects'" (Gellner 1990, p. 75). Scenario modelling was employed in the study both for the impact assessment of existing funding policies, where we measured the impact of policy measures by experimenting with scenarios in which these policies are absent, changed, or meet different conditions, and for ex-ante evaluation, where we developed a range of potential futures for the European Research Area in ICT by asking what-if questions.
These are in-silico experiments that construct potential futures. Is this then a relativist approach where “anything goes”, because everything is just a construction? For the general aspects of this question, we refer to Part I of this article. There we talk about the “reality requirements” of the constructivist approach, which mediates its claims. For the limits of constructivist ideas applied to SKIN, we refer to Sect. 2.1.
3.2.4 The INFSO-SKIN Example as Seen by the User Community View
The user community view is the most promising, though also the most work-intensive, mechanism for assessing the quality of this policy-modelling exercise.
3.2.4.1 Identifying User Questions
In our example, SKIN was applied to a tender study with a clear client demand behind it, where the questions the simulation needed to answer were more or less predefined from the outset of the project. Enough time should, however, be dedicated to identifying and discussing the exact set of questions the stakeholders want to see addressed. We found that the best way to do this is to apply an iterative process of communication between study team and clients, in which stakeholders learn about the scope and applicability of the methods, and researchers get acquainted with the problems policy makers have to solve and with the kinds of decisions for which sound background information is needed. This iterative process should result in an agreed set of questions for the simulation, which will very often differ decisively from the set proposed at the start of the study. In our example, the so-called "steering committee", consisting of policy makers and evaluation experts of DG INFSO, was assigned to us by the European Commission.
There are various difficulties and limitations to overcome in identifying user questions. In the case of the DG INFSO study, although the questions under study were outlined in great detail in the Tender Specifications, this was a complicated negotiation process in which the stakeholder group:
• Had to find out about the exact nature and direction of their questions while they talked to the study team;
• Had questioned the original set of the Tender Specifications in the meantime and negotiated among each other for an alternative set;
• Did not share the same opinion about what questions should be in the final sample, and how potential questions should be ranked in importance;
• Did not share the same hypotheses about questions in the final sample.
The specification of evaluative questions might be the first time stakeholders talk to each other and discuss their viewpoints.
What is the process for identifying user questions for policy modelling? In the INFSO-SKIN application, the following mechanism was used by the study team and proved to be valuable:
• Scan written project specification by client (in this case the Tender Specifications of DG INFSO) and identify the original set of questions;
• Do a literature review and context analysis for each question (policy background, scope, meaning, etc.) to inform the study team;
• Meet stakeholders to get their views on written project specifications and their view on the context of questions; inform the stakeholders about what the model is about, what it can and cannot do; discuss until the stakeholder group and the study team are “on the same page”;
• Evaluate the meeting and revise original set of questions if necessary (probably an iterative process between study team and different stakeholders individually where study team acts as coordinator and mediator of the process);
• Meet stakeholders to discuss the final set of questions, get their written consent on this, and get their hypotheses concerning potential answers and potential ways to address the questions;
• Evaluate the meeting and develop experiments that are able to operationalise the hypotheses and address the questions;
• Meet stakeholders and get their feedback and consent that the experiments meet questions/hypotheses;
• Evaluate the meeting and refine the experimental setup concerning the final set of questions.
This negotiation and discussion process is highly user-driven, interactive, and iterative. It requires communicative skills, patience, willingness to compromise on both sides, and motivation to bring the two worlds together: the formal world of the modellers and the narrative world of policy making in practice. The process is highly time-consuming. In our example, we needed about 6 months of a 12-month contract research study to reach satisfactory results on this first step.
3.2.4.2 Getting Their Best: Users Need to Provide Data
The study team will know best what types of empirical data are needed to inform the policy modelling. In SKIN, data availability is an important issue, because the findings need to be evidence-based and realistic. This is in the best interest of the stakeholders, who need to trust the findings. Trust is the more likely the more closely the simulated data resemble the empirical data known to the user (see Sect. 2.1). However, the study team might discover that the desired data is not available, either because it does not exist or because it is not willingly released by the stakeholders or whoever holds it.
In our example, the stakeholders were data collectors on a big scale themselves. The evaluation unit of DG INFSO employs a data collection group, which provides information about funded projects and organisations at a detailed level. Furthermore, the DG is accustomed to providing data to the study teams it contracts for its evaluation projects. Consequently, we benefitted from a large and clean database covering all issues the study team was interested in. However, it was still necessary to confirm the existence, quality and availability of the data and to check formats and database requirements. Even if the data is there in principle, enough time should be reserved for data management issues. The quality of the simulation in the eyes of the user will very much depend on the quality of the informing data and the quality of the model calibration.
What would have been the more common process if the study team had not struck lucky as in our example? In other SKIN applications, the following mechanism was used by the study team and proved to be valuable (the ones with asterisks apply to our INFSO-SKIN example as well):
• Identify the rough type of data required for the study from the project specifications;
• Estimate financial resources for data access in the project proposal to stakeholders (this can sometimes happen in interaction with the funding body);
• After the second meeting with stakeholders (see Sect. 2.3.1), identify relevant data concerning variables to answer study questions and address/test hypotheses of Sect. 2.3.1*;
• Communicate exact data requirements to those stakeholders who are experts on their own empirical data environment*;
• Review existing databases, including the ones stakeholders might hold or can get access to*;
• Meet stakeholders to discuss data issues; help them understand and agree on the scope and limitations of data access*;
• If needed and required by stakeholders, collect data; • Meet stakeholders to discuss the final database; • Evaluate the meeting and develop data-to-model procedures*.
3.2.4.3 Interacting with Users to Check the Validity of Simulation Results
The stakeholders put heavy demands on the study team concerning understanding and trusting the simulation findings. The first and most important is that the clients want to understand the model. To trust results means to trust the process that produced them. Here, the advantage of the adapted SKIN model is that it relies on a narrative that tells the story of the users' everyday world of decision-making (see Sect. 2.1.1). In the SKIN model, a good example of such “reality” requirements is the necessity to model the knowledge and behaviour of agents. Black-boxing the knowledge of agents or creating merely reactive, simple agents would not have been an option, because stakeholders do not think the world works that way.
The SKIN model is based on empirical quantitative and qualitative research in innovation economics, sociology, science and technology studies, and business studies. Agents and behaviours are informed by what we know about them; the model is calibrated by data from this research. We found that there is a big advantage in having a model in which stakeholders can recognise the relevant features they see at work in their social contexts. In setting up and adapting the model to study needs, stakeholders can actively intervene and ask for additional agent characteristics or behavioural rules; they can refine the model and inform black-box areas where they have information on the underlying processes.
However, here again, we encountered the diversity of stakeholder preferences. Different members of the DG INFSO Steering Committee opted for different changes and modifications of the model. Some were manageable with given time constraints and financial resources; some would have outlived the duration of the project if realised. The final course of action for adapting the model to study needs was the result of discussions between stakeholders about model credibility and increasing complexity and of discussions between stakeholders and the study team concerning feasibility and reducing complexity.
Once the stakeholders were familiar with the features of the model and had contributed to its adaptation to study requirements, there was an initial willingness to trust model findings. This was strengthened by letting the model reproduce FP7 data as the baseline scenario that all policy experiments would be benchmarked against. If the networks created by real life and those created by the agent-based model correspond closely, the simulation experiments can be characterized as history-friendly
52 P. Ahrweiler and N. Gilbert
experiments, which reproduce the empirical data and cover the decisive mechanisms and resulting dynamics of the real networks (see standard view).
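Judging whether real and simulated networks “correspond closely” can be made operational with simple summary statistics. The sketch below is our own illustration, not part of the INFSO-SKIN study; the function names and the toy edge lists are invented. It compares the degree distributions of an empirical and a simulated collaboration network via the total variation distance:

```python
from collections import Counter

def degree_distribution(edges):
    """Map each node to its degree, then count how many nodes share each degree."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return Counter(degree.values())

def distribution_distance(empirical_edges, simulated_edges):
    """Total variation distance between the two degree distributions
    (0.0 = identical shape, 1.0 = completely different)."""
    p = degree_distribution(empirical_edges)
    q = degree_distribution(simulated_edges)
    n_p, n_q = sum(p.values()), sum(q.values())
    degrees = set(p) | set(q)
    return 0.5 * sum(abs(p[d] / n_p - q[d] / n_q) for d in degrees)

# Toy project-collaboration networks (invented for illustration)
empirical = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
simulated = [(1, 2), (1, 3), (2, 3), (3, 4)]
print(distribution_distance(empirical, simulated))  # → 0.0: same degree profile
```

In practice one would compare several such statistics (degree distribution, clustering, component sizes) before calling a run history-friendly; a single indicator can easily agree by chance.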
In presenting the results of the INFSO-SKIN study, however, it became clear that there were, again, certain caveats coming from the user community. The policy analysts did not want to look at a multitude of tables and scan through endless numbers of simulation results for interesting parameters; nor did they expect to watch the running model producing its results, because a typical run lasted 48 hours. Presenting results in an appealing and convincing way required visualisations and interactive methods through which users could intuitively understand what they saw, had access to more detailed information if wanted (e.g. in a hyperlink structure), and could decide for themselves in which format, in which order, and in which detail they wanted to go through the findings. This part of the process still needs further work: new visualisation and interactive technologies can help to make simulation results more accessible to stakeholders.
This leads to the last issue to be discussed in this section. What happens after the credibility of simulation results is established? In the INFSO-SKIN study, the objective was policy advice for Horizon 2020. The stakeholders wanted the study team to communicate the results as “recommendations” rather than as “findings”. They required a so-called “utility summary” that included statements about what they should do in their policy domain justified according to the results of the study. Here the study team proved to be hesitant—not due to a lack of confidence in their model, but due to the recognition of its predictive limitations and a reluctance to formulate normative statements, which were seen as a matter of political opinion and not a responsibility of a scientific advisor. The negotiation of the wording in the Utility Summary was another instance of an intense dialogue between stakeholders and study team. Nevertheless, the extent to which the results influenced or were somehow useful in the actual political process of finalising Horizon 2020 policies was not part of the stakeholder feedback after the study ended and is still not known to us. The feedback consisted merely of a formal approval that we had fulfilled the project contract.
3.3 Conclusions
To trust the quality of a simulation means to trust the process that produced its results. This process is not only the one incorporated in the simulation model itself. It is the whole interaction between stakeholders, study team, model, and findings.
The first section of this contribution pointed out the problems of the standard view and the constructivist view in evaluating social simulations. We argued that a simulation is good when we get from it what we originally would have liked to get from the target; in this, the evaluation of the simulation is guided by the expectations, anticipations, and experience of the community that uses it. This makes the user community view the most promising mechanism to assess the quality of a policy-modelling exercise.
The second section looked at a concrete policy-modelling example to test this idea. It showed that the very first negotiations and discussions with the user community to identify their questions were highly user-driven, interactive, and iterative. It required communicative skills, patience, willingness to compromise on both sides, and motivation to link the formal world of modellers and the narrative world of policy making in practice.
Often, the user community is involved in providing data for calibrating the model. It is not an easy issue to confirm the existence, quality, and availability of the data and check for formats and database requirements. Because the quality of the simulation in the eyes of the user will depend on the quality of the informing data and the quality of the model calibration, much time and effort need to be spent in coordinating this issue with the user community.
Last but not least, the user community has to check the validity of simulation results and has to believe in their quality. Users have to be helped to understand the model, to agree with its processes and ways to produce results, to judge similarity between empirical and simulated data, etc.
The standard view is epistemologically questionable due to the two problems of under-determination of theory and of theory-ladenness of observations; the constructivist view is difficult due to its inherent relativism, which annihilates its own validity claims. The user community view relies on social model building and model assessment practices and, in a way, bridges the two other views, because it rests on the realism of these practices. This is why we advocate its quality assessment mechanisms.
Summarising, in our eyes, the user community view might be the most promising, but it is definitely the most work-intensive mechanism to assess the quality of a simulation. Much depends on who the user community is and who it consists of: if there is more than one member, the user community will never be homogeneous. It is difficult to refer to a “community” if people have radically different opinions.
Furthermore, there are all sorts of practical contingencies to deal with. People might not be interested, or they might not be willing or able to dedicate as much of their time and attention to the study as needed. There is also the time dimension: the users at the end of a simulation project might not be the same as those who initiated it, as a result of job changes, resignations, promotions, and organisational restructuring. Moreover, the user community and the simulation modellers may affect each other, with the modellers helping in some ways to construct a user community in order to solve the practical contingencies that get in the way of assessing the quality of the simulation, while the user community may in turn have an effect on the modellers (not least in terms of influencing the financial and recognition rewards the modellers receive).
If trusting the quality of a simulation indeed means trusting the process that produced its results, then we need to address the entire interaction process between user community, researchers, data, model, and findings as the relevant assessment mechanism. Researchers have to be aware that they are co-designers of the mechanisms they need to participate in with the user community for assessing the quality of a social simulation.
Chapter 4 Policy Making and Modelling in a Complex World
Wander Jager and Bruce Edmonds
Abstract In this chapter, we discuss the consequences of complexity in the real world, together with some meaningful ways of understanding and managing such situations. One implication of such complexity is that many social systems are unpredictable by nature, especially in the presence of structural change (transitions). We briefly discuss the problems arising from a too-narrow focus on quantification in managing complex systems, and criticise some of the approaches that ignore these difficulties and pretend to predict using simplistic models. However, a lack of predictability does not automatically imply a lack of managerial possibilities. We discuss how some insights and tools from “complexity science” can help with such management. Managing a complex system requires a good understanding of the dynamics of the system in question, so that some of the possibilities that might occur are known before they do and can be reacted to as responsively as possible. Agent-based simulation is discussed as a tool suitable for this task, together with its particular strengths and weaknesses.
4.1 Introduction
Some time ago, one of us (WJ) attended a meeting of specialists in the energy sector. A former minister was talking about the energy transition, advocating directing this transition; I sighed, because I realised that the energy transition, involving a multitude of interdependent actors and many unforeseen developments, makes a planned direction of such a process a fundamental impossibility. Yet I decided not to interfere, since my comment would have required a mini-lecture on the management of complex systems, and in the setting of this meeting this would have taken too much time. So the speaker went on, and one of the listeners stood up and asked, “But
W. Jager (✉) Groningen Center of Social Complexity Studies, University of Groningen, Groningen, The Netherlands e-mail: w.jager@rug.nl
B. Edmonds Manchester Metropolitan University, Manchester, UK
© Springer International Publishing Switzerland 2015 57 M. Janssen et al. (eds.), Policy Practice and Digital Science, Public Administration and Information Technology 10, DOI 10.1007/978-3-319-12784-2_4
58 W. Jager and B. Edmonds
Fig. 4.1 Double pendulum. (Source: Wikipedia)
sir, what if the storage capacity of batteries drastically improves?” The speaker answered, “This is an uncertainty we cannot include in our models, so in our transition scenarios we don't include such events”. This remark made clear that, in many cases, policymakers are not aware of the complexities of the systems they operate in, and are not prepared to deal with surprises in these systems. Because the idea of transition is used very frequently to explain wide-ranging changes, such as the transformation of our energy system and the change towards a sustainable society, it seems relevant to address the issue of complexity in this chapter and to discuss the implications for policy making in complex systems. After explaining what complexity is, we will discuss common mistakes made in managing complex systems. Following that, we will discuss the use of models in policy making, specifically addressing agent-based models because of their capacity to model the complex social systems that are often addressed by policy.
4.2 What is Complexity?
The word “complexity” can be used to indicate a variety of kinds of difficulties. However, the kind of complexity we are specifically dealing with in this chapter arises where a system is composed of multiple interacting elements whose possible behavioural states can combine in ways that are hard to predict or characterise. One of the simplest examples is that of a double pendulum (Fig. 4.1).
4 Policy Making and Modelling in a Complex World 59
Although only consisting of a few parts connected by joints, it has complex and unpredictable behaviour when set swinging under gravity. If this pendulum is released, it will move chaotically due to the interactions between the upper (θ1) and lower (θ2) joint. Whereas it is possible to formally represent this simple system in detail, e.g. including aspects such as air pressure and friction in the hinge, the exact behaviour of the double pendulum is unpredictable¹. This is due to the fundamental uncertainty about the precise position of its parts² and the unsolvability of the three-body problem, as proven by Bruns and Poincaré in 1887. Just after release, its motion is predictable to a considerable degree of accuracy, but it then starts to deviate from any prediction until it is moving in a quite different manner. Whereas the precise motion at these stages is not predictable, we know that after a while the swinging motion will become less erratic and ultimately the pendulum will hang still (due to friction). This demonstrates that even in very simple physical systems, interactions may give rise to complex behaviour, ranging from very stable to chaotic. Obviously, many physical systems are much more complicated, such as our atmospheric system. As can be expected, biological and social systems also display complex behaviour, because they are composed of large numbers of interacting agents. Even when such systems are described by a simple set of equations, complex behaviour may arise. This is nicely illustrated by the “logistic equation”, which was originally introduced as a simple model of biological populations in a situation of limited resources (May 1976). Here the population x in the next year (expressed as a proportion of its maximum possible value) is determined from the corresponding value in the last year as rx(1 − x), where r is a parameter (the rate of unrestrained population increase).
Again, this apparently simple model leads to some complex behaviour. Figure 4.2 shows the possible long-term values of x for different values of r, showing that increasing r creates more possible long-term states for x. On the left-hand side (r < 3.0) the state of x is fixed; at higher levels the number of possible states increases, and it increases ever more rapidly until, for levels of r above 3.6, almost any state can occur, indicating a chaotic situation. In this case, although the system may be predictable under some circumstances (low r), in others it will not be (higher r).
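The behaviour summarised in Fig. 4.2 is easy to check computationally. The sketch below is our own (the burn-in and sample sizes are arbitrary choices, not taken from the chapter); it iterates the logistic map x′ = rx(1 − x) and counts the distinct long-term states for several values of r:

```python
def logistic_states(r, x0=0.2, burn_in=10_000, sample=64, decimals=4):
    """Iterate x' = r*x*(1-x), discard a long transient, then count the
    distinct long-term states (rounded to `decimals` places)."""
    x = x0
    for _ in range(burn_in):          # let the transient die out
        x = r * x * (1 - x)
    seen = set()
    for _ in range(sample):           # sample the long-term behaviour
        x = r * x * (1 - x)
        seen.add(round(x, decimals))
    return sorted(seen)

# r = 2.5 → 1 state; r = 3.3 → 2; r = 3.5 → 4; r = 3.9 → many (chaos)
for r in (2.5, 3.3, 3.5, 3.9):
    print(f"r = {r}: {len(logistic_states(r))} long-term state(s)")
```

This reproduces the period-doubling cascade of the bifurcation diagram: one fixed point, then a 2-cycle, a 4-cycle, and finally a chaotic band.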
What is remarkable is that, despite the inherent unpredictability of their environment, organisms have survived and developed intricate webs of interdependence in terms of their ecologies. This is due to the adaptive capacity of organisms, allowing them to self-organise. It is exactly this capacity of organisms to adapt to changing circumstances (learning) that differentiates ‘regular’ complex systems from ‘complex adaptive systems’ (CAS). Hence, complex adaptive systems have a strong capacity to self-organise, which can be seen in, e.g., plant growth, the structure of ant nests, and the organisation of human society. Yet these very systems have been observed to exist in both stable and unstable stages, with notable transitions between these
¹ Obviously, predictions can always be made, but it has been proved analytically that the predictive value of models is zero in these cases.
² Even if one could measure the positions with extreme accuracy, there would never be complete accuracy, due to Heisenberg's (1927) uncertainty principle.
Fig. 4.2 Bifurcation diagram. (Source: Wikipedia)
stages. Ecological science has observed that major transitions of ecological systems towards a different regime are often preceded by increased variances, slower recovery from small perturbations (critical slowing down), and increased return times (Boettiger and Hastings 2012; Dai et al. 2012; Dakos et al. 2012). A classic example is the transition of a clear lake to a turbid state due to eutrophication. Here, an increase in mineral and organic nutrients in the water gives rise to the growth of plants, in particular algae. In the stage preceding a transition, short periods of increased algal blooms may occur, decreasing visibility and oxygen levels, causing the population of top-predator fish that hunt by eyesight to decrease, causing a growth in the populations of other species, and so on. The increased variance (e.g. in the population levels of different species in the lake) indicates that a regime shift is near, and that the lake may radically shift from a clear state to a turbid state with a completely different ecosystem, with an attendant loss of local species.
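Why variance rises near a tipping point can be illustrated with a toy model of our own (not one from the cited studies): recovery from small perturbations near an equilibrium is often linearised as x[t+1] = φ·x[t] + noise, where φ approaches 1 as recovery slows down. The stationary variance, noise_sd²/(1 − φ²), then blows up:

```python
import random
import statistics

def simulate_ar1(phi, n=20_000, noise_sd=1.0, seed=42):
    """Linearised recovery from perturbations near an equilibrium:
    x[t+1] = phi * x[t] + noise. As phi approaches 1 (critical slowing
    down), return times lengthen and the stationary variance grows."""
    rng = random.Random(seed)
    x, xs = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, noise_sd)
        xs.append(x)
    return xs

for phi in (0.5, 0.9, 0.99):
    v = statistics.variance(simulate_ar1(phi))
    print(f"phi = {phi}: variance ≈ {v:.2f} (theory {1 / (1 - phi**2):.2f})")
```

The same logic underlies the empirical early-warning indicators: a rising variance or autocorrelation in monitoring data suggests the system's restoring forces are weakening.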
The hope is that for other complex systems, such indicators may also identify the approach of a tipping point and a regime shift or transition (Scheffer et al. 2009). For policy making, this is a relevant perspective, as it helps in understanding what a transition or regime shift is, and has implications for policy development. A transition implies a large-scale restructuring of a system that is composed of many interacting parts. As such, the energy system and our economy at large are examples of complex systems where billions of actors are involved, and a large number of stakeholders such as companies and countries are influencing each other. The transformation from, for example, a fossil fuel-based economy towards a sustainable energy system requires that many actors that depend on each other have to simultaneously change their behaviour. An analogy with the logistic process illustrated in Fig. 4.2 can be made.
Imagine a move from the lower stable situation x = 0.5 at r = 3.3 to the upper stable situation x = 0.8. This could be achieved by increasing the value of r, moving towards the more turbulent regime of the system, and then reducing r again, allowing the system to settle into the new state. This implies that moving from one stable regime towards another may require a period of turbulence during which the transition can happen. Something like a period of turbulence demarcating regime shifts seems to have occurred during many transitions in the history of the world.
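The need for a transient push between stable regimes can also be sketched in code. The system below is a toy of our own devising, not from the chapter: a cubic differential equation with stable states at x = 0.2 and x = 0.8 and an unstable threshold at x = 0.5. A temporary forcing (the “period of turbulence”) is what carries the state across the threshold:

```python
def run(x0, forcing, dt=0.1, steps=10_000, push_until=5_000):
    """Euler-integrate a toy bistable system with stable equilibria at
    x = 0.2 and x = 0.8 and an unstable threshold at x = 0.5. A constant
    forcing is applied only during the first `push_until` steps."""
    x = x0
    for step in range(steps):
        dxdt = -(x - 0.2) * (x - 0.5) * (x - 0.8)
        if step < push_until:
            dxdt += forcing
        x += dt * dxdt
    return x

print(run(0.25, forcing=0.0))   # no push: relaxes back to the lower state, 0.2
print(run(0.25, forcing=0.05))  # pushed across 0.5, then settles near 0.8
```

Note that the forcing is switched off halfway through: the new regime is self-sustaining once the threshold is crossed, which is the qualitative point of the r-manipulation analogy in the text.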
4.3 Two Common Mistakes in Managing Complex Systems
Turbulent stages in social systems are usually experienced as gruesome by policymakers and managers. Most of them prefer to have a grip on a situation, and try to develop and communicate a clear perspective on how their actions will affect future outcomes. Especially in communicating the rationale of their decisions to the outside world, the complex nature of social systems is often lost. It is neither possible nor particularly useful to try to list all of the “mistakes” that policymakers might make in the face of complex systems, but two of the ways in which systems are oversimplified are quantification and compartmentalisation.
Quantification implies that policy is biased towards those attributes of a system that are easy to quantify. Hence, it comes as no surprise that economic outcomes, in terms of money, are often the dominating criteria in evaluating policy. Often, this results in choosing the solution with the best financial outcome. Whereas non-quantifiable outcomes are often acknowledged, usually the bottom line is that “we obviously have to select the most economically viable option” because “money can be spent only once”. In such a case, many other complex and qualitative outcomes may be undervalued or even ignored, since the complex system has been reduced to easily measurable quantities. In many situations, this causes resistance to policies, because the non-quantifiable outcomes often have an important impact on people's quality of life. An example is the recent earthquakes in the north of the Netherlands due to the extraction of natural gas, where the policy perspective mainly focused on compensating the costs of damage to housing, whereas the population experienced a loss of quality of life due to fear and feelings of unfair treatment by the government, qualities that are hard to quantify and were undervalued in the discussion. The more complex a system is, the more appealing it seems to get a grip on the decision context by quantifying the problem, often in economic terms. Hence, in many complex problems, e.g. related to investments in sustainable energy, the discussion revolves around returns on investment, whereas other relevant qualities, while being acknowledged, lose importance because they cannot be included in the complicated calculations. Further, the ability to encapsulate and manipulate number-based representations in mathematics may give such exercises an appearance of being scientific, and hence reinforce the impression that the situation is under control.
However, what has happened here is a conflation of indicators with the overall quality of the goals and outcomes themselves. Indicators may well be
useful to help judge goals and outcomes; but in complex situations, it is rare that such a judgement can be reduced to such simple dimensions.
Compartmentalisation is a second response of many policymakers in trying to simplify complex social systems. This is a strategy whereby a system or organisation is split into different parts that act (to a large extent) independently of each other as separate entities, with their own goals and internal structures. As a consequence, the policy/management organisation will follow the structure of this division into parts. Being responsible for one part of the system implies that a bias emerges towards optimising the performance of one's own part. This is further stimulated by rewarding managers for the performance of the subsystem they are responsible for, independently of the others. However, this approach makes it difficult to account for spillover effects on other parts of the system, particularly when the outcomes in related parts of the system are more difficult to quantify. An example is the savings on health care concerning psychiatric care. Reducing the maximum number of consultations covered by health insurance resulted in significant financial savings in health care nationally. However, as a result, more people in need of psychiatric help could not afford it, which may, as a consequence, have contributed to an increase in problems such as street crime, annoyance, and deviant behaviour. Because these developments are often qualitative in nature, hard numbers are not available, and hence these effects are debated rather than actually included in policy development. Interestingly, due to this compartmentalisation, the direct financial savings from the reduction of the insurance conditions may be surpassed by the additional costs incurred in various other parts of the system, such as policing, the costs of crime, and an increased need for crisis intervention.
Thus, the problems of quantification and compartmentalisation can exacerbate each other: a quantitative approach may facilitate compartmentalisation, since it makes measurement of each compartment easier, and if one takes simple indicators as one's goals, then it is tempting to reduce institutional structures to separate compartments that can concentrate on these narrow targets. We coin the term “Excellification” (after Microsoft Excel) to express the tendency to use quantitative measurements and to compartmentalise systems when trying to get a grip on them.
Whereas we are absolutely convinced of the value of using measurements in developing and evaluating policy and management, it is our stance that policy making in complex systems requires a deeper understanding of the processes that guide the developments in the system at hand. When trying to steer policy in the face of a complex and dynamic situation, there are essentially two kinds of strategies used in developing this understanding: instrumental and representational. We look at these next, before we discuss how agent-based modelling may contribute to understanding and policy making in complex systems.
4.4 Complexity and Policy Making
An instrumental approach is where one chooses between a set of possible policies and then evaluates them according to some assessment of their past effectiveness.
Fig. 4.3 An illustration of the instrumental approach: from a set of candidate strategies (Strategy 1, Strategy 2, Strategy 3, etc.), one is chosen and put into effect; indicators of the resulting action feed an evaluation of how successful the strategy was, which informs the next choice
In future iterations, one then adapts and/or changes the chosen policy in the light of its track record. The idea is illustrated in Fig. 4.3. This can be a highly adaptive approach, reacting rapidly in the light of the current effectiveness of different strategies. No initial knowledge is needed for this approach; rather, the better strategies develop over time, given feedback from the environment. Perhaps the purest form of this is the “blind variation and selective retention” of Campbell (1960), where new variants of strategies are produced (essentially) at random, and those that work badly are eliminated, as in biological evolution. The instrumental approach works better when there is a sufficient range of strategies to choose between, there is an effective assessment of their efficacy, and the iterative cycle of trial and assessment is rapid and repeated over a substantial period of time. The instrumental approach is often used by practitioners, who might develop a sophisticated “menu” of what strategies seem to work under different sets of circumstances.
An example of this might be adjusting the level of some policy instrument, such as a toll designed to reduce congestion on certain roads. If there is still too much congestion, the toll might be raised; if there is too little usage, it might be progressively lowered.
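This toll-adjustment loop can be sketched in a few lines of Python; the congestion target, tolerance band, and step size below are illustrative assumptions, not values from any actual scheme.

```python
def adjust_toll(toll, congestion, target=0.8, step=0.5):
    """One iteration of the instrumental loop: observe an indicator
    (congestion as a fraction of road capacity) and nudge the policy
    instrument (the toll) up or down accordingly."""
    if congestion > target:
        return toll + step                 # too crowded: raise the toll
    elif congestion < target - 0.2:
        return max(0.0, toll - step)       # under-used: lower the toll
    return toll                            # within the acceptable band

toll = 2.0
for congestion in [0.95, 0.9, 0.85, 0.7, 0.5]:   # observed indicator over time
    toll = adjust_toll(toll, congestion)
```

No model of *why* congestion responds to the toll is needed here; the loop simply reacts to feedback, which is the defining feature of the instrumental approach.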
The representational approach is a little more complicated. One has a series of “models” of the environment, which are assessed by their ability to predict or mirror observed aspects of the environment. The best model is then used to evaluate possible actions in terms of an assessment of the predicted outcomes of those actions, and the action with the best outcome is chosen for enactment. Thus, there are two “loops” involved: one in which the predictions of the models are worked out to see which best predicts what is observed, and a second in which possible actions are evaluated using the best model to determine which action to deploy. Figure 4.4 illustrates this approach. The task of developing, evaluating, and changing the models is an expensive one, so the predictive power of these models needs to be weighed against this cost. Also, the time taken to develop the models means that this approach is often slower to adapt to changes in the environment than a corresponding instrumental approach. However, one significant advantage of this approach is that, as a result of the models, one might have a good idea of why certain things are happening in the environment, and hence know which models might be more helpful,
64 W. Jager and B. Edmonds
Fig. 4.4 An illustration of the representational approach (a cycle: choose one of Model 1, 2, 3, …, work out its predictions of the effects of possible actions, act, perceive the outcomes, and evaluate whether the predictions were accurate)
as well as allowing for the development of longer term strategies addressing the root causes of such change. The representational approach is the one generally followed by scientists because they are interested in understanding what is happening.
An example of the representational approach might be the use of epidemiological models to predict the spread of an animal disease under different containment/mitigation strategies for dealing with the crisis. The models are used to predict the outcomes of various strategies, which can inform the choice of strategy. This prediction can be useful even while the models are being improved with the new data coming in as events unfold.
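The two loops of the representational approach, fitting candidate models to observations and then using the best-fitting model to rank candidate actions, can be sketched as follows; the rival models, observation history, and candidate actions here are purely illustrative.

```python
def best_model(models, history):
    """Loop 1: score each candidate model by how well it predicts
    the observed history, and keep the best one."""
    def error(model):
        return sum((model(x) - y) ** 2 for x, y in history)
    return min(models, key=error)

def best_action(model, actions):
    """Loop 2: use the selected model to predict the outcome of each
    candidate action, and choose the one with the best predicted outcome."""
    return max(actions, key=model)

# Illustrative: two rival models of how an outcome responds to an input,
# plus a short history of (input, observed outcome) pairs.
models = [lambda x: 2 * x, lambda x: x + 3]
history = [(1, 2.1), (2, 3.9), (3, 6.2)]
model = best_model(models, history)          # the 2*x model fits this history best
action = best_action(model, [0.5, 1.0, 1.5]) # rank actions with the chosen model
```

The point of the sketch is the separation of concerns: model selection is driven by fit to observation, while action selection is driven entirely by the chosen model's predictions.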
Of course, these two approaches are frequently mixed. For example, representational models might be used to constrain which strategies are considered within an otherwise instrumental approach (even if the representational models themselves are not very good at prediction). If a central bank is considering what interest rate to set, there is a certain amount of trial and error: exactly how far one has to drop the interest rate to get an economy going might be impossible to predict, and one just has to lower it progressively until the desired effect is achieved. However, some theory will also be useful: one would know, for instance, that dropping interest rates would not be the way to cool an over-heating economy. Thus, even very rough models with relatively poor predictive ability (such as “raising interest rates tends to reduce the volume of economic activity and lowering them increases it”) can be useful.
Complexity theory is useful for the consideration of policy in two different ways. First, it can help provide representational models that might be used to constrain the range of strategies under consideration; second, it can help inform second-order considerations concerning the ways in which policy might be developed and/or adopted (the policy adaptation process itself). In the following section, we first look at the nature and kinds of models so as to inform their best use within policy modelling, and then look at how second-order considerations may inform how we might use such models.
Fig. 4.5 An illustration of some of the opposing desiderata of models: simplicity, generality, validity, and formality
4.4.1 Using Formal Models in Policy Making
The use of models in policy making starts with the question of what the appropriate policy models are. Many models are often available because (1) improving models following the representational approach yields a series of models that progressively better represent the process in terms of cause–effect relations, and (2) sometimes more extended models are required for explaining a process, whereas simpler models are often used to represent a particular behaviour.
Realising that many models are often available, we still have to keep in mind that any model is an abstraction. A useful model is necessarily simpler than what it represents, so that much is left out—abstracted away. However, the decision as to what needs to be represented in a model and what can be safely left out is a difficult one. Some models will be useful in some circumstances and useless in others. Also, a model that is useful for one purpose may well be useless for another. Many of the problems associated with the use of models to aid the formulation and steering of policy derive from an assumption that a model will have value per se, independent of context and purpose.
One of the things that affect the uses to which models can be put is the compromise that went into the formulation of the models. Figure 4.5 illustrates some of these tensions in a simple way.
These illustrated desiderata refer to a model in use. Simplicity is how simple the model is: the extent to which the model itself can be completely understood. Analytically solvable mathematical models, most statistical models, and abstract simulation models are at the relatively simple end of the spectrum. Clearly, a simple model has many advantages in terms of using the model, checking it for bugs and mistakes (Galán et al. 2009), and communicating it. However, when modelling complex systems, such as those policymakers face, such simplicity may not be worth it if gaining it means a loss of other desirable properties. Generality is the extent of the model's scope: how many different kinds of situations the model could usefully be applied to. Clearly, some level of generality is desirable; otherwise one could only apply
the model in a single situation. However, no policy model will be completely general—there will always be assumptions used in its construction that limit its generality. Authors are often rather lax about making the scope of their models clear, often implying a greater level of generality than can be substantiated. Finally, validity is the extent to which the model outcomes match what is observed to occur—it is what is established in the process of model validation. This might be as close a match as a point forecast, or as loose as projecting qualitative aspects of possible outcomes.
What policymakers want, above all, is validity, with generality (so they do not have to keep going back to the modellers) and simplicity (so there is an accessible narrative to build support for any associated policy) coming after this. Simplicity and generality are nice if you can get them, but one cannot assume that they are achievable (Edmonds 2013). Validity should be an overwhelming priority for modellers; otherwise, they are not doing any sort of empirical science. However, they often put this off into the future, preferring the attractions of the apparent generality offered by analogical models (Edmonds 2001, 2010).
Formality is the degree to which a model is built in a precise language or system. A system of equations or a computer simulation is formal; vague but intuitive ideas expressed in natural language are informal. It must be remembered that, for those in the policy world, formality is not a virtue but more of a problem. They may be convinced it is necessary (to provide the backing of “science”), but it means that the model is inevitably somewhat opaque and not entirely under their control. This is the nub of the relationship between modellers and the policy world: if the policy side did not feel any need for formality, they would have no need of modellers—they are already skilled at making decisions using informal methods. For the modellers, the situation is reversed. Formality is at the root of modelling, enabling them to replicate their results and to pass the model unambiguously to other researchers for examination, critique, and further development (Edmonds 2000). For this reason, we will discuss formality a little and analyse its nature and consequences.
Two dimensions of formality can usefully be distinguished here:
a. The extent to which the referents of the representation are constrained (“specificity of reference”).
b. The extent to which the ways in which instantiations of the representation can be manipulated are constrained (“specificity of manipulation”).
For example, an analogy expressed in natural language has a low specificity of reference, since what its parts refer to is reconstructed by each hearer in each situation. Thus, the phrase “a tidal wave of crime” implies that concerted and highly coordinated action is needed in order to prevent people being engulfed, but the level of danger and what (if anything) needs to be done must be determined by each listener. In contrast to this is a detailed description whose referents are severely limited by its content, e.g. “Recorded burglaries in London rose by 15 % compared to the previous year”. Data are characterised by a high specificity of reference, since what they refer to is very precise, but a low specificity of manipulation, because there are few constraints on what one can do with them.
A system of mathematics or computer code has a high specificity of manipulation, since the ways these can be manipulated are determined by precise rules: what one person infers from them can be exactly replicated by another. Thus, all formal models (the ones we mostly concentrate on here) have a high specificity of manipulation, but not necessarily a high specificity of reference. A piece of natural language that can be used to draw inferences in many different ways, limited only by the manipulator's imagination and linguistic ability, has a low specificity of manipulation. One might get the impression that any “scientific” model expressed in mathematics must be formal in both ways. However, just because a representation has a high specificity of manipulation, it does not mean that the meaning of its parts in terms of what it represents is well determined.
Many simulations, for example, do not represent anything we observe directly, but are rather explorations of ideas. We, as intelligent interpreters, may mentally fill in what such a model might refer to in any particular context, but these “mappings” to reality are not well defined. Such models are more in the nature of an analogy, albeit one in formal form—they are not testable in a scientific manner, since it is not clear precisely what they represent. Whilst it may be obvious when a system of mathematics is very abstract and not directly connected with what is observed, simulations (especially agent-based simulations) can give a false impression of their applicability because they are readily (but informally) interpretable. This does not mean they are useless for all purposes. For example, Schelling's abstract simulation of racial segregation did not have any direct referents in terms of anything measurable,3 but it was an effective counterexample showing that the assumption that segregation must be caused by strong racial prejudice was unsound. Thus, such “analogical models” (those with low specificity of reference) can give useful insights—they can inform thought, but cannot give reliable forecasts or explanations of what is observed.
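Schelling's model is small enough to sketch. The following one-dimensional variant (the grid size, neighbourhood radius, tolerance, and number of moves are all illustrative choices, not Schelling's original parameters) shows the mechanism: agents of two types move only when too few of their neighbours are like them, yet mild preferences still reorganise the population.

```python
import random

def unhappy(grid, i, tolerance=0.3):
    """An agent is unhappy when fewer than `tolerance` of its neighbours
    (within distance 2 on the line) are of its own type."""
    neighbours = [grid[j] for j in range(max(0, i - 2), min(len(grid), i + 3))
                  if j != i and grid[j] is not None]
    if not neighbours:
        return False
    like = sum(1 for a in neighbours if a == grid[i])
    return like / len(neighbours) < tolerance

def step(grid, rng):
    """Move one randomly chosen unhappy agent to a random empty cell."""
    movers = [i for i, a in enumerate(grid) if a is not None and unhappy(grid, i)]
    empties = [i for i, a in enumerate(grid) if a is None]
    if movers and empties:
        i, j = rng.choice(movers), rng.choice(empties)
        grid[j], grid[i] = grid[i], None

rng = random.Random(1)
grid = [0] * 20 + [1] * 20 + [None] * 10   # two groups plus empty cells
rng.shuffle(grid)
for _ in range(500):
    step(grid, rng)
```

Note that nothing in the code maps onto measurable quantities—the grid, the tolerance, and the moves are analogical, which is exactly the point made above about low specificity of reference.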
In practice, a variety of models are used by modellers in the consideration of any issue, including: informal analogies or stories that summarise understanding and are used as a rough guide to formal manipulation; data models that abstract and represent the situation being modelled via observation and measurement; the simulation or mathematical model that is used to infer something about outcomes from initial situations; representations of the outcomes in terms of summary measures and graphs; and interpretations of the results in terms of the target situation. When considering very complex situations, it is inevitable that more models will become involved, abstracting different aspects of the target situation in different ways and “staging” abstraction so that meaning and reference can be maintained. However, good practice in maintaining “clusters” of highly related models has yet to be established in the modelling community, so a policymaker might well be bewildered by different models (using different assumptions) giving apparently conflicting results. The response to this, however, should not be to reject this variety and enforce a comforting (but ultimately illusory) consistency of outcomes, but to accept
3 Subsequent elaborations of this model have tried to make the relationship to what is observed more direct, but the original model, however visually suggestive, was not related to any data.
that it is useful to have different viewpoints from models as much as it is to have different viewpoints from experts. It is the job of policymakers to use their experience and judgement in assessing and combining these views of reality. Of course, equally it is the job of the modellers to understand and explain why models appear to contradict each other and the significance of this as much as they can.
A model that looks scientific (e.g. is composed of equations, hence quantified) might well inspire more confidence than one that does not. In fact, the formality of models is very much a two-edged sword, giving advantages and disadvantages in ways that are not immediately obvious to a nonmodeller. We will start with the disadvantages and then consider the advantages.
Most formal models will be able to output series of numbers comprising measures of the model's outcomes. However, just because numbers are by their nature precise,4 this does not mean that this precision is representative of the certainty with which these outcomes will map to observed outcomes. Thus, numerical outcomes can give a very false sense of security and lead those involved in policy to think, falsely, that prediction of such values is possible. Although many forecasters will now add indications of uncertainty “around” forecasts, this can still be deeply misleading, as it still implies that there is a central tendency about which future outcomes will gravitate.5
Many modellers are now reluctant to make such predictions because they know how misleading these can be. This is, understandably, frustrating for those involved in policy, whose response might be: “I know it's complex, but we do not have the time/money to develop a more sophisticated model, so just give me your ‘best guess’.” This attitude implies that some prediction is better than none, and that the reliability of a prediction increases with the effort put in, albeit unevenly—so that a prediction based on a small amount of effort would still be better than none at all. Unfortunately, this is far from the case: a prediction based on a “quick and dirty” method may be more misleading than helpful and merely give a false sense of security.
One of the consequences of the complexity of social phenomena is that the prediction of policy matters is hard, rare, and obtained only as a result of the most specific and pragmatic kind of modelling, developed over relatively long periods of time.6 It is more likely that a model is appropriate for establishing and understanding candidate explanations of what is happening, which informs policy making in a less exact manner than prediction, being part of the mix of factors that a policymaker takes into account when deciding on action. It is common for policy people to want a prediction of the impact of possible interventions “however rough”, rather than settle for some level of understanding of what is happening. However, this can be
4 Even if, as in statistics, they are being precise about variation and levels of uncertainty of other numbers. 5 This apparent central tendency might be merely the result of the way data are extracted from the model and the assumptions built into the model rather than anything that represents the fundamental behaviour being modelled. 6 For an account of actual forecasting and its reality, see Silver (2012).
illusory—if one really wanted a prediction “however rough”, one would settle for a random prediction7 dressed up as a complicated “black box” model. If we are wiser, we should accept the complexity of what we are dealing with and reject models that give us ill-founded predictions.
A better approach may be to use the modelling to inform policymakers about the kinds of process that might emerge from a situation—showing them possible “trajectories” that they would not otherwise have imagined. Using visualisations of these trajectories and the critical indicators clarifies the complex decision context for policymakers. In this way, the burden of uncertainty and decision making remains with the policymakers and not the researchers, but they will be more intelligently informed about the complexity of what is currently happening, allowing them to “drive” decision making better.
As we have discussed above, one feature of complex systems is that they can result in completely unexpected outcomes, where, due to the relevant interactions in the system, a new kind of process develops, resulting in qualitatively different results. It is for this reason that complex models of these systems do not give probabilities (since these may be meaningless, or worse, downright misleading) but rather trace some (though not all) of the possible outcomes. This is useful, as one can then be as prepared as possible for such outcomes, which otherwise would not have been thought of.
On the positive side, the use of formal modelling techniques can be very helpful for integrating different kinds of understanding and evidence into a more “well-rounded” assessment of options. The formality of the models means that they can be shared without ambiguity or misunderstanding between experts in different domains. This contrasts with communication in natural language, where, inevitably, people bring different assumptions, different meanings, and different inferences to key terms and systems. This ability to integrate different kinds of expertise turns out to be especially useful in the technique we discuss next: agent-based simulation.
4.4.2 The Use of Agent-Based Models to Aid Policy Formation
In recent years, agent-based simulation has gained momentum as a tool that allows the computer to simulate the interactions between a great number of agents. In an agent-based simulation, individuals are represented as separate computer models that capture their motives and behaviour. Letting these so-called agents interact through a network, and confronting them with changing circumstances, creates an artificial environment in which complex and highly dynamic processes can be studied. Because agent-based models address the interactions between many different agents, they offer a very suitable tool to represent and recreate the complexities in social systems. Hence, agent-based modelling has become an influential methodology
7 Or other null model, such as “what happened last time” or “no change”.
to study a variety of social systems, ranging from ant colonies to aspects of human society. In the context of agent-based simulation of human behaviour, one of the challenges is incorporating knowledge from the behavioural sciences into agent-based models that can be used to model behaviour in some kind of environment. These modelled environments may differ widely and may reflect different (inter)disciplinary fields. Examples of environments in which agents can operate are financial markets, agricultural settings, the introduction of new technologies in markets, and transportation systems, to name just a few. A key advantage here is that a model creates a common formal language for different disciplines to communicate. This is important, as it allows for speaking the same language in targeting issues that are interdisciplinary by nature. Rather than taking information from social scientists as interesting qualitative advice, it becomes possible to actually simulate the dynamic behavioural effects of policies. This is, in our view, an important step in addressing interdisciplinary policy issues effectively. An additional advantage of social simulation is that formalising theory and empirical data in models requires researchers to be exact in their assumptions, which, in turn, may result in specific research questions for field and/or lab experiments. Hence, social simulation is a tool that both stimulates interaction between scientific disciplines and may stimulate theory development and specification within the behavioural sciences.
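As a minimal illustration of agents interacting through a network, the following sketch implements a simple threshold model of adoption on a ring network. The network shape, thresholds, and parameter values are illustrative assumptions, not a model from any of the projects discussed in this chapter.

```python
def simulate_adoption(n=50, k=4, threshold=0.5, seeds=3, steps=30):
    """Threshold model of diffusion on a ring network: an agent adopts
    once the fraction of its k nearest neighbours that have adopted
    reaches its threshold. Returns the final set of adopters."""
    adopted = set(range(seeds))  # a contiguous block of early adopters
    offsets = [d for d in range(-k // 2, k // 2 + 1) if d != 0]
    neighbours = {i: [(i + d) % n for d in offsets] for i in range(n)}
    for _ in range(steps):
        new = {i for i in range(n) if i not in adopted
               and sum(j in adopted for j in neighbours[i]) / k >= threshold}
        if not new:
            break                # no agent changed: the cascade has stopped
        adopted |= new
    return adopted

full = simulate_adoption()                  # a 0.5 threshold lets the cascade spread
stuck = simulate_adoption(threshold=0.75)   # a 0.75 threshold stops it at the seeds
```

Even this toy version shows a qualitative, interaction-driven effect that a purely aggregate model would miss: a small change in individual thresholds flips the outcome between full adoption and no spread at all.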
An increasing number of agent-based models are being used in a policy context. A recent inventory on the SIMSOC mailing list by Nigel Gilbert8 resulted in a list of modelling projects that were in some way related to actual policy making. Topics included energy systems, littering, water management, crowd dynamics, financial crises, health management, deforestation, industrial clustering, biogas use, military interventions, diffusion of electric cars, organisation of an emergency centre, natural park management, postal service organisation, urban design, introduction of renewable technology, and vaccination programmes. Whereas some models were actually being used by policymakers, in most instances the models were being used to inform policymakers about the complexities of the system they were interacting with. The basic idea is that a better understanding of the complex dynamics of a system contributes to understanding how to manage it, even if it is unpredictable by nature. Here, a comparison can be made with sailing as a managerial process.
Sailing can be seen as the managerial challenge of using different forces that constantly change and interact in order to move a ship to a certain destination. In stable and calm weather conditions, it is quite possible to set the sails in a certain position, fix the rudder, and make an accurate prediction of the course the boat will follow. The situation becomes different when the system enters more turbulent stages, with strong and variable winds in combination with bigger waves and currents, requiring the sailor to be very adaptive to the circumstances. A small deviation from the course, due to a gust or a wave, may alter the angle of the wind
8 See mailing list SIMSOC@JISCMAIL.AC.UK. Mail distributed by Nigel Gilbert on December 14, 2013, subject: ABMs in action: second summary.
in the sail, which may give rise to further deviations from the course. This is typically a feedback process, and obviously an experienced sailor is well aware of all these dynamics; as a consequence, the sailor responds very adaptively to these small disturbances, yet also keeps the long-term outcome—the destination port—in mind.
The social systems that we are dealing with in transitions are far more complex than the sailing example. Yet the underlying rationale is the same: the better we learn to understand the dynamics of change, the better we will be capable of coping with turbulence in the process, whilst keeping the long-term goals in focus. Hence, a policy aim such as the transition towards a sustainable energy future provides a reasonably clear picture of the direction we are aiming for, but the turbulence in the process towards this future is not well known. Where the sailor has a deep understanding of the dynamics that govern the behaviour of his boat, for policymakers this understanding is often limited, as the opening example demonstrated.
Using agent-based models for policy would contribute to a better understanding and management of complex social phenomena. First, agent-based models will be useful in identifying under what conditions a social system will behave relatively stably (predictably) versus turbulently (unpredictably). This is critical for policy making because, in relatively stable situations, predictions can be made concerning the effects of policy, whereas in turbulent regimes a more adaptive policy is recommended. Adaptive policy implies that turbulent developments are followed closely, and that policymakers try to prevent developments from growing in an undesired direction whilst supporting beneficial ones. Second, if simulated agents are made more realistic, in the sense that they are equipped with different utilities/needs/preferences, the simulations will not only show what the possible behavioural developments are but also reveal their impact on a more psychological, quality-of-life level. Whereas many current policy models assess behavioural change in terms of financial/economic drivers, agent-based models open up the possibility of strengthening policy models by including additional outcomes. Examples would be outcomes relating to the stability of and support in social networks, and general satisfaction levels.
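The stable-versus-turbulent distinction in the first point can be illustrated even without agents, using the logistic map of May (1976), cited in this chapter's reference list: the same simple rule is predictable in one parameter regime and chaotic in another. The growth rates and starting points below are illustrative choices.

```python
def trajectory(r, x0=0.2, n=100):
    """Iterate the logistic map x -> r*x*(1-x), a one-line model whose
    behaviour switches from stable to chaotic as r increases."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Stable regime (r = 2.8): two nearby starting points end up together,
# so prediction of the long-run outcome is feasible.
stable_gap = abs(trajectory(2.8, 0.2)[-1] - trajectory(2.8, 0.21)[-1])

# Turbulent regime (r = 3.9): the same tiny difference in starting point
# leads to trajectories that bear no useful relation to each other.
chaotic_gap = abs(trajectory(3.9, 0.2)[-1] - trajectory(3.9, 0.21)[-1])
```

In the stable regime a point forecast is meaningful; in the turbulent regime the honest output is a range of possible trajectories, which is precisely the adaptive-policy situation described above.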
Agent-based models, thus, can provide a richer and more complex representation of what may be happening within complex and highly dynamic situations, allowing for some of the real possibilities within the system to be explored. This exploration of possibilities can inform the risk analysis of policy, and help ensure that policymakers are ready for more of what the world may throw at them, for example, by having put in place custom-designed indicators that give them the soonest-possible indication that certain kinds of processes or structural changes are underway.
4.5 Conclusions
The bad news for policymakers is that predictive models perform worst exactly at the moment policymakers need them most: during turbulent stages. Yet we observe that many policymakers, not being aware of the complex nature of the system they
are interfering with, still have a mechanistic worldview and base their decisions on classical predictions. This may be one of the reasons for policymakers' scepticism of any modelling approach (see e.g. Waldherr and Wijermans 2013). Even nowadays, when complexity has become a buzzword, many policymakers still confuse this concept with “complicatedness”, not embracing the essence of what complexity means for understanding social systems. As a consequence, many policymakers are still “Cartesian9” in their demand for better predictive models. On the other side, many modellers working from a mechanistic perspective (e.g. linear and/or generic models), holding out the false hope of “scientifically” predictive models, look for more resources to incrementally improve their models, e.g. by covering more variables. However, whereas it is sometimes justified to argue for the inclusion of more variables in a model, this will not contribute to a better predictive capacity of the model. As Scott Moss reports (Moss 2002), there are no reported correct real-time forecasts of the volatile clusters, or of the post-cluster levels, in financial market indices or macroeconomic trade cycles, despite their incremental “refinement” over many years. Characteristically, such models predict well in periods where nothing much changes, but miss all the “turning points” where structural change occurs.
Even if policymakers have some understanding of the complex nature of the systems they are managing, they still often respond with “I know it is complex, but how else can I decide policy except by using the numbers I have?”, indicating that the numbers are often an important justification of decisions, even if people are aware of the uncertainties behind them. The example of the former minister in the introduction is a prototypical example of this decision making.
The challenge, hence, is not to convince policymakers of the value of simulation models, but to provide them with a deeper understanding of complex systems. Here, simulation models can play an important role by creating learning experiences. But before turning to simulation models, it may be important to use a strong metaphor to anchor the core idea of managing complex systems. Sailing offers an excellent metaphor here, because many people know the basics of sailing and understand that it deals with the management of a ship in sometimes turbulent circumstances. What is critical in this metaphor is that, in more turbulent conditions, the crew should become more adaptive to the developments in the system.
Agent-based simulation is increasingly being used as a modelling tool to explore the possibilities and potential impacts of policy making in complex systems. Such models are inherently possibilistic rather than probabilistic. However, the models being used are usually not very accessible to policymakers. Also, in the context of education, few models are available that allow easy access to experiencing policy making in complex systems. In Chap. 13 of this book, Jager and Van der Vegt suggest gaming as a promising avenue to make agent-based models more
9 Descartes' mechanistic worldview implies that the universe works like clockwork, and that prediction is possible when one has knowledge of all the wheels, gears, and levers of the clockwork. In policy, this translates as the viable society.
accessible in education and practical policy settings. A setting where valid games are being used to increase our understanding of the processes in complex management issues is expected to contribute to an improvement of the policy-making process in complex systems.
Acknowledgments This chapter has been written in the context of the eGovPoliNet project. More information can be found on https://www.appessaywriters.com/write-my-paper/policy-community.eu/.
References
Boettiger C, Hastings A (2012) Quantifying limits to detection of early warning for critical transitions. J R Soc Interface 9(75):2527–2539
Campbell DT (1960) Blind variation and selective retention in creative thought as in other knowledge processes. Psychol Rev 67:380–400
Dai L, Vorselen D et al (2012) Generic indicators for loss of resilience before a tipping point leading to population collapse. Science 336(6085):1175–1177
Dakos V, Carpenter RA et al (2012) Methods for detecting early warnings of critical transitions in time series illustrated using simulated ecological data. PLoS ONE 7(7):e41010
Edmonds B (2000) The purpose and place of formal systems in the development of science. CPM report 00–75, MMU, UK (http://cfpm.org/cpmrep75.html)
Edmonds B (2001) The use of models—making MABS actually work. In: Moss S, Davidsson P (eds) Multi agent based simulation. Lecture Notes in Artificial Intelligence 1979. Springer, Berlin, pp 15–32
Edmonds B (2010) Bootstrapping knowledge about social phenomena using simulation models. J Artif Soc Soc Simul 13(1):8 (http://jasss.soc.surrey.ac.uk/13/1/8.html)
Edmonds B (2013) Complexity and context-dependency. Found Sci 18(4):745–755. doi:10.1007/s10699-012-9303-x
Galán JM, Izquierdo LR, Izquierdo SS, Santos JI, del Olmo R, López-Paredes A, Edmonds B (2009) Errors and artefacts in agent-based modelling. J Artif Soc Soc Simul 12(1):1 (http://jasss.soc.surrey.ac.uk/12/1/1.html)
Heisenberg W (1927) Ueber den anschaulichenInhalt der quantentheoretischen. Kinematik and Mechanik Zeitschriftfür Physik 43:172–198. English translation in (Wheeler and Zurek, 1983), pp 62–84
May RM (1976) Simple mathematical models with very complicated dynamics. Nature 261(5560):459–467
Moss S (2002) Policy analysis from first principles. Proc US Natl Acad Sci 99(Suppl 3):7267–7274 Scheffer et al (2009) Early warnings of critical transitions. Nature 461:53–59 Silver N (2012) The signal and the noise: why so many predictions fail-but some don’t. Penguin,
New York Waldherr A, Wijermans N (2013) Communicating social simulation models to sceptical minds.
J Artif Soc Soc Simul 16(4):13 (http://jasss.soc.surrey.ac.uk/16/4/13.html)
Chapter 5 From Building a Model to Adaptive Robust Decision Making Using Systems Modeling
Erik Pruyt
Abstract Starting from the state of the art and recent evolutions in the field of system dynamics modeling and simulation, this chapter sketches a plausible near-term future of the broader field of systems modeling and simulation. In the near-term future, the different systems modeling schools are expected to integrate further and to accelerate their adoption of methods and techniques from related fields like policy analysis, data science, machine learning, and computer science. The resulting future state of the art of the modeling field is illustrated by three recent pilot projects. Each of these projects required further integration of different modeling and simulation approaches and related disciplines as discussed in this chapter. These examples also illustrate which gaps need to be filled in order to meet the expectations of real decision makers facing complex and uncertain issues.
5.1 Introduction
Many systems, issues, and grand challenges are characterized by dynamic complexity, i.e., intricate time-evolutionary behavior, often on multiple dimensions of interest. Many dynamically complex systems and issues are relatively well known but have persisted for a long time because their dynamic complexity makes them hard to understand and properly manage or solve. Other complex systems and issues—especially rapidly changing systems and future grand challenges—are largely unknown and unpredictable. Most unaided human beings are notoriously bad at dealing with dynamically complex issues, whether the issues dealt with are persistent or unknown. That is, without the help of computational approaches, most human beings are unable to assess the potential dynamics of complex systems and issues, and are unable to assess the appropriateness of policies to manage or address them.
E. Pruyt
Faculty of Technology, Policy, and Management, Delft University of Technology, Delft, The Netherlands
e-mail: E.Pruyt@tudelft.nl
Netherlands Institute for Advanced Study, Wassenaar, The Netherlands
© Springer International Publishing Switzerland 2015
M. Janssen et al. (eds.), Policy Practice and Digital Science, Public Administration and Information Technology 10, DOI 10.1007/978-3-319-12784-2_5
Modeling and simulation is a field that develops and applies computational methods to study complex systems and solve problems related to complex issues. Over the past half century, multiple modeling methods for simulating such issues and for advising decision makers facing them have emerged or have been further developed. Examples include system dynamics (SD) modeling, discrete event simulation (DES), multi-actor systems modeling (MAS), agent-based modeling (ABM), and complex adaptive systems modeling (CAS). All too often, these developments have taken place in distinct fields, such as the SD field or the ABM field, developing into separate “schools,” each ascribing dynamic complexity to the complex underlying mechanisms they focus on, such as feedback effects and accumulation effects in SD or heterogeneous actor-specific (inter)actions in ABM. The isolated development within separate traditions has limited the potential to learn across fields and advance faster and more effectively towards the shared goal of understanding complex systems and supporting decision makers facing complex issues.
Recent evolutions in modeling and simulation together with the recent explosive growth in computational power, data, social media, and other evolutions in computer science have created new opportunities for model-based analysis and decision making. These internal and external evolutions are likely to break through silos of old, open up new opportunities for social simulation and model-based decision making, and stir up the broader field of systems modeling and simulation. Today, different modeling approaches are already used in parallel, in series, and in mixed form, and several hybrid approaches are emerging. But not only are different modeling traditions being mixed and matched in multiple ways, modeling and simulation fields have also started to adopt—or have accelerated their adoption of—useful methods and techniques from other disciplines including operations research, policy analysis, data analytics, machine learning, and computer science. The field of modeling and simulation is consequently turning into an interdisciplinary field in which various modeling schools and related disciplines are gradually being integrated. In practice, the blending process and the adoption of methodological innovations have just started. Although some ways to integrate systems modeling methods and many innovations have been demonstrated, further integration and massive adoption are still awaited. Moreover, other multi-methods and potential innovations are still in an experimental phase or are yet to be demonstrated and adopted.
In this chapter, some of these developments are discussed, a picture of the near-future state of the art of modeling and simulation is drawn, and a few examples of integrated systems modeling are briefly presented. The SD method is used to illustrate these developments. Starting with a short introduction to the traditional SD method in Sect. 5.2, some recent and current innovations in SD are discussed in Sect. 5.3, resulting in a picture of the state of modeling and simulation in Sect. 5.4. A few examples are then briefly discussed in Sect. 5.5 to illustrate what these developments could result in and what the future state of the art of systems modeling and simulation could look like. Finally, conclusions are drawn in Sect. 5.6.
5.2 System Dynamics Modeling and Simulation of Old
System dynamics was first developed in the second half of the 1950s by Jay W. Forrester and was further developed into a consistent method built on specific methodological choices.1 It is a method for modeling and simulating dynamically complex systems or issues characterized by feedback effects and accumulation effects. Feedback means that the present and future of issues or systems depend—through a chain of causal relations—on their own past. In SD models, system boundaries are set broadly enough to include all important feedback effects and generative mechanisms. Accumulation relates not only to building up real stocks—of people, items, (infra)structures, etc.—but also to building up mental or other states. In SD models, stock variables and the underlying integral equations are used to group largely homogeneous persons/items/… and keep track of their aggregated dynamics over time. Together, feedback and accumulation effects generate dynamically complex behavior both inside SD models and—so it is assumed in SD—in real systems.
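The interplay of accumulation and feedback described above can be made concrete with a minimal sketch, not taken from the chapter: a single stock whose inflow depends on the stock itself, numerically integrated with Euler's method the way SD packages typically do. The logistic structure and all parameter values are illustrative assumptions.

```python
# Minimal illustrative sketch of the two core SD mechanisms:
# accumulation (a stock integrates its net flow) and feedback
# (the flow depends on the stock itself). All numbers are hypothetical.

def simulate_stock(initial=10.0, growth=0.5, capacity=100.0,
                   dt=0.25, horizon=40.0):
    """Euler-integrate a single stock with a balancing feedback loop."""
    stock = initial
    trajectory = [stock]
    for _ in range(int(horizon / dt)):
        inflow = growth * stock * (1 - stock / capacity)  # feedback: flow depends on stock
        stock += inflow * dt                              # accumulation over time
        trajectory.append(stock)
    return trajectory

run = simulate_stock()
print(f"final stock: {run[-1]:.1f}")
```

The run grows, then levels off as the balancing loop gains strength: a simple example of behavior generated endogenously by structure rather than imposed from outside.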
Other important characteristics of SD are (i) the reliance on relatively enduring conceptual systems representations in people’s minds, aka mental models (Doyle and Ford 1999, p. 414), as the prime source of “rich” information (Forrester 1961; Doyle and Ford 1998); (ii) the use of causal loop diagrams and stock-flow diagrams to represent feedback and accumulation effects (Lane 2000); (iii) the use of credibility and fitness for purpose as the main criteria for model validation (Barlas 1996); and (iv) the interpretation of simulation runs in terms of general behavior patterns, aka modes of behavior (Meadows and Robinson 1985).
In SD, the behavior of a system is to be explained by a dynamic hypothesis, i.e., a causal theory for the behavior (Lane 2000; Sterman 2000). This causal theory is formalized as a model that can be simulated to generate dynamic behavior. Simulating the model thus allows one to explore the link between the hypothesized system structure and the time evolutionary behavior arising out of it (Lane 2000).
Not surprisingly, these characteristics make SD particularly useful for dealing with complex systems or issues that are characterized by important system feedback effects and accumulation effects. SD modeling is mostly used to model core system structures or core structures underlying issues, to simulate their resulting behavior, and to study the link between the underlying causal structure of issues and models and the resulting behavior. SD models, which are mostly relatively small and manageable, thus allow for experimentation in a virtual laboratory. As a consequence, SD models are also extremely useful for model-based policy analysis, for designing adaptive policies (i.e., policies that automatically adapt to the circumstances), and for testing their policy robustness (i.e., whether they perform well enough across a large variety of circumstances).
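The distinction between adaptive and static policies, and the idea of robustness testing, can be sketched with a deliberately simple, hypothetical model (not one from this chapter): a stock is replenished either at a fixed rate or by a decision rule that reacts to the observed state, and both policies are scored on their worst-case performance across many uncertain demand scenarios.

```python
import random

# Hypothetical inventory-style sketch: a static versus an adaptive policy,
# scored on worst-case shortfall across many demand scenarios.

def simulate(policy, demand, horizon=60, target=100.0):
    """Run one scenario; return the worst shortfall below the target stock."""
    stock, worst = target, 0.0
    for _ in range(horizon):
        stock += policy(stock, target) - demand
        worst = max(worst, target - stock)
    return worst

def static_policy(stock, target):
    return 10.0                            # fixed replenishment rate

def adaptive_policy(stock, target):
    return 10.0 + 0.5 * (target - stock)   # decision rule reacts to the state

random.seed(1)
scenarios = [random.uniform(5.0, 15.0) for _ in range(500)]  # uncertain demand
worst_static = max(simulate(static_policy, d) for d in scenarios)
worst_adaptive = max(simulate(adaptive_policy, d) for d in scenarios)
print(worst_static, worst_adaptive)
```

Because the adaptive rule responds to the circumstances it encounters, its worst-case shortfall stays bounded where the static rule's grows with the scenario: the essence of designing for robustness rather than for a single expected future.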
1 See Forrester (1991, 2007), Sterman (2007) for accounts of the inception of the SD field. See Sterman (2000), Pruyt (2013) for introductions to SD. And see Forrester (1961, 1969), Homer (2012) for well-known examples of traditional SD.
In terms of application domains, SD is used for studying many complex social–technical systems and solving policy problems in many application domains, for example, in health policy, resource policy, energy policy, environmental policy, housing policy, education policy, innovation policy, social–economic policy, and other public policy domains. But it is also used for studying all sorts of business dynamics problems, for strategic planning, for solving supply chain problems, etc.
At the inception of the SD method, SD models were almost entirely continuous, i.e., systems of differential equations, but over time more and more discrete and other noncontinuous elements crept in. Other evolutionary adaptations in line with ideas from the earliest days of the field, like the use of Group Model Building to elicit mental models of groups of stakeholders (Vennix 1996) or the use of SD models as engines for serious games, were also readily adopted by almost the entire field. But slightly more revolutionary innovations were not as easily and massively adopted. In other words, the identity and appearance of traditional SD was well established by the mid-1980s and does—at first sight—not seem to have changed fundamentally since then.
5.3 Recent Innovations and Expected Evolutions
5.3.1 Recent and Current Innovations
Looking in somewhat more detail at innovations within the SD field and its adoption of innovations from other fields shows that many—often seemingly more revolutionary—innovations have been introduced and demonstrated, but that they have not been massively adopted yet.
For instance, in terms of quantitative modeling, system dynamicists have invested in spatially specific SD modeling (Ruth and Pieper 1994; Struben 2005; BenDor and Kaza 2012), individual agent-based SD modeling as well as mixed and hybrid ABM-SD modeling (Castillo and Saysal 2005; Osgood 2009; Feola et al. 2012; Rahmandad and Sterman 2008), and micro–macro modeling (Fallah-Fini et al. 2014). Examples of recent developments in simulation setup and execution include model calibration and bootstrapping (Oliva 2003; Dogan 2007), different types of sampling (Fiddaman 2002; Ford 1990; Clemson et al. 1995; Islam and Pruyt 2014), multi-model and multi-method simulation (Pruyt and Kwakkel 2014; Moorlag 2014), and different types of optimization approaches used for a variety of purposes (Coyle 1985; Miller 1998; Coyle 1999; Graham and Ariza 1998; Hamarat et al. 2013, 2014). Recent innovations in model testing, analysis, and visualization of model outputs in SD include the development and application of new methods for sensitivity and uncertainty analysis (Hearne 2010; Eker et al. 2014), formal model analysis methods to study the link between structure and behavior (Kampmann and Oliva 2008, 2009; Saleh et al. 2010), methods for testing policy robustness across wide ranges of uncertainties (Lempert et al. 2003), statistical packages and screening techniques (Ford and Flynn 2005; Taylor et al. 2010), pattern testing and time series classification techniques
(Yücel and Barlas 2011; Yücel 2012; Sucullu and Yücel 2014; Islam and Pruyt 2014), and machine learning techniques (Pruyt et al. 2013; Kwakkel et al. 2014; Pruyt et al. 2014c). These methods and techniques can be used together with SD models to identify root causes of problems, to identify adaptive policies that properly address these root causes, to test and optimize the effectiveness of policies across wide ranges of assumptions (i.e., policy robustness), etc. From this perspective, these methods and techniques are actually just evolutionary innovations in line with early SD ideas. And large-scale adoption of the aforementioned innovations would allow the SD field, and by extension the larger systems modeling field, to move from “experiential art” to “computational science.”
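As an illustration of the sampling techniques referred to above, the following is a minimal, self-contained Latin hypercube sampler: the scheme behind, for example, the 20,000-sample experiment of Fig. 5.3. The parameter ranges are placeholders, and real studies would use a library implementation rather than this sketch.

```python
import random

def latin_hypercube(ranges, n, seed=42):
    """Stratified sampling sketch: each parameter's range is split into n
    equal strata, every stratum is sampled exactly once, and the columns
    are shuffled so strata are paired at random across parameters."""
    rng = random.Random(seed)
    columns = []
    for low, high in ranges:
        width = (high - low) / n
        col = [low + (i + rng.random()) * width for i in range(n)]
        rng.shuffle(col)
        columns.append(col)
    return list(zip(*columns))

# Hypothetical two-parameter uncertainty space.
samples = latin_hypercube([(0.0, 1.0), (10.0, 50.0)], n=100)
print(len(samples), samples[0])
```

Compared with plain random sampling, every slice of each parameter's range is guaranteed to be visited, which is why such designs span a multidimensional uncertainty space with far fewer runs.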
Most of the aforementioned innovations are actually integrated in particular SD approaches like in exploratory system dynamics modelling and analysis (ESDMA), which is an SD approach for studying dynamic complexity under deep uncertainty. Deep uncertainty could be defined as a situation in which analysts do not know or cannot agree on (i) an underlying model, (ii) probability distributions of key variables and parameters, and/or (iii) the value of alternative outcomes (Lempert et al. 2003). It is often encountered in situations characterized by either too little information or too much information (e.g., conflicting information or different worldviews). ESDMA is the combination of exploratory modeling and analysis (EMA), aka robust decision making, developed during the past two decades (Bankes 1993; Lempert et al. 2000; Bankes 2002; Lempert et al. 2006) and SD modeling. EMA is a research methodology for developing and using models to support decision making under deep uncertainty. It is not a modeling method, in spite of the fact that it requires computational models. EMA can be useful when relevant information that can be exploited by building computational models exists, but this information is insufficient to specify a single model that accurately describes system behavior (Kwakkel and Pruyt 2013a). In such situations, it is better to construct and use ensembles of plausible models since ensembles of models can capture more of the un/available information than any individual model (Bankes 2002). Ensembles of models can then be used to deal with model uncertainty, different perspectives, value diversity, inconsistent information, etc.—in short, with deep uncertainty.2
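The idea of using an ensemble of plausible models rather than a single "best" model can be sketched as follows. The two rival model structures and their parameter ranges below are invented for illustration; in a real EMA study they would correspond to competing hypotheses about the system under study.

```python
import random

# Illustrative sketch of exploratory modeling: when deep uncertainty extends
# to the model structure itself, simulate several plausible structures over
# sampled parameter ranges and pool all runs into one ensemble.

def model_a(p, steps=50):
    """Plausible structure 1: logistic growth toward a hypothetical limit."""
    x, xs = 1.0, []
    for _ in range(steps):
        x += p * x * (1 - x / 100.0)
        xs.append(x)
    return xs

def model_b(p, steps=50):
    """Plausible structure 2: goal-seeking adjustment toward the same limit."""
    x, xs = 1.0, []
    for _ in range(steps):
        x += p * (100.0 - x)
        xs.append(x)
    return xs

random.seed(0)
ensemble = [m(random.uniform(0.05, 0.3))        # deeply uncertain parameter
            for m in (model_a, model_b)
            for _ in range(250)]
finals = [run[-1] for run in ensemble]
print(len(ensemble), min(finals), max(finals))
```

The pooled ensemble captures more of the available (and unavailable) information than either model alone: a policy that performs acceptably across the whole ensemble is robust to this piece of structural uncertainty.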
In EMA (and thus in ESDMA), the influence of a plethora of uncertainties, including method and model uncertainty, is systematically assessed and used to design policies: sampling and multi-model/multi-method simulation are used to generate ensembles of simulation runs to which time series classification and machine learning techniques are applied for generating insights. Multi-objective robust optimization (Hamarat et al. 2013, 2014) is used to identify policy levers and define policy triggers, and by doing so, support the design of adaptive robust policies. And regret-based approaches are used to test policy robustness across large ensembles of plausible runs (Lempert et al. 2003). EMA and ESDMA can be performed with TU Delft’s
2 For ESDMA, see, among others, Pruyt and Hamarat (2010), Logtens et al. (2012), Pruyt et al. (2013), Kwakkel and Pruyt (2013a, b), Kwakkel et al. (2013), Pruyt and Kwakkel (2014).
EMA workbench software, which is an open source tool3 that integrates multi-method, multi-model, multi-policy simulation with data management, visualization, and analysis.
The latter is just one of the recent innovations in modeling and simulation software and platforms: online modeling and simulation platforms, online flight simulator and gaming platforms, and packages for making hybrid models have been developed too. And modeling and simulation across platforms will also become reality soon: the eXtensible Model Interchange LanguagE (XMILE) project (Diker and Allen 2005; Eberlein and Chichakly 2013) aims at facilitating the storage, sharing, and combination of simulation models and parts thereof across software packages and across modeling schools and may ease the interconnection with (real-time) databases, statistical and analytical software packages, and organizational information and communication technology (ICT) infrastructures. Note that this is already possible today with scripting languages and software packages with scripting capabilities like the aforementioned EMA workbench.
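The regret-based robustness testing mentioned above (Lempert et al. 2003) can be illustrated with a small, entirely hypothetical payoff table: the regret of a policy in a scenario is its distance to the best outcome attainable in that scenario, and the robust policy is the one that minimizes its maximum regret across scenarios.

```python
# Hypothetical payoff table: performance of three candidate policies in four
# plausible futures (higher is better). Policy names and numbers are invented.
payoffs = {
    "static":   [80, 60, 20, 70],
    "adaptive": [75, 65, 55, 68],
    "hedging":  [60, 62, 50, 66],
}
n_scenarios = 4

# Best attainable outcome per scenario, across all policies.
best = [max(p[s] for p in payoffs.values()) for s in range(n_scenarios)]

# Maximum regret of each policy: worst gap to the best attainable outcome.
max_regret = {name: max(best[s] - perf[s] for s in range(n_scenarios))
              for name, perf in payoffs.items()}
robust = min(max_regret, key=max_regret.get)
print(max_regret, robust)
```

Note that the robust choice here is not the best performer in any single scenario; it is the one that can be defended no matter which future materializes, which is exactly the decision criterion under deep uncertainty.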
5.3.2 Current and Expected Evolutions
Three current evolutions are expected to further reinforce this shift from “experiential art” to “computational science.”
The first evolution relates to the development of “smarter” methods, techniques, and tools (i.e., methods, techniques, and tools that provide more insights and deeper understanding at reduced computational cost). Similar to the development of formal model analysis techniques that smartened the traditional SD approach, new methods, techniques, and tools are currently being developed to smarten modeling and simulation approaches that rely on “brute force” sampling, for example, adaptive output-oriented sampling to span the space of possible dynamics (Islam and Pruyt 2014), smarter machine learning techniques (Pruyt et al. 2013; Kwakkel et al. 2014; Pruyt et al. 2014c), time series classification techniques (Yücel and Barlas 2011; Yücel 2012; Sucullu and Yücel 2014; Islam and Pruyt 2014), and (multi-objective) robust optimization techniques (Hamarat et al. 2013, 2014).
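To give a feel for what time-series classification does in this setting, here is a toy behavior-mode classifier, not one of the cited techniques: it maps each simulated trajectory to a qualitative mode of behavior using nothing more than the signs of its first differences. Real applications use the pattern-testing and machine learning methods cited above.

```python
def classify_mode(series):
    """Crude behavior-mode classifier: label a run as growth, decline, or
    oscillation from the signs of its successive first differences."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    sign_changes = sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)
    if sign_changes >= 2:
        return "oscillation"
    return "growth" if series[-1] > series[0] else "decline"

# Three illustrative trajectories, as might come out of a sampled ensemble.
runs = [
    [1, 2, 4, 8, 16],       # exponential growth
    [10, 7, 5, 4, 3.5],     # goal-seeking decline
    [5, 8, 4, 9, 3],        # oscillation
]
print([classify_mode(r) for r in runs])
```

Grouping tens of thousands of runs into a handful of such modes is what turns an unmanageable ensemble into something an analyst can reason about, e.g., by asking which uncertain inputs drive runs into the undesirable mode.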
Partly related to the previous evolution are developments related to “big data,” data management, and data science. Although traditional SD modeling is sometimes called data-poor modeling, that does not mean it is, nor that it should be. SD software packages allow one to get data from, and write simulation runs to, databases. Moreover, data are also used in SD to calibrate parameters or bootstrap parameter ranges. But more could be done, especially in the era of “big data.” Big data simply refers here to much more data than was until recently manageable. Big data requires data science techniques to make it manageable and useful. Data science may be used in
3 The EMA workbench can be downloaded for free from http://simulation.tbm.tudelft.nl/ema-workbench/contents.html
modeling and simulation (i) to obtain useful inputs from data (e.g., from real-time big data sources), (ii) to analyze and interpret model-generated data (i.e., big artificial data), (iii) to compare simulated and real dynamics (i.e., for monitoring and control), and (iv) to infer parts of models from data (Pruyt et al. 2014c). Interestingly, data science techniques that are useful for obtaining useful inputs from data may also be made useful for analyzing and interpreting model-generated data, and vice versa. Online social media are interesting sources of real-world big data for modeling and simulation: as inputs to models, to compare simulated and real dynamics, and to inform model development or model selection. There are many application domains in which the combination of data science and modeling and simulation would be beneficial. Examples, some of which are elaborated below, include policy making with regard to crime fighting, infectious diseases, cybersecurity, national safety and security, financial stress testing, energy transitions, and marketing.
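Use (iii), comparing simulated and real dynamics, can be sketched as follows: score each member of an ensemble against incoming observations and keep the members that track the data best. The observed series and the two candidate models below are fabricated for illustration.

```python
import math

def rmse(simulated, observed):
    """Root-mean-square error between a simulated and an observed series."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(simulated, observed))
                     / len(observed))

# Hypothetical incoming observations and two rival model trajectories.
observed = [2.0, 4.1, 7.9, 16.3, 31.7]
candidates = {
    "linear":      [2.0, 4.0, 6.0, 8.0, 10.0],
    "exponential": [2.0, 4.0, 8.0, 16.0, 32.0],
}

scores = {name: rmse(sim, observed) for name, sim in candidates.items()}
best = min(scores, key=scores.get)
print(scores, best)
```

Repeated as new data arrive, this kind of scoring gradually narrows a large ensemble of plausible models down to the subset consistent with reality, which is the monitoring-and-control use of data science mentioned above.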
Another urgently needed innovation relates to model-based empowerment of decision makers. Although existing flight simulator and gaming platforms are useful for developing and distributing educational flight simulators and games, and interfaces can be built in SD packages, using them to develop interfaces for real-world real-time decision making and integrating them into existing ICT systems is difficult and time-consuming. In many cases, companies and organizations want these capabilities in-house, even in their boardroom, instead of being dependent on analyses by external or internal analysts. The latter requires user-friendly interfaces on top of (sets of) models, possibly connected to real-time data sources. These interfaces should allow for experimentation, simulation, thorough analysis of simulation results, adaptive robust policy design, and policy robustness testing.
5.4 Future State of Practice of Systems Modeling and Simulation
These recent evolutions in modeling and simulation together with the recent explosive growth in computational power, data, social media, and other evolutions in computer science may herald the beginning of a new wave of innovation and adoption, moving the modeling and simulation field from building a single model to simultaneously simulating multiple models and uncertainties; from single-method to multi-method and hybrid modeling and simulation; from modeling and simulation with sparse data to modeling and simulation with (near real-time) big data; from simulating and analyzing a few simulation runs to simulating and simultaneously analyzing well-selected ensembles of runs; from using models for intuitive policy testing to using models as instruments for designing adaptive robust policies; and from developing educational flight simulators to fully integrated decision support.
For each of the modeling schools, additional adaptations could be foreseen too. In the case of SD, it may for example involve a shift from developing purely endogenous to largely endogenous models; from fully aggregated models to sufficiently spatially explicit and heterogeneous models; from qualitative participatory modeling
Fig. 5.1 Picture of the state of science/future state of the art of modeling and simulation
to quantitative participatory simulation; and from using SD to combining problem structuring and policy analysis tools, modeling and simulation, machine learning techniques, and (multi-objective) robust optimization.
Adoption of these recent, current, and expected innovations could result in the future state of the art4 of systems modeling as displayed in Fig. 5.1. As indicated by (I) in Fig. 5.1, it will be possible to simultaneously use multiple hypotheses (i.e., simulation models from the same or different traditions or hybrids), for different goals including the search for deeper understanding and policy insights, experimentation in a virtual laboratory, future-oriented exploration, robust policy design, and robustness testing under deep uncertainty. Sets of simulation models may be used to represent different perspectives or plausible theories, to deal with methodological uncertainty, or to deal with a plethora of important characteristics (e.g., agent characteristics, feedback and accumulation effects, spatial and network effects) without necessarily having to integrate them in a single simulation model. The main advantages of using multiple models for doing so are that each of the models in the ensemble of models remains manageable and that the ensemble of simulation runs generated with the
4 Given the fact that it takes a while before innovations are adopted by software developers and practitioners, this picture of the current state of science is at the same time a plausible picture of the medium term future of the field of modeling and simulation.
ensemble of models is likely to be more diverse, which allows for testing policy robustness across a wider range of plausible futures.
Some of these models may be connected to real-time or near real-time data streams, and some models may even be inferred in part with smart data science tools from data sources (see (II) in Fig. 5.1). Storing the outputs of these simulation models in databases and applying data science techniques may enhance our understanding, may generate policy insights, and may allow for testing policy robustness across large multidimensional uncertainty spaces (see (III) in Fig. 5.1). And user-friendly interfaces on top of these interconnected models may eventually empower policy makers, enabling them to really do model-based policy making.
Note, however, that the integrated systems modeling approach sketched in Fig. 5.1 may only suit a limited set of goals, decision makers, and issues. Single-model simulation serves many goals, decision makers, and issues well enough that multi-model/multi-method, data-rich, exploratory, policy-oriented approaches are not required. However, there are most certainly goals, decision makers, and issues that do require them.
5.5 Examples
Although all of the above is possible today, it should be noted that this is the current state of science, not the state of common practice yet. Applying all these methods and techniques to real issues is still challenging, and shows where innovations are most needed. The following examples illustrate what is possible today as well as what the most important gaps are that remain to be filled.
The first example shows that relatively simple systems models simulated under deep uncertainty allow for generating useful ensembles of many simulation runs. Using methods and techniques from related disciplines to analyze the resulting artificial data sets helps to generate important policy insights. And simulating policies across the ensembles allows one to test policy robustness. This first case nevertheless shows that there are opportunities for multi-method and hybrid approaches as well as for connecting systems models to real-time data streams.
The second example extends the first example towards a system-of-systems approach with many simulation models generating even larger ensembles of simulation runs. Smart sampling and scenario discovery techniques are then required to reduce the resulting data sets to manageable proportions.
The third example shows a recent attempt to develop a smart model-based decision-support system for dealing with another deeply uncertain issue. This example shows that it is almost possible to empower decision makers. Interfaces with advanced analytical capabilities as well as easier and better integration with existing ICT systems are required though. This example also illustrates the need for more advanced hybrid systems models as well as the need to connect systems models to real-time geo-spatial data.
5.5.1 Assessing the Risk, and Monitoring, of New Infectious Diseases
The first case, which is described in more detail in (Pruyt and Hamarat 2010; Pruyt et al. 2013), relates to assessing outbreaks of new flu variants. Outbreaks of new (variants of) infectious diseases are deeply uncertain. For example, in the first months after the first reports about the outbreak of a new flu variant in Mexico and the USA, much remained unknown about the possible dynamics and consequences of this possible epidemic/pandemic of the new flu variant, referred to today as new influenza A(H1N1)v. Table 5.1 shows that more and better information became available over time, but also that many uncertainties remained. However, even with these remaining uncertainties, it is possible to model and simulate this flu variant under deep uncertainty, for example with the simplistic simulation model displayed in Fig. 5.2, since flu outbreaks can be modeled.
Simulating this model thousands of times over very wide uncertainty ranges for each of the uncertain variables generates the 3D cloud of potential outbreaks displayed in Fig. 5.3a. In this figure, the time of the worst flu peak (0–50 months) is displayed on the X-axis, the infected fraction during the worst flu peak (0–50 %) is displayed on the Y-axis, and the cumulative number of fatal cases in the Western world (0–50,000,000) is displayed on the Z-axis. This 3D plot shows that the most catastrophic outbreaks are likely to happen within the first year or during the first winter season following the outbreak. Using machine learning algorithms to explore this ensemble of simulation runs helps to generate important policy insights (e.g., which policy levers to address). Testing different variants of the same policy shows that adaptive policies outperform their static counterparts (compare Fig. 5.3b and c). Figure 5.3d finally shows that adaptive policies can be further improved using multi-objective robust optimization.
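The flavor of this exploratory experiment can be sketched with a deliberately simple SIR model (not the two-region model of Fig. 5.2) simulated many times over wide, deeply uncertain parameter ranges, recording the height of the worst infection peak for each run. All population figures and parameter ranges are illustrative assumptions, not values from the study.

```python
import random

def sir_run(beta, gamma, pop=1_000_000, i0=100, dt=0.5, horizon=300):
    """Euler-integrated SIR epidemic; returns (peak infected fraction, peak time)."""
    s, i, r = pop - i0, i0, 0
    peak, peak_t = i, 0.0
    for step in range(int(horizon / dt)):
        new_inf = beta * s * i / pop * dt   # new infections this step
        new_rec = gamma * i * dt            # new recoveries this step
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        if i > peak:
            peak, peak_t = i, (step + 1) * dt
    return peak / pop, peak_t

# Sample the deeply uncertain transmission and recovery rates many times.
random.seed(7)
ensemble = [sir_run(beta=random.uniform(0.2, 0.8),
                    gamma=random.uniform(0.1, 0.4))
            for _ in range(1000)]
worst_fraction = max(frac for frac, _ in ensemble)
print(f"worst peak infected fraction: {worst_fraction:.2%}")
```

Each point of the resulting cloud corresponds to one plausible outbreak; plotting peak size against peak timing for all runs yields exactly the kind of 3D scatter shown in Fig. 5.3, on which machine learning can then be let loose.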
However, taking deep uncertainty seriously into account would require simulating more than a single model from a single modeling method: it would be better to simultaneously simulate CAS, ABM, SD, and hybrid models under deep uncertainty and use the resulting ensemble of simulation runs. Moreover, near real-time geo-spatial data (from twitter, medical records, etc.) may also be used in combination with simulation models, for example, to gradually reduce the ensemble of model-generated data. Both suggested improvements would be possible today.
5.5.2 Integrated Risk-Capability Analysis under Deep Uncertainty
The second example relates to risk assessment and capability planning for National Safety and Security. Since 2001, many nations have invested in the development of all-hazard integrated risk-capability assessment (IRCA) approaches. All-hazard IRCAs integrate scenario-based risk assessment, capability analysis, and capability-based planning approaches to reduce all sorts of risks—from natural hazards and technical failures to malicious threats—by enhancing capabilities for dealing with
Table 5.1 Information and unknowns provided by the European Centre for Disease Prevention and Control (ECDC) from 24 April until 21 August. [Table not legible in this copy. Columns: seven ECDC reporting dates (24 April, 30 April, 08 May, 20 May, 12 June, 20 July, 21 August); rows: infectivity, R0, immunity, virulence, incubation period, CFR for Mexico, the USA, and the UK (CFR stands for case fatality ratio), age distribution, antiviral susceptibility, percentage asymptomatic, and future course. Most entries at the earliest dates read “Unknown”; indications and rough estimates (e.g., a median incubation period of 3–4 days with a range of 1–7 days, an age distribution skewed towards the young, and a mild, self-limiting course) appear only gradually.]
86 E. Pruyt
Fig. 5.2 Region 1 of a two-region system dynamics (SD) flu model
Fig. 5.3 3D scatter plots of 20,000 Latin-Hypercube samples for region 1 with X-axis: worst flu peak (0–50 months); Y-axis: infected fraction during the worst flu peak (0–50%); Z-axis: fatal cases (0–5 × 10^7)
them. Current IRCAs mainly allow for dealing with one or a few specific scenarios for a limited set of relatively simple, event-based, and relatively certain risks. They are not suited to dealing with a plethora of highly uncertain and complex risks, combinations of measures and capabilities with uncertain and dynamic effects, and divergent opinions about the degree of (un)desirability of risks and capability investments.
The next generation of model-based IRCAs may solve many of the shortcomings of the IRCAs currently in use. Figure 5.4 displays a next-generation IRCA for dealing with all sorts of highly uncertain dynamic risks. This IRCA approach, described in more detail in Pruyt et al. (2012), combines EMA with modeling and simulation, both for the risk assessment and the capability analysis phases. First, risks—like outbreaks of new flu variants—are modeled and simulated many times across their multidimensional uncertainty spaces to generate an ensemble of plausible risk scenarios for each of the risks. Time series classification and machine learning techniques are then used to identify much smaller ensembles of exemplars that are representative of the larger ensembles. These ensembles of exemplars are then used as inputs to a generic capability analysis model. The capability analysis model is subsequently simulated for different capabilities strategies under deep uncertainty (i.e., simulating the uncertainty pertaining to their effectiveness) over all ensembles of exemplars to calculate the potential of capabilities strategies to reduce these risks.
Fig. 5.4 Model-based integrated risk-capability analysis (IRCA)
Finally, multi-objective robust optimization helps to identify capabilities strategies that are robust.
Not only does this systems-of-systems approach allow one to generate thousands of variants per risk type over many types of risks and to perform capability analyses across all sorts of risks under uncertainty, it also allows one to find sets of capabilities that are effective across many uncertain risks. Hence, this integrated model-based approach allows for dealing with capabilities in an all-hazard way under deep uncertainty.
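A heavily simplified sketch of this pipeline may clarify the chain of steps. All names and numbers below are invented; the greedy novelty picker stands in for real time-series classification, and a simple maximin score stands in for multi-objective robust optimization:

```python
# Hypothetical sketch of the model-based IRCA loop: reduce a large risk
# ensemble to exemplars, then score capability strategies on worst-case risk.
import random

random.seed(2)
# 1. Ensemble of plausible risk scenarios (here: peak impact, growth rate).
scenarios = [(random.uniform(0, 100), random.uniform(0, 1))
             for _ in range(5000)]

# 2. Exemplar selection: a crude stand-in for time-series classification -
#    greedily keep scenarios far from already-chosen exemplars.
def select_exemplars(points, k=10):
    exemplars = [points[0]]
    for p in points[1:]:
        if min(abs(p[0] - e[0]) / 100 + abs(p[1] - e[1])
               for e in exemplars) > 0.25:
            exemplars.append(p)
        if len(exemplars) == k:
            break
    return exemplars

exemplars = select_exemplars(scenarios)

# 3. Capability strategies with (assumed) effectiveness against impact.
strategies = {"prevention": 0.6, "response": 0.4, "mixed": 0.5}

def residual_risk(effect, scenario):
    peak, growth = scenario
    return peak * (1 - effect) * (1 + growth)

# 4. Robustness: pick the strategy with the best worst-case residual risk.
worst = {name: max(residual_risk(e, s) for s in exemplars)
         for name, e in strategies.items()}
robust_choice = min(worst, key=worst.get)
print(robust_choice)
```

Evaluating strategies only over the small exemplar set, rather than over all 5000 scenarios, is what keeps the (otherwise very expensive) robust optimization step tractable.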
This approach is currently being refined using adaptive output-oriented sampling techniques and new time-series classification methods that together help to identify the largest variety of dynamics with the minimum number of simulations. Covering the largest variety of dynamics with the minimum number of exemplars is desirable, for performing automated multi-hazard capability analysis over many risks is—due to the nature of the multi-objective robust optimization techniques used—computationally very expensive. The approach is also being changed from a multi-model approach into a multi-method approach: whereas until recently sets of SD models were used, there are good reasons to extend the approach to other types of systems modeling approaches that may be better suited to particular risks or that—when multiple approaches are used—help to deal with methodological uncertainty. Finally, settings of some of the risks and capabilities, as well as exogenous uncertainties, may also be fed with (near) real-world data.
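A toy illustration of the output-oriented sampling idea (the logistic-map "model" and the coarse dynamics signature are invented stand-ins for the actual models and classification methods): keep sampling only while new runs still reveal qualitatively new dynamics, and stop once novelty saturates.

```python
# Hypothetical sketch of adaptive output-oriented sampling: simulate only
# as long as new runs keep producing qualitatively new dynamics.
import random

random.seed(3)

def simulate(r, steps=40):
    """Toy model (logistic map): yields fixed points, cycles, and chaos."""
    x, xs = 0.5, []
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def signature(xs):
    """Crude dynamics signature: (final level, total variation)."""
    tv = sum(abs(b - a) for a, b in zip(xs, xs[1:]))
    return (round(xs[-1], 1), round(tv))

archive, misses, runs = set(), 0, 0
while misses < 50:            # stop once novelty has saturated
    runs += 1
    sig = signature(simulate(random.uniform(0, 4)))
    if sig in archive:
        misses += 1           # nothing new: count toward stopping
    else:
        archive.add(sig)
        misses = 0            # new class of dynamics found: keep exploring
print(runs, len(archive))
```

The archive of distinct signatures approximates "the largest variety of dynamics", while the early-stopping rule bounds the number of simulations spent finding it.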
5.5.3 Policing Under Deep Uncertainty
The third example relates to another deeply uncertain issue: high-impact crimes (HIC). An SD model and related tools (see Fig. 5.5) were developed some years ago with a view to increasing the effectiveness of the fight against HIC, more specifically the fight against robbery and burglary. HICs require a systemic perspective and approach:
Fig. 5.5 (I) Exploratory system dynamics modeling and analysis (ESDMA) model, (II) interface for policy makers, (III) analytical module for analyzing the high-impact crimes (HIC) system under deep uncertainty, (IV) real-world pilots based on analyses, and (V) monitoring of real-world data from the pilots and the HIC system
These crimes are characterized by important systemic effects in time and space, such as learning and specialization effects, “waterbed effects” between different HICs and precincts, accumulations (prison time) and delays (in policing and jurisdiction), preventive effects, and other causal effects (ex-post preventive measures). HICs are also characterized by deep uncertainty: most perpetrators are unknown, and even though their archetypal crime-related habits may be known to some extent at some point in time, accurate time- and geographically-specific predictions cannot be made. At the same time, part of the HIC system is well known, and a lot of real-world information related to these crimes is available.
Important players in the HIC system, besides the police and (potential) perpetrators, are potential victims (households and shopkeepers) and partners in the judicial system (the public prosecution service, the prison system, etc.). Hence, the HIC system is dynamically complex and deeply uncertain, but also data rich and contingent upon external conditions.
The main goals of this pilot project were to support strategic policy making under deep uncertainty and to test and monitor the effectiveness of policies to fight HIC. The SD model (I) was used as an engine behind the interface for policy makers (II) to explore plausible effects of policies under deep uncertainty and to identify real-world pilots that could increase the understanding of the system and of the effectiveness of interventions (III), to implement these pilots (IV), and to monitor their outcomes (V). Real-world data from the pilots and improved understanding of the functioning of the real system allow for improving the model.
Today, a lot of real-world geo-spatial information related to HICs is available online and in (near) real time, which makes it possible to update the data and model automatically and, hence, to increase their value for policy makers. The model used in this project was an ESDMA model; that is, uncertainties were included by means of sets of plausible assumptions and uncertainty ranges. Although this could already be argued to be a multi-model approach, hybrid models or a multi-method approach would really be needed to deal properly with systems, agents, and spatial characteristics. Moreover, better interfaces and connectors to existing ICT systems and databases would be needed to turn this pilot into a real decision-support system that would allow chiefs of police to experiment in a virtual world connected to the real world, and to develop and test adaptive robust policies on the spot.
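A minimal, hypothetical sketch of such automatic updating (the Gamma-Poisson-style update rule and all numbers are invented for illustration; the actual project used far richer ESDMA models and data feeds): each incoming incident report nudges a model parameter toward the observed rate, weighted by how much evidence has accumulated.

```python
# Hypothetical sketch: keeping a model parameter in sync with a stream of
# near real-time incident reports via a simple Bayesian-style update.

def update_rate(prior_rate, prior_weight, observed_count, exposure):
    """Gamma-Poisson style update of an incident rate per unit exposure."""
    post_weight = prior_weight + exposure
    post_rate = (prior_rate * prior_weight + observed_count) / post_weight
    return post_rate, post_weight

# Model starts from an assumed burglary rate of 2.0 incidents/precinct/week,
# with a prior weight of 10 "weeks' worth" of evidence.
rate, weight = 2.0, 10.0
stream = [(3, 1.0), (1, 1.0), (4, 1.0), (2, 1.0)]  # (count, weeks) per report
for count, weeks in stream:
    rate, weight = update_rate(rate, weight, count, weeks)
print(round(rate, 2))  # → 2.14
```

In a full decision-support system, this kind of update would run continuously against geo-tagged feeds, so that the model a chief of police experiments with always reflects the latest state of the real HIC system.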
5.6 Conclusions
Recent evolutions in modeling and simulation, together with the explosive growth in computational power, data, and social media, and other evolutions in computer science, have created new opportunities for model-based analysis and decision making.
Multi-method and hybrid modeling and simulation approaches are being developed to make existing modeling and simulation approaches appropriate for dealing with agent system characteristics, spatial and network aspects, deep uncertainty, and other important aspects. Data science and machine learning techniques are currently being developed into techniques that can provide useful inputs for simulation models as well as for building models. Machine learning algorithms, formal model analysis methods, analytical approaches, and new visualization techniques are being developed to make sense of models and generate useful policy insights. And methods and tools are being developed to turn intuitive policy making into model-based policy design. Some of these evolutions were discussed and illustrated in this chapter.
It was also argued and shown that easier connectors to databases, to social media, to other computer programs, and to ICT systems, as well as better interfacing software need to be developed to allow any systems modeler to turn systems models into real decision-support systems. Doing so would turn the art of modeling into the computational science of simulation. It would most likely also shift the focus of attention from building a model to using ensembles of systems models for adaptive robust decision making.