An Agent-Based Socio-Technical Approach to Impact Assessment for Cyber Defense
This paper presents a novel simulation for estimating the impact of cyber attacks. Current approaches adopt probabilistic risk analysis to estimate the impact of attacks, mostly on assets or business processes. More recent approaches involve vulnerability analysis on networks of systems and sensor input from third-party detection tools to identify attack paths. All of these methods focus on one level at a time, define impact in terms of confidentiality, integrity, and availability, and fail to place people and technology together in an organization's functional context. We propose an interdependency impact assessment approach that focuses on the responsibilities and dependencies flowing through the supply chain and maps them down into an agent-based socio-technical model. This method is useful for modeling consequences across all levels of an organization's networks: business processes, business roles, and systems. We aim to perform chaining analysis on threat scenarios and carry out impact assessment, providing situational awareness for cyber defense purposes. Although the model has various applications, our case study focuses on critical information infrastructures, because of the criticality of the systems and because the area still lacks security-focused research and relies heavily on reliability theory and failure rates.
Keywords: agent-based systems; cyber defense; impact assessment; SCADA; situational awareness; socio-technical systems
INTRODUCTION
Risk assessment methods for information security have so far measured the consequences of cyber attacks, usually the negative ones, in either qualitative or quantitative ways. The first attempt to develop an information risk assessment methodology came from the U.S. National Bureau of Standards (now NIST) in 1974. Predetermined scales of "high," "medium," and "low," or probabilistic approaches estimating the impact as a monetary cost, have been common practice ever since (Landoll, [21]; Tipton & Krause, [35]; Shoniregun, [31]; Lund, Solhaug, & Stølen, [23]; Sun, Srivastava, & Mock, [34]). This definition of impact, inherited from the finance sector, fails to incorporate technology and even the human factor. While risk is the main focus of the risk assessment process, we believe that impact estimation plays a central role in it. When the end target is measuring the risk, impact does not get the attention it deserves.
Although there is a substantial gap between academic and professional practice, both treat loss as a single-value parameter estimated by a decision maker in order to make the analysis tractable. This way, impact and likelihood, the factors from which risk is calculated, are assumptions made by decision makers. However, evidence from psychology shows that humans are not good at estimating these factors, or risk in general (Schneier, [29]). In an attempt to evade subjectivity, one might assume that we could trust our data. However, statistical validity is restricted to controlled experiments, and data sets rarely represent homogeneous samples with clear correlations. Clearly, a more automated, objective approach is needed.
The substantial difference between security risk and risk as it has been perceived so far lies in the type of uncertainty. Uncertainty caused by natural disasters is probabilistic, caused by chance, while uncertainty caused by adverse interested parties is strategic (Golany, Kaplan, Marmur, & Rothblum, [14]). The linear models adopted by current methodologies cannot capture the complexity of today's organizations, such as the interdependencies of critical infrastructure, which still depends on "intuition" when it comes to risk management (Macaulay, [24]; Golany et al., [14]). An approach able to grasp this complexity would clearly produce more meaningful and accurate results.
For all of these reasons, we suggest a socio-technical systems approach in an effort to achieve a more holistic picture of the risks that cyber attacks pose to these complex systems. The term socio-technical system describes the function and forms of people (individuals, groups, roles, and organizations), physical equipment (buildings, surroundings), hardware and software, the laws and regulations that bind organizations (e.g., privacy protection laws), data (what data are kept, in which formats, who has access to them, where they are kept), and procedures (official and unofficial processes, data flows, relationships; in general, anything that describes how things work, or rather should work, in an organization) (St. Andrews University, [33]). From a risk assessment perspective, the challenge is to understand the impact that a potential loss of cyber safety and security can have on an organization.
The target of our model is to map down the responsibilities and dependencies that flow through an organization. This is useful for modeling consequences across all levels: from the high-level business processes, through the important but often overlooked business roles, down to the basic systems and system-entity levels. We believe we can perform forward and backward chaining analysis on threat scenarios and carry out impact assessment, providing situational awareness and vulnerability analysis for cyber defense purposes. More specifically, this model could be used for enterprise risk management, situational awareness, incident notification, crisis management, resilience planning, risk communication, and critical infrastructure monitoring.
RISK ASSESSMENT
Probabilistic Approaches
According to ISO 27005, which provides the information security risk management guidelines, impact is defined as an adverse change to the level of business objectives achieved, such as loss of productivity or market share, brand deterioration, penalties, and so forth (Klipper, [19]). It is used as a factor, along with the likelihood of occurrence of an event and the vulnerabilities and threats, to calculate and evaluate risk. We believe a different perspective needs to be adopted for impact assessment, one separate from that of risk assessment.
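To make the conventional calculation concrete, the following minimal sketch shows how an ISO 27005-style method combines likelihood and impact ordinals into a single risk level; the scale values and thresholds here are our own illustrative assumptions, not taken from the standard. Note how a single-value impact estimate drives the entire result.

```python
# Illustrative ISO 27005-style risk scoring: a likelihood ordinal and an
# impact ordinal are combined into a qualitative risk level. The numeric
# scales and thresholds are invented for illustration only.

LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}
IMPACT = {"low": 1, "medium": 2, "high": 3}

def risk_level(likelihood: str, impact: str) -> str:
    """Combine the two ordinals into a qualitative risk level via a matrix."""
    score = LIKELIHOOD[likelihood] * IMPACT[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# The whole assessment hinges on the single-value impact estimate:
print(risk_level("medium", "high"))  # -> "high"
```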
Over the past years, many methodologies have been developed to manage information systems security. In the literature, risk assessment (RA) methods are usually divided into three categories: qualitative, quantitative, and combinations of both. The quantitative ones provide probabilistic results, a percentage expressing how likely the risk is to materialize, while the qualitative ones present results on predetermined scales of high, medium, or low risk. Some of the most well-established risk assessment methods and tools are listed in Table 1.
All of the methods that appear in the literature have certain limitations (Sun et al., [34]; Landoll, [21]; Macaulay, [24]; Tipton & Krause, [35]; Shoniregun, [31]; Verendel, [37]), and very few focus on the impact assessment side, that is, on properly monitoring and estimating the impact itself across an organization's full network rather than merely using impact to estimate risk. Another reason probabilistic methods cannot work here is the lack of sufficient data from which to extract meaningful probabilities. In environmental risk assessment there are data spanning hundreds of years, the law of large numbers applies, and the resulting probabilities are meaningful and realistic. In security, such data do not exist, which makes the extraction of realistic probabilities challenging. Moreover, all of these methods base their calculations on probabilistic uncertainty rather than the strategic uncertainty that denotes intention by adverse interested parties, which is the situation in security (Golany et al., [14]): the most unpredictable element of all here is human intention.
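The data-scarcity argument can be illustrated with a toy computation: with the long event histories available in environmental risk assessment, empirical frequencies stabilize around the underlying rate, whereas the short histories typical of security incidents yield unstable estimates. The probability below is invented purely to demonstrate the law-of-large-numbers point.

```python
import random

random.seed(1)
P_TRUE = 0.1  # invented "true" yearly incident probability, for illustration only

def empirical_frequency(n_observations: int) -> float:
    """Estimate the incident probability from n independent yearly observations."""
    hits = sum(random.random() < P_TRUE for _ in range(n_observations))
    return hits / n_observations

# Long history (environmental risk): the estimate converges near P_TRUE.
print(empirical_frequency(10_000))
# Short history (typical for security incidents): the estimate is unstable
# and may land far from P_TRUE.
print(empirical_frequency(10))
```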
TABLE 1 Survey of Risk Management Methods and Tools
Vulnerability Scanners
There are many more methods and tools than those listed in Table 1, but the general methodologies remain the same; they usually differ in their metrics or questionnaires, trying to get slightly better results, or they focus on different targets. In the past several years, however, some researchers and companies have made an effort to adopt a different, more automated perspective on the matter. Risk assessment has thus increasingly become part of vulnerability scanners, simulation tools, and similar software. This effort is necessary, as the methodologies used so far were adopted from business management and have never been fully customized to the needs of information security and the strategic uncertainty that cyber attacks introduce.
CoreLabs has developed a tool called Core Impact to evaluate the cost of a cyber attack on a network and to describe the theater of operations, targets, missions, actions, plans, and assets involved in cyber attacks (Futoransky, Notarfrancesco, Richarte, & Sarraute, [13]). CoreLabs essentially models and builds network attacks in order to automate risk assessment, and more specifically the penetration testing process. The tool also allows network administrators to perform vulnerability assessment through attack simulations. It focuses on vulnerability analysis at the systems level.
Cauldron is another situational awareness tool developed by George Mason University's Center for Secure Information Systems (CSIS) under a research grant by the NSA and Air Force Research Labs. Cauldron automatically maps all paths of vulnerability through the network by correlating data from third-party vulnerability scanners. It provides visualization of attack paths and automatically generates mitigation recommendations (Jajodia, Noel, Kalapa, Albanese, & Williams, [16]). The tool provides situational awareness and helps with mitigation strategies, but the analysis is still limited to the systems level.
Other mature, attack-graph-based tools such as NETSPA, MulVAL, and TVA analyze networks of hosts, cross-checking against software vulnerability databases for possible attacker exploits, and provide visualization capabilities as well. For a detailed analysis, see the survey by Sommestad, Ekstedt, and Holm ([32]).
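Although these tools differ considerably in detail, they share a common core: hosts are modeled as nodes, exploitable conditions as edges, and attack paths are enumerated by graph search. The sketch below is our own simplified illustration of that shared idea, not the actual NETSPA, MulVAL, or TVA algorithm; the hosts and vulnerabilities are invented.

```python
from collections import deque

# Toy attack graph: an edge (src, dst, vuln) means an attacker on host src
# can reach host dst by exploiting vuln. All entries are invented examples.
edges = [
    ("internet", "web01", "unpatched web application"),
    ("web01", "db01", "weak database credentials"),
    ("web01", "hmi01", "exposed management interface"),
    ("hmi01", "plc01", "unauthenticated control protocol"),
]

def attack_paths(start: str) -> dict:
    """Breadth-first enumeration of reachable hosts, with one witness path each."""
    paths = {start: [start]}
    queue = deque([start])
    while queue:
        host = queue.popleft()
        for src, dst, vuln in edges:
            if src == host and dst not in paths:
                paths[dst] = paths[host] + [f"--{vuln}--> {dst}"]
                queue.append(dst)
    return paths

for host, path in attack_paths("internet").items():
    print(host, ":", " ".join(path))
```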
The tools presented so far are a few representative examples of newer approaches that perform vulnerability analysis in order to provide situational awareness. As can easily be seen, they deal only with the systems network level. Additionally, more and more research is being conducted at the application level, on applications' exposure to cyber attacks and the impact attacks have on them (Heumann, Türpe, & Keller, [15]), although that is not something this work will expand on.
An extensive literature review and comparison of security models, frameworks, metrics, and so forth from 1981 onward is provided by Verendel ([37]). Our observation on this survey is that all the focus is on threat and vulnerability analyses, while impact is barely mentioned. We have not yet come across a method that approaches the subject from a socio-technical point of view, and all methods so far assume independence, rationality, and stationarity in order to simplify the cases under study. This oversimplifies the results, since in reality no system is wholly isolated from the rest of the infrastructure, which makes things more complex. Every approach so far deals with one level at a time. Our approach attempts to provide automated impact assessment across all levels of an organization and to study the dynamic interactions and dependencies a cyber attack triggers across the entire socio-technical network of an organization, providing situational awareness and supporting decision making and mitigation strategies.
WHY ESTIMATING THE IMPACT OF CYBER ATTACKS IS HARDER THAN IT SEEMS
Estimating the "cost" of a cyber attack might appear to be a trivial task, summing up all the attack-related costs using the affected records, perhaps enumerating damages and losses. However, the problem is far more complex and multidimensional. To start, the attribution of costs is hard as many of them could be indirect or realized long after the breach is disclosed. There are also intangible assets such as trade secrets, knowledge, information, or reputational loss, which are very hard to quantify or reliably estimate especially without the appropriate context. Miscalculations of costs, overestimation, or underestimation of the costs related to an attack are also quite common. Enlarging or diminishing the problem, making a mistake in the analysis, or use an inadequate method could result in such issues.
Another problem is disproportionate consequences: when the negative effects spread across different stakeholders, measuring the cost on a single scale is not at all easy (Russell, Antkiewicz, Florer, Widup, & Woodyard, [28]). Loss of intellectual property or information technology (IT) assets is also difficult to evaluate, since information is usually inadequate or uncertain, and history may be inaccessible or lost. The information available before and after a breach is usually ambiguous; third-party corporations have a habit of disclosing a breach long after the event.
Ambiguity and differing approaches to risk can also stem from the different or conflicting interests of stakeholders and decision makers. The consequences of an attack will be perceived in different ways, as different people focus on different aspects, and their opinions can be shaped by social bias or perception, which in turn colors their estimates (Johnson, [17]).
There is much difficulty in estimating the probability of loss occurrence, as most methods suggest obtaining such information through discussions with users in order to understand threat propagation. The problem with this approach is that these discussions are limited and rarely give analysts complete awareness of a threat or a correct estimate of the risk (Sun et al., [34]). Even with a proper understanding of risk propagation, it is extremely hard to quantify, even probabilistically. We therefore suggest that a more automated method is necessary, without excluding the human factor from the equation. We suggest this can be achieved through a socio-technical approach that maps down business processes and roles, and the responsibilities and dependencies of tasks, treating impact as failure in states of affairs.
The problem with stochastic, probabilistic approaches is finding the "correct" metrics and probabilities with which to estimate the magnitude and probability of loss. By "correct" we mean metrics accurate and descriptive enough to capture the organization's pulse and priorities, so that the right threats are taken into consideration and the calculated risks actually make sense for the particular organization. In addition, as stated in the 1978 Risk Assessment Review Group Report to the U.S. Nuclear Regulatory Commission, it is conceptually impossible for methods like these to be mathematically complete (Lewis et al., [22]). This is an inherent limitation due to Gödel's theorem, and thus they will always be subject to review and doubt as to their completeness. The problem with qualitative approaches, in turn, is that the results they provide are neither specific enough nor sufficiently customized to be meaningful.
Furthermore, with few exceptions, most approaches do not capture the complex interrelationships within corporations. It is in those internal relationships and structures that most of the uncertainty and risk lies, not in environmental uncertainty (Carvajal, [6]). Researchers at MIT have argued that the chain-of-events concept used by most current risk assessment methods cannot account for the nonlinear and indirect relationships that describe most accidents in complex systems. For this reason, our approach, as stated before, is that of agent-based socio-technical systems, with the main focus on responsibility and dependency modeling to provide impact assessment. Socio-technical systems are scalable and adaptable, capable of mapping these complex nonlinear relationships within organizations, and we therefore claim they can provide better incident and impact analysis.
SCADA AND SECURITY
Our case studies focus mostly on the implications of cyber attacks for Supervisory Control and Data Acquisition (SCADA) systems and the processes and infrastructure they control; however, the method applies to general information systems infrastructures as well. A few reasons make these systems interesting enough to pique our curiosity. The most important is that they usually control critical infrastructure, which is more likely to be targeted in cases of cyber terrorism or sabotage; Stuxnet is a prime example (Nicholson, Webber, Dyer, Patel, & Janicke, [26]; Farwell & Rohozinski, [11]). Yet these critical systems lack security-focused risk assessments and, more importantly, there is a serious lack of research on the impact of cyber attacks on the physical systems that SCADA control (Koster et al., [20]). A lack of methodology for impact assessment has also been identified by the military, which has called for the development of a cyber damage assessment framework (Eom, Kim, Kim, & Chung, [10]).
Risks related to SCADA can arise from simple wear of individual components leading to failures, from natural disasters, and nowadays also from sabotage and acts of terrorism. We need to consider cases in which faults are induced deliberately and hence cannot be described easily by statistical means, ultimately as probability density functions. For these kinds of deliberate actions, it is therefore necessary to come up with different approaches to designing and analyzing infrastructure components, in order to allow the efficient enhancement of their robustness and the early detection and mitigation of such actions.
Another question that arises is how the security of these systems differs from general IT security. For a start, almost all SCADA security failures have physical consequences, more immediate and more severe. Security issues often manifest as traditional maintenance failures, nuisances, or process stoppages, making them difficult to diagnose and ultimately remedy. Managing and maintaining these systems is a difficult task in its own right: old systems cannot be patched or upgraded, there is no separate test environment, and conventional protections, such as antivirus software and firewalls, may not be usable. Cyber threats to SCADA also include a tremendous number of additional threat vectors, such as nontypical network protocols and commands that cannot be blocked for production and safety reasons; in other words, valid communications used by attackers in invalid ways. For a more extensive literature review of SCADA systems and their security, the reader is directed to Tyson Macaulay's book (Macaulay & Singer, [25]).
In conclusion, process control systems are an interesting case study for all the above reasons, but mostly because of their criticality and the severity of potential damage in case of cyber terrorism, and because we identified a lack of security-oriented research: most work around these systems focuses on safety, more specifically on reliability theory and failure rates, as is customary in engineering. We were therefore particularly intrigued to test our model on process control systems infrastructure and the challenges this poses.
AGENT-BASED SOCIO-TECHNICAL SYSTEMS
Socio-Technical Systems
The socio-technical systems (STS) concept first appeared in the 1950s, in a project at the Tavistock Institute in London, as an attempt to focus on group relations at all levels of an organization and devise innovative organizational development practices that increase productivity without major capital investment (Trist & Bamforth, [36]; Fox, [12]). Socio-technical systems, as a class of complex adaptive systems (Kay, [18]), consist of many technical artifacts (e.g., machines, factories, pipelines, wires) and social entities (e.g., individuals, companies, governments, organizations, institutions). They are interwoven networks of social and physical assets in which every component can interact in every possible way with every other component.
Such systems focus on groups as working units of interaction, capable of either linear "cause-effect" relationships or nonlinear ones that are more complex and unpredictable (Trist & Bamforth, [36]). Socio-technical systems are adaptable to the constantly changing environment and the complexity at the heart of most organizations. The concepts of tasks, their owners, and their meaningfulness, as well as responsibility modeling as a whole and the associated dependencies, are also a big part of this theory (Dewsbury & Dobson, [9]; Periorellis & Dobson, [27]). In this study we treat people and systems as actors performing certain tasks over a state of affairs. They are agents that comply with the same rules and norms in the way they operate and interact with other agents to accomplish states of affairs.
Along with the socio-technical systems approach we will use roles, which are essentially sets of rights and responsibilities, expectations, and (expected) behaviors and norms. People's behavior in organizations is bounded by a specific context, subject to both social and legal compliance, depending on their position in the hierarchy. The objective is to support responsibility modeling of socio-technical systems (Dewsbury & Dobson, [9]) in order to analyze their internal structure, responsibility flows, and dependencies. This provides the information and structure on which we can run scenarios that simulate behaviors deviating from the expected (e.g., attack scenarios) (Periorellis & Dobson, [27]), along with logical rules that best describe the organization at hand, its expected behavior, and its targets. This allows us to locate vulnerabilities in the supply chain and express cause and effect whenever the environment changes beyond expectation.
Agent-Based Modeling
Agent-based modeling is the most suitable technique for modeling a complex adaptive system because it captures more complex structures and dynamics (Dam, Nikolic, & Lukszo, [8]). Another important advantage is that it allows models to be constructed in the absence of knowledge about the global interdependencies: you may know nothing or very little about how things affect each other or what the global sequence of operations is, but if you have some perception of how the individual participants of the process behave, you can construct the agent-based model and then obtain the global behavior (Borshchev & Filippov, [3]). That is a great advantage in our case, since no organization is perfectly documented in terms of its processes and their interdependencies.
An agent in this context is defined as a persistent entity that has some state we find worth representing, and that interacts with other agents, mutually modifying each other's state. The components of an agent-based model are a collection of agents and their states, the rules governing the agents' interactions, and the environment within which they live (Shalizi, [30]).
These kinds of systems attempt to replicate real-world situations in order to study what happens. They are constructed to discover possible emergent properties from a bottom-up perspective. Generally, such modeling has no particular tasks or states to achieve; rather, the final objective is the mere description of entities and the observation of their interactions in order to explore the system's possible states (Dam et al., [8]). In models like this, it thus becomes less about seeing what happens and more about seeing what it takes to make something specific happen, which we believe makes this approach an appropriate candidate for a technical, automated approach to impact assessment and operational planning. It is ideal for simulating real-world environments with partial information and observing how agents interact with each other and affect the environment they live in; for observing how they react to triggered external or internal events, such as cyber attacks, and with what consequences; and for working backward, through backward chaining, to answer questions such as which agents one must interact with, and in what way, to bring the environment to a certain state.
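A minimal sketch of this skeleton, following Shalizi's definition above: agents hold state, apply purely local rules, and the global behavior (here, a cascading degradation) emerges bottom-up. The class, states, and rule are our own illustrative assumptions, not the paper's actual implementation.

```python
class Agent:
    """A persistent entity with a state worth representing, which interacts
    with other agents and thereby modifies state."""
    def __init__(self, name, depends_on=None):
        self.name = name
        self.state = "functional"            # internal state
        self.depends_on = depends_on or []   # the agents this one relies on

    def step(self):
        # Local rule only: an agent degrades when something it depends on is
        # no longer functional. No agent knows the global topology.
        if self.state == "functional" and any(
            d.state != "functional" for d in self.depends_on
        ):
            self.state = "degraded"

def simulate(agents, steps=5):
    """The environment repeatedly lets every agent apply its local rule;
    the global behavior emerges from the collection of agent states."""
    for _ in range(steps):
        for agent in agents:
            agent.step()
    return {a.name: a.state for a in agents}

# Usage: compromise one agent and observe the emergent global state.
power = Agent("power generation")
steering = Agent("steering system", depends_on=[power])
captain = Agent("captain", depends_on=[steering])
power.state = "compromised"
print(simulate([power, steering, captain]))
```

Even this toy exhibits the property described above: no agent knows the global dependency structure, yet compromising one agent produces an organization-wide pattern that can only be read off the collection of agent states.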
PROPOSED MODEL AND CASE STUDY
The problem we seek to explore is how to perform impact assessment at all levels of an organizational network in the event of a cyber attack. Organizations are complex systems composed of many interacting parts, both human resources and information systems, which behave according to simple individual rules to produce a coherent holistic behavior. However, this also produces emergent properties, and the behavior of the system cannot be predicted from the sum of the individual rules alone. For this reason, we try to bring together the very high level of business processes, the human factor and its acting roles in an organization, down to the systems and system-entity levels. We need to be able to explore responsibilities and dependencies, and the way they are distributed across these levels, in order to estimate consequences, cascading impacts, and critical dependencies for the purposes of security.
The goal is to bridge the ICT infrastructure and the business processes through a socio-technical approach, to assure that business services are safely delivered as scheduled and that the organization meets its objectives from a security point of view. This means that all resources and assets should be available to all eligible agents, that is, agents with the appropriate access rights to those resources and assets, and that all agents are able to execute all actions assigned to them in order to fulfill their responsibilities.
To do that, we came up with a model representing the dependencies these different layers hold; we must understand the relationships and interactions that technology and people have within an organization. To resolve this issue, we propose a socio-technical complex systems approach, implemented through socio-technical agent modeling, to create a simulation environment that maps down the responsibilities and dependencies flowing through an organization. We argue that this approach is more appropriate and provides answers that current methods do not give. The simulation gives us the potential to examine situations and explore a range of issues.
In Figure 1 we can observe our agents, their relationships, and the responsibilities and dependencies that flow between them. We define a responsibility with reference to a state of affairs and the ability of an agent to fulfill or maintain it (Figure 2). This definition raises the question of how a given agent can achieve this within the context of a socio-technical system. Responsibility is associated with agents, resources, and tasks, as defined in the ART model (Charitoudi & Blyth, [7]), and it is defined as a duty from one agent (the responsible) to another (the authority, or principal) for the accomplishment of a state of affairs, whether this is the execution, maintenance, or avoidance of certain tasks, subject to conformance with the organizational culture. Thus, the characteristics of a responsibility are: who is responsible to whom, for what state of affairs, what the obligations of the responsibility holder are in order to fulfill the responsibility, and what type of responsibility it is (Dewsbury & Dobson, [9]). For the complete and detailed semantics of the model, covering responsibilities, dependencies, and the definition of agents, the reader is directed to Charitoudi and Blyth ([7]).
FIGURE 1 The ART model semantics.
FIGURE 2 The responsibility relationship.
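To make the ART semantics concrete, here is a minimal data-structure sketch of a responsibility as defined above: a duty from a responsible agent to an authority for a state of affairs, discharged through obligations and typed as execution, maintenance, or avoidance. The field names are our own reading of the model; the full semantics are in Charitoudi and Blyth ([7]).

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str

@dataclass
class Responsibility:
    """A duty from 'responsible' to 'authority' for a state of affairs."""
    responsible: Agent                 # who is responsible
    authority: Agent                   # to whom (the authority or principal)
    state_of_affairs: str              # for what state of affairs
    kind: str                          # e.g., execution, maintenance, avoidance
    obligations: list = field(default_factory=list)  # tasks that discharge it

# Illustrative instance drawn from the case study below:
captain = Agent("captain")
owner = Agent("ship owner")
steer = Responsibility(
    responsible=captain,
    authority=owner,
    state_of_affairs="the ship is steered on the planned course",
    kind="maintenance",
    obligations=["instruct seaman", "monitor steering system"],
)
```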
In the process control systems context, a SCADA infrastructure can be viewed as a set of interacting roles that may be said to be critical to the organization. Examples of such systems include safety-critical systems, utility management systems, financial systems, and many others. In our case study scenario (Figure 3), it is a SCADA control system (Boyer, [4]) associated with power generation on board a supertanker (Ashwort, [1]). The infrastructure on board a supertanker, in terms of steering, pumping, communications, water treatment, and lighting, is completely dependent on power generation. The SCADA system monitors the power generation and the utilization of backup power generation to ensure reliability and continual availability of service.
FIGURE 3 The case study.
Within the case study we can now start to construct a detailed socio-technical systems model that reflects the flow of abstraction from the roles performed by the technical components of a SCADA system into the human elements of such a system. A number of human and machine agents must interact and function together in order to steer the ship. Under the law of the sea and legislation relating to corporate responsibility, the captain of the ship holds the responsibility for the direction of the ship and all actions committed by the crew under his command (Blowfield & Murray, [2]). Within the case study, the captain will instruct the seaman on the direction of the ship, and via the onboard steering system the seaman will define the direction of the ship.
In Figure 3 one can observe that while the responsibility resides with the captain, there are a number of other responsibilities that the system is dependent on. For example, the engine engineer is responsible for managing the power generation system via the engine management system. The SCADA infrastructure within a ship means that both the steering system and the navigation system are dependent upon the power generation system. The SCADA system is used to control the interactions between the various components of the ship and the management systems used to control them.
We can see that while a number of human, computer, and engineering agents perform roles and tasks to fulfill their own responsibilities, those responsibilities combine to allow the captain's responsibility, steering the ship, to be fulfilled. The focus of our simulation is the impact if the system can be manipulated through a cyber attack in such a way that these responsibilities cannot be fulfilled.
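As a concrete illustration of that question, the sketch below encodes our reading of the Figure 3 dependency structure (the exact edge set is a simplification of the text) and chains forward from a compromised power generation system to every responsibility that can no longer be fulfilled.

```python
# Who depends on whom in the case study (our simplified reading of Figure 3):
# each entry lists the agents/systems that the key directly depends on.
depends_on = {
    "captain":                  ["seaman"],
    "seaman":                   ["steering system"],
    "steering system":          ["power generation system"],
    "navigation system":        ["power generation system"],
    "engine management system": ["power generation system"],
    "engine engineer":          ["engine management system"],
    "power generation system":  [],
}

def impacted(compromised: str) -> set:
    """Forward chaining: everything that transitively depends on the
    compromised node can no longer fulfill its responsibility."""
    hit = {compromised}
    changed = True
    while changed:
        changed = False
        for node, deps in depends_on.items():
            if node not in hit and any(d in hit for d in deps):
                hit.add(node)
                changed = True
    return hit

print(sorted(impacted("power generation system")))
# Backward chaining asks the inverse question: which nodes suffice to
# compromise so that the captain's responsibility fails?
```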
Interesting attacks to simulate against SCADA systems are attacks on sensors, such as stealthy attacks, surge attacks, and bias attacks, as they inject false measurements that can cause great damage. Our research currently focuses on these kinds of attacks. Examples of such attacks on SCADA can be found in Cardenas et al. ([5]).
Agents
As an agent-based model, this one also comprises agents. As stated before, agents are entities that interact with each other and mutually modify each other's state. In our case, an agent needs to be able to represent an entity at any of the levels described, which means that actors (human agents, systems, or business processes) are all eligible agents. In the case study above, the agents were the captain, the seaman, the steering system, the steering engineer, the engine management system, the navigation system, the navigation engineer, the engine engineer, and the power generation system.
Every agent has certain attributes, roles to perform, and responsibilities to carry out. Agents are the only manipulators of the state of the environment, through their actions and interactions with other agents and the environment. The agents' responsibilities can be seen as labels on the edges of the graph in Figure 3; the edges also denote the direction in which responsibilities flow through the map.
An agent's state is the specific collection of parameters that defines the agent at a given moment. An agent has internal, local, and global states, any of which can be static or dynamic. An example of an internal state is whether the agent is fully functional, partially functional, or not functional at all (compromised after a cyber attack, for instance). The global state, on the other hand, is composed of the internal state and all relevant states in the whole influential environment, that is, the states of the other agents and the snapshot of the environment at that moment. Agents are also autonomous, meaning they are responsible for, and in total control of, both their internal state and their behavior.
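A small sketch of this state distinction, with names of our own choosing: an internal state under the agent's control, and a global state assembled from every agent's internal state plus a snapshot of the environment.

```python
from enum import Enum

class Functionality(Enum):
    FULL = "fully functional"
    PARTIAL = "partially functional"
    NONE = "not functional"  # e.g., compromised after a cyber attack

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.internal = Functionality.FULL  # internal state, agent-controlled

def global_state(agents: list, environment: dict) -> dict:
    """Global state = every agent's internal state + the environment snapshot."""
    return {
        "agents": {a.name: a.internal.value for a in agents},
        "environment": dict(environment),
    }

seaman = Agent("seaman")
seaman.internal = Functionality.PARTIAL
print(global_state([seaman], {"backup power": "online"}))
```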
Environment
The environment is where the agents "live." Given our context, the environment is a simulation of the steering and power management processes on a supertanker. The environment provides the agents with all the information and know-how they need that is not contained in the agents themselves or their neighboring agents. It comprises elements provided by the model itself, elements provided by the modeler, and elements that can be emergent, such as sudden cyber attacks.
The elements that come from the model are a set of rules and definitions that establish and define the behavior allowed within the environment of the organization. The modeler's elements are parameters to the environment, such as scenarios the analyst is interested in or external events that interrupt the normal flow of the environment. Emergent properties, on the other hand, arise through the interactions of agents in the simulation.
Although the agents are provided with any information they need at the time they need it, and do not have to be aware of the structure of the environment they inhabit, a structure for the environment is still necessary. In this context, impact is the entire chain of events or interactions caused by an emergent event in the environment, or by the behavior of an agent that does not comply with its own rule set or the rules of the environment in which it lives.
Technical Implementation
As input to set the narrative and the formulated actions of the model, we use architectural frameworks such as MODAF and DODAF for the environment and the agents' rules, and functional designs for the physical systems. From the frameworks' databases we can extract all the information necessary to map the organization's environment, its rules, the agents' rules, and so forth, as well as the responsibility and dependency flows.
As the modeling environment for our simulation, we use the Recursive Porous Agent Simulation Toolkit (Repast) together with custom code.
CONCLUSIONS
The methods and methodologies developed so far to manage information systems security or SCADA have certain limitations. They do not fully incorporate technology, nor do they properly include the human factor. They focus too heavily on probabilistic models, with metrics that offer little insight because the appropriate data are lacking. The models are neither complex nor adaptable enough to map all aspects of organizations, not even the most important one, the human personnel. They cannot reflect the interdependencies of assets or the correlations of data, and they do not deal with the strategic uncertainty caused by adverse interested parties.
Our model focuses on impact, that is, on the implications events can have on the supply chain, business processes, and business roles, rather than on risk estimation and the prediction of events, with a specific but not exclusive application to SCADA systems. The main objective is to perform impact assessment, provide situational awareness, and feed mitigation strategies. The model offers an agent-based socio-technical approach capable of representing the complex structures of organizations in which humans and technology interact to achieve common objectives, and it is adaptable and scalable enough to follow their unpredictable evolution. It allows us to observe the interactions and interdependencies between agent states, as well as the possible events that lead to those states. In this way we can perform forward and backward chaining analysis to understand how the system transitions from one state to another, and thus better analyze events, impact, and dependencies, and decide how to mitigate risk.
FUTURE WORK
Since socio-technical models are data-driven, and their data span multiple disciplines and are contributed by multiple domain experts, a good next step would be to focus on techniques that improve the management of model data.
An even more interesting direction would be to expand the model from an agent-based system to a multiagent system. In a multiagent system, agents are set up with exactly the characteristics, connections, and choices they need to achieve certain desired emergent states. On this basis, one could pursue the idea of a self-organizing, self-healing network, which would be exceptionally useful, especially with regard to SCADA systems.
ACKNOWLEDGMENTS
Many thanks to Professor Brian Turton for his support, guidance, exchange of ideas, and information, as well as for his review and comments on the very early drafts of this paper.
FUNDING
The authors thank Cassidian/EADS for funding this project under grant number CDE25337.
REFERENCES
1. Ashwort, M. J. (1981). Computer-aided design of ship steering systems: An investigation into the manoeuvering control of marine vessels (Doctoral dissertation). Cardiff University, Cardiff, UK.
2. Blowfield, M., and Murray, A. (2011). Corporate responsibility (2nd ed.). Oxford, UK: Oxford University Press.
3. Borshchev, A., and Filippov, A. (2004). From system dynamics and discrete event to practical agent based modeling: Reasons, techniques, tools. Proceedings of the 22nd International Conference of the System Dynamics Society, pp. 25-29.
4. Boyer, S. A. (2009). SCADA: Supervisory control and data acquisition (4th ed.). Research Triangle Park, NC: ISA Press.
5. Cardenas, A., Amin, S., Lin, Z.-S., Huang, Y.-L., Huang, C.-Y., and Sastry, S. (2011). Attacks against process control systems: Risk assessment, detection, and response. 6th ACM Symposium on Information, Computer and Communications Security (ASIACCS'11), Hong Kong.
6. Carvajal, R. (1983). Systemic netfields: The systems' paradigm crisis. Part I. Human Relations, 36(3), 227-245.
7. Charitoudi, K., and Blyth, A. (2013). A socio-technical approach to cyber risk management and impact assessment. Journal of Information Security, 4(1), 33-41.
8. Dam, K. H., Nikolic, I., and Lukszo, Z. (2013). Agent-based modelling of socio-technical systems. Berlin: Springer-Verlag.
9. Dewsbury, G., and Dobson, J. (2007). Responsibility and dependable systems (limited ed.). London: Springer.
10. Eom, J.-H., Kim, N.-U., Kim, S.-H., and Chung, T.-M. (2012). Cyber military strategy for cyberspace superiority in cyber warfare. International Conference on Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), pp. 295-299.
11. Farwell, J. P., and Rohozinski, R. (2011). Stuxnet and the future of cyber war. Survival, 53(1), 23-40.
12. Fox, W. (1995). Sociotechnical system principles and guidelines: Past and present. Journal of Applied Behavioral Science, 31(1), 91-105.
13. Futoransky, A., Notarfrancesco, L., Richarte, G., and Sarraute, C. (2010). Building computer network attacks. Computing Research Repository.
14. Golany, B., Kaplan, E. H., Marmur, A., and Rothblum, U. G. (2009). Nature plays with dice, terrorists do not: Allocating resources to counter strategic versus probabilistic risks. European Journal of Operational Research, 192(1).
15. Heumann, T., Türpe, S., and Keller, J. (2010). Quantifying the attack surface of a Web application. In F. C. Freiling (Ed.), Sicherheit 2010: Sicherheit, Schutz und Zuverlässigkeit, Beiträge der 5. Jahrestagung des Fachbereichs Sicherheit der Gesellschaft für Informatik e.V. (GI), 170, pp. 305-316. Berlin. Retrieved from http://www.bibsonomy.org/bibtex/14a21691aa4532a4a7a35c0689aa157cd/dblp
16. Jajodia, S., Noel, S., Kalapa, P., Albanese, M., and Williams, J. (2011). Cauldron: Mission-centric cyber situational awareness with defense in depth. Military Communications Conference (MILCOM 2011), pp. 1339-1344.
17. Johnson, M. E. (Ed.). (2009). Managing information risk and the economics of security. Berlin: Springer.
18. Kay, J. (2002). On complexity theory, exergy and industrial ecology: Some implications for construction ecology. In C. Kibert, J. Sendzimir, and B. Guy (Eds.), Construction ecology: Nature as the basis for green buildings, pp. 72-107. London, UK: Spon Press.
19. Klipper, S. (2011). Information security risk management: Risikomanagement mit ISO/IEC 27001, 27005 und 31010. Praxis.
20. Koster, F., Klaas, M., Nguyen, H., Brandle, M., Obermeier, S., and Brenne, W. (2009). Collaboration in security assessments for critical infrastructures. Fourth International Conference on Critical Infrastructures (CRIS 2009), pp. 1-7.
21. Landoll, D. (2011). The security risk assessment handbook: A complete guide for performing security risk assessments (2nd ed.). Boca Raton, FL: CRC Press.
22. Lewis, H. W., Budnitz, R. J., Rowe, W. D., Kouts, H. J. C., von Hippel, F., Loewenstein, W. B., and Zachariasen, F. (1978). Risk assessment review group report to the U.S. Nuclear Regulatory Commission. IEEE Transactions on Nuclear Science, 26(5), 4686-4690.
23. Lund, M. S., Solhaug, B., and Stølen, K. (2011). Model-driven risk analysis: The CORAS approach. Berlin: Springer.
24. Macaulay, T. (2008). Critical infrastructure: Understanding its component parts, vulnerabilities, operating risks, and interdependencies. Boca Raton, FL: CRC Press.
25. Macaulay, T., and Singer, B. (2012). Cybersecurity for industrial control systems: SCADA, DCS, PLC, HMI, and SIS. Boca Raton, FL: CRC Press / Taylor & Francis.
26. Nicholson, A., Webber, S., Dyer, S., Patel, T., and Janicke, H. (2012). SCADA security in the light of cyber warfare. Computers & Security, 31(4), 418-436.
27. Periorellis, P., and Dobson, J. (2002). Organisational failures in dependable collaborative enterprise systems. Journal of Object Technology, 1, 107-117.
28. Russell, C. T., Antkiewicz, M., Florer, P., Widup, S., and Woodyard, M. (2013). How bad is it? A branching activity model to estimate the impact of information security breaches. Twelfth Workshop on the Economics of Information Security (WEIS 2013), Georgetown University, Washington, DC.
29. Schneier, B. (2008, January 18). The psychology of security. Retrieved from http://www.schneier.com/essay-155.html
30. Shalizi, C. R. (2006). Methods and techniques of complex systems science: An overview. In T. S. Deisboeck and J. Y. Kresh (Eds.), Complex systems science in biomedicine, pp. 33-114. New York, NY: Springer.
31. Shoniregun, C. A. (2005). Impacts and risk assessment of technology for Internet security (Vol. 17). Berlin: Springer.
32. Sommestad, T., Ekstedt, M., and Holm, H. (2012). The cyber security modeling language: A tool for assessing the vulnerability of enterprise system architectures. IEEE Systems Journal, 7(3), 363-373.
33. St. Andrews University. (2011). Sociotechnical systems engineering handbook. Fife, Scotland: Author.
34. Sun, L., Srivastava, R. P., and Mock, T. J. (2006). An information systems security risk assessment model under the Dempster-Shafer theory of belief functions. Journal of Management Information Systems, 22(4), 109-142.
35. Tipton, H., and Krause, M. (2004). Information security management handbook (5th ed.). Boca Raton, FL: Auerbach Publications.
36. Trist, E., and Bamforth, K. (1951). Some social and psychological consequences of the longwall method of coal-getting. Human Relations, 4(1), 3-38.
37. Verendel, V. (2009). Quantified security is a weak hypothesis: A critical survey of results and assumptions. Proceedings of the New Security Paradigms Workshop, pp. 37-50.
By Konstantinia Charitoudi and Andrew J. C. Blyth
Konstantinia Charitoudi holds a Computer Science degree and is currently pursuing a PhD in Information Security at the University of South Wales, UK. Her main research focus is on simulating the impact of cyber attacks on critical infrastructure. More specifically, she is investigating how the impact of a cyber attack on critical infrastructure propagates from the lower physical level to the functions level, all the way up to the personnel roles level.
Andrew J. C. Blyth is Head of the Information Security Research Group & GSC-CSIRT at the Faculty of Computing, Engineering and Science, University of South Wales. Professor Blyth is one of the leading researchers in information security in the United Kingdom.