Testing Automation for an Intrusion Detection System
Jeremy Straub
Department of Computer Science
North Dakota State University
Fargo, ND, USA
[email protected]
Abstract—Intrusion detection systems are used in computer
networking and other applications to detect and respond to
attempts to compromise computers, servers, firewalls and other
network resources. As intrusion detection systems move beyond
providing simple pattern recognition capabilities for known
attack types, the ability to test these systems with conventional
techniques (or use formal or other similar methods) becomes
extremely limited.
Environmental factors and other
considerations present numerous scenarios that cannot be
exhaustively identified, much less fully tested. This paper
presents the use of an adaptive and automated testing paradigm
to more fully validate intrusion detection systems that cannot be
effectively fully tested by other means.
Keywords—intrusion detection systems, automated testing,
testing automation, adaptive testing, autonomous testing
1. Introduction
Intrusion detection systems (IDSs) have historically been
used to identify prospective attacks on networks and networked
resources. To do this, they identify attacks by signature, by
sensing the presence of an abnormal behavior or by sensing the
absence of a desired normal behavior. More recently, IDSs
have been proposed for use by cyber-physical systems, which
interact with the real-world environment. These systems
introduce a multitude of new types of data to prospectively
process to detect attacks and other undesirable behavior. The
introduction of this wealth of data, while necessary for system
functionality, increases the complexity of the systems
exponentially and makes the assurance of the system’s
performance problematic.
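The three detection modes just described (matching a known attack signature, sensing the presence of abnormal behavior, and sensing the absence of expected normal behavior) can be sketched in a few lines. The event names, signatures, and data structures below are illustrative assumptions, not from the paper:

```python
# Illustrative sketch of the three IDS detection modes described above.
# Signatures and event names are hypothetical.

ATTACK_SIGNATURES = {"' OR 1=1 --", "../../etc/passwd"}
NORMAL_EVENTS = {"heartbeat", "auth_ok"}

def detect(events):
    alerts = []
    for e in events:
        if e in ATTACK_SIGNATURES:
            alerts.append(("signature", e))          # known attack pattern
        elif e not in NORMAL_EVENTS:
            alerts.append(("abnormal_present", e))   # abnormal behavior seen
    if "heartbeat" not in events:
        alerts.append(("normal_absent", "heartbeat"))  # expected behavior missing
    return alerts

print(detect(["auth_ok", "../../etc/passwd"]))
```

Real IDSs apply these checks to far richer data (packet captures, logs, and, for cyber-physical systems, sensor streams), which is where the combinatorial testing difficulty arises.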
This paper proposes an answer to the question of how to
test these systems. Formal verification is not possible, as the
systems must consume real-world data from sensors (which
may have their own idiosyncrasies, failure modes and other
limitations). Given the multitude of types of data that the
system may be subjected to and a very real potential that an
attacker may attempt to supply data to explicitly deceive the
system (to cover an attack or trigger an IDS warning / response
in the absence of an attack), testing under a broad variety of
conditions is required.
978-1-5090-4922-6/17/$31.00 ©2017 IEEE
An autonomous testing unit is proposed and detailed which is trained with data collected from normal, abnormal non-attack and simulated attack conditions. This training data is used to populate a base set of tests which are then modified via
an expert system rule set and both random and intentional
manipulation to generate scenarios. These scenarios are
presented (via simulation) to the IDS under test and its actions
are recorded. As IDSs are also (typically) learning systems, a
mechanism is provided to supply the IDS with feedback
regarding the simulated attack, facilitating its own learning.
The IDS testing automation system is capable of running tests
in both real time and on a faster-than-real-time, turn-based
basis to facilitate both rapid testing and training of the IDS.
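The scenario-generation step just described (base tests drawn from recorded data, modified via an expert-system rule set plus random manipulation) might be sketched as follows. The rules, field names, and data shapes are hypothetical, not the paper's implementation:

```python
import random

# Assumed sketch of rule-based plus random scenario generation.
RULES = [  # expert-system-style rewrites of a base scenario
    lambda s: {**s, "packet_rate": s["packet_rate"] * 10},   # traffic burst
    lambda s: {**s, "source": "spoofed_" + s["source"]},     # spoofed origin
]

def generate_scenarios(base, n, seed=0):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        scenario = dict(base)
        scenario = rng.choice(RULES)(scenario)            # rule-based change
        scenario["packet_rate"] *= rng.uniform(0.5, 2.0)  # random manipulation
        out.append(scenario)
    return out

base = {"source": "10.0.0.5", "packet_rate": 100.0}
print(generate_scenarios(base, 3))
```

Each derived scenario would then be replayed (in simulation) against the IDS under test and its response recorded.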
After presenting the IDS automated testing system, the
paper explains how this system would fit into an overarching
IDS testing and updating strategy. In particular, the utility of
the testing system for training new attack types onto an
existing IDS (without losing stored local knowledge) is
discussed. The efficacy of the system for IDS training,
penetration testing and otherwise validating a complete system
cybersecurity solution is considered. In particular, focus is
paid to the trade-offs between rapid and real-time testing and
what is required of an IDS to allow it to be tested in this manner.
Several different IDSs (including multiple cyber-physical
system IDS solutions) are presented and the application of the
testing system to each is explained. These IDSs are used as
case studies to evaluate the utility of the IDS testing
automation system. Finally, the paper concludes with a
discussion of and a roadmap for the future work required on
both cyber-physical system IDSs and testing automation
systems to enable a complete and secure cyber-physical system
IDS capability.
2. Background
This section, reprinted with modifications from [1],
discusses the use of autonomous testing to ensure the success
of an autonomous system. For systems of any size and
complexity, an efficient method of validation is required. A
sufficient number of test cases need to be developed in order to
show that the system can perform acceptably in the real world
environment in which it was designed to operate. There is a
significant body of research related to validation and test case
generation techniques for artificial intelligence systems and
their evaluation. Existing work in four areas (testing artificial intelligence systems using test cases, artificial intelligence-based test case generation, testing as a search problem, and software and artificial intelligence failures) is now reviewed.
2.1. Testing Artificial Intelligence (AI) with Test Cases
An intrusion detection system is an application of AI
techniques to cyber security. One of the most basic forms of
testing an autonomous system is with manually generated test
cases. This involves a human tester creating scenarios that will
be presented to the AI, or under which the performance of the
AI will be evaluated. Feigenbaum [2], for example, reviewed artificial intelligence systems designed for diagnosis based on medical case studies and concluded that the modularity of the “Situation → Action” technique allowed for rules to be changed or added easily as the expert’s knowledge of the domain grew. This allowed more advanced cases to be used for testing.
Chandrasekaran [3] suggests that the evaluation of an AI
must not be based only on the final result. In [3], an approach
to the validation of Artificial Intelligence Medical (AIM)
systems for medical decision-making is presented. The paper
also examines some of the problems encountered during AIM
evaluations. During performance analysis of AI systems,
evaluating success or failure based upon the final result may
not show the entire picture. Intermediate execution could show
acceptable results even though the final result is unsatisfactory.
Evaluating important steps in reasoning can help alleviate this problem.
Another example of AI testing with test cases is presented
by Cholewinski et al. [4] who discuss the Default Reasoning
System (DeReS) and its validation through test cases derived
from TheoryBase, a benchmarking system “designed to
support experimental investigations of nonmonotonic
reasoning systems based on the language of default logic or
logic programming”. Through the use of TheoryBase-generated default theories, DeReS was shown to be a success.
Cholewinski et al. also proffer that TheoryBase can be used as
a standalone system and that any non-monotonic reasoning
system can use it as a benchmarking tool.
Brooks [5] comments on the use of simulation testing. In
[5] the possibility of controlling mobile robots with programs
that evolve using artificial life techniques is explored. Brooks
has not implemented or tested the ideas presented; however,
some intriguing notions regarding simulation and testing of
physical robots are discussed. Using simulated robots for
testing, before running the programs on physical robots, has
generally been avoided for two reasons [6]–[8]: First, for real-world autonomous systems, there is a risk that the time
involved in resolving issues identified in a simulated
environment will be wasted due to dissimilarities between the
types of events occurring in simulation versus the real
operating space. Second, emulating real-world dynamics in a
simulated environment is difficult due to differences in real
world sensing. This increases the chance of the program
behaving differently in the real world. The use of simulated
robots for testing may uncover basic issues impairing a control
program. However, this approach tends not to uncover some
problems encountered
The previous studies have related to validation methods and
test cases used to assess AI systems designed for practical or
complex tasks in everyday life. Billings et al. [9], alternately, explore the use of an AI designed for competition rather than the performance of particular jobs. Poki is an AI-driven program built as an autonomous substitute for human players in world-class poker tournaments (specifically, Texas Hold’em tournaments). In poker, players are constantly
adapting across playing many hands. Two methods are
discussed to validate the program: self-play and live-play.
Self-play tests are a simple method of validation where an older version of the tested program is pitted against the current version. This allows a great variety of hands to be played in a short amount of time, but it only exposes the program to opponents that share its own style of play. Live-play tests seek to alleviate this problem and are, thus, considered by Billings et al. as essential
for accurate evaluation. Implementing the poker AI as part of
an online game is one of the more effective ways to test
performance, as thousands of players are able to play at any
given time.
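The self-play idea is easy to illustrate: pit an older version of an agent against the current one over many quick games and measure the win rate. The toy "agents" and game below are stand-ins for illustration, not Poki's actual logic:

```python
import random

# Toy self-play harness in the spirit of the Poki self-play tests.
def old_agent(rng):
    return rng.randint(1, 6)                          # random "hand strength"

def new_agent(rng):
    return max(rng.randint(1, 6), rng.randint(1, 6))  # slightly stronger draw

def self_play(n_games=10000, seed=1):
    rng = random.Random(seed)
    # Count games the new version wins outright against the old version.
    new_wins = sum(new_agent(rng) > old_agent(rng) for _ in range(n_games))
    return new_wins / n_games

print(self_play())  # fraction of games won by the new version
```

As Billings et al. note, a consistent edge over the prior version is evidence of progress, but it cannot substitute for exposure to the diverse styles found in live play.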
In testing Poki, Billings et al. tested each version of the program for 20,000 hands, using the average number of small bets won per hand as the performance measurement when interpreting the results.
The work done on the Poki poker system shows that
validating an AI through testing its function against another AI
(itself in this case) is a helpful tool for evaluating system
performance. Real-world application test cases are also shown to be critical in validating the utility of the AI.
2.2. AI Test Case Generation
While manual test case generation may be suitable for a
system where the scope of performance is limited, systems that
have to operate in a real-world environment, such as intrusion
detection systems, must function under a large variety of conditions. Given this, a more efficient approach to test case
generation is desirable.
Dai, Mausam and Weld [10] deal with a similar problem,
except in the context of evaluating human performance on a
large scale. They look at using an AI adaptive workflow, based
on their TurKontrol software, to increase the performance of a
decision making application. Their workflow controller is
trained with real world cases from Amazon’s Mechanical Turk.
Mechanical Turk utilizes humans to perform repetitive tasks
such as image description tagging. For this, a model of
performance and iterative assessment is utilized to ensure
appropriate quality. Through autonomously determining
whether additional human review and revision was required,
TurKontrol was able to increase quality performance by 11%.
Dai, Mausam, and Weld note that the cost of this increased
performance is not linear and that an additional 28.7% increase in cost would be required to achieve a comparable level of further improvement.
The work performed by Dai, Mausam, and Weld provides
an implementation framework for autonomously revising AI
performance, based upon their work in assessing and refining
human performance. The approach can be extended to
incorporate AI workers and evaluators, for applications where
these tasks can be suitably performed autonomously.
Pitchforth and Mengersen [11] deal with the problem of
testing an AI system. Specifically, they look at the process of
validating a Bayesian network, which is based on data from a
subject matter expert. They note that previous approaches to
validation either involved the comparison of the output of the
created network to pre-existing data or relied upon an expert to
review and provide feedback on the proposed network.
Pitchforth and Mengersen proffer, however, that these
approaches fail to fully test the networks’ validity.
While Pitchforth and Mengersen do not provide a specific
method for the development of use and test cases, their analysis
of the validation process required for a Bayesian network
informs the process of creating them. It appears that use cases
are relevant throughout their validation framework and test
cases are specifically relevant to the analysis of concurrent and
predictive validity. Moreover, the convergent and divergent
analysis processes may inform the types of data that are
required and well suited for test case production.
The use of AI in software development and debugging is
also considered by Wotawa, Nica, and Nica [12], who discuss
the process of debugging via localizing faults. Their proposed
approach, based on model-based diagnosis, is designed to
repetitively test a program or area of code to determine
whether it functions properly. To this end, they propose an
approach that involves creating base test cases and applying a
mutation algorithm to adapt them.
While Wotawa, Nica, and Nica’s work is quite limited (as
they note) in the context of line-by-line review of program
code, the fundamental concept is exceedingly powerful. Input
parameters can be mutated extensively without having to create
a mechanism to generate an associated success condition.
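The idea of mutating input parameters without a per-case success oracle can be sketched by checking each mutant against an invariant that must hold for any input. The function under test, the mutation operators, and the invariant below are assumed for illustration, not from Wotawa, Nica, and Nica:

```python
import random

# Assumed sketch of invariant-checked input mutation.
def mutate(value, rng):
    op = rng.choice(["scale", "negate", "zero"])
    return {"scale": value * rng.randint(2, 10),
            "negate": -value,
            "zero": 0}[op]

def clamp(x, lo=0, hi=100):          # toy function under test
    return max(lo, min(hi, x))

def mutation_test(base_inputs, trials=100, seed=0):
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        x = mutate(rng.choice(base_inputs), rng)
        y = clamp(x)
        if not (0 <= y <= 100):      # invariant: output always in range
            failures.append(x)
    return failures

print(mutation_test([5, 50, 99]))
```

Because the invariant holds for every input, mutants can be generated freely without hand-writing an expected result for each one, which is the power of the approach.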
AdiSrikanth et al. [13], on the other hand, deal with a more
generalizable approach. They propose a method for test case
creation based upon an artificial bee colony algorithm. This
algorithm is a swarm intelligence approach where three classes
of virtual bees are utilized to find an optimal solution:
employed, onlookers, and scouts. Bees seek to identify “food
sources” with the maximum amounts of nectar.
In the implementation for optimizing test cases, a piece of
code is provided to AdiSrikanth et al.’s tool. This software
creates a control flow graph, based on the input. The software
then identifies all independent paths and creates test cases,
which cause the traversal of these paths. Optimization is
achieved via a fitness value metric.
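A highly simplified, fitness-driven search in the spirit of this approach might look like the following. The toy program, the branch-coverage fitness, and the scout/onlooker probabilities are illustrative assumptions rather than AdiSrikanth et al.'s algorithm:

```python
import random

# Simplified fitness-driven test-input search: "scouts" restart at random,
# "onlookers" refine the best candidate; fitness counts branches covered.
def branches(x):                     # toy program under test
    hit = set()
    if x > 0: hit.add("pos")
    if x % 2 == 0: hit.add("even")
    if x > 100: hit.add("big")
    return hit

def search(iterations=200, seed=0):
    rng = random.Random(seed)
    best, best_fit = 0, len(branches(0))
    for _ in range(iterations):
        if rng.random() < 0.2:                  # scout: random restart
            cand = rng.randint(-1000, 1000)
        else:                                   # onlooker: perturb the best
            cand = best + rng.randint(-10, 10)
        fit = len(branches(cand))
        if fit > best_fit:
            best, best_fit = cand, fit
    return best, best_fit

print(search())  # best input found and its branch coverage
```

The time-cost concern raised above shows up directly here: the number of fitness evaluations grows with both the iteration budget and the size of the program's path space.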
This work demonstrates the utility of swarm intelligence
techniques for test case generation and refinement. AdiSrikanth
et al., regrettably, fail to consider the time-cost of their
proposed approach. While an optimal solution for a small
program can be generated fairly quickly, the iterative approach
that they utilize may be overly burdensome for a larger program.
Similar to the bee colony work performed by AdiSrikanth
et al. is the ant colony optimization work performed by Suri
and Singhal [14]. Suri and Singhal look at using ant colony
optimization (ACO) for regression testing.
Specifically, they look at how regression tests should be
prioritized to maximize the value of regression testing, given a
specific amount of time to perform the testing within.
The time requirements for ACO-selection-based execution
ranged between 50% and 90% of the time required to run the
full test suite; the average appears to be around 80%.
A more general view is presented by Harman [15], who
reviews how artificial intelligence techniques have been used in software engineering. He proffers that three categories of techniques have received significant use: optimization and search, fuzzy reasoning, and learning. The first is utilized by the field of “Search Based Software Engineering” (SBSE), which converts software engineering challenges into optimization tasks, bringing the wealth of solution-search knowledge in the AI optimization domain to bear on software engineering problems. Fuzzy reasoning is used by software engineers to consider real-world problems of a probabilistic nature. Harman proffers that the continued integration of AI techniques into software engineering is all but inevitable, given the growing complexity of modern programs.
2.3. Testing as a Search Problem
Validation can also be conceived of as a search problem. In
this case, the search’s ‘solution’ is a problem in the system
being tested. Several search approaches relevant to this are
now reviewed. The use of most of these approaches for testing remains to be evaluated; future work in this area may include the comparison of these approaches and the evaluation of their performance across various testing applications.
Pop et al. [16], for example, present an enhancement of the
Firefly search algorithm that is designed to elicit optimal, or
near-optimal, solutions to a semantic web service composition
problem. Their approach combines the signaling mechanism
utilized by fireflies in nature with a random modification approach.
Pop et al. compare the firefly solution with a bee-style
solution. The bee-style solution took 44% longer to run,
processing 33% more prospective solutions during this time.
The firefly approach had a higher standard deviation (0.007
versus 0.002). Pop et al. assert that their work has
demonstrated the feasibility of this type of approach.
Shah-Hosseini [17] presents an alternate approach, called
the Intelligent Water Drop (IWD) approach, to problem solving
that utilizes an artificial water drop with properties mirroring
water drops in nature. Two properties of water drops are
important. The first important aspect is its soil carrying
capability. The water drops, collectively, pick up soil from fast-moving parts of the river and deposit it in the slower parts.
Second, the water drops choose the most efficient (easiest) path
from their origin to their destination. The IWD method can be
utilized to find the best (or near-best) path from source to
destination. It can also be utilized to find an optimal solution
(destination) to a problem that can be assessed by a single
metric. Duan, Liu, and Wu [18] demonstrate the IWD’s real-world application to route generation and smoothing for an unmanned combat aerial vehicle (UCAV).
Yet another search technique is presented by Gendreau,
Hertz, and Laporte [19] who discuss an application of a
metaheuristic improvement method entitled the Tabu Search,
which was developed by Glover [20], [21]. This approach takes
its name from the use of a ‘Tabu List’, which prevents
redundant visits to recently visited nodes via placing them on a
list of nodes to avoid. The approach is open-ended and allows exploration of less-optimal-than-current solutions, letting the search leave local minima in pursuit of the global optimum.
Fundamentally, as an improvement method, the Tabu
Search visits adjacent solutions to the current solution and
selects the best one to be the new current solution. Because of
this, it can be initialized with any prospective solution (even an
infeasible one). Gendreau, Hertz, and Laporte evaluate this
search in the context of TABUROUTE, a solution to the
vehicle routing problem. They conclude that the Tabu Search
outperformed the best existing heuristic-based searches and
that it frequently arrives at optimal or best known solutions.
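A minimal sketch of the Tabu Search loop described above (move to the best neighboring solution, keep a short tabu list of recent solutions, and accept worsening moves so the search can escape local minima) on a toy one-dimensional objective:

```python
from collections import deque

def objective(x):
    return (x - 7) ** 2              # toy objective; global minimum at x = 7

def tabu_search(start, steps=50, tabu_len=5):
    current, best = start, start
    tabu = deque([start], maxlen=tabu_len)   # fixed-length tabu list
    for _ in range(steps):
        # Neighbors of the current solution, excluding recently visited ones.
        neighbors = [n for n in (current - 1, current + 1) if n not in tabu]
        if not neighbors:
            break
        current = min(neighbors, key=objective)  # may be worse than before
        tabu.append(current)
        if objective(current) < objective(best):
            best = current                       # track the best seen so far
    return best

print(tabu_search(20))  # → 7
```

Note that, as described above, the search can be started from any solution, including an infeasible or poor one, because only the best solution found is retained.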
Finally, Yang and Deb [22] propose a search based upon
the egg-laying patterns of the cuckoo. This bird lays eggs in the
nests of other birds of different species. The bird that built the
nest that the cuckoo lays its egg in may, if it detects that the
egg is not its own, destroy it or decide to abandon the nest. The
Cuckoo Search parallels this. Each automated cuckoo creates
an egg, which is a prospective problem solution, which is
placed into a nest at random. Some nests that have the
generation’s best solutions will persist into the next generation;
a set fraction of those containing the worst performing
solutions will be destroyed. There is a defined probability of
each nest being destroyed or the egg removed (paralleling the
discovery of the egg by the host bird in nature). New nests are
created at new locations (reached via Levy flights) to replace
the nests destroyed (and maintain the fixed number of nests).
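The basic Cuckoo Search scheme can be sketched as follows. The objective function, the heavy-tailed stand-in for Levy flights, and all parameters are illustrative assumptions:

```python
import math
import random

def objective(x):
    return abs(x - 3.0)              # toy objective; best solution at x = 3

def levy_step(rng, scale=1.0):
    # Simple heavy-tailed (Cauchy-like) step as a stand-in for a Levy flight.
    return scale * math.tan(math.pi * (rng.random() - 0.5))

def cuckoo_search(n_nests=15, generations=100, abandon_frac=0.25, seed=2):
    rng = random.Random(seed)
    nests = [rng.uniform(-10, 10) for _ in range(n_nests)]
    for _ in range(generations):
        i = rng.randrange(n_nests)
        egg = nests[i] + levy_step(rng)   # new egg via a random flight
        j = rng.randrange(n_nests)        # random nest to drop it in
        if objective(egg) < objective(nests[j]):
            nests[j] = egg
        nests.sort(key=objective)         # abandon the worst fraction of nests
        for k in range(int(n_nests * (1 - abandon_frac)), n_nests):
            nests[k] = rng.uniform(-10, 10)
    return min(nests, key=objective)

print(round(cuckoo_search(), 3))
```

The Modified Cuckoo Search discussed next mainly changes two things in such a loop: the flight scale shrinks each generation (initial value divided by the square root of the generation number), and new eggs are also seeded by combining top-performing eggs.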
Walton, Hassan, Morgan, and Brown [23] refine this
approach. Their Modified Cuckoo Search incorporates two
changes designed to increase the speed of convergence at an
optimal solution. First, they change the distance of the Levy
flight from a fixed value to a value that declines on a
generation-by-generation basis, with each generation having a
value that is the initial value divided by the square root of the
generation. Second, they create a mechanism to seed new eggs
that are based upon the best currently known performing eggs.
To do this, a collection of top eggs is selected and two of these
eggs are selected for combination. Walton, Hassan, Morgan,
and Brown assert that the Modified Cuckoo Search
outperformed the Cuckoo Search in all test cases presented and
that it also performed comparably to or outperformed the
Particle Swarm Optimization approach.
Bulatovic, Dordevic, and Dordevic [24] demonstrate the
utility of the Cuckoo Search to real-world problems. They
utilize it to optimize 20 design variables as part of solving the
six-bar double dwell linkage problem in mechanical
engineering. Gandomi, Yang, and Alavi [25] demonstrate its
utility on a second set of real-world problems related to design
optimization in structural engineering.
2.4. Software and AI Failures
The need for verification and validation of AI systems is
now reviewed. Clearly, not all systems require the same level
of extensive validation. The impact and likelihood of systems’
failure are key considerations in determining how much testing
and other validation is required. Halawani [26] proffers that too
much reliance is placed in software, including artificial
intelligence control systems. Several examples of highly
impactful failures illustrate and support this. A nearly catastrophic error occurred in 1983 when software running a
Soviet early warning system misidentified sunlight reflection
from clouds as a prospective U.S. missile strike [26], [27]. The
Mars Climate Orbiter, a $125 million spacecraft, crashed
due to a units mismatch between two systems [26], [28]. One
was using and expecting metric units, while the other used and
expected imperial units. A similar (easily correctable if caught)
issue resulted in the loss of the Mariner I probe [26], [29]. The
initial 1996 launch of the Ariane 5 rocket failed due to an
integer conversion issue at a cost of approximately one-half
billion dollars [26], [27]. A radiation therapy machine, the
Therac-25, subjected two patients to lethal doses of radiation
during a treatment [26], [30].
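The Mars Climate Orbiter failure was a units mismatch, a class of bug that is trivial to illustrate. The function names and unit handling below are hypothetical, not the spacecraft's actual software:

```python
# Hypothetical illustration of a units-mismatch bug: raw floats let imperial
# data flow where SI data is expected, while explicit unit tags catch it.
LBF_S_TO_N_S = 4.448222  # pound-force-seconds to newton-seconds

def thrust_raw(impulse):
    # Raw float interface: nothing flags data supplied in the wrong units.
    return impulse * 2.0

def thrust_checked(impulse, unit):
    if unit == "lbf*s":
        impulse *= LBF_S_TO_N_S      # normalize to SI before computing
    elif unit != "N*s":
        raise ValueError("unknown unit: " + unit)
    return impulse * 2.0

# A raw call with imperial data is silently wrong by a factor of ~4.45:
print(thrust_raw(10.0), thrust_checked(10.0, "lbf*s"))
```

The defect class is "easily correctable if caught," as noted above; the testing challenge is that nothing in the raw interface ever fails locally, so only end-to-end validation exposes it.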
3. Intrusion Detection Systems
Intrusion detection systems (IDSs) [31], [32] are used to
detect network attacks. They typically look for the presence or
absence of patterns. This includes a pattern that matches a
known attack (i.e., an attack signature), elements that don’t
match a pattern of normal behavior and the lack of elements of
a pattern of normal behavior. When an anomaly is found, they
can take action to alert a user / administrator and / or take
further action on their own in response.
IDSs can be used to protect high-value systems or network
segments. They can be on the device that they’re monitoring
for intrusion. They can be on a device that observes the device
or network segment that they’re monitoring for intrusions.
IDSs can serve both as early warning systems and last lines of
defense. They can be placed to monitor the perimeter of a
network for early signs of attack. They can be placed to watch
key areas of a network or key systems to ascertain when a
breach has occurred.
Intrusion detection isn’t absolute. IDSs can provide both
false positive and false negative reports of a network intrusion.
Some IDSs draw conclusions about whether an attack has occurred and can even take actions in response to a believed attack. Other IDSs leave conclusions and actions up to human administrators and simply issue reports or alerts when conditions dictate. Even with a believed intrusion and an IDS
that is empowered to take responsive actions, typically more
analysis of the intrusion is required beyond the initial
identification. This includes confirming the assertion of the attack occurring and learning more about the attack (to target the response and prepare for similar or derivative attacks in the future).
Testing the basic software functionality of an IDS is
relatively easy. To do this, one must test its scanning
capabilities, test its pattern recognition capabilities, test its
notification capabilities and test other relevant portions of the
software. After this, one may know that the software is working as expected. However, this doesn’t answer the question of whether the system works as a whole or whether the system learns effectively. The system must also be given
training to perform well right out of the box (or shortly
thereafter) and to be useful while it is learning more about its operating environment.
Automated and particularly adaptive automated testing
present a solution to this problem. Under this paradigm, testers
create a sandbox, provide input and evaluate the responses.
Figure 1 depicts this.
Significant discussion has surrounded the use of AI systems
and automation for attacks against cybersecurity and network
security mechanisms and systems. Many of these attack
paradigms make use of methodologies from the testing and
automated testing community. Fundamentally, attackers are
trying to find a defect in the system (a testing activity) and
exploit it. Detection systems can be trained by an automated
security testing system that is similarly looking for defects.
Multiple testing systems / methodologies can be used during this training to aid in the detection of different attack types.
Over time, the attack / defense process may grow to a speed
where human response times are too slow to be effective
against an adversary (see [33] for a discussion of this problem
in a different context). In this circumstance, IDS training and
verification of functionality and efficacy will need to be
conducted automatically to keep pace with the enemy.
Figure 1. Automated Testing Process.
Automated adaptive testing can further advance the
achievement of the testing goals. It adjusts the test scenarios
programmatically, based on the change in results. It avoids
spending a lot of time in areas where the software is shown to
be working well. It refines the testing focus by exploring new, untested areas and by targeting testing on areas where performance seems to be getting worse or moving towards failure.
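One way to sketch this adaptive refocusing is to sample test areas in proportion to their observed failure rates, so that effort drains away from areas shown to be working and concentrates on areas trending towards failure. The area names, priors, and simulated defect below are assumptions for illustration:

```python
import random

# Assumed sketch of failure-rate-weighted adaptive test scheduling.
class AdaptiveTester:
    def __init__(self, areas, seed=0):
        # Start each area with a weak prior of one run and one failure.
        self.stats = {a: {"runs": 1, "fails": 1} for a in areas}
        self.rng = random.Random(seed)

    def pick_area(self):
        # Sample areas weighted by their observed failure rate.
        weights = [s["fails"] / s["runs"] for s in self.stats.values()]
        return self.rng.choices(list(self.stats), weights=weights)[0]

    def record(self, area, failed):
        self.stats[area]["runs"] += 1
        self.stats[area]["fails"] += int(failed)

tester = AdaptiveTester(["scanning", "pattern_match", "notification"])
for _ in range(200):
    area = tester.pick_area()
    tester.record(area, failed=(area == "pattern_match"))  # simulated defect

print(max(tester.stats, key=lambda a: tester.stats[a]["runs"]))
```

Healthy areas see their weights decay towards zero as passing runs accumulate, while the defective area keeps attracting tests, which is the behavior described above.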
In addition to testing system performance, the automated
testing system can supply inputs that can be used to facilitate
machine learning. When using the testing system in this way,
care must be taken to ensure that these inputs are accurate.
Relevant patterns, including attack definitions, desirable &
undesirable behaviors, and such, must be presented. Care must
also be taken so as not to train away functionality by providing inaccurate or otherwise problematic inputs. As an
added benefit, while being used for training, the testing system
is also learning about the system under test and how to better
test it. Multiple AI and computational intelligence techniques
can be used for both systems (see [1] for more details).
This paper has discussed the use of automated testing and
training for an intrusion detection system. It has explained
how, in many environments and circumstances, the use of
automated testing to train and test an IDS makes sense. The
development of an automated IDS testing system is useful not just now but also in the future. The automated test
system serves multiple beneficial and interrelated roles, in the
immediate term. In the longer term, a head-to-head attacker versus defender scenario may develop where a rapid speed of
training and response may be needed to keep the defending
system capable of effective response.
References
[1] J. Straub and J. Huber, “A Characterization of the Utility of Using Artificial Intelligence to Test Two Artificial Intelligence Systems,” Computers, vol. 2, no. 2, pp. 67–87, 2013.
[2] E. A. Feigenbaum, “The Art of Artificial Intelligence: Themes and Case Studies of Knowledge Engineering,” in Proceedings of the International Joint Conference on Artificial Intelligence, 1977.
[3] B. Chandrasekaran, “On Evaluating Artificial Intelligence Systems for Medical Diagnosis,” AI Mag., vol. 4, no. 2, p. 34, Jun. 1983.
[4] P. Cholewinski, V. W. Marek, A. Mikitiuk, and M. Truszczynski, “Computing with default logic,” Artif. Intell., vol. 112, no. 1, pp. 105–146, 1999.
[5] R. A. Brooks, “Artificial Life and Real Robots,” pp. 3–10, 1992.
[6] R. A. Brooks, “Elephants don’t play chess,” Rob. Auton. Syst., vol. 6, no. 1–2, pp. 3–15, Jun. 1990.
[7] R. A. Brooks, “Intelligence Without Reason,” pp. 569–595, 1991.
[8] R. A. Brooks, “New Approaches to Robotics.”
Data from normal and abnormal (attack) and non-attack abnormal conditions can be used as an input to the system for training and testing. Data can be automatically modified to create multiple dissimilar, but derivative, scenarios to train from and use for testing. A similar automated training process may be needed for some systems that are entirely learning-based (i.e., no definitions are used) to rapidly acclimate them to new desired behaviors (i.e., system use changes) and to help them recognize new types of attack.
[9] D. Billings, A. Davidson, J. Schaeffer, and D. Szafron, “The challenge of poker,” Artif. Intell., vol. 134, no. 1, pp. 201–240, 2002.
[10] P. Dai, Mausam, and D. S. Weld, “Artificial intelligence for artificial artificial
intelligence,” in Twenty-Fifth AAAI Conference on Artificial
Intelligence, 2011.
[11] J. Pitchforth and K. Mengersen, “A proposed validation framework for expert elicited Bayesian
Networks,” Expert Syst. Appl., vol. 40, no. 1, pp. 162–167, Jan. 2013.
[12] F. Wotawa, S. Nica, and M. Nica, “Debugging and test case generation
using constraints and mutations,” in Intelligent Solutions in Embedded
Systems (WISES), 2011 Proceedings of the Ninth Workshop on, 2011,
pp. 95–100.
[13] AdiSrikanth, N. J. Kulkarni, K. V. Naveen, P. Singh, and P. R. Srivastava, “Test Case Optimization Using Artificial Bee Colony Algorithm,” in Advances in Computing and Communications, Springer, 2011, pp. 570–579.
[14] B. Suri and S. Singhal, “Analyzing test case selection & prioritization using ACO,” ACM SIGSOFT Softw. Eng. Notes, vol. 36, no. 6, p. 1, Nov. 2011.
[15] M. Harman, “The role of artificial intelligence in software engineering,”
in Proceedings of the First International Workshop on Realizing AI
Synergies in Software Engineering, 2012, p. 61.
[16] C. B. Pop, V. Rozina Chifu, I. Salomie, R. B. Baico, M. Dinsoreanu, and
G. Copil, “A Hybrid Firefly-inspired Approach for Optimal Semantic
Web Service Composition,” Scalable Comput. Pract. Exp., vol. 12, no.
3, 2011.
[17] H. Shah-Hosseini, “Problem solving by intelligent water drops,” in
Evolutionary Computation, 2007. CEC 2007. IEEE Congress on, 2007,
pp. 3226–3231.
[18] H. Duan, S. Liu, and J. Wu, “Novel intelligent water drops optimization
approach to single UCAV smooth trajectory planning,” Aerosp. Sci.
Technol., vol. 13, no. 8, pp. 442–449, 2009.
[19] M. Gendreau, A. Hertz, and G. Laporte, “A Tabu Search Heuristic for
the Vehicle Routing Problem,” Manage. Sci., vol. 40, no. 10, pp. 1276–
1290, Oct. 1994.
[20] F. Glover, “Heuristics for Integer Programming Using Surrogate Constraints,” Decis. Sci., vol. 8, no. 1, pp. 156–166, Jan. 1977.
[21] F. Glover, “Tabu Search: A Tutorial,” Interfaces (Providence)., vol. 20,
no. 4, pp. 74–94, Aug. 1990.
[22] X.-S. Yang and S. Deb, “Cuckoo search via Lévy flights,” in Nature &
Biologically Inspired Computing, 2009. NaBIC 2009. World Congress
on, 2009, pp. 210–214.
[23] S. Walton, O. Hassan, K. Morgan, and M. R. Brown, “Modified cuckoo search: A new gradient free optimisation algorithm,”
Chaos, Solitons & Fractals, vol. 44, no. 9, pp. 710–718, Sep. 2011.
[24] R. R. Bulatovic, S. R. Dordevic, and V. S. Dordevic, “Cuckoo Search algorithm: A metaheuristic approach to solving the problem of optimum synthesis of a six-bar double dwell linkage,” Mech. Mach. Theory, vol. 61, pp. 1–13, Mar. 2013.
[25] A. H. Gandomi, X.-S. Yang, and A. H. Alavi, “Cuckoo search
algorithm: a metaheuristic approach to solve structural optimization
problems,” Eng. Comput., vol. 29, no. 1, pp. 17–35, Jan. 2013.
[26] S. Halawani, “Safety Issues of Computer Failure.”
[27] T. Huckle, “Collection of Software Bugs,” Institut für Informatik TU
München: Munich, Germany.
[28] Jet Propulsion Laboratory, “Mars Climate Orbiter,” Pasadena, CA.
[29] N. Dershowitz, “Software Horror Stories,” Tel Aviv University School
of Computer Science, Tel Aviv, Israel.
[30] P. Jorgensen, Software Testing: A Craftsman’s Approach. CRC Press.
[31] J. F. Maddox, M. B. Kadonoff, W. G. I. I. Robert, and R. A. Wendt,
“Intrusion detection system,” US4772875 A, 1988.
[32] M. Dass, J. Cannady, and W. D. Potter, “A blackboard-based learning
intrusion detection system: a new approach,” in Developments in
Applied Artificial Intelligence, Springer, 2003, pp. 385–390.
[33] J. Straub, “Consideration of the use of autonomous, non-recallable
unmanned vehicles and programs as a deterrent or threat by state actors
and others,” Technol. Soc., vol. 44, 2016.