IET Generation, Transmission & Distribution
Special Issue: Interfacing Techniques for Simulation Tools in Smart Grids
Co-simulation platform for integrated real-time power system emulation and wide area protection
ISSN 1751-8687
Received on 6th September 2016
Revised 21st December 2016
Accepted on 11th January 2017
E-First on 24th February 2017
doi: 10.1049/iet-gtd.2016.1366
Adeyemi Charles Adewole1 , Raynitchka Tzoneva1
1Centre for Substation Automation and Energy Management Systems (CSAEMS), Cape Peninsula University of Technology, Symphony Way,
Bellville 7535, Cape Town, South Africa
E-mail: [email protected]
Abstract: Phasor measurement units (PMUs) are increasingly being deployed in electric power systems in an attempt to
improve grid reliability and make the grid smarter through real-time wide area situational awareness, monitoring, protection and
control applications. PMU-based applications require communication networks in the transmission of synchrophasor
measurements from the substations to the control centres. However, the quality of service of these synchrophasor measurements is not guaranteed when they are transmitted over a global wide area network, especially in the presence of non-deterministic communication network conditions. This study presents the design, development, and integration of an IEEE Std. C37.118-based co-simulation platform comprising power system hardware-in-the-loop simulations using the real-time digital simulator interfaced to a communication network. The impact of adverse communication network conditions, such as latency, packet losses, and corruption, on the exchange of data using the co-simulation platform, and their impact on a centralised wide area monitoring, protection, and control (WAMPAC) system, was investigated with software-in-the-loop simulations. Moreover, the maximum allowable adverse communication conditions are quantified with respect to the WAMPAC applications during emergency conditions.
1 Introduction
In recent years, electric power systems have seen a steady
implementation of information and communication technology
(ICT) infrastructures in an attempt to make the grid more reliable
and smarter through the exchange of real-time data amongst
stakeholders. Communication in smart grids has a multi-layer
structure, and is usually done over several wide area networks
(WANs), local area networks (LANs), neighbourhood area
networks, and home area networks. A smart grid can be defined as an electric power system in which ICT is applied to the automated data acquisition and control of the various constraints that the power system may encounter, through the implementation and integration of sensing/actuation systems, advanced metering infrastructure, visualisation/analytics, real-time management/monitoring systems, and protection/control applications. Smart grid applications typically require real-time optimisation algorithms, computations, and technical know-how in their design and analyses.
1.1 Background
It is often difficult to access real-world data from the very few
smart grids that are in existence. Thus, it is necessary to carry out
power system simulations jointly with the emulation of the smart
grid communication infrastructure. This would require the modelling, simulation, and integration of the power system (and its components) and their communication in real-time using cyber-physical systems (CPSs). CPSs belong to a class of networked, computational (cyber) co-simulation platforms capable of monitoring, closed-loop automation, and control. CPSs are often
difficult to implement with the existing simulation platforms
commonly used in power system planning and operational studies
[1]. This is because the various existing simulation platforms are
designed using different development languages and interfaces.
Thus, their integration and interoperability is complicated and
requires multi-disciplinary expertise encompassing power system
modelling and simulation, communication network emulation,
system integration, and hardware-in-the-loop simulations [2].
IET Gener. Transm. Distrib., 2017, Vol. 11 Iss. 12, pp. 3019-3029
© The Institution of Engineering and Technology 2017
The electric power grid spreads over a vast geographical area
and is commonly monitored at control centres using conventional
measurements from supervisory control and data acquisition
(SCADA) systems. SCADA systems provide data telemetry and
telecontrol functions and are made up of geographically dispersed
field devices, remote terminal units, and master terminal units
(MTUs) which are all interconnected via a communication
medium. At the control centre, applications for monitoring,
protection and control are executed for displaying these data/
alarms on the human-machine interface (HMI), and for the
issuance of the protection/control actuating signals to the field
devices. However, existing SCADA systems have a low data
sampling rate, slow polling (scanning) rates of 1 measurement
every 2–10 s, and the measurements acquired from the field
devices are not synchronised at the time of acquisition. Thus,
SCADA-based measurements do not give a representative snapshot
(situational awareness) of the system at a given point in time.
Synchrophasor measurements are more suitable for smart grid
applications because they have high sampling and reporting rates.
These reporting rates can be sub-multiples or multiples of the
nominal system frequency. Synchrophasor measurements are
synchronised to a reference time source (usually the GPS) with a time accuracy of 1 µs or better. Unlike with SCADA systems, where state estimation (with several iteration loops) is required, the state of the power system can be directly measured using non-iterative linear state estimation if synchrophasor measurements in rectangular coordinate format are used.
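The linearity can be illustrated with a toy two-bus sketch (our own example with an assumed line impedance, not the authors' formulation): with rectangular-coordinate phasors, the measurement model z = Hx is linear in the complex bus voltages, so the state follows from a single least-squares solve with no iteration.

```python
import numpy as np

# Toy 2-bus example: a PMU measures V1 and the line current I12 = (V1 - V2)/Z12.
# In rectangular coordinates the model z = H x is linear in the complex bus
# voltages x = [V1, V2], so no iterative state estimation is needed.
z12 = 0.01 + 0.1j                       # assumed line impedance (p.u.)
H = np.array([[1, 0],                   # V1 measurement row
              [1 / z12, -1 / z12]])     # I12 measurement row
x_true = np.array([1.0 + 0.0j, 0.95 - 0.05j])
z = H @ x_true                          # synchrophasor measurements
x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)  # one-shot linear estimate
```

With redundant PMU measurements, H becomes tall and the same single least-squares solve yields the weighted estimate.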
1.2 Related work
Planning and operational studies are usually carried out using
software platforms such as DIgSILENT PowerFactory, PSCAD,
RSCAD, NEPLAN, PSS/E, ETAP, and PowerWorld. In practical
systems, the measurements from geographically-separated
substations are communicated to the control centre via a
communication link which can be subjected to non-deterministic
communication network conditions. These uncertainties should be
investigated by integrating these power system platforms with the
communication network, while subjecting the communication
network to various adverse conditions in real-time. Communication networks were simulated for power systems in [3]. However, that study was conducted with software models of communication networks and not actual communication networks. Moreover, power system simulations and their interaction with the communication network models were not carried out. In [4], a
power system simulation software was used in the modelling and
simulation of a power system, while the substation-to-control
centre PMU communication was modelled as a combination of low
pass filter and a pure time delay. This is not realistic and cannot serve as an interfacing technique in co-simulation platforms. A co-simulation platform comprising OPNET and a real-time power system simulator was developed in [5]. However, only the impact
of latency was considered. Similarly, in [6], an electric power and
communication synchronising simulator platform comprising PSCAD software (for electromagnetic transient simulation) and PSLF software (for electromechanical transient simulation) was combined with network simulator-2 (ns-2) for investigating the
impact of the presence of latency and losses on a backup protection
scheme. Furthermore, in [7], the communication network
requirements for power system protection schemes were estimated
using a communication network modelled in ns-2. The impact of
communication network delays on measurements from phasor
measurement units (PMUs) was considered in [8] using OPNET,
while a voltage-var optimisation co-simulation platform was
considered in [9]. In [10], a global event-driven co-simulation
platform comprising of a power system simulator (PSLF) and ns-2
communication network emulator was implemented for a backup
distance protection scheme. Most of the existing studies in the literature made use of over-simplified software models of communication networks, which give unrealistic results far removed from practical real-life conditions.
This paper develops a new real-time man-in-the-middle (MitM)
cyber-physical platform using synchrophasor measurements for
simultaneously investigating the interaction between the power
system domain, wide area communication network domain, and
cyber-security related activities using actual industrial-grade
equipment. The contribution of this paper is summarised as follows:
• the development of an end-to-end real-time co-simulation platform comprising the real-time digital simulator (from RTDS Technologies Inc.) integrated with an actual communication network with actual industrial-grade equipment;
• the implementation of a Linux-based real-time MitM cyber-attack on synchrophasor communication protocols using the developed co-simulation platform;
• the investigation of the communication network quality of service (QoS) and the reliability of protection/control schemes for several adverse communication network conditions, and the quantification of the maximum allowable adverse conditions;
• the use, unlike in the existing literature, of actual communication network infrastructure and communication protocols in the exchange of data between the power domain (RTDS®) and the communication network domain, rather than a library of communication network models as commonly seen in the existing literature; and
• performance analyses and the interoperability analysis of the elements in the developed real-time co-simulation platform, conducted through real-time data and control signal exchanges.
The rest of the paper is organised as follows: Section 2 describes
the communication networks in power systems. The design of the
co-simulation platform developed for this paper is presented in
Section 3, while Section 4 gives the implementation of this
platform using hardware-in-the-loop (closed-loop) power system
simulation and software-in-the-loop communication network
emulation, respectively. The experimental results obtained are
given in Section 5, and the contribution of the paper is summarised
in Section 6.
2 Communication networks in power systems
For an interconnected power system spread across a wide
geographical area, a wide area monitoring, protection and control
(WAMPAC) system is required to monitor the various regions of
the power system and initiate control/protection countermeasures
when certain indices are violated in order to prevent instabilities
and system collapse. Four main components are required in
synchrophasor-based systems. These are PMUs, phasor data
concentrators (PDCs), a communication network, and the control
centre WAMPAC applications. Typically, the synchronised
measurements from PMUs within a substation are published to the
substation PDCs using an intra-substation LAN. The PDCs collect
the synchrophasor measurements from the PMUs, and time-align
them based on their GPS time stamps. The PDCs at the various
substations of the interconnected power system then publish their
concentrated measurements to the regional PDC or superPDC
located at the central control centre using a WAN, where the
synchrophasor measurements are co-related according to their GPS
time stamps, and are applied as the input to the control centre
WAMPAC applications.
In the implementation of a centralised WAMPAC system, the
communication network infrastructure for publishing the
synchrophasor measurements from the substation PMUs/PDCs to
the control centre superPDC, and the communication of the control
signal from the control centre to the field actuating devices is
important. Four types of synchrophasor messages are defined in the
IEEE C37.118. These are the data, command, header, and
configuration messages, respectively [11]. The data, configuration, and header messages are published from a PMU/PDC (data source) to the PDC (data destination), while the command messages are published from the PDC (data destination) and subscribed to by the PMU/PDC (data source). Thus, continuous dialogues between the substations and the control centre are required in order to transfer the command, configuration, header, and data frames. It is essential that this duplex communication is served by a communication network with a guaranteed or acceptable QoS.
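All four message types share a common 14-byte frame prefix (SYNC, FRAMESIZE, IDCODE, SOC, FRACSEC) from which a subscriber identifies the frame type. A minimal parsing sketch is shown below; the field layout follows the IEEE C37.118.2 common header, while the function and dictionary names are our own illustration.

```python
import struct

# Frame types are encoded in bits 4-6 of the SYNC word (IEEE C37.118.2).
FRAME_TYPES = {0: "data", 1: "header", 2: "config-1", 3: "config-2", 4: "command"}

def parse_common_header(frame: bytes) -> dict:
    """Parse the 14-byte common prefix shared by all C37.118.2 frames."""
    sync, size, idcode, soc, fracsec = struct.unpack(">HHHII", frame[:14])
    if sync >> 8 != 0xAA:                 # every frame starts with 0xAA
        raise ValueError("not an IEEE C37.118 frame")
    return {
        "type": FRAME_TYPES.get((sync >> 4) & 0x7, "reserved"),
        "version": sync & 0xF,
        "size": size,                     # total frame length in bytes
        "idcode": idcode,                 # PMU/PDC data stream ID
        "soc": soc,                       # second-of-century timestamp
        "fracsec": fracsec & 0xFFFFFF,    # fraction-of-second count
    }
```

For example, a command frame carrying IDCODE 7 would be recognised from its SYNC word alone, before the command field itself is decoded.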
Two communication transport protocols – the transmission
control protocol (TCP) and the user datagram protocol (UDP) are
specified in the IEEE C37.118 standard for synchrophasor
measurements. The intra-substation PMU-to-PDC IEEE C37.118.2
synchrophasor data communication can be done using the TCP
since the latency within a substation LAN is small, while the data
communication between the substation PDC-to-the SuperPDC in
the control centre can be via the UDP. The UDP is suitable for the
substation-to-control centre communication because it is a
connectionless protocol with no handshaking/flow control
constraints, and does not experience packet retransmission like
with the TCP protocol. Also, less latency is experienced with the
UDP. Furthermore, the bandwidth requirement for the UDP is less
since there is a reduction in the communication overhead required.
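The connectionless publish path can be seen in a minimal loopback sketch (illustrative only: the frame bytes are placeholders, and an OS-assigned port stands in for the registered synchrophasor port):

```python
import socket

# "SuperPDC" side: listen for synchrophasor frames over UDP. No connection
# is established; frames simply arrive as datagrams.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # OS-assigned loopback port for the sketch
port = rx.getsockname()[1]

# Substation PDC side: publish with no handshake, flow control, or
# retransmission, which is what keeps UDP latency and overhead low.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame = b"\xaa\x01" + b"\x00" * 58   # 60-byte stand-in for a C37.118 data frame
tx.sendto(frame, ("127.0.0.1", port))

data, _ = rx.recvfrom(2048)
tx.close()
rx.close()
```

The absence of retransmission is also why lost datagrams must be tolerated (or detected from timestamps) by the subscribing PDC.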
Another possibility for synchrophasor data transfer is the
communication mechanism defined in the IEC 61850-90-5
technical report on synchrophasor communication [12]. This
involves the formatting and transmission of synchrophasor
measurements using the IEC 61850 data modelling, configuration,
and infrastructure framework. Typically, the UDP protocol with
multicast addressing is used. This can be implemented using the
Routed GOOSE or routed sampled values [12]. The
communication medium to use would depend on the available
resources and can be based on wired technologies (e.g. optical
fibre, power line carrier, or leased lines) or wireless technologies
(e.g. satellite, microwave link, radio link, or IP-based network).
3 Design of the co-simulation platform
The co-simulation platform developed in this paper enables real-time performance testing and interoperability testing of system-wide WAMPAC algorithms before deployment. Moreover,
investigations relating to the quantification of the maximum
allowable network latency, jitter, packet losses, and network
corruption (noise/attenuation) can be carried out through real-time
data and control signal exchanges just like in real-world systems.
Fig. 1 Layout of the developed co-simulation platform
The components of this co-simulation platform operate
simultaneously in real-time and can be functionally divided into the
following: (i) power system; (ii) communication network; (iii) control centre WAMPAC applications; and (iv) MitM cyber-attack.
These components are discussed in the following subsections.
3.1 Power system
The power system component comprises a power system model
developed using the RTDS® platform. This is made up of the
RSCAD software and the RTDS® simulator. The RSCAD software
mainly combines the RSCAD Draft module for modelling the
power system components and the RSCAD Runtime module for
running the simulations in real-time. The RTDS® rack is the
computation hardware with processor cards used for solving the
power system's network computation and the various differential
algebraic equations representing the models of the various power
system components using a simulation time-step of 50 µs.
The RTDS® simulator also contains input/output cards which
serve as the interface for hardware-in-the-loop (closed-loop) testing with external hardware devices. External PMUs and the RTDS-GTNET PMUs (soft PMUs) are interfaced to the instrument transformer models (CTs and VTs) within the RSCAD software.
The synchrophasor measurements from these PMUs are then
published over the LAN using the TCP transport protocol to the
local PDCs within the substations, and with UDP for the WAN
communication between the substation PDCs and the superPDCs
(in the control centre).
3.2 Wide area communication network emulation
3.2.1 Wide area communication network: The communication
network for synchrophasor-based applications can be divided into two: (i) intra-substation communication; and (ii) substation-to-control centre communication. For the intra-substation communication between the PMUs and the local PDC(s) within a substation, Level-1 (shown in Fig. 1) is the data acquisition level comprising sensors, PMUs, and IEDs, and their connection to
the local PDCs, local HMI, and station computer. Typically, the
measurements from the PMUs are subscribed to and co-related by
the substation PDC before being streamed across the WAN to the
control centre. The substation-to-control centre communication
involves the streaming of the collected and co-related
synchrophasors by the substation PDCs to the control centre PDC
via the WAN on Level-2 using dedicated or shared communication
infrastructure. Level-3 is the control centre communication
network connecting the superPDC, data archiver, MTUs, and
workstations with the last mile of the WAN.
The Level-2 communication network can be affected by adverse
communication network conditions such as latency, jitter, packet
losses, and corruption. In the past, communication networks were
often emulated using ns-2/ns-3, OPNET Modeler, OMNet++, and
NeSSi [5–8]. However, the methods in [5–8, 10] were implemented using a library of over-simplified communication network models which present results different from those obtainable with real-life communication networks. In this paper, actual synchrophasor data, configuration, command, and header messages are published over real communication network infrastructure using actual transport protocols. The interface for the adverse communication network investigations was implemented in Linux. Of particular importance are the netfilter and Netem frameworks provided by the Linux kernel.
Although the RTDS® simulator by itself can be used to emulate some of these adverse communication conditions, this is done internally within the RTDS® environment. This paper aims to practically emulate WAN conditions externally using actual industrial-grade communication network switches/protocols and the abovementioned Linux frameworks. Details on the emulation of the WAN conditions are given in the following subsection.
3.2.2 MitM attack: A MitM attack is a malicious cyber-attack on
the communication between two endpoints (hosts), with the
attacker secretly impersonating both hosts and gaining access to the
information being exchanged between the two hosts. It targets the
confidentiality, integrity, and availability of information. MitM
attacks can be divided into two types: (i) passive MitM attacks; and
(ii) active MitM attacks. The passive MitM attack is one in which
the attacker just eavesdrops on the information transmitted between
the hosts, while the active MitM attack is a form of attack in which
the attacker eavesdrops and also inserts/deletes/modifies the
information transmitted from one host to another.
A MitM interface was used in this paper as a transparent proxy (invisible to the IEEE C37.118 synchrophasor clients and servers) which intercepts the synchrophasor communication from the IEEE C37.118 servers (substation PMUs/PDCs), passes it through the adverse WAN conditions, and then forwards the synchrophasor measurements to the control centre IEEE C37.118 clients (superPDC). Thus, the MitM breaks the normal private client-server connection into two. The first connection is between the server and the MitM attacker, while the second connection is between the client and the MitM attacker (illustrated with dashed lines in Fig. 2), all without their knowledge, by impersonating and spoofing the communication between the PMUs/PDCs (IEEE C37.118 servers) and the control centre superPDC (IEEE C37.118 clients).
Fig. 2 Man-in-the-middle attack for the developed co-simulation platform
3.3 WAMPAC applications
3.3.1 Wide area monitoring system: A wide area monitoring
algorithm proposed by the authors in [13] and a wide area
protection/control algorithm [14] were implemented in this paper
for voltage stability assessment and wide area protection/control.
An adaptive weighted-summation algorithm for voltage stability
assessment, identification of the critical areas in the system, and
the prediction of the system's margin to voltage collapse for a
group of generators in a large interconnected power system is given
as [13]:
vcaRVSA_rk = Σ_{i=1}^{n_r} w_ik RVSA_ik,  r = 1, …, N;  k = 0, 1, 2, …  (1)

where w_ik are the individual generator weights, RVSA_ik are the computed generator-derived indices, and i denotes the ith generator within a reactive power reserve basin (RPRB). vcaRVSA_rk is the weighted summation of the RVSA indices in the rth RPRB, n_r is the total number of generators in the rth RPRB, and N is the total number of RPRBs. The RPRB is a group of coherent generators providing reactive power support/voltage control to a group of load buses with a similar voltage collapse problem. This group of load buses is referred to as a voltage control area (VCA).

The weight of the ith generator is calculated on the basis of the real-time measurements from the PMUs as:

w_ik = (Q_gcmax,i − Q_g,ki) / Σ_{i=1}^{n_r} (Q_gcmax,i − Q_g,ki),  with Σ_{i=1}^{n_r} w_ik = 1,  k = 0, 1, 2, …  (2)

where Q_gcmax,i is the maximum reactive power of the ith generator at the voltage collapse point, and Q_g,ki is the reactive power of the ith generator at the kth operating point. If the generator reactive power reserve is used, the RVSA_ik index is replaced by the RVSAQ_ik index, computed as:

RVSAQ_ik = ((Q_gcmax,i − Q_g,ki) / Q_gcmax,i) × 100%  (3)

The vcaRVSA index in (1) is used in the prediction of the operating state of the power system, the prediction of the system's percentage margin to voltage collapse, and the identification of the voltage-weak areas of the power system. The RVSA index ranges from 100 to 0%; the system collapses when it is 0%.

3.3.2 Wide area protection and control system: Protection and/or control countermeasures against emergency conditions resulting from severe system disturbances are required in order to restore a system to an acceptable operating state. Since undervoltage load shedding (UVLS) is an extreme countermeasure of last resort, it is used in this paper with the assumption that other countermeasures have failed to mitigate an impending system collapse, thus justifying its use.

The relationship between the change in the total reactive power output ΔQ_gT,k and the change in the total reactive power load demand ΔQ_L,k can be derived for a system condition where the reactive power demand is met by the reactive power sources as:

ΔQ_L,k = Σ_{l=1}^{N_l} ΔQ_Ll,k = ΔQ_gT,k,  k = 0, 1, 2, …

To restore the system to the initial pre-contingency operating condition, or to an acceptable system operating point after a disturbance, the amount of load to shed is formulated based on the severity of the prevailing disturbance as [14]:

Q_shed,k = β_k Σ_{j=1}^{N_g} ΔQ_gj,k − Σ_{l=1}^{N_l} ΔQ_Ll,k,  with Σ_{l=1}^{N_l} ΔQ_Ll,k = 0,  k = 0, 1, 2, …  (4)

where ΔQ_Ll,k is the additional reactive power demand by the lth load, β_k is a load factor relating ΔQ_gT,k to ΔQ_L,k, N_g is the number of generators in the system, and N_l is the number of load buses that can be shed.

The total MVAr deficit ΔQ_gT,k from all the synchronous machines at the kth operating time is approximated as follows:

ΔQ_gT,k = Σ_{j=1}^{N_g} ΔQ_gj,k = Σ_{j=1}^{N_g} (Q_gj,k − Q_gj,0),  k = 0, 1, 2, …

Q_gj,k = −|V_gj|²/X_s + √(|V_gj|² I²_fdj,k − P²_gj,k),  j = 1, …, N_g

Q_gj,0 = −|V_gj|²/X_s + √(|V_gj|² I²_fdj,0 − P²_gj,0),  j = 1, …, N_g

where |V_gj| is the terminal voltage at the jth synchronous machine, X_s is the synchronous reactance of the synchronous machine, and I_fdj,k and P_gj,k are the field current and the real power of the jth synchronous machine at the kth operating time, respectively. Also, Q_gj,0, I_fdj,0, and P_gj,0 are the reactive power, field current, and real power output of the jth synchronous machine at the initial operating set point.

The amount of the real power (MW) load corresponding to the reactive power shed is obtained as:

P_shed,k = Σ_{j=1}^{N_g} ΔQ_gj,k / tan(cos⁻¹(power factor_jk)),  k = 0, 1, 2, …

The weighted voltage deviation at the pth load bus, used to apportion the shed amount within a VCA, is:

w_rΔVBp,k = |ΔV_rBp,k| / Σ_{p=1}^{N_br} |ΔV_rBp,k| = (|V_rp,0| − |V_rp,k|) / Σ_{p=1}^{N_br} (|V_rp,0| − |V_rp,k|)

where N_br is the number of load buses in the rth VCA, w_rΔVBp,k and |ΔV_rBp,k| are the weighted voltage deviation and the bus voltage phasor magnitude deviation at the pth bus at the operating time k, and |V_rp,0| is the reference voltage at an initial steady-state condition.

Fig. 3 Topology of the 10-bus equivalent test system with PMU locations
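The weighted-summation index in (1), the PMU-derived weights, and the proportional allocation of the total shed amount among VCAs can be sketched numerically as follows (our own minimal sketch with illustrative MVAr values; function names are not from the paper):

```python
import numpy as np

def weights(q_gcmax, q_gk):
    """Generator weights w_ik from PMU-derived reactive power reserves."""
    reserve = np.asarray(q_gcmax, float) - np.asarray(q_gk, float)
    return reserve / reserve.sum()          # weights sum to 1 by construction

def rvsaq(q_gcmax, q_gk):
    """RVSAQ_ik: per-generator reactive reserve margin in percent."""
    q_gcmax = np.asarray(q_gcmax, float)
    return (q_gcmax - np.asarray(q_gk, float)) / q_gcmax * 100.0

def vca_rvsa(q_gcmax, q_gk):
    """Weighted-summation vcaRVSA index, eq. (1): 100% = full margin, 0% = collapse."""
    return float(weights(q_gcmax, q_gk) @ rvsaq(q_gcmax, q_gk))

def vca_shares(vca_indices, q_shed_total):
    """Split the total MVAr to shed among VCAs in proportion to (100 - vcaRVSA_r)."""
    v = np.asarray(vca_indices, float)
    return (100.0 - v) / (100.0 * v.size - v.sum()) * q_shed_total
```

For two generators with reserves of 50 and 150 MVAr, the weights become 0.25 and 0.75, so the basin index is dominated by the machine holding the larger reserve, which is the intent of the weighting.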
The total amount of load to shed obtained in (4) above is distributed among the VCAs using the adaptive weighted-summation vcaRVSA index. The amount of load to be shed in the rth VCA is:

ΔQ_shed,VCAr,k = ((100 − vcaRVSA_rk) / (100N − Σ_{r=1}^{N} vcaRVSA_rk)) × Q_shed,k,  r = 1, …, N;  k = 0, 1, 2, …

The amount of load to shed ΔQ_shed,rBp at the pth load bus within the rth VCA is:

ΔQ_shed,rBp = w_rΔVBp,k × ΔQ_shed,VCAr,k

In each respective VCA, the load buses to shed are ranked in descending order based on their respective voltage deviation/dip.

4 Implementation of the co-simulation platform
In this section, the experimental setup of the co-simulation platform developed at the Centre for Substation Automation and Energy Management Systems at the Cape Peninsula University of Technology is described. This comprises the hardware-in-the-loop RTDS®, the communication network, the computation platform for the WAMPAC applications, and the MitM interface. The various aspects of this implementation are discussed below.

4.1 Test system
The 10-bus multi-machine equivalent network [15] shown in Fig. 3 was modelled using the RSCAD software. It is made up of three generators supplying a total of 6655 MW of load at load level-1, with generators G1 and G2 injecting a total of 5717 MW across five 200 km, 500 kV transmission lines. Generator G3 supplies the rest of the load demand.

4.2 PSLF and communication network emulator
4.2.1 Real-time power system hardware-in-the-loop
simulations: The hardware-in-the-loop testbed implemented using
industrial-grade equipment is shown in Fig. 4. This comprises
the RTDS® simulator (1), analogue output amplifiers (2), GPS
satellite clock (3), RTDS-GTNET-PMU (4), PMUs (5), PDCs,
synchrophasor vector processor (SVP) SEL 3378 (6), and
substation communication network switches (7) as indicated in
Fig. 4b of the developed laboratory-scale testbed. The power
system components/models are compiled in the RSCAD Draft
module, while real-time simulations are carried out using the
RTDS® simulator and the RSCAD Runtime module.
The RSCAD Runtime module (running on a computer workstation) communicates directly with the RTDS® simulator in real-time using the giga transceiver workstation interface card of the RTDS®.
Fig. 4 Lab-scale architecture and implementation of the WAMPAC testbed
(a) Framework architecture of the real-time synchrophasor-based WAMPAC testbed, (b) Pictorial view of the laboratory equipment used
Table 1 PMU configuration parameters
PMU configuration            Setting
performance class            P class
configuration frame format   config-2 (CFG-2)
reporting rate               60 fps (up to 240 fps possible)
phasor output format         real and polar
phasor output                positive sequence V & I
Through the RSCAD Runtime, graphical plot updates and user
defined events can be simulated. The measurements from the
GTNET-PMU (P-class PMU) and the external PMUs are published
as positive sequence phasors (real and polar formats) with a
reporting rate of 60 fps. Synchrophasor measurements are streamed
onto the network from: (i) the GTNET-PMU card of the RTDS®
simulator; and (ii) the external PMUs interfaced to the RTDS® via
the analogue output amplifiers. Time synchronisation was provided
using the IRIG-B format obtained from the GPS satellite clock.
The measurements from the generators and load buses of interest
were streamed as IEEE C37.118 synchrophasor measurements onto
the substation Ethernet LAN using the TCP transport protocol,
while the SEL-3378 SVP served as a substation PDC used in
communicating with the control centre superPDC using the UDP
transport protocol. The basic configuration settings of the PMUs
are given in Table 1. The control centre WAMPAC applications are
executed using the IEC 61131-3 programme organisation units
Table 2 Emulated WAN conditions
Network condition      Emulated value
latency                (100 : 50 : 1000) ms
jitter                 10% of latency
packet loss

Table 3 Analysis of a substation LAN
Network parameter      Measured value
latency                0.1683 ms
latency + jitter       0.1781 ms
loss of packet         0
available bandwidth    51.3418 Mbps
4.2.2 Communication network and MitM attack: Typically, a 10/100 Mbps Ethernet is adequate for intra-substation communication, while a 1 Gbps Ethernet backbone can be used for the substation-to-control centre communication. The communication network for the co-simulation testbed was implemented using industrial-grade substation communication network switches. The adverse communication network condition permissible, either due to bandwidth limitation or caused by cyber-attacks, is quantified in order to test the performance of the WAMPAC applications before deployment.
The MitM attack is implemented using Kali Linux, an open-source Debian-derived Linux distribution. As mentioned in the preceding subsection, the MitM attacker secretly inserts itself between the substation PMUs/PDCs and the control centre superPDC communication. The steps for the execution of the MitM attack are given below:
• Step 1: In Linux, initialise Ettercap and define the Ethernet interface of the cyber-attacker machine to use for the MitM attack.
• Step 2: Scan the communication network and identify the IP addresses of the IEEE C37.118 servers and clients to attack on the network.
• Step 3: Commence sniffing on the specified Ethernet interface.
• Step 4: Initiate Address Resolution Protocol (ARP) spoofing (ARP poisoning).
• Step 5: Activate IP forwarding to start forwarding the messages from the IEEE C37.118 servers to the clients via the MitM attacker.
• Step 6: Emulate the adverse wide area communication network conditions using the Netem component in Linux.
Network devices use the ARP to resolve network layer IP addresses to link layer MAC addresses. With ARP poisoning, the MitM attacker sends a fake ARP message to the IEEE C37.118 servers and clients; ARP poisoning is the common precursor to cyber-attacks such as denial-of-service, MitM, and session hijacking attacks. This enables the attacker to associate its MAC address with the IP address of the IEEE C37.118 servers or clients. Thus, all the messages from the IEEE C37.118 servers to the IEEE C37.118 clients will be routed through the MitM attacker acting as a transparent proxy server for the duplex traffic between the IEEE C37.118 client (superPDC) and the IEEE C37.118 servers (substation PDCs). The emulated adverse communication network conditions are introduced into the communication network via the transparent proxy server (indicated by the arrow-head dashed lines in Figs. 1 and 2).
At the control centre, the SuperPDC collects and time-aligns the
measurements from geographically-dispersed stations according to
their time-stamps. These time-aligned measurements are streamed
and subscribed to by a computation platform for real-time
computation of the elements of the WAMPAC applications. The
peculiarity of the MitM cyber-attack implemented is that it does
not require prior knowledge of the IP addresses or MAC addresses
of the IEEE C37.118 synchrophasor clients and servers.
IET Gener. Transm. Distrib., 2017, Vol. 11 Iss. 12, pp. 3019-3029
© The Institution of Engineering and Technology 2017
Moreover,
the attack is covert, and the clients and servers are unaware of the
presence of the attacker. Furthermore, the MitM attacker does not
necessarily have to be inside the communication network, the
attack could be launched from an external network. In addition, the
MitM attack can modify the synchrophasor measurements being
streamed from the IEEE C37.118 servers to the clients.
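The time-alignment performed by the superPDC can be sketched as follows. This is a simplified illustration: class and station names are hypothetical, and a production PDC would also enforce a wait-time before releasing or discarding incomplete sets:

```python
from collections import defaultdict

class SuperPDC:
    """Minimal time-alignment sketch: buffers frames by timestamp and
    releases a complete, time-aligned set once every configured
    substation PDC has reported for that timestamp."""

    def __init__(self, stations):
        self.stations = set(stations)
        self.buffer = defaultdict(dict)  # timestamp -> {station: frame}

    def on_frame(self, station, timestamp, frame):
        self.buffer[timestamp][station] = frame
        if set(self.buffer[timestamp]) == self.stations:
            return self.buffer.pop(timestamp)  # aligned set, ready to stream
        return None                            # still waiting on some PDCs

pdc = SuperPDC(["sub_A", "sub_B"])
assert pdc.on_frame("sub_A", 100, {"V8": 0.98}) is None
aligned = pdc.on_frame("sub_B", 100, {"V11": 0.95})
print(aligned)  # {'sub_A': {'V8': 0.98}, 'sub_B': {'V11': 0.95}}
```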
Table 2 gives the emulated WAN conditions considered in this
paper. These WAN conditions could result in the failure of the
WAMPAC schemes during emergency conditions. The
experimental results obtained for the investigations carried out on
the impact of network delays (latency), jitter, packet losses, and
network corruption caused by noise or attenuation are presented
and discussed in the next section.
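The Netem impairments listed in Table 2 can be approximated per packet as follows. This is a simplified sketch of Netem's per-packet behaviour, not its kernel implementation, and the parameter names are illustrative:

```python
import random

def emulate_link(packets, delay_ms, jitter_pct, loss_pct, corrupt_pct, seed=1):
    """Apply netem-style impairments to a list of packets: each surviving
    packet is assigned a randomised latency; lost packets are dropped
    and corrupted ones flagged."""
    rng = random.Random(seed)
    out = []
    for pkt in packets:
        if rng.random() * 100 < loss_pct:
            continue  # packet lost
        jitter = delay_ms * jitter_pct / 100.0
        latency = delay_ms + rng.uniform(-jitter, jitter)
        corrupted = rng.random() * 100 < corrupt_pct
        out.append((pkt, latency, corrupted))
    return out

# Emulate the maximum tolerable conditions found in this study:
delivered = emulate_link(list(range(1000)), delay_ms=500, jitter_pct=10,
                         loss_pct=0.1, corrupt_pct=1.0)
print(len(delivered))  # count of delivered packets
```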
5 Results and discussion
Network analysis was carried out on the implemented
communication network. The lowest communication bandwidth of
about 50 Mbps was obtained between the RTDS® and the rest of
the testbed as shown in Table 3.
From Table 3, it was observed that the network is efficient with
no packet loss recorded and has a negligible delay of 168 µs. Thus,
no throughput degradation will occur under steady-state conditions.
For the substation communication LAN given in Table 3, each
PMU within a given substation is configured to stream two positive
sequence phasors of voltage and current, four analogues, and one
binary word. This corresponds to a PMU data frame of 60 bytes
(calculation is shown in the appendix). Since the TCP protocol is
used for intra-substation communication with a 60 fps reporting
rate and a 50 Mbps network, the total number of PMUs that can be
supported is calculated as 853 PMUs using the following equation.
Number of PMUs = Available bandwidth / Required bandwidth per PMU
This implies that the communication network is adequate for the
intended traffic and would not experience any degradation. Thus, it
was necessary to artificially create some adverse communication
network conditions. The results obtained in the validation of the
platform and the evaluation of the WAMPAC algorithms with and
without adverse communication network conditions are presented
as follows.
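The sizing argument above can be reproduced numerically, with the frame and overhead sizes taken from the appendix calculation:

```python
# Reproduces the PMU sizing calculation from the appendix:
FRAME_BYTES = 60 + 62   # PMU data frame + TCP/IP overhead (bytes)
REPORT_RATE = 60        # frames per second
LINK_BPS = 50_000_000   # lowest measured link bandwidth, 50 Mbps

bw_per_pmu = FRAME_BYTES * 8 * REPORT_RATE  # bits per second per PMU
max_pmus = LINK_BPS // bw_per_pmu
print(bw_per_pmu, max_pmus)  # 58560 853
```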
5.1 Case study 1
An N − 1 line contingency on the 10-bus multi-machine test system
with and without the emulation of the network delay parameter
given in Table 2 was simulated in case study 1. Fig. 5 shows the
plots of the synchrophasor voltage at bus-8 obtained without any
network latency, and for the emulated network latencies of 250 to
750 ms with a jitter of 10%. From Fig. 5b, it can be seen that the
network latencies ≥800 ms had adverse effects on the
synchrophasor measurements. This is demonstrated by the
increased losses in the synchrophasor measurements as indicated
by the highlighted dropped measurements.
5.2 Case study 2
Fig. 6a shows the impact of the emulated packet losses and
network corruption from noise using the same contingency given in
case study 1. From the results obtained, packet losses of up to
2.5% did not have any adverse effect on the synchrophasor
measurements published to the control centre using the emulated
WAN conditions. However, packet losses greater than 2.5% had a
greater impact as indicated by the increase in the number of
measurements dropped. Similarly, it was observed that corruption
up to 1.0% was acceptable and did not have any adverse effect on
the synchrophasors published from the substations to the control
centre as shown in Fig. 6b. From Fig. 6c, it was observed that for a
latency of 500 ms, the synchrophasor measurements could only
tolerate a packet loss of about 0.1%. Beyond a packet loss of 0.1%
and a latency of 500 ms, the lost packets increased and the network
was severely degraded. This implies that the communication
network implemented for the WAMPAC applications presented in
Section 3 could only support a latency of 500 ms with ±10% jitter,
0.1% packet loss, and 1.0% random noise simultaneously; beyond
these values, the network degraded to an unacceptable level.
5.3 Case study 3
Fig. 7 gives the results obtained for the investigation of the
interoperability and ability to exchange data between the RTDS®
used for the hardware-in-the-loop power system simulations and
the emulated wide area communication network for an N − 2 line
contingency that required emergency control using undervoltage
load shedding.
Fig. 7a shows the bus voltages without emergency control and
without the emulation of any adverse WAN condition. From
Figs. 7b and c, it can be seen that the UVLS control signal was lost
as a result of the emulated adverse WAN conditions. Thus, the
UVLS control signal required to prevent the system from voltage
collapse was not received by the field actuation elements in order
to effect the undervoltage load shedding. This resulted in a voltage
collapse condition as shown in Fig. 7c. When the maximum
allowable network conditions obtained from the studies carried out
for the WAMPAC applications were used, the UVLS control signal
was unaffected by network conditions with a latency of 500 ms,
0.1% packet loss, and 1.0% noise as shown in Fig. 8. The wide
area vcaRVSA index (Fig. 8a) correctly gave an indication of the
system's margin to voltage collapse for the simulated scenario. The
UVLS control signal (Fig. 8b) required for the UVLS scheme was
timely received, and voltage collapse was successfully averted and
the system recovery is shown in Fig. 8c.
5.4 Discussions
In this paper, the RTDS® simulator was interfaced to a
communication network emulator, and used for several real-time
hardware-in-the-loop simulations. The ability to exchange data in
real-time among the external devices and over a communication
network interface was verified. Moreover, a MitM cyber-attack
was used in introducing adverse communication network
conditions into the communication network implemented for the
lab-scale testbed. From the results obtained for case study 1, it was
observed that a delay greater than 800 ms would severely degrade
the throughput of the implemented communication network. Thus,
if cyber-security applications such as authentication and encryption are
used in this system, they should introduce a total delay of less than
750 ms. Similarly, packet losses of up to 2.5% and corruption of up to
1% were shown to be acceptable in case study 2.
However, a combination of these adverse communication network
conditions reduced the throughput of the network. As
shown in Fig. 6c, only a latency of 500 ms could be tolerated
simultaneously with packet losses of 0.1% and network noise of
1%. The impact of the MitM cyber-attack and the adverse network
conditions could result in delays in measurements, packet losses,
and the loss of the control signals designed for mitigating power
system instability as shown in case study 3. It should be noted that
the maximum permissible adverse communication network
conditions presented are specifically applicable to the testbed and
WAMPAC applications implemented in this paper, and would be
different for other WAMPAC applications or communication
network infrastructure.
To mitigate the MitM cyber-attack initiated in this paper, a
detection algorithm based on the threshold of the maximum typical
delay or adverse conditions in a communication network can be
implemented. Once the prevailing condition exceeds these
quantified values, the detector will trigger an alarm, initiate
logging, bad data or missing data algorithms, or other pre-planned
countermeasures against cyber-attack. Also, the MitM cyber-attack
can be detected using access control whitelists. These whitelists
can be implemented in the datalink layer using the source and
destination MAC addresses, source and destination IP addresses in
the network layer, and the source and destination ports in the
transport layer, respectively. If access is requested by a host whose
MAC or IP addresses or ports are not in the corresponding
whitelist, alarms and access blocking can be issued. Furthermore,
public key encryption can be used to combat MitM attacks. This
would involve the exchange of public keys between the client and
the server. In addition, MitM cyber-attacks can be prevented in
synchrophasor-based applications by retaining the IP and MAC
addresses of legitimate devices, and by disabling unused ports on
Ethernet switches.
Fig. 5 Case study 1 for various WAN latencies
(a) Network latency of 0–750 ms for case study 1, (b) Network latency of 800–1000 ms for case study 1
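The delay-threshold and whitelist countermeasures proposed in this section can be sketched as follows; the threshold value, field names, and addresses are illustrative assumptions:

```python
def check_frame(arrival_time, frame_timestamp, src_mac, src_ip,
                mac_whitelist, ip_whitelist, max_delay_s=0.5):
    """Return a list of alarms for one received synchrophasor frame,
    combining the delay-threshold and whitelist checks."""
    alarms = []
    if arrival_time - frame_timestamp > max_delay_s:
        alarms.append("DELAY_EXCEEDED")  # possible MitM buffering/forwarding
    if src_mac not in mac_whitelist:
        alarms.append("MAC_NOT_WHITELISTED")
    if src_ip not in ip_whitelist:
        alarms.append("IP_NOT_WHITELISTED")
    return alarms

macs = {"00:30:a7:01:02:03"}   # legitimate substation PDC MAC (illustrative)
ips = {"10.0.0.21"}            # legitimate substation PDC IP (illustrative)

# A frame arriving 0.6 s after its GPS timestamp, from an unknown MAC:
print(check_frame(12.80, 12.20, "aa:bb:cc:dd:ee:ff", "10.0.0.21", macs, ips))
# ['DELAY_EXCEEDED', 'MAC_NOT_WHITELISTED']
```

On an alarm, the superPDC could trigger logging, invoke the bad-data or missing-data algorithms, or block the offending host.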
6 Conclusion
CPSs for investigating the interaction between the power system
components, the communication infrastructures, and the various
applications within the smart grid are often desired because of the
unavailability of actual real-world data from smart grids. The
various power system simulation software and the communication
network emulators in existence are often difficult to integrate.
Moreover, over-simplified communication network software
models are commonly used. In this paper, a co-simulation platform
combining power systems, communication networks, and control
centre WAMPAC applications was developed using actual
industrial-grade equipment. The impact of various adverse
communication network conditions was investigated using a MitM
cyber-attack on the communication between the IEEE C37.118
clients and servers. The use of a co-simulation platform gives a
more realistic option for performance testing and analyses of the
interaction between the power system, communication network,
and the various applications deployed in smart grids, thus
allowing issues to be detected before these applications are rolled
out.
The analysis of the experimental results showed a tolerance for
latency up to 750 ms. Similarly, the system tolerated packet
losses of up to 2.5% and network corruption of up to 1%. However, a
simultaneous combination of latency, packet losses, and network
corruption was observed to be very severe, resulting in the failure
of the WAMPAC applications. These adverse communication
conditions could affect the applications for network management,
monitoring, protection and control in the smart grid, and should be
considered when designing such WAMPAC applications. It should
be noted that the maximum permissible adverse communication
network conditions obtained above are specifically for the
communication network and WAMPAC applications considered in
this paper, and would be different for other communication
networks or applications. The insights provided by this paper in the
development of a real-time co-simulation platform are highly
beneficial, and can be used as guidance in the design,
implementation, and testing of WAMPAC applications, and in the
planning/evaluation of the WAN between the substations and the
control centre of the smart grid.
Fig. 6 Case study 2 for various adverse WAN conditions
(a) Bus 8 voltages for various packet losses, (b) Bus 8 voltages for various corruption levels, (c) Bus 8 voltages for various combinations of adverse conditions
Fig. 7 Case study 3: Impact of latency, jitter, packet loss, and noise
(a) Phasor measurements of buses 8 and 11 (without latency), (b) Binary signal for case study 3 with latency (UVLS fails), (c) Phasor measurements of buses 8 and 11 with latency (UVLS fails)
Fig. 8 Case study 3: Impact of latency, jitter, packet loss, and noise
(a) vcaRVSA index for case study 3 (UVLS successful), (b) Binary signal for case study 3 (UVLS successful), (c) Phasor measurements of buses 8 and 11 (UVLS successful)
7 References
Palensky, P., Widl, E., Elsheikh, A.: ‘Simulating cyber-physical energy
systems: challenges, tools and methods’, IEEE Trans. Syst. Man Cybern. Syst.,
2014, 44, (3), pp. 318–326
Faruque, M.O., Dinavahi, V., Steurer, M., et al.: ‘Interfacing issues in multi-domain simulation tools’, IEEE Trans. Power Deliv., 2012, 27, (1), pp. 439–
Doi, H., Serizawa, Y., Tode, H., et al.: ‘Simulation study of QoS guaranteed
ATM transmission for future power system communication’, IEEE Trans.
Power Deliv., 1999, 14, (2), pp. 342–348
Taylor, C., Erickson, C.D., Martin, K.E., et al.: ‘WACS—wide-area stability
and voltage control systems: R&D and online demonstration’, Proc. IEEE,
2005, 93, (5), pp. 892–906
Zhu, K., Chenine, M., Nordstrom, L.: ‘ICT architecture impact on wide area
monitoring and control system's reliability’, IEEE Trans. Power Deliv., 2011,
26, (4), pp. 2801–2808
Hopkinson, K., Wang, X., Giovanini, R., et al.: ‘EPOCHS: a platform for
agent-based electric power and communication simulation built from
commercial off-the-shelf components’, IEEE Trans. Power Syst., 2006, 21,
(2), pp. 548–558
Kansal, P., Bose, A.: ‘Bandwidth and latency requirements for smart
transmission grid applications’, IEEE Trans. Smart Grid, 2012, 3, (3), pp.
Chenine, M., Nordstrom, L.: ‘Modeling and simulation of wide area
communication for centralized PMU-based applications’, IEEE Trans. Power
Deliv., 2011, 26, (3), pp. 1372–1380
Manbachi, M., Sadu, A., Farhangi, H., et al.: ‘Real-Time co-simulation
platform for smart grid volt-var optimization using IEC 61850’, IEEE Trans.
Ind. Inf., 2016, 12, (4), pp. 1392–1402
Lin, H., Veda, S.S., Shukla, S.S., et al.: ‘GECO: global event-driven co-simulation framework for interconnected power system and communication
network’, IEEE Trans. Smart Grid, 2012, 3, (3), pp. 1444–1456
IEEE Std C37.118.2™-2011: ‘IEEE standard for synchrophasor data transfer
for power systems’, 2011
IEC 61850-90-5:2012: ‘Communication networks and systems for power
utility automation – Part 90-5: use of IEC 61850 to transmit synchrophasor
information according to IEEE C37.118’, 2012
Adewole, A.C., Tzoneva, R.: ‘Extended synchrophasor-based online stability
assessment using synchronous generator-derived indices’, Int. Trans. Electr.
Energy Syst., 2016, 26, (9), pp. 1–22
Adewole, A.C., Tzoneva, R.: ‘Adaptive under-voltage load shedding scheme
for large interconnected smart grids based on wide area synchrophasor
measurements’, IET Gener. Transm. Distrib., 2016, 10, (8), pp. 1957–1968
Kundur, P.: ‘Power system stability and control’ (McGraw-Hill, New York, 1994)
8 Appendix
The required bandwidth per PMU is obtained as follows for a
TCP/IP-based PMU protocol:
Table 4 Data frame size calculation
Field                                              Size, bytes
SYNC, FRAMESIZE, IDCODE                            2 + 2 + 2 = 6
SOC, FRACSEC                                       4 + 4 = 8
STAT                                               2
PHASORS: positive-sequence voltage and current
(floating-point format)                            8 × PHNMRa = 8 × 2 = 16
FREQ (floating-point format)                       4
DFREQ (floating-point format)                      4
ANALOG (floating-point format)                     4 × ANNMRb = 4 × 4 = 16
DIGITAL                                            2 × DGNMRc = 2 × 1 = 2
CHK                                                2
Total PMU data frame                               60
Fields STAT to DIGITAL are repeated for the number of PMUs in the data frame.
aNumber of phasors.
bNumber of analogue values.
cNumber of digital status words.
The total PMU frame length for data transmission using the TCP
protocol is given as:
TCP frame length = PMU data frame + TCP/IP overhead
TCP frame length = 60 + 62 bytes = 122 bytes
The bandwidth required per PMU based on the calculated frame size is:
Bandwidth = Frame size (bits) × PMU reporting rate
Bandwidth = 122 × 8 × 60 bps = 58.56 kbps
Number of PMUs = 50 Mbps / 58.56 kbps ≈ 853 PMUs
The calculation for the PMU data frame (60 bytes) given in (15) is
presented in Table 4. The TCP/IP overhead is obtained as:
TCP/IP overhead = TCP overhead + IP overhead + MAC overhead
TCP/IP overhead = 24 + 20 + 18 bytes = 62 bytes
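The frame-size figures above can be checked by summing the configured IEEE C37.118.2 data-frame fields, with field sizes per Table 4 and PHNMR, ANNMR, and DGNMR as configured in this testbed:

```python
# Field sizes (bytes) of one IEEE C37.118.2 data frame as configured
# in this testbed: 2 phasors, 4 analogues, 1 digital word (Table 4).
fields = {
    "SYNC": 2, "FRAMESIZE": 2, "IDCODE": 2, "SOC": 4, "FRACSEC": 4,
    "STAT": 2,
    "PHASORS": 8 * 2,  # floating-point, PHNMR = 2
    "FREQ": 4, "DFREQ": 4,
    "ANALOG": 4 * 4,   # floating-point, ANNMR = 4
    "DIGITAL": 2 * 1,  # DGNMR = 1
    "CHK": 2,
}
data_frame = sum(fields.values())         # PMU data frame size in bytes
overhead = 24 + 20 + 18                   # TCP + IP + MAC overhead in bytes
print(data_frame, data_frame + overhead)  # 60 122
```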