Nuclear Technology, ISSN: 0029-5450 (Print), 1943-7471 (Online)

To cite this article: Eric B. Bartlett & Robert E. Uhrig (1992) Nuclear Power Plant Status Diagnostics Using an Artificial Neural Network, Nuclear Technology, 97:3, 272-281, DOI: 10.13182/NT92-A34635
NUCLEAR POWER PLANT STATUS DIAGNOSTICS USING AN ARTIFICIAL NEURAL NETWORK

NUCLEAR REACTOR SAFETY

KEYWORDS: artificial neural network, fault diagnosis, nuclear power plant safety

ERIC B. BARTLETT* and ROBERT E. UHRIG
University of Tennessee-Knoxville, Department of Nuclear Engineering, Knoxville, Tennessee 37996-2300

Received April 8, 1991
Accepted for Publication August 7, 1991
In this work, nuclear power plant operating status recognition is investigated using a self-optimizing stochastic learning algorithm artificial neural network (ANN) with dynamic node architecture learning. The objective is to train the ANN to classify selected nuclear power plant accident conditions and assess the potential for future success in this area. The network is trained on normal operating conditions as well as on potentially unsafe conditions based on nuclear power plant training simulator-generated accident scenarios. These scenarios include hot- and cold-leg loss of coolant, control rod ejection, total loss of off-site power, main steamline break, main feedwater line break, and steam generator tube leak accidents as well as the normal operating condition. Findings show that ANNs can be used to diagnose and classify nuclear power plant conditions with good results. Continued research work is indicated.
I. INTRODUCTION

Nuclear electric power generating stations require careful monitoring. Corrective actions must be applied whenever potentially unsafe conditions occur. Control of these situations requires knowledge of existing conditions as well as knowledge of the changes made during the control process. Observations of plant variables must be resolved rapidly into a concise summary of system conditions. Malfunctions and changes in plant capabilities must be identified as they occur. The diagnosis of a potentially unsafe plant condition should be quick and accurate.

*Current address: Iowa State University, Mechanical Engineering Department, Nuclear Engineering Program, Ames, Iowa 50011-2230.
The objective of the plant diagnostic system in any
potentially unsafe operating scenario is to give plant
operators and engineers sufficient time to formulate,
confirm, initiate, and perform the appropriate corrective actions. The diagnostic effort required for this
objective becomes more difficult when degraded monitoring systems give noisy, incomplete, or intermittent
data. Neural networks can help improve diagnostic capabilities under such conditions.
Another difficulty encountered during this control
effort is the resolution of uncertainties associated with
predicted plant behavior. It is impossible to predict the
exact sequence of events that takes place after a particular accident condition is initiated. Moreover, many
of the conditions that need to be recognized never actually occurred previously. These conditions must be
modeled by computer simulation. Even if simulated
plant behavior is indistinguishable from actual plant
behavior, not all possible plant failures or unsafe operating scenarios can be anticipated. Unsafe scenarios
that have not been anticipated or simulated beforehand
must also be handled.
Steps toward the exploitation of the generalization
characteristics of neural networks regarding solution of
these difficulties would be a significant contribution to
nuclear plant safety and therefore should be investigated. This paper describes the results of an inquiry
into the application of artificial neural networks
(ANNs) to nuclear power plant fault diagnoses.
Section II of this paper describes the layered network
and the architectural paradigm used in the demonstration that follows. Section III describes the methods
used and the specific problem investigated. Section IV
shows the results of this research. Section V contains the
concluding remarks.
II. NETWORK AND NODAL ARCHITECTURES

The networks used in this investigation utilize layered continuous perceptrons and a self-optimizing stochastic learning algorithm (SOSLA) described in Ref. 1. The dynamic node architecture scheme used is also described in Ref. 1. A brief review of the SOSLA is given below.

A mapping $M_{I+1}$, which can be continuous or discrete, such that

$$X_{I+1} = M_{I+1}(X_1) \; , \qquad (1)$$

is modeled by a network of layered nodes as shown in Fig. 1, where

$$X_{1,n} = [x_{1,1,n}, x_{1,2,n}, \ldots, x_{1,J(1),n}]^T \qquad (2)$$

is the input vector and

$$X_{I+1,n} = [x_{I+1,1,n}, x_{I+1,2,n}, \ldots, x_{I+1,J(I+1),n}]^T \qquad (3)$$

is the output vector, which corresponds to the output of the $I$'th layer of active (hidden or output) nodes, and $J(1)$ and $J(I+1)$ are the dimensions of the input and output vectors, respectively. Note that the input nodes are inactive in that their input is equal to their output. Also note that $n$ is the training set exemplar (input-output pattern) number. Each active node has the following generalized input-output relation, as shown diagrammatically in Fig. 2,

$$x_{i,j,n} = k1_{i,j} \left[ \frac{1}{\pi} \arctan(u_{i,j,n}) + \frac{1}{2} \right] \qquad (4a)$$

and

$$u_{i,j,n} = g1_{i,j} \sum_{k=1}^{K} w_{i,j,k} \, x_{i-1,k,n} + b1_{i,j} \; . \qquad (4b)$$

The trainable parameter sets are $\{b1_{i,j}\}$, $\{g1_{i,j}\}$, $\{k1_{i,j}\}$, and $\{w_{i,j,k}\}$. The artificial neurons used here are more general than those used in the backpropagation paradigm (Refs. 2 through 4) in that the nodal bias $\{b1_{i,j}\}$, gain $\{g1_{i,j}\}$, and activation constant $\{k1_{i,j}\}$, as well as the usual interconnection weights $\{w_{i,j,k}\}$, are trainable. Not only do these nodes have more trainable parameters, but they also use the arctangent rather than the usual sigmoid function.

The cost (energy, error, merit, objective) function used has the form

$$c(W) = \frac{1}{NJ} \sum_{n=1}^{N} \left[ \sum_{j=1}^{J} \left( x_{I+1,j,n} - xd_{I+1,j,n} \right)^2 \right]^{1/2} \; , \qquad (5)$$

where $N$ is the number of training exemplars in the training set $(\Omega_1, \Omega_{I+1})$. Note that $\{\Omega_1\}$ is a subset of all possible inputs $\{X_1\}$, and $\{\Omega_{I+1}\}$ is a subset of all correct or desired outputs $\{XD_{I+1}\}$ associated with $\{X_1\}$. The challenge is to reconstruct or approximate some desired mapping $Z_{I+1}$, such that

$$XD_{I+1} = Z_{I+1}(X_1) \; , \qquad (6)$$

from $(\Omega_1, \Omega_{I+1})$. There are, however, many solutions $M_{I+1}$ that satisfy the training set,

$$\Omega_{I+1} = M_{I+1}(\Omega_1) \; , \qquad (7)$$

none of which are necessarily the desired solution

$$XD_{I+1} = Z_{I+1}(\Omega_1) \; . \qquad (8)$$

Fig. 1. An example network showing input, hidden, and output nodes, as well as the indexing notation for the nodes, activations, and weights. Note that I = 2 in this example.

Fig. 2. An enlarged generalized node showing the signal path (inputs, weights, sum, bias, gain, arctangent transfer function, activation constant, output) as well as each trainable parameter.

TABLE I
An Outline of the SOSLA Method with DNA

1. Make two initial random parameter set guesses for each parameter set and evaluate c(W) for each.
2. Store the best parameter sets; discard the parameter sets with the largest c(W).
3. Make a small random change to each member of a parameter set and evaluate the cost function at this new time step, c^{t+1}(W). If c^{t+1}(W) < c^t(W), continue to step 4; if not, go to step 2.
4. Change the parameter selection criteria based on information gained during step 3.
5. Apply the same successful parameter changes again. If c^{t+1}(W) < c^t(W), go to step 5 and repeat a fixed number of times; if not, go to step 2.
6. Apply the algorithm, steps 1 through 5, in turn to each adaptable parameter set, {b1_{i,j}}, {g1_{i,j}}, {k1_{i,j}}, and {w_{i,j,k}}.
7. If the network learning is slow, expand the network architecture by adding a node to the most important layer.
8. If the total cost is acceptable such that c^t(W) < ε for some desired ε, then reduce the network size by deleting the least important node.
9. If the network structure oscillates about a fixed architecture, stop; otherwise go to step 2.

An outline of the SOSLA training method is given in Table I (Ref. 1). The key to the static architecture nodal optimization is step 4, where the challenge is to determine the best way to adapt the selection criteria so that the result is an increased probability of a successful selection at future times. Once the optimization problem is posed in integral form, the stochastic evaluation of the integral procedure (Ref. 5) is used toward the resolution of this challenge. The theory of Monte Carlo importance function biasing (Ref. 6) provides the optimal probability density function from which to select future estimates of the parameters being optimized. Learning is then adapted by the algorithm itself through the use of internal learning parameters that control the system dynamics by continually updating the system estimate of the optimal probability density function supplied by the theory. The dynamic learning parameters are updated during code execution as more and better information is gained about their appropriate values. The learning parameters are used by the network only during the learning phase and have no effect on the simplicity of the recall process, which is nearly identical to the recall approach of backpropagation.

The key to the dynamic node architecture (DNA) approach is steps 7 and 8 in Table I. This can be seen if one realizes that the network training procedure should seek to minimize both c(W) and the number of nodes (Refs. 7 through 9). The DNA approach can be achieved as follows. Start the network with only a few nodes. Since the network is most likely too small to learn the desired mapping, add nodes until the network learns the training set to the desired accuracy. Once this is achieved, eliminate a node that has near-zero nodal importance, thereby eliminating a nearly useless node. The importance of a node can be shown to be a function of the outputs of the other nodes in the network. If a node can be shown to have little or no dynamic effect on the output of every node to which its output is an input, then it is of little value to the network and has little importance. The total importance of node (i - 1, k) is then the sum of the changes of the outputs of the nodes in layer i with respect to changes in the output of node (i - 1, k). The importance of a network layer can be similarly defined as the sum of the importance of each node in the layer of interest. If, after altering the network architecture, the resultant error cost function is larger than desired, retrain this smaller network and repeat. The final network should give a more general implementation of the desired mapping.
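To make the training procedure concrete, the sketch below implements Eqs. (4a), (4b), and (5) together with a bare-bones version of the Table I loop. It is a minimal illustration, not the authors' code: the Monte Carlo importance-biased selection criteria of steps 2 and 4 and the importance-based pruning of step 8 are simplified away, and every function name is our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # One active layer: weights {w}, biases {b1}, gains {g1}, constants {k1}.
    return [0.1 * rng.standard_normal((n_out, n_in)),
            np.zeros(n_out), np.ones(n_out), np.ones(n_out)]

def forward(layers, X):
    # Eqs. (4a) and (4b): gained weighted sum plus bias, pushed through an
    # arctangent transfer function scaled by the activation constant k1.
    for W, b1, g1, k1 in layers:
        U = g1 * (X @ W.T) + b1
        X = k1 * (np.arctan(U) / np.pi + 0.5)
    return X

def cost(layers, X, XD):
    # Eq. (5): (1/NJ) times the sum over exemplars of the
    # root-sum-square output error.
    E = forward(layers, X) - XD
    return np.sqrt((E ** 2).sum(axis=1)).mean() / XD.shape[1]

def train(layers, X, XD, sigma=0.05, iters=20000):
    # Steps 1 through 6 of Table I, reduced to stochastic hill climbing:
    # perturb one parameter set at a time, keep the change only if the
    # cost decreases, otherwise revert.
    c_best = cost(layers, X, XD)
    for _ in range(iters):
        i, p = rng.integers(len(layers)), rng.integers(4)
        old = layers[i][p].copy()
        layers[i][p] = old + sigma * rng.standard_normal(old.shape)
        c_new = cost(layers, X, XD)
        if c_new < c_best:
            c_best = c_new
        else:
            layers[i][p] = old
    return c_best

def train_dna(X, XD, eps=0.05, max_hidden=20):
    # Steps 7 through 9, grossly simplified: grow the hidden layer until
    # the training set is learned to tolerance eps; deletion of the least
    # important node is omitted here.
    for hidden in range(2, max_hidden + 1):
        layers = [init_layer(X.shape[1], hidden),
                  init_layer(hidden, XD.shape[1])]
        if train(layers, X, XD) < eps:
            break
    return layers
```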
III. METHOD OF SOLUTION

As a first step toward the realization of the rather general objectives stated in Sec. I, the following approach is taken. Nuclear power plant simulator data from the Watts Bar Nuclear Power Station are used to train the network in lieu of actual plant data (Refs. 10 and 11). The accidents analyzed and their 3-bit training codes are given in Table II. The variables used for network input are presented in Table III. The data set for each scenario contains 27 plant process variables at 0.5-s intervals for at least 250 s. Within each data set, the accident condition is preceded by a period of normal full-power operation. Seven accident scenarios plus the normal full-power steady-state operating condition are taught to an ANN using a SOSLA with the DNA paradigm.

TABLE II
Desired Network Output Layer Activation, Time of Start of Transient, and Reactor Scram Time for Each of the Trained Scenarios

                                 Desired Output     Transient    Reactor
                                 Node Activation    Start Time   Scram Time
Plant Condition                  1    2    3        (s)          (s)
Total loss of off-site power     1    1    1        20.58        29.58
Main feedwater line break        1    1    0        32.92        48.42
Main steamline break             1    0    1        46.83        57.83
Control rod ejection             1    0    0        30.08        74.08
Hot-leg loss of coolant          0    1    1        15.58        19.58
Cold-leg loss of coolant         0    1    0        14.58        19.08
Steam generator tube leak        0    0    1        31.58        411.08
Full-power normal operation      0    0    0        --           --

TABLE III
Plant Variables Used as Input to the Diagnostic Network

Variable   Description
1          Control rod shutdown bank position
2          Nuclear power level
3          Plant megawatt output
4          Volume control tank level
5          Reactor building equipment drain level
6          Containment pressure
7          Flux axial offset
8          Steam generator steam flow
9          Steam generator main steam pressure
10         Steam generator feedwater inlet flow
11         Steam generator feedwater pressure
12         Steam generator auxiliary feedwater flow
13         Steam generator water level
14         Pressurizer water level
15         Pressurizer pressure
16         Pressurizer surge line temperature
17         Reactor coolant system loop spray temperature
18         Reactor coolant system hot-leg pressure
19         Reactor coolant system cold-leg temperature
20         Reactor coolant system hot-leg temperature
21         Reactor coolant system average temperature
22         Reactor coolant system loop coolant flow
23         Reactor coolant system loop delta temperature
24         Steam generator building liquid sample monitor
25         Containment liquid effluent radiation monitor
26         Containment building lower compartment radiation monitor
27         Containment building upper compartment radiation monitor

A SOSLA network with DNA is applied to the
power plant data described in Tables II and III. The data are normalized so that a value of unity is equivalent to 100% of a variable's meter reading in the control room. Thus, all variables have values from ~0 to ~1. The training and recall data sets were reduced from 81 raw channels to the 27 plant process variables by averaging the variables associated with each of the four reactor primary coolant loops. The result of this variable reduction effort is effectively a single-loop plant. The reduction removed redundant data input to the network and shrank both the input vector and the size of the ANN.
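As an aside, the loop-averaging and normalization step can be sketched as follows. The raw channel layout, the grouping table, and the function name are assumptions for illustration, not the authors' data format.

```python
import numpy as np

def preprocess(raw, loop_groups, meter_max):
    """Collapse four-loop redundancy and normalize to meter range.

    raw         : (T, 81) array of simulator channels (hypothetical layout)
    loop_groups : list mapping each of the 27 variables to the raw-channel
                  indices of its loop instances (one index if loop independent)
    meter_max   : (27,) full-scale control room meter reading per variable
    """
    cols = [raw[:, idx].mean(axis=1) for idx in loop_groups]
    X = np.stack(cols, axis=1)
    return X / meter_max        # unity corresponds to 100% of meter reading
```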
The network is trained to recognize each condition based on a single time step of data. A single time step containing 27 variables is assumed to contain enough information to determine the plant condition. The main advantage of this approach is simplicity of execution. A single time step of data, which can be averaged over some small time interval (e.g., 1 or 2 s) to reduce noise, can be fed into the network, and the diagnosis can be accomplished in real time. A disadvantage of this approach, however, is that the temporal information is lost.

The training data are chosen in an iterative fashion. The first step is to train the network to distinguish the steady-state shutdown conditions that occur after each accident scenario. Then the network is recalled on each scenario for the entire start-to-finish time period. Data are then added to the training set for each scenario where the network is in error. Thus, the training set is successively filled with data until the desired accuracy is achieved. To better simulate actual plant conditions, uniform noise is added to the training set such that the maximum noise added is equal to 5% of the actual no-noise value of each variable. The training process consists of two distinct phases: a no-noise phase and a 5% uniform added-noise phase. The network is first trained on a no-noise training set. Then, once a reasonable error is achieved over the complete time scale, data with 5% uniform noise are added to the training set, and the network is trained on these data along with the no-noise data. The addition of the 5% noise training data produces a network that is less sensitive to noisy input. Note that only a small portion of each simulated data set is used in the training effort. Typically the network is trained on ~25 unique time steps for each accident scenario.
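The iterative selection of training data described above amounts to a simple active-learning loop. Below is a minimal sketch under stated assumptions: `train` and `recall` are hypothetical stand-ins for the SOSLA routines, and each scenario supplies a (T, 27) time series plus its 3-bit code.

```python
import numpy as np

def build_training_set(scenarios, train, recall, tol=0.25, max_rounds=10):
    # Seed with the post-accident steady-state point of every scenario.
    train_X = [x[-1] for x, code in scenarios]
    train_Y = [code for x, code in scenarios]
    for _ in range(max_rounds):
        net = train(np.array(train_X), np.array(train_Y))
        added = 0
        for x, code in scenarios:             # recall start to finish
            err = np.abs(recall(net, x) - code).max(axis=1)
            bad = np.nonzero(err > tol)[0]
            if bad.size:                      # add the worst time step
                t = bad[np.argmax(err[bad])]
                train_X.append(x[t]); train_Y.append(code); added += 1
        if added == 0:
            return net                        # desired accuracy reached
    return net

# The 5% uniform-noise phase would then augment train_X with copies such as
# x * (1 + np.random.uniform(-0.05, 0.05, x.shape)).
```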
IV. RESULTS

Figures 3 through 16 show the results of this investigation. Each of the accident scenarios has two figures associated with it. The first is a graph of the network output for each recall case with no noise. The second is a graph of each recall case with 2% added Gaussian noise. The 2% Gaussian noise cases are computer generated to add noise with a Gaussian distribution and a 2% standard deviation of the actual no-noise value. A fourth line on each graph has binary values and shows the correct, or desired, output for each active node. This line, and the associated transient time, can be identified by the right angle bend near the upper left of the graph. Note also that the reactor scram times are indicated on each graph by an asterisk on the time axis.

Fig. 3. Activation of the network output layer as a function of time in response to the total loss of off-site power accident scenario, desired response: (1,1,1), without added noise.

Fig. 4. Activation of the network output layer as a function of time in response to the total loss of off-site power accident scenario, desired response: (1,1,1), with 2% added noise.
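The 2% Gaussian recall cases can be generated as in the following one-function sketch; the routine name is ours, and the paper specifies only that the noise is Gaussian with a standard deviation of 2% of the no-noise value.

```python
import numpy as np

rng = np.random.default_rng(7)

def add_gaussian_noise(x, frac=0.02):
    # Perturb each sample by zero-mean Gaussian noise whose standard
    # deviation is frac (here 2%) of the actual no-noise value.
    return x + rng.normal(0.0, frac * np.abs(x), size=x.shape)
```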
It can be seen from the graphs that the network responds very quickly to the onset of most of the accident conditions. One exception is the steam generator tube leak. Note, however, that this is a rather slow accident sequence as the leak is only 500 gal/min. Also note that the network classifies this accident well in advance of the reactor scram, which occurs at 411 s. Another significant result is the graceful degradation of network performance with noise, as illustrated by the graphs of the 2% Gaussian cases. Table IV summarizes these results. The transient recognition times given in Table IV assume a dead zone for network output between 0.25 and 0.75. If the activation of any output node in the network falls within this interval, the network is considered to be in an undecided state. Note that the network actually responds faster to both the total loss of off-site power and the control rod ejection accidents in the 2% noise cases. This can be explained as noise that works for the diagnoses by coincidence. It also illustrates that noise in the input is not necessarily bad.

TABLE IV
Time from Start of Accident to Transient Recognition and Reactor Scram for Each of the Trained Scenarios

                                Time to Diagnose         Time to
                                Transient (s)            Reactor
Plant Condition                 No Noise    2% Noise     Scram (s)
Total loss of off-site power    6.0         5.5          9.0
Main feedwater line break       18.5        62.0         15.5
Main steamline break            29.0        66.0         11.0
Control rod ejection            20.0        19.0         44.0
Hot-leg loss of coolant         29.0        47.5         4.0
Cold-leg loss of coolant        19.5        37.5         4.5
Steam generator tube leak       86.0        113.0        379.5

Fig. 5. Activation of the network output layer as a function of time in response to the main feedwater line break accident scenario, desired response: (1,1,0), without added noise.

Fig. 6. Activation of the network output layer as a function of time in response to the main feedwater line break accident scenario, desired response: (1,1,0), with 2% added noise.

Fig. 7. Activation of the network output layer as a function of time in response to the main steamline break accident scenario, desired response: (1,0,1), without added noise.

Fig. 8. Activation of the network output layer as a function of time in response to the main steamline break accident scenario, desired response: (1,0,1), with 2% added noise.
V. CONCLUSIONS

The SOSLA network with the DNA paradigm is shown to be a successful connectionist methodology for the limited number of nuclear power plant accidents investigated. The feasibility of using ANN technology as a diagnostic tool at nuclear power plants is therefore demonstrated.

Fig. 9. Activation of the network output layer as a function of time in response to the control rod ejection accident scenario, desired response: (1,0,0), without added noise.

Fig. 10. Activation of the network output layer as a function of time in response to the control rod ejection accident scenario, desired response: (1,0,0), with 2% added noise.

Fig. 11. Activation of the network output layer as a function of time in response to the hot-leg loss-of-coolant accident scenario, desired response: (0,1,1), without added noise.

Fig. 12. Activation of the network output layer as a function of time in response to the hot-leg loss-of-coolant accident scenario, desired response: (0,1,1), with 2% added noise.

The next logical step is to investigate the possibility of diagnosing many more accident conditions and
thus make the approach usable in the context of actual
plant operations. The use of more variables and a larger
ANN would allow the training of many more accident
conditions and would therefore make the approach
more applicable and useful. It is possible that as many
as 500 to 1000 different plant variables may be needed.
These variables have yet to be determined, although
the use of ANN technology may assist in their selection.
Ultimately, the diagnostic system could be tied into the plant safety parameter display system or plant process computer, and all of the variables associated with these systems could be at the disposal of the diagnostic network. The problem of diagnosis verification and validation could be addressed by using an expert system to check the network diagnosis. An expert system that is designed only to check whether a given accident has actually occurred could be fast and simple; therefore, the process of diagnosis and verification could, together, be accomplished in real time. Alternatively, an error bound or confidence interval could be extracted from a suitably modified ANN paradigm.

The investigation of these areas could significantly improve upon the utility, under actual nuclear power plant conditions, of the approach introduced by the research described in this paper.

Fig. 13. Activation of the network output layer as a function of time in response to the cold-leg loss-of-coolant accident scenario, desired response: (0,1,0), without added noise.

Fig. 14. Activation of the network output layer as a function of time in response to the cold-leg loss-of-coolant accident scenario, desired response: (0,1,0), with 2% added noise.

Fig. 15. Activation of the network output layer as a function of time in response to the steam generator tube leak accident scenario, desired response: (0,0,1), without added noise.

Fig. 16. Activation of the network output layer as a function of time in response to the steam generator tube leak accident scenario, desired response: (0,0,1), with 2% added noise.
NOMENCLATURE

arctan(·)   = inverse tangent of (·)
b1_{i,j}    = nodal bias for node (i,j)
c(W)        = network cost function
g1_{i,j}    = nodal gain for node (i,j)
I           = number of network layers excluding the input layer
i           = network layer index
J           = number of output nodes
J(i)        = number of nodes in layer i
j           = index of nodes in layer i
K           = number of nodes in layer i - 1
k           = index of nodes in layer i - 1
k1_{i,j}    = output activation constant for node (i,j)
M_{I+1}     = network input-output mapping
N           = number of exemplars in the training set
n           = index of exemplars in the training set
t           = time or iteration number
(·)^T       = transpose of (·)
(·)^t       = (·) at time t
u_{i,j,n}   = nodal input from exemplar n for node (i,j)
W           = interconnection weight vector
w_{i,j,k}   = weight connecting node (i,j) and node (i - 1,k)
X_i         = vector of nodal outputs in layer i
X_{i,n}     = vector of nodal outputs in layer i from exemplar n
x_{i,j,n}   = output of node (i,j) from exemplar n
XD_i        = vector of desired nodal outputs in layer i
Z_i         = vector of desired network mappings to nodes in layer i

Greek

ε           = arbitrarily small real number
Ω_1         = vector of training set inputs
ω_{1,j,n}   = training set input for node (1,j) from exemplar n
Ω_{I+1}     = vector of training set outputs
ω_{I+1,j,n} = training set output for node (I + 1,j) from exemplar n

ACKNOWLEDGMENT

This work is supported by the U.S. Department of Energy under contract DE-FG07-88ER12824.

REFERENCES

1. E. B. BARTLETT, "Nuclear Power Plant Status Diagnostics Using Simulated Condensation: An Auto-Adaptive Computer Learning Technique," PhD Dissertation, University of Tennessee-Knoxville (1990).

2. R. HECHT-NIELSEN, "Theory of the Backpropagation Neural Network," Proc. IJCNN Int. Conf. Neural Networks, Vol. 1, p. 593, Washington, D.C., June 1989, IEEE TAB Neural Networks Committee, San Diego, California.

3. D. E. RUMELHART et al., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vols. 1 & 2, Massachusetts Institute of Technology Press, Cambridge, Massachusetts (1986).

4. P. J. WERBOS, "Backpropagation and Neural Control: A Review and Prospectus," Proc. IJCNN Int. Conf. Neural Networks, Vol. 1, p. 209, Washington, D.C., June 1989, IEEE TAB Neural Networks Committee, San Diego, California.

5. B. D. RIPLEY, Stochastic Simulation, John Wiley & Sons, New York (1987).

6. P. N. STEVENS, Monte Carlo Analysis, Seminar Notebook, 19th Ann. Tennessee Industries Week, University of Tennessee-Knoxville (1984).

7. T. ASH, "Dynamic Node Creation in Backpropagation Networks," Proc. IJCNN Int. Conf. Neural Networks, Vol. 2, p. 623, Washington, D.C., June 1989, IEEE TAB Neural Networks Committee, San Diego, California.

8. M. ISHIKAWA, "A Structural Learning Algorithm with Forgetting of Link Weights," Proc. IJCNN Int. Conf. Neural Networks, Vol. 2, p. 626, Washington, D.C., June 1989, IEEE TAB Neural Networks Committee, San Diego, California.

9. J. K. KRUSCHKE, "Improving Generalization in Backpropagation Networks with Distributed Bottlenecks," Proc. IJCNN Int. Conf. Neural Networks, Vol. 1, p. 443, Washington, D.C., June 1989, IEEE TAB Neural Networks Committee, San Diego, California.

10. "Watts Bar Malfunction Cause and Effects Report," Tennessee Valley Authority (1989).

11. "Watts Bar Control Room Instrumentation Report," Rev. C, Tennessee Valley Authority (1989).
Eric B. Bartlett [BS, electrical engineering, Rensselaer Polytechnic Institute (RPI), 1981; M.Eng., RPI, 1983; PhD, University of Tennessee-Knoxville, 1990] is an assistant professor of nuclear engineering in the Department of Mechanical Engineering at Iowa State University.

Robert E. Uhrig (BS, mechanical engineering, University of Illinois, 1948; MS, 1950, and PhD, 1954, theoretical and applied mechanics, Iowa State University) holds a joint (50%-50%) appointment in the nuclear engineering department at the University of Tennessee and in the Instrumentation and Control Division at Oak Ridge National Laboratory (ORNL) under the University of Tennessee/ORNL Distinguished Scientist Program. His work at both institutions concerns the application of advanced technologies to nuclear power plant systems.