Nuclear Science and Engineering
ISSN: 0029-5639 (Print), 1943-748X (Online)

To cite this article: D. Rochman & A. J. Koning (2011) How to Randomly Evaluate Nuclear Data: A New Data Adjustment Method Applied to 239Pu, Nuclear Science and Engineering, 169:1, 68-80, DOI: 10.13182/NSE10-66
Published online: 12 May 2017.
How to Randomly Evaluate Nuclear Data:
A New Data Adjustment Method Applied to 239Pu
D. Rochman* and A. J. Koning
Nuclear Research and Consultancy Group NRG
P.O. Box 25, 1755 ZG Petten, The Netherlands
Received September 14, 2010
Accepted January 25, 2011
Abstract – This paper presents a novel approach to combine Monte Carlo optimization and nuclear data
to produce an optimal adjusted nuclear data file. We first introduce the methodology, which is based on the
so-called “Total Monte Carlo” and the TALYS system. As an original procedure, not only a single nuclear
data file is produced for a given isotope but virtually an infinite number, defining probability distributions
for each nuclear quantity. Then, each of these random nuclear data libraries is used in a series of
benchmark calculations. With a goodness-of-fit estimator, a best evaluation for that benchmark set can be
selected. To apply the proposed method, the neutron-induced reactions on 239Pu are chosen. More than 600 random files of 239Pu are presented, and each of them is tested with 120 criticality benchmarks. From
this, the best performing random file is chosen and proposed as the optimum choice among the studied
random set.
I. INTRODUCTION

The evaluation of neutron-induced reactions is not a new branch of nuclear science. This field of applied research is considered quite mature, and nuclear data specialists have delivered a large number of nuclear data evaluations since the 1950s. Many well-recognized and respected nuclear data libraries exist, as for instance (and to cite only one) the U.S. ENDF/B-VII.0 library.1

As nuclear data are relevant for different kinds of applications, all countries with a large nuclear industry possess their own team(s) of nuclear data evaluators to answer their special needs. From a common historical background, these research groups share the same experimental databases and nuclear reaction theories and have a restricted number of codes/programs to produce the so-called "evaluated files." Also, as a heritage of historical segregation, nuclear data specialists are often not the same people as the application specialists. They are separated by buildings, language, education, and sometimes countries (in short, they do not share the same culture). A consequence is that application specialists often modify evaluated files to produce "adjusted" nuclear data files to better fit a selected set of integral experiments.

This approach has produced evaluations that are used worldwide, approved by safety authorities and the nuclear industry, and finally used in simulation codes for reactor design and safety assessment. For instance, the JEFF-3.1.1 nuclear data library2 was produced following the previous scheme, including an "incremental approach" (meaning minimal changes, targeted to improve a number of reference calculations, from one library version to another), and is now the reference library for the French nuclear authorities, operators, and designers. There are nevertheless a few inconveniences related to this method of work, especially in a world with higher constraints on safety, efficiency, and cost-effectiveness. The incremental approach, which is used to improve a series of benchmarks, advocates minimal changes to nuclear quantities such as cross sections. It allows one to find the closest best solution in the multidimensional nuclear data space, but there is no guarantee that this local best solution is the absolute best solution. It is in principle possible to choose a different set of nuclear data (far from a solution given by an incremental approach) and to have better agreement with the same series of benchmarks. Because of the large turnaround time of data library
creation, adoption, validation, and industrial acceptance
(once every 10 to 15 years), it can be argued that in the
European community, the recent adoption of JEFF-3.1.1
by various nuclear industries gives nuclear data evaluators time to look for a better solution and deviate from
the incremental approach. Another drawback of this approach is that nuclear data are considered as input for a
specific, well-validated, and fixed reactor code “A” designed a few decades ago. By incrementally adjusting
the inputs, the combination “new inputs and code A” is
improving its performance compared to “previous inputs and code A.” The changes in nuclear data can nevertheless be seen as correction factors for imperfections
of this code, and this becomes particularly dangerous if
the adjusted data fall outside high-quality differential
measurement uncertainties. Equally important, it does
not automatically imply that the combination “new inputs and code B” will perform better than “previous inputs and code B.”
It is obvious that a lot of evaluation knowledge exists in the current libraries. But with responsible nuclear
scientists retiring, the real understanding of the library
content is sometimes difficult to keep, even though the
basic data, like the EXFOR database, remain available
(in traditional evaluation methods, reproducibility is not seen as an asset).
Finally, as a consequence of the similarity of codes and integral experiments used worldwide, data libraries are becoming more alike. An important and positive effect of this is that there is gradual improvement of the quality of current evaluations for important data and applications (regarding both their content and format). In parallel to the global convergence of the nuclear data community, and in an effort to offer an alternative and unconventional approach, a new nuclear data evaluation procedure has recently been proposed.3 It is based on principles that were embraced by many other industries long ago: quality, automation, reproducibility, completeness, and consistency. It relies on the robust nuclear model code TALYS (Ref. 4) and on two simple ideas: any information used to create a nuclear data file is kept "forever" to be reused as necessary, and manual intervention during the library production is strictly forbidden. The spin-offs of such a new method are multiple (see Fig. 1): complete nuclear data libraries (TENDL) on an unprecedented scale,5,6 including covariance production, exact (Monte Carlo) uncertainty propagation3 for fission systems,7 fusion applications,8 or new GEN-IV reactors.9 It is even possible to "clone" an existing library (e.g., the entire ENDF/B-VII.0 library) and start further development from that point, such as filling all missing sections using TALYS, adding covariance data, etc.
As a new spin-off of this working method, this paper presents a random search for the best possible adjusted nuclear data file of 239Pu. Based on a Monte Carlo variation of the TALYS model and resonance parameters, not one but many (virtually an infinite number of) files for 239Pu are calculated and used in criticality-safety benchmarks. On the basis of a goodness-of-fit estimator, such as χ², it can be judged whether the result is better, at least from the integral point of view, than any existing traditional library. We believe that as long as experimental cross sections (and other nuclear data) are not perfectly known, it is perfectly acceptable to envisage a large number of evaluated curves to simulate our imperfect knowledge and to guide us toward the best possible choice for applications. This random variation should of course take place inside the uncertainty bands associated with each cross section. In other words, we simply use differential and integral measurements at the same time, realizing of course that one experimental data set may give more exclusive information than the other, as is done in traditional data adjustment methods.
Fig. 1. Presentation of the possible outcomes based on the TALYS system, as presented in this paper.
The working method has already been presented in a few dedicated papers (see, for instance, Refs. 3, 7, 8, and 9). It is not specific to actinides, although it should be mentioned that the main difference between an evaluation of a major actinide and a regular isotope is the amount of time spent to obtain the best possible TALYS input parameters. As mentioned previously, once these input parameters are known (together with their uncertainties), they are stored to be reused as needed. The complete schematic approach is presented in Fig. 2.
The full nuclear data file production relies on a small
number of codes and programs, automatically linked together. The output of this system is either one ENDF-6
formatted file, including covariances if needed, or a large
number of random ENDF-6 files. The central evaluation
tool is the TALYS code. A few other satellite programs
are used to complete missing information and randomize
input files. At the end of the calculation scheme, the
formatting code TEFAL produces the ENDF files.
II.A. The TALYS Code
The nuclear reaction code TALYS has been extensively described in many publications (see Refs. 4 through 10). It simulates reactions that involve neutrons, gamma rays, etc., from thermal energies up to 200 MeV. With a single run, cross sections, energy spectra, angular distributions, etc., for all open channels over the whole incident energy range are predicted. The nuclear reaction models are driven by a restricted set of parameters, such as optical model, level density, photon strength, and fission parameters, which can all be varied in a TALYS input file. All information that is required in a nuclear data file, above the resonance range, is provided by TALYS.

II.B. The TASMAN Code
TASMAN is a computer code for the production of
covariance data using results of the nuclear model code
TALYS and for automatic optimization of the TALYS
results with respect to experimental data. The essential
idea is to assume that each nuclear model (i.e., TALYS input) parameter has its own uncertainty, where often
the uncertainty distribution is assumed to have either a
Gaussian or uniform shape. Running TALYS many times,
whereby each time all elements of the input parameter
vector are randomly sampled from a distribution with a
specific width for each parameter, provides all needed
statistical information to produce a full covariance matrix. The basic objective behind the construction of TASMAN is to facilitate all this.
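The TASMAN scheme (sample the input-parameter vector many times, rerun the model, and extract a covariance matrix from the spread of the outputs) can be sketched in a few lines. The parameter names, widths, and the toy model standing in for TALYS below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical model parameters with central values and relative uncertainties
# (stand-ins for TALYS optical-model / level-density parameters).
central = np.array([45.0, 1.20, 7.5])
rel_unc = np.array([0.05, 0.10, 0.20])   # assumed 1-sigma relative widths

def toy_model(p):
    """Toy stand-in for a TALYS run: maps parameters to two 'cross sections'."""
    return np.array([p[0] * p[1], p[0] / p[2]])

# Run the model many times, each time with a randomly sampled parameter vector
n_runs = 5000
samples = rng.normal(central, rel_unc * central, size=(n_runs, 3))
outputs = np.array([toy_model(p) for p in samples])

# The spread of the outputs yields the full covariance matrix
cov = np.cov(outputs, rowvar=False)
print(cov.shape)   # one row/column per output quantity
```

The same loop, with TALYS in place of `toy_model`, is all that is needed to turn parameter uncertainties into output covariances.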
Fig. 2. Flowchart of the nuclear data file evaluation and production with the TALYS system.
TASMAN uses central value parameters and a probability distribution function. The central values were chosen to globally obtain the best fit to experimental cross sections and angular distributions (see, for instance, Ref. 11). The uncertainties on the parameters (or the widths of the distributions) are also obtained by comparison with experimental data, which are taken directly from the EXFOR database.12 The probability distribution can then be chosen as equiprobable, Normal, or other. In principle, with the least information available (no measurement, no theoretical information), the equiprobable distribution should be chosen. Otherwise, the Normal distribution is used.
An important factor in obtaining rapid statistical convergence in the Monte Carlo process is the selection of random numbers. Several tests were performed using pseudorandom numbers, quasi-random numbers (Sobol sequence), Latin hypercube random numbers, and centroidal Voronoi tessellation random numbers. As the dimension considered (the number of parameters for a TALYS calculation) is rather high (from 50 to 80), not all random number generators perform as required (covering the full parameter space as fast as possible, without repeating very similar configurations, and avoiding correlations). For the time being, the random data files are produced using the Sobol quasi-random number generator.
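As an illustration, quasi-random points for such a high-dimensional parameter space can be drawn with a Sobol sequence; this sketch uses scipy's `qmc` module, and the parameter bounds are hypothetical:

```python
import numpy as np
from scipy.stats import qmc

# 60-dimensional parameter space, as for a typical TALYS calculation
dim = 60
sampler = qmc.Sobol(d=dim, scramble=True, seed=1)
points = sampler.random(128)          # 128 quasi-random points in [0, 1)^60
                                      # (powers of 2 preserve Sobol balance)

# Scale to hypothetical parameter bounds (central value +/- 10%)
central = np.full(dim, 1.0)
lower, upper = 0.9 * central, 1.1 * central
params = qmc.scale(points, lower, upper)

print(params.shape)                   # (128, 60): one row per TALYS input file
```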
II.C. The TEFAL Code
TEFAL is a computer code for the translation of the
nuclear reaction results of TALYS, and data from other
sources if TALYS is not adequate, into ENDF-6 formatted nuclear data libraries. The basic objective behind the
construction of TEFAL is to create nuclear data files
without error-prone human interference. Hence, the idea is to first run TALYS for a projectile-target combination and a range of incident energies, and then to obtain a ready-to-use nuclear data library from the TEFAL code through processing of the TALYS results, possibly in combination with experimental data or data from existing data libraries. This procedure is completely automated, so the chance of ad hoc human errors is minimized (of course, we may still have systematic errors in the TEFAL code).
II.D. The TARES Program
The TARES program is a code to generate resonance information in the ENDF-6 format, including covariance information. It makes use of resonance parameter databases such as the EXFOR database,12 resonance parameters from other libraries (ENDF/B-VII.0) (Ref. 1), or compilations.13 ENDF-6 procedures can be selected for different R-matrix approximations, such as the multilevel Breit-Wigner or Reich-Moore formalism. The covariance information is stored either in the "regular" covariance format or in the compact format. For short-range correlations between resonance parameters, simple formulas as presented in Ref. 14 are used, based on the capture kernel. No long-range correlations are considered for now.
In the case of major actinides, resonance parameters are taken from evaluated libraries, such as ENDF/B-VII.0 or JEFF-3.1. These values are almost never given with uncertainties. In this case, uncertainties from compilations or measurements are assigned to the evaluated resonance parameters. Although not the best alternative, it nevertheless allows the combination of central values with uncertainties.

For the unresolved resonance range, an alternative to the average parameters from TALYS is to adopt parameters from existing evaluations. In the following, this solution is followed. The output of this program is a resonance file with central values (MF2), a resonance file with random resonance parameters (MF2), and two covariance files (MF32 standard and compact).
II.E. The TANES Program
TANES is a simple program to calculate the fission neutron spectrum based on the Los Alamos model.15 The original Madland-Nix16 or Los Alamos model for the calculation of prompt fission neutron characteristics (spectra and multiplicity) has been implemented in a stand-alone module. The TANES code uses this stand-alone module, combined with parameter uncertainties (on the total kinetic energy, released energy, and multichance fission probabilities), to reproduce and randomize the fission neutron spectrum. The output of this program is the central and random values for the fission neutron spectra at different incident energies (MF5) and their covariances (MF35).
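As an illustration of what randomizing a fission neutron spectrum means, the sketch below uses the simpler Watt form in place of the full Los Alamos model; the parameter values and their uncertainties are illustrative only, not those of 239Pu:

```python
import numpy as np

rng = np.random.default_rng(0)

def watt_spectrum(E, a=0.988, b=2.249):
    """Unnormalized Watt fission spectrum: chi(E) ~ exp(-E/a) * sinh(sqrt(b*E))."""
    return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

E = np.linspace(0.01, 15.0, 300)   # outgoing neutron energy grid (MeV)

# Central spectrum plus random variants obtained by sampling the model
# parameters within assumed 1-sigma uncertainties
central = watt_spectrum(E)
variants = [watt_spectrum(E, a=rng.normal(0.988, 0.02), b=rng.normal(2.249, 0.11))
            for _ in range(100)]

# Energy-dependent relative spread of the randomized spectra
spread = np.std(variants, axis=0) / central
print(len(variants), spread.shape)
```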
II.F. The TAFIS Program
TAFIS is used to calculate fission yields, prompt neutron emission from fission, and other necessary fission quantities (kinetic energy of the fission products, kinetic energy of the prompt and delayed fission neutrons, total energy released by prompt and delayed gamma rays). For fission yields, it uses the systematics of fission product yields from Wahl,17 combined with ad hoc uncertainties. It calculates the independent and cumulative fission yields at any incident energy up to 200 MeV and for different incident particles (spontaneous fission, neutrons, protons, deuterons, etc.). Empirical equations representing the systematics of fission product yields are derived from experimental data. The systematics give some insight into nuclear-structure effects on yields, and the equations allow estimation of yields from fission of any nuclide (Z = 90 to 98 and A = 230 to 252). For neutron emission, different models are used depending on the energy range, as presented in Ref. 17. The output of this program is a fission yield file with uncertainties, prompt neutron emission files for central and random values (MF1 MT452), a list of central and random fission quantities (MF1 MT458), and prompt neutron covariances (MF31).
II.G. Autotalys
Autotalys is a script that takes care of the communication between all software and packages described above
and runs the complete sequence of codes, if necessary,
for the whole nuclide chart. Many options regarding
TALYS and all other codes can be set, and it makes the
library production straightforward.
Once this set of parameters and uncertainties is found, it is kept at the start of the whole file production and can be used in future work. Examples of typical random cross sections used by MCNP are presented in Fig. 3 for fission, inelastic, elastic, and (n,2n) reactions. It can be seen that the central cross sections are very close to the ones from the ENDF/B-VII.0 library. Similar results are obtained for other important nuclear quantities such as nu-bar, resonance parameters, and the fission neutron spectrum.
With such a system, it is quite easy to understand
that if a calculation can be done once, it can also be done
a large number of times. Hence, each new calculation
can be performed with a new set of model parameters,
thus simulating uncertainties on cross sections, nu-bar,
fission neutron spectrum, and others. The general description of the method can be found in Ref. 3, with
different applications in Refs. 7, 8, and 9.
The present Total Monte Carlo methodology relies on a large number of nuclear data files for a single isotope. In each file, resonance parameters (MF2), cross sections (MF3 in ENDF terminology), angular and energy distributions (MF4 and MF5) and double-differential distributions, gamma-ray production cross sections (MF6), nu-bar (MF1), and the fission neutron spectrum (MF5) are randomly changed. This is achieved by modifying theoretical parameters for the TALYS calculations, such as the optical model, Reich-Moore, compound nucleus, direct, and preequilibrium parameters, constrained by their uncertainties. The basic approach can be summarized as follows:
1. A restricted number of isotopes for the criticality-safety benchmarks are considered.

2. For each isotope, a large number of different ENDF-6 nuclear data libraries are created (1000 to 2000) using random model and resonance parameters included in the TALYS system.

3. All ENDF-6 files are processed with the NJOY code18 to produce ACE libraries for the MCNP Monte Carlo code.19

4. For each selected benchmark,20 calculations are performed using a different set of ACE files (for each isotope of a given element) each time.
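The four-step scheme above amounts to a driver loop over random files and benchmarks. In the minimal sketch below, `process_with_njoy` and `run_mcnp_benchmark` are hypothetical stand-ins for the actual NJOY and MCNP invocations, which are external programs:

```python
import numpy as np

rng = np.random.default_rng(7)

def process_with_njoy(endf_file):
    """Hypothetical stand-in: NJOY would turn an ENDF-6 file into an ACE library."""
    return endf_file.replace(".endf", ".ace")

def run_mcnp_benchmark(ace_file, benchmark):
    """Hypothetical stand-in: MCNP would return k_eff for one benchmark."""
    return 1.0 + rng.normal(0.0, 0.005)   # dummy k_eff value

random_files = [f"pu239_rand{i:04d}.endf" for i in range(10)]
benchmarks = ["pmf1", "pmf2", "pst1-1"]   # illustrative benchmark names

# One k_eff per (random file, benchmark) combination
keff = {f: [run_mcnp_benchmark(process_with_njoy(f), b) for b in benchmarks]
        for f in random_files}

print(len(keff), len(keff["pu239_rand0000.endf"]))
```

With 630 random files and 120 benchmarks, this loop expands to the 75 600 criticality calculations reported below.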
For the Monte Carlo evaluation procedure, the most difficult part of the work is now accomplished. We have at our disposal a virtually infinite number of 239Pu files, all different and all reflecting our current imperfect knowledge. For existing libraries, the evaluation procedure is rather close to this work: Evaluators search for a set of cross sections (and other nuclear data) that gives the best comparison to differential and integral experimental data. The difference is that they use a given (educated) unique set of cross sections, whereas we use a large set of cross sections. A key to the success of the approach is absolute automation and reproducibility of the whole evaluation-benchmarking flow.
The next step is to select a set of integral data. This selection will vary from one application group to another, simply because of differences in expertise, purpose of the evaluation, and access to benchmarks. But independently of this choice, the present procedure can be applied to any number of benchmarks and any kind (criticality, dosimetry, fusion, activation, and others).

For the present study, we have selected benchmarks from the ICSBEP database.20 The benchmarks that are highly sensitive to plutonium (denoted by "pst," "pmf," "pmm," "pci," or "pmi") are selected for the random search (see Table I). We thus calculate all these benchmarks, with MCNP, for one random 239Pu library at a time.
It is important to start from good central values for the model parameters. As usual in the nuclear data evaluation process, a large amount of time is first dedicated to finding suitable model parameters that reproduce experimental data and other evaluations.

IV.A. χ² Test

As a large number of benchmarks are considered, it is easier to compare the performances of different libraries with a single number such as the χ² statistic, defined as

χ² = Σi (Ci − Ei)² ,   (1)

where

Ci = calculated value for the i'th benchmark

Ei = benchmark value.
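Reading Eq. (1) as a simple sum of squared deviations (consistent with the magnitudes reported below, since keff values are close to 1), the estimator is a one-liner; the numbers used here are illustrative:

```python
import numpy as np

def chi2(calculated, benchmark):
    """Goodness-of-fit of Eq. (1): sum over benchmarks of (C_i - E_i)^2."""
    C = np.asarray(calculated, dtype=float)
    E = np.asarray(benchmark, dtype=float)
    return float(np.sum((C - E) ** 2))

# Illustrative case: 120 benchmarks, each k_eff off by about 0.008
C = np.full(120, 1.008)
E = np.ones(120)
print(chi2(C, E))   # 120 * 0.008^2, i.e. the order of magnitude reported below
```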
Fig. 3. Examples of random cross sections for 239Pu, generated with the TALYS system and compared to ENDF/B-VII.0. Random cross sections are presented as plain lines (red in the online version).
TABLE I
List of Plutonium Benchmarks Selected for the Random Search
Depending on the nuclear data library considered, a specific value of χ² is obtained. If the benchmarks presented in Table I are used, the following χ² values are obtained:

1. for JEFF-3.1, χ² = 8.08 × 10⁻³ ± 7.2 × 10⁻⁴;

2. for ENDF/B-VII.0, χ² = 9.55 × 10⁻³ ± 7.9 × 10⁻⁴;

3. for ENDF/B-VI.8, χ² = 8.45 × 10⁻³ ± 7.2 × 10⁻⁴;

4. for JENDL-3.3, χ² = 1.31 × 10⁻² ± 1.0 × 10⁻³.
Even if in principle our current approach for 239Pu adjustment can be used for all nuclides at once (requiring a huge number of TALYS and MCNP calculations), only 239Pu is varied; the other nuclides are kept constant and equal to the JEFF-3.1 evaluations. A total of 120 benchmarks are used, as shown in Table I, and 630 random 239Pu files are produced. The total scope of this work thus involves 75 600 MCNP criticality calculations. Figures 4 and 5 present examples of keff distributions for 12 fast, intermediate, and thermal benchmarks. Similar types of distributions are obtained for the other benchmarks. These distributions are similar to those reported in Ref. 7.
Fig. 4. Calculated keff values for six fast benchmarks: pmf1, pmf2, pmf5, pmf6, pmf8, and pmf12.
IV.B. Results

Following the present evaluation method, a large number of evaluated files are produced, the number being limited by the production time (on average, the production of a single file takes about 1 to 2 h on a typical 3-GHz personal computer, while its validation with all selected benchmarks takes 12 h). Figure 6 presents the results of the benchmarks of the random files in terms of χ² as defined in Eq. (1). Each single random file is represented by a χ² value, and to compare with existing evaluations, results from other libraries are plotted as bands. The uncertainties on the dots (and the widths of the bands) are the statistical uncertainties coming from the MCNP calculations together with the benchmark uncertainties.
In Fig. 6, the results for the 630 random 239Pu libraries are presented. It is rather unconventional to visualize a library for which an isotope is represented by a set of files (corresponding to probability distributions for different types of nuclear data), and Fig. 6 is a collapsed way of looking at n random files applied to m benchmarks. As expected from a simple random approach, a large number of the files perform quite poorly compared to other libraries, but a small set (≈6% of the total number) outperforms all other traditional libraries. A different way of representing the same results is shown in Fig. 7, where the χ² values of Fig. 6 are projected onto the y-axis and counted as histograms. In Fig. 7, each random χ² (for each random 239Pu file) is represented by a step of height 1 in the histograms. The four traditional libraries have a single step at their χ² value. This distribution is not symmetric and has a large tail toward high χ² values. The four traditional libraries are in the low-χ² part of the graph, reflecting the amount of knowledge and time that have been invested in them. Again, we note that several of the random χ² values are smaller than the ones from JEFF-3.1, ENDF/B-VII.0, ENDF/B-VI.8, or JENDL-3.3.
Given this probability distribution, it is interesting to know whether χ² ≈ 0 is theoretically possible (meaning that even if its probability is small, it could be reached with enough random files). Under the hypothesis that the variables Ci of Eq. (1) are independent and normally distributed, the Pearson chi-square statistic used above follows a chi-square
Fig. 5. Calculated keff values for six fast, intermediate, and thermal benchmarks: pmf13, pci1, pst1-6, pmi2-1, pst6-1, and pst2-2.
distribution with k degrees of freedom.21 It is defined on the interval [0, +∞) and assigns a nonzero probability to χ² ≈ 0. It is then theoretically possible to "continuously" improve the agreement with a set of benchmarks by using more random files. However, we do realize that in practice it will not be possible to obtain a perfect fit for all included benchmarks simultaneously. Additionally, the variables Ci are not fully independent, and a lognormal distribution seems to better represent the probability distribution of Fig. 7 (which is also defined at χ² = 0). Nevertheless, figures like Fig. 7 are important to get an idea of how much room for improvement is left, even for conventional methods.
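The argument can be checked numerically with a toy model: if each benchmark deviation is normally distributed, the minimum χ² over a growing set of random files keeps decreasing, as expected for a statistic defined on [0, +∞). The per-benchmark spread below is an assumed value chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

k = 120          # number of benchmarks (degrees of freedom)
sigma = 0.008    # assumed per-benchmark spread of C_i - E_i

def random_file_chi2():
    """chi2 of one toy 'random file': sum of k squared normal deviations."""
    return float(np.sum(rng.normal(0.0, sigma, k) ** 2))

# The smallest observed chi2 can only decrease as more files are drawn
samples = [random_file_chi2() for _ in range(10000)]
for n in (10, 100, 10000):
    print(n, min(samples[:n]))
```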
This type of approach has at least two important applications: (a) the propagation of nuclear data uncertainties to large-scale system quantities (as was demonstrated in previous papers by the same authors) and (b) the random search for the adjusted nuclear data file producing a continuously improved agreement with a selected set of benchmarks.
V.A. Optimum Pearson's χ² Value
As the first point has been extensively presented in other references, we will focus on the second one. To present the method, we have arbitrarily selected a number of criticality benchmarks (120), with equal weight given to all of them. It can be foreseen that this selection will not suit all nuclear data needs and that another evaluator would make a different choice. However, the present method is independent of the selection of tests, and the results presented in the following may vary if another choice is made. From Figs. 6 and 7, it can be seen that 40 to 50 random 239Pu files give a smaller χ² than any conventional library. The smallest χ² is obtained with random run 307, giving a χ² of 4.80 × 10⁻³ ± 5.2 × 10⁻⁴. To illustrate the performance of run 307, the benchmark results for this random file are presented in Fig. 8.
Fig. 6. χ² values for random 239Pu files (dots), compared to χ² values for existing libraries (bands). One can see that for the selected benchmarks, the 239Pu evaluation from the JEFF-3.1 library performs better than the other conventional libraries, whereas ≈6% of our random 239Pu files perform better than any other library.

Fig. 7. χ² values for each of the random 239Pu files per bin, compared to χ² values for JEFF-3.1, ENDF/B-VII.0, and JENDL-3.3. This figure is the projection onto the y-axis of Fig. 6. Note that the x-axis for the large plot is in log scale. The inset is the same plot with the x-axis in linear scale.
V.B. Adjustment of 239Pu Nuclear Data
Once the choice of the best random file is made, the most probable values for the nuclear data (cross sections and others) are set. For the selected file, the cross sections, nu-bar, angular distributions, etc., need to be in agreement with differential data. It is also interesting to check whether they deviate from the conventional evaluations. Figures 9 to 12 present different nuclear quantities (cross sections, nu-bar, fission neutron spectra) for run 307, compared with differential data and with other evaluations.

Fig. 8. Benchmark results for the best random file (run 307), compared to the benchmark results with the JEFF-3.1 library.

Fig. 9. 239Pu nu-bar for the ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3 libraries compared with the present adjusted file.

Fig. 10. 239Pu fission neutron spectrum at thermal energy and at 14 MeV for the ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3 libraries compared with the present adjusted file.
In cases where measurements exist, almost all evaluations are within the experimental uncertainties. In the case of the capture cross section, our evaluation and the one of ENDF/B-VII.0 are lower than the experimental data above 1 MeV, but there is no strong evidence (based on differential measurements) that this cross section should be higher. For other nuclear quantities, even though the evaluated data of run 307 do not strongly deviate from other evaluations, the changes are significant enough to improve the agreement with the integral benchmarks.

Fig. 11. 239Pu fission cross section in the thermal and fast range for the ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3 libraries compared with the present adjusted file.

Fig. 12. 239Pu capture cross section in the fast range for the ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3 libraries compared with the present adjusted file.
To better understand which parts of the nuclear data file play a critical role, a sensitivity study is necessary. Together with the current approach, we have developed a sensitivity method based on the Monte Carlo adjustments. Along with the full random files, partial random files are also being produced, in which only parts are changed (nu-bar, resonance parameters, inelastic cross sections, etc.) and the rest of the file is kept unchanged and equal to the file with unperturbed model parameters. By benchmarking these partial random files, sensitivities to nu-bar, cross sections, or the fission neutron spectrum can be obtained. Although more exact than traditional sensitivity approaches based on perturbation theory, this sensitivity method has the principal drawback that (for the time being) the needed computational resources are large. This method was successfully applied to the study of a few criticality benchmarks,22 and we plan to scale up this type of study to more benchmarks.
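The partial-randomization idea amounts to a variance decomposition, which can be sketched with a mock benchmark; `mock_benchmark` and all numbers below are hypothetical stand-ins for a full MCNP calculation:

```python
import numpy as np

rng = np.random.default_rng(11)

def mock_benchmark(nubar, fission_xs):
    """Hypothetical stand-in: k_eff as a smooth function of two data components."""
    return 1.0 + 0.9 * (nubar - 2.88) + 0.3 * (fission_xs - 1.0)

n = 2000
nubar0, xs0 = 2.88, 1.0
nubar_s = rng.normal(nubar0, 0.01, n)   # assumed 1-sigma uncertainties
xs_s = rng.normal(xs0, 0.02, n)

# Full random files vary everything; partial files vary one component at a time
var_full = np.var([mock_benchmark(a, b) for a, b in zip(nubar_s, xs_s)])
var_nubar = np.var([mock_benchmark(a, xs0) for a in nubar_s])
var_xs = np.var([mock_benchmark(nubar0, b) for b in xs_s])

# For independent inputs, the partial variances add up to the full variance,
# so each partial variance measures the sensitivity to that component
print(var_nubar + var_xs, var_full)
```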
We have presented in this paper an unconventional way to approach nuclear data adjustment. First, through detailed evaluation work a complete data file is made, after which the data are randomly varied within their uncertainty bands. By applying this methodology to 239Pu and its integral validation, we have shown that considerable improvements can be obtained in the agreement between nuclear data evaluations and benchmark results. This method of work was made possible by a high degree of automation for the production of the evaluated file and its benchmarking.
But, regardless of the success of this approach, some criticism can be raised. From the authors' point of view, a nonexhaustive list is as follows:
1. Choice of model parameters: Even if not confined to this approach, the initial choice of model parameters is crucial. It is much more efficient to start from an
educated selection of model parameters rather than from
VOL. 169
SEP. 2011
Downloaded by [Tufts University] at 03:21 27 October 2017
blind parameters. In the present work, we have spent a
considerable amount of time to adjust these parameters.
But even so, initial parameters can still be more accurate, and0or the nuclear models can be improved. Again,
this problem should not be narrowed to this method only
but to all evaluation processes.
2. Choice of benchmarks: For this study, an arbitrary choice of 120 criticality benchmarks was made, all with equal weight. This can be legitimately criticized, and different choices can be made to suit special demands, such as putting a large weight on a few important benchmarks. But again, the present method is independent of the choice of benchmarks, and one can imagine a very different selection, using shielding, burnup, or activation benchmarks. In addition, one could even include different (deterministic) codes in the loop for the same goodness-of-fit estimator, realizing that reactor physics codes and user-friendly software are two different things.
3. Strong correlation of calculated quantities: As a result of the use of the TALYS system and theoretical models, energy-energy correlations for a given cross section are quite strong (without the mathematical inclusion of experimental differential data, energy-energy correlations are above 50%). This affects the benchmark results in the sense that the cross section in the fast range moves up or down from one random file to another while keeping a rigid shape. Even if correlations are not basic physical quantities, as cross sections are, and reflect only the method used to obtain them, it is generally believed that experimental differential data should be mathematically included in the process; the correlations would then be weaker. As a consequence, the shape of the cross sections would become less rigid and the benchmark results could vary more. We are currently studying solutions to this problem, such as using the "Unified Monte Carlo" approach presented in Ref. 23, or randomly changing the nuclear models (e.g., different level density models) from one calculation to another.
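On point 2, moving away from equal benchmark weights requires only a trivial change to a chi-squared-type goodness-of-fit estimator: multiply each benchmark's term by a weight. A minimal standalone sketch, in which the function name and argument shapes are illustrative rather than taken from the paper:

```python
def weighted_chi2(keff_calc, keff_exp, sigma_exp, weights):
    """Chi-squared goodness-of-fit over a benchmark set with one weight per
    benchmark; uniform weights recover the equal-weight estimator, while a
    few important benchmarks can be emphasized with larger weights."""
    return sum(w * ((c - e) / s) ** 2
               for c, e, s, w in zip(keff_calc, keff_exp, sigma_exp, weights))
```

The same weighted sum could equally rank results from shielding, burnup, or activation benchmarks, since the estimator only sees calculated and reference values.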
Despite these points of criticism, more possibilities can be foreseen with the current approach:

1. Virtually unlimited random variation: Even if applied today to a limited number of random files, it is very easy to extend the approach almost indefinitely. With today's computer power, one can start the production of random files with benchmarking, leave it running for months, and regularly look at the results. As the probability distribution of the benchmark results follows a chi-squared law (or a lognormal probability distribution), there is always a possibility to obtain a better solution.

2. Virtually unlimited isotopic variation: The method can also be applied to more than a single isotope; it can be applied to a complete library. In the near future, we plan to use this method for the random search (and improvement) of the TENDL library, as presented in Ref. 24. There are basically two ways to randomly search for the best nuclear data solution: (a) the most general approach is to randomly vary all isotopes together and to benchmark each combination, and (b) less general and certainly more efficient is to vary only one isotope at a time, starting with the most sensitive one (235U and/or 238U), keeping the optimal nuclear data file and using it when the next relevant isotope is to be optimized.

With current and future computer technology, and the accumulated amount of knowledge in the nuclear data community, we believe that this methodology is technologically condemned to succeed. A limiting factor is its acceptance by the nuclear data community, which is more used to a definition of "evaluation work" as a long, tedious, nonreproducible, and repetitive process. It would be a misunderstanding to see a random nuclear data search as a low-cost, low-quality evaluation procedure. A huge amount of knowledge is already included in the TALYS system, its adjusted model parameters, and the selection of differential measurements. As mentioned before, this method will be used for the improvement of the TENDL library, and we are considering applying it to the next generation of the European Activation library.

REFERENCES

1. M. B. CHADWICK et al., "ENDF/B-VII.0: Next Generation Evaluated Nuclear Data Library for Nuclear Science and Technology," Nucl. Data Sheets, 107, 2931 (2006).

2. A. SANTAMARINA et al., "The JEFF-3.1.1 Nuclear Data Library," OECD/NEA JEFF Report 22, Organisation for Economic Co-operation and Development/Nuclear Energy Agency.

3. A. J. KONING and D. ROCHMAN, "Towards Sustainable Nuclear Energy: Putting Nuclear Physics to Work," Ann. Nucl. Energy, 35, 2024 (2008).

4. A. J. KONING, S. HILAIRE, and M. C. DUIJVESTIJN, "TALYS-1.0," Proc. Int. Conf. Nuclear Data for Science and Technology (ND2007), Nice, France, May 22–27, 2007 (current as of September 14, 2010).

5. A. J. KONING and D. ROCHMAN, "TENDL-2008: Consistent TALYS-Based Evaluated Nuclear Data Library Including Covariances," OECD/NEA JEF/DOC-1262, Organisation for Economic Co-operation and Development/Nuclear Energy Agency (Nov. 2008); http://www.talys.eu/tendl-2008 (current as of September 14, 2010).

6. A. J. KONING and D. ROCHMAN, "TENDL-2009: Consistent TALYS-Based Evaluated Nuclear Data Library Including Covariances," OECD/NEA JEF/DOC-1310, Organisation for Economic Co-operation and Development/Nuclear Energy Agency (Nov. 2009); http://www.talys.eu/tendl-2009 (current as of September 14, 2010).

7. D. ROCHMAN, A. J. KONING, and S. C. VAN DER MARCK, "Uncertainties for Criticality-Safety Benchmarks and k-eff Distributions," Ann. Nucl. Energy, 36, 810 (2009).

8. D. ROCHMAN, A. J. KONING, and S. C. VAN DER MARCK, "Exact Nuclear Data Uncertainty Propagation for Fusion Neutronics Calculations," Fusion Eng. Des., 85, 669 (2010).

9. D. ROCHMAN, A. J. KONING, D. F. DA CRUZ, P. ARCHIER, and J. TOMMASI, "On the Evaluation of 23Na Neutron-Induced Reactions and Validations," Nucl. Instrum. Methods A, 612, 374 (2010).

10. "New Nuclear Data Libraries for Lead and Bismuth and Their Impact on Accelerator-Driven Systems Design," Nucl. Sci. Eng., 156, 357 (2007).

11. A. J. KONING and J. P. DELAROCHE, "Local and Global Nucleon Optical Models from 1 keV to 200 MeV," Nucl. Phys. A, 713, 231 (2003).

12. M. V. MIKHAYLYUKOVA, and N. OTUKA, "The Art of Collecting Experimental Data Internationally: EXFOR, CINDA and the NRDC Network," Proc. Int. Conf. Nuclear Data for Science and Technology (ND2007), Nice, France, May 22–27, 2007, p. 737.

13. S. F. MUGHABGHAB, Atlas of Neutron Resonances: Thermal Cross Sections and Resonance Parameters, Elsevier, Amsterdam (2006).

14. D. ROCHMAN and A. J. KONING, "Pb and Bi Neutron Data Libraries with Full Covariance Evaluation and Improved Integral Tests," Nucl. Instrum. Methods A, 589, 85 (2008).

15. P. TALOU, "Prompt Fission Neutrons Calculations in the Madland-Nix Model," LA-UR-07-8168, Los Alamos National Laboratory (2007).

16. D. G. MADLAND and J. R. NIX, "New Calculation of Prompt Fission Neutron Spectra and Average Prompt Neutron Multiplicities," Nucl. Sci. Eng., 81, 213 (1982).

17. A. C. WAHL, "Systematics of Fission-Product Yields," LA-13928, Los Alamos National Laboratory (2002).

18. R. E. MACFARLANE, "NJOY99—Code System for Producing Pointwise and Multigroup Neutron and Photon Cross Sections from ENDF/B Data," RSIC PSR-480, Los Alamos National Laboratory (2000).

19. J. F. BRIESMEISTER, "MCNP—A General Monte Carlo N-Particle Transport Code, Version 4C," LA-13709-M, Los Alamos National Laboratory (2000).

20. J. B. BRIGGS, "International Handbook of Evaluated Criticality Safety Benchmark Experiments," NEA/NSC/DOC(95)03/I, Organisation for Economic Co-operation and Development/Nuclear Energy Agency (2004).

21. M. ABRAMOWITZ and I. A. STEGUN, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Chap. 26, p. 940, Dover, New York (1965).

22. A. HOGENBIRK, and D. VAN VEEN, "Nuclear Data Uncertainty Propagation: Total Monte Carlo vs. Covariances," Proc. Int. Conf. Nuclear Data for Science and Technology (ND2010), Jeju, Korea, April 26–30, 2010 (to be published).

23. R. CAPOTE and D. L. SMITH, "An Investigation of the Performance of the Unified Monte Carlo Method of Neutron Cross Section Data Evaluation," Nucl. Data Sheets, 109, 2768 (2008).

24. D. ROCHMAN and A. J. KONING, "500 Random Evaluations of 239Pu," OECD/NEA JEF/DOC-1327, Organisation for Economic Co-operation and Development/Nuclear Energy Agency (May 2010).