DESCRIPTION JP2017026967

Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
Abstract: To provide a sound generation device capable of generating environmental sound and background sound with a sense of presence. SOLUTION: The device includes a sound collection unit that collects sound from M directions (M being an integer of 3 or more) as M-channel sound signals; an emphasis processing unit that, according to a predetermined rule, selects two of the M channel sound signals as the sound signals of the first and second direction channels; an original sound extraction unit that extracts an original sound using at least one of the M channel sound signals; an emphasis degree determination unit that determines the degree of emphasis of the first and second direction channels; and a synthesis unit that amplifies the sound signals of the first and second direction channels according to the determined degree of emphasis and combines them with the original sound. [Selected figure] Figure 2
Sound generation device, sound generation method, program
[0001]
The present invention relates to a sound generation apparatus, a sound generation method, and a
program for creating a highly realistic environmental sound and background sound in, for
example, content creation in outdoor recording.
[0002]
Conventionally, when shooting with, for example, a home video camera, environmental sounds
and background sounds are noises that interfere with voices that are desired to be collected as
target sounds, and have been treated as those to be removed (for example, Patent Document 1).
11-04-2019
1
[0003]
On the other hand, a scene sound generation device has been proposed that reproduces scene sounds, such as the cries of seagulls or the whistles of ships, with a simple configuration and rich variation (Patent Document 2).
However, the scene sound generation device of Patent Document 2 reproduces a predetermined scene by mixing previously recorded or collected scene sound material (for example, audio data of a gull's cry) into another sound; it cannot be said to reproduce the impression of the actual site.
[0004]
JP, 2006-171077, A JP, 2004-289511, A
[0005]
For example, when recording the landscape of a street corner with a video camera,
environmental sounds and background sounds may be recorded with an impression different
from the impression felt when actually being there.
When you actually stand on a street corner, you can clearly hear, for example, the footsteps of passersby in front of you and the sound of commercials playing from a storefront a little further away.
When the same scene is recorded and played back, however, only the traffic noise stands out, and the above-mentioned footsteps and commercial sounds may be buried in that noise and become inaudible.
[0006]
If environmental sounds and background sounds could be processed by simulating the characteristics of the human auditory system described above, a sound generation device and sound generation method capable of generating realistic environmental sounds and background sounds could be realized. Such technology is expected to be applicable not only to video camera recording but also to various services that make use of audio information.
[0007]
Therefore, it is an object of the present invention to provide a sound generation device capable of
generating a realistic environmental sound and background sound.
[0008]
The sound generation device of the present invention includes a sound collection unit, an
emphasis processing unit, an original sound extraction unit, an emphasis degree determination
unit, and a synthesis unit.
[0009]
The sound collection unit sets M to an integer of 3 or more, and collects the sound in the M
direction as a sound signal of the M channel.
The enhancement processing unit selects sound signals of two channels among the sound signals
of M channels as sound signals of the first and second direction channels, respectively.
The original sound extraction unit extracts an original sound using a sound signal of at least one
of the M channel sound signals. The emphasis degree determination unit determines the
emphasis degree of the first and second direction channels according to a predetermined rule.
The synthesizer amplifies the sound signals of the first and second direction channels in
accordance with the determined degree of emphasis, and synthesizes the sound signals with the
original sound.
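The processing flow of the five units described above can be sketched as follows. This is a minimal illustration of the claimed structure, not the patent's implementation; the function names and the particular selection rule (fixed channel indices) are assumptions introduced here.

```python
import numpy as np

def pick_direction_channels(channels, i, j):
    """Emphasis processing unit: select two of the M channel signals as
    the first and second direction channels (the 'predetermined rule'
    is left abstract; fixed indices stand in for it here)."""
    return channels[i], channels[j]

def extract_original(channels):
    """Original sound extraction unit: one variant named in the text is
    the sum of the M channel signals (a single channel is another)."""
    return np.sum(channels, axis=0)

def synthesize(original, ch1, ch2, g1, g2):
    """Synthesis unit: amplify the two direction channels by their
    emphasis coefficients and combine them with the original sound."""
    return original + g1 * ch1 + g2 * ch2
```

Given an (M, T) array of collected signals, the whole chain is `synthesize(extract_original(x), *pick_direction_channels(x, i, j), g1, g2)`.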
[0010]
According to the sound generation device of the present invention, it is possible to generate a
realistic environmental sound and background sound.
[0011]
FIG. 1 is a diagram showing the configuration of a sound collection unit of the sound generation device of the first embodiment. FIG. 2 is a block diagram showing the configuration of the sound generation device according to the first embodiment. FIG. 3 is a flowchart showing the operation of the sound generation device of the first embodiment. FIG. 4 is a block diagram showing the configuration of a sound generation device according to a second embodiment. FIG. 5 is a block diagram showing the configuration of the first and second direction emphasis units of the sound generation device of the second embodiment. FIG. 6 is a flowchart showing the operation of the first and second direction emphasis units of the sound generation device of the second embodiment. FIG. 7 is a block diagram showing the configuration of a sound generation device according to a third embodiment. FIG. 8 is a flowchart showing the operation of the sound generation device of the third embodiment. FIG. 9 is a block diagram showing the configuration of a sound generation device according to a fourth embodiment. FIG. 10 is a flowchart showing the operation of the sound generation device of the fourth embodiment.
[0012]
Human hearing does not process sound arriving from all directions equally; research results show that it selectively attends to and listens to sounds that attract attention in some way. It has also been reported that attention can be directed to at most about two directions at a time (see Reference Non-Patent Document 1). (Reference Non-Patent Document 1: Sugano, Hirahara, "How many voices can be heard at a time?", Proceedings of the Meeting of the Acoustical Society of Japan, The Acoustical Society of Japan, March 1, 1996, pp. 467-468)
[0013]
Environmental sounds and background sounds recorded by a microphone lack spatial information, such as the spatial arrangement of the sound sources, and because human auditory processing cannot function well on them, this is considered one factor that impairs the sense of presence. Therefore, the following embodiments select, from the recorded environmental sound and background sound, two directions in which characteristic sounds exist and emphasize the sounds in those two directions, and thereby disclose a sound generation device capable of complementing the spatial information lost from the recorded environmental sound and background sound and of creating a sense of presence as if one were listening to the environmental sound and background sound at the place where they were recorded.
[0014]
The enhancement processing for the two emphasis directions can be realized in hardware by using directional microphones. Alternatively, it can be realized by forming directivity for each direction by filtering the signals of a plurality of microphones and then applying non-linear processing in a subsequent stage.
[0015]
The sound information in the two emphasized directions is mixed with the original sound, which has not undergone emphasis processing, to generate a realistic sound. At this stage, reproducing the original sound diotically and the two emphasized sounds as stereo sounds placed on the left and right respectively makes the two emphasized sounds easier to perceive clearly (see the third embodiment). Furthermore, by convolving a head-related transfer function for an arbitrary direction with each of the two emphasized sounds and listening over headphones, it becomes possible to listen to environmental sounds and background sounds with an even higher sense of presence (see the fourth embodiment).
[0016]
Hereinafter, embodiments of the present invention will be described in detail. Note that
components having the same function will be assigned the same reference numerals and
redundant description will be omitted.
[0017]
The configuration and operation of the sound generation device according to the first
embodiment will be described below with reference to FIGS. 1, 2, and 3. FIG. 1 is a diagram
showing the configuration of the sound collection unit 11 of the sound generation device 1 of the
present embodiment. FIG. 2 is a block diagram showing the configuration of the sound
generation device 1 of the present embodiment. FIG. 3 is a flowchart showing the operation of
the sound generation device 1 of the present embodiment.
[0018]
As shown in FIG. 1, the sound collection unit 11 included in the sound generation device 1 of the present embodiment includes, for example, M unidirectional microphones 11-1, 11-2, ..., 11-M. The directional microphones 11-1, 11-2, ..., 11-M can be arranged in a circle so that the directions in which each microphone's directivity is strongest point radially outward. M is an arbitrary integer of 3 or more. The sound collection unit 11 collects the sound from the M directions (environmental sound, background sound) as M-channel sound signals (S11).
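The circular arrangement of [0018] can be made concrete with a small sketch: evenly spaced radial axes, and a first-order cardioid as a stand-in for each element's directivity (the cardioid model is an assumption for illustration; the patent only requires unidirectional microphones).

```python
import numpy as np

def radial_orientations(M):
    """Azimuths (radians) of M microphones arranged in a circle so that
    their directivity axes point radially outward, evenly spaced."""
    return 2.0 * np.pi * np.arange(M) / M

def cardioid_gain(theta, axis):
    """First-order cardioid response of one directional element whose
    main axis points toward `axis`: maximal on-axis, null behind."""
    return 0.5 * (1.0 + np.cos(theta - axis))
```

With M = 8 this gives axes every 45 degrees; each element picks up its own sector most strongly, which is what lets one M-channel recording cover all M directions.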
[0019]
As shown in FIG. 2, the sound generation device 1 of the present embodiment includes, in addition to the above-described sound collection unit 11, an emphasis processing unit 12, an original sound extraction unit 13, an emphasis degree determination unit 14, and a synthesis unit 15.
[0020]
The enhancement processing unit 12 selects sound signals of two channels in a desired direction
among the sound signals of M channels recorded in step S11 as sound signals of the first and
second direction channels (S12).
The original sound extraction unit 13 extracts an original sound using the sound signal of at least
one of the sound signals of M channels (S13). More specifically, the original sound extraction
unit 13 extracts the sum of sound signals of M channels or a signal of any one channel as an
original sound and outputs the original sound.
[0021]
The emphasis degree determination unit 14 determines the emphasis degree (emphasis coefficient) of the first and second direction channels according to a predetermined rule (S14). The emphasis degree (emphasis coefficient) in step S14 is desirably set to a value that makes the S/N as high as possible within a range where the balance with the output of the original sound extraction unit 13 does not become unnatural. In typical cases, this condition is often satisfied when the output of the original sound extraction unit 13 is set about 6 to 10 dB higher. The synthesis unit 15 amplifies the sound signals of the first and second direction channels according to the determined degree of emphasis, synthesizes them with the original sound, and outputs the synthesized sound (S15).
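The 6-10 dB guideline of [0021] translates into a linear emphasis coefficient as follows; the helper name is an illustration of the dB arithmetic, not a formula from the patent.

```python
def emphasis_coefficient(level_diff_db):
    """Linear gain for a direction channel so that the original sound
    (at gain 1.0) stays `level_diff_db` dB above it; the text suggests
    6 to 10 dB as a typical range."""
    return 10.0 ** (-level_diff_db / 20.0)
```

For example, keeping the original 6 dB above the emphasized channels corresponds to a channel gain of roughly 0.5.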
[0022]
According to the sound generation device 1 of the present embodiment, the emphasis processing unit 12 selects the sound signals of the first and second direction channels, and the synthesis unit 15 amplifies these signals, combines them with the original sound, and outputs the result; it is therefore possible to generate environmental sound and background sound with a sense of presence.
[0023]
Hereinafter, the configuration of the sound generation apparatus of the second embodiment in
which the emphasis processing unit of the first embodiment is modified will be described with
reference to FIGS. 4 and 5.
FIG. 4 is a block diagram showing the configuration of the sound generation device 2 of this
embodiment. FIG. 5 is a block diagram showing the configuration of the first and second
direction emphasis units 221a and 221b of the sound generation device 2 of the present
embodiment.
[0024]
As shown in FIG. 4, the sound generation device 2 of the present embodiment includes an emphasis processing unit 22 in place of the emphasis processing unit 12 of the sound generation device 1 of the first embodiment; the other components are the same as in the first embodiment. As shown in the figure, the emphasis processing unit 22 includes a first direction emphasizing unit 221a and a second direction emphasizing unit 221b. The first direction emphasizing unit 221a and the second direction emphasizing unit 221b are configured from the common components shown in FIG. 5. As shown in the figure, the first (second) direction emphasizing unit 221a (221b) includes a filter unit 2211, an adding unit 2212, a target/noise area PSD estimation unit 2213, a stationary/non-stationary component extraction unit 2214, a post filter calculation unit 2215, a multiplication unit 2216, and an inverse Fourier transform unit 2217. The operation of the first and second direction emphasizing units 221a and 221b of the sound generation device 2 of this embodiment will be described below with reference to FIG. 6, a flowchart showing their operation.
[0025]
In this embodiment, a case is considered in which K sound sources (K is an arbitrary integer of 1 or more) are observed by a microphone array composed of M microphone elements (M is an arbitrary integer of 2 or more). Even when M = 2, directivity can be formed in three or more directions by applying software processing to the microphone array signals. Let A_{m,k}(ω) be the transfer characteristic between the m-th microphone element and the k-th sound source, and S_k(ω, τ) the k-th sound source signal; then the m-th observation signal (the sound signal of the m-th channel) X_m(ω, τ) is modeled by the following equation.
[0026]
X_m(ω, τ) = Σ_{k=1}^{K} A_{m,k}(ω) S_k(ω, τ)
[0027]
Here, ω represents a frequency and τ represents a frame.
m is an integer satisfying 1 ≦ m ≦ M, and k is an integer satisfying 1 ≦ k ≦ K.
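For a single frequency bin ω, the observation model above is a matrix-vector product; the following sketch (with illustrative array shapes) makes the indexing concrete.

```python
import numpy as np

def observe(A, S):
    """Observation model X_m(w,t) = sum_k A_{m,k}(w) S_k(w,t) for one
    frequency bin: A is (M, K) transfer characteristics, S is (K, T)
    source spectra over T frames; result is the (M, T) observations."""
    return A @ S
```

Stacking one such product per frequency bin reproduces the full X_m(ω, τ).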
[0028]
The filter unit 2211 filters the sound signal of each channel with a filter that emphasizes the target sound (S2211). The filter unit 2211 holds a total of M channel filters W_1(ω), W_2(ω), ..., W_M(ω), one per channel. The filter vector w(ω) = [W_1(ω), W_2(ω), ..., W_M(ω)]^T can be obtained by the following equation.
[0029]
w(ω) = R^{-1}(ω) h(ω) / (h^H(ω) R^{-1}(ω) h(ω))
[0030]
Here, h(ω) = [H_1(ω), H_2(ω), ..., H_M(ω)]^T is the array manifold vector in the target sound direction.
The subscript k is omitted for h(ω). R^{-1}(ω) denotes the inverse of the spatial correlation matrix. The superscript T denotes transpose, and the superscript H denotes Hermitian transpose.
Assuming that the sound source signals are mutually uncorrelated, the spatial correlation matrix R(ω) is expressed by the following equation.
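The minimum-variance filter of [0029] can be computed directly; this sketch assumes the standard distortionless form w = R^{-1}h / (h^H R^{-1} h), which matches the quantities (h, R^{-1}, Hermitian transpose) named in the text.

```python
import numpy as np

def mvdr_weights(R, h):
    """Minimum-variance beamformer steered by array manifold h:
    w = R^{-1} h / (h^H R^{-1} h). Uses solve() instead of forming
    the explicit inverse of R."""
    Rinv_h = np.linalg.solve(R, h)
    return Rinv_h / (np.conj(h) @ Rinv_h)

def beamform(w, x):
    """Y0(w,t) = w^H x(w,t): weighted sum over the M channels."""
    return np.conj(w) @ x
```

The distortionless property w^H h = 1 is a quick sanity check: steering at the target direction passes the target signal unchanged.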
[0031]
R(ω) = Σ_{k=1}^{K} a_k(ω) a_k^H(ω),  where a_k(ω) = [A_{1,k}(ω), ..., A_{M,k}(ω)]^T
[0032]
The adding unit 2212 adds the filtered sound signals of the respective channels and outputs the beamforming output signal Y_0(ω, τ) that emphasizes the target sound (S2212). That is, the beamforming output signal Y_0(ω, τ) is obtained by the following equation.
[0033]
Y_0(ω, τ) = w^H(ω) x(ω, τ)
[0034]
Here, x(ω, τ) = [X_1(ω, τ), X_2(ω, τ), ..., X_M(ω, τ)]^T.
[0035]
A final output (target direction emphasis signal) in which the noise signal is suppressed can be obtained by multiplying the output signal Y_0(ω, τ) by a post filter G(ω, τ) that suppresses the noise signal.
As a means of obtaining this post filter G(ω, τ), a method such as that of Reference Non-Patent Document 2 has been proposed.
In the method of Reference Non-Patent Document 2, with φ_S(ω, τ) the power spectral density of the target area and φ_N(ω, τ) the power spectral density of the noise area, G(ω, τ) is determined by the following equation.
[0036]
G(ω, τ) = φ_S(ω, τ) / (φ_S(ω, τ) + φ_N(ω, τ))
[0037]
Reference Non-Patent Document 2 further proposes a method of estimating φ_S(ω, τ) and φ_N(ω, τ) from the observation signals X_m(ω, τ).
The power spectral density is hereinafter also referred to as PSD (Power Spectral Density).
(Reference Non-Patent Document 2: Y. Hioka et al., "Underdetermined sound source separation using power spectrum estimated by combination of directivity gain," IEEE Transactions on Audio, Speech, and Language Processing, IEEE, Feb. 22, 2013, Vol. 21, Issue 6, pp. 1240-1250)
[0038]
Now, consider L+1 beamforming filters w_l(ω) (l = 0, 1, ..., L) for obtaining signals from areas in various directions. Let |D_{l,k}|^2 be the sensitivity of the l-th filter to the k-th direction, |Y_l(ω, τ)|^2 the power spectral density of the l-th output signal, and |S_k(ω, τ)|^2 the power spectral density in each direction; the relationship among them can be modeled as follows.
[0039]
|Y_l(ω, τ)|^2 = Σ_{k=1}^{K} |D_{l,k}|^2 |S_k(ω, τ)|^2
[0040]
However, the indices of the symbols Y, D, and S are omitted here for brevity.
[0041]
By solving the inverse problem of the above equation, it is possible to obtain an estimate of the
power spectral density for each direction.
[0042]
[|Ŝ_1(ω, τ)|^2, ..., |Ŝ_K(ω, τ)|^2]^T = D^+ [|Y_0(ω, τ)|^2, ..., |Y_L(ω, τ)|^2]^T
[0043]
Here, [·]^+ denotes the pseudo-inverse of the matrix [·].
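The inverse problem of [0041] reduces to a least-squares solve with the pseudo-inverse; this sketch assumes the power-domain model y = D s stated above, with illustrative variable names.

```python
import numpy as np

def estimate_direction_psds(D_power, Y_power):
    """Solve the power-domain model Y = D S in the least-squares sense,
    S_hat = D^+ Y, where D_power[l, k] = |D_{l,k}|^2 holds the filter
    sensitivities and Y_power[l] = |Y_l|^2 the observed output powers.
    Returns the per-direction PSD estimates |S_k|^2."""
    return np.linalg.pinv(D_power) @ Y_power
```

When the number of beams L+1 exceeds the number of directions K and D has full column rank, a consistent observation is recovered exactly.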
[0044]
The target/noise area PSD estimation unit 2213 estimates the PSD of each of the target area and the noise area from the per-direction power spectral density estimates determined above (S2213).
The target/noise area PSD estimation unit 2213 calculates the PSD estimate φ̂_S(ω, τ) of the target area and the PSD estimate φ̂_N(ω, τ) of the noise area according to the following equation.
[0045]
[0046]
However, while the calculation of these estimates assumes that the target sound and coherent interference noise are mixed, in actual use scenes not only coherent interference noise but also stationary noise with strong incoherence is often mixed in. Under such conditions, the estimation errors of φ_S(ω, τ) and φ_N(ω, τ) become large, and the noise suppression performance is degraded.
Therefore, the following step S2214 (the operation of the stationary/non-stationary component extraction unit 2214) is required.
[0047]
The stationary/non-stationary component extraction unit 2214 extracts, for each of the target area and the noise area, the non-stationary component derived from the sound arriving from that area and the stationary component derived from incoherent noise (S2214).
[0048]
More specifically, the stationary/non-stationary component extraction unit 2214 obtains, from the PSD estimate φ̂_S(ω, τ) of the target area, the non-stationary component φ̂_S^(A)(ω, τ) derived from the sound arriving from the target area and the stationary component φ̂_S^(B)(ω, τ) derived from incoherent noise, each by the following equation using time-averaging processing (S2214).
[0049]
[0050]
Next, the stationary/non-stationary component extraction unit 2214 obtains, from the PSD estimate φ̂_N(ω, τ) of the noise area, the non-stationary component φ̂_N^(A)(ω, τ) derived from the arriving sound and the stationary component φ̂_N^(B)(ω, τ) derived from incoherent noise, each by the following equation using time-averaging processing (S2214).
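Since the translation does not reproduce the actual equations of step S2214, the following is only one plausible realization of a "time-averaging" split, offered as an assumption: a first-order recursive mean over frames as the stationary part, and the positive remainder as the non-stationary part.

```python
import numpy as np

def split_components(psd, alpha=0.95):
    """Split a per-frame PSD track into a stationary part (recursive
    time average with forgetting factor alpha) and a non-stationary
    part (the positive excess over that average). The exact formula
    used in the patent is not reproduced in the translation; this is
    a stand-in consistent with 'time averaging processing'."""
    stationary = np.empty_like(psd)
    avg = psd[0]
    for t in range(len(psd)):
        avg = alpha * avg + (1.0 - alpha) * psd[t]
        stationary[t] = avg
    nonstationary = np.maximum(psd - stationary, 0.0)
    return nonstationary, stationary
```

On a perfectly stationary input, the entire power ends up in the stationary component and the non-stationary component vanishes, which is the behavior the decomposition is meant to have.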
[0051]
[0052]
The post filter calculation unit 2215 calculates the post filter from the non-stationary and stationary components (S2215).
More specifically, the post filter calculation unit 2215 calculates the post filter G̃(ω, τ) from φ̂_S^(A)(ω, τ), φ̂_S^(B)(ω, τ), φ̂_N^(A)(ω, τ), and φ̂_N^(B)(ω, τ) by the following equation (S2215).
[0053]
[0054]
The multiplication unit 2216 multiplies the signal added in step S2212 by the post filter to generate the target direction emphasis signal (S2216). That is, the multiplication unit 2216 multiplies the added signal Y_0(ω, τ) by the post filter G̃(ω, τ) to obtain a signal Z(ω, τ) in which ambient noise is suppressed and only the sound from the target direction is extracted (S2216).
[0055]
Z(ω, τ) = G̃(ω, τ) Y_0(ω, τ)
[0056]
The inverse Fourier transform unit 2217 performs inverse Fourier transform on the target
direction emphasis signal Z (ω, τ) (S2217).
This makes it possible to suppress ambient noise and extract only sound in a desired direction.
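Steps S2216 and S2217 are a per-bin spectral gain followed by an inverse transform; the sketch below shows them for a single frame (a real implementation would use an overlap-add inverse STFT over all frames, which is assumed rather than shown here).

```python
import numpy as np

def apply_post_filter(G, Y0):
    """S2216: Z(w,t) = G(w,t) * Y0(w,t), a per-frequency-bin gain."""
    return G * Y0

def to_time_domain(Z):
    """S2217: inverse Fourier transform of one frame's (half-)spectrum;
    stands in for the overlap-add inverse STFT of a full system."""
    return np.fft.irfft(Z)
```

Round-tripping a frame through rfft and to_time_domain with a unit gain recovers the original samples, confirming the transform conventions line up.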
[0057]
In the sound generation device 2 of this embodiment, the emphasis processing unit 22 includes the first and second direction emphasizing units 221a and 221b, each of which extracts only the sound of one desired direction by executing steps S2211 to S2217 described above; the emphasis processing unit 22 thus selects and extracts sound signals for a total of two directions (two channels).
Therefore, compared with the sound generation device 1 of the first embodiment, the sound generation device 2 of the present embodiment can select the sound signals of the two directions in software, and has the advantage of being less subject to the hardware constraint of having to arrange directional microphones.
[0058]
Hereinafter, with reference to FIGS. 7 and 8, the sound generation device of the third embodiment, which modifies the sound generation device of the first embodiment into a stereo format, will be described.
FIG. 7 is a block diagram showing the configuration of the sound generation device 3 of this embodiment.
FIG. 8 is a flowchart showing the operation of the sound generation device 3 of this embodiment.
As shown in FIG. 7, the sound generation device 3 of the present embodiment includes a synthesis unit 35 in place of the synthesis unit 15 of the sound generation device 1 of the first embodiment; the other components are the same as in the first embodiment.
The synthesis unit 35 of the sound generation device 3 of the present embodiment includes a right channel synthesis unit 35R and a left channel synthesis unit 35L.
[0059]
The right channel synthesis unit 35R amplifies the sound signal of the first direction channel
according to the determined degree of emphasis and synthesizes it with the original sound to
generate a right channel sound in the stereo system (S35R).
Similarly, the left channel synthesis unit 35L amplifies the sound signal of the second direction
channel according to the determined degree of emphasis and synthesizes it with the original
sound to generate a left channel sound in the stereo system (S35L).
[0060]
The right (left) channel synthesis unit 35R (35L) converts the original sound extracted by the original sound extraction unit 13 into a diotic signal (a signal identical in the left and right channels); for the right channel it synthesizes the original sound with the sound of one desired direction (the first direction), and for the left channel it synthesizes the original sound with the sound of the other desired direction (the second direction).
The sound generated by the synthesis unit 35 can be reproduced well using high-resolution stereo speakers or stereo headphones.
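The stereo synthesis of [0059]-[0060] can be sketched as follows; the function name and the gain arguments are illustrative, but the channel assignment (first direction to the right, second to the left, diotic original on both) follows the text.

```python
import numpy as np

def stereo_synthesize(original, ch1, ch2, g1, g2):
    """S35R/S35L: the original sound is placed diotically (identical
    in both channels); the first direction channel is mixed into the
    right output and the second direction channel into the left."""
    right = original + g1 * ch1   # S35R
    left = original + g2 * ch2    # S35L
    return left, right
```

Because the original is identical on both sides while the emphasized sounds differ, a listener localizes the two emphasized directions left and right while hearing the original in the center.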
[0061]
According to the sound generation device 3 of the present embodiment, when the sounds of the two desired directions are synthesized by the synthesis unit 35, each sound can be assigned to the left or right channel of a stereo signal and synthesized there.
[0062]
Hereinafter, with reference to FIGS. 9 and 10, the sound generation device 4 of the fourth embodiment, which modifies the sound generation device of the first embodiment into a binaural system (Reference Non-Patent Document 3), will be described.
(Reference Non-Patent Document 3: Akio Ando, "Sound Science Series (Vol. 10): Reproduction of Sound Field", Corona Publishing Co., Ltd., Dec. 10, 2014, Chapter 6, High-Presence Sound Field Reproduction)
[0063]
FIG. 9 is a block diagram showing the configuration of the sound generation device 4 of this
embodiment.
FIG. 10 is a flowchart showing the operation of the sound generation device 4 of this
embodiment.
As shown in FIG. 9, the sound generation device 4 of the present embodiment includes, in addition to the components of the sound generation device 1 of the first embodiment, a first direction acoustic characteristic addition unit 445a, a second direction acoustic characteristic addition unit 445b, and a crosstalk elimination unit 46. The other components are the same as in the first embodiment.
[0064]
The sound generation device 4 of this embodiment determines two arbitrary, spatially distinguishable directions for the sound signals of the first and second direction channels selected and extracted in step S12, and adds to each signal the acoustic characteristics corresponding to its direction. Specifically, the first direction acoustic characteristic addition unit 445a convolves the head-related transfer function corresponding to the first direction with the sound signal of the first direction channel (S445a). Similarly, the second direction acoustic characteristic addition unit 445b convolves the head-related transfer function corresponding to the second direction with the sound signal of the second direction channel (S445b).
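Steps S445a/S445b amount to convolving each direction channel with a pair of head-related impulse responses; the sketch below assumes the HRIRs come from some measured set (the arguments here are placeholders, not data from the patent).

```python
import numpy as np

def binauralize(signal, hrir_left, hrir_right):
    """Convolve one direction channel with the left- and right-ear
    head-related impulse responses for its assigned direction,
    yielding the binaural (left, right) pair for that channel."""
    return (np.convolve(signal, hrir_left),
            np.convolve(signal, hrir_right))
```

With a unit impulse as the HRIR the signal passes through unchanged, which is a convenient check that the convolution is wired correctly before substituting real measured responses.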
[0065]
As for the original sound extracted by the original sound extraction unit, it can be made a diotic signal as in the third embodiment, or a transfer characteristic for an arbitrary direction, at a position distinguishable from each of the two emphasized sounds, can be convolved with it. When reproducing the generated sound as the final output, either stereo speakers or stereo headphones can be used. When stereo headphones are used, each binaural signal may be output from the corresponding left or right channel. When stereo speakers are used, on the other hand, crosstalk occurs: the signal of the left channel also reaches the right ear, and the signal of the right channel also reaches the left ear. In this case, the crosstalk is preferably eliminated by the crosstalk elimination unit 46. The crosstalk elimination unit 46 eliminates crosstalk from the signal synthesized in step S15 (S46).
[0066]
According to the sound generation device 4 of this embodiment, the sounds of the two desired directions can be synthesized using a binaural method that reproduces the acoustic characteristics at the entrances of both ears.
[0067]
<Point of the invention> The point of the present invention is that the sounds of two directions are extracted from the recorded environmental sound and background sound, and by synthesizing these with another sound (the original sound), the spatial information lost in microphone recording is complemented and a highly realistic environmental sound and background sound are generated.
In addition, by reproducing the two extracted direction sounds from left and right speakers, or from arbitrary virtual positions in space, the human ability to discriminate spatial position can be exploited again, creating an even higher sense of presence.
[0068]
The invention can be used for adding highly realistic environmental sound to images of a city, as in a street view service, for generating background sound, or for a video camera capable of highly realistic sound recording.
[0069]
<Supplement> The device of the present invention includes, as a single hardware entity, for example: an input unit to which a keyboard or the like can be connected; an output unit to which a liquid crystal display or the like can be connected; a communication unit to which a communication device (for example, a communication cable) capable of communicating with the outside of the hardware entity can be connected; a CPU (Central Processing Unit, which may include a cache memory, registers, and the like); RAM and ROM as memories; an external storage device such as a hard disk; and a bus connecting these input, output, and communication units, the CPU, the RAM, the ROM, and the external storage device so that data can be exchanged among them.
If necessary, the hardware entity may be provided with a device (drive) capable of reading and
writing a recording medium such as a CD-ROM. Examples of physical entities provided with such
hardware resources include general purpose computers.
[0070]
The external storage device of the hardware entity stores the program necessary for realizing the above-mentioned functions, the data required for processing by this program, and the like (the storage is not limited to the external storage device; for example, the program may be stored in the ROM, a read-only storage device). Data and the like obtained by the processing of these programs are stored as appropriate in the RAM, the external storage device, and so on.
[0071]
In the hardware entity, each program stored in the external storage device (or ROM, etc.) and the data necessary for its processing are read into memory as needed, and interpreted, executed, and processed by the CPU as appropriate. As a result, the CPU realizes the predetermined functions (the components expressed above as the ... unit, and the like).
[0072]
The present invention is not limited to the above-described embodiments, and can be modified as appropriate without departing from the spirit of the present invention. Further, the processes described in the above embodiments are not only executed in chronological order according to the order of description, but may also be executed in parallel or individually according to the processing capability of the device executing them, or as needed.
[0073]
As described above, when the processing functions of the hardware entity (the device of the present invention) described in the above embodiments are implemented by a computer, the processing content of the functions the hardware entity should have is described by a program. By executing this program on the computer, the processing functions of the hardware entity are realized on the computer.
[0074]
The program describing this processing content can be recorded on a computer-readable recording medium. The computer-readable recording medium may be of any kind, for example a magnetic recording device, an optical disc, a magneto-optical recording medium, or a semiconductor memory. Specifically, for example, a hard disk device, a flexible disk, a magnetic tape, or the like can be used as the magnetic recording device; a DVD (Digital Versatile Disc), DVD-RAM (Random Access Memory), CD-ROM (Compact Disc Read Only Memory), CD-R (Recordable)/RW (ReWritable), or the like as the optical disc; an MO (Magneto-Optical disc) or the like as the magneto-optical recording medium; and an EEP-ROM (Electronically Erasable and Programmable Read Only Memory) or the like as the semiconductor memory.
[0075]
This program is distributed, for example, by selling, transferring, or lending a portable recording medium such as a DVD or CD-ROM on which the program is recorded. Furthermore, the program may be distributed by storing it in the storage device of a server computer and transferring it from the server computer to other computers via a network.
[0076]
For example, a computer that executes such a program first stores, in its own storage device, the program recorded on the portable recording medium or transferred from the server computer. Then, at the time of executing a process, the computer reads the program stored in its own recording medium and executes processing according to the read program. As another form of executing the program, the computer may read the program directly from the portable recording medium and execute processing according to it, or, each time the program is transferred to this computer from the server computer, it may sequentially execute processing according to the received program. Alternatively, the above-described processing may be executed by a so-called ASP (Application Service Provider) type service, which realizes the processing functions only through execution instructions and result acquisition, without transferring the program from the server computer to this computer. Note that the program in this embodiment includes information that is provided for processing by an electronic computer and that conforms to a program (such as data that is not a direct command to the computer but has the property of defining the processing of the computer).
[0077]
Further, although in these embodiments the hardware entity is realized by executing a predetermined program on a computer, at least part of the processing content may instead be realized in hardware.