Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2017168887
Abstract: The present invention provides a sound reproduction apparatus, a sound reproduction method, and a program for reproducing three-dimensional sound with an enhanced sense of presence using the principle of otoacoustic emission. SOLUTION: The apparatus includes a first otoacoustic emission processing unit 2e that adds the effects of evoked otoacoustic emission and distortion-product otoacoustic emission to input audio data, and a head-related transfer adjustment processing unit 2f that adjusts, based on a predetermined head-related transfer function, the transfer delay of the sound to the user's head for the processed audio data. The processed audio data is converted into an audio signal and output as three-dimensional sound with a sense of presence.
[Selected figure] Figure 1
Sound reproduction apparatus, sound reproduction method, and program
[0001]
The present invention relates to an apparatus and the like for reproducing realistic sound, and more particularly to a sound reproduction apparatus, a sound reproduction method, and a program in which the sense of presence is enhanced using the principle of otoacoustic emission.
[0002]
Conventionally, devices that reproduce sound have been required to deliver increasingly realistic reproduction, and stereophonic sound is reproduced using binaural signals obtained by convolving the sound source signal with head-related transfer functions (HRTFs).
[0003]
07-05-2019
1
Also, in recent years, when an audio signal intended for speakers placed on both sides of a screen is reproduced through headphones or a head-mounted display (HMD), the mismatch between the orientation of the image and the localization position of the sound image has been recognized as a problem, and various proposals have been made to solve it.
[0004]
For example, Patent Document 1 discloses a sound processing apparatus in which an earphone is provided with a gyro sensor for detecting the rotation of the user's head and an acceleration sensor for detecting the tilt of the gyro sensor, and sound image localization processing is corrected using these detection outputs so as to keep the localization position of the sound image constant.
[0005]
Japanese Patent Application No. 2010-56589
[0006]
However, the apparatus disclosed in Patent Document 1 merely adjusts the localization position of the sound image through sound image localization processing; it neither discloses nor suggests the technical idea of incorporating the principle of otoacoustic emission to enhance the sense of presence.
[0007]
Here, otoacoustic emissions include "transient evoked otoacoustic emissions", "spontaneous otoacoustic emissions", and "distortion-product otoacoustic emissions".
A transient evoked otoacoustic emission (TEOAE) is an acoustic response in which a signal is detected with a delay of about 10 ms after a click-sound stimulus.
A spontaneous otoacoustic emission (SOAE) is an acoustic response in which a signal emitted spontaneously from the cochlea is detected without any external stimulus sound.
A distortion-product otoacoustic emission (DPOAE) is an acoustic response in which, when two tones of different frequencies (f1, f2, with f1 < f2) are input to the cochlea, a signal is detected at frequencies of the form nf1 ± mf2 (n, m integers).
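The DPOAE frequency relation above can be illustrated with a short sketch (purely illustrative; the function name and the folding of negative combinations to their absolute value are assumptions, not taken from the patent):

```python
def distortion_products(f1, f2, max_order=2):
    """Enumerate distortion-product frequencies n*f1 +/- m*f2 (folded to
    positive values) for two stimulus tones f1 < f2, up to a small order."""
    freqs = set()
    for n in range(1, max_order + 1):
        for m in range(1, max_order + 1):
            freqs.add(n * f1 + m * f2)
            diff = abs(n * f1 - m * f2)
            if diff > 0:
                freqs.add(diff)
    return sorted(freqs)

# For f1 = 1000 Hz, f2 = 1200 Hz the set includes 800 Hz (= 2*f1 - f2),
# the combination most commonly measured clinically.
print(distortion_products(1000, 1200))
```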
[0008]
However, there has conventionally been no technique that exploits this mechanism of otoacoustic emission to enhance the sense of presence in sound reproduction with, for example, an HMD.
[0009]
The present invention has been made in view of these problems, and its object is to provide a sound reproduction apparatus, a sound reproduction method, and a program for reproducing three-dimensional sound with an enhanced sense of presence using the principle of otoacoustic emission.
[0010]
In order to solve the above problem, a sound reproduction apparatus according to a first aspect of the present invention comprises first otoacoustic emission processing means for adding the effects of evoked otoacoustic emission and distortion-product otoacoustic emission to input audio data, and audio output means for converting the audio data processed by the first otoacoustic emission processing means into an audio signal and outputting the audio signal.
[0011]
A sound reproduction apparatus according to a second aspect of the present invention is the apparatus of the first aspect, further comprising head-related transfer adjustment processing means for adjusting, based on a predetermined head-related transfer function, the transfer delay of the sound to the head for the audio data processed by the first otoacoustic emission processing means, wherein the audio output means converts the audio data processed by the head-related transfer adjustment processing means into an audio signal and outputs the audio signal.
[0012]
A sound reproduction apparatus according to a third aspect of the present invention is the apparatus of the second aspect, further comprising second otoacoustic emission processing means for adding the effect of spontaneous otoacoustic emission to the audio data processed by the first otoacoustic emission processing means, wherein the audio output means converts the audio data processed by the second otoacoustic emission processing means into an audio signal and outputs the audio signal.
[0013]
In a sound reproduction apparatus according to a fourth aspect of the present invention, in any of the first to third aspects, the first otoacoustic emission processing means adjusts the volume of a predetermined frequency band of the audio data based on distance data to a real target, adjusts the sound pressure based on the distance data, and adds a 10 ms delay effect.
[0014]
In a sound reproduction apparatus according to a fifth aspect of the present invention, in the fourth aspect, the first otoacoustic emission processing means adjusts the amplitude by a predetermined amount based on the distance data after adjusting the volume, thereby compensating for and amplifying the overall drop in volume.
[0015]
In a sound reproduction apparatus according to a sixth aspect of the present invention, in the fifth aspect, the first otoacoustic emission processing means adjusts the volume of a predetermined frequency band based on heart rate data to heighten the psychological effect.
[0016]
In a sound reproduction apparatus according to a seventh aspect of the present invention, in the third aspect, the second otoacoustic emission processing means, after adding the effect of spontaneous otoacoustic emission, further adds a sampling sound based on the latent memory of the user.
[0017]
A sound reproduction method according to an eighth aspect of the present invention comprises a first step of receiving input of audio data; a second step of performing first otoacoustic emission processing that adds the effects of evoked otoacoustic emission and distortion-product otoacoustic emission to the input audio data; and a third step of converting the audio data subjected to the first otoacoustic emission processing into an audio signal and outputting the audio signal.
[0018]
A sound reproduction method according to a ninth aspect of the present invention is the method of the eighth aspect, further comprising a fourth step of performing head-related transfer adjustment processing that adjusts, based on a predetermined head-related transfer function, the transfer delay of the sound to the user's head for the audio data after the first otoacoustic emission processing in the second step, wherein in the third step, the audio data after the head-related transfer adjustment processing in the fourth step is converted into an audio signal and output.
[0019]
A sound reproduction method according to a tenth aspect of the present invention is the method of the ninth aspect, further comprising a fifth step of performing second otoacoustic emission processing that adds the effect of spontaneous otoacoustic emission to the audio data after the first otoacoustic emission processing in the second step, wherein in the third step, the audio data after the second otoacoustic emission processing in the fifth step is converted into an audio signal and output.
[0020]
A sound reproduction method according to an eleventh aspect of the present invention is, in any of the eighth to tenth aspects, a method wherein, in the first otoacoustic emission processing in the second step, the volume of a predetermined frequency band of the audio data is adjusted based on distance data to a real target, the sound pressure is adjusted based on the distance data, and a 10 ms delay effect is added.
[0021]
A program according to a twelfth aspect of the present invention causes a computer to function as first otoacoustic emission processing means for adding the effects of evoked otoacoustic emission and distortion-product otoacoustic emission to input audio data, and as audio output means for converting the audio data processed by the first otoacoustic emission processing means into an audio signal and outputting the audio signal, wherein the first otoacoustic emission processing means adjusts the volume of a predetermined frequency band of the audio data based on distance data to a sound source, adjusts the sound pressure based on the distance data, and adds a 10 ms delay effect.
[0022]
According to the present invention, it is possible to provide a sound reproduction apparatus, a sound reproduction method, and a program for reproducing three-dimensional sound with an enhanced sense of presence using the principle of otoacoustic emission.
[0023]
A block diagram of a sound reproduction apparatus according to a first embodiment of the present invention.
A detailed block diagram of the first otoacoustic emission processing unit of the sound reproduction apparatus according to the first embodiment of the present invention.
A flowchart showing the processing procedure of sound reproduction by the sound reproduction apparatus according to the first embodiment of the present invention.
A flowchart showing the detailed processing procedure of the first otoacoustic emission processing of FIG. 3.
(a) is a characteristic diagram of input audio data, and (b) is a characteristic diagram of output audio data.
A block diagram of the first otoacoustic emission processing unit of a sound reproduction apparatus according to a second embodiment of the present invention.
A flowchart showing the detailed processing procedure of the first otoacoustic emission processing by the sound reproduction apparatus according to the second embodiment of the present invention.
A block diagram of the second otoacoustic emission processing unit of a sound reproduction apparatus according to a third embodiment of the present invention.
A flowchart showing the detailed processing procedure of the second otoacoustic emission processing by the sound reproduction apparatus according to the third embodiment of the present invention.
[0024]
Hereinafter, embodiments of the present invention will be described with reference to the
drawings.
The sound reproduction apparatus according to the embodiments of the present invention is used in, for example, a head-mounted display (HMD) or headphones, and reproduces three-dimensional sound.
[0025]
First Embodiment
[0026]
FIG. 1 shows the configuration of the sound reproduction apparatus according to the first embodiment of the present invention, which will now be described.
[0027]
As shown in the figure, the sound reproduction apparatus 1 is implemented by a computer and includes a control unit 2 comprising a CPU (Central Processing Unit) and the like.
A sound source 3 that outputs a sound source signal is connected to the control unit 2 either directly or via an A/D converter 4.
When the sound source signal is an analog signal, it is converted into a digital signal by the A/D converter 4 and then input to the control unit 2; when it is a digital signal, it is input to the control unit 2 directly.
[0028]
The sound source 3 outputs an audio signal for left and right stereo sound, and may be a storage medium (HDD, RAM, etc.) provided in the computer, an external storage medium (optical disc, USB memory, etc.), or, of course, a sound source acquired via a communication environment such as the Internet.
[0029]
Further, an acceleration sensor 5 is connected to the control unit 2 via an A/D converter 6, a gyro sensor 7 via an A/D converter 8, a distance sensor 9 via an A/D converter 10, and a geomagnetic sensor 16 via an A/D converter 17.
A heart rate sensor 18 is also connected to the control unit 2.
The control unit 2 is further connected to an input unit 11 comprising input devices such as a keyboard and a mouse.
Furthermore, a storage unit 15 is connected to the control unit 2.
The storage unit 15 stores a program 19 to be executed by the control unit 2, and a database (hereinafter, DB) 15a of head-related transfer functions is also logically constructed in it.
[0030]
The control unit 2 reads out and executes the program 19 in the storage unit 15, and thereby functions as a main control unit 2a, a noise reduction processing unit (noise reduction) 2b, a reverberation invalidation processing unit (reverb reduction) 2c, a frequency averaging processing unit (graphic equalizer) 2d, a first otoacoustic emission processing unit 2e, a head-related transfer adjustment processing unit 2f, a reverberation adjustment processing unit (reverb) 2g, and a second otoacoustic emission processing unit 2h.
[0031]
The output of the control unit 2 is connected to an audio output unit 14R via a D/A converter 12, and to an audio output unit 14L via a D/A converter 13.
[0032]
In this configuration, the acceleration sensor 5 is mounted on, for example, the HMD or headphones and detects the acceleration of the user's head through 360 degrees; the acceleration signal is converted into digital acceleration data by the A/D converter 6 and then input to the control unit 2.
In the control unit 2, the main control unit 2a calculates the movement direction and movement amount of the head based on the acceleration data.
[0033]
The gyro sensor 7 is mounted on, for example, the HMD or headphones and detects rotation angles about the longitudinal and lateral axes of the user's head; the angular velocity signal is converted into digital angular velocity data by the A/D converter 8 and then input to the control unit 2.
In the control unit 2, the main control unit 2a calculates the rotation angle of the head based on the angular velocity data.
The acceleration sensor 5 and the gyro sensor 7 can be used selectively to detect the rotation of the user's head, and only one of them may be mounted.
[0034]
The distance sensor 9 measures the distance to a real target, and its sensor signal is converted into distance data by the A/D converter 10 and then input to the control unit 2.
The distance data is used in various processes described later to enhance the sense of presence.
As the distance sensor 9, various sensors such as an infrared sensor, an ultrasonic sensor, a range-finding sensor, a laser sensor, or a sound wave sensor can be used.
Of course, when the distance sensor 9 is not provided, the data may be input from the input unit 11.
Here, a real target is an entity that generates sound or the like; for example, in the case of a concert venue, the musicians on the stage are the real target.
[0035]
The geomagnetic sensor 16 outputs azimuth data, which is input to the control unit 2 via the A/D converter 17.
The azimuth data is used to recognize the direction of movement of the user's head. By using the geomagnetic sensor 16 in combination with the above-described gyro sensor 7, the azimuth and angular velocity about three axes can be obtained.
[0036]
The heart rate sensor 18 outputs heart rate data related to the user's heart rate, and the output heart rate data is input to the control unit 2.
[0037]
The input unit 11 is used to input various setting data.
As setting data, ambience data can be input. The ambience data determines how much reverberation is to be added; preset data may be selected according to the size of the space or the like, and the user may also fine-tune the preset data. Distance data may also be input from the input unit 11. By processing the left and right stereo sound as described later based on the sensor outputs described above and the input data from the input unit 11, stereophonic sound is generated.
[0038]
In the control unit 2, each unit operates as follows. The noise reduction processing unit 2b performs noise reduction on the audio data of the input stereo sound. The reverberation invalidation processing unit 2c then removes any reverberation component contained in the audio data of the stereo sound. The frequency averaging processing unit 2d changes the frequency characteristics of the audio data and averages the overall sound quality; that is, prominent parts of the audio data are lowered and weak parts are raised so that the sound is averaged in frequency as a whole. This amounts to averaging the sound pressure by frequency.
[0039]
Subsequently, the first otoacoustic emission processing unit 2e adds the effects of evoked otoacoustic emission (TEOAE) and distortion-product otoacoustic emission (DPOAE) to the audio data of the stereo sound.
[0040]
More specifically, as shown in FIG. 2, the first otoacoustic emission processing unit 2e includes a frequency adjustment processing unit (parametric equalizer) 20, a sound pressure adjustment processing unit (compressor) 21, an amplitude adjustment processing unit (amplifier) 22, and a delay adjustment processing unit (delay) 23.
[0041]
In the first otoacoustic emission processing unit 2e, the frequency adjustment processing unit 20 adjusts the volume of a predetermined frequency band (5.28 Hz to 20 kHz) of the audio data of the stereo sound based on the distance data.
This adds the effect of evoked otoacoustic emission: based on the distance data, processing is performed such that the volume increases as the distance decreases.
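The behavior described above (louder as the distance decreases) could be modeled, for instance, by a simple inverse-distance gain; the function and its clamping at a reference distance are illustrative assumptions, not taken from the patent:

```python
def distance_gain(distance_m, ref_m=1.0):
    """Gain applied to the band volume: 1.0 at or inside the reference
    distance, falling off inversely with distance beyond it."""
    return ref_m / max(distance_m, ref_m)

print(distance_gain(0.5), distance_gain(1.0), distance_gain(4.0))
```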
[0042]
The sound pressure adjustment processing unit 21 adjusts the sound pressure of the audio data of the stereo sound based on the distance data.
For example, when the volume exceeds a threshold, the excess volume is suppressed by a set compression ratio and released within a set time, thereby reducing the maximum value of the changing volume. This compresses the dynamic range between the maximum and minimum volume. This adds the effect of evoked otoacoustic emission: based on the distance data, the sound pressure adjustment processing unit 21 lowers the sound pressure as the distance increases and raises it as the distance decreases.
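A minimal sketch of the compressor behavior described above (the threshold, ratio, and the omission of attack/release smoothing are assumptions):

```python
import numpy as np

def compress(samples, threshold=0.5, ratio=4.0):
    """Static compressor: the part of each sample's magnitude above the
    threshold is divided by the ratio; the sign is preserved."""
    out = np.asarray(samples, dtype=float).copy()
    over = np.abs(out) > threshold
    excess = np.abs(out[over]) - threshold
    out[over] = np.sign(out[over]) * (threshold + excess / ratio)
    return out

print(compress([0.2, 0.9, -1.0]))  # quiet samples pass, loud ones shrink
```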
[0043]
The amplitude adjustment processing unit 22 adjusts the amplitude, in this example by 10 dB to 20 dB, based on the distance data. Because the sound pressure adjustment reduces the volume as a whole, this reduction is compensated for by amplification. This adds the effects of both evoked otoacoustic emission and distortion-product otoacoustic emission. The amplitude adjustment processing unit 22 is an optional component.
[0044]
The delay adjustment processing unit 23 adds a 10 ms delay effect to the audio data of the stereo sound. As described above, an evoked otoacoustic emission is an acoustic response in which a signal is detected with a delay of about 10 ms after the stimulus of the input sound, and this processing artificially realizes that effect.
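The 10 ms delay effect could be realized, for example, by mixing a delayed copy of the signal back into itself; the sample rate and mix level below are assumptions:

```python
import numpy as np

def add_delay_effect(samples, sr=44100, delay_ms=10.0, mix=0.5):
    """Mix a copy delayed by ~10 ms into the signal, mimicking the TEOAE
    response latency; the tail is zero-padded so nothing is truncated."""
    samples = np.asarray(samples, dtype=float)
    d = int(round(sr * delay_ms / 1000.0))  # 441 samples at 44.1 kHz
    out = np.concatenate([samples, np.zeros(d)])
    out[d:] += mix * samples
    return out
```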
[0045]
Returning to FIG. 1, the head-related transfer adjustment processing unit 2f then adjusts the transfer delay of the sound to the head based on the head-related transfer function. Here, a head-related transfer function (HRTF) is a transfer function representing the change in sound caused by surrounding objects, including the auricle, the head, and the shoulders. In this example, pairs for the right ear (R) and the left ear (L) are held in table form in the DB 15a of the storage unit 15. The right ear and the left ear are treated separately because the arrival time of the sound differs between left and right depending on the position of the head. The head-related transfer adjustment processing unit 2f refers to the table and achieves the adaptation by performing a convolution operation on the audio data, filtering it with the head-related transfer function corresponding to the depth of the pinna.
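The convolution step might be sketched as follows; the toy impulse responses are invented placeholders, not measured head-related data:

```python
import numpy as np

def apply_hrtf(mono, hrir_left, hrir_right):
    """Filter a mono signal with a left/right head-related impulse
    response pair, yielding the binaural left and right channels."""
    return np.convolve(mono, hrir_left), np.convolve(mono, hrir_right)

# Toy HRIRs: the right ear hears this source two samples later and quieter.
left, right = apply_hrtf(np.array([1.0, 0.5]),
                         np.array([1.0, 0.0, 0.0]),
                         np.array([0.0, 0.0, 0.7]))
```

In practice the HRIR pair would be looked up from the table in DB 15a according to the head position rather than hard-coded.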
[0046]
The reverberation adjustment processing unit 2g adds reverberation suitable for the space designated by the preset to the audio data. This reflects, for example, sound bouncing off boundaries in the space, and the amount of reverberation to be added differs depending on the size of the space. Also, the smaller the time difference between the left and right audio data, the larger the space assumed for the reverberation.
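A crude stand-in for such reverberation is a multi-tap decaying echo; the delay spacing, decay factor, and tap count below are assumptions (a larger space would use a longer delay):

```python
import numpy as np

def add_reverb(samples, sr=44100, delay_ms=50.0, decay=0.4, taps=3):
    """Add `taps` progressively quieter echoes spaced `delay_ms` apart."""
    samples = np.asarray(samples, dtype=float)
    d = int(round(sr * delay_ms / 1000.0))
    out = np.concatenate([samples, np.zeros(d * taps)])
    for k in range(1, taps + 1):
        out[k * d : k * d + len(samples)] += (decay ** k) * samples
    return out
```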
[0047]
The second otoacoustic emission processing unit 2h adds the effect of spontaneous otoacoustic emission (SOAE) to the audio data: a sampling sound or colored noise is added in the 1 kHz to 2 kHz frequency band of the audio data. Which sampling sound or colored noise to add may be selected at the preset stage.
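The addition of band-limited noise in the 1 kHz to 2 kHz band could be sketched with FFT masking; only the band follows the text, while the synthesis method, level, and seed are assumptions:

```python
import numpy as np

def add_soae_noise(samples, sr=44100, level=0.01, seed=0):
    """Add low-level noise restricted to roughly 1-2 kHz by zeroing all
    spectral bins of a white-noise buffer outside that band."""
    samples = np.asarray(samples, dtype=float)
    noise = np.random.default_rng(seed).standard_normal(len(samples))
    spectrum = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sr)
    spectrum[(freqs < 1000.0) | (freqs > 2000.0)] = 0.0
    band_noise = np.fft.irfft(spectrum, n=len(samples))
    return samples + level * band_noise
```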
[0048]
The audio data of the three-dimensional sound generated by the above processing is converted into an analog signal via the D/A converter 12 for the right ear and output from the audio output unit 14R, and is converted into an analog signal via the D/A converter 13 for the left ear and output from the audio output unit 14L. In this way, the three-dimensional sound is reproduced.
[0049]
Hereinafter, the processing procedure of sound reproduction by the sound reproduction apparatus according to the first embodiment will be described with reference to the flowchart of FIG. 3. This procedure also corresponds to the sound reproduction method.
[0050]
When the audio signal of the stereo sound from the sound source 3 is converted into audio data by the A/D converter 4 and input, and various setting information is input from the input unit 11 (S1), the noise reduction processing unit 2b performs noise reduction on the audio data of the input stereo sound (S2). The reverberation invalidation processing unit 2c then removes any reverberation component contained in the audio data of the stereo sound (S3). Subsequently, the frequency averaging processing unit 2d changes the frequency characteristics of the audio data and averages the overall sound quality (S4).
[0051]
Subsequently, the first otoacoustic emission processing unit 2e, whose operation will be described later in detail with reference to FIG. 4, adds the effects of evoked otoacoustic emission (TEOAE) and distortion-product otoacoustic emission (DPOAE) to the audio data (S5).
[0052]
Subsequently, the head-related transfer adjustment processing unit 2f adjusts the transfer delay of the sound to the head based on the head-related transfer function (S6).
More specifically, the adaptation is achieved by referring to the table and performing a convolution operation on the audio data, filtering it with the head-related transfer function corresponding to the depth of the pinna.
[0053]
The reverberation adjustment processing unit 2g adds reverberation suitable for the space designated by the preset to the audio data (S7). Then, the second otoacoustic emission processing unit 2h adds the effect of spontaneous otoacoustic emission (SOAE) to the audio data (S8). More specifically, a sampling sound or colored noise is added in the 1 kHz to 2 kHz frequency band of the audio data.
[0054]
Thereafter, the audio data is converted into an analog signal via the D/A converter 12 for the right ear and output from the audio output unit 14R, and is converted into an analog signal via the D/A converter 13 for the left ear and output from the audio output unit 14L (S9). This concludes the series of processes for reproducing the three-dimensional sound.
[0055]
Here, the detailed processing procedure of the first otoacoustic emission processing performed in step S5 of FIG. 3 is shown in the flowchart of FIG. 4 and will now be described.
[0056]
First, the frequency adjustment processing unit 20 adjusts the volume of a predetermined frequency band (5.28 Hz to 20 kHz) of the audio data of the stereo sound based on the distance data.
This adds the effect of evoked otoacoustic emission: based on the distance data, processing is performed such that the volume increases as the distance decreases (S11).
[0057]
Subsequently, the sound pressure adjustment processing unit 21 adjusts the sound pressure of the audio data of the stereo sound based on the distance data (S12). That is, based on the distance data, the sound pressure adjustment processing unit 21 reduces the sound pressure as the distance increases and raises it as the distance decreases, so that the effect of evoked otoacoustic emission is artificially reflected in the audio data.
[0058]
Next, the amplitude adjustment processing unit 22 compensates for and amplifies the overall drop in volume by adjusting the amplitude, for example by 10 dB to 20 dB, based on the distance data (S13). That is, the effects of both evoked otoacoustic emission and distortion-product otoacoustic emission are reflected in the audio data.
[0059]
Then, the delay adjustment processing unit 23 adds a 10 ms delay effect to the audio data of the stereo sound (S14). This reflects in the audio data the acoustic response of evoked otoacoustic emission, in which a signal is detected with a delay of about 10 ms after the stimulus of the input sound. The procedure then returns to step S6 and onward in FIG. 3.
[0060]
Here, FIG. 5(a) is a characteristic diagram of the input audio data, and FIG. 5(b) is a characteristic diagram of the output audio data. When the physical parameters related to the OAE are dynamically changed with respect to the input audio data shown in FIG. 5(a), audio data as shown in FIG. 5(b) is output. Compared with binaurally recorded sound, it becomes possible to perceive a more realistic three-dimensional sound.
[0061]
As described above, according to the first embodiment of the present invention, it is possible to reproduce three-dimensional sound with a sense of presence by artificially reflecting the effects of otoacoustic emission in the input audio data. In addition, since the effects of evoked otoacoustic emission (TEOAE), spontaneous otoacoustic emission (SOAE), and distortion-product otoacoustic emission (DPOAE) can all be experienced, the sense of presence is further enhanced. Moreover, in addition to the otoacoustic emission effects, adjustment based on the head-related transfer function is also performed, so even more realistic stereophonic reproduction is realized.
[0062]
Second Embodiment
[0063]
The sound reproduction apparatus according to the second embodiment of the present invention differs from the first embodiment in the configuration of the first otoacoustic emission processing unit 2e.
As will be described in detail later, frequency adjustment processing is performed using heart rate data based on psychoacoustics. The rest of the configuration is the same as in the first embodiment, so in the following, components identical to those in FIGS. 1 and 2 are denoted by the same reference numerals, and the description focuses on the differences.
[0064]
FIG. 6 shows the detailed configuration of the first otoacoustic emission processing unit 2e of the control unit 2 of the sound reproduction apparatus according to the second embodiment of the present invention, which will now be described.
[0065]
As shown in the figure, the first otoacoustic emission processing unit 2e includes a first frequency adjustment processing unit (parametric equalizer) 30, a second frequency adjustment processing unit (parametric equalizer) 31, a sound pressure adjustment processing unit (compressor) 32, an amplitude adjustment processing unit (amplifier) 33, and a delay adjustment processing unit (delay) 34.
[0066]
In the first otoacoustic emission processing unit 2e, the first frequency adjustment processing unit 30 adjusts the volume of a predetermined frequency band (500 Hz to 20 kHz) of the audio data of the stereo sound based on the distance data.
This adds the effect of evoked otoacoustic emission: based on the distance data, processing is performed such that the volume increases as the distance decreases.
[0067]
The second frequency adjustment processing unit 31 adjusts the volume of a predetermined frequency band (400 Hz to 10 kHz) of the audio data of the stereo sound based on heart rate data, following the concept of psychoacoustics.
That is, the heart rate data is compared with a reference value; when the heart rate spikes, a psychological change in the user is recognized, and the volume of the above-mentioned frequency band is raised further. In psychoacoustics, the physical parameters of sound are changed to produce various psychological effects on human perception of sound.
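One possible sketch of such a heart-rate-driven boost, where the reference rate, slope, and cap are all illustrative assumptions:

```python
def heart_rate_boost_db(bpm, resting_bpm=70.0, db_per_bpm=0.1, max_db=6.0):
    """Extra gain, in dB, for the 400 Hz - 10 kHz band: zero at or below
    the resting rate, growing linearly and capped at max_db."""
    return min(max_db, max(0.0, (bpm - resting_bpm) * db_per_bpm))

print(heart_rate_boost_db(65), heart_rate_boost_db(100), heart_rate_boost_db(180))
```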
[0068]
The sound pressure adjustment processing unit 32 adjusts the sound pressure of the audio data of the stereo sound based on the distance data. For example, when the volume exceeds a threshold, the excess volume is suppressed by a set compression ratio and released within a set time, thereby reducing the maximum value of the changing volume. This compresses the dynamic range between the maximum and minimum volume. This adds the effect of evoked otoacoustic emission: based on the distance data, the sound pressure adjustment processing unit 32 lowers the sound pressure as the distance increases and raises it as the distance decreases.
[0069]
The amplitude adjustment processing unit 33 adjusts the amplitude, in this example by 10 dB to 20 dB, based on the distance data. Because the sound pressure adjustment reduces the volume as a whole, this reduction is compensated for by amplification. This adds the effects of both evoked otoacoustic emission and distortion-product otoacoustic emission. The amplitude adjustment processing unit 33 is an optional component.
[0070]
The delay adjustment processing unit 34 adds a 10 ms delay effect to the audio data relating to the stereo sound. As described above, evoked otoacoustic emission is an acoustic response in which a signal is detected with a delay of about 10 ms relative to the input sound stimulus; this unit artificially realizes that effect.
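The 10 ms delay effect can be sketched as simply prepending silence so that the output lags the input, mimicking the approximately 10 ms latency of the evoked-otoacoustic-emission response described above. The function name is an assumption; the patent does not say whether the delayed signal replaces or is mixed with the original, so the simpler replacement form is shown.

```python
import numpy as np

def add_oae_delay(samples, rate, delay_s=0.010):
    """Prepend `delay_s` seconds of silence so the output lags the input
    by about 10 ms, mimicking the evoked-otoacoustic-emission latency."""
    pad = int(round(delay_s * rate))
    return np.concatenate([np.zeros(pad, dtype=float),
                           np.asarray(samples, dtype=float)])
```

At a 44.1 KHz sampling rate this inserts 441 zero samples before the audio data.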
[0071]
The processing procedure of the sound reproduction apparatus according to the second embodiment is substantially the same as that of FIG. 3; only the first ear acoustic radiation processing in step S5 of FIG. 3 differs. Therefore, only the processing procedure of the first ear acoustic radiation processing according to this embodiment will be described.
[0072]
First, the first frequency adjustment processing unit 30 adjusts the volume of a predetermined frequency band (5.28 Hz to 20 KHz) of the audio data relating to the stereo sound based on the distance data.
This adds the effect of evoked otoacoustic emission: based on the distance data, processing is performed such that the volume increases as the distance decreases (S21).
[0073]
Subsequently, the second frequency adjustment processing unit 31 adjusts the volume of a predetermined frequency band (400 Hz to 10 KHz) of the audio data relating to the stereo sound based on the heart rate data, in accordance with the concept of psychoacoustics (S22).
[0074]
Then, the sound pressure adjustment unit 32 adjusts the sound pressure of the audio data relating to the stereo sound based on the distance data (S23).
That is, based on the distance data, the sound pressure adjustment unit 32 reduces the sound pressure as the distance increases and raises it as the distance decreases, so that the effect of evoked otoacoustic emission is artificially reflected in the audio data.
[0075]
Next, the amplitude adjustment processing unit 33 compensates for the overall drop in volume by amplifying the amplitude, for example by 10 dB to 20 dB, based on the distance data (S24). That is, the effects of both evoked otoacoustic emission and distortion component otoacoustic emission are reflected in the audio data.
[0076]
Then, the delay adjustment processing unit 34 adds a 10 ms delay effect to the audio data relating to the stereo sound (S25). This reflects in the audio data the acoustic response of evoked otoacoustic emission, in which a signal is detected with a delay of about 10 ms relative to the input sound stimulus. The process then returns to step S6 and the subsequent steps in FIG. 3.
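Steps S21 to S25 can be gathered into one compact sketch of the processing chain. Everything quantitative here is an assumption: the patent states only the direction of each adjustment, so the gain laws, the 10 bpm heart-rate edge threshold, the 15 dB make-up gain, and the inverse-distance scaling are illustrative stand-ins.

```python
import numpy as np

def first_oae_processing(samples, rate, distance_m, heart_rate, ref_bpm=70.0):
    """Sketch of steps S21-S25: distance-driven band gain, heart-rate-driven
    band gain, distance-based sound pressure, make-up amplification, and a
    10 ms delay.  Parameter values are assumptions, not from the patent."""
    x = np.asarray(samples, dtype=float)

    def band_gain(sig, lo, hi, gain_db):
        # Apply a flat gain to one frequency band via the real FFT.
        spec = np.fft.rfft(sig)
        f = np.fft.rfftfreq(len(sig), d=1.0 / rate)
        spec[(f >= lo) & (f <= hi)] *= 10.0 ** (gain_db / 20.0)
        return np.fft.irfft(spec, n=len(sig))

    # S21: 5.28 Hz-20 kHz band, louder as the distance closes.
    x = band_gain(x, 5.28, 20_000.0, 6.0 / max(distance_m, 1.0))
    # S22: psychoacoustic boost of 400 Hz-10 kHz on a heart-rate edge.
    if abs(heart_rate - ref_bpm) > 10.0:
        x = band_gain(x, 400.0, 10_000.0, 3.0)
    # S23: lower the sound pressure as the distance increases.
    x = x / max(distance_m, 1.0)
    # S24: make-up amplification (10-20 dB range; 15 dB assumed here).
    x = x * 10.0 ** (15.0 / 20.0) if distance_m > 1.0 else x
    # S25: 10 ms evoked-otoacoustic-emission delay.
    pad = int(round(0.010 * rate))
    return np.concatenate([np.zeros(pad), x])
```

The ordering follows the text exactly: frequency adjustments first, then sound pressure, then amplitude compensation, then the delay.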
[0077]
As described above, according to the second embodiment of the present invention, in addition to
the effects of the first embodiment described above, it is possible to provide an acoustic effect
based on psychoacoustics.
[0078]
Third Embodiment
[0079]
The sound reproduction apparatus according to the third embodiment of the present invention differs from the first embodiment in the configuration of the second ear acoustic radiation processing unit 2h.
Although the details will be described later, adding a latent memory sound enhances the sense of reality.
The other configuration is the same as that of the first embodiment. Therefore, in the following, the same components as those in FIGS. 1 and 2 are denoted by the same reference numerals, and mainly the differing components will be described. Here, the latent memory sound refers to a universal environmental sound or the like that is normally masked in hearing, and it is added as a sampling sound. This promotes the priming effect in the perception of sound and thereby promotes psychological changes.
[0080]
The detailed structure of the second ear acoustic radiation processing unit 2h of the control unit 2 of the sound reproduction apparatus according to the third embodiment of the present invention will now be described with reference to the figure.
[0081]
As shown in the figure, the second ear acoustic radiation processing unit 2 h includes a
spontaneous ear acoustic radiation processing unit 40 and a latent memory sound addition
processing unit 41.
[0082]
In this configuration, in the second ear acoustic radiation processing unit 2h, the spontaneous otoacoustic emission processing unit 40 adds the effect of spontaneous otoacoustic emission (SOAE) to the audio data.
A sampling sound or colored noise is added to the 1 KHz to 2 KHz frequency band of the audio data. Which sampling sound or colored noise to add may be selected at the presetting stage. Then, the latent memory sound addition processing unit 41 adds a sampling sound based on latent memory. For example, on an open-air stage on a plateau, a listener enjoys the music of the musicians on stage while subconsciously hearing the fluttering of the wind; adding such a latently perceived sound as a sampling sound can enhance the sense of reality.
[0083]
The processing procedure of the sound reproduction apparatus according to the third embodiment is substantially the same as that of FIG. 3; only the second ear acoustic radiation processing in step S8 of FIG. 3 differs. Therefore, only the processing procedure of the second ear acoustic radiation processing according to this embodiment will be described.
[0084]
In this processing, first, the spontaneous otoacoustic emission processing unit 40 adds the effect of spontaneous otoacoustic emission (SOAE) to the audio data (S31).
Then, the latent memory sound addition processing unit 41 adds a sampling sound based on latent memory (S32). The process then returns to step S9 and the subsequent steps in FIG. 3.
[0085]
As described above, according to the third embodiment of the present invention, in addition to the effects of the first embodiment, the sense of presence can be further enhanced by adding the latent memory sound.
[0086]
Although the embodiments of the present invention have been described above, it goes without saying that various improvements and modifications can be made to the present invention without departing from its scope.
For example, the present invention can also be realized as an apparatus in which the second embodiment and the third embodiment are combined.
[0087]
DESCRIPTION OF SYMBOLS 1 ... sound reproduction apparatus, 2 ... control unit, 2a ... main control unit, 2b ... noise reduction processing unit, 2r ... reverberation sound invalidation processing unit, 2d ... frequency averaging processing unit, 2e ... first ear acoustic radiation processing unit, 2f ... head-related transfer adjustment processing unit, 2g ... reverberation sound adjustment processing unit, 2h ... second ear acoustic radiation processing unit, 3 ... sound source, 4 ... A/D conversion unit, 5 ... acceleration sensor, 6 ... A/D conversion unit, 7 ... gyro sensor, 8 ... A/D conversion unit, 9 ... distance sensor, 10 ... A/D conversion unit, 11 ... input unit, 12 ... D/A conversion unit, 13 ... D/A conversion unit, 14L, 14R ... audio output unit, 15 ... storage unit, 16 ... geomagnetic sensor, 17 ... A/D conversion unit, 18 ... heart rate sensor, 19 ... program, 20 ... frequency adjustment processing unit, 21 ... sound pressure adjustment processing unit, 22 ... amplitude adjustment processing unit, 23 ... delay adjustment processing unit, 30 ... first frequency adjustment processing unit, 31 ... second frequency adjustment processing unit, 32 ... sound pressure adjustment unit, 33 ... amplitude adjustment processing unit, 34 ... delay adjustment processing unit, 40 ... spontaneous otoacoustic emission processing unit, 41 ... latent memory sound addition processing unit.