Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2015198297
The present invention provides an acoustic control device and an acoustic control method that make it easy to hear an information sound while listening to a listening sound through earphones. According to one embodiment, the acoustic control apparatus includes an acquisition unit that acquires a first acoustic signal; a detection unit that detects an information sound; a correction unit that, when the information sound is detected, corrects the first acoustic signal into a second acoustic signal by performing a convolution operation on the first acoustic signal based on a first function indicating the acoustic transfer characteristic from a virtual position, located in a first direction with respect to the listening position, to the listening position; and an output unit that outputs the second acoustic signal. [Selected figure] Figure 1
Acoustic control device, electronic device and acoustic control method
[0001]
Embodiments of the present invention relate to an acoustic control device, an electronic device,
and an acoustic control method.
[0002]
Devices such as earphones and headphones (hereinafter simply referred to as earphones) may be
worn to listen to music.
11-04-2019
1
When the user wears earphones and listens to music, external sounds are shut out, including sounds needed as external information (hereinafter referred to as information sounds). Here, an information sound is, for example, a call from a nearby person, a guide sound for guidance, or a warning sound (for example, a horn from a car). Therefore, when listening to music with earphones on, it is desirable not to miss information sounds, from the viewpoint of danger prevention and auditory support, even while external sound is blocked by the earphones.
[0003]
On the other hand, there are acoustic control devices that amplify the information sound picked up by a microphone built into the earphone and present it to the listener. However, because sound in the city contains a high level of background noise, the amplified background noise is superimposed on the output, which can make it difficult to hear the music (listening sound) the user wants to listen to.
[0004]
JP 2004-201195 A
[0005]
Therefore, the problem to be solved by the present invention is to provide an acoustic control device, an electronic device, and an acoustic control method that make it easy to hear an information sound while listening to a listening sound through earphones.
[0006]
The acoustic control device according to the embodiment includes an acquisition unit that acquires a first acoustic signal; a detection unit that detects an information sound; a correction unit that, when the detection unit detects an information sound, corrects the first acoustic signal into a second acoustic signal by performing a convolution operation on the first acoustic signal based on a first function indicating the acoustic transfer characteristic from a virtual position, located in a first direction with respect to the listening position, to the listening position; and an output unit that outputs the second acoustic signal.
[0007]
The acoustic control method according to the embodiment acquires a first acoustic signal of a listening sound; detects an information sound; when the information sound is detected, corrects the first acoustic signal into a second acoustic signal by performing a convolution operation on the first acoustic signal based on a first function indicating the acoustic transfer characteristic from a virtual position, located in a first direction with respect to the listening position, to the listening position; and outputs the second acoustic signal.
[0008]
FIG. 1 is a block diagram showing an acoustic control device according to a first embodiment.
FIG. 2 is a flowchart showing an acoustic control method according to the first embodiment.
FIG. 3 is a diagram explaining the acoustic transfer characteristics according to the first embodiment.
FIG. 4 is a diagram showing the results of the subjective evaluation according to the first embodiment.
FIG. 5 is a diagram showing the results of the IACF analysis according to the first embodiment.
FIG. 6 is a block diagram showing an acoustic control device according to a second embodiment.
FIG. 7 is a flowchart showing an acoustic control method according to the second embodiment.
FIG. 8 is a block diagram showing an electronic device provided with an acoustic control device according to each embodiment.
[0009]
Hereinafter, embodiments of the present invention will be described with reference to the
drawings.
[0010]
First Embodiment FIG. 1 is a block diagram of an acoustic control device 100 according to a first
embodiment.
The acoustic control apparatus 100 is used in an electronic device 1000, such as a PC, mobile phone, tablet terminal, music player, TV, or radio, on which music or voice (hereinafter, the listening sound) can be heard through earphones. Earphones can be connected to the acoustic control device 100 by wire, or wirelessly, via an earphone jack (not shown).
[0011]
The acoustic control apparatus 100 of FIG. 1 includes an acquisition unit 10 that acquires the acoustic signal (first acoustic signal) of the listening sound, a detection unit 20 that detects the information sound, and a correction unit 30 that, when the detection unit 20 detects the information sound, corrects the acoustic signal so as to localize the sound image of the listening sound in a predetermined direction. It further includes an output unit 40 that outputs the acoustic signal corrected by the correction unit 30 to the earphones. The correction unit 30 corrects the acoustic signal using a plurality of acoustic transfer characteristics stored in advance in a storage unit 50.
[0012]
The storage unit 50 is a recording medium such as a memory or an HDD. Each process of the acquisition unit 10, the detection unit 20, and the correction unit 30 is executed by an arithmetic processing device such as a CPU, based on a program stored in a recording medium (for example, the storage unit 50).
[0013]
The acquisition unit 10 acquires an acoustic signal (for example, a monaural signal). Various methods are possible for the acquisition unit 10 to acquire the acoustic signal. For example, it can acquire content including an audio signal (e.g., content consisting only of an audio signal, or content in which the audio signal accompanies a moving image or still image, possibly together with other related information) via terrestrial or satellite broadcasting such as TV, audio equipment, AV equipment, and the like. The content may be acquired via a network such as the Internet, an intranet, or a home network, or by reading content stored on a recording medium such as a CD, a DVD, or a built-in disk device. Voice input from a microphone may also be acquired.
[0014]
The detection unit 20 detects an information sound from the outside. The information sound is a sound that requires immediate or unexpected attention, for example a localized sound heard from a certain direction. The information sound may be, for example, a call from a person nearby, a guide sound for in-house broadcasting or guidance, a horn from a car, or the like. The information sound may also include a guide sound reproduced as stereo sound by the acoustic control apparatus 100, such as a sound effect included as stereo sound in the listening sound. As a method of detecting the information sound, the acoustic control device 100 may include a microphone (not shown), and the detection unit 20 can perform detection based on the sound picked up by that microphone, or based on the sound picked up by a microphone mounted on the earphone. In this case, for example, the background noise component can be removed from the sound detected by the microphone, and any remaining component exceeding a certain sound pressure level can be detected as an information sound.
[0015]
The correction unit 30 generates a stereo signal (an acoustic signal for the left earphone and an acoustic signal for the right earphone) by filtering the acoustic signal (monaural signal) acquired by the acquisition unit 10, and supplies the generated acoustic signals to the output unit 40. When the acoustic signal acquired by the acquisition unit 10 is already a stereo signal, the acquired signal is supplied to the output unit 40 as-is.
[0016]
When the detection unit 20 detects an information sound, the correction unit 30 in the present embodiment corrects the acoustic signal using the acoustic transfer characteristics stored in the storage unit 50 so as to localize the sound image of the listening sound in a predetermined direction (localization direction). Here, localizing the sound image in a certain direction means filtering the acoustic signal appropriately so that the listener has the illusion of hearing the sound from a virtual position (virtual sound source) located in that direction relative to the listener (listening position).
[0017]
The localization direction is preferably a direction that does not overlap the arrival direction of the information sound, that is, any direction excluding the direction of the information sound. The localization direction may change, for example, as the arrival direction of the information sound (described later) changes. For localizing a sound image, well-known techniques in stereophonic sound can be used. Here, the acoustic transfer characteristic is a function indicating the transfer characteristic of sound transmitted from a virtual position in a certain direction to the listener, and is, for example, a head-related transfer function.
[0018]
FIG. 3 is a diagram explaining the acoustic transfer characteristics stored in the storage unit 50. As shown in FIG. 3, consider an XY coordinate system centered on the listener at origin O. The positive X direction is the listener's right-hand direction (θ = 0°), and the positive Y direction is the listener's front (θ = 90°). In the example illustrated in FIG. 3, the storage unit 50 stores acoustic transfer characteristics for every 45°, that is, for θ = 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315° (for each angle, a pair consisting of the transfer characteristic to the left ear and the transfer characteristic to the right ear). Each acoustic transfer characteristic indicates how sound is transmitted to the listener from the corresponding direction; by convolving an acoustic signal with an acoustic transfer characteristic and presenting the result to the listener, the sound image can be localized in the corresponding direction.
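The per-45° table held by the storage unit 50 can be modeled as a simple lookup keyed by angle. This is an illustrative sketch, not part of the patent disclosure; the class name, the snap-to-nearest-step rule, and the dictionary layout are assumptions.

```python
class TransferStore:
    # Maps each 45-degree direction to a (left-ear, right-ear) pair of
    # head-related impulse responses, mimicking storage unit 50.
    def __init__(self, hrirs):
        self.hrirs = hrirs  # {angle_deg: (hrir_left, hrir_right)}

    def get(self, angle_deg):
        # Snap the requested angle to the nearest stored 45-degree step.
        key = (int(round((angle_deg % 360) / 45.0)) % 8) * 45
        return self.hrirs[key]
```

A request for 100° returns the 90° pair, and 350° wraps around to the 0° pair.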
[0019]
The correction unit 30 selects one of the plurality of acoustic transfer characteristics stored in the storage unit 50, and performs a convolution operation of the selected acoustic transfer characteristic (first acoustic transfer characteristic) on the acoustic signal, generating an acoustic signal P L for the left earphone and an acoustic signal P R for the right earphone. The generated acoustic signals (second acoustic signals) are supplied to the output unit 40.
[0020]
For example, when localizing the sound image at θ = 90 °, the correction unit 30 generates an
acoustic signal P L for the left earphone and an acoustic signal P R for the right earphone
according to the following equations. Here, H L, 90 represents an acoustic transfer characteristic
to the left ear, H R, 90 represents an acoustic transfer characteristic to the right ear, and S
represents an acoustic signal.
[0021]
P L = H L,90 × S (1) P R = H R,90 × S (2) Similarly, when θ = 135°, the correction unit 30 uses the acoustic transfer characteristics H L,135 and H R,135 for 135°. By using acoustic transfer characteristics matched to each angle, the sound image can be localized in any desired direction.
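Equations (1) and (2), realized in the time domain with head-related impulse responses, amount to one convolution per ear. A minimal sketch (not part of the patent disclosure; the function name and the use of time-domain `np.convolve` are assumptions):

```python
import numpy as np

def localize(signal, hrir_left, hrir_right):
    # P_L = H_L * S and P_R = H_R * S: convolve the monaural signal with
    # the impulse responses of the chosen localization direction.
    return np.convolve(signal, hrir_left), np.convolve(signal, hrir_right)
```

With a unit impulse as the left response and a half-scale impulse as the right, the left channel reproduces the input and the right channel is attenuated by half.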
[0022]
The output unit 40 outputs each acoustic signal acquired from the correction unit 30 to an
earphone connected to the acoustic control device 100 by wire or wirelessly via an earphone
jack (not shown).
As a result, a listener wearing the earphones hears the listening sound, such as music, normally while no information sound is detected; when an information sound is detected, the listening sound is heard as a sound localized in a certain direction, so that the listener can hear the information sound at the same time.
[0023]
FIG. 2 is a flowchart showing the sound control method according to the first embodiment.
[0024]
In S101, the acquisition unit 10 acquires an acoustic signal (first acoustic signal) of a listening
sound.
[0025]
In S102, the detection unit 20 checks for an information sound; if none is detected, the process proceeds to S103.
[0026]
In S103, the output unit 40 outputs the first acoustic signal to the earphone (listener).
[0027]
In S102, when the detection unit 20 detects an information sound, the process proceeds to S104.
[0028]
In S104, the correction unit 30 acquires the acoustic transfer characteristic (first function) from
the storage unit 50.
[0029]
In S105, the correction unit 30 corrects the first acoustic signal into a second acoustic signal by
performing a convolution operation of the first function on the first acoustic signal.
[0030]
In S106, the output unit 40 outputs the second acoustic signal to the earphone (listener).
[0031]
The above steps are repeated, for example, until the acquisition of the first acoustic signal ends
or while the detection unit 20 detects the information sound.
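The flowchart steps S101 to S106 reduce to one branch per processing pass. The sketch below is illustrative only (not part of the patent disclosure); the function name and the injected `convolve` callable are assumptions standing in for the correction unit's convolution.

```python
def sound_control_step(first_signal, info_detected, transfer, convolve):
    # S102: branch on detection. S103: output the first acoustic signal
    # unchanged. S104-S106: convolve with the stored transfer
    # characteristic and output the corrected (second) signal.
    if not info_detected:
        return first_signal
    return convolve(first_signal, transfer)
```

With a toy convolution that scales each sample, the signal passes through untouched when no information sound is present and is corrected otherwise.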
[0032]
The localization direction of the sound image by the correction unit 30 will be described.
The plane defined by the XY coordinate system shown in FIG. 3 is divided into four quadrants: the first quadrant (0° ≤ θ < 90°), the second quadrant (90° ≤ θ < 180°), the third quadrant (180° ≤ θ < 270°), and the fourth quadrant (270° ≤ θ < 360°).
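The quadrant assignment above can be written as a one-line classifier (an illustrative sketch, not part of the patent disclosure; the function name is an assumption):

```python
def quadrant(theta_deg):
    # Angle measured from the listener's right-hand direction (theta = 0),
    # counter-clockwise, as in the XY coordinate system of FIG. 3.
    t = theta_deg % 360.0
    return 1 if t < 90 else 2 if t < 180 else 3 if t < 270 else 4
```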
[0033]
A subjective evaluation was performed on which relative positional relationships make the information sound easy to hear, for combinations in which the listening sound (P) and the information sound (S) are arranged at 45° intervals in the XY coordinate system shown in FIG. 3.
[0034]
FIGS. 4(a) to 4(d) are diagrams showing the results of the subjective evaluation. Each shows the range in which the information sound (S) is easy to hear when the listening sound (P) is fixed in the respective quadrant. With the listener at the center, the angle of the listening sound (P) is θ P and the angle (localization angle) of the information sound (S) is θ S.
[0035]
As shown in FIG. 4(a), when the listening sound (P) is fixed in the first quadrant (θ P = 45°), the information sound (S) was easy to hear when its position satisfied 45° < θ S < 315°. In particular, it was easy to hear in the range 90° ≤ θ S ≤ 270°. On the other hand, when the position of the information sound (S) satisfied 0° ≤ θ S ≤ 45° or 315° ≤ θ S ≤ 360°, it was difficult to hear.
[0036]
As shown in FIG. 4(b), when the listening sound (P) is fixed in the second quadrant (θ P = 135°), the information sound (S) was easy to hear when its position satisfied 0° ≤ θ S < 135° or 225° < θ S ≤ 360°. In particular, it was even easier to hear in the ranges 0° ≤ θ S ≤ 90° and 270° ≤ θ S ≤ 360°. On the other hand, when the position of the information sound (S) satisfied 135° ≤ θ S ≤ 225°, it was difficult to hear.
[0037]
As shown in FIG. 4(c), when the listening sound (P) is fixed in the third quadrant (θ P = 225°), the information sound (S) was easy to hear when its position satisfied 0° ≤ θ S < 135° or 225° < θ S ≤ 360°. In particular, it was even easier to hear in the ranges 0° ≤ θ S ≤ 90° and 270° ≤ θ S ≤ 360°. On the other hand, when the position of the information sound (S) satisfied 135° ≤ θ S ≤ 225°, it was difficult to hear.
[0038]
As shown in FIG. 4(d), when the listening sound (P) is fixed in the fourth quadrant (θ P = 315°), the information sound (S) was easy to hear when its position satisfied 45° < θ S < 315°. In particular, it was easy to hear in the range 90° ≤ θ S ≤ 270°.
[0039]
On the other hand, when the position of the information sound (S) satisfied 0° ≤ θ S ≤ 45° or 315° ≤ θ S ≤ 360°, it was difficult to hear.
[0040]
As described above, regarding the relative positional relationship between the listening sound (P) and the information sound (S), let Q be the intersection of the X axis with the perpendicular dropped from the position of the listening sound (P) to the X axis. An information sound (S) whose own perpendicular foot on the X axis falls within the range on the listener's side of Q is easy to hear. On the other hand, an information sound (S) whose perpendicular foot falls on the opposite side of Q from the listener is difficult to hear. The same applies when the positional relationship between the listening sound (P) and the information sound (S) is reversed.
[0041]
Therefore, preferably, let Q′ be the intersection of the X axis with the perpendicular dropped from the position of the information sound (S) to the X axis; the localization direction is then any direction whose perpendicular foot on the X axis falls within the range on the listener's side of Q′. More preferably, when the position of the information sound (S) is in the first or fourth quadrant (to the listener's right), any direction included in 90° ≤ θ ≤ 270° (to the listener's left) is taken as the localization direction. When the position of the information sound (S) is in the second or third quadrant (to the listener's left), any direction included in 0° ≤ θ ≤ 90° or 270° ≤ θ ≤ 360° (to the listener's right) is taken as the localization direction. The correction unit 30 preferably selects the acoustic transfer characteristic corresponding to this localization direction.
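The "push the listening sound to the opposite side" rule above can be condensed into a small selector. This is an illustrative sketch, not part of the patent disclosure; picking exactly 180° or 0° (rather than any angle in the preferred range) is an assumption for simplicity.

```python
def choose_localization_angle(theta_info_deg):
    # Information sound on the listener's right (first or fourth quadrant):
    # localize the listening sound to the left (theta = 180). Otherwise
    # (second or third quadrant) localize it to the right (theta = 0).
    t = theta_info_deg % 360.0
    return 180.0 if (t < 90.0 or t >= 270.0) else 0.0
```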
[0042]
According to the acoustic control device 100 of the present embodiment, the sound image of the listening sound is shifted in a direction that does not overlap the information sound at the moment the information sound arrives, so that even when the listener is wearing earphones and listening to the listening sound, it becomes easy to hear the information sound while continuing to listen.
[0043]
First Modification In the acoustic control apparatus 200 of the first modification, the operation of the detection unit 20 differs from that of the acoustic control apparatus 100. Description of configurations identical to those of the acoustic control device 100 is omitted.
[0044]
The detection unit 20 according to the present modification detects the direction of the information sound, that is, from which direction the listener hears the information sound. For example, a microphone (not shown) is provided in the acoustic control apparatus 200 or in the earphone, and the direction of the information sound can be detected based on the sound it picks up.
[0045]
For example, the detection unit 20 detects the direction of the information sound using the sound intensity method known in the fields of noise measurement and sound source localization. Sound intensity is "the flow of sound energy passing through a unit area per unit time", with units of W/m². For example, by incorporating a plurality of microphones into the earphone, the flow of sound energy can be measured, and both the strength of the sound and the direction of its flow can be obtained as a vector quantity. The detection unit 20 detects the direction of the information sound using the time difference of the sound passing between two microphones. Specifically, letting the sound pressure waveforms of the two microphones be P 1(t) and P 2(t), the sound intensity I is calculated as the time average of the product of the average sound pressure P(t) and the particle velocity V(t), by the following equation.
[0046]
Here, ρ represents the air density, and Δr represents the distance between the microphones. The measurable frequency range depends on the microphone spacing Δr; from the relationship with the wavelength λ of the sound, in general the smaller Δr is, the higher the frequency that can be estimated. For example, when Δr is 50 mm the upper limit frequency is 1.25 kHz, but when Δr is 12 mm the range extends to 6.3 kHz. Δr is preferably λ/2 or less, and more preferably about λ/3. That is, a Δr of about 33 cm to 50 cm is preferable, because the voice band from 340 Hz is then included.
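The intensity equation itself is not reproduced above (the original figure is missing), but the underlying idea of the modification, estimating direction from the time difference between two microphones, can be sketched independently. This is an illustrative sketch, not part of the patent disclosure: it uses a cross-correlation peak and a far-field plane-wave assumption rather than the patent's intensity formula, and the function names are assumptions.

```python
import numpy as np

def interaural_delay(p1, p2, fs):
    # Lag (seconds) of p1 relative to p2 at the cross-correlation peak;
    # negative means p1 arrives first.
    corr = np.correlate(p1, p2, mode="full")
    return (np.argmax(corr) - (len(p2) - 1)) / fs

def arrival_angle(delay, spacing, c=340.0):
    # Incidence angle for a microphone pair `spacing` metres apart,
    # assuming a plane wave travelling at speed c.
    s = np.clip(c * delay / spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))
```

An impulse reaching the second microphone four samples later yields a −4-sample delay; a zero delay maps to 0° incidence and a delay of spacing/c to 90°.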
[0047]
The correction unit 30 selects the acoustic transfer characteristic according to the direction of
the information sound detected by the detection unit 20.
[0048]
Letting Q′ be the intersection of the X axis with the perpendicular dropped from the position of the information sound (S) to the X axis, the correction unit 30 selects the acoustic transfer characteristic corresponding to a direction whose perpendicular foot on the X axis falls within the range on the listener's side of Q′. More preferably, when the position of the information sound (S) is in the first or fourth quadrant (to the listener's right), the acoustic transfer characteristic corresponding to a direction included in 90° ≤ θ ≤ 270° (to the listener's left) is selected. When the position of the information sound (S) is in the second or third quadrant (to the listener's left), the acoustic transfer characteristic corresponding to a direction included in 0° ≤ θ ≤ 90° or 270° ≤ θ ≤ 360° (to the listener's right) is selected.
[0049]
According to the acoustic control apparatus 200 of the present modification, the sound image of the listening sound is shifted away from the direction of the information sound at the moment the information sound arrives, so that even when the listener is wearing earphones and listening to the listening sound, it becomes easy to hear the information sound while continuing to listen.
[0050]
Second Modification In the acoustic control apparatus 300 of the second modification, the operation of the detection unit 20 differs from that of the acoustic control apparatus 100. Description of configurations identical to those of the acoustic control device 100 is omitted.
[0051]
For example, the interaural cross-correlation function (IACF) can be used to detect whether the sound picked up by binaural-recording microphones in the earphones contains an information sound (localized sound). The detection unit 20 according to the present modification detects the information sound, and its arrival direction, by performing an IACF analysis on the sound detected by the microphones.
[0052]
The IACF indicates how closely the sound pressure waveforms arriving at the two ears match, and is given by the following equation. Here, P L(t) represents the sound pressure entering the left ear at time t, and P R(t) the sound pressure entering the right ear at time t. t1 and t2 represent measurement times, with t1 = 0 and t2 = ∞; in an actual calculation, t2 may be set to a measurement time on the order of the reverberation time, for example 100 milliseconds (msec). τ represents the correlation lag, whose range is, for example, −1 millisecond to 1 millisecond. The time interval ΔT on the signal for calculating the interaural cross-correlation function therefore needs to be at least the measurement time; in the present embodiment, ΔT is 0.1 second.
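The IACF equation referred to above is not reproduced (the original figure is missing), but the standard normalized interaural cross-correlation it describes can be computed directly. This is an illustrative sketch, not part of the patent disclosure; the function name and the textbook normalization are assumptions.

```python
import numpy as np

def iacf(p_left, p_right, fs, tau_max_ms=1.0):
    # Normalized interaural cross-correlation over lags -tau_max..+tau_max.
    # Returns (lags in msec, correlation coefficients).
    p_left = np.asarray(p_left, dtype=float)
    p_right = np.asarray(p_right, dtype=float)
    max_lag = int(round(tau_max_ms * 1e-3 * fs))
    norm = np.sqrt(np.sum(p_left ** 2) * np.sum(p_right ** 2))
    taus, values = [], []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            num = np.sum(p_left[:len(p_left) - lag] * p_right[lag:])
        else:
            num = np.sum(p_left[-lag:] * p_right[:len(p_right) + lag])
        taus.append(1e3 * lag / fs)
        values.append(num / norm if norm > 0 else 0.0)
    return np.array(taus), np.array(values)
```

For identical left and right signals, the maximum peak sits at τ = 0 with value 1, as expected for a perfectly centered sound image.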
[0053]
In the present embodiment, for example, the arrival direction of the information sound is specified in units of 45°. In this case it is difficult to distinguish front from back, so the sound image directions presented to the user are five candidates: front (including directly behind), left diagonal (including left diagonal front and left diagonal rear), left lateral, right diagonal (including right diagonal front and right diagonal rear), and right lateral. In the present embodiment, five time ranges, shown in the following equations (7) to (11), are set corresponding to these five directions. The time range of equation (7) corresponds to the front (0° or 180°), that of equation (8) to the left diagonal (45° or 135°), that of equation (9) to the left lateral (90°), that of equation (10) to the right diagonal (225° or 315°), and that of equation (11) to the right lateral (270°).
[0054]
The peak lag τ corresponds to the interaural time difference and changes with the angle of incidence; for this reason, the time ranges for the different directions are unequal. Furthermore, since a person tends to judge a sound image as coming from an oblique direction more readily than from other directions, and is insensitive to whether a sound came from the front or the back, wide ranges are set for the oblique directions, as shown in equations (8) and (10).
[0055]
−0.08 msec < τ(i) < 0.08 msec (7)
0.08 msec ≤ τ(i) < 0.6 msec (8)
0.6 msec ≤ τ(i) < 1 msec (9)
−0.6 msec < τ(i) ≤ −0.08 msec (10)
−1 msec < τ(i) ≤ −0.6 msec (11)
Let τ(i) be the occurrence time (peak time) of the maximum IACF peak calculated for each ΔT based on the sound detected by the microphones provided in the earphones, and let γ(i) be its intensity (i = 1 to N).
[0056]
At this time, for example, when at least a predetermined number of the N maximum peaks calculated in one second all fall within one specific time range among the plurality of (five, in the present embodiment) predetermined time ranges, the information sound is considered to have arrived from the direction corresponding to that time range.
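The mapping from peak times to the five direction candidates, together with the "enough peaks in one range" decision, can be sketched as follows (an illustrative sketch, not part of the patent disclosure; the direction labels and the majority-vote formulation are assumptions):

```python
from collections import Counter

def classify_direction(tau_ms):
    # Time ranges of equations (7)-(11) mapped to the five presented
    # sound-image directions.
    if -0.08 < tau_ms < 0.08:
        return "front"           # 0 or 180 degrees
    if 0.08 <= tau_ms < 0.6:
        return "left diagonal"   # 45 or 135 degrees
    if 0.6 <= tau_ms < 1.0:
        return "left lateral"    # 90 degrees
    if -0.6 < tau_ms <= -0.08:
        return "right diagonal"  # 225 or 315 degrees
    if -1.0 < tau_ms <= -0.6:
        return "right lateral"   # 270 degrees
    return None

def detected_direction(peak_times_ms, min_count):
    # Report a direction only when enough peaks agree on one time range.
    counts = Counter(d for d in map(classify_direction, peak_times_ms) if d)
    if counts:
        direction, n = counts.most_common(1)[0]
        if n >= min_count:
            return direction
    return None
```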
[0057]
FIG. 5 shows the result of an IACF analysis of sound arriving from a TV located to the left rear of the listener (135°). The sampling rate was 44.1 kHz, and 100 maximum peaks were calculated at 0.1-second intervals over 10 seconds. As a result, the maximum peaks are concentrated in the time range containing τ = 0.4 msec (corresponding to 135°), indicated by a dotted line. From this result, it can be seen that the voice (information sound) arrived from the direction of approximately 135°.
[0058]
The detection unit 20 according to the present modification calculates the IACF for each ΔT based on the sound detected by the microphones provided in the earphones. When at least a predetermined number of the N maximum peaks calculated during a predetermined calculation time all fall within one specific time range among the plurality of (five, in the present embodiment) predetermined time ranges, the detection unit specifies that the sound detected by the microphones includes an information sound. At this time, by assigning a representative time to each time range in advance, the detection unit 20 specifies the direction corresponding to that representative time as the arrival direction.
[0059]
According to the acoustic control device 300 of the present modification, using the IACF, which can be evaluated together with the arrival direction, makes it possible to detect the information sound more accurately than, for example, detection based on the sound pressure level alone.
[0060]
Second Embodiment FIG. 6 is a block diagram showing a sound control apparatus 400 according
to a second embodiment.
Description of the same configuration as that of the acoustic control device 100 will be omitted.
[0061]
The acoustic control device 400 differs from the acoustic control device 100 in that it further includes a superimposing unit 60 that localizes the information sound in its arrival direction by a convolution operation and superimposes the information sound on the listening sound.
[0062]
The superimposing unit 60 selects, from the plurality of acoustic transfer characteristics stored in the storage unit 50, the acoustic transfer characteristic (second acoustic transfer characteristic) corresponding to the direction of the information sound, and generates an acoustic signal P′ L for the left earphone and an acoustic signal P′ R for the right earphone by performing a convolution operation of the selected characteristic on the information sound. Here, the acoustic transfer characteristic (second acoustic transfer characteristic) used by the superimposing unit 60 differs from the acoustic transfer characteristic (first acoustic transfer characteristic) used by the correction unit 30. The superimposing unit 60 supplies to the output unit 40 acoustic signals (fourth acoustic signals) obtained by superimposing the generated acoustic signals (third acoustic signals) on the acoustic signals (second acoustic signals) generated by the correction unit 30.
[0063]
For example, when localizing the information sound in the arrival direction θ = 90°, the superimposing unit 60 generates an acoustic signal P′ L for the left earphone and an acoustic signal P′ R for the right earphone according to the following equations. Here, H L,90 represents the acoustic transfer characteristic to the left ear, H R,90 that to the right ear, and S′ the acoustic signal of the information sound.
[0064]
P 'L = H L, 90 x S' (12) P 'R = H R, 90 x S' (13) The superimposing unit 60 is configured to
generate each acoustic signal (third acoustic signal) and each acoustic signal (second The
acoustic signal P Lout for the left earphone and the acoustic signal P Rout (the fourth acoustic
signal) for the right earphone are generated according to the following equation.
[0065]
PLout = PL + P′L … (14)
PRout = PR + P′R … (15)

Note that the sound image direction of each acoustic signal (second acoustic signal)
generated by the correction unit 30 differs from the sound image direction of each
acoustic signal (third acoustic signal) generated by the superimposing unit 60.
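Read in the frequency domain, equations (12)–(15) are element-wise multiplications and additions. The sketch below uses made-up three-bin spectra for the transfer characteristics and signals purely to show the arithmetic; none of the numerical values come from the embodiment.

```python
import numpy as np

# Hypothetical frequency-domain transfer characteristics for θ = 90°
H_L_90 = np.array([0.9, 0.8, 0.7])   # to the left ear (assumed values)
H_R_90 = np.array([0.3, 0.2, 0.1])   # to the right ear (assumed values)
S_info = np.array([1.0, 1.0, 1.0])   # spectrum of the information sound S′

# Equations (12) and (13): localize the information sound at 90°
P_info_L = H_L_90 * S_info
P_info_R = H_R_90 * S_info

# Second acoustic signals from the correction unit 30 (assumed values)
P_L = np.array([0.5, 0.5, 0.5])
P_R = np.array([0.5, 0.5, 0.5])

# Equations (14) and (15): fourth acoustic signals for the output unit 40
P_Lout = P_L + P_info_L
P_Rout = P_R + P_info_R
```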
[0066]
FIG. 7 is a flowchart showing an acoustic control method according to the second embodiment.
S201 to S205 are the same as S101 to S105 in FIG.
[0067]
In S206, the superimposing unit 60 acquires the acoustic transfer characteristic (second
function) from the storage unit 50.
[0068]
In S207, the superimposing unit 60 performs a convolution operation of the second function on
the acoustic signal (third acoustic signal) of the information sound to correct the third acoustic
signal to the fourth acoustic signal.
[0069]
In S208, the output unit 40 outputs an acoustic signal (fifth acoustic signal) in which the second
acoustic signal and the fourth acoustic signal are superimposed to an earphone (listener).
[0070]
The above steps are repeated, for example, until the acquisition of the first acoustic signal ends
or while the detection unit 20 detects the information sound.
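The S201–S208 loop of FIG. 7 can be summarized as the following control skeleton. Every callable here (`get_block`, `detect_info`, `correct`, `convolve`, `emit`) is a hypothetical stand-in for the corresponding unit, not an API from the embodiment.

```python
def acoustic_control_loop(get_block, detect_info, correct,
                          second_function, convolve, emit):
    """Repeat until the first acoustic signal runs out; when an
    information sound is present, apply the second function (S206/S207)
    and output the superimposed signal (S208)."""
    while True:
        first = get_block()          # acquire the first acoustic signal
        if first is None:
            break                    # acquisition has ended
        second = correct(first)      # corrected second acoustic signal
        info = detect_info()         # detected information sound, or None
        if info is None:
            emit(second)             # nothing to superimpose
        else:
            fourth = convolve(info, second_function)   # S206/S207
            emit(second + fourth)    # S208: fifth acoustic signal
```

A toy run with scalar "signals" and trivial stub callables is enough to trace the control flow.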
[0071]
(Third Modification) The sound control apparatus 500 according to the present
modification detects an information sound as an acoustic signal (data) received, for
example, wirelessly, and superimposes the information sound on the acoustic signal
acquired by the acquisition unit 10 as the listening sound.
The sound control apparatus 500 thus presents the listener with a listening sound
that includes the information sound.
For example, when the listener is shopping in a department store while listening to
music through the sound control apparatus 500, the guide sound from each store
reproduced by the sound control apparatus 500 can be presented to the listener.
[0072]
The superimposing unit 60 according to the present modification obtains a listening
sound including the information sound by superimposing the information sound, i.e.,
the acoustic signal detected by the detection unit 20, on the acoustic signal
corrected by the correction unit 30.
At this time, the localization direction of the information sound can be determined, for example,
based on the relative positional relationship between the listener and each store that is the
source of the information sound.
[0073]
The superimposing unit 60 specifies the position of the sound control device 500 and the
position of the store that transmits the information sound, for example, by the GPS function of
the sound control device 500 or the like.
The superimposing unit 60 convolves an acoustic transfer characteristic that
maintains the relative positional relationship between the sound control apparatus
500 and the store, that is, one that localizes the information sound in the direction
in which the store is located with respect to the position of the sound control
apparatus 500.
The acoustic transfer characteristic (second acoustic transfer characteristic) used by the
superimposing unit 60 is different from the acoustic transfer characteristic (first acoustic
transfer characteristic) used by the correction unit 30.
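One way to realize "localized in the direction in which the store is located" is to derive a bearing from the two positions. The sketch below assumes positions already projected to planar (x, y) coordinates in metres with +y as north; converting raw GPS latitude/longitude and reading the listener's heading from a sensor are left out, and the function name and signature are assumptions.

```python
import math

def localization_direction(device_pos, store_pos, device_heading_deg=0.0):
    """Direction (degrees, clockwise from the listener's front) in which
    the information sound from the store should be localized, given the
    (x, y) positions of the sound control apparatus 500 and the store."""
    dx = store_pos[0] - device_pos[0]
    dy = store_pos[1] - device_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))   # 0° = +y axis, clockwise
    return (bearing - device_heading_deg) % 360.0
```

The returned angle would then select the acoustic transfer characteristic (second acoustic transfer characteristic) to convolve, as in the earlier embodiments.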
[0074]
According to the sound control apparatus 500 of the present modification, useful
information from, for example, a store can be presented as sound to a listener who
wears earphones and listens to music.
[0075]
FIG. 8 is a schematic view showing an electronic device 1000 provided with the
acoustic control apparatus according to each embodiment.
[0076]
The electronic device 1000 shown in FIG. 8 is a tablet terminal.
[0077]
The electronic device 1000 includes the sound control device 100 according to the first
embodiment, a display 70 such as a touch panel, an earphone jack 80, and a microphone 90.
The detection unit 20 of the sound control device 100 is connected to the microphone 90 by a
communication cable (not shown).
The detection unit 20 detects an information sound based on the sound collected by the
microphone 90.
The output unit 40 of the sound control device 100 is connected to the earphone jack
80 by a communication cable (not shown). With an earphone (not shown) connected to
the earphone jack 80, the output unit 40 outputs the second acoustic signal to the
earphone via the earphone jack 80.
[0078]
Note that the electronic device 1000 may include any one of the acoustic control devices 200,
300, 400, and 500 according to another embodiment or modification instead of the acoustic
control device 100. Further, instead of the electronic device 1000 including the microphone 90,
an earphone connected to the earphone jack 80 of the electronic device 1000 may include the
microphone 90. At this time, the acoustic control device 100 receives an acoustic signal of the
sound collected by the microphone 90 through the earphone jack 80, and detects an information
sound based on the acoustic signal.
[0079]
According to the sound control apparatus or method of at least one embodiment
described above, a user who wears earphones and listens to a listening sound can
easily hear an information sound while continuing to listen to the listening sound.
[0080]
These embodiments are presented as examples and are not intended to limit the scope of the
invention.
These embodiments can be implemented in other various forms, and various omissions,
substitutions, and modifications can be made without departing from the scope of the invention.
These embodiments and modifications thereof are included in the scope and the gist of the
invention, and are included in the invention described in the claims and the equivalent scope
thereof.
[0081]
100, 200, 300, 400, 500: acoustic control device; 1000: electronic device; 10:
acquisition unit; 20: detection unit; 30: correction unit; 40: output unit; 50:
storage unit; 60: superimposing unit; 70: display; 80: earphone jack; 90: microphone