Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2009177747
The present invention provides a call apparatus capable of preventing howling without reducing the amount of cancellation of the speaker sound even when the diaphragm of the speaker undergoes divided vibration. A communication apparatus A includes a speaker SP that outputs sound from the front side of a diaphragm 23 to the outside, and microphones M1 and M2 that are disposed facing the front of the diaphragm 23, collect sound, and output audio signals. After adjusting the gain and delay time of the audio signals output from the microphones M1 and M2, an audio processing unit 10 removes the audio signal of the microphone M1 from the audio signal of the microphone M2 and transmits the result to the outside. The microphone M2 collects the voice uttered by the person speaking at a sound pressure level equal to or higher than that of the microphone M1, while collecting the sound emitted by the speaker SP at a lower sound pressure level than the microphone M1, and outputs an audio signal. [Selected figure] Figure 1
Intercom
[0001]
The present invention relates to a telephone set.
[0002]
Conventionally, there are telephone sets installed indoors, such as those of an intercom system, which include a speaker for outputting voice received from a telephone set installed at another location and a microphone for inputting voice to be transmitted to that other telephone set.
15-04-2019
1
And since howling occurs when the sound generated from the speaker gets into the microphone,
various measures against howling are taken.
[0003]
For example, in a first conventional example, a loop circuit including the speaker and the microphone is formed in the speech apparatus, and since howling occurs when the loop gain exceeds 1, howling is prevented by adjusting the amount of loss in a variable loss circuit provided in the loop circuit so that the loop gain is kept at 1 or less. Here, the smaller of the transmission signal and the reception signal is regarded as less important, and the transmission loss of the variable loss circuit inserted in the path carrying the smaller signal level is increased.
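The loop-gain criterion in this first conventional example can be illustrated with a small numeric sketch (the stage gains below are hypothetical figures for illustration, not values from the patent):

```python
def required_loss_db(stage_gains_db):
    """Extra attenuation (dB) a variable loss circuit must insert so
    that the total loop gain falls to 1 (i.e., 0 dB) or below."""
    loop_gain_db = sum(stage_gains_db)
    return max(0.0, loop_gain_db)

# Hypothetical loop: microphone preamp +20 dB, power amplifier +10 dB,
# speaker-to-microphone acoustic path -24 dB -> loop gain +6 dB.
print(required_loss_db([20.0, 10.0, -24.0]))  # 6.0 dB of added loss needed
```

With that loss inserted in the path carrying the smaller of the transmission and reception signals, the loop gain stays at or below unity and the oscillation condition for howling is not met.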
[0004]
However, in the first conventional example, when the distance between the microphone and the speaker is short, the level of the received voice that reaches the microphone from the speaker becomes large, so the transmission signal becomes larger than the reception signal even while voice is being emitted from the speaker. The control circuit is then switched to the transmitting state despite being in the receiving state, and a state occurs in which the speaker cannot emit any sound.
[0005]
Therefore, in a second conventional example, there is a telephone set provided with: a pair of microphones; a delay circuit that delays the output of the microphone closer to the speaker by the propagation delay of the sound wave corresponding to the difference between the distances from the speaker to the two microphones; a level adjustment amplifier circuit that matches the output levels of the two microphones with respect to the sound from the speaker; and a differential amplifier circuit that receives the outputs of the two microphones after they pass through the delay circuit and the level adjustment amplifier circuit, the output of the differential amplifier circuit being used as the transmission signal.
[0006]
In this communication device, after the voice from the speaker is picked up by the pair of microphones, delay and level adjustment are performed so that the speaker sounds input to the two microphones cancel each other in the differential amplifier circuit. Only the speaker sound is thus canceled to prevent howling, and furthermore the transmitted voice can be sent without the reception being blocked.
(See, for example, Patent Document 1).
[Patent Document 1] Japanese Patent No. 2607257 (page 2, left column, line 13 to right column, line 3; page 4, right column, lines 26 to 49; FIG. 1, FIG. 5)
[0007]
The diaphragm of the speaker usually vibrates in the same phase over its entire surface, but when the sound wave to be emitted is at or above a certain frequency, it undergoes divided vibration, in which a plurality of vibration regions having mutually different phases appear. Forms of divided vibration include division in the radial direction of the diaphragm and concentric division, and the number of divisions may be, for example, two or four.
[0008]
In a configuration such as that of Patent Document 1 described above, in which the speaker sound input to the two microphones is canceled by the differential amplifier circuit so that only the speaker sound is removed to prevent howling, the phases and amplitudes of the sound waves radiated from the respective vibration regions differ from one another during divided vibration. It therefore becomes difficult to cancel the speaker sound input to the two microphones with the differential amplifier circuit, and the howling prevention effect is reduced.
[0009]
The present invention has been made in view of the above, and an object thereof is to provide a call apparatus capable of preventing howling without reducing the amount of cancellation of the speaker sound even when the diaphragm of the speaker undergoes divided vibration.
[0010]
According to the first aspect of the present invention, the call apparatus comprises: a speaker that is attached to a housing and outputs sound from one surface side of a diaphragm to the outside of the housing; first and second microphones that are disposed facing the one surface of the diaphragm, collect sound, and output audio signals; and an audio processing unit that, after adjusting the gain and delay time of the audio signals output by the first and second microphones, removes the audio signal of the first microphone from the audio signal of the second microphone and transmits the result to the outside. The second microphone collects the sound emitted by the speaker at a sound pressure level lower than that of the first microphone, collects the voice uttered by the person speaking at a sound pressure level equal to or higher than that of the first microphone, and outputs an audio signal.
[0011]
According to the present invention, the first and second microphones are both disposed facing one surface of the diaphragm, so the plurality of vibration regions of the diaphragm are substantially equidistant from the first and second microphones. The amplitude difference and the phase difference produced between the audio signals emitted from each vibration region and collected by the first and second microphones therefore take substantially the same values, so by adjusting the gain and delay time in the audio processing unit, the sound waves emitted from the respective vibration regions can each be canceled, and the howling prevention effect is obtained even during divided vibration.
That is, even when the diaphragm of the speaker undergoes divided vibration, howling can be prevented without reducing the amount of cancellation of the speaker sound.
[0012]
According to a second aspect of the present invention, in the first aspect, the sound collecting surface of the first microphone is provided facing the one surface of the diaphragm, and the sound collecting surface of the second microphone is provided facing the same direction as the audio output direction of the speaker.
[0013]
According to the present invention, the second microphone collects the speaker sound at a sound pressure level lower than that of the first microphone and collects the voice uttered by the person speaking at a sound pressure level equal to or higher than that of the first microphone, outputting an audio signal. The audio signal of that voice can therefore be maintained at a sufficient level while the amount of cancellation of the speaker sound by the audio processing unit is secured.
[0014]
According to a third aspect of the present invention, in the first or second aspect, a front air chamber, which is a space formed on the one surface side of the diaphragm of the speaker, is provided in the housing, and the sound collecting surface of the first microphone is disposed in the front air chamber.
[0015]
According to the present invention, the sound emitted by the speaker is reflected in the front air chamber before diverging outside the housing, so the difference in sound pressure level between the speaker sounds collected by the first and second microphones is further increased. This enhances the effect that the voice signal of the person speaking can be maintained at a sufficient level while the amount of cancellation of the speaker sound by the audio processing unit is secured.
[0016]
The invention of claim 4 is characterized in that, in any one of claims 1 to 3, the first and second microphones are juxtaposed in the vertical direction with respect to the one surface of the diaphragm of the speaker.
[0017]
According to the present invention, the howling preventing effect at the time of divided vibration
is further improved.
[0018]
According to a fifth aspect of the present invention, in any of the first to fourth aspects, a rear air chamber, which is a space formed on the other surface side of the diaphragm of the speaker, is provided in the housing, and the rear air chamber is isolated from the space outside the housing.
[0019]
According to the present invention, since the rear air chamber is a tightly sealed space, the reverse-phase sound radiated from the back surface of the speaker into the rear air chamber is unlikely to leak out of it, and the adverse effect on the howling prevention processing of the audio processing unit that would be caused by the first and second microphones collecting this reverse-phase sound can be suppressed.
[0020]
According to a sixth aspect of the present invention, in any of the first to fifth aspects, the first
and second microphones are disposed on the same substrate on which a wiring pattern is
formed.
[0021]
According to the present invention, positioning of the first and second microphones can be
performed efficiently.
[0022]
As described above, according to the present invention, even when the diaphragm of the speaker
vibrates in a divided manner, howling can be prevented without reducing the amount of
cancellation of the speaker sound.
[0023]
Hereinafter, embodiments of the present invention will be described based on the drawings.
[0024]
(Embodiment) As shown in FIGS. 1 to 3, the call apparatus A according to the present embodiment is configured by housing a call module MJ in a rectangular box-shaped device main body A2 in which a call switch SW1 and a voice processing unit 10 are arranged.
The device main body A2 is formed by joining, for example, two resin molded members; after the call switch SW1, the voice processing unit 10, and the call module MJ are accommodated, the two members are joined by fitting means, an adhesive, or the like.
[0025]
In the call module MJ, a housing A1 measuring 40 mm wide × 30 mm high × 8 mm thick is constituted by a body A10 having an opening in its rear surface and a flat cover A11 covering the opening of the body A10, and houses the speaker SP and the microphone substrate MB1.
A diaphragm 23 (described later) of the speaker SP is disposed facing a plurality of sound holes 12 formed in the front surface of the housing A1 and a plurality of sound holes 60 formed in the front surface of the device main body A2.
[0026]
As shown in FIG. 3, the audio processing unit 10 is an IC including a communication unit 10a,
audio switch units 10b and 10c, an amplification unit 10d, and a signal processing unit 10e, and
is disposed in the housing A1.
The audio signal transmitted from a communication device A installed in another room or the like through the information line Ls is received by the communication unit 10a, passes through the audio switch unit 10b, is amplified by the amplification unit 10d, and is then output from the speaker SP.
In addition, operating the call switch SW1 makes it possible to talk: each voice signal input from the microphone M1 (first microphone) and the microphone M2 (second microphone) on the microphone substrate MB1 undergoes signal processing (described later) in the signal processing unit 10e, passes through the audio switch unit 10c, and is transmitted from the communication unit 10a through the information line Ls to the communication device A installed in the other room or the like.
That is, the apparatus functions as an intercom enabling two-way communication between rooms.
The power supply of the communication device A may be supplied from an outlet provided near
the installation site, or may be supplied via the information line Ls.
[0027]
As shown in FIG. 1, the speaker SP has a cylindrical yoke 20 formed of an iron-based material about 0.8 mm thick, such as cold-rolled steel sheet (SPCC, SPCEN) or electromagnetic soft iron (SUY), and a circular support 21 extends outward from the open end of the yoke 20.
[0028]
A cylindrical permanent magnet 22 formed of neodymium (for example, residual magnetic flux density 1.39 T to 1.43 T) is disposed inside the cylinder of the yoke 20, and the outer peripheral edge of the dome-shaped diaphragm 23 is fixed to the end face of the support 21.
[0029]
The diaphragm 23 is formed of a thermoplastic such as PET (polyethylene terephthalate) or PEI (polyetherimide) (for example, 12 μm to 50 μm thick).
A cylindrical bobbin 24 is fixed to the back surface of the diaphragm 23, and a voice coil 25, formed by winding polyurethane-coated copper wire (for example, 0.05 mm in diameter) around a kraft paper tube, is provided at the rear end of the bobbin 24.
The bobbin 24 and the voice coil 25 are arranged such that the voice coil 25 is positioned at the open end of the yoke 20 and can move freely in the front-rear direction in the vicinity of the open end of the yoke 20.
[0030]
The voice coil 25 receives an audio signal via a pair of speaker wires W. One end of each speaker wire W, on the voice coil 25 side, is fixed with resin in the radial direction along the back surface of the circular diaphragm 23, and the other end is connected to the amplification unit 10d of the audio processing unit 10.
[0031]
When a voice signal is input to the polyurethane-coated copper wire of the voice coil 25, an electromagnetic force is generated in the voice coil 25 by the current of the voice signal and the magnetic field of the permanent magnet 22, and the diaphragm 23 is vibrated in the front-rear direction.
At this time, the diaphragm 23 emits a sound corresponding to the audio signal.
That is, a dynamic speaker SP is configured.
[0032]
The outer peripheral end of the circular support 21 of the speaker SP abuts against the inner front surface of the housing A1 facing the diaphragm 23, and the speaker SP is fixed from the inside with the diaphragm 23 facing the front of the housing A1.
[0033]
When the speaker SP is fixed in the housing A1, a front air chamber Bf, which is the space enclosed between the inner front surface of the housing A1 and the front surface side (diaphragm 23 side) of the speaker SP, and a rear air chamber Br, which is the space enclosed between the inner rear surface and inner sides of the housing A1 and the back surface side (yoke 20 side) of the speaker SP, are formed. The front air chamber Bf communicates with the outside via the sound holes 12 of the housing A1 and the sound holes 60 of the device main body A2.
The rear air chamber Br is isolated from (not in communication with) the front air chamber Bf because the end of the support 21 of the speaker SP is in close contact with the inner surface of the housing A1.
Further, the rear air chamber Br is isolated from the outside air by the close contact between the body A10 of the housing A1 and the cover A11.
That is, the rear air chamber Br is sealed and isolated from the other spaces.
[0034]
The sound radiated from the back surface of the speaker SP (the back surface of the diaphragm 23) into the rear air chamber Br is opposite in phase to the sound radiated from the front surface of the speaker SP (the front surface of the diaphragm 23) into the front air chamber Bf (hereinafter, the phase of the sound radiated from the front surface of the speaker SP is referred to as the positive phase, and the phase of the sound radiated from the rear surface of the speaker SP as the reverse phase).
However, as described above, since the rear air chamber Br is a tightly sealed space, the reverse-phase sound radiated from the back surface of the speaker SP into the rear air chamber Br is unlikely to leak out of it. This suppresses the drop in radiated sound pressure that would occur if reverse-phase sound leaking from the rear air chamber Br wrapped around to the front and canceled the positive-phase sound radiated from the front surface of the speaker SP, so the sound the speaker SP emits toward the person in front is easy to hear.
[0035]
Further, the other end side of each speaker wire W is led out of the call module MJ through an insertion hole (not shown) bored in the housing A1, and is connected to the voice processing unit 10 in the device main body A2.
After the speaker wire W is passed through, the insertion hole is closed with resin to seal the rear air chamber Br.
[0036]
Next, as shown in FIG. 4, the microphone substrate MB1 includes a module substrate 2 having a back surface 2a and a front surface 2b. A pair consisting of a bare chip BC1 and an ICKa1 is mounted on the back surface 2a of the module substrate 2, and a pair consisting of a bare chip BC2 and an ICKa2 is mounted on the front surface 2b. After the bare chips BC1, BC2 and the ICKa1, ICKa2 are each connected (wire-bonded) with wires W to wiring patterns (not shown) on the module substrate 2, a shield case SC1 is mounted on the back surface 2a so as to cover the pair of bare chip BC1 and ICKa1, and a shield case SC2 is mounted on the front surface 2b so as to cover the pair of bare chip BC2 and ICKa2. The bare chip BC1, the ICKa1, and the shield case SC1 constitute the microphone M1, and the bare chip BC2, the ICKa2, and the shield case SC2 constitute the microphone M2.
[0037]
The microphone M1 uses as its sound collecting surface the bottom surface side of the shield case SC1, in which a sound hole F1 is formed, and the microphone M2 uses as its sound collecting surface the bottom surface side of the shield case SC2, in which a sound hole F2 is formed; the two sound collecting surfaces thus face in the two opposite directions of the module substrate 2.
[0038]
As shown in FIG. 5, in the bare chip BC (bare chip BC1 or BC2), a Si thin film 1d is formed on one surface side of a silicon substrate 1b so as to close a hole 1c formed in the silicon substrate 1b, an electrode 1f is formed facing the Si thin film 1d across an air gap 1e, and a pad 1g for outputting an audio signal is further provided, constituting a capacitor-type silicon microphone.
When an acoustic signal from the outside vibrates the Si thin film 1d, the capacitance between the Si thin film 1d and the electrode 1f changes, the amount of charge changes accordingly, and a current corresponding to the acoustic signal flows through the pads 1g.
The bare chip BC is die-bonded at its silicon substrate 1b to the module substrate 2.
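The capacitor-microphone principle described above (membrane deflection changes the capacitance between the Si thin film 1d and the electrode 1f) can be sketched with the parallel-plate approximation; the membrane area and air gap below are assumed example values, not figures from the patent:

```python
EPS0 = 8.854e-12  # permittivity of free space, F/m

def plate_capacitance(area_m2, gap_m):
    """Parallel-plate approximation of the thin-film/electrode capacitor."""
    return EPS0 * area_m2 / gap_m

# Assumed MEMS geometry: 0.5 mm x 0.5 mm membrane, 3.0 um air gap.
area = 0.5e-3 * 0.5e-3
c_rest = plate_capacitance(area, 3.0e-6)
c_pressed = plate_capacitance(area, 2.9e-6)  # sound pressure narrows the gap

print(c_rest)              # ~0.74 pF at rest
print(c_pressed - c_rest)  # positive capacitance change -> charge flows via pad 1g
```

The changing charge on the biased capacitor is what produces the current "corresponding to the acoustic signal" at the pads 1g.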
[0039]
FIG. 6A is a plan view of the microphone substrate MB1 as viewed from the back surface 2a side of the module substrate 2. The module substrate 2 is formed in a rectangular shape, and a negative power supply pad P1, a positive power supply pad P2, an output 1 pad P3, and an output 2 pad P4 are provided along the edge of the module substrate 2.
[0040]
As shown in FIG. 6B, the negative side of the power supply voltage supplied from the outside is connected to the negative power supply pad P1, and the positive side to the positive power supply pad P2; power is supplied to the microphones M1 and M2 through the wiring patterns.
Further, the audio signal collected by the microphone M1 is output from the output 1 pad P3 through the wiring pattern on the module substrate 2, and the audio signal collected by the microphone M2 is output from the output 2 pad P4 through the wiring pattern on the module substrate 2.
The ground of the audio signals output from the output pads P3 and P4 is shared with the negative power supply pad P1.
[0041]
As described above, power for the microphones M1 and M2 is supplied from the common negative power supply pad P1 and positive power supply pad P2, and the negative power supply pad P1 also serves as the ground for each output of the microphones M1 and M2, so the number of pads can be reduced and the configuration is simplified.
Since signal transmission and power feeding are performed through the wiring patterns on the module substrate 2 as described above, the microphone substrate MB1 allows the signal lines and feeders to be configured efficiently. Next, the operation of the microphone substrate MB1 will be described.
[0042]
First, each current flowing from the bare chips BC1 and BC2 in accordance with the collected acoustic signal is impedance-converted into a voltage signal by the ICKa1 and Ka2, and output as an audio signal from the output 1 pad P3 and the output 2 pad P4, respectively.
[0043]
The ICKa (ICKa1 or Ka2) has the circuit configuration of FIG. 7 and is formed as a chip IC. It converts the power supply voltage +V (e.g., 5 V) supplied from the power supply pads P1 and P2 into a constant voltage Vr (e.g., 12 V); the constant voltage Vr is applied to a series circuit of a resistor R11 and the bare chip BC, and the connection midpoint of the resistor R11 and the bare chip BC is connected via a capacitor C11 to the gate terminal of a junction-type J-FET element S11.
The drain terminal of the J-FET element S11 is connected to the operating power supply +V, and the source terminal is connected to the negative side of the power supply voltage via a resistor R12.
Here, the J-FET element S11 performs electrical impedance conversion, and the voltage at its source terminal is output as the audio signal. The impedance conversion circuit of the ICKa is not limited to the above configuration; for example, a circuit having the function of a source follower circuit using an operational amplifier may be used, and an amplification circuit for the audio signal may be provided in the ICKa as required.
[0044]
In this embodiment, the module substrate 2 is incorporated in the front wall of the housing A1, and the microphone M1 is inserted through an opening in the front surface of the housing A1 so that its sound collecting surface is located in the front air chamber Bf. The sound hole F1 of the microphone M1, bored in the bottom surface of the shield case SC1, faces the center of the diaphragm 23 of the speaker SP, so the sound emitted by the speaker SP can be reliably collected via the sound hole F1.
[0045]
Further, the microphone M2 is inserted through an opening in the front surface of the housing A1 with its sound collecting surface directed to the outside of the housing A1. The sound hole F2 of the microphone M2, bored in the bottom of the shield case SC2, faces a sound hole 61 provided in the front of the device main body A2, and thus faces the outside (front) of the housing A1 in the same direction as the output direction of the speaker SP, so the voice of the person speaking in front of the communication device A can be reliably collected via the sound hole F2.
[0046]
Further, since the microphones M1 and M2 are arranged by incorporating the module substrate 2 on which they are mounted into the housing A1, the microphones M1 and M2 can be positioned efficiently.
[0047]
Further, when the distances from the center of the diaphragm 23 of the speaker SP to the sound holes F1 and F2 of the microphones M1 and M2 are X1 and X2, respectively, X1 < X2. In this embodiment, the following configuration is provided to prevent the howling that would be caused by the microphones M1 and M2 picking up the audio output of the speaker SP.
[0048]
First, as shown in FIG. 8, the signal processing unit 10e housed in the audio processing unit 10 includes: an amplification circuit 30 that non-invertingly amplifies the output of the microphone M1; a band pass filter 31 that removes noise at frequencies outside the audio band (400 to 3000 Hz) from the output of the amplification circuit 30; a delay circuit 32 that delays the output of the band pass filter 31; an amplification circuit 33 that invertingly amplifies the output of the microphone M2; a band pass filter 34 that removes noise at frequencies outside the audio band (400 to 3000 Hz) from the output of the amplification circuit 33; and an adding circuit 35 that adds the outputs of the delay circuit 32 and the band pass filter 34.
[0049]
FIGS. 9 to 12 show the audio signal waveforms at each part of the signal processing unit 10e when the sound from the speaker SP is collected by the microphones M1 and M2.
First, when the distances from the center of the diaphragm 23 of the speaker SP to the sound holes F1 and F2 of the microphones M1 and M2 are X1 and X2, respectively, X1 < X2.
Therefore, when the sound from the speaker SP is picked up by the microphones M1 and M2, the output Y21 of the microphone M2 is smaller in amplitude than the output Y11 of the microphone M1, depending on the distances between the speaker SP and the microphones M1 and M2 and on the sensitivities of the microphones M1 and M2, and the phase of the output Y21 is delayed by the propagation delay of the sound wave [Td = (X2 − X1)/Cv] (Cv is the speed of sound) corresponding to the distance difference (X2 − X1) between the two microphones M1, M2 and the speaker SP (see FIGS. 9A and 9B).
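Plugging in the distances discussed later in the text (X1 = 1 mm, X2 = 3.5 mm) gives a delay of a few microseconds; Cv = 343 m/s is an assumed room-temperature speed of sound:

```python
def sound_delay_s(x1_m, x2_m, cv=343.0):
    """Td = (X2 - X1) / Cv: arrival-time difference between the microphones."""
    return (x2_m - x1_m) / cv

td = sound_delay_s(0.001, 0.0035)  # X1 = 1 mm, X2 = 3.5 mm
print(round(td * 1e6, 2))          # ~7.29 microseconds
```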
[0050]
The amplification circuit 30 non-invertingly amplifies the output Y11 to generate an output Y12, and the amplification circuit 33 invertingly amplifies the output Y21 to generate an output Y22 whose phase is inverted by 180°.
At this time, level adjustment corresponding to the distance difference (X2 − X1) between the two microphones M1, M2 and the speaker SP is performed so that the output levels of the two microphones M1 and M2 coincide for the sound from the speaker SP (see FIGS. 10A and 10B).
In the present embodiment, the amplification factor of the amplification circuit 30 is less than 1 and the amplification factor of the amplification circuit 33 is about 1, and the amplification circuit 33 may be omitted.
[0051]
Then, the band pass filters 31 and 34 generate outputs Y13 and Y23 obtained by removing noise
of frequencies other than the voice band from the outputs Y12 and Y22 (see FIGS. 11A and 11B).
[0052]
Next, the delay circuit 32, which is composed of a time delay element or a CR phase delay circuit, delays the output corresponding to the microphone M1, which is closer to the speaker SP, by the delay time Td, so that the phase of its output Y14 is matched with that of the output Y23 of the band pass filter 34, reducing the noise on the audio signal to be transmitted.
[0053]
By the amplification processing and the delay processing, the audio component from the speaker SP included in the output Y14 and the audio component from the speaker SP included in the output Y23 come to have the same amplitude and mutually opposite phases, and the adding circuit 35 adds the outputs Y14 and Y23 to generate an output Ya in which the audio signal corresponding to the sound from the speaker SP is canceled (see FIGS. 12A to 12C).
That is, in the output Ya, the sound component from the speaker SP is reduced.
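The processing chain of paragraphs [0048] to [0053] can be sketched numerically (band pass filters omitted; the sample rate, tone frequency, gains, and 4-sample delay are assumed illustration values, not figures from the patent):

```python
import numpy as np

fs = 48_000                        # sample rate (Hz), assumed
t = np.arange(fs // 100) / fs      # 10 ms of samples
sp = np.sin(2 * np.pi * 1000 * t)  # sound radiated by the speaker SP (1 kHz tone)

def delayed(x, k):
    """Delay x by k samples, zero-padding the front."""
    return np.concatenate([np.zeros(k), x[:-k]])

d = 4                       # extra propagation delay to M2, in samples
y11 = 1.0 * sp              # microphone M1: nearer the diaphragm, louder
y21 = 0.5 * delayed(sp, d)  # microphone M2: farther, quieter, later

y12 = 0.5 * y11             # amplification circuit 30 (non-inverting, gain < 1)
y22 = -1.0 * y21            # amplification circuit 33 (inverting, gain ~1)
y14 = delayed(y12, d)       # delay circuit 32: delay M1's path by Td
ya = y14 + y22              # adding circuit 35

print(float(np.max(np.abs(ya))))  # ~0: speaker-sound component cancelled
```

A voice arriving from in front (louder at M2 and not matched by the same gain/delay pair) would not be cancelled by this addition, which is the point made in [0054].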
[0054]
Further, for the sound from the speaker SP, the amplitude of the output Y11 of the microphone M1, whose sound collecting surface faces the diaphragm 23 of the speaker SP, is larger than the amplitude of the output Y21 of the microphone M2, whose sound collecting surface faces the speaker H; conversely, for the voice uttered by the speaker H in front of the microphones M1 and M2, the amplitude of the output Y21 of the microphone M2 is larger than the amplitude of the output Y11 of the microphone M1.
Furthermore, since the amplification factor of the amplification circuit 33 is larger than that of the amplification circuit 30, the speech component from the speaker H included in the output Y23 becomes larger than the speech component from the speaker H included in the output Y14.
That is, the amplitude difference between the speech component from the speaker H included in the output Y14 and that included in the output Y23 is large, so even after the addition processing by the adding circuit 35, the signal corresponding to the speech uttered by the speaker H remains in the output Ya with sufficient amplitude.
[0055]
As described above, in the output Ya of the adding circuit 35, the voice component from the speaker SP is reduced while the voice component uttered by the speaker H, in front of the communication device A, toward the microphone substrate MB1 remains; the relative difference between the voice component from the speaker H, which is to be kept, and the voice component from the speaker SP, which is to be reduced, is therefore large in the output Ya. That is, even when the voice from the speaker H and the sound from the speaker SP occur simultaneously, only the sound component from the speaker SP is reduced while sufficient amplitude is maintained for the voice component from the speaker H. It is therefore possible to prevent the howling that would be caused by the microphones M1 and M2 picking up the audio output of the speaker SP.
[0056]
When the signal processing unit 10e performs the speaker sound cancellation processing, the voice from the speaker H is also canceled to some extent at the same time. For the amount of cancellation of the voice from the speaker H to be 3 dB or less, the difference between the sound pressure levels of the speaker sound collected by the microphones M1 and M2 needs to be 10 dB or more (see FIG. 13). If the diaphragm 23 of the speaker SP is regarded as a point sound source, then in order to obtain a sound pressure difference of 10 dB or more from the distance difference (X2 − X1) between the two microphones M1, M2 and the speaker SP, the distance X2 from the center of the diaphragm 23 to the sound hole F2 of the microphone M2 needs to be about 3.5 mm or more when the distance X1 from the center to the sound hole F1 of the microphone M1 is 1 mm.
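The 3.5 mm figure can be checked against the point-source model, in which sound pressure falls off as 1/r, so the level difference between the two microphones is 20·log10(X2/X1) dB (a sketch of the stated geometry):

```python
import math

def level_diff_db(x1, x2):
    """Sound pressure level difference between distances x1 < x2
    from a point source (1/r pressure falloff)."""
    return 20 * math.log10(x2 / x1)

x2_min = 1.0 * 10 ** (10 / 20)            # minimum X2 (mm) for 10 dB when X1 = 1 mm
print(round(x2_min, 2))                   # 3.16 -> "about 3.5 mm or more" has margin
print(round(level_diff_db(1.0, 3.5), 1))  # ~10.9 dB at X2 = 3.5 mm
```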
[0057]
Further, since the microphone M1 reliably collects the sound emitted by the speaker SP, the howling prevention processing by the signal processing unit 10e can be performed reliably.
Furthermore, since the sound collecting surface of the microphone M2 and the diaphragm of the speaker SP face in the same direction, the acoustic coupling between the speaker SP and the microphone M2 is reduced, and the microphone M2 does not easily collect the sound emitted by the speaker SP.
[0058]
Furthermore, since the rear air chamber Br is a tightly sealed space, the reverse-phase sound radiated from the back surface of the speaker SP into the rear air chamber Br is unlikely to leak out of it, and the adverse effect on the howling prevention processing of the signal processing unit 10e that would be caused by the microphones M1 and M2 collecting the reverse-phase sound can be suppressed.
[0059]
Next, the audio signal output from the signal processing unit 10e is output to the audio switch
unit 10c, and in the audio switch units 10b and 10c (see FIG. 3), the following processing is
performed to further prevent howling.
[0060]
First, the audio switch unit 10c takes in the output of the audio switch unit 10b as a reference signal and performs an operation on the output of the signal processing unit 10e, thereby further canceling the audio signal that has looped from the speaker SP into the microphones M1 and M2.
On the other hand, the voice switch unit 10b also takes in the output of the voice switch unit 10c as a reference signal and performs an operation on the output of the communication unit 10a, thereby canceling the voice signal that loops from the speaker into the microphone at the other party's communication device.
[0061]
Specifically, in the loop circuit formed by the speaker SP, the microphones M1 and M2, the signal processing unit 10e, the voice switch unit 10c, the communication unit 10a, the voice switch unit 10b, the amplifier unit 10d, and back to the speaker SP, the voice switch units 10b and 10c adjust the amount of loss in variable loss means (not shown) provided in the loop so that the loop gain becomes 1 or less, thereby preventing howling. Here, the smaller of the transmission signal and the reception signal is regarded as the less important one, and the transmission loss of the variable loss circuit inserted in the path carrying the smaller signal level is increased.
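The stability condition behind this adjustment can be illustrated numerically: howling requires a round-trip loop gain above unity (0 dB), so the variable loss must absorb whatever net gain the loop has. A hedged sketch with hypothetical stage gains (the numbers are not taken from the embodiment):

```python
def extra_loss_db(stage_gains_db):
    """Attenuation (dB) the variable loss means must insert so the
    round-trip loop gain does not exceed unity (0 dB in total)."""
    net_gain_db = sum(stage_gains_db)
    return max(0.0, net_gain_db)

# hypothetical loop: SP-to-mic acoustic coupling, processing, amplifier
print(extra_loss_db([-20.0, 6.0, 18.0]))  # 4.0
```

In the device this loss is steered into whichever of the transmit or receive path carries the smaller signal, since that path is momentarily the less important one.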
[0062]
Here, when the sound wave to be emitted is below a certain frequency f1, the speaker SP performs a piston motion in which the entire surface of the diaphragm 23 vibrates in the same phase, as shown in the amplitude velocity distribution of FIG. 14. When the sound wave to be radiated is at the frequency f1 or above, however, divided vibration occurs, in which a plurality of vibration areas having different phases are generated in the diaphragm.
[0063]
FIG. 16 shows the sound pressure characteristics when the speaker SP is attached to the standard baffle defined in JIS C5532. In the present embodiment, the lowest resonance frequency fo lies in the range of 500 to 600 Hz, the lowest frequency f1 at which divided vibration starts lies in the range of 800 to 900 Hz, and divided vibration occurs at frequencies higher than f1. With the standard baffle specified in JIS C5532, the lowest resonance frequency equivalent to the case where the speaker SP is attached to a baffle plate of infinite size (an ideal baffle plate) can be measured. The lowest resonance frequency fo measured in this way can therefore be regarded as the ideal characteristic of the speaker SP alone. In this embodiment a standard baffle defined in JIS C5532 is used, but even if a standard closed box defined in JIS C5532 is used, the lowest resonance frequency equivalent to the case where the speaker SP is attached to a baffle plate of infinite size (an ideal baffle plate) can be measured.
[0064]
Next, the influence of the piston motion and the divided vibration of the diaphragm 23 on the cancellation processing of the speaker sound will be described. As shown in FIGS. 15(a) and 15(b), the divided vibration is illustrated here by the two-part vibration arising in the diaphragm 23, in which two vibration regions G1 and G2 of mutually opposite phase form on either side of the phase inversion axis Za running in the diameter direction of the diaphragm 23.
[0065]
In FIGS. 17 and 18, the arrangement of the microphones M1 and M2 differs from that of the present invention: the microphones M1 and M2 are arranged in parallel to the side of the speaker SP. However, the relationship between the distances X1 and X2 from the center of the diaphragm 23 of the speaker SP to the sound holes F1 and F2 of the microphones M1 and M2 remains X1 < X2.
[0066]
First, the amplitude velocity distribution of FIG. 17 shows the case where the diaphragm 23 performs a piston motion near the lowest resonance frequency fo. The diaphragm 23 vibrates with the same phase and amplitude over its entire surface, so the sound waves radiated from the diaphragm 23 can be regarded as having no acoustic directivity in the positional relationship between the diaphragm 23 and the microphones M1 and M2. The sound wave emitted from the diaphragm 23 reaches the microphone M1 first, and then arrives at the microphone M2 delayed by the distance difference X10 (= X2 - X1) between the two microphones. Therefore, in order to make the amplitude difference and the phase difference of the speaker sounds collected by the two microphones M1 and M2 zero, the signal processing unit 10e may adjust the gain and delay time based on the distance difference X10 (= X2 - X1) between the two microphones.
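For this piston-motion case the adjustment can be sketched in continuous time: treating the diaphragm as a point source (an idealization), scaling the near-microphone signal by X1/X2 and delaying it by (X2 - X1)/c makes it cancel the far-microphone signal exactly. The distances and frequency below are illustrative:

```python
import math

C = 343_000.0  # speed of sound, in mm/s (assuming 343 m/s in air)

def tone_at(t, dist_mm, f=1000.0):
    """Spherical wave from a point source: 1/r amplitude, r/c delay."""
    return math.sin(2 * math.pi * f * (t - dist_mm / C)) / dist_mm

def residual(t, x1_mm, x2_mm, f=1000.0):
    """Far-mic signal minus the gain/delay-corrected near-mic signal."""
    g = x1_mm / x2_mm          # amplitude correction X1/X2
    tau = (x2_mm - x1_mm) / C  # extra travel time to the far mic
    return tone_at(t, x2_mm, f) - g * tone_at(t - tau, x1_mm, f)

# with the correct gain and delay the speaker sound cancels exactly
print(max(abs(residual(t, 1.0, 3.5)) for t in (0.001, 0.0023, 0.01)) < 1e-9)  # True
```

In the actual unit the operation runs on sampled signals inside the signal processing unit 10e; the continuous-time form only shows why a single gain and delay suffice when the source has no directivity.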
[0067]
Next, the amplitude velocity distribution of FIG. 18 shows the case where the diaphragm 23 performs two-part vibration near the frequency f1. The diaphragm 23 vibrates dividedly in the two vibration areas G1 and G2 with the phase inversion axis Za as a boundary, displacing in the + direction in the vibration area G1 and in the - direction in the vibration area G2. The sound waves emitted from the regions G1 and G2 therefore each carry an initial phase difference, and acoustic directivity arises between the vibration regions G1, G2 and the microphones M1, M2. In FIG. 18, the arrangement direction Zb of the microphones M1 and M2 is orthogonal to the phase inversion axis Za, so a distance difference X20 arises between the vibration areas G1 and G2 with respect to the microphones M1 and M2; in addition to the initial phase difference described above, an amplitude difference and a phase difference due to the distance difference X20 occur in the speaker sound collected by the microphones M1 and M2.
[0068]
Then, the amplitude difference and phase difference produced in the audio signals emitted from the vibration area G1 and collected by the microphones M1 and M2 differ from those produced in the audio signals emitted from the vibration area G2 and collected by the microphones M1 and M2. It is therefore difficult to cancel both of the sound waves emitted from the vibration regions G1 and G2 even if the gain and delay time of the signal processing unit 10e are adjusted, and the howling prevention effect is reduced.
[0069]
FIGS. 19(a) to 19(c) show, when the gain and delay time of the signal processing unit 10e are adjusted so that the cancellation amount of the speaker sound becomes substantially zero at 2 kHz during the piston movement shown in FIG. 17 and during the two-part vibration shown in FIG. 18, the phase difference between the signals Y14 and -Y23 input to the adding circuit 35 (FIG. 19(a)), the amplitude difference (FIG. 19(b)), and the cancellation amount of the speaker sound in the signal Ya output from the adding circuit 35 (FIG. 19(c)) (Ya1 to Ya3: characteristics during piston movement; Yb1 to Yb3: characteristics during two-part vibration).
During two-part vibration, the phase difference and amplitude difference of the signals Y14 and -Y23 input to the adding circuit 35 increase (especially around 3000 Hz to 4000 Hz), and the cancellation amount of the speaker sound decreases over a wider band than during piston movement, which adversely affects howling prevention.
[0070]
Therefore, in the present embodiment, the microphones M1 and M2 are juxtaposed along the vertical direction (front-rear direction) so as to face the front surface of the diaphragm 23 of the speaker SP. As shown in the amplitude velocity distribution of FIG. 20, the vibration areas G1 and G2 are thereby substantially equidistant from the microphones M1 and M2, and the amplitude difference and phase difference of the input signals Y14 and -Y23 of the adding circuit 35, which would otherwise arise from the distance difference X20 between the vibration areas G1 and G2, are eliminated. Thus, the amplitude difference and phase difference produced in the audio signals emitted from the vibration area G1 and collected by the microphones M1 and M2 become approximately the same values as those produced in the audio signals emitted from the vibration area G2 and collected by the microphones M1 and M2; by adjusting the gain and delay time of the signal processing unit 10e, both of the sound waves emitted from the vibration regions G1 and G2 can be canceled. The howling prevention effect can thus be obtained even during divided vibration.
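The equidistance claim can be checked with a toy geometry: model the two opposite-phase regions of the two-part mode as point sources placed symmetrically about the diaphragm axis, and stack the microphones front-rear on that axis. All coordinates (in mm) are hypothetical, not dimensions of the embodiment:

```python
import math

# opposite-phase vibration regions G1, G2, symmetric about the axis
g1 = (6.5, 0.0, 0.0)
g2 = (-6.5, 0.0, 0.0)

# microphones stacked front-rear on the diaphragm axis (sound holes
# F1 and F2 at the hypothetical distances X1 = 1 mm and X2 = 3.5 mm)
m1 = (0.0, 0.0, 1.0)
m2 = (0.0, 0.0, 3.5)

# each microphone sees G1 and G2 at the same distance, so the
# distance difference X20 between the regions vanishes for both mics
assert math.isclose(math.dist(g1, m1), math.dist(g2, m1))
assert math.isclose(math.dist(g1, m2), math.dist(g2, m2))
print("equidistant for both microphones")
```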
[0071]
The microphone M1 has its sound collecting surface located in the front air chamber Bf, with the sound hole F1 opposed to the center of the diaphragm 23 of the speaker SP, while the microphone M2 has its sound collecting surface directed to the outside of the housing A1, with the sound hole F2 facing the outside (front) of the housing A1 in the same direction as the output direction of the speaker SP. Since the sound emitted by the speaker SP is reflected in the front air chamber Bf and diverges outside the housing A1, the microphone M2 collects the speaker sound at a sound pressure level smaller than that of the microphone M1 (for example, with a sound pressure difference of 10 dB or more), and collects the voice emitted by the speaker at a sound pressure level higher than that of the microphone M1 to output a voice signal. Therefore, in the signal processing unit 10e, the relative difference between the voice component from the speaker H, which is to be left, and the voice component from the speaker SP, which is to be reduced, is large, and the voice signal of the speaker can be maintained at a sufficient level while the cancellation amount of the speaker sound is secured.
[0072]
The arrangement of the microphones M1 and M2 is not limited to the configuration described above, in which they are arranged side by side in the front-rear direction facing the center of the diaphragm 23 of the speaker SP. Any arrangement is acceptable in which the microphones are disposed facing the front of the diaphragm and the microphone M2 collects the voice emitted by the speaker at a sound pressure level equal to or higher than that of the microphone M1, collects the speaker sound at a smaller sound pressure level than the microphone M1, and outputs a voice signal.
[0073]
For example, as shown in FIG. 21, the microphones M1 and M2 may be arranged in parallel in the direction (lateral direction) orthogonal to the front-rear direction on the front surface of the diaphragm 23. The microphones M1 and M2 are mounted on the rear surface 2a of the module substrate 2, and the rear surface 2a is disposed along the front outer side of the housing A1.
The microphone M1 is inserted through the opening on the front surface of the housing A1 so that its sound collecting surface is located in the front air chamber Bf and the sound hole F1 faces the center of the diaphragm 23 of the speaker SP. The microphone M2 is fitted in the recess provided on the front of the housing A1, and the sound hole F2 of the microphone M2, pierced in the module substrate 2, faces the sound hole 61 pierced in the front of the device main body A2 and thus faces the outside (front) of the housing A1 in the same direction as the output direction of the speaker SP. That is, with respect to the microphone M1 disposed to face the center of the diaphragm 23, the microphone M2 is disposed laterally offset on the front surface of the diaphragm 23.
[0074]
FIGS. 22(a) to 22(c) show, when the gain and delay time of the signal processing unit 10e are adjusted so that the cancellation amount of the speaker sound becomes substantially zero at 2 kHz during two-part vibration of the communication device A of the present invention, the phase difference between the signals Y14 and -Y23 input to the adding circuit 35 (FIG. 22(a)), the amplitude difference (FIG. 22(b)), and the cancellation amount of the speaker sound in the signal Ya output from the adding circuit 35 (FIG. 22(c)).
[0075]
Assuming that the radius of the diaphragm 23 of the speaker SP is 13 mm and that X10 denotes the distance difference between the two microphones M1 and M2 in this radial direction, in FIGS. 22(a) to 22(c) Yc1 to Yc3 indicate the characteristics at X10 = 0 mm (that is, when the microphone substrate MB1 shown in FIG. 1 is used), Yd1 to Yd3 those at X10 = 16 mm (that is, when the microphone M2 does not face the diaphragm 23 in the communication device A of FIG. 21), and Ye3 and Yf3 those at X10 = 8 mm and X10 = 12 mm respectively (that is, when the microphone M2 faces the diaphragm 23 in the communication device A of FIG. 21).
The smaller the distance difference X10, the smaller the phase difference and amplitude difference between the signals Y14 and -Y23 input to the adding circuit 35, and the wider the frequency band over which the cancellation amount of the speaker sound in the signal Ya output from the adding circuit 35 is secured.
[0076]
If the distance difference X10 is smaller than the radius of the diaphragm 23 (that is, if both microphones M1 and M2 are arranged to face the front surface of the diaphragm 23), the cancellation amount of the speaker sound is secured even during divided vibration. Therefore, if the microphone M2 is disposed in the area D facing the front of the diaphragm 23 (see FIG. 21) with respect to the microphone M1 disposed facing the center of the diaphragm 23, the vibration areas G1 and G2 are substantially equidistant from the microphones M1 and M2, the amplitude difference and phase difference of the input signals Y14 and -Y23 of the adding circuit 35 arising from the distance difference X20 between the vibration areas G1 and G2 can be eliminated, and the cancellation amount of the speaker sound by the signal processing unit 10e can be sufficiently secured.
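The condition can also be phrased in terms of the correction each region demands. Under a spherical-wave (1/r) assumption, a source is cancelled by mapping the M1 signal onto the M2 signal with gain r1/r2 and delay (r2 - r1)/c; one setting of the signal processing unit 10e removes both regions only if the two regions demand the same correction. A sketch with hypothetical coordinates (mm):

```python
import math

C = 343_000.0  # speed of sound in mm/s (assuming 343 m/s in air)

def gain_delay(src, mic1, mic2):
    """Gain and delay mapping the mic1 signal of a spherical wave
    from src onto the mic2 signal: gain = r1/r2, delay = (r2-r1)/C."""
    r1, r2 = math.dist(src, mic1), math.dist(src, mic2)
    return r1 / r2, (r2 - r1) / C

g1, g2 = (6.5, 0.0, 0.0), (-6.5, 0.0, 0.0)  # opposite-phase regions

# mics stacked front-rear on the diaphragm axis (this embodiment)
axial = ((0.0, 0.0, 1.0), (0.0, 0.0, 3.5))
# mics side by side across the phase inversion axis (as in FIG. 18)
lateral = ((5.0, 0.0, 1.0), (21.0, 0.0, 1.0))

# axial pair: both regions demand the same correction, so a single
# gain/delay setting cancels both; the lateral pair does not
print(gain_delay(g1, *axial) == gain_delay(g2, *axial))      # True
print(gain_delay(g1, *lateral) == gain_delay(g2, *lateral))  # False
```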
[0077]
Furthermore, even when the position of the microphone M1 is also moved within the region D, so that both microphones M1 and M2 are arbitrarily arranged in the region D facing the front surface of the diaphragm 23, the vibration areas G1 and G2 remain substantially equidistant from the microphones M1 and M2 as long as both microphones are disposed in the region D, and the amplitude difference and phase difference of the input signals Y14 and -Y23 of the adding circuit 35 arising from the distance difference X20 between the vibration areas G1 and G2 during two-part vibration can be eliminated.
[0078]
In addition to the two-part vibration described above, there are also a four-part vibration, in which four vibration regions G11 to G14 are generated in the diaphragm 23 as shown in FIG. 23, and a circular division vibration, in which a plurality of concentric vibration areas G21 and G22 are generated in the diaphragm 23 as shown in FIG. 24. If the microphones M1 and M2 are both arranged in the area D facing the front surface of the diaphragm 23, as in the case of the two-part vibration, the plurality of vibration regions are substantially equidistant from the microphones M1 and M2, and the amplitude differences and phase differences produced in the audio signals emitted from each vibration region and collected by the microphones M1 and M2 take substantially the same values. By adjusting the gain and delay time of the signal processing unit 10e, the sound waves emitted from the plurality of vibration regions can then all be canceled, and the howling prevention effect can be obtained.
[0079]
If the microphone M1 has its sound collecting surface located in the front air chamber Bf with the sound hole F1 opposed to the diaphragm 23 of the speaker SP, and the microphone M2 has its sound collecting surface facing the outside of the housing A1 with the sound hole F2 facing the outside (front) of the housing A1 in the same direction as the output direction of the speaker SP, then the sound emitted by the speaker SP is reflected in the front air chamber Bf and diverges outside the housing A1. The microphone M2 therefore collects the speaker sound at a sound pressure level smaller than that of the microphone M1 (for example, with a sound pressure difference of 10 dB or more), and can collect the voice emitted by the speaker at a sound pressure level equal to or higher than that of the microphone M1 and output a voice signal.
Therefore, the voice signal of the speaker can be maintained at a sufficient level while securing the amount of cancellation of the speaker sound by the signal processing unit 10e.
[0080]
As for the relative positions of the microphones M1 and M2, with the distances from the center of the diaphragm 23 of the speaker SP to the sound holes F1 and F2 of the microphones M1 and M2 denoted X1 and X2 respectively, X1 < X2 was assumed in the above embodiment. However, even if X1 ≧ X2, as shown in FIG. 25, if the microphone M1 places its sound collecting surface in the front air chamber Bf with the sound hole F1 opposed to the diaphragm 23 of the speaker SP, and the microphone M2 is disposed so that its sound collecting surface faces the outside of the housing A1 and the sound hole F2 faces the outside (front) of the housing A1 in the same direction as the output direction of the speaker SP, the voice signal of the speaker can similarly be maintained at a sufficient level while securing the amount of cancellation of the speaker sound by the signal processing unit 10e.
[0081]
Further, when both the microphones M1 and M2 have their sound collecting surfaces directed into the housing A1 and both the sound holes F1 and F2 are disposed toward the center of the diaphragm 23 of the speaker SP as shown in FIG. 26, if the microphone M1 is arranged closer to the diaphragm 23 of the speaker SP than the microphone M2 (X1 < X2), the microphone M2 collects the speaker sound at a sound pressure level smaller than that of the microphone M1 (for example, with a sound pressure difference of 10 dB or more), and can collect the voice emitted by the speaker at a sound pressure level higher than that of the microphone M1 and output a voice signal.
Therefore, the voice signal of the speaker can be maintained at a sufficient level while securing the amount of cancellation of the speaker sound by the signal processing unit 10e.
[0082]
FIG. 1 is a side cross-sectional view showing the structure of the telephone apparatus of the embodiment.
FIGS. 2(a) and 2(b) are perspective views showing the structure of the above.
FIG. 3 is a circuit diagram showing the structure of the audio processing unit of the above.
FIG. 4 is a side cross-sectional view showing the structure of the microphone substrate of the above.
FIG. 5 is a side cross-sectional view showing the structure of the bare chip of the above.
FIG. 6(a) is a simplified plan view showing the configuration of the microphone substrate of the above, and FIG. 6(b) is a simplified circuit diagram.
FIG. 7 is a circuit diagram of the impedance conversion circuit of the above.
FIG. 8 is a circuit block diagram of the signal processing unit of the above.
FIGS. 9(a) and 9(b) are signal waveform diagrams of the signal processing unit of the above.
FIGS. 10(a) and 10(b) are signal waveform diagrams of the signal processing unit of the above.
FIGS. 11(a) and 11(b) are signal waveform diagrams of the signal processing unit of the above.
FIGS. 12(a) to 12(c) are signal waveform diagrams of the signal processing unit of the above.
FIG. 13 is a diagram showing the cancellation characteristic of the speaker sound of the above.
FIG. 14 is a perspective view showing the movement of the diaphragm under piston movement of the above.
FIGS. 15(a) and 15(b) are perspective views showing the movement of the diaphragm under two-part vibration of the above.
FIG. 16 is a diagram showing the sound pressure characteristic of the speaker attached to the baffle board of the above.
FIG. 17 is a diagram showing propagation of the sound wave during piston movement in a structure different from the present invention.
FIG. 18 is a diagram showing propagation of the sound wave during divided vibration in a structure different from the present invention.
FIGS. 19(a) to 19(c) are diagrams showing the characteristics during piston movement and divided vibration of the above.
FIG. 20 is a diagram showing propagation of the sound wave during divided vibration of the embodiment.
FIG. 21 is a side cross-sectional view showing another structure of the telephone apparatus of the embodiment.
FIGS. 22(a) to 22(c) are diagrams showing the characteristics during divided vibration of the above.
FIG. 23 is a perspective view showing the movement of the diaphragm under four-part vibration of the above.
FIG. 24 is a perspective view showing the movement of the diaphragm under circular division vibration of the above.
FIG. 25 is a schematic view showing another arrangement of the microphones of the above.
FIG. 26 is a schematic view showing another arrangement of the microphones of the above.
Explanation of Reference Signs
[0083]
A: communication device, MJ: call module, A1: housing, M1, M2: microphones, SP: speaker, 23: diaphragm, 10: audio processing unit