Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2014030248
Abstract: [Problem] To enable voice amplification during a telephone call without discomfort, and to solve the problem that contact sound makes it difficult to hear the conversational speech emitted from a handset. [Solution] A voice processing apparatus 1 includes an earphone 2 having a speaker unit 24, a microphone 26, a cord 21, and a cord holding portion 21A, and a main body 3 to which the cord 21 is connected. The main body 3 processes the sound signal collected by the microphone 26 and outputs the processed signal to the speaker unit 24. The earphone 2 has a housing 20 that houses the microphone 26, the housing 20 is provided with a microphone hole 27, and the cord holding portion 21A is arranged at a position facing the microphone hole 27. [Selected figure] Figure 1
Voice processing device
[0001]
The present invention relates to an audio processing device.
[0002]
There is known an electronic apparatus having a hearing aid function of capturing external sound with a microphone, amplifying the resulting audio signal, and outputting the amplified audio signal to a speaker unit of an earphone (see Patent Document 1 below).
In this type of electronic device, a pair of earphones is connected to the main body, and a microphone and a speaker unit are built into each earphone. The main body is provided with an amplifier for amplifying the electric signal of the microphone built into the earphone, and with an operation unit, such as a switch, for variably adjusting the speaker volume balance.
[0003]
Japanese Utility Model Publication No. 04-061996
[0004]
In the electronic device described above, when the user wears the earphone in the ear and brings the speaker unit of a telephone receiver close to that ear, the speaker unit of the receiver approaches the microphone in the earphone housing. The sound emitted from the speaker unit of the handset can therefore be amplified and emitted from the speaker unit of the earphone while the handset is used in a natural manner.
[0005]
Thus, in order to better hear the speech emitted from the speaker unit of the handset, the user may bring the speaker unit of the handset closer to the microphone of the earphone.
At this time, the receiver or the like may touch the microphone of the earphone and generate a contact sound, and this contact sound may be amplified by the hearing aid body and emitted from the speaker unit of the earphone.
This contact sound is not only unpleasant for the user because it is amplified and emitted from the speaker unit; it also makes it difficult to hear the conversational speech emitted from the speaker unit of the handset, so that the user may miss important matters.
[0006]
The present invention addresses such problems as an example. That is, an object of the present invention is to provide a voice processing apparatus in which the microphone is housed in the housing of the earphone, in which voice amplification during a telephone call can be performed without discomfort, and in which the problem that the conversational speech emitted from the receiver becomes hard to hear can be avoided.
[0007]
In order to achieve such an object, the speech processing apparatus according to the present invention has at least the following configuration: an earphone including a speaker unit and a microphone; a main body electrically connected to the earphone, the main body processing an audio signal collected by the microphone and outputting the processed audio signal to the speaker unit; a housing for housing the speaker unit and the microphone; and a cord for electrically connecting the main body and the earphone. The housing includes a first housing portion for housing the speaker unit and a second housing portion for housing the microphone, the cord is drawn out of the housing through a cord holding portion of the housing, the second housing portion extends along the cord holding portion, and the cord holding portion is disposed at a position facing the microphone hole of the second housing portion.
[0008]
FIG. 1 is an explanatory view showing the overall configuration of a voice processing device according to an embodiment of the present invention. FIG. 2 is an explanatory view showing the earphone structure of the voice processing device according to the embodiment of the present invention; FIG. 2 (a) is an explanatory view showing the external structure, and FIG. 2 (b) is a cross-sectional view showing the internal structure. FIG. 3 is an explanatory view showing the circuit configuration of the voice processing device according to the embodiment of the present invention. FIG. 4 is an explanatory view showing mode switching according to the use environment of the voice processing device according to the embodiment of the present invention, and shows the characteristics of a plurality of different band pass filters. FIGS. 5 to 9 are explanatory views showing the output sound pressure frequency characteristics of the speaker unit in the voice processing device according to the embodiment of the present invention. FIG. 10 is an explanatory view showing the structure of the connection terminal portion of an earphone and the connected terminal portion of the main body. FIG. 11 is an explanatory view showing a specific structural example of the connection terminal portions of the earphone and the attached earphone. FIG. 12 is an explanatory view showing the main part of the voice processing device according to the embodiment of the present invention.
[0009]
Hereinafter, embodiments of the present invention will be described with reference to the drawings. The embodiments of the present invention include the illustrated embodiment but are not particularly limited thereto.
[0010]
[Overall Configuration] FIG. 1 is an explanatory view showing an overall configuration of a voice
processing apparatus according to an embodiment of the present invention. The voice processing
device 1 includes an earphone 2 and a main body 3. The earphone 2 includes a housing 20 and a
cord 21. The cord 21 is pulled out of the housing 20 via a cord holding portion 21A. A
connection terminal 22 is provided at the end of the cord 21. Attached to the housing 20 of the
earphone 2 are an auricle connection 23A for keeping the housing 20 airtight to the user's ear,
and an auricle contact 23B for bringing the housing 20 into contact with the inside of the auricle.
[0011]
The earphone 2 includes a speaker unit and a microphone in a housing 20 as described later. The
earphone 2 includes a single housing 20, and the single housing 20 houses one speaker unit and
one omnidirectional microphone.
[0012]
A single cord 21 of the earphone 2 contains signal lines that electrically connect the speaker unit and the microphone to the main body 3. Each signal line may, for example, be a lead wire. The cord 21 is composed of conducting wires and an insulating member (resin member) that insulates the conducting wires from the outside. The insulating member is flexible so as to be easy for the user to handle. The cord holding portion 21A is formed of a member (resin member) that covers the cord 21 and has bending rigidity. This member has a larger bending rigidity than the insulating member of the cord 21. While the user wears the earphone 2, the cord 21 vibrates relative to the user's ear, but the cord holding portion 21A, which has a larger bending rigidity than the cord 21, maintains a predetermined gap with respect to the microphone hole 27. That is, since the cord holding portion 21A has relatively high rigidity, contact between the cord holding portion 21A and the microphone hole 27 is suppressed.
[0013]
The main body 3 has an audio signal processing circuit that is electrically connected to the earphone 2, processes (amplifies or attenuates) the audio signal collected by the microphone, and outputs it to the speaker unit. The main body 3 includes a housing (main body housing) 4 for housing the audio signal processing circuit. The housing 4 includes a mode switching switch (changeover switch) 41 for switching the mode of the use environment described later, a volume adjustment wheel 42, and a power switch 43 for turning the power on and off. The changeover switch 41, the volume adjustment wheel 42, and the power switch 43 are disposed in recesses 4B (4B1, 4B2, 4B3) provided in the housing side surface 4A of the main body 3, so that the changeover switch 41, the volume adjustment wheel 42, and the power switch 43 do not protrude from the housing side surface 4A of the main body 3.
[0014]
The main body 3 includes a first light source 4C and a second light source 4D in a housing 4. The
user selects a mode corresponding to a plurality of different band pass filters with the mode
switching switch 41 according to the user's own use environment. At this time, the first light
source 4C emits different emission colors in accordance with the mode switched by the mode
switching switch 41. Further, the user presses the power switch 43 to turn on / off the power of
the main body 3. At this time, the second light source 4D is turned on / off in response to the
power on / off of the main body 3.
[0015]
The main body 3 includes a connected terminal portion 4E to which the connection terminal portion 22 of the earphone 2 is connected. The connected terminal portion 4E can also be connected to the attached earphone 2A in place of the earphone 2. The attached earphone 2A is provided with a pair (two) of the housings 20, cords 21 and the like of the earphone 2 described above, and includes a housing 20R attached to the right ear of the user and a housing 20L attached to the left ear. By connecting the connection terminal portion 22A of the attached earphone 2A to the connected terminal portion 4E of the main body 3, the speaker units and the microphones in the housings 20R and 20L are electrically connected to the main body 3.
[0016]
The audio processing device 1 having such a configuration includes the earphone 2 with a single housing 20 in which one speaker unit and one microphone are accommodated, a single cord 21 accommodating the signal lines that electrically connect the speaker unit and the microphone to the main body 3, and a connection terminal portion 22 provided at the end of the cord 21 for connecting the aforementioned signal lines to the main body 3. By connecting the earphone 2 to the main body 3, the user can wear the housing 20 on either the left or the right ear and use the voice processing device 1.
[0017]
According to this, the user can use the voice processing device 1 with one ear left open. As a result, the blocked, uncomfortable feeling the user would have if both ears were covered by earphones can be eliminated. That is, the user can listen to the sound processed (including amplification or attenuation) by the voice processing device 1 with the ear on which the earphone 2 is worn, while listening to environmental sound with the open ear. The user can thus listen to the processed (amplified or attenuated) audio with much the same feeling as when the audio processing apparatus 1 is not used.
[0018]
The single housing 20 houses an omnidirectional microphone. As a result, even when the single housing 20 of the earphone 2 is attached to one ear, the omnidirectional microphone collects surrounding sound, this sound is processed (including amplification or attenuation), and the processed sound can be output to the speaker unit in the single housing 20.
[0019]
The changeover switch 41, the volume adjustment wheel 42, and the power switch 43 of the main body 3 are disposed so as not to protrude from the housing side surface 4A of the main body 3. Specifically, taking the housing side surface 4A of the main body 3 as a boundary surface, the changeover switch 41, the volume adjustment wheel 42, and the power switch 43 are disposed on the main body 3 side of that boundary surface; even the parts of these controls closest to the boundary surface lie on the main body 3 side. Therefore, even when the main body 3 is used while placed in a pocket of the user's clothes or the like, erroneous operation of the changeover switch 41, the volume adjustment wheel 42, and the power switch 43 can be avoided.
[0020]
[Earphone Structure] FIG. 2 is an explanatory view showing an earphone structure of the voice
processing device according to the embodiment of the present invention. FIG. 2 (a) is an
explanatory view showing an outer appearance structure, and FIG. 2 (b) is a cross-sectional view
showing an internal structure.
[0021]
The speaker unit 24 and the microphone 26 are accommodated in the housing 20 of the
earphone 2. The housing 20 includes a first housing portion 20A for housing the speaker unit 24
and a second housing portion 20B for housing the microphone 26. The first housing portion 20A
includes an acoustic emission hole 25 through which the sound wave of the speaker unit 24 is
emitted. Sound waves are emitted from the acoustic emission surface of the speaker unit 24. The
second accommodation unit 20B includes an internal space 20B1 on the side of the acoustic
passive surface 26A of the microphone 26. The internal space 20B1 of the second
accommodation portion 20B communicates with the outside through the microphone hole 27.
[0022]
The earphone 2 includes a sound conduit 28 disposed on the acoustic emission hole 25 side of the housing 20. The sound conduit 28 extends along the axis (central axis) of the acoustic emission hole 25. The auricle connection portion 23A is attached to the sound conduit 28 and is provided around the sound conduit 28. The auricle contact portion 23B is provided on the side surface of the housing on the auricle connection portion 23A side. By providing the auricle contact portion 23B, the sealing of the ear by the earphone can be improved. The speaker unit 24 has a known configuration and includes a vibration unit having a voice coil and a diaphragm, and a magnetic circuit. The diaphragm has an acoustic emission surface 24A. Note that an armature type (electromagnetic type) speaker unit may be adopted instead of the above-described speaker unit 24.
[0023]
The housing 20 includes a bending portion 20C which is bent from the first accommodation
portion 20A toward the second accommodation portion 20B. The first housing portion 20A
extends along the axis (central axis) of the acoustic radiation hole 25. The second
accommodation portion 20B extends from the bending portion 20C in a direction different from
that of the first accommodation portion 20A. The extending direction of the first accommodating
portion 20A and the extending direction of the second accommodating portion 20B cross each
other.
[0024]
The cord 21 is drawn out from a cord extraction hole 20D provided in the housing 20, passing through the cord holding portion 21A of the housing 20. Of the two ends of the cord holding portion 21A, the portion on one end side is supported inside the housing 20, and the portion on the other end side projects from the cord extraction hole 20D toward the outside of the housing 20.
[0025]
The second accommodation portion 20B extends from the bending portion 20C toward the cord holding portion 21A, along the projecting direction of the cord holding portion 21A; that is, the second accommodation portion 20B extends along the cord holding portion 21A. In addition, the second accommodation portion 20B includes the microphone hole 27. The cord holding portion 21A is disposed at a position facing the microphone hole 27 of the second accommodation portion 20B; in the illustrated example, the cord holding portion 21A faces the opening of the microphone hole 27. The cord holding portion 21A has a larger bending rigidity than the cord 21. A gap 20S is provided between the side surface of the cord holding portion 21A and the microphone hole 27. In the illustrated example, the cord holding portion 21A extends in a straight line while the second accommodation portion 20B extends in a curved shape, so that the gap 20S is formed between the cord holding portion 21A and the side surface of the second accommodation portion 20B.
[0026]
One of the ends of the cord 21 is disposed in the housing 20. The cord 21 is disposed on the
opposite side of the acoustic emission hole 25 side of the housing 20. The axis (central axis) of
the acoustic passive surface 26A of the microphone 26 intersects the axis (central axis) of the
acoustic radiation hole 25. The microphone hole portion 27 opens toward the cord 21 or the
cord holding portion 21A.
[0027]
In the voice processing device 1 provided with the earphone 2 having such a configuration, since the cord holding portion 21A is disposed at a position facing the microphone hole 27, contact of the user's hand or clothing with the microphone hole 27 can be suppressed, and the microphone 26 can be prevented from picking up a contact sound that would be unpleasant for the user.
[0028]
In particular, when the user wants to bring a telephone receiver (or the main body of a mobile telephone) close to the housing 20 so that the sound emitted from the receiver is processed (amplified or attenuated), the cord holding portion 21A of the earphone 2 prevents the receiver from contacting the microphone hole 27.
Since contact between the receiver and the microphone hole 27 is suppressed, the contact sound that would arise when the receiver touches the microphone hole 27 is not processed (including amplification or attenuation) and emitted from the speaker unit 24 of the earphone 2. Furthermore, since contact between the receiver and the microphone hole 27 is suppressed, the discomfort that the user would receive from the processed (amplified or attenuated) contact sound can also be suppressed. In addition, the problem that such a contact sound makes it difficult to hear the sound emitted from the speaker unit of the receiver, so that important matters are missed, can be avoided.
[0029]
One end (the end on the housing 20 side) of the cord holding portion 21A is supported by the
housing 20. The cord holding portion 21A itself has bending rigidity. Further, a gap 20S is
provided between the cord holding portion 21A and the housing 20. For this reason, even if the
cord holding portion 21A is pressed by a receiver or the like, the cord holding portion 21A itself
and the cord 21 can be prevented from coming into contact with the microphone hole portion
27.
[0030]
Further, in the sound processing device 1 according to the embodiment of the present invention, the axis of the acoustic passive surface 26A of the microphone 26 housed in the housing 20 of the earphone 2 intersects the axis of the acoustic emission surface of the speaker unit 24. In the present embodiment, the axis of the acoustic passive surface 26A of the microphone 26 intersects the axis of the acoustic emission hole 25, which lies along the axis of the acoustic emission surface of the speaker unit 24. Here, "to intersect" means not only that the axis of the acoustic passive surface 26A and the axis of the acoustic emission surface intersect in three-dimensional space, but also that they cross when one of the two axes is projected onto a two-dimensional plane containing the other axis. The microphone hole 27 provided in the housing 20 opens toward the cord 21 or the cord holding portion 21A. Therefore, it is possible to suppress the generation of howling (an oscillation phenomenon) caused by vibration generated by driving the speaker unit 24 propagating to the microphone 26 through the housing 20. In addition, since the microphone hole 27 opens toward the cord 21 or the cord holding portion 21A, the sounding direction of the telephone handset can be pointed directly at the microphone hole 27, and the sound emitted from the receiver can be reliably collected by the microphone 26. The sound wave that has entered the microphone hole 27 passes through the internal space 20B1 and vibrates the acoustic passive surface 26A, so that the microphone 26 can collect the sound.
[0031]
The housing 20 of the earphone 2 includes the first housing portion 20A in which the speaker unit 24 is housed and the second housing portion 20B in which the microphone 26 is housed. The first accommodation portion 20A and the second accommodation portion 20B extend in different directions, so that the speaker unit 24 and the microphone 26 can be separated; by providing a predetermined distance between the speaker unit 24 and the microphone 26, the above-mentioned howling can be avoided. Further, the second accommodation portion 20B extends along the cord 21, so that when the user wears the earphone 2, the microphone 26 is positioned below the user's ear. Thereby, when talking on a mobile phone (for example, a smartphone-type mobile phone), the user's own voice is collected by the microphone of the earphone, the user can hear how loudly he or she is speaking, and can therefore speak at an appropriate volume.
[0032]
[Circuit Configuration] FIG. 3 is an explanatory view showing the circuit configuration of the voice processing device according to the embodiment of the present invention. The main body 3 has an audio signal processing circuit 30 that processes (amplifies or attenuates) the audio signal collected by the microphone 26 and outputs the processed signal to the speaker unit 24. The audio signal processing circuit 30 includes an audio signal input unit 31 and an audio signal output unit 37. When the connection terminal portion 22 of the earphone 2 is connected to the connected terminal portion 4E of the main body 3, the audio signal sent from the microphone 26 to the microphone terminal 22M through the signal line 21M is input to the audio signal input unit 31, and the processed (amplified or attenuated) audio signal output from the audio signal output unit 37 is sent to the speaker unit 24 via the speaker terminal 22S and the signal line 21S.
[0033]
The audio signal processing circuit 30 includes, for example, a preamplifier 32 that processes (including amplification or attenuation) the audio signal output from the audio signal input unit 31, a switching circuit 33, a band pass filter 34, a volume adjustment unit (slide volume) 35, a power amplifier 36, and the audio signal output unit 37. Further, a power supply circuit 38 for supplying the drive voltage Vcc to the audio signal processing circuit 30 is provided. A battery 38A is connected to the power supply circuit 38, and a power shutoff unit 38B is provided between the battery 38A and the power supply circuit 38.
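The following is a minimal software sketch of the signal chain just described (preamplifier 32, band pass filter 34 selected by the switching circuit 33, volume adjustment unit 35, power amplifier 36). It is an illustration only, assuming a block-based digital model; the gain values and the filter callable are placeholders, not figures from the patent, which describes an analog/mixed circuit.

```python
def process_block(mic_samples, bandpass, pre_gain=10.0, volume=0.5, power_gain=4.0):
    """Emulate the chain of [0033] on a NumPy array of microphone samples.

    preamplifier 32 -> band pass filter 34 (chosen by switching circuit 33)
    -> volume adjustment unit 35 -> power amplifier 36 -> audio signal output unit 37
    """
    x = pre_gain * mic_samples   # preamplifier 32 (amplify or attenuate)
    x = bandpass(x)              # selected band pass filter 34A-34D
    x = volume * x               # volume adjustment unit (slide volume) 35
    return power_gain * x        # power amplifier 36, then to the speaker unit 24
```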
[0034]
The audio signal processing circuit 30 is operated by the operation signal from the operation
unit 40. The operation unit 40 outputs operation signals obtained by the mode switching switch
41, the volume adjustment wheel 42, the power switch 43, and the sound pressure balance
adjustment operation unit 46 described above. The audio signal processing circuit 30 may not
include the power supply circuit 38, and the main body 3 may include the power supply circuit
38.
[0035]
The operation unit 40 sends a switching operation signal from the mode switching switch 41 to the switching circuit 33, sends an adjustment operation signal from the volume adjustment wheel 42 to the volume adjustment unit 35, and sends an on/off operation signal from the power switch 43 to the power shutoff unit 38B. Further, the adjustment signal from the sound pressure balance adjustment operation unit 46 is sent to the sound pressure balance adjustment unit 39.
[0036]
When the attached earphone 2A described later is connected to the main body 3, the sound pressure balance adjustment unit 39 has the function of adjusting the sound pressure balance between the speaker unit 24 (R) for the right ear and the speaker unit 24 (L) for the left ear of the attached earphone 2A.
[0037]
The control means (for example, a microcomputer) 50 can determine which of the earphone 2 and the attached earphone 2A is connected to the main body 3, as described later. When it detects that the earphone 2 is connected to the main body 3, the control means sends a signal to the sound pressure balance adjustment unit 39 to turn off (stop) the adjustment function of the sound pressure balance adjustment unit 39.
[0038]
The switching circuit 33 selectively switches the plurality of different band pass filters 34 (34A
to 34D) according to the switching operation signal input by the mode switching switch 41.
[0039]
The volume adjustment unit 35 variably controls the volume of the audio signal according to the
adjustment signal input by the volume adjustment wheel 42.
The power shutoff unit 38B connects and shuts off the battery 38A and the power supply circuit
38 according to the on / off operation signal input by the power switch 43.
[0040]
The plurality of different band pass filters 34 may be incorporated in the audio signal processing circuit 30 either in a changeable state or in a state in which one or more selected filters are fixed. Here, the fixed state means that the characteristic of the band pass filter 34 corresponding to each mode cannot be changed after it has been incorporated in the audio signal processing circuit 30.
[0041]
[Mode Switching According to Use Environment (Characteristics of the Band Pass Filters)] FIG. 4 is an explanatory view showing mode switching according to the use environment in the voice processing apparatus according to the embodiment of the present invention, together with the characteristics of the plurality of different band pass filters corresponding to the modes. The voice processing device 1 according to the embodiment of the present invention can selectively switch to the operation mode suited to each use environment.
[0042]
In order to realize this mode switching, the band pass filter 34 of the audio signal processing circuit 30 includes a plurality of different band pass filters 34A, 34B, 34C, 34D, and the switching circuit 33 can selectively switch to one of them. The plurality of different band pass filters 34A, 34B, 34C, 34D can be set, for example, for a telephone mode, a conversation mode, a normal mode, and a television mode.
[0043]
The first band pass filter 34A (for the telephone mode), which is one of the plurality of different band pass filters 34A to 34D, has, for example, the characteristic shown in FIG. 4 (a). The first band pass filter 34A has a characteristic of selectively passing the voice emitted from the handset of a telephone (in the frequency band from about 300 Hz to about 3400 Hz), and is suitable for a use environment in which the speaker unit of the handset is brought close to the microphone hole 27 of the earphone 2 and the sound emitted by the handset is processed (including amplification or attenuation).
[0044]
The first band pass filter 34A is composed of a low pass filter having a cutoff frequency C1a of about 2000 to about 3000 Hz (about 2500 Hz in the illustrated example), a high pass filter having a cutoff frequency C1b of about 300 to about 800 Hz (about 700 Hz in the illustrated example), and an equalizer having a center frequency C1c of about 700 Hz to about 1200 Hz (about 1000 Hz in the illustrated example). The first band pass filter 34A thus has a filter characteristic in which the pass band between the cutoff frequencies C1b and C1a is relatively narrow. The frequency P1 at which the amplification factor is maximum lies between about 1000 Hz and about 2000 Hz.
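As a rough digital approximation of this telephone-mode characteristic, a low pass stage near C1a (~2500 Hz) can be cascaded with a high pass stage near C1b (~700 Hz). The sketch below is illustrative only: the sampling rate, filter order, and Butterworth responses are assumptions, and the equalizer stage centered near C1c (~1000 Hz) described in the text is omitted.

```python
import numpy as np
from scipy import signal

fs = 16000  # assumed sampling rate, Hz

# Cascade of a low pass near C1a and a high pass near C1b: passes roughly 700-2500 Hz.
lowpass = signal.butter(2, 2500, btype="low", fs=fs, output="sos")
highpass = signal.butter(2, 700, btype="high", fs=fs, output="sos")
telephone_mode = np.vstack([lowpass, highpass])

def apply_telephone_mode(x):
    """Filter a block of microphone samples with the telephone-mode band pass sketch."""
    return signal.sosfilt(telephone_mode, x)
```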
[0045]
By switching to the first band pass filter 34A having such a filter characteristic, the voice emitted from the telephone receiver can be selected and processed (including amplification or attenuation). Also, since sounds outside the frequency band selected by the first band pass filter 34A, such as environmental sounds, are not processed (including amplification or attenuation), the sound emitted from the receiver can be heard even if noise is generated around the user. The frequency band of the voice emitted from a telephone receiver is 300 Hz to 3400 Hz.
[0046]
The second band pass filter 34B (for the conversation mode), which is one of the plurality of different band pass filters 34A to 34D, has, for example, the characteristic shown in FIG. 4 (b). The second band pass filter 34B is composed of a low pass filter having a cutoff frequency C2a (about 4900 Hz in the illustrated example) larger than the cutoff frequency C1a (about 2500 Hz in the illustrated example) of the low pass filter of the first band pass filter 34A, a high pass filter having a cutoff frequency C2b (about 150 Hz in the illustrated example) smaller than the cutoff frequency C1b (about 700 Hz in the illustrated example) of the high pass filter of the first band pass filter 34A, and an equalizer having a center frequency C2c of about 700 Hz to about 1200 Hz (about 1000 Hz in the illustrated example). The frequency P2 at which the amplification factor is maximum lies between about 900 Hz and about 2000 Hz (about 1000 Hz in the illustrated example). Further, the maximum amplification factor of the second band pass filter 34B is smaller than that of the first band pass filter 34A.
[0047]
By switching to the second band pass filter 34B having such a characteristic, conversational speech can be processed (including amplification or attenuation) effectively in a use environment in which conversation is the main purpose. In particular, when the device is used in a conference room or the like, conversational speech is effectively processed (including amplification or attenuation), while noise outside the band selected by the second band pass filter 34B is not processed (including amplification or attenuation), so conversational speech can be heard clearly even if there is noise in the surroundings. Also, the lowest frequency passed by the second band pass filter (the lowest frequency at the smallest amplification factor) is smaller than that of the third band pass filter described later, so the second band pass filter has the characteristic of passing voice in a lower frequency band than the third band pass filter.
[0048]
The third band pass filter 34C (for the normal mode), which is one of the plurality of different band pass filters 34A to 34D, has, for example, the characteristic shown in FIG. 4 (c). The third band pass filter 34C is composed of a low pass filter having a cutoff frequency C3a (about 4400 Hz in the illustrated example) smaller than the cutoff frequency C2a (about 4900 Hz in the illustrated example) of the low pass filter of the second band pass filter 34B, and a high pass filter having a cutoff frequency C3b (about 300 Hz in the illustrated example) larger than the cutoff frequency C2b (about 150 Hz in the illustrated example) of the high pass filter of the second band pass filter 34B. The frequency P3 at which the amplification factor is maximum lies between about 2000 Hz and about 3000 Hz. Further, the maximum amplification factor of the third band pass filter 34C is smaller than that of the second band pass filter 34B.
[0049]
When the device is switched to the third band pass filter 34C having such a characteristic, necessary sounds or voices can be processed (including amplification or attenuation) effectively in a normal use environment in which human voices, musical instrument sounds, and the like are the main sounds.
[0050]
The fourth band pass filter 34D (for the television mode), which is one of the plurality of different band pass filters 34A to 34D, has, for example, the characteristic shown in FIG. 4 (d). The fourth band pass filter 34D is composed of a low pass filter having a cutoff frequency C4a (about 5300 Hz in the illustrated example) larger than the cutoff frequency C3a (about 4400 Hz in the illustrated example) of the low pass filter of the third band pass filter 34C, and a high pass filter having a cutoff frequency C4b (about 49 Hz in the illustrated example) smaller than the cutoff frequency C3b (about 300 Hz in the illustrated example) of the high pass filter of the third band pass filter 34C. The frequency P4 at which the amplification factor is maximum lies between about 3000 Hz and about 4000 Hz. Further, the maximum amplification factor of the fourth band pass filter 34D is smaller than that of the third band pass filter 34C.
[0051]
By switching to the fourth band pass filter 34D having such a characteristic, necessary sound can be processed (including amplification or attenuation) effectively in a use environment requiring a wide frequency range. In particular, since the sound of a television covers a wide frequency range of about 5 Hz to about 20 kHz, the mode switched to the fourth band pass filter 34D is suitable for watching television. Further, by matching the range between the cutoff frequencies C4b and C4a to about 50 Hz to about 15 kHz, which is the frequency range of music, it is also suitable for listening to music.
[0052]
FIGS. 5 to 9 show the output sound pressure frequency characteristics of the speaker unit in the audio processing device according to the embodiment of the present invention. In FIGS. 5 to 8, the output sound pressure frequency characteristic of the speaker unit when each of the first to fourth band pass filters is adopted is shown by a curve, and a reference line connecting the point (10000 Hz, 100 dB) and the point (10 Hz, 40 dB) is indicated by a dashed line. Here, a range in which the sound pressure is higher than the reference line is defined as a convex relief and is surrounded by a broken line in the drawings. The characteristic of the output sound pressure frequency characteristic when each band pass filter is adopted is described based on the magnitude of this relief. As shown in FIGS. 5 to 8, the relief in the output sound pressure frequency characteristic of the speaker unit changes according to the first to fourth band pass filters.
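The test implied by this definition can be sketched as follows, assuming the reference line is drawn on a logarithmic frequency axis between (10 Hz, 40 dB) and (10000 Hz, 100 dB); points of the measured characteristic lying above the line belong to the convex relief. This is an illustrative reading of the figures, not a procedure stated in the patent.

```python
import math

def reference_line_db(freq_hz):
    """Level of the dashed reference line, interpolated linearly over log10(frequency)."""
    f1, l1 = 10.0, 40.0        # (10 Hz, 40 dB)
    f2, l2 = 10000.0, 100.0    # (10000 Hz, 100 dB)
    t = (math.log10(freq_hz) - math.log10(f1)) / (math.log10(f2) - math.log10(f1))
    return l1 + t * (l2 - l1)

def is_convex_relief(freq_hz, spl_db):
    """True when the measured output sound pressure lies above the reference line."""
    return spl_db > reference_line_db(freq_hz)
```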
[0053]
FIG. 5 shows the output sound pressure frequency characteristics when the mode corresponding
to the first band pass filter is selected. In the illustrated example, the sound pressure gradually
increases from about 20 Hz to about 130 Hz. In the frequency band of about 130 Hz to about
300 Hz, the output sound pressure frequency characteristic is flat. The sound pressure gradually
increases from about 300 Hz, and there is a peak in the frequency band of about 1600 Hz to
about 2000 Hz. In the frequency band higher than the frequency at which the sound pressure
peaks, the sound pressure decreases to about 20000 Hz. Also, convex undulations are seen in the
frequency band between about 50 Hz and about 800 Hz. It can be seen that this convex-shaped
relief is smaller than those in FIGS. 6 and 8 described later.
[0054]
FIG. 6 shows the output sound pressure frequency characteristics when the mode corresponding
to the second band pass filter is selected. In the illustrated example, the sound pressure gradually
increases from about 20 Hz to about 1600 Hz. The output sound pressure frequency
characteristics are peaked in the frequency band of about 100 Hz to about 500 Hz. The output
sound pressure frequency characteristic is flat in the frequency band from about 1600 Hz to
about 4000 Hz. In the frequency band from about 4000 Hz to about 20000 Hz, the sound
pressure gradually decreases. Also, convex undulations are seen in the frequency band between
about 50 Hz and about 800 Hz. It can be seen that this convex-shaped relief is larger than those in FIG. 5 and in FIG. 7 described later.
[0055]
FIG. 7 shows the output sound pressure frequency characteristics when the mode corresponding
to the third band pass filter is selected. In the illustrated example, the sound pressure gradually
increases from about 20 Hz to about 4000 Hz. The ridge seen in the frequency band of about 130 Hz to about 300 Hz in FIG. 6 cannot be seen in FIG. 7. In the frequency band from about 4000 Hz to about 20000 Hz, the sound pressure gradually decreases. Also, convex undulations are seen in the frequency band between about 50 Hz and about 800 Hz. It can be seen that this convex-shaped relief is smaller than those in FIG. 5 and in FIG. 8 described later.
[0056]
FIG. 8 shows the output sound pressure frequency characteristics when the mode corresponding
to the fourth band pass filter is selected. In the illustrated example, the sound pressure gradually
increases from about 20 Hz to about 63 Hz. The rate of increase of the sound pressure rises sharply in the frequency band from about 63 Hz to about 125 Hz. In the frequency band from
about 125 Hz to about 1000 Hz, the output sound pressure frequency characteristic is flat. The
sound pressure gradually increases from about 1000 Hz to about 4000 Hz. In the frequency
band from about 4000 Hz to about 20000 Hz, the sound pressure gradually decreases. Also,
convex undulations are seen in the frequency band between about 50 Hz and about 800 Hz. It
can be seen that this convex relief is larger than those in FIGS. 5 and 7.
[0057]
FIG. 9 shows a schematic diagram of the output sound pressure frequency characteristics corresponding to the modes described above. The line B1 in the figure is the output sound pressure frequency characteristic when the first band pass filter is selected; from this line B1 it can be seen that, when the mode corresponding to the first band pass filter is selected, processed sound centered on about 1000 Hz, the main frequency range of voice, is output from the speaker unit. From the magnitude of the convex relief, it can be grasped what amplification factor is obtained for human voices, television sound, and music.
[0058]
The line B2 in the figure is the output sound pressure frequency characteristic when the second band pass filter is selected; from this line B2 it can be seen that, when the mode corresponding to the second band pass filter is selected, sound extending further into the lower band is output from the speaker unit, so that voices at a meeting or the like can be heard more clearly.
[0059]
The line B3 in the figure is the output sound pressure frequency characteristic when the third band pass filter is selected; from this line B3 it can be seen that, when the mode corresponding to the third band pass filter is selected, the necessary voices can be heard effectively in the use environment of ordinary daily life and the like.
[0060]
The line B4 in the figure shows the output sound pressure frequency characteristic when the fourth band pass filter is selected; from this line B4 it can be seen that, when the mode corresponding to the fourth band pass filter is selected, lower-range sound can be heard better than when the mode corresponding to the second band pass filter is selected. Therefore, since this mode covers the sound of television (about 5 Hz to about 20 kHz) and the frequency range of music (about 50 Hz to about 15 kHz), it is suitable for watching television and listening to music.
[0061]
The switching of the plurality of different band pass filters 34A to 34D can be performed by the
operation of the mode switching switch 41 as described above.
At this time, the operation unit 40 outputs, for example, an operation signal for sequentially switching the plurality of different band pass filters 34A to 34D each time the mode switching switch 41 is pressed.
[0062]
The main body 3 includes a switching display unit 45 as shown in FIG. 3 as an example. The
switching display unit 45 outputs a display signal such that the first light source 4C exhibits
different emission colors corresponding to the plurality of different band pass filters 34A to 34D
switched by the mode switching switch 41. According to this, the user can visually recognize the
mode of the use environment currently set by looking at the emission color of the first light
source 4C provided in the housing 4 of the main body 3.
[0063]
The mode switching switch 41 has a normal switching operation of sequentially switching the plurality of different band pass filters 34A, 34B, 34C, 34D. For example, when the mode switching switch 41 is pressed while the currently set band pass filter 34 is the first band pass filter 34A, the filter is switched to one of the second band pass filter 34B, the third band pass filter 34C, and the fourth band pass filter 34D. The order in which the plurality of different band pass filters 34A, 34B, 34C, 34D are switched may be changed; for example, they may be switched in this order, or in the order of the band pass filters 34B, 34A, 34C, 34D.
[0064]
In addition, the mode switching switch 41 has a specific switching operation of switching directly to the first band pass filter 34A from any one of the second band pass filter 34B, the third band pass filter 34C, and the fourth band pass filter 34D. For example, when the mode switching switch 41 is pressed twice in succession or pressed and held, the filter is switched to the first band pass filter 34A regardless of whether the currently set band pass filter 34 is the second band pass filter 34B, the third band pass filter 34C, or the fourth band pass filter 34D. According to this, since it is possible to switch to the mode of the telephone use environment (the telephone mode; the mode corresponding to the first band pass filter) by a single operation or a short series of operations, the user can switch to the telephone mode immediately even when a telephone call arrives suddenly. In addition, a sensor such as an infrared sensor may be provided in the earphone 2 so that the sensor detects that a telephone receiver approaches the earphone 2 and the mode is switched to the telephone use environment. In this case, since the mode is switched without the user pressing the switching button, usability for the user is improved.
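A minimal sketch of this switching behaviour is shown below: a short press cycles through the modes in a fixed order, while a double press or a long press jumps straight to the telephone mode (first band pass filter 34A). The mode order, the starting mode, and the classification of presses (debouncing, timing) are assumptions for illustration.

```python
MODES = ["telephone_34A", "conversation_34B", "normal_34C", "television_34D"]

class ModeSwitch:
    def __init__(self):
        self.index = 2  # assume the device starts in the normal mode

    def short_press(self):
        """Normal switching operation: advance to the next mode in order."""
        self.index = (self.index + 1) % len(MODES)
        return MODES[self.index]

    def double_or_long_press(self):
        """Specific switching operation: switch directly to the telephone mode."""
        self.index = 0
        return MODES[self.index]
```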
[0065]
Further, as shown in FIG. 3 as an example, the main body 3 includes a switching notification unit 44 related to the operation of the mode switching switch 41. The switching notification unit 44 notifies the user that the volume of the audio signal output to the speaker unit 24 will increase or decrease as the mode switching switch 41 is operated. Specifically, during the switching period of the band pass filter 34 by the changeover switch 41, the switching notification unit 44 temporarily cuts off the audio signal that is processed (including amplification or attenuation) in the audio signal processing circuit 30 and output to the speaker unit 24, and the main body 3 causes the speaker unit 24 to output a notification sound instead. According to this, when switching the mode according to the use environment, the user can know in advance that the volume output from the speaker unit 24 will increase or decrease. In particular, the discomfort of a sudden volume increase can be avoided.
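One way to picture this behaviour in software is sketched below: while the filter is being switched, the processed signal is muted and a short notification tone is sent to the speaker unit instead. The tone frequency, duration, and amplitude are illustrative assumptions, not values from the patent.

```python
import numpy as np

def switching_notification(fs=16000, duration_s=0.2, tone_hz=1000.0):
    """Generate a short notification tone to play during the filter switching period."""
    t = np.arange(int(fs * duration_s)) / fs
    return 0.2 * np.sin(2 * np.pi * tone_hz * t)

def output_block(processed_block, switching, fs=16000):
    """While switching, cut off the processed signal and emit the notification sound."""
    return switching_notification(fs) if switching else processed_block
```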
[0066]
[Connection Terminal Portion / Connected Terminal Portion] FIG. 10 is an explanatory view showing the configuration of the connection terminal portion of an earphone and the connected terminal portion of the main body. FIG. 10 (a) shows a state in which the earphone 2 (first earphone) is connected to the main body 3, and FIG. 10 (b) shows a state in which the attached earphone (second earphone) 2A is connected to the main body 3.
[0067]
The earphone 2 has a single housing 20 in which one speaker unit 24 and one microphone 26 are accommodated, and its connection terminal portion 22 includes a microphone terminal 22M and a speaker terminal 22S. The connection terminal portion 22 also includes non-connection terminals T1 and T2, which are not connected to the speaker unit or the microphone. On the other hand, the attached earphone 2A includes two housings 20 (a housing 20R for the right ear and a housing 20L for the left ear), each of which accommodates one speaker unit 24 and one microphone 26. Its connection terminal portion 22A includes a microphone terminal 22M (R) and a speaker terminal 22S (R) for the right ear, and a microphone terminal 22M (L) and a speaker terminal 22S (L) for the left ear.
[0068]
On the other hand, the connected terminal portion 4E of the main body 3 includes speaker output terminals (speaker terminals) 4E1 and 4E2 and microphone input terminals (microphone terminals) 4E3 and 4E4. When the connection terminal portion 22 of the earphone 2 provided with the single housing 20 is connected to the connected terminal portion 4E, one of the speaker output terminals 4E1 and 4E2 (the speaker output terminal 4E2 in the illustrated example) is connected to the speaker terminal 22S, and one of the microphone input terminals 4E3 and 4E4 (the microphone input terminal 4E3 in the illustrated example) is connected to the microphone terminal 22M. Further, the other of the speaker output terminals 4E1 and 4E2 (the speaker output terminal 4E1 in the illustrated example) is connected to the non-connection terminal T2, and the other of the microphone input terminals 4E3 and 4E4 (the microphone input terminal 4E4 in the illustrated example) is connected to the non-connection terminal T1.
[0069]
When the connection terminal portion 22A of the attached earphone 2A is connected to the connected terminal portion 4E, the microphone terminals 22M (R) and 22M (L) of the connection terminal portion 22A are connected to the microphone input terminals 4E3 and 4E4 of the connected terminal portion 4E, respectively, and the speaker terminals 22S (R) and 22S (L) of the connection terminal portion 22A are connected to the speaker output terminals (speaker terminals) 4E2 and 4E1 of the connected terminal portion 4E, respectively.
[0070]
FIG. 11 is an explanatory view showing a specific configuration example of the connection terminal portions of the earphone and the attached earphone. FIG. 11 (a) shows a specific configuration example of the connection terminal portion 22 of the earphone 2, and FIG. 11 (b) shows a specific configuration example of the connection terminal portion 22A of the attached earphone 2A. As shown in the figure, the connection terminal portion 22 of the earphone 2 and the connection terminal portion 22A of the attached earphone 2A each have a pin-like shape, have substantially the same external dimensions such as terminal diameter, and can each be connected to the connected terminal portion 4E of the main body 3.
[0071]
The connection terminal portion 22A of the attached earphone 2A includes six terminals: the speaker terminals 22S (R) and 22S (L), the microphone terminals 22M (R) and 22M (L), a speaker ground terminal 22G1 serving as a ground, and a microphone ground terminal 22G2. These terminals are disposed on the tip end side of the connection terminal portion 22A and on the cord 21 side, with the microphone ground terminal 22G2 as the boundary. That is, the speaker terminals 22S (R) and 22S (L) on the tip end side of the connection terminal portion 22A are electrically connected to the speaker terminals 22S (R)-1 and 22S (L)-1 on the cord 21 side, respectively. Similarly, the microphone terminals 22M (R) and 22M (L) on the tip end side of the connection terminal portion 22A are electrically connected to the microphone terminals 22M (R)-1 and 22M (L)-1 on the cord 21 side, respectively.
[0072]
On the other hand, the connection terminal portion 22 of the earphone 2 has the same terminal structure, and includes the speaker terminal 22S corresponding to the one speaker unit, the microphone terminal 22M corresponding to the one microphone, and two terminals (the non-connection terminals T1 and T2) that are not connected to the one speaker unit or the one microphone. The non-connection terminals T1 and T2 correspond to the microphone terminal 22M (L) and the speaker terminal 22S (L) of the connection terminal portion 22A of the attached earphone 2A, but are not connected to any speaker unit or microphone.
[0073]
Further, the connection terminal portion 22 has a speaker ground terminal 22G1 and a microphone ground terminal 22G2 in the same manner as the connection terminal portion 22A. The speaker terminal 22S on the tip end side of the connection terminal portion 22 is electrically connected to the speaker terminal 22S-1 on the cord 21 side, and the microphone terminal 22M on the tip end side is electrically connected to the microphone terminal 22M-1 on the cord 21 side.
[0074]
The non-connection terminal T1-1 on the cord side is electrically connected to the microphone ground terminal 22G2-1 on the cord side via a wiring (conductor line or the like) Sp. As a result, the non-connection terminal T1 and the microphone ground terminal 22G2 are electrically connected, and the non-connection terminal T1 is in a short-circuited (grounded) state. Although the non-connection terminal T1 is short-circuited in the illustrated example, the non-connection terminal T2 may be short-circuited instead.
[0075]
When the control means (for example, a microcomputer) 50 of the main body 3 detects this short circuit, it detects that the earphone 2 is connected to the main body 3. At that time, the control means 50 controls the sound pressure balance adjustment unit 39, which adjusts the sound pressure balance of the audio signals output to the left and right speaker output terminals 4E1 and 4E2, and turns off (stops) the sound pressure balance adjustment function.
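The detection logic just described can be sketched as follows, assuming the microcomputer 50 can read whether the terminal mated with T1 is shorted to the microphone ground; the read_short_to_mic_ground and balance_adjuster helpers are hypothetical names introduced only for this illustration.

```python
def on_earphone_plugged(read_short_to_mic_ground, balance_adjuster):
    """Decide which earphone type is connected and configure the balance adjustment unit 39."""
    single_earphone = read_short_to_mic_ground()  # T1 is tied to 22G2 only on the earphone 2
    if single_earphone:
        balance_adjuster.set_enabled(False)   # earphone 2: turn off (stop) balance adjustment
    else:
        balance_adjuster.set_enabled(True)    # attached earphone 2A: keep left/right balance
    return single_earphone
```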
[0076]
FIG. 12 is an explanatory view showing the main part of the voice processing device according to the embodiment of the present invention. FIG. 12 (a) shows the rear surface structure of the main body, and FIG. 12 (b) shows the battery insertion portion of the main body. As shown in the figure, a holding clip 4F for holding the main body 3 on the user's clothes is provided on the rear side of the main body 3. Further, the main body 3 includes a battery insertion portion 4H into which a battery or a rechargeable battery can be inserted. The battery insertion portion 4H is covered by a lid 4G. When the lid 4G is opened, as shown in FIG. 12 (b), the battery 38A, the plus and minus terminals 38A1 and 38A2 in electrical contact with the battery 38A, and a wheel-shaped sound pressure balance adjustment operation unit 46 placed at a position where the user can adjust it, for example, manually, are exposed.
[0077]
The sound pressure balance adjustment operation unit 46 sends an adjustment signal to the sound pressure balance adjustment unit 39 to adjust the balance between the output sound pressures of one speaker unit and the other speaker unit of the two speaker units included in the attached earphone 2A. By adjusting the sound pressure balance adjustment operation unit 46, the sound output from one speaker unit can be increased while the sound output from the other speaker unit is reduced, and by turning the sound pressure balance adjustment operation unit 46 fully toward one speaker unit side, sound can be output from only one of the two speaker units. Specifically, according to the adjustment state of the sound pressure balance adjustment operation unit 46, the sounds output from the one and the other speaker units are made larger or smaller by, for example, changing the value of a variable resistance of the sound pressure balance adjustment unit 39. In the above description, the sound pressure balance adjustment in the sound pressure balance adjustment unit 39 is performed by adjusting the value of a variable resistance, but the sound pressure balance may instead be adjusted by digital signal processing using a DSP (digital signal processor).
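As a hedged illustration of the DSP-style variant, the sketch below maps a wheel position to left and right gains so that turning fully to one side outputs sound from only one of the two speaker units. The linear mapping and the [-1, 1] wheel range are assumptions for illustration; the patent leaves the concrete implementation (variable resistance or DSP) open.

```python
def balance_gains(wheel_position):
    """wheel_position: -1.0 = left speaker unit only, 0.0 = equal, +1.0 = right only."""
    p = max(-1.0, min(1.0, wheel_position))  # clamp the adjustment state
    left_gain = (1.0 - p) / 2.0
    right_gain = (1.0 + p) / 2.0
    return left_gain, right_gain
```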
[0078]
As described above, the sound pressure balance adjustment unit 39 turns on (operates) or off
(stops) the balance adjustment function according to the adjustment signal from the control
means 50. That is, the main body 3 can stop the adjustment by the sound pressure balance
adjustment operation unit 46 for the audio signal output from the audio signal processing circuit
30 to the speaker unit of the first earphone (earphone 2). This operation is described in detail
below.
[0079]
The sound pressure balance adjustment unit 39 of the main body 3 includes, in order to support the attached earphone 2A having the housing 20R attached to the user's right ear and the housing 20L attached to the left ear, an adjustment circuit for the right ear that adjusts the audio signal collected by the microphone of the housing 20R and outputs it to the speaker unit of the housing 20R, and an adjustment circuit for the left ear that adjusts the audio signal collected by the microphone of the housing 20L and outputs it to the speaker unit of the housing 20L. When the earphone 2, in which one speaker unit and one microphone are accommodated, is connected to the main body 3, one of the adjustment circuit for the right ear and the adjustment circuit for the left ear is selected, so that, for example, the audio signal collected by the microphone of the earphone 2 is input only to the adjustment circuit for the right ear.
[0080]
At this time, a user who uses the attached earphone 2A connected to the main body 3 may, for example, have operated the sound pressure balance adjustment operation unit 46 so that sound is output only from the speaker unit for the left ear (that is, so that no sound is output from the speaker unit for the right ear). In such a case, when the earphone 2, in which one speaker unit and one microphone are housed in one housing, is connected to the main body 3, the audio signal is input only to the adjustment circuit for the right ear; if the sound pressure balance adjustment unit 39 is still functioning, the adjustment circuit for the right ear produces no output, so no sound can be heard from the speaker unit of the earphone 2, and the user may misinterpret this as a malfunction of the main body.
[0081]
Therefore, in the embodiment of the present invention, as described above, when the control means 50 detects that the earphone 2, in which one speaker unit and one microphone are housed in one housing, is connected to the main body 3, it sends an adjustment signal to the sound pressure balance adjustment unit 39 and turns off (stops) the adjustment function of the sound pressure balance adjustment unit 39. As a result, when the earphone 2 is connected to the main body 3 and the audio signal is input only to the adjustment circuit for the right or the left ear of the sound pressure balance adjustment unit 39, an audio signal is always output to the speaker unit of the earphone 2 regardless of how the sound pressure balance adjustment operation unit 46 is set. This solves the problem of no sound being output from the speaker unit when the earphone 2 is connected to the main body 3.
[0082]
In the embodiment of the present invention, as described above, when the control means 50 detects that the earphone 2, in which one speaker unit and one microphone are accommodated in one housing, is connected to the main body 3, it sends a signal to the sound pressure balance adjustment unit 39 and turns off (stops) the adjustment function of the sound pressure balance adjustment unit 39, thereby solving the problem of no sound being output from the speaker unit when the earphone 2 is connected to the main body 3. The sound processing apparatus 1 is also configured to detect that the attached earphone 2A is connected to the main body 3, and when the control means 50 detects that the attached earphone 2A is connected to the main body 3, it sends a signal to the sound pressure balance adjustment unit 39 to turn on (operate) the adjustment function of the sound pressure balance adjustment unit 39. As a result, the sound pressure balance adjustment function is restored when the attached earphone 2A is connected to the main body 3.
[0083]
Here, when the adjustment function of the sound pressure balance adjustment unit 39 is turned off, the audio signal is output to the speaker unit without undergoing sound pressure adjustment. To realize such a control operation, there may be, for example, a method of making the audio signal bypass the sound pressure balance adjustment unit 39 in the signal path, or a method of skipping the processing step in which the sound pressure balance adjustment unit 39 digitally processes the audio signal; any specific method may be used.
[0084]
When the connection terminal portion 22A of the attached earphone 2A having the two housings 20 is connected to the connected terminal portion 4E, the microphone terminals 22M (R) and 22M (L) of the connection terminal portion 22A are connected to the microphone input terminals 4E3 and 4E4 of the connected terminal portion 4E, respectively, and the speaker terminals 22S (R) and 22S (L) of the connection terminal portion 22A are connected to the speaker output terminals 4E2 and 4E1 of the connected terminal portion 4E, respectively. At this time, the audio signal output from the audio signal processing circuit 30 to the speaker output terminals 4E2 and 4E1 may be a monaural signal or a stereo signal. In the case of a stereo signal, an audio signal obtained by processing (including amplification or attenuation) the audio signal input to the microphone terminal 22M (R) is output to the speaker output terminal 4E2, and an audio signal obtained by processing (including amplification or attenuation) the audio signal input to the microphone terminal 22M (L) is output to the speaker output terminal 4E1.
[0085]
Although the embodiments of the present invention have been described in detail with reference to the drawings, the specific configuration is not limited to these embodiments, and design changes and the like that do not depart from the scope of the present invention are included in the present invention. In addition, the embodiments described above can be combined with each other by utilizing each other's techniques, as long as there is no particular contradiction or problem in purpose, configuration, and the like.
[0086]
1: voice processing device, 2: earphone, 3: main body, 20: housing, 20A, 20B: housing portions, 21: cord, 21A: cord holding portion, 24: speaker unit, 26: microphone, 27: microphone hole portion, 30: audio signal processing circuit