Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JPH0879897
[0001]
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a
hearing aid device suitable for use in, for example, a hearing aid that assists the hearing of a
hearing impaired person, such as an elderly person with reduced hearing or a deaf person.
[0002]
2. Description of the Related Art A hearing aid is a device for assisting the hearing
(hearing ability) of a hearing impaired person so as to provide a better living environment.
A hearing aid comprises, for example, a small microphone, an amplifier, and an earphone, but such a
hearing aid simply amplifies and outputs the sound input to the microphone. The output therefore
contains a great deal of noise, and the voice of the conversation partner or sounds that should be
noted (important environmental sounds) may be buried in that noise, so such a hearing aid could not
be said to sufficiently support the hearing of a hearing impaired person.
[0003]
Hearing aids have therefore been devised that amplify the sound input to the microphone after
passing it through a band-pass filter that extracts the mid band, taking advantage of the fact
that the energy of human speech is concentrated in a specific frequency band (the mid band).
Even with such a hearing aid, however, it was hard to say that the voice of the conversation
partner or the sounds to be noted could be heard comfortably and clearly.
03-05-2019
1
[0004]
On the other hand, recent advances in digital signal processing devices have made it possible to
miniaturize digital circuits and processors, and such techniques are also being applied to the
field of hearing aids.
In a hearing aid that applies digital signal processing, the analog audio signal is A/D converted
into a digital signal, and this digital signal is then subjected to digital processing such as
filtering (filtering by a digital filter), noise removal, and frequency-domain processing, thereby
enhancing audibility.
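The chain just described (A/D conversion followed by digital filtering) can be sketched in a few lines. This is a minimal illustration of the idea, not the patent's circuitry; the bit depth, filter taps, and function names are all assumptions for the example.

```python
def adc(samples, bits=12, vmax=1.0):
    """A/D conversion: quantize analog values in [-vmax, vmax] to signed integers."""
    q = (1 << (bits - 1)) - 1  # e.g. 2047 for 12 bits
    return [max(-q, min(q, round(s / vmax * q))) for s in samples]

def fir_filter(x, taps):
    """Digital filtering stage: direct-form FIR convolution."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * x[n - k]
        y.append(acc)
    return y

# Example: digitize a short ramp, then smooth it with a 2-tap averaging filter.
digital = adc([0.0, 0.5, 1.0])
smoothed = fir_filter(digital, [0.5, 0.5])
```

In a real device the filtered digital signal would then go to noise removal and frequency-domain processing before D/A conversion, as the text describes.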
[0005]
Here, FIG. 10 shows the configuration of an example of a conventional hearing aid. In this
hearing aid, the microphone 301 first picks up surrounding voices and other target sounds,
converts them into an electrical signal, and outputs it as an original audio signal A11.
This original audio signal A11 is supplied to the analog filter 108, which passes only the
midrange, where the frequency distribution of human speech is concentrated, and cuts everything
else. As a result, a midrange audio signal A12 is output from the analog filter 108. The
midrange audio signal A12 is supplied to the A/D converter 109, where it is A/D converted into
an audio signal A13, a digital signal.
[0006]
The audio signal A13 is supplied to the memory 302 and temporarily stored. The memory 302 is
connected to a digital signal processor (DSP) 303 via a signal bus, and the DSP 303 performs, for
example, digital filtering, noise removal, frequency component decomposition such as FFT (fast
Fourier transform), and frequency-domain processing on the audio signal stored in the memory 302.
The audio signal subjected to such signal processing is supplied from the memory 302 to the D/A
converter 117 as a processed audio signal A15. The D/A converter 117 D/A converts the processed
audio signal A15, a digital signal, into an analog audio signal A16. The analog audio signal A16
is supplied to the amplifier 118 and amplified. The amplified audio signal A17 is then supplied
from the amplifier 118 to the earphone 304 and output from it. In this way, the sound input to the
microphone 301 reaches the ear of the user (hearing impaired person).
[0007]
However, in the hearing aid described above, frequency components considered equivalent to human
voice are extracted from the sound input to the single microphone 301 in order to enhance
audibility. As a result, both the voice of the conversation partner and the voices of other people
are amplified, making it difficult for the user to pick out the voice of the conversation partner.
[0008]
Furthermore, since external sounds having the same frequency components as the human voice are
amplified without being distinguished from the voice, there is also the problem that the
resulting sound is difficult for the user to hear.
[0009]
Further, environmental sounds such as car horns, alarm sounds, and telephone bells are important
for daily life (important sounds), and it is desirable to be able to hear them at all times; with
the above-mentioned hearing aids, however, there is a risk of missing such an important sound.
[0010]
The present invention has been made in view of such a situation, and its object is to make the
voice of the conversation partner and the target sounds to be noted (important sounds) audible
comfortably and clearly.
[0011]
SUMMARY OF THE INVENTION A hearing aid device according to the present invention comprises:
nondirectional environmental sound input means (for example, the nondirectional microphones
102L and 102R shown in FIG. 3) for inputting environmental sounds; environmental sound
processing means (for example, the environmental sound processing circuit 106 shown in FIG. 3)
for processing the environmental sound input to the environmental sound input means; voice input
means having a predetermined directivity (for example, the directional microphone 107 shown in
FIG. 3) for inputting the voice of the conversation partner; voice processing means (for example,
the voice processing circuit 111 shown in FIG. 3) for processing the voice input to the voice
input means; amplification means (for example, the volume control circuit 116 shown in FIG. 3)
for amplifying the output of at least one of the environmental sound processing means and the
voice processing means; and reproduction means (for example, the D/A converter 117, the amplifier
118, and the speakers 119L and 119R shown in FIG. 3) for reproducing the output of the
amplification means.
[0012]
The hearing aid device may further comprise selection means (for example, the selection circuit
115 shown in FIG. 3) for selecting the output of either the environmental sound processing means
or the voice processing means and supplying the selected output to the amplification means.
The device may additionally comprise operating means (for example, the manual switch 114 shown in
FIG. 3) operated to select the output of either the environmental sound processing means or the
voice processing means.
[0013]
The environmental sound processing means may comprise: pattern storage means (for example, the
environmental sound pattern generation circuit 213 shown in FIG. 5) storing the patterns of
important sounds, i.e., environmental sounds that are important; and important sound
determination means (for example, as shown in FIG. 5) that determines whether an environmental
sound is an important sound by comparing the pattern of the environmental sound input to the
environmental sound input means with the patterns of the important sounds stored in the pattern
storage means. When the important sound determination means determines that the environmental
sound is an important sound, the selection means can be forced to select the output of the
environmental sound processing means.
When the environmental sound processing means further comprises environmental sound level
detection means (for example, the threshold circuit 214 shown in FIG. 5) for detecting the level
of the environmental sound input to the environmental sound input means, the selection means can
be forced to select the output of the environmental sound processing means when the environmental
sound is determined to be an important sound by the important sound determination means and the
level of the environmental sound detected by the environmental sound level detection means is
equal to or higher than a predetermined level.
[0014]
The hearing aid device described above may also comprise weighting means (for example, the
multipliers 122a and 122b and the adder 123 shown in FIG. 4) for calculating the weighted sum of
the outputs of the environmental sound processing means and the voice processing means and
supplying it to the amplification means.
Furthermore, it may comprise operating means (for example, the manual switch 114 shown in FIG. 3)
operated to make the weighting that the weighting means applies to the output of either the
environmental sound processing means or the voice processing means larger or smaller.
[0015]
The environmental sound processing means may comprise: pattern storage means (for example, the
environmental sound pattern generation circuit 213 shown in FIG. 5) storing the patterns of
important sounds, i.e., environmental sounds that are important; and important sound
determination means (for example, as shown in FIG. 5) that determines whether an environmental
sound is an important sound by comparing the pattern of the environmental sound input to the
environmental sound input means with the patterns of the important sounds stored in the pattern
storage means. When the important sound determination means determines that the environmental
sound is an important sound, the weighting means can forcibly increase the weighting applied to
the output of the environmental sound processing means, or reduce the weighting applied to the
output of the voice processing means.
When the environmental sound processing means further comprises environmental sound level
detection means (for example, the threshold circuit 214 shown in FIG. 5) for detecting the level
of the environmental sound input to the environmental sound input means, the weighting means can
forcibly increase the weighting applied to the output of the environmental sound processing
means, or reduce the weighting applied to the output of the voice processing means, when the
environmental sound is determined to be an important sound by the important sound determination
means and the level of the environmental sound detected by the environmental sound level
detection means is equal to or higher than a predetermined level.
[0016]
The voice processing means may comprise: voice recognition means (for example, the voice
recognition circuit 222 shown in FIG. 6) for recognizing the voice input to the voice input
means; separation means (for example, the voice recognition circuit 222 and the phoneme
classification circuit 223 shown in FIG. 6) for separating the voice into phonemes based on the
recognition result of the voice recognition means; phoneme processing means (for example, the
vowel processing circuit 224 and the consonant processing circuit 225 shown in FIG. 6) for
processing the phonemes; and speech synthesis means (for example, the speech synthesis circuit
226 shown in FIG. 6) that performs speech synthesis based on the output of the phoneme processing
means.
In addition, when the separation means classifies the phonemes into vowels and consonants, the
phoneme processing means may comprise vowel processing means (for example, the vowel processing
circuit 224 shown in FIG. 6) for processing the vowels and consonant processing means (for
example, the consonant processing circuit 225 shown in FIG. 6) for processing the consonants.
[0017]
The vowel processing means or the consonant processing means can emphasize the vowel or the
consonant, respectively.
[0018]
The voice processing means may further comprise silent part insertion means (for example, the
silence insertion circuit 227 shown in FIG. 6) for inserting a silent part between the phonemes
constituting the speech synthesized by the speech synthesis means.
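The phoneme path just described — separate recognized speech into vowels and consonants, process each class, then resynthesize with silent parts between phonemes — can be illustrated symbolically. The vowel inventory, the use of upper case to stand in for consonant emphasis, and the pause symbol are all assumptions made for this sketch.

```python
VOWELS = {"a", "i", "u", "e", "o"}  # assumed vowel inventory

def classify(phonemes):
    """Separation means: tag each phoneme as a vowel or a consonant."""
    return [("vowel" if p in VOWELS else "consonant", p) for p in phonemes]

def resynthesize(phonemes, pause="_"):
    """Phoneme processing and synthesis: emphasize consonants (upper case stands
    in for amplification) and insert a silent part between consecutive phonemes."""
    out = []
    for kind, p in classify(phonemes):
        out.append(p.upper() if kind == "consonant" else p)
        out.append(pause)
    return out[:-1]  # no trailing pause
```

Emphasizing consonants and separating phonemes with short silences are both ways of improving intelligibility for listeners who have trouble resolving rapid transitions.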
[0019]
Further, the voice processing means may comprise: voice conversion means (for example, the
Fourier transform circuit 232 shown in FIG. 7) for converting the voice input to the voice input
means into frequency components, i.e., signals on the frequency axis; frequency component
processing means (for example, the emphasis suppression processing circuit 233, the frequency
conversion circuit 235, and the harmonic component addition circuit 236 shown in FIG. 7) for
performing predetermined processing on the frequency components supplied from the voice
conversion means; and frequency component conversion means (for example, the inverse Fourier
transform circuit 237 shown in FIG. 7) for converting the frequency components supplied from the
frequency component processing means back into an audio signal, i.e., a signal on the time axis.
[0020]
The frequency component processing means can emphasize, suppress or modify predetermined
frequency components.
Also, the frequency component processing means can substitute or shift a predetermined
frequency component to another frequency component.
Furthermore, the frequency component processing means can add a predetermined frequency
component to the output of the speech conversion means.
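The transform–process–inverse-transform path described in the last two paragraphs can be sketched with a naive DFT. This is only an illustration of the principle (real devices would use an FFT and the circuits named above); the function names and the band-emphasis operation chosen here are assumptions.

```python
import math
import cmath

def dft(x):
    """Voice conversion means: time samples -> frequency components."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spec):
    """Frequency component conversion means: back to a time-axis signal."""
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def emphasize_band(x, lo, hi, gain):
    """Frequency component processing: scale bins lo..hi (and their mirrors) by gain."""
    spec = dft(x)
    n = len(spec)
    for k in range(n):
        f = min(k, n - k)  # fold negative frequencies onto positive bins
        if lo <= f <= hi:
            spec[k] *= gain
    return idft(spec)

# A cosine at bin 1 of an 8-point frame, emphasized by a factor of 2.
tone = [math.cos(2 * math.pi * t / 8) for t in range(8)]
boosted = emphasize_band(tone, 1, 1, 2.0)
```

Suppression, frequency shifting, and harmonic addition are all variations of the same pattern: modify `spec` between the forward and inverse transforms.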
[0021]
When the device further comprises A/D conversion means (for example, the A/D converters 104 and
109 shown in FIG. 3) for A/D converting the outputs of the environmental sound input means and
the voice input means into digital signals, the amplification means amplifies the digital signal
output from at least one of the environmental sound processing means and the voice processing
means, and the reproduction means D/A converts the output of the amplification means, amplifies
it, and outputs it.
Further, the amplification means can perform amplification according to the level of at least one
of the environmental sound input to the environmental sound input means and the voice input to
the voice input means.
[0022]
When the device further comprises parameter storage means (for example, the ROM 241 shown in
FIG. 8) storing the parameters necessary for processing in the environmental sound processing
means and the voice processing means, the environmental sound processing means and the voice
processing means can perform their processing using the parameters stored in the parameter
storage means. The parameter storage means can also be a removable nonvolatile memory.
[0023]
When the device further comprises reception means (for example, the light receiver 242 shown in
FIG. 9) for receiving the parameters necessary for processing in the environmental sound
processing means and the voice processing means, transmitted over a wired or wireless line, and
parameter storage means (for example, the RAM 244 shown in FIG. 9) for storing the parameters
received by the reception means, the environmental sound processing means and the voice
processing means can perform their processing using the parameters stored in the parameter
storage means.
[0024]
The environmental sound input means can be attached to the side of the user. When two
environmental sound input means are provided, they can be attached to the right side and the left
side of the user, respectively. Furthermore, the voice input means can be mounted so that the
direction of its directivity matches the direction of the conversation partner.
[0025]
In the hearing aid device configured as above, the environmental sound input to the
nondirectional microphones 102L and 102R is processed by the environmental sound processing
circuit 106, and the voice input to the directional microphone 107 is processed by the voice
processing circuit 111. The output of at least one of the environmental sound processing circuit
106 and the voice processing circuit 111 is then amplified and reproduced. Therefore, by
amplifying only one of the outputs of the environmental sound processing circuit 106 and the
voice processing circuit 111, for example, the important environmental sounds (important sounds)
or the conversation partner's voice can be heard comfortably and clearly. Furthermore, when both
outputs of the environmental sound processing circuit 106 and the voice processing circuit 111
are amplified, missing an important sound or the partner's voice can be prevented by changing the
weighting between the two.
[0026]
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT FIG. 1 shows the appearance of an
embodiment of a hearing aid to which the present invention is applied. In this hearing aid, the
inner-type stereo ear pads (Ear Pad) 101R and 101L are each provided with an ear speaker 119R or
119L and a wide microphone 102R or 102L for wide-area sound collection. Furthermore, the stereo
ear pad 101L is also provided with a short-range directional microphone (Narrow Microphone) 107.
The microphone 107 may be attached to the stereo ear pad 101R instead of 101L, or to both.
[0027]
In addition to the stereo ear pads 101R and 101L, the hearing aid comprises a remote control unit
(Remote Controller) 139 that can be operated at hand and a processor unit (Processor Unit) 150
that performs various signal processing. These are connected by cables (Cables) 131. The cables
131 include control signal lines for exchanging control signals as well as audio signal lines for
exchanging audio signals.
[0028]
In this hearing aid, the stereo ear pads 101R and 101L are used attached to the right and left
ears of the user (hearing impaired person). The microphone 102R or 102L is attached to the stereo
ear pad 101R or 101L so that it is positioned on the right or left side of the user,
respectively, when the ear pad is attached to the user's right or left ear. As a result, ambient
environmental sound is input uniformly from all directions to the omnidirectional microphones
102R and 102L.
[0029]
Also, the directional microphone 107 is attached to the stereo ear pad 101L so that, when the ear
pad is worn on the user's left ear, its directivity is in line with the direction of the person
speaking with the user. That is, since the conversation partner is usually in front of the user,
the microphone 107 is attached so as to respond sensitively to sound coming from in front of the
user when the stereo ear pad 101L is attached to the user's left ear.
[0030]
The processor unit 150 digitizes the audio signals from the microphones 102R, 102L, and 107,
performs digital signal processing that assists hearing in accordance with the user's auditory
characteristics, converts the result back into analog signals, and drives the ear speakers 119R
and 119L.
[0031]
The processor unit 150 normally monitors the ambient voices and the environmental sounds to be
noted and is optimized so that conversation and situational awareness can be carried out
appropriately, but by operating the remote control unit 139 the user can also make special
settings manually for special situations.
[0032]
Here, the remote control unit 139 is provided with a manual switch (Wide/Narrow SW.) 114, a
volume control (Manual Volume) 140, and a power switch (Power SW.) 141.
The manual switch 114 will be described later.
The volume control 140 is operated to adjust the volume of the ear speakers 119R and 119L. The
power switch 141 is operated to turn the power of the apparatus on and off.
[0033]
Next, FIG. 2 shows the appearance of another embodiment of the hearing aid to which the present
invention is applied. In the figure, parts corresponding to those in FIG. 1 are given the same
reference numerals. This hearing aid is a headband type, as opposed to the inner (inner-ear) type
shown in FIG. 1.
[0034]
That is, in this hearing aid, the stereo ear pads 101R and 101L, the remote control unit 139, and
the processor unit 150 shown in FIG. 1 are not independent units; instead, the omnidirectional
microphones 102R and 102L, the directional microphone 107, the ear speakers 119R and 119L, the
manual switch 114, the volume control 140, the power switch 141, and the processor unit 150 are
formed integrally with a head band (Head Band) 160 worn on the head. In FIG. 2, the cables 131
shown in FIG. 1 are passed through the head band 160.
[0035]
This hearing aid is used with the head band attached to the user's head so that the ear speaker
119R or 119L rests against the user's right or left ear, respectively.
[0036]
Next, FIG. 3 is a block diagram showing an example of the electrical configuration of the hearing
aid having the external appearance shown in FIG. 1 and FIG. 2.
In the figure, portions corresponding to those in FIG. 10 are assigned the same reference
numerals. The volume control 140 and the power switch 141 are not shown.
[0037]
As described with FIG. 1, this hearing aid comprises three microphones: the omnidirectional
microphones 102R and 102L and the directional microphone 107. Ambient environmental sound is
input in stereo, equally from all directions, to the microphones 102R and 102L attached near the
user's right (R) and left (L) ears. The microphone 102R or 102L converts the input environmental
sound into an electrical signal and supplies it to the analog filter (Filter) 103 as an original
environmental sound signal D11. The analog filter 103 applies appropriate pre-processing
filtering to the original environmental sound signal D11 to obtain a pre-processed environmental
sound signal D12. The pre-processed environmental sound signal D12 is supplied to the A/D
converter 104, where it is converted into a digital signal by A/D conversion. The digital signal
is supplied to and stored in the memory 105 as an environmental sound signal D13.
[0038]
The memory 105 is connected via a signal bus to an environmental sound processing circuit
(Environment Processor) 106 configured from a digital signal processor or the like. The
environmental sound processing circuit 106 constantly checks whether the environmental sound
signal stored in the memory 105 is an environmental sound to be noted (for example, an important
sound such as a car horn, an alarm sound, or a telephone bell), and if it is an important sound,
performs processing to let the user hear it. That is, when the environmental sound signal is an
important sound, the environmental sound processing circuit 106 reads it out of the memory 105
and outputs it as a processed environmental sound signal D14 to the terminal a of the selection
circuit 115 in the subsequent stage.
[0039]
On the other hand, the voice uttered by the conversation partner located in front of the user is
input to the directional microphone 107. The microphone 107 converts the input voice into an
electrical signal and supplies it to the analog filter (Filter) 108 as an original voice signal
D15. The analog filter 108 applies appropriate pre-processing filtering to the original voice
signal D15 to obtain a pre-processed voice signal D16, which is supplied to the A/D converter
109. The A/D converter 109 converts the pre-processed voice signal D16 into a digital signal by
A/D conversion and supplies it to the memory 110 for storage as a voice signal D17.
[0040]
The memory 110 is connected via a signal bus to a voice processing circuit (Speech Processor) 111
configured from a digital signal processor or the like. As in the conventional case, the voice
processing circuit 111 subjects the voice signal stored in the memory 110 to digital filtering,
noise removal, frequency component decomposition such as FFT, frequency-domain processing, and
the like. Furthermore, the voice processing circuit 111 performs speech recognition to decompose
the voice signal stored in the memory 110 into phonemes, applies predetermined processing to the
phonemes, and then performs speech synthesis using the processing results. In addition, the voice
processing circuit 111 performs level detection and other processing on the voice signal stored
in the memory 110. The voice signal processed by the voice processing circuit 111 is read from
the memory 110 as a processed voice signal D18 and output to the terminal b of the selection
circuit 115 in the subsequent stage.
[0041]
The environmental sound processing circuit 106 and the audio processing circuit 111 described
above are connected to the control processor 112 via the processor bus 120. The control
processor 112 integrates the information supplied from the environmental sound processing
circuit 106 and the audio processing circuit 111, and outputs an environmental sound priority
signal D20 and a volume control signal D23.
[0042]
That is, when an important sound is detected by the environmental sound processing circuit 106,
the control processor 112 sets the environmental sound priority signal D20, which is normally at
the L level, to the H level, for example. The environmental sound priority signal D20 is supplied
to one input terminal of a two-input OR gate 121. The other input terminal of the OR gate 121 is
grounded via the manual switch 114 and is also pulled up by a pull-up resistor R. Therefore, when
the manual switch 114 is on or off, a manual switching signal D21 at the L or H level,
respectively, is supplied to the other input terminal of the OR gate 121.
[0043]
The OR gate 121 supplies the logical sum of the environmental sound priority signal D20 and the
manual switching signal D21 to the selection circuit 115 as a switching signal D22. The selection
circuit 115 selects terminal a when the switching signal is at the H level and terminal b when it
is at the L level. Consequently, when at least one of the environmental sound priority signal D20
and the manual switching signal D21 is at the H level, the processed environmental sound signal
D14 from the memory 105 is supplied through the selection circuit 115 to the volume control
circuit (Volume) 116, and when both are at the L level, the processed voice signal D18 from the
memory 110 is supplied instead.
[0044]
Here, the manual switch 114 is turned off or on when the mode of the apparatus is set to the
environmental sound mode or the voice mode, respectively. Therefore, when the manual switch 114
is set to the environmental sound or voice mode, the environmental sound from the memory 105 (the
processed environmental sound signal D14) or the voice from the memory 110 (the processed voice
signal D18), respectively, is selected by the selection circuit 115 and output from the speakers
(ear speakers) 119R and 119L via the volume control circuit 116, the D/A converter 117, and the
amplifier 118. When an important sound is detected, however, the environmental sound from the
memory 105, that is, the important sound, is forcibly output regardless of the mode of the
device.
[0045]
Further, when the level of the voice detected by the voice processing circuit 111 is low, or when
the environmental sound processing circuit 106 detects an environmental sound with a high degree
of urgency (an important sound), the control processor 112 outputs to the volume control circuit
116 a volume control signal D23 for increasing the sound output from the speakers 119R and 119L.
When the normal state is restored (when the level of the voice detected by the voice processing
circuit 111 is no longer low, when no important sound is detected by the environmental sound
processing circuit 106, and so on), the control processor 112 outputs to the volume control
circuit 116 a volume control signal D23 for returning the output sound of the speakers 119R and
119L to the volume level appropriate for the user.
[0046]
The volume control circuit 116 adjusts the volume of the output of the selection circuit 115 in
accordance with the volume control signal D23 from the control processor 112. The volume control
performed by the volume control circuit 116 does not directly change the level of an analog
signal. That is, the volume control circuit 116 is constituted by, for example, only a
multiplier, and multiplies the digital signal input to it by a factor corresponding to the volume
control signal D23 and outputs the product. For example, when the digital signal is multiplied by
2 its signal value is doubled, and after D/A conversion by the D/A converter 117 at the stage
following the volume control circuit 116 the output level rises by the logarithm of the factor (a
doubled amplitude corresponds to a gain of about 6 dB). Therefore, the volume can be adjusted to
any desired degree with only the multiplier.
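The multiplier-only volume control can be sketched in two small functions; the names and sample values are illustrative assumptions, and the decibel relation is the standard amplitude formula rather than anything specific to the patent.

```python
import math

def volume_control(samples, factor):
    """Volume control circuit 116: multiply each digital sample by a factor."""
    return [s * factor for s in samples]

def gain_in_db(factor):
    """Level change heard after D/A conversion: 20*log10(factor) dB, so
    doubling the sample values raises the level by about 6 dB."""
    return 20.0 * math.log10(factor)
```

Applying a separate factor per channel is how the left/right hearing balance described in the next paragraph would be realized with the same multiplier.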
[0047]
Further, the volume control circuit 116 performs volume control on each of the signals supplied
to the speakers 119L and 119R. The control processor 112 therefore outputs to the volume control
circuit 116 a volume control signal D23 for each of the signals supplied to the speakers 119L and
119R, and each signal is weighted according to the preset difference between the user's left and
right hearing (a parameter corresponding to this hearing is stored in the parameter storage
memory 113, and the control processor 112 outputs the volume control signal D23 based on this
parameter) so as to balance the level of the sound output from the speakers 119L and 119R.
[0048]
Here, in order for the environmental sound processing circuit 106, the voice processing circuit
111, and the control processor 112 to perform the operations described above, control parameters
suited to the auditory characteristics of the user are required. These are stored in the
parameter storage memory 113 and are read and set by the environmental sound processing circuit
106, the voice processing circuit 111, and the control processor 112 via the processor bus 120 at
appropriate timings.
[0049]
As described above, in the case shown in FIG. 3, the selection circuit 115 selects one of the
terminals a and b according to the switching signal D22, so that either the processed
environmental sound signal D14 from the memory 105 or the processed voice signal D18 from the
memory 110 is supplied to the volume control circuit 116 as an output processed signal D24.
Alternatively, the weighting circuit shown in FIG. 4 can be used to calculate a weighted sum of
the processed environmental sound signal D14 and the processed voice signal D18 and supply it to
the volume control circuit 116 as the output processed signal D24.
[0050]
That is, the weighting circuit comprises multipliers 122a and 122b and an adder 123, and the
processed environmental sound signal D14 and the processed voice signal D18 are supplied to the
multipliers 122a and 122b, respectively.
The multiplier 122a or 122b multiplies the processed environmental sound signal D14 or the
processed voice signal D18, respectively, by a factor corresponding to the switching signal D22;
that is, appropriate weighting is applied to each signal, and the results are output to the adder
123.
The adder 123 adds the outputs of the multipliers 122a and 122b and outputs the result as the
output processed signal D24. In this case, therefore, the weighting applied to the output of
either the environmental sound processing circuit 106 (memory 105) or the voice processing
circuit 111 (memory 110) is increased or decreased in response to the operation of the manual
switch 114 or the environmental sound priority signal D20 output by the control processor 112.
[0051]
Specifically, for example, when the selection signal D22 is at the H level, the processing
environment sound signal D14 is heavily weighted and the processing sound signal D18 is lightly
weighted. Further, when the selection signal D22 is at the L level, the processing environment
sound signal D14 is given a small weight, and the processing sound signal D18 is given a large
weight.
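The behaviour of the weighting circuit described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 0.8/0.2 weight split and the function names are assumptions, since the patent specifies no concrete weight values.

```python
def weights_for_select(select_high):
    """Map the switching signal D22 to a weight pair (environmental sound,
    voice): H level favours the environmental sound, L level favours the
    voice. The 0.8/0.2 split is an illustrative choice."""
    return (0.8, 0.2) if select_high else (0.2, 0.8)

def weighted_mix(env_sample, voice_sample, select_high):
    """Multipliers 122a/122b apply the weights; adder 123 sums the results
    into the output processing signal D24."""
    w_env, w_voice = weights_for_select(select_high)
    return w_env * env_sample + w_voice * voice_sample

# H level: the environmental sound dominates the mix.
d24 = weighted_mix(0.5, 1.0, select_high=True)  # 0.8*0.5 + 0.2*1.0 = 0.6
```

Unlike the selection circuit 115, both inputs always contribute to the output; only their balance changes with D22.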
[0052]
Therefore, when the selection circuit 115 is used, only one of the voice and the environmental
sound is output from the speakers 119R and 119L. When the weighting circuit shown in FIG. 4 is
used, however, the environmental sound is output at a large volume and the voice at a small
volume, or the voice is output at a large volume and the environmental sound at a small volume.
[0053]
Incidentally, even if the weighting for the voice has been made larger, when an important sound
is detected by the environmental sound processing circuit 106 as described above, the weighting
for the environmental sound is forcibly made larger (or the weighting for the voice is forcibly
made smaller).
[0054]
Returning to FIG. 3, volume control is performed by the volume control circuit 116, and the
output processing signal D24, adjusted to an appropriate level, is supplied to the D/A converter
117 as the volume control output signal D25.
The D/A converter 117 converts the volume control output signal D25, which is a digital signal,
into an analog signal, and outputs it as the analog output signal D26 to the amplifier 118.
The amplifier 118 electrically amplifies the analog output signal D26 and supplies it as an
amplified output signal D27 to the speakers 119R and 119L. The speakers 119R and 119L
output a sound (voice or environmental sound) corresponding to the amplified output signal D27,
which reaches the user's ear.
[0055]
Next, FIG. 5 shows a detailed configuration example of the environmental sound processing
circuit 106. An input environmental sound signal E11, corresponding to the environmental
sound signal D13 in FIG. 3, is supplied to and stored in a memory 211 configured of a FIFO type
memory or the like. The memory 211 corresponds to the memory 105 in FIG. 3. The input
environmental sound signal E11 is also supplied to an environmental sound comparator
(Pattern Comparator) 212 and a threshold circuit (Level Threshold) 214.
[0056]
An environmental sound pattern signal E12 is supplied to the environmental sound comparator
212 from an environmental sound generation circuit (Sound Pattern) 213 in addition to the
input environmental sound signal E11. The environmental sound generation circuit 213 is
formed of, for example, a ROM in which patterns of important sounds are stored, and these are
supplied to the environmental sound comparator 212 as the environmental sound pattern signal
E12. The environmental sound comparator 212 compares the input environmental sound signal
E11 with the environmental sound pattern signal E12, and when the input environmental sound
signal E11 matches the environmental sound pattern signal E12, that is, when the environmental
sound is an important sound, outputs the environmental sound pattern matching signal E13 to
the priority evaluation circuit (Priority Check) 216.
[0057]
The threshold circuit 214 detects the level of the input environmental sound signal E11 and
determines whether that level is higher than a predetermined threshold (equal to or higher than
a predetermined threshold). When the level of the input environmental sound signal E11 is
larger than the predetermined threshold, that level is output, and when it is lower than the
predetermined threshold, 0, for example, is output, as the environmental sound level signal E15
to the priority evaluation circuit 216 and the selection circuit 215. The threshold used by the
threshold circuit 214 is set in accordance with the control parameter E14 supplied from the
parameter storage memory 113 shown in FIG. 3 via the processor bus 120.
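The threshold circuit's pass-or-zero behaviour can be sketched as follows. The level measure (peak absolute value) is an illustrative assumption; the patent states only that a level above the threshold is reported and 0 is output otherwise.

```python
def threshold_level(samples, threshold):
    """Level Threshold (214): measure the input level (peak absolute value
    here, as an illustrative choice) and output it as the environmental
    sound level signal E15 when it exceeds the threshold; otherwise 0."""
    level = max(abs(s) for s in samples)
    return level if level > threshold else 0
```

Downstream circuits can thus treat a nonzero E15 as "loud enough to matter" without re-measuring the signal.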
[0058]
The environmental sound level signal E15 is supplied to the control processor 112 via the
processor bus 120 in addition to the priority evaluation circuit 216 and the selection circuit 215,
and is used to determine the final volume control signal D23.
[0059]
The priority evaluation circuit 216 determines, from the environmental sound pattern matching
signal E13 and the environmental sound level signal E15, an evaluation value as to whether to
give priority to the environmental sound.
Here, the environmental sound pattern matching signal E13 is designed to indicate what kind of
important sound the environmental sound is. Therefore, in the priority evaluation circuit 216, the
above-described evaluation value is determined from the type and level of the environmental
sound. This evaluation value is supplied to the control processor 112 via the processor bus 120
as the priority evaluation value E18, and is used to determine the final environmental sound
priority signal D20.
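One way the priority evaluation circuit 216 might combine the two inputs is sketched below. The importance table and the product rule are assumptions; the patent says only that an evaluation value is determined from the type and level of the environmental sound.

```python
# Hypothetical importance table; the sound types and scores are illustrative.
IMPORTANCE = {"siren": 5, "car_horn": 3, "doorbell": 1}

def priority_value(sound_type, level):
    """Priority Check (216): combine the matched pattern's importance with
    the measured level into the priority evaluation value E18 (the product
    used here is an illustrative choice)."""
    return IMPORTANCE.get(sound_type, 0) * level
```

A louder occurrence of the same important sound thus yields a higher evaluation value, which the control processor 112 can weigh when forming the environmental sound priority signal D20.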
[0060]
On the other hand, the input environmental sound signal E11 stored in the memory 211 is
supplied to the selection circuit 215 as the environmental sound signal E16. In addition to the
environmental sound signal E16, 0 is also input to the selection circuit 215. The selection circuit
215 refers to the environmental sound level signal E15 from the threshold circuit 214: if it is 0,
the selection circuit 215 selects and outputs 0, and if it is not 0, the selection circuit 215 selects
and outputs the environmental sound signal E16. That is, if the environmental sound (important
sound) has a small level below the threshold, 0 is output as the output environmental sound
signal E17, and if it has a relatively large level, the environmental sound signal E16 is output as
the output environmental sound signal E17. This output environmental sound signal E17
corresponds to the processing environmental sound signal D14 described in FIG. 3 and is thus
supplied to the terminal a of the selection circuit 115.
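The gating performed by the selection circuit 215 can be sketched as follows; the list-of-samples representation is an illustrative assumption.

```python
def gate_environment(env_samples, level_signal):
    """Selection circuit 215: when the environmental sound level signal E15
    is 0 (level below threshold), output silence as E17; otherwise pass the
    buffered environmental sound signal E16 through unchanged."""
    if level_signal == 0:
        return [0.0] * len(env_samples)
    return list(env_samples)
```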
[0061]
In FIG. 3, when the weighting circuit shown in FIG. 4 is used instead of the selection circuit 115,
the selection circuit 215 outputs the environmental sound signal E16 as it is as the output
environmental sound signal E17 even when the environmental sound level signal E15 from the
threshold circuit 214 is 0.
[0062]
Next, FIG. 6 shows a detailed configuration example of the audio processing circuit 111.
An input voice signal F11, corresponding to the voice signal D17 in FIG. 3, is stored in a memory
221 configured of a FIFO type memory or the like. This memory 221 corresponds to the memory
110 in FIG. 3. The input voice signal F11 stored in the memory 221 is sequentially read as a
voice signal F12 and supplied to a voice recognition circuit (Syllable Decomposition) 222. Note
that, as in the conventional case, processing such as digital filtering, noise removal, frequency
component decomposition by FFT or the like, and frequency space processing is applied to the
input voice signal F11 after it is stored in the memory 221 and before it is supplied to the voice
recognition circuit 222.
[0063]
The speech recognition circuit 222 performs speech recognition according to a predetermined
speech recognition algorithm (for example, DP matching method or HMM method), and
decomposes the speech signal F12 into phonemes based on the speech recognition result. The
speech signal F12 decomposed into phonemes is supplied as a phoneme signal F13 to a
phoneme classification circuit (Vowel / Consonant) 223.
[0064]
The phoneme classification circuit 223 classifies the phoneme signal F13 into vowels and
consonants. This is performed based on, for example, the zero crossings or the power of the
phoneme signal F13. A vowel or a consonant becomes a vowel signal F14 or a consonant signal
F15 and is supplied to a vowel processing circuit (Emphasis & Transform) 224 or a consonant
processing circuit (Emphasis & Transform) 225, respectively. The vowel processing circuit 224
subjects the vowel signal F14 to processing that emphasizes vowels the user cannot easily hear
or converts the manner of sound generation, and supplies the processing result to a speech
synthesis circuit (Synthesis) 226 as the processing vowel signal F17. The processing in the
vowel processing circuit 224 is performed in accordance with the control parameter F16
supplied from the parameter storage memory 113 (FIG. 3).
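A minimal sketch of classification by zero crossings and power, as mentioned above: vowels tend to be high-power with few zero crossings, consonants the opposite. The thresholds are illustrative assumptions; the patent does not specify them.

```python
def classify_phoneme(samples, zc_limit=0.5, power_limit=0.05):
    """Phoneme classification (223): compute the zero-crossing rate and the
    mean power of a phoneme segment and label it 'vowel' or 'consonant'.
    zc_limit and power_limit stand in for unspecified tuning values."""
    pairs = list(zip(samples, samples[1:]))
    zc_rate = sum(1 for a, b in pairs if a * b < 0) / max(len(pairs), 1)
    power = sum(s * s for s in samples) / len(samples)
    return "vowel" if power > power_limit and zc_rate < zc_limit else "consonant"
```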
[0065]
On the other hand, in the consonant processing circuit 225, the consonant signal F15 is
subjected to the same kind of processing as in the vowel processing circuit 224, in accordance
with the control parameter F16 supplied from the parameter storage memory 113 (FIG. 3). The
processing result is supplied to the speech synthesis circuit 226 as the processing consonant
signal F18.
[0066]
The voice synthesis circuit 226 combines the processing vowel signal F17 and the processing
consonant signal F18 to restore the original arrangement, and outputs the result to a silence
insertion circuit (Interval Insertion) 227 as a synthesized voice signal F19.
The silence insertion circuit 227 inserts a silent interval of appropriate duration at junctions of
sounds (phonemes) that are difficult to hear in the synthesized voice signal F19, and outputs this
as an output voice signal F20. The processing in the silence insertion circuit 227 is performed in
accordance with the control parameter F16 supplied from the parameter storage memory 113
(FIG. 3).
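The silence insertion step above can be sketched as follows; the fixed gap length is an illustrative stand-in for the duration the patent sets via the control parameter F16.

```python
def insert_silence(phoneme_segments, gap_len=3):
    """Interval Insertion (227): join processed phoneme segments with a
    short run of zero samples between them so that hard-to-hear junctions
    become clearer. gap_len is an assumed, illustrative parameter."""
    out = []
    for i, seg in enumerate(phoneme_segments):
        if i:
            out.extend([0.0] * gap_len)  # silent interval between phonemes
        out.extend(seg)
    return out
```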
[0067]
The output audio signal F20 corresponds to the processed audio signal D18 in FIG. 3 and is
therefore supplied to the terminal b of the selection circuit 115. The output audio signal F20 is
also supplied to a threshold circuit (Level Thresholding) 228. The threshold circuit 228 detects
the level of the output audio signal F20, and outputs the detection result as an audio level signal
F21. The audio level signal F21 is supplied to the control processor 112 via the processor bus
120 of FIG. 3 and used to determine the final volume control signal D23.
[0068]
Next, FIG. 7 shows another detailed configuration example of the audio processing circuit 111.
In the figure, parts corresponding to those in FIG. 6 are assigned the same reference numerals.
In the audio processing circuit 111 in FIG. 7, somewhat simpler processing is performed than in
the case of FIG. 6. That is, the audio signal F12 from the memory 221 is supplied to a Fourier
transform circuit (FFT) 232, where it is subjected to a Fourier transform (FFT) and thereby
decomposed into frequency components, which are signals on the frequency axis. These
frequency components are supplied to the emphasis suppression processing circuit (Emphasis
Suppress) 233 as the frequency component signal G13.
[0069]
On the other hand, the weighting matrix generation circuit (Weighting Matrix) 234 calculates, in
accordance with the control parameter F16 supplied from the parameter storage memory 113
(FIG. 3), a weighting value G15 for each frequency component, so as to emphasize frequency
components the user cannot easily hear and to suppress frequency components that cause
discomfort, and supplies it to the emphasis suppression processing circuit 233. The emphasis
suppression processing circuit 233 emphasizes, suppresses, or modifies the frequency
component signal G13 in accordance with the weighting value G15, and supplies the processing
result to the frequency conversion circuit (Swap & Shift) 235 as a processing frequency
component signal G16.
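The per-component weighting described above reduces, in the simplest reading, to an elementwise multiplication of the spectrum by the weight vector; this sketch assumes that reading.

```python
def emphasize_suppress(freq_bins, weights):
    """Emphasis Suppress (233): scale each frequency component of G13 by
    its per-component weighting value G15. Weights above 1 emphasize bands
    the user hears poorly; weights below 1 suppress unpleasant bands."""
    return [b * w for b, w in zip(freq_bins, weights)]
```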
[0070]
The frequency conversion circuit 235 performs processing such as shifting the processing
frequency component signal G16 to a pitch that is easy for the user to hear and replacing
overtone components, and supplies the result to a harmonic component adding circuit
(Component Addition) 236 as a reprocessed frequency component signal G17. The harmonic
component adding circuit 236 adds harmonic components that the user finds comfortable to the
reprocessed frequency component signal G17, and outputs the result to the inverse Fourier
transform circuit (IFFT) 237 as a reprocessed frequency component signal G18.
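The shift and overtone-addition steps can be sketched as operations on a vector of frequency bins. Both rules below (a pure bin shift, and adding an attenuated copy at twice each bin index) are simplifications of the patent's shift/replacement and harmonic-addition processing, not its actual algorithm.

```python
def shift_bins(freq_bins, shift):
    """Swap & Shift (235), simplified to a pure bin shift toward a pitch the
    user hears more easily; vacated bins become 0."""
    out = [0.0] * len(freq_bins)
    for i, v in enumerate(freq_bins):
        if 0 <= i + shift < len(out):
            out[i + shift] = v
    return out

def add_overtones(freq_bins, gain=0.25):
    """Component Addition (236): add an attenuated copy of each component
    at twice its bin index as a synthetic overtone (an illustrative rule)."""
    out = list(freq_bins)
    for i, v in enumerate(freq_bins):
        if i > 0 and 2 * i < len(out):
            out[2 * i] += gain * v
    return out
```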
[0071]
The processing in the frequency conversion circuit 235 and the overtone component addition
circuit 236 is performed in accordance with the control parameters supplied from the parameter
storage memory 113 (FIG. 3).
[0072]
The inverse Fourier transform circuit 237 converts the reprocessed frequency component signal
G18 back into a signal on the time axis by performing an inverse Fourier transform (inverse
FFT), and outputs it as a frequency-processed audio signal G19.
This frequency-processed audio signal G19 is supplied to the threshold circuit 228 and is used to
determine the final volume control signal D23, as described with reference to FIG. 6. Further,
this frequency-processed audio signal G19 corresponds to the processed audio signal D18 in FIG. 3.
[0073]
Next, FIG. 8 shows a detailed configuration example of the parameter storage memory 113.
Control parameters supplied to the environmental sound processing circuit 106, the audio
processing circuit 111, and the control processor 112 are held in a ROM 241, which is an
exchangeable (detachable) nonvolatile memory. In FIG. 8, the signal line connected to the ROM
241 is a part of the processor bus 120 in FIG. 3.
[0074]
At the start of operation of the apparatus or upon an external restart, the mode of the apparatus
is set to the parameter setting mode, whereupon the control processor 112 outputs the
parameter setting address signal H11 to the ROM 241. In the ROM 241, the control parameter is
read from the address specified by the parameter setting address signal H11 and supplied as the
parameter signal H12 to the environmental sound processing circuit 106, the audio processing
circuit 111, and the control processor 112, where it is set.
[0075]
The parameter storage memory 113 can also be configured, for example, as shown in FIG. 9.
With this configuration, control parameters can be changed at an arbitrary time from outside
using a wired or wireless data transmission device (not shown).
[0076]
That is, when the data transmission apparatus is operated, the control parameters are
transmitted, modulated onto, for example, infrared light, and this infrared light is received by
the light receiver 242. The data transmission apparatus is assumed to transmit a reset code after
the transmission of all data is completed.
[0077]
In the light receiver 242, photoelectric conversion is performed to convert the received infrared
light into an electric signal H13, which is then supplied to a decoder 243. The decoder 243
demodulates the electric signal H13, and outputs the required parameter value of the
demodulated data as the decoded parameter signal H14 to the RAM 244 for storage.
[0078]
Further, the decoder 243 monitors the demodulated data to detect a reset code. When the
decoder 243 detects the reset code, it outputs a reset signal H15 to the control processor 112 via
the processor bus 120. That is, after all parameter values are received and stored in the RAM
244, the reset signal H15 is output to the control processor 112.
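The receive-then-reset sequence above can be sketched as follows. The frame format and the reset code value are assumptions; the patent states only that parameter values are stored in the RAM 244 until the reset code arrives, after which the reset signal H15 is output.

```python
def receive_parameters(frames, reset_code=0xFF):
    """Sketch of the decoder (243): store each demodulated parameter value
    in the RAM (244) until the reset code arrives, then flag that the reset
    signal H15 should be raised toward the control processor (112)."""
    ram, reset_seen = [], False
    for frame in frames:
        if frame == reset_code:
            reset_seen = True
            break
        ram.append(frame)
    return ram, reset_seen
```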
[0079]
When the control processor 112 receives the reset signal H15, it sets the mode of the apparatus
to the parameter setting mode, and outputs the parameter setting address signal H11 to the RAM
244. Thereafter, as in the case described with reference to FIG. 8, the control parameters stored
in the RAM 244 are supplied to the environmental sound processing circuit 106, the audio
processing circuit 111, and the control processor 112 and set.
[0080]
As described above, the environmental sound processing circuit 106 processes the
environmental sound input to the nondirectional microphones 102L and 102R, the audio
processing circuit 111 processes the voice input to the directional microphone 107, and the
output of one of the environmental sound processing circuit 106 and the audio processing
circuit 111 is amplified and reproduced (output). Processing suited to the respective
characteristics of the environmental sound and the voice is therefore possible, so that, compared
with the case of using only one microphone, both the important sound and the voice of the other
party can be heard comfortably and clearly.
[0081]
Also, in the case where the weighted sum of the outputs of the environmental sound processing
circuit 106 and the audio processing circuit 111 is calculated and supplied to the volume control
circuit 116, missing the important sound or the voice of the conversation partner can be
prevented.
[0082]
Furthermore, the environmental sound processing circuit 106 determines whether the
environmental sound is an important sound, and when it is, the selection circuit 115 is forced to
select the output of the environmental sound processing circuit 106 (memory 105), or the
weighting circuit (FIG. 4) is forced to increase the weighting of the output of the environmental
sound processing circuit 106 (memory 105). Missing an important sound can thus be prevented,
and as a result the hearing-impaired user can walk safely.
[0083]
In addition, the environmental sound processing circuit 106 detects the level of the
environmental sound, and only when the environmental sound is an important sound and its
level is equal to or higher than a predetermined level is the output of the environmental sound
processing circuit 106 (memory 105) forcibly selected by the selection circuit 115, or its
weighting forcibly increased by the weighting circuit (FIG. 4). Environmental sounds emitted at
positions far from the user can therefore be prevented from being output from the speakers
119L and 119R.
[0084]
Further, in the speech processing circuit 111, the speech is separated into phonemes, and the
phonemes are subjected to predetermined processing for enhancing audibility, so that words can
be heard as if clearly pronounced.
[0085]
Also, after the speech is converted by the speech processing circuit 111 into frequency
components, which are signals on the frequency axis, a predetermined process is applied to the
frequency components, so that relatively simple processing can increase audibility.
[0086]
Furthermore, since the amplification factor of the volume control circuit 116 is adaptively
controlled by the volume control signal D23 output from the control processor 112, the
important sound or the other party's voice can be heard at a volume appropriate to the
situation.
[0087]
Further, since the control parameters are stored in the removable parameter storage memory
113, processing corresponding to each user's auditory characteristics can be performed by
exchanging the parameter storage memory 113 for each user. Furthermore, even when
transmitted control parameters are received and used, processing that matches the user's
auditory characteristics can be performed.
[0088]
The present invention is not limited to the above-described embodiment.
That is, although examples of the inner type (FIG. 1) and the headband type (FIG. 2)
configurations are shown here, the present invention is also applicable to hearing aids of other
structures (types) combining microphones, a plurality of ear speakers, and a processor unit.
Specifically, instead of incorporating the narrow-range directional microphone 107 in the ear
pad 101L, for example, a support member can be extended from the ear pad 101L and the
directional microphone 107 fixed to its tip.
Further, for example, the directional microphone 107 can be incorporated in the headband 160
rather than attached to it.
Furthermore, the directional microphone 107 can be made detachable from the ear pad 101L or
the headband 160 so that its orientation and position can be changed freely.
[0089]
Further, in the present embodiment, the case of using the wide-area sound collecting
nondirectional microphones 102R and 102L (the case where the environmental sound is input
in stereo) has been described. However, when a person impaired in hearing on one side only, or
with only a mild impairment on one side, uses the hearing aid, the microphone and the ear
speaker need only be provided for the impaired ear; in this case, the hearing aid can be
configured with one microphone and one ear speaker (the input of environmental sound and the
output of sound can be monaural).
Furthermore, a hearing aid used by a person whose hearing is impaired on both the left and
right sides can also be configured with one microphone and one ear speaker.
In this case, the hearing aid can be configured inexpensively.
[0090]
However, when the user's hearing is impaired on both the left and right sides, a hearing aid
configured with one ear speaker may give an unbalanced way of hearing and be unpleasant until
the user becomes accustomed to it, and the user may not be able to tell from which direction a
sound comes. Hearing aids used by persons with impairment of both ears are therefore
preferably provided with ear speakers on both the left and right sides.
[0091]
Furthermore, in the present embodiment, the environmental sound processing circuit 106 and
the audio processing circuit 111 have been described as being configured with digital signal
processors; in addition, they can be configured by combining LSIs realizing the individual
functions, or by using a high-performance microprocessor (CPU).
[0092]
Further, in this embodiment, a speech processing method using speech recognition (FIG. 6) and
a method using frequency space processing based on frequency component decomposition (FIG.
7) have been described as processing methods for the speech processing circuit 111. Which of
these processing methods is used may be selected according to, for example, the type of the
user's impairment.
Furthermore, both may be provided in the hearing aid, together with a selection switch for
choosing between them.
It is also possible to adaptively control which method is used depending on the type of input
signal, or to perform both processes.
[0093]
Further, in the case described with reference to FIG. 6, decomposition of the speech into
phonemes, emphasis (or suppression) of the phonemes, and insertion of silent parts were
described as the speech clarification processing based on speech recognition; in addition, for
example, speech speed conversion by adjusting the duration of phonemes (speech segments)
can be performed.
[0094]
Further, in the case described in FIG. 7, the processing is performed in the order of emphasis
and suppression of frequency components, exchange and shift of frequency components, and
addition of overtone components, but the order of these processes may be changed.
Also, depending on the processing characteristics, the processes may be performed in other
combinations. Furthermore, in the case described in FIG. 7, the voice signal is converted into
frequency components using the Fourier transform, but in addition, a processing method based
on multiresolution analysis using subbands, such as the wavelet transform, or a non-linear
processing method such as feature point extraction, can be used.
[0095]
Further, in the case described in FIG. 9, data transmission is performed using infrared rays, but,
for example, weak radio waves can also be used. Furthermore, data can be transmitted from a
data transmission device such as a personal computer through a cable connected to a jack
provided in the processor unit 150 (FIGS. 1 and 2), or by a noncontact data transmission method
using magnetic field induction.
[0096]
Furthermore, in the present embodiment, when the environmental sound is an important sound,
the important sound is output as it is; in addition, it is also possible to notify the user that an
important sound has occurred and of the type of the important sound.
[0097]
As described above, according to the present invention, the voice of the other party in the
conversation and the sound to be noted (important sound) can be heard comfortably and
clearly.
[0098]
Brief description of the drawings
[0099]
FIG. 1 is a diagram showing the external appearance of an embodiment of a hearing aid to which
the present invention is applied.
[0100]
FIG. 2 is a diagram showing the external appearance of another embodiment of a hearing aid to
which the present invention is applied.
[0101]
FIG. 3 is a block diagram showing an example of the electrical configuration of the embodiment
of FIG. 1 (FIG. 2).
[0102]
FIG. 4 is a block diagram showing a detailed configuration example of the weighting circuit.
[0103]
FIG. 5 is a block diagram showing a detailed configuration example of the environmental sound
processing circuit 106 in FIG. 3.
[0104]
FIG. 6 is a block diagram showing a detailed configuration example of the audio processing
circuit 111 in FIG. 3.
[0105]
FIG. 7 is a block diagram showing another detailed configuration example of the audio
processing circuit 111 in FIG. 3.
[0106]
FIG. 8 is a block diagram showing a detailed configuration example of the parameter storage
memory 113 in FIG. 3.
[0107]
FIG. 9 is a block diagram showing another detailed configuration example of the parameter
storage memory 113 in FIG. 3.
[0108]
FIG. 10 is a block diagram showing the configuration of an example of a conventional hearing aid.
[0109]
Explanation of reference numerals
[0110]
101L, 101R: stereo ear pad
102L, 102R: nondirectional microphone
103: analog filter
104: A/D converter
105: memory
106: environmental sound processing circuit
107: directional microphone
108: analog filter
109: A/D converter
110: memory
111: audio processing circuit
112: control processor
113: parameter storage memory
114: manual switch
115: selection circuit
116: volume control circuit
117: D/A converter
118: amplifier
119L, 119R: ear speaker
120: processor bus
121: OR gate
122a, 122b: multiplier
123: adder
131: cable
139: remote control unit
140: volume
141: power switch
150: processor unit
160: headband