Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2010124435
An in-vehicle conversation assistance device is provided that can prevent a speaker from hearing his or her own conversation voice output from a loudspeaker. The in-vehicle conversation assistance device 100 includes microphones 110 installed at each seat in the vehicle for collecting the sound in the vehicle, conversation voice speakers 150 installed at each seat for outputting the conversation voices of the occupants, a conversation voice extraction unit 120 that extracts the conversation voices of the occupants from the in-vehicle sound collected by the microphones 110, and a conversation voice synthesis unit that synthesizes the conversation voices extracted by the conversation voice extraction unit 120 so that a conversation voice collected by the microphone 110 installed at a given seat is not output from the conversation voice speaker 150 installed at that same seat. [Selected figure] Figure 1
In-car conversational aid device
[0001]
TECHNICAL FIELD The present invention relates to an in-vehicle conversation assistance device that assists conversation in a car by outputting the conversation voice of an occupant from a speaker toward the head of another occupant.
[0002]
In a conventional on-vehicle audio apparatus, it is common to listen to radio sound and audio
sound reproduced from a storage medium such as a CD (Compact Disc) using a speaker installed
in a vehicle.
[0003]
Also, as a conventional in-vehicle conversation assistance device, a device is known in which in-vehicle conversation voice is extracted from the in-vehicle sound collected by a microphone installed in the vehicle, the extracted conversation voice is combined with the radio or audio voice, and the combined voice is output from a speaker installed in the vehicle, so that the occupants can listen to the conversation voice as well as the audio voice (see Patent Document 1).
Japanese Patent Application Laid-Open No. 2002-051392
[0004]
However, in the above-described conventional in-vehicle conversation assistance device, the synthesized sound is output from all the speakers, so every conversation voice is reproduced from every speaker.
Consequently, a speaker also hears his or her own conversation voice output from the speakers, which gives the speaker a sense of strangeness or discomfort.
[0005]
The present invention has been made in view of this point, and an object of the present invention is to provide an in-vehicle conversation assistance device capable of preventing a speaker from hearing his or her own conversation voice output from a speaker.
[0006]
The in-vehicle conversation assistance device of the present invention includes microphones installed at each seat in the vehicle for collecting the sound in the vehicle, conversation voice speakers installed at each seat in the vehicle for outputting the conversation voices of the occupants, a conversation voice extraction unit for extracting the conversation voice of an occupant from the in-vehicle sound collected by the microphones, and a conversation voice synthesis unit that synthesizes the conversation voices extracted by the conversation voice extraction unit so that a conversation voice collected by the microphone installed at a given seat is not output from the conversation voice speaker installed at that same seat.
[0007]
According to this configuration, the conversation voice extraction unit extracts conversation voices from the in-vehicle sound collected by the microphones installed at each seat.
The conversation voice synthesis unit then synthesizes the extracted conversation voices so that each conversation voice is not output from the conversation voice speaker installed at the same seat as the microphone that collected it.
As a result, the speaker's own conversation voice is not output from the conversation voice speaker installed at the speaker's seat, so the speaker is prevented from hearing his or her own conversation voice, and the sense of strangeness or discomfort this would cause is avoided.
[0008]
Further, in addition to the above configuration, the in-vehicle conversation assistance device of the present invention includes an occupant head position detection unit for detecting the head position of the occupant seated in each seat, a conversation voice speaker drive unit for changing the direction of the conversation voice speaker, and a conversation voice speaker drive control unit that controls the conversation voice speaker drive unit, based on the head position of the occupant detected by the occupant head position detection unit, so that the conversation voice speaker is directed toward the head of the occupant.
[0009]
Here, changing the direction of the conversation voice speaker means changing the speaker axis so as to change the directivity of the speaker.
The directivity of the speaker is expressed, for example, by the angle between the points at which the sound pressure level falls to one half (-6 dB) of that on the speaker axis.
[0010]
According to this configuration, the head position of the occupant seated in each seat is detected by the occupant head position detection unit, and the conversation voice speaker drive control unit controls the conversation voice speaker drive unit based on the detected head position so that the conversation voice speaker is directed toward the head of the occupant. As a result, the conversation voice output from each conversation voice speaker is prevented from spreading to occupants other than the intended listener, so the speaker is further prevented from hearing his or her own conversation voice and from feeling a sense of strangeness or discomfort.
[0011]
According to the present invention, the speaker can be prevented from hearing his or her own conversation voice output from a speaker. That is, since a conversation voice speaker is installed at each seat and the speaker's own conversation voice is not output from the conversation voice speaker installed at the speaker's seat, the speaker does not hear his or her own conversation voice, and the resulting sense of strangeness or discomfort can be prevented.
[0012]
Hereinafter, an in-vehicle conversation assistance device according to an embodiment of the
present invention will be described in detail using the drawings.
[0013]
FIG. 1 is a schematic configuration diagram of an in-vehicle conversation assistance device
according to an embodiment of the present invention.
[0014]
In FIG. 1, the in-vehicle conversation assistance device 100 according to the present embodiment includes microphones 110 installed near each seat in the vehicle, a conversation voice extraction unit 120, a conversation voice synthesis unit 130, an amplifier unit 140, conversation voice speakers 150 arranged near each seat in the vehicle, occupant head position detection units 160 arranged near each seat in the vehicle, and conversation voice speaker drive control units 170.
Here, being "installed near each seat" or "arranged near each seat" means, for example, being attached to the backrest or headrest of each seat, mounted at the foot of each seat, or mounted on the ceiling of the vehicle at a position near each seat.
In any case, it is desirable to install each component at a position that does not get in the way of the occupant when seated. In the following description, these states of installation or arrangement are expressed as "installed in a seat" or "disposed in a seat" for convenience.
[0015]
The microphones 110 are disposed in the vicinity of each seat in the car and collect the sound in the car. In the example of FIG. 1, the microphones 110 consist of the driver's seat microphone 110-1, the passenger's seat microphone 110-2, the rear right seat microphone 110-3, and the rear left seat microphone 110-4. In the following, for convenience of explanation, any one of the four microphones 110-1 to 110-4 is referred to as "microphone 110".
[0016]
The conversation voice speakers 150 are disposed in the vicinity of each seat in the vehicle and output the conversation voices of the occupants. Further, as will be described in detail later, each conversation voice speaker 150 is provided with a mechanism (a conversation voice speaker drive unit) for changing its direction (directivity). In the example of FIG. 1, the conversation voice speakers 150 consist of a driver's seat speaker 150-1, a front passenger's seat speaker 150-2, a rear right seat speaker 150-3, and a rear left seat speaker 150-4. In the following, for convenience of explanation, any one of the four conversation voice speakers 150-1 to 150-4 is referred to as "conversation voice speaker 150".
[0017]
The conversation voice extraction unit 120 extracts the conversation voices of the occupants from the in-vehicle sound collected by the microphones 110-1 to 110-4 installed in each seat (conversation voice extraction function). The conversation voice extraction unit 120 will be described in detail later with reference to the drawings.
[0018]
The conversation voice synthesis unit 130 synthesizes the conversation voices that were collected by the microphones 110-1 to 110-4 installed in each seat and extracted by the conversation voice extraction unit 120, so that each conversation voice is not output from the conversation voice speaker 150-1 to 150-4 disposed at the same seat as the microphone that collected it (conversation voice synthesis function). This synthesis process is performed for each seat, producing a synthesized conversation voice signal to be output from the conversation voice speaker 150-1 to 150-4 installed at each seat. The conversation voice synthesis unit 130 will be described in detail later with reference to the drawings.
[0019]
The amplifier unit 140 amplifies the conversation voice signals synthesized by the conversation voice synthesis unit 130. This amplification process is also performed for each seat. The synthesized conversation voice signal for each seat amplified by the amplifier unit 140 is sent to the corresponding conversation voice speaker 150-1 to 150-4, and each conversation voice speaker 150-1 to 150-4 outputs the amplified synthesized signal received from the amplifier unit 140 as sound.
[0020]
The occupant head position detection units 160 are disposed in the vicinity of each seat in the vehicle and detect the head position of the occupant seated in each seat. In the example of FIG. 1, the occupant head position detection units 160 consist of the driver's seat detection unit 160-1, the front passenger's seat detection unit 160-2, the rear right seat detection unit 160-3, and the rear left seat detection unit 160-4. The occupant head position detection unit 160 will be described in detail later with reference to the drawings. In the following, for convenience of explanation, any one of the four occupant head position detection units 160-1 to 160-4 is referred to as "occupant head position detection unit 160".
[0021]
The conversation voice speaker drive control units 170 control conversation voice speaker drive units, described later, based on the head positions of the occupants detected by the occupant head position detection units 160-1 to 160-4 installed in each seat, so that the corresponding conversation voice speakers 150-1 to 150-4 are directed toward the heads of the respective occupants (speaker directivity adjustment function). The contents of this control will be described in detail later with reference to the drawings. In the example of FIG. 1, the conversation voice speaker drive control units 170 consist of the driver's seat drive control unit 170-1, the passenger's seat drive control unit 170-2, the rear right seat drive control unit 170-3, and the rear left seat drive control unit 170-4. In the following, for convenience of explanation, any one of the four conversation voice speaker drive control units 170-1 to 170-4 is referred to as "conversation voice speaker drive control unit 170".
[0022]
In the present embodiment, the conversation voice speaker drive control unit 170 consists of the four conversation voice speaker drive control units 170-1 to 170-4, but the present invention is not limited to this. For example, the functions of the conversation voice speaker drive control units 170-1 to 170-4 may be provided in a single control unit.
[0023]
An audio unit 200 and on-vehicle audio speakers 210 are also installed in the car. The audio unit 200 has the functions of, for example, a radio receiver and a CD player, and outputs a received radio audio signal and a reproduction audio signal obtained by playing a storage medium such as a CD. Here, the radio audio signals and reproduction audio signals output from the audio unit 200 are collectively referred to as "audio sound signals". The on-vehicle audio speakers 210 output the audio sound signal (radio audio signal or reproduction audio signal) from the audio unit 200 as sound for the listeners. In the example of FIG. 1, the on-vehicle audio speakers 210 consist of a front right speaker 210-1, a front left speaker 210-2, a rear right speaker 210-3, and a rear left speaker 210-4.
[0024]
The audio sound signal output from the audio unit 200 is output as sound by the on-vehicle audio speakers 210-1 to 210-4 installed in the car, so that the passengers in the car can listen to the audio sound.
[0025]
In addition, all of the in-vehicle sound collected by the microphones 110-1 to 110-4 installed in each seat, including the audio sound, noise, and conversation voices, is input to the conversation voice extraction unit 120 together with the audio sound signal output from the audio unit 200. The conversation voice extraction unit 120 separates (extracts) only the conversation voices of the occupants from the in-vehicle sound collected by the microphones 110-1 to 110-4.
[0026]
Next, each function of the in-vehicle conversation assistance device 100 will be described in
detail.
[0027]
First, the in-vehicle conversational speech extraction function will be described with reference to
FIG.
[0028]
FIG. 2 is a block diagram showing an example of the configuration of the conversation voice extraction unit 120.
The conversation voice extraction unit 120 has the functions of removing the audio sound signal and the noise signal from the in-vehicle sound signal collected by the microphone 110 to separate the conversation voice signal, and of correcting its loudness based on the levels of the audio sound signal and the noise signal.
[0029]
The conversation voice extraction unit 120 includes an adaptive filter 121, a filter 122, arithmetic units 123 and 124, an ambient noise removal unit 125, a loudness compensation arithmetic unit 126, and a speech correction filter 127.
Note that the microphone 110 and conversation voice speaker 150 shown in FIG. 2 represent the corresponding microphones 110-1 to 110-4 and conversation voice speakers 150-1 to 150-4 installed in the respective seats.
[0030]
The adaptive filter 121 simulates the transfer characteristic of the acoustic space in the car. It is an FIR digital filter with filter coefficients (tap coefficients) and performs predetermined adaptive signal processing on the audio sound signal output from the audio unit 200. The filter coefficients are updated by, for example, the least mean square (LMS) algorithm so that the power of a difference signal (described later) output from the arithmetic unit 124 becomes minimum. The transfer characteristic of the acoustic space in the car can thereby be simulated; that is, the in-car echo component can be added in a pseudo manner to the audio sound signal output from the audio unit 200.
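As an illustration of the kind of adaptation described above, the following sketch shows a plain LMS update for an FIR filter that models the cabin transfer path. It is a minimal sketch, not the patented implementation; the filter length, step size, and signal names are assumptions chosen for clarity.

import numpy as np

def lms_echo_estimate(audio_ref, mic_signal, num_taps=256, mu=0.01):
    """Estimate the audio component picked up by the microphone.

    audio_ref  : reference audio signal fed to the cabin speakers
                 (corresponds to the audio unit 200 output)
    mic_signal : signal collected by a seat microphone (microphone 110)
    Returns the simulated echo of the audio signal and the residual
    (the microphone signal with that component removed).
    """
    w = np.zeros(num_taps)                    # adaptive FIR coefficients (tap coefficients)
    echo_est = np.zeros_like(mic_signal, dtype=float)
    residual = np.zeros_like(mic_signal, dtype=float)
    for n in range(num_taps, len(mic_signal)):
        x = audio_ref[n - num_taps:n][::-1]   # most recent reference samples
        echo_est[n] = w @ x                   # filter output: simulated cabin echo
        residual[n] = mic_signal[n] - echo_est[n]
        # LMS update: drive the power of the residual toward a minimum
        w += mu * residual[n] * x
    return echo_est, residual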
[0031]
The filter 122, like the adaptive filter 121, simulates the transfer characteristic of the acoustic space in the vehicle and has filter coefficients, which are copied from the adaptive filter 121 at a predetermined timing. By performing predetermined signal processing on the conversation voice signal output to the conversation voice speaker 150 using the filter coefficients determined by the adaptive filter 121, the filter 122 can simulate the conversation voice as heard in the acoustic space of the car.
[0032]
The arithmetic unit 123 receives the output signal of the microphone 110 (the in-vehicle sound signal collected by the microphone 110) and the output signal of the filter 122, and calculates the difference between these two signals. As a result, the echo component of the conversation voice that is output from the conversation voice speaker 150 and picked up around the microphone 110 is removed.
[0033]
The arithmetic unit 124 receives the difference signal output from the arithmetic unit 123 (the output signal of the microphone 110 from which the echo of the conversation voice output from the conversation voice speaker 150 has been removed) and the audio sound signal that was output from the audio unit 200 and processed by the adaptive filter 121 to simulate the transfer characteristic of the acoustic space in the vehicle, and calculates the difference between these two signals. The audio sound component is thereby further removed from the output signal of the microphone 110.
[0034]
The ambient noise removal unit 125 removes the component corresponding to the ambient noise contained in the difference signal output from the arithmetic unit 124 in the preceding stage. From the ambient noise removal unit 125, only the component corresponding to the conversation voice contained in the signal output from the microphone 110 is extracted and output. Since in-vehicle noise has a large low-frequency component, the ambient noise removal unit 125 can be configured, for example, as a high-pass filter.
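As one possible realization of such a high-pass stage, the sketch below applies a Butterworth high-pass filter to suppress low-frequency road and engine noise. The filter order, cutoff frequency, and sample rate are assumptions for illustration, not values specified in this document.

from scipy.signal import butter, lfilter

def remove_low_frequency_noise(signal, fs=16000, cutoff_hz=200, order=4):
    """High-pass filter standing in for the ambient noise removal unit 125.

    In-vehicle noise is assumed to be concentrated below roughly 200 Hz, so a
    high-pass filter passes the conversation voice band and attenuates the noise.
    """
    b, a = butter(order, cutoff_hz, btype="high", fs=fs)
    return lfilter(b, a, signal)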
[0035]
The loudness compensation arithmetic unit 126 receives the audio sound signal, the ambient noise signal, and the conversation voice signal, and based on these signals calculates the correction gain of each frequency component required for outputting the conversation voice from the conversation voice speaker 150. That is, based on the sound pressure levels of the audio sound and the ambient noise, the loudness compensation arithmetic unit 126 calculates the previously tuned per-frequency correction gains needed for optimal listening when the conversation voice is output from the conversation voice speaker 150.
[0036]
The speech correction filter 127 applies the per-frequency correction gains calculated by the loudness compensation arithmetic unit 126 according to the sound pressure level in the vehicle to the conversation voice signal output from the ambient noise removal unit 125, and adjusts its gain. This makes the conversation voice output from the conversation voice speaker 150 easy to hear.
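The following sketch illustrates one way such per-frequency gain correction could be applied: the conversation voice is split into bands with an FFT, each band is boosted according to a tuned gain table indexed by the measured background level, and the result is transformed back. The band edges, gain table, and threshold are hypothetical placeholders, not values taken from this document.

import numpy as np

# Hypothetical tuning: gain (in dB) to add per band when the background
# (audio + noise) level is high; these numbers are not from the patent.
BAND_EDGES_HZ = [0, 500, 2000, 8000]
GAIN_TABLE_DB = {"low_background": [0.0, 0.0, 0.0],
                 "high_background": [3.0, 6.0, 4.0]}

def apply_loudness_compensation(voice, background, fs=16000, threshold_db=60.0):
    """Boost the conversation voice per frequency band (cf. units 126 and 127)."""
    spectrum = np.fft.rfft(voice)
    freqs = np.fft.rfftfreq(len(voice), d=1.0 / fs)
    bg_level_db = 10 * np.log10(np.mean(background ** 2) + 1e-12) + 94  # rough SPL proxy
    gains_db = (GAIN_TABLE_DB["high_background"] if bg_level_db > threshold_db
                else GAIN_TABLE_DB["low_background"])
    for (lo, hi), g_db in zip(zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:]), gains_db):
        band = (freqs >= lo) & (freqs < hi)
        spectrum[band] *= 10 ** (g_db / 20.0)       # apply the per-band correction gain
    return np.fft.irfft(spectrum, n=len(voice))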
[0037]
FIG. 3 is a flow chart showing the operation of conversational speech extraction section 120
having the above configuration.
[0038]
First, the output signal of the microphone 110 (the in-vehicle sound signal collected by the microphone 110) is input to the conversation voice extraction unit 120 (step S1000).
The arithmetic unit 123 then subtracts from it the conversation voice signal that simulates the transfer characteristic of the acoustic space in the car (difference operation), thereby removing the echo component of the conversation voice that was output from the conversation voice speaker 150 and picked up around the microphone 110 (step S1100). Next, the arithmetic unit 124 subtracts the audio sound signal that simulates the transfer characteristic of the acoustic space in the car from the signal obtained in step S1100 (the difference signal output from the arithmetic unit 123), thereby further removing the audio sound component (step S1200). The ambient noise removal unit 125 then removes the in-vehicle noise, so that only the conversation voice is extracted from the sound collected by the microphone 110 (step S1300). Finally, after the loudness compensation arithmetic unit 126 calculates the correction gain of each frequency component required for outputting the conversation voice from the conversation voice speaker 150, the speech correction filter 127 applies gain processing to the conversation voice extracted in step S1300 according to the sound pressure level of the sound in the car (step S1400). This makes it possible to extract the conversation voice at the optimum level.
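Read as a processing chain, steps S1000 to S1400 can be sketched as below. The function reuses the hypothetical lms_echo_estimate, remove_low_frequency_noise, and apply_loudness_compensation sketches given earlier; the step numbers in the comments mirror the flow chart, but the function itself is only an illustrative outline, not the patented implementation.

import numpy as np

def extract_conversation_voice(mic_signal, speaker_feed, audio_ref, fs=16000):
    """Illustrative outline of steps S1000-S1400 for one seat microphone.

    mic_signal   : in-vehicle sound collected by microphone 110           (S1000)
    speaker_feed : conversation voice sent to the same-seat speaker 150
    audio_ref    : audio sound signal output from the audio unit 200
    """
    # S1100: subtract the simulated conversation-voice echo (filter 122 / unit 123)
    _, no_echo = lms_echo_estimate(speaker_feed, mic_signal)
    # S1200: subtract the simulated audio sound (adaptive filter 121 / unit 124)
    _, voice_plus_noise = lms_echo_estimate(audio_ref, no_echo)
    # S1300: remove low-frequency in-vehicle noise (ambient noise removal unit 125)
    voice = remove_low_frequency_noise(voice_plus_noise, fs=fs)
    # S1400: loudness compensation according to the in-car sound pressure level
    background = mic_signal - voice
    return apply_loudness_compensation(voice, background, fs=fs)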
[0039]
Next, with reference to FIG. 4, the function of synthesizing the conversational voice output from
the conversational speech extraction unit 120 will be described.
[0040]
FIG. 4 is a block diagram showing an example of the configuration of the conversation voice synthesis unit 130.
[0041]
The conversation voice synthesis unit 130 consists of arithmetic units 131, 132, 133, 134, 135, and 136, which are adders.
The conversation voice synthesis unit 130 synthesizes the conversation voices so that a conversation voice collected by any microphone 110 and extracted by the conversation voice extraction unit 120 is not output from the conversation voice speaker 150 installed at the same seat as that microphone 110, but is output from the other conversation voice speakers 150.
That is, the conversation voice speaker 150-1 outputs the conversation voices collected by the microphones 110-2, 110-3, and 110-4 and extracted by the conversation voice extraction unit 120; the conversation voice speaker 150-2 outputs the conversation voices collected by the microphones 110-1, 110-3, and 110-4; the conversation voice speaker 150-3 outputs the conversation voices collected by the microphones 110-1, 110-2, and 110-4; and the conversation voice speaker 150-4 outputs the conversation voices collected by the microphones 110-1, 110-2, and 110-3. This configuration is expressed by the following equations (1) to (4).
[0042]
SP150-1 = MIC110-2 + MIC110-3 + MIC110-4 ... (1)
SP150-2 = MIC110-1 + MIC110-3 + MIC110-4 ... (2)
SP150-3 = MIC110-1 + MIC110-2 + MIC110-4 ... (3)
SP150-4 = MIC110-1 + MIC110-2 + MIC110-3 ... (4)
[0043]
Here, "SP" denotes the conversation voice signal output from the conversation voice speaker 150-1 to 150-4, and "MIC" denotes the conversation voice signal collected by the microphone 110-1 to 110-4 and extracted by the conversation voice extraction unit 120.
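Equations (1) to (4) amount to a mixing matrix with zeros on the diagonal: each speaker receives the sum of the extracted voices from every seat except its own. A minimal sketch of this synthesis step follows; the seat indexing and array layout are assumptions made for illustration.

import numpy as np

def synthesize_conversation_voices(extracted_voices):
    """Implement equations (1)-(4): each speaker gets every voice but its own seat's.

    extracted_voices : array of shape (num_seats, num_samples), one row per
                       microphone 110-1 to 110-4 after extraction by unit 120.
    Returns an array of the same shape, one row per conversation voice
    speaker 150-1 to 150-4.
    """
    num_seats = extracted_voices.shape[0]
    # Mixing matrix: ones everywhere except the diagonal (same-seat path muted).
    mix = np.ones((num_seats, num_seats)) - np.eye(num_seats)
    return mix @ extracted_voices

# Example: 4 seats, 1 second of audio at 16 kHz
voices = np.random.randn(4, 16000)
speaker_feeds = synthesize_conversation_voices(voices)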
[0044]
The conversation voice signals synthesized in this way by the conversation voice synthesis unit 130 are output to the amplifier unit 140 for each seat, amplified by the amplifier unit 140, and output as sound from the conversation voice speakers 150-1 to 150-4 installed in each seat.
As a result, the speaker does not hear his or her own conversation voice from the conversation voice speaker 150 installed at his or her own seat, which reduces the sense of strangeness and discomfort that arises with the conventional in-vehicle conversation assistance device.
[0045]
Next, the function of detecting the head position of the occupant in the car and directing the
directivity of the speaker 150 to the head of the occupant based on the detection result will be
described.
[0046]
First, head position detection of the occupant in the vehicle will be described with reference to
FIG.
[0047]
FIG. 5 is a diagram for explaining a head position detection method using inter-frame difference
and stereo sphere projection as an example of the occupant head position detection unit 160.
[0048]
In this head position detection method, as shown in FIG. 5, each occupant head position detection unit 160 uses two infrared cameras 161a and 161b, so that it can also operate during night driving, and views the occupant of each seat in stereo.
When the movement of the observed object 163, the roughly spherical human head, in the three-dimensional observation space 162 is photographed by the infrared cameras 161a and 161b, the inter-frame difference appears as a circular shape in the two-dimensional images 164a and 164b of the two cameras. Using this principle, a circular inter-frame difference is searched for in the images of the two infrared cameras 161a and 161b.
[0049]
At this time, as shown in FIG. 6, the position of the sphere in three-dimensional space is calculated from the circular inter-frame differences appearing simultaneously in the images of both infrared cameras 161a and 161b, within a space in which a coordinate system (x, y, z) having an arbitrary point as its origin is quantized at constant intervals.
[0050]
Then, based on the head position (x, y, z) of the occupant calculated by the corresponding occupant head position detection unit 160, the conversation voice speaker drive control unit 170 converts the parameters of the head position into polar coordinates (r, θ, φ) using a known polar coordinate conversion, for example as shown in FIG. 7.
Based on the converted parameters (r, θ, φ), the conversation voice speaker drive control unit 170 determines the drive amount of the corresponding conversation voice speaker 150 installed in each seat, and performs control so that the directivity is directed toward the head of the occupant by changing the speaker axis of the conversation voice speaker 150.
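The polar conversion referred to here is the standard spherical one; a small sketch is shown below, where the axis convention is an assumption: r is the distance to the head, θ the angle measured from the speaker axis, and φ the azimuth used to turn the turntable.

import math

def to_polar(x, y, z):
    """Convert a head position (x, y, z) to polar coordinates (r, theta, phi).

    Axis convention is an assumption: z points from the speaker toward the
    occupant, theta is measured from the z-axis, phi lies in the x-y plane.
    """
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0   # angle from the speaker axis
    phi = math.atan2(y, x)                       # rotation of the turntable 184
    return r, theta, phi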
[0051]
Here, the conversation voice speaker 150 and its drive mechanism will be described in detail with reference to FIGS. 8 to 10.
[0052]
FIG. 8 is a diagram showing the configuration of a speaker drive unit 180 that drives the conversation voice speaker 150 so as to direct its speaker axis toward the occupant.
In particular, FIG. 8A is a top view of the speaker drive unit 180, FIG. 8B is a side view of the speaker drive unit 180, and FIG. 8C is a front view of the speaker drive unit 180.
[0053]
As shown in FIG. 8, the speaker drive unit 180 includes a housing 181, a speaker holding member 182 that holds the conversation voice speaker 150, a fastener 183 that attaches the speaker holding member 182 to the housing 181 so that it can rotate in the θ direction, a turntable 184 for turning the speaker axis of the conversation voice speaker 150 in the φ direction, and a motor 185 for changing the direction of the conversation voice speaker 150 in the θ direction via the speaker holding member 182.
[0054]
Further, on the outer periphery of the speaker holding member 182, a band-like gear 187 which
meshes with the gear 186 provided on the rotation shaft of the motor 185 is provided.
[0055]
Here, FIG. 9 shows the trajectory of the speaker axis produced by the rotation of the turntable 184 in FIG. 8, and FIG. 10 shows the trajectory of the speaker axis produced by the drive of the motor 185 in FIG. 8.
[0056]
Next, the operation of the speaker drive unit 180 having the above configuration will be
described using the flowchart of FIG.
[0057]
FIG. 11 is a flowchart for explaining the operation of the speaker drive unit 180 in the present
embodiment.
[0058]
First, the occupant head position detection unit 160 performs the above-described operation to determine whether an occupant is seated in the target seat (for example, the driver's seat) (step S2000).
This determination is made according to whether the inter-frame difference appears as a circular shape.
[0059]
If the inter-frame difference does not appear circular in step S2000, the occupant head position detection unit 160 determines that no occupant is in the target seat, and the processing operation of the occupant head position detection unit 160 ends.
[0060]
On the other hand, if the inter-frame difference appears circular in step S2000, the occupant head position detection unit 160 determines that an occupant is in the target seat, and the detection operation for the occupant's head position is started (step S2100).
[0061]
Next, it is determined whether the head position of the occupant detected in step S2100 is the same as the position detected the previous time (step S2200).
At this time, for example, the conversation voice speaker drive control unit 170 determines whether previous head position detection data is stored.
[0062]
Here, if no previous head position detection data is stored, the conversation voice speaker drive control unit 170 drives the turntable 184 and the motor 185 based on the head position information detected in step S2100 so as to direct the directivity of the speaker toward the head of the occupant (step S2400).
On the other hand, if previous head position detection data is stored, the previous head position information is compared with the current head position information detected in step S2100 (step S2300).
If, as a result of the comparison in step S2300, the previous head position information and the current head position information coincide, the process returns to step S2100 without driving the turntable 184 and the motor 185.
If they do not coincide, the turntable 184 and the motor 185 are driven based on the current head position information so as to direct the directivity of the speaker toward the head of the occupant (step S2400).
[0063]
Thereafter, the processing operations from step S2000 to step S2500 are repeated as appropriate until the power of the in-vehicle conversation assistance device 100 is turned off or the head position detection function is interrupted by the user's operation (step S2500).
[0064]
As described above, the present embodiment has a function of detecting the head position of the occupant in the car and directing the directivity of the conversation voice speaker 150 toward the head of the occupant based on the detection result.
By driving the turntable 184 and the motor 185 according to the drive amount output from the conversation voice speaker drive control unit 170, the speaker axis of the conversation voice speaker 150 can be changed within the ranges shown in FIGS. 9 and 10 to change its directivity, and the directivity of the conversation voice speaker 150 can be directed toward the head of the occupant.
[0065]
As described above, by directing the speaker axis of the conversation voice speaker 150 toward the head position of the occupant using the rotation of the turntable 184 and the drive of the motor 185, the listening of the conversation voice by the target occupant is optimized, the wraparound of conversation voice from the other conversation voice speakers 150 is prevented, and the sense of strangeness and discomfort caused by a speaker hearing his or her own conversation voice is reduced.
[0066]
Next, the operation of the in-vehicle conversation assistance device 100 having the above
configuration will be described using FIG.
[0067]
FIG. 12 is a flow chart showing the operation of the in-vehicle conversation assistance device 100 according to the present embodiment.
[0068]
When the power of the in-vehicle conversation assistance device 100 is turned on, the conversation voices uttered by the passengers in the vehicle are first collected by the microphones 110-1 to 110-4 together with the audio sound output from the audio unit 200 and the noise in the vehicle (step S3000).
However, audio sound does not necessarily have to be output.
[0069]
Then, the conversation voice extraction function described above extracts the conversation voices from the in-vehicle sound collected in step S3000 and performs loudness correction according to the sound pressure level (step S3100).
Here, dedicated conversation voice speakers 150-1 to 150-4 are provided in the vicinity of each seat, and the conversation voice output from the conversation voice speaker 150-1 to 150-4 corresponding to a given seat is the voice obtained by collecting, extracting, and synthesizing the in-vehicle sound using the microphones 110-1 to 110-4 of the seats other than that seat.
[0070]
That is, the conversation voices to be output from the conversation voice speakers 150-1 to 150-4 are obtained by collecting and extracting the in-vehicle sound using the microphones 110-1 to 110-4 provided in the vicinity of seats other than the seat where each conversation voice speaker 150-1 to 150-4 is installed, and synthesizing the extracted conversation voice signals (step S3200). The synthesized conversation voice signals are then amplified by the amplifier unit 140 (step S3300) and output from the conversation voice speakers 150-1 to 150-4, each of whose speaker axes is directed toward the head of the occupant of the corresponding target seat (step S3400).
[0071]
As shown in the flow chart of FIG. 12, the head position of the occupant is detected for each of the conversation voice speakers 150-1 to 150-4, and by driving the turntable 184 and the motor 185 installed on the conversation voice speakers 150-1 to 150-4, the speaker axes are always kept directed toward the heads of the occupants.
[0072]
That is, the occupant head position detection units 160-1 to 160-4 installed in each seat detect the head positions of the occupants, and based on the detection results, the motors 185 installed in the speaker drive units 180 are driven to change the speaker axes of the conversation voice speakers 150-1 to 150-4 so that they are directed toward the heads of the occupants.
The drive amount for directing the speaker axis of each conversation voice speaker 150-1 to 150-4 toward the head of the occupant is calculated by the conversation voice speaker drive control unit 170-1 to 170-4 installed in each seat, and by changing the speaker axis of the conversation voice speaker 150-1 to 150-4 installed in each seat based on the calculated drive amount, the directivity is adjusted toward the head of the occupant so that the conversation voice becomes optimal for that occupant. As a result, the listening of the conversation voice by the target occupant can be optimized and the wraparound of conversation voice from the other conversation voice speakers 150-1 to 150-4 can be prevented, so the sense of strangeness and discomfort caused by a speaker hearing his or her own conversation voice can be reduced.
[0073]
In the present embodiment, the conversation voice extraction process is performed by the series of steps from step S1000 to step S1400 described with reference to FIG. 3, and the driving of the conversation voice speaker 150 is performed by the steps from step S2000 to step S2500 described with reference to FIG. 11. The conversation voice extraction processing and the speaker driving processing may be performed in parallel, or may be performed continuously in series.
[0074]
As described above, according to the in-vehicle conversation assistance device in the embodiment of the present invention, by providing the conversation voice synthesis unit 130, which does not output the speaker's own conversation voice at the speaker's seat, and the conversation voice speakers 150, and by outputting the conversation voices from the conversation voice speakers 150, it is possible to assist conversation in the car and to reduce the sense of strangeness and discomfort caused by a speaker hearing his or her own conversation voice.
[0075]
Furthermore, by detecting the head of the occupant in each seat and directing the conversation voice speaker 150 toward the head of that occupant, the conversation voice output from the conversation voice speakers 150 installed at the other seats is prevented from wrapping around to the speaker. This further prevents the speaker from hearing his or her own conversation voice and from feeling a sense of strangeness or discomfort.
[0076]
The microphones 110, conversation voice speakers 150, occupant head position detection units 160, and conversation voice speaker drive control units 170 installed in the vicinity of each seat in the car do not necessarily have to be provided for all the seats; their number may be increased or decreased as appropriate.
[0077]
According to the present invention, by providing a conversation voice synthesis unit that does not output a speaker's own conversation voice at the speaker's seat, and conversation voice speakers that can detect the head of an occupant and direct their speaker axes toward the occupant's head, the conversation voice is output in the optimum direction from the conversation voice speakers, conversation in the car is assisted, and the sense of strangeness or discomfort caused by a speaker hearing his or her own conversation voice can be reduced. The present invention is therefore useful for in-vehicle sound systems and the like to which microphones capable of collecting the conversation voices of the occupants can be connected.
[0078]
FIG. 1 is a schematic configuration diagram of an in-vehicle conversation assistance device according to an embodiment of the present invention.
FIG. 2 is a block diagram showing an example of the configuration of the conversation voice extraction unit according to the embodiment of the present invention.
FIG. 3 is a flow chart showing the operation of the conversation voice extraction unit according to the embodiment of the present invention.
FIG. 4 is a block diagram showing an example of the configuration of the conversation voice synthesis unit according to the embodiment of the present invention.
FIG. 5 is a diagram for explaining the operation principle of the occupant head position detection unit according to the embodiment of the present invention.
FIG. 6 is a diagram showing the search space and the coordinate system (x, y, z) quantized at constant intervals, for explaining the operation principle of the occupant head position detection unit in the embodiment.
FIG. 7 is a diagram showing the polar coordinate conversion of the head position, for explaining the operation principle of the occupant head position detection unit in the embodiment.
FIG. 8 shows (a) a top view, (b) a side view, and (c) a front view of the speaker drive unit according to the embodiment of the present invention.
FIG. 9 is a diagram showing the trajectory of the speaker axis during turntable drive of the speaker drive unit according to the embodiment of the present invention.
FIG. 10 is a diagram showing the trajectory of the speaker axis during motor drive of the speaker drive unit according to the embodiment of the present invention.
FIG. 11 is a flow chart for explaining the drive operation of the speaker axis in the embodiment of the present invention.
FIG. 12 is a flow chart for explaining the operation of the in-vehicle conversation assistance device according to the embodiment of the present invention.
Explanation of signs
[0079]
100 In-vehicle conversation assistance device
110, 110-1 to 110-4 Microphone
120 Conversation voice extraction unit
130 Conversation voice synthesis unit
140 Amplifier unit
150, 150-1 to 150-4 Conversation voice speaker
160, 160-1 to 160-4 Occupant head position detection unit
170, 170-1 to 170-4 Conversation voice speaker drive control unit
180 Speaker drive unit
184 Turntable
185 Motor
200 Audio unit
210-1 to 210-4 On-vehicle audio speaker