Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2008306372
The present invention provides an acoustic signal control apparatus that allows a user to intuitively select a desired acoustic signal from among acoustic signals input from a plurality of acoustic sources. The apparatus comprises: coupling devices 3, provided corresponding to each of the plurality of acoustic sources, each integrally coupling a tactile/vibration sense presentation device 3a, which presents tactile and vibration stimuli based on the corresponding input acoustic signal, with an operation device 3b, which is operated when instructing that predetermined processing be performed on that acoustic signal; means for controlling the tactile and vibration stimuli presented by the tactile/vibration sense presentation devices 3a; and a sound output control unit 4 that causes the output device 400 to output sound based on the acoustic signals. [Selected figure] Figure 2
Acoustic signal controller
[0001]
The present invention relates to an acoustic signal control apparatus that controls acoustic
signals input from a plurality of acoustic sources.
[0002]
In a sound output device such as a television receiver, a hearing aid, or an acoustic signal editor, a user may listen to sound based on a plurality of acoustic signals output from the device, identify a specific acoustic signal, and instruct the device to perform processing on it (for example, outputting only the sound based on that one acoustic signal, or adjusting the volume of the sound based on it).
Three specific examples are listed below.
[0003]
The first is simultaneous viewing of television broadcasts. For example, news programs often begin at the same time on multiple broadcast stations. When a viewer is interested in a particular news item, the viewer cannot know in advance which station will broadcast it first, and so has no choice but to wait for it while switching channels, which risks missing the desired news. A method in which a tuner capable of simultaneously receiving television broadcast signals of a plurality of channels is mounted in a television receiver, so that the viewer can watch a plurality of programs simultaneously, is therefore effective. If the characteristics of the audio output are made to differ among the programs, the audio of each program can be distinguished more easily (see, for example, Patent Document 1). However, even with the technology of Patent Document 1, a viewer who hears the content of the desired news finds it difficult to grasp which channel it belongs to. When the viewer then wants to watch only that program among the several being watched, it is difficult to select its channel instantaneously. An operation device, such as a switch, that allows the user to intuitively select the channel of the desired program is therefore required.
[0004]
The second is output switching of separated signals. Recent advances in signal separation technology have made it possible to separate the voices of a plurality of speakers in real time, even in the acoustic environment of daily life. For example, a hearing aid with a signal separation function can automatically separate and extract the voices of a plurality of speakers. If the hearing aid is configured to store the features of speakers and to output only the voice of a speaker selected by the user on the basis of those features (see, for example, Patent Document 2), the user can hear only the voice of the desired speaker by selecting from among the speakers whose features are stored. However, with the technique of Patent Document 2, only the voice of a speaker whose features have been stored can be output, not the voice of an arbitrary speaker. Moreover, even if a configuration is adopted in which the output voice can be switched to an arbitrary speaker with an operation device such as a switch, the choice of which of several speakers to listen to depends on the user's intention and cannot be automated. There is therefore a need for an operation device that allows the user to intuitively select the desired speaker.
[0005]
The third is editing of acoustic signals with an acoustic signal editor such as a mixing console. Assume, for example, a four-channel set of music sources: ch-1 carries a drum source, ch-2 a vocal source, ch-3 a piano source, and ch-4 a bass guitar source. The operator (user) of the acoustic signal editor operates an operation device, such as a volume control knob, corresponding to each music source to adjust the volume of the sound based on the acoustic signal from that source. The editor then outputs a sound obtained as a weighted average of the four volume-adjusted acoustic signals; this is how the final balance of the music is created with an acoustic signal editor. However, when there are many music sources, it is difficult to reliably pick out the operation device that corresponds to the source whose volume is to be adjusted. There is therefore a need for an operation device that allows the user to intuitively select the music source whose volume is to be adjusted.
[0006]
When the user selects the channel of a desired program, selects a desired speaker, or selects the music source whose volume is to be adjusted, the user is in each case selecting one acoustic signal from among acoustic signals input from a plurality of acoustic sources. The simplest method for this is to switch the acoustic signal to be output, one signal at a time and in order, with an operation device such as a switch.
[0007]
The most common way of converting an acoustic signal into a tactile or vibratory stimulus and presenting it to a person is to transmit the bass part directly to the body rather than through the ear, as in body-sonic systems. Ideas have also been proposed for conveying the rhythm of music by transmitting vibrations to the fingertips, and for conveying the emotion of sounds by vibration (see, for example, Patent Documents 3 to 5).
[0008]
As ideas for integrating an operation device such as a switch with a tactile/vibration sense presentation device, it has been proposed to use vibration to confirm to the user that a key on a mobile phone or the like has actually been pressed (see, for example, Patent Document 6), to convey confirmation of a touch-panel operation to the user by vibration (see, for example, Patent Document 7), and to notify the user by vibration when a rotary input is required on an input unit comprising a rotary switch, push switch, slide switch and push-button switch (see, for example, Patent Document 8). Patent Documents: JP 2005-210349, JP 2000-125397, JP 8-19078, JP 2006-14255, JP 2006-47640, JP 2003-5893, JP 9-161602, JP 2004-70505.
[0009]
However, with the method of switching the acoustic signal to be output one signal at a time with an operation device such as a switch, there is a concern that an important sound may be output while its switch is off and thus be missed.
[0010]
In addition, although techniques such as body sonic and Patent Documents 3 to 5 can convey sound by vibration, they cannot identify by vibration each of the acoustic signals input from a plurality of acoustic sources, and therefore cannot be used for selecting one acoustic signal from among acoustic signals input from a plurality of acoustic sources.
[0011]
Moreover, although the techniques of Patent Documents 6 to 8 can convey operation confirmation and the need for input by vibration, they likewise cannot identify by vibration each of the acoustic signals input from a plurality of acoustic sources, and therefore cannot be used for selecting one acoustic signal from among such signals.
[0012]
An object of the present invention is to provide an acoustic signal control apparatus capable of
intuitively selecting a desired acoustic signal from acoustic signals input from a plurality of
acoustic sources.
[0013]
In order to solve the above problem, the invention of claim 1 is characterized by comprising: coupling devices, provided corresponding to each of a plurality of acoustic sources, each integrally coupling a tactile/vibration sense presentation device that presents tactile and vibration stimuli based on the acoustic signal input from the corresponding acoustic source with an operation device operated when instructing that predetermined processing be performed on that acoustic signal; tactile/vibration sense presentation control means for controlling the tactile and vibration stimuli presented by the tactile/vibration sense presentation devices; and output control means for causing an output device to output sound based on the acoustic signals.
[0014]
The invention according to claim 2 is the acoustic signal control device according to claim 1, wherein the tactile/vibration sense presentation control means obtains an intensity signal representing the temporal change of the intensity of the acoustic signal and a smoothed intensity signal obtained by smoothing that intensity signal, and, when the intensity signal becomes larger than the smoothed intensity signal, causes the tactile/vibration sense presentation device to start presenting a tactile and vibration stimulus and to continue the presentation for a predetermined time.
[0015]
The invention according to claim 3 is characterized in that, in the acoustic signal control device
according to claim 2, the intensity signal is an intensity signal of a prediction error signal in
linear prediction analysis.
[0016]
The invention according to claim 4 is characterized in that, in the acoustic signal control
apparatus according to claim 2, the intensity signal is an intensity signal of an aperiodic
component.
[0017]
The invention according to claim 5 is the acoustic signal control device according to any one of claims 1 to 4, wherein the tactile/vibration sense presentation control means causes the tactile/vibration sense presentation device to present a tactile and vibration stimulus based on a baseband signal and/or a tactile and vibration stimulus based on an amplitude modulation signal amplitude-modulated onto a preset frequency.
[0018]
The invention according to claim 6 is the acoustic signal control device according to any one of claims 1 to 5, wherein the tactile/vibration sense presentation control means causes the tactile/vibration sense presentation device to present the tactile and vibration stimulus for the acoustic signal whose auditory stimulus is strongest at a given time.
[0019]
The invention according to claim 7 is the acoustic signal control device according to any one of claims 1 to 5, further comprising time axis expansion/contraction means for expanding or contracting the time axis of the acoustic signal, wherein the tactile/vibration sense presentation control means causes the tactile/vibration sense presentation device to present tactile and vibration stimuli based on the acoustic signal whose time axis has been expanded or contracted by the time axis expansion/contraction means.
[0020]
The invention according to claim 8 comprises: coupling devices, provided corresponding to each of a plurality of acoustic sources, each integrally coupling a tactile/vibration sense presentation device that presents tactile and vibration stimuli based on the acoustic signal input from the corresponding acoustic source with an operation device operated when instructing that predetermined processing be performed on that acoustic signal; tactile/vibration sense presentation control means for controlling the tactile and vibration stimuli presented by the tactile/vibration sense presentation devices; and output control means for causing an output device to output sound based on the acoustic signals. The tactile/vibration sense presentation control means obtains the intensity signal of the prediction error signal in linear prediction analysis, which represents the temporal change of the intensity of the acoustic signal, and a smoothed intensity signal obtained by smoothing that intensity signal, and, when the intensity signal becomes larger than the smoothed intensity signal, causes the tactile/vibration sense presentation device to start presenting a tactile and vibration stimulus based on the intensity signal and/or a tactile and vibration stimulus based on an amplitude modulation signal amplitude-modulated onto a preset frequency, and to continue the presentation for a predetermined time.
[0021]
According to the present invention, the acoustic signal control device comprises: coupling devices, provided corresponding to each of a plurality of acoustic sources, each integrally coupling a tactile/vibration sense presentation device that presents tactile and vibration stimuli based on the acoustic signal input from the corresponding acoustic source with an operation device operated when instructing that predetermined processing be performed on that acoustic signal; tactile/vibration sense presentation control means for controlling the tactile and vibration stimuli presented by the tactile/vibration sense presentation devices; and output control means for causing an output device to output sound based on the acoustic signals.
That is, a tactile/vibration sense presentation device is provided for each of the acoustic signals input from the plurality of acoustic sources, and each presentation device presents tactile and vibration stimuli based on its corresponding acoustic signal. The user can therefore intuitively select a desired acoustic signal from among the acoustic signals input from the plurality of acoustic sources.
[0022]
EXAMPLE Hereinafter, the best mode of an acoustic signal control apparatus according to the
present invention will be described in detail with reference to the drawings.
The scope of the invention is not limited to the illustrated example.
[0023]
FIG. 1 is a block diagram showing the functional configuration of the acoustic signal control device 100, and FIG. 2 shows the flow of signals in the acoustic signal control device 100.
[0024]
The acoustic signal control device 100 is, for example, a device incorporated in a television
receiver or the like, and is connected to, for example, the tuner 200 and the amplifier 300 of the
television receiver as shown in FIG.
[0025]
Specifically, for example, as shown in FIG. 1, the acoustic signal control apparatus 100 comprises the signal processing unit 1 connected to the tuner 200, the drive unit 2, the coupling devices 3 in which the tactile/vibration sense presentation device 3a and the operation device 3b are integrally coupled, the sound output control unit 4 connected to the amplifier 300, the control unit 5, and the like.
[0026]
[Tuner] The tuner 200 is, for example, a tuner capable of simultaneously receiving television broadcast signals of a plurality of channels.
Specifically, for example, as shown in FIGS. 1 and 2, the tuner 200 has an antenna 200a.
The tuner 200 extracts, from the television broadcast waves received by the antenna 200a, the television broadcast signals of a plurality of programs broadcast on a plurality of channels, separates an acoustic signal from each television broadcast signal, and inputs the acoustic signals of the plurality of programs independently of one another to the acoustic signal control apparatus 100.
That is, the tuner 200 is an acoustic signal input device that has a plurality of acoustic sources and inputs acoustic signals from those acoustic sources to the acoustic signal control device 100.
[0027]
For example, the tuner 200 shown in FIG. 2 inputs the acoustic signals of four TV programs, the first to the fourth TV program, to the acoustic signal control apparatus 100 (signal processing unit 1) as the acoustic signals of a plurality of programs.
[0028]
[Signal Processing Unit] The signal processing unit 1 performs predetermined acoustic signal processing (for example, FIG. 3) on the acoustic signals input from the tuner 200, for example in accordance with a control signal input from the control unit 5, outputs the processed acoustic signals to the sound output control unit 4, and causes the tactile/vibration sense presentation devices 3a, via the drive unit 2, to present tactile and vibration stimuli based on the acoustic signals.
[0029]
For example, the signal processing unit 1 illustrated in FIG. 2 includes a first signal processing
unit to a fourth signal processing unit.
The first signal processing unit is configured to receive an audio signal related to the first TV
program.
The second signal processing unit is configured to receive an audio signal related to the second
TV program.
The third signal processing unit is configured to receive an audio signal related to the third TV
program.
The fourth signal processing unit is configured to receive an acoustic signal related to the fourth
TV program.
Then, for example, in the signal processing unit 1 shown in FIG. 2, each of the first to fourth signal processing units performs the acoustic signal processing (for example, FIG. 3), outputs the acoustic signal (x(t)) to the sound output control unit 4, and outputs a signal (f2(nB)) for driving the tactile/vibration sense presentation device 3a to the drive unit 2.
[0030]
<Acoustic Signal Processing> A specific example of acoustic signal processing by the signal
processing unit 1 will be described with reference to the flowchart in FIG.
[0031]
Here, one acoustic signal input from the tuner 200 is denoted x(t).
The acoustic signal x(t) is a digital signal at the sampling frequency FS, and t is the sample number.
That is, for example, when FS = 44100 Hz, t = 44100 represents "one second later".
In the following, lengths on the time axis are expressed in numbers of samples.
The acoustic signal x(t) is processed block by block, each block covering a predetermined time.
The block length is TN, and the interval between the start points of adjacent blocks is TS, with TS < TN so that successive blocks overlap.
In the following, TN = 1470 and TS = 245 are assumed.
The block number when the blocks are arranged in time order is denoted nB.
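For illustration, a minimal Python sketch of the block framing just described (the function and variable names are illustrative; only FS, TN and TS come from the text):

import numpy as np

FS = 44100   # sampling frequency [Hz]
TN = 1470    # block length [samples], about 33 ms
TS = 245     # interval between block start points [samples], about 5.6 ms

def blocks(x):
    # Yield (nB, y) where y is the nB-th analysis block x(t), TS*nB <= t < TS*nB + TN.
    nB = 0
    while TS * nB + TN <= len(x):
        yield nB, np.asarray(x[TS * nB : TS * nB + TN], dtype=float)
        nB += 1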
[0032]
First, the signal processing unit 1 initializes each variable. That is, for example, it sets the block number nB to 0, count to 0, f1(−1) to 0, f2(−1) to 0, and PA(−1) to 0 (step S1).
[0033]
Next, the signal processing unit 1 copies, for example, one block of the acoustic signal x(t) (TS·nB ≤ t < TS·nB + TN) to the analysis buffer y(t) (0 ≤ t < TN) (step S2).
[0034]
Next, the signal processing unit 1 performs, for example, linear prediction analysis on the analysis buffer y(t) (0 ≤ t < TN) to obtain the linear prediction coefficients a1, a2, ..., aK (step S3). Here, K is the order of the linear prediction. In the following, K = 20 is assumed.
[0035]
Next, the signal processing unit 1 predicts, for example, the TS samples at the centre of the block using the linear prediction coefficients a1, a2, ..., aK. That is, it predicts the samples y(t) ((TN − TS)/2 ≤ t < (TN + TS)/2) of the analysis buffer y(t) (0 ≤ t < TN) using the coefficients a1, a2, ..., aK and obtains the series of prediction errors. The signal processing unit 1 then calculates the sum of squares of this series of prediction errors (the prediction error energy, that is, the intensity signal of the prediction error signal in linear prediction analysis, which represents the temporal change of the intensity of the acoustic signal) and takes it as the intensity signal PR(nB) (step S4). The intensity signal PR(nB) takes a larger value the higher the volume of the sound, but it is not simply proportional to the volume: it increases sharply in consonant parts and also at the start of vowel parts. Therefore, if tactile and vibration stimuli can be presented at the moments when the intensity signal PR(nB) rises sharply, a person can relatively easily associate the sound with the stimulus.
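The passage does not say how the linear prediction coefficients are estimated; the sketch below is a hypothetical illustration that uses the standard autocorrelation method and then takes the prediction error energy of the central TS samples as PR(nB).

import numpy as np

def lpc_coefficients(y, K=20):
    # Autocorrelation (Yule-Walker) method: solve R a = r for a1..aK.
    r = np.correlate(y, y, mode="full")[len(y) - 1 : len(y) + K]
    R = np.array([[r[abs(i - j)] for j in range(K)] for i in range(K)])
    return np.linalg.solve(R, r[1 : K + 1])

def prediction_error_energy(y, TN=1470, TS=245, K=20):
    a = lpc_coefficients(y, K)
    lo, hi = (TN - TS) // 2, (TN + TS) // 2
    # Prediction error e(t) = y(t) - sum_k a_k * y(t-k), over the central TS samples.
    err = [y[t] - np.dot(a, y[t - 1 : t - K - 1 : -1]) for t in range(lo, hi)]
    return float(np.sum(np.square(err)))   # intensity signal PR(nB)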
[0036]
Next, the signal processing unit 1 creates, for example, a smoothed intensity signal PA(nB) by smoothing the intensity signal PR(nB) (step S5). Specifically, with TA as the smoothing time constant, the signal processing unit 1 uses δ defined by δ = 1 − exp(−TS/TA) and calculates PA(nB) = (1 − δ)·PA(nB−1) + δ·PR(nB) to create the smoothed intensity signal PA(nB). The smoothed intensity signal PA(nB) follows the intensity signal PR(nB) slowly and thus represents the local average energy of PR(nB). In the following, TA = 882 (corresponding to 20 milliseconds) is assumed.
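The smoothing of step S5 follows directly from the recursion given above; a minimal sketch with TA = 882 (20 ms at 44.1 kHz) and TS = 245:

import math

def smoothed_intensity(PA_prev, PR, TS=245, TA=882):
    delta = 1.0 - math.exp(-TS / TA)
    return (1.0 - delta) * PA_prev + delta * PR   # PA(nB)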
[0037]
Next, the signal processing unit 1 sets, for example, the first threshold signal PH(nB) to "the signal obtained by raising PA(nB) by 1 dB" and the second threshold signal PL(nB) to "the signal obtained by lowering PA(nB) by 5 dB" (step S6). If the intensity signal PR(nB) greatly exceeds the smoothed intensity signal PA(nB), the prediction error energy has risen sharply, so the block can be estimated to be a consonant part or the start of a vowel part; if the intensity signal PR(nB) is significantly lower than the smoothed intensity signal PA(nB), the block can be estimated to be a silent part or the continuation of a vowel part. The two threshold signals (the first threshold signal PH(nB) and the second threshold signal PL(nB)) are provided in order to make this estimation. The first threshold signal PH(nB) is not limited to a signal obtained by raising PA(nB) by 1 dB; any signal equal to or higher than the smoothed intensity signal PA(nB) may be used. Likewise, the second threshold signal PL(nB) is not limited to a signal obtained by lowering PA(nB) by 5 dB; any signal equal to or lower than the smoothed intensity signal PA(nB) may be used.
[0038]
Next, the signal processing unit 1 determines whether, for example, the intensity signal PR (nB) is
larger than the first threshold signal PH (nB) (step S7).
[0039]
If it is determined in step S7 that the intensity signal PR(nB) is larger than the first threshold signal PH(nB) (step S7; Yes), the signal processing unit 1 sets, for example, the signal f1(nB) to "1" (step S8) and proceeds to step S12.
[0040]
On the other hand, when it is determined in step S7 that the intensity signal PR(nB) is not larger than the first threshold signal PH(nB) (step S7; No), the signal processing unit 1 determines whether the intensity signal PR(nB) is smaller than the second threshold signal PL(nB) (step S9).
[0041]
If it is determined in step S9 that the intensity signal PR(nB) is smaller than the second threshold signal PL(nB) (step S9; Yes), the signal processing unit 1 sets, for example, the signal f1(nB) to "0" (step S10) and proceeds to step S12.
[0042]
On the other hand, when it is determined in step S9 that the intensity signal PR(nB) is not smaller than the second threshold signal PL(nB) (step S9; No), the signal processing unit 1 sets, for example, the signal f1(nB) to "f1(nB−1)" (step S11).
That is, the signal f1(nB) is a binary signal that becomes "1" when the intensity signal PR(nB) is greater than the first threshold signal PH(nB), becomes "0" when the intensity signal PR(nB) is less than the second threshold signal PL(nB), and otherwise keeps its previous value (f1(nB−1)).
The rising edge of the signal f1(nB) (that is, the point at which f1(nB) = 0 switches to f1(nB) = 1) is therefore synchronized with a consonant part or the start of a vowel part.
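Steps S6 to S11 can be sketched as the hysteresis comparator below. The text only states "1 dB above" and "5 dB below" PA(nB); treating PR and PA as power quantities (so 1 dB is a factor of 10^0.1) is an assumption of this sketch.

def update_f1(PR, PA, f1_prev):
    PH = PA * 10 ** (1.0 / 10.0)    # first threshold signal: PA(nB) raised by 1 dB
    PL = PA * 10 ** (-5.0 / 10.0)   # second threshold signal: PA(nB) lowered by 5 dB
    if PR > PH:
        return 1        # consonant part or start of a vowel part (step S8)
    if PR < PL:
        return 0        # silent part or continuing vowel part (step S10)
    return f1_prev      # otherwise keep the previous value (step S11)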
[0043]
Next, the signal processing unit 1 determines whether, for example, f1(nB−1) = 0, f1(nB) = 1, f2(nB−1) = 0 and count > TM/TS (step S12).
The condition on TM is provided so that the signal does not switch to f2(nB) = 1 unless f2(nB) = 0 has continued for at least a predetermined time (TM).
This prevents switching from f2(nB) = 0 to f2(nB) = 1 so quickly that a person cannot perceive it. In the following, TM = 2205 (corresponding to 50 milliseconds) is assumed.
[0044]
If it is determined in step S12 that f1(nB−1) = 0, f1(nB) = 1, f2(nB−1) = 0 and count > TM/TS (step S12; Yes), that is, if the signal f1(nB) has changed from "0" to "1" since the previous block, the previous value (f2(nB−1)) is "0", and count > TM/TS, the signal processing unit 1 sets the signal f2(nB) to "1" and count to "0" (step S13), and proceeds to step S17.
[0045]
On the other hand, if it is determined in step S12 that the condition f1(nB−1) = 0, f1(nB) = 1, f2(nB−1) = 0 and count > TM/TS is not satisfied (step S12; No), that is, if at least one of these conditions does not hold, the signal processing unit 1 determines whether, for example, f2(nB−1) = 1 and count = TP/TS (step S14). The condition on TP is provided so that the signal switches to f2(nB) = 0 once f2(nB) = 1 has continued for a predetermined time (TP). This allows the stimulus to last for a duration that is convenient for a person to sense. In the following, TP = 2205 (corresponding to 50 milliseconds) is assumed. Although it might seem better to make the stimulation time proportional to the length of the vowel and consonant parts, the human fingertip has limited ability to detect short stimuli, so a very short stimulation time cannot be felt. It is therefore better to set the stimulation time to a duration convenient for human sensing rather than making it proportional to the length of the vowel and consonant parts.
[0046]
If it is determined in step S14 that f2(nB−1) = 1 and count = TP/TS (step S14; Yes), the signal processing unit 1 sets, for example, f2(nB) to "0" and count to "0" (step S15), and proceeds to step S17.
[0047]
On the other hand, when it is determined in step S14 that the condition f2(nB−1) = 1 and count = TP/TS is not satisfied (step S14; No), the signal processing unit 1 sets, for example, f2(nB) to "f2(nB−1)" and count to "count + 1" (step S16).
[0048]
Next, the signal processing unit 1 outputs, for example, the signal f2(nB) to the drive unit 2 and outputs one block of the acoustic signal x(t) (TS·nB − TD ≤ t < TS·nB − TD + TN) to the sound output control unit 4 (step S17). Then nB is set to "nB + 1" (step S18), and the processing from step S2 onwards is repeated.
[0049]
Here, the acoustic signal x(t) (TS·nB − TD ≤ t < TS·nB − TD + TN) output to the sound output control unit 4 is delayed by TD relative to the acoustic signal x(t) (TS·nB ≤ t < TS·nB + TN) stored in the analysis buffer y(t) (0 ≤ t < TN).
This corrects for the difference in human reaction time between "hearing" and "touch".
Specifically, since the sense of touch and vibration is less sensitive than hearing, a tactile or vibratory stimulus presented simultaneously with a sound is perceived later.
It is therefore often better to output the auditory stimulus (that is, the acoustic signal x(t)) a predetermined time (TD) after the tactile and vibration stimulus. Furthermore, since the acoustic signal x(t) is processed in blocks of length TN, a time correction depending on the block length TN is often also required. Including these two corrections, the acoustic signal x(t) (TS·nB − TD ≤ t < TS·nB − TD + TN) output to the sound output control unit 4 is delayed by TD relative to the acoustic signal x(t) (TS·nB ≤ t < TS·nB + TN) stored in the analysis buffer y(t) (0 ≤ t < TN).
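The delayed output of step S17 can be sketched as follows; TD is left as a parameter because this passage gives no numerical value for it.

def delayed_output_block(x, nB, TD, TN=1470, TS=245):
    # Block sent to the sound output control unit: x(t), TS*nB - TD <= t < TS*nB - TD + TN.
    start = TS * nB - TD
    assert start >= 0, "valid once enough signal has been buffered"
    return x[start : start + TN]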
[0050]
<Example of waveforms> FIG. 4 shows examples of the waveforms of the signals obtained when the signal processing unit 1 applies the acoustic signal processing (FIG. 3) to an acoustic signal x(t) of the utterance "Dr. gray hair has a cane". FIG. 4(a) shows the waveform of the input acoustic signal x(t); FIG. 4(b) shows the waveforms of the intensity signal PR(nB), the first threshold signal PH(nB) and the second threshold signal PL(nB); FIG. 4(c) shows the waveform of the signal f1(nB); FIG. 4(d) shows the waveform of the signal f2(nB); FIG. 4(e) shows the waveform of the first amplitude modulation signal f3(nB) (described later); FIG. 4(f) shows the waveform of f1(nB) − f2(nB) (described later); FIG. 4(g) shows the waveform of the second amplitude modulation signal f4(nB) (described later); and FIG. 4(h) shows the waveform of the signal f5(nB) (described later).
[0051]
According to FIG. 4(c), the signal f1(nB) is set to "1" when the intensity signal PR(nB) exceeds the first threshold signal PH(nB), and is reset to "0" when the intensity signal PR(nB) falls below the second threshold signal PL(nB). Furthermore, because a fixed difference is kept between the first threshold signal PH(nB) and the second threshold signal PL(nB) (for example, 6 dB when PH(nB) is "PA(nB) raised by 1 dB" and PL(nB) is "PA(nB) lowered by 5 dB"), the change of the signal f1(nB) has hysteresis, and the signal f1(nB) can be prevented from changing unstably.
[0052]
Further, according to FIG. 4(d), when the signal f1(nB) is set to "1", the signal f2(nB) becomes "1" for the predetermined time (TP) and then returns to "0". After becoming "0", the signal f2(nB) does not become "1" again during the predetermined time (TM). This avoids fluctuations so rapid that a person cannot feel them at the fingertip.
[0053]
And according to FIG. 4, it can be seen that the signal f2(nB) reacts at the consonants and strong vowel parts of the utterance ("ha", "ku", "tsu", "owa", "ga", "te", and so on), so that the intended signal is obtained.
[0054]
[Drive Unit] The drive unit 2 drives the haptic and vibration sense presentation device 3a, for
example, in accordance with the signal f2 (nB) input from the signal processing unit 1.
[0055]
For example, the drive unit 2 shown in FIG. 2 has a first drive unit to a fourth drive unit.
The first drive unit is configured to drive the corresponding haptic and vibration sense
presentation device 3a in accordance with the signal f2 (nB) input from the first signal
processing unit.
The second drive unit is configured to drive the corresponding haptic and vibration sense
presentation device 3a in accordance with the signal f2 (nB) input from the second signal
processing unit. The third drive unit is configured to drive the corresponding haptic and
vibration sense presentation device 3a in accordance with the signal f2 (nB) input from the third
signal processing unit. The fourth drive unit is configured to drive the corresponding haptic and
vibration sense presentation device 3a in accordance with the signal f2 (nB) input from the
fourth signal processing unit.
[0056]
Specifically, for example, when a signal f2(nB) with f2(nB) = 0 is input, the drive unit 2 controls the tactile/vibration sense presentation device 3a so that it does not present a tactile or vibration stimulus; when a signal f2(nB) with f2(nB) = 1 is input, it controls the tactile/vibration sense presentation device 3a so that it presents a tactile and vibration stimulus.
[0057]
Here, the drive unit 2 may be configured, for example, to cause the tactile/vibration sense presentation device 3a to present either a tactile and vibration stimulus based on the baseband signal (f2(nB)) or a tactile and vibration stimulus based on the first amplitude modulation signal f3(nB).
The tactile and vibration stimulus based on the baseband signal is, for example, a stimulus based on the signal f2(nB) input from the signal processing unit 1.
The tactile and vibration stimulus based on the first amplitude modulation signal f3(nB) is, for example, a stimulus based on a signal obtained by amplitude-modulating the signal f2(nB) input from the signal processing unit 1 onto a preset frequency. When the tactile and vibration stimulus is presented as vibration, the preset frequency is, for example, about 50 to 150 Hz, a range that humans can easily detect as vibration.
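A sketch of such a first amplitude modulation signal: the block-rate gate f2(nB) is expanded to the sample rate and multiplied by a carrier in the 50 to 150 Hz range. The 100 Hz carrier and the function names are illustrative assumptions.

import numpy as np

def am_drive(f2_blocks, fs=44100, ts=245, carrier_hz=100.0):
    gate = np.repeat(np.asarray(f2_blocks, dtype=float), ts)   # block-rate -> sample-rate
    t = np.arange(len(gate)) / fs
    return gate * np.sin(2.0 * np.pi * carrier_hz * t)         # f3-like drive waveform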
[0058]
Experiments showed that for vibrations above 20 Hz the human fingertip can hardly discriminate frequency; the sense of touch and vibration was found to be considerably less sensitive to frequency differences than hearing. However, the fingertip could clearly distinguish a tactile and vibration stimulus based on the baseband signal (f2(nB)) from a tactile and vibration stimulus based on the first amplitude modulation signal f3(nB). Using this, the consonant parts and vowel onsets detected by the acoustic signal processing (FIG. 3) can be presented as a tactile and vibration stimulus based on the baseband signal (f2(nB)), and a tactile and vibration stimulus based on the second amplitude modulation signal f4(nB), carrying a vibration of 50 to 150 Hz, can be superimposed on it; the result is a tactile and vibration stimulus based on the signal f5(nB). Specifically, for example, f1(nB) − f2(nB) shown in FIG. 4(f) is amplitude-modulated onto a frequency of 50 to 150 Hz to generate the second amplitude modulation signal f4(nB) shown in FIG. 4(g), and the signal f2(nB) is added to the second amplitude modulation signal f4(nB) to generate the signal f5(nB) shown in FIG. 4(h). By presenting a tactile and vibration stimulus based on the signal f5(nB), a stimulus can be presented for the whole duration of the signal f1(nB), giving more information to the human fingertip. The method of creating FIG. 4(f) as f1(nB) − f2(nB) shown here is merely an example, and the scope of the present invention is not limited to it.
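A sketch of the combined signal f5 described above: the baseband gate f2 plus the second amplitude modulation signal f4 obtained by modulating (f1 − f2) onto a 50 to 150 Hz carrier. The carrier frequency is again an illustrative assumption.

import numpy as np

def combined_drive(f1_blocks, f2_blocks, fs=44100, ts=245, carrier_hz=100.0):
    f1 = np.repeat(np.asarray(f1_blocks, dtype=float), ts)
    f2 = np.repeat(np.asarray(f2_blocks, dtype=float), ts)
    t = np.arange(len(f1)) / fs
    f4 = (f1 - f2) * np.sin(2.0 * np.pi * carrier_hz * t)   # second amplitude modulation signal
    return f2 + f4                                           # signal f5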
[0059]
The tactile sense and vibration sense presentation control means for controlling the tactile sense
and vibration sense stimulation presented by the tactile sense and vibration sense presentation
device 3a includes, for example, a signal processing unit 1 and a drive unit 2.
[0060]
[Coupling Device] Each coupling device 3 integrally couples, for example, a tactile/vibration sense presentation device 3a, which presents tactile and vibration stimuli based on one of the acoustic signals input from the tuner 200 having a plurality of acoustic sources, with an operation device 3b, which the user operates when instructing that predetermined processing be performed on that acoustic signal; the coupling devices 3 are provided corresponding to each of the plurality of acoustic sources.
[0061]
For example, the four coupling devices 3 shown in FIG. 2 correspond to the acoustic sources of the first to fourth TV programs.
The coupling device 3 corresponding to the acoustic source of the first TV program comprises the tactile/vibration sense presentation device 3a driven by the first drive unit and the operation device 3b with which the user instructs predetermined processing of the acoustic signal of the first TV program.
Similarly, the coupling devices 3 corresponding to the acoustic sources of the second, third and fourth TV programs each comprise the tactile/vibration sense presentation device 3a driven by the second, third or fourth drive unit, respectively, and the operation device 3b with which the user instructs predetermined processing of the acoustic signal of the corresponding TV program.
[0062]
Specifically, the tactile/vibration sense presentation device 3a presents, for example, a stimulus that can be sensed by the tactile and vibration senses of the human body (a finger), such as a vibratory stimulus or an electrical stimulus. The operation device 3b is, for example, a push-button switch; when it is operated (pressed) by the user, it outputs an operation signal to the sound output control unit 4 via the control unit 5.
[0063]
For example, as shown in FIGS. 2 and 5, the coupling device 3 is formed by surrounding the tactile/vibration sense presentation device 3a, which is the stimulus-presenting part, with the operation device 3b, which is the non-presenting part. When the operation device 3b is operated, the tactile/vibration sense presentation device 3a moves together with it. Therefore, even when the operation device 3b is operated, the finger pressure on the tactile/vibration sense presentation device 3a hardly changes, and the fingertip can sense the presented tactile and vibration stimulus without being affected by the operation of the operation device 3b. For example, in FIG. 5, the four coupling devices 3 are operated (pressed) with the thumb, index finger, middle finger and ring finger, and the middle finger is pressing its coupling device 3 (operation device 3b).
[0064]
Here, when, among the plurality of programs (the first to fourth TV programs) being viewed simultaneously, there is one program that the user wants to view on its own, the user operates, for example, the corresponding operation device 3b to select that program.
[0065]
[Sound Output Control Unit] The sound output control unit 4, as output control means, performs predetermined processing on the acoustic signals x(t) input from the signal processing unit 1, for example in accordance with a control signal input from the control unit 5, and applies the result to the amplifier 300, thereby causing the output device 400 to output sound based on the acoustic signals. Here, the predetermined processing is, for example, to output to the output device 400, via the amplifier 300, only the acoustic signal of the program selected by the user's operation of the operation device 3b from among the acoustic signals of the plurality of programs input from the tuner 200 via the signal processing unit 1.
[0066]
Specifically, for example, when no program has been selected by the user, the sound output control unit 4 outputs to the output device 400 the acoustic signals of all the programs input from the tuner 200. When the user selects a program, that is, when an operation signal is input from an operation device 3b via the control unit 5, the sound output control unit 4 outputs to the output device 400 only the acoustic signal of the selected program from among the acoustic signals of the plurality of programs input from the tuner 200.
[0067]
[Amplifier] The amplifier 300 amplifies an acoustic signal input from, for example, the sound
output control unit 4 and outputs the amplified signal to the output device 400.
[0068]
[Output Device] The output device 400 is, for example, a speaker device or the like, and outputs,
for example, a sound based on an acoustic signal input from the amplifier 300.
[0069]
[Control Unit] The control unit 5 includes, for example, as illustrated in FIG. 2, a central
processing unit (CPU) 51, a random access memory (RAM) 52, a storage unit 53, and the like.
[0070]
The CPU 51 performs various control operations in accordance with various processing
programs for the audio signal control apparatus 100 stored in the storage unit 53, for example.
[0071]
The RAM 52 includes, for example, a program storage area for expanding a processing program
or the like to be executed by the CPU 51, and a data storage area for storing input data and
processing results generated when the processing program is executed.
[0072]
The storage unit 53 stores, for example, a system program executable by the acoustic signal control device 100, various processing programs executable under that system program, data used when executing those programs, and data such as processing results. The programs are stored in the storage unit 53 in the form of computer-readable program code.
[0073]
For example, as illustrated in FIG. 1, the storage unit 53 stores a signal processing program 53a, a sound output control program 53b, and the like.
[0074]
The signal processing program 53a, for example, causes the CPU 51 to realize the function of inputting a control signal to the signal processing unit 1 so that the signal processing unit 1 performs the predetermined acoustic signal processing (for example, FIG. 3) on the acoustic signals input from the tuner 200, outputs the acoustic signals to the sound output control unit 4, and outputs the signal f2(nB) based on each acoustic signal to the drive unit 2.
[0075]
The sound output control program 53b, for example, causes the CPU 51 to realize the function of inputting a control signal to the sound output control unit 4 so that the sound output control unit 4 performs the predetermined processing on the acoustic signals input from the signal processing unit 1 and outputs the processed signal to the amplifier 300.
[0076]
According to the acoustic signal control apparatus 100 of the present invention described above, the apparatus comprises: the coupling devices 3, provided corresponding to each of the plurality of acoustic sources of the tuner 200, each integrally coupling the tactile/vibration sense presentation device 3a, which presents tactile and vibration stimuli based on the corresponding acoustic signal (the acoustic signals of the first to fourth TV programs), with the operation device 3b, which is operated when instructing that predetermined processing be performed on that acoustic signal; the signal processing unit 1 and the drive unit 2, which control the tactile and vibration stimuli presented by the tactile/vibration sense presentation devices 3a; and the sound output control unit 4, which causes the output device 400 to output sound based on the acoustic signals.
The signal processing unit 1 and the drive unit 2 obtain the intensity signal PR(nB), which represents the temporal change of the intensity of the acoustic signal, and the smoothed intensity signal PA(nB) obtained by smoothing PR(nB), and, when the intensity signal PR(nB) becomes larger than the smoothed intensity signal PA(nB), the tactile/vibration sense presentation device 3a starts presenting a tactile and vibration stimulus and continues the presentation for the predetermined time (TP).
That is, a tactile/vibration sense presentation device 3a is provided for each of the acoustic signals input from the tuner 200, and each presentation device 3a presents tactile and vibration stimuli based on its corresponding acoustic signal. The user can therefore intuitively select the acoustic signal of a desired program from among the acoustic signals of the plurality of programs input from the tuner 200.
In addition, since the operation device 3b to be operated can be identified from the tactile and vibration stimulus, the user does not have to look at the operation device 3b when operating it, and does not need to take their eyes off the screen while watching TV programs.
[0077]
Further, according to the acoustic signal control device 100, the intensity signal is the intensity signal of the prediction error signal in linear prediction analysis.
The timing of the tactile and vibration stimuli presented by the tactile/vibration sense presentation device 3a can therefore be roughly synchronized with the consonant parts and vowel onsets, that is, with timing close to that at which a person perceives the sound aurally, so the user can easily associate the sound with the tactile and vibration stimulus.
[0078]
Further, according to the acoustic signal control device 100, the signal processing unit 1 and the drive unit 2 can cause the tactile/vibration sense presentation device 3a to present either a tactile and vibration stimulus based on the baseband signal (the signal f2(nB) input from the signal processing unit 1) or a tactile and vibration stimulus based on the first amplitude modulation signal f3(nB), which is amplitude-modulated onto a preset frequency. Furthermore, the signal processing unit 1 and the drive unit 2 can also cause the tactile/vibration sense presentation device 3a to present both the tactile and vibration stimulus based on the baseband signal (the signal f2(nB)) and the tactile and vibration stimulus based on the second amplitude modulation signal f4(nB), amplitude-modulated onto a preset frequency (that is, a tactile and vibration stimulus based on the signal f5(nB)), so that more information can be presented to the user's fingertips.
[0079]
The present invention is not limited to the above-described embodiment, and various
modifications can be made without departing from the scope of the invention.
[0080]
<< Modification 1 >> In the embodiment, the acoustic signal processing by the signal processing unit 1 (FIG. 3) uses the intensity signal of the prediction error signal in linear prediction analysis, that is, the sum of squares of the prediction errors, as the intensity signal PR(nB) representing the temporal change of the intensity of the acoustic signal. However, the intensity signal PR(nB) may instead be, for example, the sum of squares of the acoustic signal itself.
[0081]
Specifically, for example, instead of steps S3 and S4 of the acoustic signal processing (FIG. 3) by the signal processing unit 1 of the embodiment, the sum of squares (acoustic signal energy) of the TS sample values at the centre of the block, that is, of y(t) ((TN − TS)/2 ≤ t < (TN + TS)/2) of the analysis buffer y(t) (0 ≤ t < TN), may be obtained and used as the intensity signal PR(nB).
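A sketch of this replacement for steps S3 and S4: the sum of squares of the central TS samples of the analysis buffer is used directly as PR(nB).

import numpy as np

def signal_energy(y, TN=1470, TS=245):
    lo, hi = (TN - TS) // 2, (TN + TS) // 2
    return float(np.sum(np.square(y[lo:hi])))   # PR(nB) for Modification 1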
[0082]
Here, FIG. 6 shows an example of the waveforms of the signals obtained when the acoustic signal processing of Modification 1 is applied to the acoustic signal x(t) of the utterance "the white-haired lady wears a cane".
FIG. 6(a) shows the waveform of the input acoustic signal x(t); FIG. 6(b) shows the waveforms of the intensity signal PR(nB), the first threshold signal PH(nB) and the second threshold signal PL(nB); FIG. 6(c) shows the waveform of the signal f1(nB); FIG. 6(d) shows the waveform of the signal f2(nB); and FIG. 6(e) shows the waveform of the first amplitude modulation signal f3(nB).
[0083]
In the waveform example of the embodiment (FIG. 4), obtained with the acoustic signal processing of FIG. 3, the signal f2(nB) reacts to the consonant parts, whereas in the waveform example of Modification 1 (FIG. 6) the signal f2(nB) reacts to the parts where the acoustic signal energy is relatively strong, such as the "san" part.
That is, Modification 1 is inferior to the embodiment with respect to the timing at which a person perceives the sound aurally.
However, since in Modification 1 the signal f2(nB) shows largely the same response as in the embodiment, it is suggested that Modification 1 is sufficiently practical.
[0084]
According to Modification 1 described above, even with a simple configuration that does not use the linear prediction analysis method, tactile and vibration stimuli can be presented at timing close to that at which a person perceives the sound aurally, so the user can easily associate the sound with the tactile and vibration stimulus.
[0085]
<< Modification 2 >> The way of defining the intensity signal PR(nB) representing the temporal change of the intensity of the acoustic signal without using the intensity signal of the prediction error signal in linear prediction analysis is not limited to Modification 1; the intensity signal PR(nB) may also be, for example, the intensity signal of an aperiodic component.
[0086]
Specifically, for example, instead of steps S3 and S4 of the acoustic signal processing (FIG. 3) by the signal processing unit 1 of the embodiment, the energy of the difference between the signal waveforms of two different time intervals of about 10 milliseconds in length may be obtained and used as the intensity signal PR(nB).
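A sketch of this replacement for steps S3 and S4: the energy of the difference between two signal segments of roughly 10 ms (about 441 samples at 44.1 kHz). Where inside the block the two segments are taken is not specified here, so the offsets below are assumptions.

import numpy as np

def aperiodic_energy(y, seg_len=441, offset=441):
    a = y[0:seg_len]
    b = y[offset : offset + seg_len]
    return float(np.sum(np.square(a - b)))   # PR(nB) for Modification 2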
[0087]
Even when the acoustic signal processing of Modification 2 is applied, consonant parts and vowel onsets can be detected, so it is suggested that it can be used in practice as a substitute for the linear prediction analysis method.
[0088]
According to Modification 2 described above, even with a simple configuration that does not use the linear prediction analysis method, tactile and vibration stimuli can be presented at timing close to that at which a person perceives the sound aurally, so the user can easily associate the sound with the tactile and vibration stimulus.
[0089]
<< Modification 3 >> The acoustic signal control device 100 is not limited to a device incorporated in a television receiver or the like; it may be incorporated in a hearing aid or the like, as in the acoustic signal control device 100J shown in FIGS. 7 and 8, for example.
[0090]
FIG. 7 is a block diagram showing the functional configuration of the acoustic signal control
device 100J, and FIG. 8 shows the flow of signals in the acoustic signal control device 100J.
The acoustic signal control device 100J differs from the acoustic signal control device 100 of the embodiment (FIGS. 1 and 2) only in the configuration of the acoustic signal input device (tuner 200), for example.
Therefore, only the differing parts are described; the common parts are given the same reference numerals.
[0091]
Specifically, for example, as shown in FIG. 7, the acoustic signal control device 100J comprises the signal processing unit 1 connected to the hearing aid system 200J with signal separation function, the drive unit 2, the coupling devices 3 in which the tactile/vibration sense presentation device 3a and the operation device 3b are integrally coupled, the sound output control unit 4 connected to the amplifier 300, the control unit 5, and the like.
[0092]
[Hearing aid system with signal separation function] The hearing aid system with signal separation function 200J is, for example, a hearing aid system capable of separating the acoustic signals of the voices of a plurality of speakers by speaker.
Specifically, for example, as shown in FIGS. 7 and 8, the hearing aid system 200J with signal separation function has a plurality of microphones 200aJ.
The hearing aid system 200J separates, by speaker, the acoustic signals of the voices of the plurality of speakers picked up by the microphones 200aJ, and inputs the acoustic signals of the speakers' voices independently of one another to the acoustic signal control apparatus 100J.
That is, the hearing aid system 200J with signal separation function is an acoustic signal input device that has a plurality of acoustic sources and inputs acoustic signals from those acoustic sources to the acoustic signal control device 100J.
[0093]
For example, the hearing aid system 200J with signal separation function shown in FIG. 8 inputs the acoustic signals of the voices of four speakers to the acoustic signal control device 100J (signal processing unit 1) as the acoustic signals of a plurality of speakers' voices.
The first to fourth signal processing units of the signal processing unit 1 receive, for example, the acoustic signals of the voices of the four speakers.
[0094]
According to the acoustic signal control apparatus 100J of Modification 3 described above, the apparatus comprises: the coupling devices 3, provided corresponding to each of the plurality of acoustic sources of the hearing aid system 200J with signal separation function, each integrally coupling the tactile/vibration sense presentation device 3a, which presents tactile and vibration stimuli based on the corresponding acoustic signal (the acoustic signals of the voices of the four speakers), with the operation device 3b, which is operated when instructing that predetermined processing be performed on that acoustic signal; the signal processing unit 1 and the drive unit 2, which control the tactile and vibration stimuli presented by the tactile/vibration sense presentation devices 3a; and the sound output control unit 4, which causes the output device 400 to output sound based on the acoustic signals.
The signal processing unit 1 and the drive unit 2 obtain the intensity signal PR(nB), which represents the temporal change of the intensity of the acoustic signal, and the smoothed intensity signal PA(nB) obtained by smoothing PR(nB), and, when the intensity signal PR(nB) becomes larger than the smoothed intensity signal PA(nB), the tactile/vibration sense presentation device 3a starts presenting a tactile and vibration stimulus and continues the presentation for a predetermined time.
That is, a tactile/vibration sense presentation device 3a is provided for each of the acoustic signals input from the hearing aid system 200J with signal separation function, and each presentation device 3a presents tactile and vibration stimuli based on its corresponding acoustic signal, so the user can intuitively select the acoustic signal of a desired voice from among the acoustic signals of the plurality of speakers' voices input from the hearing aid system 200J with signal separation function.
Further, since the operation device 3b to be operated can be identified from the tactile and vibration stimulus, the user does not need to look at the operation device 3b when operating it, and does not need to take their eyes off the conversation partner.
[0095]
<< Modification 4 >> The acoustic signal control device 100 may also be incorporated in an acoustic signal editor such as a mixing console, as in the acoustic signal control device 100K shown in FIGS. 9 and 10, for example.
[0096]
FIG. 9 is a block diagram showing the functional configuration of the acoustic signal control
device 100K, and FIG. 10 shows the flow of signals in the acoustic signal control device 100K.
Note that, compared with the acoustic signal control device 100 (FIGS. 1 and 2) of the
embodiment, the acoustic signal control device 100K differs only in the configuration of the
acoustic signal input device (tuner 200), part of the configuration of the coupling device 3,
the sound output control unit 4, and part of the configuration of the control unit 5. Therefore,
only the differing portions will be described, and the
other common portions will be described with the same reference numerals.
[0097]
Specifically, for example, as shown in FIG. 9, the acoustic signal control device 100K comprises
the signal processing unit 1 connected to the music source input device 200K, the drive unit 2,
the coupling device 3K in which the tactile and vibration sense presentation device 3a and the
operation device 3bK are integrally coupled, the mixer 4K connected to the amplifier 300, the
control unit 5K, and the like.
[0098]
[Music Source Input Device] The music source input device 200K is a device that inputs acoustic
signals from a plurality of music sources, such as a drum music source, a vocal music source, a
piano music source, and a bass guitar music source.
Specifically, for example, as shown in FIGS. 9 and 10, the music source input device 200K inputs
each of the acoustic signals from the plurality of music sources separately and independently
into the acoustic signal control device 100K. That is, the music source input device 200K is an
acoustic signal input device that has the music sources as a plurality of acoustic sources and
inputs the acoustic signals from the plurality of music sources into the acoustic signal control
device 100K.
[0099]
For example, the music source input device 200K shown in FIG. 10 inputs acoustic signals from
four types of music sources, the first to fourth music sources, into the acoustic signal control
device 100K (signal processing unit 1) as the acoustic signals from a plurality of music
sources. Then, for example, each of the acoustic signals from the four types of music sources is
input to one of the first to fourth signal processing units of the signal processing unit 1.
[0100]
[Coupling Device] The coupling device 3K, for example, integrally couples the tactile and
vibration sense presentation device 3a with the operation device 3bK operated when the user
instructs that predetermined processing be performed on the acoustic signal, and the coupling
devices 3K are provided
corresponding to each of a plurality of music sources.
[0101]
For example, the four coupling devices 3K shown in FIG. 10 correspond to the first to fourth
music sources, respectively.
The coupling device 3K corresponding to the first music source comprises the tactile and
vibration sense presentation device 3a driven by the first drive unit and an operation device
3bK for instructing that predetermined processing be performed on the acoustic signal from the
first music source. The coupling device 3K corresponding to the second music source comprises
the tactile and vibration sense presentation device 3a driven by the second drive unit and an
operation device 3bK for instructing that predetermined processing be performed on the acoustic
signal from the second music source. The coupling device 3K corresponding to the third music
source comprises the tactile and vibration sense presentation device 3a driven by the third
drive unit and an operation device 3bK for instructing that predetermined processing be
performed on the acoustic signal from the third music source. The coupling device 3K
corresponding to the fourth music source comprises the tactile and vibration sense presentation
device 3a driven by the fourth drive unit and an operation device 3bK for instructing that
predetermined processing be performed on the acoustic signal from the fourth music source.
[0102]
Specifically, the operation device 3bK is, for example, a slide-type variable resistor; when it
is operated (slid) by the user, the corresponding operation signal is output to the sound output
control unit 4 via the control unit 5K.
[0103]
For example, as shown in FIG. 10, the coupling device 3K is formed by surrounding the tactile
and vibration sense presentation device 3a, which is the stimulus presentation part, with the
operation device 3bK, which is the stimulus non-presentation part; when the operation device 3bK
is operated, the tactile and vibration sense presentation device 3a moves together with the
operation device 3bK.
Therefore, even if the operation device 3bK is operated, the finger pressure on the tactile and
vibration sense presentation device 3a hardly changes, so the fingertip can sense the presented
tactile and vibration sense stimulus without being affected by the operation of the operation
device 3bK.
[0104]
Here, the operation device 3bK is operated, for example, when, among a plurality of sounds the
user is listening to at the same time (sounds based on the acoustic signals from the first to
fourth music sources), there is a sound whose volume the user wants to adjust, in order to
select that sound and adjust its volume.
[0105]
[Mixer] The mixer 4K, as an output control unit, performs predetermined processing on the
acoustic signals input from the signal processing unit 1 in accordance with, for example, a
control signal input from the control unit 5K, and outputs the processed signal to the amplifier
300, so that sound based on the acoustic signals is output from the output device 400.
Here, the predetermined processing is, for example, processing that adjusts the volume of each
of the acoustic signals from the plurality of music sources input from the music source input
device 200K via the signal processing unit 1 in accordance with the user's operation of the
operation devices 3bK, takes a weighted average of the volume-adjusted acoustic signals, and
outputs the result to the output device 400 via the amplifier 300.
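As a rough illustration of the weighted-average processing described above, the Python sketch
below mixes the volume-adjusted source signals. The representation of the slider positions as
values between 0 and 1, and the function and variable names, are assumptions of this sketch
rather than details taken from the description.

import numpy as np

def mix_sources(signals, slider_positions):
    # signals          : list of equal-length 1-D arrays, one per music source
    # slider_positions : values in [0, 1] read from the slide-type operation
    #                    devices 3bK (assumed representation)
    # Returns the mixed signal sent on to the amplifier and output device.
    gains = np.asarray(slider_positions, dtype=float)
    x = np.stack(signals)                    # shape: (n_sources, n_samples)
    weighted = gains[:, None] * x            # per-source volume adjustment
    total = gains.sum()
    if total == 0.0:
        return np.zeros(x.shape[1])          # all sliders down -> silence
    return weighted.sum(axis=0) / total      # weighted average of the sources

For instance, mix_sources([drums, vocal, piano, bass], [0.8, 1.0, 0.5, 0.6]) would emphasize the
vocal source while attenuating the piano source (the four array names are hypothetical).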
[0106]
[Control Unit] The control unit 5K, for example, as shown in FIG. 9, includes a CPU 51, a RAM 52,
a storage unit 53K, and the like.
[0107]
For example, as illustrated in FIG. 9, the storage unit 53K stores a signal processing program
53a, a sound output control program 53bK, and the like.
[0108]
For example, the sound output control program 53bK causes the CPU 51 to realize a function of
performing predetermined processing on the acoustic signals input from the signal processing
unit 1 by inputting a control signal to the mixer 4K.
[0109]
According to the acoustic signal control device 100K of the fourth modification described above,
the device comprises: coupling devices 3K, each integrally coupling a tactile and vibration
sense presentation device 3a, which presents tactile and vibration sense stimuli based on one of
the acoustic signals from the respective music sources (the first to fourth music sources) input
from the music source input device 200K having a plurality of music sources, with an operation
device 3bK operated when instructing that predetermined processing be performed on that acoustic
signal, the coupling devices 3K being provided corresponding to each of the plurality of
acoustic sources; a signal processing unit 1 and a drive unit 2 that control the tactile and
vibration sense stimuli presented by the tactile and vibration sense presentation devices 3a;
and a sound output control unit 4 that causes the output device 400 to output sound based on the
acoustic signals.
Then, the signal processing unit 1 and the drive unit 2 determine an intensity signal PR(nB)
representing the temporal change of the intensity of each acoustic signal and a smoothed
intensity signal PA(nB) obtained by smoothing the intensity signal PR(nB); when the intensity
signal PR(nB) becomes larger than the smoothed intensity signal PA(nB), the tactile and
vibration sense presentation device 3a starts presenting the tactile and vibration sense
stimulus, and the presentation is continued for a predetermined time.
That is, a tactile and vibration sense presentation device 3a is provided for each of the
acoustic signals input from the music source input device 200K, and each presentation device 3a
can present tactile and vibration sense stimuli based on its corresponding acoustic signal, so
the user can intuitively select the acoustic signal from a desired music source from among the
acoustic signals input from the plurality of music sources of the music source input device
200K.
In addition, since the operation device 3bK to be operated can be identified from the tactile
and vibration sense stimulus, the user does not have to look at the operation device 3bK at the
time of operation.
[0110]
<< Modification 5 >> As described above, humans do not have very good discrimination ability
with regard to tactile and vibration sense stimuli. With the devices of the embodiment and the
first to fourth modifications, it is possible to present tactile and vibration sense stimuli
that are synchronized with the acoustic signal and can be distinguished by humans. However,
humans are still confused when multiple sounds occur simultaneously and multiple stimuli are
presented simultaneously. Therefore, confusion may be suppressed by making it difficult for a
plurality of tactile and vibration sense stimuli to be presented at the same time, as in the
acoustic signal control device 100L shown in FIGS. 11 and 12, for example.
[0111]
FIG. 11 is a block diagram showing the functional configuration of the acoustic signal control
device 100L, and FIG. 12 shows the flow of signals in the acoustic signal control device 100L.
Note that, compared with the acoustic signal control device 100 (FIGS. 1 and 2) of the
embodiment, the acoustic signal control device 100L differs only in part of the configuration of
the signal processing unit 1 and part of the configuration of the control unit 5. Therefore,
only the differing portions will be described, and the other common portions will be described
with the same reference numerals.
[0112]
Specifically, for example, as shown in FIG. 11, the acoustic signal control device 100L
comprises the signal processing unit 1L connected to the tuner 200, the drive unit 2, the
coupling device 3 in which the tactile and vibration sense presentation device 3a and the
operation device 3b are integrally coupled, the sound output control unit 4 connected to the
amplifier 300, the control unit 5L, and the like.
[0113]
[Signal Processing Unit] The signal processing unit 1L, for example, performs predetermined
acoustic signal processing on the acoustic signals input from the tuner 200 in accordance with a
control signal input from the control unit 5L, outputs the acoustic signals to the sound output
control unit 4, and causes the tactile and vibration sense presentation devices 3a to present
tactile and vibration sense stimuli based on the acoustic signals via the drive unit 2.
[0114]
For example, the signal processing unit 1L illustrated in FIG. 12 includes a first signal processing
unit to a fourth signal processing unit, and a maximum smoothed intensity signal determination
unit 11L.
Then, for example, in the signal processing unit 1L shown in FIG. 12, each of the first to
fourth signal processing units performs acoustic signal processing, outputs the acoustic signal
x(t) to the sound output control unit 4, and at the same time outputs the signal f2(nB) for
driving the tactile and vibration sense presentation device 3a to the drive unit 2.
[0115]
Specifically, for example, the signal processing unit 1L determines, for each acoustic signal,
an intensity signal PR(nB) representing the temporal change of the intensity of the acoustic
signal and a smoothed intensity signal PA(nB) obtained by smoothing the intensity signal PR(nB);
when the intensity signal PR(nB) becomes larger than the maximum smoothed intensity signal
PA(nB), that is, the largest of the smoothed intensity signals PA(nB) of the respective acoustic
signals, the tactile and vibration sense presentation device 3a is caused, via the drive unit 2,
to start presenting the tactile and vibration sense stimulus, and the presentation is continued
for a predetermined time (TP).
[0116]
More specifically, in the acoustic signal processing by the signal processing unit 1L, for
example, the processing of steps S1 to S5 of the acoustic signal processing (FIG. 3) by the
signal processing unit 1 of the embodiment is executed. Then, the maximum smoothed intensity
signal determination unit 11L determines the maximum smoothed intensity signal PA(nB), that is,
the largest of the smoothed intensity signals PA(nB) of the respective acoustic signals
generated in step S5. Instead of step S6, processing is performed that sets the first threshold
signal PH(nB) to a signal obtained by increasing the maximum smoothed intensity signal PA(nB) by
1 dB and sets the second threshold signal PL(nB) to a signal obtained by reducing the maximum
smoothed intensity signal PA(nB) by 5 dB. Thereafter, the processing of steps S7 to S18 is
performed.
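A minimal Python sketch of this threshold setting is given below. It assumes that the smoothed
intensity signals of all channels are available as a two-dimensional array and that the +1 dB
and -5 dB offsets are applied on a power scale; both assumptions are illustrative and not stated
explicitly in the description.

import numpy as np

def thresholds_from_max(pa_channels):
    # pa_channels : array of smoothed intensity signals PA(nB),
    #               shape (n_channels, n_blocks)
    # Returns (PH, PL): per-block threshold signals shared by all channels,
    # derived from the maximum smoothed intensity signal.
    pa_max = pa_channels.max(axis=0)      # maximum smoothed intensity signal
    ph = pa_max * 10 ** (1 / 10)          # +1 dB in power terms (assumed scale)
    pl = pa_max * 10 ** (-5 / 10)         # -5 dB in power terms (assumed scale)
    return ph, pl

The resulting PH(nB) and PL(nB) then take the place of the per-channel thresholds of step S6
before the processing of steps S7 to S18 is carried out.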
[0117]
[Control Unit] The control unit 5L includes, for example, a CPU 51, a RAM 52, a storage unit
53L, and the like, as shown in FIG. 11.
[0118]
For example, as illustrated in FIG. 11, the storage unit 53L stores a signal processing program
53aL, a sound output control program 53b, and the like.
[0119]
For example, the signal processing program 53aL causes the CPU 51 to realize a function of
inputting a control signal to the signal processing unit 1L so as to perform predetermined
acoustic signal processing on the acoustic signals input from the tuner 200, output the acoustic
signals to the sound output control unit 4, and output the signal f2(nB) based on each acoustic
signal to the drive unit 2.
[0120]
According to the acoustic signal control device 100L of the fifth modification described above,
the signal processing unit 1L and the drive unit 2 determine the intensity signal PR(nB)
representing the temporal change of the intensity of each acoustic signal and the smoothed
intensity signal PA(nB) obtained by smoothing the intensity signal PR(nB), and when the
intensity signal PR(nB) becomes larger than the maximum smoothed intensity signal PA(nB), that
is, the largest of the smoothed intensity signals PA(nB) of the respective acoustic signals,
presentation of the tactile and vibration sense stimulus by the tactile and vibration sense
presentation device 3a is started and continued for a predetermined time (TP).
Therefore, f2(nB) = 1 hardly occurs except for the acoustic signal with the strongest auditory
stimulus, so a plurality of tactile and vibration sense stimuli are rarely presented at the same
time, and confusion of the user can be suppressed.
On the other hand, if there is an acoustic signal whose auditory stimulus is almost as strong as
the strongest one, a tactile and vibration sense stimulus based on that acoustic signal is also
presented, so there is little risk of the user's selection range being restricted too much.
[0121]
In the fifth modification, the intensity signal PR(nB) representing the temporal change of the
intensity of the acoustic signal may be the sum of squares of the acoustic signal itself as in
the first modification, or may be the intensity signal of the aperiodic component as in the
second modification.
Further, in the fifth modification, the acoustic signal control device 100L may be connected to
the hearing aid system 200J with signal separation function and provided in a hearing aid or the
like, as with the acoustic signal control device 100J of the third modification, or may be
connected to the music source input device 200K and provided in an acoustic signal editor or the
like, as with the acoustic signal control device 100K of the fourth modification.
[0122]
Moreover, as long as confusion can be suppressed by making it difficult for a plurality of
tactile and vibration sense stimuli to be presented at the same time, the signal processing unit
1L and the drive unit 2 may, for a certain period of time (for example, TP + TM), present the
tactile and vibration sense stimulus only on the tactile and vibration sense presentation device
3a corresponding to the acoustic signal whose auditory stimulus is the strongest.
Specifically, for example, when the tactile and vibration sense presentation device 3a
corresponding to the acoustic signal with the strongest auditory stimulus is activated (when
f2(nB) = 1), control may be performed so that the other tactile and vibration sense presentation
devices 3a are not operated for a predetermined time (for example, TP + TM).
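A possible Python sketch of this alternative control is shown below. It assumes that the raw
drive signals f2(nB) and the intensity signals PR(nB) of all channels are available block by
block and that the strongest requesting channel holds the lock for the whole lock-out period;
these details are assumptions of the sketch, not requirements of the description.

import numpy as np

def exclusive_presentation(f2_channels, pr_channels, lock_blocks):
    # f2_channels : int array (n_channels, n_blocks), raw drive signals f2(nB)
    # pr_channels : float array (n_channels, n_blocks), intensity signals PR(nB)
    # lock_blocks : lock-out duration (e.g. TP + TM) expressed in blocks
    # Once the device for the strongest channel is activated, all other
    # devices are kept off for lock_blocks blocks.
    n_blocks = f2_channels.shape[1]
    out = np.zeros_like(f2_channels)
    locked_to, lock_left = -1, 0
    for n in range(n_blocks):
        if lock_left == 0 and f2_channels[:, n].any():
            active = np.flatnonzero(f2_channels[:, n])
            # among the channels asking to present, keep the strongest one
            locked_to = int(active[np.argmax(pr_channels[active, n])])
            lock_left = lock_blocks
        if lock_left > 0:
            out[locked_to, n] = f2_channels[locked_to, n]
            lock_left -= 1
    return out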
[0123]
<< Modification 6 >> The method of suppressing the confusion caused by simultaneous presentation
of a plurality of tactile and vibration sense stimuli is not limited to that of the fifth
modification. For example, as in the acoustic signal control device 100M shown in FIGS. 13 and
14, the time axis of the acoustic signals may be expanded or contracted so that a plurality of
tactile and vibration sense stimuli are not presented simultaneously.
[0124]
FIG. 13 is a block diagram showing the functional configuration of the acoustic signal control
device 100M, and FIG. 14 shows the flow of signals in the acoustic signal control device 100M.
Note that, compared with the acoustic signal control device 100 (FIGS. 1 and 2) of the
embodiment, the acoustic signal control device 100M differs only in the addition of the time
axis expansion/contraction unit 6M and in part of the configuration of the control unit 5.
Therefore, only the differing portions will be described, and the other common portions will be
described with the same reference numerals.
[0125]
Specifically, for example, as shown in FIG. 13, the acoustic signal control device 100M
comprises the time axis expansion/contraction unit 6M connected to the tuner 200, the signal
processing unit 1, the drive unit 2, the coupling device 3 in which the tactile and vibration
sense presentation device 3a and the operation device 3b are integrally coupled, the sound
output control unit 4 connected to the amplifier 300, the control unit 5M, and the like.
[0126]
The time axis expansion/contraction unit 6M, as time axis expansion/contraction means, expands
or contracts the time axis of the acoustic signals using, for example, a general time scale
modification technique, in accordance with a control signal input from the control unit 5M, and
outputs the result to the signal processing unit 1.
Thereby, the signal processing unit 1 and the drive unit 2 cause the tactile and vibration sense
presentation devices 3a to present tactile and vibration sense stimuli based on the acoustic
signals whose time axes have been expanded or contracted by the time axis expansion/contraction
unit 6M, and the sound output control unit 4 causes the output device 400 to output sound based
on those acoustic signals.
[0127]
For example, the time axis expansion/contraction unit 6M shown in FIG. 14 expands or contracts
the time axes of the acoustic signals relating to the first to fourth TV programs input from the
tuner 200 and outputs them to the first to fourth signal processing units of the signal
processing unit 1.
Thereby, the tactile and vibration sense presentation devices 3a corresponding to the acoustic
signals relating to the first to fourth TV programs present tactile and vibration sense stimuli
based on the acoustic signals relating to the first to fourth TV programs whose time axes have
been expanded or contracted, and the output device 400 outputs sound based on those time-axis-
modified acoustic signals.
[0128]
Specifically, the time axis expansion/contraction unit 6M, for example, detects the speech
portions (voiced sections) of each acoustic signal, and if there is a region where the speech
portions of a plurality of acoustic signals overlap, expands or contracts the time axis of the
non-speech portions (silent sections) so as to eliminate the overlapping region.
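The following crude Python sketch illustrates the idea of lengthening a silent section so that
the speech portions of two signals no longer overlap. The energy-based voice-activity detection,
the padding of the silent section with zeros instead of a proper time scale modification, and
all function names and thresholds are assumptions of this sketch.

import numpy as np

def voiced_mask(x, B=256, thresh=1e-4):
    # Block-wise voice-activity mask based on block energy (assumed method).
    n_blocks = len(x) // B
    energy = np.array([np.mean(x[n * B:(n + 1) * B] ** 2) for n in range(n_blocks)])
    return energy > thresh

def delay_until_clear(x, other_mask, B=256, thresh=1e-4):
    # Lengthen the silent section in front of x's first voiced block until it
    # no longer overlaps the other signal's voiced region.
    mask = voiced_mask(x, B, thresh)
    if not mask.any() or not other_mask.any():
        return x
    start = int(np.argmax(mask))                       # first voiced block of x
    other_end = len(other_mask) - int(np.argmax(other_mask[::-1]))  # end of other's speech
    if start >= other_end:
        return x                                       # already no overlap
    pad = (other_end - start) * B                      # silence to add before x's speech
    return np.concatenate([x[:start * B], np.zeros(pad), x[start * B:]])

A real implementation would stretch or shrink the silent sections with a time scale modification
algorithm so that the signal remains continuous, as described above.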
[0129]
Here, a specific example of the expansion and contraction of the time axis of the acoustic
signals will be described with reference to FIG. 15.
FIG. 15 shows, for example, the waveform of an acoustic signal relating to a first TV program
and the waveform of an acoustic signal relating to a second TV program; FIG. 15A shows the
waveforms before the time axes are expanded or contracted, and FIG. 15B shows the waveforms
after the time axes are expanded or contracted.
[0130]
According to FIG. 15A, the speech of the first TV program and the speech of the second TV
program overlap in the hatched area. On the other hand, according to FIG. 15B, by extending the
silent portion of the acoustic signal relating to the first TV program and reducing the silent
portion of the acoustic signal relating to the second TV program, there is no longer any area
where the speech of the first TV program and the speech of the second TV program overlap.
[0131]
[Control Unit] The control unit 5M includes, for example, a CPU 51, a RAM 52, a storage unit
53M, and the like, as shown in FIG. 13.
[0132]
For example, as illustrated in FIG. 13, the storage unit 53M stores a signal processing program
53a, a sound output control program 53b, a time axis expansion and contraction program 53cM,
and the like.
[0133]
For example, the time axis expansion and contraction program 53cM causes the CPU 51 to
realize a function to expand and contract the time axis of the acoustic signal input from the tuner
200 by inputting a control signal to the time axis expansion and contraction unit 6M.
[0134]
According to the acoustic signal control device 100M of the sixth modification described above,
the time axis expansion/contraction unit 6M that expands or contracts the time axis of the
acoustic signals input from the tuner 200 is provided, and the signal processing unit 1 and the
drive unit 2 cause the tactile and vibration sense presentation devices 3a to present tactile
and vibration sense stimuli based on the acoustic signals whose time axes have been expanded or
contracted by the time axis expansion/contraction unit 6M.
That is, if there is a region in which the speech portions of a plurality of acoustic signals
overlap, the time axes are expanded or contracted so that no such overlapping region occurs; as
a result, f2(nB) = 1 does not occur for several channels simultaneously, a plurality of tactile
and vibration sense stimuli are not presented at the same time, and confusion of the user can be
suppressed.
[0135]
In the sixth modification, the intensity signal PR(nB) representing the temporal change of the
intensity of the acoustic signal may be the sum of squares of the acoustic signal itself as in
the first modification, or may be the intensity signal of the aperiodic component as in the
second modification.
Further, in the sixth modification, the acoustic signal control device 100M may be connected to
the hearing aid system 200J with signal separation function and provided in a hearing aid or the
like, as with the acoustic signal control device 100J of the third modification, or may be
connected to the music source input device 200K and provided in an acoustic signal editor or the
like, as with the acoustic signal control device 100K of the fourth modification.
[0136]
In the embodiment and the first to sixth modifications, the acoustic signal control devices 100
and 100J to 100M, the acoustic signal input devices (the tuner 200, the hearing aid system with
signal separation function 200J, the music source input device 200K, and the like), the
amplifier 300, and the output device 400 do not have to be separate devices, and may be
integrated.
[0137]
Further, in the embodiment and the first to sixth modifications, the number of coupling devices 3
is not limited to four, and is arbitrary as long as it is plural and corresponds to the number of
acoustic sources.
Of course, the number of signal processing units (first signal processing unit, ...) in the
signal processing units 1 and 1L, the number of drive units (first drive unit, ...) in the drive
unit 2, and the like are changed accordingly.
[0138]
FIG. 1 is a block diagram showing the functional configuration of the acoustic signal control device of the embodiment.
FIG. 2 is a diagram showing the flow of signals in the acoustic signal control device of the embodiment.
FIG. 3 is a flowchart showing the acoustic signal processing by the signal processing unit of the acoustic signal control device of the embodiment.
FIG. 4 is a diagram showing an example of the waveform of each signal when the signal processing unit of the acoustic signal control device of the embodiment applies the acoustic signal processing to an acoustic signal of speech about "a white-haired old lady with a cane."
FIG. 5 is a diagram showing the coupling device of the acoustic signal control device of the embodiment.
FIG. 6 is a diagram showing an example of the waveform of each signal when the signal processing unit of the acoustic signal control device of modification 1 applies the acoustic signal processing to an acoustic signal of speech about "a white-haired old lady with a cane."
FIG. 7 is a block diagram showing the functional configuration of the acoustic signal control device of modification 3.
FIG. 8 is a diagram showing the flow of signals in the acoustic signal control device of modification 3.
FIG. 9 is a block diagram showing the functional configuration of the acoustic signal control device of modification 4.
FIG. 10 is a diagram showing the flow of signals in the acoustic signal control device of modification 4.
FIG. 11 is a block diagram showing the functional configuration of the acoustic signal control device of modification 5.
FIG. 12 is a diagram showing the flow of signals in the acoustic signal control device of modification 5.
FIG. 13 is a block diagram showing the functional configuration of the acoustic signal control device of modification 6.
FIG. 14 is a diagram showing the flow of signals in the acoustic signal control device of modification 6.
FIG. 15 is a diagram showing a specific example of the expansion and contraction of the time axis of the acoustic signal by the time axis expansion/contraction unit of the acoustic signal control device of modification 6.
Explanation of Reference Numerals
[0139]
1, 1L: signal processing unit (tactile and vibration sense presentation control means)
2: drive unit (tactile and vibration sense presentation control means)
3: coupling device
3a: tactile and vibration sense presentation device
3b, 3bK: operation device
4: sound output control unit (output control means)
4K: mixer (output control means)
6M: time axis expansion/contraction unit (time axis expansion/contraction means)
100, 100J, 100K, 100L, 100M: acoustic signal control device
400: output device