Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2011180240
PROBLEM TO BE SOLVED: To provide a voice processing apparatus, a voice processing system, a
voice processing method, and a program capable of improving the recognizability of an output
voice at a predetermined point.
SOLUTION: An output voice SO is band-divided into mutually different divided voices SD, which
are generated so that they are not recognized as the output voice unless they propagate
simultaneously to the same point. Voice processing units 120, 220, 320, 420, 520 supply the
divided voices to each of a plurality of speakers S, and control units 140, 240, 340, 440, 540
control the voice processing units so that the mutually different divided voices are output from
the plurality of speakers, which are arranged in different directions with respect to the
predetermined point P, and propagate simultaneously to the predetermined point.
[Selected figure] Figure 1
Voice processing apparatus, voice processing system, voice processing method and program
[0001]
The present invention relates to an audio processing device, an audio processing system, an
audio processing method, and a program.
[0002]
Conventionally, directional speakers that output sound in a narrow beam without
spreading it are known (see Patent Documents 1 and 2 below).
09-05-2019
1
In a directional speaker, a plurality of speaker units are disposed at one location to form
directivity, so that sound is output in a fixed direction.
[0003]
JP 2009-10491 A; JP 2006-222669 A
[0004]
In a directional speaker, the recognizability of the output voice is improved at every point
located on the line along the fixed direction in which directivity is formed.
However, even when directivity is formed, the recognizability of the output voice cannot be
improved at only one predetermined point on that line. Moreover, when directivity is not
formed, the recognizability of the output voice cannot be improved at only one predetermined
point around the speaker.
[0005]
Therefore, the present invention is intended to provide a voice processing device, a voice
processing system, a voice processing method, and a program capable of improving the
recognition of output voice at a predetermined point.
[0006]
According to an aspect of the present invention, there is provided a voice processing device
including: a voice processing unit that band-divides an output voice into mutually different
divided voices, generates the divided voices so that they are not recognized as the output voice
unless they propagate simultaneously to the same point, and supplies them to each of a plurality
of sound sources; and a control unit that controls the voice processing unit so that the mutually
different divided voices are output from the plurality of sound sources, which are arranged in
different directions with respect to a predetermined point, and propagate simultaneously to the
predetermined point.
[0007]
According to this configuration, the output voice is band-divided to generate mutually different
divided voices.
Then, the mutually different divided voices are output from the plurality of sound sources
arranged in different directions with respect to the predetermined point, and propagate
simultaneously to the predetermined point.
Here, the divided voices are generated so as not to be recognized as the output voice unless
they propagate simultaneously to the same point. Thereby, at the predetermined point, the
mutually different divided voices are synthesized and the output voice is restored, so that the
recognizability of the output voice can be improved at the predetermined point.
[0008]
When one sound source and another sound source are arranged at different distances from the
predetermined point, the control unit may control the voice processing unit to adjust the volume
of the divided voice supplied to each sound source according to the distance between that sound
source and the predetermined point, and to delay the output of the divided voice supplied to
each sound source according to the differences in distance between the sound sources and the
predetermined point.
[0009]
When one sound source and another sound source are arranged at different distances from each
of mutually different predetermined points, the voice processing unit may band-divide the
output voice corresponding to each of the different predetermined points into mutually
different divided voices, synthesize the divided voices of one output voice with the divided
voices of the other output voice, and supply the result to each of the plurality of sound
sources; and the control unit may control the voice processing unit to adjust the volume of the
divided voice supplied to each sound source according to the distance between the
predetermined point corresponding to that output voice and each sound source, and to delay the
output of the divided voices supplied to the respective sound sources according to the
differences in those distances.
[0010]
The voice processing unit may synthesize dummy voices with different divided voices and supply
the synthesized voices to a plurality of sound sources.
[0011]
The voice processing apparatus may further include a position specifying unit that specifies or
estimates the current position of a moving object, and the control unit may control the voice
processing unit to adjust the volume of the divided voice supplied to each sound source
according to the distance between the current position of the moving object and that sound
source, and to delay the output of the divided voice supplied to each sound source according to
the differences in distance between the current position of the moving object and the sound
sources.
[0012]
A plurality of sound sources may be arranged at the same distance from a predetermined point.
[0013]
Further, according to another aspect of the present invention, there is provided a voice
processing system comprising the plurality of sound sources and the voice processing device
described above.
[0014]
Further, according to another aspect of the present invention, there is provided a voice
processing method including: a step of performing voice processing that band-divides an output
voice into mutually different divided voices, generates the divided voices so that they are not
recognized as the output voice unless they propagate simultaneously to the same point, and
supplies them to each of a plurality of sound sources; and a step of controlling the voice
processing so that the mutually different divided voices are output from the plurality of sound
sources, which are arranged in different directions with respect to a predetermined point, and
propagate simultaneously to the predetermined point.
[0015]
Further, according to another aspect of the present invention, there is provided a program for
causing a computer to execute the above-described speech processing method.
Here, the program may be provided on a computer-readable recording medium, or may be
provided via communication means.
[0016]
As described above, according to the present invention, it is possible to provide a voice
processing device, a voice processing system, a voice processing method, and a program that can
improve the recognition of output voice at a predetermined point.
[0017]
FIG. 1 is a block diagram showing the main functional configuration of the voice processing system according to a first embodiment.
FIG. 2 is a schematic diagram showing the operation of the voice processing system according to the first embodiment.
FIG. 3 is a schematic diagram showing the generation of divided voices.
FIG. 4 is a block diagram showing the main functional configuration of the voice processing system according to a second embodiment.
FIG. 5 is a schematic diagram showing the operation of the voice processing system according to the second embodiment.
FIG. 6 is a block diagram showing the main functional configuration of the voice processing system according to a third embodiment.
FIG. 7 is a schematic diagram showing the operation of the voice processing system according to the third embodiment.
FIG. 8 is a block diagram showing the main functional configuration of the voice processing system according to a fourth embodiment.
FIG. 9 is a schematic diagram showing the operation of the voice processing system according to the fourth embodiment.
FIG. 10 is a block diagram showing the main functional configuration of the voice processing system according to a fifth embodiment.
FIG. 11 is a schematic diagram showing the operation of the voice processing system according to the fifth embodiment.
[0018]
The present invention will now be described more fully with reference to the accompanying
drawings, in which exemplary embodiments of the invention are shown. In the present
specification and the drawings, components having substantially the same functional
configuration will be assigned the same reference numerals and redundant description will be
omitted.
[0019]
[1. First Embodiment] First, a voice processing system 100 according to a first embodiment
will be described with reference to FIGS. 1 to 3. The configuration and operation of the voice
processing system 100 are shown in FIGS. 1 and 2, respectively.
[0020]
In the present embodiment, as shown in FIG. 1, the audio processing system 100 includes an
audio processing device 110 and a plurality of speakers S. The voice processing device 110
includes a voice processing unit 120, a setting unit 130, and a control unit 140. The audio
processing unit 120 has a plurality of processing sequences, and each processing sequence is
composed of a band pass filter (BP filter) 121 and a volume adjustment unit 122.
[0021]
The voice processing unit 120 is supplied with the output voice SO to be recognized at the
predetermined point P. The predetermined point P means a point where the recognition of the
output speech SO is to be improved. In the voice processing unit 120, the output voice SO is
divided into bands by the BP filter 121 to generate a plurality of divided voices SD. The plurality
of divided audio signals SD are subjected to volume adjustment processing by the volume
adjustment unit 122, supplied to the plurality of speakers S, and separately output. The
order of the BP filter 121 and the volume adjustment unit 122 may be reversed.
[0022]
The setting unit 130 is supplied with setting information indicating the positional relationship
between the plurality of speakers S and the predetermined point P, in particular, the distance
between the plurality of speakers S and the predetermined point P. The control unit 140
performs arithmetic processing necessary to operate the audio processing unit 120 based on the
setting information.
[0023]
The setting information is generally input by the user according to the arrangement of the
speakers S and the position of the predetermined point P. However, the setting information may
be generated by the voice processing device 110 based on the position information transmitted
from the speaker S and the predetermined point P.
[0024]
Note that at least a part of the above configuration may be realized by software (program)
operating on the voice processing device 110 or may be realized by hardware. Also, when
realized by software, the program may be stored in advance on the voice processing device 110
or may be supplied from the outside.
[0025]
In the present embodiment, as shown in FIG. 2, a plurality of speakers S are arranged at the same
distance from a predetermined point P. In the example shown in FIG. 2, the four speakers S1
to S4 are disposed on the same circumference centered on the predetermined point P.
[0026]
In the audio processing unit 120, as shown in FIG. 3, the output audio SO is band-divided by the
BP filter 121, and, for example, four divided audios SD1 to SD4 are generated. The output speech
SO may be divided into constant bands, may be divided on a mel scale, or may be divided by a
comb filter. Here, the divided speeches SD1 to SD4 are generated so as not to be recognized as
the output speech SO unless all the divided speeches SD1 to SD4 are simultaneously propagated
to the same point.
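The band division described in paragraph [0026] can be sketched as follows. This is a minimal illustration, not the patent's implementation: rectangular FFT-domain masks stand in for the BP filters 121, and the function name and constant-band split are assumptions. The point it demonstrates is that the divided voices SD1 to SD4 reconstruct the output voice SO only when all of them are summed, mirroring restoration at the predetermined point P:

```python
import numpy as np

def band_divide(so: np.ndarray, n_bands: int = 4) -> list:
    """Split the output voice SO into complementary sub-band signals.

    Rectangular FFT-domain masks stand in for the BP filters 121;
    a constant-band split is assumed (the patent also allows a mel
    scale or a comb filter).
    """
    spectrum = np.fft.rfft(so)
    edges = np.linspace(0, len(spectrum), n_bands + 1, dtype=int)
    divided = []
    for i in range(n_bands):
        masked = np.zeros_like(spectrum)
        masked[edges[i]:edges[i + 1]] = spectrum[edges[i]:edges[i + 1]]
        divided.append(np.fft.irfft(masked, n=len(so)))
    return divided

# Only the sum of all divided voices restores SO; each one alone does not.
rng = np.random.default_rng(0)
so = rng.standard_normal(1024)
sd = band_divide(so)
assert np.allclose(sum(sd), so)
```

Because the masks partition the spectrum without overlap, the divided voices are complementary by construction, which is what makes each one unintelligible on its own.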
[0027]
The divided audio signals SD1 to SD4 are subjected to volume adjustment processing based on
the setting information by the volume adjustment unit 122, and then output separately from the
speakers S1 to S4. The divided voices SD1 to SD4 are adjusted in volume so as to propagate from
the speakers S1 to S4 to the predetermined point P and not to propagate far beyond the
predetermined point P. In the present embodiment, the volume of the divided speech SD1 to SD4
is adjusted to substantially the same value based on the setting information and the attenuation
factor. When the distance between the speakers S1 to S4 and the predetermined point P is fixed,
the volume adjustment process may be omitted.
[0028]
As described above, the speakers S1 to S4 are disposed at the same distance from the
predetermined point P. Thus, in the audio processing system 100, the divided audio signals SD1
to SD4 are simultaneously output from the speakers S1 to S4 at substantially the same volume.
[0029]
In FIG. 2, propagation ranges A1 to A4 of the divided voices SD1 to SD4 from the speakers S1 to
S4 to the predetermined point P are shown. As shown in FIG. 2, at the predetermined point P, the
divided voices SD1 to SD4 are simultaneously propagated at substantially the same volume, and
the divided voices SD1 to SD4 are synthesized to restore the output voice SO. Thereby, at the
predetermined point P, the recognition of the output speech SO is improved. On the other hand,
since the divided voices SD1 to SD4 do not simultaneously propagate and the output voice SO is
not restored except at the predetermined point P, it becomes difficult to recognize the output
voice SO.
[0030]
[2. Second Embodiment] Next, a voice processing system 200 according to a second
embodiment will be described with reference to FIGS. 4 and 5. The configuration and operation
of the speech processing system 200 are shown in FIGS. 4 and 5, respectively. In the following,
the description overlapping with the first embodiment is omitted.
[0031]
In the present embodiment, as shown in FIG. 4, the voice processing device 210 includes a voice
processing unit 220, a setting unit 230, and a control unit 240. The voice processing unit 220
has a plurality of processing sequences. Each processing sequence includes a BP filter 221, a
volume adjustment unit 222, and a mixer 224.
[0032]
The plurality of divided voices SD are subjected to volume adjustment processing by the volume
adjustment unit 222 and supplied to the mixer 224.
The plurality of divided voices SD are synthesized with the dummy voices SD′ supplied to the
mixer 224, supplied to the plurality of speakers S, and output separately. The dummy voice SD′
may be the same for each divided voice SD or may differ among them. In addition, only a part of
the divided voices SD may be synthesized with dummy voices SD′.
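The mixer stage of paragraph [0032] can be sketched as a per-channel sum of each divided voice with a dummy voice before it reaches its speaker. The patent does not fix the content of the dummy voices, so random noise, the gain, and the function name here are all illustrative assumptions:

```python
import numpy as np

def mix_dummy(divided: list, dummy_gain: float = 0.5, seed: int = 0) -> list:
    """Mixer 224: synthesize a dummy voice SD' into each divided voice SD.

    Independent noise stands in for the dummy voices; the patent allows
    the dummies to be the same or different per channel, and only some
    channels may carry a dummy at all.
    """
    rng = np.random.default_rng(seed)
    return [sd + dummy_gain * rng.standard_normal(sd.shape)
            for sd in divided]

# Each speaker feed now carries its divided voice plus a masking dummy.
feeds = mix_dummy([np.zeros(8), np.zeros(8)])
```

Away from the predetermined point the dummies are heard alongside the divided voices, which is what makes the output voice even harder to recognize there.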
[0033]
In the present embodiment, as shown in FIG. 5, four speakers S1 to S4 are arranged at the same
distance from the predetermined point P. Therefore, in the voice processing system 200, the four
divided voices SD1 to SD4 are synthesized with the dummy voices SD1 'to SD4' and
simultaneously output from the speakers S1 to S4 at substantially the same volume.
[0034]
FIG. 5 shows the propagation situation of the divided voices SD1 to SD4 from the speakers S1 to
S4 to the predetermined point P. As shown in FIG. 5, at the predetermined point P, the divided
voices SD1 to SD4 simultaneously propagate at substantially the same volume, and the divided
voices SD1 to SD4 are synthesized to restore the output voice SO. The dummy voices SD1′ to
SD4′ are propagated so as to cover the propagation ranges A1 to A4 of the divided voices SD1
to SD4, respectively.
[0035]
Thereby, at the predetermined point P, the recognizability of the output voice SO is improved.
On the other hand, except at the predetermined point P, the divided voices SD1 to SD4 do not
propagate simultaneously, the output voice SO is not restored, and the dummy voices SD1′ to
SD4′ are heard together with the divided voices SD1 to SD4, so it becomes even more difficult
for the output voice SO to be recognized.
[0036]
[3. Third Embodiment] Next, a voice processing system 300 according to a third embodiment
will be described with reference to FIGS. 6 and 7. 6 and 7 show the configuration and operation
of the speech processing system 300, respectively. In the following, descriptions overlapping
with the first or second embodiment will be omitted.
[0037]
In the present embodiment, as shown in FIG. 6, the voice processing device 310 includes a voice
processing unit 320, a setting unit 330, and a control unit 340. The audio processing unit 320
has a plurality of processing sequences, and each processing sequence includes a BP filter 321, a
volume adjustment unit 322, and an output delay unit 323. Note that the volume adjustment unit
322 and the output delay unit 323 may be arranged in the reverse order.
[0038]
The plurality of divided audio signals SD are subjected to volume adjustment and output delay
processing by the volume adjustment unit 322 and the output delay unit 323, and supplied to
the plurality of speakers S to be separately output. The control unit 340 performs arithmetic
processing necessary to perform volume adjustment and output delay processing based on the
setting information.
[0039]
In the present embodiment, as shown in FIG. 7, four speakers S1 to S4 are arranged to surround
a predetermined point P at different distances. Therefore, in the voice processing system 300,
the four divided voices SD1 to SD4 are output from the speakers S1 to S4 at different times at
different volumes. Note that the distances to the predetermined point P need not be different
among all the speakers S1 to S4.
[0040]
The control unit 340 calculates the volume adjustment parameters of the divided voices SD1 to
SD4, taking into account their attenuation rates based on the distances from the speakers S1 to
S4 to the predetermined point P, and supplies them to the volume adjustment unit 322. The
volume adjustment parameters are calculated as values at which the divided voices SD1 to SD4
propagate from the speakers S1 to S4 to the predetermined point P but do not propagate far
beyond the predetermined point P.
[0041]
The control unit 340 calculates the output delay parameters of the divided voices SD1 to SD4,
taking into account their propagation time differences based on the differences in distance from
the speakers S1 to S4 to the predetermined point P, and supplies them to the output delay unit
323. The output delay parameters are calculated with reference to the shortest distance from
the speakers S1 to S4 to the predetermined point P, as values at which the divided voices SD1
to SD4 output from the speakers S1 to S4 propagate simultaneously to the predetermined point P.
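The parameter calculations of paragraphs [0040] and [0041] can be sketched from the speaker-to-point distances alone. This is a sketch under stated assumptions, not the patent's formula: free-field 1/r amplitude attenuation, a sound speed of 343 m/s, and delays referenced to the longest path so that the farthest speaker is not delayed:

```python
SPEED_OF_SOUND = 343.0  # m/s, an assumed free-field propagation speed

def volume_and_delay(distances: list) -> list:
    """Per-speaker (gain, delay in seconds) so that the divided voices
    arrive at the predetermined point P simultaneously and at equal
    volume, assuming 1/r amplitude attenuation.

    Gains are normalized to the nearest speaker (farther speakers are
    driven louder); delays hold back the nearer speakers so that every
    divided voice arrives together with the one from the farthest speaker.
    """
    d_min, d_max = min(distances), max(distances)
    return [(d / d_min, (d_max - d) / SPEED_OF_SOUND)
            for d in distances]

# Four speakers at different distances from P, as in FIG. 7.
params = volume_and_delay([1.0, 2.0, 4.0, 4.0])
```

With these parameters, the arrival amplitude gain/d and the arrival time d/c + delay are the same for every speaker, which is exactly the simultaneous, equal-volume arrival the control unit 340 aims for.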
[0042]
Then, the volume adjustment unit 322 and the output delay unit 323 perform volume adjustment
and output delay processing on the divided audios SD1 to SD4 based on the volume adjustment
parameter and the output delay parameter.
[0043]
FIG. 7 shows propagation ranges A1 to A4 of the divided voices SD1 to SD4 from the speakers S1
to S4 to the predetermined point P.
In FIG. 7, the sizes of the propagation ranges A1 to A4 represent the volumes of the divided
voices SD1 to SD4 output from the speakers S1 to S4, and the lengths of the arrows SD1 to SD4
represent the propagation times of the divided voices SD1 to SD4 from the speakers S1 to S4 to
the predetermined point P. Here, the larger the propagation range A1 to A4, the larger the
output volume from the corresponding speaker S1 to S4; and the shorter the arrow SD1 to SD4,
the larger the output delay.
[0044]
As shown in FIG. 7, at the predetermined point P, the divided voices SD1 to SD4 are
simultaneously propagated at substantially the same volume, and the divided voices SD1 to SD4
are synthesized to restore the output voice SO. Thereby, at the predetermined point P, the
recognition of the output speech SO is improved. On the other hand, since the divided voices SD1
to SD4 do not simultaneously propagate and the output voice SO is not restored except at the
predetermined point P, it becomes difficult to recognize the output voice SO.
[0045]
[4. Fourth Embodiment] Next, a voice processing system 400 according to a fourth
embodiment will be described with reference to FIGS. 8 and 9. 8 and 9 show the configuration
and operation of the speech processing system 400, respectively. In the following, descriptions
overlapping with the first to third embodiments will be omitted.
[0046]
In the present embodiment, as shown in FIG. 8, the voice processing device 410 includes a voice
processing unit 420, a setting unit 430, and a control unit 440. The audio processing unit 420
has a plurality of processing sequences, and each processing sequence includes two BP filters
421a and 421b, two volume adjustment units 422a and 422b, two output delay units 423a and
423b, and one mixer 424.
[0047]
The voice processing unit 420 is supplied with two output voices SOa and SOb. The output
speech SOa, SOb means the speech to be recognized at each of different predetermined points Pa,
Pb. In the voice processing unit 420, the output voices SOa and SOb are respectively divided into
bands by the BP filters 421a and 421b, and a plurality of divided voices SDa and SDb are
generated.
[0048]
Note that the output voices SOa and SOb may be divided in the same manner or in different
manners. Also, there may be more than two output voices SO, and they may be the same voice.
Further, the numbers of divided voices SDa and SDb may be the same or different.
[0049]
The plurality of divided voices SDa and SDb are subjected to volume adjustment and output delay
processing by the volume adjusters 422a and 422b and the output delay units 423a and 423b
for each of the output voices SOa and SOb, and supplied to the mixer 424.
[0050]
Here, with regard to the divided voices SDa of the output voice SOa, volume adjustment and
output delay processing are performed such that a plurality of divided voices SDa are
simultaneously propagated to the predetermined point Pa at substantially the same volume.
The same processing is applied to the divided speech SDb of the output speech SOb.
[0051]
Then, the divided speech SDa of the output speech SOa is synthesized with the divided speech
SDb of the output speech SOb, and supplied to the plurality of speakers S as divided speech sets
SDa and SDb and separately output.
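The per-speaker synthesis of paragraphs [0049] to [0051] amounts to applying each output voice's own volume and delay to its divided voice and then summing the two results for each speaker. A minimal sketch, assuming a 16 kHz sample rate, whole-sample delays with zero padding, and illustrative function names:

```python
import numpy as np

def delay_samples(sd: np.ndarray, delay_s: float, fs: int = 16000) -> np.ndarray:
    """Apply an output delay by shifting the divided voice by whole
    samples (the sample rate fs and zero padding are assumptions)."""
    n = int(round(delay_s * fs))
    return np.concatenate([np.zeros(n), sd])[:len(sd)]

def speaker_feed(sda: np.ndarray, sdb: np.ndarray,
                 gain_a: float, gain_b: float,
                 delay_a: float, delay_b: float) -> np.ndarray:
    """One speaker's signal in the fourth embodiment: the divided voice
    of SOa and that of SOb, each with its own volume and delay toward
    its own point (Pa or Pb), summed as by the mixer 424."""
    return (gain_a * delay_samples(sda, delay_a)
            + gain_b * delay_samples(sdb, delay_b))
```

Because the two processing chains are independent up to the mixer, each divided voice set can target its own predetermined point even though both leave the same speaker.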
[0052]
In this embodiment, as shown in FIG. 9, four speakers S1 to S4 are arranged to surround
predetermined points Pa and Pb at different distances.
Therefore, in the voice processing system 400, the four divided voice sets SDa and SDb are
output from the speakers S1 to S4 at different times at different volumes.
[0053]
FIG. 9 shows the propagation situation of the divided sound sets SDa and SDb from the speakers
S1 to S4 to the predetermined points Pa and Pb. Although the propagation ranges of the divided
voice sets SDa and SDb are not shown in FIG. 9, their volumes are adjusted so that the divided
voice sets SDa and SDb propagate to the predetermined points Pa and Pb but do not propagate
far beyond them. The number of output voices SO and the number of predetermined points P may
each be three or more.
[0054]
As shown in FIG. 9, at the predetermined point Pa, divided speeches SDa1 to SDa4 of the output
speech SOa simultaneously propagate at substantially the same volume, and the divided speeches
SDa1 to SDa4 are synthesized to restore the output speech SOa. Similarly, at the predetermined
point Pb, the divided voices SDb1 to SDb4 of the output voice SOb are simultaneously
propagated at substantially the same volume, and the divided voices SDb1 to SDb4 are
synthesized to restore the output voice SOb.
[0055]
Thereby, at the predetermined points Pa and Pb, the recognizability of the output voices SOa
and SOb is improved. On the other hand, except at the predetermined points Pa and Pb, the
divided voices do not propagate simultaneously and the output voices SOa and SOb are not
restored, so it becomes difficult to recognize the output voices SOa and SOb.
[0056]
[5. Fifth Embodiment] Next, a voice processing system 500 according to a fifth embodiment
will be described with reference to FIGS. 10 and 11. FIGS. 10 and 11 show the configuration and
operation of the voice processing system 500, respectively. In the following, descriptions
overlapping with the first to fourth embodiments will be omitted.
[0057]
In the present embodiment, as shown in FIG. 10, the voice processing device 510 includes a voice
processing unit 520, a setting unit 530, a control unit 540, and a position specifying unit 550.
The audio processing unit 520 has a plurality of processing sequences, and each processing
sequence includes a BP filter 521, a volume adjustment unit 522, and an output delay unit 523.
[0058]
The position specifying unit 550 is supplied with sensor values indicating the current position P
of the moving object. The position specifying unit 550 specifies the current position P of the
moving object based on the sensor values and supplies it to the control unit 540 as position
information.
[0059]
The sensor values indicate the current position P of the moving object, captured using video
information from a camera, audio information from a microphone, contact information from a
floor sensor, or the like. The position specifying unit 550 may also predict the future position
of the moving object based on changes in its position information.
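The future-position prediction mentioned in paragraph [0059] could be as simple as linear extrapolation from two successive position fixes. The constant-velocity model and function name below are assumptions for illustration; the patent only says the future position may be predicted from changes in the position information:

```python
def predict_position(prev: tuple, curr: tuple,
                     dt_ahead: float, dt_between: float = 1.0) -> tuple:
    """Extrapolate the moving object's position dt_ahead seconds forward,
    assuming constant velocity between the two most recent fixes taken
    dt_between seconds apart (a constant-velocity model is an assumption)."""
    vx = (curr[0] - prev[0]) / dt_between
    vy = (curr[1] - prev[1]) / dt_between
    return (curr[0] + vx * dt_ahead, curr[1] + vy * dt_ahead)
```

Predicting ahead in this way would let the control unit 540 compute volume and delay parameters for where the moving object will be, compensating for processing and propagation latency.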
[0060]
The plurality of divided audio signals SD are subjected to volume adjustment and output delay
processing by the volume adjustment unit 522 and the output delay unit 523, and are supplied
to the plurality of speakers S and separately output. The control unit 540 is supplied with
the position information of the moving object from the position specifying unit 550 and
calculates the positional relationship between the speakers S and the moving object. Then,
based on this positional relationship, the control unit 540 performs the arithmetic processing
necessary for the volume adjustment and output delay processing.
[0061]
In the present embodiment, as shown in FIG. 11, four speakers S1 to S4 are arranged so as to
surround the positions Pa and Pb of the moving object at different distances. Therefore, in the voice
processing system 500, the four divided voices SD1 to SD4 are output from the speakers S1 to S4
at different times at different volumes.
[0062]
FIG. 11 shows the propagation status of the divided voices SDa1 to SDa4 and SDb1 to SDb4 from
the speakers S1 to S4 to the positions Pa and Pb of the moving object. As shown in FIG. 11, the divided
voices SDa1 to SDa4 simultaneously propagate at substantially the same volume at the position
Pa before the movement of the movement target, and the divided voices SDa1 to SDa4 are
synthesized to restore the output voice SOa. Further, when the movement target moves, the
divided speech SDb1 to SDb4 simultaneously propagates at the substantially same volume at the
position Pb after movement, and the divided speech SDb1 to SDb4 is synthesized to restore the
output speech SOb.
[0063]
That is, the points at which the output voices SOa and SOb are recognized can be moved from
position Pa to position Pb in accordance with the movement of the moving object.
[0064]
As a result, at the current positions Pa and Pb of the moving object, the recognizability of the
output voices SOa and SOb is improved.
On the other hand, since the divided voices SDa1 to SDa4 and SDb1 to SDb4 do not propagate
simultaneously and the output voices SOa and SOb are not restored except at the current
positions Pa and Pb of the moving object, it is difficult to recognize the output voices SOa and
SOb elsewhere.
[0065]
[6. Summary] As described above, according to the voice processing systems 100, 200, 300,
400, and 500 according to the embodiment of the present invention, the output voice SO is
divided into bands, and different divided voices SD are generated. Then, the divided voices SD
which are different from each other are output from the plurality of speakers S arranged in
different directions with respect to the predetermined point P, and are simultaneously
propagated to the predetermined point P. Here, different divided speech SD is generated so as
not to be recognized as the output speech SO unless it is simultaneously propagated to the same
point. Thereby, at the predetermined point P, the divided speech SD which is different from each
other is synthesized and the output speech SO is restored, so that the recognition of the output
speech SO can be improved at the predetermined point.
[0066]
Although the preferred embodiments of the present invention have been described in detail with
reference to the accompanying drawings, the present invention is not limited to such examples. It
is obvious that those skilled in the art to which the present invention belongs can conceive of
various changes or modifications within the scope of the technical idea described in the claims.
Of course, it is understood that these also fall within the technical scope of the present invention.
[0067]
For example, in the above description, the case where the plurality of sound sources S are
composed of four speakers S1 to S4 has been described, but the number of sound sources S may
be two, three, five or more. The second embodiment may be combined with the third to fifth
embodiments. Similarly, the fourth embodiment may be combined with the fifth embodiment.
[0068]
100, 200, 300, 400, 500: voice processing system
110, 210, 310, 410, 510: voice processing device
120, 220, 320, 420, 520: voice processing unit
130, 230, 330, 430, 530: setting unit
140, 240, 340, 440, 540: control unit
550: position specifying unit
121, 221, 321, 421a, 421b, 521: BP filter
122, 222, 322, 422a, 422b, 522: volume adjustment unit
123, 223, 323, 423a, 423b, 523: output delay unit
224, 424: mixer
S, S1 to S4: speakers
SO, SOa, SOb: output voices
SD, SDa, SDb, SDa1 to SDa4, SDb1 to SDb4: divided voices
P, Pa, Pb: predetermined points