Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2014116932
Abstract: To perform high-quality sound collection while tracking a sound source, and to reduce the communication load. A sound collection system 1 includes a plurality of handsets 10A to 10N and a host device 1 (base device), the handsets 10A to 10N and the host device 1 being connected by a communication cable 30. Each of the handsets 10A to 10N includes a plurality of microphones MICa to MICm. Each handset selects the pickup signal of maximum power from the pickup signals of the microphones MICa to MICm and transmits it to the host device 1 as a handset audio signal. At this time, each handset also transmits the level information of that maximum-power pickup signal, i.e., of the handset audio signal, to the host device 1 together with the handset audio signal. The host device 1 compares the level information from the handsets 10A to 10N, selects the handset audio signal corresponding to the maximum level, and uses it as the tracking audio signal. [Selected figure] Figure 9
Sound pickup system
[0001]
The present invention relates to a sound collection system that selects a desired collected sound
from collected sounds from a plurality of sound sources such as a plurality of speakers, and more
particularly to a sound collection system capable of tracking even if the desired collected sound
changes.
[0002]
Conventionally, various techniques have been devised to pick up the voice of each speaker with high quality even when the active speaker switches in a space where there are a plurality of speakers.
[0003]
For example, Patent Document 1 describes a sound collection device that selects one microphone
from a plurality of microphones.
In Patent Document 1, the sound collection device compares the signal energy of sound
collection signals of a plurality of microphones.
Then, the sound collection device selects a microphone that outputs an audio signal with the
highest signal energy, and outputs a sound collection signal of the selected microphone.
[0004]
JP-A-8-317491
[0005]
However, since the sound collection device described in Patent Document 1 selects the sound collection signal of a specific microphone from among the sound collection signals of a plurality of microphones attached to the main body of the device, the sound collection range is limited to a narrow area.
[0006]
For example, in a large venue where several tens of people or more gather, it is difficult for such a single sound collection device to perform high-quality sound collection throughout the venue.
[0007]
As a method of achieving position-independent sound collection in such a large venue, a plurality of handsets and a base unit can be prepared, each handset provided with a microphone, and the base unit made to select among the sound collection signals of the microphones.
Further, by providing each handset with a plurality of microphones having different sound collection directivities, high-quality sound can be collected over a wide range.
[0008]
However, when the collected sound signals of every microphone are gathered at the base unit and the selection processing is performed there, all the collected signals must be transmitted to the base unit, so the communication load between the handsets and the base unit increases. In particular, the communication data, and hence the load, grow as the quality of the collected signal is raised, and as the number of handsets and microphones increases.
[0009]
An object of the present invention is to reduce the communication load while maintaining high-quality sound collection in a sound collection system that performs wide-range sound collection with a plurality of handsets each having a plurality of microphones.
[0010]
The present invention relates to a sound collection system including a plurality of slaves each
having a plurality of microphones and a master connected to the plurality of slaves, and has the
following features.
Each slave unit selects a desired pickup signal from the pickup signals of its plurality of microphones to generate a slave unit audio signal, generates level information of that slave unit audio signal, and sends the slave unit audio signal and the level information to the master. The master selects a desired audio signal based on the slave audio signals and the level information received from each slave.
[0011]
In this configuration, the pickup signals of the microphones are first selected within each slave unit. The audio signals selected by the slave units are then further selected by the master unit. As a result, a desired audio signal is obtained by the sound collection system without transmitting all of the microphone pickup signals to the master. Therefore, the communication load is reduced while a desired audio signal is obtained without degrading the sound collection quality.
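As a rough illustration of this two-stage selection, the following sketch lets each handset report only one signal plus its level, and lets the host compare levels alone. The function names and toy signals are illustrative assumptions, not the patent's implementation.

```python
def handset_select(mic_signals):
    """Each handset picks its highest-power microphone signal and
    reports it together with its level (power) information."""
    powers = [sum(s * s for s in sig) for sig in mic_signals]
    best = max(range(len(mic_signals)), key=lambda i: powers[i])
    return mic_signals[best], powers[best]  # (handset audio signal, level info)

def host_select(handset_outputs):
    """The host compares only the reported levels and picks the handset
    audio signal with the maximum level as the tracking audio signal."""
    signal, _ = max(handset_outputs, key=lambda pair: pair[1])
    return signal

# Two handsets, three microphones each (toy data).
handset_a = [[0.1, 0.2], [0.9, 0.8], [0.0, 0.1]]
handset_b = [[0.3, 0.3], [0.2, 0.1], [0.4, 0.2]]
outputs = [handset_select(handset_a), handset_select(handset_b)]
tracking = host_select(outputs)
print(tracking)
```

Note that only one audio signal per handset crosses the link, which is the source of the communication-load reduction claimed above.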
[0012]
Further, each of the plurality of slaves of the sound collection system according to the present invention includes a slave coefficient determination unit, a slave amplification unit, and a slave synthesis unit. The slave coefficient determination unit compares the signal levels of the pickup signals of the plurality of microphones to determine slave amplification factors for selecting a pickup signal, and generates the level information from those signal levels. The slave amplification unit amplifies the pickup signals of the plurality of microphones based on the slave amplification factors. The slave synthesis unit combines the plurality of amplified pickup signals to generate the slave unit audio signal.
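The coefficient-determination / amplification / synthesis chain above can be sketched as follows; with 1/0 amplification factors the synthesis reduces to selecting one microphone. Function names and the toy signals are assumptions for illustration only.

```python
def determine_coefficients(levels):
    """Give amplification factor 1 to the strongest microphone, 0 to the
    rest, and return the level information alongside the factors."""
    best = max(range(len(levels)), key=lambda i: levels[i])
    gains = [1.0 if i == best else 0.0 for i in range(len(levels))]
    return gains, levels[best]

def amplify_and_combine(signals, gains):
    """Amplify each pickup signal by its factor and sum them; with 1/0
    factors this reduces to passing through one selected signal."""
    n = len(signals[0])
    return [sum(g * sig[t] for g, sig in zip(gains, signals)) for t in range(n)]

signals = [[0.1, -0.1], [0.5, 0.4], [0.2, 0.0]]
levels = [sum(s * s for s in sig) for sig in signals]
gains, level_info = determine_coefficients(levels)
handset_audio = amplify_and_combine(signals, gains)
print(handset_audio, level_info)
```

Intermediate (non-binary) factors would instead produce a weighted mix, which the same structure supports.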
[0013]
In this configuration, a desired audio signal can be selected through the synthesis processing of the pickup signals of the plurality of microphones and transmitted to the master as the handset audio signal. At the same time, the level information of the handset audio signal can be transmitted to the master.
[0014]
In addition, the parent device of the sound collection system according to the present invention
includes a parent device coefficient determination unit, a parent device amplification unit, and a
parent device combination unit. The master unit coefficient determination unit compares the
signal levels of the slave unit audio signals output from the respective slave units to determine a
master unit amplification coefficient for selecting a slave unit audio signal. The master
amplification unit amplifies a plurality of slave unit audio signals based on the master
amplification factor. The master unit synthesis unit synthesizes the plurality of amplified slave
unit voice signals to generate an output voice signal.
[0015]
In this configuration, it is possible to select a desired voice signal from the synthesis processing
of a plurality of handset voice signals.
[0016]
In addition, each of the plurality of slaves of the sound collection system according to the present
invention includes a gain control unit that adjusts the signal level of the slave audio signal.
Each of the plurality of handsets transmits a handset voice signal whose signal level has been
adjusted by the gain control unit to the master.
[0017]
In this configuration, since the signal level of the handset audio signal can be adjusted, it can be transmitted to the master with communication loss taken into account, so that the signal levels of the handset audio signals are made substantially equal at the point where the master receives them. By using the handset audio signals and level information adjusted in this way, the master can accurately reproduce the signal level of each handset audio signal and compare those signal levels with high accuracy. Thereby, a desired audio signal can be output reliably and accurately.
[0018]
Further, each of the plurality of slaves of the sound collection system of the present invention is directly connected to the master; that is, the plurality of slaves are connected to the master in a so-called star configuration. With such a star connection, connecting to the master is easy, the specification of signal communication between each slave and the master can be simplified, and differences in communication loss between the slaves can be reduced.
[0019]
Each of the plurality of slaves of the sound collection system according to the present invention has a memory for storing a speech processing program, and the master has a memory for storing a plurality of speech processing programs, and a speaker. The master emits a test sound wave from the speaker; each slave determines the level of the test sound wave arriving at its microphone and transmits the level data as a determination result to the master; the master then selects, according to the level data, the speech processing program to be sent to that slave and sends it. Thereby, the master can grasp the level of the echo from the speaker to the microphone of each microphone unit.
[0020]
Also, the voice processing program comprises an echo cancellation program having filter coefficients to be updated; the echo cancellation program has a parameter section that determines the number of filter coefficients; and the master changes the number of filter coefficients of each microphone unit according to the level data received from that unit.
[0021]
In this case, the number of filter coefficients (the number of taps) can be increased for a microphone unit that is close to the host device and therefore has a high echo level, and decreased for a microphone unit that is far from the host device and has a low echo level.
[0022]
The voice processing program is either the echo cancellation program or an echo suppressor program that removes echo by non-linear processing, and the master determines, based on the level data, which of the two programs to transmit to each microphone unit.
[0023]
In this case, a microphone unit close to the host device and having a high echo level can execute
an echo canceler, and a microphone unit distant from the host device and having a low echo level
can execute a noise canceler.
[0024]
According to the present invention, high-quality sound collection and a reduced communication load can be realized even when a plurality of handsets and a master unit constitute a sound collection system that collects sound over a wide range.
[0025]
FIG. 1(A) is a block diagram showing the configuration of a host apparatus (master device), and FIG. 1(B) is a block diagram showing the configuration of a microphone unit 2A (slave device). FIG. 2(A) is a diagram showing the configuration of the echo canceller, and FIG. 2(B) is a diagram showing the configuration of the noise canceller. FIG. 3 is a diagram showing the configuration of an echo suppressor. FIG. 4(A) is a view showing another connection mode of the signal processing system of the present invention, FIG. 4(B) is an external perspective view of the host device, and FIG. 4(C) is an external perspective view of a microphone unit. FIG. 5 is a flowchart showing the operation of the signal processing system. There follow: a block diagram showing the configuration of a signal processing system according to an application example; an external perspective view of a slave unit according to the application example; a block diagram showing the configuration of the slave unit; a block diagram showing the configuration of the audio signal processing unit; a diagram showing an example of the data format of slave unit data; a block diagram showing the configuration of the host device according to the application example; a flowchart of the sound source tracking process of the slave unit; a flowchart of the sound source tracking process of the host device; a flowchart showing the operation when a test sound wave is emitted and level determination is performed; a flowchart showing the operation when specifying the echo canceller of a slave unit; a block diagram in the case where an echo suppressor is configured in the host device; and FIGS. 17(A) and 17(B), diagrams showing a modification of the arrangement of the host device and the slave units.
[0026]
FIG. 1A is a block diagram showing a configuration of a host apparatus (master device), and FIG.
1B is a block diagram showing a configuration of a microphone unit 2A (child device). The
hardware configuration of each microphone unit is the same, and in FIG. 1B, the configuration
and function of the microphone unit 2A will be representatively described. In the present
embodiment, the configuration of A / D conversion is omitted, and various signals are described
as digital signals unless otherwise specified.
[0027]
As shown in FIG. 1A, the host device 1 includes a communication interface (I / F) 11, a CPU 12, a
RAM 13, a non-volatile memory 14, and a speaker 102.
[0028]
The CPU 12 performs various operations by reading an application program from the nonvolatile memory 14 and temporarily storing the application program in the RAM 13.
For example, as described above, an audio signal is input from each microphone unit, and each
audio signal is individually transmitted to another host device connected via a network.
[0029]
The non-volatile memory 14 comprises a flash memory, a hard disk drive (HDD), or the like. The non-volatile memory 14 stores an audio processing program (hereinafter referred to as an audio signal processing program in the present embodiment). The audio signal processing program is an operation program for each microphone unit. For example, there are various types of programs, such as a program that realizes the function of the echo canceller, a program that realizes the function of the noise canceller, and a program that realizes gain control.
[0030]
The CPU 12 reads a predetermined audio signal processing program from the non-volatile
memory 14 and transmits the program to each microphone unit via the communication I / F 11.
The audio signal processing program may be built in the application program.
[0031]
The microphone unit 2A includes a communication I/F 21A, a DSP 22A, and a microphone 25A.
[0032]
The DSP 22A includes a volatile memory 23A and an audio signal processing unit 24A. Although
the volatile memory 23A is incorporated in the DSP 22A in this example, the volatile memory
23A may be provided separately from the DSP 22A. The audio signal processing unit 24A
corresponds to the processing unit of the present invention, and has a function of outputting the
audio collected by the microphone 25A as a digital audio signal.
[0033]
The audio signal processing program transmitted from the host device 1 is temporarily stored in
the volatile memory 23A via the communication I / F 21A. The audio signal processing unit 24A
performs processing according to the audio signal processing program temporarily stored in the
volatile memory 23A, and transmits a digital audio signal related to the audio collected by the
microphone 25A to the host apparatus 1. For example, when a program for an echo canceller is
transmitted from the host device 1, an echo component is removed from the sound collected by the microphone 25A before transmission to the host device 1. As described above, when an
echo canceller program is executed in each microphone unit, it is preferable to execute an
application program for a communication conference in the host device 1.
[0034]
The audio signal processing program temporarily stored in the volatile memory 23A is erased when the power supply to the microphone unit 2A is cut off. Each time it is activated, the microphone unit therefore operates only after receiving the audio signal processing program for operation from the host device 1. If the microphone unit 2A receives its power supply via the communication I/F 21A (bus-power drive), it receives the program for operation from the host device 1 whenever it is connected to the host device 1, and then operates.
[0035]
As described above, when the host apparatus 1 executes the application program for a communication conference, the audio signal processing program for the echo canceller is executed, and when the application program for recording is executed, the audio signal processing program for the noise canceller is executed. Alternatively, when an application program for sound reinforcement is executed to output the sound collected by each microphone unit from the speaker 102 of the host device 1, an audio signal processing program for a howling canceller can be executed. When the host device 1 executes the application program for recording, the speaker 102 is unnecessary.
[0036]
The echo canceller will be described with reference to FIG. FIG. 2A is a block diagram showing
the configuration when the audio signal processing unit 24A executes an echo canceller
program. As shown in FIG. 2A, the audio signal processing unit 24A includes a filter coefficient
setting unit 241, an adaptive filter 242, and an addition unit 243.
[0037]
The filter coefficient setting unit 241 estimates the transfer function of the acoustic transfer system (the acoustic propagation path from the speaker 102 of the host device 1 to the microphone of each microphone unit), and sets the filter coefficients of the adaptive filter 242 according to the estimated transfer function.
[0038]
The adaptive filter 242 includes a digital filter such as an FIR filter. The adaptive filter 242 receives the sound emission signal FE input from the host device 1 to the speaker 102, performs filter processing with the filter coefficients set by the filter coefficient setting unit 241, and generates a pseudo-regression sound signal (an echo replica). The adaptive filter 242 outputs the generated pseudo-regression sound signal to the addition unit 243.
[0039]
The addition unit 243 outputs a sound collection signal NE1' obtained by subtracting the pseudo-regression sound signal input from the adaptive filter 242 from the sound collection signal NE1 of the microphone 25A.
[0040]
The filter coefficient setting unit 241 updates the filter coefficient using an adaptive algorithm
such as the LMS algorithm based on the sound collection signal NE1 'output from the addition
unit 243 and the sound emission signal FE.
Then, the filter coefficient setting unit 241 sets the updated filter coefficient in the adaptive filter
242.
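The adaptive filter, subtraction, and coefficient update of paragraphs [0038] to [0040] can be sketched as a time-domain LMS loop: the FIR filter produces the pseudo-regression signal from the emission signal FE, the addition unit subtracts it from the pickup NE1, and the error drives the coefficient update. The tap count, step size, and toy echo path below are assumptions for illustration, not values from the patent.

```python
def lms_echo_cancel(fe, ne1, taps=4, mu=0.1):
    """Return NE1' = NE1 minus the adaptively estimated echo replica."""
    w = [0.0] * taps                      # adaptive filter coefficients
    out = []
    for n in range(len(ne1)):
        # Recent samples of the emission signal FE (zero-padded at start).
        x = [fe[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # pseudo-regression sound
        e = ne1[n] - y                              # NE1' (error signal)
        # LMS update of the filter coefficients from NE1' and FE.
        w = [wk + mu * e * xk for wk, xk in zip(w, x)]
        out.append(e)
    return out

# Toy echo path: 0.5 * FE delayed by one sample; no near-end speech.
fe = [1.0, -1.0, 1.0, -1.0] * 8
ne1 = [0.0] + [0.5 * v for v in fe[:-1]]
residual = lms_echo_cancel(fe, ne1)
print(abs(residual[-1]))  # residual echo shrinks as the filter adapts
```

With no near-end speech, NE1' ideally decays toward zero as the filter learns the echo path, which is what the final samples show.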
[0041]
Next, the noise canceller will be described with reference to FIG. 2 (B). FIG. 2B is a block diagram
showing the configuration when the audio signal processing unit 24A executes the noise
canceller program. As shown in FIG. 2B, the audio signal processing unit 24A includes an FFT
processing unit 245, a noise removing unit 246, an estimation unit 247, and an IFFT processing
unit 248.
[0042]
The FFT processing unit 245 converts the collected signal NE'T into a frequency spectrum NE'N. The noise removing unit 246 removes the noise component N'N included in the frequency spectrum NE'N. The noise component N'N is estimated by the estimation unit 247 based on the frequency spectrum NE'N.
[0043]
The estimation unit 247 performs a process of estimating the noise component N'N included in the frequency spectrum NE'N input from the FFT processing unit 245. The estimation unit 247 sequentially acquires and temporarily stores the frequency spectrum of the speech signal NE'N at each sample timing (hereinafter referred to as the speech spectrum S(NE'N)). Based on the plurality of acquired and stored speech spectra S(NE'N), the estimation unit 247 estimates the frequency spectrum of the noise component N'N at a certain sample timing (hereinafter referred to as the noise spectrum S(N'N)). Then, the estimation unit 247 outputs the estimated noise spectrum S(N'N) to the noise removal unit 246.
[0044]
For example, let the noise spectrum at a certain sampling timing T be S(N'N(T)), let the speech spectrum at the same sampling timing T be S(NE'N(T)), and let the noise spectrum at the immediately preceding sampling timing be S(N'N(T-1)). Further, α and β are forgetting constants, for example α = 0.9 and β = 0.1. The noise spectrum S(N'N(T)) can then be expressed by the following Equation 1.
[0045]
S(N'N(T)) = α·S(N'N(T-1)) + β·S(NE'N(T))   (1)

Thus, by estimating the noise spectrum S(N'N(T)) on the basis of the speech spectrum, noise components such as background noise can be estimated. Note that the estimation unit 247 performs the noise spectrum estimation processing only when the level of the sound collection signal collected by the microphone 25A is low (silence state).
[0046]
The noise removing unit 246 removes the noise component N'N from the frequency spectrum
NE'N input from the FFT processing unit 245, and outputs the frequency spectrum CO'N after
noise removal to the IFFT processing unit 248. Specifically, the noise removing unit 246
calculates a signal level ratio between the speech spectrum S (NE'N) and the noise spectrum S
(N'N) input from the estimating unit 247. The noise removal unit 246 linearly outputs the speech
spectrum S (NE'N) when the calculated signal level ratio is equal to or greater than the threshold.
Also, the noise removal unit 246 non-linearly outputs the speech spectrum S (NE'N) when the
calculated signal level ratio is less than the threshold.
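Equation (1) and the threshold-based removal just described can be illustrated per frequency bin as follows. The threshold value and the form of the non-linear attenuation (a simple multiplicative floor) are assumptions, since the text does not specify them; α and β follow the example values given above.

```python
ALPHA, BETA, THRESHOLD = 0.9, 0.1, 2.0

def update_noise(noise_prev, speech_spec):
    """Equation (1): S(N'N(T)) = alpha*S(N'N(T-1)) + beta*S(NE'N(T)).
    Run only while the pickup level is low (silence state)."""
    return [ALPHA * n + BETA * s for n, s in zip(noise_prev, speech_spec)]

def remove_noise(speech_spec, noise_spec, floor=0.1):
    """Pass bins whose speech/noise level ratio clears the threshold
    linearly; attenuate the rest non-linearly (here: scale to a floor)."""
    out = []
    for s, n in zip(speech_spec, noise_spec):
        ratio = s / n if n > 0 else float("inf")
        out.append(s if ratio >= THRESHOLD else s * floor)
    return out

noise = [1.0, 1.0]                         # initial estimate (silent frame)
noise = update_noise(noise, [0.5, 0.8])    # refine during silence
cleaned = remove_noise([5.0, 1.5], noise)  # then process a speech frame
print(noise, cleaned)
```

The first bin clears the ratio threshold and passes unchanged; the second falls below it and is suppressed.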
[0047]
The IFFT processing unit 248 outputs a voice signal CO'T generated by inverse-transforming the frequency spectrum CO'N, after removal of the noise component N'N, back into the time domain.
[0048]
The audio signal processing program can also realize an echo suppressor program, as shown in FIG. 3. The echo suppressor removes, at a stage subsequent to the echo canceller, echo components that the echo canceller could not eliminate. As shown in FIG. 3, the echo suppressor includes an FFT processing unit 121, an echo removing unit 122, an FFT processing unit 123, a progress degree calculating unit 124, an echo generating unit 125, an FFT processing unit 126, and an IFFT processing unit 127.
[0049]
The FFT processing unit 121 converts the collected signal NE1 'output from the echo canceller
into a frequency spectrum. The frequency spectrum is output to the echo removing unit 122 and
the progress calculating unit 124. The echo removing unit 122 removes residual echo
components (echo components that could not be eliminated by the echo canceler) included in the
input frequency spectrum. The residual echo component is generated by the echo generator 125.
[0050]
The echo generation unit 125 generates a residual echo component based on the frequency spectrum of the pseudo-regression sound signal input from the FFT processing unit 126. The residual echo component is obtained by adding the residual echo component estimated in the past to the product of the frequency spectrum of the input pseudo-regression sound signal and a predetermined coefficient. The predetermined coefficient is set by the progress degree calculation unit 124. The progress degree calculation unit 124 obtains the power ratio between the collected signal NE1 input from the FFT processing unit 123 (the collected signal before echo components are removed by the echo canceller in the preceding stage) and the collected signal NE1' input from the FFT processing unit 121 (the collected signal after echo components have been removed by that echo canceller). The progress degree calculating unit 124 outputs the predetermined coefficient based on this power ratio. For example, the predetermined coefficient is set to 1 when the adaptive filter 242 has not yet learned at all, and is reduced toward 0 as the learning of the adaptive filter 242 proceeds, thereby reducing the residual echo component. Then, the echo removing unit 122 removes the residual echo component calculated by the echo generating unit 125. The IFFT processing unit 127 inverse-transforms the frequency spectrum after echo component removal back into the time domain and outputs it.
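The progress-degree logic above can be sketched per frequency bin as follows. The patent only gives the endpoints of the coefficient (1 with no adaptation, toward 0 when fully adapted), so the mapping from power ratio to coefficient below is an assumption for illustration.

```python
def progress_coefficient(p_ne1, p_ne1_dash):
    """Map the power ratio NE1/NE1' to the predetermined coefficient:
    a large ratio (echo already mostly removed) gives a small coefficient;
    ratio <= 1 (no adaptation yet) gives 1. The 1/ratio shape is assumed."""
    ratio = p_ne1 / p_ne1_dash if p_ne1_dash > 0 else float("inf")
    return min(1.0, 1.0 / ratio)

def residual_echo(prev_residual, pseudo_spec, coeff):
    """Residual estimate = past residual + coeff * pseudo-regression
    spectrum, per frequency bin, as described in [0050]."""
    return [r + coeff * p for r, p in zip(prev_residual, pseudo_spec)]

coeff = progress_coefficient(p_ne1=4.0, p_ne1_dash=1.0)  # filter well adapted
est = residual_echo([0.0, 0.0], [0.8, 0.4], coeff)
print(coeff, est)
```

The estimated residual is then what the echo removing unit subtracts from the input spectrum.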
[0051]
The program for the echo canceller, the program for the noise canceller, and the program for the
echo suppressor can also be executed by the host device 1. In particular, it is also possible that
the host device executes the echo suppressor program while each microphone unit executes the
echo canceller program.
[0052]
In the signal processing system of the present embodiment, it is possible to change the audio
signal processing program to be executed according to the number of connected microphone
units. For example, when the number of microphone units to be connected is one, the gain of the
microphone unit is set high, and when the number of microphone units is plural, the gain of each
microphone unit is set relatively low.
[0053]
Alternatively, when each microphone unit includes a plurality of microphones, a mode of
executing a program for functioning as a microphone array is also possible. In this case, different
parameters (gain, delay amount, etc.) can be set for each microphone unit according to the order
(position) of connection to the host device 1.
[0054]
Although the volatile memory 23A, which is a RAM, is shown as an example of the temporary storage memory in this embodiment, any memory whose contents are erased when the power supply to the microphone unit 2A is cut off may be used: not only a volatile memory but also a non-volatile memory such as a flash memory. In the latter case, for example, when the power supply to the microphone unit 2A is cut off or the cable is replaced, the DSP 22A erases the contents of the flash memory. In that case, a capacitor or the like is provided to temporarily secure power so that the DSP 22A can erase the contents of the flash memory when the power supply to the microphone unit 2A is cut off.
[0055]
In the example of FIG. 4A, the host device 1 is connected to the microphone unit 2A via the cable
331. The microphone unit 2A and the microphone unit 2B are connected via a cable 341. The
microphone unit 2B and the microphone unit 2C are connected via a cable 351. The microphone
unit 2C and the microphone unit 2D are connected via a cable 361. The microphone unit 2D and
the microphone unit 2E are connected via a cable 371.
[0056]
FIG. 4B is an external perspective view of the host device 1, and FIG. 4C is an external perspective view of the microphone unit 2A. In FIG. 4C, the microphone unit 2A is shown and described as a representative, but all the microphone units have the same appearance and configuration. As shown in FIG. 4B, the host device 1 has a rectangular parallelepiped housing 101A; the speaker 102 is provided on one side (front) of the housing 101A, and the communication I/F 11 is provided on the other side (rear). The microphone unit 2A has a rectangular parallelepiped housing 201A; the microphone 25A is provided on the side surfaces of the housing 201A, and the first input/output terminal 33A and the second input/output terminal 34A are provided on the front of the housing 201A. FIG. 4C shows an example in which the microphone 25A has three sound collecting directions, namely the back surface, the right side surface, and the left side surface; however, the sound collection directions are not limited to this example. For example, three microphones 25A may be arranged in a plane at intervals of 120 degrees so as to pick up sound over the full circumference. In the microphone unit 2A, the cable 331 is connected to the first input/output terminal 33A, connecting the microphone unit 2A to the communication I/F 11 of the host apparatus 1. The cable 341 is connected to the second input/output terminal 34A, connecting the microphone unit 2A to the first input/output terminal 33B of the microphone unit 2B. The shapes of the housing 101A and the housing 201A are not limited to rectangular parallelepipeds. For example, the housing 101A of the host device 1 may be an elliptic cylinder, and the housing 201A of the microphone unit 2A may be cylindrical.
[0057]
The host device 1 recognizes the connection order of the microphone units, and based on the connection order and the cable lengths, it can transmit the echo canceller program to the microphone units within a certain distance of itself and the noise canceller program to the microphone units beyond that distance. As for the cable lengths, when dedicated cables are used, for example, information on their lengths is stored in the host device in advance. Alternatively, by assigning identification information to each cable, storing the identification information together with the cable length, and receiving the identification information from each cable in use, the host device can determine the length of each cable in use.
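A hypothetical sketch of this distance-based program selection, using the daisy-chain topology of FIG. 4A where distances accumulate along the cables; the threshold value and function names are assumptions, not from the patent.

```python
def choose_programs(cable_lengths, threshold=5.0):
    """cable_lengths[i] = length of the cable leading to unit i along the
    daisy chain; units within `threshold` of the host get the echo
    canceller program, farther units the noise canceller program."""
    programs = []
    distance = 0.0
    for length in cable_lengths:
        distance += length  # cumulative distance from the host
        programs.append("echo_canceller" if distance <= threshold
                        else "noise_canceller")
    return programs

print(choose_programs([2.0, 2.0, 3.0]))  # third unit is 7.0 units away
```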
[0058]
When the host device 1 transmits the echo canceller program, it is preferable to increase the number of filter coefficients (the number of taps) for an echo canceller close to the host device 1 so that it can cope with echoes having a long reverberation, and to reduce the number of filter coefficients (taps) for distant echo cancellers.
[0059]
Also, instead of the echo canceller program, a program that performs nonlinear processing (for example, the above echo suppressor program) may be transmitted to the microphone units within a certain distance of the host device, so that echo components that the echo canceller cannot remove are still eliminated when they occur. Further, although in the present embodiment each microphone unit is described as selecting either the noise canceller or the echo canceller, the programs of both the noise canceller and the echo canceller may be transmitted to the microphone units close to the host device 1, and only the noise canceller program to the microphone units far from it.
[0060]
Next, with reference to the flowchart of FIG. 5, the operation at the time of activation of the host
device 1 and each microphone unit will be described. When the microphone unit is connected
and the activation state of the microphone unit is detected (S11), the CPU 12 of the host device 1
reads a predetermined audio signal processing program from the non-volatile memory 14 (S12), and transmits it to each microphone unit via the communication I/F 11 (S13). At this time, the CPU 12 of the host device 1 divides the audio signal processing program into fixed-size unit bit data as described above, creates serial data in which the unit bit data are arranged in the order in which each microphone unit receives them, and transmits the serial data.
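The division and interleaving described here can be sketched as follows. This is a hypothetical Python model: the text fixes only the idea of fixed-size unit bit data arranged in per-unit reception order, while the chunk size, byte widths, and framing used below are assumptions.

```python
def make_serial_frames(programs, unit_size=2):
    """Interleave per-unit program chunks into serial frames.

    programs: one bytes blob per daisy-chained microphone unit (the blobs
    may differ, e.g. a noise canceller program vs. an echo canceller
    program). Each frame carries one fixed-size chunk for every unit;
    shorter programs are zero-padded. unit_size is illustrative only.
    """
    length = max(len(p) for p in programs)
    frames = []
    for off in range(0, length, unit_size):
        frames.append([p[off:off + unit_size].ljust(unit_size, b"\x00")
                       for p in programs])
    return frames


def extract_for_unit(frames, unit_index):
    """What one unit keeps from the stream (cf. S21/S22): it extracts only
    the chunks at its own position and reassembles them in order."""
    return b"".join(frame[unit_index] for frame in frames)
```

For example, with `programs = [b"NCNC", b"EC"]` and `unit_size=2`, unit 0 reassembles `b"NCNC"` while unit 1 reassembles `b"EC\x00\x00"`; in a real framing a length field would let the unit trim the padding.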
[0061]
Each microphone unit receives the audio signal processing program transmitted from the host
device 1 (S21), and temporarily stores it (S22). At this time, each microphone unit extracts and
receives unit bit data to be received from the serial data, and temporarily stores the extracted
unit bit data. The microphone unit combines the temporarily stored unit bit data, and performs
processing according to the combined audio signal processing program (S23). Then, each microphone unit transmits a digital audio signal related to the collected sound to the host device 1 (S24). At this time, the digital audio signal processed by the audio signal processing unit of each microphone unit is divided into fixed-size unit bit data and passed to the microphone unit connected on the upstream side, so that the microphone units cooperate to create transmission serial data and transmit it to the host device.
[0062]
In this example, the serial data is formed in units of a minimum number of bits, but the conversion is not limited to this; conversion into units of one word, for example, is also possible.
[0063]
Also, if a microphone unit is not connected and a channel carries no signal (its bit data is 0), the bit data of that channel is not deleted but is included in the serial data and transmitted.
For example, when the number of microphone units is four, the bit data of the signal SDO4 is always 0, but the signal SDO4 is transmitted as a bit-data-0 signal without being deleted.
Therefore, there is no need to consider which device corresponds to which channel, nor the connection relationship, and no address information (such as which data is transmitted to or received from which device) is required; even if microphone units are replaced, the signals of the appropriate channels are output from the respective microphone units.
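The fixed channel-slot idea of this paragraph can be illustrated with a small sketch (hypothetical names; the text specifies only that empty channels keep a zero slot rather than being deleted):

```python
def build_audio_frame(channel_samples, n_channels):
    """Build one serial audio frame with a fixed slot per channel.

    channel_samples: {channel_index: sample} for connected units only.
    Unconnected channels keep their slot with bit data 0, so slot k always
    means channel k and no addressing information is needed.
    """
    frame = [0] * n_channels
    for ch, sample in channel_samples.items():
        frame[ch] = sample
    return frame
```

With four channel slots and only three units connected, the fourth slot (corresponding to the signal SDO4 in the example above) is still transmitted as 0: `build_audio_frame({0: 10, 1: -3, 2: 7}, 4)` yields `[10, -3, 7, 0]`.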
[0064]
In this manner, if serial data is transmitted between devices, the number of signal lines between
devices does not increase even if the number of channels increases. The detection means for detecting the activation state of a microphone unit can detect the activation state by detecting the connection of a cable, or it may detect the microphone units connected when the power is turned on. In addition, when a new microphone unit is added during use, the connection of its cable can be detected to detect the activation state. In this case, the programs held by the connected microphone units can be deleted and the audio signal processing program transmitted again from the host device to all the microphone units.
[0065]
Next, FIG. 6 is a block diagram of a signal processing system according to an application
example. The signal processing system according to the application example includes slaves 10A
to 10E connected in series and a master (host device) 1 connected to the slave 10A. FIG. 7 is an
external perspective view of the child device 10A. FIG. 8 is a block diagram showing a
configuration of slave unit 10A. In this application example, the host device 1 is connected to the
slave 10A via a cable 331. The slave 10A and the slave 10B are connected via a cable 341. The
slave 10B and the slave 10C are connected via a cable 351. The slave 10C and the slave 10D are
connected via a cable 361. The slave 10D and the slave 10E are connected via a cable 371. The
slaves 10A to 10E all have the same hardware configuration. Therefore, in the following description, the slave 10A will be described as a representative.
[0066]
The slave unit 10A has the same configuration and function as the above-described microphone
unit 2A. However, the slave 10A includes a plurality of microphones MICa to MICm instead of the
microphone 25A. Further, in this example, as shown in FIG. 9, the audio signal processing unit
24A of the DSP 22A includes the configurations of the amplifiers 11a to 11m, the coefficient
determining unit 120, the combining unit 130, and the AGC 140.
[0067]
The number of microphones may be two or more, and can be set appropriately according to the sound collection specification of one slave unit. The number of amplifiers is then set equal to the number of microphones. For example, three microphones are sufficient to cover the circumferential direction with a small number of microphones.
[0068]
The microphones MICa to MICm have different sound collection directions. That is, each of the microphones MICa to MICm has a predetermined sound collection directivity and picks up sound with its specific direction as the main sound collection direction, generating the collected sound signals Sma to Smm. Specifically, for example, the microphone MICa picks up sound with a first specific direction as its main sound collection direction and generates the collected sound signal Sma. Similarly, the microphone MICb picks up sound with a second specific direction as its main sound collection direction and generates the collected sound signal Smb.
[0069]
The microphones MICa to MICm are installed in the slave unit 10A so that their sound collection
directivity differs. In other words, the microphones MICa to MICm are installed in the slave 10A
so that the main sound collecting directions are different.
[0070]
The collected sound signals Sma to Smm output from the microphones MICa to MICm are input
to the amplifiers 11a to 11m, respectively. For example, the collected sound signal Sma output
from the microphone MICa is input to the amplifier 11a, and the collected sound signal Smb
output from the microphone MICb is input to the amplifier 11b. The collected sound signal Smm
output from the microphone MICm is input to the amplifier 11m. Further, each of the collected
sound signals Sma to Smm is input to the coefficient determination unit 120. At this time, the
respective collected sound signals Sma to Smm are converted from analog signals to digital
signals and then input to the respective amplifiers 11a to 11m.
[0071]
The coefficient determination unit 120 detects the signal power of each of the collected sound signals Sma to Smm, compares the signal powers with each other, and detects the collected sound signal of maximum power. The coefficient determination unit 120 sets the gain coefficient for the collected sound signal detected as having the maximum power to "1", and sets the gain coefficients for the other collected sound signals to "0".
[0072]
The coefficient determination unit 120 outputs the determined gain coefficients to the amplifiers 11a to 11m. Specifically, the coefficient determination unit 120 outputs a gain coefficient of "1" to the amplifier to which the collected sound signal detected as having the maximum power is input, and outputs a gain coefficient of "0" to the other amplifiers.
[0073]
The coefficient determination unit 120 detects the signal level of the collected sound signal detected as having the maximum power, and generates level information IFo10A. The coefficient determination unit 120 outputs the level information IFo10A to the FPGA 51A.
[0074]
The amplifiers 11a to 11m are gain-adjustable amplifiers. The amplifiers 11a to 11m amplify the collected sound signals Sma to Smm with the gain coefficients given from the coefficient determination unit 120, and generate the amplified collected sound signals Smga to Smgm, respectively. Specifically, for example, the amplifier 11a amplifies the collected sound signal Sma with the gain coefficient from the coefficient determination unit 120 and outputs the amplified collected sound signal Smga. The amplifier 11b amplifies the collected sound signal Smb with the gain coefficient from the coefficient determination unit 120 and outputs the amplified collected sound signal Smgb. The amplifier 11m amplifies the collected sound signal Smm with the gain coefficient from the coefficient determination unit 120 and outputs the amplified collected sound signal Smgm.
[0075]
Here, as described above, since each gain coefficient is "1" or "0", an amplifier given the gain coefficient "1" maintains the signal level of the collected sound signal and outputs it as it is; in this case, the amplified collected sound signal is identical to the collected sound signal.
[0076]
On the other hand, an amplifier given the gain coefficient "0" suppresses the signal level of the collected sound signal to "0"; in this case, the amplified collected sound signal is a signal of signal level "0".
[0077]
The amplified collected sound signals Smga to Smgm are input to the synthesis unit 130. The synthesis unit 130 is an adder, and generates the slave unit audio signal Sm10A by adding the amplified collected sound signals Smga to Smgm.
[0078]
Here, among the amplified collected sound signals Smga to Smgm, only the one derived from the maximum-power signal among the source collected sound signals Sma to Smm retains the signal level of its source signal; the others have a signal level of "0".
[0079]
Therefore, the slave unit audio signal Sm10A obtained by adding the amplified collected sound signals Smga to Smgm is the collected sound signal itself that was detected as having the maximum power.
[0080]
By performing such processing, it is possible to detect the collected sound signal of maximum power and output it as the slave unit audio signal Sm10A.
This process is performed sequentially at predetermined time intervals.
Therefore, if the collected sound signal of maximum power changes, that is, if the sound source producing the maximum-power signal moves, the collected sound signal used as the slave unit audio signal Sm10A also changes accordingly. As a result, the sound source can be tracked based on the collected sound signals of the microphones, and the slave unit audio signal Sm10A that collects the sound from the sound source most efficiently can be output.
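The selection described in paragraphs [0071] to [0080] - power comparison, hard 1/0 gain coefficients, and synthesis by addition - can be sketched as follows. This is a minimal per-block model; the block length and sample format are assumptions, not values from the text.

```python
def select_max_power(signals):
    """First-stage tracking inside one slave unit.

    signals: list of per-microphone sample blocks (equal length).
    Returns (slave_audio_signal, level_of_selected_block, mic_index).
    """
    # Block power per microphone (sum of squared samples).
    powers = [sum(s * s for s in block) for block in signals]
    best = powers.index(max(powers))
    # Gain "1" for the maximum-power signal, "0" for the others.
    gains = [1 if i == best else 0 for i in range(len(signals))]
    # The synthesis unit is an adder, so the sum is the selected block itself.
    n = len(signals[0])
    mixed = [sum(g * block[t] for g, block in zip(gains, signals))
             for t in range(n)]
    return mixed, powers[best], best
```

Repeating this per block yields the tracking behavior described above: when another microphone's block power becomes largest, `best` changes and the output follows the new source.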
[0081]
The AGC 140 is a so-called auto-gain-control amplifier, which amplifies the slave unit audio signal Sm10A by a predetermined gain and outputs the amplified signal to the FPGA 51A. The gain of the AGC 140 is set appropriately according to the communication specification; specifically, for example, it is set so as to compensate for a transmission loss estimated in advance.
[0082]
By performing such gain control of the slave unit audio signal Sm10A, the slave unit audio signal
Sm10A can be accurately and reliably transmitted from the slave unit 10A to the host apparatus
1. Thus, the host device 1 can correctly and reliably receive and demodulate the slave unit audio
signal Sm10A.
[0083]
Then, the slave unit audio signal Sm10A after AGC and the level information IFo10A are input to the FPGA 51A.
[0084]
The FPGA 51A generates the slave unit data D10A from the post-AGC slave unit audio signal Sm10A and the level information IFo10A, and transmits the slave unit data D10A to the host device 1.
At this time, the level information IFo10A is synchronized with the slave unit audio signal Sm10A assigned to the same slave unit data.
[0085]
FIG. 10 is a diagram showing an example of the data format of the slave unit data transmitted from a slave unit to the host device. The slave unit data D10A is data in which a header DH identifying the transmitting slave unit, the slave unit audio signal Sm10A, and the level information IFo10A are each assigned a predetermined number of bits. For example, as shown in FIG. 10, the slave unit audio signal Sm10A is assigned predetermined bits after the header DH, and the level information IFo10A is assigned predetermined bits after the bit string of the slave unit audio signal Sm10A.
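A possible byte-level realization of this format is sketched below. The patent fixes only the field order (header, audio signal, level information); the header size and the 16-bit little-endian field widths are assumptions made for the example.

```python
import struct

HEADER_LEN = 2  # illustrative header size; the text fixes only the order


def pack_slave_data(header: bytes, audio_samples, level: int) -> bytes:
    """Pack slave unit data as in FIG. 10: header DH, then the audio
    samples, then the level information."""
    body = struct.pack(f"<{len(audio_samples)}h", *audio_samples)
    return header + body + struct.pack("<H", level)


def unpack_slave_data(data: bytes, n_samples: int):
    """Inverse of pack_slave_data, as the host device would apply it."""
    header = data[:HEADER_LEN]
    audio = list(struct.unpack_from(f"<{n_samples}h", data, HEADER_LEN))
    (level,) = struct.unpack_from("<H", data, HEADER_LEN + 2 * n_samples)
    return header, audio, level
```

Because the level field sits directly behind the fixed-length audio block, the host can read it without parsing the samples, which matches the comparison-only use of the level information in the second stage.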
[0086]
Similar to the slave 10A described above, each of the other slaves 10B to 10E generates slave unit data D10B to D10E from the slave unit audio signals Sm10B to Sm10E and the level information IFo10B to IFo10E, and outputs it to the host device 1. Note that the slave units cooperate to create serial data by dividing the slave unit data D10B to D10E into fixed-size unit bit data and passing them to the slave unit connected on the upstream side.
[0087]
FIG. 11 is a block diagram showing various configurations realized by the CPU 12 of the host
device 1 executing a predetermined audio signal processing program.
[0088]
By executing the program, the CPU 12 of the host device 1 realizes a plurality of amplifiers 21a to 21e, a coefficient determination unit 220, and a synthesis unit 230.
[0089]
The slave unit data D10A to D10E from the slaves 10A to 10E are input to the communication I/F 11.
The communication I/F 11 demodulates the slave unit data D10A to D10E, and acquires the slave unit audio signals Sm10A to Sm10E and the level information IFo10A to IFo10E.
[0090]
The communication I/F 11 outputs the slave unit audio signals Sm10A to Sm10E to the amplifiers 21a to 21e, respectively.
Specifically, the communication I/F 11 outputs the slave unit audio signal Sm10A to the amplifier 21a and the slave unit audio signal Sm10B to the amplifier 21b. Similarly, the communication I/F 11 outputs the slave unit audio signal Sm10E to the amplifier 21e.
[0091]
The communication I/F 11 outputs the level information IFo10A to IFo10E to the coefficient determination unit 220.
[0092]
The coefficient determination unit 220 compares the level information IFo10A to the level
information IFo10E to detect the maximum level information.
[0093]
The coefficient determination unit 220 sets the gain coefficient for the slave unit audio signal corresponding to the level information detected as the maximum level to "1", and sets the gain coefficients for the other slave unit audio signals to "0".
[0094]
The coefficient determination unit 220 outputs the determined gain coefficients to the amplifiers 21a to 21e.
Specifically, the coefficient determination unit 220 outputs a gain coefficient of "1" to the amplifier to which the slave unit audio signal corresponding to the level information detected as the maximum level is input, and outputs a gain coefficient of "0" to the other amplifiers.
[0095]
The amplifiers 21a to 21e are gain-adjustable amplifiers. The amplifiers 21a to 21e amplify the slave unit audio signals Sm10A to Sm10E by the gain coefficients given from the coefficient determination unit 220, and generate the amplified audio signals Smg10A to Smg10E, respectively.
[0096]
Specifically, for example, the amplifier 21a amplifies the slave unit audio signal Sm10A with the gain coefficient from the coefficient determination unit 220 and outputs the amplified audio signal Smg10A. The amplifier 21b amplifies the slave unit audio signal Sm10B with the gain coefficient from the coefficient determination unit 220 and outputs the amplified audio signal Smg10B. The amplifier 21e amplifies the slave unit audio signal Sm10E with the gain coefficient from the coefficient determination unit 220 and outputs the amplified audio signal Smg10E.
[0097]
Here, as described above, since each gain coefficient is "1" or "0", an amplifier given the gain coefficient "1" maintains the signal level of the slave unit audio signal and outputs it as it is; in this case, the amplified audio signal is identical to the slave unit audio signal.
[0098]
On the other hand, an amplifier given the gain coefficient "0" suppresses the signal level of the slave unit audio signal to "0"; in this case, the amplified audio signal is a signal of signal level "0".
[0099]
The amplified audio signals Smg10A to Smg10E are input to the synthesis unit 230. The synthesis unit 230 is an adder, and generates a tracking audio signal by adding the amplified audio signals Smg10A to Smg10E.
[0100]
Here, among the amplified audio signals Smg10A to Smg10E, only the one derived from the maximum-level signal among the source slave unit audio signals Sm10A to Sm10E retains the signal level of its source signal; the others have a signal level of "0".
[0101]
Therefore, the tracking audio signal obtained by adding the amplified audio signals Smg10A to Smg10E is the slave unit audio signal itself that was detected as having the maximum level.
[0102]
By performing such processing, it is possible to detect the maximum-level slave unit audio signal and output it as the tracking audio signal.
This process is performed sequentially at predetermined time intervals.
Therefore, if the maximum-level slave unit audio signal changes, that is, if the sound source producing the maximum-level signal moves, the slave unit audio signal used as the tracking audio signal also changes accordingly. As a result, it is possible to track the sound source based on the slave unit audio signals of the slave units, and to output a tracking audio signal in which the sound from the source is collected most efficiently.
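The host-side second stage (paragraphs [0092] to [0101]) differs from the slave-side stage only in that the comparison uses the reported level information rather than recomputed signal power. A minimal sketch, with hypothetical data shapes:

```python
def select_tracking_signal(slave_signals, levels):
    """Second-stage selection in the host device.

    slave_signals: one sample block per slave unit (equal length).
    levels: the level information reported alongside each block.
    The loudest slave gets gain "1", all others gain "0", so the adder
    (synthesis unit 230) outputs that slave's block unchanged.
    """
    best = max(range(len(levels)), key=lambda i: levels[i])
    gains = [1 if i == best else 0 for i in range(len(levels))]
    n = len(slave_signals[0])
    tracking = [sum(g * block[t] for g, block in zip(gains, slave_signals))
                for t in range(n)]
    return tracking, best
```

Because only the scalar levels are compared, the host never needs the per-microphone signals of the slaves, which is what keeps the communication load at one audio channel per slave.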
[0103]
By performing the configuration and processing described above, first-stage sound source tracking is performed by each of the slaves 10A to 10E using the collected sound signals of its microphones, and the host device 1 performs second-stage sound source tracking using the slave unit audio signals of the slaves 10A to 10E. Thereby, sound source tracking can be realized by the plurality of microphones MICa to MICm of the plurality of slaves 10A to 10E. Therefore, by appropriately setting the number and arrangement pattern of the slaves 10A to 10E, sound source tracking can be performed reliably regardless of the size of the sound collection range or the position of the sound source such as a speaker. Accordingly, the sound from the sound source can be picked up with high quality regardless of the position of the sound source.
[0104]
Furthermore, the number of audio signals transmitted by each of the slaves 10A to 10E is one, regardless of the number of microphones attached to the slave. Therefore, the amount of communication data can be reduced compared with transmitting the collected sound signals of all the microphones to the host device. For example, when the number of microphones attached to each slave unit is m, the amount of audio data transmitted from each slave unit to the host device is 1/m of that in the case where all collected sound signals are transmitted to the host device.
[0105]
As described above, by using the configuration and processing of the present embodiment, the communication load can be reduced while maintaining the same sound source tracking accuracy as when all collected sound signals are transmitted to the host device. This enables sound source tracking closer to real time.
[0106]
FIG. 12 is a flowchart of the sound source tracking process of a slave unit according to the embodiment of the present invention. Hereinafter, the processing flow of one slave unit will be described; the other slave units execute the same flow. Since the details of each process are as described above, detailed description is omitted below.
[0107]
The slave unit picks up sound with each microphone and generates a pick-up signal (S101). The
slave detects the level of the sound pickup signal of each microphone (S102). The slave detects
the sound pickup signal of the maximum power, and generates level information of the sound
pickup signal of the maximum power (S103).
[0108]
The slave unit determines a gain coefficient for each collected signal (S104). Specifically, the
slave sets the gain of the sound pickup signal of maximum power to “1”, and sets the gains of
the other sound pickup signals to “0”.
[0109]
The slave unit amplifies each of the collected sound signals with the determined gain coefficient
(S105). The slave unit synthesizes the picked up sound signals after amplification to generate a
slave unit audio signal (S106).
[0110]
The slave unit performs AGC processing on the slave unit audio signal (S107), generates slave unit data including the post-AGC slave unit audio signal and the level information, and outputs the slave unit data to the host device (S108).
[0111]
FIG. 13 is a flowchart of the sound source tracking process of the host device according to the embodiment of the present invention.
Since the details of each process are as described above, detailed description is omitted below.
[0112]
The host device 1 receives the slave unit data from each slave unit and acquires the slave unit audio signals and level information (S201). The host device 1 compares the level information from the slave units and detects the slave unit audio signal of the maximum level (S202).
[0113]
The host device 1 determines a gain coefficient for each slave unit audio signal (S203). Specifically, the host device 1 sets the gain of the maximum-level slave unit audio signal to "1", and sets the gains of the other slave unit audio signals to "0".
[0114]
The host device 1 amplifies each slave unit audio signal with the determined gain coefficient (S204). The host device 1 synthesizes the amplified slave unit audio signals and generates the tracking audio signal (S205).
[0115]
In the above description, at the timing when the collected sound signal of maximum power switches, the gain coefficient of the former maximum-power signal is switched from "1" to "0", and the gain coefficient of the new maximum-power signal is switched from "0" to "1". However, these gain coefficients may be changed in finer steps. For example, the gain coefficient of the former maximum-power signal is gradually decreased from "1" to "0", while the gain coefficient of the new maximum-power signal is gradually increased from "0" to "1". That is, a cross-fade process may be performed from the former maximum-power collected sound signal to the new maximum-power collected sound signal. At this time, the sum of these gain coefficients is kept at "1".
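The stepwise cross-fade suggested here, with the two gain coefficients always summing to 1, can be sketched as a short gain ramp (the step count is an arbitrary choice, not a value from the text):

```python
def crossfade_gains(n_steps):
    """Gain pairs (old_gain, new_gain) for switching from the former
    maximum-power signal to the new one: the old gain ramps 1 -> 0 while
    the new gain ramps 0 -> 1, and each pair sums to exactly 1."""
    return [(1.0 - k / n_steps, k / n_steps) for k in range(n_steps + 1)]
```

Applying each pair to one processing block in turn replaces the hard 1/0 switch with a smooth hand-over between the two signals.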
[0116]
Such cross-fade processing may be applied not only to the synthesis of the collected sound signals performed by each slave unit, but also to the synthesis of the slave unit audio signals performed by the host device 1.
[0117]
Moreover, although the above description shows an example in which the AGC is provided in each of the slaves 10A to 10E, it may instead be provided in the host device 1.
In this case, AGC may be performed by the communication I/F 11 of the host device 1.
[0118]
The host device 1 can also emit a test sound wave from the speaker 102 to each slave as shown
in the flowchart of FIG. 14 so that each slave can determine the level of the test sound.
[0119]
First, when the host device 1 detects the activation state of the slaves (S51), it reads the level determination program from the non-volatile memory 14 (S52) and transmits it to each slave via the communication I/F 11 (S53).
At this time, the CPU 12 of the host device 1 divides the level determination program into fixed-size unit bit data, creates serial data in which the unit bit data are arranged in the order in which each slave receives them, and transmits the serial data to the slaves.
[0120]
Each slave receives the level determination program transmitted from the host device 1 (S71) and temporarily stores it in the volatile memory 23A (S72). At this time, each slave extracts and receives the unit bit data addressed to it from the serial data, and temporarily stores the extracted unit bit data. Then, each slave combines the temporarily stored unit bit data and executes the combined level determination program (S73). Thus, the audio signal processing unit 24 realizes the illustrated configuration. However, since the level determination program only performs level determination and there is no need to generate and transmit the slave unit audio signal Sm10A, the configurations of the amplifiers 11a to 11m, the coefficient determination unit 120, the synthesis unit 130, and the AGC 140 are not required.
[0121]
Then, the host device 1 emits a test sound wave after a predetermined time has elapsed since the level determination program was transmitted (S54). The coefficient determination unit 220 of each slave unit functions as an audio level determination unit, and determines the level of the test sound wave input to the plurality of microphones MICa to MICm (S74). The coefficient determination unit 220 transmits level information (level data) as the determination result to the host device 1 (S75). The level data may be transmitted for each of the plurality of microphones MICa to MICm, or only the level data indicating the maximum level may be transmitted for each slave unit. Note that the level data is divided into fixed-size unit bit data and passed to the slave unit connected on the upstream side, so that the slave units cooperate to create level determination serial data.
[0122]
Next, the host device 1 receives the level data from each slave unit (S55). The host device 1 selects the audio signal processing program to be transmitted to each slave unit based on the received level data, and reads these programs from the non-volatile memory 14 (S56). For example, for a slave with a high test sound wave level, it determines that the echo level is high and selects the echo canceller program. For a slave with a low test sound wave level, it determines that the echo level is low and selects the noise canceller program. Then, the host device 1 transmits the read audio signal processing program to each slave unit (S57). The subsequent processing is the same as in the flowchart described above.
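The level-based program choice of S56, together with the level-dependent tap count described in the next paragraph, might look like the following sketch. The threshold, level span, and tap range are invented for illustration; the text specifies only the direction of each decision.

```python
def choose_dsp_config(level_db, echo_threshold_db=20.0,
                      min_taps=128, max_taps=1024, span_db=40.0):
    """Map a slave's measured test-tone level to the program the host
    sends it. All numeric values here are illustrative assumptions.

    Below the threshold the echo level is judged low and the noise
    canceller program is chosen; above it, the echo canceller program is
    chosen with a tap count that grows with the measured level."""
    if level_db < echo_threshold_db:
        return "noise_canceller", 0
    frac = min(1.0, (level_db - echo_threshold_db) / span_db)
    taps = int(min_taps + frac * (max_taps - min_taps))
    return "echo_canceller", taps
```

A slave far from the loudspeaker thus receives the cheaper noise canceller, while a nearby slave receives an echo canceller whose filter length matches the stronger, longer echo it observes.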
[0123]
The host device 1 may also determine, based on the received level data, a change parameter for changing the number of filter coefficients of the echo canceller program in each slave unit. For example, the number of taps is increased for slaves with a high test sound wave level, and decreased for slaves with a low test sound wave level. In this case, the host device 1 divides this change parameter into fixed-size unit bit data, creates change-parameter serial data in which the unit bit data are arranged in the order in which each slave receives them, and transmits it to each slave.
[0124]
Note that an echo canceller may be provided for each of the plurality of microphones MICa to MICm in each slave unit. In this case, the coefficient determination unit 220 of each slave transmits level data for each of the plurality of microphones MICa to MICm.
[0125]
Further, the above-mentioned level information IFo10A to level information IFo10E may include
identification information of the microphones in each slave unit.
[0126]
In this case, as shown in FIG. 15, when the slave detects the collected sound signal of maximum power and generates the level information of that signal (S801), the identification information of the microphone at which the maximum power was detected is included in the level information and transmitted (S802).
[0127]
Then, when the host device 1 receives the level information from each slave unit (S901) and selects the level information of the maximum level, it identifies the microphone based on the identification information included in the selected level information, and thereby identifies the echo canceller being used (S902).
The host device 1 sends a transmission request for each signal related to the echo canceller to the slave using the identified echo canceller (S903).
[0128]
Then, when the slave receives the transmission request (S803), it transmits to the host device 1 the pseudo-regression sound signal of the designated echo canceller, the collected sound signal NE1 (the collected sound signal before the echo component is removed by the preceding-stage echo canceller), and the collected sound signal NE1' (the collected sound signal after the echo component has been removed by the preceding-stage echo canceller) (S804).
[0129]
The host device 1 receives each of these signals (S904), and inputs each of the received signals to
the echo suppressor (S905).
As a result, since the coefficient corresponding to the learning progress of the identified echo
canceller is set in the echo generation unit 125 of the echo suppressor, an appropriate residual
echo component can be generated.
[0130]
Note that, as shown in FIG. 16, the progress degree calculation unit 124 can be provided on the audio signal processing unit 24A side.
In this case, in S903 the host device 1 requests the slave unit using the identified echo canceller to transmit a coefficient that changes in accordance with the degree of learning progress. In S804, the slave reads out the coefficient calculated by the progress degree calculation unit 124 and transmits it to the host device 1. The echo generation unit 125 generates the residual echo component according to the received coefficient and the pseudo-regression sound signal.
[0131]
Next, FIG. 17 is a diagram showing a modification of the arrangement of the host device and the slaves. FIG. 17(A) shows the same connection mode as described above: the cable 361 connecting the slave 10C and the slave 10D is bent so that the slave 10D and the slave 10E approach the host device 1.
[0132]
On the other hand, in the example of FIG. 17(B), the slave 10C is connected to the host device 1 via the cable 331. In this case, the slave 10C branches the data transmitted from the host device 1 and transmits it to the slave 10B and the slave 10D. In addition, the slave 10C collects the data transmitted from the slave 10B, the data transmitted from the slave 10D, and the data of its own device, and transmits them together to the host device 1. In this case as well, the host device is connected to one of the plurality of slaves connected in series.
[0133]
Although the present invention has been described in detail and with reference to specific embodiments, it will be apparent to those skilled in the art that various changes and modifications can be made without departing from the spirit, scope, or intent of the present invention.
[0134]
DESCRIPTION OF SYMBOLS 1 ... Host device 2A, 2B, 2C, 2D, 2E ... Microphone unit 11 ... Communication I/F 12 ... CPU 13 ... RAM 14 ... Non-volatile memory 21A ... Communication I/F 22A ... DSP 23A ... Volatile memory 24A ... Audio signal processing unit 25A ... Microphone