DESCRIPTION JP2008278406

код для вставкиСкачать
Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
An object is to perform sound source separation by the BSS method based on the ICA method while obtaining high sound source separation performance over a wide frequency band, with a short signal delay time and a relatively small calculation load. Mixed acoustic signals in the time domain are discrete-Fourier-transformed into frequency-domain signals x1(f), x2a(f) and x2b(f). From these signals, the high-pass filter processing unit 31 and the low-pass filter processing unit 32 extract signal components of different frequency bands for each of a plurality of preset signal combinations chosen according to the distances d1 and d2 between the microphones to which the signals are input. BSS sound source separation based on the ICA method is performed in the frequency domain on the extracted signals x1(f) and x2(f) for each combination, and the resulting first separated signals of different frequency bands are integrated to generate second separated signals y1(f) and y2(f), which are then inverse-Fourier-transformed into time-domain signals. [Selected figure] Figure 1
Sound source separation device, sound source separation program and sound source separation method
[0001]
The present invention relates to a sound source separation apparatus, a sound source separation program and a sound source separation method for separating and generating separated signals corresponding to the output sound of each sound source, based on mixed sound signals input through microphones in an acoustic environment in which a plurality of sound sources exist.
[0002]
10-04-2019
1
In a device equipped with a function for inputting the sound generated by a sound source such as a speaker, for example a teleconference system, a video conferencing system, a ticket vending machine or a car navigation system, it is required to separate, with high accuracy, the signal of the sound emitted from a certain sound source (the source signal) from the other source signals.
When a plurality of sound sources and a plurality of microphones exist in a given acoustic space, each microphone receives a mixed acoustic signal in which the sound signals individually emitted by the sound sources (hereinafter, sound source signals) are superimposed. The method of sound source separation that generates separated signals identifying each sound source signal based only on the plurality of mixed acoustic signals input in this manner is called the blind source separation method (hereinafter, the BSS method). One class of BSS sound source separation is based on independent component analysis (hereinafter, the ICA method). The BSS method based on the ICA method exploits the fact that the sound source signals contained in the mixed acoustic signals input through the plurality of microphones are statistically independent: a separation matrix (demixing matrix) is optimized by learning calculation, and the sound source signals are identified (separated) by applying filter processing with the optimized separation matrix to the input mixed acoustic signals. In BSS sound source separation based on the ICA method, the separated signals are output through the same number of output channels as the number of inputs of the mixed sound signal (that is, the number of microphones). Patent Document 1 shows an apparatus that performs sub-band analysis of each of a plurality of mixed acoustic signals in a plurality of frequency bands, performs BSS sound source separation based on the ICA method for each frequency band on the signals after sub-band analysis, and integrates the separated signals generated thereby. [Patent Document 1] Unexamined Japanese Patent No. 2003-271168
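The independence-based demixing idea behind the ICA-BSS method can be illustrated with a minimal numerical sketch. This is illustrative only and not the patent's algorithm: the mixing matrix A is a hypothetical example, and a real ICA learns the separation matrix W from the statistics of the mixed signals alone rather than from knowledge of A.

```python
import numpy as np

# Two statistically independent sources s1, s2 are mixed by an unknown
# matrix A; BSS seeks a separation (demixing) matrix W so that y = W @ x
# recovers the sources from the microphone signals x alone.
rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, size=(2, 1000))   # independent source signals
A = np.array([[1.0, 0.6],                # hypothetical mixing matrix
              [0.4, 1.0]])
x = A @ s                                # mixed signals at the microphones

# With A known the ideal demixing matrix is its inverse; ICA instead
# *learns* W from x, exploiting the independence of the sources.
W = np.linalg.inv(A)
y = W @ x
print(np.allclose(y, s))                 # sources recovered
```

The point of the sketch is that separation reduces to a matrix multiplication once W is available; the learning calculation that produces W is what the ICA method contributes.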
[0003]
However, in BSS sound source separation based on the ICA method, the microphone spacing suitable for sound source separation differs according to the frequency band of the sound source signal, so sufficient separation performance may not be obtained for some frequency bands. That is, if the distance between the microphones is too large (too wide) for a specific frequency band of the sound source signal, dead areas occur in the optimization of the separation matrix and the sound source separation performance is degraded. Conversely, if the distance between the microphones is too small (too narrow) for a specific frequency band, a sufficient sound pressure difference between the microphones is not obtained and the separation performance is again degraded. In the technique disclosed in Patent Document 1, a discrete Fourier transform is applied to the time-domain mixed acoustic signals for sub-band analysis, the signals after sub-band analysis are converted back to the time domain (inverse Fourier transform), a discrete Fourier transform is applied again to the signals after sound source separation in order to perform sub-band integration, and a final inverse Fourier transform to the time domain is performed for acoustic output. A signal delay occurs every time a discrete Fourier transform or an inverse Fourier transform is performed, so the delay time of the processing as a whole becomes too large for practical use. Furthermore, the computational load of performing BSS sound source separation based on the ICA method on time-domain mixed acoustic signals is large, and when the processing is executed by a practical processor with relatively low processing capacity, such as one that can be mounted in a telephone, real-time sound source separation is difficult. The present invention has been made in view of the above circumstances, and its object is to provide a sound source separation device, a sound source separation program and a sound source separation method that perform BSS sound source separation based on the ICA method with a short signal delay time and a relatively small calculation load, and that obtain high sound source separation performance over a wide range of frequency bands.
[0004]
In order to achieve the above object, a sound source separation device according to the present invention separates and generates separated signals corresponding to the output sound of each sound source, based on three or more mixed acoustic signals that are input through three or more microphones and on which a plurality of sound source signals are superimposed. The device includes the constituent elements shown in the following (1-1) to (1-5). (1-1) Discrete Fourier transform means for converting a plurality of mixed acoustic signals in the time domain into a plurality of mixed acoustic signals in the frequency domain. (1-2) Band extraction means for extracting, from the plurality of mixed acoustic signals in the frequency domain, signal components of different frequency bands for each of a plurality of preset signal combinations chosen according to the distance between the microphones to which the signals are input. (1-3) FDICA sound source separation means for separating and generating, for each of the plurality of signal combinations, first separated signals corresponding to the individual output sounds by performing blind sound source separation based on the independent component analysis method in the frequency domain on the signals extracted by the band extraction means. (1-4) Signal integration means for generating second separated signals corresponding to the individual output sounds of the sound sources by integrating the first separated signals of different frequency bands separated and generated by the FDICA sound source separation means. (1-5) Inverse Fourier transform means for converting the second separated signals generated by the signal integration means into time-domain signals. More specifically, regarding the frequency bands assigned to the signal combinations according to the microphone distance, a relatively low frequency band is preset for a combination of mixed acoustic signals whose input microphones are relatively far apart (wide spacing), and a relatively high frequency band is preset for a combination whose input microphones are relatively close together (narrow spacing).
[0005]
In the sound source separation device according to the present invention, the time-domain mixed acoustic signals are converted into frequency-domain mixed acoustic signals (discrete Fourier transform), and blind sound source separation based on independent component analysis in the frequency domain (hereinafter, the FDICA method) is executed on the frequency-domain signals. FDICA sound source separation treats the problem as an instantaneous mixing problem in a plurality of narrow bands, so the learning calculation of the separation filter (separation matrix) can be performed with a relatively low processing load (calculation load) compared with sound source separation in the time domain. Furthermore, in the sound source separation device according to the present invention, FDICA sound source separation is performed individually for each frequency band so that the distance between the microphones serving as the input sources of the mixed acoustic signals and the frequency band of the signals extracted from them (that is, the signals subjected to the separation processing) are in a relation suitable for sound source separation, and the separated signals obtained thereby (the first separated signals) are integrated to generate the final separated signals. As a result, high sound source separation performance can be obtained over a wide range of frequency bands. Moreover, each time a mixed acoustic signal of one frame, the unit of the discrete Fourier transform processing, is input, only one discrete Fourier transform and one inverse Fourier transform are required to generate one frame of separated signal (the second separated signal), so the signal delay time can be minimized.
[0006]
The present invention can also be understood as a sound source separation program for causing a computer to execute the processing performed by the sound source separation device described above, or as a sound source separation method in which a computer executes that processing. That is, the sound source separation program according to the present invention is a program for causing a computer to separate and generate separated signals corresponding to the individual output sounds of the sound sources, based on three or more mixed sound signals input through three or more microphones and on which a plurality of sound source signals are superimposed, by executing each of the steps shown in the following (2-1) to (2-5). (2-1) A discrete Fourier transform step of converting a plurality of mixed acoustic signals in the time domain into a plurality of mixed acoustic signals in the frequency domain and storing the converted signals in storage means. (2-2) A band extraction step of extracting, from the plurality of mixed acoustic signals in the frequency domain, signal components of different frequency bands for each of a plurality of preset signal combinations chosen according to the distance between the microphones to which the signals are input, and storing the extracted signals in the storage means. (2-3) An FDICA sound source separation step of separating and generating, for each of the plurality of signal combinations, first separated signals corresponding to the individual output sounds by performing blind sound source separation based on the independent component analysis method in the frequency domain on the signals extracted in the band extraction step, and storing the first separated signals in the storage means. (2-4) A signal integration step of generating second separated signals corresponding to the individual output sounds of the sound sources by integrating the first separated signals of different frequency bands separated and generated in the FDICA sound source separation step, and storing the second separated signals in the storage means. (2-5) An inverse Fourier transform step of converting the second separated signals generated in the signal integration step into time-domain signals. Similarly, the sound source separation method according to the present invention is a method in which a computer performs each of the steps shown in (2-1) to (2-5) above to separate and generate separated signals corresponding to the individual output sounds of the sound sources, based on three or more mixed acoustic signals input through three or more microphones and on which a plurality of sound source signals are superimposed. The sound source separation program and the sound source separation method according to the present invention provide the same effects as the sound source separation device described above.
[0007]
According to the present invention, sound source separation by the BSS method based on the ICA method can be performed with a short signal delay time and a relatively small calculation load, while obtaining high sound source separation performance over a wide range of frequency bands.
[0008]
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings for an understanding of the present invention.
The following embodiment is an example embodying the present invention and does not limit the technical scope of the present invention. FIG. 1 is a block diagram showing the schematic configuration of the sound source separation device X according to the embodiment of the present invention.
[0009]
Hereinafter, the sound source separation device X according to the embodiment of the present invention will be described with reference to the block diagram shown in FIG. 1. The sound source separation device X includes three or more microphones (three in the example shown in FIG. 1: microphones 1, 2a and 2b). Based on the three mixed acoustic signals x1(t), x2a(t) and x2b(t) sequentially input through these microphones, the device sequentially separates and generates separated signals y1T(t) and y2T(t) identifying the respective sound source signals, and outputs the separated signals y1T(t) and y2T(t) in real time to a speaker. The microphones 1, 2a and 2b are disposed at predetermined intervals d1 and d2 in an acoustic space in which a plurality of sound sources (including noise sources) exist, and the mixed acoustic signals x1(t), x2a(t) and x2b(t) are acoustic signals on which a plurality of sound source signals are superimposed. Among the microphones, a combination of microphones 1 and 2a with a relatively narrow spacing (= d1) and a combination of microphones 1 and 2b with a relatively wide spacing (= d2) are set. The intervals d1 and d2 are set according to the frequency band of the sound source signals to be separated. For example, when the frequency band of the sound source signals to be separated extends from a band of about several kHz up to about 20 kHz, the microphone interval d2, corresponding to the band of 8 kHz or less (the voice band), is set to half or less (for example, about 20 mm) of the wavelength (42.5 mm) of an acoustic signal at the maximum frequency of that band (8 kHz), and the microphone interval d1, corresponding to the band up to 20 kHz, is set to half or less (for example, about 8 mm) of the wavelength (17.0 mm) of an acoustic signal at the maximum frequency of that band (20 kHz). FIG. 1 shows an example in which the number of input mixed acoustic signals (that is, the number of microphones) is three, and two combinations of two mixed acoustic signals each (that is, two microphone combinations) are provided, with one mixed acoustic signal x1(t) shared between them; however, the present invention is not limited thereto. For example, in addition to the microphone configuration shown in FIG. 1, a microphone 3a (not shown) disposed at the distance d1 from the microphone 2a and a microphone 3b (not shown) disposed at the distance d2 from the microphone 2b may be provided, giving two combinations of three mixed acoustic signals each.
In this case, the three mixed acoustic signals input through the microphones 1, 2a and 3a constitute one combination, and the three mixed acoustic signals input through the microphones 1, 2b and 3b constitute the other. A configuration in which the same mixed acoustic signal (corresponding to x1(t) in FIG. 1) is not shared among the combinations is also conceivable. In any case, for each combination (set), (the number n of mixed acoustic signals) ≥ (the number m of sound source signals to be separated).
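The half-wavelength spacing rule above can be checked numerically. The calculation below is a worked example, not part of the patent text; the speed of sound c = 340 m/s is the value implied by the quoted wavelength of 42.5 mm at 8 kHz.

```python
# Each microphone pair's spacing is at most half the wavelength of the
# highest frequency in its assigned band.
C = 340.0  # speed of sound in m/s (implied by 42.5 mm at 8 kHz)

def max_spacing_mm(f_max_hz):
    """Largest admissible spacing (mm) for a band topping out at f_max_hz."""
    wavelength_mm = C / f_max_hz * 1000.0
    return wavelength_mm / 2.0

print(C / 8_000 * 1000.0)    # wavelength at 8 kHz: 42.5 mm
print(C / 20_000 * 1000.0)   # wavelength at 20 kHz: 17.0 mm
print(max_spacing_mm(8_000))   # d2 bound: 21.25 mm (~20 mm in the text)
print(max_spacing_mm(20_000))  # d1 bound: 8.5 mm  (~8 mm in the text)
```

Both example spacings from the text (about 20 mm and about 8 mm) fall under the respective half-wavelength bounds, which is what prevents spatial aliasing within each band.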
[0010]
As shown in FIG. 1, in addition to the three or more microphones (1, 2a, 2b), the sound source separation device X includes A/D converters 11 to 13, a digital signal processing device Y and D/A converters (71, 72). The digital signal processing device Y includes FFT processing units 21 to 23, a high-pass filter processing unit 31, a low-pass filter processing unit 32, FDICA sound source separation processing units (4H, 4L), signal integration processing units (51, 52) and IFFT processing units (61, 62). The digital signal processing device Y is a digital processing circuit (element) or computer, such as a DSP (Digital Signal Processor) or an ASIC, comprising a processor for computation, a ROM storing the programs executed by the processor, and other peripheral devices such as a RAM. Each component of the digital signal processing device Y (the FFT processing units 21 to 23, high-pass filter processing unit 31, low-pass filter processing unit 32, FDICA sound source separation processing units (4H, 4L), signal integration processing units (51, 52) and IFFT processing units (61, 62)) is embodied by the processor (computer) of the digital signal processing device Y executing a program corresponding to the respective processing. The present invention may also be embodied as a program (sound source separation program) for causing a computer to execute the processing (described later) performed by each component of the digital signal processing device Y. In FIG. 1, the arrow lines connecting the components represent the transmission paths and transmission directions of the signals, and the processing of each component is performed on a predetermined unit of signal (one frame of signal) according to the procedure (flow) represented by the arrow lines. That is, FIG. 1 is also a flowchart showing the procedure of the processing executed by the sound source separation device X.
[0011]
The A/D converters 11 to 13 sample the (three or more) mixed acoustic signals (analog signals) obtained by the microphones 1, 2a and 2b at a predetermined sampling cycle and convert them into the digital mixed acoustic signals x1(t), x2a(t) and x2b(t). The mixed acoustic signals x1(t), x2a(t) and x2b(t) after digital conversion are temporarily stored in a memory (not shown). The FFT processing units 21 to 23 convert the time-domain mixed acoustic signals x1(t), x2a(t) and x2b(t), sequentially input through the microphones 1, 2a and 2b, into the plurality of (three or more) frequency-domain mixed acoustic signals x1(f), x2a(f) and x2b(f) by executing a short-time discrete Fourier transform on each frame, a frame being the unit into which the signals are divided at predetermined cycles (an example of the discrete Fourier transform means). The mixed acoustic signals x1(f), x2a(f) and x2b(f) after conversion are temporarily stored in a memory (not shown).
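The frame-wise short-time discrete Fourier transform performed by the FFT processing units can be sketched as follows. The frame length and sampling rate are assumptions for illustration; the patent does not specify them, and a practical implementation would typically also apply a window and frame overlap.

```python
import numpy as np

FS, FRAME = 48_000, 1024  # assumed sampling rate and frame length

def stft_frames(x):
    """Split x(t) into non-overlapping frames and DFT each one."""
    n_frames = len(x) // FRAME
    frames = x[: n_frames * FRAME].reshape(n_frames, FRAME)
    return np.fft.rfft(frames, axis=1)  # one spectrum per frame

t = np.arange(FS) / FS
x1 = np.sin(2 * np.pi * 1000 * t)       # 1 kHz test tone, one second
X1 = stft_frames(x1)
print(X1.shape)                          # 46 frames of 513 frequency bins
```

Each row of the result is one frame's spectrum x1(f); the subsequent band extraction and FDICA processing operate frame by frame on these spectra.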
[0012]
The high-pass filter processing unit 31 performs filter processing that extracts signal components of a predetermined first frequency band (for example, the frequency band higher than 8 kHz) from the frequency-domain mixed acoustic signals x1(f) and x2a(f), and temporarily stores the processed signals x1H(f) and x2H(f) in a memory (not shown). The low-pass filter processing unit 32 performs low-pass filter processing that extracts signal components of a predetermined second frequency band (for example, the frequency band of 8 kHz or less, that is, a band lower than the first frequency band) from the mixed acoustic signals x1(f) and x2b(f), and temporarily stores the processed signals x1L(f) and x2L(f) in a memory (not shown). In this way, the high-pass filter processing unit 31 and the low-pass filter processing unit 32 extract, from the plurality of frequency-domain mixed acoustic signals x1(f), x2a(f) and x2b(f), signal components of different frequency bands for each of the plurality of signal combinations (the combination of x1(f) and x2a(f), and the combination of x1(f) and x2b(f)) set according to the intervals d1 and d2 of the microphones to which they are input (an example of the band extraction means).
[0013]
The FDICA sound source separation processing units 4H and 4L execute, for each of the plurality of signal combinations (the combination of the extracted signals x1H(f) and x2H(f), and the combination of the extracted signals x1L(f) and x2L(f)), blind source separation (the BSS method) based on independent component analysis (the ICA method) in the frequency domain (hereinafter, FDICA sound source separation processing) on the signals extracted by the high-pass filter processing unit 31 and the low-pass filter processing unit 32 respectively, thereby separating and generating separated signals (hereinafter, first separated signals) corresponding to the output sound of each sound source (an example of the FDICA sound source separation means). The first separated signals are temporarily stored in a memory (not shown). One FDICA sound source separation processing unit 4H separates and generates the high-band first separated signals y1H(f) and y2H(f) based on the high-band extracted signals x1H(f) and x2H(f). The other FDICA sound source separation processing unit 4L separates and generates the low-band first separated signals y1L(f) and y2L(f) based on the low-band extracted signals x1L(f) and x2L(f). The FDICA sound source separation processing units 4H and 4L execute the same processing independently; only the contents (frequency bands) of their input signals differ. Each of the FDICA sound source separation processing units 4H and 4L includes a separation calculation processing unit 41 and a learning calculation unit 42. The separation calculation processing unit 41 performs a matrix operation using the separation matrix W(f) on the filtered extracted signals (x1H(f) and x2H(f), or x1L(f) and x2L(f)) to sequentially generate the first separated signals (y1H(f) and y2H(f), or y1L(f) and y2L(f)) corresponding to the signal components of each sound source in the partial frequency band (the extraction range of the high-pass filter processing unit 31 or the low-pass filter processing unit 32), and temporarily stores the first separated signals in a memory (not shown). The learning calculation unit 42 performs the learning calculation and update of the separation matrix W(f) used by the separation calculation processing unit 41, based on the extracted signals (x1H(f) and x2H(f), or x1L(f) and x2L(f)) of a predetermined time length (one frame) and the first separated signals (y1H(f) and y2H(f), or y1L(f) and y2L(f)) separated and generated by the separation calculation processing unit 41.
Since the extracted signals (x1H(f), x2H(f), x1L(f), x2L(f)) are digital signals sampled at a predetermined cycle, defining their time length is synonymous with defining their number of samples.
[0014]
The contents of the processing executed by the FDICA sound source separation processing units 4H and 4L are described below. Each of the FDICA sound source separation processing units 4H and 4L performs sound source separation based on the FDICA method (Frequency-Domain ICA), a kind of ICA-BSS method. In the following, f represents a frequency bin and m represents an analysis frame number. In FDICA sound source separation, the separation calculation processing unit 41 separates and generates the separated signal (performs source separation) by matrix operation processing based on the separation matrix W(f) on the input signal X(f, m), which is the signal after the discrete Fourier transform. The separated signal Y(f, m) can be expressed as the following equation (1). The update equation of the separation matrix W(f) can be expressed, for example, as the following equation (2). In the FDICA sound source separation processing unit 4H, the input signal X(f, m) in equation (1) represents the signal components of the frequency bin f of the m-th frame in the extracted signals x1H(f) and x2H(f), and in the FDICA sound source separation processing unit 4L it represents the signal components of the frequency bin f of the m-th frame in the extracted signals x1L(f) and x2L(f). Likewise, in the FDICA sound source separation processing unit 4H, the separated signal Y(f, m) in equation (1) represents the signal components of the frequency bin f of the m-th frame in the first separated signals y1H(f) and y2H(f), and in the FDICA sound source separation processing unit 4L it represents the signal components of the frequency bin f of the m-th frame in the first separated signals y1L(f) and y2L(f). FDICA sound source separation is treated as an instantaneous mixing problem in each narrow band, so the learning calculation of the separation matrix W(f) can be performed stably with a relatively low calculation load. Furthermore, in the sound source separation device X, FDICA sound source separation is executed independently by each of the processing units 4H and 4L for each of the plurality of divided frequency bands (for example, the band of 8 kHz or less and the remaining band), so the number of frequency bins f handled by each of the FDICA sound source separation processing units 4H and 4L decreases, and the learning calculation of the separation matrix W(f) can be performed with a lower calculation load than when FDICA sound source separation is performed collectively over all frequency bands.
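The per-bin separation and learning update can be sketched as follows. The separation step implements the relation the text gives for equation (1), Y(f, m) = W(f) X(f, m). Equations (1) and (2) themselves are not reproduced in this translation, so the update rule below is an assumption: it is the widely used natural-gradient FDICA update, not necessarily the patent's exact equation (2).

```python
import numpy as np

def separate(W, X):
    """Per-bin matrix operation Y(f, m) = W(f) X(f, m)  (equation (1)).
    W: (F, 2, 2) separation matrices; X: (F, M, 2) input spectra."""
    return np.einsum('fij,fmj->fmi', W, X)

def update_W(W, Y, eta=0.1):
    """One learning step per frequency bin. ASSUMED form (the patent's
    equation (2) is not reproduced here): the natural-gradient rule
    W <- W + eta * (I - <phi(Y) Y^H>) W, phi(y) = tanh(|y|) e^{j arg y}."""
    F, M, K = Y.shape
    phi = np.tanh(np.abs(Y)) * np.exp(1j * np.angle(Y))
    # <phi(Y) Y^H>, averaged over the M frames of the current block
    R = np.einsum('fmi,fmj->fij', phi, Y.conj()) / M
    return W + eta * (np.eye(K) - R) @ W
```

Because each frequency bin carries its own 2×2 problem, the cost of the learning calculation scales with the number of bins a unit handles, which is the reason splitting the band between units 4H and 4L lowers the load.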
[0015]
The signal integration processing units 51 and 52 are mixers that generate the separated signals corresponding to the individual output sounds of the sound sources (hereinafter, the second separated signals y1T(f) and y2T(f)) by integrating the first separated signals of different frequency bands separated and generated by the FDICA sound source separation processing units 4H and 4L (that is, by integrating y1H(f) with y1L(f), and y2H(f) with y2L(f)) (an example of the signal integration means). The generated second separated signals y1T(f) and y2T(f) are temporarily stored in a memory (not shown). The IFFT processing units 61 and 62 execute inverse Fourier transform processing that converts the second separated signals y1T(f) and y2T(f), the frequency-domain signals generated by the signal integration processing units 51 and 52, into the time-domain separated signals y1T(t) and y2T(t) (an example of the inverse Fourier transform means). The separated signals y1T(t) and y2T(t) after conversion are temporarily stored in a memory (not shown). The separated signals y1T(t) and y2T(t) output from the IFFT processing units 61 and 62 are converted into analog signals by the D/A converters 71 and 72, respectively, and then transmitted to the speaker.
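Because the first separated signals occupy disjoint frequency bands, the work of the signal integration units (51, 52) and the IFFT units (61, 62) reduces to a per-bin sum followed by one inverse FFT per frame. The sketch below is illustrative; the frame length is an assumption.

```python
import numpy as np

N = 1024  # assumed frame length

def integrate_and_invert(y_high, y_low):
    yT = y_high + y_low         # second separated signal y_T(f)
    return np.fft.irfft(yT, N)  # time-domain separated signal y_T(t)

# disjoint-band example: low bins carry y_low, the remaining bins y_high
bins = N // 2 + 1
y_low = np.zeros(bins, dtype=complex);  y_low[:200] = 1.0
y_high = np.zeros(bins, dtype=complex); y_high[200:] = 1.0
yt = integrate_and_invert(y_high, y_low)
print(yt.shape)                 # one full time-domain frame
```

Only this single inverse transform (plus the single forward transform at input) touches each frame, which is the source of the short delay time claimed for the device.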
[0016]
In the sound source separation device X described above, FDICA sound source separation is performed individually for each frequency band so that the intervals d1 and d2 between the microphones serving as the input sources of the mixed acoustic signals and the frequency bands of the signal combinations extracted from them (the combination of x1H(f) and x2H(f), and the combination of x1L(f) and x2L(f)) are suitable for sound source separation, and the first separated signals (the combination of y1H(f) and y1L(f), and the combination of y2H(f) and y2L(f)) are integrated to generate the final separated signals. As a result, high sound source separation performance can be obtained over a wide range of frequency bands. Moreover, in the sound source separation device X, each time the mixed acoustic signals x1(t), x2a(t) and x2b(t) corresponding to one frame, the unit of the discrete Fourier transform processing, are input, one frame of the second separated signals y1T(t) and y2T(t) can be generated by performing the discrete Fourier transform processing and the inverse Fourier transform processing only once each, so the signal delay time can be minimized.
[0017]
The present invention is applicable to a sound source separation device.
[0018]
FIG. 1 is a block diagram showing a schematic configuration of a sound source separation device
X according to an embodiment of the present invention.
Explanation of sign
[0019]
X: sound source separation device according to an embodiment of the present invention
1, 2a, 2b: microphones
11, 12, 13: A/D converters
21, 22, 23: FFT processing units
31: high-pass filter processing unit
32: low-pass filter processing unit
4H, 4L: FDICA sound source separation processing units
41: separation calculation processing unit
42: learning calculation unit
51, 52: signal integration processing units
61, 62: IFFT processing units
71, 72: D/A converters