Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2015502524
A method of determining a focusing output signal of a sensor array comprising a plurality of
sensors, wherein each sensor is operable to output a sensor output signal corresponding to a
quantity to be measured, and the focusing output signal is indicative of a calculated quantity at a
focal point. The method comprises calculating the focusing output signal by receiving a
respective measured sensor output signal from each sensor and performing a focusing
calculation on the measured sensor signals. The method further comprises determining a subset
of mesh points from a set of predetermined mesh points, each mesh point having at least one
pre-computed filter parameter associated with it; and calculating the focusing output signal
comprises performing interpolation on the subset of mesh points so as to obtain an interpolated
focusing output signal. [Selected figure] Figure 1
Computationally efficient broadband filter and sum array focusing
[0001]
The present invention relates to array signal processing, for example broadband beamforming of
acoustic signals.
[0002]
Wideband beamforming is a widely used technique for the directional reception of signals, such
as acoustic or radio signals.
11-04-2019
1
Beamforming techniques have been demonstrated, for example, in the context of sound source
localization, sonar, radar, wireless communications, and the like. In general, in such a system, the
signals from the sensors are amplified and delayed so that the resulting measurement system
has a particularly high sensitivity to waves arriving from a particular direction. In such
measurement systems, the sensitivity of the array of sensors can thus be steered in a particular
direction, a process known as beamforming. When all channels are recorded simultaneously,
such a system requires only a very short time to make one measurement.
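By way of illustration only, and not as part of the patent disclosure, the delay-and-sum principle described above can be sketched in a few lines of Python; all function and variable names here are illustrative assumptions, and integer-sample delays are used for simplicity:

```python
import numpy as np

def delay_and_sum(signals, delays, fs):
    """Delay-and-sum beamforming: shift each sensor signal by an
    integer-sample delay and average over the sensors.
    `signals` is (n_sensors, n_samples), `delays` is in seconds,
    `fs` is the sample rate in Hz."""
    n_sensors, n_samples = signals.shape
    out = np.zeros(n_samples)
    for sig, d in zip(signals, delays):
        shift = int(round(d * fs))
        out += np.roll(sig, shift)  # circular shift as a simple delay model
    return out / n_sensors

# Sanity check: two identical sensors with zero delay reproduce the input.
fs = 1000.0
t = np.arange(64) / fs
x = np.sin(2 * np.pi * 50 * t)
y = delay_and_sum(np.stack([x, x]), [0.0, 0.0], fs)
```

A real implementation would use fractional delays and a physical steering model; the sketch only shows how delaying and averaging aligns waves arriving from one direction.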
[0003]
For example, many noise reduction problems involve locating one or several noise sources in
complex environments such as in a car or in an aircraft. In recent years, it has become possible to
perform measurements using many channels simultaneously. Today, there are measurement
systems comprising a large number of microphones (e.g. 64 or 128) implemented in a grid. In
other measurement systems, the microphones are usually implemented in a less regular
arrangement.
[0004]
Due to the cost of the microphone (or other sensor) and data / signal acquisition hardware, it is
generally desirable to use as few sensors as possible in the beamforming system. On the other
hand, both the requirements for the frequency range and the requirements for the spatial
accuracy of the system tend to increase the number of sensors required in the array.
[0005]
In so-called filter-and-sum (FAS) beamforming, the output time signal at a given position is
calculated by applying individual filters to the sensor signal and then adding the filtered signal.
Non-Patent Document 1 describes an approach based on FIR filters and, for example, a method
of optimizing the FIR filters to obtain a minimum sidelobe level of the beamformer.
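The FAS structure of this paragraph can be sketched as follows; this is an illustrative sketch only (names are assumptions, not from the patent), using one FIR filter per sensor followed by a sum:

```python
import numpy as np

def filter_and_sum(signals, fir_coeffs):
    """Filter-and-sum (FAS) beamforming: apply an individual FIR filter
    to each sensor signal, then sum the filtered signals.
    `signals` is (n_sensors, n_samples); `fir_coeffs` is (n_sensors, n_taps)."""
    n_samples = signals.shape[1]
    out = np.zeros(n_samples)
    for sig, h in zip(signals, fir_coeffs):
        # Convolve each sensor signal with its own filter, truncated
        # to the original length, and accumulate.
        out += np.convolve(sig, h)[:n_samples]
    return out

# With single-tap unit filters, FAS reduces to a plain sum of the signals.
sigs = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
h = np.ones((2, 1))
y = filter_and_sum(sigs, h)
```

The optimization of the coefficients `fir_coeffs` (e.g. for minimum sidelobe level, as in Non-Patent Document 1) is the computationally expensive step that the method of this document seeks to amortize.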
[0006]
Optimized wideband filter-and-sum (FAS) beamforming can significantly reduce sidelobe levels
compared to delay-and-sum (DAS) beamforming and, for spherical arrays, compared to spherical
harmonic beamforming (SHB); see, for example, Non-Patent Document 1 and Non-Patent
Document 2.
[0007]
However, the calculation of optimal filter parameters is a very demanding task, as explained
herein. In particular, in applications where beamforming operations are to be performed for a
large number of focal points and for large sensor arrays, the computational resources for
performing prior-art optimized broadband FAS beamforming can be quite large. Moreover, the
method outlined in Non-Patent Document 1 requires an optimization based on the covariance
matrix of the signals from each specific measurement, which introduces characteristics similar to
those of the minimum-variance (or Capon) beamformer. The output is then not a linear function
of the sources, in the sense that the output resulting from a measurement of two sources is not
the sum of the separate outputs from measurements of each of the two sources.
[0008]
Shefeng Yan et al., "Convex optimization based time-domain broadband beamforming with
sidelobe control", J. Acoust. Soc. Am. 121(1), January 2007.
Shefeng Yan et al., "Optimal Modal Beamforming for Spherical Microphone Arrays", IEEE
Transactions on Audio, Speech and Language Processing, Vol. 19, No. 2, February 2011, pp. 361-371.
[0009]
Disclosed herein is a method of determining a focusing output signal of a sensor array
comprising a plurality of sensors, wherein each sensor is operable to output a sensor output
signal corresponding to a quantity to be measured, and the focusing output signal is indicative of
a calculated quantity at a focal point. Embodiments of the method calculate the focusing output
signal by receiving a respective measured sensor output signal from each of the sensors and
performing a focusing calculation on the measured sensor signals.
[0010]
Accordingly, the focusing output signal may be regarded as indicative of a combination, e.g. a
sum or other linear combination, of filtered sensor signals, one calculated for each of the
sensors. Each filtered sensor signal is indicative of the measured sensor signal from the
respective sensor, filtered by a filter associated with the corresponding sensor from which the
measured sensor signal is received, the filter depending on at least one filter parameter
determined by the focal point.
[0011]
The present inventors have realised that the benefits of optimized filter-and-sum beamforming,
and of other array focusing techniques such as acoustic holography, can be obtained with
significantly fewer computational resources by defining a mesh of focal points that spans the
region in space in which focusing calculations are to be performed with a given array.
The points thus defined are also referred to as mesh points. For each of these predefined mesh
points, a set of optimized filter parameters is calculated, and the pre-computed filter parameters
can be stored in a filter bank, e.g. a file, a set of files or a database, and associated with the
sensor array. For example, such a filter bank can be provided on a suitable storage medium
together with the array. When measurements have been taken and a focusing calculation is to
be performed at an arbitrary focal point r, a subset of the predefined mesh points is identified,
for example as the nearest neighbouring mesh points surrounding r, and interpolation is
performed over the selected subset of mesh points. In some embodiments, focusing calculations
are performed for each of these mesh points, and the focusing output at the focal point is
approximated by interpolation between the focusing output values from the surrounding mesh
points. In an alternative embodiment, the pre-computed filter parameters of the identified subset
of mesh points are interpolated, and the focusing calculation is performed using the resulting
interpolated filter parameters. Hence, the order of some or all of the interpolation and focusing
calculation steps can be interchanged.
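The first of the two variants just described, interpolating between focusing outputs already computed at nearby mesh points, can be sketched as follows. This is an illustrative sketch only; the names and the inverse-distance weighting scheme are assumptions, since the document allows any suitable interpolation:

```python
import numpy as np

def focus_output_by_interpolation(focus, mesh_points, mesh_outputs, k=2):
    """Approximate the focusing output at an arbitrary focal point by
    inverse-distance interpolation between the outputs already computed
    at the k nearest predefined mesh points.
    `mesh_points` is (n_mesh, dim); `mesh_outputs` is (n_mesh, n_samples)."""
    d = np.linalg.norm(mesh_points - focus, axis=1)
    idx = np.argsort(d)[:k]
    if d[idx[0]] == 0.0:  # focus coincides with a mesh point: no interpolation
        return mesh_outputs[idx[0]]
    w = 1.0 / d[idx]      # inverse-distance weights over the subset
    w /= w.sum()
    return w @ mesh_outputs[idx]

# Midway between two mesh points, the result is the average of their outputs.
mesh = np.array([[0.0], [1.0]])
outs = np.array([[0.0, 0.0], [2.0, 4.0]])
y = focus_output_by_interpolation(np.array([0.5]), mesh, outs, k=2)
```

The expensive per-mesh-point filter optimization is done once offline; at measurement time only this cheap interpolation remains.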
[0012]
Accordingly, embodiments of the method disclosed herein further comprise determining a
subset of mesh points from a set of predetermined mesh points, each mesh point having at least
one pre-computed filter parameter associated with it; and calculating the focusing output signal
for an arbitrary focal point comprises performing interpolation on the subset of mesh points so
as to obtain an interpolated focusing output signal. In particular, calculating the focusing output
signal as an interpolated focusing output signal may comprise applying one or more auxiliary
filters to each of the measured sensor output signals. The one or more auxiliary filters are each
associated with the corresponding sensor from which the respective measured sensor signal is
received, and each depends on at least one of the pre-computed filter parameters. Hence, each
mesh point may have at least one pre-computed filter parameter per sensor associated with it;
i.e. each mesh point may have a plurality of pre-computed filter parameters associated with it,
each pre-computed filter parameter being associated with one of the sensors.
[0013]
Embodiments of the methods described herein may be applied to various types of array focusing
applications, such as beamforming and acoustic holography, and to similar applications in which
contributions from various source locations are calculated at desired focal points. In the context
of beamforming, the focusing output signal may also be referred to as a beamformed output
signal. The signal may represent the contribution of a sound source at the focal point to the
measured quantity (e.g. the sound pressure) at the location of the sensor array, e.g. at the centre
of the sensor array. In such an embodiment, the focusing calculation is thus a beamforming
calculation.
[0014]
In the context of acoustic holography, the focusing output signal is indicative of (an estimate of)
an acoustic quantity (e.g. the sound pressure, the particle velocity or a different acoustic
quantity) at the focal point. In this case, the output signals for a set of focal points can be used to
reconstruct (or "back-propagate") the acoustic field in a desired region, e.g. on the surface or in
the volume of an object.
[0015]
Thus, in some embodiments, the focusing calculation is a beamforming calculation, and the
focusing calculation comprises determining an estimate of the contribution from the focal point
to an acoustic quantity (e.g. the sound pressure) at the array position of the sensor array. In an
alternative embodiment, the focusing calculation comprises determining an estimate of a
parameter of the acoustic field at the focal point; in such an embodiment, the focusing output
signal is indicative of a reconstructed acoustic quantity. In general, a focusing calculation
involves calculating a physical quantity at the focal point based on the measured sensor signals
from the sensors of the sensor array. In some embodiments, the focusing calculation is a filter-and-sum
calculation, which involves applying filters to obtain filtered signals and applying a
summation (or other linear combination) to obtain the resulting focusing signal. Thus, in some
embodiments, the term "array focusing" is intended to refer to a process for estimating a
quantity at the focal point by filter-and-sum operations.
[0016]
Thus, in some embodiments, the term "focusing calculation" refers to a filter-and-sum
calculation, and the focusing output signal refers to the result of the filter-and-sum calculation.
The filter-and-sum calculation involves applying a respective filter to the sensor signal from each
sensor and summing the filtered signals.
[0017]
In some embodiments, calculating the focusing output signal comprises: calculating, for each
sensor, a plurality of auxiliary filtered sensor signals by applying respective auxiliary filters to
the measured sensor output signal, wherein each auxiliary filter is associated with the
corresponding sensor from which the measured sensor signal is received and depends on at
least one of the pre-computed filter parameters; combining, for each mesh point, the auxiliary
filtered sensor signals calculated for the respective sensors so as to obtain an auxiliary focusing
output signal; and interpolating the auxiliary focusing output signals calculated for the
respective mesh points so as to obtain an interpolated focusing output signal.
[0018]
Alternatively, calculating the focusing output signal may comprise: calculating, for each sensor, a
plurality of auxiliary filtered sensor signals by applying respective auxiliary filters to the
measured sensor output signal, wherein each auxiliary filter is associated with the
corresponding sensor from which the measured sensor signal is received, depends on at least
one of the pre-computed filter parameters, and is associated with one of the subset of mesh
points; interpolating, for each sensor, the auxiliary filtered sensor signals calculated for the
respective mesh points so as to obtain an interpolated filtered sensor signal; and combining the
interpolated filtered sensor signals calculated for the respective sensors so as to obtain the
interpolated focusing output signal.
[0019]
In yet another embodiment, calculating the focusing output signal comprises: calculating, for
each sensor, at least one interpolated filter parameter from the pre-computed filter parameters
associated with the respective mesh points of the determined subset of mesh points; calculating,
for each sensor, an interpolated filtered sensor signal by applying the respective interpolated
filter to the measured sensor output signal; and combining the interpolated filtered sensor
signals calculated for the respective sensors so as to obtain the interpolated focusing output
signal.
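This variant, interpolating the pre-computed FIR coefficients first and then running a single filter-and-sum pass, can be sketched as follows. The sketch is illustrative only (all names are assumptions), and it also demonstrates why the orderings are equivalent for linear filters:

```python
import numpy as np

def interp_filter_then_sum(signals, mesh_coeffs, weights):
    """Interpolate the pre-computed FIR coefficients of nearby mesh points
    first (one coefficient set per sensor), then run one filter-and-sum pass.
    `signals` is (n_sensors, n_samples);
    `mesh_coeffs` is (n_mesh, n_sensors, n_taps); `weights` sums to 1."""
    h = np.tensordot(weights, mesh_coeffs, axes=1)   # (n_sensors, n_taps)
    n_samples = signals.shape[1]
    out = np.zeros(n_samples)
    for sig, hi in zip(signals, h):
        out += np.convolve(sig, hi)[:n_samples]
    return out

# Because FIR filtering is linear in the coefficients, interpolating the
# coefficients and filtering once gives the same result as filtering per
# mesh point and interpolating the outputs.
sigs = np.array([[1.0, 2.0], [3.0, 4.0]])
mesh_h = np.array([[[1.0], [0.0]],    # mesh point A: passes sensor 0 only
                   [[0.0], [1.0]]])   # mesh point B: passes sensor 1 only
y = interp_filter_then_sum(sigs, mesh_h, np.array([0.5, 0.5]))
```

Only one convolution per sensor is needed per focal point, rather than one per sensor per mesh point, which is the computational saving this embodiment targets.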
[0020]
It will be appreciated that the focusing and / or interpolation operations may be performed in
different domains, eg in the time domain and / or in the frequency domain.
[0021]
In the far-field region of a given array, the beamformer has no distance (depth) resolution, so
only the direction of a source can be identified.
At intermediate distances, a certain degree of distance resolution can be achieved, so that
focusing at specific points in 3D is desired, for example in order to obtain a 3D source
distribution map.
Embodiments of the method described herein use pre-computed filters for a mesh of focal points.
If the beamformer is intended to be used only in the far-field region, only a directional (2D) mesh
is required and the focusing is directional.
Otherwise, a 3D mesh may be required and the focusing is performed at specific points.
Similar considerations apply to acoustic holography. Nevertheless, for the purposes of this
description, the terms "directional focusing" and "point focusing" will be used interchangeably,
and a focusing direction may also be represented by a position vector in the relevant direction.
Thus, as used herein, the terms "mesh point" and "focal point" are intended to include both a
position in 3D space and a direction in 3D space, e.g. as represented by a position on a unit
sphere. Accordingly, each mesh point and each focal point may be defined by three spatial
coordinates relative to a suitable coordinate system, or by two spatial coordinates defining a
position on a two-dimensional surface (e.g. a sphere) in 3D space.
[0022]
The sensor output signal may be an acoustic signal, i.e. indicative of measured sound, e.g. noise,
audible sound, inaudible sound such as ultrasound or infrasound, or a combination thereof. In
other embodiments, the sensor output signal may be indicative of any other acoustic or
electromagnetic signal, such as a sonar signal, a radar signal, a wireless communication signal,
and the like.
[0023]
The quantity to be measured may be an acoustic quantity, such as the sound pressure, a sound
pressure gradient, the particle velocity, etc. Accordingly, each sensor may be any suitable
acoustic measurement device, for example a microphone, a hydrophone, a pressure gradient
transducer, a particle velocity transducer, a receiver/transducer of a wireless communication
system, a radar and/or a sonar, or a combination thereof. The sensor array comprises a plurality
of sensors, for example a set of sensors arranged in a regular or irregular grid, e.g. a two- or
three-dimensional grid.
[0024]
The sensors of the array may be located at a respective set of measurement locations. The set of
measurement locations may be arranged in one or more measurement planes, for example in a
single plane or in two or more parallel planes. Within each plane, the measurement locations
may be arranged in a regular grid, in an irregular pattern, or in any other suitable manner.
Furthermore, the methods described herein can also be applied to non-planar measurement
arrangements, i.e. arrangements where the measurement locations are not located in one or
more parallel planes but are, for example, located on a curved surface. For example, the methods
described herein can be applied to spherical array arrangements.
[0025]
The term "interpolation" is intended to refer to any process suitable for at least approximately
calculating new data points within the vicinity of a discrete set of known data points. In the
context of the present description, the term "interpolation" is intended to refer to calculating the
focusing output signal associated with a focal point from known filter parameters, where each
filter parameter is associated with one of the mesh points of a discrete set (or subset) of mesh
points, the focal point lying within a predetermined proximity of the mesh points. Accordingly,
selecting a subset of mesh points may comprise selecting mesh points in the vicinity of the focal
point, e.g. selecting all mesh points within a predetermined neighbourhood of the focal point, or
selecting a predetermined number of mesh points closest to the focal point. However, it will be
appreciated that other mechanisms may be chosen for selecting the subset of mesh points over
which the interpolation is performed.
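The two subset-selection mechanisms named above, a fixed neighbourhood radius and a fixed number of nearest mesh points, can be sketched as follows; the sketch is purely illustrative and the names are assumptions:

```python
import numpy as np

def select_mesh_subset(focus, mesh_points, k=None, radius=None):
    """Select the mesh-point subset used for interpolation: either the k
    mesh points nearest to the focal point, or all mesh points lying
    within `radius` of it. `mesh_points` is (n_mesh, dim)."""
    d = np.linalg.norm(mesh_points - focus, axis=1)
    if radius is not None:
        return np.flatnonzero(d <= radius)   # all points in the neighbourhood
    return np.argsort(d)[:k]                 # the k nearest points

mesh = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
near2 = select_mesh_subset(np.array([0.1, 0.1]), mesh, k=2)
inside = select_mesh_subset(np.array([0.0, 0.0]), mesh, radius=1.0)
```

Either rule returns the indices over which the interpolation weights are then formed; for a mesh spaced more finely than the local beam width, a small k already suffices.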
[0026]
The interpolation may be piecewise constant interpolation, linear interpolation, polynomial
interpolation, spline interpolation, or any other suitable interpolation method, and may be
performed in angular and/or linear coordinates (1D, 2D or 3D). It will further be appreciated
that, in some embodiments, the order of the interpolation and combination operations can be
interchanged. In particular, this applies to embodiments in which the interpolation is a linear
combination of the values calculated at the respective mesh points, and in which the
combination of filtered sensor signals yielding the focusing output signal is a linear combination
of the filtered sensor signals, for example a sum of the calculated filtered sensor signals.
[0027]
In some embodiments, an auxiliary filtered signal per mesh point (and per sensor) to be used for
the interpolation can be calculated by applying the respective pre-computed filter parameters
associated with each mesh point. Thereafter, for each sensor, the filtered sensor signal needed
for the selected focal point can be calculated as an interpolation of the calculated auxiliary
filtered sensor signals. Alternatively, for each mesh point, an auxiliary focusing output signal can
be calculated from the auxiliary filtered sensor signals of the respective sensors for that mesh
point, and the auxiliary focusing output signals for the respective mesh points can then be
interpolated to obtain the interpolated focusing output signal. Hence, when the interpolation is
calculated as a linear combination of the calculated values associated with the respective mesh
points, the summation (or other combination) of the auxiliary filtered sensor signals over the
sensors and the interpolation over the different mesh points can be performed in either order.
For example, in an embodiment where focusing (e.g. beamforming) is to be performed for
multiple focal points so as to obtain a beamforming map or a reconstructed acoustic field map,
the process can be accelerated further as follows. First, focusing calculations are performed over
at least the subset of mesh points needed for the interpolation at all focal points involved in
obtaining the map, by applying the respective pre-computed filter parameters associated with
each mesh point of said subset, so as to calculate auxiliary filtered signals each associated with
one of the mesh points and one of the sensors. For each mesh point, at least one auxiliary
filtered signal is thus calculated for each sensor. The desired filtered sensor signal for each focal
point can then be calculated by interpolation of the calculated auxiliary filtered signals.
Alternatively, the interpolated focusing output signal for each focal point can be calculated by
interpolating the respective focusing output signals calculated from the respective auxiliary
filtered signals.
[0028]
When the filtered sensor output signal is (at least approximately) a linear function of the filter
parameters, as is the case for e.g. an FIR filter, the filtered sensor output signal associated with
the desired focal point can be calculated by first calculating interpolated filter parameters from
the pre-computed filter parameters associated with the respective mesh points, the interpolated
filter parameters defining an interpolated filter, and then calculating the filtered sensor output
by applying the interpolated filter to the sensor output signal. The interpolated focusing output
signal can then be calculated by combining the filtered sensor outputs from the respective
sensors. In this embodiment, the number of focusing calculations is potentially reduced, thereby
reducing the computational cost of the array focusing (e.g. beamforming) operation.
[0029]
Each mesh point may have at least one pre-computed filter parameter per sensor associated with
that mesh point. The pre-computed filter parameters for a given mesh point can be calculated by
minimizing the power of the focusing output signal while requiring that contributions from the
mesh point be fully retained in the output signal.
[0030]
In some embodiments, the set of pre-computed filter parameter(s) associated with a first mesh
point can be calculated by minimizing a maximum sidelobe level, where the maximum sidelobe
level is indicative of the suppression of disturbances originating from a set of positions other
than the first mesh point, at a set of predetermined frequencies. This embodiment has been
found to reduce the risk of high sidelobe sensitivity peaks at certain locations of a disturbance
source and at certain frequencies.
[0031]
In general, each mesh point can have a set of pre-computed filter parameters associated with that
mesh point.
[0032]
In some embodiments, the pre-computed filter parameters associated with a first mesh point are
determined by, for each of a set of predetermined frequencies, determining a set of sensor
weights by minimizing a maximum sidelobe level, where the maximum sidelobe level is
indicative of the suppression, at that frequency, of disturbances originating from a set of
positions other than the first mesh point, and where each sensor weight is associated with one
of the sensors; and by, for each sensor, determining at least one pre-computed filter parameter
by fitting the frequency response defined by the pre-computed filter parameter(s) to a frequency
response vector composed of the subset of the determined sensor weights associated with that
particular sensor at the respective frequencies.
It has been found that this embodiment significantly reduces the computational resources
required for calculating the pre-computed filter parameters, while still being able to achieve
very different levels of sidelobe suppression at different frequencies.
[0033]
The mesh points can be selected as any suitable regular grid or random grid of points, eg a two
dimensional grid or a three dimensional grid. The mesh points can be arranged in one or more
planes, for example in a single plane or in two or more parallel planes. Within each plane, the
mesh points may be arranged in a regular grid, or in an irregular pattern, or in any other suitable
manner. Alternatively, the mesh points can be distributed in one or more curved surfaces, for
example in one or more spherical surfaces with their respective radius. In some embodiments,
mesh points are arranged such that the distance between adjacent mesh points is shorter than
the local beam width of the beamformer used.
[0034]
It should be noted that the features of the methods described above and in the following may be
implemented, at least partially, in software or firmware, and brought about by executing
program code means, such as computer-executable instructions, on a data processing device or
other processing means. Here and in the following, the term "processing means" comprises any
circuit and/or device suitably configured to perform the functions described above. In particular,
the term comprises general- or special-purpose programmable microprocessors, digital signal
processors (DSPs), application-specific integrated circuits (ASICs), programmable logic arrays
(PLAs), field-programmable gate arrays (FPGAs), graphical processing units (GPUs), dedicated
electronic circuits, etc., or a combination thereof.
[0035]
The present invention can be realised in various ways, including the methods, systems, devices
and product means described above and in the following, each yielding one or more of the
benefits and advantages described in connection with the first-mentioned method, and each
having one or more embodiments corresponding to the embodiments described in connection
with the first-mentioned method and/or disclosed in the dependent claims.
[0036]
In particular, an embodiment of a processing apparatus for performing array focusing (e.g.
beamforming) calculations comprises an interface for receiving, from the respective sensors of a
sensor array, a set of sensor output signals corresponding to the measured quantities, and a
processing unit configured to perform the steps of an embodiment of the method described
herein, wherein the processing apparatus comprises a storage medium storing the set of
pre-computed filter parameters.
[0037]
An array focusing (e.g. beamformer or holography) system may comprise a processing apparatus
as described above and in the following, and a set of sensors for measuring the measured
quantity at a set of measurement locations, connectable in communication with the processing
apparatus so as to transfer the measured quantities to the processing apparatus.
For example, such a system can be used to locate sound sources (e.g. noise sources) in 3D space,
e.g. to locate sound sources within an enclosure.
[0038]
It will be appreciated that the pre-computed filter parameters may be generated separately from,
or as part of, a computer program for performing the array focusing (e.g. beamforming) process,
e.g. included in the computer program, included in one or more separate files, or a combination
thereof.
For example, the pre-computed filter parameters may be generated by a computer program or
installation program and stored in a database or on another storage medium supplied together
with the sensor array.
[0039]
A computer program may comprise program code means configured to cause a data processing
system to perform the steps of the method disclosed above and in the following when the
program code means are executed on the data processing system. The computer program may
be stored on a computer-readable medium or embodied as a data signal. The storage medium
may comprise any circuit or device suitable for storing data, such as RAM, ROM, EPROM,
EEPROM, flash memory, or a magnetic or optical storage device such as a CD-ROM, DVD, hard
disk, etc.
[0040]
A computer-readable medium may store a set of pre-computed filter parameters, each filter
parameter being associated with a mesh point of the set of mesh points disclosed herein, for use
by a processing apparatus as defined herein when performing the steps of an embodiment of the
method.
[0041]
According to another general aspect, disclosed herein is a method of determining a focusing
output signal of a sensor array comprising a plurality of sensors, each sensor being operable to
output a sensor output signal corresponding to a measured quantity, the focusing output signal
being indicative of a calculated quantity at a focal point.
An embodiment of the method comprises: receiving the respective measured sensor output
signal from each sensor; calculating a respective filtered sensor signal from each received
measured sensor output signal, the filtered sensor signal being indicative of the measured
sensor signal filtered by a filter associated with the corresponding sensor from which the
measured sensor signal is received, wherein the filter depends on at least one filter parameter
determined by the focal point; and combining the calculated filtered sensor signals to obtain the
focusing output signal.
[0042]
These and other aspects will be apparent and elucidated from the embodiments described in the
following description with reference to the drawings.
[0043]
FIG. 1 is a schematic block diagram of a beamformer system.
FIG. 5 is a flow diagram of a process of calculating beamformed output signals of a sensor array.
FIG. 6 shows a measurement arrangement with microphones in a flat array.
[0044]
Throughout the drawings, the same reference signs, where possible, refer to the same or
corresponding elements, mechanisms or components.
[0045]
In the following, aspects and embodiments of the present invention are described in further
detail with reference to a beamforming system.
However, the embodiments of the methods, products and systems described herein can also be
applied to acoustic holography.
[0046]
FIG. 1 shows a schematic block diagram of a beamformer system for performing beamforming of
acoustic waves. The system comprises a set of acoustic receivers 108 and an analysis unit 103
connected to the acoustic receivers.
[0047]
In the following, the acoustic receivers 108 are also referred to as transducers. Nevertheless, it
will be understood that each acoustic receiver may be a microphone, a hydrophone or any other
sensor suitable for measuring an acoustic quantity such as the sound pressure, a sound pressure
gradient, the particle velocity or other first-order quantities. In the embodiment of FIG. 1, the
transducers 108 are implemented as an array 102 of transducers. The transducers may be
arranged in a regular grid, for example a one-, two- or three-dimensional grid, or in an irregular
arrangement; in beamforming applications, irregular arrangements often perform better and are
therefore commonly used. The number of transducers and the array geometry, e.g. the
inter-transducer spacing, may be chosen according to the size and shape of the space within
which sound sources are to be located, the frequency range of interest, the desired spatial
resolution, and/or other design parameters.
[0048]
The transducer array 102 is connected to the analysis unit 103, for example via a wired or
wireless connection, such that the transducers 108 can transfer the measured signals to the
analysis unit. The signals measured by the transducers are also referred to as sensor signals.
[0049]
The analysis unit 103 comprises an interface circuit 104 for receiving and processing the
measured signals from the transducer array 102, a processing unit 105 in data communication
with the interface circuit 104, a storage medium 112, and an output unit 106 in data
communication with the processing unit 105. Although shown as a single unit in FIG. 1, it will be
appreciated that the analysis unit 103 can be physically divided into two separate devices, e.g.
an acquisition front-end and a computer, or into more than two devices. Similarly, it will be
appreciated that the functions described in connection with the different sub-blocks of the
analysis unit can be divided among alternative or additional functional or hardware
units/modules.
[0050]
The interface circuit comprises signal processing circuitry suitable for receiving the output signals from the transducers 108 and for processing the received signals for subsequent analysis by the processing unit 105. The interface circuit performs simultaneous temporal data collection, after which all further processing can be performed by the processing unit 105, including the time domain to frequency domain conversion, typically using an FFT. The interface circuit 104 may comprise one or more of the following components: one or more preamplifiers for amplifying the received signals, one or more analog-to-digital (A/D) converters for converting the received signals into digital signals, one or more filters, e.g. bandwidth filters, and the like. For example, the interface circuit can provide as output data the amplitude and phase as a function of frequency for each transducer.
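As a rough illustration of this time-to-frequency conversion, the following sketch computes amplitude and phase as a function of frequency for a single simulated transducer channel (the tone frequency, sampling rate and block size are arbitrary example values, not values prescribed by this disclosure):

```python
import numpy as np

# One simulated transducer channel: a 1 kHz tone sampled at 16384 samples/s.
fs = 16384.0
t = np.arange(1024) / fs
signal = np.sin(2 * np.pi * 1000.0 * t)

# FFT-based conversion to the frequency domain.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(1024, d=1.0 / fs)
amplitude = np.abs(spectrum)   # amplitude per frequency bin
phase = np.angle(spectrum)     # phase per frequency bin

# The dominant bin should lie at (approximately) the tone frequency.
peak_freq = freqs[np.argmax(amplitude)]
```

In a real system this step would run per transducer channel on the digitized, pre-amplified signals.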
[0051]
Processing unit 105 may be a suitably programmed microprocessor, a central processing unit of a computer, or any other device suitable for processing the signals received from the interface circuit 104, such as an ASIC, a DSP, a GPU, etc. The processing unit is configured to process the sensor signals received via the interface circuit 104 so as to calculate beamformed output signals as described herein.
[0052]
Storage medium 112 stores data indicative of a set of pre-computed filter parameters and may comprise any suitable circuit or device for this purpose, such as RAM, ROM, EPROM, EEPROM, flash memory, or a magnetic or optical storage device such as a CD-ROM, DVD or hard disk. In FIG. 1, the storage medium is shown separately and in communication connection with the processing unit. However, it will be appreciated that the storage medium 112 may be embodied as part of the processing unit 105, for example as an internal memory.
[0053]
The output unit 106 may comprise a display or any other device or circuit suitable for providing
a visual representation of the beamformed signal, eg, a map of the beamformed output signal for
various focal points. Examples of suitable output units include a printer and / or printer interface
to provide a printed representation. Alternatively or additionally, the output unit 106 may comprise any circuit or device suitable for communicating and/or storing data indicative of the beamformed output signal, such as RAM, ROM, EPROM, EEPROM, flash memory, a magnetic or optical storage device such as a CD-ROM, DVD or hard disk, or a wired or wireless data communication interface, e.g. an interface to a computer or to a communication network such as a LAN, a wide area network or the Internet.
[0054]
The analysis unit 103 can be implemented as a suitably programmed computer, for example a PC including a suitable signal acquisition board or circuit.
[0055]
In operation, the transducer array 102 can be positioned near or in an environment in which a sound source is to be located or mapped, for example near the surface of an object comprising a sound source emitting acoustic radiation, or in an enclosed space. Depending on the size and shape of the object or environment to be analyzed, the frequency range of interest and the desired spatial resolution, the number of transducers, the array arrangement, e.g. the inter-transducer spacing, and the distance to the possible sources can be chosen.
[0056]
For example, the position of the array 102 can be determined by means of a position detection device and supplied to the analysis unit 103. The transducers 108 of the array 102 can measure the sound pressure or another suitable acoustic quantity at their respective positions, and the resulting sensor signals are sent to the analysis unit 103.
[0057]
For example, the transducer array can be a hand-held array incorporating a position detection device, so that measurements can be made at various accessible positions distributed around the object. Another common application is in a passenger compartment, where a 3D array grid can be used to make sound sources distinguishable in all directions, e.g. a spherical array or a dual-layer array (e.g. with 8 × 8 × 2 sensors).
[0058]
Analysis unit 103 calculates beamformed output signals for one or more focal points 109 and/or directions from the signals measured by the transducers. The analysis unit may store and/or output a representation of the beamformed signal, e.g. a map, as a function of direction and/or focal point, of the acoustic intensity at the array position or of the contribution to the sound pressure.
[0059]
One embodiment of a process for calculating a beamformed output signal is described with reference to FIG. 2, with continued reference to FIG. 1.
[0060]
In general, embodiments of the process calculate the output time signal b(t, r) at a given position/direction r. For example, the output signal may be an estimate of the contribution of the focal point/direction to the sound pressure at (the center of) the sensor array. As mentioned above, r can define a position or a direction in 3D (for example represented by a position on a unit sphere around the center of the sensor array or another suitable origin of the coordinate system). The FAS beamformer first applies an individual filter to each of the sensor signals p_m(t), m = 1, 2, ..., M, to obtain filtered sensor signals, after which the filtered signals are summed. Here, the symbol * represents a convolution with respect to time, and the vector α_m(r) contains the filter parameters applied to transducer number m so as to focus the beamformer at position r. Hence, the beamformed output signal b(t, r) is obtained in this embodiment by combining, namely summing, a plurality of calculated filtered sensor signals. It will be appreciated that, in general, the beamformed output signal can be calculated by other combinations, in particular linear combinations, of the calculated filtered sensor signals. The filters can be FIR filters. However, embodiments of the process described herein may apply other types of filters. The paper by S. Yan, C. Hou and X. Ma, "Convex optimization based time-domain broadband beamforming with sidelobe control" (J. Acoust. Soc. Am. 121(1), January 2007, 46-49) describes, for example, an approach based on FIR filters, including a method of optimizing the FIR filters so as to obtain a minimum sidelobe level of the beamformer. Those skilled in the art will understand that computing the optimal filter parameter vectors α_m(r) for all calculation points each time a beamforming calculation has to be performed is often a computationally very demanding operation. In the following description, an example of an efficient method of performing optimized FAS beamforming is described in more detail.
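The filter-and-sum operation described above can be sketched as follows (a minimal illustration; the function name and array shapes are chosen for this sketch and are not part of the disclosure):

```python
import numpy as np

def fas_beamform(sensor_signals, filters):
    """Filter-and-sum: convolve each sensor signal with its own FIR filter
    (coefficients taken from alpha_m(r)) and sum over the transducers."""
    M, K = sensor_signals.shape
    L = filters.shape[1]
    out = np.zeros(K + L - 1)
    for m in range(M):
        out += np.convolve(sensor_signals[m], filters[m])
    return out

# Two sensors with identity (single-tap) filters: the output is the plain sum.
p = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
h = np.array([[1.0],
              [1.0]])
b = fas_beamform(p, h)
```

With non-trivial filters, each row of `filters` would hold the optimized coefficients for the corresponding transducer and the chosen focal point.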
[0061]
In a first step S1, the process defines an appropriate coordinate system and obtains a set of N mesh points r_n, n = 1, 2, ..., N, which form a mesh 110 and for which pre-computed optimized filter parameters are available. The mesh of positions r_n spans the area in space in which beamforming calculations are to be performed with the particular array. In general, the spacing between the mesh points r_n should be somewhat smaller than the local beam width of the beamformer. The spacing may depend on the interpolation scheme used and can be determined at the time the entire system (filter, interpolation scheme, mesh generation) is designed. At that point, the spacing can be selected to be narrow enough to obtain the desired accuracy of interpolation. For each of these mesh points r_n, the process obtains a set of pre-computed optimized filter parameter vectors α_m(r_n), where n = 1, 2, ..., N is the index associated with the N mesh points and m = 1, 2, ..., M is the index associated with the M transducers of the array. Thus, the filter parameters α_m(r_n) define respective auxiliary filters associated with the respective mesh points. For example, the mesh points and the associated filter parameters may be stored in a filter bank (e.g. one file, one set of files, or a database), which may be associated with the array and supplied with the array. Examples of methods for calculating the optimized filter parameters are described in more detail below. Typically, the mesh and the filter parameters are defined/calculated once and for all by a separate program and supplied with the sensor array. However, this can also be done during installation or by an appropriate initialization function within the beamforming software (and the parameters can hence be recalculated).
[0062]
In step S2, the process receives the measured sensor signals p_m(t), m = 1, 2, ..., M, from the respective transducers of the array.
[0063]
In step S3, the process selects a vector r which defines the position or direction for which the beamformed signal is to be calculated. For example, the process can automatically select a series of positions, the process can receive user input indicating a desired direction/position, or the user can define a grid of points r as the basis for generating a sound source distribution map.
[0064]
In step S4, the process identifies the mesh points closest to the focal point r. In FIG. 1, the mesh point closest to the point 109 is designated 113. It will be appreciated that the number of mesh points identified may depend on the interpolation algorithm used, the choice of coordinate system, and the type of mesh. For example, if the mesh points are placed in a cubic grid, the process can identify, as the nearest mesh points, the corners of the grid cube in which the focal point r is located. Similarly, if the mesh points are located on one or more spheres centered on the origin of the coordinate system and having respective radii, the process can identify the sphere(s) closest to the position r and then identify a predetermined number of closest mesh points on each such sphere.
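For the cubic-grid case mentioned above, the nearest-mesh-point lookup can be sketched as follows (assuming a regular grid with known origin and spacing, which is a simplification of the general mesh):

```python
import numpy as np

def enclosing_corners(r, origin, spacing):
    """Return the integer grid indices of the 8 corners of the grid cube
    in which the focal point r lies (regular cubic mesh assumed)."""
    base = np.floor((np.asarray(r, dtype=float) - origin) / spacing).astype(int)
    offsets = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    return base + offsets

# Focal point inside the cell spanned by grid indices (0,1,1)..(1,2,2).
corners = enclosing_corners([0.25, 0.6, 0.9], origin=np.zeros(3), spacing=0.5)
```

For spherical meshes, an analogous lookup would first select the nearest radius, then the nearest points on that sphere.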
[0065]
In step S5, the process performs a beamforming calculation for each identified nearest mesh point, using the pre-computed filter parameters associated with that mesh point. The beamformer output is thus calculated in this case by combining, namely summing, the auxiliary filtered sensor signals resulting from applying an auxiliary filter to each sensor signal, where each auxiliary filter is defined by the pre-computed filter parameters α_m(r_n). In step S6, the process interpolates the beamformer outputs calculated at the positions r_n so as to arrive at the interpolated beamformer output b(t, r) at the position r. The interpolated beamformer output is thus calculated by: for each sensor, calculating a plurality of auxiliary filtered sensor signals associated with the selected subset of mesh points by applying respective auxiliary filters to the measured sensor output signal, each auxiliary filter being associated with the corresponding sensor from which the measured sensor signal is received and depending on at least one of the pre-computed filter parameters; for each mesh point, combining the auxiliary filtered sensor signals calculated for the respective sensors to obtain an auxiliary beamformed output signal b(t, r_n); and interpolating over the auxiliary beamformed output signals calculated for the respective mesh points to obtain the interpolated beamformed output signal.
[0066]
Interpolation can be performed using any suitable interpolation technique that is known per se. A
simple form of interpolation is linear interpolation in angular or linear coordinates (1D, 2D or
3D).
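Steps S5 and S6 can then be sketched as a weighted combination of the auxiliary beamformed signals b(t, r_n); the weights shown are illustrative linear-interpolation weights, not a prescribed scheme:

```python
import numpy as np

def interpolate_outputs(b_aux, weights):
    """Interpolate auxiliary beamformed time signals b(t, r_n) to the focal
    point r as a weighted sum over the nearest mesh points."""
    return np.asarray(weights) @ np.asarray(b_aux)

# Focus halfway between two mesh points: simple average of the two outputs.
b_aux = [[0.0, 2.0],   # b(t, r_1)
         [2.0, 4.0]]   # b(t, r_2)
b = interpolate_outputs(b_aux, [0.5, 0.5])
```

For bilinear or trilinear interpolation the weight vector would simply have 4 or 8 entries, one per corner of the enclosing cell.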
[0067]
The beamforming process can be sped up in the following ways. 1) First, beamforming calculations can be performed over at least some of the mesh points needed for interpolation at all calculation points r involved in producing the specified beamforming map; the interpolation is then performed in a second step. 2) In the case of FIR filters, the beamformer output of equation (1) is a linear function of the filter coefficients h_m(r_n) (the coefficients constituting the main part of the parameter vector α_m(r_n)). Here, T_SF is the sampling time interval of the FIR filter, L is the number of taps in the filter, and v_m is a shared integer sample-interval delay between transducer m and all mesh points r_n; the h_{m,l}(r_n), for given n and m, constitute the L elements of the filter coefficient vector h_m(r_n). Instead of performing spatial interpolation on the mesh point beamformer output signals b(t, r_n) to obtain b(t, r), one can interpolate the FIR filter coefficients h_{m,l}(r_n) to obtain h_{m,l}(r) and then apply those values in equation (2). Thus, the beamformed output signal may be calculated by: for each sensor, calculating at least one interpolated filter parameter h_{m,l}(r) from the pre-computed filter parameters h_{m,l}(r_n) associated with the identified subset of mesh points; for each sensor, calculating an interpolated filtered sensor signal by applying the respective interpolated filter to the measured sensor output signal; and combining the interpolated filtered sensor signals calculated for the respective sensors to obtain the interpolated beamformed output signal.
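Alternative 2) — interpolating the FIR coefficients first and then filtering once — can be sketched as follows (for simplicity this sketch assumes the shared delays v_m are identical across the selected mesh points, which the text does not guarantee in general):

```python
import numpy as np

def beamform_interp_filters(p, h_mesh, weights):
    """Interpolate the FIR coefficients h_{m,l}(r_n) to h_{m,l}(r), then
    apply a single filter-and-sum pass (equation (2), v_m omitted).

    p:       (M, K) sensor signals
    h_mesh:  (N, M, L) FIR coefficients per mesh point and transducer
    weights: (N,) spatial interpolation weights
    """
    h = np.tensordot(weights, h_mesh, axes=1)          # -> (M, L)
    return sum(np.convolve(p[m], h[m]) for m in range(p.shape[0]))

p = np.array([[1.0, 0.0, 0.0]])                        # one sensor, an impulse
h_mesh = np.array([[[1.0, 0.0]],                       # filter at mesh point 1
                   [[0.0, 1.0]]])                      # filter at mesh point 2
b = beamform_interp_filters(p, h_mesh, [0.5, 0.5])     # halfway between them
```

The saving is that only one convolution per sensor is needed, instead of one per sensor per mesh point.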
[0068]
In order to minimize the required FIR filter length, a shared delay count v_m is introduced in equation (2). In view of the similarity with DAS beamforming, v_m approximates the delay required in DAS, so that the FIR filter only needs to model the deviation, introduced by the filter optimization, from a pure delay, with a maximum of a few sample intervals. If the delay offset counts v_m are not known by the FAS beamforming software, these counts are stored in the filter bank together with the vectors h_m(r_n) of FIR filter coefficients (i.e. in α_m(r_n)).
[0069]
By calculating the beamformer output signal of equation (2) using the same sampling rate as the sensor signals, the equation becomes a sum of discrete convolutions across the transducers, where κ is the integer sample count of the output. The actual computation of the convolutions can be done efficiently, for example using a standard overlap-add convolution algorithm that utilizes an FFT.
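The FFT-based evaluation of these discrete convolutions can be sketched in pure NumPy (zero-padding to a radix-2 length; the block-wise overlap-add splitting for long recordings is omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
p_m = rng.standard_normal(1024)    # one sensor signal
h_m = rng.standard_normal(128)     # one FIR filter

# Length of the linear convolution, padded up to the next power of two.
n = len(p_m) + len(h_m) - 1
nfft = 1 << (n - 1).bit_length()

# Multiply the zero-padded spectra and transform back: O(K log K) work.
fast = np.fft.irfft(np.fft.rfft(p_m, nfft) * np.fft.rfft(h_m, nfft), nfft)[:n]

direct = np.convolve(p_m, h_m)     # O(K*L) reference result
```

Both paths yield the same linear convolution; the FFT path is what makes long filter banks practical.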
[0070]
Equations in the frequency domain can be obtained through a Fourier transform of equation (2), where P_m(ω) is the transducer sound pressure spectrum and ω is the angular frequency. By introducing the frequency domain representation H_m(ω, r_n) of the filters, the frequency domain beamforming equation (4) can be written. To simplify the description, only radix-2 FFT processing is considered: if the sampling time interval T_S of the sensor signals is equal to 2^μ·T_SF for some non-negative integer μ, the frequency domain filters H_m(ω, r_n) can be calculated using an FFT. If the recording length of the sensor signals is K, the frequencies ω_k = 2πf_k in the FFT spectrum of the sensor signals are f_k = k/(K·T_S), k = 0, 1, 2, ..., K/2, and equation (5) becomes as follows. The equation can be calculated for each combination (n, m) by zero padding the FIR filter coefficient vector from length L to length 2^μ·K and then performing an FFT. This makes very efficient frequency domain beamforming possible in most cases, namely whenever the signal under test has a sampling frequency at or below the sampling frequency used in the FIR filter bank.
[0071]
An alternative frequency domain implementation is described in the following. In particular, P_m represents the complex sensor signal (e.g. sound pressure) measured by sensor m at a given frequency, and H_{m,n} represents the complex frequency response function (FRF) of the filter applied to the sensor signal from sensor m in order to focus the beamformer at mesh point n. The beamformed signal B_n at mesh point n at the given frequency is then the sum over the sensors of H_{m,n}·P_m. Furthermore, if w_n is the interpolation weighting factor to be applied to the beamformed result B_n at mesh point n so as to obtain an interpolated beamformed signal B at the desired focal point, the interpolated beamformed signal at that frequency can be obtained as B = Σ_n w_n·B_n. Hence, to calculate B, one can equivalently first calculate the interpolated frequency response functions and then use the interpolated frequency response functions in the beamforming calculation. Thus, in some embodiments, the pre-computed filter parameters associated with each mesh point may be frequency responses for that mesh point, associated with each sensor, at each frequency.
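The equivalence used above — interpolating the per-mesh-point results B_n versus beamforming with interpolated frequency responses — follows directly from linearity and can be checked numerically at a single frequency (random example data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 4, 3                        # sensors, nearest mesh points

# Complex sensor spectra P_m and per-mesh-point FRFs H_{m,n}.
P = rng.standard_normal(M) + 1j * rng.standard_normal(M)
H = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
w = np.array([0.2, 0.5, 0.3])      # interpolation weights w_n

B_n = H @ P                        # beamformed signal per mesh point
B_from_outputs = w @ B_n           # interpolate the beamformed results
B_from_filters = (w @ H) @ P       # beamform with interpolated FRFs
```

In practice the second form is attractive when many frequencies share the same interpolation weights, since the FRFs need to be interpolated only once.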
[0072]
Since the frequency responses are usually only available at a set of discrete frequencies, an additional interpolation across frequency can be performed to calculate beamformed results at arbitrary frequencies.
[0073]
Optimization of FIR Filters to Obtain a Minimum Sidelobe Level
In the following description, alternative methods of pre-computing the optimized filter coefficients h_m(r_n) are described in more detail, for the case of FIR filters and for optimization towards a minimum sidelobe level.
[0074]
For this purpose, the calculation of a set of optimal FIR filter coefficients for a single focal point or direction, for example one of the mesh points r_n, is described. It will be appreciated that the filter parameters for all mesh points r_n can be calculated using the method described below. The calculated filter parameters can then be stored and used in the manner described herein.
[0075]
FIG. 3 shows a measurement configuration comprising M transducers in a flat array. I + 1 point sources located at positions r_i, i = 0, 1, 2, ..., I, generate incident waves; their positions are here in the far-field region of the array. Using the focusing capability of FAS beamforming, the contributions s_i(t), i = 1, 2, ..., I, should be suppressed as much as possible, while the free-field sound pressure contribution s_0(t) from source i = 0 at a reference position in the array (i.e. the sound pressure without the array present at the given position) should be extracted. The extraction is based on the different origins (positions r_i) of the sound waves as seen from the array.
[0076]
Although a flat array in a free field is shown in the example of FIG. 3, the principles described in this section apply equally to any array arrangement and any array support structure, including flush (coplanar) mounting in a flat plate or on a rigid sphere. In general, the sensor signals can be represented mathematically as follows, where g_{i,m}(τ) is the known impulse response function from the free-field pressure contribution s_i(t) to the actual contribution to the pressure p_m(t) measured by the transducer. In the case of an array of transducers in a free field, where the effect of the transducers on the sound field can be neglected and the sound sources are in the far-field region only, the function g_{i,m}(τ) represents only the delay τ_{i,m} of sound field #i from the reference point to transducer m.
[0077]
A Fourier transform of both sides of equation (8) yields an equivalent frequency domain relationship, where P, S and G are the Fourier transforms of p, s and g, respectively. In the above example, where g_{i,m}(τ) represents a delay, the corresponding function G_{i,m}(ω) represents the equivalent phase shift (for an array in a free field and sound sources in the far field).
[0078]
To extract the contribution s_0(t) or S_0(ω) from source #0, the beamformer is focused at the position r_0 of that source. When using the frequency domain equation (4) for that beamformer, the index n is omitted, since only the focal point r_0 is considered. Substituting equation (9) into the transducer pressure spectra of equation (4), the following equation is derived, where B_i is the contribution from sound source number i. Column vectors u and h are defined as follows, with the elements organized in corresponding order; T represents the transpose of a vector or matrix. Ideally, B_0(ω) = S_0(ω) and B_i(ω) = 0, i = 1, 2, ..., I; that is, according to equation (12), it is desirable to find a filter vector h such that u^T(ω, r_0)h = 1 and u^T(ω, r_i)h = 0, i = 1, 2, ..., I.
[0079]
A first embodiment of the method for pre-computing the optimized filter parameters is similar to that described by S. Yan, C. Hou and X. Ma (supra). In that method, a relatively small number I of disturbance sources to be suppressed is taken into account, and the contributions from this relatively small number of assumed disturbance source positions are minimized. Disturbances from other positions (directions) are minimized in an average sense by minimizing the power of the beamformer output signal, while requiring that the contribution from the focal point r_0 be completely retained in the output signal.
[0080]
To minimize the output power, S. Yan, C. Hou and X. Ma (supra) have shown that this output power equals h^T·R·h^*, where R is the covariance matrix of the sensor signals, h is the vector of FIR filter coefficients to be determined, and ^* denotes the complex conjugate. Introducing the Cholesky decomposition R = U^H·U, where ^H represents the conjugate (Hermitian) transpose, the output power can be expressed as the squared norm ‖U·h^*‖², which can be minimized by minimizing ‖U·h^*‖.
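The norm reformulation of the output power can be verified numerically for a real coefficient vector h (random Hermitian positive definite R; this sketches only the identity, not the optimization itself):

```python
import numpy as np

rng = np.random.default_rng(2)

# A random Hermitian positive definite covariance matrix R = X^H X.
X = rng.standard_normal((8, 5)) + 1j * rng.standard_normal((8, 5))
R = X.conj().T @ X

# numpy returns the lower factor L0 with R = L0 L0^H, so U = L0^H gives R = U^H U.
U = np.linalg.cholesky(R).conj().T

# For real h, the output power h^T R h^* equals the squared norm ||U h||^2.
h = rng.standard_normal(5)
power_quadratic = np.real(h @ R @ h)
power_norm = np.linalg.norm(U @ h) ** 2
```

It is exactly this norm form that allows the power term to enter a second-order cone program as a single cone constraint.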
[0081]
Referring to equation (12), the method can now be described mathematically as:
[0082]
Here, ε is a constant that defines the required suppression level of the disturbances from the selected I positions, ω_k is a set of constraint frequencies, r_{k,i} represents a set of points at which disturbance sources are to be suppressed, possibly varying with frequency, and Δ defines the maximum of the norm (usually the 2-norm) of the FIR coefficient vector h, thereby limiting the so-called white noise gain (WNG) of the beamformer (see S. Yan, C. Hou and X. Ma, supra). One reason for using a frequency dependent set of points r_{k,i} is that the beam width tends to be wide at low frequencies and narrow at high frequencies. Problem (14) has the form of a so-called second-order cone programming (SOCP) problem and can be solved efficiently using available solvers; see, for example, S. Yan, C. Hou and X. Ma, supra.
[0083]
One issue associated with this embodiment is the definition and computation of the covariance matrix R. Ideally, the covariance matrix should be calculated from the sensor signals to be used for beamforming, which means that the optimization problem (14) would have to be solved for every focal point r_0 of every new measurement. This is a very demanding task: as an example, with 50 transducers and 64 FIR filter taps, 3200 variables in the vector h have to be determined for each focal point. To avoid this, embodiments of the method described herein use a pre-computed filter bank in combination with spatial interpolation. However, the covariance matrix must then be an average assumed for the intended application. Usually, a diagonal matrix with identical elements on the diagonal can be used, which means that the minimization of ‖U·h^*‖ can be replaced by the minimization of ‖h‖, which simply amounts to minimizing the WNG. The last constraint, namely the WNG constraint, can then be removed.
[0084]
However, the main problem is the use of a relatively small number of assumed disturbance source positions r_i. As mentioned above, although the sensitivity to disturbance sources at other positions is minimized on average, there may, depending on the frequency, be high sidelobe (sensitivity) peaks between the points r_i. A related issue is the choice of the maximum allowed relative sidelobe level ε: in principle, one does not know in advance which sidelobe level can be achieved over the selected frequencies ω_k when the other constraints are present. This requires consideration of the given array and the given focal point.
[0085]
The following alternative embodiment addresses these problems by omitting the minimization of ‖h‖ and instead minimizing the maximum sidelobe level ε. Specifically, based on the above considerations, the following optimization problem can be defined to determine the FIR filter coefficients.
[0086]
This is also an SOCP problem and can therefore be solved efficiently using available solvers. Having given up the "average control" of the sidelobe levels between the positions r_{k,i}, we can still control the WNG through the last constraint. Otherwise, however, the idea is to use a grid of points r_{k,i} dense enough that all sidelobes within the desired range are controlled with a certain accuracy, and hence that the highest sidelobes are minimized.
[0087]
Here, the main problem is the size of the optimization problem (15) as may be illustrated by the
following example.
[0088]
In one example, an array of M = 50 microphones flush-mounted in a rigid sphere of radius a = 10 cm is used. The mean spacing between the microphones is then approximately 5 cm, which is well below half a wavelength up to about 2.5 kHz. In that frequency range, low sidelobe levels can be achieved with an appropriately selected distribution of the 50 microphones on the sphere. Therefore, in this example, F_max = 2.5 kHz is chosen as the upper frequency limit.
[0089]
At 2.5 kHz, ka ≡ ωa/c ≈ 4.58, where k is the wave number and c is the propagation speed. Spherical harmonics of orders up to about N ≈ ka will inevitably dominate the directivity pattern of a spherical array (see M. Park and B. Rafaely, "Sound-field analysis by plane-wave decomposition using spherical microphone array", J. Acoust. Soc. Am. 118(5), November 2005, 3094-3103). However, given a reasonable level of WNG, i.e. a solution vector h with a reasonable norm, spherical harmonics of orders up to about N ≈ min{ka + 2, 6} will be amplified to about the same level during the optimization. The minimum peak-to-zero angular radius of a lobe is then θ_r ≈ π/N; see M. Park and B. Rafaely, supra. In order not to miss too many sidelobe peaks, the angular spacing Δθ in the grid r_{k,i} should not exceed about 20% of the minimum lobe angular radius θ_r. Selecting a grid with angular spacing Δθ = 0.2π/N spanning the full 4π solid angle then requires a certain number of points; at 2.5 kHz this leads to I ≈ 365 points. However, among these points, the points lying in the area of the main lobe must be removed, which corresponds to about 50 points, so approximately 315 points remain.
[0090]
Empirically, when using a sampling frequency of 16384 samples/s, an FIR filter length of L = 128 is usually needed if a low-frequency resolution somewhat better than that of delay-and-sum is to be obtained. The time interval of the filter is then approximately the time it takes a wave to travel four times around the sphere. To determine L FIR filter coefficients, a number K of frequencies ω_k at least equal to L/2 should be used; the division by 2 stems from the fact that each frequency effectively imposes a constraint on the complex response, with both a real and an imaginary part. However, K ≈ 0.7L ≈ 90 was eventually chosen, as a certain degree of overdetermination proved beneficial. When distributing 90 frequencies uniformly over the interval from 0 Hz to 2.5 kHz and calculating the number I_k of constraint positions r_{k,i} for each frequency using the method described above, the total number of points r_{k,i} comes to about 13600. Besides these approximately 13600 inequality constraints in (15), there is one further inequality constraint in addition to the K = 90 equality constraints. The number of unknown FIR filter coefficients in h (determined in the solution process) is, as mentioned above, M·L = 50·128 = 6400. Already this is hardly suitable to be carried out during any beamforming process.
[0091]
However, in practice beamforming is often used up to about three times the "half-wavelength spatial sampling frequency", in this case up to 7.5 kHz. To do this, the temporal sampling frequency needs to be doubled, i.e. to 32768 samples/s, which in turn requires the number L of FIR filter taps to be doubled in order to keep the filter length in seconds. Then, following the criterion previously used to select the total number of frequencies K, that number must also be doubled, so that finally L = 256, K = 180 and the number of unknown FIR coefficients is 12800. However, the largest increase in the size of the problem is due to the need for a very dense grid r_{k,i} at the added frequencies from 2.5 kHz to 7.5 kHz. Increasing the maximum degree N of the spherical harmonics proportionally with frequency has been found to work well. However, with K = 180 frequencies uniformly distributed from 0 Hz to 7.5 kHz, and using the method previously used to calculate the density of the r_{k,i} grids, the number of inequality constraints ends up at about 116000. The matrix of gradients of the constraint functions would then have 12800 × 116000 = 1484800000 elements, which requires about 6 GB of memory in floating point representation. This problem size is not considered manageable for beamforming calculations within a reasonable time per focal point.
[0092]
The conclusion from the above example is that the second embodiment can be used for problems that are relatively small in terms of transducer count, array diameter and/or upper frequency limit. For the practical application of a 20 cm diameter, 50-element spherical array, however, the method is hardly feasible, although presumably a powerful computer could handle the one-time generation of the filter bank to be used according to the embodiments of the method disclosed herein.
[0093]
The following third embodiment avoids the need for large computational power to pre-compute the optimized filter parameters. Besides solving that problem, the embodiment also handles the fact that significantly different levels of sidelobe suppression can be achieved above and below the "half-wavelength spatial sampling frequency". The second alternative embodiment described above may not always handle this properly, as the same value of ε is used for all frequencies. An attempt can be made to define an ε that varies with frequency, but it is difficult to know whether a "good" variation has been defined.
[0094]
According to the third embodiment, the frequency-domain expression (5) of the FIR filter H m (ω)
is inserted into equation (12) for the contribution from sound source #i to the beamformer
output (again, for the purposes of this explanation, the focus index n is omitted and only a single
focus point is considered), yielding an expression for that contribution. Corresponding to the
definition of the vector u in (13), a column vector v is defined here. As in the first and second
embodiments, the third embodiment uses a set of discrete frequencies ω k, k = 1, 2, . . . , K, to
optimize the performance of the FIR beamformer. To facilitate a matrix-vector description of this
embodiment, the matrix H of FIR filter responses at these discrete frequencies is defined, and
H m and H k are used to denote the column vectors containing the elements of column m and
row k of H, respectively. With these definitions, equation (18) for the contribution of a single
source to the beamformer output at frequency ω k can be expressed compactly. Clearly, if source
#i is to be correctly reconstructed at the beamformer output, the value of v <T> (ω k, r i) H k
should equal 1 at all frequencies. If the contribution is to be suppressed instead,
| v <T> (ω k, r i) H k | should be as small as possible at all frequencies.
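Purely as a numerical illustration (the steering model below is an assumed free-field monopole model, not taken from the patent text), the condition v <T> (ω k, r i) H k = 1 at the focus point can be checked for a simple delay-and-sum choice of H k:

```python
import numpy as np

# Hypothetical sketch: contribution of source #i to the beamformer output
# at one frequency, v(w, r_i)^T H, for a free-field array.
# Assumed model (not from the patent text): v_m = exp(-j*w*d_m/c) / d_m,
# i.e. monopole propagation from the source position to sensor m.

c = 343.0                                    # speed of sound [m/s]
rng = np.random.default_rng(0)
mics = rng.uniform(-0.1, 0.1, size=(8, 3))   # 8 sensors near the origin
src = np.array([0.0, 0.0, 1.0])              # focus point / source position
w = 2 * np.pi * 1000.0                       # angular frequency, 1 kHz

d = np.linalg.norm(mics - src, axis=1)       # source-to-sensor distances
v = np.exp(-1j * w * d / c) / d              # column vector v(w, r_i)

# Delay-and-sum weights focused on src: undo phase and spreading, average.
H = (np.exp(1j * w * d / c) * d) / len(d)

contribution = v @ H                         # v^T H
print(abs(contribution))                     # approx. 1.0 at the focus point
```

For a source away from the focus point the same quantity is generally far from 1, which is what the inequality constraints in the optimization are meant to control.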
[0095]
The third embodiment of the method optimizes the FIR filters globally through the following
two-step procedure. 1) For each of the frequencies ω k, k = 1, 2, . . . , K, compute the set of
complex transducer weights H k as the solution to problem (22). Once this is complete, the full
matrix H is available. The choice of the starting vector H k <(0)> is explained below. 2) For each
transducer m, fit an FIR filter with a number of taps to the frequency response vector H m. For
that purpose, equation (5) is applied at the frequencies ω k (again, only a single focus point (#0)
is considered here, so the focus index n is omitted). Here, A is a matrix having K rows
(frequencies) and L columns (filter taps). Equation (23) is solved in the least-squares sense.
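As an illustration only (the patent's equations (5) and (23) are not reproduced in this translation), the least-squares fit of step 2) can be sketched as follows, assuming an FIR response of the form H(ω) = Σ_l h[l]·e^{-jωlΔt}, so that A has entries e^{-jω_k·l·Δt}:

```python
import numpy as np

# Hedged sketch of step 2): fit L real FIR taps h_m to a desired complex
# frequency response H_m at K discrete frequencies, via least squares.
# The assumed FIR response form is H(w) = sum_l h[l]*exp(-j*w*l*dt);
# the patent's exact equation (5) and its normalization may differ.

def fit_fir(H_m, omegas, L, dt):
    # A has K rows (frequencies) and L columns (filter taps).
    A = np.exp(-1j * np.outer(omegas, np.arange(L)) * dt)
    # Real taps: stack real and imaginary parts into one real LS problem.
    A_ri = np.vstack([A.real, A.imag])
    b_ri = np.concatenate([H_m.real, H_m.imag])
    h, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
    return h

# Round-trip check with a known filter:
dt = 1.0 / 48000.0
L = 16
omegas = 2 * np.pi * np.linspace(100.0, 20000.0, 64)
h_true = np.hanning(L)
H_target = np.exp(-1j * np.outer(omegas, np.arange(L)) * dt) @ h_true
h_fit = fit_fir(H_target, omegas, L, dt)
print(np.max(np.abs(h_fit - h_true)))        # close to 0
```

Stacking real and imaginary parts is one standard way to keep the taps real while matching a complex target response at K > L frequencies.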
[0096]
Note that, unlike the equality constraint in problem (14), the equality constraint in problem (22)
also constrains the phase. This is because a consistently "smooth" phase of H m (ω k) across the
frequencies ω k is needed in order to obtain a useful solution h m from equation (23) in the
second step.
[0097]
Surprisingly, it was found that a smooth frequency variation of the phase actually results from
step 1) above, provided that H k <(0)> in equation (22) is properly selected. Problem (22) is an
SOCP problem that is solved using an iterative optimization procedure. Instead of using an
all-zero H k as the starting point, a well-conditioned analytical start vector H k <(0)> that
satisfies the equality constraint in (22) was used. Typically, this will be the delay-and-sum
solution for "transparent" arrays, and the spherical harmonic beamforming solution for
transducer arrays on a rigid sphere. In one preferred embodiment, the steepest descent
algorithm is used, starting at H k <(0)>, and its iterations were stopped when the path crossed a
hypersphere of radius Δ, or when the gradient norm became smaller than, for example, 1% of its
value at H k <(0)>. The standard value of Δ was. It has been found that the use of this procedure
produces frequency response functions H m that can be represented very easily by fairly short
FIR filters h m.
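The gradient-norm stopping rule described above can be sketched generically; the objective and the step size below are placeholders (not the actual SOCP cost of problem (22)), and the hypersphere criterion is omitted for brevity:

```python
import numpy as np

# Minimal sketch of the stopping rule described above: steepest descent
# started from an analytic start vector H0, terminated when the gradient
# norm drops below 1% of its value at H0.

def steepest_descent(grad, H0, step=0.1, max_iter=10000):
    H = H0.copy()
    g0 = np.linalg.norm(grad(H0))            # reference gradient norm at H0
    for _ in range(max_iter):
        g = grad(H)
        if np.linalg.norm(g) < 0.01 * g0:    # 1% stopping criterion
            break
        H = H - step * g
    return H

# Toy quadratic objective f(H) = ||H - t||^2 with gradient 2*(H - t):
t = np.array([1.0, -2.0, 0.5])
H0 = np.zeros(3)
H_opt = steepest_descent(lambda H: 2 * (H - t), H0)
print(np.linalg.norm(H_opt - t))             # small residual
```

The point of the analytic start vector is that iterations both begin at a feasible point and stay close to a smoothly varying solution across frequencies.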
[0098]
When the frequencies ω k are chosen appropriately, relation (23) reduces to an FFT relation
between H m and h m. However, such a choice of frequencies does not allow the system to be
over-determined. In any case, the solution of (23) in step 2) requires far less computational work
(is far less time consuming) than the optimization problem in step 1).
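To illustrate this remark (under the same assumed FIR response form H(ω) = Σ_l h[l]·e^{-jωlΔt} as above), choosing ω k = 2πk/(KΔt) with L = K makes the matrix A exactly the DFT matrix:

```python
import numpy as np

# Illustration of the FFT remark: with w_k = 2*pi*k/(K*dt), k = 0..K-1,
# and L = K, the matrix A[k, l] = exp(-j*w_k*l*dt) is the DFT matrix,
# so A @ h equals the FFT of the taps h.

K = L = 8
dt = 1.0
omegas = 2 * np.pi * np.arange(K) / (K * dt)
A = np.exp(-1j * np.outer(omegas, np.arange(L)) * dt)

h = np.arange(L, dtype=float)                # arbitrary real taps
print(np.allclose(A @ h, np.fft.fft(h)))     # -> True
```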
[0099]
The largest computational task is the solution of (22) at the highest frequency. For the example of
the 50-element spherical array discussed in connection with the second embodiment above, (22)
must be solved with M = 50 complex variables at each of 180 frequencies; at 7.5 kHz, the number
of inequality constraints is about 1862. This is a problem of manageable size. It should be
recalled, however, that 180 frequencies have to be dealt with, that this has to be done for every
focus point, and that it therefore remains a computational task that is not suited to being carried
out in connection with every beamforming calculation. For practical applications involving arrays
of the type used in this example, a filter bank as described herein is therefore a significant
advantage.
[0100]
As mentioned above, the spacing between the points r n spanned by the filter bank should be
somewhat smaller than the local beamwidth of the beamformer. For the 50-element spherical
array of the above example, it turned out that an angular mesh with a spacing of about 5 degrees,
combined with only 3 radial distances beyond 25 cm from the sphere, is sufficient for linear
interpolation in angle (2D) and in radial distance. For the radial interpolation, it was found to be
better to assume a linear change as a function of the reciprocal of the distance than as a function
of the distance itself.
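As a sketch of this interpolation rule (the mesh radii and parameter values below are hypothetical, not taken from the patent), linear interpolation in the reciprocal distance can be implemented as:

```python
import numpy as np

# Sketch of the radial interpolation rule above: interpolate a filter
# parameter linearly in the reciprocal distance 1/r between two mesh
# radii, rather than linearly in r itself. (Mesh values are made up.)

def interp_reciprocal(r, r1, r2, p1, p2):
    """Linear interpolation of parameter p as a function of 1/r."""
    u, u1, u2 = 1.0 / r, 1.0 / r1, 1.0 / r2
    t = (u - u1) / (u2 - u1)
    return (1.0 - t) * p1 + t * p2

# Hypothetical mesh radii and pre-computed parameter values:
r1, r2 = 0.25, 0.50            # two radial mesh distances [m]
p1, p2 = 1.0, 0.5              # parameter values stored at r1 and r2
print(interp_reciprocal(0.25, r1, r2, p1, p2))   # -> 1.0 (endpoint)
print(interp_reciprocal(0.50, r1, r2, p1, p2))   # -> 0.5 (endpoint)
```

Interpolating in 1/r reflects the roughly reciprocal-distance behavior of spherical spreading, which is presumably why it fits the filter parameters better than interpolation in r.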
[0101]
Using the methods and apparatus described herein, for example when analyzing the acoustic
characteristics of machines, motors, engines, or vehicles such as cars, various sound and noise
sources, such as vibrating objects, can be identified.
[0102]
The embodiments of the method described herein may be implemented by means of hardware
comprising several distinct elements, and / or at least in part by means of a suitably programmed
microprocessor.
In the device claims enumerating several means, several of these means can be embodied by one
and the same item of hardware, component or article. The mere fact that certain measures are
recited in mutually different dependent claims or described in different embodiments does not
indicate that a combination of these measures cannot be used to advantage.
[0103]
It should be emphasized that the term "comprises/comprising", as used herein, specifies the
presence of stated features, elements, steps or components, but does not exclude the presence or
addition of one or more other features, elements, steps, components or groups thereof.