JP2017532898

код для вставкиСкачать
Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2017532898
Abstract: An audio system is described that includes one or more speaker arrays that emit audio corresponding to one or more pieces of audio program content into associated zones within a listening area. One or more sets of beam pattern attributes may be generated using parameters of the audio system (e.g., the locations of the speaker arrays and audio sources), the zones, the users, the audio program content, and the listening area. The beam pattern attributes define a set of beams used to generate audio beams for the channels of the audio program content to be reproduced in each zone. The beam pattern attributes may be updated as changes are detected in the listening environment. By adapting to these changing conditions, the audio system can reproduce audio that accurately represents each piece of audio program content in its corresponding zone.
Audio system with configurable zones
[0001]
Disclosed is an audio system that can be configured to output audio beams representing one or more channels of audio program content into distinct zones based on the positioning of users, audio sources, and/or speaker arrays. Other embodiments are also described.
[0002]
A speaker array can play audio program content to a user through the use of one or more audio beams. For example, a set of speaker arrays can play the front left, front center, and front right channels of a piece of audio program content (e.g., a music track or the audio track of a movie). Although speaker arrays offer a wide degree of customization through the generation of audio beams, conventional speaker array systems must be manually reconfigured each time the listening environment changes, for example when a new speaker array is added to the system, a speaker array is moved within the listening environment/area, or an audio source is added or changed. This need for manual configuration is burdensome and inconvenient as the listening environment continually changes (e.g., as speaker arrays are added to the listening environment or moved to new positions within it). Moreover, these conventional systems are limited to playing a single piece of audio program content through a single set of speaker arrays.

03-05-2019
[0003]
SUMMARY An audio system is disclosed that includes one or more speaker arrays that emit audio corresponding to one or more pieces of audio program content into associated zones within a listening area. In one embodiment, the zones correspond to areas within the listening area in which the associated pieces of audio program content are designated to be played. For example, a first zone may be defined as an area in which a number of users are located in front of a first audio source (e.g., a television). In this case, the audio program content generated and/or received by the first audio source is associated with the first zone and played back in that zone. Continuing this example, a second zone may be defined as an area in which a single user is located near a second audio source (e.g., a radio). In this case, the audio program content generated and/or received by the second audio source is associated with the second zone.
[0004]
One or more sets of beam pattern attributes may be generated using parameters of the audio system (e.g., the locations of the speaker arrays and audio sources), the zones, the users, the audio program content, and/or the listening area. The beam pattern attributes define a set of beams used to generate audio beams for the channels of the audio program content to be reproduced in each zone. For example, the beam pattern attributes may indicate gain values, delay values, beam type pattern values, and beam angle values that may be used to generate the beams for each zone.
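As a concrete illustration, the per-channel, per-array attribute set described above can be modeled as a simple record. This is a minimal sketch in Python; the field names and example values are illustrative assumptions, not taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class BeamAttributes:
    """One set of beam attributes for a (channel, speaker array) pair."""
    gain: float        # linear gain applied to the channel
    delay_ms: float    # playback delay in milliseconds
    beam_type: str     # e.g. "cardioid", "omnidirectional", "figure-eight"
    angle_deg: float   # steering angle of the beam

# Hypothetical attributes for one front-left beam
front_left = BeamAttributes(gain=0.8, delay_ms=2.5,
                            beam_type="cardioid", angle_deg=30.0)
```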
[0005]
In one embodiment, the beam pattern attributes may be updated when a change is detected in the listening area. For example, changes may be detected within the audio system (e.g., movement of a speaker array) or within the listening area (e.g., movement of a user). In this way, the changing conditions of the listening environment can be continuously accounted for in the sound produced by the audio system. By adapting to these changing conditions, the audio system can reproduce audio that accurately represents each piece of audio program content in its corresponding zone.
[0006]
The above summary does not contain an exhaustive list of all aspects of the invention. The present invention is intended to include all systems and methods that can be practiced from all suitable combinations of the various aspects summarized above, as well as those disclosed in the detailed description below and particularly pointed out in the claims filed with the application. Such combinations have particular advantages not specifically recited in the above summary.
[0007]
Embodiments of the present invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like references indicate similar elements. It should be noted that references to "an" or "one" embodiment of the present invention in this disclosure are not necessarily to the same embodiment, and such references mean at least one embodiment.
[0008]
FIG. 1A is a diagram of an audio system in a listening area, according to one embodiment. FIG. 1B is a diagram of an audio system in a listening area, according to another embodiment. FIG. 2A is a component diagram of an audio source, according to one embodiment. FIG. 2B is a component diagram of a speaker array, according to one embodiment. FIG. 3A is a side view of a speaker array, according to one embodiment. FIG. 3B is a cross-sectional top view of a speaker array, according to one embodiment. FIG. 4 illustrates three exemplary beam patterns, according to one embodiment. FIG. 5A illustrates two speaker arrays in a listening area, according to one embodiment. FIG. 5B illustrates four speaker arrays in a listening area, according to one embodiment. FIG. 6 illustrates a method for driving one or more speaker arrays to generate audio for one or more zones in a listening area based on one or more pieces of audio program content, according to one embodiment. FIG. 7 is a component diagram of a rendering strategy unit, according to one embodiment. FIG. 8 illustrates beam attributes used to generate beams in separate zones of a listening area, according to one embodiment. FIG. 9A is an overhead view of a listening area with beams generated for a single zone, according to one embodiment. FIG. 9B is an overhead view of a listening area with beams generated for two zones, according to one embodiment.
[0009]
Some embodiments are described with reference to the accompanying drawings. Although many
details are set forth, it is understood that some embodiments of the present invention may be
practiced without these details. In other instances, well known circuits, structures, and
techniques have not been shown in detail in order not to obscure the understanding of the
present description.
[0010]
FIG. 1A shows a diagram of an audio system 100 in a listening area 101. The audio system 100 may include an audio source 103A and a set of speaker arrays 105. The audio source 103A may be coupled to the speaker arrays 105 to drive individual transducers 109 in the speaker arrays 105 to emit various sound beam patterns for the users 107. In one embodiment, the speaker arrays 105 may be configured to generate audio beam patterns that represent individual channels of multiple pieces of audio program content. Playback of each of these pieces of audio program content may be aimed at a separate audio zone 113 within the listening area 101. For example, the speaker arrays 105 may generate beam patterns that represent the front left, front right, and front center channels of a first piece of audio program content and are directed at a first zone 113A. In this example, one or more of the same speaker arrays 105 used for the first piece of audio program content may simultaneously generate beam patterns that represent the front left and front right channels of a second piece of audio program content and are directed at a second zone 113B. In other embodiments, different sets of speaker arrays 105 may be selected for each of the first and second zones 113A and 113B. Techniques for driving these speaker arrays 105 to produce audio beams for corresponding separate pieces of audio program content and separate zones 113 are described in greater detail below.
[0011]
As shown in FIG. 1A, the listening area 101 is a room or another enclosed space. For example, the listening area 101 may be a room in a house, a theater, or the like. Although shown as an enclosed space, in other embodiments the listening area 101 may be an outdoor area or location, including an outdoor arena. In each embodiment, the speaker arrays 105 may be placed in the listening area 101 to produce sound that will be perceived by the set of users 107.
[0012]
FIG. 2A shows a component diagram of an exemplary audio source 103A according to one embodiment. Although shown in FIG. 1A as a television device, the audio source 103A may be any electronic device capable of transmitting audio content to the speaker arrays 105 such that the speaker arrays 105 can output audio into the listening area 101. For example, in other embodiments the audio source 103A may be a desktop computer, a laptop computer, a tablet computer, a home theater receiver, a set-top box, a personal video player, a DVD player, a Blu-ray player, a gaming system, and/or a mobile device (e.g., a smartphone).
[0013]
Although shown in FIG. 1A with a single audio source 103, in some embodiments the audio system 100 may include multiple audio sources 103 coupled to the speaker arrays 105. For example, as shown in FIG. 1B, both audio sources 103A and 103B may be coupled to the speaker arrays 105. In this configuration, the audio sources 103A and 103B may simultaneously drive each of the speaker arrays 105 to output audio corresponding to separate pieces of audio program content. For example, the audio source 103A may be a television that outputs audio into zone 113A using speaker arrays 105A-105C, while the audio source 103B may be a radio that outputs audio into zone 113B using speaker arrays 105A and 105C. The audio source 103B may be configured in a similar manner to that shown in FIG. 2A for the audio source 103A.
[0014]
As shown in FIG. 2A, the audio source 103A may include a hardware processor 201 and/or a memory unit 203. The processor 201 and the memory unit 203 are used generically here to represent any suitable combination of programmable data processing components and data storage that performs the operations needed to implement the various functions and operations of the audio source 103A. The processor 201 may be an application processor typically found in a smartphone, while the memory unit 203 may refer to microelectronic random access memory. An operating system may be stored in the memory unit 203 along with application programs specific to the various functions of the audio source 103A, which are to be run or executed by the processor 201 to perform the various functions of the audio source 103A. For example, a rendering strategy unit 209 may be stored in the memory unit 203. As described in greater detail below, the rendering strategy unit 209 may be used to generate beam attributes for each channel of the pieces of audio program content to be played in the listening area 101. These beam attributes may be used to output audio beams into the corresponding audio zones 113 within the listening area 101.
[0015]
In one embodiment, the audio source 103A may include one or more audio inputs 205 for receiving audio signals from external and/or remote devices. For example, the audio source 103A may receive audio signals from a streaming media service and/or a remote server. The audio signals may represent one or more channels of a piece of audio program content (e.g., a music track or the audio track of a movie). For example, a single signal corresponding to a single channel of a multichannel piece of audio program content may be received by an input 205 of the audio source 103A. In another example, a single signal may correspond to multiple channels of a piece of audio program content that are multiplexed onto the single signal.
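A minimal sketch of how such a multiplexed signal might be separated into its channels, assuming simple sample-interleaved multiplexing (the disclosure does not specify the multiplexing scheme):

```python
def demultiplex(samples, num_channels):
    """Split an interleaved PCM stream into per-channel sample lists.

    Assumes sample-interleaved multiplexing (L R L R ... for stereo),
    which is an illustrative assumption, not the patent's format.
    """
    return [samples[ch::num_channels] for ch in range(num_channels)]

# Stereo example: left samples 1, 2, 3 interleaved with right 100, 200, 300
interleaved = [1, 100, 2, 200, 3, 300]
left, right = demultiplex(interleaved, 2)
```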
[0016]
In one embodiment, the audio source 103A may include a digital audio input 205A that receives digital audio signals from an external device and/or a remote device. For example, the audio input 205A may be a TOSLINK connector or a digital wireless interface (e.g., a wireless local area network (WLAN) adapter or a Bluetooth® receiver). In one embodiment, the audio source 103A may include an analog audio input 205B that receives analog audio signals from an external device. For example, the audio input 205B may be a binding post, a Fahnestock clip, or a phono plug designed to receive a wire or conduit and a corresponding analog signal.
[0017]
Although described as receiving pieces of audio program content from an external or remote source, in some embodiments pieces of audio program content may be stored locally on the audio source 103A. For example, one or more pieces of audio program content may be stored in the memory unit 203.
[0018]
In one embodiment, the audio source 103A may include an interface 207 for communicating with the speaker arrays 105 or other devices (e.g., remote audio/video streaming services). The interface 207 may communicate with the speaker arrays 105 over a wired medium (e.g., conduit or wire). In another embodiment, the interface 207 may communicate with the speaker arrays 105 through a wireless connection, as shown in FIGS. 1A and 1B. For example, the network interface 207 may utilize one or more wireless protocols and standards for communicating with the speaker arrays 105, including the IEEE 802.11 suite of standards, the Global System for Mobile Communications (GSM) standard, the Code Division Multiple Access (CDMA) standard, the Long Term Evolution (LTE) standard, and/or the Bluetooth standards.
[0019]
As shown in FIG. 2B, the speaker arrays 105 may receive audio signals corresponding to audio channels from the audio source 103A through a corresponding interface 212. These audio signals may be used to drive one or more transducers 109 in the speaker arrays 105. Like the interface 207, the interface 212 may utilize wired protocols and standards and/or one or more wireless protocols and standards, including the IEEE 802.11 suite of standards, the Global System for Mobile Communications (GSM) standard, the Code Division Multiple Access (CDMA) standard, the Long Term Evolution (LTE) standard, and/or the Bluetooth standards. In some embodiments, the speaker arrays 105 may include digital-to-analog converters 217, power amplifiers 211, delay circuits 213, and beamformers 215 for driving the transducers 109 in the speaker arrays 105.
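The role of the delay circuits 213 and beamformers 215 can be illustrated with a toy delay-and-sum sketch: each transducer receives the same channel signal, shifted by a per-transducer delay and scaled by a gain. This is a deliberately simplified model (integer sample delays, no per-band filtering), not the patented implementation:

```python
def drive_transducers(channel, delays, gains):
    """Produce one feed per transducer from a single channel signal.

    Each transducer's feed is the channel delayed by an integer number
    of samples and scaled by a gain; real beamformers use fractional
    delays and frequency-dependent filters.
    """
    outputs = []
    for d, g in zip(delays, gains):
        shifted = [0.0] * d + [g * s for s in channel]
        outputs.append(shifted[:len(channel)])  # keep original length
    return outputs

# Two transducers: the second is delayed 2 samples and attenuated
feeds = drive_transducers([1.0, 1.0, 1.0, 1.0], delays=[0, 2], gains=[1.0, 0.5])
```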
[0020]
Although described and illustrated separately from audio source 103A, in some embodiments,
one or more components of audio source 103A may be incorporated into speaker array 105. For
example, one or more of the speaker arrays 105 may include a hardware processor 201, a
memory unit 203, and one or more audio inputs 205.
[0021]
FIG. 3A shows a side view of one of the speaker arrays 105 according to one embodiment. As
shown in FIG. 3A, the speaker array 105 can house a large number of transducers 109 in a
curved cabinet 111. As shown, the cabinet 111 is cylindrical, but in other embodiments may be
of any shape including polyhedrons, frustums, cones, pyramids, triangular prisms, hexagonal
prisms, or spheres.
[0022]
FIG. 3B shows a cross-sectional overhead view of the speaker array 105 according to one embodiment. As shown in FIGS. 3A and 3B, the transducers 109 in the speaker array 105 encircle the cabinet 111 so as to cover its curved face. The transducers 109 may be any combination of full-range drivers, mid-range drivers, subwoofers, woofers, and tweeters. Each of the transducers 109 may use a lightweight diaphragm, or cone, connected to a rigid basket, or frame, via a flexible suspension that constrains a coil of wire (e.g., a voice coil) to move axially through a cylindrical magnetic gap. When an electrical audio signal is applied to the voice coil, a magnetic field is created by the electric current in the voice coil, making it a variable electromagnet. The coil and the magnetic system of the transducer 109 interact, generating a mechanical force that moves the coil (and thus the attached cone) back and forth, thereby reproducing sound under the control of the applied electrical audio signal coming from an audio source, such as the audio source 103A. Although electromagnetic dynamic loudspeaker drivers are described for use as the transducers 109, those skilled in the art will recognize that other types of loudspeaker drivers, such as piezoelectric, planar electromagnetic, and electrostatic drivers, are possible.
[0023]
Each transducer 109 may be individually and separately driven to produce sound in response to separate and discrete audio signals received from the audio source 103A. By allowing the transducers 109 in the speaker arrays 105 to be driven individually and separately according to different parameters and settings (including filters that control delays, amplitude variations, and phase variations across the audible frequency range), the speaker arrays 105 can produce numerous directivity/beam patterns that accurately represent each channel of the pieces of audio program content output by the audio sources 103. For example, in one embodiment, the speaker arrays 105 may individually or collectively produce one or more of the directivity patterns shown in FIG. 4.
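As an illustration of how per-transducer delays can steer a beam from a cylindrical array like the one in FIGS. 3A and 3B, the far-field steering delays for evenly spaced transducers can be sketched as follows. The geometry and constants are illustrative assumptions, not taken from the disclosure:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def steering_delays(num_transducers, radius_m, beam_angle_deg):
    """Far-field steering delays (in seconds) for a circular array.

    Transducer i sits at azimuth phi_i on a cabinet of the given radius;
    delaying each by (radius - projection) / c aligns their wavefronts
    in the steering direction. Illustrative geometry only.
    """
    theta = math.radians(beam_angle_deg)
    delays = []
    for i in range(num_transducers):
        phi = 2 * math.pi * i / num_transducers
        proj = radius_m * math.cos(phi - theta)   # position along beam axis
        delays.append((radius_m - proj) / SPEED_OF_SOUND)  # non-negative
    return delays

delays = steering_delays(num_transducers=8, radius_m=0.1, beam_angle_deg=0.0)
```

The transducer facing the steering direction gets zero delay and the one on the opposite side gets the largest, so all contributions arrive in phase along the beam axis.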
[0024]
Although shown in FIGS. 1A and 1B as including three speaker arrays 105, in other embodiments a different number of speaker arrays 105 may be used. For example, as shown in FIG. 5A, two speaker arrays 105 may be used, while as shown in FIG. 5B, four speaker arrays 105 may be used within the listening area 101. The number, type, and positioning of the speaker arrays 105 may vary over time. For example, a user 107 may move a speaker array 105 during playback of a video and/or add a speaker array 105 to the system 100. Further, although shown as including one audio source 103A (FIG. 1A) or two audio sources 103A and 103B (FIG. 1B), like the speaker arrays 105, the number, type, and positioning of the audio sources 103 may vary over time.
[0025]
In one embodiment, the layout of the speaker array 105, audio source 103, and user 107 may be
determined using various sensors and / or input devices, as described in further detail below.
Audio beam attributes may be generated for each channel of audio program content to be
reproduced in the listening area 101 based on the determined layout of the speaker array 105,
the audio source 103, and / or the user 107. These beam attributes may be used to output an
audio beam to the corresponding audio zone 113, as described in more detail below.
[0026]
Turning now to FIG. 6, a method 600 for driving the one or more speaker arrays 105 to generate sound for one or more zones 113 in the listening area 101 based on one or more pieces of audio program content will be discussed. Each operation of the method 600 may be performed by one or more components of the audio sources 103A/103B and/or the speaker arrays 105. For example, one or more of the operations of the method 600 may be performed by the rendering strategy unit 209 of an audio source 103. FIG. 7 shows a component diagram of the rendering strategy unit 209 according to one embodiment. Each element of the rendering strategy unit 209 shown in FIG. 7 will be described in relation to the method 600 below.
[0027]
As noted above, in one embodiment one or more components of the audio sources 103 may be incorporated into one or more of the speaker arrays 105. For example, one of the speaker arrays 105 may be designated as a master speaker array 105. In this embodiment, the operations of the method 600 may be performed solely or primarily by this master speaker array 105, and data generated by the master speaker array 105 may be distributed to the other speaker arrays 105, as described in greater detail below in relation to the method 600.
[0028]
Although the operations of the method 600 are described and shown in a particular order, in other embodiments the operations may be performed in a different order. In some embodiments, two or more operations may be performed concurrently or during overlapping time periods.
[0029]
In one embodiment, the method 600 may begin at operation 601 with the receipt of one or more audio signals representing pieces of audio program content. In one embodiment, the one or more pieces of audio program content may be received at operation 601 by one or more of the speaker arrays 105 (e.g., the master speaker array 105) and/or an audio source 103. For example, signals corresponding to the pieces of audio program content may be received at operation 601 by one or more of the audio inputs 205 and/or the content redistribution and transfer unit 701. The pieces of audio program content may be received at operation 601 from various sources, including streaming internet services, set-top boxes, local or remote computers, personal audio and video devices, and the like. Although the audio signals are described as being received from a remote or external source, in some embodiments the signals may instead originate from, or be generated by, an audio source 103 and/or a speaker array 105.
[0030]
As noted above, each of the audio signals may represent a piece of audio program content (e.g., a music track or the audio track of a movie) to be played back to the users 107 in respective zones 113 of the listening area 101 through the speaker arrays 105. In one embodiment, each piece of audio program content may include one or more audio channels. For example, a piece of audio program content may include five audio channels: a front left channel, a front center channel, a front right channel, a left surround channel, and a right surround channel. In other embodiments, 5.1, 7.1, or 9.1 multichannel audio streams may be used. Each of these audio channels may be represented by a corresponding signal received at operation 601, or together by a single signal.
[0031]
Upon receiving at operation 601 the one or more signals representing one or more pieces of audio program content, the method 600 may determine one or more parameters describing 1) characteristics of the listening area 101, 2) the layout/position of the speaker arrays 105, 3) the position of the users 107, 4) characteristics of the pieces of audio program content, 5) the layout of the audio sources 103, and/or 6) characteristics of each audio zone 113. For example, at operation 603 the method 600 may determine characteristics of the listening area 101. These characteristics may include the size and geometry of the listening area 101 (e.g., the positions of walls, floors, and ceilings in the listening area 101), the reverberation characteristics of the listening area 101, and/or the positions of objects within the listening area 101 (e.g., the positions of sofas, tables, etc.). In one embodiment, these characteristics may be determined through the use of user input 709 (e.g., a mouse, keyboard, touch screen, or any other input device) and/or sensor data 711 (e.g., still image or video camera data and audio beacon data). For example, images from a camera may be used to determine the size of, and obstacles within, the listening area 101; data from an audio beacon using audible or inaudible test sounds may indicate the reverberation characteristics of the listening area 101; and/or the user 107 may manually indicate the size and layout of the listening area 101 using an input device 709. The input devices 709 and the sensors that produce the sensor data 711 may be integrated within the audio sources 103 and/or the speaker arrays 105, or they may be part of external devices (e.g., mobile devices in communication with the audio sources 103 and/or the speaker arrays 105).
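The audio-beacon measurement mentioned above can be illustrated with a toy time-of-flight estimate: correlate the emitted test signal against a microphone recording, then convert the best-matching lag into a distance. This brute-force sketch stands in for a measurement method the disclosure leaves unspecified (real systems would use FFT-based correlation and chirp test signals):

```python
def estimate_distance(emitted, recorded, sample_rate, speed_of_sound=343.0):
    """Estimate speaker-to-microphone distance from a test signal.

    Finds the lag (in samples) where the recorded signal best matches
    the emitted test audio, then converts lag -> time -> distance.
    """
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(recorded) - len(emitted) + 1):
        score = sum(e * recorded[lag + i] for i, e in enumerate(emitted))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / sample_rate * speed_of_sound

pulse = [0.0, 1.0, -1.0, 0.0]              # illustrative test signal
mic = [0.0] * 10 + pulse + [0.0] * 5       # pulse arrives 10 samples late
distance = estimate_distance(pulse, mic, sample_rate=48000)
```

At 48 kHz, a 10-sample lag corresponds to roughly 7 cm of travel.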
[0032]
In one embodiment, the method 600 may determine the layout and positioning of the speaker arrays 105 in the listening area 101 and/or in each zone 113 at operation 605. In one embodiment, similar to operation 603, operation 605 may be performed through the use of user input 709 and/or sensor data 711. For example, test sounds may be sequentially or simultaneously emitted by each of the speaker arrays 105 and sensed by a corresponding set of microphones. Based on these sensed sounds, operation 605 may determine the layout and positioning of each of the speaker arrays 105 in the listening area 101 and/or in the zones 113. In another example, the user 107 may assist in determining the layout and positioning of the speaker arrays 105 in the listening area 101 and/or in the zones 113 through the use of user input 709. In this example, the user 107 may manually indicate the positions of the speaker arrays 105 using a photo or video stream of the listening area 101. This layout and positioning of the speaker arrays 105 may include the distances between the speaker arrays 105, the distances between the speaker arrays 105 and one or more users 107, the distances between the speaker arrays 105 and one or more audio sources 103, and/or the distances between the speaker arrays 105 and one or more objects (e.g., walls, sofas, etc.) in the listening area 101 or the zones 113.
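One way such pairwise distances could be turned into positions is classical 2-D trilateration. The disclosure does not prescribe a method, so the following is an illustrative sketch using three reference points with known positions:

```python
def locate(anchors, distances):
    """2-D trilateration: recover an (x, y) position from measured
    distances to three reference points with known positions.

    Subtracting the first range equation from the other two yields a
    2x2 linear system; noisy measurements would need least squares.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# A speaker array at (1, 2), ranged from three known points
x, y = locate([(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)],
              [5 ** 0.5, 13 ** 0.5, 5 ** 0.5])
```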
[0033]
In one embodiment, the method 600 may determine the position of each user 107 in the listening area 101 and/or in each zone 113 at operation 607. In one embodiment, similar to operations 603 and 605, operation 607 may be performed through the use of user input 709 and/or sensor data 711. For example, captured images/videos of the listening area 101 and/or the zones 113 may be analyzed to determine the position of each user 107 in the listening area 101 and/or in each zone 113. The analysis may include the use of facial recognition to detect and determine the positions of the users 107. In other embodiments, microphones may be used to detect the positions of the users 107 in the listening area 101 and/or in the zones 113. The positions of the users 107 may be determined relative to one or more speaker arrays 105, one or more audio sources 103, and/or one or more objects in the listening area 101 or the zones 113. In some embodiments, other types of sensors may be used to detect the positions of the users 107, including global positioning sensors, motion detection sensors, microphones, and the like.
[0034]
In one embodiment, the method 600 may determine characteristics of the one or more received pieces of audio program content at operation 609. In one embodiment, the characteristics may include the number of channels in each piece of audio program content, the frequency range of each piece of audio program content, and/or the content type of each piece of audio program content (e.g., music, speech, or sound effects). As described in greater detail below, this information may be used to determine the number or type of speaker arrays 105 needed to play back each piece of audio program content.
[0035]
In one embodiment, the method 600 may determine the position of each audio source 103 in the listening area 101 and/or in each zone 113 at operation 611. In one embodiment, similar to operations 603, 605, and 607, operation 611 may be performed through the use of user input 709 and/or sensor data 711. For example, captured images/videos of the listening area 101 and/or the zones 113 may be analyzed to determine the position of each audio source 103 in the listening area 101 and/or in each zone 113. The analysis may include the use of pattern recognition to detect and determine the positions of the audio sources 103. The positions of the audio sources 103 may be determined relative to one or more speaker arrays 105, one or more users 107, and/or one or more objects in the listening area 101 or the zones 113.
[0036]
At operation 613, the method 600 may determine/define the zones 113 within the listening area 101. The zones 113 represent segments of the listening area 101 that are associated with corresponding pieces of audio program content. For example, as described above and shown in FIGS. 1A and 1B, a first piece of audio program content may be associated with zone 113A while a second piece of audio program content may be associated with zone 113B. In this example, the first piece of audio program content is designated to be played back in zone 113A while the second piece of audio program content is designated to be played back in zone 113B. Although shown as circular, the zones 113 may be defined in any shape and may be of any size. In some embodiments, the zones 113 may overlap and/or may encompass the entire listening area 101.
[0037]
In one embodiment, the determination/definition of the zones 113 in the listening area 101 may be performed automatically based on the determined positions of the users 107, the determined positions of the audio sources 103, and/or the determined positions of the speaker arrays 105. For example, upon determining that users 107A and 107B are located near the audio source 103A (e.g., a television) while users 107C and 107D are located near the audio source 103B (e.g., a radio), operation 613 may define a first zone 113A around users 107A and 107B and a second zone 113B around users 107C and 107D. In another embodiment, the user 107 may manually define the zones using user input 709. For example, the user 107 may use a keyboard, mouse, touch screen, or another input device to indicate the parameters of one or more zones 113 in the listening area 101. In one embodiment, the definition of a zone 113 may include its size, its shape, and/or its position relative to another zone and/or another object (e.g., a user 107, an audio source 103, a speaker array 105, a wall in the listening area 101, etc.). This definition may also include the association of pieces of audio program content with each zone 113.
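The automatic zone definition described above can be sketched as a nearest-source grouping of users. The grouping rule, positions, and names below are illustrative assumptions; the disclosure leaves the grouping method open:

```python
import math

def define_zones(users, sources):
    """Assign each user to the zone of the nearest audio source.

    `users` and `sources` map names to (x, y) positions; returns
    {source_name: [user_name, ...]}. A deliberately simple
    nearest-neighbour rule for illustration.
    """
    zones = {name: [] for name in sources}
    for user, upos in users.items():
        nearest = min(sources, key=lambda s: math.dist(upos, sources[s]))
        zones[nearest].append(user)
    return zones

# Users 107A/107B sit near source 103A; 107C/107D near source 103B
users = {"107A": (1, 1), "107B": (2, 1), "107C": (8, 5), "107D": (9, 5)}
sources = {"103A": (1.5, 0), "103B": (9, 6)}
zones = define_zones(users, sources)
```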
[0038]
As shown in FIG. 6, each of the operations 603, 605, 607, 609, 611, and 613 may be performed concurrently. However, in other embodiments, one or more of the operations 603, 605, 607, 609, 611, and 613 may be performed consecutively or in an otherwise non-overlapping fashion. In one embodiment, one or more of the operations 603, 605, 607, 609, 611, and 613 may be performed by a playback zone/mode generator 705 of the rendering strategy unit 209.
[0039]
After determining one or more parameters describing 1) characteristics of the listening area 101, 2) the layout/position of the speaker arrays 105, 3) the position of the users 107, 4) characteristics of the audio streams, 5) the layout of the audio sources 103, and/or 6) characteristics of each audio zone 113, the method 600 may move to operation 615. At operation 615, the pieces of audio program content received at operation 601 may be remixed to produce one or more audio channels for each piece of audio program content. As noted above, each piece of audio program content received at operation 601 may include multiple audio channels. At operation 615, audio channels may be extracted from these pieces of audio program content based on the capabilities and requirements of the audio system 100 (e.g., the number, type, and positioning of the speaker arrays 105). In one embodiment, the remixing at operation 615 may be performed by a mixing unit 703 of the content redistribution and transfer unit 701.
[0040]
In one embodiment, the optional mixing of each audio program content at operation 615 may take into account the parameters / characteristics derived by operations 603, 605, 607, 609, 611, and 613. For example, operation 615 may determine that the number of speaker arrays 105 is not sufficient to represent an ambience or surround audio channel of the audio program content. Accordingly, operation 615 may mix the one or more audio program content received at operation 601 without an ambience and / or surround channel. Conversely, if it is determined, based on the parameters derived by operations 603, 605, 607, 609, 611, and 613, that the number of speaker arrays 105 is sufficient to generate an ambience or surround audio channel, operation 615 may extract ambience and / or surround channels from the one or more audio program content received at operation 601.
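For illustration only, the channel-selection decision described above can be sketched as follows. The function and channel names are invented and the threshold is an assumption; the patent itself does not specify this logic.

```python
# Hypothetical sketch of operation 615's remix decision: extract surround /
# ambience channels only when enough speaker arrays are available to beam
# them; otherwise fall back to the front channels alone.
def select_channels(program_channels, num_speaker_arrays, surround_threshold=4):
    """Return the subset of channels to render for one audio program content.

    program_channels: channel names present in the received program.
    num_speaker_arrays: how many speaker arrays the audio system detected.
    surround_threshold: assumed minimum array count for surround beams.
    """
    front = [c for c in program_channels if c in ("left", "center", "right")]
    surround = [c for c in program_channels if c.startswith("surround")]
    if num_speaker_arrays >= surround_threshold:
        return front + surround   # enough arrays: keep surround channels
    return front                  # too few arrays: mix without surround

channels = ["left", "center", "right", "surround_left", "surround_right"]
print(select_channels(channels, 4))
print(select_channels(channels, 2))
```

With four arrays all five channels survive the remix; with two arrays the surround channels are dropped, mirroring the two cases described in the paragraph above.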
[0041]
After the optional mixing of the received audio program content at operation 615, operation 617 may generate a set of audio beam attributes corresponding to each channel of the audio program content to be output to each corresponding zone 113. In one embodiment, the attributes may include gain values, delay values, beam-type pattern values (e.g., cardioid, omnidirectional, and figure-eight beam patterns), and/or beam angle values (e.g., 0° to 180°). Each set of beam attributes may be used to generate a corresponding beam pattern for one or more channels of the audio program content. For example, as shown in FIG. 8, the beam attributes correspond to each of the Q audio channels of the one or more audio program content and the N speaker arrays 105, producing a Q × N matrix of gain values, delay values, beam-type pattern values, and beam angle values. These beam attributes cause the speaker arrays 105 to generate audio beams for the corresponding audio program content that are associated with and focused on the zones 113 in the listening area 101. As changes occur within the listening environment (e.g., in the audio system 100, the listening area 101, and/or the zones 113), the beam attributes may be adjusted to account for these changes, as described in further detail below. In one embodiment, the beam attributes may be generated at operation 617 using the beamforming algorithm unit 707.
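As an illustrative sketch only, the Q × N matrix of beam attributes described above could be represented as a structure per (channel, array) pair. The field names and placeholder values are invented, not taken from the patent.

```python
# Sketch of the Q x N beam attribute matrix from FIG. 8: one attribute set
# (gain, delay, beam-type pattern, beam angle) per audio channel per
# speaker array. Field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BeamAttributes:
    gain: float        # linear gain applied to the channel at this array
    delay_ms: float    # delay before emission, in milliseconds
    pattern: str       # e.g. "cardioid", "omnidirectional", "figure-eight"
    angle_deg: float   # beam steering angle, 0 to 180 degrees

def build_attribute_matrix(q_channels, n_arrays):
    """Build a Q x N matrix of placeholder beam attributes."""
    return [[BeamAttributes(gain=1.0, delay_ms=0.0,
                            pattern="cardioid", angle_deg=0.0)
             for _ in range(n_arrays)]
            for _ in range(q_channels)]

# Five channels across four speaker arrays, as in the FIG. 9A example.
matrix = build_attribute_matrix(q_channels=5, n_arrays=4)
print(len(matrix), len(matrix[0]))  # 5 4
```

Each entry `matrix[q][n]` would hold the attributes that speaker array `n` applies when contributing to the beam for channel `q`.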
[0042]
FIG. 9A illustrates an exemplary audio system 100 according to one embodiment. In this example, the speaker arrays 105A-105D may output audio corresponding to five channels of audio program content to zone 113A. In particular, the speaker array 105A outputs a left front beam and a left-center front beam, the speaker array 105B outputs a right front beam and a right-center front beam, the speaker array 105C outputs a left surround beam, and the speaker array 105D outputs a right surround beam. The left-center front and right-center front beams may collectively represent a front center channel, while the other four beams generated by the speaker arrays 105A-105D represent the remaining audio channels of the five-channel audio program content. For each of these six beams generated by the speaker arrays 105A-105D, operation 617 may generate a set of beam attributes based on one or more of the factors described above. The sets of beam attributes are used to generate the corresponding beams and may be updated as the conditions of the listening environment change.
[0043]
Although FIG. 9A corresponds to a single audio program content being played in a single zone (e.g., zone 113A), as shown in FIG. 9B, the speaker arrays 105A-105D may simultaneously generate audio beams for another audio program content to be played in another zone (e.g., zone 113B). As shown in FIG. 9B, the speaker arrays 105A-105D generate six beam patterns to represent the five channels of the audio program content described above in zone 113A, while the speaker arrays 105A and 105C may generate two additional beam patterns to represent a second, two-channel audio program content in zone 113B. In this example, operation 617 may generate beam attributes corresponding to the seven channels (i.e., five channels of the first audio program content and two channels of the second audio program content) reproduced by the speaker arrays 105A-105D. The sets of beam attributes are used to generate the corresponding beams and may be updated as the conditions of the listening environment change.
[0044]
In each case, the beam attributes may be generated with reference to each corresponding zone 113, the set of users 107 within that zone 113, and the corresponding audio program content. For example, the beam attributes for the first audio program content described above in relation to FIG. 9A may be generated based on the characteristics of zone 113A, the placement of the speaker arrays 105 relative to the users 107A and 107B, and the characteristics of the first audio program content. In contrast, the beam attributes for the second audio program content may be based on the characteristics of zone 113B, the placement of the speaker arrays 105 relative to the users 107C and 107D, and the characteristics of the second audio program content. Accordingly, each of the first and second audio program content may be played back in the respective audio zones 113A and 113B in a manner that corresponds to the conditions of those zones.
[0045]
After operation 617, operation 619 may transmit each set of beam attributes to the corresponding speaker array 105. For example, the speaker array 105A of FIG. 9B may receive three sets of beam pattern attributes: one each for the left front beam and the left-center front beam of the first audio program content, and one for a beam of the second audio program content. The speaker arrays 105 may use these beam attributes to continuously output the audio of each audio program content received at operation 601 in each corresponding zone 113.
[0046]
In one embodiment, each audio program content may be transmitted to the corresponding speaker arrays 105 together with its associated set of beam pattern attributes. In other embodiments, the audio program content may be transmitted to each speaker array 105 separately from the sets of beam pattern attributes.
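The two delivery options just described (attributes bundled with the audio, or sent separately) can be sketched as follows. The message layout is entirely invented for illustration; the patent does not define a transmission format.

```python
# Hypothetical message shapes for operation 619's two delivery options:
# (1) audio frames and beam pattern attributes bundled in one message, or
# (2) audio and attributes sent to the same speaker array separately.
def make_bundled_message(array_id, audio_frames, beam_attributes):
    """Option 1: one message carrying both audio and attributes."""
    return {"array": array_id,
            "audio": audio_frames,
            "attributes": beam_attributes}

def make_split_messages(array_id, audio_frames, beam_attributes):
    """Option 2: separate audio and attribute messages for the same array."""
    audio_msg = {"array": array_id, "audio": audio_frames}
    attr_msg = {"array": array_id, "attributes": beam_attributes}
    return audio_msg, attr_msg

bundled = make_bundled_message("105A", [0.0, 0.1], {"gain": 1.0})
audio_msg, attr_msg = make_split_messages("105A", [0.0, 0.1], {"gain": 1.0})
print(sorted(bundled))  # ['array', 'attributes', 'audio']
```

Either way, the receiving speaker array ends up with both the samples to play and the attributes that shape its beams.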
[0047]
Upon receiving the audio program content and the corresponding sets of beam pattern attributes, the speaker arrays 105 may drive each of the transducers 109 to generate the corresponding beam patterns in the corresponding zones 113 at operation 621. For example, as shown in FIG. 9B, the speaker arrays 105A-105D may generate the beam patterns of the two audio program content in zones 113A and 113B. As described above, each speaker array 105 may include corresponding digital-to-analog converters 217, power amplifiers 211, delay circuits 213, and a beamformer 215 for driving the transducers 109 to generate the beam patterns based on these beam pattern attributes and the audio program content.
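As a minimal illustration of how a beam angle value could translate into per-transducer delays, the following sketches classical delay-and-sum steering for a uniform linear array. The element spacing and the delay-and-sum method itself are assumptions; this is not the patent's beamformer 215.

```python
# Minimal delay-and-sum steering sketch: convert a beam angle value into the
# per-transducer delays that the delay circuits 213 would apply. Assumes a
# uniform linear array; spacing and speed of sound are illustrative.
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at roughly room temperature

def steering_delays(num_transducers, spacing_m, angle_deg):
    """Per-transducer delays (seconds) to steer a linear array toward
    angle_deg, measured from broadside (0 degrees)."""
    theta = math.radians(angle_deg)
    delays = [i * spacing_m * math.sin(theta) / SPEED_OF_SOUND
              for i in range(num_transducers)]
    offset = min(delays)              # normalize so all delays are >= 0
    return [d - offset for d in delays]

# Four transducers, 5 cm apart, steered 30 degrees off broadside.
d = steering_delays(num_transducers=4, spacing_m=0.05, angle_deg=30)
print([round(x * 1e6, 1) for x in d])  # microseconds: [0.0, 72.9, 145.8, 218.7]
```

Summing the transducer signals after these delays reinforces sound arriving along the steered direction, which is the basic mechanism a beamformer uses to focus a beam toward a zone.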
[0048]
At operation 623, the method 600 may determine whether anything has changed in the audio system 100, in the listening area 101, and/or in the zones 113 since operations 603, 605, 607, 609, 611, and 613 were performed. For example, a change may include movement of a speaker array 105, movement of a user 107, a change of audio program content, movement of another object in the listening area 101 and/or a zone 113, movement of an audio source 103, or redefinition of a zone 113. The change may be determined at operation 623 through the use of user input 709 and/or sensor data 711. For example, images of the listening area 101 and/or the zones 113 may be continuously examined to determine whether a change has occurred. Upon determining that a change has occurred in the listening area 101 and/or a zone 113, the method 600 may return to operations 603, 605, 607, 609, 611, and/or 613 to determine one or more parameters describing 1) the characteristics of the listening area 101, 2) the layout/position of the speaker arrays 105, 3) the position of the users 107, 4) the characteristics of the audio program content, 5) the layout of the audio sources 103, and/or 6) the characteristics of each audio zone 113. Using these data, new beam pattern attributes may be constructed using the same techniques described above. Conversely, if no change is detected at operation 623, the method 600 may continue to output beam patterns based on the beam pattern attributes previously generated at operation 621.
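One iteration of the change-detection loop just described can be sketched as below. All function and parameter names are invented for illustration; the patent does not prescribe this structure.

```python
# Hedged sketch of operation 623: compare a fresh snapshot of
# listening-environment parameters against the previous snapshot, and
# regenerate beam pattern attributes only when something has changed.
def monitor_step(previous_params, sense_environment, generate_attributes,
                 current_attributes):
    """One iteration of the change-detection loop.

    sense_environment(): returns a dict of environment parameters
        (array layout, user positions, zone definitions, ...).
    generate_attributes(params): rebuilds beam pattern attributes,
        standing in for re-running operations 603-617.
    """
    params = sense_environment()
    if params != previous_params:
        # A change was detected: derive new beam pattern attributes.
        return params, generate_attributes(params)
    # No change: keep driving the previously generated beam patterns.
    return previous_params, current_attributes

# Example with stub sensing and attribute generation:
readings = iter([{"users": 2}, {"users": 2}, {"users": 3}])
sense = lambda: next(readings)
gen = lambda p: ("attrs", p["users"])

state, attrs = monitor_step(None, sense, gen, None)        # first snapshot
state, attrs = monitor_step(state, sense, gen, attrs)      # unchanged
state, attrs = monitor_step(state, sense, gen, attrs)      # a user arrived
print(attrs)
```

The same structure would also accommodate the other trigger events mentioned below (elapsed time, system initialization) by treating them as additional reasons to regenerate the attributes.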
[0049]
Although operation 623 is described as detecting changes in the listening environment, in some embodiments operation 623 may determine whether another trigger event has occurred. For example, other trigger events may include the passage of time, initialization of the audio system 100, and so on. Upon detecting one or more of these trigger events, the method 600 may move to operations 603, 605, 607, 609, 611, and 613 to determine the parameters of the listening environment as described above.
[0050]
As described above, the method 600 may generate beam pattern attributes based on the placement/layout of the speaker arrays 105, the placement of the users 107, the characteristics of the listening area 101, the characteristics of the audio program content, and/or any other parameters of the listening environment. These beam pattern attributes may be used to drive the speaker arrays 105 to generate beams representing one or more channels of audio program content in separate zones 113 of the listening area. As changes occur in the listening area 101 and/or the zones 113, the beam pattern attributes may be updated to reflect the changed environment. Accordingly, the sound generated by the audio system 100 may continuously account for the variable conditions of the listening area 101 and the zones 113. By adapting to these changing conditions, the audio system 100 can play back audio that accurately represents each audio program content in the various zones 113.
[0051]
As explained above, an embodiment of the invention may be an article of manufacture in which a machine-readable medium (such as microelectronic memory) stores instructions that program one or more data processing components (generically referred to herein as a "processor") to perform the operations described above. In other embodiments, some of these operations may be performed by specific hardware components that contain hardwired logic (e.g., dedicated digital filter blocks and state machines). Those operations may alternatively be performed by any combination of programmed data processing components and fixed hardwired circuit components.
[0052]
While certain embodiments have been described and shown in the accompanying drawings, it is to be understood that such embodiments are merely illustrative of, and not restrictive on, the broad invention, and that the invention is not limited to the specific constructions and arrangements shown and described, since various other modifications may occur to those of ordinary skill in the art. The description is thus to be regarded as illustrative instead of limiting.