Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2017527223
Abstract The examples described herein include calibration of a device's microphone. An exemplary embodiment involves receiving first data indicative of a first audio signal detected by a microphone of a network device while the network device is located within a predetermined physical range of a microphone of a playback device, receiving second data indicative of a second audio signal detected by the microphone of the playback device, identifying a microphone calibration algorithm based on the first data and the second data, and applying the identified microphone calibration algorithm when performing a calibration function associated with the playback device.
Calibration of Playback Devices
Reference to Related Applications
[0001]
This application claims priority to U.S. Patent Application No. 14/481,522, filed on September 9, 2014, and U.S. Patent Application No. 14/644,136, filed on March 10, 2015, both of which are incorporated herein by reference in their entirety.
[0002]
The present application relates to consumer products, and more particularly to methods,
systems, products, features, services and other elements directed to media playback, and some
aspects thereof.
[0003]
In 2003, Sonos, Inc. filed one of its first patent applications, entitled "Method for Synchronizing Audio Playback between Multiple Networked Devices," and began selling a media playback system in 2005. Until then, the options for accessing and listening to digital audio in an out-loud setting were severely limited.
The Sonos Wireless HiFi System enables people to experience virtually unlimited music from many sources via one or more networked playback devices.
Through a software control application installed on a smartphone, tablet, or computer, people can play the music they want in any room equipped with a networked playback device. Also, for example, the controller can be used to stream a different song to each room that has a playback device, to group rooms for synchronized playback, or to listen to the same song in all rooms in synchrony.
[0004]
Given the growing interest in digital media so far, there is a need to further develop consumer
accessible technologies that can further enhance the listening experience.
[0005]
The features, aspects and advantages of the technology disclosed herein are better understood
with reference to the following description, the appended claims and the accompanying
drawings.
[0006]
Diagram showing the configuration of an exemplary media playback system in which an embodiment may be implemented
Functional block diagram of an exemplary playback device
Functional block diagram of an exemplary control device
Diagram showing an exemplary controller interface
Exemplary flow diagram of a first method of calibrating a playback device
Diagram showing an exemplary playback environment within which a playback device may be calibrated
Exemplary flow diagram of a second method of calibrating a playback device
Exemplary flow diagram of a third method of calibrating a playback device
Exemplary flow diagram of a first method of calibrating a microphone
Diagram showing an exemplary arrangement for calibrating a microphone
Exemplary flow diagram of a second method of calibrating a microphone
[0007]
While the drawings are intended to illustrate some exemplary embodiments, it is understood that
the invention is not limited to the arrangements and instrumentality shown in the drawings.
[0008]
I. Overview
Calibration of one or more playback devices for a playback environment using a microphone may involve the acoustic characteristics of the microphone.
In an embodiment, the acoustic characteristics of the microphone of the network device used to calibrate the one or more playback devices may be unknown.
[0009]
The embodiments disclosed herein relate to calibrating the microphone of a network device based on an audio signal detected by the microphone of the network device while the network device is located within a predetermined physical range of the microphone of the playback device.
[0010]
In an example, the calibration function may be at least partially coordinated and performed by
the network device.
In some cases, the network device may be a mobile device with a built-in microphone.
Also, the network device may be a controller used to control one or more playback devices.
[0011]
The microphone of the network device may detect the first audio signal while the network device
is located within the predetermined physical range of the microphone of the playback device.
In an example, the position within the predetermined physical range of the playback device's microphone may be a position above the playback device, behind the playback device, to the side of the playback device, in front of the playback device, or any other possible position.
[0012]
The network device may also receive data indicative of the second audio signal detected by the
microphone of the playback device. Both the first audio signal and the second audio signal may
include portions corresponding to the third audio signal reproduced by the one or more
playback devices. The one or more playback devices may include the playback device having the microphone within whose predetermined physical range the network device is located. The
first audio signal and the second audio signal may be detected simultaneously by the respective
microphones, or may be detected at different times. Data indicative of the second audio signal
may be received by the network device before or after the first audio signal is detected by the
microphone of the network device.
[0013]
The network device may then identify a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal. The network device may then apply the identified microphone calibration algorithm when performing a function associated with the playback device, e.g., a calibration function.
[0014]
In another example, the calibration function may be at least partially coordinated and performed by a computer. The computer may include, for example, a server in communication with the playback device and/or the network device.
[0015]
The computer may receive data from the network device indicative of the first audio signal
detected by the network device microphone while the network device is located within the
predetermined physical range of the playback device microphone. The computer may also
receive data indicative of the second audio signal detected by the microphone of the playback
device. The computer may then identify the microphone calibration algorithm based on the data
indicative of the first audio signal and the data indicative of the second audio signal. In some
cases, the computer may then apply the specified microphone calibration algorithm when
performing a function associated with the network device and the playback device, such as a
calibration function. In some cases, the computer may transmit data indicative of the identified microphone calibration algorithm to the network device, and the network device may apply the identified microphone calibration algorithm when performing functions associated with the playback device.
[0016]
In some cases, identifying the microphone calibration algorithm may include accessing a
database of microphone calibration algorithms and microphone acoustics. Thereby, the
microphone calibration algorithm may be specified based on the microphone acoustics of the
microphone of the network device. The microphone acoustical properties may be determined
based on data indicative of the first audio signal and data indicative of the second audio signal.
[0017]
In another case, identifying the microphone calibration algorithm may include calculating the
microphone calibration algorithm based on data indicative of the first audio signal and data
indicative of the second audio signal. For example, the microphone calibration algorithm may be calculated such that, when it is applied, the audio signal output by the microphone has normalized audio characteristics. For example, if the microphone acoustic characteristics include a low sensitivity at a particular frequency, the microphone calibration algorithm may compensate for the low sensitivity, for example by amplifying audio content detected by the microphone at that frequency.
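As a rough illustration of how such a calculation might work, the following Python sketch compares the spectrum of a recording frame from the uncharacterized microphone of the network device with the spectrum of a recording of the same playback made by a reference microphone, and derives a per-frequency correction gain that boosts frequencies where the device microphone is less sensitive. The function names, the FFT-based approach, and the parameters are illustrative assumptions rather than the implementation specified by this disclosure.

import numpy as np

def estimate_mic_calibration(device_frame, reference_frame, sample_rate, n_fft=2048):
    # Sketch: derive a per-frequency correction gain for an uncharacterized
    # microphone by comparing one frame of its recording against a frame of
    # the same playback recorded by a reference (known) microphone.
    device_spec = np.abs(np.fft.rfft(device_frame, n_fft))
    reference_spec = np.abs(np.fft.rfft(reference_frame, n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sample_rate)
    # Boost frequencies where the device microphone is less sensitive than
    # the reference; attenuate where it is more sensitive.
    gain = reference_spec / (device_spec + 1e-12)
    return freqs, gain

def apply_calibration(device_frame, gain, n_fft=2048):
    # Apply the per-frequency gain to a frame recorded by the same microphone.
    corrected_spec = np.fft.rfft(device_frame, n_fft) * gain
    return np.fft.irfft(corrected_spec, n_fft)[: len(device_frame)]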
[0018]
As described above, calibration of the microphone of the network device may be initiated when the microphone of the network device is to be used to perform a calibration function or the like associated with one or more playback devices, but the acoustic characteristics of the microphone, or a microphone calibration algorithm corresponding to the microphone, are not available. Thus, calibration of the microphone may be initiated by the device performing a calibration function associated with the one or more playback devices.
[0019]
Also, as mentioned above, the network device may be a controller used to control one or more
playback devices. Thus, in some cases, calibration of the microphone of the network device may
be initiated when the controller is set up to control one or more playback devices. Other
examples are also possible.
[0020]
In an example, the association between the identified calibration algorithm and one or more
characteristics, such as a model of the network device, may be stored as an entry in a database of
microphone calibration algorithms. The microphone calibration algorithm may then be identified
and applied when another network device has at least one of one or more characteristics of the
network device.
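The database entry described above could be as simple as a mapping from a device characteristic, such as the device model, to the identified calibration algorithm. The following minimal sketch shows how a calibration derived for one network device might be stored and later reused by another device of the same model; the model string and the representation of the algorithm as a per-frequency gain curve are assumptions made only for illustration.

from typing import Dict, List, Optional

# Hypothetical in-memory stand-in for the microphone calibration database;
# a deployed system would more likely use a server-side datastore.
calibration_db: Dict[str, List[float]] = {}

def store_calibration(device_model: str, calibration_gain: List[float]) -> None:
    # Associate the identified calibration algorithm (represented here as a
    # per-frequency gain curve) with a characteristic of the network device.
    calibration_db[device_model] = calibration_gain

def lookup_calibration(device_model: str) -> Optional[List[float]]:
    # Another network device sharing the same characteristic can reuse the
    # stored calibration instead of deriving it again.
    return calibration_db.get(device_model)

store_calibration("PhoneModelX", [1.0, 1.2, 0.9, 1.1])
assert lookup_calibration("PhoneModelX") == [1.0, 1.2, 0.9, 1.1]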
[0021]
As mentioned above, the present disclosure includes calibration of the microphone of the network device based on an audio signal detected by the microphone of the network device while the network device is located within the predetermined physical range of the microphone of the playback device. In one aspect, a network device is provided. The network device comprises a microphone, a processor, and a memory storing instructions executable by the processor to cause the network device to perform functions. The functions comprise: detecting a second audio signal by the microphone (i) while the playback device is playing a first audio signal and (ii) while the network device is moving from a first physical position to a second physical position; identifying an audio processing algorithm based on data indicative of the second audio signal; and transmitting data indicative of the identified audio processing algorithm to the playback device.
[0022]
In another aspect, a playback device is provided. The playback device comprises a processor and a memory storing instructions executable by the processor to cause the playback device to perform functions. The functions comprise: playing a first audio signal; receiving from the network device data indicative of a second audio signal detected by a microphone of the network device while the network device is moving from a first physical position to a second physical position in the playback environment; identifying an audio processing algorithm based on the data indicative of the second audio signal; and applying the identified audio processing algorithm when playing back audio content in the playback environment.
[0023]
In another aspect, a non-transitory computer readable recording medium is provided. The non-transitory computer readable recording medium stores instructions that, when executed, cause a computer to perform functions. The functions comprise: receiving from the network device data indicative of an audio signal detected by the microphone of the network device while the network device is moving from a first physical location to a second physical location in the playback environment; identifying an audio processing algorithm based on the data indicative of the audio signal; and transmitting data indicative of the audio processing algorithm to a playback device within the playback environment.
[0024]
In another aspect, a network device is provided. The network device comprises a microphone, a processor, and a memory storing instructions executable by the processor to cause the network device to perform functions. The functions comprise: detecting a first audio signal by the microphone of the network device while the network device is located within the predetermined physical range of the microphone of the playback device; receiving data indicative of a second audio signal detected by the microphone of the playback device; identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal; and applying the microphone calibration algorithm when performing a calibration function associated with the playback device.
[0025]
In another aspect, a computer is provided. The computer comprises a processor and a memory storing instructions executable by the processor to cause the computer to perform functions. The functions comprise: receiving from the network device data indicative of a first audio signal detected by a microphone of the network device; receiving data indicative of a second audio signal detected by a microphone of the playback device; identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal; and applying the microphone calibration algorithm when performing a calibration function associated with the network device and the playback device.
[0026]
In another aspect, a non-transitory computer readable recording medium is provided. The non-transitory computer readable recording medium stores instructions that, when executed, cause a computer to perform functions. The functions comprise: receiving from the network device data indicative of a first audio signal detected by the microphone of the network device while the network device is located within a predetermined physical range of the microphone of the playback device; receiving data indicative of a second audio signal detected by the microphone of the playback device; identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal; and storing in a database an association between the identified microphone calibration algorithm and one or more characteristics of the microphone of the network device.
[0027]
One skilled in the art will appreciate that the present disclosure includes other embodiments. In some of the examples described herein, reference may be made to functions performed by an actor such as a "user" and/or another entity, but it should be understood that such descriptions are for illustrative purposes only. Such exemplary actions by the actor should not be construed as required unless expressly required by the language of the claims themselves. One of ordinary skill in the art will appreciate that the present disclosure includes multiple other embodiments.
[0028]
II. Exemplary Operating Environment FIG. 1 illustrates an exemplary configuration of a
media playback system 100 in which one or more of the embodiments disclosed herein may be practiced or implemented. As shown, the media playback system 100 is associated with an
exemplary home environment having multiple rooms and spaces, eg, a main bedroom, an office, a
dining room, and a living room. As shown in the example of FIG. 1, the media playback system
100 includes playback devices 102-124, control devices 126 and 128, and a wired or wireless
network router 130.
[0029]
Further, descriptions of the different components of the exemplary media playback system 100
and how the different components work to provide the user with a media experience are
described in the following sections. Although the description herein generally refers to the media
playback system 100, the techniques described herein are not limited to the use of the home
environment shown in FIG. 1. For example, the techniques described herein may be useful in environments where multi-zone audio is desired, such as commercial settings like restaurants, malls, or airports, vehicles such as sport utility vehicles (SUVs), buses, or cars, ships or boats, airplanes, and so on.
[0030]
a. Exemplary Playback Device FIG. 2 shows a functional block diagram of an exemplary
playback device 200 that comprises one or more of the playback devices 102-124 of the media
playback system 100 of FIG. The playback device 200 may include a processor 202, software
components 204, memory 206, audio processing components 208, audio amplifiers 210,
speakers 212, microphones 220, and network interface 214. Network interface 214 includes
wireless interface 216 and wired interface 218. In some cases, the playback device 200 does not
include the speaker 212, but may include a speaker interface for connecting the playback device
200 to an external speaker. In other cases, the playback device 200 does not include the speaker
212 or the audio amplifier 210, but may include an audio interface for connecting the playback
device 200 to an external audio amplifier or an audio visual receiver.
[0031]
In one example, processor 202 may be a clocked computer component configured to process
input data based on instructions stored in memory 206. Memory 206 may be a non-transitory
computer readable storage medium configured to store instructions executable by processor
202. For example, memory 206 may be data storage capable of loading one or more of software
components 204 executable by processor 202 to perform certain functions. In one example, the
function may include the playback device 200 reading audio data from an audio source or
another playback device. In another example, the function may include the playback device 200
transmitting audio data to another device on the network or to the playback device. In yet
another example, the functionality may include pairing playback device 200 with one or more
playback devices to create a multi-channel audio environment.
[0032]
Certain functions include the playback device 200 synchronizing playback of audio content with
one or more other playback devices. Preferably, while synchronizing playback, the listener is not
aware of the delay between playback of audio content by playback device 200 and playback by
one or more other playback devices. U.S. Pat. No. 8,234,395, entitled "System and method for synchronizing operations among a plurality of independently clocked digital data processing devices," which is incorporated herein by reference, provides a more detailed example of audio playback synchronization among playback devices.
[0033]
Additionally, memory 206 may be configured to store data. The data may be associated with, for example, one or more zones and/or zone groups of which the playback device 200 is a part, an audio source accessible by the playback device 200, or a playback queue with which the playback device 200 (or another playback device) may be associated. The data may be updated periodically and stored as one or more state variables that indicate the state of the playback device 200. Memory 206 may also include data associated with the state of other devices in the media system; by sharing the data among the devices from time to time, one or more of the devices can hold the most recent data associated with the system. Other embodiments are also possible.
[0034]
Audio processing components 208 may include, among other things, one or more of a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), an audio processing component, an audio enhancement component, and a digital signal processor (DSP). In one embodiment,
one or more audio processing components 208 may be subcomponents of processor 202. In one
embodiment, audio content may be processed and / or intentionally modified by audio
processing component 208 to generate an audio signal. The generated audio signal is
transmitted to the audio amplifier 210, amplified, and reproduced through the speaker 212. In
particular, audio amplifier 210 may include a device configured to amplify the audio signal to a
level that can drive one or more speakers 212. The speaker 212 may comprise a complete
speaker system, including an independent transducer (e.g., a "driver") or a housing that encloses
one or more drivers. Some drivers provided in the speaker 212 may include, for example, a
subwoofer (for example, for low frequencies), a middle range driver (for example, for
intermediate frequencies), and / or a tweeter (for high frequencies). In some cases, each
transducer of one or more speakers 212 may be driven by a corresponding individual audio
amplifier of audio amplifier 210. In addition to generating an analog signal for playback on
playback device 200, audio processing component 208 processes the audio content and
transmits the audio content for playback to one or more other playback devices.
[0035]
Audio content to be processed and/or played back by the playback device 200 may be received from an external source, e.g., via an audio line-in input connection (e.g., an auto-detecting 3.5 mm audio line-in connection) or via the network interface 214.
[0036]
The microphone 220 may include an audio sensor configured to convert the detected sound into
an electrical signal.
The electrical signals may be processed by the audio processing component 208 and / or the
processor 202. The microphone 220 may be disposed at one or more locations on the playback device 200, facing one or more directions. Microphone 220 may be configured to detect sound
within one or more frequency ranges. In some cases, one or more of the microphones 220 may
be configured to detect sounds within the frequency range of audio that the playback device 200
can play. In other cases, one or more of the microphones 220 may be configured to detect sound
in a frequency range that human beings can hear.
[0037]
Network interface 214 may be configured to enable data flow between playback device 200 and
one or more other devices over a data network. Thus, the playback device 200 may be configured to receive audio content over a data network from one or more other playback devices in communication with the playback device, from a network device in a local area network, or from an audio content source on a wide area network such as, for example, the Internet. In one example, audio
content and other signals sent and received by the playback device 200 may be sent in the form
of digital packets that include an Internet Protocol (IP) based source address and an IP based
destination address. In such a case, the network interface 214 can appropriately receive and
process data addressed to the playback device 200 by the playback device 200 by analyzing the
digital packet data.
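As a rough sketch of what "analyzing the digital packet data" to determine the addressee could look like, the snippet below parses the destination address out of an IPv4 header and compares it with the device's own address. The addresses and the hand-built header are purely illustrative assumptions.

import socket
import struct

def is_addressed_to(packet: bytes, device_ip: str) -> bool:
    # A minimum IPv4 header is 20 bytes; bytes 12-15 hold the source address
    # and bytes 16-19 hold the destination address.
    if len(packet) < 20:
        return False
    destination = socket.inet_ntoa(packet[16:20])
    return destination == device_ip

# Example with a hand-built 20-byte IPv4 header:
# source 192.168.1.10, destination 192.168.1.50.
header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 20, 0, 0, 64, 17, 0,
    socket.inet_aton("192.168.1.10"),
    socket.inet_aton("192.168.1.50"),
)
print(is_addressed_to(header, "192.168.1.50"))  # True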
[0038]
As shown, network interface 214 may include wireless interface 216 and wired interface 218.
The wireless interface 216 provides a network interface function for the playback device 200 to wirelessly communicate with other devices (e.g., other playback devices, speakers, receivers, network devices, and control devices in a data network associated with the playback device 200) based on a communication protocol (e.g., any wireless standard including IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G mobile communication standards, and so on). The wired interface 218 provides a network interface
function for the playback device 200 and may communicate via a wired connection with other
devices based on a communication protocol (eg, IEEE 802.3). Although the network interface 214
shown in FIG. 2 includes both the wireless interface 216 and the wired interface 218, the
network interface 214 may include only the wireless interface or only the wired interface in an
embodiment.
[0039]
In one example, playback device 200 and another playback device may be paired to play two
separate audio components of audio content. For example, playback device 200 may be
configured to play left channel audio components, while other playback devices may be
configured to play right channel audio components. This can create or enhance stereo effects of
the audio content. Paired playback devices (also referred to as "combined playback devices") may
also play audio content in synchronization with other playback devices.
[0040]
In another example, the playback device 200 may be acoustically integrated with one or more
other playback devices to form a single integrated playback device (integrated playback device).
The integrated playback device can be configured to process and reproduce sound differently as
compared to a non-integrated playback device or paired playback device. This is because the
integrated playback device can include additional speakers over which audio content may be played. For example, if the playback device 200 is designed to play audio content in a low frequency range (e.g., a subwoofer), the playback device 200 may be integrated with a playback device designed to play audio content over the full frequency range. In this case, the full frequency range playback device
may be configured to play only the mid-high frequency component of the audio content when
integrated with the low frequency playback device 200. On the other hand, the low frequency
range playback device 200 plays back the low frequency component of the audio content.
Furthermore, the integrated playback device may be paired with a single playback device, or
even another integrated playback device.
[0041]
As an example, Sonos, Inc. currently offers for sale playback devices including the "PLAY:1," "PLAY:3," "PLAY:5," "PLAYBAR," "CONNECT:AMP," "CONNECT," and "SUB." Any other past, present, and/or future playback devices may additionally or alternatively be used to implement the playback devices of the embodiments disclosed herein. Further, it is understood that the playback device is not limited to the particular examples shown in FIG. 2 or to the Sonos products offered. For
example, the playback device may include wired or wireless headphones. In another example, the
playback device may include or interact with a docking station for a personal mobile media
playback device. In yet another example, the playback device may be integrated with another
device or component, such as a television, a light fixture, or some other device for indoor or
outdoor use.
[0042]
b. Exemplary Playback Zone Configuration Returning to the media playback system of FIG. 1,
the environment includes one or more playback zones, each playback zone including one or more
playback devices. The media playback system 100 is formed of one or more playback zones, and
one or more zones may be added or deleted later to provide the exemplary configuration shown
in FIG. Each zone may be given a name based on a different room or space, such as an office, a
bathroom, a master bedroom, a bedroom, a kitchen, a dining room, a living room, and / or a
balcony. In some cases, a single playback zone may include multiple rooms or spaces. In
another case, a single room or space may include multiple playback zones.
[0043]
As shown in FIG. 1, each of the balcony, dining room, kitchen, bathroom, office, and bedroom zones has one playback device, while each of the living room and master bedroom zones has a plurality of playback devices. In the living room zone, the playback devices 104, 106, 108, and 110 may be configured to play audio content in synchrony as separate playback devices, as one or more combined playback devices, as one or more integrated playback devices, or as any combination of these. Similarly, in the master bedroom, the playback devices 122 and 124 may be configured to play audio content in synchrony as separate playback devices, as a combined playback device, or as an integrated playback device.
[0044]
In one example, one or more playback zones in the environment of FIG. 1 are playing different
audio content. For example, the user can listen to hip-hop music played by the playback device
102 while grilling in the balcony zone. Meanwhile, another user can listen to the classical music
played by the playback device 114 while preparing a meal in the kitchen zone. In another
example, the playback zone may play the same audio content in synchronization with another
playback zone. For example, if the user is in the office zone, the office zone playback device 118 may play the same rock music that is being played by the balcony playback device 102. In such a case, the playback devices 102 and 118 play the rock music in synchrony, so that the user can move between different playback zones seamlessly (or at least nearly seamlessly with respect to the audio content being played out loud). Synchronization between playback zones may be performed in the
same manner as synchronization between playback devices as described in the aforementioned
U.S. Patent No. 8,234,395.
[0045]
As mentioned above, the zone configuration of media playback system 100 may be changed
dynamically, and in an embodiment, media playback system 100 supports multiple
configurations. For example, if a user physically moves one or more playback devices into or out
of a zone, media playback system 100 may be reconfigured to accommodate changes. For
example, if the user physically moves the playback device 102 from the balcony zone to the
office zone, the office zone may include both the playback device 118 and the playback device
102. If desired, the playback devices may be paired or grouped within the office zone, and/or the office zone may be renamed, via a control device such as the control devices 126 and 128. On the other hand, if one
or more playback devices are moved to an area in a home environment where the playback zone
has not yet been set, a new playback zone may be formed in that area.
[0046]
Further, different playback zones of the media playback system 100 may be dynamically
combined into zone groups or may be divided into separate playback zones. For example, by
combining the dining room zone and the kitchen zone into a dinner party zone group, the playback devices 112 and 114 can play audio content in synchrony. On the other hand, if one user wants to watch TV while another wants to listen to music in the living room space, the living room zone may be divided into a television zone including the playback device 104 and a listening zone including the playback devices 106, 108, and 110.
[0047]
c. Exemplary Control Device FIG. 3 shows a functional block diagram of an exemplary control
device 300 that configures one or both of the control devices 126 and 128 of the media
playback system 100. As shown, control device 300 may include processor 302, memory 304,
network interface 306, user interface 308, and microphone 310. In one example, control device
300 may be a control device dedicated to media playback system 100. In another example, the
control device 300 may be a network device with media playback system controller application
software installed, such as an iPhone®, iPad®, or any other smartphone, tablet, or network device (e.g., a networked computer such as a PC or Mac®).
[0048]
Processor 302 may be configured to perform functions related to enabling user access, control,
and configuration of media playback system 100. Memory 304 may be configured to store
instructions that can be executed by processor 302 and to perform those functions. Memory 304
may also be configured to store media playback system controller application software and other
data associated with media playback system 100 and the user.
[0049]
The microphone 310 may include an audio sensor configured to convert the detected sound into
an electrical signal. The electrical signals may be processed by the processor 302. In some cases,
if the control device 300 is a device that can be used as an audio communication means or an
audio recording means, one or more of the microphones 310 may be microphones for
performing their functions. For example, one or more of the microphones 310 may be configured
to detect sound in a frequency range that can be generated by a human and / or in a frequency
range that can be heard by a human. Other examples are also possible.
[0050]
In one example, the network interface 306 may be based on an industry standard (e.g., infrared; a wired standard such as IEEE 802.3; a wireless standard such as IEEE 802.11a, 802.11b, 802.11g, 802.11n, 802.15; a 4G communication standard; and so on). Network interface 306 may provide a
means for control device 300 to communicate with other devices within media playback system
100. In an example, data and information (eg, state variables) may be communicated between
control device 300 and other devices via network interface 306. For example, the configuration
of playback zones and zone groups in the media playback system 100 may be received by the
control device 300 from a playback device or another network device, or alternatively may be transmitted by the control device 300 to a playback device or another network device via the network interface 306. In some cases, the other network device may be another control device.
[0051]
Playback device control commands, such as volume control and audio playback control, may be
communicated from control device 300 to the playback device via network interface 306. As
described above, configuration changes to the media playback system 100 can be performed by the user using the control device 300. Configuration changes may include adding one or more playback devices to a zone, removing one or more playback devices from a zone, adding one or more zones to a zone group, removing one or more zones from a zone group, forming a combined player or an integrated player, separating a combined player or integrated player into one or more playback devices, and so on. Thus, control device 300 may be referred to as a
controller, which may be a dedicated controller with installed media playback system controller
application software or may be a network device.
[0052]
User interface 308 of control device 300 may be configured to enable user access and control of
media playback system 100 by providing a controller interface, such as controller interface 400
shown in FIG. The controller interface 400 includes a playback control area 410, a playback zone
area 420, a playback status area 430, a playback queue area 440, and an audio content source
area 450. The illustrated user interface 400 is merely one example of a user interface provided on a network device such as the control device 300 of FIG. 3 (and/or the control devices 126 and 128 of FIG. 1) and accessed by a user to control a media playback system such as system 100.
Alternatively, various formats, styles, and interactive sequences may be implemented on other
user interfaces on one or more network devices to provide similar control access to the media
playback system.
[0053]
The playback control area 410 may include icons that can be selected (e.g., using a touch or a cursor) to cause playback devices in the selected playback zone or zone group to play or pause, fast forward, rewind, skip to the next track, skip to the previous track, turn shuffle mode on/off, turn repeat mode on/off, and turn crossfade mode on/off. The playback control area 410 may include other selectable icons that change other settings, such as equalization settings and playback volume.
[0054]
The playback zone area 420 may include representations of the playback zones within the media playback system 100. In one embodiment, the graphical representations of the playback zones may be selectable to bring up additional selectable icons for managing or configuring the playback zones within the media playback system, such as creating combined zones, creating zone groups, splitting zone groups, and renaming zone groups.
[0055]
For example, as shown, a "group" icon may be provided on each of the graphical representations
of the playback zone. A "group" icon in the graphic display of a zone may be selectable to select
one or more zones in the media playback system to present an option to group with a zone. Once
grouped, playback devices in a zone grouped with a zone are configured to play audio content in
synchronization with playback devices in the zone. Similarly, a "group" icon may be provided in
the graphical representation of the zone group. In this case, the "group" icon is selectable to give
the option of deselecting one or more zones in the zone group in order to remove one or more
zones in the zone group from the zone group It may be. Other interactions for grouping and
ungrouping zones via user interfaces, such as user interface 400, are possible and can be
implemented. The display of playback zones in the playback zone area 420 may be dynamically
updated as the playback zone or zone group configuration is changed.
[0056]
The playback status area 430 may include a graphical representation of the audio content that is currently being played, that was previously played, or that is scheduled to be played next within the selected playback zone or zone group. The selected playback zone or zone group may be visually distinguished on the user interface, for example, within the playback zone region 420 and/or the playback status region 430. The graphical representation may include the track title, artist name, album name, album year, track length, and other relevant information useful to the user when controlling the media playback system via the user interface 400.
[0057]
The play queue area 440 may include a graphical representation of audio content in a play queue
associated with the selected play zone or zone group. In one embodiment, each play zone or zone
group may be associated with a play queue that includes information corresponding to zero or
more audio items played by the play zone or play group. For example, each audio item in the
playback queue may include a uniform resource identifier (URI), a uniform resource locator (URL), or some other identifier that can be used by a playback device within the playback zone or zone group. These allow audio items to be found
and / or retrieved from local audio content sources or network audio content sources and played
back by the playback device.
[0058]
In one example, a playlist may be added to the play queue. In this case, information
corresponding to each audio item in the playlist may be added to the playback queue. In another
example, audio items in the play queue may be saved as a playlist. In yet another example, the playback queue may be empty, or filled but "unused," when the playback zone or zone group is playing continuously streaming audio content, e.g., Internet radio that plays continuously until stopped, rather than discrete audio items that have play durations. In another embodiment, the playback queue may include Internet radio and/or other streaming audio content items and be "unused" when the playback zone or zone group is playing those items. Other examples are also possible.
[0059]
When a playback zone or zone group is "grouped" or "ungrouped," the playback queue associated with the affected playback zone or zone group may be cleared, or may be re-associated. For example, if a first playback zone that includes a first playback queue is grouped with a second playback zone that includes a second playback queue, the formed zone group may have an associated playback queue. The associated playback queue may initially be empty, may contain the audio items of the first playback queue (e.g., if the second playback zone was added to the first playback zone), may contain the audio items of the second playback queue (e.g., if the first playback zone was added to the second playback zone), or may combine the audio items of both the first playback queue and the second playback queue. Thereafter, if the formed zone group is ungrouped, the ungrouped first playback zone may be re-associated with the previous first playback queue, may be associated with an empty new playback queue, or may be associated with a new playback queue that contains the audio items of the playback queue that was associated with the zone group before the zone group was ungrouped. Similarly, the ungrouped second playback zone may be re-associated with the previous second playback queue, may be associated with an empty new playback queue, or may be associated with a new playback queue that contains the audio items of the playback queue that was associated with the zone group before the zone group was ungrouped.
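The grouping and ungrouping behavior described above can be summarized as a small set of queue policies. The sketch below models one plausible reading of those policies; the function names, the policy strings, and the track names in the example are assumptions made only for illustration.

from typing import List

def group_queues(first_queue: List[str], second_queue: List[str],
                 added_zone: str) -> List[str]:
    # One possible policy for the zone group's queue: keep the queue of the
    # zone that was joined, or combine the audio items of both queues.
    if added_zone == "second":          # second zone added to the first zone
        return list(first_queue)
    if added_zone == "first":           # first zone added to the second zone
        return list(second_queue)
    return first_queue + second_queue   # combined queue

def ungroup_queue(previous_queue: List[str], group_queue: List[str],
                  policy: str) -> List[str]:
    # On ungrouping, a zone may get its previous queue back, an empty queue,
    # or the group's queue as it was at the time of ungrouping.
    if policy == "previous":
        return list(previous_queue)
    if policy == "empty":
        return []
    return list(group_queue)

# Example: the kitchen zone is added to the dining room zone, then ungrouped
# while keeping the group's queue.
dinner_party = group_queues(["track A", "track B"], ["track C"], added_zone="second")
kitchen_queue = ungroup_queue(["track C"], dinner_party, policy="group")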
[0060]
Returning to the user interface 400 of FIG. 4, the graphical representations of audio content in the play queue area 440 may include the track title, artist name, track length, and other relevant information associated with the audio content in the playback queue. In one example, graphical representations of audio content may be selectable to bring up additional selectable icons, whereby the playback queue and/or the audio content displayed in the playback queue may be managed and/or edited. For example, the displayed audio content may be removed from the playback queue, moved to a different position in the playback queue, selected to be played immediately or after the currently playing audio content, or otherwise operated on. The playback queue associated with a playback zone or zone group may be stored in the memory of one or more playback devices in the playback zone or zone group, in the memory of playback devices not in the playback zone or zone group, and/or in the memory of some other designated device.
[0061]
Audio content source area 450 may include a graphical representation of selectable audio
content sources. In this audio content source, the audio content may be retrieved and played by
the selected playback zone or zone group. Descriptions of audio content sources can be found in
the following sections.
[0062]
d. Exemplary Audio Content Sources As indicated previously, one or more playback devices within a zone or zone group may be configured to retrieve audio content for playback (e.g., based on a corresponding URI or URL for the audio content) from a variety of available audio content sources. In one example, audio content may be retrieved directly from a corresponding audio content source (e.g., a line-in connection) by the playback device. In another example, audio content may be provided to the playback device over a network via one or more other playback devices or network devices.
[0063]
Exemplary audio content sources may include the memory of one or more playback devices in a media playback system such as the media playback system 100 of FIG. 1, a local music library on one or more network devices (for example, a control device, a network-enabled personal computer, or network-attached storage (NAS)), a streaming audio service providing audio content via the Internet (e.g., the cloud), or an audio source connected to the media playback system via a line-in input connection of a playback device or network device, among other possibilities.
[0064]
In one embodiment, audio content sources may be periodically added to or removed from media
playback systems, such as the media playback system 100 of FIG. In one example, indexing of
audio items may be performed each time one or more audio content sources are added, removed
or updated. Indexing of audio items may include scanning for identifiable audio items in all folders/directories shared over a network that is accessible by playback devices in the media playback system, and creating or updating an audio content database containing metadata (e.g., title, artist, album, track length, etc.) and other relevant information, such as a URI or URL for finding each identifiable audio item. Other examples for managing and maintaining audio
content sources are also possible.
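The indexing step described above can be approximated by a short directory scan. The sketch below builds database entries containing a URI and a title for each identifiable audio item; the file extensions, the example mount point, and the use of the file name as a stand-in for tag metadata are assumptions, and a real implementation would also extract artist, album, and track length from the files' metadata.

import os
from typing import Dict, List

AUDIO_EXTENSIONS = (".mp3", ".flac", ".m4a", ".wav")

def index_audio_items(shared_root: str) -> List[Dict[str, str]]:
    # Scan all shared folders/directories under shared_root for identifiable
    # audio items and build one database entry per item.
    database = []
    for dirpath, _dirnames, filenames in os.walk(shared_root):
        for name in filenames:
            if name.lower().endswith(AUDIO_EXTENSIONS):
                path = os.path.join(dirpath, name)
                database.append({
                    "uri": "file://" + path,              # used later to find the item
                    "title": os.path.splitext(name)[0],   # placeholder for tag metadata
                })
    return database

# Example: index a hypothetical network share mounted at /mnt/music.
# entries = index_audio_items("/mnt/music")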
[0065]
The above description of the playback device, the control device, the playback zone
configuration, and the media content source provides only a few exemplary operating
environments in which the functions and methods described below can be implemented. The
invention and the functions and methods described herein may also be applicable to, and suitable for implementation in, other operating environments and configurations of media playback systems, playback devices, and network devices not explicitly described herein.
[0066]
III. Calibration of Playback Devices for a Playback Environment As mentioned above, the examples described herein involve calibration of one or more playback devices for a playback environment based on audio signals detected by the microphone of the network device as the network device moves within the playback environment.
[0067]
In an example, calibration of the playback device may be initiated when the playback device is
initially set up or when the playback device moves to a new position. For example, if the playback device moves to a new position, calibration of the playback device may be initiated based on a detection of the movement (e.g., via the Global Positioning System (GPS), one or more accelerometers, or wireless signal strength variations, among other things), or based on user input indicating that the playback device has moved to a new location (e.g., a change of the playback zone name associated with the playback device).
[0068]
In another example, calibration of the playback device may be initiated via a controller (e.g., a
network device). For example, the user may access the playback device's controller interface and
initiate calibration of the playback device. In some cases, the user may access the controller and
select a playback device (or a group of playback devices that includes the playback device) to
perform the calibration. In some cases, a calibration interface may be provided as part of the
playback device's controller interface to allow the user to initiate calibration of the playback
device. Other examples are also possible.
[0069]
Methods 500, 700, and 800 are exemplary methods performed to calibrate one or more
playback devices for a playback environment, as described below.
[0070]
a. Exemplary First Method of Calibrating One or More Playback Devices FIG. 5 shows an exemplary flow diagram of a first method 500 of calibrating a playback device based on audio signals detected by a microphone of a network device moving within the playback environment. The method 500 shown in FIG. 5 presents an embodiment of a method that may be performed within an operating environment including, for example, the media playback system 100 of FIG. 1, one or more of the playback devices 200 of FIG. 2, and one or more of the control devices 300 of FIG. 3. Method 500 may include one or more operations,
functions, or acts illustrated by one or more of blocks 502-506. Although the blocks are shown
in order, the blocks may be performed in parallel and / or in an order different from the order
described herein. Also, these blocks may be combined to achieve a smaller number of blocks,
divided to increase the number of blocks, and / or removed based on the desired implementation.
[0071]
Further, with respect to the method 500 and other processes and methods disclosed herein, the flowchart shows the functionality and operation of one possible implementation of the present embodiments. In this regard, each block may
represent a module, segment, or portion of program code that includes one or more instructions
executable by a processor to perform logical functions or steps in a process. The program code
may be stored on any type of computer readable medium, such as a storage device including a
disk or a hard drive. Computer readable media may include non-transitory computer readable
media. Non-transitory computer readable media include, for example, computer readable media
that store data for a short time, such as register memory, processor cache, and random access
memory (RAM). Computer readable media may also include non-transitory media. Non-transitory
media include, for example, secondary or permanent long-term storage such as read only
memory (ROM), optical or magnetic disk, and compact disc read only memory (CD-ROM).
Computer readable media may also be any volatile or non-volatile storage system. The computer
readable medium may be regarded as, for example, a computer readable recording medium or a
tangible storage device. Further, in method 500, other processes, and methods disclosed herein,
each block may represent circuits wired to perform certain logic functions in the process.
[0072]
In an example, the method 500 may be performed at least in part by a network device where an
embedded microphone may be used to calibrate one or more playback devices. As shown in FIG. 5, the method 500 involves, at block 502, detecting a second audio signal by the microphone of the network device (i) while the playback device is playing a first audio signal and (ii) while the network device is moving from a first physical location to a second physical location; at block 504, identifying an audio processing algorithm based on data indicative of the second audio signal; and at block 506, transmitting data indicative of the identified audio processing algorithm to the playback device.
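Purely as a structural sketch of blocks 502-506 (with placeholder function bodies, made-up parameter values, and an address chosen only for illustration, none of which are specified by this disclosure), the flow of method 500 could be organized as follows.

from typing import Dict, List

def detect_second_audio_signal(duration_seconds: float = 30.0,
                               sample_rate: int = 44100) -> List[float]:
    # Block 502 (sketch): record via the network device's microphone while the
    # playback device plays the first audio signal and the network device is
    # moved through the playback environment. Placeholder samples are returned.
    return [0.0] * int(sample_rate * duration_seconds)

def identify_audio_processing_algorithm(second_signal: List[float]) -> Dict:
    # Block 504 (sketch): derive an audio processing algorithm (for example,
    # an equalization curve) from data indicative of the detected signal.
    return {"type": "equalization", "gains_db": [0.0] * 10}

def transmit_to_playback_device(algorithm: Dict, playback_device_address: str) -> None:
    # Block 506 (sketch): transmit data indicative of the identified algorithm
    # to the playback device (the transport is left abstract here).
    print(f"sending {algorithm} to {playback_device_address}")

signal = detect_second_audio_signal()
algorithm = identify_audio_processing_algorithm(signal)
transmit_to_playback_device(algorithm, "192.168.1.50")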
[0073]
To assist in the description of methods 500, 700 and 800, the playback environment 600 of FIG.
6 is provided. As shown in FIG. 6, playback environment 600 includes network device 602,
playback device 604, playback device 606, and computer 610. Network device 602 may be
coordinating at least a portion of method 500 and / or may perform at least a portion of method
500. Network device 602 may be similar to control device 300 of FIG. Both playback devices 604
and 606 may be similar to the playback device 200 of FIG. Either or both of the playback devices
604 and 606 may be calibrated by the method 500, 700, or 800. Computer 610 may be a server in communication with a media playback system that includes playback devices 604 and 606. Further, computer 610 may communicate directly or indirectly with network device 602.
Although the following descriptions of methods 500, 700, and 800 refer to the playback environment 600 of FIG. 6, one of ordinary skill in the art will appreciate that the playback environment 600 is only one example of a playback environment within which a playback device may be calibrated. Other examples are also possible.
[0074]
Returning to the method 500, block 502 involves detecting the second audio signal by the microphone of the network device (i) while the playback device is playing the first audio signal, and (ii) while the network device is moving from the first physical location to the second physical location. The playback device is a playback device to be calibrated and may be one of
one or more playback devices in the playback environment. Also, the playback device may be
configured to play the audio content individually or may be configured to play the audio content
in synchronization with another playback device within the playback environment. For purposes
of illustration, the playback device may be the playback device 604.
[0075]
In one example, the first audio signal may be a test signal or measurement signal representative of audio content that may be played by the playback device during regular use by a user. Accordingly, the first audio signal may include audio content with frequencies that substantially cover the renderable frequency range of the playback device 604 or a frequency range audible to humans. In some cases, the first audio signal may be an audio signal generated specifically for use in calibrating playback devices, such as the playback device 604 being calibrated in the examples described herein. In other cases, the first audio signal may be a favorite audio track of a user of the playback device 604, or an audio track commonly played by the playback device 604. Other examples are also possible.
[0076]
For purposes of illustration, the network device may be network device 602. As mentioned
above, the network device 602 may be a mobile device with a built-in microphone. Thus, the
microphone of the network device may be the built-in microphone of the network device. In one
example, the network device 602 may cause the playback device 604 to play the first audio
signal before the network device 602 detects the second audio signal via the microphone of the
network device 602. In some cases, network device 602 may transmit data indicative of a first
audio signal to be played back by playback device 604.
[0077]
In another example, the playback device 604 may receive a command to play the first audio
signal from a server, such as the computer 610, and play the first audio signal in response to the
received command. In yet another example, the playback device 604 may play the first audio
signal without receiving a command from the network device 602 or the computer 610. For
example, if the playback device 604 is coordinating its own calibration, the playback device 604 may play the first audio signal without receiving a command to play the first audio signal.
[0078]
Because the second audio signal is detected by the microphone of the network device 602 while the first audio signal is being played by the playback device 604, the second audio signal may include a portion corresponding to the first audio signal. In other words, the second audio signal may include portions of the first audio signal as played by the playback device 604 and/or as reflected within the playback environment 600.
[0079]
In one example, both the first physical location and the second physical location may be within
the playback environment 600. As shown in FIG. 6, the first physical location may be point (a)
and the second physical location may be point (b). While moving from the first physical location
(a) to the second physical location (b), the network device may traverse locations within the playback environment 600 where one or more listeners may experience audio playback during normal use of the playback device 604. In one example, the illustrated playback environment 600 may include a kitchen and a dining room, and the path 608 between the first physical location (a) and the second physical location (b) may cover locations within the kitchen and dining room where one or more listeners may experience audio playback during normal use of the playback device 604.
[0080]
Because the second audio signal is detected while the network device 602 is moving from the first physical location (a) to the second physical location (b), the second audio signal may include audio detected at different locations along the path 608 between the first physical location (a) and the second physical location (b). Thus, the characteristics of the second audio signal may indicate that the second audio signal was detected while the network device 602 was moving from the first physical location (a) to the second physical location (b).
[0081]
In one example, the movement of the network device 602 between the first physical location (a) and the second physical location (b) may be performed by a user. In some cases, before and/or while detecting the second audio signal, a graphical display of the network device may provide an indication to move the network device 602 within the playback environment. For example, the graphical display may show text such as "While audio is playing, please move the network device through locations in the playback zone where you or others may enjoy music." Other examples are also possible.
[0082]
In one example, the first audio signal may have a predetermined duration (e.g., about 30 seconds), and detection of the audio signal by the microphone of the network device 602 may last for that predetermined duration or a similar period. In some cases, the graphical interface of the network device may further provide a display indicating the amount of time remaining for the user to move the network device 602 through locations in the playback environment 600. Other examples of graphical displays that assist the user during calibration of the playback device are also possible.
[0083]
In one example, the playback device 604 and the network device 602 may coordinate playback
of the first audio signal and / or detection of the second audio signal. In some cases, at the
beginning of the calibration, the playback device 604 may send a message to the network device
to indicate that the playback device is playing or is about to play the first audio signal. Network
device 602 may initiate detection of the second audio signal in response to the message. In
another case, at the start of the calibration, the network device 602 may detect movement of the network device 602 using a motion sensor, such as an accelerometer of the network device 602, and may send a message to the playback device 604 indicating that the network device 602 has started moving from the first physical location (a) toward the second physical location (b). The playback device 604 may initiate playback of the first audio signal in response to the message. Other examples are also possible.
[0084]
At block 504, method 500 includes identifying an audio processing algorithm based on the data
indicative of the second audio signal. As mentioned above, the second audio signal may include a
portion corresponding to the first audio signal to be reproduced by the reproduction device.
[0085]
In one example, the second audio signal detected by the microphone of network device 602 may
be an analog signal. Thus, the network device may process the detected analog signal (eg,
convert the detected audio signal from an analog signal to a digital signal) and generate data
indicative of the second audio signal.
[0086]
In some cases, the microphone of network device 602 may have its own acoustic characteristics. These acoustic characteristics may affect the audio signal that the microphone outputs to the processor of the network device 602 for processing (e.g., conversion to a digital audio signal). For example, if the acoustic characteristics of the microphone of the network device include low sensitivity at a certain frequency, audio content at that frequency may be attenuated in the audio signal output by the microphone.
[0087]
When the audio signal output by the microphone of the network device 602 is represented by x(t), the detected second audio signal is represented by s(t), and the acoustic characteristic of the microphone is represented by h_m(t), the relationship between the signal output from the microphone and the second audio signal detected by the microphone can be expressed by the following equation.
[0088]

x(t) = s(t) ⊗ h_m(t)    (1)

[0089]
Here, ⊗ denotes the mathematical operation of convolution. Thus, the second audio signal s(t) detected by the microphone may be determined based on the signal x(t) output from the microphone and the acoustic characteristic h_m(t) of the microphone. For example, a microphone calibration algorithm such as h_m^{-1}(t) may be applied to the audio signal output from the microphone of the network device 602 to recover the second audio signal s(t) detected by the microphone.
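As an illustration only (not part of the original disclosure), the following minimal Python sketch shows one way a microphone calibration algorithm such as h_m^{-1}(t) could be applied: a regularized frequency-domain inverse of a known microphone impulse response, assuming discrete sampled signals and the numpy library; the function name and regularization constant are hypothetical.

import numpy as np

def remove_mic_response(x, h_m, eps=1e-8):
    # Estimate the detected signal s(t) from the microphone output x(t),
    # given the microphone impulse response h_m(t), per x(t) = s(t) (*) h_m(t).
    n = len(x)
    X = np.fft.rfft(x, n)
    H = np.fft.rfft(h_m, n)
    # Regularized inverse filter, an approximation of h_m^{-1}(t)
    S = X * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(S, n)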
[0090]
In one example, the acoustic characteristics h_m(t) of the microphone of the network device 602 may be known. For example, a database relating microphone acoustic characteristics to corresponding network device models and/or microphone models may be available. In another example, the acoustic characteristics h_m(t) of the microphone of the network device 602 may be unknown. In such cases, the acoustic characteristics or microphone calibration algorithm of the microphone of network device 602 may be determined using a playback device, such as playback device 604 or playback device 606, or using another playback device. Examples of such processing are described below in connection with FIGS. 9-11.
[0091]
In one example, identifying the audio processing algorithm may include determining a frequency response based on the data indicative of the second audio signal and on the first audio signal, and determining the audio processing algorithm based on the determined frequency response.
[0092]
Because the network device 602 is moving from the first physical location (a) to the second physical location (b) while the microphone of the network device 602 detects the second audio signal, the frequency response may include a series of frequency responses, each corresponding to a portion of the second audio signal detected at a different location along the path 608. In some cases, an average frequency response of the series of frequency responses may be determined. For example, the magnitude at a given frequency in the average frequency response may be the average of the magnitudes at that frequency across the series of frequency responses. Other examples are also possible.
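As a hedged illustration (not from the original disclosure), the following Python sketch shows one way the series of frequency responses could be estimated and averaged, assuming the first audio signal and the recorded segments are equal-length sampled arrays and numpy is available; all names are hypothetical.

import numpy as np

def average_frequency_response(first_audio, recorded_segments):
    # first_audio: the signal played by the playback device
    # recorded_segments: portions of the second audio signal detected at
    # different locations along the path
    n = len(first_audio)
    F = np.fft.rfft(first_audio, n)
    magnitudes = []
    for segment in recorded_segments:
        S = np.fft.rfft(segment, n)
        magnitudes.append(np.abs(S) / (np.abs(F) + 1e-12))  # per-location response
    return np.mean(magnitudes, axis=0)  # average magnitude per frequency bin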
[0093]
In one example, the audio processing algorithm may then be identified based on the average frequency response. In some cases, the audio processing algorithm may be determined such that, when the playback device 604 applies the audio processing algorithm while playing the first audio signal in the playback environment 600, a third audio signal is produced that has audio characteristics substantially the same as a predetermined audio characteristic.
[0094]
In one example, the predetermined audio characteristic may be an audio frequency equalization that is considered good-sounding. In some cases, the predetermined audio characteristics
may include substantially even equalization over the playable frequency range of the playback
device. In other cases, the predetermined audio characteristics may include equalization that may
be preferred for a typical listener. In yet another example, the predetermined audio characteristic
may include a frequency response that is considered suitable for a certain genre of music.
[0095]
In any case, the network device 602 may identify an audio processing algorithm based on the data indicative of the second audio signal and the predetermined audio characteristic. In one example, in the frequency response of the playback environment 600, some audio frequencies may be attenuated more than others. If the predetermined audio characteristic includes an equalization in which those frequencies are minimally attenuated, the corresponding audio processing algorithm may include amplification at those frequencies.
[0096]
In one example, the relationship between the first audio signal f(t) and the second audio signal s(t) detected by the microphone of the network device 602 can be expressed mathematically by the following equation:
[0097]

s(t) = f(t) ⊗ h_pe(t)    (2)

[0098]
Here, h_pe(t) indicates the acoustic characteristic of the playback environment 600 (at locations along the path 608) as applied to the audio content being played by the playback device 604.
If a predetermined audio signal is denoted by z(t) and the audio processing algorithm is denoted by p(t), the relationship among the predetermined audio signal z(t), the second audio signal s(t), and the audio processing algorithm p(t) can be expressed mathematically by the following equation.
[0099]

z(t) = s(t) ⊗ p(t)    (3)

[0100]
Therefore, the audio processing algorithm p (t) can be mathematically expressed by the following
equation.
[0101]

p(t) = z(t) ⊗ s^{-1}(t)    (4)

[0102]
In some cases, identifying the audio processing algorithm may include the network device 602
transmitting to the computer 610 data indicative of the second audio signal.
In such case, computer 610 may be configured to identify the audio processing algorithm based
on the data indicative of the second audio signal.
Computer 610 may specify the same audio processing algorithm as described above with
reference to equations (1)-(4).
Network device 602 may then receive the identified audio processing algorithm from computer
610.
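As an illustration only (an assumption-laden sketch, not the disclosed implementation), equations (2)-(4) can be viewed in the frequency domain, where the processing response is simply the predetermined (target) response divided by the measured response; the Python names below are hypothetical, and numpy is assumed.

import numpy as np

def processing_gain(target_magnitude, measured_magnitude, eps=1e-12):
    # Frequency-domain view of p(t): the gain per frequency bin such that the
    # measured response, after processing, matches the target response.
    return np.asarray(target_magnitude) / (np.asarray(measured_magnitude) + eps)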
[0103]
At block 506, method 500 includes transmitting the identified audio processing algorithm to a playback device. In some cases, the network device 602 may send a command to the playback device 604 to apply the identified audio processing algorithm when playing back audio content in the playback environment 600.
[0104]
In one example, the data indicative of the identified audio processing algorithm may include one
or more parameters of the identified audio processing algorithm. In another example, the
database of audio processing algorithms may be accessible by the playback device. In such a
case, the data indicating the identified audio processing algorithm may indicate an entry in the
database corresponding to the identified audio processing algorithm.
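For illustration only, the two alternatives could take forms like the following hypothetical payloads (the field names are not from the disclosure): either explicit parameters of the identified audio processing algorithm, or a key identifying an entry in a shared database.

eq_parameters = {"type": "parametric_eq",
                 "bands": [{"freq_hz": 120, "gain_db": 3.0, "q": 1.0},
                           {"freq_hz": 8000, "gain_db": -2.0, "q": 0.7}]}
db_reference = {"algorithm_entry_id": 42}  # entry in a database accessible by the playback device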
[0105]
In some cases, if at block 504 the computer 610 identifies an audio processing algorithm based
on data indicative of a second audio signal, the computer 610 may transmit the audio processing
algorithm directly to the playback device.
[0106]
Although the above description generally describes calibration of a single playback device, one skilled in the art will understand that similar functions may be performed to calibrate multiple playback devices, either individually or as a group. For example, method 500 may further be performed to calibrate playback device 606 within playback environment 600. In one example, the playback device 604 may be calibrated to play in synchronization with the playback device 606 in the playback environment. For example, the playback device 604 may cause the playback device 606 to play a third audio signal, either in synchronization with the playback of the first audio signal by the playback device 604 or separately.
[0107]
In one example, the first audio signal and the third audio signal may be substantially the same
and / or may be played simultaneously. In another example, the first audio signal and the third
audio signal may be orthogonal or otherwise distinguishable. For example, the reproduction
device 604 may reproduce the first audio signal after the reproduction of the third audio signal
by the reproduction device 606 is completed. In another example, the first audio signal may have
a phase that is orthogonal to the phase of the third audio signal. In yet another example, the third
audio signal may have a different frequency range and / or a varying frequency range than the
first audio signal. Other examples are also possible.
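As a hedged example of distinguishable calibration signals (not taken from the disclosure), the sketch below generates two frequency sweeps covering different ranges, which could be assigned to the two playback devices; the sample rate and frequency ranges are illustrative, and numpy is assumed.

import numpy as np

def sweep(f_start, f_stop, duration_s, rate=44100):
    t = np.linspace(0, duration_s, int(rate * duration_s), endpoint=False)
    freqs = np.linspace(f_start, f_stop, t.size)
    return np.sin(2 * np.pi * np.cumsum(freqs) / rate)  # linear frequency sweep

first_audio_signal = sweep(20, 2000, 5.0)      # lower range for one device
third_audio_signal = sweep(2000, 20000, 5.0)   # higher range for the other device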
[0108]
In any case, the second audio signal detected by the microphone of the network device 602 may further include a portion corresponding to the third audio signal played by the second playback device. As mentioned above, the second audio signal may be processed to identify an audio processing algorithm for the playback device 604 as well as an audio processing algorithm for the playback device 606. In this case, one or more additional functions may be performed, including separating out the respective contributions of the playback device 604 and the playback device 606 to the second audio signal.
[0109]
In one example, a first audio processing algorithm may be identified for the playback device 604 to apply when playing audio content alone in the playback environment 600, and a second audio processing algorithm may be identified for the playback device 604 to apply when playing audio content in synchronization with the playback device 606 in the playback environment 600. The playback device 604 may then apply the appropriate audio processing algorithm based on the playback configuration that the playback device 604 is in.
[0110]
In one example, upon initially identifying the audio processing algorithm, the playback device 604 may apply the audio processing algorithm when playing back audio content. After listening to audio content played back with the identified audio processing algorithm applied, the user of the playback device (who initiated and participated in the calibration) may determine whether to save the identified audio processing algorithm, discard it, and/or perform the calibration again.
[0111]
In some cases, the user may activate or deactivate the identified audio processing algorithm for a
period of time. In one example, this allows the user to spend more time evaluating whether to
apply the audio processing algorithm to the playback device 604 or recalibrate. If the user
indicates that the audio processing algorithm should be applied, the playback device 604 may
apply the audio processing algorithm by default when the playback device 604 plays media
content. The audio processing algorithm may further be stored on network device 602, playback
device 604, playback device 606, computer 610, or any other device in communication with
playback device 604. Other examples are also possible.
[0112]
As mentioned above, method 500 may be at least partially coordinated and/or performed by network device 602. In some embodiments, certain functions of method 500 may be performed and/or coordinated by one or more other devices, such as playback device 604, playback device 606, or computer 610, among other possibilities. For example, as described above, block 502 may be performed by network device 602, while in some cases block 504 may be partially performed by computer 610, and block 506 may be performed by network device 602 and/or computer 610. Other examples are also possible.
[0113]
b. Exemplary Second Method of Calibrating One or More Playback Devices FIG. 7 shows an exemplary flow diagram of a second method 700 of calibrating a playback device based on an audio signal detected by a microphone of a network device moving within the playback environment. Method 700 shown in FIG. 7 presents an embodiment of a method that may be performed within an operating environment including, for example, the media playback system 100 of FIG. 1, one or more of the playback devices 200 of FIG. 2, one or more of the control devices 300 of FIG. 3, and the playback environment 600 of FIG. 6. Method 700 may include one or more operations, functions, or actions as illustrated by one or more of blocks 702-708. Although the blocks are shown in sequential order, the blocks may be performed in parallel and/or in an order different from the order described herein. These blocks may also be combined into fewer blocks, divided into additional blocks, and/or removed, based on the desired implementation.
[0114]
In one example, the method 700 may be at least partially coordinated and/or performed by the playback device being calibrated. As shown in FIG. 7, the method 700 involves playing a first audio signal at block 702, receiving from the network device, at block 704, data indicative of a second audio signal detected by a microphone of the network device while the network device was moving from a first physical location to a second physical location, identifying an audio processing algorithm based on the data indicative of the second audio signal at block 706, and applying the identified audio processing algorithm when playing back audio content in the playback environment at block 708.
[0115]
At block 702, method 700 includes the step of the playback device playing the first audio signal.
Returning to FIG. 6, the playback device that performs at least a portion of method 700 may be
playback device 604. Thus, the playback device 604 may play the first audio signal. Further, the
playback device 604 may play the first audio signal with or without a command to play the first
audio signal from the network device 602, the computer 610, or the playback device 606.
[0116]
In one example, the first audio signal may be substantially similar to the first audio signal
described above at block 502. Thus, the description of the first audio signal in method 500 may
be applicable to the first audio signal described in block 702 and method 700.
[0117]
At block 704, the method 700 includes receiving, from the network device, data indicative of a second audio signal detected by a microphone of the network device while the network device was moving from the first physical location to the second physical location. In addition to indicating the second audio signal, the data may further indicate that the second audio signal was detected by the microphone of the network device while the network device was moving from the first physical location to the second physical location. In one example, block 704 may be substantially similar to block 502 of method 500. As such, the descriptions associated with block 502 and method 500 may be applicable, sometimes with modifications, to block 704.
[0118]
In some cases, the playback device 604 may receive data indicative of the second audio signal while the microphone of the network device 602 is detecting the second audio signal. In other words, the network device 602 may stream the data indicative of the second audio signal while detecting the second audio signal. In another case, the playback device 604 may receive the data indicative of the second audio signal once detection of the second audio signal (and, in some cases, playback of the first audio signal by the playback device 604) has been completed. Other examples are also possible.
[0119]
At block 706, method 700 includes identifying an audio processing algorithm based on the data
indicative of the second audio signal. In one example, block 706 may be substantially similar to
block 504 of method 500. As such, the descriptions associated with block 504 and method 500
may be applicable, sometimes with modifications, to block 706.
[0120]
At block 708, method 700 includes applying an audio processing algorithm when playing back
audio content in a playback environment. In one example, block 708 may be substantially similar
to block 506 of method 500. Note that in this case, the playback device 604 can apply the
specified audio processing algorithm without having to transmit the specified audio processing
algorithm to another device. As noted above, playback device 604 may nevertheless transmit the identified audio processing algorithm to another device, such as computer 610, for storage.
[0121]
As mentioned above, method 700 may be at least partially coordinated and / or performed by
playback device 604. In some embodiments, certain functions of method 700 may be performed
and/or coordinated by one or more other devices, such as network device 602, playback device 606, or computer 610, among other possibilities. For example, blocks
702, 704, and 708 may be performed by playback device 604, while in some cases, block 706
may be partially performed by network device 602 or computer 610. Other examples are also
possible.
[0122]
c. Exemplary Third Method of Calibrating One or More Playback Devices FIG. 8 shows an exemplary flow diagram of a third method 800 of calibrating a playback device based on an audio signal detected by a microphone of a network device moving within the playback environment. Method 800 shown in FIG. 8 presents an embodiment of a method that may be performed within an operating environment including, for example, the media playback system 100 of FIG. 1, one or more of the playback devices 200 of FIG. 2, one or more of the control devices 300 of FIG. 3, and the playback environment 600 of FIG. 6. Method 800 may include one or more operations, functions, or actions as illustrated by one or more of blocks 802-806. Although the blocks are shown in sequential order, the blocks may be performed in parallel and/or in an order different from the order described herein. These blocks may also be combined into fewer blocks, divided into additional blocks, and/or removed, based on the desired implementation.
[0123]
In an example, the method 800 may be performed at least in part by a computer, such as a
server in communication with a playback device. Returning again to the playback environment
600 of FIG. 6, the method 800 may be at least partially coordinated and / or performed by the
computer 610.
[0124]
As shown in FIG. 8, method 800 involves receiving from a network device, at block 802, data indicative of an audio signal detected by a microphone of the network device while the network device was moving from a first physical location to a second physical location within a playback environment, identifying an audio processing algorithm based on the data indicative of the detected audio signal at block 804, and transmitting data indicative of the identified audio processing algorithm to a playback device within the playback environment at block 806.
[0125]
At block 802, the method 800 includes receiving, from the network device, data indicative of an audio signal detected by a microphone of the network device while the network device was moving from the first physical location to the second physical location within the playback environment. In addition to indicating the detected audio signal, the data may further indicate that the audio signal was detected by the microphone of the network device while the network device was moving from the first physical location to the second physical location. In one example, block 802 may be substantially similar to block 502 of method 500 and block 704 of method 700. Thus, any description associated with block 502 of method 500 or block 704 of method 700 may be applicable, sometimes with modifications, to block 802.
[0126]
At block 804, method 800 includes identifying an audio processing algorithm based on data
indicative of the detected audio signal. In one example, block 804 may be substantially similar to
block 504 of method 500 and block 706 of method 700. Thus, any description associated with block 504 of method 500 or block 706 of method 700 may be applicable, sometimes with modifications, to block 804.
[0127]
At block 806, the method 800 includes transmitting data indicative of the identified audio processing algorithm to a playback device within the playback environment. In one example, block 806 may be substantially similar to block 506 of method 500 and block 708 of method 700. As such, any description associated with block 506 of method 500 or block 708 of method 700 may be applicable, sometimes with modifications, to block 806.
[0128]
As mentioned above, method 800 may be at least partially coordinated and/or performed by computer 610. In some embodiments, certain functions of method 800 may be performed and/or coordinated by one or more other devices, such as network device 602, playback device 604, or playback device 606, among other possibilities. For example, as described above, block 802 may be performed by computer 610, while in some cases block 804 may be partially performed by network device 602, and block 806 may be performed by computer 610 and/or network device 602. Other examples are also possible.
[0129]
In some cases, one or more network devices may be used to calibrate one or more playback devices, individually or collectively. For example, two or more network devices may detect audio signals being played by one or more playback devices while moving through the playback environment. For example, one network device may move through locations where a first user regularly listens to audio content played by the one or more playback devices, while another network device may move through locations where a second user regularly listens to audio content played by the one or more playback devices. In such cases, the processing algorithm may be determined based on the audio signals detected by the two or more network devices.
[0130]
Furthermore, in some cases, a separate processing algorithm may be determined based on the signals detected while each network device traverses a different path in the playback environment. Thus, when a given network device is used to initiate playback of audio content by the one or more playback devices, the processing algorithm determined based on the signals detected while that network device traversed the playback environment may be applied.
[0131]
IV. Calibration of Network Device Microphone with Playback Device Microphone As mentioned above, calibration of a playback device in a playback environment, as described in connection with FIGS. 5-8, may involve knowing the acoustic characteristics and/or the calibration algorithm of the microphone of the network device used for the calibration. In some cases, however, the acoustic characteristics and/or the calibration algorithm of the microphone of the network device used for the calibration may be unknown.
[0132]
The examples described in this section include performing calibration of the network device's
microphone while the network device is located within a predetermined physical range of the
playback device's microphone. Methods 900 and 1100 are exemplary methods that can perform
calibration of the microphones of the network device, as described below.
[0133]
a. Exemplary First Method of Calibrating a Microphone of a Network Device FIG. 9 shows an exemplary flow diagram of a first method 900 of calibrating a microphone of a network device. Method 900 shown in FIG. 9 presents an embodiment of a method that may be performed within an operating environment including, for example, the media playback system 100 of FIG. 1, one or more of the playback devices 200 of FIG. 2, one or more of the control devices 300 of FIG. 3, and the exemplary arrangement 1000 of FIG. 10. Method 900 may include one or more operations, functions, or actions as illustrated by one or more of blocks 902-908. Although the blocks are shown in sequential order, the blocks may be performed in parallel and/or in an order different from the order described herein. These blocks may also be combined into fewer blocks, divided into additional blocks, and/or removed, based on the desired implementation.
[0134]
In one example, the method 900 may be performed at least in part by the network device whose microphone is being calibrated. As shown in FIG. 9, method 900 involves detecting a first audio signal by a microphone of the network device, at block 902, while the network device is located within a predetermined physical range of a microphone of a playback device, receiving data indicative of a second audio signal detected by the microphone of the playback device at block 904, identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal at block 906, and applying the microphone calibration algorithm when performing a calibration function associated with the playback device at block 908.
[0135]
To aid in the description of method 900 and method 1100 below, an exemplary arrangement
1000 of microphone calibration as shown in FIG. 10 is provided. The microphone calibration
arrangement 1000 includes a playback device 1002, a playback device 1004, a playback device
1006, a microphone 1008 of the playback device 1006, a network device 1010, and a computer
1012.
[0136]
Network device 1010 may coordinate and/or perform at least a portion of method 900. Network device 1010 may be similar to control device 300 of FIG. 3. In this case, network device 1010 may have its microphone calibrated by method 900 and/or method 1100. As mentioned above, the network device 1010 may be a mobile device with a built-in microphone. Thus, the microphone of the network device 1010 to be calibrated may be the built-in microphone of the network device 1010.
[0137]
Each of the playback devices 1002, 1004, and 1006 may be similar to the playback device 200 of FIG. 2. One or more of the playback devices 1002, 1004, and 1006 may include a microphone with known acoustic characteristics. Computer 1012 may be a server in communication with a media playback system that includes playback devices 1002, 1004, and 1006. Computer 1012 may also communicate directly or indirectly with network device 1010. Although the following description of methods 900 and 1100 refers to the microphone calibration arrangement 1000 of FIG. 10, those skilled in the art will recognize that the microphone calibration arrangement 1000 shows only one example of an arrangement in which the microphone of a network device may be calibrated. Other examples are also possible.
[0138]
In one example, the microphone calibration arrangement 1000 may be in an acoustic test facility
where the microphones of the network device are calibrated. In another example, the
microphone calibration arrangement 1000 may be in the user's household where the user
calibrates the playback devices 1002, 1004 and 1006 using the network device 1010.
[0139]
In one example, calibration of the microphone of network device 1010 may be initiated by network device 1010 or computer 1012. For example, calibration of the microphone may be initiated when an audio signal detected by the microphone is to be processed by the network device 1010 or computer 1012 to calibrate a playback device, as described above in methods 500, 700, and 800, but the acoustic characteristics of the microphone are unknown. In another example, calibration of the microphone may be initiated when the network device 1010 receives an input indicating that the microphone of the network device is to be calibrated. In one case, the input may be provided by a user of network device 1010.
[0140]
Returning to the method 900, block 902 includes detecting the first audio signal by the
microphone of the network device while the network device is located within a predetermined
physical range of the microphone of the playback device. In the microphone calibration
arrangement 1000, the network device 1010 may be within a predetermined physical range of
the microphone 1008 of the playback device 1006. As shown in FIG. 10, the microphone 1008 may be located at the top left of the playback device 1006. In other implementations, the microphone 1008 of the playback device 1006 may be at any of a plurality of possible positions associated with the playback device 1006. In some cases, the microphone 1008 may be hidden within the playback
device 1006 and may not be visible from outside the playback device 1006.
[0141]
Thus, depending on the position of the microphone 1008 of the playback device 1006, a position within the predetermined physical range of the microphone 1008 of the playback device 1006 may be a position above the playback device 1006, behind the playback device 1006, to the side of the playback device 1006, in front of the playback device 1006, or any other possible position.
[0142]
In one example, the network device 1010 may be placed by the user within the predetermined physical range of the playback device microphone 1008 as part of a calibration process. For example, upon initiation of calibration of the microphone of network device 1010, the network device 1010 may provide a graphical interface. The graphical interface may indicate that the network device 1010 should be located within a predetermined physical range of the microphone of a playback device having known acoustic characteristics, such as the playback device 1006. In some cases, if multiple playback devices controlled by the network device 1010 have microphones with known acoustic characteristics, the graphical interface may prompt the user to select one of the plurality of playback devices to be used for the calibration. In this example, the user may select a playback device. In one example, the graphical interface may include an illustration of where, relative to the playback device 1006, the predetermined physical range of the playback device 1006 microphone is located.
[0143]
In one example, the first audio signal detected by the microphone of network device 1010 may include a portion corresponding to a third audio signal played by one or more of the playback devices 1002, 1004, and 1006, and may also include a portion of the third audio signal as reflected within the room in which the microphone calibration arrangement 1000 is set up, among other possibilities.
[0144]
In one example, the third audio signal played by the one or more playback devices 1002, 1004, and 1006 may be a test signal or measurement signal representative of audio content that may be played by the playback devices 1002, 1004, and 1006 during calibration of one or more of the playback devices 1002, 1004, and 1006. Thus, the third audio signal may include audio content with frequencies substantially covering the playable frequency range of the playback devices 1002, 1004, and 1006, or the frequency range audible to humans. In some cases, the third audio signal may be an audio signal generated specifically for use when calibrating playback devices such as playback devices 1002, 1004, and 1006.
[0145]
Once the network device 1010 is in place, the third audio signal may be played by one or more of the playback devices 1002, 1004, and 1006. For example, when the network device 1010 is located within the predetermined physical range of the microphone 1008, the network device 1010 may send a message to one or more of the playback devices 1002, 1004, and 1006 to cause the one or more playback devices 1002, 1004, and 1006 to play the third audio signal. In some cases, the message may be sent in response to a user input indicating that the network device 1010 is located within the predetermined physical range of the microphone 1008. In another example, the network device 1010 may detect the proximity of the playback device 1006 using a proximity sensor of the network device 1010. In yet another example, the playback device 1006 may determine, based on a proximity sensor of the playback device 1006, when the network device 1010 is located within the predetermined physical range of the microphone 1008. Other examples are also possible.
[0146]
One or more of the playback devices 1002, 1004, and 1006 may play the third audio signal, and
the first audio signal may be detected by the microphone of the network device 1010.
[0147]
At block 904, method 900 includes receiving data indicative of a second audio signal detected by
a microphone of the playback device.
Continuing the above example, the microphone of the playback device may be the microphone
1008 of the playback device 1006. In one example, the second audio signal may be detected simultaneously with the detection of the first audio signal by the microphone of network device 1010. As such, the second audio signal may include a portion corresponding to the third audio signal played by one or more of the playback devices 1002, 1004, and 1006, and may also include a portion of the third audio signal as reflected within the room in which the microphone calibration arrangement 1000 is set up, among other possibilities.
[0148]
In another example, the second audio signal may be detected by the microphone of the playback device 1006 before or after the first audio signal is detected. In such a case, while the microphone 1008 of the playback device 1006 detects the second audio signal, one or more of the playback devices 1002, 1004, and 1006 may play an audio signal that is substantially the same as the third audio signal played at the other time.
[0149]
In such a case, the devices of the microphone calibration arrangement 1000 may be located in the same positions both when the third audio signal is played by one or more of the playback devices 1002, 1004, and 1006 and when the second audio signal is detected by the microphone 1008 of the playback device 1006.
[0150]
In one example, while the second audio signal is being detected by the microphone 1008 of the
playback device 1006, the network device 1010 may receive data indicative of the second audio
signal.
In other words, while the microphone 1008 is detecting the second audio signal, the playback
device 1006 may stream data indicative of the second audio signal to the network device 1010.
In another example, the network device 1010 may receive data indicative of the second audio
signal after detection of the second audio signal is complete. Other examples are also possible.
[0151]
At block 906, the method includes identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal. In one example, because the network device 1010 is located within the predetermined physical range of the microphone 1008 of the playback device 1006, the first audio signal detected by the microphone of the network device 1010 may be substantially the same as the second audio signal detected by the microphone 1008 of the playback device 1006. Thus, if the acoustic characteristics of the microphone 1008 of the playback device 1006 are known, the acoustic characteristics of the microphone of the network device 1010 may be determined.
[0152]
Assuming that the second audio signal detected by the microphone 1008 is s(t) and the acoustic characteristic of the microphone 1008 is h_p(t), the signal m(t) that is output from the microphone 1008 and processed to produce the data indicative of the second audio signal can be expressed mathematically by the following equation:
[0153]

m(t) = s(t) ⊗ h_p(t)    (5)

[0154]
Similarly, if the first audio signal detected by the microphone of the network device 1010 is f(t) and the unknown acoustic characteristic of the microphone of the network device is h_n(t), then the signal n(t) that is output from the microphone of the network device 1010 and processed to generate the data indicative of the first audio signal can be expressed mathematically by the following equation:
[0155]

n(t) = f(t) ⊗ h_n(t)    (6)

[0156]
As mentioned above, because the first audio signal f(t) detected by the microphone of the network device 1010 is substantially the same as the second audio signal s(t) detected by the microphone 1008 of the playback device 1006, the following equation holds:
[0157]

n(t) ⊗ h_n^{-1}(t) = m(t) ⊗ h_p^{-1}(t)    (7)

[0158]
Therefore, since the data indicative of the first audio signal n(t), the data indicative of the second audio signal m(t), and the acoustic characteristic h_p(t) of the microphone 1008 of the playback device 1006 are known, h_n(t) can be calculated.
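As an illustration only (a sketch under assumptions, not the disclosed implementation), the calculation implied by equations (5)-(7) can be carried out in the frequency domain: since f(t) is substantially the same as s(t), N/H_n = M/H_p, so H_n = N * H_p / M. The Python names below are hypothetical, and numpy is assumed.

import numpy as np

def network_mic_response(n_sig, m_sig, h_p, eps=1e-12):
    # n_sig: output of the network device microphone; m_sig: output of the
    # playback device microphone; h_p: playback device microphone impulse response
    length = len(n_sig)
    N = np.fft.rfft(n_sig, length)
    M = np.fft.rfft(m_sig, length)
    Hp = np.fft.rfft(h_p, length)
    Hn = N * Hp / (M + eps)            # since f(t) ~ s(t): N / Hn = M / Hp
    return np.fft.irfft(Hn, length)    # h_n(t); its inverse gives the calibration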
[0159]
In one example, the microphone calibration algorithm for the microphone of the network device 1010 may simply be the inverse h_n^{-1}(t) of the acoustic characteristic h_n(t). Thus, applying the microphone calibration algorithm when processing the audio signal output by the microphone of network device 1010 may mathematically remove the acoustic characteristics of the microphone of network device 1010 from the output audio signal. Other examples are also possible.
[0160]
In some cases, identifying the microphone calibration algorithm may include transmitting, to the computer 1012, the data indicative of the first audio signal, the data indicative of the second audio signal, and the acoustic characteristics of the microphone 1008 of the playback device 1006. In some cases, the data indicative of the second audio signal and the acoustic characteristics of the microphone 1008 of the playback device 1006 may be provided to the computer 1012 by the playback device 1006 and/or other devices in communication with the computer 1012. The computer 1012 may then identify the microphone calibration algorithm, as described in connection with equations (5) to (7), based on the data indicative of the first audio signal, the data indicative of the second audio signal, and the acoustic characteristics of the microphone 1008 of the playback device 1006. Network device 1010 may then receive the identified microphone calibration algorithm from computer 1012.
[0161]
At block 908, the method 900 includes applying the microphone calibration algorithm when performing a calibration function associated with the playback device. In one example, upon identifying the microphone calibration algorithm, the network device 1010 may apply the identified microphone calibration algorithm when performing a function that involves the microphone. For example, an audio signal generated from an audio signal detected by the microphone of network device 1010 may be processed using the microphone calibration algorithm such that the acoustic characteristics of the microphone are mathematically removed from the audio signal before network device 1010 transmits the audio signal to another device. In one example, as described in methods 500, 700, and 800, the microphone calibration algorithm may be applied when the network device 1010 is performing calibration of a playback device.
[0162]
In one example, the network device 1010 may further store, in a database, an association between the identified microphone calibration algorithm (and/or the acoustic characteristics) and one or more characteristics of the microphone of the network device 1010. The one or more characteristics of the microphone of the network device 1010 may include a model of the network device 1010, a model of the microphone of the network device 1010, or other possibilities. In one example, the database may be stored locally at network device 1010. In another example, the database may be transmitted to and stored at another device, such as the computer 1012 or one or more of the playback devices 1002, 1004, and 1006. Other examples are also possible.
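For illustration only, such a database could be as simple as the following hypothetical per-model lookup (the keys and values are not from the disclosure):

microphone_calibration_db = {
    ("network-device-model-a", "microphone-model-1"): {"calibration": [1.00, -0.20, 0.05]},
    ("network-device-model-b", "microphone-model-2"): {"calibration": [0.95, -0.10, 0.02]},
}

def lookup_calibration(device_model, microphone_model):
    # Returns the stored calibration data for the given models, if any.
    return microphone_calibration_db.get((device_model, microphone_model))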
[0163]
Multiple entries, each associating a microphone calibration algorithm with one or more characteristics of a network device microphone, may be added to the database over time. As mentioned above, the microphone calibration arrangement 1000 may be in an acoustic test facility where the microphones of network devices are calibrated. In such cases, entries may be added to the database via calibrations performed in the acoustic test facility. Alternatively, if the microphone calibration arrangement 1000 is in a user's household where the user calibrates the playback devices 1002, 1004, and 1006 using the network device 1010, crowd-sourced microphone calibration algorithms may be added to the database. In some cases, the database may include both entries generated from calibrations in the acoustic test facility and crowd-sourced entries.
[0164]
The database may be accessed by other network devices, by computers including computer 1012, and by playback devices including playback devices 1002, 1004, and 1006, to identify the microphone calibration algorithm corresponding to the microphone of a given network device, which may be applied when processing audio signals output from the microphone of that network device.
[0165]
In some cases, the microphone calibration algorithms determined for network devices or microphones of the same model may vary due to manufacturing variations, variations in microphone manufacturing quality control, and variations during calibration (e.g., inconsistencies in where the network devices are placed during calibration, among other possibilities). In such cases, a representative microphone calibration algorithm may be determined from the varying microphone calibration algorithms. For example, the representative microphone calibration algorithm may be an average of the varying microphone calibration algorithms. In some cases, the database entry for a given model of network device may be updated with an updated representative calibration algorithm each time a calibration is performed on a microphone of a network device of that model.
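As a hedged sketch of one way a representative calibration could be maintained (not the disclosed implementation), the database entry for a model could hold a running mean that is updated with each new calibration; numpy is assumed and the names are hypothetical.

import numpy as np

def update_representative(entry, new_calibration):
    # entry: dict with "mean" (array-like) and "count"; new_calibration: the
    # newly determined calibration for another device of the same model.
    count = entry["count"] + 1
    mean = np.asarray(entry["mean"], dtype=float)
    entry["mean"] = mean + (np.asarray(new_calibration, dtype=float) - mean) / count
    entry["count"] = count
    return entry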
[0166]
As mentioned above, method 900 may be at least partially coordinated and / or performed by
network device 1010. Also, in some embodiments, certain functions of method 900 may be
performed and/or coordinated by one or more other devices, such as one or more of the playback devices 1002, 1004, and 1006, or the computer 1012, among other possibilities. For example, blocks 902 and 908 may be performed by network device 1010,
while in some cases blocks 904 and 906 may be at least partially performed by computer 1012.
Other examples are also possible.
[0167]
In some cases, network device 1010 may at least partially coordinate and/or perform functions for calibrating the microphone of another network device.
[0168]
b. Exemplary Second Method of Calibrating a Microphone of a Network Device FIG. 11 shows an exemplary flow diagram of a second method 1100 of calibrating a microphone of a network device. Method 1100 shown in FIG. 11 presents an embodiment of a method that may be performed within an operating environment including, for example, the media playback system 100 of FIG. 1, one or more of the playback devices 200 of FIG. 2, one or more of the control devices 300 of FIG. 3, and the microphone calibration arrangement 1000 of FIG. 10. Method 1100 may include one or more operations, functions, or actions as illustrated by one or more of blocks 1102-1108. Although the blocks are shown in sequential order, the blocks may be performed in parallel and/or in an order different from the order described herein. These blocks may also be combined into fewer blocks, divided into additional blocks, and/or removed, based on the desired implementation.
[0169]
In one example, method 1100 may be at least partially performed by a computer, such as computer 1012 of FIG. 10. As shown in FIG. 11, method 1100 involves receiving from the network device, at block 1102, data indicative of a first audio signal detected by a microphone of the network device while the network device was located within a predetermined physical range of a microphone of a playback device, receiving data indicative of a second audio signal detected by the microphone of the playback device at block 1104, identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal at block 1106, and applying the microphone calibration algorithm when performing a calibration function associated with the network device and the playback device at block 1108.
[0170]
At block 1102, the method 1100 includes receiving, from the network device, data indicative of a first audio signal detected by the microphone of the network device while the network device was located within the predetermined physical range of the microphone of the playback device. The data indicative of the first audio signal may further indicate that the first audio signal was detected by the microphone of the network device while the network device was located within the predetermined physical range of the microphone of the playback device. In one example, block 1102 of method 1100 may be substantially similar to block 902 of method 900, except that it is coordinated and/or performed by computer 1012 instead of network device 1010. As such, the descriptions of block 902 and method 900 may be applicable, sometimes with modifications, to block 1102.
[0171]
At block 1104, method 1100 includes receiving data indicative of a second audio signal detected by the microphone of the playback device. In one example, block 1104 of method 1100 may be substantially similar to block 904 of method 900, except that it is coordinated and/or performed by computer 1012 instead of network device 1010. As such, the descriptions of block 904 and method 900 may be applicable, sometimes with modifications, to block 1104.
[0172]
At block 1106, method 1100 includes identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal. In one example, block 1106 of method 1100 may be substantially similar to block 906 of method 900, except that it is coordinated and/or performed by computer 1012 instead of network device 1010. As such, the descriptions of block 906 and method 900 may be applicable, sometimes with modifications, to block 1106.
[0173]
At block 1108, method 1100 includes applying the microphone calibration algorithm when performing a calibration function associated with the network device and the playback device. In one example, block 1108 of method 1100 may be substantially similar to block 908 of method 900, except that it is coordinated and/or performed by computer 1012 instead of network device 1010. As such, the descriptions of block 908 and method 900 may be applicable, sometimes with modifications, to block 1108.
[0174]
For example, in this case, the microphone calibration algorithm may be applied by the computer 1012 to microphone-detected audio signal data received from the respective network device, rather than being applied by the respective network device before the microphone-detected audio signal data is transmitted to and received by the computer 1012. In some cases, computer 1012 may identify the network device that transmitted the microphone-detected audio signal data and apply the microphone calibration algorithm corresponding to that network device to the received data.
[0175]
As described in connection with method 900, the microphone calibration algorithm identified at block 1106 may be stored in a database. The database may store an association between the microphone calibration algorithm and one or more characteristics of the respective network device and/or of the microphone of the network device.
[0176]
Computer 1012 may be configured to coordinate and/or perform functions for calibrating the microphones of other network devices. For example, the method 1100 may further include receiving, from a second network device, data indicative of an audio signal detected by a microphone of the second network device while the second network device was located within the predetermined physical range of the microphone of the playback device. The data indicative of the detected audio signal may further indicate that the audio signal was detected by the microphone of the second network device while the second network device was located within the predetermined physical range of the microphone of the playback device.
[0177]
The method 1100 may further include identifying a second microphone calibration algorithm based on the data indicative of the detected audio signal and the data indicative of the second audio signal, and storing, in the database, an association between the identified second microphone calibration algorithm and one or more characteristics of the microphone of the second network device. The computer 1012 may further transmit data indicative of the second microphone calibration algorithm to the second network device.
[0178]
As described in connection with method 900, the microphone calibration algorithms determined for network devices or microphones of the same model may vary due to manufacturing variations, variations in microphone manufacturing quality control, and variations during calibration (e.g., inconsistencies in where the network devices are placed during calibration, among other possibilities). In such cases, a representative microphone calibration algorithm may be determined from the varying microphone calibration algorithms. For example, the representative microphone calibration algorithm may be an average of the varying microphone calibration algorithms. In some cases, the database entry for a given model of network device may be updated with an updated representative calibration algorithm each time a calibration is performed on a microphone of a network device of that model.
[0179]
In such a case, for example, if the second network device is of the same model as the network device 1010 and has the same model of microphone, then the method 1100 may further include determining that the microphone of the network device 1010 and the microphone of the second network device are substantially the same, responsively determining a third microphone calibration algorithm based on the first microphone calibration algorithm (for the microphone of the network device 1010) and the second microphone calibration algorithm, and storing, in the database, an association between the determined third microphone calibration algorithm and the one or more characteristics of the microphone of the network device 1010. As mentioned above, the third microphone calibration algorithm may be determined as an average of the first microphone calibration algorithm and the second microphone calibration algorithm.
[0180]
As discussed above, method 1100 may be at least partially coordinated and / or performed by
computer 1012. Also, in some cases, certain functions of method 1100 may be performed and/or coordinated by one or more other devices, such as the network device 1010 or one or more of the playback devices 1002, 1004, and 1006, among other possibilities. For
example, as described above, blocks 1102-1106 may be performed by computer 1012, while in
some cases block 1108 may be performed by network device 1010. Other examples are also
possible.
[0181]
V. Conclusion The above description discloses, among other things, various example systems, methods, apparatus, and articles of manufacture that include, among other components, firmware and/or software executed on hardware. It is understood that such examples are merely illustrative and should not be considered limiting. For example, it is contemplated that any or all of the firmware, hardware, and/or software aspects or components may be embodied exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, the examples provided are not the only way to implement such systems, methods, apparatus, and/or articles of manufacture.
[0182]
The following examples disclose additional or alternative aspects of the present disclosure. The device of any of the following examples may be a component of any of the devices described herein, or may be configured as any of the devices described herein.
[0183]
(Feature 1) A network device comprising: a microphone; a processor; and a memory storing instructions that, when executed by the processor, cause the network device to perform functions comprising: detecting a first audio signal by the microphone of the network device while the network device is located within a predetermined physical range of a microphone of a playback device; receiving data indicative of a second audio signal detected by the microphone of the playback device; identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal; and applying the microphone calibration algorithm when performing a calibration function associated with the playback device.
[0184]
(Feature 2) The network device of Feature 1, wherein the functions further include storing, in a database, an association between the identified microphone calibration algorithm and one or more characteristics of the microphone of the network device.
[0185]
(Feature 3) The network device of Feature 1 or 2, wherein the second audio signal was detected by the microphone of the playback device while the first audio signal was being detected by the microphone of the network device.
[0186]
(Feature 4) The network device of any one of Features 1 to 3, wherein the functions further include causing one or more playback devices to play a third audio signal while the first audio signal is being detected, and wherein the first audio signal and the second audio signal each include a portion corresponding to the third audio signal.
[0187]
(Feature 5) The network device of feature 4, wherein one or more playback devices include the
playback device.
[0188]
(Feature 6) The network device of any one of Features 1 to 5, wherein the functions further
comprise, before detecting the first audio signal, receiving an input to calibrate the microphone
of the network device.
[0189]
(Feature 7) The network device of any one of Features 1 to 6, wherein the functions further
comprise, before detecting the first audio signal, providing, on a graphical interface, a
graphical display indicating that the network device is located within a predetermined physical
range of the playback device's microphone.
[0190]
(Feature 8) The network device of any one of Features 1 to 7, wherein the functions further
comprise, before detecting the first audio signal, determining that the network device is located
within the predetermined physical range of the playback device's microphone.
[0191]
(Feature 9) The network device of any one of Features 1 to 8, wherein identifying the microphone
calibration algorithm comprises transmitting data indicating the first audio signal to a
computer, and receiving the microphone calibration algorithm from the computer.
[0192]
(Feature 10) A computer comprising: a processor; and a memory storing instructions that, when
executed by the processor, cause the computer to perform functions comprising: receiving data
indicating a first audio signal detected by a microphone of a network device while the network
device was located within a predetermined physical range of a microphone of a playback device;
receiving data indicating a second audio signal detected by the microphone of the playback
device; identifying a microphone calibration algorithm based on the data indicating the first
audio signal and the data indicating the second audio signal; and applying the microphone
calibration algorithm when performing a calibration function associated with the playback device.
[0193]
(Feature 11) The computer of Feature 10, wherein the functions further comprise transmitting the
microphone calibration algorithm to the network device.
[0194]
(Feature 12) The computer of Feature 10 or 11, wherein the functions further comprise storing, in
a database, an association between the identified microphone calibration algorithm and one or
more characteristics of the microphone of the network device.
[0195]
(Feature 13) The computer of any one of Features 10 to 12, wherein the network device is a first
network device and the microphone calibration algorithm is a first microphone calibration
algorithm, and wherein the functions further comprise: receiving, from a second network device,
data indicating a third audio signal detected by a microphone of the second network device while
the second network device was located within a predetermined physical range of the playback
device's microphone; identifying a second microphone calibration algorithm based on the data
indicating the third audio signal and the data indicating the second audio signal; and storing,
in a database, an association between the identified second microphone calibration algorithm and
one or more characteristics of the microphone of the second network device.
[0196]
(Feature 14) The computer of Feature 13, wherein the functions further comprise transmitting data
indicating the second microphone calibration algorithm to the second network device.
[0197]
(Feature 15) The computer of Feature 13 or 14, wherein the functions further comprise:
determining that the microphone of the first network device and the microphone of the second
network device are substantially the same; in response, determining a third microphone
calibration algorithm based on the first microphone calibration algorithm and the second
microphone calibration algorithm; and storing, in a database, an association between the
determined third microphone calibration algorithm and one or more characteristics of the
microphone of the first network device.
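As a minimal sketch of the combination described in Feature 15 (and of the averaging mentioned
above), assuming each microphone calibration algorithm is represented as a per-frequency gain
curve, the third algorithm could be computed as follows; the function name and representation are
invented for illustration.

import numpy as np

def combine_calibrations(first_curve, second_curve):
    # Average two calibration curves obtained for substantially identical
    # microphones into a single curve usable for either network device.
    first_curve = np.asarray(first_curve, dtype=float)
    second_curve = np.asarray(second_curve, dtype=float)
    return 0.5 * (first_curve + second_curve)

A geometric mean would be an equally defensible choice, since gains combine multiplicatively.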
[0198]
(Feature 16) A non-transitory computer-readable recording medium storing instructions that, when
executed, cause a computer to perform functions comprising: receiving, from a network device,
data indicating a first audio signal detected by a microphone of the network device while the
network device was located within a predetermined physical range of a microphone of a playback
device; receiving data indicating a second audio signal detected by the microphone of the
playback device; identifying a microphone calibration algorithm based on the data indicating the
first audio signal and the data indicating the second audio signal; and storing, in a database,
an association between the identified microphone calibration algorithm and one or more
characteristics of the microphone of the network device.
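The database step in Feature 16 could, for example, be realized as shown below; the table layout,
column names, and the choice of SQLite and JSON are assumptions made only for this sketch.

import json
import sqlite3

def store_calibration(db_path, mic_model, calibration_curve):
    # Persist the association between a microphone characteristic (here the
    # model string) and the identified calibration curve.
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS mic_calibrations ("
        "mic_model TEXT PRIMARY KEY, curve_json TEXT)"
    )
    conn.execute(
        "INSERT OR REPLACE INTO mic_calibrations VALUES (?, ?)",
        (mic_model, json.dumps(list(calibration_curve))),
    )
    conn.commit()
    conn.close()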
[0199]
(Feature 17) The non-transitory computer-readable recording medium of Feature 16, wherein the
functions further comprise transmitting data indicating the microphone calibration algorithm to
the network device.
[0200]
(Feature 18) The non-transitory computer-readable recording medium of Feature 16 or 17, wherein
receiving the data indicating the second audio signal detected by the microphone of the playback
device comprises receiving, from the playback device, the data indicating the second audio signal
detected by the microphone of the playback device.
[0201]
(Feature 19) The non-transitory computer-readable recording medium of any one of Features 16 to
18, wherein receiving the data indicating the second audio signal detected by the microphone of
the playback device comprises receiving, from the network device, the data indicating the second
audio signal detected by the microphone of the playback device.
[0202]
(Feature 20) The non-transitory computer-readable recording medium of any one of Features 16 to
19, wherein the functions further comprise causing one or more playback devices to play a third
audio signal before receiving the data indicating the first audio signal, and wherein the first
audio signal includes a portion corresponding to the third audio signal.
[0203]
(Feature 21) A computer comprising: a processor; and a memory storing instructions that, when
executed by the processor, cause the computer to perform functions comprising: identifying, in a
database of microphone acoustic characteristics, an acoustic characteristic of a microphone of a
network device corresponding to a characteristic of the network device; and calibrating a
playback device based at least on the identified acoustic characteristic of the microphone.
[0204]
(Feature 22) The computer of Feature 21, wherein the functions further comprise maintaining the
database of microphone acoustic characteristics.
[0205]
(Feature 23) The computer of Feature 21 or 22, wherein identifying the acoustic characteristic of
the microphone comprises: transmitting, to a server maintaining the database of microphone
acoustic characteristics, data indicating the characteristic of the network device and a query
for the acoustic characteristic corresponding to the characteristic of the network device; and
receiving, from the server, data indicating the queried acoustic characteristic of the
microphone.
[0206]
(Feature 24) The computer of any one of Features 21 to 23, wherein identifying the acoustic
characteristic of the microphone comprises: identifying a model of the microphone corresponding
to the characteristic of the network device; transmitting, to a server maintaining the database
of microphone acoustic characteristics, data indicating the model of the microphone and a query
for the acoustic characteristic corresponding to the model; and receiving, from the server, data
indicating the queried acoustic characteristic of the microphone.
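The server query described in Features 23 and 24 might look like the following sketch; the
endpoint path, query parameter, and response format are hypothetical and not specified by the
disclosure.

import json
import urllib.parse
import urllib.request

def fetch_mic_characteristic(server_url, mic_model):
    # Ask the server that maintains the microphone-characteristics database
    # for the acoustic characteristic matching a given microphone model.
    query = urllib.parse.urlencode({"mic_model": mic_model})
    with urllib.request.urlopen(server_url + "/mic-characteristics?" + query) as resp:
        payload = json.load(resp)
    # Assume the server answers with a per-frequency response curve.
    return payload["acoustic_characteristic"]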
[0207]
(Feature 25) The computer of any one of Features 21 to 24, wherein calibrating the playback
device comprises: determining an audio processing algorithm based on the identified acoustic
characteristic of the microphone; and causing the playback device to apply the audio processing
algorithm when playing back media content.
[0208]
(Feature 26) The computer of Feature 25, wherein the functions further comprise receiving (i)
first data indicating a first audio signal detected via the microphone of the network device
while the first audio signal was being played back by the playback device, and (ii) second data
indicating a second audio signal, and wherein determining the audio processing algorithm further
comprises determining the audio processing algorithm based on the first audio signal and the
second audio signal.
[0209]
(Feature 27) The computer of Feature 25 or 26, wherein the audio processing algorithm includes an
inverse of the identified acoustic characteristic of the microphone, and wherein causing the
playback device to apply the audio processing algorithm when playing back the media content
comprises correcting the media content to be played back by the inverse of the acoustic
characteristic of the identified microphone.
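Feature 27's correction by the inverse of the microphone's acoustic characteristic could be
sketched as follows, assuming the characteristic is available as a per-frequency magnitude
response; the names and the block-based FFT approach are illustrative only.

import numpy as np

def correct_for_microphone(audio_block, mic_response):
    # Divide the signal's spectrum by the microphone's magnitude response so
    # that the microphone's coloration is cancelled out; the floor value
    # guards against division by (near-)zero response bins.
    mic_response = np.asarray(mic_response, dtype=float)
    n_fft = 2 * (len(mic_response) - 1)
    spectrum = np.fft.rfft(audio_block, n=n_fft)
    return np.fft.irfft(spectrum / np.maximum(mic_response, 1e-12), n=n_fft)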
[0210]
(Feature 28) A network device comprising: a microphone; a processor; and a memory storing
instructions that, when executed by the processor, cause the network device to perform functions
comprising: identifying, in a database of microphone acoustic characteristics, an acoustic
characteristic of the microphone corresponding to a characteristic of the network device; and
calibrating a playback device based at least on the identified acoustic characteristic of the
microphone.
[0211]
(Feature 29) The network device of Feature 28, wherein the functions further comprise maintaining
the database of microphone acoustic characteristics.
[0212]
(Feature 30) The network device of Feature 28 or 29, wherein identifying the acoustic
characteristic of the microphone comprises: transmitting, to a server maintaining the database of
microphone acoustic characteristics, data indicating the characteristic of the network device and
a query for the acoustic characteristic corresponding to the characteristic of the network
device; and receiving, from the server, data indicating the queried acoustic characteristic of
the microphone.
[0213]
(Feature 31) The network device of any one of Features 28 to 30, wherein identifying the acoustic
characteristic of the microphone comprises: identifying a model of the microphone corresponding
to the characteristic of the network device; transmitting, to a server maintaining the database
of microphone acoustic characteristics, data indicating the model of the microphone and a query
for the acoustic characteristic corresponding to the model; and receiving, from the server, data
indicating the queried acoustic characteristic of the microphone.
[0214]
(Feature 32) The network device of any one of Features 28 to 31, wherein calibrating the playback
device comprises: determining an audio processing algorithm based on the identified acoustic
characteristic of the microphone; and causing the playback device to apply the audio processing
algorithm when playing back media content.
[0215]
(Feature 33) The network device of Feature 32, wherein the functions further comprise receiving
(i) data indicating a first audio signal detected via the microphone while the first audio signal
was being played back by the playback device, and (ii) data indicating a second audio signal, and
wherein determining the audio processing algorithm further comprises determining the audio
processing algorithm based on the first audio signal and the second audio signal.
[0216]
(Feature 34) The network device of Feature 32 or 33, wherein the audio processing algorithm
includes an inverse of the identified acoustic characteristic of the microphone, and wherein
causing the playback device to apply the audio processing algorithm when playing back the media
content comprises correcting the media content to be played back by the inverse of the acoustic
characteristic of the identified microphone.
[0217]
(Feature 35) A playback device comprising: a processor; and a memory storing instructions that,
when executed by the processor, cause the playback device to perform functions comprising:
identifying, in a database of microphone acoustic characteristics, an acoustic characteristic of
a microphone of a network device corresponding to a characteristic of the network device; and
calibrating the playback device based at least on the identified acoustic characteristic of the
microphone.
[0218]
(Feature 36) The playback device of Feature 35, wherein the functions further comprise
maintaining the database of microphone acoustic characteristics.
[0219]
(Feature 37) The playback device of Feature 35 or 36, wherein identifying the acoustic
characteristic of the microphone comprises: transmitting, to a server maintaining the database of
microphone acoustic characteristics, data indicating a characteristic of the playback device and
a query for the acoustic characteristic corresponding to the characteristic of the playback
device; and receiving, from the server, data indicating the queried acoustic characteristic of
the microphone.
[0220]
(Feature 38) The playback device of any one of Features 35 to 37, wherein identifying the
acoustic characteristic of the microphone comprises: identifying a model of the microphone
corresponding to the characteristic of the playback device; transmitting, to a server maintaining
the database of microphone acoustic characteristics, data indicating the model of the microphone
and a query for the acoustic characteristic corresponding to the model; and receiving, from the
server, data indicating the queried acoustic characteristic of the microphone.
[0221]
(Feature 39) The playback device of any one of Features 35 to 38, wherein calibrating the
playback device comprises: determining an audio processing algorithm based on the identified
acoustic characteristic of the microphone; and applying the audio processing algorithm when
playing back media content.
[0222]
(Feature 40) The playback device of Feature 39, wherein the functions further comprise: playing
back a first audio signal; and receiving data indicating a second audio signal detected via a
microphone of the network device while the first audio signal was being played back, and wherein
determining the audio processing algorithm further comprises determining the audio processing
algorithm based on the first audio signal and the second audio signal.
[0223]
Furthermore, references to "an embodiment" mean that a particular feature, structure, or
characteristic described in connection with the embodiment may be included in at least one
example embodiment of the invention.
Appearances of this phrase in various places in the specification do not necessarily all refer to
the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other
embodiments.
As such, the embodiments described herein may be combined with other embodiments, as will be
understood explicitly or implicitly by one skilled in the art.
[0224]
The specification is presented largely in terms of illustrative environments, systems,
procedures, steps, logic blocks, processing, and other symbolic representations that directly or
indirectly resemble the operations of data processing devices connected to a network.
These process descriptions and representations are commonly used by those skilled in the art to
most effectively convey the substance of their work to others skilled in the art.
Numerous specific details are set forth to provide an understanding of the present disclosure.
However, it will be understood by those skilled in the art that certain embodiments of the
present disclosure may be practiced without certain specific details.
In other instances, well-known methods, procedures, components, and circuits have not been
described in detail to avoid unnecessarily obscuring the embodiments.
Accordingly, the scope of the present disclosure is defined by the appended claims rather than by
the foregoing description of embodiments.
[0225]
To the extent that any of the appended claims are read to cover a purely software and/or firmware
implementation, at least one of the elements in at least one example is hereby expressly defined
to include a tangible, non-transitory storage medium storing that software and/or firmware, such
as a memory, a DVD, a CD, a Blu-ray (registered trademark) disc, and the like.