Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2016021640
An audio device capable of automatically controlling sound output based on an audio signal
received from another audio device through wireless communication is provided. A first audio
device (110) capable of wirelessly communicating with a second audio device (120) includes: a
detection unit (111) that detects a user (20) in a predetermined detection area (30) included in
the wireless communication range of the first audio device (110); an output unit (112) that
outputs sound; and a control unit (113) that, when the detection unit (111) detects the user (20),
causes the output unit (112) to output sound based on an audio signal received from the second
audio device (120) via wireless communication. [Selected figure] Figure 3
Audio device and audio system
[0001]
The present invention relates to an audio system including an audio device and another audio
device capable of wireless communication with the audio device, and an audio device included in
the audio system.
[0002]
With the spread of portable devices such as smartphones, the spread of short distance wireless
communication such as Bluetooth (registered trademark) is also in progress.
If two communication devices (for example, a portable music player and headphones) are
11-04-2019
1
connected by near field communication, it is not necessary to connect the communication devices
by wire, and the convenience of the user is improved.
[0003]
As a technology related to such short distance wireless communication, Patent Document 1
discloses a technique for automatically selecting an optimum master from among a plurality of
Bluetooth devices in order to connect the plurality of Bluetooth devices stably.
[0004]
JP 2005-27280 A
[0005]
For example, when it is desired to output the sound of a smartphone from an audio device using
wireless communication, the user needs to instruct the smartphone to start wireless
communication with the audio device.
In order to omit such an instruction from the user, it is also conceivable to output the sound of
the smartphone from the audio device whenever the smartphone is within the wireless
communication range of the audio device.
[0006]
However, if the audio device always outputs the sound of the smartphone whenever the
smartphone is within the wireless communication range of the audio device, a sound that the
user does not intend may be output from the audio device.
[0007]
Therefore, the present invention provides an audio device capable of automatically and
appropriately controlling the output of sound based on an audio signal received from another
audio device through wireless communication, and an audio system including the audio device.
[0008]
An audio device according to one aspect of the present invention is an audio device capable of
wirelessly communicating with another audio device, and includes: a detection unit that detects a
user in a predetermined detection area included in the wireless communication range of the
audio device; an output unit that outputs sound; and a control unit that causes the output unit to
output sound based on an audio signal received from the other audio device via wireless
communication when the user is detected by the detection unit.
[0009]
According to this configuration, the audio device can output sound based on an audio signal
received from another audio device through wireless communication when the user is detected
in a predetermined detection area included in the wireless communication range of the audio
device.
Therefore, the output of sound can be switched from the other audio device to the audio device
according to the state of the user.
For example, in the case where the audio device outputs sound to the living room and the user is
in the room next to the living room, the other audio device can output the sound without the
audio device outputting it, even though the next room is within the wireless communication
range of the audio device.
That is, the audio device can automatically control sound output based on the audio signal
received from the other audio device via wireless communication.
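The gating rule described above can be condensed into a tiny decision function. This is an illustrative sketch, not the patent's implementation; the device names are placeholders:

```python
# Sketch of the basic rule: the audio device plays the received signal only
# while the user is inside the detection area, which is a subset of the
# device's wireless communication range.

def select_output(user_in_detection_area: bool) -> str:
    """Return which device should output the sound."""
    if user_in_detection_area:
        return "audio_device"        # e.g. the bedside speaker unit
    return "other_audio_device"      # e.g. the smartphone's own speaker
```

Being merely inside the wireless communication range is deliberately not a condition here; only presence in the smaller detection area triggers the switch.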
[0010]
For example, the other audio device is a portable device, and the control unit further determines
whether the user is holding the portable device in a hand. When the detection unit detects the
user and it is determined that the user is not holding the portable device, the control unit causes
the output unit to output sound based on the audio signal; when the detection unit does not
detect the user, or when it is determined that the user is holding the portable device, the control
unit does not cause the output unit to output sound based on the audio signal.
[0011]
According to this configuration, when it is determined that the user is holding the portable
device in a hand, the sound based on the audio signal is not output from the output unit of the
audio device.
When the user holds the portable device in a hand, it is expected that the user is operating the
portable device.
Therefore, it is possible to prevent an unintended sound resulting from the user's operation of
the portable device from being output from the audio device.
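The combined condition can be sketched as a single boolean check (an illustrative sketch under the assumptions above, not the claimed implementation):

```python
def should_output_on_audio_device(user_detected: bool,
                                  holding_by_hand: bool) -> bool:
    # Output moves to the audio device only when the user is in the
    # detection area AND is not holding the portable device in a hand.
    return user_detected and not holding_by_hand
```

Any other combination leaves the sound on the portable device itself.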
[0012]
For example, the detection unit further detects the movement of the user, and the control unit
further determines, based on the movement of the user detected by the detection unit, whether
the user has been stationary for a predetermined time or more, and reduces the volume of the
sound output from the output unit when the user has been stationary for the predetermined time or more.
[0013]
According to this configuration, when the user has been stationary for a predetermined time or
more, the volume of the sound output from the output unit can be reduced.
Therefore, the volume can be lowered while the user sleeps, and sound output can be
automatically and appropriately controlled based on the user's condition.
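A minimal sketch of this volume rule, assuming an invented 5-minute threshold and an arbitrary halving policy (the patent specifies neither):

```python
STATIONARY_THRESHOLD_S = 300  # hypothetical "predetermined time" (5 min)

def adjust_volume(current_volume: int, stationary_seconds: float) -> int:
    """Lower the volume once the user has been still long enough,
    e.g. because the user has fallen asleep."""
    if stationary_seconds >= STATIONARY_THRESHOLD_S:
        return max(0, current_volume // 2)  # halving is an illustrative choice
    return current_volume
```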
[0014]
For example, the other audio device is a mobile phone, and the audio device is further capable of
wirelessly communicating with the mobile phone using a protocol for realizing hands-free
calling. The detection unit further detects the movement of the user, and the control unit further
determines, based on the movement of the user detected by the detection unit, whether the user
has been stationary for a predetermined time or more. When the detection unit detects the user
and the user has not been stationary for the predetermined time or more, the control unit
permits wireless communication with the mobile phone using the protocol; when the detection
unit does not detect the user, or when the user has been stationary for the predetermined time
or more, the control unit prohibits wireless communication with the mobile phone using the
protocol.
[0015]
According to this configuration, when the user has been stationary for a predetermined time or
more, wireless communication with the mobile phone using the protocol for realizing hands-free
calling can be prohibited.
Therefore, it is possible to prevent sound based on a call signal from being output from the audio
device while the user sleeps. For example, it is possible to prevent a ringtone from being output
at a loud volume from the speaker of the audio device while the user is sleeping.
That is, sound output can be automatically and appropriately controlled based on the state of the
user.
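The permit/prohibit decision can be sketched as follows (an illustrative sketch; the threshold value is an invented placeholder, and "protocol" here stands for a hands-free calling profile such as Bluetooth HFP):

```python
def hands_free_allowed(user_detected: bool, stationary_seconds: float,
                       threshold_s: float = 300.0) -> bool:
    # Hands-free calling traffic is permitted only while the user is present
    # in the detection area and has NOT been stationary long enough to be
    # presumed asleep.
    return user_detected and stationary_seconds < threshold_s
```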
[0016]
For example, the other audio device is an audiovisual device that outputs video and sound, the
output unit has a first speaker and a second speaker arranged side by side, and the audio signal
includes a first channel signal and a second channel signal. When the detection unit detects the
user, the control unit causes the first speaker to output sound based on the second channel
signal and the second speaker to output sound based on the first channel signal; when the
detection unit does not detect the user, the control unit causes the first speaker to output sound
based on the first channel signal and the second speaker to output sound based on the second
channel signal.
[0017]
According to this configuration, the two speakers to which the sound based on the first channel
signal and the sound based on the second channel signal are respectively output can be switched
according to the result of detecting the user.
Therefore, when the user is present in the detection area, the sound of each channel can be
output from the speaker suited to the position of the user. That is, sound output can be
automatically and appropriately controlled based on the state of the user.
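The channel swap can be sketched as a small routing function. This is illustrative only; "ch1"/"ch2" are placeholder labels for the first and second channel signals, and the swap reflects the left/right mirroring when the user faces the audiovisual device from the bed:

```python
def route_channels(user_detected: bool) -> dict:
    """Map each speaker to the channel signal it should output."""
    if user_detected:
        # User is between the speakers and the audiovisual device, facing it,
        # so left and right are mirrored: swap the channels.
        return {"first_speaker": "ch2", "second_speaker": "ch1"}
    return {"first_speaker": "ch1", "second_speaker": "ch2"}
```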
[0018]
For example, the detection area is an area on a bed located between the audio device and the
audiovisual device, and the audio device is installed at the head side of the bed so as to face the
audiovisual device.
[0019]
According to this configuration, the user can be detected in the area on the bed located between
the audio device and the audiovisual device.
Therefore, the output of sound can be appropriately controlled in accordance with the positional
relationship among the user, the audio device, and the audiovisual device.
[0020]
For example, the bed is switchable between a flat state in which the mat is kept flat and a
reclining state in which the head side of the mat is raised. When the detection unit detects the
user and the bed is in the reclining state, the control unit causes the first speaker to output
sound based on the second channel signal and the second speaker to output sound based on the
first channel signal; when the detection unit does not detect the user, or when the bed is not in
the reclining state, the control unit causes the first speaker to output sound based on the first
channel signal and the second speaker to output sound based on the second channel signal.
[0021]
According to this configuration, the two speakers to which the sound based on the first channel
signal and the sound based on the second channel signal are respectively output can be switched
based on whether the bed is in the reclining state.
Even when the user is on the bed, the sound output suited to the user's condition differs between
sitting up in the reclined bed and lying on the flat bed. Therefore, by switching the two speakers
based on whether the bed is in the reclining state, the sound output can be controlled to better
match the user's condition.
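Adding the recline condition narrows the swap case from the earlier routing rule (again an illustrative sketch; "ch1"/"ch2" are placeholder channel labels):

```python
def route_channels_recline(user_detected: bool, reclining: bool) -> tuple:
    """Return (first_speaker_signal, second_speaker_signal)."""
    if user_detected and reclining:
        # User is sitting up in the reclined bed, facing the audiovisual
        # device: mirror left/right by swapping the channels.
        return ("ch2", "ch1")
    # No user, or bed flat: keep the normal channel assignment.
    return ("ch1", "ch2")
```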
[0022]
An audio system according to one aspect of the present invention includes a plurality of first
audio devices and a second audio device capable of wirelessly communicating with each of the
plurality of first audio devices. Each of the plurality of first audio devices includes: a detection
unit that detects a user in a predetermined detection area included in the wireless
communication range of the first audio device; an output unit that outputs sound; and a control
unit that causes the output unit to output sound based on an audio signal received from the
second audio device via wireless communication when the detection unit detects the user.
[0023]
According to this configuration, in the audio system including the plurality of first audio devices,
each first audio device can output sound based on an audio signal received from the second
audio device through wireless communication when the user is detected in a predetermined
detection area included in the wireless communication range of that first audio device.
That is, the audio system achieves the same effects as the above-described audio device.
[0024]
For example, the plurality of first audio devices include a narrow-area audio device that outputs
sound in a first area, and a wide-area audio device that outputs sound in a second area including
the first area. The detection unit of the narrow-area audio device detects the user in the first
area, and the detection unit of the wide-area audio device detects the user in the second area.
The control unit of the wide-area audio device causes the output unit of the wide-area audio
device to output sound based on the audio signal when the user is detected in the second area
and is not detected in the first area, and does not cause the output unit to output sound based on
the audio signal when the user is not detected in the second area or when the user is detected in
the first area.
[0025]
According to this configuration, the first area corresponding to the narrow-area audio device is
included in the second area corresponding to the wide-area audio device.
In such a case, when the user is detected in the first area, the wide-area audio device does not
output sound based on the audio signal. That is, even if the user is detected in the second area
corresponding to the wide-area audio device, the wide-area audio device does not output sound
while the user is detected in the first area corresponding to the narrow-area audio device.
Therefore, it is possible to prevent the wide-area audio device from redundantly outputting
sound while the narrow-area audio device is outputting sound to the user present in the first
area. That is, sound output can be automatically and appropriately controlled based on the state
of the user.
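The containment rule for the wide-area device reduces to one expression (an illustrative sketch of the condition stated above):

```python
def wide_device_outputs(user_in_second_area: bool,
                        user_in_first_area: bool) -> bool:
    # The wide-area device plays only when the user is somewhere in its
    # second area but NOT inside the contained first area, which the
    # narrow-area device already covers.
    return user_in_second_area and not user_in_first_area
```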
[0026]
For example, the plurality of first audio devices may include a first audio device that outputs
sound in a first area, and a second audio device that outputs sound in a second area that is
remote from the first area or partially overlaps the first area. The detection unit of the first audio
device detects the user in the first area, and the detection unit of the second audio device detects
the user in the second area. The control unit of the first audio device starts output of sound
based on the audio signal from the output unit of the first audio device when the user is detected
in the first area, and stops the output of sound based on the audio signal from the output unit of
the first audio device when the user is detected in the second area after having been detected in
the first area.
[0027]
According to this configuration, when the user is detected in the second area after being detected
in the first area, the output of the sound based on the audio signal from the first audio device can
be stopped.
Therefore, when the first area and the second area are separated, sound can continue to be
output from when the user leaves the first area until the user reaches the second area; that is,
the output of sound is not interrupted. When the first area and the second area partially overlap,
the sound output in the first area can be stopped when the user enters the overlapping region,
even while the user remains in the first area; that is, overlapping sound output can be prevented.
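The handover behavior can be modeled as a toy state machine (an illustrative sketch; real devices would exchange detection results over the radio link rather than share state):

```python
class HandoverController:
    """Toy model of the area-to-area handover: the first device keeps
    playing after the user leaves its area, and stops only once the
    second area detects the user."""

    def __init__(self):
        self.device1_playing = False

    def on_detect_area1(self):
        # Detection in area 1 starts output on the first device.
        self.device1_playing = True

    def on_detect_area2(self):
        # Stopping is triggered by detection in area 2, not by leaving
        # area 1, so playback is never interrupted in transit.
        self.device1_playing = False
```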
[0028]
The present invention can be realized not only as an audio device and an audio system provided
with such characteristic processing units, but also as a sound output method comprising the
processes executed by the characteristic processing units included in the audio device and the
audio system. The present invention can also be realized as a program for causing a computer to
function as the characteristic processing units included in the audio device, or as a program for
causing a computer to execute the characteristic steps included in the audio device control
method. It goes without saying that such a program can be distributed via a computer-readable
non-transitory recording medium such as a CD-ROM (Compact Disc Read-Only Memory) or via a
communication network such as the Internet.
[0029]
According to the audio device according to one aspect of the present invention, it is possible to
automatically and appropriately control the output of sound based on the audio signal received
from another audio device via wireless communication.
[0030]
FIG. 1 is a perspective view showing an installation example of an audio system according to Embodiment 1.
FIG. 2 is a perspective view showing an installation example of the audio system according to Embodiment 1.
FIG. 3 is a block diagram showing a functional configuration of the audio system according to Embodiment 1.
FIG. 4 is a flowchart showing processing of the first audio device according to Embodiment 1.
FIG. 5 is a flowchart showing processing of the second audio device according to Embodiment 1.
FIG. 6 is a flowchart showing processing of the first audio device according to Modification 1 of Embodiment 1.
FIG. 7 is a flowchart showing processing of the first audio device according to Modification 2 of Embodiment 1.
FIG. 8 is a block diagram showing a functional configuration of an audio system according to Embodiment 2.
FIG. 9 is a flowchart showing processing of the first audio device according to Embodiment 2.
FIG. 10 is a plan view showing an installation example of an audio system according to Embodiment 3.
FIG. 11 is a block diagram showing a functional configuration of the audio system according to Embodiment 3.
FIG. 12 is a flowchart showing processing of the first audio device according to Embodiment 3.
FIG. 13 is a plan view showing an installation example of an audio system according to Embodiment 4.
FIG. 14 is a block diagram showing a functional configuration of the audio system according to Embodiment 4.
FIG. 15 is a flowchart showing processing of the first audio device according to Embodiment 4.
FIG. 16 is a perspective view showing an installation example of an audio system according to Embodiment 5.
FIG. 17 is a block diagram showing a functional configuration of the audio system according to Embodiment 5.
FIG. 18 is a flowchart showing processing of the first audio device according to Embodiment 5.
[0031]
Embodiments will be specifically described below with reference to the drawings.
[0032]
The embodiments described below each show a comprehensive or specific example.
The numerical values, shapes, materials, components, arrangement positions and connection
forms of components, steps, order of steps, and the like shown in the following embodiments are
merely examples and are not intended to limit the present invention. Moreover, among the
components in the following embodiments, components not recited in the independent claims
are described as optional components.
[0033]
Embodiment 1 An audio system according to the present embodiment will be specifically
described with reference to the drawings.
[0034]
[Installation Example of Audio System] First, an installation example of the audio system 100 will
be described.
FIGS. 1 and 2 are perspective views showing an installation example of the audio system 100
according to the first embodiment.
[0035]
Audio system 100 comprises a first audio device 110 and a second audio device 120. The first
audio device 110 and the second audio device 120 can exchange data by wireless
communication. That is, the first audio device 110 can wirelessly communicate with the second
audio device 120.
[0036]
Here, the wireless communication is short distance wireless communication. For the short
distance wireless communication, for example, a wireless personal area network (PAN) such as
Bluetooth, a wireless local area network (LAN), or the like is used.
[0037]
The first audio device 110 is a device for outputting sound. Here, the first audio device 110 is
installed above the head side of the bed 10.
[0038]
The second audio device 120 is a device for outputting sound. Here, the second audio device 120
is a portable device carried by the user 20. Specifically, the second audio device 120 is, for
example, a portable music player, a smartphone, a tablet computer, or the like.
[0039]
In FIG. 1, the second audio device 120 is operated by the user 20 lying on the bed 10. In FIG. 2,
the second audio device 120 is operated by the user 20 standing near the bed 10.
[0040]
[Functional Configuration of Audio System] Next, the functional configuration of the audio
system 100 will be described. FIG. 3 is a block diagram showing a functional configuration of the
audio system 100 according to the first embodiment. Audio system 100 comprises a first audio
device 110 and a second audio device 120.
[0041]
First, the functional configuration of the first audio device 110 will be described.
[0042]
The first audio device 110 includes a detection unit 111, an output unit 112, a control unit 113,
and a communication unit 114.
[0043]
The detection unit 111 detects the user 20 in a predetermined detection area 30 included in the
wireless communication range of the first audio device 110.
That is, the detection unit 111 detects the presence or absence of the user 20 in the detection
area 30 which is a part of the wireless communication range of the first audio device 110.
[0044]
The wireless communication range of the first audio device 110 is an area in which wireless
communication with the first audio device 110 is possible.
For example, when Bluetooth is used, the wireless communication range extends about several
tens of meters around the first audio device 110. The wireless communication range changes
depending on the radio wave intensity and the presence or absence of obstacles.
[0045]
As shown in FIGS. 1 and 2, in the present embodiment, the detection area 30 is an area on the
bed 10. A wireless communication range (not shown) is, for example, an area including a room in
which the bed 10 is installed and a room adjacent to the room.
[0046]
Specifically, the detection unit 111 detects the user 20 in the detection area 30 by analyzing an
image (for example, a distance image) captured from above the bed 10, for example. Also, for
example, the detection unit 111 may detect the user 20 using an infrared sensor (for example, a
pyroelectric sensor, a heat sensor, etc.). Also, for example, the detection unit 111 may detect the
user 20 using a mat-like pressure sensor placed in the detection area 30. Further, for example,
the detection unit 111 may detect the user 20 using a Doppler sensor.
[0047]
The output unit 112 outputs sound. Here, as shown in FIGS. 1 and 2, the output unit 112
includes a first speaker 112a and a second speaker 112b which are arranged side by side above
the head side of the bed 10.
[0048]
The control unit 113 controls the output unit 112 and the communication unit 114 according to
the detection result in the detection unit 111. Specifically, when the detection unit 111 detects
the user 20, the control unit 113 causes the output unit 112 to output a sound based on an
audio signal received from the second audio device 120 via wireless communication. That is,
when the user 20 is on the bed 10, the control unit 113 switches the output destination of the
sound from the speaker of the second audio device 120 to the speaker of the first audio device
110.
[0049]
The communication unit 114 wirelessly communicates with the second audio device 120.
Specifically, the communication unit 114 is, for example, a Bluetooth communication adapter, a
wireless LAN adapter, or the like.
[0050]
Next, the functional configuration of the second audio device 120 will be described.
[0051]
The second audio device 120 includes an output unit 121, a control unit 122, and a
communication unit 123.
[0052]
The output unit 121 outputs a sound.
Specifically, the output unit 121 is, for example, a built-in speaker of the second audio device
120.
[0053]
The control unit 122 controls the output unit 121 and the communication unit 123.
Specifically, the control unit 122 controls the output unit 121 and the communication unit 123
based on the detection result of the detection unit 111 of the first audio device 110.
[0054]
For example, when the detection result indicating that the user 20 has been detected is received
from the first audio device 110, the control unit 122 causes the communication unit 123 to
transmit an audio signal to the first audio device 110. Furthermore, the control unit 122 causes
the output unit 121 to stop the output of the sound based on the audio signal.
[0055]
On the other hand, for example, when the detection result is not received from the first audio
device 110, the control unit 122 causes the output unit 121 to output a sound based on the
audio signal.
[0056]
The communication unit 123 wirelessly communicates with the first audio device 110.
Specifically, the communication unit 123 is, for example, a Bluetooth communication adapter, a
wireless LAN adapter, or the like. When the communication unit 123 is a Bluetooth
communication adapter, the communication unit 123 transmits an audio signal to the first audio
device 110 using, for example, A2DP (Advanced Audio Distribution Profile).
[0057]
[Operation of Audio System] Next, the operation of the audio system 100 configured as described
above will be described. Here, the process starts in a state in which the sound based on the audio
signal is being output from the second audio device 120.
[0058]
First, the operation of the first audio device 110 will be described. FIG. 4 is a flowchart showing
processing of the first audio device 110 according to the first embodiment.
[0059]
First, the control unit 113 determines whether the user 20 has been detected in the detection
area 30 by the detection unit 111 (S111). For example, in the state shown in FIG. 1, it is
determined that the user 20 is detected in the detection area 30. Further, for example, in the
state illustrated in FIG. 2, it is determined that the user 20 is not detected in the detection area
30.
[0060]
Here, when the user 20 is detected in the detection area 30 (Yes in S111), the communication
unit 114 receives an audio signal from the second audio device 120 (S112). For example, when
the wireless communication is Bluetooth, the communication unit 114 receives an audio signal
from the second audio device 120 according to A2DP. Also, for example, when the wireless
communication is a wireless LAN, the communication unit 114 may receive an audio signal from
the second audio device 120 according to a Digital Living Network Alliance (DLNA) guideline.
[0061]
Subsequently, the control unit 113 causes the output unit 112 to output a sound based on the
audio signal received from the second audio device 120 (S113). That is, the sound output from
the output unit 121 of the second audio device 120 is output from the output unit 112 of the
first audio device 110.
[0062]
On the other hand, when the user 20 is not detected in the detection area 30 (No in S111), the
control unit 113 does not cause the output unit 112 to output the sound based on the audio
signal (S114). That is, the output of the sound from the output unit 121 of the second audio
device 120 is continued, and the sound based on the audio signal is not output from the output
unit 112 of the first audio device 110.
[0063]
Next, the operation of the second audio device 120 will be described. FIG. 5 is a flowchart
showing processing of the second audio device 120 according to the first embodiment.
[0064]
When the user 20 is detected in the detection area 30 (Yes in S121), the communication unit
123 transmits an audio signal to the first audio device 110 (S122). Then, the control unit 122
causes the output unit 121 to stop the output of the sound based on the audio signal (S123).
That is, the output of the sound based on the audio signal switches from the second audio device
120 to the first audio device 110.
[0065]
On the other hand, when the user 20 is not detected in the detection area 30 (No in S121), the
control unit 122 causes the output unit 121 to continue outputting the sound based on the audio
signal (S124).
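The two flowcharts can be condensed into one pass of logic (a simplified, hypothetical sketch; in the actual system the detection result and the audio signal travel over the Bluetooth or wireless LAN link rather than through shared state):

```python
def simulate(user_in_area: bool) -> tuple:
    """One pass of the logic in FIGS. 4 and 5.

    Returns (first_device_output, second_device_output), where None
    means that device is silent.
    """
    if user_in_area:
        # S111 / S121: user detected on the bed.
        # S112 / S122: the audio signal is transferred over the radio link.
        # S113 / S123: output switches to the first audio device 110.
        return ("audio_signal", None)
    # S114 / S124: the second audio device 120 keeps playing;
    # the first audio device 110 stays silent.
    return (None, "audio_signal")
```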
[0066]
[Effects] As described above, according to the audio system 100 of the present embodiment,
when the user 20 is detected in the predetermined detection area 30 included in the wireless
communication range of the first audio device 110, the first audio device 110 can output sound
based on an audio signal received from the second audio device 120 via wireless communication.
Therefore, the output of sound can be switched from the second audio device 120 to the first
audio device 110 according to the state of the user 20. For example, when the first audio device
110 outputs sound to the living room and the user 20 is in a room next to the living room, the
second audio device 120 can output the sound without the first audio device 110 outputting it,
even if the next room is within the wireless communication range of the first audio device 110.
That is, the first audio device 110 can automatically control sound output based on the audio
signal received from the second audio device 120 via wireless communication.
[0067]
(Modification 1 of Embodiment 1) Next, Modification 1 of Embodiment 1 will be described. In
this modification, the second audio device is a portable device held in the user's hand. The
present modification differs from Embodiment 1 in that the output of sound is controlled based
on whether the user is holding the second audio device (portable device) in a hand.
[0068]
Hereinafter, an audio system 100 according to the present modification will be described
focusing on differences from the first embodiment.
[0069]
FIG. 6 is a flowchart showing processing of the first audio device 110 according to Modification 1
of Embodiment 1.
In FIG. 6, steps in which the same or similar processes as those in FIG. 4 are performed are
denoted by the same reference signs, and their description is omitted.
[0070]
When the user 20 is detected in the detection area 30 (Yes in S111), the control unit 113
determines whether the user 20 is holding the second audio device 120 in a hand (S115).
Specifically, the control unit 113 makes this determination based on, for example, movement
information obtained from the second audio device 120.
[0071]
For example, the control unit 113 receives, as the movement information, an output signal of a
motion sensor (for example, an acceleration sensor or a gyro sensor) built into the second audio
device 120, via the communication unit 114. The control unit 113 then determines whether the
user 20 is holding the second audio device 120 in a hand by determining whether the movement
of the second audio device 120 satisfies a movement condition for being held in a hand.
[0072]
The movement condition for determining that the device is held in a hand is, for example, that
minute movements continue for a predetermined time or more. This condition may be
determined in advance, for example, empirically or experimentally.
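A heuristic along these lines can be sketched as follows. All thresholds below are invented for illustration; the patent only says the condition is sustained minute movement over a predetermined time:

```python
def held_by_hand(motion_magnitudes, duration_s,
                 min_duration_s=2.0, lo=0.02, hi=0.5):
    """Guess whether the device is held in a hand from motion-sensor samples.

    Sustained *minute* motion (small but nonzero readings) over a long
    enough window suggests the device is in a hand rather than resting
    on a surface (no motion) or being carried around (large motion).
    """
    if duration_s < min_duration_s:
        return False
    return all(lo < m < hi for m in motion_magnitudes)
```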
[0073]
Here, when it is determined that the user 20 is holding the second audio device 120 in a hand
(Yes in S115), the control unit 113 does not cause the output unit 112 to output the sound based
on the audio signal (S114). That is, the output of the sound from the output unit 121 of the
second audio device 120 continues, and the sound based on the audio signal is not output from
the output unit 112 of the first audio device 110.
[0074]
On the other hand, when it is determined that the user 20 does not hold the second audio device
120 by hand (No in S115), the communication unit 114 receives an audio signal from the second
audio device 120 (S112). Subsequently, the control unit 113 causes the output unit 112 to
output a sound based on the audio signal received from the second audio device 120 (S113).
[0075]
As described above, according to the audio system 100 of the present modification, when it is
determined that the user 20 holds the second audio device 120 by hand, the sound based on the
audio signal is not output from the output unit 112 of the first audio device 110. When the user
20 holds the second audio device 120 by hand, the user 20 is likely operating the second audio
device 120. Therefore, the operation of the user 20 on the second audio device 120 can be
prevented from causing an inappropriate sound to be output from the first audio device 110.
[0076]
(Modification 2 of Embodiment 1) Next, Modification 2 of Embodiment 1 will be described. The
present modification differs from the first embodiment in that the volume of the first audio
device is controlled based on whether or not the user has been stationary for a predetermined
time or more.
[0077]
Hereinafter, an audio system 100 according to the present modification will be described
focusing on differences from the first embodiment.
[0078]
FIG. 7 is a flowchart showing processing of the first audio device 110 according to the second
modification of the first embodiment.
In FIG. 7, steps in which the same or similar processes as those in FIG. 4 are performed are
denoted by the same reference numerals, and the description thereof will be appropriately
omitted. Here, the motion of the user 20 is detected by the detection unit 111 of the first audio
device 110.
[0079]
When a sound based on an audio signal is being output from the output unit 112 of the first
audio device 110, the control unit 113 determines, based on the movement of the user 20 detected
by the detection unit 111, whether the user 20 has been stationary for a predetermined time or
more (S116). Specifically, the control unit 113 determines that the user 20 has been stationary
for the predetermined time or more when, for example, the total amount of movement of the user 20
within that time is less than a threshold amount.
[0080]
The predetermined time is a length of time from which it can be estimated that the user 20 is
sleeping. That is, by determining whether the user 20 has been stationary for the predetermined
time or more, the control unit 113 estimates whether the user 20 is sleeping.
[0081]
Here, when the user 20 has been stationary for the predetermined time or more (Yes in S116), the
control unit 113 reduces the volume of the sound output from the output unit 112 (S117). That is,
when the user 20 is estimated to be sleeping, the volume of the sound output from the output
unit 112 is reduced.
[0082]
On the other hand, when the user 20 has not been stationary for the predetermined time or more
(No in S116), the process ends. That is, the control unit 113 maintains the volume of the sound
output from the output unit 112.
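Steps S116 and S117 can be sketched as follows. The movement threshold and the attenuation factor are assumptions chosen for illustration, as are the function names:

```python
MOVEMENT_THRESHOLD = 5.0   # assumed threshold amount for total movement
SLEEP_VOLUME_FACTOR = 0.3  # assumed attenuation applied while sleeping

def is_stationary(movement_samples):
    """S116: the user is treated as stationary (estimated to be sleeping)
    when the total movement within the window is below the threshold."""
    return sum(movement_samples) < MOVEMENT_THRESHOLD

def adjust_volume(current_volume, movement_samples):
    """S117: reduce the volume if the user is estimated to be sleeping;
    otherwise keep the current volume unchanged."""
    if is_stationary(movement_samples):
        return current_volume * SLEEP_VOLUME_FACTOR
    return current_volume
```

With small movement samples the volume is ducked; with large movement it is left as-is, matching the Yes/No branches of S116.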
[0083]
As described above, according to the audio system 100 of the present modification, the volume of
the sound output from the output unit 112 can be reduced when the user 20 has been stationary
for a predetermined time or more. Therefore, the volume can be reduced while the user 20 sleeps,
and the output of sound can be appropriately and automatically controlled based on the state of
the user 20.
[0084]
Second Embodiment The second embodiment will be described next. The present embodiment is
different from the first embodiment in that the connection for the hands-free call is controlled
based on whether or not the mobile terminal is stationary for a predetermined time or more.
[0085]
Hereinafter, an audio system according to the present embodiment will be described focusing on
differences from the first embodiment.
[0086]
[Functional Configuration of Audio System] First, the functional configuration of the audio system
200 will be described.
FIG. 8 is a block diagram showing a functional configuration of the audio system 200 according
to the second embodiment. In FIG. 8, blocks having the same or similar functions as those in
FIG. 3 are assigned the same reference numerals, and their descriptions will be omitted as appropriate.
[0087]
The audio system 200 includes a first audio device 210 and a second audio device 220. Here, the
second audio device 220 is a mobile phone.
[0088]
The first audio device 210 can wirelessly communicate with the second audio device (mobile
phone) 220 using a protocol for realizing hands-free calling. Hereinafter, wireless communication
using a protocol for realizing hands-free calling will be simply referred to as “hands-free
communication”.
[0089]
The protocol for realizing hands-free calling is a wireless communication protocol for realizing
calling with a device instead of a mobile phone without holding the mobile phone in hand. In the
present embodiment, the first audio device 210 outputs the voice of the other party received by
the second audio device 220 using the protocol. Furthermore, the first audio device 210
transmits the voice signal of the user 20 to the second audio device 220 using the protocol.
[0090]
Specifically, a protocol for realizing hands-free calling is, for example, HFP (Hands-Free Profile) in
Bluetooth. Also, for example, the protocol for realizing hands-free calling may be HSP (Headset
Profile) in Bluetooth. The protocol for realizing hands-free calling need not be a Bluetooth
profile; for example, it may be a non-standardized proprietary protocol.
[0091]
As shown in FIG. 8, the first audio device 210 includes a detection unit 211, an output unit 112,
a control unit 213, a communication unit 214, and an input unit 215.
[0092]
The detection unit 211 detects the movement of the user 20 in addition to the presence or
absence of the user 20 in the detection area 30.
Specifically, the detection unit 211 is, for example, a distance image sensor.
[0093]
The input unit 215 receives an input of voice from the user 20. Specifically, the input unit 215 is
a microphone. The input unit 215 converts the voice of the user into an electrical signal.
[0094]
The communication unit 214 communicates with the second audio device 220 wirelessly.
Specifically, the communication unit 214 is, for example, a Bluetooth communication adapter, a
wireless LAN adapter, or the like.
[0095]
As in the second modification of the first embodiment, the control unit 213 determines whether
the user 20 is stationary for a predetermined time or more based on the movement of the user
20 detected by the detection unit 211.
[0096]
Here, when the user 20 is detected in the detection area 30 and the user 20 has not been
stationary for a predetermined time or more, the control unit 213 permits hands-free
communication with the second audio device 220 (mobile phone).
That is, the user 20 talks with the other party via the first audio device 210.
[0097]
On the other hand, when the user 20 is not detected in the detection area 30, or when the user
20 has been stationary for the predetermined time or more, the control unit 213 prohibits
hands-free communication with the second audio device 220 (mobile phone). That is, the user 20
talks with the other party via the second audio device 220.
[0098]
The second audio device 220 includes an output unit 121, a control unit 222, a communication
unit 223, an input unit 224, and a call unit 225.
[0099]
The communication unit 223 wirelessly communicates with the first audio device 210.
Specifically, the communication unit 223 is, for example, a Bluetooth communication adapter, a
wireless LAN adapter, or the like. When the communication unit 223 is a Bluetooth
communication adapter, the communication unit 223 transmits an audio signal to the first audio
device 210 using, for example, A2DP, and communicates a call signal with the first audio device
210 using HSP.
[0100]
The input unit 224 receives an input of voice from the user 20. Specifically, the input unit 224 is
a microphone. The input unit 224 converts the voice of the user into an electrical signal.
[0101]
The call unit 225 communicates with a communication apparatus (for example, a mobile phone)
of the other party via a wide area wireless communication network. That is, the call unit 225
transmits the voice signal of the user 20 to the communication device of the communication
partner, and receives the voice signal of the communication partner from the communication
device of the communication partner.
[0102]
The control unit 222 selectively sets an audio mode for audio reproduction and a call mode for
calling. In the audio mode, the control unit 222 controls the output unit 121 and the
communication unit 223 as in the control unit 122 of the first embodiment.
[0103]
In the call mode, the control unit 222 controls the input unit 224, the output unit 121, the
communication unit 223, and the call unit 225 based on the detection result of the detection unit
211 of the first audio device 210.
[0104]
For example, when a detection result indicating that the user 20 has not been stationary for the
predetermined time or more in the detection area 30 is received from the first audio device 210,
the control unit 222 causes the communication unit 223 to execute hands-free communication with
the first audio device 210.
That is, the control unit 222 causes the communication unit 223 to receive an audio signal of the
user 20 from the first audio device 210. Then, the control unit 222 causes the call unit 225 to
transmit the voice signal of the user 20 received from the first audio device 210 to the
communication device of the other party. Further, the control unit 222 causes the
communication unit 223 to transmit the voice signal of the other party received by the call unit
225 to the first audio device 210.
[0105]
On the other hand, when the detection result is not received from the first audio device 210, the
control unit 222 acquires the voice signal of the user 20 from the input unit 224 and transmits
the voice signal to the communication device of the other party via the call unit 225.
Furthermore, the control unit 222 causes the output unit 121 to output the voice of the other
party received via the call unit 225.
[0106]
[Operation of Audio System] Next, the operation of the audio system 200 configured as described
above will be described. FIG. 9 is a flowchart showing processing of the first audio device 210
according to the second embodiment. In FIG. 9, steps in which the same or similar processes as
those in FIG. 4 are performed are denoted by the same reference numerals, and the description
will be appropriately omitted.
[0107]
When the second audio device 220 is in the audio mode (audio mode in S211), the
communication unit 214 receives an audio signal from the second audio device 220 (S112).
Subsequently, the control unit 213 causes the output unit 112 to output a sound based on the
audio signal received from the second audio device 220 (S113).
[0108]
On the other hand, when the second audio device 220 is in the call mode (call mode in S211), the
control unit 213 determines, based on the movement of the user 20 detected by the detection unit
211, whether the user 20 has been stationary for a predetermined time or more, as in step S116 of
FIG. 7 (S212).
[0109]
If the user 20 has not been stationary for the predetermined time or more (No in S212), the
control unit 213 permits hands-free communication with the second audio device 220 (S213).
On the other hand, when the user 20 has been stationary for the predetermined time or more (Yes
in S212), the control unit 213 prohibits hands-free communication with the second audio device
220 (S214).
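The mode branch of FIG. 9 can be condensed into a small decision function. The mode names and return labels below are assumptions for illustration:

```python
from enum import Enum

class Mode(Enum):
    AUDIO = "audio"  # second audio device reproducing audio
    CALL = "call"    # second audio device handling a call

def decide_action(mode, user_detected, user_stationary):
    """Sketch of the FIG. 9 flow: in the audio mode the received audio
    signal is output (S112/S113); in the call mode hands-free communication
    is permitted only while the user is detected and not stationary
    (S212-S214)."""
    if mode is Mode.AUDIO:
        return "output_received_audio"
    if user_detected and not user_stationary:
        return "permit_hands_free"    # S213: call via the first audio device
    return "prohibit_hands_free"      # S214: call stays on the mobile phone
```

The same function covers both prohibition cases of paragraph [0097]: the user absent from the detection area, or stationary long enough to be estimated asleep.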
[0110]
[Effects] As described above, according to the audio system 200 of the present embodiment,
hands-free communication can be prohibited when the user 20 has been stationary for a
predetermined time or more. Therefore, the sound based on the call signal can be prevented from
being output from the first audio device 210 while the user 20 is sleeping. For example, the
ringtone can be prevented from being output at a high volume from the speakers 121a and 121b of
the first audio device 210 while the user 20 is sleeping. That is, the output of sound can be
appropriately and automatically controlled based on the state of the user 20.
[0111]
Third Embodiment Next, the third embodiment will be described. The present embodiment differs
from the first embodiment in that the audio system includes a plurality of first audio devices, and
the detection range of one first audio device includes the detection range of another first audio
device.
[0112]
Hereinafter, an audio system according to the present embodiment will be described focusing on
differences from the first embodiment.
[0113]
[Installation Example of Audio System] First, an installation example of the audio system 300
according to the present embodiment will be described.
FIG. 10 is a plan view showing an installation example of the audio system 300 according to the
third embodiment.
[0114]
The audio system 300 includes a plurality of first audio devices 310a, 310b and a second audio
device 120 (not shown in FIG. 10) capable of wirelessly communicating with each of the plurality
of first audio devices 310a, 310b.
[0115]
The first audio device (short-range audio device) 310a is a device for outputting sound to the
first area 31a.
Here, the first area 31a is an area of a living room.
[0116]
The first audio device (wide area audio device) 310b is a device for outputting sound to the
second area 31b. The second area 31 b includes the first area 31 a. That is, the first area 31a is a
part of the second area 31b. Here, the second area 31 b is an area including all indoor rooms.
That is, the first audio device 310b is a multi-room audio device.
[0117]
[Functional Configuration of Audio System] Next, the functional configuration of the audio
system 300 will be described. FIG. 11 is a block diagram showing a functional configuration of
the audio system 300 according to the third embodiment. In FIG. 11, blocks having the same or
similar functions as those in FIG. 3 are assigned the same reference numerals, and their
descriptions will be omitted as appropriate.
[0118]
The first audio device (short-range audio device) 310 a includes a detection unit 111, an output
unit 112, a control unit 113, and a communication unit 114. The detection unit 111 of the first
audio device 310a detects the user 20 in the first area 31a.
[0119]
The first audio device (wide area audio device) 310 b includes a detection unit 111, an output
unit 112, a control unit 313, and a communication unit 114. The detection unit 111 of the first
audio device 310b detects the user 20 in the second area 31b.
[0120]
When the user 20 is detected in the second area 31b and the user 20 is not detected in the first
area 31a, the control unit 313 causes the output unit 112 of the first audio device 310b to
output a sound based on the audio signal received from the second audio device 120.
[0121]
In addition, when the user 20 is not detected in the second area 31 b or when the user 20 is
detected in the first area 31 a, the control unit 313 does not cause the output unit 112 of the
first audio device 310 b to output a sound.
That is, even if the user 20 is detected in the second area 31b, the first audio device 310b does
not output a sound based on the audio signal from the second audio device 120 if the user 20 is
also detected in the first area 31a.
[0122]
[Operation of Audio System] Next, the operation of the audio system 300 configured as described
above will be described. Here, the operation of the first audio device (wide area audio device)
310b will be described in detail.
[0123]
FIG. 12 is a flowchart showing processing of the first audio device 310b according to the third
embodiment. In FIG. 12, steps in which the same or similar processes as those in FIG. 4 are
performed are denoted by the same reference numerals, and the description thereof will be
appropriately omitted.
[0124]
The control unit 313 determines whether the user 20 is detected in the second area 31b by the
detection unit 111 of the first audio device (wide area audio device) 310b (S311). Here, when
the user 20 is detected in the second area 31b (Yes in S311), the control unit 313 determines
whether the user 20 is detected in the first area 31a by the detection unit 111 of the other
first audio device (short-range audio device) 310a (S312). This determination is performed, for
example, based on the detection result received from the first audio device 310a. Here, when the
user 20 is not detected in the first area 31a (No in S312), the communication unit 114 receives
an audio signal from the second audio device 120 (S112). Subsequently, the control unit 313
causes the output unit 112 to output a sound based on the audio signal received from the second
audio device 120 (S113).
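The nested-area decision of S311 and S312 amounts to a single boolean condition. A minimal sketch (the function name is an assumption for illustration):

```python
def wide_area_should_output(in_second_area, in_first_area):
    """FIG. 12 sketch: the wide area audio device 310b outputs sound only
    when the user is detected in the second area (S311) and is NOT also
    detected in the nested first area covered by the short-range audio
    device 310a (S312)."""
    return in_second_area and not in_first_area
```

Since the first area is contained in the second, the only combination that triggers output from the wide area device is "in the second area but outside the first area."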
[0125]
On the other hand, when the user 20 is not detected in the second area 31b (No in S311), or when
the user 20 is detected in the first area 31a (Yes in S312), the control unit 313 does not cause
the output unit 112 to output a sound based on the audio signal (S114).
[0126]
[Effects] As described above, according to the audio system 300 of the present embodiment, the
first area 31a corresponding to the first audio device (short-range audio device) 310a is
included in the second area 31b corresponding to the first audio device (wide area audio device)
310b.
In such a case, when the user 20 is detected in the first area 31a, the wide area audio device
does not output a sound based on the audio signal. That is, even when the user 20 is detected in
the second area 31b corresponding to the wide area audio device, the wide area audio device does
not output sound when the user 20 is also detected in the first area 31a corresponding to the
short-range audio device. Therefore, the wide area audio device can be prevented from
redundantly outputting sound while the short-range audio device is outputting sound to the user
present in the first area 31a. That is, the output of sound can be appropriately and
automatically controlled based on the state of the user.
[0127]
Fourth Embodiment Next, the fourth embodiment will be described. The present embodiment
differs from the third embodiment in that the first region is separated from the second region.
Hereinafter, the audio system according to the present embodiment will be described focusing on
differences from the first and third embodiments.
[0128]
[Installation Example of Audio System] First, an installation example of the audio system 400
according to the present embodiment will be described. FIG. 13 is a plan view showing an
installation example of the audio system 400 according to the fourth embodiment.
[0129]
The audio system 400 includes a plurality of first audio devices 410a, 410b and a second audio
device 120 (not shown in FIG. 13) capable of wirelessly communicating with each of the plurality
of first audio devices 410a, 410b.
[0130]
The first audio device 410a is a device for outputting sound to the first area 32a.
Here, the first area 32a is an area of a living room.
[0131]
The first audio device 410b is a device for outputting sound to the second area 32b. The second
area 32b is separated from the first area 32a. Here, the second area 32 b is an area of the first
bedroom.
[0132]
In FIG. 13, paths 41 to 44 are an example of the movement path of the user 20. The operation of
the first audio device 410a when the user 20 moves along the paths 41 to 44 will be described
later with reference to FIG.
[0133]
[Functional Configuration of Audio System] Next, the functional configuration of the audio
system 400 will be described. FIG. 14 is a block diagram showing a functional configuration of
the audio system 400 according to the fourth embodiment. In FIG. 14, blocks having the same or
similar functions as those in FIG. 3 are assigned the same reference numerals, and their
descriptions will be omitted as appropriate.
[0134]
Each of the plurality of first audio devices 410 a and 410 b includes a detection unit 111, an
output unit 112, a control unit 413, and a communication unit 114. The detection unit 111 of
the first audio device 410a detects the user 20 in the first area 32a. In addition, the detection
unit 111 of the first audio device 410b detects the user 20 in the second area 32b.
[0135]
When the user 20 is detected in the first area 32a, the control unit 413 of the first audio
device 410a causes the output unit 112 of the first audio device 410a to start outputting a
sound based on the audio signal from the second audio device 120. Thereafter, when the user 20 is
detected in the second area 32b, the control unit 413 of the first audio device 410a causes the
output unit 112 of the first audio device 410a to stop outputting the sound based on the audio
signal from the second audio device 120.
[0136]
Similarly, when the user 20 is detected in the second area 32b, the control unit 413 of the first
audio device 410b causes the output unit 112 of the first audio device 410b to start outputting
a sound based on the audio signal from the second audio device 120. Thereafter, when the user 20
is detected in the first area 32a, the control unit 413 of the first audio device 410b causes the
output unit 112 of the first audio device 410b to stop outputting the sound based on the audio
signal from the second audio device 120.
[0137]
[Operation of Audio System] Next, the operation of the audio system 400 configured as described
above will be described. FIG. 15 is a flowchart showing processing of the first audio device 410a
according to the fourth embodiment. Specifically, FIG. 15 shows the process of the first audio
device 410a when the user 20 moves along the paths 41 to 44 shown in FIG. 13. In FIG. 15, steps
in which the same or similar processes as those in FIG. 4 are performed are denoted by the same
reference numerals, and their description will be omitted as appropriate.
[0138]
When the user 20 is detected in the first area 32a (Yes in S111), the communication unit 114
receives an audio signal from the second audio device 120 (S112). Then, the control unit 413
causes the output unit 112 to start output of sound based on the received audio signal (S411).
For example, when the user 20 enters the living room through the path 41 (that is, when the user
20 reaches the start point of the path 42), the output of sound to the first area 32a is started.
[0139]
Subsequently, the control unit 413 determines whether the user 20 is detected by the other first
audio device 410b (S412). That is, the control unit 413 determines whether the user 20 is
detected in the second area 32b.
[0140]
Here, when the user 20 is not detected in the second area 32b (No in S412), step S412 is
repeated. That is, when the user 20 exists on the paths 42 and 43, the output of the sound in the
first area 32a is continued.
[0141]
On the other hand, when the user 20 is detected in the second area 32b (Yes in S412), the output
unit 112 stops the output of sound based on the audio signal (S413). That is, when the user 20
enters the first bedroom (that is, when the user 20 reaches the start point of the path 44), the
output of sound in the first area 32a is stopped. Meanwhile, in the second area 32b, the output
of sound based on the audio signal is started by the first audio device 410b.
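The handover between the two separated areas can be sketched with a pair of device objects. The class and method names here are assumptions for illustration:

```python
class FirstAudioDevice:
    """Sketch of a first audio device (410a/410b) taking part in handover."""

    def __init__(self, name):
        self.name = name
        self.playing = False

    def user_entered_own_area(self):
        # S111/S411: start output when the user enters this device's area.
        self.playing = True

    def user_detected_by_other_device(self):
        # S412/S413: stop output when the other device reports the user.
        self.playing = False

def move_user_to(devices, target):
    """Simulate the user arriving in the area of `target`."""
    for device in devices:
        if device is target:
            device.user_entered_own_area()
        else:
            device.user_detected_by_other_device()
```

Moving the user from the living room device to the bedroom device flips which one is playing; between the two events (paths 42 and 43) neither detection fires, so the living room device keeps playing and the output is not interrupted.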
[0142]
[Effects] As described above, according to the audio system 400 of the present embodiment, when
the user 20 is detected in the second area 32b after being detected in the first area 32a, the
output of sound based on the audio signal from the second audio device 120 can be stopped.
Therefore, when the first area 32a and the second area 32b are separated, sound can continue to
be output from when the user 20 leaves the first area 32a until the user 20 reaches the second
area 32b. That is, the output of sound can be prevented from being interrupted.
[0143]
Fifth Embodiment The fifth embodiment will be described next. The present embodiment is
different from the first embodiment in that the second audio apparatus also outputs an image in
addition to the sound. Hereinafter, an audio system according to the present embodiment will be
described focusing on differences from the first embodiment.
[0144]
[Installation Example of Audio System] First, an installation example of the audio system 500
according to the present embodiment will be described. FIG. 16 is a perspective view showing an
installation example of the audio system 500 according to the fifth embodiment.
[0145]
The audio system 500 comprises a first audio device 510 and a second audio device 520 capable
of wireless communication with the first audio device 510.
[0146]
The first audio device 510 is installed on the head side of the bed 11 so as to face the second
audio device 520.
The output unit 112 of the first audio device 510 includes a first speaker 112a and a second
speaker 112b arranged side by side on the head side of the bed 11. Here, the first speaker 112a
is a left channel speaker, and the second speaker 112b is a right channel speaker.
[0147]
The second audio device 520 is an audio visual device for outputting video and sound.
Specifically, the second audio device 520 is, for example, a television receiver. The second audio
device 520 is installed on the foot side of the bed 11 so as to face the first audio device 510.
[0148]
The detection area 30 is an area on the bed 11 installed between the first audio device 510 and
the second audio device 520.
[0149]
The bed 11 is switched between a flat state in which the mat is kept flat and a reclining state in
which the head side of the mat is lifted.
FIG. 16 shows the bed 11 in the reclining state. The bed 11 is, for example, an electric bed.
[0150]
[Functional Configuration of Audio System] Next, the functional configuration of the audio
system 500 will be described. FIG. 17 is a block diagram showing a functional configuration of
the audio system 500 according to the fifth embodiment. In FIG. 17, the blocks having the same
or similar functions as those in FIG. 3 are denoted by the same reference numerals, and the
description will be appropriately omitted.
[0151]
The first audio device 510 includes a detection unit 111, an output unit 112, a control unit 513,
and a communication unit 114.
[0152]
When the detection unit 111 detects the user 20 and the bed 11 is in the reclining state, the
control unit 513 causes the first speaker 112a to output a sound based on the second channel
signal, and causes the second speaker 112b to output a sound based on the first channel signal.
The first channel signal and the second channel signal are included in the audio signal received
from the second audio device 520. Here, the first channel signal is a left channel signal, and
the second channel signal is a right channel signal.
[0153]
On the other hand, when the detection unit 111 does not detect the user 20 or when the bed 11 is
in the flat state, the control unit 513 causes the first speaker 112a to output a sound based on
the first channel signal, and causes the second speaker 112b to output a sound based on the
second channel signal.
[0154]
The second audio device 520 includes an output unit 521, a control unit 522, a communication
unit 523 and a display unit 524.
[0155]
The output unit 521 outputs a sound in the same manner as the output unit 121 in the first
embodiment.
Specifically, the output unit 521 outputs, for example, a sound based on an audio signal in
synchronization with the video displayed on the display unit 524.
[0156]
The control unit 522 controls the output unit 521, the communication unit 523 and the display
unit 524.
Specifically, the control unit 522 controls the output unit 521 and the communication unit 523
based on the detection result of the detection unit 111 of the first audio device 510 and the state
of the bed 11. Specifically, when the sound based on the audio signal is output from the first
audio device 510, the control unit 522 stops the output of the sound from the output unit 521.
[0157]
The communication unit 523 wirelessly communicates with the first audio device 510 in the
same manner as the communication unit 123 in the first embodiment.
[0158]
The display unit 524 displays an image.
The display unit 524 is, for example, a liquid crystal panel.
[0159]
[Operation of Audio System] Next, the operation of the audio system 500 configured as described
above will be described. FIG. 18 is a flowchart showing processing of the first audio device 510
according to the fifth embodiment. In FIG. 18, steps in which the same or similar processes as
those in FIG. 4 are performed are denoted by the same reference numerals, and their description
will be omitted as appropriate.
[0160]
The control unit 513 determines whether the bed 11 is in the reclining state (S511). Specifically,
the control unit 513 makes this determination by receiving, for example, a signal indicating
whether the bed 11 is in the reclining state or in the flat state.
[0161]
Here, when the bed 11 is in the reclining state (Yes in S511), the control unit 513 causes the
output unit 112 to output the sound based on the audio signal with the left and right channels
swapped (S512). That is, the control unit 513 causes the second speaker 112b to output a sound
based on the first channel signal, and causes the first speaker 112a to output a sound based on
the second channel signal.
[0162]
On the other hand, when the bed 11 is not in the reclining state (No in S511), the control unit
513 causes the output unit 112 to output the sound based on the audio signal without swapping
the left and right channels (S513). That is, the control unit 513 causes the first speaker 112a
to output a sound based on the first channel signal, and causes the second speaker 112b to output
a sound based on the second channel signal.
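The channel routing of S511 to S513 can be sketched as a pure function (the signature is an assumption for illustration):

```python
def route_channels(user_detected, reclining, left_signal, right_signal):
    """Return the signals routed to (first speaker 112a, second speaker 112b).
    The left and right channels are swapped only while the user is detected
    and the bed is in the reclining state (S512); otherwise the default
    assignment is kept (S513)."""
    if user_detected and reclining:
        return (right_signal, left_signal)  # S512: swap left and right
    return (left_signal, right_signal)      # S513: keep the assignment
```

One way to read the swap: when the bed reclines, the user sits up facing the second audio device 520, which mirrors the user's left and right relative to the speakers at the head of the bed.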
[0163]
[Effects] As described above, according to the audio system 500 of the present embodiment, the
two speakers (the first speaker 112a and the second speaker 112b) from which the sound based on
the first channel signal and the sound based on the second channel signal are output can be
switched according to the detection result of the user 20. Therefore, when the user 20 is present
in the detection area 30, the sound of each of the plurality of channels can be output from the
speaker suitable for the position of the user 20. That is, the output of sound can be
appropriately and automatically controlled based on the state of the user 20.
[0164]
Moreover, according to the audio system 500 of the present embodiment, the user 20 can be
detected in the area on the bed 11 installed between the first audio device 510 and the second
audio device 520. Therefore, the output of sound can be appropriately controlled according to
the positional relationship between the user 20 and the first and second audio devices 510 and
520.
[0165]
Further, according to the audio system 500 of the present embodiment, the two speakers from
which the sound based on the first channel signal and the sound based on the second channel
signal are output can be switched based on whether the bed 11 is in the reclining state. Even
when the user 20 is present on the bed 11, the sound output suitable for the state of the user 20
differs between the case where the user 20 is sitting on the bed 11 in the reclining state and
the case where the user 20 is lying on the bed 11 in the flat state. Therefore, by switching the
two speakers from which the sound based on the first channel signal and the sound based on the
second channel signal are output based on whether the bed 11 is in the reclining state, control
of the sound output more suitable for the state of the user 20 becomes possible.
[0166]
[Other Embodiments] Although the audio apparatus according to the embodiments of the present invention has been described above, the present invention is not limited to these embodiments. Without departing from the spirit of the present invention, embodiments obtained by applying various modifications conceivable by a person skilled in the art to the present embodiments, and embodiments configured by combining components of the different embodiments, are also included in the scope of the present invention.
[0167]
For example, in each of the above embodiments, the output unit of the first audio device has two
speakers, but the number of speakers is not limited to two. For example, the output unit may
have three or more speakers. In the first to fourth embodiments, the output unit may have only
one speaker.
[0168]
In the first modification of the first embodiment, the determination as to whether or not the
second audio device is held by hand is performed based on the output signal of the motion
sensor, but the present invention is not limited to this. For example, whether or not the second
audio device is held by hand may be determined by analyzing an image obtained by capturing
the detection area.
[0169]
In the above embodiments, the output of sound is automatically controlled based on the detection result of the user in the detection area. However, when an instruction from the user is accepted, the output of sound may be controlled based on the instruction regardless of the result of user detection in the detection area. For example, in the first embodiment, even if the user is not detected in the detection area, the sound based on the audio signal may be output from the first audio device based on an instruction from the user.
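The priority of an explicit user instruction over detection-based control can be sketched as follows. The function and parameter names (`should_output_sound`, `instruction`) are illustrative assumptions, not from the patent text.

```python
# A sketch of letting an explicit user instruction override the automatic
# detection-based control. Names are illustrative, not from the patent.

def should_output_sound(user_detected, instruction=None):
    """instruction: None (automatic control), 'play', or 'stop'."""
    if instruction == "play":
        return True        # instruction wins even if no user is detected
    if instruction == "stop":
        return False       # instruction wins even if a user is detected
    return user_detected   # fall back to detection-based control

print(should_output_sound(False, "play"))  # -> True
print(should_output_sound(True))           # -> True
```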
[0170]
In the first and second embodiments, the output of sound from the second audio device is stopped when sound is output from the first audio device. However, sound may be output from both the first audio device and the second audio device.
[0171]
In the fifth embodiment, the sound output is controlled based on whether the bed is in the
reclining state, but the present invention is not limited to this.
That is, regardless of the state of the bed, when the user is detected in the detection area, the left
and right channels may be switched to output a sound.
[0172]
In the fifth embodiment, when sound is output from the first audio device, the output of sound from the second audio device is stopped. However, the output of sound from the second audio device does not have to be stopped. For example, the sound of the front channel may be output from the first audio device, and the sound of the rear channel may be output from the second audio device.
[0173]
In the fifth embodiment, the volume of the sound output from the first audio device may be controlled based on the position of the head of the user. For example, the control unit may reduce the volume if the user's head is located in a region near the output unit of the first audio device within the detection area. In this case, the detection unit of the first audio device detects the position of the user's head in the detection area.
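The head-position-based volume control described above can be sketched as a simple rule. The 0.5 m threshold and the halving factor are illustrative assumptions only; the patent does not specify concrete values.

```python
# A sketch of head-position-based volume control. The threshold (0.5 m)
# and attenuation factor (0.5) are illustrative assumptions.

def adjust_volume(base_volume, head_distance_m, near_threshold_m=0.5):
    """Reduce the volume when the user's head is near the output unit."""
    if head_distance_m < near_threshold_m:
        return base_volume * 0.5  # halve the volume near the speaker
    return base_volume

print(adjust_volume(10.0, 0.3))  # -> 5.0
print(adjust_volume(10.0, 1.2))  # -> 10.0
```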
[0174]
In the fifth embodiment, the second area is an area separated from the first area, but the present
invention is not limited to this. For example, the second area may be an area partially overlapping
the first area. When the user is present in the overlapping area of the first area and the second
area, sound is output from the audio device corresponding to the area to which the user has later
entered, according to the process of FIG. Specifically, for example, when the user enters the first
area and then enters the overlapping area of the first area and the second area, sound is output from the audio device corresponding to the second area. That is, when the first area and the second area partially overlap, even while the user remains in the first area, the sound output corresponding to the first area can be stopped when the user enters the area overlapping the second area. In this way, overlapping sound output can be prevented.
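The "later-entered area wins" rule for partially overlapping areas can be sketched as follows. The data structures and names (`active_area`, `entry_order`, `occupied`) are illustrative assumptions, not from the patent.

```python
# A sketch of the "later-entered area wins" rule for partially overlapping
# detection areas. Names and data structures are illustrative assumptions.

def active_area(entry_order, occupied):
    """entry_order: areas in the order the user entered them.
    occupied: set of areas the user is currently inside.
    Returns the most recently entered area the user is still in."""
    for area in reversed(entry_order):
        if area in occupied:
            return area
    return None

# The user enters the first area, then the overlap with the second area:
# the device for the second area outputs sound; the first is stopped.
print(active_area(["first", "second"], {"first", "second"}))  # -> second
```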
[0175]
The audio device according to each of the above embodiments may be specifically configured as a computer system composed of a microprocessor, a ROM, a RAM, a hard disk drive, a display unit, a keyboard, a mouse, and the like. A computer program is stored in the RAM or the hard disk drive. The audio device achieves its functions by the microprocessor operating according to the computer program. Here, the computer program is configured by combining a plurality of instruction codes indicating instructions to the computer in order to achieve a predetermined function.
[0176]
Furthermore, some or all of the components constituting the audio apparatus according to each of the above embodiments may be configured as one system LSI (Large Scale Integration: large-scale integrated circuit). The system LSI is a super-multifunctional LSI manufactured by integrating a plurality of components on one chip, and is specifically a computer system including a microprocessor, a ROM, a RAM, and the like. A computer program is stored in the RAM. The system LSI achieves its functions by the microprocessor operating according to the computer program.
[0177]
Furthermore, some or all of the components constituting the audio device according to each of the above-described embodiments may be configured as an IC card or a single module that is removable from the audio device. The IC card or module is a computer system including a microprocessor, a ROM, a RAM, and the like. The IC card or module may include the above-described super-multifunctional LSI. The IC card or module achieves its functions by the microprocessor operating according to the computer program. This IC card or this module may be tamper resistant.
[0178]
Also, the present invention may be the method described above. Furthermore, the present
invention may be a computer program that realizes these methods by a computer, or may be a
digital signal composed of the computer program.
[0179]
Furthermore, the present invention may be a non-transitory computer-readable recording medium on which the computer program or the digital signal is recorded, such as a flexible disk, a hard disk, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a BD (Blu-ray (registered trademark) Disc), a semiconductor memory, or the like. Further, the present invention may be the digital signal recorded on these non-transitory recording media.
[0180]
In the present invention, the computer program or the digital signal may be transmitted via a
telecommunication line, a wireless or wired communication line, a network typified by the
Internet, data broadcasting, and the like.
[0181]
Further, the present invention may be a computer system comprising a microprocessor and a
memory, wherein the memory stores the computer program, and the microprocessor operates
according to the computer program.
[0182]
In addition, the present invention may be implemented by another independent computer system by recording the program or the digital signal on the non-transitory recording medium and transferring it, or by transferring the program or the digital signal via the network or the like.
[0183]
An audio device according to an aspect of the present invention can be used as an audio device
installed on the head side of a bed.
[0184]
10, 11 Bed 20 User 30 Detection area 31a, 32a First area 31b, 32b Second area 41, 42, 43, 44
Path 100, 200, 300, 400, 500 Audio system 110, 210, 310a, 310b, 410a, 410b, 510 First audio
device 111, 211 Detection unit 112, 121, 521 Output unit 112a First speaker 112b Second
speaker 113, 122, 213, 222, 313, 513, 522 Control unit 114, 123, 214, 223, 523
Communication unit 120, 220, 520 Second audio device 215, 224 Input unit 225 Call unit 524
Display unit