Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2015233284
An object of the present invention is to allow only a person looking at an object to hear the reproduced sound of sound data prepared in association with that object. Each user wears an HMD provided with a camera and headphones. The camera is mounted at a position from which it can capture the scene in front of the user. An information processing apparatus carried by the user recognizes, by object recognition based on the image taken by the camera, which poster the user is looking at. The sound data associated with the poster the user is looking at is reproduced, and the reproduced sound is output from the headphones. The information processing apparatus stores recognition data for each of the posters P1 to P4, used to recognize which poster the user is looking at, and sound data associated with each of the posters P1 to P4. The present technology can be applied to portable computers. [Selected figure] Figure 1
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND
PROGRAM
[0001]
The present technology relates to an information processing apparatus, an information processing method, and a program, and in particular to an information processing apparatus, an information processing method, and a program that allow only a person who is looking at a certain object to hear the reproduced sound of sound data prepared in association with that object.
[0002]
There is a technology in which a speaker is arranged on the back or side of an advertisement, and sound related to the advertisement is output from the speaker toward a person who is looking at the advertisement (Patent Document 1).
[0003]
In addition, there is a technology in which a sensor such as a camera is attached to the wall on which an advertisement is posted, and a related sound is output when the presence of a person in front of the advertisement is detected (Patent Document 2).
[0004]
JP 2004-77654 A, JP 2001-142420 A
[0005]
With the above-mentioned technologies, when a person who is not looking at the advertisement is near a person who is looking at the advertisement printed on a poster or the like, there is a problem that the sound is also heard by the person who is not looking at the advertisement.
[0006]
In addition, when a plurality of posters carrying different advertisements are posted side by side, there is a problem that the sounds of the respective advertisements mix and become difficult to hear.
[0007]
The above-mentioned technologies are usually adopted in the expectation that letting only a specific person hear a sound will improve the advertising effect, but the problems described above may reduce that effect.
[0008]
The present technology has been made in view of such a situation, and enables only a person who is looking at an object to hear the reproduced sound of sound data prepared in association with that object.
[0009]
An information processing apparatus according to one aspect of the present technology includes an acquisition unit that acquires an image of an object to be recognized, a recognition unit that recognizes the object included in the image, and a reproduction unit that reproduces sound data associated with the object recognized by the recognition unit and outputs the reproduced sound from an output device worn by the user so that the position of the recognized object is localized as the sound source position.
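As an illustration of how these units could fit together, the following is a minimal Python sketch; the camera, recognizer, and audio-output helpers are hypothetical stand-ins introduced for this example, not the patented implementation.

```python
# Minimal sketch of the acquisition/recognition/reproduction pipeline.
# `camera`, `recognizer`, `audio_out`, and `sound_db` are assumed helpers.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class RecognitionResult:
    object_id: str                        # ID of the recognized object (poster)
    position: Tuple[float, float, float]  # object position relative to the user

class SoundARPipeline:
    def __init__(self, camera, recognizer, audio_out, sound_db):
        self.camera = camera            # acquisition unit: supplies images
        self.recognizer = recognizer    # recognition unit: image -> object
        self.audio_out = audio_out      # output device worn by the user
        self.sound_db = sound_db        # mapping: object ID -> sound data

    def step(self) -> None:
        image = self.camera.capture()               # acquire an image
        result: Optional[RecognitionResult] = self.recognizer.recognize(image)
        if result is not None:
            sound = self.sound_db[result.object_id]
            # reproduce the sound so it is localized at the object's position
            self.audio_out.play(sound, source_pos=result.position)
```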
[0010]
The recognition unit can recognize an area set in the object, and the reproduction unit can
reproduce the sound data according to the area recognized by the recognition unit.
[0011]
The reproduction unit may reproduce sound data selected by the user from among a plurality of sound data associated with the object.
[0012]
When a plurality of the objects are included in the image, the recognition unit can perform
recognition on the object located closest to the center of the image.
[0013]
The recognition unit can recognize the object included in the image according to the line of sight
of the user.
[0014]
In one aspect of the present technology, an image of an object to be recognized is acquired, the object included in the image is recognized, sound data associated with the recognized object is reproduced, and the reproduced sound is output from an output device worn by the user so that the position of the recognized object is localized as the sound source position.
[0015]
According to the present technology, only the person who is watching an object can hear the
reproduced sound of the sound data prepared in association with the object.
[0016]
FIG. 1 is a diagram showing an example of the appearance of an AR system using an information processing apparatus according to an embodiment of the present technology.
FIG. 2 is a diagram showing an example of the appearance of a user in FIG. 1.
FIG. 3 is a diagram showing another example of the appearance of the AR system.
FIG. 4 is a block diagram showing an example of the hardware configuration of the information processing apparatus.
FIG. 5 is a block diagram showing an example of the functional configuration of the information processing apparatus.
FIG. 6 is a diagram explaining recognition of an object.
FIG. 7 is a flowchart explaining the sound reproduction processing of the information processing apparatus.
FIG. 8 is a block diagram showing another example of the functional configuration of the information processing apparatus.
FIG. 9 is a flowchart explaining the download processing of the information processing apparatus having the configuration shown in FIG. 8.
FIG. 10 is a diagram showing an example of portions set in a poster.
FIG. 11 is a diagram showing an example of model data and sound data stored in association with the portions of a poster.
FIG. 12 is a diagram showing an example of installation of the information processing apparatus.
[0017]
[AR (Augmented Reality) System] FIG. 1 is a view showing an example of the appearance of an AR
system using an information processing apparatus according to an embodiment of the present
technology.
[0018]
In the example of FIG. 1, the posters P1 to P4 are lined up and affixed to the wall surface W.
For example, advertisements for products, services, and the like are printed on the posters P1 to P4.
[0019]
Further, in the example of FIG. 1, users U1 to U3 stand in front of the wall surface W.
The user U1 is looking at the poster P1 attached to the wall W, and the user U3 is looking at the
poster P4.
The user U2 is not looking at any of the posters P1 to P4 attached to the wall W.
Dashed arrows #1 to #3 in FIG. 1 indicate the lines of sight of the users U1 to U3, respectively.
[0020]
In this case, as indicated by the balloon near each user, the sound associated with the poster P1 is output so that only the user U1, who is looking at the poster P1, can hear it.
Similarly, the sound associated with the poster P4 is output so that only the user U3, who is looking at the poster P4, can hear it. The sounds associated with the posters P1 and P4 cannot be heard by the user U2, who is not looking at either poster.
[0021]
When the information processing apparatus carried by a user detects that the user is looking at a poster, it reproduces the sound data associated with that poster and outputs the reproduced sound so that only that user can hear it. The sound data associated with a poster is, for example, voice or music data that introduces the product or service printed on the poster.
[0022]
FIG. 2 is a view showing an example of the appearance of the user U1 of FIG. 1.
[0023]
As shown in FIG. 2, the user U1 carries the information processing apparatus 1, which is a portable computer.
In addition, the user U1 wears a head-mounted display (HMD) 2. The information processing apparatus 1 and the HMD 2 can communicate with each other by wire or wirelessly.
[0024]
The HMD 2 is provided with a camera 11, a headphone 12, and a display 13.
[0025]
The camera 11 is attached at a position from which it can capture the scene in front of the user U1 wearing the HMD 2.
The shooting range of the camera 11 includes the user's line of sight. Images taken by the camera 11 are transmitted to the information processing apparatus 1. The camera 11 continues to capture images (moving images) at a predetermined frame rate, so that the information processing apparatus 1 is continuously provided with images of the scene the user is viewing.
[0026]
The headphones 12 are attached so as to come to the positions of the left and right ears of the
user U1 wearing the HMD 2. The headphones 12 output the reproduction sound transmitted
from the information processing device 1.
[0027]
The display 13 is attached so that its display unit is positioned in front of the eyes of the user U1 wearing the HMD 2. The display 13 is formed of a transparent member and displays information such as images and text based on the data transmitted from the information processing apparatus 1. The user can view the landscape through the display 13 and can also view the information displayed on the display 13.
[0028]
The users U2 and U3 in FIG. 1 also carry the information processing apparatus 1 and wear the
HMD 2 in the same manner as the user U1.
[0029]
For example, in the information processing apparatus 1 carried by the user U1, which poster the user U1 is looking at is recognized by object recognition based on the image photographed by the camera 11.
The information processing apparatus 1 stores recognition data for the posters P1 to P4, used to recognize which poster is being viewed.
[0030]
Further, when the information processing apparatus 1 detects that the user U1 is looking at one of the posters P1 to P4, it reproduces the sound data associated with that poster and outputs the reproduced sound from the headphones 12. The information processing apparatus 1 stores sound data in association with each of the posters P1 to P4.
[0031]
This allows the sound associated with the poster to be heard only by the user viewing the poster.
[0032]
That is, since the reproduced sound is output from the headphones 12, there is no problem of the sound being heard by persons other than the one looking at the poster.
In addition, since only the sound data associated with one of the posters P1 to P4 is reproduced, there is no problem of the sounds of the respective advertisements mixing and becoming difficult to hear.
[0033]
The sound data associated with a poster is reproduced only while the user is looking at that poster.
[0034]
For example, as shown in FIG. 3, when the user U1 looks at the poster P3 from the position p1, as indicated by the end of the dashed arrow #11, the sound data associated with the poster P3 is reproduced.
The user U1 can hear the reproduced sound of the sound data associated with the poster P3.
[0035]
When the user U1 moves to the position p2, as indicated by the solid arrow #12, and stops looking at the poster P3, as indicated by the end of the dashed arrow #13, reproduction of the sound data associated with the poster P3 is stopped. The user U1 can no longer hear the reproduced sound of the sound data associated with the poster P3.
[0036]
A series of processes of the information processing apparatus 1 that controls the reproduction of
sound data as described above will be described later.
[0037]
[Configuration of Information Processing Apparatus] FIG. 4 is a block diagram showing an
example of the hardware configuration of the information processing apparatus 1.
[0038]
A central processing unit (CPU) 31, a read only memory (ROM) 32, and a random access memory
(RAM) 33 are mutually connected by a bus 34.
[0039]
Further, an input / output interface 35 is connected to the bus 34.
An input unit 36, an output unit 37, a storage unit 38, a communication unit 39, and a drive 40
are connected to the input / output interface 35.
[0040]
The input unit 36 communicates with the HMD 2 and receives an image captured by the camera
11 of the HMD 2.
[0041]
The output unit 37 communicates with the HMD 2 and causes the headphone 12 to output the
reproduced sound of the sound data.
The output unit 37 also transmits display data to the HMD 2 and causes the display 13 to display
information such as an image or text.
[0042]
The storage unit 38 includes a hard disk, a non-volatile memory, and the like, and stores data for
recognizing a poster and sound data associated with each poster.
[0043]
The communication unit 39 includes a network interface such as a wireless local area network
(LAN) module and the like, and communicates with a server connected via a network.
Data for recognition of posters and sound data stored in the storage unit 38 are provided to the
information processing apparatus 1 by being downloaded from, for example, a server.
[0044]
The drive 40 reads data stored in a mounted removable medium 41 and writes data to the removable medium 41.
[0045]
FIG. 5 is a block diagram showing an example of the functional configuration of the information
processing apparatus 1.
[0046]
As shown in FIG. 5, the information processing apparatus 1 implements an image acquisition unit 51, a recognition unit 52, a sound reproduction control unit 53, a model data storage unit 54, a sound data storage unit 55, and a communication control unit 56.
At least some of these components are realized by the CPU 31 of FIG. 4 executing a predetermined program.
The model data storage unit 54 and the sound data storage unit 55 are formed, for example, in the storage unit 38.
[0047]
The image acquisition unit 51 acquires an image captured by the camera 11 and received by the
input unit 36.
The image acquisition unit 51 outputs the acquired image to the recognition unit 52.
[0048]
The recognition unit 52 uses the image supplied from the image acquisition unit 51 as a query image and recognizes the object included in the query image based on the model data stored in the model data storage unit 54.
The model data storage unit 54 stores data representing the features of each poster, extracted from an image including the poster. Object recognition by the recognition unit 52 will be described later.
[0049]
The recognition unit 52 outputs to the sound reproduction control unit 53, as a recognition result, for example, the ID of the recognized object (poster) and posture information indicating the relative positions of the recognized poster and the camera 11 (user). The posture information specifies, for example, the distance to the user and the direction in which the user is present, with the position of the recognized poster as the reference.
[0050]
The sound reproduction control unit 53 reads the sound data associated with the ID supplied from the recognition unit 52 from the sound data storage unit 55 and reproduces it. The sound reproduction control unit 53 controls the output unit 37 of FIG. 4 to transmit the reproduced sound data to the HMD 2 and causes the headphones 12 to output it. The sound data storage unit 55 stores poster IDs and sound data in association with each other.
[0051]
The communication control unit 56 controls the communication unit 39 to communicate with the
server 61, and downloads model data, which is data for recognition representing a feature of a
poster, and sound data associated with the poster. The server 61 has a database of model data
and sound data. The communication control unit 56 stores the downloaded model data in the
model data storage unit 54, and stores the sound data in the sound data storage unit 55.
[0052]
FIG. 6 is a diagram for explaining the recognition of an object (poster).
[0053]
Algorithms that the recognition unit 52 can use for object recognition include, for example, Randomized Ferns and the Scale Invariant Feature Transform (SIFT).
Randomized Ferns is described in "Fast Keypoint Recognition using Random Ferns", Mustafa Ozuysal, Michael Calonder, Vincent Lepetit and Pascal Fua, Ecole Polytechnique Federale de Lausanne (EPFL), Computer Vision Laboratory, I&C Faculty, CH-1015 Lausanne, Switzerland.
SIFT is described in "Distinctive Image Features from Scale-Invariant Keypoints", David G. Lowe, January 5, 2004.
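As a concrete illustration of the SIFT option, the short OpenCV sketch below extracts and matches features between a stored model image and a camera frame; the file names are placeholders, and this is only one possible realization (cv2.SIFT_create is available in opencv-python 4.4 and later).

```python
# Illustrative SIFT feature extraction and matching with OpenCV.
import cv2

model_img = cv2.imread("poster_p1.jpg", cv2.IMREAD_GRAYSCALE)     # model image
query_img = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)  # query image

sift = cv2.SIFT_create()
model_kp, model_desc = sift.detectAndCompute(model_img, None)
query_kp, query_desc = sift.detectAndCompute(query_img, None)

# Match each query descriptor to its two nearest model descriptors
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(query_desc, model_desc, k=2)

# Lowe's ratio test keeps only distinctive nearest-neighbor matches
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches against poster P1")
```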
[0054]
As shown in FIG. 6, the server 61, which serves as a learning device, implements an image processing unit 71, a feature point detection unit 72, a feature amount extraction unit 73, and a combining unit 74. Each component shown in FIG. 6 is realized by the CPU of the server 61 executing a predetermined program. The server 61 is also configured as a computer like the one shown in FIG. 4.
[0055]
The image processing unit 71 performs processing such as affine transformation on a model image and outputs the processed model image to the feature point detection unit 72. The images of the posters P1 to P4 are sequentially input to the image processing unit 71 as model images. Each model image is also input to the feature amount extraction unit 73.
[0056]
The feature point detection unit 72 determines each point on the model image supplied from the
image processing unit 71 as a model feature point, and outputs information representing the
position of the model feature point to the feature amount extraction unit 73.
[0057]
The feature amount extraction unit 73 extracts, as model feature amounts, the information of the pixels corresponding to the positions of the model feature points among the pixels constituting the model image.
The model feature amount data extracted by the feature amount extraction unit 73 is registered in the model dictionary D1 in association with the ID of the poster included in the model image from which the feature amounts were extracted. The model dictionary D1 is thus data in which the ID of a poster is associated with the model feature amount data of each model feature point extracted from an image including that poster.
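For clarity, the model dictionary D1 can be pictured as a mapping from poster IDs to the per-feature-point data; the sketch below uses NumPy arrays, and the field names are illustrative rather than taken from the patent.

```python
# Sketch of the model dictionary D1: poster ID -> model feature data.
import numpy as np

model_dictionary: dict = {}

def register_model(poster_id: str, points: np.ndarray, descriptors: np.ndarray):
    """Register the feature data extracted from a model image of one poster."""
    model_dictionary[poster_id] = {
        "points": points,            # (N, 2) positions of model feature points
        "descriptors": descriptors,  # (N, D) model feature amounts
    }
```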
[0058]
Further, the feature amount extraction unit 73 outputs the extracted model feature amount data to the combining unit 74.
[0059]
The combining unit 74 combines the input 3D model data with the model feature amount data supplied from the feature amount extraction unit 73.
Data representing the three-dimensional shape of each of the posters P1 to P4 is input to the combining unit 74 as 3D model data.
[0060]
For example, based on the 3D model data, the combining unit 74 calculates the position on the 3D model of each model feature point when the poster is viewed from various angles. The combining unit 74 combines the 3D model data and the model feature amount data by assigning the model feature amount data to the calculated positions of the model feature points, thereby generating 3D model data D2.
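As a sketch of this combination step, the code below attaches each model feature amount to a position on the poster's 3D model; treating the poster as a plane at z = 0 is our simplifying assumption for illustration.

```python
# Sketch of generating 3D model data D2 for a planar poster.
import numpy as np

def build_3d_model_data(points_2d: np.ndarray, descriptors: np.ndarray,
                        poster_size_m: tuple, image_size_px: tuple) -> dict:
    """Map 2D feature point positions (pixels) onto the poster plane (meters)."""
    sx = poster_size_m[0] / image_size_px[0]
    sy = poster_size_m[1] / image_size_px[1]
    points_3d = np.column_stack([points_2d[:, 0] * sx,
                                 points_2d[:, 1] * sy,
                                 np.zeros(len(points_2d))])  # planar: z = 0
    return {"points_3d": points_3d, "descriptors": descriptors}
```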
[0061]
The model dictionary D1 and the 3D model data D2 generated by the combining unit 74 are
provided to the information processing apparatus 1 and stored in the model data storage unit 54.
[0062]
As shown in FIG. 6, the recognition unit 52 includes an image processing unit 81, a feature point
detection unit 82, a feature amount extraction unit 83, a matching unit 84, and a posture
estimation unit 85.
An image captured by the camera 11 and acquired by the image acquisition unit 51 is input to the image processing unit 81 as a query image. The query image is also supplied to the feature amount extraction unit 83.
[0063]
Similar to the image processing unit 71, the image processing unit 81 performs processing such
as affine transformation on the query image, and outputs the query image obtained by
performing the processing to the feature point detection unit 82.
[0064]
The feature point detection unit 82 determines each point on the query image supplied from the
image processing unit 81 as a query feature point, and outputs information representing the
position of the query feature point to the feature amount extraction unit 83.
[0065]
The feature amount extraction unit 83 extracts, as a query feature amount, information of a pixel
corresponding to the position of the query feature point among the pixels constituting the query
image, and outputs the extracted data of the query feature amount to the matching unit 84.
[0066]
The matching unit 84 performs a nearest neighbor search such as K-NN based on the feature amount data included in the model dictionary D1 and determines, for each query feature point, the model feature point that is its nearest neighbor.
Based on the number of model feature points that are nearest neighbors of the query feature points, the matching unit 84 selects, for example, the poster with the largest number of such nearest-neighbor model feature points.
The matching unit 84 outputs the ID of the selected poster as the recognition result.
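A minimal version of this selection-by-counting, under the dictionary layout sketched earlier, might look as follows; the distance threshold is purely illustrative (a calibrated value or a ratio test would be needed in practice).

```python
# Vote for the poster owning the most nearest-neighbor model feature points.
from typing import Optional
import numpy as np

def match_poster(query_desc: np.ndarray, model_dictionary: dict) -> Optional[str]:
    votes = {}
    for poster_id, data in model_dictionary.items():
        model_desc = data["descriptors"]
        # pairwise distances: shape (num_query, num_model)
        d = np.linalg.norm(query_desc[:, None, :] - model_desc[None, :, :], axis=2)
        # count query points whose nearest model point is close enough
        votes[poster_id] = int((d.min(axis=1) < 0.6).sum())  # illustrative threshold
    if not votes:
        return None
    best = max(votes, key=votes.get)
    return best if votes[best] > 0 else None
```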
[0067]
The ID of the poster output from the matching unit 84 is supplied to the sound reproduction control unit 53 of FIG. 5 and to the posture estimation unit 85.
The posture estimation unit 85 is also supplied with information indicating the position of each query feature point.
[0068]
The posture estimation unit 85 reads the 3D model data D2 of the poster recognized by the matching unit 84 from the model data storage unit 54 and, based on the 3D model data D2, identifies the position on the 3D model of the model feature point that is the nearest neighbor of each query feature point. The posture estimation unit 85 then outputs posture information indicating the positional relationship between the poster and the user.
[0069]
If the positions on the 3D model of the model feature points that are the nearest neighbors of the query feature points detected from the query image captured by the camera 11 can be identified, it is possible to determine from what position the query image of the poster was taken, that is, at which position the user is. Also, if the size of the poster in an image and the distance to the poster are associated in advance, the distance from the position of the poster to the position of the user can be determined from the size of the poster included in the query image captured by the camera 11. The lens of the camera 11 is, for example, a fixed-focal-length lens without a zoom function.
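For a fixed-focal-length camera, this size-to-distance association reduces to the pinhole relation distance = f · W / w; the sketch below and its numbers are illustrative.

```python
# Pinhole-model distance estimate from the poster's apparent width.
def distance_to_poster(real_width_m: float, pixel_width: float,
                       focal_length_px: float) -> float:
    """distance = f * W / w, valid for a fixed-focal-length (no-zoom) lens."""
    return focal_length_px * real_width_m / pixel_width

# e.g. a 1.0 m wide poster spanning 200 px with f = 800 px is about 4 m away
print(distance_to_poster(1.0, 200.0, 800.0))  # 4.0
```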
[0070]
The poster viewed by the user and the relative positional relationship between the poster and the
user are recognized as described above.
[0071]
[Operation of Information Processing Apparatus] Here, the sound reproduction processing of the information processing apparatus 1 will be described with reference to the flowchart of FIG. 7.
The process of FIG. 7 is repeated, for example, while shooting is performed by the camera 11.
[0072]
In step S1, the image acquisition unit 51 acquires an image captured by the camera 11.
[0073]
In step S2, the recognition unit 52 performs object recognition on the image acquired by the
image acquisition unit 51.
[0074]
In step S3, the recognition unit 52 determines whether an ID matching the ID of the recognized object is stored in the model data storage unit 54 as a poster ID, that is, whether the user is looking at a poster.
[0075]
If it is determined in step S3 that the user is not looking at a poster, in step S4 the sound reproduction control unit 53 determines whether sound data is being reproduced.
[0076]
If it is determined in step S4 that sound data is being reproduced, the sound reproduction control unit 53 stops the reproduction of the sound data in step S5.
When the reproduction of the sound data is stopped in step S5, or when it is determined in step S4 that no sound data is being reproduced, the process returns to step S1 and the subsequent processing is repeated.
[0077]
On the other hand, if it is determined in step S3 that the user is looking at a poster, in step S6 the sound reproduction control unit 53 determines whether sound data associated with the poster the user is looking at is stored in the sound data storage unit 55.
[0078]
If it is determined in step S6 that no sound data associated with the poster the user is looking at is stored, the process returns to step S1 and the subsequent processing is repeated.
[0079]
If it is determined in step S6 that sound data associated with the poster the user is looking at is stored, in step S7 the sound reproduction control unit 53 determines whether sound data other than the sound data associated with that poster is being reproduced.
[0080]
If it is determined in step S7 that sound data other than the sound data associated with the poster the user is looking at is being reproduced, the sound reproduction control unit 53 stops the reproduction of that sound data in step S8.
When the reproduction of the sound data is stopped in step S8, the process returns to step S1 and the subsequent processing is repeated.
[0081]
On the other hand, if it is determined in step S7 that no sound data other than the sound data associated with the poster the user is looking at is being reproduced, in step S9 the sound reproduction control unit 53 determines whether the sound data associated with that poster is being reproduced.
[0082]
If it is determined in step S9 that the sound data associated with the poster the user is looking at is being reproduced, the process returns to step S1 and the subsequent processing is performed. In this case, reproduction of the sound data associated with that poster continues.
[0083]
If it is determined in step S9 that the sound data associated with the poster the user is looking at is not being reproduced, in step S10 the sound reproduction control unit 53 reads the sound data associated with that poster from the sound data storage unit 55 and starts its reproduction.
Thereafter, the processing from step S1 onward is repeated.
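The flow of steps S1 to S10 can be restated compactly as the control loop below; the camera, recognizer, and player objects are hypothetical helpers, and the code only mirrors the branching of the flowchart in FIG. 7.

```python
# One iteration of the sound reproduction processing (steps S1-S10).
def sound_reproduction_step(camera, recognizer, sound_db, player) -> None:
    image = camera.capture()                     # S1: acquire image
    poster_id = recognizer.recognize(image)      # S2: object recognition
    if poster_id is None:                        # S3: user not looking at a poster
        if player.is_playing():                  # S4: something playing?
            player.stop()                        # S5: stop it
        return                                   # back to S1
    if poster_id not in sound_db:                # S6: no sound data stored
        return                                   # back to S1
    if player.is_playing() and player.current_id != poster_id:
        player.stop()                            # S7/S8: stop the other sound
        return                                   # back to S1
    if not player.is_playing():                  # S9: this poster's sound idle?
        player.start(poster_id, sound_db[poster_id])  # S10: start reproduction
```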
[0084]
By the above processing, only the person looking at a poster can be made to hear the reproduced sound of the sound data associated with that poster.
[0085]
When it is recognized that the image captured by the camera 11 includes a plurality of posters, the poster located closest to the center of the image may be recognized as the poster the user is looking at.
[0086]
The volume or output timing of the sound output from the left and right speakers of the headphones 12 may be adjusted so that the reproduced sound is localized at a sound source position, taking the position at which the poster recognized as being viewed by the user is posted as that sound source position.
This makes it possible to give the user the impression that the sound is coming from the poster.
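As one way to realize such localization, the sketch below applies constant-power panning plus an interaural time difference to a mono signal; real systems would typically use HRTFs, and the constants here are rough assumptions.

```python
# Toy stereo localization: pan + interaural time difference (ITD).
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
EAR_SPACING = 0.2       # m, rough head width

def localize_stereo(mono: np.ndarray, sr: int, azimuth_rad: float):
    """Return (left, right) channels with the source panned toward azimuth."""
    pan = (np.sin(azimuth_rad) + 1.0) / 2.0       # 0 = hard left, 1 = hard right
    left_gain = np.cos(pan * np.pi / 2.0)         # constant-power panning
    right_gain = np.sin(pan * np.pi / 2.0)
    itd = EAR_SPACING * np.sin(azimuth_rad) / SPEED_OF_SOUND
    shift = int(abs(itd) * sr)                    # ITD expressed in samples
    delayed = np.concatenate([np.zeros(shift), mono])[: len(mono)]
    if itd > 0:                                   # source on the right:
        return left_gain * delayed, right_gain * mono   # left ear hears it later
    return left_gain * mono, right_gain * delayed       # source on the left
```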
[0087]
[Modified Example] The model data stored in the model data storage unit 54 of the information processing apparatus 1 and the sound data stored in the sound data storage unit 55 may be updated according to the position of the user.
[0088]
FIG. 8 is a block diagram showing another functional configuration example of the information
processing apparatus 1.
[0089]
The configuration shown in FIG. 8 is the same as that shown in FIG. 5 except that a positioning unit 57 is added.
Duplicate descriptions will be omitted.
[0090]
The positioning unit 57 detects the position of the information processing apparatus 1, that is, the position of the user carrying the information processing apparatus 1, based on the output of a GPS (Global Positioning System) sensor (not shown) provided in the information processing apparatus 1.
The positioning unit 57 outputs position information indicating the current position to the communication control unit 56.
[0091]
The communication control unit 56 transmits the position information to the server 61 and downloads the model data of the posters posted in the area including the current position and the sound data associated with those posters.
[0092]
In the server 61, the model data and sound data of the posters are managed separately for each area in which the posters are posted.
The model data and sound data are downloaded, for example, in units of the model data and sound data for the posters posted in one area.
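A sketch of this position-keyed download is given below; the REST endpoint, parameter names, and response shape are assumptions for illustration, since the patent does not specify a protocol.

```python
# Download model data and sound data for the posters in the current area.
import requests

def download_area_data(lat: float, lon: float, server_url: str) -> dict:
    """Ask the server for all poster data in the area containing (lat, lon)."""
    resp = requests.get(f"{server_url}/area-data",
                        params={"lat": lat, "lon": lon}, timeout=10)
    resp.raise_for_status()
    # assumed response shape: {"model_data": {...}, "sound_data": {...}}
    return resp.json()
```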
[0093]
The communication control unit 56 stores the downloaded model data in the model data storage
unit 54, and stores the sound data in the sound data storage unit 55.
[0094]
The download processing of the information processing apparatus 1 having the configuration shown in FIG. 8 will be described with reference to the flowchart of FIG. 9.
[0095]
In step S21, the positioning unit 57 detects the current position, and outputs position
information to the communication control unit 56.
[0096]
In step S22, the communication control unit 56 transmits the position information to the server
61.
[0097]
In step S23, the communication control unit 56 downloads the model data of the posters posted in the area including the current position and the sound data associated with those posters.
[0098]
In step S24, the communication control unit 56 stores the downloaded model data in the model
data storage unit 54, and stores the sound data in the sound data storage unit 55.
Thereafter, the process is ended.
[0099]
After the newly downloaded model data and sound data are stored, the model data and sound data of the posters posted in the area including the user's previous position may be deleted from the model data storage unit 54 and the sound data storage unit 55.
This makes it possible to reduce the amount of model data and sound data that is stored.
[0100]
In the above description, which poster the user is looking at is recognized in units of posters, and the sound data associated with the poster is reproduced. However, processing may also be performed in units of portions of a single poster.
In this case, which portion of which poster the user is looking at is recognized, and the sound data associated with the recognized portion of the poster is reproduced.
[0101]
FIG. 10 is a diagram showing an example of portions (areas) set in the poster P1.
[0102]
In the example of FIG. 10, portions 1-1, 1-2, and 1-3 are set in the poster P1.
Information with different content, such as pictures of different products, is printed in the portions 1-1, 1-2, and 1-3.
[0103]
As shown in FIG. 11, model data and sound data are stored in the information processing apparatus 1 in association with each portion of the poster.
[0104]
In the example of FIG. 11, model data 1-1 and sound data 1-1 are stored corresponding to the portion 1-1 of the poster P1, and model data 1-2 and sound data 1-2 are stored corresponding to the portion 1-2.
Further, model data 1-3 and sound data 1-3 are stored corresponding to the portion 1-3.
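One way to hold the associations of FIG. 11 is a nested mapping keyed by poster and portion; the file names and the nested-dict shape below are assumptions made for clarity.

```python
# Model data and sound data keyed by (poster, portion), following FIG. 11.
portion_db = {
    "P1": {
        "1-1": {"model_data": "model_1_1.bin", "sound_data": "sound_1_1.wav"},
        "1-2": {"model_data": "model_1_2.bin", "sound_data": "sound_1_2.wav"},
        "1-3": {"model_data": "model_1_3.bin", "sound_data": "sound_1_3.wav"},
    },
    # posters P2 to P4 are stored the same way, portion by portion
}
```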
[0105]
Similarly, for the posters P2 to P4, model data and sound data are stored corresponding to each portion of the poster.
[0106]
When the information processing apparatus 1 determines, based on the image captured by the camera 11 and the model data of each portion, that the user is looking at, for example, the portion 1-1 of the poster P1, reproduction of the sound data 1-1 is started.
[0107]
This makes it possible to switch the sound data the user hears according to the portion of the poster the user is looking at.
[0108]
Further, although the information processing apparatus 1 is carried by the user in the above description, it may instead be installed elsewhere.
[0109]
FIG. 12 is a diagram showing an example of installation of the information processing apparatus
1.
[0110]
In the example of FIG. 12, the information processing apparatus 1 is installed on the wall surface W to which the posters P1 to P4 are attached.
The information processing apparatus 1 and the HMD 2 worn by the user communicate with each other to exchange the images captured by the camera 11 and the sound data reproduced by the information processing apparatus 1.
[0111]
Although the case where the object to be recognized is a poster has been described above, an image displayed on a display may instead be recognized, and sound data associated with the recognized image may be reproduced.
[0112]
Furthermore, although the case where the device that communicates with the information processing apparatus 1 is the HMD 2 has been described above, the device that communicates with the information processing apparatus 1 may be another device carried by the user, such as a portable music player having a camera function.
By shooting a poster with the portable music player, the user can listen to the sound associated with the poster through the portable music player's earphones.
[0113]
The type of sound data to be reproduced may be selected by the user.
For example, when a plurality of voices aimed at different audiences, such as an adult voice and a child voice, are prepared in the information processing apparatus 1 in association with the same poster, the voice selected by the user is reproduced.
[0114]
In this case, the user selects in advance whether to reproduce the adult voice or the child voice, and information representing the selection is stored in the information processing apparatus 1.
When it is detected that the user is looking at a certain poster, the information processing apparatus 1 starts reproducing, from among the sound data associated with the poster, the sound data of the type represented by the stored information.
This allows the user to hear his or her preferred voice even when viewing the same poster.
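The stored preference can then act as a simple key into the per-poster variants, as in the sketch below; the variant names and data layout are illustrative assumptions.

```python
# Pick the sound variant matching the user's stored preference.
def select_sound(poster_id: str, sound_db: dict, preference: str = "adult"):
    """Return the preferred variant, falling back to any available one."""
    variants = sound_db[poster_id]      # e.g. {"adult": ..., "child": ...}
    return variants.get(preference, next(iter(variants.values())))
```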
[0115]
Further, the user may be allowed to select the language of the voice to be reproduced from among voices in different languages, such as a Japanese voice and a foreign-language voice.
[0116]
The series of processes described above can be executed by hardware or by software.
When the series of processes is executed by software, a program constituting the software is installed from a program recording medium onto a computer incorporated in dedicated hardware, a general-purpose personal computer, or the like.
[0117]
The program to be installed is provided by being recorded on the removable medium 41 shown in FIG. 4, such as an optical disc (a CD-ROM (Compact Disc-Read Only Memory), a DVD (Digital Versatile Disc), etc.) or a semiconductor memory.
It may also be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting.
The program can be installed in advance in the ROM 32 or the storage unit 38.
[0118]
Note that the program executed by the computer may be a program whose processing is performed in chronological order according to the order described in this specification, or a program whose processing is performed in parallel or at necessary timing, such as when a call is made.
[0119]
The embodiments of the present technology are not limited to the above-described embodiments,
and various modifications can be made without departing from the scope of the present
technology.
[0120]
Reference Signs List 1 information processing apparatus, 2 HMD, 11 camera, 12 headphones, 13
display, 51 image acquisition unit, 52 recognition unit, 53 sound reproduction control unit, 54
model data storage unit, 55 sound data storage unit, 56 communication control unit