Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2017034570
Abstract: The present invention provides a method for resolving a left / right mismatch between a captured image and stereo sound in an imaging apparatus in which a camera unit and a display unit are separable and the display unit includes a stereo microphone. The imaging apparatus comprises a camera unit 101 including an imaging element; a display unit 102 including an image display unit 109 and a first stereo sound recording unit 117; separation detection units 105 and 110 that detect the separation state of the camera unit and the display unit; and an audio signal processing unit 316 that processes the audio signal of the first stereo sound recording unit. When it is determined that the camera unit and the display unit are separated and that the shooting direction of the camera unit substantially matches the direction in which the image display unit of the display unit is directed, control is performed to switch the left and right channels of the audio signal recorded by the first stereo sound recording unit. [Selected figure] Figure 3
Imaging device
[0001]
BACKGROUND OF THE INVENTION Field of the Invention The present invention relates to an
image pickup apparatus configured to be capable of photographing while being separated into a
camera unit and a display unit.
[0002]
There are digital cameras in which a camera unit having an image pickup element and a display unit having image display means such as an LCD and an operation unit can be separated and combined freely, and in which, by operating both in cooperation via wireless communication, imaging can be performed in either the separated or the combined state.
A digital camera has also been proposed which has no display means or operation unit of its own, but instead uses an external device such as a smartphone as its display unit / operation unit through wireless communication.
[0003]
Such a digital camera can be used in the same way as a conventional camera in which the camera unit and the display unit are integrated, and in addition it can shoot in a free posture with the camera unit and the display unit separated. Furthermore, by providing a built-in microphone in each of the camera unit and the display unit, various sound collecting methods can be used during moving image shooting; for example, a photographer at a distance from the camera unit can record narration using the built-in microphone of the display unit at hand.
[0004]
The following references will be cited as recording apparatuses or imaging apparatuses provided
with a microphone in each of the camera unit and the display unit.
[0005]
Patent Document 1 shows a recording apparatus with a video camera in which a recording unit
having a monitor for reproduction and a video camera unit can be separated, and a microphone
and a speaker are provided for each.
[0006]
Patent Document 2 discloses an imaging device in which an imaging unit is provided rotatably with respect to the imaging device body, voice input means are provided for each of the imaging device body and the imaging unit, and voice input is controlled based on the rotational position of the imaging unit with respect to the imaging device body. However, the imaging unit is not configured to be separable from the imaging device main body.
[0007]
JP-A-09-307800 JP-A-2005-286718
[0008]
When performing so-called face-to-face shooting (self-portrait shooting), in which the camera unit photographs the photographer holding the display unit separated from the camera unit and sound is collected using the stereo microphone of the display unit, a problem may arise in that the left and right of the image taken by the camera unit do not match the left and right of the sound collected by the microphone of the display unit.
[0009]
FIG. 12 shows an example of an imaging apparatus in which the camera unit and the display unit
can be separated.
A photographer 1204 performs shooting while confirming an image shot by the camera unit
1202 on the display screen 1203 of the display unit 1201.
As shown in FIG. 12A, when the photographer 1204 faces in the same direction as the shooting direction, the left / right arrangement (left: L, right: R) of the stereo microphone unit 1205 of the camera unit 1202 is identical to that of the stereo microphone unit 1206 of the display unit 1201, so the left and right of the captured image and the left and right of the stereo audio coincide regardless of which microphone is used to collect the sound.
[0010]
On the other hand, when photographing the photographer 1204 holding the display unit 1201 as shown in FIG. 12B, the left-right arrangement of the stereo microphone unit 1206 of the display unit 1201 is reversed with respect to that of the stereo microphone unit 1205 of the camera unit 1202. In this state, the stereo sound collected by the stereo microphone unit 1206 of the display unit 1201 is left-right reversed with respect to the image captured by the camera unit 1202, and a moving image combining this image and stereo sound gives an unnatural impression.
[0011]
SUMMARY OF THE INVENTION It is an object of the present invention to solve the left / right
mismatch between the captured image and the stereo sound as described above in an imaging
apparatus in which the camera unit and the display unit are separable and the display unit
includes a stereo microphone.
[0012]
In order to solve the above-described problems, an imaging apparatus according to the present invention is configured such that a camera unit including an imaging element and a display unit including an image display unit and a first stereo sound recording unit can be separated and combined, and comprises: a separation detection unit that detects the separation state of the camera unit and the display unit; an orientation determination unit that determines the relative orientation of the camera unit and the display unit; an audio signal processing unit that processes the audio signal recorded by the first stereo sound recording unit; and a control unit that controls the audio signal processing unit according to the detection result of the separation detection unit and the determination result of the orientation determination unit.
[0013]
Furthermore, the camera unit is provided with second stereo sound recording means, the audio signals acquired by the first and second stereo sound recording means are compared by signal comparison means, and the relative orientation of the camera unit and the display unit is determined from the comparison result.
[0014]
Alternatively, the camera unit is provided with a first position acquisition unit and a first orientation acquisition unit, and the display unit is provided with a second position acquisition unit and a second orientation acquisition unit. The position information acquired by the first and second position acquisition means and the orientation information acquired by the first and second orientation acquisition means are compared by information comparison means, and the relative orientation of the camera unit and the display unit is determined from the comparison result.
[0015]
Alternatively, a display switching operation unit is provided, and the audio signal processing unit is controlled according to the detection result of the separation detection unit and the operation state of the display switching operation unit.
[0016]
In particular, the direction detection means is characterized in that it detects that the shooting
direction of the camera unit and the direction in which the image display means of the display
unit is directed substantially coincide.
[0017]
Furthermore, the audio signal processing unit is controlled so as to switch the left and right of the audio signal obtained from the first stereo sound recording means according to the detection results of the separation detection means and the direction detection means.
[0018]
According to the imaging apparatus of the present invention, the audio signal processing unit that processes the recorded audio signal is appropriately controlled according to the relative orientation of the separated camera unit and display unit, so that face-to-face photography can be performed without the left and right of the moving image and the audio being reversed and causing a sense of discomfort.
[0019]
FIG. 1 is an external view showing the combined state of the imaging system in the first embodiment.
FIG. 2 is an external view showing the separated state of the imaging system in the first embodiment.
FIG. 3 is a block diagram explaining the configuration of the imaging system in the first embodiment.
FIG. 4 is a view showing the imaging system being used in the form of a general camera.
FIG. 5 is a view showing the imaging system being used in the separated state.
FIG. 6 is a view showing the imaging system being used in the separated state for face-to-face shooting.
FIG. 7 is a flowchart of the audio signal processing setting of the imaging system in the first embodiment.
FIG. 8 is a block diagram explaining the configuration of the imaging system in the second embodiment.
FIG. 9 is a flowchart of the audio signal processing setting of the imaging system in the second embodiment.
FIG. 10 is a block diagram explaining the configuration of the imaging system in the third embodiment.
FIG. 11 is a flowchart of the audio signal processing setting of the imaging system in the third embodiment.
FIG. 12 is a view showing the use of an imaging device in which a camera unit and a display unit can be separated.
[0020]
Hereinafter, an embodiment for carrying out the present invention will be described based on the
drawings.
[0021]
First Embodiment The first embodiment of the present invention will be described.
[0022]
FIG. 1 and FIG. 2 are views showing the appearance of an imaging system 100 configured of a
camera head 101 as a camera unit and a smartphone 102 as a display unit.
FIG. 1 shows a state in which the camera head 101 and the smartphone 102 are combined, and
FIG. 2 shows an appearance in a state in which they are separated.
In FIG. 1 and FIG. 2, (A) shows the appearance of the imaging system 100 as viewed from the front, and (B) shows the appearance as viewed from the back. Furthermore, FIG. 3 is a block diagram for explaining the configuration of the camera head 101 and the smartphone 102 that constitute the present imaging system 100.
[0023]
The configuration of the present imaging system 100 will be described with reference to FIGS. 1
to 3.
[0024]
First, the camera head 101 has a substantially cylindrical housing, and has a pair of engaging
claws 107 and 108 at the rear of the housing.
A non-contact detection unit 110, described later, is built into a substantially middle portion between the engagement claws 107 and 108 (indicated by a broken line in FIG. 2B). The interval between the engagement claws 107 and 108 can be adjusted arbitrarily, and another device such as the smartphone 102 can be engaged by being pinched between them.
[0025]
A lens optical system 301 and an imaging device 302 are provided inside the camera head 101
casing. The image sensor 302 is controlled by the camera control unit 303 to capture a subject
image, and the subject image is sent to the image processing unit 304 and converted into
predetermined still image data or moving image data. The still image data or moving image data is recorded by the control unit 305 in the memory 306 provided in the camera head 101, or can be sent outside the camera head 101 via the wireless transmission / reception unit 307, typified by Wi-Fi.
[0026]
Furthermore, a camera unit microphone R112 and a camera unit microphone L113 are provided
on the upper surface portion of the camera head 101 to constitute a stereo microphone.
Hereinafter, the combination of the camera unit microphone R112 and the camera unit
microphone L113 is also referred to as a camera unit microphone 116. The audio collected by
the camera microphone 116 is converted into a predetermined stereo audio signal by the audio
processor 315. Then, it is combined with moving image data generated by the image processing
unit 304 to generate a moving image file. The moving image file can be recorded in the memory
306 or can be transmitted to the outside of the camera head 101 via the wireless transmission /
reception unit 307 as described above.
[0027]
The operation of each part of the camera head 101 is controlled by the control unit 305. Further,
the power of the camera head 101 can be turned on / off by the power switch 111.
[0028]
On the other hand, the smartphone 102 has a flat housing provided with a display element 109
such as an LCD or an OLED on one side. On the surface on the opposite side of the display
element 109, a non-contact detection unit 105 described later is built in (shown by a broken line
in FIG. 2A). On the display element 109, image data received from the outside via the wireless
transmission / reception unit 309 and image data recorded in the memory 311 can be decoded
by the image processing unit 312 and displayed as an image. In addition, the character generated
by the character generator 313 can be superimposed on the image displayed on the display
element 109.
[0029]
Further, a touch panel 314 is disposed so as to overlap the display element 109. Characters are thereby displayed at desired positions on the display screen, and when the touch panel 314 detects that such a portion has been touched with a finger or the like, a predetermined response is performed, constituting a graphical user interface (hereinafter, GUI).
[0030]
Furthermore, a release switch 104 for instructing an image capturing operation is provided, and a display microphone R114 and a display microphone L115 are disposed beside the release switch 104 to constitute a stereo microphone. Hereinafter, the combination of the display microphone R114 and the display microphone L115 is also referred to as a display unit microphone 117. The audio collected by the display unit microphone 117 is converted into a predetermined stereo audio signal by the audio processor 316. The stereo audio signal can be combined with moving image data received by the wireless transmission / reception unit 309 to generate a moving image file, and the generated moving image file can be recorded in the memory 311 or reproduced on the display element 109 as an image. In addition, a power switch 103 for turning the power of the smartphone 102 on and off is provided. The control unit 310 is in charge of the operation of each part of the smartphone 102.
[0031]
The camera head 101 and the smartphone 102 communicate with each other via the wireless transmission / reception units 307 and 309 and operate in cooperation to form the imaging system 100. Specifically, a control command corresponding to an operation of the touch panel 314 or the release switch 104 is sent from the wireless transmission / reception unit 309 of the smartphone 102 and received by the wireless transmission / reception unit 307 of the camera head 101, thereby controlling the operation of the camera head 101. In response to a shooting instruction command from the smartphone 102, the camera head 101 shoots an image and sends out image data from the wireless transmission / reception unit 307. Image data received by the wireless transmission / reception unit 309 of the smartphone 102 can be displayed on the display element 109 or recorded in the memory 311. That is, the camera head 101 and the smartphone 102 can cooperate with each other anywhere within the communication range of the wireless transmission / reception units 307 and 309 regardless of the distance between them, and can shoot without restriction of position and posture, which makes it possible to expand the range of shooting situations. Even in the state where the camera head 101 and the smartphone 102 are engaged by the engagement claws 107 and 108 as described above, the wireless transmission / reception units 307 and 309 communicate with each other to perform the cooperative operation.
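To make the cooperative operation easier to picture, the following minimal sketch models the command / response exchange between the smartphone and the camera head described in this paragraph. Every name in it (CameraHeadLink, send_command, the JSON message shape, the port number) is a hypothetical illustration, not the actual protocol of this patent.

```python
import json
import socket

# Hypothetical illustration of the smartphone-to-camera-head command exchange;
# the real devices use their own Wi-Fi protocol, which the patent does not detail.
class CameraHeadLink:
    def __init__(self, host: str, port: int = 5000):
        self.addr = (host, port)

    def send_command(self, command: str, **params) -> dict:
        """Send one JSON command (e.g. triggered by the release switch 104 or
        the touch panel 314) and return the camera head's JSON reply."""
        with socket.create_connection(self.addr, timeout=2.0) as sock:
            sock.sendall(json.dumps({"cmd": command, **params}).encode() + b"\n")
            return json.loads(sock.makefile().readline())

# Usage (illustrative): a release-switch press on the smartphone becomes a
# shooting instruction; the camera head replies with the status of the capture.
# link = CameraHeadLink("192.168.0.10")
# reply = link.send_command("capture_still")
```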
[0032]
A short-range wireless communication technology using NFC (Near Field Communication) is applied to the non-contact detection units provided in the camera head 101 and the smartphone 102. Each non-contact detection unit has an antenna for transmitting and receiving communication data and a transmission / reception unit that encodes and decodes the communication data (both not shown), and implements a reader / writer function that exchanges information with other non-contact detection units and IC tags. Because this short-range wireless communication uses a weak radio wave whose reach is limited to roughly several tens of centimeters, the non-contact detection units can detect that the camera head 101 and the smartphone 102 are in proximity to each other.
[0033]
The operation of the non-contact detection units will now be described. As described above, in the camera head 101, the non-contact detection unit 110 is built into a substantially middle portion between the engagement claws 107 and 108. Whether or not another non-contact detection unit or an IC tag is in proximity, as detected by the non-contact detection unit 110, is transmitted to the control unit 305 and used to control the camera head 101. As shown in FIG. 2A, the non-contact detection unit 105 is disposed on one side surface of the smartphone 102. The detection result of the non-contact detection unit 105 is transmitted to the control unit 310 and used to control the smartphone 102.
[0034]
With the above-described arrangement, when the camera head 101 is attached to the smartphone 102, the non-contact detection unit 110 and the non-contact detection unit 105 face each other in close proximity, and short-range wireless communication becomes possible. When the non-contact detection unit 110 detects that the smartphone 102 is within near-field wireless communication range, that is, that the camera head 101 is attached to the smartphone 102, the control unit 305 of the camera head 101 transmits an attachment detection signal through the wireless transmission / reception unit 307. The smartphone 102 detects that the camera head 101 is attached from the detection result of the non-contact detection unit 105. Furthermore, when the attachment detection signal transmitted by the camera head 101 is received through the wireless transmission / reception unit 309, the camera head 101 is recognized as attached, together with the detection result of the non-contact detection unit 105. Conversely, the camera head 101 can also receive, with its wireless transmission / reception unit 307, an attachment detection signal transmitted from the smartphone 102 through the wireless transmission / reception unit 309, and use it for attachment determination.
[0035]
In this manner, the camera head 101 and the smartphone 102 can detect whether they are in the
separated state or in the combined state.
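As a minimal sketch of how the separation / combination decision described above could be formed on either device, the following combines the local NFC proximity result with the attachment signal received over the wireless link. The class and function names are hypothetical; the patent does not specify this logic in code.

```python
from dataclasses import dataclass

@dataclass
class AttachmentState:
    nfc_peer_in_proximity: bool      # result of the local non-contact (NFC) detection unit
    attach_signal_received: bool     # attachment detection signal received over the Wi-Fi link

def is_combined(state: AttachmentState) -> bool:
    """Treat the devices as combined only when the local NFC detection and the
    partner's attachment signal agree; otherwise assume the separated state."""
    return state.nfc_peer_in_proximity and state.attach_signal_received

# Example: smartphone side right after the camera head is clipped onto it.
state = AttachmentState(nfc_peer_in_proximity=True, attach_signal_received=True)
print("combined" if is_combined(state) else "separated")
```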
[0036]
The imaging system 100 switches the processing of the audio signal collected by the microphone according to the detection result of the separation / combination state of the camera head 101 and the smartphone 102, and the detection result of whether the camera head 101 and the smartphone 102 face each other, that is, whether the system is in the self-shooting state.
[0037]
FIGS. 4 to 6 illustrate ways of using the present imaging system 100.
FIG. 4 shows how a camera head 101 and a smartphone 102 are combined to perform shooting
in the form of a general camera.
FIG. 5 shows how the camera head 101 is installed on a support 501 or the like, and the
photographer 502 shoots while checking the image with the display element 109 of the
smartphone 102 from a distant place. At this time, the shooting direction of the camera head 101
is the same as the direction in which the photographer 502 faces. FIG. 6 shows how the camera head 101 is installed on a support 501 or the like and shoots in a direction that includes the photographer 502 holding the smartphone 102. This is the shooting method called face-to-face shooting (self-portrait shooting), in which the photographer 502 adjusts his or her standing position while checking the shot image transmitted from the camera head 101 and displayed on the display element 109.
[0038]
In the present imaging system 100, when the camera head 101 and the smartphone 102 are separated, the photographer can choose whether to use the microphone of the camera head 101 or the microphone of the smartphone 102 for voice recording. Consider, for example, the shooting situation in FIG. 5. If sound is collected using the camera unit microphone 116 in this situation, the photographer 502 is at a distant position, so the photographer's voice and the rubbing noise of clothes are not mixed into the recorded audio, and the sound on the subject side can be recorded clearly. Alternatively, if sound is collected using the display unit microphone 117, the photographer 502 can clearly record narration while staying at a position far from the subject. Next, consider the shooting situation in FIG. 6. In this case, if voice is recorded with the display unit microphone 117 at the hand of the photographer 502, the narration of the photographer 502 can be recorded clearly, which is particularly effective when the camera head 101 is installed at a distance. However, as described above, there is the problem that the left and right of the image captured by the camera head 101 and the left and right of the stereo sound recorded by the display unit microphone 117 do not match as they are. Therefore, the imaging system 100 switches the processing of the recorded audio signal according to the procedure described below.
[0039]
FIG. 7 is a flowchart for explaining the operation of determining the audio signal processing
setting of the present imaging system 100. The operation of the imaging system 100 will be described with reference to FIG. 7.
[0040]
The audio signal processing setting S501 is started when the imaging system 100 transitions to
an operation mode for capturing a moving image including audio.
[0041]
First, in step S502, the separation / combination state of the camera head 101 and the smartphone 102 is detected. This is detected by the non-contact detection units described above. When the camera head 101 and the smartphone 102 are in the combined state, that is, when shooting is performed in the general camera form of FIG. 4, the process proceeds to S503, the microphone used for voice recording is automatically set to the camera unit microphone 116, and the display unit microphone 117 is disabled. Then the process proceeds to step S511, and the system enters the shooting standby state.
[0042]
When it is detected in step S502 that the camera head 101 and the smartphone 102 are in the separated state, the process advances to step S504, and the photographer decides which of the camera unit microphone 116 and the display unit microphone 117 to use for voice recording. This decision is made by operating the touch panel 314, for example through a selection menu presented by the GUI. Here, if the photographer decides to use the camera unit microphone 116, the process advances to step S505, the imaging system 100 is set so that voice recording is performed with the camera unit microphone 116, and the display unit microphone 117 is disabled. In this case, a moving image file in which the moving image data and the recorded audio data are combined is generated in the camera head 101, transmitted to the smartphone 102 via the wireless transmission / reception units 307 and 309, recorded in the memory 311, and reproduced on the display element 109.
[0043]
Then, if it is determined in S504 that shooting is to be performed using the display microphone
117, the process proceeds to S506, and the display microphone 117 is set to perform voice
recording. In this case, moving image data containing no sound is transmitted from the camera
head 101 to the smartphone 102 via the wireless transmission / reception units 307 and 309. In
the smartphone 102, the moving image data and the sound recorded by the display unit
microphone 117 are combined according to an audio signal processing setting described later to
generate a moving image file. The generated moving image file is recorded in the memory 311 and reproduced on the display element 109.
[0044]
Next, the process proceeds to the step of determining the relative positional relationship between
the camera head 101 and the smartphone 102. First, in S507, audio is sampled simultaneously
for a predetermined time by both the camera microphone 116 and the display microphone 117.
The camera head 101 evaluates the sound pressure level difference and the phase difference of
the sound input to the camera unit microphone R112 and the camera unit microphone L113, and
transmits the evaluation result to the smartphone 102 via the wireless transmission / reception
unit 307. In the smartphone 102, the received evaluation result of the camera head 101 is stored
in the memory 311. Furthermore, the sound pressure level difference and the phase difference of
the sound input to the display microphone R114 and the display microphone L115 are
evaluated, and the evaluation result is stored in the memory 311.
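As a rough illustration of the per-device evaluation performed in S507, the sketch below estimates the left / right sound pressure level difference and the inter-channel delay (a proxy for the phase difference) from one block of stereo samples, using RMS levels and cross-correlation. The helper name, the metrics and the sign conventions are assumptions made for illustration; the patent does not specify the computation.

```python
import numpy as np

def evaluate_stereo_block(left: np.ndarray, right: np.ndarray):
    """Return (level_diff_db, lag_samples) for one block of stereo audio.

    level_diff_db > 0 means the left channel is louder; lag_samples > 0 means the
    right channel is delayed relative to the left (source toward the left).
    These sign conventions are illustrative assumptions, not taken from the patent.
    """
    eps = 1e-12
    level_diff_db = 20.0 * np.log10((np.sqrt(np.mean(left ** 2)) + eps) /
                                    (np.sqrt(np.mean(right ** 2)) + eps))
    # Cross-correlate to estimate the inter-channel delay (phase-difference proxy).
    corr = np.correlate(right, left, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(left) - 1)
    return level_diff_db, lag_samples

# Synthetic check: a source toward the left (left louder, right delayed by 5 samples).
rng = np.random.default_rng(0)
src = rng.standard_normal(960)                  # roughly 20 ms of noise at 48 kHz
left, right = 1.0 * src, 0.7 * np.roll(src, 5)
print(evaluate_stereo_block(left, right))       # expect level_diff_db > 0, lag_samples == 5
```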
[0045]
Next, in S508, the phase difference between the left and right input sounds of the camera unit microphone 116 and that of the display unit microphone 117 are compared and evaluated, and it is decided whether they contradict each other. Since the left / right phase difference indicates the direction of the sound source, when the phase differences of the camera unit microphone 116 and the display unit microphone 117 contradict each other by a predetermined amount or more, it can be estimated that the camera head 101 and the smartphone 102 (and consequently the photographer) face in opposite directions.
[0046]
If the phase differences contradict each other by the predetermined amount or more, the process proceeds to S509, where the left / right sound pressure level difference of the camera unit microphone 116 and that of the display unit microphone 117 are compared, and it is determined whether they also contradict each other. When the left / right sound pressure level differences of the camera unit microphone 116 and the display unit microphone 117 contradict each other by a predetermined amount or more, it can likewise be assumed that the camera head 101 and the smartphone 102 (and consequently the photographer) face in opposite directions. Furthermore, such a scene can be estimated to contain sound sources with a strong stereo effect, so the discomfort is greater when the left and right of the captured image do not match the left and right of the stereo sound. Therefore, the process proceeds to S510, the input signals of the display microphone R114 and the display microphone L115 are interchanged, and the setting is switched so that the stereo audio signal is generated with the display microphone R114 as the left audio and the display microphone L115 as the right audio. Then the process proceeds to step S511, and the system enters the shooting standby state.
[0047]
In S508 and S509, when the phase differences or the sound pressure level differences of the camera unit microphone 116 and the display unit microphone 117 do not contradict each other by the predetermined amount or more, it is judged that the camera head 101 and the smartphone 102 (and thus the photographer) are unlikely to be facing in opposite directions. In that case, the audio signal processing setting is not changed, and voice recording is performed as usual with the display microphone R114 as the right audio and the display microphone L115 as the left audio.
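Putting S508 to S510 together, a minimal sketch of the decision could look as follows, assuming the evaluate_stereo_block helper sketched above and the illustrative thresholds below; the patent only speaks of "a predetermined amount", so the numbers are placeholders.

```python
# Illustrative thresholds; the patent only speaks of "a predetermined amount".
LAG_CONFLICT_SAMPLES = 3      # minimum opposing inter-channel delay, in samples
LEVEL_CONFLICT_DB = 3.0       # minimum opposing left/right level difference, in dB

def should_swap_display_channels(cam_level_db, cam_lag, disp_level_db, disp_lag):
    """S508 / S509 (illustrative): the two microphones 'conflict' when their
    left/right cues point in opposite directions strongly enough, i.e. the camera
    head and the smartphone are estimated to face each other (face-to-face shooting)."""
    phase_conflict = (cam_lag * disp_lag < 0 and
                      abs(cam_lag) >= LAG_CONFLICT_SAMPLES and
                      abs(disp_lag) >= LAG_CONFLICT_SAMPLES)
    level_conflict = (cam_level_db * disp_level_db < 0 and
                      abs(cam_level_db) >= LEVEL_CONFLICT_DB and
                      abs(disp_level_db) >= LEVEL_CONFLICT_DB)
    return phase_conflict and level_conflict

def apply_channel_setting(display_r114, display_l115, swap):
    """S510 (illustrative): returns (left_signal, right_signal). When swap is True,
    display microphone R114 is used as the left audio and L115 as the right audio;
    otherwise L115 stays left and R114 stays right as usual."""
    return (display_r114, display_l115) if swap else (display_l115, display_r114)
```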
[0048]
Note that the operation flow for determining the above-described audio signal processing setting is executed not only when the operation mode transitions but also at predetermined intervals while in the shooting standby state. Alternatively, it may be executed at the timing when video recording is stopped.
[0049]
As described above, the present imaging system 100 determines, by evaluating the audio, the situation in which the camera head 101 and the smartphone 102 (and consequently the photographer) face each other as shown in FIG. 6, and switches the audio signal processing setting so that the left and right of the photographed image and the left and right of the stereo sound coincide. This makes it possible to avoid shooting an unnatural moving image in which the image and the sound do not match.
[0050]
Second Embodiment Subsequently, a second embodiment of the present invention will be
described.
[0051]
FIG. 8 is a block diagram showing a configuration of an imaging system 800 including a camera
head 801 and a smartphone 802 in the second embodiment.
The same components as those described in the first embodiment are denoted by the same
reference numerals, and the description thereof is omitted.
[0052]
The camera head 801 includes a position acquisition unit 803, an azimuth acquisition unit 804,
and an attitude detection unit 805.
[0053]
The position acquisition unit 803 performs positioning processing using GPS.
That is, a signal is received from a GPS satellite, and position information indicating the current
position of the camera head 801 is acquired from the received signal in the coordinates of
latitude and longitude. The position information is periodically acquired and recorded and
overwritten in the memory 306 so that the latest position information is always held.
[0054]
The azimuth acquisition unit 804 detects which direction the camera head 801 is facing and
acquires azimuth information. The direction acquisition unit 804 is configured of, for example, an
electronic compass. The electronic compass is also called a geomagnetic sensor, a direction
sensor, or the like, and is a generic term for devices capable of detecting the geomagnetism of
the earth. The electronic compass can detect geomagnetism in two or three dimensions, and can
detect in which direction the electronic compass device itself is facing with respect to the
geomagnetism. The azimuth information is obtained periodically, and recorded and overwritten
in the memory 306 to always keep the latest azimuth information. The azimuth acquisition unit
804 is attached to acquire the photographing azimuth of the camera head 801.
[0055]
The posture detection unit 805 detects the posture of the camera head 801. For example, it is
configured using a tilt sensor. The attitude information of the camera head 801 is periodically
acquired, and is recorded and overwritten in the memory 306 so that the latest attitude
information is always held.
[0056]
Next, similarly to the camera head 801, the smartphone 802 includes a position acquisition unit 806, an azimuth acquisition unit 807, and a posture detection unit 808. Their functions are the same as those of the camera head 801 described above, so the explanation is omitted; the position acquisition unit 806 acquires the position information of the smartphone 802, the azimuth acquisition unit 807 its azimuth information, and the posture detection unit 808 its posture information. The periodically acquired position information, azimuth information, and posture information are recorded and overwritten in the memory 311 so that the latest information is always held. The azimuth acquisition unit 807 is attached so as to acquire the azimuth toward which the display screen of the smartphone 802 faces.
[0057]
Further, positional information, azimuth information, and attitude information of the camera
head 801 and the smartphone 802 can be transmitted and received through communication via
the wireless transmission and reception units 307 and 309 and shared.
[0058]
FIG. 9 is a flowchart for explaining the operation of determining the audio signal processing
setting of the present imaging system 800.
The operation of the present imaging system 800 will be described with reference to FIG. 9.
[0059]
The audio signal processing setting S901 is started when the imaging system 800 transitions to
an operation mode for capturing a moving image including audio.
[0060]
First, in step S902, the separation / combination state of the camera head 801 and the smartphone 802 is detected. If the camera head 801 and the smartphone 802 are in the combined state, the process advances to step S903, the microphone used for voice recording is automatically set to the camera unit microphone 116, and the display unit microphone 117 is disabled. Then the process advances to step S912 and shifts to the shooting standby state. When it is detected in S902 that the camera head 801 and the smartphone 802 are in the separated state, the process proceeds to S904, and the photographer decides, by operating the touch panel 314 or the like, whether to use the camera unit microphone 116 or the display unit microphone 117 for voice recording. Here, if the photographer decides to use the camera unit microphone 116, the process advances to step S905, the imaging system 800 is set so that voice recording is performed with the camera unit microphone 116, and the display unit microphone 117 is disabled.
[0061]
Then, if it is determined in S904 that shooting is to be performed using the display microphone
117, the process proceeds to S906, and the display microphone 117 is set to perform voice
recording.
[0062]
Next, the process proceeds to the step of determining the relative positional relationship between
the camera head 801 and the smartphone 802.
First, in step S907, the smartphone 802 acquires via wireless communication the latest position information, azimuth information, and posture information of the camera head 801 recorded in the memory 306. Next, in S908, the latest position information, azimuth information, and posture information of the smartphone 802 recorded in the memory 311 are referred to, and each item is compared and evaluated against the corresponding information acquired from the camera head 801.
[0063]
In step S909, the position information and azimuth information of the camera head 801 are compared with the position information and azimuth information of the smartphone 802, and it is determined whether the camera head 801 and the smartphone 802 are in the face-to-face shooting (self-shooting) arrangement. When the shooting direction of the camera head 801 obtained by the azimuth acquisition unit 804 substantially matches the direction in which the display screen of the smartphone 802, obtained by the azimuth acquisition unit 807, is facing, and the shooting direction vector of the camera head 801 passes near the position coordinates of the smartphone 802, it can be estimated that the camera head 801 and the photographer looking at the display screen are in a positional relationship facing each other. When this facing positional relationship is maintained for a predetermined time, it can be estimated that face-to-face shooting is intended, and the process proceeds to S910.
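A minimal geometric sketch of the S909 decision, under a flat-earth approximation and with illustrative tolerances, might look like the following; the helper names and thresholds are assumptions, and a real implementation would also apply the dwell-time condition and the S910 posture check.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Approximate compass bearing from point 1 to point 2 (flat-earth
    approximation, adequate over a few metres)."""
    d_north = lat2 - lat1
    d_east = (lon2 - lon1) * math.cos(math.radians(lat1))
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

def angle_diff_deg(a, b):
    """Smallest absolute difference between two compass angles, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def is_face_to_face(cam_pos, cam_shoot_azimuth, phone_pos, screen_azimuth,
                    azimuth_tol_deg=20.0, aim_tol_deg=15.0):
    """S909 (illustrative): the camera's shooting azimuth roughly matches the
    direction the smartphone screen faces, and the camera's shooting direction
    vector passes near the smartphone's position coordinates."""
    directions_match = angle_diff_deg(cam_shoot_azimuth, screen_azimuth) <= azimuth_tol_deg
    cam_to_phone = bearing_deg(cam_pos[0], cam_pos[1], phone_pos[0], phone_pos[1])
    aimed_at_phone = angle_diff_deg(cam_shoot_azimuth, cam_to_phone) <= aim_tol_deg
    return directions_match and aimed_at_phone
```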
[0064]
In S910, the postures of the camera head 801 and the smartphone 802 are evaluated. If the
camera head 801 and the smartphone 802 are in the normal shooting posture in which the
postures thereof are substantially horizontal, the process advances to step S911. The normal
shooting posture refers to a posture in which the camera unit microphone 116 and the display unit microphone 117 are in an appropriate positional relationship for generating stereo sound in the left-right direction.
[0065]
In S911, the input signals of the display microphone R114 and the display microphone L115 are interchanged, and the setting is switched so that the stereo audio signal is generated with the display microphone R114 as the left audio and the display microphone L115 as the right audio. Then the process advances to step S912 and shifts to the shooting standby state.
[0066]
If it is determined in S909 that the camera head 801 and the smartphone 802 are not in the face-to-face shooting state, or if the face-to-face shooting state cannot be determined, the audio signal processing setting is not changed, and voice recording is performed with the display microphone R114 as the right audio and the display microphone L115 as the left audio.
[0067]
Further, if it is determined in S910 that the posture of the camera head 801 or the smartphone 802 is a special posture different from the normal shooting posture, the audio signal processing setting is not changed, and voice recording is performed with the display microphone R114 as the right audio and the display microphone L115 as the left audio. A special posture is, for example, the case where the camera head 801 is rotated 90 degrees around the photographing optical axis. In this case, since the horizontal direction of the captured image corresponds to the vertical direction for the smartphone 802, the stereo sound recorded by the display unit microphone 117 cannot be matched to the image.
[0068]
Note that the operation flow for determining the above-described audio signal processing setting is executed not only when the operation mode transitions but also at predetermined intervals while in the shooting standby state. Alternatively, it may be executed at the timing when video recording is stopped.
[0069]
As described above, the imaging system 800 determines from the position information and azimuth information whether or not it is in the face-to-face shooting state, and when the face-to-face shooting state is determined, the audio signal processing setting is switched so that the left and right of the photographed image and the left and right of the stereo sound match. This makes it possible to avoid shooting an unnatural moving image in which the image and the sound do not match.
[0070]
Third Embodiment Next, a third embodiment of the present invention will be described.
[0071]
FIG. 10 is a block diagram showing a configuration of an imaging system 1000 including a
smartphone 1002 in the third embodiment and a camera head 101 in the first embodiment.
The same components as the components described in the first and second embodiments will be
assigned the same reference numerals and descriptions thereof will be omitted.
[0072]
The smartphone 1002 in the present embodiment is provided with a face-to-face shooting switch
1003. The input of the face-to-face shooting switch 1003 is transmitted to the control unit 310,
and is used for the switching operation of various settings related to the face-to-face shooting.
[0073]
FIG. 11 is a flowchart for explaining the operation of determining the audio signal processing
setting of the present imaging system 1000. The operation of the imaging system 1000 will be
described with reference to FIG. 11.
[0074]
The audio signal processing setting S1101 is started when the imaging system 1000 transitions
to an operation mode for capturing a moving image including audio.
[0075]
First, in S1102, the separation / combination state of the camera head 101 and the smartphone 1002 is detected. When the camera head 101 and the smartphone 1002 are in the combined state, the process advances to step S1103, the microphone used for voice recording is automatically set to the camera unit microphone 116, and the display unit microphone 117 is disabled. When it is detected in S1102 that the camera head 101 and the smartphone 1002 are in the separated state, the process proceeds to S1104, and the photographer decides, by operating the touch panel 314 or the like, whether to use the camera unit microphone 116 or the display unit microphone 117 for voice recording. If the photographer decides to use the camera unit microphone 116, the process advances to step S1105, the imaging system 1000 is set so that voice recording is performed with the camera unit microphone 116, and the display unit microphone 117 is disabled.
[0076]
If it is determined in S1104 that shooting is to be performed using the display microphone 117,
the process proceeds to S1106, and the display microphone 117 is set to perform voice
recording.
[0077]
Next, in step S1106, the operation state of the face-to-face shooting switch 1003 is referred to. The face-to-face shooting switch 1003 is an ON / OFF switch, and the photographer turns it ON when performing face-to-face shooting (self-shooting). When it is ON, the captured image displayed on the display element 109 is switched to a left-right inverted mirror image, and in S1108 the input signals of the display microphone R114 and the display microphone L115 are interchanged, and the setting is switched so that the stereo audio signal is generated with the display microphone R114 as the left audio and the display microphone L115 as the right audio. In this way, simply by operating the face-to-face shooting switch 1003, the system is switched to an operation mode suitable for face-to-face shooting.
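As a compact illustration of this third-embodiment behaviour, the sketch below derives both the mirrored preview and the channel swap from the single switch state; the type and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FaceToFaceSettings:
    mirror_display: bool      # show a left-right inverted live image on the display element
    swap_display_mics: bool   # treat display mic R114 as left audio and L115 as right audio

def settings_for_switch(face_to_face_switch_on: bool) -> FaceToFaceSettings:
    """One switch drives both behaviours described for the face-to-face shooting
    switch 1003: mirrored preview and swapped stereo channels."""
    return FaceToFaceSettings(mirror_display=face_to_face_switch_on,
                              swap_display_mics=face_to_face_switch_on)

# Example: the photographer flips the switch ON for self-shooting.
print(settings_for_switch(True))
```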
[0078]
If the face-to-face shooting switch 1003 is OFF in S1106, the audio signal processing setting is
not changed, and voice recording is performed with the display microphone R114 as the right
audio and the display microphone L115 as the left audio as usual. Furthermore, the
photographed image displayed on the display element 109 is not switched to the mirror image.
[0079]
As described above, in the present imaging system 1000, the audio signal processing setting is switched so that the left and right of the captured image coincide with the left and right of the stereo sound in accordance with the photographer's operation of the face-to-face shooting switch 1003, and the image displayed on the display element 109 is also switched to a mirror image, providing an operation mode suitable for face-to-face shooting.
[0080]
DESCRIPTION OF SYMBOLS 100 imaging system, 101 camera head, 102 smartphone, 103 power switch, 104 release switch, 105 non-contact detection unit, 107, 108 engagement claws, 109 display element, 110 non-contact detection unit, 111 power switch, 112 camera unit microphone R, 113 camera unit microphone L, 114 display unit microphone R, 115 display unit microphone L, 116 camera unit microphone, 117 display unit microphone, 301 lens optical system, 302 imaging device, 303 camera control unit, 304 image processing unit, 305 control unit, 306 memory, 307 wireless transmission / reception unit, 309 wireless transmission / reception unit, 310 control unit, 311 memory, 312 image processing unit, 313 character generator, 314 touch panel, 315 audio processing unit, 316 audio processing unit, 501 support, 502 photographer, 800 imaging system, 801 camera head, 802 smartphone, 803 position acquisition unit, 804 azimuth acquisition unit, 805 posture detection unit, 806 position acquisition unit, 807 azimuth acquisition unit, 808 posture detection unit, 1000 imaging system, 1002 smartphone, 1003 face-to-face shooting switch, 1201 display unit, 1202 camera unit, 1203 display screen, 1204 photographer, 1205 stereo microphone unit, 1206 stereo microphone unit