Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2011010073
The present invention provides an imaging device that reduces operation sound when collecting sound during moving image shooting. In the imaging device, a microphone 41A detects surrounding sound and outputs first sound information. A microphone 41B detects the sound and outputs second sound information. An audio signal processing circuit 45 uses the first sound information and the second sound information to remove a common noise signal included in both, and generates one sound signal corresponding to the sound. [Selected figure] Figure 3
Imaging device
[0001]
BACKGROUND OF THE INVENTION Field of the Invention The present invention relates to an
imaging device that collects sound at the time of moving image shooting.
[0002]
When sound is collected during moving image shooting, the operation sound generated by operating the imaging device may be picked up as noise.
There are imaging devices that perform noise reduction processing to reduce such noise (see, for example, Patent Document 1).
[0003]
JP 2005-244613 A
[0004]
Kanada, "Directivity characteristics of adaptive noise suppression microphone array (AMNOR),"
Journal of the Acoustical Society of Japan, vol. 44, No. 1, pp. 23-30, 1988.
[0005]
However, in Patent Document 1, the location where the noise is generated must be specified in advance, and a microphone for collecting the noise must be placed near the noise source.
With such a noise detection method, when the noise sources are dispersed, a microphone must be arranged at each noise source.
For example, when a drive unit that drives the autofocus function or the like provided in the lens barrel of an interchangeable-lens camera becomes a noise source, a microphone must be placed inside the lens barrel, which is a problem.
[0006]
The present invention has been made to solve the above-mentioned problems, and an object thereof is to provide an imaging apparatus that reduces operation sound when collecting sound during moving image shooting.
[0007]
In order to solve the above problems, the present invention provides an imaging apparatus including: a first sound collecting unit that detects a surrounding sound and outputs first sound information; a second sound collecting unit, different from the first sound collecting unit, that detects the sound and outputs second sound information; and a signal processing unit that uses the first sound information and the second sound information to remove a common noise signal included in the first sound information and the second sound information, and generates one sound signal corresponding to the sound.
Further, according to the present invention, in the above-mentioned invention, the signal processing unit may remove the noise signal based on the difference, on the time axis, between the noise signal included in the first sound information and the noise signal included in the second sound information, and the difference, on the time axis, between the sound included in the first sound information and the sound included in the second sound information.
[0008]
Further, according to the present invention, in the above-mentioned invention, the signal processing unit may remove the noise signal by relatively time-shifting the first sound information and the second sound information so that the noise signal included in the first sound information and the noise signal included in the second sound information are synchronized.
Further, the present invention, in the above-mentioned invention, may further include a noise generating unit that generates the noise corresponding to the noise signal, and the signal processing unit relatively time-shifts the first sound information and the second sound information according to position information of the noise generating unit.
[0009]
Further, according to the present invention, in the above-mentioned invention, the position information of the noise generating unit is information based on the difference between the propagation time for the noise to reach the first sound collecting unit from the noise generating unit and the propagation time for the noise to reach the second sound collecting unit from the noise generating unit. Further, in the present invention according to the above-mentioned invention, the noise signal is based on an operation sound caused by operation of the imaging device, and the signal processing unit removes the noise signal based on at least one of the start and the stop of the operation.
[0010]
Further, according to the present invention, in the above-described invention, the signal processing unit removes the noise signal based on at least one of a waveform, a frequency component, and sound pressure information of the noise signal set in advance.
Further, according to the present invention, in the above-described invention, the first sound collecting unit and the second sound collecting unit are disposed in a plane substantially perpendicular to the lens optical axis of the imaging device, and the distance from the lens optical axis to the first sound collecting unit is different from the distance from the lens optical axis to the second sound collecting unit. Further, according to the present invention, in the above-mentioned invention, the lens optical axis, the first sound collecting unit, and the second sound collecting unit are arranged in this order on one straight line in a plane substantially perpendicular to the lens optical axis.
[0011]
According to the present invention, in the imaging device, the first sound collecting unit detects surrounding sound and outputs first sound information. A second sound collecting unit different from the first sound collecting unit detects the sound and outputs second sound information. The signal processing unit uses the first sound information and the second sound information to remove the common noise signal included in both, and generates one sound signal corresponding to the sound. Thereby, the imaging apparatus detects the noise produced by a vibration source with the two sound collecting units, removes the common noise signal included in the two pieces of sound information, and can generate a sound signal with the noise reduced.
[0012]
FIG. 1 is an external view of an imaging device according to an embodiment of the present invention. FIG. 2 is a block diagram showing the configuration of the imaging device according to the embodiment. FIG. 3 is a schematic block diagram showing the configuration of the audio processing unit according to the embodiment. FIG. 4 is a diagram showing the noise reduction processing according to the embodiment. FIG. 5 is an external view of an imaging device according to an embodiment. FIG. 6 is a schematic block diagram showing a different usage form of the imaging device according to an embodiment.
[0013]
First Embodiment Hereinafter, an embodiment of the present invention will be described with reference to the drawings. FIG. 1 is an external view of an imaging device 1 according to the present embodiment. FIGS. 1(a) and 1(b) show the top surface and the front of the imaging device 1, respectively. The imaging device 1 includes a camera body 100 and a lens barrel 200 that is attachable to and detachable from the camera body 100. The lens barrel 200 is attached via a lens mount provided on the front of the camera body 100. On the front of the camera body 100, a microphone 41A and a microphone 41B are provided. The microphone 41A and the microphone 41B are provided in a plane substantially perpendicular to the lens optical axis of the lens barrel 200, that is, on the front surface of the camera body 100. The microphone 41A and the microphone 41B are arranged such that the distance d1 from the lens optical axis of the lens barrel 200 to the microphone 41A and the distance d2 from the lens optical axis to the microphone 41B are different. Further, in a plane substantially perpendicular to the lens optical axis, the lens optical axis of the lens barrel 200, the microphone 41A, and the microphone 41B are arranged in this order on a straight line x.
[0014]
FIG. 2 is a block diagram showing the configuration of the imaging device according to the present embodiment. The imaging device 1 shown in this figure includes a camera body 100 and a lens barrel 200 attached to the camera body 100. In the imaging device 1, the lens barrel 200 includes an optical system 210, an optical system drive unit 220, and an optical system control unit 230. The optical system 210 in the lens barrel 200 includes optical elements that adjust the light delivered to the imaging element 8, and structures that protect the optical elements. For example, the optical system 210 has a zoom function that changes the shooting angle of view, an aperture function that adjusts the amount of light passing through, a function that corrects image shake due to camera shake, and a function that adjusts focus, and is provided with sensors, encoders, and the like for these functions. The optical system drive unit 220 drives the optical system 210 with an actuator such as a motor according to a control signal from the control unit 20, and adjusts the light delivered to the imaging element 8. The optical system control unit 230 collects information from the various sensors and encoders provided in the optical system 210 and notifies the control unit 20 of the information. The information notified by the optical system control unit 230 includes lens type information indicating the type of the lens barrel 200, lens focal length information, the aperture value set by the aperture function, and subject distance information based on a distance ring provided on the lens barrel 200.
[0015]
In the imaging device 1, the camera body 100 includes an imaging processing unit 10, a non-volatile memory 11, a buffer memory 12, an operation detection circuit 13, a monitor control circuit 14, a monitor 15, a memory control circuit 16, a memory 17, a control unit 20, and an audio processing unit 40. The imaging processing unit 10 in the camera body 100 includes an imaging element control circuit 7, an imaging element 8, and a video circuit 9. The imaging element 8 is a light receiving element such as a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS) sensor; it converts the image formed by the optical system 210 into an electric signal and outputs an analog image signal. The video circuit 9 and the imaging element control circuit 7 are connected to the imaging element 8. The video circuit 9 amplifies the image signal output from the imaging element 8 and converts it into a digital signal. The imaging element control circuit 7 drives the imaging element 8 to control operations such as converting the image formed on the imaging element 8 into an image signal and outputting the converted image signal.
[0016]
The non-volatile memory 11 stores a program for operating the control unit 20, image data generated by imaging, collected sound information, and information such as various settings and imaging conditions input by the user. The stored information also includes, recorded in advance as information indicating the features of the operation sounds of the imaging device 1, the waveform, frequency components, sound pressure information, and the like that change over the duration of each operation sound. The buffer memory 12 is a storage area for temporary information used in the control processing of the control unit 20; for example, the image signal output from the imaging element 8 and the image data generated from that image signal are temporarily stored there by the control unit 20.
[0017]
The operation detection circuit 13 detects the user's operation input to the input unit and inputs the detected operation information to the control unit 20 as a control signal. The input unit includes, for example, a power switch 13a, a release switch 13b, ..., 13z. The input unit may be provided with a leaf spring or the like as necessary in order to give the user a tactile sense of operating the input unit. When the input unit is pushed down to a predetermined position, the leaf spring flexes back to give the user a "click feeling". The reaction of the leaf spring produces a minute vibration that propagates through the camera body 100. The operation detection circuit 13 also includes AF operation means 13AF for controlling an autofocus (AF) operation, a selector button 13SEL for setting a shooting mode, and the like. Even when an operation is performed during shooting, the control unit 20 receives the input operation information, outputs a control instruction based on the operation information, and stores the operation instruction information in the non-volatile memory 11. The monitor control circuit 14 performs, for example, display control such as turning the monitor 15 on and off or adjusting its brightness, and processing for displaying the image data output from the control unit 20 on the monitor 15. The monitor 15 displays image data and is configured of, for example, a liquid crystal display (LCD).
[0018]
The memory control circuit 16 controls the input and output of information between the control unit 20 and the memory 17; for example, it performs processing for storing information such as image data and sound data generated by the control unit 20 in the memory 17, and processing for reading information such as image data and sound data stored in the memory 17 and outputting it to the control unit 20. The memory 17 is, for example, a storage medium that can be inserted into and removed from the camera body 100, such as a memory card, and stores the image data, sound data, and the like generated by the control unit 20.
[0019]
The audio processing unit 40 collects the sound information to be recorded with the microphones 41A and 41B during moving image shooting, and performs the necessary signal processing to generate the audio to be recorded. FIG. 3 is a schematic block diagram showing an embodiment of the audio processing unit 40. This figure shows the main configuration used when performing an autofocus operation. The same components as those shown in FIGS. 1 and 2 are designated by the same reference numerals. The audio processing unit 40 includes the microphones 41A and 41B, amplification circuits 42A and 42B, analog/digital (A/D) conversion circuits 43A and 43B, a delay circuit 44, and an audio signal processing circuit 45. The microphones 41A and 41B in the audio processing unit 40 convert the sound collected during moving image recording and output sound signals. The amplification circuits 42A and 42B amplify the sound signals converted by the microphones 41A and 41B at predetermined amplification factors. Each amplification factor is determined by a setting from the control unit 20. The A/D conversion circuits 43A and 43B convert the amplified sound signals into digital signals.
[0020]
A first digital sound signal is output from the first system formed by the microphone 41A, the amplification circuit 42A, and the A/D conversion circuit 43A. A second digital sound signal is output from the second system formed by the microphone 41B, the amplification circuit 42B, and the A/D conversion circuit 43B. The first system (microphone 41A, amplification circuit 42A, A/D conversion circuit 43A) and the second system (microphone 41B, amplification circuit 42B, A/D conversion circuit 43B) are configured as a pair, and it is desirable that they have the same characteristics. The delay circuit 44 delays the second digital sound signal collected by the second system by a set delay time and outputs a delayed digital signal. The audio signal processing circuit 45 receives the first digital sound signal and the delayed digital signal, and performs difference processing while holding the time difference set between the two signals. The audio signal processing circuit 45 then performs signal reproduction processing based on the result of the difference processing and outputs a reproduced signal as the recording information. The recording information is input to the control unit 20 and written to the memory 17 via the memory control circuit 16. The figure shows, in simplified form, the reproduced signal generated by the signal reproduction processing of the audio signal processing circuit 45 being recorded in the memory 17.
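For illustration only, the delay and difference stages described above can be sketched as follows in Python/NumPy; the function names, the integer-sample delay, and the zero-padding behavior are assumptions of this sketch, not details stated in the original description.

```python
import numpy as np

def delay_samples(signal: np.ndarray, delay: int) -> np.ndarray:
    """Delay a digital signal by an integer number of samples,
    padding the start with zeros (a sketch of the role of delay circuit 44)."""
    if delay <= 0:
        return signal.copy()
    out = np.zeros_like(signal)
    out[delay:] = signal[:-delay]
    return out

def difference_processing(first: np.ndarray, second: np.ndarray, delay: int) -> np.ndarray:
    """Subtract the delayed second-system signal from the first-system signal
    while holding the set time difference (difference stage of the audio
    signal processing circuit 45; reproduction processing not included)."""
    return first - delay_samples(second, delay)
```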
[0021]
Returning to FIG. 2, the control unit 20 will be described. The control unit 20 includes a CPU (Central Processing Unit) that controls the operation of each unit of the camera body 100 based on a program stored in the non-volatile memory 11. For example, according to the user's operation information input to the operation detection circuit 13, the control unit 20 turns on the power of the camera body 100, controls driving of the optical system 210 via the optical system drive unit 220, controls driving of the imaging element 8 via the imaging element control circuit 7, controls display on the monitor 15 via the monitor control circuit 14, controls the shooting processing of the subject detected by the imaging element 8, and controls the signal processing of the sound information collected by the audio processing unit 40.
[0022]
The control unit 20 includes an image processing unit 21, a display control unit 22, an imaging control unit 23, an operation detection processing unit 24, a noise removal control unit 25, and a recording control unit 26. The image processing unit 21 in the control unit 20 reads the image signal captured in the imaging area of the imaging element 8 and output to the video circuit 9, and performs image processing to generate image data based on the read image signal. The image processing unit 21 stores the image data generated by the image processing in the buffer memory 12. The display control unit 22 reads the image data generated by the image processing unit 21 and stored in the buffer memory 12 at fixed time intervals, and outputs the read image data to the monitor 15 in real time. The fixed time interval is, for example, 1/60 second; in this case, the display control unit 22 displays 60 frames of image data per second as a through image on the monitor 15, and the image data is recorded as a moving image in the memory 17.
[0023]
When a control signal for controlling the shooting process, such as a shooting process start command or a shooting process end command based on the user's operation input, is received from the operation detection circuit 13, the imaging control unit 23 outputs the necessary control signals. The imaging control unit 23 drives the optical system 210 when a shooting process start instruction is input, and performs the shooting process for generating image data. The imaging control unit 23 controls focusing, exposure, zooming, and the like of the optical system 210 via the optical system control unit 230 according to shooting conditions input in advance by the user.
[0024]
The operation detection processing unit 24 determines the user's operation information detected by the operation detection circuit 13, records the determined information in memory, and outputs the control instructions for the various processes required. For example, when an autofocus (AF) operation is detected by the AF operation means 13AF, the AF control motor of the optical system drive unit 220 in the lens barrel 200 is driven and controlled to adjust the optical system 210.
[0025]
The noise removal control unit 25 determines the noise and the location where the noise is generated according to the user's operation information detected by the operation detection processing unit 24, and controls the audio processing unit 40 so as to reduce the noise. The recording control unit 26 associates the sound information that has undergone the noise reduction processing in the audio processing unit 40 under the control of the noise removal control unit 25 with the moving image information, and writes them to the memory in association with each other.
[0026]
The noise reduction processing of the present embodiment will now be described. The noise reduced by the processing of the noise removal control unit 25 includes the operation noise of an actuator (such as a motor) that generates vibration when the optical system drive unit 220 is driven to control the optical system 210, and the operation sound generated when a leaf spring incorporated in the input unit of the operation detection circuit 13 flexes in response to an input operation. Both of these noises are characterized by being generated in response to the user's operation. The location and time of the noise can therefore be derived based on the signal detected at the input of the operation detection circuit 13.
[0027]
Sound from a place far from the imaging device 1, such as the subject, can be regarded as arriving at the microphones 41A and 41B at the same time, because the difference in distance from the subject to the microphones 41A and 41B is small relative to the distance between the microphones. On the other hand, the optical system drive unit 220, which generates vibration in response to an operation detected by the operation detection circuit 13, is located at a distance comparable to the distance (d1-d2) between the microphones 41A and 41B, as shown in FIG. 1. Consequently, the propagation times of the sound emitted by the optical system drive unit 220 until it is collected by the microphones 41A and 41B differ. Given the arrangement of the optical system drive unit 220 and the microphones 41A and 41B, the propagation time of the drive sound to each of the microphones 41A and 41B can be determined in advance. The noise removal control unit 25 uses this difference in propagation time to perform control to reduce the drive sound identified as noise.
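As a hedged illustration of how the propagation-time difference could be predetermined from the arrangement, the sketch below converts two source-to-microphone distances into a delay in samples; the speed of sound, the sampling rate, and the example distances are assumptions of this sketch and do not appear in the original description.

```python
SPEED_OF_SOUND_M_S = 343.0   # assumed speed of sound in air at room temperature
SAMPLE_RATE_HZ = 48_000      # assumed audio sampling rate

def propagation_delay_samples(dist_to_mic_a_m: float,
                              dist_to_mic_b_m: float) -> int:
    """Delay td (in samples) between the drive sound arriving at the nearer
    microphone 41B and the farther microphone 41A."""
    td_seconds = (dist_to_mic_a_m - dist_to_mic_b_m) / SPEED_OF_SOUND_M_S
    return round(td_seconds * SAMPLE_RATE_HZ)

# Example with hypothetical distances: drive unit 0.09 m from microphone 41A
# and 0.05 m from microphone 41B gives a delay of a few samples at 48 kHz.
print(propagation_delay_samples(0.09, 0.05))
```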
[0028]
FIG. 4 is a diagram showing noise reduction processing according to the present embodiment.
The waveforms shown in this figure schematically represent the sound pressure level of the sound converted by the microphones 41A and 41B; the vertical axis indicates the sound pressure level and the horizontal axis indicates the passage of time. FIGS. 4(a) and 4(b) show waveforms SA and SB of the sounds collected by the microphones 41A and 41B. The continuous low-frequency basic waveform, shown with the same phase in both, schematically represents the sound from a distant subject. The single-cycle waveforms of higher frequency than the basic waveform, superimposed on the basic waveforms of SA and SB at time t2 and time t1 in FIGS. 4(a) and 4(b), represent the common noise detected together with the basic waveform; these noises are denoted Na and Nb, respectively. In this figure, the noises Na and Nb are shown as single cycles for ease of explanation, but they may be continuous waveforms. Comparing the times at which the noise is detected, the noise Na detected by the microphone 41A, which is located farther from the optical system drive unit 220, is detected later by a time td than the noise Nb detected by the nearby microphone 41B. This delay time is determined by the arrangement of the sound source and the microphones, and can be stored in advance in the storage area, associated with information identifying the sound source, as a control variable that can be specified in advance.
[0029]
FIG. 4(c) shows a waveform SB' obtained by delaying the waveform SB of FIG. 4(b) by the time td in the delay circuit 44. By this delay processing, a noise Nb' delayed by the time td is obtained. The phases of the noise Na and the noise Nb can thus be aligned, that is, synchronized, with the noise Nb' adjusted to the same timing as the noise Na. The time td is a value based on an instruction from the noise removal control unit 25.
[0030]
FIG. 4(d) shows the waveform SC of the difference signal generated in the audio signal processing circuit 45 by subtracting the amplitude level of the waveform SB' of FIG. 4(c) from the amplitude level of the waveform SA of FIG. 4(a). In the difference signal derived by this subtraction, the noise Na and the noise Nb cancel each other out, so the signal no longer contains the noise component. This difference signal has a distorted waveform that differs from the basic waveform of the original signal because of the subtraction processing. FIG. 4(e) shows a waveform SC' obtained by performing reproduction processing in the audio signal processing circuit 45 on the waveform SC of the subtraction result shown in FIG. 4(d). The reproduction processing can be performed by applying the inverse characteristic that cancels the processing up to the subtraction described above.
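The subtraction and reproduction steps of FIG. 4 can be sketched numerically as below. After SA - SB' the subject sound appears comb-filtered as s(t) - s(t - td); the "inverse characteristic" is modeled here as a slightly leaky inverse comb filter, which is one conventional way to keep the inverse stable and is an assumption of this sketch rather than the patent's stated method. The synthetic signals are illustrative only.

```python
import numpy as np

def cancel_and_reproduce(sa: np.ndarray, sb: np.ndarray,
                         td: int, leak: float = 0.995) -> np.ndarray:
    """Delay SB by td samples, subtract (noises Na and Nb cancel), then
    invert the resulting comb filter 1 - z**(-td) to approximate the
    original subject sound (waveform SC')."""
    sb_delayed = np.zeros_like(sb)
    sb_delayed[td:] = sb[:-td]
    sc = sa - sb_delayed                       # waveform SC: noise cancelled, signal comb-filtered
    sc_rep = sc.copy()                         # waveform SC': reproduction processing
    for n in range(td, len(sc_rep)):
        sc_rep[n] += leak * sc_rep[n - td]     # leaky inverse of (1 - z**-td)
    return sc_rep

# Synthetic check: a low-frequency "subject" tone plus a short noise burst
# that arrives td samples later at the farther microphone 41A.
fs, td = 48_000, 6
t = np.arange(fs) / fs
subject = np.sin(2 * np.pi * 220 * t)
noise = np.zeros(fs)
noise[10_000:10_050] = 0.5
sa = subject + np.roll(noise, td)              # microphone 41A (noise arrives later)
sb = subject + noise                           # microphone 41B (nearer the source)
recovered = cancel_and_reproduce(sa, sb, td)
```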
[0031]
As described above, even if noise is mixed in, the original signal with the noise removed can be reproduced as long as the noise contained in the signals obtained by the two microphones can be canceled out. The noise is superimposed with a different delay time depending on where it is generated. Even if the delay time differs, the place where the noise is generated can be identified and the delay time set according to that place. For example, although the example above concerned noise generated when the AF function is driven, the method is also applicable to reducing noise generated by actuators that drive other functions in the same lens barrel 200. The delay time may differ depending on the position of the actuator being driven. Even in such a case, the position of the actuator being driven can be identified from the information detected by the operation detection circuit 13 and the structure of the lens barrel 200.
[0032]
A specific example of the noise reduction procedure is as follows. While the AF mechanism is not operating, the processing of the second system, including the delay circuit 44, is not performed, and the sound from the microphone 41A is recorded. During a period in which the AF mechanism is operating, noise reduction processing is performed by the delay circuit 44 using a delay time corresponding to the noise generation position. The control unit 20 derives a delay time determined in advance according to the operation input detected by the operation detection circuit 13. The delay times are stored in the non-volatile memory 11, which can be referenced from the control unit 20, as a table keyed by the operation input information. The control unit 20 sets the derived delay time in the delay circuit, activates the second system, and causes the audio signal processing circuit 45 to perform arithmetic processing on the audio signals of the two systems. The control unit 20 records the result derived by the arithmetic processing of the audio signal processing circuit 45 in the memory 17 as an audio signal. This audio signal is shifted in time by the delay introduced by the delay processing of the second system. By applying a correction that absorbs the delay time before recording, the control unit 20 can record continuous audio without interruptions caused by the delay. The audio signal must also be recorded in synchronization with the video signal; the control unit 20 synchronizes the video signal and the audio signal, associates them with each other, and records them.
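A hedged sketch of the table lookup and switching described above is given below; the table contents, key names, and the block-processing interface are illustrative assumptions, not values from the original description.

```python
from typing import Optional
import numpy as np

# Hypothetical table keyed by operation input, standing in for the table held
# in the non-volatile memory 11 (values in samples at an assumed sampling rate).
DELAY_TABLE_SAMPLES = {
    "AF_DRIVE": 6,        # autofocus actuator in the lens barrel
    "ZOOM_DRIVE": 9,      # zoom actuator (hypothetical)
    "SELECTOR_CLICK": 2,  # leaf-spring click of a selector button
}

def process_block(first: np.ndarray, second: np.ndarray,
                  active_operation: Optional[str]) -> np.ndarray:
    """While no noisy operation is active, record the first-system sound as-is;
    while one is active, run the two-system processing with the delay time
    looked up for that operation (reproduction step omitted for brevity)."""
    if active_operation is None:
        return first
    td = DELAY_TABLE_SAMPLES[active_operation]
    delayed = np.zeros_like(second)
    delayed[td:] = second[:-td]
    return first - delayed
```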
[0033]
Further, the control unit 20 can obtain lens type information indicating the lens type from the lens barrel 200 and thereby identify the type of the lens barrel 200 attached to the camera body 100. Information indicating the features of the operation sounds that can be generated is registered per lens type, and the generated operation sound can be identified by referring to the necessary information using the lens type information as a key. The information indicating the features of an operation sound includes the position information of each actuator, the waveform of the change in the operation sound generated by vibration when the actuator is driven, its frequency components, the sound pressure level of the operation sound, and so on. The position information of each actuator may be, in addition to information indicating the physical arrangement, information indicating the arrival time of the operation sound from each actuator to each microphone or the difference between the arrival times. The arrival time and arrival time difference information may be values determined in advance, or values registered based on measurements obtained by driving the actuator and measuring the sound collected by the microphones.
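The per-lens registration of operation-sound features could look like the sketch below; the lens identifiers, field names, and numerical values are hypothetical and only illustrate the kind of keyed lookup described above.

```python
from dataclasses import dataclass

@dataclass
class ActuatorSoundFeature:
    """Feature information for one actuator's operation sound
    (all values here are illustrative, not measured)."""
    arrival_time_diff_samples: int   # difference in arrival time at the two microphones
    dominant_freq_hz: float          # representative frequency component
    sound_pressure_level_db: float   # typical sound pressure level

# Hypothetical registry keyed by the lens type information reported by the lens barrel 200.
LENS_FEATURE_REGISTRY = {
    "LENS_TYPE_A": {"AF_DRIVE": ActuatorSoundFeature(6, 3500.0, 42.0)},
    "LENS_TYPE_B": {"AF_DRIVE": ActuatorSoundFeature(4, 2800.0, 38.0),
                    "ZOOM_DRIVE": ActuatorSoundFeature(8, 1900.0, 45.0)},
}

def lookup_feature(lens_type: str, operation: str) -> ActuatorSoundFeature:
    """Identify the operation sound using the lens type information as a key."""
    return LENS_FEATURE_REGISTRY[lens_type][operation]
```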
[0034]
Since the control unit 20 identifies the input operation and outputs the control signal to the actuator, it can detect the timing at which the actuator operates and switch the noise reduction processing on before the operation sound from the actuator reaches each microphone. Sound collection during noise reduction processing uses the two microphones, while sound collection outside noise reduction processing can also use a single microphone.
[0035]
In the embodiment above, the noise included in the collected sound is removed adaptively using a plurality of microphones, by adjusting the time difference between the signals converted by the two microphones and then subtracting them. In addition, there is a noise-processing calculation method called the adaptive microphone array for noise reduction (AMNOR). The noise reduction processing by the audio signal processing circuit 45 can also be performed using the AMNOR method.
[0036]
Second Embodiment Hereinafter, another embodiment of the present invention will be described with reference to the drawings. FIG. 5 is an external view of an imaging device 1a according to the present embodiment. FIGS. 5(a) and 5(b) show the top surface and the front of the imaging device 1a, respectively. The same components as those in FIG. 1 are designated by the same reference numerals. The imaging device 1a includes a camera body 100a and a lens barrel 200 that is attachable to and detachable from the camera body 100a. The lens barrel 200 is attached via a lens mount provided on the front of the camera body 100a. On the front and back of the camera body 100a, a microphone 41Aa and a microphone 41Ba are provided. The microphone 41Aa and the microphone 41Ba are provided on a straight line substantially parallel to the lens optical axis of the lens barrel 200. That is, the microphone 41Aa and the microphone 41Ba are disposed at the same distance d1 from the optical axis of the lens barrel 200.
[0037]
In the imaging device 1a of the second embodiment, the arrangement of the microphones differs from that of the imaging device 1 of FIG. 1, but the configuration for noise processing can refer to the configurations shown in FIGS. 2 and 3: the camera body 100a, the audio processing unit 40a, and the microphones 41Aa and 41Ba can be read as the camera body 100, the audio processing unit 40, and the microphones 41A and 41B. When the sound pressure levels at which the operation sound reaches the two microphones differ, the amplification factors of the amplification circuits 42A and 42B are set according to the detected sound pressure levels. By setting the amplification factors to different values, both the delay time between the two signals and the amplitude of the noise can be matched. As before, the delay time of the delay circuit 44 is set to an appropriate value so as to align the times of the contained noise. In the processing for reproducing the signal from the noise-removed difference signal, there is a time difference between the times at which the desired sound is collected by the two microphones, and this time difference is set as a variable of the reproduction processing.
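One way to realize the amplitude matching described above is sketched below: the delayed second-system signal is scaled so that the noise amplitudes agree before the subtraction. Estimating the gain as a ratio of RMS levels over a known noise interval is an assumption of this sketch, standing in for the different amplification factors of the circuits 42A and 42B.

```python
import numpy as np

def match_gain_and_subtract(first: np.ndarray, second: np.ndarray,
                            td: int, noise_start: int, noise_len: int) -> np.ndarray:
    """Scale the delayed second-system signal so the noise amplitudes match,
    then take the difference (reproduction step omitted)."""
    delayed = np.zeros_like(second)
    delayed[td:] = second[:-td]
    seg_a = first[noise_start:noise_start + noise_len]
    seg_b = delayed[noise_start:noise_start + noise_len]
    # RMS ratio over the interval known to contain the operation sound.
    gain = np.sqrt(np.mean(seg_a**2) / (np.mean(seg_b**2) + 1e-12))
    return first - gain * delayed
```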
[0038]
Third Embodiment Hereinafter, another embodiment of the present invention will be described with reference to the drawings. FIG. 6 is a schematic block diagram showing a different usage form of the imaging device 1 according to the present embodiment. This figure shows the main configuration used when performing a selection operation such as mode setting. The same components as those shown in FIGS. 1, 2 and 3 are designated by the same reference numerals.
[0039]
For example, when a selection input operation is detected through operation of the selector button 13SEL, the operation detection processing unit 24 (FIG. 1) records the information of the detected operation in memory. The operation detection processing unit 24 identifies which selector button 13SEL was operated. Since the operated selector button 13SEL internally includes a leaf spring, the leaf spring flexes in response to the operation. The resulting vibration is transmitted to the entire camera body 100a and is detected by the microphones 41Aa and 41Ba.
[0040]
In this embodiment, since no actuator is driven in response to the operation input, no drive sound is generated; instead, the vibration caused by operating the input unit becomes the noise source. The generated noise is transmitted through the camera body 100a and collected by the two microphones. The time difference until the noise is collected by the respective microphones is set as the delay time of the delay circuit 44a, and switching to noise reduction processing is performed upon detection of the operation input, as in the first embodiment.
[0041]
The noise reduction effect can be enhanced by adding the following processing to the first to third embodiments. The audio signal processing circuit 45 performs reduction processing on the sound information collected by each microphone based on at least one piece of feature information among the waveform, the frequency components, and the sound pressure information that change over the duration of the drive sound or the operation sound. The waveform, frequency components, and sound pressure information that change over the duration of the drive sound or the operation sound are feature information that can be set in advance; they are stored in the non-volatile memory 11 or the like and referenced by the control unit 20. By comparing the noise collected by each microphone with this feature information, the time at which the noise is included can be identified. The delay time can be derived based on the identified time, or the delay time information stored in advance in the table can be used.
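A hedged sketch of how a stored waveform feature could be compared against the collected sound to locate the time at which the noise is included, and to derive the delay time from the two microphones' signals: using normalized cross-correlation for this comparison is an assumption of the sketch, not a method stated in the description.

```python
import numpy as np

def locate_noise(collected: np.ndarray, template: np.ndarray) -> int:
    """Return the sample index at which the stored noise waveform
    (feature information) best matches the collected sound."""
    corr = np.correlate(collected, template, mode="valid")
    # Normalize by local energy so loud subject sound does not dominate.
    window = np.ones(len(template))
    energy = np.sqrt(np.convolve(collected**2, window, mode="valid")) + 1e-12
    return int(np.argmax(corr / energy))

def derive_delay(sig_a: np.ndarray, sig_b: np.ndarray, template: np.ndarray) -> int:
    """Derive the delay time as the difference between the times at which the
    same noise feature is found in the two microphones' signals."""
    return locate_noise(sig_a, template) - locate_noise(sig_b, template)
```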
[0042]
The present invention is not limited to the embodiments described above, and can be modified without departing from the spirit of the invention. Although the audio signal processing function of the imaging apparatus of the present invention has been described as adjusting the delay time by delaying one signal, it is sufficient if the phases of the contained noises can be matched, so this can be replaced by shifting the two signals relative to each other in time. Also, although the optical system drive unit 220 has been described as driving the AF mechanism, the driven object may be a camera shake correction mechanism that controls the optical system 210 to correct camera shake, or a zoom mechanism. Furthermore, an actuator (motor) for driving the AF mechanism may be provided in the camera body 100 and mechanically coupled to drive the mechanism. In each of these forms, it is assumed that the corresponding delay time is stored in advance in the non-volatile memory 11 or the like and is then set. In addition, although the feature information such as the waveform, frequency components, and sound pressure information that change over the duration of the drive sound or the operation sound has been described as stored in the non-volatile memory 11 or the like, it may instead be stored in a storage unit in the optical system control unit 230 in the lens barrel 200.
[0043]
In the embodiment of the present invention, in the imaging device 1, the microphone 41A detects the surrounding sound and outputs the first sound information. The microphone 41B, which is different from the microphone 41A, detects the sound and outputs the second sound information. The audio signal processing circuit 45 uses the first sound information and the second sound information to remove the common noise signal included in both, and generates one sound signal corresponding to the sound.
[0044]
Further, in the above embodiment, the audio signal processing circuit 45 removes the noise signal based on the difference, on the time axis, between the noise signal included in the first sound information and the noise signal included in the second sound information, and the difference, on the time axis, between the sound included in the first sound information and the sound included in the second sound information. In the above embodiment, the audio signal processing circuit 45 removes the noise signal by relatively time-shifting the first sound information and the second sound information so that the noise signal included in the first sound information and the noise signal included in the second sound information are synchronized.
[0045]
Further, in the above embodiment, the optical system drive unit 220 generates the noise corresponding to the noise signal. The audio signal processing circuit 45 relatively shifts the first sound information and the second sound information according to the position information of the optical system drive unit 220. Further, in the above embodiment, the position information of the optical system drive unit 220 is information based on the difference between the propagation time for the noise from the optical system drive unit 220 to reach the microphone 41A and the propagation time for the noise from the optical system drive unit 220 to reach the microphone 41B.
[0046]
Moreover, in the above embodiment, the noise signal is based on the operation sound caused by operation of the imaging device 1. The audio signal processing circuit 45 removes the noise signal based on at least one of the start and the stop of the operation. Further, in the above embodiment, the signal processing unit removes the noise signal based on at least one of the waveform, the frequency components, and the sound pressure information of the noise signal set in advance.
[0047]
In the above embodiment, the microphone 41A and the microphone 41B are disposed in a plane substantially perpendicular to the lens optical axis of the imaging device 1, and the distance from the lens optical axis to the microphone 41A is different from the distance from the lens optical axis to the microphone 41B. In the above embodiment, the lens optical axis, the microphone 41A, and the microphone 41B are arranged in this order on one straight line in a plane substantially perpendicular to the lens optical axis.
[0048]
If a microphone for noise detection were provided in the lens barrel, it would be necessary to transmit the noise information from the lens barrel to the camera body and to modify the lens mount. According to the above embodiment, since no microphone is provided in the lens barrel, existing lenses and mounts can be used as they are. Moreover, as described in the present embodiment, even when a noise source is contained within the lens barrel, no microphone is placed in the lens barrel, so the present invention can also be applied when an existing interchangeable lens is used. When the position information of the noise source is not output from the lens barrel, the same processing can be performed by setting the corresponding delay time information in advance in the non-volatile memory 11 or the like of the camera body.
[0049]
Reference Signs List: 1 imaging device, 100 camera body, 41A, 41B microphones, 44 delay circuit, 45 audio signal processing circuit