JP2003348698
Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2003348698
[0001]
The present invention relates to an audio presentation system, an audio presentation control
apparatus, an audio presentation control method, and an audio presentation control program.
Specifically, at least one of the output direction, the sound pressure, and the phase of the sound
is controlled according to the position at which the sound image based on the sound output from
the sound output unit is localized.
[0002]
2. Description of the Related Art Conventionally, sound presentation methods have arranged not only the left-channel and right-channel speakers but also fixed auxiliary speakers in order to enhance the sense of depth and three-dimensionality of the sound, so that sound presentation with an excellent sense of presence is performed. For example, in the 5.1-channel or 6.1-channel surround methods used for movies and DVDs (Digital Versatile Discs), auxiliary speakers are provided in addition to the front speakers to obtain a sense of depth beyond the stereo space, enhance the sense of fusion between image and sound, and present sound with an excellent sense of presence.
[0003]
In addition, the characteristics of an arbitrary room can be reproduced by convolving the impulse response of an arbitrary space with sound collected in an anechoic room
10-05-2019
1
or sound recorded on a compact disc. For example, filter characteristics are set in advance so as to match those of a reproduction space such as a concert hall, a live house, or a church, and the user selects the desired filter characteristics to reproduce the sound field of the desired reproduction space. In addition, virtual stereophonic sound that creates a three-dimensional acoustic field using left- and right-channel speakers has also been realized.
[0004]
By the way, such sound presentation methods use fixed speakers, so the range of sound expression is narrow. In addition, it is difficult to express well the depth in the front-rear direction and at the middle position between the speakers in the vertical direction.
[0005]
Therefore, the present invention provides an audio presentation system, an audio presentation
control device, an audio presentation control method, and an audio presentation control program
that can perform audio presentation with a sense of reality, a sense of movement, and a sense of
depth.
[0006]
According to the present invention, there is provided an audio presentation system comprising: audio output means for outputting sound; and audio presentation control means for controlling at least one of the output direction, the sound pressure, and the phase of the sound according to the position at which the sound image based on the sound output from the audio output means is localized.
[0007]
Further, an audio presentation control device for controlling sound output means that outputs sound based on an acoustic signal comprises: output direction control means for changing the sound output direction of the sound output means according to the position at which the sound image based on the sound output from the sound output means is localized; and acoustic signal processing means for controlling at least one of the signal level and the phase of the acoustic signal according to the position at which the sound image is localized.
[0008]
Further, an audio presentation control method for controlling sound output means that outputs sound based on an acoustic signal includes controlling, according to the position at which the sound image based on the sound output from the sound output means is localized, at least one of the sound output direction of the sound output means and the signal level and phase of the acoustic signal.
[0009]
Furthermore, an audio presentation control program causes a computer to execute: a procedure of determining the sound output direction according to the position at which the sound image is localized; a procedure of controlling at least one of the signal level and the phase of the acoustic signal according to that position; a procedure of controlling the sound output direction of the sound output means that outputs sound based on the acoustic signal; and a procedure of supplying the controlled acoustic signal to the sound output means.
[0010]
In the present invention, sound output means for outputting sound are provided, for example, on the left and right sides of the listener in a space partitioned by wall surfaces.
The sound output means is configured so that the sound output direction can differ according to the position at which the sound image based on the output sound is localized: either the sound output means is rotated, or a plurality of speakers is arranged and the speaker that outputs the sound is switched.
The sound output direction is controlled in this way, and the sound image is localized using the reflected sound from the wall surfaces.
[0011]
Further, at least one of the signal level and the phase of the acoustic signal is adjusted in
accordance with the position at which the sound image is localized, and the acoustic output
means is driven based on the adjusted acoustic signal.
[0012]
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT An embodiment of the present
invention will be described below with reference to the drawings.
FIG. 1 illustrates the listening of sound.
When the listener 12 listens to the sound output from the sound source 13 in the space 11 partitioned by the wall 10, the listener 12 hears the direct sound DS indicated by the solid line (the sound that reaches the ear directly from the sound source 13) integrated with the reflected sound (the sound reflected by the surrounding walls, ceiling, and floor).
[0013]
FIG. 2 shows the modeled relationship between direct sound and reflected sound.
When the distance L between the listener 12 and the sound source 13 is short (L = L1), as shown in FIG. 2A, the proportion of the direct sound DS is dominant, and the time delay Td of the reflected sound RS relative to the direct sound DS is small.
As the distance L between the listener 12 and the sound source 13 becomes longer (L = L2, L2 > L1), the levels of the direct sound DS and the reflected sound RS become smaller, as shown in FIG. 2B, and the proportion of the reflected sound RS becomes higher. In addition, the time delay Td of the reflected sound RS with respect to the direct sound DS increases. Furthermore, as the distance L between the listener 12 and the sound source 13 becomes still longer (L = L3, L3 > L2), the levels of the direct sound DS and the reflected sound RS become smaller still, as shown in FIG. 2C, and the time delay Td of the reflected sound RS becomes larger. The reflected sound RS and the time delay Td change depending on the configuration of the space 11.
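The qualitative trends described above can be sketched numerically. The parameterization below is not from the patent: it simply encodes that, as the distance L grows, both sound levels fall and the delay Td of the reflected sound grows; the function name and all constants are illustrative assumptions.

```python
# Toy model of FIG. 2 (illustrative assumptions, not patent values):
# levels fall off with distance L, and the reflected-sound delay Td grows.

def model_fig2(L, wall_reflectance=0.8, room_k=0.004):
    """Return (direct_level, reflected_level, delay_td_seconds) at distance L."""
    direct_level = 1.0 / L                   # inverse-distance falloff
    reflected_level = wall_reflectance / L   # weaker than the direct sound
    delay_td = room_k * L                    # Td increases with L, per the text
    return direct_level, reflected_level, delay_td

# The three cases of FIG. 2A-2C: L1 < L2 < L3.
for L in (1.0, 2.0, 4.0):
    d, r, td = model_fig2(L)
    print(f"L={L}: DS={d:.2f}, RS={r:.2f}, Td={td * 1000:.1f} ms")
```

Running the loop shows the stated trends directly: the levels fall and the delay grows with distance.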
[0014]
Therefore, in the present invention, sound presentation with a sense of reality, a sense of movement, and a sense of depth in the front-rear or up-down direction is given by controlling at least one of the sound output direction, the sound pressure, and the phase according to the position at which the sound image is localized. That is, when the output direction of the sound is changed, the direct sound DS and the reflected sound RS can be controlled according to the reflection characteristics of the space in which the sound is heard. Furthermore, by controlling the sound pressure and the phase, the direct sound DS and the reflected sound RS can be controlled, and sound presentation with a sense of reality, a sense of movement, and a sense of depth can be performed.
[0015]
FIG. 3 shows the configuration of the sound presentation system. In this sound presentation system, a speaker 23L serving as sound output means is provided on the left side of the listener 12 and a speaker 23R serving as sound output means on the right side, with the listener 12 at the center. The position shown with a broken line in FIG. 3 indicates the image display area 15, in which an image related to the sound output from the speakers 23L and 23R, for example, is displayed.
[0016]
The speaker 23L is configured so that the sound presentation control device 30 can change its sound output direction. For example, as shown in FIG. 4, the speaker 23L is rotatably mounted on the support 25L via a rotational drive unit (for example, a stepping motor) 24L. By driving the rotational drive unit 24L from the sound presentation control device 30 to rotate the speaker 23L, the output direction of the sound is varied. The speaker 23R is similarly configured so that its sound output direction can be changed. Further, the sound presentation control device 30 supplies the acoustic signal SOL to the speaker 23L and the acoustic signal SOR to the speaker 23R, thereby outputting sound from the speakers 23L and 23R. Note that, as shown in FIG. 5, the following description takes the binocular direction of the listener 12 as the X axis, the front-rear direction as the Y axis, and the vertical direction as the Z axis.
[0017]
When sound is output from the speakers 23L and 23R in the walled space, the listener 12 hears the direct sound DS of the sound output from the speakers 23L and 23R and the reflected sound RS from the wall surface 10. Here, with the rotation axis of the speakers 23L and 23R in the Z-axis direction, when the output direction of the sound from the speakers 23L and 23R is toward the listener 12, as shown in FIG. 6A, the proportion of the direct sound DS is dominant. Further, when the speakers 23L and 23R are provided at symmetrical positions on the left and right sides of the listener 12 and their sound outputs are equal, the sound image is localized at the position of the listener 12.
[0018]
Further, as shown in FIG. 6B, the speaker 23L is rotated counterclockwise by the angle θ1 and the speaker 23R clockwise by the angle θ1 to move the sound output direction gradually forward. In this case, the proportion of the reflected sound RS gradually increases, and the sound image can be moved forward to perform sound presentation with a sense of depth. Furthermore, as shown in FIG. 6C, when the speakers 23L and 23R are rotated further, to the angle θ2, the time delay of the reflected sound RS increases and the direct sound DS decreases, further increasing the dominance of the reflected sound RS.
[0019]
FIG. 7 shows the configuration of the sound presentation control device. The user interface unit 31 supplies the operation signal PS, generated according to a user operation, to the sound image position determination unit 32. The sound image position determination unit 32 determines the position of the sound image based on the operation signal PS or on the position information signal PR supplied from the outside, and supplies the position signal MP to the output direction control unit 33 and the acoustic signal processing unit 34.
[0020]
The output direction control unit 33 generates drive signals MDL and MDR for rotating the speakers 23L and 23R so that the sound image comes to the position indicated by the position signal MP, supplies the drive signal MDL to the rotational drive unit 24L provided on the speaker 23L side, and supplies the drive signal MDR to the rotational drive unit 24R provided on the speaker 23R side.
[0021]
The acoustic signal processing unit 34 adjusts the phases and signal levels of the input acoustic signals SAL and SAR so that the sound image comes to the position indicated by the position signal MP.
The adjusted signals are supplied to the signal amplification unit 35 as the adjusted acoustic signals SBL and SBR.
[0022]
The signal amplification unit 35 amplifies the adjusted acoustic signal SBL supplied from the
acoustic signal processing unit 34, and supplies the amplified acoustic signal SBL as an acoustic
signal SOL to the speaker 23L. Further, the adjusted acoustic signal SBR is amplified and supplied
to the speaker 23R as an acoustic signal SOR. By adjusting the phases and signal levels of the
input acoustic signals SAL and SAR in this manner, it is possible to control the sound pressure
and the phase of the sound.
[0023]
As described above, the rotational drive unit 24L is configured using a stepping motor or the like; it rotates the speaker 23L based on the drive signal MDL supplied from the sound presentation control device 30 so that the direction of the speaker becomes the speaker rotation angle calculated by the sound image position determination unit 32. Similarly, the rotational drive unit 24R rotates the speaker 23R based on the drive signal MDR so that its direction becomes the speaker rotation angle calculated by the sound image position determination unit 32.
[0024]
Next, the operation of the sound presentation control device will be described. When the user interface unit 31 is operated and the position of the sound image based on the sound output from the speakers 23L and 23R is specified, the user interface unit 31 generates the operation signal PS according to the operation and supplies it to the sound image position determination unit 32.
[0025]
Further, for example, when the acoustic signals SAL and SAR are obtained by picking up sound in the shooting direction of a video camera formed integrally with a microphone, a position detection unit is provided in the video camera; the position detection unit generates the position information signal PR indicating the position of the sound source and supplies it to the sound image position determination unit 32.
[0026]
FIG. 8 shows the configuration of the position detection unit.
The angle sensor 41 measures angles using a sensor capable of measuring a rotation angle, a gyro, or the like, detects the imaging direction of the video camera, and supplies an angle signal Spa to the polar coordinate calculation unit 43. For example, an angle signal Spa indicating the angle in the horizontal direction with respect to the reference position (hereinafter referred to as the "azimuth angle") and the angle in the vertical direction with respect to the reference position (hereinafter referred to as the "pitch angle") is generated and supplied to the polar coordinate calculation unit 43. The distance measuring sensor 42 measures distance using light, ultrasonic waves, or the like, or based on the focal position of the video camera, detects the distance LO to the desired object, and supplies a distance signal Spb to the polar coordinate calculation unit 43. The polar coordinate calculation unit 43 calculates polar coordinates from the angle signal Spa and the distance signal Spb, and outputs them as the position information signal PR.
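As a sketch of what the polar coordinate calculation unit 43 produces, the snippet below converts an azimuth angle, a pitch angle, and the measured distance LO into coordinates on the axes of FIG. 5 (X between the ears, Y front-rear, Z vertical). The axis mapping, the degree units, and the function name are assumptions for illustration.

```python
import math

def polar_to_position(azimuth_deg, pitch_deg, distance):
    """Convert the sensor outputs (Spa: azimuth/pitch, Spb: distance) into
    X/Y/Z coordinates; azimuth 0 / pitch 0 is taken as straight ahead."""
    az, el = math.radians(azimuth_deg), math.radians(pitch_deg)
    x = distance * math.cos(el) * math.sin(az)   # left-right
    y = distance * math.cos(el) * math.cos(az)   # front-back
    z = distance * math.sin(el)                  # up-down
    return x, y, z
```

For example, an object dead ahead at 2 m maps to (0, 2, 0), and one at 90 degrees azimuth at 1 m maps to roughly (1, 0, 0).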
[0027]
Here, when the video camera is used to track and shoot a desired subject, the sound from the desired subject is collected by the microphone, and the acoustic signals SAL and SAR are generated. Further, the desired subject is taken as the sound image position, and information indicating the direction of and distance to the sound source is output as the position information signal PR.
[0028]
Furthermore, when the movement of an image is detected based on the image signal and the display position of the image is moved according to the detected movement, so that the display position of a moving subject follows the movement of the subject, the display position may be used as the position of the sound source, and a signal indicating the display position may be used as the position information signal PR.
[0029]
FIG. 9 shows the configuration of an image signal processing apparatus that generates an image
output signal in which the display position of the image is moved based on the result of detection
of the movement of the image, and a signal indicating the display position.
[0030]
The image signal SV is supplied to the scene change detection unit 51, the motion detection unit
52, and the image position moving unit 53 of the image signal processing unit 50.
[0031]
The scene change detection unit 51 detects scene changes based on the image signal SV; that is, it detects discontinuous positions in the image, where one continuous scene connects to a different scene, and generates the scene change detection signal CH.
[0032]
The motion detection unit 52 detects a motion vector for each frame indicated as belonging to a continuous scene by the scene change detection signal CH generated by the scene change detection unit 51; for example, it detects the motion vector of a portion having a large display area, such as the background.
The motion detection information MVD indicating the motion vector detected by the motion detection unit 52 is supplied to the image position moving unit 53.
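A minimal stand-in for the scene change detection of unit 51: flag a frame as a scene boundary when it differs too much from the previous one. The mean-absolute-difference measure, the threshold value, and the function name are assumptions; the patent does not specify a detection method.

```python
def detect_scene_changes(frames, threshold=0.3):
    """Return indices of frames that start a new scene. `frames` is a list
    of equal-length lists of pixel values in [0, 1]."""
    changes = []
    for i in range(1, len(frames)):
        mad = sum(abs(a - b) for a, b in zip(frames[i], frames[i - 1]))
        mad /= len(frames[i])
        if mad > threshold:   # large jump -> discontinuity between scenes
            changes.append(i)
    return changes
```

With two nearly static "scenes" of two frames each, only the jump between them is flagged.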
[0033]
The image position moving unit 53 determines the display position based on the scene change detection signal CH and the motion detection information MVD.
In determining the display position, the motion vectors indicated by the motion detection information MVD during a continuous scene are accumulated to generate the motion accumulation value MVT, which is time-transition information of the motion vector, and the swing width of the motion accumulation value MVT is obtained to determine the movement range of the display position for each scene.
Next, the display position of the first display image of the continuous scene is determined so that this movement range fits within the movable range of the image display area (the distance between the centers of the display image when the image is displayed at the right end of the image display area and when it is moved horizontally and displayed at the left end).
Also, when the display position of the image is moved based on the motion detection information MVD during the period from the display of the first image of the continuous scene to the display of its last image, the motion detection information MVD is corrected so that the display image stays within the image display area, and the display position is determined on the assumption that the image is moved based on the corrected motion detection information MVD. A signal indicating the determined display position is supplied to the sound image position determination unit 32 as the position information signal PR. Further, the image position moving unit 53 generates and outputs the image signal SVout, in which the image based on the image signal SV has been moved to the determined display position.
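The display-position logic above, restricted to the horizontal axis, can be sketched as follows: accumulate per-frame motion into MVT, centre the resulting swing inside the movable range, and clamp. The clamp stands in for the MVD correction described in the text; the centring choice and all names are illustrative assumptions.

```python
def plan_display_positions(motions, min_x, max_x):
    """Return one display position per frame of a continuous scene.

    `motions` are per-frame horizontal motion values from MVD; the
    running sums play the role of the motion accumulation value MVT."""
    mvt, acc = [0.0], 0.0
    for m in motions:
        acc += m
        mvt.append(acc)
    lo, hi = min(mvt), max(mvt)                        # swing width of MVT
    start = (min_x + max_x) / 2.0 - (lo + hi) / 2.0    # centre the excursion
    # Clamp each position into the movable range (the MVD correction).
    return [min(max(start + v, min_x), max_x) for v in mvt]
```

For motions [1, 1, -3] in a range [0, 10], the accumulated values [0, 1, 2, -1] are centred to give positions [4.5, 5.5, 6.5, 3.5], all inside the display area.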
[0034]
The sound image position determination unit 32 of FIG. 7 generates the position signal MP indicating the position of the sound image set through the user interface unit 31, based on the operation signal PS. Alternatively, based on the position information signal PR, it generates a position signal MP that takes the sound source position or the display position of the image as the position of the sound image. The generated position signal MP is supplied to the output direction control unit 33 and the acoustic signal processing unit 34.
[0035]
The output direction control unit 33 generates the drive signals MDL and MDR based on the position signal MP and supplies them to the rotational drive units 24L and 24R, rotating the speakers 23L and 23R so that the sound image comes to the position indicated by the position signal MP.
[0036]
The acoustic signal processing unit 34 adjusts the phase and signal level of the acoustic signals based on the position signal MP.
FIG. 10 is a diagram for explaining the operation of the acoustic signal processing unit 34. When, based on the position signal MP, the rotation angle of the speaker 23L is θL and the rotation angle of the speaker 23R is θR, the acoustic signal processing unit 34 performs the processing shown in equations (1) and (2) on the left-channel acoustic signal SAL(t) and the right-channel acoustic signal SAR(t) to generate the adjusted acoustic signals SBL(t) and SBR(t).

SBL(t) = α(θL) × SAL(t − TDL) (1)
SBR(t) = β(θR) × SAR(t − TDR) (2)
[0037]
In equations (1) and (2), α(θL) is an amplification coefficient based on the rotation angle θL, and β(θR) is an amplification coefficient based on the rotation angle θR.
Furthermore, the phase correction values TDL and TDR represent a time advance or a time delay.
The amplification coefficients α(θL) and β(θR) and the phase correction values TDL and TDR are set based on the position of the sound image, the sound absorption coefficient and arrangement of the wall 10, the positional relationship between the wall 10 and the speakers 23L and 23R, and so on. For example, when the sound image is positioned to the right of the front center, the adjusted acoustic signal SBL of the left channel is delayed relative to the adjusted acoustic signal SBR of the right channel by means of the phase correction values TDL and TDR. If the sound absorption coefficient of the wall surface 10 is high, the amplification coefficients α and β are set to large values.
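Equations (1) and (2) are a gain plus a time shift per channel. A discrete-time sketch follows, with the delay given in whole samples and the gain/delay values taken as plain arguments rather than looked up from θ (an assumption for brevity):

```python
def adjust_channel(samples, gain, delay_samples):
    """SB(t) = gain * SA(t - TD): shift by `delay_samples` (zero-padding
    before t = 0) and scale by `gain`, as in equations (1) and (2)."""
    out = []
    for t in range(len(samples)):
        src = t - delay_samples
        out.append(gain * samples[src] if 0 <= src < len(samples) else 0.0)
    return out

# Left channel delayed relative to the right, for an image right of centre.
sbl = adjust_channel([1.0, 2.0, 3.0], 0.5, 1)   # delayed by one sample
sbr = adjust_channel([1.0, 2.0, 3.0], 0.5, 0)   # undelayed
```

A real implementation would operate on a streaming signal with fractional-sample delays; the whole-sample shift keeps the arithmetic transparent.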
[0038]
The adjusted acoustic signals SBL and SBR generated by the acoustic signal processing unit 34
are amplified by the signal amplification unit 35, supplied to the speaker 23L as the acoustic
signal SOL, and supplied to the speaker 23R as the acoustic signal SOR.
[0039]
When the position of the sound image has been determined by the sound image position determination unit 32, the proportion of the direct sound can be adjusted by the drive signals MDL and MDR. Here, as shown in FIG. 11, with the speakers 23L and 23R provided on the left and right sides of the listener 12, if a sound image is to be placed near the listener 12, the speakers 23L and 23R are turned toward the listener 12. For example, when the position of the sound image is close to the position Ib in front of the listener 12, the rotation angle is adjusted to the vicinity of the direction B in FIG. 11. Similarly, when the position of the sound image is close to the position Ic behind the listener 12, the rotation angle is adjusted to the vicinity of the direction C in FIG. 11 to reduce the influence of the reflected sound. Conversely, when the sound image is to be placed at a position far from the listener, the speakers are turned outward. For example, when setting the position of the sound image to the position Ia, at a distance in front of the listener 12, the rotation angle is adjusted to the vicinity of the direction A in FIG. 11. Similarly, when the position of the sound image is set to the position Id, at a distance behind the listener 12, the rotation angle is adjusted to the vicinity of the direction D in FIG. 11 to increase the influence of the reflected sound.
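The near/far rule above amounts to a mapping from the desired sound-image distance to a rotation angle: images near the listener get the speakers aimed inward (strong direct sound), distant images get them aimed outward (strong reflected sound). The linear interpolation, the 0–90 degree range, and the thresholds below are illustrative assumptions, not patent values.

```python
def rotation_angle(image_distance, near=1.0, far=3.0):
    """Return a speaker rotation angle in degrees: 0 = aimed at the
    listener (directions B/C in FIG. 11), 90 = aimed outward at the
    wall (directions A/D). `near`/`far` are assumed thresholds in metres."""
    frac = (image_distance - near) / (far - near)
    frac = min(max(frac, 0.0), 1.0)   # clamp to [0, 1]
    return 90.0 * frac
```

Distances at or inside `near` give 0 degrees (fully inward), distances at or beyond `far` give 90 degrees (fully outward), and intermediate distances interpolate linearly.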
[0040]
Thus, by rotating the speakers 23L and 23R according to the position of the sound image to control the output direction of the sound, for example when the speakers 23L and 23R are rotated from the direction A to the direction D, the position of the sound image moves from the front toward the rear. In particular, by providing the speakers 23L and 23R on the left and right sides of the listener 12, the movement of the position of the sound image becomes clear, and sound presentation with a high sense of reality can be performed. Further, because rotating the speakers 23L and 23R changes the output direction of the sound continuously, the position of the sound image can be moved continuously and smoothly.
[0041]
Also, by controlling the signal level and the phase of the acoustic signals with the acoustic signal processing unit 34, the sound pressure and the phase of the sound can be controlled easily and accurately, so the sound image can be localized at a more correct position and sound presentation with a high sense of reality can be performed.
[0042]
Here, to express a sound image in the front-rear direction, the adjusted acoustic signals SBL and SBR are generated so that the left-right volume difference and the left-right phase difference disappear.
To express a sound image in the left-right direction, the position of the sound image in the left-right direction is represented by the volume difference and the phase difference between the sound heard with the left ear and the sound heard with the right ear. For example, when, with respect to the sound SUR on the right side indicated by the solid line in FIG. 12, the sound level of the sound SUL on the left side is reduced as indicated by the broken line and a time lag Te is introduced, the sound image is felt at a position to the right. Therefore, by adjusting the amplification coefficients α and β and the phase correction values TDL and TDR according to the position of the sound image, sound presentation with a more realistic feeling can be performed.
[0043]
In calculating the phase correction values TDL and TDR, the relationship between the rotation angle of the speakers 23L and 23R and the time difference is determined in advance, and the phase correction values TDL and TDR are added to the time difference corresponding to the rotation angle of the speakers 23L and 23R; the phase correction values TDL and TDR are adjusted so that a sound image is formed at the desired position.
[0044]
The relationship between the rotation angle of the speakers 23L and 23R and the time difference can be obtained based on FIG. 13.
For example, the angle of the speaker 23L is set, and the impulse response at the listening position QL to the impulse sound output from the speaker 23L is measured; FIG. 14A shows this impulse response. Similarly, the impulse response to the impulse sound output from the speaker 23R is measured; FIG. 14B shows this impulse response.
[0045]
Here, the times TLO and TRO from the output of the impulse sound until the first indirect sound reaches the listening position are determined. The impulse responses may be measured directly or may be obtained in advance by simulation.
[0046]
Thus, when the times TLO and TRO have been obtained, the time difference ΔTf can be calculated from equation (3); that is, the time difference ΔTf can be obtained using the times TLO and TRO corresponding to the rotation angles of the speakers 23L and 23R. Note that this time difference ΔTf is a delay amount that gives the listener a sense of left and right.

ΔTf = TLO − TRO (3)
[0047]
Therefore, when the time difference ΔTg required to form a sound image at the desired position cannot be obtained as the time difference ΔTf, the phase correction value TD is set so as to satisfy equation (4).

ΔTg = ΔTf + TD (4)
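Equations (3) and (4) combine into a one-line rule for the correction: given the measured first-reflection times TLO and TRO at the current rotation angles, the extra delay TD needed to reach the target difference ΔTg is ΔTg − (TLO − TRO). A minimal sketch, with times in seconds:

```python
def phase_correction(t_lo, t_ro, delta_tg):
    """Equation (3): dTf = TLO - TRO, the delay achievable by rotation
    alone. Equation (4): dTg = dTf + TD, so TD = dTg - dTf."""
    delta_tf = t_lo - t_ro
    return delta_tg - delta_tf

# Example with assumed times: rotation alone yields 3 ms of the required
# 5 ms, so the signal processing must add about 2 ms of delay.
td = phase_correction(0.012, 0.009, 0.005)
```

A positive TD means the channel must be delayed further; a negative TD means it must be advanced, consistent with TDL/TDR representing either a time advance or a time delay.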
[0048]
For example, when the phase correction value TDR is set, the impulse response shown in FIG. 14B becomes as shown in FIG. 14C, and positions of the sound image that cannot be expressed by speaker rotation alone can be expressed by using the phase correction value.
[0049]
Further, the acoustic signal processing unit 34 described above may separate the acoustic signals SAL and SAR into frequency bands and adjust them so that the signal level of a desired frequency band becomes higher than that of the other frequency bands.
In this case, the sound image based on the sound of the desired frequency band can be expressed with more emphasis. The frequency bands of the acoustic signals SAL and SAR may also be separated to perform signal processing according to the frequency band. For example, the low range has lower directivity than the high range and the mid range, and it is difficult to recognize from which direction it is heard. For this reason, by performing signal processing that emphasizes the mid range and the high range, the sound image can be recognized easily and sound presentation with a high sense of reality can be performed.
[0050]
In the speakers 23L and 23R described above, the sound output direction is controlled by rotation clockwise or counterclockwise about the Z axis, but the output direction of the sound may also be varied by moving the positions of the speakers 23L and 23R in the Y-axis (front-rear) direction.
[0051]
FIG. 15 shows configurations for moving the positions of the speakers 23L and 23R in the Y-axis direction.
For example, in FIG. 15A, the rotation radius r used when rotating the speakers 23L and 23R is increased, so that not only is the output direction of the sound changed but the positions of the speakers 23L and 23R are also moved in the front-rear direction. Alternatively, as shown in FIG. 15B, the rotation of the speakers 23L and 23R and the movement in the front-rear direction are performed independently by providing a slide mechanism 26 that moves the rotational drive unit in the front-rear direction. Here, for example, the sense of depth in the forward direction can be enhanced by moving a speaker forward; likewise, the sense of depth in the backward direction can be enhanced by moving a speaker backward.
[0052]
As described above, by controlling at least one of the sound output direction, the sound pressure, and the phase according to the position at which the sound image is localized with the sound presentation control device 30, sound presentation with a high sense of reality can be performed.
[0053]
Furthermore, the processing described above may be realized not only by hardware but also by software.
The configuration in this case is shown in FIG. 16. The computer incorporates a CPU (Central Processing Unit) 701; a ROM 702, a RAM 703, a hard disk drive 704, and an input/output interface 705 are connected to the CPU 701 via a bus 720. Further, an input unit 711, a recording medium drive 712, a communication unit 713, a signal input unit 714, and a signal output unit 715 are connected to the input/output interface 705.
[0054]
When a command is input from an external device, or from the input unit 711 configured using an operation unit such as a keyboard or mouse or a voice input unit such as a microphone, the command is supplied to the CPU 701 via the input/output interface 705.
[0055]
The CPU 701 executes a program stored in the ROM 702, the RAM 703, or the hard disk drive 704, and performs processing in accordance with the supplied command.
Furthermore, an audio presentation processing program for causing the computer to execute the signal processing of the audio presentation system described above is stored in advance in the ROM 702, the RAM 703, or the hard disk drive 704; an acoustic output signal is generated based on the signal received by the signal input unit 714 and is output from the signal output unit 715. The sound presentation processing program may also be recorded on a recording medium, with the recording medium drive 712 recording the sound presentation processing program on the recording medium or reproducing the sound presentation processing program recorded on it so that it can be executed. Furthermore, the sound presentation processing program may be transmitted or received by the communication unit 713 via a wired or wireless transmission path, and the received sound presentation processing program may be executed by the computer.
[0056]
Next, the sound presentation processing program will be described. Here, the case where the
signal input unit 714 is supplied with the information signal WS generated based on the acoustic
signals SAL and SAR and the position information signal PR will be described.
[0057]
FIG. 17 is a flowchart showing the sound acquisition process. This sound acquisition process may
be performed, for example, on either the video camera side or the computer side.
[0058]
At step ST1, a position information signal is generated. That is, polar coordinate calculation is
performed based on the angle signal Spa from the angle sensor and the distance signal Spb from
the distance measuring sensor, and the position information signal PR is generated. Further, the
position information signal PR and the acoustic signals SAL and SAR are multiplexed, for
example, to generate an information signal WS.
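As an illustration of step ST1, the polar-coordinate calculation and multiplexing might be sketched as follows. The function names, the dictionary layout of the position information signal PR, and the byte format of the information signal WS are assumptions made for this sketch, not the patent's actual format.

```python
import math
import struct

def make_position_info(spa_deg, spb_m):
    """Hypothetical sketch: form the position information signal PR as polar
    coordinates from the angle-sensor signal Spa (degrees) and the
    distance-sensor signal Spb (meters), with Cartesian x/y for convenience."""
    rad = math.radians(spa_deg)
    return {
        "angle_deg": spa_deg,
        "distance_m": spb_m,
        "x": spb_m * math.cos(rad),  # lateral offset of the sound source
        "y": spb_m * math.sin(rad),  # depth offset of the sound source
    }

def multiplex_ws(pr, sal_frame, sar_frame):
    """Pack PR and one frame of the acoustic signals SAL/SAR into a simple
    byte stream WS (illustrative layout: angle, distance, frame length,
    then interleaved 16-bit samples)."""
    header = struct.pack("<ffH", pr["angle_deg"], pr["distance_m"], len(sal_frame))
    body = struct.pack(f"<{len(sal_frame)}h{len(sar_frame)}h", *sal_frame, *sar_frame)
    return header + body
```

A receiver would reverse this layout to separate PR and the acoustic signals, as done in step ST11.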
[0059]
In step ST2, it is determined whether the generated information signal WS is set to be recorded
on a recording medium or transmitted to an external device. If recording or transmission of the
information signal is set, the process proceeds to step ST3; if not, the process proceeds to step
ST4.
[0060]
In step ST3, the information signal is recorded or transmitted, and the process then proceeds to
step ST4. When recording the information signal, it is recorded on a recording medium that
performs recording and reproduction using magnetism, light, or the like, or on a recording
medium constituted using a semiconductor or the like. When transmitting the information
signal, it is output through a wired or wireless communication path.
[0061]
In step ST4, it is determined whether or not to end sound acquisition. Here, the sound acquisition
process is ended when the sound acquisition end operation is performed. When the end
operation is not performed, the process returns to step ST1 to continue generation of the
information signal.
[0062]
FIG. 18 is a flowchart showing the sound presentation processing. In step ST11, the supplied
information signal WS is separated. For example, when the information signal WS recorded on
the recording medium is reproduced by the recording medium drive 712, when the information
signal WS is supplied through the communication unit 713, or when the information signal WS
is supplied from a video camera or the like to the signal input unit 714, the acoustic signals SAL
and SAR and the position information signal PR are separated from the information signal WS,
and the process proceeds to step ST12.
[0063]
In step ST12, the output direction of the sound is determined: the rotation angles of the
speakers 23L and 23R are determined according to the position of the sound image indicated by
the position information signal PR. For this determination, the rotation angle for each
sound-image position is obtained in advance, taking into account the installation state of the
wall surface 10 and the like, and is stored in the hard disk drive or memory; the rotation angle
can then be determined easily by reading out the stored value corresponding to the sound-image
position.
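The table-lookup determination of the rotation angle in step ST12 can be sketched as follows. The table values and the nearest-neighbor lookup are illustrative assumptions; the patent only states that precomputed angles are stored in the hard disk drive or memory and read out.

```python
import bisect

# Hypothetical precomputed table: sound-image angle (degrees, relative to the
# listener) -> rotation angle for speakers 23L/23R, accounting for wall 10.
ROTATION_TABLE = [(-90, -60), (-45, -30), (0, 0), (45, 30), (90, 60)]

def rotation_angle(image_angle_deg):
    """Return the stored rotation angle for the tabulated sound-image
    position nearest to the requested one."""
    keys = [k for k, _ in ROTATION_TABLE]
    i = bisect.bisect_left(keys, image_angle_deg)
    if i == 0:
        return ROTATION_TABLE[0][1]
    if i == len(keys):
        return ROTATION_TABLE[-1][1]
    # pick whichever tabulated position is closer to the requested angle
    before, after = ROTATION_TABLE[i - 1], ROTATION_TABLE[i]
    if image_angle_deg - before[0] <= after[0] - image_angle_deg:
        return before[1]
    return after[1]
```

A finer-grained table, or interpolation between entries, would give smoother speaker motion at the cost of more stored values.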
[0064]
In step ST13, adjustment information is generated: coefficients and phase correction values
for adjusting the acoustic signals SAL and SAR are determined based on the position of the
sound image, the rotation angle of the speakers, and the like.
[0065]
In step ST14, drive signals MDL and MDR for rotating the speakers 23L and 23R so as to have
the rotation angle determined in step ST12 are generated and output from the signal output unit
715.
[0066]
In step ST15, the acoustic signals SAL and SAR are adjusted using the adjustment information
generated in step ST13, and the adjusted acoustic signals SBL and SBR or the acoustic signals
SOL and SOR are generated and output from the signal output unit 715.
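A minimal sketch of the adjustment performed in steps ST13 and ST15 is shown below, assuming gain coefficients for sound-pressure control and integer sample delays as a simple stand-in for the phase correction values; the actual adjustment information in the patent is not specified at this level of detail.

```python
def adjust_signals(sal, sar, gain_l, gain_r, delay_l, delay_r):
    """Illustrative sketch of ST13/ST15: scale the acoustic signals SAL/SAR
    by gain coefficients (sound-pressure control) and shift them by integer
    sample delays (a crude phase correction), producing SBL/SBR."""
    def apply(x, gain, delay):
        # prepend 'delay' zero samples, scale, and keep the original length
        shifted = [0.0] * delay + [gain * v for v in x]
        return shifted[:len(x)]
    sbl = apply(sal, gain_l, delay_l)
    sbr = apply(sar, gain_r, delay_r)
    return sbl, sbr
```

In a real system the phase correction would be frequency-dependent (e.g. an all-pass filter) rather than a whole-sample delay, but the control flow is the same.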
[0067]
In step ST16, it is determined whether or not reproduction of the acoustic signal is to be ended.
When the end operation is performed using the input unit 711, the sound presentation process
is ended; when it is not, the process returns to step ST11, and the processing from step ST11 to
step ST15 is repeated.
[0068]
By processing in this manner, the rotation of the speakers 23L and 23R and the adjustment of
the acoustic signals SAL and SAR are performed sequentially according to the position of the
sound image, so that highly realistic sound presentation with a sense of depth can be performed.
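The per-frame loop of FIG. 18 (steps ST11 to ST15) might be sketched as follows. The frame layout, the clamped rotation range, and the constant-gain pan law are illustrative assumptions standing in for the table lookup and adjustment information of the actual system.

```python
def presentation_loop(ws_frames):
    """Sketch of the FIG. 18 loop over already-separated frames of the
    information signal WS (here dicts with 'sal', 'sar', and sound-image
    'angle' in degrees, negative = left)."""
    out = []
    for frame in ws_frames:
        angle = frame["angle"]                 # ST11: separated position info
        rot = max(-60.0, min(60.0, angle))     # ST12: clamp to speaker range
        pan = (angle + 90.0) / 180.0           # ST13: naive linear pan in [0, 1]
        gain_l, gain_r = 1.0 - pan, pan
        sbl = [gain_l * v for v in frame["sal"]]   # ST15: adjusted signals
        sbr = [gain_r * v for v in frame["sar"]]
        out.append({"rot": rot, "sbl": sbl, "sbr": sbr})  # ST14/ST15: outputs
    return out
```

Each output entry corresponds to one pass of the flowchart: a drive angle for the rotation mechanism and the adjusted signals for the signal output unit 715.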
[0069]
In the above embodiment, the direction of the speakers is changed to change the output
direction of the sound; however, the sound output means may instead be constituted by
arranging a plurality of speakers with different sound output directions, and the sound output
direction may be changed by switching which speaker outputs the sound.
For example, as shown in FIG. 19A, if a plurality of speakers 27 are provided on a spherical
surface, the output direction control unit 33 switches the speakers 27 to which the acoustic
signals SOL and SOR are supplied, so that the sound output direction can be changed over all
directions. Further, as shown in FIG. 19B, if a plurality of speakers 27 are provided in the
circumferential direction on the side surface of a cylinder, the output direction control unit 33
switches the speakers to which the acoustic signals SOL and SOR are supplied, so that the
sound output direction can be varied over the entire circumference. When the output direction
of the sound is changed by switching speakers in this way, a drive mechanism for rotating the
speakers becomes unnecessary, and the sound output means can be constituted without any
movable part. In addition, since there is no movable part, the case where the position of the
sound image moves quickly can be handled easily.
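Switching among speakers arranged around a cylinder, as in FIG. 19B, reduces to choosing the speaker whose axis is nearest to the desired output direction. A minimal sketch follows; the speaker count and even angular spacing are assumptions for illustration.

```python
def select_speaker(direction_deg, num_speakers=8):
    """Hypothetical sketch for FIG. 19B: with num_speakers speakers spaced
    evenly around a cylinder, return the index of the speaker whose axis is
    nearest the desired sound output direction, instead of rotating one
    speaker mechanically."""
    step = 360.0 / num_speakers
    return int(round((direction_deg % 360.0) / step)) % num_speakers
```

Because selection is instantaneous, a rapidly moving sound image only requires re-routing the acoustic signals SOL and SOR, with no mechanical settling time.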
[0070]
Further, the sound output means is not limited to one using a speaker; needless to say, any
sounding body that outputs sound may be used. Furthermore, a speaker whose sound output
direction is fixed may be used together. For example, if environmental sounds, sound effects,
and the like are output from the fixed speaker while the sound from a moving sound source is
output from the speaker whose output direction is controlled, more versatile sound
presentation can be performed.
[0071]
As described above, according to the present invention, at least one of the sound output
direction, the sound pressure, and the phase is controlled according to the position where the
sound image is localized, so that sound presentation with a sense of reality, a sense of
movement, and a sense of depth can be performed.
[0072]
Further, by providing the sound output means in a space partitioned by a wall surface, the
sound image is localized using the reflected sound from the wall surface, and by providing the
sound output means on the left and right sides of the listener, movement of the sound image in
the front-rear direction, the sense of depth in the front-rear direction, and the like can be
enhanced.
[0073]
Further, by rotating the sound output means, the sound output direction can be controlled
easily, and when the sound output means is configured by arranging a plurality of speakers
with different sound output directions, the sound output direction can be controlled without
providing any movable part.
[0074]
Furthermore, by making the sound level of a desired frequency band higher than that of the
other frequency bands, a sound image based on the sound of the desired frequency band can be
enhanced.
Further, since the sound pressure and the phase of the sound are controlled by controlling the
signal level and the phase of the acoustic signal, the sound pressure and the phase can be
controlled easily.
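The level and phase control described here can be illustrated on a single-frequency component, where a gain in decibels scales the sound pressure and a rotation of the complex amplitude shifts the phase. This is a sketch of the principle, not the patent's actual signal-processing chain.

```python
import cmath
import math

def level_and_phase(amplitude, level_db, phase_deg):
    """Apply a dB gain (sound-pressure control) and a phase rotation
    (phase control) to the complex amplitude of one frequency component."""
    gain = 10 ** (level_db / 20.0)           # dB -> linear amplitude ratio
    return amplitude * gain * cmath.exp(1j * math.radians(phase_deg))
```

Applied per frequency band, the same two operations realize both the band emphasis and the phase correction described above.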