Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JPH0682242
[0001]
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a method of detecting a three-dimensional position / attitude that is applied to detecting the motion of an object three-dimensionally and using the result as control information of various kinds, and more particularly to a method that enables such detection to be performed stably over a wide area.
[0002]
2. Description of the Related Art In addition to attitude control systems for aircraft, robots and the like, control of various kinds is performed in many fields by detecting the three-dimensional motion of an object. To obtain virtual reality in the sound image control of an acoustic system and in the image control of a head mounted display system as well, it is necessary to detect the three-dimensional position and posture of the viewer accurately.
[0003]
A simple three-dimensional digitizer utilizing a magnetic field, as shown in FIG. 5, has been proposed as a detection apparatus of this type and is actually applied to head mounted display systems and the like. The detection principle of this digitizer is as follows: a fixed source coil (three-dimensional orthogonal coil) 51 is supplied with an alternating current by a drive circuit 52, the magnetic field 53 generated by the source coil 51 is picked up by a sensor coil (three-dimensional orthogonal coil) 54 attached to the detection target, the voltage induced in the sensor coil 54 is detected by a signal detection circuit 55, and a computer 56 calculates the position and attitude of the sensor coil 54 from the detected signal levels to create control information.
[0004]
Therefore, if the sensor coil 54 is attached to display goggles or to the head of the person wearing them, position / posture information of the wearer can be obtained, and by performing image control using this information it becomes possible to realize a video expression that produces virtual reality.
[0005]
Since this detection method can be realized with an extremely simple configuration using two three-dimensional orthogonal coils, it differs from the gyros and the like used for attitude control of aircraft: although its detection accuracy is not especially high, it has the advantage that position and orientation information can be detected with a small and lightweight device. In view of the above detection principle, it is also apparent that the method can be applied to an acoustic system that executes sound image control in accordance with the position and orientation of the listener.
[0006]
In the head mounted display system and the acoustic system using the above three-dimensional digitizer, however, the sensor coil 54 must remain within the range of the magnetic field from which a valid detection signal can be obtained, so there is a problem that the viewer's range of action is limited to a very narrow area. That is, the magnetic field generated by the source coil 51 is weak, and in an actually implemented head mounted display system the sensor coil 54 can detect a valid signal only within an area with a radius of about 1.5 m; outside that area, normal video control is not possible. It is possible to generate a stronger magnetic field and widen the effective area by increasing the current supplied to the source coil 51, but the power consumption then increases and the strong magnetic field may adversely affect the electronic circuits of the system.
[0007]
In addition, because the three-dimensional digitizer described above is a detection method using a reactance element, its response speed is slow: in the head mounted display system mentioned above, the detection signal is output with a delay of about 0.25 to 0.33 sec with respect to movement or rotation of the sensor coil 54, so the tracking control of the video expression lags behind and the quality of the virtual reality is degraded.
[0008]
Therefore, the present invention has been created for the purpose of providing a novel three-dimensional position / posture detection method using ultrasonic waves and thereby solving the above-mentioned problems of the prior art.
[0009]
According to the present invention, there are provided ultrasonic generators fixed at three or more different positions on an object to be detected, ultrasonic detectors fixed at three or more different positions separated from the object to be detected for detecting the ultrasonic waves transmitted from the respective ultrasonic generators, and means for synchronizing, by an electric signal or an optical signal, the ultrasonic transmission timing of each ultrasonic generator with the ultrasonic detection timing of each ultrasonic detector. Ultrasonic waves are transmitted from the ultrasonic generators in different time zones; the delay time of the ultrasonic detection timing of each ultrasonic detector with respect to the ultrasonic transmission timing of each ultrasonic generator is determined by means of the synchronizing means; distance information between each ultrasonic detector and each ultrasonic generator is determined in proportion to the respective delay time; and the three-dimensional position and orientation of the detection object are obtained using this distance information and the known position information of each ultrasonic detector. This constitutes the three-dimensional position and posture detecting method of the invention.
[0010]
[Functions] The propagation speed of ultrasonic waves is much lower than that of electric or optical signals, but within a limited range such as a room an ultrasonic wave propagates between the ultrasonic generator and the ultrasonic detector in a time short enough not to affect the realization of virtual reality. The present invention determines the distance information between each ultrasonic detector and each ultrasonic generator by effectively utilizing the difference between the propagation velocity of ultrasonic waves and that of electric or optical signals, and obtains the three-dimensional position / posture of the detection target from this distance information and the position information of each ultrasonic detector.
[0011]
The ultrasonic transmission timing of each ultrasonic generator and the ultrasonic detection timing of each ultrasonic detector are synchronized by the electric signal or the optical signal, and since the propagation time of this synchronization signal can be ignored, the time an ultrasonic wave takes to reach each ultrasonic detector (the delay time) is proportional to the distance between the transmitting ultrasonic generator and that ultrasonic detector. For example, even at a distance of 10 m the delay time is only about 30 msec. When the posture of the object to be detected is finally determined, what matters is the difference between the distances from each ultrasonic generator to each ultrasonic detector; even when that difference is about 1 cm, the corresponding time difference is about 30 μsec, and times of this order can be measured accurately and with sufficient resolution by an ordinary detection system.
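For reference, these figures follow directly from the propagation time relation t = d / Vs if a nominal sound velocity of about 340 m/s is assumed (the patent itself does not state a numerical value for Vs):

```latex
t_{10\,\mathrm{m}} = \frac{10\ \mathrm{m}}{340\ \mathrm{m/s}} \approx 29\ \mathrm{msec} \approx 30\ \mathrm{msec},
\qquad
\Delta t_{1\,\mathrm{cm}} = \frac{0.01\ \mathrm{m}}{340\ \mathrm{m/s}} \approx 29\ \mu\mathrm{sec} \approx 30\ \mu\mathrm{sec}.
```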
[0012]
In the present invention, ultrasonic waves are transmitted sequentially from the ultrasonic generators in different time zones, and a delay time is determined each time one of the ultrasonic detectors receives an ultrasonic wave. Since ultrasonic generators and ultrasonic detectors are provided at three or more points on the detection target and at three or more fixed positions respectively, the distances between the ultrasonic generators and the ultrasonic detectors can be obtained from the respective delay times, the three-dimensional position of each ultrasonic generator can be determined uniquely using this distance information and the known position information of each ultrasonic detector, and the position and attitude of the detection target can then be detected from the position information of the ultrasonic generators.
[0013]
Embodiments of the present invention will be described in detail with reference to FIGS. 1 to 4. The present embodiment uses the three-dimensional position / posture detection method of the present invention for sound image control in an acoustic system, and FIG. 1 shows its basic configuration. In the figure, 1 is a headphone; ultrasonic generators E1, E2 and E3 are attached to the top of its band and to the outside of each earphone section, and three ultrasonic sensors S1, S2 and S3 are attached at different positions on the wall surface. Further, 2 is a sound image control unit, 3 is an amplifier unit incorporating a sound image control circuit, and 4 is a source that outputs various audio signals to the amplifier unit 3. The ultrasonic detection signals of the ultrasonic sensors S1, S2 and S3 are input to the sound image control unit 2 through the signal lines L1, L2 and L3, a sound image control signal is output from the sound image control unit 2 to the amplifier unit 3 through the signal line L4, audio signals are supplied from the amplifier unit 3 to the left and right earphones 1R and 1L through the signal line L5, and a synchronization reference signal is input from the headphone 1 to the sound image control unit 2 through the signal line L6.
[0014]
The detailed connection relationship between the headphone 1, the ultrasonic sensors S1, S2 and S3, the sound image control unit 2 and the amplifier unit 3 is shown in the system circuit diagram of FIG. 2. As is apparent from the figure, the headphone 1 incorporates a microcomputer circuit 1a, which controls the drivers 5, 6 and 7 of the respective ultrasonic generators E1, E2 and E3 and creates the synchronization reference signal. The sound image control unit 2 likewise incorporates a microcomputer circuit 2a, which processes the detection signals of the ultrasonic sensors S1, S2 and S3 to create the sound image control signal. It is also possible to transmit the synchronization reference signal as an optical signal instead of using the signal line L6; in that case an optical sensor is provided on the sound image control unit 2 side.
[0015]
The sound image control operation in this acoustic system will be described below with reference to the signal timing chart of FIG. 3 and the flowchart of FIG. 4, which shows the signal processing procedure on the sound image control unit 2 side. First, as shown in FIG. 3, the microcomputer circuit 1a of the headphone 1 creates a synchronization reference signal with a cycle of T0 and outputs it to the sound image control unit 2 through the signal line L6. Each time the synchronization reference signal is transmitted, the microcomputer circuit 1a on the headphone 1 side activates the drivers 5, 6 and 7 at intervals of Ta, so that the ultrasonic generators E1, E2 and E3 each transmit ultrasonic waves for a time Tb in sequence. That is, whenever the synchronization reference signal is transmitted, the ultrasonic generators E1, E2 and E3 operate in different time zones, and ultrasonic waves are transmitted in the order E1 → E2 → E3 within the period T0. The frequencies of the ultrasonic waves transmitted by the ultrasonic generators E1, E2 and E3 are the same; in FIG. 3, T0 is set to about 100 msec, Ta to about 25 msec and Tb to about 10 msec, so that (Ta − Tb) is about 15 msec.
[0016]
On the other hand, the ultrasonic sensors S1, S2 and S3 detect the ultrasonic waves transmitted sequentially by the ultrasonic generators E1, E2 and E3. When the sound image control unit 2 detects the synchronization reference signal, the microcomputer circuit 2a starts a timer, and as the ultrasonic wave of the ultrasonic generator E1 is detected by each of the ultrasonic sensors S1, S2 and S3, the elapsed times ΔT11, ΔT21 and ΔT31 are saved in the built-in RAM (F1 to F9). In this case, the order of detection by the ultrasonic sensors S1, S2 and S3 differs depending on the position and posture of the headphone 1 in FIG. 1.
[0017]
Next, when the time Ta has elapsed, the microcomputer circuit 2a resets the timer and starts counting again, and as each of the ultrasonic sensors S1, S2 and S3 detects an ultrasonic wave in the same manner as above, the corresponding times ΔT12, ΔT22 and ΔT32 are stored in the built-in RAM (F9, F10 → F11 → F3 to F9). That is, in this case the ultrasonic wave of the ultrasonic generator E2, transmitted after that of the ultrasonic generator E1, is detected by the ultrasonic sensors S1, S2 and S3.
[0018]
Further, the microcomputer circuit 2a repeats the above procedure: the ultrasonic wave of the ultrasonic generator E3, transmitted after that of the ultrasonic generator E2, is detected by each of the ultrasonic sensors S1, S2 and S3, and the times ΔT13, ΔT23 and ΔT33, measured from the instant at which the time 2Ta has elapsed after detection of the synchronization reference signal until the ultrasonic wave is detected by the ultrasonic sensors S1, S2 and S3, are saved in the built-in RAM (F9, F10 → F11 → F3 to F9).
[0019]
As a result, the time data ΔT11, ΔT21, ΔT31, ΔT12, ΔT22, ΔT32, ΔT13, ΔT23 and ΔT33 are saved in the RAM of the microcomputer circuit 2a. As is apparent from FIGS. 1 and 3, these time data give the times taken for the ultrasonic waves to propagate between the ultrasonic generators E1, E2 and E3 on the headphone 1 side and the ultrasonic sensors S1, S2 and S3.
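The bookkeeping of paragraphs [0016] to [0018] can be sketched as follows; the list of detection events is a hypothetical input format, and for simplicity the sketch assumes each burst is detected within its own Ta window, as in FIG. 3:

```python
TA = 0.025  # window spacing Ta, about 25 msec


def measure_delays(sync_time, detection_events):
    """
    Build the 3x3 delay matrix dt[i][j]: the time from the start of generator Ej's
    time zone (sync_time + j*Ta) to the detection of its burst at sensor Si.
    `detection_events` is assumed to be a list of (sensor_index, timestamp) pairs,
    three per time zone, produced by the ultrasonic sensors S1, S2 and S3.
    """
    dt = [[None] * 3 for _ in range(3)]       # rows: sensors S1-S3, columns: E1-E3
    for sensor, timestamp in detection_events:
        elapsed = timestamp - sync_time
        zone = int(elapsed // TA)             # 0 -> E1, 1 -> E2, 2 -> E3
        if 0 <= zone < 3:
            dt[sensor][zone] = elapsed - zone * TA
    return dt
```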
[0020]
Therefore, once all of these time data have been saved, the microcomputer circuit 2a multiplies each time value by the sound velocity Vs of the ultrasonic wave, and the resulting distance data D11, D21, D31, D12, D22, D32, D13, D23 and D33 between the ultrasonic generators E1, E2 and E3 and the ultrasonic sensors S1, S2 and S3 are stored in the built-in RAM (F12).
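Converting the stored delay matrix into distances is then a single multiplication by Vs; the numerical value used below is an assumed room-temperature sound velocity, since the patent does not state one:

```python
VS = 340.0  # assumed sound velocity Vs of the ultrasonic wave, in m/s


def delays_to_distances(dt):
    """D[i][j] = Vs * dt[i][j]: distance in metres from sensor Si to generator Ej.
    Assumes all nine delay values dt[i][j] have been measured."""
    return [[VS * t for t in row] for row in dt]
```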
[0021]
Since the ultrasonic sensors S1, S2 and S3 are fixed to the wall surface, their position coordinates (X1, Y1, Z1), (X2, Y2, Z2) and (X3, Y3, Z3) are known data, and the microcomputer circuit 2a stores them in a built-in ROM in advance. Using these known position coordinates and the distance data determined above, the position coordinates (XE1, YE1, ZE1), (XE2, YE2, ZE2) and (XE3, YE3, ZE3) of the ultrasonic generators E1, E2 and E3 are obtained by calculation (S13). Specifically, the position coordinates of each of the ultrasonic generators E1, E2 and E3 can be obtained by solving three sets of three-dimensional simultaneous equations represented by the following Equation 1.
[0022]
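The body of Equation 1 does not survive in this translation; from the definitions above it is presumably the set of sphere equations relating each generator position to the three known sensor positions and the corresponding measured distances, i.e. for each generator Ej (j = 1, 2, 3):

```latex
(X_{Ej} - X_i)^2 + (Y_{Ej} - Y_i)^2 + (Z_{Ej} - Z_i)^2 = D_{ij}^2, \qquad i = 1, 2, 3
```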
Then, the microcomputer circuit 2a temporarily saves the position coordinates of the ultrasonic generators E1, E2 and E3 obtained above in the built-in RAM, obtains the attitude coefficient of the headphone 1 from them, and saves this attitude coefficient in the built-in RAM as well (S14). Further, a sound image control signal for the amplifier unit 3 is created from the position coordinates of the ultrasonic generators E1, E2 and E3 determined above and the attitude coefficient of the headphone 1, and this sound image control signal is output to the amplifier unit 3 through the signal line L4 (S15, S16). The data saved in the preceding procedure are cleared at this point.
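The patent does not spell out how the attitude coefficient is derived from the three generator positions; the following sketch is only one plausible interpretation, in which an orthonormal frame of the headphone 1 is built from the coordinates of E1 (top of the band) and E2, E3 (the two earphone sections):

```python
import numpy as np


def headphone_frame(e1, e2, e3):
    """
    Derive a centre position and a 3x3 orientation matrix from the generator
    coordinates e1 (top of band) and e2, e3 (left/right earphone sections).
    This is an assumed definition of the 'attitude coefficient', not the patent's.
    """
    e1, e2, e3 = map(np.asarray, (e1, e2, e3))
    centre = (e2 + e3) / 2.0                  # midpoint between the earphones
    x_axis = e3 - e2                          # ear-to-ear direction
    x_axis = x_axis / np.linalg.norm(x_axis)
    up = e1 - centre                          # towards the top of the band
    z_axis = np.cross(x_axis, up)             # facing direction, perpendicular to both
    z_axis = z_axis / np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)         # completes a right-handed frame
    return centre, np.column_stack((x_axis, y_axis, z_axis))
```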
[0023]
As a result, the amplifier unit 3 controls the sound output signals to the earphones 1R and 1L using the sound image control signal, and it becomes possible to control the sound image produced by the reproduced sound of both channels in correspondence with the position and posture of the headphone 1; that is, the sound image is controlled according to the position and posture of the head of the listener wearing the headphone 1. The above procedure is the sound image control operation within one cycle (100 msec) of the synchronization reference signal, and the headphone 1 side and the sound image control unit 2 side repeat it every time the synchronization reference signal is output, so that when the position and posture of the listener's head change, the sound image of the reproduced sound changes continuously in accordance with that change (F16 → F1).
[0024]
In the present embodiment, since T0 is set to about 100 msec and (Ta − Tb) to about 15 msec as shown in FIG. 3, detection with an extremely short cycle is possible and the response of the sound image control is excellent. The system also functions normally even when the distance between the headphone 1 and each of the ultrasonic sensors S1, S2 and S3 is about 10 m. Therefore, accurate sound image control can be performed even when the listener moves his head suddenly or moves around the room, and highly accurate virtual reality can be realized.
[0025]
The three-dimensional position / posture detection method of the present invention, having the above configuration, exhibits the following effects. Because the distance information between three or more positions on the detection target and three or more fixed positions in the space is obtained continuously using ultrasonic waves when detecting the three-dimensional position and attitude of the detection target, the detectable area can be enlarged, and accurate and stable detection is possible even if the detection target moves over a wide range. Moreover, compared with the magnetic detection method of the prior art, a response signal to the movement of the detection target is obtained quickly, so the method can be applied to detection targets accompanied by rapid movement, and it has the advantage that a detection system with a high degree of freedom can be configured without being limited by directivity or by the material of the object to be detected. In particular, by applying it as a method for detecting the head position and posture in the sound image control of an acoustic system or the image control of a head mounted display system, it becomes possible to realize high-precision virtual reality with a simple and inexpensive system configuration.
[0026]
Brief description of the drawings
[0027]
FIG. 1 is a diagram showing a basic configuration in which the three-dimensional position / attitude detection method of the present invention is used for sound image control in an acoustic system.
[0028]
FIG. 2 is a system circuit diagram showing the connection relationship between the headphone, the ultrasonic sensors, the sound image control unit and the amplifier unit.
[0029]
FIG. 3 is a signal timing chart showing the timing of the synchronization reference signal, the transmission signals of the ultrasonic generators and the detection signals of the ultrasonic sensors.
[0030]
FIG. 4 is a flowchart showing the signal processing procedure of the sound image control unit.
[0031]
FIG. 5 is a principle view showing a conventional magnetic three-dimensional position / attitude detection method.
[0032]
Explanation of signs
[0033]
1 ... headphone (detection target), 1R, 1L ... earphones, 1a, 2a ... microcomputer circuits (synchronization means), 2 ... sound image control unit (position and attitude detected by microcomputer circuit 2a), 3 ... amplifier unit, 4 ... source, 5, 6, 7 ... drivers, E1, E2, E3 ... ultrasonic generators, L1 to L6 ... signal lines, S1, S2, S3 ... ultrasonic sensors (ultrasonic detectors).