Patent Translate
Powered by EPO and Google
Notice
This translation is machine-generated. It cannot be guaranteed that it is intelligible, accurate,
complete, reliable or fit for specific purposes. Critical decisions, such as commercially relevant or
financial decisions, should not be based on machine-translation output.
DESCRIPTION JP2018518097
Abstract: The present invention relates to a method for improving the sound quality of an audio data file in an audio device by modifying the audio signal using at least one effect device 1, 2, 3, 4, 5, 6, 7. To this end, the method comprises the steps of: a) assigning a plurality of metadata to one audio data file; b) comparing the plurality of metadata with a plurality of stored data files in which a plurality of setting values of the at least one effect device 1 to 7 are stored; c) assigning the plurality of metadata to one data file 21 from method step b); and d) loading the data file 21 from method step c) and activating the effect devices 1 to 7. In this case, the effect devices 1 to 7 are set by the values from the data file 21 in method step c).
Method for improving the sound quality of audio data files
[0001]
The present invention relates to a method for improving the sound quality of an audio data file in
an audio device by modifying the audio signal using at least one effect device. Furthermore, the
invention relates to an installation for carrying out the method.
[0002]
Methods of the kind mentioned at the outset are known from the prior art and are used to improve devices for reproducing sound. For this purpose, effect devices such as equalizers, compressors and limiters, as well as hall and reverberation devices, are generally used
separately or as a unit.
[0003]
The compressor/limiter includes, for example, a limiter that avoids excessive peak levels in order to prevent overdrive. Furthermore, the limiter is important as a volume or sound pressure regulator for instruments, songs or voice. For this purpose, the values of the parameters fader level, threshold, attack (if any), release and output level are adjusted.
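As an illustration only, the following minimal Python sketch shows how such a parameter set might be applied as a simple per-sample peak limiter; the class name, function and gain-smoothing scheme are assumptions made for this example and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class LimiterSettings:
    fader_level_db: float = 0.0   # input gain before limiting
    threshold_db: float = -3.0    # level above which gain reduction sets in
    attack_ms: float = 5.0        # how quickly gain reduction is applied
    release_ms: float = 50.0      # how quickly gain reduction is released
    output_level_db: float = 0.0  # make-up gain after limiting

def limit(samples, settings, sample_rate=44100):
    """Very small peak limiter: one smoothed gain value per sample."""
    threshold = 10 ** (settings.threshold_db / 20)
    fader = 10 ** (settings.fader_level_db / 20)
    makeup = 10 ** (settings.output_level_db / 20)
    attack = 1.0 / max(1.0, settings.attack_ms * sample_rate / 1000)
    release = 1.0 / max(1.0, settings.release_ms * sample_rate / 1000)
    gain, out = 1.0, []
    for x in samples:
        x *= fader
        peak = abs(x)
        target = threshold / peak if peak > threshold else 1.0
        gain += (target - gain) * (attack if target < gain else release)
        out.append(x * gain * makeup)
    return out
```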
[0004]
The equalizer is composed, for example, of a plurality of filters which are used for acoustic
pattern correction, distortion correction or other control and which can process the entire audio
frequency band of the audio signal. This is achieved by correcting the frequency and filter
characteristics separately.
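Purely as an illustration, one such filter band could be realized as a peaking biquad; the coefficient formulas below follow the widely used Robert Bristow-Johnson audio EQ cookbook, and the function names are assumptions for this sketch rather than anything specified in the patent.

```python
import math

def peaking_biquad(freq_hz, gain_db, q, sample_rate=44100):
    """Return normalized biquad coefficients (b0, b1, b2, a1, a2) for one EQ band."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * freq_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    return b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0

def apply_band(samples, coeffs):
    """Direct-form I filtering of one EQ band over a list of samples."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out

# Several such bands in series form the multi-band equalizer described above.
```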
[0005]
Thus, in order to control or improve the sound quality of the audio signal or the audio data file,
the effect device is always connected to a special correction device.
[0006]
In the conventional method, the entire sound of the device for reproducing the sound is adapted. That is, the selected settings (Einstellungen) of the effect device apply to all sounds played by the audio device. For this purpose, the respective device, adjustable by the consumer or user, is positioned by the manufacturer of the audio device in the main audio channel path of the corresponding device. In this case, it does not matter whether the device playing the sound has one channel, two channels or multiple channels. In order to be able to offer consumers or users of audio devices a greater variety of acoustic corrections, conventional effect devices often have multiple preset values, i.e. multiple presets. That is, for example, in the case of a compressor/limiter, values of parameters such as fader level, threshold, attack, release and output level are already set. Some of these preset values, such as a setting titled "music", "movie" or "game", are already preprogrammed. The other preset values are so-called user presets that can be freely created and stored. If the consumer or user wants to switch between different settings, he or she must select the effect device and change the settings manually. This is done by activating one stored preset value or by manually creating a new setting value.
[0007]
However, according to the current prior art, it is impossible for the consumer or user to realize and utilize fully automatic acoustic adaptation that responds individually to each audio event. As a consequence, the settings found in the audio playback device apply to all audio events to be played and therefore produce an appropriate sound for some audio events, but have the disadvantage of producing a barely adequate or inadequate sound for other audio events. For example, emphasized low frequencies may suit some audio events best, but not other audio events that were originally produced with very low frequencies. Likewise, while an emphasis on narration suits a given movie trailer best, the same speech emphasis does not suit a musical performance with a different vocal at all, because the music is already composed in a high register and the singer in this case sounds distorted in the high notes. Thus, the consumer or user will try to change the settings or presets of the effect device manually ever more frequently in order to obtain the best possible listening experience. As a result of the manual changes, certain audio events sound more pleasing to the consumer or user while other audio events sound more unpleasant. In this respect, the prior art can even work out more disadvantageously than the case in which the consumer or user does not configure the effect device provided in the audio device at all.
[0008]
The object of the present invention is therefore to eliminate these drawbacks.
[0009]
This task is solved by the features of claim 1.
Preferred configurations of the invention are described in the dependent claims.
[0010]
The method of the present invention comprises the following steps: a) assigning a plurality of metadata to the audio data file; b) comparing the plurality of metadata with a plurality of data files in which a plurality of setting values of the effect device are stored; c) assigning the plurality of metadata to one data file from method step b); and d) loading the data file from method step c) and activating the effect device.
[0011]
In this case, the effect device is set by the values from the data file in method step c).
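As an illustration only, a minimal sketch of method steps a) to d) with a simple in-memory representation might look as follows; preset_files stands for the stored data files of setting values, and extract_metadata, match_preset and EffectDevice are hypothetical helpers introduced solely for this example.

```python
def extract_metadata(audio_file):            # step a): metadata assigned to the audio data file
    return audio_file.get("metadata", {})

def match_preset(metadata, preset_files):    # steps b) and c): compare and assign one data file
    for preset in preset_files:
        if preset["match"].items() <= metadata.items():
            return preset
    return None                              # no match found

def configure(effect_device, preset):        # step d): load the data file and activate the device
    if preset is not None:
        effect_device.apply(preset["settings"])

class EffectDevice:
    def __init__(self):
        self.settings = {}
    def apply(self, settings):
        self.settings.update(settings)

# Example: a "movie" preset selected from the genre metadata of one audio data file.
presets = [{"match": {"genre": "movie"}, "settings": {"speech_enhancer": True}}]
audio = {"metadata": {"genre": "movie", "title": "Trailer"}}
device = EffectDevice()
configure(device, match_preset(extract_metadata(audio), presets))
```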
[0012]
The essential idea of the invention is that one audio data file is acquired and identified using its metadata, the sound is then adapted individually for every possible audio data file, and the audio signal of this audio data file is subsequently provided by a correspondingly configured effect device.
Metadata are data which describe the content of media data files such as audio data files and video data files.
The metadata are used to describe content in the data file (e.g. composer, playback time, genre) or to integrate various data file formats into a so-called container format. Metadata can be a modifiable or non-modifiable component of a media data file, or a description independent of the media data file that is managed, for example, in a meta database. In the latter case, although the metadata are linked to a specific media data file, they exist outside and not inside the media data file itself. Within the scope of the inventive method, metadata can be used both inside the media data file (e.g. ID3 tags) and outside the media data file (a freely selectable media data catalog). Furthermore, a plurality of data files having a plurality of setting values of the effect device may be stored in a memory medium. A plurality of audio data files are stored in this memory medium. A plurality of metadata files are assigned and/or assignable to these audio data files.
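For metadata stored inside the media data file, a minimal sketch of reading the simple ID3v1 tag found in the last 128 bytes of an MP3 file could look as follows; the file name is a placeholder, and ID3v2 tags or an external meta database would require a different reader.

```python
def read_id3v1(path):
    with open(path, "rb") as f:
        f.seek(-128, 2)                  # the ID3v1 tag occupies the final 128 bytes
        tag = f.read(128)
    if tag[:3] != b"TAG":
        return {}                        # no embedded metadata found
    text = lambda b: b.split(b"\x00")[0].decode("latin-1", "replace").strip()
    return {
        "title":    text(tag[3:33]),
        "artist":   text(tag[33:63]),
        "album":    text(tag[63:93]),
        "year":     text(tag[93:97]),
        "genre_id": tag[127],            # index into the fixed ID3v1 genre list
    }

# metadata = read_id3v1("example.mp3")   # then compared with the stored preset data files
```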
[0013]
The method of the present invention solves the previously unresolved problem of individual acoustic matching. An advantage of the present invention is that the audio events that are actually reproduced are each individually provided with an effect device that is optimally set up for these audio events. Thus, the consumer or user no longer needs to adapt the audio event himself; instead, the consumer or user automatically obtains an audio event optimized for the device that plays the audio. By dividing the effect devices into automatically configurable and manually configurable ones, it is furthermore possible to give the consumer or user the option of adapting the sound further to personal preferences as required. The same holds true for those skilled in the art, such as acoustic engineers, who want to adapt the method of the present invention to a device or environment for playing audio as appropriate. The more information about the audio data file to be played back, i.e. the more metadata, is present and can be evaluated, the more accurate the method of the invention becomes. Preferably, acoustic events of all eras, genres and styles can be individually optimized by the method of the present invention. According to the invention, for the first time, a plurality of audio events from various eras that are played in mixed succession are also reproduced homogeneously. This is because, for example, the lower sound quality of earlier recordings can be individually adapted to a modern sound by the method of the invention.
[0014]
In another preferred configuration of the invention it is proposed that the metadata be stored in an audio data file. In a practical variation of the invention it is proposed that the metadata be stored in an external database. The latter variation of the invention has the advantage that the metadata of a large number of providers, and the song repertoire of such providers, can be downloaded.
[0015]
Preferably, the data files from method step b) are stored in a database or a cloud of the audio device. In this way, a large number of data files for a plurality of arranged effect devices can advantageously be aggregated and stored. The database may be an external database; that is, the database may exist outside the audio device. In addition, the large number of data files has the advantage that the plurality of arranged effect devices, and thus the audio data files, can be differentiated more finely.
[0016]
In a further preferred configuration of the invention it is proposed that the audio signal modified
by the effects device is sent to the audio output of the installation. This ensures that the audio
device can be further connected to the loudspeaker. This can further improve the acoustic
experience (Klangerlebnis).
[0017]
An installation for carrying out the method is the subject of claim 7. In this case, the installation comprises: a memory medium in which a plurality of audio data files are stored and to which a plurality of metadata files are assigned and/or assignable; at least one effect device; another memory medium storing a plurality of data files having a plurality of setting values of the effect device; and a control module assigned to the memory medium, the effect device and the other memory medium.
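Purely as an illustration, the components of such an installation could be modeled as plain data structures; the class names below are assumptions made for this sketch and do not appear in the patent.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryMedium:
    audio_files: dict = field(default_factory=dict)    # audio data files with assigned metadata

@dataclass
class PresetMedium:
    preset_files: list = field(default_factory=list)   # data files with effect-device setting values

@dataclass
class Installation:
    media: MemoryMedium
    presets: PresetMedium
    effect_devices: list       # e.g. equalizer, limiter, leveler
    control_module: object     # selects and activates the setting values
```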
[0018]
In particular, the installation comprises a plurality of audio outputs and / or a plurality of audio
inputs. The installation further comprises a plurality of video inputs and / or a plurality of video
outputs, as well as other control inputs and control outputs for controlling external devices, for
example light sources.
[0019]
A computer program implemented in the installation according to any one of claims 7 to 10 is the subject of claim 13. In this case, the computer program comprises an algorithm to be processed by the processor of the installation. The algorithm implements the method according to any one of claims 1 to 6. In this case, the computer program is present as a media player.
[0020]
Hereinafter, the present invention will be described in detail based on the drawings.
[0021]
FIG. 1 is a block diagram of a plurality of effect devices combined as a unit.
FIG. 2 shows the method of the invention in a block diagram.
FIG. 3 shows in a block diagram the components of a container format according to the invention.
FIG. 4 shows in a block diagram a flowchart in a control module according to the invention.
FIG. 5 shows a block diagram of an expanded unit of effect devices.
FIGS. 6 to 11 each show another embodiment of the invention in a block diagram.
FIG. 12 shows in a block diagram the components of a computer program in the form of a media player according to the invention.
[0022]
FIG. 1 shows in a block diagram a plurality of effect devices 1, 2, 3, 4, 5, 6, 7 which are coupled to one another and which share an audio input 8 and an audio output 9. These effect devices are used to modify the audio signal of audio data files not shown in FIG. 1.
[0023]
These effect devices 1, 2, 3, 4, 5, 6, 7 comprise a “bypass” for starting/stopping the whole unit, an “equalizer” for adjusting different frequency bands, a “speech enhancer” for emphasizing the voice, e.g. in films, a “bass boost/treble boost” for raising or lowering freely adjustable low and high frequency bands, a “leveler” for automatically adapting the volume level over time, a “stereo spread” for widening the sound pattern, and a “limiter” which, as the last element of the signal chain, suppresses peak levels so that overdrive cannot occur.
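A minimal sketch of such a unit as an ordered processing chain might look as follows; the stage names follow the list above, while the configuration keys and processor functions are assumptions introduced only for this example.

```python
def build_chain(settings):
    """Return the processing chain from audio input 8 to audio output 9."""
    return [
        ("bypass",          settings.get("bypass", False)),
        ("equalizer",       settings.get("equalizer", {})),
        ("speech_enhancer", settings.get("speech_enhancer", False)),
        ("bass_treble",     settings.get("bass_treble", {"bass_db": 0, "treble_db": 0})),
        ("leveler",         settings.get("leveler", {})),
        ("stereo_spread",   settings.get("stereo_spread", 0.0)),
        ("limiter",         settings.get("limiter", {"threshold_db": -1.0})),
    ]

def process(samples, chain, processors):
    """Run the audio signal through each stage in order."""
    for name, config in chain:
        if name == "bypass":
            if config:                       # bypass active: the unit passes audio unchanged
                return samples
            continue
        samples = processors[name](samples, config)
    return samples

# Usage with dummy pass-through processors for each stage:
identity = {name: (lambda s, cfg: s) for name, _ in build_chain({})}
print(process([0.1, -0.2], build_chain({"bypass": False}), identity))
```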
[0024]
FIG. 2 shows the inventive method, in which the metadata stored in an audio data file (not shown in FIG. 2) are supplied via at least one data line 12, 23 to the control module 10, which has a microcontroller.
Furthermore, the audio signal of the audio data file stored in a memory medium (not shown in FIG. 2) reaches the effect device 2, which in the embodiment of the invention shown in FIG. 2 is present as an equalizer. Instead of the single effect device 2, a complex installation consisting of a plurality of effect devices may be used, as shown in FIG. 1. The audio signal processed by the effect device 2 reaches the audio output unit 9. The data comprising both audio data and metadata, i.e. the information associated with the audio event that is actually being reproduced, are processed in the control module 10. When playback of the audio event starts, the control module 10 assigns the plurality of given metadata to a plurality of predetermined setting values of the effect device 2 via the control line 13 and activates these setting values. This is possible because the metadata processed by the control module 10 are either present at the beginning of the audio data file or are downloaded from an external meta database (not shown in FIG. 2) linked to the audio data file actually being played. Metadata are data which describe the content of media data files such as audio data files and video data files. The metadata are used to describe content in the data file (e.g. composer, playback time, genre) or to integrate various data file formats into a so-called container format.
[0025]
FIG. 3 shows the container format AVI (audio video interleaving), denoted 14. Here the metadata describe, as “head data” 15, the synchronous combination of the video data file 16 and the audio data file 17. The consumer or user activates only the higher-ranking AVI data file. The AVI data file holds the image and the sound together in the container. This makes it possible to easily combine image and sound.
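As an illustration of the container idea only, the higher-ranking file, its head data 15 and the two streams 16 and 17 could be modeled as follows; the field names are placeholders and do not reflect the actual AVI chunk layout.

```python
from dataclasses import dataclass

@dataclass
class HeadData:                 # corresponds to the "head data" 15
    composer: str
    playback_time_s: float
    genre: str

@dataclass
class ContainerFile:            # corresponds to the container 14
    head: HeadData
    video_stream: bytes         # video data file 16
    audio_stream: bytes         # audio data file 17

trailer = ContainerFile(HeadData("Unknown", 90.0, "movie"), b"", b"")
# The control module only needs trailer.head in order to choose the effect settings.
```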
[0026]
The control module 10 assigns a plurality of metadata to predetermined setting values of the
effect device 2 as also shown in FIG. These setting values are stored and recalled in the data file
21 of the memory medium with a unique identifier for the metadata. For this purpose, the control module 10 converts the plurality of metadata, of whatever configurable type, which reach it via the at least one data line 12, 22, into a plurality of control data for the effect device 2. In the first functional stage 18, the control module 10 recognizes which combination of letters, numbers and symbols the plurality of metadata to be read consists of. After the received metadata have been recognized, they are collated in functional stage 19 with the plurality of data sets present in the data file 21 and assigned. A plurality of combinations of letters, numbers and symbols are stored in the data file 21. These combinations of letters, numbers and symbols are directly linked to a plurality of preset values of the effect device 2. When the combination of letters, numbers and symbols of one set of metadata matches a data set for controlling the effect device 2 that is present in the data file 21, this data set is activated and read out by functional stage 20 of the control module 10. The control module 10 sends the effect device 2 an instruction to load the assigned data set. After the loading process, the effect device 2 activates the plurality of setting values stored under the specific recognized and activated combination of letters, numbers and symbols, and is adapted accordingly. The more metadata the control module 10 can recognize, read and trigger, the larger the repertoire of acoustic adaptations that are automatically triggered individually for specific acoustic events. If the metadata of a metadata file are not recognized, the control module 10 activates a defined reference setting value. As a result, the control does not come to a standstill.
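A minimal sketch of functional stages 18 to 20, assuming the data file 21 is represented as a mapping from a recognized combination of letters, numbers and symbols to stored setting values, might look as follows; all names here are illustrative.

```python
REFERENCE_SETTING = {"equalizer": "flat", "limiter_threshold_db": -1.0}

def stage_18_recognize(metadata):
    """Stage 18: reduce the metadata to one comparable key string."""
    return "|".join(f"{k}={v}" for k, v in sorted(metadata.items()))

def stage_19_collate(key, data_file_21):
    """Stage 19: collate the key with the data sets stored in data file 21."""
    return data_file_21.get(key)

def stage_20_activate(effect_device, data_set):
    """Stage 20: load the assigned data set, or fall back to the reference setting."""
    effect_device.update(data_set if data_set is not None else REFERENCE_SETTING)

data_file_21 = {"genre=movie|language=en": {"speech_enhancer": True}}
effect_device_2 = {}
key = stage_18_recognize({"genre": "movie", "language": "en"})
stage_20_activate(effect_device_2, stage_19_collate(key, data_file_21))
```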
[0027]
The audio data present in the media data file or the audio data file, for example as LPCM (linear pulse code modulation) in the audio format AIFF (a file format for audio data), are supplied directly to the effect device 2 and processed by it. The modulated sound is output at the audio output unit of the effect device 2 and supplied to a further processing device.
[0028]
The effect device 2 may have a plurality of freely configurable acoustic modules. All of these acoustic modules can be preset together by manual or automatic precorrection. In this case, the setting value which has been corrected in advance and stored under a plurality of combinations of letters, numbers and symbols is activated by the control module 10. As a result, the effect device 2 loads the activated preset value and modulates the sound according to it. In addition to the parameters that can be activated automatically, the combined effect device as shown in FIG. 5 may include a so-called master sound module that sits above all automatable sound modules of the whole acoustic system. The master parameters, which consist of an additional equalizer 2b and an intensity 2a, make it possible to adapt the acoustic system as a whole to special situations, such as listening with headphones as opposed to listening in a car. With this combination of automatically correctable and manually correctable parameters, the method of the invention can easily be adapted and thus used with maximum flexibility.
[0029]
As can be seen from the embodiments of the invention according to FIGS. 6 to 11, it is also proposed within the scope of the invention that the control module 10 controls a plurality of unspecified effect devices 2, 2a, which are assigned to a plurality of unspecified audio data sources, on the basis of a plurality of unspecified metadata sources (FIG. 6). Furthermore, in another embodiment of the invention, a plurality of unspecified control modules 10, 10a control the effect device 2 using a plurality of unspecified audio data sources, i.e. on the basis of a plurality of unspecified metadata sources (FIG. 7). In yet another embodiment of the invention, the plurality of unspecified control modules 10 and 10a control the effect devices 2, 2a using the plurality of unspecified audio data sources, on the basis of the plurality of unspecified metadata sources (FIG. 8). Alternatively, in this case, the metadata source 24 of the control module 10 is a metadata file with embedded metadata (FIG. 9). As another variation, the metadata source of the control module 10 is a data file of external media, i.e. a data file separated from the media data file 25 (the audio data file) (FIG. 10). Finally, the metadata sources of the control module 10 are an external data file 30 and an audio data file 31. In this case, the metadata are supplied from the audio data file 31 to the control module 10 via the communication line 26 (FIG. 11).
[0030]
FIG. 12 shows a computer program 27. When an audio data file or an audio-video data file in the playlist 28 is activated, the control module 10 recognizes the metadata of the media data file activated in the playlist 28 and/or the metadata of the assigned metadata file in the external meta database 29. After recognition, the control module 10 collates the found metadata with the data sets contained in the data file 21. When the control module 10 can assign a control data set existing in the data file 21 and/or in the external meta database 29 to the media data file activated in the playlist 28, the control module 10 sends to the effect device a control command that is present in the data file and that corresponds to the control data found there. Finally, the audio output 9 of the effect device 2 is output by the media player present as the computer program 27. The audio output 9 can then be fed to an amplifier/loudspeaker or processed further.
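Purely as an illustration, the media-player flow around FIG. 12 could be sketched as follows; playlist_28, data_file_21 and meta_db_29 are stand-ins for items 28, 21 and 29, and the fallback to a reference setting follows the behavior described above.

```python
def play(playlist, data_file_21, external_meta_database, effect_device, reference):
    for track in playlist:
        # take embedded metadata if present, otherwise look up the external meta database
        metadata = track.get("metadata") or external_meta_database.get(track["id"], {})
        key = "|".join(f"{k}={v}" for k, v in sorted(metadata.items()))
        settings = data_file_21.get(key, reference)      # fall back to the reference setting
        effect_device.update(settings)                   # control command to the effect device
        print(f"playing {track['id']} with {settings}")  # audio output 9 would follow here

playlist_28 = [{"id": "song1", "metadata": {"genre": "music"}},
               {"id": "clip2", "metadata": None}]
meta_db_29 = {"clip2": {"genre": "movie"}}
data_file_21 = {"genre=music": {"leveler": True}, "genre=movie": {"speech_enhancer": True}}
play(playlist_28, data_file_21, meta_db_29, {}, {"equalizer": "flat"})
```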
[0031]
1 Effect device
2 Effect device
2a Equalizer
3 Effect device
4 Effect device
5 Effect device
6 Effect device
7 Effect device
8 Audio input (unit)
9 Audio output (unit)
10 Control module
10a Control module
11 Communication line
12 Data line
13 Control line
14 Container format
15 Head data
16 Video data
17 Audio data
18 Functional stage
19 Functional stage
20 Functional stage
21 Data file
22 Data line
23 Data line
24 Metadata source
25 Metadata file
26 Communication line
27 Computer program
28 Playlist
29 Meta database
30 External data file
31 Audio data file