Signal correlation functions

12.12.2022

In the early stages of the development of radio engineering, the question of choosing the best signals for specific applications was not particularly acute. This was due, on the one hand, to the relatively simple structure of transmitted messages (telegraph signals, radio broadcasting); on the other hand, the practical implementation of signals of complex shape, together with the equipment for encoding them, modulating them and converting them back into a message, proved difficult.

Currently, the situation has changed radically. In modern radio-electronic systems, the choice of signals is dictated primarily not by the technical convenience of their generation, conversion and reception, but by the possibility of optimally solving problems provided for in the design of the system. To understand how the need for signals with specially selected properties arises, consider the following example.

Comparison of time-shifted signals.

Let us turn to a simplified picture of the operation of a pulse radar designed to measure the distance to a target. Here, information about the measurement object is contained in the value of τ, the time delay between the probing and received signals. The shapes of the probing and received signals are the same for any delay.

The block diagram of a radar signal processing device intended for range measurement may look as shown in Fig. 3.3.

The system consists of a set of elements that delay the "reference" transmitted signal by certain fixed amounts of time.

Fig. 3.3. Device for measuring signal delay time

The delayed signals, together with the received signal, are fed to comparison devices, which operate in accordance with the principle: the output signal appears only if both input oscillations are “copies” of each other. Knowing the number of the channel in which the specified event occurs, you can measure the delay, and therefore the range to the target.
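The multichannel scheme described above amounts to cross-correlating the received signal with shifted copies of the reference and picking the channel with the strongest response. A minimal numerical sketch of this idea (the sample rate, pulse duration and delay value are illustrative assumptions, not taken from the text):

```python
import numpy as np

fs = 1000                                        # sample rate, Hz (assumed)
t = np.arange(0, 1.0, 1/fs)                      # 1 s observation window
pulse = ((t >= 0) & (t < 0.05)).astype(float)    # 50 ms probing pulse

true_delay = 0.3                                 # delay to be measured, s
received = np.roll(pulse, int(true_delay * fs))  # delayed "echo"

# Compare the received signal with every shifted copy of the reference:
corr = np.correlate(received, pulse, mode="full")
lags = np.arange(-len(pulse) + 1, len(pulse))

est_delay = lags[np.argmax(corr)] / fs           # channel with the best match
```

In a real radar the echo is also noisy; the correlation peak stays at the true delay as long as the noise does not overwhelm the pulse energy.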

Such a device will work the more accurately, the more the signal and its “copy”, shifted in time, differ from each other.

Thus, we have gained a qualitative “idea” of what signals can be considered “good” for a given application.

Let us move on to the exact mathematical formulation of the problem posed and show that this range of issues is directly related to the theory of energy spectra of signals.

Autocorrelation function of the signal.

To quantify the degree of difference between a signal and its time-shifted copy, it is customary to introduce the autocorrelation function (ACF) of the signal, equal to the scalar product of the signal and its copy:

B_s(τ) = ∫ s(t) s(t + τ) dt, (3.15)

where the integration extends over −∞ < t < ∞.

In what follows, we will assume that the signal under study has a pulsed character localized in time, so that an integral of the form (3.15) certainly exists.

It is immediately clear that at τ = 0 the autocorrelation function becomes equal to the signal energy:

B_s(0) = ∫ s²(t) dt = E_s.

Among the simplest properties of the ACF is its parity:

B_s(τ) = B_s(−τ).

Indeed, if we make the change of variable t = t′ − τ in integral (3.15), then

B_s(τ) = ∫ s(t′ − τ) s(t′) dt′ = B_s(−τ).

Finally, an important property of the autocorrelation function is the following: for any value of the time shift τ, the modulus of the ACF does not exceed the signal energy:

|B_s(τ)| ≤ B_s(0) = E_s.

This fact follows directly from the Cauchy-Bunyakovsky inequality (see Chapter 1):

|(s, s_τ)| ≤ ‖s‖ · ‖s_τ‖ = E_s.

So, the ACF is represented by a symmetrical curve with a central maximum, which is always positive. Moreover, depending on the type of signal, the autocorrelation function can have either a monotonically decreasing or oscillating character.

Example 3.3. Find the ACF of a rectangular video pulse.

Fig. 3.4, a shows a rectangular video pulse with amplitude U and duration τ_u. Its copy, shifted toward delay by time τ, is also shown there. Integral (3.15) is calculated in this case directly from the graphical construction. Indeed, the product of the signal and its copy is nonzero only within the time interval where the signals overlap. From Fig. 3.4 it is clear that this interval equals τ_u − |τ|, provided the shift does not exceed the pulse duration. Thus, for the signal under consideration

B_s(τ) = U²(τ_u − |τ|) for |τ| ≤ τ_u, and B_s(τ) = 0 for |τ| > τ_u.

The graph of such a function is the triangle shown in Fig. 3.4, b. The width of the base of the triangle is twice the duration of the pulse.
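The triangular result is easy to verify numerically. The sketch below uses assumed values U = 2 and τ_u = 0.1 s and approximates integral (3.15) by a rectangle-rule sum:

```python
import numpy as np

U, tau_u, fs = 2.0, 0.1, 10000       # assumed amplitude, duration, sample rate
n = int(round(tau_u * fs))
s = U * np.ones(n)                   # samples of the rectangular pulse

def acf(tau):
    """Rectangle-rule approximation of B_s(tau) for tau >= 0."""
    m = int(round(tau * fs))
    return np.sum(s[m:] * s[:n - m]) / fs

# The theory predicts the triangle B_s(tau) = U**2 * (tau_u - |tau|):
b_numeric = acf(0.04)
b_theory = U**2 * (tau_u - 0.04)
```

For a constant-amplitude pulse the rectangle rule is exact, so the numeric and theoretical values coincide to machine precision.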

Fig. 3.4. Finding the ACF of a rectangular video pulse

Example 3.4. Find the ACF of a rectangular radio pulse.

We will consider a radio signal of the form s(t) = U cos ω_0t within the interval 0 ≤ t ≤ τ_u (and zero outside it).

Knowing in advance that the ACF is even, we calculate integral (3.15) for τ ≥ 0. The signals then overlap on an interval of length τ_u − τ, so that

B_s(τ) = U² ∫ cos ω_0t · cos ω_0(t + τ) dt (integration over the overlap interval),

where we easily get

B_s(τ) = (U²/2)(τ_u − τ) cos ω_0τ, 0 ≤ τ ≤ τ_u, (3.21)

a small oscillating term of order 1/ω_0 being neglected here, which is admissible when ω_0τ_u ≫ 1.

Naturally, at τ = 0 the value B_s(0) = U²τ_u/2 becomes equal to the energy of this pulse (see Example 1.9). Formula (3.21) describes the ACF of the rectangular radio pulse for all shifts lying within |τ| ≤ τ_u (for negative shifts τ is replaced by |τ| by parity). If the absolute value of the shift exceeds the pulse duration, the autocorrelation function vanishes identically.

Example 3.5. Determine the ACF of a sequence of rectangular video pulses.

In radar, signals consisting of bursts of pulses of the same shape, following one another at a fixed time interval, are widely used. To detect such a burst, and also to measure its parameters, for example its position in time, devices are built that implement hardware algorithms for computing the ACF.

Fig. 3.5. ACF of a burst of three identical video pulses: a - burst of pulses; b - ACF graph

Fig. 3.5, a shows a burst consisting of three identical rectangular video pulses. Its autocorrelation function, calculated using formula (3.15), is also presented (Fig. 3.5, b).

It is clearly seen that the maximum of the ACF is achieved at τ = 0. However, when the delay is a multiple of the sequence period (τ = ±T, ±2T in our case), side lobes of the ACF appear that are comparable in height to the main lobe. Therefore we can speak of a certain imperfection of the correlation structure of this signal.

Autocorrelation function of an infinitely extended signal.

If it is necessary to consider periodic sequences of unlimited duration in time, then the approach to studying the correlation properties of signals must be somewhat modified.

We will assume that such a sequence is obtained from some time-localized, i.e., pulsed, signal when the duration of the latter tends to infinity. To avoid divergence of the resulting expressions, we define the new ACF as the average value of the scalar product of the signal and its copy:

B_s(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} s(t) s(t + τ) dt. (3.22)

With this approach, the autocorrelation function becomes equal to the average mutual power of these two signals.

For example, if we want to find the ACF of a cosine signal unlimited in time, we can use formula (3.21) obtained for a radio pulse of duration τ_u and then pass to the limit τ_u → ∞, taking definition (3.22) into account. As a result we get

B_s(τ) = (U²/2) cos ω_0τ. (3.23)

This ACF is itself a periodic function; its value at τ = 0 is equal to U²/2.
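This limiting result is easy to check numerically. In the sketch below the amplitude, frequency and initial phase are illustrative assumptions, and the average is taken over a finite whole number of periods:

```python
import numpy as np

U, w0, phi = 1.5, 2*np.pi*5, 0.7      # assumed amplitude, frequency, phase
fs = 10000
t = np.arange(0, 10.0, 1/fs)          # average over 50 whole periods

def acf_avg(tau):
    # time-averaged scalar product of the cosine and its shifted copy
    return np.mean(U*np.cos(w0*t + phi) * U*np.cos(w0*(t + tau) + phi))
```

The average reproduces (U²/2)·cos(ω_0τ); the initial phase drops out, as it must for any time-averaged ACF.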

Relationship between the energy spectrum of a signal and its autocorrelation function.

When studying the material in this chapter, the reader may think that the methods of correlation analysis act as some special techniques that have no connection with the principles of spectral decompositions. However, it is not. It is easy to show that there is a close connection between the ACF and the energy spectrum of the signal.

Indeed, in accordance with formula (3.15), the ACF is a scalar product: B_s(τ) = (s, s_τ). Here the symbol s_τ denotes the time-shifted copy of the signal, s_τ(t) = s(t − τ); by the parity of the ACF, (s, s_τ) = B_s(−τ) = B_s(τ).

Turning to the generalized Rayleigh formula (2.42), we can write the equality

B_s(τ) = (s, s_τ) = (1/2π) ∫ S(ω) S_τ*(ω) dω.

The spectral density of the time-shifted signal is S_τ(ω) = S(ω) e^(−jωτ), so that S_τ*(ω) = S*(ω) e^(jωτ).

Thus, we come to the result:

B_s(τ) = (1/2π) ∫ |S(ω)|² e^(jωτ) dω. (3.24)

The squared modulus of the spectral density, as is known, represents the energy spectrum of the signal:

W(ω) = |S(ω)|². (3.25)

So, the energy spectrum and the autocorrelation function are related by the Fourier transform:

B_s(τ) = (1/2π) ∫ W(ω) e^(jωτ) dω.

It is clear that there is also an inverse relationship:

W(ω) = ∫ B_s(τ) e^(−jωτ) dτ. (3.26)
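Because the ACF and the energy spectrum form a Fourier pair, the ACF can be computed through the FFT instead of a direct shift-and-sum. A sketch on an arbitrary test signal (zero-padding makes the circular correlation coincide with the linear one):

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.standard_normal(256)          # arbitrary finite-energy test signal

# ACF via the energy spectrum: inverse FFT of |S(w)|**2.
# Pad to 2N so the circular autocorrelation equals the linear one.
S = np.fft.fft(s, 2 * len(s))
acf_fft = np.fft.ifft(np.abs(S)**2).real[:len(s)]

# Brute-force ACF for comparison
acf_direct = np.array([np.sum(s[k:] * s[:len(s) - k]) for k in range(len(s))])
```

The FFT route costs O(N log N) versus O(N²) for the direct sums, which is why hardware and software correlators usually take it.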

These results are fundamentally important for two reasons. First, it turns out to be possible to evaluate the correlation properties of signals from the distribution of their energy over the spectrum. The wider the signal's frequency band, the narrower the main lobe of the autocorrelation function and the more suitable the signal is for precise measurement of the moment of its beginning.

Secondly, formulas (3.24) and (3.26) indicate the way to experimentally determine the energy spectrum. It is often more convenient to first obtain the autocorrelation function, and then, using the Fourier transform, find the energy spectrum of the signal. This technique has become widespread when studying the properties of signals using high-speed computers in real time.

From the relationship between the ACF and the energy spectrum it follows that the correlation interval τ_k (a measure of the width of the main lobe of the ACF) turns out to be smaller, the higher the upper limit frequency of the signal spectrum.

Restrictions imposed on the form of the autocorrelation function of the signal.

The found connection between the autocorrelation function and the energy spectrum makes it possible to establish an interesting and, at first glance, non-obvious criterion for the existence of a signal with given correlation properties. The point is that the energy spectrum of any signal, by definition, must be nonnegative [see formula (3.25)]. This condition is not fulfilled for every choice of ACF. For example, suppose we take an ACF that is constant, B_s(τ) = B_0, within the interval |τ| ≤ τ_0 and zero outside it,

and calculate the corresponding Fourier transform; then

W(ω) = 2B_0 sin(ωτ_0)/ω.

This alternating function cannot represent the energy spectrum of any signal.
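The sign alternation is easy to exhibit numerically. Below, a rectangular "ACF" (constant B_0 on |τ| ≤ τ_0, zero outside; both values are assumed for illustration) is transformed analytically and the resulting function is shown to take both signs:

```python
import numpy as np

B0, tau0 = 1.0, 1.0                    # assumed height and half-width of the "ACF"
w = np.linspace(0.1, 20, 2000)         # frequency grid (avoiding w = 0)

# Analytic Fourier transform of the rectangle: W(w) = 2*B0*sin(w*tau0)/w
W = 2 * B0 * np.sin(w * tau0) / w

sign_changes = np.sum(np.diff(np.sign(W)) != 0)
```

Since W(ω) dips below zero, it cannot be the energy spectrum of any signal, so no signal has this rectangular ACF.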

Correlation is a mathematical operation, similar to convolution, that produces a third signal from two signals. Two kinds are distinguished: autocorrelation (the autocorrelation function) and cross-correlation (the cross-correlation function). Example:

[Cross correlation function]

[Autocorrelation function]

Correlation is a technique for detecting previously known signals against a background of noise; it is also called optimal filtering. Although correlation is very similar to convolution, they are computed differently, and their areas of application also differ (c(t) = a(t)*b(t) is the convolution of two functions, d(t) = a(t)*b(−t) is their cross-correlation).

Correlation is the same convolution, only one of the signals is inverted from left to right. Autocorrelation (autocorrelation function) characterizes the degree of connection between a signal and its copy shifted by τ. The cross-correlation function characterizes the degree of connection between 2 different signals.

Properties of the autocorrelation function:

  • 1) R(τ)=R(-τ). The function R(τ) is even.
  • 2) If x(t) is a sinusoidal function of time, then its autocorrelation function is a cosine function of the same frequency. Information about the initial phase is lost. If x(t) = A·sin(ωt + φ), then R(τ) = (A²/2)·cos(ωτ).
  • 3) The autocorrelation function and the power spectrum are related by the Fourier transform.
  • 4) If x(t) is any periodic function, then R(τ) for it can be represented as the sum of autocorrelation functions from a constant component and from a sinusoidally varying component.
  • 5) The R(τ) function does not carry any information about the initial phases of the harmonic components of the signal.
  • 6) For a random function of time, R(τ) decreases rapidly with increasing τ. The time interval after which R(τ) becomes equal to 0 is called the autocorrelation interval.
  • 7) A given x(t) corresponds to a well-defined R(τ), but the same R(τ) can correspond to different functions x(t).
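Properties 1, 2 and 5 can be illustrated numerically. The amplitude, frequency and phases below are illustrative assumptions; the average is taken over a whole number of periods:

```python
import numpy as np

A, w, fs, T = 2.0, 2*np.pi*4, 8000, 5.0   # assumed illustrative values
t = np.arange(0, T, 1/fs)                 # 20 whole periods of the signal

def acf(phi, tau):
    # time-averaged ACF of A*sin(w*t + phi) at shift tau
    x = A * np.sin(w * t + phi)
    x_shift = A * np.sin(w * (t + tau) + phi)
    return np.mean(x * x_shift)

tau = 0.07
r1, r2 = acf(0.0, tau), acf(1.3, tau)     # two different initial phases
```

Both values equal (A²/2)·cos(ωτ): the autocorrelation of a sine is a cosine of the same frequency, and the initial phase is lost.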

Original signal with noise:

Autocorrelation function of the original signal:

Properties of the cross-correlation function (CCF):

  • 1) The CCF is neither an even nor an odd function, i.e., R_xy(τ) is not equal to R_xy(−τ).
  • 2) The CCF remains unchanged if the order of the functions is swapped and the sign of the argument is changed simultaneously, i.e., R_xy(τ) = R_yx(−τ).
  • 3) If random functions x(t) and y(t) contain no constant components and are produced by independent sources, then R_xy(τ) tends to 0. Such functions are called uncorrelated.

Original signal with noise:

Square wave of the same frequency:

Cross-correlation of the original signal and the square wave:





Signals and linear systems. Correlation of signals

Topic 6. Signal correlation

Extreme fear and extreme ardor of courage alike upset the stomach and cause diarrhea.

Michel Montaigne. French lawyer-thinker, 16th century.

What a trick! Two functions have 100% correlation with a third and are orthogonal to each other. Well, the Almighty had His jokes during the creation of the World.

Anatoly Pyshmintsev. Novosibirsk geophysicist of the Ural school, 20th century.

1. Autocorrelation functions of signals. The concept of the autocorrelation function (ACF). ACF of time-limited signals. ACF of periodic signals. Autocovariance functions. ACF of discrete signals. ACF of noisy signals. ACF of code signals.

2. Cross-correlation functions of signals (CCF). Cross-correlation of noisy signals. CCF of discrete signals. Estimation of periodic signals in noise. Function of mutual correlation coefficients.

3. Spectral densities of correlation functions. Spectral density of the ACF. Signal correlation interval. Spectral density of the CCF. Calculation of correlation functions using the FFT.

Introduction

Correlation, and its special case for centered signals - covariance, is a method of signal analysis. We present one of the options for using the method. Let us assume that there is a signal s(t), which may (or may not) contain some sequence x(t) of finite length T, the temporal position of which interests us. To search for this sequence in a time window of length T sliding along the signal s(t), the scalar products of the signals s(t) and x(t) are calculated. Thus, we “apply” the desired signal x(t) to the signal s(t), sliding along its argument, and by the value of the scalar product we estimate the degree of similarity of the signals at the points of comparison.

Correlation analysis makes it possible to establish in signals (or in series of digital data of signals) the presence of a certain connection between changes in signal values ​​on an independent variable, that is, when large values ​​of one signal (relative to the average signal values) are associated with large values ​​of another signal (positive correlation), or, conversely, small values ​​of one signal are associated with large values ​​of another (negative correlation), or the data of two signals are not related in any way (zero correlation).

In the functional space of signals, this degree of connection can be expressed in normalized units of the correlation coefficient, i.e. in the cosine of the angle between the signal vectors, and, accordingly, will take values ​​from 1 (complete coincidence of signals) to -1 (complete opposite) and does not depend on the value (scale) of the units of measurement.

In the autocorrelation version, a similar technique is used to determine the scalar product of the signal s(t) with its own copy sliding along the argument. Autocorrelation allows you to estimate the average statistical dependence of current signal samples on their previous and subsequent values ​​(the so-called correlation radius of signal values), as well as to identify the presence of periodically repeating elements in the signal.

Correlation methods are of particular importance in the analysis of random processes to identify non-random components and evaluate the non-random parameters of these processes.

Note that there is some confusion regarding the terms "correlation" and "covariance". In the mathematical literature, the term "covariance" is applied to centered functions and "correlation" to arbitrary ones. In the technical literature, and especially in the literature on signals and their processing methods, exactly the opposite terminology is often used. This is of no fundamental importance, but when consulting the literature it is worth paying attention to how these terms are being used.

Signal correlation functions are used for integral quantitative assessments of signal shapes and the degree of their similarity to each other.

Autocorrelation functions (ACF) of signals (correlation function, CF). For deterministic signals with finite energy, the ACF is a quantitative integral characteristic of the signal shape; it is the integral of the product of two copies of the signal s(t), shifted relative to each other by time τ:

B_s(τ) = ∫_{−∞}^{∞} s(t) s(t + τ) dt. (2.4.1)

As follows from this expression, the ACF is the scalar product of the signal and its copy as a function of the shift τ. Accordingly, the ACF has the physical dimension of energy, and at τ = 0 the value of the ACF is directly equal to the signal energy and is the maximum possible (the cosine of the angle of the signal's interaction with itself is equal to 1):

B_s(0) = ∫ s²(t) dt = E_s.

The ACF is a continuous and even function. The latter is easy to verify by the change of variable t = t′ − τ in expression (2.4.1):

B_s(τ) = ∫ s(t′ − τ) s(t′) dt′ = B_s(−τ).

Given the parity, the graphical representation of the ACF is usually given only for positive values of τ. The + sign in expression (2.4.1) means that as τ increases from zero, the copy s(t + τ) shifts to the left along the t axis. In practice, signals are usually also specified on an interval of positive argument values from 0 to T, which makes it possible to extend the interval with zero values when necessary for mathematical operations. Within these computational limits it is more convenient to shift the copy of the signal to the left along the argument axis, i.e., to use the function s(t − τ) in expression (2.4.1):

B_s(τ) = ∫_0^T s(t) s(t − τ) dt. (2.4.1')

As the shift τ of a finite signal increases, the temporal overlap of the signal with its copy decreases and, accordingly, the cosine of the interaction angle and the scalar product as a whole tend to zero:

B_s(τ) → 0 as |τ| → ∞.

Example. On the interval (0, T) a rectangular pulse with amplitude A is given. Calculate the autocorrelation function of the pulse.

When the copy of the pulse is shifted to the right along the t axis, for 0 ≤ τ ≤ T the signals overlap on the interval from τ to T. The scalar product is

B_s(τ) = ∫_τ^T A² dt = A²(T − τ).

When the copy is shifted to the left, for −T ≤ τ < 0 the signals overlap on the interval from 0 to T + τ. The scalar product is

B_s(τ) = ∫_0^{T+τ} A² dt = A²(T + τ).

For |τ| > T the signal and its copy have no points of overlap, and the scalar product is zero (the signal and its shifted copy become orthogonal).

Summarizing the calculations, we can write:

B_s(τ) = A²(T − |τ|) for |τ| ≤ T, and B_s(τ) = 0 for |τ| > T.

In the case of periodic signals, the ACF is calculated over one period T, averaging the scalar product of the signal and its shifted copy within this period:

B_s(τ) = (1/T) ∫_0^T s(t) s(t − τ) dt.

At τ = 0 the value of the ACF in this case is equal not to the energy but to the average power of the signal within the interval T. The ACF of a periodic signal is itself a periodic function with the same period T. Thus, for the signal s(t) = A cos(ω_0t + φ_0) with T = 2π/ω_0 we have:

B_s(τ) = (1/T) ∫_0^T A cos(ω_0t + φ_0) · A cos(ω_0(t − τ) + φ_0) dt = (A²/2) cos(ω_0τ).

Note that the obtained result does not depend on the initial phase of the harmonic signal, which is typical for any periodic signals and is one of the properties of the CF.

For signals specified on a certain interval, the ACF is calculated with normalization to the interval length; for an interval (a, b):

B_s(τ) = (1/(b − a)) ∫_a^b s(t) s(t + τ) dt. (2.4.2)

In the limit, for non-periodic signals with the ACF measured on an interval T:

B_s(τ) = lim_{T→∞} (1/T) ∫_0^T s(t) s(t + τ) dt. (2.4.2')

The autocorrelation of a signal can also be assessed by the autocorrelation coefficient, computed (for centered signals) by the formula:

r_s(τ) = cos φ(τ) = ⟨s(t), s(t + τ)⟩ / ‖s(t)‖².
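As the cosine of the angle between signal vectors, this coefficient always lies in [−1, 1]. A small sketch (the data are illustrative; the helper normalizes by the product of norms, which for copies of one signal with equal energy coincides with division by ‖s‖²):

```python
import numpy as np

def corr_coef(x, y):
    # cosine of the angle between the signal vectors x and y
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

rng = np.random.default_rng(2)
s = rng.standard_normal(200)        # arbitrary test signal
```

corr_coef(s, s) gives 1 (full coincidence) and corr_coef(s, −s) gives −1 (complete opposite), independently of the scale of the signals.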

The cross-correlation function (CCF) of signals shows the degree of similarity of shifted copies of two different signals and their relative position along the coordinate (independent variable). The same formula (2.4.1) is used as for the ACF, but under the integral is the product of two different signals, one of which is shifted by time τ:

B_12(τ) = ∫_{−∞}^{∞} s_1(t) s_2(t + τ) dt. (2.4.3)

Making the change of variable t = t′ − τ in formula (2.4.3), we obtain:

B_12(τ) = ∫ s_1(t′ − τ) s_2(t′) dt′ = ∫ s_2(t′) s_1(t′ − τ) dt′ = B_21(−τ).

It follows that the parity condition B_12(τ) = B_12(−τ) is not satisfied for the CCF, and the CCF values are not required to have a maximum at τ = 0. This can be clearly seen in Fig. 2.4.1, where two identical signals are shown with centers at the points 0.5 and 1.5. Calculation using formula (2.4.3) with a gradual increase in τ means successive shifts of the signal s_2(t) to the left along the time axis (for each value of s_1(t), the values s_2(t + τ) are taken for the integrand product).
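The identity B_12(τ) = B_21(−τ) can be checked numerically on arbitrary discrete signals (the arrays below are illustrative; note that np.correlate's lag convention differs from (2.4.3) by a sign, but the swap-and-reverse identity is the same):

```python
import numpy as np

rng = np.random.default_rng(1)
s1 = rng.standard_normal(64)        # two arbitrary, different test signals
s2 = rng.standard_normal(64)

b12 = np.correlate(s1, s2, mode="full")   # lags -(N-1) .. (N-1)
b21 = np.correlate(s2, s1, mode="full")
```

Reversing one of the two CCFs reproduces the other exactly, while neither of them is itself symmetric about zero lag.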
