Sunday 22 July 2018

DISCRETE SIGNAL OPERATIONS

EDITOR: B. SOMANATHAN NAIR


1. INTRODUCTION
In the previous two blogs, we discussed the operations of scaling and shifting on continuous time-domain signals. In this blog, we discuss how these operations are performed on discrete time-domain signals.

Example 1: Consider the discrete-time function x(n) shown in Fig. 1. Obtain the plot of the function x(n–2).    

Solution: The given discrete-time function is shifted to the right by two units; the resulting plot is shown in Fig. 2.

Example 2: Obtain the plot of the function x(n–2) using the plot shown in Fig. 3.                                                                   
                                                                       
Solution: The desired function is obtained by shifting x(n) to the right by two units as shown in Fig. 4.

Example 3: Using the plot shown in Fig. 3, obtain the function x(2n–3).    


Solution: The desired function x(2n–3) is a time-shifted and compressed version of the given function x(n). We first shift the given function by three units to the right, as shown in Fig. 5.
            This shifted function is then compressed by a factor of 2. In the discrete-time domain, compression by a factor of 2 means dividing each sample time by 2, keeping the samples whose new indices are whole numbers (integers), and discarding those whose new indices are fractions. To illustrate this idea, consider the sample of amplitude –1 at time n = 1 in Fig. 5. Dividing by the compression factor 2 gives the fraction 1/2; since this is not an integer, the sample is discarded.
            Next, consider the sample of amplitude –1 at time n = 2 in Fig. 5. Dividing by 2 gives 2/2 = 1; since this is a whole number, the sample is kept and drawn in Fig. 6 as the sample of amplitude –1 at n = 1.
            In the same manner, dividing the sample of amplitude +1 at n = 4 by 2 gives the whole number 4/2 = 2, so it is drawn as the sample of amplitude +1 at n = 2 in Fig. 6. The sample of +1 at n = 5, however, gives 5/2 = 2.5, a fraction, and is discarded. The desired function x(2n–3) is thus obtained as shown in Fig. 6.
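The procedure above can be sketched in code. The sample values below are assumed from the worked example, since the figures are not reproduced here: x(n) has samples –1, –1, +1, +1 at n = –2, –1, 1, 2, so that x(n – 3) matches the description of Fig. 5.

```python
# Sketch of Example 3: computing y(n) = x(2n - 3) directly by substitution.
# The sample values of x(n) are assumed from the worked example.
x = {-2: -1, -1: -1, 1: 1, 2: 1}   # x(n): nonzero samples only

def sample(signal, n):
    """Return the signal value at index n (zero outside the defined samples)."""
    return signal.get(n, 0)

# y(n) = x(2n - 3): substitute 2n - 3 for the argument at each integer n
y = {n: sample(x, 2 * n - 3) for n in range(-5, 6)}

nonzero = {n: v for n, v in y.items() if v != 0}
print(nonzero)   # {1: -1, 2: 1}, matching Fig. 6
```

Note that substituting 2n – 3 directly reproduces the shift-then-decimate result: the surviving samples are –1 at n = 1 and +1 at n = 2.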
Example 4: Figure 7 shows a discrete-time function x(n). Get the function x(2n).

Solution: The function x(2n) is the discrete-time compression of the function x(n) shown in Fig. 7. As explained in Example 3, we divide each sample time n by 2 to obtain the samples of x(2n). Dividing n = –3, –1, 1, and 3 by the compression factor 2 yields the fractions –3/2, –1/2, 1/2, and 3/2, respectively, so these samples are discarded. Dividing n = –2 and n = 2 by 2, however, yields the whole numbers –1 and 1. Hence the sample amplitudes of 2 at these two points (i.e., at n = –2 and n = 2) are retained as the samples at n = –1 and n = 1, respectively, and marked in Fig. 8 to yield the function x(2n).
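A minimal sketch of this decimation; the amplitudes at the odd indices are assumed (they are discarded in any case), while the text gives amplitude 2 at n = ±2.

```python
# Sketch of Example 4: compression x(2n) keeps only the samples of x at even
# indices. Amplitudes at odd indices are assumed placeholders; the text gives
# amplitude 2 at n = -2 and n = 2.
x = {-3: 1, -2: 2, -1: 1, 1: 1, 2: 2, 3: 1}   # assumed sample values of x(n)

# y(n) = x(2n): only indices n with 2n defined in x survive
y = {n: x[2 * n] for n in range(-3, 4) if 2 * n in x}
print(y)   # {-1: 2, 1: 2}, matching Fig. 8
```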

Thursday 19 July 2018

SIGNAL OPERATIONS - II


EDITOR: B. SOMANATHAN NAIR




Example 5 (Combined Time Scaling and Shifting – Generalized Method of Solution 1): Figure 9 shows a rectangular function x(t) having a base width of 2τ units of time and an amplitude of one unit. Plot the function y(t) = x(at+b).

Solution: This is a combined operation of time shifting and scaling.                 
                                                                 
Step 1: We rewrite the given function by taking the factor a outside the brackets and then putting the remaining terms inside a single bracket to get:

                                                        y(t) = x[a(t+b/a)]  (1)
                                                                      
Step 2 (Time shifting): The next step is to find the function x(t + b/a). This is the function x(t) shifted to the left by b/a units of time (Fig. 10). It may be noted that in the figure, the pulse limits are given as c = –b/a + τ and d = –b/a – τ, respectively.

Step 3 (Compression): Finally, the function x[a(t + b/a)] is obtained by compressing x(t + b/a) in time by the factor a about its centre –b/a. The pulse centre therefore stays at –b/a, while the half-width shrinks from τ to τ/a. This is shown in Fig. 11.

Example 6: Assume that in Fig. 9, the base width is 2 units of time and the amplitude one unit. Plot the function y(t) = x(2t+3).


Solution: Here, we have a = 2, b = 3, and τ = 1. Substituting these values, the centre of the shifted pulse x(t + b/a), and hence of the final compressed pulse, is at

 ‒b/a = ‒3/2 = ‒1.5

The compression by a shrinks the half-width of the pulse from τ to τ/a, so the limits of the final pulse are

c = ‒b/a + τ/a = ‒3/2 + 1/2 = ‒1

d = ‒b/a ‒ τ/a = ‒3/2 ‒ 1/2 = ‒2

The base width is now 2τ/a = 1 unit, in agreement with the second method of Example 7. Using the above results, we get the desired function y(t) as shown in Fig. 12.
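The limits of the pulse can be cross-checked numerically: y(t) = x(at + b) is nonzero exactly where –τ ≤ at + b ≤ τ, and the compress-then-shift route of Example 7 must give the same interval.

```python
# Cross-check of the pulse limits of y(t) = x(at + b) for a unit pulse x(t)
# supported on [-tau, tau], with a = 2, b = 3, tau = 1.
a, b, tau = 2, 3, 1

# Direct substitution: -tau <= a*t + b <= tau  =>  t in [(-tau-b)/a, (tau-b)/a]
direct = ((-tau - b) / a, (tau - b) / a)

# Compress first (support becomes [-tau/a, tau/a]), then shift left by b/a
method2 = (-tau / a - b / a, tau / a - b / a)

print(direct, method2)   # (-2.0, -1.0) (-2.0, -1.0)
```

Both routes place the pulse on [–2, –1], centred at –1.5 with base width 1.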



Example 7 (Combined Time Scaling and Shifting – Generalized Method of Solution 2): Using the function shown in Fig.9, plot the function y(t) = x(at+b), where a = 2, b = 3, and τ = 1.

Solution: This second method is simpler than the first. We use the following steps to perform the operation x(at+b) = x(2t+3).

Step 1: In this method, we first perform the compression operation to get x(at).

Step 2: Next, we shift x(at) by b/a units of time to the left or right, as required.
Here, we have a = 2, b = 3, and τ = 1. In Step 1, we compress x(t) by a factor of 2; this gives the waveform shown in Fig. 13, of base width 2τ/a = 1 unit. The next step is to shift x(2t) by b/a = 3/2 = 1.5 units to the left to get the function y(t) = x(2t+3), as shown in Fig. 14.
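Example 7 can also be checked by direct evaluation; the sketch below simply samples y(t) = x(2t + 3) on a grid and locates the shifted, compressed pulse.

```python
# Sketch of Example 7, evaluated numerically: x(t) is the unit pulse of base
# width 2 (tau = 1); we sample y(t) = x(2t + 3) on a grid to find its support.
def x(t, tau=1.0):
    return 1.0 if -tau <= t <= tau else 0.0

ts = [i / 10 for i in range(-40, 11)]            # grid from -4.0 to 1.0
support = [t for t in ts if x(2 * t + 3) == 1.0]
print(min(support), max(support))                # -2.0 -1.0
```

The pulse occupies [–2, –1], centred at –b/a = –1.5, as in Fig. 14.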











Wednesday 18 July 2018

SIGNAL OPERATIONS - I

EDITOR: B. SOMANATHAN NAIR




1. INTRODUCTION
The following are the mathematical operations performed on signals:

           1.    Addition of signals.       
           2.    Multiplication of signals.
           3.    Differentiation of signals.
           4.    Integration of signals.
           5.    Amplitude scaling and frequency scaling of signals.
           6.    Inversion or reflection of signals.
           7.    Time shifting of signals.

In this blog, we discuss the scaling, inversion, and time-shifting operations of signals.

2. SCALING OPERATION OF SIGNALS
Scaling operations can be performed on the amplitude of a signal and on its time (equivalently, frequency) scale.

2.1 AMPLITUDE SCALING
In amplitude scaling, the amplitude of a given signal is multiplied by a constant; multiplication by a factor greater than one increases the amplitude, and multiplication by a factor less than one reduces it. For example, consider a signal y(t) = f(t). Then we may express its amplitude-scaled version as

                                                            y(t) = Af(t)    (1)

where A is a constant of multiplication, known as the amplitude scaling factor. It may be noted that A can be greater than, equal to, or less than 1. 
                                       
2.2 TIME SCALING
In time scaling, the time axis of a given signal is scaled by a constant; a scaling factor greater than one compresses the signal (reduces its period), and a factor less than one expands it (increases its period). Let y(t) = f(t) be a given signal; then its time-scaled version is:

                                                                 y(t) = f(at)  (2)                                                

where a is an integer or fraction, whose value can be greater than, equal to, or less than 1.

3. INVERSION (REFLECTION)

Let x(t) be a given signal. Then the signal x(‒t) is called the reflection or inversion of x(t): it is the mirror image of x(t) about the vertical axis (t = 0).

4. TIME SHIFTING
In many applications, we want a signal to be shifted in time from its original location, to the left or to the right. For example, consider a signal x(t). The signal x(t ‒ T) is x(t) moved to the right by T units of time from its original location. Similarly, the signal x(t + T) is x(t) moved to the left by T units of time.
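The four operations above can be sketched on a concrete signal; the triangular pulse x(t) below is a hypothetical example chosen for illustration, not a figure from the text.

```python
# A minimal sketch of amplitude scaling, time scaling, reflection, and time
# shifting, using a hypothetical triangular pulse x(t).
def x(t):
    return max(0.0, 1.0 - abs(t))      # triangle of base width 2, peak 1 at t = 0

A, a, T = 2.0, 2.0, 1.0                # illustrative constants
amp_scaled  = lambda t: A * x(t)       # y(t) = A x(t): amplitude doubled
time_scaled = lambda t: x(a * t)       # y(t) = x(at): base width halved
reflected   = lambda t: x(-t)          # y(t) = x(-t): mirror image about t = 0
shifted     = lambda t: x(t - T)       # y(t) = x(t - T): moved right by T

print(amp_scaled(0.0), time_scaled(0.25), reflected(0.5), shifted(1.0))
# 2.0 0.5 0.5 1.0
```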

ILLUSTRATIVE EXAMPLES


Example 1 (Inversion): Invert the pulse shown in Fig. 1.



Solution: The inverted signal is shown in Fig. 2. It can be seen that the whole pulse is reflected in time about the y-axis (the line t = 0).


Example 2 (Time Shifting): Figure 3 shows a signal x(t). Find the signal y(t) = x(t − 2).


Solution: The signal shown in Fig. 3 is symmetrical with respect to the origin. When this signal is shifted to the right by two units of time, we obtain the waveform shown in Fig. 4.




Example 3: (Time Scaling): Figure 5 shows a triangular function x(t) of base width 2 units of time and an amplitude of one unit. Plot the functions (a) y1(t) = x(t/2) and (b) y2(t) = x(2t).



Solution:  

(a) y1(t) = x(t/2): In this case, the time variable is divided by 2, so the signal is expanded: the base width of the function gets doubled, while the amplitude does not change and remains at unity. This is plotted in Fig. 6.




(b) y2(t) = x(2t): In this case, the time variable is multiplied by a factor of 2, so the signal is compressed: the base width of the function gets halved, while the amplitude does not change and remains at unity. This is plotted in Fig. 7.



Example 4 (Amplitude Scaling): Using the data in Example 3, find y3(t) = 2x(2t).

Solution: In this case, the amplitude of the function y2(t) = x(2t) becomes twice that shown in Fig. 7. The result is shown in Fig. 8.
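A quick numeric check of Examples 3 and 4, using the triangular x(t) of base width 2 and unit amplitude described in the text:

```python
# Check of Examples 3 and 4: x(t/2) doubles the base width, x(2t) halves it,
# and 2x(2t) doubles the amplitude without changing the width.
def x(t):
    return max(0.0, 1.0 - abs(t))   # triangle on [-1, 1], peak 1 at t = 0

y1 = lambda t: x(t / 2)       # base [-2, 2], peak 1
y2 = lambda t: x(2 * t)       # base [-0.5, 0.5], peak 1
y3 = lambda t: 2 * x(2 * t)   # base [-0.5, 0.5], peak 2

print(y1(2.0), y1(0.0), y2(0.5), y2(0.0), y3(0.0))   # 0.0 1.0 0.0 1.0 2.0
```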


Saturday 14 July 2018

SIGNALS COMMONLY ENCOUNTERED IN SIGNAL PROCESSING APPLICATIONS


1. INTRODUCTION
In signal processing applications, we come across several types of signals, which can be defined mathematically. The most commonly encountered signals are:

·         Delta or impulse function
·         Step function
·         Ramp function
·         Parabolic and exponential functions
·         Periodic functions

These functions are usually applied as inputs to various systems, and we evaluate the performance of the systems based on these inputs.

2. THE DELTA (IMPULSE) FUNCTION
The ideal delta (impulse) function is defined as a function that has infinite amplitude and zero duration. Such a function can exist only in theory, not in practice. This is because every practical signal carries finite energy and requires a finite period (however small this may be) for its existence; it cannot exist for a duration of zero. Hence we conclude that the practical delta function is one that has an extremely high amplitude and lasts for an extremely short duration of time.

2.1 DELTA FUNCTION IN CONTINUOUS TIME DOMAIN
The delta function is defined in the continuous time-domain (CTD) mode by

                                    δ(t) = 0 for t ≠ 0, and ∫ δ(t) dt = 1 (integrated from ‒∞ to ∞)   (1)

Equation (1) says that the area under the curve δ(t), integrated between the infinite limits, is unity. As stated before, the delta function exists only theoretically in its ideal form. In practice, there are several mathematical functions that can be approximated (within limits) to the delta function.

            Consider Fig. 1, which shows a rectangular pulse of width τ (tau) and height 1/τ. Now, when τ → 0, 1/τ → ∞, while the area of the pulse remains τ × (1/τ) = 1. The figure can thus be approximated as a delta function in the limit. In mathematical form, the delta function can be approximated as

                                    δ(t) = lim τ→0 pτ(t), where pτ(t) = 1/τ for 0 ≤ t ≤ τ, and 0 elsewhere


2.2 DELTA FUNCTION IN DISCRETE TIME DOMAIN
The continuous-time delta function is often called the Dirac delta function, after P. A. M. Dirac. Its discrete time-domain (DTD) counterpart, also known as the unit sample (or Kronecker delta), is defined as

                                                            δ(n) = 1, n = 0
                                                                    = 0, elsewhere  (2)

Figure 2 shows the representation of the unit delta function.  As shown in the figure, the unit delta function exists only at time n = 0, and has no value at other places in the graph.


Since the delta function has zero (or very short) duration, it is also called an impulse function: it acts like a sudden shock, and a karate chop very nearly approximates a delta function. The impulse input is usually used to test the ability of a given system to withstand sudden shocks of extremely large amplitude, which can occur at any instant of time. The following examples illustrate this idea.

Example 1:   Plot the following impulse functions:
                        (a) δ(n -1)   
                        (b) δ(n +2)
Solution:
(a) To obtain the position of the delayed delta function, we write
                                   
                                    δ(n - 1) = δ(0) = 1, at n = 1
                                                             = 0, elsewhere                   (1)                            
       
From (1), we find that δ(n - 1) is a delta function shifted to the right by one unit of time.

(b)  By a similar argument, we find that

                           δ(n + 2) = δ(0) = 1, at n = -2  (2)

Equation (2) reveals that δ(n + 2) is a delta function shifted to the left by two units of time. The functions δ(n - 1) and δ(n + 2) are plotted as shown in Fig. 3.
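The two shifted impulses can be generated directly from the definition in Eq. (2):

```python
# Sketch of Example 1: shifted unit-sample (delta) sequences. delta(n - 1) is
# nonzero only at n = 1; delta(n + 2) only at n = -2.
def delta(n):
    return 1 if n == 0 else 0

d_right = {n: delta(n - 1) for n in range(-4, 5)}   # delta(n - 1)
d_left  = {n: delta(n + 2) for n in range(-4, 5)}   # delta(n + 2)

print([n for n, v in d_right.items() if v],
      [n for n, v in d_left.items() if v])          # [1] [-2]
```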


  
3. STEP FUNCTION
We define the unit step function in the continuous-time domain mode as

                                  u(t) = 1,  0 ≤ t < ∞
                                         = 0,  elsewhere    (3)
           
In the discrete-time domain mode, the unit step function becomes

                                                          u(n) = 1,  0 ≤ n < ∞
                                                                   = 0, elsewhere  (4)
Figure 4 shows the unit step function in the continuous time-domain mode and Fig. 5 shows the unit step function in the discrete time-domain mode. It may be noted that:
      ·         In a step function, the transition of the waveforms from 0 to 1 occurs in zero time.
·         The step function can assume any desired amplitude. However, when the amplitude is unity, we call it the unit step function.


4. UNIT RAMP FUNCTION
We define the unit ramp function in the continuous-time domain mode as:

                                 r(t) = t,     0 ≤ t < ∞
                                        = 0, elsewhere           (5)
           
and the unit ramp function in the discrete-time domain mode as:

                                    r(n) = n,  0 ≤ n < ∞
                                           = 0,  elsewhere     (6)

Figure 6 shows the continuous time-domain version of the unit ramp function, and Fig. 7 shows its discrete time-domain version. As shown in the figures, the ramp function is a waveform whose amplitude is proportional to time.

5. PARABOLIC FUNCTIONS
We define a parabolic waveform in continuous time-domain mode as:
                       
                                    p(t) = t²,   ‒∞ < t < ∞   (7)
                                                         
In the discrete time-domain mode, we define it as

                                                             p(n) = n²,   ‒∞ < n < ∞   (8)

Figure 8 shows the parabolic waveform in the continuous time-domain mode. The corresponding discrete version is shown in Fig. 9.
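The three standard sequences can be generated directly from their definitions; a small sketch over a few indices:

```python
# The unit step, unit ramp, and parabolic sequences of Eqs. (4), (6), and (8),
# generated over a small index range.
def u(n): return 1 if n >= 0 else 0       # unit step
def r(n): return n if n >= 0 else 0       # unit ramp
def p(n): return n ** 2                   # parabola, defined for all n

ns = range(-2, 4)
print([u(n) for n in ns])   # [0, 0, 1, 1, 1, 1]
print([r(n) for n in ns])   # [0, 0, 0, 1, 2, 3]
print([p(n) for n in ns])   # [4, 1, 0, 1, 4, 9]
```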



We can see that the parabolic waveform can be obtained by integrating the ramp waveform (in the continuous time-domain mode).  In turn, the ramp waveform can be obtained by differentiating the parabolic waveform.
            Similarly, the ramp waveform (in the continuous time-domain mode) can be obtained by integrating the step waveform and in turn, the step waveform can be obtained by differentiating ramp waveform.
      Finally, we find that the delta function may be obtained by differentiating the step waveform and in turn, the step waveform can be obtained by integrating the delta function.
In general, functions of the type f(t) = a^t, where a is a constant, are known as exponential functions. Usually, for exponential functions, a is chosen as e, the base of the natural logarithm.

6. PERIODIC FUNCTIONS
So far, we have discussed waveforms that are non-periodic. Now, let us discuss a few typical periodic waveforms. Sinusoidal waveforms are considered the most fundamental of all periodic waveforms. They are mathematically expressed as

                                                      y(t) = Vmax sin ωt  (9)

in the continuous time-domain mode, where Vmax = amplitude and ω = angular frequency. In the discrete time-domain mode, it will take the form 

                                                        y(n) = Vmax sin nω  (10)

Other periodic waveforms in common use are the square, triangular, and sweep waveforms. These can be derived from the waveforms we have already discussed, and hence are not described here; they will be taken up as and when the need arises.
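A short sketch of Eq. (10); the values of Vmax and ω below are illustrative, not taken from the text.

```python
# Sketch of Eq. (10): a discrete-time sinusoid y(n) = Vmax * sin(n * w),
# sampled over one period. Vmax and w are illustrative values.
import math

Vmax, w = 5.0, 2 * math.pi / 8          # 8 samples per period
y = [Vmax * math.sin(n * w) for n in range(8)]

print(round(max(y), 6))    # 5.0  (the peak amplitude Vmax)
```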



Monday 9 July 2018

SIGNALS AND SYSTEMS - VII: PROPERTIES OF SYSTEMS - STABILITY

EDITOR: B. SOMANATHAN NAIR



1. INTRODUCTION
A system is said to be stable if, as time t = nT tends to infinity, the system reaches its steady-state condition. In the steady-state condition, a system does not vibrate or oscillate, and its amplitude remains within safe limits. It may be noted that if the amplitude exceeds these safe limits, the system will fail or be destroyed. The stability of systems can best be illustrated by considering the following examples.
 Consider a chair with three legs of equal lengths. Let us assume that it is standing on these legs over a flat ground. We find that the chair is in a very stable state in this position and that we can sit on it without fear of falling down. If we try to swing the chair by applying a swinging force, it will only move slightly, and will immediately return to the stable state again. The chair resting in this position is said to be asymptotically stable.
            Now, consider the situation wherein we remove one leg of the chair. We can still balance it in a position that appears to be stable. However, we know that it is not really stable in this position, and that a small horizontal force can easily topple it to the ground. In this condition, we say that the chair is in an unstable state.
            Consider a third situation, wherein the chair is modified into a swinging chair. Such a swinging chair is said to be marginally stable, since it does not occupy a fixed stable position. In a marginally stable condition, a system is not at rest; but the amplitudes of vibration involved remain well within safe limits.
            We can illustrate stability of electronic devices (and hence, systems) by considering the case of a bipolar junction transistor. Assume that the transistor under consideration is capable of handling an average value of 1 A of collector current. Let its maximum current rating be 2 A. If this transistor is operated at 1 A, then the heat developed in its collector region will be well within safe limits.
Now, consider the case of a transistor being used as a power amplifier, with a heat sink mounted on it. The heat sink allows us to operate the transistor somewhat above its rated current: the excess heat generated in the collector region is dissipated from the collector surface into the surrounding atmosphere, keeping the transistor temperature within safe limits. If, however, by any chance, the current exceeds this safe value, the transistor gets destroyed. We therefore say that the transistor is in a marginally stable condition.
Now, suppose we are trying to operate the transistor at a current much higher than that required for marginally stable operation. Then, definitely the transistor will get destroyed as the heat generated in the collector region will be well above the safe limits. We call this state as the unstable condition.
Finally, one word about oscillations produced by oscillators: we can construct several sinusoidal and non-sinusoidal oscillators using electronic circuits. Such oscillators produce waveforms whose amplitudes vary at every instant; yet we find that they are stable, as their amplitudes are well within safe limits. These oscillators, therefore, belong to the class of marginally stable systems.
            As a final example of instability, consider a place with an extremely cold atmosphere. In such conditions, the human body may start shivering. Every physical object having mass has a natural frequency of vibration. When the vibrations created by an external agent such as the cold coincide with the natural frequency of vibration of the human body, the amplitude of vibration grows excessively high, and the person may suddenly collapse. So, we see that instability is an extremely dangerous condition in many cases.

 2. BOUNDED-INPUT, BOUNDED-OUTPUT CONDITION FOR STABILITY
We now derive the necessary and sufficient condition for testing whether a given system is stable or not. We have already seen that the relation between the input and output of a system is given by the convolution sum

                        y(n) = Σ x(k) h(n ‒ k)   (1)

where the summation runs over all k. Taking magnitudes and applying the triangle inequality, we get

                        │y(n)│ = │Σ x(k) h(n ‒ k)│ ≤ Σ │x(k)││h(n ‒ k)│   (2)

By a change of the summation variable, (2) may also be written as

                        │y(n)│ ≤ Σ │x(n ‒ k)││h(k)│   (3)

If the output is to be bounded (or finite) in the steady state, then we must have

                        │y(n)│ ≤ K1   (4)

where K1 is a constant. Similarly, if the input is assumed to be bounded, then

                        │x(n ‒ k)│ ≤ K2   (5)

where K2 is another constant. Using (5) in (3), we find

                        │y(n)│ ≤ K2 Σ │h(k)│   (6)

For the right-hand side of (6) to be finite, the impulse response must be absolutely summable, i.e.,

                        Σ │h(k)│ < ∞   (7)

Equation (7) is the necessary and sufficient condition for the stability of a given system. Since it is derived from the bounded-input/bounded-output conditions, it is called the BIBO (Bounded-Input, Bounded-Output) condition for stability.
            We may also state this in a slightly different way. If Σ │h(k)│ = K, a finite constant, then (6) gives

                        │y(n)│ ≤ K2 K = K3   (8)

where K3 is a new constant. Equation (8) says that to test for stability, we should check whether the output is bounded for a bounded input. The conditions based on this theory for testing BIBO stability are discussed in the following examples.
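The BIBO condition can be checked numerically by truncating the sum Σ│h(k)│; the impulse responses h1 and h2 below are illustrative, not taken from the text.

```python
# A minimal numeric check of the BIBO condition: a system is stable when its
# impulse response is absolutely summable. h1 is a decaying exponential
# (stable); h2 is a growing one (unstable). The infinite sums are truncated.
def abs_sum(h, terms=200):
    return sum(abs(h(k)) for k in range(terms))

h1 = lambda k: 0.5 ** k     # sum |h1(k)| converges to 2 (geometric series)
h2 = lambda k: 1.5 ** k     # sum |h2(k)| grows without bound

print(abs_sum(h1))          # ~2.0
print(abs_sum(h2) > 1e6)    # True
```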
                                                                          
Example 1: Test the stability of the system given by the relation

y(n) = Ax(n) (1)

Solution: For testing BIBO stability, the following rules are used:

·     First, assume that a bounded input is applied to the system to be tested. In some cases, we assume that the input is a constant. In other cases, we use the delta function, unit step function, etc., which are typical examples of bounded inputs. For example, the amplitude of the unit step function u(n) is always unity for any value of time n, and hence is always bounded at 1.
·  See whether the amplitude of the output of the system remains constant (i.e., bounded) at a fixed value, or tends to infinity, as n tends to infinity.
·       Conclude that the given system is stable, if the output remains less than or equal to a fixed value; otherwise it is unstable.

Using the first rule given above, let us rewrite (1) as

                                                y(n) = Au(n)   (2)

Since u(n) is bounded at 1 (or, unity), y(n) will be bounded at A. If A is a constant, then we conclude that the system is stable.

Example 2: Test the system given by y(n) = A cos(ω0) u(n) for stability.

Solution: Following the procedure given above, we find that since u(n) is bounded at 1, y(n) will be bounded at A cos(ω0). Since A cos(ω0) is a constant, we conclude that the system is stable.

Example 3: Test the stability of the system governed by y(n) = n cos(ω0) u(n).

Solution: Here cos(ω0) is a constant, but the factor n grows without bound. As n tends to infinity, y(n) = n cos(ω0) u(n) also tends to infinity; the output is therefore unbounded, and the system is unstable.
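The contrast between Examples 1 and 3 can be checked numerically; the constants A and ω0 below are illustrative values.

```python
# Numeric illustration of Examples 1-3: with the bounded input u(n) = 1,
# the output A*u(n) stays bounded, while n*cos(w0)*u(n) grows without bound.
import math

A, w0 = 3.0, 0.5            # illustrative constants (w0 is assumed)
stable   = [A * 1 for n in range(1000)]                # y(n) = A u(n)
unstable = [n * math.cos(w0) * 1 for n in range(1000)] # y(n) = n cos(w0) u(n)

print(max(abs(v) for v in stable))           # 3.0  (bounded at A)
print(max(abs(v) for v in unstable) > 100)   # True (grows with n)
```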


