Saturday 30 June 2018

SIGNALS AND SYSTEMS-I PROPERTIES OF SYSTEMS


EDITOR: B. SOMANATHAN NAIR


1. INTRODUCTION
A system may be defined as an entity made up of a combination of several different elements connected to each other in an ordered fashion, or according to some finite rule. A system can have one or more inputs to feed signals into it, and one or more outputs to take data out of it. The input(s) and output(s) of a system are related to each other by some finite rules or algorithms. Computers, electrical machines, radio transmitters, and receivers are a few among the infinite number of systems that exist in this world.
            The behavior of a system can be described by one or more of the following properties:

·               Linearity
·               Causality
·               Time-variance
·               Convolution
·               Stability
·               Memory

2. LINEARITY
A system is said to be linear if and only if it obeys the principle of superposition. The principle of superposition may be explained as follows.
Consider a system, whose output y(t) is dependent on a certain input x(t). Let us define the relations existing between x(t) and y(t) as

                                                              y(t) = f [x(t)]   (1)                                                      

where f represents a function. For example, let us consider the relations

                                    x3(t) = ax1(t) + bx2(t)  (2)
                                    y3(t) = ay1(t) + by2(t)  (3)

where x1(t), x2(t), and x3(t) are the values of x(t) at three instants of time, y1(t), y2(t), and y3(t) are the corresponding values of y(t) at the same instants, and a and b are constants. Since
                                    y(t) = f [x(t)]                                                               

we can express the relations in the form

           y3(t) = ay1(t) + by2(t) = f [x3(t)] =  f [ax1(t) + bx2(t)] (4)                    

Equation (4) may be written in the form:

            y3(t) = ay1(t) + by2(t) = a f [x1(t)] + b f [x2(t)] (5)                           

            Now, the principle of superposition says that if the system is to be linear, then we must have
                       
                                    f [ax1(t)+ bx2(t)] = a f [x1(t)] + b f [x2(t)]  (6)                                    

Notice that the relations

  y3(t) = f [ax1(t) + bx2(t)]
and
  y3(t) = a f [x1(t)] + b f [x2(t)]

were obtained through two independent methods. We may now state:

            If a given system is to be linear, then its response (or output) to a weighted sum of inputs must be equal to the corresponding weighted sum of its responses to each of the individual inputs.

ILLUSTRATIVE EXAMPLE 1: Determine whether the system governed by the equation y(t) = 3x(t) is linear or not.

Solution: To determine whether a given system is linear or not, we adopt the following procedure, which is based on the principle of superposition given above. Based on that, we may write

                                                            y1(t) = 3x1(t)  (1)
                                                            y2(t) = 3x2(t)  (2)
                                                            y3(t) = 3x3(t)  (3)

Let                                                                                                                                   
                                                   x3(t) = ax1(t) + bx2(t) (4)
and                                            
                                                    y3(t) = ay1(t) + by2(t) (5)

where a and b are constants. Now, substituting (4) into (3) yields

                                          y3(t) = 3x3(t) = 3[ax1(t) + bx2(t)]
                                                      = 3ax1(t) + 3bx2(t)

                               = a[3x1(t)] + b[3x2(t)]
                                 
                                 = ay1(t) +by2(t) (6)                                

Comparison of (6) and (5) shows that they are the same. Hence, we conclude that the relation y3(t) = 3x3(t) represents a linear system.
            In the above method, we arrived at the final conclusion through two different paths using the same equation that governs the given system. If the two paths lead us to the same final solution, then we say that the system obeys the principle of superposition, and hence is linear.
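The two-path test of Illustrative Example 1 is easy to check numerically. The following sketch applies both paths to the system y(t) = 3x(t); the weighting constants and sample input values are illustrative choices, not part of the original example.

```python
# Superposition check for the system y(t) = 3 x(t).
def system(x):
    return 3.0 * x

a, b = 2.0, -1.5      # arbitrary weighting constants (illustrative)
x1, x2 = 0.7, 1.2     # input samples at one instant of time (illustrative)

path1 = system(a * x1 + b * x2)          # path 1: f[a x1 + b x2]
path2 = a * system(x1) + b * system(x2)  # path 2: a f[x1] + b f[x2]

print(path1, path2)   # both paths give the same value, so the system is linear
```

Because the two paths agree for arbitrary a, b, x1, and x2, the system passes the superposition test.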
However, it must be carefully noted that all equations representing straight lines need not necessarily represent linear systems. The following example will illustrate this idea.

ILLUSTRATIVE EXAMPLE 2: Test whether the system governed by the equation y(t) = Ax(t)+B is linear or not.
                       
Solution: We know that the equation y(t) = Ax(t)+B, where A and B are constants, represents a straight line. We now show that the system represented by this equation is not linear. To prove this statement, we proceed as in Illustrative Example 1. Following the same procedure, let

                                                            y1(t) = Ax1(t)+B  (1)
                                                            y2(t) = Ax2(t)+B  (2)
                                                            y3(t) = Ax3(t)+B  (3)

Let                                                                                                                                   
                                                   x3(t) = ax1(t) + bx2(t) (4)
and                                            
                                                    y3(t) = ay1(t) + by2(t) (5)

where a and b are also constants. Now, substituting (4) into (3) yields

                                          y3(t) = Ax3(t)+B = A[ax1(t) + bx2(t)] + B
                                                = aAx1(t) + bAx2(t) + B
                                                = a[Ax1(t)+B] + b[Ax2(t)+B] + B(1 ‒ a ‒ b)
                                                = ay1(t) + by2(t) + B(1 ‒ a ‒ b) ≠ ay1(t) + by2(t)   (6)
                                                                                                                                  
 Equation (6) shows that the two paths for arriving at the final result do not agree with each other. We therefore conclude that the system governed by the equation y(t) = Ax(t)+B is not linear.
We now conclude that straight-line equations passing through the origin and extending from ‒∞ to +∞ will represent linear systems. Figure 1 represents a linear system and Fig. 2 represents a nonlinear system.
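The failure of superposition for y(t) = Ax(t) + B can also be seen numerically. In this sketch, A, B, a, b, and the input samples are illustrative values; as equation (6) shows, the residual between the two paths is B(1 ‒ a ‒ b), which is nonzero in general.

```python
A, B = 2.0, 1.0       # illustrative straight-line constants
a, b = 2.0, -1.5      # arbitrary weighting constants
x1, x2 = 0.7, 1.2     # illustrative input samples

def affine(x):
    return A * x + B  # y(t) = A x(t) + B

path1 = affine(a * x1 + b * x2)          # path 1: f[a x1 + b x2]
path2 = a * affine(x1) + b * affine(x2)  # path 2: a f[x1] + b f[x2]

# The two paths differ by B(1 - a - b), so the system is not linear.
print(path1 - path2)
```

Only when B = 0 (the straight line passes through the origin) does the residual vanish for all a and b, which matches the conclusion above.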



Both linearity and nonlinearity can be desirable properties in practical systems. For example, amplifiers are linear systems: in an amplifier, the output is directly proportional to the input, and any nonlinearity in the amplifier will produce distortion and noise in its output. However, when the same amplifier is used as a switch, we operate it in its nonlinear regions (for example, the saturation and cut-off regions).

3. CAUSALITY

The term causal represents the idea “that which causes”. A system is said to be causal, if the value of its present output(s) depend(s) only on the present and past values of its inputs [which may include inputs derived from the output(s) through feedback connections], and does not in any way depend on the future values of the inputs.
            It is easy to see that all physically realizable systems are causal. Consider the example of a student writing an examination on a given subject. Before the examination, the student must have studied the subject thoroughly, and only those studies help in writing the examination. Any studying done on that subject after the examination cannot help in writing an examination that is already over!
In causal systems, applied inputs cause the system to produce outputs. So, a method to check the causality of a system is to examine the time arguments of its input(s) and output(s), and see whether they contain a term or terms with future values in them. The following example illustrates the procedure for testing causality.

ILLUSTRATIVE EXAMPLE 3: Test the causality of the system governed by the expression:

                                 y(t) = ay(t‒1) + by(t‒2) + cy(t‒3) + dx(t) (1)
                                                       

Solution: We know that terms containing t in them represent present values, and (t‒1), (t‒2), etc. represent values delayed by one unit of time, two units of time, and so on. It can be seen that the terms in (1) contain only present and past values of the input and output. They do not contain any value representing a future input. Hence, we state that the system governed by (1) is causal and is physically realizable.

ILLUSTRATIVE EXAMPLE 4: Test the causality of the system governed by the expression

                                         y(t) = ay(t‒1) + by(t‒2) + cy(t‒3) + dx(t + 1) (1)
                         

Solution: Inspection of (1) reveals that it contains the future term x(t+1); therefore, the system is non-causal.
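The inspection procedure of Examples 3 and 4 can be sketched as a simple check on the time shifts appearing in a difference equation. Here each term is represented only by the time shift it applies (0 for the present, negative for the past, positive for the future); this encoding is an illustrative device, not part of the original examples.

```python
# A toy causality check: a system is causal if no term references a future sample.
def is_causal(shifts):
    return all(s <= 0 for s in shifts)

example3 = [-1, -2, -3, 0]   # y(t-1), y(t-2), y(t-3), x(t)     -- Example 3
example4 = [-1, -2, -3, +1]  # same terms, but with x(t+1)      -- Example 4

print(is_causal(example3))   # True: only present and past values appear
print(is_causal(example4))   # False: the future term x(t+1) makes it non-causal
```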



Thursday 28 June 2018

DIGITAL FIR FILTERS-IV DESIGN OF LOW-PASS FILTERS USING WINDOW FUNCTIONS


EDITOR: B. SOMANATHAN NAIR

1. INTRODUCTION
The accuracy of an FIR filter can be increased by increasing the number of filter coefficients. It may be noted that this is a trial-and-error procedure. We find that there will be an infinite number of filter coefficients as n → ±∞. We can also see that the larger the value of n, the smaller the value of the filter coefficients. Because of this, there is no meaning in finding the coefficients beyond a certain value of n. The actual number of coefficients to be computed depends on the accuracy that we need in the realization of the filter, as well as on the number of bits that the computer used to solve the problem can handle.

2. DESIGN OF FIR FILTERS USING WINDOW FUNCTIONS
As stated above, the finite register length of computers necessitates abrupt termination of the FIR filter coefficients at some finite value of n. In turn, this gives rise to the Gibbs phenomenon, which is dangerous in many situations, as it gives rise to sharp transients. In many situations, these transients may destroy the hardware used in the construction of the filter, so it must be avoided at all costs. To prevent the occurrence of the Gibbs phenomenon, we must avoid abrupt truncation of the filter coefficients. To avoid abrupt truncation, we use a function having a tapering characteristic, as shown in Fig. 1. Such functions having tapering characteristics are known as window functions; a half-cosine wave is an example of a window function. When the impulse response h(n), derived from the given transfer function H(ω), is multiplied by an appropriate window function w(n), we get a modified impulse response h′(n), which shows a set of gradually decreasing filter coefficients. These filter coefficients in turn ensure the absence of the Gibbs phenomenon from the operating regions of the filter.




            Thus, to prevent the occurrence of the Gibbs phenomenon, we must use a modified impulse response given by

                                                  h′(n) = h(n)×w(n)   (1)

3. COMMONLY USED WINDOW FUNCTIONS
Two typical examples of window functions are:

(a)               The Rectangular window
(b)              The Hann (Hanning) window

(a) THE RECTANGULAR WINDOW
Let us consider the rectangular window defined by the equation

                       wR(n) = 1,  ‒M ≤ n ≤ M
                             = 0,  elsewhere   (2)

            The response characteristic of this window can be seen to be the same as that of the ideal characteristic shown in Fig. 1. Now, substitution of (2) into (1) yields the modified impulse response

       h′(n) = h(n)×wR(n) = h(n),  ‒M ≤ n ≤ M
                          = 0,     elsewhere   (3)

            Equation (3) says that the modified impulse response h′(n) is the same as the original impulse response h(n). This means that this window results in abrupt cut-off of the filter coefficients, which leads to the generation of the Gibbs phenomenon. Hence, this window is never used for the practical design of FIR filters.

(b) THE HANN (OR HANNING) WINDOW
The Hann or Hanning window (named after Julius von Hann) is defined as
                                   
wH(n) = 0.5 + 0.5 cos(2πn/N),  ‒N/2 ≤ n ≤ N/2   (4)

where N = order of the filter. Putting M = N/2, we rewrite (4) as

                                    wH(n) = 0.5 + 0.5 cos(πn/M),  ‒M ≤ n ≤ M   (5)
                                                   
Changing the limits, (4) may also be written as

           
wH(n) = 0.5 ‒ 0.5 cos(πn/M),  0 ≤ n ≤ N   (6)

Inspection of (4) and (6) reveals that, even though they represent the same function, there is a difference in the signs of the second terms on the right-hand sides of the two expressions. As can be seen, this difference is created by the difference in the limits used; both equations yield the same final results. Even though designs using (5) result in noncausal filters, we shall use it in this blog, with limits from ‒M to M, because of its convenience and ease of application.
Let us now use (5), and calculate wH(n) for a tenth-order (i.e., N = 10) window, noting that wH (n) = wH (-n). Thus

                      wH(0) = 0.5 + 0.5 cos(0) = 1   (7)
                      wH(1) = wH(‒1) = 0.5 + 0.5 cos(π/5) = 0.9045   (8)
                      wH(2) = wH(‒2) = 0.5 + 0.5 cos(2π/5) = 0.6545   (9)
                      wH(3) = wH(‒3) = 0.5 + 0.5 cos(3π/5) = 0.3455   (10)
                      wH(4) = wH(‒4) = 0.5 + 0.5 cos(4π/5) = 0.0955   (11)
                      wH(5) = wH(‒5) = 0.5 + 0.5 cos(π) = 0.0   (12)
           
            The results given in (7) to (12) are used for plotting the waveform shown in Fig. 2. This waveform is known as the raised-cosine waveform, as it looks like a cosine wave raised so that its negative peaks just touch zero, turning it into a non-negative (DC-shifted) wave.
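The table of values in (7) to (12) can be reproduced with a few lines of code; this is a minimal sketch of equation (5) for M = 5:

```python
import math

M = 5  # M = N/2 for a tenth-order (N = 10) filter
# Hann window, equation (5); wH(n) = wH(-n), so n = 0..M suffices
wH = {n: 0.5 + 0.5 * math.cos(math.pi * n / M) for n in range(M + 1)}

for n in range(M + 1):
    print(n, round(wH[n], 4))  # 1.0, 0.9045, 0.6545, 0.3455, 0.0955, 0.0
```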



Example 2: Apply the Hann window to the impulse response obtained in Example 1 of the previous blog, and determine the modified impulse response.

Solution:  We have, from Example 1 in the previous blog, the impulse response

                   h(n) = 0.5 sin(nπ/2)/(nπ/2)   (13)                                                                    
                                                                                   
We now compute h′(n) = h(n)×wH(n) to get the modified impulse response. Thus we obtain

                        h′(0) = 0.5×1 = 0.5
                        h′(1) = h′(‒1) = 0.3183×0.9045 = 0.2879
                        h′(2) = h′(‒2) = 0
                        h′(3) = h′(‒3) = ‒0.1061×0.3455 = ‒0.0367
                        h′(4) = h′(‒4) = 0
                        h′(5) = h′(‒5) = 0
                                               
The filter-transfer function can be obtained by using the above values of the modified impulse response. The causal filter so obtained has its transfer function given by

            H(z) = ‒0.0367z‒2 + 0.2879z‒4 + 0.5z‒5 + 0.2879z‒6 ‒ 0.0367z‒8
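The windowing step of Example 2 can be checked by multiplying the ideal impulse response of (13) by the Hann window values, sample by sample. This is a sketch under the same assumptions (M = 5, with h(0) taken as its limiting value 0.5):

```python
import math

M = 5

def h(n):   # ideal low-pass impulse response, equation (13)
    return 0.5 if n == 0 else 0.5 * math.sin(n * math.pi / 2) / (n * math.pi / 2)

def wH(n):  # Hann window, equation (5)
    return 0.5 + 0.5 * math.cos(math.pi * n / M)

# h'(n) = h(n) * wH(n); by symmetry h'(-n) = h'(n), so n = 0..M suffices
hp = {n: h(n) * wH(n) for n in range(M + 1)}

# adding 0.0 folds any floating-point -0.0 into 0.0 before printing
print([round(hp[n], 4) + 0.0 for n in range(M + 1)])  # [0.5, 0.2879, 0.0, -0.0367, 0.0, 0.0]
```

The values agree with the modified impulse response computed above.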
                       


Tuesday 26 June 2018

DIGITAL FIR FILTERS-III DESIGN OF LOW-PASS FILTERS WITHOUT USING WINDOW FUNCTIONS


EDITOR: B. SOMANATHAN NAIR


Example 1:   Design a low-pass FIR filter for the following specifications:

·         Cut-off frequency                                  :           500 Hz
·         Sampling frequency                              :           2000 Hz
·         Order of the filter N                               :           10
·         Filter length required L = N + 1             :           11                                       

Solution:

STEP 1: NORMALIZATION OF CUT-OFF FREQUENCY
As in the case of IIR filters, here also we normalize the cut-off frequency as
                                                                  
                                                ωc = 2π(fc/fs) = 2π(500/2000) = π/2   (1)

STEP 2: FIXING THE TRANSFER FUNCTION TO BE USED
As stated previously, we fix the transfer function as
                                               
H(ω) = 1,  ‒π/2 ≤ ω ≤ π/2
     = 0,  elsewhere   (2)

In (2), we have neglected the phase part of the transfer function. If the phase factor is also to be taken into account, we then use the expression

                                    H(ω) = 1·e^(‒jθ0),  ‒π/2 ≤ ω ≤ π/2
                                         = 0,  elsewhere   (3)

where θ0 = ω0T = ω0 (assuming T = 1) is the desired phase angle. We begin our design with the first set of given specifications.

STEP 3: DETERMINING THE IMPULSE RESPONSE OF THE FILTER
Since the transfer function H(ω) is specified as the discrete-time Fourier transform (DTFT) of the impulse response h(n) of the filter, h(n) can be obtained by taking the inverse discrete-time Fourier transform (IDTFT) of H(ω). Thus, we find

                                    h(n) = (1/2π) ∫ from ‒π to π H(ω) e^(jωn) dω   (4)

Substituting the given value of H(ω) from (2) into (4), we get

                                    h(n) = (1/2π) ∫ from ‒π/2 to π/2 e^(jωn) dω   (5)

Equation (5) may be simplified to

                                    h(n) = (1/nπ)[e^(jnπ/2) ‒ e^(‒jnπ/2)]/2j = (1/nπ) sin(nπ/2)

                                         = 0.5 sin(nπ/2)/(nπ/2)   (6)

STEP 4: DETERMINING THE COEFFICIENTS OF THE IMPULSE-RESPONSE
In (6), we substitute various values of n and determine the corresponding values of h(n). Thus, for n = 0, we have

                         h(0) =  0.5 sin(0)/(0) = 0.5  (7)                                            

where we have used L'Hôpital's rule to evaluate the limit of sin(nπ/2)/(nπ/2) as n → 0. Now, when n = 1, we get

                  h(1) = 0.5 sin(π/2)/(π/2) = 1/ π = 0.3183  (8)             
Similarly, for n = 2,

                        h(2) = 0.5 sin(π)/(π) = 0   (9)                                                  

and for n = 3,

                                                h(3) = 0.5 sin(1.5π)/(1.5π) = ‒0.1061   (10)                     
                                           
and for n = 4,
                                                h(4) = 0.5 sin(2π)/(2π) = 0   (11)
                    
                                                   

Finally, for n = 5,
                                           h(5) = 0.5 sin(2.5π)/(2.5π) = 0.0637   (12)
         
We stop our computation at this point, since the required length of the filter is L = N + 1 = 11, and we can achieve this length by truncating the number of samples at n = 5. It may be noted that, since sin(nπ/2)/(nπ/2) is the ratio of two odd functions and is therefore an even function of n, we have

                                     h(n) = h(-n)  (13)                                                             

This means that h(-1) = h(1), h(-2) = h(2) = 0, and so on. Thus we get the impulse-response

   h(n) = (0.0637, 0, ‒0.1061, 0, 0.3183, 0.5, 0.3183, 0, ‒0.1061, 0 , 0.0637) (14)                     
The coefficients for negative values of n [i.e., h(-1), h(-2), etc.] appear in the value of h(n) because we are determining the Fourier series expansion of H(ω), and in this, we have to determine the values of the coefficients from ‒∞ to +∞.
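Steps 3 and 4 can be checked with a short script. This is a sketch of equation (6), with h(0) set to its limiting value 0.5 (L'Hôpital's rule):

```python
import math

def h(n):
    # equation (6): h(n) = 0.5 sin(n*pi/2)/(n*pi/2); h(0) = 0.5 by the limit sin(x)/x -> 1
    return 0.5 if n == 0 else 0.5 * math.sin(n * math.pi / 2) / (n * math.pi / 2)

# adding 0.0 folds any floating-point -0.0 into 0.0 before printing
seq = [round(h(n), 4) + 0.0 for n in range(-5, 6)]
print(seq)  # [0.0637, 0.0, -0.1061, 0.0, 0.3183, 0.5, 0.3183, 0.0, -0.1061, 0.0, 0.0637]
```

The printed sequence matches the impulse response given in (14).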

STEP 5: DETERMINING THE TRANSFER FUNCTION FROM IR
 We can now obtain the transfer function back from the impulse-response sequence by attaching the appropriate power of z to each coefficient. Thus we find

H(z) = 0.064z5 ‒ 0.106z3 + 0.318z + 0.5 + 0.318z‒1 ‒ 0.106z‒3 + 0.064z‒5   (15)
                                                                                                         
There exists one problem with this result. The terms containing positive powers of z (such as z1, z2, etc.) correspond to future inputs, indicating that we are going to realize a noncausal filter. This means that we cannot physically construct the filter that we have just designed. To make the filter causal and physically realizable, we multiply the transfer function by an appropriate power of z (in this case, by z‒5), which delays the impulse response by five samples. This results in the equation

  H(z) = 0.064 ‒ 0.106z‒2 + 0.318z‒4 + 0.5z‒5 + 0.318z‒6 ‒ 0.106z‒8 + 0.064z‒10   (16)
                                                                                                                                    
Equation (16) shows the transfer function of a physically realizable low-pass FIR filter. Figure 1 shows the circuit implementation of this filter.
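A quick way to confirm that (16) behaves as a low-pass filter is to evaluate its frequency response H(e^jω) at ω = 0 (passband) and ω = π (stopband). The following sketch uses the rounded coefficients of (16); the gains come out near 1 and near 0 rather than exactly so, because the coefficients were truncated and rounded.

```python
import cmath
import math

# coefficients of equation (16), keyed by the power of z^-1
coeffs = {0: 0.064, 2: -0.106, 4: 0.318, 5: 0.5, 6: 0.318, 8: -0.106, 10: 0.064}

def H(omega):
    # frequency response: substitute z = e^{j omega} into H(z)
    return sum(c * cmath.exp(-1j * omega * k) for k, c in coeffs.items())

print(round(abs(H(0)), 3))        # ~1.05: gain near 1 at DC (passband)
print(round(abs(H(math.pi)), 3))  # ~0.05: gain near 0 at half the sampling rate (stopband)
```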





