On the convergence of the LMS algorithm with a rank-deficient input autocorrelation matrix

D.C. McLernon (a,*), M.M. Lara (b), A.G. Orozco-Lugo (b)

(a) Institute of Integrated Information Systems, School of Electronic and Electrical Engineering, University of Leeds, Leeds LS2 9JT, UK
(b) Centre for Research and Advanced Studies of IPN, Communications Section, Av. IPN No. 2508, Colonia San Pedro Zacatenco, CP. 07360, México D.F., Mexico
(*) Corresponding author. E-mail: d.c.mclernon@leeds.ac.uk

Signal Processing 89 (2009) 2244-2250. doi:10.1016/j.sigpro.2009.05.002
Article history: received 3 February 2009; accepted 2 May 2009; available online 12 May 2009.
Keywords: adaptive filtering; LMS algorithm; stochastic gradient; minimum-norm.

Abstract

In all books and papers on adaptive filtering, the input autocorrelation matrix $R_{xx}$ is always considered positive definite, and hence the theoretical Wiener–Hopf normal equations ($R_{xx} h = r_{xd}$) have a unique solution $h = h_{opt}$ ("there is only a single global optimum", [B. Widrow, S. Stearns, Adaptive Signal Processing, Prentice-Hall, 1985, p. 21]) due to the invertibility of $R_{xx}$ (i.e., it is full-rank). But what if $R_{xx}$ is positive semi-definite and not full-rank? In this case the Wiener–Hopf normal equations are still consistent, but with an infinite number of possible solutions. Now, it is well known that the filter coefficients of the least mean square (LMS) stochastic gradient algorithm converge (in the mean) to the unique Wiener–Hopf solution ($h_{opt}$) when $R_{xx}$ is full-rank. In this paper we will show that even when $R_{xx}$ is not full-rank it is still possible to predict the (convergence) behaviour of the LMS algorithm based upon knowledge of $R_{xx}$, $r_{xd}$ and the initial conditions of the filter coefficients. © 2009 Elsevier B.V. All rights reserved.

1. Introduction

Consider the well-known generic Wiener filtering problem [1-4], using real variables (without loss of generality). Here we filter an input $x(n)$ with an FIR filter (impulse response $\{h_k\}_{k=0}^{L-1}$) to get an output $y(n)$. The "desired output" is $d(n)$ and we define the error term as $e(n) = d(n) - y(n)$. The objective is to choose $\{h_k\}_{k=0}^{L-1}$ such that the mean square error (MSE), i.e., $J(h) = E[e^2(n)]$, is minimised, where

$$y(n) = \sum_{k=0}^{L-1} h_k\, x(n-k) = x^T(n)\,h = h^T x(n) \qquad (1)$$

$$J(h) = E[e^2(n)] = E[(d(n)-y(n))^2] = E[d^2(n)] + h^T R_{xx} h - 2\, r_{xd}^T h \qquad (2)$$

with

$$x^T(n) = [x(n)\ \ x(n-1)\ \ \ldots\ \ x(n-L+1)], \quad h^T = [h_0\ \ h_1\ \ \ldots\ \ h_{L-1}], \quad R_{xx} = E[x(n)x^T(n)], \quad r_{xd} = E[d(n)x(n)] \qquad (3)$$

and $E[\cdot]$ is the expectation operator for random vectors/scalars. Note that in Section 3, when deterministic (periodic) signals are used, $E[\cdot]$ reverts to the sample mean taken over one period, as in [1]. The optimal coefficient vector ($h_{opt}$) that minimises the MSE follows from solving $\partial J(h)/\partial h = 0$, and gives the well-known Wiener–Hopf solution:

$$h_{opt} = R_{xx}^{-1}\, r_{xd}. \qquad (4)$$

As an alternative to having to explicitly calculate (in (4)) the autocorrelation matrix ($R_{xx}$) and the cross-correlation
vector ($r_{xd}$) in a time-varying scenario, we can use the well-known [1-4] adaptive least mean square (LMS) (stochastic gradient) algorithm. This is described by the following equations (for the filter structure in Fig. 1):

$$\begin{aligned}&\text{for } n = 0, 1, 2, \ldots, N_1:\\&\qquad y(n) = h^T(n)\,x(n) \quad (\text{initial conditions } h(0))\\&\qquad e(n) = d(n) - y(n)\\&\qquad h(n+1) = h(n) + 2\mu\, e(n)\, x(n)\\&\text{next } n\end{aligned} \qquad (5)$$

where $x(n)$ is as previously defined and $h^T(n) = [h_0(n)\ \ h_1(n)\ \ \ldots\ \ h_{L-1}(n)]$.

[Fig. 1. Schematic for the LMS adaptive filter structure: the input x(n) drives the filter $\{h_k(n)\}_{k=0}^{L-1}$, whose output y(n) is subtracted from the desired output d(n) to form the error e(n).]

It can be shown [1] that $h(n)$ converges (in the mean) to the Wiener–Hopf solution, i.e.,

$$\lim_{n\to\infty} E[h(n)] = h_{opt} = R_{xx}^{-1}\, r_{xd}. \qquad (6)$$

Now, as quoted in one of the earliest texts on adaptive filtering [1, p. 25], "in physical situations $R_{xx}$ will almost always be positive definite, but a positive semi-definite $R_{xx}$ could occur". In that book, and in virtually all subsequent books/research papers (to the knowledge of these authors), $R_{xx}$ has always been considered positive definite (and hence full-rank) in the context of the LMS algorithm. So let us now consider the case where the $(L \times L)$ autocorrelation matrix $R_{xx}$ is of rank $r < L$. This scenario could arise in the estimation of $r$ complex sinusoids. Eigendecomposition gives [2,5]

$$R_{xx} = Q \Lambda Q^T = \sum_{k=0}^{L-1} \lambda_k\, q_k q_k^T = \sum_{k=0}^{r-1} \lambda_k\, q_k q_k^T \qquad (7)$$

with

$$Q = [q_0\ \ q_1\ \ \ldots\ \ q_{L-1}], \qquad \Lambda = \mathrm{diag}\big(\lambda_0, \lambda_1, \ldots, \lambda_{r-1}, \underbrace{\lambda_r, \lambda_{r+1}, \ldots, \lambda_{L-1}}_{\text{zero}}\big), \quad \lambda_i > 0 \text{ for } i \le r-1, \qquad Q^T Q = I \text{ (for unitary } Q\text{, with } I \text{ the identity matrix)} \qquad (8)$$

where $\{\lambda_k\}_{k=0}^{L-1}$ are the eigenvalues of $R_{xx}$, with ordering $\lambda_0 \ge \lambda_1 \ge \cdots \ge \lambda_{r-1}$. Thus the normal equations

$$R_{xx}\, h = r_{xd} \qquad (9)$$

are still consistent, but we now have an infinity of solutions, and so (4) becomes

$$h_{opt} = \bar{h} + h_{null} \qquad (10)$$

where $\bar{h}$ is any one solution to the normal equations (i.e., $R_{xx}\bar{h} = r_{xd}$) and $h_{null} = \sum_{k=r}^{L-1} \beta_k q_k$ is any vector lying in the $(L-r)$-dimensional nullspace [5] of $R_{xx}$. So the question to be answered here is this: if we use the LMS algorithm, and $R_{xx}$ is of rank $r < L$, what can we say about $\lim_{n\to\infty} E[h(n)]$? That is, to which (if any) of the infinity of solutions in (10) will $E[h(n)]$ converge?

Now, to the best of the authors' knowledge, there has only been one other publication [7] that considers a similar scenario to this paper. In [7] the author examines the convergence characteristics of both the LMS and RLS algorithms when the $L \times L$ matrix $R_{xx}$ is indeed rank-deficient. Regarding the section on the LMS, he shows that the convergence characteristics (and final steady-state error) of the LMS algorithm are independent of $L$ (the dimension of the input data vector), and are the same as those of the LMS algorithm with an $M$-dimensional ($M < L$) full-rank signal having the same (non-zero) eigenvalue distribution for the new $(M \times M)$ autocorrelation matrix. The only LMS results presented relate to the "convergence rate" or the "excess MSE", and nowhere (implicitly or explicitly) does he attempt to derive this paper's result for $\lim_{n\to\infty} E[h(n)]$.
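For concreteness, a minimal NumPy sketch of the recursion in (5) follows. This is our own illustration rather than the authors' code; the function name, the argument layout and the zero-filled default memory are our choices, not the paper's.

```python
import numpy as np

def lms(x, d, L, mu, h0, x_init=None):
    """LMS recursion of (5).  x[n], d[n] are aligned input/desired sequences;
    x_init = [x(-1), ..., x(-(L-1))] pre-fills the filter memory (zeros if None).
    Returns the coefficient trajectory, one row per iteration."""
    h = np.array(h0, dtype=float)
    mem = np.zeros(L)                    # regressor [x(n), x(n-1), ..., x(n-L+1)]
    if x_init is not None:
        mem[:L - 1] = x_init             # so the first shift yields [x(0), x(-1), ...]
    traj = np.empty((len(d), L))
    for n in range(len(d)):
        mem = np.r_[x[n], mem[:-1]]      # shift the newest sample into the memory
        e = d[n] - h @ mem               # e(n) = d(n) - h^T(n) x(n)
        h = h + 2.0 * mu * e * mem       # h(n+1) = h(n) + 2*mu*e(n)*x(n)
        traj[n] = h
    return traj
```

For full-rank $R_{xx}$ the final rows of the returned trajectory approach (6); the rank-deficient behaviour of the same loop is what the remainder of the paper predicts.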
Finally, as there may be some confusion between the title of this paper and what is called "reduced-rank filtering", we have added a short Appendix A to clarify the difference.

2. LMS algorithm for rank-deficient $R_{xx}$

First, start with the normal equations in (9). Because they are consistent (but have an infinity of solutions), let $h = \bar{h}$ be any one solution, and so from (7):

$$R_{xx}\bar{h} = r_{xd} \;\Rightarrow\; \sum_{k=0}^{r-1} \lambda_k\, q_k q_k^T \bar{h} = \sum_{k=0}^{r-1} \underbrace{(\lambda_k\, q_k^T \bar{h})}_{\alpha_k}\, q_k = \sum_{k=0}^{r-1} \alpha_k q_k = r_{xd}. \qquad (11)$$

This means that the cross-correlation vector $r_{xd}$ must lie in the space spanned by the $r$ "largest" (orthonormal) eigenvectors of $R_{xx}$, i.e., $\{q_k\}_{k=0}^{r-1}$. This also means that $r_{xd}$ must be orthogonal to all the remaining (orthonormal) eigenvectors, which lie in the nullspace of $R_{xx}$, i.e.,

$$q_k^T r_{xd} = 0, \qquad k = r, r+1, \ldots, L-1. \qquad (12)$$

We will use this result later. Now, the normal method for analysing the LMS algorithm with full-rank $R_{xx}$ is to work in the principal coordinate axis system of the quadratic surface (i.e., $h(n) \to Q^{-1}(h(n) - R_{xx}^{-1} r_{xd})$), but since $R_{xx}^{-1}$ does not exist we must adopt a different approach. So from (5), and with the usual assumption of independence between $x(n)$ and $h(n)$ [1, p. 102],

$$h(n+1) = h(n) + 2\mu\,(d(n) - h^T(n)x(n))\,x(n) = (I - 2\mu\, x(n)x^T(n))\,h(n) + 2\mu\, d(n)\,x(n)$$
$$\Rightarrow\; E[h(n+1)] = (I - 2\mu R_{xx})\,E[h(n)] + 2\mu\, r_{xd}, \qquad n \ge 0. \qquad (13)$$
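Both the rank deficiency and the consistency condition (12) are easy to check numerically. The sketch below is our own illustration (not from the paper): for concreteness it uses the rank-2 $R_{xx}$ and $r_{xd}$ that will appear in simulation 1, eigendecomposes $R_{xx}$, and confirms that $r_{xd}$ is orthogonal to the nullspace eigenvector. The rank tolerance 1e-10 is an arbitrary choice.

```python
import numpy as np

# Rank-deficient example quoted later in Section 3.1 (N = 10, L = 3).
N = 10
c = lambda k: np.cos(k * np.pi / N)
Rxx = 0.5 * np.array([[1.0,  c(2), c(4)],
                      [c(2), 1.0,  c(2)],
                      [c(4), c(2), 1.0 ]])
rxd = 0.5 * np.array([c(4), c(6), c(8)])

lam, Q = np.linalg.eigh(Rxx)        # eigh returns ascending eigenvalues
lam, Q = lam[::-1], Q[:, ::-1]      # reorder so that lambda_0 >= lambda_1 >= ...
r = int(np.sum(lam > 1e-10))        # numerical rank: expect r = 2 < L = 3

# (12): r_xd must be orthogonal to the nullspace eigenvectors q_r, ..., q_{L-1}.
print(r, Q[:, r:].T @ rxd)          # expect 2 and values near machine precision
```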
From (8), let $h(n) = Q h'(n)$; then (13) becomes (see (7) and (8))

$$E[h'(n+1)] = \underbrace{(I - 2\mu\Lambda)}_{A}\, E[h'(n)] + \underbrace{2\mu\, Q^T r_{xd}}_{b}, \quad n \ge 0 \;\Rightarrow\; E[h'(n)] = A^n h'(0) + \Big(\sum_{k=0}^{n-1} A^k\Big) b, \quad n \ge 1 \qquad (14)$$

where $h'(0) = Q^T h(0)$ (with $h(0)$ the initial value of the adaptive filter's coefficients) and

$$A^k = \mathrm{diag}\big((1-2\mu\lambda_0)^k,\ (1-2\mu\lambda_1)^k,\ \ldots,\ (1-2\mu\lambda_{r-1})^k,\ \underbrace{1, 1, \ldots, 1}_{(L-r)\ \text{values}}\big). \qquad (15)$$

Now from (8), (12) and (14),

$$b = 2\mu\, Q^T r_{xd} = 2\mu\,\big[\,q_0^T r_{xd}\ \ \ldots\ \ q_{r-1}^T r_{xd}\ \ \underbrace{0\ \ldots\ 0}_{L-r}\,\big]^T. \qquad (16)$$

And from (8) and (15) we can say

$$\sum_{k=0}^{n-1} A^k = \mathrm{diag}\Big(\sum_{k=0}^{n-1}(1-2\mu\lambda_0)^k,\ \ldots,\ \sum_{k=0}^{n-1}(1-2\mu\lambda_{r-1})^k,\ \underbrace{n, n, \ldots, n}_{(L-r)\ \text{values}}\Big). \qquad (17)$$

So, using (15)-(17), when we choose $0 < \mu < 1/\lambda_{max}$ (where by our definition $\lambda_{max} = \lambda_0$), (14) becomes

$$\lim_{n\to\infty} E[h'(n)] = \begin{bmatrix} 0_{r\times 1} \\ h'(0)(r\,{:}\,L-1) \end{bmatrix} + \Big[\,\frac{q_0^T r_{xd}}{\lambda_0}\ \ \ldots\ \ \frac{q_{r-1}^T r_{xd}}{\lambda_{r-1}}\ \ 0\ \ldots\ 0\,\Big]^T \qquad (18)$$

where $h'(0)(r\,{:}\,L-1) = [h'_r(0)\ \ h'_{r+1}(0)\ \ \ldots\ \ h'_{L-1}(0)]^T$ and $0_{r\times 1}$ is an $(r\times 1)$ column vector of zeros. (The first $r$ entries of $A^n h'(0)$ decay to zero, while for the zero eigenvalues the diverging sums $n$ in (17) are multiplied by the zero entries of $b$ in (16).) And so finally from (18) we can say

$$\lim_{n\to\infty} E[h(n)] = Q \lim_{n\to\infty} h'(n) = \sum_{k=r}^{L-1} h'_k(0)\, q_k + \Big[\sum_{k=0}^{r-1} \frac{q_k q_k^T}{\lambda_k}\Big] r_{xd} \;\Rightarrow\; \lim_{n\to\infty} E[h(n)] = \underbrace{\sum_{k=r}^{L-1} h'_k(0)\, q_k}_{\substack{\text{contribution from the initial conditions;}\\ \text{this term lies within the nullspace of } R_{xx}}} + \underbrace{R_{xx}^{\dagger}\, r_{xd}}_{\substack{\text{minimum-norm solution;}\\ \text{this term lies within the row space of } R_{xx}}} \qquad (19)$$

where $R_{xx}^{\dagger} r_{xd}$ is the "minimum-norm" solution for $h$ in $R_{xx} h = r_{xd}$ and the pseudo-inverse is defined as [6]

$$R_{xx}^{\dagger} = \sum_{k=0}^{r-1} \frac{q_k q_k^T}{\lambda_k}. \qquad (20)$$

Note that (19) is consistent with (6), because when $R_{xx}$ is full-rank the dimension of its nullspace is zero, so the first term on the RHS of (19) disappears and $R_{xx}^{\dagger} = R_{xx}^{-1}$.
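Equations (19) and (20) translate directly into code. The sketch below is again our own, not the paper's: it evaluates the closed form via an eigendecomposition and, as an independent cross-check, iterates the exact mean recursion (13); for a consistent $r_{xd}$ and $0 < \mu < 1/\lambda_{max}$ the two should agree.

```python
import numpy as np

def predicted_limit(Rxx, rxd, h0, tol=1e-10):
    """Closed form (19): nullspace part of h(0) plus the minimum-norm solution."""
    lam, Q = np.linalg.eigh(Rxx)
    keep = lam > tol                                   # the r non-zero eigenvalues
    Rpinv = (Q[:, keep] / lam[keep]) @ Q[:, keep].T    # pseudo-inverse of (20)
    null_part = Q[:, ~keep] @ (Q[:, ~keep].T @ h0)     # first term of (19)
    return null_part + Rpinv @ rxd

def mean_recursion_limit(Rxx, rxd, h0, mu=0.01, iters=20000):
    """Iterate the exact mean recursion (13) as an independent cross-check."""
    h = np.array(h0, dtype=float)
    I = np.eye(len(h))
    for _ in range(iters):
        h = (I - 2 * mu * Rxx) @ h + 2 * mu * rxd
    return h
```

Applied to the $R_{xx}$ and $r_{xd}$ of the previous sketch, with h0 = np.array([1.0, 2.0, 3.0]) and mu = 0.01, both functions should reproduce the prediction quoted in (22) below.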
From (21), that was  f  x ð n Þg ð L  1 Þ n ¼ 1  ¼f sin ð 2 p ð n  2 Þ = N  Þg ð L  1 Þ n ¼ 1  — i.e., the filter’s memory was‘‘full’’ with the input values from  x ( n ) before theLMS algorithm commenced (i.e., in the calculation of   y (0) in (5)). However, if we were to start with differ-ent initial conditions (i.e., not from sin ð 2 p ð n  2 Þ = N  Þ ), ARTICLE IN PRESS h 0 ( n ) h 1  ( n ) h 2  ( n )0200400600800100012001400160018002000-1.5-1-0.500.511.522.53ITERATION NUMBER n    C   O   E   F   F   I   C   I   E   N   T   V   A   L   U   E Fig. 2.  Trajectory of the second-order predictor coefficients (for a singlerealisation and with  s 2 v  ¼ 0 in (21)) in simulation 1. h 0 ( n ) h 1  ( n ) h 2  ( n )0200400600800100012001400160018002000-1.5-1-0.500.511.522.53ITERATION NUMBER n    C   O   E   F   F   I   C   I   E   N   T   V   A   L   U   E Fig. 3.  Trajectory of the second-order predictor coefficients (for a singlerealisation and with  s 2 v  ¼ 2 in (21)) in simulation 1. h 0  ( n ) h 1 ( n ) h 2  ( n )    0   2   0   0  4   0   0  6   0   0   8   0   0   1   0   0   0   1   2   0   0   1  4   0   0   1  6   0   0   1   8   0   0   2   0   0   0 -1.5-1-0.500.511.522.53ITERATION NUMBER n    C   O   E   F   F   I   C   I   E   N   T   V   A   L   U   E Fig. 4.  Trajectory of the second-order predictor coefficients (for a singlerealisation and with  s 2 v  ¼ 0 in (21)) in simulation 1. The initial state of the filter was changed to  ½  x ð 1 Þ  x ð 2 Þ¼½ 5 5  . D.C. McLernon et al. / Signal Processing 89 (2009) 2244–2250  2247  then as expected, the algorithm will not converge to thesolution in (21). But the new solution can be predicted bysimply replacing  h 0 2 ð 0 Þ  (in (22)) with  h 0 2 ð 2 Þ — i.e., the valueof   h 0 2 ð n Þ  after two iterations, when the filter’s memory isnow ‘‘full’’ with  x ( n ).So as an example, consider the different initialconditions  ½  x ð 1 Þ  x ð 2 Þ¼½ 5 5  , and the new result isshown in Fig. 4 for the zero noise situation ( s 2 v  ¼ 0in (21)). This agrees with (22) (modified for  h 0 2 ð 2 Þ¼ 1 : 1539) which now givesLim n !1 E  ½ h ð n Þ¼ h 0 2 ð 2 Þ q 2 þ R  y  xx r   xd  ¼ 1 : 2377  1 : 0026  0 : 3803 264375 . (23)We should reiterate, that for a full-rank  R   xx , neither theadaptive filter’s initial state  f  x ð n Þg ð L  1 Þ n ¼ 1  , nor the initialvalues ( h 0 (0)) of the filter’s coefficients in (19), have anybearing upon Lim n !1  E  ½ h ð n Þ  for the convergence of theLMS algorithm in (5). 4. Discussion and simulation two We can now explain the significance of the twoterms in (19). With the transformation  h ¼ Qh 0  then (2)becomes  J  ð h 0 Þ¼ E  ½ d 2 ð n Þþ h 0 T K h 0  2 r  T  xd Qh 0 ¼ E  ½ d 2 ð n Þþ h 0 T K h 0  2 X r   1 i ¼ 0 ð q T i  r   xd Þ h 0 i  (24)where  h 0 T ¼½ h 0 0  h 0 1  . . .  h 0 L  1  , and we have used (7), (8)and (12). In addition, it follows from (24) that @  J  ð h 0 Þ @ h 0 k ¼ 2 l k h 0 k  2 q T k r   xd ;  k ¼ 0 ; 1 ;  . . .  ; r   12 l k h 0 k  ¼ 0 ; k ¼ r  ; r  þ 1 ;  . . .  ; L  1 ;  since  l k  ¼ 0 : ( (25)Now we already know [1] that for the LMS algorithm, E  ½ h 0 ð n Þ  follows the direction of steepest descentof   J  ( h 0 ). But from (25), along the  L  r   axes  f h 0 k g L  1 k ¼ r  the gradient is zero, and so along these directions E  ½ h 0 ð n Þ  will not change from its initial condition,  h 0 (0).ThusLim n !1 E  ½ h 0 k ð n Þ¼ h 0 k ð 0 Þ ;  k ¼ r  ;  r  þ 1 ;  . . .  ; L  1. 
(26)Along the remaining orthogonal directions  f h 0 k g r   1 k ¼ 0 , thenfrom (25), convergence of the LMS will occur when @  J  ð h 0 Þ =@ h 0 k  ¼ 0, or  h 0 k  ¼ q T k r   xd = l k ,  k ¼ 0 ; 1 ;  . . .  ; r   1. Thuswe can sayLim n !1 E  ½ h 0 k ð n Þ¼ q T k r   xd l k ;  k ¼ 0 ; 1 ;  . . .  ; r   1. (27)Finally, from (26), (27) and using  h ð n Þ¼ Qh 0 ð n Þ , we obtainLim n !1 E  ½ h ð n Þ¼ Q   Lim n !1 h 0 ð n Þ¼½ q 0  q 1  . . .  q L  1  q T0 r   xd l 0 ... q T r   1 r   xd l r   1 h 0 r  ð 0 Þ ... h 0 L  1 ð 0 Þ 2666666666666666666437777777777777777775 ¼ X L  1 k ¼ r  h 0 k ð 0 Þ q k þ R  y  xx r   xd  (28)which is the same as (19). This new understanding of (19)will now be developed in a graphical interpretation insimulation 2. 4.1. Simulation 2 Now consider Fig. 1 with  L ¼ 2, and (for simplevisualisation) the almost trivial case of   x ð n Þ¼ cos ð 2 p n = N  Þ and  d ð n Þ¼ cos ð 2 p n = N  Þþ v ð n Þ , where  N  ¼ 2, and  v ( n ) iszero-mean, white Gaussian noise with variance s 2 v  ¼ E  f v 2 ð n Þg . So now the (2  2) autocorrelation matrix R   xx  has rank  r  ¼ 1, and the normal equations imply thefollowing ( L  r  )-dimensional solution plane: R   xx h ¼ r   xd  ) 0 : 5   0 : 5  0 : 5 0 : 5 " #  h 0 ð 0 Þ h 1 ð 0 Þ " # ¼ 0 : 5  0 : 5 " # ) h 0 ð 0 Þ h 1 ð 0 Þ¼ 1. (29)With the arbitrary initial conditions for the filter coeffi-cients of   h ð 0 Þ¼½ h 0 ð 0 Þ  h 1 ð 0 Þ¼½ 0 2  T then from (19) weget:Lim n !1 E  ½ h ð n Þ¼ X L  1 k ¼ r  h 0 k ð 0 Þ q k þ R  y  xx r   xd  ¼ h 0 1 ð 0 Þ q 1 þ  q T0 r   xd l 0   q 0 ¼ 1 : 50 : 5 " # . (30)This result is illustrated graphically in Fig. 5, where theminimum-norm solution part of (30) is R  y  xx r   xd  ¼  q T0 r   xd l 0   q 0  ¼ 0 : 5  0 : 5   . (31)This is orthogonal to the following contribution in (30)from the non-zero initial conditions ( h (0)), i.e. X L  1 k ¼ r  h 0 k ð 0 Þ q k  ¼ h 0 1 ð 0 Þ q 1  ¼ 11   . (32)Finally, adding (31) and (32) we get the resultLim n !1  E  ½ h ð n Þ¼½ 1 : 5 0 : 5  T , for initial filter coefficients h ð 0 Þ¼½ 0 2  T — see Fig. 5. ARTICLE IN PRESS D.C. McLernon et al. / Signal Processing 89 (2009) 2244–2250 2248
