
Institutional Archive of the Naval Postgraduate School

Calhoun: The NPS Institutional Archive
DSpace Repository

Theses and Dissertations
1. Thesis and Dissertation Collection, all items
1998-06

Acoustic noise removal by combining Wiener
and wavelet filtering techniques

Forney, Fredric D.

Monterey, California. Naval Postgraduate School

http://hdl.handle.net/10945/7845

Downloaded from NPS Archive: Calhoun

DUDLEY KNOX LIBRARY
http://www.nps.edu/library

Calhoun is the Naval Postgraduate School's public access digital repository for
research materials and institutional publications created by the NPS community.
Calhoun is named for Professor of Mathematics Guy K. Calhoun, NPS's first
appointed — and published — scholarly author.

Dudley Knox Library / Naval Postgraduate School
411 Dyer Road / 1 University Circle
Monterey, California USA 93943
NAVAL POSTGRADUATE SCHOOL 
Monterey, California 





THESIS 


ACOUSTIC NOISE REMOVAL BY 
COMBINING WIENER AND 
WAVELET FILTERING TECHNIQUES 


by 


Fredric D. Forney, Jr. 
June 1998 


Thesis Advisor: Monique P. Fargues
Co-Advisor: Ralph Hippenstiel 





Approved for public release; distribution is unlimited. 








REPORT DOCUMENTATION PAGE

Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instruction, searching existing
data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate
or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for
Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget,
Paperwork Reduction Project (0704-0188), Washington, DC 20503.

1. AGENCY USE ONLY (Leave blank)

2. REPORT DATE
June 1998

3. REPORT TYPE AND DATES COVERED
Master's Thesis

4. TITLE AND SUBTITLE: ACOUSTIC NOISE REMOVAL BY COMBINING WIENER AND WAVELET FILTERING TECHNIQUES

5. FUNDING NUMBERS

6. AUTHOR(S) Fredric D. Forney, Jr.

7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES)
Naval Postgraduate School
Monterey CA 93943-5000

8. PERFORMING ORGANIZATION REPORT NUMBER

9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES)

10. SPONSORING/MONITORING AGENCY REPORT NUMBER

11. SUPPLEMENTARY NOTES The views expressed in this thesis are those of the author and do not reflect the
official policy or position of the Department of Defense or the U.S. Government.

12a. DISTRIBUTION/AVAILABILITY STATEMENT
Approved for public release; distribution is unlimited.

12b. DISTRIBUTION CODE

13. ABSTRACT (maximum 200 words)

This thesis investigates the application of Wiener filtering and wavelet techniques for the removal of noise
from underwater acoustic signals. Both FIR and IIR Wiener filters are applied in separate methods which
involve the filtering of wavelet coefficients which have been produced through a discrete wavelet
decomposition of the acoustic signal. The effectiveness of the noise removal methods is evaluated by applying
them to simulated data. The combined Wiener wavelet filtering methods are compared to traditional denoising
techniques which include Wiener filtering and wavelet thresholding methods.

14. SUBJECT TERMS Wavelet Analysis, Wiener Filtering, Denoising, Acoustic Noise

15. NUMBER OF PAGES
148

16. PRICE CODE

17. SECURITY CLASSIFICATION OF REPORT
Unclassified

18. SECURITY CLASSIFICATION OF THIS PAGE
Unclassified

19. SECURITY CLASSIFICATION OF ABSTRACT
Unclassified

20. LIMITATION OF ABSTRACT
UL

NSN 7540-01-280-5500

Standard Form 298 (Rev. 2-89)
Prescribed by ANSI Std. 239-18  298-102











Approved for public release; distribution is unlimited. 


ACOUSTIC NOISE REMOVAL BY COMBINING WIENER 
AND WAVELET FILTERING TECHNIQUES 


Fredric D. Forney, Jr. 
Lieutenant Commander, United States Navy


B.S., United States Naval Academy, 1983 
M.S., The George Washington University, 1991 


Submitted in partial fulfillment 
of the requirements for the degree of 


MASTER OF SCIENCE IN ELECTRICAL ENGINEERING 
from the 


NAVAL POSTGRADUATE SCHOOL 
June 1998 








ABSTRACT 


This thesis investigates the application of Wiener filtering and wavelet 
techniques for the removal of noise from underwater acoustic signals. Both FIR and 
IIR Wiener filters are applied in separate methods which involve the filtering of 
wavelet coefficients which have been produced through a discrete wavelet
decomposition of the acoustic signal. The effectiveness of the noise removal 
methods is evaluated by applying them to simulated data. The combined Wiener 
wavelet filtering methods are compared to more traditional denoising techniques 


which include Wiener filtering and wavelet thresholding methods. 










TABLE OF CONTENTS 


I.    INTRODUCTION

II.   OCEAN ACOUSTIC NOISE
      A. NOISE SOURCES
         1. Ambient Noise
            a. Ultra-Low Band (<1 Hz)
            b. Infrasonic Band (1 to 20 Hz)
            c. Low Sonic Band (20 to 200 Hz)
            d. High Sonic Band (200 to 50,000 Hz)
            e. Ultrasonic Band (> 50,000 Hz)
         2. Self Noise
      B. TRANSMISSION LOSS AND WATER MASS CHARACTERISTICS
      C. NOISE MODEL

III.  WIENER FILTERING
      A. MODEL DESCRIPTION
      B. FIR WIENER FILTER DEVELOPMENT

IV.   WAVELET ANALYSIS AND MULTIRATE SYSTEMS
      A. FOURIER ANALYSIS
      B. THE CONTINUOUS WAVELET TRANSFORM
      C. THE DISCRETE WAVELET TRANSFORM
      D. MULTIRATE SYSTEMS ANALYSIS
      E. DAUBECHIES WAVELET

V.    DENOISING METHODS USING WAVELET THRESHOLDING
      A. WAVELET COEFFICIENT THRESHOLDING
      B. THRESHOLD SELECTION
         1. Universal Threshold
         2. SURE Threshold
         3. Hybrid Threshold
         4. Minimax Principle Threshold
         5. Threshold Discussion

VI.   FIR WIENER WAVELET FILTER
      A. MULTIRATE SYSTEMS ANALYSIS
      B. ALIAS CANCELLATION

VII.  IIR WIENER WAVELET FILTER
      A. IIR WIENER FILTER
      B. IIR WIENER FILTER APPLIED TO THE WAVELET DOMAIN
      C. WIENERSHRINK

VIII. COMPARISON OF DENOISING METHODS
      A. WIENER FILTER
      B. WAVELET THRESHOLDING
      C. FIR WIENER WAVELET FILTER
      D. IIR WIENER WAVELET FILTER
      E. WIENERSHRINK

IX.   WAVELET PACKETS
      A. WAVELET PACKET THEORY
      B. WAVELET PACKETS APPLIED TO TEST SIGNAL
         1. Short-time Wiener Filter
         2. Wavelet Thresholding
         3. FIR Wiener Wavelet Filter
         4. IIR Wiener Wavelet Filter
         5. Wavelet Packet Thresholding
         6. FIR Wiener Wavelet Packet Filter
         7. IIR Wiener Wavelet Packet Filter
         8. Summary

X.    CONCLUSIONS


I. INTRODUCTION 


Accurate analysis of ocean acoustic signals has proven to be a difficult task due to 
unwanted noise which is usually present. Significant effort has been directed to the problem 
of noise removal from underwater acoustic signals. In the 1940s, Norbert Wiener conducted 
extensive research in the area of linear minimum mean-square error (MMSE) filtering. 
Provided the spectral characteristics of an additive combination of signal and noise are 
known, the Wiener filter will operate in a linear manner to yield the best separation of the 
signal from the noise. Here, "best" means minimum mean-square error. The Wiener filter
is today a well-tested method of noise reduction.

Wavelet techniques are, conversely, a more modern approach to noise reduction, 
although their origin lies in the timeless methods developed by Fourier. Wavelet analysis 
decomposes the signal into a family of basis functions and provides two significant 
advantages over the traditional Fourier analysis. First, in wavelet analysis there is a wide 
variety of basis functions to choose from, and second, a multi-resolution capability is 
provided in the time-frequency domain which is critical to the identification and elimination 
of noise in a non-stationary environment. Wavelet techniques have proven to be a viable tool 
for the denoising of acoustic signals. 

Recently, a question has been posed concerning the combination of these two 
invaluable techniques. The approach presented applies a discrete wavelet transform to the 
noisy signal. Next, rather than thresholding the wavelet coefficients, a Wiener filter is 


applied to separate the noise from the signal. While this approach may intuitively seem 


reasonable, there are problems associated with aliasing and perfect reconstruction of the 
denoised signal which must be considered. Given that the aliasing problem can be dealt 
with, the results still may not be superior to either Wiener filtering alone or wavelet-based 
techniques alone but the possibilities are certainly worthy of investigation. 

In this thesis, an approach is presented which combines the Wiener filter, in both the 
causal finite impulse response (FIR) and the non-causal infinite impulse response (IIR)
forms, with wavelet-based techniques. The various noise removal methods are applied to 
simulated data which offers the advantage of providing ground truth to the analysis. 
Additionally, the signal-to-noise ratio (SNR) level can be easily modified to evaluate the 
effectiveness of the denoising algorithms. The combined Wiener wavelet based denoising 
methods demonstrate promise in the recovery of ocean acoustic signals from the noisy ocean 
environment in which ships operate.

The problem of denoising underwater acoustic signals is addressed in nine additional
chapters summarized as follows. Chapter II presents background information on the
characteristics of ocean acoustic noise. In Chapter III the theory and application of Wiener
filtering methods are discussed. Chapter IV presents the theoretical development of wavelet
analysis and multirate systems. Standard wavelet threshold-based denoising techniques
are presented in Chapter V. FIR Wiener wavelet noise removal methods are developed in
Chapter VI, followed by IIR Wiener wavelet methods in Chapter VII. A comparison of the
various denoising methods is presented in Chapter VIII. Wavelet packet denoising
techniques are applied to the problem of high-frequency signals of short duration in Chapter
IX. Finally, conclusions are presented in Chapter X.


II. OCEAN ACOUSTIC NOISE

Though significant resources have been devoted to non-acoustic detection methods 
in the undersea warfare environment, sonar remains the primary and most reliable method 
of detection. However, as modern technology continues to enable significant reduction in 
radiated source levels the problem of early detection and classification of underwater 
acoustic signals gains greater significance. A possible solution lies in the area of improved 
digital signal processing and filtering methods which would allow detection and 
classification of signals at lower signal-to-noise ratios. The passive sonar equation states that 
the source level of the target minus the loss due to propagation through the medium, minus 
the sum of all interfering noises plus improvement by the spatial processing gain of the 
receiver, must be greater than the detection threshold for a sonar system to sense the presence 
of a target with a fifty percent probability of detection [1] 

SL - TL > NL - DI + DT,                                     (2.1)

where: SL = Source Level of the target being detected passively
       TL = Transmission Loss as the signal propagates to the detector
       NL = Noise Spectrum Level of the ambient noise and self noise in the ocean
       DI = Directivity Index of the detecting system
       DT = Detection Threshold for 50% probability of detection

and all terms are in dB referenced to 1 µPa.
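As a quick numerical illustration of Equation (2.1), the short Python sketch below evaluates the detection criterion for a set of purely hypothetical levels (the values are illustrative assumptions, not measurements from this thesis):

    # Hypothetical check of the passive sonar equation (2.1); all values in dB re 1 uPa.
    SL = 140.0   # source level of the target (assumed)
    TL = 60.0    # transmission loss along the propagation path (assumed)
    NL = 70.0    # noise spectrum level: ambient noise plus self noise (assumed)
    DI = 15.0    # directivity index of the receiving system (assumed)
    DT = 10.0    # detection threshold for 50% probability of detection (assumed)

    signal_excess = (SL - TL) - (NL - DI) - DT
    print(f"signal excess = {signal_excess:.1f} dB")
    print("detection expected" if signal_excess > 0 else "no detection expected")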


A reduction in the noise spectrum level would certainly result in an improved 


detection probability. The noise spectrum level includes both self noise and ambient noise. 


A. NOISE SOURCES

1. Ambient Noise 

Urick identifies ambient noise as that part of the total noise background observed 
with a nondirectional hydrophone which is not due to that hydrophone and its manner of 
mounting called “self-noise”, or to some identifiable localized noise source [2]. Ambient 
noise is produced by a variety of sources and may be found in the frequency range from 1
Hz to 100 kHz. The noise frequency range is typically divided into five frequency bands of
interest, which are discussed next. 

a. Ultra-Low Band (<1 Hz) 

Though little is known about the exact contributors at the lower end of the 
spectrum, it is surmised that these sources include tides and hydrostatic effects of waves,
seismic disturbances, and oceanic turbulence [3]. Tides and waves cause hydrostatic
pressure changes resulting in a low frequency, high amplitude contribution to the ambient 
noise spectrum. The constant seismic activity measured on land extends into the ocean 
environment causing low frequency, high amplitude contributions which add to those 
produced by tides and waves. Oceanic turbulence gives rise to varying dynamic pressures 
which are detected by pressure sensitive hydrophones. 

b. Infrasonic Band (1 to 20 Hz) 

This band has gained importance with the emergence of improved low 
frequency narrowband passive sonar systems. It contains the strong blade-rate fundamental 


frequency of propeller-driven vessels and its accompanying harmonics. A steep negative 


spectral slope of 10 dB per octave is common in the region from 1 to 5 Hz. This slope goes
positive from 5 to 20 Hz as shipping noise begins to become a more significant factor. In the 
absence of ship traffic this region continues to fall off and is more affected by wind speed. 

c. Low Sonic Band (20 to 200 Hz)

Studies have shown that distant ship traffic is the dominant source of noise 
at 100 Hz and has a significant effect in the low sonic band [3]. In areas of low shipping
intensity, wind speed continues to be the major factor just as it is in the infrasonic and high 
sonic bands. Thus, an area of heavy shipping such as the North Atlantic, where on average 
1,100 ships are underway, will see a much greater effect than less traveled areas such as the 
South Pacific. 

d. High Sonic Band (200 to 50,000 Hz) 

The well known acoustician V.O. Knudsen conducted extensive studies in this 
band during World War II. In these studies, he was able to correlate noise with wind speed 
in the frequency range 500 Hz to 25 KHz. His results, published in 1948, are best 
summarized by the curves shown in Figure 2.1. These curves show a clear relationship 
between wind speed (or sea state which isn’t measured as accurately) and spectrum levels. 
Subsequent studies have shown the spectrum to be flatter in the range 200 to 800 Hz but 
have generally confirmed Knudsen’s results [4]. Crouch and Burt [5] have developed an 
expression to model the noise spectrum level in dB which is given as 

NL(f) = B(f) + 20 n log10 V,                                (2.2)

where NL is the noise spectrum level in dB referenced to 1 µPa at frequency f, B(f) is the


noise level at a wind speed of 1 knot at a particular frequency, n is an empirical coefficient, 


and V is the wind speed in knots. For n = 1 the noise level increases as 20 log10 V, and the
noise intensity will increase as the square of the wind speed.

Figure 2.1: Average Ambient Noise Spectra (spectrum level in dB re 1 µPa versus frequency in kHz, with curves parameterized by wind speed/sea state). From Ref. [6].
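For illustration, Equation (2.2) can be evaluated directly; in the short sketch below the base level B(f) and the coefficient n are assumed values chosen only to show the 20·n·log10(V) growth with wind speed:

    import math

    def noise_spectrum_level(B_f, n, V_knots):
        # Crouch-Burt model, Eq. (2.2): NL(f) = B(f) + 20 n log10(V)
        return B_f + 20.0 * n * math.log10(V_knots)

    # Assumed values: B(f) = 55 dB re 1 uPa at 1 knot, empirical coefficient n = 1.
    for V in (1, 5, 10, 20):
        print(f"V = {V:2d} kt -> NL = {noise_spectrum_level(55.0, 1.0, V):.1f} dB re 1 uPa")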

e. Ultrasonic Band (> 50,000 Hz)

At frequencies above 50,000 Hz thermal noise becomes the predominant 
contributor to the noise background. In 1952 Mellen [7] showed theoretically that the 
thermal noise of the molecules of the sea affects hydrophones and places a limit on their 
sensitivity at high frequencies. Based on principles of classical statistical mechanics he 


formulated the following expression for the spectrum level NL in dB referenced to 1 µPa of
the thermal noise at frequency f in kHz, given by

NL(f) = -15 + 20 log10 f.                                   (2.3)
Though measurements have been recorded in this band they have not 
conclusively substantiated the above expression due to excessive shipping noise in nearby 
ports. No measurements in this band in deep, quiet, open-ocean water appear to have been
made to date.
2. Self Noise
Self noise includes all noise created by the receiving platform and usually falls into 
one of two categories which include equipment noise and platform noise. Equipment noise 
includes electronic or thermal noise produced within the sonar electronic system. Platform 
noise is produced from the same sources as radiated noise except that the source of platform 
noise is the receiving platform vice the source or target platform. These sources include 
machinery noise, hydrodynamic noise, propeller noise and transients. Platform noise reaches 
the receiving transducer by a variety of methods including vibration via an all-hull path, all- 
water direct path, all-water back scattered path from volume scatterers, and all-water bottom 
reflected path [2]. Machinery noise occurs principally as low frequency tonals which are 
relatively independent of speed. Hydrodynamic noise which includes all sources of noise 
resulting from the flow of water past the hydrophone and its support and the outer hull 
structure of the vessel, becomes more significant as speed increases. At high speeds 


propeller noise becomes the dominant contributor to self noise. 


B. TRANSMISSION LOSS AND WATER MASS CHARACTERISTICS 

Properties of the water mass such as temperature, salinity, and density affect the 
transmission path of sound in water and therefore alter the signal received at the hydrophone. 
Additionally, the depth of water and bottom structure influence the path traveled and could 
result in multiple transmission paths between the source and receiver. The effects of
absorption and attenuation cause the ocean to filter out the high frequency spectrum while 
passing the low frequency spectrum. Thus, ocean acoustic noise appears to occur more 
predominantly in the lower frequency region. 
C. NOISE MODEL

Modeling ocean acoustic noise as a stationary white Gaussian random process 
substantially simplifies the analysis and study of denoising. However, the variety of sources 
which contribute to self and ambient noise and the acoustic transmission characteristics of 
the ocean result in a noise contribution which is colored vice white in nature. A common 
method around this obstacle is to pre-whiten the acoustic signal. 

Stationarity is another assumption which must be applied carefully. Certainly the 
Ocean environment is one in which sources of acoustic signals change regularly. Moreover, 
the water mass properties are in a constant state of flux. This hardly meets the criterion for 
Stationarity at first glance. However, on a time scale of several seconds, the ocean 
environment can indeed be considered stationary. 

Finally, evaluation of ocean acoustic noise reveals that the assumption of a Gaussian 


random process appears to hold true in most cases. An artificial computer generated white 


Gaussian noise sample and its power spectral density are pictured in Figure 2.2. This 
assumption can be proved by comparing the histogram of a noise sample to the Gaussian 
probability density function with appropriate sample mean and variance or by checking a 
normal probability distribution plot of the noise sample as seen in Figure 2.3. Barsanti’s 
results have verified this assumption with many actual ocean acoustic signals [8]. Frack 
discusses several more involved methods of verifying this critical modeling assumption [9]. 

All noisy signals in this thesis are generated by adding white Gaussian noise with 
zero mean (as shown in Figure. 2.2) to the noise-free signal. When analyzing actual ocean 
acoustic signals a noise sample must be available to determine the statistical properties 
necessary to produce optimal filtering. A noise sample with statistical properties which 
accurately reflect the noise embedded in the noisy signal produces a more effective filter and 
amore accurate denoised signal. Frack discusses methods which can be used to model noise 


for filtering purposes when adequate noise samples are not available [9]. 
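To make the noise model concrete, the following Python sketch (an illustrative stand-in for the MATLAB processing used in the thesis) generates a zero-mean white Gaussian noise sample of the kind shown in Figure 2.2 and compares its histogram to a Gaussian density with the sample mean and variance:

    import numpy as np

    rng = np.random.default_rng(0)
    noise = rng.standard_normal(1024)        # zero-mean, unit-variance white Gaussian noise

    mu, sigma = noise.mean(), noise.std()    # sample statistics for the reference PDF

    # Crude Gaussianity check: compare the empirical histogram with the Gaussian PDF.
    counts, edges = np.histogram(noise, bins=30, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gauss_pdf = np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    print(f"sample mean = {mu:.3f}, sample std = {sigma:.3f}")
    print(f"max |histogram - PDF| = {np.max(np.abs(counts - gauss_pdf)):.3f}")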


Figure 2.2: Artificial White Gaussian Noise Sample (top: amplitude versus sample number; bottom: power spectral density in dB versus normalized frequency).




Figure 2.3: Histogram and Normal Probability Plot of Artificial White Gaussian Noise 
Sample. The superimposed normal PDF has been scaled to match the histogram for 1024 
samples. 




III. WIENER FILTERING 

Wiener filtering is one of the earliest methods used to separate noise from a desired 
signal. Developed by Norbert Wiener in the 1940's, this form of optimal filtering is used to 
denoise ocean acoustic signals with the objective of improving signal detection and 
classification. The FIR Wiener filter and its application to denoising is developed in this 
chapter. 
A. MODEL DESCRIPTION 

The optimum linear discrete-time estimation model is illustrated below in Figure 3.1. 
The filter input signal x(n) consists of a signal s(n) and an additive noise w(n). The
estimator is constrained to be a linear filter with impulse response h(n), which is designed 
to remove the noise while preserving the characteristics of the signal. The filter impulse 
response is obtained by minimizing the estimation error e(n), defined as the difference 
between the filter output and the desired signal d(n), which is taken as the original signal 


s(n), for filtering applications. 


















Figure 3.1: Linear Discrete-Time Estimation Model. The noisy input x(n) = s(n) + w(n) is passed through a discrete-time filter with impulse response h(n) to produce the estimate ŝ(n); the desired signal is d(n) = s(n) and the estimation error is e(n) = s(n) - ŝ(n).


A time-varying filter can be derived for nonstationary signals but the complications 
are significant [10]. A more tractable approach involves segmenting the input signal into 
blocks which can be considered stationary for the short duration of a few milliseconds to a 
few seconds. In developing the filter for this model, the input signal and noise spectral 
characteristics or equivalently, auto- and cross-correlation functions must be known. 
Although typically not known a priori, this information can be estimated from the data 
assuming that the noise can be isolated sometime during the observation interval. 

B. FIR WIENER FILTER DEVELOPMENT 

Given the spectral characteristics of an additive combination of signal and noise, 
Wiener proposed a scheme which best separates signal from the noise [11]. His procedure 
involves solving for the “best” filter coefficients by minimizing the difference between the 
desired ideal noise-free signal and the filter output in a mean square error sense. The mean 
square error is most often used due to its mathematical simplicity, as it leads to a second 
order cost function with a unique minimum obtained by using optimal filter coefficients [10]. 

The FIR Wiener filter is constrained to length P with coefficients h(k), 0 ≤ k ≤ P-1.
Its output depends on the finite input data record x(n) = [x(n), x(n-1), ..., x(n-P+1)]^T and may
be expressed as the convolutional sum:

ŝ(n) = y(n) = Σ_{k=0}^{P-1} h*(k) x(n-k).                   (3.1)

The mean square value of the error, e(n), between the desired output s(n), and the filter
output ŝ(n), can then be expressed as:


σ² = E{|e(n)|²} = E{| s(n) - Σ_{k=0}^{P-1} h*(k) x(n-k) |²}.   (3.2)


The principle of orthogonality states that the optimal filter coefficients h(n), for n = 0, 1, ...,
P-1, minimize the mean square error if chosen such that E{x(n-i) e*(n)} = 0, for i = 0, 1, ...,
P-1, that is, if the error is orthogonal to the observations [12]. The minimum mean square
error can then be given by σ²_min = E{s(n) e*(n)}. Now applying the orthogonality principle
results in the following set of equations:

E{x(n-i) e*(n)} = E{ x(n-i) [ s*(n) - Σ_{k=0}^{P-1} h(k) x*(n-k) ] } = 0,  for i = 0, 1, ..., P-1.   (3.3)

Equation (3.3) can also be expressed in terms of the auto-correlation function of the
observations R_x(k) and the cross-correlation function between the signal and the observation
sequence R_sx(i), which leads to:

Σ_{k=0}^{P-1} h(k) R_x(i-k) = R_sx(i),  for i = 0, 1, ..., P-1.   (3.4)

Assuming the signal and noise are independent and have a zero mean, it can be shown that:

R_sx(k) = R_s(k)   and   R_x(k) = R_s(k) + R_w(k),          (3.5)

where R_s(k) and R_w(k) are the signal and noise correlation functions respectively.
The set of discrete Wiener-Hopf equations for the causal stationary filter can be 


expressed in matrix form as 


R_x h = r̃_sx,                                              (3.6)

where R_x = E{x(n) x^T(n)}                                  (3.7)

and r_sx = E{s(n) x(n)},                                    (3.8)

and "˜" represents the reversal operator applied to a vector [12]. Equation (3.6) may be




solved for the filter coefficients using matrix methods. The minimum mean square error is 


found using the second part of the orthogonality theorem: 


σ²_min = E{s(n) e*(n)} = E{ s(n) [ s(n) - Σ_{k=0}^{P-1} h*(k) x(n-k) ]* },   (3.9)

or again in terms of the correlation functions:

σ²_min = R_s(0) - Σ_{k=0}^{P-1} h*(k) R_sx(k).              (3.10)
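The Wiener-Hopf system of Equation (3.6) is easily solved numerically once the correlations have been estimated. The Python sketch below is an illustrative stand-in for the MATLAB routines used in the thesis (the function and variable names are my own); it estimates the correlations from a noisy observation and a separate noise sample, uses Equation (3.5) to form the cross-correlation, solves for the FIR coefficients, and filters the observation:

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def fir_wiener(x, noise, P):
        # Estimate biased autocorrelations R(0)..R(P-1) of a zero-mean sequence.
        def autocorr(v, lags):
            v = v - v.mean()
            return np.array([np.dot(v[:len(v) - k], v[k:]) / len(v) for k in range(lags)])

        r_x = autocorr(x, P)               # R_x(k): autocorrelation of the observation
        r_w = autocorr(noise, P)           # R_w(k): autocorrelation of the noise sample
        r_sx = r_x - r_w                   # Eq. (3.5): R_sx(k) = R_s(k) = R_x(k) - R_w(k)
        return solve_toeplitz(r_x, r_sx)   # Wiener-Hopf equations, Eq. (3.6)

    # Example: length-12 filter for a 0 dB SNR sinusoid (illustrative parameters).
    rng = np.random.default_rng(1)
    n = np.arange(1024)
    s = np.sqrt(2) * np.sin(2 * np.pi * 0.1 * n)      # unit-power sinusoid
    x = s + rng.standard_normal(n.size)               # unit-power white Gaussian noise added
    h = fir_wiener(x, rng.standard_normal(4096), P=12)
    s_hat = np.convolve(x, h)[:n.size]                # denoised estimate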
The Wiener filter is now applied to a synthetically generated noisy sinusoidal signal 
of varying frequency with a SNR of 0 dB. The original signal and noisy signal, shown in 
Figure 3.2(a), are compared to the original signal and denoised signal in Figure 3.2(b). The 
standard measure used to compare denoised signals to the noise free original signal is a 


modified Mean Squared Error (MSE), defined as: 


MSE = Σ_{n=1}^{N} [ s(n)/norm(s) - y(n)/norm(y) ]²,         (3.11)


where s(n) is the noise-free original signal, y(n) is the denoised output, and N is the length
of the signal. The signals are energy normalized by dividing by their norms. This represents 
a denoised signal amplitude gain which is applied to account for losses incurred during the 
filtering process. This normalized MSE will be used throughout the thesis as a measure of 
performance. It is not normalized by the signal length. However, a standard signal length 
of 1024 or 16384 points is used exclusively for all denoising trials. The Wiener filtering 
results for various filter orders is depicted in Figures 3.3 and 3.4. A single-frequency noisy 


sinusoidal signal (0 dB SNR) is denoised through ten trials and an average normalized MSE 




is computed. This is repeated for test signals with normalized frequencies which range from 
0.01 to 0.49 in steps of 0.01 and the results are displayed. The noise sample supplied to the 
Wiener filter is an independent noise sample in Figure 3.3 and the original noise sample used 
to create the noisy sinusoidal signal in Figure 3.4. Although the original noise sample is not 
practically available, it is shown for comparison purposes. A table comparing the MSE 
values averaged across the spectrum is shown following the Figures. The MSE performance 
improves for higher filter orders. Additionally, the MSE value is rather flat across the 
spectrum for filter orders higher than four. These results are as expected for the Wiener
filter and provide a benchmark standard of performance against which other denoising 


methods can be compared. 
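For reference, the normalized MSE of Equation (3.11) can be computed as in the following short sketch (illustrative Python, using the energy normalization described above):

    import numpy as np

    def normalized_mse(s, y):
        # Modified MSE of Eq. (3.11): energy-normalize both signals by their norms,
        # then sum the squared differences (no division by the signal length).
        s_n = s / np.linalg.norm(s)
        y_n = y / np.linalg.norm(y)
        return np.sum((s_n - y_n) ** 2)

    # e.g., normalized_mse(s, s_hat) for the Wiener-filtered sinusoid sketched earlier.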


Figure 3.2: Wiener Filter. (a) Original (dashed) and noisy sinusoidal signal, SNR = 0 dB, normalized frequency = 0.1, sampling frequency = 1. (b) Original (dashed) and Wiener-filtered signal, filter order = 12. Both panels: amplitude versus time (sample number).


Figure 3.3: Mean-square error comparison for Wiener filters of various orders (mean square error versus normalized frequency, SNR = 0 dB). Independent noise sample. Ten-trial average.


Figure 3.4: Mean-square error comparison for Wiener filters of various orders (mean square error versus normalized frequency, SNR = 0 dB). Actual noise sample. Ten-trial average.

Table: Average MSE across the spectrum for the independent noise sample and the actual noise sample.


IV. WAVELET ANALYSIS AND MULTIRATE SYSTEMS 

Wavelet analysis has found a myriad of applications in areas ranging from financial 
research to engineering analysis. It is particularly well suited for signal processing 
applications such as image compression, sound enhancement, and statistical analysis [13]. 
This chapter presents the underlying theory of wavelet analysis including both the 
mathematical basis, which stems from Fourier analysis, and the multirate implementation in 
filter banks. 
A. FOURIER ANALYSIS 

Fourier and wavelet analysis are similar in that the goal is to express an observed real 


time-series signal, x(t), as a linear decomposition of the form 


x(t) = Σ_k a_k ξ_k(t),                                      (4.1)

where k is an integer index for the finite or infinite sum, a_k are real-valued expansion
coefficients, and ξ_k(t) are a set of functions called the expansion set [14]. If an appropriate
expansion set is chosen, the resulting expansion coefficients will provide useful information
about the signal allowing better analysis and processing. The expansion set is called a basis
for the signal space if every signal in that space can be expressed as a unique set of linearly
independent functions. Further, if the basis functions are orthogonal so that their inner
product is zero:

⟨ξ_j(t), ξ_k(t)⟩ = ∫ ξ_j(t) ξ_k(t) dt = 0,  j ≠ k,          (4.2)

then the coefficients can be calculated by the inner product

a_k = ⟨x(t), ξ_k(t)⟩ = ∫ x(t) ξ_k(t) dt.                    (4.3)
In addition to simplifying the computation of the expansion coefficients, an expansion 
utilizing an orthogonal basis function results in a sparse representation where most of the 
coefficients are zero or near zero. This has great utility in the nonlinear noise reduction 
methods developed in later chapters. 

A well known transform which is used to extract spectral information from time- 
series signals is the Fourier transform, 

X(f) = ∫ x(t) exp(-j2πft) dt,                               (4.4)

which uses orthogonal sinusoidal basis functions contained in the complex exponential
function. Since the Fourier transform is integrated over all time, it is only suitable for
stationary signals which have no time-varying frequencies. However, most real-world signals
are non-stationary, and therefore, a time-varying spectral representation is essential in
conceptualizing the temporal relationship between the spectral components.

The short-time Fourier transform (STFT) is the key to adapting the Fourier transform
to time-frequency analysis. Let x(t) be the nonstationary signal of interest. A temporal
window, w(t-τ), is applied to the signal. The window, which is centered at τ, segments the
signal into short intervals which can be considered stationary. The traditional Fourier
transform is then applied to the windowed signal. Thus, the STFT may be expressed as
follows:

STFT(τ, f) = ∫ x(t) w(t-τ) exp(-j2πft) dt.                  (4.5)


As a result, the STFT maps a one-dimensional time series into a two-dimensional 




time-frequency function STFT(τ, f). Equation (4.5) can be considered the inner (scalar)
product of the signal x(t) with a two-parameter family of basis functions w(t-τ) exp(-j2πft)
for varying τ and f [15]. The squared modulus of the short-time Fourier transform of a
signal is known as a spectrogram and is expressed as SPEC(τ, f) = |STFT(τ, f)|². The
spectrogram is a measure of the signal energy in the time-frequency plane and provides 
powerful insight when analyzing nonstationary signals. A common band-pass filter 
implementation of the STFT is shown in Figure 4.1. In this model, the frequency f might 
be considered the midband frequency of the bandpass filter. This model results in a filter 
bank setup with a set of analysis filters that perform a STFT for each specific value of 


frequency f [15]. 
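To illustrate this analysis-filter view, the sketch below computes an STFT-based spectrogram of a test signal with SciPy; the signal, window length, and overlap are illustrative assumptions:

    import numpy as np
    from scipy.signal import stft

    n = np.arange(16384)
    x = np.sin(2 * np.pi * (0.05 + 0.1 * n / n.size) * n)   # tone whose frequency drifts upward

    # 128-point Hann window with 50% overlap; each column of Zxx is one windowed Fourier transform.
    f, t, Zxx = stft(x, fs=1.0, window='hann', nperseg=128, noverlap=64)
    spectrogram = np.abs(Zxx) ** 2       # SPEC(tau, f) = |STFT(tau, f)|^2
    print(spectrogram.shape)             # (frequency bins, time frames)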


Figure 4.1: Band-pass Filter Implementation of STFT (input x(t), band-pass filter w(-t)exp(j2πft), demodulation by exp(-j2πft), output STFT(τ, f) evaluated at f = n·f₁, n = 1, 2, ...).


In terms of time and frequency resolution the STFT must satisfy the time-bandwidth 
limitations of the uncertainty principle, namely: 


Δt Δf ≥ 1/(4π),                                             (4.6)

where Δt is the time resolution and Δf is the frequency resolution or bandwidth. This
equality can be met by using a Gaussian window [16]. However, once the window is 
specified a constant bandwidth filter bank results with fixed time-frequency resolution over 
the entire time-frequency plane. The time-frequency plane has a uniform tiling as 

shown in Figure 4.2. Although the Gaussian window minimizes the time-bandwidth product, 
the STFT still doesn’t provide sufficient time resolution without a significant degradation 
in frequency resolution. A narrow or compactly supported window function provides better 
time resolution at the expense of poorer frequency resolution. Similarly, a wider window 
function results in better frequency resolution but poorer time resolution. An alternative 
approach to the STFT is required, one which analyzes the signal at different frequencies with 
different resolutions. 

Figure 4.2: STFT Time-Frequency Display (uniform tiling of the time-frequency plane; axes: time in units of Δt, frequency in units of Δf).

B. THE CONTINUOUS WAVELET TRANSFORM

The wavelet transform is a multi-resolution analysis (MRA) method designed to


provide good time resolution and poor frequency resolution at high frequencies and good 
frequency resolution and poor time resolution at low frequencies. Fortunately, many 
practical ocean acoustic signals are composed of high frequency components of short 
duration plus low frequency components of long duration, and thus are well suited to this 
form of analysis [17]. In terms of the filter bank model described earlier, the time resolution 
must increase as the central frequency of the analysis filters increases. The filterbank is then 
composed of band-pass filters with constant relative bandwidth otherwise known as constant- 
Q filters where the constant relative bandwidth is defined as: 

Δf / f = constant = Q.                                      (4.7)
A clearer understanding may be obtained by comparing the division of the frequency 


domain, as shown in Figure 4.3 below. The STFT analysis filters in Figure 4.3(a) are 


regularly spaced over the frequency axis while WT multi-resolution analysis filters in Figure 


4.3(b) are regularly spread in a logarithmic scale. The improved frequency resolution at low
frequencies and better time resolution at higher frequencies is clearly illustrated for the WT
case.

Figure 4.3(a): STFT decomposition with Constant Bandwidth Filters (amplitude versus frequency; filters regularly spaced at f₁, 2f₁, 3f₁, ...).

Figure 4.3(b): WT decomposition with Constant-Q Factor Filters (amplitude versus frequency; filters centered at f₁, 2f₁, 4f₁, 8f₁, ...).

The continuous wavelet transform (CWT) is a powerful multi-resolution analysis 
method which has become a widely used alternative to the STFT. It may be viewed as a 
signal decomposition onto a set of basis functions where the basis functions are called 
wavelets [15]. The CWT has some similarities to the STFT in that a segmented time-domain 
signal is transformed by multiplying it by a wavelet function which is similar to a windowed 


function in the STFT. However, the basis functions for the two transforms are quite 


different. Whereas the Fourier transform breaks the signal into a series of sine waves of
different frequencies, the wavelet transform breaks the signal into its "wavelets", scaled and
shifted versions of the "mother wavelet".

Figure 4.4: Basis Functions - Sine Wave and Daubechies-8 Wavelet.

The two basis functions are illustrated in Figure 4.4, where a sine wave and a 
Daubechies-8 wavelet are compared. The smooth sine wave of finite length is a direct 
contrast to the wavelet which is irregular in shape. Its irregular shape improves the analysis 
of signals with discontinuities or sharp changes, while its compact support allows temporal 
localization of a signal’s features [18]. 


The two-parameter family of basis functions known as wavelets may be expressed 


as:

ψ_{a,τ}(t) = (1/√a) ψ((t-τ)/a),                             (4.8)

where a is a scale factor or dilation parameter and τ is a time delay. This leads to the
definition of the CWT:

CWT(τ, a) = (1/√a) ∫ x(t) ψ*((t-τ)/a) dt.                   (4.9)


In contrast to the Short Time Fourier transform, the CWT offers improved frequency 
resolution at low frequency while the time resolution is improved at higher frequency. 
Figure 4.5 illustrates the time-frequency plane for the CWT. The wavelet coefficients,
determined by the translation and dilation operations shown in Equation (4.8), represent the
correlation between the wavelet and a localized section of the signal. The process of
translation or shifting and dilation or scaling is illustrated in Figure 4.6.

Figure 4.5: CWT Time-Frequency Display (time-frequency tiling with finer time resolution at high frequencies and finer frequency resolution at low frequencies).
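A CWT of the form of Equation (4.9) can be computed with the PyWavelets package, as in the illustrative sketch below (the Morlet wavelet, the scale range, and the test signal are assumptions, not choices taken from this thesis):

    import numpy as np
    import pywt

    # Test signal: a low-frequency tone plus a short high-frequency burst.
    n = np.arange(1024)
    x = np.sin(2 * np.pi * 0.02 * n)
    x[400:464] += np.sin(2 * np.pi * 0.25 * np.arange(64))

    scales = np.arange(1, 65)                     # dilation parameter a
    coeffs, freqs = pywt.cwt(x, scales, 'morl')   # wavelet coefficients CWT(tau, a)
    print(coeffs.shape)                           # (number of scales, number of samples)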


Figure 4.6: Scaling and Shifting Process of the Wavelet Transform, showing the mother wavelet ψ(t), shifted versions ψ(t-k), and the scaled version ψ(t/2) (used with permission from Joshua Altmann, [18]).


C. THE DISCRETE WAVELET TRANSFORM

The advantages of multi-resolution analysis and the CWT have made the wavelet 
transform an invaluable tool in signal processing. However, real-time signal processing and 
denoising operations using the CWT would require a substantial expenditure in 
computational time and resources [19]. A much simpler implementation of the wavelet 
transform is available which reduces the redundant information experienced in CWT 
analysis. Based on work by Croisier, Esteban, and Galand [20], the discrete wavelet 
transform (DWT) is an efficient discrete signal processing technique which lends itself to 
digital computational methods. The DWT efficiently performs signal decomposition and 
reconstruction. 

A complete mathematical interpretation of wavelets is based on functional analysis, 
which is well defined in reference [21]. Basic concepts from both functional and multi- 
resolution analysis, as discussed in reference [14], will be used to define the scaling function 
and the wavelet function and describe their significance in the DWT of a signal. The 
objective, expressed by (4.1), is to represent a signal in a given vector space as a linear
combination of basis functions in a transformed vector space. These basis functions include 
dilated and shifted versions of the orthonormal scaling and wavelet functions. A set of 
scaling functions which span an arbitrary vector subspace V₀ may be defined as

φ_k(t) = φ(t-k),  k ∈ Z,  φ ∈ L²,                           (4.10)

where Z denotes the set of all integers and L² is the space of square integrable functions.


Equation (4.1) may be rewritten as, 


x(t) = Σ_k a_k φ_k(t)   for any x(t) ∈ V₀.                  (4.11)

The subspace spanned by the scaling function may be expanded to V_j by forming a set of
scaling functions which are both scaled and translated as follows

φ_{j,k}(t) = 2^{j/2} φ(2^j t - k).                          (4.12)

Consequently, (4.11) becomes

x(t) = Σ_k a_{j,k} φ_{j,k}(t)   for any x(t) ∈ V_j.         (4.13)

For j > 0, φ_{j,k} is shorter and translates in smaller steps allowing the representation of signals


with finer details. 


A nested set of spanned spaces is defined as

... ⊂ V₋₂ ⊂ V₋₁ ⊂ V₀ ⊂ V₁ ⊂ V₂ ⊂ ... ⊂ L²,                  (4.14)

or V_j ⊂ V_{j+1} for all j ∈ Z.                             (4.15)

The recursive relationship

φ(t) = Σ_n h(n) √2 φ(2t-n),  n ∈ Z,                         (4.16)

allows the lower scale φ(t) to be expressed in terms of a weighted sum of the shifted higher
scale φ(2t). The coefficients h(n) are the scaling function coefficients which form a filter to
be used in the DWT. 

Considering the scaling function as a coarse approximation of a signal, the wavelet 
function, which spans the differences between the spaces spanned by the various scales of 
the scaling function, provides the finer details. The two functions are orthogonal at a given 
scale and the wavelet function can be expressed in terms of the weighted sum of shifted 


scaling functions φ(2t) defined in (4.16) by


ψ(t) = Σ_n g(n) √2 φ(2t-n),  n ∈ Z,                         (4.17)

where g(n) are the wavelet function coefficients. The scaling function coefficients and
wavelet function coefficients are required by orthogonality to be related by

g(n) = (-1)^n h(N-1-n),                                     (4.18)

where N is the finite even length of the filter. The relationship

ψ_{j,k}(t) = 2^{j/2} ψ(2^j t - k),                          (4.19)
which is similar to (4.12), determines the two-dimensional family of wavelet functions by 
scaling and translating the mother wavelet given in (4.17). Figure 4.7 below depicts the 
relationship between the scaling function and wavelet function spaces. 


With the scaling and wavelet functions defined, any signal x(t) ∈ L²(R) can be written


as the discrete series

x(t) = Σ_k c_{j₀}(k) 2^{j₀/2} φ(2^{j₀}t - k) + Σ_k Σ_{j=j₀}^{∞} d_j(k) 2^{j/2} ψ(2^j t - k).   (4.20)

Figure 4.7: Scaling function and wavelet function space orthogonal relationship [14].

The choice of j₀ sets the coarsest scale spanned by the scaling function φ_{j₀,k}(t), providing a


coarse approximation of the signal. The rest of the space is spanned by the wavelet function 
which provides the high resolution details of the signal. The set of coefficients from (4.20) 
is called the discrete wavelet transform of the signal. 
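In practice the coefficients c_{j₀}(k) and d_j(k) are computed with the filter bank algorithm developed in the next section. PyWavelets wraps that computation, as in the illustrative sketch below (the wavelet choice, decomposition depth, and periodic extension mode are assumptions):

    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    n = np.arange(1024)
    x = np.sin(2 * np.pi * 0.05 * n) + 0.5 * rng.standard_normal(n.size)

    # Three-level DWT: coeffs = [c_j0, d_j0, d_j0+1, d_j0+2], coarsest scale first.
    coeffs = pywt.wavedec(x, 'db8', mode='periodization', level=3)
    x_rec = pywt.waverec(coeffs, 'db8', mode='periodization')

    print([len(c) for c in coeffs])        # 128, 128, 256, 512 coefficients
    print(np.allclose(x, x_rec))           # True: the decomposition is invertible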

D. MULTIRATE SYSTEMS ANALYSIS 

Multirate systems and filter banks are the workhorse of the DWT. A deeper 
understanding of their role in signal analysis and reconstruction is essential in exploring the 
effects of combining the Wiener filter with wavelet analysis. Multirate system components
used to implement the DWT include decimators, interpolators, analysis filters, and synthesis 
filters. These operations will be discussed in the context of the DWT and then a simple 
multirate system will be analyzed. 

Decimation involves the subsampling or downsampling of a discrete sequence by a 
factor of two for the DWT. This is equivalent to discarding every other sample and results 
in reducing the sampling rate by a factor of two. Mathematically this process is represented 
by 

x_d(n) = x(2n).                                             (4.21)

Taking the Fourier transform of (4.21) yields the frequency domain expression for
decimation

X_d(ω) = (1/2)[X(ω/2) + X(ω/2 + π)].                        (4.22)

The Z-transform of (4.21) provides the Z-domain expression

X_d(z) = (1/2)[X(z^{1/2}) + X(-z^{1/2})].                   (4.23)


The frequency and Z-domain expressions show that in addition to reproducing the input at 


half the sampling rate, X(ω/2), decimation produces a second term shifted by π radians,
X(ω/2 + π). This second term is responsible for aliasing, which causes a distorted frequency
response unless the original signal is properly bandlimited. If X(ω) is bandlimited
to |ω| < π/2, then there is no overlap between the two terms, and the aliasing term (i.e., the
shifted term) can be removed by filtering.

Interpolation, also known as expansion, upsamples a discrete sequence, again by a 
factor of two for the DWT. The insertion of a zero between each sample in the sequence 


leads to a doubling of the sampling rate. Mathematically this is expressed as 


x_u(n) = x(n/2) for even n,
x_u(n) = 0      for odd n.                                  (4.24)

The Fourier and Z-transforms of (4.24) produce the following results

X_u(ω) = X(2ω),
X_u(z) = X(z²).                                             (4.25)


Doubling of the sampling rate causes the original frequency response to be compressed by 
a factor of two. Additionally, interpolation causes an effect known as imaging where one 
input frequency causes two output frequencies, one at ω/2 radians and a second image at
π - ω/2 radians. This is different from aliasing, where two input frequencies, ω and ω + π, result


in the same output [22]. Block diagrams of the decimation and interpolation operators are 


shown in Figure 4.8.

Figure 4.8: Decimation and Interpolation Operators.
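The two operators are trivial to implement; the NumPy sketch below (illustrative only) shows the spectral stretching of Equation (4.22) and the zero insertion of Equation (4.24):

    import numpy as np

    def decimate2(x):
        # Downsample by two: keep every other sample, Eq. (4.21).
        return x[::2]

    def interpolate2(x):
        # Upsample by two: insert a zero after every sample, Eq. (4.24).
        y = np.zeros(2 * len(x))
        y[::2] = x
        return y

    n = np.arange(256)
    x = np.cos(2 * np.pi * 0.125 * n)            # component at normalized frequency 0.125

    Xd = np.abs(np.fft.rfft(decimate2(x)))
    print(np.argmax(Xd) / len(decimate2(x)))     # 0.25: the spectrum is stretched by two

    Xu = np.abs(np.fft.rfft(interpolate2(x)))
    peaks = np.sort(np.argsort(Xu)[-2:]) / len(interpolate2(x))
    print(peaks)                                 # [0.0625, 0.4375]: original component plus its image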

The analysis/synthesis filter bank employs the decimation and interpolation operators
in sequence to recover the original sequence; however, the effects of aliasing
and imaging must be considered. Decimation followed by interpolation produces the


following frequency and Z-domain results 


X_{(↓2)(↑2)}(ω) = (1/2)[X(ω) + X(ω+π)],
X_{(↓2)(↑2)}(z) = (1/2)[X(z) + X(-z)],                      (4.26)

which represent a scaled reproduction of the original sequence plus an additional term caused
by aliasing and imaging [22].

A lowpass filter H₀(z) and highpass filter H₁(z), known as analysis filters, are now
introduced prior to decimation in order to bandlimit the input signal. These maximally 
decimated FIR filters form a Quadrature Mirror Filter (QMF) bank in which the filters 
exhibit frequency responses which are a mirror image of each other as shown in Figure 4.9. 


Since the analysis filters have a nonzero transition bandwidth and stop-band gain, some 


aliasing does occur due to the decimation and must be eliminated by other means.

Figure 4.9: QMF bank: Analysis Filters H₀ and H₁ for the Daubechies-20 wavelet (normalized gain versus normalized frequency).

The imaging problem is remedied by a second QMF bank following the interpolation 
operation. These filters, known as synthesis filters, remove the images produced by the 
interpolation operation. However, they suffer from the same non-ideal characteristics as the 
analysis filters and thus are not capable of completely removing all images. If the analysis 
and synthesis filters meet specific conditions, they are able to remove all aliasing and 
imaging and perfectly reconstruct the signal. 

The analysis and synthesis filters as well as the decimator and interpolator operators 
are now assembled to form a filter bank as shown below in Figure 4.10. Perfect 
reconstruction is defined mathematically as 


x̂(n) = c x(n - n₀),                                        (4.27)

Figure 4.10: Two-channel Quadrature Mirror Filter bank [23] (input, analysis filters, decimators, interpolators, synthesis filters, output).


where c is a non-zero constant and n₀ a positive integer. Thus x̂(n) is merely a scaled and
delayed version of x(n) [23]. A Z-transform analysis of the filter bank in Figure 4.10 


provides the conditions necessary for perfect reconstruction. The output of the analysis filter 


for channel k (k = 0, 1) is expressed as
Y_k(z) = H_k(z) X(z).                                       (4.28)

According to (4.23), decimation produces the result

V_k(z) = (1/2)[Y_k(z^{1/2}) + Y_k(-z^{1/2})]
       = (1/2)[H_k(z^{1/2}) X(z^{1/2}) + H_k(-z^{1/2}) X(-z^{1/2})].   (4.29)

The interpolator output, determined by applying (4.25) and (4.26), becomes

U_k(z) = V_k(z²)
       = (1/2)[H_k(z) X(z) + H_k(-z) X(-z)].                (4.30)

Finally, the reconstructed signal, which is the sum of the synthesis filter bank channels, is

X̂(z) = (1/2)[F₀(z)H₀(z) + F₁(z)H₁(z)] X(z) + (1/2)[F₀(z)H₀(-z) + F₁(z)H₁(-z)] X(-z).   (4.31)
In matrix form, (4.31) becomes

X̂(z) = (1/2) [F₀(z)  F₁(z)] | H₀(z)   H₀(-z) | | X(z)  |
                             | H₁(z)   H₁(-z) | | X(-z) |.   (4.32)

The second component in (4.31) is due to aliasing and imaging and produces a replica of the
first term shifted π radians to the right on the unit circle. It is commonly referred to as the
alias term [23]. The following must be true to eliminate aliasing and imaging

F₀(z)H₀(-z) + F₁(z)H₁(-z) = 0.                              (4.33)

Additionally, the remaining term must conform to the following to remove distortion caused
by the presence of the analysis and synthesis filters

F₀(z)H₀(z) + F₁(z)H₁(z) = 2z^{-l}.                          (4.34)

Having met the alias cancellation and no distortion conditions, perfect reconstruction results
in the following, where l is the delay for this filter bank based on the filter length:

X̂(z) = z^{-l} X(z).                                        (4.35)


The analysis and synthesis filters can now be selected based on (4.33) and (4.34). 


Assuming the following lowpass analysis filter H₀(z) = Σ_n h(n) z^{-n}, the remaining filters
are obtained as follows to produce a two-channel perfect reconstruction filter bank [23]

H₁(z) = -z^{-(N-1)} H₀(-z^{-1}),
F₀(z) = z^{-(N-1)} H₀(z^{-1}),                              (4.36)
F₁(z) = z^{-(N-1)} H₁(z^{-1}).
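The conditions in Equations (4.33), (4.34), and (4.36) can be checked numerically. The sketch below builds the two-channel bank of Figure 4.10 from the Daubechies-4 filter set in PyWavelets and confirms that the output is the input delayed by l = N - 1 samples (an illustrative verification, not code from the thesis):

    import numpy as np
    import pywt

    def two_channel_bank(x, wavelet='db4'):
        # One pass through the QMF bank of Figure 4.10:
        # analysis filtering, decimation by 2, interpolation by 2, synthesis filtering.
        wav = pywt.Wavelet(wavelet)
        h0, h1 = np.array(wav.dec_lo), np.array(wav.dec_hi)   # analysis filters H0, H1
        f0, f1 = np.array(wav.rec_lo), np.array(wav.rec_hi)   # synthesis filters F0, F1

        def channel(h, f):
            v = np.convolve(x, h)[::2]       # filter, then keep even-indexed samples
            u = np.zeros(2 * len(v))
            u[::2] = v                       # insert zeros between samples
            return np.convolve(u, f)

        return channel(h0, f0) + channel(h1, f1)

    rng = np.random.default_rng(3)
    x = rng.standard_normal(256)
    y = two_channel_bank(x)
    N = pywt.Wavelet('db4').dec_len                    # filter length (8 for db4)
    print(np.allclose(y[N - 1:N - 1 + len(x)], x))     # True: perfect reconstruction, delay N-1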
E. DAUBECHIES WAVELET 
The Daubechies wavelet, named after Ingrid Daubechies of Bell Laboratories, is a 
common choice for the analysis and synthesis filters, because it possesses several nice 
properties. First, the filters produced by this family of wavelets are orthogonal with compact 
support, providing many advantages previously mentioned. Second, the frequency response
has maximum flatness at ω = 0 and ω = π. For a filter of length N, placing half the filter
zeros at z = -1 (or ω = π) results in a maximally flat filter of regularity K = N/2. This second
property leads to excellent results when Daubechies filters are used for the DWT 
decomposition and reconstruction of a large class of signals. 
The following theorem from Daubechies' 1988 paper on orthonormal wavelets [24]
summarizes her findings:

Theorem 1. A discrete-time Fourier transform of the filter coefficients h(n) having
K zeros at ω = π of the form

H(ω) = [(1 + e^{-jω})/2]^K ℓ(ω)                             (4.37)

satisfies |H(ω)|² + |H(ω+π)|² = 2                           (4.38)

if and only if L(ω) = |ℓ(ω)|² can be written as


L(ω) = P(sin²(ω/2)),                                        (4.39)

with P(y) = P_K(y) + y^K R(1/2 - y),                        (4.40)

where P_K(y) = Σ_{k=0}^{K-1} C(K-1+k, k) y^k,               (4.41)

and R is an odd polynomial, chosen such that P(y) ≥ 0 for 0 ≤ y ≤ 1. There is no explicit
expression for the scaling or wavelet functions.
A brief explanation of Daubechies' development from references [14] and [22]
follows. Equation (4.37) is expressed in terms of the frequency response |H(\omega)|^2 as follows:
|H(\omega)|^2 = \left(\cos^2(\omega/2)\right)^K |L(\omega)|^2.   (4.42)
In trigonometric terms, (4.42) may be expressed as a polynomial in \cos\omega:
|H(\omega)|^2 = \left(\cos^2(\omega/2)\right)^K P(\sin^2(\omega/2)),   (4.43)
which, after the change of variables y = \sin^2(\omega/2) = 1 - \cos^2(\omega/2), becomes
|H(y)|^2 = (1-y)^K P(y),   (4.44)
where P(y) is a polynomial of order N - K. A similar derivation for |H(\omega+\pi)|^2 enables (4.38)
to be expressed as:
(1-y)^K P(y) + y^K P(1-y) = 2.   (4.45)
Daubechies then applied Bezout's theorem with R = 0 in (4.40), resulting in the minimum length
N for a given regularity K (N = 2K), to determine the explicit solution given by (4.41). P(y)
is the binomial series for (1-y)^{-K}, truncated after K terms, with degree K-1 and K coefficients:


P(y) = 1 + Ky + \frac{K(K+1)}{2}y^2 + \cdots + \binom{2K-2}{K-1}y^{K-1} = (1-y)^{-K} + O(y^K).   (4.46)
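A minimal MATLAB sketch of (4.41) and the half-band identity (4.45) follows, assuming only a chosen regularity K. With the unit normalization used here the left-hand side of (4.45) evaluates to 1 for every y; scaling the filter so that H(0) = sqrt(2), as in (4.38), produces the value 2 quoted above.

    K = 4;                                            % e.g., a length-8 Daubechies filter
    p = arrayfun(@(k) nchoosek(K-1+k, k), 0:K-1);     % coefficients of y^0 ... y^(K-1) in (4.41)
    P = @(y) polyval(fliplr(p), y);                   % evaluate P(y)
    y = linspace(0, 1, 11);
    check = (1-y).^K .* P(y) + y.^K .* P(1-y);        % constant for all y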




Hence the response in terms of y becomes:
|H(y)|^2 = (1-y)^K \sum_{k=0}^{K-1} \binom{K-1+k}{k} y^k.   (4.47)
The filter frequency response is now expressed as: 
|H(\omega)|^2 = \left(\frac{1+\cos\omega}{2}\right)^K P\!\left(\frac{1-\cos\omega}{2}\right),   (4.48)
where the following transformations were used:
y = \frac{1-\cos\omega}{2} \quad\text{and}\quad 1-y = \frac{1+\cos\omega}{2}.   (4.49)
Equation (4.48) has K zeros at \omega = \pi and K-1 zeros due to the polynomial term P.
Transforming (4.48) to the Z-domain using z + z^{-1} = 2\cos\omega simplifies the calculations.
The minimum-phase transfer function H(z) and its time-reversed counterpart H(z^{-1}) are defined as follows:
H(z) = \sum_n h(n)z^{-n} \quad\text{and}\quad H(z^{-1}) = \sum_n h(n)z^{n}.   (4.50)
Equation (4.48) becomes:
F(z) = H(z)H(z^{-1}) = \left(\frac{(1+z)(1+z^{-1})}{4}\right)^K P\!\left(\frac{2-z-z^{-1}}{4}\right),   (4.51)


which has 2K zeros at z = -1, half of which belong to H(z), and 2K-2 zeros due to P(z), half
of which are inside the unit circle and belong to H(z). The maximally flat Daubechies filter
coefficients are solved for by applying spectral factorization methods to (4.51) to obtain H(z).
References [14] and [25] provide software methods for determining the scaling and wavelet 
functions from which the Daubechies wavelet filter coefficients can be determined. The 
results using MATLAB [25] are shown for the Daubechies-2 and Daubechies-20 wavelet 
(filter lengths of four and forty respectively) in Figures 4.11 and 4.12. Notice the improved 
regularity and smoothness as the filter order increases. 
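The spectral factorization step can be illustrated with the following minimal MATLAB sketch; it is not the implementation used here, but a small example under the assumption that K zeros are placed at z = -1 and the minimum-phase roots of P are retained. For K = 2 it reproduces the four-coefficient Daubechies-2 filter.

    K = 2;
    p = arrayfun(@(k) nchoosek(K-1+k, k), 0:K-1);     % P(y) coefficients from (4.41)
    c = zeros(1, 2*K-1);                              % Laurent coefficients of P((2-z-1/z)/4)
    for k = 0:K-1
        t = 1;
        for m = 1:k
            t = conv(t, [-1 2 -1]/4);                 % multiply by (2 - z - z^-1)/4
        end
        pad = K-1-k;
        c = c + p(k+1) * [zeros(1,pad), t, zeros(1,pad)];
    end
    r    = roots(c);                                  % 2K-2 roots in reciprocal pairs
    r_in = r(abs(r) < 1);                             % keep the minimum-phase (inside) roots
    h0   = real(poly([-ones(1,K), r_in.']));          % K zeros at z = -1 plus the retained roots
    h0   = h0 * sqrt(2) / sum(h0);                    % normalize so that H0(1) = sqrt(2)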


The DWT is implemented through a series of half-band highpass and lowpass filters 




of varying cutoff frequencies which analyze the signal at different frequency resolutions. The
filtering operations, which repeatedly halve the signal spectral content, determine the
frequency resolution. Following this filtering operation, the number of signal samples 
remains unchanged and consequently half the samples are redundant based on Nyquist’s 
sampling criterion. Thus, the filtering operation is followed by a downsampling operation 
to eliminate the unnecessary samples. Downsampling a signal by a factor of two corresponds 
to reducing the sampling rate by a factor of two which results in halving the time resolution. 
Thus each successive filtering and downsampling operation improves the frequency 
resolution by a factor of two while the time resolution suffers by a factor of two. 

The DWT of signal x[n] is determined as follows. The signal is passed through a 
half band digital lowpass filter with impulse response h[n] and a half band digital highpass 
filter with impulse response g[n]. The highpass filter is associated with the wavelet function 
while the lowpass filter is determined from the scaling function. This is equivalent to 


convolving the signal with the filter impulse response and is expressed as: 


y_{low}[n] = x[n] * h[n] = \sum_k x[k]\,h[n-k] = \sum_k h[k]\,x[n-k]
y_{high}[n] = x[n] * g[n] = \sum_k x[k]\,g[n-k] = \sum_k g[k]\,x[n-k].


The filtering process divides the frequency band of x[n] in half, forming two signals, y_{low}[n]
and y_{high}[n], which now have redundant information based on Nyquist's sampling criterion.
These signals are then decimated or subsampled by a factor of two without any loss of 


information. The combined filtering and subsampling process reduces the time resolution 




[Figure: the scaling function phi, the wavelet function psi, and the analysis and synthesis low-pass and high-pass filters for the Daubechies-2 wavelet.]

Figure 4.11: Daubechies-2 Scaling and Wavelet Functions [25].


[Figure: the scaling function phi, the wavelet function psi, and the analysis and synthesis low-pass and high-pass filters for the Daubechies-20 wavelet.]

Figure 4.12: Daubechies-20 Scaling and Wavelet Functions [25].




by a factor of two and doubles the scale which improves the frequency resolution by a factor 


of two. Mathematically this is expressed as 


y_{low}[k] = \sum_n x[n]\,h[2k-n]
y_{high}[k] = \sum_n x[n]\,g[2k-n].   (4.43)


The combined operation of filtering and subsampling is repeated to decompose the signal 
to some desired level which is limited by the length of the signal. At every level, filtering 
and subsampling operations produce data with half the number of samples (reducing the time 
resolution by a factor of two) and half the frequency spectrum bandwidth (which doubles the 
frequency resolution). The filterbank representation of the DWT depicted in Figure 4.13 can 
be efficiently implemented, and with a reasonable amount of computational effort most 
acoustic signals can be decomposed for denoising purposes. 
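A minimal MATLAB sketch of this analysis filter bank follows, assuming h0 and h1 are the lowpass and highpass analysis filters of the chosen wavelet (for example, Daubechies-20) and x is the data record; boundary handling is ignored for brevity.

    J = 3;                          % number of decomposition levels
    a = x;                          % start with the signal itself
    d = cell(1, J);                 % detail (wavelet) coefficients at each level
    for j = 1:J
        hi = conv(a, h1);  d{j} = hi(2:2:end);   % highpass filter, keep every other sample
        lo = conv(a, h0);  a    = lo(2:2:end);   % lowpass branch feeds the next level
    end
    % After J levels, a holds the approximation and d{1..J} the detail coefficients.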

The DWT coefficients resulting from the process described above provide a spectral 
analysis of the signal with a time resolution that varies depending on the level of 
decomposition. Large amplitude DWT coefficients indicate significant spectral content and 
the position of these coefficients within the DWT vector provides time localization. The 
time resolution is precise at high frequencies and gets worse at each successive level of 


decomposition (while the frequency resolution improves). 




[Figure: three-level analysis filter bank; at each stage the input is split by the highpass filter g[n] and the lowpass filter h[n], each followed by downsampling by two, producing the Level 1, Level 2, and Level 3 DWT coefficients.]

Figure 4.13: Filterbank Representation of the DWT [18].










V. DENOISING METHODS USING WAVELET THRESHOLDING 

David Donoho and his colleagues at Stanford University have led the way in applying 
wavelet techniques to the theory of statistics. A particularly relevant outcome of this work 
is a denoising technique termed “wavelet shrinkage” by Donoho and his co-developer Iain 
Johnstone [26]. The wavelet transform is applied to a noisy data set producing a set of 
coefficients which are then “shrunk” towards zero using a soft or hard statistical thresholding
method to select the appropriate coefficients which represent the desired signal. The 
resulting coefficients are then back transformed to the time domain to produce a denoised 
signal. A brief and informal explanation of the process follows. A more detailed argument 
can be found in many of Donoho’s writings [26,27,28]. 
A. WAVELET COEFFICIENT THRESHOLDING 

Assuming a suitable wavelet basis function is chosen, the DWT decomposition of 
a signal will compress the energy of the signal into a small number of large magnitude 
wavelet coefficients. Gaussian white noise in any one orthogonal basis is transformed by the 
DWT into white wavelet coefficients of small magnitude. This property of the DWT allows 
the suppression of noise by applying a threshold which retains wavelet coefficients 
representing the signal and removes low magnitude coefficients which represent noise. 

Assume a finite signal s(k) of length N is corrupted by zero-mean, additive white
Gaussian noise n(k) of unit variance, scaled by the noise standard deviation \sigma, leading to the noisy signal

x(k) = s(k) + \sigma\,n(k).   (5.1)


The objective is to recover the signal s(k) from the noisy observations x(k) using thresholding 




of the DWT coefficients. The DWT of x(k) is performed using (4.18) and the estimate \hat{s} is
determined by thresholding the individual wavelet coefficients.
The three-step method for recovery of the desired signal, s(k), from the noisy data,
x(k), follows:
(1) Perform a J-level DWT of the data, yielding noisy wavelet coefficients w_{j,k}, j = j_0,
..., J, k = 0, ..., 2^j - 1, where j is the scale or decomposition level and k is the position
index within the scale.


(2) Apply thresholding in the wavelet domain, using either hard-thresholding of the
coefficients, defined as:

\hat{w}_{j,k} = T_{hard}(w_{j,k}, \delta) = \begin{cases} w_{j,k}, & |w_{j,k}| > \delta \\ 0, & |w_{j,k}| \leq \delta \end{cases}   (5.2)

or soft-thresholding of the coefficients, defined as:

\hat{w}_{j,k} = T_{soft}(w_{j,k}, \delta) = \begin{cases} \mathrm{sign}(w_{j,k})(|w_{j,k}| - \delta), & |w_{j,k}| > \delta \\ 0, & |w_{j,k}| \leq \delta \end{cases}   (5.3)

The threshold, \delta, is usually determined in one of four ways, to be described below. (A sketch of this thresholding step follows the procedure below.)

(3) Perform the inverse DWT and obtain the signal estimate \hat{s}(k).
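The following is a minimal MATLAB sketch of the thresholding step in (5.2) and (5.3), assuming w is a vector of noisy wavelet coefficients at one scale and delta is the chosen threshold.

    hard  = @(w, delta) w .* (abs(w) > delta);              % hard thresholding (5.2)
    soft  = @(w, delta) sign(w) .* max(abs(w) - delta, 0);  % soft thresholding (5.3)
    w_hat = soft(w, delta);                                 % shrink the detail coefficients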

Thus, thresholding cancels the wavelet coefficients which are below a certain 
threshold value defined in terms of the noise standard deviation o. The artificial signals used 


in this thesis employ the basic model (5.1) with the standard deviation set at one. The signal 


is then scaled to provide the desired SNR. In practice the noise standard deviation must be
estimated. Standard statistical procedures estimate \sigma as the median absolute deviation of the
wavelet coefficients at the finest level J, divided by 0.6745 [8]. Results have shown that the
empirical wavelet coefficients at the finest scale are, with few exceptions, essentially pure
noise. Although subject to an upward bias due to the presence of some signal at the finest
scale, this robust estimate has produced reliable results [26]. A level-dependent
determination of \sigma is another alternative available if colored noise is suspected. All three
options for determining the noise standard deviation are available in the MATLAB Wavelet
Toolbox [25].
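A minimal MATLAB sketch of the robust noise estimate described above, assuming d1 holds the finest-scale detail coefficients of the noisy data:

    sigma_hat = median(abs(d1)) / 0.6745;   % robust estimate of the noise standard deviation
    % For a level-dependent estimate (colored noise), repeat this at each scale.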
B. THRESHOLD SELECTION 

Four commonly used threshold determination methods are discussed next: Universal 
threshold, Stein’s Unbiased Risk Estimator (SURE), Hybrid threshold, and the Minimax 
threshold. 

1. Universal Threshold 

The universal threshold is defined as

\delta_U = \sigma\sqrt{2\log N},   (5.4)

where \sigma is the standard deviation of the noise and N is the sample length. It uses a single
threshold for all detail wavelet coefficients and ensures that asymptotically all detail
coefficients are annihilated [29]. The universal threshold is a less conservative method
which removes noise effectively but may remove small details of the signal which lie in the


noise range [25]. It is easily implemented and is particularly well suited to minimal spectral 




content signals which have sparse wavelet coefficients [26]. One drawback involves signals 
of short duration (several hundred samples in length), in which case other methods may 
provide more accurate results in a MSE sense. 
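A minimal MATLAB sketch of (5.4), applied with the soft rule of (5.3); x is the data record, d the detail coefficients of one scale, and sigma_hat the noise estimate from the previous section (all names are illustrative).

    N       = length(x);                              % sample length
    delta_U = sigma_hat * sqrt(2*log(N));             % universal threshold (5.4)
    d_soft  = sign(d) .* max(abs(d) - delta_U, 0);    % soft-thresholded coefficients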

2. SURE Threshold

The SURE threshold, \delta_S, selects a threshold based on Charles Stein's work on
unbiased risk estimators [30]. First implemented by Donoho [27], this method minimizes
a risk function to determine an optimum threshold. The unbiased risk estimate used for soft 
thresholding is defined as follows: 

\mathrm{RISK}(X) = 1 + (X^2 - 2)\,I(|X| \leq \delta_S) + \delta_S^2\,I(|X| > \delta_S),   (5.5)
where X \sim N(\theta, 1), I(\cdot) is the indicator function, and \delta_S is the threshold [29]. Using the model
defined by (5.1) the threshold becomes 

\delta_S = \arg\min_{\delta}\sum_{k} \mathrm{RISK}\!\left(\frac{w_{j,k}}{\sigma}\right),   (5.6)

where the wavelet coefficients w_{j,k}, divided by the standard deviation, have replaced the
variable X in the unbiased risk estimate. This technique suffers from the sparse wavelet
coefficient problem mentioned for the universal threshold and is often combined with the
universal threshold in a hybrid threshold method.
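A minimal MATLAB sketch of the SURE selection in (5.5)-(5.6), assuming d holds the wavelet coefficients of one scale already divided by the noise estimate; candidate thresholds are taken at the sorted coefficient magnitudes, a common implementation choice.

    n     = numel(d);
    a     = sort(abs(d(:)')).^2;                           % sorted squared magnitudes
    risks = (n - 2*(1:n) + cumsum(a) + (n - (1:n)).*a)/n;  % unbiased risk for each candidate
    [~, imin]  = min(risks);
    delta_sure = sqrt(a(imin));                            % SURE threshold for this scale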
3. Hybrid Threshold 
The hybrid threshold method is a combination of the first two methods. Developed 
by Donoho [27], this method uses the SURE threshold unless a low SNR situation with 
sparse wavelet coefficients develops, in which case, the universal threshold method is 


utilized. 




4. Minimax Principle Threshold 

The minimax principle, often used in statistics to design estimators, uses a fixed
threshold, 6,,, selected to produce minimax performance in terms of the mean square error 
when compared to an ideal procedure [31]. Like the universal threshold, minimax uses a 
single threshold for all detail wavelet coefficients. The minimax threshold is a function of 
the sample size, the threshold type (soft or hard), and the oracle type (projection or 
shrinkage) and can be found in tabulated form in references [28] and [29]. MATLAB 
Wavelet Toolbox [25] uses the following expression which approximates the minimax 
threshold for the soft threshold nonlinearity, with comparison to a projection oracle: 

\delta_M(n) = 0.3936 + 0.1829\,(\log(n)/\log(2)),   (5.7)

where n is the sample length. 
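A minimal MATLAB sketch of (5.7) for a length-n record; as with the universal threshold, the value is assumed to be scaled by the noise estimate before it is applied to the detail coefficients.

    n        = 1024;
    delta_mm = 0.3936 + 0.1829*(log(n)/log(2));   % minimax approximation (5.7)
    delta    = sigma_hat * delta_mm;              % threshold actually applied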

5. Threshold Discussion

The four thresholding methods are compared in Figure 5.1, where the various wavelet
thresholding schemes are applied to the same noisy sinusoidal signal used earlier in Chapter
II. The SURE and Hybrid thresholding methods represent the signal as a smoothed sinusoid
after a four-level decomposition using the Daubechies-20 wavelet for the DWT. The
Universal and Minimax thresholding methods applied to a four-level decomposition do not
perform as well as expected, although this signal has minimal spectral content and therefore
a sparse representation in the wavelet domain, as seen in Figure 5.2. This is explained by
comparing Figures 5.2, 5.3, and 5.4 which show the approximation and detail coefficients 


before and after universal and Minimax thresholding. The Universal and Minimax methods 




apply a constant threshold for a given decomposition level. This results in poor denoising
performance at any scale other than the coarsest or highest scale, otherwise known as the
approximation. This is true because the high threshold value results in the signal as well as
the noise being removed by the thresholding, except for the approximation, which is not
thresholded in “wave shrink” methods. This effect is most noticeable at low SNR values
where the signal and noise wavelet coefficients approach the same magnitude. The SURE
and Hybrid thresholding methods employ scale-dependent thresholds which are more
effective in a low SNR environment, as demonstrated in Figures 5.5 and 5.6, which show the
wavelet coefficients after applying these two methods. 

The normalized spectral decomposition filters are pictured in Figure 5.7 for the 
Daubechies-20 wavelet. A MSE comparison of the four thresholding methods across the 
normalized frequency spectrum from 0 to 0.5 (Sampling Frequency = 1) is shown in Figures 
5.8 and 5.9. The poor performance of the Universal and Minimax methods at the lower SNR level
is clearly evident. Another significant phenomenon noticeable in these figures is the
improvement in normalized MSE with each successive scale and decomposition level. This 
occurs because with each successive scale there is less noise energy as represented by the 
wavelet coefficients, whereas the signal energy remains the same. The spectral bandwidth 
is reduced with each level of decomposition eliminating approximately half the remaining 
white noise which reduces the noise variance and improves the SNR at that particular scale. 
The energy difference between the signal and noise wavelet coefficients is greater at higher 


or coarser scales resulting in more effective removal of the noise by thresholding. 




A closer inspection of the threshold values chosen for a particular thresholding 
method reveals that the threshold is dependent on the number of decomposition levels even 
though the signal length of 1024 data points remains the same. This behavior is due to the
specific implementation employed in the MATLAB Wavelet Toolbox [25] which doesn’t 
keep the total number of wavelet coefficients constant. Threshold values for the Universal 


and Minimax methods are compared below for four levels of decomposition. 


Level    Wavelet Coefficients    Universal Threshold    Minimax Threshold
[most table entries are illegible in the source; the legible rows show Universal thresholds of 3.7220 and 3.7607, corresponding to the differing numbers of wavelet coefficients]


Since the threshold value for the Universal and Minimax methods is dependent on the 










number of wavelet coefficients, the threshold varies depending on the decomposition level 
resulting in independent MSE curves. The SURE and Hybrid methods calculate a different 
threshold for each scale allowing them to generally outperform the Universal and Minimax 
methods. This is particularly noticeable at the lower or finest scale where scale dependent 


SURE and Hybrid thresholding methods generate much better results. 




[Figure: the original and noisy sinusoidal signal (SNR = 0 dB, signal frequency = 0.1) followed by the denoised results for the Universal (MSE = 1.416), SURE (MSE = 0.2097), Hybrid (MSE = 0.1939), and Minimax (MSE = 0.6907) methods, plotted as amplitude versus time (sample number).]

Figure 5.1: Comparison of wavelet thresholding schemes; sinusoidal signal at Frequency = 0.1, Sampling Frequency F = 1, Original signal (dashed); Denoised signal (solid); Five-level DWT decomposition using Daubechies-20 wavelet.


[Figure: the approximation coefficients and the Level 4 through Level 1 detail coefficients of the noisy sinusoid.]

Figure 5.2: Approximation and detail wavelet coefficients resulting from five-level DWT decomposition of noisy sinusoidal signal, Frequency = 0.1 (F = 1), SNR = 0 dB, using Daubechies-20 wavelet.




[Figure: approximation coefficients and Level 4 through Level 1 detail coefficients after Universal soft thresholding; the threshold is 3.761 at every detail level.]

Figure 5.3: Approximation and detail wavelet coefficients after applying Universal soft thresholding. Threshold values indicated for each level or scale.


[Figure: approximation coefficients and Level 4 through Level 1 detail coefficients after Minimax soft thresholding; the threshold is 2.26 at every detail level.]

Figure 5.4: Approximation and detail wavelet coefficients after applying Minimax soft thresholding. Threshold values indicated for each level or scale.




[Figure: approximation coefficients and detail coefficients after SURE soft thresholding; thresholds of 1.851 (Level 4), 0.1886 (Level 3), 2.26 (Level 2), and 1.682 (Level 1) are indicated.]

Figure 5.5: Approximation and detail wavelet coefficients after applying SURE soft thresholding. Threshold values indicated for each level or scale.




[Figure: approximation coefficients and detail coefficients after Hybrid soft thresholding; thresholds of 3.035 (Level 4), 0.1886 (Level 3), and 3.362 (Level 2) are indicated; the Level 1 value is illegible.]

Figure 5.6: Approximation and detail wavelet coefficients after applying Hybrid soft thresholding. Threshold values indicated at each level or scale.


[Figure: normalized gain versus normalized frequency for the four-level spectral decomposition filters.]

Figure 5.7: Four-level Spectral Decomposition Filters Using Daubechies-20 Wavelet.







[Figure: mean square error versus normalized frequency (0 to 0.5) for the Universal, SURE, Hybrid, and Minimax soft thresholding methods.]

Figure 5.8: Soft thresholding methods applied to noisy sinusoidal signal: SNR = 10 dB. Four level DWT decomposition using Daubechies-20 wavelet. MSE curves for one through four levels (Lvl) of decomposition or scale pictured. Ten trial average.




[Figure: mean square error versus normalized frequency (0 to 0.5) for the Universal, SURE, Hybrid, and Minimax soft thresholding methods.]

Figure 5.9: Soft thresholding methods applied to noisy sinusoidal signal: SNR = 0 dB. Four level DWT decomposition using Daubechies-20 wavelet. MSE curves for one through four levels (Lvl) of decomposition or scale pictured. Ten trial average.




VI. FIR WIENER WAVELET FILTER 

Wiener filtering and wavelet thresholding are both effective means of denoising 
acoustic signals. In this chapter the two methods are combined to form an enhanced FIR 
filter which will be referred to as the Wiener Wavelet filter. This method of denoising 
utilizes the discrete wavelet transform to represent the signal as a series of wavelet expansion 
coefficients. Wavelet analysis has several distinct properties which make it an ideal method 
of signal decomposition, analysis, and reconstruction. First, wavelets are an unconditional 
basis for a wide variety of signals which means that a signal can be represented by a small 
number of expansion coefficients, d_j, in (4.20), since the magnitudes of these coefficients
drop off rapidly for a wide class of signals [32]. Second, wavelet analysis provides a multi- 
resolution analysis in time and frequency allowing a more accurate description and 
separation of signal and noise [14]. Third, there are a variety of wavelet functions which 
makes wavelets adaptable to represent signals of differing characteristics. Finally, wavelet 
analysis and particularly the DWT is well suited to implementation on a digital computer 
since it involves only multiplications and additions and not derivatives or integrals. 

Once the signal has been transformed into the wavelet domain, the wavelet expansion 
coefficients are filtered using the optimal Wiener filter. In this process the wavelet 
coefficients are being shrunk towards zero, not by a nonlinear thresholding method but by 
an optimal linear filtering process. The shrunk coefficients are then inverse transformed to 
reconstruct the denoised signal. The Wiener filtering process introduces distortion and 


aliasing which affects the ability to perfectly reconstruct the denoised signal. These effects 




will be discussed after introducing the necessary multirate system and filter bank theory. 
A. MULTIRATE SYSTEMS ANALYSIS 

The application of multirate systems and filter banks in signal analysis and 
reconstruction was discussed in Chapter IV. The necessary conditions for perfect 
reconstruction were derived. The optimal Wiener filter will now be applied following 
decimation and prior to interpolation in each stage of a filter bank. The two-channel filter 


bank is illustrated below in Figure 6.1. 





[Figure: two-channel filter bank with input, analysis filters, decimators, Wiener filters, interpolators, synthesis filters, and output.]

Figure 6.1: FIR Wiener Wavelet Filter bank.


The Wiener filters which have been inserted in the filter bank above must be provided 


with the statistical properties of the noise. This is accomplished by isolating a noise-only 




portion of the noisy signal, x(n). A DWT of the noise, w(n), is performed using the same
analysis filter bank structure as was used for the noisy signal. The Wiener-Hopf equations
(3.6) to (3.8) are applied to the noisy signal wavelet coefficients and the noise wavelet
coefficients to produce the filter coefficients for the Wiener filters W_0 and W_1. This two-
channel filter bank is repeated to produce a multi-level FIR Wiener wavelet filter bank.
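A minimal MATLAB sketch of the Wiener filter design in one channel follows, assuming y holds the noisy-signal wavelet coefficients of that channel, z the wavelet coefficients of the isolated noise segment, and p the filter order. With signal and noise independent, the cross-correlation is approximated by the difference of the two autocorrelation estimates, and the Wiener-Hopf equations reduce to a Toeplitz linear system; the names and the order are illustrative only.

    p  = 8;                                            % Wiener filter order (illustrative)
    ry = xcorr(y, p-1, 'biased');  ry = ry(p:end);     % autocorrelation lags 0..p-1
    rz = xcorr(z, p-1, 'biased');  rz = rz(p:end);
    R  = toeplitz(ry);                                 % observation autocorrelation matrix
    rd = ry(:) - rz(:);                                % cross-correlation estimate
    w  = R \ rd;                                       % FIR Wiener filter coefficients
    y_filtered = filter(w, 1, y);                      % denoised wavelet coefficients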

A four-level decomposition FIR Wiener wavelet filter bank is applied to the noisy 
sinusoidal signal from previous chapters and the results are displayed in Figure 6.2. The 
normalized analysis filters for the Daubechies-20 wavelet used in this analysis are pictured 
in Figure 6.3. Figures 6.4 and 6.5 show the MSE performance for this filter bank with the
Wiener filter order varying from 4 to 20. In Figure 6.4 an independent white Gaussian noise 
sample with a zero mean was provided to the Wiener filter while in Figure 6.5 the actual 
noise sample used to produce the noisy signal was provided. The MSE difference between 
the independent and actual noise cases becomes greater at lower SNR levels. A common 
feature of all wavelet based techniques is the improved MSE performance with successive 
levels of decomposition. This was observed in wavelet thresholding and will occur in the 
IIR Wiener wavelet methods described in the next chapter. The Wiener filters are applied on
a level dependent basis, which is different from universal and minimax wavelet thresholding 
methods where the threshold was applied in a global manner across all of the scales of a 
particular level of decomposition resulting in independent MSE curves. 

Another characteristic observed in wavelet based denoising methods is the degraded 


MSE performance in the transition regions of the analysis and synthesis filters. The DWT 




filter bank, as originally constructed, met the perfect reconstruction conditions. 
Consequently, aliasing and distortion effects which would normally occur in the transition 
regions of the filters are canceled. Denoising by filtering or thresholding the DWT
coefficients between the decimation and interpolation operations results in a filter bank 
which no longer meets the perfect reconstruction conditions. This effect is most noticeable 
in the filter transition regions because this is where the perfect reconstruction property was 
so essential in canceling the effects of aliasing and distortion. This unfortunate result can be 
minimized by choosing wavelet analysis and synthesis filters of longer length to reduce the 
transition region. Some detrimental effects occur as a result, including more computational 
effort and longer filter transient periods. The transient period, during which the filter output 
is unreliable, is equal to the filter length. As the filter length increases relative to the signal 


length the transient period may become significant. 




[Figure: (a) the original (dashed) and noisy sinusoidal signal, SNR = 0 dB, signal frequency = 0.1; (b) the original (dashed) and FIR Wiener wavelet filtered signal, MSE = 0.06392, plotted against time (sample number).]

Figure 6.2: FIR Wiener Wavelet Filter (a) Original (dashed) and Noisy Sinusoidal Signal, Frequency = 0.1, Sampling Frequency = 1. (b) Original (dashed) and filtered signal. Three level decomposition using Daubechies-20 wavelet. Filter Order = 8.


[Figure: normalized gain versus normalized frequency for the four-level spectral decomposition using the Daubechies-20 wavelet.]

Figure 6.3: Gain normalized analysis filters using Daubechies-20 wavelet.







[Figure: mean square error versus normalized frequency for FIR Wiener wavelet filters of order 4, 8, 12, and 20.]

Figure 6.4: FIR Wiener Wavelet filter applied to noisy sinusoidal signal: SNR = 0 dB. Four level DWT decomposition using Daubechies-20 wavelet. Independent noise sample. Ten trial average.


[Figure: mean square error versus normalized frequency for FIR Wiener wavelet filters of order 4, 8, 12, and 20.]

Figure 6.5: FIR Wiener Wavelet filter applied to noisy sinusoidal signal: SNR = 0 dB. Four level DWT decomposition using Daubechies-20 wavelet. Actual noise sample. Ten trial average.




B. ALIAS CANCELLATION 
The conditions for perfect reconstruction were specified in Chapter IV. A two-channel
filter bank was analyzed in the Z-domain with the following result:
\hat{X}(z) = \frac{1}{2}\left[F_0(z)H_0(z) + F_1(z)H_1(z)\right]X(z) + \frac{1}{2}\left[F_0(z)H_0(-z) + F_1(z)H_1(-z)\right]X(-z).   (6.1)


Inserting the Wiener filters to denoise the data in the wavelet domain alters (6.1) as follows: 


\hat{X}(z) = \frac{1}{2}\left[F_0(z)W_0(z^2)H_0(z) + F_1(z)W_1(z^2)H_1(z)\right]X(z)
          + \frac{1}{2}\left[F_0(z)W_0(z^2)H_0(-z) + F_1(z)W_1(z^2)H_1(-z)\right]X(-z).   (6.2)
Alias cancellation is obtained by choosing the following synthesis filters, 
F_0(z) = W_1(z^2)H_1(-z)
F_1(z) = -W_0(z^2)H_0(-z),   (6.3)
which cancel the X(-z) alias term in (6.2) leading to 
\hat{X}(z) = \frac{1}{2}\,W_0(z^2)W_1(z^2)\left[H_0(z)H_1(-z) - H_0(-z)H_1(z)\right]X(z).   (6.4)


As before, the alternating flip relationship between the analysis filters is chosen to eliminate
distortion effects:
H_1(z) = -z^{-N}H_0(-z^{-1}).   (6.5)

Substituting (6.5) into (6.4) results in the following Z-domain expression for the
reconstructed signal in terms of the original signal,

\hat{X}(z) = W_0(z^2)W_1(z^2)\,z^{-N}X(z).   (6.6)
Equation (6.6) is applied to the two-level decomposition pictured in Figure 6.6 with the
following result for the reconstructed Z-domain signal,

\hat{X}(z) = W_1(z^2)W_2(z^4)W_3(z^4)\,z^{-l}X(z).   (6.7)




Equations (6.6) and (6.7) can be generalized for a J-level decomposition as follows,

\hat{X}(z) = W_0(z^{2^J})\,z^{-l}X(z)\prod_{j=1}^{J} W_j(z^{2^j}).   (6.8)
The reconstructed signal is now a function of a delay determined by filter length N, and the 
product of a series of alias cancellation filters given in (6.8). These alias cancellation filters 
are nothing more than interpolated Wiener filters which are multiplied in the Z-domain or
convolved in the time domain. Unfortunately, simulations show that the alias cancellation
filters produce significant phase and amplitude distortion which is more detrimental than the
aliasing effects they are designed to remove.

[Figure: two-level FIR Wiener wavelet filter bank showing the analysis filters, Wiener filters, synthesis filters, and the resulting alias cancellation filters.]

Figure 6.6: Two-level FIR Wiener Wavelet Filter Bank with Alias Cancellation.
The FIR Wiener Wavelet filter with alias cancellation is applied to the same noisy 


sinusoidal signal as used before. The denoised signal is poorly reconstructed based on an 


evaluation of the MSE alone. The MSE values are high with erratic results across the 




spectrum. A comparison of the power spectral densities of the noisy signal and the denoised 
signal for two-level FIR Wiener wavelet filters with and without alias cancellation is shown
in Figure 6.7. Figures 6.7 (a) and (b) show the power spectral densities of the noisy and 
denoised signal without alias cancellation. The alias cancellation filter with frequency 
response as shown in Figure 6.7 (c) is used to produce the denoised signal shown in Figure 
6.7 (d) which has been normalized to remove the effects of the gain attenuation. Although 
the aliasing frequency at a normalized frequency of 0.2 has been cancelled, significant 
distortion is occurring at this same frequency as well as at normalized frequencies of 0.05 and
0.45. The average gain attenuation for a two-level FIR Wiener wavelet decomposition with 
alias cancellation (but without energy normalization of signal spectra) is 41.5 dB measured 
at the signal frequency which is quite significant when compared to 0.49 dB for the same 
filter without alias cancellation (Attenuation values for signal frequencies from 0.01 to 0.49 
were computed and then averaged). 

The FIR Wiener filter with alias cancellation might still be considered a suitable 
method if the denoised signal can be normalized. A final comparison is made between the 
two filtering methods based on the difference of two spectra, determined from energy 
normalized signals, on a log magnitude versus frequency scale defined by: 

V(\omega) = \log S(\omega) - \log \hat{S}(\omega),   (6.9)

where S(\omega) is the energy-normalized noise-free signal spectrum and \hat{S}(\omega) is the energy-
normalized denoised signal spectrum. The Mean Absolute Log Spectrum Distortion (MALSD)


is defined in reference [33] as: 




\mathrm{MALSD} = \frac{1}{\pi}\int_0^{\pi} |V(\omega)|\,d\omega.   (6.10)

Expressed in terms of a summation of discrete frequency terms, (6.10) becomes

\mathrm{MALSD} = \frac{1}{N}\sum_{i=0}^{N-1} |V(\pi i/N)|,   (6.11)

where N is the number of frequency samples taken between zero and the half-sampling
frequency, \pi. When applied to two-level decomposition filter banks with and without alias


cancellation, the following results were obtained (ten trial average): 


FIR Wiener Wavelet Filter        MALSD
Without Alias Cancellation       1703
With Alias Cancellation          2584
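A minimal MATLAB sketch of (6.9)-(6.11), assuming s and shat are the noise-free and denoised records and N frequency samples are taken between zero and the half-sampling frequency:

    N    = 512;
    s    = s(:)    / norm(s);                     % energy normalization
    shat = shat(:) / norm(shat);
    S    = abs(fft(s,    2*N)).^2;  S    = S(1:N);
    Shat = abs(fft(shat, 2*N)).^2;  Shat = Shat(1:N);
    V     = log(S) - log(Shat);                   % log spectral difference (6.9)
    malsd = mean(abs(V));                         % mean absolute log spectrum distortion (6.11)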


The FIR Wiener Wavelet filter without alias cancellation exhibits a smaller magnitude 
MALSD indicating the addition of alias cancellation filters is detrimental to the denoising 
performance of this filter. For this reason any further results obtained using a FIR Wiener 


Wavelet filter will not include alias cancellation filters. 




[Figure: power spectral densities, amplitude (dB) versus normalized frequency, for (a) the noisy signal (SNR = 0 dB), (b) the FIR Wiener wavelet result without alias cancellation, (c) the alias cancellation filter frequency response, and (d) the FIR Wiener wavelet result with alias cancellation.]

Figure 6.7: Power spectral densities of: (a) noisy sinusoidal signal, Frequency = 0.3, F_s = 1, SNR = 0 dB, (b) denoised result as determined with no alias cancellation filter applied, (d) denoised result with alias cancellation filter in (c) applied.




VII. IIR WIENER WAVELET FILTER

The denoising methods developed thus far have been restricted to the FIR Wiener
filter, wavelet thresholding methods, and a combination of the two. This chapter develops
the noncausal IIR Wiener filter and then introduces its application in the wavelet domain.
Next, the results are compared to previously discussed methods. 
A. IIR WIENER FILTER 

The IIR Wiener filter is recursive in form and requires fewer parameters to determine
the optimal filter weights than a comparable FIR form of the filter [12]. The causal IIR filter
problem was solved (originally in the continuous case) by Wiener in the transform domain 
using spectral factorization methods [11]. This section describes the transform domain 
solution of the noncausal Wiener-Hopf equation. The noncausal solution allows the filter 
to “look ahead” of real time and thus operates in an off-line application. This advantage 
results in improved performance in the mean-square error sense over the causal FIR Wiener 
filter. 


The noncausal IIR Wiener filter estimate of the desired signal is of the form

\hat{s}(n) = y(n) = \sum_{k=-\infty}^{\infty} h^*(k)\,x(n-k),   (7.1)


which differs from (3.1) in two respects: the upper limit of the sum extends to +\infty, and
the impulse response has nonzero values for n < 0. Application of the orthogonality principle
yields the noncausal IIR form of the Wiener-Hopf equation

\sum_{k=-\infty}^{\infty} R_x(i-k)\,h^*(k) = R_{dx}(i), \qquad -\infty < i < \infty,   (7.2)




which can be written in convolutional form as: 


R_x(i) * h^*(i) = R_{dx}(i),   (7.3)

with mean-square error

\sigma_e^2 = R_d(0) - \sum_{k=-\infty}^{\infty} h^*(k)\,R_{dx}(k).   (7.4)


After conjugating equation (7.3), the left-hand side is a discrete convolution of the filter 
coefficients h(k) and the auto-correlation function R_x(k). Expressing the Wiener-Hopf


equation in the frequency domain leads to: 


S_x(\omega)H(\omega) = S_{dx}(\omega),   (7.5)

where
H(\omega) = \sum_n h(n)e^{-j\omega n}, \quad -\pi \leq \omega \leq \pi,
S_x(\omega) = \sum_n R_x(n)e^{-j\omega n}, \quad -\pi \leq \omega \leq \pi,   (7.6)
S_{dx}(\omega) = \sum_n R_{dx}(n)e^{-j\omega n}, \quad -\pi \leq \omega \leq \pi.
Thus the filter transfer function H(w) can be expressed as: 


H(\omega) = \frac{S_{dx}(\omega)}{S_x(\omega)}.   (7.7)
Assuming that the signal and noise are independent and have a zero mean results in: 
S_{dx}(\omega) = S_s(\omega) \quad\text{and}\quad S_x(\omega) = S_s(\omega) + S_n(\omega),   (7.8)

allowing (7.7) to be expressed as

H_{nc}(\omega) = \frac{S_x(\omega) - S_n(\omega)}{S_x(\omega)},   (7.9)




which may be rewritten as 
H_{nc}(\omega) = \frac{S_s(\omega)/S_n(\omega)}{S_s(\omega)/S_n(\omega) + 1}.   (7.10)
The expression S_s(\omega)/S_n(\omega) is a measure of the signal-to-noise power ratio at
frequency \omega. When S_s(\omega)/S_n(\omega) \gg 1 the filter coefficient magnitude, H_{nc}(\omega), approaches
unity, and when S_s(\omega)/S_n(\omega) \ll 1 the filter coefficient magnitude approaches zero. Between
these two limits the filter coefficients are chosen to pass the signal and eliminate noise with
minimal distortion [34].
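A minimal MATLAB sketch of (7.9)-(7.10) applied in the frequency domain, assuming Px and Pn are power-spectrum estimates of the noisy data and of the noise on the same FFT grid as x (the FFT implementation makes the filtering circular, which is adequate for illustration):

    H    = max(Px - Pn, 0) ./ Px;           % noncausal Wiener gain, clipped at zero
    shat = real(ifft(H(:) .* fft(x(:))));   % zero-phase filtering of the observations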
B. IIR WIENER FILTER APPLIED TO THE WAVELET DOMAIN 
A key property which underlies the success of wavelets is that they form an 
unconditional basis for a wide variety of signal classes [35]. Wavelet expansions can 
therefore concentrate the signal energy into a relatively small number of large coefficients. 
This signal compaction property makes the discrete wavelet transform an ideal tool in 
constructing an empirical IIR Wiener filter. The result will be referred to as the IIR Wiener 
Wavelet filter. 
The original time-series model given by 
x(n) = s(n) + w(n), \quad n = 1, 2, \ldots, N   (7.11)
becomes, in the wavelet domain, 


y_{j,k} = \theta_{j,k} + z_{j,k}, \quad k = 1, 2, \ldots, N,   (7.12)


where the terms are the DWT coefficients found from the following 
x(n) = \sum_{j,k} y_{j,k}\,2^{j/2}\psi(2^{j}n - k)
s(n) = \sum_{j,k} \theta_{j,k}\,2^{j/2}\psi(2^{j}n - k)   (7.13)
w(n) = \sum_{j,k} z_{j,k}\,2^{j/2}\psi(2^{j}n - k).




The indices j and k refer to level of DWT decomposition and signal sample number 
respectively. 

The signal estimate \hat{s} is determined by applying the inverse wavelet transform to the
coefficients \hat{\theta}_{j,k}. The noise used to produce the noisy signal is provided in order to
determine the optimal Wiener filter coefficients. Denoising an actual ocean acoustic signal
would require an isolated noise-only segment or some other noise estimate in order to
determine the appropriate noise statistical characteristics. A noise sample or estimate which
is not highly correlated with the actual noise will degrade the denoising capability of this
filter. The observation sequence wavelet coefficients, y_{j,k}, and noise wavelet coefficients,


z_{j,k}, are provided, allowing (7.9) to be rewritten as follows for a multiple-level DWT:


h_j = \frac{\sum_{k=1}^{N2^{-j}} y_{j,k}^2 - \sum_{k=1}^{N2^{-j}} z_{j,k}^2}{\sum_{k=1}^{N2^{-j}} y_{j,k}^2}, \qquad j = 1, 2, \ldots, J,   (7.14)


where N is the signal length, j is the level of the DWT decomposition, and J represents the
maximum decomposition level. This filter is applied to the observation sequence wavelet 
coefficients at each decomposition level to determine the signal estimate coefficients as 


follows 


\hat{\theta}_{j,k} = h_j\,y_{j,k}, \qquad k = 1, 2, \ldots, N2^{-j}.   (7.15)
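A minimal MATLAB sketch of (7.14)-(7.15), assuming Y{j} and Z{j} are cell arrays holding the level-j detail coefficients of the noisy signal and of the noise estimate (for example, produced by the analysis loop sketched in Chapter IV):

    J    = numel(Y);
    Shat = cell(1, J);
    for j = 1:J
        hj      = (sum(Y{j}.^2) - sum(Z{j}.^2)) / sum(Y{j}.^2);  % per-scale gain (7.14)
        hj      = max(hj, 0);                                    % guard against a negative estimate
        Shat{j} = hj * Y{j};                                     % shrunk coefficients (7.15)
    end
    % The inverse DWT of Shat, together with the approximation, yields the estimate.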




The IIR Wiener Wavelet method is implemented in a one-level filter bank structure, 
as illustrated in Figure 7.1. This is the same filter bank structure as that used for wavelet 
thresholding and FIR Wiener wavelet filtering, except that now at each scale the noisy
wavelet coefficients are scaled by the filter gains h_0 and h_1. The structure may
be expanded to additional levels or scales, limited only by the signal length. This denoising
method is applied to the noisy sinusoid used in previous methods with results as shown in
Figure 7.2. The filter represents the denoised signal as a smoothed sinusoid after a three 


level decomposition using the Daubechies-20 wavelet. The Daubechies-20 analysis filters 





[Figure: one-level filter bank with input, analysis filters, decimators, IIR Wiener filters, interpolators, synthesis filters, and output.]

Figure 7.1: One-level noncausal IIR Wiener wavelet filter bank.


of length forty are pictured in Figure 7.3 with gain normalized to the maximum gain filter 




in order to show the spectral bands associated with the scales. This normalization is only 
used to allow a better spectral view of the filters and is not representative of the actual filter 


gains used in the DWT. 


[Figure: (a) the original (dashed) and noisy sinusoidal signal, SNR = 0 dB, signal frequency = 0.1; (b) the original (dashed) and filtered signal, plotted as amplitude versus time (sample number).]

Figure 7.2: IIR Wiener Wavelet Filter (a) Original (dashed) and Noisy Sinusoidal Signal, Frequency = 0.1, F = 1. (b) Original (dashed) and filtered signal. Three level decomposition using Daubechies-20 wavelet.


A ten trial average normalized MSE is plotted across the normalized frequency range 
of 0 to 0.5 for a sampling frequency of 1 as shown in Figures 7.4 and 7.5. An independent 
noise sample of white Gaussian noise with a zero mean was used in Figure 7.4 while the 
actual noise sample was used in Figure 7.5. The MSE is reduced with each successive scale 
or decomposition level. This occurs because with each successive scale there is less noise 
energy as represented by the wavelet coefficients, whereas the signal energy remains the 
same. The spectral bandwidth is reduced with each level of decomposition eliminating 


approximately half the remaining white noise which reduces the noise variance and improves 




the SNR at that particular scale. This leads to an IIR Wiener filter which is more effective
at removing noise as demonstrated by the improved normalized MSE.

The IIR Wiener filter is a level-dependent filter, which means that increasing the level
of decomposition will only improve the MSE performance for the spectral band associated 
with the particular analysis filter. This concept is better understood by comparing Figures 
7.3 and 7.4. Notice the degraded MSE that occurs in the analysis filter overlap regions. The 
spectral range over which this degradation occurs can be reduced by choosing analysis filters 
with smaller transition regions. Higher order filters will accomplish this at the expense of 
greater computational effort. The distortion can never be completely removed since the 
perfect reconstruction property of the filter bank has not been met due to the insertion of the 


IIR Wiener filter. 


[Figure: normalized gain versus normalized frequency for the four-level spectral decomposition using the Daubechies-20 wavelet.]

Figure 7.3: Normalized Spectral Decomposition Filters Used For IIR Wiener Wavelet Denoising (Daubechies-20 Wavelet).




[Figure: mean square error versus normalized frequency for the IIR Wiener wavelet filter, SNR = 0 dB.]

Figure 7.4: Mean-square error comparison for four level decomposition of noisy sinusoid using Daubechies-20 wavelet. Independent noise sample. Average of 10 trials.


[Figure: mean square error versus normalized frequency for the IIR Wiener wavelet filter, SNR = 0 dB.]

Figure 7.5: Mean-square error comparison for four level decomposition of noisy sinusoid using Daubechies-20 wavelet. Actual noise sample. Average of 10 trials.




C. WIENERSHRINK

The IIR Wiener wavelet filter just discussed requires an isolated noise segment in
order to determine the statistical noise characteristics necessary to generate a Wiener filter.
WienerShrink, developed by Ghael, Sayeed, and Baraniuk of Rice University [36], is a
wavelet-based Wiener filtering technique which differs from previous methods in this regard.
Initially, a wavelet thresholding technique similar to those discussed in Chapter V is applied
to the noisy signal. The resulting denoised signal, referred to as the pilot signal, is then
wavelet transformed a second time and used to construct an IIR Wiener filter. This wavelet-
based empirical Wiener filtering method is described by the block diagram shown in Figure
7.6.


The noisy signal as represented by (7.11) is wavelet transformed by the DWT





[Figure: block diagram of the WienerShrink method: DWT W_1, thresholding block H, inverse DWT W_1^{-1} producing the pilot signal, second DWT W_2, empirical Wiener filter, and inverse DWT W_2^{-1} producing the final estimate.]

Figure 7.6: Wavelet-based empirical Wiener filtering [25].




represented by W_1. Any of the four wavelet thresholding methods, represented by block H,
is applied to the wavelet coefficients y^{(1)}_{j,k} to produce the wavelet coefficient signal estimate
\hat{\theta}^{(1)}_{j,k}. This is then transformed by the inverse DWT, represented by W_1^{-1}, to form the signal
estimate \hat{s}_1. The signal estimate is then wavelet transformed again in block W_2 to form the
wavelet coefficient signal estimate \hat{\theta}^{(2)}_{j,k}. This is used to design an empirical IIR Wiener
filter given by

h_j = \frac{\sum_{k=1}^{N2^{-j}} (\hat{\theta}^{(2)}_{j,k})^2}{\sum_{k=1}^{N2^{-j}} (y^{(2)}_{j,k})^2}, \qquad j = 1, 2, \ldots, J,   (7.16)

where N is the signal length, j is the level of the DWT decomposition, and J represents the
maximum decomposition level. This empirical IIR Wiener filter is the ratio of the energy
of the estimated noise-free signal wavelet coefficients for a particular scale divided by the
energy of the noisy signal wavelet coefficients for the same scale. The IIR Wiener filter is
applied to a DWT of the original noisy signal, y^{(2)}_{j,k}, formed by W_2, to produce the wavelet
coefficient signal estimate \hat{\theta}_{j,k}. An inverse DWT, represented by W_2^{-1}, is applied to produce
the final signal estimate \hat{s}. A Daubechies-8 wavelet is used for the DWT in block W_1 while
a Daubechies-16 wavelet is used for the DWT in blocks W_2 and W_2^{-1}.
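A minimal MATLAB sketch of the WienerShrink procedure as described above; dwt_bank and idwt_bank are hypothetical helper functions implementing the analysis and synthesis loops sketched in Chapter IV, h8/g8 and h16/g16 denote Daubechies-8 and Daubechies-16 filter pairs, and delta is a threshold chosen by one of the methods of Chapter V. All names are illustrative assumptions, not the original implementation.

    [Yp, Ap] = dwt_bank(x, h8,  g8,  J);         % W1: DWT of the noisy data
    for j = 1:J                                  % H: hard threshold the pilot coefficients
        Yp{j} = Yp{j} .* (abs(Yp{j}) > delta);
    end
    pilot    = idwt_bank(Yp, Ap, h8, g8);        % W1^-1: pilot (reference) signal
    [Tp, ~]  = dwt_bank(pilot, h16, g16, J);     % W2: DWT of the pilot signal
    [Y2, A2] = dwt_bank(x,     h16, g16, J);     % W2: DWT of the original noisy signal
    for j = 1:J
        hj    = sum(Tp{j}.^2) / sum(Y2{j}.^2);   % empirical Wiener gain (7.16)
        Y2{j} = hj * Y2{j};
    end
    shat = idwt_bank(Y2, A2, h16, g16);          % W2^-1: final signal estimate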

Ghael et al provide the following explanation for the improved performance of their 
product compared to traditional thresholding methods [36]. The goal of any wavelet based 
denoising technique is to determine the noise-free signal wavelet coefficients from the noisy 


signal wavelet coefficients. The noise-free wavelet coefficients consist of a number of 




trustworthy coefficients which are most assuredly due to the signal and then a number of less 
trustworthy coefficients which are more difficult to determine due to the presence of noise 
coefficients. WienerShrink uses the initial thresholding method to determine the trustworthy 
coefficients given by ee The untrustworthy or dubious coefficients are then predicted by 
the IIR Wiener filter formed from the trustworthy coefficients. Ghael et al indicate that 
classical wavelet thresholding methods are overly conservative in thresholding the 
coefficients resulting in the retention of only the trustworthy coefficients. 

WienerShrink is applied to the same noisy sinusoidal signal used previously with the 
MSE results displayed in Figure 7.7 for a SNR of 0 dB and Figure 7.8 for a SNR of 10 dB.
The Heuristic SURE and SURE methods of thresholding were chosen due to their superior 
performance over minimax and universal thresholding methods at lower SNR levels. The 
hard threshold option outperforms the soft threshold option in some areas, especially at the 
lower SNR and the lowest scale. The difference is not substantial allowing either 
thresholding option to be utilized. A comparison of Figures 7.4 and 7.7 reveals that the 
minimum MSE obtained by both the IIR Wiener wavelet and WienerShrink at a particular
scale are nearly identical. The IIR Wiener wavelet exhibits superior MSE performance in
the finer scales for a particular decomposition level. For example, the single-level or scale
decomposition for the IIR Wiener wavelet produces an average MSE value of 0.37 in the lowest
scale (normalized frequency of 0.25 to 0.5) while the best WienerShrink method for a single- 
level or scale decomposition with Heuristic SURE hard thresholding achieves an average 


MSE value of only 0.6 at the same scale. 




If the filter bank can be adapted by changing the level of decomposition (alternatively, the 
number of scales) to confine the signal of interest to the highest scale then this difference 
would have little effect. Otherwise, the IIR Wiener wavelet filter may be preferable to
WienerShrink. Thus the choice becomes a signal-dependent decision. A significant
improvement in MSE performance for WienerShrink is achieved when the SNR is increased
to 10 dB (see Figure 7.8). This difference is particularly noticeable for denoising methods 
which employ wavelet thresholding. 

Comparing wavelet thresholding methods (Figure 5.9) to WienerShrink (Figure 7.7) 
reveals minimal difference between the two when applied to a noisy sinusoidal signal. 
WienerShrink is slightly better in terms of MSE performance. In Chapter VIII signals with 
more interesting characteristics will be denoised using the various methods and the results 


compared. 




[Figure: mean square error versus normalized frequency for WienerShrink with Heuristic SURE soft, Heuristic SURE hard, SURE soft, and SURE hard thresholding.]

Figure 7.7: WienerShrink applied to noisy sinusoidal signal: SNR = 0 dB. Four level DWT decomposition using Daubechies-8 to form pilot signal and Daubechies-16 for IIR filtering. Heuristic SURE and SURE thresholds used with soft and hard options.




[Figure: mean square error versus normalized frequency for WienerShrink with Heuristic SURE soft, Heuristic SURE hard, SURE soft, and SURE hard thresholding.]

Figure 7.8: WienerShrink applied to noisy sinusoidal signal: SNR = 10 dB. Four level DWT decomposition using Daubechies-8 to form pilot signal and Daubechies-16 for IIR filtering. Heuristic SURE and SURE thresholds used with soft and hard options.




VIII. COMPARISON OF DENOISING METHODS 
Various Wiener and wavelet filtering methods have been presented. Thus far the 
ability to denoise a noisy sinusoidal signal has been the basis for comparison between the 
denoising methods. In this chapter the denoising methods are applied to four synthetic
signals which are commonly used as benchmarks for comparison because of their 
characteristics. These signals were originally chosen by the producers of Wavelab at the 
Stanford University Statistics Department [37]. The noise-free signals are shown below in 


Figure 8.1 and are referred to as Doppler, HeaviSine, Bumps, and Blocks. 


[Figure: the four noise-free test signals: Doppler, HeaviSine, Bumps, and Blocks.]

Figure 8.1: Noise-free test signals: Doppler, HeaviSine, Bumps, and Blocks.




All of the denoising methods discussed in this thesis are applied to the four test 
signals and the average normalized MSE values are determined for twenty trials. The results 
are tabulated in the Appendix. Graphs depicting the MSE performance of the various 
methods are displayed as Figures 8.2 to 8.8. All methods, except for the Wiener filter, are 
compared in Figures 8.9 and 8.10. All Figures depicting the performance of Wiener wavelet 
denoising methods show two different assumptions concerning the noise estimation. Figures 
8.4, 8.6, and 8.9 assume an ideal situation in which the exact noise sample that produced the 
noisy signal is supplied to the Wiener filter. In Figures 8.5, 8.7, and 8.10, an independent 
white Gaussian noise source with standard deviation equal to one and a zero mean is 
provided to the Wiener filter. The second option is realistic and better approximates the true 
performance of these methods. 

A visual comparison of four denoising methods is depicted in Figures 8.11 and 8.12. 
Noise-free and noisy (SNR=0 dB) doppler signals of length 1024 are shown in Figure 8.11. 
The noisy doppler signal has a MSE of 0.5774. Denoised doppler results are shown for 
wavelet thresholding, IIR Wiener wavelet, WaveShrink, and FIR Wiener wavelet noise 
removal techniques. These plots provide a visual interpretation of the MSE results. Specific 
observations regarding the denoising methods follow. 

A. WIENER FILTER 

The Wiener filter did not denoise these signals as well as the other methods except 

in the case of the HeaviSine signal. Doppler, Bumps, and Blocks are nonstationary and 


therefore are not effectively filtered by a standard Wiener filter. HeaviSine exhibits some 




degree of stationarity, resulting in better denoising by the Wiener filter. A short-time Wiener 
filter could be designed with an optimal window size and filter length which would improve 
the Wiener filter MSE performance significantly. This type of filter 1s utilized in denoising 
nonstationary signals in the next chapter. 
B. WAVELET THRESHOLDING 

Wavelet thresholding or “Wave Shrink” performs as well as or better than all the 
other methods at SNR levels from -5 to 20 dB. In general the SURE and Hybrid thresholding 
methods are the best of the four thresholding methods. The appendix tables show results for 
both soft and hard threshold options, although only the soft option was used in the plots. In 
some cases better MSE results are obtained using the hard threshold, however, the soft 
threshold provided better overall results. The signals in this study have all been scaled in 
amplitude such that the desired SNR is achieved while maintaining the noise level at a 
standard deviation of one. Recall the method for determining the noise standard deviation 
in actual signals required determining the median absolute deviation of the wavelet
coefficients at the lowest or finest scale. In cases where some of the signal energy resides 
in this finest scale, the performance of the wavelet threshold denoising method will be 
degraded. 
C. FIR WIENER WAVELET FILTER

The FIR Wiener wavelet filter is the best denoising method in terms of MSE 
performance in the ideal noise conditions described above and shown in Figures 8.4 and 8.9. 


Figures 8.5 and 8.10 show that this method is affected the most when provided with an 




independent noise source. The MSE more than doubles for some of the test signals. An 
accurate noise estimate is critical to the performance of the FIR Wiener wavelet denoising 
method. Transient effects also must be considered when using a FIR filter. These
shortcomings make this method the least preferred. 
D. IIR WIENER WAVELET FILTER 

The IIR Wiener wavelet filter exhibits a MSE performance that is comparable to
wavelet thresholding and WienerShrink. Although the MSE performance is degraded
when assuming an independent noise estimate as seen by comparing Figures 8.6 and 8.7, it 
still provides substantial denoising capability. Furthermore, it is the easiest to implement, 
requires the least computational effort, and has no transient effects. 
E. WIENERSHRINK 

WienerShrink outperformed all other methods but not by a substantial amount. It is 
likely that Ghael et al, the developers of this technique, may have applied an unpublished 


statistical thresholding method in their original work which was not replicated in this thesis.




[Figure: MSE versus SNR (dB) for the Wiener filter on the Doppler, HeaviSine, Bumps, and Blocks test signals.]

Figure 8.2: MSE vs. SNR performance for Wiener Filter (Filter Order 8, 12, 16, 20) on four test signals. Twenty trial average.




[Figure: MSE versus SNR (dB) for wavelet threshold denoising of the Doppler, HeaviSine, Bumps, and Blocks test signals.]

Figure 8.3: MSE vs. SNR performance for wavelet threshold denoising of four test signals. Threshold types shown include SURE, Heuristic SURE, Minimax, and Universal. Five level or scale decomposition using Daubechies-20 wavelet. Twenty trial average.




[Figure: MSE versus SNR (dB) for FIR Wiener wavelet denoising of the Doppler, HeaviSine, Bumps, and Blocks test signals.]

Figure 8.4: MSE vs. SNR performance for FIR Wiener wavelet denoising of four test signals. Wiener filter orders of 8 and 16 shown. Five level or scale decomposition using Daubechies-20 wavelet. Twenty trial average.


[Figure: MSE versus SNR (dB) for FIR Wiener wavelet denoising of the Doppler, HeaviSine, Bumps, and Blocks test signals with an independent noise source.]

Figure 8.5: MSE vs. SNR performance for FIR Wiener wavelet denoising of four test signals. Wiener filter provided independent noise source. Filter orders of 8 and 16 shown. Five level or scale decomposition using Daubechies-20 wavelet. Twenty trial average.


Figure 8.6: MSE vs. SNR performance for IIR Wiener wavelet denoising of four test 
signals (Doppler, HeaviSine, Bumps, Blocks). Three, four, and five level or scale decomposition shown using Daubechies-20 
wavelet. Twenty trial average. 


Figure 8.7: MSE vs. SNR performance for IIR Wiener wavelet denoising of four test 
signals. Wiener filter provided with independent noise source. Three, four, and five level 
or scale decomposition shown using Daubechies-20 wavelet. Twenty trial average. 


Figure 8.8: MSE vs. SNR performance for WienerShrink denoising of four test signals. 
SURE, Heuristic SURE, Minimax, and Universal thresholding used with soft option. 
Five level or scale decomposition with Daubechies-20 wavelet. Twenty trial average. 


Figure 8.9: MSE vs. SNR performance comparison of wavelet thresholding, 
WienerShrink, FIR Wiener wavelet, and IIR Wiener wavelet denoising of four test 
signals. 


Figure 8.10: MSE vs. SNR performance comparison of wavelet thresholding, 
WienerShrink, FIR Wiener wavelet, and IIR Wiener wavelet denoising of four test 
signals. Wiener filters provided with independent noise source. 


Figure 8.11: Noise-free and noisy Doppler signal of length 1024 data points. SNR = 0 dB. 
MSE of noisy Doppler is 0.5774. 


Figure 8.12: Comparison of four denoising techniques applied to noisy Doppler with 
SNR = 0 dB: wavelet thresholding (MSE = 0.08859), WaveShrink (MSE = 0.1021), 
IIR Wiener/wavelet (MSE = 0.09048), and FIR Wiener/wavelet (MSE = 0.1449). 
Independent noise sources provided to Wiener filters. Daubechies-20 wavelet used in 
five level or scale decompositions. SURE (soft) threshold used for wavelet thresholding. 










IX. WAVELET PACKETS 

Wavelet analysis has been shown to be particularly effective at removing noise from 
low frequency signals such as narrowband tonals which might occur in the ocean 
environment. The constant-Q filters provide a logarithmic frequency resolution with narrow 
bandwidths at low frequencies and wide bandwidths at high frequencies. This high 
frequency resolution at low frequencies is produced by iterating the lowpass scaling branch 
of the Mallat algorithm tree. In addition to low frequency tonals, ocean acoustic signals often 
include high frequency signals which may be of interest as well. These signals are usually 
short duration transients or acoustic energy pulses. Wavelet based denoising techniques are 
not optimal for removing noise from this class of signals. 

A richer signal analysis tool, termed wavelet packets, was introduced by Ronald 
Coifman [38] to provide high resolution decomposition at high frequencies. This is 
accomplished by iterating the highpass wavelet branch of the Mallat algorithm tree. The 
detail coefficient vector is thus decomposed into two parts by splitting, filtering, and 
decimating in the same manner as the approximation coefficient vector. The full binary tree 
is pictured in Figure 9.1, where the H and L blocks are the highpass and lowpass filters 
determined by the wavelet and scaling functions. The numerous signal expansions that are 
possible with wavelet packets come at a cost in computational complexity of O(N log2(N)) 
compared to O(N) for the wavelet transform. 
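For readers who want to reproduce the full binary tree, the sketch below performs a three-level wavelet packet decomposition with the PyWavelets package (an assumption; the thesis used the MATLAB Wavelet Toolbox), producing the eight terminal atoms that correspond to W_{3,0} through W_{3,7} in Figure 9.1.

    import numpy as np
    import pywt

    # Toy signal: a decaying tone burst plus unit-variance noise.
    t = np.arange(1024)
    x = np.sin(2 * np.pi * t / 4) * np.exp(-t / 200) + np.random.randn(1024)

    # Full wavelet packet tree down to level 3: both branches are split each time.
    wp = pywt.WaveletPacket(data=x, wavelet="db2", mode="symmetric", maxlevel=3)

    # The eight terminal atoms at level 3, in natural (filter-bank) order.
    for node in wp.get_level(3, order="natural"):
        print(node.path, len(node.data))      # 'aaa', 'aad', ..., 'ddd'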




Figure 9.1: Wavelet packet tree, with terminal nodes W_{3,0} through W_{3,7} (Used with permission from Joshua Altmann, [14]). 
A. WAVELET PACKET THEORY 

Wavelet packets are formed by modifying (4.16 ) and (4.17 ), which determine the 
DWT filter coefficients in terms of the scaling and wavelet functions. A third index is 
required to describe the wavelet packet in terms of its position within the wavelet packet tree. 
This index describes the wavelet packet in terms of its frequency resolution. The wavelet 


packet equations are given by 


w_{2p}(t) = \sum_{n} h(n)\,\sqrt{2}\, w_p(2t - n), \qquad n \in \mathbb{Z},
w_{2p+1}(t) = \sum_{n} g(n)\,\sqrt{2}\, w_p(2t - n), \qquad n \in \mathbb{Z}.        (9.1)


The wavelet packets for a three-level wavelet packet decomposition using the familiar 
Daubechies-2 wavelet are shown in Figure 9.2. These wavelet packets correspond to the 
Figure 9.1 tree with the notation W_{j,p}, with j = 3 denoting the scale parameter and p = 0 to 
7 the frequency parameter. Notice that the first two wavelet packets correspond to the 
Daubechies-2 filters (Figure 4.11) developed from the scaling and wavelet functions for the 
wavelet transform. Thus the wavelet transform is a subset of wavelet packets. 


Figure 9.2: Daubechies-2 wavelet packets for a three-level or scale decomposition. W_{j,p} 
for j = 3 and p = 0 to 7. Supported on the interval [0,3]. 

The wavelet packet decomposition of a signal results in a binary tree composed of 
waveforms known as “atoms.” This collection of atoms forms an overcomplete 
“dictionary” or library from which a signal of length N can be decomposed and reconstructed 
in at most 2^N different ways. The Best Basis algorithm, developed by R. Coifman and V. 
Wickerhauser [38], and used in MATLAB [22], seeks to minimize an additive cost function 


in order to determine the best basis for representing the signal. The cost function chosen is 




entropy given by 
E(s) = -\sum_{i} s_i^2 \log(s_i^2)        (9.3) 

where s is the signal and s_i the coefficients of s in an orthonormal basis. Best Basis is an 
efficient algorithm which computes the entropy at each node in the binary tree and then finds 
the best wavelet packet representation of the signal based on minimizing this quantity. 
Reference [34] provides additional information. 
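A minimal sketch of the additive entropy cost of (9.3) and of the pruning decision made at a single node is given below; it is illustrative only and is not the MATLAB Best Basis implementation used in the thesis. The split-or-merge rule shown (keep the parent if its cost does not exceed the summed cost of its children) is the standard Coifman-Wickerhauser comparison.

    import numpy as np

    def entropy_cost(s, eps=1e-12):
        """Additive entropy cost E(s) = -sum(s_i^2 log(s_i^2)), zeros ignored."""
        p = np.asarray(s, dtype=float) ** 2
        p = p[p > eps]
        return -np.sum(p * np.log(p))

    def keep_parent(parent, child_low, child_high):
        """Best-basis rule at one node: keep the parent atom if it is cheaper
        than the combined cost of its two children."""
        return entropy_cost(parent) <= entropy_cost(child_low) + entropy_cost(child_high)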
B. WAVELET PACKETS APPLIED TO TEST SIGNAL 

The test signal chosen for analysis is a series of three high frequency transient pulses 
of decreasing amplitude. This test signal, which will be referred to as transients, was used 
by Barsanti in reference [8]. The signal is produced from the following truncated 


exponentially decaying sinusoids: 


s_1 = \sin(2\pi k_1/4)\, e^{-k_1/200}, \qquad k_1 = 1, 2, \ldots, 256
s_2 = \tfrac{1}{2}\,\sin(2\pi k_2/4)\, e^{-k_2/200}, \qquad k_2 = 1, 2, \ldots, 200        (9.2)
s_3 = \tfrac{1}{3}\,\sin(2\pi k_3/4)\, e^{-k_3/200}, \qquad k_3 = 1, 2, \ldots


This is a realistic acoustic signal which is artificially produced to allow MSE comparisons 
at differing SNR levels. The noisy version of transients is produced by adding white 
Gaussian noise with a zero mean. The standard deviation of the noise remains constant at 
a value of one while the noise-free signal is scaled to obtain the desired SNR level. Noise- 
free transients and noisy transients with an SNR of 0 dB are shown in Figure 9.3. The first 
transient is also pictured expanded in the noise-free and noisy form so that the details may 


be better observed. 
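A sketch of how such a test signal can be generated and scaled to a prescribed SNR against unit-variance noise is shown below. The pulse start positions, the length of the third pulse, and the average-power SNR definition are assumptions made for illustration; the pulse shapes follow (9.2).

    import numpy as np

    def decaying_pulse(n, amp):
        """One truncated, exponentially decaying sinusoid as in (9.2)."""
        k = np.arange(1, n + 1)
        return amp * np.sin(2 * np.pi * k / 4) * np.exp(-k / 200)

    def make_transients(total_len=16384, starts=(1000, 6000, 11000)):
        """Three pulses of decreasing amplitude in a long record (placement assumed)."""
        x = np.zeros(total_len)
        for start, (n, amp) in zip(starts, [(256, 1.0), (200, 0.5), (200, 1.0 / 3.0)]):
            x[start:start + n] += decaying_pulse(n, amp)
        return x

    def add_noise_at_snr(x, snr_db):
        """Scale the clean signal so the desired SNR is reached against
        zero-mean, unit-variance white Gaussian noise."""
        scale = np.sqrt(10.0 ** (snr_db / 10.0) / np.mean(x ** 2))
        return scale * x, scale * x + np.random.randn(len(x))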


Figure 9.3: Noise-free and noisy transients (SNR = 0 dB). Expanded view of first 
transient without and with noise (SNR = 0 dB). 


Seven different methods of denoising are applied to the transients. The results for 
the short-time Wiener filter are shown in Figure 9.4. The other six methods are compared in 
Figures 9.5 through 9.10 for SNR levels of 5, 0, and -5 dB. Specific comments concerning 


each method follow. 




1. Short-time Wiener Filter 

The short-time Wiener filter of length twenty is applied to the 16384 point transient 
signal in segments of 256 points. The noise source provided to the Wiener filter is an 
independent noise source from that used to produce the noisy transients signal. It is white 
Gaussian noise with standard deviation equal to one and a zero mean. Windowing the data 
allows the Wiener filter to completely remove the noise in noise-only segments. This does 
not occur when the Wiener filter is applied to the entire signal in one block due to the non- 
stationary nature of the data. 
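A sketch of this segment-by-segment Wiener filtering is given below, assuming the standard Wiener-Hopf solution h = R_y^{-1}(r_y - r_n) with the noise autocorrelation estimated from the independent noise reference; segment edges and overlap are handled more simply here than in the thesis implementation, and the function names are assumptions.

    import numpy as np
    from scipy.linalg import toeplitz

    def autocorr(x, p):
        """Biased autocorrelation estimate r(0), ..., r(p-1)."""
        n = len(x)
        return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p)])

    def wiener_fir(y, noise_ref, order=20):
        """FIR Wiener filter for y = s + n using an independent noise reference."""
        ry = autocorr(y, order)
        rn = autocorr(noise_ref, order)
        h = np.linalg.solve(toeplitz(ry), ry - rn)   # Wiener-Hopf equations
        return np.convolve(y, h, mode="same")

    def short_time_wiener(y, noise_ref, order=20, seg=256):
        """Apply the Wiener filter segment by segment (no overlap)."""
        out = np.zeros(len(y))
        for i in range(0, len(y), seg):
            out[i:i + seg] = wiener_fir(y[i:i + seg], noise_ref[i:i + seg], order)
        return out

Processing each 256-point block separately is what lets the filter drive noise-only segments essentially to zero, since those blocks have no signal power for the Wiener solution to preserve.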

2. Wavelet Thresholding 

This signal does not meet the low frequency assumption of wavelet denoising and 
therefore the MSE is greater than that achieved by wavelet packet thresholding. When using 
wavelet thresholding for denoising, the approximation coefficients are not usually 
thresholded since the assumption is that these coefficients contain the signal for the most 
part. The algorithm was modified to support approximation coefficient thresholding which 
removed additional noise and reduced the MSE. 
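A sketch of that modification is shown below, assuming PyWavelets and a universal threshold in place of the SURE rule used in the thesis; the only change from ordinary wavelet thresholding is that the approximation vector is shrunk along with the details.

    import numpy as np
    import pywt

    def threshold_all_coeffs(x, sigma=1.0, wavelet="db8", level=5):
        """Wavelet thresholding that also shrinks the approximation coefficients."""
        thr = sigma * np.sqrt(2.0 * np.log(len(x)))
        coeffs = pywt.wavedec(x, wavelet, level=level)
        coeffs = [pywt.threshold(c, thr, mode="soft") for c in coeffs]
        return pywt.waverec(coeffs, wavelet)[:len(x)]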

3. FIR Wiener Wavelet Filter 

The FIR Wiener wavelet filter does not perform well for this signal, again due to the 
high frequency characteristics of the signal. The MSE is higher than what could be optimally 
achieved due to the residual noise present in the noise-only segments. This is true for all of 
the Wiener/Wavelet methods. This is partially remedied by implementing a windowed or 


segmented filtering algorithm that uses a triangular window with fifty-percent overlap. A 




window length of 4096 points with a Wiener filter length of four and decomposition level 
of two achieved the best results which are listed in the table at the end of this section. The 
short-time FIR Wiener wavelet filter approaches the MSE performance of the FIR Wiener 
wavelet packet filter. 
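A sketch of the windowed (short-time) wrapper is given below: the record is cut into 50-percent-overlapping segments, each segment is weighted with a triangular window, passed to a block denoiser, and the outputs are overlap-added; with this window and overlap the weights sum to approximately one away from the record edges. The denoise argument stands for any of the block methods discussed here, and the segment length is a parameter (4096 for the FIR case above, 256 for the IIR case in the next subsection).

    import numpy as np

    def short_time_denoise(x, denoise, seg=4096):
        """Triangular-window, 50%-overlap, overlap-add wrapper around a
        block denoiser `denoise(segment) -> segment`."""
        hop = seg // 2
        win = np.bartlett(seg)                      # triangular window
        out = np.zeros(len(x))
        for start in range(0, len(x) - seg + 1, hop):
            out[start:start + seg] += denoise(x[start:start + seg] * win)
        return out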

4. IIR Wiener Wavelet Filter 

The observations made for the FIR Wiener wavelet filter apply to this method as well. 
The IIR Wiener/Wavelet filter experiences the most severe degradation when the signal to 
be denoised does not meet the low frequency assumption. Significant improvement is 
achieved by implementing the same triangular window short-time filtering algorithm 
described above for the short-time FIR Wiener wavelet filter. A window length of 256 
points was used to produce the results shown in Figure 9.11. Additional improvement in 
terms of MSE is realized by applying a threshold to the IIR Wiener wavelet filter 
coefficients. If the filter coefficient magnitude is below some threshold value, in this case 
0.4 was chosen by trial and error, then the coefficient is set to zero. The resulting 
improvement is seen in the third column of Figure 9.11. 
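In terms of the per-subband-gain sketch shown earlier, the coefficient thresholding described above is a single extra step, illustrated below; the 0.4 floor is the trial-and-error value quoted in the text, and the scalar-gain form is an illustrative assumption.

    import numpy as np

    def threshold_gains(gains, floor=0.4):
        """Zero any Wiener gain whose magnitude falls below the floor."""
        g = np.asarray(gains, dtype=float)
        return np.where(np.abs(g) < floor, 0.0, g)

    # Small gains (subbands that are mostly noise) are forced to zero:
    print(threshold_gains([0.95, 0.62, 0.35, 0.10]))   # [0.95 0.62 0.   0.  ]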

5. Wavelet Packet Thresholding 

Wavelet Packet thresholding exhibits significant MSE improvement over wavelet 
thresholding. Greater frequency resolution at high frequencies allows this method to produce 
transient pulses with better fidelity. Wavelet packet thresholding uses the SURE threshold 
described previously in Chapter IV. The approximation coefficients, represented by the 


“atom” in the lower left corner of the binary tree, are thresholded as well, leading to a noise- 




free result in the segments between the pulses which contained only noise. 
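A sketch of wavelet packet thresholding that also shrinks the approximation atom is given below, again assuming PyWavelets and a universal threshold in place of the SURE rule used in the thesis.

    import numpy as np
    import pywt

    def wp_threshold_denoise(x, sigma=1.0, wavelet="db8", level=3):
        """Threshold every terminal wavelet packet atom, approximation included."""
        thr = sigma * np.sqrt(2.0 * np.log(len(x)))
        wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric",
                                maxlevel=level)
        for node in wp.get_level(level, order="natural"):
            node.data = pywt.threshold(node.data, thr, mode="soft")
        return wp.reconstruct(update=True)[:len(x)]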

6. FIR Wiener Wavelet Packet Filter 

In this method a separate FIR Wiener filter is applied to each atom in the terminal or 
lowest nodes of the binary tree. No Best Basis optimization is performed. Some MSE 
improvement is realized when comparing this method to the FIR Wiener wavelet method, but 
the short-time Wiener filter and wavelet packet thresholding still achieve better results. 

7. IIR Wiener Wavelet Packet Filter 

In this method an IIR Wiener Wavelet filter is applied to each atom at the terminal 
nodes. The IIR Wiener wavelet packet filter displays some improvement over the IIR 
Wiener wavelet filter, but it is the poorest performer of all the wavelet packet methods. It 
would likely achieve better results in some type of windowed or segmented implementation. 

8. Summary 

The table following shows a comparison of all the methods in increasing MSE order. 
An optimized ST Wiener filter still outperforms all other methods with the wavelet packet 
threshold method and the ST IIR Wiener/wavelet method following close behind. Methods 
which do not utilize wavelet packets or a short-time filter are unable to effectively denoise 
this high frequency transient signal. For this particular signal, a short-time implementation 
of wavelet packet based denoising methods produced no further improvement and was 


computationally cumbersome. 















Method                                   -5 dB     0 dB      5 dB
ST Wiener                                0.1468    0.0684    0.0363
Wavelet packet threshold                 0.2888    0.1231    0.0439
ST IIR Wiener/wavelet (hard threshold)   0.3751    0.1562    0.06108
WP FIR Wiener/wavelet                    0.4102    0.1884    0.0969
ST IIR Wiener/wavelet                    0.5950    0.2025    0.07569
FIR Wiener/wavelet                       0.6944    0.3558    0.1768
WP IIR Wiener/wavelet                    0.7055    0.3029    0.1283
Wavelet threshold                        0.8283    0.3832    0.1399
IIR Wiener/wavelet                       1.2640    0.8555    0.4788





Figure 9.4: Denoising transients using short-time Wiener filter of length twenty. 
Window size: 256. SNR levels of -5, 0, and 5 dB (MSE = 0.1468, 0.0684, and 0.0363, respectively). 


Figure 9.5: Denoising transients using wavelet thresholding (MSE = 0.1399), FIR 
Wiener/wavelet (MSE = 0.1768), and IIR Wiener/wavelet (MSE = 0.4788) methods. SNR = 5 dB. 


Figure 9.6: Denoising transients using wavelet packet thresholding (MSE = 0.0439), FIR 
Wiener/wavelet packet (MSE = 0.0969), and IIR Wiener/wavelet packet (MSE = 0.1283) methods. SNR = 5 dB. 


Figure 9.7: Denoising transients using wavelet thresholding (MSE = 0.3832), FIR 
Wiener/wavelet (MSE = 0.3558), and IIR Wiener/wavelet (MSE = 0.8555) methods. SNR = 0 dB. 


Figure 9.8: Denoising transients using wavelet packet thresholding (MSE = 0.1231), FIR 
Wiener/wavelet packet (MSE = 0.1884), and IIR Wiener/wavelet packet (MSE = 0.3029) methods. SNR = 0 dB. 


Figure 9.9: Denoising transients using wavelet thresholding (MSE = 0.8283), FIR 
Wiener/wavelet (MSE = 0.6944), and IIR Wiener/wavelet (MSE = 1.2640) methods. SNR = -5 dB. 


Figure 9.10: Denoising transients using wavelet packet thresholding (MSE = 0.2888), FIR 
Wiener/wavelet packet (MSE = 0.4102), and IIR Wiener/wavelet packet (MSE = 0.7055) methods. SNR = -5 dB. 


Figure 9.11: Short-time IIR Wiener wavelet filter applied to transients with SNR levels of 
5, 0, and -5 dB. First column: noisy transients. Second column: denoised transients 
(MSE = 0.07569, 0.2025, and 0.5950). Third column: denoised transients using ST IIR 
Wiener wavelet filter with hard threshold (MSE = 0.06108, 0.1562, and 0.3751). 




X. CONCLUSIONS 

This study was undertaken to determine the feasibility of combining Wiener filter and 
wavelet based denoising techniques. The two methods have been used independently for 
years to denoise many types of signals. Their development and performance were 
demonstrated in Chapters III through V. Before any attempt was made to combine the 
methods, legitimate issues arose concerning the aliasing effects produced by inserting a 
Wiener filter into the DWT filterbank and how this might affect the ability to perfectly 
reconstruct the denoised signal. This concern led to the development of an alias cancellation 
filter in Chapter VI. It was hoped that this would cancel the deleterious effects produced by 
aliasing. Unfortunately, the alias cancellation filter only produced greater distortion and was 
not found to be of any benefit. 

Though the original intent was to combine a FIR Wiener filter with wavelet analysis, 
the possibility of an IIR Wiener wavelet filter was motivated by the WienerShrink method 
of reference [36]. The simplicity of this method combined with its advertised performance 
led to the IIR Wiener wavelet filter methods developed in Chapter VI. 

In the earlier chapters the Wiener filter, wavelet thresholding, the FIR Wiener 
wavelet filter, and IIR Wiener wavelet filters were developed and applied to a stationary noisy 
sinusoidal signal. Initial results indicated that the IIR and FIR Wiener wavelet filters might 
actually outperform, in a MSE sense, the more traditional Wiener filter and wavelet 
thresholding methods. These results proved that the Wiener wavelet combination could 


indeed denoise a signal. However, removing the noise from a single sinusoidal waveform 




was not deemed to be a comprehensive test of the denoising capabilities of these techniques. 
Therefore, the robustness of the various methods was measured by applying them to four test 
signals. In these results, the IIR Wiener wavelet methods compared favorably with wavelet 
thresholding. The FIR Wiener wavelet filter results, which seemed promising in Chapter VI, 
were rather disappointing, with MSE values as much as twice 
those of the other methods. 

Since many ocean acoustic signals of interest are high frequency transients, wavelet 
packet methods were developed in Chapter IX to provide greater frequency resolution at 
high frequencies. The FIR and IIR Wiener filtering techniques were applied to the terminal 
nodes of the wavelet packet decomposition, providing an enhanced capability in 
denoising high frequency signals compared to the previously developed Wiener wavelet 
methods. The various denoising methods were applied to synthetically generated noisy 
transients. The results were similar in that the traditional methods of short-time Wiener 
filtering and wavelet packet thresholding outperformed all other methods. However, an IIR 
Wiener wavelet filter used in a short-time implementation performed rather well with MSE 
values somewhat greater than the traditional methods. 

This study has proven the feasibility of combining Wiener and wavelet based 
techniques into a single filter. In general, it does not outperform either of the methods from 
which it is derived. The basis for comparison between the various methods is not by any 
means clear cut. Each method has numerous parameters which can be adjusted to optimize 


its denoising performance. To complicate matters further, many of these parameters are 




signal dependent. Although the conclusions drawn are well supported, specific results in 
denoising a particular signal may vary depending on a variety of parameters such as Wiener 
filter length, wavelet filter length, window length, wavelet type, threshold type, hard or soft 
threshold option, thresholding of approximation coefficients, and the list continues. The 
variability makes this a fascinating topic which often leads to more questions than answers. 
No doubt this subject will stimulate interest for years to come. Future studies might involve 

the application of a median filter to the wavelet coefficients or some type of hybrid Wiener 
wavelet threshold method in which a minimum filter coefficient value is set based on 


statistical analysis. 










APPENDIX 

This appendix lists modified MSE results as defined by (3.11). The MSE results are 
tabulated for the four test signals Doppler, HeaviSine, Bumps, and Blocks which are pictured 
in Figure 8.1. Variable parameters include: SNR, Wiener filter order, level of 
decomposition, wavelet threshold type, and wavelet threshold option (s for soft, h for hard). 
Actual and independent noise sample refers to the noise provided to the Wiener filter. In the 
actual case, the noise used to produce the noisy signal is provided, while the independent case 
refers to an independent sample of white Gaussian noise with a zero mean and variance as 


required to meet the SNR level desired. 









            Doppler       HeaviSine     Bumps         Blocks
SURE        s 0.1005      s 0.04833     s 0.08631     s 0.06316
            h 0.2228      h 0.17490     h 0.17420     h 0.13840
HEUR        s 0.1326      s 0.03152     s 0.14560     s 0.07379
            h 0.1241      h 0.04417     h 0.12580     h 0.07464
0  UNIV     s 0.1470      s 0.03136     s 0.1638      s 0.10510
            h 0.1357      h 0.03705     h 0.1454      h 0.09166

WAVELET THRESHOLDING

            s 0.1190      s 0.03637     s 0.1268      s 0.08342
            h 0.1560
            s 0.04490     s 0.02456     s 0.03505     s 0.03082
            s 0.05948     s 0.01982     s 0.03499     s 0.03760
MINI        s 0.05178     s 0.02136     s 0.04663     s 0.03469
SURE        s 0.01707     s 0.006897    s 0.01335     s 0.01524
            h 0.04011     h 0.02166     h 0.02437     h 0.02763
HEUR        s 0.02027     s 0.005605    s 0.01629     s 0.02055
            h 0.02043     h 0.007024    h 0.01555     h 0.02004
10 UNIV     s 0.04507     s 0.005605    s 0.05424     s 0.04428
            h 0.01955     h 0.006310    h 0.02529     h 0.02796
            s 0.02485     s 0.006006    s 0.02930     s 0.02766
                                                      h 0.02427
            s 0.006774    s 0.003568    s 0.005015    s 0.006461
            s 0.01571     s 0.003350    s 0.01632     s 0.01962
20 SURE     s 0.002439    s 0.001517    s 0.001941    s 0.002452
            h 0.005082    h 0.003641    h 0.024370    h 0.003784
20 HEUR     s 0.003283    s 0.002278    s 0.001953    s 0.002452
            h 0.003231    h 0.001900    h 0.002258    h 0.003784
UNIV        s 0.008027    s 0.002587    s 0.009745    s 0.012060
            h 0.002472    h 0.002176    h 0.002961    h 0.004294
MINI        s 0.003962    s 0.001910    s 0.004805    s 0.005942
            h 0.002943    h 0.002796    h 0.002950    h 0.002862



            Doppler       HeaviSine     Bumps         Blocks
            s 0.1932      s 0.09354     s 0.1997      s 0.1221
SURE        s 0.09323     s 0.03300     s 0.08491     s 0.05615
            h 0.1385      h 0.06595     h 0.10900     h 0.07381
HEUR        s 0.11540     s 0.03226     s 0.12690     s 0.07499
            h 0.09107     h 0.03374     h 0.08628     h 0.05957
UNIV        s 0.11990     s 0.03226     s 0.13910     s 0.08629
            h 0.09974     h 0.03250     h 0.09558     h 0.07248
MINI        s 0.1100      s 0.03232     s 0.1212      s 0.07864
            h 0.1119      h 0.05961     h 0.1003      h 0.06312
            s 0.04872     s 0.01210     s 0.03748     s 0.03000
            s 0.05846     s 0.01195     s 0.03773     s 0.03857
            s 0.08133     s 0.01194     s 0.09507     s 0.06621
SURE        s 0.02533     s 0.005288    s 0.01500     s 0.01618
            h 0.02893     h 0.009307    h 0.01791     h 0.02224
HEUR        s 0.02903     s 0.005371    s 0.01529     s 0.02002
            h 0.02278     h 0.005379    h 0.01392     h 0.01575
UNIV        s 0.05147     s 0.005374    s 0.05208     s 0.04631
            h 0.02309     h 0.005384    h 0.01507     h 0.01765
MINI        s 0.03664     s 0.005355    s 0.03187     s 0.03208
            h 0.02468     h 0.007919    h 0.01619     h 0.01599
SURE        s 0.01221     s 0.002783    s 0.005936    s 0.008363
HEUR        s 0.01272     s 0.003171    s 0.006209    s 0.009316
15 UNIV     s 0.02784     s 0.003214    s 0.02354     s 0.02822
















20 SURE     s 0.005398    s 0.001565    s 0.002292    s 0.003784
            h 0.005575    h 0.002023    h 0.002786    h 0.003806
20 HEUR     s 0.005914    s 0.001863    s 0.002318    s 0.003784
            h 0.004698    h 0.001469    h 0.002182    h 0.003806
20 UNIV     s 0.01312     s 0.002333    s 0.009679    s 0.014820
            h 0.004717    h 0.001546    h 0.002183    h 0.003411
20 MINI     s 0.008285    s 0.001990    s 0.005220    s 0.008439
            h 0.004842    h 0.001678    h 0.002328    h 0.003374







00458 


0.04686 0.01820 0.03268 0.02902 


I 


2d 


gs 
2 
4 

os 


aa 
Oe 








Ps [os [+ | cows 
Ts fs [5 | oniico  o0essee [onsise [ 002003 

ss [emia | omer Laois | 0010s 
rio fs | «| o0ares | o00ss50 [oor2s7 | consi 
Tio fs fs | one [0001s [001260 | 001460 
ras fs | 3} oovsres | a00ses0 [ooossr0 | aooners 
ras || «| eowsrse | aco2see [ooossis | ooo 
ris [ss | o0wsre0 | cone [onmise | coor 
Tis [16 [soos | 0002730 [onnioas | coors 


15 0 
To [8 


0.004269 0.001505 0.001942 | 0.003290 
0.004268 0.001484 0.001943 | 0.003290 




FIR WIENER WAVELET



































IIR WIENER FILTER (Independent Noise Sample)


























REFERENCES 


[1] D. Frieden, Principles of Naval Weapons Systems, Naval Institute Press, Annapolis, 
MD, 1985. 


[2] R. Urick, Principles of Underwater Sound, McGraw-Hill Book Company, New York, 
1975. 


[3] R. Urick, Ambient Noise in the Sea, Peninsula Publishing, Los Altos, CA, 1986. 


[4] V. Knudsen, R. Alford, and J. Emling, “Underwater Ambient Noise,” Journal of Mar. 
Research 7, 410, 1948. 


[5] W. Crouch and P. Burt, “Logarithmic Dependence of Surface-Generated Sea Noise 
Spectrum Load on Wind Speed,” JASA 51,1066,1972. 


[6] R.F.W Coates, Underwater Acoustic Systems, John Wiley & Sons, Inc., New York, 
1989. 


[7] R. Mellen, “Thermal-Noise Limit in the Detection of Underwater Acoustic Signals,” 
JASA 24, 478, 1952. 


[8] R. Barsanti, Jr., Denoising of Ocean Acoustic Signals Using Wavelet-Based 
Techniques, Master’s Thesis, Naval Postgraduate School, Monterey, CA, December, 
1996. 


[9] K. Frack, Improving Transient Signal Synthesis Through Noise Modeling and Noise 
Removal, Master’s Thesis, Naval Postgraduate School, Monterey, CA, March, 
1994. 
[10] S. Haykin, Adaptive Filter Theory, Prentice Hall, Englewood Cliffs, NJ, 1996. 
[11] N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series, 
The M.I.T. Press (formerly Technology Press), Cambridge, MA, 1949. (Reprinted as 
a paperback in 1964). 


[12] C. Therrien, Discrete Random Signals and Statistical Signal Processing, Prentice 
Hall PTR, Englewood Cliffs, NJ, 1992. 


[13] B. Cipra, “Wavelet Applications Come to the Fore,” SIAM News, Vol.26, No.7, 
November 1993. 




[14] C. Burrus, R. Gopinath, and H. Guo, Introduction to Wavelets and Wavelet 
Transforms, Prentice Hall, Englewood Cliffs, NJ, 1997. 


[15] S. Haykin, Communication Systems, John Wiley & Sons, New York, 1994. 


[16] D. Gabor, “Theory of Communications,” Journal of the Institute for Electrical 
Engineers, 93:429-439, 1946. 


[17] O. Rioul and M. Vetterli, “Wavelets and Signal Processing,” IEEE Signal 
Processing Magazine, October, 1991. 


[18] J. Altmann, “Surfing the Wavelets,” http://www.monash.edu.au/cmcm/wavelet/ 
wavelet.html, February 1998. 


[19] R. Polikar, “The Wavelet Tutorial: Part IV,” http://www.public.iastate.edu/~rpolikar/ 
WAVELETS/WTrtutorial.html, November 1997. 


[20] A. Croisier, D. Esteban, and C. Galand, “Perfect Channel Splitting by Use of 
Interpolation, Decimation, and Tree Decomposition Techniques,” Int. Conf. on 
Information Sciences/Systems, Patras, pp. 443-446, Aug. 1976. 


[21] P. Vaidyanathan and I. Djokovic, “Wavelet Transforms”. In Wai-Kai Chen, editor, 
The Circuits and Filters Handbook, chapter 6, pages 134-219, CRC Press and IEEE 
Press, Boca Raton, FL, 1995. 


[22] G. Strang and T. Nguyen, Wavelets and Filter Banks, Wellesley-Cambridge Press, 
Wellesley, MA, 1996. 


[23] P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice Hall PTR, 
Englewood Cliffs, NJ, 1993. 


[24] I. Daubechies, “Orthonormal Bases of Compactly Supported Wavelets,” 
Communications on Pure and Applied Mathematics, Vol. XLI, 1988. 


[25] The Mathworks, Inc., Wavelet Toolbox, Natick, MA, 1996. 

[26] D. Donoho, I. Johnstone, “Ideal Spatial Adaptation by Wavelet Shrinkage,” 
Biometrika, 81:425-455, 1994. Also Stanford Statistics Dept. Report TR-400, July 
1992. 


[27] D. Donoho, I. Johnstone, “Adapting to Unknown Smoothness via Wavelet 
Shrinkage,” Journal of American Statist. Assn., to appear 1995. Also Stanford 




Statistics Dept. Report TR-425, June 1993. 


[28] D. Donoho, “De-noising by Soft-Thresholding,” IEEE Transactions on Information 
Theory, 41(3):613-627, May 1995. Also Stanford Statistics Dept. Report TR-409, 
November 1992. 


[29] H. Gao, “Threshold Selection in WaveShrink,” Technical Report, StatSci Division of 
MathSoft, Inc., Seattle, WA, 1995. 


[30] C. Stein, “Estimation of the Mean of a Multivariate Normal Distribution,” The 
Annals of Statistics, Vol. 9, pp.1135-1151, 1981. 


[31] M. Ledbetter, G. Lindgren, and H. Rootzen, Extremes and Related Properties of 
Random Sequences and Processes, Springer-Verlag, New York, 1983. 


[32] D. Donoho, “Wavelet Shrinkage and W.V.D. - A Ten Minute Tour,” Technical 
Report TR- 416, Statistics Department, Stanford University, Stanford, CA, January 
1993. 


[33] L. Rabiner and B. Juang, Fundamentals of Speech Recognition, Prentice Hall PTR, 
Englewood Cliffs, NJ, 1993. 


[34] H.V. Poor, An Introduction to Signal Detection and Estimation, Springer-Verlag, 
New York, 1988. 


[35] D. Donoho, “Unconditional Bases are Optimal Bases for Data Compression and for 
Statistical Estimation,” Applied and Computational Harmonic Analysis, 1(1):100- 
115, December 1993. Also Stanford Statistics Dept. Report TR-410, Nov. 1992. 


[36] S. Ghael, A. Sayeed, and R. Baraniuk, “Improved Wavelet Denoising via Empirical 
Wiener Filtering,” Proceedings of SPIE, Mathematical Imaging, San Diego, CA, 
July, 1997. 


[37] J. Buckheit, S. Chen, D. Donoho, and J. Scargle, “Wavelab.700,” 
http://wwww.wavelab/playfair.stanford.edu, 1996. 


[38] R. Coifman and M. Wickerhauser, “Entropy-based algorithms for best-basis 


selection,” IEEE Transactions on Information Theory, vol. 38, pp. 713-718, March 
1992. 










INITIAL DISTRIBUTION LIST 


Defense Technical Information Center ........................ 2 
8725 John J. Kingman Rd., STE 0944 
Ft. Belvoir, VA 22060-6218 


Dudley Knox Library ........................ 2 
Naval Postgraduate School 

411 Dyer Rd. 

Monterey, CA 93943-5101 


Chairman, Code EC ........................ 1 
Department of Electrical and Computer Engineering 

Naval Postgraduate School 

Monterey, CA 93943-5121 


Prof. Monique P. Fargues, Code EC/Fa ........................ 
Department of Electrical and Computer Engineering 

Naval Postgraduate School 

Monterey, CA 93943-5121 


Prof. Ralph Hippenstiel, Code EC/Hi ........................ 1 
Department of Electrical and Computer Engineering 

Naval Postgraduate School 

Monterey, CA 93943-5121 


NUWC, Building 1320, Rm 387 ........................ 1 
att: Mr. Steve Greneider, Code 2121 

Newport Division 

Newport, RI 02841 


NUWC, Building 1320, Rm 387 ........................ 1 
att: Dr. Paul M. Baggenstoss, Code 2121 

Newport Division 

Newport, RI 02841 


Fredric D. Forney ........................ 3 


12008 Green Court 
Glenn Dale, MD 20769 





















