
Institutional Archive of the Naval Postgraduate School

Calhoun: The NPS Institutional Archive
DSpace Repository

Theses and Dissertations
1. Thesis and Dissertation Collection, all items

1997-03

Interactive tools for sound signal analysis

Chuang, Ming-Fei

Monterey, California. Naval Postgraduate School

http://hdl.handle.net/10945/8550

Copyright is reserved by the copyright owner

Downloaded from NPS Archive: Calhoun

DUDLEY KNOX LIBRARY
http://www.nps.edu/library


Calhoun is the Naval Postgraduate School's public access digital repository for 

research materials and institutional publications created by the NPS community. 

Calhoun is named for Professor of Mathematics Guy K. Calhoun, NPS's first 
appointed — and published — scholarly author. 


Dudley Knox Library / Naval Postgraduate School 
411 Dyer Road / 1 University Circle 
Monterey, California USA 93943 


NPS ARCHIVE 
1997, 03 
CHUANG, M. 


NAVAL POSTGRADUATE SCHOOL 
MONTEREY, CALIFORNIA 





THESIS 


INTERACTIVE TOOLS FOR SOUND SIGNAL 
ANALYSIS/SYNTHESIS BASED ON A SINUSOIDAL 
REPRESENTATION 


by 
Ming-Fei Chuang 


March 1997 


Thesis Advisor: Charles W. Therrien 
Second Reader: Roberto Cristi 


Approved for public release; distribution is unlimited.





DUDLEY KNOX LIBRARY 


NAVAL POSTGRADUATE SCHOOL 
MONTEREY CA 93943-5101





REPORT DOCUMENTATION PAGE
Form Approved OMB No. 0704-0188

Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instruction, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget, Paperwork Reduction Project (0704-0188), Washington DC 20503.

1. AGENCY USE ONLY (Leave blank)
2. REPORT DATE: March 1997
3. REPORT TYPE AND DATES COVERED: Master's Thesis
4. TITLE AND SUBTITLE: INTERACTIVE TOOLS FOR SOUND SIGNAL ANALYSIS/SYNTHESIS BASED ON A SINUSOIDAL REPRESENTATION
5. FUNDING NUMBERS
6. AUTHOR(S): Ming-Fei Chuang
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Naval Postgraduate School, Monterey, CA 93943-5000
8. PERFORMING ORGANIZATION REPORT NUMBER
9. SPONSORING/MONITORING AGENCY NAME(S) AND ADDRESS(ES)
10. SPONSORING/MONITORING AGENCY REPORT NUMBER
11. SUPPLEMENTARY NOTES: The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
12a. DISTRIBUTION/AVAILABILITY STATEMENT: Approved for public release; distribution is unlimited.
12b. DISTRIBUTION CODE

13. ABSTRACT (maximum 200 words)
This thesis develops a series of programs that implement the sinusoidal representation model for speech and sound waveform analysis and synthesis. This sinusoidal representation model can also be used for a variety of sound signal transformations such as time-scale modification and frequency scaling. The above sound analysis/synthesis sinusoidal representations and transformations were developed as two interactive tools with Graphical User Interface (GUI) using MATLAB. In addition, an interactive tool for signal frequency component editing based on the sinusoidal model is also presented in this thesis.

14. SUBJECT TERMS: Sinusoidal Representation, Analysis/Synthesis, GUI, STFT, Frequency Track, Speech, Time-Scale Modification, Frequency Scaling
15. NUMBER OF PAGES: 56
16. PRICE CODE
17. SECURITY CLASSIFICATION OF REPORT: Unclassified
18. SECURITY CLASSIFICATION OF THIS PAGE: Unclassified
19. SECURITY CLASSIFICATION OF ABSTRACT: Unclassified
20. LIMITATION OF ABSTRACT: UL

NSN 7540-01-280-5500
Standard Form 298 (Rev. 2-89), Prescribed by ANSI Std. 239-18, 298-102





Approved for public release; distribution is unlimited. 


INTERACTIVE TOOLS FOR SOUND SIGNAL ANALYSIS/SYNTHESIS BASED ON A 
SINUSOIDAL REPRESENTATION 


Ming-Fei Chuang 
Lieutenant, Republic of China Navy 
B.S., Chung-Cheng Institute of Technology, 1992


Submitted in partial fulfillment 
of the requirements for the degree of 


MASTER OF SCIENCE 
IN 
ENGINEERING ACOUSTICS 


from the 


NAVAL POSTGRADUATE SCHOOL 


March 1997 







ABSTRACT 

This thesis develops a series of programs that implement the sinusoidal 
representation model for speech and sound waveform analysis and synthesis. This 
sinusoidal representation model can also be used for a variety of sound signal 
transformations such as time-scale modification and frequency scaling. The above sound 
analysis/synthesis sinusoidal representations and transformations were developed as two 
interactive tools with Graphical User Interface (GUI) using MATLAB. In addition, an 
interactive tool for signal frequency component editing based on the sinusoidal model is 


also presented in this thesis. 





TABLE OF CONTENTS

I. INTRODUCTION .......................................................... 1
   A. OVERVIEW ........................................................... 1
   B. THESIS OUTLINE ..................................................... 2
II. ANALYSIS/SYNTHESIS BASED ON A SINUSOIDAL REPRESENTATION .............. 3
   A. SINE-WAVE ANALYSIS/SYNTHESIS MODEL ................................ 3
      1. The Sinusoidal Representation .................................. 3
      2. Analysis ....................................................... 4
      3. Synthesis ...................................................... 8
   B. TIME-SCALE AND FREQUENCY TRANSFORMATION .......................... 10
      1. Fixed Rate Change ............................................. 10
      2. Frequency Scaling ............................................. 12
III. IMPLEMENTATION OF THE INTERACTIVE TOOLS ............................ 17
   A. THE SOUND ANALYSIS/SYNTHESIS INTERACTIVE TOOL .................... 17
   B. THE FREQUENCY TRACK EDITING INTERACTIVE TOOL ..................... 19
   C. TIME AND FREQUENCY SCALING INTERACTIVE TOOL ...................... 29
IV. CONCLUSIONS ......................................................... 31
   A. DISCUSSION ........................................................ 31
   B. SUGGESTIONS FOR FUTURE STUDY ..................................... 32
APPENDIX ................................................................ 33
   A. SOUND ANALYSIS/SYNTHESIS FUNCTIONS ............................... 33
      1. Analysis Function ............................................. 34
      2. Synthesis Function ............................................ 36
   B. FREQUENCY TRACK EDITING FUNCTIONS ................................ 36
   C. SOUND TRANSFORMATION FUNCTIONS ................................... 37
LIST OF REFERENCES ...................................................... 39
BIBLIOGRAPHY ............................................................ 41
INITIAL DISTRIBUTION LIST ............................................... 43


LIST OF FIGURES

1. Model of Speech Production ............................................ 3
2. Block Diagram of Sinusoidal Analysis .................................. 6
3. Typical Frequency Tracks for a Sound Signal ........................... 7
4. Block Diagram of Sinusoidal Synthesis ................................ 10
5. Time Warping with Fixed Rate Change ρ > 1 ............................ 11
6. Time-scale Expansion of Speech (a) Original (b) Expansion (ρ = 2) .... 14
7. Frequency Scaling of Speech (a) Original (b) Pitch-scaled (β = 2) .... 15
8. Sound Analysis/Synthesis Interactive Tool (a) Before Synthesis
   (b) After Synthesis .................................................. 20
9. Frequency Editing Tool Sample View (a) Original (b) After Short
   Frequency Tracks Have Been Eliminated ................................ 22
10. Frequency Editing Tool Sample View (a) Original (b) After a
    Frequency Range Has Been Eliminated ................................. 23
11. Frequency Editing Tool Sample View (a) Original (b) Modified ........ 24
12. Frequency Editing Tool Sample View (a) Original (b) After Frequency
    Track Frames Have Been Removed ...................................... 25
13. Frequency Editing Tool Sample View (a) Original (b) After Frequency
    Track Frames Have Been Repeated at Frame 30 ......................... 27
14. Frequency Editing Tool Sample View (a) Original (b) After Some
    Frequency Tracks Have Been Eliminated by Mouse Selecting
    (Dotted Lines) ...................................................... 28
15. Time-Scale Expansion of a Segment of Male Speech by a Factor of 2 ... 30
16. Frequency Scaling of a Segment of Male Speech by a Factor of 2 ...... 30





ACKNOWLEDGEMENTS 


Thanks first go to my thesis advisor, Professor Charles W. Therrien. He is always
patient in answering any question that I might have. This thesis could not be completed
without his excellent instruction. Thanks also to my good friends, Steve Bergman, Hakki
Celebioglu, Natanael Ruiz, James Scrofani, and Charles Victory; their good friendship and
sense of humor have made my two-year study experience much more endurable. Last, but
not least, I would like to thank my parents for their continuous encouragement and
support.





I. INTRODUCTION 


A. OVERVIEW 


Sinusoidal representation is a useful model for speech and sound analysis/synthesis.
It has been shown that the synthetic waveform preserves the general waveform shape and is 
perceptually indistinguishable from the original sound [Refs. 1,2,3]. However, in a number 
of applications it is required to transform a sound signal to a different waveform which is 
more useful than the original. For example, in time-scale modification of speech, the rate of 
articulation may be slowed down in order to make degraded speech more comprehensible. 
Alternatively, the sound can be speeded up so we can quickly scan a passage or compress it 
into a fixed time interval. In other applications, the sound is compressed or expanded in 
time or frequency. For instance, in music synthesis it is useful to change the length or pitch 
of a tone without changing its tonal quality or timbre. In all of these cases, it is desired to 
perform sound modification. This thesis implements a fixed time-scale modification and
frequency scaling based on the sinusoidal model [Ref. 4]. The above sound 
analysis/synthesis sinusoidal representations and transformations have been developed as 
two interactive tools with Graphical User Interface (GUI) using MATLAB. 

Since some frequency components in a signal are redundant (they may either 
correspond to the noise, or carry no information), we may not want to include them when 
we resynthesize the sound signal. In other cases, only a portion of the original signal needs 
to be regenerated, or a small part of the signal is required to be repeated at a specific time 
instant. In these applications, we need to have the ability to edit the frequency components 
of the synthetic signal. Thus, an interactive tool for signal editing based on the sinusoidal 


model is also presented in this thesis. 


B. THESIS OUTLINE 


The remainder of this thesis is organized as follows. Chapter II addresses the sine- 
wave speech model in two parts. First, the speech analysis/synthesis model based on a 
sinusoidal representation is presented [Ref. 1]. Following this, algorithms for fixed time- 
scale modification and frequency scaling based on the sinusoidal representation are 
introduced [Ref. 4]. Chapter III describes the implementation of the three interactive tools 
for sound analysis/synthesis with GUI. The methods implemented include signal editing, 
fixed time-scale modification, and frequency scaling. Results of their use are also shown 


here. Finally, Chapter IV gives conclusions and recommendations for future work. 


Il. ANALYSIS /SYNTHESIS BASED ON A SINUSOIDAL REPRESENTATION 


A. SINE-WAVE ANALYSIS / SYNTHESIS MODEL 


1. The Sinusoidal Representation 
A speech modeling technique developed by McAulay and Quatieri [Ref. 1] is based
upon a sinusoidal representation of the original waveform. In the general speech production 


model, the output waveform is assumed to be the result of passing the glottal excitation
e(t) through a linear time-varying filter h(t, τ) which models the vocal tract. The model
can be written as

   s(t) = ∫ e(τ) h(t, t − τ) dτ ,   (1)

and is depicted in Figure 1.


e(t) → h(t, τ) → s(t)


Figure 1. Model of Speech Production 


In the sinusoidal model the excitation is written as a sum of sinusoids with time-varying
amplitudes and phases, namely

   e(t) = Σ_{ℓ=1}^{L(t)} a_ℓ(t) cos[Ω_ℓ(t)] ,   (2)

where Ω_ℓ(t) is given by

   Ω_ℓ(t) = Ψ_ℓ(t) + φ_ℓ ,   (3)

with

   Ψ_ℓ(t) = ∫_{t_ℓ}^{t} ω_ℓ(σ) dσ .   (4)

In this model, t_ℓ is the onset time of the ℓth sine wave and L(t) is the number of sine-wave
components at time t. For the ℓth sine-wave component, a_ℓ(t) is the time-varying
amplitude, ω_ℓ(t) is the frequency, and Ω_ℓ(t) the phase corresponding to the ℓth sine wave.
The quantity Ψ_ℓ(t) is the time-varying contribution to the phase, while φ_ℓ is a fixed phase
offset needed since the sine-wave components for different indices ℓ are generally not
aligned.


If the vocal tract transfer function is written as

   H(ω, t) = M(ω, t) exp[jΦ(ω, t)] ,   (5)

then the output speech s(t) can be written as

   s(t) = Σ_{ℓ=1}^{L(t)} A_ℓ(t) cos[θ_ℓ(t)] ,   (6)

where

   A_ℓ(t) = a_ℓ(t) M_ℓ(t) ,   (7)

and

   θ_ℓ(t) = Ω_ℓ(t) + Φ_ℓ(t) ,   (8)

are the amplitude and phase of the ℓth sine-wave component corresponding to the frequency
ω_ℓ(t).

The sine wave model has been found to be useful for modeling other types of 
sounds besides speech. Other specific applications for this method have been in music 
synthesis [Ref. 1, 2] and in underwater acoustics [Ref. 3]. For these applications, the 
separation of the sound into excitation and system components as shown in Figure 1 may or 
may not be appropriate. Still, most of the basic ingredients of the model remain and can be 
applied in these applications. 
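The stationary special case of this model is easy to state concretely. The sketch below evaluates a sum of constant-parameter cosines, the form Eq. (6) takes when the amplitudes and frequencies are held fixed over a frame. (The tools in this thesis are MATLAB programs; Python/NumPy is used here purely for illustration.)

```python
import numpy as np

def synth_sum_of_sines(amps, freqs, phases, fs, n_samples):
    """Evaluate s(t) = sum_l A_l cos(2*pi*f_l*t + phi_l) for constant
    per-component parameters, a stationary special case of Eq. (6)."""
    t = np.arange(n_samples) / fs
    s = np.zeros(n_samples)
    for A, f, phi in zip(amps, freqs, phases):
        s += A * np.cos(2 * np.pi * f * t + phi)
    return s

# Two-component example: 200 Hz and 400 Hz partials, one 20 ms frame at 8 kHz.
s = synth_sum_of_sines([1.0, 0.5], [200.0, 400.0], [0.0, np.pi / 4], 8000, 160)
```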

2. Analysis

The purpose of the analysis step is to estimate the composite amplitudes, 


frequencies, and phases of the sine wave model. This can be done from the high-resolution 


short-time Fourier transform (STFT). The original analysis method proposed by McAulay
and Quatieri [Ref. 1] uses a purely sine-wave-based model (i.e., the excitation and system
contributions of each sine-wave component are not explicitly represented). In subsequent
work, a new analysis procedure was developed which separates the vocal cord
excitation and vocal tract system contributions as described above. Since we are interested
in more general types of sounds rather than just speech, the original analysis method along
with the modified amplitude and phase representations is used. Thus, we account only for
the model of the vocal cord excitation contribution and ignore the vocal tract system
contribution. With this simplification, Eq. (7) and Eq. (8) become
   A_ℓ(t) = a_ℓ(t) ,   (9)

and

   θ_ℓ(t) = Ω_ℓ(t) .   (10)
The analysis proceeds as follows. First, the data is sectioned into frames of equal
length for spectral analysis, and a Hamming window is applied before taking the Fourier
transform. Frames are formed at an interval of less than the frame length, allowing for
overlap of data. For a speech signal, a frame length of 20 ~ 30 milliseconds and an overlap
interval of 10 ~ 15 milliseconds are recommended [Ref. 1]. If the Fourier transform of the
windowed speech segment is written as S(ω, kR), then the frequencies of e(t) in Eq. (2) at
time kR (i.e., the kth analysis frame) are chosen to correspond to the L(kR) largest peaks in
the magnitude of the short-time Fourier transform, |S(ω, kR)|. The locations of the largest
peaks are estimated by looking for a change of slope from positive to negative of the
Fourier transform magnitude.
If we denote the frequency estimate of the ℓth sinusoidal component at the kth
analysis frame by ω_ℓ^k = ω_ℓ(kR), then the amplitudes and phases of the sine-wave
components are given by the samples of S(ω, kR) at the specific frequency positions. In
other words, the amplitudes and phases are written as

   a_ℓ^k = |S(ω_ℓ^k, kR)| ,   (11)

and

   Ω_ℓ^k = arg[S(ω_ℓ^k, kR)] ,   (12)

where "arg" denotes the principal phase value. A block diagram of the analysis scheme is
given in Figure 2.


[Figure: input signal → Window → |DFT| → Peak Picking → Frequencies, Amplitudes, Phases]

Figure 2. Block Diagram of Sinusoidal Analysis
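The per-frame analysis just described (Hamming window, DFT magnitude, peak picking at positive-to-negative slope changes) can be sketched as follows. Python/NumPy is used for illustration only, and the simple relative-threshold floor is an assumption rather than the tool's exact rule:

```python
import numpy as np

def pick_peaks(frame, nfft=1024, thresh_db=80.0):
    """One analysis frame: Hamming window, DFT, then peak picking where
    the magnitude slope changes from positive to negative.
    Returns (bin indices, amplitudes, phases)."""
    w = np.hamming(len(frame))
    S = np.fft.rfft(frame * w, nfft)
    mag = np.abs(S)
    floor = mag.max() * 10 ** (-thresh_db / 20)   # ignore tiny sidelobes
    d = np.diff(mag)
    # slope changes from + to - at bin i when d[i-1] > 0 and d[i] <= 0
    peaks = [i for i in range(1, len(mag) - 1)
             if d[i - 1] > 0 and d[i] <= 0 and mag[i] > floor]
    return peaks, mag[peaks], np.angle(S[peaks])

fs = 8000
t = np.arange(200) / fs                  # a 25 ms frame, as in the text
frame = np.cos(2 * np.pi * 500 * t) + 0.4 * np.cos(2 * np.pi * 1200 * t)
bins, amps, phases = pick_peaks(frame)
freqs = [b * fs / 1024 for b in bins]    # bin index -> Hz
```

The two sinusoids at 500 Hz and 1200 Hz appear among the detected peaks; window sidelobes may contribute additional small peaks, which is exactly why the birth/death matching described next is needed.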





The number of peaks is not constant from frame to frame in general, and there will
be spurious peaks due to the effects of window sidelobe interaction. In addition, the
locations of the peaks will change as the pitch changes, and rapid changes in both the
location and number of peaks often occur in certain regions of the sound signal. In order to
account for such movements in the spectral peaks, the concept of "birth" and "death" of
sinusoidal components is introduced here. Suppose that the peaks up to frame k have been
matched and a new parameter set for frame k + 1 is generated. We now attempt to match
each frequency ω_ℓ^k in frame k to the frequencies in frame k + 1. If all frequencies
ω_m^{k+1} in frame k + 1 lie outside a "matching interval" of ω_ℓ^k, then the frequency
track associated with ω_ℓ^k is declared "dead" on entering frame k + 1. When all
frequencies of frame k have been tested and assigned to continuing tracks or to dying
tracks, there may remain frequencies in frame k + 1 for which no matches have been made.
It is assumed that such frequencies ω_m^{k+1} were "born" in frame k, and a new frequency
ω_m^k is created in frame k with zero magnitude. This procedure is done for all unmatched
frequencies. Further details of this "birth" and "death" matching procedure can be found
in [Ref. 1].
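One plausible rendering of this frame-to-frame matching is the greedy sketch below. The thesis measures the matching interval in frequency bins and the tie-breaking rules of [Ref. 1] are more elaborate, so treat this as illustrative only:

```python
def match_tracks(prev, curr, delta):
    """Greedy nearest-neighbour matching between two frames: each
    frequency in `prev` either continues to the closest unused frequency
    in `curr` within `delta`, or its track 'dies'; any unmatched `curr`
    frequency starts a 'born' track."""
    unused = set(range(len(curr)))
    continued, dead = [], []
    for i, f in enumerate(prev):
        cands = [j for j in unused if abs(curr[j] - f) <= delta]
        if cands:
            j = min(cands, key=lambda j: abs(curr[j] - f))
            unused.remove(j)
            continued.append((i, j))
        else:
            dead.append(i)
    born = sorted(unused)
    return continued, dead, born

# 800 Hz dies, 2200 Hz is born, the other two tracks continue.
cont, dead, born = match_tracks([400.0, 800.0, 1500.0],
                                [410.0, 1490.0, 2200.0], delta=50.0)
```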

The result of applying this method to a segment of a sound signal is shown in Figure 
3. Each horizontal line represents a particular frequency component which is present for 
some number of frames. These lines are called “frequency tracks.” The frequency tracks 
demonstrate the ability of the method to adapt quickly through the transitory regions such as 
voiced/unvoiced transitions in speech. Typically there are many very short frequency tracks. 
Some of these may not contribute significantly to the general structure of the waveform but 
merely serve to match small details. As will be seen later, the editing tools developed in this 
thesis allow one to eliminate many of these shorter frequency tracks and thus simplify the 


sinusoidal model for the signal. 


[Figure: "Frequency tracks for signal", frequency (Hz) from 0 to 4000 versus frame number from 0 to 80]


Figure 3. Typical Frequency Tracks for a Sound Signal 


3. Synthesis

Sound signal synthesis from the sine-wave parameters begins with matching the
amplitude and phase samples in Eq. (11) and Eq. (12) of each sine wave computed at
consecutive frame boundaries. This is followed by interpolation of the resulting pairs of
amplitude and phase samples of the signal over each frame. The interpolation of parameters
is based on the assumption that the signal is "slowly varying" across each frame and that the
frequencies of the sine waves form smooth frequency tracks ω_ℓ(t). This constraint allows
us to interpolate samples over a frame duration. If linear interpolation is used for the
amplitude, the amplitude estimate â_ℓ(t) over the kth frame is given by

   â_ℓ(t) = a_ℓ^k + (a_ℓ^{k+1} − a_ℓ^k) t / T ,   (13)

where a_ℓ^k and a_ℓ^{k+1} are a successive pair of excitation amplitude estimates for the ℓth
frequency track, T is the frame duration, and t ∈ [0, T] is the time into the kth frame.
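Eq. (13) is a one-line computation per track; a minimal sketch:

```python
import numpy as np

def interp_amplitude(a_k, a_k1, T, n):
    """Eq. (13): linear amplitude interpolation across one frame,
    a_hat(t) = a_k + (a_k1 - a_k) * t / T, sampled at n points."""
    t = np.arange(n) * (T / n)
    return a_k + (a_k1 - a_k) * t / T

# Amplitude ramping from 1.0 to 3.0 across a 25 ms frame.
a = interp_amplitude(1.0, 3.0, T=0.025, n=100)
```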

This simple linear interpolation procedure cannot be used for estimating the phase
and frequency of the sinusoid over a frame, however. This is because the phase Ω_ℓ^k may
contain discontinuities of 2π, since the phase of S(ω, kR) in Eq. (12) is measured modulo
2π. Hence, phase unwrapping must be performed for interpolation of the excitation phase to
ensure that the frequency tracks are sufficiently "smooth" across the frame boundaries. A
cubic polynomial for solving this problem was first proposed in [Ref. 1] for sine-wave-based
synthesis. For the duration of a single frame the estimate is defined as

   Ω̂_ℓ(t) = a + bt + ct² + dt³ ,   (14)

with t = 0 corresponding to frame k and t = T corresponding to frame k + 1. The
instantaneous frequency is then the derivative of the phase, namely

   ω̂_ℓ(t) = (d/dt) Ω̂_ℓ(t) = b + 2ct + 3dt² .   (15)

In order to provide a good synthetic waveform, it is necessary that the cubic phase function
and its derivative equal the excitation phase and frequencies measured at the frame
boundaries. By using the algorithms in [Ref. 1], the resulting phase function not only
matches the phase at the frame boundaries, but also resolves the 2π phase discontinuities.
Details of phase unwrapping and cubic interpolation can be found in [Ref. 1].
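The boundary conditions pin the cubic of Eq. (14) down completely: a and b come directly from the phase and frequency at the left boundary, and c, d follow from a 2-by-2 linear solve at the right boundary. A sketch, with the integer wrap count M fixed as an argument (whereas [Ref. 1] chooses it to make the track maximally smooth):

```python
import numpy as np

def cubic_phase_coeffs(th_k, w_k, th_k1, w_k1, T, M=0):
    """Coefficients (a, b, c, d) of Eq. (14) so that the cubic phase and
    its derivative (Eq. (15)) match the measured phases and frequencies
    at both frame boundaries; M is the number of extra 2*pi wraps."""
    a, b = th_k, w_k
    # Solve for c, d from:
    #   a + b*T + c*T^2 + d*T^3 = th_k1 + 2*pi*M
    #   b + 2*c*T + 3*d*T^2     = w_k1
    rhs1 = th_k1 + 2 * np.pi * M - a - b * T
    rhs2 = w_k1 - b
    c = 3 * rhs1 / T ** 2 - rhs2 / T
    d = rhs2 / T ** 2 - 2 * rhs1 / T ** 3
    return a, b, c, d

# A constant 100 Hz track needs no cubic correction: c and d vanish.
T = 0.01
a, b, c, d = cubic_phase_coeffs(0.0, 2 * np.pi * 100,
                                2 * np.pi * 100 * T, 2 * np.pi * 100, T)
```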

It was noted earlier that the phase estimate over the kth frame can be written in terms
of a time-varying term and a constant. Specifically, from Eq. (3) and Eq. (4),

   Ω_ℓ(t) = ∫_{t_ℓ}^{t} ω_ℓ(σ) dσ + φ_ℓ
          = ∫_{t_ℓ}^{0} ω_ℓ(σ) dσ + ∫_{0}^{t} ω_ℓ(σ) dσ + φ_ℓ ,   (16)

where the time origin (t = 0) is taken to be at the beginning of the current frame and t_ℓ is
the onset time of the ℓth sine wave. Let Σ_ℓ^k denote the phase due to the time-varying
frequency accumulated up to frame k; that is,

   Σ_ℓ^k = ∫_{t_ℓ}^{0} ω_ℓ(σ) dσ .   (17)

If Ṽ_ℓ(t) denotes the phase due to the time-varying frequency accumulated over frame k;
that is,

   Ṽ_ℓ(t) = ∫_{0}^{t} ω_ℓ(σ) dσ ,   (18)

then the excitation phase can be written as

   Ω_ℓ(t) = Ṽ_ℓ(t) + Σ_ℓ^k + φ_ℓ .   (19)

The resulting excitation phase function Ω_ℓ(t) consists of a constant component and
a time-varying portion. The constant component consists of two parts: the phase offset
estimate φ_ℓ, and the accumulated phase component Σ_ℓ^k, which can be obtained
recursively as

   Σ_ℓ^{k+1} = Σ_ℓ^k + Ṽ_ℓ(T) .   (20)

The interpolated amplitudes and phases are used to generate sinusoids which are then
summed to generate the output sound signal. The final synthetic waveform is written as

   ŝ(t) = Σ_{ℓ=1}^{L(t)} â_ℓ(t) cos[Ω̂_ℓ(t)] ,   (21)

where

   Ω̂_ℓ(t) = Ṽ_ℓ(t) + Σ_ℓ^k + φ_ℓ .   (22)


A block diagram of the synthesis structure is given in Figure 4. 


[Figure: phases → frame-to-frame phase unwrapping and interpolation → Ω̂_ℓ(t) → sine-wave generator; amplitudes → frame-to-frame linear interpolation → â_ℓ(t); sum all sine waves → synthetic signal]

Figure 4. Block Diagram of Sinusoidal Synthesis 


B. TIME-SCALE AND FREQUENCY TRANSFORMATION 


1. Fixed Rate Change

The goal of time-scale modification is to maintain the perceptual quality of the 
original sound while changing the apparent rate of sound production. In speech, the 
technique is used to synthesize speech corresponding to a person speaking more rapidly or 
slowly without changing the quality of the person’s voice. The scheme illustrated here is 
based upon the algorithm developed by Quatieri and McAulay with slight simplification
[Ref. 4]. Although the authors proposed both fixed rate change and time-varying rate 


change, only the fixed rate change is performed here. 


For a fixed time-scale transformation, the time t corresponding to the original
sound production rate is mapped to the transformed time t′ through the mapping t′ = ρt.
The case ρ > 1 corresponds to time-scale expansion, while the case ρ < 1 corresponds to
time-scale compression. The case of time-scale expansion is depicted in Figure 5.


[Figure: the line t′ = ρt in the (t, t′) plane]


Figure 5. Time Warping with Fixed Rate Change p > 1 


In the sine-wave model discussed here, the parameters which are scaled are the
model amplitudes, frequencies, and phases. The model parameters are modified so that the
frequency tracks ω_ℓ(t) are stretched or compressed in time while the value of ω_ℓ(t),
which corresponds to pitch, is maintained. The mathematical model for the fixed time-scale
modified sound s′(t′) is then given by

   s′(t′) = Σ_{ℓ=1}^{L(t′)} a′_ℓ(t′) cos[Ω′_ℓ(t′)] ,   (23)

where

   a′_ℓ(t′) = a_ℓ(ρ⁻¹ t′) ,   (24)

and

   Ω′_ℓ(t′) = ∫_{t′_ℓ}^{t′} ω_ℓ(ρ⁻¹ τ) dτ + φ_ℓ .   (25)

Letting σ = ρ⁻¹ τ, then Ω′_ℓ(t′) can be written as

   Ω′_ℓ(t′) = ∫ ω_ℓ(σ) dσ / ρ⁻¹ + φ_ℓ
            = Ṽ_ℓ(ρ⁻¹ t′) / ρ⁻¹ + Σ_ℓ^k + φ_ℓ .   (26)

Since these model parameters are derived on a frame-by-frame basis, we can think of the
inverted time ρ⁻¹ t′ as the time into the kth frame within the original time scale. Therefore,
the fixed time-scale synthetic waveform can be obtained as

   ŝ′(t′) = Σ_{ℓ=1}^{L(t′)} â′_ℓ(t′) cos[Ω̂′_ℓ(t′)] ,   (27)

where

   â′_ℓ(t′) = â_ℓ[(ρ⁻¹ t′)_T] ,   (28)

and

   Ω̂′_ℓ(t′) = Ṽ_ℓ[(ρ⁻¹ t′)_T] / ρ⁻¹ + Σ_ℓ^k + φ_ℓ ,   (29)

with Σ_ℓ^k computed recursively as

   Σ_ℓ^{k+1} = Σ_ℓ^k + Ṽ_ℓ(T) / ρ⁻¹ .   (30)

The notation ( )_T in Eq. (28) denotes modulo T, which is the original frame duration.
Figure 6 illustrates an example in which a segment of male speech is expanded by a
factor of 2.
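A drastically simplified sketch of the time-scale change follows: per-frame track parameters are replayed over frames of duration ρT while the frequency values, and hence the pitch, are left untouched, with the accumulated phase updated in the spirit of Eq. (30). Frequencies are held constant within each frame here, whereas the thesis uses cubic phase interpolation across frame boundaries:

```python
import numpy as np

def time_scale_tracks(amps, freqs, phis, T, rho, fs):
    """Replay per-frame amplitudes amps[k][l] and frequencies freqs[k][l]
    (Hz, constant within each frame for simplicity) over stretched frames
    of duration rho*T, accumulating phase per track across frames."""
    n = int(round(rho * T * fs))          # samples per output frame
    sigma = list(phis)                    # accumulated phase per track
    out = []
    for a_k, f_k in zip(amps, freqs):
        t = np.arange(n) / fs
        frame = np.zeros(n)
        for l, (a, f) in enumerate(zip(a_k, f_k)):
            frame += a * np.cos(2 * np.pi * f * t + sigma[l])
            sigma[l] += 2 * np.pi * f * n / fs   # Eq. (30) analogue
        out.append(frame)
    return np.concatenate(out)

# Two 20 ms frames of a 250 Hz track, expanded by rho = 2.
y = time_scale_tracks([[1.0], [1.0]], [[250.0], [250.0]], [0.0],
                      T=0.02, rho=2.0, fs=8000)
```

Because the accumulated phase is carried across frames, the waveform stays continuous at the stretched frame boundary.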

2. Frequency Scaling

The sound can be changed in pitch by performing frequency scaling. This is
accomplished by taking the synthesized phase to be

   Ω′_ℓ(t) = ∫ β ω_ℓ(τ) dτ + φ_ℓ
           = β Ṽ_ℓ(t) + φ_ℓ ,   (31)

where β is the scaling factor for each frequency track ω_ℓ(t). The operation performed here
is equivalent to shifting the frequency tracks to new locations. The resulting modified
waveform over the kth frame is given by

   ŝ′(t) = Σ_{ℓ=1}^{L(t)} â_ℓ(t) cos[Ω̂′_ℓ(t)] ,   (32)

where

   Ω̂′_ℓ(t) = β Ṽ_ℓ(t) + Σ′_ℓ^k + φ_ℓ ,   (33)

with Σ′_ℓ^k computed recursively as

   Σ′_ℓ^{k+1} = Σ′_ℓ^k + β Ṽ_ℓ(T) .   (34)

This waveform modification corresponds to an expansion or compression of frequency and
a change in pitch. Figure 7 illustrates an example in which the pitch of male speech is
scaled by a factor of 2.
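Frequency scaling itself is just multiplication of every track frequency by β. The sketch below adds one assumption not stated in the text: tracks scaled past the Nyquist frequency are simply dropped.

```python
def frequency_scale(freqs_hz, beta, fs):
    """Scale every frequency track by beta, shifting the tracks (and the
    pitch) without changing the time scale; tracks scaled past the
    Nyquist frequency fs/2 are dropped (an assumption made here)."""
    scaled = [beta * f for f in freqs_hz]
    return [f for f in scaled if f < fs / 2]

# Doubling the pitch at fs = 8000 Hz; the 3500 Hz track scales past Nyquist.
tracks = frequency_scale([200.0, 400.0, 600.0, 3500.0], beta=2.0, fs=8000)
```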








Figure 6. Time-scale Expansion of Speech (a) Original (b) Expansion (ρ = 2)

Figure 7. Frequency Scaling of Speech (a) Original (b) Pitch-scaled (β = 2)





Il. IMPLEMENTATION OF THE INTERACTIVE TOOLS 


User interfaces have been moving from a command-line orientation to designs that
include graphic features. Over the years, graphical user interfaces (GUIs) have
grown in popularity. GUIs use push buttons, editable boxes, and other graphical controls
which can be activated with a mouse to select various options and execute commands
[Ref. 5]. The purpose here was to develop interactive user interface tools which can be used
to perform the sound analysis/synthesis, frequency track editing, and sound transformations
based on the sinusoidal representation. The use of these GUIs relieves the user of the need to
memorize a large number of textual commands, and allows him/her to see the results almost
immediately. The interactive tools described here were developed on Unix workstations,
and require MATLAB version 4.2c as well as its Signal Processing Toolbox. Although
MATLAB provides the necessary support for the GUI on both Unix and IBM PC-
compatible platforms, some modifications will need to be made if the user wants to use
these tools on IBM-compatible PCs.


A. THE SOUND ANALYSIS / SYNTHESIS INTERACTIVE TOOL 


An interactive sound analysis/synthesis tool based on sinusoidal representation 
model is described in this section. This tool allows the user to analyze an existing sound 
waveform, extract the parameters that represent a quasi-stationary portion of that waveform, 
and then use those parameters to reconstruct an approximation that is “very close” to the 
original signal. In other words, the algorithms behind this tool contain two parts, namely, 
analysis and synthesis. When this tool is invoked, the user must indicate a sound signal as 
an input argument in the associated .m function; the signal will be drawn in the top portion 
of the window as shown in Figure 8 (a) and labeled “Original Signal.” After loading the 
signal into the workspace, the user needs to provide some important values which are used 


in the signal analysis and synthesis algorithms. 




The first value is the sampling frequency for the analysis and synthesis procedures,
which should be the same as the value used in digitizing the original sound signal. For all
cases discussed in this thesis, a sampling frequency of 8,000 Hz is used. This is close to the
actual value of 8,192 Hz used in the SUN Unix workstations.

The next values to be entered are the windowed frame length and overlap width. A 
windowed frame greater than 20 milliseconds is sufficient for generating a good quality 
synthetic waveform according to [Ref. 1]; this corresponds to 160 points if the sampling 
frequency is 8,000 Hz. A 50% overlap of the frame is recommended for this sinusoidal 
representation model, which would result in a frame overlap width of 80 points. The default 
values for the windowed frame length and overlap width in this tool are 200 and 100 points, 
respectively. These two user-input values are shown in the second and third editable boxes 
in Figure 8 (a) and (b). 

The fourth parameter is the threshold level (in dB), which allows the user to limit
the maximum number of peaks detected over a frame. The typical range for this threshold
value is from 60 to 90 dB. A default value of 80 dB has been used throughout the
experiments. In general, the performance will not be affected much by the choice of this
threshold level unless too few peaks are allowed.

The concept of "birth" and "death" of sinusoidal components was described earlier in
Chapter II and is used to account for rapid changes in both the number and location
of spectral peaks. The fifth input value in this tool is the frequency interval used while the
frame-to-frame peak matching procedure is performed. It indicates the number of frequency
bins that the frequencies on two successive frames can deviate and still be considered
"matched." A value of 10 has been set as a default.

The last input value is the number of points used in computing the discrete
Fourier transform (DFT) of each frame. Typically, 512 to 1024 points should be enough for
generating the synthetic signal if the frame length does not exceed 500 points. In this thesis,


all experiments were done using 1024 points. 
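The six user-entered values might be grouped as below. The names are hypothetical, not the tool's actual variable names, and the defaults are the ones quoted in the text; the final checks simply restate the recommendations above:

```python
# Hypothetical grouping of the six analysis/synthesis inputs with the
# defaults quoted in the text (names are illustrative, not the tool's).
analysis_params = {
    "fs": 8000,        # sampling frequency, Hz
    "frame_len": 200,  # windowed frame length, samples (25 ms at 8 kHz)
    "overlap": 100,    # overlap width, samples (50 %)
    "thresh_db": 80,   # peak-detection threshold, dB
    "match_bins": 10,  # frame-to-frame frequency matching interval, bins
    "nfft": 1024,      # DFT length per frame
}

# Consistency checks one might apply before running the analysis:
frame_ms = 1000 * analysis_params["frame_len"] / analysis_params["fs"]
assert frame_ms >= 20                      # [Ref. 1] recommends > 20 ms
assert analysis_params["overlap"] <= analysis_params["frame_len"]
assert analysis_params["nfft"] >= analysis_params["frame_len"]
```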




Having entered these values the user now is ready to do the sound signal 
analysis/synthesis. The “Synthesize” push button activates both the analysis and the 
synthesis functions. In other words, when the user presses the “Synthesize” button, the tool 
extracts the waveform parameters first, which are required for the model, and then passes 
those parameters to the synthesis system so that the synthetic waveform will be generated. 
The result after entering the parameters and pressing the “Synthesize” button is as shown on
the bottom portion in Figure 8 (b). 

Of the six user-input values above, only the first three (i.e., sampling
frequency, windowed frame length, and overlap width) are essential and case-dependent for
the sound signal analysis/synthesis. It is usually not necessary to change the other three
values (i.e., threshold level, frequency matching interval, and DFT points). Additionally,
push buttons are available to “play” (i.e., listen to) both the original and
synthesized sound signals using the platform's audio output. For users unfamiliar
with this interactive tool, an on-line help function is available by pressing the
“Help” push button.


B. THE FREQUENCY TRACK EDITING INTERACTIVE TOOL 


The frequency track editing tool allows the user to “edit” the frequency components of a
signal by entering appropriate parameters or by using a pointing device such
as a mouse. In a number of applications, it is desired to make the synthetic signal fit within a
specific time interval, or to eliminate short frequency tracks to simplify the model.
Frequently, some of these short tracks serve only to match the “detail” in the original
waveform, and removing them will not change the general characteristics of the sound.

Since the results of the sinusoidal representation model are very robust, it is not
necessary to reconstruct the signal with all of the frequency components extracted from
the original waveform. This interactive tool offers five frequency track editing functions for
users who wish to generate different types of synthetic signals.







Figure 8. Sound Analysis/Synthesis Interactive Tool (a) Before Synthesis 
(b) After Synthesis 




The first option provided by this tool allows the user to eliminate all frequency
tracks shorter than some specified length. After these tracks are eliminated, the associated
signal is resynthesized from the new frequency tracks. Figure 9 illustrates an
example in which all frequency tracks less than 20 frames in length are removed. Notice
that there is no large difference between the “original” synthesis waveform in (a) and the
“new” synthesis waveform in (b), although the underlying model has been considerably
simplified.
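Dropping the short tracks amounts to a simple length filter over the model's track list. The sketch below is a hedged Python illustration (the track representation and the name `drop_short_tracks` are assumptions, not the thesis's MATLAB code):

```python
def drop_short_tracks(tracks, min_frames=20):
    """Keep only frequency tracks that span at least min_frames analysis
    frames; each track is a list of per-frame frequencies (Hz)."""
    return [t for t in tracks if len(t) >= min_frames]
```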

In many signal processing applications, users need to design different kinds of
filters, such as low-pass, high-pass, or band-pass filters, in order to eliminate unwanted
frequency components and preserve the specific range of frequencies that carries the
information they need. The second option offered by this tool allows the user to implement
such filters very easily, simply by indicating the range of frequencies to be removed.
The example in Figure 10 illustrates the result of eliminating frequencies in the range from
3,000 Hz to 4,000 Hz, which corresponds to passing the signal through a low-pass filter,
and the resulting “new” synthetic waveform. In this case there are not many long tracks of
high-frequency components in the range of frequencies removed, so there is only a slightly
noticeable effect on the waveform. The elimination of the higher frequencies is most
apparent when listening to, or “playing,” the sound.
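Because the model is a set of frequency tracks, such filtering reduces to discarding tracks that fall in the rejected band. A minimal Python sketch (hypothetical names; one of several possible criteria, here using each track's mean frequency):

```python
def reject_band(tracks, f_lo, f_hi):
    """Drop every track whose mean frequency falls inside [f_lo, f_hi] Hz,
    mimicking a band rejection applied directly to the sinusoidal model."""
    def mean(t):
        return sum(t) / len(t)
    return [t for t in tracks if not (f_lo <= mean(t) <= f_hi)]
```

Setting the rejected band to run from some cutoff up to half the sampling frequency gives the low-pass behavior described above.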

Another example is shown in Figure 11 (b) where all frequency tracks less than 20 
frames in length are removed, followed by eliminating the frequency range from 3,000 Hz 
to 4,000 Hz. Figure 11 (a) again shows the original frequency track plot and associated 
synthetic waveform. 

In some cases, it is desired to regenerate the signal using only part of the original
signal. Thus, we may need to “cut” a small region in time of the frequency tracks,
and then regenerate the signal. This tool allows the user to indicate the frame range to
be cropped. An example is shown in Figure 12, where frames 30 to 40 have been cut,
thereby eliminating some of the “silent region” between the two major portions of the






Figure 9. Frequency Editing Tool Sample View (a) Original (b) After Frequency 
Tracks of Length < 20 Frames Have Been Eliminated 









Figure 10. Frequency Editing Tool Sample View (a) Original (b) After Frequency 
Range From 3,000 Hz to 4,000 Hz Has Been Eliminated 














Figure 11. Frequency Editing Tool Sample View (a) Original (b) Modified 




















Figure 12. Frequency Editing Tool Sample View (a) Original (b) After Frequency
Tracks Frames From 30 to 40 Have Been Cut




sound. The resynthesized waveform is also shown in the top portion of the window in 
Figure 12 (b). 
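The cropping operation above can be sketched as deleting a frame range from each track and closing the gap so time stays contiguous. A hedged Python illustration (representing one track as a `{frame_index: frequency}` mapping is an assumption for clarity, not the thesis's data layout):

```python
def cut_frames(track, first, last):
    """Remove frames first..last (inclusive) from a track stored as
    {frame_index: freq_hz}, shifting later frames down to fill the gap."""
    span = last - first + 1
    out = {}
    for frame, hz in track.items():
        if frame < first:
            out[frame] = hz           # before the cut: unchanged
        elif frame > last:
            out[frame - span] = hz    # after the cut: shifted earlier
    return out                        # frames inside the cut are dropped
```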

In other situations, the user may wish to move or repeat a small portion of the sound
signal at a specific time instant. This can also be done using the frequency track editing
tool. The user inputs the range of frames to be repeated and the position in time
where the frames are to be inserted. Figure 13 illustrates an example in which frames
30 to 40 are copied and re-inserted at frame 30. In this case, the result is an increase in the
“silent region” between the two major portions of the sound. The resynthesized signal is
shown in the top portion of the window in Figure 13 (b).
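The copy-and-insert operation can be sketched as the inverse of the crop above: shift everything from the insertion point later to make room, then write the copied frame range into the gap. Again a hedged Python illustration with an assumed `{frame_index: frequency}` track layout:

```python
def repeat_frames(track, first, last, at):
    """Copy frames first..last (inclusive) and re-insert the copy starting
    at frame `at`, shifting frames from `at` onward later in time."""
    span = last - first + 1
    out = {}
    for frame, hz in track.items():
        out[frame if frame < at else frame + span] = hz  # make room
    for frame, hz in track.items():
        if first <= frame <= last:
            out[at + (frame - first)] = hz               # write the copy
    return out
```

Copying frames 30 to 40 back in at frame 30, as in Figure 13, lengthens that region by eleven frames while leaving the rest of the track intact.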

Sometimes, it is desired to remove specific longer frequency tracks after most
of the shorter tracks have been removed. Often this is not possible using the methods
described previously. A handy mouse-based frequency track editing function was
developed for this purpose. The user activates this editing function by pressing the
“START” push button at the bottom right corner of the window. The cursor changes from
an arrow to cross-hairs, indicating that the editing function is active. The user then places the
cross-hair cursor on a frequency track to be eliminated and “selects” that track with the left
mouse button. Every selected frequency track changes from its normal solid yellow color to
a red dotted line. The unaffected frequency components are saved and used to generate
the new sound signal.

Users may select as many frequency tracks as they wish to eliminate. The
“new modified” synthetic signal is not generated until the user has finished the
track selection step. When the user has finished selecting frequency components, he or she
presses the right mouse button in the plot region and then presses the “OK” button to synthesize
the waveform. An example of the use of this function is shown in Figure 14. In
Figure 14 (a), all frequency tracks less than 20 frames in length were initially removed.
Figure 14 (b) shows the result after three additional specific frequency tracks, one at
approximately 3,700 Hz, one at 2,300 Hz, and one near 0 Hz, have been eliminated. The






Figure 13. Frequency Editing Tool Sample View (a) Original (b) After Frequency 
Tracks Frames From 30 to 40 Have Been Repeated at Frame 30 






Figure 14. Frequency Editing Tool Sample View (a) Original (b) After Some 
Frequency Tracks Have Been Eliminated by Mouse Selecting (Dotted Lines) 




tracks eliminated are shown as dotted lines, and the waveform shown in Figure 14 (b) is the
result of resynthesis after the removal of these tracks.


C. TIME AND FREQUENCY SCALING INTERACTIVE TOOL


The time and frequency scaling tool provides a means of generating an expansion
or compression of a sound signal in the time domain, as well as a change in spectral
envelope and pitch contour, according to the methods described in Chapter II (B). Two
options are offered by this tool. The first is time-scale modification, in which the new sound
signal is expanded or compressed depending on the value the user inputs: the signal is
expanded if the value is greater than 1 and compressed if the value is less than 1. The
modified signal is generated automatically right after the user inputs the time-scale factor.
An example is illustrated in the top portion of the window shown in Figure 15, where a
segment of male speech is expanded by a factor of 2. In this case, the rate of articulation has
been slowed down while the perceptual quality of the original sound is maintained.
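One way to picture time-scale modification in the sinusoidal model is as resampling each track's parameter envelopes onto more (or fewer) synthesis frames while leaving the frequency values themselves untouched, so the pitch is preserved. The sketch below is a simplified Python illustration under that assumption, not the thesis's MATLAB implementation:

```python
import numpy as np

def time_scale(amps, freqs, factor):
    """Stretch (factor > 1) or compress (factor < 1) one track by
    resampling its amplitude and frequency envelopes onto
    round(n * factor) frames; frequencies are unchanged in value."""
    n = len(amps)
    m = max(2, round(n * factor))
    t_old = np.arange(n)
    t_new = np.linspace(0, n - 1, m)
    return np.interp(t_new, t_old, amps), np.interp(t_new, t_old, freqs)
```

A factor of 2 doubles the number of frames a track occupies, slowing articulation without shifting pitch, as observed in Figure 15.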

If the user wishes to perform a frequency transformation, the second option of
this tool can be invoked. The user can raise the pitch of the synthesized signal by
entering a pitch-scaling factor greater than 1, or lower the pitch by entering a value
less than unity. Both the time- and pitch-scaling factors have been limited to the
range from 0.1 to 2, since values outside this range generally produce poor
results. An example of frequency scaling by a factor of 2 for male speech is depicted in the
bottom portion of the window shown in Figure 16. The resulting speech sounds like a
young boy's voice, since the pitches of children's voices are in general higher than those
of adults.






Figure 15. Time-Scale Expansion of a Segment of Male Speech by a Factor of 2 













Figure 16. Frequency Scaling of a Segment of Male Speech by a Factor of 2 




IV. CONCLUSIONS 


A. DISCUSSION OF RESULTS 


In this thesis, the sinusoidal representation model for sound signals by McAulay and
Quatieri is described and is used in an analysis/synthesis technique based on the amplitudes,
frequencies, and phases of the excitation contributions of the sine-wave components. In the
analysis step, the data is first sectioned into frames and the discrete Fourier transform
(DFT) is applied to each frame. The peaks in the resulting spectrum determine the
frequencies of the sinusoids used in the model; these are “tracked” through
successive frames. The amplitudes and phases of the sinusoids are given by the appropriate
samples of the DFT corresponding to those peak frequencies.

In the synthesis step, these amplitude and phase functions are applied to the sine-
wave generator, which sums all sinusoidal components to produce the synthetic signal
output. We have found that this model reproduces the sound very accurately and confirms
the authors' claim that the synthesized sound is “perceptually indistinguishable” from the
original sound [Ref. 1].
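The sum-of-sinusoids generator can be sketched as follows. This is a deliberately simplified Python illustration: it holds each parameter constant over the frame, whereas the actual system interpolates amplitude and phase across frame boundaries to avoid discontinuities.

```python
import numpy as np

def synthesize_frame(amps, freqs_hz, phases, fs, n):
    """Produce n output samples by summing the modelled sinusoids,
    with per-frame constant amplitude, frequency, and starting phase."""
    t = np.arange(n) / fs
    out = np.zeros(n)
    for a, f, ph in zip(amps, freqs_hz, phases):
        out += a * np.cos(2 * np.pi * f * t + ph)
    return out
```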

Functional relationships for each of the sine-wave parameters have been developed 
by the original authors that allow the synthesis system to perform a variety of sound signal 
transformations, such as time-scale modification and frequency scaling implemented in this 
thesis [Ref. 4]. These were also found to be effective. 

All of the above sound analysis/synthesis sinusoidal representations and
transformations were developed as two interactive tools with graphical user interfaces
(GUIs) using MATLAB. In addition, an interactive tool for signal frequency track editing
based on the sinusoidal model was implemented so that users can simplify the model and
reduce its complexity. Examples of the use of these tools are presented in this thesis.




B. SUGGESTIONS FOR FUTURE STUDY 


In the sine-wave-based time and frequency scaling modification system, the
modified synthetic waveforms are good in perceptual quality, but some structure of the
original waveform is lost. A new sine-wave-based speech modification algorithm called the
“Shape Invariant” technique, which is able to maintain the temporal structure of the original
waveform, has been proposed by the original authors [Ref. 6]. It would be worthwhile
to implement this new technique and incorporate it into the existing GUI.

Although this sinusoidal representation model produces very accurate results, it is
demanding in terms of computation. Another worthwhile endeavor would be to improve the
computational performance of the sine-wave-based modification system. MATLAB
executes loop code slowly because it is an interpreted language, and the implementation of
the modification system uses several loops, which makes the execution time quite long.
Rewriting the code to improve its computational efficiency would be very desirable so that
the user can see the results more quickly. Perhaps compiling the code with the MATLAB to
C (or C++) compiler would result in faster execution.


APPENDIX 


This appendix describes the sound signal examples and main programs used in
this thesis. Several music and speech signals were used in the analysis/synthesis, frequency
track editing, and transformation experiments; however, only two of them are presented in
this thesis. They are:

• The speech phrase “baseball” from a male speaker (file: obase.mat), and

• Two notes (file: tbhi2.mat) excerpted from a four-note trombone passage
(file: tb_hi.au).

The program is written entirely in MATLAB and makes extensive use of MATLAB's
Graphical User Interface (GUI) features. In addition, the MATLAB Signal Processing
Toolbox is required to run these programs.

The programs are divided into three parts: the sound analysis/synthesis, frequency
track editing, and sound transformation (including time-scale modification and frequency
scaling) programs. They correspond to Chapter III (A), (B), and (C), respectively.
A. SOUND ANALYSIS/SYNTHESIS FUNCTIONS 


[cand, mag, phas, sig, par, synt] = gsinwave(signal); 


This is the main program that the user invokes to perform the sound
analysis/synthesis; it calls the other associated MATLAB functions. This function calls
guisynth.m, which implements all GUI actions and calls the two other functions, analy.m
and synth.m, which perform the analysis and synthesis, respectively. The input to gsinwave
is the original sound signal, and the outputs are the synthesized signal (synt) and a set of
variables generated by the analy.m and synth.m functions. These variables are described
below.




1. Analysis Function


[cand, mag, phas, sig, par] = analy(signal, sam_fs, N, n, peak_thr, mat_win, fft_size); 


Inputs    signal     original sound signal
          sam_fs     sampling frequency            (default: 8000 Hz)
          N          windowed frame length         (default: 200 points)
          n          overlap width                 (default: 100 points)
          peak_thr   peak picking threshold        (default: 80 dB)
          mat_win    frequency matching interval   (default: 10)
          fft_size   DFT points                    (default: 1024)

Outputs   cand       sinusoid peak candidate matrix
          mag        magnitude matrix of DFT
          phas       phase matrix of DFT
          sig        windowed original signal
          par        parameters used by other functions


The output variables listed above can be described in more detail as follows. 


• cand

This matrix contains the “matched” peak frequency information for the
sinusoids extracted from the DFT spectrum (i.e., after applying the “birth”
and “death” process and the frame-to-frame peak matching procedure). The
number of rows of this matrix is equal to one half of the number of DFT
points, and the number of columns depends on both the windowed frame length
and the length of the sound signal.


A cand matrix of the sound tbhi2.mat was generated by using the default input 
values mentioned earlier in the analysis function. The size of this cand matrix is 
512 by 77. A small matrix corresponding to rows 36 to 60 and columns 1 to 10 
was excerpted from the cand matrix as an example. The location and value of 
each element indicate the frequency and the “matched” position on the 
following frame, respectively. 


For instance, the element (37,2) of the original cand matrix corresponds to the 
frequency value 289 Hz. The value “38” in this position of the matrix indicates 




that the element (38,3) is the next matched position (“38” corresponds to the
frequency value 297 Hz). Also, since there is no value “37” in column 1 of this
matrix, this frequency track is considered to be “born” in frame 1. Consider
another element, (53,2), of this matrix. Its value “53” indicates that the
element (53,3) could be nonzero so that this frequency track continues.
However, since the value of element (53,3) is zero, the track “dies” at this
point.


[Matrix excerpt: rows 36 to 60, columns (frames) 1 to 10 of the cand matrix.
Most entries are zero; the nonzero entries mark the frame-to-frame matches
described above.]
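The linked-list structure of the cand matrix can be illustrated by a small routine that follows one track from a starting cell. This is a hedged Python sketch (the thesis code is MATLAB, and `follow_track` is a hypothetical name), using zero-based list indexing for the columns:

```python
def follow_track(cand, row, col):
    """Follow one frequency track through a cand-style matrix: a nonzero
    entry cand[row][col] names the matched row in column col + 1; a zero
    entry ends ("kills") the track. Returns the row (bin) sequence."""
    rows = [row]
    while col < len(cand[0]) and cand[row][col]:
        row = cand[row][col]   # value points at the matched row ...
        col += 1               # ... in the next frame
        rows.append(row)
    return rows
```

Starting at a cell whose entry is 38 hops to row 38 of the next column, and so on until a zero entry terminates the track, mirroring the (37,2) and (53,2) examples above.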


• mag

This matrix contains the magnitudes of the DFT. The number of rows of this
matrix is equal to one half of the number of DFT points, and the number of
columns depends on the sound signal.


• phas

This matrix contains the phases of the DFT. It has the same size as the mag
matrix.




• sig

This is a windowed version of the original sound signal.


• par

This vector is composed of the essential parameters provided by the user,
including the sampling frequency, windowed frame length, overlap width, and
number of DFT points, which are used in the synthesis and other programs.


2. Synthesis Function

[sig, synt] = synth(cand, mag, phas, sig, par);


Inputs    All input parameters are generated by the analysis function
          (see the description above).

Outputs   sig        windowed signal
          synt       synthesized signal


B. FREQUENCY TRACK EDITING FUNCTIONS 


[ncand, nmag, nphas, nsig, npar, nsynt] = guifreq(cand, mag, phas, sig, par, synt);


This is the main program to invoke if the user wishes to perform frequency track
editing. This function calls the associated function guiedit.m, which performs all GUI and
frequency track editing operations. The inputs are the same variables used in the analysis
and synthesis functions mentioned earlier, and the outputs are modified versions of these
variables. For example, ncand is the resulting candidate matrix after the original candidate
matrix cand has been “edited.” The functions called by guiedit.m are described below:


edit.m       eliminates short frequency tracks

Zetouii      eliminates a specific range of frequencies

cut.m        deletes a small portion of the signal

paste.m      cuts a small portion of the signal and pastes it at a specific time instant

mousedit.m   implements the mouse editing capability


C. SOUND TRANSFORMATION FUNCTIONS 


[newsynt] = guimod(cand, mag, phas, sig, par, synt); 


This is the main program, which calls the associated MATLAB function guiscale.m
in order to perform the sound transformations, including time-scale modification and
frequency scaling. The inputs come from one of the previous two programs, namely, the
sound analysis/synthesis or frequency track editing programs, and the output is the modified
waveform. The function guiscale.m implements all GUI actions and calls the function
modsynt.m, which performs the sound transformations.

The user can also get on-line help by typing “help func_name” in the MATLAB
workspace to see a description of how to use the above functions. Alternatively, one can
simply activate these interactive GUI tools and press the “Help” button to get more
information.







LIST OF REFERENCES 


1. McAulay, Robert J., and Thomas F. Quatieri, “Speech Analysis/Synthesis Based on
a Sinusoidal Representation,” IEEE Transactions on Acoustics, Speech, and Signal
Processing, Vol. ASSP-34, No. 4, pp. 744-754, August 1986.

2. Serra, Xavier, “A System for Sound Analysis/Transformation/Synthesis Based on a
Deterministic plus Stochastic Decomposition,” Report No. STAN-M-58, CCRMA,
Department of Music, Stanford University, October 1989.

3. Victory, Charles W., “Comparison of Signal Processing Modeling Methods for
Passive Sonar Data,” Master's Thesis, Naval Postgraduate School, March 1993.

4. Quatieri, T. F., and R. J. McAulay, “Speech Transformation Based on a Sinusoidal
Representation,” IEEE Transactions on Acoustics, Speech, and Signal Processing,
Vol. ASSP-34, No. 6, pp. 1449-1464, December 1986.

5. The MathWorks, Inc., Building a Graphical User Interface, June 1993.

6. Quatieri, Thomas F., and Robert J. McAulay, “Shape Invariant Time-Scale and
Pitch Modification of Speech,” IEEE Transactions on Signal Processing, Vol. 40,
No. 3, pp. 497-510, March 1992.







BIBLIOGRAPHY 


Brown, Dennis W., “SPC Toolbox: A MATLAB Based Software Package for Signal
Analysis,” Master's Thesis, Naval Postgraduate School, September 1995.

Deller, John R., Jr., John G. Proakis, and John H. L. Hansen, Discrete-Time Processing of
Speech Signals, Englewood Cliffs, New Jersey: Prentice Hall, 1987.

Haykin, Simon, Communication Systems, New York: John Wiley & Sons, Inc., 1994.

Portnoff, M. R., “Time-Scale Modification of Speech Based on Short-Time Fourier
Analysis,” IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol.
ASSP-30, No. 3, pp. 374-390, June 1981.

Therrien, Charles W., Discrete Random Signals and Statistical Signal Processing,
Englewood Cliffs, New Jersey: Prentice Hall, 1992.







INITIAL DISTRIBUTION LIST

                                                                     No. Copies

1. Defense Technical Information Center ............................ 2
   8725 John J. Kingman Rd., STE 0944
   Ft. Belvoir, VA 22060-6218

2. Dudley Knox Library ............................................. 2
   Naval Postgraduate School
   411 Dyer Rd.
   Monterey, California 93943-5101

3. Prof. Charles W. Therrien ....................................... 4
   Code EC/TI
   Naval Postgraduate School
   Monterey, California 93943

4. Prof. Roberto Cristi ............................................ 1
   Code EC/CX
   Naval Postgraduate School
   Monterey, California 93943

5. Library of Chung Cheng Institute of Technology .................. 1
   P.O. Box 90047
   Ta-hsi, Taoyuan
   TAIWAN, R.O.C.

6. Ming-Fei Chuang ................................................. 1
   SGC #2552 NPS
   Monterey, California 93943

7. Ming-Fei Chuang ................................................. 2
   No. 34, Lane 152, Sec. 3 Yuanlin Rd.
   Ta-hsi, Taoyuan
   TAIWAN, R.O.C.




