Calhoun
Institutional Archive of the Naval Postgraduate School

Calhoun: The NPS Institutional Archive
DSpace Repository

Theses and Dissertations
1. Thesis and Dissertation Collection, all items

2006-09

Expansion of the Center for Network Innovation and Experimentation (CENETIX) network to a worldwide presence

Humali, I. Gokhan

Monterey, California. Naval Postgraduate School

http://hdl.handle.net/10945/2591

Downloaded from NPS Archive: Calhoun

DUDLEY KNOX LIBRARY

Calhoun is the Naval Postgraduate School's public access digital repository for research materials and institutional publications created by the NPS community. Calhoun is named for Professor of Mathematics Guy K. Calhoun, NPS's first appointed (and published) scholarly author.

Dudley Knox Library / Naval Postgraduate School
411 Dyer Road / 1 University Circle
Monterey, California USA 93943

http://www.nps.edu/library







NAVAL 

POSTGRADUATE 

SCHOOL 

MONTEREY, CALIFORNIA 


THESIS 


SENSOR FUSION FOR BOOST PHASE 

INTERCEPTION OF BALLISTIC MISSILES 

by 


I. Gokhan Humali 


September 2004 


Thesis Co-Advisor: 

Phillip E. Pace 

Thesis Co-Advisor: 

Murali Tummala 


Approved for public release; distribution is unlimited 







REPORT DOCUMENTATION PAGE
Form Approved OMB No. 0704-0188

Public reporting burden for this collection of information is estimated to average 1 hour per response, including the time for reviewing instruction, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington, VA 22202-4302, and to the Office of Management and Budget, Paperwork Reduction Project (0704-0188), Washington, DC 20503.

1. AGENCY USE ONLY (Leave blank)
2. REPORT DATE: September 2004
3. REPORT TYPE AND DATES COVERED: Master's Thesis
4. TITLE AND SUBTITLE: Sensor Fusion for Boost Phase Interception of Ballistic Missiles
5. FUNDING NUMBERS
6. AUTHOR(S): Ismail Gokhan Humali
7. PERFORMING ORGANIZATION NAME(S) AND ADDRESS(ES): Center for Joint Services Electronic Warfare, Naval Postgraduate School, Monterey, CA 93943-5000
8. PERFORMING ORGANIZATION REPORT NUMBER
9. SPONSORING / MONITORING AGENCY NAME(S) AND ADDRESS(ES): Missile Defense Agency
10. SPONSORING / MONITORING AGENCY REPORT NUMBER
11. SUPPLEMENTARY NOTES: The views expressed in this thesis are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
12a. DISTRIBUTION / AVAILABILITY STATEMENT: Approved for public release; distribution is unlimited
12b. DISTRIBUTION CODE
13. ABSTRACT (maximum 200 words): In the boost phase interception of ballistic missiles, determining the exact position of the missile is critically important. Several sensors are used to detect and track the missile, and these sensors differ in many respects. Radars report range, elevation, and azimuth of the target, while space-based infrared sensors report elevation and azimuth only. These outputs must be combined (fused) to achieve better position information for the missile. The architecture used in this thesis is a decision-level fusion architecture. This thesis examines four algorithms to fuse the results of radar sensors and space-based infrared sensors: an averaging technique, a weighted averaging technique, a Kalman filtering approach, and a Bayesian technique. The ballistic missile boost phase segment and the sensors are modeled in MATLAB. The missile vector and dynamics are based upon Newton's laws, and the simulation uses an earth-centered coordinate system. The Bayesian algorithm has the best performance, resulting in an rms missile position error of less than 20 m.
14. SUBJECT TERMS: Ballistic Missile Defense System, Boost Phase Interception, Sensor Fusion, Radar Design, IR Satellites
15. NUMBER OF PAGES: 91
16. PRICE CODE
17. SECURITY CLASSIFICATION OF REPORT: Unclassified
18. SECURITY CLASSIFICATION OF THIS PAGE: Unclassified
19. SECURITY CLASSIFICATION OF ABSTRACT: Unclassified
20. LIMITATION OF ABSTRACT: UL

NSN 7540-01-280-5500
Standard Form 298 (Rev. 2-89), Prescribed by ANSI Std. 239-18

































Approved for public release; distribution is unlimited 


SENSOR FUSION FOR BOOST PHASE INTERCEPTION OF BALLISTIC 

MISSILES 

I. Gokhan Humali
1st Lieutenant, Turkish Air Force
B.Eng., Turkish Air Force Academy, 1996


Submitted in partial fulfillment of the 
requirements for the degree of 


MASTER OF SCIENCE IN SYSTEMS ENGINEERING 


from the 


NAVAL POSTGRADUATE SCHOOL 
September 2004 


Author: Ismail Gokhan Humali 


Approved by: Phillip E. Pace 

Co-Advisor 


Murali Tummala 
Co-Advisor 


Dan Boger 

Chairman, Department of Information Sciences 






ABSTRACT 


In the boost phase interception of ballistic missiles, determining the exact position of the missile is critically important. Several sensors are used to detect and track the missile, and these sensors differ in many respects. Radars report range, elevation, and azimuth of the target, while space-based infrared sensors report elevation and azimuth only. These outputs must be combined (fused) to achieve better position information for the missile. The architecture used in this thesis is a decision-level fusion architecture. This thesis examines four algorithms to fuse the results of radar sensors and space-based infrared sensors: an averaging technique, a weighted averaging technique, a Kalman filtering approach, and a Bayesian technique. The ballistic missile boost phase segment and the sensors are modeled in MATLAB. The missile vector and dynamics are based upon Newton's laws, and the simulation uses an earth-centered coordinate system. The Bayesian algorithm has the best performance, resulting in an rms missile position error of less than 20 m.





TABLE OF CONTENTS 


I. INTRODUCTION ... 1
   A. NATIONAL MISSILE DEFENSE ... 1
   B. THESIS OUTLINE ... 2
II. SENSORS ... 3
   A. IR SENSORS ... 3
      1. IR Signature of Target Missile ... 3
      2. Infrared Sensor Design ... 9
   B. RADAR ... 14
      1. Radar Equations ... 14
      2. Radar Parameters ... 17
      3. Position of Radar Sensors ... 17
      4. Radar Results ... 21
III. DATA FUSION ARCHITECTURE ... 25
   A. FUSION MODEL ... 25
   B. DATA FUSION NODE DESIGN ... 26
      1. Data Alignment ... 26
      2. Data Association ... 27
      3. State Estimation ... 27
   C. PROCESSING ARCHITECTURES ... 27
      1. Direct Fusion ... 27
      2. Feature Level Fusion ... 28
      3. Decision Level Fusion ... 29
IV. DECISION LEVEL FUSION ALGORITHMS ... 31
   A. AVERAGING TECHNIQUE ... 31
   B. WEIGHTED AVERAGING TECHNIQUE ... 34
   C. KALMAN FILTERING ... 37
   D. BAYESIAN TECHNIQUE ... 43
      1. Theory ... 43
      2. Implementation ... 44
      3. Results ... 45
V. CONCLUSION ... 47
   A. CONCLUSIONS ... 47
   B. RECOMMENDATIONS ... 47
APPENDIX. MATLAB CODES ... 49
LIST OF REFERENCES ... 73
INITIAL DISTRIBUTION LIST ... 75










































LIST OF FIGURES 


Figure 1. Spectral intensity of Titan IIIB at angle of attack of 7.4 deg (From Ref 5) ... 5
Figure 2. Radiant exitance of Titan IIIB (1035 K) ... 6
Figure 3. Atmospheric transmittance calculated using Searad ... 7
Figure 4. The change of the plume in the atmosphere (From Ref 5) ... 8
Figure 5. Radiance map of Titan IIIB for MWIR at altitudes 18 km (a) and 118 km (b) (From Ref 5) ... 8
Figure 6. Satellite with infrared sensor ... 10
Figure 7. Intersection volume of infrared sensors ... 11
Figure 8. Target area seen in the detector area ... 12
Figure 9. Illustration of the process determining the intersection volume ... 13
Figure 10. Intersection volume matrix with true target position indicated ... 13
Figure 11. Radar cross section of the ballistic missile for four stages (From Ref 9) ... 16
Figure 12. The possible radar positions and ballistic missile trajectories towards San Francisco and Washington, DC ... 18
Figure 13. Number of times S/N exceeds the threshold (headed to San Francisco) ... 19
Figure 14. Number of times S/N exceeds the threshold (headed to Washington) ... 20
Figure 15. Locations of launch site and radar sensors ... 21
Figure 16. The rms error of RF1 (arbitrary position) ... 22
Figure 17. The rms error of RF2 (arbitrary position) ... 22
Figure 18. The rms error of RF1 using optimal positions ... 23
Figure 19. The rms error of RF2 using optimal positions ... 24
Figure 20. JDL Data Fusion Model (After Ref 11, pg. 16-18) ... 25
Figure 21. Direct level fusion (After Ref 11, pg. 1-7) ... 28
Figure 22. Feature level fusion (After Ref 11, pg. 1-7) ... 29
Figure 23. Decision level fusion (After Ref 11, pg. 1-7) ... 30
Figure 24. True target position, sensed positions by radars and arithmetic mean of sensed positions of the target ... 31
Figure 25. The rms error of (a) RF1 and (b) RF2 ... 32
Figure 26. The rms error of averaging technique ... 33
Figure 27. True target position, sensed positions by radars and weighted averaging position of the target ... 35
Figure 28. The rms error of (a) RF1 and (b) RF2 ... 35
Figure 29. The rms error of weighted averaging technique estimation of target position ... 36
Figure 30. Kalman filtered errors for RF1: (a) range, (b) elevation and (c) azimuth ... 39
Figure 31. Overall position error after using Kalman filter for RF1 ... 40
Figure 32. Kalman filtered errors for RF2: (a) range, (b) elevation and (c) azimuth ... 41
Figure 33. Overall position error after using Kalman filter for RF2 ... 41
Figure 34. The rms error for weighted averaging technique after RF1 and RF2 outputs are Kalman filtered ... 42
Figure 35. The PDFs of radars' measurements and infrared sensors' IFOV intersection volume ... 44
Figure 36. The rms position error using Bayesian technique ... 45







LIST OF TABLES 


Table 1. Length of the stages of Peacekeeper ballistic missile ... 16
Table 2. Radar parameters ... 17
Table 3. Optimum radar positions (for launch angles to San Francisco and Washington, DC) ... 20
Table 4. Average rms error for radars and averaging technique ... 33
Table 5. Average rms error for radar sensors and weighted averaging technique ... 36
Table 6. Average rms error for radar sensors and Kalman filtering ... 43
Table 7. Average error for radar sensors and Bayesian technique ... 46













ACKNOWLEDGMENTS 


to my father M. Yuksel...

I'd like to thank my advisors, Professors Phillip E. Pace and Murali Tummala, for their support in the completion of this thesis, and also Professor Brett Michael for giving me the opportunity to work on this exciting project.

I'd like to thank my lovely wife Aylin for her support throughout my education at the Naval Postgraduate School.


This work was supported by the Missile Defense Agency. 






I. INTRODUCTION 


A. NATIONAL MISSILE DEFENSE 

The National Missile Defense Act of 1999 states the policy of the United States to deploy a National Missile Defense system capable of intercepting a limited number of ballistic missiles, armed with weapons of mass destruction, fired towards the US [1]. The threat of an intercontinental ballistic missile attack has increased due to the proliferation of ballistic missile technology.

A ballistic missile attack has three phases: boost, midcourse, and terminal. Defending against the attack in each of these phases has its own advantages and disadvantages.

A boost phase defense system is designed to intercept the ballistic missile in the first three or four minutes of its flight [2]. In this phase, the engine of the ballistic missile ignites and thrusts the missile. Detecting and tracking the ballistic missile in the boost phase is easier due to the bright, hot plume of the missile. One advantage of intercepting the ballistic missile in this phase is that it is difficult for the missile to deploy countermeasures. Another advantage is that if the defense cannot intercept the incoming missile in this initial phase, there is still a chance to intercept it in the later phases. The disadvantages of boost phase interception are the time and geographical limitations: the defense must locate the ground-based interceptor missile as close as possible to the ballistic missile launch site because of the short engagement time.

A midcourse defense system covers the phase that begins after the ballistic missile's booster burns out and ends when the missile reenters the atmosphere [2]. This phase takes approximately 20 minutes, the longest of the three phases. In this phase, the ballistic missile is traveling in the vacuum of space, and any countermeasures deployed by the missile can be extremely effective. For example, the ballistic missile can release many lightweight decoys. The decoys spread out in space, where the absence of drag lets them travel at the same speed as the actual warhead. Using reflectors, heaters, coolers, etc., these decoys can imitate the warhead successfully, which makes discrimination of the warhead extremely hard.






The terminal phase of the ballistic missile starts when the warhead reenters the 
atmosphere. The decoys and the debris are not an issue in this phase because they will be 
slowed due to the atmospheric drag, and the warhead can be identified easily. The 
interception in this phase is the last opportunity for the defense. As the target of the 
ballistic missile is unknown, the defense has to consider stationing many interceptors 
throughout the country to cover the entire U.S. 

The interception of the ballistic missile in these phases has completely different technical requirements. Therefore, each type of ballistic missile defense system can only operate in its specific phase. The study in this thesis focuses on boost phase interception.

In order to detect and track the ballistic missile more accurately in the boost 
phase, different types of sensors are used with different capabilities. These sensors 
provide a position estimation of the ballistic missile. Since the sensors provide the 
target’s position to the interceptor, the most accurate position of the ballistic missile is 
critical. The more accurate the position estimation, the higher the interception probability. 
In this study, the fusion of two space based infrared sensors and two ground based radar 
sensors is investigated. The purpose is to achieve better position estimation by combining 
the outputs of these sensors. Four algorithms are investigated to fuse the results and 
reduce the positional error of the target. These include an averaging technique, a 
weighted average technique, a Kalman filter based algorithm and a Bayesian approach. 


B. THESIS OUTLINE 

The thesis is organized as follows. In Chapter II, the infrared and radar sensor 
design issues are discussed. Chapter III describes the sensor fusion model used here as 
well as the processing architectures. In Chapter IV, the four algorithms that can be used 
in sensor fusion are presented, and the results are compared. Conclusions about the work 
are discussed in Chapter V. Appendix A includes the MATLAB code used in this thesis. 





II. SENSORS 


There are many sensors that can be used to sense, i.e., detect and track, ballistic missiles or targets. In this study, we consider two types of sensors, namely, radar sensors and satellite-based infrared (IR) sensors. For the interception of ballistic missiles in the
boost phase, the sensors must provide complete coverage throughout the journey of the 
missile. The output of the sensors must be reliable because the sensor outputs are used in 
the guidance of the interceptor until the kill vehicle is launched and begins to track the 
missile using its own sensors. 


A. IR SENSORS 

1. IR Signature of Target Missile 

The important parameter used in determining the spectral bandwidth of the 
infrared sensor’s detector is the rocket plume signature in the IR band. In this discussion, 
we use the measurement results of a Titan III ballistic missile as an example to design the 
satellite based infrared sensors. 

A target’s spectral radiant intensity is a function of the temperature. In a given 
direction, the spectral radiant intensity of the target is defined as the integration of the 
spectral radiance (for the projected area) in that direction [3]. The spectral radiant 
intensity is given by 

The spectral radiant intensity is given by

I_λ = L_λ A_T   W sr⁻¹ µm⁻¹   (2-1-1)

where A_T is the area of the target, and the spectral radiance L_λ of a Lambertian source is calculated as

L_λ = M_λ / π   W cm⁻² sr⁻¹ µm⁻¹   (2-1-2)

where M_λ is the spectral radiant exitance (emittance) of the target.

Any target at a temperature above absolute zero (0 K) radiates; if the emissivity is one, the radiation is called blackbody radiation [4]. Two simple facts hold for blackbody radiation: (1) the higher the temperature of the body, the greater the emitted flux, and (2) the spectral distribution of the flux shifts to shorter wavelengths as the temperature of the target increases. The emissivity characteristic of the body, however, does not affect these rules.

The temperature and the emissivity determine the spectral distribution and magnitude of the target's radiation. The radiant exitance of the target is given by

M_λ = ε_λ M_λB   (2-1-3)

where ε_λ is the spectral (hemispherical) emissivity and M_λB is the radiant exitance of a blackbody, which can be expressed using Planck's equation as

M_λB = c₁ / [λ⁵ (e^(c₂/λT) − 1)]   W m⁻² µm⁻¹   (2-1-4)

where

λ = wavelength, µm
c₁ = 2πhc² = 3.7418×10⁻¹⁶ W m²
c₂ = ch/k = 1.4387×10⁻² m K
T = absolute temperature, K
c = speed of light = 3×10⁸ m/s
h = Planck's constant = 6.626×10⁻³⁴ W s²
k = Boltzmann's constant = 1.3807×10⁻²³ W s K⁻¹

In order to determine the radiant exitance of a given target, we first need the radiant exitance of a blackbody, which in turn requires the average temperature T. The spectral radiant intensity I_λ of a Titan IIIB, at a look angle of 7.4 degrees, is shown in Figure 1 [5]. The emissivity of the Titan IIIB is assumed to be ε_λ = 0.5 [6]. The spectral radiant intensity lies mostly in the 2.5 µm to 3.0 µm infrared region.






Figure 1. Spectral intensity of Titan IIIB at angle of attack of 7.4 deg (From Ref 5) 


In Figure 1, the maximum intensity value occurs around λ_peak = 2.8 µm. The average temperature T of the target's plume can then be calculated using Wien's displacement law [7]:

T = 2897.8 / λ_peak   K   (2-1-5)

where λ_peak is the wavelength (in µm) at which the peak value of spectral radiant intensity occurs. From (2-1-5), for λ_peak = 2.8 µm (see Figure 1), the target's average temperature can be calculated as 1035 K.

From (2-1-3) and (2-1-4), using T = 1035 K, the radiant exitance M_λ of the Titan IIIB is calculated and plotted as shown in Figure 2. Comparing the plots in Figures 1 and 2, the peak values in both cases occur at a wavelength of about 2.8 µm, as desired. If the radiant spectral intensity shown in Figure 1 is assumed to be due just to the rocket plume, then by (2-1-1) the plume area can be approximated as

A_T = π I_λ / M_λ |_(λ = 2.8 µm) ≈ 600 m²   (2-1-6)









Figure 2. Radiant exitance of Titan IIIB (1035 K) 


To calculate the radiant exitance within the detector band,

M = ∫[λ₁ to λ₂] M_λ dλ   (2-1-7)

where the limits from Figure 2 are λ₁ = 3 µm and λ₂ = 5 µm, which gives M ≈ 1.15 W cm⁻². If we assume the surface area of the plume is 600 m² [8], then the radiation intensity of the plume is

I_p = M A_T / π ≈ 550 kW sr⁻¹   (2-1-8)
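The thesis implements these calculations in MATLAB (see the Appendix); the following is a minimal Python sketch of the same chain, using the emissivity of 0.5, the 2.8 µm peak, and the 600 m² plume area assumed above. The trapezoidal sum stands in for the integral in (2-1-7).

```python
import numpy as np

# Constants from Eq. (2-1-4)
C1 = 3.7418e-16   # 2*pi*h*c^2, W m^2
C2 = 1.4387e-2    # c*h/k, m K

def planck_exitance(lam, T):
    """Blackbody spectral radiant exitance M_lambda_B of Eq. (2-1-4),
    with wavelength lam in meters; returns W m^-2 per m of wavelength."""
    return C1 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))

# Wien's displacement law, Eq. (2-1-5): plume temperature from the 2.8 um peak
lam_peak_um = 2.8
T = 2897.8 / lam_peak_um                 # ~1035 K

# In-band exitance, Eq. (2-1-7): integrate eps * M_lambda_B over 3-5 um
eps = 0.5
lam = np.linspace(3e-6, 5e-6, 2001)      # m
vals = eps * planck_exitance(lam, T)
M = float(np.sum((vals[:-1] + vals[1:]) / 2.0) * (lam[1] - lam[0]))  # W m^-2
M_cm2 = M * 1e-4                         # W cm^-2; the text quotes ~1.15

# Plume radiant intensity of a Lambertian source of area 600 m^2, Eq. (2-1-8)
A_T = 600.0                              # m^2
I_p = M * A_T / np.pi                    # W sr^-1
```

A finer wavelength grid changes the band integral only marginally; the result is dominated by the assumed plume temperature and emissivity.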

Before choosing the detector for the infrared sensor, we need to consider the 
effects of the atmosphere. The effects of the atmosphere are predominant for altitudes up 
to 15 km from the surface of the Earth. Although that is a small part of the boost phase, 
for early detection of the target launch, the atmospheric effects must be taken into 
account. 




















The atmosphere is made up of many different gases and aerosols. Some gases in the atmosphere are nitrogen, oxygen, argon, neon, carbon dioxide, and water vapor. Aerosols include dirt, dust, sea salt, water droplets, and pollutants. The concentration of these gases differs from place to place. Most of the attenuation in the 2.5 µm to 2.9 µm region is caused by carbon dioxide and water vapor. The atmospheric transmittance is calculated using the Searad model. In this model, we used the 1976 US standard atmosphere, maritime extinction (visibility 23 km), air mass character (ICSTL) of 3, and no clouds or rain. The results of the Searad calculations are shown in Figure 3. The atmospheric transmittance is not uniform for 3 µm ≤ λ ≤ 5 µm; several absorption regions can be identified in the transmittance spectrum.



Figure 3. Atmospheric transmittance calculated using Searad 


From Figure 2, the plume energy is concentrated in the infrared region around 2.8 µm. Consequently, we may choose the midwave infrared region of 3 to 5 µm for designing the detector. Note that the transmittance plot in Figure 3 shows some absorption near that wavelength.

The atmosphere not only affects the transmittance, but also the shape and size of the target plume. Because of the change in pressure and the concentration of the gases in the atmosphere, the size and shape of the plume change with altitude. An example of these effects is shown in Figure 4 for the plume in the afterburning stage, the continuum flow regime, the molecular flow regime, and the vacuum limit.

Figure 4. The change of the plume in the atmosphere (From Ref 5)

The plume grows bigger with increasing altitude, and it gets smaller after it goes out of the
atmosphere. The size of the plume diameter is about 10-100 meters at the beginning of 
the boost phase. At an altitude of 60 km (continuous flow regime) the diameter of the 
plume begins to expand, and at an altitude of 160 km (molecular flow regime) it has a 
maximum diameter of 1-10 km. At 300 km, the diameter decreases to 1-10 m due to the 
vacuum limit. The change in radiance is shown in Figure 5 for the Titan IIIB for altitudes 
18 km and 118 km. 



Figure 5. Radiance map of Titan IIIB for MWIR at altitudes 18 km (a) and 118 km (b) 

(From Ref 5) 















2. Infrared Sensor Design

The infrared sensors are low earth orbit, staring-type focal plane arrays on satellites. The missile is a point target for the infrared sensors because of the large distance between the sensors and the target [2]. The detector area A_d depends on the instantaneous field of view (IFOV) of the detector, α_d^(1/2), and the focal length f:

A_d = α_d f²   m²   (2-1-9)

A typical side dimension for a square detector is Δ_d = 30 µm. The diameter D of the sensor optics is set by diffraction as

D = 2.44 λ / α_d^(1/2)   m   (2-1-10)

Using a sensor design with focal length f = 1.5 m gives α_d^(1/2) = 20 µrad, and the diameter D = 24.4 cm.
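As a quick numerical check of (2-1-9) and (2-1-10), a sketch follows (not the thesis code; the 2 µm diffraction wavelength below is an assumption inferred from the quoted D = 24.4 cm, not a value stated in the text):

```python
# IFOV, detector area, and diffraction-limited aperture, Eqs. (2-1-9)-(2-1-10)
side = 30e-6           # square detector side dimension, m
f = 1.5                # focal length, m

ifov = side / f        # linear IFOV alpha_d^(1/2), rad (20 urad)
A_d = (ifov * f)**2    # detector area, m^2, Eq. (2-1-9)

# Assumed wavelength for the diffraction limit (inferred, not from the text)
lam = 2e-6             # m
D = 2.44 * lam / ifov  # optics diameter, m, Eq. (2-1-10): 0.244 m
```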


For mixed terrain, the radiance of the background is L_c = 300×10⁻⁶ W sr⁻¹ cm⁻² [Ref 6, pg. 210]. For α_d^(1/2) = 20 µrad and the satellite at R_c = 1000 km above the ground, the footprint is a 20 m × 20 m square with an area of 400 m². The total clutter radiation intensity becomes I_c = 1.2 kW sr⁻¹. Given these results, the signal-to-clutter ratio (SCR) can be expressed as


nD 


2 V 




R u 


nD 


2 A 


i c r 2 p 


j Rc 


( 2 - 1 - 11 ) 


where S T is the signal power from the target, S c is the clutter power, I p is the radiation 
intensity of the plume, I c is the radiation intensity of clutter, R p is the range between IR 
sensor and the plume, and R c is the range between IR sensor and the clutter. 





At launch, the initial SCR is estimated by setting the range of the plume R_p = R_c. We then have

SCR = I_p / I_c = 550 kW / 1.2 kW ≈ 458   (2-1-12)

The SCR is high enough throughout the boost phase that we can assume that the infrared 
sensor will track the target continuously. 
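The footprint, clutter intensity, and launch SCR above follow from a few lines. This sketch uses the background radiance, IFOV, and plume intensity assumed in the text:

```python
import math

ifov = 20e-6            # linear IFOV, rad
R_c = 1000e3            # satellite altitude (range to clutter), m

# Footprint on the ground: ~20 m x 20 m at 1000 km
side = ifov * R_c       # m
A_fp = side**2          # m^2 (400 m^2)

# Clutter intensity from the background radiance L_c = 300e-6 W sr^-1 cm^-2
L_c = 300e-6 * 1e4      # converted to W sr^-1 m^-2
I_c = L_c * A_fp        # W sr^-1 (~1.2 kW sr^-1)

# Launch SCR, Eq. (2-1-12), with R_p = R_c so the range terms cancel
I_p = 550e3             # plume intensity from Eq. (2-1-8), W sr^-1
SCR = I_p / I_c         # ~458
SCR_dB = 10 * math.log10(SCR)
```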

The most important infrared sensor parameter used in the fusion algorithm is the 
IFOV. The IFOV dictates the spatial resolution of each detector. The infrared sensor, the 
sensor’s field of view (FOV) and the IFOV are shown in Figure 6. The target missile’s 
plume and a footprint on the Earth are shown to be within the sensor’s IFOV. 


Figure 6. Satellite with infrared sensor


Infrared sensors are passive sensors. They give the azimuth and elevation 
information of the target. The azimuth, elevation and range information are required to 
guide the intercept missile. To derive the target range information, the intersection of 
each infrared sensor’s IFOV is used. In Figure 7, the intersection area of two IFOVs is 
shown. 






Figure 7. Intersection volume of infrared sensors


The exact locations of the satellites, the IFOV values of each sensor, and the azimuth φ and elevation θ angles to the target are assumed to be known. By using triangulation and the two intersecting IFOV cones, a volume can be derived that contains the target plume. As the target is a point source, the source area as seen by the sensor array can be anywhere within the detector area, as illustrated in Figure 8. The detector will declare a target regardless of the source area's position within the detector area. From this, we know which detector element contains the target image and which IFOV cone contains the target. Additionally, the position of the IR satellite is known. The IFOV and the satellite position information are sent to the fusion center and used to find the intersection volume (as depicted in Figure 7) in order to determine the location of the ballistic missile.






Figure 8. Target area seen in the detector area

In the simulation, the true target-to-satellite vector is determined. Then the angles θ and φ from the satellite to the target are calculated, and a uniformly distributed random error is added to them. As the target must lie within the IFOV lines, we choose the error value so that the midline of the IFOV can move up to ±IFOV/2 radians. We repeat these steps for satellite number two. Now we have the midlines for both satellites, and the target is within the intersection volume of the two IFOV cones. This volume is found and used in the sensor fusion algorithm to find the most probable location of the target. To determine the intersection volume, we search points in space (in one-meter increments) to find which points lie inside both IFOV cones. These points are collected, with their coordinates, in a matrix: the intersection volume matrix. Figure 9 illustrates the collection of points that form the intersection volume. The desired target is assumed to be present in this volume. The intersection volume matrix is shown in Figure 10.









Figure 9. Illustration of the process determining the intersection volume (points inside both IFOV cones are added to the matrix; points inside only one cone, or neither, are discarded)



Figure 10. Intersection volume matrix with true target position indicated 
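The search described above maps naturally to a cone-membership test over a grid of candidate points. The sketch below (Python rather than the thesis's MATLAB; the satellite and target positions are made-up illustrative values) collects the grid points inside both IFOV cones and uses their centroid as a crude position estimate:

```python
import numpy as np

def in_cone(p, apex, axis, half_angle):
    """True if point p lies inside the cone with the given apex,
    unit axis vector, and half-angle (radians)."""
    v = p - apex
    d = np.linalg.norm(v)
    return d > 0 and np.dot(v / d, axis) >= np.cos(half_angle)

def intersection_points(apex1, ax1, apex2, ax2, half_angle, grid):
    """Collect the grid points inside BOTH IFOV cones: the
    'intersection volume matrix' of Figure 10."""
    return np.array([p for p in grid
                     if in_cone(p, apex1, ax1, half_angle)
                     and in_cone(p, apex2, ax2, half_angle)])

# Illustrative geometry (not the thesis scenario): two satellites, one target
sat1 = np.array([0.0, 0.0, 1000e3])
sat2 = np.array([800e3, 0.0, 1000e3])
target = np.array([400e3, 0.0, 200e3])
unit = lambda v: v / np.linalg.norm(v)
ax1, ax2 = unit(target - sat1), unit(target - sat2)
half = 10e-6   # IFOV/2 for the 20 urad IFOV used in the simulation, rad

# 1 m grid around the expected target location, as described in the text
r = np.arange(-15.0, 16.0)
grid = [target + np.array([dx, dy, dz]) for dx in r for dy in r for dz in r]

vol = intersection_points(sat1, ax1, sat2, ax2, half, grid)
centroid = vol.mean(axis=0)   # crude position estimate from the volume
```

At these ranges each cone is roughly a 9 m radius cylinder near the target, so the 1 m grid recovers the target position to within a few meters.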

The infrared sensor's range to the target directly affects the size of the matrix; the volume within the IFOV cone increases with range. If we use high earth orbit satellites (like Defense Support Program, DSP, satellites), the footprint for a given IFOV value (20 µrad in our simulation) will be about 640 km². In order to reduce the footprint to a more reasonable size (around 400 m² in this work), we choose low earth orbit satellites (1000 km above Earth's surface).


B. RADAR 

The forward-based radar systems operate in X-band with a low pulse repetition frequency (PRF). X-band is chosen because its high resolution provides a good capability for tracking ballistic targets in the boost phase. The resolution capability of a radar is related to the beamwidth, given by [2]

θ_B = 1.22 λ / D_r   (2-2-1)

where λ is the radar wavelength and D_r is the antenna diameter. The other issue that has to be addressed is the unambiguous range R_u of the radar. The range R_u must be large enough to track the target throughout the boost phase, which can extend up to 2,000 km (for a liquid propellant ballistic missile).
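Both constraints can be checked against the radar parameters adopted later in Table 2; a sketch, using the 1.22 λ/D diffraction beamwidth of (2-2-1) and the usual c/(2·PRF) unambiguous-range relation:

```python
import math

c = 3e8          # speed of light, m/s
f = 10e9         # X-band carrier, Hz
D_r = 4.15       # antenna diameter, m (Table 2)
prf = 150.0      # pulse repetition frequency, Hz (Table 2)

lam = c / f                   # wavelength: 3 cm
theta_B = 1.22 * lam / D_r    # beamwidth, rad; ~0.5 deg, Eq. (2-2-1)
R_u = c / (2 * prf)           # unambiguous range: 1000 km
```

Note that the 4.15 m aperture is exactly what makes the Table 2 beamwidth of 0.5 degrees come out at 10 GHz.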

1. Radar Equations 

The radar parameters determine the accuracy of the track information being provided to the sensor fusion. The radar single-pulse signal-to-noise ratio S/N required at the input to the receiver can be calculated as

S/N = P_T n G_T G_R σ λ² / [(4π)³ k T₀ B F R⁴ L]   (2-2-2)

where P_T is the peak power of the transmitter, n is the compression factor (n = 1 if no pulse compression is used) [6], G_T and G_R are the transmit and receive gains of the antenna, σ is the radar cross section of the missile target, λ is the wavelength of the radar, k is Boltzmann's constant (1.38×10⁻²³ J/K), T₀ is the system noise temperature, B is the receiver's input bandwidth, F is the system noise factor, R is the range to the target, and L is the total loss. In our simulation, we assume that the antenna gains are equal, G_R = G_T. The bandwidth is B = 1/τ, where τ is the pulsewidth. The system noise temperature is 290 K, and L is 1.

In the radar simulation, we add angular and range errors to the actual target position in order to generate the radar output. The added noise is Gaussian, with standard deviations for range and angle calculated as

σ_range = (cτ/2) / (k_m √(2(S/N) N_i))   (2-2-3)

σ_el = θ_B / (k_m √(2(S/N) N_i))   (2-2-4)

where c is the speed of light, θ_B is the 3-dB beamwidth of the antenna, N_i is the number of coherently integrated pulses, and k_m is the antenna error slope, between 1 and 2 (for our scenario, k_m = 1.7 for a monopulse antenna [6]).

The radar cross section of the ballistic missile plays an important role in sensing 
its position. From (2-2-2), S / N is directly proportional to the radar cross section of the 
ballistic missile. In (2-2-3) and (2-2-4), the variances of the range and angular errors are 
calculated using the SIN. Therefore, the radar cross section of the missile plays a 
significant role in the error variance. 
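A sketch of how (2-2-2) through (2-2-4) combine, using the Table 2 parameters; the 1 m² radar cross section and 1000 km range below are placeholder inputs (the simulation instead draws the RCS from the Peacekeeper data of Figure 11):

```python
import math

def single_pulse_snr(P_t, G, sigma, lam, B, F, R, n=1.0, L=1.0, T0=290.0):
    """Single-pulse S/N at the receiver input, Eq. (2-2-2)."""
    k = 1.38e-23  # Boltzmann's constant, J/K
    num = P_t * n * G * G * sigma * lam**2
    den = (4 * math.pi)**3 * k * T0 * B * F * R**4 * L
    return num / den

# Table 2 parameters
P_t = 500e3              # peak power, W
G = 10**(50 / 10)        # 50 dB antenna gain
lam = 3e-2               # wavelength at 10 GHz, m
tau = 50e-6              # pulsewidth, s
B = 1 / tau              # receiver bandwidth, Hz
F = 4.0                  # noise factor
N_i = 20                 # coherently integrated pulses
theta_B = math.radians(0.5)
k_m = 1.7                # monopulse antenna error slope

# Placeholder target: 1 m^2 RCS at 1000 km range
snr = single_pulse_snr(P_t, G, 1.0, lam, B, F, 1000e3)

# Measurement error standard deviations, Eqs. (2-2-3)-(2-2-4)
root = k_m * math.sqrt(2 * snr * N_i)
sigma_range = (3e8 * tau / 2) / root   # m
sigma_el = theta_B / root              # rad
```

These standard deviations are what scale the Gaussian noise added to the true target position in the simulation.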

Figure 11 shows the radar cross section of a ballistic missile in X-band (10 GHz) for all 
four stages of the missile [9]. The fourth stage is the payload. The similarity of the radar 
cross section of the different stages is significant. Even as the length of the missile 
decreases (jettisoning the canisters), the radar cross section of the missile does not change 
appreciably. The lengths of the stages are shown in Table 1. 





Figure 11. Radar cross section (RCS) versus aspect angle (degrees) of the ballistic missile for the four stages [From Ref 9]



Stage   | Length of the stage | Remaining length of the missile
Stage 1 | 8.175 m             | 21.8 m
Stage 2 | 5.86 m              | 13.625 m
Stage 3 | 2.3 m               | 7.765 m
Stage 4 | 5.45 m              | 5.45 m

Table 1. Length of the stages of Peacekeeper ballistic missile


2. Radar Parameters 

The radar parameters used in the boost phase simulation are shown in Table 2. The main issues taken into account in selecting these parameters are range and resolution. The pulsewidth assumed is 50 μs, and the number of pulses integrated is 20. The beamwidth is 0.5 degrees, and the pulse repetition frequency is 150 Hz.


Band                                X-band
Frequency                           10 GHz
Peak power (P_T)                    500 kW
Antenna diameter (D_r)              4.15 m
Antenna efficiency (η)              0.68
Antenna gain (G_T = G_R)            50 dB
Noise factor (F)                    4
Number of pulses integrated (N_I)   20
Beamwidth (θ_B)                     0.5 degrees
Pulsewidth (τ)                      50 μs
PRF (F_r)                           150 Hz

Table 2. Radar parameters
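The Table 2 parameters feed directly into the radar range equation (2-2-2). The thesis evaluates this in MATLAB; the Python sketch below is a hedged illustration in which the function name `snr_db` and the 1 m² default target RCS are assumptions for demonstration only.

```python
import math

# Illustrative S/N from the radar range equation (2-2-2) with the Table 2
# parameters; rcs_m2 defaults to an assumed 1 m^2 placeholder target.
def snr_db(range_m, rcs_m2=1.0):
    pt = 500e3                 # peak power (W)
    gain = 10 ** (50 / 10)     # 50 dB antenna gain (transmit = receive)
    lam = 3e8 / 10e9           # wavelength at 10 GHz (m)
    b = 1.0 / 50e-6            # receiver bandwidth = 1/pulsewidth (Hz)
    k = 1.38e-23               # Boltzmann's constant (J/K)
    t0 = 290.0                 # system noise temperature (K)
    f = 4.0                    # noise factor
    loss = 1.0                 # total loss
    n_i = 20                   # coherently integrated pulses
    single_pulse = (pt * gain * gain * lam ** 2 * rcs_m2) / (
        (4 * math.pi) ** 3 * k * t0 * b * f * loss * range_m ** 4)
    return 10 * math.log10(single_pulse * n_i)
```

In this sketch the S/N falls off as the fourth power of range, which is why the radar placement examined in the next section matters so much.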

3. Position of Radar Sensors 

Positioning of the radar sensors plays an important role in tracking the ballistic missile target in the boost phase. As the ballistic missile travels, it is sensed from many different aspect angles by each radar. The continuous motion and change of aspect cause fluctuations in the radar cross section of the ballistic missile, and these fluctuations directly affect the signal-to-noise ratio and the error of the radar output measurements. The Peacekeeper ballistic missile's radar cross section for X-band is used in our simulation [9].

Figure 12 shows all the possible positions of the radar sensors examined in the simulation. The trajectory shown is for a target launch from North Korea to San Francisco. The possible radar positions are indicated by hollow circles.

Figure 12. The possible radar positions and ballistic missile trajectories towards San Francisco and Washington, DC

By changing the distance between the radar and the missile launch site (400 km to 1000 km) and, similarly, the angle between true north, the launch site, and the radar position (0 degrees to 180 degrees), we investigate the S/N for the entire boost phase flight.

For a tracking radar, it is assumed that the S/N must be greater than 6 dB [10]. For any position of the radar, the total number of times that the S/N exceeds this threshold gives us an idea of the best position for the radar. The boost phase simulation lasts 180 s with a sampling period of 0.1 s, giving 1,800 data points to be examined. The best position of the radar is the one that gives the maximum number of detections throughout the flight. Figure 13 shows the number of times that the S/N exceeded the threshold as a function of the position of the radar.

Figure 13. Number of times S/N exceeds the threshold (headed to SF)

According to Figure 13, the best position for the radar is at 127° (the angle between true north, the launch site, and the radar location) and 600-1000 km from the launch site. If we place the radar at the position corresponding to the peak values in this figure, the radar will track the missile closely for the entire boost phase. Since we do not know the exact heading of the missile, we have to examine other heading possibilities and check whether the radar positions still have their S/N exceed this threshold.
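The site-selection metric described above can be sketched as follows, again in Python rather than the thesis's MATLAB. The names `detections`, `best_site`, and the `snr_db_at` callback are illustrative assumptions; the radar-equation evaluation along the trajectory is left as a stand-in.

```python
# Illustrative sketch of the placement metric: count the boost-phase samples
# whose S/N clears the 6-dB tracking threshold, and keep the candidate site
# with the most detections. snr_db_at(site, t) stands in for the
# simulation's radar-equation evaluation along the missile trajectory.
THRESHOLD_DB = 6.0
SAMPLES = 1800          # 180 s flight sampled every 0.1 s

def detections(site, snr_db_at):
    return sum(1 for n in range(SAMPLES)
               if snr_db_at(site, n * 0.1) > THRESHOLD_DB)

def best_site(candidates, snr_db_at):
    return max(candidates, key=lambda site: detections(site, snr_db_at))
```

Permuting the candidate sites through this counter reproduces the kind of surface shown in Figures 13 and 14, whose peaks give the optimal radar positions.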

Figure 14 shows the number of times the S/N of the radar exceeds the threshold for a missile launched to hit Washington, DC. Here, the best position of the radar has changed to 95° with a range of 680-880 km; that is, the launch angle changes the best location for the radar sensors.

Figure 14. Number of times SNR exceeds the threshold (headed to Washington)

Using the simulation, we optimized the positions of the radar systems; the best positions are listed in Table 3 (for both launch angles, to San Francisco and to Washington, DC). For the optimization, we permuted the possible locations of the radar sensors and counted how many times both radar sensors' S/N exceeded the threshold. As a result, the positions shown in Table 3 have one or both radar sensors' S/N exceeding the threshold throughout the boost phase. The locations of the launch site and the radar sensors are shown in Figure 15.



        Angle between true north,     Distance between
        launch site, and radar        launch site and radar
RF1     21 degrees                    400 km
RF2     127 degrees                   670 km

Table 3. Optimum radar positions (for launch angles to San Francisco and Washington, DC)


Figure 15. Locations of launch site and radar sensors 

4. Radar Results 

Each radar senses the position of the ballistic missile, and in doing so it makes errors, the most prevalent cause of which is thermal noise. These errors are injected into our simulation as Gaussian errors in the azimuth, elevation, and range of the target relative to the radar. The variances of the Gaussian noise components are calculated using (2-2-3) and (2-2-4). Here, the S/N changes as a function of the range to the target R_T for each scenario and the radar cross section of the ballistic missile.

For RF1 (see Table 3), the magnitude of the rms error e_rms is shown in Figure 16. The rms error is calculated as

e_rms = √( (x − x̂)² + (y − ŷ)² + (z − ẑ)² )     (2-2-5)

where (x, y, z) is the true position of the ballistic missile and (x̂, ŷ, ẑ) is the radar sensor's measurement of the ballistic target at any given time. In Figure 16, the error that RF1 makes while sensing the ballistic target increases as the flight time increases. This is due to the increase in range between the missile and the radar and to changes in the radar cross section of the missile as seen by the radar.
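Equation (2-2-5) is simply the Euclidean distance between the true and measured positions; a minimal Python sketch (the function name is an assumption; the thesis computes this in MATLAB):

```python
import math

# Eq. (2-2-5): distance between the true position (x, y, z) and a sensor's
# measurement (xm, ym, zm) of the ballistic target at one time instant.
def rms_error(true_pos, measured_pos):
    x, y, z = true_pos
    xm, ym, zm = measured_pos
    return math.sqrt((x - xm) ** 2 + (y - ym) ** 2 + (z - zm) ** 2)
```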



Figure 16. The rms error of RF1 (arbitrary position) 

For RF2, the rms error versus flight time is shown in Figure 17.

Figure 17. The rms error of RF2 (arbitrary position)

The rms error of RF2 differs from that in Figure 16 because of the difference in the radars' locations. These locations are arbitrary; if we instead use the radar positions calculated in Table 3, the results change significantly.

Figures 18 and 19 show the rms position errors of RF1 and RF2, respectively, when they are positioned according to Table 3. The improvement comes from the higher radar S/N obtained at the optimal positions.



Figure 18. The rms error of RF1 using optimal positions 

In this chapter, we examined the infrared and radar sensing of the ballistic missile. The radiant exitance and the radar cross section of the ballistic missile were investigated, and these target characteristics were used to establish the design parameters for the infrared and radar sensors. The positioning of the radar sensors was examined, an optimal positioning was achieved, and the infrared and radar sensor results were presented.

In the next chapter, we discuss the data fusion architectures used to combine the radar and IR sensor outputs.


Figure 19. The rms error of RF2 using optimal positions 


III. DATA FUSION ARCHITECTURE 


In this chapter, a data fusion model for ballistic missile interception in the boost phase is presented. The data fusion node design and the processing architectures are examined, and decision level fusion is determined to be the best processing architecture for sensor fusion in this work.


A. FUSION MODEL 

In the literature, many solutions for sensor (data) fusion have been proposed and 
investigated. This thesis focuses on a general sensor fusion model for combining target 
position data from both RF and IR sensors in order to determine the most accurate 
location of the missile target in the boost phase. The fusion scheme considered here is 
similar to the Joint Directors of Laboratories (JDL) model [11] and is shown in Figure 20. 
Sensors, communication links, data fusion, and response systems are the major functional 
blocks of the model. 



Figure 20. JDL Data Fusion Model (After Ref. 11, pp. 16-18)


In this model, the sensors send the information to the data fusion node via 

wireless communication links. The sensor information sent by the sensors varies 












according to the type of fusion processing used. The data fusion node performs the data alignment, data association, and position or state estimation functions. The results of the data fusion are sent to a resource management function, which plans and controls the available resources (weapons, sensors, guidance and control, and process control) using the fused information and the user directives. The weapons and sensors are selected using the results of the resource management decisions, and the response systems then react to the environment according to those decisions.

In the following sections, two important functions of the model are discussed 
further: the data fusion node design and the fusion processing algorithms. 


B. DATA FUSION NODE DESIGN 

The data fusion node performs three major functions: data alignment, data 
association, and state estimation. Each of these is described below. 

1. Data Alignment 

Data alignment, also known as data preparation or common referencing [Ref. 11, pg. 16-30], changes or modifies the data that come from the sensors so that the data can be associated and compared. Data alignment converts the sensor data to appropriate formats and translates the information to the correct spatio-temporal coordinate system. It also compensates for misalignments during changes between these dimensions.

Data alignment executes five processes: common formatting, time propagation, coordinate conversion, misalignment compensation, and evidential conditioning. In the common formatting process, the data are tested and transformed to system-standard data units and types. In the time propagation function, the fused track data are updated to predict the expected location so that new sensor inputs can be associated with them. In coordinate conversion, the data that come from separate sensors are converted to a common coordinate system; in this study, the coordinate systems for the radars and infrared sensors differ, but through data alignment they are converted to an Earth-centered Cartesian coordinate system. In the misalignment compensation, the data are corrected for the parallax between the sensors. In the evidential conditioning, confidence values are assigned to the data that come from each sensor [11].

2. Data Association 

In the data association function, the data that belong to the same target are associated for improved position estimation. Data association is executed in three steps [11]: hypothesis generation, hypothesis evaluation, and hypothesis selection. In hypothesis generation, the solution space is reduced to a practical number of candidate associations; feasibility gating of prior tracks or data clustering is used for this purpose. Kinematic, parametric, and a priori data are used to evaluate these hypotheses, and a score is assigned to each hypothesis. Hypothesis selection uses these scores to select one or more sets of data to be used in the next step, state estimation. Data association is not used in this study since only one target is being tracked [11].

3. State Estimation 

The state estimation function estimates and predicts the target position using the data that come from data association. Many algorithms exist for estimating the position of the target; those used in this study include arithmetic averaging, weighting (using the S/N), Kalman filtering, and a Bayesian technique. These algorithms are described in detail in Chapter IV.


C. PROCESSING ARCHITECTURES 

There are three basic architectures for multisensor data fusion: direct fusion of sensor data, fusion of feature vectors that represent the sensor data, and decision level sensor fusion.

1. Direct Fusion

Direct fusion uses raw data to fuse the sensor outputs. The direct fusion architecture is shown in Figure 21. The data received from the sensors are first subjected to the data association function. The associated data are then fused together, followed by the feature extraction operation, whose results are sent to position estimation. The fused positions are sent to resource management, and the guidance and control unit guides the interceptor missile to intercept the ballistic missile.






Figure 21. Direct level fusion (After Ref. 11, pg. 1-7)


Direct fusion has the potential to achieve the best target position estimation. Another advantage of direct fusion is that, at the end of the fusion process, targets can be detected even if no single sensor can detect the target by itself.

Although the direct fusion architecture gives the best results, it also has disadvantages. The data flow from the sensors to the fusion center is large, so the bandwidth requirements are high, and direct fusion demands the highest computational effort. Because position estimates are formed by evaluating the raw sensor data, registration accuracy plays an important role, and direct fusion is very sensitive to registration errors. The sensors are also required to be the same or similar; in this work, they are not. Since a variety of sensors (passive infrared sensors and active radar sensors) are used in this thesis for the ballistic missile interception task, the direct fusion architecture is not considered.

2. Feature Level Fusion 

Feature level fusion combines the features of the targets that are detected in each sensor's domain. The feature level fusion architecture is shown in Figure 22. The sensors must first detect the targets to be able to use this fusion process. The sensors extract the features for each target, and these features create a feature space for target detection [12]. The sensors process and extract the features of the measurement


outputs individually and then these processed data are sent to the association module in 
the fusion center. After the data are associated, they are fused in the feature level fusion 
center. A joint decision is formed and sent to the resource management module. 




Figure 22. Feature level fusion (After Ref. 11, pg. 1-7)


This type of fusion reduces the demand on registration, and the bandwidth 
required for the data to flow from each sensor to the fusion center is low compared to 
direct fusion. 

This kind of fusion is often used with infrared sensors, but in the ballistic missile interception mission not all the sensors are infrared. The features that the radar sensors and infrared sensors extract are different (the infrared sensors use the plume temperature, while the radar sensors use the radar cross section of the ballistic missile). As a result, we do not use this kind of fusion processing in this work.

3. Decision Level Fusion 

Decision level fusion combines the local decisions of independent sensors. The decision level fusion architecture is shown in Figure 23. For this kind of fusion process, the sensors must make preliminary decisions: the raw data are processed in each sensor, and only the results containing the position estimate of the ballistic missile are sent to the fusion center. In the fusion center, the processed position data of the ballistic missile are associated. These associated data are then fused to achieve a more



accurate position estimation. The fused data are sent to the resource management module, 
and the interceptor is guided accordingly. 



Figure 23. Decision level fusion (After Ref. 11, pg. 1-7)


This data fusion process is less sensitive to spatial misregistration than the direct and feature level fusion approaches [11]; that is, it allows a more accurate association of target data that contain registration errors. One advantage of this type of data fusion is the simplicity of adding sensors to, or removing them from, the fusion system. The variety of the sensors does not affect the results of this fusion architecture.

In this chapter, the JDL fusion model was considered. The data fusion node design and the data alignment, data association, and state estimation functions of the fusion node were described. The direct fusion, feature level fusion, and decision level fusion architectures were described, and their relative advantages and disadvantages for the ballistic missile intercept in the boost phase were presented. Decision level fusion is selected as the architecture for the algorithms described in Chapter IV.


IV. DECISION LEVEL FUSION ALGORITHMS


A decision level fusion architecture is used in the simulation. The decision level fusion algorithms examined are described below; they include an averaging technique, a weighted averaging technique, Kalman filtering, and a Bayesian technique.


A. AVERAGING TECHNIQUE 

The first fusion algorithm investigated is an averaging technique. The sensors process their own data and send their decisions to the fusion center; in this work, each decision is the position of the ballistic missile sensed by that sensor. The averaging technique computes the fused position as the arithmetic mean

p_a(x, y, z) = [ p_1(x_1, y_1, z_1) + p_2(x_2, y_2, z_2) ] / 2     (4-1-1)

where p_a(x, y, z) is the position estimate of the averaging technique, and p_1(x_1, y_1, z_1) and p_2(x_2, y_2, z_2) are the position estimates of RF1 and RF2, respectively.
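A minimal Python sketch of (4-1-1) (the thesis implements this in MATLAB; `fuse_average` is an assumed name):

```python
# Eq. (4-1-1): the fused position is the component-wise arithmetic mean
# of the two radars' position estimates p1 and p2.
def fuse_average(p1, p2):
    return tuple((a + b) / 2 for a, b in zip(p1, p2))
```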

An example of the sensed positions from both RF1 and RF2 is shown in Figure 24: the target's true position, the positions estimated by RF1 and RF2, and the arithmetic mean of the sensed positions are shown at an arbitrary time instant.

Figure 24. True target position, sensed positions by radars and arithmetic mean of sensed positions of the target

The RF1 sensor senses the target with a 158-m position error, the RF2 sensor senses it with a 64-m error, and the fused (arithmetic mean) position is 95 m away from the true target position. In this case the fused position of the target is worse than the RF2 result. This situation, however, does not always hold; for example, if the error of the position sensed by one radar is opposite in direction to that of the other radar, the arithmetic mean position will be better than that given by either radar individually.

We examine the cumulative error sensed by the radars and the arithmetic mean position through simulation. The rms errors of RF1 and RF2, computed using (2-2-5) and obtained from the MATLAB simulation, are shown in Figure 25, and the rms error of the arithmetic mean position is shown in Figure 26. We observe that the cumulative position estimation error of RF2 is the worst of the three; the arithmetic mean position is better than RF2's result, but RF1 gives the best result among the three.



Figure 25. The rms error of (a) RF1 and (b) RF2 



Figure 26. The rms error of averaging technique 

The average rms error ē_rms for each radar and for the averaging technique is shown in Table 4. The average rms error is computed as

ē_rms = (1/N) Σ_{n=1}^{N} e_rms(n)     (4-1-2)

where e_rms(n) is the rms error at time n, and N is the number of data points. From Table 4, the averaging technique result is worse than that of RF1. A fusion algorithm is expected to provide a better solution than either of the sensors alone, but in this case the average rms error of the averaging technique is worse than that of RF1.
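Equation (4-1-2) is the time average of the per-sample rms errors; a minimal sketch (`average_rms` is an assumed name):

```python
# Eq. (4-1-2): average of the rms error samples e_rms(n), n = 1..N.
def average_rms(e_rms_samples):
    return sum(e_rms_samples) / len(e_rms_samples)
```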




                  ē_rms (m)
RF1               99
RF2               157
Averaging tech.   102

Table 4. Average rms error for radars and averaging technique



B. WEIGHTED AVERAGING TECHNIQUE 

The next sensor fusion algorithm that we investigate is the weighted averaging method. In this algorithm, the sensors process their own data and send their decisions, the positions of the ballistic missile sensed by each sensor, to the fusion center.

The weighted averaging algorithm is similar to the averaging method; however, here we weight the sensor data by the radar sensors' S/N at every time sample. The S/N for the radar sensors is calculated using (2-2-2). In the weighted averaging algorithm, the higher the S/N, the larger the weight given to that target estimate. The weighted average of the target position is calculated using


p_w(x, y, z) = [ p_1(x_1, y_1, z_1) (S/N)_1 + p_2(x_2, y_2, z_2) (S/N)_2 ] / [ (S/N)_1 + (S/N)_2 ]     (4-2-1)

where p_w(x, y, z) is the fused target position vector of the weighted averaging technique, p_1(x_1, y_1, z_1) is the position vector of the target sensed by RF1, (S/N)_1 is the signal-to-noise ratio of RF1, p_2(x_2, y_2, z_2) is the position vector of the target sensed by RF2, and (S/N)_2 is the signal-to-noise ratio of RF2.
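A Python sketch of the S/N-weighted rule (4-2-1), with assumed names and linear (not dB) S/N values:

```python
# Eq. (4-2-1): each radar's position estimate is weighted by its linear
# signal-to-noise ratio, so the cleaner measurement dominates the fusion.
def fuse_weighted(p1, snr1, p2, snr2):
    total = snr1 + snr2
    return tuple((a * snr1 + b * snr2) / total for a, b in zip(p1, p2))
```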

An example of these sensed and weighted average positions is shown in Figure 27. In this example, RF1 senses the target with a 125-m position error, RF2 senses the same target with a 44-m error, and the weighted average position is 36 m away from the true target position. Here, the fused position of the target is better than both radar sensor results.

The cumulative error sensed by the radars is examined, and the weighted average position is computed in a MATLAB simulation. We observe that the cumulative position estimation error for the weighted averaging technique is better than that of both radars. The rms error plots of the radar sensors are shown again in Figure 28, and the results of the weighted averaging technique are shown in Figure 29. Comparing Figures 28 and 29, the weighted averaging technique provides the best position estimate among the three.



Figure 27. True target position, sensed positions by radars and weighted averaging position 

of the target 



Figure 29. The rms error of the weighted averaging technique estimation of target position



The average rms errors ē_rms, computed as given by (4-1-2), are shown in Table 5. For the averaging technique (see Table 4), the average rms error was 102 m, which was worse than that of RF1. The weighted averaging technique provides a significant improvement over these results: its average error is 68 m.



                                ē_rms (m)
RF1                             99
RF2                             157
Weighted averaging technique    68

Table 5. Average rms error for radar sensors and weighted averaging technique



C. KALMAN FILTERING 

Another sensor fusion approach to achieving better position estimation uses Kalman filtering. Kalman filtering minimizes the fluctuations that occur during the sensing of the ballistic missile; these fluctuations are due to the random errors described in Chapter II.

To apply the Kalman filter to the sensor data for estimating the ballistic missile position, the missile dynamics must be modeled by a set of state equations. In this study, a discrete-time Markov model is used [13]:

x(t) = F x(t−1) + w(t−1)     (4-3-1)

where x(t) represents the state vector of the ballistic missile at time t, F is the transition matrix, and w(t) is white Gaussian noise with zero mean and the following properties [14]:

E[w] = 0,   E[w wᵀ] = Q     (4-3-2)

where E[·] represents the expected value, and Q is the covariance matrix of the process noise. The sensor measurements of the ballistic missile's position must be linearly related to the system state variables according to

z(t−1) = H(t−1) x(t−1) + v(t−1)     (4-3-3)

where z(t) is the measurement vector, H(t) is the measurement matrix, and v(t) is white Gaussian measurement noise with zero mean and the following properties:

E[v] = 0,   E[v vᵀ] = R     (4-3-4)

where R is the covariance matrix of the measurement noise v(t). Using (4-3-1) and (4-3-3), the Kalman filter can be established. The state prediction is obtained as

x̂(t|t−1) = F(t−1) x̂(t−1|t−1).     (4-3-5)

If P is the covariance matrix of the estimation errors, computed recursively as



P(t|t−1) = F(t−1) P(t−1|t−1) F(t−1)^T + Q(t−1),     (4-3-6)

the Kalman gain can be calculated using the following formula:

K(t) = P(t|t−1) H(t)^T [ H(t) P(t|t−1) H(t)^T + R ]^{-1}.     (4-3-7)

The optimum estimate of the ballistic missile state vector is given by

x̂(t|t) = x̂(t|t−1) + K(t) [ z(t) − H(t) x̂(t|t−1) ]     (4-3-8)

and the error covariance update is

P(t|t) = [ I − K(t)H(t) ] P(t|t−1) [ I − K(t)H(t) ]^T + K(t) R K(t)^T.     (4-3-9)


By applying these equations recursively, the updated state estimates are obtained.
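The recursion of (4-3-5) through (4-3-9) can be illustrated with a deliberately reduced one-dimensional case (F = H = 1), smoothing a single noisy channel such as range. The thesis filters full state vectors in MATLAB, so the scalar model, the function name, and the q and r values here are all illustrative assumptions.

```python
# 1-D Kalman recursion (F = H = 1) following Eqs. (4-3-6)-(4-3-9):
# predict the covariance, compute the gain, correct the state with the
# innovation, and shrink the covariance.
def kalman_1d(measurements, q=1e-4, r=1.0, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predicted covariance, Eq. (4-3-6)
        k = p / (p + r)            # Kalman gain, Eq. (4-3-7)
        x = x + k * (z - x)        # state update, Eq. (4-3-8)
        p = (1 - k) * p            # covariance update, Eq. (4-3-9)
        estimates.append(x)
    return estimates
```

Fed an alternating ±1 m disturbance around a constant 10-m value, this sketch settles near 10 m while the raw measurements keep fluctuating, which is the smoothing effect visible in Figures 30 through 33.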


The Kalman filter processes the measurements coming from the sensors in real 
time and smooths the outputs of the radar sensors’ range, elevation and azimuth 
information to obtain better target position estimates. The errors in range r, elevation φ, and azimuth θ are computed using

[ r_err, φ_err, θ_err ]^T = [ r − r̂, φ − φ̂, θ − θ̂ ]^T     (4-3-10)

where (r_err, φ_err, θ_err) are the error components, (r, φ, θ) are the true values, and (r̂, φ̂, θ̂) are

the measurements (sensor data) or estimates of the Kalman filter. The Kalman filtered 
range, elevation, and azimuth error of RF1 are shown in Figure 30. The blue lines 
represent the error for the range, elevation, and azimuth sensed by the RF1. The black 
line is the Kalman filtered error for the azimuth, elevation, and range for RF1. By using 
the Kalman filter, the fluctuations of the error have been reduced significantly in all three 
plots. 


The rms position error for RF1 can be computed by first converting from the 
spherical to the Cartesian coordinates and then using (2-2-5). The rms position errors for 
sensor data (blue line) and Kalman filtered data (black line) are shown in Figure 31. 





Clearly, the Kalman filter helps reduce the rms position error. Next, the Kalman filtered 
position estimates for both RF1 and RF2 will be fused using weighted averaging. 






Figure 30. Kalman filtered errors for RF1: (a) range, (b) elevation and (c) azimuth 




















































































Figure 31. Overall position error after using Kalman filter for RF1 

Figure 32 shows the range, elevation, and azimuth error plots for the sensor data (blue) and Kalman filtered data (black) for RF2. The fluctuations of the rms error are diminished in all three plots. Figure 33 shows the rms position error for the sensor (blue) and Kalman filtered (black) data. As in Figure 31, the Kalman filter helps reduce the rms position error of RF2 significantly.

















































Figure 32. Kalman filtered errors for RF2: (a) range, (b) elevation and (c) azimuth 



Figure 33. Overall position error after using Kalman filter for RF2 


































































































































To fuse the Kalman filtered radar sensor outputs, the weighted averaging technique is used: the Kalman filtered position estimates are combined using (4-2-1), with the signal-to-noise ratios serving as the weights for the Kalman filtered RF1 and RF2 outputs. The weighted average results of the Kalman filtered rms error are shown in Figure 34.





Figure 34. The rms error for weighted averaging technique after RF1 and RF2 outputs are 

Kalman filtered 


Table 6 lists the average rms errors ē_rms for RF1, RF2, and the weighted-average Kalman filtered results. The average error for Kalman filtering is 52 m, which is about half that of RF1 and one third that of RF2. Recall from Table 4 that the averaging technique produced an average rms error of 102 m, and from Table 5 that the weighted averaging technique produced 68 m. The Kalman filtering algorithm is clearly better than both.





























































                    ē_rms (m)
RF1                 99
RF2                 157
Kalman filtering    52

Table 6. Average rms error for radar sensors and Kalman filtering

D. BAYESIAN TECHNIQUE 

The application of the Bayesian algorithm to the sensor fusion problem begins with assigning probabilities of the ballistic missile being at each point in space, according to the sensor outputs. These probabilities are used to find the most probable position of the ballistic missile target at that time. By combining the probabilities derived from both the radar and infrared sensor outputs, we can estimate the target position that has the highest probability.

1. Theory

Bayes' rule for probability density functions (PDFs) is given by [11]

f_X(x|y) = f_Y(y|x) f_X(x) / f_Y(y)     (4-4-1)

where f_X(x|y) and f_Y(y|x) are the conditional PDFs, f_X(x) and f_Y(y) are the marginal PDFs, y represents the measurements, and x the true position. Here, f_X(x) is the a priori PDF of the true position of the target. The conditional PDF f_X(x|y) becomes the new a priori PDF as new sensor measurements y are made available. If a priori information is not available, a uniform distribution is used for the a priori PDF f_X(x) [2]. Bayes' rule is applied recursively as new position data become available from sensor measurements.






2. Implementation

In Chapter II, we discussed the probability density functions of the radar sensor data. The error in estimating the position of the target using a radar sensor is modeled as Gaussian noise, with variances calculated using (2-2-3) and (2-2-4). The probability density function of the target position within the infrared sensors' IFOV intersection was determined in Chapter II to be uniform; the intersection volume of the two infrared sensors' instantaneous fields of view is illustrated in Figure 10. The intersection of the IR and radar probability density functions is shown in Figure 35: the PDFs of the radar sensors' measurements are Gaussian, and the PDF over the infrared sensors' IFOV intersection volume is uniform.

The probability of the target being within a small interval can be calculated for each PDF by integrating over that interval, as shown in Figure 35. For example, the probability of the target being between a_1 and a_2 can be found by integrating the shaded area.




Figure 35. The PDFs of radars’ measurements and infrared sensors’ IFOV intersection 

volume 

The probability that the target is present in any small interval is calculated for 
each sensor. The probabilities need to be calculated only over the region of overlap 
(shown in red in Figure 35) as the product is zero outside of the overlap. These 


44 









probabilities are represented as P_RF1(Y | X) for RF1, P_RF2(Y | X) for RF2, and P_IR(Y | X)
for the intersection volume of the IR sensors' IFOVs. First, the overall P(Y | X) is obtained
using

P(Y | X) = P_RF1(Y | X) x P_RF2(Y | X) x P_IR(Y | X)    (4-4-2)

Then, based on (4-4-1), the probability of target position X given the measurements Y is
given by

P(X | Y) = P(Y | X) P(X) / P(Y)    (4-4-3)

where P(X) and P(Y) are the marginal probabilities of the target position X and the measured
data Y, respectively.

In the simulation, the probabilities are computed throughout the ballistic missile's
flight. Bayes' rule is applied to the sensor measurements, and the position with the
largest probability of containing the ballistic missile is found. This position, having the
highest probability, is the Bayesian technique's estimate of the target position.
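The product-and-normalize step of (4-4-2) and (4-4-3) can be sketched in a one-dimensional Python toy example; the positions, sigmas, and IR overlap interval below are hypothetical, not values from the simulation:

```python
import numpy as np

x = np.linspace(0.0, 200.0, 2001)  # candidate target positions (m)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

p_rf1 = gauss(x, 95.0, 30.0)                       # P_RF1(Y|X), Gaussian
p_rf2 = gauss(x, 110.0, 40.0)                      # P_RF2(Y|X), Gaussian
p_ir = ((x >= 80.0) & (x <= 120.0)).astype(float)  # P_IR(Y|X): uniform on overlap

p_joint = p_rf1 * p_rf2 * p_ir       # (4-4-2): zero outside the IR overlap
posterior = p_joint / p_joint.sum()  # (4-4-3) with a uniform prior P(X)
x_hat = x[np.argmax(posterior)]      # highest-probability target position
```

The uniform IR term acts as a gate: it discards radar probability mass outside the IFOV intersection, which is why the product only needs to be evaluated over the overlap region.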

3. Results 

Using Bayes' rule (4-4-3), the position estimates of the Bayesian technique are
found. The rms position error for these estimates can be computed using (2-2-5). The
rms position errors of the Bayesian technique are shown in Figure 36.


(Figure: rms error (m) vs. flight time, 0 to 3 min)

Figure 36. The rms position error using the Bayesian technique.

45 















The results of the Bayesian technique are clearly the best among the four fusion
algorithms discussed here. To achieve a better result, one can use a combination of these
algorithms. The average rms error for the Bayesian technique (see Table 7) decreased to
19.5 m. The average errors for the averaging technique, the weighted averaging technique,
and the Kalman filtering technique were 102 m, 68 m, and 52 m, respectively. This is a
significant improvement in ballistic missile position estimation using Bayesian-technique-based
sensor fusion.



Technique               e_rms (m)
RF1                     99
RF2                     157
Bayesian technique      19.5

Table 7. Average error for radar sensors and Bayesian technique

In this chapter, four different fusion algorithms are investigated. First, the 
averaging technique is examined. The average rms error of the averaging technique is 
worse than that of RF1. The second algorithm investigated is the weighted averaging 
technique. The average rms error of this algorithm is better than that of the averaging 
technique. For the Kalman filtering algorithm, the average rms error is better than the 
previous two algorithms. The fourth algorithm is the Bayesian technique, which gives the 
best results. 


46 




V. CONCLUSION 


In this thesis, multiple-sensor fusion in the boost phase of a ballistic missile
intercept is examined. Measurements from RF and IR sensors are considered for fusion
here. The fused sensor outputs lead to better guidance of the intercept missile and
tracking of the ballistic missile. A MATLAB simulation is used to model the ballistic 
missile and the infrared and radar sensors. Four different data fusion algorithms are 
simulated and their results compared. 

A. CONCLUSIONS 

From the results of the IR sensor analysis, the 3 μm to 5 μm band should be used in
the design of infrared sensors for detecting and tracking the ballistic missile. The
infrared sensor satellites should be low earth orbit (LEO) satellites as the higher orbital 
satellites increase the IFOV intersection volume. The signal-to-clutter ratio, which plays 
an important role in detecting the ballistic missile, must be high enough to detect and 
track the ballistic missile for the entire boost phase. In this thesis, the triangulation of the 
instantaneous field of view for the infrared sensors is used to obtain the range 
information. 

For the radar sensors, the positions of the radar sensors play an important role in 
detecting and tracking the ballistic missile. 

The decision level fusion for combining the sensor outputs is considered in this 
work. Four sensor fusion algorithms are investigated. In the averaging technique, the 
fused results are not always better than those of the individual sensor outputs. The
weighted averaging technique performs better than the averaging technique. The Kalman
filtering approach helps decrease the sensor rms errors significantly. The Bayesian
technique has the best performance of all four fusion algorithms investigated here.

B. RECOMMENDATIONS 

This thesis investigated a single target scenario. In a future study, fusion 
algorithms for intercepting multiple ballistic missiles in the boost phase may be 
investigated. The issues of association and correlation need to be addressed. 


47 



In this thesis, the interceptor missile is not included; only the detection, tracking,
and position estimation of the ballistic missile are studied. In a future study, the effects of
sensor fusion on the interceptor missile’s kill vehicle effectiveness may be quantified. 

The ballistic missile may use electronic attack techniques, such as jamming, 
throughout the boost phase. The effects of electronic attack on fusion performance may 
be studied in a future work. 


48 



APPENDIX MATLAB CODES 


The MATLAB code used to simulate the sensors and the ballistic missile and to
compute the fusion algorithms is included in this appendix.

%gokhan humali 2004 NPS 

%sensor fusion for boost phase interception of ballistic missile 

clear; 

clc; 


%Constants

Re = 6371e3;             %Earth radius (m)
Me = 5.9742e24;          %Earth mass (kg)
Gc = 6.673e-11;          %Gravitational constant (m^3 kg^-1 s^-2)
g0 = Gc * Me / (Re^2);   %Gravitational acceleration (sea level)
c = 299792458;           %Speed of light (m/s)

t = 0;                   %Time (s)
dt = 0.1;                %Time increment (s)
posEarth = [0; 0; 0];    %Earth's center position
degRad = pi/180;         %Degree to radian conversion

%target information

balMisLatH = 'N';    %Bal. Mis. launch site latitude hemisphere
balMisLatD = 41;     %Bal. Mis. launch site latitude (degree)
balMisLatM = 00;     %Bal. Mis. launch site latitude (minute)

balMisLonH = 'E';    %Bal. Mis. launch site longitude hemisphere
balMisLonD = 129;    %Bal. Mis. launch site longitude (degree)
balMisLonM = 00;     %Bal. Mis. launch site longitude (minute)

%change the geographical coordinates of the ballistic missile to cartesian

[thetaBM phiBM] = geo2sph(balMisLatH, balMisLatD, balMisLatM, balMisLonH, ...
    balMisLonD, balMisLonM);

[xBM, yBM, zBM] = sph2car(thetaBM, phiBM, Re);

posBM = [xBM; yBM; zBM];
posLaunchFac = posBM;    %position of ballistic missile launch facility
accBM = [0; 0; 0];       %Ballistic missile acceleration

balMisLauAngAzDeg = 50.1;   %Bal. Mis. launch angle (az) (from true north)
balMisLauAngElDeg = 84;     %Bal. Mis. launch angle (elevation)

balMisLauAngAz = balMisLauAngAzDeg * degRad;   %Bal. Mis. launch ang (az)(rad)
balMisLauAngEl = balMisLauAngElDeg * degRad;   %Bal. Mis. launch ang (el)(rad)


balMisGrWeiStg1 = 48988;   %Bal. Mis. stage 1 weight (kg)
balMisGrWeiStg2 = 27669;   %Bal. Mis. stage 2 weight (kg)
balMisGrWeiStg3 = 7711;    %Bal. Mis. stage 3 weight (kg)
balMisGrWeiStg4 = 2268;    %Bal. Mis. stage 4 weight (kg)

balMisFuWeiStg1 = 41640;   %Bal. Mis. stage 1 fuel weight (kg)
balMisFuWeiStg2 = 23972;   %Bal. Mis. stage 2 fuel weight (kg)
balMisFuWeiStg3 = 6554;    %Bal. Mis. stage 3 fuel weight (kg)
balMisFuWeiStg4 = 0;       %Bal. Mis. stage 4 fuel weight (kg)

balMisISPstg1 = 300;   %Bal. Mis. ISP for stage 1
balMisISPstg2 = 300;   %Bal. Mis. ISP for stage 2
balMisISPstg3 = 300;   %Bal. Mis. ISP for stage 3
balMisISPstg4 = 0;     %Bal. Mis. ISP for stage 4

balMisBurTimStg1 = 60;   %Bal. Mis. burn time for stage 1
balMisBurTimStg2 = 60;   %Bal. Mis. burn time for stage 2
balMisBurTimStg3 = 60;   %Bal. Mis. burn time for stage 3
balMisBurTimStg4 = 1;    %Bal. Mis. burn time for stage 4

%total mass of ballistic missile
totMass = balMisGrWeiStg1 + balMisGrWeiStg2 + balMisGrWeiStg3 + balMisGrWeiStg4;

%dM/dt of ballistic missile
dMdtStg1 = balMisFuWeiStg1 / balMisBurTimStg1;
dMdtStg2 = balMisFuWeiStg2 / balMisBurTimStg2;
dMdtStg3 = balMisFuWeiStg3 / balMisBurTimStg3;
dMdtStg4 = balMisFuWeiStg4 / balMisBurTimStg4;

%canister weight of ballistic missile
canWeiStg1 = balMisGrWeiStg1 - balMisFuWeiStg1;
canWeiStg2 = balMisGrWeiStg2 - balMisFuWeiStg2;
canWeiStg3 = balMisGrWeiStg3 - balMisFuWeiStg3;
canWeiStg4 = balMisGrWeiStg4 - balMisFuWeiStg4;

%next stage time
nexStgTime1 = 0;
nexStgTime2 = nexStgTime1 + balMisBurTimStg1;
nexStgTime3 = nexStgTime2 + balMisBurTimStg2;
nexStgTime4 = nexStgTime3 + balMisBurTimStg3;

%ballistic missile velocity and thrust unit vectors

unWeiBalMis = (posEarth - posBM) ./ Re;   %Weight unit vector

[vBMx vBMy vBMz] = top2car(balMisLauAngAz, balMisLauAngEl, balMisLatH, ...
    balMisLatD, balMisLatM, balMisLonH, balMisLonD, balMisLonM);

unBMvel = [vBMx; vBMy; vBMz];   %Velocity unit vector
unThrBM = unBMvel;              %Thrust unit vector

stgBM = 1;   %Ballistic missile stage

magThrBM = dMdtStg1 * g0 * balMisISPstg1;   %magnitude of BM thrust vector
magVelBM = 17;                              %arbitrary silo exit velocity

velBM = magVelBM * unBMvel;   %velocity of ballistic missile
grndTrckBM = posBM;           %ground track of BM

%target information ends


%infrared sensor information

hIR1 = 1000e3;   %Height of infrared sensor 1 (above ground) (m)
hIR2 = 1000e3;   %Height of infrared sensor 2 (above ground) (m)

IR1LatH = 'N';   %infrared sensor (IR1) latitude hemisphere
IR1LatD = 36;    %IR1 latitude (degree)
IR1LatM = 00;    %IR1 latitude (minute)

IR1LonH = 'E';   %IR1 longitude hemisphere
IR1LonD = 132;   %IR1 longitude (degree)
IR1LonM = 00;    %IR1 longitude (minute)

IR2LatH = 'N';   %infrared sensor (IR2) latitude hemisphere
IR2LatD = 46;    %IR2 latitude (degree)
IR2LatM = 00;    %IR2 latitude (minute)

IR2LonH = 'E';   %IR2 longitude hemisphere
IR2LonD = 132;   %IR2 longitude (degree)
IR2LonM = 00;    %IR2 longitude (minute)

%change the geographical coordinates of the IR sensors to cartesian

[thetaIR1 phiIR1] = geo2sph(IR1LatH, IR1LatD, IR1LatM, IR1LonH, IR1LonD, ...
    IR1LonM);

[xIR1, yIR1, zIR1] = sph2car(thetaIR1, phiIR1, (Re + hIR1));
posIR1 = [xIR1; yIR1; zIR1];

[thetaIR2 phiIR2] = geo2sph(IR2LatH, IR2LatD, IR2LatM, IR2LonH, IR2LonD, ...
    IR2LonM);

[xIR2, yIR2, zIR2] = sph2car(thetaIR2, phiIR2, (Re + hIR2));
posIR2 = [xIR2; yIR2; zIR2];

IFOV1 = 20e-6;   %IFOV of infrared sensor #1
IFOV2 = 20e-6;   %IFOV of infrared sensor #2

%infrared sensor information ends


%Ballistic Missile RCS information for X band radar (from kuzun thesis)

load POstage1_X;          %load rcs data of bal. mis. for stage 1 (x band)
balMisRCSstg1X = Sth;
load POstage2_X;          %load rcs data of bal. mis. for stage 2 (x band)
balMisRCSstg2X = Sth;
load POstage3_X;          %load rcs data of bal. mis. for stage 3 (x band)
balMisRCSstg3X = Sth;
load POstage4_X;          %load rcs data of bal. mis. for stage 4 (x band)
balMisRCSstg4X = Sth;

rcsOrgAngMono = 0:360;
rcsInc = 0:0.1:360;       %angle increments for interpolation

%interpolation of rcs data to 0.1 degree increments
rcsXstg1 = interp1(rcsOrgAngMono, balMisRCSstg1X, rcsInc);
rcsXstg2 = interp1(rcsOrgAngMono, balMisRCSstg2X, rcsInc);
rcsXstg3 = interp1(rcsOrgAngMono, balMisRCSstg3X, rcsInc);
rcsXstg4 = interp1(rcsOrgAngMono, balMisRCSstg4X, rcsInc);


%radar sensor information

RF1LatH = 'N';   %radar sensor 1 (RF1) latitude hemisphere
RF1LatD = 44;    %RF1 latitude (degree)
RF1LatM = 34;    %RF1 latitude (minute)

RF1LonH = 'E';   %RF1 longitude hemisphere
RF1LonD = 130;   %RF1 longitude (degree)
RF1LonM = 48;    %RF1 longitude (minute)

RF2LatH = 'N';   %radar sensor 2 (RF2) latitude hemisphere
RF2LatD = 37;    %RF2 latitude (degree)
RF2LatM = 21;    %RF2 latitude (minute)

RF2LonH = 'E';   %RF2 longitude hemisphere
RF2LonD = 135;   %RF2 longitude (degree)
RF2LonM = 04;    %RF2 longitude (minute)

%change the geographical coordinates of the radar sensors to cartesian

[thetaRF1 phiRF1] = geo2sph(RF1LatH, RF1LatD, RF1LatM, RF1LonH, RF1LonD, ...
    RF1LonM);

[xRF1, yRF1, zRF1] = sph2car(thetaRF1, phiRF1, Re);
posRF1 = [xRF1; yRF1; zRF1];

[thetaRF2 phiRF2] = geo2sph(RF2LatH, RF2LatD, RF2LatM, RF2LonH, RF2LonD, ...
    RF2LonM);

[xRF2, yRF2, zRF2] = sph2car(thetaRF2, phiRF2, Re);
posRF2 = [xRF2; yRF2; zRF2];


%radar sensor specifications

PtR1 = 1e6;      %RF1 peak power (W)
PtR2 = 1e6;      %RF2 peak power (W)
DR1 = 4.15;      %RF1 antenna diameter (m)
DR2 = 4.15;      %RF2 antenna diameter (m)
fR1 = 10e9;      %RF1 frequency (Hz)
fR2 = 10e9;      %RF2 frequency (Hz)
roR1 = 0.68;     %RF1 antenna efficiency
roR2 = 0.7;      %RF2 antenna efficiency
tauR1 = 50e-6;   %RF1 pulsewidth
tauR2 = 50e-6;   %RF2 pulsewidth
FR1 = 4;         %RF1 noise figure
FR2 = 4;         %RF2 noise figure
nR1 = 20;        %RF1 # of pulses being integrated
nR2 = 20;        %RF2 # of pulses being integrated
kT0 = 4e-21;     %Watts/Hz
kAng = 1.7;      %k value for angle
kRan = 1.7;      %k value for range
lamR1 = c ./ fR1;   %wavelength of RF1
lamR2 = c ./ fR2;   %wavelength of RF2

AeR1 = pi .* ((DR1 ./ 2)^2);   %RF1 antenna physical area
AeR2 = pi .* ((DR2 ./ 2)^2);   %RF2 antenna physical area

GR1 = (4 * pi * roR1 * AeR1 / (lamR1^2));   %RF1 antenna gain
GR2 = (4 * pi * roR2 * AeR2 / (lamR2^2));   %RF2 antenna gain
beamWR1Deg = 65 * lamR1 / DR1;              %RF1 beamwidth (degree)
beamWR2Deg = 65 * lamR2 / DR2;              %RF2 beamwidth (degree)
beamWR1 = beamWR1Deg * degRad;              %RF1 beamwidth (radian)
beamWR2 = beamWR2Deg * degRad;              %RF2 beamwidth (radian)
%radar sensor information ends


%initial values for misc variables

magDiffBM_RF1 = 0;    %mag of dif of true BM position and sensed pos. by RF1
magDiffBM_RF2 = 0;    %mag of dif of true BM position and sensed pos. by RF2
magDifAritMean = 0;   %mag of dif of true BM pos and arit mean of RF1 and RF2 results
magDifWeiAve = 0;     %mag of dif of true BM pos and weighted ave of RF1 and RF2 results
magDifWeiIR = 0;      %mag of dif of true BM pos and weighted ave of RF1 and RF2 in IR volume

%Arrays

timeArr = [];         %Simulation time array
posArrBM = [];        %Ballistic missile position array
grndTrckArrBM = [];   %Ballistic missile ground track array
distArrBM = [];       %Ballistic missile ground distance array
velArrBM = [];        %Ballistic missile velocity array

difArrBM_RF1 = [];       %Array of difference between true pos of BM and RF1 sensed
difArrBM_RF2 = [];       %Array of difference between true pos of BM and RF2 sensed
difAritMeanBM_RF = [];   %Array of diff between true pos of BM and arit mean of RF sensor outputs
difWeiArrBM_RF = [];     %Array of diff between true pos of BM and weighted ave. pos of RF sensor outputs
difWeiArrIR = [];        %Array of diff between true pos of BM and weighted ave pos of RF sensor outputs using IR volume

flag1 = 1;

while t < nexStgTime4

    %assign ISPT and dMdt values for each stage

    if t < nexStgTime2
        if flag1 == 1
            ISPT = balMisISPstg1;
            dMdt = dMdtStg1;
            flag1 = 2;
        end
        stageBM = 1;
    elseif (nexStgTime2 <= t) & (t < nexStgTime3)
        if flag1 == 2
            totMass = totMass - canWeiStg1;
            ISPT = balMisISPstg2;
            dMdt = dMdtStg2;
            flag1 = 3;
        end
        stageBM = 2;
    elseif (nexStgTime3 <= t) & (t < nexStgTime4)
        if flag1 == 3
            totMass = totMass - canWeiStg2;
            ISPT = balMisISPstg3;
            dMdt = dMdtStg3;
            flag1 = 4;
        end
        stageBM = 3;
    else
        totMass = totMass - canWeiStg3;
        ISPT = balMisISPstg4;
        dMdt = dMdtStg4;
        stageBM = 4;
    end


    %magnitude of position vector of ballistic missile
    magPosBM = sqrt(posBM(1)^2 + posBM(2)^2 + posBM(3)^2);

    %unit vector of ballistic missile position vector
    unPosBM = posBM / magPosBM;

    gBM = (Gc * Me) / (magPosBM^2);   %gravitational acceleration of BM

    velBM = velBM + accBM * dt;       %velocity vector of BM

    %magnitude of velocity vector of BM
    magVelBM = sqrt(velBM(1)^2 + velBM(2)^2 + velBM(3)^2);
    unBMvel = velBM / magVelBM;       %unit vector of vel vec of BM

    magWeiBM = totMass * gBM;         %magnitude of weight vector of ball missile
    unMagWeiBM = -unPosBM;            %unit vec of weight vector of ball missile

    weiVec = unMagWeiBM * magWeiBM;   %weight vector of ballistic missile

    magThrBM = dMdt * gBM * ISPT;     %magnitude of thrust vector of ball missile
    unThrBM = unBMvel;                %unit vector of thrust vec of ball missile
    thrBM = magThrBM * unThrBM;       %thrust vector of ballistic missile

    totForceBM = weiVec + thrBM;

    accBM = totForceBM / totMass;     %acceleration of ballistic missile

    totMass = totMass - dMdt * dt;    %total mass of the ballistic missile

    posBM = posBM + velBM * dt;       %new position of the ballistic missile

    LOSRF1BM = posBM - posRF1;        %line of sight of ballistic missile from RF1

    %magnitude of line of sight of ballistic missile from RF1
    magLOSRF1BM = sqrt(LOSRF1BM(1)^2 + LOSRF1BM(2)^2 + LOSRF1BM(3)^2);

    %unit vector of line of sight of ballistic missile from RF1
    unLOSRF1BM = LOSRF1BM / magLOSRF1BM;

    %angle of line of sight
    lookAngRF1 = acos(dot(unLOSRF1BM, unBMvel));

    LOSRF2BM = posBM - posRF2;        %line of sight of ballistic missile from RF2

    %magnitude of line of sight of ballistic missile from RF2
    magLOSRF2BM = sqrt(LOSRF2BM(1)^2 + LOSRF2BM(2)^2 + LOSRF2BM(3)^2);

    %unit vector of line of sight of ballistic missile from RF2
    unLOSRF2BM = LOSRF2BM / magLOSRF2BM;

    %angle of line of sight
    lookAngRF2 = acos(dot(unLOSRF2BM, unBMvel));

    RF1RCSIndex = round((lookAngRF1 * 180/pi) * 10) + 1;
    RF2RCSIndex = round((lookAngRF2 * 180/pi) * 10) + 1;

    %Determine RCS seen by RF sensors according to stage (after kuzun thesis)
    if stageBM == 1
        RCS1 = rcsXstg1(RF1RCSIndex);
        RCS2 = rcsXstg1(RF2RCSIndex);
    elseif stageBM == 2
        RCS1 = rcsXstg2(RF1RCSIndex);
        RCS2 = rcsXstg2(RF2RCSIndex);
    elseif stageBM == 3
        RCS1 = rcsXstg3(RF1RCSIndex);
        RCS2 = rcsXstg3(RF2RCSIndex);
    else
        RCS1 = rcsXstg4(RF1RCSIndex);
        RCS2 = rcsXstg4(RF2RCSIndex);
    end

    vecBM_RF1 = posBM - posRF1;   %Vector between ballistic missile and RF1

    %Magnitude of ballistic missile - RF1 vector
    magBM_RF1 = sqrt((vecBM_RF1(1)^2) + (vecBM_RF1(2)^2) + (vecBM_RF1(3)^2));

    %True angle between ballistic missile and RF1
    trueAngBM_RF1 = atan2(vecBM_RF1(2), vecBM_RF1(1));

    vecBM_RF2 = posBM - posRF2;   %Vector between target and RF2

    %Magnitude of ballistic missile - RF2 vector
    magBM_RF2 = sqrt((vecBM_RF2(1)^2) + (vecBM_RF2(2)^2) + (vecBM_RF2(3)^2));

    %True angle between ballistic missile and RF2
    trueAngBM_RF2 = atan2(vecBM_RF2(2), vecBM_RF2(1));

    SNR1 = PtR1 * (GR1^2) * (lamR1^2) * (10^(RCS1 / 10)) * tauR1 / (((4 * pi)^3) ...
        * kT0 * FR1 * (magBM_RF1^4));   %SNR of RF1

    SNR2 = PtR2 * (GR2^2) * (lamR2^2) * (10^(RCS2 / 10)) * tauR2 / (((4 * pi)^3) ...
        * kT0 * FR2 * (magBM_RF2^4));   %SNR of RF2

    %Sigma of angle error of RF1
    sigAngleRF1 = beamWR1 / (kAng * sqrt(2 * SNR1 * nR1));

    %Sigma of angle error of RF2
    sigAngleRF2 = beamWR2 / (kAng * sqrt(2 * SNR2 * nR2));

    errAzRF1 = sigAngleRF1 * randn;
    errAzRF2 = sigAngleRF2 * randn;

    errElRF1 = sigAngleRF1 * randn;   %Erroneous angle for RF1
    errElRF2 = sigAngleRF2 * randn;   %Erroneous angle for RF2

    %Sigma of range error of RF1
    sigRanRF1 = c * tauR1 / (2 * kAng * sqrt(2 * SNR1 * nR1));

    %Sigma of range error of RF2
    sigRanRF2 = c * tauR2 / (2 * kAng * sqrt(2 * SNR2 * nR2));

    errRanRF1 = sigRanRF1 * randn;   %Erroneous range for RF1
    errRanRF2 = sigRanRF2 * randn;   %Erroneous range for RF2

    %Position of target according to RF1 with error due to az, el and range sigmas
    errPosBM_RF1 = senPos(vecBM_RF1, magBM_RF1, posRF1, RF1LatH, ...
        RF1LatD, RF1LatM, RF1LonH, RF1LonD, RF1LonM, errAzRF1, ...
        errElRF1, errRanRF1);

    %Position of target according to RF2 with error due to az, el and range sigmas
    errPosBM_RF2 = senPos(vecBM_RF2, magBM_RF2, posRF2, RF2LatH, ...
        RF2LatD, RF2LatM, RF2LonH, RF2LonD, RF2LonM, errAzRF2, ...
        errElRF2, errRanRF2);

    %Magnitude of target position acc to RF1 with error
    magErrPosBM_RF1 = sqrt(errPosBM_RF1(1)^2 + errPosBM_RF1(2)^2 + errPosBM_RF1(3)^2);

    %Magnitude of target position acc to RF2 with error
    magErrPosBM_RF2 = sqrt(errPosBM_RF2(1)^2 + errPosBM_RF2(2)^2 + errPosBM_RF2(3)^2);

    %Position of target sensed by RF1 (from the origin of the earth)
    posBM_RF1 = posRF1 + errPosBM_RF1;

    %Position of target sensed by RF2 (from the origin of the earth)
    posBM_RF2 = posRF2 + errPosBM_RF2;

    diffBM_RF1 = posBM - posBM_RF1;   %Difference between bal mis and RF1 sensed

    %Magnitude of difference between bal mis and RF1 sensed
    magDiffBM_RF1 = sqrt((diffBM_RF1(1)^2) + (diffBM_RF1(2)^2) + (diffBM_RF1(3)^2));

    diffBM_RF2 = posBM - posBM_RF2;   %Difference between bal mis and RF2 sensed

    %Magnitude of difference between bal mis and RF2 sensed
    magDiffBM_RF2 = sqrt((diffBM_RF2(1)^2) + (diffBM_RF2(2)^2) + (diffBM_RF2(3)^2));

    %Midline of IR 1 with error up to IFOV/2
    errBM_IR1 = midIRline(posBM, posIR1, IFOV1);

    %Midline of IR 2 with error up to IFOV/2
    errBM_IR2 = midIRline(posBM, posIR2, IFOV2);

    %volume of intersection of IR 1 and IR 2 using volumeIR function
    volArray = volumeIR(posBM, posIR1, posIR2, errBM_IR1, errBM_IR2, IFOV1, ...
        IFOV2);

    %Arithmetic mean position of bal mis sensed by radars
    aritMean = (posBM_RF1 + posBM_RF2) ./ 2;

    %Difference between true position of bal mis and arithmetic mean position
    difAritMean = posBM - aritMean;

    %Magnitude of difAritMean
    magDifAritMean = sqrt(difAritMean(1)^2 + difAritMean(2)^2 + difAritMean(3)^2);

    %Weighted position of target, sensed by radars, using range
    weiPosBM_RF = (SNR1 * posBM_RF1 + SNR2 * posBM_RF2) / (SNR1 + SNR2);

    %Difference between true position of bal mis and weighted position
    difWeiPosBM = posBM - weiPosBM_RF;

    %Magnitude of difWeiPosBM
    magDifWeiPosBM = sqrt(difWeiPosBM(1)^2 + difWeiPosBM(2)^2 + difWeiPosBM(3)^2);


    %Shifting the position of sensed bal mis to the nearest point in the IR volume
    %using corrTRIR function
    finPosBM_RFIR1 = corrTRIR(volArray, errPosBM_RF1, errBM_IR1, ...
        errBM_IR2, posIR1, posIR2, IFOV1, IFOV2);

    finPosBM_RFIR2 = corrTRIR(volArray, errPosBM_RF2, errBM_IR1, ...
        errBM_IR2, posIR1, posIR2, IFOV1, IFOV2);

    %Magnitude of finPosBM_RFIR1
    magFinPosBM_RFIR1 = sqrt(finPosBM_RFIR1(1)^2 + finPosBM_RFIR1(2)^2 + ...
        finPosBM_RFIR1(3)^2);

    magFinPosBM_RFIR2 = sqrt(finPosBM_RFIR2(1)^2 + finPosBM_RFIR2(2)^2 + ...
        finPosBM_RFIR2(3)^2);

    %Weighted result of shifted positions
    weiBM_RFIR = (SNR1 * finPosBM_RFIR1 + SNR2 * finPosBM_RFIR2) / ...
        (SNR1 + SNR2);

    difWeiBM_RFIR = posBM - weiBM_RFIR;

    magDifWeiBM_RFIR = sqrt(difWeiBM_RFIR(1)^2 + difWeiBM_RFIR(2)^2 ...
        + difWeiBM_RFIR(3)^2);

    timeArr = [timeArr t];         %Time array
    posArrBM = [posArrBM posBM];   %position array of ballistic missile

    %Array of difference between true bal mis position and sensed by RF1
    difArrBM_RF1 = [difArrBM_RF1 magDiffBM_RF1];

    %Array of difference between true bal mis position and sensed by RF2
    difArrBM_RF2 = [difArrBM_RF2 magDiffBM_RF2];

    %Array of difference between true bal mis position and mean position
    difAritMeanBM_RF = [difAritMeanBM_RF magDifAritMean];

    %Array of difference between true bal mis position and weighted position
    difWeiArrBM_RF = [difWeiArrBM_RF magDifWeiPosBM];

    %Array of difference between true bal mis pos and corrected position using
    %weighted IR
    difWeiArrIR = [difWeiArrIR magDifWeiBM_RFIR];

    t = t + dt;   %Increase time

end


%Define Earth

[xE, yE, zE] = sphere(36);
xE = xE .* Re;
yE = yE .* Re;
zE = zE .* Re;

figure
axis equal;
axis([-7e6 7e6 -7e6 7e6 -7e6 7e6]);
view(280,30);
grid on;
hold on;
surf(xE, yE, zE);

%3D Target Trajectory

title('Trajectories')
xlabel('x(m)');
ylabel('y(m)');
zlabel('z(m)');

%Plot Target Trajectory

posArrayTx = posArrBM(1,:);
posArrayTy = posArrBM(2,:);
posArrayTz = posArrBM(3,:);
plot3(posArrayTx, posArrayTy, posArrayTz, 'y-');

plot3(posLaunchFac(1), posLaunchFac(2), posLaunchFac(3), 'yo')

plot3(posRF1(1), posRF1(2), posRF1(3), 'ko');
plot3(posRF2(1), posRF2(2), posRF2(3), 'co');
plot3(posIR1(1), posIR1(2), posIR1(3), 'co');
plot3(posIR2(1), posIR2(2), posIR2(3), 'co');

63 


figure   %figure of true bal mis position and sensed position by RF1
plot((timeArr / 60), difArrBM_RF1);
title('True bal mis position vs. sensed by RF1');
xlabel('Flight time (min)');
ylabel('rms error (m)');
axis([0 3 0 600]);
grid

figure   %figure of true bal mis position and sensed position by RF2
plot((timeArr / 60), difArrBM_RF2);
title('True bal mis position vs. sensed by RF2');
xlabel('Flight time (min)');
ylabel('rms error (m)');
axis([0 3 0 600]);
grid

%figure of true bal mis position and arithmetic mean position sensed by radar sensors
figure
plot((timeArr / 60), difAritMeanBM_RF);
title('True bal mis position vs. arithmetic mean of target sensed by RF1, RF2');
xlabel('Flight time (min)');
ylabel('rms error (m)');
axis([0 3 0 600]);
grid

%figure of true bal mis position and weighted position sensed by radar sensors
figure
plot((timeArr / 60), difWeiArrBM_RF);
title('True bal mis position vs. weighted position(RF1-RF2)');
xlabel('Flight time (min)');
ylabel('rms error (m)');
axis([0 3 0 600]);
grid

%figure of true bal mis position and shifted position using IR (weighted)
figure
plot((timeArr / 60), difWeiArrIR);
title('True bal mis position vs. final algorithm weighted(RF1, RF2, IR1, IR2)');
xlabel('Flight time (min)');
ylabel('rms error (m)');
axis([0 3 0 600]);
grid

%figure of infrared intersection volume and bal mis
figure
for i = 1:size(volArray,2)
    plot3(volArray(1,i), volArray(2,i), volArray(3,i));
    hold on
end
axis square
plot3(posBM(1), posBM(2), posBM(3), 'or')


disp(['Sum of errors of dif between bal mis and sensed by RF1 ' ...
    num2str(sum(difArrBM_RF1)) ' m']);

disp(['Sum of errors of dif between bal mis and sensed by RF2 ' ...
    num2str(sum(difArrBM_RF2)) ' m']);

disp(['Sum of errors of dif between bal mis and arithmetic mean position ' ...
    num2str(sum(difAritMeanBM_RF)) ' m']);

disp(['Sum of errors of dif between bal mis and weighted position ' ...
    num2str(sum(difWeiArrBM_RF)) ' m']);

disp(['Sum of errors of dif between bal mis and final algorithm (wei) ' ...
    num2str(sum(difWeiArrIR)) ' m']);


65 


%atmospheric transmittance
%gokhan humali 8/9/04
%searad used

clear;
clc;

searad_trans = [.9328 .9275 .9227 .9306 .9302 .9207 .9001 .8751 .8567 .7957 .7434 ...
.4336 .0555 .0218 .0874 .0484 .0765 .1289 .2287 .3747 .4049 .4925 .5784 .6283 ...
.5879 .7434 .8454 .8881 .9195 .9328 .9385 .9396 .9362 .9367 .9394 .9348 .9403 ...
.9406 .9389 .9391 .9390 .9392 .9325 .9206 .9130 .9023 .8696 .8011 .6401 .3155 ...
.0946 .0296 .0533 .0608 .0522 .0734 .1849 .4432 .709 .8404 .6925 .8878 .8761 ...
.8936 .9372 .9393 .9449 .9366 .9317 .9416 .9453 .943 .9418 .9361 .9107 .876 ...
.8297 .7833 .6852 .4998 .2034 .0073 0 0 0 0 0 0 .0007 .0289 .2039 .3526 .4673 ...
.5374 .4545 .4079 .6907 .4365 .4958 .5504 .6695 .8613 .9296 .9184 .9147 .9113 ...
.9310 .9311 .935 .9452 .931 .9072 .1963 0 .0615 .4598 .7967 .8593 .7222 .6097 ...
.4793 .2297 .1051 .0122 .0001 0 0 0 .0001 0 0 .0011 .026 .1834 .4187 .7958 .9064 ...
.9129 .9308 .9355 .9554 .9493 .9357 .9054 .8351 .334 .0032 .1797 .3284 .2486 ...
.2486];

wavenumber = linspace(8000,500,151);
wavelength = 1 ./ wavenumber .* 1e4;
figure

semilogx(wavelength,searad_trans)
axis([1 20 0 1])

xlabel('Wavelength (micrometer)')
ylabel('Atmospheric transmittance')


66 


%gokhan humali 2004
%conversion of geographic coordinates to spherical coordinates

function [thet, phi] = geo2sph(latH, latD, latM, lonH, lonD, lonM)

deg2rad = pi / 180;
latDegree = latD + latM / 60;
latRad = latDegree * deg2rad;

lonDegree = lonD + lonM / 60;
lonRad = lonDegree * deg2rad;

if latH == 'N'
    thet = pi / 2 - latRad;
elseif latH == 'S'
    thet = pi / 2 + latRad;
end

if lonH == 'E'
    phi = lonRad;
elseif lonH == 'W'
    phi = 2 * pi - lonRad;
end

%gokhan humali 2004 

%conversion of spherical coordinates to cartesian coordinates 

function [x, y, z] = sph2car(thet, phi, R) 

x = sin(thet) * cos(phi) * R; 
y = sin(thet) * sin(phi) * R; 
z = cos(thet) * R; 


67 


%gokhan humali 2004
%exitance of the ballistic missile plume at temperature T
clear;clc;

lam = linspace(1,14,1000);
h = 6.625e-34;
c = 3e8;
k = 1.38e-23;

T = 1035;
emissivity = 0.5;
c1 = 3.7417749e4;
c2 = 1.4387e4;

gh = exp(h.*c./(lam.*k.*T));

M = c1 ./ (lam.^5 .* (exp(c2./(lam.*T))-1));

figure

plot(lam,M*1e4)   %multiply by 1e4 to convert the result to m^-2
xlabel('Wavelength (micrometer)')
ylabel('Radiant exitance of blackbody (W/(m^2 micrometer))')
grid

figure

plot(lam,M*emissivity*1e4)
xlabel('Wavelength (micrometer)')
ylabel('Radiant exitance of graybody (W/(m^2 micrometer))')
grid


68 


%gokhan humali 2004
%computes the midline of the infrared sensor's IFOV
function errTIR = midIRline(posTar, posIR, IFOV)

flag = 1;

vecTIR = posTar - posIR;   %vector from IR sat. to ballistic missile

%magnitude of the vector between IR sat. and ballistic missile
magVecTIR = sqrt(vecTIR(1)^2 + vecTIR(2)^2 + vecTIR(3)^2);

%theta angle for the vector between IR sat. and bal. mis.
theta = atan2(vecTIR(2), vecTIR(1));

%phi angle for the vector between IR sat. and bal. mis.
phi = acos(vecTIR(3) / magVecTIR);

while flag

    ran_1 = rand - 0.5;   %first random number between -0.5 and 0.5
    ran_2 = rand - 0.5;   %second random number between -0.5 and 0.5

    %components of the new vector between IR sat. and bal. mis., adding a
    %random number times IFOV
    x = magVecTIR * cos(theta + ran_1 * IFOV) * sin(phi + ran_2 * IFOV);
    y = magVecTIR * sin(theta + ran_1 * IFOV) * sin(phi + ran_2 * IFOV);
    z = magVecTIR * cos(phi + ran_2 * IFOV);
    errTIR = [x; y; z];

    magErrTIR = sqrt(errTIR(1)^2 + errTIR(2)^2 + errTIR(3)^2);

    %check that the new vector is really inside IFOV/2
    a = dot(errTIR, vecTIR);
    b = magErrTIR * magVecTIR;
    d = acos(a / b);
    if (d <= (IFOV / 2))
        flag = 0;
    end
end


69 


%gokhan humali 2004
%conversion of topographic coordinates to cartesian coordinates [After kuzun thesis]

function [x, y, z] = top2car(az, el, latH, latD, latM, lonH, lonD, lonM)

deg2rad = pi / 180;
if latH == 'N'
    lat = (latD + latM / 60) * deg2rad;
elseif latH == 'S'
    lat = -(latD + latM / 60) * deg2rad;
end

if lonH == 'E'
    lon = (lonD + lonM / 60) * deg2rad;
elseif lonH == 'W'
    lon = -(lonD + lonM / 60) * deg2rad;
end

HA = sin(el);
EA = cos(el) * cos(az);
NA = cos(el) * sin(az);

%Rotation matrix
T = [ -sin(lat)*cos(lon)  -sin(lon)  cos(lat)*cos(lon)
      -sin(lat)*sin(lon)   cos(lon)  cos(lat)*sin(lon)
       cos(lat)            0         sin(lat) ];

solVec = T * [EA; NA; HA];

x = solVec(1);
y = solVec(2);
z = solVec(3);


%gokhan humali 2004 
%IR intersection volume 

function volArr = volumeIR(posTar, positIRl, positIR2, errorTIRl, errorTIR2, IFOV1, 
IFOV2) 

magErrTIRl = sqrt(errorTIRl(l) A 2 + errorTIRl(2) A 2 + errorTIRl(3) A 2); 
magErrTIR2 = sqrt(errorTIR2(l) A 2 + errorTIR2(2) A 2 + errorTIR2(3) A 2); 
volArr = []; 

for i = (posTar(l) - 25):(posTar(l) + 25) 

for j = (posTar(2) - 25):(posTar(2) + 25) 

for k = (posTar(3) - 25):(posTar(3) + 25) 
tempT = [i; j; k]; 
tempTIRl = tempT - positIRl; 
tempTIR2 = tempT - positIR2; 

magTempTIRl = sqrt(tempTIRl(l) A 2 + tempTIRl(2) A 2 + 
tempTIRl(3) A 2); 

magTempTIR2 = sqrt(tempTIR2(l) A 2 + tempTIR2(2) A 2 + 
tempTIR2(3) A 2); 

al = dot(tempTIRl, errorTIRl); 
bl = magTempTIRl * magErrTIRl; 
dl = acos(al /bl); 

a2 = dot(tempTIR2, errorTIR2); 
b2 = magTempTIR2 * magErrTIR2; 
d2 = acos(a2 / b2); 

if (dl <= (IFOV1 / 2)) & (d2 <= (IFOV_2 / 2)) 
volArr = [volArr tempT]; 

end 

end 

end 

end 
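The brute-force intersection volume above (testing every grid point in a cube around the target against both sensors' error cones) can be sketched in NumPy as follows. This is an illustrative sketch; the function name, the `half` parameter, and the unit grid spacing are assumptions, not thesis code.

```python
import numpy as np

def volume_ir(pos_tar, pos1, pos2, err1, err2, ifov1, ifov2, half=25):
    """Collect grid points around pos_tar that lie inside both sensors'
    error cones -- a sketch of the MATLAB volumeIR routine above."""
    pts = []
    offsets = range(-half, half + 1)
    for i in offsets:
        for j in offsets:
            for k in offsets:
                p = pos_tar + np.array([i, j, k], dtype=float)
                inside = True
                for pos, err, ifov in ((pos1, err1, ifov1), (pos2, err2, ifov2)):
                    d = p - pos  # sensor-to-grid-point vector
                    cosang = np.dot(d, err) / (np.linalg.norm(d) * np.linalg.norm(err))
                    # reject if the point falls outside this sensor's cone
                    if np.arccos(np.clip(cosang, -1.0, 1.0)) > ifov / 2:
                        inside = False
                        break
                if inside:
                    pts.append(p)
    return np.array(pts)
```

Like the MATLAB listing, this is an O(n^3) sweep over the cube; the count of accepted points, times the grid-cell volume, estimates the intersection volume of the two cones.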





LIST OF REFERENCES 


1. http://www.state.gov/t/np/rls/fs/20902.htm, accessed September 2004.

2. Andrew M. Sessler, John M. Cornwall, Bob Dietz, Steve Fetter, Sherman Frankel,
Richard L. Garwin, et al., "Countermeasures: A Technical Evaluation of the
Operational Effectiveness of the Planned US National Missile Defense System,"
Union of Concerned Scientists / MIT Security Studies Program, Cambridge,
Massachusetts, April 2000.

3. Monroe Schlessinger, Infrared Technology Fundamentals, p. 33, Marcel Dekker,
New York, NY, 1995.

4. R. G. Driggers, P. Cox, and T. Edwards, Introduction to Infrared and Electro-
Optical Systems, p. 97, Artech House, Norwood, MA, 1999.

5. Theodore A. Postol, "Science, Technology, and Attack Tactics Relevant to
National Missile Defense Systems," pp. 39-62, (unpublished), Washington, DC,
June 18, 2001.

6. Filippo Neri, Introduction to Electronic Defense Systems, p. 206, Artech House,
Norwood, MA, 2001.

7. Phillip E. Pace, Notes for EC3700 (Introduction to Joint Services Electronic 
Warfare), Naval Postgraduate School, 2004 (unpublished) 

8. David H. Pollock, Editor, The Infrared & Electro-Optic Systems Handbook, Vol. 
7, p. 100, SPIE Optical Engineering Press, Bellingham, WA, 1996 

9. K. Uzun, "Requirements and Limitations of Boost Phase Ballistic Missile Intercept
Systems," Master's Thesis, Naval Postgraduate School, Monterey, CA, September 2004.

10. Merrill I. Skolnik, Introduction to Radar Systems, p. 223, McGraw-Hill, New
York, 2001.

11. David L. Hall and James Llinas, Handbook of Multisensor Data Fusion, pp. 1-7, CRC
Press, Washington, DC, 2001.





12. M. Kokar and K. Kim, "Review of Multisensor Data Fusion Architectures and
Techniques," IEEE Spectrum, 1993.

13. P. E. Pace, M. D. Nash, D. P. Zulaica, A. A. Di Mattesa, and A. Hosmer, "Relative
Targeting Architectures for Captive-Carry HIL Missile Simulator Experiments,"
IEEE Transactions on Aerospace and Electronic Systems, Vol. 37, No. 3, July
2001.

14. K. V. Ramachandra, Kalman Filtering Techniques for Radar Tracking, p. 3, 
Marcel Dekker, New York, NY, 2000 





INITIAL DISTRIBUTION LIST 


1. Defense Technical Information Center 
Ft. Belvoir, Virginia 

2. Dudley Knox Library 
Naval Postgraduate School 
Monterey, California 

3. Dr. Dan Boger

Information Sciences Department 
Monterey, California 

4. Dr. Phillip E. Pace 

Department of Electrical and Computer Engineering 
Monterey, California 

5. Dr. Murali Tummala 

Department of Electrical and Computer Engineering 
Monterey, California 

6. 1st Lt. Gokhan Humali
Turkish Air Force 
Ankara, Turkey 

7. Mr. Dale S. Caffall 
Missile Defense Agency 
Washington, D.C. 

8. Ms. Gerri Hudson 
Raytheon Company 
Tucson, Arizona. 

9. Mr. Howard Krizek 
Raytheon Company 
Tucson, Arizona. 

10. Mr. Lamoyne Taylor 
Raytheon Company 
Tucson, Arizona. 

