# How can the IAU 2000A vs IAU 2000B nutation comparison be reproduced?

I am attempting to plot how the IAU 2000A nutation model degrades as its terms are omitted. As a spot-check, I decided to compare it to IAU 2000B, which includes only the 77 most important lunisolar terms. It is widely claimed that IAU 2000B only sacrifices 1 mas of accuracy. For example, in this PDF presentation from SOFA:

> The IAU 2000B nutation series is almost as accurate (1 mas) as the full IAU 2000A series

http://www.iausofa.org/publications/aas04.pdf

And in this published paper:

> IAU 2000B is more than an order of magnitude smaller than IAU 2000A but achieves 1 mas accuracy throughout 1995-2050.

https://www.aanda.org/articles/aa/full/2008/04/aa8811-07/aa8811-07.right.html

But I have not yet succeeded in comparing them in such a way as to produce a difference that small. When I compare the angles they return each day over the two decades 2000-2020, I see a difference in the first angle, Δψ, of more than 2 mas. Using the USNO NOVAS implementation of the two series, because it was easier to install (pip install novas) than pysofa, I get:

```python
from math import tau
from novas.compat.nutation import iau2000a, iau2000b

T0 = 2451545.0  # Year 2000.0

dpsi_differences = []
deps_differences = []

for day in range(0, 366 * 20):  # Years 2000.0 through ~2020
    dpsi_a, deps_a = iau2000a(T0, day)
    dpsi_b, deps_b = iau2000b(T0, day)
    dpsi_differences.append(abs(dpsi_a - dpsi_b))
    deps_differences.append(abs(deps_a - deps_b))

def report_difference(name, differences):
    radians = max(differences)
    days = differences.index(radians)
    degrees = radians / tau * 360.0
    arcminutes = degrees * 60.0
    arcseconds = arcminutes * 60.0
    mas = arcseconds * 1000.0
    print('Maximum difference for {}: {:.4f} mas at T0 + {} days'
          .format(name, mas, days))

report_difference('delta psi', dpsi_differences)
report_difference('delta epsilon', deps_differences)
```

Result:

```
Maximum difference for delta psi: 2.1867 mas at T0 + 1396 days
Maximum difference for delta epsilon: 0.8631 mas at T0 + 7017 days
```

Am I misinterpreting the output of the NOVAS routines? Or, alternately, am I misunderstanding the meaning of the two angles? I understand the angles as being a pair of rotations, each of which, in the worst case - that of a point on the great circle of the rotation - moves a coordinate through the same angle as the rotation itself. So I understand a 2.1867 mas difference in Δψ, for example, as changing sky coordinates by a maximum of that same 2.1867 mas when the nutation matrix is used to translate coordinates into or out of the equinox-of-date.

My next step would be trying to get the sofa library installed locally and then running a similar routine against it in case the NOVAS implementation is simply broken, but before trying to install a library by hand, I wanted to double check in case my understanding of the angles was itself faulty.

Thanks for any snags that can be identified in my reasoning!

I suspect that the problem is in your assumption that changes in $\Delta\epsilon$ or $\Delta\psi$ lead to coordinate changes of the same size. In fact, the positions of the equator and celestial pole are complex functions of those angles, notably with both positive and negative terms. If you look at Equations (8) and (9) (for the coordinates $X$ and $Y$ of the pole) in the 2008 A&A paper you linked to, you can see this dependence. For example, in the $X$ equation there is a $\Delta\psi \sin\epsilon_0$ term, but later there is also a $-(\psi_A^2/2)\,\Delta\psi \sin\epsilon_0$ term that could at least partially cancel it for a given value of $\Delta\psi$. There are also cross terms between $\Delta\psi$ and $\Delta\epsilon$.
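To first order, in fact, the leading term gives X ≈ Δψ sin ε₀, so a difference in Δψ moves the pole by only about 40% of its own size. A minimal sketch of that scaling, applied to the 2.1867 mas figure from the question (the obliquity constant is the IAU value for J2000.0; the cross terms and higher-order cancellations are deliberately ignored here):

```python
from math import radians, sin

# Mean obliquity of the ecliptic at J2000.0 (84381.448 arcseconds), in degrees.
EPSILON_0_DEG = 84381.448 / 3600.0

# Leading term of the pole coordinate: X ~ delta_psi * sin(epsilon_0),
# so a delta-psi difference shifts the pole by only ~40% of its size.
scale = sin(radians(EPSILON_0_DEG))

dpsi_difference_mas = 2.1867  # maximum delta-psi difference from the question
x_shift_mas = dpsi_difference_mas * scale

print('sin(epsilon_0) = {:.5f}'.format(scale))
print('first-order pole shift = {:.4f} mas'.format(x_shift_mas))
```

The first-order pole shift comes out just under 0.9 mas, already much closer to the advertised 1 mas accuracy than the raw Δψ difference.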

While those equations are only for an approximation to IAU 2000B, looking at Capitaine & Wallace 2006 for the exact equation (Eq. 36) shows the same behavior, i.e. that there are both positive and negative terms, as well as cross terms, involving those quantities.

So to do a comparison of accuracy, I think you would need to calculate the $X, Y$ coordinates of the pole and the position $s$ of the origin with both models, and compare those.
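The suggested comparison can be sketched with the pyerfa package (`pip install pyerfa`), whose `erfa.xys00a()` and `erfa.xys00b()` routines return the CIP coordinates X, Y and the CIO locator s for the full and truncated models respectively. This is only a sketch mirroring the question's twenty-year scan, not a verified reproduction of the published 1 mas figure:

```python
from math import tau

import erfa  # pip install pyerfa; wraps the SOFA routines

T0 = 2451545.0  # J2000.0

max_dx = 0.0
max_dy = 0.0
for day in range(0, 366 * 20):
    xa, ya, sa = erfa.xys00a(T0, float(day))  # full IAU 2000A series
    xb, yb, sb = erfa.xys00b(T0, float(day))  # truncated IAU 2000B series
    max_dx = max(max_dx, abs(xa - xb))
    max_dy = max(max_dy, abs(ya - yb))

def to_mas(angle_radians):
    # Convert an angle in radians to milliarcseconds.
    return angle_radians / tau * 360.0 * 60.0 * 60.0 * 1000.0

print('Maximum difference in X: {:.4f} mas'.format(to_mas(max_dx)))
print('Maximum difference in Y: {:.4f} mas'.format(to_mas(max_dy)))
```

Comparing X and Y directly, rather than Δψ and Δε, measures the actual displacement of the pole, which is the quantity the 1 mas accuracy claim is about.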

## Lunar 4 Celestial Navigation and Almanac Program

Lunar 4 is a free celestial navigation sight reduction program and almanac for the Windows desktop. Its almanac function produces barycentric, geocentric, and topocentric coordinates. Its sight reduction function produces azimuth, altitude, and intercept for the Marcq St. Hilaire position line method. It also solves for time, given

• altitudes of the Moon and another body, plus their separation angle, observed from an unknown location (the classic "lunar distance" problem), or
• the altitude of a body observed from a known location, i.e., a "time sight", or
• the separation angle between the Moon and another body observed from a known location

The available solar system bodies are the Sun, Moon, and planets, including Pluto. Their positions and velocities are obtained from a Jet Propulsion Laboratory planetary ephemeris. An internal star catalog contains all stars down to third magnitude. There's also a manual entry form for stars not in the catalog.

Lunar 4 is copyright © 2019 by Paul S. Hirose. Nonprofit redistribution of the program is permitted if you give me credit.

## Astronomy and Astrophysics

W. I. Axford · A. Behr · A. Bruzek · C. J. Durrant · H. Enslin · H. Fechtig · W. Fricke · F. Gondolatsch · H. Grün · O. Hachenberg · W.-H. Ip · E. K. Jessberger · T. Kirsten · Ch. Leinert · D. Lemke · H. Palme · W. Pilipp · J. Rahe · G. Schmahl · M. Scholer · J. Schubart · J. Solf · R. Staubert · H. E. Suess · J. Trümper · G. Weigelt · R. M. West · R. Wolf · H. D. Zeh

Editors: K. Schaifers and H. H. Voigt

Springer-Verlag Berlin Heidelberg New York 1981

CIP-Kurztitelaufnahme der Deutschen Bibliothek: Zahlenwerte und Funktionen aus Naturwissenschaften und Technik / Landolt-Börnstein. Berlin; Heidelberg; New York: Springer. Parallel title: Numerical Data and Functional Relationships in Science and Technology. New Series, Gesamthrsg.: K.-H. Hellwege. New Series, Gruppe 6: Astronomie, Astrophysik und Weltraumforschung. Bd. 2: Astronomie und Astrophysik: Erg. u. Erw. zu Bd. 1. Teilbd. a: Methoden, Konstanten, Sonnensystem / W. I. Axford. Hrsg.: K. Schaifers u. H. H. Voigt. 1981. ISBN 3-540-10054-7 (Berlin, Heidelberg, New York); ISBN 0-387-10054-7 (New York, Heidelberg, Berlin). NE: Axford, William I. [Mitverf.]; Schaifers, Karl [Hrsg.]; Hellwege, Karl-Heinz [Hrsg.]

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically those of translation, reprinting, reuse of illustrations, broadcasting, reproduction by photocopying machine or similar means, and storage in data banks. Under § 54 of the German Copyright Law, where copies are made for other than private use a fee is payable to "Verwertungsgesellschaft Wort", Munich. © by Springer-Verlag Berlin-Heidelberg 1981. Printed in Germany. The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting, printing and bookbinding: Brühlsche Universitätsdruckerei, 6300 Giessen.

Editors
K. Schaifers, Landessternwarte, Königstuhl, 6900 Heidelberg, FRG
H. H. Voigt, Universitätssternwarte, Geismarlandstraße 11, 3400 Göttingen, FRG

Contributors
W. I. Axford, Max-Planck-Institut für Aeronomie, 3411 Lindau/Harz, FRG
A. Behr, Eschenweg 3, 3406 Bovenden, FRG
A. Bruzek, Kiepenheuer-Institut für Sonnenphysik, Schöneckstraße 6, 7800 Freiburg, FRG
C. J. Durrant, Kiepenheuer-Institut für Sonnenphysik, Schöneckstraße 6, 7800 Freiburg, FRG
H. Enslin, Deutsches Hydrographisches Institut, Bernhard-Nocht-Straße 78, 2000 Hamburg 4, FRG
H. Fechtig, Max-Planck-Institut für Kernphysik, Saupfercheckweg, 6900 Heidelberg, FRG
W. Fricke, Astronomisches Rechen-Institut, Mönchhofstraße 12-14, 6900 Heidelberg, FRG
F. Gondolatsch, Astronomisches Rechen-Institut, Mönchhofstraße 12-14, 6900 Heidelberg, FRG
H. Grün, Max-Planck-Institut für Kernphysik, Saupfercheckweg, 6900 Heidelberg, FRG
O. Hachenberg, Radioastronomisches Institut der Universität, Auf dem Hügel 71, 5300 Bonn 1, FRG
W.-H. Ip, Max-Planck-Institut für Aeronomie, 3411 Lindau/Harz, FRG
E. K. Jessberger, Max-Planck-Institut für Kernphysik, Saupfercheckweg, 6900 Heidelberg, FRG
T. Kirsten, Max-Planck-Institut für Kernphysik, Saupfercheckweg, 6900 Heidelberg, FRG
Ch. Leinert, Max-Planck-Institut für Astronomie, Königstuhl, 6900 Heidelberg, FRG
D. Lemke, Max-Planck-Institut für Astronomie, Königstuhl, 6900 Heidelberg, FRG
H. Palme, Max-Planck-Institut für Chemie, Saarstraße 23, 6500 Mainz, FRG
W. Pilipp, Max-Planck-Institut für Physik und Astrophysik, Institut für Extraterrestrische Physik, 8046 Garching b. München, FRG
J. Rahe, Dr. Remeis-Sternwarte, Sternwartstraße 7, 8600 Bamberg, FRG
G. Schmahl, Universitätssternwarte, Geismarlandstraße 11, 3400 Göttingen, FRG
M. Scholer, Max-Planck-Institut für Physik und Astrophysik, Institut für Extraterrestrische Physik, 8046 Garching b. München, FRG
J. Schubart, Astronomisches Rechen-Institut, Mönchhofstraße 12-14, 6900 Heidelberg, FRG
J. Solf, Max-Planck-Institut für Astronomie, Königstuhl, 6900 Heidelberg, FRG
R. Staubert, Astronomisches Institut der Universität, Waldhäuserstraße 64, 7400 Tübingen, FRG
H. E. Suess, Univ. of California, Chemistry Department, La Jolla/Calif. 92093, USA
J. Trümper, Max-Planck-Institut für Physik und Astrophysik, Institut für Extraterrestrische Physik, 8046 Garching b. München, FRG
G. Weigelt, Physikalisches Institut der Universität, Erwin-Rommel-Straße 1, 8520 Erlangen, FRG
R. M. West, European Southern Observatory, Karl-Schwarzschild-Straße 2, 8046 Garching b. München, FRG
R. Wolf, Max-Planck-Institut für Astronomie, Königstuhl, 6900 Heidelberg, FRG
H. D. Zeh, Institut für Theoretische Physik der Universität, Philosophenweg 19, 6900 Heidelberg, FRG

Preface

In all fields of science the steady increase in the number of ever more specialized and intricate publications calls from time to time for a complete, critical and well-arranged compilation of facts, numerical values and functions. This not only applies to the classical laboratory sciences, but also to astronomy and astrophysics. In "Landolt-Börnstein" astronomy was first treated as part of the third volume of the sixth edition (1952): "Astronomie und Geophysik", edited by J. Bartels and P. ten Bruggencate. In Group VI of the New Series this field was treated anew in 1965 by Volume VI/1 "Astronomy and Astrophysics", edited by H. H. Voigt, and now sixteen years later extended and supplemented by the present VI/2 (in three subvolumes 2a, 2b, 2c), the structure of which largely follows that of the 1965 volume. Where in 1952 astronomy could be treated by 25 authors in 255 pages and in 1965 by 39 authors in 700 pages, now there are more than 60 experts at work on the three subvolumes. This increase in size within 30 years has numerous causes that need not be discussed here in detail, but that are obvious when comparing the contents of these three volumes on astronomy that have appeared over a period of 30 years. The disappearance of some sections and emergence of whole new topics, as well as a change in approach - from statistics to the individual object - (and consequently the enlarging of some chapters, for instance "Peculiar stars") show the development of our science in the last decades. However, the previous volume retains its importance not only for the historian of science; it is also the main source for the numerical values and functions published before 1965, since the present Volume VI/2 refers in cases of older data back to the discussions in this previous volume. In spite of this, the present bibliography's bulk has grown considerably, although citation of review articles and monographs is generally preferred to that of primary literature.

The size of this new volume "Astronomy and Astrophysics" required a division into three subvolumes: a) Methods. Constants. Solar System. b) Stars and Star Clusters. c) Interstellar Matter. Galaxy. Universe. A comprehensive index for all three subvolumes is included at the end of Subvolume VI/2c. About three decades ago the editors of the volume "Astronomie und Geophysik" in the 6th edition wrote: "Authors, publishers and editors believe they have succeeded, if each reader responds with: I'm not satisfied with the chapter on my speciality, but the other sections are quite useful." We, the present editors, can only adhere to this motto. Our thanks are due first of all to the authors of the individual chapters. They had to do the scientific work and bear the final responsibility, and they usually followed our ideas and suggestions with regard to the selection and presentation of the material. We also want to thank the Landolt-Börnstein editorial staff in Darmstadt, especially Mrs. G. Burfeindt, who was responsible for the actual editing, and Dr. Durrant in Freiburg for checking the English text. Thanks are also due to the publishers - always following our wishes if at all possible - for the high quality presentation of this volume which, as with all Landolt-Börnstein volumes, is published without financial support from outside sources.

Heidelberg, Göttingen, August 1981

List of abbreviations used and not always explained explicitly in this book:

• AU: Astronomical Unit (= distance Earth-Sun)
• B.C.: Bolometric Correction
• BD: Bonner Durchmusterung
• CLV: Center-limb variation
• CMD: Colour-magnitude diagram
• CNO: Carbon, Nitrogen, and Oxygen (not as a molecule), e.g. CNO cycle, CNO anomalies
• ESA: European Space Agency
• ESO: European Southern Observatory
• ET or E.T.: Ephemeris Time
• EUV: Extreme ultraviolet
• FWHM: Full Width at Half Maximum
• HD: Henry Draper Catalogue
• HR: Harvard Revised Catalogue
• HRD: Hertzsprung-Russell Diagram
• IAU: International Astronomical Union
• IR: Infrared
• ISM: Interstellar Matter
• JD: Julian Date
• LB, NS (or LB): Landolt-Börnstein, Numerical Data and Functional Relationships in Science and Technology, New Series
• LC: Luminosity Class
• LF: Luminosity Function
• LMC: Large Magellanic Cloud
• LTE: Local thermodynamic equilibrium
• M: Messier Catalogue
• MHD: Magneto-hydrodynamics
• MMT: Multi-Mirror-Telescope
• MPI: Max-Planck-Institut
• NASA: National Aeronautics and Space Administration
• NEP: Noise Equivalent Power
• NGC: New General Catalogue
• NLTE: Non-local thermodynamic equilibrium
• NRAO: National Radio Astronomy Observatory, Green Bank, W. Va., USA
• POSS: Palomar Observatory Sky Survey
• RV: Radial velocity
• SMC: Small Magellanic Cloud
• Sp: Spectral type
• URSI: International Union of Radio Science
• UT: Universal time
• UV: Ultraviolet
• VLBI: Very Long Baseline Interferometry
• XUV: X-ray and ultraviolet region
• ZAMS: Zero Age Main Sequence

Abbreviations of further star catalogues: see 8.1.1. For abbreviations of special star types (e.g. WR stars), see "Spectral classification" (4.1.1), "Variable stars" (5.1), "Peculiar stars" (5.2), and the subject index.

Some important satellites:

• ANS: Astronomical Netherlands Satellite (The Netherlands-NASA)
• ATS: Applications Technology Satellite
• COS: Cosmic Ray Satellite (ESA)
• GIRL: German Infrared Laboratory
• HEAO: High Energy Astrophysical Observatory (NASA)
• HEOS: High Eccentricity Earth-Orbiting Satellite (ESA)
• IMP: Interplanetary Monitoring Platform
• IRAS: Infrared Astronomical Satellite
• IUE: International Ultraviolet Explorer (NASA-UK-ESA)
• OAO: Orbiting Astronomical Observatory (NASA)
• OGO: Orbiting Geophysical Observatory
• OSO: Orbiting Solar Observatory
• MTS: Meteoroid Technology Satellite (NASA)
• RAE: Radio Astronomy Explorer
• SAS: Small Astronomy Satellite (NASA)

## 3. Selection of Events

[14] The accuracy of STOA and ISPM was first evaluated in a real-time forecast study [ Smith et al., 2000 ]. Recently, Fry et al. [2003] compared the performance of the HAFv.2 model with the performances of the STOA and ISPM models for 173 metric type II events during the rise of solar maximum from February 1997 to October 2000. Their statistical comparison between the models showed them to be practically equivalent in predicting the shock arrival time (SAT). The uncertainty of the SAT estimates as determined by RMS error is about 12 hours for each model. On the other hand, Gopalswamy et al. [2001] applied the CME-ICME propagation model to 47 CME events observed from December 1996 to July 2000 by SOHO. Then they showed that the average prediction error of the model is 10.7 hours. Gopalswamy et al. [2003] extended the CME-ICME model to predict 1 AU arrival of IP shocks as discussed in section 2.2.2. They used a set of 29 IP shocks and the following ICMEs observed by the WIND spacecraft from January 1997 to May 2002 and concluded that empirically shifting the CME-ICME model by an interval corresponding to the gas dynamic bow shock standoff distance provided a simple, albeit physically inconsistent, means of estimating the shock arrival time.

[15] The prediction errors of these models are not conclusive because these values are obtained from different data sets. Recently, it has been suggested that CMEs and flares (metric type II) are initiated nearly simultaneously [e.g., Zhang et al., 2001; Neupert et al., 2001; Moon et al., 2002b; Cho et al., 2003; Shanmugaraju et al., 2003]. Therefore it would be meaningful to compare the prediction errors of the above two types of models for near-simultaneous CME-metric type II events. Thus we select the CMEs that have temporal and spatial proximity to the type II events in Table 1 of Fry et al. [2003]. For this, we use the first C2 appearance time, position angle, and linearly fitted speeds of the CMEs, which were adopted from the SOHO/LASCO CME Catalog of CSPSW/NRL (available at http://cdaw.gsfc.nasa.gov/CME_list/). The errors of the CME speeds are known from experience to be typically 10% but sometimes 30% (S. Yashiro, private communication, 2002).

[16] The procedure for examining the arrival time predictions of ICMEs and IP shocks for the near-simultaneous events is summarized as follows: (1) From the 173 type II events of Fry et al. [2003], we choose a total of 101 CMEs that are within a threshold window (±90 min). (2) We select 89 events from this group by comparing the position angles and the coordinate information of the associated flares. (3) We apply the adopted prediction models (the ensemble of shock propagation models and the empirical CME propagation models) to the selected events. Then we look for IP shocks that appear near the predicted times. For this, we examine the IP shocks identified by Fry et al. [2003], who used the NOAA/SEC 1-min resolution ACE and/or WIND plasma and field data, searching for simultaneous jumps in velocity, density, temperature, and total magnetic field magnitude according to the Rankine-Hugoniot relations. As a result, we identified 38 IP shocks. (4) We then search for ICMEs associated with the 38 IP shocks. For the identification of ICMEs, we look for MCs and EJs in the in situ magnetic field-plasma measurements and particle detections of ACE (available at http://www.srl.caltech.edu/ACE/ASC/level2/index.html). According to Burlaga [1995] and Berdichevsky et al. [2002], an MC is defined as a large flux-rope structure of an almost cylindrical shape with low plasma beta (<0.1), high alpha/proton ratios (>0.6), enhanced magnetic field strength (>10 nT), and a large and smooth rotation of the magnetic field direction. In the case of EJs, which are not flux ropes and have disordered magnetic fields, smooth rotation may not be present. We also refer to previously identified sources of ICMEs [Gopalswamy et al., 2001; Cane and Richardson, 2003] and the Magnetic Cloud Table (available at http://lepmfi.gsfc.nasa.gov/mfi/mag_cloud_pub1p.html).
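The MC criteria above can be condensed into a small screening function. This is only an illustrative sketch using the thresholds quoted from Burlaga [1995] and Berdichevsky et al. [2002]; the function name and argument names are invented for the example and are not part of any survey pipeline.

```python
# Illustrative screening sketch for magnetic-cloud (MC) candidates, using
# the thresholds quoted in the text; not an operational identification code.

def is_mc_candidate(plasma_beta, alpha_proton_ratio, b_field_nT,
                    smooth_rotation):
    """Flag an interval as a possible magnetic cloud.

    plasma_beta        -- proton plasma beta (MC: < 0.1)
    alpha_proton_ratio -- N_alpha / N_p (MC: > 0.6, as quoted in the text)
    b_field_nT         -- magnetic field strength in nT (MC: > 10)
    smooth_rotation    -- True if the field direction rotates smoothly
    """
    return (plasma_beta < 0.1
            and alpha_proton_ratio > 0.6
            and b_field_nT > 10.0
            and smooth_rotation)

# An ejecta (EJ) shows similar signatures but need not rotate smoothly,
# so the last condition would be dropped when screening for EJs.
```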

[17] Figure 1 presents typical observations of EJ (left panels) and MC (right panels) showing signatures of IP shocks and ICMEs. Starting from the top, the first panel contains the solar wind speed as measured by the ACE spacecraft. The second panel presents the magnitude of the magnetic field. The horizontal dashed line in this panel indicates the historical value of 5.5 nT that supposedly "defines" the occurrence of an ICME when taken with the other parameters [ Berdichevsky et al., 2002 ]. The third panel presents the latitude of the magnetic field in the spacecraft-centered coordinate system. The fourth panel shows the proton plasma beta (βP), and the bottom panel is the ratio of alpha particle to proton density, Nα/NP. A dashed line across the Nα/NP panel gives 2.3%, which is the approximate mean value of this density ratio [see, e.g., Berdichevsky et al., 2002 ]. While large ICMEs obviously satisfy the definitions suggested by Burlaga et al. [1981], small ones have many small and less obvious structures. The uncertainty in determination of the ICME's leading edge can be several hours for different criteria. On the basis of these observational criteria, we have identified 22 ICME events. It is noted that most of the ICMEs (17/22 or 77%) originated from halo CMEs, which is consistent with the results of Gopalswamy et al. [2003].

[18] Table 1 shows the details of the 38 near-simultaneous CME-type II burst events that are followed by IP shocks and/or ICMEs. The first six columns give the type II burst event number from Fry et al. [2003], the first CME appearance time in the C2 image, the position angle of the CME measured counterclockwise in degrees from solar north, the linearly fitted speed of the CME, the mean speed of the type II radio burst, and the difference between the first C2 appearance time and the starting time of the type II burst, respectively. All CME information is taken from the SOHO/LASCO CME Catalog of CSPSW/NRL, and the details of the type IIs can be found in Table 1 of Fry et al. [2003]. The seventh and eighth columns represent the arrival times of the shocks and the ICMEs at 1 AU, respectively.

Table 1. Solar disturbance arrival times for the near-simultaneous CME-type II events. Columns: event No.(f); date/UT of first CME appearance in C2; P.A. of the CME(a); V_CME (km/s); V_type II (km/s); ΔT_CT (min)(b); observed shock arrival date/UT at L1(c); observed ICME arrival date/UT at L1(d). Footnotes (a)-(f) are listed below the table.
2 970407/1427 Halo 830 800 29 0410/1258 0411/0600(E)
3 970512/0630 Halo 464 1400 74 0515/0115 0515/1000(M)
5 971104/0610 Halo 785 1400 2 1106/2218 1107/0530(M)
6 971127/1357 98 441 700 40 1130/0714
22 981105/2059 Halo 1124 900 68 1108/0420 1108/0900(M)
38 990209/0533 235 808 600 14 0211/0858
44 990308/0654 115 664 700 16 0310/0038
55 990622/1854 Halo 1133 1400 30 0626/0217 0626/0500(E)
57 990629/0554 Halo 589 750 39 0702/0025
60 990711/0131 81 318 650 78 0713/0845
62 990719/0306 Halo 430 500 50 0722/0950
70 990804/0626 262 405 462 34 0808/1750 0809/1048(E)
74 990820/2326 95 812 700 9 0823/1130
78 990828/1826 120 462 600 19 0831/0131
79 990830/0850 9 404 700 47 0902/0935
80 990913/1731 109 444 500 69 0915/2005
97 991222/0230 Halo 570 500 29 1226/2126 1227/1800(E)
102 000118/1754 Halo 739 400 35 0122/0023 0122/1800(E)
104 000208/0930 Halo 1079 600 33 0211/0213 0211/1000(M)
105 000210/0230 Halo 944 1100 42 0211/2318 0212/1500(M)
106 000212/0431 Halo 1107 700 25 0214/0656 0215/0000(M)
108 000217/2130 Halo 550 550 66 0220/2050 0221/0948(E)
129 000430/0854 186 540 700 49 0502/1044
130 000510/2006 83 641 680 28 0512/1712 0514/0300(M)
133 000520/0626 187 557 500 30 0523/2315
135 000606/1554 Halo 1119 1189 31 0608/0840 0608/1200(M)
136 000607/1630 Halo 842 826 40 0611/0716 0611/0900(E)
140 000615/2006 298 1081 996 20 0618/1702
142 000618/0210 307 629 660 12 0621/1500
151 000710/2150 67 1352 1300 27 0713/0918
152 000712/2030 281 820 950 16 0714/1532 0715/0600(M)
153 000714/1054 Halo 1674 1800 34 0715/1437 0715/2200(M)
158 000722/1154 304 1230 1000 29 0725/1322
159 000725/0330 Halo 528 903 41 0728/0541 0728/1500(E)
165 000901/1854 244 411 500 27 0906/1612 0907/0400(E)
169 000916/0518 Halo 1215 773 45 0917/1657 0918/0100(M)
171 001001/1350 94 427 1100 38 1003/0007(e) 1005/1200(M)
172 001009/2350 Halo 798 925 12 1012/2144 1013/1700(M)
• a Position angles of CMEs.
• b Time difference between the first CME appearance and the starting of type II bursts.
• c Observed shock arrival date and time at L1.
• d Observed CME arrival date and time at L1. M denotes Magnetic cloud and E, Ejecta.
• e ICME associated IP shock which was observed at 1005/0241 (UT).
• f Event numbers are taken from the metric type II/flare events in the work of Fry et al. [2003] .

## 4 Quantifying the star formation histories

We proceed by quantifying the RSF history of the ETG population to explore the ages and mass fractions of the young stellar populations that are forming in the disturbed ETGs and study the differences between the relaxed ETGs and their disturbed counterparts. We estimate parameters governing the star formation history (SFH) of each galaxy by comparing its multi-wavelength COSMOS photometry to a library of synthetic photometry, generated using a large collection of model SFHs that are specifically designed for studying ETGs. The chosen parametrisation describes the broad characteristics of ETG SFHs with a minimum of free parameters. A key feature of the scheme is that the RSF episode is decoupled from the star formation that creates the bulk, underlying population.

Since the existing literature on ETGs demonstrates that the bulk of the stellar mass in these galaxies is metal-rich and forms at high redshift over short timescales (see introduction), we model the underlying stellar population using an instantaneous burst at high redshift. We put this first burst at z = 3 and assume that it has solar metallicity. The RSF episode is modelled by a second instantaneous burst, which is allowed to vary in (a) age, between 0.01 Gyr and the look-back time corresponding to z = 3 in the rest-frame of the galaxy, (b) mass fraction, between 0 and 1, and (c) metallicity, between 0.05 Z⊙ and 2.5 Z⊙. In addition, we allow a range of dust values, parametrised by E(B−V), in the range 0 to 0.5. The dust is applied to the model galaxy as a whole and the empirical law of Calzetti et al. (2000) is adopted to calculate the dust-extincted SEDs. The free parameters are the age (t2), mass fraction (f2) and metallicity (Z2) of the second burst and the dust content (E(B−V)) of the galaxy.

Note that putting the first burst at z = 2, or even z = 1, does not affect our conclusions about the RSF, because the first burst does not contribute to the UV, which is dominated by hot, massive main sequence stars with short lifetimes. The UV decays after around a Gyr (and has almost completely disappeared after ∼2 Gyrs) as the UV-producing stars come to the end of their lifetimes. Recall that the highest redshift being sampled in this study is z ∼ 0.7, which corresponds to an age of ∼5 Gyrs if star formation begins at z = 3 in a standard cosmology. It is worth noting that our parametrisation is similar to previous ones used to study elliptical galaxies at low redshifts using UV/optical photometry (e.g. Ferreras & Silk 2000a) and spectroscopic line indices (e.g. Trager et al. 2000).

To build a model library of synthetic photometry, each combination of the free parameters is combined with the stellar models of Yi (2003) and convolved with the correct COSMOS (u, g, r, i, z, Ks) filter curves. (Note that the u-band filter is from the CFHT Mega-Prime instrument, the g, r, i, z filters are from the Subaru Suprime-Cam instrument and the Ks filter is from the KPNO FLAMINGOS instrument.) The library contains ∼600,000 individual models. Since our galaxy sample spans a range in redshift, equivalent libraries are constructed at redshift intervals δz = 0.02 in the redshift range 0.5 < z < 0.7. Note that the stellar models assume a Salpeter (1955) initial mass function.
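The enumeration of parameter combinations can be sketched as a simple grid product. The step counts below are invented for illustration (the text fixes only the parameter ranges, and the look-back time to z = 3 is taken as roughly 11.5 Gyr as an assumption), so the resulting library size does not match the real ∼600,000-model library:

```python
# Hypothetical sampling of the four free parameters; only the ranges come
# from the text, the step counts and the 11.5 Gyr upper age are illustrative.
from itertools import product

import numpy as np

t2_grid = np.geomspace(0.01, 11.5, 50)   # burst age t2 in Gyr (0.01 Gyr to ~z = 3)
f2_grid = np.linspace(0.0, 1.0, 21)      # mass fraction f2 of the second burst
z2_grid = np.linspace(0.05, 2.5, 10)     # metallicity Z2 in solar units
ebv_grid = np.linspace(0.0, 0.5, 11)     # dust E(B-V)

library = list(product(t2_grid, f2_grid, z2_grid, ebv_grid))
print(len(library))  # 50 * 21 * 10 * 11 = 115500 combinations
```

Each tuple in `library` would then be fed to the stellar models and filter curves to produce one synthetic photometry entry.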

For each ETG in our sample, parameters are estimated by choosing the model library that is closest to it in redshift and comparing its (u, g, r, i, z, Ks) photometry to every model in the synthetic library. The likelihood of each model, exp(−χ²/2), is calculated using the value of χ², computed in the standard way. The errors entering χ² are computed by adding, in quadrature, the observational uncertainties in the COSMOS filters and the errors adopted for the stellar models, which we assume to be 0.05 mags for each optical filter and 0.1 mags for the Ks filter (see Yi 2003). From the joint probability distribution, each free parameter is marginalised to extract its one-dimensional probability distribution function (PDF). We take the median value of the PDF to be the best estimate of the parameter in question, and the 25th and 75th percentile values provide an estimate of the uncertainty. This procedure yields, for every galaxy in our sample, a best estimate and error for each free parameter. Note that the accuracy of the photometric redshifts provided in the catalogue is sufficient for accurate parameter estimation. Past experience suggests that, given the degeneracies within the parameter space, the added accuracy of spectroscopic redshifts does not change the derived distributions of parameter values in such a study.
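The likelihood and marginalisation steps can be sketched as follows, assuming NumPy. The model magnitudes and the observed photometry here are random placeholders, so only the quadrature error combination, the exp(−χ²/2) weighting, and the median/quartile read-off follow the text:

```python
# Schematic grid-based parameter estimation; the photometry is synthetic
# placeholder data, the statistical machinery follows the text.
import numpy as np

rng = np.random.default_rng(0)

n_models, n_filters = 5000, 6                 # library size; (u, g, r, i, z, Ks)
model_mags = rng.normal(20.0, 0.1, (n_models, n_filters))
model_ages = rng.uniform(0.01, 9.0, n_models)  # a t2 value attached to each model

obs_mags = np.full(n_filters, 20.0)            # placeholder observed photometry
obs_err = np.full(n_filters, 0.05)             # observational uncertainties
model_err = np.array([0.05] * 5 + [0.1])       # 0.05 mag optical, 0.1 mag Ks

# Add observational and model errors in quadrature, then compute chi-square
# and the likelihood exp(-chi2 / 2) of every model in the library.
sigma2 = obs_err**2 + model_err**2
chi2 = ((model_mags - obs_mags)**2 / sigma2).sum(axis=1)
likelihood = np.exp(-chi2 / 2.0)
likelihood /= likelihood.sum()

# Marginalise onto one parameter (here t2): sort the models by t2, build the
# cumulative probability, and read off the median and quartiles of the PDF.
order = np.argsort(model_ages)
cdf = np.cumsum(likelihood[order])
t2_lo = model_ages[order][np.searchsorted(cdf, 0.25)]
t2_median = model_ages[order][np.searchsorted(cdf, 0.50)]
t2_hi = model_ages[order][np.searchsorted(cdf, 0.75)]

print(t2_lo, t2_median, t2_hi)
```

The real analysis marginalises a joint PDF over four parameters, but the read-off of a median estimate bracketed by 25th and 75th percentiles is the same one-dimensional operation shown here.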

It is worth noting that the presence of (Type II) AGN will not affect our analysis of the UV colours or the derived values of the RSF parameters (our sample does not contain quasars). The contamination from a Type II AGN is likely to be less than around 15% in UV flux (Salim et al. 2007), which translates to around 0.15 mags in the (NUV − r) colour, much smaller than the observed spread (around 4 mags) in the UV colour-magnitude relation (see Figure 5 above). Previous work on ETGs using UV/optical data at low redshift indicates that blue ETG colours are not restricted to galaxies hosting emission line AGN (Schawinski et al. 2007). The same analyses also show that the quality of the SED fitting is equally good in galaxies which carry emission-line signatures of AGN and those that do not show any signs of AGN, indicating that there is no measurable contribution from a power-law component in the UV and optical spectrum. Finally, a study of the GALEX (Martin et al. 2005) UV images of nearby AGN hosts indicates that the UV emission is extended, making it unlikely that it comes from a central source (Kauffmann et al. 2007). In summary, the parameter estimation performed in this study is immune to the presence of a Type II AGN.

In the top panel of Figure 8 we present the t2 − f2 space for the ETG population studied in this paper. Relaxed ETGs are shown using filled circles and disturbed ETGs are shown using crosses. Galaxies are colour-coded according to their (u − i) colours. We find that the RSF ages in the disturbed ETGs are typically between ∼0.03 Gyrs and ∼0.3 Gyrs. Not unexpectedly, the relaxed ETG population, which predominantly resides on the UV red sequence, tends to occupy high values of t2. However, it is worth noting that, within errors, the values of t2 in the top panel of Figure 8 indicate that some relaxed ETGs, especially those that are bluer than (u − i) ∼ 4, are likely to contain intermediate-age (1-3 Gyr old) stellar populations. In other words, while their photometry does not indicate the presence of RSF, it is also inconsistent with their entire stellar populations forming at high redshift. Such objects are therefore likely to host intermediate-age stellar components.

The mass fractions formed in the RSF events are typically less than 10% (with a small tail to higher values). Note that the median estimate for the mass fraction typically has a large error (more than half a decade). The reason for these large uncertainties becomes apparent if we refer back to Figure 1, which shows the evolution of the (u − i) colour as a function of the RSF mass fraction at a given RSF age. For all RSF ages, increasing the mass fraction beyond a threshold value of ∼10-20% does not produce any further significant changes in the (u − i) colour. This is because, beyond a point, the spectral energy distribution (SED) becomes dominated by the young stellar component, so that increasing its mass fraction further only changes the normalisation of the SED but not its shape (which determines the colours). Hence, the same (u − i) colour can be consistent with a large range of mass fractions, producing a large degeneracy in the mass fraction values. Note, however, that the (u − i) colour evolves rapidly with age regardless of mass fraction, making the rest-frame UV a more robust indicator of the RSF age (t2) than the RSF mass fraction (f2).


Figure 7: TOP: Median ( u − i ) colours of relaxed ETGs (filled circles), disturbed ETGs (crosses) and late-types (open squares) in the three luminosity bins used in Figure 6 . Note that the points are plotted at the mid-points of each luminosity bin. BOTTOM: The fraction of each population that lies on the UV red sequence (red lines) and in the UV blue cloud (blue lines).


Figure 8: TOP: The t 2 − f 2 space for the ETG population studied in this paper. Recall that t 2 is an estimate of the age of the recent star formation (RSF) and f 2 is its mass fraction. BOTTOM: The metallicity of the RSF (y-axis) plotted against the dust extinction in the galaxy (x-axis). Relaxed ETGs are shown using filled circles and disturbed ETGs are shown using crosses. Galaxies are colour-coded according to their ( u − i ) colours.

In the bottom panel of this figure we present the remaining free parameters in the analysis: the metallicity of the RSF and the dust extinction applied to the galaxy. Similar to red early-types at low redshift, the relaxed ETG population is typically dust-poor, with E(B−V) values less than 0.1. Not unexpectedly, the bulk of the disturbed ETGs are dustier ( 0.1 < E(B−V) < 0.4 ), since their star-forming regions (which dominate the UV fluxes) are gas-rich and therefore expected to also contain dust. An interesting result is that the RSF metallicities are typically sub-solar but reasonably high, suggesting that the gas that forms the young stars could already be metal-enriched. Recall that the metallicity grid spans a wide range of values (0.05-2.5Z ⊙ ) and very low metallicities are included in the grid.

## 5 The Future

### 5.1 Exciting Trends and Near-Future Science Potential

As has been said before, now is a practical time for reviewing the major achievements of optical interferometry, for we are entering a new era boasting facilities with significantly greater sensitivity, angular resolution, spectral resolution, and wavelength coverage. In this section, I will give my views of some of the new capabilities and the expected science returns.

One important trend that must be bolstered is the inclusion of theorists and modellers in the observations and interpretations of interferometry data. In many areas, the interferometry observations are outstripping the available tools for analysis. For example, the wavelength-dependent and time-dependent diameters of AGB stars require a combination of time-dependent hydrodynamical atmospheres and sophisticated radiative transfer codes, a problem that is very challenging even with today’s supercomputers. Understanding the hotspots seen on the surfaces of stars will require 3-dimensional simulations of stellar convection. Accretion disk physics around young stars should include magnetic fields and demands thoughtful consideration of gas and dust physics in a 2-D or 3-D context. Dust production in colliding winds is very poorly understood, and poses a formidable numerical simulation problem. While tackling these difficult physical problems will require the new high-resolution data from optical interferometers, it is also true that input from the modellers and theorists is needed to guide and suggest experiments and observing strategies.

Another general comment is that increasing the angular resolution usually means probing ever-decreasing physical scales. Since interferometers often probe scales smaller than an AU, significant changes in time are expected for even small characteristic velocities ( ∼ km/s). This poses both a risk and an opportunity: a risk, since data must be taken rapidly and efficiently to accurately capture snapshots of ever-evolving and changing environs, and an opportunity to include dynamics and time-evolution in our models and understanding. Observing the dynamics of circumstellar and/or stellar environments allows new physics to be understood, physics that usually can not be unambiguously reconstructed from typical datasets. Thus, I hope that new dynamical information will break theoretical stalemates which paralyze a number of fields. Interferometers have the opportunity to revolutionize the way we think of the universe: from distant “frozen” images of the past, to a dynamic and engaging unfolding of the present.

#### 5.1.1 New Long Baselines

New long baselines will allow unprecedented high-resolution measurements on select sources. With sub-milliarcsecond resolution, one can measure the diameters of “small” sources which have largely eluded current surveys, such as hot stars and nearby low-mass stars. Distortions in the photospheric shapes of rapidly rotating stars or of binary stars in nearly Roche-lobe-filling systems can be directly detected. Limb-darkening studies of important objects, such as Cepheids, can be accomplished to put the Cepheid distance scale on firm direct footing. Further, long baselines make a variety of exoplanet studies possible, such as directly detecting 51 Peg b-like planets (“hot Jupiters”) or resolving planetary transits across the stellar disk. The NPOI, CHARA, and SUSI interferometers will possess the longest baselines in the near-term, while future projects such as the MRO or ’OHANA (on Mauna Kea, HI) might someday extend the resolution below even 0.10 milliarcseconds with > 1 km baselines.
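The resolutions quoted here follow from the fringe spacing λ/(2B). As a rough check, here is a minimal sketch (the function name and example numbers are my own, not from the text) converting a baseline and observing wavelength into a nominal resolution in milliarcseconds:

```python
import math

MAS_PER_RAD = math.degrees(1) * 3600.0 * 1000.0  # milliarcseconds per radian

def fringe_resolution_mas(wavelength_m, baseline_m):
    """Nominal interferometer resolution lambda/(2B), in milliarcseconds."""
    return (wavelength_m / (2.0 * baseline_m)) * MAS_PER_RAD

# A >1 km baseline at visible wavelengths reaches well below 0.1 mas:
print(round(fringe_resolution_mas(600e-9, 1000.0), 3))  # ~0.06 mas
```

For a 1 km baseline at 600 nm the nominal resolution is ∼0.06 mas, consistent with the sub-0.10 mas figure above.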

#### 5.1.2 Imaging

Imaging with optical interferometry is currently tedious at best, and can only investigate simple objects such as resolved photospheres or binary stars. The 6-telescope systems of NPOI and CHARA will soon possess the capability of (comparatively) excellent “snapshot” coverage, allowing more complicated and higher dynamic range imaging of select targets. The CHARA array can not be reconfigured and hence will only image well those targets with appropriately-sized structures – for a maximum baseline of ∼ 330 m at 1.65 μ m, the optimum size scale is a few milliarcseconds. The NPOI interferometer can be reconfigured to “fill in” the (u,v)-plane completely over time, and can be adjusted for individual sources to optimally measure the needed visibility and closure phase information. In the longer term, the auxiliary telescope array at the VLTI and the proposed outrigger telescopes at Keck will allow even fainter infrared targets to be observed. For imaging, the MRO is currently the most ambitious project in the works, hoping to include > 10 telescopes, which would make it the premier imaging interferometer in the world.

Good imaging capabilities would open up new avenues of research, especially in studies of the circumstellar environments at infrared wavelengths. The ability to study disks around young stars and the time evolution of gaps, rings, or other structures would revolutionize our understanding of planet formation. At visible wavelengths, imaging spots on the surfaces of other stars is a major goal, and would allow solar physics to be applied in detail to other stars for the first time.

The unexpected discoveries of Keck aperture masking justify our optimism that imaging will uncover many new phenomena that currently lie hidden in spectral energy distributions. For example, the Wolf-Rayet dust spirals (see Figure 30 ) have only been observed in a few systems, and represent a new area of study when imaging interferometric arrays are fully commissioned.

However, current imaging work using the COAST, NPOI, and IOTA interferometers suffers from a lack of dedicated software resources. Unfortunately, the decades of software development in radio interferometry can not be fully leveraged for optical interferometry, since radio work now relies largely on phase-referencing techniques not generally available in the optical. New imaging software is needed which can take into account the unique nature of optical interferometry data as well as the different nature of our target sources. The recent adoption of a common data exchange format, defined by the COAST and NPOI interferometer teams, represents an important first step towards these goals (see http://www.mrao.cam.ac.uk/~jsy1001/exchange/).

#### 5.1.3 Precision Interferometry

This is a rapidly developing area since the advent of single-mode fibers for spatial filtering and “dual-star” phase referencing. When a model of the astronomical source is well-known, incredibly precise measurements are possible. The greatest potential for this lies in the general area of binary stars, where the stars are either point sources or partially-resolved uniform disks. The case of detecting an exosolar planet is included in this category, since it can be considered as very high-dynamic-range imaging of a faint companion.

While there are open questions in binary evolution and stellar astrophysics which demand such high precision, a more popular reason to pursue “Precision Interferometry” is the detection of extrasolar planets around nearby stars. There are many ways this can be manifested, and I will outline a few of them.

Narrow-angle astrometry is a comparatively “classical” way to detect an exosolar planet. Akin to the Doppler-shift radial velocity method, precision astrometry attempts to detect the minute wobble of the parent star as a planet proceeds in its orbit. This can be done by monitoring the angular distance between a star and a background reference star. In this case, the target star is normally quite bright and is used for phase-referencing to a faint star projected within an isoplanatic patch of the target ( ≲ 30”). Lane et al. ( 2000a ) reported the first measurements of this kind using the PTI (see Figure 31 ). For reference, the motions of Saturn and Jupiter perturb the Sun by ∼ 1 milliarcsecond as viewed from 10 pc. This technique will be applied by the Keck Interferometer and the VLTI for a planet survey, and there is talk of pursuing this in Antarctica, where the isoplanatic patch is larger and the coherence times longer (e.g., Lloyd et al., 2002; Swain, 2002).
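The ∼1 mas figure can be recovered from the barycentric reflex motion, θ ≈ (M_p / M_*) a / d. The following is a rough sketch under stated assumptions (a solar-mass star; the function name and the rounded planet parameters are my own):

```python
def astrometric_wobble_mas(planet_mass_mjup, semimajor_axis_au, distance_pc):
    """Angular wobble of the host star about the system barycentre, in mas.
    Assumes a solar-mass star (M_Jup/M_Sun ~ 1/1047) and uses the fact that
    1 AU subtends 1 arcsecond at a distance of 1 pc."""
    mass_ratio = planet_mass_mjup / 1047.0
    return 1000.0 * mass_ratio * semimajor_axis_au / distance_pc

jupiter = astrometric_wobble_mas(1.0, 5.2, 10.0)   # ~0.5 mas
saturn = astrometric_wobble_mas(0.30, 9.6, 10.0)   # ~0.3 mas
```

Jupiter and Saturn together displace the Sun by roughly 0.8 mas as seen from 10 pc, in line with the ∼1 mas quoted above.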

Figure 31: State-of-the-art narrow-angle astrometry of the binary 61 Cyg by the PTI. For a period of one week, the residual astrometric error in declination was ∼ 100 micro-arcseconds. Figure printed with permission of SPIE, originally appearing in Lane et al. ( 2000a ) .

Another method also being aggressively pursued by the Keck and VLTI interferometers is a multi-wavelength approach to find massive exoplanets by detecting a very slight photocenter shift between different infrared bands due to hypothesized absorption bands in the planet’s atmosphere (i.e., the differential phase method; e.g., Akeson and Swain, 1999; Lopez and Petrov, 2000). This method has the advantage of using the bright target star as its own phase reference. However, recent studies of line-of-sight variability of atmospheric water vapor ( Akeson et al. , 2000b ) indicate that differential chromatic dispersion might be more difficult to calibrate for differential phase methods than originally expected.

Precision measurements of closure phases can also be used to detect faint companions, a method which has not received as much attention. As described earlier in this review (§ 2.2.3 ), the closure phase is formed by summing the interferometer phases on three baselines around a triangle of telescopes, and this quantity is immune to atmospheric phase delays. The lack of attention to precision closure phase methods is understandable, since few interferometers possess the requisite minimum of three telescopes. Monnier ( 2002 ) and Segransan ( 2002 ) recently discussed how closure phases are immune to the dominant calibration problems of differential phase, and how they can also be used to solve for all the parameters of a binary system without needing to measure any visibility amplitudes. For reference, a typical closure phase for a binary with brightness ratio of 10^4 is ∼ 0.01 degrees as long as the component separation is resolved by the interferometer – the same magnitude effect as for differential phase methods.
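The ∼0.01 degree figure can be checked with a toy model. Below is a minimal sketch (function names and example numbers are my own): the normalised complex visibility of a binary is (1 + r·exp(−2πi u·s))/(1 + r), and the closure phase is the argument of the bispectrum, the product of the three visibilities around a closed triangle of baselines:

```python
import cmath
import math

def binary_visibility(u, v, sep_ra, sep_dec, ratio):
    """Normalised complex visibility of a binary: primary at the phase centre,
    companion of relative flux `ratio` offset by (sep_ra, sep_dec) radians.
    The baseline (u, v) is measured in wavelengths."""
    phase = -2.0 * math.pi * (u * sep_ra + v * sep_dec)
    return (1.0 + ratio * cmath.exp(1j * phase)) / (1.0 + ratio)

def closure_phase_deg(b1, b2, sep, ratio):
    """Closure phase (degrees) around the triangle b1, b2, b3 = -(b1 + b2):
    the argument of the bispectrum, immune to telescope-based phase errors."""
    b3 = (-(b1[0] + b2[0]), -(b1[1] + b2[1]))
    bispectrum = 1.0 + 0.0j
    for u, v in (b1, b2, b3):
        bispectrum *= binary_visibility(u, v, sep[0], sep[1], ratio)
    return math.degrees(cmath.phase(bispectrum))

# A companion 10^4 times fainter, well resolved (separation ~2 mas,
# baselines of ~3e7 wavelengths): closure phase of order 0.01 degrees.
cp = closure_phase_deg((1e8 / 3, 0.0), (1e8 / 3, 0.0), (1e-8, 0.0), 1e-4)
```

With r = 10⁻⁴ and a well-resolved separation, the closure phase peaks at a couple of times r radians, i.e. roughly 0.01 degrees, matching the estimate above.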

Current published measurement precision of closure phases is only 0.5 to 5 degrees (Tuthill et al., 2000c; Benson et al., 1997b; Young et al., 2000a). Improving by the three orders of magnitude needed to detect even the brightest possible exoplanet is a daunting challenge. While there are surely unconsidered systematic effects (perhaps due to birefringence or drifts in optical alignment) which will degrade the sensitivity of the precision closure phase technique, the lack of any “showstopper” effects, like differential atmospheric dispersion for the differential phase methods, strongly argues for the further development of the closure phase technique.

#### 5.1.4 Nulling

Another approach being pursued for planet detection is nulling ( Bracewell , 1978 ) . The initial nulling experiments with the MMT ( Hinz et al. , 1998 ) have continued ( Hinz , 2001 ) , and ultimately the technique will be applied on the Large Binocular Telescope Interferometer ( Hinz et al. , 2001a ) . This project is still many years away, but offers an alternative approach to the “precision” phase methods above.

In the nearer term, the Keck Interferometer will be applying nulling in the mid-infrared in order to measure and characterize the zodiacal dust around nearby stars. This source of infrared radiation is expected to be the dominant background for an eventual space-based planet detection interferometer, the so-called “Terrestrial Planet Finder” mission (more information in § 5.2 ). Serabyn and Colavita ( 2001 ) describe the “fully symmetric” nulling combiner being implemented on the Keck Interferometer, and initial on-sky tests are expected to begin in 2003. A more complete description of the observing strategy and expected sensitivity has been documented in Kuchner and Serabyn ( 2003 ) .

Nulling can also be applied on large single apertures, in which case the devices are called nulling coronagraphs. New clever designs in coronagraphy are competing with nulling interferometry for space mission concepts to detect terrestrial planets around other stars, and I recommend interesting papers on optimally-shaped and apodized pupils (Spergel, 2002; Nisenson and Papaliolios, 2001), bandpass-limited image-masks (Kuchner and Traub, 2002), and phase-mask-based approaches (e.g., Guyon et al., 1999; Rouan et al., 2000).

#### 5.1.5 Spectroscopy

There have been only a few significant results combining spectroscopy and interferometry; fortunately, this is about to change. The near-infrared AMBER instrument, slated to arrive at the VLTI interferometer in 2003, will combine three telescope beams together and disperse the light with 3 different spectral resolutions, the maximum being R ≳ 10000. This resolution will allow interferometry on individual spectral lines in the 1-2.5 μ m regime, allowing shock-excited emission lines, CO absorption/emission features, and even emission from YSO jets to be probed in novel and exciting ways for the first time. We can expect the value of interferometric observations to be greatly enhanced by these new capabilities.

#### 5.1.6 Polarimetry

Imaging stars in polarized light with interferometers also promises fascinating new insights into many areas of astrophysics, although this capability is difficult to implement with current interferometers. Vakili et al. ( 2002 ) discuss interesting applications of combining the high spectral resolution of AMBER with polarimetry, and highlight the new capabilities for imaging scattered light and potentially even measuring stellar magnetic fields from the Zeeman effect. Experimental efforts ( Rousselet-Perraut et al. , 1997 ) in this area have been very limited compared to the theoretical progress ( Rousselet-Perraut et al. , 2000 ) ; this situation should be remedied soon.

#### 5.1.7 New Observables

Along with greater spectral coverage and more telescopes come new interferometric observables. While § 5.1.3 discussed possible applications of differential phase and differential closure phase , there are other interferometric observables yet to be exploited for precision interferometry.

Measuring the diameter of a star by precisely locating the first null of the visibility pattern is immune to amplitude calibration errors. This could be done by using a well-calibrated spectrograph to search for the null, either measuring fringe amplitudes or looking for the signature phase-flip across the null (e.g., Mozurkewich, private communication). This technique is similar to the method A. A. Michelson used to measure the diameter of Betelgeuse ( Michelson and Pease , 1921 ) , where the baseline was adjusted in order to find the visibility minimum as judged by eye.
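For a uniform disk, the visibility 2J₁(x)/x (with x = πθB/λ) first nulls at x ≈ 3.832, so the diameter follows directly from the null baseline as θ = 1.22 λ/B. A minimal sketch (the function name is my own; the Betelgeuse numbers are the approximate historical values):

```python
import math

MAS_PER_RAD = math.degrees(1) * 3600.0 * 1000.0  # milliarcseconds per radian

def uniform_disk_diameter_mas(null_baseline_m, wavelength_m):
    """Angular diameter of a uniform disk from the baseline at which the
    visibility first nulls: theta = 1.22 * lambda / B (first zero of 2J1(x)/x)."""
    return 1.22 * wavelength_m / null_baseline_m * MAS_PER_RAD

# Betelgeuse: Michelson & Pease found the null near B ~ 3.07 m at ~575 nm,
# yielding the famous ~47 mas diameter.
print(round(uniform_disk_diameter_mas(3.07, 575e-9), 1))  # prints 47.1
```

Locating the null needs no amplitude calibration at all, only the baseline length at which the fringes vanish (or flip phase), which is the attraction of the method described above.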

The closure amplitude (requires sets of 4 telescopes) is an important quantity in radio interferometry to compensate for unstable amplifier gains and varying antenna efficiencies that can be linked to individual telescopes (e.g., Readhead et al. , 1980 ) . Closure amplitudes are not practical for current optical interferometers partially because most fringe amplitude variations are not caused by telescope-specific gain changes but rather by changing coherence (e.g., due to changing atmosphere). However, the introduction of spatial filtering (e.g., single-mode fibers) should make the closure amplitude a useful tool for optical interferometry soon (see discussion in Monnier , 2000 ) .
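Concretely, for four telescopes the closure amplitude is |V₁₂||V₃₄| / (|V₁₃||V₂₄|): any telescope-based gain gᵢ multiplies each baseline amplitude as gᵢgⱼ and so cancels in the ratio. A small self-contained sketch (names and random numbers are my own) demonstrating the cancellation:

```python
import itertools
import random

def closure_amplitude(amp, i, j, k, l):
    """Closure amplitude |V_ij||V_kl| / (|V_ik||V_jl|) for telescopes i,j,k,l."""
    return amp[(i, j)] * amp[(k, l)] / (amp[(i, k)] * amp[(j, l)])

random.seed(1)
gains = [random.uniform(0.5, 1.5) for _ in range(4)]  # per-telescope gain errors
true = {(i, j): random.uniform(0.1, 1.0)
        for i, j in itertools.combinations(range(4), 2)}  # true amplitudes
meas = {(i, j): gains[i] * gains[j] * v for (i, j), v in true.items()}

# Telescope-based gains cancel: the measured and true closure amplitudes agree.
assert abs(closure_amplitude(meas, 0, 1, 2, 3)
           - closure_amplitude(true, 0, 1, 2, 3)) < 1e-12
```

As the text notes, this only helps when amplitude losses really factor per telescope, which is why spatial filtering is the enabling step for optical closure amplitudes.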

Necessarily, most new observables have yet to be used in practice or described in print. I mention here a few possibilities that this author has considered, to encourage future experimentation. For instance, it may be possible to use closure amplitudes in the case when fringe jitter causes loss of visibility contrast in a fringe-tracking interferometer, due to the way in which small random phase errors degrade coherence. Also, the Closure Differential Phase is a recently defined quantity ( Monnier , 2002 ) , introduced to overcome one limitation of current phase-referencing techniques, namely, that differential phase (and differential closure phase) methods require assumptions about the source structure of the phase calibrator.

#### 5.1.8 Sensitivity (iKeck and VLTI)

Another area where we expect immediate progress is in observing new classes of faint objects for the first time. The Keck and VLTI interferometers will have the capability of observing sources as faint as K ∼ 11 magnitude (down to K ∼ 20 with phase referencing), opening up extragalactic sources for the first time. By the time this review is printed, I expect that the first optical interferometric observations of the core of an Active Galactic Nucleus (AGN) will have been announced. Size measurements of AGN should offer new constraints on models of the infrared continuum and, when coupled with high spectral resolution, could determine the physical origin of observed broad line regions and possibly even measure dynamical black hole masses.

In terms of galactic sources, this increase in sensitivity will allow a broad census of sources to be taken, including YSOs spanning a broad range of ages, luminosities, and distances and binary systems of all masses. For instance, infrared observations of pre-main-sequence binaries allow unique probes of the evolution of binary fraction (e.g., Ghez et al. , 1993 ) as well as important measurements of masses of young stellar objects ( Tamazian et al. , 2002 ) . I expect interferometer observations to play an increasingly important role in this area as the sensitivity increases.

Of course the additional sensitivity will permit new projects too, such as tracking the motions of stars orbiting the black hole at the center of the Milky Way with an order of magnitude greater precision than is possible today with single-aperture telescopes (e.g., Schödel et al. , 2002 ) . Precision astrometry may even allow new tests of General Relativity near supermassive black holes at the centers of nearby galaxies.

In addition, the MIDI instrument for the VLTI will allow sensitive measurements in the mid-infrared for the first time. While the ISI interferometer pioneered interferometry in this wavelength range, MIDI+VLTI will be first to probe a wide range of sources with resolution of ∼ 0.01” and down to N ∼ 4 mag ( ≳ 100 × fainter than the ISI). Mid-infrared observations are sensitive to emission from relatively cool dust and can peer through thicker layers of dust than is possible in the visible or near-infrared. There are great possibilities for advancing our understanding of both young and evolved stars, and for studying dust distributions in a variety of environments.

### 5.2 Space Interferometry

The greatest limitations to optical interferometers arise from atmospheric turbulence. It dramatically limits the sensitivity, the ability to do imaging, and forces the engineering to be clumsy and complicated. Space is naturally an ideal place for interferometry, with no atmosphere to corrupt the phase nor limit the coherent integration time. And long baselines are obviously possible by combining light intercepted by separate spacecraft flying in formation.

#### 5.2.1 Critical Technologies Needed

In order to successfully build space interferometers, many technologies must first be developed. To this day, there has not been any dedicated space interferometer flown (except for the Fine Guidance Sensors on the Hubble Space Telescope; e.g., Franz et al., 1991).

For interferometers deployed on a single structure, one has to contend with truss vibrations, thermal and gravitational gradients, and an unusually large number of mechanisms (failures of which could end the mission). There are issues with propellant and power consumption for maneuvering the array to point around the sky. The Space Interferometry Mission (SIM) is in advanced planning stages and is being designed to measure accurate positions of stars with micro-arcsecond precision. SIM is a “simple” 2-element interferometer on a deployable truss ( ∼ 10 m maximum baseline), and will be the first space mission to attempt space interferometry.

Ultimately, one would want to have baselines much longer than ∼ 10 meters, and this will require separate, free-flying spacecraft. For a space interferometer consisting of “free-flyers,” there are other problems. For instance, maintaining the physical distances between space telescopes to sub-micron tolerances is indeed a challenge. Probably this can not be done; however, by monitoring the spacecraft drifts in real-time using laser metrology, the changing distances can be compensated for by onboard (short) delay lines. Some engineering missions have been proposed to test these ideas, but have yet to really get off the ground (e.g., the NASA Starlight mission was recently cancelled). NASA and ESA should give such a test mission a high priority, since the science potential for a free-flyer interferometer is so much greater than for one limited to a single structure.

#### 5.2.2 Review of Current NASA & ESA Missions

There are a number of mission concepts involving space interferometry being considered by NASA and the European Space Agency (ESA). As mentioned before, the only one in advanced design stages is the NASA Space Interferometry Mission (SIM). In Table 4 , I summarize some of the missions that are being proposed, and their main science drivers. Considering the unreliability of expected launch dates, I have omitted these from the table – it is unlikely any of these will fly before 2010 (2020?).

NASA and ESA have spent much energy on designing missions to detect Earth-like planets around nearby stars, and to measure their crude reflectance (or emission) spectra. With luck, an extrasolar planet spectrum could encode distinctive atmospheric spectral features indicating the presence of life (biomarkers) on the distant planet (e.g., Woolf et al. , 2002 ) . While originally envisioned as an infrared interferometer mission, concepts involving a visible-light coronagraph have been proposed lately. This mission is known as the Terrestrial Planet Finder (TPF) at NASA, and as IRSI-Darwin at ESA. The summary table also includes a few TPF follow-on missions, such as “Life Finder.” These missions are very futuristic, and testify to NASA’s ebullient imagination.

Another area of interest is imaging the far-infrared and sub-millimeter sky at high angular resolution using space interferometry. These wavelengths are difficult to access from the ground due to water absorption in the atmosphere. Because of this, the angular resolution of current observations is very limited ( ∼ 30”); of all wavelengths, the far-IR sky has been surveyed with the lowest resolution.

The proposed NASA mission “Submillimeter Probe of the Evolution of Cosmic Structure” (SPECS) would be a separate-telescope space interferometer (possibly tethered together rather than “free-flying”) designed to map the sky with great sensitivity at a resolution comparable to that currently achievable at other wavelengths ( ∼ 0.010”). This would avoid the confusion-limited regime encountered by current low-angular-resolution galaxy count surveys, and allow the evolution of cosmic structure to be investigated back to high redshift. The SPIRIT mission is meant as a precursor to SPECS to test out various aspects on a single platform.

The X-ray community has also proposed a space interferometer, which would boast micro-arcsecond resolution and be capable of studying the hot material at the event horizons of nearby black holes. Bolstered by successful lab experiments ( Cash et al. , 2000 ) , plans for a free-flying X-ray interferometer called the Micro-Arcsecond X-ray Imaging Mission (MAXIM) have begun. Controlling distances between macroscopic mirrors to picometer precision, as is needed for X-ray interferometry, is indeed a daunting challenge. However, a MAXIM precursor mission with only a few-meter baseline would have orders of magnitude greater resolution than the Chandra X-ray telescope and stands some chance of being flown.

| Acronym | Full Name & Primary Science Drivers |
| --- | --- |
| SIM (NASA) | Space Interferometry Mission: precision astrometry; exosolar planets |
| FKSI (NASA) | Fourier-Kelvin Space Interferometer: find Jovian planets (nuller); map circumstellar disks |
| SMART-3 (ESA) | SMART-3: test free-flying concept for ESA IRSI-Darwin mission |
| IRSI-Darwin (ESA) | Infra-Red Space Interferometer (one concept: Darwin): image terrestrial planets (IR nuller); measure spectra |
| TPF (NASA) | Terrestrial Planet Finder: image terrestrial planets (IR nuller); measure spectra |
| SPIRIT (NASA) | Space Infrared Interferometry Trailblazer: far-IR/sub-mm galaxy counts; precursor to SPECS |
| SPECS (NASA) | Submillimeter Probe of the Evolution of Cosmic Structure: high-resolution map of the high-z universe (far-IR, sub-mm) |
| SI (NASA) | Stellar Imager: image surfaces of stars (visible, ultraviolet) |
| MAXIM (NASA) | Micro-Arcsecond X-ray Imaging Mission: map black hole accretion disks and event horizons (X-rays) |
| MAXIM Pathfinder (NASA) | MAXIM Pathfinder: demonstrate feasibility of X-ray interferometry; achieve 100 μ-arcsecond resolution |
| LF (NASA) | Life Finder: search for biomarkers in planet spectra; TPF extension |
| PI (NASA) | Planet Imager: image surfaces of terrestrial planets, 25×25 pixels (requires 6000 km baselines, futuristic!) |

Table 4: Proposed Space Interferometers

### 5.3 Future Ground-based Interferometers

While it is interesting to speculate about the future of space interferometry, we recognize that it will be expensive, difficult, and slow-paced. In the next 10 or 20 years, we can expect more affordable and rapid progress to be possible from the ground. In this concluding section, I review some of the necessary characteristics of an Optical Very Large Array (OVLA). Ridgway ( 2000 ) discusses many of these considerations, and I refer the reader to his interesting report for further details.

#### 5.3.1 Design Goals

The main design goal of a next-generation optical interferometer array will be to allow the ordinary astronomer to observe a wide range of targets without requiring extensive expert knowledge of interferometer observations. An imaging interferometer with great sensitivity could fulfill this promise by providing finished images, the most intuitive data format currently in use. It will not be a specialty instrument with narrow science drivers, but a general-purpose facility to advance our understanding in a wide range of astrophysical areas.

#### 5.3.2 Optical Very Large Array

One way to achieve this design goal is to scale up the existing arrays. Simply put, this main goal will require an array with a large number of telescopes ( ≳ 20, to allow reliable aperture synthesis imaging) and with large-aperture telescopes corrected by adaptive optics (preferably using laser guide stars for full-sky coverage), allowing a reasonably faint limiting magnitude (roughly speaking, brighter than ∼ 15th magnitude in the infrared with no phase referencing).

This array would likely be reconfigurable, like the radio VLA, to allow different angular resolutions to be investigated. The longest baselines should cover a few kilometers ( ∼ 0.1 milliarcsecond resolution in the near-IR). The main limitation of such a system will be a small field-of-view, typically limited to the diffraction-limited beam of an individual telescope (for 10-m class telescopes, the instantaneous field of view would be only about ∼ 50 milliarcseconds) – although mosaicing would be possible, as in the radio. There are schemes which can image a larger field simultaneously, but they are probably not very practical.
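The quoted field of view is simply the diffraction-limited beam λ/D of a single aperture. A minimal sketch (my own function name; a K-band wavelength is assumed for the example):

```python
import math

MAS_PER_RAD = math.degrees(1) * 3600.0 * 1000.0  # milliarcseconds per radian

def diffraction_fov_mas(wavelength_m, aperture_m):
    """Single-telescope diffraction-limited beam lambda/D, in mas: the
    instantaneous interferometer field of view without mosaicing."""
    return wavelength_m / aperture_m * MAS_PER_RAD

# A 10 m telescope at 2.2 micron (K band):
print(round(diffraction_fov_mas(2.2e-6, 10.0)))  # prints 45
```

This gives ∼45 mas for a 10 m aperture in the K band, i.e. the "about ∼50 milliarcseconds" quoted above; a longer wavelength or smaller aperture widens the beam proportionally.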

With an even larger (billion-dollar) budget, one can partially combine the goals of interferometry with the community priority for a 30 m diameter telescope. This clever idea was recently proposed by R. Angel and colleagues at the University of Arizona. In their “20/20” scheme, light from two extremely large telescopes (diameter > 20 meters) would be combined in a Fizeau combination scheme, patterned after the Large Binocular Telescope, maintaining the entire field-of-view ( ∼ 30”, limited by atmospheric turbulence) with the resolution of the two-element interferometer. Further, this scheme maximizes raw collecting area and would boast potentially incredible sensitivity ( > 20 mag!). One demanding feature of this design is that the two 20+ m telescopes would have to smoothly move along a track in real-time to maintain the large field-of-view; this may not be impossible, but is surely an interesting complication. Further, the imaging advantages of this system only work when the two-telescope baseline is 5-10 × as large as the telescope diameter, and hence the “20/20” interferometer would have maximum baselines of only a few hundred meters at most, not much better than current interferometer arrays. While granting that this system could allow much fainter objects to be observed, this option would cost many times more than the dedicated OVLA system described above.

#### 5.3.3 Technological Obstacles to Be Overcome

If optical interferometry is to continue its impressive growth over the coming decades, important breakthroughs must be made in critical areas. Here, I briefly list a few obvious improvements which would make an OVLA more affordable.

The main advance needed to make the OVLA affordable will be the development of “cheap” large aperture telescopes with adaptive optics. Currently, it costs millions of dollars to build even a 4 m-class telescope – without adaptive optics. Advances in lightweight mirrors with adaptive optics designed-in from the beginning may change the economics of the situation.

Another area which could revolutionize optical interferometry is advances in photonic bandgap fiber materials (e.g., Mueller et al. , 2002 ) . These materials offer the possibility of extremely wide-bandwidth, low-dispersion and low-loss single-mode fibers, which could open up the possibility of practical fiber delay lines. Such an advance would greatly simplify the optical beam-train and engineering of an optical interferometer, making projects such as ’OHANA straightforward. This would put optical interferometry on a more similar footing to radio interferometry, where cable delay lines (either coaxial or fiber) are routinely used.

Combining dozens of telescopes may not be practical using bulk optics, and solutions involving integrated optics should be pursued. The main limitation of this technology is restricted wavelength coverage, currently only proven shortward of 2.2 μm. Development of materials (e.g., lithium niobate) and fabrication processes that can extend the coverage into the thermal infrared (1-5 μm) would mean that a general purpose interferometer could be built around an integrated optics combiner. Work is currently underway in Europe towards this end, in particular in pursuit of mid-infrared nulling capabilities for the ESA IRSI-Darwin mission (Haguenauer et al., 2002).

Lastly, improved infrared detectors are crucial to maximizing the scientific output of a future interferometer. It has already been discussed here (see §3.3.5) that near-infrared detectors remain limited by avoidable detector “read” noise, and a future OVLA must have better detectors.

## 6. CONCLUSIONS AND DISCUSSION

Our survey and analysis provide observational evidence that significant H2 molecule formation is present in sunspots that are able to maintain maximum fields greater than 2500 G. Measurements of the OH equivalent width seen in sunspot umbrae are qualitatively consistent with the predictions from spectra synthesized by radiative transfer models. We infer a molecular gas fraction of a few percent H2 in the largest sunspots. The formation of this small fraction appears to alter the equilibrium of pressures in the sunspot isothermally, resulting in an increase of the slope of the thermal–magnetic relation at temperatures lower than a 15650 Å continuum brightness temperature of 6000 K, where the H2 fraction begins to rapidly increase with temperature. We suggest that the formation of H2 molecules in the sunspot umbra causes a rapid intensification of the magnetic field without a significant decrease in temperature, which would explain the increase in slope of the thermal–magnetic relation.

We hypothesize that H2 plays an important role in the formation and evolution of sunspots. During the initial stage of sunspot emergence and cooling, the formation of H2 may trigger a temporary "runaway" magnetic field intensification process. As magnetic flux emerges and strengthens, the sunspot atmosphere cools due to the suppression of convective heating by the magnetic field. When sufficiently low temperatures are reached H2 begins to form in substantial numbers in the coolest parts of the umbra. As free hydrogen atoms combine to form H2, the total particle number density is reduced. The dissociation energy released into the atmosphere is rapidly dissipated by radiative cooling due to the low opacity of the photosphere, reducing the total kinetic pressure without a corresponding reduction in temperature. The decrease in gas pressure causes this region to shrink in size, and due to the high electrical conductivity of the atmosphere the magnetic fields are compressed with the plasma (the "frozen-in field" effect). The resulting higher magnetic field density further inhibits the convective heating of the sunspot atmosphere, which leads to further cooling. This "runaway" magnetic field intensification process is most likely a temporary phenomenon which is arrested long before all the hydrogen atoms have condensed into molecular form. At some point the transport of energy by convection will be effectively quenched and increases in magnetic field will no longer result in decreases in temperature. The transfer of radiative energy from surrounding hotter regions would also keep the umbra from becoming excessively cool. Therefore, further H2 formation would be halted.

While the formation of H2 may initiate a more rapid intensification of the sunspot magnetic field during sunspot emergence, we speculate that during the decay phase in a sunspot which has already formed a substantial H2 population, the highest concentrations of molecular gas would tend to maintain the magnetic field against decay and extend the lifetime of the sunspot. As the magnetic field in a sunspot weakens, regions of the umbra once cool become warm and H2 dissociates back into atomic hydrogen. The more rapid increase in pressure in warmer regions of the umbra would compress the remaining cool regions, concentrating the magnetic field and maintaining the cool interior against convective heating.

The formation of H2 would speed up the process of sunspot emergence, and the dissociation of H2 would slow down their disappearance. While the effects of the formation and destruction of molecules would produce similar signatures in the thermal–magnetic relation, we would expect to see more cases of sunspots in the decay phase due to observational bias. There is evidence that the intensification of the magnetic field occurs in discrete cores, as can be seen in NOAA 9429, which is consistent with this speculative picture of molecule formation during the growth and decay of sunspots. If this is the case, the molecular fraction would be significantly underestimated in sunspots due to the effect of filling factor, and may exist in quantities of 5% in unresolved features, consistent with the coolest models of the umbra such as those presented in Maltby et al. (1986).

It is possible that two other effects contribute to the nonlinearity of the thermal–magnetic relation. In previous studies (Martínez Pillet & Vázquez 1993; Solanki et al. 1993; Mathew et al. 2004), the nonlinearity of the thermal–magnetic relation was interpreted as a radiative transfer effect, i.e., the Wilson Depression effect in sunspots. Cooler atmospheres in a sunspot are more optically thin; therefore the magnetic field and continuum measurements originate from a greater geometrical depth. In seeing deeper into the atmosphere we are able to see relatively hotter regions, and therefore the observed temperature would seem to decrease less rapidly (relative to radius or B) than in a single geometrical layer. This effect would therefore tend to increase the slope of the thermal–magnetic relation; however, the temperature should still decrease as the magnetic field strength increases.

Through all of this work we have also neglected the curvature force. Increased contributions to the horizontal support from the curvature force in outlying regions may cause the magnetic pressure in the sunspot core to seem boosted, contributing to the nonlinearity of the thermal–magnetic relation.

Detailed modeling efforts are necessary to determine the validity of the proposed scenarios for H2 formation and destruction during the emergence and decay of sunspots, and contribution of the radiative transfer effect and the curvature force to the nonlinearity of the thermal–magnetic relation. We intend to more carefully consider the effects of the curvature force and the Wilson Depression and further investigate the problem of H2 formation using the simultaneous dual-height observations obtained with the 6302 and 15650 Å channels of FIRS to perform a detailed comparison with recent MHD sunspot models from Rempel (2011).

Particularly for the cases of isothermal intensification of the magnetic field in NOAA 9429 and 11130, it is unlikely that such a sharp upturn in the slope can be explained simply through radiative phenomena or the curvature force. The formation of H2 appears to be the most likely cause of the sharp increase in the slope of the thermal–magnetic relation in sunspots. In addition to this magnetic intensification process, the formation of molecules increases the heat capacity of the sunspot atmosphere. Due to the additional degrees of freedom of the H2 molecule, the formation of an H2 fraction of 10% would ideally raise the heat capacity of the gas by 13% over an equivalent number density of atomic gas. This non-thermal reservoir for energy may have an important effect on the local radiative output of the Sun. Consequently, we suggest that modeling of the MHS equilibrium condition of sunspots in the form of Equation (1) must include a multi-component atmospheric model with the proper equation of state to account for the altered thermodynamics of the sunspot atmosphere due to the formation of H2.
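The quoted ~13% figure follows from a simple degrees-of-freedom count. As a minimal sketch (our illustration, not the authors' calculation), assume a heat capacity of (3/2)k per hydrogen atom and (7/2)k per fully excited H2 molecule (translational, rotational, and vibrational degrees of freedom), and read "an H2 fraction of 10%" as 10% of particles by number at equivalent total number density:

```python
# Back-of-the-envelope check of the ~13% heat-capacity increase.
# Assumptions (ours, not stated in the text): C_v = (3/2)k per H atom and
# (7/2)k per fully excited H2 molecule, with "10% H2 fraction" meaning
# 10% of particles by number.

CV_ATOM = 1.5  # C_v per atom, in units of Boltzmann's constant k
CV_H2 = 3.5    # C_v per molecule (translation + rotation + vibration), units of k

def heat_capacity_increase(h2_fraction):
    """Fractional increase in heat capacity per particle over a pure atomic gas."""
    mixed = (1.0 - h2_fraction) * CV_ATOM + h2_fraction * CV_H2
    return mixed / CV_ATOM - 1.0

print('{:.1%}'.format(heat_capacity_increase(0.10)))  # → 13.3%
```

The result, 13.3%, is consistent with the 13% figure quoted above under these idealized assumptions.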

Our sample was obtained sporadically through the minimum phase of solar cycle 23 and does not contain very large sunspots with high magnetic field strengths. An intriguing and unexpected finding is the distinctly different behavior of the B² versus T curve of the largest sunspots in the survey (e.g., NOAA 11131). While we cannot provide a definitive explanation at this point, we point out that Equation (1) describes only the MHS equilibrium condition for sunspots, and sunspots cannot be in perfect MHS equilibrium all the time (in such a case sunspots would be static, without the possibility of evolution). Therefore, we should not expect the B² versus T curves to represent the equilibrium state, or that they follow the same track. We suspect that differences in the observed behavior of the B² versus T curves are the manifestation of the changing magnetic and thermal environment of sunspots at different stages of their evolution. A continued observational effort following sunspots through their life cycle should provide the necessary data to address this issue as solar cycle 24 enters its maximum phase and larger sunspots start to appear.

This work is part of a dissertation submitted to the University of Hawai'i in fulfillment of the requirements for the degree of Doctor of Philosophy.

The FIRS project was funded by the National Science Foundation Major Research Instrument program, grant number ATM-0421582, and was completed through a collaboration between the Institute for Astronomy and the National Solar Observatory. We express our profound gratitude to the NSO for all of their assistance and especially thank the DST observers, Doug Gilliam, Joe Elrod, and Mike Bradford for their patience and ingenuity during the commissioning of FIRS.

Hinode is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is operated by these agencies in co-operation with ESA and NSC (Norway). The Hinode SOT/SP inversions with MERLIN were conducted at NCAR under the framework of the Community Spectropolarimetric Analysis Center (CSAC; http://www.csac.hao.ucar.edu/).

We also thank the Vector Magnetogram Comparison Group and Andres Asensio Ramos for their essential advice on inversion techniques. Finally, we thank Huw Morgan, Ali Tritschler, and our referee who have contributed many useful suggestions for improving the text of this article.

## 1. Introduction

[2] Accurate topographic information, and, in particular, high-resolution digital elevation models are of intense interest for all phases of Mars exploration and scientific investigation, from landing site selection to the quantitative analysis of the morphologic record of surface processes. The need to select geologically interesting yet safe landing sites for the two Mars Exploration Rovers that will arrive in January 2004 created an especially urgent need for topographic and slope information about candidate sites [Golombek et al., 2003]. The MER landing system is similar to that used for Mars Pathfinder and incorporates a cluster of airbags to protect the spacecraft on impact. Surface slopes on a variety of length scales can pose a hazard to this system. For example, even modest slopes (≥2°) over kilometer baselines may cause the spacecraft to roll at high speed and thus be damaged, and intermediate slopes (≥5°) over hundred-meter baselines could cause the final stages of landing (parachute jettison and retrorocket fire) to occur at an unsafe altitude. Finally, slopes at the scale of the airbag cluster (∼5 m diameter) could cause the spacecraft to bounce either too vertically (leading to “stroking out” and structural damage on the next bounce) or too horizontally (tearing the airbags on the following impact). As a rough estimate, slopes ≥15° at the airbag scale are considered dangerous, though the potential for damage depends on the rocks present at the site and on random details of the trajectory and must thus be assessed by detailed simulations.
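The baseline-dependent hazard criteria above amount to a simple slope calculation over each length scale. The following sketch (our illustration only; the function names and sample values are hypothetical, and the thresholds are the rough limits quoted in the text, not an official MER criterion) converts an elevation difference over a baseline into a slope and checks it against the relevant limit:

```python
import math

# Slope-hazard sketch. Thresholds are the rough limits quoted in the text,
# keyed by baseline length in meters; all names and values here are
# illustrative, not taken from any actual MER site-assessment tool.
HAZARD_THRESHOLDS_DEG = {
    1000.0: 2.0,   # kilometer baseline: high-speed roll hazard
    100.0: 5.0,    # hundred-meter baseline: unsafe parachute-jettison altitude
    5.0: 15.0,     # airbag scale (~5 m): bad bounce geometry
}

def slope_deg(dz_m, baseline_m):
    """Slope in degrees for an elevation change dz_m over a horizontal baseline."""
    return math.degrees(math.atan2(dz_m, baseline_m))

def is_hazardous(dz_m, baseline_m):
    """True if the slope meets or exceeds the rough threshold for this baseline."""
    return slope_deg(dz_m, baseline_m) >= HAZARD_THRESHOLDS_DEG[baseline_m]

# A 40 m rise over a 1 km baseline is ~2.3 deg, above the ~2 deg limit.
print(is_hazardous(40.0, 1000.0))  # → True
```

Note that each threshold applies only at its own baseline, which is why slope estimates at several length scales (MOLA, stereo DEMs, photoclinometry) are needed.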

[3] Unfortunately, though Mars Orbiter Laser Altimeter (MOLA) altimetry can be used to assess the slope hazards over kilometer and even 100-m baselines [Anderson et al., 2003], the availability of extremely high resolution topographic data needed to determine slopes at the airbag scale has hitherto been limited. The current “gold standard” for Martian topographic data, MOLA [Zuber et al., 1992; Smith et al., 1999, 2001], has collected data globally with astonishingly high accuracy, but the sample spacing of this data set is only about 300 m along track and, in many places near the equator, adjacent MOLA ground tracks are separated by gaps of one to several kilometers. The MOLA pulsewidth also provides information about relief over smaller distances, but only as an average over the ∼160 m footprint of the laser pulse [Garvin et al., 1999; Haldemann and Anderson, 2002; Neumann et al., 2003]. Viking Orbiter images provide stereo coverage of the entire planet at low resolution and expected vertical precision (EP, a function of image resolution and stereo viewing geometry as discussed below) but highest resolution stereo coverage only of extremely limited areas [Kirk et al., 1999b]. Given that the minimum separation of independent stereo measurements is about 3 pixels because of the necessity of matching finite-sized image patches, the highest resolution Viking images, at about 8 m/pixel, support stereomapping only at horizontal resolutions >24 m. Two-dimensional photoclinometry, or shape-from-shading [Kirk, 1987; Kirk et al., 2003a], can be used to produce DEMs at the pixel resolution from single images. Since single-image coverage is much more abundant than stereo coverage, this in principle increases the likelihood that a given region can be mapped as well as improving the DEM resolution. Photoclinometry must be calibrated against topographic data from another source if quantitatively accurate results are to be obtained, however, so in practice stereo coverage is still needed. In any case, the best (nonstereo) Viking images of the candidate MER landing sites have resolutions much too poor to be useful.

[4] The MOC-NA camera, with a maximum resolution of 1.5 m/pixel [Malin et al., 1992, 1998; Malin and Edgett, 2001], offers the prospect of stereotopographic mapping at a horizontal resolution of ∼5 m and EP ∼ 1 m, though the majority of images used in this study were obtained at ≥3 m/pixel and have correspondingly poorer stereo resolution and EP. MOC-NA stereo coverage is limited because, until late in the prime mission, most images were obtained with nadir pointing and were not targeted to overlap one another. More than 150 MOC-MOC stereopairs were nonetheless obtained by mission phase E14 [Caplinger, 2003]. It is also likely that some MOC images will provide useful stereo coverage when paired with oblique Viking Orbiter images or, eventually, with THEMIS visible-band images. In addition, obtaining images, including stereopairs, of candidate MER landing sites has been an important objective of the MGS extended mission, and these images have been made available for site assessment prior to their formal release. For all these reasons, a capability for stereomapping with the MOC-NA images is highly desirable and has been developed independently by our group [Kirk et al., 2001a, 2002a, 2002b, 2003b] and by others [Ivanov and Lorre, 2002; Ivanov, 2003; Caplinger, 2003]. The push-broom scanner geometry of the camera means that stereo software used for framing cameras (e.g., those of Viking Orbiter) must be modified in order to be used with MOC. The other main challenges in working with MOC data are identifying suitable stereopairs and providing adequate geodetic control for such high-resolution images.

[5] Photoclinometric software initially developed for framing cameras [Kirk, 1987; Kirk et al., 2003a] required only minor modifications for use with MOC images, but the results depend on the accuracy of atmospheric and surface radiative transfer models, and in particular on accurate calibration of the atmospheric haze contribution to each image. Our photoclinometric mapping of landing sites relies heavily on high-resolution stereo data for this calibration but improves the horizontal resolution to the single-pixel level.

[6] In this paper we describe our methods for deriving stereo and calibrated photoclinometric DEMs from MOC-NA images, assess the accuracy of our methods with a variety of tests involving real and simulated data, and describe our results for the topographic slopes in the Mars Pathfinder and seven candidate MER landing sites.

## 7. SUMMARY AND CONCLUSIONS

Deep HST/ACS data have allowed us to derive the lifetime SFH of the Tucana dwarf galaxy, one of the most isolated dSphs in the LG. We have shown that Tucana experienced a strong event of star formation at the oldest possible age, >12.5 Gyr ago. After this initial peak, the measured intensity of the star formation steadily decreased until stopping ≈9 Gyr ago. The tests we performed with mock stellar populations reveal the broadening effect of the observational errors. We find that the actual underlying SFH is compatible with an episode of short duration, in the range σ ≤ 1.0 Gyr, if we assume a Gaussian profile ψ(t) peaked 13 Gyr ago. Our attempt to put firm constraints on the age limits of the main event of star formation is hampered by the limited time resolution, and thus we are not able to clearly answer the question of whether reionization was decisive in ending Tucana's star formation activity.
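The broadening effect referred to above can be illustrated with a toy convolution (our sketch, not the LCID mock-population machinery): a Gaussian burst ψ(t) peaked 13 Gyr ago, convolved with a Gaussian age-error kernel, comes out with a width set by the quadrature sum of the intrinsic and error widths. The 0.5 Gyr burst width and 1.5 Gyr error width used here are assumed, illustrative values:

```python
import numpy as np

# Toy illustration of error broadening of a recovered SFH.
# An intrinsic Gaussian burst (sigma = 0.5 Gyr, peak at 13 Gyr) convolved
# with a Gaussian age-error kernel (sigma = 1.5 Gyr, an assumed value)
# yields a recovered width of sqrt(0.5**2 + 1.5**2) ~ 1.58 Gyr.

dt = 0.01
t = np.arange(0.0, 26.0, dt)  # lookback-time grid (Gyr), padded wide

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

psi_true = gaussian(t, 13.0, 0.5)   # short intrinsic burst
kernel = gaussian(t, 13.0, 1.5)     # age-error kernel (assumed width)

psi_obs = np.convolve(psi_true, kernel, mode='same')
psi_obs /= psi_obs.sum() * dt       # normalize to unit integral

mean = np.sum(t * psi_obs) * dt
sigma_obs = np.sqrt(np.sum((t - mean) ** 2 * psi_obs) * dt)
print(round(sigma_obs, 2))          # close to sqrt(0.5**2 + 1.5**2) ~ 1.58
```

This is why an intrinsically short episode (σ ≤ 1.0 Gyr) can appear substantially extended in the recovered SFH, and why only an upper limit on the burst duration can be quoted.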

We explored alternative mechanisms (both external and internal) that may have shaped Tucana's evolution. On the one hand, current measurements of its radial velocity do not rule out the possibility that Tucana may have traversed the inner regions of the LG once. Therefore, a morphological transformation linked to a close interaction with a larger LG member at a time consistent with the end of its star formation cannot be ruled out. On the other hand, we explored the possibility that the feedback from SN explosions might have been responsible for the gas loss. We used two different arguments to conclude that gas loss connected to SN events must indeed have been very important in Tucana's early evolution: first, comparison of the total amount of energy released by the SNe expected in the early evolution of Tucana with the model by Mac Low & Ferrara (1999) shows that Tucana could be in the blowout region. Second, investigating the chemical difference between two main sub-populations of distinct metallicity present in Tucana, we found evidence that the vast majority of the metals produced by the SNe must have been lost by Tucana to the intergalactic medium.

We devoted a particular effort to comparing the properties of Tucana and Cetus, the two dSphs analyzed consistently in the LCID project, which share the important characteristic of being the most isolated dSphs in the LG. Both are (to first approximation) as old as the oldest Milky Way satellites, such as Draco, UMi, or Sculptor, with no traces of star formation younger than 9 Gyr. The fact that they do not follow the morphology–density relation that has been observed in Milky Way dSph satellites poses interesting questions concerning the effectiveness of the environment in shaping the SFHs of dwarf galaxies. This gives some support to models such as the one recently published by Sawala et al. (2010), in which internal mechanisms such as SNe, enhanced by the effects of cosmic reionization, are able to reproduce the main characteristics of dSph galaxies without having to invoke strong environmental effects. Still, new clues will come from a better understanding of the structure and kinematics of the stellar component of Tucana and by comparing such properties in detail with those of the classical dSph satellites that were clearly affected by the environment.

Despite the obvious similarities in the CMDs and SFHs of Cetus and Tucana, we also demonstrated important differences in their early evolution. We have shown that the formation time of the bulk of the stellar populations in Cetus is clearly delayed compared to Tucana. This clearly appears in the derived SFHs, and other independent indicators support the same conclusion: the morphology of the HB, the properties of the RR Lyrae variable stars (Bernard et al. 2008, 2009), and the characteristics of the RGB bump (Monelli et al. 2010a). The most important conclusion we can draw from this comparison is that it strongly reinforces the conclusions of Monelli et al. (2010b), in particular that the vast majority of the stars in Cetus were formed well after the end of the reionization epoch, therefore suggesting that the end of the star formation in Cetus was not predominantly caused by it. This has important implications for state-of-the-art models on the effects of reionization in the early SFH of dwarf galaxies.

Support for this work was provided by NASA through grant GO-10515 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555, the IAC (grant 310394), and the Education and Science Ministry of Spain (grants AYA2004-06343 and AYA2007-3E3507). This research has made use of NASA's Astrophysics Data System Bibliographic Services and the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.

Facility: HST (ACS) - Hubble Space Telescope satellite

## Footnotes

Based in part on observations made with the NASA/ESA Hubble Space Telescope, obtained by the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.

Based in part on observations obtained at the Kitt Peak National Observatory and Cerro Tololo Interamerican Observatory, National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.

Based in part on observations collected at the La Silla Paranal Observatory, ESO, Chile.

For a discussion of the role of HD 140283 in the history of astronomy, see Bond et al. (2013).