The CHaracterizing ExOPlanet Satellite (CHEOPS) mission pipeline provides photometry for the central star in its field; ARCHI takes in data from the CHEOPS mission pipeline, analyzes the background stars, and determines the photometry of these stars, thus creating the possibility of producing photometric time-series of several close-by targets at once, in addition to using different stars in the image to calibrate systematic errors.
ARCHANGEL is a Unix-based package for the surface photometry of galaxies. While oriented for large angular size systems (i.e. many pixels), its tools can be applied to any imaging data of any size. The package core contains routines to perform the following critical galaxy photometry functions: sky determination; frame cleaning; ellipse fitting; profile fitting; and total and isophotal magnitudes.
The goal of the package is to provide an automated, assembly-line type of reduction system for galaxy photometry of space-based or ground-based imaging data. The procedures outlined in the documentation are flux independent, thus, these routines can be used for non-optical data as well as typical imaging datasets.
ARCHANGEL has been tested on several current OS's (RedHat Linux, Ubuntu Linux, Solaris, Mac OS X). A tarball for installation is available at the download page. The main routines are Python and FORTRAN based, therefore, a current installation of Python and a FORTRAN compiler are required. The ARCHANGEL package also contains Python hooks to the PGPLOT package, an XML processor and network tools which automatically link to data archives (i.e. NED, HST, 2MASS, etc) to download images in a non-interactive manner.
The Arcetri spectral code evaluates the spectrum of the radiation emitted by hot, optically thin plasmas in the spectral range 1 - 2000 Angstroms. The database has been updated to include atomic data and radiative and collisional rates for calculating level populations and line emissivities for a number of ions of the minor elements; a critical compilation of the electron collisional excitation data for these elements has been performed. The present version of the program includes the CHIANTI database for the most abundant elements, the minor-element data, and the Fe III atomic model with its radiative and collisional data.
Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It has a graphical user interface (GUI) which allows the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. Mouse-clicking on a source in the displayed image draws a circular or elliptical aperture and sky annulus around the source and computes the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs, including the image histogram, aperture slices, source scatter plot, sky scatter plot, sky histogram, radial profile, curve of growth, and aperture-photometry-table scatter plots and histograms. APT has functions for customizing calculations, including outlier rejection, pixel “picking” and “zapping,” and a selection of source and sky models. The radial-profile-interpolation source model, accessed via the radial-profile-plot panel, allows recovery of source intensity from pixels with missing data and can be especially beneficial in crowded fields.
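APT itself is GUI-driven, but the aperture-plus-sky-annulus measurement it describes can be sketched with the photutils library; the file name, source position, and radii below are placeholder assumptions, and photutils is used here only as a stand-in for APT's internal calculation.

```python
import numpy as np
from astropy.io import fits
from photutils.aperture import CircularAperture, CircularAnnulus, aperture_photometry

# Illustrative aperture photometry in the style APT performs interactively;
# the file name, source position, and radii are placeholder values.
data = fits.getdata("image.fits")
position = [(512.0, 512.0)]                                 # (x, y) of the clicked source
aperture = CircularAperture(position, r=5.0)                # source aperture
annulus = CircularAnnulus(position, r_in=8.0, r_out=12.0)   # sky annulus

phot = aperture_photometry(data, [aperture, annulus])
sky_per_pix = phot["aperture_sum_1"][0] / annulus.area      # mean local sky level
source = phot["aperture_sum_0"][0] - sky_per_pix * aperture.area
print(f"background-subtracted source intensity: {source:.1f} counts")
```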
APS finds Frequentist confidence limits on high-dimensional parameter spaces by using Gaussian Process interpolation to identify regions of parameter space for which chi-squared is less than or equal to some specified limit. The code is written in C++, is robust against multi-modal chi-squared functions, and converges comparably fast to Monte Carlo methods. Code is also provided to draw Bayesian credible limits using the outputs of APS, though this code does not converge as well. APS requires the linear algebra libraries LAPACK, BLAS, and ARPACK (ascl:1311.010) to run.
APPSPACK is serial or parallel, derivative-free optimization software for solving nonlinear unconstrained, bound-constrained, and linearly-constrained optimization problems, with possibly noisy and expensive objective functions.
Applefy calculates detection limits for exoplanet high contrast imaging (HCI) datasets. The package provides features and functionalities to improve the accuracy and robustness of contrast curve calculations. Applefy implements the classical approach based on the t-test, as well as the parametric bootstrap test for non-Gaussian residual noise. Applefy enables the comparison of imaging results across instruments with different noise characteristics.
APPLawD (Accurate Disk Potentials for Power Law Surface densities) determines the gravitational potential in the equatorial plane of a flat axially symmetric disk (inside and outside) with finite size and power law surface density profile. Potential values are computed on the basis of the density splitting method, where the residual Poisson kernel is expanded over the modulus of the complete elliptic integral of the first kind. In contrast with classical multipole expansions of potential theory, the residual series converges linearly inside sources, leading to very accurate potential values for low order truncations of the series. The code is easy to use, works under variable precision, and is written in Fortran 90 with no external dependencies.
APPHi (Automated Photometry Pipeline) carries out aperture and differential photometry of TAOS-II project data. It is computationally efficient and can also be used with other astronomical wide-field image data. APPHi works with large volumes of data and handles both FITS and HDF5 formats. Due to the large number of stars that the software has to handle in an enormous number of frames, it is optimized to automatically find the best values for the parameters used to carry out the photometry, such as the mask size for the aperture, the size of the window for extraction of a single star, and the number of counts for the threshold for detecting a faint star. Although intended to work with TAOS-II data, APPHi can analyze any set of astronomical images and is a robust and versatile tool for performing stellar aperture and differential photometry.
The appaloosa suite automates flare-finding in every Kepler light curve. It builds quiescent light curve models that include long- and short-cadence data through iterative de-trending and includes completeness estimates via artificial flare injection and recovery tests.
APOLLO forward models the radiative transfer of light through a planetary (or brown dwarf) atmosphere; it also forward models transit and emission spectra and retrieves atmospheric properties of extrasolar planets. The code has two operational modes: one to compute a planetary spectrum given a set of parameters, and one to retrieve those parameters based on an observed spectrum. The package uses emcee (ascl:1303.002) to find the best fit to a spectrum for a given parameter set. APOLLO is modular and offers many options that may be turned on and off, including the type of observations, a flexible molecular composition, multiple cloud prescriptions, multiple temperature-pressure profile prescriptions, multiple priors, and continuum normalization.
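As an illustration of the retrieval mode described above, the sketch below shows how emcee is typically driven to fit a spectrum; the `forward_model` function, parameter names, priors, and mock data are hypothetical placeholders and do not reproduce APOLLO's actual radiative-transfer model.

```python
import numpy as np
import emcee

# Hypothetical stand-in for a radiative-transfer forward model: maps a
# parameter vector theta to a model spectrum on the observed wavelength grid.
def forward_model(theta, wl):
    temperature, log_h2o = theta
    # toy continuum with a wavelength-dependent "absorption" term
    return 1e-4 * temperature * np.exp(log_h2o * wl / wl.max())

def log_prob(theta, wl, flux, err):
    temperature, log_h2o = theta
    if not (500 < temperature < 3000 and -12 < log_h2o < -1):   # simple box prior
        return -np.inf
    model = forward_model(theta, wl)
    return -0.5 * np.sum(((flux - model) / err) ** 2)           # Gaussian likelihood

wl = np.linspace(1.0, 5.0, 200)                 # observed wavelengths (micron, mock)
err = np.full_like(wl, 1e-5)
flux = forward_model([1200.0, -4.0], wl) + np.random.normal(0, 1e-5, wl.size)

ndim, nwalkers = 2, 16
p0 = [1200.0, -4.0] + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(wl, flux, err))
sampler.run_mcmc(p0, 2000)
print(np.median(sampler.get_chain(discard=500, flat=True), axis=0))
```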
apollinaire provides functions and a framework for managing and analyzing data from helioseismic and asteroseismic instruments, and includes all the tools necessary to analyze the acoustic oscillations of solar-like stars. The core of the package is the peakbagging library, which provides a full framework to extract oscillation mode parameters from solar and stellar power spectra.
The apogee package works with SDSS-III APOGEE and SDSS-IV APOGEE-2 data. It reads various data products and applies cuts, works with APOGEE bitmasks, and plots APOGEE spectra. It can generate model spectra for APOGEE spectra, and APOGEE model grids can be used to fit spectra. apogee includes some simple stacking functions and implements the effective selection function for APOGEE.
APLpy (the Astronomical Plotting Library in Python) is a Python module for producing publication-quality plots of astronomical imaging data in FITS format. The module uses Matplotlib, a powerful and interactive plotting package. It is capable of creating output files in several graphical formats, including EPS, PDF, PS, PNG, and SVG. Plots can be made interactively or by using scripts, and co-aligned FITS cubes can be used to make three-color RGB images. It also offers different overlay capabilities, including contour sets, markers with customizable symbols, and coordinate grids, and a range of other useful features.
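A minimal scripted APLpy session might look like the following; the FITS file names, marker positions, and styling options are placeholders.

```python
import aplpy

# Minimal APLpy plot: file names and overlay settings are placeholders.
fig = aplpy.FITSFigure("image.fits")
fig.show_colorscale(cmap="viridis")                  # display the image
fig.show_contour("other_band.fits", colors="white")  # contours from a second image
fig.show_markers([150.1, 150.2], [2.2, 2.3],         # catalog positions (deg)
                 marker="o", edgecolor="red")
fig.add_grid()                                       # coordinate grid overlay
fig.save("figure.png")                               # EPS/PDF/PS/PNG/SVG supported
```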
APERO (A PipelinE to Reduce Observations) performs data reduction for the Canada-France-Hawaii Telescope's near-infrared spectropolarimeter SPIRou and offers different recipes or modules for performing specific tasks. APERO can individually run recipes or process a set of files, such as cleaning a data file of detector effects, collecting all dark files and creating a master dark image to use for correction, and creating a bad pixel mask for identifying and dealing with bad pixels. It can extract flat images to measure the blaze and produce blaze correction and flat correction images, extract dark frames to provide correction for the thermal background after extraction of science or calibration frames, and correct extracted files for leakage coming from a FP (for OBJ_FP files only). It can also take a hot star and calculate telluric transmission, and then use the telluric transmission to calculate principal components (PCA) for correcting input images of atmospheric absorption, among many other tasks.
Apercal is a dedicated, automated data reduction and analysis pipeline written for the Apertif (APERture Tile In Focus) upgrade to the Westerbork Synthesis Radio Telescope. This upgrade dramatically increases the field of view and survey speed of the telescope and is being used for survey observations that can produce 5 terabytes of data for each observation. Apercal uses existing and new tools and parallelization to provide the performance needed for the large volume of data produced by Apertif surveys. The software is written entirely in Python and uses third-party astronomical software, such as AOFlagger (ascl:1010.017), CASA (ascl:1107.013), and Miriad (ascl:1106.007), for certain tasks. Apercal is modular, making it possible to run specific modules manually instead of the full pipeline, and information can be exchanged between modules because status parameters are written to and read from a Python pickled dictionary file. The pipeline can also run fully automatically.
AP3M is an adaptive particle-particle, particle-mesh code. It is older than Hydra (ascl:1103.010) but faster and more memory-efficient for dark-matter only calculations. The Adaptive P3M technique (AP3M) is built around the standard P3M algorithm. AP3M produces fully equivalent forces to P3M but represents a more efficient implementation of the force splitting idea of P3M. The AP3M program may be used in any of the three modes with an appropriate choice of input parameter.
AOTOOLS reduces IR images from adaptive optics. It uses effective dithering, either sky subtraction or dark subtraction, and flat-fielding techniques to determine the effect of the instrument on an image of an object. It also performs bad pixel masking, degrades an AO on-axis PSF due to effects of anisoplanicity, and corrects an AO on-axis PSF due to effects of seeing.
The AOtools package offers generic adaptive optics processing tools in addition to astronomy-specific methods; among these are analyzing data in the pupil plane, images and point spread functions in the focal plane, wavefront sensors, modeling of atmospheric turbulence, physical optical propagation of wavefronts, and conversion functions to convert stellar brightness into photon flux for a given waveband. The software also calculates integrated atmospheric parameters, such as coherence time and isoplanatic angle from atmospheric turbulence and wind speed profile.
The radio frequency interference code AOFlagger automatically flags data and can be used to analyze the data in a measurement set. The purpose of flagging is to mark samples that are affected by interfering sources such as radio stations, airplanes, electrical fences, or other transmitting interferers.
The tools in the package are meant for offline use. The software package contains a graphical interface ("rfigui") that can be used to visualize a measurement set and analyze mitigation techniques. It also contains a console flagger ("rficonsole") that can execute a script of mitigation functions without the overhead of a graphical environment. All tools were written in C++.
The software has been tested extensively on low radio frequencies (150 MHz or lower) produced by the WSRT and LOFAR telescopes. LOFAR is the Low Frequency Array that is built in and around the Netherlands. Higher frequencies should work as well. Some of the methods implemented are the SumThreshold, the VarThreshold and the singular value decomposition (SVD) method. Included also are several surface fitting algorithms.
The software is published under the GNU General Public License version 3.
The anzu package offers two independent codes for hybrid Lagrangian bias models in large-scale structure. The first code measures the hybrid "basis functions"; the second takes measurements of these basis functions and constructs an emulator to obtain predictions from them at any cosmology (within the bounds of the training set). anzu is self-contained; given a set of N-body simulations used to build emulators, it measures the basis functions. Alternatively, given measurements of the basis functions, anzu should in principle be useful for constructing a custom emulator.
AntiparticleDM calculates the prospects of future direct detection experiments to discriminate between Majorana and Dirac Dark Matter (i.e., to determine whether Dark Matter is its own antiparticle). Direct detection event rates and mock data generation are dealt with by a variation of the WIMpy code.
Global mm-VLBI Array (GMVA) observations are accompanied by metadata (the so-called 'ANTAB' files) that contain the system temperature (Tsys) and gain values of the individual GMVA antennas. These data are required for the amplitude calibration of GMVA data, an essential part of the data reduction. Unfortunately, Tsys measurements in the ANTAB files are not perfect, and there are almost always erroneous values in some of them (particularly in the VLBA data). These can lead to incorrect results in the amplitude calibration and thus need to be corrected through proper data inspection and treatment. However, every GMVA station provides its ANTAB file in its own data format, which makes the examination tricky. AntabGMVA was designed to resolve these issues and allows GMVA users to manage the GMVA ANTAB files easily and efficiently. Using AntabGMVA, one can extract, inspect, visualize, and correct the Tsys data from the ANTAB files and finally generate one single ANTAB file that includes all the final products.
ANNz2, a newer implementation of ANNz (ascl:1209.009), utilizes multiple machine learning methods such as artificial neural networks, boosted decision/regression trees and k-nearest neighbors to measure photo-zs based on limited spectral data. The code dynamically optimizes the performance of the photo-z estimation and properly derives the associated uncertainties. In addition to single-value solutions, ANNz2 also generates full probability density functions (PDFs) in two different ways. In addition, estimators are incorporated to mitigate possible problems of spectroscopic training samples which are not representative or are incomplete. ANNz2 is also adapted to provide optimized solutions to general classification problems, such as star/galaxy separation.
ANNz is a freely available software package for photometric redshift estimation using Artificial Neural Networks. ANNz learns the relation between photometry and redshift from an appropriate training set of galaxies for which the redshift is already known. Where a large and representative training set is available, ANNz is a highly competitive tool when compared with traditional template-fitting methods.
For a newer implementation of this package, please see ANNz2 (ascl:1910.014).
Anmap analyses and processes images and spectral data. Originally written for use in radio astronomy, much of its functionality is applicable to other disciplines; additional algorithms and analysis procedures allow direct use in, for example, NMR imaging and spectroscopy. Anmap emphasizes the analysis of data to extract quantitative results for comparison with theoretical models and/or other experimental data. To achieve this, Anmap provides a wide range of tools for analysis, fitting and modelling (including standard image and data processing algorithms). It also provides a powerful environment for users to develop their own analysis/processing tools either by combining existing algorithms and facilities with the very powerful command (scripting) language or by writing new routines in FORTRAN that integrate seamlessly with the rest of Anmap.
This Python package, created around Eric Gendron’s code, analytically (and quickly) generates field-varying SCAO PSFs for the ELT.
The calculation of distances is of fundamental importance in extragalactic astronomy and cosmology. However, no practical implementation for the general case has previously been available. We derive a second-order differential equation for the angular size distance valid not only in all homogeneous Friedmann-Lemaitre cosmological models, parametrised by $\lambda_{0}$ and $\Omega_{0}$, but also in inhomogeneous 'on-average' Friedmann-Lemaitre models, where the inhomogeneity is given by the (in the general case redshift-dependent) parameter $\eta$. Since most other distances can be obtained trivially from the angular size distance, and since the differential equation can be efficiently solved numerically, this offers for the first time a practical method for calculating distances in a large class of cosmological models. We also briefly discuss our numerical implementation, which is publicly available.
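For reference, in the homogeneous (filled-beam, $\eta = 1$) case the angular size distance reduces to a simple integral over the inverse Hubble function; the sketch below evaluates that special case numerically and is not the code's general inhomogeneous ODE solver.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def angular_size_distance(z, H0=70.0, omega0=0.3, lambda0=0.7):
    """Angular size distance (Mpc) in a homogeneous (eta = 1) Friedmann-Lemaitre model.
    The general inhomogeneous case requires solving the second-order ODE instead."""
    omega_k = 1.0 - omega0 - lambda0
    E = lambda zp: np.sqrt(omega0 * (1 + zp) ** 3 + omega_k * (1 + zp) ** 2 + lambda0)
    dc, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)      # dimensionless comoving distance
    dh = C_KM_S / H0                                  # Hubble distance in Mpc
    if abs(omega_k) < 1e-8:                           # flat
        dm = dc
    elif omega_k > 0:                                 # open
        dm = np.sinh(np.sqrt(omega_k) * dc) / np.sqrt(omega_k)
    else:                                             # closed
        dm = np.sin(np.sqrt(-omega_k) * dc) / np.sqrt(-omega_k)
    return dh * dm / (1.0 + z)

print(angular_size_distance(1.0))   # ~1650 Mpc for the default parameters
```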
AngPow computes the auto (z1 = z2) and cross (z1 ≠ z2) angular power spectra between redshift bins (i.e. Cℓ(z1,z2)). The algorithm is based on expansions in the Chebyshev polynomial basis and on the Clenshaw-Curtis quadrature method. AngPow is flexible and can handle any user-defined power spectra, transfer functions, bias functions, and redshift selection windows. The code is fast enough to be embedded inside programs exploring large cosmological parameter spaces through the Cℓ(z1,z2) comparison with data.
anesthetic brings together tools for processing nested sampling chains, leveraging standard scientific Python libraries. The code provides computation of Bayesian evidences, Kullback-Leibler divergences and Bayesian model dimensionalities, marginalized 1D and 2D plots, and dynamic replaying of nested sampling. anesthetic was designed primarily for use with nested sampling outputs, although it can be used for normal MCMC chains.
AnalyticLC generates an analytic light-curve, and optionally RV and astrometry data, from a set of initial (free) orbital elements and simultaneously fits these data. Written in MATLAB, the code is fast and efficient, and provides insight into the motion of the orbital elements, which is difficult to obtain from numerical integration. A Python wrapper for AnalyticLC is available separately.
This code contains several simple radiative transfer models used for fitting the blue-asymmetric spectral line signature often found in infalling molecular cloud cores. It attempts to provide a direct measure of several physical parameters of the infalling core, including infall velocity, excitation temperature, and line-of-sight optical depth. The code includes six radiative transfer models; however, the conclusion of the associated paper is that the five-parameter "hill" model (hill5) is most likely the best match to the physical excitation conditions of real infalling Bonnor-Ebert type clouds.
This code analyzes a dipole axis in the distribution of galaxy spin directions. The code takes as input a list of galaxies, their equatorial coordinates, and their spin directions. It then determines the statistical significance of a possible dipole axis at any point in the sky by comparing the cosine dependence of the spin directions to the mean and standard deviation of the cosine dependence after 2000 runs with random spin directions. A code to analyze the binomial distribution of the spin directions using Monte Carlo simulation is also available.
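The test described above can be sketched as follows; this is an illustrative re-implementation of the idea (an amplitude fit of the cosine dependence compared against randomized spin directions), not the published code, and the function and argument names are assumptions (coordinates in radians, spins as ±1).

```python
import numpy as np

def dipole_significance(ra, dec, spin, axis_ra, axis_dec, n_random=2000, seed=None):
    """Illustrative version of the test: fit the cosine dependence of spin
    directions about a candidate axis and compare to random realizations."""
    rng = np.random.default_rng(seed)
    # unit vectors of the galaxies and of the candidate dipole axis
    g = np.column_stack([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)])
    a = np.array([np.cos(axis_dec) * np.cos(axis_ra),
                  np.cos(axis_dec) * np.sin(axis_ra),
                  np.sin(axis_dec)])
    cos_angle = g @ a

    def dipole_amplitude(spins):
        # least-squares amplitude of spins ~ A * cos(angle to axis)
        return np.sum(spins * cos_angle) / np.sum(cos_angle ** 2)

    observed = dipole_amplitude(spin)
    random_amps = np.array([dipole_amplitude(rng.choice([-1, 1], size=spin.size))
                            for _ in range(n_random)])
    # significance (in sigma) relative to the random-spin distribution
    return (observed - random_amps.mean()) / random_amps.std()
```

The same statistic would then be evaluated on a grid of candidate axes over the sky to map where the dipole significance peaks.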
This notebook provides a comprehensive approach for analyzing and visualizing astronomical data from FITS (Flexible Image Transport System) files, focusing on moment maps derived from molecular line emissions within the galaxy NGC 0628. The analysis involves applying various image processing techniques to handle corrupted pixels, reconstruct images, and enhance the quality of moment maps. The notebook also demonstrates how to simulate super-resolution to improve the spatial resolution of the data. By utilizing Gaussian filtering, median filtering, and contrast enhancement, the approach improves the clarity and precision of the data, making it suitable for detailed astrophysical studies. This tool serves as an efficient method for processing and visualizing large-scale astronomical datasets for further analysis and scientific interpretation.
Analysator analyzes vlsv files produced by Vlasiator (ascl:1908.014). The code facilitates studies of particle paths, pitch angle distributions, velocity distributions, and more. It can read and write VLSV files and do calculations with the data, plot the real space from VLSV files with Mayavi (ascl:1205.008), and plot the velocity space (both blocks and isosurfaces) from VLSV files. It can also take cut-throughs, pitch angle distributions, gyrophase angle distributions, and 3D slices, plot variables with subplots in a clean format, and fit 1D polynomials to data.
ANAigm offers an updated version of the Madau model for the attenuation by intergalactic neutral hydrogen of the radiation from distant objects. The new model, written in Fortran90, predicts attenuation magnitudes through standard broad-band filters that differ from the original Madau model by 0.5--1 mag or more at some redshifts.
ANA calculates the likelihood function for a model comprised of two components to the astrophysical neutrino flux detected by IceCube. The first component is extragalactic. Since point sources have not been found and there is increasing evidence that one source catalog cannot describe the entire data set, ANA models the extragalactic flux as isotropic. The second component is galactic. A variety of catalogs of interest are also provided. ANA takes the galactic contribution to be proportional to the matter density of the universe. The likelihood function has one free parameter fgal that is the fraction of the astrophysical flux that is galactic. ANA finds the best fit value of fgal and scans over 0<fgal<1.
AMUSE is an open source software framework for large-scale simulations in astrophysics, in which existing codes for gravitational dynamics, stellar evolution, hydrodynamics and radiative transport can be easily coupled and placed in the appropriate observational context.
The software framework AMReX is designed for building massively parallel block-structured adaptive mesh refinement (AMR) applications. Key features of AMReX include C++ and Fortran interfaces; 1-, 2- and 3-D support; and support for cell-centered, face-centered, edge-centered, and nodal data. The framework also supports hyperbolic, parabolic, and elliptic solves on a hierarchical adaptive grid structure, optional subcycling in time for time-dependent PDEs, parallelization via flat MPI, OpenMP, hybrid MPI/OpenMP, or MPI/MPI, and parallel I/O. AMReX writes a plotfile format that can be visualized with AmrVis, VisIt (ascl:1103.007), ParaView (ascl:1103.014), and yt (ascl:1011.022).
AmpF numerically calculates the amplification factor for solar lensing. The input parameters are the gravitational-wave frequency and the source angular position with respect to the solar center; the code outputs are the amplification factor and its geometrical-optics limit. AmpF accepts variables for several attributes, and the overall amplitude of the lensing potential can be changed as needed. The method has been implemented in both C and Python.
AMPEL provides an analysis framework for high-throughput surveys and is suited for streamed data. The package combines the functionality of an alert broker with a generic framework capable of hosting user-contributed code; it encourages provenance and keeps track of the varying information states that a transient displays. The latter concept includes information gathered over time and data policies such as access or calibration levels.
AMOEBA (Automated Molecular Excitation Bayesian line-fitting Algorithm) employs a Bayesian approach to Gaussian decomposition, resulting in an objective and statistically robust identification of individual clouds along the line-of-sight. It uses the Python implementation of Goodman & Weare's Affine Invariant Markov chain Monte Carlo (MCMC) Ensemble sampler emcee (ascl:1303.002) to sample the posterior probability distribution and numerically evaluate the integrals required to compute the Bayes Factor. Amoeba takes as input a set of OH optical depth spectra and a set of expected brightness temperature spectra that are obtained by measuring the brightness temperature towards the bright background continuum source (the "on-source" observations), and in a pattern surrounding the continuum source (the "off-source" observations). Amoeba can also take as input a set of OH optical depth spectra only, and also allows input of an arbitrary number of spectra to be fit simultaneously.
AMIsurvey is a fully automated calibration and imaging pipeline for data from the AMI-LA radio observatory; it has two key dependencies. The first is drive-ami, included in this entry. Drive-ami is a Python interface to the specialized AMI-REDUCE calibration pipeline, which applies path delay corrections; automatically flags interference, pointing errors, shadowing, and hardware faults; applies phase and amplitude calibrations; Fourier transforms the data into the frequency domain; and writes out the resulting data in uvFITS format. The second is chimenea, which implements an automated imaging algorithm to convert the calibrated uvFITS into science-ready image maps. AMIsurvey links the calibration and imaging stages implemented within these packages together, configures the chimenea algorithm with parameters appropriate to data from AMI-LA, and provides a command-line interface.
AMIGA is a publicly available adaptive mesh refinement code for (dissipationless) cosmological simulations. It combines an N-body code with an Eulerian grid-based solver for the full set of magnetohydrodynamics (MHD) equations in order to conduct simulations of dark matter, baryons and magnetic fields in a self-consistent way in a fully cosmological setting. Our numerical scheme includes effective methods to ensure proper capturing of shocks and highly supersonic flows and a divergence-free magnetic field. The high accuracy of the code is demonstrated by a number of numerical tests.
AMICAL (Aperture Masking Interferometry Calibration and Analysis Library) processes Aperture Masking Interferometry (AMI) data from major existing facilities, such as NIRISS on the JWST, SPHERE and VISIR on the European Very Large Telescope (VLT), and VAMPIRES on the Subaru telescope. The library cleans the reduced datacube from the standard instrument pipelines, extracts the interferometric quantities (visibilities and closure phases) using a Fourier sampling approach, and calibrates those quantities to remove the instrumental biases. In addition, two external packages (CANDID and Pymask) are included to analyze the final outputs obtained from binary-like sources (star-star or star-planet); these stand-alone packages are interfaced with AMICAL to quickly estimate scientific results (e.g., separation, position angle, contrast ratio, and contrast limits) using different approaches.
AMBIG is a fast, automated algorithm for resolving the 180° ambiguity in vector magnetic field data, including those data from Hinode/Spectropolarimeter. The Fortran-based code is loosely based on the Minimum Energy Algorithm, and is distributed to provide ambiguity-resolved data for the general user community.
AMBER (Apertif Monitor for Bursts Encountered in Real-time) detects single-pulse radio phenomena, such as pulsars and fast radio bursts, in real time. It is a fully auto-tuned pipeline that offloads compute-intensive kernels to many-core accelerators; the software automatically tunes these kernels to achieve high performance on different platforms.
The AMBER data reduction software has an optional graphical interface in a high-level language, allowing the user to control the data reduction step by step or run it in a completely automatic manner. The software has a robust calibration scheme that makes use of the full calibration sets available during the night. The output products are standard OI-FITS files, which can be used directly in high-level software such as model fitting or image reconstruction tools.
AMBER (Abundance Matching Box for the Epoch of Reionization) models the cosmic dawn. The semi-numerical code allows users to directly specify the reionization history through the redshift midpoint, duration, and asymmetry input parameters. The reionization process is further controlled through the minimum halo mass for galaxy formation and the radiation mean free path for radiative transfer. The parallelized code is over four orders of magnitude faster than radiative transfer simulations and will efficiently enable large-volume models, full-sky mock observations, and parameter-space studies.
amber_meta integrates a few routines to launch AMBER (ascl:2209.007) in a systematic manner. To avoid typing a string in the command line manually with all parameters required to launch AMBER, amber_meta generates the command from configuration files, and can directly launch AMBER instances.
AMADA allows an iterative exploration and information retrieval of high-dimensional data sets. This is done by performing a hierarchical clustering analysis for different choices of correlation matrices and by doing a principal components analysis in the original data. Additionally, AMADA provides a set of modern visualization data-mining diagnostics. The user can switch between them using the different tabs.
AM3 simulates lepto-hadronic interactions in astrophysical environments. It solves the time-dependent partial differential equations for the energy spectra of electrons, positrons, protons, neutrons, photons, and neutrinos, as well as charged secondaries (pions and muons), immersed in an isotropic magnetic field. The emission of photons and charged secondaries in electromagnetic and hadronic interactions feeds back into the interaction rates in a time-dependent manner, thereby capturing non-linear effects including electromagnetic cascades. AM3 is computationally efficient, making it possible to scan vast source parameter spaces and fit observational data, and has been deployed to explain multi-wavelength observations from blazars, gamma-ray bursts, and tidal disruption events.
am performs optical depth, radiative transfer, and refraction computations involving propagation through the terrestrial atmosphere and other media at microwave through submillimeter wavelengths. The program is used in radio astronomy, atmospheric radiometry, and radio spectrum management.
AlterBBN evaluates the abundances of the elements generated by Big-Bang nucleosynthesis (BBN). This program computes the abundances of the elements in the standard model of cosmology and allows the user to alter the assumptions of the cosmological model to study their consequences on the abundances of the elements. In particular the baryon-to-photon ratio and the effective number of neutrinos, as well as the expansion rate and the entropy content of the Universe during BBN can be modified in AlterBBN. Such features allow the user to test the cosmological models by confronting them to BBN constraints.
AltaiPony de-trends light curves from the Kepler, K2, and TESS missions and searches them for flares. The code also injects and recovers synthetic flares to account for de-trending and noise losses in flare energy and determines an energy-dependent recovery probability for every flare candidate. AltaiPony uses K2SC (ascl:1605.012), AstroPy (ascl:1304.002), and lightkurve (ascl:1812.013) in addition to other common codes, and extensive documentation and tutorials are provided for the software.
alpconv calculates the ALP-photon conversion by computing the degree of irregularity of the spectrum, in contrast to some other methods that fit the source's spectrum with both null and ALP models and then compare the goodness of fit between the two.
ALminer queries, analyzes, and visualizes the ALMA Science Archive. Users can programmatically query the archive for positions, target names, or other keywords in the archive metadata (such as proposal title, abstract, or scientific category). ALminer's plotting routines allow the query results to be visualized, and its analysis functions allow users to filter the results and check whether certain frequencies of interest are covered in the queried observations. The code also allows users to directly download ALMA data products in FITS format and/or the raw data that can be used for manual image processing. ALminer has been designed to make mining the ALMA archive as simple as possible, while being flexible to be customized according to the user's scientific interests. The code is released with a detailed tutorial Jupyter notebook, introducing ALminer's common functions as well as some of its more advanced options.
ALMA3 computes loading and tidal Love numbers for a spherically symmetric, radially stratified planet. Both real (time-domain) and complex (frequency-domain) Love numbers can be computed. The planetary structure can include an arbitrary number of layers, and each layer can have a different rheological law. ALMA3 can model numerous linear rheologies, including Elastic, Maxwell visco-elastic, Newtonian viscous fluid, Kelvin-Voigt solid, Burgers and Andrade transient rheologies.
AllStarFit analyzes optical and infrared images and includes functions for:
- object detection and image segmentation using the ProFound package (ascl:1804.006);
- PSF determination using the ProFit package (ascl:1612.004) to fit multiple stars in the field simultaneously; and
- galaxy modelling with ProFit, using the previously determined PSF and user-specified models.
AllStarFit supports a variety of optimization methods (provided by external packages), including maximum-likelihood and Markov chain Monte Carlo (MCMC).
allesfitter provides flexible and robust inference of stars and exoplanets given photometric and radial velocity (RV) data. The software offers a rich selection of orbital and transit models, accommodating multiple exoplanets, multi-star systems, star spots, stellar flares, and various noise models. It features both parameter estimation and model selection. A graphical user interface is used to specify input parameters and to easily run a nested sampling or Markov Chain Monte Carlo (MCMC) fit, producing publication-ready tables, LaTeX code, and plots. allesfitter provides an inference framework that unites the versatile packages ellc (ascl:1603.016), aflare (flare model; Davenport et al. 2014), dynesty (ascl:1809.013), emcee (ascl:1303.002) and celerite (ascl:1709.008).
allantools calculates Allan deviation and related time and frequency statistics. The library is written in Python and has a GPL v3+ license. It takes as input evenly spaced observations of either fractional frequency or phase in seconds. Deviations are calculated for given tau values in seconds. Several noise generators for creating synthetic datasets are also included.
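For reference, the textbook (non-overlapping) Allan deviation computed from evenly spaced phase data can be written directly in NumPy; this generic sketch is not allantools' internal implementation.

```python
import numpy as np

def allan_deviation(phase, tau0, m=1):
    """Non-overlapping Allan deviation from evenly spaced phase data (in seconds).
    tau0 is the sampling interval; m is the averaging factor, so tau = m * tau0."""
    x = np.asarray(phase)[::m]                 # decimate phase to averaging time tau
    d2 = x[2:] - 2 * x[1:-1] + x[:-2]          # second differences of phase
    tau = m * tau0
    return np.sqrt(0.5 * np.mean(d2 ** 2)) / tau

# Example: white-phase-noise test data sampled once per second
rng = np.random.default_rng(0)
phase = rng.normal(scale=1e-9, size=10000)     # phase in seconds
for m in (1, 4, 16, 64):
    print(m, allan_deviation(phase, tau0=1.0, m=m))
```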
AlignBandColors (ABC) aligns inter-color-band astronomical images to a 100th of a pixel accuracy using surrounding stars as guiding points. It has currently been tested with Sloan Digital Sky Survey (SDSS) Data Release 12 images, but is designed to be survey-independent. The code is part of the SpArcFiRe (ascl:2107.010) method.
ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.
alf fits the optical to near-IR absorption line spectrum. Initially written to constrain the stellar IMF in old massive galaxies, the code now also offers theoretical age- and metallicity-dependent response functions covering 19 elements, nuisance parameters to capture uncertainties in stellar evolution, and parameters to capture uncertainties in the data, including modeling telluric absorption and sky line residuals. alf can fit stellar populations with metallicities from approximately -2.0 to +0.3 and performs well when fitting stellar populations ranging from metal-poor globular clusters to brightest cluster galaxies. The software works in continuum-normalized space and so does not make any use of the shape of the continuum (nor of corresponding photometry). Fitting is handled with emcee (ascl:1303.002); the code is MPI parallelized and runs efficiently on many processors, though fitting data with alf is time intensive.
ALCHEMIC solves chemical kinetics problems, including gas-grain interactions, surface reactions, deuterium fractionization, and transport phenomena and can model the time-dependent chemical evolution of molecular clouds, hot cores, corinos, and protoplanetary disks.
Albatross analyzes Milky Way stellar streams. This Simulation-Based Inference (SBI) library is built on top of swyft (ascl:2302.016), which implements neural ratio estimation to efficiently access marginal posteriors for all parameters of interest. Using swyft for its internal Truncated Marginal Neural Ratio Estimation (TMNRE) algorithm and sstrax (ascl:2306.008) for fast simulation and modeling, Albatross provides a modular inference pipeline to support parameter inference on all relevant parts of stellar stream models.
Aladin is an interactive software sky atlas allowing the user to visualize digitized astronomical images, superimpose entries from astronomical catalogues or databases, and interactively access related data and information from the Simbad database, the VizieR service and other archives for all known sources in the field.
Created in 1999, Aladin has become a widely-used VO tool capable of addressing challenges such as locating data of interest, accessing and exploring distributed datasets, and visualizing multi-wavelength data. Compliance with existing or emerging VO standards, interconnection with other visualization or analysis tools, and the ability to easily compare heterogeneous data are key topics that allow Aladin to be a powerful data exploration and integration tool as well as a science enabler.
Aladin Lite is a lightweight version of the Aladin tool, running in the browser and geared towards simple visualization of a sky region. It allows visualization of image surveys (JPEG multi-resolution HEALPix all-sky surveys) and permits superimposing tabular data (VOTable) and footprints (STC-S). Aladin Lite is powered by HTML5 canvas technology, is easily embeddable on any web page, and can also be controlled through a JavaScript API.
AIRY simulates optical and near-infrared interferometric observations; it can also perform subsequent image restoration or deconvolution. It is based on the CAOS (ascl:1106.017) Problem Solving Environment. Written in IDL, it consists of a set of specific modules, each handling a particular task.
The objective of this work is to report on the influence of muon interactions on the development of air showers initiated by astroparticles. We make a comparative study of the different theoretical approaches to muon bremsstrahlung and muonic pair production interactions. A detailed algorithm that includes all the relevant characteristics of such processes has been implemented in the AIRES air shower simulation system. We have simulated ultra high energy showers in different conditions in order to measure the influence of these muonic electromagnetic interactions. We have found that during the late stages of the shower development (well beyond the shower maximum) many global observables are significantly modified in relative terms when the mentioned interactions are taken into account. This is most evident in the case of the electromagnetic component of very inclined showers. On the other hand, our simulations indicate that the studied processes do not induce significant changes either in the position of the shower maximum or the structure of the shower front surface.
AIPY collects together tools for radio astronomical interferometry. In addition to pure-python phasing, calibration, imaging, and deconvolution code, this package includes interfaces to MIRIAD (ascl:1106.007) and HEALPix (ascl:1107.018), and math/fitting routines from SciPy.
AIPSLite is an extension for ParselTongue (ascl:1208.020) that allows machines without an AIPS (ascl:9911.003) distribution to bootstrap themselves with a minimal AIPS environment. This allows deployment of AIPS routines on distributed systems, which is useful when data can easily be split into smaller chunks and handled independently.
AIPS ("Classic") is a software package for interactive and batch calibration and editing of astronomical data, typically radio interferometric data. AIPS can be used for the calibration, construction, enhancement, display, and analysis of astronomical images made from data using Fourier synthesis methods. Design and development of the package begin in 1978. AIPS presently consists of over 1,000,000 lines of code and 400,000 lines of documentation, representing over 65 person-years of effort.
AIOLOS solves differential equations for hydrodynamics, friction, (thermal) radiation transport and (photo)chemistry for simulating accretion onto, and hydrodynamic escape from, planetary atmospheres. The 1-D multispecies, multiphysics hydrodynamics code, written in C++, compiles in a flexible mode that runs problems with any number of input species, can be sped up by setting the number of species at compile time, and allows the user to provide initial conditions or boundary conditions if desired. AIOLOS provides output and diagnostic files that give snapshots in time of the state of the simulation. Output files are specific to each species, and diagnostic files contain summary as well as detailed information for, for example, the radiation transport, opacities for all species, and optical cell depths per band, in addition to other information.
AIMS (Asteroseismic Inference on a Massive Scale) estimates stellar parameters and credible intervals/error bars in a Bayesian manner from a set of seismic frequency data and so-called classic constraints. To achieve reliable parameter estimates and computational efficiency it searches through a grid of pre-computed models using an MCMC algorithm; interpolation within the grid of models is performed by first tessellating the grid using a Delaunay triangulation and then doing a linear barycentric interpolation on matching simplexes. Inputs for the modeling consists of individual frequencies from peak-bagging, which can be complemented with classic spectroscopic constraints.
AIDA is an implementation and extension of the MISTRAL myopic deconvolution method developed by Mugnier et al. (2004) (see J. Opt. Soc. Am. A 21:1841-1854). The MISTRAL approach has been shown to yield object reconstructions with excellent edge preservation and photometric precision when used to process astronomical images. AIDA improves upon the original MISTRAL implementation. AIDA, written in Python, can deconvolve multiple frame data and three-dimensional image stacks encountered in adaptive optics and light microscopic imaging.
AI-Feynman fits analytical expressions to data sets via symbolic regression, mapping the target variable to different features supplied in the data array. Using a neural network with constraints on the number of parameters utilized, the code provides the ability to obtain analytical expressions for normalized features that are used to predict a Pareto-optimal target. AI-Feynman is robust in handling noisy data, recursively generating multidimensional symbolic expressions that match data from an unknown function.
Cosmological simulations are the key tool for investigating the different processes involved in the formation of the universe from small initial density perturbations to galaxies and clusters of galaxies observed today. The identification and analysis of bound objects, halos, is one of the most important steps in drawing useful physical information from simulations. With the advent of larger and larger simulations, a reliable and parallel halo finder, able to cope with the ever-increasing data files, is a must. In this work we present the freely available MPI parallel halo finder AHF. We provide a description of the algorithm and the strategy followed to handle large simulation data. We also describe the parameters a user may choose in order to influence the process of halo finding, as well as pointing out which parameters are crucial to ensure untainted results from the parallel approach. Furthermore, we demonstrate the ability of AHF to scale to high-resolution simulations.
AGNvar calculates the expected reverberation signal in any given energy band, for a given spectral energy distribution (SED), assuming variable X-ray emission. The code predicts the shape of the re-processed continuum by modeling the time-averaged SED according to input parameters, which include geometry, mass, and mass accretion rate; generally the input parameters are based on typical XSPEC (ascl:9910.005) models. It evaluates the SED response to an input driving light-curve (assumed to originate in the X-ray corona) and creates a set of time-dependent SEDs. It then takes the results from the set of time-dependent SEDs and extracts the light-curve in a given band pass.
agnpy focuses on the numerical computation of the photon spectra produced by leptonic radiative processes in jetted Active Galactic Nuclei (AGN). It includes classes describing the galaxy components responsible for line and thermal emission and calculates the absorption due to gamma-gamma pair production on soft (IR-UV) photon fields.
AGNfitter is a fully Bayesian MCMC method to fit the spectral energy distributions (SEDs) of active galactic nuclei (AGN) and galaxies from the sub-mm to the UV; it enables robust disentanglement of the physical processes responsible for the emission of sources. Written in Python, AGNfitter makes use of a large library of theoretical, empirical, and semi-empirical models to characterize both the nuclear and host galaxy emission simultaneously. The model consists of four physical emission components: an accretion disk, a torus of AGN heated dust, stellar populations, and cold dust in star forming regions. AGNfitter determines the posterior distributions of numerous parameters that govern the physics of AGN with a fully Bayesian treatment of errors and parameter degeneracies, allowing one to infer integrated luminosities, dust attenuation parameters, stellar masses, and star formation rates.
Agatha is a framework of periodograms to disentangle periodic signals from correlated noise and to solve the two-dimensional model selection problem: signal dimension and noise model dimension. These periodograms are calculated by applying likelihood maximization and marginalization and combined in a self-consistent way. Agatha can be used to select the optimal noise model and to test the consistency of signals in time and can be applied to time series analyses in other astronomical and scientific disciplines. An interactive web implementation of the software is also available at http://agatha.herts.ac.uk/.
The AGAMA library is a collection of tools for constructing and analyzing models of galaxies. It computes gravitational potential and forces, performs orbit integration and analysis, and can convert between position/velocity and action/angle coordinates. It offers a framework for finding best-fit parameters of a model from data and self-consistent multi-component galaxy models, and contains useful auxiliary utilities such as various mathematical routines. The core of the library is written in C++, and there are Python and Fortran interfaces. AGAMA may be used as a plugin for the stellar-dynamical software packages galpy (ascl:1411.008), AMUSE (ascl:1107.007), and NEMO (ascl:1010.051).
AFR, or ASPFitsReader, reduces, processes, and manipulates pulsar data, including calibration, template profile creation, and interactive excision of radio frequency interference from pulsar profile data. It also creates times-of-arrival compatible with Tempo (ascl:1509.002) and Tempo2 (ascl:1210.015) timing software.
AFINO (Automated Flare Inference of Oscillations) finds oscillations in time series data using a Fourier-based model comparison approach. The code analyzes the data and generates a results file in either JSON or Pickle format, which contains numerous properties of the data and analysis, and a summary plot.
aesop (ARC Echelle Spectroscopic Observation Pipeline) analyzes echelle spectra for observations made by the Astrophysics Research Consortium (ARC) Echelle Spectrograph on the ARC 3.5 m Telescope at Apache Point Observatory. It is a high resolution spectroscopy software toolkit that picks up where the traditional IRAF reduction scripts leave off, and offers blaze function normalization by polynomial fits to observations of early-type stars, a robust least-squares normalization method, and radial velocity measurements (or offset removals) via cross-correlation with model spectra, including barycentric radial velocity calculations. It also concatenates multiple echelle orders into a simple 1D spectrum and provides approximate flux calibration.
Aegean, written in python, finds compact sources within radio images by seeking out islands of pixels above a given threshold and then using the curvature of the image to determine how many Gaussian components should be used to describe the island. The Gaussian fitting is initiated with parameters determined from the curvature and intensity maps, and makes use of mpfit to perform a constrained fit. Aegean has been optimized for compact radio sources in images that have no diffuse background emission, but by pre-processing the images with a spatial filter, or by convolving an optical image with an appropriately small PSF, Aegean is able to produce excellent results in a range of applications.
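The "islands of pixels above a threshold" step can be illustrated with scipy's connected-component labeling; this is a generic sketch of the idea, not Aegean's implementation, and the threshold convention and function names are assumptions.

```python
import numpy as np
from scipy import ndimage

def find_islands(image, rms, threshold_sigma=5.0):
    """Generic sketch of the island-finding step: label connected groups of
    pixels brighter than threshold_sigma times the noise rms."""
    mask = image > threshold_sigma * rms
    labels, n_islands = ndimage.label(mask)             # connected components
    slices = ndimage.find_objects(labels)               # bounding box of each island
    peaks = ndimage.maximum_position(image, labels, range(1, n_islands + 1))
    return labels, slices, peaks

# Each island would then be fit with one or more Gaussian components,
# the number of components being chosen from the curvature (second-derivative) map.
```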
ACIS Extract (AE), written in the IDL language, provides innovative and automated solutions to the varied challenges found in the analysis of X-ray data taken by the ACIS instrument on NASA's Chandra observatory. AE addresses complications found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission. AE can perform virtually all the data processing and analysis tasks that lie between Level 2 ACIS data and publishable LaTeX tables of point-like and diffuse source properties and spectral models.
The goal of the development of the Aarhus Adiabatic Oscillation Package was to have a simple and efficient tool for the computation of adiabatic oscillation frequencies and eigenfunctions for general stellar models, emphasizing also the accuracy of the results. The Fortran code offers considerable flexibility in the choice of integration method as well as ability to determine all frequencies of a given model, in a given range of degree and frequency. Development of the Aarhus adiabatic pulsation code started around 1978. Although the main features have been stable for more than a decade, development of the code is continuing, concerning numerical properties and output. The code has been provided as a generally available package and has seen substantial use at a number of installations. Further development of the package, including bringing the documentation closer to being up to date, is planned as part of the HELAS Coordination Action.
adiabatic-tides evaluates the tidal stripping of dark matter (sub)haloes in the adiabatic limit. It exactly reproduces the remnant of an NFW halo that is exposed to a slowly increasing isotropic tidal field and approximately reproduces the remnant for an anisotropic tidal field. adiabatic-tides also predicts the asymptotic mass loss limit for orbiting subhaloes and differently concentrated host-haloes with and without baryonic components, and can be used to improve predictions of dark matter annihilation.
ADBSat computes aerodynamic coefficient databases for satellite geometries in free-molecular flow (FMF) conditions. Written in MATLAB, ADBSat imports body geometry from .stl or .obj mesh files, calculates aerodynamic force and moment coefficients for different gas-surface interaction models, and calculates solar radiation pressure force and moment coefficients. It also takes multiple surface and material characteristics into consideration. ADBSat is a panel-method tool that is able to calculate aerodynamic or solar force and moment coefficient sets for satellite geometries by applying analytical (closed-form) expressions for the interactions to discrete flat-plate mesh elements. The panel method of ADBSat assumes FMF conditions. The code analyzes basic shadowing to identify panels that are shielded from the flow by other parts of the body and will therefore not experience any surface interactions. However, this method is dependent on the refinement of the input mesh and can be sensitive to the orientation and arrangement of the mesh elements with respect to the oncoming flow direction.
ADAPTSMOOTH serves to smooth astronomical images in an adaptive fashion in order to enhance the signal-to-noise ratio (S/N). The adaptive smoothing scheme takes full advantage of the spatially resolved photometric information contained in an image: at any location, the minimal smoothing needed to reach the requested S/N is applied. Support is given to match multiple images to the same smoothing length, so that proper estimates of local colors can be made, with a big potential impact on multi-wavelength studies of extended sources (galaxies, nebulae). Different modes to estimate the local S/N are provided. In addition to the classical arithmetic-mean averaging mode, the code can operate in median averaging mode, resulting in a significant enhancement of the final image quality and very accurate flux conservation.
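The adaptive scheme (grow the smoothing kernel at each location until the requested S/N is reached) can be sketched as follows; this is an illustrative re-implementation assuming uniform Gaussian pixel noise and box kernels, not the ADAPTSMOOTH algorithm itself.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_smooth(image, noise_sigma, target_snr, max_width=31):
    """Illustrative adaptive smoothing: keep, at each pixel, the smallest
    box average that reaches the requested S/N (uniform Gaussian noise assumed)."""
    image = image.astype(float)
    out = np.array(image)
    done = np.zeros(image.shape, dtype=bool)
    for width in range(1, max_width + 1, 2):        # odd box sizes: 1, 3, 5, ...
        smoothed = uniform_filter(image, size=width)
        snr = smoothed / (noise_sigma / width)      # noise of a mean of width**2 pixels
        newly_done = ~done & (snr >= target_snr)
        out[newly_done] = smoothed[newly_done]
        done |= newly_done
    out[~done] = smoothed[~done]                    # largest box as a fallback
    return out
```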
AdaptiveBin takes one or more images and adaptively bins them. If one image is supplied, the pixels are binned by fractional error on the intensity. If two or more images are supplied, the pixels are binned by fractional error on the combined color.
AdaptaHOP is a structure and substructure detector. It reads an input particle distribution file and can compute the mean square distance between each particle and its nearest neighbors, or the SPH density associated with each particle together with the list of its nearest neighbors. It can also read an input particle distribution and a neighbors file (output from a previous run) and output the tree of the detected structures and substructures.
AdaMet (Adaptive Metropolis) performs efficient Bayesian analysis. The user-friendly Python package is an implementation of the Adaptive Metropolis algorithm. In many real-world applications, it is more efficient and robust than emcee (ascl:1303.002), whose warm-up phase scales linearly with the number of walkers. For this reason, and because of its didactic value, the AdaMet code is provided as an alternative.
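A compact sketch of the generic Adaptive Metropolis scheme (in the spirit of Haario et al. 2001; this illustrates the algorithm itself, not AdaMet's actual interface): the Gaussian proposal covariance is re-estimated at intervals from the chain accumulated so far.

    import numpy as np

    def adaptive_metropolis(log_prob, x0, nsteps=5000, adapt_every=200, seed=0):
        """Generic Adaptive Metropolis: the Gaussian proposal covariance is
        re-estimated from the accumulated chain at regular intervals."""
        rng = np.random.default_rng(seed)
        ndim = len(x0)
        cov = 0.1 * np.eye(ndim)              # initial proposal covariance
        eps = 1e-8 * np.eye(ndim)             # keeps the covariance positive definite
        chain = np.empty((nsteps, ndim))
        x, lp = np.asarray(x0, dtype=float), log_prob(x0)
        for i in range(nsteps):
            prop = rng.multivariate_normal(x, cov)
            lp_prop = log_prob(prop)
            if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
                x, lp = prop, lp_prop
            chain[i] = x
            if i > 0 and i % adapt_every == 0:
                cov = (2.38 ** 2 / ndim) * np.cov(chain[: i + 1].T) + eps
        return chain

    # Example: sample a correlated 2D Gaussian target.
    log_prob = lambda x: -0.5 * (x[0] ** 2 + (x[1] - x[0]) ** 2)
    samples = adaptive_metropolis(log_prob, x0=[0.0, 0.0])

The 2.38^2/d scaling of the empirical covariance is the standard choice for roughly Gaussian targets.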
ADAM (All-Data Asteroid Modeling) reconstructs asteroid shapes from observations. Developed in MATLAB with core routines in C, its features include support for general nonconvex and non-starlike parametric 3D shapes and reconstruction of asteroid shape from any combination of lightcurves, adaptive optics images, HST/FGS data, disk-resolved thermal images, interferometry, and range-Doppler radar images. ADAM does not require boundary contour extraction for reconstruction and can be run in parallel.
ActSNClass uses a parametric feature extraction method, a Random Forest classifier, and two learning strategies (uncertainty sampling and random sampling) to perform active learning for supernova photometric classification.
The ACStools package contains Python tools to work with data from the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS). The package has several calibration utilities and a zeropoints calculator, can detect satellite trails, and offers destriping, polarization, and photometric tools.
ALMA Common Software (ACS) provides a software infrastructure common to all ALMA partners and consists of a documented collection of common patterns and components which implement those patterns. The heart of ACS is based on a distributed Component-Container model, with ACS Components implemented as CORBA objects in any of the supported programming languages. ACS provides common CORBA-based services such as logging, error and alarm management, configuration database and lifecycle management. Although designed for ALMA, ACS can be, and is being, used in other control systems and distributed software projects, since it implements proven design patterns using state-of-the-art, reliable technology. Through the use of well-known standard constructs and components, it also allows team members who are not authors of ACS to easily understand the architecture of software modules, making maintenance affordable even on a very large project.
acorns generates a hierarchical system of clusters within discrete data by using an n-dimensional unsupervised machine-learning algorithm that clusters spectroscopic position-position-velocity data. The algorithm is based on a technique known as hierarchical agglomerative clustering. Although acorns was designed with the analysis of discrete spectroscopic position-position-velocity (PPV) data in mind (rather than uniformly spaced data cubes), clustering can be performed in n-dimensions and the algorithm can be readily applied to other data sets in addition to PPV measurements.
ACORNS-ADI, written in Python, is a parallelized software package which reduces high-contrast imaging data. Originally written for imaging data from Subaru/HiCIAO, it requires minimal modification to reduce data from other instruments. It is efficient, open-source, and includes several optional features which may improve performance.
The AbundanceMatching Python module creates (interpolates and extrapolates) abundance functions and also provides fiducial deconvolution and abundance matching.
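For orientation, a scatter-free sketch of the rank-ordered matching that underlies such tools (plain NumPy, not the module's own API; the toy luminosity function and halo catalog are purely illustrative). The module itself additionally deconvolves scatter and extrapolates the abundance function.

    import numpy as np

    def abundance_match(halo_prop, gal_prop, gal_cumden, box_size):
        """Scatter-free rank-order abundance matching: each halo receives the
        galaxy property whose cumulative number density matches its own.
        gal_cumden must be increasing (e.g. from bright to faint magnitudes)."""
        order = np.argsort(halo_prop)[::-1]                      # largest halo property first
        n_halo = (np.arange(halo_prop.size) + 1) / box_size**3   # cumulative density per halo
        matched = np.interp(n_halo, gal_cumden, gal_prop)
        out = np.empty_like(matched)
        out[order] = matched
        return out

    # Toy example: a made-up cumulative luminosity function and halo catalog.
    gal_mag = np.linspace(-23.0, -17.0, 61)                  # absolute magnitudes
    gal_cumden = 1e-3 * 10 ** (0.6 * (gal_mag + 20.0))       # toy n(<mag) in Mpc^-3
    halo_vmax = np.random.default_rng(0).lognormal(5.3, 0.4, size=10000)
    matched_mags = abundance_match(halo_vmax, gal_mag, gal_cumden, box_size=250.0)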
abundance, written in Fortran, provides driver and fitting routines to compute the predicted number of clusters in a ΛCDM cosmology that agrees with CMB, SN, BAO, and H0 measurements (up to 2010) at some specified parameter confidence and the mass that would rule out that cosmology at some specified sample confidence. It also computes the expected number of such clusters in the light cone and the Eddington bias factor that must be applied to observed masses.
Line broadening cross sections for the broadening of spectral lines by collisions with neutral hydrogen atoms have been tabulated by Anstee & O’Mara (1995), Barklem & O’Mara (1997) and Barklem, O’Mara & Ross (1998) for s–p, p–s, p–d, d–p, d–f and f–d transitions. abo-cross, written in Fortran, interpolates in these tabulations to make these data more accessible to the end user. The code can be incorporated into existing spectrum synthesis programs or used in stand-alone mode to compute line broadening cross sections for specific transitions.
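The interpolation step itself is simple; a hedged Python sketch follows (the package itself is Fortran, and the grid axes and table values below are placeholders, not the published data): cross sections are interpolated on a grid of effective principal quantum numbers of the lower and upper levels.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Placeholder grid: effective principal quantum numbers n* of the lower and
    # upper levels, and a dummy cross-section table (atomic units).  Substitute
    # the published Anstee/Barklem/O'Mara tabulations for real work.
    nstar_lower = np.linspace(1.0, 3.0, 21)
    nstar_upper = np.linspace(1.3, 4.0, 18)
    sigma_table = np.ones((nstar_lower.size, nstar_upper.size))

    interp_sigma = RegularGridInterpolator((nstar_lower, nstar_upper), sigma_table)

    # Broadening cross section for one (hypothetical) transition:
    sigma = interp_sigma([[1.6, 2.2]])[0]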
abcpmc is a Python Approximate Bayesian Computing (ABC) Population Monte Carlo (PMC) implementation based on Sequential Monte Carlo (SMC) with Particle Filtering techniques. It is extendable with k-nearest neighbour (KNN) or optimal local covariance matrix (OLCM) perturbation kernels and has built-in support for massively parallelized sampling on a cluster using MPI.
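As a toy illustration of the ABC idea behind the package (a plain rejection step, not abcpmc's PMC machinery or its API; the Gaussian model and tolerance are arbitrary): parameters are drawn from the prior, data are simulated, and draws are kept when a summary statistic falls within a tolerance of the observed one.

    import numpy as np

    rng = np.random.default_rng(1)
    y_obs = rng.normal(2.0, 1.0, size=500)               # "observed" data
    summary = lambda d: d.mean()                          # summary statistic
    distance = lambda a, b: abs(summary(a) - summary(b))

    def abc_rejection(n_draws, eps):
        """Keep prior draws whose simulated data lie within eps of the observation."""
        accepted = []
        for _ in range(n_draws):
            mu = rng.uniform(-5.0, 5.0)                   # draw from the prior
            y_sim = rng.normal(mu, 1.0, size=y_obs.size)
            if distance(y_sim, y_obs) < eps:
                accepted.append(mu)
        return np.array(accepted)

    posterior_draws = abc_rejection(20000, eps=0.05)

In the PMC approach implemented by abcpmc, the tolerance is tightened over successive iterations and accepted particles are reweighted and perturbed (via the KNN or OLCM kernels above) instead of being redrawn blindly from the prior.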
autoregressive-bbh-inference, written in Python, models the distributions of binary black hole masses, spins, and redshifts to identify physical features appearing in these distributions without the need for strongly-parametrized population models. This allows not only agnostic study of the “known unknowns” of the black hole population but also reveals the “unknown unknowns,” the unexpected and impactful features that may otherwise be missed by the standard building-block method.
aartfaac2ms converts raw Aartfaac correlator files to the casacore (ascl:1912.002) measurement set format. It phase rotates the data to a common phase center, and (optionally) flags, averages, and compresses the data. The code includes a tool, afedit, to splice a raw Aartfaac set based on LST.
AART (Adaptive Analytical Ray Tracing) exploits the integrability properties of the Kerr spacetime to compute high-resolution black hole images and their visibility amplitude on long interferometric baselines. It implements a non-uniform adaptive grid on the image plane suitable to study black hole photon rings (narrow ring-shaped features, predicted by general relativity but not yet observed). The code implements all the relevant equations required to compute the appearance of equatorial sources on the (far) observer's screen.
This Python code automatically detects solar active regions (AR). Based on morphological operations and region growing, it uses synoptic magnetograms from SOHO/MDI and SDO/HMI and calculates the parameters that characterize each AR, including the latitude and longitude of the flux-weighted centroid of the two polarities and of the whole AR, the area and flux of each polarity, and the initial and final dipole moments.
AAOGlimpse is an experimental display program that uses OpenGL to display FITS data (and even JPEG images) as 3D surfaces that can be rotated and viewed from different angles, all in real-time. It is WCS-compliant and designed to handle three-dimensional data. Each plane in a data cube is surfaced in the same way, and the program allows the user to travel through a cube by 'peeling off' successive planes, or to look into a cube by suppressing the display of data below a given cutoff value. It can blink images and can superimpose images and contour maps from different sources using their world coordinate data. A limited socket interface allows communication with other programs.
The ALeRCE anomaly detector cross-validates six anomaly detection algorithms for three classes (transient, periodic, and stochastic) of anomalous sources within the Zwicky Transient Facility (ZTF) data stream using the ALeRCE light curve features. A machine and deep learning-based framework is used for anomaly detection. For each class, a distinct anomaly detection model is constructed using only information about the known objects (i.e., inliers) for training. An anomaly score is then computed from the probabilities that the light curve is of transient, stochastic, or periodic nature.
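A minimal sketch of the train-on-inliers strategy described above (generic scikit-learn, not the ALeRCE codebase; the feature arrays are synthetic placeholders): a model is fit only on features of known objects of one class and then assigns anomaly scores to new light curves.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    inlier_features = rng.normal(size=(5000, 20))   # features of known objects (inliers)
    new_features = rng.normal(size=(100, 20))       # light-curve features to score

    model = IsolationForest(n_estimators=200, random_state=0)
    model.fit(inlier_features)                      # trained on inliers only

    scores = model.score_samples(new_features)      # lower = more anomalous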
a3cosmos-gas-evolution calculates galaxies' cold molecular gas properties using gas scaling functions derived from the A3COSMOS project. Given galaxies' redshifts (or cosmic ages), stellar masses, and star formation enhancement relative to the star-forming main sequence (Delta MS), the gas scaling functions predict their gas-to-stellar mass ratio (gas fraction) and gas depletion time.
A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.
A-SLOTH (Ancient Stars and Local Observables by Tracing Halos) connects the formation of the first stars and galaxies to observables. The model is based on dark matter merger trees, on which A-SLOTH applies analytical recipes for baryonic physics to model the formation of both metal-free and metal-poor stars and the transition between them. The software samples individual stars and includes radiative, chemical, and mechanical feedback. A-SLOTH has versatile applications with moderate computational requirements. It can be used to constrain the properties of the first stars and high-z galaxies based on local observables, predicts properties of the oldest and most metal-poor stars in the Milky Way, can serve as a subgrid model for larger cosmological simulations, and predicts next-generation observables of the early Universe, such as supernova rates or gravitational wave events.
Photon asymmetry is a novel robust substructure statistic for X-ray cluster observations with only a few thousand counts; it exhibits better stability than power ratios and centroid shifts and has a smaller statistical uncertainty than competing substructure parameters, allowing low levels of substructure to be measured with confidence. A_phot computes the photon asymmetry (A_phot) parameter for morphological classification of clusters and allows quantifying substructure in samples of distant clusters covering a wide range of observational signal-to-noise ratios. The Python scripts are completely automatic and can be used to rapidly classify galaxy cluster morphology for large numbers of clusters without human intervention.
Working with a GUI, or adding interactivity to plots, helps considerably in data analysis; however, the common GUI toolkits for Python are OS-dependent, while manually writing interactive code is complex. pltgui is a pseudo-GUI tool that adds buttons and checkers to a plot and assigns callback functions to them. The documentation is currently in Chinese; an English version is planned for the next release. The program is published on PyPI and can be installed with 'pip install pltgui'.
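The pseudo-GUI concept can be illustrated with matplotlib's built-in widgets (a generic sketch; pltgui's own interface may differ): buttons and checkers are drawn inside the figure and bound to callback functions.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.widgets import Button, CheckButtons

    fig, ax = plt.subplots()
    x = np.linspace(0, 10, 500)
    (line,) = ax.plot(x, np.sin(x))

    # A button inside the figure that rescales the y-axis when clicked.
    button_ax = fig.add_axes([0.80, 0.01, 0.15, 0.06])
    button = Button(button_ax, "Autoscale")
    button.on_clicked(lambda event: (ax.relim(), ax.autoscale(), fig.canvas.draw_idle()))

    # A checker that toggles the curve's visibility.
    check_ax = fig.add_axes([0.02, 0.01, 0.18, 0.06])
    check = CheckButtons(check_ax, ["show curve"], [True])
    check.on_clicked(lambda label: (line.set_visible(not line.get_visible()),
                                    fig.canvas.draw_idle()))

    plt.show()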
Two neural networks designed to identify hazardous planetesimals were trained on object trajectories calculated in a cloud computing environment. The first neural network was fully connected and was trained on the orbital elements (OEs) of real and simulated planetesimals, while the second was a one-dimensional convolutional neural network trained on the Cartesian position coordinates of real and simulated planetesimals. Ultimately, the network trained on OEs had the better performance, identifying one-third of known potentially hazardous objects, including the three asteroids with the highest chance of impact with Earth (2009 FD, 1999 RQ36, 1950 DA) as established by NASA's Monte Carlo-based Sentry system.
We present corrections to the Schlegel, Finkbeiner, Davis (SFD98) reddening maps over the Sloan Digital Sky Survey northern Galactic cap area. To find these corrections, we employ what we dub the "standard crayon" method, in which we use passively evolving galaxies as color standards by which to measure deviations from the reddening map. We select these passively evolving galaxies spectroscopically, using limits on the H alpha and O II equivalent widths to remove all star-forming galaxies from the SDSS main galaxy catalog. We find that by correcting for known reddening, redshift, the color-magnitude relation, and variation of color with environmental density, we can reduce the scatter in color to below 3% in the bulk of the 151,637 galaxies we select. Using these galaxies we construct maps of the deviation from the SFD98 reddening map at 4.5 degree resolution, with a 1-sigma error of ~1.5 millimagnitudes in E(B-V). We find that the SFD98 maps are largely accurate, with most of the map having deviations below 3 millimagnitudes in E(B-V), though some regions deviate from SFD98 by as much as 50%. The maximum deviation found is 45 millimagnitudes in E(B-V), and the spatial structure of the deviation is strongly correlated with the observed dust temperature, such that SFD98 underpredicts reddening in regions of low dust temperature. The maps of these deviations, as well as their errors, are made available to the scientific community as a supplemental correction to SFD98 at the URL below.
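A schematic NumPy sketch of the approach (the sky positions, equivalent-width cuts, and bin sizes below are illustrative placeholders, not the exact selection used): passive galaxies are kept with equivalent-width limits and their color residuals are averaged in coarse sky cells to map deviations from the reddening map.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 150000
    ra, dec = rng.uniform(120, 240, n), rng.uniform(0, 60, n)   # placeholder positions (deg)
    ew_halpha, ew_oii = rng.exponential(5, n), rng.exponential(5, n)
    color_residual = rng.normal(0, 0.03, n)                     # observed minus model color

    # Keep only passively evolving galaxies (illustrative equivalent-width cuts).
    passive = (ew_halpha < 1.0) & (ew_oii < 1.0)

    # Average the residuals of the "standard crayons" in ~4.5 degree sky cells;
    # the result traces deviations from the reddening map.
    sums, xe, ye = np.histogram2d(ra[passive], dec[passive], bins=[27, 14],
                                  weights=color_residual[passive])
    counts, _, _ = np.histogram2d(ra[passive], dec[passive], bins=[xe, ye])
    deviation_map = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)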
4DAO launches DAOSPEC (ascl:1011.002) for a large sample of spectra. Written in Fortran, the software allows one to easily manage the input and output files of DAOSPEC, optimize the main DAOSPEC parameters, and mask specific spectral regions. It also provides suitable graphical tools to evaluate the quality of the solution and provides final, normalized, zero radial velocity spectra.
In cosmological N-body simulations, higher-order Lagrangian perturbation theory in the initial conditions affects the formation of nonlinear structure. This code converts an initial condition generated with the Zel'dovich approximation (first-order Lagrangian perturbation theory) for the Gadget-2 code into an initial condition with second- or third-order Lagrangian perturbation theory (2LPT, 3LPT).
3DView creates visualizations of space physics data in their original 3D context. Time series, vectors, dynamic spectra, celestial body maps, magnetic field or flow lines, and 2D cuts in simulation cubes are among the variety of data representation enabled by 3DView. It offers direct connections to several large databases and uses VO standards; it also allows the user to upload data. 3DView's versatility covers a wide range of space physics contexts.
High precision cosmology requires analysis of large scale surveys in 3D spherical coordinates, i.e. Fourier-Bessel decomposition. Current methods are insufficient for future data-sets from wide-field cosmology surveys. 3DEX (3D EXpansions) is a public code for fast Fourier-Bessel decomposition of 3D all-sky surveys which takes advantage of HEALPix for the calculation of tangential modes. For surveys with millions of galaxies, computation time is reduced by a factor 4-12 depending on the desired scales and accuracy. The formulation is also suitable for pre-calculations and external storage of the spherical harmonics, which allows for further speed improvements. The 3DEX code can accommodate data with masked regions of missing data. It can be applied not only to cosmological data, but also to 3D data in spherical coordinates in other scientific fields.
3DCORE (3-Dimensional Coronal Rope Ejection) forward models solar storm magnetic flux ropes. The code is able to produce synthetic in situ observations of the magnetic cores of solar coronal mass ejections sweeping over planets and spacecraft. Near Earth, these data are currently taken by the Wind, ACE and DSCOVR spacecraft. Other spacecraft that have made this kind of observation, carrying magnetometers in the solar wind, include MESSENGER, Venus Express, MAVEN, and even Helios.
3D-PDR is a three-dimensional photodissociation region code written in Fortran. It uses the Sundials package (written in C) to solve the set of ordinary differential equations and it is the successor of the one-dimensional PDR code UCL_PDR (ascl:1303.004). Using the HEALpix ray-tracing scheme (ascl:1107.018), 3D-PDR solves a three-dimensional escape probability routine and evaluates the attenuation of the far-ultraviolet radiation in the PDR and the propagation of FIR/submm emission lines out of the PDR. The code is parallelized (OpenMP) and can be applied to 1D and 3D problems.
3D-Barolo (3D-Based Analysis of Rotating Objects via Line Observations), or BBarolo, is a tool for fitting 3D tilted-ring models to emission-line datacubes. BBarolo works with 3D FITS files, i.e., image arrays with two spatial and one spectral dimensions. BBarolo recovers the true rotation curve and estimates the intrinsic velocity dispersion even in barely resolved galaxies (about 2 resolution elements) if the signal-to-noise ratio of the data is larger than 2-3. It has source-detection and first-estimate modules, making it suitable for analyzing large 3D datasets automatically, and is a useful tool for deriving reliable kinematics for both local and high-redshift galaxies.
This Matlab tool generates a 3D model (WRL, texturized with a height false-color map) of a defined region of the Mars surface. The user specifies the region of interest (by latitude and longitude), the resolution of the MOLA DTMs to be used (with a minimum on-ground pixel size of 468 m), and a scale factor applied to the surface heights to improve the visibility of features through bumping or shadowing effects.
2MASS Kit is open source software for easily constructing a high-performance search server for important astronomical catalogs. It is tuned for optimal coordinate search performance (Radial Search, Box Search, Rectangular Search) of huge catalogs, increasing the speed by more than an order of magnitude compared to simple indexing on a single table. Under optimal conditions, it enables more than 3,000 searches per second for radial searches of the 2MASS PSC. The kit is best characterized by its flexible tuning: each table index is registered in one of six table spaces (each residing in a separate directory), allowing only the essential parts to be easily moved onto fast devices. Given the rapid improvement in SSD performance, moving some or all table indices to a fast SSD is a very cost-effective way of constructing a high-performance server.
Setting initial conditions in numerical simulations using the standard procedure based on the Zel'dovich approximation (ZA) generates incorrect second and higher-order growth and therefore excites long-lived transients in the evolution of the statistical properties of density and velocity fields. Using more accurate initial conditions based on second-order Lagrangian perturbation theory (2LPT) reduces transients significantly; initial conditions based on 2LPT are thus much more appropriate for numerical simulations devoted to precision cosmology. The 2LPTIC code provides initial conditions for running cosmological simulations based on second-order Lagrangian Perturbation Theory (2LPT), rather than first-order (Zel'dovich approximation).
The vectorized physical domain structure function (SF) algorithm calculates the velocity anisotropy within two-dimensional molecular line emission observations. The vectorized approach is significantly faster than brute force iterative algorithms and is very efficient for even relatively large images. Furthermore, unlike frequency domain algorithms which require the input data to be fully integrable, this algorithm, implemented in Python, has no such requirements, making it a robust tool for observations with irregularities such as asymmetric boundaries and missing data.
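A small vectorized NumPy sketch of a physical-domain structure function (an illustration of the approach, not the published implementation; this toy version builds the full pair matrix in memory, so it only suits modest map sizes): velocity differences of all valid pixel pairs are binned by their separation, with NaNs marking missing data.

    import numpy as np

    def structure_function(vmap, order=2, nbins=30):
        """Velocity structure function of a 2D centroid-velocity map, vectorized
        over all valid pixel pairs.  NaNs mark missing data or irregular
        boundaries and are simply dropped."""
        valid = np.isfinite(vmap)
        y, x = np.nonzero(valid)
        v = vmap[valid]
        iu = np.triu_indices(v.size, k=1)                  # each pair counted once
        sep = np.hypot((x[:, None] - x[None, :])[iu],
                       (y[:, None] - y[None, :])[iu])
        dv = np.abs(v[:, None] - v[None, :])[iu] ** order
        bins = np.linspace(0.0, sep.max(), nbins + 1)
        which = np.clip(np.digitize(sep, bins) - 1, 0, nbins - 1)
        sf = np.array([dv[which == b].mean() if np.any(which == b) else np.nan
                       for b in range(nbins)])
        return 0.5 * (bins[1:] + bins[:-1]), sf

    # Example on a small synthetic map with a masked corner.
    vmap = np.random.default_rng(0).normal(size=(40, 40))
    vmap[:10, :10] = np.nan
    lags, sf2 = structure_function(vmap)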
The Python module 2DFFTUtils implements tasks associated with measuring spiral galaxy pitch angle with 2DFFT (ascl:1608.015). Since most of the 2DFFT utilities are implemented in one place, it makes preparing images for 2DFFT and dealing with 2DFFT data interactively or in scripts even easier.
2DFFT utilizes two-dimensional fast Fourier transformations of images of spiral galaxies to isolate and measure the pitch angles of their spiral arms; this provides a quantitative way to measure this morphological feature and allows comparison of spiral galaxy pitch angles to other galactic parameters and testing of spiral arm genesis theories. 2DFFT requires fourn.c from Numerical Recipes in C (Press et al. 1989).
P2DFFT (ascl:1806.011) is a parallelized version of 2DFFT.
2dfdr is an automatic data reduction pipeline dedicated to reducing multi-fibre spectroscopy data, with current implementations for AAOmega (fed by the 2dF, KOALA-IFU, SAMI Multi-IFU or older SPIRAL front-ends), HERMES, 2dF (spectrograph), 6dF, and FMOS. A graphical user interface is provided to control data reduction and allow inspection of the reduced spectra.
2DBAT implements Bayesian fits of 2D tilted-ring models to derive rotation curves of galaxies. It performs 2D tilted-ring analysis based on a Bayesian Markov Chain Monte Carlo (MCMC) technique, thus quantifying the kinematic geometry of galaxy discs, and deriving high-quality rotation curves that can be used for mass modeling of baryons and dark matter halos.
2D-FFTLog takes the FFTLog algorithm for 1D Hankel transforms and generalizes it for 2D Hankel transforms. The algorithm is useful for efficiently computing non-Gaussian covariance matrices of cosmological 2-point statistics in configuration space from Fourier-space covariances. A fast bin-averaging method is also included, for both logarithmic and general binning choices. C and Python versions of the code are available.
2cosmos is a modification of Monte Python (ascl:1307.002) and allows the user to write likelihood modules that can request two independent instances of CLASS (ascl:1106.020) and separate dictionaries and structures for all cosmological and nuisance parameters. The intention is to be able to evaluate two independent cosmological calculations and their respective parameters within the same likelihood. This is useful for evaluating a likelihood using correlated datasets (e.g. mutually exclusive subsets of the same dataset for which one wants to take into account all correlations between the subsets).
21cmvFAST demonstrates that including dark matter (DM)-baryon relative velocities produces velocity-induced acoustic oscillations (VAOs) in the 21-cm power spectrum. Based on 21cmFAST (ascl:1102.023) and 21CMMC (ascl:1608.017), 21cmvFAST accounts for molecular-cooling haloes, which are expected to drive star formation during cosmic dawn, as both relative velocities and Lyman-Werner feedback suppress halo formation. This yields accurate 21-cm predictions all the way to reionization (z>~10).
21cmSense calculates the expected sensitivities of 21cm experiments to the Epoch of Reionization power spectrum. Written in Python, it requires NumPy, SciPy, and AIPY (ascl:1609.012).
21CMMC is an efficient Python sampler of the semi-numerical reionization simulation code 21cmFAST (ascl:1102.023). It can recover constraints on astrophysical parameters from current or future 21 cm EoR experiments, accommodating a variety of EoR models, as well as priors on individual model parameters and the reionization history. By studying the resulting impact on the EoR astrophysical constraints, 21CMMC can be used to optimize foreground cleaning algorithms; interferometer designs; observing strategies; alternate statistics characterizing the 21cm signal; and synergies with other observational programs.
21cmFirstCLASS extends 21cmFAST (ascl:1102.023) and interfaces with CLASS (ascl:1106.020) to generate initial conditions at recombination that are consistent with the input cosmological model. These initial conditions can be set during the time of recombination, allowing one to compute the 21cm signal (and its spatial fluctuations) throughout the dark ages, as well as in the subsequent cosmic dawn and reionization epochs, just as in the standard 21cmFAST. 21cmFirstCLASS tracks both the CDM density field δc and the baryon density field δb. In addition, the user interface in 21cmFirstCLASS has been improved and allows one to easily plot the 21cm power spectrum while including noise from the output of 21cmSense (ascl:1609.013).
21cmFAST is a powerful semi-numeric modeling tool designed to efficiently simulate the cosmological 21-cm signal. The code generates 3D realizations of evolved density, ionization, peculiar velocity, and spin temperature fields, which it then combines to compute the 21-cm brightness temperature. Although the physical processes are treated with approximate methods, the results were compared to a state-of-the-art large-scale hydrodynamic simulation, and the findings indicate good agreement on scales pertinent to the upcoming observations (>~ 1 Mpc). The power spectra from 21cmFAST agree with those generated from the numerical simulation to within 10s of percent, down to the Nyquist frequency. Results were shown from a 1 Gpc simulation which tracks the cosmic 21-cm signal down from z=250, highlighting the various interesting epochs. Depending on the desired resolution, 21cmFAST can compute a redshift realization on a single processor in just a few minutes. The code is fast, efficient, customizable and publicly available, making it a useful tool for 21-cm parameter studies.
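For orientation, a minimal usage sketch assuming the modern Python-wrapped release of the code (py21cmfast; argument names may differ between versions, so treat this as an assumption rather than a definitive recipe): a small coeval box is simulated and its brightness-temperature field retrieved.

    # Assumes the Python-wrapped release ("py21cmfast", v3-style interface);
    # argument names may differ between versions.
    import py21cmfast as p21c

    coeval = p21c.run_coeval(
        redshift=9.0,
        user_params={"HII_DIM": 64, "BOX_LEN": 100.0},   # cells and box size (Mpc)
    )

    dT = coeval.brightness_temp          # 3D brightness-temperature box (mK) at z = 9
    print(dT.shape, dT.mean())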
21cmEMU emulates 21cmFAST (ascl:1102.023) summary statistics, among them the 21-cm power spectrum, 21-cm global brightness temperature, IGM spin temperature, and neutral fraction. It also emulates the Thomson scattering optical depth and UV luminosity functions. With 21cmFAST installed, parameters can be supplied directly to 21cmEMU, and 21cmEMU can be used for, for example, analytic calculations of tau_e and UV luminosity functions. The code is included as an alternative simulator in 21cmMC (ascl:1608.017).
21cmDeepLearning extracts the underlying matter density map from a 21 cm intensity field by making use of a convolutional neural network (CNN) with the U-Net architecture; the software is implemented in Pytorch. The astrophysical parameters of the simulations can be predicted with a secondary CNN. The simulations of matter density and 21 cm maps are performed with the code 21cmFAST (ascl:1102.023).
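A drastically reduced U-Net sketch in PyTorch (an illustration of the architecture type only, not the trained network or its actual hyperparameters): an encoder-decoder with skip connections maps a 21 cm map to a matter-density map of the same size.

    import torch
    import torch.nn as nn

    def conv_block(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

    class TinyUNet(nn.Module):
        """Reduced U-Net: one 21 cm map in, one matter-density map out."""
        def __init__(self):
            super().__init__()
            self.enc1, self.enc2 = conv_block(1, 16), conv_block(16, 32)
            self.pool = nn.MaxPool2d(2)
            self.bottom = conv_block(32, 64)
            self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec2 = conv_block(64, 32)
            self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.dec1 = conv_block(32, 16)
            self.out = nn.Conv2d(16, 1, 1)

        def forward(self, x):
            e1 = self.enc1(x)                        # skip connection 1
            e2 = self.enc2(self.pool(e1))            # skip connection 2
            b = self.bottom(self.pool(e2))
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
            return self.out(d1)

    net = TinyUNet()
    fake_21cm_maps = torch.randn(4, 1, 64, 64)       # batch of 21 cm maps
    predicted_density = net(fake_21cm_maps)          # same spatial shape as input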
2-DUST is a general-purpose dust radiative transfer code for axisymmetric systems. It reveals the global energetics of dust grains in the shell and the 2-D projected morphologies of the shell, which depend strongly on the combined effects of the axisymmetric dust distribution and the inclination angle. It can be used to model a variety of axisymmetric astronomical dust systems.
The transient search pipeline realfast integrates with the real-time environment at the Very Large Array (VLA) to look for fast radio bursts, pulsars, and other rare astrophysical transients. The software monitors multicast messages, catches visibility data, and defines a fast transient search pipeline with rfpipe (ascl:1710.002). It indexes candidate transients and other metadata for the search interface, and writes and archives new visibility files for candidate transients. realfast provides support for GPU algorithms, manages distributed futures, and performs blind injection and management of mock transients, among other tasks, and rapidly distributes data products and transient alerts to the public.
Colume uses the statistical and spatial distribution of a column density map to infer a likely volume density distribution along each line of sight. It is fast and easy to use but has large memory requirements. The software is available as a Python package that incorporates all pre-processing functions (in particular re-sampling) needed to work efficiently on column density maps; outputs are saved in NumPy format.