ASCL.net

Astrophysics Source Code Library

Making codes discoverable since 1999

Browsing Codes

[ascl:1102.013] Cactus: HPC infrastructure and programming tools

Cactus provides computational scientists and engineers with a collaborative, modular and portable programming environment for parallel high performance computing. Cactus can make use of many other technologies for HPC, such as Samrai, HDF5, PETSc and PAPI, and several application domains such as numerical relativity, computational fluid dynamics and quantum gravity are developing open community toolkits for Cactus.

[ascl:1102.014] Einstein Toolkit for Relativistic Astrophysics

The Einstein Toolkit is a collection of software components and tools for simulating and analyzing general relativistic astrophysical systems. Such systems include gravitational wave space-times, collisions of compact objects such as black holes or neutron stars, accretion onto compact objects, core collapse supernovae and Gamma-Ray Bursts.

The Einstein Toolkit builds on numerous software efforts in the numerical relativity community including CactusEinstein, Whisky, and Carpet. The Einstein Toolkit currently uses the Cactus Framework as the underlying computational infrastructure that provides large-scale parallelization, general computational components, and a model for collaborative, portable code development.

[ascl:1102.015] PMFASTIC: Initial condition generator for PMFAST

PMFASTIC is a slab-decomposition, Fortran 90 parallel cosmological initial condition generator for use with PMFAST (ascl:1102.008). Files required for generating initial dark matter particle distributions, along with instructions, are included; however, CMBFAST is required to create alternative transfer functions.

[ascl:1102.016] HERACLES: 3D Hydrodynamical Code to Simulate Astrophysical Fluid Flows

HERACLES is a 3D hydrodynamical code for simulating astrophysical fluid flows. It uses a finite-volume Godunov method on fixed Eulerian grids to solve the equations of hydrodynamics, MHD, radiative transfer and gravity. The code is developed at the Service d'Astrophysique, CEA/Saclay as part of the COAST project and is registered under the CeCILL license. HERACLES can simulate pure hydrodynamical flows, magneto-hydrodynamic flows, radiation hydrodynamic flows (using either flux-limited diffusion or the M1 moment method), self-gravitating flows using a Poisson solver, or any combination of these, on Cartesian, spherical and cylindrical grids.

[ascl:1102.017] FARGO: Fast Advection in Rotating Gaseous Objects

FARGO is an efficient and simple modification of the standard transport algorithm used in explicit Eulerian fixed polar grid codes, aimed at getting rid of the average azimuthal velocity when applying the Courant condition. This results in a much larger timestep than the usual procedure, and it is particularly well suited to the description of a Keplerian disk, where one is traditionally limited by the very demanding Courant condition on the fast orbital motion at the inner boundary. In this modified algorithm, the timestep is limited by the perturbed velocity and by the shear arising from the differential rotation. The speed-up resulting from the use of the FARGO algorithm is problem-dependent. In the example presented in the code paper below, which shows the evolution of a Jupiter-sized protoplanet embedded in a minimum mass protoplanetary nebula, the FARGO algorithm is about an order of magnitude faster than a traditional transport scheme, with a much smaller numerical diffusivity.
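
The gain from removing the mean rotation can be illustrated with a toy timestep calculation. The Python sketch below (with invented array shapes and a placeholder CFL factor; it is not code from FARGO itself) compares a standard azimuthal Courant limit against one set by the residual velocity and the ring-to-ring shear, as described above.

```python
import numpy as np

def cfl_timesteps(r, dphi, v_phi, cfl=0.5):
    """Toy comparison of a standard azimuthal Courant limit with a
    FARGO-style limit set by the residual velocity and the shear.

    r     : (Nr,) ring radii
    dphi  : azimuthal cell size [rad]
    v_phi : (Nr, Nphi) azimuthal velocities
    """
    dx_phi = r[:, None] * dphi                  # azimuthal cell width per ring

    # Standard scheme: the full orbital speed enters the Courant condition.
    dt_standard = cfl * np.min(dx_phi / np.maximum(np.abs(v_phi), 1e-30))

    # FARGO-type scheme: the ring-averaged rotation is handled by an exact
    # integer-cell shift plus a small remainder, so only the residual
    # (perturbed) velocity and the ring-to-ring shear limit the timestep.
    v_mean = v_phi.mean(axis=1, keepdims=True)
    v_res = np.abs(v_phi - v_mean)
    dt_residual = cfl * np.min(dx_phi / np.maximum(v_res, 1e-30))

    shear = np.abs(np.diff(v_mean[:, 0]))
    dt_shear = cfl * np.min(dx_phi[:-1, 0] / np.maximum(shear, 1e-30))

    return dt_standard, min(dt_residual, dt_shear)
```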

[ascl:1102.018] Karma: Visualisation Test-Bed Toolkit

Karma is a toolkit for interprocess communications, authentication, encryption, graphics display, user interface and manipulating the Karma network data structure. It contains KarmaLib (the structured libraries and API) and a large number of modules (applications) to perform many standard tasks. A suite of visualisation tools are distributed with the library.

[ascl:1102.019] HOP: A Group-finding Algorithm for N-body Simulations

We describe a new method (HOP) for identifying groups of particles in N-body simulations. Having assigned to every particle an estimate of its local density, we associate each particle with the densest of the N_h particles nearest to it. Repeating this process allows us to trace a path, within the particle set itself, from each particle in the direction of increasing density. The path ends when it reaches a particle that is its own densest neighbor; all particles reaching the same such particle are identified as a group. Combined with an adaptive smoothing kernel for finding the densities, this method is spatially adaptive, coordinate-free, and numerically straightforward. One can proceed to process the output by truncating groups at a particular density contour and combining groups that share a (possibly different) density contour. While the resulting algorithm has several user-chosen parameters, we show that the results are insensitive to most of these, the exception being the outer density cutoff of the groups.
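
The core assignment step is simple enough to sketch. The following Python fragment is a minimal illustration, assuming the densities and fixed-size neighbor lists have already been computed (in HOP itself these come from an adaptive kernel); it is not the reference implementation.

```python
import numpy as np

def hop_groups(density, neighbors):
    """Group particles by hopping each one to the densest of its neighbors.

    density   : (N,) local density estimate for each particle
    neighbors : (N, Nh) indices of the Nh nearest particles; assumed to
                include the particle itself, so density maxima hop to
                themselves and the paths terminate.
    """
    n = len(density)
    # For every particle, the densest particle among its neighbors.
    hop = neighbors[np.arange(n), np.argmax(density[neighbors], axis=1)]

    # Follow the hop pointers until every path reaches its fixed point
    # (a local density maximum); pointer doubling converges quickly.
    for _ in range(64):
        nxt = hop[hop]
        if np.array_equal(nxt, hop):
            break
        hop = nxt

    # Particles that end on the same maximum form one group.
    maxima, group_id = np.unique(hop, return_inverse=True)
    return group_id, maxima
```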

[ascl:1102.020] SKID: Finding Gravitationally Bound Groups in N-body Simulations

SKID finds gravitationally bound groups in N-body simulations. The SKID program will group different types of particles depending on the type of input binary file: dark matter particles, gas particles, star particles, or gas and star particles, depending on what is in the input tipsy binary file. Once groups with at least a certain minimum number of members have been determined, SKID removes particles which are not bound to the group. SKID must use the original positions of all the particles to determine whether or not particles are bound. This procedure, which we call unbinding, again depends on the type of grouping: there are two cases, one for dark matter only or star particles only (case 1 unbinding), and one for inputs including gas, or stars in a dark matter environment (case 2 unbinding).

SKID version 1.3 is a much improved version of the old denmax-1.1 version. The new name was given to avoid confusion with the DENMAX program of Gelb & Bertschinger; although it is based on the same idea, it represents a substantial evolution in the method.

[ascl:1102.021] DIRT: Dust InfraRed Toolbox

DIRT is a Java applet for modelling astrophysical processes in circumstellar dust shells around young and evolved stars. With DIRT, you can select and display over 500,000 pre-run model spectral energy distributions (SEDs), find the best-fit model to your data set, and account for beam size in model fitting. DIRT also allows you to manipulate data and models with an interactive viewer, display gas and dust density and temperature profiles, and display model intensity profiles at various wavelengths.

[ascl:1102.022] PDRT: Photo Dissociation Region Toolbox

Ultraviolet photons from O and B stars strongly influence the structure and emission spectra of the interstellar medium. The UV photons energetic enough to ionize hydrogen (hν > 13.6 eV) will create the H II region around the star, but lower energy UV photons escape. These far-UV photons (6 eV < hν < 13.6 eV) are still energetic enough to photodissociate molecules and to ionize low ionization-potential atoms such as carbon, silicon, and sulfur. They thus create a photodissociation region (PDR) just outside the H II region. In aggregate, these PDRs dominate the heating and cooling of the neutral interstellar medium.

The PDR Toolbox is a science-enabling Python package for the community, designed to help astronomers determine the physical parameters of photodissociation regions from observations. Typical observations of both Galactic and extragalactic PDRs come from ground- and space-based millimeter, submillimeter, and far-infrared telescopes such as ALMA, SOFIA, JWST, Spitzer, and Herschel. Given a set of observations of spectral line or continuum intensities, PDR Toolbox can compute best-fit FUV incident intensity and cloud density based on our models of PDR emission.

[ascl:1102.023] 21cmFAST: A Fast, Semi-Numerical Simulation of the High-Redshift 21-cm Signal

21cmFAST is a powerful semi-numeric modeling tool designed to efficiently simulate the cosmological 21-cm signal. The code generates 3D realizations of evolved density, ionization, peculiar velocity, and spin temperature fields, which it then combines to compute the 21-cm brightness temperature. Although the physical processes are treated with approximate methods, the results have been compared to a state-of-the-art large-scale hydrodynamic simulation and show good agreement on the scales pertinent to upcoming observations (>~ 1 Mpc). The power spectra from 21cmFAST agree with those generated from the numerical simulation to within tens of percent, down to the Nyquist frequency. Results are shown from a 1 Gpc simulation which tracks the cosmic 21-cm signal down from z=250, highlighting the various interesting epochs. Depending on the desired resolution, 21cmFAST can compute a redshift realization on a single processor in just a few minutes. The code is fast, efficient, customizable and publicly available, making it a useful tool for 21-cm parameter studies.
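
For orientation, the brightness temperature assembled from these fields is commonly written in the following approximate form (neglecting the peculiar-velocity gradient term; the exact normalization used in the code may differ):

```latex
\delta T_b(\mathbf{x},z) \approx 27\, x_{\rm HI}\,(1+\delta)
  \left(1-\frac{T_\gamma}{T_S}\right)
  \left(\frac{1+z}{10}\,\frac{0.15}{\Omega_{\rm m}h^2}\right)^{1/2}
  \left(\frac{\Omega_{\rm b}h^2}{0.023}\right)\,{\rm mK}
```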

[ascl:1102.024] DiFX2: A more flexible, efficient, robust and powerful software correlator

Software correlation, where a correlation algorithm written in a high-level language such as C++ is run on commodity computer hardware, has become increasingly attractive for small to medium sized and/or bandwidth constrained radio interferometers. In particular, many long baseline arrays (which typically have fewer than 20 elements and are restricted in observing bandwidth by costly recording hardware and media) have utilized software correlators for rapid, cost-effective correlator upgrades to allow compatibility with new, wider bandwidth recording systems and improve correlator flexibility. The DiFX correlator, made publicly available in 2007, has been a popular choice in such upgrades and is now used for production correlation by a number of observatories and research groups worldwide. Here we describe the evolution in the capabilities of the DiFX correlator over the past three years, including a number of new capabilities, substantial performance improvements, and a large amount of supporting infrastructure to ease use of the code. New capabilities include the ability to correlate a large number of phase centers in a single correlation pass, the extraction of phase calibration tones, correlation of disparate but overlapping sub-bands, the production of rapidly sampled filterbank and kurtosis data at minimal cost, and many more. The latest version of the code is at least 15% faster than the original, and in certain situations many times this value. Finally, we also present detailed test results validating the correctness of the new code.

[ascl:1102.025] LensPix: Fast MPI full sky transforms for HEALPix

Modelling of the weak lensing of the CMB will be crucial for obtaining correct cosmological parameter constraints from forthcoming precision CMB anisotropy observations. The lensing affects the power spectrum as well as inducing non-Gaussianities. We discuss the simulation of full sky CMB maps in the weak lensing approximation and describe a fast numerical code. The series expansion in the deflection angle cannot be used to simulate accurate CMB maps, so a pixel remapping must be used. For parameter estimation, accounting for the change in the power spectrum but assuming Gaussianity is sufficient to obtain accurate results up to Planck sensitivity using current tools. A fuller analysis may be required to obtain accurate error estimates and for more sensitive observations. We demonstrate a simple full sky simulation and subsequent parameter estimation at Planck-like sensitivity.

[ascl:1102.026] CAMB: Code for Anisotropies in the Microwave Background

We present a fully covariant and gauge-invariant calculation of the evolution of anisotropies in the cosmic microwave background (CMB) radiation. We use the physically appealing covariant approach to cosmological perturbations, which ensures that all variables are gauge-invariant and have a clear physical interpretation. We derive the complete set of frame-independent, linearised equations describing the (Boltzmann) evolution of anisotropy and inhomogeneity in an almost Friedmann-Robertson-Walker (FRW) cold dark matter (CDM) universe. These equations include the contributions of scalar, vector and tensor modes in a unified manner. Frame-independent equations for scalar and tensor perturbations, which are valid for any value of the background curvature, are obtained straightforwardly from the complete set of equations. We discuss the scalar equations in detail, including the integral solution and relation with the line of sight approach, analytic solutions in the early radiation dominated era, and the numerical solution in the standard CDM model. Our results confirm those obtained by other groups, who have worked carefully with non-covariant methods in specific gauges, but are derived here in a completely transparent fashion.

[ascl:1102.027] ZENO: N-body and SPH Simulation Codes

The ZENO software package integrates N-body and SPH simulation codes with a large array of programs to generate initial conditions and analyze numerical simulations. Written in C, the ZENO system is portable between Mac, Linux, and Unix platforms. It is in active use at the Institute for Astronomy (IfA), at NRAO, and possibly elsewhere.

Zeno programs can perform a wide range of simulation and analysis tasks. While many of these programs were first created for specific projects, they embody algorithms of general applicability and embrace a modular design strategy, so existing code is easily applied to new tasks. Major elements of the system include: structured data file utilities that facilitate basic operations on binary data, including import/export of ZENO data to other systems; snapshot generation routines that create particle distributions with various properties, including systems with user-specified density profiles realized in collisionless or gaseous form and multiple spherical and disk components set up in mutual equilibrium; and snapshot manipulation routines that permit the user to sift, sort, and combine particle arrays, translate and rotate particle configurations, and assign new values to data fields associated with each particle.

Simulation codes include both pure N-body and combined N-body/SPH programs. Pure N-body codes are available in both uniprocessor and parallel versions. SPH codes offer a wide range of options for gas physics, including isothermal, adiabatic, and radiating models. Snapshot analysis programs calculate temporal averages, evaluate particle statistics, measure shapes and density profiles, compute kinematic properties, and identify and track objects in particle distributions. Visualization programs generate interactive displays and produce still images and videos of particle distributions; the user may specify arbitrary color schemes and viewing transformations.

[ascl:1102.028] ZEUS-MP/2: Computational Fluid Dynamics Code

ZEUS-MP is a multiphysics, massively parallel, message-passing implementation of the ZEUS code. ZEUS-MP offers an MHD algorithm that is better suited for multidimensional flows than the ZEUS-2D module by virtue of modifications to the method of characteristics scheme first suggested by Hawley & Stone. This MHD module is shown to compare quite favorably to the TVD scheme described by Ryu et al. ZEUS-MP is the first publicly available ZEUS code to allow the advection of multiple chemical (or nuclear) species. Radiation hydrodynamic simulations are enabled via an implicit flux-limited radiation diffusion (FLD) module. The hydrodynamic, MHD, and FLD modules can be used, singly or in concert, in one, two, or three space dimensions. In addition, so-called 1.5D and 2.5D grids, in which the "half-D'' denotes a symmetry axis along which a constant but nonzero value of velocity or magnetic field is evolved, are supported. Self-gravity can be included either through the assumption of a GM/r potential or through a solution of Poisson's equation using one of three linear solver packages (conjugate gradient, multigrid, and FFT) provided for that purpose. Point-mass potentials are also supported.

Because ZEUS-MP is designed for large simulations on parallel computing platforms, considerable attention is paid to the parallel performance characteristics of each module in the code. Strong-scaling tests involving pure hydrodynamics (with and without self-gravity), MHD, and RHD are performed in which large problems (256^3 zones) are distributed among as many as 1024 processors of an IBM SP3. Parallel efficiency is a strong function of the amount of communication required between processors in a given algorithm, but all modules are shown to scale well on up to 1024 processors for the chosen fixed problem size.

[ascl:1101.001] Second-order Tight-coupling Code

Prior to recombination, photons, electrons, and atomic nuclei rapidly scattered and behaved almost like a single tightly-coupled photon-baryon plasma. In order to solve the cosmological perturbation equations during that time, Cosmic Microwave Background (CMB) codes use the so-called tight-coupling approximation, in which the problematic terms (i.e., the source of the stiffness) are expanded in inverse powers of the Thomson opacity. Most codes only keep the terms linear in the inverse Thomson opacity. We have developed a second-order tight-coupling code to test the validity of the usual first-order tight-coupling code. It is based on the publicly available code CAMB.

[ascl:1101.002] NDSPMHD Smoothed Particle Magnetohydrodynamics Code

This paper presents an overview and introduction to Smoothed Particle Hydrodynamics and Magnetohydrodynamics in theory and in practice. Firstly, we give a basic grounding in the fundamentals of SPH, showing how the equations of motion and energy can be self-consistently derived from the density estimate. We then show how to interpret these equations using the basic SPH interpolation formulae and highlight the subtle difference in approach between SPH and other particle methods. In doing so, we also critique several `urban myths' regarding SPH, in particular the idea that one can simply increase the `neighbour number' more slowly than the total number of particles in order to obtain convergence. We also discuss the origin of numerical instabilities such as the pairing and tensile instabilities. Finally, we give practical advice on how to resolve three of the main issues with SPMHD: removing the tensile instability, formulating dissipative terms for MHD shocks and enforcing the divergence constraint on the particles, and we give the current status of developments in this area. Accompanying the paper is the first public release of the NDSPMHD SPH code, a 1, 2 and 3 dimensional code designed as a testbed for SPH/SPMHD algorithms that can be used to test many of the ideas and used to run all of the numerical examples contained in the paper.
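
As a reminder of the starting point of the formalism, the sketch below gives the basic SPH density summation with a standard cubic spline kernel, written in Python with a fixed smoothing length; the actual code iterates the smoothing length and density self-consistently and is far more general.

```python
import numpy as np

def cubic_spline_w(q, h, ndim=3):
    """Standard cubic spline SPH kernel W(q = r/h, h), compact support 2h."""
    sigma = {1: 2.0 / 3.0, 2: 10.0 / (7.0 * np.pi), 3: 1.0 / np.pi}[ndim]
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                 np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w / h**ndim

def sph_density(pos, mass, h):
    """Brute-force SPH density estimate rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    rho = np.zeros(len(pos))
    for i in range(len(pos)):
        r = np.linalg.norm(pos - pos[i], axis=1)
        rho[i] = np.sum(mass * cubic_spline_w(r / h, h, ndim=pos.shape[1]))
    return rho
```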

[ascl:1101.003] IGMtransfer: Intergalactic Radiative Transfer Code

This document describes the publicly available numerical code "IGMtransfer", capable of performing intergalactic radiative transfer (RT) of light in the vicinity of the Lyman alpha (Lya) line. Calculating the RT in a (possibly adaptively refined) grid of cells resulting from a cosmological simulation, the code returns 1) a "transmission function", showing how the intergalactic medium (IGM) affects the Lya line at a given redshift, and 2) the "average transmission" of the IGM, making it useful for studying the results of reionization simulations.

[ascl:1101.004] InterpMC: Caching and Interpolated Likelihoods -- Accelerating Cosmological Monte Carlo Markov Chains

We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user.
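
A stripped-down version of the idea, in Python with hypothetical class and function names, is sketched below: fit a polynomial (quadratic here) to the log-likelihoods of the early chain samples and evaluate it in place of the expensive likelihood later on. A real implementation would monitor the interpolation error and fall back to exact evaluations when it grows too large.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(theta, degree=2):
    """Monomial features of the parameter vectors up to the given degree."""
    cols = [np.ones(len(theta))]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(theta.shape[1]), d):
            cols.append(np.prod(theta[:, list(idx)], axis=1))
    return np.column_stack(cols)

class LogLikeInterpolator:
    """Fit a polynomial to (theta, logL) pairs gathered early in the chains
    and evaluate it in place of the expensive likelihood later on."""

    def __init__(self, theta_train, logl_train, degree=2):
        self.degree = degree
        X = poly_features(theta_train, degree)
        self.coeff, *_ = np.linalg.lstsq(X, logl_train, rcond=None)

    def __call__(self, theta):
        """Approximate log-likelihood at a single parameter point."""
        X = poly_features(np.atleast_2d(np.asarray(theta, float)), self.degree)
        return (X @ self.coeff)[0]
```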

[ascl:1101.005] CMHOG: Code for Ideal Compressible Hydrodynamics

CMHOG (Connection Machine Higher Order Godunov) is a code for ideal compressible hydrodynamics based on the Lagrange-plus-remap version of the piecewise parabolic method (PPM) of Colella & Woodward (1984, J. Comp. Phys., 54, 174). It works in one-, two- or three-dimensional Cartesian coordinates with either an adiabatic or isothermal equation of state. A limited amount of extra physics has been added using operator splitting, including optically-thin radiative cooling, and chemistry for combustion simulations.

[ascl:1101.006] NIRVANA: A Numerical Tool for Astrophysical Gas Dynamics

The NIRVANA code is capable of the simulation of multi-scale self-gravitational magnetohydrodynamics problems in three space dimensions employing the technique of adaptive mesh refinement. The building blocks of NIRVANA are (i) a fully conservative, divergence-free Godunov-type central scheme for the solution of the equations of magnetohydrodynamics; (ii) a block-structured mesh refinement algorithm which automatically adds and removes elementary grid blocks whenever necessary to achieve adequate resolution; and (iii) an adaptive mesh Poisson solver based on multigrid philosophy which incorporates the so-called elliptic matching condition to keep the gradient of the gravitational potential continuous at fine/coarse mesh interfaces.

[ascl:1101.007] Galaxia: A Code to Generate a Synthetic Survey of the Milky Way

We present here a fast code for creating a synthetic survey of the Milky Way. Given one or more color-magnitude bounds, a survey size and geometry, the code returns a catalog of stars in accordance with a given model of the Milky Way. The model can be specified by a set of density distributions or as an N-body realization. We provide fast and efficient algorithms for sampling both types of models. As compared to earlier sampling schemes which generate stars at specified locations along a line of sight, our scheme can generate a continuous and smooth distribution of stars over any given volume. The code is quite general and flexible and can accept input in the form of a star formation rate, age-metallicity relation, age-velocity dispersion relation and analytic density distribution functions. Theoretical isochrones are then used to generate a catalog of stars and support is available for a wide range of photometric bands. As a concrete example we implement the Besancon Milky Way model for the disc. For the stellar halo we employ the simulated stellar halo N-body models of Bullock & Johnston (2005). In order to sample N-body models, we present a scheme that disperses the stars spawned by an N-body particle, in such a way that the phase space density of the spawned stars is consistent with that of the N-body particles. The code is ideally suited to generating synthetic data sets that mimic near-future wide-area surveys such as GAIA, LSST and HERMES. As an application we study the prospect of identifying structures in the stellar halo with a simulated GAIA survey.

[ascl:1101.008] CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics

CRASH (Center for Radiative Shock Hydrodynamics) is a block adaptive mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include a flux limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite volume discretization in either one, two, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator split method is used to solve these equations in three substeps: (1) solve the hydrodynamic equations with shock-capturing schemes, (2) a linear advection of the radiation in frequency-logarithm space, and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).

[ascl:1101.009] MasQU: Finite Differences on Masked Irregular Stokes Q,U Grids

MasQU extracts polarization information in the CMB by reducing contamination from so-called "ambiguous modes" on a masked sky, which contain leakage from the larger E-mode signal and utilizing derivative operators on the real-space Stokes Q and U parameters. In particular, the package can perform finite differences on masked, irregular grids and is applied to a semi-regular spherical pixellization, the HEALPix grid. The formalism reduces to the known finite-difference solutions in the case of a regular grid. On a masked sphere, the software represents a considerable reduction in B-mode noise from limited sky coverage.

[ascl:1101.010] TOPCAT: Tool for OPerations on Catalogues And Tables

TOPCAT is an interactive graphical viewer and editor for tabular data. Its aim is to provide most of the facilities that astronomers need for analysis and manipulation of source catalogues and other tables, though it can be used for non-astronomical data as well. It understands a number of different astronomically important formats (including FITS and VOTable) and more formats can be added.

It offers a variety of ways to view and analyse tables, including a browser for the cell data themselves, viewers for information about table and column metadata, and facilities for 1-, 2-, 3- and higher-dimensional visualisation, calculating statistics and joining tables using flexible matching algorithms. Using a powerful and extensible Java-based expression language new columns can be defined and row subsets selected for separate analysis. Table data and metadata can be edited and the resulting modified table can be written out in a wide range of output formats.

It is a stand-alone application which works quite happily with no network connection. However, because it uses Virtual Observatory (VO) standards, it can cooperate smoothly with other tools in the VO world and beyond, such as VODesktop, Aladin and ds9. Between 2006 and 2009 TOPCAT was developed within the AstroGrid project, and is offered as part of a standard suite of applications on the AstroGrid web site, where you can find information on several other VO tools.

The program is written in pure Java and available under the GNU General Public Licence. It has been developed in the UK within the Starlink and AstroGrid projects, and under PPARC and STFC grants. Its underlying table processing facilities are provided by STIL.

[ascl:1011.001] Identikit 1: A Modeling Tool for Interacting Disk Galaxies

By combining test-particle and self-consistent techniques, we have developed a method to rapidly explore the parameter space of galactic encounters. Our method, implemented in an interactive graphics program, can be used to find the parameters required to reproduce the observed morphology and kinematics of interacting disk galaxies. We test this system on an artificial data-set of 36 equal-mass merging encounters, and show that it is usually possible to reproduce the morphology and kinematics of these encounters and that a good match strongly constrains the encounter parameters. An update to this software with additional capabilities, Identikit 2 (ascl:1102.011), is available.

[ascl:1011.002] DAOSPEC: An Automatic Code for Measuring Equivalent Widths in High-resolution Stellar Spectra

DAOSPEC is a Fortran code for measuring equivalent widths of absorption lines in stellar spectra with minimal human involvement. It works with standard FITS format files and it is designed for use with high resolution (R>15000) and high signal-to-noise-ratio (S/N>30) spectra that have been binned on a linear wavelength scale. First, we review the analysis procedures that are usually employed in the literature. Next, we discuss the principles underlying DAOSPEC and point out similarities and differences with respect to conventional measurement techniques. Then experiments with artificial and real spectra are discussed to illustrate the capabilities and limitations of DAOSPEC, with special attention given to the issues of continuum placement; radial velocities; and the effects of strong lines and line crowding. Finally, quantitative comparisons with other codes and with results from the literature are also presented.

[ascl:1011.003] ZPEG: An Extension of the Galaxy Evolution Model PEGASE.2

Photometric redshifts are estimated on the basis of template scenarios with the help of the code ZPEG, an extension of the galaxy evolution model PEGASE.2, available on the PEGASE web site. The spectral energy distribution (SED) templates are computed for nine spectral types including starburst, irregular, spiral and elliptical. Dust, extinction and metal effects are coherently taken into account, depending on evolution scenarios. The sensitivity of results to adding near-infrared colors and IGM absorption is analyzed. A comparison with results of other models without evolution measures the evolution factor, which systematically increases the estimated photometric redshift values by $\Delta z$ > 0.2 for z > 1.5. Moreover, we systematically check that the evolution scenarios match observational standard templates of nearby galaxies, implying an age constraint on the stellar population at z=0 for each type. Respecting this constraint makes it possible to significantly improve the accuracy of photometric redshifts by decreasing the well-known degeneracy problem. The method is applied to the HDF-N sample. From fits on SED templates by a $\chi^2$-minimization procedure, not only is the photometric redshift derived but also the corresponding spectral type and the formation redshift $z_{\rm for}$ when stars first formed. Early epochs of galaxy formation, z > 5, are found from this new method and the results are compared to faint galaxy count interpretations.
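
The sketch below shows the kind of brute-force $\chi^2$ template fit described above, in Python with hypothetical array layouts; it is not ZPEG code, and it omits priors, the age constraint and the evolution scenarios.

```python
import numpy as np

def photoz_chi2(fluxes, errors, templates, z_grid):
    """Brute-force chi^2 photometric-redshift fit.

    fluxes, errors : (Nband,) observed fluxes and 1-sigma errors
    templates      : (Ntype, Nz, Nband) template fluxes, pre-computed on the
                     same redshift grid and filter set
    z_grid         : (Nz,) redshift grid
    Returns the best-fit redshift, template index and minimum chi^2; the
    template normalization is solved analytically for each (type, z).
    """
    w = 1.0 / errors**2
    # Optimal scaling a = sum(w f t) / sum(w t^2) for each template/redshift.
    num = np.sum(w * fluxes * templates, axis=-1)
    den = np.sum(w * templates**2, axis=-1)
    a = num / den
    chi2 = np.sum(w * (fluxes - a[..., None] * templates) ** 2, axis=-1)
    itype, iz = np.unravel_index(np.argmin(chi2), chi2.shape)
    return z_grid[iz], itype, chi2[itype, iz]
```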

[ascl:1011.004] MARS: The MAGIC Analysis and Reconstruction Software

With the commissioning of the second MAGIC gamma-ray Cherenkov telescope situated close to MAGIC-I, the standard analysis package of the MAGIC collaboration, MARS, has been upgraded in order to perform the stereoscopic reconstruction of the detected atmospheric showers. MARS is a ROOT-based code written in C++, which includes all the necessary algorithms to transform the raw data recorded by the telescopes into information about the physics parameters of the observed targets. An overview of the methods for extracting the basic shower parameters is presented, together with a description of the tools used in the background discrimination and in the estimation of the gamma-ray source spectra.

[ascl:1011.006] DAME: A Web Oriented Infrastructure for Scientific Data Mining & Exploration

DAME (DAta Mining & Exploration) is an innovative, general purpose, Web-based, VObs compliant, distributed data mining infrastructure specialized in Massive Data Sets exploration with machine learning methods. Initially fine-tuned to deal with astronomical data only, DAME has evolved into a general purpose platform which has found applications also in other domains of human endeavor.

[ascl:1011.007] RAMSES: A new N-body and hydrodynamical code

A new N-body and hydrodynamical code, called RAMSES, is presented. It has been designed to study structure formation in the universe with high spatial resolution. The code is based on the Adaptive Mesh Refinement (AMR) technique, with a tree-based data structure allowing recursive grid refinements on a cell-by-cell basis. The N-body solver is very similar to the one developed for the ART code (Kravtsov et al. 97), with minor differences in the exact implementation. The hydrodynamical solver is based on a second-order Godunov method, a modern shock-capturing scheme known to compute accurately the thermal history of the fluid component. The accuracy of the code is carefully estimated using various test cases, from pure gas dynamical tests to cosmological ones. The specific refinement strategy used in cosmological simulations is described, and potential spurious effects associated with shock wave propagation in the resulting AMR grid are discussed and found to be negligible. Results obtained in a large N-body and hydrodynamical simulation of structure formation in a low density LCDM universe are finally reported, with 256^3 particles and 4.1 x 10^7 cells in the AMR grid, reaching a formal resolution of 8192^3. A convergence analysis of different quantities, such as the dark matter density power spectrum, gas pressure power spectrum and individual halo temperature profiles, shows that numerical results are converging down to the actual resolution limit of the code, and are well reproduced by recent analytical predictions in the framework of the halo model.

[ascl:1011.008] Binsim: Visualising Interacting Binaries in 3D

Binsim produces images of interacting binaries for any system parameters. Though not suitable for modeling light curves or spectra, the resulting images are helpful in visualizing the geometry of a given system and are also helpful in talks and educational work. The code uses the OpenGL API to do the 3D rendering. The software can produce images of cataclysmic variables and X-ray binaries, and can render the mass donor star, an axisymmetric disc (without superhumps, warps or spirals), the accretion stream and hotspot, and a "corona."

[ascl:1011.009] DRAGON: DRoplet and hAdron GeneratOr for Nuclear collisions

A Monte Carlo generator of the final state of hadrons emitted from an ultrarelativistic nuclear collision is introduced. An important feature of the generator is the possible fragmentation of the fireball and emission of the hadrons from fragments. The phase space distribution of the fragments is based on the blast wave model extended to azimuthally non-symmetric fireballs. Parameters of the model can be tuned, and this allows one to generate final states from various kinds of fireballs. A facultative output in the OSCAR1999A format allows for a comprehensive analysis of phase-space distributions and/or use as an input for an afterburner. DRAGON's purpose is to produce artificial data sets which resemble those coming from real nuclear collisions, provided fragmentation occurs at hadronisation and hadrons are emitted from fragments without any further scattering. Its name, DRAGON, stands for DRoplet and hAdron GeneratOr for Nuclear collisions. In a way, the model is similar to THERMINATOR, with the crucial difference that emission from fragments is included.

[ascl:1011.010] Global Sky Model (GSM): A Model of Diffuse Galactic Radio Emission from 10 MHz to 100 GHz

Understanding diffuse Galactic radio emission is interesting both in its own right and for minimizing foreground contamination of cosmological measurements. Cosmic Microwave Background experiments have focused on frequencies > 10 GHz, whereas 21 cm tomography of the high redshift universe will mainly focus on < 0.2 GHz, for which less is currently known about Galactic emission. Motivated by this, we present a global sky model derived from all publicly available total power large-area radio surveys, digitized with optical character recognition when necessary and compiled into a uniform format, as well as the new Villa Elisa data extending the 1.4 GHz map to the entire sky. We quantify statistical and systematic uncertainties in these surveys by comparing them with various global multi-frequency model fits. We find that a principal component based model with only three components can fit the 11 most accurate data sets (at 10, 22, 45 & 408 MHz and 1.4, 2.3, 23, 33, 41, 61, 94 GHz) to an accuracy around 1%-10% depending on frequency and sky region. The data compilation and software returning a predicted all-sky map at any frequency from 10 MHz to 100 GHz are publicly available in the archive file at the link below.
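
A toy version of the principal-component construction, in Python with scipy interpolation and a simplified per-map normalization, is sketched below; the published model handles differing survey resolutions, zero-level offsets and sky masks, which are ignored here.

```python
import numpy as np
from scipy.interpolate import interp1d

def build_gsm_like_model(freqs, maps, n_comp=3):
    """Toy principal-component sky model in the spirit of the GSM.

    freqs : (Nfreq,) survey frequencies
    maps  : (Nfreq, Npix) survey maps on a common pixelization
    Returns a callable giving a predicted map at any frequency in range.
    """
    # Normalize each map (here simply by its rms) so the PCA is not dominated
    # by the overall brightness scaling with frequency.
    norm = np.sqrt(np.mean(maps**2, axis=1))
    scaled = maps / norm[:, None]

    # SVD: rows of vt are spatial principal components, u*s their
    # per-frequency amplitudes.
    u, s, vt = np.linalg.svd(scaled, full_matrices=False)
    comps = vt[:n_comp]                        # (n_comp, Npix)
    amps = u[:, :n_comp] * s[:n_comp]          # (Nfreq, n_comp)

    amp_of_f = interp1d(np.log(freqs), amps, axis=0, kind="cubic")
    norm_of_f = interp1d(np.log(freqs), np.log(norm), kind="cubic")

    def predict(freq):
        a = amp_of_f(np.log(freq))
        return np.exp(norm_of_f(np.log(freq))) * (a @ comps)

    return predict
```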

[ascl:1011.011] turboGL: Accurate Modeling of Weak Lensing

turboGL is a fast Mathematica code based on a stochastic approach to cumulative weak lensing. It can easily compute the lensing PDF relative to arbitrary halo mass distributions, selection biases, numbers of observations, halo profiles and evolutions, making it a useful tool for studying how lensing depends on cosmological parameters and how it impacts observations.

[ascl:1011.012] DEFROST: Simulating preheating after inflation

At the end of inflation, dynamical instability can rapidly deposit the energy of homogeneous cold inflaton into excitations of other fields. This process, known as preheating, is rather violent, inhomogeneous and non-linear, and has to be studied numerically. DEFROST simulates preheating of the Universe after the end of the inflation. It is small, easy to modify, very fast, and fully instrumented for 3D visualizations. An MPI extension for this code, MPI-DEFROST (ascl:1106.022), is available.

[ascl:1011.013] EasyLTB: Code for Testing LTB Models against Cosmology

The possibility that we live in a special place in the universe, close to the centre of a large void, seems an appealing alternative to the prevailing interpretation of the acceleration of the universe in terms of a LCDM model with a dominant dark energy component. In this paper we confront the asymptotically flat Lemaitre-Tolman-Bondi (LTB) models with a series of observations, from Type Ia Supernovae to Cosmic Microwave Background and Baryon Acoustic Oscillations data. We propose two concrete LTB models describing a local void in which the only arbitrary functions are the radial dependence of the matter density Omega_M and the Hubble expansion rate H. We find that all observations can be accommodated within 1 sigma, for our models with 4 or 5 independent parameters. The best fit models have a chi^2 very close to that of the LCDM model. We perform a simple Bayesian analysis and show that one cannot exclude the hypothesis that we live within a large local void of an otherwise Einstein-de Sitter model.

[ascl:1011.014] CO5BOLD: COnservative COde for the COmputation of COmpressible COnvection in a BOx of L Dimensions with l=2,3

CO5BOLD - nickname COBOLD - is the short form of "COnservative COde for the COmputation of COmpressible COnvection in a BOx of L Dimensions with l=2,3''.

It is used to model solar and stellar surface convection. For solar-type stars only a small fraction of the stellar surface layers are included in the computational domain. In the case of red supergiants the computational box contains the entire star. Recently, the model range has been extended to sub-stellar objects (brown dwarfs).

CO5BOLD solves the coupled non-linear equations of compressible hydrodynamics in an external gravity field together with non-local frequency-dependent radiation transport. Operator splitting is applied to solve the equations of hydrodynamics (including gravity), the radiative energy transfer (with a long-characteristics or a short-characteristics ray scheme), and possibly additional 3D (turbulent) diffusion in individual sub steps. The 3D hydrodynamics step is further simplified with directional splitting (usually). The 1D sub steps are performed with a Roe solver, accounting for an external gravity field and an arbitrary equation of state from a table.

The radiation transport is computed with either one of three modules:

  • MSrad module: It uses long characteristics. The lateral boundaries have to be periodic. Top and bottom can be closed or open ("solar module'').
  • LHDrad module: It uses long characteristics and is restricted to an equidistant grid and open boundaries at all surfaces (old "supergiant module'').
  • SHORTrad module: It uses short characteristics and is restricted to an equidistant grid and open boundaries at all surfaces (new "supergiant module'').

The code was supplemented with an (optional) MHD version [Schaffenberger et al. (2005)] that can treat magnetic fields. There are also modules for the formation and advection of dust available. The current version now contains the treatment of chemical reaction networks, mostly used for the formation of molecules [Wedemeyer-Böhm et al. (2005)], and hydrogen ionization [Leenaarts & Wedemeyer-Böhm (2005)], too.

CO5BOLD is written in Fortran90. The parallelization is done with OpenMP directives.

[ascl:1011.015] Geokerr: Computing Photon Orbits in a Kerr Spacetime

Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. Programmed in Fortran, Geokerr uses a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semi-analytically for the first time.

[ascl:1011.016] Non-LTE Models and Theoretical Spectra of Accretion Disks in Active Galactic Nuclei. III. Integrated Spectra for Hydrogen-Helium Disks

We have constructed a grid of non-LTE disk models for a wide range of black hole mass and mass accretion rate, for several values of viscosity parameter alpha, and for two extreme values of the black hole spin: the maximum-rotation Kerr black hole, and the Schwarzschild (non-rotating) black hole. Our procedure calculates self-consistently the vertical structure of all disk annuli together with the radiation field, without any approximations imposed on the optical thickness of the disk, and without any ad hoc approximations to the behavior of the radiation intensity. The total spectrum of a disk is computed by summing the spectra of the individual annuli, taking into account the general relativistic transfer function. The grid covers nine values of the black hole mass between M = 1/8 and 32 billion solar masses with a two-fold increase of mass for each subsequent value; and eleven values of the mass accretion rate, each a power of 2 times 1 solar mass/year. The highest value of the accretion rate corresponds to 0.3 Eddington. We show the vertical structure of individual annuli within the set of accretion disk models, along with their local emergent flux, and discuss the internal physical self-consistency of the models. We then present the full disk-integrated spectra, and discuss a number of observationally interesting properties of the models, such as optical/ultraviolet colors, the behavior of the hydrogen Lyman limit region, polarization, and number of ionizing photons. Our calculations are far from definitive in terms of the input physics, but generally we find that our models exhibit rather red optical/UV colors. Flux discontinuities in the region of the hydrogen Lyman limit are only present in cool, low luminosity models, while hotter models exhibit blueshifted changes in spectral slope.

[ascl:1011.017] Microccult: Occultation and Microlensing

Occultation and microlensing are different limits of the same phenomena of one body passing in front of another body. We derive a general exact analytic expression which describes both microlensing and occultation in the case of spherical bodies with a source of uniform brightness and a non-relativistic foreground body. We also compute numerically the case of a source with quadratic limb-darkening. In the limit that the gravitational deflection angle is comparable to the angular size of the foreground body, both microlensing and occultation occur as the objects align. Such events may be used to constrain the size ratio of the lens and source stars, the limb-darkening coefficients of the source star, and the surface gravity of the lens star (if the lens and source distances are known). Application of these results to microlensing during transits in binaries and giant-star microlensing are discussed. These results unify the microlensing and occultation limits and should be useful for rapid model fitting of microlensing, eclipse, and "microccultation" events.

[ascl:1011.018] chrom_exact: Transit of a Spherical Planet of a Stellar Chromosphere which is Geometrically Thin

Transit light curves for stellar continua have only one minimum and a "U" shape. By contrast, transit curves for optically thin chromospheric emission lines can have a "W" shape because of stellar limb-brightening. We calculate light curves for an optically thin shell of emission and fit these models to time-resolved observations of Si IV absorption by the planet HD209458b. We find that the best fit Si IV absorption model has R_p,SiIV/R_* = 0.34 (+0.07, -0.12), similar to the Roche lobe of the planet. While the large radius is only at the limit of statistical significance, we develop formulae applicable to transits of all optically thin chromospheric emission lines.

[ascl:1011.019] FLY: MPI-2 High Resolution code for LSS Cosmological Simulations

Cosmological simulations of structure and galaxy formation have played a fundamental role in the study of the origin, formation and evolution of the Universe. These studies improved enormously with the use of supercomputers and parallel systems and, more recently, grid-based systems and Linux clusters. Here we present the new version of the tree N-body parallel code FLY, which runs on a PC Linux cluster using the one-sided communication paradigm MPI-2, and we show the performance obtained. FLY is included in the Computer Physics Communications Program Library. This new version was developed using the Linux cluster of CINECA, an IBM cluster with 1024 Intel Xeon Pentium IV 3.0 GHz processors. The results show that it is possible to run a 64 million particle simulation in less than 15 minutes for each timestep, and that code scalability with the number of processors is achieved. This leads us to propose FLY as a code for running very large N-body simulations with more than $10^{9}$ particles at the higher resolution of a pure tree code.

[ascl:1011.020] VisIVO: Integrated Tools and Services for Large-Scale Astrophysical Visualization

VisIVO is an integrated suite of tools and services specifically designed for the Virtual Observatory. This suite constitutes a software framework for effective visual discovery in currently available (and next-generation) very large-scale astrophysical datasets. VisIVO consists of VisIVO Desktop - a stand-alone application for interactive visualization on standard PCs, VisIVO Server - a grid-enabled platform for high performance visualization and VisIVO Web - a custom designed web portal supporting services based on the VisIVO Server functionality. The main characteristic of VisIVO is support for high-performance, multidimensional visualization of very large-scale astrophysical datasets. Users can obtain meaningful visualizations rapidly while preserving full and intuitive control of the relevant visualization parameters. This paper focuses on newly developed integrated tools in VisIVO Server allowing intuitive visual discovery with 3D views being created from data tables. VisIVO Server can be installed easily on any web server with a database repository. We discuss briefly aspects of our implementation of VisIVO Server on a computational grid and also outline the functionality of the services offered by VisIVO Web. Finally we conclude with a summary of our work and pointers to future developments.

[ascl:1011.021] GRALE: A genetic algorithm for the non-parametric inversion of strong lensing systems

We present a non-parametric technique to infer the projected-mass distribution of a gravitational lens system with multiple strong-lensed images. The technique involves a dynamic grid in the lens plane on which the mass distribution of the lens is approximated by a sum of basis functions, one per grid cell. We used the projected mass densities of Plummer spheres as basis functions. A genetic algorithm then determines the mass distribution of the lens by forcing images of a single source, projected back onto the source plane, to coincide as well as possible. Averaging several tens of solutions removes the random fluctuations that are introduced by the reproduction process of genomes in the genetic algorithm and highlights those features common to all solutions. Given the positions of the images and the redshifts of the sources and the lens, we show that the mass of a gravitational lens can be retrieved with an accuracy of a few percent and that, if the sources sufficiently cover the caustics, the mass distribution of the gravitational lens can also be reliably retrieved. A major advantage of the algorithm is that it makes full use of the information contained in the radial images, unlike methods that minimise the residuals of the lens equation, and is thus also able to accurately reconstruct the inner parts of the lens.
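
The fitness that drives such a genetic algorithm can be sketched as follows (Python, with a schematic Plummer-like deflection whose normalization and cosmological distance factors are absorbed into per-cell weights; this is an illustration of the source-plane criterion, not GRALE's actual implementation).

```python
import numpy as np

def plummer_deflection(theta, centers, weights, core):
    """Deflection angle from a sum of circular Plummer-like basis functions.

    For a projected Plummer profile the deflection scales as
    alpha(R) ~ R / (R^2 + a^2); the weights absorb masses and distances.
    theta   : (Nim, 2) image-plane positions
    centers : (Nc, 2) grid-cell centers
    weights : (Nc,) genome parameters (one weight per basis function)
    core    : (Nc,) Plummer core radii
    """
    d = theta[:, None, :] - centers[None, :, :]         # (Nim, Nc, 2)
    r2 = np.sum(d**2, axis=-1) + core[None, :] ** 2     # (Nim, Nc)
    return np.sum(weights[None, :, None] * d / r2[..., None], axis=1)

def source_plane_fitness(images_per_source, centers, weights, core):
    """Back-project all images of each source and penalize the scatter of
    the recovered source positions; smaller values indicate better genomes."""
    total = 0.0
    for theta in images_per_source:                     # theta: (Nim, 2)
        beta = theta - plummer_deflection(theta, centers, weights, core)
        total += np.sum((beta - beta.mean(axis=0)) ** 2)
    return total
```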

[ascl:1011.022] yt: A Multi-Code Analysis Toolkit for Astrophysical Simulation Data

yt is an open source, community-developed volumetric analysis and visualization toolkit. Originally designed for handling Enzo's (ascl:1010.072) structure adaptive mesh refinement (AMR) data, yt has been extended to work with numerous simulation methods and simulation codes including Orion, RAMSES (ascl:1011.007), and FLASH (ascl:1010.082). Analysis and visualization with yt are oriented around physically relevant quantities rather than quantities native to data representation on-disk or in-memory. yt can be used for projections, multivariate volume rendering, multi-dimensional histograms, halo finding, light cone generation and topologically-connected isocontour identification.
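
A minimal usage sketch (Python; the dataset path is a placeholder, and exact field names can vary between yt versions and simulation frontends):

```python
import yt

# Load any supported dataset; yt auto-detects the simulation code that
# produced it (the path below is only a placeholder).
ds = yt.load("output_00080/info_00080.txt")

# Physically oriented analysis: a projection of gas density along the z axis.
p = yt.ProjectionPlot(ds, "z", ("gas", "density"))
p.save("density_projection.png")

# Simple derived statistics over the whole domain.
ad = ds.all_data()
print(ad.quantities.extrema(("gas", "density")))
```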

yt benefits from the contributions of a broad range of community members, and a full list of credits for the code can be found on the yt website or in the source repository.

[ascl:1011.023] HyRec: A Fast and Highly Accurate Primordial Hydrogen and Helium Recombination Code

We present a state-of-the-art primordial recombination code, HyRec, including all the physical effects that have been shown to significantly affect recombination. The computation of helium recombination includes simple analytic treatments of hydrogen continuum opacity in the He I 2^1P - 1^1S line and the He I] 2^3P - 1^1S line, and treats feedback between these lines within the on-the-spot approximation. Hydrogen recombination is computed using the effective multilevel atom method, virtually accounting for an infinite number of excited states. We account for two-photon transitions from 2s and higher levels as well as frequency diffusion in Lyman-alpha with a full radiative transfer calculation. We present a new method to evolve the radiation field simultaneously with the level populations and the free electron fraction. These computations are sped up by taking advantage of the particular sparseness pattern of the equations describing the radiative transfer. The computation time for a full recombination history is ~2 seconds. This makes our code well suited for inclusion in Monte Carlo Markov chains for cosmological parameter estimation from upcoming high-precision cosmic microwave background anisotropy measurements.

[ascl:1010.001] CFITSIO: A FITS File Subroutine Library

CFITSIO is a library of C and Fortran subroutines for reading and writing data files in FITS (Flexible Image Transport System) data format. CFITSIO provides simple high-level routines for reading and writing FITS files that insulate the programmer from the internal complexities of the FITS format. CFITSIO also provides many advanced features for manipulating and filtering the information in FITS files.

[ascl:1010.002] fpack: FITS Image Compression Program

fpack is a utility program for optimally compressing images in the FITS data format. The associated funpack program will restore the compressed file back to its original state. These programs may be run from the host operating system command line and are analogous to the gzip and gunzip utility programs, except that they are specifically optimized for FITS format images and offer a wider choice of compression options.

fpack uses the tiled image compression convention for storing the compressed images. This convention can in principle support any number of different compression algorithms; currently GZIP, Rice, Hcompress, and the IRAF pixel list compression algorithms have been implemented.

The main advantages of fpack compared to the commonly used technique of externally compressing the whole FITS file with gzip are:

- It is generally faster and offers better compression than gzip.
- The FITS header keywords remain uncompressed for fast access.
- Each HDU of a multi-extension FITS file is compressed separately, so it is not necessary to uncompress the entire file to read a single image in a multi-extension file.
- Dividing the image into tiles before compression enables faster access to small subsections of the image.
- The compressed image is itself a valid FITS file and can be manipulated by other general FITS utility software.
- Lossy compression can be used for much higher compression in cases where it is not necessary to exactly preserve the original image.
- The CHECKSUM keywords are automatically updated to help verify the integrity of the files.
- Software that supports the tiled image compression technique can directly read and write the FITS images in their compressed form.
