ASCL.net

Astrophysics Source Code Library

Making codes discoverable since 1999

Browsing Codes

[ascl:1103.008] Parallel HOP: A Scalable Halo Finder for Massive Cosmological Data Sets

Modern N-body cosmological simulations contain billions ($10^9$) of dark matter particles. These simulations require hundreds to thousands of gigabytes of memory, and employ hundreds to tens of thousands of processing cores on many compute nodes. In order to study the distribution of dark matter in a cosmological simulation, the dark matter halos must be identified using a halo finder, which establishes the halo membership of every particle in the simulation. The resources required for halo finding are similar to the requirements for the simulation itself. In particular, simulations have become too extensive to use commonly-employed halo finders, such that the computational requirements to identify halos must now be spread across multiple nodes and cores. Here we present a scalable-parallel halo finding method called Parallel HOP for large-scale cosmological simulation data. Based on the halo finder HOP, it utilizes MPI and domain decomposition to distribute the halo finding workload across multiple compute nodes, enabling analysis of much larger datasets than is possible with the strictly serial or previous parallel implementations of HOP. We provide a reference implementation of this method as a part of the toolkit yt, an analysis toolkit for Adaptive Mesh Refinement (AMR) data that includes complementary analysis modules. Additionally, we discuss a suite of benchmarks that demonstrate that this method scales well up to several hundred tasks and datasets in excess of $2000^3$ particles. The Parallel HOP method and our implementation can be readily applied to any kind of N-body simulation data and is therefore widely applicable. Parallel HOP is part of yt.
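
As an illustration of the domain decomposition idea, here is a minimal sketch in Python (hypothetical function and parameter names, not yt's actual API) that assigns particles to periodic, padded slabs; each task can then run halo finding locally and keep only the halos whose densest particle lies inside its unpadded region.

    import numpy as np

    def padded_slabs(pos, box, n_slabs, pad):
        """Assign particles to overlapping slabs along the x axis.

        pos     : (N, 3) positions in [0, box)
        n_slabs : number of subdomains (one per task)
        pad     : overlap width, chosen so halos straddling a boundary
                  are seen whole by at least one task
        Returns one index array per slab.
        """
        width = box / n_slabs
        slabs = []
        for i in range(n_slabs):
            lo = i * width - pad
            # fold positions into the padded slab, respecting periodicity
            d = (pos[:, 0] - lo) % box
            slabs.append(np.flatnonzero(d < width + 2.0 * pad))
        return slabs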

[ascl:1103.009] SPHRAY: A Smoothed Particle Hydrodynamics Ray Tracer for Radiative Transfer

SPHRAY, a Smoothed Particle Hydrodynamics (SPH) ray tracer, is designed to solve the 3D, time-dependent radiative transfer (RT) equations for arbitrary density fields. The SPH nature of SPHRAY makes the incorporation of separate hydrodynamics and gravity solvers very natural. SPHRAY relies on a Monte Carlo (MC) ray tracing scheme that does not interpolate the SPH particles onto a grid but instead integrates directly through the SPH kernels. Given initial conditions and a description of the sources of ionizing radiation, the code calculates the non-equilibrium ionization state (HI, HII, HeI, HeII, HeIII, e) and temperature (internal energy/entropy) of each SPH particle. The sources of radiation can include point-like objects, diffuse recombination radiation, and a background field from outside the computational volume. The MC ray tracing implementation allows for the quick introduction of new physics and is parallelization friendly. A quick Axis Aligned Bounding Box (AABB) test taken from computer graphics applications accelerates the ray tracing component. We present the algorithms used in SPHRAY and verify the code by performing all the test problems detailed in the recent Radiative Transfer Comparison Project of Iliev et al. The Fortran 90 source code for SPHRAY and example SPH density fields are made available online.
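
The AABB acceleration mentioned above is the standard "slab" ray-box intersection test from computer graphics. A minimal Python transcription (SPHRAY itself is Fortran 90) might look like this:

    import numpy as np

    def ray_hits_aabb(origin, direction, box_min, box_max):
        """Slab test: does origin + t*direction (t >= 0) intersect the box?"""
        with np.errstate(divide='ignore'):
            inv = 1.0 / direction            # IEEE infinities handle axis-parallel rays
        t1 = (box_min - origin) * inv
        t2 = (box_max - origin) * inv
        t_near = np.max(np.minimum(t1, t2))  # latest entry over the three slabs
        t_far = np.min(np.maximum(t1, t2))   # earliest exit
        return bool(t_far >= max(t_near, 0.0))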

[ascl:1103.010] Hydra: A Parallel Adaptive Grid Code

We describe the first parallel implementation of an adaptive particle-particle, particle-mesh code with smoothed particle hydrodynamics. Parallelisation of the serial code, "Hydra," is achieved by using CRAFT, a Cray proprietary language which allows rapid implementation of a serial code on a parallel machine by allowing global addressing of distributed memory.

The collisionless variant of the code has already completed several 16.8 million particle cosmological simulations on a 128 processor Cray T3D whilst the full hydrodynamic code has completed several 4.2 million particle combined gas and dark matter runs. The efficiency of the code now allows parameter-space explorations to be performed routinely using $64^3$ particles of each species. A complete run including gas cooling, from high redshift to the present epoch requires approximately 10 hours on 64 processors.

[ascl:1103.011] AP3M: Adaptive Particle-particle, Particle-mesh Code

AP3M is an adaptive particle-particle, particle-mesh code. It is older than Hydra (ascl:1103.010) but faster and more memory-efficient for dark-matter only calculations. The Adaptive P3M technique (AP3M) is built around the standard P3M algorithm. AP3M produces fully equivalent forces to P3M but represents a more efficient implementation of the force splitting idea of P3M. The AP3M program may be used in any of the three modes with an appropriate choice of input parameter.

[ascl:1103.012] Pyflation: Second Order Perturbations During Inflation Beyond Slow-roll

Pyflation calculates cosmological perturbations during an inflationary expansion of the universe. The modules in the pyflation Python package can be used to run simulations of different scalar field models of the early universe. The main classes are contained in the cosmomodels module and include simulations of background fields and of first and second order perturbations. The sourceterm package contains the modules required to compute the source term that drives the evolution of the second order perturbations.

Alongside the Python package, the bin directory contains Python scripts which can run first and second order simulations. A helper script called pyflation-qsubstart.py sets up a full second order run (including background, first order and source calculations) on a queueing system that provides the qsub executable (e.g., a Rocks cluster).

[ascl:1103.014] ParaView: Data Analysis and Visualization Application

ParaView is an open-source, multi-platform data analysis and visualization application. ParaView users can quickly build visualizations to analyze their data using qualitative and quantitative techniques. The data exploration can be done interactively in 3D or programmatically using ParaView's batch processing capabilities.

ParaView was developed to analyze extremely large datasets using distributed memory computing resources. It can be run on supercomputers to analyze terascale datasets as well as on laptops for smaller data.

[ascl:1103.015] Cloudy_3D: Quick Pseudo-3D Photoionization Code

We developed a new quick pseudo-3D photoionization code based on Cloudy (G. Ferland) and IDL (RSI) tools. The code runs the 1D photoionization code Cloudy multiple times, changing the input parameters (e.g., inner radius, density law) at each run according to an angular law describing the morphology of the object. A cube is then generated by interpolating the outputs of Cloudy. In each cell of the cube, the physical conditions (electron temperature and density, ionic fractions) and the emissivities of lines are determined. Associated tools (VISNEB and VELNEB_3D) are used to rotate the nebula and to compute surface brightness maps and emission line profiles, given a velocity law and taking into account the effects of thermal broadening and, optionally, turbulence. Integrated emission line profiles are computed, given aperture shapes and positions (seeing and instrumental width effects are included). The main advantage of this tool is the short time needed to compute a model (a few tens of minutes).
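
The interpolation step is easy to sketch. In the toy Python version below, the profiles are invented stand-ins for a handful of 1D Cloudy runs, one per polar angle; the real code drives Cloudy from IDL:

    import numpy as np

    thetas = np.linspace(0.0, np.pi, 5)      # angles at which the 1D code was "run"
    radii = np.linspace(0.1, 1.0, 100)
    profiles = np.array([np.exp(-radii / (0.2 + 0.1 * np.sin(t))) for t in thetas])

    def emissivity(r, theta):
        """Interpolate the 1D runs first in radius, then in angle (pseudo-3D)."""
        per_run = np.array([np.interp(r, radii, p) for p in profiles])
        return np.interp(theta, thetas, per_run)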

Cloudy_3D has been superseded by pycloudy (ascl:1304.020).

[ascl:1102.001] N-MODY: A Code for Collisionless N-body Simulations in Modified Newtonian Dynamics

N-MODY is a parallel particle-mesh code for collisionless N-body simulations in modified Newtonian dynamics (MOND). N-MODY is based on a numerical potential solver in spherical coordinates that solves the non-linear MOND field equation, and is ideally suited to simulate isolated stellar systems. N-MODY can be used also to compute the MOND potential of arbitrary static density distributions. A few applications of N-MODY indicate that some astrophysically relevant dynamical processes are profoundly different in MOND and in Newtonian gravity with dark matter.

[ascl:1102.002] PBL: Particle-Based Lensing for Gravitational Lensing Mass Reconstructions of Galaxy Clusters

Particle-Based Lensing (PBL) does gravitational lensing mass reconstructions of galaxy clusters. Traditionally, most methods have employed either a finite inversion or gridding to turn observational lensed galaxy ellipticities into an estimate of the surface mass density of a galaxy cluster. We approach the problem from a different perspective, motivated by the success of multi-scale analysis in smoothed particle hydrodynamics. In PBL, we treat each of the lensed galaxies as a particle and then reconstruct the potential by smoothing over a local kernel with variable smoothing scale. In this way, we can tune a reconstruction to produce constant signal-to-noise throughout, and maximally exploit regions of high information density.

PBL is designed to include all lensing observables, including multiple image positions and fluxes from strong lensing, as well as weak lensing signals including shear and flexion. In this paper, however, we describe a shear-only reconstruction, and apply the method to several test cases, including simulated lensing clusters, as well as the well-studied "Bullet Cluster" (1E0657-56). In the former cases, we show that PBL is better able to identify cusps and substructures than are grid-based reconstructions, and in the latter case, we show that PBL is able to identify substructure in the Bullet Cluster without even exploiting strong lensing measurements.

[ascl:1102.003] GRAVLENS: Computational Methods for Gravitational Lensing

Modern applications of strong gravitational lensing require the ability to use precise and varied observational data to constrain complex lens models. Two sets of computational methods for lensing calculations are discussed. The first is a new algorithm for solving the lens equation for general mass distributions. This algorithm makes it possible to apply arbitrarily complicated models to observed lenses. The second is an evaluation of techniques for using observational data including positions, fluxes, and time delays of point-like images, as well as maps of extended images, to constrain models of strong lenses. The techniques presented here are implemented in a flexible and user-friendly software package called gravlens, which is made available to the community.

[ascl:1102.004] LENSTOOL: A Gravitational Lensing Software for Modeling Mass Distribution of Galaxies and Clusters (strong and weak regime)

We describe a procedure for modelling strong lensing galaxy clusters with parametric methods, and for ranking models quantitatively using the Bayesian evidence. We use a publicly available Markov chain Monte-Carlo (MCMC) sampler ('Bayesys'), allowing us to avoid local minima in the likelihood functions. To illustrate the power of the MCMC technique, we simulate three clusters of galaxies, each composed of a cluster-scale halo and a set of perturbing galaxy-scale subhalos. We ray-trace three light beams through each model to produce a catalogue of multiple images, and then use the MCMC sampler to recover the model parameters in the three different lensing configurations. We find that, for typical Hubble Space Telescope (HST)-quality imaging data, the total mass in the Einstein radius is recovered with ~1-5% error depending on the lensing configuration considered. However, we find that the mass of the galaxies is strongly degenerate with the cluster mass when no multiple images appear in the cluster centre. The mass of the galaxies is generally recovered with a 20% error, largely due to the poorly constrained cut-off radius. Finally, we describe how to rank models quantitatively using the Bayesian evidence. We confirm the ability of strong lensing to constrain the mass profile in the central region of galaxy clusters in this way. Ultimately, such a method applied to strong lensing clusters with a very large number of multiple images may provide unique geometrical constraints on cosmology.

[ascl:1102.005] MRLENS: Multi-Resolution methods for gravitational LENSing

The MRLENS package offers a new method for the reconstruction of weak lensing mass maps. It uses the multiscale entropy concept, which is based on wavelets, and the False Discovery Rate which allows us to derive robust detection levels in wavelet space. We show that this new restoration approach outperforms several standard techniques currently used for weak shear mass reconstruction. This method can also be used to separate E and B modes in the shear field, and thus test for the presence of residual systematic effects. We concentrate on large blind cosmic shear surveys, and illustrate our results using simulated shear maps derived from N-Body Lambda-CDM simulations with added noise corresponding to both ground-based and space-based observations.

[ascl:1102.006] NBODY Codes: Numerical Simulations of Many-body (N-body) Gravitational Interactions

I review the development of direct N-body codes at Cambridge over nearly 40 years, highlighting the main stepping stones. The first code (NBODY1) was based on the simple concepts of a force polynomial combined with individual time steps, where numerical problems due to close encounters were avoided by a softened potential. Fortuitously, the elegant Kustaanheimo-Stiefel two-body regularization soon permitted small star clusters to be studied (NBODY3). Subsequent extensions to unperturbed three-body and four-body regularization proved beneficial in dealing with multiple interactions. Investigations of larger systems became possible with the Ahmad-Cohen neighbor scheme which was used more than 20 years ago for expanding universe models of 4000 galaxies (NBODY2). Combining the neighbor scheme with the regularization procedures enabled more realistic star clusters to be considered (NBODY5). After a period of simulations with no apparent technical progress, chain regularization replaced the treatment of compact subsystems (NBODY3, NBODY5). More recently, the Hermite integration method provided a major advance and has been implemented on the special-purpose HARP computers (NBODY4) together with an alternative version for workstations and supercomputers (NBODY6). These codes also include a variety of algorithms for stellar evolution based on fast lookup functions. The treatment of primordial binaries contains efficient procedures for chaotic two-body motion as well as tidal circularization, and special attention is paid to hierarchical systems and their stability. This family of N-body codes constitutes a powerful tool for dynamical simulations which is freely available to the astronomical community, and the massive effort owes much to collaborators.

[ascl:1102.007] PixeLens: A Portable Modeler of Lensed Quasars

We introduce and implement two novel ideas for modeling lensed quasars. The first is to require different lenses to agree about $H_0$. This means that some models for one lens can be ruled out by data on a different lens. We explain using two worked examples. One example models 1115+080 and 1608+656 (time-delay quadruple systems) and 1933+503 (a prospective time-delay system) all together, yielding time-delay predictions for the third lens and a 90% confidence estimate of $H_0^{-1}=14.6^{+9.4}_{-1.7}$ Gyr ($H_0=67^{+9}_{-26}$ km s$^{-1}$ Mpc$^{-1}$) assuming $\Omega_M=0.3$ and $\Omega_\Lambda=0.7$. The other example models the time-delay doubles 1520+530, 1600+434, 1830-211, and 2149-275, which gives $H_0^{-1}=14.5^{+3.3}_{-1.5}$ Gyr ($H_0=67^{+8}_{-13}$ km s$^{-1}$ Mpc$^{-1}$). Our second idea is to write the modeling software as a highly interactive Java applet, which can be used both for coarse-grained results inside a browser and for fine-grained results on a workstation. Several obstacles arise in implementing a numerically intensive method this way, but we overcome them.

[ascl:1102.008] PMFAST: Towards Optimal Parallel PM N-body Codes

The parallel PM N-body code PMFAST is cost-effective and memory-efficient. PMFAST is based on a two-level mesh gravity solver where the gravitational forces are separated into long and short range components. The decomposition scheme minimizes communication costs and allows tolerance for slow networks. The code approaches optimality in several dimensions. The force computations are local and exploit highly optimized vendor FFT libraries. It features minimal memory overhead, with the particle positions and velocities being the main cost. The code features support for distributed and shared memory parallelization through the use of MPI and OpenMP, respectively.
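
The long/short split is easy to demonstrate in one periodic dimension. The Python sketch below decomposes the potential with a Gaussian filter of scale r_s; the two parts sum exactly to the full solution. (PMFAST itself is Fortran, and it evaluates the short-range part on a local fine mesh rather than by a global FFT.)

    import numpy as np

    def split_potential(rho, box, r_s):
        """Split the periodic 1D potential (G = 1) into long- and short-range
        parts. The long-range part is smooth and cheap to solve globally in
        Fourier space; the complement only matters within a few r_s, so it
        can be handled locally on each task."""
        n = rho.size
        k = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
        rho_k = np.fft.fft(rho)
        green = np.zeros(n)
        green[1:] = -4.0 * np.pi / k[1:]**2          # mean mode dropped
        filt = np.exp(-(k * r_s)**2)
        phi_long = np.fft.ifft(green * rho_k * filt).real
        phi_short = np.fft.ifft(green * rho_k * (1.0 - filt)).real
        return phi_long, phi_short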

The current release version uses two grid levels on a slab decomposition, with periodic boundary conditions for cosmological applications. Open boundary conditions could be added with little computational overhead. Timing information and results from a recent cosmological production run of the code using a $3712^3$ mesh with $6.4 \times 10^9$ particles are available.

[ascl:1102.009] AHF: Amiga's Halo Finder

Cosmological simulations are the key tool for investigating the different processes involved in the formation of the universe from small initial density perturbations to galaxies and clusters of galaxies observed today. The identification and analysis of bound objects, halos, is one of the most important steps in drawing useful physical information from simulations. With the advent of larger and larger simulations, a reliable and parallel halo finder, able to cope with the ever-increasing data files, is a must. In this work we present the freely available MPI parallel halo finder AHF. We provide a description of the algorithm and the strategy followed to handle large simulation data. We also describe the parameters a user may choose in order to influence the process of halo finding, as well as pointing out which parameters are crucial to ensure untainted results from the parallel approach. Furthermore, we demonstrate the ability of AHF to scale to high-resolution simulations.

[ascl:1102.010] SEREN: A SPH code for star and planet formation simulations

SEREN is an astrophysical Smoothed Particle Hydrodynamics code designed to investigate star and planet formation problems using self-gravitating hydrodynamics simulations of molecular clouds, star-forming cores, and protostellar disks.

SEREN is written in Fortran 95/2003 with a modular philosophy for adding features into the code. Each feature can be easily activated or deactivated by way of setting options in the Makefile before compiling the code. This has the added benefit of allowing unwanted features to be removed at the compilation stage resulting in a smaller and faster executable program. SEREN is written with OpenMP directives to allow parallelization on shared-memory architecture.

[ascl:1102.011] Identikit 2: An Algorithm for Reconstructing Galactic Collisions

Using a combination of self-consistent and test-particle techniques, Identikit 1 (ascl:1011.001) provided a way to vary the initial geometry of a galactic collision and instantly visualize the outcome. Identikit 2 uses the same techniques to define a mapping from the current morphology and kinematics of a tidal encounter back to the initial conditions. By requiring that various regions along a tidal feature all originate from a single disc with a unique orientation, this mapping can be used to derive the initial collision geometry. In addition, Identikit 2 offers a robust way to measure how well a particular model reproduces the morphology and kinematics of a pair of interacting galaxies. A set of eight self-consistent simulations is used to demonstrate the algorithm's ability to search a ten-dimensional parameter space and find near-optimal matches; all eight systems are successfully reconstructed.

[ascl:1102.012] CPROPS: Bias-free Measurement of Giant Molecular Cloud Properties

CPROPS, written in IDL, processes FITS data cubes containing molecular line emission and returns the properties of the molecular clouds contained within them. Without corrections for the effects of beam convolution and sensitivity to GMC properties, the resulting properties may be severely biased. This is particularly true for extragalactic observations, where resolution and sensitivity effects often bias measured values by 40% or more. We correct for finite spatial and spectral resolutions with a simple deconvolution, and we correct for sensitivity biases by extrapolating properties of a GMC to those we would expect to measure with perfect sensitivity. The resulting method recovers the properties of a GMC to within 10% over a large range of resolutions and sensitivities, provided the clouds are marginally resolved with a peak signal-to-noise ratio greater than 10. We note that interferometers systematically underestimate cloud properties, particularly the flux from a cloud. The degree of bias depends on the sensitivity of the observations and the (u,v) coverage of the observations. In the Appendix to the paper we present a new, conservative decomposition algorithm for identifying GMCs in molecular-line observations. This algorithm treats the data in physical rather than observational units, does not produce spurious clouds in the presence of noise, and is sensitive to a range of morphologies. As a result, the output of this decomposition should be directly comparable among disparate data sets.
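
The extrapolation step is schematically simple: measure a property inside several intensity isosurfaces, then extrapolate to a zero-intensity edge. A toy Python version with invented numbers (CPROPS itself is IDL and extrapolates moments of the emission distribution) is:

    import numpy as np

    def extrapolate_to_zero_threshold(thresholds, values):
        """Fit value(T) over the measured edge thresholds T and return the
        intercept at T = 0, the value expected with perfect sensitivity."""
        slope, intercept = np.polyfit(thresholds, values, 1)
        return intercept

    # e.g. fluxes of one cloud measured inside the 5, 4, 3 and 2 sigma contours
    thresholds = np.array([5.0, 4.0, 3.0, 2.0])
    fluxes = np.array([10.0, 13.0, 16.0, 19.0])
    print(extrapolate_to_zero_threshold(thresholds, fluxes))   # -> 25.0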

The CPROPS package contains within it a distribution of the CLUMPFIND code (ascl:1107.014) written by Jonathan Williams and described in Williams, de Geus, and Blitz (1994). If you make use of the CLUMPFIND functionality in the CPROPS package for a publication, please cite Jonathan's original article.

[ascl:1102.013] Cactus: HPC infrastructure and programming tools

Cactus provides computational scientists and engineers with a collaborative, modular and portable programming environment for parallel high performance computing. Cactus can make use of many other technologies for HPC, such as Samrai, HDF5, PETSc and PAPI, and several application domains such as numerical relativity, computational fluid dynamics and quantum gravity are developing open community toolkits for Cactus.

[ascl:1102.014] Einstein Toolkit for Relativistic Astrophysics

The Einstein Toolkit is a collection of software components and tools for simulating and analyzing general relativistic astrophysical systems. Such systems include gravitational wave space-times, collisions of compact objects such as black holes or neutron stars, accretion onto compact objects, core collapse supernovae and Gamma-Ray Bursts.

The Einstein Toolkit builds on numerous software efforts in the numerical relativity community including CactusEinstein, Whisky, and Carpet. The Einstein Toolkit currently uses the Cactus Framework as the underlying computational infrastructure that provides large-scale parallelization, general computational components, and a model for collaborative, portable code development.

[ascl:1102.015] PMFASTIC: Initial condition generator for PMFAST

PMFASTIC is a parallel initial condition generator: a slab-decomposition Fortran 90 parallel cosmological initial condition generator for use with PMFAST (ascl:1102.008). Files required for generating initial dark matter particle distributions and instructions are included; however, CMBFAST (ascl:9909.004) is required to create alternative transfer functions.

[ascl:1102.016] HERACLES: 3D Hydrodynamical Code to Simulate Astrophysical Fluid Flows

HERACLES is a 3D hydrodynamical code used to simulate astrophysical fluid flows. It uses a finite volume method on fixed grids to solve the equations of hydrodynamics, MHD, radiative transfer and gravity. The software is developed at the Service d'Astrophysique, CEA/Saclay as part of the COAST project and is registered under the CeCILL license. HERACLES simulates astrophysical fluid flows using a grid-based Eulerian finite volume Godunov method. It is capable of simulating pure hydrodynamical flows, magneto-hydrodynamic flows, radiation hydrodynamic flows (using either flux-limited diffusion or the M1 moment method), self-gravitating flows using a Poisson solver, or all of the above. HERACLES uses Cartesian, spherical and cylindrical grids.

[ascl:1102.017] FARGO: Fast Advection in Rotating Gaseous Objects

FARGO is an efficient and simple modification of the standard transport algorithm used in explicit Eulerian fixed polar grid codes, aimed at getting rid of the average azimuthal velocity when applying the Courant condition. This results in a much larger timestep than the usual procedure, and it is particularly well-suited to the description of a Keplerian disk where one is traditionally limited by the very demanding Courant condition on the fast orbital motion at the inner boundary. In this modified algorithm, the timestep is limited by the perturbed velocity and by the shear arising from the differential rotation. The speed-up resulting from the use of the FARGO algorithm is problem dependent. In the example presented in the code paper below, which shows the evolution of a Jupiter-sized protoplanet embedded in a minimum mass protoplanetary nebula, the FARGO algorithm is about an order of magnitude faster than a traditional transport scheme, with a much smaller numerical diffusivity.
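
The heart of the algorithm is easy to demonstrate in one periodic dimension: remove the mean velocity as an exact integer-cell shift, then advect only the residual, so the Courant condition applies to the residual alone. A minimal Python sketch (first-order upwind for clarity; FARGO itself uses higher-order transport, ring by ring, on a polar grid):

    import numpy as np

    def fargo_step(q, v, dx, dt):
        """One transport step in the spirit of FARGO (1D, periodic)."""
        n_shift = int(np.round(v.mean() * dt / dx))  # whole-cell part of the mean motion
        v_res = v - n_shift * dx / dt                # small leftover velocity
        q = np.roll(q, n_shift)                      # exact shift, no Courant limit
        # first-order upwind on the residual (requires |v_res| * dt / dx < 1)
        flux = np.where(v_res > 0.0, q * v_res, np.roll(q, -1) * v_res)
        return q - dt / dx * (flux - np.roll(flux, 1))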

[ascl:1102.018] Karma: Visualisation Test-Bed Toolkit

Karma is a toolkit for interprocess communications, authentication, encryption, graphics display, user interfaces, and manipulation of the Karma network data structure. It contains KarmaLib (the structured libraries and API) and a large number of modules (applications) that perform many standard tasks. A suite of visualisation tools is distributed with the library.

[ascl:1102.019] HOP: A Group-finding Algorithm for N-body Simulations

We describe a new method (HOP) for identifying groups of particles in N-body simulations. Having assigned to every particle an estimate of its local density, we associate each particle with the densest of the Nh particles nearest to it. Repeating this process allows us to trace a path, within the particle set itself, from each particle in the direction of increasing density. The path ends when it reaches a particle that is its own densest neighbor; all particles reaching the same such particle are identified as a group. Combined with an adaptive smoothing kernel for finding the densities, this method is spatially adaptive, coordinate-free, and numerically straightforward. One can proceed to process the output by truncating groups at a particular density contour and combining groups that share a (possibly different) density contour. While the resulting algorithm has several user-chosen parameters, we show that the results are insensitive to most of these, the exception being the outer density cutoff of the groups.
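
The hopping step is compact to write down with a k-d tree. The Python sketch below substitutes a crude k-nearest-neighbour density estimate for HOP's adaptive smoothing kernel and omits the density-contour regrouping stage:

    import numpy as np
    from scipy.spatial import cKDTree

    def hop_groups(pos, n_dens=64, n_hop=16):
        """Hop each particle to the densest of its n_hop nearest neighbours
        and follow the chain to a local density maximum; particles reaching
        the same maximum form one group (returned as a label array)."""
        d, idx = cKDTree(pos).query(pos, k=n_dens)
        dens = n_dens / d[:, -1]**3                  # crude kNN density estimate
        neigh = idx[:, :n_hop]                       # includes the particle itself
        target = neigh[np.arange(len(pos)), np.argmax(dens[neigh], axis=1)]
        while True:                                  # pointer-double until stable
            nxt = target[target]
            if np.array_equal(nxt, target):
                return target
            target = nxt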

[ascl:1102.020] SKID: Finding Gravitationally Bound Groups in N-body Simulations

SKID finds gravitationally bound groups in N-body simulations. The SKID program will group different types of particles depending on the type of input binary file: dark matter particles, gas particles, star particles, or gas and star particles, depending on what is in the input tipsy binary file. Once groups with at least a certain minimum number of members have been determined, SKID removes particles which are not bound to the group. SKID must use the original positions of all the particles to determine whether or not particles are bound. This procedure, which we call unbinding, again depends on the type of grouping we are dealing with. There are two cases: one for dark matter only or star particles only (case 1 unbinding), the other for inputs including gas (and possibly stars in a dark matter environment; this is case 2 unbinding).
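
The unbinding pass is conceptually simple: compute each member's energy in the group frame and iteratively discard unbound particles until the remainder is self-bound. A schematic Python version (direct-sum O(N^2) potential and a plain mean-velocity frame; SKID's actual procedure differs between the two cases described above):

    import numpy as np

    def unbind(pos, vel, mass, G=1.0, soft=0.01):
        """Return indices of the self-bound subset of a candidate group."""
        keep = np.arange(len(mass))
        while len(keep) > 1:
            p = pos[keep]
            v = vel[keep] - vel[keep].mean(axis=0)   # group velocity frame
            r = np.linalg.norm(p[:, None] - p[None, :], axis=-1)
            np.fill_diagonal(r, np.inf)              # no self-potential
            phi = -G * (mass[keep][None, :] / np.sqrt(r**2 + soft**2)).sum(axis=1)
            energy = 0.5 * (v**2).sum(axis=1) + phi  # specific energy per particle
            bound = energy < 0.0
            if bound.all():
                break
            keep = keep[bound]
        return keep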

Skid version 1.3 is a much-improved version of the old denmax-1.1 code. The new name was given to avoid confusion with the DENMAX program of Gelb & Bertschinger; although it is based on the same idea, it represents a substantial evolution of the method.

[ascl:1102.021] DIRT: Dust InfraRed Toolbox

DIRT is a Java applet for modelling astrophysical processes in circumstellar dust shells around young and evolved stars. With DIRT, you can select and display over 500,000 pre-run model spectral energy distributions (SEDs), find the best-fit model to your data set, and account for beam size in model fitting. DIRT also allows you to manipulate data and models with an interactive viewer, display gas and dust density and temperature profiles, and display model intensity profiles at various wavelengths.

[ascl:1102.022] PDRT: Photo Dissociation Region Toolbox

Ultraviolet photons from O and B stars strongly influence the structure and emission spectra of the interstellar medium. The UV photons energetic enough to ionize hydrogen (hν > 13.6 eV) will create the H II region around the star, but lower energy UV photons escape. These far-UV photons (6 eV < hν < 13.6 eV) are still energetic enough to photodissociate molecules and to ionize low ionization-potential atoms such as carbon, silicon, and sulfur. They thus create a photodissociation region (PDR) just outside the H II region. In aggregate, these PDRs dominate the heating and cooling of the neutral interstellar medium.

The PDR Toolbox is a science-enabling Python package for the community, designed to help astronomers determine the physical parameters of photodissociation regions from observations. Typical observations of both Galactic and extragalactic PDRs come from ground- and space-based millimeter, submillimeter, and far-infrared telescopes such as ALMA, SOFIA, JWST, Spitzer, and Herschel. Given a set of observations of spectral line or continuum intensities, PDR Toolbox can compute best-fit FUV incident intensity and cloud density based on our models of PDR emission.

[ascl:1102.023] 21cmFAST: A Fast, Semi-Numerical Simulation of the High-Redshift 21-cm Signal

21cmFAST is a powerful semi-numeric modeling tool designed to efficiently simulate the cosmological 21-cm signal. The code generates 3D realizations of evolved density, ionization, peculiar velocity, and spin temperature fields, which it then combines to compute the 21-cm brightness temperature. Although the physical processes are treated with approximate methods, the results were compared to a state-of-the-art large-scale hydrodynamic simulation, and the findings indicate good agreement on scales pertinent to the upcoming observations (>~ 1 Mpc). The power spectra from 21cmFAST agree with those generated from the numerical simulation to within 10s of percent, down to the Nyquist frequency. Results were shown from a 1 Gpc simulation which tracks the cosmic 21-cm signal down from z=250, highlighting the various interesting epochs. Depending on the desired resolution, 21cmFAST can compute a redshift realization on a single processor in just a few minutes. The code is fast, efficient, customizable and publicly available, making it a useful tool for 21-cm parameter studies.
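
For orientation, the quantity being mapped reduces, in the common saturated limit (spin temperature much larger than the CMB temperature, peculiar-velocity terms neglected, fiducial cosmological parameters folded into the 27 mK prefactor), to a simple expression; 21cmFAST's full calculation keeps the spin temperature and velocity-gradient terms:

    import numpy as np

    def delta_tb_mK(x_HI, delta, z):
        """Approximate 21-cm brightness temperature offset in mK, given the
        neutral fraction x_HI, matter overdensity delta, and redshift z."""
        return 27.0 * x_HI * (1.0 + delta) * np.sqrt((1.0 + z) / 10.0)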

[ascl:1102.024] DiFX2: A more flexible, efficient, robust and powerful software correlator

Software correlation, where a correlation algorithm written in a high-level language such as C++ is run on commodity computer hardware, has become increasingly attractive for small to medium sized and/or bandwidth constrained radio interferometers. In particular, many long baseline arrays (which typically have fewer than 20 elements and are restricted in observing bandwidth by costly recording hardware and media) have utilized software correlators for rapid, cost-effective correlator upgrades to allow compatibility with new, wider bandwidth recording systems and improve correlator flexibility. The DiFX correlator, made publicly available in 2007, has been a popular choice in such upgrades and is now used for production correlation by a number of observatories and research groups worldwide. Here we describe the evolution in the capabilities of the DiFX correlator over the past three years, including a number of new capabilities, substantial performance improvements, and a large amount of supporting infrastructure to ease use of the code. New capabilities include the ability to correlate a large number of phase centers in a single correlation pass, the extraction of phase calibration tones, correlation of disparate but overlapping sub-bands, the production of rapidly sampled filterbank and kurtosis data at minimal cost, and many more. The latest version of the code is at least 15% faster than the original, and in certain situations many times this value. Finally, we also present detailed test results validating the correctness of the new code.
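
The kernel of any FX-style software correlator is small. The toy Python version below channelizes two station voltage streams with FFTs, cross-multiplies, and accumulates; everything that makes DiFX a production correlator (delay and fringe tracking, quantization corrections, multiple sub-bands and phase centers, MPI distribution) is omitted:

    import numpy as np

    def fx_correlate(x, y, nchan=256):
        """Toy FX correlation of two real-valued sample streams."""
        nspec = min(len(x), len(y)) // (2 * nchan)
        acc = np.zeros(nchan, dtype=complex)
        for i in range(nspec):
            seg = slice(2 * nchan * i, 2 * nchan * (i + 1))
            X = np.fft.rfft(x[seg])[:nchan]          # channelize ("F")
            Y = np.fft.rfft(y[seg])[:nchan]
            acc += X * np.conj(Y)                    # cross-multiply ("X")
        return acc / nspec                           # averaged cross-spectrum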

[ascl:1102.025] LensPix: Fast MPI full sky transforms for HEALPix

Modelling of the weak lensing of the CMB will be crucial to obtain correct cosmological parameter constraints from forthcoming precision CMB anisotropy observations. The lensing affects the power spectrum as well as inducing non-Gaussianities. We discuss the simulation of full sky CMB maps in the weak lensing approximation and describe a fast numerical code. The series expansion in the deflection angle cannot be used to simulate accurate CMB maps, so a pixel remapping must be used. For parameter estimation, accounting for the change in the power spectrum but assuming Gaussianity is sufficient to obtain accurate results up to Planck sensitivity using current tools. A fuller analysis may be required to obtain accurate error estimates and for more sensitive observations. We demonstrate a simple full sky simulation and subsequent parameter estimation at Planck-like sensitivity.
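
Pixel remapping, as opposed to a series expansion in the deflection angle, means evaluating the unlensed field at the deflected positions. A 1D periodic toy in Python (both fields are invented; LensPix does this on the sphere with fast spherical harmonic transforms):

    import numpy as np

    n = 512
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    rng = np.random.default_rng(1)
    T = rng.standard_normal(n)               # stand-in "unlensed" field
    d = 0.05 * np.sin(x)                     # stand-in deflection field
    # the lensed field samples the unlensed one at the deflected positions
    T_lensed = np.interp(x + d, x, T, period=2.0 * np.pi)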

[ascl:1102.026] CAMB: Code for Anisotropies in the Microwave Background

We present a fully covariant and gauge-invariant calculation of the evolution of anisotropies in the cosmic microwave background (CMB) radiation. We use the physically appealing covariant approach to cosmological perturbations, which ensures that all variables are gauge-invariant and have a clear physical interpretation. We derive the complete set of frame-independent, linearised equations describing the (Boltzmann) evolution of anisotropy and inhomogeneity in an almost Friedmann-Robertson-Walker (FRW) cold dark matter (CDM) universe. These equations include the contributions of scalar, vector and tensor modes in a unified manner. Frame-independent equations for scalar and tensor perturbations, which are valid for any value of the background curvature, are obtained straightforwardly from the complete set of equations. We discuss the scalar equations in detail, including the integral solution and relation with the line of sight approach, analytic solutions in the early radiation dominated era, and the numerical solution in the standard CDM model. Our results confirm those obtained by other groups, who have worked carefully with non-covariant methods in specific gauges, but are derived here in a completely transparent fashion.

[ascl:1102.027] ZENO: N-body and SPH Simulation Codes

The ZENO software package integrates N-body and SPH simulation codes with a large array of programs to generate initial conditions and analyze numerical simulations. Written in C, the ZENO system is portable between Mac, Linux, and Unix platforms. It is in active use at the Institute for Astronomy (IfA), at NRAO, and possibly elsewhere.

Zeno programs can perform a wide range of simulation and analysis tasks. While many of these programs were first created for specific projects, they embody algorithms of general applicability and embrace a modular design strategy, so existing code is easily applied to new tasks. Major elements of the system include:

  • structured data file utilities that facilitate basic operations on binary data, including import/export of ZENO data to other systems;
  • snapshot generation routines that create particle distributions with various properties; systems with user-specified density profiles can be realized in collisionless or gaseous form, and multiple spherical and disk components may be set up in mutual equilibrium;
  • snapshot manipulation routines that permit the user to sift, sort, and combine particle arrays, translate and rotate particle configurations, and assign new values to data fields associated with each particle.

Simulation codes include both pure N-body and combined N-body/SPH programs. Pure N-body codes are available in both uniprocessor and parallel versions. SPH codes offer a wide range of options for gas physics, including isothermal, adiabatic, and radiating models. Snapshot analysis programs calculate temporal averages, evaluate particle statistics, measure shapes and density profiles, compute kinematic properties, and identify and track objects in particle distributions. Visualization programs generate interactive displays and produce still images and videos of particle distributions; the user may specify arbitrary color schemes and viewing transformations.

[ascl:1102.028] ZEUS-MP/2: Computational Fluid Dynamics Code

ZEUS-MP is a multiphysics, massively parallel, message-passing implementation of the ZEUS code. ZEUS-MP offers an MHD algorithm that is better suited for multidimensional flows than the ZEUS-2D module by virtue of modifications to the method of characteristics scheme first suggested by Hawley & Stone. This MHD module is shown to compare quite favorably to the TVD scheme described by Ryu et al. ZEUS-MP is the first publicly available ZEUS code to allow the advection of multiple chemical (or nuclear) species. Radiation hydrodynamic simulations are enabled via an implicit flux-limited radiation diffusion (FLD) module. The hydrodynamic, MHD, and FLD modules can be used, singly or in concert, in one, two, or three space dimensions. In addition, so-called 1.5D and 2.5D grids, in which the "half-D" denotes a symmetry axis along which a constant but nonzero value of velocity or magnetic field is evolved, are supported. Self-gravity can be included either through the assumption of a GM/r potential or through a solution of Poisson's equation using one of three linear solver packages (conjugate gradient, multigrid, and FFT) provided for that purpose. Point-mass potentials are also supported.

Because ZEUS-MP is designed for large simulations on parallel computing platforms, considerable attention is paid to the parallel performance characteristics of each module in the code. Strong-scaling tests involving pure hydrodynamics (with and without self-gravity), MHD, and RHD are performed in which large problems ($256^3$ zones) are distributed among as many as 1024 processors of an IBM SP3. Parallel efficiency is a strong function of the amount of communication required between processors in a given algorithm, but all modules are shown to scale well on up to 1024 processors for the chosen fixed problem size.

[ascl:1101.001] Second-order Tight-coupling Code

Prior to recombination, photons, electrons, and atomic nuclei scattered rapidly and behaved, almost, like a single tightly-coupled photon-baryon plasma. In order to solve the cosmological perturbation equations during that time, Cosmic Microwave Background (CMB) codes use the so-called tight-coupling approximation, in which the problematic terms (i.e., the source of the stiffness) are expanded in inverse powers of the Thomson opacity. Most codes only keep the terms linear in the inverse Thomson opacity. We have developed a second-order tight-coupling code to test the validity of the usual first-order tight-coupling code. It is based on the publicly available code CAMB (ascl:1102.026).

[ascl:1101.002] NDSPMHD Smoothed Particle Magnetohydrodynamics Code

This paper presents an overview and introduction to Smoothed Particle Hydrodynamics and Magnetohydrodynamics in theory and in practice. Firstly, we give a basic grounding in the fundamentals of SPH, showing how the equations of motion and energy can be self-consistently derived from the density estimate. We then show how to interpret these equations using the basic SPH interpolation formulae and highlight the subtle difference in approach between SPH and other particle methods. In doing so, we also critique several `urban myths' regarding SPH, in particular the idea that one can simply increase the `neighbour number' more slowly than the total number of particles in order to obtain convergence. We also discuss the origin of numerical instabilities such as the pairing and tensile instabilities. Finally, we give practical advice on how to resolve three of the main issues with SPMHD: removing the tensile instability, formulating dissipative terms for MHD shocks and enforcing the divergence constraint on the particles, and we give the current status of developments in this area. Accompanying the paper is the first public release of the NDSPMHD SPH code, a 1, 2 and 3 dimensional code designed as a testbed for SPH/SPMHD algorithms that can be used to test many of the ideas and used to run all of the numerical examples contained in the paper.

[ascl:1101.003] IGMtransfer: Intergalactic Radiative Transfer Code

This document describes the publicly available numerical code "IGMtransfer", capable of performing intergalactic radiative transfer (RT) of light in the vicinity of the Lyman alpha (Lya) line. Calculating the RT in a (possibly adaptively refined) grid of cells resulting from a cosmological simulation, the code returns 1) a "transmission function", showing how the intergalactic medium (IGM) affects the Lya line at a given redshift, and 2) the "average transmission" of the IGM, making it useful for studying the results of reionization simulations.

[ascl:1101.004] InterpMC: Caching and Interpolated Likelihoods -- Accelerating Cosmological Monte Carlo Markov Chains

We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user.
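
The idea reduces to fitting a cheap surrogate to expensive log-likelihood evaluations gathered early in the chain. A one-parameter Python toy with invented numbers (InterpMC works in the full parameter space and monitors the interpolation accuracy as the chain proceeds):

    import numpy as np

    rng = np.random.default_rng(0)
    theta_train = np.linspace(-2.0, 2.0, 25)       # early, expensive evaluations
    logl_train = -0.5 * theta_train**2 + 0.01 * rng.standard_normal(25)

    # high-order polynomial surrogate for the log-likelihood surface
    surrogate = np.polynomial.Polynomial.fit(theta_train, logl_train, deg=4)
    print(surrogate(0.3))                          # cheap approximate log-likelihood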

[ascl:1101.005] CMHOG: Code for Ideal Compressible Hydrodynamics

CMHOG (Connection Machine Higher Order Godunov) is a code for ideal compressible hydrodynamics based on the Lagrange-plus-remap version of the piecewise parabolic method (PPM) of Colella & Woodward (1984, J. Comp. Phys., 54, 174). It works in one-, two- or three-dimensional Cartesian coordinates with either an adiabatic or isothermal equation of state. A limited amount of extra physics has been added using operator splitting, including optically-thin radiative cooling, and chemistry for combustion simulations.

[ascl:1101.006] NIRVANA: A Numerical Tool for Astrophysical Gas Dynamics

The NIRVANA code is capable of simulating multi-scale self-gravitational magnetohydrodynamics problems in three space dimensions employing the technique of adaptive mesh refinement. The building blocks of NIRVANA are (i) a fully conservative, divergence-free Godunov-type central scheme for the solution of the equations of magnetohydrodynamics; (ii) a block-structured mesh refinement algorithm which automatically adds and removes elementary grid blocks whenever necessary to achieve adequate resolution; and (iii) an adaptive mesh Poisson solver based on multigrid philosophy which incorporates the so-called elliptic matching condition to keep the gradient of the gravitational potential continuous at fine/coarse mesh interfaces.

[ascl:1101.007] Galaxia: A Code to Generate a Synthetic Survey of the Milky Way

We present here a fast code for creating a synthetic survey of the Milky Way. Given one or more color-magnitude bounds, a survey size and geometry, the code returns a catalog of stars in accordance with a given model of the Milky Way. The model can be specified by a set of density distributions or as an N-body realization. We provide fast and efficient algorithms for sampling both types of models. As compared to earlier sampling schemes which generate stars at specified locations along a line of sight, our scheme can generate a continuous and smooth distribution of stars over any given volume. The code is quite general and flexible and can accept input in the form of a star formation rate, age-metallicity relation, age-velocity dispersion relation, and analytic density distribution functions. Theoretical isochrones are then used to generate a catalog of stars, and support is available for a wide range of photometric bands. As a concrete example we implement the Besancon Milky Way model for the disc. For the stellar halo we employ the simulated stellar halo N-body models of Bullock & Johnston (2005). In order to sample N-body models, we present a scheme that disperses the stars spawned by an N-body particle in such a way that the phase space density of the spawned stars is consistent with that of the N-body particles. The code is ideally suited to generating synthetic data sets that mimic near-future wide-area surveys such as GAIA, LSST and HERMES. As an application we study the prospect of identifying structures in the stellar halo with a simulated GAIA survey.

[ascl:1101.008] CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics

CRASH (Center for Radiative Shock Hydrodynamics) is a block adaptive mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include a flux limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite volume discretization in either one, two, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator split method is used to solve these equations in three substeps: (1) solve the hydrodynamic equations with shock-capturing schemes, (2) linearly advect the radiation in frequency-logarithm space, and (3) implicitly solve the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).

[ascl:1101.009] MasQU: Finite Differences on Masked Irregular Stokes Q,U Grids

MasQU extracts polarization information in the CMB by reducing contamination from so-called "ambiguous modes" on a masked sky, which contain leakage from the larger E-mode signal, utilizing derivative operators on the real-space Stokes Q and U parameters. In particular, the package can perform finite differences on masked, irregular grids and is applied to a semi-regular spherical pixellization, the HEALPix grid. The formalism reduces to the known finite-difference solutions in the case of a regular grid. On a masked sphere, the software achieves a considerable reduction in B-mode noise from limited sky coverage.

[ascl:1101.010] TOPCAT: Tool for OPerations on Catalogues And Tables

TOPCAT is an interactive graphical viewer and editor for tabular data. Its aim is to provide most of the facilities that astronomers need for analysis and manipulation of source catalogues and other tables, though it can be used for non-astronomical data as well. It understands a number of different astronomically important formats (including FITS and VOTable) and more formats can be added.

It offers a variety of ways to view and analyse tables, including a browser for the cell data themselves, viewers for information about table and column metadata, and facilities for 1-, 2-, 3- and higher-dimensional visualisation, calculating statistics and joining tables using flexible matching algorithms. Using a powerful and extensible Java-based expression language new columns can be defined and row subsets selected for separate analysis. Table data and metadata can be edited and the resulting modified table can be written out in a wide range of output formats.

It is a stand-alone application which works quite happily with no network connection. However, because it uses Virtual Observatory (VO) standards, it can cooperate smoothly with other tools in the VO world and beyond, such as VODesktop, Aladin and ds9. Between 2006 and 2009 TOPCAT was developed within the AstroGrid project, and is offered as part of a standard suite of applications on the AstroGrid web site, where you can find information on several other VO tools.

The program is written in pure Java and available under the GNU General Public Licence. It has been developed in the UK within the Starlink and AstroGrid projects, and under PPARC and STFC grants. Its underlying table processing facilities are provided by STIL.

[ascl:1011.013] EasyLTB: Code for Testing LTB Models against Cosmology

The possibility that we live in a special place in the universe, close to the centre of a large void, seems an appealing alternative to the prevailing interpretation of the acceleration of the universe in terms of a LCDM model with a dominant dark energy component. In this paper we confront the asymptotically flat Lemaitre-Tolman-Bondi (LTB) models with a series of observations, from Type Ia Supernovae to Cosmic Microwave Background and Baryon Acoustic Oscillations data. We propose two concrete LTB models describing a local void in which the only arbitrary functions are the radial dependence of the matter density $\Omega_M$ and the Hubble expansion rate $H$. We find that all observations can be accommodated within 1 sigma, for our models with 4 or 5 independent parameters. The best fit models have a $\chi^2$ very close to that of the LCDM model. We perform a simple Bayesian analysis and show that one cannot exclude the hypothesis that we live within a large local void of an otherwise Einstein-de Sitter model.

[ascl:1011.014] CO5BOLD: COnservative COde for the COmputation of COmpressible COnvection in a BOx of L Dimensions with l=2,3

CO5BOLD - nickname COBOLD - is the short form of "COnservative COde for the COmputation of COmpressible COnvection in a BOx of L Dimensions with l=2,3''.

It is used to model solar and stellar surface convection. For solar-type stars only a small fraction of the stellar surface layers are included in the computational domain. In the case of red supergiants the computational box contains the entire star. Recently, the model range has been extended to sub-stellar objects (brown dwarfs).

CO5BOLD solves the coupled non-linear equations of compressible hydrodynamics in an external gravity field together with non-local frequency-dependent radiation transport. Operator splitting is applied to solve the equations of hydrodynamics (including gravity), the radiative energy transfer (with a long-characteristics or a short-characteristics ray scheme), and possibly additional 3D (turbulent) diffusion in individual substeps. The 3D hydrodynamics step is usually further simplified with directional splitting. The 1D substeps are performed with a Roe solver, accounting for an external gravity field and an arbitrary tabulated equation of state.

The radiation transport is computed with either one of three modules:

  • MSrad module: uses long characteristics; the lateral boundaries have to be periodic, while top and bottom can be closed or open (the "solar module").
  • LHDrad module: uses long characteristics and is restricted to an equidistant grid and open boundaries at all surfaces (the old "supergiant module").
  • SHORTrad module: uses short characteristics and is restricted to an equidistant grid and open boundaries at all surfaces (the new "supergiant module").

The code was supplemented with an (optional) MHD version [Schaffenberger et al. (2005)] that can treat magnetic fields. Modules for the formation and advection of dust are also available. The current version also contains the treatment of chemical reaction networks, mostly used for the formation of molecules [Wedemeyer-Böhm et al. (2005)], and of hydrogen ionization [Leenaarts & Wedemeyer-Böhm (2005)].

CO5BOLD is written in Fortran90. The parallelization is done with OpenMP directives.

[ascl:1011.015] Geokerr: Computing Photon Orbits in a Kerr Spacetime

Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. Programmed in Fortran, Geokerr uses a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semi-analytically for the first time.

[ascl:1011.016] Non-LTE Models and Theoretical Spectra of Accretion Disks in Active Galactic Nuclei. III. Integrated Spectra for Hydrogen-Helium Disks

We have constructed a grid of non-LTE disk models for a wide range of black hole mass and mass accretion rate, for several values of viscosity parameter alpha, and for two extreme values of the black hole spin: the maximum-rotation Kerr black hole, and the Schwarzschild (non-rotating) black hole. Our procedure calculates self-consistently the vertical structure of all disk annuli together with the radiation field, without any approximations imposed on the optical thickness of the disk, and without any ad hoc approximations to the behavior of the radiation intensity. The total spectrum of a disk is computed by summing the spectra of the individual annuli, taking into account the general relativistic transfer function. The grid covers nine values of the black hole mass between M = 1/8 and 32 billion solar masses with a two-fold increase of mass for each subsequent value; and eleven values of the mass accretion rate, each a power of 2 times 1 solar mass/year. The highest value of the accretion rate corresponds to 0.3 Eddington. We show the vertical structure of individual annuli within the set of accretion disk models, along with their local emergent flux, and discuss the internal physical self-consistency of the models. We then present the full disk-integrated spectra, and discuss a number of observationally interesting properties of the models, such as optical/ultraviolet colors, the behavior of the hydrogen Lyman limit region, polarization, and number of ionizing photons. Our calculations are far from definitive in terms of the input physics, but generally we find that our models exhibit rather red optical/UV colors. Flux discontinuities in the region of the hydrogen Lyman limit are only present in cool, low luminosity models, while hotter models exhibit blueshifted changes in spectral slope.

[ascl:1011.017] Microccult: Occultation and Microlensing

Occultation and microlensing are different limits of the same phenomena of one body passing in front of another body. We derive a general exact analytic expression which describes both microlensing and occultation in the case of spherical bodies with a source of uniform brightness and a non-relativistic foreground body. We also compute numerically the case of a source with quadratic limb-darkening. In the limit that the gravitational deflection angle is comparable to the angular size of the foreground body, both microlensing and occultation occur as the objects align. Such events may be used to constrain the size ratio of the lens and source stars, the limb-darkening coefficients of the source star, and the surface gravity of the lens star (if the lens and source distances are known). Application of these results to microlensing during transits in binaries and giant-star microlensing are discussed. These results unify the microlensing and occultation limits and should be useful for rapid model fitting of microlensing, eclipse, and "microccultation" events.
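
For reference, the limit that the paper's unified expression recovers when the foreground body's angular size is negligible is the standard point-source, point-lens magnification, e.g. in Python:

    import numpy as np

    def point_lens_magnification(u):
        """Paczynski magnification; u is the source-lens separation in
        units of the Einstein radius."""
        u = np.asarray(u, dtype=float)
        return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))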
