How well do STARLAB and NBODY4 compare? I: Simple models
Monte Carlo simulation of the electron transport through thin slabs: A comparative study of PENELOPE, GEANT3, GEANT4, EGSnrc and MCNPX
Computational AstroStatistics: Fast and Efficient Tools for Analysing Huge Astronomical Data Sources
Astrocomp: a web service for the use of high performance computers in Astrophysics
Group Identification in N-Body Simulations: SKID and DENMAX Versus Friends-of-Friends
Comparing Numerical Methods for Isothermal Magnetized Supersonic Turbulence
Haloes gone MAD: The Halo-Finder Comparison Project
GEMS: Galaxy fitting catalogues and testing parametric galaxy fitting codes
A Comparison of Cosmological Codes (TVD, ENZO, and GADGET)
Running your first SPH simulation
A Guide to Comparisons of Star Formation Simulations with Observations
Galaxies going MAD: The Galaxy-Finder Comparison Project
Streams Going Notts: The tidal debris finder comparison project
Simplifying Complex Software Assembly: The Component Retrieval Language and Implementation
nIFTy Cosmology: Comparison of Galaxy Formation Models
Modified Gravity N-body Code Comparison Project
A systematic review of strong gravitational lens modeling software
Testing approximate predictions of displacements of cosmological dark matter halos (added April 30, 2017)
Abstract: N-body simulations are widely used to simulate the dynamical evolution of a variety of systems, among them star clusters. Much of our understanding of their evolution rests on the results of such direct N-body simulations. They provide insight into the structural evolution of star clusters, as well as into the occurrence of stellar exotica. Although the major pure N-body codes STARLAB/KIRA and NBODY4 are widely used for a range of applications, no thorough comparison study has yet been made. Here we thoroughly compare basic quantities as derived from simulations performed either with STARLAB/KIRA or NBODY4.
We construct a large number of star cluster models for various stellar mass function settings (but without stellar/binary evolution, primordial binaries, external tidal fields etc), evolve them in parallel with STARLAB/KIRA and NBODY4, analyse them in a consistent way and compare the averaged results quantitatively. For this quantitative comparison we develop a bootstrap algorithm for functional dependencies.
We find an overall excellent agreement between the codes, both for the clusters' structural and energy parameters and for the properties of the dynamically created binaries. However, we identify small differences, such as in the energy conservation before core collapse and in the energies of escaping stars, which deserve further study. Our results confirm that results from these two major N-body codes are comparable and can be combined, at least for the purely dynamical models (i.e. without stellar/binary evolution) we performed. (abridged)
Credit: P. Anders, H. Baumgardt, N. Bissantz, S. Portegies Zwart
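The abstract mentions a bootstrap algorithm for functional dependencies without spelling it out. As a rough illustration of the idea only (the resampling scheme and all names below are my assumptions, not the authors' method), here is a minimal sketch that bootstraps the run-averaged curve y(t) and its standard error from several independent simulation runs:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_curve(curves, n_boot=1000):
    """Bootstrap a functional dependency y(t): resample whole runs
    (rows) with replacement and re-average, giving the mean curve and
    a standard-error estimate in every time bin."""
    curves = np.asarray(curves)
    n_runs = curves.shape[0]
    idx = rng.integers(0, n_runs, size=(n_boot, n_runs))
    boot_means = curves[idx].mean(axis=1)   # shape (n_boot, n_bins)
    return curves.mean(axis=0), boot_means.std(axis=0)

# Toy data: 20 noisy realisations of one underlying curve y(t)
t = np.linspace(0.0, 1.0, 50)
runs = np.sin(2.0 * np.pi * t) + rng.normal(0.0, 0.3, size=(20, t.size))

mean_curve, err_curve = bootstrap_curve(runs)
```

Resampling whole runs (rather than individual time bins) preserves the correlation structure along each curve, which is the point of bootstrapping a functional dependency rather than a scalar.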
Abstract: The Monte Carlo simulation of electron transport through thin slabs is studied with five general-purpose codes: PENELOPE, GEANT3, GEANT4, EGSnrc and MCNPX. The different material foils analyzed in the old experiments of Kulchitsky and Latyshev [Phys. Rev. 61 (1942) 254-266] and Hanson et al. [Phys. Rev. 84 (1951) 634-637] are used to perform the comparison between the Monte Carlo codes. Non-negligible differences are observed in the angular distributions of the transmitted electrons obtained with some of the codes. The experimental data are reasonably well described by EGSnrc, PENELOPE (v. 2005) and GEANT4. Good general agreement is found for EGSnrc and GEANT4 in all the cases analyzed.
Credit: M. Vilches, S. Garcia-Pareja, R. Guerrero, M. Anguiano, A.M. Lallena
Abstract: I present here a review of past and present multi-disciplinary research of the Pittsburgh Computational AstroStatistics (PiCA) group. This group is dedicated to developing fast and efficient statistical algorithms for analysing huge astronomical data sources. I begin with a short review of multi-resolutional kd-trees, which are the building blocks for many of our algorithms, for example quick range queries and fast n-point correlation functions. I will present new results from the use of Mixture Models (Connolly et al. 2000) in density estimation of multi-color data from the Sloan Digital Sky Survey (SDSS), specifically the selection of quasars and the automated identification of X-ray sources. I will also present a brief overview of the False Discovery Rate (FDR) procedure (Miller et al. 2001a) and show how it has been used in the detection of "Baryon Wiggles" in the local galaxy power spectrum and in source identification in radio data. Finally, I will look forward to new research on an automated Bayes Network anomaly detector and the possible use of the Locally Linear Embedding algorithm (LLE; Roweis & Saul 2000) for spectral classification of SDSS spectra.
Credit: R. C. Nichol, S. Chong, A. J. Connolly, S. Davies, C. Genovese, A. M. Hopkins, C. J. Miller, A. W. Moore, D. Pelleg, G. T. Richards, J. Schneider, I. Szapudi, L. Wasserman
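To give a flavor of the kd-tree machinery the abstract refers to, here is a generic sketch using SciPy's `cKDTree` (not PiCA's own implementation): dual-tree pair counting over distance bins, which is the core of a fast two-point correlation function estimate. The catalogues and bin edges are made up for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Mock data catalogue and a matching random catalogue in a unit box
data = rng.random((2000, 3))
rand = rng.random((2000, 3))

tree_d = cKDTree(data)
tree_r = cKDTree(rand)

r_bins = np.array([0.0, 0.05, 0.1, 0.15, 0.2])
# count_neighbors returns cumulative pair counts within each radius;
# differencing the cumulative counts gives per-bin pair counts
DD = np.diff(tree_d.count_neighbors(tree_d, r_bins))
RR = np.diff(tree_r.count_neighbors(tree_r, r_bins))

# Simple (Peebles-Hauser) estimator of the two-point correlation function
xi = DD / RR - 1.0
```

Because both catalogues here are unclustered, xi comes out consistent with zero; real data with clustering would show xi > 0 on small scales.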
Abstract: Astrocomp is a joint project developed by the INAF-Astrophysical Observatory of Catania, University of Roma La Sapienza and Enea. The project has the goal of providing the scientific community with a web-based, user-friendly interface which allows running parallel codes on a set of high-performance computing (HPC) resources, without any need for specific knowledge about parallel programming and Operating System commands. Astrocomp also provides computing time, on a set of parallel computing systems, to authorized users. At present, the portal makes a few codes available, among them: FLY, a cosmological code for studying three-dimensional collisionless self-gravitating systems with periodic boundary conditions; ATD, a parallel tree-code for the simulation of the dynamics of boundary-free collisional and collisionless self-gravitating systems; and MARA, a code for stellar light curve analysis. Other codes are going to be added to the portal.
Credit: U. Becciani, R. Capuzzo Dolcetta, A. Costa, P. Di Matteo, P. Miocchi, V. Rosato
Abstract: Three popular algorithms (FOF, DENMAX, and SKID) for identifying halos in cosmological N-body simulations are compared with each other and with the mass function predicted by Press-Schechter theory. It is shown that the resulting distribution of halo masses depends strongly upon the choice of free parameters in the three algorithms, so much care in their choice is needed. For many parameter values, DENMAX and SKID tend to include in the halos particles at large distances from the halo center with low peculiar velocities. FOF does not suffer from this problem, and its mass distribution is furthermore well reproduced by the Press-Schechter prediction.
Credit: M. Goetz, J. P. Huchra, R. H. Brandenberger
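The friends-of-friends idea compared above is simple enough to sketch in a few lines. This is a deliberately naive illustration (not the production FOF code used in the paper): link any two particles closer than the linking length and take connected components of the resulting graph as groups.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def fof_groups(pos, linking_length):
    """Minimal friends-of-friends: particles closer than the linking
    length are 'friends'; groups are the connected components of the
    friendship graph."""
    tree = cKDTree(pos)
    pairs = tree.query_pairs(linking_length, output_type='ndarray')
    n = len(pos)
    adj = coo_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])),
                     shape=(n, n))
    n_groups, labels = connected_components(adj, directed=False)
    return n_groups, labels

# Two well-separated clumps plus two isolated field particles
rng = np.random.default_rng(2)
clump_a = rng.normal(0.0, 0.1, size=(50, 3))
clump_b = rng.normal(5.0, 0.1, size=(50, 3))
field = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
pos = np.vstack([clump_a, clump_b, field])

n_groups, labels = fof_groups(pos, linking_length=0.5)
```

The parameter sensitivity the abstract warns about is visible directly here: shrink the linking length and the clumps fragment; grow it and distant particles get absorbed.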
Abstract: We employ simulations of supersonic super-Alfvénic turbulence decay as a benchmark test problem to assess and compare the performance of nine astrophysical MHD methods actively used to model star formation. The set of nine codes includes: ENZO, FLASH, KT-MHD, LL-MHD, PLUTO, PPML, RAMSES, STAGGER, and ZEUS. We present a comprehensive set of statistical measures designed to quantify the effects of numerical dissipation in these MHD solvers. We compare power spectra for basic fields to determine the effective spectral bandwidth of the methods and rank them based on their relative effective Reynolds numbers. We also compare numerical dissipation for solenoidal and dilatational velocity components to check for possible impacts of the numerics on small-scale density statistics. Finally, we discuss convergence of various characteristics for the turbulence decay test and the impacts of various components of numerical schemes on the accuracy of solutions. We show that the best performing codes employ a consistently high order of accuracy for spatial reconstruction of the evolved fields, transverse gradient interpolation, conservation law update step, and Lorentz force computation. The best results are achieved with divergence-free evolution of the magnetic field using the constrained transport method, and using little to no explicit artificial viscosity. Codes which fall short in one or more of these areas are still useful, but they must compensate for higher numerical dissipation with higher numerical resolution. This paper is the largest, most comprehensive MHD code comparison on an application-like test problem to date. We hope this work will help developers improve their numerical algorithms while helping users to make informed choices in picking optimal applications for their specific astrophysical problems.
Credit: Alexei G. Kritsuk, Aake Nordlund, David Collins, Paolo Padoan, Michael L. Norman, Tom Abel, Robi Banerjee, Christoph Federrath, Mario Flock, Dongwook Lee, Pak Shing Li, Wolf-Christian Mueller, Romain Teyssier, Sergey D. Ustyugov, Christian Vogel, Hao Xu
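The solenoidal/dilatational split mentioned in the abstract is done by projecting the Fourier-transformed velocity field onto the wavevector direction. A minimal NumPy sketch for a periodic box (a generic implementation of the standard Helmholtz decomposition, not taken from any of the nine codes):

```python
import numpy as np

def helmholtz_split(v):
    """Split a periodic 3-D velocity field v, shape (3, N, N, N), into a
    dilatational (curl-free) and a solenoidal (divergence-free) part by
    projecting each Fourier mode onto the unit wavevector."""
    vk = np.fft.fftn(v, axes=(1, 2, 3))
    n = v.shape[1]
    k1 = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing='ij')
    kvec = np.stack([kx, ky, kz])
    k2 = (kvec ** 2).sum(axis=0)
    k2[0, 0, 0] = 1.0                   # the k=0 mode has no direction
    dot = (kvec * vk).sum(axis=0)       # component of v_k along k
    vk_dil = kvec * dot / k2
    vk_dil[:, 0, 0, 0] = 0.0
    v_dil = np.fft.ifftn(vk_dil, axes=(1, 2, 3)).real
    v_sol = np.fft.ifftn(vk - vk_dil, axes=(1, 2, 3)).real
    return v_dil, v_sol

# A curl-free test field (a pure gradient) should land entirely in v_dil
n = 16
x = np.arange(n) / n
grad_field = np.zeros((3, n, n, n))
grad_field[0] = np.cos(2 * np.pi * x)[:, None, None]
v_dil, v_sol = helmholtz_split(grad_field)

# A divergence-free shear field should land entirely in v_sol
shear = np.zeros((3, n, n, n))
shear[1] = np.cos(2 * np.pi * x)[:, None, None]
shear_dil, shear_sol = helmholtz_split(shear)
```

Power spectra of the two parts then quantify how much of the cascade is in compressive versus rotational motions, which is what the comparison above measures.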
Abstract: We present a detailed comparison of fundamental dark matter halo properties retrieved by a substantial number of different halo finders. These codes span a wide range of techniques including friends-of-friends (FOF), spherical-overdensity (SO) and phase-space based algorithms. We further introduce a robust (and publicly available) suite of test scenarios that allows halo finder developers to compare the performance of their codes against those presented here. This set includes mock haloes containing various levels and distributions of substructure at a range of resolutions as well as a cosmological simulation of the large-scale structure of the universe. All the halo finding codes tested could successfully recover the spatial location of our mock haloes. They further returned lists of particles (potentially) belonging to the object that led to coinciding values for the maximum of the circular velocity profile and the radius where it is reached. All the finders operating in configuration space struggled to recover substructure located close to the centre of the host halo, and the radial dependence of the mass recovered varies from finder to finder. The finders operating in phase space could resolve central substructure, although they had difficulty accurately recovering its properties. Via a resolution study we found that most of the finders could not reliably recover substructure containing fewer than 30-40 particles. Here, too, the phase-space finders excelled, resolving substructure down to 10-20 particles. By comparing the halo finders on a high-resolution cosmological volume we found that they agree remarkably well on fundamental properties of astrophysical significance (e.g. mass, position, velocity, and peak of the rotation curve).
Credit: Alexander Knebe, Steffen R. Knollmann, Stuart I. Muldrew, Frazer R. Pearce, Miguel Angel Aragon-Calvo, Yago Ascasibar, Peter S. Behroozi, Daniel Ceverino, Stephane Colombi, Juerg Diemand, Klaus Dolag, Bridget L. Falck, Patricia Fasel, Jeff Gardner, Stefan Gottloeber, Chung-Hsing Hsu, Francesca Iannuzzi, Anatoly Klypin, Zarija Lukic, Michal Maciejewski, Cameron McBride, Mark C. Neyrinck, Susana Planelles, Doug Potter, Vicent Quilis, Yann Rasera, Justin I. Read, Paul M. Ricker, Fabrice Roy, Volker Springel, Joachim Stadel, Greg Stinson, P. M. Sutter, Victor Turchaninov, Dylan Tweed, Gustavo Yepes, Marcel Zemp
Comments: 27 interesting pages, 20 beautiful figures, and 4 informative tables accepted for publication in MNRAS. The high-resolution version of the paper as well as all the test cases and analysis can be found at this web site.
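For reference, the "peak of the circular velocity profile" used as a comparison metric above is just the maximum of v_c(r) = sqrt(G M(&lt;r)/r) over the sorted particle distribution. A small sketch (the toy Hernquist halo and all parameter values are illustrative assumptions, not from the paper):

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def vmax_rmax(radii, masses):
    """Peak of the circular velocity profile v_c(r) = sqrt(G M(<r) / r)
    and the radius at which it is reached, from particle data."""
    order = np.argsort(radii)
    r = radii[order]
    m_enc = np.cumsum(masses[order])    # mass enclosed within each radius
    v_circ = np.sqrt(G * m_enc / r)
    i = np.argmax(v_circ)
    return v_circ[i], r[i]

# Toy halo: a 10^12 Msun Hernquist sphere with scale radius a = 30 kpc,
# sampled deterministically by inverting M(<r)/M = (r / (r + a))^2
m_tot, a, n_part = 1.0e12, 30.0, 100000
u = (np.arange(n_part) + 0.5) / n_part
r_part = a * np.sqrt(u) / (1.0 - np.sqrt(u))
m_part = np.full(n_part, m_tot / n_part)

vmax, rmax = vmax_rmax(r_part, m_part)
# Analytically, v_c peaks at r = a with vmax = sqrt(G * m_tot / (4 a))
```

vmax and rmax are popular comparison metrics precisely because they depend only on the cumulative mass profile, not on a halo finder's exact boundary definition.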
Abstract: In the context of measuring structure and morphology of intermediate redshift galaxies with recent HST/ACS surveys, we tune, test, and compare two widely used fitting codes (GALFIT and GIM2D) for fitting single-component Sersic models to the light profiles of both simulated and real galaxy data. We find that fitting accuracy depends sensitively on galaxy profile shape. Exponential disks are well fit with Sersic models and have small measurement errors, whereas fits to de Vaucouleurs profiles show larger uncertainties owing to the large amount of light at large radii. We find that both codes provide reliable fits and little systematic error when the effective surface brightness is above that of the sky. However, both codes return errors that significantly underestimate the true fitting uncertainties, which are best estimated with simulations. We find that GIM2D suffers significant systematic errors for spheroids with close companions owing to the difficulty of effectively masking out neighboring galaxy light; there appears to be no workaround for this important systematic in GIM2D's current implementation. While this crowding error affects only a small fraction of galaxies in GEMS, it must be accounted for in the analysis of deeper cosmological images or of more crowded fields with GIM2D. In contrast, GALFIT results are robust to the presence of neighbors because it can simultaneously fit the profiles of multiple companions, thereby deblending their effect on the fit to the galaxy of interest. We find GALFIT's robustness to nearby companions and factor of >~20 faster runtime are important advantages over GIM2D for analyzing large HST/ACS datasets. Finally we include our final catalog of fit results for all 41,495 objects detected in GEMS.
Credit: Boris Häußler, Daniel H. McIntosh, Marco Barden, Eric F. Bell, Hans-Walter Rix, Andrea Borch, Steven V. W. Beckwith, John A. R. Caldwell, Catherine Heymans, Knud Jahnke, Shardha Jogee, Sergey E. Koposov, Klaus Meisenheimer, Sebastian F. Sánchez, Rachel S. Somerville, Lutz Wisotzki, Christian Wolf
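For context, the single-component Sersic model both codes fit is I(r) = I_e exp(-b_n [(r/r_e)^(1/n) - 1]), where n = 1 gives an exponential disk and n = 4 a de Vaucouleurs profile. A small generic sketch (not GALFIT's or GIM2D's actual code) using a standard approximation for b_n:

```python
import numpy as np

def sersic(r, i_e, r_e, n):
    """Sersic surface-brightness profile. b_n follows the Ciotti & Bertin
    (1999) expansion, which makes r_e the half-light radius (good for
    n above roughly 0.36)."""
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    return i_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

r = np.linspace(0.1, 20.0, 200)          # radii, arbitrary units
disk = sersic(r, 1.0, 5.0, 1.0)          # n = 1: exponential disk
spheroid = sersic(r, 1.0, 5.0, 4.0)      # n = 4: de Vaucouleurs profile
```

Evaluating the two profiles side by side shows the effect the abstract describes: the n = 4 profile carries far more light at large radii, which is why de Vaucouleurs fits are more sensitive to sky level and neighbors.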
Abstract: We present results for the statistics of thermal gas and the shock wave properties for a large volume simulated with three different cosmological numerical codes: the Eulerian total variation diminishing code TVD, the Eulerian piecewise parabolic method-based code ENZO, and the Lagrangian smoothed-particle hydrodynamics code GADGET. Starting from a shared set of initial conditions, we present convergence tests for a cosmological volume of side-length 100 Mpc/h, studying in detail the morphological and statistical properties of the thermal gas as a function of mass and spatial resolution in all codes. By applying shock finding methods to each code, we measure the statistics of shock waves and the related cosmic ray acceleration efficiencies, within the sample of simulations and for the results of the different approaches. We discuss the regimes of uncertainty and disagreement among the codes, with a particular focus on the results at the scale of galaxy clusters. We report that, even though the bulk of thermal and shock properties are reasonably in agreement among the three codes, some differences exist (especially between the Eulerian methods and smoothed particle hydrodynamics), mostly associated with a different reconstruction of shock heating and entropy production in the accretion regions at the outskirts of galaxy clusters.
Credit: F. Vazza, K. Dolag, D. Ryu, G. Brunetti, C. Gheller, H. Kang, C. Pfrommer
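Shock finders of the kind applied in this comparison typically flag cells with converging flow plus a jump in a thermodynamic quantity. A deliberately naive 1-D sketch of that idea (the thresholds, variable names, and criteria below are illustrative assumptions, not the actual algorithms used in the paper):

```python
import numpy as np

def find_shocks_1d(v, p, div_threshold=-0.1, jump_threshold=2.0):
    """Flag candidate shock cells in a 1-D flow: require converging
    velocity (negative divergence) AND a pressure ratio across the
    cell's neighbours exceeding a jump threshold."""
    div_v = np.gradient(v)
    p_hi = np.maximum(np.roll(p, -1), np.roll(p, 1))
    p_lo = np.minimum(np.roll(p, -1), np.roll(p, 1))
    return (div_v <= div_threshold) & (p_hi / p_lo >= jump_threshold)

# A single discontinuity: moving pre-shock gas meets static, higher-pressure gas
n = 100
v = np.where(np.arange(n) < 50, 1.0, 0.0)
p = np.where(np.arange(n) < 50, 1.0, 10.0)
shocked = find_shocks_1d(v, p)
```

Requiring both criteria at once is what keeps contact discontinuities (pressure jump, no convergence) and smooth compressions (convergence, no jump) from being misclassified as shocks.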
By Nathan Goldbaum
Today’s astrobite will be a sequel to a post I wrote a few months ago on using the smoothed particle hydrodynamics (SPH) code Gadget-2. In the first post, I went over how to install Gadget and showed how to run one of the test cases included in the Gadget distribution. Today, I’d like to show how to set up, run, and analyze a simple hydrodynamics test problem of your own.
Abstract: We review an approach to observation-theory comparisons we call "Taste-Testing." In this approach, synthetic observations are made of numerical simulations, and then both real and synthetic observations are "tasted" (compared) using a variety of statistical tests. We first lay out arguments for bringing theory to observational space rather than observations to theory space. Next, we explain that generating synthetic observations is only a step along the way to the quantitative, statistical, taste tests that offer the most insight. We offer a set of examples focused on polarimetry, scattering and emission by dust, and spectral-line mapping in star-forming regions. We conclude with a discussion of the connection between statistical tests used to date and the physics we seek to understand. In particular, we suggest that the "lognormal" nature of molecular clouds can be created by the interaction of many random processes, as can the lognormal nature of the IMF, so that the fact that both the "Clump Mass Function" (CMF) and IMF appear lognormal does not necessarily imply a direct relationship between them.
Credit: Alyssa A. Goodman
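The "many random multiplicative processes give a lognormal" argument made above is easy to demonstrate numerically. This is a toy illustration of the statistical point only, not a cloud simulation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Model a density (or mass) as the product of many independent positive
# random factors, e.g. successive compressions and rarefactions
n_factors, n_samples = 50, 100_000
factors = rng.uniform(0.5, 1.5, size=(n_factors, n_samples))
density = factors.prod(axis=0)

# By the central limit theorem the log of such a product is close to
# Gaussian, so the density PDF itself is close to lognormal
log_rho = np.log(density)
```

Because many unrelated multiplicative chains all converge to this same shape, a lognormal CMF and a lognormal IMF can arise independently, which is exactly the caution the abstract raises.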
Abstract: With the ever increasing size and complexity of fully self-consistent simulations of galaxy formation within the framework of the cosmic web, the demands upon object finders for these simulations have simultaneously grown. To this end we initiated the Halo Finder Comparison Project that gathered together all the experts in the field and has so far led to two comparison papers, one for dark matter field haloes (Knebe et al. 2011), and one for dark matter subhaloes (Onions et al. 2012). However, as state-of-the-art simulation codes are perfectly capable not only of following the formation and evolution of dark matter but also of accounting for baryonic physics (e.g. hydrodynamics, star formation, feedback), object finders should also be capable of taking these additional processes into consideration. Here we report on a comparison of codes as applied to the Constrained Local UniversE Simulation (CLUES) of the formation of the Local Group, which incorporates much of the physics relevant for galaxy formation. We compare both the properties of the three main galaxies in the simulation (representing the MW, M31, and M33) as well as their satellite populations for a variety of halo finders ranging from phase-space to velocity-space to spherical-overdensity based codes, including a purely baryonic object finder. We obtain agreement amongst codes comparable to (if not better than) our previous comparisons, at least for the total, dark, and stellar components of the objects. However, the diffuse gas content of the haloes shows great disparity, especially for low-mass satellite galaxies. This is primarily due to differences in the treatment of the thermal energy during the unbinding procedure. We acknowledge that the handling of gas in halo finders needs to be dealt with carefully, and the precise treatment may depend sensitively upon the scientific problem being studied.
Credit: Alexander Knebe, Noam I. Libeskind, Frazer Pearce, Peter Behroozi, Javier Casado, Klaus Dolag, Rosa Dominguez-Tenreiro, Pascal Elahi, Hanni Lux, Stuart I. Muldrew, Julian Onions
Abstract: While various codes exist to systematically and robustly find haloes and subhaloes in cosmological simulations (Knebe et al., 2011, Onions et al., 2012), this is the first work to introduce and rigorously test codes that find tidal debris (streams and other unbound substructure) in fully cosmological simulations of structure formation. We use one tracking and three non-tracking codes to identify substructure (bound and unbound) in a Milky Way type simulation from the Aquarius suite (Springel et al., 2008) and post-process their output with a common pipeline to determine the properties of these substructures in a uniform way. By using output from a fully cosmological simulation, we also take a step beyond previous studies of tidal debris that have used simple toy models. We find that both tracking and non-tracking codes agree well on the identification of subhaloes and, more importantly, the unbound tidal features associated with them. The distributions of basic properties of the total substructure distribution (mass, velocity dispersion, position) are recovered with a scatter of ~20%. Using the tracking code as our reference, we show that the non-tracking codes identify complex tidal debris with purities of ~40%. Analysing the results of the substructure finders, we find that the general distribution of substructures differs significantly from the distribution of bound subhaloes. Most importantly, bound and unbound substructures together constitute ~18% of the host halo mass, which is a factor of ~2 higher than the fraction in self-bound subhaloes. However, this result is limited by the remaining challenge to cleanly define when an unbound structure has become part of the host halo. Nevertheless, the more general substructure distribution provides a more complete picture of a halo's accretion history.
Credit: Pascal J. Elahi, Jiaxin Han, Hanni Lux, Yago Ascasibar, Peter Behroozi, Alexander Knebe, Stuart I. Muldrew, Julian Onions, Frazer Pearce
Abstract: Assembling simulation software along with the associated tools and utilities is a challenging endeavor, particularly when the components are distributed across multiple source code versioning systems. It is problematic for researchers compiling and running the software across many different supercomputers, as well as for novices in a field who are often presented with a bewildering list of software to collect and install. In this paper, we describe a language (CRL) for specifying software components with the details needed to obtain them from source code repositories. The language supports public and private access. We describe a tool called GetComponents which implements CRL and can be used to assemble software. We demonstrate the tool for application scenarios with the Cactus Framework on the NSF TeraGrid resources. The tool itself is distributed with an open source license and freely available from our web page.
Credit: Eric L. Seidel, Gabrielle Allen, Steven Brandt, Frank Löffler, Erik Schnetter
Abstract: We present a comparison of 14 galaxy formation models: 12 different semi-analytical models and 2 halo-occupation distribution models for galaxy formation, based upon the same cosmological simulation and merger tree information derived from it. The participating codes have proven to be very successful in their own right, but they have all been calibrated independently using various observational data sets, stellar models, and merger trees. In this paper we apply them without recalibration, and this leads to a wide variety of predictions for the stellar mass function, specific star formation rates, stellar-to-halo mass ratios, and the abundance of orphan galaxies. The scatter is much larger than seen in previous comparison studies, primarily because the codes have been used outside of their native environment within which they are well tested and calibrated. The purpose of the 'nIFTy comparison of galaxy formation models' is to bring together as many different galaxy formation modellers as possible and to investigate a common approach to model calibration. This paper provides a unified description for all participating models and presents the initial, uncalibrated comparison as a baseline for our future studies, where we will develop a common calibration framework and address the extent to which that reduces the scatter in the model predictions seen here.
Credit: Alexander Knebe, Frazer R. Pearce, Peter A. Thomas, Andrew Benson, Jeremy Blaizot, Richard Bower, Jorge Carretero, Francisco J. Castander, Andrea Cattaneo, Sofia A. Cora, Darren J. Croton, Weiguang Cui, Daniel Cunnama, Gabriella De Lucia, Julien E. Devriendt, Pascal J. Elahi, Andreea Font, Fabio Fontanot, Juan Garcia-Bellido, Ignacio D. Gargiulo, Violeta Gonzalez-Perez, John Helly, Bruno Henriques, Michaela Hirschmann, Jaehyun Lee, Gary A. Mamon, Pierluigi Monaco, Julian Onions, Nelson D. Padilla, Chris Power, Arnau Pujol, Ramin A. Skibba, Rachel S. Somerville, Chaichalit Srisawat, Cristian A. Vega-Martinez, Sukyoung K. Yi
Abstract: Self-consistent N-body simulations of modified gravity models are a key ingredient for obtaining rigorous constraints on deviations from General Relativity using large-scale structure observations. This paper provides the first detailed comparison of the results of different N-body codes for the f(R), DGP, and Symmetron models, starting from the same initial conditions. We find that the fractional deviation of the matter power spectrum from ΛCDM agrees to better than 1% up to k ~ 5-10 h Mpc^-1 between the different codes. These codes are thus able to meet the stringent accuracy requirements of upcoming observational surveys. All codes are also in good agreement in their results for the velocity divergence power spectrum, halo abundances and halo profiles. We also test the quasi-static limit, which is employed in most modified gravity N-body codes, for the Symmetron model, for which the most significant non-static effects among the models considered are expected. We conclude that this limit is a very good approximation for all of the observables considered here.
Credit: Hans A. Winther, Fabian Schmidt, Alexandre Barreira, Christian Arnold, Sownak Bose, Claudio Llinares, Marco Baldi, Bridget Falck, Wojciech A. Hellwing, Kazuya Koyama, Baojiu Li, David F. Mota, Ewald Puchwein, Robert Smith, Gong-Bo Zhao
Abstract: Despite expanding research activity in gravitational lens modeling, there is no particular software which is considered a standard. Much of the gravitational lens modeling software is written by individual investigators for their own use. Some gravitational lens modeling software is freely available for download but varies widely with regard to ease of use and quality of documentation. This review of 13 software packages was undertaken to provide a single source of information. Gravitational lens models are classified as parametric or non-parametric models, and the software can be further divided into research and educational tools. Software used in research includes the GRAVLENS package (with both gravlens and lensmodel), Lenstool, LensPerfect, glafic, PixeLens, SimpLens, Lensview, and GRALE. In this review, GravLensHD, G-Lens, Gravitational Lensing, lens and MOWGLI are categorized as educational programs that are useful for demonstrating various aspects of lensing. Each of the 13 software packages is reviewed with regard to software features (installation, documentation, files provided, etc.) and lensing features (type of model, input data, output data, etc.), along with a brief review of studies where they have been used. Recent studies have demonstrated the utility of strong gravitational lensing data for mass mapping, and suggest increased use of these techniques in the future. Coupled with the advent of greatly improved imaging, new approaches to modeling of strong gravitational lens systems are needed. This is the first systematic review of strong gravitational lens modeling software, providing investigators with a starting point for future software development to further advance gravitational lens modeling research.
Credit: Alan T. Lefor, Toshifumi Futamase, Mohammad Akhlaghi
Abstract: We present a test to quantify how well some approximate methods, designed to reproduce the mildly non-linear evolution of perturbations, are able to reproduce the clustering of DM halos once the grouping of particles into halos is defined and kept fixed. The following methods have been considered: Lagrangian Perturbation Theory (LPT) up to third order, Truncated LPT, Augmented LPT, MUSCLE and COLA. The test runs as follows: halos are defined by applying a friends-of-friends (FoF) halo finder to the output of an N-body simulation. The approximate methods are then applied to the same initial conditions as the simulation, producing, for all particles, displacements from their starting positions and velocities. The position and velocity of each halo are computed by averaging over the particles that belong to that halo according to the FoF halo finder. This procedure allows us to perform a well-posed test of how well the clustering of the matter density and halo density fields is recovered, without requiring the approximate method to accurately reconstruct the halos themselves. We have considered the results at z = 0, 0.5, 1, and we have analysed the power spectrum in real and redshift space, the object-by-object difference in position and velocity, the density Probability Distribution Function (PDF) and its moments, and the phase difference of Fourier modes.
We find that higher LPT orders are generally able to better reproduce the clustering of halos, while little or no improvement is found for the matter density field when going to 2LPT and 3LPT. Augmentation provides some improvement when coupled with 2LPT, while its effect is limited when coupled with 3LPT. MUSCLE brings little improvement over Augmentation. The more expensive particle-mesh code COLA outperforms all LPT methods, and this is true even for mesh sizes as large as the inter-particle distance. This test sets an upper limit on the ability of these methods to reproduce the clustering of halos, for cases where these objects are reconstructed at the object-by-object level.
Credit: Emiliano Munari, Pierluigi Monaco, Jun Koda, Francisco-Shu Kitaura, Emiliano Sefusatti, Stefano Borgani
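The averaging step the abstract describes (each halo's position and velocity taken as the mean over its FoF member particles) is a one-liner per component with `np.bincount`. A generic sketch with invented toy data:

```python
import numpy as np

def halo_bulk(labels, pos, vel):
    """Average position and velocity per halo, given a FoF assignment:
    labels[i] is the halo id of particle i (-1 = not in any halo)."""
    member = labels >= 0
    ids = labels[member]
    n_halos = ids.max() + 1
    counts = np.bincount(ids, minlength=n_halos).astype(float)

    def mean_per_halo(component):
        # sum each component over members of every halo, then divide
        return np.bincount(ids, weights=component, minlength=n_halos) / counts

    halo_pos = np.stack([mean_per_halo(pos[member, k]) for k in range(3)], axis=1)
    halo_vel = np.stack([mean_per_halo(vel[member, k]) for k in range(3)], axis=1)
    return halo_pos, halo_vel

# Toy data: two haloes and one field particle (label -1)
labels = np.array([0, 0, 1, 1, 1, -1])
pos = np.array([[0., 0., 0.], [2., 2., 2.],
                [3., 0., 0.], [5., 0., 0.], [4., 3., 0.],
                [9., 9., 9.]])
vel = -pos  # toy infall velocities
halo_pos, halo_vel = halo_bulk(labels, pos, vel)
```

Keeping the particle-to-halo assignment fixed while swapping in displacements from each approximate method is what makes the paper's test well-posed: only the displacements change, never the grouping.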